N8N - Llms-Txt

Pages: 1301


HTTP Request node common issues

URL: llms-txt#http-request-node-common-issues

Contents:

  • Bad request - please check your parameters
  • The resource you are requesting could not be found
  • JSON parameter need to be an valid JSON
  • Forbidden - perhaps check your credentials
  • 429 - The service is receiving too many requests from you
    • Batching
    • Retry on Fail

Here are some common errors and issues with the HTTP Request node and steps to resolve or troubleshoot them.

Bad request - please check your parameters

This error displays when the node receives a 400 error indicating a bad request. This error most often occurs because:

  • You're using an invalid name or value in a Query Parameter.
  • You're passing array values in a Query Parameter but the array isn't formatted correctly. Try using the Array Format in Query Parameters option.

Review the API documentation for your service to format your query parameters.

The resource you are requesting could not be found

This error displays when the endpoint URL you entered is invalid.

This may be due to a typo in the URL or a deprecated API. Refer to your service's API documentation to verify you have a valid endpoint.

JSON parameter need to be an valid JSON

This error displays when you've passed a parameter as JSON and it's not formatted as valid JSON.

To resolve, review the JSON you've entered for these issues:

  • Test your JSON in a JSON checker or syntax parser to find errors like missing quotation marks, extra or missing commas, incorrectly formatted arrays, extra or missing square brackets or curly brackets, and so on.

  • If you've used an Expression in the node, be sure you've wrapped the entire JSON in double curly brackets, for example:

Forbidden - perhaps check your credentials

This error displays when the node receives a 403 error indicating authentication failed.

To resolve, review the selected credentials and make sure you can authenticate with them. You may need to:

  • Update permissions or scopes so that your API key or account can perform the operation you've selected.
  • Format your generic credential in a different way.
  • Generate a new API key or token with the appropriate permissions or scopes.

429 - The service is receiving too many requests from you

This error displays when the node receives a 429 error from the service that you're calling. This often means that you have hit the rate limits of that service. You can find out more on the Handling API rate limits page.

To resolve the error, you can use one of the built-in options of the HTTP Request node:

Batching

Use this option to send requests in batches and introduce a delay between them.

  1. In the HTTP Request node, select Add Option > Batching.
  2. Set Items per Batch to the number of input items to include in each request.
  3. Set Batch Interval (ms) to introduce a delay between requests in milliseconds. For example, to send one request to an API per second, set Batch Interval (ms) to 1000.

Retry on Fail

Use this option to retry the node after a failed attempt.

  1. In the HTTP Request node, go to Settings and enable Retry on Fail.
  2. Set Max Tries to the maximum number of times n8n should retry the node.
  3. Set Wait Between Tries (ms) to the desired delay in milliseconds between retries. For example, to wait one second before retrying the request again, set Wait Between Tries (ms) to 1000.

Examples:

Example 1 (unknown):

{{
      {
      "myjson":
      {
          "name1": "value1",
          "name2": "value2",
          "array1":
              ["value1","value2"]
      }
      }
  }}

JWT credentials

URL: llms-txt#jwt-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using Passphrase
  • Using private key (PEM key)
  • Available algorithms

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • Passphrase: Signed with a secret with HMAC algorithm
  • Private key (PEM key): For use with Private Key JWT with RSA or ECDSA algorithm

Refer to the JSON Web Token spec for more details.

For a more verbose introduction, refer to the JWT website Introduction to JSON Web Tokens. Refer to JSON Web Token (JWT) Signing Algorithms Overview for more information on selecting between the two types and the algorithms involved.

Using Passphrase

To configure this credential:

  1. Select the Key Type of Passphrase.
  2. Enter the Passphrase Secret.
  3. Select the Algorithm used to sign the assertion. Refer to Available algorithms below for a list of supported algorithms.

Using private key (PEM key)

To configure this credential:

  1. Select the Key Type of PEM Key.
  2. Enter the Private Key, obtained from generating a key pair. Refer to Generate RSA Key Pair for an example.
  3. Enter the Public Key, obtained from generating a key pair. Refer to Generate RSA Key Pair for an example.
  4. Select the Algorithm used to sign the assertion. Refer to Available algorithms below for a list of supported algorithms.

Available algorithms

This n8n credential supports the following algorithms:

  • HS256
  • HS384
  • HS512
  • RS256
  • RS384
  • RS512
  • ES256
  • ES384
  • ES512
  • PS256
  • PS384
  • PS512
  • none

White labelling

URL: llms-txt#white-labelling

Contents:

  • Prerequisites
  • Theme colors
  • Theme logos
  • Text localization
    • Window title

Embed requires an embed license. For more information about when to use Embed, as well as costs and licensing processes, refer to Embed on the n8n website.

White labelling n8n means customizing the frontend styling and assets to match your brand identity. The process involves changing two packages in n8n's source code (github.com/n8n-io/n8n): packages/frontend/@n8n/design-system and packages/frontend/editor-ui.

Prerequisites

You need the following installed on your development machine:

  • git
  • Node.js and npm. Minimum version Node 18.17.0. You can find instructions on how to install both using nvm (Node Version Manager) for Linux, Mac, and WSL here. For Windows users, refer to Microsoft's guide to Install NodeJS on Windows.

Create a fork of n8n's repository and clone your new repository.

Install all dependencies, build and start n8n.

Whenever you make changes, you need to rebuild and restart n8n. While developing, you can use npm run dev to automatically rebuild and restart n8n anytime you make code changes.

Theme colors

To customize theme colors, open packages/frontend/@n8n/design-system and start with the _tokens.scss file.

At the top of _tokens.scss you'll find the --color-primary variables defined as HSL colors (Example 3 below).

In Example 4 below, the primary color changes to #0099ff. To convert a hex color to HSL, you can use a color converter tool.

Theme logos

To change the editor's logo assets, look in packages/frontend/editor-ui/public and replace:

  • favicon-16x16.png
  • favicon-32x32.png
  • favicon.ico
  • n8n-logo.svg
  • n8n-logo-collapsed.svg
  • n8n-logo-expanded.svg

n8n uses these logo assets in several Vue.js components. For example, replace n8n-logo-collapsed.svg and n8n-logo-expanded.svg to update the main sidebar's logo assets.

If your logo assets require different sizing or placement, you can customize the SCSS styles at the bottom of MainSidebar.vue.

Text localization

To change all text occurrences like n8n or n8n.io to your brand identity, customize n8n's English internationalization file: packages/frontend/@n8n/i18n/src/locales/en.json.

n8n uses the Vue I18n internationalization plugin for Vue.js to translate the majority of UI texts. To search and replace text occurrences inside en.json, you can use Linked locale messages.

For example, add a _brand.name translation key to white label n8n's AboutModal.vue.

Window title

To change n8n's window title to your brand name, edit index.html and useDocumentTitle.ts in the editor-ui package. For example, replace all occurrences of n8n and n8n.io with My Brand in both files.

Examples:

Example 1 (unknown):

git clone https://github.com/<your-organization>/n8n.git n8n
cd n8n

Example 2 (unknown):

npm install
npm run build
npm run start

Example 3 (unknown):

@mixin theme {
	--color-primary-h: 6.9;
	--color-primary-s: 100%;
	--color-primary-l: 67.6%;

Example 4 (unknown):

@mixin theme {
	--color-primary-h: 204;
	--color-primary-s: 100%;
	--color-primary-l: 50%;

Vercel AI Gateway Chat Model node

URL: llms-txt#vercel-ai-gateway-chat-model-node

Contents:

  • Node parameters
    • Model
  • Node options
    • Frequency Penalty
    • Maximum Number of Tokens
    • Response Format
    • Presence Penalty
    • Sampling Temperature
    • Timeout
    • Max Retries

Use the Vercel AI Gateway Chat Model node to use AI Gateway chat models with conversational agents.

On this page, you'll find the node parameters for the Vercel AI Gateway Chat Model node and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Node parameters

Model

Select the model to use to generate the completion.

n8n dynamically loads models from the AI Gateway and you'll only see the models available to your account.

Node options

Use these options to further refine the node's behavior.

Frequency Penalty

Use this option to control the chance of the model repeating itself. Higher values reduce the chance of the model repeating itself.

Maximum Number of Tokens

Enter the maximum number of tokens used, which sets the completion length.

Response Format

Choose Text or JSON. JSON ensures the model returns valid JSON.

Presence Penalty

Use this option to control the chance of the model talking about new topics. Higher values increase the chance of the model talking about new topics.

Sampling Temperature

Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

Timeout

Enter the maximum request time in milliseconds.

Max Retries

Enter the maximum number of times to retry a request.

Top P

Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.

Templates and examples

Browse Vercel AI Gateway Chat Model integration templates, or search all templates

As the Vercel AI Gateway is API-compatible with OpenAI, you can refer to LangChain's OpenAI documentation for more information about the service.

View n8n's Advanced AI documentation.


Scaling n8n

URL: llms-txt#scaling-n8n

When running n8n at scale, with a large number of users, workflows, or executions, you need to change your n8n configuration to ensure good performance.

n8n can run in different modes depending on your needs. The queue mode provides the best scalability. Refer to Queue mode for configuration details.

You can configure data saving and pruning to improve database performance. Refer to Execution data for details.


Code node

URL: llms-txt#code-node

Contents:

  • Usage
    • Choose a mode
  • JavaScript
    • Supported JavaScript features
    • External libraries
    • Built-in methods and variables
    • Keyboard shortcuts
  • Python (Pyodide - legacy)
    • Built-in methods and variables
    • Keyboard shortcuts

Use the Code node to write custom JavaScript or Python and run it as a step in your workflow.

This page gives usage information about the Code node. For more guidance on coding in n8n, refer to the Code section.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Code integrations page.

Function and Function Item nodes

The Code node replaces the Function and Function Item nodes from version 0.198.0. If you're using an older version of n8n, you can still view the Function node documentation and Function Item node documentation.

Choose a mode

The Code node has two operation modes:

  • Run Once for All Items: this is the default. When your workflow runs, the code in the code node executes once, regardless of how many input items there are.
  • Run Once for Each Item: choose this if you want your code to run for every input item.
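As a rough illustration (not from the n8n docs; the processed field is made up), the same logic looks like this in each mode:

// Run Once for All Items: the code executes once and sees every input item
const items = $input.all();
return items.map((item) => ({
  json: {
    ...item.json,
    processed: true, // hypothetical field added for illustration
  },
}));

// Run Once for Each Item: the code executes once per item and sees only that item
// const item = $input.item;
// item.json.processed = true;
// return item;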

The Code node supports Node.js.

Supported JavaScript features

The Code node supports:

  • Promises. Instead of returning the items directly, you can return a promise which resolves accordingly.
  • Writing to your browser console using console.log. This is useful for debugging and troubleshooting your workflows.
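A minimal sketch combining both features (illustrative only; the delay and field names are made up):

// The Code node runs your code in an async context, so you can await promises
const items = $input.all();

// Simulate an asynchronous step, then resolve with the output items
const results = await new Promise((resolve) => {
  setTimeout(() => {
    resolve(items.map((item) => ({ json: { ...item.json, checked: true } })));
  }, 100);
});

console.log('Returning', results.length, 'items'); // appears in the browser console
return results;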

External libraries

If you self-host n8n, you can import and use built-in and external npm modules in the Code node. To learn how to enable external modules, refer to the Enable modules in Code node guide.

If you use n8n Cloud, you can't import external npm modules. n8n makes two modules available for you:

Built-in methods and variables

n8n provides built-in methods and variables for working with data and accessing n8n data. Refer to Built-in methods and variables for more information.

The syntax to use the built-in methods and variables is $variableName or $methodName(). Type $ in the Code node or expressions editor to see a list of suggested methods and variables.
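A short, illustrative snippet using a few of them (the node name "Previous Node" and the name field are placeholders):

// $("Node Name") reads another node's output; $workflow, $execution, and $now
// expose workflow, execution, and date/time information
const firstItem = $("Previous Node").first();

return [
  {
    json: {
      workflowName: $workflow.name,
      executionId: $execution.id,
      startedAt: $now.toISO(), // $now is a Luxon DateTime
      previousName: firstItem.json.name,
    },
  },
];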

Keyboard shortcuts

The Code node editing environment supports time-saving and useful keyboard shortcuts for a range of operations from autocompletion to code-folding and using multiple-cursors. See the full list of keyboard shortcuts.

Python (Pyodide - legacy)

Pyodide is a legacy feature. Future versions of n8n will no longer support this feature.

n8n added Python support in version 1.0. It doesn't include a Python executable. Instead, n8n provides Python support using Pyodide, which is a port of CPython to WebAssembly. This limits the available Python packages to the Packages included with Pyodide. n8n downloads the package automatically the first time you use it.

Slower than JavaScript

The Code node takes longer to process Python than JavaScript. This is due to the extra compilation steps.

Built-in methods and variables

n8n provides built-in methods and variables for working with data and accessing n8n data. Refer to Built-in methods and variables for more information.

The syntax to use the built-in methods and variables is _variableName or _methodName(). Type _ in the Code node to see a list of suggested methods and variables.

Keyboard shortcuts

The Code node editing environment supports time-saving and useful keyboard shortcuts for a range of operations from autocompletion to code-folding and using multiple-cursors. See the full list of keyboard shortcuts.

File system and HTTP requests

You can't access the file system or make HTTP requests. Use the following nodes instead:

Python (Native - beta)

n8n added native Python support using task runners (beta) in version 1.111.0.

Main differences from Pyodide:

  • Native Python supports only _items in all-items mode and _item in per-item mode. It doesn't support other n8n built-in methods and variables.
  • Native Python supports importing native Python modules from the standard library and from third parties, if the n8nio/runners image includes them and explicitly allowlists them. See adding extra dependencies for task runners for more details.
  • Native Python denies insecure built-ins by default. See task runners environment variables for more details.
  • Unlike Pyodide, which accepts dot access notation, for example, item.json.myNewField, native Python only accepts bracket access notation, for example, item["json"]["my_new_field"]. There may be other minor syntax differences where Pyodide accepts constructs that aren't legal in native Python.

Keep in mind upgrading to native Python is a breaking change, so you may need to adjust your Python scripts to use the native Python runner.

This feature is in beta and is subject to change. As it becomes stable, n8n will roll it out progressively to n8n cloud users during 2025. Self-hosting users can try it out and provide feedback.

There are two places where you can use code in n8n: the Code node and the expressions editor. When using either area, there are some key concepts you need to know, as well as some built-in methods and variables to help with common tasks.

When working with the Code node, you need to understand the following concepts:

  • Data structure: understand the data you receive in the Code node, and requirements for outputting data from the node.
  • Item linking: learn how data items work, and how to link to items from previous nodes. You need to handle item linking in your code when the number of input and output items doesn't match.
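For example, when your code drops or reorders items, you can declare the link yourself with pairedItem (a minimal sketch; the email field is a placeholder):

// Keep only items that have an email, and record which input item each output came from
const output = [];

for (const [index, item] of $input.all().entries()) {
  if (!item.json.email) continue; // dropping items changes the input/output item count

  output.push({
    json: { email: item.json.email },
    pairedItem: { item: index }, // link this output item back to input item `index`
  });
}

return output;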

Built-in methods and variables

n8n includes built-in methods and variables. These provide support for:

  • Accessing specific item data
  • Accessing data about workflows, executions, and your n8n environment
  • Convenience variables to help with data and time

Refer to Built-in methods and variables for more information.

Use AI in the Code node

AI assistance in the Code node is available to Cloud users. It isn't available in self-hosted n8n.

AI generated code overwrites your code

If you've already written some code on the Code tab, the AI generated code will replace it. n8n recommends using AI as a starting point to create your initial code, then editing it as needed.

To use ChatGPT to generate code in the Code node:

  1. In the Code node, set Language to JavaScript.
  2. Select the Ask AI tab.
  3. Write your query.
  4. Select Generate Code. n8n sends your query to ChatGPT, then displays the result in the Code tab.

For common questions or issues and suggested solutions, refer to Common Issues.


monday.com node

URL: llms-txt#monday.com-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the monday.com node to automate work in monday.com, and integrate monday.com with other applications. n8n has built-in support for a wide range of monday.com features, including creating a new board, and adding, deleting, and getting items on the board.

On this page, you'll find a list of operations the monday.com node supports and links to more resources.

Minimum required version

This node requires n8n version 1.22.6 or above.

Refer to monday.com credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Board
    • Archive a board
    • Create a new board
    • Get a board
    • Get all boards
  • Board Column
    • Create a new column
    • Get all columns
  • Board Group
    • Delete a group in a board
    • Create a group in a board
    • Get list of groups in a board
  • Board Item
    • Add an update to an item
    • Change a column value for a board item
    • Change multiple column values for a board item
    • Create an item in a board's group
    • Delete an item
    • Get an item
    • Get all items
    • Get items by column value
    • Move item to group

Templates and examples

Create ticket on specific customer messages in Telegram

View template details

Microsoft Outlook AI Email Assistant with contact support from Monday and Airtable

by Cognitive Creators

View template details

Retrieve a Monday.com row and all data in a single node

View template details

Browse monday.com integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


RBAC role types

URL: llms-txt#rbac-role-types

Contents:

  • Project Admin
  • Project Editor
  • Project Viewer

The Project Editor role is available on Pro Cloud and Self-hosted Enterprise plans. The Project Viewer role is only available on Self-hosted Enterprise and Cloud Enterprise plans.

Within projects, there are three user roles: Admin, Editor, and Viewer. These roles control what the user can do in a project. A user can have different roles within different projects.

Project Admin

A Project Admin role has the highest level of permissions. Project admins can:

  • Manage project settings: Change name, delete project.
  • Manage project members: Invite members and remove members, change members' roles.
  • View, create, update, and delete any workflows, credentials, or executions within a project.

Project Editor

A Project Editor can view, create, update, and delete any workflows, credentials, or executions within a project.

Project Viewer

A Project Viewer is effectively a read-only role with access to all workflows, credentials, and executions within a project.

Viewers aren't able to manually execute any workflows that exist in a project.

Role types and account types

Role types and account types are different things. Every account has one type. The account can have different role types for different projects.

| Permission | Admin | Editor | Viewer |
| --- | --- | --- | --- |
| View workflows in the project | ✅ | ✅ | ✅ |
| View credentials in the project | ✅ | ✅ | ✅ |
| View executions | ✅ | ✅ | ✅ |
| Edit credentials and workflows | ✅ | ✅ | ❌ |
| Add workflows and credentials | ✅ | ✅ | ❌ |
| Execute workflows | ✅ | ✅ | ❌ |
| Manage members | ✅ | ❌ | ❌ |
| Modify the project | ✅ | ❌ | ❌ |

Variables and tags aren't affected by RBAC: they're global across the n8n instance.


Export and import workflows

URL: llms-txt#export-and-import-workflows

Contents:

  • Copy-Paste
  • From the Editor UI menu
  • From the command line

n8n saves workflows in JSON format. You can export your workflows as JSON files or import JSON files into your n8n library. You can export and import workflows in several ways.

Exported workflow JSON files include credential names and IDs. While IDs aren't sensitive, the names could be, depending on how you name your credentials. HTTP Request nodes may contain authentication headers when imported from cURL. Remove or anonymize this information from the JSON file before sharing to protect your credentials.
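As a precaution, you can strip the credential references before sharing a file. A minimal Node.js sketch (illustrative only; it assumes the exported file is named workflow.json and only removes each node's credentials entry; still review node parameters, such as copied auth headers, by hand):

// Remove credential name/ID references from an exported workflow file
const fs = require('fs');

const workflow = JSON.parse(fs.readFileSync('workflow.json', 'utf8'));

for (const node of workflow.nodes ?? []) {
  delete node.credentials; // exported nodes reference credentials by name and ID here
}

fs.writeFileSync('workflow.sanitized.json', JSON.stringify(workflow, null, 2));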

You can copy and paste a workflow, or parts of it, by selecting the nodes you want to copy to the clipboard (Ctrl + C or Cmd + C) and pasting them (Ctrl + V or Cmd + V) into the Editor UI.

To select all nodes or a group of nodes, click and drag.

From the Editor UI menu

From the top navigation bar, select the three dots in the upper right to see the following options:

Import & Export workflows menu

  • Download: Downloads your current workflow as a JSON file to your computer.
  • Import from URL: Imports workflow JSON from a URL, for example, this workflow JSON file on GitHub.
  • Import from File: Imports a workflow as a JSON file from your computer.

From the command line


Customer Datastore (n8n Training) node

URL: llms-txt#customer-datastore-(n8n-training)-node

Use this node only for the n8n new user onboarding tutorial. It provides dummy data for testing purposes and has no further functionality.


Replace 2.1.0 with your version number

URL: llms-txt#replace-2.1.0-with-your-version-number

npm install n8n-nodes-nodeName@2.1.0


---

## DeepSeek Chat Model node

**URL:** llms-txt#deepseek-chat-model-node

**Contents:**
- Node parameters
  - Model
- Node options
  - Base URL
  - Frequency Penalty
  - Maximum Number of Tokens
  - Response Format
  - Presence Penalty
  - Sampling Temperature
  - Timeout

Use the DeepSeek Chat Model node to use DeepSeek's chat models with conversational [agents](../../../../../glossary/#ai-agent).

On this page, you'll find the node parameters for the DeepSeek Chat Model node and links to more resources.

You can find authentication information for this node [here](../../../credentials/deepseek/).

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name.

## Node parameters

### Model

Select the model to use to generate the completion.

n8n dynamically loads models from DeepSeek and you'll only see the models available to your account.

## Node options

Use these options to further refine the node's behavior.

### Base URL

Enter a URL here to override the default URL for the API.

### Frequency Penalty

Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself.

### Maximum Number of Tokens

Enter the maximum number of tokens used, which sets the completion length.

### Response Format

Choose **Text** or **JSON**. **JSON** ensures the model returns valid JSON.

### Presence Penalty

Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics.

### Sampling Temperature

Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

### Timeout

Enter the maximum request time in milliseconds.

### Max Retries

Enter the maximum number of times to retry a request.

### Top P

Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.

## Templates and examples

**🐋🤖 DeepSeek AI Agent + Telegram + LONG TERM Memory 🧠**

[View template details](https://n8n.io/workflows/2864-deepseek-ai-agent-telegram-long-term-memory/)

**🤖 AI content generation for Auto Service 🚘 Automate your social media📲!**

[View template details](https://n8n.io/workflows/4600-ai-content-generation-for-auto-service-automate-your-social-media/)

**AI Research Assistant via Telegram (GPT-4o mini + DeepSeek R1 + SerpAPI)**

[View template details](https://n8n.io/workflows/5924-ai-research-assistant-via-telegram-gpt-4o-mini-deepseek-r1-serpapi/)

[Browse DeepSeek Chat Model integration templates](https://n8n.io/integrations/deepseek-chat-model/), or [search all templates](https://n8n.io/workflows/)

As DeepSeek is API-compatible with OpenAI, you can refer to [LangChain's OpenAI documentation](https://js.langchain.com/docs/integrations/chat/openai/) for more information about the service.

View n8n's [Advanced AI](../../../../../advanced-ai/) documentation.

---

## Cisco Meraki credentials

**URL:** llms-txt#cisco-meraki-credentials

**Contents:**
- Prerequisites
- Authentication methods
- Related resources
- Using API key

You can use these credentials to authenticate when using the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to make a [Custom API call](../../../custom-operations/).

- Create a [Cisco DevNet developer account](https://developer.cisco.com).
- Access to a [Cisco Meraki account](https://meraki.cisco.com/).

## Authentication methods

## Related resources

Refer to [Cisco Meraki's API documentation](https://developer.cisco.com/meraki/api-v1/introduction/) for more information about the service.

This is a credential-only node. Refer to [Custom API operations](../../../custom-operations/) to learn more. View [example workflows and related content](https://n8n.io/integrations/cisco-meraki/) on n8n's website.

## Using API key

To configure this credential, you'll need:

- An **API Key**: Refer to the [Cisco Meraki Obtaining your Meraki API Key documentation](https://developer.cisco.com/meraki/api-v1/authorization/#obtaining-your-meraki-api-key) for instructions on getting your API Key.

---

## crowd.dev credentials

**URL:** llms-txt#crowd.dev-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API key

You can use these credentials to authenticate the following nodes:

- [crowd.dev](../../app-nodes/n8n-nodes-base.crowddev/)
- [crowd.dev Trigger](../../trigger-nodes/n8n-nodes-base.crowddevtrigger/)

Create a working instance of [crowd.dev](https://www.crowd.dev/).

## Supported authentication methods

## Related resources

Refer to [crowd.dev's documentation](https://docs.crowd.dev/docs) for more information about the service, and their [API documentation](https://api.crowd.dev/api-reference) for working with the API.

## Using API key

To configure this credential, you'll need:

- A **URL**:
  - If your crowd.dev instance is hosted on crowd.dev, keep the default of `https://app.crowd.dev`.
  - If your crowd.dev instance is [self-hosted](https://docs.crowd.dev/docs/technical-docs/self-hosting), use the URL you use to access your crowd.dev instance.
- Your crowd.dev **Tenant ID**: Displayed in the **Settings** section of the crowd.dev app
- An API **Token**: Displayed in the **Settings** section of the crowd.dev app

Refer to the [crowd.dev API documentation](https://api.crowd.dev/api-reference) for more detailed instructions.

---

## Wise credentials

**URL:** llms-txt#wise-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API token
- Add a private key

You can use these credentials to authenticate the following nodes:

- [Wise](../../app-nodes/n8n-nodes-base.wise/)
- [Wise Trigger](../../trigger-nodes/n8n-nodes-base.wisetrigger/)

Create a [Wise](https://wise.com/) account.

## Supported authentication methods

## Related resources

Refer to [Wise's API documentation](https://docs.wise.com/api-docs/api-reference) for more information about the service.

## Using API token

To configure this credential, you'll need:

- An **API Token**: Go to your **user menu > Settings > API tokens** to generate an API token. Enter the generated API key in your n8n credential. Refer to [Getting started with the API](https://wise.com/help/articles/2958107/getting-started-with-the-api) for more information.
- Your **Environment**: Select the environment that best matches your Wise account environment.
  - If you're using a Wise test sandbox account, select **Test**.
  - Otherwise, select **Live**.
- **Private Key (Optional)**: For live endpoints requiring Strong Customer Authentication (SCA), generate a public and private key. Enter the private key here. Refer to [Add a private key](#add-a-private-key) for more information.
  - If you're using a **Test** environment, you'll only need to enter a Private Key if you've enabled Strong Customer Authentication on the [public keys management page](https://sandbox.transferwise.tech/public-keys).

## Add a private key

Wise protects some live endpoints and operations with Strong Customer Authentication (SCA). Refer to [Strong Customer Authentication & 2FA](https://docs.wise.com/api-docs/features/strong-customer-authentication-2fa) for details.

If you make a request to an endpoint that requires SCA, Wise returns a 403 Forbidden HTTP status code. The error returned will look like this:

> This request requires Strong Customer Authentication (SCA). Please add a key pair to your account and n8n credentials. See https://api-docs.transferwise.com/#strong-customer-authentication-personal-token

To use endpoints requiring SCA, generate an RSA key pair and add the relevant key information to both Wise and n8n:

1. Generate an RSA key pair:

1. Add the content of the public key `public.pem` to your Wise **user menu > Settings > API tokens > Manage public keys**.

1. Add the content of the private key `private.pem` to the **Private Key (Optional)** field in your n8n credential.

Refer to [Personal Token SCA](https://docs.wise.com/api-docs/guides/strong-customer-authentication-2fa/personal-token-sca) for more information.

**Examples:**

Example 1 (unknown):
```sh
openssl genrsa -out private.pem 2048
openssl rsa -pubout -in private.pem -out public.pem
```

AI coding with GPT

URL: llms-txt#ai-coding-with-gpt

Contents:

  • Use AI in the Code node
  • Usage limits
  • Feature limits
  • Writing good prompts
    • Example prompts
    • Reference incoming node data explicitly
    • Related resources
  • Fixing the code

Not available on self-hosted.

Python isn't supported.

Use AI in the Code node

AI assistance in the Code node is available to Cloud users. It isn't available in self-hosted n8n.

AI generated code overwrites your code

If you've already written some code on the Code tab, the AI generated code will replace it. n8n recommends using AI as a starting point to create your initial code, then editing it as needed.

To use ChatGPT to generate code in the Code node:

  1. In the Code node, set Language to JavaScript.
  2. Select the Ask AI tab.
  3. Write your query.
  4. Select Generate Code. n8n sends your query to ChatGPT, then displays the result in the Code tab.

Usage limits

During the trial phase there are no usage limits. If n8n makes the feature permanent, there may be usage limits as part of your pricing tier.

Feature limits

The ChatGPT implementation in n8n has the following limitations:

  • The AI writes code that manipulates data from the n8n workflow. You can't ask it to pull in data from other sources.
  • The AI doesn't know your data, just the schema, so you need to tell it things like how to find the data you want to extract, or how to check for null.
  • Nodes before the Code node must execute and deliver data to the Code node before you run your AI query.
  • Doesn't work with large incoming data schemas.
  • May have issues if there are a lot of nodes before the Code node.

Writing good prompts

Writing good prompts increases the chance of getting useful code back.

  • Provide examples: if possible, give a sample expected output. This helps the AI to better understand the transformation or logic you're aiming for.
  • Describe the processing steps: if there are specific processing steps or logic that should apply to the data, list them in sequence. For example: "First, filter out all users under 18. Then, sort the remaining users by their last name."
  • Avoid ambiguities: while the AI understands various instructions, being clear and direct ensures you get the most accurate code. Instead of saying "Get the older users," you might say "Filter users who are 60 years and above."
  • Be clear about what you expect as the output. Do you want the data transformed, filtered, aggregated, or sorted? Provide as much detail as possible.

And some n8n-specific guidance:

  • Think about the input data: make sure ChatGPT knows which pieces of the data you want to access, and what the incoming data represents. You may need to tell ChatGPT about the availability of n8n's built-in methods and variables.
  • Declare interactions between nodes: if your logic involves data from multiple nodes, specify how they should interact. "Merge the output of 'Node A' with 'Node B' based on the 'userID' property". If you prefer data to come from certain nodes or to ignore others, be clear: "Only consider data from the 'Purchases' node and ignore the 'Refunds' node."
  • Ensure the output is compatible with n8n. Refer to Data structure for more information on the data structure n8n requires.

Example prompts

These examples show a range of possible prompts and tasks.

Example 1: Find a piece of data inside a second dataset

To try the example yourself, download the example workflow and import it into n8n.

In the third Code node, enter this prompt:

The slack data contains only one item. The input data represents all Notion users. Sometimes the person property that holds the email can be null. I want to find the notionId of the Slack user and return it.

Take a look at the code the AI generates.

This is the JavaScript you need:

Example 2: Data transformation

To try the example yourself, download the example workflow and import it into n8n.

In the Join items Code node, enter this prompt:

Return a single line of text that has all usernames listed with a comma. Each username should be enquoted with a double quotation mark.

Take a look at the code the AI generates.

This is the JavaScript you need:

Example 3: Summarize data and create a Slack message

To try the example yourself, download the example workflow and import it into n8n.

In the Summarize Code node, enter this prompt:

Create a markdown text for Slack that counts how many ideas, features and bugs have been submitted. The type of submission is saved in the property_type field. A feature has the property "Feature", a bug has the property "Bug" and an idea has the property "Idea". Also, list the five top submissions by vote in that message. Use "" as markdown for links.

Take a look at the code the AI generates.

This is the JavaScript you need:

Reference incoming node data explicitly

If your incoming data contains nested fields, using dot notation to reference them can help the AI understand what data you want.

To try the example yourself, download the example workflow and import it into n8n.

In the second Code node, enter this prompt:

The data in "Mock data" represents a list of people. For each person, return a new item containing personal_info.first_name and work_info.job_title.

This is the JavaScript you need:

Related resources

Pluralsight offers a short guide on How to use ChatGPT to write code, which includes example prompts.

Fixing the code

The AI-generated code may work without any changes, but you may have to edit it. You need to be aware of n8n's Data structure. You may also find n8n's built-in methods and variables useful.

Examples:

Example 1 (unknown):

const slackUser = $("Mock Slack").all()[0];
const notionUsers = $input.all();
const slackUserEmail = slackUser.json.email;

const notionUser = notionUsers.find(
  (user) => user.json.person && user.json.person.email === slackUserEmail
);

return notionUser ? [{ json: { notionId: notionUser.json.id } }] : [];

Example 2 (unknown):

const items = $input.all();
const usernames = items.map((item) => `"${item.json.username}"`);
const result = usernames.join(", ");
return [{ json: { usernames: result } }];

Example 3 (unknown):

const submissions = $input.all();

// Count the number of ideas, features, and bugs
let ideaCount = 0;
let featureCount = 0;
let bugCount = 0;

submissions.forEach((submission) => {
  switch (submission.json.property_type[0]) {
    case "Idea":
      ideaCount++;
      break;
    case "Feature":
      featureCount++;
      break;
    case "Bug":
      bugCount++;
      break;
  }
});

// Sort submissions by votes and take the top 5
const topSubmissions = submissions
  .sort((a, b) => b.json.property_votes - a.json.property_votes)
  .slice(0, 5);

let topSubmissionText = "";
topSubmissions.forEach((submission) => {
  topSubmissionText += `<${submission.json.url}|${submission.json.name}> with ${submission.json.property_votes} votes\n`;
});

// Construct the Slack message
const slackMessage = `*Summary of Submissions*\n
Ideas: ${ideaCount}\n
Features: ${featureCount}\n
Bugs: ${bugCount}\n
Top 5 Submissions:\n
${topSubmissionText}`;

return [{ json: { slackMessage } }];

Example 4 (unknown):

const items = $input.all();
const newItems = items.map((item) => {
  const firstName = item.json.personal_info.first_name;
  const jobTitle = item.json.work_info.job_title;
  return {
    json: {
      firstName,
      jobTitle,
    },
  };
});
return newItems;

MSG91 credentials

URL: llms-txt#msg91-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key
  • IP Security

You can use these credentials to authenticate the following nodes:

Create a MSG91 account.

Supported authentication methods

Related resources

Refer to MSG91's API documentation for more information about the service.

Using API key

To configure this credential, you'll need:

IP Security

MSG91 enables IP Security by default for authkeys.

For the n8n credentials to function with this setting enabled, add all the n8n IP addresses as whitelisted IPs in MSG91. You can add them in one of two places, depending on your desired security level:

  • To allow any/all authkeys in the account to work with n8n, add the n8n IP addresses in the Company's whitelisted IPs section of the Authkey page.
  • To allow only specific authkeys to work with n8n, add the n8n IP addresses in the Whitelisted IPs section of an authkey's details.

Limit

URL: llms-txt#limit

Contents:

  • Node parameters
    • Max Items
    • Keep
  • Templates and examples
  • Related resources

Use the Limit node to remove items beyond a defined maximum number. You can choose whether n8n takes the items from the beginning or end of the input data.

Configure this node using the following parameters.

Max Items

Enter the maximum number of items that n8n should keep. If the input data contains more items than this value, n8n removes the excess items.

Keep

If the node has to remove items, select where it keeps the input items from:

  • First Items: Keeps the Max Items number of items from the beginning of the input data.
  • Last Items: Keeps the Max Items number of items from the end of the input data.
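Conceptually, the node's behavior matches this Code node snippet (an illustration only, not the node's implementation):

const items = $input.all();
const maxItems = 10; // the Max Items parameter

// Keep: First Items takes up to maxItems from the beginning of the input
const firstItems = items.slice(0, maxItems);

// Keep: Last Items takes up to maxItems from the end of the input
const lastItems = items.slice(-maxItems);

return firstItems;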

Templates and examples

Scrape and summarize webpages with AI

View template details

Generate Leads with Google Maps

View template details

Chat with OpenAI Assistant (by adding a memory)

View template details

Browse Limit integration templates, or search all templates

Related resources

Learn more about data structure and data flow in n8n workflows.


Toggl Trigger node

URL: llms-txt#toggl-trigger-node

Toggl is a time tracking app that offers online time tracking and reporting services through their website along with mobile and desktop applications.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Toggl Trigger integrations page.


Guardrails node

URL: llms-txt#guardrails-node

Contents:

  • Node parameters
    • Operation
    • Text To Check
    • Guardrails
    • Customize System Message

Use the Guardrails node to enforce safety, security, and content policies on text. You can use it to validate user input before sending it to an AI model, or to check the output from an AI model before using it in your workflow.

Chat Model Connection Required for LLM-based Guardrails

This node requires a Chat Model node to be connected to its Model input when using the Check Text for Violations operation with LLM-based guardrails. Many guardrail checks (like Jailbreak, NSFW, and Topical Alignment) are LLM-based and use this connection to evaluate the input text.

Use these parameters to configure the Guardrails node.

Operation

The operation mode for this node defines its behavior:

  • Check Text for Violations: Provides a full set of guardrails. Any violation sends the item to the Fail branch.
  • Sanitize Text: Provides a subset of guardrails that can detect URLs, regular expressions, secret keys, or personally identifiable information (PII), such as phone numbers and credit card numbers. The node replaces detected violations with placeholders.

Text To Check

The text the guardrails evaluate. Typically, you map this text using an expression from a previous node, such as text from a user query or a response from an AI model.

Guardrails

Select one or more guardrails to apply to the Text To Check. When you add a guardrail from the list, its specific configuration options appear below.

  • Keywords: Checks if specified keywords appear in the input text.
    • Keywords: A comma-separated list of words to block.
  • Jailbreak: Detects attempts to bypass AI safety measures or exploit the model.
    • Customize Prompt: (Boolean) If you turn this on, a text input appears with the default prompt for the jailbreak detection model. You can change this prompt to fine-tune the guardrail.
    • Threshold: A value between 0.0 and 1.0. This represents the confidence level required from the AI model to flag the input as a jailbreak attempt. A higher threshold is stricter.
  • NSFW: Detects attempts to generate Not Safe For Work (NSFW) content.
    • Customize Prompt: (Boolean) If you turn this on, a text input appears with the default prompt for the NSFW detection model. You can change this prompt to fine-tune the guardrail.
    • Threshold: A value between 0.0 and 1.0 representing the confidence level required to flag the content as NSFW.
  • PII: Detects personally identifiable information (PII) in the text.
    • Type: Choose which PII entities to scan for:
      • All: Scans for all available entity types.
      • Selected: Allows you to choose specific entities from a list.
    • Entities: (Appears if Type is Selected) A multi-select list of PII types to detect (for example, CREDIT_CARD, EMAIL_ADDRESS, PHONE_NUMBER, and US_SSN).
  • Secret Keys: Detects the presence of secret keys or API credentials in the text.
    • Permissiveness: How strict or permissive the detection should be when flagging secret keys:
      • Strict
      • Permissive
      • Balanced
  • Topical Alignment: Ensures the conversation stays within a predefined scope or topic (also known as "business scope").
    • Prompt: A preset prompt that defines the allowed topic. The guardrail checks if the Text To Check aligns with this prompt.
    • Threshold: A value between 0.0 and 1.0 representing the confidence level required to flag the input as off-topic.
  • URLs: Manages URLs the node finds in the input text. It detects all URLs as violations, unless you specify them in Block All URLs Except.
    • Block All URLs Except: (Optional) A comma-separated list of URLs that you permit.
    • Allowed Schemes: Select the URL schemes to permit (for example, https, http, ftp, and mailto).
    • Block userinfo: (Boolean) If you turn this on, the node blocks URLs containing user credentials (for example, user:pass@example.com) to prevent credential injection.
    • Allow subdomain: (Boolean) If you turn this on, the node automatically allows subdomains of any URL in the Block All URLs Except list (for example, sub.example.com would be allowed if example.com is in the list).
  • Custom: Define your own custom, LLM-based guardrail.
    • Name: A descriptive name for your custom guardrail (for example, "Check for rude language").
    • Prompt: A prompt that instructs the AI model what to check for.
    • Threshold: A value between 0.0 and 1.0 representing the confidence level required to flag the input as a violation.
  • Custom Regex: Define your own custom regular expression patterns.
    • Name: A name for your custom pattern. The node uses this name as a placeholder in the Sanitize Text mode.
    • Regex: Your regular expression pattern.

Customize System Message

If you turn this on, a text input appears with a message that the guardrail uses to enforce thresholds and JSON output according to schema. Change it to modify the global guardrails behavior.


CircleCI credentials

URL: llms-txt#circleci-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using personal API token

You can use these credentials to authenticate the following nodes:

Create a CircleCI account.

Supported authentication methods

Related resources

Refer to CircleCI's API documentation for more information about the service.

Using personal API token

To configure this credential, you'll need:


Hybrid Analysis credentials

URL: llms-txt#hybrid-analysis-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Hybrid Analysis account.

Supported authentication methods

Related resources

Refer to Hybrid Analysis' API documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

Using API key

To configure this credential, you'll need:


Postmark Trigger node

URL: llms-txt#postmark-trigger-node

Postmark helps deliver and track application email. You can track statistics such as the number of emails sent or processed, opens, bounces, and spam complaints.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Postmark Trigger integrations page.


AWS Certificate Manager node

URL: llms-txt#aws-certificate-manager-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the AWS Certificate Manager node to automate work in AWS Certificate Manager, and integrate AWS Certificate Manager with other applications. n8n has built-in support for a wide range of AWS Certificate Manager features, including creating, deleting, getting, and renewing SSL certificates.

On this page, you'll find a list of operations the AWS Certificate Manager node supports and links to more resources.

Refer to AWS Certificate Manager credentials for guidance on setting up authentication.

  • Certificate
    • Delete
    • Get
    • Get Many
    • Get Metadata
    • Renew

Templates and examples

Clean Up Expired AWS ACM Certificates with Slack Approval

View template details

Generate SSL/TLS Certificate Expiry Reports with AWS ACM and AI for Slack & Email

View template details

Auto-Renew AWS Certificates with Slack Approval Workflow

View template details

Browse AWS Certificate Manager integration templates, or search all templates

Refer to AWS Certificate Manager's documentation for more information on this service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


LingvaNex node

URL: llms-txt#lingvanex-node

Contents:

  • Operations
  • Templates and examples

Use the LingvaNex node to automate work in LingvaNex, and integrate LingvaNex with other applications. n8n has built-in support for translating data with LingvaNex.

On this page, you'll find a list of operations the LingvaNex node supports and links to more resources.

Refer to LingvaNex credentials for guidance on setting up authentication.

Templates and examples

Get data from Hacker News and send to Airtable or via SMS

View template details

Get daily poems in Telegram

View template details

Translate instructions using LingvaNex

View template details

Browse LingvaNex integration templates, or search all templates


Wolfram|Alpha credentials

URL: llms-txt#wolfram|alpha-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key
  • Resolve Forbidden connection error

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Related resources

Refer to Wolfram|Alpha's Simple API documentation for more information about the service.

View n8n's Advanced AI documentation.

Using API key

To configure this credential, you'll need a registered Wolfram ID and:

  1. Open the Wolfram|Alpha Developer Portal and go to API Access.
  2. Select Get an App ID.
  3. Enter a Name for your application, like n8n integration.
  4. Enter a Description for your application.
  5. Select Simple API as the API.
  6. Select Submit.
  7. Copy the generated App ID and enter it in your n8n credential.

Refer to Getting Started in the Wolfram|Alpha Simple API documentation for more information.

Resolve Forbidden connection error

If you enter your App ID and get an error that the credential is Forbidden, make sure that you have verified your email address for your Wolfram ID:

  1. Go to your Wolfram ID Details.
  2. If you don't see the Verified label underneath your Email address, select the link to Send a verification email.
  3. You must open the link in that email to verify your email address.

It may take several minutes for the verification to propagate to the API, but once it does, retrying the n8n credential should succeed.


Simple Vector Store node

URL: llms-txt#simple-vector-store-node

Contents:

  • Data safety limitations
    • Vector store data isn't persistent
    • All instance users can access vector store data
  • Node usage patterns
    • Use as a regular node to insert and retrieve documents
    • Connect directly to an AI agent as a tool
    • Use a retriever to fetch documents
    • Use the Vector Store Question Answer Tool to answer questions
  • Memory Management
    • Configuration Options

Use the Simple Vector Store node to store and retrieve embeddings in n8n's in-app memory.

On this page, you'll find the node parameters for the Simple Vector Store node, and links to more resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

This node is different from AI memory nodes

The simple vector storage described here is different to the AI memory nodes such as Simple Memory.

This node creates a vector database in the app memory.

Data safety limitations

Before using the Simple Vector Store node, it's important to understand its limitations and how it works.

n8n recommends using Simple Vector store for development use only.

Vector store data isn't persistent

This node stores data in memory only. All data is lost when n8n restarts and may also be purged in low-memory conditions.

All instance users can access vector store data

Memory keys for the Simple Vector Store node are global, not scoped to individual workflows.

This means that all users of the instance can access vector store data by adding a Simple Vector Store node and selecting the memory key, regardless of the access controls set for the original workflow. Take care not to expose sensitive information when ingesting data with the Simple Vector Store node.

Node usage patterns

You can use the Simple Vector Store node in the following patterns.

Use as a regular node to insert and retrieve documents

You can use the Simple Vector Store as a regular node to insert or get documents. This pattern places the Simple Vector Store in the regular connection flow without using an agent.

You can see an example of this in step 2 of this template.

Connect directly to an AI agent as a tool

You can connect the Simple Vector Store node directly to the tool connector of an AI agent to use a vector store as a resource when answering queries.

Here, the connection would be: AI agent (tools connector) -> Simple Vector Store node.

Use a retriever to fetch documents

You can use the Vector Store Retriever node with the Simple Vector Store node to fetch documents from the Simple Vector Store node. This is often used with the Question and Answer Chain node to fetch documents from the vector store that match the given chat input.

An example of the connection flow (the linked example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Simple Vector Store.

Use the Vector Store Question Answer Tool to answer questions

Another pattern uses the Vector Store Question Answer Tool to summarize results and answer questions from the Simple Vector Store node. Rather than connecting the Simple Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store.

The connections flow in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Simple Vector store.

Memory Management

The Simple Vector Store implements memory management to prevent excessive memory usage:

  • Automatically cleans up old vector stores when memory pressure increases
  • Removes inactive stores that haven't been accessed for a configurable amount of time

Configuration Options

You can control memory usage with these environment variables:

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| N8N_VECTOR_STORE_MAX_MEMORY | Number | -1 | Maximum memory in MB allowed for all vector stores combined (-1 to disable limits). |
| N8N_VECTOR_STORE_TTL_HOURS | Number | -1 | Hours of inactivity after which a store gets removed (-1 to disable TTL). |

On n8n Cloud, these values are preset to 100 MB (about 8,000 documents, depending on document size and metadata) and 7 days, respectively. For self-hosted instances, both values default to -1 (no memory limits or time-based cleanup).
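
For self-hosted instances, you can adjust these limits by exporting the environment variables before starting n8n. The values below are illustrative only (they mirror the Cloud presets of 100 MB and 7 days, or 168 hours); pick limits that match your own memory budget:

export N8N_VECTOR_STORE_MAX_MEMORY=100
export N8N_VECTOR_STORE_TTL_HOURS=168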

This Vector Store node has four modes: Get Many, Insert Documents, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent). The mode you select determines the operations you can perform with the node and what inputs and outputs are available.

Get Many

In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context.

Insert Documents

Use insert documents mode to insert new documents into your vector database.

Retrieve Documents (as Vector Store for Chain/Tool)

Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.

Retrieve Documents (as Tool for AI Agent)

Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details.

Rerank Results: Enables reranking. If you enable this option, you must connect a reranking node to the vector store. That node then reranks the results for queries. You can use this option with the Get Many, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent) modes.

Get Many parameters

  • Memory Key: Select or create the key containing the vector memory you want to query.
  • Prompt: Enter the search query.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Insert Documents parameters

  • Memory Key: Select or create the key you want to store the vector memory as.
  • Clear Store: Use this parameter to control whether to wipe the vector store for the given memory key for this workflow before inserting data (turned on).

Retrieve Documents (As Vector Store for Chain/Tool) parameters

  • Memory Key: Select or create the key containing the vector memory you want to query.

Retrieve Documents (As Tool for AI Agent) parameters

  • Name: The name of the vector store.
  • Description: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
  • Memory Key: Select or create the key containing the vector memory you want to query.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Templates and examples

Building Your First WhatsApp Chatbot

View template details

RAG Chatbot for Company Documents using Google Drive and Gemini

View template details

🤖 AI Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant

View template details

Browse Simple Vector Store integration templates, or search all templates

Refer to LangChain's Memory Vector Store documentation for more information about the service.

View n8n's Advanced AI documentation.


Customer Messenger (n8n Training) node

URL: llms-txt#customer-messenger-(n8n-training)-node

Use this node only for the n8n new user onboarding tutorial. It provides no further functionality.


Line node

URL: llms-txt#line-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Deprecated: End of service

LINE Notify is discontinuing service as of April 1st, 2025, and this node will no longer work after that date. View LINE Notify's end of service announcement for more information.

Use the Line node to automate work in Line, and integrate Line with other applications. n8n has built-in support for a wide range of Line features, including sending notifications.

On this page, you'll find a list of operations the Line node supports and links to more resources.

Refer to Line credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Notification
    • Sends notifications to users or groups

Templates and examples

Line Message API : Push Message & Reply

View template details

Customer Support Channel and Ticketing System with Slack and Linear

View template details

Send daily weather updates via a notification in Line

View template details

Browse Line integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Splitting workflows with conditional nodes

URL: llms-txt#splitting-workflows-with-conditional-nodes

Splitting uses the IF or Switch nodes. It turns a single-branch workflow into a multi-branch workflow. This is a key piece of representing complex logic in n8n.

Compare these workflows:

This is the power of splitting and conditional nodes in n8n.

Refer to the IF or Switch documentation for usage details.


User management, SMTP, and two-factor authentication environment variables

URL: llms-txt#user-management-smtp,-and-two-factor-authentication-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

Refer to User management for more information on setting up user management and emails.

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| N8N_EMAIL_MODE | String | smtp | Enable emails. |
| N8N_SMTP_HOST | String | - | your_SMTP_server_name |
| N8N_SMTP_PORT | Number | - | your_SMTP_server_port |
| N8N_SMTP_USER | String | - | your_SMTP_username |
| N8N_SMTP_PASS | String | - | your_SMTP_password |
| N8N_SMTP_OAUTH_SERVICE_CLIENT | String | - | If using 2LO with a service account, this is your client ID. |
| N8N_SMTP_OAUTH_PRIVATE_KEY | String | - | If using 2LO with a service account, this is your private key. |
| N8N_SMTP_SENDER | String | - | Sender email address. You can optionally include the sender name. Example with name: N8N <contact@n8n.com> |
| N8N_SMTP_SSL | Boolean | true | Whether to use SSL for SMTP (true) or not (false). |
| N8N_SMTP_STARTTLS | Boolean | true | Whether to use STARTTLS for SMTP (true) or not (false). |
| N8N_UM_EMAIL_TEMPLATES_INVITE | String | - | Full path to your HTML email template. This overrides the default template for invite emails. |
| N8N_UM_EMAIL_TEMPLATES_PWRESET | String | - | Full path to your HTML email template. This overrides the default template for password reset emails. |
| N8N_UM_EMAIL_TEMPLATES_WORKFLOW_SHARED | String | - | Overrides the default HTML template for notifying users that a workflow was shared. Provide the full path to the template. |
| N8N_UM_EMAIL_TEMPLATES_CREDENTIALS_SHARED | String | - | Overrides the default HTML template for notifying users that a credential was shared. Provide the full path to the template. |
| N8N_UM_EMAIL_TEMPLATES_PROJECT_SHARED | String | - | Overrides the default HTML template for notifying users that a project was shared. Provide the full path to the template. |
| N8N_USER_MANAGEMENT_JWT_SECRET | String | - | Set a specific JWT secret. By default, n8n generates one on start. |
| N8N_USER_MANAGEMENT_JWT_DURATION_HOURS | Number | 168 | Set an expiration date for the JWTs in hours. |
| N8N_USER_MANAGEMENT_JWT_REFRESH_TIMEOUT_HOURS | Number | 0 | How many hours before the JWT expires to automatically refresh it. 0 means 25% of N8N_USER_MANAGEMENT_JWT_DURATION_HOURS. -1 means it will never refresh, which forces users to log in again after the period defined in N8N_USER_MANAGEMENT_JWT_DURATION_HOURS. |
| N8N_MFA_ENABLED | Boolean | true | Whether to enable two-factor authentication (true) or disable it (false). n8n ignores this if existing users have 2FA enabled. |
| N8N_INVITE_LINKS_EMAIL_ONLY | Boolean | false | When set to true, n8n will only deliver invite links via email and will not expose them through the API. This option enhances security by preventing invite URLs from being accessible programmatically or to highly privileged users. |
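
For example, a minimal SMTP setup for a self-hosted instance might look like the following. The host, port, user, password, and sender values here are placeholders; substitute the details for your own SMTP provider:

export N8N_EMAIL_MODE=smtp
export N8N_SMTP_HOST=smtp.example.com
export N8N_SMTP_PORT=465
export N8N_SMTP_USER=n8n@example.com
export N8N_SMTP_PASS=your_SMTP_password
export N8N_SMTP_SENDER="N8N <n8n@example.com>"
export N8N_SMTP_SSL=true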

WhatsApp Business Cloud node

URL: llms-txt#whatsapp-business-cloud-node

Contents:

  • Operations
  • Waiting for a response
    • Response Type
    • Approval response customization
    • Free Text response customization
    • Custom Form response customization
  • Templates and examples
  • Related resources
  • Common issues
  • What to do if your operation isn't supported

Use the WhatsApp Business Cloud node to automate work in WhatsApp Business, and integrate WhatsApp Business with other applications. n8n has built-in support for a wide range of WhatsApp Business features, including sending messages, and uploading, downloading, and deleting media.

On this page, you'll find a list of operations the WhatsApp Business Cloud node supports and links to more resources.

Refer to WhatsApp Business Cloud credentials for guidance on setting up authentication.

  • Message
    • Send
    • Send and Wait for Response
    • Send Template
  • Media
    • Upload
    • Download
    • Delete

Waiting for a response

By choosing the Send and Wait for Response operation, you can send a message and pause the workflow execution until a person confirms the action or provides more information.

You can choose between the following types of waiting and approval actions:

  • Approval: Users can approve or disapprove from within the message.
  • Free Text: Users can submit a response with a form.
  • Custom Form: Users can submit a response with a custom form.

You can customize the waiting and response behavior depending on which response type you choose. You can configure these options in any of the above response types:

  • Limit Wait Time: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time.
  • Append n8n Attribution: Whether to mention in the message that it was sent automatically with n8n (turned on) or not (turned off).

Approval response customization

When using the Approval response type, you can choose whether to present only an approval button or both approval and disapproval buttons.

You can also customize the button labels for the buttons you include.

Free Text response customization

When using the Free Text response type, you can customize the message button label, the form title and description, and the response button label.

Custom Form response customization

When using the Custom Form response type, you build a form using the fields and options you want.

You can customize each form element with the settings outlined in the n8n Form trigger's form elements. To add more fields, select the Add Form Element button.

You'll also be able to customize the message button label, the form title and description, and the response button label.

Templates and examples

Building Your First WhatsApp Chatbot

View template details

Respond to WhatsApp Messages with AI Like a Pro!

View template details

AI-Powered WhatsApp Chatbot 🤖📲 for Text, Voice, Images & PDFs with memory 🧠

View template details

Browse WhatsApp Business Cloud integration templates, or search all templates

Refer to WhatsApp Business Platform's Cloud API documentation for details about the operations.

For common errors or issues and suggested resolution steps, refer to Common Issues.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Strapi node

URL: llms-txt#strapi-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Strapi node to automate work in Strapi, and integrate Strapi with other applications. n8n has built-in support for a wide range of Strapi features, including creating and deleting entries.

On this page, you'll find a list of operations the Strapi node supports and links to more resources.

Refer to Strapi credentials for guidance on setting up authentication.

  • Entry
    • Create
    • Delete
    • Get
    • Get Many
    • Update

Templates and examples

Enrich FAQ sections on your website pages at scale with AI

View template details

Create, update, and get an entry in Strapi

View template details

Automate testimonials in Strapi with n8n

View template details

Browse Strapi integration templates, or search all templates

Refer to Strapi's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Using an API playground

URL: llms-txt#using-an-api-playground

Contents:

  • Documentation playground
  • Built-in playground

This documentation site provides a playground to test out calls. Self-hosted users also have access to a built-in playground hosted as part of their instance.

Documentation playground

You can test API calls from this site's API reference. You need to set your server's base URL and instance name, and add an API key.

n8n uses Scalar's open source API platform to power this functionality.

Exposed API key and data

Use a test API key with limited scopes and test data when using a playground. All calls from the playground are routed through Scalar's proxy servers.

You have access to your live data. This is useful for trying out requests. Be aware you can change or delete real data.

Built-in playground

The API playground isn't available on Cloud. It's available for all self-hosted pricing tiers.

The n8n API comes with a built-in Swagger UI playground in self-hosted versions. This provides interactive documentation, where you can try out requests. The path to access the playground depends on your hosting.

n8n constructs the path from values set in your environment variables:

The API version number is 1. There may be multiple versions available in the future.

If you select Authorize and enter your API key in the API playground, you have access to your live data. This is useful for trying out requests. Be aware you can change or delete real data.

The API includes built-in documentation about credential formats. This is available using the credentials endpoint:

How to find credentialTypeName

To find the type, download your workflow as JSON and examine it. For example, for a Google Drive node the {credentialTypeName} is googleDriveOAuth2Api:

Examples:

Example 1 (unknown):

N8N_HOST:N8N_PORT/N8N_PATH/api/v<api-version-number>/docs

Example 2 (unknown):

N8N_HOST:N8N_PORT/N8N_PATH/api/v<api-version-number>/credentials/schema/{credentialTypeName}

Example 3 (unknown):

{
    ...,
    "credentials": {
        "googleDriveOAuth2Api": {
        "id": "9",
        "name": "Google Drive"
        }
    }
}
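
For example, once you know the credential type name, you could call the schema endpoint like this. This sketch assumes the default host and port (localhost:5678), no custom path, and an API key passed in the X-N8N-API-KEY header; adjust the URL to match your own instance:

curl -H "X-N8N-API-KEY: <your-api-key>" \
  "http://localhost:5678/api/v1/credentials/schema/googleDriveOAuth2Api"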

Choose your node building approach

URL: llms-txt#choose-your-node-building-approach

Contents:

  • Data handling differences
  • Syntax differences

n8n has two node-building styles, declarative and programmatic.

You should use the declarative style for most nodes. This style:

  • Uses a JSON-based syntax, making it simpler to write, with less risk of introducing bugs.
  • Is more future-proof.
  • Supports integration with REST APIs.

The programmatic style is more verbose. You must use the programmatic style for:

  • Trigger nodes
  • Any node that isn't REST-based. This includes nodes that need to call a GraphQL API and nodes that use external dependencies.
  • Any node that needs to transform incoming data.
  • Full versioning. Refer to Node versioning for more information on types of versioning.

Data handling differences

The main difference between the declarative and programmatic styles is how they handle incoming data and build API requests. The programmatic style requires an execute() method, which reads incoming data and parameters, then builds a request. The declarative style handles this using the routing key in the operations object. Refer to Node base file for more information on node parameters and the execute() method.

Syntax differences

To understand the difference between the declarative and programmatic styles, compare the two code snippets below. This example creates a simplified version of the SendGrid integration, called "FriendGrid." The following code snippets aren't complete: they emphasize the differences in the node building styles.

In programmatic style:

In declarative style:

Examples:

Example 1 (unknown):

import {
	IDataObject,
	IExecuteFunctions,
	INodeExecutionData,
	INodeType,
	INodeTypeDescription,
	IRequestOptions,
} from 'n8n-workflow';

// Create the FriendGrid class
export class FriendGrid implements INodeType {
  description: INodeTypeDescription = {
    displayName: 'FriendGrid',
    name: 'friendGrid',
    . . .
    properties: [
      {
        displayName: 'Resource',
        . . .
      },
      {
        displayName: 'Operation',
        name: 'operation',
        type: 'options',
        displayOptions: {
          show: {
              resource: [
              'contact',
              ],
          },
        },
        options: [
          {
            name: 'Create',
            value: 'create',
            description: 'Create a contact',
          },
        ],
        default: 'create',
        description: 'The operation to perform.',
      },
      {
        displayName: 'Email',
        name: 'email',
        . . .
      },
      {
        displayName: 'Additional Fields',
        // Sets up optional fields
      },
    ],
};

  async execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
    let responseData;
    const resource = this.getNodeParameter('resource', 0) as string;
    const operation = this.getNodeParameter('operation', 0) as string;
    //Get credentials the user provided for this node
    const credentials = await this.getCredentials('friendGridApi') as IDataObject;

    if (resource === 'contact') {
      if (operation === 'create') {
      // Get email input
      const email = this.getNodeParameter('email', 0) as string;
      // Get additional fields input
      const additionalFields = this.getNodeParameter('additionalFields', 0) as IDataObject;
      const data: IDataObject = {
          email,
      };

      Object.assign(data, additionalFields);

      // Make HTTP request as defined in https://sendgrid.com/docs/api-reference/
      const options: IRequestOptions = {
        headers: {
            'Accept': 'application/json',
            'Authorization': `Bearer ${credentials.apiKey}`,
        },
        method: 'PUT',
        body: {
            contacts: [
            data,
            ],
        },
        url: `https://api.sendgrid.com/v3/marketing/contacts`,
        json: true,
      };
      responseData = await this.helpers.httpRequest(options);
      }
    }
    // Map data to n8n data
    return [this.helpers.returnJsonArray(responseData)];
  }
}

Example 2 (unknown):

import { INodeType, INodeTypeDescription } from 'n8n-workflow';

// Create the FriendGrid class
export class FriendGrid implements INodeType {
  description: INodeTypeDescription = {
    displayName: 'FriendGrid',
    name: 'friendGrid',
    . . .
    // Set up the basic request configuration
    requestDefaults: {
      baseURL: 'https://api.sendgrid.com/v3/marketing'
    },
    properties: [
      {
        displayName: 'Resource',
        . . .
      },
      {
        displayName: 'Operation',
        name: 'operation',
        type: 'options',
        displayOptions: {
          show: {
            resource: [
              'contact',
            ],
          },
        },
        options: [
          {
            name: 'Create',
            value: 'create',
            description: 'Create a contact',
            // Add the routing object
            routing: {
              request: {
                method: 'POST',
                url: '=/contacts',
                send: {
                  type: 'body',
                  properties: {
                    email: '={{ $parameter["email"] }}'
                  }
                }
              }
            },
            // Handle the response to contact creation
            output: {
              postReceive: [
                {
                  type: 'set',
                  properties: {
                    value: '={{ { "success": $response } }}'
                  }
                }
              ]
            }
          },
        ],
        default: 'create',
        description: 'The operation to perform.',
      },
      {
        displayName: 'Email',
        . . .
      },
      {
        displayName: 'Additional Fields',
        // Sets up optional fields
      },
    ],
  }
  // No execute method needed
}

Workflow management in Embed

URL: llms-txt#workflow-management-in-embed

Contents:

  • Workflow per user
      1. Obtain user credentials
      2. Create user credentials
      3. Create the workflow
  • Single workflow
    • Create the workflow
    • Call the workflow

Embed requires an embed license. For more information about when to use Embed, as well as costs and licensing processes, refer to Embed on the n8n website.

When managing an embedded n8n deployment that spans teams or organizations, you will likely need to run the same (or similar) workflows for multiple users. There are two available options for doing so:

| Solution | Pros | Cons |
| --- | --- | --- |
| Create a workflow for each user | No limitation on how the workflow starts (can use any trigger) | Requires managing multiple workflows. |
| Create a single workflow, and pass it user credentials when executing | Simplified workflow management (only need to change one workflow). | To run the workflow, your product must call it. |

The APIs referenced in this document are subject to change at any time. Be sure to check for continued functionality with each version upgrade.

There are three general steps to follow:

  • Obtain the credentials for each user, and any additional parameters that may be required based on the workflow.
  • Create the n8n credentials for this user.
  • Create the workflow.

1. Obtain user credentials

Here you need to capture all credentials for any node/service this user must authenticate with, along with any additional parameters required for the particular workflow. The credentials and any parameters needed will depend on your workflow and what you are trying to do.

2. Create user credentials

After all relevant credential details have been obtained, you can proceed to create the relevant service credentials in n8n. This can be done using the Editor UI or API call.

Using the Editor UI

  1. From the menu select Credentials > New.
  2. Use the drop-down to select the Credential type to create, for example Airtable.
  3. In the Create New Credentials modal, enter the corresponding credentials details for the user, and select the nodes that will have access to these credentials.
  4. Click Create to finish and save.

The frontend API used by the Editor UI can also be called to achieve the same result. The API endpoint is in the format: https://<n8n-domain>/rest/credentials.

For example, to create the credentials in the Editor UI example above, the request would be:

With the request body:

The response will contain the ID of the new credentials, which you will use when creating the workflow for this user:

3. Create the workflow

Best practice is to have a “base” workflow that you then duplicate and customize for each new user with their credentials (and any other details).

You can duplicate and customize your template workflow using either the Editor UI or API call.

Using the Editor UI

  1. From the menu select Workflows > Open to open the template workflow to be duplicated.

  2. Select Workflows > Duplicate, then enter a name for this new workflow and click Save.

  3. Update all relevant nodes to use the credentials for this user (created above).

  4. Save this workflow and set it to Active using the toggle in the top-right corner.

You can also duplicate and customize the workflow by calling the API:

  1. Fetch the JSON of the template workflow using the endpoint: https://<n8n-domain>/rest/workflows/<workflow_id>

The response will contain the JSON data of the selected workflow:

  2. Save the returned JSON data and update any relevant credentials and fields for the new user.

  3. Create a new workflow using the updated JSON as the request body at endpoint: https://<n8n-domain>/rest/workflows

The response will contain the ID of the new workflow, which you will use in the next step.

  4. Lastly, activate the new workflow by passing the additional value active in your JSON payload.
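
As a rough sketch only (the frontend API is unversioned and may change; this assumes the /rest/workflows/<workflow_id> endpoint accepts a PATCH-style update, and omits any authentication your instance requires):

curl -X PATCH "https://<n8n-domain>/rest/workflows/<new_workflow_id>" \
  -H "Content-Type: application/json" \
  --data '{"active": true}'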

There are four steps to follow to implement this method:

  • Obtain the credentials for each user, and any additional parameters that may be required based on the workflow. See Obtain user credentials above.
  • Create the n8n credentials for this user. See Create user credentials above.
  • Create the workflow.
  • Call the workflow as needed.

Create the workflow

The details and scope of this workflow will vary greatly according to the individual use case, however there are a few design implementations to keep in mind:

  • This workflow must be triggered by a Webhook node.
  • The incoming webhook call must contain the user's credentials and any other workflow parameters required.
  • Each node where the user's credentials are needed should use an expression so that the node's credential field reads the credential provided in the webhook call.
  • Save and activate the workflow, ensuring the production URL is selected for the Webhook node. Refer to the Webhook node documentation for more information.

Call the workflow

For each new user, or for any existing user as may be needed, call the webhook defined as the workflow trigger and provide the necessary credentials (and any other workflow parameters).
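
For illustration, a minimal sketch of such a call is shown below. The webhook path and the field names (userId, airtableApiKey) are hypothetical and depend entirely on how you designed the workflow and which credentials it expects:

curl -X POST "https://<n8n-domain>/webhook/<your-webhook-path>" \
  -H "Content-Type: application/json" \
  --data '{"userId": "42", "airtableApiKey": "q12we34r5t67yu"}'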

Examples:

Example 1 (unknown):

POST https://<n8n-domain>/rest/credentials

Example 2 (unknown):

{
   "name":"MyAirtable",
   "type":"airtableApi",
   "nodesAccess":[
      {
         "nodeType":"n8n-nodes-base.airtable"
      }
   ],
   "data":{
      "apiKey":"q12we34r5t67yu"
   }
}

Example 3 (unknown):

{
   "data":{
      "name":"MyAirtable",
      "type":"airtableApi",
      "data":{
         "apiKey":"q12we34r5t67yu"
      },
      "nodesAccess":[
         {
            "nodeType":"n8n-nodes-base.airtable",
            "date":"2021-09-10T07:41:27.770Z"
         }
      ],
      "id":"29",
      "createdAt":"2021-09-10T07:41:27.777Z",
      "updatedAt":"2021-09-10T07:41:27.777Z"
   }
}

Example 4 (unknown):

GET https://<n8n-domain>/rest/workflows/1012

Taiga credentials

URL: llms-txt#taiga-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth

You can use these credentials to authenticate the following nodes:

Create a Taiga account.

Supported authentication methods

Refer to Taiga's API documentation for more information about the service.

To configure this credential, you'll need:

  • A Username: Enter your username or user email address. Refer to Normal login for more information.
  • A Password: Enter your password.
  • The Environment: Choose between Cloud or Self-Hosted. For Self-Hosted instances, you'll also need to add:
    • The URL: Enter your Taiga URL.

n8n Form node

URL: llms-txt#n8n-form-node

Contents:

  • Setting up the node
    • Set default selections with query parameters
    • Displaying custom HTML
    • Including hidden fields
    • Defining the form using JSON
    • Form Ending
    • Forms with branches
    • Node options
  • Running the node
    • Build and test workflows

Use the n8n Form node to create user-facing forms with multiple steps. You can add other nodes with custom logic between to process user input. You must start the workflow with the n8n Form Trigger node.

View workflow file

Setting up the node

Set default selections with query parameters

You can set the initial values for fields by using query parameters with the initial URL provided by the n8n Form Trigger node. Every page in the form receives the same query parameters sent to the n8n Form Trigger node URL.

Query parameters are only available when using the form in production mode. n8n won't populate field values from query parameters in testing mode.

When using query parameters, percent-encode any field names or values that use special characters. This ensures n8n uses the initial values for the given fields. You can use tools like URL Encode/Decode to format your query parameters using percent-encoding.

As an example, imagine you have a form with the following properties:

  • Production URL: https://my-account.n8n.cloud/form/my-form
  • Fields:
    • name: Jane Doe
    • email: jane.doe@example.com

With query parameters and percent-encoding, you could use the following URL to set initial field values to the data above:

Here, percent-encoding replaces the at-symbol (@) with the string %40 and the space character with the string %20. This sets the initial value for these fields no matter which page of the form they appear on.

Displaying custom HTML

You can display custom HTML on your form by adding a Custom HTML field to your form. This provides an HTML box where you can insert arbitrary HTML code to display as part of the form page.

You can use the HTML field to enrich your form page by including things like links, images, videos, and more. n8n will render the content with the rest of the form fields in the normal document flow.

Because custom HTML content is read-only, these fields aren't included in the form output data by default. To include the raw HTML content in the node output, provide a name for the data using the Element Name field.

The HTML field doesn't support <script>, <style>, or <input> elements.

If you're using the Form Ending Page Type, you can fully customize the final page that you send users (including the use of <script>, <style>, and <input> elements) by setting the On n8n Form Submission parameter to Show Text.

Including hidden fields

It's possible to include fields in a form without displaying them to users. This is useful when you want to pass extra data to the form that doesn't require interactive user input.

To add fields that won't show up on the form, use the Hidden Field form element. There, you can define the Field Name and optionally provide a default value by filling out the Field Value.

When serving the form, you can pass values for hidden fields using query parameters.
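
For example, reusing the production URL from the earlier example and assuming a hidden field named source (a hypothetical field name), the following URL would set its value to newsletter:

https://my-account.n8n.cloud/form/my-form?source=newsletter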

Defining the form using JSON

Use Define Form > Using JSON to define the fields of your form with a JSON array of objects. Each object defines a single field by using a combination of these keys:

  • fieldLabel: The label that appears above the input field.
  • fieldType: Choose from checkbox, date, dropdown, email, file, hiddenField, html, number, password, radio, text, or textarea.
    • Use date to include a date picker in the form. Refer to Date and time with Luxon for more information on formatting dates.
    • When using dropdown, set the choices with fieldOptions (reference the example below). By default, the dropdown is single-choice. To make it multiple-choice, set multiselect to true. As an alternative, you can use checkbox or radio together with fieldOptions too.
    • When using file, set multipleFiles to true to allow users to select more than one file. To define the file types to allow, set acceptFileTypes to a string containing a comma-separated list of file extensions (reference the example below).
    • Use hiddenField to add a hidden field to your form. Refer to Including hidden fields for more information.
    • Use html to display custom HTML on your form. Refer to Displaying custom HTML for more information.
  • placeholder: Specify placeholder data for the field. You can use this for every fieldType except dropdown, date, and file.
  • requiredField: Require users to complete this field on the form.

An example JSON that shows the general format required and the keys available:

Form Ending

Use the Form Ending Page Type to end a form and either show a completion page, redirect the user to a URL, or display custom HTML or text. Only one Form Ending page displays per execution, even when n8n executes multiple branches that contain Form Ending nodes.

Choose between these options when using On n8n Form Submission:

  • Show Completion Screen: Shows users a final screen to confirm that they submitted the form.
    • Fill in Completion Title to set the h1 title on the form.
    • n8n displays the Completion Message as a subtitle below the main h1 title on the form. Use \n or <br> to add a line break.
    • Select Add option and fill in Completion Page Title to set the page's title in the browser tab.
  • Redirect to URL: Redirect the user to a specified URL when the form completes.
    • Fill in the URL field with the page you want to redirect to when users complete the form.
  • Show Text: Display a final page defined by arbitrary plain text and HTML.
    • Fill in the Text field with the HTML or plain text content you wish to show.
  • Return Binary File: Return a binary file upon completion.
    • Fill in Completion Title to set the h1 title on the form.
    • n8n displays the Completion Message as a subtitle below the main h1 title on the form. Use \n or <br> to add a line break.
    • Provide the Input Data Field Name containing the binary file to return to the user.

Forms with branches

The n8n Form node executes and displays its associated form page whenever it receives data from a previous node. When building forms with n8n, to avoid confusion, it's important to understand how forms behave when branching occurs.

Workflows with mutually exclusive branches

Form workflows containing mutually exclusive branches work as expected. n8n will execute a single branch according to the submitted data and conditions you outline. As it executes, n8n will display each page in the branch, ending with an n8n Form node with the Form Ending page type.

This workflow demonstrates mutually exclusive branching. Each selection can only execute a single branch.

View workflow file

Workflows that may execute multiple branches

Form workflows that send data to multiple branches at the same time require more care. When multiple branches receive data during an execution (for example, from a switch node), n8n executes each branch that receives data sequentially. Upon reaching the end of one branch, the execution will move to the next branch with data.

n8n only executes a single Form Ending n8n Form node for each execution. When multiple branches of a form workflow receive data, n8n ignores all Form Ending nodes except for the one associated with the final branch.

This workflow may execute more than one branch during an execution. Here, n8n executes all valid branches sequentially. This impacts which n8n Form nodes n8n executes (in particular, which Form Ending node displays):

View workflow file

Node options

Select Add Option to view more configuration options:

  • Form Title: The title for your form. n8n displays the Form Title as the webpage title and main h1 title on the form.
  • Form Description: The description for your form. n8n displays the Form Description as a subtitle below the main h1 title on the form. This field supports HTML. Use \n or <br> to add a line break. The Form Description also populates the HTML meta description for the page.
  • Button Label: The label to use for your form's submit button. n8n displays the Button Label as the name of the submit button.
  • Custom Form Styling: Override the default styling of the public form interface with CSS. The field pre-populates with the default styling so you can change only what you need to.
  • Completion Page Title: The title for the final completion page of the form.

Build and test workflows

While building or testing a workflow, use the Test URL in the n8n Form Trigger node. Using a test URL ensures that you can view the incoming data in the editor UI, which is useful for debugging.

There are two ways to test:

  • Select Execute Step. n8n opens the form. When you submit the form, n8n runs the node and any previous nodes, but not the rest of the workflow.
  • Select Execute Workflow. n8n opens the form. When you submit the form, n8n runs the workflow.

Production workflows

When your workflow is ready, switch to using the n8n Form Trigger's Production URL by opening the trigger node and selecting the Production URL in the Form URLs selector. You can then activate your workflow, and n8n runs it automatically when a user submits the form.

When working with a production URL, ensure that you have saved and activated the workflow. Data flowing through the Form trigger isn't visible in the editor UI with the production URL.

Templates and examples

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

AI-Powered Social Media Content Generator & Publisher

View template details

🚀Transform Podcasts into Viral TikTok Clips with Gemini+ Multi-Platform Posting

View template details

Browse n8n Form integration templates, or search all templates

Examples:

Example 1 (unknown):

https://my-account.n8n.cloud/form/my-form?email=jane.doe%40example.com&name=Jane%20Doe

Example 2 (unknown):

// Use the "requiredField" key on any field to mark it as mandatory
// Use the "placeholder" key to specify placeholder data for all fields
// except 'dropdown', 'date' and 'file'

[
  {
    "fieldLabel": "Date Field",
    "fieldType": "date",
    "formatDate": "mm/dd/yyyy", // how to format received date in n8n
    "requiredField": true
  },
  {
    "fieldLabel": "Dropdown Options",
    "fieldType": "dropdown",
    "fieldOptions": {
      "values": [
        {
          "option": "option 1"
        },
        {
          "option": "option 2"
        }
      ]
    },
    "requiredField": true
  },
  {
    "fieldLabel": "Multiselect",
    "fieldType": "dropdown",
    "fieldOptions": {
      "values": [
        {
          "option": "option 1"
        },
        {
          "option": "option 2"
        }
      ]
    },
    "multiselect": true // setting to true allows multi-select
  },
  {
    "fieldLabel": "Email",
    "fieldType": "email",
    "placeholder": "me@mail.con"
  },
  {
    "fieldLabel": "File",
    "fieldType": "file",
    "multipleFiles": true, // setting to true allows multiple files selection
    "acceptFileTypes": ".jpg, .png" // allowed file types
  },
  {
    "fieldLabel": "Number",
    "fieldType": "number"
  },
  {
    "fieldLabel": "Password",
    "fieldType": "password"
  },
  {
    // "fieldType": "text" can be omitted since it's the default type
    "fieldLabel": "Text"
  },
  {
    "fieldLabel": "Textarea",
    "fieldType": "textarea"
  },
  {
    "fieldType": "html",
    "elementName": "content", // Optional field. It can be used to include the html in the output.
    "html": "<div>Custom element</div>"
  },
  {
    "fieldLabel": "Checkboxes",
    "fieldType": "checkbox",
    "fieldOptions": {
      "values": [
        {
          "option": "option 1"
        },
        {
          "option": "option 2"
        }
      ]
    }
  },
  {
    "fieldLabel": "Radio",
    "fieldType": "radio",
    "fieldOptions": {
      "values": [
        {
          "option": "option 1"
        },
        {
          "option": "option 2"
        }
      ]
    }
  },
  {
    "fieldLabel": "hidden label",
    "fieldType": "hiddenField",
    "fieldValue": "extra form data"
  }
]

Cloudflare credentials

URL: llms-txt#cloudflare-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Cloudflare's API documentation for more information about the service.

To configure this credential, you'll need:


Specify user folder path

URL: llms-txt#specify-user-folder-path

n8n saves user-specific data like the encryption key, SQLite database file, and the ID of the tunnel (if used) in the subfolder .n8n of the user who started n8n. It's possible to overwrite the user-folder using an environment variable.

Refer to Environment variables reference for more information on this variable.

Examples:

Example 1 (unknown):

export N8N_USER_FOLDER=/home/jim/n8n

Pull latest version

URL: llms-txt#pull-latest-version


Release notes

URL: llms-txt#release-notes

Contents:

New features and bug fixes for n8n.

You can also view the Releases in the GitHub repository.

Latest and Next versions

n8n releases a new minor version most weeks. The latest version is for production use. next is the most recent release. You should treat next as a beta: it may be unstable. To report issues, use the forum.

Current latest: 1.118.2
Current next: 1.119.0

The steps to update your n8n depend on which n8n platform you use. Refer to the documentation for your n8n:

Semantic versioning in n8n

n8n uses semantic versioning. All version numbers are in the format MAJOR.MINOR.PATCH. Version numbers increment as follows:

  • MAJOR version when making incompatible changes which can require user action.
  • MINOR version when adding functionality in a backward-compatible manner.
  • PATCH version when making backward-compatible bug fixes.

You can find the release notes for older versions of n8n here

View the commits for this version.
Release date: 2025-11-05

This is the latest version. n8n recommends using the latest version. The next version may be unstable. To report issues, use the forum.

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-11-03

This is the next version. n8n recommends using the latest version. The next version may be unstable. To report issues, use the forum.

This release includes multiple bug fixes for AI Agent, task runners, editor, and integrations, as well as new features like improved workflow settings, AWS Assume Role credentials, and enhanced security and audit capabilities.

The Guardrails node provides a set of rules and policies that control an AI agent's behavior by filtering its inputs and outputs. This helps safeguard against malicious input and against generating unsafe or undesirable responses. There are two operations:

  • Check Text for Violations: Validates text against a set of policies (e.g. NSFW, prompt injection).
  • Sanitize Text: Detects and replaces specific data such as PII, URLs, or secrets with placeholders.

The default presets and prompts are adapted from the open-source guardrails package made available by OpenAI.

For more info, see Guardrails documentation

cesars-gh
ongdisheng

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-28

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-28

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-27

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-27

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-24

This release contains bug fixes.

AI Workflow Builder is now available to Enterprise Cloud users.

AI Workflow Builder turns prompts into workflows. Describe what you want to build, and n8n will generate a draft workflow by adding, configuring, and connecting nodes for you. From there, you can refine and expand the workflow directly in the editor.

  • Previously available to Starter and Pro users, AI Workflow Builder is now accessible to Enterprise Cloud users as well, with 1,000 monthly credits.
  • Supported on n8n version 1.115+. If you don't see the feature yet, open /settings/usage to trigger a license refresh.
  • We've fixed a bug and now cloud users on v1.117.1 onwards will have access to a more reliable builder.
  • We're currently working on bringing AI Workflow Builder to self-hosted users as well, including Community, Business, and Enterprise.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-21

This release contains bug fixes.

jackfrancismurphy
JiriDeJonghe
ramkrishna2910
Oracle and/or its affiliates (sudarshan12s)

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-21

This release contains a bug fix.

View the commits for this version.
Release date: 2025-10-21

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-14

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-13

This release contains bug fixes.

Data migration tool

You can now easily migrate n8n data between different database types. This new tooling currently supports SQLite and Postgres, making it simpler to move to a database that scales while taking your data with you.

The tooling comes in the form of two new CLI commands: export:entities and import:entities.

Export: The new export command lets you export data from your existing n8n database (SQLite or Postgres), producing a set of encrypted files within a compressed directory for you to move around and use with the import command.

For details, see Export entities

Import: The new import command allows you to read a compressed and encrypted set of files generated by the export command, and import them into your new database of choice (SQLite or Postgres) to be used with your n8n instance.

For details, see Import entities
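
As a rough sketch (flags and additional options aren't shown here; refer to the linked pages for the exact invocation on your setup):

# On the instance backed by the source database, export the entities
n8n export:entities

# On the instance configured for the target database, import them
n8n import:entities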

JHTosas
clesecq
Gulianrdgd
tishun

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-13

This release contains bug fixes.

JHTosas
clesecq
Gulianrdgd
tishun

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-14

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-10

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-07

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-06

This release contains bug fixes.

AI Workflow Builder (Beta)

AI Workflow Builder turns your natural language prompts into working automations. Describe what you want to build, and n8n will generate a draft workflow by adding and configuring nodes and wiring up the logic for you. From there, you can refine, expand, or adjust the workflow directly in the editor.

This feature helps you move from idea to implementation faster, without losing technical control. It's especially helpful when starting from a blank canvas, validating an approach, or exploring new nodes and capabilities. Multi-turn interaction lets you iterate in conversation, turning your ideas into structured, production-ready workflows step by step.

Learn more about how we were building this feature in our forum post.

  • This feature is initially going to be available for Cloud users on the 14-day Trial, Starter and Pro plans.

  • Availability for Enterprise users on Cloud will follow in a future update.

  • We are actively exploring the best way to bring this feature to self-hosted users.

  • To ensure the smoothest experience for all users, this feature will be rolled out to users on version 1.115.0 over the course of a week so you may not have access to the feature immediately when you upgrade to 1.115.0.

Credit limits by plan: This feature will have monthly credit limits by plan.

  • Each prompt/interaction with the AI Workflow Builder consumes one credit.
  • Trial users have access to 20 credits, Starter plans have 50 per month and Pro plans will have 150 credits per month.
  • At this time, there will not be a way to access additional credits within your plan; however, we are exploring this.

Learn more about AI Workflow Builder in documentation.

Source Control: Added HTTPS support

You can now connect to Git repositories via HTTPS in addition to SSH, making Source Control usable in environments where SSH is restricted.

HTTPS is now supported as a connection type in Environments.

baileympearson
h40huynh
Ankit-69k
francisfontoura
iocanel

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-06

This release contains bug fixes.

View the commits for this version.
Release date: 2025-10-02

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-10-02

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-29

This release contains core updates, editor improvements, project updates, performance improvements, and bug fixes.

nealzhu3

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-26

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-26

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-24

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

Python task runner

This version introduces the Python task runner as a beta feature. This feature secures n8n's Python sandbox and enables users to run real Python modules in n8n workflows. The original Pyodide-based implementation will be phased out.

This is a breaking change that replaces Pyodide - see here for a list of differences. Any Code node set to the legacy python parameter will need to be manually updated to use the new pythonNative parameter. Any Code node script set to python and relying on Pyodide syntax is likely to need to be manually adjusted to account for breaking changes.

  • For self-hosting users, see here for deployment instructions for task runners going forward and how to install extra dependencies.
  • On n8n Cloud, this will be a gradual transition. If in your n8n Cloud instance the Code node offers an option named "Python (Native) (Beta)", then your instance has been transitioned to native Python and you will need to look out for any breaking changes. Imports are disabled for security reasons at this time.

The native Python runner is currently in beta and is subject to change as we find a balance between security and usability. Your feedback is welcome.

View the commits for this version.
Release date: 2025-09-24

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-23

This release contains bug fixes.

We're excited to introduce data tables, bringing built-in data storage to n8n. You can now store and query structured data directly inside the platform, without relying on external databases for many common automation scenarios. Track workflow state between runs, store tokens or session data, keep product or customer reference tables, or stage intermediate results for multi-step processes.

Previously, persisting data meant provisioning and connecting to an external store such as Redis or Google Sheets. That added credential setup, infrastructure overhead, latency, and constant context switching. Data tables eliminate that friction and keep your data easily editable and close to your workflows.

Data tables are available today on all plans. They currently support numbers, strings, and datetimes, with JSON support coming soon. On Cloud, each instance can store up to 50 MB. On self-hosted setups, the default is also 50 MB, but this limit can be adjusted if your infrastructure allows.

Overview of data tables

Create a data table

  • From the canvas, open the Create workflow dropdown and select Create Data table.
  • Or, go to the Overview panel on the left-side navigation bar and open the Data tables tab.

Use a data table in your workflow

  • Add the Data table node to your workflow to get, update, insert, upsert, or delete rows.

Adjust the storage limit (self-hosted only)

  • Change the default 50 MB limit with the environment variable: N8N_DATA_TABLES_MAX_SIZE_BYTES. See configuration docs and the sketch below.

  • Data tables don't currently support foreign keys or default values.

  • For now, all data tables are accessible to everyone in a project. More granular permissions and sharing options are planned.

Learn more about data tables and the Data table node.
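
A minimal sketch of raising the data table limit on a self-hosted instance, assuming a shell environment and an illustrative value of 100 MB:

export N8N_DATA_TABLES_MAX_SIZE_BYTES=104857600  # 100 MB, illustrative value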

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-23

This release contains an editor improvement.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-22

This release contains core updates, editor improvements, a new node, node updates, and bug fixes.

We've made updates to strengthen Single Sign-On (SSO) reliability and security, especially for enterprise and multi-instance setups.

  • OIDC and SAML sync in multi-main setups [version: 1.113.0]: In multi-main deployments, updates to SSO settings are now synchronized across all instances, ensuring consistent login behavior everywhere.
  • Enhanced OIDC integration [version 1.111.0]: n8n now supports OIDC providers that enforce state and nonce parameters. These are validated during login, providing smoother and more secure Single Sign-On.

Filter insights by project

We've added project filtering to insights, enabling more granular reporting and visibility into individual project performance.

ongdisheng

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-19

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-19

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-19

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-18

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-17

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-15

This release contains API improvements, core updates, editor improvements, node updates, and bug fixes.

Additional API Endpoints versions

We've made several updates to the Executions API:

  • Execution details: GET /executions now includes status and workflow_name in the response.
  • Retry execution endpoint: Added new public API endpoints to retry failed executions.
  • Additional filters: You can now filter executions by running or canceled status.
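Here's a rough sketch of how these additions look from a client calling the public REST API. The instance URL is a placeholder, and the exact path of the new retry endpoint is an assumption based on this note, so double-check both against the API reference for your version.

```typescript
// Minimal sketch, assuming an n8n instance at N8N_URL and a public API key.
// The retry path below is an assumption based on this release note; verify it
// (and the available status filters) against your instance's API reference.
const N8N_URL = "https://n8n.example.com";      // hypothetical instance URL
const API_KEY = process.env.N8N_API_KEY ?? "";  // public API key

async function listFailedExecutions() {
  const res = await fetch(`${N8N_URL}/api/v1/executions?status=error&limit=20`, {
    headers: { "X-N8N-API-KEY": API_KEY },
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const { data } = await res.json();
  // Per this release, each execution also carries status and workflow_name.
  for (const execution of data) {
    console.log(execution.id, execution.status, execution.workflow_name);
  }
  return data;
}

async function retryExecution(executionId: string) {
  // Assumed shape of the new retry endpoint; adjust the path to match the docs.
  const res = await fetch(`${N8N_URL}/api/v1/executions/${executionId}/retry`, {
    method: "POST",
    headers: { "X-N8N-API-KEY": API_KEY },
  });
  return res.json();
}

listFailedExecutions().catch(console.error);
```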

Enhancements to workflow diff

We also added several updates to workflow diffs:

  • Better view in Code nodes and Stickies: Workflow diffs now highlight changes per line instead of per block, making edits easier to review and understand.
  • Enable/Disable sync: You can now enable or disable sync in the viewport, letting you compare a workflow change in one view without affecting the other.

GuraaseesSingh
jabbson
ongdisheng

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-08

This release contains core updates, API improvements, node updates, and bug fixes.

abellion
cesars-gh
durran

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-03

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-03

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-09-01

This release contains core updates, editor improvements, node updates, performance improvements, and bug fixes.

heyxmirko

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-27

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-27

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-25

This release contains core updates, editor improvements, node updates, performance improvements, and bug fixes.

naXa777
prettycode2022
oppai

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-20

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-20

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-18

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-18

This release contains a new CLI tool, editor improvements, node updates, performance improvements, and bug fixes.

For teams working across different environments, deployments often involve multiple people making changes at different times. Without a clear view of those changes, it's easy to miss something important.

Workflow Diff gives you an easy and visual way to review workflow changes before you deploy them between environments.

  • Quickly see what's been added, changed, or deleted, with clear colour highlights.
  • Easily see important settings changes on a workflow.
  • Check changes inside each node and spot connector updates with a side-by-side view of its settings.
  • Get a quick count of all changes to understand the size of a deployment.

Workflow Diff eases the review and approval of changes before deployment, enabling teams to collaborate on workflows without breaking existing automations or disrupting production. It's one step further in integrating DevOps best practices in n8n.

Now available for Enterprise customers using Environments.

ManuLasker
EternalDeiwos
jreyesr

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-15

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-14

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-11

This release contains a backported update.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-11

This release contains bug fixes.

Amsal1
andrewzolotukhin
DMA902
fkowal
Gulianrdgd

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-08

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-07

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-07

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-05

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-04

This release contains performance improvements, core updates, editor improvements, node updates, a new node, and bug fixes.

No more limits on active workflows and a new self-hosted Business Plan

We have rolled out a new pricing model to make it easier for builders of all sizes to adopt and scale automation with n8n.

No more limits on active workflows.

All n8n plans, from Starter to Enterprise, now include unlimited users, workflows, and steps. Our pricing is based on the volume of executions, meaning you can build and test as many workflows as you want, including complex, data-heavy, or long-running automations, without worrying about quotas.

New self-hosted Business Plan for growing teams

Designed for SMBs and mid-sized companies, the Business Plan includes features such as:

  • 6 shared projects
  • SSO, SAML and LDAP
  • Different environments
  • Global variables
  • Version control using Git
  • 30 days of Insights

Please note that this plan only includes support from our community forum. For dedicated support we recommend upgrading to our Enterprise plan.

Enterprise pricing now scales with executions

Enterprise plans no longer use workflow-based pricing; they are now also based on the volume of executions.

What you need to do

To ensure these changes apply to your account, update your n8n instance to the latest version.

Read the blog for full details.

baruchiro
killthekitten
baileympearson
Yingrjimsch
joshualipman123

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-01

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-08-01

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-31

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-28

This release contains core updates, editor improvements, node updates, and bug fixes.

Respond to Chat node

With the **Respond to Chat node**, you can now access Human-in-the-Loop functionality natively in n8n Chat.

Enable conversational experiences where you can ask for clarification, request approval before taking further action, and get back intermediate results — all within a single workflow execution.

This unlocks multi-turn interactions that feel more natural and reduce the number of executions required. It is ideal for building interactive AI use cases like conversational forms, branched workflows based on user replies, and step-by-step approvals.

  • Add a Chat Trigger node and select Using Respond Nodes for the Response mode
  • Place the Respond to Chat node anywhere in your workflow to send a message into the Chat and optionally wait for the user to input a response before continuing execution of the workflow steps.

dana-gill

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-23

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-22

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-21

This release contains core updates, editor improvements, a new node, node updates, and bug fixes.

nunulk
iaptsiauri
KGuillaume-chaps

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-18

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-17

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-17

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-14

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-14

This release contains core updates, editor improvements, new nodes, node improvements, and bug fixes.

Chat streaming

No more waiting for full responses to load when using the n8n chat interface. Streaming now delivers AI-generated text replies word by word so users can read messages as they're being generated. It feels faster, smoother, and more like what people expect from chat experiences.

Streaming is available in public chat views (hosted or embedded) and can be used in custom apps via webhook.

Configure streaming in the Node Details View of these nodes:

  • Chat Trigger node: Options > Add Field > Response Mode > Streaming
  • Webhook node: Respond > Streaming
  • AI Agent node: Add option > Enable streaming
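If you call a streaming-enabled webhook from your own app, the reply arrives as an incremental body rather than one JSON payload. Here's a minimal sketch of reading it with fetch; the webhook URL is a placeholder and the chunk format depends on your workflow, so this just prints raw text as it arrives.

```typescript
// Minimal sketch: read a streamed n8n webhook response chunk by chunk.
// The webhook URL is hypothetical and the chunk encoding depends on your
// workflow, so this simply decodes and prints raw text as it arrives.
async function readStreamedResponse() {
  const res = await fetch("https://n8n.example.com/webhook/chat-stream", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: "Hello!" }),
  });
  if (!res.body) throw new Error("No response body to stream");

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { value, done } = await reader.read();
    if (done) break;
    process.stdout.write(decoder.decode(value, { stream: true }));
  }
}

readStreamedResponse().catch(console.error);
```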

Improved instance user list with more visibility

The instance user list has been updated with a new table layout and additional details to help admins manage access more easily.

  • See total users and filter by name or email
  • View which projects each user has access to
  • See whether a user has enabled 2FA and sort based on that
  • See the last active date for each user

This makes it easier to audit user activity, identify inactive accounts, and understand how access is distributed across your instance.

Webhook HTML responses

Starting with this release, if your workflow sends an HTML response to a webhook, n8n automatically wraps the content in an <iframe>. This is a security mechanism to protect the instance's users.

This has the following implications:

  • HTML renders in a sandboxed iframe instead of directly in the parent document.
  • JavaScript code that attempts to access the top-level window or local storage will fail.
  • Authentication headers aren't available in the sandboxed iframe (for example, basic auth). You need to use an alternative approach, like embedding a short-lived access token within the HTML.
  • Relative URLs (for example, <form action="/">) won't work. Use absolute URLs instead.
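If you need authenticated actions inside the sandboxed HTML, one workaround is to render absolute URLs and embed a short-lived token directly in the markup. Here's a minimal Code node sketch of that idea; the accessToken field, the form URL, and the downstream Respond to Webhook setup are placeholders, not n8n specifics.

```typescript
// Code node sketch (Run Once for All Items): build an HTML response that works
// inside the sandboxed iframe. The accessToken field and the form URL are
// hypothetical placeholders for whatever your workflow actually provides.
const accessToken = $input.first().json.accessToken ?? "";

const html = `
  <form action="https://n8n.example.com/webhook/submit-order" method="POST">
    <input type="hidden" name="token" value="${accessToken}">
    <input type="text" name="comment">
    <button type="submit">Send</button>
  </form>`;

// Hand the HTML to a Respond to Webhook node configured to return it.
return [{ json: { html } }];
```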

Built-in Metrics for AI Evaluations

Using evaluations is a best practice for any AI solution, and a must if reliability and predictability are business-critical. With this release, we've made it easier to set up evaluations in n8n by introducing a set of built-in metrics. These metrics can review AI responses and assign scores based on factors like correctness, helpfulness, and more.

You can run regular evaluations and review scores over time as a way to monitor your AI workflow's performance. You can also compare results across different models to help guide model selection, or run evaluations before and after a prompt change to support data-driven, iterative building.

As with all evaluations in n8n, you'll need a dataset that includes the inputs you want to test. For some evaluations, the dataset must also include expected outputs (ground truth) to compare against. The evaluation workflow runs each input through the portion you're testing to generate a response. The built-in metric scores each response based on the aspect you're measuring, allowing you to compare results before and after changes or track trends over time in the Evaluations tab.

You can still define your own custom metrics, but for common use cases, the built-in options make it much faster to implement.

  1. Set up your evaluation as described here, using an Evaluation node as the trigger and another with the Set Metrics operation.
  2. In the Set Metrics node, choose a metric from the dropdown list.
  3. Define any additional parameters required for your selected metric. In most cases, this includes mapping the dataset columns to the appropriate fields.

📏 Available built-in metrics:

  • Correctness (AI-based): Compares AI workflow-generated responses to expected answers. Another LLM acts as a judge, scoring the responses based on guidance you provide in the prompt.

  • Helpfulness (AI-based): Evaluates how helpful a response is in relation to a user query, using an LLM and prompt-defined scoring criteria.

  • String Similarity: Measures how closely the response matches the expected output by comparing strings. Useful for command generation or when output needs to follow a specific structure.

  • Categorization: Checks whether a response matches an expected label, such as assigning items to the correct category.

  • Tools Used: Verifies whether the AI agent called the tools you specified in your dataset. To enable this, make sure Return Intermediate Steps is turned on in your agent so the evaluation can access the tools it actually called.

  • Registered Community Edition enables analysis of one evaluation in the Evaluations tab, which allows easy comparison of evaluation runs over time. Pro and Enterprise plans allow unlimited evaluations in the Evaluations tab.

Learn more about setting up and customizing evaluations.

AI Agent Tool node

With the AI Agent Tool node we are introducing a simplified pattern for multi-agent orchestration that can be run in a single execution and stay entirely on one canvas. You can now connect multiple AI Agent Tool nodes to a primary AI Agent node, allowing it to supervise and delegate work across other specialized agents.

This setup is especially useful for building complex systems that function like real-world teams, where a lead agent assigns parts of a task to specialists. You can even add multiple layers of agents directing other agents, just like you would have in a real multi-tiered organizational structure. It also helps with prompt management by letting you split long, complex instructions into smaller, focused tasks across multiple agents. While similar orchestration was already possible using sub-workflows, AI Agent Tool nodes are a good choice when you want the interaction to happen within a single execution or prefer to manage and debug everything from a single canvas.

  • Add an AI Agent node to your workflow and click + to create a Tools connection.

  • Search for and select the AI Agent Tool node from the Nodes Panel.

  • Name the node clearly so the primary agent can reference it, then add a short description and prompt.

  • Connect any LLMs, memory, and tools the agent needs to perform its role.

  • Instruct the primary AI Agent on when to use the AI Agent Tool and to pass along relevant context in its prompt.

  • The orchestrating agent does not pass full execution context by default. Any necessary context must be included in the prompt.

AI Agent Tool nodes make it easier to build layered, agent-to-agent workflows without relying on sub-workflows, helping you move faster when building and debugging multi-agent systems.

ksg97031
israelshenkar

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-11

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-11

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-09

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-07

This release contains core updates, editor improvements, new nodes, node updates, and bug fixes.

Enforce 2FA across your instance

Enterprise Instance owners can now enforce two-factor authentication (2FA) for all users in their instance.

Once enabled, any user who hasn't set up 2FA will be redirected to complete the setup before they can continue using n8n. This helps organizations meet internal security policies and ensures stronger protection across the workspace.

This feature is available only on the Enterprise plan.

marty-sullivan
cesars-gh
dudanogueira

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-07-03

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-06-30

This release contains core updates, editor improvements, node updates, and bug fixes.

luka-mimi

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-06-25

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-06-23

This release contains core updates, editor improvements, a new node, node updates, and bug fixes.

Model Selector node

The Model Selector node gives you more control when working with multiple LLMs in your workflows.

Use it to determine which connected model should handle a given input, based on conditions like expressions or global variables. This makes it easier to implement model routing strategies, such as switching models based on performance, task type, cost, or availability.

🛠️ How to: Connect multiple model nodes to the Model Selector node, then configure routing conditions in the node's settings.

  • Rules are evaluated in order. The first matching rule determines which model is used, even if others would also match (see the sketch after this list).
  • As a sub-node, expressions behave differently here: they always resolve to the first item rather than resolving for each item in turn.
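Here's a rough, conceptual sketch of that first-match-wins behaviour in plain TypeScript, not the node's actual configuration format; the input fields, conditions, and model names are made-up examples.

```typescript
// Conceptual sketch of rule-order routing: the first matching rule wins.
// The input fields, conditions, and model names are illustrative only and are
// not the Model Selector node's real configuration format.
type RoutingInput = { taskType: string; tokensEstimate: number };

const rules: Array<{ matches: (i: RoutingInput) => boolean; model: string }> = [
  { matches: (i) => i.taskType === "code", model: "large-code-model" },
  { matches: (i) => i.tokensEstimate > 4000, model: "long-context-model" },
];

function selectModel(input: RoutingInput): string {
  // Rules are checked in order; later rules are ignored once one matches.
  return rules.find((rule) => rule.matches(input))?.model ?? "default-model";
}

console.log(selectModel({ taskType: "summary", tokensEstimate: 1200 })); // "default-model"
```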

The Model Selector node is especially useful in evaluation or production scenarios where routing logic between models needs to adapt based on performance, cost, availability, or dataset-specific needs.

Support for OIDC (OpenID Connect) authentication

You can now use OIDC (OpenID Connect) as an authentication method for Single Sign-On (SSO).

This gives enterprise teams more flexibility to integrate n8n with their existing identity providers using a widely adopted and easy-to-manage standard. OIDC is now available alongside SAML, giving Enterprises the choice to select what best fits their internal needs.

Project admins can now commit to Git within environments

Project admins now have the ability to commit workflow and credential changes directly to Git through the environments feature. This update streamlines the workflow deployment process by giving project-level admins direct control over committing their changes. It also ensures that those who know their workflows best can review and commit updates themselves, without needing to involve instance-level admins.

Learn more about source control environments

aliou

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-06-19

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-06-18

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-06-16

This release contains performance improvements, core updates, editor changes, node updates, and bug fixes.

Automatically name nodes

Default node names now update automatically based on the resource and operation selected, so you'll always know what a node does at a glance.

This adds clarity to your canvas and saves time renaming nodes manually.

Don't worry, automatic naming won't break references. And if you've renamed a node yourself, we'll leave it just the way you wrote it.

Support for RAG extended with built-in templates

Retrieval-Augmented Generation (RAG) can improve AI responses by providing language models access to data sources with up-to-date, domain-specific, or proprietary knowledge. RAG workflows typically rely on vector stores to manage and search this data efficiently.

To get the benefits of using vector stores, such as returning results based on semantic meaning rather than just keyword matches, you need a way to upload your data to the vector store and a way to query it.

In n8n, uploading and querying vector stores happens in two workflows. Now you have an example to get you started and make implementation easier with the RAG starter template.

  • The Load Data workflow shows how to add data with the appropriate embedding model, split it into chunks with the Default Data Loader, and add metadata as desired.
  • The Retriever workflow, for querying data, shows how agents and vector stores work together to deliver highly relevant results and save tokens using the Question and Answer tool.

Enable semantic search and the retrieval of unstructured data for increased quality and relevance of AI responses.

  • Search for RAG starter template in the search bar of the Nodes panel to insert it into your workflow.

Learn more about implementing RAG in n8n here.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-06-12

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-06-11

This release contains performance improvements, core updates, editor changes, node updates, a new node, and bug fixes.

luka-mimi
Alexandero89
khoazero123

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-06-04

This release contains backports.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-06-03

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-06-02

This release contains new features, performance improvements and bug fixes.

Convert to sub-workflow

Large, monolithic workflows can slow things down. They're harder to maintain, tougher to debug, and more difficult to scale. With sub-workflows, you can take a more modular approach, breaking up big workflows into smaller, manageable parts that are easier to reuse, test, understand, and explain.

Until now, creating sub-workflows required copying and pasting nodes manually, setting up a new workflow from scratch, and reconnecting everything by hand. Convert to sub-workflow allows you to simplify this process into a single action, so you can spend more time building and less time restructuring.

  1. Highlight the nodes you want to convert to a sub-workflow. These must:
    • Be fully connected, meaning no missing steps in between them
    • Start from a single starting node
    • End with a single node
  2. Right-click to open the context menu and select Convert to sub-workflow
    • Or use the shortcut: Alt + X
  3. n8n will:
    • Open a new tab containing the selected nodes
    • Preserve all node parameters as-is
    • Replace the selected nodes in the original workflow with a Call My Sub-workflow node

Note: You will need to manually adjust the field types in the Start and Return nodes in the new sub-workflow.

This makes it easier to keep workflows modular, performant, and maintainable.

Learn more about sub-workflows.

This release contains performance improvements and bug fixes.

maatthc

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-06-02

This release failed to build. Please use 1.97.0 instead.

This release contains API updates, core changes, editor improvements, node updates, and bug fixes.

API support for assigning users to projects

You can now use the API to add and update users within projects. This includes:

  • Assigning existing or pending users to a project with a specific role
  • Updating a user's role within a project
  • Removing users from one or more projects

This update now allows you to use the API to add users to both the instance and specific projects, removing the need to manually assign them in the UI.
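As a rough illustration of what this unlocks, the sketch below assigns a user to a project via the public API. Both the endpoint path and the request body shape here are assumptions made for the example, so confirm them against the public API reference before relying on them.

```typescript
// Minimal sketch: assign a user to a project through the public API.
// The /projects/{id}/users path and the body shape are assumptions made for
// illustration; confirm both against your instance's API reference.
const N8N_URL = "https://n8n.example.com";      // hypothetical instance URL
const API_KEY = process.env.N8N_API_KEY ?? "";  // public API key

async function addUserToProject(projectId: string, userId: string, role: string) {
  const res = await fetch(`${N8N_URL}/api/v1/projects/${projectId}/users`, {
    method: "POST",
    headers: {
      "X-N8N-API-KEY": API_KEY,
      "Content-Type": "application/json",
    },
    // The role string ("project:editor") is an example; check the docs for valid roles.
    body: JSON.stringify({ userId, role }),
  });
  if (!res.ok) throw new Error(`Failed to add user: ${res.status}`);
}

addUserToProject("project-id", "user-id", "project:editor").catch(console.error);
```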

Add pending users to project member assignment

You can now add pending users, those who have been invited but haven't completed sign-up, to projects as members.

This change lets you configure a user's project access upfront, without waiting for them to finish setting up their account. It eliminates the back-and-forth of managing access post-sign-up, ensuring users have the right project roles immediately upon joining.

matthabermehl
Stamsy

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-05-29

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-05-27

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-05-27

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-05-26

This release contains core updates, editor improvements, node updates, and bug fixes.

Evaluations for AI workflows

We've added a feature to help you iterate, test, and compare changes to your AI automations before pushing them to production so you can achieve more predictability and make better decisions.

When you're building with AI, a small prompt tweak or model swap might improve results with some inputs, while quietly degrading performance with others. But without a way to evaluate performance across many inputs, you're left guessing whether your AI is actually getting better when you make a change.

By implementing Evaluations for AI workflows in n8n, you can assess how your AI performs across a range of inputs by adding a dedicated path in your workflow for running test cases and applying custom metrics to track results. This helps you build viable proof-of-concepts quickly, iterate more effectively, catch regressions early, and make more confident decisions when your AI is in production.

Evaluation node and tab

The Evaluation node includes several operations that, when used together, enable end-to-end AI evaluation.

  • Run your AI logic against a wide range of test cases in the same execution
  • Capture the outputs of those test cases
  • Score the results using your own metrics or LLM-as-judge logic
  • Isolate a testing path to only include the nodes and logic you want to evaluate

The Evaluations tab enables you to review test results in the n8n UI, perfect for comparing runs, spotting regressions, and viewing performance over time.

🛠 How evaluations work

The evaluation path runs alongside your normal execution logic and only activates when you want—making it ideal for testing and iteration.

Get started by selecting an AI workflow you want to evaluate that includes one or more LLM or Agent nodes.

  1. Add an Evaluation node with the On new Evaluation event operation. This node will act as an additional trigger you'll run only when testing. Configure it to read your dataset from Google Sheets, with each row representing a test input.

💡 Better datasets mean better evaluations. Craft your dataset from a variety of test cases, including edge cases and typical inputs, to get meaningful feedback on how your AI performs. Learn more and access sample datasets here.

  2. Add a second Evaluation node using the Set Outputs operation after the part of the workflow you're testing—typically after an LLM or Agent node. This captures the response and writes it back to your dataset in Google Sheets.

  3. To evaluate output quality, add a third Evaluation node with the Set Metrics operation at a point after you've generated the outputs. You can develop workflow logic, custom calculations, or add an LLM-as-Judge to score the outputs. Map these metrics to your dataset in the node's parameters.

💡 Well-defined metrics = smarter decisions. Scoring your outputs based on similarity, correctness, or categorization can help you track whether changes are actually improving performance. Learn more and get links to example templates here.
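For example, a custom metric can be as simple as a word-overlap score computed in a Code node right before the Set Metrics operation. The sketch below is one way to do that; the expected and actual field names are placeholders for whatever columns your dataset uses.

```typescript
// Code node sketch (Run Once for All Items): a simple word-overlap score between
// the expected answer and the generated output. The `expected` and `actual`
// field names are placeholders for your own dataset columns; map the resulting
// `overlapScore` in the Set Metrics node.
const tokenize = (text) =>
  new Set(String(text ?? "").toLowerCase().split(/\W+/).filter(Boolean));

return $input.all().map((item) => {
  const expected = tokenize(item.json.expected);
  const actual = tokenize(item.json.actual);
  const shared = [...expected].filter((word) => actual.has(word)).length;
  const overlapScore = expected.size > 0 ? shared / expected.size : 0;
  return { json: { ...item.json, overlapScore } };
});
```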

When the Evaluation trigger node is executed, it runs each input in your dataset through your AI logic. This continues until all test cases are processed, a limit is reached, or you manually stop the execution. Once your evaluation path is set up, you can update your prompt, model, or workflow logic—and re-run the Evaluation trigger node to compare results. If you've added metrics, they'll appear in the Evaluations tab.

In some instances, you may want to isolate your testing path to make iteration faster or to avoid executing downstream logic. In this case, you can add an Evaluation node with the Check If Evaluating operation to ensure only the expected nodes run when performing evaluations.

Things to keep in mind

Evaluations for AI Workflows are designed to fit into your development flow, with more enhancements on the way. For now, here are a few things to note:

  • Test datasets are currently managed through Google Sheets. You'll need a Google Sheets credential to run evaluations.
  • Each workflow supports one evaluation at a time. If you'd like to test multiple segments, consider splitting them into sub-workflows for more flexibility.
  • Community Edition supports one single evaluation. Pro and Enterprise plans allow unlimited evaluations.
  • AI Evaluations are not enabled for instances in scaling mode at this time.

You can find details, tips, and common troubleshooting info here.

👉 Learn more about the AI evaluation strategies and practical implementation techniques. Watch now.

Phiph
cesars-gh

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-05-19

This release contains editor improvements, an API update, node updates, new nodes, and bug fixes.

Verified community nodes on Cloud

We've expanded the n8n ecosystem and unlocked a new level of flexibility for all users, including those on n8n Cloud! Now you can access a select set of community nodes and partner integrations without leaving the canvas. This means you can install and automate with a wider range of integrations without leaving your workspace. The power of the community is now built-in.

This update focuses on three major improvements:

  • Cloud availability: Community nodes are no longer just for self-hosted users. A select set of nodes is now available on n8n Cloud.
  • Built-in discovery: You can find and explore these nodes right from the Nodes panel without leaving the editor or searching on npm.
  • Trust and verification: Nodes that appear in the editor have been manually vetted for quality and security. These verified nodes are marked with a checkmark.

We're starting with a selection of around 25 nodes, including some of the most-used community-built packages and partner-supported integrations. For this phase, we focused on nodes that don't include external package dependencies - helping streamline the review process and ensure a smooth rollout.

This is just the start. We plan to expand the library gradually, bringing even more verified nodes into the editor along with the powerful and creative use cases they unlock. In time, our criteria will evolve, opening the door to a wider range of contributions while keeping quality and security in focus.

Learn more about this update and find out which nodes are already installable from the editor in our blog post.

💻 Use a verified node

Make sure you're on n8n version 1.94.0 or later and the instance Owner has enabled verified community nodes. On Cloud, this can be done from the Admin Panel. For self-hosted instances, please refer to the documentation. In both cases, verified nodes are enabled by default.

  • Open the Nodes panel from the editor
  • Search for the Node. Verified nodes are indicated by a shield 🛡️
  • Select the node and click Install

Once an Owner installs a node, everyone on the instance can start using it—just drag, drop, and connect like any other node in your workflow.

🛠️ Build a node and get it verified

Want your node to be verified and discoverable from the editor? Here's how to get involved:

  1. Review the community node verification guidelines.
  2. If you're building something new, follow the recommendations for creating nodes.
  3. Check your design against the UX guidelines.
  4. Submit your node to npm.
  5. Request verification by filling out this form.

Already built a node? Raise your hand!

If you've already published a community node and want it considered for verification, make sure it meets the requirements noted above, then let us know by submitting the interest form. We're actively curating the next batch and would love to include your work.

Extended logs view

When workflows get complex, debugging can get... clicky. That's where an extended Logs View comes in. Now you can get a clearer path to trace executions, troubleshoot issues, and understand the behavior of a complete workflow — without bouncing between node detail views.

This update brings a unified, always-accessible panel to the bottom of the canvas, showing you each step of the execution as it happens. Whether you're working with loops, sub-workflows, or AI agents, you'll see a structured view of everything that ran, in the order it ran—with input, output, and status info right where you need it.

You can jump into node details when you want to dig deeper, or follow a single item through every step it touched. Real-time highlighting shows you which nodes are currently running or have failed, and you'll see total execution time for any workflow—plus token usage for AI workflows to help monitor performance. And if you're debugging across multiple screens? Just pop the logs out and drag them wherever you'd like.

  • Adds a Logs view to the bottom of the canvas that can be opened or collapsed. (Chat also appears here if your workflow uses it).
  • Displays a hierarchical list of nodes in the order they were executed—including expanded views of sub-workflows.
  • Allows you to click a node in the hierarchy to preview inputs and outputs directly, or jump into the full Node Details view with a link.
  • Provides the ability to toggle input and output data on and off.
  • Highlights each node live as it runs, showing when it starts, completes, or fails.
  • Includes execution history view to explore past execution data in a similar way.
  • Shows roll-up stats like total execution time and total AI tokens used (for AI-enabled workflows).
  • Includes a “pop out” button to open the logs as a floating window—perfect for dragging to another screen while debugging.

To access the expanded logs view, click on the Logs bar at the bottom of the canvas. The view also opens when you open the chat window at the bottom of the page.

Stamsy
feelgood-interface

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-05-12

This release contains core updates, editor improvements, new nodes, node updates, and bug fixes.

Faster ways to open sub-workflows

We've added several new ways to navigate your multi-workflow automations faster.

From any workflow with a sub-workflow node:

🖱️ Right-click on a sub-workflow node and select Open sub-workflow from the context menu

⌨️ Keyboard shortcuts

  • Windows: CTRL + SHIFT + O or CTRL + Double Click
  • Mac: CMD + SHIFT + O or CMD + Double Click

These options will bring your sub-workflow up in a new tab.

Archive workflows

If you've ever accidentally removed a workflow, you'll appreciate the new archiving feature. Instead of permanently deleting workflows with the Remove action, workflows are now archived by default. This allows you to recover them if needed.

  • Archive a workflow - Select Archive from the Editor UI menu. It has replaced the Remove action.

  • Find archived workflows - Archived workflows are hidden by default. To find your archived workflows, select the option for Show archived workflows in the workflow filter menu.

  • Permanently delete a workflow - Once a workflow is archived, you can Delete it from the options menu.

  • Recover a workflow - Select Unarchive from the options menu.

  • Archiving a workflow requires the same permissions as removal did previously.

  • You cannot select archived workflows as sub-workflows to execute

  • Active workflows are deactivated when they are archived

  • Archived workflows cannot be edited

LeaDevelop
ayhandoslu
valentina98

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-05-08

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-05-08

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-05-06

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-05-05

This release contains core updates, editor improvements, node updates, and bug fixes.

Partial Execution for AI Tools

We've made it easier to build and iterate on AI agents in n8n. You can now run and test specific tools without having to execute the entire agent workflow.

Partial execution is especially useful when refining or troubleshooting parts of your agent logic. It allows you to test changes incrementally, without triggering full agent runs, reducing unnecessary AI calls, token usage, and downstream activity. This makes iteration faster, more cost-efficient, and more precise when working with complex or multi-step AI workflows.

Partial execution for AI tools is available now for all tools - making it even easier to build, test, and fine-tune AI agents in n8n.

To use this feature you can either:

  • Click the Play button on the tool you want to execute directly from the canvas view.
  • Open the tool's Node Details View and select "Execute Step" to run it from there.

If you have previously run the workflow, the input and output will be prefilled with data from the last execution. A pop-up form will open where you can manually fill in the parameters before executing your test.

Extended logs view

When workflows get complex, debugging can get... clicky. That's where an extended Logs View comes in. Now you can get a clearer path to trace executions, troubleshoot issues, and understand the behavior of a complete workflow — without bouncing between node detail views.

This update brings a unified, always-accessible panel to the bottom of the canvas, showing you each step of the execution as it happens. Whether you're working with loops, sub-workflows, or AI agents, you'll see a structured view of everything that ran, in the order it ran—with input, output, and status info right where you need it.

You can jump into node details when you want to dig deeper, or follow a single item through every step it touched. Real-time highlighting shows you which nodes are currently running or have failed, and you'll see total execution time for any workflow—plus token usage for AI workflows to help monitor performance. And if you're debugging across multiple screens? Just pop the logs out and drag them wherever you'd like.

  • Adds a Logs view to the bottom of the canvas that can be opened or collapsed. (Chat also appears here if your workflow uses it).
  • Displays a hierarchical list of nodes in the order they were executed—including expanded views of sub-workflows.
  • Allows you to click a node in the hierarchy to preview inputs and outputs directly, or jump into the full Node Details view with a link.
  • Provides the ability to toggle input and output data on and off.
  • Highlights each node live as it runs, showing when it starts, completes, or fails.
  • Includes execution history view to explore past execution data in a similar way.
  • Shows roll-up stats like total execution time and total AI tokens used (for AI-enabled workflows).
  • Includes a “pop out” button to open the logs as a floating window—perfect for dragging to another screen while debugging.

To access the expanded logs view, click on the Logs bar at the bottom of the canvas. The view also opens when you open the chat window at the bottom of the page.

Insights enhancements for Enterprise

Two weeks after the launch of Insights, were releasing some enhancements designed for enterprise users.

  • Expanded time ranges. You can now filter insights over a variety of time periods, from the last 24 hours up to 1 year. Pro users are limited to 7-day and 14-day views.
  • Hourly granularity. Drill down into the last 24 hours of production executions with hourly granularity, making it easier to analyze workflows and quickly identify issues.

These updates provide deeper visibility into workflow history, helping you uncover trends over longer periods and detect problems sooner with more precise reporting.

Stamsy

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-05-05

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-05-05

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-05-01

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-04-28

This release contains core updates, editor improvements, node updates, and bug fixes.

Breadcrumb view from the canvas

We've added breadcrumb navigation directly on the canvas, so you can quickly navigate to any of a workflow's parent folders.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-04-25

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-04-22

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-04-22

This release contains core updates, editor updates, node updates, performance improvements, and bug fixes.

Extended HTTP Request tool functionality

We've brought the full power of the HTTP Request node to the HTTP Request tool in AI workflows. That means your AI Agents now have access to all the advanced configuration options—like Pagination, Batching, Timeout, Redirects, Proxy support, and even cURL import.

This update also includes support for the $fromAI function to dynamically generate the right parameters based on the context of your prompt — making API calls smarter, faster, and more flexible than ever.

  • Open your AI Agent node in the canvas.
  • Click the + icon to add a new tool connection.
  • In the Tools panel, select HTTP Request Tool.
  • Configure it just like you would a regular HTTP Request node — including advanced options

👉 Learn more about configuring the HTTP Request tool.
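As a small illustration, a tool parameter can embed $fromAI so the agent fills in the value at run time. Only the $fromAI expression itself is the documented mechanism here; the API URL and the 'city' key are made-up placeholders.

```typescript
// Hypothetical value for the HTTP Request Tool's URL parameter.
// The agent supplies `city` from the conversation at run time;
// api.example.com and the parameter names are placeholders.
const urlParameter =
  "https://api.example.com/weather?city={{ $fromAI('city', 'City the user asked about', 'string') }}";

console.log(urlParameter);
```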

Users on the Enterprise plan can now create API keys with specific scopes to control exactly what each key can access.

Previously, API keys had full read/write access across all endpoints. While sometimes necessary, this level of access can be excessive and too powerful for most use cases. Scoped API keys allow you to limit access to only the resources and actions a service or user actually needs.

When creating a new API key, you can now:

  • Select whether the key has read, write, or both types of access.
  • Specify which resources the key can interact with.

Supported scopes include:

  • Variables — list, create, delete
  • Security audit — generate reports
  • Projects — list, create, update, delete
  • Executions — list, read, delete
  • Credentials — list, create, update, delete, move
  • Workflows — list, create, update, delete, move, add/remove tags

Scoped API keys give you more control and security. You can limit access to only what's needed, making it safer to work with third parties and easier to manage internal API usage.

Drag and Drop in Folders

Folders just got friendlier. With this release, you can now drag and drop workflows and folders — making it even easier to keep things tidy.

Need to reorganize? Just select a workflow or folder and drag it into another folder or breadcrumb location. It's a small change that makes a big difference when managing a growing collection of workflows.

📁 Folders are available to all registered users—jump in and get your workspace in order!

Zordrak

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-04-16

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-04-15

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-04-14

This release contains API updates, core updates, editor updates, a new node, node updates, and bug fixes.

We're rolling out Insights, a new dashboard to monitor how your workflows are performing over time. It's designed to give admins (and owners) better visibility of their most important workflow metrics and help troubleshoot potential issues and improvements.

In this first release, we're introducing a summary banner, the insights dashboard, and time saved per execution.

1. Summary banner

A new banner on the overview page that gives instance admins and owners a bird's-eye view of key metrics over the last 7 days.

Insights summary banner

  • Total production executions
  • Total failed executions
  • Failure rate
  • Average runtime of all workflows
  • Estimated time saved

This overview is designed to help you stay on top of workflow activity at a glance. It is available for all plans and editions.

2. Insights dashboard

On Pro and Enterprise plans, a new dashboard offers a deeper view into workflow performance and activity.

The dashboard includes:

  • Total production executions over time, including a comparison of successful and failed executions
  • Per-workflow breakdowns of key metrics
  • Comparisons with previous periods to help spot changes in usage or behavior
  • Runtime average and failure rate over time

3. Time saved per execution

Within workflow settings, you can now assign a “time saved per execution” value to any workflow. This makes it possible to track the impact of your workflows and make it easier to share this visually with other teams and stakeholders.

This is just the beginning for Insights: the next phase will introduce more advanced filtering and comparisons, custom date ranges, and additional monitoring capabilities.

  • We added a credential check for the Salesforce node
  • We added SearXNG as a tool for AI agents

You can now search within subfolders, making it easier to find workflows across all folder levels. Just type in the search bar and go.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-04-10

This release contains new features, new nodes, performance improvements, and bug fixes.

Model Context Protocol (MCP) nodes

MCP aims to standardise how LLMs like Claude, ChatGPT, or Cursor can interact with tools or integrate data for their agents. Many providers - both established and new - are adopting MCP as a standard way to build agentic systems. It is an easy way to either expose your own app as a server, making capabilities available to a model as tools, or act as a client that can call on tools outside of your own system.

While it's still early in the development process, we want to give you access to our new MCP nodes. This will help us understand your requirements better and will also let us converge on a great general solution quicker.

We are adding two new nodes:

The MCP Server Trigger turns n8n into an MCP server, providing n8n tools to models running outside of n8n. You can run multiple MCP servers from your n8n instance. The MCP Client Tool connects LLMs - and other intelligent agents - to any MCP-enabled service through a single interface.

Max from our DevRel team created an official walkthrough for you to get started:

Studio Update #04

MCP Server Trigger

The MCP Server Trigger turns n8n into an MCP server, providing n8n tools to models running outside of n8n. The node acts as an entry point into n8n for MCP clients. It operates by exposing a URL that MCP clients can interact with to access n8n tools. This means your n8n workflows and integrations are now available to models run elsewhere. Pretty neat.

Explore the MCP Server Trigger docs

The MCP Client Tool node is an MCP client, allowing you to use the tools exposed by an external MCP server. You can connect the MCP Client Tool node to your models to call external tools with n8n agents. In this regard it is similar to using an n8n tool with your AI agent. One advantage is that the MCP Client Tool can access multiple tools on the MCP server at once, keeping your canvas cleaner and easier to understand.

Explore the MCP Client Tool docs

  • Added a node for Azure Cosmos DB
  • Added a node for Milvus Vector Store
  • Updated the Email Trigger (IMAP) node

adina-hub
umanamente

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-04-09

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-04-09

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-04-08

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-04-07

This release contains new nodes, node updates, API updates, core updates, editor updates, and bug fixes.

cesars-gh
Stamsy
Pash10g

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-31

This release contains API updates, core updates, editor improvements, node updates, and bug fixes.

Aijeyomah
ownerer
ulevitsky

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-27

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-27

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-26

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-26

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-25

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-25

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-24

This release contains a new node, a new credential, core updates, editor updates, node updates, and bug fixes.

What can we say about folders? Well, they're super handy for categorizing just about everything and they're finally available for your n8n workflows. Tidy up your workspace with unlimited folders and nested folders. Search for workflows within folders. It's one of the ways we're making it easier to organize your n8n instances more effectively.

Create and manage folders within your personal space or within projects. You can also create workflows from within a folder. You may need to restart your instance in order to activate folders.

It's a folder alright

Folders are available for all registered users so get started with decluttering your workspace now and look for more features (like drag and drop) to organize your instances soon.

Enhancements to Form Trigger Node

Recent updates to the Form Trigger node have made it a more powerful tool for building business solutions. These enhancements provide more flexibility and customization, enabling teams to create visually engaging and highly functional workflows with forms.

  • HTML customization: Add custom HTML to forms, including embedded images and videos, for richer user experiences.
  • Custom CSS support: Apply custom styles to user-facing components to align forms with your brand's look and feel. Adjust fonts, colors, and spacing for a seamless visual identity.
  • Form previews: Your form's description and title will pull into previews of your form when sharing on social media or messaging apps, providing a more polished look.
  • Hidden fields: Use query parameters to add hidden fields, allowing you to pass data—such as a referral source—without exposing it to the user.
  • New responses options: Respond to user submissions in multiple ways including text, HTML, or a downloadable file (binary format). This enables forms to display rich webpages or deliver digital assets such as dynamically generated invoices or personalized certificates.

Form with custom CSS applied

These improvements elevate the Form Trigger node beyond a simple workflow trigger, transforming it into a powerful tool for addressing use cases from data collection and order processing to custom content creation.

Fank

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-18

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-17

This release contains a new node, node updates, editor updates, and bug fixes.

Pash10g

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-14

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-14

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-13

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-12

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-12

This release contains bug fixes and an editor update.

Schema Preview lets you view and work with a node's expected output without executing it or adding credentials, keeping you in flow while building.

  • See expected node outputs instantly. View schemas for 100+ nodes to help you design workflows efficiently without extra steps.

  • Define workflow logic first, take care of credentials later. Build your end-to-end workflow without getting sidetracked by credential setup.

  • Avoid unwanted executions when building. Prevent unnecessary API calls, unwanted data changes, or potential third-party service costs by viewing outputs without executing nodes.

  • Add a node with Schema Preview support to your workflow.

  • Open the next node in the sequence - Schema Preview data appears in the Node Editor where you would typically find it in the Schema View.

  • Use Schema Preview fields just like other schema data - drag and drop them into parameters and settings as needed.

Don't forget to add the required credentials before putting your workflow into production.

pemontto
Haru922

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-12

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-04

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-03

This release contains core updates, editor updates, new nodes, node updates, new credentials, credential updates, and bug fixes.

Tidy up instantly aligns nodes, centers stickies, untangles connections, and brings structure to your workflows. Whether you're preparing to share a workflow or just want to improve readability, this feature saves you time and makes your logic easier to follow. Clean, well-organized workflows aren't just nicer to look at—they're also quicker to understand.

Open the workflow you want to tidy, then choose one of these options:

  • Click the Tidy up button in the bottom-left corner of the canvas (it looks like a broom 🧹)
  • Press Shift + Alt + T on your keyboard
  • Right-click anywhere on the canvas and select Tidy up workflow

Want to tidy up just part of your workflow? Select the specific nodes you want to clean up first - Tidy up will only adjust those, along with any stickies behind them.

Multiple API keys

n8n now supports multiple API keys, allowing users to generate and manage separate keys for different workflows or integrations. This improves security by enabling easier key rotation and isolation of credentials. Future updates will introduce more granular controls.

Rostammahabadi
Lanhild
matthiez
feelgood-interface
adina-hub

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-03

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-03-03

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-28

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-28

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-27

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-27

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-24

This release contains bug fixes, a core update, editor improvements, and a node update.

Improved partial executions

The new execution engine for partial executions ensures that testing parts of a workflow in the builder closely mirrors production behaviour. This makes iterating with updated run-data faster and more reliable, particularly for complex workflows.

Previously, users would test parts of a workflow in the builder and see results that didn't consistently reflect production behaviour, leading to unexpected results during development.

This update aligns workflow execution in the builder with production behavior.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-21

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-21

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-21

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-21

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-20

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-20

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-17

This release contains bug fixes and an editor improvement.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-17

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-17

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-15

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-15

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-15

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-15

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-12

This release contains new features, node updates, and bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-06

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-05

This release contains new features, node updates, and bug fixes.

mocanew
Timtendo12

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-04

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-04

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-03

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-02-03

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-01-29

This release contains new features, editor updates, new nodes, new credentials, node updates, and bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-01-23

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-01-22

This release contains new features, editor updates, new credentials, node improvements, and bug fixes.

Stamsy
GKdeVries

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-01-17

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-01-17

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-01-17

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-01-17

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-01-15

This release contains bug fixes and editor updates.

Improved consistency across environments

We added UX improvements and automatic changes that result in better consistency between your staging and production instances.

Previously, users faced issues like:

  • Lack of visibility into required credential updates when pulling changes
  • Incomplete synchronization, where changes — such as deletions — weren't always applied across environments
  • Confusing commit process, making it unclear what was being pushed or pulled

We addressed these by:

  • Clearly indicating required credential updates when pulling changes
  • Ensuring deletions and other modifications sync correctly across environments
  • Improving commit selection to provide better visibility into what's being pushed

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-01-09

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2025-01-08

This release contains new features, a new node, node updates, performance improvements and bug fixes.

Overhauled Code node editing experience

We added a ton of new helpers to the Code node, making editing your code much faster and more comfortable. You get:

  • TypeScript autocomplete
  • TypeScript linting
  • TypeScript hover tips
  • Search and replace
  • New keyboard shortcuts based on the VSCode keymap
  • Auto-formatting using prettier (Alt+Shift+F)
  • Remember folded regions and history after refresh
  • Multi cursor
  • Type function in the Code node using JSDoc types
  • Drag and drop for all Code node modes
  • Indentation markers

We built this on a web worker architecture, so you won't suffer performance degradation while typing.

To get the full picture, check out our Studio update with Max and Elias, where they discuss and demo the new editing experience. 👇

Studio Update #04

New node: Microsoft Entra ID

Microsoft Entra ID (formerly known as Microsoft Azure Active Directory or Azure AD) is used for cloud-based identity and access management. The new node supports a wide range of Microsoft Entra ID features, which includes creating, getting, updating, and deleting users and groups, as well as adding users to and removing them from groups.

  • AI Agent: Vector stores can now be directly used as tools for the agent
  • Code: Tons of new speed and convenience features, see above for details
  • Google Vertex Chat: Added option to specify the GCP region for the Google API credentials
  • HighLevel: Added support for calendar items

We also added a custom projects icon selector on top of the available emojis. Pretty!

igatanasov
Stamsy
feelgood-interface

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-12-19

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-12-19

This release contains node updates, performance improvements, and bug fixes.

  • AI Agent: Updated descriptions for Chat Trigger options
  • Facebook Graph API: Updated for API v21.0
  • Gmail: Added two new options for the Send and wait operation, free text and custom form
  • Linear Trigger: Added support for admin scope
  • MailerLite: Now supports the new API
  • Slack: Added two new options for the Send and wait operation, free text and custom form

We also added credential support for SolarWinds IPAM and SolarWinds Observability.

Last, but not least, we improved the schema view performance in the node details view by 90% and added drag and drop re-ordering to parameters. This comes in very handy in the If or Edit Fields nodes.

CodeShakingSheep
mickaelandrieu
Stamsy
pbdco

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-12-12

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-12-12

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-12-11

This release contains node updates, usability improvements, and bug fixes.

  • AI Transform: On a maximum context length error, the node now retries with a reduced payload size
  • Redis: Added support for continue on fail

Improved commit modal

We added filters and text search to the commit modal when working with Environments. This will make committing easier as we provide more information and better visibility. Environments are available on the Enterprise plan.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-12-10

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-12-10

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-12-06

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-12-05

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-12-04

This release contains node updates, performance improvements, and bug fixes.

Task runners for the Code node in public beta

We're introducing a significant performance upgrade to the Code node with our new Task runner system. This enhancement moves JavaScript code execution to a separate process, improving your workflow execution speed while adding better isolation.

Task runners overview

Our benchmarks show up to 6x improvement in workflow executions using Code nodes - from approximately 6 to 35 executions per second. All these improvements happen under the hood, keeping your Code node experience exactly the same.

The Task runner comes in two modes:

  • Internal mode (default): Perfect for getting started, automatically managing task runners as child processes
  • External mode: For advanced hosting scenarios requiring maximum isolation and security

Currently, this feature is opt-in and can be enabled using environment variables. Once stable, it will become the default execution method for Code nodes.
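A minimal sketch of opting in, assuming the N8N_RUNNERS_* variables described in the task runners documentation (verify against your n8n version):

```bash
# Sketch only: enable the task runner beta for the Code node.
export N8N_RUNNERS_ENABLED=true    # opt in to task runners
export N8N_RUNNERS_MODE=internal   # default mode; use "external" for a separate runner process
```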

To start using Task runners today, check out the docs.

  • AI Transform node: We improved the prompt for code generation to transform data
  • Code node: We added a warning if pairedItem is absent or could not be auto mapped

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-12-04

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-11-29

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-11-27

This release contains node updates, performance improvements and bug fixes.

New canvas in beta

The new canvas is now the default setting for all users. It should bring significant performance improvements and adds a handy minimap. As it's still in beta, you can revert to the previous version from the three-dot menu.

We're looking forward to your feedback. Should you encounter a bug, you will find a handy button to create an issue at the bottom of the new canvas as well.

  • We added credential support for Zabbix to the HTTP request node
  • We added new OAuth2 credentials for Microsoft SharePoint
  • The Slack node now uses markdown for the approval message when using the Send and Wait for Approval operation

feelgood-interface
adina-hub

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-11-26

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-11-26

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-11-25

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-11-20

This release contains a new feature, node improvements and bug fixes.

Sub-workflow debugging

We made it much easier to debug sub-workflows by improving their accessibility from the parent workflow.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-11-13

This release contains node updates, performance improvements and many bug fixes.

New AI agent canvas chat

We revamped the chat experience for AI agents on the canvas. A neatly organized view instead of a modal hiding the nodes. You can now see the canvas, chat and logs at the same time when testing your workflow.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-11-07

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-11-06

This release contains node updates and bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-31

This release contains performance improvements, a node update and bug fixes.

We made updates to how projects and workflow ownership are displayed, making them easier to understand and navigate.

We further improved the performance logic of partial executions, leading to a smoother and more enjoyable building experience.

New n8n canvas alpha

We have enabled the alpha version of our new canvas. The canvas is the drawing board of the n8n editor, and we're working on a full rewrite. Your feedback and testing will help us improve it. Read all about it on our community forum.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-28

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-25

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-25

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-24

Breaking change

What changed? Queue polling via the environment variable QUEUE_RECOVERY_INTERVAL has been removed.

When is action necessary? If you have set QUEUE_RECOVERY_INTERVAL, you can remove it as it no longer has any effect.

This release contains new features, new nodes, node enhancements, and bug fixes.

New node: n8n Form

Use the n8n Form node to create user-facing forms with multiple pages. You can add other nodes with custom logic between pages to process user input. Start the workflow with an n8n Form Trigger.

A multi-page form with branching

Additionally you can:

  • Set default selections with query parameters

  • Define the form with a JSON array of objects

  • Show a completion screen and redirect to another URL

  • Google Business Profile and Google Business Profile Trigger: Use these to integrate Google Business Profile reviews and posts with your workflows

  • AI Agent: Removed the requirement to add at least one tool

  • GitHub: Added workflows as a resource operation

  • Structured Output Parser: Added more user-friendly error messages

For additional security, we improved how we handle multi-factor authentication, hardened config file permissions and introduced JWT for the public API.

For better performance, we improved how partial executions are handled in loops.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-24

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-21

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-16

This release contains a new node, node enhancements, performance improvements and bug fixes.

Enhanced node: Remove Duplicates

The Remove Duplicates node got a major makeover with the addition of two new operations:

  • Remove Items Processed in Previous Executions: Compare items in the current input to items from previous executions and remove duplicates
  • Clear Deduplication History: Wipe the memory of items from previous executions.

This makes it easier to only process new items from any data source. For example, you can now more easily poll a Google Sheet for new entries by ID, or remove duplicate orders from the same customer by comparing their order date. The great thing is, you can now do this within and across workflow runs.

The new node for Gong lets you get users and calls so you can process them further in n8n. Very useful for sales-related workflows.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-15

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-15

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-15

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-11

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-11

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-11

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-11

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-09

Breaking change

  • The worker server used to bind to IPv6 by default. It now binds to IPv4 by default.
  • The worker server's /healthz used to report healthy status based on database and Redis checks. It now reports healthy status regardless of database and Redis status, and the database and Redis checks are part of /healthz/readiness.

When is action necessary?

  • If you experience a port conflict error when starting a worker server using its default port, set a different port for the worker server with QUEUE_HEALTH_CHECK_PORT.
  • If you are relying on database and Redis checks for worker health status, switch to checking /healthz/readiness instead of /healthz.
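A hedged example of adapting a worker's health checks after this change (host and port are illustrative):

```bash
# Illustrative worker health-check setup after this change.
export QUEUE_HEALTH_CHECK_ACTIVE=true        # assumption: health check endpoints enabled explicitly
export QUEUE_HEALTH_CHECK_PORT=5680          # pick a free port to avoid conflicts
curl http://127.0.0.1:5680/healthz           # liveness: healthy while the worker process runs
curl http://127.0.0.1:5680/healthz/readiness # readiness: includes the database and Redis checks
```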

This release contains new features, node enhancements and bug fixes.

  • OpenAI: Added the option to either use the default memory connector to provide memory to the assistant or specify a thread ID
  • Gmail and Slack: Added custom approval operations to have a human in the loop of a workflow

We have also optimized the worker health checks (see breaking change above).

Each credential now has a separate URL you can link to. This makes sharing much easier.

For full release details, refer to Releases on GitHub.

Pemontto

View the commits for this version.
Release date: 2024-10-08

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-07

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-10-02

This release contains new features, node enhancements and bug fixes.

We skipped 1.62.0 and went straight to 1.62.1 with an additional fix.

Additional nodes as tools

We have made additional nodes usable with the Tools AI Agent node.

Additionally, we have added a $fromAI() placeholder function to use with tools, allowing you to dynamically pass information from the models to the connected tools. This function works similarly to placeholders used elsewhere in n8n.

Both of these new features enable you to build even more powerful AI agents by drawing directly from the apps your business uses. This makes integrating LLMs into your business processes even easier than before.

Drag and drop insertion at the cursor position from the schema view is now also enabled for code, SQL, and HTML fields in nodes.

Customers with an enterprise license can now rate, tag, and highlight execution data in the executions view. To use highlighting, add an Execution Data node (or Code node) to the workflow to set custom execution data.

For full release details, refer to Releases on GitHub.

Benjamin Roedell
CodeShakingSheep
manuelbcd
Miguel Prytoluk

View the commits for this version.
Release date: 2024-09-25

This release contains new features, node enhancements and bug fixes.

  • Brandfetch: Updated to use the new API
  • Slack: Made adding or removing the workflow link to a message easier

Big datasets now render faster thanks to virtual scrolling, and execution annotations are harder to delete.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-09-20

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-09-20

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-09-18

This release contains new features, node enhancements and bug fixes.

Queue metrics for workers

You can now expose and consume metrics from your workers. The worker instances have the same metrics available as the main instance(s) and can be configured with environment variables.

You can now customize the maximum file size when uploading files within forms to webhooks. The environment variable to set for this is N8N_FORMDATA_FILE_SIZE_MAX. The default setting is 200 MiB.
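For example, assuming the value is a number of mebibytes (matching the 200 MiB default):

```bash
# Sketch: allow form and webhook file uploads of up to 500 MiB.
export N8N_FORMDATA_FILE_SIZE_MAX=500
```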

  • Invoice Ninja: Added actions for bank transactions
  • OpenAI: Added O1 models to the model select

For full release details, refer to Releases on GitHub.

CodeShakingSheep

View the commits for this version.
Release date: 2024-09-18

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-09-17

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-09-16

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-09-12

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-09-11

If you are using the Chat Trigger in "Embedded Chat" mode with authentication turned on, you could see errors connecting to n8n if the authentication on the sending/embedded side is misconfigured.

This release contains bug fixes and feature enhancements.

For full release details, refer to Releases on GitHub.

oscarpedrero

View the commits for this version.
Release date: 2024-09-06

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-09-05

This release contains new features, bug fixes and feature enhancements.

New node: PGVector Vector Store

This release adds the PGVector Vector Store node. Use this node to interact with the PGVector tables in your PostgreSQL database. You can insert, get, and retrieve documents from a vector table to provide them to a retriever connected to a chain.

See active collaborators on workflows

We added collaborator avatars back to the workflow canvas. You will see other users who are active on the workflow, preventing you from overwriting each other's work.

Collaboration avatars

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-28

This release contains new features and bug fixes.

Improved execution queue handling

We are exposing new execution queue metrics to give users more visibility of the queue length. This helps to inform decisions on horizontal scaling, based on queue status. We have also made querying executions faster.

New credentials for the HTTP Request node

We added credential support for Datadog, Dynatrace, Elastic Security, Filescan, Iris, and Malcore to the HTTP Request node making it easier to use existing credentials.

We also made it easier to select workflows as tools when working with AI agents by implementing a new workflow selector parameter type.

For full release details, refer to Releases on GitHub.

Bram Kn

View the commits for this version.
Release date: 2024-08-26

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-23

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-21

This release contains node updates, security and bug fixes.

CodeShakingSheep
Oz Weiss

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-16

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-16

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-15

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-15

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-15

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-14

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-14

Breaking change

The N8N_BLOCK_FILE_ACCESS_TO_N8N_FILES environment variable now also blocks access to n8n's static cache directory at ~/.cache/n8n/public.

If you are writing to or reading from a file at n8n's static cache directory via a node, e.g. Read/Write Files from Disk, please update your node to use a different path.
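A sketch of what this means in practice (the example path is illustrative):

```bash
# With file blocking enabled, nodes can no longer read from or write to
# n8n's static cache directory at ~/.cache/n8n/public.
export N8N_BLOCK_FILE_ACCESS_TO_N8N_FILES=true
# In a Read/Write Files from Disk node, point at a path outside the blocked
# directories instead, for example /home/node/exports/report.csv
```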

This release contains a new feature, a new node, a node update and bug fixes.

Override the npm registry

This release adds the option to override the npm registry for installing community packages. This is a paid feature.

We now also prevent npm from downloading community packages from a compromised npm registry by explicitly using --registry in all npm install commands.

New node: AI Transform

This release adds the AI Transform node. Use the AI Transform node to generate code snippets based on your prompt. The AI is context-aware, understanding the workflow's nodes and their data types. The node is only available on Cloud plans.

This release adds the Okta node. Use the Okta node to automate work in Okta and integrate Okta with other applications. n8n has built-in support for a wide range of Okta features, which includes creating, updating, and deleting users.

This release also adds the new schema view for the expression editor modal.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-13

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-08

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-07

This release contains new features, node enhancements, bug fixes and updates to our API.

Our public REST API now supports additional operations:

  • Create, delete, and edit roles for users
  • Create, read, update and delete projects

Find the details in the API reference.

CodeShakingSheep
Javier Ferrer González
Mickaël Andrieu
Oz Weiss
Pemontto

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-06

This release contains a bug fix.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-08-02

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-07-31

This release contains new features, new nodes, node enhancements, bug fixes and updates to our API.

Added Google Cloud Platform Secrets Manager support

This release adds Google Cloud Platform Secrets Manager to the list of external secret stores. We already support AWS secrets, Azure Key Vault, Infisical and HashiCorp Vault. External secret stores are available under an enterprise license.

New node: Information Extractor

This release adds the Information Extractor node. The node is specifically tailored for information extraction tasks. It uses Structured Output Parser under the hood, but provides a simpler way to extract information from text in a structured JSON form.

New node: Sentiment Analysis

This release adds the Sentiment Analysis node. The node leverages LLMs to analyze and categorize the sentiment of input text. Users can easily integrate this node into their workflows to perform sentiment analysis on text data. The node is flexible enough to handle various use cases, from basic positive/negative classification to more nuanced sentiment categories.

Our public REST API now supports additional operations:

  • Create, read, and delete for variables
  • Filtering workflows by project
  • Transferring workflows

Find the details in the API reference.

feelgood-interface
Oz Weiss

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-07-31

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-07-26

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-07-26

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-07-25

Breaking change

Prometheus metrics enabled via N8N_METRICS_INCLUDE_DEFAULT_METRICS and N8N_METRICS_INCLUDE_API_ENDPOINTS were fixed to include the default n8n_ prefix.

If you are using Prometheus metrics from these categories and are using a non-empty prefix, please update those metrics to match their new prefixed names.
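One way to check which names your dashboards and alerts should now use (port and endpoint are the defaults and may differ in your setup):

```bash
# Sketch: list the now-prefixed metric names exposed by n8n.
export N8N_METRICS=true
export N8N_METRICS_INCLUDE_DEFAULT_METRICS=true
export N8N_METRICS_INCLUDE_API_ENDPOINTS=true
# Restart n8n so the variables take effect, then:
curl -s http://localhost:5678/metrics | grep '^n8n_'
```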

This release contains new features, node enhancements and bug fixes.

Added Azure Key Vault support

This release adds Azure Key Vault to the list of external secret stores. We already support AWS secrets, Infisical and HashiCorp Vault and are working on Google Secrets Manager. External secret stores are available under an enterprise license.

View the commits for this version.
Release date: 2024-07-23

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-07-23

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-07-18

This release contains new nodes, node enhancements and bug fixes.

New node: Text Classifier

This release adds the Text Classifier node.

New node: Postgres Chat Memory

This release adds the Postgres Chat Memory node.

New node: Google Vertex Chat Model

This release adds the Google Vertex Chat Model node.

For full release details, refer to Releases on GitHub.

  • Enhanced nodes: Asana

View the commits for this version.
Release date: 2024-07-16

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-07-10

This release contains node enhancements and bug fixes.

  • Enhanced nodes: Chat Trigger, Google Cloud Firestore, Qdrant Vector Store, Splunk, Telegram
  • Deprecated node: Orbit (product shut down)

Beta Feature Removal

The Ask AI beta feature for the HTTP Request node has been removed from this version.

Stanley Yoshinori Takamatsu
CodeShakingSheep
jeanpaul
adrian-martinez-onestic
Malki Davis

View the commits for this version.
Release date: 2024-07-03

This release contains a new node, node enhancements, and bug fixes.

  • New node added: Vector Store Tool for the AI Agent
  • Enhanced nodes: Zep Cloud Memory, Copper, Embeddings Cohere, GitHub, Merge, Zammad

For full release details, refer to Releases on GitHub.

Jochem
KhDu
Nico Weichbrodt
Pavlo Paliychuk

View the commits for this version.
Release date: 2024-07-03

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-07-03

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-07-01

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-07-01

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-06-27

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-06-27

This release contains bug fixes and feature enhancements.

For full release details, refer to Releases on GitHub.

KubeAl

View the commits for this version.
Release date: 2024-06-26

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-06-20

Calling $(...).last() (or $(...).first() or $(...).all()) without arguments now returns the last item (or first or all items) of the output that connects two nodes. Previously, it returned the item/items of the first output of that node. Refer to the breaking changes log for details.

This release contains bug fixes, feature enhancements, a new node, node enhancements and performance improvements.

For full release details, refer to Releases on GitHub.

New node: HTTP request tool

This release adds the HTTP request tool. You can use it with an AI agent as a tool to collect information from a website or API. Refer to the HTTP request tool for details.

Daniel
ekadin-mtc
Eric Francis
Josh Sorenson
Mohammad Alsmadi
Nikolai T. Jensen
n8n-ninja
pebosi
Taylor Hoffmann

View the commits for this version.
Release date: 2024-06-12

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-06-12

This release contains feature enhancements, node enhancements, and bug fixes.

For full release details, refer to Releases on GitHub.

Jean Khawand
pemontto
Valentin Coppin

View the commits for this version.
Release date: 2024-06-12

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-06-10

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-06-06

This release contains new features, node enhancements, and bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-06-03

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-05-30

This release contains new features, node enhancements, and bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-05-28

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-05-22

This release contains new features, node enhancements, and bug fixes.

Although this release doesn't include a breaking change, it is a significant update including database migrations. n8n recommends backing up your data before updating to this version.

Credential sharing required for manual executions

Instance owners and admins: you will see changes if you try to manually execute a workflow where the credentials aren't shared with you. Manual workflow executions now use the same permissions checks as production executions, meaning you can't do a manual execution of a workflow if you don't have access to the credentials. Previously, owners and admins could do manual executions without credentials being shared with them. To resolve this, the credential creator needs to share the credential with you.

New feature: Projects

With projects and roles, you can give your team access to collections of workflows and credentials, rather than having to share each workflow and credential individually. Simultaneously, you tighten security by limiting access to people on the relevant team.

Refer to the RBAC documentation for information on creating projects and using roles.

The number of projects and role types vary depending on your plan. Refer to Pricing for details.

New node: Slack Trigger

This release adds a trigger node for Slack. Refer to the Slack Trigger documentation for details.

Rolling back to a previous version

If you update to this version, then decide you need to roll back:

  1. Delete any RBAC projects you created.
  2. Revert the database migrations using n8n db:revert.

Cloud: contact help@n8n.io.

Ayato Hayashi
Daniil Zobov
Guilherme Barile
Romain MARTINEAU

View the commits for this version.
Release date: 2024-05-20

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-05-16

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-05-15

This release contains new features, node enhancements, and bug fixes.

Note that this release removes the AI error debugger. We're working on a new and improved version.

New feature: Tools Agent

This release adds a new option to the Agent node: the Tools Agent.

This agent has an enhanced ability to work with tools, and can ensure a standard output format. This is now the recommended default agent.

For full release details, refer to Releases on GitHub.

Mike Quinlan
guangwu

View the commits for this version.
Release date: 2024-05-08

This release contains new features, node enhancements, and bug fixes.

Note that this release temporarily disables the AI error helper.

For full release details, refer to Releases on GitHub.

Florin Lungu

View the commits for this version.
Release date: 2024-05-02

Please note that this version contains a breaking change for instances using a Postgres database. The default value for the DB_POSTGRESDB_USER environment variable was switched from root to postgres. Refer to the breaking changes log for details.
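If your setup relied on the old implicit default, the simplest fix is to set the user explicitly. A sketch (values are illustrative):

```bash
export DB_TYPE=postgresdb
export DB_POSTGRESDB_USER=root          # previous default; the new default is "postgres"
export DB_POSTGRESDB_PASSWORD=changeme  # illustrative placeholder
```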

This release contains new features, new nodes, node enhancements, and bug fixes.

New feature: Ask AI in the HTTP node

You can now ask AI to help create API requests in the HTTP Request node:

  1. In the HTTP Request node, select Ask AI.
  2. Enter the Service and Request you want to use. For example, to use the NASA API to get their picture of the day, enter NASA in Service and get picture of the day in Request.
  3. Check the parameters: the AI tries to fill them out, but you may still need to adjust or correct the configuration.

Self-hosted users need to enable AI features and provide their own API keys

New node: Groq Chat Model

This release adds the Groq Chat Model node.

For full release details, refer to Releases on GitHub.

Alberto Pasqualetto
Bram Kn
CodeShakingSheep
Nicolas-nwb
pemontto
pengqiseven
webk
Yoshino-s

View the commits for this version.
Release date: 2024-04-25

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-04-25

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-04-25

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-04-24

This release contains new nodes, node enhancements, and bug fixes.

New node: WhatsApp Trigger

This release adds the WhatsApp Trigger node.

Node enhancement: Multiple methods, one Webhook node

The Webhook Trigger node can now handle calls to multiple HTTP methods. Refer to the Webhook node documentation for information on enabling this.

For full release details, refer to Releases on GitHub.

Bram Kn

View the commits for this version.
Release date: 2024-04-18

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-04-18

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-04-17

This release contains new nodes, bug fixes, and node enhancements.

New node: Google Gemini Chat Model

This release adds the Google Gemini Chat Model sub-node.

New node: Embeddings Google Gemini

This release adds the Google Gemini Embeddings sub-node.

For full release details, refer to Releases on GitHub.

Chengyou Liu
Francesco Mannino

View the commits for this version.
Release date: 2024-04-17

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-04-15

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-04-12

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-04-11

Please note that this version contains a breaking change for self-hosted n8n. It removes the --file flag for the execute CLI command. If you have scripts relying on the --file flag, update them to first import the workflow and then execute it using the --id flag. Refer to CLI commands for more information on CLI options.
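A sketch of the replacement flow (file name and ID are illustrative):

```bash
# Before this version: n8n execute --file my-workflow.json
# Now: import the workflow first, then execute it by ID.
n8n import:workflow --input=my-workflow.json
n8n execute --id 42
```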

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-04-11

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-04-10

Please note that this version contains a breaking change for self-hosted n8n. It removes the --file flag for the execute CLI command. If you have scripts relying on the --file flag, update them to first import the workflow and then execute it using the --id flag. Refer to CLI commands for more information on CLI options.

This release contains a new node, improvements to error handling and messaging, node enhancements, and bug fixes.

This release adds the JWT core node.

For full release details, refer to Releases on GitHub.

Miguel Prytoluk

View the commits for this version.
Release date: 2024-04-04

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-04-03

This release contains new nodes, enhancements and bug fixes.

New node: Salesforce Trigger node

This release adds the Salesforce Trigger node.

New node: Twilio Trigger node

This release adds the Twilio Trigger node.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-03-28

This release contains enhancements and bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-03-26

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-03-25

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-03-20

This release contains new features, new nodes, and bug fixes.

New node: Microsoft OneDrive Trigger node

This release adds the Microsoft OneDrive Trigger node. You can now trigger workflows on file and folder creation and update events.

New data transformation functions

This release introduces new data transformation functions:

Bram Kn
pemontto

View the commits for this version.
Release date: 2024-03-15

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-03-15

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-03-13

This release contains new features, node enhancements, and bug fixes.

Support for Claude 3

This release adds support for Claude 3 to the Anthropic Chat Model node.

For full release details, refer to Releases on GitHub.

gumida
Ayato Hayashi
Jordan
MC Naveen

View the commits for this version.
Release date: 2024-03-07

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-03-07

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-03-06

This release contains new features, node enhancements, performance improvements, and bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-03-06

Please note that this version contains a breaking change. HTTP connections to the editor will fail on domains other than localhost. You can read more about it here.

This is a bug fix release and it contains a breaking change.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-02-28

This release contains new features, new nodes, node enhancements and bug fixes.

New nodes: Microsoft Outlook trigger and Ollama embeddings

This release adds two new nodes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-02-23

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-02-21

This release contains new features, node enhancements, and bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-02-16

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-02-15

This release contains new features, node enhancements, and bug fixes.

For full release details, refer to Releases on GitHub.

OpenAI node overhaul

This release includes a new version of the OpenAI node, adding more operations, including support for working with assistants.

Bruno Inec
Jesús Burgers

View the commits for this version.
Release date: 2024-02-15

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-02-07

This release contains new features, new nodes, node enhancements and bug fixes.

New nodes: Azure OpenAI chat model and embeddings

This release adds two new nodes to work with Azure OpenAI in your advanced AI workflows:

For full release details, refer to Releases on GitHub.

Andrea Ascari

View the commits for this version.
Release date: 2024-02-02

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-01-31

This release contains new features, node enhancements, and bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-01-31

This release removes own mode for self-hosted n8n. You must now use EXECUTIONS_MODE and set it to either regular or queue. Refer to Queue mode for information on configuring queue mode.
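A minimal sketch of the new requirement:

```bash
# "own" mode is removed: set the execution mode explicitly.
export EXECUTIONS_MODE=regular   # default single-instance mode
# export EXECUTIONS_MODE=queue   # for scaling with workers (see the queue mode docs)
```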

Please upgrade directly to 1.27.1.

This release contains node enhancements and bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-01-24

This release contains new features, node enhancements, and bug fixes.

For full release details, refer to Releases on GitHub.

Daniel Schröder
Nihaal Sangha

View the commits for this version.
Release date: 2024-01-22

This is a bug fix release.

For full release details, refer to Releases on GitHub.

Nihaal Sangha

View the commits for this version.
Release date: 2024-01-17

This release contains a new node, feature improvements, and bug fixes.

New node: Chat Memory Manager

The Chat Memory Manager node replaces the Chat Messages Retriever node. It manages chat message memories within your AI workflows.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-01-16

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-01-10

This is a bug fix release. It includes important fixes for the HTTP Request and monday.com nodes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-01-10

This release contains new nodes for advanced AI, node enhancements, new features, performance enhancements, and bug fixes.

n8n has created a new Chat Trigger node. The new node provides a chat interface that you can make publicly available, with customization and authentication options.

Mistral Cloud Chat and Embeddings

This release introduces two new nodes to support Mistral AI:

Anush
Eric Koleda
Mason Geloso
vacitbaydarman

View the commits for this version.
Release date: 2024-01-09

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2024-01-03

This release contains new nodes, node enhancements, new features, and bug fixes.

New nodes and improved experience for working with files

This release includes a major overhaul of nodes relating to files (binary data).

There are now three key nodes dedicated to handling binary data files: Read/Write Files from Disk, Convert to File, and Extract from File.

n8n has moved support for iCalendar, PDF, and spreadsheet formats into these nodes, and removed the iCalendar, Read PDF, and Spreadsheet File nodes. There are still standalone nodes for HTML and XML.

New node: Qdrant vector store

This release adds support for Qdrant with the Qdrant vector store node.

Read n8n's Qdrant vector store node documentation

Aaron Gutierrez
Advaith Gundu
Anush
Bin
Nihaal Sangha

View the commits for this version.
Release date: 2024-01-03

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-12-27

Upgrade directly to 1.22.4

Due to issues with this release, upgrade directly to 1.22.4.

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-12-27

Upgrade directly to 1.22.4

Due to issues with this release, upgrade directly to 1.22.4.

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-12-21

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-12-21

This release contains node enhancements, new features, performance improvements, and bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-12-19

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-12-15

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-12-15

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-12-13

This release contains new features and nodes, node enhancements, and bug fixes.

New user role: Admin

This release introduces a third account type: admin. This role is available on pro and enterprise plans. Admins have similar permissions to instance owners.

Read more about user roles

New data transformation nodes

This release replaces the Item Lists node with a collection of nodes for data transformation tasks:

  • Aggregate: take separate items, or portions of them, and group them together into individual items.
  • Limit: remove items beyond a defined maximum number.
  • Remove Duplicates: identify and delete items that are identical across all fields or a subset of fields.
  • Sort: organize lists of items in a desired ordering, or generate a random selection.
  • Split Out: separate a single data item containing a list into multiple items.
  • Summarize: aggregate items together, in a manner similar to Excel pivot tables.

Increased sharing permissions for owners and admins

Instance owners and users with the admin role can now see and share all workflows and credentials. They can't view sensitive credential information.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-12-06

This release contains bug fixes, node enhancements, and ongoing new feature work.

For full release details, refer to Releases on GitHub.

Andrey Starostin

View the commits for this version.
Release date: 2023-12-05

This is a bug fix release.

This release removes the TensorFlow Embeddings node.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-12-05

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-12-01

Missing ARM v7 support

This version doesn't support ARM v7. n8n is working on fixing this in future releases.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-11-29

Upgrade directly to 1.19.4

Due to issues with this release, upgrade directly to 1.19.4.

This release contains new features, node enhancements, and bug fixes.

LangChain general availability

This release adds LangChain support to the main n8n version. Refer to LangChain for more information on how to build AI tools in n8n, the new nodes n8n has introduced, and related learning resources.

Show avatars of users working on the same workflow

This release improves the experience of users collaborating on workflows. You can now see who else is editing at the same time as you.

View the commits for this version.
Release date: 2023-11-30

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-11-22

This release contains new features and bug fixes.

Template creator hub

Built a template you want to share? This release introduces the n8n Creator hub. Refer to the creator hub Notion doc for more information on this project.

Node input and output search filter

Cloud Pro and Enterprise users can now search and filter the input and output data in nodes. Refer to Data filtering for more information.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-11-17

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-11-15

This release contains node enhancements and bug fixes.

Sticky Note Colors

You can now select background colors for sticky notes.

Discord Node Overhaul

An overhaul of the Discord node, improving the UI to make it easier to configure, improving error handling, and fixing issues.

For full release details, refer to Releases on GitHub.

antondollmaier
teomane

View the commits for this version.
Release date: 2023-11-08

This release contains node enhancements and bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-11-07

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-11-02

This release contains new features, node enhancements, and bug fixes.

Workflow history

This release introduces workflow history: view and load previous versions of your workflows.

Workflow history is available in Enterprise n8n, and with limited history for Cloud Pro.

Learn more in the Workflow history documentation.

Almost in time for Halloween: this release introduces dark mode.

  1. Select Settings > Personal.
  2. Under Personalisation, change Theme to Dark theme.

Optional error output for nodes

All nodes apart from sub-nodes and trigger nodes have a new optional output: Error. Use this to add steps to handle node errors.

Pagination support added to HTTP Request node

The HTTP Request node now supports pagination. Read the node docs for information and examples.

For full release details, refer to Releases on GitHub.

Yoshino-s

View the commits for this version.
Release date: 2023-10-26

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-10-26

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-10-25

This release contains node enhancements and bug fixes.

Switch node supports more outputs

The Switch node now supports an unlimited number of outputs.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-10-25

This release contains new features, feature enhancements, and bug fixes.

Upgrade directly to 1.14.0

This release failed to publish to npm. Upgrade directly to 1.14.0.

RSS Feed Trigger node

This release introduces a new node, the RSS Feed Trigger. Use this node to start a workflow when a new RSS feed item is published.

Facebook Lead Ads Trigger node

This release adds another new node, the Facebook Lead Ads Trigger. Use this node to trigger a workflow when you get a new lead.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-10-24

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

Burak Akgün

View the commits for this version.
Release date: 2023-10-23

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

Léo Martinez

View the commits for this version.
Release date: 2023-10-23

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

Inga
pemontto

View the commits for this version.
Release date: 2023-10-18

This release contains new features, node enhancements, and bug fixes.

Form Trigger node

This release introduces a new node, the n8n Form Trigger. Use this node to start a workflow based on a user submitting a form. It provides a configurable form interface.

For full release details, refer to Releases on GitHub.

Damian Karzon
Inga
pemontto

View the commits for this version.
Release date: 2023-10-13

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-10-11

This release contains new features and bug fixes.

External storage for binary files

Self-hosted users can now use an external service to store binary data. Learn more in External storage.

If you're using n8n Cloud and are interested in this feature, please contact n8n.

Item Lists node supports binary data

The Item Lists node now supports splitting and concatenating binary data inputs. This means you no longer need to use code to split a collection of files into multiple items.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-10-11

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-10-10

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-10-09

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-10-05

This release contains bug fixes and preparatory work for new features.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-10-04

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

LangChain in n8n (beta)

Release date: 2023-10-04

This release introduces support for building with LangChain in n8n.

With n8n's LangChain nodes you can build AI-powered functionality within your workflows. The LangChain nodes are configurable, meaning you can choose your preferred agent, LLM, memory, and other components. Alongside the LangChain nodes, you can connect any n8n node as normal: this means you can integrate your LangChain logic with other data sources and services.

  • This is a beta release, and not yet available in the main product. Follow the instructions in Access LangChain in n8n to try it out. Self-hosted and Cloud options are available.
  • Learn how LangChain concepts map to n8n nodes in LangChain concepts in n8n.
  • Browse n8n's new Cluster nodes. This is a new set of node types that allows for multiple nodes to work together to configure each other.

View the commits for this version.
Release date: 2023-09-28

This release contains new features, performance improvements, and bug fixes.

This release replaces RiotTmpl, the templating language used in expressions, with n8n's own templating language, Tournament. You can now use arrow functions in expressions.

N8N_BINARY_DATA_TTL and EXECUTIONS_DATA_PRUNE_TIMEOUT removed

The environment variables N8N_BINARY_DATA_TTL and EXECUTIONS_DATA_PRUNE_TIMEOUT no longer have any effect and can be removed. Instead of relying on a TTL system for binary data, n8n cleans up binary data together with executions during pruning.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-09-25

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-09-21

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-09-20

This release contains node enhancements and bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-09-14

This release contains bug fixes.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-09-13

This release contains node enhancements and bug fixes.

For full release details, refer to Releases on GitHub.

Quang-Linh LE
MC Naveen

View the commits for this version.
Release date: 2023-09-06

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-09-06

This release contains bug fixes, new features, and node enhancements.

Upgrade directly to 1.6.1

Skip this version and upgrade directly to 1.6.1, which contains essential bug fixes.

This release introduces support for TheHive API version 5. This uses a new node and credentials:

N8N_PERSISTED_BINARY_DATA_TTL removed

The environment variable N8N_PERSISTED_BINARY_DATA_TTL no longer has any effect and can be removed. This legacy flag was originally introduced to support ephemeral executions (see details), which are no longer supported.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-08-31

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-08-31

This release contains new features, node enhancements, and bug fixes.

Upgrade directly to 1.5.1

Skip this version and upgrade directly to 1.5.1, which contains essential bug fixes.

External secrets storage for credentials

Enterprise-tier accounts can now use external secrets vaults to manage credentials in n8n. This allows you to store credential information securely outside your n8n instance. n8n supports Infisical and HashiCorp Vault.

Refer to External secrets for guidance on enabling and using this feature.

Two-factor authentication

n8n now supports two-factor authentication (2FA) for self-hosted instances. n8n is working on bringing support to Cloud. Refer to Two-factor authentication for guidance on enabling and using it.

Debug executions

Users on a paid n8n plan can now load data from previous executions into their current workflow. This is useful when debugging a failed execution.

Refer to Debug executions for guidance on using this feature.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-08-29

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-08-23

This release contains new features, node enhancements, and bug fixes.

For full release details, refer to Releases on GitHub.

pemontto

View the commits for this version.
Release date: 2023-08-18

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-08-16

This release contains new features and bug fixes.

Trial feature: AI support in the Code node

This release introduces limited support for using AI to generate code in the Code node. Initially this feature is only available on Cloud, and will gradually be rolled out, starting with about 20% of users.

Learn how to use the feature, including guidance on writing prompts, in Generate code with ChatGPT.

For full release details, refer to Releases on GitHub.

Ian Gallagher
Xavier Calland

View the commits for this version.
Release date: 2023-08-14

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-08-09

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-08-09

This release contains new features, node enhancements, bug fixes, and performance improvements.

Upgrade directly to 1.2.1

When upgrading, skip this release and go directly to 1.2.1.

Credential support for SecOps services

This release introduces support for setting up credentials in n8n for the following services:

This makes it easier to do Custom operations with these services, using the HTTP Request node.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-07-27

This is a bug fix release.

Please note that this version contains breaking changes if upgrading from a 0.x.x version. For full details, refer to the n8n v1.0 migration guide.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-07-26

This release contains new features, bug fixes, and node enhancements.

Please note that this version contains breaking changes if upgrading from a 0.x.x version. For full details, refer to the n8n v1.0 migration guide.

Source control and environments

This release introduces source control and environments for enterprise users.

n8n uses Git-based source control to support environments. Linking your n8n instances to a Git repository lets you create multiple n8n environments, backed by Git branches.

Refer to Source control and environments to learn more about the features and set up your environments.

For full release details, refer to Releases on GitHub.

Adrián Martínez
Alberto Pasqualetto
Marten Steketee
perseus-algol
Sandra Ashipala
ZergRael

View the commits for this version.
Release date: 2023-07-24

This is a bug fix release.

Please note that this version contains breaking changes if upgrading from a 0.x.x version. For full details, refer to the n8n v1.0 migration guide.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-07-19

This is a bug fix release.

Please note that this version contains breaking changes if upgrading from a 0.x.x version. For full details, refer to the n8n v1.0 migration guide.

For full release details, refer to Releases on GitHub.

Romain Dunand
noctarius aka Christoph Engelbert

View the commits for this version.
Release date: 2023-07-13

This release contains API enhancements and adds support for sending messages to forum threads in the Telegram node.

Please note that this version contains breaking changes if upgrading from a 0.x.x version. For full details, refer to the n8n v1.0 migration guide.

For full release details, refer to Releases on GitHub.

Kirill

View the commits for this version.
Release date: 2023-07-05

This is a bug fix release.

Please note that this version contains breaking changes if upgrading from a 0.x.x version. For full details, refer to the n8n v1.0 migration guide.

Romain Dunand

View the commits for this version.
Release date: 2023-07-05

Please note that this version contains breaking changes. For full details, refer to the n8n v1.0 migration guide.

This is n8n's version one release.

For full details, refer to the n8n v1.0 migration guide.

Although JavaScript remains the default language, you can now also select Python as an option in the Code node and even make use of many Python modules. Note that Python is unavailable in Code nodes added to a workflow before v1.0.

Marten Steketee

Examples:

Example 1 (unknown):

toDateTime() //replaces toDate(). toDate() is retained for backwards compatibility.
parseJson()
extractUrlPath()
toBoolean()
base64Encode()
base64Decode()

Example 2 (unknown):

toDateTime()
toBoolean()

Example 3 (unknown):

toJsonString()

Example 4 (unknown):

toJsonString()
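
As a rough sketch of how these data transformation functions can be used in expressions (the fields active, url, and payload below are illustrative, not taken from the release notes):

{{ "2024-01-24T10:00:00".toDateTime() }} // string to DateTime
{{ $json.active.toBoolean() }} // "true"/"false"-style string to boolean
{{ $json.url.extractUrlPath() }} // keep only the path portion of a URL
{{ $json.payload.parseJson() }} // JSON string to object
{{ $json.toJsonString() }} // object to JSON string
{{ "hello world".base64Encode() }} // string to Base64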

MailerLite credentials

URL: llms-txt#mailerlite-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a MailerLite account.

Supported authentication methods

Refer to MailerLite's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key

Enable the Classic API toggle if the API key is for a MailerLite Classic account instead of the newer MailerLite experience.

Most new MailerLite accounts and all free accounts should disable the Classic API toggle. You can find out which version of MailerLite you are using and learn more about the differences between the two in the MailerLite FAQ.


Airtable credentials

URL: llms-txt#airtable-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using personal access token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create an Airtable account.

Supported authentication methods

  • Personal Access Token (PAT)
  • OAuth2

n8n used to offer an API key authentication method with Airtable. Airtable fully deprecated these keys as of February 2024. If you were using an Airtable API credential, replace it with an Airtable Personal Access Token or Airtable OAuth2 credential. n8n recommends using a Personal Access Token.

Refer to Airtable's API documentation for more information about the service.

Using personal access token

To configure this credential, you'll need:

  • A Personal Access Token (PAT)
  1. Go to the Airtable Builder Hub Personal access tokens page.
  2. Select + Create new token. Airtable opens the Create personal access token page.
  3. Enter a Name for your token, like n8n credential.
  4. Add Scopes to your token. Refer to Airtable's Scopes guide for more information. n8n recommends using these scopes:
    • data.records:read
    • data.records:write
    • schema.bases:read
  5. Select the Access for your token. Choose from a single base, multiple bases (even bases from different workspaces), all of the current and future bases in a workspace you own, or all of the bases from any workspace that you own, including bases and workspaces added in the future.
  6. Select Create token.
  7. Airtable opens a modal with your token displayed. Copy this token and enter it in your n8n credential as the Access Token.

Refer to Airtable's Find/create PATs documentation for more information.

To configure this credential, you'll need:

  • An OAuth Redirect URL
  • A Client ID
  • A Client Secret

To generate all this information, register a new Airtable integration:

  1. Open your Airtable Builder Hub OAuth integrations page.
  2. Select the Register new OAuth integration button.
  3. Enter a name for your OAuth integration.
  4. Copy the OAuth Redirect URL from your n8n credential.
  5. Paste that redirect URL in Airtable as the OAuth redirect URL.
  6. Select Register integration.
  7. On the following page, copy the Client ID from Airtable and paste it into the Client ID in your n8n credential.
  8. In Airtable, select Generate client secret.
  9. Copy the client secret and paste it into the Client Secret in your n8n credential.
  10. Select the following scopes in Airtable:
    • data.records:read
    • data.records:write
    • schema.bases:read
  11. Select Save changes in Airtable.
  12. In your n8n credential, select the Connect my account. A Grant access modal opens.
  13. Follow the instructions and select the base you want to work on (or all bases).
  14. Select Grant access to complete the connection.

Refer to the Airtable Register a new integration documentation for steps on registering a new OAuth integration.


For a self-hosted n8n instance

URL: llms-txt#for-a-self-hosted-n8n-instance

curl -X 'GET' \
  '<N8N_HOST>:<N8N_PORT>/<N8N_PATH>/api/v/workflows?active=true&limit=150&cursor=MTIzZTQ1NjctZTg5Yi0xMmQzLWE0NTYtNDI2NjE0MTc0MDA' \
  -H 'accept: application/json'


Mailchimp node

URL: llms-txt#mailchimp-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Mailchimp node to automate work in Mailchimp, and integrate Mailchimp with other applications. n8n has built-in support for a wide range of Mailchimp features, including creating, updating, and deleting campaigns, as well as getting list groups.

On this page, you'll find a list of operations the Mailchimp node supports and links to more resources.

Refer to Mailchimp credentials for guidance on setting up authentication.

  • Campaign
    • Delete a campaign
    • Get a campaign
    • Get all the campaigns
    • Replicate a campaign
    • Creates a Resend to Non-Openers version of this campaign
    • Send a campaign
  • List Group
    • Get all groups
  • Member
    • Create a new member on list
    • Delete a member on list
    • Get a member on list
    • Get all members on list
    • Update a new member on list
  • Member Tag
    • Add tags from a list member
    • Remove tags from a list member

Templates and examples

Process Shopify new orders with Zoho CRM and Harvest

View template details

Add new contacts from HubSpot to the email list in Mailchimp

View template details

Send or update new Mailchimp subscribers in HubSpot

View template details

Browse Mailchimp integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Data

URL: llms-txt#data

Contents:

  • Related resources
    • Data transformation nodes

Data is the information that n8n nodes receive and process. For basic usage of n8n you don't need to understand data structures and manipulation. However, it becomes important if you want to:

Data transformation nodes

n8n provides a collection of nodes to transform data:

  • Aggregate: take separate items, or portions of them, and group them together into individual items.
  • Limit: remove items beyond a defined maximum number.
  • Remove Duplicates: identify and delete items that are identical across all fields or a subset of fields.
  • Sort: organize lists of items in a desired ordering, or generate a random selection.
  • Split Out: separate a single data item containing a list into multiple items.
  • Summarize: aggregate items together, in a manner similar to Excel pivot tables.
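
For orientation, a minimal sketch of this structure: data passed between nodes is a list of items, and each item wraps its fields in a json key (binary file data, if any, sits under a separate binary key). A Code node could emit two such items like this (the field names are made up for illustration):

// Return two items; every field you want downstream nodes to see goes under `json`.
return [
  { json: { name: "First item", code: 1 } },
  { json: { name: "Second item", code: 2 } },
];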

Box credentials

URL: llms-txt#box-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a Box account.

Supported authentication methods

Refer to Box's API documentation for more information about the service.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you need to configure OAuth2 from scratch or need more detail on what's happening in the OAuth web flow, you'll need to create a Custom App. Refer to the Box OAuth2 Setup documentation for more information.


Edit Image

URL: llms-txt#edit-image

Contents:

  • Operations
  • Node parameters
    • Blur parameters
    • Border parameters
    • Composite parameters
    • Create parameters
    • Crop parameters
    • Draw parameters
    • Get Information parameters
    • Multi Step parameters

Use the Edit Image node to manipulate and edit images.

  1. If you aren't running n8n on Docker, you need to install GraphicsMagick.
  2. You need to use a node such as the Read/Write Files from Disk node or the HTTP Request node to pass the image file as a data property to the Edit Image node.
  • Add a Blur to the image to reduce sharpness
  • Add a Border to the image
  • Composite an image on top of another image
  • Create a new image
  • Crop the image
  • Draw on an image
  • Get Information about the image
  • Multi Step perform multiple operations on the image
  • Resize: Change the size of the image
  • Rotate the image
  • Shear image along the X or Y axis
  • Add Text to the image
  • Make a color in image Transparent

The parameters for this node depend on the operation you select.

Blur parameters

  • Property Name: Enter the name of the binary property that stores the image data.
  • Blur: Enter a number to set how strong the blur should be, between 0 and 1000. Higher numbers create blurrier images.
  • Sigma: Enter a number to set the sigma for the blur, between 0 and 1000. Higher numbers create blurrier images.

Refer to Node options for optional configuration options.

Border parameters

  • Property Name: Enter the name of the binary property that stores the image data.
  • Border Width: Enter the width of the border.
  • Border Height: Enter the height of the border.
  • Border Color: Set the color for the border. You can either enter a hex or select the color swatch to open a color picker.

Refer to Node options for optional configuration options.

Composite parameters

  • Property Name: Enter the name of the binary property that stores the image data. This image is your base image.
  • Composite Image Property: Enter the name of the binary property that stores image to composite on top of the Property Name image.
  • Operator: Select composite operator, which determines how the composite works. Options include:
    • Add
    • Atop
    • Bumpmap
    • Copy
    • Copy Black
    • Copy Blue
    • Copy Cyan
    • Copy Green
    • Copy Magenta
    • Copy Opacity
    • Copy Red
    • Copy Yellow
    • Difference
    • Divide
    • In
    • Minus
    • Multiply
    • Out
    • Over
    • Plus
    • Subtract
    • Xor
  • Position X: Enter the x axis position (horizontal) of the composite image.
  • Position Y: Enter the y axis position (vertical) of the composite image.

Refer to Node options for optional configuration options.

Create parameters

  • Property Name: Enter the name of the binary property that stores the image data.
  • Background Color: Set the background color for the image. You can either enter a hex or select the color swatch to open a color picker.
  • Image Width: Enter the width of the image.
  • Image Height: Enter the height of the image.

Refer to Node options for optional configuration options.

Crop parameters

  • Property Name: Enter the name of the binary property that stores the image data.
  • Width: Enter the width you'd like to crop to.
  • Height: Enter the height you'd like to crop to.
  • Position X: Enter the x axis position (horizontal) to start the crop from.
  • Position Y: Enter the y axis position (vertical) to start the crop from.

Refer to Node options for optional configuration options.

Draw parameters

  • Property Name: Enter the name of the binary property that stores the image data.
  • Primitive: Select the primitive shape to draw. Choose from:
    • Circle
    • Line
    • Rectangle
  • Color: Set the color for the primitive. You can either enter a hex or select the color swatch to open a color picker.
  • Start Position X: Enter the x axis position (horizontal) to start drawing from.
  • Start Position Y: Enter the y axis position (vertical) to start drawing from.
  • End Position X: Enter the x axis position (horizontal) to stop drawing at.
  • End Position Y: Enter the y axis position (vertical) to stop drawing at.
  • Corner Radius: Enter a number to set the corner radius. Adding a corner radius will round the corners of the drawn primitive.

Refer to Node options for optional configuration options.

Get Information parameters

For this operation, you only need to add the Property Name of the binary property that stores the image data.

Refer to Node options for optional configuration options.

Multi Step parameters

  • Property Name: Enter the name of the binary property that stores the image data.
  • Operations: Add the operations you want the multi step operation to perform. You can use any of the other operations.

Refer to Node options for optional configuration options.

Resize parameters

  • Property Name: Enter the name of the binary property that stores the image data.
  • Width: Enter the new width you'd like for the image.
  • Height: Enter the new height you'd like for the image.
  • Option: Select how you'd like to resize the image. Choose from:
    • Ignore Aspect Ratio: Ignore the aspect ratio and resize to the exact height and width you've entered.
    • Maximum Area: The height and width you've entered are the maximum area/size for the image. The image maintains its aspect ratio and won't be larger than the height and/or width you've entered.
    • Minimum Area: The height and width you've entered are the minimum area/size for the image. The image maintains its aspect ratio and won't be smaller than the height and/or width you've entered.
    • Only if Larger: Resize the image only if it's larger than the width and height you entered. The image maintains its aspect ratio.
    • Only if Smaller: Resize the image only if it's smaller than the width and height you entered. The image maintains its aspect ratio.
    • Percent: Resize the image using the width and height as percentages of the original image.

Refer to Node options for optional configuration options.

Rotate parameters

  • Property Name: Enter the name of the binary property that stores the image data.
  • Rotate: Enter the number of degrees to rotate the image, from -360 to 360.
  • Background Color: Set the background color for the image. You can either enter a hex or select the color swatch to open a color picker. This color fills in the empty background created when the image is rotated by a value that isn't a multiple of 90 degrees. If the Rotate value is a multiple of 90 degrees, the background color isn't used.

Refer to Node options for optional configuration options.

Shear parameters

  • Property Name: Enter the name of the binary property that stores the image data.
  • Degrees X: Enter the number of degrees to shear from the x axis.
  • Degrees Y: Enter the number of degrees to shear from the y axis.

Refer to Node options for optional configuration options.

Text parameters

  • Property Name: Enter the name of the binary property that stores the image data.
  • Text: Enter the text you'd like to write on the image.
  • Font Size: Select the font size for the text.
  • Font Color: Set the font color. You can either enter a hex or select the color swatch to open a color picker.
  • Position X: Enter the x axis position (horizontal) to begin the text at.
  • Position Y: Enter the y axis position (vertical) to begin the text at.
  • Max Line Length: Enter the maximum number of characters in a line before adding a line break.

Refer to Node options for optional configuration options.

Transparent parameters

  • Property Name: Enter the name of the binary property that stores the image data.
  • Color: Set the color to make transparent. You can either enter a hex or select the color swatch to open a color picker.

Refer to Node options for optional configuration options.

Node options

  • File Name: Enter the filename of the output file.
  • Format: Enter the image format of the output file. Choose from:
    • bmp
    • gif
    • jpeg
    • png
    • tiff
    • WebP

The Text operation also includes the option for Font Name or ID. Select the text font from the dropdown or specify an ID using an expression.

Templates and examples

Flux AI Image Generator

View template details

Generate Instagram Content from Top Trends with AI Image Generation

by mustafa kendigüzel

View template details

AI-Powered WhatsApp Chatbot 🤖📲 for Text, Voice, Images & PDFs with memory 🧠

View template details

Browse Edit Image integration templates, or search all templates


Workflow settings

URL: llms-txt#workflow-settings

Contents:

  • Access workflow settings
  • Available settings
    • Execution order
    • Error Workflow (to notify when this one errors)
    • This workflow can be called by
    • Timezone
    • Save failed production executions
    • Save successful production executions
    • Save manual executions
    • Save execution progress

You can customize workflow behavior for individual workflows using workflow settings.

Access workflow settings

To open the settings:

  1. Open your workflow.
  2. Select the three dots icon in the upper-right corner.
  3. Select Settings. n8n opens the Workflow settings modal.

Available settings

The following settings are available:

Choose the execution order for multi-branch workflows:

v1 (recommended) executes each branch in turn, completing one branch before starting another. n8n orders the branches based on their position on the canvas, from topmost to bottommost. If two branches are at the same height, the leftmost branch executes first.

v0 (legacy) executes the first node of each branch, then the second node of each branch, and so on.

Error Workflow (to notify when this one errors)

Select a workflow to trigger if the current workflow fails. See error workflows for more details.

This workflow can be called by

Choose which other workflows can call this workflow.

Sets the timezone for this workflow. The timezone setting is important for the Schedule Trigger node.

You can set your n8n instance's timezone to configure the default timezone workflows use. On self-hosted instances, set the GENERIC_TIMEZONE environment variable.

If you don't configure the workflow or instance timezone, n8n defaults to the New York (America/New_York) timezone.

Save failed production executions

Whether n8n should save failed executions for active workflows.

Save successful production executions

Whether n8n should save successful executions for active workflows.

Save manual executions

Whether n8n should save executions for workflows started by the user in the editor.

Save execution progress

Whether n8n should save execution data for each node.

If set to Save, the workflow resumes from where it stopped in case of an error. This may increase latency.

Timeout Workflow

Whether n8n should cancel the current workflow execution after a certain amount of time elapses.

When enabled, the Timeout After option appears. Here, you can set the time (in hours, minutes, and seconds) after which the workflow should timeout. For n8n Cloud users, n8n enforces a maximum available timeout for each plan.

Estimated time saved

An estimate of the number of minutes each execution of this workflow saves you.

Setting this lets n8n calculate the amount of time saved for insights.


Bitbucket Trigger node

URL: llms-txt#bitbucket-trigger-node

Bitbucket is a web-based version control repository hosting service owned by Atlassian, for source code and development projects that use either Mercurial or Git revision control systems.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Bitbucket Trigger integrations page.


Venafi TLS Protect Cloud Trigger node

URL: llms-txt#venafi-tls-protect-cloud-trigger-node

Venafi is a cybersecurity company providing services for machine identity management. They offer solutions to manage and protect identities for a wide range of machine types, delivering global visibility, lifecycle automation, and actionable intelligence.

Use the n8n Venafi TLS Protect Cloud Trigger node to start a workflow in n8n in response to events in the cloud-based Venafi TLS Protect service.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Venafi TLS Protect Cloud Trigger integrations page.


RAG in n8n

URL: llms-txt#rag-in-n8n

Contents:

  • What is RAG
  • What is a vector store?
  • How to use RAG in n8n
    • Inserting data into your vector store
    • Querying your data
    • Using agents
    • Using the node directly
  • FAQs
    • How do I choose the right embedding model?
    • What is the best text splitting for my use case?

Retrieval-Augmented Generation (RAG) is a technique that improves AI responses by combining language models with external data sources. Instead of relying solely on the model's internal training data, RAG systems retrieve relevant documents to ground responses in up-to-date, domain-specific, or proprietary knowledge. RAG workflows typically rely on vector stores to manage and search this external data efficiently.

What is a vector store?

A vector store is a special database designed to store and search high-dimensional vectors: numerical representations of text, images, or other data. When you upload a document, the vector store splits it into chunks and converts each chunk into a vector using an embedding model.

You can query these vectors using similarity searches, which construct results based on semantic meaning, rather than keyword matches. This makes vector stores a powerful foundation for RAG and other AI systems that need to retrieve and reason over large sets of knowledge.

How to use RAG in n8n

Start with a RAG template

👉 Try out RAG in n8n with the RAG Starter Template. The template includes two ready-made workflows: one for uploading files and one for querying them.

Inserting data into your vector store

Before your agent can access custom knowledge, you need to upload that data to a vector store:

  1. Add the nodes needed to fetch your source data.
  2. Insert a Vector Store node (e.g. the Simple Vector Store) and choose the Insert Documents operation.
  3. Select an embedding model, which converts your text into vector embeddings. Consult the FAQ for more information on choosing the right embedding model.
  4. Add a Default Data Loader node, which splits your content into chunks. You can use the default settings or define your own chunking strategy:
    • Character Text Splitter: splits by character length.
    • Recursive Character Text Splitter: recursively splits by Markdown, HTML, code blocks or simple characters (recommended for most use cases).
    • Token Text Splitter: splits by token count.
  5. (Optional) Add metadata to each chunk to enrich the context and allow better filtering later.

Querying your data

You can query the data in two main ways: using an agent or directly through a node.

  1. Add an agent to your workflow.
  2. Add the vector store as a tool and give it a description to help the agent understand when to use it:
    • Set the limit to define how many chunks to return.
    • Enable Include Metadata to provide extra context for each chunk.
  3. Add the same embedding model you used when inserting the data.

To save tokens on an expensive model, you can first use the Vector Store Question Answer tool to retrieve relevant data, and only then pass the result to the Agent. To see this in action, check out this template.

Using the node directly

  1. Add your vector store node to the canvas and choose the Get Many operation.
  2. Enter a query or prompt:
    • Set a limit for how many chunks to return.
    • Enable Include Metadata if needed.

How do I choose the right embedding model?

The right embedding model differs from case to case.

In general, smaller models (for example, text-embedding-ada-002) are faster and cheaper and thus ideal for short, general-purpose documents or lightweight RAG workflows. Larger models (for example, text-embedding-3-large) offer better semantic understanding. These are best for long documents, complex topics, or when accuracy is critical.

What is the best text splitting for my use case?

This again depends a lot on your data:

  • Small chunks (for example, 200 to 500 tokens) are good for fine-grained retrieval.
  • Large chunks may carry more context but can become diluted or noisy.

Using the right overlap size is important for the AI to understand the context of each chunk. That's also why splitting by Markdown or code blocks can often produce better chunks.

Another good approach is to add more context to each chunk (for example, about the document the chunk came from). If you want to read more about this, check out this great article from Anthropic.


Dropbox credentials

URL: llms-txt#dropbox-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using access token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API access token: Dropbox recommends this method for testing with your user account and granting a limited number of users access.
  • OAuth2: Dropbox recommends this method for production or for testing with more than 50 users.

You can transition an app from the API access token to OAuth2 by creating a new credential in n8n for OAuth2 using the same app.

Refer to Dropbox's Developer documentation for more information about the service.

Using access token

To configure this credential, you'll need a Dropbox developer account and:

  • An Access Token: Generated once you create a Dropbox app.
  • An App Access Type

To set up the credential, create a Dropbox app:

  1. Open the App Console within the Dropbox developer portal.
  2. Select Create app.
  3. In Choose an API, select Scoped access.
  4. In Choose the type of access you need, choose whichever option best fits your use of the Dropbox node:
    • App Folder grants access to a single folder created specifically for your app.
    • Full Dropbox grants access to all files and folders in your user's Dropbox.
    • Refer to the DBX Platform developer guide for more information.
  5. In Name your app, enter a name for your app, like n8n integration.
  6. Check the box to agree to the Dropbox API Terms and Conditions.
  7. Select Create app. The app's Settings open.
  8. In the OAuth 2 section, in Generated access token, select Generate.
  9. Copy the access token and enter it as the Access Token in your n8n credential.
  10. In n8n, select the same App Access Type you selected for your app.

Refer to the Dropbox App Console Settings documentation for more information.

On the Settings tab, you can add other users to your app, even with the access token method. Once your app links 50 Dropbox users, you will have two weeks to apply for and receive production status approval before Dropbox freezes your app from linking more users.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

Cloud users need to select the App Access Type:

  • App Folder grants access to a single folder created specifically for your app.
  • Full Dropbox grants access to all files and folders in your user's Dropbox.
  • Refer to the DBX Platform developer guide for more information.

If you're self-hosting n8n, you'll need to configure OAuth2 manually:

  1. Open the App Console within the Dropbox developer portal.
  2. Select Create app.
  3. In Choose an API, select Scoped access.
  4. In Choose the type of access you need, choose whichever option best fits your use of the Dropbox node:
    • App Folder grants access to a single folder created specifically for your app.
    • Full Dropbox grants access to all files and folders in your user's Dropbox.
    • Refer to the DBX Platform developer guide for more information.
  5. In Name your app, enter a name for your app, like n8n integration.
  6. Check the box to agree to the Dropbox API Terms and Conditions.
  7. Select Create app. The app's Settings open.
  8. Copy the App key and enter it as the Client ID in your n8n credential.
  9. Copy the Secret and enter it as the Client Secret in your n8n credential.
  10. In n8n, copy the OAuth Redirect URL and enter it in the Dropbox Redirect URIs.
  11. In n8n, select the same App Access Type you selected for your app.

Refer to the instructions in the Dropbox Implementing OAuth documentation for more information.

For internal tools and limited usage, you can keep your app private. But if you'd like your app to be used by more than 50 users or you want to distribute it, you'll need to complete Dropbox's production approval process. Refer to Production Approval in the DBX Platform developer guide for more information.

On the Settings tab, you can add other users to your app. Once your app links 50 Dropbox users, you will have two weeks to apply for and receive production status approval before Dropbox freezes your app from linking more users.


DHL node

URL: llms-txt#dhl-node

Contents:

  • Operations
  • Templates and examples

Use the DHL node to automate work in DHL, and integrate DHL with other applications. n8n has built-in support for a wide range of DHL features, including tracking shipment.

On this page, you'll find a list of operations the DHL node supports and links to more resources.

Refer to DHL credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Shipment
    • Get Tracking Details

Templates and examples

AI-powered WooCommerce Support-Agent

View template details

Expose Get tracking details to AI Agents via 🛠️ DHL Tool MCP Server

View template details

Automated DHL Shipment Tracking Bot for Web Forms and Email Inquiries

View template details

Browse DHL integration templates, or search all templates


Workflow Retriever node

URL: llms-txt#workflow-retriever-node

Contents:

  • Node parameters
    • Source
    • Workflow values
  • Templates and examples
  • Related resources

Use the Workflow Retriever node to retrieve data from an n8n workflow for use in a Retrieval QA Chain or another Retriever node.

On this page, you'll find the node parameters for the Workflow Retriever node, and links to more resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Tell n8n which workflow to call. You can choose either:

  • Database and enter a workflow ID.
  • Parameter and copy in a complete workflow JSON.

Set values to pass to the workflow you're calling.

These values appear in the output data of the trigger node in the workflow you call. You can access these values in expressions in the workflow. For example, if you have:

  • Workflow Values with a Name of myCustomValue
  • A workflow with an Execute Sub-workflow Trigger node as its trigger

The expression to access the value of myCustomValue is {{ $('Execute Sub-workflow Trigger').item.json.myCustomValue }}.

Templates and examples

AI Crew to Automate Fundamental Stock Analysis - Q&A Workflow

View template details

Build a PDF Document RAG System with Mistral OCR, Qdrant and Gemini AI

View template details

AI: Ask questions about any data source (using the n8n workflow retriever)

View template details

Browse Workflow Retriever integration templates, or search all templates

Refer to LangChain's general retriever documentation for more information about the service.

View n8n's Advanced AI documentation.


Facebook Trigger Page object

URL: llms-txt#facebook-trigger-page-object

Contents:

  • Prerequisites
  • Trigger configuration
  • Related resources

Use this object to receive updates when changes occur to your page profile fields or profile settings, or when someone mentions your page. Refer to Facebook Trigger for more information on the trigger itself.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Facebook Trigger integrations page.

This Object requires some configuration in your app and page before you can use the trigger:

  1. At least one page admin needs to grant the manage_pages permission to your app.

  2. The page admin needs to have at least moderator privileges. If they don't, they won't receive all content.

  3. You'll also need to add the app to your page, and you may need to go to the Graph API explorer and execute this call with your app token:

Trigger configuration

To configure the trigger with this Object:

  1. Select the Credential to connect with. Select an existing or create a new Facebook App credential.
  2. Enter the APP ID of the app connected to your credential. Refer to the Facebook App credential documentation for more information.
  3. Select Page as the Object.
  4. Field Names or IDs: By default, the node will trigger on all the available events using the * wildcard filter. If you'd like to limit the events, use the X to remove the star and use the dropdown or an expression to select the updates you're interested in. Options include individual profile fields, as well as:
    • Feed: Describes most changes to a page's feed, including posts, likes, shares, and so on.
    • Leadgen: Notifies you when a page's lead generation settings change.
    • Live Videos: Notifies you when a page's live video status changes.
    • Mention: Notifies you when new mentions in pages, comments, and so on occur.
    • Merchant Review: Notifies you when a page's merchant review settings change.
    • Page Change Proposal: Notifies you when Facebook suggests proposed changes for your Facebook Page.
    • Page Upcoming Change: Notifies you about upcoming changes that will occur on your Facebook Page. Facebook has suggested these changes and they may have a deadline to accept or reject before automatically taking effect.
    • Product Review: Notifies you when a page's product review settings change.
    • Ratings: Notifies you when a page's ratings change, including new ratings or when a user comments on or reacts to a rating.
    • Videos: Notifies you when the encoding status of a video on a page changes.
  5. In Options, turn on the toggle to Include Values. This Object type fails without the option enabled.

Refer to Webhooks for Pages and Meta's Page Graph API reference for more information.

Examples:

Example 1 (unknown):

{page-id}/subscribed_apps?subscribed_fields=feed

AI Assistant

URL: llms-txt#ai-assistant

Contents:

  • Current capabilities
  • Tips for getting the most out of the Assistant
  • FAQs
    • What context does the Assistant have?
    • Who can use the Assistant?
    • How does the Assistant work?
  • Change instance ownership
  • Change instance username

The n8n AI Assistant helps you build, debug, and optimize your workflows seamlessly. From answering questions about n8n to providing help with coding and expressions, the AI Assistant can streamline your workflow-building process and support you as you navigate n8n's capabilities.

Current capabilities

The AI Assistant offers a range of tools to support you:

  • Debug helper: Identify and troubleshoot node execution issues in your workflows to keep them running without issues.
  • Answer n8n questions: Get instant answers to your n8n-related questions, whether they're about specific features or general functionality.
  • Coding support: Receive guidance on coding, including SQL and JSON, to optimize your nodes and data processing.
  • Expression assistance: Learn how to create and refine expressions to get the most out of your workflows.
  • Credential setup tips: Find out how to set up and manage node credentials securely and efficiently.

Tips for getting the most out of the Assistant

  1. Engage in a conversation: The AI Assistant can collaborate with you step-by-step. If a suggestion isn't what you need, let it know! The more context you provide, the better the recommendations will be.

  2. Ask specific questions: For the best results, ask focused questions (for example, "How do I set up credentials for Google Sheets?"). The assistant works best with clear queries.

  3. Iterate on suggestions: Don't hesitate to build on the assistant's responses. Try different approaches and keep refining based on the assistant's feedback to get closer to your ideal solution.

  4. Things to try out:

    • Debug any error you're seeing
    • Ask how to set up credentials
    • "Explain what this workflow does."
    • "I need your help to write code: [Explain your code here]"
    • "How can I build X in n8n?"

What context does the Assistant have?

The AI Assistant has access to all elements displayed on your n8n screen, excluding actual input and output data values (like customer information). To learn more about what data n8n shares with the Assistant, refer to AI in n8n.

Who can use the Assistant?

Any user on a Cloud plan can use the assistant.

How does the Assistant work?

The underlying logic of the assistant is built with the advanced AI capabilities of n8n. It uses a combination of different agents specialized in different areas of n8n, RAG to gather knowledge from the docs and the community forum, and custom prompts, memory, and context.

Change instance ownership

You can change the ownership of an instance by navigating to the Settings page in the owner's account and editing the Email field. After making the changes, scroll down and press Save. Note that for the change to be effective, the new email address can't be linked to any other n8n account.

Changing the email changes the owner of the instance, the email you log in with, and the email address your invoices and general communication are sent to.

If the workspace is deactivated, there is no Settings page, so you can't change the email address or the owner information.

Change instance username

It's not currently possible to change usernames.

If you want your instance to have a different name, you'll need to create a new account and transfer your work into it. The import/export documentation explains how you can transfer your work to a new n8n instance.


Facebook Trigger Certificate Transparency object

URL: llms-txt#facebook-trigger-certificate-transparency-object

Contents:

  • Trigger configuration
  • Related resources

Use this object to receive updates about newly issued certificates for any domains that you have subscribed for certificate alerts or phishing alerts. Refer to Facebook Trigger for more information on the trigger itself.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Facebook Trigger integrations page.

Trigger configuration

To configure the trigger with this Object:

  1. Select the Credential to connect with. Select an existing or create a new Facebook App credential.
  2. Enter the APP ID of the app connected to your credential. Refer to the Facebook App credential documentation for more information.
  3. Select Certificate Transparency as the Object.
  4. Field Names or IDs: By default, the node will trigger on all the available events using the * wildcard filter. If you'd like to limit the events, use the X to remove the star and use the dropdown or an expression to select the updates you're interested in. Options include:
    • Certificate: Notifies you when someone issues a new certificate for your subscribed domains. You'll need to subscribe your domain for certificate alerts.
    • Phishing: Notifies you when someone issues a new certificate that may be phishing one of your legitimate subscribed domains.
  5. In Options, turn on the toggle to Include Values. This Object type fails without the option enabled.

To receive these alerts, you'll need to subscribe your domain to the relevant alert type.

Refer to Webhooks for Certificate Transparency and Meta's Certificate Transparency Graph API reference for more information.


Current node input

URL: llms-txt#current-node-input

Methods for working with the input of the current node. Some methods and variables aren't available in the Code node.

You can use Python in the Code node. It isn't available in expressions.

Method Description Available in Code node?
$binary Shorthand for $input.item.binary. Incoming binary data from a node
$input.item The input item of the current node that's being processed. Refer to Item linking for more information on paired items and item linking.
$input.all() All input items in current node.
$input.first() First input item in current node.
$input.last() Last input item in current node.
$input.params Object containing the query settings of the previous node. This includes data such as the operation it ran, result limits, and so on.
$json Shorthand for $input.item.json. Incoming JSON data from a node. Refer to Data structure for information on item structure. (when running once for each item)
$input.context.noItemsLeft Boolean. Only available when working with the Loop Over Items node. Provides information about what's happening in the node. Use this to determine whether the node is still processing items.
The Python equivalents are:

Method Description
_input.item The input item of the current node that's being processed. Refer to Item linking for more information on paired items and item linking.
_input.all() All input items in current node.
_input.first() First input item in current node.
_input.last() Last input item in current node.
_input.params Object containing the query settings of the previous node. This includes data such as the operation it ran, result limits, and so on.
_json Shorthand for _input.item.json. Incoming JSON data from a node. Refer to Data structure for information on item structure. Available when you set Mode to Run Once for Each Item.
_input.context.noItemsLeft Boolean. Only available when working with the Loop Over Items node. Provides information about what's happening in the node. Use this to determine whether the node is still processing items.
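
As a quick illustration, a JavaScript Code node set to Run Once for All Items could use these methods like this (a minimal sketch; the name field is a placeholder for whatever your incoming items contain):

// JavaScript Code node, "Run Once for All Items" mode.
const all = $input.all();      // every incoming item
const first = $input.first();  // the first incoming item

return [
	{
		json: {
			total: all.length,
			firstName: first.json.name, // placeholder field from the incoming data
		},
	},
];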

Stripe node

URL: llms-txt#stripe-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Stripe node to automate work in Stripe, and integrate Stripe with other applications. n8n has built-in support for a wide range of Stripe features, including getting balance, creating charge, and deleting customers.

On this page, you'll find a list of operations the Stripe node supports and links to more resources.

Refer to Stripe credentials for guidance on setting up authentication.

  • Balance
    • Get a balance
  • Charge
    • Create a charge
    • Get a charge
    • Get all charges
    • Update a charge
  • Coupon
    • Create a coupon
    • Get all coupons
  • Customer
    • Create a customer
    • Delete a customer
    • Get a customer
    • Get all customers
    • Update a customer
  • Customer Card
    • Add a customer card
    • Get a customer card
    • Remove a customer card
  • Source
    • Create a source
    • Delete a source
    • Get a source
  • Token
    • Create a token

Templates and examples

Update HubSpot when a new invoice is registered in Stripe

View template details

Simplest way to create a Stripe Payment Link

View template details

Streamline Your Zoom Meetings with Secure, Automated Stripe Payments

View template details

Browse Stripe integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


OpenAI File operations

URL: llms-txt#openai-file-operations

Contents:

  • Delete a File
  • List Files
    • Options
  • Upload a File
    • Options
  • Common issues

Use this operation to create, delete, list, message, or update a file in OpenAI. Refer to OpenAI for more information on the OpenAI node itself.

Use this operation to delete a file from the server.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.
  • Resource: Select File.
  • Operation: Select Delete a File.
  • File: Enter the ID of the file to use for this operation or select the file name from the dropdown.

Refer to Delete file | OpenAI documentation for more information.

Use this operation to list files that belong to the user's organization.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select File.

  • Operation: Select List Files.

  • Purpose: Use this to only return files with the given purpose. Use Assistants to return only files related to Assistants and Message operations. Use Fine-Tune for files related to Fine-tuning.

Refer to List files | OpenAI documentation for more information.

Use this operation to upload a file. This can be used across various operations.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select File.

  • Operation: Select Upload a File.

  • Input Data Field Name: Defaults to data. Enter the name of the binary property which contains the file. The size of individual files can be a maximum of 512 MB or 2 million tokens for Assistants.

  • Purpose: Enter the intended purpose of the uploaded file. Use Assistants for files associated with Assistants and Message operations. Use Fine-Tune for Fine-tuning.

Refer to Upload file | OpenAI documentation for more information.

For common errors or issues and suggested resolution steps, refer to Common Issues.


Zep credentials

URL: llms-txt#zep-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key
    • Zep Cloud setup
    • Self-hosted Zep Open Source setup

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Zep's Cloud SDK documentation for more information about the service. Refer to Zep's REST API documentation for information about the API.

View n8n's Advanced AI documentation.

To configure this credential, you'll need a Zep server with at least one project and:

  • An API URL
  • An API Key

Setup depends on whether you're using Zep Cloud or self-hosted Zep Open Source.

Follow these instructions if you're using Zep Cloud:

  1. In Zep, open the Project Settings.
  2. In the Project Keys section, select Add Key.
  3. Enter a Key Name, like n8n integration.
  4. Select Create.
  5. Copy the key and enter it in your n8n integration as the API Key.
  6. Turn on the Cloud toggle.

Self-hosted Zep Open Source setup

The Zep team deprecated the open source Zep Community Edition in April 2025. These instructions may not work in the future.

Follow these instructions if you're self-hosting Zep Open Source:

  1. Enter the JWT token for your Zep server as the API Key in n8n.
  2. Make sure the Cloud toggle is off.
  3. Enter the URL for your Zep server as the API URL.

Lightweight Directory Access Protocol (LDAP)

URL: llms-txt#lightweight-directory-access-protocol-(ldap)

Contents:

  • Enable LDAP

  • Merging n8n and LDAP accounts

  • LDAP user accounts in n8n

  • Turn LDAP off

  • Available on Self-hosted Enterprise and Cloud Enterprise plans.

  • You need access to the n8n instance owner account.

This page tells you how to enable LDAP in n8n. It assumes you're familiar with LDAP, and have an existing LDAP server set up.

LDAP allows users to sign in to n8n with their organization credentials, instead of an n8n login.

  1. Log in to n8n as the instance owner.
  2. Select Settings > LDAP.
  3. Toggle on Enable LDAP Login.
  4. Complete the fields with details from your LDAP server.
  5. Select Test connection to check your connection setup, or Save connection to create the connection.

After enabling LDAP, anyone on your LDAP server can sign in to the n8n instance, unless you exclude them using the User Filter setting.

You can still create non-LDAP users (email users) on the Settings > Users page.

Merging n8n and LDAP accounts

If n8n finds matching accounts (matching emails) for email users and LDAP users, the user must sign in with their LDAP account. n8n instance owner accounts are excluded from this: n8n never converts owner accounts to LDAP users.

LDAP user accounts in n8n

On first sign in, n8n creates a user account in n8n for the LDAP user.

You must manage user details on the LDAP server, not in n8n. If you update or delete a user on your LDAP server, the n8n account updates at the next scheduled sync, or when the user next tries to log in, whichever happens first.

If you remove a user from your LDAP server, they lose n8n access on the next sync.

  1. Log in to n8n as the instance owner.
  2. Select Settings > LDAP.
  3. Toggle off Enable LDAP Login.

If you turn LDAP off, n8n converts existing LDAP users to email users on their next login. The users must reset their password.


Zoho credentials

URL: llms-txt#zoho-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a Zoho account.

Supported authentication methods

Refer to Zoho's CRM API documentation for more information about the service.

To configure this credential, you'll need:

  • An Access Token URL: Zoho provides region-specific access token URLs. Select the region that matches your Zoho data center:
    • AU: Select this option for the Australia data center.
    • CN: Select this option for the China data center.
    • EU: Select this option for the European Union data center.
    • IN: Select this option for the India data center.
    • US: Select this option for the United States data center.

Refer to Multi DC for more information about selecting a data center.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you need to configure OAuth2 from scratch, register an application with Zoho.

Use these settings for your application:

  • Select Server-based Applications as the Client Type.
  • Copy the OAuth Callback URL from n8n and enter it in the Zoho Authorized Redirect URIs field.
  • Copy the Client ID and Client Secret from the application and enter them in your n8n credential.

n8n Embed

URL: llms-txt#n8n-embed

Contents:

  • Support
  • Russia and Belarus

n8n Embed is part of n8n's paid offering. Using Embed, you can white label n8n, or incorporate it in your software as part of your commercial product.

For more information about when to use Embed, as well as costs and licensing processes, refer to Embed on the n8n website.

The community forum can help with various issues. If you are a current Embed customer, you can also contact n8n support, using the email provided when you bought the license.

Russia and Belarus

n8n Embed isn't available in Russia and Belarus. Refer to n8n's blog post Update on n8n cloud accounts in Russia and Belarus for more information.


Markdown

URL: llms-txt#markdown

Contents:

  • Operations
  • Node parameters
  • Node options
    • Markdown to HTML options
    • HTML to Markdown options
  • Templates and examples
  • Parsers

The Markdown node converts between Markdown and HTML formats.

This node's operations are called Modes:

  • Markdown to HTML: Use this mode to convert from Markdown to HTML.

  • HTML to Markdown: Use this mode to convert from HTML to Markdown.

Configure these node parameters:

  • HTML or Markdown: Enter the data you want to convert. The field name changes based on which Mode you select.

  • Destination Key: Enter the field you want to put the output in. Specify nested fields using dots, for example level1.level2.newKey.
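
As a rough illustration of Destination Key: converting an input item { "html": "<h1>Title</h1>" } with the HTML to Markdown mode and a Destination Key of content.markdown would produce an output item along these lines (the field names here are placeholders, and the exact Markdown depends on your options):

{
	"html": "<h1>Title</h1>",
	"content": {
		"markdown": "# Title"
	}
}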

The node's Options depend on the Mode selected.

Some of the options depend on each other or can interact. We recommend testing out options to confirm the effects are what you want.

Markdown to HTML options

Option Description Default
Add Blank To Links Whether to open links in a new window (enabled) or not (disabled). Disabled
Automatic Linking To URLs Whether to automatically link to URLs (enabled) or not (disabled). If enabled, n8n converts any string that it identifies as a URL to a link. Disabled
Backslash Escapes HTML Tags Whether to allow backslash escaping of HTML tags (enabled) or not (disabled). When enabled, n8n escapes any < or > prefaced with \. For example, \<div\> renders as &lt;div&gt;. Disabled
Complete HTML Document Whether to output a complete HTML document (enabled) or an HTML fragment (disabled). A complete HTML document includes the <!DOCTYPE html> declaration, <html> and <body> tags, and the <head> element. Disabled
Customized Header ID Whether to support custom heading IDs (enabled) or not (disabled). When enabled, you can add custom heading IDs using {header ID here} after the heading text. Disabled
Emoji Support Whether to support emojis (enabled) or not (disabled). Disabled
Encode Emails Whether to transform ASCII character emails into their equivalent decimal entities (enabled) or not (disabled). Enabled
Exclude Trailing Punctuation From URLs Whether to exclude trailing punctuation from automatically linked URLs (enabled) or not (disabled). For use with Automatic Linking To URLs. Disabled
GitHub Code Blocks Whether to enable GitHub Flavored Markdown code blocks (enabled) or not (disabled). Enabled
GitHub Compatible Header IDs Whether to generate GitHub Flavored Markdown heading IDs (enabled) or not (disabled). GitHub Flavored Markdown generates heading IDs with - in place of spaces and removes non-alphanumeric characters. Disabled
GitHub Mention Link Change the link used with GitHub Mentions. Disabled
GitHub Mentions Whether to support tagging GitHub users with @ (enabled) or not (disabled). When enabled, n8n replaces @name with https://github.com/name. Disabled
GitHub Task Lists Whether to support GitHub Flavored Markdown task lists (enabled) or not (disabled). Disabled
Header Level Start Number. Set the start level for headers. For example, changing this field to 2 causes n8n to treat # as <h2>, ## as <h3>, and so on. 1
Mandatory Space Before Header Whether to make a space between # and heading text required (enabled) or not (disabled). When enabled, n8n renders a heading written as ##Some header text literally (it doesn't turn it into a heading element). Disabled
Middle Word Asterisks Whether n8n should treat asterisks in words as Markdown (disabled) or render them as literal asterisks (enabled). Disabled
Middle Word Underscores Whether n8n should treat underscores in words as Markdown (disabled) or render them as literal underscores (enabled). Disabled
No Header ID Disable automatic generation of header IDs (enabled). Disabled
Parse Image Dimensions Support setting maximum image dimensions in Markdown syntax (enabled). Disabled
Prefix Header ID Define a prefix to add to header IDs. None
Raw Header ID Whether to remove spaces, ', and " from header IDs, including prefixes, replacing them with - (enabled) or not (disabled). Disabled
Raw Prefix Header ID Whether to prevent n8n from modifying header prefixes (enabled) or not (disabled). Disabled
Simple Line Breaks Whether to create line breaks without a double space at the end of a line (enabled) or not (disabled). Disabled
Smart Indentation Fix Whether to try to smartly fix indentation problems related to ES6 template strings in indented code blocks (enabled) or not (disabled). Disabled
Spaces Indented Sublists Whether to remove the requirement to indent sublists four spaces (enabled) or not (disabled). Disabled
Split Adjacent Blockquotes Whether to split adjacent blockquote blocks (enabled) or not (disabled). If you don't enable this, n8n treats quotes (indicated by > at the start of the line) on separate lines as a single blockquote, even when separated by an empty line. Disabled
Strikethrough Whether to support strikethrough syntax (enabled) or not (disabled). When enabled, you can add a strikethrough effect using ~~ around the word or phrase. Disabled
Tables Header ID Whether to add an ID to table header tags (enabled) or not (disabled). Disabled
Tables Support Whether to support tables (enabled) or not (disabled). Disabled

HTML to Markdown options

Option Description Default
Bullet Marker Specify the character to use for unordered lists. *
Code Block Fence Specify the characters to use for code blocks. ```
Emphasis Delimiter Specify the character to use for <em>. _
Global Escape Pattern Overrides the default character escape settings. You may want to use Text Replacement Pattern instead. None
Ignored Elements Ignore given HTML elements, and their children. None
Keep Images With Data Whether to keep images with data (enabled) or not (disabled). Supports files up to 1 MB. Disabled
Line Start Escape Pattern Overrides the default character escape settings. You may want to use Text Replacement Pattern instead. None
Max Consecutive New Lines Number. Specify the maximum number of consecutive new lines allowed. 3
Place URLs At The Bottom Whether to place URLs at the bottom of the page and format using link reference definitions (enabled) or not (disabled). Disabled
Strong Delimiter Specify the characters for <strong>. **
Style For Code Block Specify the styling for code blocks. Options are Fence and Indented. Fence
Text Replacement Pattern Define a text replacement pattern using regex. None
Treat As Blocks Specify HTML elements to treat as blocks (surrounded with blank lines). None

Templates and examples

AI agent that can scrape webpages

View template details

Autonomous AI crawler

View template details

Personalized AI Tech Newsletter Using RSS, OpenAI and Gmail

View template details

Browse Markdown integration templates, or search all templates

n8n uses the following parsers:


Configure the Base URL for n8n's front end access

URL: llms-txt#configure-the-base-url-for-n8n's-front-end-access

Requires manual UI build

This use case involves configuring the VUE_APP_URL_BASE_API environment variable, which requires a manual build of the n8n-editor-ui package. You can't use it with the default n8n Docker image, where the default setting for this variable is /, meaning that it uses the root domain.

You can configure the Base URL that the front end uses to connect to the back end's REST API. This is relevant when you want to host n8n's front end and back end separately.

Refer to Environment variables reference for more information on this variable.

Examples:

Example 1 (unknown):

export VUE_APP_URL_BASE_API=https://n8n.example.com/

Bannerbear node

URL: llms-txt#bannerbear-node

Contents:

  • Operations
  • Templates and examples

Use the Bannerbear node to automate work in Bannerbear, and integrate Bannerbear with other applications. n8n has built-in support for a wide range of Bannerbear features, including creating and getting images and templates.

On this page, you'll find a list of operations the Bannerbear node supports and links to more resources.

Refer to Bannerbear credentials for guidance on setting up authentication.

  • Image
    • Create an image
    • Get an image
  • Template
    • Get a template
    • Get all templates

Templates and examples

Speed Up Social Media Banners With BannerBear.com

View template details

Render custom text over images

View template details

Send Airtable data as tasks to Trello

View template details

Browse Bannerbear integration templates, or search all templates


Perplexity node

URL: llms-txt#perplexity-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Perplexity node to automate work in Perplexity and integrate Perplexity with other applications. n8n has built-in support for messaging a model.

On this page, you'll find a list of operations the Perplexity node supports, and links to more resources.

You can find authentication information for this node here.

  • Message a Model: Create one or more completions for a given text.

Templates and examples

Clone Viral TikToks with AI Avatars & Auto-Post to 9 Platforms using Perplexity & Blotato

View template details

🔍🛠️Generate SEO-Optimized WordPress Content with AI Powered Perplexity Research

View template details

AI-Powered Multi-Social Media Post Automation: Google Trends & Perplexity AI

View template details

Browse Perplexity integration templates, or search all templates

Refer to Perplexity's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


SolarWinds Observability SaaS credentials

URL: llms-txt#solarwinds-observability-saas-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API Token

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Supported authentication methods

Refer to SolarWinds Observability SaaS's API documentation for more information about the service.

To configure this credential, you'll need a SolarWinds Observability SaaS account and:

  • URL: The URL you use to access the SolarWinds Observability SaaS platform
  • API Token: An API token found in the SolarWinds Observability SaaS platform under Settings > API Tokens

Refer to SolarWinds Observability SaaS's API documentation for more information about authenticating to the service.


Optional timezone to set which gets used by Cron and other scheduling nodes

URL: llms-txt#optional-timezone-to-set-which-gets-used-by-cron-and-other-scheduling-nodes


Expressions cookbook

URL: llms-txt#expressions-cookbook

Contents:

  • Related resources

This section contains examples and recipes for tasks you can do with expressions.

You can use Python in the Code node. It isn't available in expressions.


Data mocking

URL: llms-txt#data-mocking

Contents:

  • Mocking with real data using data pinning
  • Generate custom data using the Code or Edit Fields nodes
  • Output a sample data set from the Customer Datastore node

Data mocking is simulating or faking data. It's useful when developing a workflow. By mocking data, you can:

  • Avoid making repeated calls to your data source. This saves time and costs.
  • Work with a small, predictable dataset during initial development.
  • Avoid the risk of overwriting live data: in the early stages of building your workflow, you don't need to connect your real data source.

Mocking with real data using data pinning

Using data pinning, you load real data into your workflow, then pin it in the output panel of a node. Using this approach you have realistic data, with only one call to your data source. You can edit pinned data.

Use this approach when you need to configure your workflow to handle the exact data structure and parameters provided by your data source.

To pin data in a node:

  1. Run the node to load data.
  2. In the OUTPUT view, select Pin data. When data pinning is active, the button is disabled and a "This data is pinned" banner is displayed in the OUTPUT view.

Nodes that output binary data

You can't pin data if the output data includes binary data.

Generate custom data using the Code or Edit Fields nodes

You can create a custom dataset in your workflow using either the Code node or the Edit Fields (Set) node.

In the Code node, you can create any data set you want, and return it as the node output. In the Edit Fields node, select Add fields to add your custom data.

The Edit Fields node is a good choice for small tests. To create more complex datasets, use the Code node.
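
For example, a Code node set to Run Once for All Items could return a small mocked dataset like the following sketch (the fields are placeholders; shape the objects to match the data source you're simulating):

// Return a few mocked items from the Code node.
return [
	{ json: { id: 1, name: 'Alice', email: 'alice@example.com' } },
	{ json: { id: 2, name: 'Bob', email: 'bob@example.com' } },
];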

Output a sample data set from the Customer Datastore node

The Customer Datastore node provides a fake dataset to work with. Add and execute the node to explore the data.

Use this approach if you need some test data when exploring n8n, and you don't have a real use-case to work with.


Cloud IP addresses

URL: llms-txt#cloud-ip-addresses

Cloud IP addresses change without warning

n8n can't guarantee static source IPs, as Cloud operates in a dynamic cloud provider environment and scales its infrastructure to meet demand. You should use strong authentication and secure transport protocols when connecting into and out of n8n.

Outbound traffic may appear to originate from any of:

  • 20.79.227.226/32
  • 20.113.47.122/32
  • 20.218.202.73/32
  • 98.67.233.91/32
  • 4.182.111.50/32
  • 4.182.129.20/32
  • 4.182.88.118/32
  • 4.182.212.136/32
  • 98.67.244.108/32
  • 72.144.128.145/32
  • 72.144.83.147/32
  • 72.144.69.38/32
  • 72.144.111.50/32
  • 4.182.128.108/32
  • 4.182.190.144/32
  • 4.182.191.184/32
  • 98.67.233.200/32
  • 20.52.126.0/28
  • 20.218.238.112/28
  • 4.182.64.64/28
  • 20.218.174.0/28
  • 4.184.78.240/28
  • 20.79.32.32/28
  • 51.116.119.64/28

n8n Trigger node

URL: llms-txt#n8n-trigger-node

Contents:

  • Node parameters
  • Templates and examples

The n8n Trigger node triggers when the current workflow updates or activates, or when the n8n instance starts or restarts. You can use the n8n Trigger node to notify when these events occur.

The node includes a single parameter to identify the Events that should trigger it. Choose from these events:

  • Active Workflow Updated: If you select this event, the node triggers when this workflow is updated.
  • Instance started: If you select this event, the node triggers when the n8n instance starts or restarts.
  • Workflow Activated: If you select this event, the node triggers when this workflow is activated.

You can select one or more of these events.

Templates and examples

RAG Starter Template using Simple Vector Stores, Form trigger and OpenAI

View template details

Unify multiple triggers into a single workflow

by Guillaume Duvernay

View template details

Backup and Delete Workflows to Google Drive with n8n API and Form Trigger

View template details

Browse n8n Trigger integration templates, or search all templates


LangChain Code node methods

URL: llms-txt#langchain-code-node-methods

n8n provides these methods to make it easier to perform common tasks in the LangChain Code node.

LangChain Code node only

These methods are for use in the LangChain Code node. You can't use them in other nodes.

Method Description
this.addInputData(inputName, data) Populate the data of a specified non-main input. Useful for mocking data. - inputName is the input connection type, and must be one of: ai_agent, ai_chain, ai_document, ai_embedding, ai_languageModel, ai_memory, ai_outputParser, ai_retriever, ai_textSplitter, ai_tool, ai_vectorRetriever, ai_vectorStore - data contains the data you want to add. Refer to Data structure for information on the data structure expected by n8n.
this.addOutputData(outputName, data) Populate the data of a specified non-main output. Useful for mocking data. - outputName is the output connection type, and must be one of: ai_agent, ai_chain, ai_document, ai_embedding, ai_languageModel, ai_memory, ai_outputParser, ai_retriever, ai_textSplitter, ai_tool, ai_vectorRetriever, ai_vectorStore - data contains the data you want to add. Refer to Data structure for information on the data structure expected by n8n.
this.getInputConnectionData(inputName, itemIndex, inputIndex?) Get data from a specified non-main input. - inputName is the input connection type, and must be one of: ai_agent, ai_chain, ai_document, ai_embedding, ai_languageModel, ai_memory, ai_outputParser, ai_retriever, ai_textSplitter, ai_tool, ai_vectorRetriever, ai_vectorStore - itemIndex should always be 0 (this parameter will be used in upcoming functionality) - Use inputIndex if there is more than one node connected to the specified input.
this.getInputData(inputIndex?, inputName?) Get data from the main input.
this.getNode() Get the current node.
this.getNodeOutputs() Get the outputs of the current node.
this.getExecutionCancelSignal() Use this to stop the execution of a function when the workflow stops. In most cases n8n handles this, but you may need to use it if building your own chains or agents. It replaces the Cancelling a running LLMChain code that you'd use if building a LangChain application normally.
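
As a rough sketch, here's how a few of these methods might be combined in the node's code. It assumes a chat model is connected to the ai_languageModel input and that incoming items have a hypothetical prompt field; adapt the return value to the outputs you've configured for the node.

// Minimal sketch for the LangChain Code node (JavaScript).
const items = this.getInputData(); // items from the main input

// Get whatever is connected to the ai_languageModel input (itemIndex must be 0).
const model = await this.getInputConnectionData('ai_languageModel', 0);

const results = [];
for (const item of items) {
	// `invoke` is the generic LangChain runnable call; the exact response shape
	// depends on which model node you connected.
	const response = await model.invoke(item.json.prompt); // `prompt` is a placeholder field
	results.push({ json: { response } });
}

return results;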

Gumroad credentials

URL: llms-txt#gumroad-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following nodes:

Create a Gumroad account.

Supported authentication methods

Refer to Gumroad's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:


Get number of items returned by the previous node

URL: llms-txt#get-number-of-items-returned-by-the-previous-node

To get the number of items returned by the previous node, use the Code node. Example 1 below shows the JavaScript version; Example 3 shows the Python version.

In both cases, the output will be similar to Examples 2 and 4.

Examples:

Example 1 (unknown):

if (Object.keys(items[0].json).length === 0) {
	return [
		{
			json: {
				results: 0,
			}
		}
	];
}
return [
	{
		json: {
			results: items.length,
		}
	}
];

Example 2 (unknown):

[
	{
		"results": 8
	}
]

Example 3 (unknown):

items = _input.all()

if len(items[0].json) == 0:
	return [
		{
			"json": {
				"results": 0,
			}
		}
	]
else:
	return [
		{
			"json": {
				"results": len(items),
			}
		}
	]

Example 4 (unknown):

[
	{
		"results": 8
	}
]

Chat Trigger node common issues

URL: llms-txt#chat-trigger-node-common-issues

Contents:

  • Pass data from a website to an embedded Chat Trigger node
  • Chat Trigger node doesn't fetch previous messages

Here are some common errors and issues with the Chat Trigger node and steps to resolve or troubleshoot them.

Pass data from a website to an embedded Chat Trigger node

When embedding the Chat Trigger node in a website, you might want to pass extra information to the Chat Trigger. For example, passing a user ID stored in a site cookie.

To do this, use the metadata field in the JSON object you pass to the createChat function in your embedded chat window:

The metadata field can contain arbitrary data that will appear in the Chat Trigger output alongside other output data. From there, you can query and process the data from downstream nodes as usual using n8n's data processing features.
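
For example, assuming the Chat Trigger is the node directly before, a downstream node could read the value with an expression along these lines (YOUR_KEY matches the placeholder key in Example 1 below):

{{ $json.metadata.YOUR_KEY }}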

Chat Trigger node doesn't fetch previous messages

When you configure a Chat Trigger node, you might experience problems fetching previous messages if you aren't careful about how you configure session loading. This often manifests as a "workflow could not be started!" error.

In Chat Triggers, the Load Previous Session option retrieves previous chat messages for a session using the sessionID. When you set the Load Previous Session option to From memory, it's almost always best to connect the same memory node to both the Chat Trigger and the Agent in your workflow:

  1. In your Chat Trigger node, set the Load Previous Session option to From Memory. This is only visible if you've made the chat publicly available.
  2. Attach a Simple Memory node to the Memory connector.
  3. Attach the same Simple Memory node to the Memory connector of your Agent.
  4. In the Simple Memory node, set Session ID to Connected Chat Trigger Node.

One instance where you may want to attach separate memory nodes to your Chat Trigger and the Agent is if you want to set the Session ID in your memory node to Define below.

If you're retrieving the session ID from an expression, the same expression must work for each of the nodes attached to it. If the expression isn't compatible with each of the nodes that need memory, you might need to use separate memory nodes so you can customize the expression for the session ID on a per-node basis.

Examples:

Example 1 (unknown):

import { createChat } from '@n8n/chat';

createChat({
	webhookUrl: 'YOUR_PRODUCTION_WEBHOOK_URL',
	metadata: {
		'YOUR_KEY': 'YOUR_DATA'
	}
});

OpenAI Conversation operations

URL: llms-txt#openai-conversation-operations

Contents:

  • Create a Conversation
    • Options
  • Get a Conversation
  • Remove a Conversation
  • Update a Conversation
    • Options

Use this operation to create, get, update, or remove a conversation in OpenAI. Refer to OpenAI for more information on the OpenAI node itself.

Create a Conversation

Use this operation to create a new conversation.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Conversation.

  • Operation: Select Create a Conversation.

  • Messages: A message input to the model. Messages with the system role take precedence over instructions given with the user role. Messages with the assistant role will be assumed to have been generated by the model in previous interactions.

  • Metadata: A set of key-value pairs for storing structured information. You can attach up to 16 pairs to an object, which is useful for adding custom data that can be used for searching via the API or in the dashboard.

Refer to Conversations | OpenAI documentation for more information.

Get a Conversation

Use this operation to retrieve an existing conversation.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.
  • Resource: Select Conversation.
  • Operation: Select Get Conversation.
  • Conversation ID: The ID of the conversation to retrieve.

Refer to Conversations | OpenAI documentation for more information.

Remove a Conversation

Use this operation to remove an existing conversation.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.
  • Resource: Select Conversation.
  • Operation: Select Remove Conversation.
  • Conversation ID: The ID of the conversation to remove.

Refer to Conversations | OpenAI documentation for more information.

Update a Conversation

Use this operation to update an existing conversation.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Conversation.

  • Operation: Select Update a Conversation.

  • Conversation ID: The ID of the conversation to update.

  • Metadata: A set of key-value pairs for storing structured information. You can attach up to 16 pairs to an object, which is useful for adding custom data that can be used for searching via the API or in the dashboard.

Refer to Conversations | OpenAI documentation for more information.


Mocean node

URL: llms-txt#mocean-node

Contents:

  • Operations
  • Templates and examples

Use the Mocean node to automate work in Mocean, and integrate Mocean with other applications. n8n has built-in support for a wide range of Mocean features, including sending SMS, and voice messages.

On this page, you'll find a list of operations the Mocean node supports and links to more resources.

Refer to Mocean credentials for guidance on setting up authentication.

  • SMS
    • Send SMS/Voice message
  • Voice
    • Send SMS/Voice message

Templates and examples

Browse Mocean integration templates, or search all templates


monday.com credentials

URL: llms-txt#monday.com-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Minimum required version

The monday.com node requires n8n version 1.22.6 or above.

Supported authentication methods

Refer to monday.com's API documentation for more information about authenticating with the service.

To configure this credential, you'll need a monday.com account and:

  • An API Token V2
  1. In your monday.com account, select your profile picture in the top right corner.
  2. Select Developers. The Developer Center opens in a new tab.
  3. In the Developer Center, select My Access Tokens > Show.
  4. Copy your personal token and enter it in your n8n credential as the Token V2.

Refer to monday.com API Authentication for more information.

To configure this credential, you'll need a monday.com account and:

  • A Client ID
  • A Client Secret

To generate both these fields, register a new monday.com application:

  1. In your monday.com account, select your profile picture in the top right corner.
  2. Select Developers. The Developer Center opens in a new tab.
  3. In the Developer Center, select Build app. The app details open.
  4. Enter a Name for your app, like n8n integration.
  5. Copy the Client ID and enter it in your n8n credential.
  6. Show the Client Secret, copy it, and enter it in your n8n credential.
  7. In the left menu, select OAuth.
  8. For Scopes, select boards:write and boards:read.
  9. Select Save Scopes.
  10. Select the Redirect URLs tab.
  11. Copy the OAuth Redirect URL from n8n and enter it as the Redirect URL.
  12. Save your changes in monday.com.
  13. In n8n, select Connect my account to finish the setup.

Refer to Create an app for more information on creating apps.

Refer to OAuth and permissions for more information on the available scopes and setting up the Redirect URL.


Manual Trigger node

URL: llms-txt#manual-trigger-node

Contents:

  • Common issues
    • Only one 'Manual Trigger' node is allowed in a workflow

Use this node if you want to start a workflow by selecting Execute Workflow and don't want any option for the workflow to run automatically.

Workflows always need a trigger, or start point. Most workflows start with a trigger node firing in response to an external event or the Schedule Trigger firing on a set schedule.

The Manual Trigger node serves as the workflow trigger for workflows that don't have an automatic trigger. Use it in these situations:

  • To test your workflow before you add an automatic trigger of some kind.
  • When you don't want the workflow to run automatically.

Here are some common errors and issues with the Manual Trigger node and steps to resolve or troubleshoot them.

Only one 'Manual Trigger' node is allowed in a workflow

This error displays if you try to add a Manual Trigger node to a workflow which already includes a Manual Trigger node.

Remove your existing Manual Trigger or edit your workflow to connect that trigger to a different node.


Item linking concepts

URL: llms-txt#item-linking-concepts

Contents:

  • n8n's automatic item linking
  • Item linking example

Each output item created by a node includes metadata that links them to the input item (or items) that the node used to generate them. This creates a chain of items that you can work back along to access previous items. This can be complicated to understand, especially if the node splits or merges data. You need to understand item linking when building your own programmatic nodes, or in some scenarios using the Code node.

This document provides a conceptual overview of this feature. For usage details, refer to:

n8n's automatic item linking

If a node doesn't control how to link input items to output items, n8n tries to guess how to link the items automatically:

  • Single input, single output: the output links to the input.
  • Single input, multiple outputs: all outputs link to that input.
  • Multiple inputs and outputs:
    • If you keep the input items, but change the order (or remove some but keep others), n8n can automatically add the correct linked item information.
    • If the number of inputs and outputs is equal, n8n links the items in order. This means that output-1 links to input-1, output-2 to input-2, and so on.
    • If the number isn't equal, or you create completely new items, n8n can't automatically link items.

If n8n can't link items automatically, and the node doesn't handle the item linking, n8n displays an error. Refer to Item linking errors for more information.

Item linking example

In this example, it's possible for n8n to link an item in one node back several steps, despite the item order changing. This means the node that sorts movies alphabetically can access information about the linked item in the node that gets famous movie actors.

The methods for accessing linked items are different depending on whether you're using the UI, expressions, or the code node. Explore the following resources:
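
For instance, in an expression you can follow the item links back to an earlier node with the $('<node-name>').item syntax. A sketch using the hypothetical node and field names from this example:

{{ $('Get famous movie actors').item.json.name }}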


Set up SSL

URL: llms-txt#set-up-ssl

Contents:

  • Use a reverse proxy (recommended)
  • Pass certificates into n8n directly

There are two methods to support TLS/SSL in n8n.

Use a reverse proxy like Traefik or a Network Load Balancer (NLB) in front of the n8n instance. This should also take care of certificate renewals.

Refer to Security | Data encryption for more information.

Pass certificates into n8n directly

You can also choose to pass certificates into n8n directly. To do so, set the N8N_SSL_CERT and N8N_SSL_KEY environment variables to point to your generated certificate and key file.

You'll need to make sure the certificate stays renewed and up to date.

Refer to Deployment environment variables for more information on these variables and Configuration for more information on setting environment variables.


ActiveCampaign node

URL: llms-txt#activecampaign-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the ActiveCampaign node to automate work in ActiveCampaign, and integrate ActiveCampaign with other applications. n8n has built-in support for a wide range of ActiveCampaign features, including creating, getting, updating, and deleting accounts, contact, orders, e-commerce customers, connections, lists, tags, and deals.

On this page, you'll find a list of operations the ActiveCampaign node supports and links to more resources.

Refer to ActiveCampaign credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Account
    • Create an account
    • Delete an account
    • Get data of an account
    • Get data of all accounts
    • Update an account
  • Account Contact
    • Create an association
    • Delete an association
    • Update an association
  • Contact
    • Create a contact
    • Delete a contact
    • Get data of a contact
    • Get data of all contacts
    • Update a contact
  • Contact List
    • Add contact to a list
    • Remove contact from a list
  • Contact Tag
    • Add a tag to a contact
    • Remove a tag from a contact
  • Connection
    • Create a connection
    • Delete a connection
    • Get data of a connection
    • Get data of all connections
    • Update a connection
  • Deal
    • Create a deal
    • Delete a deal
    • Get data of a deal
    • Get data of all deals
    • Update a deal
    • Create a deal note
    • Update a deal note
  • E-commerce Order
    • Create an order
    • Delete an order
    • Get data of an order
    • Get data of all orders
    • Update an order
  • E-Commerce Customer
    • Create an E-commerce Customer
    • Delete an E-commerce Customer
    • Get data of an E-commerce Customer
    • Get data of all E-commerce Customers
    • Update an E-commerce Customer
  • E-commerce Order Products
    • Get data of all order products
    • Get data of an ordered product
    • Get data of an order's products
  • List
    • Get all lists
  • Tag
    • Create a tag
    • Delete a tag
    • Get data of a tag
    • Get data of all tags
    • Update a tag

Templates and examples

Create a contact in ActiveCampaign

View template details

Receive updates when a new account is added by an admin in ActiveCampaign

View template details

🛠️ ActiveCampaign Tool MCP Server 💪 all 48 operations

View template details

Browse ActiveCampaign integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Iterable node

URL: llms-txt#iterable-node

Contents:

  • Operations
  • Templates and examples

Use the Iterable node to automate work in Iterable, and integrate Iterable with other applications. n8n has built-in support for a wide range of Iterable features, including creating users, recording the actions performed by the users, and adding and removing users from the list.

On this page, you'll find a list of operations the Iterable node supports and links to more resources.

Refer to Iterable credentials for guidance on setting up authentication.

  • Event
    • Record the actions a user performs
  • User
    • Create/Update a user
    • Delete a user
    • Get a user
  • User List
    • Add user to list
    • Remove a user from a list

Templates and examples

Browse Iterable integration templates, or search all templates


Mindee credentials

URL: llms-txt#mindee-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using invoice API key
  • Using receipt API key

You can use these credentials to authenticate the following nodes:

Create a Mindee account.

Supported authentication methods

Refer to Mindee's Invoice OCR API documentation and Mindee's Receipt OCR API documentation for more information about each service.

Using invoice API key

To configure this credential, you'll need:

Using receipt API key

To configure this credential, you'll need:


Gmail node Message Operations

URL: llms-txt#gmail-node-message-operations

Contents:

  • Add Label to a message
  • Delete a message
  • Get a message
  • Get Many messages
    • Get Many messages filters
  • Mark as Read
  • Mark as Unread
  • Remove Label from a message
  • Reply to a message
    • Reply options
  • Send a message
  • Send a message and wait for approval
    • Send and wait for approval options

Use the Message operations to send a message, reply to a message, delete a message, mark a message as read or unread, add or remove labels, or get one or more messages in Gmail. Refer to the Gmail node for more information on the Gmail node itself.

Add Label to a message

Use this operation to add one or more labels to a message.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Message.
  • Operation: Select Add Label.
  • Message ID: Enter the ID of the message you want to add the label to.
  • Label Names or IDs: Select the Label names you want to add or enter an expression to specify IDs. The dropdown populates based on the Credential you selected.

Refer to the Gmail API Method: users.messages.modify documentation for more information.

Use this operation to immediately and permanently delete a message.

This operation can't be undone. For recoverable deletions, use the Thread Trash operation instead.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Message.
  • Operation: Select Delete.
  • Message ID: Enter the ID of the message you want to delete.

Refer to the Gmail API Method: users.messages.delete documentation for more information.

Use this operation to get a single message.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Message.
  • Operation: Select Get.
  • Message ID: Enter the ID of the message you wish to retrieve.
  • Simplify: Choose whether to return a simplified version of the response (turned on) or the raw data (turned off). Default is on.
    • This is the same as setting the format for the API call to metadata, which returns email message IDs, labels, and email headers, including: From, To, CC, BCC, and Subject.

Refer to the Gmail API Method: users.messages.get documentation for more information.

Use this operation to get two or more messages.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Message.
  • Operation: Select Get Many.
  • Return All: Choose whether the node returns all messages (turned on) or only up to a set limit (turned off).
  • Limit: Enter the maximum number of messages to return. Only used if you've turned off Return All.
  • Simplify: Choose whether to return a simplified version of the response (turned on) or the raw data (turned off). Default is on.
    • This is the same as setting the format for the API call to metadata, which returns email message IDs, labels, and email headers, including: From, To, CC, BCC, and Subject.

Get Many messages filters

Use these filters to further refine the node's behavior:

  • Include Spam and Trash: Select whether the node should get messages in the Spam and Trash folders (turned on) or not (turned off).
  • Label Names or IDs: Only return messages with the selected labels added to them. Select the Label names you want to apply or enter an expression to specify IDs. The dropdown populates based on the Credential you selected.
  • Search: Enter Gmail search operators, like from:, to filter the messages returned. Refer to Refine searches in Gmail for more information.
  • Read Status: Choose whether to receive Unread and read emails, Unread emails only (default), or Read emails only.
  • Received After: Return only those emails received after the specified date and time. Use the date picker to select the day and time or enter an expression to set a date as a string in ISO format or a timestamp in milliseconds. Refer to ISO 8601 for more information on formatting the string. See the expression example below this list.
  • Received Before: Return only those emails received before the specified date and time. Use the date picker to select the day and time or enter an expression to set a date as a string in ISO format or a timestamp in milliseconds. Refer to ISO 8601 for more information on formatting the string.
  • Sender: Enter an email or a part of a sender name to return messages from only that sender.

Refer to the Gmail API Method: users.messages.list documentation for more information.
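
For the Received After and Received Before filters, a date expression can be convenient. For example, this sketch uses n8n's built-in Luxon-based $now variable to produce an ISO timestamp for 24 hours ago:

{{ $now.minus({ days: 1 }).toISO() }}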

Use this operation to mark a message as read.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Message.
  • Operation: Select Mark as Read.
  • Message ID: Enter the ID of the message you wish to mark as read.

Refer to the Gmail API Method: users.messages.modify documentation for more information.

Use this operation to mark a message as unread.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Message.
  • Operation: Select Mark as Unread.
  • Message ID: Enter the ID of the message you wish to mark as unread.

Refer to the Gmail API Method: users.messages.modify documentation for more information.

Remove Label from a message

Use this operation to remove one or more labels from a message.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Message.
  • Operation: Select Remove Label.
  • Message ID: Enter the ID of the message you want to remove the label from.
  • Label Names or IDs: Select the Label names you want to remove or enter an expression to specify IDs. The dropdown populates based on the Credential you selected.

Refer to the Gmail API Method: users.messages.modify documentation for more information.

Reply to a message

Use this operation to send a message as a reply to an existing message.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Message.
  • Operation: Select Reply.
  • Message ID: Enter the ID of the message you want to reply to.
  • Select the Email Type. Choose from Text or HTML.
  • Message: Enter the email message body.

Use these options to further refine the node's behavior:

  • Append n8n attribution: By default, the node appends the statement This email was sent automatically with n8n to the end of the email. To remove this statement, turn this option off.
  • Attachments: Select Add Attachment to add an attachment. Enter the Attachment Field Name (in Input) to identify which field from the input node contains the attachment.
    • For multiple properties, enter a comma-separated list.
  • BCC: Enter one or more email addresses for blind copy recipients. Separate multiple email addresses with a comma, for example jay@gatsby.com, jon@smith.com.
  • CC: Enter one or more email addresses for carbon copy recipients. Separate multiple email addresses with a comma, for example jay@gatsby.com, jon@smith.com.
  • Sender Name: Enter the name you want displayed in your recipients' email as the sender.
  • Reply to Sender Only: Choose whether to reply all (turned off) or reply to the sender only (turned on).

Refer to the Gmail API Method: users.messages.send documentation for more information.

Use this operation to send a message.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Message.
  • Operation: Select Send.
  • To: Enter the email address you want the email sent to.
  • Subject: Enter the subject line.
  • Select the Email Type. Choose from Text or HTML.
  • Message: Enter the email message body.

Use these options to further refine the node's behavior:

  • Append n8n attribution: By default, the node appends the statement This email was sent automatically with n8n to the end of the email. To remove this statement, turn this option off.
  • Attachments: Select Add Attachment to add an attachment. Enter the Attachment Field Name (in Input) to identify which field from the input node contains the attachment.
    • For multiple properties, enter a comma-separated list.
  • BCC: Enter one or more email addresses for blind copy recipients. Separate multiple email addresses with a comma, for example jay@gatsby.com, jon@smith.com.
  • CC: Enter one or more email addresses for carbon copy recipients. Separate multiple email addresses with a comma, for example jay@gatsby.com, jon@smith.com.
  • Sender Name: Enter the name you want displayed in your recipients' email as the sender.
  • Send Replies To: Enter an email address to set as the reply to address.
  • Reply to Sender Only: Choose whether to reply all (turned off) or reply to the sender only (turned on).

Refer to the Gmail API Method: users.messages.send documentation for more information.

Send a message and wait for approval

Use this operation to send a message and wait for approval from the recipient before continuing the workflow execution.

Use Wait for complex approvals

The Send and Wait for Approval operation is well-suited for simple approval processes. For more complex approvals, consider using the Wait node.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Message.
  • Operation: Select Send and Wait for Approval.
  • To: Enter the email address you want the email sent to.
  • Subject: Enter the subject line.
  • Message: Enter the email message body.

Send and wait for approval options

Use these options to further refine the node's behavior:

  • Type of Approval: Choose Approve Only (default) to include only an approval button or Approve and Disapprove to also include a disapproval option.
  • Approve Button Label: The label to use for the approval button (Approve by default).
  • Approve Button Style: Whether to style the approval button as a Primary (default) or Secondary button.
  • Disapprove Button Label: The label to use for the disapproval button (Decline by default). Only visible when you set Type of Approval to Approve and Disapprove.
  • Disapprove Button Style: Whether to style the disapproval button as a Primary or Secondary (default) button. Only visible when you set Type of Approval to Approve and Disapprove.

Refer to the Gmail API Method: users.messages.send documentation for more information.

For common errors or issues and suggested resolution steps, refer to Common Issues.


Notion Trigger node

URL: llms-txt#notion-trigger-node

Contents:

  • Events
  • Related resources

Notion is an all-in-one workspace for your notes, tasks, wikis, and databases.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Notion Trigger integrations page.

  • Page added to database
  • Page updated in database

n8n provides an app node for Notion. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Notion's documentation for details about their API.


Invoice Ninja credentials

URL: llms-txt#invoice-ninja-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create an Invoice Ninja account. Only the Pro and Enterprise plans support API integrations.

Supported authentication methods

Refer to Invoice Ninja's v4 API documentation and v5 API documentation for more information about the APIs.

To configure this credential, you'll need:

  • A URL: If Invoice Ninja hosts your installation, use either of the default URLs mentioned. If you're self-hosting your installation, use the URL of your Invoice Ninja instance.
  • An API Token: Generate an API token in Settings > Account Management > API Tokens.
  • An optional Secret, available only for v5 API users

Mailjet credentials

URL: llms-txt#mailjet-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using Email API key
  • Using SMS Token

You can use these credentials to authenticate the following nodes:

Create a Mailjet account.

Supported authentication methods

  • Email API key: For use with Mailjet's Email API
  • SMS token: For use with Mailjet's SMS API

Refer to Mailjet's Email API documentation and Mailjet's SMS API documentation for more information about each service.

Using Email API key

To configure this credential, you'll need:

  • An API Key: View and generate API keys in your Mailjet API Key Management page.
  • A Secret Key: View your API Secret Keys in your Mailjet API Key Management page.
  • Optional: Select whether to use Sandbox Mode for calls made using this credential. When turned on, all API calls use Sandbox mode: the API will still validate the payloads but won't deliver the actual messages. This can be useful to troubleshoot any payload error messages without actually sending messages. Refer to Mailjet's Sandbox Mode documentation for more information.

For this credential, you can use either:

  • Mailjet's primary API key and secret key
  • A subaccount API key and secret key

Refer to Mailjet's How to create a subaccount (or additional API key) documentation for detailed instructions on creating more API keys. Refer to What are subaccounts and how does it help me? page for more information on Mailjet subaccounts and when you might want to use one.

To configure this credential, you'll need:

  • An access Token: Generate a new token from Mailjet's SMS Dashboard.

OpenCTI credentials

URL: llms-txt#opencti-credentials

Contents:

  • Prerequisites
  • Authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create an OpenCTI developer account.

Authentication methods

Refer to OpenCTI's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:


Figma credentials

URL: llms-txt#figma-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Figma account. You need an admin or owner level account.

Supported authentication methods

Refer to Figma's API documentation for more information about the service.

To configure this credential, you'll need:


Recorded Future credentials

URL: llms-txt#recorded-future-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Recorded Future account.

Supported authentication methods

Refer to Recorded Future's documentation for more information about the service. The rest of Recorded Future's help center requires a paid account.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

Using API access token

To configure this credential, you'll need:

  • An API Access Token

Refer to the Recorded Future APIs documentation for more information on getting your API access token.


Microsoft Entra ID node

URL: llms-txt#microsoft-entra-id-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported
  • Common issues
    • Updating the Allow External Senders and Auto Subscribe New Members options fails

Use the Microsoft Entra ID node to automate work in Microsoft Entra ID and integrate Microsoft Entra ID with other applications. n8n has built-in support for a wide range of Microsoft Entra ID features, which includes creating, getting, updating, and deleting users and groups, as well as adding users to and removing them from groups.

On this page, you'll find a list of operations the Microsoft Entra ID node supports, and links to more resources.

You can find authentication information for this node here.

  • Group
    • Create: Create a new group
    • Delete: Delete an existing group
    • Get: Retrieve data for a specific group
    • Get Many: Retrieve a list of groups
    • Update: Update a group
  • User
    • Create: Create a new user
    • Delete: Delete an existing user
    • Get: Retrieve data for a specific user
    • Get Many: Retrieve a list of users
    • Update: Update a user
    • Add to Group: Add user to a group
    • Remove from Group: Remove user from a group

Templates and examples

Browse Microsoft Entra ID integration templates, or search all templates

Refer to Microsoft Entra ID's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

Here are some common errors and issues with the Microsoft Entra ID node and steps to resolve or troubleshoot them.

Updating the Allow External Senders and Auto Subscribe New Members options fails

You can't update the Allow External Senders and Auto Subscribe New Members options directly after creating a new group. You must wait after creating a group before you can change the values of these options.

When designing workflows that use multiple Microsoft Entra ID nodes to first create groups and then update these options, add a Wait node between the two operations. A Wait node configured to pause for at least two seconds allows time for the group to fully initialize. After the wait, the update operation can complete without erroring.


Cohere Model node

URL: llms-txt#cohere-model-node

Contents:

  • Node Options
  • Templates and examples
  • Related resources

Use the Cohere Model node to use Cohere's models.

On this page, you'll find the node parameters for the Cohere Model node, and links to more resources.

This node lacks tools support, so it won't work with the AI Agent node. Instead, connect it with the Basic LLM Chain node.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Maximum Number of Tokens: Enter the maximum number of tokens used, which sets the completion length.
  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

Templates and examples

Automate Sales Cold Calling Pipeline with Apify, GPT-4o, and WhatsApp

View template details

Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG

by Ezema Kingsley Chibuzo

View template details

Build a Document QA System with RAG using Milvus, Cohere, and OpenAI for Google Drive

View template details

Browse Cohere Model integration templates, or search all templates

Refer to LangChain's Cohere documentation for more information about the service.

View n8n's Advanced AI documentation.


4. Setting Values for Processing Orders

URL: llms-txt#4.-setting-values-for-processing-orders

Contents:

  • Add another node before the Airtable node
  • Configure the Edit Fields node
  • Add data to Airtable
  • What's next?

In this step of the workflow, you will learn how to select and set data before transferring it to Airtable using the Edit Fields (Set) node. After this step, your workflow should look like this:

View workflow file

The next step in Nathan's workflow is to filter the data to only insert the employeeName and orderID of all processing orders into Airtable.

For this, you need to use the Edit Fields (Set) node, which allows you to select and set the data you want to transfer from one node to another.

The Edit Fields node can set completely new data as well as overwrite data that already exists. This node is crucial in workflows which expect incoming data from previous nodes, such as when inserting values into spreadsheets or databases.

Add another node before the Airtable node

In your workflow, add another node on the If node's true connector, before the Airtable node, in the same way we did in the Filtering Orders lesson. Feel free to drag the Airtable node further away if your canvas feels crowded.

Configure the Edit Fields node

Now search for the Edit Fields (Set) node after you've selected the + sign coming off the If node's true connector.

With the Edit Fields node window open, configure these parameters:

  • Ensure Mode is set to Manual Mapping.
  • While you can use the Expression editor we used in the Filtering Orders lesson, this time, let's drag the fields from the Input into the Fields to Set:
    • Drag If > orderID as the first field.
    • Drag If > employeeName as the second field.
  • Ensure that Include Other Input Fields is set to false.

Select Execute step. You should see the following results:

Edit Fields (Set) node

Add data to Airtable

Next, let's insert these values into Airtable:

  1. Go to your Airtable base.

  2. Add a new table called processingOrders.

  3. Replace the existing columns with two new columns:

    • orderID (primary field): Number
    • employeeName: Single line text

If you get stuck, refer to the Inserting data into Airtable lesson.

  4. Delete the three empty rows in the new table.

  5. In n8n, connect the Edit Fields node connector to the Airtable node.

  6. Update the Airtable node configuration to point to the new processingOrders table instead of the orders table.

  7. Test your Airtable node to be sure it inserts records into the new processingOrders table.

At this stage, your workflow should now look like this:

View workflow file

Nathan 🙋: You've already automated half of my work! Now I still need to calculate the booked orders for my colleagues. Can we automate that as well?

You 👩‍🔧: Yes! In the next step, I'll use some JavaScript code in a node to calculate the booked orders.


Max number of finished executions to keep. May not strictly prune back down to the exact max count. Set to 0 for unlimited.

URL: llms-txt#max-number-of-finished-executions-to-keep.-may-not-strictly-prune-back-down-to-the-exact-max-count.-set-to-0-for-unlimited.

export EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000


Mautic credentials

URL: llms-txt#mautic-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using basic auth
  • Using OAuth2
  • Enable the API

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • Basic auth
  • OAuth2

Refer to Mautic's API documentation for more information about the service.

To set up this credential, your Mautic instance must have the API enabled. Refer to Enable the API for instructions.

To configure this credential, you'll need an account on a Mautic instance and:

  • Your URL
  • A Username
  • A Password
  1. In Mautic, go to Configuration > API Settings.
  2. If Enable HTTP basic auth? is set to No, change it to Yes and save. Refer to the API Settings documentation for more information.
  3. In n8n, enter the Base URL of your Mautic instance.
  4. Enter your Mautic Username.
  5. Enter your Mautic Password.

To set up this credential, your Mautic instance must have the API enabled. Refer to Enable the API for instructions.

To configure this credential, you'll need an account on a Mautic instance and:

  • A Client ID: Generated when you create new API credentials.
  • A Client Secret: Generated when you create new API credentials.
  • Your URL
  1. In Mautic, go to Configuration > Settings.

  2. Select API Credentials.

No API Credentials menu

If you don't see the API Credentials option under Configuration > Settings, be sure to Enable the API. If you've enabled the API and you still don't see the option, try manually clearing the cache.

  3. Select the option to Create new client.

  4. Select OAuth 2 as the Authorization Protocol.

  5. Enter a Name for your credential, like n8n integration.

  6. In n8n, copy the OAuth Callback URL and enter it as the Redirect URI in Mautic.

  7. Copy the Client ID from Mautic and enter it in your n8n credential.

  8. Copy the Client Secret from Mautic and enter it in your n8n credential.

  9. Enter the Base URL of your Mautic instance.

Refer to What is Mautic's API? for more information.

To enable the API in your Mautic instance:

  1. Go to Settings > Configuration.
  2. Select API Settings.
  3. Set API enabled? to Yes.
  4. Save your changes.

Refer to How to use the Mautic API for more information.


Kafka node

URL: llms-txt#kafka-node

Contents:

  • Operations
  • Templates and examples

Use the Kafka node to automate work in Kafka, and integrate Kafka with other applications. n8n has built-in support for a wide range of Kafka features, including sending messages.

On this page, you'll find a list of operations the Kafka node supports and links to more resources.

Refer to Kafka credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Templates and examples

Browse Kafka integration templates, or search all templates


Vonage credentials

URL: llms-txt#vonage-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Vonage developer account.

Supported authentication methods

Refer to Vonage's SMS API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key
  • An API Secret

Get your API Key and API Secret from your developer dashboard user account > Settings > API Settings. Refer to Retrieve your account information for more information.


Date and time with Luxon

URL: llms-txt#date-and-time-with-luxon

Contents:

  • Date and time behavior in n8n
  • Setting the timezone in n8n
  • Common tasks
    • Get the current datetime or date

Luxon is a JavaScript library that makes it easier to work with date and time. For full details of how to use Luxon, refer to Luxon's documentation.

n8n passes dates between nodes as strings, so you need to parse them. Luxon makes this easier.

Luxon is a JavaScript library. The two convenience variables created by n8n are available when using Python in the Code node, but their functionality is limited:

  • You can't perform Luxon operations on these variables. For example, there is no Python equivalent for $today.minus(...).
  • The generic Luxon functionality, such as Convert date string to Luxon, isn't available for Python users.

Date and time behavior in n8n

Be aware of the following:

  • In a workflow, n8n converts dates and times to strings between nodes. Keep this in mind when doing arithmetic on dates and times from other nodes.
  • With vanilla JavaScript, you can convert a string to a date with new Date('2019-06-23'). In Luxon, you must use a function explicitly stating the format, such as DateTime.fromISO('2019-06-23') or DateTime.fromFormat("23-06-2019", "dd-MM-yyyy"). See the example after this list.
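As a minimal sketch (the field name previousDate is a placeholder for a date string coming from a previous node), you can combine these parsing functions with Luxon's arithmetic methods in an expression:

{{ DateTime.fromISO($json.previousDate).plus({ days: 7 }).toISO() }}
// Resolves to an ISO timestamp seven days after the incoming date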

Setting the timezone in n8n

Luxon uses the n8n timezone. This value is either:

  • Default: America/New_York
  • A custom timezone for your n8n instance, set using the GENERIC_TIMEZONE environment variable.
  • A custom timezone for an individual workflow, configured in workflow settings.

This section provides examples for some common operations. More examples, and detailed guidance, are available in Luxon's own documentation.

Get the current datetime or date

Use the $now and $today Luxon objects to get the current time or day:

  • $now: a Luxon object containing the current timestamp. Equivalent to DateTime.now().
  • $today: a Luxon object containing the current timestamp, rounded down to the day. Equivalent to DateTime.now().set({ hour: 0, minute: 0, second: 0, millisecond: 0 }).

Note that these variables can return different time formats when cast as a string:

Examples:

Example 1 (unknown):

{{$now}}
// n8n displays the ISO formatted timestamp
// For example 2022-03-09T14:02:37.065+00:00
{{"Today's date is " + $now}}
// n8n displays "Today's date is <unix timestamp>"
// For example "Today's date is 1646834498755"

Example 2 (unknown):

$now
// n8n displays <ISO formatted timestamp>
// For example 2022-03-09T14:00:25.058+00:00
let rightNow = "Today's date is " + $now
// n8n displays "Today's date is <unix timestamp>"
// For example "Today's date is 1646834498755"

Microsoft To Do node

URL: llms-txt#microsoft-to-do-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Microsoft To Do node to automate work in Microsoft To Do, and integrate Microsoft To Do with other applications. n8n has built-in support for a wide range of Microsoft To Do features, including creating, updating, deleting, and getting linked resources, lists, and tasks.

On this page, you'll find a list of operations the Microsoft To Do node supports and links to more resources.

Refer to Microsoft credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Linked Resource
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • List
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • Task
    • Create
    • Delete
    • Get
    • Get All
    • Update

Templates and examples

📂 Automatically Update Stock Portfolio from OneDrive to Excel

View template details

Analyze Email Headers for IP Reputation and Spoofing Detection - Outlook

View template details

Create, update and get a task in Microsoft To Do

View template details

Browse Microsoft To Do integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Shopify node

URL: llms-txt#shopify-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Shopify node to automate work in Shopify, and integrate Shopify with other applications. n8n has built-in support for a wide range of Shopify features, including creating, updating, deleting, and getting orders and products.

On this page, you'll find a list of operations the Shopify node supports and links to more resources.

Refer to Shopify credentials for guidance on setting up authentication.

  • Order
    • Create an order
    • Delete an order
    • Get an order
    • Get all orders
    • Update an order
  • Product
    • Create a product
    • Delete a product
    • Get a product
    • Get all products
    • Update a product

Templates and examples

Promote new Shopify products on Twitter and Telegram

View template details

Run weekly inventories on Shopify sales

View template details

Process Shopify new orders with Zoho CRM and Harvest

View template details

Browse Shopify integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Action Network credentials

URL: llms-txt#action-network-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key
  • Request API access

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Action Network's API documentation for more information about working with the service.

To configure this credential, you'll need an Action Network account with API key access enabled and:

  1. Log in to your Action Network account.
  2. From the Start Organizing menu, select Details > API & Sync.
  3. Select the list you want to generate an API key for.
  4. Generate an API key for that list.
  5. Copy the API Key and enter it in your n8n credential.

Refer to the Action Network API Authentication instructions for more information.

Request API access

Each user account and group on the Action Network has a separate API key to access that user or group's data.

You must explicitly request API access from Action Network, which you can do in one of two ways:

  1. If you're already a paying customer, contact them to request partner access. Partner access includes API key access.
  2. If you're a developer, request a developer account. Once your account request is granted, you'll have API key access.

QRadar credentials

URL: llms-txt#qradar-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a QRadar account.

Supported authentication methods

Refer to QRadar's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

  • An API Key: Also known as an authorized service token. Use the Manage Authorized Services window on the Admin tab to create an authentication token. Refer to Creating an authentication token for more information.

Programmatic-style execute() method

URL: llms-txt#programmatic-style-execute()-method

The main difference between the declarative and programmatic styles is how they handle incoming data and build API requests. The programmatic style requires an execute() method, which reads incoming data and parameters, then builds a request. The declarative style handles requests using the routing key in the operations object.

The execute() method creates and returns an instance of INodeExecutionData.

You must include input and output item pairing information in the data you return. For more information, refer to Paired items.
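As a rough sketch of this shape (not a complete node: the endpoint URL and the resource parameter below are placeholders), an execute() implementation typically loops over the input items, builds one request per item, and returns the results together with paired item information:

import type { IExecuteFunctions, INodeExecutionData } from 'n8n-workflow';

async function execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
  // Read all incoming items
  const items = this.getInputData();
  const returnData: INodeExecutionData[] = [];

  for (let i = 0; i < items.length; i++) {
    // Read a node parameter for the current item
    const resource = this.getNodeParameter('resource', i) as string;

    // Build and send the API request with the built-in request helper
    const response = await this.helpers.httpRequest({
      method: 'GET',
      url: `https://api.example.com/${resource}`, // placeholder endpoint
      json: true,
    });

    // Attach paired item information so n8n can track item lineage
    returnData.push({ json: response, pairedItem: { item: i } });
  }

  return [returnData];
}

In a real node, this is the execute() method of the class that implements the node; it's shown standalone here only to illustrate the overall flow.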


Connections

URL: llms-txt#connections

Contents:

  • Create a connection
  • Delete a connection

A connection establishes a link between nodes to route data through the workflow. A connection between two nodes passes data from one node's output to another node's input.

Create a connection

To create a connection between two nodes, select the grey dot or Add node on the right side of a node and slide the arrow to the grey rectangle on the left side of the following node.

Delete a connection

Hover over the connection, then select Delete.


Deploy a node

URL: llms-txt#deploy-a-node

This section contains details on how to deploy and share your node.


Groq Chat Model node

URL: llms-txt#groq-chat-model-node

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Related resources

Use the Groq Chat Model node to access Groq's large language models for conversational AI and text generation tasks.

On this page, you'll find the node parameters for the Groq Chat Model node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model which will generate the completion. n8n dynamically loads available models from the Groq API. Learn more in the Groq model documentation.

  • Maximum Number of Tokens: Enter the maximum number of tokens used, which sets the completion length.

  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

Templates and examples

Conversational Interviews with AI Agents and n8n Forms

View template details

Telegram chat with PDF

by felipe biava cataneo

View template details

Build an AI-Powered Tech Radar Advisor with SQL DB, RAG, and Routing Agents

View template details

Browse Groq Chat Model integration templates, or search all templates

Refer to Groq's API documentation for more information about the service.

View n8n's Advanced AI documentation.


MongoDB Chat Memory node

URL: llms-txt#mongodb-chat-memory-node

Contents:

  • Node parameters
  • Related resources
  • Single memory instance

Use the MongoDB Chat Memory node to use MongoDB as a memory server for storing chat history.

On this page, you'll find a list of operations the MongoDB Chat Memory node supports, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Session Key: Enter the key to use to store the memory in the workflow data.
  • Collection Name: Enter the name of the collection to store the chat history in. The system will create the collection if it doesn't exist.
  • Database Name: Enter the name of the database to store the chat history in. If not provided, the database from credentials will be used.
  • Context Window Length: Enter the number of previous interactions to consider for context.

Refer to LangChain's MongoDB Chat Message History documentation for more information about the service.

View n8n's Advanced AI documentation.

Single memory instance

If you add more than one MongoDB Chat Memory node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the Chat Memory Manager node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes.


Zep node

URL: llms-txt#zep-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources
  • Single memory instance

This node is deprecated, and will be removed in a future version.

Use the Zep node to use Zep as a memory server.

On this page, you'll find a list of operations the Zep node supports, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Session ID: Enter the ID to use to store the memory in the workflow data.

Templates and examples

Browse Zep integration templates, or search all templates

Refer to LangChain's Zep documentation for more information about the service.

View n8n's Advanced AI documentation.

Single memory instance

If you add more than one Zep node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the Chat Memory Manager node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes.


Update your Cloud version

URL: llms-txt#update-your-cloud-version

Contents:

  • Best practices for updating
  • Automatic update

n8n recommends regularly updating your Cloud version. Check the Release notes to learn more about changes.

Only instance owners can upgrade n8n Cloud versions. Contact your instance owner if you don't have permission to update n8n Cloud.

  1. Log in to the n8n Cloud dashboard
  2. On your dashboard, select Manage.
  3. Use the n8n version dropdown to select your preferred release version:
    • Latest Stable: recommended for most users.
    • Latest Beta: get the newest n8n. This may be unstable.
  4. Select Save Changes to restart your n8n instance and perform the update.
  5. In the confirmation modal, select Confirm.

Best practices for updating

  • Update frequently: this avoids having to jump multiple versions at once, reducing the risk of a disruptive update. Try to update at least once a month.
  • Check the Release notes for breaking changes.
  • Use Environments to create a test version of your instance. Test the update there first.

n8n automatically updates outdated Cloud instances.

If you don't update your instance for 120 days, n8n emails you a warning to update. After a further 30 days, n8n automatically updates your instance.


Postgres node common issues

URL: llms-txt#postgres-node-common-issues

Contents:

  • Dynamically populate SQL IN groups with parameters
  • Working with timestamps and time zones
  • Outputting Date columns as date strings instead of ISO datetime strings

Here are some common errors and issues with the Postgres node and steps to resolve or troubleshoot them.

Dynamically populate SQL IN groups with parameters

In Postgres, you can use the SQL IN comparison construct to make comparisons between groups of values, as shown in Example 1 below.

While you can use n8n expressions in your query to dynamically populate the values in an IN group, combining this with query parameters provides extra protection by automatically sanitizing input.

To construct an IN group query with query parameters:

  1. Set the Operation to Execute Query.

  2. In Options, select Query Parameters.

  3. Use an expression to select an array from the input data. For example, {{ $json.input_shirt_sizes }}.

  4. In the Query parameter, write your query with the IN construct and an empty set of parentheses, as shown in Example 2 below.

  5. Inside of the IN parentheses, use an expression to dynamically create index-based placeholders (like $1, $2, and $3) for the number of items in your query parameter array. You can do this by increasing each array index by one, since the placeholder variables are 1-indexed; Example 3 below shows this technique.

With this technique, n8n automatically creates the correct number of prepared statement placeholders for the IN values according to the number of items in your array.

Working with timestamps and time zones

To avoid complications with how n8n and Postgres interpret timestamp and time zone data, follow these general tips:

  • Use UTC when storing and passing dates: Using UTC helps avoid confusion over timezone conversions when converting dates between different representations and systems.
  • Set the execution timezone: Set the global timezone in n8n using either environment variables (for self-hosted) or in the settings (for n8n Cloud). You can set a workflow-specific timezone in the workflow settings.
  • Use ISO 8601 format: The ISO 8601 format encodes the day of the month, month, year, hour, minutes, and seconds in a standardized string. n8n passes dates between nodes as strings and uses Luxon to parse dates. If you need to cast to ISO 8601 explicitly, you can use the Date & Time node and a custom format set to the string yyyy-MM-dd'T'HH:mm:ss. See the expression example after this list.
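As an illustration of that format string (a sketch using an expression rather than the Date & Time node), the same custom format works with Luxon's toFormat method:

{{ $now.toFormat("yyyy-MM-dd'T'HH:mm:ss") }}
// For example 2025-12-25T14:02:37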

Outputting Date columns as date strings instead of ISO datetime strings

n8n uses the pg package to integrate with Postgres, which affects how n8n processes date, timestamp, and related types from Postgres.

The pg package parses DATE values into new Date(row_value) by default, which produces a date that follows the ISO 8601 datetime string format. For example, a date of 2025-12-25 might produce a datetime string of 2025-12-25T23:00:00.000Z depending on the instance's timezone settings.

To work around this, use the Postgres TO_CHAR function to format the date into the expected format at query time (see Example 4 below).

This will produce the date as a string without the time or timezone components. To continue the earlier example, with this casting, a date of 2025-12-25 would produce the string 2025-12-25. You can find out more in the pg package documentation on dates.

Examples:

Example 1 (unknown):

SELECT color, shirt_size FROM shirts WHERE shirt_size IN ('small', 'medium', 'large');

Example 2 (unknown):

SELECT color, shirt_size FROM shirts WHERE shirt_size IN ();

Example 3 (unknown):

SELECT color, shirt_size FROM shirts WHERE shirt_size IN ({{ $json.input_shirt_sizes.map((i, pos) => "$" + (pos+1)).join(', ') }});

Example 4 (unknown):

SELECT TO_CHAR(date_col, 'YYYY-MM-DD') AS date_col_as_date FROM table_with_date_col

All documentation

URL: llms-txt#all-documentation


Facebook App credentials

URL: llms-txt#facebook-app-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using app access token
    • Create a Meta app
    • Generate an App Access Token
    • Configure the Facebook Trigger
    • Optional: Add an App Secret
    • App review
  • Common issues
    • Unverified apps limit

You can use these credentials to authenticate the following nodes:

Facebook Graph API credentials

If you want to create credentials for the Facebook Graph API node, follow the instructions in the Facebook Graph API credentials documentation.

Supported authentication methods

Refer to Meta's Graph API documentation for more information about the service.

Using app access token

To configure this credential, you'll need a Meta for Developers account and:

  • An app Access Token
  • An optional App Secret: Used to verify the integrity and origin of the payload.

There are five steps in setting up your credential:

  1. Create a Meta app with the Webhooks product.
  2. Generate an App Access Token for that app.
  3. Configure the Facebook trigger.
  4. Optional: Add an app secret.
  5. App Review: Only required if your app's users don't have roles on the app itself. If you're creating the app for your own internal purposes, this isn't necessary.

Refer to the detailed instructions below for each step.

Create a Meta app

To create a Meta app:

  1. Go to the Meta Developer App Dashboard and select Create App.
  2. If you have a business portfolio and you're ready to connect the app to it, select the business portfolio. If you don't have a business portfolio or you're not ready to connect the app to the portfolio, select I don't want to connect a business portfolio yet and select Next. The Use cases page opens.
  3. Select Other, then select Next.
  4. Select Business and Next.
  5. Complete the essential information:
    • Add an App name.
    • Add an App contact email.
    • Here again you can connect to a business portfolio or skip it.
  6. Select Create app.
  7. The Add products to your app page opens.
  8. Select App settings > Basic from the left menu.
  9. Enter a Privacy Policy URL. (Required to take the app "Live.")
  10. Select Save changes.
  11. At the top of the page, toggle the App Mode from Development to Live.
  12. In the left menu, select Add Product.
  13. The Add products to your app page appears. Select Webhooks.
  14. The Webhooks product opens.

Refer to Meta's Create an app documentation for more information on creating an app, required fields like the Privacy Policy URL, and adding products.

For more information on the app modes and switching to Live mode, refer to App Modes and Publish | App Types.

Generate an App Access Token

Next, create an app access token to be used by your n8n credential and the Webhooks product:

  1. In a separate tab or window, open the Graph API explorer.

  2. Select the Meta App you just created in the Access Token section.

  3. In User or Page, select Get App Token.

  4. Select Generate Access Token.

  5. The page prompts you to log in and grant access. Follow the on-screen prompts.

You may receive a warning that the app isn't available. Once you take an app live, there may be a few minutes' delay before you can generate an access token.

  6. Copy the token and enter it in your n8n credential as the Access Token. Save this token somewhere else, too, since you'll need it for the Webhooks configuration.

  7. Save your n8n credential.

Refer to the Meta instructions for Your First Request for more information on generating the token.

Configure the Facebook Trigger

Now that you have a token, you can configure the Facebook Trigger node:

  1. In your Meta app, copy the App ID from the top navigation bar.
  2. In n8n, open your Facebook Trigger node.
  3. Paste the App ID into the APP ID field.
  4. Select Execute step to shift the trigger into listening mode.
  5. Return to the tab or window where your Meta app's Webhooks product configuration is open.
  6. Subscribe to the objects you want to receive Facebook Trigger notifications about. For each subscription:
    1. Copy the Webhook URL from n8n and enter it as the Callback URL in your Meta App.
    2. Enter the Access Token you copied above as the Verify token.
    3. Select Verify and save. (This step fails if you don't have your n8n trigger listening.)
    4. Some webhook subscriptions, like User, prompt you to subscribe to individual events. Subscribe to the events you're interested in.
    5. You can send some Test events from Meta to confirm things are working. If you send a test event, verify its receipt in n8n.

Refer to the Facebook Trigger node documentation for more information.

Optional: Add an App Secret

For added security, Meta recommends adding an App Secret. This signs all API calls with the appsecret_proof parameter. The app secret proof is a sha256 hash of your access token, using your app secret as the key.
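As a minimal sketch of that computation (Node.js-style TypeScript; the secret and token values are placeholders), the proof is an HMAC-SHA256 of the access token keyed with the app secret:

import { createHmac } from 'crypto';

// Placeholder values: use your real app secret and app access token
const appSecret = 'YOUR_APP_SECRET';
const accessToken = 'YOUR_APP_ACCESS_TOKEN';

// appsecret_proof is the HMAC-SHA256 of the access token, keyed with the app secret
const appsecretProof = createHmac('sha256', appSecret)
  .update(accessToken)
  .digest('hex');

// Send this value as the appsecret_proof parameter alongside the access token
console.log(appsecretProof);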

To generate an App Secret:

  1. In Meta while viewing your app, select App settings > Basic from the left menu.
  2. Select Show next to the App secret field.
  3. The page prompts you to re-enter your Facebook account credentials. Once you do so, Meta shows the App Secret.
  4. Highlight it to select it, copy it, and paste this into your n8n credential as the App Secret.
  5. Save your n8n credential.

Refer to the App Secret documentation for more information.

App Review requires Business Verification.

Your app must go through App Review if it will be used by someone who:

  • Doesn't have a role on the app itself.
  • Doesn't have a role in the Business that has claimed the app.

If your only app users are users who have a role on the app itself, App Review isn't required.

As part of the App Review process, you may need to request advanced access for your webhook subscriptions.

Refer to Meta's App Review and Advanced Access documentation for more information.

Unverified apps limit

Facebook only lets you have a developer or administrator role on a maximum of 15 apps that aren't already linked to a Meta Verified Business Account.

Refer to Limitations | Create an app if you're over that limit.


PayPal node

URL: llms-txt#paypal-node

Contents:

  • Operations
  • Templates and examples

Use the PayPal node to automate work in PayPal, and integrate PayPal with other applications. n8n has built-in support for a wide range of PayPal features, including creating a batch payout and canceling unclaimed payout items.

On this page, you'll find a list of operations the PayPal node supports and links to more resources.

Refer to PayPal credentials for guidance on setting up authentication.

  • Payout
    • Create a batch payout
    • Show batch payout details
  • Payout Item
    • Cancels an unclaimed payout item
    • Show payout item details

Templates and examples

Create a PayPal batch payout

View template details

Receive updates when a billing plan is activated in PayPal

View template details

Automate Digital Delivery After PayPal Purchase Using n8n

View template details

Browse PayPal integration templates, or search all templates


Plan and Execute Agent node

URL: llms-txt#plan-and-execute-agent-node

Contents:

  • Node parameters
    • Prompt
    • Require Specific Output Format
  • Node options
    • Human Message Template
  • Templates and examples
  • Common issues

The Plan and Execute Agent is like the ReAct agent but with a focus on planning. It first creates a high-level plan to solve the given task and then executes the plan step by step. This agent is most useful for tasks that require a structured approach and careful planning.

Refer to AI Agent for more information on the AI Agent node itself.

Configure the Plan and Execute Agent using the following parameters.

Select how you want the node to construct the prompt (also known as the user's query or input from the chat).

  • Take from previous node automatically: If you select this option, the node expects an input from a previous node called chatInput.
  • Define below: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the Prompt (User Message) field.

Require Specific Output Format

This parameter controls whether you want the node to require a specific output format. When turned on, n8n prompts you to connect one of these output parsers to the node:

Refine the Plan and Execute Agent node's behavior using these options:

Human Message Template

Enter a message that n8n will send to the agent during each step execution.

Available LangChain expressions:

  • {previous_steps}: Contains information about the previous steps the agent's already completed.
  • {current_step}: Contains information about the current step.
  • {agent_scratchpad}: Information to remember for the next iteration.

Templates and examples

Refer to the main AI Agent node's Templates and examples section.

For common questions or issues and suggested solutions, refer to Common issues.


Zendesk credentials

URL: llms-txt#zendesk-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

  • Zendesk

  • Zendesk Trigger

  • Create a Zendesk account.

  • For API token authentication, enable token access to the API in Admin Center under Apps and integrations > APIs > Zendesk APIs.

Supported authentication methods

Refer to Zendesk's API documentation for more information about the service.

To configure this credential, you'll need:

  • Your Subdomain: Your Zendesk subdomain is the portion of the URL between https:// and .zendesk.com. For example, if the Zendesk URL is https://n8n-example.zendesk.com/agent/dashboard, the subdomain is n8n-example.
  • An Email address: Enter the email address you use to log in to Zendesk.
  • An API Token: Generate an API token in Apps and integrations > APIs > Zendesk API. Refer to API token for more information.

To configure this credential, you'll need:

  • A Client ID: Generated when you create a new OAuth client.
  • A Client Secret: Generated when you create a new OAuth client.
  • Your Subdomain: Your Zendesk subdomain is the portion of the URL between https:// and .zendesk.com. For example, if the Zendesk URL is https://n8n-example.zendesk.com/agent/dashboard, the subdomain is n8n-example.

To create a new OAuth client, go to Apps and integrations > APIs > Zendesk API > OAuth Clients.

  • Copy the OAuth Redirect URL from n8n and enter it as a Redirect URL in the OAuth client.
  • Copy the Unique identifier for the Zendesk client and enter this as your n8n Client ID.
  • Copy the Secret from Zendesk and enter this as your n8n Client Secret.

Refer to Registering your application with Zendesk for more information.


Tips and common issues

URL: llms-txt#tips-and-common-issues

Contents:

  • Combining multiple triggers
  • Avoiding evaluation breaking the chat
  • Accessing tool data when calculating metrics
  • Multiple evaluations in the same workflow
  • Dealing with inconsistent results

Combining multiple triggers

If you have another trigger in the workflow already, you have two potential starting points: that trigger and the evaluation trigger. To make sure your workflow works as expected no matter which trigger executes, you will need to merge these branches together.

Logic to merge two trigger branches together so that they have the same data format and can be referenced from a single node.

  1. Get the data format of the other trigger:
    • Execute the other trigger.
    • Open it and navigate to the JSON view of its output pane.
    • Click the copy button on the right.
  2. Re-shape the evaluation trigger data to match (see the sketch after this list):
    • Insert an Edit Fields (Set) node after the evaluation trigger and connect them together.
    • Change its mode to JSON.
    • Paste your data into the 'JSON' field, removing the [ and ] on the first and last lines.
    • Switch the field type to Expression.
    • Map in the data from the trigger by dragging it from the input pane.
    • For strings, make sure to replace the entire value (including the quotes) and add .toJsonString() to the end of the expression.
  3. Merge the branches using a 'No-op' node: Insert a No-op node and wire both the other trigger and the Set node up to it. The 'No-op' node just outputs whatever input it receives.
  4. Reference the 'No-op' node outputs in the rest of the workflow: Since both paths will flow through this node with the same format, you can be sure that your input data will always be there.
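As a sketch of step 2 (assuming, purely for illustration, that the other trigger outputs chatInput and sessionId fields and that the evaluation dataset has query and session_id columns), the JSON field of the Edit Fields (Set) node might end up looking like this:

{
  "chatInput": {{ $json.query.toJsonString() }},
  "sessionId": {{ $json.session_id.toJsonString() }}
}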

Avoiding evaluation breaking the chat

n8n's internal chat reads the output data of the last executed node in the workflow. After adding an evaluation node with the 'set outputs' operation, this data may not be in the expected format, or may not even contain the chat response.

The solution is to add an extra branch coming out of your agent. Lower branches execute later in n8n, which means any node you attach to this branch will execute last. You can use a no-op node here since it only needs to pass the agent output through.

Accessing tool data when calculating metrics

Sometimes you need to know what happened in executed sub-nodes of an agent, for example to check whether it executed a tool. You can't reference these nodes directly with expressions, but you can enable the Return intermediate steps option in the agent. This will add an extra output field called intermediateSteps which you can use in later nodes:
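For example, a later node could use an expression along these lines to check whether the agent called a particular tool (a sketch: the node name 'AI Agent' and the tool name are placeholders, and the exact shape of each step can vary):

{{ $('AI Agent').item.json.intermediateSteps.some(step => step.action?.tool === 'calculator') }}
// Resolves to true if the agent called that tool during the run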

Multiple evaluations in the same workflow

You can only have one evaluation set up per workflow. In other words, you can only have one evaluation trigger per workflow.

Even so, you can still test different parts of your workflow with different evaluations by putting those parts in sub-workflows and evaluating each sub-workflow.

Dealing with inconsistent results

Metrics can often have noise: they may be different across evaluation runs of the exact same workflow. This is because the workflow itself may return different results, or any LLM-based metrics might have natural variation in them.

You can compensate for this by duplicating the rows of your dataset, so that each row appears more than once in the dataset. Since this means that each input will effectively be running multiple times, it will smooth out any variations.


Azure OpenAI credentials

URL: llms-txt#azure-openai-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key
  • Using Azure Entra ID (OAuth2)
    • Register an application
    • Generate a client secret
  • Setting custom scopes

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API key
  • Azure Entra ID (OAuth2)

Refer to Azure OpenAI's API documentation for more information about the service.

To configure this credential, you'll need:

  • A Resource Name: the Name you give the resource
  • An API key: Key 1 works well. This can be accessed before deployment in Keys and Endpoint.
  • The API Version the credentials should use. See the Azure OpenAI API preview lifecycle documentation for more information about API versioning in Azure OpenAI.

To get the information above, create and deploy an Azure OpenAI Service resource.

Model name for Azure OpenAI nodes

Once you deploy the resource, use the Deployment name as the model name for the Azure OpenAI nodes where you're using this credential.

Using Azure Entra ID (OAuth2)

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

For self-hosted users, there are two main steps to configure OAuth2 from scratch:

  1. Register an application with the Microsoft Identity Platform.
  2. Generate a client secret for that application.

Follow the detailed instructions for each step below. For more detail on the Microsoft OAuth2 web flow, refer to Microsoft authentication and authorization basics.

Register an application

Register an application with the Microsoft Identity Platform:

  1. Open the Microsoft Application Registration Portal.
  2. Select Register an application.
  3. Enter a Name for your app.
  4. In Supported account types, select Accounts in any organizational directory (Any Azure AD directory - Multi-tenant) and personal Microsoft accounts (for example, Skype, Xbox).
  5. In Register an application:
    1. Copy the OAuth Callback URL from your n8n credential.
    2. Paste it into the Redirect URI (optional) field.
    3. Select Select a platform > Web.
  6. Select Register to finish creating your application.
  7. Copy the Application (client) ID and paste it into n8n as the Client ID.

Refer to Register an application with the Microsoft Identity Platform for more information.

Generate a client secret

With your application created, generate a client secret for it:

  1. On your Microsoft application page, select Certificates & secrets in the left navigation.
  2. In Client secrets, select + New client secret.
  3. Enter a Description for your client secret, such as n8n credential.
  4. Select Add.
  5. Copy the Secret in the Value column.
  6. Paste it into n8n as the Client Secret.
  7. Select Connect my account in n8n to finish setting up the connection.
  8. Log in to your Microsoft account and allow the app to access your info.

Refer to Microsoft's Add credentials for more information on adding a client secret.

Setting custom scopes

Azure Entra ID credentials use the following scopes by default:

To select different scopes for your credentials, enable the Custom Scopes slider and edit the Enabled Scopes list. Keep in mind that some features may not work as expected with more restrictive scopes.


HELP n8n_scaling_mode_queue_jobs_active Current number of jobs being processed across all workers in scaling mode.

URL: llms-txt#help-n8n_scaling_mode_queue_jobs_active-current-number-of-jobs-being-processed-across-all-workers-in-scaling-mode.


Chat Trigger node

URL: llms-txt#chat-trigger-node

Contents:

  • Node parameters
    • Make Chat Publicly Available
    • Mode
    • Authentication
    • Initial Message(s)
  • Node options
    • Hosted chat options
    • Embedded chat options
  • Templates and examples
  • Related resources

Use the Chat Trigger node when building AI workflows for chatbots and other chat interfaces. You can configure how users access the chat, using one of n8n's provided interfaces, or your own. You can add authentication.

You must connect either an agent or chain root node.

Workflow execution usage

Every message to the Chat Trigger executes your workflow. This means that one conversation where a user sends 10 messages uses 10 executions from your execution allowance. Check your payment plan for details of your allowance.

This node replaces the Manual Chat Trigger node from version 1.24.0.

Make Chat Publicly Available

Set whether the chat should be publicly available (turned on) or only available through the manual chat interface (turned off).

Leave this turned off while you're building the workflow. Turn it on when you're ready to activate the workflow and allow users to access the chat.

Mode

Choose how users access the chat. Select from:

  • Hosted Chat: Use n8n's hosted chat interface. Recommended for most users because you can configure the interface using the node options and don't have to do any other setup.
  • Embedded Chat: This option requires you to create your own chat interface. You can use n8n's chat widget or build your own. Your chat interface must call the webhook URL shown in Chat URL in the node.

Authentication

Choose whether and how to restrict access to the chat. Select from:

  • None: The chat doesn't use authentication. Anyone can use the chat.
  • Basic Auth: The chat uses basic authentication.
    • Select or create a Credential for Basic Auth with a username and password. All users must use the same username and password.
  • n8n User Auth: Only users logged in to an n8n account can use the chat.

Initial Message(s)

This parameter's only available if you're using Hosted Chat. Use it to configure the message the n8n chat interface displays when the user arrives on the page.

Available options depend on the chat mode.

Hosted chat options

Allowed Origin (CORS)

Set the origins that can access the chat URL. Enter a comma-separated list of URLs allowed for cross-origin non-preflight requests.

Use * (default) to allow all origins.

Input Placeholder, Title, and Subtitle

Enter the text for these elements in the chat interface.

Load Previous Session

Select whether to load chat messages from a previous chat session.

If you select any option other than Off, you must connect the Chat trigger and the Agent you're using to a memory sub-node. The memory connector on the Chat trigger appears when you set Load Previous Session to From Memory. n8n recommends connecting both the Chat trigger and Agent to the same memory sub-node, as this ensures a single source of truth for both nodes.

Response Mode

Use this option when building a workflow with steps after the agent or chain that's handling the chat. Choose from:

  • When Last Node Finishes: The Chat Trigger node returns the response code and the data output from the last node executed in the workflow.
  • Using Response Nodes: The Chat Trigger node responds as defined in a Respond to Chat node or Respond to Webhook node. In this response mode, the Chat Trigger will solely show messages as defined in these nodes and not output the data from the last node executed in the workflow.

This mode replaces the 'Using Respond to Webhook Node' mode from version 1.2 of the Chat Trigger node.

  • Streaming response: Enables real-time data streaming back to the user as the workflow processes. Requires nodes with streaming support in the workflow (for example, the AI agent node).

Require Button Click to Start Chat

Set whether to display a New Conversation button on the chat interface (turned on) or not (turned off).

Embedded chat options

Allowed Origin (CORS)

Set the origins that can access the chat URL. Enter a comma-separated list of URLs allowed for cross-origin non-preflight requests.

Use * (default) to allow all origins.

Load Previous Session

Select whether to load chat messages from a previous chat session.

If you select any option other than Off, you must connect the Chat trigger and the Agent you're using to a memory sub-node. The memory connector on the Chat trigger appears when you set Load Previous Session to From Memory. n8n recommends connecting both the Chat trigger and Agent to the same memory sub-node, as this ensures a single source of truth for both nodes.

Response Mode

Use this option when building a workflow with steps after the agent or chain that's handling the chat. Choose from:

  • When Last Node Finishes: The Chat Trigger node returns the response code and the data output from the last node executed in the workflow.
  • Using Response Nodes: The Chat Trigger node responds as defined in a Respond to Chat node or Respond to Webhook node. In this response mode, the Chat Trigger will solely show messages as defined in these nodes and not output the data from the last node executed in the workflow.

This mode replaces the 'Using Respond to Webhook Node' mode from version 1.2 of the Chat Trigger node.

  • Streaming response: Enables real-time data streaming back to the user as the workflow processes. Requires nodes with streaming support enabled.

Templates and examples

RAG Starter Template using Simple Vector Stores, Form trigger and OpenAI

View template details

Unify multiple triggers into a single workflow

by Guillaume Duvernay

View template details

Trigger Outbound Vapi AI Voice Calls From New Jotform Submissions

View template details

Browse Chat Trigger integration templates, or search all templates

View n8n's Advanced AI documentation.

Set the chat response manually

You need to set the chat response manually when you don't want to send the output of an Agent or Chain node directly to the user, for example because you want to modify it or process it further before sending it back.

In a basic workflow, the Agent and Chain nodes output a parameter named either output or text, and the Chat trigger sends the value of this parameter to the user as the chat response.

If you need to manually create the response sent to the user, you must create a parameter named either text or output. If you use a different parameter name, the Chat trigger sends the entire object as its response, not just the value.
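For example, a minimal Code node sketch that builds a custom reply from an agent's output before the Chat Trigger returns it (the upstream field name output and the added prefix are assumptions for illustration):

```javascript
// Hypothetical sketch: wrap the agent's reply in extra text and return it
// under the `output` key so the Chat Trigger sends only this value back.
const reply = $input.first().json.output ?? '';

return [
  {
    json: {
      output: `Here's what I found:\n\n${reply}`,
    },
  },
];
```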

When you are using a Respond to Chat node to manually create the response sent to the user, you must set the Chat Trigger response mode to 'Using Response Nodes'.

For common questions or issues and suggested solutions, refer to Common Issues.


AWS Bedrock Chat Model node

URL: llms-txt#aws-bedrock-chat-model-node

Contents:

  • Node parameters
  • Node options
  • Proxy limitations
  • Templates and examples
  • Related resources

The AWS Bedrock Chat Model node lets you use large language models (LLMs) available through the AWS Bedrock platform.

On this page, you'll find the node parameters for the AWS Bedrock Chat Model node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model that generates the completion.

Learn more about available models in the Amazon Bedrock model documentation.

  • Maximum Number of Tokens: Enter the maximum number of tokens used, which sets the completion length.
  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

Proxy limitations

This node doesn't support the NO_PROXY environment variable.

Templates and examples

Browse AWS Bedrock Chat Model integration templates, or search all templates

Refer to LangChains's AWS Bedrock Chat Model documentation for more information about the service.

View n8n's Advanced AI documentation.


Database structure

URL: llms-txt#database-structure

Contents:

  • Database and query technology
  • Tables
    • auth_identity
    • auth_provider_sync_history
    • credentials_entity
    • event_destinations
    • execution_data
    • execution_entity
    • execution_metadata
    • installed_nodes

This page describes the purpose of each table in the n8n database.

Database and query technology

By default, n8n uses SQLite as the database. If you use another database, the structure is similar, but the data types may differ depending on the database.

n8n uses TypeORM for queries and migrations.

To inspect the n8n database, you can use DBeaver, which is an open-source universal database tool.

Tables

These are the tables n8n creates during setup.

auth_identity

Stores details of external authentication providers when using SAML.

auth_provider_sync_history

Stores the history of a SAML connection.

credentials_entity

Stores the credentials used to authenticate with integrations.

event_destinations

Contains the destination configurations for Log streaming.

execution_data

Contains the workflow at time of running, and the execution data.

execution_entity

Stores all saved workflow executions. Workflow settings can affect which executions n8n saves.

execution_metadata

Stores Custom executions data.

installed_nodes

Lists the community nodes installed in your n8n instance.

installed_packages

Details of npm community nodes packages installed in your n8n instance. installed_nodes lists each individual node. installed_packages lists npm packages, which may contain more than one node.

migrations

A log of all database migrations. Read more about Migrations in TypeORM's documentation.

Lists the projects in your instance.

Describes the relationship between a user and a project, including the user's role type.

Not currently used. For use in future work on custom roles.

Records custom instance settings. These are settings that you can't control using environment variables. They include:

  • Whether the instance owner is set up
  • Whether the user chose to skip owner and user management setup
  • Whether certain types of authentication, including SAML and LDAP, are on
  • License key

shared_credentials

Maps credentials to users.

Maps workflows to users.

tag_entity

All workflow tags created in the n8n instance. This table lists the tags. workflows_tags records which workflows have which tags.

Stores variables.

Records the active webhooks in your n8n instance's workflows. This isn't just webhooks used in the Webhook node. It includes all active webhooks used by any trigger node.

Your n8n instance's saved workflows.

Stores previous versions of workflows.

workflow_statistics

Counts workflow IDs and their status.

workflows_tags

Maps tags to workflows. tag_entity contains tag details.

Entity Relationship Diagram (ERD)


optional:

URL: llms-txt#optional:

Contents:

  • Required permissions
  • TLS
  • SQLite

Required permissions

n8n needs to create and modify the schemas of the tables it uses.

Recommended permissions:

CREATE DATABASE "n8n-db";
CREATE USER "n8n-user" WITH PASSWORD 'random-password';
GRANT ALL PRIVILEGES ON DATABASE "n8n-db" TO "n8n-user";

TLS

export DB_POSTGRESDB_SSL_CA=$(pwd)/ca.crt
export DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=false

You can choose between these configurations:

- Not declaring (default): Connect with `SSL=off`
- Declaring only the CA and the reject-unauthorized flag: Connect with `SSL=on` and verify the server's certificate
- Declaring `DB_POSTGRESDB_SSL_CERT` and `DB_POSTGRESDB_SSL_KEY` in addition to the above: Use the certificate and key for client TLS authentication

SQLite

This is the default database n8n uses if you don't configure another one.

The database file is located at: `~/.n8n/database.sqlite`

Question and Answer Chain node common issues

URL: llms-txt#question-and-answer-chain-node-common-issues

Contents:

  • No prompt specified error
  • A Retriever sub-node must be connected error
  • Can't produce longer responses

Here are some common errors and issues with the Question and Answer Chain node and steps to resolve or troubleshoot them.

No prompt specified error

This error displays when the Prompt is empty or invalid.

You might see this in one of two scenarios:

  1. When you've set the Prompt to Define below and have an expression in your Text that isn't generating a value.
    • To resolve, enter a valid prompt in the Text field.
    • Make sure any expressions reference valid fields and that they resolve to valid input rather than null.
  2. When you've set the Prompt to Connected Chat Trigger Node and the incoming data has null values.
    • To resolve, make sure your input contains a chatInput field. Add an Edit Fields (Set) node to rename an incoming field to chatInput (see the sketch after this list).
    • Remove any null values from the chatInput field of the input node.
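As an alternative to the Edit Fields (Set) node, a minimal Code node sketch that copies a hypothetical message field into chatInput and drops items where it's null might look like this (the field name message is an assumption):

```javascript
// Hypothetical sketch: map an incoming `message` field to `chatInput`
// and skip items where the value is missing or null.
return $input.all()
  .filter(item => item.json.message != null)
  .map(item => ({
    json: {
      ...item.json,
      chatInput: item.json.message,
    },
  }));
```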

A Retriever sub-node must be connected error

This error displays when n8n tries to execute the node without having a Retriever connected.

To resolve this, click the + Retriever button at the bottom of your screen when the node is open, or click the Retriever + connector when the node isn't open. n8n will then open a selection of possible Retrievers to pick from.

Can't produce longer responses

If you need to generate longer responses than the Question and Answer Chain node produces by default, you can try one or more of the following techniques:

  • Connect a more verbose model: Some AI models produce more terse results than others. Swapping your model for one with a larger context window and more verbose output can increase the word length of your responses.
  • Increase the maximum number of tokens: Many model nodes (for example the OpenAI Chat Model) include a Maximum Number of Tokens option. You can set this to increase the maximum number of tokens the model can use to produce a response.
  • Build larger responses in stages: For more detailed answers, you may want to construct replies in stages using a variety of AI nodes. You can use AI to split a single question into multiple prompts and create responses for each. You can then compose a final reply by combining the responses. Though the details are different, you can find a good example of the general idea in this template for writing a WordPress post with AI.

Workflow-level executions list

URL: llms-txt#workflow-level-executions-list

Contents:

  • View executions for a single workflow
  • Filter executions
  • Retry failed workflows

The Executions list in a workflow shows all executions for that workflow.

When you delete a workflow, n8n deletes its execution history as well. This means you can't view executions for deleted workflows.

Execution history and workflow history

Don't confuse the execution list with Workflow history.

Executions are workflow runs. With the executions list, you can see previous runs of the current version of the workflow. You can copy previous executions into the editor to Debug and re-run past executions in your current workflow.

Workflow history is previous versions of the workflow: for example, a version with a different node, or different parameters set.

View executions for a single workflow

In the workflow, select the Executions tab in the top menu. You can preview all executions of that workflow.

You can filter the executions list.

  1. In your workflow, select Executions.
  2. Select Filters.
  3. Enter your filters. You can filter by:
    • Status: choose from Failed, Running, Success, or Waiting.
    • Execution start: see executions that started within the given time period.
    • Saved custom data: this is data you create within the workflow using the Code node. Enter the key and value to filter. Refer to Custom executions data for information on adding custom data, and see the sketch below for an example.

Custom executions data is available on:

  • Cloud: Pro, Enterprise
  • Self-Hosted: Enterprise, registered Community
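To have something to filter on, you first need to save custom data during the execution. A minimal Code node sketch, assuming your n8n version exposes $execution.customData and using a hypothetical customerId field:

```javascript
// Hypothetical sketch: save a key/value pair as custom execution data
// so you can filter the executions list by it later.
const customerId = $input.first().json.customerId ?? 'unknown';
$execution.customData.set('customerId', String(customerId));

return $input.all();
```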

Retry failed workflows

If your workflow execution fails, you can retry the execution. To retry a failed workflow:

  1. Open the Executions list.
  2. For the workflow execution you want to retry, select the Refresh icon.
  3. Select either of the following options to retry the execution:
    • Retry with currently saved workflow: Once you make changes to your workflow, you can select this option to execute the workflow with the previous execution data.
    • Retry with original workflow: If you want to retry the execution without making changes to your workflow, you can select this option to retry the execution with the previous execution data.

Microsoft OneDrive Trigger node

URL: llms-txt#microsoft-onedrive-trigger-node

Contents:

  • Events
  • Related resources

Use the Microsoft OneDrive Trigger node to respond to events in Microsoft OneDrive and integrate Microsoft OneDrive with other applications. n8n has built-in support for file and folder events in OneDrive.

On this page, you'll find a list of events the Microsoft OneDrive Trigger node can respond to and links to more resources.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Microsoft OneDrive integrations page.

  • On File Created
  • On File Updated
  • On Folder Created
  • On Folder Updated

n8n provides an app node for Microsoft OneDrive. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Microsoft's OneDrive API documentation for more information about the service.


6. Notifying the Team

URL: llms-txt#6.-notifying-the-team

Contents:

  • What's next?

In this step of the workflow, you will learn how to send messages to a Discord channel using the Discord node. After this step, your workflow should look like this:

View workflow file

Now that you have a calculated summary of the booked orders, you need to notify Nathan's team in their Discord channel. For this workflow, you will send messages to the n8n server on Discord.

Before you begin the steps below, use the link above to connect to the n8n server on Discord. Be sure you can access the #course-level-1 channel.

Communication app nodes

You can replace the Discord node with another communication app. For example, n8n also has nodes for Slack and Mattermost.

In your workflow, add a Discord node connected to the Code node.

When you search for the Discord node, look for Message Actions and select Send a message to add the node.

In the Discord node window, configure these parameters:

  • Connection Type: Select Webhook.

  • Credential for Discord Webhook: Select - Create New Credential -.

    • Copy the Webhook URL from the email you received when you signed up for this course and paste it into the Webhook URL field of the credentials.
    • Select Save and then close the credentials dialog.
  • Operation: Select Send a Message.

  • Message:

    • Select the Expression tab on the right side of the Message field.
    • Copy the text below and paste it into the Expression window, or construct it manually using the Expression Editor.

Now select Execute step in the Discord node. If all works well, you should see this output in n8n:

Discord node output

And your message should appear in the Discord channel #course-level-1:

Nathan 🙋: Incredible, you've saved me hours of tedious work already! Now I can execute this workflow when I need it. I just need to remember to run it every Monday morning at 9 AM.

You 👩‍🔧: Don't worry about that, you can actually schedule the workflow to run on a specific day, time, or interval. I'll set this up in the next step.

Examples:

Example 1 (unknown):

This week we've {{$json["totalBooked"]}} booked orders with a total value of {{$json["bookedSum"]}}. My Unique ID: {{ $('HTTP Request').params["headerParameters"]["parameters"][0]["value"] }}

Rocket.Chat credentials

URL: llms-txt#rocket.chat-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following nodes:

  • Rocket.Chat

  • Create a Rocket.Chat account.

  • Your account must have the create-personal-access-tokens permission to generate personal access tokens.

Supported authentication methods

  • API access token

Refer to Rocket.Chat's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • Your User ID: Displayed when you generate an access token.
  • An Auth Key: Your personal access token. To generate an access token, go to your avatar > Account > Personal Access Tokens. Copy the token and add it as the n8n Auth Key.
  • Your Rocket.Chat Domain: Also known as your default URL or workspace URL.

Refer to Personal Access Tokens for more information.


Simple Memory node

URL: llms-txt#simple-memory-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources
  • Common issues

Use the Simple Memory node to persist chat history in your workflow.

On this page, you'll find a list of operations the Simple Memory node supports, and links to more resources.

Don't use this node if running n8n in queue mode

If your n8n instance uses queue mode, this node doesn't work in an active production workflow. This is because n8n can't guarantee that every call to Simple Memory will go to the same worker.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Configure these parameters to configure the node:

  • Session Key: Enter the key to use to store the memory in the workflow data.
  • Context Window Length: Enter the number of previous interactions to consider for context.

Templates and examples

Chat with GitHub API Documentation: RAG-Powered Chatbot with Pinecone & OpenAI

View template details

🤖 Create a Documentation Expert Bot with RAG, Gemini, and Supabase

View template details

🤖 Build a Documentation Expert Chatbot with Gemini RAG Pipeline

View template details

Browse Simple Memory node documentation integration templates, or search all templates

Refer to LangChain's Buffer Window Memory documentation for more information about the service.

View n8n's Advanced AI documentation.

For common questions or issues and suggested solutions, refer to Common issues.


Webex by Cisco Trigger node

URL: llms-txt#webex-by-cisco-trigger-node

Webex by Cisco is a web conferencing and videoconferencing application.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Webex by Cisco Trigger integrations page.


QuickChart node

URL: llms-txt#quickchart-node

Contents:

  • Operations
  • Templates and examples
  • Related resources

Use the QuickChart node to automate work in QuickChart, and integrate QuickChart with other applications. n8n has built-in support for a wide range of QuickChart chart types, including bar, doughnut, line, pie, and polar charts.

On this page, you'll find a list of operations the QuickChart node supports and links to more resources.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Create a chart by selecting the chart type:

  • Chart Type
    • Bar Chart
    • Doughnut Chart
    • Line Chart
    • Pie Chart
    • Polar Chart

Templates and examples

AI Agent with charts capabilities using OpenAI Structured Output and Quickchart

View template details

Visualize your SQL Agent queries with OpenAI and Quickchart.io

View template details

📊Multi-AI Agent Chatbot for Postgres/Supabase DB and QuickCharts + Tool Router

View template details

Browse QuickChart integration templates, or search all templates

Refer to QuickChart's API documentation for more information about the service.


License Key

URL: llms-txt#license-key

Contents:

  • Add a license key using the UI
  • Add a license key using an environment variable
  • Allowlist the license server IP addresses

To enable certain licensed features, you must first activate your license. You can do this either through the UI or by setting environment variables.

Add a license key using the UI

In your n8n instance:

  1. Log in as Admin or Owner.
  2. Select Settings > Usage and plan.
  3. Select Enter activation key.
  4. Paste in your license key.
  5. Select Activate.

Add a license key using an environment variable

In your n8n configuration, set N8N_LICENSE_ACTIVATION_KEY to your license key. If the instance already has an activated license, this variable will have no effect.

Refer to Environment variables to learn more about configuring n8n.

Allowlist the license server IP addresses

n8n uses Cloudflare to host the license server. As the specific IP addresses can change, you need to allowlist the full range of Cloudflare IP addresses to ensure n8n can always reach the license server.


Convenience methods

URL: llms-txt#convenience-methods

n8n provides these methods to make it easier to perform common tasks in expressions.

You can use Python in the Code node. It isn't available in expressions.

Method Description Available in Code node?
$evaluateExpression(expression: string, itemIndex?: number) Evaluates a string as an expression. If you don't provide itemIndex, n8n uses the data from item 0 in the Code node.
$ifEmpty(value, defaultValue) The $ifEmpty() function takes two parameters, tests the first to check if it's empty, then returns either the parameter (if not empty) or the second parameter (if the first is empty). The first parameter is empty if it's: - undefined - null - An empty string '' - An array where value.length returns false - An object where Object.keys(value).length returns false
$if() The $if() function takes three parameters: a condition, the value to return if true, and the value to return if false.
$max() Returns the highest of the provided numbers.
$min() Returns the lowest of the provided numbers.
Method Description
_evaluateExpression(expression: string, itemIndex?: number) Evaluates a string as an expression. If you don't provide itemIndex, n8n uses the data from item 0 in the Code node.
_ifEmpty(value, defaultValue) The _ifEmpty() function takes two parameters, tests the first to check if it's empty, then returns either the parameter (if not empty) or the second parameter (if the first is empty). The first parameter is empty if it's: - undefined - null - An empty string '' - An array where value.length returns false - An object where Object.keys(value).length returns false
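As a quick illustration of the expression variants above, here are some example expressions; the field names total and nickname are hypothetical:

```
{{ $ifEmpty($json.nickname, 'anonymous') }}
{{ $if($json.total > 100, 'large order', 'small order') }}
{{ $max(2, 10, $json.total) }}
{{ $min(2, 10, $json.total) }}
```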

Using source control and environments

URL: llms-txt#using-source-control-and-environments

  • Available on Enterprise.

  • You must be an n8n instance owner or instance admin to enable and configure source control.

  • Instance owners and instance admins can push changes to and pull changes from the connected repository.

  • Project admins can push changes to the connected repository. They can't pull changes from the repository.

  • Push and pull: Send work to Git, and fetch work from Git to your instance. Understand what gets committed, and how n8n handles merge conflicts.

  • Copy work between environments: How to copy work between different n8n instances.


Baserow node

URL: llms-txt#baserow-node

Contents:

  • Operations
  • Templates and examples

Use the Baserow node to automate work in Baserow, and integrate Baserow with other applications. n8n has built-in support for a wide range of Baserow features, including creating, getting, retrieving, and updating rows.

On this page, you'll find a list of operations the Baserow node supports and links to more resources.

Refer to Baserow credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Row
    • Create a row
    • Delete a row
    • Retrieve a row
    • Retrieve all rows
    • Update a row

Templates and examples

All-in-One Telegram/Baserow AI Assistant 🤖🧠 Voice/Photo/Save Notes/Long Term Mem

View template details

User Enablement Demo

View template details

Create AI Videos with OpenAI Scripts, Leonardo Images & HeyGen Avatars

View template details

Browse Baserow integration templates, or search all templates


Perplexity credentials

URL: llms-txt#perplexity-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API key

Refer to Perplexity's API documentation for more information about the service.

To configure this credential, you'll need a Perplexity account and:

  • An API Key

Refer to Perplexity's API documentation for more information about authenticating to the service.


Integrations

URL: llms-txt#integrations

Contents:

  • Built-in nodes
  • Community nodes
  • Credential-only nodes and custom operations
  • Generic integrations
  • Where to go next

n8n calls integrations nodes.

Nodes are the building blocks of workflows in n8n. They're an entry point for retrieving data, a function to process data, or an exit for sending data. The data process includes filtering, recomposing, and changing data. There can be one or several nodes for your API, service or app. You can connect multiple nodes, which allows you to create complex workflows.

n8n includes a collection of built-in integrations. Refer to Built-in nodes for documentation on all n8n's built-in nodes.

As well as using the built-in nodes, you can also install community-built nodes. Refer to Community nodes for more information.

Credential-only nodes and custom operations

One of the most complex parts of setting up API calls is managing authentication. n8n provides credentials support for operations and services beyond those supported by built-in nodes.

  • Custom operations for existing nodes: n8n supplies hundreds of nodes to create workflows that link multiple products. However, some nodes don't include all the possible operations supported by a product's API. You can work around this by making a custom API call using the HTTP Request node.
  • Credential-only nodes: n8n includes credential-only nodes. These are integrations where n8n supports setting up credentials for use in the HTTP Request node, but doesn't provide a standalone node. You can find a credential-only node in the nodes panel, as you would for any other integration.

Refer to Custom operations for more information.

Generic integrations

If you need to connect to a service where n8n doesn't have a node, or a credential-only node, you can still use the HTTP Request node. Refer to the node page for details on how to set up authentication and create your API call.

  • If you want to create your own node, head over to the Creating Nodes section.
  • Check out Community nodes to learn about installing and managing community-built nodes.
  • If you'd like to learn more about the different nodes in n8n, their functionalities and example usage, check out n8n's node libraries: Core nodes, Actions, and Triggers.
  • If you'd like to learn how to add the credentials for the different nodes, head over to the Credentials section.

Webhook node

URL: llms-txt#webhook-node

Contents:

  • Workflow development process
  • Node parameters
    • Webhook URLs
    • HTTP Method
    • Path
    • Supported authentication methods
    • Respond
    • Response Code
    • Response Data
  • Node options

Use the Webhook node to create webhooks, which can receive data from apps and services when an event occurs. It's a trigger node, which means it can start an n8n workflow. This allows services to connect to n8n and run a workflow.

You can use the Webhook node as a trigger for a workflow when you want to receive data and run a workflow based on the data. The Webhook node also supports returning the data generated at the end of a workflow. This makes it useful for building a workflow to process data and return the results, like an API endpoint.

The webhook allows you to trigger workflows from services that don't have a dedicated app trigger node.

Workflow development process

n8n provides different Webhook URLs for testing and production. The testing URL includes an option to Listen for test event. Refer to Workflow development for more information on building, testing, and shifting your Webhook node to production.

Use these parameters to configure your node.

The Webhook node has two Webhook URLs: test and production. n8n displays the URLs at the top of the node panel.

Select Test URL or Production URL to toggle which URL n8n displays.

Sample Webhook URLs in the Webhook node's Parameters tab

  • Test: n8n registers a test webhook when you select Listen for Test Event or Execute workflow, if the workflow isn't active. When you call the webhook URL, n8n displays the data in the workflow.
  • Production: n8n registers a production webhook when you activate the workflow. When using the production URL, n8n doesn't display the data in the workflow. You can still view workflow data for a production execution: select the Executions tab in the workflow, then select the workflow execution you want to view.

HTTP Method

The Webhook node supports standard HTTP Request Methods: DELETE, GET, HEAD, PATCH, POST, and PUT.

The webhook maximum payload size is 16MB. If you're self-hosting n8n, you can change this using the endpoint environment variable N8N_PAYLOAD_SIZE_MAX.

Path

By default, this field contains a randomly generated webhook URL path, to avoid conflicts with other webhook nodes.

You can manually specify a URL path, including adding route parameters. For example, you may need to do this if you use n8n to prototype an API and want consistent endpoint URLs.

The Path field can take the following formats:

  • /:variable
  • /path/:variable
  • /:variable/path
  • /:variable1/path/:variable2
  • /:variable1/:variable2
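As a sketch (the path and parameter name are hypothetical), if you set the Path to /orders/:orderId, the route parameter is typically available under params in the Webhook node's output, and a node placed directly after the trigger can read it with an expression:

```
{{ $json.params.orderId }}
```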

Supported authentication methods

You can require authentication for any service calling your webhook URL. Choose from these authentication methods:

  • Basic auth
  • Header auth
  • JWT auth
  • None

Refer to Webhook credentials for more information on setting up each credential type.

Respond

  • Immediately: The Webhook node returns the response code and the message Workflow got started.
  • When Last Node Finishes: The Webhook node returns the response code and the data output from the last node executed in the workflow.
  • Using 'Respond to Webhook' Node: The Webhook node responds as defined in the Respond to Webhook node.
  • Streaming response: Enables real-time data streaming back to the user as the workflow processes. Requires nodes with streaming support in the workflow (for example, the AI agent node).

Response Code

Customize the HTTP response code that the Webhook node returns upon successful execution. Select from common response codes or create a custom code.

Response Data

Choose what data to include in the response body:

  • All Entries: The Webhook returns all the entries of the last node in an array.
  • First Entry JSON: The Webhook returns the JSON data of the first entry of the last node in a JSON object.
  • First Entry Binary: The Webhook returns the binary data of the first entry of the last node in a binary file.
  • No Response Body: The Webhook returns without a body.

Applies only to Respond > When Last Node Finishes.

Node options

Select Add Option to view more configuration options. The available options depend on your node parameters. Refer to the table for option availability.

  • Allowed Origins (CORS): Set the permitted cross-origin domains. Enter a comma-separated list of URLs allowed for cross-origin non-preflight requests. Use * (default) to allow all origins.
  • Binary Property: Enabling this setting allows the Webhook node to receive binary data, such as an image or audio file. Enter the name of the binary property to write the data of the received file to.
  • Ignore Bots: Ignore requests from bots like link previewers and web crawlers.
  • IP(s) Whitelist: Enable this to limit who (or what) can invoke a Webhook trigger URL. Enter a comma-separated list of allowed IP addresses. Access from IP addresses outside the whitelist throws a 403 error. If left blank, all IP addresses can invoke the webhook trigger URL.
  • No Response Body: Enable this to prevent n8n sending a body with the response.
  • Raw Body: Specify that the Webhook node will receive data in a raw format, such as JSON or XML.
  • Response Content-Type: Choose the format for the webhook body.
  • Response Data: Send custom data with the response.
  • Response Headers: Send extra headers in the Webhook response. Refer to MDN Web Docs | Response header to learn more about response headers.
  • Property Name: By default, n8n returns all available data. You can choose to return a specific JSON key, so that n8n returns only that key's value.
| Option | Required node configuration |
| --- | --- |
| Allowed Origins (CORS) | Any |
| Binary Property | Either: HTTP Method > POST, HTTP Method > PATCH, or HTTP Method > PUT |
| Ignore Bots | Any |
| IP(s) Whitelist | Any |
| Property Name | Both: Respond > When Last Node Finishes and Response Data > First Entry JSON |
| No Response Body | Respond > Immediately |
| Raw Body | Any |
| Response Code | Any except Respond > Using 'Respond to Webhook' Node |
| Response Content-Type | Both: Respond > When Last Node Finishes and Response Data > First Entry JSON |
| Response Data | Respond > Immediately |
| Response Headers | Any |

How n8n secures HTML responses

Starting with n8n version 1.103.0, n8n automatically wraps HTML responses to webhooks in <iframe> tags. This is a security mechanism to protect the instance users.

This has the following implications:

  • HTML renders in a sandboxed iframe instead of directly in the parent document.
  • JavaScript code that attempts to access the top-level window or local storage will fail.
  • Authentication headers aren't available in the sandboxed iframe (for example, basic auth). You need to use an alternative approach, like embedding a short-lived access token within the HTML.
  • Relative URLs (for example, <form action="/">) won't work. Use absolute URLs instead.

Templates and examples

📚 Auto-generate documentation for n8n workflows with GPT and Docsify

View template details

Automate Customer Support with Mintlify Documentation & Zendesk AI Agent

View template details

Transform Cloud Documentation into Security Baselines with OpenAI and GDrive

by Raphael De Carvalho Florencio

View template details

Browse Webhook node documentation integration templates, or search all templates

For common questions or issues and suggested solutions, refer to Common issues.


Returns all items the node "IF" outputs (index: 0 which is Output "true" of its most recent run)

URL: llms-txt#returns-all-items-the-node-"if"-outputs-(index:-0-which-is-output-"true"-of-its-most-recent-run)

allItems = _("IF").all();


Netscaler ADC node

URL: llms-txt#netscaler-adc-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Netscaler ADC node to automate work in Netscaler ADC, and integrate Netscaler ADC with other applications. n8n has built-in support for a wide range of Netscaler ADC features, including creating and installing certificates and files.

On this page, you'll find a list of operations the Netscaler ADC node supports and links to more resources.

Refer to Netscaler ADC credentials for guidance on setting up authentication.

  • Certificate
    • Create
    • Install
  • File
    • Delete
    • Download
    • Upload

Templates and examples

Browse Netscaler ADC integration templates, or search all templates

Refer to Netscaler ADC's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


QuickBooks credentials

URL: llms-txt#quickbooks-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create an Intuit developer account.

Supported authentication methods

  • OAuth2

Refer to Intuit's API documentation for more information about the service.

To configure this credential, you'll need:

  • A Client ID: Generated when you create an app.
  • A Client Secret: Generated when you create an app.
  • An Environment: Select whether this credential should access your Production or Sandbox environment.

To generate your Client ID and Client Secret, create an app.

Use these settings when creating your app:

  • Select appropriate scopes for your app. Refer to Learn about scopes for more information.
  • Enter the OAuth Redirect URL from n8n as a Redirect URI in the app's Development > Keys & OAuth section.
  • Copy the Client ID and Client Secret from the app's Development > Keys & OAuth section to enter in n8n. Refer to Get the Client ID and Client Secret for your app for more information.

Refer to Intuit's Set up OAuth 2.0 documentation for more information on the entire process.

Environment selection

If you're creating a new app from scratch, start with the Sandbox environment. Production apps need to fulfill all Intuit's requirements. Refer to Intuit's Publish your app documentation for more information.


Onfleet node

URL: llms-txt#onfleet-node

Contents:

  • Operations
  • Templates and examples

Use the Onfleet node to automate work in Onfleet, and integrate Onfleet with other applications. n8n has built-in support for a wide range of Onfleet features, including creating and deleting tasks in Onfleet as well as retrieving organizations' details.

On this page, you'll find a list of operations the Onfleet node supports and links to more resources.

Refer to Onfleet credentials for guidance on setting up authentication.

  • Admin
    • Create a new Onfleet admin
    • Delete an Onfleet admin
    • Get all Onfleet admins
    • Update an Onfleet admin
  • Container
    • Add task at index (or append)
    • Get container information
    • Fully replace a container's tasks
  • Destination
    • Create a new destination
    • Get a specific destination
  • Hub
    • Create a new Onfleet hub
    • Get all Onfleet hubs
    • Update an Onfleet hub
  • Organization
    • Retrieve your own organization's details
    • Retrieve the details of an organization with which you are connected
  • Recipient
    • Create a new Onfleet recipient
    • Get a specific Onfleet recipient
    • Update an Onfleet recipient
  • Task
    • Create a new Onfleet task
    • Clone an Onfleet task
    • Force-complete a started Onfleet task
    • Delete an Onfleet task
    • Get all Onfleet tasks
    • Get a specific Onfleet task
    • Update an Onfleet task
  • Team
    • Automatically dispatch tasks assigned to a team to on-duty drivers
    • Create a new Onfleet team
    • Delete an Onfleet team
    • Get a specific Onfleet team
    • Get all Onfleet teams
    • Get estimated times for upcoming tasks for a team, returns a selected driver
    • Update an Onfleet team
  • Worker
    • Create a new Onfleet worker
    • Delete an Onfleet worker
    • Get a specific Onfleet worker
    • Get all Onfleet workers
    • Get a specific Onfleet worker schedule
    • Update an Onfleet worker

Templates and examples

Send a Whatsapp message via Twilio when a certain Onfleet event happens

View template details

Create a QuickBooks invoice on a new Onfleet Task creation

View template details

Send a Discord message when a certain Onfleet event happens

View template details

Browse Onfleet integration templates, or search all templates


Google Slides node

URL: llms-txt#google-slides-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Google Slides node to automate work in Google Slides, and integrate Google Slides with other applications. n8n has built-in support for a wide range of Google Slides features, including creating presentations, and getting pages.

On this page, you'll find a list of operations the Google Slides node supports and links to more resources.

Refer to Google credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Page
    • Get a page
    • Get a thumbnail
  • Presentation
    • Create a presentation
    • Get a presentation
    • Get presentation slides
    • Replace text in a presentation

Templates and examples

AI-Powered Post-Sales Call Automated Proposal Generator

View template details

Dynamically replace images in Google Slides via API

View template details

Get all the slides from a presentation and get thumbnails of pages

View template details

Browse Google Slides integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


MySQL credentials

URL: llms-txt#mysql-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using database connection

You can use these credentials to authenticate the following nodes:

The Agent node doesn't support SSH tunnels.

Create a user account on a MySQL server database.

Supported authentication methods

  • Database connection

Refer to MySQL's documentation for more information about the service.

Using database connection

To configure this credential, you'll need:

  • The server Host: The database's host name or IP address.
  • The Database name.
  • A User name.
  • A Password for that user.
  • The Port number used by the MySQL server.
  • Connect Timeout: The number of milliseconds to wait for the initial database connection before timing out.
  • SSL: If your database is using SSL, turn this on and add details for the SSL certificate.
  • SSH Tunnel: Choose whether to connect over an SSH tunnel. An SSH tunnel lets un-encrypted traffic pass over an encrypted connection and enables authorized remote access to servers protected from outside connections by a firewall.

To set up your database connection credential:

  1. Enter your database's hostname as the Host in your n8n credential. Run this query to confirm the hostname:

  2. Enter your database's name as the Database in your n8n credential. Run this query to confirm the database name:

  3. Enter the username of a User in the database. This user should have appropriate permissions for whatever actions you want n8n to perform.

  4. Enter the Password for that user.

  5. Enter the Port number used by the MySQL server (default is 3306). Run this query to confirm the port number:

  6. Enter the Connect Timeout you'd like the node to use. This is the number of milliseconds the node waits for the initial database connection before timing out. n8n defaults to 10000, matching MySQL's default connect_timeout of 10 seconds. If you want to match your database's connect_timeout, run this query to get it, then multiply the result by 1000 before entering it in n8n:

  7. If your database uses SSL and you'd like to use SSL for the connection, turn this option on in the credential. If you turn it on, enter the information from your MySQL SSL certificate in these fields:

    1. Enter the ca.pem file contents in the CA Certificate field.
    2. Enter the client-key.pem file contents in the Client Private Key field.
    3. Enter the client-cert.pem file contents in the Client Certificate field.
  8. If you want to use SSH Tunnel for the connection, turn this option on in the credential. Otherwise, skip it. If you turn it on:

    1. Select the SSH Authenticate with option to set the SSH Tunnel type to build:
      • Select Password if you want to connect to SSH using a password.
      • Select Private Key if you want to connect to SSH using an identity file (private key) and a passphrase.
    2. Enter the SSH Host. n8n uses this host to create the SSH URI formatted as: [user@]host:port.
    3. Enter the SSH Port. n8n uses this port to create the SSH URI formatted as: [user@]host:port.
    4. Enter the SSH User to connect with. n8n uses this user to create the SSH URI formatted as: [user@]host:port.
    5. If you selected Password for SSH Authenticate with, add the SSH Password.
    6. If you selected Private Key for SSH Authenticate with:
      1. Add the contents of the Private Key or identity file used for SSH. This is the same as using the ssh-identity-file option with the shell.connect() command in MySQL Shell.
      2. If the Private Key was created with a passphrase, enter that Passphrase. This is the same as using the ssh-identity-pass option with the shell.connect() command in MySQL Shell. If the Private Key has no passphrase, leave this field blank.

Refer to MySQL | Creating SSL and RSA Certificates and Keys for more information on working with SSL certificates in MySQL. Refer to MySQL | Using an SSH Tunnel for more information on working with SSH tunnels in MySQL.

Examples:

Example 1 (unknown):

SHOW VARIABLES WHERE Variable_name = 'hostname';

Example 2 (unknown):

SHOW DATABASES;

Example 3 (unknown):

SHOW VARIABLES WHERE Variable_name = 'port';

Example 4 (unknown):

SHOW VARIABLES WHERE Variable_name = 'connect_timeout';

Malcore credentials

URL: llms-txt#malcore-credentials

Contents:

  • Prerequisites
  • Related resources
  • Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Malcore account.

Refer to Malcore's API documentation for more information about authenticating with the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

  • An API Key: Get an API Key from your Account > API.

Refer to Using the Malcore API for more information.


Raindrop credentials

URL: llms-txt#raindrop-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth

You can use these credentials to authenticate the following nodes:

Create a Raindrop account.

Supported authentication methods

  • OAuth2

Refer to Raindrop's API documentation for more information about the service.

To configure this credential, you'll need:

  • A Client ID
  • A Client Secret

Generate both by creating a Raindrop app.

To create an app, go to Settings > Integrations and select + Create new app in the For Developers section.

Use these settings for your app:

  • Copy the OAuth Redirect URL from n8n and add it as a Redirect URI in your app.
  • Copy the Client ID and Client Secret from the Raindrop app and enter them in your n8n credential.

Gmail Trigger node

URL: llms-txt#gmail-trigger-node

Contents:

  • Events
  • Node parameters
  • Node filters
  • Related resources
  • Common issues

Gmail is an email service developed by Google. The Gmail Trigger node can start a workflow based on events in Gmail.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Gmail Trigger integrations page.

  • Message Received: The node triggers for new messages at the selected Poll Time.

Configure the node with these parameters:

  • Credential to connect with: Select or create a new Google credential to use for the trigger. Refer to Google credentials for more information on setting up a new credential.
  • Poll Times: Select a poll Mode to set how often to trigger the poll. Your Mode selection will add or remove relevant fields. Refer to Poll Mode options to configure the parameters for each mode type.
  • Simplify: Choose whether to return a simplified version of the response (turned on, default) or the raw data (turned off).
    • The simplified version returns email message IDs, labels, and email headers, including: From, To, CC, BCC, and Subject.

Use these filters to further refine the node's behavior:

  • Include Spam and Trash: Select whether the node should trigger on new messages in the Spam and Trash folders (turned on) or not (turned off).
  • Label Names or IDs: Only trigger on messages with the selected labels added to them. Select the Label names you want to apply or enter an expression to specify IDs. The dropdown populates based on the Credential you selected.
  • Search: Enter Gmail search refine filters, like from:, to trigger the node on the filtered conditions only. Refer to Refine searches in Gmail for more information.
  • Read Status: Choose whether to receive Unread and read emails, Unread emails only (default), or Read emails only.
  • Sender: Enter an email or a part of a sender name to trigger only on messages from that sender.

n8n provides an app node for Gmail. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Google's Gmail API documentation for details about their API.

For common questions or issues and suggested solutions, refer to Common issues.


Wufoo credentials

URL: llms-txt#wufoo-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Wufoo account.

Supported authentication methods

  • API key

Refer to Wufoo's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: Get your API key from the Wufoo Form Manager. To the right of a form, select More > API Information. Refer to Using API Information and Webhooks for more information.
  • A Subdomain: Your subdomain is the part of your Wufoo URL that comes after https:// and before wufoo.com. So if the full domain is https://n8n.wufoo.com, the subdomain is n8n. Admins can view the subdomain in the Account Manager. Refer to Your Subdomain for more information.

LDAP

URL: llms-txt#ldap

Contents:

  • Operations
  • Compare
  • Create
  • Delete
  • Rename
  • Search
    • Search options
  • Update
  • Templates and examples

This node allows you to interact with your LDAP servers to create, find, and update objects.

You can find authentication information for this node here.

Refer to the sections below for details on configuring the node for each operation.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Configure this operation using these parameters:

  • Credential to connect with: Select or create an LDAP credential to connect with.
  • DN: Enter the Distinguished Name (DN) of the entry to compare.
  • Attribute ID: Enter the ID of the attribute to compare.
  • Value: Enter the value to compare.

Configure this operation using these parameters:

  • Credential to connect with: Select or create an LDAP credential to connect with.
  • DN: Enter the Distinguished Name (DN) of the entry to create.
  • Attributes: Add the Attribute ID/Value pairs you'd like to create.

Configure this operation using these parameters:

  • Credential to connect with: Select or create an LDAP credential to connect with.
  • DN: Enter the Distinguished Name (DN) of the entry to be deleted.

Configure this operation using these parameters:

  • Credential to connect with: Select or create an LDAP credential to connect with.
  • DN: Enter the current Distinguished Name (DN) of the entry to rename.
  • New DN: Enter the new Distinguished Name (DN) for the entry in this field.

Configure this operation using these parameters:

  • Credential to connect with: Select or create an LDAP credential to connect with.
  • Base DN: Enter the Distinguished Name (DN) of the subtree to search in.
  • Search For: Select the directory object class to search for.
  • Attribute: Select the attribute to search for.
  • Search Text: Enter the text to search for. Use * for a wildcard.
  • Return All: When turned on, the node will return all results. When turned off, the node will return results up to the set Limit.
  • Limit: Only available when you turn off Return All. Enter the maximum number of results to return.

You can also configure this operation using these options:

  • Attribute Names or IDs: Enter a comma-separated list of attributes to return. Choose from the list or specify IDs using an expression.
  • Page Size: Enter the maximum number of results to request at one time. Set to 0 to disable paging.
  • Scopes: The set of entries at or below the Base DN to search for potential matches. Select from:
    • Base Tree: Often referred to as subordinateSubtree or just "subordinates," selecting this option will search the subordinates of the Base DN entry but not the Base DN entry itself.
    • Single Level: Often referred to as "one," selecting this option will search only the immediate children of the Base DN entry.
    • Whole Subtree: Often referred to as "sub," selecting this option will search the Base DN entry and all its subordinates to any depth.

Refer to The LDAP Search Operation for more information on search scopes.
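For example, to find every person entry under a subtree whose common name starts with "jo", you might configure the Search operation like this (the Base DN and object class are illustrative and depend on your directory):

Base DN: ou=people,dc=example,dc=com
Search For: inetOrgPerson
Attribute: cn
Search Text: jo*
Scopes: Whole Subtree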

Configure this operation using these parameters:

  • Credential to connect with: Select or create an LDAP credential to connect with.
  • DN: Enter the Distinguished Name (DN) of the entry to update.
  • Update Attributes: Select whether to Add new, Remove existing, or Replace existing attributes.
  • Then enter the Attribute ID/Value pair you'd like to update.

Templates and examples

Adaptive RAG with Google Gemini & Qdrant: Context-Aware Query Answering

View template details

Adaptive RAG Strategy with Query Classification & Retrieval (Gemini & Qdrant)

View template details

OpenAI Responses API Adapter for LLM and AI Agent Workflows

View template details

Browse LDAP integration templates, or search all templates


Split Out

URL: llms-txt#split-out

Contents:

  • Node parameters
    • Field to Split Out
    • Include
  • Node options
    • Disable Dot Notation
    • Destination Field Name
    • Include Binary
  • Templates and examples
  • Related resources

Use the Split Out node to separate a single data item containing a list into multiple items. For example, you might have a single item containing a list of customers and want to split it so that you get one item per customer.

Configure this node using the following parameters.

Field to Split Out

Enter the field containing the list you want to separate out into individual items.

If you're working with binary data inputs, use $binary in an expression to set the field to split out.
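For example, a single input item with a hypothetical customers field:

{
  "customers": [
    { "name": "Ada" },
    { "name": "Grace" }
  ]
}

With Field to Split Out set to customers, the node outputs two items, one for { "name": "Ada" } and one for { "name": "Grace" }.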

Select whether and how you want n8n to keep any other fields from the input data with each new individual item.

  • No Other Fields: No other fields will be included.
  • All Other Fields: All other fields will be included.
  • Selected Other Fields: Only the selected fields will be included.
    • Fields to Include: Enter a comma separated list of the fields you want to include.

Disable Dot Notation

By default, n8n enables dot notation to reference child fields in the format parent.child. Use this option to disable dot notation (turned on) or to continue using dot notation (turned off).
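For example, with dot notation enabled (the default), an input item like { "customer": { "name": "Ada" } } (field names illustrative) lets you reference the nested field as customer.name in Field to Split Out or Fields to Include.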

Destination Field Name

Enter the field in the output where the split field contents should go.

Choose whether to include binary data from the input in the new output (turned on) or not (turned off).

Templates and examples

Scrape and summarize webpages with AI

View template details

Scrape business emails from Google Maps without the use of any third party APIs

View template details

Automated Web Scraping: email a CSV, save to Google Sheets & Microsoft Excel

View template details

Browse Split Out integration templates, or search all templates

Learn more about data structure and data flow in n8n workflows.


Webflow Trigger node

URL: llms-txt#webflow-trigger-node

Webflow is an application that allows you to build responsive websites with browser-based visual editing software.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Webflow Trigger integrations page.


Gumroad Trigger node

URL: llms-txt#gumroad-trigger-node

Gumroad is an online platform that enables creators to sell products directly to consumers.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Gumroad Trigger integrations page.


Twilio node

URL: llms-txt#twilio-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Twilio node to automate work in Twilio, and integrate Twilio with other applications. n8n supports sending MMS/SMS and WhatsApp messages with Twilio.

On this page, you'll find a list of operations the Twilio node supports and links to more resources.

Refer to Twilio credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • SMS
    • Send SMS/MMS/WhatsApp message
  • Call
    • Make a phone call using text-to-speech to say a message

Templates and examples

Handling Appointment Leads and Follow-up With Twilio, Cal.com and AI

View template details

Automate Lead Qualification with RetellAI Phone Agent, OpenAI GPT & Google Sheet

View template details

Enhance Customer Chat by Buffering Messages with Twilio and Redis

View template details

Browse Twilio integration templates, or search all templates

Refer to Twilio's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Google Drive Trigger node common issues

URL: llms-txt#google-drive-trigger-node-common-issues

Contents:

  • 401 unauthorized error
  • Handling more than one file change

Here are some common errors and issues with the Google Drive Trigger node and steps to resolve or troubleshoot them.

401 unauthorized error

The full text of the error looks like this:

This error occurs when there's an issue with the credential you're using and its scopes or permissions.

  1. For OAuth2 credentials, make sure you've enabled the Google Drive API in APIs & Services > Library. Refer to Google OAuth2 Single Service - Enable APIs for more information.
  2. For Service Account credentials:
    1. Enable domain-wide delegation.
    2. Make sure you add the Google Drive API as part of the domain-wide delegation configuration.

Handling more than one file change

The Google Drive Trigger node polls Google Drive for changes at a set interval (once every minute by default).

If multiple changes to the Watch For criteria occur during the polling interval, a single Google Drive Trigger event occurs containing the changes as items. To handle this, your workflow must account for times when the data might contain more than one item.

You can use an If node or a Switch node to change your workflow's behavior depending on whether the data from the Google Drive Trigger node contains a single item or multiple items.
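For example, in an If node you could compare the number of items received from the trigger against 1 by using an expression like this as the condition's first value (the node name Google Drive Trigger is illustrative and should match your trigger node's name):

{{ $('Google Drive Trigger').all().length }}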

Examples:

Example 1 (unknown):

401 - {"error":"unauthorized_client","error_description":"Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested."}

Taiga Trigger node

URL: llms-txt#taiga-trigger-node

Taiga is a free and open-source project management platform for startups, agile developers, and designers.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Taiga Trigger integrations page.


Oracle Database node

URL: llms-txt#oracle-database-node

Contents:

  • Operations
    • Delete
    • Execute SQL
    • Insert
    • Insert or Update
    • Select
    • Update
  • Related resources
  • Use bind parameters
  • Use n8n Expressions for bind values

Use the Oracle Database node to automate work in Oracle Database, and integrate Oracle Database with other applications. n8n has built-in support for a wide range of Oracle Database features which includes executing an SQL statement, fetching, inserting, updating or deleting data from Oracle Database. This node uses the node-oracledb driver internally.

On this page, you'll find a list of operations the Oracle Database node supports and links to more resources.

Refer to Oracle Database credentials for guidance on setting up authentication.

Requires Oracle Database 19c or later. For thick mode, use Oracle Client Libraries 19c or later.

Use this operation to delete an entire table or rows in a table.

Enter these parameters:

  • Credential to connect with: Create or select an existing Oracle Database credential.

  • Operation: Select Delete.

  • Schema: Choose the schema that contains the table you want to work on. Select From list to choose the schema from the dropdown list or By Name to enter the schema name.

  • Table: Choose the table that you want to work on. Select From list to choose the table from the dropdown list, or select By Name to enter the table name.

  • Command: The deletion action to take:

    • Truncate: Removes the table's data but preserves the table's structure.
    • Delete: Delete the rows that match the "Select Rows" condition. If you don't select anything, Oracle Database deletes all rows.
      • Select Rows: Define a Column, Operator, and Value to match rows on. You can pass the value as a string or as JSON using an expression.
      • Combine Conditions: How to combine the conditions in "Select Rows". AND requires all conditions to be true, while OR requires at least one condition to be true.
    • Drop: Deletes the table's data and structure permanently.
  • Auto Commit: When this property is set to true, the transaction in the current connection is automatically committed at the end of statement execution.

  • Statement Batching: The way to send statements to the database:

    • Single Statement: A single statement for all incoming items.
    • Independently: Execute one statement per incoming item of the execution.
    • Transaction: Execute all statements in a transaction. If a failure occurs, Oracle Database rolls back all changes.

Use this operation to execute an SQL statement.

Enter these parameters:

  • Credential to connect with: Create or select an existing Oracle Database credential.

  • Operation: Select Execute SQL.

  • Statement: The SQL statement to execute. You can use n8n expressions and positional parameters like :1, :2, or named parameters like :name, :id to use with Use bind parameters. To run a PL/SQL procedure, for example demo, you can use:

Execute Statement options

  • Auto Commit: When this property is set to true, the transaction in the current connection is automatically committed at the end of statement execution.
  • Bind Variable Placeholder Values: Enter the values for the bind parameters used in the statement Use bind parameters.
  • Output Numbers As String: Indicates if the numbers should be retrieved as a String.
  • Fetch Array Size: This property is a number that sets the size of an internal buffer used for fetching query rows from Oracle Database. Changing it may affect query performance but does not affect how many rows are returned to the application.
  • Number of Rows to Prefetch: This property is a query tuning option to set the number of additional rows the underlying Oracle driver fetches during the internal initial statement execution phase of a query.
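As an illustrative sketch (reusing the FRUITS example table from the Use bind parameters section below), a statement with positional parameters could look like this:

SELECT * FROM FRUITS WHERE FRUIT_NAME = :1 AND COLOR = :2

You would then supply the two values in Bind Variable Placeholder Values, for example as expressions such as {{ $json.FRUIT_NAME }} and {{ $json.COLOR }}.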

Use this operation to insert rows in a table.

Enter these parameters:

  • Credential to connect with: Create or select an existing Oracle Database credential.

  • Operation: Select Insert.

  • Schema: Choose the schema that contains the table you want to work on. Select From list to choose the schema from the dropdown list or By Name to enter the schema name.

  • Table: Choose the table that you want to work on. Select From list to choose the table from the dropdown list, or select By Name to enter the table name.

  • Mapping Column Mode: How to map column names to incoming data:

    • Map Each Column Manually: Select the values to use for each column Use n8n expressions for bind values.
    • Map Automatically: Automatically map incoming data to matching column names in Oracle Database. The incoming data field names must match the column names in Oracle Database for this to work. If necessary, consider using the edit fields (set) node before this node to adjust the format as needed.
  • Auto Commit: When this property is set to true, the transaction in the current connection is automatically committed at the end of statement execution.

  • Output Columns: Choose which columns to output. You can select from a list of available columns or specify IDs using expressions.

  • Statement Batching: The way to send statements to the database:

    • Single Statement: A single statement for all incoming items.
    • Independently: Execute one statement per incoming item of the execution.
    • Transaction: Execute all statements in a transaction. If a failure occurs, Oracle Database rolls back all changes.

Use this operation to insert or update rows in a table.

Enter these parameters:

  • Credential to connect with: Create or select an existing Oracle Database credential.
  • Operation: Select Insert or Update.
  • Schema: Choose the schema that contains the table you want to work on. Select From list to choose the schema from the dropdown list or By Name to enter the schema name.
  • Table: Choose the table that you want to work on. Select From list to choose the table from the dropdown list, or select By Name to enter the table name.
  • Mapping Column Mode: How to map column names to incoming data:
    • Map Each Column Manually: Select the values to use for each column Use n8n expressions for bind values.
    • Map Automatically: Automatically map incoming data to matching column names in Oracle Database. The incoming data field names must match the column names in Oracle Database for this to work. If necessary, consider using the edit fields (set) node before this node to adjust the format as needed.

Insert or Update options

  • Auto Commit: When this property is set to true, the transaction in the current connection is automatically committed at the end of statement execution.
  • Output Columns: Choose which columns to output. You can select from a list of available columns or specify IDs using expressions.
  • Statement Batching: The way to send statements to the database:
    • Single Statement: A single statement for all incoming items.
    • Independently: Execute one statement per incoming item of the execution.
    • Transaction: Execute all statements in a transaction. If a failure occurs, Oracle Database rolls back all changes.

Use this operation to select rows in a table.

Enter these parameters:

  • Credential to connect with: Create or select an existing Oracle Database credential.
  • Operation: Select Select.
  • Schema: Choose the schema that contains the table you want to work on. Select From list to choose the schema from the dropdown list or By Name to enter the schema name.
  • Table: Choose the table that you want to work on. Select From list to choose the table from the dropdown list, or select By Name to enter the table name.
  • Return All: Whether to return all results or only up to a given limit.
  • Limit: The maximum number of items to return when Return All is disabled.
  • Select Rows: Set the conditions to select rows. Define a Column, Operator, and Value (as JSON) to match rows on. The Value format can vary by type. For example, with Fixed mode:
    • String: "hello", hellowithoutquotes, "hello with space"
    • Number: 12
    • JSON: { "key": "val" }

If you don't select anything, Oracle Database selects all rows.

  • Combine Conditions: How to combine the conditions in Select Rows. AND requires all conditions to be true, while OR requires at least one condition to be true.

  • Sort: Choose how to sort the selected rows. Choose a Column from a list or by ID and a sort Direction.

  • Auto Commit: When this property is set to true, the transaction in the current connection is automatically committed at the end of statement execution.

  • Output Numbers As String: Indicates if the numbers should be retrieved as a String.

  • Fetch Array Size: This property is a number that sets the size of an internal buffer used for fetching query rows from Oracle Database. Changing it may affect query performance but does not affect how many rows are returned to the application.

  • Number of Rows to Prefetch: This property is a query tuning option to set the number of additional rows the underlying Oracle driver fetches during the internal initial statement execution phase of a query.

Use this operation to update rows in a table.

Enter these parameters:

  • Credential to connect with: Create or select an existing Oracle Database credential.

  • Operation: Select Update.

  • Schema: Choose the schema that contains the table you want to work on. Select From list to choose the schema from the dropdown list or By Name to enter the schema name.

  • Table: Choose the table that you want to work on. Select From list to choose the table from the dropdown list, or select By Name to enter the table name.

  • Mapping Column Mode: How to map column names to incoming data:

    • Map Each Column Manually: Select the values to use for each column Use n8n expressions for bind values.
    • Map Automatically: Automatically map incoming data to matching column names in Oracle Database. The incoming data field names must match the column names in Oracle Database for this to work. If necessary, consider using the edit fields (set) node before this node to adjust the format as needed.
  • Auto Commit: When this property is set to true, the transaction in the current connection is automatically committed at the end of statement execution.

  • Output Columns: Choose which columns to output. You can select from a list of available columns or specify IDs using expressions.

  • Statement Batching: The way to send statements to the database:

    • Single Statement: A single statement for all incoming items.
    • Independently: Execute one statement per incoming item of the execution.
    • Transaction: Execute all statements in a transaction. If a failure occurs, Oracle Database rolls back all changes.

Refer to SQL Language Reference for more information about the service.

Refer to node-oracledb documentation for more information about the node-oracledb driver.

Use bind parameters

When creating a statement to run on an Oracle database instance, you can use the Bind Variable Placeholder Values field in the Options section to load data into the statement. n8n sanitizes data in statement parameters, which prevents SQL injection.

For example, you would want to find specific fruits by their color. Given the following input data:

You can write a statement like:

Then in Bind Variable Placeholder Values, provide the field values to use. You can provide fixed values or expressions. For this example, use expressions so the node can pull the color from each input item in turn:

Use n8n Expressions for bind values

For Values to Send, you can provide inputs using n8n Expressions. Below are examples for different data types. You can either enter constant values or reference fields from previous items ($json):

  • JSON
    • Constant: {{ { k1: "v1", k2: "v2" } }}
    • From a previous item: {{ $json.COL_JSON }}
  • Vector
    • Constant: {{ [1, 2, 3, 4.5] }}
    • From a previous item: {{ $json.COL_VECTOR }}
  • BLOB
    • Constant: {{ [94, 87, 34] }} or {{ ' BLOB data string' }}
    • From a previous item: {{ $json.COL_BLOB }}
  • RAW
    • Constant: {{ [94, 87, 34] }}
    • From a previous item: {{ $json.COL_RAW }}
  • Boolean
    • Constant: {{ true }}
    • From a previous item: {{ $json.COL_BOOLEAN }}
  • Number
    • Constant: 1234
    • From a previous item: {{ $json.COL_NUMBER }}
  • Character (CHAR)
    • Constant: ' Hello World '
    • From a previous item: {{ $json.COL_CHAR }}

These examples assume JSON keys (e.g. COL_JSON, COL_VECTOR) map directly to the respective SQL column types.

Examples:

Example 1 (unknown):

BEGIN
    demo;
  END;

Example 2 (unknown):

[
    {
        "FRUIT_ID": 1,
        "FRUIT_NAME": "Apple",
        "COLOR": "Red" 
    },
    {
        "FRUIT_ID": 2,
        "FRUIT_NAME": "Banana",
        "COLOR": "Yellow"
    }
]

Example 3 (unknown):

SELECT * FROM FRUITS WHERE COLOR = :col

Example 4 (unknown):

// fruits is an example table name
fruits, {{ $json.color }}

Zoom credentials

URL: llms-txt#zoom-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API JWT token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a Zoom account. Your account must have one of the following permissions:

  • Account owner
  • Account admin
  • Zoom for developers role

Supported authentication methods

  • API JWT token
  • OAuth2

API JWT token deprecation

Zoom removed support for JWT access tokens in June 2023. You must use OAuth2 for all new credentials.

Refer to Zoom's API documentation for more information about the service.

Using API JWT token

This authentication method has been fully deprecated by Zoom. Don't create new credentials with it.

Using OAuth2

To configure this credential, you'll need:

  • A Client ID: Generated when you create an OAuth app on the Zoom App Marketplace.
  • A Client Secret: Generated when you create an OAuth app.

To generate your Client ID and Client Secret, create an OAuth app.

Use these settings for your OAuth app:

  • For Select how the app is managed, select User-managed app.
  • Copy the OAuth Callback URL from n8n and enter it as an OAuth Redirect URL in Zoom.
  • If your n8n credential displays a Whitelist URL, also enter that URL as an OAuth Redirect URL.
  • Enter Scopes for the scopes you plan to use. For all functionality in the Zoom node, select:
  • Copy the Client ID and Client Secret provided in the Zoom app and enter them in your n8n credential.

Test a node

URL: llms-txt#test-a-node

This section contains information about testing your node.

There are two ways to test your node:

You should use both methods before publishing your node.


Keyboard shortcuts and controls

URL: llms-txt#keyboard-shortcuts-and-controls

Contents:

  • Workflow controls
  • Canvas
    • Move the canvas
    • Canvas zoom
    • Nodes on the canvas
    • With one or more nodes selected in canvas
  • Node panel
    • Node panel categories
  • Within nodes
  • Join the community

n8n provides keyboard shortcuts for some actions.

  • Ctrl + Alt + n: create new workflow

  • Ctrl + o: open workflow

  • Ctrl + s: save the current workflow

  • Ctrl + z: undo

  • Ctrl + shift + z: redo

  • Ctrl + Enter: execute workflow

  • Ctrl + Left Mouse Button + drag: move node view

  • Ctrl + Middle mouse button + drag: move node view

  • Space + drag: move node view

  • Middle mouse button + drag: move node view

  • Two fingers on a touch screen: move node view

  • + or =: zoom in

  • - or _: zoom out

  • 0: reset zoom level

  • 1: zoom to fit workflow

  • Ctrl + Mouse wheel: zoom in/out

Nodes on the canvas

  • Double click on a node: open the node details
  • Ctrl/Cmd + Double click on a sub-workflow node: open the sub-workflow in a new tab
  • Ctrl + a: select all nodes
  • Ctrl + v: paste nodes
  • Shift + s: add sticky note

With one or more nodes selected in canvas

  • ArrowDown: select sibling node below the current one

  • ArrowLeft: select node left of the current one

  • ArrowRight: select node right of the current one

  • ArrowUp: select sibling node above the current one

  • Ctrl + c: copy

  • Ctrl + x: cut

  • D: deactivate

  • Delete: delete

  • Enter: open

  • F2: rename

  • P: pin data in node. Refer to Data pinning for more information.

  • Shift + ArrowLeft: select all nodes left of the current one

  • Shift + ArrowRight: select all nodes right of the current one

  • Ctrl/Cmd + Shift + o on a sub-workflow node: open the sub-workflow in a new tab

  • Tab: open the Node Panel

  • Enter: insert selected node into workflow

  • Escape: close Node panel

Node panel categories

  • Enter: insert node into workflow, collapse/expand category, open subcategory

  • ArrowRight: expand category, open subcategory

  • ArrowLeft: collapse category, close subcategory view

  • =: in an empty parameter input, this switches to expressions mode.

This guide outlines a series of tutorials and resources designed to get you started with n8n.

It's not necessary to complete all items listed to start using n8n. Use this as a reference to navigate to the most relevant parts of the documentation and other resources according to your needs.

Join the community

n8n has an active community where you can get and offer help. Connect, share, and learn with other n8n users:

If you don't have an account yet, sign up for a free trial on n8n Cloud or install n8n's community edition with Docker (recommended) or npm. See Choose your n8n for more details.

Start with the quickstart guides to help you get up and running with building basic workflows.

Structured Courses

n8n offers two sets of courses.

Learn key concepts and n8n features, while building examples as you go.

  • The Beginner course covers the basics of n8n.
  • The Advanced course covers more complex workflows, more technical nodes, and enterprise features.

Build more complex workflows while learning key concepts along the way. Earn a badge and an avatar in your community profile.

Explore various self-hosting options in n8n. If you're not sure where to start, these are two popular options:

If you can't find a node for a specific app or a service, you can build a node yourself and share with the community. See what others have built on npm website.


Performance and benchmarking

URL: llms-txt#performance-and-benchmarking

Contents:

  • Performance factors
  • Run your own benchmarking
  • Example: Single instance performance
  • Example: Multi-instance performance

n8n can handle up to 220 workflow executions per second on a single instance, with the ability to scale up further by adding more instances.

This document outlines n8n's performance benchmarking. It describes the factors that affect performance, and includes two example benchmarks.

Performance factors

The performance of n8n depends on factors including:

  • The workflow type
  • The resources available to n8n
  • How you configure n8n's scaling options

Run your own benchmarking

To get an accurate estimate for your use case, run n8n's benchmarking framework. The repository contains more information about the benchmarking.

Example: Single instance performance

This test measures how response time increases as requests per second increase. It looks at the response time when calling the Webhook Trigger node.

  • Hardware: ECS c5a.large instance (4GB RAM)
  • n8n setup: Single n8n instance (running in main mode, with Postgres database)
  • Workflow: Webhook Trigger node, Edit Fields node

This graph shows the percentage of requests to the Webhook Trigger node getting a response within 100 seconds, and how that varies with load. Under higher loads n8n usually still processes the data, but takes over 100s to respond.

Example: Multi-instance performance

This test measures how response time increases as requests per second increase. It looks at the response time when calling the Webhook Trigger node.

  • Hardware: seven ECS c5a.4xlarge instances (8GB RAM each)
  • n8n setup: two webhook instances, four worker instances, one database instance (MySQL), one main instance running n8n and Redis
  • Workflow: Webhook Trigger node, Edit Fields node
  • Multi-instance setups use Queue mode

This graph shows the percentage of requests to the Webhook Trigger node getting a response within 100 seconds, and how that varies with load. Under higher loads n8n usually still processes the data, but takes over 100s to respond.


Asana Trigger node

URL: llms-txt#asana-trigger-node

Contents:

  • Events
  • Related resources

Asana is a web and mobile application designed to help teams organize, track, and manage their work.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Asana Trigger integrations page.

n8n provides an app node for Asana. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Asana's documentation for details about their API.


TYPE n8n_scaling_mode_queue_jobs_active gauge

URL: llms-txt#type-n8n_scaling_mode_queue_jobs_active-gauge

n8n_scaling_mode_queue_jobs_active 0


Output of other nodes

URL: llms-txt#output-of-other-nodes

Methods for working with the output of other nodes. Some methods and variables aren't available in the Code node.

You can use Python in the Code node. It isn't available in expressions.

JavaScript:

  • $("<node-name>").all(branchIndex?, runIndex?): Returns all items from a given node. If branchIndex isn't given it will default to the output that connects node-name with the node where you use the expression or code.
  • $("<node-name>").first(branchIndex?, runIndex?): The first item output by the given node. If branchIndex isn't given it will default to the output that connects node-name with the node where you use the expression or code.
  • $("<node-name>").last(branchIndex?, runIndex?): The last item output by the given node. If branchIndex isn't given it will default to the output that connects node-name with the node where you use the expression or code.
  • $("<node-name>").item: The linked item. This is the item in the specified node used to produce the current item. Refer to Item linking for more information on item linking.
  • $("<node-name>").params: Object containing the query settings of the given node. This includes data such as the operation it ran, result limits, and so on.
  • $("<node-name>").context: Boolean. Only available when working with the Loop Over Items node. Provides information about what's happening in the node. Use this to determine whether the node is still processing items.
  • $("<node-name>").itemMatching(currentNodeInputIndex): Use instead of $("<node-name>").item in the Code node if you need to trace back from an input item.

Python (Code node only):

  • _("<node-name>").all(branchIndex?, runIndex?): Returns all items from a given node. If branchIndex isn't given it will default to the output that connects node-name with the node where you use the expression or code.
  • _("<node-name>").first(branchIndex?, runIndex?): The first item output by the given node. If branchIndex isn't given it will default to the output that connects node-name with the node where you use the expression or code.
  • _("<node-name>").last(branchIndex?, runIndex?): The last item output by the given node. If branchIndex isn't given it will default to the output that connects node-name with the node where you use the expression or code.
  • _("<node-name>").item: The linked item. This is the item in the specified node used to produce the current item. Refer to Item linking for more information on item linking.
  • _("<node-name>").params: Object containing the query settings of the given node. This includes data such as the operation it ran, result limits, and so on.
  • _("<node-name>").context: Boolean. Only available when working with the Loop Over Items node. Provides information about what's happening in the node. Use this to determine whether the node is still processing items.
  • _("<node-name>").itemMatching(currentNodeInputIndex): Use instead of _("<node-name>").item in the Code node if you need to trace back from an input item. Refer to Retrieve linked items from earlier in the workflow for an example.

AMQP Trigger node

URL: llms-txt#amqp-trigger-node

AMQP is an open standard application layer protocol for message-oriented middleware. The defining features of AMQP are message orientation, queuing, routing, reliability and security. This node supports AMQP 1.0 compatible message brokers.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's AMQP integrations page.


Gotify node

URL: llms-txt#gotify-node

Contents:

  • Operations
  • Templates and examples

Use the Gotify node to automate work in Gotify, and integrate Gotify with other applications. n8n has built-in support for a wide range of Gotify features, including creating, deleting, and getting messages.

On this page, you'll find a list of operations the Gotify node supports and links to more resources.

Refer to Gotify credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Message
    • Create
    • Delete
    • Get All

Templates and examples

Send daily weather updates via a message using the Gotify node

View template details

Spotify Sync Liked Songs to Playlist

View template details

🛠️ Gotify Tool MCP Server

View template details

Browse Gotify integration templates, or search all templates


OpenID Connect (OIDC)

URL: llms-txt#openid-connect-(oidc)

  • Available on Enterprise plans.
  • You need to be an instance owner or admin to enable and configure OIDC.

This section covers how to enable and manage OpenID Connect (OIDC) for single sign-on (SSO). You can learn more about how OIDC works by visiting what is OpenID Connect by the OpenID Foundation.

  • Set up OIDC: a general guide to setting up OpenID Connect (OIDC) SSO in n8n.
  • Troubleshooting: a list of things to check if you encounter issues with OIDC.

Oura node

URL: llms-txt#oura-node

Contents:

  • Operations
  • Templates and examples

Use the Oura node to automate work in Oura, and integrate Oura with other applications. n8n has built-in support for a wide range of Oura features, including getting profiles, and summaries.

On this page, you'll find a list of operations the Oura node supports and links to more resources.

Refer to Oura credentials for guidance on setting up authentication.

  • Profile
    • Get the user's personal information.
  • Summary
    • Get the user's activity summary.
    • Get the user's readiness summary.
    • Get the user's sleep summary

Templates and examples

Browse Oura integration templates, or search all templates


Odoo credentials

URL: llms-txt#odoo-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key
  • Using password
  • Required plan type

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API key (Recommended)
  • Password

Refer to Odoo's External API documentation for more information about the service.

Refer to the Odoo Getting Started tutorial if you're new to Odoo.

To configure this credential, you'll need a user account on an Odoo database and:

  • Your Site URL
  • Your Username
  • An API key
  • Your Database name

To set up the credential with an API key:

  1. Enter your Odoo server or site URL as the Site URL.
  2. Enter your Username as it's displayed on your Change password screen in Odoo.
  3. To use an API key, go to Your Profile > Preferences > Account Security > Developer API Keys.
    • If you don't have this option, you may need to upgrade your Odoo plan. Refer to Required plan type for more information.
  4. Select New API Key.
  5. Enter a Description for the key, like n8n integration.
  6. Select Generate Key.
  7. Copy the key and enter it as the Password or API key in your n8n credential.
  8. Enter your Odoo Database name, also known as the instance name.

Refer to Odoo API Keys for more information.

To configure this credential, you'll need a user account on an Odoo database and:

  • Your Site URL
  • Your Username
  • Your Password
  • Your Database name

To set up the credential with a password:

  1. Enter your Odoo server or site URL as the Site URL.
  2. Enter your Username as it's displayed on your Change password screen in Odoo.
  3. To use a password, enter your user password in the Password or API key field.
  4. Enter your Odoo Database name, also known as the instance name.

Password compatibility

If you try a password credential and it doesn't work for a specific node function, try switching to an API key. Odoo requires an API key for certain modules or based on certain settings.

Required plan type

Access to the external API is only available on a Custom Odoo plan. (The One App Free or Standard plans won't give you access.)

Refer to Odoo Pricing Plans for more information.


Cortex credentials

URL: llms-txt#cortex-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Install Cortex on your server.

Supported authentication methods

Refer to Cortex's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: Refer to the Cortex API Authentication documentation for detailed instructions on generating API keys.
  • The URL/Server Address for your Cortex Instance (defaults to http://<your_server_address>:9001/)

DeepSeek credentials

URL: llms-txt#deepseek-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a DeepSeek account.

Supported authentication methods

Refer to DeepSeek's API documentation for more information about the service.

To configure this credential, you'll need:

To generate your API Key:

  1. Log in to your DeepSeek account or create an account.
  2. Open your API keys page.
  3. Select Create new secret key to create an API key, optionally naming the key.
  4. Copy your key and add it as the API Key in n8n.

Refer to the Your First API Call page for more information.


What's a tool in AI?

URL: llms-txt#what's-a-tool-in-ai?

Contents:

  • AI tools in n8n

In AI, the term 'tool' has a specific meaning. Tools act like add-ons that your AI can use to access extra context or resources.

Here are a couple of other ways of expressing it:

Tools are interfaces that an agent can use to interact with the world (source)

We can think of these tools as being almost like functions that your AI model can call (source)

n8n provides tool sub-nodes that you can connect to your AI agent. As well as providing some popular tools, such as Wikipedia and SerpAPI, n8n provides three especially powerful tools:

The next three examples highlight the Call n8n Workflow Tool:

You can also learn how to let AI dynamically specify parameters for tools with the $fromAI() function.


Qualys credentials

URL: llms-txt#qualys-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Qualys user account with any user role except Contact.

Supported authentication methods

Refer to Qualys's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

  • A Username
  • A Password
  • A Requested With string: Enter a user description, like a user agent, or keep the default n8n application. This sets the required X-Requested-With header.

External secrets

URL: llms-txt#external-secrets

Contents:

  • Connect n8n to your secrets store

  • Use secrets in n8n credentials

  • Using external secrets with n8n environments

  • Using external secrets in projects

  • Troubleshooting

    • Infisical version changes
    • Only set external secrets on credentials owned by an instance owner or admin
  • External secrets are available on Enterprise Self-hosted and Enterprise Cloud plans.

  • n8n supports AWS Secrets Manager, Azure Key Vault, GCP Secrets Manager, Infisical and HashiCorp Vault.

  • n8n doesn't support HashiCorp Vault Secrets.

You can use an external secrets store to manage credentials for n8n.

n8n stores all credentials encrypted in its database, and restricts access to them by default. With the external secrets feature, you can store sensitive credential information in an external vault, and have n8n load it in when required. This provides an extra layer of security and allows you to manage credentials used across multiple n8n environments in one central place.

Connect n8n to your secrets store

Your secret names can't contain spaces, hyphens, or other special characters. n8n supports secret names containing alphanumeric characters (a-z, A-Z, 0-9) and underscores. n8n currently only supports plaintext values for secrets, not JSON objects or key-value pairs.

  1. In n8n, go to Settings > External Secrets.

  2. Select Set Up for your store provider.

  3. Enter the credentials for your provider:

  • Azure Key Vault: Provide your vault name, tenant ID, client ID, and client secret. Refer to the Azure documentation to register a Microsoft Entra ID app and create a service principal. n8n supports only single-line values for secrets.

  • AWS Secrets Manager: provide your access key ID, secret access key, and region. The IAM user must have the secretsmanager:ListSecrets, secretsmanager:BatchGetSecretValue, and secretsmanager:GetSecretValue permissions.

To give n8n access to all secrets in your AWS Secrets Manager, you can attach the following policy to the IAM user:

You can also be more restrictive and give n8n access to select specific AWS Secret Manager secrets. You still need to allow the secretsmanager:ListSecrets and secretsmanager:BatchGetSecretValue permissions to access all resources. These permissions allow n8n to retrieve ARN-scoped secrets, but don't provide access to the secret values.

Next, you need to set the scope for the secretsmanager:GetSecretValue permission to the specific Amazon Resource Names (ARNs) for the secrets you wish to share with n8n. Ensure you use the correct region and account ID in each resource ARN. You can find the ARN details in the AWS dashboard for your secrets.

For example, the following IAM policy only allows access to secrets with a name starting with n8n in your specified AWS account and region:

For more IAM permission policy examples, consult the AWS documentation.

  • HashiCorp Vault: provide the Vault URL for your vault instance, and select your Authentication Method. Enter your authentication details. Optionally provide a namespace.

  • Refer to the HashiCorp documentation for your authentication method: Token auth method
    AppRole auth method
    Userpass auth method

    • If you use vault namespaces, you can enter the namespace n8n should connect to. Refer to Vault Enterprise namespaces for more information on HashiCorp Vault namespaces.
  • Infisical: provide a Service Token. Refer to Infisical's Service token documentation for information on getting your token. If you self-host Infisical, enter the Site URL.

Infisical environment

Make sure you select the correct Infisical environment when creating your token. n8n will load secrets from this environment, and won't have access to secrets in other Infisical environments. n8n only supports service tokens that have access to a single environment.

n8n doesn't support Infisical folders.

  • Google Cloud Platform: provide a Service Account Key (JSON) for a service account that has at least these roles: Secret Manager Secret Accessor and Secret Manager Secret Viewer. Refer to Google's service account documentation for more information.
  1. Save your configuration.

  2. Enable the provider using the Disabled / Enabled toggle.

Use secrets in n8n credentials

To use a secret from your store in an n8n credential:

  1. Create a new credential, or open an existing one.

  2. On the field where you want to use a secret:

    1. Hover over the field.
    2. Select Expression.

  3. Enter an expression referencing the secret name:

<vault-name> is vault (for HashiCorp), infisical, or awsSecretsManager. Replace <secret-name> with the name as it appears in your vault.
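For example, if you stored a secret named slack_api_token (a hypothetical name) in Infisical, the credential field would contain:

{{ $secrets.infisical.slack_api_token }}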

Using external secrets with n8n environments

n8n's Source control and environments feature allows you to create different n8n environments, backed by Git. The feature doesn't support using different credentials in different instances. You can use an external secrets vault to provide different credentials for different environments by connecting each n8n instance to a different vault or project environment.

For example, you have two n8n instances, one for development and one for production. You use Infisical for your vault. In Infisical, create a project with two environments, development and production. Generate a token for each Infisical environment. Use the token for the development environment to connect your development n8n instance, and the token for your production environment to connect your production n8n instance.

Using external secrets in projects

To use external secrets in an RBAC project, you must have an instance owner or instance admin as a member of the project.

Infisical version changes

Infisical version upgrades can introduce problems connecting to n8n. If your Infisical connection stops working, check if there was a recent version change. If so, report the issue to help@n8n.io.

Only set external secrets on credentials owned by an instance owner or admin

Due to the permissions that instance owners and admins have, it's possible for owners and admins to update credentials owned by another user with a secrets expression. This will appear to work in preview for an instance owner or admin, but the secret won't resolve when the workflow runs in production.

Only use external secrets for credentials that are owned by an instance admin or owner. This ensures they resolve correctly in production.

AI agents are artificial intelligence systems capable of responding to requests, making decisions, and performing real-world tasks for users. They use large language models (LLMs) to interpret user input and make decisions about how to best process requests using the information and resources they have available.

AI chains allow you to interact with large language models (LLMs) and other resources in sequences of calls to components. AI chains in n8n don't use persistent memory, so you can't use them to reference previous context (use AI agents for this).

Completions are the responses generated by a model like GPT.

Embeddings are numerical representations of data using vectors. They're used by AI to interpret complex data and relationships by mapping values across many dimensions. Vector databases, or vector stores, are databases designed to store and access embeddings.

In AI, and specifically in retrieval-augmented generation (RAG) contexts, groundedness and ungroundedness are measures of how much a model's responses accurately reflect source information. The model uses its source documents to generate grounded responses, while ungrounded responses involve speculation or hallucination unsupported by those same sources.

AI hallucination

Hallucination in AI is when an LLM (large language model) mistakenly perceives patterns or objects that don't exist.

Reranking is a technique that refines the order of a list of candidate documents to improve the relevance of search results. Retrieval-Augmented Generation (RAG) and other applications use reranking to prioritize the most relevant information for generation or downstream tasks.

In an AI context, memory allows AI tools to persist message context across interactions. This allows you to have continuing conversations with AI agents, for example, without submitting ongoing context with each message. In n8n, AI agent nodes can use memory, but AI chains can't.

AI retrieval-augmented generation (RAG)

Retrieval-augmented generation, or RAG, is a technique for providing LLMs access to new information from external sources to improve AI responses. RAG systems retrieve relevant documents to ground responses in up-to-date, domain-specific, or proprietary knowledge to supplement their original training data. RAG systems often rely on vector stores to manage and search this external data efficiently.

In an AI context, a tool is an add-on resource that the AI can refer to for specific information or functionality when responding to a request. The AI model can use a tool to interact with external systems or complete specific, focused tasks.

A vector store, or vector database, stores mathematical representations of information. Use with embeddings and retrievers to create a database that your AI can access when answering questions.

APIs, or application programming interfaces, offer programmatic access to a service's data and functionality. APIs make it easier for software to interact with external systems. They're often offered as an alternative to traditional user-focused interfaces accessed through web browsers or UI.

The canvas is the main interface for building workflows in n8n's editor UI. You use the canvas to add and connect nodes to compose workflows.

cluster node (n8n)

In n8n, cluster nodes are groups of nodes that work together to provide functionality in a workflow. They consist of a root node and one or more sub nodes that extend the node's functionality.

credential (n8n)

In n8n, credentials store authentication information to connect with specific apps and services. After creating credentials with your authentication information (username and password, API key, OAuth secrets, etc.), you can use the associated app node to interact with the service.

data pinning (n8n)

Data pinning allows you to temporarily freeze the output data of a node during workflow development. This allows you to develop workflows with predictable data without making repeated requests to external services. Production workflows ignore pinned data and request new data on each execution.

The n8n editor UI allows you to create and manage workflows. The main area is the canvas, where you can compose workflows by adding, configuring, and connecting nodes. The side and top panels allow you to access other areas of the UI like credentials, templates, variables, executions, and more.

entitlement (n8n)

In n8n, entitlements grant n8n instances access to plan-restricted features for a specific period of time.

Floating entitlements are a pool of entitlements that you can distribute among various n8n instances. You can re-assign a floating entitlement to transfer its access to a different n8n instance.

evaluation (n8n)

In n8n, evaluation allows you to tag and organize execution history and compare it against new executions. You can use this to understand how your workflow performs over time as you make changes. In particular, this is useful while developing AI-centered workflows.

expression (n8n)

In n8n, expressions allow you to populate node parameters dynamically by executing JavaScript code. Instead of providing a static value, you can use the n8n expression syntax to define the value using data from previous nodes, other workflows, or your n8n environment.

LangChain is an AI-development framework used to work with large language models (LLMs). LangChain provides a standardized system for working with a wide variety of models and other resources and linking different components together to build complex applications.

Large language model (LLM)

Large language models, or LLMs, are AI machine learning models designed to excel in natural language processing (NLP) tasks. They're built by training on large amounts of data to develop probabilistic models of language and other data.

In n8n, nodes are individual components that you compose to create workflows. Nodes define when the workflow should run, allow you to fetch, send, and process data, can define flow control logic, and connect with external services.

n8n projects allow you to separate workflows, variables, and credentials into separate groups for easier management. Projects make it easier for teams to collaborate by sharing and compartmentalizing related resources.

Each n8n cluster node contains a single root node that defines the main functionality of the cluster. One or more sub nodes attach to the root node to extend its functionality.

n8n cluster nodes consist of one or more sub nodes connected to a root node. Sub nodes extend the functionality of the root node, providing access to specific services or resources or offering specific types of dedicated processing, like calculator functionality, for example.

n8n templates are pre-built workflows designed by n8n and community members that you can import into your n8n instance. When using templates, you may need to fill in credentials and adjust the configuration to suit your needs.

trigger node (n8n)

A trigger node is a special node responsible for executing the workflow in response to certain conditions. All production workflows need at least one trigger to determine when the workflow should run.

An n8n workflow is a collection of nodes that automate a process. Workflows begin execution when a trigger condition occurs and execute sequentially to achieve complex tasks.

Examples:

Example 1 (unknown):

{
     	"Version": "2012-10-17",
     	"Statement": [
     		{
     			"Sid": "AccessAllSecrets",
     			"Effect": "Allow",
     			"Action": [
     				"secretsmanager:ListSecrets",
     				"secretsmanager:BatchGetSecretValue",
     				"secretsmanager:GetResourcePolicy",
     				"secretsmanager:GetSecretValue",
     				"secretsmanager:DescribeSecret",
     				"secretsmanager:ListSecretVersionIds",
     			],
     			"Resource": "*"
     		}
     	]
     }

Example 2 (unknown):

{
     	"Version": "2012-10-17",
     	"Statement": [
     		{
     			"Sid": "ListingSecrets",
     			"Effect": "Allow",
     			"Action": [
     				"secretsmanager:ListSecrets",
     				"secretsmanager:BatchGetSecretValue"
     			],
     			"Resource": "*"
     		},
     		{
     			"Sid": "RetrievingSecrets",
     			"Effect": "Allow",
     			"Action": [
     				"secretsmanager:GetSecretValue",
     				"secretsmanager:DescribeSecret"
     			],
     			"Resource": [
     				"arn:aws:secretsmanager:us-west-2:123456789000:secret:n8n*"
     			]
     		}
     	]
     }

Example 3 (unknown):

{{ $secrets.<vault-name>.<secret-name> }}

Harvest credentials

URL: llms-txt#harvest-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API Access Token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a Harvest account.

Supported authentication methods

  • API access token
  • OAuth2

Refer to Harvest's API documentation for more information about the service.

Using API Access Token

To configure this credential, you'll need:

Using OAuth2

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you need to configure OAuth2 from scratch or need more detail on what's happening in the OAuth web flow, refer to the instructions in the Harvest OAuth2 documentation to set up OAuth.


Queue mode environment variables

URL: llms-txt#queue-mode-environment-variables

Contents:

  • Multi-main setup

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

You can run n8n in different modes depending on your needs. Queue mode provides the best scalability. Refer to Queue mode for more information.

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS | Boolean | false | Set to true if you want manual executions to run on the worker rather than on main. |
| QUEUE_BULL_PREFIX | String | - | Prefix to use for all queue keys. |
| QUEUE_BULL_REDIS_DB | Number | 0 | The Redis database used. |
| QUEUE_BULL_REDIS_HOST | String | localhost | The Redis host. |
| QUEUE_BULL_REDIS_PORT | Number | 6379 | The Redis port used. |
| QUEUE_BULL_REDIS_USERNAME | String | - | The Redis username (needs Redis version 6 or above). Don't define it if you need compatibility with Redis versions below 6. |
| QUEUE_BULL_REDIS_PASSWORD | String | - | The Redis password. |
| QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD | Number | 10000 | The Redis timeout threshold (in ms). |
| QUEUE_BULL_REDIS_CLUSTER_NODES | String | - | Expects a comma-separated list of Redis Cluster nodes in the format host:port, for the Redis client to initially connect to. If running in queue mode (EXECUTIONS_MODE = queue), setting this variable creates a Redis Cluster client instead of a Redis client, and n8n ignores QUEUE_BULL_REDIS_HOST and QUEUE_BULL_REDIS_PORT. |
| QUEUE_BULL_REDIS_TLS | Boolean | false | Enable TLS on Redis connections. |
| QUEUE_BULL_REDIS_DUALSTACK | Boolean | false | Enable dual-stack support (IPv4 and IPv6) on Redis connections. |
| QUEUE_WORKER_TIMEOUT (deprecated) | Number | 30 | Deprecated: use N8N_GRACEFUL_SHUTDOWN_TIMEOUT instead. How long n8n should wait (in seconds) for running executions before exiting the worker process on shutdown. |
| QUEUE_HEALTH_CHECK_ACTIVE | Boolean | false | Whether to enable health checks (true) or disable them (false). |
| QUEUE_HEALTH_CHECK_PORT | Number | 5678 | The port to serve health checks on. If you experience a port conflict error when starting a worker server using its default port, change this. |
| QUEUE_WORKER_LOCK_DURATION | Number | 60000 | How long (in ms) the lease period lasts for a worker to work on a message. |
| QUEUE_WORKER_LOCK_RENEW_TIME | Number | 10000 | How frequently (in ms) a worker should renew the lease time. |
| QUEUE_WORKER_STALLED_INTERVAL | Number | 30000 | How often a worker should check for stalled jobs (use 0 for never). |
| QUEUE_WORKER_MAX_STALLED_COUNT | Number | 1 | Maximum number of times a stalled job will be re-processed. |
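
As a minimal sketch, a worker might be started with environment variables like these (the host and password values are placeholders; the same Redis settings must also be set on the main instance):

export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis.internal
export QUEUE_BULL_REDIS_PORT=6379
export QUEUE_BULL_REDIS_PASSWORD=example-password
export QUEUE_HEALTH_CHECK_ACTIVE=true
n8n worker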

Refer to Configuring multi-main setup for details.

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| N8N_MULTI_MAIN_SETUP_ENABLED | Boolean | false | Whether to enable multi-main setup for queue mode (license required). |
| N8N_MULTI_MAIN_SETUP_KEY_TTL | Number | 10 | Time to live (in seconds) for the leader key in multi-main setup. |
| N8N_MULTI_MAIN_SETUP_CHECK_INTERVAL | Number | 3 | Interval (in seconds) for the leader check in multi-main setup. |

Affinity Trigger node

URL: llms-txt#affinity-trigger-node

Contents:

  • Events
  • Related resources

Affinity is a powerful relationship intelligence platform enabling teams to leverage their network to close the next big deal.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Affinity Trigger integrations page.

  • Field value
  • Created
  • Deleted
  • Updated
  • Field
  • Created
  • Deleted
  • Updated
  • File
  • Created
  • Deleted
  • List entry
  • Created
  • Deleted
  • List
  • Created
  • Deleted
  • Updated
  • Note
  • Created
  • Deleted
  • Updated
  • Opportunity
  • Created
  • Deleted
  • Updated
  • Organization
  • Created
  • Deleted
  • Updated
  • Person
  • Created
  • Deleted
  • Updated

n8n provides an app node for Affinity. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Affinity's documentation for details about their API.


OpenAI Text operations

URL: llms-txt#openai-text-operations

Contents:

  • Generate a Chat Completion
    • Options
  • Generate a Model Response
    • Built-in Tools
    • Options
  • Classify Text for Violations
    • Options
  • Common issues

Use this operation to message a model or classify text for violations in OpenAI. Refer to OpenAI for more information on the OpenAI node itself.

Previous node versions

n8n version 1.117.0 introduces the OpenAI node V2 that supports the OpenAI Responses API. It renames the 'Message a Model' operation to 'Generate a Chat Completion' to clarify its association with the Chat Completions API and introduces a separate 'Generate a Model Response' operation that uses the Responses API.

Generate a Chat Completion

Use this operation to send a message or prompt to an OpenAI model - using the Chat Completions API - and receive a response.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Text.

  • Operation: Select Generate a Chat Completion.

  • Model: Select the model you want to use. If youre not sure which model to use, try gpt-4o if you need high intelligence or gpt-4o-mini if you need the fastest speed and lowest cost. Refer to Models overview | OpenAI Platform for more information.

  • Messages: Enter a Text prompt and assign a Role that the model will use to generate responses. Refer to Prompt engineering | OpenAI for more information on how to write a better prompt by using these roles. Choose from one of these roles:

    • User: Sends a message as a user and gets a response from the model.
    • Assistant: Tells the model to adopt a specific tone or personality.
    • System: By default, there is no system message. You can define instructions in the user message, but the instructions set in the system message are more effective. You can set more than one system message per conversation. Use this to set the model's behavior or context for the next user message.
  • Simplify Output: Turn on to return a simplified version of the response instead of the raw data.

  • Output Content as JSON: Turn on to attempt to return the response in JSON format. Compatible with GPT-4 Turbo and all GPT-3.5 Turbo models newer than gpt-3.5-turbo-1106.

  • Frequency Penalty: Apply a penalty to reduce the model's tendency to repeat similar lines. The range is between 0.0 and 2.0.

  • Maximum Number of Tokens: Set the maximum number of tokens for the response. One token is roughly four characters for standard English text. Use this to limit the length of the output.

  • Number of Completions: Defaults to 1. Set the number of completions you want to generate for each prompt. Use carefully since setting a high number will quickly consume your tokens.

  • Presence Penalty: Apply a penalty to influence the model to discuss new topics. The range is between 0.0 and 2.0.

  • Output Randomness (Temperature): Adjust the randomness of the response. The range is between 0.0 (deterministic) and 1.0 (maximum randomness). We recommend altering this or Output Randomness (Top P) but not both. Start with a medium temperature (around 0.7) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature. If theyre too chaotic or off-track, decrease it. Defaults to 1.0.

  • Output Randomness (Top P): Adjust the Top P setting to control the diversity of the assistant's responses. For example, 0.5 means half of all likelihood-weighted options are considered. We recommend altering this or Output Randomness (Temperature) but not both. Defaults to 1.0.

Refer to Chat Completions | OpenAI documentation for more information.
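
For orientation, the Messages and option fields above map roughly onto a Chat Completions request body like the following sketch (the prompt text and values are illustrative only):

{
  "model": "gpt-4o-mini",
  "messages": [
    { "role": "system", "content": "You are a helpful assistant." },
    { "role": "user", "content": "Summarize the following text in one sentence: ..." }
  ],
  "temperature": 0.7,
  "max_tokens": 256
}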

Generate a Model Response

Use this operation to send a message or prompt to an OpenAI model - using the Responses API - and receive a response.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.
  • Resource: Select Text.
  • Operation: Select Generate a Model Response.
  • Model: Select the model you want to use. Refer to Models overview | OpenAI Platform for an overview.
  • Messages: Choose one of these Message Types:
    • Text: Enter a Text prompt and assign a Role that the model will use to generate responses. Refer to Prompt engineering | OpenAI for more information on how to write a better prompt by using these roles.
    • Image: Provide an Image either through an Image URL, a File ID (using the OpenAI Files API) or by passing binary data from an earlier node in your workflow.
    • File: Provide a File in a supported format (currently: PDF only), either through a File URL, a File ID (using the OpenAI Files API) or by passing binary data from an earlier node in your workflow.
    • For any message type, you can choose from one of these roles:
      • User: Sends a message as a user and gets a response from the model.
      • Assistant: Tells the model to adopt a specific tone or personality.
      • System: By default, the system message is "You are a helpful assistant". You can define instructions in the user message, but the instructions set in the system message are more effective. You can only set one system message per conversation. Use this to set the model's behavior or context for the next user message.
  • Simplify Output: Turn on to return a simplified version of the response instead of the raw data.

The OpenAI Responses API provides a range of built-in tools to enrich the model's response:

  • Web Search: Allows models to search the web for the latest information before generating a response.

  • MCP Servers: Allows models to connect to remote MCP servers. Find out more about using remote MCP servers as tools here.

  • File Search: Allows models to search your knowledge base of previously uploaded files for relevant information before generating a response. Refer to the OpenAI documentation for more information.

  • Code Interpreter: Allows models to write and run Python code in a sandboxed environment.

  • Maximum Number of Tokens: Set the maximum number of tokens for the response. One token is roughly four characters for standard English text. Use this to limit the length of the output.

  • Output Randomness (Temperature): Adjust the randomness of the response. The range is between 0.0 (deterministic) and 1.0 (maximum randomness). We recommend altering this or Output Randomness (Top P) but not both. Start with a medium temperature (around 0.7) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature. If theyre too chaotic or off-track, decrease it. Defaults to 1.0.

  • Output Randomness (Top P): Adjust the Top P setting to control the diversity of the assistant's responses. For example, 0.5 means half of all likelihood-weighted options are considered. We recommend altering this or Output Randomness (Temperature) but not both. Defaults to 1.0.

  • Conversation ID: The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation after this response completes.

  • Previous Response ID: The ID of the previous response to continue from. Can't be used in conjunction with Conversation ID.

  • Reasoning: The level of reasoning effort the model should spend to generate the response. Includes the ability to return a Summary of the reasoning performed by the model (for example, for debugging purposes).

  • Store: Whether to store the generated model response for later retrieval via API. Defaults to true.

  • Output Format: Whether to return the response as Text, in a specified JSON Schema or as a JSON Object.

  • Background: Whether to run the model in background mode. This allows executing long-running tasks more reliably.

Refer to Responses | OpenAI documentation for more information.

Classify Text for Violations

Use this operation to identify and flag content that might be harmful. The OpenAI model analyzes the text and returns a response containing:

  • flagged: A boolean field indicating if the content is potentially harmful.
  • categories: A list of category-specific violation flags.
  • category_scores: Scores for each category.
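
An illustrative, abbreviated response might look like this (the category names and scores shown are placeholders):

{
  "flagged": true,
  "categories": {
    "harassment": true,
    "violence": false
  },
  "category_scores": {
    "harassment": 0.91,
    "violence": 0.02
  }
}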

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Text.

  • Operation: Select Classify Text for Violations.

  • Text Input: Enter text to classify if it violates the moderation policy.

  • Simplify Output: Turn on to return a simplified version of the response instead of the raw data.

  • Use Stable Model: Turn on to use the stable version of the model instead of the latest version; accuracy may be slightly lower.

Refer to Moderations | OpenAI documentation for more information.

For common errors or issues and suggested resolution steps, refer to Common Issues.


HTTP request helper for node builders

URL: llms-txt#http-request-helper-for-node-builders

Contents:

  • Usage
  • Example
  • Deprecation of the previous helper
  • Migration guide to the new helper

n8n provides a flexible helper for making HTTP requests, which abstracts away most of the complexity.

Programmatic style only

The information in this document is for node building using the programmatic style. It doesn't apply to declarative style nodes.

Call the helper inside the execute function.

options is an object:

url is required. The other fields are optional. The default method is GET.

Some notes about the possible fields:

  • body: you can use a regular JavaScript object for JSON payload, a buffer for file uploads, an instance of FormData for multipart/form-data, and URLSearchParams for application/x-www-form-urlencoded.
  • headers: a key-value pair.
    • If body is an instance of FormData then n8n adds content-type: multipart/form-data automatically.
    • If body is an instance of URLSearchParams, then n8n adds content-type: application/x-www-form-urlencoded.
    • To override this behavior, set a content-type header.
  • arrayFormat: if your query string contains an array of data, such as const qs = {IDs: [15,17]}, the value of arrayFormat defines how n8n formats it.
    • indices (default): { a: ['b', 'c'] } as a[0]=b&a[1]=c
    • brackets: { a: ['b', 'c'] } as a[]=b&a[]=c
    • repeat: { a: ['b', 'c'] } as a=b&a=c
    • comma: { a: ['b', 'c'] } as a=b,c
  • auth: Used for Basic auth. Provide username and password. n8n recommends omitting this, and using helpers.httpRequestWithAuthentication(...) instead.
  • disableFollowRedirect: By default, n8n follows redirects. You can set this to true to prevent this from happening.
  • skipSslCertificateValidation: Used for calling HTTPS services that don't have a valid SSL certificate.
  • returnFullResponse: Instead of returning just the body, returns an object with more data in the following format: {body: body, headers: object, statusCode: 200, statusMessage: 'OK'}
  • encoding: n8n can detect the content type, but you can specify arrayBuffer to receive a Buffer you can read from and interact with.

For an example, refer to the Mattermost node.
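
As an additional sketch (the endpoint, credential type name, and field values are hypothetical), a POST with a JSON body and an array in the query string could look like this:

const options = {
	method: 'POST',
	url: 'https://api.example.com/v1/items', // hypothetical endpoint
	qs: { ids: [15, 17] },
	arrayFormat: 'repeat', // sends ids=15&ids=17
	body: { name: 'My item' }, // plain object, sent as a JSON payload
	returnFullResponse: true,
};

const response = await this.helpers.httpRequestWithAuthentication.call(
	this,
	'exampleApi', // hypothetical credential type name
	options,
);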

Deprecation of the previous helper

The previous helper implementation using this.helpers.request(options) used and exposed the request-promise library. This was removed in version 1.

To minimize incompatibility, n8n made a transparent conversion to another library called Axios.

If you are having issues, please report them in the Community Forums or on GitHub.

Migration guide to the new helper

The new helper is much more robust, library agnostic, and easier to use.

New nodes should all use the new helper. You should strongly consider migrating existing custom nodes to the new helper. These are the main considerations when migrating:

  • Accepts url. Doesn't accept uri.
  • encoding: null now must be encoding: arrayBuffer.
  • rejectUnauthorized: false is now skipSslCertificateValidation: true
  • Use body according to content-type headers to clarify the payload.
  • resolveWithFullResponse is now returnFullResponse and has similar behavior
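
For instance, a call using the deprecated helper and its migrated equivalent might look like this sketch (the URL is a placeholder):

// Before: deprecated request-promise based helper
const legacyResponse = await this.helpers.request({
	method: 'GET',
	uri: 'https://api.example.com/v1/items',
	encoding: null,
	rejectUnauthorized: false,
	resolveWithFullResponse: true,
});

// After: the new httpRequest helper
const response = await this.helpers.httpRequest({
	method: 'GET',
	url: 'https://api.example.com/v1/items',
	encoding: 'arraybuffer',
	skipSslCertificateValidation: true,
	returnFullResponse: true,
});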

Examples:

Example 1 (unknown):

// If no auth needed
const response = await this.helpers.httpRequest(options);

// If auth needed
const response = await this.helpers.httpRequestWithAuthentication.call(
	this, 
	'credentialTypeName', // For example: pipedriveApi
	options,
);

Example 2 (unknown):

{
	url: string;
	headers?: object;
	method?: 'GET' | 'POST' | 'PUT' | 'DELETE' | 'HEAD';
	body?: FormData | Array | string | number | object | Buffer | URLSearchParams;
	qs?: object;
	arrayFormat?: 'indices' | 'brackets' | 'repeat' | 'comma';
	auth?: {
		username: string,
		password: string,
	};
	disableFollowRedirect?: boolean;
	encoding?: 'arraybuffer' | 'blob' | 'document' | 'json' | 'text' | 'stream';
	skipSslCertificateValidation?: boolean;
	returnFullResponse?: boolean;
	proxy?: {
		host: string;
		port: string | number;
		auth?: {
			username: string;
			password: string;
		},
		protocol?: string;
	};
	timeout?: number;
	json?: boolean;
}

Insights

URL: llms-txt#insights

Contents:

  • Insights summary banner
  • Insights dashboard
  • Insights time periods
  • Setting the time saved by a workflow
  • Disable or configure insights metrics collection
  • Insights FAQs
    • Which executions does n8n use to calculate the values in the insights banner and dashboard?
    • Does n8n use historic execution data when upgrading to a version with insights?

Insights gives instance owners and admins visibility into how workflows perform over time. This feature consists of three parts:

  • Insights summary banner: Shows key metrics about your instance from the last 7 days at the top of the overview space.
  • Insights dashboard: A more detailed visual breakdown with per-workflow metrics and historical comparisons.
  • Time saved (Workflow ROI): For each workflow, you can set the number of minutes of work that each production execution saves you.

The insights summary banner displays activity from the last 7 days for all plans. The insights dashboard is only available on Pro (with limited date ranges) and Enterprise plans.

Insights summary banner

n8n collects several metrics for both the insights summary banner and dashboard. They include:

  • Total production executions (not including sub-workflow executions or manual executions)
  • Total failed production executions
  • Production execution failure rate
  • Time saved (when set on at least one or more active workflows)
  • Run time average (including wait time from any wait nodes)

Insights dashboard

Those on the Pro and Enterprise plans can access the Insights section from the side navigation. Each metric from the summary banner is also clickable, taking you to the corresponding chart.

The insights dashboard also has a table showing individual insights from each workflow including total production executions, failed production executions, failure rate, time saved, and run time average.

Insights time periods

By default, the insights summary banner and dashboard show a rolling 7 day window with a comparison to the previous period to identify increases or decreases for each metric. On the dashboard, paid plans also display data for other date ranges:

  • Pro: 7 and 14 days
  • Enterprise: 24 hours, 7 days, 14 days, 30 days, 90 days, 6 months, 1 year

Setting the time saved by a workflow

For each workflow, you can set the number of minutes of work the workflow saves you each time it runs. To configure this, navigate to the workflow, select the three dots menu in the top right, and select Settings. There you can update the Estimated time saved value and save.

This setting helps you calculate how much time automating a process saves over time vs the manual effort to complete the same task or process. Once set, n8n calculates the amount of time the workflow saves you based on the number of production executions and displays it on the summary banner and dashboard.

Disable or configure insights metrics collection

If you self-host n8n, you can disable or configure insights and metrics collection using environment variables.

Which executions does n8n use to calculate the values in the insights banner and dashboard?

n8n insights only collects data from production executions (for example, those from active workflows triggered on a schedule or a webhook) from the main (parent) workflow. This means that it doesn't count manual (test) executions or executions from sub-workflows or error workflows.

Does n8n use historic execution data when upgrading to a version with insights?

n8n only starts collecting data for insights once you update to the first supported version (1.89.0). This means it only reports on executions from that point forward and you won't see execution data in insights from prior periods.


Manage users with SAML

URL: llms-txt#manage-users-with-saml

Contents:

  • Exempt users from SAML
  • Deleting users

Available on Enterprise plans. You need to be an instance owner or admin to enable and configure SAML.

There are some user management tasks that are affected by SAML.

Exempt users from SAML

You can allow users to log in without using SAML. To do this:

  1. Go to Settings > Users.
  2. Select the menu icon by the user you want to exempt from SAML.
  3. Select Allow Manual Login.

Deleting users

If you remove a user from your IdP, they remain logged in to n8n. You need to manually remove them from n8n as well. Refer to Manage users for guidance on deleting users.


Okta Workforce Identity SAML setup

URL: llms-txt#okta-workforce-identity-saml-setup

Contents:

  • Prerequisites
  • Setup

Set up SAML SSO in n8n with Okta.

Workforce Identity and Customer Identity

This guide covers setting up Workforce Identity. This is the original Okta product. Customer Identity is Okta's name for Auth0, which they've acquired.

You need an Okta Workforce Identity account, and the redirect URL and entity ID from n8n's SAML settings.

Okta Workforce may enforce two factor authentication for users, depending on your Okta configuration.

Read the Set up SAML guide first.

  1. In your Okta admin panel, select Applications > Applications.

  2. Select Create App Integration. Okta opens the app creation modal.

  3. Select SAML 2.0, then select Next.

  4. On the General Settings tab, enter n8n as the App name.

  5. On the Configure SAML tab, complete the following General fields:

    • Single sign-on URL: the Redirect URL from n8n.
    • Audience URI (SP Entity ID): the Entity ID from n8n.
    • Default RelayState: leave this empty.
    • Name ID format: EmailAddress.
    • Application username: Okta username.
    • Update application username on: Create and update.

  6. Create Attribute Statements:

| Name | Name format | Value |
| --- | --- | --- |
| http://schemas.xmlsoap.org/ws/2005/05/identity/claims/firstname | URI Reference | user.firstName |
| http://schemas.xmlsoap.org/ws/2005/05/identity/claims/lastname | URI Reference | user.lastName |
| http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn | URI Reference | user.login |
| http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress | URI Reference | user.email |

  7. Select Next. Okta may prompt you to complete a marketing form, or may take you directly to your new n8n Okta app.

  8. Assign the n8n app to people:

    1. On the n8n app dashboard in Okta, select Assignments.
    2. Select Assign > Assign to People. Okta displays a modal with a list of available people.
    3. Select Assign next to the person you want to add. Okta displays a prompt to confirm the username.
    4. Leave the username as the email address. Select Save and Go Back.
    5. Select Done.

  9. Get the metadata XML: on the Sign On tab, copy the Metadata URL. Navigate to it, and copy the XML. Paste this into Identity Provider Settings in n8n.

  10. Select Save settings.

  11. Select Test settings. n8n opens a new tab. If you're not currently logged in, Okta prompts you to sign in. n8n then displays a success message confirming the attributes returned by Okta.


Notion node

URL: llms-txt#notion-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported
  • Common issues

Use the Notion node to automate work in Notion, and integrate Notion with other applications. n8n has built-in support for a wide range of Notion features, including getting and searching databases, creating pages, and getting users.

On this page, you'll find a list of operations the Notion node supports and links to more resources.

Refer to Notion credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Block
    • Append After
    • Get Child Blocks
  • Database
    • Get
    • Get Many
    • Search
  • Database Page
    • Create
    • Get
    • Get Many
    • Update
  • Page
    • Archive
    • Create
    • Search
  • User
    • Get
    • Get Many

Templates and examples

Transcribe Audio Files, Summarize with GPT-4, and Store in Notion

View template details

Host Your Own AI Deep Research Agent with n8n, Apify and OpenAI o3

View template details

Notion AI Assistant Generator

View template details

Browse Notion integration templates, or search all templates

n8n provides a trigger node for Notion. You can find the trigger node docs here.

Refer to Notion's documentation for details about their API.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

For common errors or issues and suggested resolution steps, refer to Common issues.


Allows usage of all builtin modules

URL: llms-txt#allows-usage-of-all-builtin-modules

export NODE_FUNCTION_ALLOW_BUILTIN=*


Enable modules in Code node

URL: llms-txt#enable-modules-in-code-node

For security reasons, the Code node restricts importing modules. It's possible to lift that restriction for built-in and external modules by setting the following environment variables:

  • NODE_FUNCTION_ALLOW_BUILTIN: For built-in modules
  • NODE_FUNCTION_ALLOW_EXTERNAL: For external modules sourced from the n8n/node_modules directory. External module support is disabled when the environment variable isn't set.
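
For example, to allow specific modules rather than all of them, you could set a comma-separated list like the following (the module names are placeholders for whatever your Code node needs):

export NODE_FUNCTION_ALLOW_BUILTIN=crypto,fs
export NODE_FUNCTION_ALLOW_EXTERNAL=moment,lodash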

Grafana node

URL: llms-txt#grafana-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Grafana node to automate work in Grafana, and integrate Grafana with other applications. n8n has built-in support for a wide range of Grafana features, including creating, updating, deleting, and getting dashboards, teams, and users.

On this page, you'll find a list of operations the Grafana node supports and links to more resources.

Refer to Grafana credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Dashboard
    • Create a dashboard
    • Delete a dashboard
    • Get a dashboard
    • Get all dashboards
    • Update a dashboard
  • Team
    • Create a team
    • Delete a team
    • Get a team
    • Retrieve all teams
    • Update a team
  • Team Member
    • Add a member to a team
    • Retrieve all team members
    • Remove a member from a team
  • User
    • Delete a user from the current organization
    • Retrieve all users in the current organization
    • Update a user in the current organization

Templates and examples

Set DevOps Infrastructure with Docker, K3s, Jenkins & Grafana for Linux Servers

View template details

🛠️ Grafana Tool MCP Server 💪 all 16 operations

View template details

Deploy Docker Grafana, API Backend for WHMCS/WISECP

View template details

Browse Grafana integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Jira Trigger node

URL: llms-txt#jira-trigger-node

Jira is a proprietary issue tracking product developed by Atlassian that allows bug tracking and agile project management.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Jira trigger integrations page.


Slack Trigger node

URL: llms-txt#slack-trigger-node

Contents:

  • Events
  • Parameters
  • Options
  • Related resources
  • Required scopes
  • Verify the webhook
  • Common issues
    • Workflow only works in testing or production
    • Token expired

Use the Slack Trigger node to respond to events in Slack and integrate Slack with other applications. n8n has built-in support for a wide range of Slack events, including new messages, reactions, and new channels.

On this page, you'll find a list of events the Slack Trigger node can respond to and links to more resources.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Slack integrations page.

  • Any Event: The node triggers on any event in Slack.
  • Bot / App Mention: The node triggers when your bot or app is mentioned in a channel the app is in.
  • File Made Public: The node triggers when a file is made public.
  • File Shared: The node triggers when a file is shared in a channel the app is in.
  • New Message Posted to Channel: The node triggers when a new message is posted to a channel the app is in.
  • New Public Channel Created: The node triggers when a new public channel is created.
  • New User: The node triggers when a new user is added to Slack.
  • Reaction Added: The node triggers when a reaction is added to a message the app is added to.

Once you've set the events to trigger on, use the remaining parameters to further define the node's behavior:

  • Watch Whole Workspace: Whether the node should watch for the selected Events in all channels in the workspace (turned on) or not (turned off, default).

This will use one execution for every event in any channel your bot or app is in. Use with caution!

  • Channel to Watch: Select the channel your node should watch for the selected Events. This parameter only appears if you don't turn on Watch Whole Workspace. You can select a channel:

  • From list: The node uses your credential to look up a list of channels in the workspace so you can select the channel you want.

    • By ID: Enter the ID of a channel you want to watch. Slack displays the channel ID at the bottom of the channel details with a one-click copy button.
    • By URL: Enter the URL of the channel you want to watch, formatted as https://app.slack.com/client/<channel-address>.
  • Download Files: Whether to download files and use them in the node's output (turned on) or not (turned off, default). Use this parameter with the File Made Public and File Shared events.

You can further refine the node's behavior when you Add Options:

  • Resolve IDs: Whether to resolve the IDs to their respective names and return them (turned on) or not (turned off, default).
  • Usernames or IDs to ignore: Select usernames or enter a comma-separated string of encoded user IDs to ignore events from. Choose from the list, or specify IDs using an expression.

n8n provides an app node for Slack. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Slack's documentation for details about their API.

To use this node, you need to create an application in Slack and enable event subscriptions. Refer to Slack credentials | Slack Trigger configuration for more information.

You must add the appropriate scopes to your Slack app for this trigger node to work.

The node requires scopes for the conversations.list and users.list methods at minimum. Check out the Scopes | Slack credentials list for a more complete list of scopes.

Verify the webhook

From version 1.106.0, you can set a Slack Signing Secret when configuring your Slack credentials. When set, the Slack trigger node automatically verifies that requests are from Slack and include a trusted signature. n8n recommends setting this to ensure you only process requests sent from Slack.

Here are some common errors and issues with the Slack Trigger node and steps to resolve or troubleshoot them.

Workflow only works in testing or production

Slack only allows you to register a single webhook per app. This means that you can't switch from using the testing URL to the production URL (and vice versa) without reconfiguring the registered webhook URL.

You may have trouble with this if you try to test a workflow that's also active in production. Slack will only send events to one of the two webhook URLs, so the other will never receive event notifications.

To work around this, you can disable your workflow when testing:

Halts production traffic

This temporarily disables your production workflow for testing. Your workflow will no longer receive production traffic while it's deactivated.

  1. Go to your workflow page.
  2. Toggle the Active switch in the top panel to disable the workflow temporarily.
  3. Edit the Request URL in your Slack Trigger configuration to use the testing webhook URL instead of the production webhook URL.
  4. Test your workflow using the test webhook URL.
  5. When you finish testing, edit the Request URL in your Slack Trigger configuration to use the production webhook URL instead of the testing webhook URL.
  6. Toggle the Active switch to enable the workflow again. The production webhook URL should resume working.

Token expired

Slack offers token rotation that you can turn on for bot and user tokens. This makes every token expire after 12 hours. While this may be useful for testing, n8n credentials that use tokens with rotation enabled will fail after expiry. If you want to use your Slack credentials in production, this feature must be off.

To check if your Slack app has token rotation turned on, refer to the Slack API Documentation | Token Rotation.

If your app uses token rotation

Please note, if your Slack app uses token rotation, you can't turn it off again. You need to create a new Slack app with token rotation disabled instead.


Dynatrace credentials

URL: llms-txt#dynatrace-credentials

Contents:

  • Prerequisites
  • Related resources
  • Using Access Token

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Dynatrace account.

Refer to Dynatrace's API documentation for more information about authenticating with the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

Using Access Token

To configure this credential, you'll need:

  • An Access Token

Refer to Access Tokens on Dynatrace's website for more information.


SeaTable credentials

URL: llms-txt#seatable-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a SeaTable account on either a cloud or self-hosted SeaTable server.

Supported authentication methods

Refer to SeaTable's API documentation for more information about the service.

To configure this credential, you'll need:

  • An Environment: Select the environment that matches your SeaTable instance:
    • Cloud-Hosted
    • Self-Hosted
  • An API Token (of a Base): Generate a Base-Token in SeaTable from the base options > Advanced > API Token.
  • A Timezone: Select the timezone of your SeaTable server.

Item linking for node creators

URL: llms-txt#item-linking-for-node-creators

Programmatic-style nodes only

This guidance applies to programmatic-style nodes. If you're using declarative style, n8n handles paired items for you automatically.

Use n8n's item linking to access data from items that precede the current item. n8n needs to know which input item a given output item comes from. If this information is missing, expressions in other nodes may break. As a node developer, you must ensure any items returned by your node support this.

This applies to programmatic nodes (including trigger nodes). You don't need to consider item linking when building a declarative-style node. Refer to Choose your node building approach for more information on node styles.

Start by reading Item linking concepts, which provides a conceptual overview of item linking, and details of the scenarios where n8n can handle the linking automatically.

If you need to handle item linking manually, do this by setting pairedItem on each item your node returns:

Examples:

Example 1 (unknown):

// Use the pairedItem information of the incoming item
newItem = {
	"json": { . . . },
	"pairedItem": {
		"item": item.pairedItem,
		// Optional: choose the input to use
		// Set this if your node combines multiple inputs
		"input": 0
	},
};

// Or set the index manually
newItem = {
	"json": { . . . },
	"pairedItem": {
		"item": i,
		// Optional: choose the input to use
		// Set this if your node combines multiple inputs
		"input": 0
	},
};
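
A minimal sketch of how this can look inside a programmatic node's execute() method, assuming each output item derives from the input item at the same index (the added processed field is illustrative):

// Inside execute(): map every output item back to the input item it came from
const items = this.getInputData();
const returnData = [];

for (let i = 0; i < items.length; i++) {
	returnData.push({
		json: { ...items[i].json, processed: true },
		pairedItem: { item: i },
	});
}

return [returnData];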

JotForm credentials

URL: llms-txt#jotform-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to JotForm's API documentation for more information about the service.

To configure this credential, you'll need a JotForm account and:

  • An API Key
  • The API Domain
  1. Go to Settings > API.
  2. Select Create New Key.
  3. Select the Name in JotForm to update the API key name to something meaningful, like n8n integration.
  4. Copy the API Key and enter it in your n8n credential.
  5. In n8n, select the API Domain that applies to you based on the forms you're using:
    • api.jotform.com: Use this unless the other form types apply to you.
    • eu-api.jotform.com: Select this if you're using JotForm EU Safe Forms.
    • hipaa-api.jotform.com: Select this if you're using JotForm HIPAA forms.

Refer to the JotForm API documentation for more information on creating keys and API domains.


Gmail node Thread Operations

URL: llms-txt#gmail-node-thread-operations

Contents:

  • Add Label to a thread
  • Delete a thread
  • Get a thread
    • Get thread options
  • Get Many threads
    • Get Many threads filters
  • Remove label from a thread
  • Reply to a message
    • Reply options
  • Trash a thread
  • Untrash a thread

Use the Thread operations to delete, reply to, trash, untrash, add/remove labels, get one, or list threads. Refer to the Gmail node for more information on the Gmail node itself.

Add Label to a thread

Use this operation to add labels to a thread.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Thread.
  • Operation: Select Add Label.
  • Thread ID: Enter the ID of the thread you want to add the label to.
  • Label Names or IDs: Select the Label names you want to apply or enter an expression to specify IDs. The dropdown populates based on the Credential you selected.

Refer to the Gmail API Method: users.threads.modify documentation for more information.

Delete a thread

Use this operation to immediately and permanently delete a thread and all its messages.

This operation can't be undone. For recoverable deletions, use the Trash operation instead.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Thread.
  • Operation: Select Delete.
  • Thread ID: Enter the ID of the thread you want to delete.

Refer to the Gmail API Method: users.threads.delete documentation for more information.

Get a thread

Use this operation to get a single thread.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Thread.
  • Operation: Select Get.
  • Thread ID: Enter the ID of the thread you wish to retrieve.
  • Simplify: Choose whether to return a simplified version of the response (turned on) or the raw data (turned off). Default is on.
    • This is the same as setting the format for the API call to metadata, which returns email message IDs, labels, and email headers, including: From, To, CC, BCC, and Subject.

Get thread options

Use these options to further refine the node's behavior:

  • Return Only Messages: Choose whether to return only thread messages (turned on).

Refer to the Gmail API Method: users.threads.get documentation for more information.

Get Many threads

Use this operation to retrieve multiple threads at once.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Thread.
  • Operation: Select Get Many.
  • Return All: Choose whether the node returns all threads (turned on) or only up to a set limit (turned off).
  • Limit: Enter the maximum number of threads to return. Only used if you've turned off Return All.

Get Many threads filters

Use these filters to further refine the node's behavior:

  • Include Spam and Trash: Select whether the node should get threads in the Spam and Trash folders (turned on) or not (turned off).
  • Label Names or IDs: Only return threads with the selected labels added to them. Select the Label names you want to apply or enter an expression to specify IDs. The dropdown populates based on the Credential you selected.
  • Search: Enter Gmail search operators, like from:, to filter the threads returned. Refer to Refine searches in Gmail for more information.
  • Read Status: Choose whether to receive Unread and read emails, Unread emails only (default), or Read emails only.
  • Received After: Return only those emails received after the specified date and time. Use the date picker to select the day and time or enter an expression to set a date as a string in ISO format or a timestamp in milliseconds. Refer to ISO 8601 for more information on formatting the string.
  • Received Before: Return only those emails received before the specified date and time. Use the date picker to select the day and time or enter an expression to set a date as a string in ISO format or a timestamp in milliseconds. Refer to ISO 8601 for more information on formatting the string.
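
For example, to only return threads received in the last week, you could use an expression like this sketch in Received After (it uses n8n's built-in Luxon date helpers to produce an ISO string):

{{ $now.minus({ days: 7 }).toISO() }}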

Refer to the Gmail API Method: users.threads.list documentation for more information.

Remove label from a thread

Use this operation to remove a label from a thread.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Thread.
  • Operation: Select Remove Label.
  • Thread ID: Enter the ID of the thread you want to remove the label from.
  • Label Names or IDs: Select the Label names you want to remove or enter an expression to specify their IDs. The dropdown populates based on the Credential you selected.

Refer to the Gmail API Method: users.threads.modify documentation for more information.

Reply to a message

Use this operation to reply to a message.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Thread.
  • Operation: Select Reply.
  • Thread ID: Enter the ID of the thread you want to reply to.
  • Message Snippet or ID: Select the Message you want to reply to or enter an expression to specify its ID. The dropdown populates based on the Credential you selected.
  • Select the Email Type. Choose from Text or HTML.
  • Message: Enter the email message body.

Reply options

Use these options to further refine the node's behavior:

  • Attachments: Select Add Attachment to add an attachment. Enter the Attachment Field Name (in Input) to identify which field from the input node contains the attachment.
    • For multiple properties, enter a comma-separated list.
  • BCC: Enter one or more email addresses for blind copy recipients. Separate multiple email addresses with a comma, for example jay@gatsby.com, jon@smith.com.
  • CC: Enter one or more email addresses for carbon copy recipients. Separate multiple email addresses with a comma, for example jay@gatsby.com, jon@smith.com.
  • Sender Name: Enter the name you want displayed in your recipients' email as the sender.
  • Reply to Sender Only: Choose whether to reply all (turned off) or reply to the sender only (turned on).

Refer to the Gmail API Method: users.messages.send documentation for more information.

Trash a thread

Use this operation to move a thread and all its messages to the trash.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Thread.
  • Operation: Select Trash.
  • Thread ID: Enter the ID of the thread you want to move to the trash.

Refer to the Gmail API Method: users.threads.trash documentation for more information.

Untrash a thread

Use this operation to recover a thread and all its messages from the trash.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Thread.
  • Operation: Select Untrash.
  • Thread ID: Enter the ID of the thread you want to recover from the trash.

Refer to the Gmail API Method: users.threads.untrash documentation for more information.

For common errors or issues and suggested resolution steps, refer to Common Issues.


Cluster nodes

URL: llms-txt#cluster-nodes

Contents:

  • Root nodes
  • Sub-nodes

Cluster nodes are node groups that work together to provide functionality in an n8n workflow. Instead of using a single node, you use a root node and one or more sub-nodes that extend the functionality of the node.

Each cluster starts with one root node.

Each root node can have one or more sub-nodes attached to it.


Workable credentials

URL: llms-txt#workable-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Workable account.

Supported authentication methods

Refer to Workable's API documentation for more information about the service.

To configure this credential, you'll need:

  • A Subdomain: Your Workable subdomain is the part of your Workable domain between https:// and .workable.com. So if the full domain is https://n8n.workable.com, the subdomain is n8n. The subdomain is also displayed on your Workable Company Profile page.

  • An Access Token: Go to your profile > Integrations > Apps and select Generate API token. Refer to Generate a new token for more information.

If you're using this credential with the Workable Trigger node, select the r_candidates and r_jobs scopes when you generate your token. If you're using this credential in other ways, select scopes that are relevant for your use case.

Refer to Supported API scopes for more information on scopes.


n8n public REST API

URL: llms-txt#n8n-public-rest-api

Contents:

  • Learn about REST APIs

The n8n API isn't available during the free trial. Please upgrade to access this feature.

Using n8n's public API, you can programmatically perform many of the same tasks as you can in the GUI. This section introduces n8n's REST API, including:

n8n provides an n8n API node to access the API in your workflows.
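
For example, a request listing workflows with an API key generally looks something like this sketch (the host and key values are placeholders):

curl -H "X-N8N-API-KEY: <your-api-key>" \
  "https://<your-n8n-instance>/api/v1/workflows"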

Learn about REST APIs

The API documentation assumes you are familiar with REST APIs. If you're not, these resources may be helpful:

Use the API playground

Trying out the API in the playground can help you understand how APIs work. If you're worried about changing live data, consider setting up a test workflow, or test n8n instance, to explore safely.


xAI credentials

URL: llms-txt#xai-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create an xAI account.

Supported authentication methods

Refer to xAI's API documentation for more information about the service.

To configure this credential, you'll need:

Refer to The Hitchhiker's Guide to Grok | xAI for more information.


MailerLite node

URL: llms-txt#mailerlite-node

Contents:

  • Operations
  • Templates and examples

Use the MailerLite node to automate work in MailerLite, and integrate MailerLite with other applications. n8n has built-in support for a wide range of MailerLite features, including creating, updating, deleting, and getting subscribers.

On this page, you'll find a list of operations the MailerLite node supports and links to more resources.

Refer to MailerLite credentials for guidance on setting up authentication.

  • Subscriber
    • Create a new subscriber
    • Get a subscriber
    • Get all subscribers
    • Update a subscriber

Templates and examples

Create, update and get a subscriber using the MailerLite node

View template details

Receive updates when a subscriber is added to a group in MailerLite

View template details

Capture Gumroad sales, add buyer to MailerLite group, log to GoogleSheets CRM

View template details

Browse MailerLite integration templates, or search all templates


Bitly node

URL: llms-txt#bitly-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Bitly node to automate work in Bitly, and integrate Bitly with other applications. n8n has built-in support for a wide range of Bitly features, including creating, getting, and updating links.

On this page, you'll find a list of operations the Bitly node supports and links to more resources.

Refer to Bitly credentials for guidance on setting up authentication.

  • Link
    • Create a link
    • Get a link
    • Update a link

Templates and examples

Explore n8n Nodes in a Visual Reference Library

View template details

Create a URL on Bitly

View template details

Automate URL Shortening with Bitly Using Llama3 Chat Interface

View template details

Browse Bitly integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


PayPal credentials

URL: llms-txt#paypal-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API client and secret

You can use these credentials to authenticate the following nodes:

Create a PayPal developer account.

Supported authentication methods

  • API client and secret

Refer to Paypal's API documentation for more information about the service.

Using API client and secret

To configure this credential, you'll need:

  • A Client ID: Generated when you create an app.
  • A Secret: Generated when you create an app.
  • An Environment: Select Live or Sandbox.

To generate the Client ID and Secret, log in to your Paypal developer dashboard. Select Apps & Credentials > Rest API apps > Create app. Refer to Get client ID and client secret for more information.


Eventbrite Trigger node

URL: llms-txt#eventbrite-trigger-node

Eventbrite is an event management and ticketing website. The service allows users to browse, create, and promote local events.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Eventbrite Trigger integrations page.


Supabase node common issues

URL: llms-txt#supabase-node-common-issues

Contents:

  • Filtering rows by metadata
  • Can't connect to a local Supabase database when using Docker
    • If only Supabase is in Docker
    • If Supabase and n8n are running in separate Docker containers
  • Records are accessible through Postgres but not Supabase

Here are some common errors and issues with the Supabase node and steps to resolve or troubleshoot them.

Filtering rows by metadata

To filter rows by Supabase metadata, set the Select Type to String.

From there, you can construct a query in the Filters (String) parameter to filter the metadata using the Supabase metadata query language, inspired by the MongoDB selectors format. Access the metadata properties using the Postgres ->> arrow JSON operator like this (curly brackets denote components to fill in):

For example, to access an age property in the metadata and return results greater than or equal to 21, you could enter the following in the Filters (String) field:

You can combine these operators to construct more complex queries.

Can't connect to a local Supabase database when using Docker

When you run Supabase in Docker, you need to configure the network so that n8n can connect to Supabase.

The solution depends on how you're hosting the two components.

If only Supabase is in Docker

If only Supabase is running in Docker, the Docker Compose file used by the self-hosting guide already runs Supabase bound to the correct interfaces.

When configuring Supabase credentials, the localhost address should work without a problem (set the Host to localhost).

If Supabase and n8n are running in separate Docker containers

If both n8n and Supabase are running in Docker in separate containers, you can use Docker networking to connect them.

Configure Supabase to listen on all interfaces by binding to 0.0.0.0 inside the container (the official Docker Compose configuration already does this). Add both the Supabase and n8n components to the same user-defined bridge network if you aren't already managing them together in the same Docker Compose file.

When configuring Supabase credentials, use the Supabase API gateway container's name (supabase-kong by default) as the host address instead of localhost. For example, if you use the default configuration, you would set the Host to http://supabase-kong:8000.

Records are accessible through Postgres but not Supabase

If queries return no records when using the Supabase node, but the same records are available through the Postgres node or a Postgres client, the cause may be Supabase's Row Level Security (RLS) policy.

Supabase always enables RLS when you create a table in a public schema with the Table Editor. When RLS is active, the API doesn't return any data with the public anon key until you create policies. This is a security measure to ensure that you only expose data you intend to.

To access data from a table with RLS enabled as the anon role, create a policy to enable the access patterns you intend to use.

Examples:

Example 1 (unknown):

metadata->>{your-property}={comparison-operator}.{comparison-value}

Example 2 (unknown):

metadata->>age=gte.21

healthz

URL: llms-txt#healthz

Set the following environment variable to enable the /healthz health check endpoint when running n8n in queue mode:

QUEUE_HEALTH_CHECK_ACTIVE=true


Refer to [Configuration methods](../../configuration/configuration-methods/) for more information on how to configure your instance using environment variables.

---

## Cal Trigger node

**URL:** llms-txt#cal-trigger-node

**Contents:**
- Events

[Cal](https://cal.com/) is the event-juggling scheduler for everyone. Focus on meeting, not making meetings.

You can find authentication information for this node [here](../../credentials/cal/).

Examples and templates

For usage examples and templates to help you get started, refer to n8n's [Cal Trigger integrations](https://n8n.io/integrations/cal-trigger/) page.

Events

- Booking cancelled
- Booking created
- Booking rescheduled
- Meeting ended

---

## Data transformation functions

**URL:** llms-txt#data-transformation-functions

**Contents:**
- Usage

Data transformation functions are helper functions to make data transformation easier in [expressions](../../../glossary/#expression-n8n).

JavaScript in expressions

You can use any JavaScript in expressions. Refer to [Expressions](../../expressions/) for more information.

For a list of available functions, refer to the page for your data type:

- [Arrays](arrays/)
- [Dates](dates/)
- [Numbers](numbers/)
- [Objects](objects/)
- [Strings](strings/)

Data transformation functions are available in the expressions editor.

For example, to check if a string is an email:

**Examples:**

Example 1 (unknown):
{{ dataItem.function() }}

Example 2 (unknown):

{{ "example@example.com".isEmail() }}

// Returns true

Spotify node

URL: llms-txt#spotify-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Spotify node to automate work in Spotify, and integrate Spotify with other applications. n8n has built-in support for a wide range of Spotify features, including getting album and artist information.

On this page, you'll find a list of operations the Spotify node supports and links to more resources.

Refer to Spotify credentials for guidance on setting up authentication.

Operations

  • Album
    • Get an album by URI or ID.
    • Get a list of new album releases.
    • Get an album's tracks by URI or ID.
    • Search albums by keyword.
  • Artist
    • Get an artist by URI or ID.
    • Get an artist's albums by URI or ID.
    • Get an artist's related artists by URI or ID.
    • Get an artist's top tracks by URI or ID.
    • Search artists by keyword.
  • Library
    • Get the user's liked tracks.
  • My Data
    • Get your followed artists.
  • Player
    • Add a song to your queue.
    • Get your currently playing track.
    • Skip to your next track.
    • Pause your music.
    • Skip to your previous song.
    • Get your recently played tracks.
    • Resume playback on the current active device.
    • Set volume on the current active device.
    • Start playing a playlist, artist, or album.
  • Playlist
    • Add tracks from a playlist by track and playlist URI or ID.
    • Create a new playlist.
    • Get a playlist by URI or ID.
    • Get a playlist's tracks by URI or ID.
    • Get a user's playlists.
    • Remove tracks from a playlist by track and playlist URI or ID.
    • Search playlists by keyword.
  • Track
    • Get a track by its URI or ID.
    • Get audio features for a track by URI or ID.
    • Search tracks by keyword

Templates and examples

Add liked songs to a Spotify monthly playlist

View template details

IOT Button Remote / Spotify Control Integration with MQTT

View template details

Download recently liked songs automatically with Spotify

View template details

Browse Spotify integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Incident response

URL: llms-txt#incident-response

n8n implements incident response best practices for identifying, documenting, resolving and communicating incidents.

n8n publishes incident notifications to a status page at n8n Status.

n8n notifies customers of any data breaches according to the company's Data Processing Addendum.


Code node cookbook

URL: llms-txt#code-node-cookbook

Contents:

  • Related resources

This section contains examples and recipes for tasks you can do with the Code node.


Discourse node

URL: llms-txt#discourse-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Discourse node to automate work in Discourse, and integrate Discourse with other applications. n8n has built-in support for a wide range of Discourse features, including creating, getting, updating, and removing categories, groups, posts, and users.

On this page, you'll find a list of operations the Discourse node supports and links to more resources.

Refer to Discourse credentials for guidance on setting up authentication.

Operations

  • Category
    • Create a category
    • Get all categories
    • Update a category
  • Group
    • Create a group
    • Get a group
    • Get all groups
    • Update a group
  • Post
    • Create a post
    • Get a post
    • Get all posts
    • Update a post
  • User
    • Create a user
    • Get a user
    • Get all users
  • User Group
    • Create a user to group
    • Remove user from group

Templates and examples

Enrich new Discourse members with Clearbit then notify in Slack

View template details

Create, update and get a post via Discourse

View template details

🛠️ Discourse Tool MCP Server 💪 all 16 operations

View template details

Browse Discourse integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


n8n community node blocklist

URL: llms-txt#n8n-community-node-blocklist

n8n maintains a blocklist of community nodes. You can't install any node on this list.

n8n may add community nodes to the blocklist for a range of reasons, including:

  • The node is intentionally malicious
  • It's low quality (low enough to be harmful)

If you are a community node creator whose node is on the blocklist, and you believe this is a mistake, contact [hello@n8n.io](mailto:hello@n8n.io).


OpenAI Chat Model node

URL: llms-txt#openai-chat-model-node

Contents:

  • Node parameters
    • Model
    • Built-in Tools
  • Node options
    • Base URL
    • Frequency Penalty
    • Maximum Number of Tokens
    • Response Format
    • Presence Penalty
    • Sampling Temperature

Use the OpenAI Chat Model node to use OpenAI's chat models with conversational agents.

On this page, you'll find the node parameters for the OpenAI Chat Model node and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Node parameters

Model

Select the model to use to generate the completion.

n8n dynamically loads models from OpenAI, and you'll only see the models available to your account.

Built-in Tools

The OpenAI Responses API provides a range of built-in tools to enrich the model's response:

  • Web Search: Allows models to search the web for the latest information before generating a response.
  • MCP Servers: Allows models to connect to remote MCP servers. Find out more about using remote MCP servers as tools here.
  • File Search: Allows models to search your knowledge base of previously uploaded files for relevant information before generating a response. Refer to the OpenAI documentation for more information.
  • Code Interpreter: Allows models to write and run Python code in a sandboxed environment.

Node options

Use these options to further refine the node's behavior.

Base URL

Enter a URL here to override the default URL for the API.

Frequency Penalty

Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself.

Maximum Number of Tokens

Enter the maximum number of tokens used, which sets the completion length.

Response Format

Choose Text or JSON. JSON ensures the model returns valid JSON.

Presence Penalty

Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics.

Sampling Temperature

Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

Timeout

Enter the maximum request time in milliseconds.

Max Retries

Enter the maximum number of times to retry a request.

Top P

Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.

The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation after this response completes.

Use this key for caching similar requests to optimize cache hit rates.

Safety Identifier

Apply an identifier to track users who may violate usage policies.

Select the service tier that fits your needs: Auto, Flex, Default, or Priority.

A set of key-value pairs for storing structured information. You can attach up to 16 pairs to an object, which is useful for adding custom data that can be used for searching by the API or in the dashboard.

Define an integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability.

Choose a response format: Text, JSON Schema, or JSON Object. JSON Schema is recommended if you want to receive data in JSON format.

Configure the prompt filled with a unique ID, its version, and substitutable variables.

Control the reasoning level of AI results: Low, Medium, or High.

Templates and examples

View template details

Building Your First WhatsApp Chatbot

View template details

Scrape and summarize webpages with AI

View template details

Browse OpenAI Chat Model integration templates, or search all templates

Refer to LangChain's OpenAI documentation for more information about the service.

Refer to OpenAI documentation for more information about the parameters.

View n8n's Advanced AI documentation.

For common questions or issues and suggested solutions, refer to Common issues.


Jenkins node

URL: llms-txt#jenkins-node

Contents:

  • Operations
  • Templates and examples

Use the Jenkins node to automate work in Jenkins, and integrate Jenkins with other applications. n8n has built-in support for a wide range of Jenkins features, including listing builds, managing instances, and creating and copying jobs.

On this page, you'll find a list of operations the Jenkins node supports and links to more resources.

Refer to Jenkins credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Operations

  • Build
    • List Builds
  • Instance
    • Cancel quiet down state
    • Put Jenkins in quiet mode: no builds can be started and Jenkins is ready for shutdown
    • Restart Jenkins immediately on environments where it's possible
    • Restart Jenkins once no jobs are running on environments where it's possible
    • Shutdown once no jobs are running
    • Shutdown Jenkins immediately
  • Job
    • Copy a specific job
    • Create a new job
    • Trigger a specific job
    • Trigger a specific job with parameters

Templates and examples

Browse Jenkins integration templates, or search all templates


Mailcheck credentials

URL: llms-txt#mailcheck-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API Key

You can use these credentials to authenticate the following nodes:

Create a Mailcheck account.

Supported authentication methods

  • API key

Related resources

Refer to Mailcheck's API documentation for more information about the service.

Using API Key

To configure this credential, you'll need:


Ghost credentials

URL: llms-txt#ghost-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using Admin API key
  • Using Content API key

You can use these credentials to authenticate the following nodes:

Create a Ghost account.

Supported authentication methods

  • Admin API key
  • Content API key

The keys are generated following the same steps, but the authorization flows and key format are different, so n8n stores the credentials separately. The Content API uses an API key; the Admin API uses an API key to generate a token for authentication.

Related resources

Refer to Ghost's Admin API documentation for more information about the Admin API service. Refer to Ghost's Content API documentation for more information about the Content API service.

Using Admin API key

To configure this credential, you'll need:

  • The URL of your Ghost admin domain. Your admin domain can be different to your main domain and may include a subdirectory. All Ghost(Pro) blogs have a *.ghost.io domain as their admin domain and require https.
  • An API Key: To generate a new API key, create a new Custom Integration. Refer to the Ghost Admin API Token Authentication Key documentation for more detailed instructions. Copy the Admin API Key and use this as the API Key in the Ghost Admin n8n credential.

Using Content API key

To configure this credential, you'll need:

  • The URL of your Ghost admin domain. Your admin domain can be different to your main domain and may include a subdirectory. All Ghost(Pro) blogs have a *.ghost.io domain as their admin domain and require https.
  • An API Key: To generate a new API key, create a new Custom Integration. Refer to the Ghost Content API Key documentation for more detailed instructions. Copy the Content API Key and use this as the API Key in the Ghost Content n8n credential.

PostHog credentials

URL: llms-txt#posthog-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a PostHog account or host PostHog on your server.

Supported authentication methods

  • API key

Related resources

Refer to PostHog's API documentation for more information about the service.

Using API key

To configure this credential, you'll need:

  • The API URL: Enter the correct domain for your API requests:
    • On US Cloud, use https://us.i.posthog.com for public POST-only endpoints or https://us.posthog.com for private endpoints.
    • On EU Cloud, use https://eu.i.posthog.com for public POST-only endpoints or https://eu.posthog.com for private endpoints.
    • For self-hosted instances, use your self-hosted domain.
    • Confirm yours by checking your PostHog instance URL.
  • An API Key: The API key you use depends on whether you're accessing public or private endpoints:

Custom executions data

URL: llms-txt#custom-executions-data

Contents:

  • Set and access custom data using the Code node
    • Set custom executions data
    • Access the custom data object during execution

You can set custom data on your workflow using the Code node or the Execution Data node. n8n records this with each execution. You can then use this data when filtering the executions list, or fetch it in your workflows using the Code node.

Custom executions data is available on:

  • Cloud: Pro, Enterprise
  • Self-Hosted: Enterprise, registered Community

Set and access custom data using the Code node

This section describes how to set and access data using the Code node. Refer to Execution Data node for information on using the Execution Data node to set data. You can't retrieve custom data using the Execution Data node.

Set custom executions data

Set a single piece of extra data:

Set all extra data. This overwrites the whole custom data object for this execution:

There are limitations:

  • Keys and values must be strings
  • key has a maximum length of 50 characters
  • value has a maximum length of 255 characters
  • n8n supports a maximum of 10 items of custom data

Access the custom data object during execution

You can retrieve the custom data object, or a specific value in it, during an execution:
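A minimal sketch of the retrieval calls (they mirror the setter calls shown in the examples below; in the Python Code node, use _execution instead of $execution):

// Get a single value by key
$execution.customData.get("key");

// Get the whole custom data object
$execution.customData.getAll();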

Examples:

Example 1 (unknown):

$execution.customData.set("key", "value");

Example 2 (unknown):

_execution.customData.set("key", "value");

Example 3 (unknown):

$execution.customData.setAll({"key1": "value1", "key2": "value2"})

Example 4 (unknown):

_execution.customData.setAll({"key1": "value1", "key2": "value2"})

Shopify credentials

URL: llms-txt#shopify-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using access token
  • Using OAuth2
  • Using API key
  • Common issues
    • Enable custom app development
    • Forbidden credentials error

You can use these credentials to authenticate the following nodes with Shopify.

Supported authentication methods

  • Access token (recommended): For private apps/single store use. Can be created by regular admins.
  • OAuth2: For public apps. Must be created by partner accounts.
  • API key: Deprecated.

Related resources

Refer to Shopify's authentication documentation for more information about the service.

Using access token

To configure this credential, you'll need a Shopify admin account and:

  • Your Shop Subdomain
  • An Access Token: Generated when you create a custom app.
  • An APP Secret Key: Generated when you create a custom app.

To set up the credential, you'll need to create and install a custom app:

  1. Enter your Shop Subdomain.
    • Your subdomain is within the URL: https://<subdomain>.myshopify.com. For example, if the full URL is https://n8n.myshopify.com, the Shop Subdomain is n8n.
  2. In Shopify, go to Admin > Settings > Apps and sales channels.

  3. Select Develop apps.

  4. Select Create a custom app.

Don't see this option?

If you don't see this option, your store probably doesn't have custom app development enabled. Refer to Enable custom app development for more information.

  5. In the modal window, enter the App name.

  6. Select an App developer. The app developer can be the store owner or any account with the Develop apps permission.

  7. Select Create app.

  8. Select Select scopes. In the Admin API access scopes section, select the API scopes you want for your app.
    • To use all functionality in the Shopify node, add the read_orders, write_orders, read_products, and write_products scopes.
  9. Select Install app.

  10. In the modal window, select Install app.

  11. Open the app's API Credentials section.

  12. Copy the Admin API Access Token. Enter this in your n8n credential as the Access Token.

  13. Copy the API Secret Key. Enter this in your n8n credential as the APP Secret Key.

Refer to Creating a custom app and Generate access tokens for custom apps in the Shopify admin for more information on these steps.

Using OAuth2

To configure this credential, you'll need a Shopify partner account and:

  • A Client ID: Generated when you create a custom app.
  • A Client Secret: Generated when you create a custom app.
  • Your Shop Subdomain

To set up the credential, you'll need to create and install a custom app:

Custom app development

Shopify provides templates for creating new apps. The instructions below only cover the elements necessary to set up your n8n credential. Refer to Shopify's Build dev docs for more information on building apps and working with app templates.

  1. Open your Shopify Partner dashboard.
  2. Select Apps from the left navigation.
  3. Select Create app.
  4. In the Use Shopify Partners section, enter an App name.
  5. Select Create app.
  6. When the app details open, copy the Client ID. Enter this in your n8n credential.
  7. Copy the Client Secret. Enter this in your n8n credential.
  8. In the left menu, select Configuration.
  9. In n8n, copy the OAuth Redirect URL and paste it into the Allowed redirection URL(s) in the URLs section.
  10. In the URLs section, enter an App URL for your app. The host entered here needs to match the host for the Allowed redirection URL(s), like the base URL for your n8n instance.
  11. Select Save and release.
  12. Select Overview from the left menu. At this point, you can choose to Test your app by installing it to one of your stores, or Choose distribution to distribute it publicly.
  13. In n8n, enter the Shop Subdomain of the store you installed the app to, either as a test or as a distribution.
    • Your subdomain is within the URL: https://<subdomain>.myshopify.com. For example, if the full URL is https://n8n.myshopify.com, the Shop Subdomain is n8n.

Using API key

Shopify no longer generates API keys with passwords. Use the Access token method instead.

To configure this credential, you'll need:

  • An API Key
  • A Password
  • Your Shop Subdomain: Your subdomain is within the URL: https://<subdomain>.myshopify.com. For example, if the full URL is https://n8n.myshopify.com, the Shop Subdomain is n8n.
  • Optional: A Shared Secret

Common issues

Here are some common issues setting up the Shopify credential and steps to resolve or troubleshoot them.

Enable custom app development

If you don't see the option to Create a custom app, no one's enabled custom app development for your store.

To enable custom app development, you must log in either as a store owner or as a user with the Enable app development permission:

  1. In Shopify, go to Admin > Settings > Apps and sales channels.
  2. Select Develop apps.
  3. Select Allow custom app development.
  4. Read the warning and information provided and select Allow custom app development.

Forbidden credentials error

If you get a Couldn't connect with these settings / Forbidden - perhaps check your credentials warning when you test the credentials, this may be due to your app's access scope dependencies. For example, the read_orders scope also requires read_products scope. Review the scopes you have assigned and the action you're trying to complete.


Item linking errors

URL: llms-txt#item-linking-errors

Contents:

  • Fix for 'Info for expressions missing from previous node'
  • Fix for 'Multiple matching items for expression'

In n8n you can reference data from any previous node. This doesn't have to be the node just before: it can be any previous node in the chain. When referencing nodes further back, you use the expression syntax $(node_name).item.

Diagram of threads for different items. Due to the item linking, you can get the actor for each movie using $('Get famous movie actors').item.

Since the previous node can have multiple items in it, n8n needs to know which one to use. When using .item, n8n figures this out for you behind the scenes. Refer to Item linking concepts for detailed information on how this works.

.item fails if information is missing. To figure out which item to use, n8n maintains a thread back through the workflow's nodes for each item. For a given item, this thread tells n8n which items in previous nodes generated it. To find the matching item in a given previous node, n8n follows this thread back until it reaches the node in question.

When using .item, n8n displays an error when:

  • The thread is broken
  • The thread points to more than one item in the previous node (as it's unclear which one to use)

To solve these errors, you can either avoid using .item, or fix the root cause.

You can avoid .item by using .first(), .last() or .all()[index] instead. These require you to know the position of the item you're targeting within the target node's output items. Refer to Built-in methods and variables | Output of other nodes for more detail on these methods.
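For example, with the 'Get famous movie actors' node from the diagram above (name is a hypothetical field here), these expressions target specific items explicitly instead of relying on item linking:

// First output item
{{ $('Get famous movie actors').first().json.name }}

// Last output item
{{ $('Get famous movie actors').last().json.name }}

// Item at a specific index
{{ $('Get famous movie actors').all()[0].json.name }}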

The fix for the root cause depends on the exact error.

Fix for 'Info for expressions missing from previous node'

If you see this error message:

ERROR: Info for expression missing from previous node

There's a node in the chain that doesn't return pairing information. The solution here depends on the type of the previous node:

  • Code nodes: make sure you return which input items the node used to produce each output item (see the sketch after this list). Refer to Item linking in the code node for more information.
  • Custom or community nodes: the node creator needs to update the node to return which input items it uses to produce each output item. Refer to Item linking for node creators for more information.
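For the Code node case, here's a minimal sketch of returning pairing information (assuming the default Run Once for All Items mode; pairedItem tells n8n which input item produced each output item):

// Keep each output item linked to the input item it came from
return $input.all().map((item, index) => ({
  json: item.json,
  pairedItem: { item: index },
}));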

Fix for 'Multiple matching items for expression'

This is the error message:

ERROR: Multiple matching items for expression

Sometimes n8n uses multiple items to create a single item. Examples include the Summarize, Aggregate, and Merge nodes. These nodes can combine information from multiple items.

When you use .item and there are multiple possible matches, n8n doesn't know which one to use. To solve this you can either:


Kafka credentials

URL: llms-txt#kafka-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using client ID

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Kafka's documentation for more information about using the service.

If you're new to Kafka, refer to the Apache Kafka Quickstart for initial setup.

Refer to Encryption and Authentication using SSL for working with SSL in Kafka.

Using client ID

To configure this credential, you'll need a running Kafka environment and:

  • A Client ID
  • A list of relevant Brokers
  • Username/password authentication details if your Kafka environment uses authentication
  1. Enter the CLIENT-ID of the client or consumer group in the Client ID field in your credential.
  2. Enter a comma-separated list of relevant Brokers for the credential to use in the format <broker-service-name>:<port>. Use the name you gave the broker when you defined it in the services list. For example, kafka-1:9092,kafka-2:9092 would add the brokers kafka-1 and kafka-2 on port 9092.
  3. If your Kafka environment doesn't use SSL, turn off the SSL toggle.
  4. If you've enabled authentication using SASL in your Kafka environment, turn on the Authentication toggle. Then add:
    1. The Username
    2. The Password
    3. Select the broker's configured SASL Mechanism. Refer to SASL configuration for more information. Options include:
      • Plain
      • scram-sha-256
      • scram-sha-512

Strava credentials

URL: llms-txt#strava-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • OAuth2

Related resources

Refer to Strava's API documentation for more information about the service.

Using OAuth2

To configure this credential, you'll need:

Use these settings for your Strava app:

  • In n8n, copy the OAuth Callback URL. Paste this URL into your Strava app's Authorization Callback Domain.
  • Remove the protocol (https:// or http://) and the relative URL (/oauth2/callback or /rest/oauth2-credential/callback) from the Authorization Callback Domain. For example, if the OAuth Redirect URL was originally https://oauth.n8n.cloud/oauth2/callback, the Authorization Callback Domain would be oauth.n8n.cloud.
  • Copy the Client ID and Client Secret from your app and add them to your n8n credential.

Refer to Authentication for more information about Strava's OAuth flow.


SIGNL4 node

URL: llms-txt#signl4-node

Contents:

  • Operations
  • Templates and examples

Use the SIGNL4 node to automate work in SIGNL4, and integrate SIGNL4 with other applications. n8n supports sending and resolving alerts with SIGNL4.

On this page, you'll find a list of operations the SIGNL4 node supports and links to more resources.

Refer to SIGNL4 credentials for guidance on setting up authentication.

Operations

  • Alert
    • Send an alert
    • Resolve an alert

Templates and examples

Monitor a file for changes and send an alert

View template details

Send weather alerts to your mobile phone with OpenWeatherMap and SIGNL4

View template details

Send TheHive Alerts Using SIGNL4

View template details

Browse SIGNL4 integration templates, or search all templates


Node base file

URL: llms-txt#node-base-file

The node base file contains the core code of your node. All nodes must have a base file. The contents of this file are different depending on whether you're building a declarative-style or programmatic-style node. For guidance on which style to use, refer to Choose your node building approach.

These documents give short code snippets to help understand the code structure and concepts. For full walk-throughs of building a node, including real-world code examples, refer to Build a declarative-style node or Build a programmatic-style node.

You can also explore the n8n-nodes-starter and n8n's own nodes for a wider range of examples. The starter contains basic examples that you can build on. The n8n Mattermost node is a good example of a more complex programmatic-style node, including versioning.

For all nodes, refer to the:

For declarative-style nodes, refer to the:

For programmatic-style nodes, refer to the:


Yahoo IMAP credentials

URL: llms-txt#yahoo-imap-credentials

Contents:

  • Prerequisites
  • Set up the credential

Follow these steps to configure the IMAP credentials with a Yahoo account.

To follow these instructions, you must first generate an app password:

  1. Log in to your Yahoo account Security page.
  2. Select Generate app password or Generate and manage app passwords.
  3. Select Get Started.
  4. Enter an App name for your new app password, like n8n credential.
  5. Select Generate password.
  6. Copy the generated app password. You'll use this in your n8n credential.

Refer to Yahoo's Generate and manage 3rd-party app passwords for more information.

Set up the credential

To set up the IMAP credential with a Yahoo Mail account, use these settings:

  1. Enter your Yahoo email address as the User.
  2. Enter the app password you generated above as the Password.
  3. Enter imap.mail.yahoo.com as the Host.
  4. Keep the default Port number of 993. Check with your email administrator if this port doesn't work.
  5. Turn on the SSL/TLS toggle.
  6. Check with your email administrator about whether to Allow Self-Signed Certificates.

Refer to Set up IMAP for Yahoo mail account for more information.


Mautic Trigger node

URL: llms-txt#mautic-trigger-node

Contents:

  • Related resources

Mautic is an open-source marketing automation software that helps online businesses automate their repetitive marketing tasks such as lead generation, contact scoring, contact segmentation, and marketing campaigns.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Mautic Trigger integrations page.

Related resources

n8n provides an app node for Mautic. You can find the node docs here.

View example workflows and related content on n8n's website.


Box Trigger node

URL: llms-txt#box-trigger-node

Contents:

  • Find your Box Target ID

Box is a cloud computing company which provides file sharing, collaborating, and other tools for working with files uploaded to its servers.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Box Trigger integrations page.

Find your Box Target ID

To get your Target ID in Box:

  1. Open the file/folder that you would like to monitor.
  2. Copy the string of characters after folder/ in your URL. This is the target ID. For example, if the URL is https://app.box.com/folder/12345, then 12345 is the target ID.
  3. Paste it in the Target ID field in n8n.

SearXNG credentials

URL: llms-txt#searxng-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API URL

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to SearXNG's documentation for more information about the service.

Using API URL

To configure this credential, you'll need an instance of SearXNG running at a URL that's accessible from n8n:

  • API URL: The URL of the SearXNG instance you want to connect to.

Refer to SearXNG's Administrator documentation for more information about running the service.


Harvest node

URL: llms-txt#harvest-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Harvest node to automate work in Harvest, and integrate Harvest with other applications. n8n has built-in support for a wide range of Harvest features, including creating, updating, deleting, and getting clients, contacts, invoices, tasks, expenses, users, and projects.

On this page, you'll find a list of operations the Harvest node supports and links to more resources.

Refer to Harvest credentials for guidance on setting up authentication.

Operations

  • Client
    • Create a client
    • Delete a client
    • Get data of a client
    • Get data of all clients
    • Update a client
  • Company
    • Retrieves the company for the currently authenticated user
  • Contact
    • Create a contact
    • Delete a contact
    • Get data of a contact
    • Get data of all contacts
    • Update a contact
  • Estimate
    • Create an estimate
    • Delete an estimate
    • Get data of an estimate
    • Get data of all estimates
    • Update an estimate
  • Expense
    • Get data of an expense
    • Get data of all expenses
    • Create an expense
    • Update an expense
    • Delete an expense
  • Invoice
    • Get data of an invoice
    • Get data of all invoices
    • Create an invoice
    • Update an invoice
    • Delete an invoice
  • Project
    • Create a project
    • Delete a project
    • Get data of a project
    • Get data of all projects
    • Update a project
  • Task
    • Create a task
    • Delete a task
    • Get data of a task
    • Get data of all tasks
    • Update a task
  • Time Entries
    • Create a time entry using duration
    • Create a time entry using start and end time
    • Delete a time entry
    • Delete a time entry's external reference.
    • Get data of a time entry
    • Get data of all time entries
    • Restart a time entry
    • Stop a time entry
    • Update a time entry
  • User
    • Create a user
    • Delete a user
    • Get data of a user
    • Get data of all users
    • Get data of authenticated user
    • Update a user

Templates and examples

Automated Investor Intelligence: CrunchBase to Google Sheets Data Harvester

View template details

Process Shopify new orders with Zoho CRM and Harvest

View template details

Create a client in Harvest

View template details

Browse Harvest integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Workflows environment variables

URL: llms-txt#workflows-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.
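For example, to read the default workflow name from a file rather than setting it inline (a sketch; the path is a placeholder):

WORKFLOWS_DEFAULT_NAME_FILE=/path/to/default-workflow-name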

Variable Type Default Description
N8N_ONBOARDING_FLOW_DISABLED Boolean false Whether to disable onboarding tips when creating a new workflow (true) or not (false).
N8N_WORKFLOW_ACTIVATION_BATCH_SIZE Number 1 How many workflows to activate simultaneously during startup.
N8N_WORKFLOW_CALLER_POLICY_DEFAULT_OPTION String workflowsFromSameOwner Which workflows can call a workflow. Options are: any, none, workflowsFromAList, workflowsFromSameOwner. This feature requires Workflow sharing.
N8N_WORKFLOW_TAGS_DISABLED Boolean false Whether to disable workflow tags (true) or enable tags (false).
WORKFLOWS_DEFAULT_NAME String My workflow The default name used for new workflows.

Notion credentials

URL: llms-txt#notion-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API integration token
    • Share Notion page(s) with the integration
  • Using OAuth2
  • Internal vs. public integrations

You can use these credentials to authenticate the following nodes:

Create a Notion account with admin level access.

Supported authentication methods

  • API integration token: Used for internal integrations.
  • OAuth2: Used for public integrations.

Not sure which integration type to use? Refer to Internal vs. public integrations below for more information.

Related resources

Refer to Notion's API documentation for more information about the service.

Using API integration token

To configure this credential, you'll need:

  • An Internal Integration Secret: Generated once you create a Notion integration.

To generate an integration secret, create a Notion integration and grab the integration secret from the Secrets tab:

  1. Go to your Notion integration dashboard.
  2. Select the + New integration button.
  3. Enter a Name for your integration, for example n8n integration. If desired, add a Logo.
  4. Select Submit to create your integration.
  5. Open the Capabilities tab. Select these capabilities:
    • Read content
    • Update content
    • Insert content
    • User information without email addresses
  6. Be sure to Save changes.
  7. Select the Secrets tab.
  8. Copy the Internal Integration Token and add it as your n8n Internal Integration Secret.

Refer to the Internal integration auth flow setup documentation for more information about authenticating to the service.

Share Notion page(s) with the integration

For your integration to interact with Notion, you must give your integration page permission to interact with page(s) in your Notion workspace:

  1. Visit the page in your Notion workspace.
  2. Select the triple dot menu at the top right of a page.
  3. In Connections, select Connect to.
  4. Use the search bar to find and select your integration from the dropdown list.

Once you share at least one page with the integration, you can start making API requests. If the page isn't shared, any API requests made will respond with an error.

Refer to Integration permissions for more information.

Using OAuth2

To configure this credential, you'll need:

  • A Client ID: Generated once you configure a public integration.
  • A Client Secret: Generated once you configure a public integration.

You must create a Notion integration and set it to public distribution:

  1. Go to your Notion integration dashboard.
  2. Select the + New integration button.
  3. Enter a Name for your integration, for example n8n integration. If desired, add a Logo.
  4. Select Submit to create your integration.
  5. Open the Capabilities tab. Select these capabilities:
    • Read content
    • Update content
    • Insert content
    • User information without email addresses
  6. Select Save changes.
  7. Go to the Distribution tab.
  8. Turn on the Do you want to make this integration public? control.
  9. Enter your company name and website in the Organization Information section.
  10. Copy the n8n OAuth Redirect URL and add it to as a Redirect URI in the Notion integration's OAuth Domain & URLs section.
  11. Go to the Secrets tab.
  12. Copy the Client ID and Client Secret and add them to your n8n credential.

Refer to Notion's public integration auth flow setup for more information about authenticating to the service.

Internal vs. public integrations

Internal integrations are:

  • Specific to a single workspace.
  • Accessible only to members of that workspace.
  • Ideal for custom workspace enhancements.

Internal integrations use a simpler authentication process (the integration secret) and don't require any security review before publishing.

Public integrations are:

  • Usable across multiple, unrelated Notion workspaces.
  • Accessible by any Notion user, regardless of their workspace.
  • Ideal for catering to broad use cases.

Public integrations use the OAuth 2.0 protocol for authentication. They require a Notion security review before publishing.

For a more detailed breakdown of the two integration types, refer to Notion's Internal vs. Public Integrations documentation.


ConvertKit Trigger node

URL: llms-txt#convertkit-trigger-node

Contents:

  • Events
  • Related resources

ConvertKit is a fully featured email marketing platform. Use ConvertKit to build an email list, send email broadcasts, automate sequences, create segments, and build landing pages.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's ConvertKit Trigger integrations page.

Events

  • Form subscribe
  • Link click
  • Product purchase
  • Purchase created
  • Purchase complete
  • Sequence complete
  • Sequence subscribe
  • Subscriber activated
  • Subscriber unsubscribe
  • Tag add
  • Tag Remove

Related resources

n8n provides an app node for ConvertKit. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to ConvertKit's documentation for details about their API.


Data structure

URL: llms-txt#data-structure

Contents:

  • Data item processing

In n8n, all data passed between nodes is an array of objects. It has the following structure:

Skipping the json key and array syntax

From 0.166.0 on, when using the Function node or Code node, n8n automatically adds the json key if it's missing. It also automatically wraps your items in an array ([]) if needed. This is only the case when using the Function or Code nodes. When building your own nodes, you must still make sure the node returns data with the json key.
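For example, in a Code node both of the following return statements produce the same output, relying on the automatic wrapping described above (a minimal sketch):

// Shorthand: n8n adds the json key and wraps the item in an array if needed
return [{ apple: 'beets' }];

// Equivalent explicit form
return [{ json: { apple: 'beets' } }];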

Data item processing

Nodes can process multiple items.

For example, if you set the Trello node to Create-Card, and create an expression that sets Name using a property called name-input-value from the incoming data, the node creates a card for each item, always choosing the name-input-value of the current item.

For example, this input will create two cards. One named test1 the other one named test2:
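The Name parameter expression for this example could look like the following (a sketch; bracket notation is used because the property name contains a hyphen):

{{ $json['name-input-value'] }}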

Examples:

Example 1 (unknown):

[
	{
		// For most data:
		// Wrap each item in another object, with the key 'json'
		"json": {
			// Example data
			"apple": "beets",
			"carrot": {
				"dill": 1
			}
		},
		// For binary data:
		// Wrap each item in another object, with the key 'binary'
		"binary": {
			// Example data
			"apple-picture": {
				"data": "....", // Base64 encoded binary data (required)
				"mimeType": "image/png", // Best practice to set if possible (optional)
				"fileExtension": "png", // Best practice to set if possible (optional)
				"fileName": "example.png", // Best practice to set if possible (optional)
			}
		}
	},
]

Example 2 (unknown):

[
	{
		"name-input-value": "test1"
	},
	{
		"name-input-value": "test2"
	}
]

Templates and examples

URL: llms-txt#templates-and-examples

Contents:

  • Templates
  • Set up sample data using the Code node
  • Removing duplicates from the current input
  • Keep items where the value is new
  • Keep items where the value is higher than any previous value
  • Keep items where the value is a date later than any previous date

Here are some templates and examples for the Remove Duplicates node.

The examples in this section form a sequence. Follow them in order to avoid unexpected results.

Browse Templates and examples integration templates, or search all templates

Set up sample data using the Code node

Create a workflow with some example input data to try out the Remove Duplicates node.

  1. Add a Code node to the canvas and connect it to the Manual Trigger node.

  2. In the Code node, set Mode to Run Once for Each Item and Language to JavaScript.

  3. Paste the following JavaScript code snippet in the JavaScript field:

  4. Add a Split Out node to the canvas and connect it to the Code node.

  5. In the Split Out node, enter data in the Fields To Split Out field.

Removing duplicates from the current input

  1. Add a Remove Duplicates node to the canvas and connect it to the Split Out node. Choose Remove items repeated within current input as the Action to start.
  2. Open the Remove Duplicates node and ensure that the Operation is set to Remove Items Repeated Within Current Input.
  3. Choose All fields in the Compare field.
  4. Select Execute step to run the Remove Duplicates node, removing duplicated data in the current input.

n8n removes the items that have the same data across all fields. Your output in table view should look like this:

id name job last_updated
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z
3 Adele Singer-songwriter 2024-10-07T14:15:59.493Z
4 Bruno Mars Singer-songwriter 2024-08-25T17:45:12.493Z
5 Billie Eilish Singer-songwriter 2024-09-10T09:30:12.493Z
6 Katy Perry Pop star 2024-10-08T12:30:45.493Z
7 Lady Gaga Pop star 2024-09-15T14:45:30.493Z
8 Rihanna Pop star 2024-10-01T11:50:22.493Z
  1. Open the Remove Duplicates node again and change the Compare parameter to Selected Fields.
  2. In the Fields To Compare field, enter job.
  3. Select Execute step to run the Remove Duplicates node, removing duplicated data in the current input.

n8n removes the items in the current input that have the same job data. Your output in table view should look like this:

id name job last_updated
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z

Keep items where the value is new

  1. Open the Remove Duplicates node and set the Operation to Remove Items Processed in Previous Executions.
  2. Set the Keep Items Where parameter to Value Is New.
  3. Set the Value to Dedupe On parameter to {{ $json.name }}.
  4. On the canvas, select Execute workflow to run the workflow. Open the Remove Duplicates node to examine the results.

n8n compares the current input data to the items stored from previous executions. Since this is the first time running the Remove Duplicates node with this operation, n8n processes all data items and places them into the Kept output tab. The order of the items may be different than the order in the input data:

id name job last_updated
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z
3 Adele Singer-songwriter 2024-10-07T14:15:59.493Z
3 Adele Singer-songwriter 2024-10-07T14:15:59.493Z
4 Bruno Mars Singer-songwriter 2024-08-25T17:45:12.493Z
5 Billie Eilish Singer-songwriter 2024-09-10T09:30:12.493Z
6 Katy Perry Pop star 2024-10-08T12:30:45.493Z
7 Lady Gaga Pop star 2024-09-15T14:45:30.493Z
8 Rihanna Pop star 2024-10-01T11:50:22.493Z

Items are only compared against previous executions

The current input items are only compared against the stored items from previous executions. This means that items repeated within the current input aren't removed in this mode of operation. If you need to remove duplicate items within the current input and across executions, connect two Remove Duplicates nodes together sequentially. Set the first to use the Remove Items Repeated Within Current Input operation and the second to use the Remove Items Processed in Previous Executions operation.

  1. Open the Code node and uncomment (remove the // from) the line for "Tom Hanks."
  2. On the canvas, select Execute workflow again. Open the Remove Duplicates node again to examine the results.

n8n compares the current input data to the items stored from previous executions. This time, the Kept tab contains the one new record from the Code node:

id name job last_updated
9 Tom Hanks Actor 2024-10-17T13:58:31.493Z

The Discarded tab contains the items processed by the previous execution:

id name job last_updated
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z
3 Adele Singer-songwriter 2024-10-07T14:15:59.493Z
3 Adele Singer-songwriter 2024-10-07T14:15:59.493Z
4 Bruno Mars Singer-songwriter 2024-08-25T17:45:12.493Z
5 Billie Eilish Singer-songwriter 2024-09-10T09:30:12.493Z
6 Katy Perry Pop star 2024-10-08T12:30:45.493Z
7 Lady Gaga Pop star 2024-09-15T14:45:30.493Z
8 Rihanna Pop star 2024-10-01T11:50:22.493Z

Before continuing, clear the duplication history to get ready for the next example:

  1. Open the Remove Duplicates node and set the Operation to Clear Deduplication History.
  2. Select Execute step to clear the current duplication history.

Keep items where the value is higher than any previous value

  1. Open the Remove Duplicates node and set the Operation to Remove Items Processed in Previous Executions.
  2. Set the Keep Items Where parameter to Value Is Higher than Any Previous Value.
  3. Set the Value to Dedupe On parameter to {{ $json.id }}.
  4. On the canvas, select Execute workflow to run the workflow. Open the Remove Duplicates node to examine the results.

n8n compares the current input data to the items stored from previous executions. Since this is the first time running the Remove Duplicates node after clearing the history, n8n processes all data items and places them into the Kept output tab. The order of the items may be different than the order in the input data:

id name job last_updated
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z
3 Adele Singer-songwriter 2024-10-07T14:15:59.493Z
3 Adele Singer-songwriter 2024-10-07T14:15:59.493Z
4 Bruno Mars Singer-songwriter 2024-08-25T17:45:12.493Z
5 Billie Eilish Singer-songwriter 2024-09-10T09:30:12.493Z
6 Katy Perry Pop star 2024-10-08T12:30:45.493Z
7 Lady Gaga Pop star 2024-09-15T14:45:30.493Z
8 Rihanna Pop star 2024-10-01T11:50:22.493Z
9 Tom Hanks Actor 2024-10-17T13:58:31.493Z
  1. Open the Code node and uncomment (remove the // from) the lines for "Madonna" and "Bob Dylan."
  2. On the canvas, select Execute workflow again. Open the Remove Duplicates node again to examine the results.

n8n compares the current input data to the items stored from previous executions. This time, the Kept tab contains a single entry for "Bob Dylan." n8n keeps this item because its id column value (15) is higher than any previous values (the previous maximum value was 9):

id name job last_updated
15 Bob Dylan Folk singer 2024-09-24T08:03:16.493Z

The Discarded tab contains the 13 items with an id column value equal to or less than the previous maximum value (9). Even though it's new, this table includes the entry for "Madonna" because its id value isn't larger than the previous maximum value:

id name job last_updated
0 Madonna Pop star 2024-10-17T17:11:38.493Z
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z
3 Adele Singer-songwriter 2024-10-07T14:15:59.493Z
3 Adele Singer-songwriter 2024-10-07T14:15:59.493Z
4 Bruno Mars Singer-songwriter 2024-08-25T17:45:12.493Z
5 Billie Eilish Singer-songwriter 2024-09-10T09:30:12.493Z
6 Katy Perry Pop star 2024-10-08T12:30:45.493Z
7 Lady Gaga Pop star 2024-09-15T14:45:30.493Z
8 Rihanna Pop star 2024-10-01T11:50:22.493Z
9 Tom Hanks Actor 2024-10-17T13:58:31.493Z

Before continuing, clear the duplication history to get ready for the next example:

  1. Open the Remove Duplicates node and set the Operation to Clear Deduplication History.
  2. Select Execute step to clear the current duplication history.

Keep items where the value is a date later than any previous date

  1. Open the Remove Duplicates node and set the Operation to Remove Items Processed in Previous Executions.
  2. Set the Keep Items Where parameter to Value Is a Date Later than Any Previous Date.
  3. Set the Value to Dedupe On parameter to {{ $json.last_updated }}.
  4. On the canvas, select Execute workflow to run the workflow. Open the Remove Duplicates node to examine the results.

n8n compares the current input data to the items stored from previous executions. Since this is the first time running the Remove Duplicates node after clearing the history, n8n processes all data items and places them into the Kept output tab. The order of the items may be different than the order in the input data:

id name job last_updated
0 Madonna Pop star 2024-10-17T17:11:38.493Z
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z
3 Adele Singer-songwriter 2024-10-07T14:15:59.493Z
3 Adele Singer-songwriter 2024-10-07T14:15:59.493Z
4 Bruno Mars Singer-songwriter 2024-08-25T17:45:12.493Z
5 Billie Eilish Singer-songwriter 2024-09-10T09:30:12.493Z
6 Katy Perry Pop star 2024-10-08T12:30:45.493Z
7 Lady Gaga Pop star 2024-09-15T14:45:30.493Z
8 Rihanna Pop star 2024-10-01T11:50:22.493Z
9 Tom Hanks Actor 2024-10-17T13:58:31.493Z
15 Bob Dylan Folk singer 2024-09-24T08:03:16.493Z
  1. Open the Code node and uncomment (remove the // from) the lines for "Harry Nilsson" and "Kylie Minogue."

  2. On the canvas, select Execute workflow again. Open the Remove Duplicates node again to examine the results.

n8n compares the current input data to the items stored from previous executions. This time, the Kept tab contains a single entry for "Kylie Minogue." n8n keeps this item because its last_updated column value (2024-10-24T08:03:16.493Z) is later than any previous values (the previous latest date was 2024-10-17T17:11:38.493Z):

id name job last_updated
11 Kylie Minogue Pop star 2024-10-24T08:03:16.493Z

The Discarded tab contains the 15 items with a last_updated column value equal to or earlier than the previous latest date (2024-10-17T17:11:38.493Z). Even though it's new, this table includes the entry for "Harry Nilsson" because its last_updated value isn't later than the previous maximum value:

id name job last_updated
10 Harry Nilsson Singer-songwriter 2020-10-17T17:11:38.493Z
0 Madonna Pop star 2024-10-17T17:11:38.493Z
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
1 Taylor Swift Pop star 2024-09-20T10:12:43.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z
2 Ed Sheeran Singer-songwriter 2024-10-05T08:30:59.493Z
3 Adele Singer-songwriter 2024-10-07T14:15:59.493Z
3 Adele Singer-songwriter 2024-10-07T14:15:59.493Z
4 Bruno Mars Singer-songwriter 2024-08-25T17:45:12.493Z
5 Billie Eilish Singer-songwriter 2024-09-10T09:30:12.493Z
6 Katy Perry Pop star 2024-10-08T12:30:45.493Z
7 Lady Gaga Pop star 2024-09-15T14:45:30.493Z
8 Rihanna Pop star 2024-10-01T11:50:22.493Z
9 Tom Hanks Actor 2024-10-17T13:58:31.493Z
15 Bob Dylan Folk singer 2024-09-24T08:03:16.493Z

Examples:

Example 1 (JavaScript):

let data = [];

return {
  data: [
    { id: 1, name: 'Taylor Swift', job: 'Pop star', last_updated: '2024-09-20T10:12:43.493Z' },
    { id: 2, name: 'Ed Sheeran', job: 'Singer-songwriter', last_updated: '2024-10-05T08:30:59.493Z' },
    { id: 3, name: 'Adele', job: 'Singer-songwriter', last_updated: '2024-10-07T14:15:59.493Z' },
    { id: 4, name: 'Bruno Mars', job: 'Singer-songwriter', last_updated: '2024-08-25T17:45:12.493Z' },
    { id: 1, name: 'Taylor Swift', job: 'Pop star', last_updated: '2024-09-20T10:12:43.493Z' },  // duplicate
    { id: 5, name: 'Billie Eilish', job: 'Singer-songwriter', last_updated: '2024-09-10T09:30:12.493Z' },
    { id: 6, name: 'Katy Perry', job: 'Pop star', last_updated: '2024-10-08T12:30:45.493Z' },
    { id: 2, name: 'Ed Sheeran', job: 'Singer-songwriter', last_updated: '2024-10-05T08:30:59.493Z' },  // duplicate
    { id: 7, name: 'Lady Gaga', job: 'Pop star', last_updated: '2024-09-15T14:45:30.493Z' },
    { id: 8, name: 'Rihanna', job: 'Pop star', last_updated: '2024-10-01T11:50:22.493Z' },
    { id: 3, name: 'Adele', job: 'Singer-songwriter', last_updated: '2024-10-07T14:15:59.493Z' },  // duplicate
    //{ id: 9, name: 'Tom Hanks', job: 'Actor', last_updated: '2024-10-17T13:58:31.493Z' },
    //{ id: 0, name: 'Madonna', job: 'Pop star', last_updated: '2024-10-17T17:11:38.493Z' },
    //{ id: 15, name: 'Bob Dylan', job: 'Folk singer', last_updated: '2024-09-24T08:03:16.493Z' },
    //{ id: 10, name: 'Harry Nilsson', job: 'Singer-songwriter', last_updated: '2020-10-17T17:11:38.493Z' },
    //{ id: 11, name: 'Kylie Minogue', job: 'Pop star', last_updated: '2024-10-24T08:03:16.493Z' },
  ]
}

Booleans

URL: llms-txt#booleans

Contents:

  • toInt(): Number

A reference document listing built-in convenience functions to support data transformation in expressions for booleans.

JavaScript in expressions

You can use any JavaScript in expressions. Refer to Expressions for more information.

Convert a boolean to a number. false converts to 0, true converts to 1.
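
For example, a minimal expression sketch (the isActive field name is hypothetical):

{{ $json.isActive.toInt() }}

If isActive is true, this resolves to 1; if it's false, it resolves to 0.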



Supported authentication methods

URL: llms-txt#supported-authentication-methods

Contents:

  • Related resources
  • Using API key

Refer to Onfleet's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API key: To create an API key, log into your organization's administrator account. Select Settings > API & Webhooks, then select + to create a new key. Refer to Onfleet's Creating an API key documentation for more information.

WordPress credentials

URL: llms-txt#wordpress-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth
    • Enable two-step authentication
    • Create an application password
    • Set up the credential

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to WordPress's API documentation for more information about the service.

To configure this credential, you'll need:

  • Your WordPress Username
  • A WordPress application Password
  • Your WordPress URL
  • Decide whether to Ignore SSL Issues

Using this credential involves three steps:

  1. Enable two-step authentication.
  2. Create an application password.
  3. Set up the credential.

Refer to the detailed instructions below for each step.

Enable two-step authentication

To generate an application password, you must first enable Two-Step Authentication in WordPress. If you've already done this, skip to the next section.

  1. Open your WordPress profile.
  2. Select Security from the left menu.
  3. Select Two-Step Authentication. The Two-Step Authentication page opens.
  4. If Two-Step Authentication isn't enabled, you must enable it.
  5. Choose whether to enable it using an authenticator app or SMS codes and follow the on-screen instructions.

Refer to WordPress's Enable Two-Step Authentication for detailed instructions.

Create an application password

With Two-Step Authentication enabled, you can now generate an application password:

  1. From the WordPress Security > Two-Step Authentication page, select + Add new application password in the Application passwords section.
  2. Enter an Application name, like n8n integration.
  3. Select Generate Password.
  4. Copy the password it generates. You'll use this in your n8n credential.

Set up the credential

Congratulations! You're now ready to set up your n8n credential:

  1. Enter your WordPress Username in your n8n credential.
  2. Enter the application password you copied above as the Password in your n8n credential.
  3. Enter the URL of your WordPress site as the WordPress URL.
  4. Optional: Use the Ignore SSL Issues setting to choose whether the n8n credential should connect even if SSL certificate validation fails (turned on) or respect SSL certificate validation (turned off).

Log streaming

URL: llms-txt#log-streaming

Contents:

  • Set up log streaming
  • Events
  • Destinations

Log Streaming is available on all Enterprise plans.

Log streaming allows you to send events from n8n to your own logging tools. This allows you to manage your n8n monitoring in your own alerting and logging processes.

Set up log streaming

To use log streaming, you have to add a streaming destination.

  1. Navigate to Settings > Log Streaming.
  2. Select Add new destination.
  3. Choose your destination type. n8n opens the New Event Destination modal.
  4. In the New Event Destination modal, enter the configuration information for your event destination. These depend on the type of destination you're using.
  5. Select Events to choose which events to stream.
  6. Select Save.

If you self-host n8n, you can configure additional log streaming behavior using Environment variables.

The following events are available. You can choose which events to stream in Settings > Log Streaming > Events.

  • Workflow
    • Started
    • Success
    • Failed
  • Node executions
    • Started
    • Finished
  • Audit
    • User signed up
    • User updated
    • User deleted
    • User invited
    • User invitation accepted
    • User re-invited
    • User email failed
    • User reset requested
    • User reset
    • User credentials created
    • User credentials shared
    • User credentials updated
    • User credentials deleted
    • User API created
    • User API deleted
    • Package installed
    • Package updated
    • Package deleted
    • Workflow created
    • Workflow deleted
    • Workflow updated
  • AI node logs
    • Memory get messages
    • Memory added message
    • Output parser get instructions
    • Output parser parsed
    • Retriever get relevant documents
    • Embeddings embedded document
    • Embeddings embedded query
    • Document processed
    • Text splitter split
    • Tool called
    • Vector store searched
    • LLM generated
    • Vector store populated
  • Runner
    • Task requested
    • Response received
  • Queue
    • Job enqueued
    • Job dequeued
    • Job completed
    • Job failed
    • Job stalled

n8n supports three destination types:

  • A syslog server
  • A generic webhook
  • A Sentry client

Spotify credentials

URL: llms-txt#spotify-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Spotify's Web API documentation for more information about the service.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you're self-hosting n8n, you'll need a Spotify Developer account so you can create a Spotify app:

  1. Open the Spotify developer dashboard.
  2. Select Create an app.
  3. Enter an App name, like n8n integration.
  4. Enter an App description.
  5. Copy the OAuth Redirect URL from n8n and enter it as the Redirect URI in your Spotify app.
  6. Check the box to agree to the Spotify Terms of Service and Branding Guidelines.
  7. Select Create. The App overview page opens.
  8. Copy the Client ID and enter it in your n8n credential.
  9. Copy the Client Secret and enter it in your n8n credential.
  10. Select Connect my account and follow the on-screen prompts to finish authorizing the credential.

Refer to Spotify Apps for more information.


Bitbucket credentials

URL: llms-txt#bitbucket-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API username/app password
  • App password permissions

You can use these credentials to authenticate the following nodes:

Create a Bitbucket account.

Supported authentication methods

  • API username and app password

Refer to Bitbucket's API documentation for more information about the service.

Using API username/app password

To configure this credential, you'll need:

  • A Username: Visible in your Bitbucket profile under Personal settings > Account settings.
  • An App Password: Refer to the Bitbucket instructions to Create an app password.

App password permissions

Bitbucket API credentials only work if the user account you generated the app password for has the privilege scopes required by the selected app password permissions. If the account lacks the appropriate permissions for the selected scope, the n8n credentials dialog displays an error like "Your credentials lack one or more required privilege scopes."

See the Bitbucket App password permissions documentation for more information on working with these permissions.


Quick Base credentials

URL: llms-txt#quick-base-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Quick Base account.

Supported authentication methods

Refer to Quick Base's API documentation for more information about the service.

To configure this credential, you'll need:

  • A Hostname: The string of characters located between https:// and /db in your Quick Base URL.
  • A User Token: To generate a token, select your Profile > My preferences > My User Information > Manage my user tokens. Refer to Creating and using user tokens for detailed instructions.

Raindrop node

URL: llms-txt#raindrop-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Raindrop node to automate work in Raindrop, and integrate Raindrop with other applications. n8n has built-in support for a wide range of Raindrop features, including getting users, deleting tags, and creating, updating, deleting and getting collections and bookmarks.

On this page, you'll find a list of operations the Raindrop node supports and links to more resources.

Refer to Raindrop credentials for guidance on setting up authentication.

  • Bookmark
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • Collection
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • Tag
    • Delete
    • Get All
  • User
    • Get

Templates and examples

Fetch a YouTube playlist and send new items to Raindrop

View template details

Create a collection and create, update, and get a bookmark in Raindrop

View template details

Save Mastodon Bookmarks to Raindrop Automatically

View template details

Browse Raindrop integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Sendy credentials

URL: llms-txt#sendy-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API Key

You can use these credentials to authenticate the following nodes:

Host a Sendy application.

Supported authentication methods

Refer to Sendy's API documentation for more information about the service.

To configure this credential, you'll need:

  • A URL: The URL of your Sendy application.
  • An API Key: Get your API key from your user profile > Settings > Your API Key.

Typeform Trigger node

URL: llms-txt#typeform-trigger-node

Typeform is an online software as a service company that specializes in online form building and online surveys. Its main software creates dynamic forms based on user needs.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Typeform Trigger integrations page.


Hosting n8n on Google Cloud Run

URL: llms-txt#hosting-n8n-on-google-cloud-run

Contents:

  • Before you begin: get a Google Cloud project
  • Easy mode
  • Durable mode
  • Enable APIs and set env vars
  • You may need to login first
  • Setup your Postgres database
  • Store sensitive data in Secret Manager
  • Create a service account for Cloud Run
  • Deploy the Cloud Run service
  • Troubleshooting

This hosting guide shows you how to self-host n8n on Google Cloud Run, a serverless container runtime. If you're just getting started with n8n and don't need a production-grade deployment, you can go with the "easy mode" option below. If you intend to use this n8n deployment at scale, refer to the "durable mode" instructions further down.

You can also enable access via OAuth to Google Workspace services, such as Gmail and Drive, to use them as n8n workflow tools. Instructions for granting n8n access to these services are at the end of this documentation.

If you want to deploy to Google Kubernetes Engine (GKE) instead, you can refer to these instructions.

Self-hosting knowledge prerequisites

Self-hosting n8n requires technical knowledge, including:

  • Setting up and configuring servers and containers
  • Managing application resources and scaling
  • Securing servers and applications
  • Configuring n8n

n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends n8n Cloud.

Latest and Next versions

n8n releases a new minor version most weeks. The latest version is for production use. next is the most recent release. You should treat next as a beta: it may be unstable. To report issues, use the forum.

Current latest: 1.118.2
Current next: 1.119.0

Before you begin: get a Google Cloud project

If you have not yet created a Google Cloud project, do this first (and ensure you have billing enabled on the project; even if your Cloud Run service runs for free you must have billing activated to deploy). Otherwise, navigate to the project where you want to deploy n8n.

Easy mode

This is the fastest way to deploy n8n on Cloud Run. For this deployment, n8n's data is held in memory, so this is only recommended for demo purposes. Anytime this Cloud Run service scales to zero or is redeployed, the n8n data is lost. Refer to the durable mode instructions below if you need a production-grade deployment.

Open the Cloud Shell Terminal (on the Google Cloud console, either type "G" then "S" or click on the terminal icon on the upper right).

Once your session is open, you may need to run the gcloud auth login command first (shown in Example 1 below) and follow the steps it asks you to complete.

You can also explicitly enable the Cloud Run API with the command in Example 2 below (even if you don't do this, gcloud asks whether to enable it when you deploy).

Next, deploy n8n with the gcloud run deploy command in Example 3 below. You can specify whichever region you prefer instead of "us-west1".

Once the deployment finishes, open another tab to navigate to the Service URL. n8n may still be loading and you will see a "n8n is starting up. Please wait" message, but shortly thereafter you should see the n8n login screen.

Optional: If you want to keep this n8n service running for as long as possible to avoid data loss, you can also set manual scaling to 1 to prevent it from autoscaling to 0 (Example 4 below shows the deploy command with --scaling=1).

This doesn't prevent data loss completely; data is still lost whenever the Cloud Run service is redeployed or updated. If you want truly persistent data, refer to the durable mode instructions below for how to attach a database.

Durable mode

The following instructions are intended for a more durable, production-grade deployment of n8n on Cloud Run. It includes resources such as a database for persistence and Secret Manager for sensitive data.

Enable APIs and set env vars

Open the Cloud Shell Terminal (on the Google Cloud console, either type "G" then "S" or click on the terminal icon on the upper right) and run these commands in the terminal session:
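
As a sketch, these are the APIs the rest of this guide relies on (Cloud Run, Cloud SQL Admin, and Secret Manager); the exact service list is an assumption you can trim or extend:

gcloud services enable run.googleapis.com sqladmin.googleapis.com secretmanager.googleapis.com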

You'll also want to set some environment variables for the remainder of these instructions:
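
A minimal sketch (the names PROJECT_ID, REGION, DB_INSTANCE, and SERVICE_ACCOUNT are placeholders reused by the sketches below, not anything n8n requires; adjust them for your project):

export PROJECT_ID=$(gcloud config get-value project)
export REGION=us-west1
export DB_INSTANCE=n8n-db
export SERVICE_ACCOUNT=n8n-service-account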

Setup your Postgres database

Run this command to create the Postgres DB instance (it will take a few minutes to complete; also ensure you update the root-password field with your own desired password):
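
A minimal sketch, assuming the DB_INSTANCE and REGION variables from above and a small custom machine type (adjust the tier, Postgres version, and root password for your needs):

gcloud sql instances create $DB_INSTANCE \
    --database-version=POSTGRES_15 \
    --tier=db-custom-1-3840 \
    --region=$REGION \
    --root-password=YOUR-ROOT-PASSWORD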

Once complete, you can add the database that n8n will use:
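
For example, assuming the database is named n8n:

gcloud sql databases create n8n --instance=$DB_INSTANCE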

Create the DB user for n8n (change the password value, of course):
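
A sketch, assuming the user is called n8n-user (replace the password with your own):

gcloud sql users create n8n-user \
    --instance=$DB_INSTANCE \
    --password=YOUR-N8N-USER-PASSWORD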

You can save the password you set for this n8n-user to a file, so you can store it in Secret Manager in the next step. Be sure to delete this file later.

Store sensitive data in Secret Manager

While not required, it's strongly recommended to store your sensitive data in Secret Manager.

Create a secret for the database password (replace "/your/password/file" with the file you created above for the n8n-user password):
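
For example, assuming the secret is named n8n-db-password:

gcloud secrets create n8n-db-password --data-file=/your/password/file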

Create an encryption key (you can use your own, this example generates a random one):
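
One way to generate a random key and write it to a local file (the file name my-encryption-key is just a placeholder):

openssl rand -base64 32 > my-encryption-key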

Create a secret for this encryption key (replace "my-encryption-key" if you are supplying your own):
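
For example, assuming the secret is named n8n-encryption-key:

gcloud secrets create n8n-encryption-key --data-file=my-encryption-key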

Now you can delete the my-encryption-key file and the database password file you created. These values are now securely stored in Secret Manager.

Create a service account for Cloud Run

You want this Cloud Run service to be restricted to access only the resources it needs. The following commands create the service account and add the permissions necessary to access secrets and the database:
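
A sketch of those commands, assuming the SERVICE_ACCOUNT, PROJECT_ID, and secret names used earlier; the roles granted here (Secret Manager accessor and Cloud SQL client) are an assumption about the minimum this setup needs:

gcloud iam service-accounts create $SERVICE_ACCOUNT \
    --display-name="n8n Cloud Run service account"

gcloud secrets add-iam-policy-binding n8n-db-password \
    --member="serviceAccount:${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"

gcloud secrets add-iam-policy-binding n8n-encryption-key \
    --member="serviceAccount:${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/secretmanager.secretAccessor"

gcloud projects add-iam-policy-binding $PROJECT_ID \
    --member="serviceAccount:${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com" \
    --role="roles/cloudsql.client"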

Deploy the Cloud Run service

Now you can deploy your n8n service:
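
A sketch of the deploy command, assuming the names above. The DB_* and N8N_ENCRYPTION_KEY environment variables follow the database environment variables documented elsewhere in these docs, and the database connection here is an assumption that uses the Cloud SQL Unix socket under /cloudsql:

gcloud run deploy n8n \
    --image=n8nio/n8n \
    --region=$REGION \
    --allow-unauthenticated \
    --port=5678 \
    --no-cpu-throttling \
    --memory=2Gi \
    --service-account=${SERVICE_ACCOUNT}@${PROJECT_ID}.iam.gserviceaccount.com \
    --add-cloudsql-instances=${PROJECT_ID}:${REGION}:${DB_INSTANCE} \
    --set-secrets=DB_POSTGRESDB_PASSWORD=n8n-db-password:latest,N8N_ENCRYPTION_KEY=n8n-encryption-key:latest \
    --set-env-vars=DB_TYPE=postgresdb,DB_POSTGRESDB_DATABASE=n8n,DB_POSTGRESDB_USER=n8n-user,DB_POSTGRESDB_HOST=/cloudsql/${PROJECT_ID}:${REGION}:${DB_INSTANCE}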

Once the deployment finishes, open another tab to navigate to the Service URL. You should see the n8n login screen.

If you see a "Cannot GET /" screen this usually indicates that n8n is still starting up. You can refresh the page and it should eventually load.

(Optional) Enabling Google Workspace services as n8n tools

If you want to use Google Workspace services (Gmail, Calendar, Drive, and so on) as tools in n8n, it's recommended to set up OAuth to access these services.

First ensure the respective APIs you want are enabled:
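
For example, to enable the Gmail, Calendar, Drive, and Sheets APIs (adjust this list to the services you actually need):

gcloud services enable gmail.googleapis.com calendar-json.googleapis.com drive.googleapis.com sheets.googleapis.com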

Re-deploy n8n on Cloud Run with the necessary OAuth callback URLs as environment variables:
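
One way to do this, assuming your service URL is https://YOUR-N8N-URL; n8n typically derives its OAuth callback URL from these base-URL variables:

gcloud run services update n8n \
    --region=$REGION \
    --update-env-vars=WEBHOOK_URL=https://YOUR-N8N-URL,N8N_EDITOR_BASE_URL=https://YOUR-N8N-URL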

Lastly, you must set up OAuth for these services. Visit https://console.cloud.google.com/auth and follow these steps:

  1. Click "Get Started" if this button shows (when you have not yet setup OAuth in this Cloud project).
  2. For "App Information", enter whichever "App Name" and "User Support Email" you prefer.
  3. For "Audience", select "Internal" if you intend to only enable access to your user(s) within this same Google Workspace. Otherwise, you can select "External".
  4. Enter "Contact Information".
  5. If you selected "External", then click "Audience" and add any test users you need to grant access.
  6. Click "Clients" > "Create client", select "Web application" for "Application type", enter your n8n service URL into "Authorized JavaScript origins", and "/rest/oauth2-credential/callback" into "Authorized redirect URIs" where your YOUR-N8N-URL is also the n8n service URL (e.g. https://n8n-12345678.us-west1.run.app/rest/oauth2-credential/callback). Make sure you download the created client's JSON file since it contains the client secret which you will not be able to see later in the Console.
  7. Click "Data Access" and add the scopes you want n8n to have access for (e.g. to access Google Sheets, you need https://googleapis.com/auth/drive.file and https://googleapis.com/auth/spreadsheets)
  8. Now you should be able to use these Workspace services. You can test that it works by logging into n8n, adding a Tool for the respective service, and adding its credentials using the information in the OAuth client JSON file from step 6.

Examples:

Example 1 (shell):

gcloud auth login

Example 2 (shell):

gcloud services enable run.googleapis.com

Example 3 (shell):

gcloud run deploy n8n \
    --image=n8nio/n8n \
    --region=us-west1 \
    --allow-unauthenticated \
    --port=5678 \
    --no-cpu-throttling \
    --memory=2Gi

Example 4 (shell):

gcloud run deploy n8n \
    --image=n8nio/n8n \
    --region=us-west1 \
    --allow-unauthenticated \
    --port=5678 \
    --no-cpu-throttling \
    --memory=2Gi \
    --scaling=1

Mailjet node

URL: llms-txt#mailjet-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Mailjet node to automate work in Mailjet, and integrate Mailjet with other applications. n8n has built-in support for a wide range of Mailjet features, including sending emails and SMS messages.

On this page, you'll find a list of operations the Mailjet node supports and links to more resources.

Refer to Mailjet credentials for guidance on setting up authentication.

  • Email
    • Send an email
    • Send an email template
  • SMS
    • Send an SMS

Templates and examples

Forward Netflix emails to multiple email addresses with GMail and Mailjet

View template details

Send an email using Mailjet

View template details

Monitor SEO Keyword Rankings with LLaMA AI & Apify Google SERP Scraping

View template details

Browse Mailjet integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Git credentials

URL: llms-txt#git-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth

You can use these credentials to authenticate the following nodes:

Create an account on GitHub, GitLab, or similar platforms for use with Git.

Supported authentication methods

Refer to Git's documentation for more information about the service.

To configure this credential, you'll need:

  • A Username for GitHub, GitLab, or a similar platform
  • A Password for GitHub, GitLab, or a similar platform

BambooHR node

URL: llms-txt#bamboohr-node

Contents:

  • Operations
  • Templates and examples

Use the BambooHR node to automate work in BambooHR, and integrate BambooHR with other applications. n8n has built-in support for a wide range of BambooHR features, including creating, deleting, downloading, and getting company reports, employee documents, and files.

On this page, you'll find a list of operations the BambooHR node supports and links to more resources.

Refer to BambooHR credentials for guidance on setting up authentication.

  • Company Report
    • Get a company report
  • Employee
    • Create an employee
    • Get an employee
    • Get all employees
    • Update an employee
  • Employee Document
    • Delete an employee document
    • Download an employee document
    • Get all employee documents
    • Update an employee document
    • Upload an employee document
  • File
    • Delete a company file
    • Download a company file
    • Get all company files
    • Update a company file
    • Upload a company file

Templates and examples

BambooHR AI-Powered Company Policies and Benefits Chatbot

View template details

Test Webhooks in n8n Without Changing WEBHOOK_URL (PostBin & BambooHR Example)

View template details

🛠️ BambooHR Tool MCP Server 💪 all 15 operations

View template details

Browse BambooHR integration templates, or search all templates


Database environment variables

URL: llms-txt#database-environment-variables

Contents:

  • PostgreSQL
  • SQLite

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.
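
For example, instead of setting the Postgres password directly, you could point n8n at a file containing it (the path here is just an illustrative Docker-secrets-style location):

# set the value inline
DB_POSTGRESDB_PASSWORD=mySecretPassword
# or read it from a file instead
DB_POSTGRESDB_PASSWORD_FILE=/run/secrets/postgres_password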

By default, n8n uses SQLite. n8n also supports PostgreSQL. n8n deprecated support for MySQL and MariaDB in v1.0.

This page outlines environment variables to configure your chosen database for your self-hosted n8n instance.

Variable Type Default Description
DB_TYPE /_FILE Enum string: sqlite, postgresdb sqlite The database to use.
DB_TABLE_PREFIX * - Prefix to use for table names.
DB_PING_INTERVAL_SECONDS Number 2 The interval, in seconds, between pings to the database to check if the connection is still alive.

PostgreSQL

Variable Type Default Description
DB_POSTGRESDB_DATABASE /_FILE String n8n The name of the PostgreSQL database.
DB_POSTGRESDB_HOST /_FILE String localhost The PostgreSQL host.
DB_POSTGRESDB_PORT /_FILE Number 5432 The PostgreSQL port.
DB_POSTGRESDB_USER /_FILE String postgres The PostgreSQL user.
DB_POSTGRESDB_PASSWORD /_FILE String - The PostgreSQL password.
DB_POSTGRESDB_POOL_SIZE /_FILE Number 2 Control how many parallel open Postgres connections n8n should have. Increasing it may help with resource utilization, but too many connections may degrade performance.
DB_POSTGRESDB_CONNECTION_TIMEOUT /_FILE Number 20000 Postgres connection timeout (ms).
DB_POSTGRESDB_IDLE_CONNECTION_TIMEOUT /_FILE Number 30000 Amount of time before an idle connection is eligible for eviction for being idle.
DB_POSTGRESDB_SCHEMA /_FILE String public The PostgreSQL schema.
DB_POSTGRESDB_SSL_ENABLED /_FILE Boolean false Whether to enable SSL. Automatically enabled if DB_POSTGRESDB_SSL_CA, DB_POSTGRESDB_SSL_CERT or DB_POSTGRESDB_SSL_KEY is defined.
DB_POSTGRESDB_SSL_CA /_FILE String - The PostgreSQL SSL certificate authority.
DB_POSTGRESDB_SSL_CERT /_FILE String - The PostgreSQL SSL certificate.
DB_POSTGRESDB_SSL_KEY /_FILE String - The PostgreSQL SSL key.
DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED /_FILE Boolean true If n8n should reject unauthorized SSL connections (true) or not (false).

SQLite

Variable Type Default Description
DB_SQLITE_POOL_SIZE Number 0 Controls whether to open the SQLite file in WAL mode or rollback journal mode. Uses rollback journal mode when set to zero. When greater than zero, uses WAL mode with the value determining the number of parallel SQL read connections to configure. WAL mode is much more performant and reliable than the rollback journal mode.
DB_SQLITE_VACUUM_ON_STARTUP Boolean false Runs VACUUM operation on startup to rebuild the database. Reduces file size and optimizes indexes. This is a long running blocking operation and increases start-up time.

Summarization Chain node

URL: llms-txt#summarization-chain-node

Contents:

  • Node parameters
  • Node Options
  • Templates and examples
  • Related resources

Use the Summarization Chain node to summarize multiple documents.

On this page, you'll find the node parameters for the Summarization Chain node, and links to more resources.

Choose the type of data you need to summarize in Data to Summarize. The data type you choose determines the other node parameters.

  • Use Node Input (JSON) and Use Node Input (Binary): summarize the data coming into the node from the workflow.
    • You can configure the Chunking Strategy: choose what strategy to use to define the data chunk sizes.
      • If you choose Simple (Define Below) you can then set Characters Per Chunk and Chunk Overlap (Characters).
      • Choose Advanced if you want to connect a splitter sub-node that provides more configuration options.
  • Use Document Loader: summarize data provided by a document loader sub-node.

You can configure the summarization method and prompts. Select Add Option > Summarization Method and Prompts.

Options in Summarization Method:

  • Map Reduce: this is the recommended option. Learn more about Map Reduce in the LangChain documentation.
  • Refine: learn more about Refine in the LangChain documentation.
  • Stuff: learn more about Stuff in the LangChain documentation.

You can customize the Individual Summary Prompts and the Final Prompt to Combine. There are examples in the node. You must include the "{text}" placeholder.
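
For example, a minimal individual summary prompt could look like this (the wording is illustrative; only the "{text}" placeholder is required):

Write a concise summary of the following:

"{text}"

CONCISE SUMMARY: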

Templates and examples

Scrape and summarize webpages with AI

View template details

AI-Powered YouTube Video Summarization & Analysis

View template details

AI Automated HR Workflow for CV Analysis and Candidate Evaluation

View template details

Browse Summarization Chain integration templates, or search all templates

Refer to LangChain's documentation on summarization for more information about the service.

View n8n's Advanced AI documentation.


npm credentials

URL: llms-txt#npm-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following nodes:

Create an npm account.

Supported authentication methods

Refer to npm's external integrations documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • An Access Token: Create an access token by selecting Access Tokens from your profile menu. Refer to npm's Creating and viewing access tokens documentation for more detailed instructions.
  • A Registry URL: If you're using a custom npm registry, update the Registry URL to that custom registry. Otherwise, keep the public registry value.

ConvertKit credentials

URL: llms-txt#convertkit-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a ConvertKit account.

Supported authentication methods

Refer to ConvertKit's API documentation for more information about the service.

To configure this credential, you'll need:


Google Sheets

URL: llms-txt#google-sheets

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • Common issues
  • What to do if your operation isn't supported

Use the Google Sheets node to automate work in Google Sheets, and integrate Google Sheets with other applications. n8n has built-in support for a wide range of Google Sheets features, including creating, updating, deleting, appending, removing and getting documents.

On this page, you'll find a list of operations the Google Sheets node supports and links to more resources.

Refer to Google Sheets credentials for guidance on setting up authentication.

Templates and examples

Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram

View template details

Generate AI Videos with Google Veo3, Save to Google Drive and Upload to YouTube

View template details

Scrape business emails from Google Maps without the use of any third party APIs

View template details

Browse Google Sheets integration templates, or search all templates

Refer to Google Sheets' API documentation for more information about the service.

For common questions or issues and suggested solutions, refer to Common issues.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Contentful node

URL: llms-txt#contentful-node

Contents:

  • Operations
  • Templates and examples

Use the Contentful node to automate work in Contentful, and integrate Contentful with other applications. n8n has built-in support for a wide range of Contentful features, including getting assets, content types, entries, locales, and space.

On this page, you'll find a list of operations the Contentful node supports and links to more resources.

Refer to Contentful credentials for guidance on setting up authentication.

  • Asset
    • Get
    • Get All
  • Content Type
    • Get
  • Entry
    • Get
    • Get All
  • Locale
    • Get All
  • Space
    • Get

Templates and examples

Generate Knowledge Base Articles with GPT & Perplexity AI for Contentful CMS

View template details

Convert Markdown Content to Contentful Rich Text with AI Formatting

View template details

Get all the entries from Contentful

View template details

Browse Contentful integration templates, or search all templates


Embeddings Ollama node

URL: llms-txt#embeddings-ollama-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources

Use the Embeddings Ollama node to generate embeddings for a given text.

On this page, you'll find the node parameters for the Embeddings Ollama node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model to use to generate the embedding. Choose from:

Learn more about available models in Ollama's models documentation.

Templates and examples

Local Chatbot with Retrieval Augmented Generation (RAG)

View template details

Bitrix24 AI-Powered RAG Chatbot for Open Line Channels

View template details

Chat with Your Email History using Telegram, Mistral and Pgvector for RAG

View template details

Browse Embeddings Ollama integration templates, or search all templates

Refer to Langchain's Ollama embeddings documentation for more information about the service.

View n8n's Advanced AI documentation.


API authentication

URL: llms-txt#api-authentication

Contents:

  • API Scopes
  • Create an API key
  • Call the API using your key

n8n uses API keys to authenticate API calls.

The n8n API isn't available during the free trial. Please upgrade to access this feature.

Users of enterprise instances can limit which resources and actions a key can access with scopes. API key scopes allow you to specify the exact level of access a key needs for its intended purpose.

Non-enterprise API keys have full access to all the account's resources and capabilities.

  1. Log in to n8n.
  2. Go to Settings > n8n API.
  3. Select Create an API key.
  4. Choose a Label and set an Expiration time for the key.
  5. If on an enterprise plan, choose the Scopes to give the key.
  6. Copy My API Key and use this key to authenticate your calls.

Call the API using your key

Send the API key in your API call as a header named X-N8N-API-KEY.

For example, say you want to get all active workflows. Your curl request will look like this:
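
A minimal sketch, assuming a self-hosted instance at http://localhost:5678 and the default public API base path:

curl -X GET 'http://localhost:5678/api/v1/workflows?active=true' \
  -H 'accept: application/json' \
  -H 'X-N8N-API-KEY: <your-api-key>'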


Pushover credentials

URL: llms-txt#pushover-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API Key

You can use these credentials to authenticate the following nodes:

Create a Pushover account.

Supported authentication methods

Refer to Pushover's API documentation for more information about authenticating with the service.

To configure this credential, you'll need:


DHL credentials

URL: llms-txt#dhl-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to DHL's Developer documentation for more information about the service.

To configure this credential, you'll need a DHL Developer account and:

To get an API key, create an app:

  1. In the DHL Developer portal, select the user icon to open your User Apps.
  2. Select + Create App.
  3. Enter an App name, like n8n integration.
  4. Enter a Machine name, like n8n_integration.
  5. In SELECT APIs, select Shipment Tracking - Unified. The API is added to the Add API to app section.
  6. In the Add API to app section, select the + next to the Shipment Tracking - Unified API.
  7. Select Create App. The Apps page opens, displaying the app you just created.
  8. Select the app you just created to view its details.
  9. Select Show key next to API Key.
  10. Copy the API Key and enter it in your n8n credential.

Refer to How to create an app? for more information.


Pinecone Vector Store node

URL: llms-txt#pinecone-vector-store-node

Contents:

  • Node usage patterns
    • Use as a regular node to insert, update, and retrieve documents
    • Connect directly to an AI agent as a tool
    • Use a retriever to fetch documents
    • Use the Vector Store Question Answer Tool to answer questions
  • Node parameters
    • Operation Mode
    • Rerank Results
    • Get Many parameters
    • Insert Documents parameters

Use the Pinecone node to interact with your Pinecone database as vector store. You can insert documents into a vector database, get documents from a vector database, retrieve documents to provide them to a retriever connected to a chain, or connect directly to an agent as a tool. You can also update an item in a vector database by its ID.

On this page, you'll find the node parameters for the Pinecone node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Node usage patterns

You can use the Pinecone Vector Store node in the following patterns.

Use as a regular node to insert, update, and retrieve documents

You can use the Pinecone Vector Store as a regular node to insert, update, or get documents. This pattern places the Pinecone Vector Store in the regular connection flow without using an agent.

You can see an example of this in scenario 1 of this template.

Connect directly to an AI agent as a tool

You can connect the Pinecone Vector Store node directly to the tool connector of an AI agent to use a vector store as a resource when answering queries.

Here, the connection would be: AI agent (tools connector) -> Pinecone Vector Store node.

Use a retriever to fetch documents

You can use the Vector Store Retriever node with the Pinecone Vector Store node to fetch documents from the Pinecone Vector Store node. This is often used with the Question and Answer Chain node to fetch documents from the vector store that match the given chat input.

An example of the connection flow would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Pinecone Vector Store.

Use the Vector Store Question Answer Tool to answer questions

Another pattern uses the Vector Store Question Answer Tool to summarize results and answer questions from the Pinecone Vector Store node. Rather than connecting the Pinecone Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store.

The connections flow in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Pinecone Vector store.

This Vector Store node has five modes: Get Many, Insert Documents, Retrieve Documents (As Vector Store for Chain/Tool), Retrieve Documents (As Tool for AI Agent), and Update Documents. The mode you select determines the operations you can perform with the node and what inputs and outputs are available.

Get Many

In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt will be embedded and used for similarity search. The node will return the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context.

Insert Documents

Use Insert Documents mode to insert new documents into your vector database.

Retrieve Documents (As Vector Store for Chain/Tool)

Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.

Retrieve Documents (As Tool for AI Agent)

Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details.

Update Documents

Use Update Documents mode to update documents in a vector database by ID. Fill in the ID with the ID of the embedding entry to update.

Rerank Results

Enables reranking. If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the Get Many, Retrieve Documents (As Vector Store for Chain/Tool) and Retrieve Documents (As Tool for AI Agent) modes.

Get Many parameters

  • Pinecone Index: Select or enter the Pinecone Index to use.
  • Prompt: Enter your search query.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Insert Documents parameters

  • Pinecone Index: Select or enter the Pinecone Index to use.

Retrieve Documents (As Vector Store for Chain/Tool) parameters

  • Pinecone Index: Select or enter the Pinecone Index to use.

Retrieve Documents (As Tool for AI Agent) parameters

  • Name: The name of the vector store.
  • Description: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
  • Pinecone Index: Select or enter the Pinecone Index to use.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Parameters for Update Documents

Pinecone Namespace

Another segregation option for how to store your data within the index.

Available in Get Many mode. When searching for data, use this to match with metadata associated with the document.

This is an AND query. If you specify more than one metadata filter field, all of them must match.

When inserting data, the metadata is set using the document loader. Refer to Default Data Loader for more information on loading documents.

Available in Insert Documents mode. Deletes all data from the namespace before inserting the new data.

Templates and examples

Ask questions about a PDF using AI

View template details

Chat with PDF docs using AI (quoting sources)

View template details

RAG Chatbot for Company Documents using Google Drive and Gemini

View template details

Browse Pinecone Vector Store integration templates, or search all templates

Refer to LangChain's Pinecone documentation for more information about the service.

View n8n's Advanced AI documentation.

Find your Pinecone index and namespace

Your Pinecone index and namespace are available in your Pinecone account.


Rapid7 InsightVM credentials

URL: llms-txt#rapid7-insightvm-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Rapid7 InsightVM account.

Supported authentication methods

Refer to Rapid7 InsightVM's API documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need a Rapid7 InsightVM account and:

Refer to Rapid7 InsightVM's API documentation for more information about authenticating to the service.


Google Calendar Trigger node

URL: llms-txt#google-calendar-trigger-node

Contents:

  • Events
  • Related resources

Google Calendar is a time-management and scheduling calendar service developed by Google.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Google Calendar Trigger integrations page.

  • Event Cancelled
  • Event Created
  • Event Ended
  • Event Started
  • Event Updated

Browse Google Calendar Trigger integration templates, or search all templates

n8n provides an app node for Google Calendar. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Google Calendar's documentation for details about their API.


n8n displays "Today's date is "

URL: llms-txt#n8n-displays-"today's-date-is-"


n8n Docs

URL: llms-txt#n8n-docs

Documentation for n8n, a workflow automation platform.

Documentation for n8n, a workflow automation platform. This file helps LLMs understand and use the documentation more effectively.


KoboToolbox node

URL: llms-txt#kobotoolbox-node

Contents:

  • Operations
  • Templates and examples
  • Options
    • Query Options
    • Submission options
  • What to do if your operation isn't supported

Use the KoboToolbox node to automate work in KoboToolbox, and integrate KoboToolbox with other applications. n8n has built-in support for a wide range of KoboToolbox features, including creating, updating, deleting, and getting files, forms, hooks, and submissions.

On this page, you'll find a list of operations the KoboToolbox node supports and links to more resources.

Refer to KoboToolbox credentials for guidance on setting up authentication.

  • File
    • Create
    • Delete
    • Get
    • Get Many
  • Form
    • Get
    • Get Many
    • Redeploy
  • Hook
    • Get
    • Get Many
    • Logs
    • Retry All
    • Retry One
  • Submission
    • Delete
    • Get
    • Get Many
    • Get Validation Status
    • Update Validation Status

Templates and examples

Browse KoboToolbox integration templates, or search all templates

The Query Submission operation supports query options:

  • In the main section of the Parameters panel:
    • Start controls the index offset to start the query from (to use the API pagination logic).
    • Limit sets the maximum number of records to return. Note that the API always has a limit of 30,000 returned records, whatever value you provide.
  • In the Query Options section, you can activate the following parameters:
    • Query lets you specify filter predicates in MongoDB's JSON query format. For example: {"status": "success", "_submission_time": {"$lt": "2021-11-01T01:02:03"}} queries for all submissions with the value success for the field status, and submitted before November 1st, 2021, 01:02:03.
    • Fields lets you specify the list of fields you want to fetch, to make the response lighter.
    • Sort lets you provide a list of sorting criteria in MongoDB JSON format. For example, {"status": 1, "_submission_time": -1} specifies a sort order by ascending status, and then descending submission time.

More details about these options can be found in the Formhub API docs.

Submission options

All operations that return form submission data offer options to tweak the response. These include:

  • Download options lets you download any attachment linked to each form submission, such as pictures and videos. It also lets you select the naming pattern and the file size to download (if available, typically for images).
  • Formatting options perform some reformatting as described in About reformatting.

About reformatting

The default JSON format for KoboToolbox submission data is sometimes hard to deal with, because it's not schema-aware, and all fields are therefore returned as strings.

This node provides a lightweight opinionated reformatting logic, enabled with the Reformat? parameter, available on all operations that return form submissions: the submission query, get, and the attachment download operations.

When enabled, the reformatting:

  • Reorganizes the JSON into a multi-level hierarchy following the form's groups. By default, question grouping hierarchy is materialized by a / character in the field names, for example Group1/Question1. With reformatting enabled, n8n reorganizes these into Group1.Question1, as nested JSON objects.
  • Renames fields to trim the leading _ character (which many downstream systems don't support).
  • Parses all geospatial fields (Point, Line, and Area question types) into their standard GeoJSON equivalent.
  • Splits all fields matching any of the Multiselect Mask wildcard masks into an array. Since the multi-select fields appear as space-separated strings, they can't be guessed algorithmically, so you must provide a field naming mask. Format the masks as a comma-separated list. Lists support the * wildcard.
  • Converts all fields matching any of the Number Mask wildcard masks into a JSON float.

Here's a detailed example in JSON:

With reformatting enabled, and the appropriate masks for multi-select and number formatting (for example, Crops_* and *_sqm respectively), n8n parses it into:

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

Examples:

Example 1 (JSON):

{
  "_id": 471987,
  "formhub/uuid": "189436bb09a54957bfcc798e338b54d6",
  "start": "2021-12-05T16:13:38.527+02:00",
  "end": "2021-12-05T16:15:33.407+02:00",
  "Field_Details/Field_Name": "Test Fields",
  "Field_Details/Field_Location": "-1.932914 30.078211 1421 165",
  "Field_Details/Field_Shape": "-1.932914 30.078211 1421 165;-1.933011 30.078085 0 0;-1.933257 30.078004 0 0;-1.933338 30.078197 0 0;-1.933107 30.078299 0 0;-1.932914 30.078211 1421 165",
  "Field_Details/Crops_Grown": "maize beans avocado",
  "Field_Details/Field_Size_sqm": "2300",
  "__version__": "veGcULpqP6JNFKRJbbMvMs",
  "meta/instanceID": "uuid:2356cbbe-c1fd-414d-85c8-84f33e92618a",
  "_xform_id_string": "ajXVJpBkTD5tB4Nu9QXpgm",
  "_uuid": "2356cbbe-c1fd-414d-85c8-84f33e92618a",
  "_attachments": [],
  "_status": "submitted_via_web",
  "_geolocation": [
    -1.932914,
    30.078211
  ],
  "_submission_time": "2021-12-05T14:15:44",
  "_tags": [],
  "_notes": [],
  "_validation_status": {},
  "_submitted_by": null
}

Example 2 (JSON):

{
  "id": 471987,
  "formhub": {
    "uuid": "189436bb09a54957bfcc798e338b54d6"
  },
  "start": "2021-12-05T16:13:38.527+02:00",
  "end": "2021-12-05T16:15:33.407+02:00",
  "Field_Details": {
    "Field_Name": "Test Fields",
    "Field_Location": {
      "lat": -1.932914,
      "lon": 30.078211
    },
    "Field_Shape": {
      "type": "polygon",
      "coordinates": [
        {
          "lat": -1.932914,
          "lon": 30.078211
        },
        {
          "lat": -1.933011,
          "lon": 30.078085
        },
        {
          "lat": -1.933257,
          "lon": 30.078004
        },
        {
          "lat": -1.933338,
          "lon": 30.078197
        },
        {
          "lat": -1.933107,
          "lon": 30.078299
        },
        {
          "lat": -1.932914,
          "lon": 30.078211
        }
      ]
    },
    "Crops_Grown": [
      "maize",
      "beans",
      "avocado"
    ],
    "Field_Size_sqm": 2300
  },
  "version": "veGcULpqP6JNFKRJbbMvMs",
  "meta": {
    "instanceID": "uuid:2356cbbe-c1fd-414d-85c8-84f33e92618a"
  },
  "xform_id_string": "ajXVJpBkTD5tB4Nu9QXpgm",
  "uuid": "2356cbbe-c1fd-414d-85c8-84f33e92618a",
  "attachments": [],
  "status": "submitted_via_web",
  "geolocation": {
    "lat": -1.932914,
    "lon": 30.078211
  },
  "submission_time": "2021-12-05T14:15:44",
  "tags": [],
  "notes": [],
  "validation_status": {},
  "submitted_by": null
}

HELP n8n_scaling_mode_queue_jobs_failed Total number of jobs failed across all workers in scaling mode since instance start.

URL: llms-txt#help-n8n_scaling_mode_queue_jobs_failed-total-number-of-jobs-failed-across-all-workers-in-scaling-mode-since-instance-start.


n8n node linter

URL: llms-txt#n8n-node-linter

Contents:

  • Setup
  • Usage
    • Linting
    • Exceptions

n8n's node linter, eslint-plugin-n8n-nodes-base, statically analyzes ("lints") the source code of n8n nodes and credentials in the official repository and in community packages. The linter detects issues and automatically fixes them to help you follow best practices.

eslint-plugin-n8n-nodes-base contains a collection of rules for node files (*.node.ts), resource description files (*Description.ts), credential files (*.credentials.ts), and the package.json of a community package.

If using the n8n node starter: Run npm install in the starter project to install all dependencies. Once the installation finishes, the linter is available to you.

If using VS Code, install the ESLint VS Code extension. For other IDEs, refer to their ESLint integrations.

Don't edit the configuration file

.eslintrc.js contains the configuration for eslint-plugin-n8n-nodes-base. Don't edit this file.

You can use the linter in a community package or in the main n8n repository.

In a community package, the linter runs automatically after installing dependencies and before publishing the package to npm. In the main n8n repository, the linter runs automatically using GitHub Actions whenever you push to your pull request.

In both cases, VS Code lints in the background as you work on your project. Hover over a detected issue to see a full description of the linting and a link to further information.

You can also run the linter manually:

  • Run npm run lint to lint and view detected issues in your console.
  • Run npm run lintfix to lint and automatically fix issues. The linter fixes violations of rules marked as automatically fixable.

Both commands can run in the root directory of your community package, or in /packages/nodes-base/ in the main repository.

Instead of fixing a rule violation, you can also make an exception for it, so the linter doesn't flag it.

To make a lint exception from VS Code: hover over the issue and click on Quick fix (or cmd+. in macOS) and select Disable {rule} for this line. Only disable rules for a line where you have good reason to. If you think the linter is incorrectly reporting an issue, please report it in the linter repository.

To add a lint exception to a single file, add a code comment. In particular, TSLint rules may not show up in VS Code and may need to be turned off using code comments. Refer to the TSLint documentation for more guidance.


Venafi TLS Protect Cloud node

URL: llms-txt#venafi-tls-protect-cloud-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Venafi TLS Protect Cloud node to automate work in Venafi TLS Protect Cloud, and integrate Venafi TLS Protect Cloud with other applications. n8n has built-in support for a wide range of Venafi TLS Protect Cloud features, including deleting and downloading certificates, as well as creating certificate requests.

On this page, you'll find a list of operations the Venafi TLS Protect Cloud node supports and links to more resources.

Refer to Venafi TLS Protect Cloud credentials for guidance on setting up authentication.

  • Certificate
    • Delete
    • Download
    • Get
    • Get Many
    • Renew
  • Certificate Request
    • Create
    • Get
    • Get Many

Templates and examples

Browse Venafi TLS Protect Cloud integration templates, or search all templates

Refer to Venafi's REST API documentation for more information on this service.

  • A trigger node for Venafi TLS Protect Cloud.
  • A node for Venafi TLS Protect Datacenter.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Hosting n8n on Amazon Web Services

URL: llms-txt#hosting-n8n-on-amazon-web-services

Contents:

  • Hosting options
  • Prerequisites
  • Create a cluster
  • Clone configuration repository
  • Configure Postgres
    • Configure volume for persistent storage
    • Postgres environment variables
  • Configure n8n
    • Create a volume for file storage
    • Pod resources

This hosting guide shows you how to self-host n8n with Amazon Web Services (AWS). It uses n8n with Postgres as a database backend using Kubernetes to manage the necessary resources and reverse proxy.

AWS offers several services suitable for hosting n8n, including EC2 (virtual machines) and EKS (containers running with Kubernetes).

This guide uses EKS as the hosting option. Using Kubernetes requires some additional complexity and configuration, but is the best method for scaling n8n as demand changes.

The steps in this guide use a mix of the AWS UI and the eksctl CLI tool for EKS.

While not mentioned in the documentation for eksctl, you also need to install the AWS CLI tool, and configure authentication of the tool.

Self-hosting knowledge prerequisites

Self-hosting n8n requires technical knowledge, including:

  • Setting up and configuring servers and containers
  • Managing application resources and scaling
  • Securing servers and applications
  • Configuring n8n

n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends n8n Cloud.

Latest and Next versions

n8n releases a new minor version most weeks. The latest version is for production use. next is the most recent release. You should treat next as a beta: it may be unstable. To report issues, use the forum.

Current latest: 1.118.2
Current next: 1.119.0

Use the eksctl tool to create a cluster specifying a name and a region with the following command:

Creating the cluster can take a while.

Once the cluster is created, eksctl automatically sets the kubectl context to the cluster.

Clone configuration repository

Kubernetes and n8n require a series of configuration files. You can clone these from this repository. The following steps tell you what each file does, and what settings you need to change.

Clone the repository with the following command:

And change directory:

Configure Postgres

For larger scale n8n deployments, Postgres provides a more robust database backend than SQLite.

Configure volume for persistent storage

To maintain data between pod restarts, the Postgres deployment needs a persistent volume. The default AWS storage class, gp3, is suitable for this purpose. This is defined in the postgres-claim0-persistentvolumeclaim.yaml manifest.

Postgres environment variables

Postgres needs some environment variables set to pass to the application running in the containers.

The example postgres-secret.yaml file contains placeholders you need to replace with values of your own for user details and the database to use.

The postgres-deployment.yaml manifest then uses the values from this manifest file to send to the application pods.

Create a volume for file storage

While not essential for running n8n, using a persistent volume helps retain files uploaded while using n8n and lets you persist the n8n encryption key between restarts (n8n saves a file containing the key into file storage during startup).

The n8n-claim0-persistentvolumeclaim.yaml manifest creates this, and the n8n Deployment mounts that claim in the volumes section of the n8n-deployment.yaml manifest.

Kubernetes lets you specify the minimum resources application containers need and the limits they can run to. The example YAML files cloned above contain the following in the resources section of the n8n-deployment.yaml file:

This defines a minimum of 250 MB of memory per container, a maximum of 500 MB, and lets Kubernetes handle CPU. You can change these values to match your own needs. As a guide, here are the resource values for the n8n Cloud offerings:

  • Start: 320 MB RAM, 10 millicore CPU burstable
  • Pro (10k executions): 640 MB RAM, 20 millicore CPU burstable
  • Pro (50k executions): 1280 MB RAM, 80 millicore CPU burstable
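
For reference, a resources block matching the minimum and maximum described above might look like the following sketch (values reconstructed from the description; check n8n-deployment.yaml in the repository for the exact block):

resources:
  requests:
    memory: "250Mi"
  limits:
    memory: "500Mi"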

Optional: Environment variables

You can configure n8n settings and behaviors using environment variables.

Create an n8n-secret.yaml file. Refer to Environment variables for n8n environment variables details.

The two deployment manifests (n8n-deployment.yaml and postgres-deployment.yaml) define the n8n and Postgres applications to Kubernetes.

The manifests define the following:

  • Send the environment variables defined to each application pod
  • Define the container image to use
  • Set resource consumption limits
  • The volumes defined earlier and volumeMounts to define the path in the container to mount volumes.
  • Scaling and restart policies. The example manifests define one instance of each pod. You should change this to meet your needs.

The two service manifests (postgres-service.yaml and n8n-service.yaml) expose the services to the outside world through the Kubernetes load balancer, using ports 5432 and 5678 respectively by default.

Send to Kubernetes cluster

Send all the manifests to the cluster by running the following command in the n8n-kubernetes-hosting directory:

You may see an error message about not finding an "n8n" namespace because that resource isn't ready yet. You can run the same command again, or apply the namespace manifest first with the following command:
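
As a sketch, assuming you run the commands from the directory containing the cloned manifests (the namespace manifest filename below is an assumption; use the namespace manifest from the repository):

kubectl apply -f .                # apply every manifest in the directory
kubectl apply -f namespace.yaml   # apply only the namespace manifest first, if needed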

n8n typically operates on a subdomain. Create a DNS record with your provider for the subdomain and point it to a static address of the instance.

To find the address of the n8n service running on the instance:

  1. Open the Clusters section of the Amazon Elastic Kubernetes Service page in the AWS console.
  2. Select the name of the cluster to open its configuration page.
  3. Select the Resources tab, then Service and networking > Services.
  4. Select the n8n service and copy the Load balancer URLs value. Use this value suffixed with the n8n service port (5678) for DNS.

This guide uses HTTP connections for the services it defines, for example in n8n-deployment.yaml. However, if you click the Load balancer URLs value, EKS takes you to an HTTPS URL, which results in an error. To avoid this, make sure to use HTTP when you open the n8n subdomain.

If you need to delete the setup, you can remove the resources created by the manifests with the following command:
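
A minimal sketch, assuming you run it from the directory containing the manifests:

kubectl delete -f .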

Examples:

Example 1 (unknown):

eksctl create cluster --name n8n --region <your-aws-region>

Example 2 (unknown):

git clone https://github.com/n8n-io/n8n-hosting.git

Example 3 (unknown):

cd n8n-hosting/kubernetes

Example 4 (unknown):

…
spec:
  storageClassName: gp3
  accessModes:
    - ReadWriteOnce
…


Facebook Trigger Instagram object

URL: llms-txt#facebook-trigger-instagram-object

Contents:

  • Trigger configuration
  • Related resources

Use this object to receive updates when someone comments on the Media objects of your app users; @mentions your app users; or when Stories of your app users expire. Refer to Facebook Trigger for more information on the trigger itself.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Facebook Trigger integrations page.

Trigger configuration

To configure the trigger with this Object:

  1. Select the Credential to connect with. Select an existing or create a new Facebook App credential.
  2. Enter the APP ID of the app connected to your credential. Refer to the Facebook App credential documentation for more information.
  3. Select Instagram as the Object.
  4. Field Names or IDs: By default, the node will trigger on all the available events using the * wildcard filter. If you'd like to limit the events, use the X to remove the star and use the dropdown or an expression to select the updates you're interested in. Options include:
    • Comments: Notifies you when anyone comments on an IG Media owned by your app's Instagram user.
    • Messaging Handover
    • Mentions: Notifies you whenever an Instagram user @mentions an Instagram Business or Creator Account in a comment or caption.
    • Messages: Notifies you when anyone messages your app's Instagram user.
    • Messaging Seen: Notifies you when someone sees a message sent by your app's Instagram user.
    • Standby
    • Story Insights: Notifies you one hour after a story expires with metrics describing interactions on a story.
  5. In Options, turn on the toggle to Include Values. This Object type fails without the option enabled.

Refer to Webhooks for Instagram and Meta's Instagram Graph API reference for more information.


MSG91 node

URL: llms-txt#msg91-node

Contents:

  • Operations
  • Templates and examples
  • Find your Sender ID

Use the MSG91 node to automate work in MSG91, and integrate MSG91 with other applications. n8n supports sending SMS with MSG91.

On this page, you'll find a list of operations the MSG91 node supports and links to more resources.

Refer to MSG91 credentials for guidance on setting up authentication.

Templates and examples

Browse MSG91 integration templates, or search all templates

Find your Sender ID

  1. Log in to your MSG91 dashboard.
  2. Select Sender Id in the left panel.
  3. If you don't already have one, select Add Sender Id +, fill in the details, and select Save Sender Id.

Microsoft credentials

URL: llms-txt#microsoft-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2
    • Register an application
    • Generate a client secret
    • Service-specific settings
  • Common issues
    • Need admin approval

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to the linked Microsoft API documentation below for more information about each service's API:

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

Some Microsoft services require extra information for OAuth2. Refer to Service-specific settings for more guidance on those services.

For self-hosted users, there are two main steps to configure OAuth2 from scratch:

  1. Register an application with the Microsoft Identity Platform.
  2. Generate a client secret for that application.

Follow the detailed instructions for each step below. For more detail on the Microsoft OAuth2 web flow, refer to Microsoft authentication and authorization basics.

Register an application

Register an application with the Microsoft Identity Platform:

  1. Open the Microsoft Application Registration Portal.
  2. Select Register an application.
  3. Enter a Name for your app.
  4. In Supported account types, select Accounts in any organizational directory (Any Azure AD directory - Multi-tenant) and personal Microsoft accounts (for example, Skype, Xbox).
  5. In Register an application:
    1. Copy the OAuth Callback URL from your n8n credential.
    2. Paste it into the Redirect URI (optional) field.
    3. Select Select a platform > Web.
  6. Select Register to finish creating your application.
  7. Copy the Application (client) ID and paste it into n8n as the Client ID.

Refer to Register an application with the Microsoft Identity Platform for more information.

Generate a client secret

With your application created, generate a client secret for it:

  1. On your Microsoft application page, select Certificates & secrets in the left navigation.
  2. In Client secrets, select + New client secret.
  3. Enter a Description for your client secret, such as n8n credential.
  4. Select Add.
  5. Copy the Secret in the Value column.
  6. Paste it into n8n as the Client Secret.
  7. If you see other fields in the n8n credential, refer to Service-specific settings below for guidance on completing those fields.
  8. Select Connect my account in n8n to finish setting up the connection.
  9. Log in to your Microsoft account and allow the app to access your info.

Refer to Microsoft's Add credentials for more information on adding a client secret.

Service-specific settings

The following services require extra information for OAuth2:

Dynamics OAuth2 requires information about your Dynamics domain and region. Follow these extra steps to complete the credential:

  1. Enter your Dynamics Domain.
  2. Select the Dynamics data center Region you're within.

Refer to the Microsoft Datacenter regions documentation for more information on the region options and corresponding URLs.

Microsoft (general)

The general Microsoft OAuth2 also requires you to provide a space-separated list of Scopes for this credential.

Refer to Scopes and permissions in the Microsoft identity platform for a list of possible scopes.
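
For example, a credential that reads the signed-in user's profile and mailbox might use a scope list like the following (illustrative only; pick the scopes your selected operations require):

openid profile email offline_access User.Read Mail.Read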

Outlook OAuth2 supports the credential accessing a user's primary email inbox or a shared inbox. By default, the credential will access a user's primary email inbox. To change this behavior:

  1. Turn on Use Shared Inbox.
  2. Enter the target user's UPN or ID as the User Principal Name.

SharePoint OAuth2 requires information about your SharePoint Subdomain.

To complete the credential, enter the Subdomain part of your SharePoint URL. For example, if your SharePoint URL is https://tenant123.sharepoint.com, the subdomain is tenant123.

SharePoint requires the following permissions:

Application permissions:

  • Sites.Read.All
  • Sites.ReadWrite.All

Delegated permissions:

  • SearchConfiguration.Read.All
  • SearchConfiguration.ReadWrite.All

Here are the known common errors and issues with Microsoft OAuth2 credentials.

Need admin approval

When attempting to add credentials for a Microsoft 365 or Microsoft Entra account, users may see a message during this procedure saying that the action requires admin approval.

This message appears when the account attempting to grant permissions for the credential is managed by Microsoft Entra ID. To issue the credential, the administrator account needs to grant permission to the user (or "tenant") for that application.

The procedure for this is covered in the Microsoft Entra documentation.


Facebook Trigger Permissions object

URL: llms-txt#facebook-trigger-permissions-object

Contents:

  • Trigger configuration
  • Related resources

Use this object to receive updates when a user grants or revokes a permission for your app. Refer to Facebook Trigger for more information on the trigger itself.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Facebook Trigger integrations page.

Trigger configuration

To configure the trigger with this Object:

  1. Select the Credential to connect with. Select an existing or create a new Facebook App credential.
  2. Enter the APP ID of the app connected to your credential. Refer to the Facebook App credential documentation for more information.
  3. Select Permissions as the Object.
  4. Field Names or IDs: By default, the node will trigger on all the available events using the * wildcard filter. If you'd like to limit the events, use the X to remove the star and use the dropdown or an expression to select the updates you're interested in.
  5. In Options, choose whether to turn on the toggle to Include Values. When turned on, the node includes the new values for the changes.

Refer to Meta's Permissions Graph API reference for more information.


Fortinet FortiGate credentials

URL: llms-txt#fortinet-fortigate-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Fortinet FortiGate account.

Supported authentication methods

Refer to Fortinet FortiGate's API documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

Using API access token

To configure this credential, you'll need:

Refer to the Fortinet FortiGate Using APIs documentation for more information about token-based authentication in FortiGate.


Summarize

URL: llms-txt#summarize

Contents:

  • Node parameters
    • Fields to Summarize
    • Fields to Split By
  • Node options
    • Continue if Field Not Found
    • Disable Dot Notation
    • Output Format
  • Ignore items without valid fields to group by
  • Templates and examples
  • Related resources

Use the Summarize node to aggregate items together, in a manner similar to Excel pivot tables.

Fields to Summarize

Use these fields to define how you want to summarize your input data.

  • Aggregation: Select the aggregation method to use on a given field. Options include:
    • Append: Append the values from your input data into a list.
      • If you select this option, decide whether you want to Include Empty Values or not.
    • Average: Calculate the numeric average of your input data.
    • Concatenate: Combine together values in your input data.
      • If you select this option, decide whether you want to Include Empty Values or not.
      • Separator: Select the separator you want to insert between concatenated values.
    • Count: Count the total number of values in your input data.
    • Count Unique: Count the number of unique values in your input data.
    • Max: Find the highest numeric value in your input data.
    • Min: Find the lowest numeric value in your input data.
    • Sum: Add together the numeric values in your input data.
  • Field: Enter the name of the field you want to perform the aggregation on.

Fields to Split By

Enter the name of the input fields that you want to split the summary by (similar to a group by statement). This allows you to get separate summaries based on values in other fields.

For example, if our input data contains columns for Sales Rep and Deal Amount and we're performing a Sum on the Deal Amount field, we could split by Sales Rep to get a Sum total for each Sales Rep.

To enter multiple fields to split by, enter a comma-separated list.
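
Building on the Sales Rep example above, here's a sketch of input items and what a Sum split by Sales Rep could produce (the output field names are illustrative and depend on the aggregation chosen):

[
  { "Sales Rep": "Ana", "Deal Amount": 100 },
  { "Sales Rep": "Ana", "Deal Amount": 250 },
  { "Sales Rep": "Ben", "Deal Amount": 300 }
]

[
  { "Sales Rep": "Ana", "sum_Deal Amount": 350 },
  { "Sales Rep": "Ben", "sum_Deal Amount": 300 }
]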

Continue if Field Not Found

By default, if a Field to Summarize isn't in any items, the node throws an error. Use this option to continue and return a single empty item instead (turned on) or to keep the default error behavior (turned off).

Disable Dot Notation

By default, n8n enables dot notation to reference child fields in the format parent.child. Use this option to disable dot notation (turned on) or to continue using dot notation (turned off).

Select the format for your output. This option is most relevant when you're using Fields to Split By:

  • Each Split in a Separate Item: Use this option to generate a separate output item for each split out field.
  • All Splits in a Single Item: Use this option to generate a single item that lists the split out fields.

Ignore items without valid fields to group by

Set whether to ignore input items that don't contain the Fields to Split By (turned on) or not (turned off).

Templates and examples

Scrape and summarize webpages with AI

View template details

AI-Powered YouTube Video Summarization & Analysis

View template details

🤖 AI Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant

View template details

Browse Summarize integration templates, or search all templates

Learn more about data structure and data flow in n8n workflows.


AlienVault credentials

URL: llms-txt#alienvault-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create an AlienVault account.

Supported authentication methods

Refer to AlienVault's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

  • An OTX Key: Once you have an AlienVault account, the OTX Key displays in your Settings.

Google Calendar Calendar operations

URL: llms-txt#google-calendar-calendar-operations

Contents:

  • Availability
    • Options

Use this operation to check availability in a calendar in Google Calendar. Refer to Google Calendar for more information on the Google Calendar node itself.

Use this operation to check if a time-slot is available in a calendar.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Calendar credential.

  • Resource: Select Calendar.

  • Operation: Select Availability.

  • Calendar: Choose a calendar you want to check against. Select From list to choose the title from the dropdown list or By ID to enter a calendar ID.

  • Start Time: The start time for the time-slot you want to check. By default, uses an expression evaluating to the current time ({{ $now }}).

  • End Time: The end time for the time-slot you want to check. By default, uses an expression evaluating to an hour from now ({{ $now.plus(1, 'hour') }}).

  • Output Format: Select the format for the availability information:

    • Availability: Returns if there are already events overlapping with the given time slot or not.
    • Booked Slots: Returns the booked slots.
    • RAW: Returns the RAW data from the API.
  • Timezone: The timezone used in the response. By default, uses the n8n timezone.

Refer to the Freebusy: query | Google Calendar API documentation for more information.


Set up your development environment

URL: llms-txt#set-up-your-development-environment

Contents:

  • Requirements
  • Editor setup

This document lists the essential dependencies for developing a node, as well as guidance on setting up your editor.

To build and test a node, you need:

  • Node.js and npm. Minimum version Node 18.17.0. You can find instructions on how to install both using nvm (Node Version Manager) for Linux, Mac, and WSL (Windows Subsystem for Linux) here. For Windows users, refer to Microsoft's guide to Install NodeJS on Windows.
  • A local instance of n8n. You can install n8n with npm install n8n -g, then follow the steps in Run your node locally to test your node.
  • When building verified community nodes, you must use the n8n-node tool to create and test your node.

You should also have git installed. This allows you to clone and use the n8n-node-starter.

n8n recommends using VS Code as your editor.

Install these extensions:

By using VS Code and these extensions, you get access to the n8n node linter's warnings as you code.


Matrix node

URL: llms-txt#matrix-node

Contents:

  • Operations
  • Templates and examples

Use the Matrix node to automate work in Matrix, and integrate Matrix with other applications. n8n has built-in support for a wide range of Matrix features, including getting current user's account information, sending media and messages to a room, and getting room members and messages.

On this page, you'll find a list of operations the Matrix node supports and links to more resources.

Refer to Matrix credentials for guidance on setting up authentication.

  • Account
    • Get current user's account information
  • Event
    • Get single event by ID
  • Media
    • Send media to a chat room
  • Message
    • Send a message to a room
    • Gets all messages from a room
  • Room
    • New chat room with defined settings
    • Invite a user to a room
    • Join a new room
    • Kick a user from a room
    • Leave a room
  • Room Member
    • Get all members

Templates and examples

Manage room members in Matrix

View template details

Weekly Coffee Chat (Matrix Version)

View template details

🛠️ Matrix Tool MCP Server 💪 all 11 operations

View template details

Browse Matrix integration templates, or search all templates


Mapping in the expressions editor

URL: llms-txt#mapping-in-the-expressions-editor

Contents:

  • Access the linked item in a previous node's output
    • Access the linked item in the current node's input

These examples show how to access linked items in the expressions editor. Refer to expressions for more information on expressions, including built-in variables and methods.

For information on errors with mapping and linking items, refer to Item linking errors.

Access the linked item in a previous node's output

When you use the $("<node-name>").item expression (Example 1 below), n8n works back up the item linking chain to find the parent item in the given node.

As a longer example, consider a scenario where a node earlier in the workflow has the following output data:

To extract the name, use the following expression:

Access the linked item in the current node's input

In this case, the item linking is within the node: find the input item that the node links to an output item.

As a longer example, consider a scenario where the current node has the following input data:

To extract the name, you'd normally use drag-and-drop Data mapping, but you could also write the following expression:

Examples:

Example 1 (unknown):

// Returns the linked item
{{$("<node-name>").item}}

Example 2 (unknown):

[
  {
    "id": "23423532",
    "name": "Jay Gatsby",
  },
  {
    "id": "23423533",
    "name": "José Arcadio Buendía",
  },
  {
    "id": "23423534",
    "name": "Max Sendak",
  },
  {
    "id": "23423535",
    "name": "Zaphod Beeblebrox",
  },
  {
    "id": "23423536",
    "name": "Edmund Pevensie",
  }
]

Example 3 (unknown):

{{$("<node-name>").item.json.name}}

Example 4 (unknown):

// Returns the linked item
{{$input.item}}

Data mapping

URL: llms-txt#data-mapping

Data mapping means referencing data from previous nodes.

This section contains guidance on:


Google Gemini(PaLM) credentials

URL: llms-txt#google-gemini(palm)-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using Gemini(PaLM) API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • Gemini(PaLM) API key

Refer to Google's Gemini API documentation for more information about the service.

View n8n's Advanced AI documentation.

Using Gemini(PaLM) API key

To configure this credential, you'll need:

  • The API Host URL: Both PaLM and Gemini use the default https://generativelanguage.googleapis.com.
  • An API Key: Create a key in Google AI Studio.

Custom hosts not supported

The related nodes don't yet support custom hosts or proxies for the API host and must use https://generativelanguage.googleapis.com.

To create an API key:

  1. Go to the API Key page in Google AI Studio: https://aistudio.google.com/apikey.
  2. Select Create API Key.
  3. You can choose whether to Create API key in new project or search for an existing Google Cloud project to Create API key in existing project.
  4. Copy the generated API key and add it to your n8n credential.

LangChain in n8n

URL: llms-txt#langchain-in-n8n

n8n provides a collection of nodes that implement LangChain's functionality. The LangChain nodes are configurable, meaning you can choose your preferred agent, LLM, memory, and so on. Alongside the LangChain nodes, you can connect any n8n node as normal: this means you can integrate your LangChain logic with other data sources and services.


ProfitWell node

URL: llms-txt#profitwell-node

Contents:

  • Operations
  • Templates and examples

Use the ProfitWell node to automate work in ProfitWell, and integrate ProfitWell with other applications. n8n supports getting your company's account settings and retrieving financial metrics from ProfitWell.

On this page, you'll find a list of operations the ProfitWell node supports and links to more resources.

Refer to ProfitWell credentials for guidance on setting up authentication.

  • Company
    • Get your company's ProfitWell account settings
  • Metric
    • Retrieve financial metrics broken down by day for either the current month or the last month

Templates and examples

Browse ProfitWell integration templates, or search all templates


BambooHR credentials

URL: llms-txt#bamboohr-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API Key

You can use these credentials to authenticate the following node:

Create a BambooHR account.

Supported authentication methods

Refer to BambooHR's API documentation for more information about the service.

To configure this credential, you'll need:


Gmail IMAP credentials

URL: llms-txt#gmail-imap-credentials

Contents:

  • Prerequisites
    • Enable 2-step Verification
    • Generate an app password
  • Set up the credential

Follow these steps to configure the IMAP credentials with a Gmail account.

To follow these instructions, you must first:

  1. Enable 2-step Verification on your Gmail account.
  2. Generate an app password.

Enable 2-step Verification

To enable 2-step Verification:

  1. Log in to your Google Account.
  2. Select Security from the left navigation.
  3. Under How you sign in to Google, select 2-Step Verification.
    • If 2-Step Verification is already enabled, skip to the next section.
  4. Select Get started.
  5. Follow the on-screen steps to configure 2-Step Verification.

Refer to Turn on 2-step Verification for more information.

If you can't turn on 2-step Verification, check with your email administrator.

Generate an app password

To generate an app password:

  1. In your Google account, go to App passwords.
  2. Enter an App name for your new app password, like n8n credential.
  3. Select Create.
  4. Copy the generated app password. You'll use this in your n8n credential.

Refer to Google's Sign in with app passwords documentation for more information.

Set up the credential

To set up the IMAP credential with a Gmail account, use these settings:

  1. Enter your Gmail email address as the User.
  2. Enter the app password you generated above as the Password.
  3. Enter imap.gmail.com as the Host.
  4. For the Port, keep the default port number of 993. Check with your email administrator if this port doesn't work.
  5. Turn on the SSL/TLS toggle.
  6. Check with your email administrator about whether to Allow Self-Signed Certificates.

Refer to Add Gmail to another client for more information. If you set up your personal Google account before June 2024, you may also need to Enable IMAP.


Monica CRM node

URL: llms-txt#monica-crm-node

Contents:

  • Operations
  • Templates and examples

Use the Monica CRM node to automate work in Monica CRM, and integrate Monica CRM with other applications. n8n has built-in support for a wide range of Monica CRM features, including creating, updating, deleting, and getting activities, calls, contacts, messages, tasks, and notes.

On this page, you'll find a list of operations the Monica CRM node supports and links to more resources.

Refer to Monica CRM credentials for guidance on setting up authentication.

  • Activity
    • Create an activity
    • Delete an activity
    • Retrieve an activity
    • Retrieve all activities
    • Update an activity
  • Call
    • Create a call
    • Delete a call
    • Retrieve a call
    • Retrieve all calls
    • Update a call
  • Contact
    • Create a contact
    • Delete a contact
    • Retrieve a contact
    • Retrieve all contacts
    • Update a contact
  • Contact Field
    • Create a contact field
    • Delete a contact field
    • Retrieve a contact field
    • Update a contact field
  • Contact Tag
    • Add
    • Remove
  • Conversation
    • Create a conversation
    • Delete a conversation
    • Retrieve a conversation
    • Update a conversation
  • Conversation Message
    • Add a message to a conversation
    • Update a message in a conversation
  • Journal Entry
    • Create a journal entry
    • Delete a journal entry
    • Retrieve a journal entry
    • Retrieve all journal entries
    • Update a journal entry
  • Note
    • Create a note
    • Delete a note
    • Retrieve a note
    • Retrieve all notes
    • Update a note
  • Reminder
    • Create a reminder
    • Delete a reminder
    • Retrieve a reminder
    • Retrieve all reminders
    • Update a reminder
  • Tag
    • Create a tag
    • Delete a tag
    • Retrieve a tag
    • Retrieve all tags
    • Update a tag
  • Task
    • Create a task
    • Delete a task
    • Retrieve a task
    • Retrieve all tasks
    • Update a task

Templates and examples

Browse Monica CRM integration templates, or search all templates


Reddit node

URL: llms-txt#reddit-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Reddit node to automate work in Reddit, and integrate Reddit with other applications. n8n has built-in support for a wide range of Reddit features, including getting profiles and users, retrieving post comments and subreddit details, and submitting, getting, and deleting posts.

On this page, you'll find a list of operations the Reddit node supports and links to more resources.

Refer to Reddit credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Post
    • Submit a post to a subreddit
    • Delete a post from a subreddit
    • Get a post from a subreddit
    • Get all posts from a subreddit
    • Search posts in a subreddit or in all of Reddit.
  • Post Comment
    • Create a top-level comment in a post
    • Retrieve all comments in a post
    • Remove a comment from a post
    • Write a reply to a comment in a post
  • Profile
    • Get
  • Subreddit
    • Retrieve background information about a subreddit.
    • Retrieve information about subreddits from all of Reddit.
  • User
    • Get

Templates and examples

Analyze Reddit Posts with AI to Identify Business Opportunities

View template details

Extract Trends, Auto-Generate Social Content with AI, Reddit, Google & Post

View template details

View template details

Browse Reddit integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Timezone and localization environment variables

URL: llms-txt#timezone-and-localization-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

Variable Type Default Description
GENERIC_TIMEZONE * America/New_York The n8n instance timezone. Important for schedule nodes (such as Cron).
N8N_DEFAULT_LOCALE String en A locale identifier, compatible with the Accept-Language header. n8n doesn't support regional identifiers, such as de-AT. When running in a locale other than the default, n8n displays UI strings in the selected locale, and falls back to en for any untranslated strings.
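
For example, on a self-hosted npm instance you could set the timezone before starting n8n (the timezone value below is illustrative):

export GENERIC_TIMEZONE="Europe/Berlin"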

Node user interface elements

URL: llms-txt#node-user-interface-elements

Contents:

  • String
    • Support drag and drop for data keys
  • Number
  • Collection
  • DateTime
  • Boolean
  • Color
  • Options
  • Multi-options
  • Filter

n8n provides a set of predefined UI components (based on a JSON file) that allows users to input all sorts of data types. The following UI elements are available in n8n.

String field for inputting passwords:

String field with more than one row:

Support drag and drop for data keys

Users can drag and drop data values to map them to fields. Dragging and dropping creates an expression to load the data value. n8n supports this automatically.

You need to add an extra configuration option to support dragging and dropping data keys:

  • requiresDataPath: 'single': for fields that require a single string.
  • requiresDataPath: 'multiple': for fields that can accept a comma-separated list of strings.

The Compare Datasets node code has examples.
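
As a sketch, a string field that accepts a comma-separated list of dragged data keys could be declared like this (the display name, name, and description are illustrative):

{
	displayName: 'Fields to Compare',
	name: 'fieldsToCompare',
	type: 'string',
	requiresDataPath: 'multiple',
	default: '',
	description: 'Comma-separated list of field names to compare',
}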

Number field with decimal points:

Use the collection type when you need to display optional fields.

The dateTime type provides a date picker.

The boolean type adds a toggle for entering true or false.

The color type provides a color selector.

The options type adds an options list. Users can select a single value.
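
A sketch of an options list, following the same structure as the other examples on this page (the resource names and values are illustrative):

{
	displayName: 'Resource',
	name: 'resource',
	type: 'options',
	options: [
		{
			name: 'Contact',
			value: 'contact',
		},
		{
			name: 'Invoice',
			value: 'invoice',
		},
	],
	default: 'contact',
	description: 'The resource to operate on',
}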

The multiOptions type adds an options list. Users can select more than one value.

Use this component to evaluate, match, or filter incoming data.

This is the code from n8n's own If node. It shows a filter component working with a collection component where users can configure the filter's behavior.

Assignment collection (drag and drop)

Use the drag and drop component when you want users to pre-fill name and value parameters with a single drag interaction.

You can see an example in n8n's Edit Fields (Set) node:

Use the fixedCollection type to group fields that are semantically related.

The resource locator element helps users find a specific resource in an external service, such as a card or label in Trello.

The following options are available:

  • ID
  • URL
  • List: allows users to select or search from a prepopulated list. This option requires more coding, as you must populate the list, and handle searching if you choose to support it.

You can choose which types to include.

Refer to the following for live examples:

If your node performs insert, update, or upsert operations, you need to send data from the node in a format supported by the service you're integrating with. A common pattern is to use a Set node before the node that sends data, to convert the data to match the schema of the service you're connecting to. The resource mapper UI component provides a way to get data into the required format directly within the node, rather than using a Set node. The resource mapper component can also validate input data against the schema provided in the node, and cast input data into the expected type.

Mapping is the process of setting the input data to use as values when updating row(s). Matching is the process of using column names to identify the row(s) to update.

Refer to the Postgres node (version 2) for a live example using a database schema.

Refer to the Google Sheets node (version 2) for a live example using a schema-less service.

Resource mapper type options interface

The typeOptions section must implement the following interface:

Resource mapper method

This method contains your node-specific logic for fetching the data schema. Every node must implement its own logic for fetching the schema, and setting up each UI field according to the schema.

It must return a value that implements the ResourceMapperFields interface:

Refer to the Postgres resource mapping method and Google Sheets resource mapping method for live examples.

The HTML editor allows users to create HTML templates in their workflows. The editor supports standard HTML, CSS in <style> tags, and expressions wrapped in {{}}. Users can add <script> tags to pull in additional JavaScript. n8n doesn't run this JavaScript during workflow execution.

Refer to Html.node.ts for a live example.

Display a yellow box with a hint or extra info. Refer to Node UI design for guidance on writing good hints and info text.

There are two types of hints: parameter hints and node hints:

  • Parameter hints are small lines of text below a user input field.
  • Node hints are a more powerful and flexible option than Notice. Use them to display longer hints, in the input panel, output panel, or node details view.

Add a parameter hint

Add the hint parameter to a UI element:
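
A sketch of a parameter with a hint (the field names and hint text are illustrative; the hint property holds the text displayed below the input field):

{
	displayName: 'URL',
	name: 'url',
	type: 'string',
	default: '',
	hint: 'Enter the full URL, including the protocol',
}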

Define the node's hints in the hints property within the node description:

Add a dynamic hint to a programmatic-style node

In programmatic-style nodes you can create a dynamic message that includes information from the node execution. As it relies on the node output data, you can't display this type of hint until after execution.

For a live example of a dynamic hint in a programmatic-style node, view the Split Out node code.

Examples:

Example 1 (unknown):

{
	displayName: 'Name', // The value the user sees in the UI
	name: 'name', // The name used to reference the element UI within the code
	type: 'string',
	required: true, // Whether the field is required or not
	default: 'n8n',
	description: 'The name of the user',
	displayOptions: { // the resources and operations to display this element with
		show: {
			resource: [
				// comma-separated list of resource names
			],
			operation: [
				// comma-separated list of operation names
			]
		}
	},
}

Example 2 (unknown):

{
	displayName: 'Password',
	name: 'password',
	type: 'string',
	required: true,
	typeOptions: {
		password: true,
	},
	default: '',
	description: `User's password`,
	displayOptions: { // the resources and operations to display this element with
		show: {
			resource: [
				// comma-separated list of resource names
			],
			operation: [
				// comma-separated list of operation names
			]
		}
	},
}

Example 3 (unknown):

{
	displayName: 'Description',
	name: 'description',
	type: 'string',
	required: true,
	typeOptions: {
		rows: 4,
	},
	default: '',
	description: 'Description',
	displayOptions: { // the resources and operations to display this element with
		show: {
			resource: [
				// comma-separated list of resource names
			],
			operation: [
				// comma-separated list of operation names
			]
		}
	},
}

Example 4 (unknown):

{
	displayName: 'Amount',
	name: 'amount',
	type: 'number',
	required: true,
	typeOptions: {
		maxValue: 10,
		minValue: 0,
		numberPrecision: 2,
	},
	default: 10.00,
	description: 'Your current amount',
	displayOptions: { // the resources and operations to display this element with
		show: {
			resource: [
				// comma-separated list of resource names
			],
			operation: [
				// comma-separated list of operation names
			]
		}
	},
}

Find your container ID

URL: llms-txt#find-your-container-id


Schedule Trigger node

URL: llms-txt#schedule-trigger-node

Contents:

  • Node parameters
    • Seconds trigger interval
    • Minutes trigger interval
    • Hours trigger interval
    • Days trigger interval
    • Weeks trigger interval
    • Months trigger interval
    • Custom (Cron) interval
  • Templates and examples
  • Common issues

Use the Schedule Trigger node to run workflows at fixed intervals and times. This works in a similar way to the Cron software utility in Unix-like systems.

You must activate the workflow

If a workflow uses the Schedule node as a trigger, make sure that you save and activate the workflow.

The node relies on the timezone setting. n8n uses either:

  1. The workflow timezone, if set. Refer to Workflow settings for more information.
  2. The n8n instance timezone, if the workflow timezone isn't set. The default is America/New_York for self-hosted instances. n8n Cloud tries to detect the instance owner's timezone when they sign up, falling back to GMT as the default. Self-hosted users can change the instance setting using Environment variables. Cloud admins can change the instance timezone in the Admin dashboard.

Add Trigger Rules to determine when the trigger should run.

Use the Trigger Interval to select the time interval unit of measure to schedule the trigger for. All other parameters depend on the interval you select. Choose from:

You can add multiple Trigger Rules to run the node on different schedules.

Refer to the sections below for more detail on configuring each Trigger Interval. Refer to Templates and examples for further examples.

Seconds trigger interval

  • Seconds Between Triggers: Enter the number of seconds between each workflow trigger. For example, if you enter 30 here, the trigger will run every 30 seconds.

Minutes trigger interval

  • Minutes Between Triggers: Enter the number of minutes between each workflow trigger. For example, if you enter 5 here, the trigger will run every 5 minutes.

Hours trigger interval

  • Hours Between Triggers: Enter the number of hours between each workflow trigger.
  • Trigger at Minute: Enter the minute past the hour to trigger the node when it runs, from 0 to 59.

For example, if you enter 6 Hours Between Triggers and 30 Trigger at Minute, the node will run every six hours at 30 minutes past the hour.

Days trigger interval

  • Days Between Triggers: Enter the number of days between each workflow trigger.
  • Trigger at Hour: Select the hour of the day to trigger the node.
  • Trigger at Minute: Enter the minute past the hour to trigger the node when it runs, from 0 to 59.

For example, if you enter 2 Days Between Triggers, 9am for Trigger at Hour, and 15 Trigger at Minute, the node will run every two days at 9:15am.

Weeks trigger interval

  • Weeks Between Triggers: Enter the number of weeks between each workflow trigger.
  • Trigger on Weekdays: Select the day(s) of the week you want to trigger the node.
  • Trigger at Hour: Select the hour of the day to trigger the node.
  • Trigger at Minute: Enter the minute past the hour to trigger the node when it runs, from 0 to 59.

For example, if you enter 2 Weeks Between Triggers, Monday for Trigger on Weekdays, 3pm for Trigger at Hour, and 30 Trigger at Minute, the node will run every two weeks on Monday at 3:30 PM.

Months trigger interval

  • Months Between Triggers: Enter the number of months between each workflow trigger.
  • Trigger at Day of Month: Enter the day of the month the node should trigger on, from 1 to 31. If a month doesn't have this day, the node won't trigger. For example, if you enter 30 here, the node won't trigger in February.
  • Trigger at Hour: Select the hour of the day to trigger the node.
  • Trigger at Minute: Enter the minute past the hour to trigger the node when it runs, from 0 to 59.

For example, if you enter 3 Months Between Triggers, 28 Trigger at Day of Month, 9am for Trigger at Hour, and 0 Trigger at Minute, the node will run each quarter on the 28th day of the month at 9:00 AM.

Custom (Cron) interval

Enter a custom cron Expression to set the schedule for the trigger.

To generate a Cron expression, you can use crontab guru. Paste the Cron expression that you generated using crontab guru in the Expression field in n8n.

Type Cron Expression Description
Every X Seconds */10 * * * * * Every 10 seconds.
Every X Minutes */5 * * * * Every 5 minutes.
Hourly 0 * * * * Every hour on the hour.
Daily 0 6 * * * At 6:00 AM every day.
Weekly 0 12 * * 1 At noon every Monday.
Monthly 0 0 1 * * At midnight on the 1st of every month.
Every X Days 0 0 */3 * * At midnight every 3rd day.
Only Weekdays 0 9 * * 1-5 At 9:00 AM Monday through Friday.
Custom Hourly Range 0 9-17 * * * Every hour from 9:00 AM to 5:00 PM every day.
Quarterly 0 0 1 1,4,7,10 * At midnight on the 1st of January, April, July, and October.

Using variables in the Cron expression

While variables can be used in the scheduled trigger, their values only get evaluated when the workflow is activated. If you alter a variable's value in the settings after a workflow is activated, the changes won't alter the cron schedule. To re-evaluate the variable, set the workflow to Inactive and then back to Active again.

Why there are six asterisks in the Cron expression

The sixth asterisk in the Cron expression represents seconds. Setting this is optional. The node will execute even if you don't set the value for seconds.

(*) * * * * *
(second) minute hour day-of-month month day-of-week (Sun-Sat)

Templates and examples

Browse Schedule Trigger integration templates, or search all templates

For common questions or issues and suggested solutions, refer to Common Issues.


Xero node

URL: llms-txt#xero-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Xero node to automate work in Xero, and integrate Xero with other applications. n8n has built-in support for a wide range of Xero features, including creating, updating, and getting contacts and invoices.

On this page, you'll find a list of operations the Xero node supports and links to more resources.

Refer to Xero credentials for guidance on setting up authentication.

  • Contact
    • Create a contact
    • Get a contact
    • Get all contacts
    • Update a contact
  • Invoice
    • Create a invoice
    • Get a invoice
    • Get all invoices
    • Update a invoice

Templates and examples

Get invoices from Xero

View template details

Integrate Xero with FileMaker using Webhooks

View template details

Automate Invoice Processing with Gmail, OCR.space, Slack & Xero

View template details

Browse Xero integration templates, or search all templates

Refer to Xero's API documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Google Sheets Trigger node common issues

URL: llms-txt#google-sheets-trigger-node-common-issues

Contents:

  • Stuck waiting for trigger event
  • Date and time columns are rendering as numbers

Here are some common errors and issues with the Google Sheets Trigger node and steps to resolve or troubleshoot them.

Stuck waiting for trigger event

When testing the Google Sheets Trigger node with the Execute step or Execute workflow buttons, the execution may appear stuck and unable to stop listening for events. If this occurs, you may need to exit the workflow and open it again to reset the canvas.

Stuck listening events often occur due to issues with your network configuration outside of n8n. Specifically, this behavior often occurs when you run n8n behind a reverse proxy without configuring websocket proxying.

To resolve this issue, check your reverse proxy configuration (Nginx, Caddy, Apache HTTP Server, Traefik, etc.) to enable websocket support.

Date and time columns are rendering as numbers

Google Sheets can render dates and times a few different ways.

The serial number format, popularized by Lotus 1-2-3 and used by many types of spreadsheet software, represents dates as a decimal number. The whole number component (the part left of the decimal) represents the number of days since December 30, 1899. The decimal portion (the part right of the decimal) represents time as a portion of a 24-hour period (for example, .5 represents noon).

To use a different format for date and time values, adjust the format in your Google Sheet Trigger node. This is available when Trigger On is set to Row Added:

  1. Open the Google Sheet Trigger node on your canvas.
  2. Select Add option.
  3. Select DateTime Render.
  4. Change DateTime Render to Formatted String.

The Google Sheets Trigger node will now format date, time, datetime, and duration fields as strings according to their number format.

The number format depends on the spreadsheet's locale settings. You can change the locale by opening the spreadsheet and selecting File > Settings. In the General tab, set Locale to your preferred locale. Select Save settings to apply the change.


Configuration

URL: llms-txt#configuration

Contents:

  • Set environment variables by command line
    • npm
    • Docker
  • Docker Compose file
  • Keeping sensitive data in separate files

You can change n8n's settings using environment variables. For a full list of available configurations see Environment Variables.

Set environment variables by command line

For npm, set your desired environment variables in terminal. The command depends on your command line.

In Docker you can use the -e flag from the command line:

Docker Compose file

In Docker, you can set your environment variables in the n8n: environment: element of your docker-compose.yaml file.
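
A minimal sketch of that element (the variables shown are examples):

services:
  n8n:
    environment:
      - GENERIC_TIMEZONE=Europe/Berlin
      - N8N_TEMPLATES_ENABLED=false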

Keeping sensitive data in separate files

You can append _FILE to individual environment variables to provide their configuration in a separate file, enabling you to avoid passing sensitive details using environment variables. n8n loads the data from the file with the given name, making it possible to load data from Docker-Secrets and Kubernetes-Secrets.

Refer to Environment variables for details on each variable.

While most environment variables can use the _FILE suffix, it's more beneficial for sensitive data such as credentials and database configuration. Here are some examples:
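
For instance, a database password could be read from a file instead of being passed directly (this sketch assumes a Docker secret mounted at /run/secrets/postgres_password):

docker run -it --rm \
 --name n8n \
 -p 5678:5678 \
 -e DB_POSTGRESDB_PASSWORD_FILE=/run/secrets/postgres_password \
 docker.n8n.io/n8nio/n8n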

Examples:

Example 1 (unknown):

export <variable>=<value>

Example 2 (unknown):

set <variable>=<value>

Example 3 (unknown):

$env:<variable>=<value>

Example 4 (unknown):

docker run -it --rm \
 --name n8n \
 -p 5678:5678 \
 -e N8N_TEMPLATES_ENABLED="false" \
 docker.n8n.io/n8nio/n8n

Get the static data of the node

URL: llms-txt#get-the-static-data-of-the-node

nodeStaticData = _getWorkflowStaticData('node')


AWS SES node

URL: llms-txt#aws-ses-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the AWS SES node to automate work in AWS SES, and integrate AWS SES with other applications. n8n has built-in support for a wide range of AWS SES features, including creating, getting, deleting, sending, updating, and adding templates and emails.

On this page, you'll find a list of operations the AWS SES node supports and links to more resources.

Refer to AWS SES credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Custom Verification Email
    • Create a new custom verification email template
    • Delete an existing custom verification email template
    • Get the custom email verification template
    • Get all the existing custom verification email templates for your account
    • Add an email address to the list of identities
    • Update an existing custom verification email template.
  • Email
    • Send
    • Send Template
  • Template
    • Create a template
    • Delete a template
    • Get a template
    • Get all templates
    • Update a template

Templates and examples

Create screenshots with uProc, save to Dropbox and send by email

View template details

Send an email using AWS SES

View template details

Auto-Notify on New Major n8n Releases via RSS, Email & Telegram

View template details

Browse AWS SES integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Facebook Trigger Link object

URL: llms-txt#facebook-trigger-link-object

Contents:

  • Trigger configuration
  • Related resources

Use this object to receive updates about links for rich previews by an external provider. Refer to Facebook Trigger for more information on the trigger itself.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Facebook Trigger integrations page.

Trigger configuration

To configure the trigger with this Object:

  1. Select the Credential to connect with. Select an existing or create a new Facebook App credential.
  2. Enter the APP ID of the app connected to your credential. Refer to the Facebook App credential documentation for more information.
  3. Select Link as the Object.
  4. Field Names or IDs: By default, the node will trigger on all the available events using the * wildcard filter. If you'd like to limit the events, use the X to remove the star and use the dropdown or an expression to select the updates you're interested in.
  5. In Options, turn on the Include Values toggle. This Object type fails if this option isn't enabled.

Refer to Meta's Links Workplace API reference for more information.


IMAP credentials

URL: llms-txt#imap-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using user account
    • Provider instructions
    • My provider isn't listed

You can use these credentials to authenticate the following nodes:

  • Email Trigger (IMAP)

Prerequisites

Create an email account on a service with IMAP support.

Supported authentication methods

  • User account

Related resources

Internet Message Access Protocol (IMAP) is a standard protocol for receiving email. Most email providers offer instructions on setting up their service with IMAP; refer to your provider's IMAP instructions.

Using user account

To configure this credential, you'll need:

  • A User name: The email address you're retrieving email for.
  • A Password: Either the password you use to check email or an app password. Your provider will tell you whether to use your own password or to generate an app password.
  • A Host: The IMAP host address for your email provider, often formatted as imap.<provider>.com. Check with your provider.
  • A Port number: The default is port 993. Use this port unless your provider or email administrator tells you to use something different.

Choose whether to use SSL/TLS and whether to Allow Self-Signed Certificates.

Provider instructions

Refer to the quickstart guides for these common email providers:

  • Gmail
  • Outlook.com
  • Yahoo

My provider isn't listed

If your email provider isn't listed here, search for their IMAP settings or IMAP instructions.


Release notes pre 1.0

URL: llms-txt#release-notes-pre-1.0

Contents:

Features and bug fixes for n8n before the release of 1.0.0.

You can also view the Releases in the GitHub repository.

Latest and Next versions

n8n releases a new minor version most weeks. The latest version is for production use. next is the most recent release and should be treated as a beta: it may be unstable. To report issues, use the forum.

Current latest: 1.118.2
Current next: 1.119.0

The steps to update your n8n depend on which n8n platform you use. Refer to the documentation for your n8n:

Semantic versioning in n8n

n8n uses semantic versioning. All version numbers are in the format MAJOR.MINOR.PATCH. Version numbers increment as follows:

  • MAJOR version when making incompatible changes which can require user action.
  • MINOR version when adding functionality in a backward-compatible manner.
  • PATCH version when making backward-compatible bug fixes.

View the commits for this version.
Release date: 2023-08-17

This is a bug fix release.

For full release details, refer to Releases on GitHub.

Jordan Hall
Xavier Calland

View the commits for this version.
Release date: 2023-07-18

This is a bug fix release.

For full release details, refer to Releases on GitHub.

Romain Dunand
noctarius aka Christoph Engelbert

View the commits for this version.
Release date: 2023-07-14

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-07-12

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-07-05

This release contains new nodes, node enhancements, and bug fixes.

For full release details, refer to Releases on GitHub.

This release includes a crowd.dev node and crowd.dev Trigger node. crowd.dev is a tool to help you understand who is engaging with your open source project.

crowd.dev node documentation.

Alberto Pasqualetto
perseus-algol
Romeo Balta
ZergRael

View the commits for this version.
Release date: 2023-07-05

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-06-28

This release contains new features, new nodes, node enhancements, and bug fixes.

This version is (as of 4th July 2023) considered unstable. n8n recommends against upgrading.

For full release details, refer to Releases on GitHub.

Marten Steketee
Sandra Ashipala

View the commits for this version.
Release date: 2023-06-22

This release contains new features, new nodes, node enhancements, and bug fixes.

This version is (as of 4th July 2023) considered unstable. n8n recommends upgrading directly to 0.234.1.

Irreversible database migration

This version contains a database migration that changes credential and workflow IDs to use nanoId strings. This migration may take a while to complete in some environments. The change doesn't break anything that uses the older numeric IDs.

If you upgrade to 0.234.0, you can't roll back to an earlier version.

For full release details, refer to Releases on GitHub.

The Debug Helper node can be used to trigger different error types or generate random datasets to help test n8n workflows.

Debug Helper node documentation.

View the commits for this version.
Release date: 2023-06-19

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-06-14

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-06-07

This release contains new features, new nodes, node enhancements, and bug fixes.

For full release details, refer to Releases on GitHub.

This release includes a new trigger node for Postgres, which allows you to listen to events, as well as listen to custom channels. Refer to Postgres Trigger for more information.

View the commits for this version.
Release date: 2023-06-17

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-06-14

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-06-06

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-05-31

This release contains bug fixes and new features.

For full release details, refer to Releases on GitHub.

Notable new features.

Resource mapper UI component

This release includes a new UI component, the resource mapper. This component is useful for node creators. If your node does insert, update, or upsert operations, you need to send data from the node in a format supported by the service you're integrating with. Often it's necessary to use a Set node before a node that sends data, to get the data to match the schema of the service you're connecting to. The resource mapper UI component provides a way to get data into the required format directly within the node.

Refer to Node user interface elements | Resource mapper for guidance for node builders.

View the commits for this version.
Release date: 2023-06-05

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-05-25

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-05-25

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-05-24

This release contains new features, new nodes, node enhancements, and bug fixes.

For full release details, refer to Releases on GitHub.

Save metadata for workflow executions. You can then search by this data in the Executions list.

Execution Data node documentation.

The LDAP node allows you to interact with your LDAP servers from your n8n workflows.

LDAP node documentation.

Integrate n8n with LoneScale, a buying intents data platform.

LoneScale node documentation.

Bram Kn
pemontto
Yann Aleman

View the commits for this version.
Release date: 2023-05-17

This release contains bug fixes, improves UI copy and error messages in some nodes, and other node enhancements.

For full release details, refer to Releases on GitHub.

Node enhancements

The Google Ads node now supports v13.

View the commits for this version.
Release date: 2023-05-15

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-05-11

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-05-11

This release contains new features, node enhancements, and bug fixes.

For full release details, refer to Releases on GitHub.

This release introduces the npm node. This is a new core node. It provides a way to query an npm registry within your workflow.

Adam Charnock

View the commits for this version.
Release date: 2023-05-15

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-05-03

This release contains new features, node enhancements, and bug fixes.

For full release details, refer to Releases on GitHub.

Node enhancements

  • An overhaul of the Microsoft Excel 365 node to improve the UI and make it easier to configure, improve error handling, and fix issues.

This release deprecates the following:

  • The EXECUTIONS_PROCESS environment variable.
  • Running n8n in own mode. Main mode is now the default. Use Queue mode if you need full execution isolation.
  • The WEBHOOK_TUNNEL_URL flag. Replaced by WEBHOOK_URL.
  • Support for MySQL and MariaDB as n8n backend databases. n8n will remove support completely in version 1.0. n8n recommends using PostgreSQL instead.

View the commits for this version.
Release date: 2023-05-03

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-05-02

This is a bug fix release.

For full release details, refer to Releases on GitHub.

View the commits for this version.
Release date: 2023-04-26

This release contains new features, node enhancements, and bug fixes.

Please note that this version contains a breaking change to extractDomain and isDomain. You can read more about it here.

For full release details, refer to Releases on GitHub.

  • A new command to get information about licenses for self-hosted users.

Node enhancements

  • Nodes that use SQL, such as the Postgres node, now have a better SQL editor for writing custom queries.
  • An overhaul of the Google BigQuery node to support executing queries, improve the UI and make it easier to configure, improve error handling, and fix issues.

View the commits for this version.
Release date: 2023-04-25

This is a bug fix release.

  • Core: Upgrade google-timezones-json to use the correct timezone for Sao Paulo.
  • Code Node: Update vm2 to address CVE-2023-30547.

View the commits for this version.
Release date: 2023-04-20

This is a bug fix release.

  • Editor: Clean up demo and template callouts from workflows page.
  • Editor: Fix memory leak in Node Detail View by correctly unsubscribing from event buses.
  • Editor: Settings sidebar should disconnect from push when navigating away.
  • Notion Node: Update credential test to not require user permissions.

View the commits for this version.
Release date: 2023-04-19

This release introduces Variables. You can now create variables that allow you to store and reuse values in n8n workflows. This is the first phase of a larger project to support Environments in n8n.

  • Core: Add support for Google Service account authentication in the HTTP Request node.

  • GitLab Node: Add Additional Parameters for the file list operation.

  • MySQL Node: This node has been overhauled.

  • Core: Fix broken API permissions in public API.

  • Core: Fix paired item returning wrong data.

  • Core: Improve SAML connection test result views.

  • Core: Make getExecutionId available on all node types.

  • Core: Skip SAML onboarding for users with first- and lastname.

  • Editor: Add padding to prepend input.

  • Editor: Clean up demo/video experiment.

  • Editor: Enterprise features missing with user management.

  • Editor: Fix moving canvas on middle click preventing lasso selection.

  • Editor: Make sure to redirect to blank canvas after personalisation modal.

  • Editor: Fix an issue that was preventing typing certain characters in the UI on devices with touchscreen.

  • Editor: Fix n8n-checkbox alignment.

  • Code Node: Handle user code returning null and undefined.

  • GitHub Trigger Node: Remove content_reference event.

  • Google Sheets Trigger Node: Return actual error message.

  • HTTP Request Node: Fix itemIndex in HTTP Request errors.

  • NocoDB Node: Fix for updating or deleting rows with not default primary keys.

  • OpenAI Node: Update models to only show those supported.

  • OpenAI Node: Update OpenAI Text Moderate input placeholder text.

Bram Kn
Eddy Hernandez
Filipe Dobreira
Jimw383

View the commits for this version.
Release date: 2023-04-24

This is a bug fix release.

  • Core: Upgrade google-timezones-json to use the correct timezone for Sao Paulo.
  • Code Node: Update vm2 to address CVE-2023-30547.

View the commits for this version.
Release date: 2023-04-20

This is a bug fix release.

  • Core: Fix paired item returning wrong data.
  • Core: Make getExecutionId available on all node types.
  • Editor: Fix memory leak in Node Detail View by correctly unsubscribing from event buses.
  • Editor: Fix moving canvas on middle click preventing lasso selection.
  • Editor: Settings sidebar should disconnect from push when navigating away.
  • Google Sheets Trigger Node: Return actual error message.
  • HTTP Request Node: Fix itemIndex in HTTP Request errors.
  • Notion Node: Update credential test to not require user permissions.

Filipe Dobreira

View the commits for this version.
Release date: 2023-04-14

This is a bug fix release.

  • Core: Fix broken API permissions in public API.
  • Editor: Fix an issue that was preventing typing certain characters in the UI on devices with touchscreen.

View the commits for this version.
Release date: 2023-04-12

This release contains a new node, updates, and bug fixes.

This release introduces the TOTP node. This is a new core node. It provides a way to generate a TOTP (time-based one-time password) within your workflow.

  • Code Node: Update vm2 to address CVE-2023-29017.
  • Core: App shouldn't crash with a custom REST endpoint.
  • Core: Do not execute workflowExecuteBefore hook when resuming executions from a waiting state.
  • Core: Fix issue where sub workflows would display as running forever after failure to start.
  • Core: Update xml2js to address CVE-2023-0842.
  • Editor: Drop mergeDeep in favor of lodash merge.
  • HTTP Request Node: Restore detailed error message.

Loganaden Velvindron

View the commits for this version.
Release date: 2023-04-05

This release contains new features and bug fixes.

Please note that this version contains a breaking change. The minimum Node.js version is now v16. You can read more about it here.

  • Core: Convert eventBus controller to decorator style and improve permissions.
  • Core: Prevent non-owner password resets when SAML is enabled (this is preparation for an upcoming feature).
  • Core: Read ephemeral license from environment and clean up ee flags.
  • Editor: Allow tab to accept completion.
  • Editor: Enable saving workflow when node details view is open.
  • Editor: SSO onboarding (this is preparation for an upcoming feature).
  • Editor: SSO setup (this is preparation for an upcoming feature).

Node enhancements

  • Filter Node: Show discarded items.

  • HTTP Request Node: Follow redirects by default.

  • Postgres Node: Overhaul node.

  • ServiceNow Node: Add support for work notes when updating an incident.

  • SSH Node: Hide the private key within the SSH credential.

  • Add droppable state for booleans when mapping.

  • Compare Datasets Node: Fuzzy compare not comparing keys missing in one of the inputs.

  • Compare Datasets Node: Fix support for dot notation in skip fields.

  • Core: Deactivate active workflows during import.

  • Core: Stop marking duplicates as circular references in jsonStringify.

  • Core: Stop using util.types.isProxy for tracking of augmented objects.

  • Core: Fix curl import error when no data.

  • Core: Handle Date and RegExp correctly in jsonStringify.

  • Core: Handle Date and RegExp objects in augmentObject.

  • Core: Prevent augmentObject from creating infinitely deep proxies.

  • Core: Service account private key as a password field.

  • Core: Update lock file.

  • Core: Waiting workflows not stopping.

  • Date & Time Node: Add info box at top of date and time explaining expressions.

  • Date & Time Node: Convert Luxon DateTime object to ISO.

  • Editor: Add $if, $min, $max to root expression autocomplete.

  • Editor: Curb overeager item access linting.

  • Editor: Disable Grammarly in expression editors.

  • Editor: Disable password reset on desktop with no user management.

  • Editor: Fix connection lost hover text not showing.

  • Editor: Fix issue preventing execution preview loading when in an Iframe.

  • Editor: Fix mapping with special characters.

  • Editor: Prevent error from showing-up when duplicating unsaved workflow.

  • Editor: Prevent NDV schema view pagination.

  • Editor: Support backspacing with modifier key.

  • Google Sheets Node: Fix insertOrUpdate cell update with object.

  • HTML Extract Node: Support for dot notation in JSON property.

  • HTTP Request Node: Fix AWS credentials to stop removing URL parameters for STS.

  • HTTP Request Node: Refresh token properly on never fail option.

  • HTTP Request Node: Support for dot notation in JSON body.

  • LinkedIn Node: Update the version of the API.

  • Redis Node: Fix issue with hash set not working as expected.

View the commits for this version.
Release date: 2023-04-14

This is a bug fix release.

  • Core: Fix broken API permissions in public API.
  • Editor: Fix an issue that was preventing typing certain characters in the UI on devices with touchscreen.

View the commits for this version.
Release date: 2023-04-11

This is a bug fix release.

  • Code node: Update vm2 to address CVE-2023-29017.
  • Core: Update xml2js to address CVE-2023-0842.

Loganaden Velvindron

View the commits for this version.
Release date: 2023-04-04

This is a bug fix release.

  • AWS SNS Node: Fix an issue with messages failing to send if they contain certain characters.
  • Core: augmentObject should clone Buffer/Uint8Array instead of wrapping them in a proxy.
  • Core: augmentObject should use existing property descriptors whenever possible.
  • Core: Fix the issue of nodes not loading when run using npx.
  • Core: Improve Axios error handling in nodes.
  • Core: Password reset should pass in the correct values to external hooks.
  • Core: Prevent augmentObject from creating infinitely deep proxies.
  • Core: Use table-prefixes in queries in import commands.
  • Editor: Fix focused state in Code node editor.
  • Editor: Fix loading executions in long execution list.
  • Editor: Show correct status on canceled executions.
  • Gmail Node: Gmail Luxon object support, fix for timestamp.
  • HTTP Request Node: Detect mime-type from streaming responses.
  • HubSpot Trigger Node: Developer API key is required for webhooks.
  • Set Node: Convert string to number.

View the commits for this version.
Release date: 2023-03-30

This release contains new features, including custom filters for the executions list, and a new node to filter items in your workflows.

Upgrade directly to 0.222.1.

This release introduces improvements to the execution lists. You can now save Custom execution data, and use it to filter both the All executions and Single workflow executions lists.

  • Add test overrides.
  • Core: Improve LDAP/SAML toggle and tests.
  • Core: Limit user invites when SAML is enabled.
  • Core: Make OAuth2 error handling consistent with success handling.
  • Editor: Fix ResourceLocator dropdown style.

This release introduces the Filter node. The node allows you to filter items based on a condition. If the item meets the condition, the Filter node passes it on to the next node in the Filter node output. If the item doesn't meet the condition, the Filter node omits the item from its output.

  • Core: Assign properties.success earlier to set executionStatus correctly.
  • Core: Don't mark duplicates as circular references in jsonStringify.
  • Core: Don't use util.types.isProxy for tracking of augmented objects.
  • Core: Ensure that all non-lazy-loaded community nodes get post-processed correctly.
  • Core: Force-upgrade decode-uri-component to address CVE-2022-38900.
  • Core: Force-upgrade http-cache-semantics to address CVE-2022-25881.
  • Core: Handle Date and RegExp correctly in jsonStringify.
  • Core: Handle Date and RegExp objects in augmentObject.
  • Core: Improve Axios error handling in nodes.
  • Core: Improve community nodes loading.
  • Core: Initialize queue in the webhook server as well.
  • Core: Persist CurrentAuthenticationMethod setting change.
  • Core: Remove circular references from Code and push message.
  • Core: Require authentication on icons and nodes/credentials types static files.
  • Core: Return SAML service provider URls with configuration.
  • Core: Service account private key should display as a password field.
  • Core: Upgrade Luxon to address CVE-2023-22467.
  • Core: Upgrade simple-git to address CVE-2022-25912.
  • Core: Upgrade SQLite3 to address CVE-2022-43441.
  • Core: Upgrade Convict to address CVE-2023-0163.
  • Core: Waiting workflows not stopping.
  • Editor: Fix connection lost hover text not showing.
  • Editor: Fix issue preventing execution preview loading when in an iframe.
  • Editor: Use credentials when fetching node and credential types.
  • Google Sheets Node: Fix insertOrUpdate cell update with object.
  • HTTP Request Node: Add streaming to binary response.
  • HTTP Request Node: Fix AWS credentials to automatically deconstruct the URL.
  • HTTP Request Node: Fix AWS credentials to stop removing URL parameters for STS.
  • Split In Batches Node: Roll back changes in v1 and create v2.
  • Update PostHog no-capture.

Manish Dhanwal

View the commits for this version.
Release date: 2023-04-11

This is a bug fix release.

  • Code node: Update vm2 to address CVE-2023-29017.
  • Core: Update xml2js to address CVE-2023-0842.

Loganaden Velvindron

View the commits for this version.
Release date: 2023-03-24

This is a bug fix release. It fixes an issue with properties.success that was causing executionStatus to sometimes be incorrect.

View the commits for this version.
Release date: 2023-03-23

This is a bug fix release. It ensures the job queue is initiated before starting the webhook server.

View the commits for this version.
Release date: 2023-03-23

  • Core: n8n now augments data rather than copying it in the Code node. This is a performance improvement.
  • Editor: you can now move the canvas by holding Space and dragging with the mouse, or by holding the middle mouse button and dragging.
  • Editor: add authentication type recommendations in the credentials modal.
  • Editor: add the SSO login button.

This release adds a node for QuickChart, an open source chart generation tool.

  • Core: ensure n8n calls available error workflows in main mode recovery.
  • Core: fix telemetry execution status for manual workflows executions.
  • Core: return SAML attributes after connection test.
  • Editor: disable mapping tooltip for display modes that don't support mapping.
  • Editor: fix execution list item selection.
  • Editor: fix for large notifications being cut off.
  • Editor: fix redo in code and expression editor.
  • Editor: fix the canvas node distance when automatically injecting manual trigger.
  • HTTP Request Node: fix AWS credentials to automatically deconstruct the URL.
  • Split In Batches Node: roll back changes in v1 and create v2.

View the commits for this version.
Release date: 2023-03-22

This is a bug fix release. It reverts changes to version 1 of the Split In Batches node, and creates a version 2 containing the updates.

View the commits for this version.
Release date: 2023-03-16

This release adds schema view to the node output panel, and includes node enhancements and bug fixes.

  • Core: improve SAML connection test.
  • Editor: add basic Datatable and Pagination components.
  • Editor: add support for schema view in the NDV output.
  • Editor: don't show actions panel for single-action nodes.

Node enhancements

  • Item Lists Node: update actions text.

  • OpenAI Node: add support for GPT4 on chat completion.

  • Split In Batches Node: make it easier to combine processed data.

  • Core: initialize license and LDAP in the correct order.

  • Editor: display correct error message for $env access.

  • Editor: fix autocomplete for complex expressions.

  • Editor: fix owner set-up checkbox wording.

  • Editor: properly handle mapping of dragged expression if it contains hyphen.

  • Metabase Node: fix issue with question results not correctly being returned.

View the commits for this version.
Release date: 2023-03-10

This is a bug fix release. It resolves an issue with the HTTP Request node by removing the streaming response.

View the commits for this version.
Release date: 2023-03-09

  • Core: add advancedFilters feature flag.
  • Core: add SAML post and test endpoints.
  • Core: add SAML XML validation.
  • Core: limit user changes when SAML is enabled.
  • Core: refactor and add SAML preferences for service provider instance.
  • Editor: don't automatically add the manual trigger when the user adds another node.
  • Editor: redirect users to canvas if they don't have any workflows.

Node enhancements

  • Cal Trigger Node: update to support v2 webhooks.

  • HTTP Request Node: move from binary buffer to binary streaming.

  • Mattermost Node: add self signed certificate support.

  • Microsoft SQL Node: add support for self signed certificates.

  • Mindee Node: add support for v4 API.

  • Slack Node: move from binary buffer to binary streaming.

  • Core: allow serving icons for custom nodes with npm scoped names.

  • Core: rename advancedFilters to advancedExecutionFilters.

  • Editor: fix ElButton overrides.

  • Editor: only fetch new versions at app launch.

  • Fetch credentials on workflows view to include in duplicated workflows.

  • Fix color discrepancies for executions list items.

  • OpenAI Node: fix issue with expressions not working with chat complete.

  • OpenAI Node: simplify code.

Syed Ali Shahbaz

View the commits for this version.
Release date: 2023-03-02

This release contains node enhancements, bug fixes, and new features that lay groundwork for upcoming releases, along with some UX improvements.

  • Add distribution test tracking.
  • Add events to enable onboarding checklist.
  • Core: add SAML login setup (for upcoming feature).
  • Core: add SAML settings and consolidate LDAP under SSO (for upcoming feature).
  • Editor: add missing documentation to autocomplete items for inline code editor.
  • Editor: Show parameter hint on multiline inputs.

Node enhancements

  • JIRA node: support binary streaming for very large binary files.

  • OpenAI node: add support for ChatGPT.

  • Telegram node: add parse mode option to Send Document operation.

  • Core: fix execution pruning queries.

  • Core: fix filtering workflow by tags.

  • Core: revert isPending check on the user entity.

  • Fix issues with nodes missing in nodes panel.

  • Fix mapping paths when appending to empty expression.

  • Item Lists Node: tweak item list summarize field naming.

  • Prevent executions from displaying as running forever.

  • Show Execute Workflow node in the nodes panel.

  • Show RabbitMQ node in the nodes panel.

  • Stop showing mapping hint after mapping.

View the commits for this version.
Release date: 2023-02-27

This is a bug fix release.

  • Core: fix issue with execution pruning queries.
  • Core: fix for workflow filtering by tag.
  • Core: revert isPending check on the user entity.

View the commits for this version.
Release date: 2023-02-24

This is a bug fix release.

Prevent executions appearing to run forever.

View the commits for this version.
Release date: 2023-02-23

This release contains new features and bug fixes. It includes improvements to the nodes panel and executions list. It also deprecates the Read Binary File node.

  • Add new event hooks to support telemetry around the new onboarding experience.

  • Update nodes to set required path type.

  • Core: add configurable execution history limit. Use this to improve performance when self-hosting. Refer to Execution Data | Enable data pruning for more information.

  • Core: add execution runData recovery and status field. This allows us to show execution statuses on the Executions list.

  • Core: add SAML feature flag. This is preparatory for an upcoming feature.

  • Editor: improvements to the nodes panel search. When searching in root view, n8n now displays results from both trigger and regular nodes. When searching in a category view, n8n shows results from the category, and also suggests results from other categories.

  • Hide sensitive value in authentication header credentials and authentication query credentials.

  • Support feature flag evaluation server side.

  • Deprecate the Read Binary File node. Use the Read Binary Files node instead.

  • Baserow Node: fix issue with Get All not correctly using filters.

  • Compare Datasets Node: UI tweaks and fixes.

  • Core: don't allow arbitrary path traversal in BinaryDataManager.

  • Core: don't allow arbitrary path traversal in the credential-translation endpoint.

  • Core: don't explicitly bypass authentication on URLs containing .svg.

  • Core: don't remove empty output connections arrays in PurgeInvalidWorkflowConnections migration.

  • Core: fix execution status filters.

  • Core: user update endpoint should only allow updating email, firstName, and lastName.

  • Discord Node: fix wrong error message being displayed.

  • Discourse Node: fix issue with credential test not working.

  • Editor: apply correct IRunExecutionData to finished workflow.

  • Editor: fix an issue with zoom and canvas nodes connections.

  • Editor: fix unexpected date rendering on front-end.

  • Editor: remove crashed status from filter.

  • Fix typo in error messages when a property doesn't exist.

  • Fixes an issue where saving an active workflow without triggers would cause n8n to get stuck.

  • Google Calendar Node: fix incorrect labels for start and end times when getting all events.

  • Postgres Node: fix for tables containing field named JSON.

  • AWS S3 Node: fix issue with get many buckets not outputting data.

The steps to update your n8n depend on which n8n platform you use. Refer to the documentation for your n8n:

View the commits for this version.
Release date: 2023-03-09

This is a bug fix release. It reverts the isPending check on the user entity, resolving an issue with displaying user options when user management is disabled.

View the commits for this version.
Release date: 2023-02-23

This is a bug fix release.

Core: don't remove empty output connections arrays in PurgeInvalidWorkflowConnections migration.

View the commits for this version.
Release date: 2023-03-14

This is a bug fix release. It reverts the isPending check on the user entity, resolving an issue with displaying user options when user management is disabled.

The steps to update your n8n depend on which n8n platform you use. Refer to the documentation for your n8n:

View the commits for this version.
Release date: 2023-02-23

This is a bug fix release. It contains an important security fix.

  • Core: don't allow arbitrary path traversal in BinaryDataManager.
  • Core: don't allow arbitrary path traversal in the credential-translation endpoint.
  • Core: don't explicitly bypass authentication on URLs containing .svg.
  • Core: don't remove empty output connections arrays in PurgeInvalidWorkflowConnections migration.
  • Core: the user update endpoint should only allow updating email, first name, and last name.

View the commits for this version.
Release date: 2023-03-14

This is a bug fix release. It reverts the isPending check on the user entity, resolving an issue with displaying user options when user management is disabled.

The steps to update your n8n depend on which n8n platform you use. Refer to the documentation for your n8n:

View the commits for this version.
Release date: 2023-02-23

This is a bug fix release. It contains an important security fix.

  • Core: don't allow arbitrary path traversal in BinaryDataManager.
  • Core: don't allow arbitrary path traversal in the credential-translation endpoint.
  • Core: don't explicitly bypass authentication on URLs containing .svg.
  • Core: don't remove empty output connections arrays in PurgeInvalidWorkflowConnections migration.
  • Core: the user update endpoint should only allow updating email, first name, and last name.

View the commits for this version.
Release date: 2023-02-21

This is a bug fix release.

  • Core: don't allow arbitrary path traversal in BinaryDataManager.
  • Core: don't allow arbitrary path traversal in the credential-translation endpoint.
  • Core: don't explicitly bypass auth on URLs containing .svg.
  • Core: user update endpoint should only allow updating email, firstName, and lastName.

View the commits for this version.
Release date: 2023-02-16

This release contains new features, node enhancements, and bug fixes.

  • Add workflow and credential sharing access e2e tests.
  • Editor: add correct credential owner contact details for readonly credentials.
  • Editor: add most important native properties and methods to autocomplete.
  • Editor: update to personalization survey v4.
  • Update telemetry API endpoints.

Node enhancements

  • GitHub node: update code to use resource locator component.

  • GitHub Trigger node: update code to use resource locator component.

  • Notion node: add option to set icons when creating pages or database pages.

  • Slack node: add support for manually inputting a channel name for channel operations.

  • Core: fix data transformation functions.

  • Core: remove unnecessary info from GET /workflows response.

  • Bubble node: fix pagination issue when returning all objects.

  • HTTP Request Node: ignore empty body when auto-detecting JSON.

feelgood-interface

View the commits for this version.
Release date: 2023-02-14

This is a bug fix release. It solves an issue that was causing webhooks to be removed when they shouldn't be.

View the commits for this version.
Release date: 2023-02-11

This is a bug fix release.

  • Core: fix issue causing worker and webhook service to close on start.
  • Core: handle versioned custom nodes correctly.

View the commits for this version.
Release date: 2023-02-10

This release contains new features, node enhancements, and bug fixes.

  • Refactor the n8n Desktop user management experience.
  • Core: add support for WebSockets as an alternative to server-sent events. This introduces a new way for n8n's backend to push changes to the UI. The default is still server-sent events. If you're experiencing issues with the UI not updating, try changing to WebSockets by setting the N8N_PUSH_BACKEND environment variable to websocket.
  • Editor: add autocomplete for objects.
  • Editor: add autocomplete for expressions to the HTML editor component.

Node enhancements

  • Edit Image node: add support for WebP image format.

  • HubSpot Trigger node: add conversation events.

  • Core: disable transactions on SQLite migrations that use PRAGMA foreign_keys.

  • Core: ensure expression extension doesn't fail with optional chaining.

  • Core: fix import command for workflows with old format (affects workflows created before user management was introduced).

  • Core: stop copying icons to cache.

  • Editor: prevent creation of input connections for nodes without input slot.

  • Error workflow now correctly checks for subworkflow permissions.

  • ActiveCampaign Node: fix additional fields not being sent when updating account contacts.

  • Linear Node: fix issue with Issue States not loading correctly.

  • MySQL migration parses database contents if necessary (fix for MariaDB).

Kirill

View the commits for this version.
Release date: 2023-02-09

This is a bug fix release.

Editor: prevent creation of input connections for nodes without input slot.

View the commits for this version.
Release date: 2023-02-06

This is a bug fix release.

  • Editor: correctly show OAuth reconnect button.
  • Editor: fix resolvable highlighting for HTML editor.

View the commits for this version.
Release date: 2023-02-06

This is a bug fix release. It also contains an overhaul of the Slack node.

Node enhancements

This release includes an overhaul of the Slack node, adding new operations and a better user interface.

  • Editor: fix an issue with mapping to empty expression input.
  • Editor: fix merge node connectors.
  • Editor: fix multiple-output endpoints success style after connection is detached.

View the commits for this version.
Release date: 2023-02-03

This release contains new features, node enhancements, and bug fixes. The expressions editor now supports autocomplete for some built in data transformation functions. The new features also include two of interest to node builders: a way to allow users to drag and drop data keys, and the new HTML editor component.

Please note that this version contains a breaking change to Luxon. You can read more about it here.

Autocomplete in the Expression editor

Data transformation functions now have autocomplete support in the Expression editor.

  • Core: export OpenAPI spec for external tools.
  • Core: set custom Cache-Control headers for static assets.
  • Core: simplify pagination in declarative node design.
  • Editor: support mapping keys with drag and drop. Any field with the hint Enter the field name as text should now support mapping a data key using drag and drop. Node builders can enable this in their own nodes. Refer to Creating nodes | UI elements for more information.
  • Editor: add the HTML editor component for use in parameters. This means node builders can now use the HTML editor that n8n uses in the HTML node as a UI component.
  • Editor: append expressions in fixed values when mapping to string and JSON inputs.
  • Editor: continue to show mapping tooltip after dismiss.
  • Editor: roll out schema view.

Node enhancements

  • FTP Node: stream binary data for uploads and downloads.

  • Notion Node: add support for image blocks.

  • OpenAI Node: add Frequency Penalty and Presence Penalty to the node options for the text resource.

  • Salesforce Node: add Has Opted Out Of Email field to lead resource options.

  • SSH Node: stream binary data for uploads and downloads.

  • Write Binary File Node: stream binary data for writes.

  • YouTube Node: switch upload operation over to streaming and resumable uploads API.

  • Add paired item to the most used nodes.

  • Core: fix OAuth2 client credentials not always working.

  • Core: fix populating of node custom API call options.

  • Core: fix value resolution in declarative node design.

  • Core: prevent shared user details being saved alongside execution data.

  • Core: revert custom API option injecting.

  • Editor: add SMTP info translation link slot.

  • Editor: change executions title to match menu.

  • Editor: fix JSON field completions while typing.

  • Editor: handling router errors when navigation is canceled by user.

  • Editor: set max width for executions list.

  • Editor: stop unsaved changes popup display when navigating away from an untouched workflow.

  • Editor: fix workflow executions view.

  • Invoice Ninja Node: fix line items not being correctly set for quotes and invoices.

  • Linear Node: fix pagination issue for get all issues.

  • Mailchimp Trigger Node: fix webhook recreation.

  • Prevent unnecessarily touching updatedAt when n8n starts.

  • Schedule Trigger Node: change scheduler behaviour for intervals days and hours.

  • Set Node: fix behaviour when selecting continueOnFail and pairedItem.

View the commits for this version.
Release date: 2023-01-27

This release introduces LDAP, and a new node for working with HTML in n8n. It also contains node enhancements and bug fixes.

This release introduces support for LDAP on Self-hosted Enterprise and Cloud Enterprise plans. Refer to LDAP for more information on this feature.

  • Simplify the Node Details View by moving authentication details to the Credentials modal.
  • Improve workflow list performance.

n8n has a new HTML node. This replaces the HTML Extract node, and adds new functionality to generate HTML templates.

Node enhancements

  • GitLab node: add file resource and operations.

  • JIRA Software node: introduce the resource locator component to improve UX.

  • Send Email node: this node has been overhauled.

  • Core: don't crash express app on unhandled rejected promises.

  • Core: handle missing binary metadata in download URLs.

  • Core: upsert (update and insert) credentials and workflows in the import: commands.

  • Core: validate numeric IDs in the public API.

  • Editor: don't request workflow data twice when opening a workflow.

  • Editor: execution list micro optimization.

  • Editor: fix node authentication options ordering and hiding options based on node version.

  • Editor: fix save modal appearing after duplicating a workflow.

  • Editor: prevent workflow execution list infinite no network error.

  • Extension being too eager and making calls when it shouldn't.

  • Google Drive Node: use the correct MIME type on converted downloads.

  • HelpScout Node: fix tag search not working when getting all conversations.

  • Notion (Beta) Node: fix create database page with multiple relation IDs not working.

  • Update Sign in with Google button to properly match design guidelines.

  • Devin Buhl

  • Sven Ziegler

View the commits for this version.
Release date: 2023-01-23

This release includes an overhaul of the Google Analytics node, and bug fixes.

Node enhancements

This release includes an overhaul of the Google Analytics node. This brings the node's code and components in line with n8n's latest node building styles, and adds support for GA4 properties.

  • Add schema to Postgres migrations.
  • Core: fix execute-once incoming data handling.
  • Core: fix expression extension miss-detection.
  • Core: fix onWorkflowPostExecute not being called.
  • Core: fix URL in error handling for the error Trigger.
  • Core: make pinned data with webhook responding on last node manual-only.
  • Editor: making parameter input components label configurable.
  • Editor: remove infinite loading in not found workflow level execution.
  • Linear Node: fix issue with single item not being returned.
  • Notion (Beta) Node: fix create database page fails if relation parameter is empty/undefined.

View the commits for this version.
Release date: 2023-01-19

This release contains enhancements to the Item Lists node, and bug fixes.

This release adds experimental support for more Prometheus metrics. Self-hosting users can configure Prometheus using environment variables.

Node enhancements

The Item Lists node now supports a Summarize operation. This acts similarly to generating pivot tables in Excel, allowing you to aggregate and compare data.

  • Core: revert a lint rule @typescript-eslint/prefer-nullish-coalescing.
  • Editor: allow special characters in node selector completion.
  • GitLab Node: update the credential test endpoint.
  • Gmail Trigger Node: resolve an issue that was preventing filter by labels from working.
  • HTTP Request Node: ensure node enforces the requirement for valid JSON input.
  • HTTP Request Node: convert responses to text for all formats, including JSON.

Sven Ziegler

View the commits for this version.
Release date: 2023-01-17

This release contains a bug fix for community nodes, and a new trigger node.

Google Sheets Trigger node

This release adds a new Google Sheets Trigger node. You can now start workflows in response to row changes or new rows in a Google Sheet.

Fixes an issue that was preventing users from installing community nodes.

View the commits for this version.
Release date: 2023-01-16

This is a bug fix release. It resolves major issues with 0.211.0.

Editor: suppress validation errors for freshly added nodes.

Node enhancements

  • Google Ads node: update the API version to 11.

  • Google Drive Trigger node: start using the resource locator component.

  • Build CLI to fix Postgres and MySQL test runs.

  • Extend date functions clobbering plus/minus.

  • Extension deep compare not quite working for some primitives.

  • Upgrade jsonwebtoken to address CVE-2022-23540.

View the commits for this version.
Release date: 2023-01-13

Don't use this version

Upgrade directly to 0.211.1.

  • Add demo experiment to help users activate.

  • Editor: Improvements to the Executions page.

  • Editor: Remove prevent-ndv-auto-open feature flag.

  • Editor: Update callout component design.

  • Add the expression extension framework.

  • Core: Fixes event message confirmations if no subscribers present.

  • Core: Remove threads package, rewrite log writer worker.

  • Core: Throw error in UI on expression referencing missing node but don't fail execution.

  • DB revert command shouldn't run full migrations before each revert.

  • Editor: Disable data pinning on multiple output node types.

  • Editor: Don't overwrite window.onerror in production.

  • Editor: Execution page bug fixes.

  • Editor: Fixes event bus test.

  • Editor: Hide data pinning discoverability tooltip in execution view.

  • Editor: Mapping tooltip dismiss.

  • Editor: Recover from unsaved finished execution.

  • Editor: Setting NDV session ID.

  • First/last being extended on proxy objects.

  • Handle memory issues gracefully.

  • PayPal Trigger Node: Omit verification in sandbox environment.

  • Report app startup and database migration errors to Sentry.

  • Run every database migration inside a transaction.

  • Upgrade class-validator to address CVE-2019-18413.

  • Zoom Node: Add notice about deprecation of Zoom JWT app support.

You may encounter errors when using the optional chaining operator in expressions. If this happens, avoid using the operator for now.

View the commits for this version.
Release date: 2023-01-09

Typeahead for expressions

When using expressions, n8n will now offer you suggestions as you type.

  • Core: fix crash of manual workflow executions for unsaved workflows.
  • Editor: omit pairedItem from proxy completions.
  • Editor: prevent refresh on submit in credential edit modal.
  • Google Sheets Node: fix for auto-range detection.
  • Read Binary File Node: don't crash the execution when the source file doesn't exist.
  • Remove anonymous ID from tracking calls.
  • Stop OOM crashes in Execution Data pruning.
  • Update links for user management and SMTP help.

View the commits for this version.
Release date: 2023-01-05

This is a bug fix release. It also contains a new feature to support user management without SMTP set up.

In earlier versions of self-hosted n8n, you needed SMTP set up on your n8n instance for user management to work. User management required SMTP to send invitation emails.

0.210.1 introduces an invite link, which you can copy and send to users manually. n8n still recommends setting up SMTP, as this is needed for password resets.

  • Google Sheets node: fix an issue that was causing append and update operations to fail for numeric values.
  • Resolve issues with external hooks.

View the commits for this version.
Release date: 2023-01-05

This release introduces two major new features: log streaming and security audits. It also contains node enhancements, bug fixes, and performance improvements.

This release introduces log streaming for users on Enterprise self-hosted plans and custom Cloud plans. Log streaming allows you to send events from n8n to your own logging tools. This allows you to manage your n8n monitoring in your own alerting and logging processes.

This release adds a security audit feature. You can now run a security audit on your n8n instance, to detect common security issues.

  • Core: add support for Redis 6+ ACLs system using username in queue mode. Add the QUEUE_BULL_REDIS_USERNAME environment variable.

Node enhancements

  • Compare Datasets node: add an option for fuzzy compare.

  • Apply credential overwrites recursively. This ensures that overwrites defined for a parent credential type also apply to all credentials extending it.

  • Core: enable full manual execution of a workflow using the error trigger.

  • Core: fix OAuth credential creation using the API.

  • Core: fix an issue with workflow lastUpdated field.

  • Editor: clear node creator and scrim on workspace reset.

  • Editor: fix an infinite loop while loading executions that aren't on the current executions list.

  • Editor: make node title non-editable in executions view.

  • Editor: prevent scrim on executable triggers.

  • Editor: support tabbing away from inline expression editor.

  • Fix executions bulk deletion.

  • Google Sheets Node: fix exception when no Values to Send are set.

  • Respond to Webhook Node: fix issue that caused the content-type header to be overwritten.

  • Slack Node: add missing channels:read OAuth2 scope.

Performance improvements

  • Lazy-load public API dependencies to reduce baseline memory usage.
  • Lazy-load queue mode and analytics dependencies.

Thomas S.

View the commits for this version.
Release date: 2022-12-28

This is primarily a bug fix release.

  • Editor: add sticky note without manual trigger.
  • Editor: display default missing value in table view as undefined.
  • Editor: fix displaying of some trigger nodes in the creator panel.
  • Editor: fix trigger node type identification on add to canvas.
  • Editor: add the usage and plans page to Desktop.

Editor: pressing = in an empty parameter input switches to expression mode.

View the commits for this version.
Release date: 2022-12-27

This is primarily a bug fix release.

  • Core: don't send credentials to browser console.
  • Core: permit a workflow user who isn't the owner to use their own credentials.
  • Editor: fix for loading executions that aren't on the current executions list.
  • Editor: make the tertiary button on the Usage page transparent.
  • Editor: update credential owner warning when sharing.

Editor: Improve UX for brace completion in the inline expressions editor.

Node enhancements

Webhook node: when you test the node by selecting Listen For Test Event and then dispatching a call to the webhook, n8n now runs only the Webhook node. Previously, n8n ran the entire workflow. You can still test the full workflow by selecting Execute Workflow and then dispatching a test call.

View the commits for this version.
Release date: 2022-12-23

This is a bug fix release.

  • Editor: ensure full tree on expression editor parse. This resolves an issue with the expressions editor cutting off results.
  • Fix automatic credential selection when credentials are shared.

Performance improvements

Improvements to the workflows list performance.

View the commits for this version.
Release date: 2022-12-22

This is a bug fix release.

  • Editor: fix for executions preview scroll load bug and wrong execution being displayed.
  • Editor: force parse on long expressions.
  • Editor: restore trigger to the nodes panel.
  • Nodes: AWS DynamoDB Node Fix issue pagination and simplify issue.
  • Nodes: fix DynamoDB node type issues.
  • Resolve an issue with credentials and workflows not being matched correctly due to incorrect typing.
  • Restore missing tags when retrieving a workflow.

Nathan Apter

View the commits for this version.
Release date: 2022-12-21

This release introduces workflow sharing, and changes to licensing and payment plans.

Workflow sharing

This release introduces workflow sharing for users on some plans. With workflow sharing, users can invite other users on the same n8n instance to use and edit their workflows. Refer to Workflow sharing for details.

  • Editor: Correctly display trigger nodes without actions and with related regular node in the "On App Events" category.
  • Fix stickies resize.
  • Hide trigger tooltip for nodes with static test output.
  • Keep expression when dropping mapped value.
  • Prevent keyboard shortcuts in expression editor modal.
  • Redirect home to workflows always.
  • Update mapping GIFs.
  • Upgrade amqplib to address CVE-2022-0686.
  • View option for binary-data shouldn't download the file on Chrome/Edge.

View the commits for this version.
Release date: 2022-12-19

This is a bug fix release.

  • Always retain original errors in the error chain on NodeOperationError.
  • BinaryDataManager should store metadata when saving from buffer.
  • Editor: fix for wrong execution data displayed in executions preview.
  • Pick up credential test functions from versioned nodes.

View the commits for this version.
Release date: 2022-12-16

This release introduces a new inline expressions editor, and a new node: OpenAI. It also contains updates and bug fixes.

Inline expression editor

You can now quickly write expressions inline in a node parameter. You can still choose to open the full expressions editor.

  • Add workflow sharing telemetry.
  • Core: allow hiding the usage page with environment variables (for an upcoming feature).
  • Editor: update UI copy for user management setup when sharing is disabled.
  • Editor: hide credentials password values.
  • Editor: set All workflows view as default view on the Workflows page.
  • Editor: update UI copy for workflow overwriting message.

This release adds an integration with OpenAI. Refer to the OpenAI node documentation for details.

Node enhancements

Send Email node: add support for a "Reply to" email address.

  • Core: fix for Google and Microsoft generic OAuth2 credentials.
  • Core: fix HTTP Digest Auth for responses without an opaque parameter.
  • Disqus node: fix thread parameter for "Get All Threads" operation.
  • Don't crash the server when Telemetry is blocked using DNS.
  • Editor: allow mapping onto expression editor with selection range.
  • Editor: don't show actions dialog for actionless triggers when selected using keyboard.
  • Editor: fix an issue where some node actions wouldn't select default parameters correctly.
  • Editor: fix typo in retry-button option "Retry with original workflow".
  • Update permission for showing workflow caller policy.
  • Update pnpm-lock to fix build.

Daemonxiao
Kirill
Ricardo Duarte

View the commits for this version.
Release date: 2022-12-13

This is a bug fix release. It resolves an issue with undo.

View the commits for this version.
Release date: 2022-12-12

This release adds support for undo/redo actions on the canvas, and includes bug fixes.

You can now undo and redo actions on the canvas.

Use ctrl/cmd + z to undo, ctrl/cmd + shift + z to redo.

Currently, n8n supports undo/redo for the following canvas actions:

  • Deleting connections

  • Import workflow (from file/from URL)

  • Disabling/enabling nodes

  • App integration actions are now displayed in the nodes pane.

  • Add sharing permissions info for workflow sharees.

  • Handle sharing features when the user skips instance owner setup.

  • Update the credential test error message for credential sharees.

  • Core: remove nodeGetter.

  • Core: Increase workflow reactivation max timeout to one day.

  • Core: Resolve an issue listing executions with Postgres.

  • Core: Remove foreign credentials when copying nodes or duplicating workflow.

  • Core: upgrade sse-channel to mitigate CVE-2019-10744.

  • Core: use license-sdk v1.6.1.

  • Editor: avoid adding Manual Trigger node when webhook node is added.

  • Editor: fix credential sharing issues handler when no matching ID or name.

  • Editor: fix for broken tab navigation.

  • Editor: schema view shows checkbox in case of empty data.

  • Editor: Stop returning UNKNOWN ERROR in the response if an actual error message is available.

  • Editor: update duplicate workflow action.

  • Move Binary Data Node: stringify objects before encoding them in MoveBinaryData.

  • Split In Batches Node: fix issue with pairedItem.

View the commits for this version.
Release date: 2022-12-06

This is a bug fix release.

  • Core: make expression resolution improvements.
  • Editor: schema unit test stub for Font Awesome icons.
  • Remove unnecessary console message.

View the commits for this version.
Release date: 2022-12-06

This release contains bug fixes, node enhancements, and a new node input view: schema view.

Schema view is a new node input view. It helps you browse the structure of your data, using the first input item.

  • Core: add workflow execution statistics.
  • Editor: add the alert design system component.
  • Editor: fix checkbox line height and make checkbox label clickable.
  • Nodes: add a message for read-only nodes.
  • Nodes: add a prompt to overwrite changes when concurrent editing occurs.

Node enhancements

KoBo Toolbox node: add support for the media file API.

  • Core: fix linter error.
  • Core: fix partial execution with pinned data on child node run.
  • Core: OAuth2 scopes now save.
  • Enable source-maps on WorkflowRunnerProcess in own mode.
  • Handle error when workflow doesn't exist or is inaccessible.
  • Make nodes.exclude and nodes.include work with lazy-loaded nodes.
  • Code Node: restore pairedItem to required n8n item keys.
  • Execute Workflow Node: update Execute Workflow node info notice text.
  • Gmail Trigger Node: fix an issue where the trigger missed some emails.
  • Local File Trigger Node: fix issue that causes a crash if the ignore field is empty.

Marcel
Yann Jouanique

View the commits for this version.
Release date: 2022-12-02

This release contains an overhaul of the expressions editor, node enhancements, and bug fixes.

Expressions editor usability overhaul

This release contains usability enhancements for the expressions editor. The editor now includes color signals to indicate when syntax is valid or invalid, and better error messages and tips.

Node enhancements

  • Facebook Graph API node: update to support API version 15.

  • Google Calendar node: introduce the resource locator component to help users retrieve calendar parameters.

  • Postmark Trigger node: update credentials so they can be used with the HTTP Request node (for custom API calls).

  • Todoist node: update to use API version 2.

  • Core: ensure executions list is properly filtered for all users.

  • Core: fix $items().length in Execute Once mode.

  • Core: mark binary data to be deleted when pruning executions.

  • Core: OAuth2 scope saved to database fix.

  • Editor: fix slots rendering of NodeCreator's NoResults component.

  • Editor: JSON view values can be mapped like keys.

  • AWS SNS Node: fix a pagination issue.

  • Google Sheets Node: fix exception if no matching rows are found.

  • Google Sheets Node: fix for append operation if no empty rows in sheet.

  • Microsoft Outlook Node: fix binary attachment upload.

  • Pipedrive Node: resolve properties not working.

  • Lazy load nodes for credentials testing.

  • Credential overwrites should take precedence over credential default values.

  • Remove background for resource ownership selector.

  • Update padding for resource filters dropdown.

  • Update size of select components in filters dropdown.

  • Update workflow save button type and design and share button type.

View the commits for this version.
Release date: 2022-11-24

This release contains performance enhancements and bug fixes.

  • Core: lazy-load nodes and credentials to reduce baseline memory usage.

  • Core: use longer stack traces when error reporting is enabled.

  • Dev: add credentials E2E test suite and page object.

  • Core: fix $items().length behavior in executeOnce mode.

  • Core: fix for unused imports.

  • Core: use CredentialsOverwrites when testing credentials.

  • Core: disable workflow locking due to issues.

  • Editor: fix for missing node connections in dev environment.

  • Editor: fix missing resource locator component.

  • Editor: prevent node-creator tabs from showing when toggled by CanvasAddButton.

  • Editor: table view column limit tooltip.

  • Editor: fix broken n8n-info-tip slots.

  • IF Node: fix "Is Empty" and "Is Not Empty" operation failures for date objects.

  • Remove redundant await in nodes API request functions without try/catch.

  • Schedule Trigger Node: fixes inconsistent behavior with cron and weekly intervals.

  • Workflow activation shouldn't crash if one of the credentials is invalid.

View the commits for this version.
Release date: 2022-11-18

This is a bug fix release. It resolves an issue with the Google Sheets node versioning.

View the commits for this version.
Release date: 2022-11-17

This release includes an overhaul of the Google Sheets node, as well as other new features, node enhancements, and bug fixes.

  • Add duplicate workflow error handler.
  • Add workflow data reset action.
  • Add credential runtime checks and prevent tampering during a manual run.

Node enhancements

  • Compare Datasets: UI copy changes to improve usability.

  • Google Sheets: n8n has overhauled this node, including improved lookup for document and sheet selection.

  • Notion (beta) node: use the resource locator component for database and page parameters.

  • Core: deduplicate error handling in nodes.

  • Editor: show the mapping hint again when a parameter is focused.

  • Editor: add Stop execution button to execution preview.

  • Editor: curb direct item access linting.

  • Editor: fix expression editor variable selector filter.

  • Editor: fix for execution retry dropdown not closing.

  • Editor: fix for logging error on user logout.

  • Editor: fix zero treated as missing value in resource locator.

  • Editor: hide pin data in production executions.

  • Editor: skip optional chaining operators in Code Node editor linting.

  • Editor: update to Expression/Fixed toggle - keep expression when switching to Fixed.

  • Editor: fix foreign credentials being shown for new nodes.

  • Editor: store copy of workflow in workflowsById to prevent node data bugs.

  • Editor: fix user redirect to signin bug.

View the commits for this version.
Release date: 2022-11-10

This is a bug fix release. It removes some error tracking.

View the commits for this version.
Release date: 2022-11-10

This release contains core product improvements and bug fixes.

  • API: report unhandled app crashes using Sentry.

  • API: set up error tracking using Sentry.

  • Core: Add ownership, sharing and credential details to GET /workflows in n8n's internal API.

  • Editor: when building nodes, you can now add a property with type notice to your credentials properties. This was previously available in nodes but not credentials. Refer to Node UI elements for more information. A minimal sketch of such a notice property appears after this list.

  • API: Don't use names for TypeORM connections.

  • Core: Fix manual execution of pinned trigger on main mode.

  • Core: Streamline multiple pinned triggers behavior.

  • Editor: Curb argument linting for $input.first() and $input.last()

  • Editor: Fix duplicate bug when new workflow is open.

  • Editor: Fix for incorrect execution saving indicator in executions view.

  • Editor: Fix for OAuth authorization.

  • Editor: Fix workflow activation from the Workflows view.

  • Editor: Fix workflow back button navigation.

  • Editor: Prevent adding of the start node when importing workflow in the demo mode.

  • Editor: Show string numbers and null properly in JSON view.

  • Editor: Switch CodeNodeEditor linter parser to esprima-next.

  • Editor: Tweak dragged mapping state.

  • Editor: Update workflow buttons spacings.

  • Editor: Use base path in workflow preview component URL.

  • HTTP Request Node: Show error cause in the output.

  • HTTP Request Node: Use the data in the Put Output in Field field.

  • HubSpot Node: Add notice to HubSpot credentials about API Key Sunset.

  • Notion Trigger (Beta) Node: Fix Notion trigger polling strategy.

  • Raindrop Node: Update access token URL.

  • SendInBlue Trigger Node: Fix typo in credential name.

  • Update E2E testing ENV variables.
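
As referenced above, here is a minimal, hypothetical sketch of a credential property using the notice type, written against the n8n-workflow package; the display text and property name are illustrative assumptions, not taken from an actual n8n credential.

```typescript
import { INodeProperties } from 'n8n-workflow';

// Hypothetical example: a `notice` property renders informational text in the
// credentials dialog rather than an input field.
const apiKeyNotice: INodeProperties = {
  displayName: 'API keys created before 2022 need to be regenerated in your account settings.',
  name: 'notice',
  type: 'notice',
  default: '',
};

export default apiKeyNotice;
```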

feelgood-interface
Ugo Bataillard

View the commits for this version.
Release date: 2022-11-02

This release contains workflow and node enhancements, and bug fixes.

  • Core: reimplement blocking workflow updates on interim changes.
  • Editor: block the UI in node details view when the workflow is listening for an event.
  • Performance improvements

Node enhancements

Venafi TLS Protect Cloud node: make issuing template depend on application.

  • Core: fix workflow hashing for MySQL.
  • Core: make deepCopy backward compatible.
  • Editor: ensure displayOptions received the value from the resource locator component.
  • Editor: disable the settings link in executions view for unsaved workflows.
  • Editor: ensure forms reliably save.
  • Editor: fix issues with interim updates in executions view.
  • Editor: fix for node creator search.
  • Editor: limit columns in table view to prevent the UI becoming unresponsive in the node details view.

View the commits for this version.
Release date: 2022-10-28

This is a bug fix release.

  • API: do not reset the auth cookie on every request to GET /login.
  • AWS SNS Trigger node: add missing jsonParse import.
  • Core: avoid callstack with circular dependencies.
  • Editor: resolve issues with the executions list auto-refresh, and with saving new workflows.
  • Editor: redirect the outdated /workflow path.
  • Editor: remove a filter that prevented display of running executions.

View the commits for this version.
Release date: 2022-10-27

This release contains improvements to the editor, node enhancements and bug fixes.

  • Core, editor: introduce workflow caller policy.
  • Core: block workflow update on interim change.
  • Editor: add a read-only state for nodes.
  • Editor: add execution previews using the new Executions tab in the node view.
  • Editor: improvements to node panel search.

Node enhancements

  • Airtable Trigger node: add the resource locator component.

  • HTTP Request node: add options for raw JSON headers and queries.

  • InvoiceNinja node: add support for V5.

  • Write Binary File node: add option to append to a file.

  • API: validate executions and workflow filter parameters.

  • Core: amend typing for jsonParse() options.

  • Core: fix predefinedCredentialType in node graph item.

  • Core: fix canvas node execution skipping parent nodes.

  • Core: fix single node execution failing in main mode.

  • Core: set JWT authentication token sameSite policy to lax.

  • Core: update to imports in helpers.

  • Editor: curb item method linting in single-item mode.

  • Editor: stop rendering expressions as HTML.

  • Email Trigger node: backport V2 mark-seen-after processing to V1.

  • Email Trigger node: improve connection handling and credentials.

  • HTTP Request node: fix sending previously selected credentials.

  • TheHive node: small fixes.

Bram Kn
Nicholas Penree

View the commits for this version.
Release date: 2022-10-21

This release includes new nodes, an improved workflow UI, performance improvements, and bug fixes.

New workflow experience

This release brings a collection of UI changes, aimed at improving the workflow experience for users. This includes:

  • Removing the Start node, and adding help to guide users to find a trigger node.

  • Improved node search.

  • New nodes: Manual Trigger and Execute Workflow Trigger.

  • Core: block workflow updates on interim changes.

  • Core: enable sending client credentials in the body of API calls.

  • Editor: add automatic credential selection for new nodes.

The Compare Datasets node helps you compare data from two input streams. You can find documentation for the new node here.

Execute Workflow Trigger node

The Execute Workflow Trigger starts a workflow in response to another workflow. You can find documentation for the new node here.

Manual Trigger node

The Manual Trigger allows you to start a workflow by clicking Execute Workflow, without any option to run it automatically. You can find documentation for the new node here.

Schedule Trigger node

This release introduces the Schedule Trigger node, replacing the Cron node. You can find documentation for the new node here.

Node enhancements

  • HubSpot node: you can now use your HubSpot credentials in the HTTP Request node to make a custom API call.

  • Rundeck node: you can now use your Rundeck credentials in the HTTP Request node to make a custom API call.

  • Editor: fix a hover bug in the bottom menu.

  • Editor: resolve performance issues when opening a node, or editing a code node, with a large amount of data.

  • Editor: ensure workflows always stop when clicking the stop button.

  • Editor: fix a bug that was causing text highlighting when mapping data in Firefox.

  • Editor: ensure correct linting in the Code node editor.

  • Editor: handle null values in table view.

  • Elasticsearch node: fix a pagination issue.

  • Google Drive node: fix typo.

  • HTTP Request node: avoid errors when a response doesn't provide a content type.

  • n8n node: fix a bug that was preventing the resource locator component from returning all items.

AndLLA
Nicholas Penree
vcrwr

View the commits for this version.
Release date: 2022-10-14

This release fixes a bug affecting scrolling through parameter lists.

View the commits for this version.
Release date: 2022-10-14

This is a bug fix release.

  • Editor: change the initial position of the Start node.
  • Editor: align JSON view properties with their values.
  • Editor: fix BASE_PATH for Vite dev mode.
  • Editor: fix data pinning success source.

Bram Kn

View the commits for this version.
Release date: 2022-10-14

Please note that this version contains breaking changes to the Merge node. You can read more about them here.

  • Editor: update the expressions display.
  • Editor: update the n8n-menu component.

This release introduces the Code node. This node replaces both the Function and Function Item nodes. Refer to the Code node documentation for more information.

Venafi TLS Protect Cloud Trigger node

Start a workflow in response to events in your Venafi Cloud service.

Node enhancements

  • Citrix ADC node: add Certificate Install operation.

  • Kafka node: add a Use key option for messages.

  • MySQL node: use the resource locator component for table parameters, making it easier for users to browse and select their database fields from within n8n.

  • Core, Editor: prevent overlap between running and pinning data.

  • Core: expression evaluation of processes now respects N8N_BLOCK_ENV_ACCESS_IN_NODE.

  • Editor: ensure the Axios base URL still works when hosted in a subfolder.

  • Editor: fixes for horizontal scrollbar rendering.

  • Editor: ensure the menu closes promptly when loading a credentials page.

  • Editor: menu UI fixes.

  • Box node: fix an issue that was causing the Create Folder operation to show extra items.

  • GSuite Admin node: resolve issue that was causing the User Update operation to fail.

  • GitLab Trigger node: ensure this node activates reliably.

  • HTTP Request node: ensure OAuth credentials work properly with predefined credentials.

  • KoboToolbox node: fix the hook logs.

  • SeaTable node: ensure link items show in response.

  • Zoom node: resolve an issue that was causing missing output items.

Jakob Backlund
Yan Jouanique

View the commits for this version.
Release date: 2022-10-10

This is a bug fix release. It resolves an issue with display width on the resource locator UI component.

View the commits for this version.
Release date: 2022-10-10

This release includes six new nodes, focused around infrastructure management. It also adds support for drag and drop data mapping in the JSON input view, and includes bug fixes.

  • Core: improve light versioning support in declarative node design.
  • Editor UI: data mapping for JSON view. You can now map data using drag and drop from JSON view, as well as table view.

AWS Certificate Manager

A new integration with AWS Certificate Manager. You can find the documentation here.

AWS Elastic Load Balancing

Manage your AWS load balancers from your workflow using the new AWS Elastic Load Balancing node. You can find the documentation here.

Citrix ADC is an application delivery and load balancing solution for monolithic and microservices-based applications. You can find the documentation here.

Cloudflare provides a range of services to manage and protect your websites. This new node allows you to manage zone certificates in Cloudflare from your workflows. You can find the documentation here.

This release includes two new Venafi nodes, to integrate with their Protect TLS service.

Node enhancements

Crypto node: add SHA3 support.

  • CLI: cache generated assets in a user-writeable directory.
  • Core: prevent excess runs when data is pinned in a trigger node.
  • Core: ensure hook URLs always added correctly.
  • Editor: a fix for an issue affecting linked items in combination with data pinning.
  • Editor: resolve a bug with the binary data view.
  • GitHub Trigger node: ensure trigger executes reliably.
  • Microsoft Excel node: fix pagination issue.
  • Microsoft ToDo node: fix pagination issue.

Stratos Theodorou

View the commits for this version.
Release date: 2022-09-30

This release includes major new features:

  • Better item linking
  • New built-in variables and methods
  • A redesigned main navigation
  • New nodes, as well as an overhaul of the HTTP Request node

It also contains bug fixes and node enhancements.

Improved item linking

Introducing improved support for item linking (paired items). Item linking is a key concept in the n8n data flow. Learn more in Data item linking.

Overhauled built-in variables

n8n's built-in methods and variables have been overhauled, introducing new variables, and providing greater consistency in behavior and naming.

Redesigned main navigation

We've redesigned the main navigation (the left hand menu) to create a simpler user experience.

Other new features

  • Improved error text when loading options in a node.
  • On reset, share unshared credentials with the instance owner.

The n8n node allows you to consume the n8n API in your workflows.

WhatsApp Business Platform node

The WhatsApp Business Platform node allows you to use the WhatsApp Business Platform Cloud API in your workflows.

Node enhancements

  • HTTP Request node: a major overhaul. It's now much simpler to build a custom API request. Refer to the HTTP Request node documentation for more information.

  • RabbitMQ Trigger node: now automatically reconnects on disconnect.

  • Slack node: add the 'get many' operation for users.

  • Build: add typing for SSE channel.

  • Build: fix lint issue.

  • CLI: add git to all Docker images

  • CLI: disable X-Powered-By: Express header.

  • CLI: disable CORS on SSE connections in production.

  • Core: remove commented out lines.

  • Core: delete unused dependencies.

  • Core: fix and harmonize documentation links for nodes.

  • Core: remove the --forceExit flag from CLI tests.

  • Editor: add missing event handler to accordion component.

  • Editor: fix Storybook setup.

  • Editor: ensure BASE_URL replacement works correctly on Windows.

  • Editor: fix parameter input field focus.

  • Editor: make lodash aliases work on case-sensitive file systems.

  • Editor: fix an issue affecting copy-pasting workflows into pinned data in the code editor.

  • Editor: ensure the run data pagination selector displays when appropriate.

  • Editor: ensure the run selector can open.

  • Editor: tidy up leftover i18n references in the node view.

  • Editor: correct an i18n string.

  • Editor: resolve slow loading times for node types, node creators, and push connections in the settings view.

  • Nodes: update descriptions in the Merge node

  • Nodes: ensure the card ID property displays for completed checklists in the Trello node.

  • Nodes: fix authentication for the new versions of WeKan.

  • Nodes: ensure form names list correctly in the Wufoo Trigger node.

Cristobal Schlaubitz Garcia

View the commits for this version.
Release date: 2022-09-23

This is a bug fix release. It fixes an issue with extracting values in expressions.

View the commits for this version.
Release date: 2022-09-22

  • Adds the ability to resize the main node panel.
  • Resolves an issue with resource locator in expressions.

View the commits for this version.
Release date: 2022-09-22

This is a bug fix release.

  • Editor: fix an expressions bug affecting numbers and booleans.
  • Added support for setting the TDS version in Microsoft SQL credentials.

View the commits for this version.
Release date: 2022-09-22

This is a bug fix release. It resolves an issue with MySQL migrations.

View the commits for this version.
Release date: 2022-09-21

This is a bug fix release. It resolves an issue with Postgres migrations.

View the commits for this version.
Release date: 2022-09-21

This release introduces user management and credential sharing for n8n's Cloud platform. It also contains other enhancements and bug fixes.

User management and credential sharing for Cloud

This release adds support for n8n's existing user management functionality to Cloud, and introduces a new feature: credential sharing. Credential sharing is currently only available on Cloud.

Also in this release:

  • Added a resourceLocator parameter type for nodes, and started upgrading n8n's built-in nodes to use it. This new option helps users who need to specify the ID of a record or item in an external service. For example, when using the Trello node, you can now search for a specific card by ID, URL, or do a free text search for card titles. Node builders can learn more about working with this new UI element in n8n's UI elements documentation. A minimal sketch of a resourceLocator parameter follows this list.

  • Cache npm dependencies to improve performance on self-hosted n8n

  • Box node: fix an issue that sometimes prevented response data from being returned.

  • CLI: prevent n8n from crashing when it encounters an error in poll method.

  • Core: prevent calls to constructor, to forbid arbitrary code execution.

  • Editor: fix the output panel for Wait node executions.

  • HTTP node: ensure instance doesn't crash when batching enabled.

  • Public API: corrections to the OAuth schema.

  • Xero node: fix an issue that was causing line amount types to be ignored when creating new invoices.
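
As referenced above, a minimal sketch of a resourceLocator parameter might look like the following; the parameter name and modes are illustrative assumptions rather than the exact Trello implementation.

```typescript
import { INodeProperties } from 'n8n-workflow';

// Hypothetical parameter: the user can pick a card from a list,
// paste its URL, or enter its ID directly.
const cardParameter: INodeProperties = {
  displayName: 'Card',
  name: 'cardId',
  type: 'resourceLocator',
  default: { mode: 'list', value: '' },
  modes: [
    { displayName: 'From List', name: 'list', type: 'list' },
    { displayName: 'By URL', name: 'url', type: 'url' },
    { displayName: 'By ID', name: 'id', type: 'string' },
  ],
};

export default cardParameter;
```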

Ikko Ashimine

View the commits for this version.
Release date: 2022-09-15

This release includes new nodes: a Gmail trigger, Google Cloud Storage, and Adalo. It also contains major overhauls of the Gmail and Merge nodes.

  • CLI: load all nodes and credentials code in isolation.
  • Core, Editor UI: introduce support for node deprecation.
  • Editor: implement HTML sanitization for Notification and Message components.
  • Editor: display the input number on multi-input nodes.

Adalo is a low code app builder. Refer to n8n's Adalo node documentation for more information.

Google Cloud Storage

n8n now has a Google Cloud Storage node.

n8n now has a Gmail Trigger node. This allows you to trigger workflows in response to a Gmail account receiving an email.

Node enhancements

  • Gmail node: this release includes an overhaul of the Gmail node, with updated resources and operations.

  • Merge node: a major overhaul. Merge modes have new names and have been simplified. Refer to the Merge node documentation to learn more.

  • MongoDB node: updated the Mongo driver to 4.9.1.

  • CLI: core: address Dependabot warnings.

  • CLI: avoid scanning unnecessary directories on Windows.

  • CLI: load nodes and directories on Windows using the correct file path.

  • CLI: ensure password reset triggers internal and external hooks.

  • CLI: use absolute paths for loading custom nodes and credentials.

  • Core: returnJsonArray helper no longer breaks nodes that return no data.

  • Core: fix an issue with node renaming and expressions.

  • Core: update OAuth endpoints to use the instance base URL.

  • Nodes: resolved an issue that was preventing versioned nodes from loading.

  • Public API: better error handling for bad requests.

  • AWS nodes: fixed an issue with credentials testing.

  • GoogleBigQuery node: fix for empty responses when creating records.

  • HubSpot node: correct the node name on the canvas.

Rhys Williams

View the commits for this version.
Release date: 2022-09-07

This is a bug fix release.

  • Editor: prevent editing in the Function nodes in executions view.
  • Editor: ensure button widths are correct.
  • Editor: fix a popup title.
  • Gmail node: fix an issue introduced due to incorrect automatic data formatting.

View the commits for this version.
Release date: 2022-09-06

This release contains new features that lay the groundwork for upcoming releases, and bug fixes.

  • It's now possible to configure the stop time for workers.

  • CLI: Added external hooks for when members are added or deleted.

  • Editor: Use the i18n component for localization (replacing v-html)

  • CLI: include "auth-excluded" endpoints on the history middleware as well.

  • Core: fix MySQL migration issue with table prefix.

  • Correct spelling.

  • Fix n8n-square-button import.

  • AWS nodes: handle query string and body properly for AWS related requests.

  • AWS Lambda node: fix JSON data being sent to AWS Lambda as string.

  • Beeminder node: fix request ID not being sent when creating a new data point.

  • GitHub node: fix binary data not being returned.

  • GraphQL node: fix issue with return items.

  • Postgres node: fix issue with Postgres insert and paired item.

  • Kafka Trigger node: fix Kafka trigger not working with default max requests value.

  • MonicaCrm node: fix pagination when using return all.

  • Gmail node: fix bug related to paired items.

  • Raindrop node: fix issue refreshing OAuth2 credentials.

  • Shopify node: fix pagination when empty fields are sent.

Aaron Delasy
ruanjiefeng

View the commits for this version.
Release date: 2022-09-01

This release contains bug fixes and node enhancements.

Node enhancements

MongoDB node: add credential testing and two new operations.

  • CLI: only initialize the mailer if the connection can be verified.
  • Core: fix an issue with disabled parent outputs in partial executions.
  • Nodes: remove duplicate wrapping of paired item data.

View the commits for this version.
Release date: 2022-09-01

This is a bug fix release. It resolves an issue that was causing errors with OAuth2 credentials.

View the commits for this version.
Release date: 2022-08-31

This is a bug fix release. It resolves an issue that was preventing column headings from displaying correctly in the editor.

View the commits for this version.
Release date: 2022-08-31

This release contains a new node, feature enhancements, and bug fixes.

This release adds an integration for HighLevel, an all-in-one sales and marketing platform.

  • Docker: reduce the size of Alpine Docker images.

  • Editor: improve mapping tooltip behavior.

  • Core: make digest auth work with query parameters.

  • Editor: send data as query on DELETE requests.

  • Fix credentials_entity table migration for MySQL.

  • Improve .npmignore to reduce the size of the published packages.

pemontto
Tzachi Shirazi

View the commits for this version.
Release date: 2022-08-25

This is a bug fix release.

  • Editor: fix the feature flag check when PostHog is unavailable.
  • Editor: fix for a mapping bug that occurred when the value is null.

View the commits for this version.
Release date: 2022-08-25

This is a bug fix release.

Account for non-array types in pinData migration.

View the commits for this version.
Release date: 2022-08-24

This release contains new features and enhancements, as well as bug fixes.

Map nested fields

n8n@0.187.0 saw the first release of data mapping, allowing you to drag and drop top level data from a node's INPUT panel into parameter fields. With this release, you can now drag and drop data from any level.

  • Core and editor: support pairedItem for pinned data.

  • Core and editor: integrate PostHog.

  • Core: add a command to scripts making it easier to launch n8n with tunnel.

  • CLI: notify external hooks about user profile and password changes.

  • Core: account for the enabled state in the first pinned trigger in a workflow.

  • Core: fix pinned trigger execution.

  • CLI: handle unparseable strings during JSON key migration.

  • CLI: fix the excessive instantiation type error for flattened executions.

  • CLI: initialize the nodes directory to ensure npm install succeeds.

  • CLI: ensure tsc build errors also cause Turborepo builds to fail.

  • Nextcloud node: fix an issue with credential verification.

  • Freshdesk node: fix an issue where the getAll operation required non-existent options.

View the commits for this version.
Release date: 2022-08-19

This is a bug fix release. It resolves an issue that was causing node connectors to disappear after a user renamed them.

View the commits for this version.
Release date: 2022-08-17

This release lays the groundwork for wider community nodes support. It also includes some bug fixes.

  • Community nodes are now enabled based on npm availability on the host system. This allows n8n to introduce community nodes to the Desktop edition in a future release.

  • Improved in-app guidance on mapping data.

  • CLI: fix the community node tests on Postgres and MySQL.

  • Core: fix an issue preventing child workflow executions from displaying.

  • Editor: handle errors when opening settings and executions.

  • Editor: improve expression and parameters performance.

  • Public API: fix executions pagination for n8n instances using Postgres and MySQL.

View the commits for this version.
Release date: 2022-08-10

This is a bug fix release.

  • Core: fix a crash caused by parallel calls to test webhooks.
  • Core: fix an issue preventing static data being saved for poll triggers.
  • Public API: fix a pagination issue.
  • GitHub Trigger: typo fix.

Nathan Poirier

View the commits for this version.
Release date: 2022-08-05

This is a bug fix release.

Fixed an issue with MySQL and MariaDB migrations.

View the commits for this version.
Release date: 2022-08-03

This release includes a new node, Sendinblue, as well as bug fixes.

Sendinblue node and Sendinblue Trigger node: introducing n8n's Sendinblue integration.

Node enhancements

NocoDB node: add support for v0.90.0+

  • Editor: fix a label cut off.
  • Fix an issue with saving workflows when tags are disabled.
  • Ensure support for community nodes on Windows.

mertmit
Nicholas Penree

View the commits for this version.
Release date: 2022-07-27

This release contains a new node for Metabase, bug fixes, and node and product enhancements.

This release includes a new Metabase node. Metabase is a business data analysis tool.

This release includes improvements to n8n's core pairedItems functionality.

Node enhancements

  • Item Lists node: add an operation to create arrays from input items.

  • Kafka Trigger node: add more option fields.

  • Core: add Windows support to import:credentials --separate.

  • Editor: correct linking buttons color.

  • Editor: ensure data pinning works as expected when pinData is null.

  • Editor: fix a bug with spaces.

  • Editor: resolve an issue with sticky note duplication and positioning.

  • Editor: restore missing header colors.

  • AWS DynamoDB node: fix for errors with expression attribute names.

  • Mautic node: fix an authentication issue.

  • Rocketchat node: fix an authentication issue.

Nicholas Penree

View the commits for this version.
Release date: 2022-07-21

This is a bug fix release.

  • Editor: fix for a console issue.
  • Editor: fix a login issue for non-admin users.
  • Editor: fix problems with the credentials modal that occurred when no node is open.
  • NocoDB node: fix for an authentication issue.

View the commits for this version.
Release date: 2022-07-20

This release fixes a bug that was preventing new nodes from reliably displaying in all browsers.

View the commits for this version.
Release date: 2022-07-20

This release includes several major new features, including:

  • The community nodes repository: a new way to build and share nodes.
  • Data pinning and data mapping: accelerate workflow development with better data manipulation functionality.

Community nodes repository

This release introduces the community node repository. This allows developers to build and share nodes as npm packages. Users can install community-built nodes directly in n8n.

Data pinning allows you to freeze and edit data during workflow development. Data pinning means saving the output data of a node, and using the saved data instead of fetching fresh data in future workflow executions. This avoids repeated API calls when developing a workflow, reducing calls to external systems, and speeding up workflow development.

This release introduces a drag and drop interface for data mapping, as a quick way to map data without using expressions.

Simplify authentication setup for node creators

This release introduces a simpler way of handling authorization when building a node. All credentials should now contain an authenticate property that dictates how the credential is used in a request. n8n has also simplified authentication types: instead of specifying an authentication type and using the correct interface, you can now set the type as "generic", and use the IAuthenticateGeneric interface.

You can use this approach for any authentication method where data is sent in the header, body, or query string. This includes methods like bearer and basic auth. You can't use this approach for more complex authentication types that require multiple calls, or for methods that don't pass authentication data. This includes OAuth.

For an example of the new authentication syntax, refer to n8n's Asana node.
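
As a rough illustration of the pattern (not the Asana implementation), a header-based API-key credential using the generic type might look like this; the credential name and header format are assumptions for the sketch.

```typescript
import type {
  IAuthenticateGeneric,
  ICredentialType,
  INodeProperties,
} from 'n8n-workflow';

// Hypothetical credential: the `authenticate` block tells n8n how to inject the
// stored API key into every request made with this credential.
export class ExampleApi implements ICredentialType {
  name = 'exampleApi';
  displayName = 'Example API';
  properties: INodeProperties[] = [
    {
      displayName: 'API Key',
      name: 'apiKey',
      type: 'string',
      typeOptions: { password: true },
      default: '',
    },
  ];
  authenticate: IAuthenticateGeneric = {
    type: 'generic',
    properties: {
      headers: {
        // n8n resolves this expression against the stored credential data
        Authorization: '=Bearer {{$credentials.apiKey}}',
      },
    },
  };
}
```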

Other new features

  • Added a preAuthentication method to credentials.
  • Added more credentials tests.
  • Introduce automatic fixing for paired item information in some scenarios.

Node enhancements

  • ERPNext node: add credential tests, and add support for unauthorized certs.

  • Google Drive node: add support for move to trash.

  • Mindee node: support new version.

  • Notion node: support ignoring the Notion URL property if empty.

  • Shopify node: add OAuth support.

  • API: add missing node settings parameters.

  • API: validate static data value for resource workflow.

  • Baserow Node: fix an issue preventing table names from loading.

  • Editor: hide the Execute previous node button when in read-only mode.

  • Editor: hide tabs if there's only one branch.

  • Roundup of link fixes in nodes.

Florian Bachmann Olivier Aygalenq

View the commits for this version.
Release date: 2022-07-14

This is a bug fix release. It includes a fix for an issue with the Airtable node.

View the commits for this version.
Release date: 2022-07-13

This release contains bug fixes and node enhancements.

  • Add item information to more node errors.
  • Update multiple credentials with tests, and add support for custom operations.

Node enhancements

Bryce Sheehan h4ux miguel-mconf Nicholas Penree pemontto Yann Jouanique

View the commits for this version.
Release date: 2022-07-05

This release adds a new node, Google Ads. It also contains bug fixes and node enhancements, as well as a small addition to core.

Core: add the action parameter to INodePropertyOptions. This parameter is now available when building nodes.
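
A minimal, hypothetical sketch of an operation option using the new action parameter; the resource and wording are assumptions for illustration.

```typescript
import type { INodePropertyOptions } from 'n8n-workflow';

// Hypothetical operation option: `action` provides the human-readable text
// shown for this operation in the nodes panel.
const createContact: INodePropertyOptions = {
  name: 'Create',
  value: 'create',
  description: 'Create a new contact',
  action: 'Create a contact',
};

export default createContact;
```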

Google Ads node: n8n now provides a Google Ads node, allowing you to get data from Google Ad campaigns.

Node enhancements

  • DeepL node: Add support for longer text fields, and add credentials tests.

  • Facebook Graph API node: Add support for Facebook Graph API 14.

  • JIRA node: Add support for the simplified option with rendered fields.

  • Webflow Trigger node: Reduce the chance of webhook duplication. Add a credentials test.

  • WordPress node: Add a post template option.

  • HubSpot node: Fix for search endpoints.

  • KoboToolbox node: Improve attachment matching logic and GeoJSON Polygon format.

  • Odoo node: Prevent possible issues with some custom fields.

  • Sticky note node: Fix an issue that was causing the main header to hide.

  • Todoist node: Improve multi-item support.

cgobrech pemontto Yann Jouanique Zapfmeister

View the commits for this version.
Release date: 2022-06-29

This release includes:

  • New core features

  • Enhancements to the Clockify node.

  • Bug fixes.

  • You can now access getBinaryDataBuffer in the pre-send method.

  • n8n now exposes the item index being processed by a node.

  • Migrated the expressions templating engine to n8n's fork of riot-tmpl.

Node enhancements

Clockify node: added three new resources: Client, User, and Workspace. Also added support for custom API calls.

  • Core: fixed an error with logging circular links in JSON.
  • Editor UI: now display the full text of long error messages.
  • Editor UI: fix for an issue with credentials rendering when the node has no parameters.
  • Cortex node: fix an issue preventing all analyzers being returned.
  • HTTP Request node: ensure all OAuth2 credentials work with this node.
  • LinkedIn node: fix an issue with image preview.
  • Salesforce node: fix an issue that was causing the lead status to not use the new name when name is updated.
  • Fixed an issue with required/optional parameters.

pemontto

View the commits for this version.
Release date: 2022-06-21

This release contains node enhancements and bug fixes, as well as an improved trigger nodes panel.

Enhancements to the Trigger inputs panel: When using a trigger node, you will now see an INPUT view that gives guidance on how to load data into your trigger.

Node enhancements

  • HubSpot node: you can now assign a stage on ticket update.

  • Todoist node: it's now possible to move tasks between sections.

  • Twake node: updated icon, credential test added, and added support for custom operations.

  • Core: don't allow OPTIONS requests from any source.

  • Core: GET /workflows/:id now returns tags.

  • Core: ensure predefined credentials show up in the HTTP Request node.

  • Core: return the correct error message on Axios error.

  • Core: updates to the expressions allow-list and deny-list.

Bryce Sheehan Rahimli Rahim

View the commits for this version.
Release date: 2022-06-16

This is a bug fix release. It resolves an issue with restarting waiting executions.

View the commits for this version.
Release date: 2022-06-14

This release contains enhancements to the Twilio and Wise integrations, and adds support for a new grant type for OAuth2. It also includes some bug fixes.

Added support for the client_credentials grant type for OAuth2.

Node enhancements

  • Twilio node: added the ability to make a voice call using TTS.

  • Wise node: added support for downloading statements as JSON, CSV, or PDF.

  • Core: fixes an issue that was causing parameters to get lost in some edge cases.

  • Core: fixes an issue with combined expressions not resolving if one expression was invalid.

  • Core: fixed an issue that was causing the public API to fail to build on Windows.

  • Editor: ensure errors display correctly.

  • HTTP Request node: better handling for requests that return null.

  • Pipedrive node: fixes a limits issue with the GetAll operation on the Lead resource.

  • Postbin node: remove a false error.

Albrecht Schmidt Erick Friis JoLo Shaun Valentin Mocanu

View the commits for this version.
Release date: 2022-06-09

This is a bug fix release. It resolves an issue that was sometimes causing nodes to error when they didn't return data.

View the commits for this version.
Release date: 2022-06-09

This is a bug fix release. It fixes two issues with multi-input nodes.

View the commits for this version.
Release date: 2022-06-08

This release introduces the public API.

New feature highlights

The n8n public API

This release introduces the n8n public REST API. Using n8n's public API, you can programmatically perform many of the same tasks as you can in the GUI. The API includes a built-in Swagger UI playground. Refer to the API documentation for more information.
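
As a minimal sketch of calling the API (assuming a default self-hosted instance, the /api/v1 base path, and an API key passed in the X-N8N-API-KEY header as described in the API documentation):

```typescript
// Hypothetical example: list workflows on a local n8n instance.
// The URL, path, and header name are assumptions based on the API documentation.
async function listWorkflows(): Promise<void> {
  const response = await fetch('http://localhost:5678/api/v1/workflows', {
    headers: { 'X-N8N-API-KEY': '<your-api-key>' },
  });
  const { data } = (await response.json()) as {
    data: Array<{ id: string; name: string }>;
  };
  for (const workflow of data) {
    console.log(workflow.id, workflow.name);
  }
}

listWorkflows().catch(console.error);
```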

Other new features

  • Core: you can now block user access to environment variables using the N8N_BLOCK_ENV_ACCESS_IN_NODE variable.

  • Core: properly resolve expressions in declarative style nodes.

View the commits for this version.
Release date: 2022-06-07

This release adds a new node for Cal.com, support for tags in workflow import and export, UI improvements, node enhancements, and bug fixes.

Tags in workflow import and export

When importing or exporting a workflow, the JSON can now include workflow tags.

Improved handling of activation errors

n8n now supports running an error workflow in response to an activation error.

This release adds a new trigger node for Cal.com. Refer to the Cal Trigger documentation for more guidance.

Node enhancements

  • GitHub node: add the Get All operation to the Organization resource.

  • QuickBooks node: add a new optional field for tax items.

  • Restore support for window in expressions.

  • Fix to the user-management:reset command.

  • Resolve crashes in queue mode.

  • Correct delete button hover spacing.

  • Resolve a bug causing stuck loading states.

  • EmailReadImap node: improve error handling.

  • HubSpot node: fix contact loading.

Mark Steve Samson Syed Ali Shahbaz

View the commits for this version.
Release date: 2022-05-30

This release features a new node for PostBin, as well as various node enhancements and bug fixes.

PostBin serves as a wrapper for standard HTTP libraries and can be used to test arbitrary APIs and webhooks by sending requests and providing more advanced ways to analyze the responses.

Node enhancements

  • RabbitMQ Trigger node: Made message acknowledgement and parallel processing configurable.

  • ServiceNow node: Added support for attachments.

  • Todoist node: Added support for specifying the parent task when adding and listing tasks.

  • Core: Fixed migrations on non-public Postgres schema.

  • Core: Mitigated possible XSS vulnerability when importing workflow templates.

  • Editor UI: fixed erroneous hover state detection close to the sticky note button.

  • Editor UI: fixed display behavior of credentials assigned to versioned nodes.

  • Discord node: Fixed rate limit handling.

  • Gmail node: Fixed sending attachments in filesystem data mode.

  • Google Sheets node: Fixed an error preventing the Use Header Names as JSON Paths option from working as expected.

  • Nextcloud node: Updated the node so the list:folder operation works with Nextcloud version 24.

  • YouTube node: Fixed problem with uploading large files.

View the commits for this version.
Release date: 2022-05-25

This is a bug fix release. It solves an issue with loading parameters when making custom operations calls.

View the commits for this version.
Release date: 2022-05-24

This is a bug fix release. It solves an issue with setting credentials in the HTTP Request node.

View the commits for this version.
Release date: 2022-05-24

This release adds support for reusing existing credentials in the HTTP Request node, making it easier to do custom operations with APIs where n8n already has an integration.

The release also includes improvements to the nodes view, giving better detail about incoming data, as well as some bug fixes.

Credential reuse for custom API operations

n8n supplies hundreds of nodes, allowing you to create workflows that link multiple products. However, some nodes don't include all the possible operations supported by a product's API. You can work around this by making a custom API call using the HTTP Request node.

One of the most complex parts of setting up API calls is managing authentication. To simplify this, n8n now provides a way to use existing credential types (credentials associated with n8n nodes) in the HTTP Request node.

For more information, refer to Custom API operations.

Node details view

An improved node view, showing more detail about node inputs.

Node enhancements

Salesforce Node: Add the Country field.

  • Editor UI: don't display the dividing line unless necessary.
  • Editor UI: don't display the 'Welcome' sticky in template workflows.
  • Slack Node: Fix the kick operation for the channel resource.

View the commits for this version.
Release date: 2022-05-17

This release contains node enhancements, an improved welcome experience, and bug fixes.

Improved welcome experience

A new introductory video, automatically displayed for new users.

Automatically convert Luxon dates to strings

n8n now automatically converts Luxon DateTime objects to strings.

Node enhancements

  • Google Drive Node: Drive upload, delete, and share operations now support shared Drives.

  • Microsoft OneDrive: Add the rename operation for files and folders.

  • Trello: Add support for operations relating to board members.

  • core: Fix call to /executions-current with unsaved workflow.

  • core: Fix issue with fixedCollection having all default values.

  • Edit Image Node: Fix font selection.

  • Ghost Node: Fix post tags and add credential tests.

  • Google Calendar Node: Make it work with public calendars and clean up.

  • KoBoToolbox Node: Fix query and sort + use question name in attachments.

  • Mailjet Trigger Node: Fix an issue that prevented the node from being activated.

  • Pipedrive Node: Fix resolving properties when using a multi-option field.

Cristobal Schlaubitz Garcia Yann Jouanique

View the commits for this version.
Release date: 2022-05-10

This release contains bug fixes and node enhancements.

Node enhancements

  • Pipedrive node: adds support for filters to the Organization: Get All operation.

  • Pushover node: adds an HTML formatting option, and a credential test.

  • UProc node: adds new tools.

  • core: a fix for filtering the executions list by waiting status.

  • core: improved webhook error messages.

  • Edit Image node: node now works correctly with the binary-data-mode 'filesystem'.

Albert Kiskorov Miquel Colomer

View the commits for this version.
Release date: 2022-05-03

This is a bug fix release.

Fixes a bug in the editor UI related to node versioning.

View the commits for this version.
Release date: 2022-05-02

This release adds support for node versioning, along with node enhancements and bug fixes.

0.175.0 adds support for a lightweight method of node versioning. One node can contain multiple versions, allowing small version increments without code duplication. To use this feature, change the version parameter in your node to an array, and add your version numbers, including your existing version. You can then access the version parameter with @version in your displayOptions (to control which version n8n displays). You can also query the version in your execute function using const nodeVersion = this.getNode().typeVersion;.
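
A minimal sketch of what this might look like in a node description; the node name and properties are illustrative assumptions, and only the versioning-related parts matter here.

```typescript
import type {
  IExecuteFunctions,
  INodeExecutionData,
  INodeType,
  INodeTypeDescription,
} from 'n8n-workflow';

export class ExampleNode implements INodeType {
  description: INodeTypeDescription = {
    displayName: 'Example Node',
    name: 'exampleNode',
    group: ['transform'],
    version: [1, 2], // both versions live in this single node
    description: 'Hypothetical node illustrating light versioning',
    defaults: { name: 'Example Node' },
    inputs: ['main'],
    outputs: ['main'],
    properties: [
      {
        displayName: 'New Option',
        name: 'newOption',
        type: 'boolean',
        default: false,
        // only displayed when the node instance uses version 2
        displayOptions: { show: { '@version': [2] } },
      },
    ],
  };

  async execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
    // branch behaviour on the version of this node instance
    const nodeVersion = this.getNode().typeVersion;
    const items = this.getInputData();
    return [items.map((item) => ({ json: { ...item.json, nodeVersion } }))];
  }
}
```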

Node enhancements

  • Google Sheets node: n8n now handles header names formatted as JSON paths.

  • Microsoft Dynamics CRM node: add support for regions other than North America.

  • Telegram node: add support for querying chat administrators.

  • core: fixed an issue that was causing n8n to apply authentication checks, even when user management was disabled.

  • core: n8n now skips credentials checks for disabled nodes.

  • editor: fix a bug affecting touchscreen monitors.

  • HubSpot node: fix for search operators.

  • SendGrid node: fixed an issue with sending attachments.

  • Wise node: respect the time parameter on get: exchangeRate.

Jack Rudenko MC Naveen vcrwr

View the commits for this version.
Release date: 2022-04-25

This release adds Sticky Notes, a new feature that allows you to annotate and comment on your workflows. Refer to the Sticky Notes for more information.

  • core: allow external OAuth connection. This enhancement adds support for connecting OAuth apps without access to n8n.

  • All AWS nodes now support AWS temporary credentials.

  • Google Sheets node: Added upsert support.

  • Microsoft Teams node: adds several enhancements:

    • An option to limit groups to "member of", rather than retrieving the whole directory.
    • An option to get all tasks from a plan instead of just a group member.
    • Autocompletion for plans, buckets, labels, and members in update fields for tasks.
  • MongoDB node: you can now parse dates using dot notation.

  • Calendly Trigger node: updated the logo.

  • Microsoft OneDrive node: fixed an issue that was preventing upload of files with special characters in the file name.

  • QuickBooks node: fixed a pagination issue.

Basit Ali Cody Stamps Luiz Eduardo de Oliveira Oliver Trajceski pemontto Ryan Goggin

View the commits for this version.
Release date: 2022-04-19

Fixes a bug with the Discord node icon name.

View the commits for this version.
Release date: 2022-04-19

Markdown node: added a new Markdown node to convert between Markdown and HTML.

editor: you can now drag and drop nodes from the nodes panel onto the canvas.

Node enhancements

  • Discord node: additional fields now available when sending a message to Discord.

  • GoogleBigQuery: added support for service account authentication.

  • Google Cloud Realtime Database node: you can now select a region.

  • PagerDuty node: now supports more detail in incidents.

  • Slack node: added support for blocks in Slack message update.

  • core: make the email for user management case insensitive.

  • core: add rawBody for XML requests.

  • editor: fix a glitch that caused dropdowns to break after adding expressions.

  • editor: reset text input value when closed with Esc.

  • Discourse node: fix an issue that was causing incomplete results when getting posts. Added a credentials test.

  • Zendesk Trigger node: remove deprecated targets, replace with webhooks.

  • Zoho node: fix pagination issue.

Florian Metz Francesco Pongiluppi Mark Steve Samson Mike Quinlan

View the commits for this version.
Release date: 2022-04-11

  • Changes to the data output display in nodes.

Node enhancements

  • Magento 2 Node: Added credential tests.
  • PayPal Node: Added credential tests and updated the API URL.

core: Luxon now applies the correct timezone. Refer to Luxon for more information.
core: fixed an issue with localization that was preventing i18n files from loading.
Action Network Node: Fix a pagination issue and add credentials test.

Paolo Rechia

View the commits for this version.
Release date: 2022-04-06

This is a small bug fix release.

  • core: fix issue with current executions not displaying.
  • core: fix an issue causing n8n to falsely skip some authentication.
  • WooCommerce Node: Fix a pagination issue with the GetAll operation.

View the commits for this version.
Release date: 2022-04-03

Please note that this version contains breaking changes. You can read more about them here.

This release focuses on bug fixes and node enhancements, with one new feature, and one breaking change to the GraphQL node.

Breaking change to GraphQL node

The GraphQL node now errors when the response includes an error. If you use this node, you can choose to:

  • Do nothing: a GraphQL response containing an error will now cause the workflow to fail.
  • Update your GraphQL node settings: set Continue on Fail to true to allow the workflow to continue even when the GraphQL response contains an error.

You can now download binary data from individual nodes in your workflow.

View the commits for this version.
Release date: 2022-03-27

This release focuses on bug fixes and adding functionality to existing nodes.

View the commits for this version.
Release date: 2022-03-20

This release includes:

  • New functionality for existing nodes
  • A new node for Linear
  • Bug fixes
  • And a license change!

This release changes n8n's license from Apache 2.0 with Commons Clause to the Sustainable Use License.

This change aims to clarify n8n's license terms, and n8n's position as a fair-code project.

Read more about the new license in License.

Other improvements

For a comprehensive list of changes, view the commits for this version.
Release date: 2022-03-16

This release contains an important bug fix for 0.168.0. Users on 0.168.0 or 0.168.1 should upgrade to this.

For a comprehensive list of changes, view the commits for this version.
Release date: 2022-03-15

A bug fix for user management: fixed an issue with email templates that was preventing owners from inviting members.

For a comprehensive list of changes, view the commits for this version.
Release date: 2022-03-14

New feature: user management

User management in n8n allows you to invite people to work in your self-hosted n8n instance. It includes:

  • Login and password management
  • Adding and removing users
  • Two account types: owner and member

Check out the user management documentation for more information.

For a comprehensive list of changes, view the commits for this version.
Release date: 2022-03-13

Luxon and JMESPath

0.167.0 adds support for two new libraries:

  • Luxon: a JavaScript library for working with date and time
  • JMESPath: a query language for JSON

You can use Luxon and JMESPath in the code editor and in expressions.
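
Luxon usage is illustrated with the new $now and $today variables in the next section. As a standalone sketch of JMESPath querying, here the jmespath npm package is imported directly, which is an assumption for illustration only; inside n8n the library is exposed through expressions and the code editor rather than imported.

```typescript
import * as jmespath from 'jmespath';

// JMESPath extracts values from nested JSON using a query string.
const payload = {
  people: [
    { name: 'Ada', email: 'ada@example.com' },
    { name: 'Grace', email: 'grace@example.com' },
  ],
};

const names = jmespath.search(payload, 'people[*].name');
console.log(names); // ['Ada', 'Grace']
```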

New expressions variables

We've added two new variables to simplify working with date and time in expressions:

  • $now: a Luxon object containing the current timestamp. Equivalent to DateTime.now().
  • $today: a Luxon object containing the current timestamp, rounded down to the day. Equivalent to DateTime.now().set({ hour: 0, minute: 0, second: 0, millisecond: 0 }).
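
A minimal sketch in plain Luxon of what these variables resolve to, and the kind of DateTime methods that therefore become available; the formatting calls are just examples.

```typescript
import { DateTime } from 'luxon';

// Equivalent values, per the definitions above
const now = DateTime.now(); // $now
const today = DateTime.now().set({
  hour: 0, minute: 0, second: 0, millisecond: 0, // $today
});

// Because both are Luxon DateTime objects, the usual methods apply, for example:
console.log(now.plus({ days: 7 }).toISO());
console.log(today.toFormat('yyyy-MM-dd'));
```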

Negative operations in If and Switch nodes

Made it easier to perform negative operations on strings.

This release adds one new operation for numbers:

And the following new operations for strings:

  • Not Ends With
  • Regex Not Match
  • Not Starts With
  • Is Not Empty

Additionally, Regex is now labelled Regex Match.

New node: Redis Trigger

Added a Redis Trigger node, so you can now start workflows based on a Redis event.

Core functionality

  • Added support for Luxon and JMESPath.

  • Added two new expressions variables, $now and $today.

  • Added more negative operations for numbers and strings.

  • Added a link to the course from the help menu.

  • Facebook Graph API: Added support for Facebook Graph API 13.

  • HubSpot: Added support for private app token authentication.

  • MongoDB: Added the aggregate operation.

  • Redis Trigger: Added a Redis Trigger node.

  • Redis: Added support for publish operations.

  • Strapi: Added support for Strapi 4.

  • WordPress: Added status as an option to getAll post requests.

  • The Google Calendar node now correctly applies timezones when creating, updating, and scheduling all-day events.

  • Fixed a bug that occasionally caused n8n to crash, or shut down workflows unexpectedly.

  • You can now use long credential type names with Postgres.

  • Luiz Eduardo de Oliveira Fonseca

  • Vitaliy Fratkin

  • sol

  • vcrwr

  • FFTDB

For a comprehensive list of changes, view the commits for this version.
Release date: 2022-03-08

Core functionality

  • Added new environment variable N8N_HIRING_BANNER_ENABLED to enable/disable the hiring banner.

  • Fixed a bug preventing keyboard shortcuts from working as expected.

  • Fixed a bug causing tooltips to be hidden behind other elements.

  • Fixed a bug causing some credentials to be hidden from the credentials list.

  • Baserow: Fixed a bug preventing the Sorting option of the Get All operation from working as expected.

  • HTTP Request: Fixed a bug causing Digest Authentication to fail in some scenarios.

  • Wise: Fixed a bug causing API requests requiring Strong Customer Authentication (SCA) to fail.

pemontto

For a comprehensive list of changes, view the commits for this version.
Release date: 2022-02-28

Please note that this version contains breaking changes. You can read more about them here.

  • Onfleet

  • Asana: Added Create operation to the Project resource.

  • Mautic: Added Edit Contact Points, Edit Do Not Contact List, Send Email operations to Contact resource. Also added new Segment Email resource.

  • Notion (Beta): Added support for rollup fields to the Simplify Output option. Also added the Parent ID to the Get All operation of the Block resource.

  • Pipedrive: Added Marketing Status field to the Create operation of the Person resource, also added User ID field to the Create and Update operations of the Person resource.

Core functionality

  • Added support for workflow templates.

  • Fixed a bug causing credentials tests to fail for versioned nodes.

  • Fixed a build problem by adding the dependencies @types/lodash.set to the workflow package and @types/uuid to the core package.

  • Fixed an error causing some resources to ignore a non-standard N8N_PATH value.

  • Fixed an error preventing the placeholder text from being shown when entering credentials.

  • Improved error handling for telemetry-related errors.

  • Orbit: Fixed a bug causing API requests to use an incorrect workspace identifier.

  • TheHive: Fixed a bug causing the Ignore SSL Issues option to be applied incorrectly.

alexwitkowski, Iñaki Breinbauer, lsemaj, Luiz Eduardo de Oliveira Fonseca, Rodrigo Correia, Santiago Botero Ruiz, Saurabh Kashyap, Ugo Bataillard

For a comprehensive list of changes, check out the commits for this version.
Release date: 2022-02-20

Core Functionality

  • Fixed a bug preventing webhooks from working as expected in some scenarios.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2022-02-20

  • Google Chat

  • Grist: Added support for self-hosted Grist instances.

  • Telegram Trigger: Added new Extra Large option to Image Size field.

  • Webhook: Added new No Response Body option. Also added support for DELETE, PATCH and PUT methods.

Core Functionality

  • Added new database indices to improve the performance when querying past executions.
  • Fixed a bug causing the base portion of a URL not to be prepended as expected in some scenarios.
  • Fixed a bug causing expressions to resolve incorrectly when referencing non-existent nodes or parameters

Jhalter5Stones, Valentina Lilova, thorstenfreitag

For a comprehensive list of changes, check out the commits for this version.
Release date: 2022-02-13

Core Functionality

  • Fixed a bug preventing OAuth2 authentication from working as expected in some scenarios.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2022-02-13

Core Functionality

  • Added automatic sorting by relative position to the node list inside the expression editor.

  • Added new /workflows/demo page to allow read-only rendering of workflows inside an iframe.

  • Added optional /healthz health check endpoint to worker instances.

  • Fixed unwanted list autofill behaviour inside the expression editor.

  • Improved the GitHub actions used by the nightly Docker image.

  • Function: Fixed a bug leaving the code editor size unchanged after resizing the window.

  • Function Item: Fixed a bug leaving the code editor size unchanged after resizing the window.

  • IF: Removed the empty sections left after removing a condition.

  • Item Lists: Fixed an erroneous placeholder text.

Iñaki Breinbauer, Manuel, pemontto

For a comprehensive list of changes, check out the commits for this version.
Release date: 2022-02-06

  • GitHub: Added new List operation to File resource.

Core Functionality

  • Added configurable debug logging for telemetry.

  • Added support for defining nodes through JSON. This functionality is in alpha state and breaking changes to the interface can take place in upcoming versions.

  • Added telemetry support to page events occurring before telemetry is initialized.

  • Fixed a bug preventing errors in sub-workflows from appearing in parent executions.

  • Fixed a bug where node versioning would not work as expected.

  • Fixed a bug where remote parameters would not load as expected.

  • Fixed a bug where unknown node types would not work as expected.

  • Prevented the node details view from opening automatically after duplicating a node.

  • Removed the fibers dependency, which is incompatible with the current Node.js LTS version 16.

  • XML: Fixed a bug causing the node to alter incoming data.

pemontto

For a comprehensive list of changes, check out the commits for this version.
Release date: 2022-02-01

Core Functionality

  • Added optional debug logging to health check functionality.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2022-01-30

Core Functionality

  • Added a default polling interval for trigger nodes that use polling.

  • Added support for additional hints below parameter fields.

  • Fixed a bug preventing default values from being used when testing credentials.

  • Improved the wording in the Save your Changes? dialog.

  • Airtable: Improved field description.

  • Airtable Trigger: Improved field description.

  • ERPNext: Prevented the node from throwing an error when no data is found.

  • Gmail: Fixed a bug causing the BCC field to be ignored.

  • Move Binary Data: Fixed a bug causing the binary data to JSON conversion to fail when using filesystem-based binary data handling.

  • Slack: Fixed a typo in the Type field.

fabian wohlgemuth

For a comprehensive list of changes, check out the commits for this version.
Release date: 2022-01-22

Core Functionality

  • Fixed a bug preventing the binary data preview from using the full available height and width.

  • Fixed a build problem by pinning chokidar version 3.5.2.

  • Prevented workflow activation when no trigger is present and introduced a modal explaining production data handling.

  • Fixed the Filter by tags placeholder text used in the Open Workflow modal.

  • HTTP Request: Fixed a bug causing custom headers to be ignored.

  • Mautic: Fixed a bug preventing all items from being returned in some situations.

  • Microsoft OneDrive: Fixed a bug preventing more than 200 items from being returned.

  • Spotify: Fixed a bug causing the execution to fail if there are more than 1000 search results, also fixed a bug preventing the Get New Releases operation of the Album resource from working as expected.

fabian wohlgemuth

For a comprehensive list of changes, check out the commits for this version.
Release date: 2022-01-18

Core Functionality

  • Temporarily removed debug logging for Axios requests.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2022-01-16

  • Jenkins

  • GraphQL: Added support for additional authentication methods Basic Auth, Digest Auth, OAuth1, OAuth2, and Query Auth.

Core Functionality

  • Added support for executing workflows without an ID through the CLI.

  • Fixed a build problem.

  • Fixed a bug preventing the tag description from being shown on the canvas.

  • Improved build performance by skipping the node-dev package during build.

  • Box: Fixed a bug causing some files to be corrupted during download.

  • Philips Hue: Fixed a bug preventing the node from connecting to Philips Hue.

  • Salesforce: Fixed a bug preventing filters on date and datetime fields from working as expected.

  • Supabase: Fixed an erroneous documentation link.

Phil Clifford

For a comprehensive list of changes, check out the commits for this version.
Release date: 2022-01-09

Core Functionality

  • Added a new external hook that runs when active workflows finish initializing.

  • Fixed a bug preventing the personalisation survey from showing up.

  • Improved telemetry.

  • Edit Image: Fixed a bug causing two items to be returned.

  • iCalendar: Fixed a bug preventing dates in January from working as expected.

  • Merge: Fixed a bug causing empty binary data to overwrite other binary data on merge.

Ricardo Georgel, Pierre, Vahid Sebto

For a comprehensive list of changes, check out the commits for this version.
Release date: 2022-01-03

Core Functionality

  • Fixed a bug where not all nodes could use the new binary data handling.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2022-01-02

  • Function: The node now prevents unsupported data from being returned.
  • Function Item: The node now prevents unsupported data from being returned.
  • HubSpot: Added Engagement resource with Create, Delete, Get, and Get All operations.
  • Notion (Beta): Upgraded the Notion node: Added Search operation for the Database resource, Get operation for Database Page resource, Archive operation for the Page resource. Also added Simplify Output option and test for credential validity.
  • Wait: Added new Ignore Bots option.
  • Webhook: Added new Ignore Bots option.

Core Functionality

  • Fixed a bug where a wrong number suffix was used after duplicating nodes.

  • HTTP Request: Fixed a bug where using Digest Auth would fail.

pemontto

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-12-25

  • GitLab Trigger: Added new trigger events: Confidential Issue, Confidential Comment, Deployment, Release.
  • Google Drive: Added support for downloading and converting native Google files.
  • Kitemaker: Added Space ID field to Create operation of Work Item resource.
  • Raindrop: Added Parse Metadata option to Create, Update operations of the Bookmark resource.

Core Functionality

  • Added execution ID to workflow.postExecute hook
  • Added response body to UI for failed Axios requests
  • Added support for automatically removing new lines from Google Service Account credentials
  • Added support for disabling the UI using environment variable
  • Fixed a bug causing the wrong expression result to be shown for items from an output other than the first
  • Improved binary data management
  • Introduced Monaco as new UI code editor

Arpad Gabor, Leo Lou, Manuel

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-12-19

Core Functionality

  • Added support for internationalization (i18n). This functionality is currently in alpha status and breaking changes are to be expected.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-12-19

  • Plivo: Added user agent to all API requests.

Core Functionality

  • Allowed deleting nodes from the canvas using the backspace key

  • Fixed an issue causing clicks in the value survey to impact the main view

  • Fixed an issue preventing the update panel from closing

  • Todoist: Fixed a bug where using the additional field Due Date Time on the Task resource would cause the Create operation to fail.

Mohammed Huzaif, Лебедев Иван

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-12-11

Core Functionality

  • Added frontend for value surveys

  • Fixed an issue preventing the recommendation logic from working as expected after selecting a work area

  • Fixed an issue where a wrong exit code was sent when running n8n on an unsupported version of Node.js

  • Fixed an issue where node options would disappear on hovering when a node isn't selected

  • Fixed an issue where the execution id was missing when running n8n in queue mode

  • Fixed an issue where execution data was missing when waiting for a webhook in queue mode

  • Improved error handling when the n8n port is already in use

  • Improved diagnostic events

  • Removed toast notification on webhook deletion, added toast notification after node is copied

  • Removed default trigger tooltip for polling trigger nodes

  • APITemplate.io: Fixed a bug where the Create operation on the Image resource would fail when the Download option isn't enabled.

  • HubSpot: Fixed authentication for new HubSpot applications by using granular scopes when authenticating against the HubSpot OAuth2 API.

  • HubSpot Trigger: Fixed authentication for new HubSpot applications by using granular scopes when authenticating against the HubSpot Developer API.

  • Jira Software: Fixed an issue where the Reporter field would not work as expected on Jira Server instances.

  • Salesforce: Fixed a typo preventing the value in the amount field from being saved.

pemontto, Jascha Lülsdorf, Jonathan Bennetts

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-12-04

Core Functionality

  • Added a plus (+) connector to end nodes

  • Allowed opening workflows and executions in a new window when using Ctrl + Click

  • Enforced type checking for all node parameters

  • Fixed a build issue in the custom n8n docker image

  • Fixed a memory leak in the UI which could occur when renaming nodes or navigating to another workflow

  • Improved stability of internal test workflows

  • Improved expression security

  • Introduced redirect to a new page and UI error message when trying to open a deleted workflow

  • Introduced support for multiple arguments when logging

  • Updated the onboarding survey

  • Google BigQuery: Fixed a bug preventing pagination from working as expected when the Return All option is enabled.

  • RabbitMQ Trigger: Added Trigger to the name of the trigger node.

  • Salesforce: Fixed a typo affecting the Type field of the Opportunity resource.

Zvonimir Erdelja, m2scared

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-11-26

Core Functionality

  • Fixed a bug causing connections between nodes to disappear when renaming a newly added node after drawing a connection to its endpoints.

  • Fixed a build issue by adding TypeScript definitions for validator.js to CLI package, also fixed a linting issue by removing an unused import.

  • Improved the waiting state of trigger nodes to explain when an external event is required.

  • Loops are now drawn below their source node.

  • Edit Image: Fixed an issue preventing the Composite operation from working correctly in some cases.

Jonathan Bennetts

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-11-19

Core Functionality

  • Fixed a build issue by pinning rudder-sdk-node version 1.0.6 in CLI package.

  • Fixed an issue preventing the n8n import:workflow --separate CLI command from finding workflows on Windows.

  • Further improved the expression security.

  • Moved all nodes into separate directories in preparation for internationalization.

  • Removed default headers for PUT and PATCH operations when using Axios.

  • Revamped the workflow canvas.

  • HTTP Request: Fixed an issue causing the wrong Content-Type header to be set when downloading a file.

  • ServiceNow: Fixed incorrect mapping of incident urgency and impact values.

  • Start: Fixed an issue causing the node to be disabled in a new workflow.

  • Xero: Fixed an issue causing the node to only fetch the first page when querying the Xero API.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-11-13

  • One Simple API

  • Edit Image: Added Circle Primitive to Draw operation. Also added Composite operation.

  • Zendesk: Added check for API credentials validity.

  • Zulip: Added additional field Role to the Update operation of the User resource.

Core Functionality

  • Fixed an issue causing an error message to be thrown when executing a workflow through the CLI.

  • Improved expression security by limiting the available process properties.

  • Improved the behaviour of internal tests executed through the CLI.

  • Updated the owner of the node user's home directory in the custom docker image.

  • Google Tasks: Fixed an issue where the Due Date field had no effect (Update operation) or was unavailable (Create operation).

  • HTTP Request: Fixed an issue where the Content-Length header was not calculated and sent when using a Body Content Type of Form-Data Multipart.

  • Stripe Trigger: Fixed an issue preventing the node from being activated when a previously created webhook no longer exists.

  • Toggl Trigger: Updated the API URL used by the node.

GeylaniBerk, Jonathan Bennetts

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-11-05

Core Functionality

  • Added a hook after workflow creation.

  • Fixed a build issue with npm v7 by overriding unwanted behaviour through the .npmrc file.

  • Fixed an issue preventing unknown node types from being imported.

  • Fixed an issue with the UI falsely indicating a credential can't be selected when using SQLite and multiple credentials with the same name exist.

  • Stripe: Fixed an issue where setting additional Metadata fields would not have the expected effect. Also fixed an issue where pagination would not work as expected.

  • Zendesk: Fixed an issue preventing the additional field External ID from being evaluated correctly.

mizzimizzi, nikozila, Pauline

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-11-03

Core Functionality

  • Fixed a build issue by moving the chokidar dependency to a regular dependency.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-11-03

Core Functionality

  • Improved the database migration process to reduce memory footprint.
  • Fixed an issue with telemetry by adding an anonymous ID.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-10-29

Core Functionality

  • Added name and ID of a workflow to its settings.

  • Added support for multi-line parameter inputs.

  • Fixed an issue with declaring proxies when Axios is used.

  • Fixed an issue with serializing arrays and special characters.

  • Fixed an issue with updating expressions after renaming a node.

  • HTTP Request: Fixed an issue with the Full Response option not taking effect when used with the Ignore Response Code option.

Valentina Lilova, Oliver Trajceski

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-10-22

  • AWS Textract

  • Google Drive Trigger

  • Bitbucket Trigger: Added check for credentials validity. Removed deprecated User and Team resources, and added the Workspace resource.

  • GitHub: Added check for API credentials validity.

  • Home Assistant: Added check for credentials validity.

  • Jira Software: Added check for credentials validity.

  • Microsoft OneDrive: Added functionality to create folder hierarchy automatically upon subfolder creation.

  • Pipedrive: Added All Users option to Get All operation of Activity resource.

  • Slack: Increased the default query limit from 5 to 100 to reduce the number of requests.

  • Twitter: Added Tweet Mode additional field to the Search operation of Tweet resource.

Core Functionality

  • Changed vm2 library version from 3.9.3 to 3.9.5.

  • Fixed an issue with ignoring the response code.

  • Fixed an issue with overwriting credentials using environment variables.

  • Fixed an issue with using query strings combined with the x-www-form-urlencoded content type.

  • Introduced telemetry.

  • Jira Software: Fixed an issue with the Expand option for the Issue resource. Also fixed an issue with using custom fields on Jira Server.

  • Slack: Fixed an issue with pagination when loading more than 1,000 channels.

  • Strapi: Fixed an issue using the Where option of the Get All operation.

  • WooCommerce: Fixed an issue where a wrong postcode field name was used for the Order resource.

pemontto, rdd2, robertodamiani, Rodrigo Correia

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-10-15

  • Nextcloud: Added Share operation to the File and Folder resources.
  • Zendesk: Added support for deleting, listing, getting, and recovering suspended tickets. Added the query option for regular tickets. Added assignee emails, internal notes, and public replies options to the update ticket operation.

Core Functionality

  • Improved the autofill behaviour on Google Chrome when entering credentials.

  • Airtable: Fixed an issue with the sort field.

  • Cron: Set the version of the cron library to 1.7.2.

Jonathan Bennetts

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-10-14

Core Functionality

  • Fixed a build issue affecting a number of AWS nodes.

  • Changed workflows to use credential ids primarily (instead of names), allowing users to have different credentials with the same name.

  • FTP: Fixed error when opening FTP/SFTP credentials.

Rodrigo Correia

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-10-07

Core Functionality

  • Fixed overlapping buttons when viewing on mobile.

  • Fixed an issue with partial workflow executions when the Wait node was the last node.

  • Fixed issue with broken non-JSON requests.

  • Node errors are now only displayed for executing nodes, not disconnected nodes.

  • Added automatic saving when executing new workflows with a Webhook node.

  • Fixed an issue with how arrays were serialized for certain nodes.

  • Fixed an issue where executions could not be cancelled when running in Main mode.

  • Duplicated workflows now open in a new window.

  • HTTP Request: Fixed 'Ignore response code' flag.

  • Rundeck: Fixed issue with async loading of credentials.

  • SeaTable: Fixed an issue when entering a Base URI with a trailing slash.

Günther, Tom Klingenberg

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-10-01

Core Functionality

  • Fixed an issue with body formatting of x-www-form-urlencoded requests.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-09-30

Core Functionality

  • Performance improvements in Editor UI
  • Improved error reporting

Alex Hall, Tom Klingenberg

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-09-29

  • Splunk

  • Telegram: Added binary data support to the Send Animation, Send Audio, Send Document, Send Photo, Send Video, and Send Sticker operations.

Core Functionality

  • Fixed startup behavior when running n8n in scaled mode (that is, when skipWebhooksDeregistrationOnShutdown is enabled).
  • Fixed behavior around handling empty response bodies.
  • Fixed an issue with handling of refresh tokens.

pemontto

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-09-23

Core Functionality

  • Bug fixes and improvements for Editor UI.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-09-22

Core Functionality

  • Updated node design to include support for versioned nodes.

  • SendGrid: Fixed issue with adding contacts to lists.

Matías Aguirre

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-09-15

  • Item Lists

  • Magento 2

  • Baserow: Added the following filter options: Contains, Contains Not, Date Before Date, Date After Date, Filename Contains, Is Empty, Is Not Empty, Link Row Has, Link Row Does Not Have, Single Select Equal, and Single Select Not Equal.

  • Pipedrive: Added support for Notes on Leads.

  • WeKan: Added Sort field to the Card resource.

Core Functionality

  • General UX improvements to the Editor UI.

  • Fixed an issue with the PayloadTooLargeError.

  • Lemlist: Fixed issue where events were not sent in the correct property.

  • Notion: Fixed an issue with listing unnamed databases.

bramknuever, Chris Magnuson

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-09-05

  • Freshservice

  • Clockify: Added Task resource.

  • HubSpot: Added dropdown selection for Properties and Properties with History filters for Get All Deals operations.

  • Mautic: Added Campaign Contact resource.

  • MongoDB: Added ability to query documents by '_id'.

  • MQTT: Added SSL/TLS support to authentication.

  • MQTT Trigger: Added SSL/TLS support to authentication.

  • Salesforce: Added File Extension option to the Document resource. Added Type field to Task resource.

  • Sms77: Added Voice Call resource. Added the following options to SMS resource: Debug, Delay, Foreign ID, Flash, Label, No Reload, Performance Tracking, TTL.

  • Zendesk: Added Organization resource. Added Get Organizations and Get Related Data operations to User resource.

Core Functionality

  • Added execution ID to logs of queue processes.

  • Added description to operation errors.

  • Added ability for webhook processes to wake waiting executions.

  • HubSpot: Fixed issue with 'RequestAllItems' API.

  • WordPress: Fixed issue with 'RequestAllItems' API only returning the first 10 items.

André Matthies, DeskYT, Frederic Alix, Jonathan Bennetts, Ketan Somvanshi, Luiz Eduardo de Oliveira Fonseca, TheFSilver

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-08-30

  • Notion: Added handling of Rich Text when simplifying data.

Core Functionality

  • General UI design improvements.

  • Improved error messages during debugging of custom nodes.

  • All packages upgraded to TypeScript 4.3.5, improved linting and formatting.

  • FTP: Fixed issue where incorrect paths were displayed when using the node.

  • Wait: Fixed issue when receiving multiple files using On Webhook Call operation.

  • Webhook: Fixed issue when receiving multiple files.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-08-27

Core Functionality

  • Fixed Canvas UI inconsistencies when duplicating workflows.
  • Added log message during upgrade to indicate database migration has started.
  • General improvements to parameter labels and tooltips.

Kyle Mohr

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-08-26

Core Functionality

  • Added expression support for credentials.
  • Fixed performance issues when loading credentials.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-08-23

Core Functionality

  • Fixed an issue where errors would occur on the next startup if n8n was shut down during a database migration while upgrading versions.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-08-22

Please note that this version contains breaking changes. You can read more about it here. The features that introduced the breaking changes have been flagged below.

Core Functionality

  • The in-node method for accessing binary data is now asynchronous, and a helper function for this has been implemented.

  • Credentials are now loaded from the database on-demand.

  • Webhook UUIDs are automatically updated when duplicating a workflow.

  • Fixed an issue when referencing values before loops.

  • Interval: Fixed an issue where entering too large a value (greater than 2147483647 ms) resulted in an interval of 1 second being used rather than an error.

Aniruddha Adhikary, lublak, parthibanbalaji

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-08-15

  • AWS DynamoDB: Added Scan option to Item > Get All operation.
  • Google Drive: Added File Name option to File > Update operation.
  • Mautic: Added the following fields to Company resource: Address, Annual Revenue, Company Email, Custom Fields, Description, Fax, Industry, Number of Employees, Phone, Website.
  • Notion: Added Timezone option when inserting Date fields.
  • Pipedrive: Added the following Filters options to the Deal > Get All operation: Predefined Filter, Stage ID, Status, and User ID.
  • QuickBooks: Added the Transaction resource and Get Report operation.

Core Functionality

  • Integrated Nodelinter in n8n.

  • Added a trailing slash (/) to all webhook URLs to ensure proper functionality.

  • AWS SES: Fixed issue where special characters in the message were not encoded.

  • Baserow: Fixed issue where Create operation inserted null values.

  • HubSpot: Fixed issue when sending context parameter.

calvintwr, CFarcy, Jeremie Dokime, Michael Hirschler, Rodrigo Correia, sol

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-08-08

Core Functionality

  • Fixed UI lag when editing large workflows.

  • Nextcloud: Fixed issue where List operation on an empty Folder returned an error.

  • Spotify: Fixed issues with pagination and infinite executions.

Jacob Burrell, Лебедев Иван

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-08-02

  • Interval: Fixed issue with infinite executions.

Лебедев Иван

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-08-02

Core Functionality

  • Changed TypeORM version to 0.2.34

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-08-01

Core Functionality

  • Fixed an issue for large internal values.

Ed Linklater, Rodrigo Correia

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-07-24

Please note that this version contains a breaking change. You can read more about it here. The features that introduced the breaking changes have been flagged below.

Core Functionality

  • Added Continue-on-fail support to all nodes.

  • Added new version notifications.

  • Added Refresh List for remote options lists.

  • Added $position expression variable to return the index of an item within a list.

  • Spreadsheet File: Fixed issue when saving dates.

Anthr@x, Felipe Cecagno

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-07-18

Please note that this version contains a breaking change. You can read more about it [here](https://github.com/n8n-io/n8n/blob/master/packages/cli/BREAKING-CHANGES.md#01300). The features that introduced the breaking changes have been flagged below.

Core Functionality

  • Fixed an issue where failed workflows were displayed as "running".

  • Fixed issues with uncaught errors.

  • Notion: Fixed issue when filtering field data type.

Michael Hirschler, Mika Luhta, Pierre Lanvin

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-07-12

  • Baserow

  • SSH: Fixed issue with access rights when downloading files.

Jérémie Pardou-Piquemal

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-07-11

  • Home Assistant

  • Stripe

  • HTTP Request: Added support for arrays in Querystring. Any parameter appearing multiple times with the same name is grouped into an array (illustrated in the sketch after this list).

  • Mautic: Added Contact Segment resource.

  • Telegram: Added Delete operation to the Message resource.
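As an illustration of the grouping behaviour described above, the standalone JavaScript sketch below shows how repeated query-string parameters collapse into an array. It is only a conceptual model, not n8n's internal implementation, and the parameter names are made up:

```js
// Conceptual sketch: group repeated query-string parameters into arrays.
// Example input: ?tag=alpha&tag=beta&limit=10
const params = new URLSearchParams('tag=alpha&tag=beta&limit=10');

const grouped = {};
for (const [key, value] of params.entries()) {
  if (key in grouped) {
    // The name appeared before: promote the value to an array (or append to it).
    grouped[key] = [].concat(grouped[key], value);
  } else {
    grouped[key] = value;
  }
}

console.log(grouped);
// => { tag: [ 'alpha', 'beta' ], limit: '10' }
```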

Core Functionality

  • Improved performance when loading a large number of historical executions (more than 3 million) when using Postgres.

  • Fixed error handling for unending workflows and display of "unknown" workflow status.

  • Fixed format of Workflow ID when downloading from UI Editor to enable compatibility with importing from CLI.

  • Microsoft SQL: Fixed an issue with sending the connectionTimeout parameter, and creating and updating data using columns with spaces.

Kaito Udagawa, Rodrigo Correia

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-07-04

Please note that this version contains a breaking change. You can read more about it here. The features that introduced the breaking changes have been flagged below.

  • Airtable: Added Bulk Size option to all Operations.
  • Box: Added Share operation to File and Folder resources.
  • Salesforce: Added Last Name field to Update operation on Contact resource.
  • Zoho CRM: Added Account, Contact, Deal, Invoice, Product, Purchase, Quote, Sales Order, and Vendor resources.

Core Functionality

  • Added a workflow testing framework using a new CLI command to execute all desired workflows. Run n8n executeBatch --help for details.

  • Added support to display binary video content in Editor UI.

  • Google Sheets: Fixed an issue with handling a 0 value that resulted in empty cells.

  • SSH: Fixed an issue with setting passphrases.

flybluewolf, Kaito Udagawa

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-06-29

Core Functionality

  • Fixed issues with keyboard shortcuts when a modal was open.

  • Microsoft SQL: Fixed an issue with handling of Boolean values when inserting.

  • Pipedrive: Fixed an issue with the node icon.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-06-27

Core Functionality

  • Templates from the n8n Workflows page can now be directly imported by appending /workflows/templates/<templateId> to your instance base URL. For example, localhost:5678/workflows/templates/1142.

  • Added new Editor UI shortcuts. See Keyboard Shortcuts for details.

  • Fixed an issue causing console errors when deleting a node from the canvas.

  • Ghost: Fixed an issue with the Get All operation functionality.

  • Google Analytics: Fixed an issue that caused an error when attempting to sort with no data present.

  • Microsoft SQL: Fixed an issue when escaping single quotes and mapping empty fields.

  • Notion: Fixed an issue with pagination of databases and users.

calvintwr, Jan Baykara

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-06-20

  • Spotify: Added Search operation to Album, Artist, Playlist, and Track resources, and Resume and Volume operations to Player resource.

Core Functionality

  • Implemented new design of the Nodes Panel, adding categories and subcategories, along with improved search. For full details, see the commits.

  • MySQL: Fixed an issue where n8n was unable to save data due to collation, resulting in workflows ending with Unknown status.

Amudhan Manivasagam, Carlos Alexandro Becker, Kaito Udagawa

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-06-16

Core Functionality

  • Improved error log messages
  • Fixed an issue where the tags got removed when deactivating the workflow or updating settings
  • Removed the circular references for the error caused by the request library

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-06-13

  • Google Drive: Added APP Properties and Properties options to the Upload operation of the File resource
  • HTTP Request: Added the functionality to log the request to the browser console for testing
  • Notion: Added the Include Time parameter for date field types
  • Salesforce: Added Upsert operation to Account, Contact, Custom Object, Lead, and Opportunity resources
  • Todoist: Added the Description option to the Task resource

Core Functionality

  • Implemented the functionality to display the error details in a toast message for trigger nodes

  • Improved error handling by removing circular references from API errors

  • Jira: Fixed an issue with the API version and fixed an issue with fetching the custom fields for the Issue resource

Jean M, romaincolombo-daily, Thomas Jost, Vincent

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-06-06

Core Functionality

  • Fixed a build issue for missing node icons

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-06-06

  • Git

  • Microsoft To Do

  • Pipedrive: Added a feature to fetch data from the Pipedrive API, added Search operation to the Deals resource, and added custom fields option

  • Spotify: Added My Data resource

Core Functionality

  • Fixed issues with NodeViewNew navigation handling

  • Fixed an issue with the view crashing with large requests

  • AWS Transcribe: Fixed issues with options

Rodrigo Correia, Sam Roquitte

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-06-04

Core Functionality

  • Fixed error messages for the text area field

  • Added the missing winston dependency

  • Fixed an issue with adding values using the Variable selector, so that deleted values no longer reappear

  • Fixed an issue with the Error Workflows not getting executed in the queue mode

  • Notion: Fixed an issue with parsing the last edited time

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-05-31

  • Function: Added console.log support for writing to the browser console (a short example follows this list)
  • Function Item: Added console.log support for writing to the browser console
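By way of illustration, a minimal Function node snippet using this could look like the sketch below (assuming the node's usual items input and return items contract); the log output then appears in the browser's developer console while testing in the editor:

```js
// Log the incoming items to the browser console for debugging,
// then pass them through unchanged.
console.log('Incoming items:', JSON.stringify(items, null, 2));

return items;
```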

Core Functionality

  • Fixed an issue that prevented clicking on tags
  • Fixed an issue with escaping the workflow name
  • Fixed an issue with selecting variables in the Expression Editor

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-05-30

Core Functionality

  • Fixed an issue with the order in migration rollback

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-05-30

Core Functionality

  • Changed bcrypt library from @node-rs/bcrypt to bcryptjs

  • Fixed an issue with optional parameters that have the same name

  • Added the functionality to tag workflows

  • Fixed errors in the Expression Editor

  • Fixed an issue with nodes that are only connected to the second input. This solves the problem of copying and pasting workflows where only one output of the IF node is connected to a node

  • Google Drive: Fixed an issue with the Drive resource

  • Notion: Fixed an issue with the filtering fields type and fixed an issue with the link option

  • Switch: Fixed an issue with the Expression mode

Alexander Mustafin

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-06-01

Core Functionality

  • Fixed an issue with copying the output values
  • Fixed issues with the Expression Editor
  • Made improvements to the Expression Editor

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-05-20

  • Notion

  • Notion Trigger

  • GraphQL: Added Header Auth authentication method

  • Twilio: Added API Key authentication method

  • HubSpot: Fixed an issue with pagination for Deals resource

  • Keap: Fixed an issue with the data type of the Order Title field

  • Orbit: Fixed an issue with the activity type in Post operation

  • Slack: Fixed an issue with the Get Profile operation

  • Strava: Fixed an issue with the paging parameter

Jacob Spizziri

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-05-17

  • iCalendar

  • Google Cloud Firestore: Added the functionality for GeoPoint parsing and added ISO-8601 format for date validation

  • IMAP Email: Added the Force reconnect option

  • Paddle: Added the Use Sandbox environment API parameter

  • Spotify: Added the Position parameter to the Add operation of the Playlist resource

  • WooCommerce: Added the Include Credentials in Query parameter

Core Functionality

  • Added await to hooks to fix issues with the Unknown status of the workflows

  • Changed the data type of the credentials_entity field for MySQL database to fix issues with long credentials

  • Fixed an issue with the ordering of the executions when the list is auto-refreshed

  • Added the functionality that allows reading sibling parameters

  • Clockify Trigger: Fixed an issue that occurred when the node returned an empty array

  • Google Cloud Firestore: Fixed an issue with parsing an empty document, and an issue with date detection

  • HubSpot: Fixed an issue with the Return All option

DeskYT, Daniel Lazaro, DerEnderKeks, mdasmendel

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-05-09

Core Functionality

  • Implemented timeout for subworkflows

  • Removed the deregistration webhooks functionality from the webhook process

  • Google Cloud Firestore: Fixed an issue with parsing null value

  • Google Sheets: Fixed an issue with the Key Row parameter

  • HubSpot: Fixed an issue with the authentication

Nikita

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-05-05

Core Functionality

  • Fixed an issue with error workflows

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-05-02

Please note that this version contains a breaking change. You can read more about it here. The features that introduced the breaking changes have been flagged below.

  • Kitemaker

  • MQTT

  • CrateDB: Added query parameters. The Execute Query operation returns the result from all queries executed instead of just one of the results.

  • ERPNext: Added support for self-hosted ERPNext instances

  • FTP: Added the functionality to delete folders

  • Google Calendar: Added the Continue on Fail functionality

  • Google Drive: Added the functionality to set a file name when downloading files

  • Gmail: Added functionality to handle multiple binary properties

  • Microsoft Outlook: Added Is Read and Move option to the Message resource

  • Postgres: Added query parameters. The Execute Query operation returns the result from all queries executed instead of just one of the results.

  • QuestDB: Added query parameters. The Execute Query operation returns the result from all queries executed instead of just one of the results.

  • QuickBase: Added option to use Field IDs

  • TimescaleDB: Added query parameters. The Execute Query operation returns the result from all queries executed instead of just one of the results.

  • Twist: Added Get, Get All, Delete, and Update operations to the Message Conversation resource. Added Archive, Unarchive, and Delete operations to the Channel resource. Added Thread and Comment resource

Core Functionality

  • Implemented the native fs/promises library where possible

  • Added the functionality to output logs to the console or a file

  • We have updated the minimum required version for Node.js to v14.15. For more details, check out the entry in the breaking changes page

  • GetResponse Trigger: Fixed an issue with error handling

  • GitHub Trigger: Fixed an issue with error handling

  • GitLab Trigger: Fixed an issue with error handling

  • Google Sheets: Fixed an issue with the Lookup operation for returning empty rows

  • Orbit: Fixed issues with the Post resource

  • Redis: Fixed an issue with the node not returning an error

  • Xero: Fixed an issue with the Create operation for the Contact resource

Gustavo Arjones, lublak, Colton Anglin, Mika Luhta

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-04-24

Please note that this version contains a breaking change. You can read more about it here. The features that introduced the breaking changes have been flagged below.

  • Mailcheck

  • n8n Trigger

  • Workflow Trigger

  • CrateDB: Added the Mode option that allows you to execute queries as transactions

  • Nextcloud: Added Delete, Get, Get All, and Update operation to the User resource

  • Postgres: Added the Mode option that allows you to execute queries as transactions

  • QuestDB: Added the Mode option that allows you to execute queries as transactions

  • Salesforce: Added Owner option to the Case and Lead resources. Added custom fields to Create and Update operations of the Case resource

  • Sentry.io: Added Delete and Update operations to Project, Release, and Team resources

  • TimescaleDB: Added the Mode option that allows you to execute queries as transactions

  • Zendesk Trigger: Added support to retrieve custom fields

Core Functionality

  • The Activation Trigger node has been deprecated. It has been replaced by two new nodes - the n8n Trigger and the Workflow Trigger node. For more details, check out the entry in the breaking changes page

  • Added the functionality to open the New Credentials dropdown by default

  • Google Sheets: Fixed an issue with the Lookup operation for returning multiple empty rows

  • Intercom: Fixed an issue with the User operation in the Company resource

  • Mautic: Fixed an issue with sending the lastActive parameter

Bart Vollebregt, Ivan Timoshenko, Konstantin Nosov, lublak, Umair Kamran

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-04-20

Core Functionality

  • Fixed a timeout issue with the workflows in the main process

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-04-17

  • Google BigQuery

  • Webflow

  • Date & Time: Added Calculate a Date action that allows you to add or subtract time from a date

  • GitLab: Added Get, Get All, Update, and Delete operations to the Release resource

  • Microsoft OneDrive: Added Delete operation to the Folder resource

  • Monday: Added support for OAuth2 authentication

  • MongoDB: Added Limit, Skip, and Sort options to the Find operation and added Upsert parameter to the Update operation. Added the functionality to close the connection after use

  • MySQL: Added support for insert modifiers and added support for SSL

  • RabbitMQ: Added the functionality to close the connection after use and added support for AMQPS

Core Functionality

  • Changed bcrypt library from bcryptjs to @node-rs/bcrypt

  • Improved node error handling. Status codes and error messages in API responses have been standardized

  • Added global timeout setting for all HTTP requests (except HTTP Request node)

  • Implemented timeout for workers and corrected timeout for sub workflows

  • AWS SQS: Fixed an issue with API version and casing

  • IMAP: Fixed re-connection issue

  • Keap: Fixed an issue with the Opt In Reason parameter

  • Salesforce: Fixed an issue with loading custom fields

Allan Daemon, Anton Romanov, Bart Vollebregt, Cassiano Vailati, entrailz, Konstantin Nosov, LongYinan

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-04-10

Core Functionality

  • Fixed an issue with expressions not being displayed in read-only mode

  • Fixed an issue that didn't allow editing JavaScript code in read-only mode

  • Added support for configuring the maximum payload size

  • Added support to dynamically add menu items

  • Jira: Fixed an issue with loading issue types with classic project type

  • RabbitMQ Trigger: Fixed an issue with the node reusing the same item

  • SendGrid: Fixed an issue with the dynamic field generation

Mika Luhta, Loran, stwonary

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-04-03

Core Functionality

  • Fixed an issue with the Redis connection to prevent memory leaks

  • Bitwarden: Fixed an issue with the Update operation of the Group resource

  • Cortex: Fixed an issue where only the last item got returned

  • Invoice Ninja: Fixed an issue with the Project parameter

  • Salesforce: Fixed an issue with the Get All operation of the Custom Object resource

Agata M, Allan Daemon, Craig McElroy, mjysci

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-03-26

New nodes

  • Activation Trigger
  • Plivo

Enhanced nodes

  • ClickUp: Added Space Tag, Task List, and Task Tag resource
  • GitHub: Added pagination to Get Issues and Get Repositories operations
  • Mattermost: Added Reaction resource and Post Ephemeral operation
  • Move Binary Data: Added Encoding and Add BOM options to the JSON to Binary mode, and a Strip BOM option to the Binary to JSON mode
  • SendGrid: Added Mail resource
  • Spotify: Added Library resource
  • Telegram: Added Answer Inline Query operation to the Callback resource
  • uProc: Added Get ASIN code by EAN code, Get EAN code by ASIN code, Get Email by Social Profile, Get Email by Full name and Company's domain, and Get Email by Full name and Company's name operations

Bug fixes

  • Clearbit: Fixed an issue with the autocomplete URI
  • Dropbox: Fixed an issue with the Dropbox credentials by adding the APP Access Type parameter in the credentials. For more details, check out the entry in the breaking changes page
  • Spotify: Fixed an issue with the Delete operation of the Playlist resource
  • The variable selector now displays empty arrays
  • Fixed a permission issue with the Raspberry Pi Docker image

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-03-19

New nodes

  • DeepL

Enhanced nodes

  • TheHive: Added Mark as Read and Mark as Unread operations and added Ignore SSL Issues parameter to the credentials

Bug fixes

  • AWS SES: Fixed an issue so that CC addresses are mapped correctly
  • Salesforce: Fixed an issue with custom object for Get All operations and fixed an issue with the first name field for the Create and Update operations for the Lead resource
  • Strava: Fixed an issue with the access tokens not getting refreshed
  • TheHive: Fixed an issue with the case resolution status
  • Fixed an issue with importing separate decrypted credentials
  • Fixed issues with the sub-workflows not finishing
  • Fixed an issue with the sub-workflows running on the main process
  • Fixed concurrency issues with sub-workflows

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-03-12

New nodes

  • Autopilot
  • Autopilot Trigger
  • Wise
  • Wise Trigger

Enhanced nodes

  • Box: Added Get operation to the Folder resource
  • Dropbox: Added Search operation to the File resource. All operations are now performed relative to the user's root directory. For more details, check out the entry in the breaking changes page
  • Facebook Graph API: Added new API versions
  • Google Drive: Added Update operation to the File resource
  • HubSpot: Added the Deal Description option
  • Kafka: Added the SASL mechanism
  • Monday.com: Added Move operation to Board Item resource
  • MongoDB: Added Date field to the Insert and Update operations
  • Microsoft SQL: Added connection timeout parameter to credentials
  • Salesforce: Added Mobile Phone field to the Lead resource
  • Spotify: Added Create a Playlist operation to Playlist resource and Get New Releases to the Album resource

Bug fixes

  • Airtable: Fixed a bug with updating and deleting records
  • Added the functionality to expose metrics to Prometheus. Read more about that here
  • Updated fallback values to match the value type
  • Added the functionality to display debugging information for pending workflows on exit
  • Fixed an issue with queue mode for the executions that shouldn't be saved
  • Fixed an issue with workflows crashing and displaying Unknown status in the execution list
  • Fixed an issue to prevent crashing while saving execution data when the data field exceeds 64KB in MySQL
  • Updated jws-rsa to version 1.12.1

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-03-04

Bug fixes

  • APITemplate.io: Fixed an issue with the naming of the node

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-03-04

New nodes

  • APITemplate.io
  • Bubble
  • Lemlist
  • Lemlist Trigger

Enhanced nodes

  • Microsoft Teams: Added option to reply to a message

Bug fixes

  • Dropbox: Fixed an issue with parsing the response with the Upload operation
  • Gmail: Fixed an issue with the scope for the Service Account authentication method and fixed an issue with the label filter
  • Google Drive: Fixed an issue with the missing Parent ID field for the Create operation and fixed an issue with the Permissions field
  • HelpScout: Fixed an issue with sending tags when creating a conversation
  • HTTP Request: Fixed an issue with the raw data and file response
  • HubSpot: Fixed an issue with the OAuth2 credentials
  • Added support for Date & Time in the IF node and the Switch node
  • Fixed an issue with mouse selection when zooming in or out
  • Fixed an issue with currently executing workflows when using queue mode with Postgres
  • Fixed naming and description for the N8N_SKIP_WEBHOOK_DEREGISTRATION_SHUTDOWN environment variable
  • Fixed an issue with auto-refresh of the execution list

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-02-22

New nodes

  • Bitwarden
  • Emelia
  • Emelia Trigger
  • GoToWebinar
  • Raindrop

Enhanced nodes

  • AWS Rekognition: Added the Detect Text type to the Analyze operation for the Image resource
  • Google Calendar: Added RRULE parameter to the Get All operation for the Event resource
  • Jira: Added User resource and operations
  • Reddit: Added the Search operation for the Post resource
  • Telegram: Added the Send Location operation

Bug fixes

  • RocketChat: Fixed error responses
  • Fixed the issue which caused the execution history of subworkflows (workflows started using the Execute Workflow node) not to be saved
  • Added an option to export the credential data in plain text format using the CLI

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-02-15

New nodes

  • Demio
  • PostHog
  • QuickBooks

Enhanced nodes

  • Trello: Added Create Checklist Item operation to the Checklist resource
  • Webhook: Removed trailing slash in routes and updated logic to select dynamic webhook

Bug fixes

  • Google Drive: Fixed an issue with returning the fields the user selects for the Folder and File resources
  • Twitter: Fixed a typo in the description
  • Webhook: Fixed logic for static route matching
  • Added the functionality to sort the values that you add in the IF node, Rename node, and the Set node
  • Added the functionality to optionally save execution data after each node
  • Added queue mode to scale workflow execution
  • Separated webhook processing from the core so webhooks can be scaled separately
  • Fixed an issue with current execution query for unsaved running workflows
  • Fixed an issue with the regex that detected node names
  • n8n now generates a unified execution ID instead of two separate IDs for currently running and saved executions

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-02-08

New nodes

  • AWS Comprehend
  • GetResponse Trigger
  • Peekalink
  • Stackby

Enhanced nodes

  • AWS SES: Added Custom Verification Email resource
  • Microsoft Teams: Added Task resource
  • Twitter: Added Delete operation to the Tweet resource

Bug fixes

  • Google Drive: Fixed an issue with the Delete and Share operations
  • FileMaker: Fixed an issue with the script list parsing
  • Updated Node.js version of Docker images to 14.15
  • Added a shortcut CTRL + scroll to zoom

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-02-05

New nodes

  • Reddit
  • Tapfiliate

Enhanced nodes

  • Airtable Trigger: Added Download Attachment option
  • HubSpot: Added Custom Properties option to the Create and Update operations of the Company resource
  • MySQL: Added Connection Timeout parameter to the credentials
  • Telegram: Added Pin Chat Message and Unpin Chat Message operations for the Message resource

Bug fixes

  • Typeform: Fixed an issue with the OAuth2 authentication method
  • Added support for s and u flags for regex in the IF node and the Switch node

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-02-01

  • New nodes
  • Discourse
  • SecurityScorecard
  • TimescaleDB
  • Enhanced nodes
  • Affinity: Added List and List Entry resource
  • Asana: Added Project IDs option to the Create operation of the Task resource
  • HubSpot Trigger: Added support for multiple subscriptions
  • Jira: Added Issue Attachment resource and added custom fields to Create and Update operations of the Issue resource
  • Todoist: Added Section option
  • Bug fixes
  • SIGNL4: Fixed an issue with the attachment functionality
  • Added variable $mode to check the mode in which the workflow is being executed

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-01-27

  • Fixed an issue with the credentials parameters that have the same name

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-01-26

  • Fixed a bug with expressions in credentials

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-01-26

  • New nodes
  • Compression
  • Enhanced nodes
  • GitHub: Added Invite operation to the User resource
  • EmailReadImap: Increased the authentication timeout
  • Mautic: Added Custom Fields option to the Create and Update operations of the Contact resource. Also, the Mautic OAuth credentials have been updated. Now you don't have to enter the Authorization URL and the Access Token URL
  • Nextcloud: Added User resource
  • Slack: Added Get Permalink and Delete operations to the Message resource
  • Webhook: Added support for request parameters in webhook paths
  • Bug fixes
  • Google Drive: Fixed the default value for the Send Notification Email option
  • Added support for expressions to credentials
  • Removed support for MongoDB as a database for n8n. For more details, check out the entry in the breaking changes page

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-01-21

  • Bug fixes
  • Trello: Fixed the icon

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-01-21

  • New nodes
  • SendGrid
  • Enhanced nodes
  • AMQP: Added Container ID, Reconnect, and Reconnect limit options
  • AMQP Trigger: Added Container ID, Reconnect, and Reconnect Limit options
  • GitHub: Added Review resource
  • Google Drive: Added Drive resource
  • Trello: Added Get All and Get Cards operation to the List resource
  • Bug fixes
  • AWS Lambda: Fixed an issue with signature
  • AWS SNS: Fixed an issue with signature
  • Fixed an issue with nodes not executing if two inputs get passed and one of them doesn't return any data
  • The code editor no longer closes when you click anywhere outside the editor
  • Added CLI commands to export and import credentials and workflows
  • The title in the browser tab now resets for new workflows

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-01-15

  • New nodes
  • Beeminder
  • Enhanced nodes
  • Crypto: Added hash type SHA384
  • Google Books: Added support for user impersonation
  • Google Drive: Added support for user impersonation
  • Google Sheets: Added support for user impersonation
  • Gmail: Added support for user impersonation
  • Microsoft Outlook: Added support for a shared mailbox
  • RabbitMQ: Added Exchange mode
  • Salesforce: Added filters to all Get All operations
  • Slack: Made changes to the properties As User and Ephemeral. For more details, check out the entry in the breaking changes page
  • Typeform Trigger: The node now displays the recall information in the question in square brackets. For more details, check out the entry in the breaking changes page
  • Zendesk: Removed the Authorization URL and Access Token URL fields from the OAuth2 credentials. The node now uses the subdomain passed by a user to connect to Zendesk.
  • Bug fixes
  • CoinGecko: Fixed an issue to process multiple input items correctly

For a comprehensive list of changes, check out the commits for this version.
Release date: 2021-01-07

  • New nodes
  • Google Analytics
  • PhantomBuster
  • Enhanced nodes
  • AWS: Added support for custom endpoints
  • Gmail: Added an option to send messages formatted as HTML
  • Philips Hue: Added Room/Group name to Light name to make it easier to identify lights
  • Slack: Added ephemeral message option
  • Telegram: Removed the Bot resource as the endpoint is no longer supported
  • Bug fixes
  • E-goi: Fixed the name of the node
  • Edit Image: Fixed an issue with the Border operation
  • HTTP Request: Fixed batch sizing to work when batchSize = 1
  • PayPal: Fixed a typo in the Environment field
  • Split In Batches: Fixed a typo in the description
  • Telegram: Fixed an issue with the Send Audio operation
  • Based on your settings, vacuum runs on SQLite on startup
  • Updated Axios to version 0.21.1

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-12-30

  • New nodes
  • Microsoft Outlook
  • Enhanced nodes
  • ActiveCampaign: The node loads more options for the fields
  • Asana: Added Subtask resource and Get All operation for the Task resource
  • Edit Image: Added Multi Step operation
  • HTTP Request: Added Use Querystring option
  • IF: Added Ends With and Starts With operations
  • Jira: Added Issue Comment resource
  • Switch: Added Ends With and Starts With operations
  • Telegram: Added File resource
  • Bug fixes
  • Box Trigger: Fixed a typo in the description
  • Edit Image: Fixed an issue with multiple composite operations
  • HTTP Request: Fixed an issue with the binary data getting used by multiple nodes
  • S3: Fixed an issue with uploading files
  • Stripe Trigger: Fixed an issue with the existing webhooks
  • Telegram: Fixed an issue with the Send Audio operation
  • Binary data stays visible if a node gets re-executed

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-12-24

  • Fixed a bug that caused HTML to render in JSON view

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-12-24

  • New nodes
  • e-goi
  • RabbitMQ
  • RabbitMQ Trigger
  • uProc
  • Enhanced nodes
  • ActiveCampaign: Added the functionality to load the tags for a user
  • FTP: Added Delete and Rename operation
  • Google Cloud Firestore: The node now gives the Collection ID in response
  • Iterable: Added User List resource
  • MessageBird: Added Balance resource
  • TheHive Trigger: Added support for the TheHive3 webhook events, and added Log Updated and Log Deleted events
  • Bug fixes
  • Dropbox: Fixed an issue with the OAuth credentials
  • Google Sheets: Fixed an issue with the parameters getting hidden for other operations
  • Added functionality to copy the data and the path from the output
  • Fixed an issue with the node getting selected after it was duplicated

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-12-16

  • New nodes
  • Brandfetch
  • Pushcut
  • Pushcut Trigger
  • Enhanced nodes
  • Google Sheets: Added Spreadsheet resource
  • IF: Added Is Empty option
  • Slack: Added Reaction and User resource, and Member operation to the Channel resource
  • Spreadsheet File: Added the option Include Empty Cell to display empty cells
  • Webhook: Added option to send a custom response body. The node can now also return string data
  • Bug fixes
  • GitLab: Fixed an issue with GitLab OAuth credentials. You can now specify your GitLab server to configure the credentials
  • Mautic: Fixed an issue with the OAuth credentials
  • If a workflow is using the Error Trigger node, by default, the workflow will use itself as the Error Workflow
  • Fixed a bug that caused the Editor UI to display an incorrect (save) state upon activating or deactivating a workflow

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-12-10

  • New nodes
  • Ghost
  • NASA
  • Snowflake
  • Twist
  • Enhanced nodes
  • Automizy: Added options to add and remove tabs for the Update operation of the Contact resource
  • Pipedrive: Added label field to Person, Organization, and Deal resources. Also added Update operation for the Organization resource
  • Bug fixes
  • Fixed a bug that caused OAuth1 requests to break
  • Fixed Docker user mount path

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-12-03

  • New nodes
  • Cortex
  • Iterable
  • Kafka Trigger
  • TheHive
  • TheHive Trigger
  • Yourls
  • Enhanced nodes
  • HubSpot: Added Contact List resource and Search operation for the Deal resource
  • Google Calendar: You can now add multiple attendees in the Attendees field
  • Slack: The node now loads both private and public channels
  • Bug Fixes
  • MQTT: Fixed an issue with the connection. The node now uses mqtt@4.2.1
  • Fixed a bug which caused the Trigger-Nodes to require data from the first output
  • Added configuration to load only specific nodes

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-11-25

  • Bug Fixes
  • Airtable Trigger: Fixed the icon of the node

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-11-25

  • New nodes
  • Airtable Trigger
  • LingvaNex
  • OpenThesaurus
  • ProfitWell
  • Quick Base
  • Spontit
  • Enhanced nodes
  • Airtable: The Application ID field has been renamed to Base ID, and the Table ID field has been renamed to Table. The List operation now downloads attachments automatically
  • Harvest: Moved the account field from the credentials to the node parameters. For more details, check out the entry in the breaking changes page
  • Bug Fixes
  • Slack: Fixed an issue with creating channels and inviting users to a channel

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-11-20

  • Bug Fixes
  • GraphQL: Fixed an issue with the variables
  • WooCommerce Trigger: Fixed an issue with the webhook. The node now reuses a webhook if it already exists.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-11-19

  • New nodes
  • Google Cloud Natural Language
  • Google Firebase Cloud Firestore
  • Google Firebase Realtime Database
  • Humantic AI
  • Enhanced nodes
  • ActiveCampaign: Added Contact List and List resource
  • Edit Image: Added support for drawing, font selection, creating a new image, and added the Composite resource
  • FTP: Added Private Key and Passphrase fields to the SFTP credentials and made the directory creation more robust
  • IMAP: Increased the timeout
  • Matrix: Added option to send notice, emote, and HTML messages
  • Segment: Made changes to the properties traits and properties. For more details, check out the entry in the breaking changes page
  • Bug Fixes
  • GraphQL: Fixed an issue with the variables
  • Mailchimp: Fixed an issue with the OAuth credentials. The credentials are now sent with the body instead of the header
  • YouTube: Fixed a typo for the Unlisted option
  • Added horizontal scrolling

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-11-11

  • New nodes
  • GetResponse
  • Gotify
  • Line
  • Strapi
  • Enhanced nodes
  • AMQP: Connection is now closed after a message is sent
  • AMQP Trigger: Added Message per Cycle option to retrieve the specified number of messages from the bus for every cycle
  • HubSpot: Added Custom Properties for the Deal resource as Additional Fields
  • Jira: The node retrieves all the projects for the Project field instead of just 50
  • Mattermost: Improved the channel selection
  • Microsoft SQL: Added TLS parameter for the credentials
  • Pipedrive Trigger: Added OAuth authentication method. For more details, check out the entry in the breaking changes page
  • Segment: Added Custom Traits option for the Traits field
  • Bug Fixes
  • Shopify Trigger: Fixed an issue with activating the workflow
  • For custom nodes, you can now set custom documentation URLs

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-11-04

  • New nodes
  • Facebook Trigger
  • Google Books
  • Orbit
  • Storyblok
  • Enhanced nodes
  • Google Drive: Removed duplicate parameters
  • Twitter: Added Direct Message resource
  • Bug Fixes
  • Gmail: Fixed an issue with the encoding for the subject field
  • Improved the Editor UI for the save workflow functionality

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-10-23

  • New nodes
  • Kafka
  • MailerLite
  • MailerLite Trigger
  • Pushbullet
  • Enhanced nodes
  • Airtable: Added Ignore Fields option for the Update operation
  • AMQP Sender: Added Azure Service Bus support
  • Google Calendar: Added Calendar resource and an option to add a conference link
  • G Suite Admin: Added Group resource
  • HTTP Request: Added Batch Size and Batch Interval option
  • Mautic: Added Company resource
  • Salesforce: Added OAuth 2.0 JWT authentication method
  • Bug Fixes
  • IF: Fixed an issue with undefined expression
  • Paddle: Fixed an issue with the Return All parameter
  • Switch: Fixed an issue with undefined expression
  • Added CLI commands to deactivate the workflow
  • Added an option to get the full execution data from the server
  • The Editor UI gives an alert if you redirect without saving a workflow
  • The Editor UI now indicates if a workflow is saved or not
  • Improved support for touch devices
  • Node properties now load on demand
  • Updated the Node.js version for the Docker images

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-10-23

  • Added a check for the Node.js version on startup. For more details, check out the entry in the breaking changes page
  • Bug Fixes
  • Google Translate: Fixed an issue with the rendering of the image in n8n.io

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-10-22

  • Bug Fixes
  • Strava Trigger: Fixed a typo in the node name

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-10-22

  • Removed debug messages

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-10-22

  • New Nodes
  • Pushover
  • Strava
  • Strava Trigger
  • Google Translate
  • Bug Fixes
  • HTTP Request: Fixed an issue with the POST request method for the 'File' response format
  • Fixed issue with displaying non-active workflows as active
  • Fixed an issue related to multiple-webhooks

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-10-16

  • Bug Fixes
  • HTTP Request: Fixed an issue with the Form-Data Multipart and the RAW/Custom Body Content Types

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-10-16

  • Enhanced nodes
  • Matrix: Added support for specifying a Matrix Homeserver URL
  • Salesforce: Added Custom Object resource and Custom Fields and Sort options
  • Bug Fixes
  • AWS SES: Fixed an issue with the Send Template operation for the Email resource
  • AWS SNS Trigger: Fixed an issue with the Subscriptions topic

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-10-15

  • Bug Fixes
  • Google Sheets: Fixed an issue with spaces in sheet names
  • Automizy: Fixed an issue with the default resource

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-10-15

  • Bug Fixes
  • Gmail: Fixed an issue with the Message ID
  • HTTP Request: Fixed an issue with the GET Request
  • Added HMAC-SHA512 signature method for OAuth 1.0

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-10-14

  • New nodes
  • Automizy
  • AWS Rekognition
  • Matrix
  • Sendy
  • Vonage
  • WeKan
  • Enhanced nodes
  • AWS SES: Added Send Template operation for the Email resource and added the Template resource
  • ClickUp: Added Time Entry and Time Entry Tag resources
  • Function: The Function field is now called the JavaScript Code field
  • Mailchimp: Added Campaign resource
  • Mindee: Added currency to the simplified response
  • OneDrive: Added Share operation
  • OpenWeatherMap: Added Language parameter
  • Pipedrive: Added additional parameters to the Get All operation for the Note resource
  • Salesforce: Added Flow resource
  • Spreadsheet File: Added Range option for the Read from file operation
  • Bug Fixes
  • ClickUp Trigger: Fixed issue with creating credentials
  • Pipedrive Trigger: Fixed issue with adding multiple webhooks to Pipedrive
  • The link.fish Scrape node has been removed from n8n. For more details, check out the entry in the breaking changes page

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-10-06

  • Enhanced nodes
  • CoinGecko: Small fixes to the CoinGecko node

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-10-05

  • New nodes
  • Clockify
  • CoinGecko
  • G Suite Admin
  • Mindee
  • Wufoo Trigger
  • Enhanced nodes
  • Slack: Added User Profile resource
  • Mattermost: Added Create and Invite operations for the User resource
  • Bug Fixes
  • S3: Fixed issue with uploading files
  • Webhook ID gets refreshed on node duplication

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-09-30

  • Enhanced nodes
  • Postgres: Added Schema parameter for the Update operation
  • Bug Fixes
  • Jira: Fixed a bug with the Issue Type field
  • Pipedrive Trigger: Fixed issues with the credentials
  • Changed the bcrypt library to bcrypt.js to make it compatible with Windows
  • The OAuth callback URLs are now generated in the backend

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-09-23

  • Bug Fixes
  • Google Sheets: Fixed issues with the update and append operations

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-09-23

  • Fixed an issue with the build by setting jwks-rsa to an older version

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-09-23

  • Fixed an issue with the OAuth window. The OAuth window now closes after authentication is complete

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-09-23

  • Additional endpoints can be excluded from authentication checks. Multiple endpoints can be added separated by colons

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-09-23

  • Enhanced nodes
  • Twitter: Added support for auto mention of users in reply tweets
  • Bug Fixes
  • Google Sheets: Fixed issue with non-Latin sheet names
  • HubSpot: Fixed naming of credentials
  • Microsoft: Fixed naming of credentials
  • Mandrill: Fixed attachments with JSON parameters
  • Expressions now use short variables when selecting input data for the current node
  • Fixed issue with renaming credentials for active workflows

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-09-18

  • New nodes
  • LinkedIn
  • Taiga
  • Taiga Trigger
  • Enhanced nodes
  • ActiveCampaign: Added multiple functions, read more here
  • Airtable: Added typecast functionality
  • Asana: Added OAuth2 support
  • ClickUp: Added OAuth2 support
  • Google Drive: Added share operation
  • IMAP Email: Added support for custom rules when checking emails
  • Sentry.io: Added support for self-hosted version
  • Twitter: Added retweet, reply, and like operations
  • WordPress: Added author field to the post resource
  • Bug Fixes
  • Asana Trigger: Webhook validation has been deactivated
  • Paddle: Fixed returnData format and coupon description
  • The ActiveCampaign node has breaking changes
  • Fixed issues with test-webhook registration

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-09-14

  • Speed for basic authentication with hashed password has been improved

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-09-14

  • New nodes
  • Microsoft Teams
  • Enhanced nodes
  • Freshdesk: Added Freshdesk contact resource
  • HTTP Request: Run parallel requests in HTTP Request Node
  • Bug Fixes
  • Philips Hue: Added APP ID to Philips Hue node credentials
  • Postmark Trigger: Fixed parameters for the node
  • The default space between nodes has been increased to two units
  • Expression support has been added to the credentials
  • Passwords for your n8n instance can now be hashed

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-09-09

  • New nodes
  • Sentry.io
  • Enhanced nodes
  • Asana
  • ClickUp
  • Clockify
  • Google Contacts
  • Salesforce
  • Segment
  • Telegram
  • Telegram Trigger

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-09-02

  • New nodes
  • Customer.io
  • MQTT Trigger
  • S3
  • Enhanced nodes
  • Acuity Scheduling
  • AWS S3
  • ClickUp
  • FTP
  • Telegram Trigger
  • Zendesk

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-08-30

  • The bug that caused the workflows to not get activated correctly has been fixed

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-08-28

  • Added missing rawBody for "application/x-www-form-urlencoded"

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-08-28

  • Enhanced nodes
  • Contentful
  • HTTP Request
  • Postgres
  • Webhook
  • The test webhook now also gets removed if checkExists fails
  • The HTTP Request node doesn't overwrite the Accept header if it's already set
  • Added rawBody to every request so that n8n doesn't give an error if the body is missing

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-08-27

  • New nodes
  • Contentful
  • ConvertKit
  • ConvertKit Trigger
  • Paddle
  • Enhanced nodes
  • Airtable
  • Coda
  • Gmail
  • HubSpot
  • IMAP Email
  • Postgres
  • Salesforce
  • SIGNL4
  • Todoist
  • Trello
  • YouTube
  • The Todoist node has breaking changes
  • Added dynamic titles on workflow execution
  • Nodes will now display a link to associated credential documentation

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-08-18

  • New nodes
  • Gmail
  • Google Contacts
  • Unleashed Software
  • YouTube
  • Enhanced nodes
  • AMQP
  • AMQP Trigger
  • Bitly
  • Function Item
  • Google Sheets
  • Shopify
  • Todoist
  • Enhanced support for JWT based authentication
  • Added an option to execute a node once, using data of only the first item

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-08-05

  • New nodes
  • Customer.io Trigger
  • FTP
  • Medium
  • Philips Hue
  • TravisCI
  • Twake
  • Enhanced nodes
  • CrateDB
  • Move Binary Data
  • Nodes will now display a link to associated documentation

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-07-26

  • New nodes
  • Box
  • Box Trigger
  • CrateDB
  • Jira Trigger
  • Enhanced nodes
  • GitLab
  • Nextcloud
  • Pipedrive
  • QuestDB
  • Webhooks now support OPTIONS request

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-07-15

  • New nodes
  • Hacker News
  • QuestDB
  • Xero
  • Enhanced nodes
  • Affinity Trigger
  • HTTP Request
  • Mailchimp
  • MongoDB
  • Pipedrive
  • Postgres
  • UpLead
  • Webhook
  • Webhook URLs are now independent of the workflow ID: they use https://{hostname}/webhook/{path} instead of the older https://{hostname}/webhook/{workflow_id}/{node_name}/{path}.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-07-08

  • Enhanced nodes
  • Microsoft SQL

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-07-08

  • New nodes
  • CircleCI
  • Microsoft SQL
  • Zoom
  • Enhanced nodes
  • Postmark Trigger
  • Salesforce
  • It's now possible to set default values for credentials that get prefilled and that the user can't change.

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-07-02

  • Enhanced nodes
  • Drift
  • Eventbrite Trigger
  • Facebook Graph API
  • Pipedrive
  • Fixed credential issue for the Execute Workflow node

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-06-25

  • New nodes
  • Google Tasks
  • SIGNL4
  • Spotify
  • Enhanced nodes
  • HubSpot
  • Mailchimp
  • Typeform
  • Webflow
  • Zendesk
  • Added Postgres SSL support
  • It's now possible to deploy n8n under a subfolder

For a comprehensive list of changes, check out the commits for this version.
Release date: 2020-06-13

  • Enhanced nodes
  • GitHub
  • Mautic Trigger
  • Monday.com
  • MongoDB
  • Fixed the issue with multiuser-setup

Examples:

Example 1 (unknown):

n8n license:info

Example 2 (unknown):

// in AsanaApi.credentials.ts
import {
	IAuthenticateGeneric,
	ICredentialType,
	INodeProperties,
} from 'n8n-workflow';

export class AsanaApi implements ICredentialType {
	name = 'asanaApi';
	displayName = 'Asana API';
	documentationUrl = 'asana';
	properties: INodeProperties[] = [
		{
			displayName: 'Access Token',
			name: 'accessToken',
			type: 'string',
			default: '',
		},
	];

	authenticate: IAuthenticateGeneric = {
		type: 'generic',
		properties: {
			headers: {
				Authorization: '=Bearer {{$credentials.accessToken}}',
			},
		},
	};
}

Sendy node

URL: llms-txt#sendy-node

Contents:

  • Operations
  • Templates and examples

Use the Sendy node to automate work in Sendy, and integrate Sendy with other applications. n8n has built-in support for a wide range of Sendy features, including creating campaigns, and adding, counting, deleting, and getting subscribers.

On this page, you'll find a list of operations the Sendy node supports and links to more resources.

Refer to Sendy credentials for guidance on setting up authentication.

  • Campaign
    • Create a campaign
  • Subscriber
    • Add a subscriber to a list
    • Count subscribers
    • Delete a subscriber from a list
    • Unsubscribe user from a list
    • Get the status of subscriber

Templates and examples

Send automated campaigns in Sendy

View template details

Enviar Miembros del CMS Ghost hacia Newsletter Sendy

View template details

🛠️ Sendy Tool MCP Server 💪 6 operations

View template details

Browse Sendy integration templates, or search all templates


Google Books node

URL: llms-txt#google-books-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Google Books node to automate work in Google Books, and integrate Google Books with other applications. n8n has built-in support for a wide range of Google Books features, including retrieving a specific bookshelf resource for the specified user, adding volume to a bookshelf, and getting volume.

On this page, you'll find a list of operations the Google Books node supports and links to more resources.

Refer to Google credentials for guidance on setting up authentication.

  • Bookshelf
    • Retrieve a specific bookshelf resource for the specified user
    • Get all public bookshelf resources for the specified user
  • Bookshelf Volume
    • Add a volume to a bookshelf
    • Clear all volumes from a bookshelf
    • Get all volumes in a specific bookshelf for the specified user
    • Move a volume within a bookshelf
    • Remove a volume from a bookshelf
  • Volume
    • Get a volume resource based on ID
    • Get all volumes filtered by query

Templates and examples

Scrape Books from URL with Dumpling AI, Clean HTML, Save to Sheets, Email as CSV

View template details

Get a volume and add it to your bookshelf

View template details

Transform Books into 100+ Social Media Posts with DeepSeek AI and Google Drive

View template details

Browse Google Books integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.
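
As a minimal sketch (assuming the public Google Books API v1), you could set the HTTP Request node's method to GET and the URL to https://www.googleapis.com/books/v1/volumes?q=automation to search volumes by keyword; any other endpoint documented in the Google Books API reference can be called the same way.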


For example 2022-03-09T14:00:25.058+00:00

URL: llms-txt#for-example-2022-03-09t14:00:25.058+00:00

rightNow = "Today's date is " + str(_now)


Telegram Trigger node

URL: llms-txt#telegram-trigger-node

Contents:

  • Events
  • Options
  • Related resources
  • Common issues

Telegram is a cloud-based instant messaging and voice over IP service. Users can send messages and exchange photos, videos, stickers, audio, and files of any type. On this page, you'll find a list of events the Telegram Trigger node can respond to and links to more resources.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Telegram Trigger integrations page.

Events

  • *: All updates except "Chat Member", "Message Reaction", and "Message Reaction Count" (the default behavior of the Telegram API, as these events produce a large number of updates).
  • Business Connection: Trigger when the bot is connected to or disconnected from a business account, or a user edited an existing connection with the bot.
  • Business Message: Trigger on a new message from a connected business account.
  • Callback Query: Trigger on new incoming callback query.
  • Channel Post: Trigger on new incoming channel post of any kind — including text, photo, sticker, and so on.
  • Chat Boost: Trigger when a chat boost is added or changed. The bot must be an administrator in the chat to receive these updates.
  • Chat Join Request: Trigger when a request to join the chat is sent. The bot must have the can_invite_users administrator right in the chat to receive these updates.
  • Chat Member: Trigger when a chat member's status is updated. The bot must be an administrator in the chat.
  • Chosen Inline Result: Trigger when the result of an inline query chosen by a user is sent. Please see Telegram's API documentation on feedback collection for details on how to enable these updates for your bot.
  • Deleted Business Messages: Trigger when messages are deleted from a connected business account.
  • Edited Business Message: Trigger on new version of a message from a connected business account.
  • Edited Channel Post: Trigger on a new version of a channel post that is known to the bot and was edited.
  • Edited Message: Trigger on a new version of a message that is known to the bot and was edited.
  • Inline Query: Trigger on new incoming inline query.
  • Message: Trigger on new incoming message of any kind — text, photo, sticker, and so on.
  • Message Reaction: Trigger when a reaction to a message is changed by a user. The bot must be an administrator in the chat. The update isn't received for reactions set by bots.
  • Message Reaction Count: Trigger when reactions to a message with anonymous reactions are changed. The bot must be an administrator in the chat. The updates are grouped and can be sent with a delay of up to a few minutes.
  • My Chat Member: Trigger when the bot's chat member status is updated in a chat. For private chats, this update is received only when the bot is blocked or unblocked by the user.
  • Poll: Trigger on new poll state. Bots only receive updates about stopped polls and polls which are sent by the bot.
  • Poll Answer: Trigger when user changes their answer in a non-anonymous poll. Bots only receive new votes in polls that were sent by the bot itself.
  • Pre-Checkout Query: Trigger on new incoming pre-checkout query. Contains full information about checkout.
  • Purchased Paid Media: Trigger when a user purchases paid media with a non-empty payload sent by the bot in a non-channel chat.
  • Removed Chat Boost: Trigger when a boost is removed from a chat. The bot must be an administrator in the chat to receive these updates.
  • Shipping Query: Trigger on new incoming shipping query. Only for invoices with flexible price.

Some events may require additional permissions, see Telegram's API documentation for more information.

Options

  • Download Images/Files: Whether to download attached images or files to include in the output data.
    • Image Size: When you enable Download Images/Files, this configures the size of image to download. Downloads large images by default.
  • Restrict to Chat IDs: Only trigger for events with the listed chat IDs. You can include multiple chat IDs separated by commas.
  • Restrict to User IDs: Only trigger for events with the listed user IDs. You can include multiple user IDs separated by commas.

n8n provides an app node for Telegram. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Telegram's API documentation for details about their API.

For common questions or issues and suggested solutions, refer to Common issues.


Snowflake node

URL: llms-txt#snowflake-node

Contents:

  • Operations
  • Templates and examples

Use the Snowflake node to automate work in Snowflake, and integrate Snowflake with other applications. n8n has built-in support for a wide range of Snowflake features, including executing SQL queries, and inserting rows in a database.

On this page, you'll find a list of operations the Snowflake node supports and links to more resources.

Refer to Snowflake credentials for guidance on setting up authentication.

  • Execute an SQL query.
  • Insert rows in database.
  • Update rows in database.

Templates and examples

Load data into Snowflake

View template details

Create a table, and insert and update data in the table in Snowflake

View template details

Import Productboard Notes, Companies and Features into Snowflake

View template details

Browse Snowflake integration templates, or search all templates


Data flow within nodes

URL: llms-txt#data-flow-within-nodes

Nodes can process multiple items.

For example, if you set the Trello node to Create-Card, and create an expression that sets Name using a property called name-input-value from the incoming data, the node creates a card for each item, always choosing the name-input-value of the current item.

For example, this input will create two cards. One named test1 the other one named test2:

Examples:

Example 1 (unknown):

[
	{
		"name-input-value": "test1"
	},
	{
		"name-input-value": "test2"
	}
]
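
For reference, an expression in the Trello node's Name field that reads this property from each incoming item could look like `{{ $json["name-input-value"] }}` (a minimal sketch; adjust it to match how your incoming data is actually structured).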

Dropcontact credentials

URL: llms-txt#dropcontact-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a developer account in Dropcontact.

Supported authentication methods

  • API key

Refer to Dropcontact's API documentation for more information about the service.

Using API key

To configure this credential, you'll need:

  • An API Key from your Dropcontact account.


Chat with a Google Sheet using AI

URL: llms-txt#chat-with-a-google-sheet-using-ai

Contents:

  • Key features
  • Using the example

Use n8n to bring your own data to AI. This workflow uses the Chat Trigger to provide the chat interface, and the Call n8n Workflow Tool to call a second workflow that queries Google Sheets.

View workflow file

  • Chat Trigger: start your workflow and respond to user chat interactions. The node provides a customizable chat interface.
  • Agent: the key piece of the AI workflow. The Agent interacts with other components of the workflow and makes decisions about what tools to use.
  • Call n8n Workflow Tool: plug in n8n workflows as custom tools. In AI, a tool is an interface the AI can use to interact with the world (in this case, the data provided by your workflow). The AI model uses the tool to access information beyond its built-in dataset.

To load the template into your n8n instance:

  1. Download the workflow JSON file.
  2. Open a new workflow in your n8n instance.
  3. Copy in the JSON, or select Workflow menu > Import from file....

The example workflows use Sticky Notes to guide you:

  • Yellow: notes and information.
  • Green: instructions to run the workflow.
  • Orange: you need to change something to make the workflow work.
  • Blue: draws attention to a key feature of the example.

Keap credentials

URL: llms-txt#keap-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a Keap developer account.

Supported authentication methods

  • OAuth2

Refer to Keap's REST API documentation for more information about the service.

Using OAuth2

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you need to configure OAuth2 from scratch or need more detail on what's happening in the OAuth web flow, refer to the instructions in the Getting Started with OAuth2 documentation.


Activation Trigger node

URL: llms-txt#activation-trigger-node

Contents:

  • Node parameters
  • Templates and examples

The Activation Trigger node gets triggered when an event gets fired by n8n or a workflow.

n8n has deprecated the Activation Trigger node and replaced it with two new nodes: the n8n Trigger node and the Workflow Trigger node. For more details, check out the entry in the breaking changes page.

If you want to use the Activation Trigger node for a workflow, add the node to the workflow. You don't have to create a separate workflow.

The Activation Trigger node gets triggered for the workflow that it gets added to. You can use the Activation Trigger node to trigger a workflow to notify the state of the workflow.

  • Events
    • Activation: Run when the workflow gets activated
    • Start: Run when n8n starts or restarts
    • Update: Run when the workflow gets saved while it's active

Templates and examples

Browse Activation Trigger integration templates, or search all templates


For example "Today's date is 1646834498755"

URL: llms-txt#for-example-"today's-date-is-1646834498755"

Contents:

  • Convert JavaScript dates to Luxon
  • Convert date string to Luxon
  • Get n days from today
  • Create human-readable dates
  • Get the time between two dates
  • A longer example: How many days to Christmas?

{{DateTime.fromISO('2019-06-23T00:00:00.00')}}

let luxonDateTime = DateTime.fromISO('2019-06-23T00:00:00.00')

{{DateTime.fromFormat("23-06-2019", "dd-MM-yyyy")}}

let newFormat = DateTime.fromFormat("23-06-2019", "dd-MM-yyyy")

{{$today.minus({days: 7})}}

let sevenDaysAgo = $today.minus({days: 7})

{{$today.minus({days: 7}).toLocaleString()}}

let readableSevenDaysAgo = $today.minus({days: 7}).toLocaleString()

{{$today.minus({days: 7}).toLocaleString({month: 'long', day: 'numeric', year: 'numeric'})}}

let readableSevenDaysAgo = $today.minus({days: 7}).toLocaleString({month: 'long', day: 'numeric', year: 'numeric'})

{{DateTime.fromISO('2019-06-23').diff(DateTime.fromISO('2019-05-23'), 'months').toObject()}}

let monthsBetweenDates = DateTime.fromISO('2019-06-23').diff(DateTime.fromISO('2019-05-23'), 'months').toObject()

{{"There are " + $today.diff(DateTime.fromISO($today.year + '-12-25'), 'days').toObject().days.toString().substring(1) + " days to Christmas!"}}

let daysToChristmas = "There are " + $today.diff(DateTime.fromISO($today.year + '-12-25'), 'days').toObject().days.toString().substring(1) + " days to Christmas!";


This outputs `"There are <number of days> days to Christmas!"`. For example, on 9th March, it outputs "There are 291 days to Christmas!".

A detailed explanation of what the code does:

- `"There are "`: a string.
- `+`: used to join two strings.
- `$today.diff()`: This is similar to the example in [Get the time between two dates](#get-the-time-between-two-dates), but it uses n8n's custom `$today` variable.
- `DateTime.fromISO($today.year + '-12-25'), 'days'`: this part gets the current year using `$today.year`, turns it into an ISO string along with the month and date, and then takes the whole ISO string and converts it to a Luxon DateTime data structure. It also tells Luxon that you want the duration in days.
- `toObject()` turns the result of diff() into a more usable object. At this point, the expression returns `[Object: {"days":-<number-of-days>}]`. For example, on 9th March, `[Object: {"days":-291}]`.
- `.days` uses JMESPath syntax to retrieve just the number of days from the object. For more information on using JMESPath with n8n, refer to our [JMESpath](../jmespath/) documentation. This gives you the number of days to Christmas, as a negative number.
- `.toString().substring(1)` turns the number into a string and removes the `-`.
- `+ " days to Christmas!"`: another string, with a `+` to join it to the previous string.

n8n provides built-in convenience functions to support data transformation in expressions for dates. Refer to [Data transformation functions | Dates](../../builtin/data-transformation-functions/dates/) for more information.

Convert JavaScript dates to Luxon

To convert a native JavaScript date to a Luxon date:

  • In expressions, use the [`.toDateTime()` method](../../builtin/data-transformation-functions/dates/#date-toDateTime). For example, `{{ (new Date()).toDateTime() }}`.
  • In the Code node, use `DateTime.fromJSDate()`. For example, `let luxondate = DateTime.fromJSDate(new Date())`.

Convert date string to Luxon

You can convert date strings and other date formats to a Luxon DateTime object. You can convert from standard formats and from arbitrary strings.

A difference between Luxon DateTime and JavaScript Date

With vanilla JavaScript, you can convert a string to a date with `new Date('2019-06-23')`. In Luxon, you must use a function that explicitly states the format, such as `DateTime.fromISO('2019-06-23')` or `DateTime.fromFormat("23-06-2019", "dd-MM-yyyy")`.

If you have a date in a supported standard technical format, most dates use `fromISO()`, which creates a Luxon DateTime from an ISO 8601 string, as in the `DateTime.fromISO('2019-06-23T00:00:00.00')` examples at the top of this page. Luxon's API documentation has more information on [fromISO](https://moment.github.io/luxon/api-docs/index.html#datetimefromiso), and Luxon provides functions to handle conversions for a range of other formats. Refer to Luxon's guide to [Parsing technical formats](https://moment.github.io/luxon/#/parsing?id=parsing-technical-formats) for details.

If you have a date as a string that doesn't use a standard format, use Luxon's [Ad-hoc parsing](https://moment.github.io/luxon/#/parsing?id=ad-hoc-parsing). To do this, use the `fromFormat()` function, providing the string and a set of [tokens](https://moment.github.io/luxon/#/parsing?id=table-of-tokens) that describe the format. For example, to turn n8n's founding date, 23rd June 2019, formatted as `23-06-2019`, into a Luxon object, use `DateTime.fromFormat("23-06-2019", "dd-MM-yyyy")`, as shown in the fromFormat examples at the top of this page.

The subdomain to serve from

URL: llms-txt#the-subdomain-to-serve-from


Configure workflow timeout settings

URL: llms-txt#configure-workflow-timeout-settings

A workflow times out and gets canceled after this time (in seconds). If the workflow runs in the main process, a soft timeout happens (takes effect after the current node finishes). If a workflow runs in its own process, n8n attempts a soft timeout first, then kills the process after waiting for a fifth of the given timeout duration.

The EXECUTIONS_TIMEOUT default is -1, which disables the timeout. For example, to set the timeout to one hour:

You can also set the maximum execution time (in seconds) that individual workflows can be configured to use, with EXECUTIONS_TIMEOUT_MAX. For example, to set the maximum to two hours:

Refer to Environment variables reference for more information on these variables.

Examples:

Example 1 (unknown):

export EXECUTIONS_TIMEOUT=3600

Example 2 (unknown):

export EXECUTIONS_TIMEOUT_MAX=7200

LDAP credentials

URL: llms-txt#ldap-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using LDAP server details

You can use these credentials to authenticate the following nodes:

Create a server directory using Lightweight Directory Access Protocol (LDAP).

Some common LDAP providers include:

Supported authentication methods

  • LDAP server details

Refer to your LDAP provider's own documentation for detailed information.

For general LDAP information, refer to Basic LDAP concepts for a basic overview and The LDAP Bind Operation for information on how the bind operation and authentication work.

Using LDAP server details

To configure this credential, you'll need:

  • The LDAP Server Address: Use the IP address or domain of your LDAP server.
  • The LDAP Server Port: Use the number of the port used to connect to the LDAP server.
  • The Binding DN: Use the Binding Distinguished Name (Bind DN) for your LDAP server. This is the user account the credential should log in as. If you're using Active Directory, this may look something like cn=administrator, cn=Users, dc=n8n, dc=io. Refer to your LDAP provider's documentation for more information on identifying this DN and the related password.
  • The Binding Password: Use the password for the Binding DN user.
  • Select the Connection Security: Options include:
    • None
    • TLS
    • STARTTLS
  • Optional: Enter a numeric value in seconds to set a Connection Timeout.

Manage users

URL: llms-txt#manage-users

Contents:

  • Delete a user
  • Resend an invitation to a pending user

The Settings > Users page shows all users, including ones with pending invitations.

  1. Open the three-dot menu for the user you want to delete and select Delete user.
  2. Confirm you want to delete them.
  3. If they're an active user, choose whether to copy their workflow data and credentials to a new user, or permanently delete their workflows and credentials.

Resend an invitation to a pending user

Click the menu icon by the user, then click Resend invite.


Strapi credentials

URL: llms-txt#strapi-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API user account
    • Configure a role
    • Create a user account
  • Using API token

You can use these credentials to authenticate the following nodes:

Create a Strapi admin account with:

  • Access to an existing Strapi project.
  • At least one collection type within that project.
  • Published data within that collection type.

Refer to the Strapi developer Quick Start Guide for more information.

Supported authentication methods

  • API user account: Requires a user account with appropriate content permissions.
  • API token: Requires an admin account.

Refer to Strapi's documentation for more information about the service.

Using API user account

To configure this credential, you'll need:

  • A user Email: Must be for a user account, not an admin account. Refer to the more detailed instructions below.
  • A user Password: Must be for a user account, not an admin account. Refer to the more detailed instructions below.
  • The URL: Use the public URL of your Strapi server, defined in ./config/server.js as the url parameter (see the sketch after this list). Strapi recommends using an absolute URL.
    • For Strapi Cloud projects, use the URL of your Cloud project, for example: https://my-strapi-project-name.strapiapp.com
  • The API Version: Select the version of the API you want your calls to use. Options include:
    • Version 3
    • Version 4
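
As an illustration of where that url parameter lives, here is a minimal sketch of a ./config/server.js file. The environment variable names and default values are placeholder assumptions, not taken from this documentation:

// ./config/server.js (sketch; adjust host, port, and URL to your deployment)
module.exports = ({ env }) => ({
  host: env('HOST', '0.0.0.0'),
  port: env.int('PORT', 1337),
  // The value set here is what you enter as the URL in the n8n credential
  url: env('PUBLIC_URL', 'https://my-strapi-project-name.strapiapp.com'),
});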

In Strapi, the configuration involves two steps:

  1. Configure a role.
  2. Create a user account.

Refer to the more detailed instructions below for each step.

For API access, use the Users & Permissions Plugin in Settings > Users & Permissions Plugin.

Refer to Configuring Users & Permissions Plugin for more information on the plugin. Refer to Configuring end-user roles for more information on roles.

For the n8n credential, the user must have a role that grants them API permissions on the collection type. For the role, you can either:

  • Update the default Authenticated role to include the permissions and assign the user to that role. Refer to Configuring role's permissions for more information.
  • Create a new role to include the permissions and assign the user to that role. Refer to Creating a new role for more information.

For either option, once you open the role:

  1. Go to the Permissions section.
  2. Open the section for the relevant collection type.
  3. Select the permissions for the collection type that the role should have. Options include:
    • create (POST)
    • find and findone (GET)
    • update (PUT)
    • delete (DELETE)
  4. Repeat for all relevant collection types.
  5. Save the role.

Refer to Endpoints for more information on the permission options.

Create a user account

Now that you have an appropriate role, create an end-user account and assign the role to it:

  1. Go to Content Manager > Collection Types > User.
  2. Select Add new entry.
  3. Fill in the user details. The n8n credential requires these fields, though your Strapi project may have more custom required fields:
    • Username: Required for all Strapi users.
    • Email: Enter in Strapi and use as the Email in the n8n credential.
    • Password: Enter in Strapi and use as the Password in the n8n credential.
    • Role: Select the role you set up in the previous step.

Refer to Managing end-user accounts for more information.

To configure this credential, you'll need:

  • An API Token: Create an API token from Settings > Global Settings > API Tokens. Refer to Strapi's Creating a new API token documentation for more details and information on regenerating API tokens.

API tokens permission

If you don't see the API tokens option in Global settings, your account doesn't have the API tokens > Read permission.

  • The URL: Use the public URL of your Strapi server, defined in ./config/server.js as the url parameter. Strapi recommends using an absolute URL.
    • For Strapi Cloud projects, use the URL of your Cloud project, for example: https://my-strapi-project-name.strapiapp.com
  • The API Version: Select the version of the API you want your calls to use. Options include:
    • Version 3
    • Version 4

Qdrant Vector Store node

URL: llms-txt#qdrant-vector-store-node

Contents:

  • Node usage patterns
    • Use as a regular node to insert and retrieve documents
    • Connect directly to an AI agent as a tool
    • Use a retriever to fetch documents
    • Use the Vector Store Question Answer Tool to answer questions
  • Node parameters
    • Operation Mode
    • Rerank Results
    • Get Many parameters
    • Insert Documents parameters

Use the Qdrant node to interact with your Qdrant collection as a vector store. You can insert documents into a vector database, get documents from a vector database, retrieve documents to provide them to a retriever connected to a chain or connect it directly to an agent to use as a tool.

On this page, you'll find the node parameters for the Qdrant node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Node usage patterns

You can use the Qdrant Vector Store node in the following patterns.

Use as a regular node to insert and retrieve documents

You can use the Qdrant Vector Store as a regular node to insert or get documents. This pattern places the Qdrant Vector Store in the regular connection flow without using an agent.

You can see an example of this in the first part of this template.

Connect directly to an AI agent as a tool

You can connect the Qdrant Vector Store node directly to the tool connector of an AI agent to use a vector store as a resource when answering queries.

Here, the connection would be: AI agent (tools connector) -> Qdrant Vector Store node.

Use a retriever to fetch documents

You can use the Vector Store Retriever node with the Qdrant Vector Store node to fetch documents from the Qdrant Vector Store node. This is often used with the Question and Answer Chain node to fetch documents from the vector store that match the given chat input.

An example of the connection flow would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Qdrant Vector Store.

Use the Vector Store Question Answer Tool to answer questions

Another pattern uses the Vector Store Question Answer Tool to summarize results and answer questions from the Qdrant Vector Store node. Rather than connecting the Qdrant Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store.

The connection flow in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Qdrant Vector Store.

This Vector Store node has four modes: Get Many, Insert Documents, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent). The mode you select determines the operations you can perform with the node and what inputs and outputs are available.

Get Many

In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context.

Insert Documents

Use insert documents mode to insert new documents into your vector database.

Retrieve Documents (as Vector Store for Chain/Tool)

Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.

Retrieve Documents (as Tool for AI Agent)

Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details.

Rerank Results

Enables reranking. If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the Get Many, Retrieve Documents (As Vector Store for Chain/Tool) and Retrieve Documents (As Tool for AI Agent) modes.

Get Many parameters

  • Qdrant collection name: Enter the name of the Qdrant collection to use.
  • Prompt: Enter the search query.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

This Operation Mode includes one Node option, the Metadata Filter.

Insert Documents parameters

  • Qdrant collection name: Enter the name of the Qdrant collection to use.

This Operation Mode includes one Node option:

  • Collection Config: Enter JSON options for the Qdrant collection creation configuration, as in the example below. Refer to the Qdrant Collections documentation for more information.
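
For example, a minimal configuration that creates a collection storing 1536-dimensional vectors compared by cosine similarity could look like this (1536 is only an assumption here; match the size to the dimensions of your embedding model):

{
  "vectors": {
    "size": 1536,
    "distance": "Cosine"
  }
}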

Retrieve Documents (As Vector Store for Chain/Tool) parameters

  • Qdrant Collection: Enter the name of the Qdrant collection to use.

This Operation Mode includes one Node option, the Metadata Filter.

Retrieve Documents (As Tool for AI Agent) parameters

  • Name: The name of the vector store.
  • Description: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
  • Qdrant Collection: Enter the name of the Qdrant collection to use.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Metadata Filter

Available in Get Many mode. When searching for data, use this to match with metadata associated with the document.

This is an AND query. If you specify more than one metadata filter field, all of them must match.

When inserting data, the metadata is set using the document loader. Refer to Default Data Loader for more information on loading documents.

Templates and examples

🤖 AI Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant

View template details

AI Voice Chatbot with ElevenLabs & OpenAI for Customer Service and Restaurants

View template details

Complete business WhatsApp AI-Powered RAG Chatbot using OpenAI

View template details

Browse Qdrant Vector Store integration templates, or search all templates

Refer to LangChain's Qdrant documentation for more information about the service.

View n8n's Advanced AI documentation.

Self-hosted AI Starter Kit

New to working with AI and using self-hosted n8n? Try n8n's self-hosted AI Starter Kit to get started with a proof-of-concept or demo playground using Ollama, Qdrant, and PostgreSQL.


XML

URL: llms-txt#xml

Contents:

  • Node parameters
  • Node options
    • JSON to XML options
    • XML to JSON options
  • Templates and examples

Use the XML node to convert data from and to XML.

If your XML is within a binary file, use the Extract from File node to convert it to text first.

  • Mode: The format the data should be converted from and to.
    • JSON to XML: Converts data from JSON to XML.
    • XML to JSON: Converts data from XML to JSON.
  • Property Name: Enter the name of the property which contains the data to convert.

These options are available regardless of the Mode you select:

  • Attribute Key: Enter the prefix used to access the attributes. Default is $.
  • Character Key: Enter the prefix used to access the character content. Default is _.
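
To illustrate these two keys: when converting XML to JSON with the defaults, an element that has both an attribute and text content becomes an object keyed by $ and _. The exact shape also depends on the XML to JSON options described below, but roughly:

<price currency="EUR">20</price>

becomes

{
  "price": {
    "_": "20",
    "$": { "currency": "EUR" }
  }
}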

All other options depend on the selected Mode.

JSON to XML options

These options only appear if you select JSON to XML as the Mode:

  • Allow Surrogate Chars: Set whether to allow using characters from the Unicode surrogate blocks (turned on) or not (turned off).
  • Cdata: Set whether to wrap text nodes in <![CDATA[ ... ]]> instead of escaping when it's required (turned on) or not (turned off).
    • Turning this option on doesn't add <![CDATA[ ... ]]> if it's not required.
  • Headless: Set whether to omit the XML header (turned on) or include it (turned off).
  • Root Name: Enter the root element name to use.

XML to JSON options

These options only appear if you select XML to JSON as the Mode:

  • Explicit Array: Set whether to put child nodes in an array (turned on) or create an array only if there's more than one child node (turned off). See the example after this list.
  • Explicit Root: Set whether to get the root node in the resulting object (turned on) or not (turned off).
  • Ignore Attributes: Set whether to ignore all XML attributes and only create text nodes (turned on) or not (turned off).
  • Merge Attributes: Set whether to merge attributes and child elements as properties of the parent (turned on) or key attributes off a child attribute object (turned off). This option is ignored if Ignore Attributes is turned on.
  • Normalize: Set whether to trim whitespaces inside the text nodes (turned on) or not to trim them (turned off).
  • Normalize Tags: Set whether to normalize all tag names to lowercase (turned on) or keep tag names as-is (turned off).
  • Trim: Set whether to trim the whitespace at the beginning and end of text nodes (turned on) or to leave the whitespace as-is (turned off).
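
For example, given the input below, Explicit Array changes the output roughly as follows (assuming Explicit Root stays turned on):

<order><item>apple</item></order>

Explicit Array on:  { "order": { "item": ["apple"] } }
Explicit Array off: { "order": { "item": "apple" } }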

Templates and examples

Generating Keywords using Google Autosuggest

View template details

💡🌐 Essential Multipage Website Scraper with Jina.ai

View template details

Extract Google Trends Keywords & Summarize Articles in Google Sheets

View template details

Browse XML integration templates, or search all templates


The very quick quickstart

URL: llms-txt#the-very-quick-quickstart

Contents:

  • Step one: Open a workflow template and sign up for n8n Cloud
  • Step two: Run the workflow
  • Step three: Add a node
  • Next steps

This quickstart gets you started using n8n as quickly as possible. It lets you try out the UI and introduces two key features: workflow templates and expressions. It doesn't include detailed explanations or explore concepts in depth.

In this tutorial, you will:

  • Load a workflow from the workflow templates library
  • Add a node and configure it using expressions
  • Run your first workflow

Step one: Open a workflow template and sign up for n8n Cloud

n8n provides a quickstart template using training nodes. You can use this to work with fake data and avoid setting up credentials.

This quickstart uses n8n Cloud. A free trial is available for new users.

  1. Go to Templates | Very quick quickstart.

  2. Select Use for free to view the options for using the template.

  3. Select Get started free with n8n cloud to sign up for a new Cloud instance.

The template workflow does two things:

  • Gets example data from the Customer Datastore node.

  • Uses the Edit Fields node to extract only the desired data and assign it to variables. In this example, you map the customer name, ID, and description.

The individual pieces in an n8n workflow are called nodes. Double click a node to explore its settings and how it processes data.

Step two: Run the workflow

Select Execute Workflow. This runs the workflow, loading the data from the Customer Datastore node, then transforming it with Edit Fields. You need this data available in the workflow so that you can work with it in the next step.

Step three: Add a node

Add a third node to message each customer and tell them their description. Use the Customer Messenger node to send a message to fake recipients.

  1. Select the Add node connector on the Edit Fields node.

  2. Search for Customer Messenger. n8n shows a list of nodes that match the search.

  3. Select Customer Messenger (n8n training) to add the node to the canvas. n8n opens the node automatically.

  4. Use expressions to map in the Customer ID and create the Message:

    1. In the INPUT panel, select the Schema tab.

    2. Drag Edit Fields1 > customer_id into the Customer ID field in the node settings.

    3. Hover over Message. Select the Expression tab, then select the expand button to open the full expressions editor.

    4. Copy this expression into the editor:

    5. Close the expressions editor, then close the Customer Messenger node by clicking outside the node or selecting Back to canvas.

  5. Select Execute Workflow. n8n runs the workflow.

The complete workflow should look like this:

View workflow file

Examples:

Example 1 (unknown):

Hi {{ $json.customer_name }}. Your description is: {{ $json.customer_description }}

RSS Read

URL: llms-txt#rss-read

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Related resources

Use the RSS Read node to read data from RSS feeds published on the internet.

  • URL: Enter the URL for the RSS publication you want to read.

  • Ignore SSL Issues: Choose whether n8n should ignore SSL/TLS verification (turned on) or not (turned off).

Templates and examples

Personalized AI Tech Newsletter Using RSS, OpenAI and Gmail

View template details

Content Farming: AI-Powered Blog Automation for WordPress

View template details

AI-Powered Information Monitoring with OpenAI, Google Sheets, Jina AI and Slack

View template details

Browse RSS Read integration templates, or search all templates

n8n provides a trigger node for RSS Read. You can find the trigger node docs here.


Okta node

URL: llms-txt#okta-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Okta node to automate work in Okta and integrate Okta with other applications. n8n has built-in support for a wide range of Okta features, which includes creating, updating, and deleting users.

On this page, you'll find a list of operations the Okta node supports, and links to more resources.

You can find authentication information for this node here.

  • User
    • Create a new user
    • Delete an existing user
    • Get details of a user
    • Get many users
    • Update an existing user

Templates and examples

Browse Okta integration templates, or search all templates

Refer to Okta's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Demio node

URL: llms-txt#demio-node

Contents:

  • Operations
  • Templates and examples

Use the Demio node to automate work in Demio, and integrate Demio with other applications. n8n has built-in support for a wide range of Demio features, including getting and registering events, and getting reports.

On this page, you'll find a list of operations the Demio node supports and links to more resources.

Refer to Demio credentials for guidance on setting up authentication.

  • Event
    • Get an event
    • Get all events
    • Register someone to an event
  • Report
    • Get an event report

Templates and examples

Browse Demio integration templates, or search all templates


Manually install community nodes from npm

URL: llms-txt#manually-install-community-nodes-from-npm

Contents:

  • Install a community node
  • Uninstall a community node
  • Upgrade a community node
    • Upgrade to the latest version
    • Upgrade or downgrade to a specific version

You can manually install community nodes from the npm registry on self-hosted n8n.

You need to manually install community nodes in the following circumstances:

Install a community node

Access your Docker shell:

Create ~/.n8n/nodes if it doesn't already exist, and navigate into it:

Uninstall a community node

Access your Docker shell:
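
From the shell, remove the package from the nodes directory (a sketch assuming you installed the node into ~/.n8n/nodes as shown above; replace n8n-nodes-nodeName with your package name):

cd ~/.n8n/nodes
npm uninstall n8n-nodes-nodeName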

Upgrade a community node

Breaking changes in versions

Node developers may introduce breaking changes in new versions of their nodes. A breaking change is an update that breaks previous functionality. Depending on the node versioning approach that a node developer chooses, upgrading to a version with a breaking change could cause all workflows using the node to break. Be careful when upgrading your nodes. If you find that an upgrade causes issues, you can downgrade.

Upgrade to the latest version

Access your Docker shell:
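
From the shell, one way to update the package to its latest published version (again assuming it lives in ~/.n8n/nodes; replace n8n-nodes-nodeName with your package name):

cd ~/.n8n/nodes
npm install n8n-nodes-nodeName@latest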

Upgrade or downgrade to a specific version

Access your Docker shell:

Run npm uninstall to remove the current version:

Run npm install with the version specified:
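
For example, to pin a specific version (n8n-nodes-nodeName and 1.2.3 are placeholders for your package name and target version, and the commands assume the package lives in ~/.n8n/nodes):

cd ~/.n8n/nodes
npm uninstall n8n-nodes-nodeName
npm install n8n-nodes-nodeName@1.2.3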

Examples:

Example 1 (unknown):

docker exec -it n8n sh

Example 2 (unknown):

mkdir ~/.n8n/nodes
cd ~/.n8n/nodes

Example 3 (unknown):

npm i n8n-nodes-nodeName

Example 4 (unknown):

docker exec -it n8n sh

Discord node common issues

URL: llms-txt#discord-node-common-issues

Contents:

  • Add extra fields to embeds
  • Mention users and channels

Here are some common errors and issues with the Discord node and steps to resolve or troubleshoot them.

Add extra fields to embeds

Discord messages can optionally include embeds, a rich preview component that can include a title, description, image, link, and more.

The Discord node supports embeds when using the Send operation on the Message resource. Select Add Embeds to set extra fields including Description, Author, Title, URL, and URL Image.

To add fields that aren't included by default, set Input Method to Raw JSON. From here, add a JSON object to the Value parameter defining the field names and values you want to include.

For example, to include footer and fields, neither of which are available using the Enter Fields Input Method, you could use a JSON object like this:

You can learn more about embeds in Using Webhooks and Embeds | Discord.

If you experience issues when working with embeds with the Discord node, you can use the HTTP Request node with your existing Discord credentials to POST to the following URL:

In the body, include your embed information in the message content like this:

Mention users and channels

To mention users and channels in Discord messages, you need to format your message according to Discord's message formatting guidelines.

To mention a user, you need to know the Discord user's user ID. Keep in mind that the user ID is different from the user's display name. Similarly, you need a channel ID to link to a specific channel.

You can learn how to enable developer mode and copy the user or channel IDs in Discord's documentation on finding User/Server/Message IDs.

Once you have the user or channel ID, you can format your message with the following syntax:

  • User: <@USER_ID>
  • Channel: <#CHANNEL_ID>
  • Role: <@&ROLE_ID>
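
For example, a message that greets a user and points them to a channel could look like this (the IDs below are placeholders, not real Discord IDs):

Hey <@123456789012345678>, please post updates in <#987654321098765432>.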

Examples:

Example 1 (unknown):

{
    "author": "My Name",
	"url": "https://discord.js.org",
	"fields": [
		{
			"name": "Regular field title",
			"value": "Some value here"
		}
	],
	"footer": {
		"text": "Some footer text here",
		"icon_url": "https://i.imgur.com/AfFp7pu.png"
	}
}

Example 2 (unknown):

https://discord.com/api/v10/channels/<CHANNEL_ID>/messages

Example 3 (unknown):

{
	"content": "Test",
	"embeds": [
		{
			"author": "My Name",
			"url": "https://discord.js.org",
			"fields": [
				{
					"name": "Regular field title",
					"value": "Some value here"
				}
			],
			"footer": {
				"text": "Some footer text here",
				"icon_url": "https://i.imgur.com/AfFp7pu.png"
			}
		}
	]
}

Compare changes with workflow diffs

URL: llms-txt#compare-changes-with-workflow-diffs

Contents:

  • Accessing workflow diffs
  • Understanding the workflow diff view
    • When pushing
    • When pulling
  • Reviewing node changes
  • Viewing the summary of changes
  • Navigating through each change
  • Who can use workflow diffs

Workflow diffs allow you to visually compare changes between the workflow you have on an instance and the most recent version saved in your connected Git repository. This helps you understand the exact changes to the workflow before you decide to either push or pull it across different environments.

Accessing workflow diffs

You can access workflow diffs from two locations:

  1. When pushing changes: Click the workflow diff icon in the commit modal alongside the workflow you want to review
  2. When pulling changes: Click the workflow diff icon in the modified changes modal alongside the workflow you want to review

Understanding the workflow diff view

When you open a workflow diff, n8n displays two workflows stacked vertically:

When pushing

  • Top panel (Remote branch): Latest version in your Git repository

  • Bottom panel (Local): Current locally saved version of the workflow

When pulling

  • Top panel (Local): Current version on your n8n instance

  • Bottom panel (Remote branch): Version you're pulling from the Git repository

In both cases, the top panel always displays the workflow that will update with changes.

The diff view highlights three types of changes:

  • Added nodes and connectors: New node additions or connectors will show as green along with an "N" icon
  • Modified nodes and connectors: Modifications to existing nodes or connectors will show as orange along with an "M" icon
  • Deleted nodes and connectors: Node or connector deletions will show as red along with a "D" icon

Reviewing node changes

For modified nodes, you can also compare the specific changes. Click modified nodes to show a JSON diff of the changes. You can review the exact configuration for that node before and after the given change.

Viewing the summary of changes

In the top-right corner, the changes button shows the number of changes. This represents the total number of changes across nodes and node connectors, as well as general workflow settings updates.

Navigating through each change

You can use the next and previous arrows in the upper-right corner to cycle through your changes in a logical order. Use the back button in the top-left corner to return to the commit or pull modal to select a different workflow to review changes on.

Who can use workflow diffs

Only users who can push or pull commits for an instance can access workflow diffs:

  • instance owners
  • instance admins
  • project admins

Sustainable Use License

URL: llms-txt#sustainable-use-license

Contents:

  • License FAQs
    • What license do you use?
    • What source code is covered by the Sustainable Use License?
    • What is the Sustainable Use License?
    • What is and isn't allowed under the license in the context of n8n's product?
    • What if I want to use n8n for something that's not permitted by the license?
    • Why don't you use an open source license?
    • Why did you create a license?
    • My company has a policy against using code that restricts commercial use: can I still use n8n?
    • What happens to the code I contribute to n8n in light of the Sustainable Use License?

Proprietary licenses for Enterprise

Proprietary licenses are available for enterprise customers. Get in touch for more information.

n8n's Sustainable Use License and n8n Enterprise License are based on the fair-code model.

What license do you use?

n8n uses the Sustainable Use License and n8n Enterprise License. These licenses are based on the fair-code model.

What source code is covered by the Sustainable Use License?

The Sustainable Use License applies to all our source code hosted in our main GitHub repository except:

  • Content of branches other than master.
  • Source code files that contain .ee. in their file name. These are licensed under the n8n Enterprise License.

What is the Sustainable Use License?

The Sustainable Use License is a fair-code software license created by n8n in 2022. You can read more about why we did this here. The license allows you the free right to use, modify, create derivative works, and redistribute, with three limitations:

  • You may use or modify the software only for your own internal business purposes or for non-commercial or personal use.
  • You may distribute the software or provide it to others only if you do so free of charge for non-commercial purposes.
  • You may not alter, remove, or obscure any licensing, copyright, or other notices of the licensor in the software. Any use of the licensor's trademarks is subject to applicable law.

We encourage anyone who wants to use the Sustainable Use License. If you are building something out in the open, it makes sense to think about licensing earlier in order to avoid problems later. Contact us at license@n8n.io if you would like to ask any questions about it.

What is and isn't allowed under the license in the context of n8n's product?

Our license restricts use to "internal business purposes". In practice this means all use is allowed unless you are selling a product, service, or module in which the value derives entirely or substantially from n8n functionality. Here are some examples that wouldn't be allowed:

  • White-labeling n8n and offering it to your customers for money.
  • Hosting n8n and charging people money to access it.

All of the following examples are allowed under our license:

  • Using n8n to sync the data you control as a company, for example from a CRM to an internal database.
  • Creating an n8n node for your product or any other integration between your product and n8n.
  • Providing consulting services related to n8n, for example building workflows, custom features closely connected to n8n, or code that gets executed by n8n.
  • Supporting n8n, for example by setting it up or maintaining it on an internal company server.

Can I use n8n to act as the back-end to power a feature in my app?

Usually yes, as long as the back-end process doesn't use users' own credentials to access their data.

Here are two examples to clarify:

Example 1: Sync ACME app with HubSpot

Bob sets up n8n to collect a user's HubSpot credentials to sync data in the ACME app with data in HubSpot.

NOT ALLOWED under the Sustainable Use License. This use case collects the user's own HubSpot credentials to pull information to feed into the ACME app.

Example 2: Embed AI chatbot in ACME app

Bob sets up n8n to embed an AI chatbot within the ACME app. The AI chatbot's credentials in n8n use Bob's company credentials. ACME app end-users only enter their questions or queries to the chatbot.

ALLOWED under the Sustainable Use License. No user credentials are being collected.

What if I want to use n8n for something that's not permitted by the license?

You must sign a separate commercial agreement with us. We actively encourage software creators to embed n8n within their products; we just ask them to sign an agreement laying out the terms of use, and the fees owed to n8n for using the product in this way. We call this mode of use n8n Embed. You can learn more, and contact us about it here.

If you are unsure whether the use case you have in mind constitutes an internal business purpose or not, take a look at the examples, and if you're still unclear, email us at license@n8n.io.

Why don't you use an open source license?

n8n's mission is to give everyone who uses a computer technical superpowers. We've decided the best way for us to achieve this mission is to make n8n as widely and freely available as possible for users, while ensuring we can build a sustainable, viable business. By making our product free to use, easy to distribute, and source-available we help everyone access the product. By operating as a business, we can continue to release features, fix bugs, and provide reliable software at scale long-term.

Why did you create a license?

Creating a license was our least favorite option. We only went down this path after reviewing the possible existing licenses and deciding nothing fit our specific needs. There are two ways in which we try to mitigate the pain and friction of using a proprietary license:

  1. By using plain English, and keeping it as short as possible.
  2. By promoting fair-code with the goal of making it a well-known umbrella term to describe software models like ours.

Our goals when we created the Sustainable Use License were:

  1. To be as permissive as possible.
  2. To safeguard our ability to build a business.
  3. To be as clear as possible about what use is and isn't permitted.

My company has a policy against using code that restricts commercial use: can I still use n8n?

Provided you are using n8n for internal business purposes, and not making n8n available to your customers for them to connect their accounts and build workflows, you should be able to use n8n. If you are unsure whether the use case you have in mind constitutes an internal business purpose or not, take a look at the examples, and if you're still unclear, email us at license@n8n.io.

What happens to the code I contribute to n8n in light of the Sustainable Use License?

Any code you contribute on GitHub is subject to GitHub's terms of use. In simple terms, this means you own, and are responsible for, anything you contribute, but that you grant other GitHub users certain rights to use this code. When you contribute code to a repository containing notice of a license, you license the code under the same terms.

n8n asks every contributor to sign our Contributor License Agreement. In addition to the above, this gives n8n the ability to change its license without seeking additional permission. It also means you aren't liable for your contributions (e.g. in case they cause damage to someone else's business).

It's easy to get started contributing code to n8n here, and we've listed broader ways of participating in our community here.

Why did you switch to the Sustainable Use License from your previous license arrangement (Apache 2.0 with Commons Clause)?

n8n was licensed under Apache 2.0 with Commons Clause until 17 March 2022. Commons Clause was initiated by various software companies wanting to protect their rights against cloud providers. The concept involved adding a commercial restriction on top of an existing open source license.

However, the use of the Commons Clause as an additional condition to an open source license, as well as the use of wording that's open to interpretation, created some confusion and uncertainty regarding the terms of use. The Commons Clause also restricted people's ability to offer consulting and support services: we realized these services are critical in enabling people to get value from n8n, so we wanted to remove this restriction.

We created the Sustainable Use License to be more permissive and more clear about what use is allowed, while continuing to ensure n8n gets the funding needed to build and improve our product.

What are the main differences between the Sustainable Use License and your previous license arrangement (Apache 2.0 with Commons Clause)?

There are two main differences between the Sustainable Use License and our previous license arrangement. The first is that we have tightened the definition of how you can use the software. Previously the Commons Clause restricted users' ability to "sell" the software; we have redefined this to restrict use to internal business purposes. The second difference is that our previous license restricted people's ability to charge fees for consulting or support services related to the software: we have lifted that restriction altogether.

That means you are now free to offer commercial consulting or support services (e.g. building n8n workflows) without the need for a separate license agreement with us. If you are interested in joining our community of n8n experts providing these services, you can learn more here.

Is n8n open source?

Although n8n's source code is available under the Sustainable Use License, according to the Open Source Initiative (OSI), open source licenses can't include limitations on use, so we do not call ourselves open source. In practice, n8n offers most users many of the same benefits as OSI-approved open source.

We coined the term 'fair-code' as a way of describing our licensing model, and the model of other companies who are source-available, but restrict commercial use of their source code.

What is fair-code, and how does the Sustainable Use License relate to it?

Fair-code isn't a software license. It describes a software model where software:

  • Is generally free to use and can be distributed by anybody.
  • Has its source code openly available.
  • Can be extended by anybody in public and private communities.
  • Is commercially restricted by its authors.

The Sustainable Use License is a fair-code license. You can read more about it and see other examples of fair-code licenses here.

We're always excited to talk about software licenses, fair-code, and other principles around sharing code with interested parties. To get in touch to chat, email license@n8n.io.

Can I use n8n's Sustainable Use License for my own project?

Yes! We're excited to see more software use the Sustainable Use License. We'd love to hear about your project if you're using our license: license@n8n.io.


Azure Cosmos DB node

URL: llms-txt#azure-cosmos-db-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Azure Cosmos DB node to automate work in Azure Cosmos DB and integrate Azure Cosmos DB with other applications. n8n has built-in support for a wide range of Azure Cosmos DB features, which includes creating, getting, updating, and deleting containers and items.

On this page, you'll find a list of operations the Azure Cosmos DB node supports, and links to more resources.

You can find authentication information for this node here.

  • Container:
    • Create
    • Delete
    • Get
    • Get Many
  • Item:
    • Create
    • Delete
    • Get
    • Get Many
    • Execute Query
    • Update

Templates and examples

🤖 AI content generation for Auto Service 🚘 Automate your social media📲!

View template details

Build Your Own Counseling Chatbot on LINE to Support Mental Health Conversations

View template details

CallForge - 05 - Gong.io Call Analysis with Azure AI & CRM Sync

View template details

Browse Azure Cosmos DB integration templates, or search all templates

Refer to Azure Cosmos DB's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Flow Trigger node

URL: llms-txt#flow-trigger-node

Contents:

  • Events
  • Related resources

Flow is modern task and project management software for teams. It brings together tasks, projects, timelines, and conversations, and integrates with a lot of tools.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Flow Trigger integrations page.

n8n provides an app node for Flow. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Flow's documentation for details about their API.


TheHive Trigger node

URL: llms-txt#thehive-trigger-node

Contents:

  • Events
  • Related resources
  • Configure a webhook in TheHive

On this page, you'll find a list of events the TheHive Trigger node can respond to and links to more resources.

TheHive and TheHive 5

n8n provides two nodes for TheHive. Use this node (TheHive Trigger) if you want to use TheHive's version 3 or 4 API. If you want to use version 5, use TheHive 5 Trigger.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's TheHive Trigger integrations page.

  • Alert
    • Created
    • Deleted
    • Updated
  • Case
    • Created
    • Deleted
    • Updated
  • Log
    • Created
    • Deleted
    • Updated
  • Observable
    • Created
    • Deleted
    • Updated
  • Task
    • Created
    • Deleted
    • Updated

n8n provides an app node for TheHive. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to TheHive's documentation for more information about the service:

Configure a webhook in TheHive

To configure the webhook for your TheHive instance:

  1. Copy the testing and production webhook URLs from TheHive Trigger node.

  2. Add the following lines to the application.conf file. This is TheHive's configuration file:

  3. Replace TESTING_WEBHOOK_URL and PRODUCTION_WEBHOOK_URL with the URLs you copied in the previous step.

  4. Replace TESTING_WEBHOOK_NAME and PRODUCTION_WEBHOOK_NAME with your preferred endpoint names.

  5. Replace ORGANIZATION_NAME with your organization name.

  6. Execute the following cURL command to enable notifications:

Examples:

Example 1 (unknown):

notification.webhook.endpoints = [
   	{
   		name: TESTING_WEBHOOK_NAME
   		url: TESTING_WEBHOOK_URL
   		version: 0
   		wsConfig: {}
   		includedTheHiveOrganisations: ["ORGANIZATION_NAME"]
   		excludedTheHiveOrganisations: []
   	},
   	{
   		name: PRODUCTION_WEBHOOK_NAME
   		url: PRODUCTION_WEBHOOK_URL
   		version: 0
   		wsConfig: {}
   		includedTheHiveOrganisations: ["ORGANIZATION_NAME"]
   		excludedTheHiveOrganisations: []
   	}
   ]

Example 2 (unknown):

curl -XPUT -uTHEHIVE_USERNAME:THEHIVE_PASSWORD -H 'Content-type: application/json' THEHIVE_URL/api/config/organisation/notification -d '
   {
   	"value": [
   		{
   		"delegate": false,
   		"trigger": { "name": "AnyEvent"},
   		"notifier": { "name": "webhook", "endpoint": "TESTING_WEBHOOK_NAME" }
   		},
   		{
   		"delegate": false,
   		"trigger": { "name": "AnyEvent"},
   		"notifier": { "name": "webhook", "endpoint": "PRODUCTION_WEBHOOK_NAME" }
   		}
   	]
   }'

Imperva WAF credentials

URL: llms-txt#imperva-waf-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create an Imperva WAF account.

Supported authentication methods

  • API key

Refer to Imperva WAF's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

Using API key

To configure this credential, you'll need:

  • An API ID
  • An API Key

Refer to Imperva WAF's API Key Management documentation for instructions on generating and viewing API Keys and IDs.


Salesforce credentials

URL: llms-txt#salesforce-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using JWT
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • JWT
  • OAuth2

Refer to Salesforce's developer documentation for more information about the service.

Using JWT

To configure this credential, you'll need a Salesforce account and:

  • Your Environment Type (Production or Sandbox)
  • A Client ID: Generated when you create a connected app.
  • Your Salesforce Username
  • A Private Key for a self-signed digital certificate

To set things up, first you'll create a private key and certificate, then a connected app:

  1. In n8n, select the Environment Type for your connection. Choose the option that best describes your environment from Production or Sandbox.
  2. Enter your Salesforce Username.
  3. Log in to your org in Salesforce.
  4. You'll need a private key and certificate issued by a certification authority. Use your own key/cert or use OpenSSL to create a key and a self-signed digital certificate. Refer to the Salesforce Create a Private Key and Self-Signed Digital Certificate documentation for instructions on creating your own key and certificate.
  5. From Setup in Salesforce, enter App Manager in the Quick Find box, then select App Manager.
  6. On the App Manager page, select New Connected App.
  7. Enter the required Basic Info for your connected app, including a Name and Contact Email address. Refer to Salesforce's Configure Basic Connected App Settings documentation for more information.
  8. Check the box to Enable OAuth Settings.
  9. For the Callback URL, enter http://localhost:1717/OauthRedirect.
  10. Check the box to Use digital signatures.
  11. Select Choose File and upload the file that contains your digital certificate, such as server.crt.
  12. Add these OAuth scopes:
    • Full access (full)
    • Perform requests at any time (refresh_token, offline_access)
  13. Select Save, then Continue. The Manage Connected Apps page should open to the app you just created.
  14. In the API (Enable OAuth Settings) section, select Manage Consumer Details.
  15. Copy the Consumer Key and add it to your n8n credential as the Client ID.
  16. Enter the contents of the private key file in n8n as Private Key.
    • Use the multi-line editor in n8n.
    • Enter the private key in standard PEM key format:

These steps are what's required on the n8n side. Salesforce recommends setting refresh token policies, session policies, and OAuth policies too:

  1. In Salesforce, select Back to Manage Connected Apps.
  2. Select Manage.
  3. Select Edit Policies.
  4. Review the Refresh Token Policy field. Salesforce recommends using expire refresh token after 90 days.
  5. In the Session Policies section, Salesforce recommends setting Timeout Value to 15 minutes.
  6. In the OAuth Policies section, select Admin approved users are pre-authorized for permitted users for Permitted Users, and select OK.
  7. Select Save.
  8. Select Manage Profiles, select the profiles that are pre-authorized to use this connected app, and select Save.
  9. Select Manage Permission Sets to select the permission sets. Create permission sets if necessary.

Refer to Salesforce's Create a Connected App in Your Org documentation for more information.

Using OAuth2

To configure this credential, you'll need a Salesforce account.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

Both Cloud and self-hosted users need to select an Environment Type. Choose between Production and Sandbox.

If you're self-hosting n8n, you'll need to configure OAuth2 from scratch by creating a connected app:

  1. In n8n, select the Environment Type for your connection. Choose the option that best describes your environment from Production or Sandbox.
  2. Enter your Salesforce Username.
  3. Log in to your org in Salesforce.
  4. From Setup in Salesforce, enter App Manager in the Quick Find box, then select App Manager.
  5. On the App Manager page, select New Connected App.
  6. Enter the required Basic Info for your connected app, including a Name and Contact Email address. Refer to Salesforce's Configure Basic Connected App Settings documentation for more information.
  7. Check the box to Enable OAuth Settings.
  8. For the Callback URL, enter http://localhost:1717/OauthRedirect.
  9. Add these OAuth scopes:
    • Full access (full)
    • Perform requests at any time (refresh_token, offline_access)
  10. Make sure the following settings are unchecked:
    • Require Proof Key for Code Exchange (PKCE) Extension for Supported Authorization Flows
    • Require Secret for Web Server Flow
    • Require Secret for Refresh Token Flow
  11. Select Save, then Continue. The Manage Connected Apps page should open to the app you just created.
  12. In the API (Enable OAuth Settings) section, select Manage Consumer Details.
  13. Copy the Consumer Key and add it to your n8n credential as the Client ID.
  14. Copy the Consumer Secret and add it to your n8n credential as the Client Secret.

These steps are what's required on the n8n side. Salesforce recommends setting refresh token policies and session policies, too:

  1. In Salesforce, select Back to Manage Connected Apps.
  2. Select Manage.
  3. Select Edit Policies.
  4. Review the Refresh Token Policy field. Salesforce recommends using expire refresh token after 90 days.
  5. In the Session Policies section, Salesforce recommends setting Timeout Value to 15 minutes.

Refer to Salesforce's Create a Connected App in Your Org documentation for more information.

Examples:

Example 1 (unknown):

-----BEGIN PRIVATE KEY-----
KEY DATA GOES HERE
-----END PRIVATE KEY-----

PhantomBuster node

URL: llms-txt#phantombuster-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the PhantomBuster node to automate work in PhantomBuster, and integrate PhantomBuster with other applications. n8n has built-in support for a wide range of PhantomBuster features, including adding, deleting, and getting agents.

On this page, you'll find a list of operations the PhantomBuster node supports and links to more resources.

Refer to PhantomBuster credentials for guidance on setting up authentication.

  • Agent
    • Delete an agent by ID.
    • Get an agent by ID.
    • Get all agents of the current user's organization.
    • Get the output of the most recent container of an agent.
    • Add an agent to the launch queue.

Templates and examples

Create HubSpot contacts from LinkedIn post interactions

View template details

Store the output of a phantom in Airtable

View template details

Personalized LinkedIn Connection Requests with Apollo, GPT-4, Apify & PhantomBuster

View template details

Browse PhantomBuster integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


AWS Lambda node

URL: llms-txt#aws-lambda-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the AWS Lambda node to automate work in AWS Lambda, and integrate AWS Lambda with other applications. n8n has built-in support for a wide range of AWS Lambda features, including invoking functions.

On this page, you'll find a list of operations the AWS Lambda node supports and links to more resources.

Refer to AWS Lambda credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Templates and examples

Invoke an AWS Lambda function

View template details

Convert and Manipulate PDFs with Api2Pdf and AWS Lambda

View template details

AWS Lambda Manager with GPT-4.1 & Google Sheets Audit Logging via Chat

View template details

Browse AWS Lambda integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Google Gemini Chat Model node

URL: llms-txt#google-gemini-chat-model-node

Contents:

  • Node parameters
  • Node options
  • Limitations
    • No proxy support
  • Templates and examples
  • Related resources

Use the Google Gemini Chat Model node to use Google's Gemini chat models with conversational agents.

On this page, you'll find the node parameters for the Google Gemini Chat Model node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model to use to generate the completion.

n8n dynamically loads models from the Google Gemini API and you'll only see the models available to your account.

  • Maximum Number of Tokens: Enter the maximum number of tokens used, which sets the completion length.
  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.
  • Top K: Enter the number of token choices the model uses to generate the next token.
  • Top P: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.
  • Safety Settings: Gemini supports adjustable safety settings. Refer to Google's Gemini API safety settings for information on the available filters and levels.

The Google Gemini Chat Model node uses Google's SDK, which doesn't support proxy configuration.

If you need to proxy your connection, as a workaround you can set up a dedicated reverse proxy for Gemini requests and change the Host parameter in your Google Gemini credentials to point to your proxy address.

Templates and examples

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

AI-Powered Social Media Content Generator & Publisher

View template details

Build Your First AI Agent

View template details

Browse Google Gemini Chat Model integration templates, or search all templates

Refer to LangChain's Google Gemini documentation for more information about the service.

View n8n's Advanced AI documentation.


Isolate n8n

URL: llms-txt#isolate-n8n

By default, a self-hosted n8n instance sends data to n8n's servers. It notifies users about available updates, workflow templates, and diagnostics.

To prevent your n8n instance from connecting to n8n's servers, set these environment variables to false:

Unset n8n's diagnostics configuration:

Refer to Environment variables reference for more information on these variables.

Examples:

Example 1 (unknown):

N8N_DIAGNOSTICS_ENABLED=false
N8N_VERSION_NOTIFICATIONS_ENABLED=false
N8N_TEMPLATES_ENABLED=false

Example 2 (unknown):

EXTERNAL_FRONTEND_HOOKS_URLS=
N8N_DIAGNOSTICS_CONFIG_FRONTEND=
N8N_DIAGNOSTICS_CONFIG_BACKEND=

Common issues and questions

URL: llms-txt#common-issues-and-questions

Contents:

  • Listen for multiple HTTP methods
  • Use the HTTP Request node to trigger the Webhook node
  • Use curl to trigger the Webhook node
  • Send a response of type string
  • Test URL versus Production URL
  • IP addresses in whitelist are failing to connect
  • Only one webhook per path and method
  • Timeouts on n8n Cloud

Here are some common issues and questions for the Webhook node and suggested solutions.

Listen for multiple HTTP methods

By default, the Webhook node accepts calls that use a single method. For example, it can accept GET or POST requests, but not both. If you want to accept calls using multiple methods:

  1. Open the node Settings.
  2. Turn on Allow Multiple HTTP Methods.
  3. Return to Parameters. By default, the node now accepts GET and POST calls. You can add other methods in the HTTP Methods field.

The Webhook node has an output for each method, so you can perform different actions depending on the method.

Use the HTTP Request node to trigger the Webhook node

The HTTP Request node makes HTTP requests to the URL you specify.

  1. Create a new workflow.
  2. Add the HTTP Request node to the workflow.
  3. Select a method from the Request Method dropdown list. For example, if you select GET as the HTTP method in your Webhook node, select GET as the request method in the HTTP Request node.
  4. Copy the URL from the Webhook node, and paste it in the URL field in the HTTP Request node.
  5. If using the test URL for the webhook node: execute the workflow with the Webhook node.
  6. Execute the HTTP Request node.

Use curl to trigger the Webhook node

You can use curl to make HTTP requests that trigger the Webhook node.

In the examples, replace <https://your-n8n.url/webhook/path> with your webhook URL.
The examples make GET requests. You can use whichever HTTP method you set in HTTP Method.

Make an HTTP request without any parameters:

Make an HTTP request with a body parameter:

Make an HTTP request with a header parameter:

Make an HTTP request to send a file:

Replace /path/to/file with the path of the file you want to send.

Send a response of type string

By default, the response format is JSON or an array. To send a response of type string:

  1. Select Response Mode > When Last Node Finishes.
  2. Select Response Data > First Entry JSON.
  3. Select Add Option > Property Name.
  4. Enter the name of the property that contains the response. This defaults to data.
  5. Connect an Edit Fields node to the Webhook node.
  6. In the Edit Fields node, select Add Value > String.
  7. Enter the name of the property in the Name field. The name should match the property name from step 4.
  8. Enter the string value in the Value field.
  9. Toggle Keep Only Set to on (green).

When you call the Webhook, it sends the string response from the Edit Fields node.

Test URL versus Production URL

n8n generates two Webhook URLs for each Webhook node: a Test URL and a Production URL.

While building or testing a workflow, use the Test URL. Once you're ready to use your Webhook URL in production, use the Production URL.

| URL type | How to trigger | Listening duration | Data shown in editor UI? |
| --- | --- | --- | --- |
| Test URL | Select Listen for test event and trigger a test event from the source. | 120 seconds | Yes |
| Production URL | Activate the workflow. | Until the workflow is deactivated. | No |

Refer to Workflow development for more information.

IP addresses in whitelist are failing to connect

If you're unable to connect from IP addresses in your IP whitelist, check if you are running n8n behind a reverse proxy.

If so, set the N8N_PROXY_HOPS environment variable to the number of reverse-proxies n8n is running behind.
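
For example, if n8n runs behind a single reverse proxy, set:

N8N_PROXY_HOPS=1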

Only one webhook per path and method

n8n only permits registering one webhook for each path and HTTP method combination (for example, a GET request for /my-request). This avoids ambiguity over which webhook should receive requests.

If you receive a message that the path and method you chose are already in use, you can either:

  • Deactivate the workflow with the conflicting webhook.
  • Change the webhook path and/or method for one of the conflicting webhooks.

Timeouts on n8n Cloud

n8n Cloud uses Cloudflare to protect against malicious traffic. If your webhook doesn't respond within 100 seconds, the incoming request will fail with a 524 status code.

Because of this, for long-running processes that might exceed this limit, you may need to introduce polling logic by configuring two separate webhooks:

  • One webhook to start the long-running process and send an immediate response.
  • A second webhook that you can call at intervals to query the status of the process and retrieve the result once it's complete.

Examples:

Example 1 (unknown):

curl --request GET <https://your-n8n.url/webhook/path>

Example 2 (unknown):

curl --request GET <https://your-n8n.url/webhook/path> --data 'key=value'

Example 3 (unknown):

curl --request GET <https://your-n8n.url/webhook/path> --header 'key: value'

Example 4 (unknown):

curl --request GET <https://your-n8n.url/webhook/path> --form 'key=@/path/to/file'

TheHive 5 credentials

URL: llms-txt#thehive-5-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes with TheHive 5.

TheHive and TheHive 5

n8n provides two nodes for TheHive. Use these credentials with TheHive 5 node. If you're using TheHive node for TheHive 3 or TheHive 4, use TheHive credentials.

Install TheHive 5 on your server.

Supported authentication methods

  • API key

Refer to TheHive's API documentation for more information about the service.

Using API key

To configure this credential, you'll need:

  • An API Key: Users with orgAdmin and superAdmin accounts can generate API keys:
    • orgAdmin account: Go to Organization > Create API Key for the user you wish to generate a key for.
    • superAdmin account: Go to Users > Create API Key for the user you wish to generate a key for.
    • Refer to API Authentication for more information.
  • A URL: The URL of your TheHive server.
  • Ignore SSL Issues: When turned on, n8n will connect even if SSL certificate validation fails.

Execute Command

URL: llms-txt#execute-command

Contents:

  • Node parameters
    • Execute Once
    • Command
  • Templates and examples
  • Common issues

The Execute Command node runs shell commands on the host machine that runs n8n.

Security considerations

The Execute Command node can introduce significant security risks in environments that operate with untrusted users. Because of this, n8n recommends disabling it in such setups.

Which shell runs the command?

This node executes the command in the default shell of the host machine. For example, cmd on Windows and zsh on macOS.

If you run n8n with Docker, your command will run in the n8n container and not the Docker host.

If you're using queue mode, the command runs on the worker that's executing the task in production mode. When running manual executions, it runs on the main instance, unless you set OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS to true.
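
For example, to offload manual executions to workers as well, set the variable mentioned above:

OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true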

Not available on Cloud

This node isn't available on n8n Cloud.

Configure the node using the following parameters.

Choose whether you want the node to execute only once (turned on) or once for every item it receives as input (turned off).

Enter the command to execute on the host machine. Refer to sections below for examples of running multiple commands and cURL commands.

Run multiple commands

Use one of two methods to run multiple commands in one Execute Command node:

  • Enter each command on one line separated by &&. For example, you can combine the change directory (cd) command with the list (ls) command using &&.

  • Enter each command on a separate line. For example, you can write the list (ls) command on a new line after the change directory (cd) command.

Run cURL command

You can also use the HTTP Request node to make a cURL request.

If you want to run the curl command in the Execute Command node, you have to build a Docker image based on the existing n8n image. The default n8n Docker image uses Alpine Linux, so you have to install the curl package yourself.

  1. Create a file named Dockerfile.

  2. Add the below code snippet to the Dockerfile.

  3. In the same folder, execute the command below to build the Docker image.

  4. Replace the Docker image you used before. For example, replace docker.n8n.io/n8nio/n8n with n8n-curl.

  5. Run the newly created Docker image. You'll now be able to execute curl using the Execute Command node.

Templates and examples

Scrape and store data from multiple website pages

View template details

Git backup of workflows and credentials

View template details

Track changes of product prices

View template details

Browse Execute Command integration templates, or search all templates

For common questions or issues and suggested solutions, refer to Common Issues.

Examples:

Example 1 (unknown):

cd bin && ls

Example 2 (unknown):

cd bin
ls

Example 3 (unknown):

FROM docker.n8n.io/n8nio/n8n
USER root
RUN apk --update add curl
USER node

Example 4 (unknown):

docker build -t n8n-curl .

Merge

URL: llms-txt#merge

Contents:

  • Node parameters
    • Append
    • Combine
    • SQL Query
    • Choose Branch
  • Templates and examples
  • Merging data streams with uneven numbers of items
  • Branch execution with If and Merge nodes
  • Try it out: A step by step example
    • Set up sample data using the Code nodes

Use the Merge node to combine data from multiple streams, once data of all streams is available.

Major changes in 0.194.0

The n8n team overhauled this node in n8n 0.194.0. This document reflects the latest version of the node. If you're using an older version of n8n, you can find the previous version of this document here.

Minor changes in 1.49.0

n8n version 1.49.0 introduced the option to add more than two inputs. Older versions only support up to two inputs. If you're running an older version and want to combine multiple inputs in these versions, use the Code node.

The Mode > SQL Query feature was also added in n8n version 1.49.0 and isn't available in older versions.

You can specify how the Merge node should combine data from different data streams by choosing a Mode:

Keep data from all inputs. Choose a Number of Inputs to output items of each input, one after another. The node waits for the execution of all connected inputs.

Append mode inputs and output

Combine data from two inputs. Select an option in Combine By to determine how you want to merge the input data.

Compare items by field values. Enter the fields you want to compare in Fields to Match.

n8n's default behavior is to keep matching items. You can change this using the Output Type setting:

  • Keep Matches: Merge items that match. This is like an inner join.
  • Keep Non-Matches: Merge items that don't match.
  • Keep Everything: Merge the items that match and also include the items that don't match. This is like an outer join.
  • Enrich Input 1: Keep all data from Input 1, and add matching data from Input 2. This is like a left join.
  • Enrich Input 2: Keep all data from Input 2, and add matching data from Input 1. This is like a right join.

Combine by Matching Fields mode inputs and output

Combine items based on their order. The item at index 0 in Input 1 merges with the item at index 0 in Input 2, and so on.

Combine by Position mode inputs and output

All Possible Combinations

Output all possible item combinations, while merging fields with the same name.

Combine by All Possible Combinations mode inputs and output

Combine mode options

When merging data by Mode > Combine, you can set these Options:

  • Clash Handling: Choose how to merge when data streams clash, or when there are sub-fields. Refer to Clash handling for details.
  • Fuzzy Compare: Whether to tolerate type differences when comparing fields (enabled), or not (disabled, default). For example, when you enable this, n8n treats "3" and 3 as the same.
  • Disable Dot Notation: This prevents accessing child fields using parent.child in the field name.
  • Multiple Matches: Choose how n8n handles multiple matches when comparing data streams.
    • Include All Matches: Output multiple items if there are multiple matches, one for each match.
    • Include First Match Only: Keep the first item per match and discard the remaining multiple matches.
  • Include Any Unpaired Items: Choose whether to keep or discard unpaired items when merging by position. The default behavior is to leave out the items without a match.

If multiple items at an index have a field with the same name, this is a clash. For example, if all items in both Input 1 and Input 2 have a field named language, these fields clash. By default, n8n prioritizes Input 2, meaning if language has a value in Input 2, n8n uses that value when merging the items.

You can change this behavior by selecting Options > Clash Handling:

  • When Field Values Clash: Choose which input to prioritize, or choose Always Add Input Number to Field Names to keep all fields and values, with the input number appended to the field name to show which input it came from.
  • Merging Nested Fields
    • Deep Merge: Merge properties at all levels of the items, including nested objects. This is useful when dealing with complex, nested data structures where you need to ensure the merging of all levels of nested properties.
    • Shallow Merge: Merge properties at the top level of the items only, without merging nested objects. This is useful when you have flat data structures or when you only need to merge top-level properties without worrying about nested properties.

Write a custom SQL Query to merge the data.

Data from previous nodes is available as tables; you can reference them in the SQL query as input1, input2, input3, and so on, based on their order. Refer to the AlaSQL GitHub page for a full list of supported SQL statements.

Choose which input to keep. This option always waits until the data from both inputs is available. You can choose to Output:

  • The Input 1 Data
  • The Input 2 Data
  • A Single, Empty Item

The node outputs the data from the chosen input, without changing it.

Templates and examples

Scrape and summarize webpages with AI

View template details

Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram

View template details

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

Browse Merge integration templates, or search all templates

Merging data streams with uneven numbers of items

The items passed into Input 1 of the Merge node will take precedence. For example, if the Merge node receives five items in Input 1 and 10 items in Input 2, it only processes five items. The remaining five items from Input 2 aren't processed.

Branch execution with If and Merge nodes

n8n removed this execution behavior in version 1.0. This section applies to workflows using the v0 (legacy) workflow execution order. By default, this is all workflows built before version 1.0. You can change the execution order in your workflow settings.

If you add a Merge node to a workflow containing an If node, it can result in both output data streams of the If node executing.

One data stream triggers the Merge node, which then executes the other data stream as well.

For example, in the screenshot below there's a workflow containing an Edit Fields node, If node, and Merge node. The standard If node behavior is to execute one data stream (in the screenshot, this is the true output). However, due to the Merge node, both data streams execute, despite the If node not sending any data down the false data stream.

Try it out: A step by step example

Create a workflow with some example input data to try out the Merge node.

Set up sample data using the Code nodes

  1. Add a Code node to the canvas and connect it to the Start node.

  2. Paste the following JavaScript code snippet in the JavaScript Code field:

  3. Add a second Code node, and connect it to the Start node.

  4. Paste the following JavaScript code snippet in the JavaScript Code field:

Try out different merge modes

Add the Merge node. Connect the first Code node to Input 1, and the second Code node to Input 2. Run the workflow to load data into the Merge node.

The final workflow should look like this:

View template details

Now try different options in Mode to see how it affects the output data.

Select Mode > Append, then select Execute step.

Your output in table view should look like this:

| name   | language | greeting |
|--------|----------|----------|
| Stefan | de       |          |
| Jim    | en       |          |
| Hans   | de       |          |
|        | en       | Hello    |
|        | de       | Hallo    |

Combine by Matching Fields

You can merge these two data inputs so that each person gets the correct greeting for their language.

  1. Select Mode > Combine.
  2. Select Combine by > Matching Fields.
  3. In both Input 1 Field and Input 2 Field, enter language. This tells n8n to combine the data by matching the values in the language field in each data set.
  4. Select Execute step.

Your output in table view should look like this:

| name   | language | greeting |
|--------|----------|----------|
| Stefan | de       | Hallo    |
| Jim    | en       | Hello    |
| Hans   | de       | Hallo    |

Combine by Position

Select Mode > Combine, Combine by > Position, then select Execute step.

Your output in table view should look like this:

| name   | language | greeting |
|--------|----------|----------|
| Stefan | en       | Hello    |
| Jim    | de       | Hallo    |

Keep unpaired items

If you want to keep all items, select Add Option > Include Any Unpaired Items, then turn on Include Any Unpaired Items.

Your output in table view should look like this:

| name   | language | greeting |
|--------|----------|----------|
| Stefan | en       | Hello    |
| Jim    | de       | Hallo    |
| Hans   | de       |          |

Combine by All Possible Combinations

Select Mode > Combine, Combine by > All Possible Combinations, then select Execute step.

Your output in table view should look like this:

| name   | language | greeting |
|--------|----------|----------|
| Stefan | en       | Hello    |
| Stefan | de       | Hallo    |
| Jim    | en       | Hello    |
| Jim    | de       | Hallo    |
| Hans   | en       | Hello    |
| Hans   | de       | Hallo    |

Examples:

Example 1 (unknown):

SELECT * FROM input1 LEFT JOIN input2 ON input1.name = input2.id

Example 2 (unknown):

return [
     {
       json: {
         name: 'Stefan',
         language: 'de',
       }
     },
     {
       json: {
         name: 'Jim',
         language: 'en',
       }
     },
     {
       json: {
         name: 'Hans',
         language: 'de',
       }
     }
   ];

Example 3 (unknown):

return [
     {
       json: {
         greeting: 'Hello',
         language: 'en',
       }
     },
     {
       json: {
         greeting: 'Hallo',
         language: 'de',
       }
     }
   ];

("<node-name>").all(branchIndex?: number, runIndex?: number)

URL: llms-txt#("<node-name>").all(branchindex?:-number,-runindex?:-number)

Contents:

  • Getting items

This gives access to all the items output by the given node. If you don't supply any parameters, it returns all the items from output 0 of that node's most recent run.

Examples:

Example 1 (unknown):

// Returns all the items of the given node and current run
let allItems = $("<node-name>").all();

// Returns all items the node "IF" outputs (index: 0 which is Output "true" of its most recent run)
let allItems = $("IF").all();

// Returns all items the node "IF" outputs (index: 0 which is Output "true" of the same run as current node)
let allItems = $("IF").all(0, $runIndex);

// Returns all items the node "IF" outputs (index: 1 which is Output "false" of run 0 which is the first run)
let allItems = $("IF").all(1, 0);

Filescan credentials

URL: llms-txt#filescan-credentials

Contents:

  • Prerequisites
  • Related resources
  • Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Filescan account.

Refer to Filescan's API documentation for more information about authenticating with the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:


SecurityScorecard node

URL: llms-txt#securityscorecard-node

Contents:

  • Operations
  • Templates and examples

Use the SecurityScorecard node to automate work in SecurityScorecard, and integrate SecurityScorecard with other applications. n8n has built-in support for a wide range of SecurityScorecard features, including creating, updating, deleting, and getting portfolios, as well as getting a company's data.

On this page, you'll find a list of operations the SecurityScorecard node supports and links to more resources.

Refer to SecurityScorecard credentials for guidance on setting up authentication.

  • Company
    • Get company factor scores and issue counts
    • Get company's historical factor scores
    • Get company's historical scores
    • Get company information and summary of their scorecard
    • Get company's score improvement plan
  • Industry
    • Get Factor Scores
    • Get Historical Factor Scores
    • Get Score
  • Invite
    • Create an invite for a company/user
  • Portfolio
    • Create a portfolio
    • Delete a portfolio
    • Get all portfolios
    • Update a portfolio
  • Portfolio Company
    • Add a company to portfolio
    • Get all companies in a portfolio
    • Remove a company from portfolio
  • Report
    • Download a generated report
    • Generate a report
    • Get list of recently generated report

Templates and examples

Browse SecurityScorecard integration templates, or search all templates


Google Drive Trigger node

URL: llms-txt#google-drive-trigger-node

Contents:

  • Common issues

Google Drive is a file storage and synchronization service developed by Google. It allows users to store files on their servers, synchronize files across devices, and share files.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Google Drive Trigger integrations page.

Manual Executions vs. Activation

On manual executions this node will return the last event matching its search criteria. If no event matches the criteria (for example because you are watching for files to be created but no files have been created so far), an error is thrown. Once saved and activated, the node will regularly check for any matching events and will trigger your workflow for each event found.

For common questions or issues and suggested solutions, refer to Common issues.


Linear node

URL: llms-txt#linear-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Linear node to automate work in Linear, and integrate Linear with other applications. n8n has built-in support for a wide range of Linear features, including creating, updating, deleting, and getting issues.

On this page, you'll find a list of operations the Linear node supports and links to more resources.

Refer to Linear credentials for guidance on setting up authentication.

  • Comment
    • Add Comment
  • Issue
    • Add Link
    • Create
    • Delete
    • Get
    • Get Many
    • Update

Templates and examples

Customer Support Channel and Ticketing System with Slack and Linear

View template details

Visual Regression Testing with Apify and AI Vision Model

View template details

Send alert when data is created in app/database

View template details

Browse Linear integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


FTP

URL: llms-txt#ftp

Contents:

  • Operations
  • Delete
    • Delete options
  • Download
  • List
  • Rename
    • Rename options
  • Upload
  • Templates and examples

Use the FTP node to access files on an FTP or SFTP server: you can list, download, upload, rename, and delete files.

You can find authentication information for this node here.

To connect to an SFTP server, use an SFTP credential. Refer to FTP credentials for more information.

To attach a file for upload, you'll need to use an extra node such as the Read/Write Files from Disk node or the HTTP Request node to pass the file as a data property.

The Delete operation includes one parameter: Path. Enter the remote path that you would like to connect to.

The delete operation adds one new option: Folder. If you turn this option on, the node can delete both folders and files. This configuration also displays one more option:

  • Recursive: If you turn this option on and you delete a folder or directory, the node will delete all files and directories within the target directory.

Configure the Download operation with these parameters:

  • Path: Enter the remote path that you would like to connect to.
  • Put Output File in Field: Enter the name of the output binary field to put the file in.

Concurrent Reads with SFTP

When using SFTP, you can enable concurrent reads. This improves download speeds but may not be supported by all SFTP servers.

Configure the List operation with these parameters:

  • Path: Enter the remote path that you would like to connect to.
  • Recursive: Select whether to return an object representing all directories / objects recursively found within the FTP/SFTP server (turned on) or not (turned off).

Configure the Rename operation with these parameters:

  • Old Path: Enter the existing path of the file you'd like to rename in this field.
  • New Path: Enter the new path for the renamed file in this field.

The Rename operation adds one new option: Create Directories. If you turn this option on, the node will recursively create the destination directory when renaming an existing file or folder.

Configure the Upload operation with these parameters:

  • Path: Enter the remote path that you would like to connect to.
  • Binary File: Select whether you'll upload a binary file (turned on) or enter text content to be uploaded (turned off). Other parameters depend on your selection in this field.
    • Input Binary Field: Displayed if you turn on Binary File. Enter the name of the input binary field that contains the file you'll upload in this field.
    • File Content: Displayed if you turn off Binary File. Enter the text content of the file you'll upload in this field.


Templates and examples

Working with Excel spreadsheet files (xls & xlsx)

View template details

Download a file and upload it to an FTP Server

View template details

Explore n8n Nodes in a Visual Reference Library

View template details

Browse FTP integration templates, or search all templates


Onfleet credentials

URL: llms-txt#onfleet-credentials

Contents:

  • Prerequisites

You can use these credentials to authenticate the following nodes:

Create an Onfleet administrator account.


SyncroMSP node

URL: llms-txt#syncromsp-node

Contents:

  • Operations
  • Templates and examples

Use the SyncroMSP node to automate work in SyncroMSP, and integrate SyncroMSP with other applications. n8n has built-in support for a wide range of SyncroMSP features, including creating and deleting customers, tickets, and contacts.

On this page, you'll find a list of operations the SyncroMSP node supports and links to more resources.

Refer to SyncroMSP credentials for guidance on setting up authentication.

  • Contact
    • Create new contact
    • Delete contact
    • Retrieve contact
    • Retrieve all contacts
    • Update contact
  • Customer
    • Create new customer
    • Delete customer
    • Retrieve customer
    • Retrieve all customers
    • Update customer
  • RMM
    • Create new RMM Alert
    • Delete RMM Alert
    • Retrieve RMM Alert
    • Retrieve all RMM Alerts
    • Mute RMM Alert
  • Ticket
    • Create new ticket
    • Delete ticket
    • Retrieve ticket
    • Retrieve all tickets
    • Update ticket

Templates and examples

Browse SyncroMSP integration templates, or search all templates


Redis Trigger node

URL: llms-txt#redis-trigger-node

Redis is an open-source, in-memory data structure store, used as a database, cache and message broker.

Use the Redis Trigger node to subscribe to a Redis channel. The workflow starts whenever the channel receives a new message.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Redis Trigger integrations page.


TOTP credentials

URL: llms-txt#totp-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using secret and label

You can use these credentials to authenticate the following nodes:

Generate a TOTP Secret and Label.

Supported authentication methods

Time-based One-time Password (TOTP) is an algorithm that generates a one-time password (OTP) using the current time. Refer to Google Authenticator | Key URI format for more information.

Using secret and label

To configure this credential, you'll need:

  • A Secret: The secret key encoded in the QR code during authenticator setup. It's an arbitrary key value encoded in Base32, for example: BVDRSBXQB2ZEL5HE. Refer to Google Authenticator Secret for more information.
  • A Label: The identifier for the account. It contains an account name as a URI-encoded string. You can include prefixes to identify the provider or service managing the account. If you use prefixes, use a literal or url-encoded colon to separate the issuer prefix and the account name, for example: GitHub:john-doe. Refer to Google Authenticator Label for more information.
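
As an illustration only: authenticator providers usually combine the secret and label into a Key URI. Using the example values above, such a URI would look similar to the following (the issuer parameter is a hypothetical addition following the Key URI format):

otpauth://totp/GitHub:john-doe?secret=BVDRSBXQB2ZEL5HE&issuer=GitHub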

Actions library

URL: llms-txt#actions-library

This section provides information about n8n's Actions.


Runtime-only extras for the Python task runner (installed at image build)

URL: llms-txt#runtime-only-extras-for-the-python-task-runner-(installed-at-image-build)


Debug and re-run past executions

URL: llms-txt#debug-and-re-run-past-executions

Contents:

  • Load data

Available on n8n Cloud and registered Community plans.

You can load data from a previous execution into your current workflow. This is useful for debugging data from failed production executions: you can see a failed execution, make changes to your workflow to fix it, then re-run it with the previous execution data.

To load data from a previous execution:

  1. In your workflow, select the Executions tab to view the Executions list.
  2. Select the execution you want to debug. n8n displays options depending on whether the workflow was successful or failed:
    • For failed executions: select Debug in editor.
    • For successful executions: select Copy to editor.
  3. n8n copies the execution data into your current workflow, and pins the data in the first node in the workflow.

Check which executions you save

The executions available on the Executions list depend on your Workflow settings.


Adalo credentials

URL: llms-txt#adalo-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

You need a Team or Business plan to use the Adalo APIs.

Supported authentication methods

Refer to Adalo's API collections documentation for more information about working with the service.

To configure this credential, you'll need an Adalo account and:

  • An API Key
  • An App ID

To get these, create an Adalo app:

  1. From the app dropdown in the top navigation, select CREATE NEW APP.
  2. Select the App Layout type that makes sense for you and select Next.
    • If you're new to using the product, Adalo recommends using Mobile Only.
  3. Select a template to get started with or select Blank, then select Next.
  4. Enter an App Name, like n8n integration.
  5. If applicable, select the Team for the app.
  6. Select branding colors.
  7. Select Create. The app editor opens.
  8. In the left menu, select Settings (the gear cog icon).
  9. Select App Access.
  10. In the API Key section, select Generate Key.
    • If you don't have the correct plan level, you'll see a prompt to upgrade instead.
  11. Copy the key and enter it as the API Key in your n8n credential.
  12. The URL includes the App ID after https://app.adalo.com/apps/. For example, if the URL for your app is https://app.adalo.com/apps/b78bdfcf-48dc-4550-a474-dd52c19fc371/app-settings, b78bdfcf-48dc-4550-a474-dd52c19fc371 is the App ID. Copy this value and enter it in your n8n credential.

Refer to Creating an app for more information on creating apps in Adalo. Refer to The Adalo API for more information on generating API keys.


AMQP credentials

URL: llms-txt#amqp-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using AMQP connection

You can use these credentials to authenticate the following nodes:

Install an AMQP 1.0-compatible message broker like ActiveMQ. Refer to AMQP Products for a list of options.

Supported authentication methods

Advanced Message Queuing Protocol (AMQP) is an open standard application layer protocol for message-oriented middleware. The defining features of AMQP are message orientation, queuing, routing, reliability and security. Refer to the OASIS AMQP Version 1.0 Standard for more information.

Refer to your provider's documentation for more information about the service. Refer to ActiveMQ's API documentation as one example.

Using AMQP connection

To configure this credential, you'll need:

  • A Hostname: Enter the hostname of your AMQP message broker.
  • A Port: Enter the port number the connection should use.
  • A User: Enter the name of the user to establish the connection as.
    • For example, the default username in ActiveMQ is admin.
  • A Password: Enter the user's password.
    • For example, the default password in ActiveMQ is admin.
  • Optional: Transport Type: Enter either tcp or tls.

Refer to your provider's documentation for more detailed instructions.


Hosting n8n on DigitalOcean

URL: llms-txt#hosting-n8n-on-digitalocean

Contents:

  • Create a Droplet
  • Log in to your Droplet and create new user
  • Clone configuration repository
  • Default folders and files
    • Create Docker volumes
  • Set up DNS
  • Open ports
  • Configure n8n
  • The Docker Compose file
  • Configure Caddy

This hosting guide shows you how to self-host n8n on a DigitalOcean droplet. It uses:

  • Caddy (a reverse proxy) to allow access to the Droplet from the internet. Caddy will also automatically create and manage SSL / TLS certificates for your n8n instance.
  • Docker Compose to create and define the application components and how they work together.

Self-hosting knowledge prerequisites

Self-hosting n8n requires technical knowledge, including:

  • Setting up and configuring servers and containers
  • Managing application resources and scaling
  • Securing servers and applications
  • Configuring n8n

n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends n8n Cloud.

Latest and Next versions

n8n releases a new minor version most weeks. The latest version is for production use. next is the most recent release. You should treat next as a beta: it may be unstable. To report issues, use the forum.

Current latest: 1.118.2
Current next: 1.119.0

  1. Log in to DigitalOcean.
  2. Select the project to host the Droplet, or create a new project.
  3. In your project, select Droplets from the Manage menu.
  4. Create a new Droplet using the Docker image available on the Marketplace tab.

When creating the Droplet, DigitalOcean asks you to choose a plan. For most usage levels, a basic shared CPU plan is enough.

DigitalOcean lets you choose between SSH key and password-based authentication. SSH keys are considered more secure.

Log in to your Droplet and create new user

The rest of this guide requires you to log in to the Droplet using a terminal with SSH. Refer to How to Connect to Droplets with SSH for more information.

You should create a new user, to avoid working as the root user:

  1. Create a new user:

  2. Follow the prompts in the CLI to finish creating the user.

  3. Grant the new user administrative privileges:

You can now run commands with superuser privileges by using sudo before the command.

  1. Follow the steps to set up SSH for the new user: Add Public Key Authentication.

  2. Log out of the droplet.

  3. Log in using SSH as the new user.

Clone configuration repository

Docker Compose, n8n, and Caddy require a series of folders and configuration files. You can clone these from this repository into the home folder of the logged-in user on your Droplet. The following steps will tell you which file to change and what changes to make.

Clone the repository with the following command:

And change directory to the root of the repository you cloned:

Default folders and files

The host operating system (the DigitalOcean Droplet) makes two folders from the cloned repository available to the Docker containers. The two folders are:

  • caddy_config: Holds the Caddy configuration files.
  • local_files: A folder for files you upload or add using n8n.

Create Docker volumes

To persist the Caddy cache and speed up start times, create a Docker volume that Docker reuses between restarts:
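
For example, assuming the volume name caddy_data referenced by the cloned configuration:

docker volume create caddy_data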

Create a Docker volume for the n8n data:
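
For example, assuming the volume name n8n_data referenced by the cloned configuration:

docker volume create n8n_data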

n8n typically operates on a subdomain. Create a DNS record with your provider for the subdomain and point it to the IP address of the Droplet. The exact steps for this depend on your DNS provider, but typically you need to create a new "A" record for the n8n subdomain. DigitalOcean provides An Introduction to DNS Terminology, Components, and Concepts.

n8n runs as a web application, so the Droplet needs to allow incoming access to traffic on port 80 for non-secure traffic, and port 443 for secure traffic.

Open the following ports in the Droplet's firewall by running the following two commands:
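
A sketch of the two commands, assuming the Droplet uses the default ufw firewall:

sudo ufw allow 80
sudo ufw allow 443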

n8n needs some environment variables set to pass to the application running in the Docker container. The example .env file contains placeholders you need to replace with values of your own.

Open the file with the following command:
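
For example, using nano (any text editor works):

nano .env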

The file contains inline comments to help you know what to change.

Refer to Environment variables for n8n environment variables details.

The Docker Compose file

The Docker Compose file (docker-compose.yml) defines the services the application needs, in this case Caddy and n8n.

  • The Caddy service definition defines the ports it uses and the local volumes to copy to the containers.
  • The n8n service definition defines the ports it uses, the environment variables n8n needs to run (some defined in the .env file), and the volumes it needs to copy to the containers.

The Docker Compose file uses the environment variables set in the .env file, so you shouldn't need to change its content. To take a look, run the following command:
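
For example, using nano again:

nano docker-compose.yml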

Caddy needs to know which domains it should serve, and which port to expose to the outside world. Edit the Caddyfile file in the caddy_config folder.

Change the placeholder domain to yours. If you followed the steps to name the subdomain n8n, your full domain is similar to n8n.example.com. The n8n in the reverse_proxy setting tells Caddy to use the service definition defined in the docker-compose.yml file.

If you were to use automate.example.com, your Caddyfile may look something like:
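
A minimal sketch of what that Caddyfile could contain; the exact contents depend on the cloned repository, and n8n:5678 assumes the default service name and port from the Docker Compose file:

automate.example.com {
    reverse_proxy n8n:5678
}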

Start Docker Compose

Start n8n and Caddy with the following command:
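
Assuming the Docker Compose plugin is installed on the Droplet, a sketch of the command, run from the repository folder:

sudo docker compose up -d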

This may take a few minutes.

In your browser, open the URL formed of the subdomain and domain name defined earlier. Enter the user name and password defined earlier, and you should be able to access n8n.

Stop n8n and Caddy

You can stop n8n and Caddy with the following command:
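
For example, from the repository folder:

sudo docker compose stop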

If you run n8n using a Docker Compose file, follow these steps to update n8n:
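
A typical sequence, sketched under the assumption that you run the commands from the folder containing docker-compose.yml:

# Pull the latest versions of the images
sudo docker compose pull

# Stop and remove the older containers
sudo docker compose down

# Start the containers again with the downloaded images
sudo docker compose up -d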

Examples:

Example 1 (unknown):

adduser <username>

Example 2 (unknown):

usermod -aG sudo <username>

Example 3 (unknown):

git clone https://github.com/n8n-io/n8n-docker-caddy.git

Example 4 (unknown):

cd n8n-docker-caddy

GitLab Trigger node

URL: llms-txt#gitlab-trigger-node

Contents:

  • Events
  • Related resources

GitLab is a web-based DevOps lifecycle tool that provides a Git repository manager with wiki, issue-tracking, and continuous integration/continuous deployment (CI/CD) pipeline features.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's GitLab Trigger integrations page.

  • Comment
  • Confidential issues
  • Confidential comments
  • Deployments
  • Issue
  • Job
  • Merge request
  • Pipeline
  • Push
  • Release
  • Tag
  • Wiki page

n8n provides an app node for GitLab. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to GitLab's documentation for details about their API.


MongoDB credentials

URL: llms-txt#mongodb-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using database connection - Connection string
  • Using database connection - Values

You can use these credentials to authenticate the following nodes:

If you are setting up MongoDB from scratch, create a cluster and a database. Refer to the MongoDB Atlas documentation for more detailed instructions on these steps.

Supported authentication methods

  • Database connection - Connection string
  • Database connection - Values

Refer to the MongoDB Atlas documentation for more information about the service.

Using database connection - Connection string

To configure this credential, you'll need the Prerequisites listed above. Then:

  1. Select Connection String as the Configuration Type.
  2. Enter your MongoDB Connection String. To get your connection string in MongoDB, go to Database > Connect.
    1. Select Drivers.
    2. Copy the code you see in Add your connection string into your application code. It will be something like: mongodb+srv://yourName:yourPassword@clusterName.mongodb.net/?retryWrites=true&w=majority.
    3. Replace the <password> and <username> in the connection string with the database user's credentials you'll be using.
    4. Enter that connection string into n8n.
    5. Refer to Connection String for information on finding and formatting your connection string.
  3. Enter your Database name. This is the name of the database that the user whose details you added to the connection string is logging into.
  4. Select whether to Use TLS: Turn on to use TLS. You must have your MongoDB database configured to use TLS and have an x.509 certificate generated. Add information for these certificate fields in n8n:
    • CA Certificate
    • Public Client Certificate
    • Private Client Key
    • Passphrase

Refer to MongoDB's x.509 documentation for more information on working with x.509 certificates.

Using database connection - Values

To configure this credential, you'll need the Prerequisites listed above. Then:

  1. Select Values as the Configuration Type.
  2. Enter the database Host name or address.
  3. Enter the Database name.
  4. Enter the User you'd like to log in as.
  5. Enter the user's Password.
  6. Enter the Port to connect over. This is the port number your server uses to listen for incoming connections.
  7. Select whether to Use TLS: Turn on to use TLS. You must have your MongoDB database configured to use TLS and have an x.509 certificate generated. Add information for these certificate fields in n8n:
    • CA Certificate
    • Public Client Certificate
    • Private Client Key
    • Passphrase

Refer to MongoDB's x.509 documentation for more information on working with x.509 certificates.


OpenThesaurus node

URL: llms-txt#openthesaurus-node

Contents:

  • Operations
  • Templates and examples

Use the OpenThesaurus node to automate work in OpenThesaurus, and integrate OpenThesaurus with other applications. n8n supports synonym look-up for German words.

On this page, you'll find a list of operations the OpenThesaurus node supports and links to more resources.

The OpenThesaurus node doesn't require authentication.

  • Get synonyms for a German word in German

Templates and examples

Browse OpenThesaurus integration templates, or search all templates


Wikipedia node

URL: llms-txt#wikipedia-node

Contents:

  • Templates and examples
  • Related resources

The Wikipedia node is a tool that allows an agent to search and return information from Wikipedia.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Templates and examples

Respond to WhatsApp Messages with AI Like a Pro!

View template details

AI chatbot that can search the web

View template details

Write a WordPress post with AI (starting from a few keywords)

View template details

Browse Wikipedia integration templates, or search all templates

Refer to LangChain's documentation on tools for more information about tools in LangChain.

View n8n's Advanced AI documentation.


Monitoring

URL: llms-txt#monitoring

Contents:

  • healthz and healthz/readiness
  • metrics
  • Enable metrics and healthz for self-hosted n8n

There are three API endpoints you can call to check the status of your instance: /healthz, /healthz/readiness, and /metrics.

healthz and healthz/readiness

The /healthz endpoint returns a standard HTTP status code. 200 indicates the instance is reachable. It doesn't indicate DB status. It's available for both self-hosted and Cloud users.

The /healthz/readiness endpoint is similar to the /healthz endpoint, but it returns an HTTP status code of 200 if the DB is connected and migrated, and therefore the instance is ready to accept traffic.

The /metrics endpoint provides more detailed information about the current status of the instance.

The /metrics endpoint isn't available on n8n Cloud.

Enable metrics and healthz for self-hosted n8n

The /metrics and /healthz endpoints are disabled by default. To enable them, configure your n8n instance:
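
For example, the /metrics endpoint is controlled by the N8N_METRICS environment variable; a minimal sketch using the export style shown elsewhere in these docs (refer to the environment variables reference for the full set of related options):

export N8N_METRICS=true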

Examples:

Example 1 (unknown):

<your-instance-url>/healthz

Example 2 (unknown):

<your-instance-url>/healthz/readiness

Example 3 (unknown):

<your-instance-url>/metrics

Google Docs node

URL: llms-txt#google-docs-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Google Docs node to automate work in Google Docs, and integrate Google Docs with other applications. n8n has built-in support for a wide range of Google Docs features, including creating, updating, and getting documents.

On this page, you'll find a list of operations the Google Docs node supports and links to more resources.

Refer to Google Docs credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Document
    • Create
    • Get
    • Update

Templates and examples

Chat with PDF docs using AI (quoting sources)

View template details

🤖 AI Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant

View template details

🩷Automated Social Media Content Publishing Factory + System Prompt Composition

View template details

Browse Google Docs integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


SeaTable Trigger node

URL: llms-txt#seatable-trigger-node

SeaTable is a collaborative database application with a spreadsheet interface.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's SeaTable Trigger integrations page.


HighLevel node

URL: llms-txt#highlevel-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the HighLevel node to automate work in HighLevel, and integrate HighLevel with other applications. n8n has built-in support for a wide range of HighLevel features, including creating, updating, deleting, and getting contacts, opportunities, and tasks, as well as booking appointments and getting free time slots in calendars.

On this page, you'll find a list of operations the HighLevel node supports and links to more resources.

Refer to HighLevel credentials for guidance on setting up authentication.

  • Contact
    • Create or update
    • Delete
    • Get
    • Get many
    • Update
  • Opportunity
    • Create
    • Delete
    • Get
    • Get many
    • Update
  • Task
    • Create
    • Delete
    • Get
    • Get many
    • Update
  • Calendar
    • Book an appointment
    • Get free slots

Templates and examples

High-Level Service Page SEO Blueprint Report Generator

by Custom Workflows AI

View template details

Verify mailing address deliverability of new contacts in HighLevel Using Lob

View template details

Create an Automated Customer Support Assistant with GPT-4o and GoHighLevel SMS

by Cyril Nicko Gaspar

View template details

Browse HighLevel integration templates, or search all templates

Refer to HighLevel's API documentation and support forums for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


UptimeRobot node

URL: llms-txt#uptimerobot-node

Contents:

  • Operations
  • Templates and examples

Use the UptimeRobot node to automate work in UptimeRobot, and integrate UptimeRobot with other applications. n8n has built-in support for a wide range of UptimeRobot features, including creating and deleting alerts, as well as getting account details.

On this page, you'll find a list of operations the UptimeRobot node supports and links to more resources.

Refer to UptimeRobot credentials for guidance on setting up authentication.

  • Account
    • Get account details
  • Alert Contact
    • Create an alert contact
    • Delete an alert contact
    • Get an alert contact
    • Get all alert contacts
    • Update an alert contact
  • Maintenance Window
    • Create a maintenance window
    • Delete a maintenance window
    • Get a maintenance window
    • Get all maintenance windows
    • Update a maintenance window
  • Monitor
    • Create a monitor
    • Delete a monitor
    • Get a monitor
    • Get all monitors
    • Reset a monitor
    • Update a monitor
  • Public Status Page
    • Create a public status page
    • Delete a public status page
    • Get a public status page
    • Get all public status pages

Templates and examples

Create, update, and get a monitor using UptimeRobot

View template details

Website Downtime Alert via LINE + Supabase Log

by sayamol thiramonpaphakul

View template details

Create, Update Alerts 🛠️ UptimeRobot Tool MCP Server 💪 all 21 operations

View template details

Browse UptimeRobot integration templates, or search all templates


Splunk node

URL: llms-txt#splunk-node

Contents:

  • Operations
  • Templates and examples

Use the Splunk node to automate work in Splunk, and integrate Splunk with other applications. n8n has built-in support for a wide range of Splunk features, including getting fired alerts reports, as well as deleting and getting search configuration.

On this page, you'll find a list of operations the Splunk node supports and links to more resources.

Refer to Splunk credentials for guidance on setting up authentication.

  • Fired Alert
    • Get a fired alerts report
  • Search Configuration
    • Delete a search configuration
    • Get a search configuration
    • Get many search configurations
  • Search Job
    • Create a search job
    • Delete a search job
    • Get a search job
    • Get many search jobs
  • Search Result
    • Get many search results
  • User
    • Create a user
    • Delete a user
    • Get a user
    • Get many users
    • Update a user

Templates and examples

Create Unique Jira tickets from Splunk alerts

View template details

🛠️ Splunk Tool MCP Server 💪 all 16 operations

View template details

IP Reputation Check & SOC Alerts with Splunk, VirusTotal and AlienVault

View template details

Browse Splunk integration templates, or search all templates


Set a 50 MB maximum size for each log file

URL: llms-txt#set-a-50-mb-maximum-size-for-each-log-file

export N8N_LOG_FILE_SIZE_MAX=50


Two-factor authentication (2FA)

URL: llms-txt#two-factor-authentication-(2fa)

Contents:

  • Enable 2FA
  • Disable 2FA for your instance

Two-factor authentication (2FA) adds a second authentication method on top of username and password. This increases account security. n8n supports 2FA using an authenticator app.

You need an authenticator app on your phone.

To enable 2FA in n8n:

  1. Go to your Settings > Personal.
  2. Select Enable 2FA. n8n opens a modal with a QR code.
  3. Scan the QR code in your authenticator app.
  4. Enter the code from your app in Code from authenticator app.
  5. Select Continue. n8n displays recovery codes.
  6. Save the recovery codes. You need these to regain access to your account if you lose your authenticator.

Disable 2FA for your instance

Self-hosted users can configure their n8n instance to disable 2FA for all users by setting N8N_MFA_ENABLED to false. Note that n8n ignores this if existing users have 2FA enabled. Refer to Configuration methods for more information on configuring your n8n instance with environment variables.
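
For example, using the export style shown elsewhere in these docs:

export N8N_MFA_ENABLED=false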


Dirty nodes

URL: llms-txt#dirty-nodes

Contents:

  • How to recognize dirty node data
  • Why n8n marks nodes dirty
  • Resolving dirty nodes

A dirty node is a node that executed successfully in the past, but whose output n8n now considers stale or unreliable. n8n labels it this way to indicate that if the node executes again, the output may be different. A dirty node may also be the point where a partial execution starts.

How to recognize dirty node data

In the canvas of the workflow editor, you can identify dirty nodes by their different-colored border and a yellow triangle in place of the previous green tick symbol. For example:

In the node editor view, a yellow triangle also appears on the output panel. If you hover over the triangle, a tooltip appears with more information about why n8n considers the data stale.

Why n8n marks nodes dirty

There are several reasons why n8n might flag execution data as stale. For example:

  • Inserting or deleting a node: labels the first node that follows the inserted node dirty.
  • Modifying node parameters: labels the modified node dirty.
  • Adding a connector: labels the destination node of the new connector dirty.
  • Deactivating a node: labels the first node that follows the deactivated node dirty.

Other reasons n8n marks nodes dirty

  • Unpinning a node: labels the unpinned node dirty.
  • Modifying pinned data: labels the node that comes after the pinned data dirty.
  • If any of the above actions occur inside a loop, also labels the first node of the loop dirty.

For sub-nodes, n8n also labels any executed parent nodes (up to and including the root) dirty when:

  • Editing an executed sub-node

  • Adding a new sub-node

  • Disconnecting or deleting a sub-node

  • Deactivating a sub-node

  • Activating a sub-node

  • When deleting a connected node in a workflow, the next node in the sequence becomes dirty.

When using loops (with the Loop over Items node), if any node within the loop is dirty, the initial node of the loop is also considered dirty.

Resolving dirty nodes

Executing a node again clears its dirty status. You can do this manually by triggering the whole workflow, or by running a partial execution with Execute step on the individual node or any node which follows it.


Trello Trigger node

URL: llms-txt#trello-trigger-node

Contents:

  • Find the Model ID

Trello is a web-based Kanban-style list-making application which is a subsidiary of Atlassian. Users can create their task boards with different columns and move the tasks between them.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Trello Trigger integrations page.

The model ID is the ID of any model in Trello. Depending on the use-case, it could be the User ID, List ID, and so on.

For this specific example, the List ID would be the Model ID:

  1. Open the Trello board that contains the list.
  2. If the list doesn't have any cards, add a card to the list.
  3. Open the card, add .json at the end of the URL, and press enter.
  4. In the JSON file, you will see a field called idList.
  5. Copy idList and paste it in the Model ID field in n8n.

Carbon Black credentials

URL: llms-txt#carbon-black-credentials

Contents:

  • Prerequisites
  • Authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Authentication methods

Refer to Carbon Black's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:


Using community nodes

URL: llms-txt#using-community-nodes

Contents:

  • Adding community nodes to your workflow
  • Community nodes with duplicate names

To use community nodes, you first need to install them.

Adding community nodes to your workflow

After installing a community node, you can use it like any other node. n8n displays the node in search results in the Nodes panel. n8n marks community nodes with a Package icon in the nodes panel.

Community nodes with duplicate names

It's possible for several community nodes to have the same name. If you use two nodes with the same name in your workflow, they'll look the same, unless they have different icons.


Google Perspective node

URL: llms-txt#google-perspective-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Google Perspective node to automate work in Google Perspective, and integrate Google Perspective with other applications. n8n has built-in support for a wide range of Google Perspective features, including analyzing comments.

On this page, you'll find a list of operations the Google Perspective node supports and links to more resources.

Refer to Google Perspective credentials for guidance on setting up authentication.

Templates and examples

Browse Google Perspective integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Streaming responses

URL: llms-txt#streaming-responses

Contents:

  • Configure nodes for streaming
  • Important information

Available on all plans from version 1.105.2.

Streaming responses let you send data back to users as an AI Agent node generates it. This is useful for chatbots, where you want to show the user the answer as it's generated to provide a better user experience.

You can enable streaming using either:

In both cases, set the node's Response Mode to Streaming.

Configure nodes for streaming

To stream data, you need to add nodes to the workflow that support streaming output. Not all nodes support this feature.

  1. Choose a node that supports streaming, such as:
  2. You can disable streaming in the options of these nodes. By default, they stream data whenever the executed trigger has its Response Mode set to Streaming response.

Important information

Keep in mind the following details when configuring streaming responses:

  • Trigger: Your trigger node must support streaming and have streaming configured. Without this, the workflow behaves according to your response mode settings.
  • Node configuration: Even with streaming enabled on the trigger, you need at least one node configured to stream data. Otherwise, your workflow will send no data.

Hunter credentials

URL: llms-txt#hunter-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Hunter account.

Supported authentication methods

Refer to Hunter's API documentation for more information about the service.

To configure this credential, you'll need:


Supported databases

URL: llms-txt#supported-databases

Contents:

  • Shared settings
  • PostgresDB

By default, n8n uses SQLite to save credentials, past executions, and workflows. n8n also supports PostgresDB.

The following environment variables get used by all databases:

  • DB_TABLE_PREFIX (default: -) - Prefix for table names

To use PostgresDB as the database, you can provide the following environment variables:

  • DB_TYPE=postgresdb
  • DB_POSTGRESDB_DATABASE (default: 'n8n')
  • DB_POSTGRESDB_HOST (default: 'localhost')
  • DB_POSTGRESDB_PORT (default: 5432)
  • DB_POSTGRESDB_USER (default: 'postgres')
  • DB_POSTGRESDB_PASSWORD (default: empty)
  • DB_POSTGRESDB_SCHEMA (default: 'public')
  • DB_POSTGRESDB_SSL_CA (default: undefined): Path to the server's CA certificate used to validate the connection (opportunistic encryption isn't supported)
  • DB_POSTGRESDB_SSL_CERT (default: undefined): Path to the client's TLS certificate
  • DB_POSTGRESDB_SSL_KEY (default: undefined): Path to the client's private key corresponding to the certificate
  • DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED (default: true): If TLS connections that fail validation should be rejected

export DB_TYPE=postgresdb
export DB_POSTGRESDB_DATABASE=n8n
export DB_POSTGRESDB_HOST=postgresdb
export DB_POSTGRESDB_PORT=5432
export DB_POSTGRESDB_USER=n8n
export DB_POSTGRESDB_PASSWORD=n8n
export DB_POSTGRESDB_SCHEMA=n8n

Access a specific value set during this execution

URL: llms-txt#access-a-specific-value-set-during-this-execution

customData = _execution.customData.get("key");

Postgres node

URL: llms-txt#postgres-node

Contents:

  • Operations
    • Delete
    • Execute Query
    • Insert
    • Insert or Update
    • Select
    • Update
  • Templates and examples
  • Related resources
  • Use query parameters

Use the Postgres node to automate work in Postgres, and integrate Postgres with other applications. n8n has built-in support for a wide range of Postgres features, including executing queries, as well as inserting and updating rows in a database.

On this page, you'll find a list of operations the Postgres node supports and links to more resources.

Refer to Postgres credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Use this operation to delete an entire table or rows in a table.

Enter these parameters:

  • Credential to connect with: Create or select an existing Postgres credential.

  • Operation: Select Delete.

  • Schema: Choose the schema that contains the table you want to work on. Select From list to choose the schema from the dropdown list or By Name to enter the schema name.

  • Table: Choose the table that you want to work on. Select From list to choose the table from the dropdown list or By Name to enter the table name.

  • Command: The deletion action to take:

    • Truncate: Removes the table's data but preserves the table's structure.
      • Restart Sequences: Whether to reset auto increment columns to their initial values as part of the Truncate process.
    • Delete: Delete the rows that match the "Select Rows" condition. If you don't select anything, Postgres deletes all rows.
      • Select Rows: Define a Column, Operator, and Value to match rows on.
      • Combine Conditions: How to combine the conditions in "Select Rows". AND requires all conditions to be true, while OR requires at least one condition to be true.
    • Drop: Deletes the table's data and structure permanently.
  • Cascade: Whether to also drop all objects that depend on the table, like views and sequences. Available if using Truncate or Drop commands.

  • Connection Timeout: The number of seconds to try to connect to the database.

  • Delay Closing Idle Connection: The number of seconds to wait before considering idle connections eligible for closing.

  • Query Batching: The way to send queries to the database:

    • Single Query: A single query for all incoming items.
    • Independently: Execute one query per incoming item of the execution.
    • Transaction: Execute all queries in a transaction. If a failure occurs, Postgres rolls back all changes.
  • Output Large-Format Numbers As: The format to output NUMERIC and BIGINT columns as:

    • Numbers: Use this for standard numbers.
    • Text: Use this if you expect numbers longer than 16 digits. Without this, numbers may be incorrect.

Use this operation to execute an SQL query.

Enter these parameters:

Execute Query options

  • Connection Timeout: The number of seconds to try to connect to the database.
  • Delay Closing Idle Connection: The number of seconds to wait before considering idle connections eligible for closing.
  • Query Batching: The way to send queries to the database:
    • Single Query: A single query for all incoming items.
    • Independently: Execute one query per incoming item of the execution.
    • Transaction: Execute all queries in a transaction. If a failure occurs, Postgres rolls back all changes.
  • Query Parameters: A comma-separated list of values that you want to use as query parameters.
  • Output Large-Format Numbers As: The format to output NUMERIC and BIGINT columns as:
    • Numbers: Use this for standard numbers.
    • Text: Use this if you expect numbers longer than 16 digits. Without this, numbers may be incorrect.
  • Replace Empty Strings with NULL: Whether to replace empty strings with NULL in input. This may be useful when working with data exported from spreadsheet software.

Use this operation to insert rows in a table.

Enter these parameters:

  • Credential to connect with: Create or select an existing Postgres credential.

  • Operation: Select Insert.

  • Schema: Choose the schema that contains the table you want to work on. Select From list to choose the schema from the dropdown list or By Name to enter the schema name.

  • Table: Choose the table that you want to work on. Select From list to choose the table from the dropdown list or By Name to enter the table name.

  • Mapping Column Mode: How to map column names to incoming data:

    • Map Each Column Manually: Select the values to use for each column.
    • Map Automatically: Automatically map incoming data to matching column names in Postgres. The incoming data field names must match the column names in Postgres for this to work. If necessary, consider using the Edit Fields (Set) node before this node to adjust the format as needed.
  • Connection Timeout: The number of seconds to try to connect to the database.

  • Delay Closing Idle Connection: The number of seconds to wait before considering idle connections eligible for closing.

  • Query Batching: The way to send queries to the database:

    • Single Query: A single query for all incoming items.
    • Independently: Execute one query per incoming item of the execution.
    • Transaction: Execute all queries in a transaction. If a failure occurs, Postgres rolls back all changes.
  • Output Columns: Choose which columns to output. You can select from a list of available columns or specify IDs using expressions.

  • Output Large-Format Numbers As: The format to output NUMERIC and BIGINT columns as:

    • Numbers: Use this for standard numbers.
    • Text: Use this if you expect numbers longer than 16 digits. Without this, numbers may be incorrect.
  • Skip on Conflict: Whether to skip the row if the insert violates a unique or exclusion constraint instead of throwing an error.

  • Replace Empty Strings with NULL: Whether to replace empty strings with NULL in input. This may be useful when working with data exported from spreadsheet software.

Use this operation to insert or update rows in a table.

Enter these parameters:

  • Credential to connect with: Create or select an existing Postgres credential.
  • Operation: Select Insert or Update.
  • Schema: Choose the schema that contains the table you want to work on. Select From list to choose the schema from the dropdown list or By Name to enter the schema name.
  • Table: Choose the table that you want to work on. Select From list to choose the table from the dropdown list or By Name to enter the table name.
  • Mapping Column Mode: How to map column names to incoming data:
    • Map Each Column Manually: Select the values to use for each column.
    • Map Automatically: Automatically map incoming data to matching column names in Postgres. The incoming data field names must match the column names in Postgres for this to work. If necessary, consider using the Edit Fields (Set) node before this node to adjust the format as needed.

Insert or Update options

  • Connection Timeout: The number of seconds to try to connect to the database.
  • Delay Closing Idle Connection: The number of seconds to wait before considering idle connections eligible for closing.
  • Query Batching: The way to send queries to the database:
    • Single Query: A single query for all incoming items.
    • Independently: Execute one query per incoming item of the execution.
    • Transaction: Execute all queries in a transaction. If a failure occurs, Postgres rolls back all changes.
  • Output Columns: Choose which columns to output. You can select from a list of available columns or specify IDs using expressions.
  • Output Large-Format Numbers As: The format to output NUMERIC and BIGINT columns as:
    • Numbers: Use this for standard numbers.
    • Text: Use this if you expect numbers longer than 16 digits. Without this, numbers may be incorrect.
  • Replace Empty Strings with NULL: Whether to replace empty strings with NULL in input. This may be useful when working with data exported from spreadsheet software.

Use this operation to select rows in a table.

Enter these parameters:

  • Credential to connect with: Create or select an existing Postgres credential.

  • Operation: Select Select.

  • Schema: Choose the schema that contains the table you want to work on. Select From list to choose the schema from the dropdown list or By Name to enter the schema name.

  • Table: Choose the table that you want to work on. Select From list to choose the table from the dropdown list or By Name to enter the table name.

  • Return All: Whether to return all results or only up to a given limit.

  • Limit: The maximum number of items to return when Return All is disabled.

  • Select Rows: Set the conditions to select rows. Define a Column, Operator, and Value to match rows on. If you don't select anything, Postgres selects all rows.

  • Combine Conditions: How to combine the conditions in Select Rows. AND requires all conditions to be true, while OR requires at least one condition to be true.

  • Sort: Choose how to sort the selected rows. Choose a Column from a list or by ID and a sort Direction.

  • Connection Timeout: The number of seconds to try to connect to the database.

  • Delay Closing Idle Connection: The number of seconds to wait before considering idle connections eligible for closing.

  • Query Batching: The way to send queries to the database:

    • Single Query: A single query for all incoming items.
    • Independently: Execute one query per incoming item of the execution.
    • Transaction: Execute all queries in a transaction. If a failure occurs, Postgres rolls back all changes.
  • Output Columns: Choose which columns to output. You can select from a list of available columns or specify IDs using expressions.

  • Output Large-Format Numbers As: The format to output NUMERIC and BIGINT columns as:

    • Numbers: Use this for standard numbers.
    • Text: Use this if you expect numbers longer than 16 digits. Without this, numbers may be incorrect.

Use this operation to update rows in a table.

Enter these parameters:

  • Credential to connect with: Create or select an existing Postgres credential.

  • Operation: Select Update.

  • Schema: Choose the schema that contains the table you want to work on. Select From list to choose the schema from the dropdown list or By Name to enter the schema name.

  • Table: Choose the table that you want to work on. Select From list to choose the table from the dropdown list or By Name to enter the table name.

  • Mapping Column Mode: How to map column names to incoming data:

    • Map Each Column Manually: Select the values to use for each column.
    • Map Automatically: Automatically map incoming data to matching column names in Postgres. The incoming data field names must match the column names in Postgres for this to work. If necessary, consider using the Edit Fields (Set) node before this node to adjust the format as needed.
  • Connection Timeout: The number of seconds to try to connect to the database.

  • Delay Closing Idle Connection: The number of seconds to wait before considering idle connections eligible for closing.

  • Query Batching: The way to send queries to the database:

    • Single Query: A single query for all incoming items.
    • Independently: Execute one query per incoming item of the execution.
    • Transaction: Execute all queries in a transaction. If a failure occurs, Postgres rolls back all changes.
  • Output Columns: Choose which columns to output. You can select from a list of available columns or specify IDs using expressions.

  • Output Large-Format Numbers As: The format to output NUMERIC and BIGINT columns as:

    • Numbers: Use this for standard numbers.
    • Text: Use this if you expect numbers longer than 16 digits. Without this, numbers may be incorrect.
  • Replace Empty Strings with NULL: Whether to replace empty strings with NULL in input. This may be useful when working with data exported from spreadsheet software.

Templates and examples

Chat with Postgresql Database

View template details

Generate Instagram Content from Top Trends with AI Image Generation

by mustafa kendigüzel

View template details

AI Customer Support Assistant · WhatsApp Ready · Works for Any Business

View template details

Browse Postgres integration templates, or search all templates

n8n provides a trigger node for Postgres. You can find the trigger node docs here.

Use query parameters

When creating a query to run on a Postgres database, you can use the Query Parameters field in the Options section to load data into the query. n8n sanitizes data in query parameters, which prevents SQL injection.

For example, suppose you want to find a person by their email address. Given the following input data:

You can write a query like:

Then in Query Parameters, provide the field values to use. You can provide fixed values or expressions. For this example, use expressions so the node can pull the email address from each input item in turn:

For common questions or issues and suggested solutions, refer to Common issues.

Examples:

Example 1 (unknown):

[
    {
        "email": "alex@example.com",
        "name": "Alex",
        "age": 21 
    },
    {
        "email": "jamie@example.com",
        "name": "Jamie",
        "age": 33 
    }
]

Example 2 (unknown):

SELECT * FROM $1:name WHERE email = $2;

Example 3 (unknown):

// users is an example table name
{{ [ 'users', $json.email ] }}

The email address to use for the TLS/SSL certificate creation

URL: llms-txt#the-email-address-to-use-for-the-tls/ssl-certificate-creation

Contents:

    1. Create local files directory
    1. Create Docker Compose file
    1. Start Docker Compose
    1. Done
  • Next steps

SSL_EMAIL=user@example.com

services:
  traefik:
    image: "traefik"
    restart: always
    command:
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.mytlschallenge.acme.tlschallenge=true"
      - "--certificatesresolvers.mytlschallenge.acme.email=${SSL_EMAIL}"
      - "--certificatesresolvers.mytlschallenge.acme.storage=/letsencrypt/acme.json"
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - traefik_data:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock:ro

  n8n:
    image: docker.n8n.io/n8nio/n8n
    restart: always
    ports:
      - "127.0.0.1:5678:5678"
    labels:
      - traefik.enable=true
      - traefik.http.routers.n8n.rule=Host(`${SUBDOMAIN}.${DOMAIN_NAME}`)
      - traefik.http.routers.n8n.tls=true
      - traefik.http.routers.n8n.entrypoints=web,websecure
      - traefik.http.routers.n8n.tls.certresolver=mytlschallenge
      - traefik.http.middlewares.n8n.headers.SSLRedirect=true
      - traefik.http.middlewares.n8n.headers.STSSeconds=315360000
      - traefik.http.middlewares.n8n.headers.browserXSSFilter=true
      - traefik.http.middlewares.n8n.headers.contentTypeNosniff=true
      - traefik.http.middlewares.n8n.headers.forceSTSHeader=true
      - traefik.http.middlewares.n8n.headers.SSLHost=${DOMAIN_NAME}
      - traefik.http.middlewares.n8n.headers.STSIncludeSubdomains=true
      - traefik.http.middlewares.n8n.headers.STSPreload=true
      - traefik.http.routers.n8n.middlewares=n8n@docker
    environment:
      - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true
      - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME}
      - N8N_PORT=5678
      - N8N_PROTOCOL=https
      - N8N_RUNNERS_ENABLED=true
      - NODE_ENV=production
      - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - TZ=${GENERIC_TIMEZONE}
    volumes:
      - n8n_data:/home/node/.n8n
      - ./local-files:/files

volumes:
  n8n_data:
  traefik_data:

sudo docker compose up -d

sudo docker compose stop


You can now reach n8n using the subdomain + domain combination you defined in your `.env` file configuration. The above example would result in `https://n8n.example.com`.

n8n is only accessible using secure HTTPS, not over plain HTTP.

If you have trouble reaching your instance, check your server's firewall settings and your DNS configuration.

- Learn more about [configuring](../../../configuration/environment-variables/) and [scaling](../../../scaling/overview/) n8n.
- Or explore using n8n: try the [Quickstarts](../../../../try-it-out/).

**Examples:**

Example 1 (unknown):
## 5. Create local files directory

Inside your project directory, create a directory called `local-files` for sharing files between the n8n instance and the host system (for example, using the [Read/Write Files from Disk node](../../../../integrations/builtin/core-nodes/n8n-nodes-base.readwritefile/)):

Example 2 (unknown):

The Docker Compose file below can automatically create this directory, but doing it manually ensures that it's created with the right ownership and permissions.

## 6. Create Docker Compose file

Create a `compose.yaml` file. Paste the following in the file:

Example 3 (unknown):

The Docker Compose file above configures two containers: one for n8n, and one to run [traefik](https://github.com/traefik/traefik), an application proxy to manage TLS/SSL certificates and handle routing.

It also creates and mounts two [Docker Volumes](https://docs.docker.com/engine/storage/volumes/) and mounts the `local-files` directory you created earlier:

| Name            | Type                                                        | Container mount   | Description                                                                                                                         |
| --------------- | ----------------------------------------------------------- | ----------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| `n8n_data`      | [Volume](https://docs.docker.com/engine/storage/volumes/)   | `/home/node/.n8n` | Where n8n saves its SQLite database file and encryption key.                                                                        |
| `traefik_data`  | [Volume](https://docs.docker.com/engine/storage/volumes/)   | `/letsencrypt`    | Where traefik saves TLS/SSL certificate data.                                                                                       |
| `./local-files` | [Bind](https://docs.docker.com/engine/storage/bind-mounts/) | `/files`          | A local directory shared between the n8n instance and host. In n8n, use the `/files` path to read from and write to this directory. |

## 7. Start Docker Compose

Start n8n by typing:

Example 4 (unknown):

To stop the containers, type:

JMESPath method

URL: llms-txt#jmespath-method

This is an n8n-provided method for working with the JMESPath library.

In JavaScript (expressions and the Code node):

| Method | Description | Available in Code node? |
| --- | --- | --- |
| `$jmespath()` | Perform a search on a JSON object using JMESPath. | Yes |

In Python (Code node only; Python isn't available in expressions):

| Method | Description |
| --- | --- |
| `_jmespath()` | Perform a search on a JSON object using JMESPath. |
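
A minimal sketch of how `$jmespath()` can look in a JavaScript Code node (the `people` data and the filter expression are illustrative, not from the original page):

```javascript
// Run Once for All Items: query an illustrative in-memory object with JMESPath.
const people = [
  { name: "Alex", age: 21 },
  { name: "Jamie", age: 33 },
];

// $jmespath(object, searchString) returns the part of the object matching the search.
const namesOver30 = $jmespath(people, "[?age > `30`].name");

return [{ json: { namesOver30 } }];
```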

Credentials library

URL: llms-txt#credentials-library

This section contains step-by-step information about authenticating the different nodes in n8n.

To learn more about creating, managing, and sharing credentials, refer to Manage credentials.


Rename Keys

URL: llms-txt#rename-keys

Contents:

  • Node parameters
  • Node options
  • Templates and examples

Use the Rename Keys node to rename the keys of a key-value pair in n8n.

You can rename one or multiple keys using the Rename Keys node. Select the Add new key button to rename a key.

For each key, enter the:

  • Current Key Name: The current name of the key you want to rename.
  • New Key Name: The new name you want to assign to the key.

Choose whether to use a regular expression (Regex) to identify the keys to rename. To use this option, you must also enter:

  • The Regular Expression you'd like to use.
  • Replace With: Enter the new name you want to assign to the key(s) that match the Regular Expression.
  • You can also choose these Regex-specific options:
    • Case Insensitive: Set whether the regular expression should match case (turned off) or be case insensitive (turned on).
    • Max Depth: Enter the maximum depth to replace keys, using -1 for unlimited and 0 for top-level only.

Using a regular expression can affect any keys that match the expression, including keys you've already renamed.

Templates and examples

Explore n8n Nodes in a Visual Reference Library

View template details

Create Salesforce accounts based on Google Sheets data

View template details

Create Salesforce accounts based on Excel 365 data

View template details

Browse Rename Keys integration templates, or search all templates


Schema Preview

URL: llms-txt#schema-preview

Contents:

  • Using the preview

Schema Preview exposes expected schema data from the previous node in the Node Editor without the user having to provide credentials or execute the node. This makes it possible to construct workflows without having to provide credentials in advance. The preview doesn't include mock data, but it does expose the expected fields, making it possible to select and incorporate them into the input of subsequent nodes.

  1. There must be a node with Schema Preview available in your workflow.
  2. When you open the details of the next node in the sequence, the Schema Preview data shows up in the Node Editor where schema data would typically be exposed.
  3. Use data from the Schema Preview just as you would other schemas - drag and drop fields as input into your node parameters and settings.

Facebook Graph API node

URL: llms-txt#facebook-graph-api-node

Contents:

  • Operations
    • Parameters
  • Templates and examples

Use the Facebook Graph API node to automate work in Facebook Graph API, and integrate Facebook Graph API with other applications. n8n has built-in support for a wide range of Facebook Graph API features, including sending GET, POST, and DELETE requests and setting parameters such as the host URL and request method.

On this page, you'll find a list of operations the Facebook Graph API node supports and links to more resources.

Refer to Facebook Graph API credentials for guidance on setting up authentication.

  • Default
    • GET
    • POST
    • DELETE
  • Video Uploads
    • GET
    • POST
    • DELETE

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Host URL: The host URL for the request. The following options are available:
    • Default: Requests are passed to the graph.facebook.com host URL. Used for the majority of requests.
    • Video: Requests are passed to the graph-video.facebook.com host URL. Used for video upload requests only.
  • HTTP Request Method: The method to be used for this request, from the following options:
    • GET
    • POST
    • DELETE
  • Graph API Version: The version of the Facebook Graph API to be used for this request.
  • Node: The node on which to operate, for example /<page-id>/feed. Read more about it in the official Facebook Developer documentation.
  • Edge: Edge of the node on which to operate. Edges represent collections of objects which are attached to the node.
  • Ignore SSL Issues: Toggle to still download the response even if SSL certificate validation isn't possible.
  • Send Binary File: Available for POST operations. If enabled, binary data is sent as the body. Requires setting the following:
    • Input Binary Field: Name of the binary property which contains the data for the file to be uploaded.

Templates and examples

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

AI-Powered Social Media Content Generator & Publisher

View template details

Generate Instagram Content from Top Trends with AI Image Generation

by mustafa kendigüzel

View template details

Browse Facebook Graph API integration templates, or search all templates


Ghost node

URL: llms-txt#ghost-node

Contents:

  • Operations
    • Admin API
    • Content API
  • Templates and examples
  • What to do if your operation isn't supported

Use the Ghost node to automate work in Ghost, and integrate Ghost with other applications. n8n has built-in support for a wide range of Ghost features, including creating, updating, deleting, and getting posts through the Admin and Content APIs.

On this page, you'll find a list of operations the Ghost node supports and links to more resources.

Refer to Ghost credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Post

    • Create a post
    • Delete a post
    • Get a post
    • Get all posts
    • Update a post
  • Post

    • Get a post
    • Get all posts

Templates and examples

Multi-Agent PDF-to-Blog Content Generation

View template details

📄🌐PDF2Blog - Create Blog Post on Ghost CRM from PDF Document

View template details

Research AI Agent Team with auto citations using OpenRouter and Perplexity

View template details

Browse Ghost integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Segment credentials

URL: llms-txt#segment-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Segment account.

Supported authentication methods

Refer to Segment's Sources documentation for more information about the service.

To configure this credential, you'll need:

  • A Write Key: To get a Write Key, go to Sources > Add Source. Add a Node.js source and copy that write key to add to your n8n credential.

Refer to Locate your Write Key for more information.


Todoist credentials

URL: llms-txt#todoist-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Todoist's REST API documentation for more information about the service.

To configure this credential, you'll need a Todoist account and an API Key.

To get your API Key:

  1. In Todoist, open your Integration settings.
  2. Select the Developer tab.
  3. Copy your API token and enter it as the API Key in your n8n credential.

Refer to Find your API token for more information.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you're self-hosting n8n, you'll need a Todoist account and:

  • A Client ID
  • A Client Secret

Get both by creating an application:

  1. Open the Todoist App Management Console.
  2. Select Create a new app.
  3. Enter an App name for your app, like n8n integration.
  4. Select Create app.
  5. Copy the n8n OAuth Redirect URL and enter it as the OAuth redirect URL in Todoist.
  6. Copy the Client ID from Todoist and enter it in your n8n credential.
  7. Copy the Client Secret from Todoist and enter it in your n8n credential.
  8. Configure the rest of your Todoist app as it makes sense for your use case.

Refer to the Todoist Authorization Guide for more information.


Zoho CRM node

URL: llms-txt#zoho-crm-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Zoho CRM node to automate work in Zoho CRM, and integrate Zoho CRM with other applications. n8n has built-in support for a wide range of Zoho CRM features, including creating and deleting accounts, contacts, and deals.

On this page, you'll find a list of operations the Zoho CRM node supports and links to more resources.

Refer to Zoho CRM credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Account
    • Create an account
    • Create a new record, or update the current one if it already exists (upsert)
    • Delete an account
    • Get an account
    • Get all accounts
    • Update an account
  • Contact
    • Create a contact
    • Create a new record, or update the current one if it already exists (upsert)
    • Delete a contact
    • Get a contact
    • Get all contacts
    • Update a contact
  • Deal
    • Create a deal
    • Create a new record, or update the current one if it already exists (upsert)
    • Delete a deal
    • Get a deal
    • Get all deals
    • Update a deal
  • Invoice
    • Create an invoice
    • Create a new record, or update the current one if it already exists (upsert)
    • Delete an invoice
    • Get an invoice
    • Get all invoices
    • Update an invoice
  • Lead
    • Create a lead
    • Create a new record, or update the current one if it already exists (upsert)
    • Delete a lead
    • Get a lead
    • Get all leads
    • Get lead fields
    • Update a lead
  • Product
    • Create a product
    • Create a new record, or update the current one if it already exists (upsert)
    • Delete a product
    • Get a product
    • Get all products
    • Update a product
  • Purchase Order
    • Create a purchase order
    • Create a new record, or update the current one if it already exists (upsert)
    • Delete a purchase order
    • Get a purchase order
    • Get all purchase orders
    • Update a purchase order
  • Quote
    • Create a quote
    • Create a new record, or update the current one if it already exists (upsert)
    • Delete a quote
    • Get a quote
    • Get all quotes
    • Update a quote
  • Sales Order
    • Create a sales order
    • Create a new record, or update the current one if it already exists (upsert)
    • Delete a sales order
    • Get a sales order
    • Get all sales orders
    • Update a sales order
  • Vendor
    • Create a vendor
    • Create a new record, or update the current one if it already exists (upsert)
    • Delete a vendor
    • Get a vendor
    • Get all vendors
    • Update a vendor

Templates and examples

Process Shopify new orders with Zoho CRM and Harvest

View template details

Get all leads from Zoho CRM

View template details

Jotform Automated Commerce Sync: Telegram Confirmation & Zoho Invoice

View template details

Browse Zoho CRM integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Item List Output Parser node

URL: llms-txt#item-list-output-parser-node

Contents:

  • Node options
  • Templates and examples
  • Related resources

Use the Item List Output Parser node to return a list of items with a specific length and separator.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Number of Items: Enter the maximum number of items to return. Set to -1 for unlimited items.
  • Separator: Select the separator used to split the results into separate items. Defaults to a new line.

Templates and examples

Breakdown Documents into Study Notes using Templating MistralAI and Qdrant

View template details

Automate Your RFP Process with OpenAI Assistants

View template details

Explore n8n Nodes in a Visual Reference Library

View template details

Browse Item List Output Parser integration templates, or search all templates

Refer to LangChain's output parser documentation for more information about the service.

View n8n's Advanced AI documentation.


Splunk credentials

URL: llms-txt#splunk-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API auth token
  • Required capabilities

You can use these credentials to authenticate the following nodes:

Free trial Splunk Cloud Platform accounts can't access the REST API

Free trial Splunk Cloud Platform accounts don't have access to the REST API. Ensure you have the necessary permissions. Refer to Access requirements and limitations for the Splunk Cloud Platform REST API for more details.

Supported authentication methods

Refer to Splunk's Enterprise API documentation for more information about the service.

Using API auth token

To configure this credential, you'll need:

  • An Auth Token: Once you've enabled token authentication, create an auth token in Settings > Tokens. Refer to Creating authentication tokens for more information.
  • A Base URL: For your Splunk instance. This should include the protocol, domain, and port, for example: https://localhost:8089.
  • Allow Self-Signed Certificates: If turned on, n8n will connect even if SSL validation fails.

Required capabilities

Your Splunk platform account and role must have certain capabilities to create authentication tokens:

  • edit_tokens_own: Required if you want to create tokens for yourself.
  • edit_tokens_all: Required if you want to create tokens for any user on the instance.

Refer to Define roles on the Splunk platform with capabilities for more information.


Insights environment variables

URL: llms-txt#insights-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

Insights gives instance owners and admins visibility into how workflows perform over time. Refer to Insights for details.

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| `N8N_DISABLED_MODULES` | String | - | Set to `insights` to disable the feature and metrics collection for an instance. |
| `N8N_INSIGHTS_COMPACTION_BATCH_SIZE` | Number | 500 | The number of raw insights data to compact in a single batch. |
| `N8N_INSIGHTS_COMPACTION_DAILY_TO_WEEKLY_THRESHOLD_DAYS` | Number | 180 | The maximum age (in days) of daily insights data to compact. |
| `N8N_INSIGHTS_COMPACTION_HOURLY_TO_DAILY_THRESHOLD_DAYS` | Number | 90 | The maximum age (in days) of hourly insights data to compact. |
| `N8N_INSIGHTS_COMPACTION_INTERVAL_MINUTES` | Number | 60 | Interval (in minutes) at which compaction should run. |
| `N8N_INSIGHTS_FLUSH_BATCH_SIZE` | Number | 1000 | The maximum number of insights data to keep in the buffer before flushing. |
| `N8N_INSIGHTS_FLUSH_INTERVAL_SECONDS` | Number | 30 | The interval (in seconds) at which the insights data should be flushed to the database. |
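
For example, setting `N8N_DISABLED_MODULES=insights` in the environment where n8n runs turns the feature and its metrics collection off for that instance.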

Milvus credentials

URL: llms-txt#milvus-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth

You can use these credentials to authenticate the following nodes:

Create and run a Milvus instance. Refer to the Install Milvus documentation for more information.

Supported authentication methods

Refer to Milvus's Authentication documentation for more information about setting up authentication.

View n8n's Advanced AI documentation.

To configure this credential, you'll need:

  • Base URL: The base URL of your Milvus instance. The default is http://localhost:19530.
  • Username: The username to authenticate to your Milvus instance. The default value is root.
  • Password: The password to authenticate to your Milvus instance. The default value is Milvus.

getWorkflowStaticData(type)

URL: llms-txt#getworkflowstaticdata(type)

This gives access to the static workflow data.

  • Static data isn't available when testing workflows. The workflow must be active and called by a trigger or webhook to save static data.
  • This feature may behave unreliably under high-frequency workflow executions.

You can save data directly in the workflow. This data should be small.

As an example, you can save a timestamp of the last item processed from an RSS feed or database. The method always returns an object. Properties can then be read, set, or deleted on that object. When the workflow execution succeeds, n8n automatically checks whether the data has changed and saves it if necessary.

There are two types of static data, global and node. Global static data is the same in the whole workflow. Every node in the workflow can access it. The node static data is unique to the node. Only the node that set it can retrieve it again.

Example with global data:

Examples:

Example 1 (unknown):

// Get the global workflow static data
const workflowStaticData = $getWorkflowStaticData('global');

// Access its data
const lastExecution = workflowStaticData.lastExecution;

// Update its data
workflowStaticData.lastExecution = new Date().getTime();

// Delete data
delete workflowStaticData.lastExecution;
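
The node-scoped variant works the same way; here's a minimal sketch (the `lastId` property name is illustrative):

```javascript
// Get the static data scoped to this node only
const nodeStaticData = $getWorkflowStaticData('node');

// Read a node-specific value
const lastId = nodeStaticData.lastId;

// Update it; n8n persists the change when the execution succeeds
nodeStaticData.lastId = 123;
```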

Understand source control and environments

URL: llms-txt#understand-source-control-and-environments

  • Available on Enterprise.

  • You must be an n8n instance owner or instance admin to enable and configure source control.

  • Instance owners and instance admins can push changes to and pull changes from the connected repository.

  • Project admins can push changes to the connected repository. They can't pull changes from the repository.

  • Environments in n8n: The purpose of environments, and how they work in n8n.

  • Git in n8n: How n8n uses Git.

  • Branch patterns: The possible relationships between n8n instances and Git branches.


xAI Grok Chat Model node

URL: llms-txt#xai-grok-chat-model-node

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Related resources

Use the xAI Grok Chat Model node to access xAI Grok's large language models for conversational AI and text generation tasks.

On this page, you'll find the node parameters for the xAI Grok Chat Model node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model which will generate the completion. n8n dynamically loads available models from the xAI Grok API. Learn more in the xAI Grok model documentation.

  • Frequency Penalty: Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself.

  • Maximum Number of Tokens: Enter the maximum number of tokens used, which sets the completion length. Most models have a context length of 2048 tokens with the newest models supporting up to 32,768 tokens.

  • Response Format: Choose Text or JSON. JSON ensures the model returns valid JSON.

  • Presence Penalty: Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics.

  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

  • Timeout: Enter the maximum request time in milliseconds.

  • Max Retries: Enter the maximum number of times to retry a request.

  • Top P: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.

Templates and examples

🤖 AI content generation for Auto Service 🚘 Automate your social media📲!

View template details

AI Chatbot Call Center: Demo Call Center (Production-Ready, Part 2)

View template details

Homey Pro - Smarthouse integration with LLM

by Ole Andre Torjussen

View template details

Browse xAI Grok Chat Model integration templates, or search all templates

Refer to xAI Grok's API documentation for more information about the service.

View n8n's Advanced AI documentation.


TheHive node

URL: llms-txt#thehive-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported
  • Related resources

Use the TheHive node to automate work in TheHive, and integrate TheHive with other applications. n8n has built-in support for a wide range of TheHive features, including creating alerts and counting task logs, cases, and observables.

On this page, you'll find a list of operations the TheHive node supports and links to more resources.

TheHive and TheHive 5

n8n provides two nodes for TheHive. Use this node (TheHive) if you want to use TheHive's version 3 or 4 API. If you want to use version 5, use TheHive 5.

Refer to TheHive credentials for guidance on setting up authentication.

The available operations depend on your API version. To see the operations list, create your credentials, including selecting your API version. Then return to the node, select the resource you want to use, and n8n displays the available operations for your API version.

  • Alert
  • Case
  • Log
  • Observable
  • Task

Templates and examples

Analyze emails with S1EM

View template details

Weekly Shodan Query - Report Accidents

View template details

Create, update and get a case in TheHive

View template details

Browse TheHive integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

n8n provides a trigger node for TheHive. You can find the trigger node docs here.

Refer to TheHive's documentation for more information about the service:


Local File Trigger node

URL: llms-txt#local-file-trigger-node

Contents:

  • Node parameters
  • Changes to a Specific File
  • Changes Involving a Specific Folder
  • Node options
    • Examples for Ignore

The Local File Trigger node starts a workflow when it detects changes on the file system. These changes involve a file or folder getting added, changed, or deleted.

This node isn't available on n8n Cloud.

You can choose what event to watch for using the Trigger On parameter.

Changes to a Specific File

The node triggers when the specified file changes.

Enter the path for the file to watch in File to Watch.

Changes Involving a Specific Folder

The node triggers when a change occurs in the selected folder.

Configure these parameters:

  • Folder to Watch: Enter the path of the folder to watch.
  • Watch for: Select the type of change to watch for.

Use the node Options to include or exclude files and folders.

  • Include Linked Files/Folders: Also watch for changes to linked files or folders.
  • Ignore: Files or paths to ignore. n8n tests the whole path, not just the filename. Supports the Anymatch syntax.
  • Max Folder Depth: How deep into the folder structure to watch for changes.

Examples for Ignore

Ignore a single file:

**/<fileName>.<suffix>
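
As a further illustration (the directory name is hypothetical, not part of the original guide), you could ignore everything inside any `node_modules` directory with:

**/node_modules/**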

---

## crowd.dev Trigger node

**URL:** llms-txt#crowd.dev-trigger-node

**Contents:**
- Events
- Related resources

Use the crowd.dev Trigger node to respond to events in [crowd.dev](https://www.crowd.dev/) and integrate crowd.dev with other applications. n8n has built-in support for a wide range of crowd.dev events, including new activities and new members.

On this page, you'll find a list of events the crowd.dev Trigger node can respond to and links to more resources.

You can find authentication information for this node [here](../../credentials/crowddev/).

Examples and templates

For usage examples and templates to help you get started, refer to n8n's [crowd.dev Trigger integrations](https://n8n.io/integrations/crowddev-trigger/) list.

- New Activity
- New Member

n8n provides an app node for crowd.dev. You can find the node docs [here](../../app-nodes/n8n-nodes-base.crowddev/).

View [example workflows and related content](https://n8n.io/integrations/crowddev/) on n8n's website.

Refer to [crowd.dev's documentation](https://docs.crowd.dev/docs) for more information about the service.

---

## Plan a node

**URL:** llms-txt#plan-a-node

This section provides guidance on designing your node, including key technical decisions such as choosing your node building style.

When building a node, there are design choices you need to make before you start:

- Which [node type](node-types/) you need to build.
- Which [node building style](choose-node-method/) to use.
- Your [UI design and UX principles](node-ui-design/)
- Your node's [file structure](../build/reference/node-file-structure/).

---

## MISP credentials

**URL:** llms-txt#misp-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API key

You can use these credentials to authenticate the following nodes:

- [MISP](../../app-nodes/n8n-nodes-base.misp/)

Install and run a [MISP](https://misp.github.io/MISP/) instance.

## Supported authentication methods

Refer to [MISP's Automation API documentation](https://www.circl.lu/doc/misp/automation) for more information about the service.

To configure this credential, you'll need:

- An **API Key**: In MISP, these are called Automation keys. Get an automation key from **Event Actions > Automation**. Refer to [MISP's automation keys documentation](https://www.circl.lu/doc/misp/automation/#automation-key) for instructions on generating more keys.
- A **Base URL**: Your MISP URL.
- Select whether to **Allow Unauthorized Certificates**: If turned on, the credential will connect even if SSL certificate validation fails.

---

## Stop the container with the `<container_id>`

**URL:** llms-txt#stop-the-container-with-the-`<container_id>`

docker stop <container_id>

---

## Cal.com credentials

**URL:** llms-txt#cal.com-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API key

You can use these credentials to authenticate the following nodes:

- [Cal.com Trigger](../../trigger-nodes/n8n-nodes-base.caltrigger/)

Create a [Cal.com](https://www.cal.com/) account.

## Supported authentication methods

Refer to [Cal.com's API documentation](https://cal.com/docs/enterprise-features/api#api-server-specifications) for more information about the service.

To configure this credential, you'll need:

- An **API Key**: Refer to the [Cal API Quick Start documentation](https://cal.com/docs/enterprise-features/api/quick-start) for information on how to generate a new API key.
- A **Host**: If you're using the cloud version of Cal.com, leave the Host as `https://api.cal.com`. If you're self-hosting Cal.com, enter the **Host** for your Cal.com instance.

---

## Basic LLM Chain node

**URL:** llms-txt#basic-llm-chain-node

**Contents:**
- Node parameters
  - Prompt
  - Require Specific Output Format
- Chat Messages
- Templates and examples
- Related resources
- Common issues
  - No prompt specified error

Use the Basic LLM Chain node to set the prompt that the model will use along with setting an optional parser for the response.

On this page, you'll find the node parameters for the Basic LLM Chain node and links to more resources.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's [Basic LLM Chain integrations](https://n8n.io/integrations/basic-llm-chain/) page.

## Node parameters

### Prompt

Select how you want the node to construct the prompt (also known as the user's query or input from the chat).

- **Take from previous node automatically**: If you select this option, the node expects an input field called `chatInput` from a previous node.
- **Define below**: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the **Prompt (User Message)** field.
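
For example (the `question` field name is illustrative), an expression like `{{ $json.question }}` in the **Prompt (User Message)** field pulls the prompt text from the incoming item.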

### Require Specific Output Format

This parameter controls whether you want the node to require a specific output format. When turned on, n8n prompts you to connect one of these output parsers to the node:

- [Auto-fixing Output Parser](../../sub-nodes/n8n-nodes-langchain.outputparserautofixing/)
- [Item List Output Parser](../../sub-nodes/n8n-nodes-langchain.outputparseritemlist/)
- [Structured Output Parser](../../sub-nodes/n8n-nodes-langchain.outputparserstructured/)

## Chat Messages

Use **Chat Messages** when you're using a chat model to set a message.

n8n ignores these options if you don't connect a chat model. Select the **Type Name or ID** you want the node to use:

- **AI**: Enter a sample expected response in the **Message** field. The model will try to respond in the same way in its messages.
- **System**: Enter a system **Message** to include with the user input to help guide the model in what it should do. Use this option for things like defining tone, for example: `Always respond talking like a pirate`.
- **User**: Enter a sample user input. Using this with the **AI** option can help improve the output of the agent. Using both together provides a sample of an input and an expected response (the **AI Message**) for the model to follow.

Select one of these input types:

- **Text**: Enter a sample user input as a text **Message**.
- **Image (Binary)**: Select a binary input from a previous node. Enter the **Image Data Field Name** to identify which binary field from the previous node contains the image data.
- **Image (URL)**: Use this option to feed an image in from a URL. Enter the **Image URL**.

For both the **Image** types, select the **Image Details** to control how the model processes the image and generates its textual understanding. Choose from:

- **Auto**: The model uses the auto setting, which looks at the image input size and decides if it should use the Low or High setting.
- **Low**: The model receives a low-resolution 512px x 512px version of the image and represents the image with a budget of 65 tokens. This allows the API to return faster responses and consume fewer input tokens. Use this option for use cases that don't require high detail.
- **High**: The model can access the low-resolution image and then creates detailed crops of input images as 512px squares based on the input image size. Each of the detailed crops uses twice the token budget (65 tokens) for a total of 129 tokens. Use this option for use cases that require high detail.

## Templates and examples

**Chat with PDF docs using AI (quoting sources)**

[View template details](https://n8n.io/workflows/2165-chat-with-pdf-docs-using-ai-quoting-sources/)

**Respond to WhatsApp Messages with AI Like a Pro!**

[View template details](https://n8n.io/workflows/2466-respond-to-whatsapp-messages-with-ai-like-a-pro/)

**🚀Transform Podcasts into Viral TikTok Clips with Gemini+ Multi-Platform Posting✅**

[View template details](https://n8n.io/workflows/4568-transform-podcasts-into-viral-tiktok-clips-with-gemini-multi-platform-posting/)

[Browse Basic LLM Chain integration templates](https://n8n.io/integrations/basic-llm-chain/), or [search all templates](https://n8n.io/workflows/)

## Related resources

Refer to [LangChain's documentation on Basic LLM Chains](https://js.langchain.com/docs/tutorials/llm_chain/) for more information about the service.

View n8n's [Advanced AI](../../../../../advanced-ai/) documentation.

## Common issues

Here are some common errors and issues with the Basic LLM Chain node and steps to resolve or troubleshoot them.

### No prompt specified error

This error displays when the **Prompt** is empty or invalid.

You might see this error in one of two scenarios:

1. When you've set the **Prompt** to **Define below** and haven't entered anything in the **Text** field.
   - To resolve, enter a valid prompt in the **Text** field.
1. When you've set the **Prompt** to **Connected Chat Trigger Node** and the incoming data has no field called `chatInput`.
   - The node expects the `chatInput` field. If your previous node doesn't have this field, add an [Edit Fields (Set)](../../../core-nodes/n8n-nodes-base.set/) node to edit an incoming field name to `chatInput`.
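
Alternatively, a Code node can do the rename; a minimal sketch, assuming the incoming items carry the prompt in a `message` field (an illustrative name, not from the original page):

```javascript
// Run Once for All Items: copy an assumed "message" field into the
// "chatInput" field that the Basic LLM Chain expects.
return $input.all().map(item => ({
  json: {
    ...item.json,
    chatInput: item.json.message,
  },
}));
```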

---

## PGVector Vector Store node

**URL:** llms-txt#pgvector-vector-store-node

**Contents:**
- Node usage patterns
  - Use as a regular node to insert and retrieve documents
  - Connect directly to an AI agent as a tool
  - Use a retriever to fetch documents
  - Use the Vector Store Question Answer Tool to answer questions
- Node parameters
  - Operation Mode
  - Rerank Results
  - Get Many parameters
  - Insert Documents parameters

PGVector is an extension of PostgreSQL. Use this node to interact with the PGVector tables in your PostgreSQL database. You can insert documents into a vector table, get documents from a vector table, retrieve documents to provide them to a retriever connected to a [chain](../../../../../glossary/#ai-chain), or connect directly to an [agent](../../../../../glossary/#ai-agent) as a [tool](../../../../../glossary/#ai-tool).

On this page, you'll find the node parameters for the PGVector node, and links to more resources.

You can find authentication information for this node [here](../../../credentials/postgres/).

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name.

## Node usage patterns

You can use the PGVector Vector Store node in the following patterns.

### Use as a regular node to insert and retrieve documents

You can use the PGVector Vector Store as a regular node to insert or get documents. This pattern places the PGVector Vector Store in the regular connection flow without using an agent.

You can see an example of this in scenario 1 of [this template](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/) (the template uses the Supabase Vector Store, but the pattern is the same).

### Connect directly to an AI agent as a tool

You can connect the PGVector Vector Store node directly to the tool connector of an [AI agent](../n8n-nodes-langchain.agent/) to use a vector store as a resource when answering queries.

Here, the connection would be: AI agent (tools connector) -> PGVector Vector Store node.

### Use a retriever to fetch documents

You can use the [Vector Store Retriever](../../sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the PGVector Vector Store node to fetch documents from the PGVector Vector Store node. This is often used with the [Question and Answer Chain](../n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input.

An [example of the connection flow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) (the linked example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> PGVector Vector Store.

### Use the Vector Store Question Answer Tool to answer questions

Another pattern uses the [Vector Store Question Answer Tool](../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the PGVector Vector Store node. Rather than connecting the PGVector Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store.

The [connections flow](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) (the linked example uses the Simple Vector Store, but the pattern is the same) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Simple Vector store.

## Node parameters

### Operation Mode

This Vector Store node has four modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, and **Retrieve Documents (As Tool for AI Agent)**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available.

#### Get Many

In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context.

#### Insert Documents

Use insert documents mode to insert new documents into your vector database.

#### Retrieve Documents (as Vector Store for Chain/Tool)

Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.

#### Retrieve Documents (as Tool for AI Agent)

Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details.

### Rerank Results

Enables [reranking](../../../../../glossary/#ai-reranking). If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the `Get Many`, `Retrieve Documents (As Vector Store for Chain/Tool)` and `Retrieve Documents (As Tool for AI Agent)` modes.

### Get Many parameters

- **Table name**: Enter the name of the table you want to query.
- **Prompt**: Enter your search query.
- **Limit**: Enter a number to set how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results.

### Insert Documents parameters

- **Table name**: Enter the name of the table you want to query.

### Retrieve Documents parameters (As Vector Store for Chain/Tool)

- **Table name**: Enter the name of the table you want to query.

### Retrieve Documents (As Tool for AI Agent) parameters

- **Name**: The name of the vector store.
- **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
- **Table Name**: Enter the PGVector table to use.
- **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results.

A way to separate datasets in PGVector. This creates a separate table and column to keep track of which collection a vector belongs to.

- **Use Collection**: Select whether to use a collection (turned on) or not (turned off).
- **Collection Name**: Enter the name of the collection you want to use.
- **Collection Table Name**: Enter the name of the table to store collection information in.

The following options specify the names of the columns to store the vectors and corresponding information in:

- **ID Column Name**
- **Vector Column Name**
- **Content Column Name**
- **Metadata Column Name**

Available in **Get Many** mode. When searching for data, use this to match with metadata associated with the document.

This is an `AND` query. If you specify more than one metadata filter field, all of them must match.

When inserting data, the metadata is set using the document loader. Refer to [Default Data Loader](../../sub-nodes/n8n-nodes-langchain.documentdefaultdataloader/) for more information on loading documents.

## Templates and examples

**HR & IT Helpdesk Chatbot with Audio Transcription**

[View template details](https://n8n.io/workflows/2752-hr-and-it-helpdesk-chatbot-with-audio-transcription/)

**Explore n8n Nodes in a Visual Reference Library**

[View template details](https://n8n.io/workflows/3891-explore-n8n-nodes-in-a-visual-reference-library/)

**📥 Transform Google Drive Documents into Vector Embeddings**

[View template details](https://n8n.io/workflows/3647-transform-google-drive-documents-into-vector-embeddings/)

[Browse PGVector Vector Store integration templates](https://n8n.io/integrations/postgres-pgvector-store/), or [search all templates](https://n8n.io/workflows/)

Refer to [LangChain's PGVector documentation](https://js.langchain.com/docs/integrations/vectorstores/pgvector) for more information about the service.

View n8n's [Advanced AI](../../../../../advanced-ai/) documentation.

## Self-hosted AI Starter Kit

New to working with AI and using self-hosted n8n? Try n8n's [self-hosted AI Starter Kit](../../../../../hosting/starter-kits/ai-starter-kit/) to get started with a proof-of-concept or demo playground using Ollama, Qdrant, and PostgreSQL.

---

## Source control and environments

**URL:** llms-txt#source-control-and-environments

- Available on Enterprise.
- You must be an n8n instance owner or instance admin to enable and configure source control.
- Instance owners and instance admins can push changes to and pull changes from the connected repository.
- Project admins can push changes to the connected repository. They can't pull changes from the repository.

n8n uses Git-based source control to support environments. Linking your n8n instances to a Git repository lets you create multiple n8n environments, backed by Git branches.

- [Understand](understand/):
  - [Environments in n8n](understand/environments/): The purpose of environments, and how they work in n8n.
  - [Git and n8n](understand/git/): How n8n uses Git.
  - [Branch patterns](understand/patterns/): The possible relationships between n8n instances and Git branches.
- [Set up source control for environments](setup/): How to connect your n8n instance to Git.
- [Using](using/):
  - [Push and pull](using/push-pull/): Send work to Git, and fetch work from Git to your instance.
  - [Copy work between environments](using/copy-work/): How to copy work between different n8n instances.
- [Tutorial: Create environments with source control](create-environments/): An end-to-end tutorial, setting up environments using n8n's recommended configurations.

- [Variables](../code/variables/): reusable values.
- [External secrets](../external-secrets/): manage [credentials](../glossary/#credential-n8n) with an external secrets vault.

---

## AWS IAM node

**URL:** llms-txt#aws-iam-node

**Contents:**
- Operations
- Templates and examples
- Related resources
- What to do if your operation isn't supported

Use the AWS IAM node to automate work in AWS Identity and Access Management (IAM) and integrate AWS IAM with other applications. n8n has built-in support for a wide range of AWS IAM features, which includes creating, updating, getting and deleting users and groups as well as managing group membership.

On this page, you'll find a list of operations the AWS IAM node supports, and links to more resources.

You can find authentication information for this node [here](../../credentials/aws/).

- **User**:
  - **Add to Group**: Add an existing user to a group.
  - **Create**: Create a new user.
  - **Delete**: Delete a user.
  - **Get**: Retrieve a user.
  - **Get Many**: Retrieve a list of users.
  - **Remove From Group**: Remove a user from a group.
  - **Update**: Update an existing user.
- **Group**:
  - **Create**: Create a new group.
  - **Delete**: Delete a group.
  - **Get**: Retrieve a group.
  - **Get Many**: Retrieve a list of groups.
  - **Update**: Update an existing group.

## Templates and examples

**Automated GitHub Scanner for Exposed AWS IAM Keys**

[View template details](https://n8n.io/workflows/5021-automated-github-scanner-for-exposed-aws-iam-keys/)

**Automated AWS IAM Key Compromise Response with Slack & Claude AI**

[View template details](https://n8n.io/workflows/5123-automated-aws-iam-key-compromise-response-with-slack-and-claude-ai/)

**Send Slack Alerts for AWS IAM Access Keys Older Than 365 Days**

[View template details](https://n8n.io/workflows/7501-send-slack-alerts-for-aws-iam-access-keys-older-than-365-days/)

[Browse AWS IAM integration templates](https://n8n.io/integrations/aws-iam/), or [search all templates](https://n8n.io/workflows/)

Refer to the [AWS IAM documentation](https://docs.aws.amazon.com/IAM/latest/APIReference/welcome.html) for more information about the service.

## What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**.
1. Select the service you want to connect to.
1. Select your credential.

Refer to [Custom API operations](../../../custom-operations/) for more information.

---

## Ollama Chat Model node

**URL:** llms-txt#ollama-chat-model-node

**Contents:**
- Node parameters
- Node options
- Templates and examples
- Related resources
- Common issues
- Self-hosted AI Starter Kit

The Ollama Chat Model node allows you to use local Llama 2 models with conversational [agents](../../../../../glossary/#ai-agent).

On this page, you'll find the node parameters for the Ollama Chat Model node, and links to more resources.

You can find authentication information for this node [here](../../../credentials/ollama/).

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name.

- **Model**: Select the model that generates the completion. Choose from:
  - **Llama2**
  - **Llama2 13B**
  - **Llama2 70B**
  - **Llama2 Uncensored**

Refer to the Ollama [Models Library documentation](https://ollama.com/library) for more information about available models.
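The node can only offer models that already exist in your local Ollama installation. A minimal sketch for pulling one of the models above, assuming Ollama is installed on the host that n8n connects to:

```bash
# Download a model so it appears in the node's Model selection
ollama pull llama2
ollama pull llama2:13b

# Confirm which models are available locally
ollama list
```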

- **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.
- **Top K**: Enter the number of token choices the model uses to generate the next token.
- **Top P**: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.

## Templates and examples

**Chat with local LLMs using n8n and Ollama**

[View template details](https://n8n.io/workflows/2384-chat-with-local-llms-using-n8n-and-ollama/)

**🔐🦙🤖 Private & Local Ollama Self-Hosted AI Assistant**

[View template details](https://n8n.io/workflows/2729-private-and-local-ollama-self-hosted-ai-assistant/)

**Auto Categorise Outlook Emails with AI**

[View template details](https://n8n.io/workflows/2454-auto-categorise-outlook-emails-with-ai/)

[Browse Ollama Chat Model integration templates](https://n8n.io/integrations/ollama-chat-model/), or [search all templates](https://n8n.io/workflows/)

Refer to [LangChains's Ollama Chat Model documentation](https://js.langchain.com/docs/integrations/chat/ollama/) for more information about the service.

View n8n's [Advanced AI](../../../../../advanced-ai/) documentation.

For common questions or issues and suggested solutions, refer to [Common issues](common-issues/).

## Self-hosted AI Starter Kit

New to working with AI and using self-hosted n8n? Try n8n's [self-hosted AI Starter Kit](../../../../../hosting/starter-kits/ai-starter-kit/) to get started with a proof-of-concept or demo playground using Ollama, Qdrant, and PostgreSQL.

---


## Enable Prometheus metrics

**URL:** llms-txt#enable-prometheus-metrics

**Contents:**
- Queue metrics

To collect and expose metrics, n8n uses the [prom-client](https://www.npmjs.com/package/prom-client) library.

The `/metrics` endpoint is disabled by default, but it's possible to enable it using the `N8N_METRICS` environment variable.

Refer to the respective [Environment Variables](../../environment-variables/endpoints/) (`N8N_METRICS_INCLUDE_*`) for configuring which metrics and labels should get exposed.

Both `main` and `worker` instances are able to expose metrics.

To enable queue metrics, set the `N8N_METRICS_INCLUDE_QUEUE_METRICS` env var to `true`. You can adjust the refresh rate with `N8N_METRICS_QUEUE_METRICS_INTERVAL`.

n8n gathers these metrics from Bull and exposes them on the main instances. On multi-main setups, when aggregating queries, you can identify the leader using the `instance_role_leader` gauge, set to `1` for the leader main and `0` otherwise.
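As a rough sketch, a self-hosted main instance could be configured and checked like this (the interval unit and the default port 5678 are assumptions; confirm them against the environment variables reference):

```bash
# Expose the /metrics endpoint and include Bull queue metrics
export N8N_METRICS=true
export N8N_METRICS_INCLUDE_QUEUE_METRICS=true
export N8N_METRICS_QUEUE_METRICS_INTERVAL=30

# Spot-check the exposed metrics (assumes n8n listens on the default port)
curl -s http://localhost:5678/metrics | grep n8n_
```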

**Examples:**

Example 1 (unknown):
```unknown
export N8N_METRICS=true
```

---

## npm

**URL:** llms-txt#npm

---

## Stop and remove older version

**URL:** llms-txt#stop-and-remove-older-version

---

## GitLab credentials

**URL:** llms-txt#gitlab-credentials

**Contents:**
- Supported authentication methods
- Related resources
- Using API access token
- Using OAuth2

You can use these credentials to authenticate the following nodes:

## Supported authentication methods

- API access token
- OAuth2 (Recommended)

Refer to GitLab's API documentation for more information about the service.

## Using API access token

To configure this credential, you'll need a GitLab account and:

- The URL of your GitLab Server
- An Access Token

To set up the credential:

1. In GitLab, select your avatar, then select **Edit profile**.
2. In the left sidebar, select **Access tokens**.
3. Select **Add new token**.
4. Enter a **Name** for the token, like `n8n integration`.
5. Enter an expiry date for the token. If you don't enter an expiry date, GitLab automatically sets it to 365 days later than the current date.
    - The token expires on that expiry date at midnight UTC.
6. Select the desired **Scopes**. For the GitLab node, use the `api` scope to easily grant access for all the node's functionality. Or refer to Personal access token scopes to select scopes for the functions you want to use.
7. Select **Create personal access token**.
8. Copy the access token this creates and enter it in your n8n credential as the **Access Token**.
9. Enter the URL of your GitLab Server in your n8n credential.

Refer to GitLab's Create a personal access token documentation for more information.
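Before entering the token in n8n, you can confirm it works against the GitLab REST API. A minimal sketch, with placeholder values for the server URL and token:

```bash
# Returns the token owner's profile as JSON if the token and scopes are valid
curl --header "PRIVATE-TOKEN: <your-access-token>" \
  "https://gitlab.example.com/api/v4/user"
```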

## Using OAuth2

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select **Connect my account** to connect through your browser.

If you're self-hosting n8n, you'll need a GitLab account. Then create a new GitLab application:

1. In GitLab, select your avatar, then select **Edit profile**.
2. In the left sidebar, select **Applications**.
3. Select **Add new application**.
4. Enter a **Name** for your application, like `n8n integration`.
5. In n8n, copy the **OAuth Redirect URL**. Enter it as the GitLab **Redirect URI**.
6. Select the desired **Scopes**. For the GitLab node, use the `api` scope to easily grant access for all the node's functionality. Or refer to Personal access token scopes to select scopes for the functions you want to use.
7. Select **Save application**.
8. Copy the **Application ID** and enter it as the **Client ID** in your n8n credential.
9. Copy the **Secret** and enter it as the **Client Secret** in your n8n credential.

Refer to GitLab's Configure GitLab as an OAuth 2.0 authentication identity provider documentation for more information. Refer to the GitLab OAuth 2.0 identity provider API documentation for more information on OAuth2 and GitLab.


---

## Jina AI node

**URL:** llms-txt#jina-ai-node

**Contents:**
- Operations
- Templates and examples
- Related resources
- What to do if your operation isn't supported

Use the Jina AI node to automate work in Jina AI and integrate Jina AI with other applications. n8n has built-in support for a wide range of Jina AI features.

On this page, you'll find a list of operations the Jina AI node supports, and links to more resources.

You can find authentication information for this node here.

  • Reader:
    • Read: Fetches content from a URL and converts it to clean, LLM-friendly formats.
    • Search: Performs a web search using Jina AI and returns the top results as clean, LLM-friendly formats.
  • Research:
    • Deep Research: Research a topic and generate a structured research report.

Templates and examples

AI Powered Web Scraping with Jina, Google Sheets and OpenAI : the EASY way

View template details

AI-Powered Information Monitoring with OpenAI, Google Sheets, Jina AI and Slack

View template details

AI-Powered Research with Jina AI Deep Search

View template details

Browse Jina AI integration templates, or search all templates

Refer to Jina AI's reader API documentation and Jina AI's search API documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


---

## Trello node

**URL:** llms-txt#trello-node

**Contents:**
- Operations
- Templates and examples
- What to do if your operation isn't supported
- Find the List ID

Use the Trello node to automate work in Trello, and integrate Trello with other applications. n8n has built-in support for a wide range of Trello features, including creating and updating cards, and adding and removing members.

On this page, you'll find a list of operations the Trello node supports and links to more resources.

Refer to Trello credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Attachment
    • Create a new attachment for a card
    • Delete an attachment
    • Get the data of an attachment
    • Returns all attachments for the card
  • Board
    • Create a new board
    • Delete a board
    • Get the data of a board
    • Update a board
  • Board Member
    • Add
    • Get All
    • Invite
    • Remove
  • Card
    • Create a new card
    • Delete a card
    • Get the data of a card
    • Update a card
  • Card Comment
    • Create a comment on a card
    • Delete a comment from a card
    • Update a comment on a card
  • Checklist
    • Create a checklist item
    • Create a new checklist
    • Delete a checklist
    • Delete a checklist item
    • Get the data of a checklist
    • Returns all checklists for the card
    • Get a specific checklist on a card
    • Get the completed checklist items on a card
    • Update an item in a checklist on a card
  • Label
    • Add a label to a card.
    • Create a new label
    • Delete a label
    • Get the data of a label
    • Returns all labels for the board
    • Remove a label from a card.
    • Update a label.
  • List
    • Archive/Unarchive a list
    • Create a new list
    • Get the data of a list
    • Get all the lists
    • Get all the cards in a list
    • Update a list

Templates and examples

RSS Feed News Processing and Distribution Workflow

View template details

Process Shopify new orders with Zoho CRM and Harvest

View template details

Sync Google Calendar tasks to Trello every day

View template details

Browse Trello integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

## Find the List ID

1. Open the Trello board that contains the list.
2. If the list doesn't have any cards, add a card to the list.
3. Open the card, add `.json` at the end of the URL, and press enter.
4. In the JSON file, you will see a field called `idList`.
5. Copy the contents of the `idList` field and paste it in the **List ID** field in n8n.

---

## Clearbit node

**URL:** llms-txt#clearbit-node

**Contents:**
- Operations
- Templates and examples

Use the Clearbit node to automate work in Clearbit, and integrate Clearbit with other applications. n8n has built-in support for a wide range of Clearbit features, including autocompleting and looking up companies and persons.

On this page, you'll find a list of operations the Clearbit node supports and links to more resources.

Refer to Clearbit credentials for guidance on setting up authentication.

  • Company
    • Auto-complete company names and retrieve logo and domain
    • Look up person and company data based on an email or domain
  • Person
    • Look up a person and company data based on an email or domain

Templates and examples

Summarize social media activity of a company before a call

View template details

Verify emails & enrich new form leads and save them to HubSpot

View template details

List social media activity of a company before a call

View template details

Browse Clearbit integration templates, or search all templates


---

## Don't save node progress for each execution

**URL:** llms-txt#don't-save-node-progress-for-each-execution

    export EXECUTIONS_DATA_SAVE_ON_PROGRESS=false


---

## Flow credentials

**URL:** llms-txt#flow-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API key

You can use these credentials to authenticate the following nodes:

Create a Flow account.

## Supported authentication methods

Refer to Flow's API documentation for more information about the service.

To configure this credential, you'll need:

- Your numeric Organization ID
- An Access Token

Refer to the Flow API Getting Started documentation for instructions on generating your Access Token and viewing your Organization ID.


---

## ClickUp Trigger node

**URL:** llms-txt#clickup-trigger-node

**Contents:**
- Events
- Related resources

ClickUp is a cloud-based collaboration and project management tool suitable for businesses of all sizes and industries. Features include communication and collaboration tools, task assignments and statuses, alerts and a task toolbar.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's ClickUp Trigger integrations page.

- Key result
  - Created
  - Deleted
  - Updated
- List
  - Created
  - Deleted
  - Updated
- Space
  - Created
  - Deleted
  - Updated
- Task
  - Assignee updated
  - Comment
    - Posted
    - Updated
  - Created
  - Deleted
  - Due date updated
  - Moved
  - Status updated
  - Tag updated
  - Time estimate updated
  - Time tracked updated
  - Updated

n8n provides an app node for ClickUp. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to ClickUp's documentation for details about their API.


---

## Stripe Trigger node

**URL:** llms-txt#stripe-trigger-node

Stripe is a suite of payment APIs that powers commerce for online businesses.

You can find authentication information for this node here.

## Examples and templates

For usage examples and templates to help you get started, refer to n8n's Stripe Trigger integrations page.


---

## OpenAI Assistant operations

**URL:** llms-txt#openai-assistant-operations

**Contents:**
- Create an Assistant
  - Options
- Delete an Assistant
- List Assistants
  - Options
- Message an Assistant
  - Options
- Update an Assistant
  - Options
- Common issues

Use this operation to create, delete, list, message, or update an assistant in OpenAI. Refer to OpenAI for more information on the OpenAI node itself.

Assistant operations deprecated in OpenAI node V2

n8n version 1.117.0 introduces V2 of the OpenAI node that supports the OpenAI Responses API and removes support for the to-be-deprecated Assistants API.

Create an Assistant

Use this operation to create a new assistant.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Assistant.

  • Operation: Select Create an Assistant.

  • Model: Select the model that the assistant will use. If you're not sure which model to use, try gpt-4o if you need high intelligence or gpt-4o-mini if you need the fastest speed and lowest cost. Refer to Models overview | OpenAI Platform for more information.

  • Name: Enter the name of the assistant. The maximum length is 256 characters.

  • Description: Enter the description of the assistant. The maximum length is 512 characters.

  • Instructions: Enter the system instructions that the assistant uses. The maximum length is 32,768 characters. Use this to specify the persona used by the model in its replies.

  • Code Interpreter: Turn on to enable the code interpreter for the assistant, where it can write and execute code in a sandbox environment. Enable this tool for tasks that require computations, data analysis, or any logic-based processing.

  • Knowledge Retrieval: Turn on to enable knowledge retrieval for the assistant, allowing it to access external sources or a connected knowledge base. Refer to File Search | OpenAI Platform for more information.

  • Files: Select a file to upload for your external knowledge source. Use Upload a File operation to add more files.

  • Output Randomness (Temperature): Adjust the randomness of the response. The range is between 0.0 (deterministic) and 1.0 (maximum randomness). We recommend altering this or Output Randomness (Top P) but not both. Start with a medium temperature (around 0.7) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature. If they're too chaotic or off-track, decrease it. Defaults to 1.0.

  • Output Randomness (Top P): Adjust the Top P setting to control the diversity of the assistant's responses. For example, 0.5 means half of all likelihood-weighted options are considered. We recommend altering this or Output Randomness (Temperature) but not both. Defaults to 1.0.

  • Fail if Assistant Already Exists: If enabled, the operation will fail if an assistant with the same name already exists.

Refer to Create assistant | OpenAI documentation for more information.

Delete an Assistant

Use this operation to delete an existing assistant from your account.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.
  • Resource: Select Assistant.
  • Operation: Select Delete an Assistant.
  • Assistant: Select the assistant you want to delete From list or By ID.

Refer to Delete assistant | OpenAI documentation for more information.

Use this operation to retrieve a list of assistants in your organization.

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Assistant.

  • Operation: Select List Assistants.

  • Simplify Output: Turn on to return a simplified version of the response instead of the raw data. This option is enabled by default.

Refer to List assistants | OpenAI documentation for more information.

Message an Assistant

Use this operation to send a message to an assistant and receive a response.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Assistant.

  • Operation: Select Message an Assistant.

  • Assistant: Select the assistant you want to message.

  • Prompt: Enter the text prompt or message that you want to send to the assistant.

    • Connected Chat Trigger Node: Automatically use the input from a previous node's chatInput field.
    • Define Below: Manually define the prompt by entering static text or using an expression to reference data from previous nodes.
  • Base URL: Enter the base URL that the assistant should use for making API requests. This option is useful for directing the assistant to use endpoints provided by other LLM providers that offer an OpenAI-compatible API.

  • Max Retries: Specify the number of times the assistant should retry an operation in case of failure.

  • Timeout: Set the maximum amount of time in milliseconds, that the assistant should wait for a response before timing out. Use this option to prevent long waits during operations.

  • Preserve Original Tools: Turn off to remove the original tools associated with the assistant. Use this if you want to temporarily remove tools for this specific operation.

Refer to Assistants | OpenAI documentation for more information.

Update an Assistant

Use this operation to update the details of an existing assistant.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Assistant.

  • Operation: Select Update an Assistant.

  • Assistant: Select the assistant you want to update.

  • Code Interpreter: Turn on to enable the code interpreter for the assistant, where it can write and execute code in a sandbox environment. Enable this tool for tasks that require computations, data analysis, or any logic-based processing.

  • Description: Enter the description of the assistant. The maximum length is 512 characters.

  • Instructions: Enter the system instructions that the assistant uses. The maximum length is 32,768 characters. Use this to specify the persona used by the model in its replies.

  • Knowledge Retrieval: Turn on to enable knowledge retrieval for the assistant, allowing it to access external sources or a connected knowledge base. Refer to File Search | OpenAI Platform for more information.

  • Files: Select a file to upload for your external knowledge source. Use Upload a File operation to add more files. Note that this only updates the Code Interpreter tool, not the File Search tool.

  • Model: Select the model that the assistant will use. If you're not sure which model to use, try gpt-4o if you need high intelligence or gpt-4o-mini if you need the fastest speed and lowest cost. Refer to Models overview | OpenAI Platform for more information.

  • Name: Enter the name of the assistant. The maximum length is 256 characters.

  • Remove All Custom Tools (Functions): Turn on to remove all custom tools (functions) from the assistant.

  • Output Randomness (Temperature): Adjust the randomness of the response. The range is between 0.0 (deterministic) and 1.0 (maximum randomness). We recommend altering this or Output Randomness (Top P) but not both. Start with a medium temperature (around 0.7) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature. If they're too chaotic or off-track, decrease it. Defaults to 1.0.

  • Output Randomness (Top P): Adjust the Top P setting to control the diversity of the assistant's responses. For example, 0.5 means half of all likelihood-weighted options are considered. We recommend altering this or Output Randomness (Temperature) but not both. Defaults to 1.0.

Refer to Modify assistant | OpenAI documentation for more information.

For common errors or issues and suggested resolution steps, refer to Common Issues.

**Examples:**

Example 1 (unknown):

A virtual assistant that helps users with daily tasks, including setting reminders, answering general questions, and providing quick information.

Example 2 (unknown):

Always respond in a friendly and engaging manner. When a user asks a question, provide a concise answer first, followed by a brief explanation or additional context if necessary. If the question is open-ended, offer a suggestion or ask a clarifying question to guide the conversation. Keep the tone positive and supportive, and avoid technical jargon unless specifically requested by the user.


---

## Simple Memory node common issues

**URL:** llms-txt#simple-memory-node-common-issues

**Contents:**
- Single memory instance
- Managing the Session ID

Here are some common errors and issues with the Simple Memory node and steps to resolve or troubleshoot them.

## Single memory instance

If you add more than one Simple Memory node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the Chat Memory Manager node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes.

## Managing the Session ID

In most cases, the `sessionId` is automatically retrieved from the On Chat Message trigger. But you may run into an error with the phrase `No sessionId`.

If you have this error, first check the output of your Chat trigger to ensure it includes a `sessionId`.

If you're not using the On Chat Message trigger, you'll need to manage sessions manually.

For testing purposes, you can use a static key like `my_test_session`. If you use this approach, be sure to set up proper session management before activating the workflow to avoid potential issues in a live environment.


---

## 1. Getting data from the data warehouse

**URL:** llms-txt#1.-getting-data-from-the-data-warehouse

**Contents:**
- Create new workflow
- Add an HTTP Request node
- Get the data
- What's next?

In this part of the workflow, you will learn how to get data by making HTTP requests with the HTTP Request node.

After completing this section, your workflow will look like this:

View workflow file

First, let's set the scene for building Nathan's workflow.

Create new workflow

Open your Editor UI and create a new workflow with one of the two possible commands:

  • Select Ctrl+Alt+N or Cmd+Option+N on your keyboard.
  • Open the left menu, navigate to Workflows, and select Add workflow.

Name this new workflow "Nathan's workflow."

The first thing you need to do is get data from ABCorp's old data warehouse.

In a previous chapter, you used an action node designed for a specific service (Hacker News). But not all apps or services have dedicated nodes, like the legacy data warehouse from Nathan's company.

Though we can't directly export the data, Nathan told us that the data warehouse has a couple of API endpoints. That's all we need to access the data using the HTTP Request node in n8n.

No node for that service?

The HTTP Request node is one of the most versatile nodes, allowing you to make HTTP requests to query data from apps and services. You can use it to access data from apps or services that don't have a dedicated node in n8n.

Add an HTTP Request node

Now, in your Editor UI, add an HTTP Request node like you learned in the lesson Adding nodes. The node window will open, where you need to configure some parameters.

This node will use credentials.

Credentials are unique pieces of information that identify a user or a service and allow them to access apps or services (in our case, represented as n8n nodes). A common form of credentials is a username and a password, but they can take other forms depending on the service.

In this case, you'll need the credentials for the ABCorp data warehouse API included in the email from n8n you received when you signed up for this course. If you haven't signed up yet, sign up here.

In the Parameters of the HTTP Request node, make the following adjustments:

  • Method: This should default to GET. Make sure it's set to GET.

  • URL: Add the Dataset URL you received in the email when you signed up for this course.

  • Send Headers: Toggle this control to true. In Specify Headers, ensure Using Fields Below is selected.

    • Header Parameters > Name: Enter unique_id.
    • Header Parameters > Value: The Unique ID you received in the email when you signed up for this course.
  • Authentication: Select Generic Credential Type. This option requires credentials before allowing you to access the data.

    • Generic Auth Type: Select Header Auth. (This field will appear after you select the Generic Credential Type for the Authentication.)
  • Credential for Header Auth: To add your credentials, select + Create new credential. This will open the Credentials window.

  • In the Credentials window, set Name to be the Header Auth name you received in the email when you signed up for this course.

  • In the Credentials window, set Value to be the Header Auth value you received in the email when you signed up for this course.

  • Select the Save button in the Credentials window to save your credentials. Your Credentials Connection window should look like this:

HTTP Request node credentials

New credential names follow the " account" format by default. You can rename the credentials by clicking on the name, similarly to renaming nodes. It's good practice to give them names that identify the app/service, type, and purpose of the credential. A naming convention makes it easier to keep track of and identify your credentials.

Once you save, exit out of the Credentials window to return to the HTTP Request node.

Select the Execute step button in the HTTP Request node window. The table view of the HTTP request results should look like this:

HTTP Request node output

This view should be familiar to you from the Building a mini-workflow page.
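For reference, the request you just configured is roughly equivalent to the following command line call (a sketch with placeholders; substitute the dataset URL, unique ID, and header auth name and value from your course email):

```bash
curl --request GET "<dataset-url>" \
  --header "unique_id: <your-unique-id>" \
  --header "<header-auth-name>: <header-auth-value>"
```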

This is the data from ABCorp's data warehouse that Nathan needs to work with. This data set includes sales information from 30 customers with five columns:

  • orderID: The unique id of each order.
  • customerID: The unique id of each customer.
  • employeeName: The name of Nathan's colleague responsible for the customer.
  • orderPrice: The total price of the customer's order.
  • orderStatus: Whether the customer's order status is booked or still in processing.

Nathan 🙋: This is great! You already automated an important part of my job with only one node. Now instead of manually accessing the data every time I need it, I can use the HTTP Request Node to automatically get the information.

You 👩‍🔧: Exactly! In the next step, I'll help you one step further and insert the data you retrieved into Airtable.


---

## n8n Form Trigger node

**URL:** llms-txt#n8n-form-trigger-node

**Contents:**
- Build and test workflows
- Production workflows
- Set default selections with query parameters
- Node parameters
  - Authentication
  - Form URLs
  - Form Path
  - Form Title
  - Form Description
  - Form Elements

Use the n8n Form trigger to start a workflow when a user submits a form, taking the input data from the form. The node generates the form web page for you to use.

You can add more pages to continue the form with the n8n Form node.

Build and test workflows

While building or testing a workflow, use the Test URL. Using a test URL ensures that you can view the incoming data in the editor UI, which is useful for debugging.

There are two ways to test:

  • Select Execute Step. n8n opens the form. When you submit the form, n8n runs the node, but not the rest of the workflow.
  • Select Execute Workflow. n8n opens the form. When you submit the form, n8n runs the workflow.

Production workflows

When your workflow is ready, switch to using the Production URL. You can then activate your workflow, and n8n runs it automatically when a user submits the form.

When working with a production URL, ensure that you have saved and activated the workflow. Data flowing through the Form trigger isn't visible in the editor UI with the production URL.

Set default selections with query parameters

You can set the initial values for fields by using query parameters with the initial URL provided by the n8n Form Trigger. Every page in the form receives the same query parameters sent to the n8n Form Trigger URL.

Query parameters are only available when using the form in production mode. n8n won't populate field values from query parameters in testing mode.

When using query parameters, percent-encode any field names or values that use special characters. This ensures n8n uses the initial values for the given fields. You can use tools like URL Encode/Decode to format your query parameters using percent-encoding.

As an example, imagine you have a form with the following properties:

  • Production URL: https://my-account.n8n.cloud/form/my-form
  • Fields:
    • name: Jane Doe
    • email: jane.doe@example.com

With query parameters and percent-encoding, you could use the following URL to set initial field values to the data above:

Here, percent-encoding replaces the at-symbol (`@`) with the string `%40` and the space character with the string `%20`. This will set the initial value for these fields no matter which page of the form they appear on.
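If you generate these links from a script or another tool, it's easier to let the tool handle the encoding. A small sketch using curl, which builds and requests the same kind of percent-encoded URL:

```bash
# --data-urlencode percent-encodes each value; --get appends them as query parameters
curl --get "https://my-account.n8n.cloud/form/my-form" \
  --data-urlencode "email=jane.doe@example.com" \
  --data-urlencode "name=Jane Doe"
```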

These are the main node configuration fields:

  • Basic Auth
  • None

Using basic auth

To configure this credential, you'll need:

  • The Username you use to access the app or service your HTTP Request is targeting.
  • The Password that goes with that username.

The Form Trigger node has two URLs: Test URL and Production URL. n8n displays the URLs at the top of the node panel. Select Test URL or Production URL to toggle which URL n8n displays.

  • Test URL: n8n registers a test webhook when you select Execute Step or Execute Workflow, if the workflow isn't active. When you call the URL, n8n displays the data in the workflow.
  • Production URL: n8n registers a production webhook when you activate the workflow. When using the production URL, n8n doesn't display the data in the workflow. You can still view workflow data for a production execution. Select the Executions tab in the workflow, then select the workflow execution you want to view.

Set a custom slug for the form.

Enter the title for your form. n8n displays the Form Title as the webpage title and main h1 title on the form.

Enter the description for your form. n8n displays the Form Description as a subtitle below the main h1 title on the form. Use \n or <br> to add a line break.

Create the question fields for your form. Select Add Form Element to add a new field.

Every field has the following settings:

  • Field Label: Enter the label that appears above the input field.
  • Element Type: Choose from Checkboxes, Custom HTML, Date, Dropdown, Email, File, Hidden Field, Number, Password, Radio Buttons, Text, or Textarea.
    • Select Checkboxes to include checkbox elements in the form. By default, there is no limit on how many checkboxes a form user can select. You can set the limit by specifying a value for the Limit Selection option as Exact Number, Range, or Unlimited.
    • Select Custom HTML to insert arbitrary HTML.
      • You can include elements like links, images, video, and more. You can't include <script>, <style>, or <input> elements.
      • By default, Custom HTML fields aren't included in the node output. To include the Custom HTML content in the output, fill out the associated Element Name field.
    • Select Date to include a date picker in the form. Refer to Date and time with Luxon for more information on formatting dates.
    • Select Dropdown List > Add Field Option to add multiple options. By default, the dropdown is single-choice. To make it multiple-choice, turn on Multiple Choice.
    • Select Radio Buttons to include radio button elements in the form.
    • Select Hidden Field to include a form element without displaying it on the form. You can set a default value using the Field Value parameter or pass values for the field using query parameters.
  • Required Field: Turn on to require users to complete this field on the form.

Choose when n8n sends a response to the form submission. You can respond when:

  • Form Is Submitted: Send a response to the user as soon as they submit the form.
  • Workflow Finishes: Use this if you want the workflow to complete its execution before you send a response to the user. If the workflow errors, it sends a response to the user telling them there was a problem submitting the form.

Select Add Option to view more configuration options:

  • Append n8n Attribution: Turn off to hide the Form automated with n8n attribute at the bottom of the form.
  • Button Label: The label to use for your form's submit button. n8n displays the Button Label as the name of the submit button.
  • Form Path: The final segment of the form's URL, for both testing and production. Replaces the automatically generated UUID as the final component.
  • Ignore Bots: Turn on to ignore requests from bots like link previewers and web crawlers.
  • Use Workflow Timezone: Turn on to use the timezone in the Workflow settings instead of UTC (default). This affects the value of the submittedAt timestamp in the node output.
  • Custom Form Styling: Override the default styling of the public form interface with CSS. The field pre-populates with the default styling so you can change only what you need to.

Customizing Form Trigger node behavior

Format response text with line breaks

You can use one of the following methods to add line breaks to form response text:

• Use HTML formatting instead of plain text in the `formSubmittedText` field
• Replace newline characters (`\n`) with HTML break tags (`<br>`) before sending the response
• Consider using a custom HTML response page if you need more formatting control

Restrict form access with authentication

You can use one of the following options to add authentication to your form:

• Use the OTP (One-Time Password) field with TOTP node validation for token-based authentication
• Add a Wait node with form authentication as a secondary form page
• Store hashed passwords in a database and compare against form submissions for validation
• Use external authentication providers like Google Forms if you need advanced authentication

## Templates and examples

RAG Starter Template using Simple Vector Stores, Form trigger and OpenAI

View template details

Unify multiple triggers into a single workflow

by Guillaume Duvernay

View template details

Backup and Delete Workflows to Google Drive with n8n API and Form Trigger

View template details

Browse n8n Form Trigger integration templates, or search all templates

**Examples:**

Example 1 (unknown):

https://my-account.n8n.cloud/form/my-form?email=jane.doe%40example.com&name=Jane%20Doe

---

## Grafana credentials

**URL:** llms-txt#grafana-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API key

You can use these credentials to authenticate the following nodes:

## Supported authentication methods

Refer to Grafana's API documentation for more information about authenticating with the service.

To configure this credential, you'll need:

- An **API Key**: Refer to the Create an API key documentation for detailed instructions on creating an API key.
- The **Base URL** for your Grafana instance, for example: `https://n8n.grafana.net`.
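To confirm the API key and Base URL before saving the credential, you can call the Grafana HTTP API directly. A minimal sketch, using the example Base URL above:

```bash
# A 200 response with organization details confirms Grafana accepts the key
# (this particular endpoint requires sufficient permissions on the key)
curl --header "Authorization: Bearer <your-api-key>" \
  "https://n8n.grafana.net/api/org"
```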

---

## Telegram node File operations

**URL:** llms-txt#telegram-node-file-operations

**Contents:**
- Get File

Use this operation to get a file from Telegram. Refer to Telegram for more information on the Telegram node itself.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

## Get File

Use this operation to get a file from Telegram using the Bot API `getFile` method.

Enter these parameters:

- **Credential to connect with**: Create or select an existing Telegram credential.
- **Resource**: Select **File**.
- **Operation**: Select **Get**.
- **File ID**: Enter the ID of the file you want to get.
- **Download**: Choose whether you want the node to download the file (turned on) or not (turned off).

Refer to the Telegram Bot API getFile documentation for more information.
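The operation corresponds to the Bot API `getFile` method. If you ever need to reproduce it outside n8n, a rough sketch with a placeholder bot token and file ID looks like this:

```bash
# Resolve the file metadata (including file_path) for a file_id
curl "https://api.telegram.org/bot<your-bot-token>/getFile?file_id=<file-id>"

# Download the file contents using the file_path from the previous response
curl -O "https://api.telegram.org/file/bot<your-bot-token>/<file_path>"
```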


---

## Shuffler credentials

**URL:** llms-txt#shuffler-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Shuffler account on either a cloud or self-hosted instance.

## Supported authentication methods

Refer to Shuffler's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

- An **API Key**: Get your API key from the Settings page.

---

## Grist node

**URL:** llms-txt#grist-node

**Contents:**
- Operations
- Templates and examples
- Get the Row ID
- Filter records when using the Get All operation

Use the Grist node to automate work in Grist, and integrate Grist with other applications. n8n has built-in support for a wide range of Grist features, including creating, updating, deleting, and reading rows in a table.

On this page, you'll find a list of operations the Grist node supports and links to more resources.

Refer to Grist credentials for guidance on setting up authentication.

  • Create rows in a table
  • Delete rows from a table
  • Read rows from a table
  • Update rows in a table

Templates and examples

Browse Grist integration templates, or search all templates

To update or delete a particular record, you need the Row ID. There are two ways to get the Row ID:

Create a Row ID column in Grist

Create a new column in your Grist table with the formula $id.

Use the Get All operation

The Get All operation returns the Row ID of each record along with the fields.

You can get it with the expression {{$node["GristNodeName"].json["id"]}}.

Filter records when using the Get All operation

  • Select Add Option and select Filter from the dropdown list.
  • You can add filters for any number of columns. The result will only include records which match all the columns.
  • For each column, you can enter any number of values separated by commas. The result will include records which match any of the values for that column.

---

## Architecture

**URL:** llms-txt#architecture

Understanding n8n's underlying architecture is helpful if you need to:

- Embed n8n
- Customize n8n's default databases

This section is a work in progress. If you have questions, please try the forum and let n8n know which architecture documents would be useful for you.


---

## Yourls credentials

**URL:** llms-txt#yourls-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API key

You can use these credentials to authenticate the following nodes:

Install Yourls on your server.

## Supported authentication methods

Refer to Yourls' documentation for more information about the service.

To configure this credential, you'll need:

- A **Signature token**: Go to **Tools** > **Secure passwordless API call** to get your Signature token. Refer to Yourls' Passwordless API documentation for more information.
- A **URL**: Enter the URL of your Yourls instance.
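As a quick check that the signature token works, you can call the Yourls API directly. A minimal sketch, with a placeholder instance URL:

```bash
# Create a short URL using the passwordless signature token
curl "https://your-yourls-domain.example/yourls-api.php?signature=<your-signature-token>&action=shorturl&format=json&url=https://n8n.io"
```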

---

## SeaTable node

**URL:** llms-txt#seatable-node

**Contents:**
- Operations
- Templates and examples

Use the SeaTable node to automate work in SeaTable, and integrate SeaTable with other applications. n8n has built-in support for a wide range of SeaTable features, including creating, updating, deleting, and getting rows.

On this page, you'll find a list of operations the SeaTable node supports and links to more resources.

Refer to SeaTable credentials for guidance on setting up authentication.

- Row
  - Create
  - Delete
  - Get
  - Get All
  - Update

## Templates and examples

Browse SeaTable integration templates, or search all templates


---

## UpLead credentials

**URL:** llms-txt#uplead-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API key

You can use these credentials to authenticate the following nodes:

Create an UpLead account.

## Supported authentication methods

Refer to UpLead's API documentation for more information about the service.

To configure this credential, you'll need:

- An **API Key**: Go to your **Account** > **Profiles** to **Generate New API Key**.

---

## Philips Hue credentials

**URL:** llms-txt#philips-hue-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a Philips Hue account.

## Supported authentication methods

Refer to Philips Hue's CLIP API documentation for more information about the service.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select **Connect my account** to connect through your browser.

If you're using the built-in OAuth connection, you don't need to enter an APP ID.

If you need to configure OAuth2 from scratch, you'll need a Philips Hue developer account.

Create a new remote app on the Add new Hue Remote API app page.

Use these settings for your app:

- Copy the **OAuth Callback URL** from n8n and add it as a **Callback URL**.
- Copy the **AppId**, **ClientId**, and **ClientSecret** and enter these in the corresponding fields in n8n.

---

## Microsoft Outlook Trigger node

**URL:** llms-txt#microsoft-outlook-trigger-node

**Contents:**
- Events
- Related resources

Use the Microsoft Outlook Trigger node to respond to events in Microsoft Outlook and integrate Microsoft Outlook with other applications.

On this page, you'll find a list of events the Microsoft Outlook Trigger node can respond to, and links to more resources.

You can find authentication information for this node here.

## Examples and templates

For usage examples and templates to help you get started, refer to n8n's Microsoft Outlook integrations page.

n8n provides an app node for Microsoft Outlook. You can find the node docs here.

View example workflows and related content on n8n's website.


---

## Lemlist node

**URL:** llms-txt#lemlist-node

**Contents:**
- Operations
- Templates and examples
- What to do if your operation isn't supported

Use the Lemlist node to automate work in Lemlist, and integrate Lemlist with other applications. n8n has built-in support for a wide range of Lemlist features, including getting activities, teams and campaigns, as well as creating, updating, and deleting leads.

On this page, you'll find a list of operations the Lemlist node supports and links to more resources.

Refer to Lemlist credentials for guidance on setting up authentication.

  • Activity
    • Get Many: Get many activities
  • Campaign
    • Get Many: Get many campaigns
    • Get Stats: Get campaign stats
  • Enrichment
    • Get: Fetches a previously completed enrichment
    • Enrich Lead: Enrich a lead using an email or LinkedIn URL
    • Enrich Person: Enrich a person using an email or LinkedIn URL
  • Lead
    • Create: Create a new lead
    • Delete: Delete an existing lead
    • Get: Get an existing lead
    • Unsubscribe: Unsubscribe an existing lead
  • Team
    • Get: Get an existing team
    • Get Credits: Get an existing team's credits
  • Unsubscribe
    • Add: Add an email to an unsubscribe list
    • Delete: Delete an email from an unsubscribe list
    • Get Many: Get many unsubscribed emails

Templates and examples

Create HubSpot contacts from LinkedIn post interactions

View template details

lemlist <> GPT-3: Supercharge your sales workflows

View template details

Classify lemlist replies using OpenAI and automate reply handling

View template details

Browse Lemlist integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


---

## uProc credentials

**URL:** llms-txt#uproc-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API Key

You can use these credentials to authenticate the following nodes:

Create a uProc account.

## Supported authentication methods

Refer to uProc's API documentation for more information about the service.

To configure this credential, you'll need:

- An **Email address**: Enter the email address you use to log in to uProc. This is also displayed in **Settings** > **Integrations** > **API Credentials**.
- An **API Key**: Go to **Settings** > **Integrations** > **API Credentials**. Copy the **API Key (real)** from the API Credentials section and enter it in your n8n credential.

---

## Security audit

**URL:** llms-txt#security-audit

**Contents:**
- Run an audit
  - CLI
  - API
  - n8n node
- Report contents
  - Credentials
  - Database
  - File system
  - Nodes
  - Instance

You can run a security audit on your n8n instance to detect common security issues.

## Run an audit

You can run an audit using the CLI, the public API, or the n8n node.

### API

Make a POST call to the `/audit` endpoint. You must authenticate as the instance owner.
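For example, a sketch of that call with curl, assuming the public API is enabled and served under the default `/api/v1` path and that the API key belongs to the instance owner:

```bash
curl --request POST "https://<your-n8n-host>/api/v1/audit" \
  --header "X-N8N-API-KEY: <instance-owner-api-key>" \
  --header "Content-Type: application/json"
```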

Add the n8n node to your workflow. Select Resource > Audit and Operation > Generate.

The audit generates five risk reports:

Credentials

This report lists:

  • Credentials not used in a workflow.

  • Credentials not used in an active workflow.

  • Credentials not used in a recently active workflow.

Database

This report lists:

  • Expressions used in Execute Query fields in SQL nodes.

  • Expressions used in Query Parameters fields in SQL nodes.

  • Unused Query Parameters fields in SQL nodes.

File system

This report lists nodes that interact with the file system.

Nodes

This report lists:

  • Official risky nodes. These are n8n built-in nodes. You can use them to fetch and run any code on the host system, which exposes the instance to exploits. You can view the list in n8n code | Audit constants, under OFFICIAL_RISKY_NODE_TYPES.

  • Community nodes.

  • Custom nodes.

Instance

This report lists:

  • Unprotected webhooks in the instance.

  • Missing security settings.

  • Whether your instance is outdated.


Docker Compose

URL: llms-txt#docker-compose

    n8n:
      environment:
        - EXECUTIONS_DATA_PRUNE=true
        - EXECUTIONS_DATA_MAX_AGE=168
        - EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000


If you run n8n using the default SQLite database, the disk space of any pruned data isn't automatically freed up but rather reused for future executions data. To free up this space configure the `DB_SQLITE_VACUUM_ON_STARTUP` [environment variable](../../configuration/environment-variables/database/#sqlite) or manually run the [VACUUM](https://www.sqlite.org/lang_vacuum.html) operation.
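For example, a minimal sketch combining the pruning variables above with the vacuum option (this assumes you run the default SQLite database in the Docker Compose service shown above):

```yaml
n8n:
  environment:
    - EXECUTIONS_DATA_PRUNE=true
    - EXECUTIONS_DATA_MAX_AGE=168
    - EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000
    # Reclaim disk space from pruned execution data on startup (SQLite only)
    - DB_SQLITE_VACUUM_ON_STARTUP=true
```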

Binary data pruning operates on the active binary data mode. For example, if your instance stored data in S3 and you later switched to filesystem mode, n8n only prunes binary data in the filesystem. This may change in the future.

---

## Node building reference

**URL:** llms-txt#node-building-reference

This section contains reference information, including details about:

- [Node UI elements](ui-elements/)
- [Organizing your node files](node-file-structure/)
- Key parameters in your node's [base file](node-base-files/) and [credentials file](credentials-files/).
- [UX guidelines](ux-guidelines/) and [verification guidelines](verification-guidelines/) for submitting your node for [verification by n8n](../../../community-nodes/installation/verified-install/).

---

## Zulip node

**URL:** llms-txt#zulip-node

**Contents:**
- Operations
- Templates and examples

Use the Zulip node to automate work in Zulip, and integrate Zulip with other applications. n8n has built-in support for a wide range of Zulip features, including creating, deleting, and getting users and streams, as well as sending messages.

On this page, you'll find a list of operations the Zulip node supports and links to more resources.

Refer to [Zulip credentials](../../credentials/zulip/) for guidance on setting up authentication.

- Message
  - Delete a message
  - Get a message
  - Send a private message
  - Send a message to stream
  - Update a message
  - Upload a file
- Stream
  - Create a stream.
  - Delete a stream.
  - Get all streams.
  - Get subscribed streams.
  - Update a stream.
- User
  - Create a user.
  - Deactivate a user.
  - Get a user.
  - Get all users.
  - Update a user.

## Templates and examples

[Browse Zulip integration templates](https://n8n.io/integrations/zulip/), or [search all templates](https://n8n.io/workflows/)

---

## Hugging Face credentials

**URL:** llms-txt#hugging-face-credentials

**Contents:**
- Supported authentication methods
- Related resources
- Using API key

You can use these credentials to authenticate the following nodes:

- [Hugging Face Inference](../../cluster-nodes/sub-nodes/n8n-nodes-langchain.lmopenhuggingfaceinference/)
- [Embeddings Hugging Face Inference](../../cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingshuggingfaceinference/)

## Supported authentication methods

Refer to [Hugging Face's documentation](https://huggingface.co/docs/api-inference/quicktour) for more information about the service.

View n8n's [Advanced AI](../../../../advanced-ai/) documentation.

To configure this credential, you'll need a [Hugging Face](https://huggingface.co/) account and:

- An **API Key**: Hugging Face calls these API tokens.

To get your API token:

1. Open your Hugging Face profile and go to the [**Tokens**](https://huggingface.co/settings/tokens) section.
1. Copy the token listed there. It should begin with `hf_`.
1. Enter this API token as your n8n credential **API Key**.

Refer to [Get your API token](https://huggingface.co/docs/api-inference/quicktour#get-your-api-token) for more information.

---

## Node codex files

**URL:** llms-txt#node-codex-files

**Contents:**
- Node categories

The codex file contains metadata about your node. This file is the JSON file at the root of your node. For example, the [`HttpBin.node.json`](https://github.com/n8n-io/n8n-nodes-starter/blob/master/nodes/HttpBin/HttpBin.node.json) file in the n8n starter.

The codex filename must match the node base filename. For example, given a node base file named `MyNode.node.ts`, the codex would be named `MyNode.node.json`.

| Parameter      | Description                                                                                                                                                     |
| -------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `node`         | Includes the node name. Must start with `n8n-nodes-base.`. For example, `n8n-nodes-base.openweatherapi`.                                                        |
| `nodeVersion`  | The node version. This should have the same value as the `version` parameter in your main node file. For example, `"1.0"`.                                      |
| `codexVersion` | The codex file version. The current version is `"1.0"`.                                                                                                         |
| `categories`   | The settings in the `categories` array determine which category n8n adds your node to in the GUI. See [Node categories](#node-categories) for more information. |
| `resources`    | The `resources` object contains links to your node documentation. n8n automatically adds help links to credentials and nodes in the GUI.                        |
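
For illustration, a minimal codex sketch for a hypothetical node whose base file is named `ExampleNode.node.ts` (so the codex is `ExampleNode.node.json`); the URLs are placeholders:

```json
{
  "node": "n8n-nodes-base.exampleNode",
  "nodeVersion": "1.0",
  "codexVersion": "1.0",
  "categories": ["Development"],
  "resources": {
    "primaryDocumentation": [
      { "url": "https://example.com/docs/example-node" }
    ],
    "credentialDocumentation": [
      { "url": "https://example.com/docs/example-node-credentials" }
    ]
  }
}
```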

You can define one or more categories in your node configuration JSON. This helps n8n put the node in the correct category in the nodes panel.

Choose from these categories:

- Data & Storage
- Finance & Accounting
- Marketing & Content
- Productivity
- Miscellaneous
- Sales
- Development
- Analytics
- Communication
- Utility

You must match the syntax. For example, `Data & Storage` not `data and storage`.

---

## Expressions common issues

**URL:** llms-txt#expressions-common-issues

**Contents:**
- The 'JSON Output' in item 0 contains invalid JSON
- Can't get data for expression
- Invalid syntax

Here are some common errors and issues related to [expressions](../../../expressions/) and steps to resolve or troubleshoot them.

## The 'JSON Output' in item 0 contains invalid JSON

This error occurs when you use JSON mode but don't provide a valid JSON object. Depending on the problem with the JSON object, the error sometimes displays as `The 'JSON Output' in item 0 does not contain a valid JSON object`.

To resolve this, make sure that the code you provide is valid JSON:

- Check the JSON with a [JSON validator](https://jsonlint.com/).
- Check that your JSON object doesn't reference undefined input data. This may occur if the incoming data doesn't always include the same fields.

## Can't get data for expression

This error occurs when n8n can't retrieve the data referenced by an expression. Often, this happens when the preceding node hasn't been run yet.

Another variation of this may appear as `Referenced node is unexecuted`. In that case, the full text of this error will tell you the exact node that isn't executing in this format:

> An expression references the node '<node-name>', but it hasn't been executed yet. Either change the expression, or re-wire your workflow to make sure that node executes first.

To begin troubleshooting, test the workflow up to the named node.

For nodes that use JavaScript or other custom code, you can check if a previous node has executed before trying to use its value by checking the following:
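A minimal sketch of using this check inside an expression (the node name Webhook and the body field are hypothetical):

```
{{ $("Webhook").isExecuted ? $("Webhook").item.json.body : "" }}
```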

As an example, this JSON references the parameters of the input data. This error will display if you test this step without connecting it to another node:

## Invalid syntax

This error occurs when you use an expression that has a syntax error.

For example, the expression in this JSON includes a trailing period, which results in an invalid syntax error:

To resolve this error, check your [expression syntax](../../../expressions/) to make sure they follow the expected format.

**Examples:**

Example 1 (unknown):
```unknown
$("<node-name>").isExecuted
```

Example 2 (unknown):

{
  "my_field_1": {{ $input.params }}
}

Example 3 (unknown):

{
  "my_field_1": "value",
  "my_field_2": {{ $('If').item.json. }}
}

GetResponse node

URL: llms-txt#getresponse-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the GetResponse node to automate work in GetResponse, and integrate GetResponse with other applications. n8n has built-in support for a wide range of GetResponse features, including creating, updating, deleting, and getting contacts.

On this page, you'll find a list of operations the GetResponse node supports and links to more resources.

Refer to GetResponse credentials for guidance on setting up authentication.

  • Contact
    • Create a new contact
    • Delete a contact
    • Get a contact
    • Get all contacts
    • Update contact properties

Templates and examples

Add subscribed customers to Airtable automatically

View template details

Get all the contacts from GetResponse and update them

View template details

🛠️ GetResponse Tool MCP Server 💪 5 operations

View template details

Browse GetResponse integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Google Translate node

URL: llms-txt#google-translate-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Google Translate node to automate work in Google Translate, and integrate Google Translate with other applications. n8n has built-in support for a wide range of Google Translate features, including translating languages.

On this page, you'll find a list of operations the Google Translate node supports and links to more resources.

Refer to Google Translate credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Language
    • Translate data

Templates and examples

Translate PDF documents from Google drive folder with DeepL

View template details

Translate text from English to German

View template details

🉑 Generate Anki Flash Cards for Language Learning with Google Translate and GPT

View template details

Browse Google Translate integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Wolfram|Alpha tool node

URL: llms-txt#wolfram|alpha-tool-node

Contents:

  • Templates and examples
  • Related resources

Use the Wolfram|Alpha tool node to connect your agents and chains to Wolfram|Alpha's computational intelligence engine.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Templates and examples

Browse Wolfram|Alpha integration templates, or search all templates

Refer to Wolfram|Alpha's documentation for more information about the service. You can also view LangChain's documentation on their WolframAlpha Tool.

View n8n's Advanced AI documentation.


Workflow development

URL: llms-txt#workflow-development

Contents:

  • Build and test workflows
  • Production workflows

The Webhook node works a bit differently from other core nodes. n8n recommends following these processes for building, testing, and using your Webhook node in production.

n8n generates two Webhook URLs for each Webhook node: a Test URL and a Production URL.

Build and test workflows

While building or testing a workflow, use the Test webhook URL.

Using a test webhook ensures that you can view the incoming data in the editor UI, which is useful for debugging. Select Listen for test event to register the webhook before sending the data to the test webhook. The test webhook stays active for 120 seconds.

When using the Webhook node on localhost on a self-hosted n8n instance, run n8n in tunnel mode:
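A minimal sketch, assuming you start n8n from the command line:

```bash
# Start n8n with a temporary tunnel so external services can reach your local webhook URLs
n8n start --tunnel
```

The tunnel is intended for development and testing only, not for production use.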

Production workflows

When your workflow is ready, switch to using the Production webhook URL. You can then activate your workflow, and n8n runs it automatically when an external service calls the webhook URL.

When working with a Production webhook, ensure that you have saved and activated the workflow. Data flowing through the webhook isn't visible in the editor UI with the production webhook.

Refer to Create a workflow for more information on activating workflows.


Character Text Splitter node

URL: llms-txt#character-text-splitter-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources

Use the Character Text Splitter node to split document data based on characters.

On this page, you'll find the node parameters for the Character Text Splitter node, and links to more resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Separator: Select the separator used to split the document into separate items.
  • Chunk Size: Enter the number of characters in each chunk.
  • Chunk Overlap: Enter how much overlap to have between chunks.
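
For example, assuming a Chunk Size of 500 and a Chunk Overlap of 50 (values chosen only for illustration), each chunk contains 500 characters and repeats the last 50 characters of the previous chunk, so content that falls on a chunk boundary still appears with some surrounding context.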

Templates and examples

Building Your First WhatsApp Chatbot

View template details

Scrape and summarize webpages with AI

View template details

Ask questions about a PDF using AI

View template details

Browse Character Text Splitter integration templates, or search all templates

Refer to LangChain's text splitter documentation and LangChain's API documentation for character text splitting for more information about the service.

View n8n's Advanced AI documentation.


Trellix ePO credentials

URL: llms-txt#trellix-epo-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Trellix ePolicy Orchestrator account.

Supported authentication methods

Refer to Trellix ePO's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

  • A Username to connect as.
  • A Password for that user account.

n8n uses these fields to build the -u parameter in the format -u username:pw. Refer to Web API basics for more information.
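
As a hedged sketch of the resulting request (host, port, and path are placeholders, not a confirmed endpoint):

```bash
# Basic auth against the ePO web API; n8n builds the -u value from the Username and Password fields
curl -u "<username>:<password>" "https://<epo-host>:<port>/<api-path>"
```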


Embeddings Azure OpenAI node

URL: llms-txt#embeddings-azure-openai-node

Contents:

  • Node options
  • Templates and examples
  • Related resources

Use the Embeddings Azure OpenAI node to generate embeddings for a given text.

On this page, you'll find the node parameters for the Embeddings Azure OpenAI node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model (Deployment) Name: Select the model (deployment) to use for generating embeddings.
  • Batch Size: Enter the maximum number of documents to send in each request.
  • Strip New Lines: Select whether to remove new line characters from input text (turned on) or not (turned off). n8n enables this by default.
  • Timeout: Enter the maximum amount of time a request can take in seconds. Set to -1 for no timeout.

Templates and examples

Auto-Update Knowledge Base with Drive, LlamaIndex & Azure OpenAI Embeddings

View template details

PDF RAG Agent with Telegram Chat & Auto-Ingestion from Google Drive

View template details

Generate Contextual Recommendations from Slack using Pinecone

View template details

Browse Embeddings Azure OpenAI integration templates, or search all templates

Refer to LangChain's OpenAI embeddings documentation for more information about the service.

View n8n's Advanced AI documentation.


Discourse credentials

URL: llms-txt#discourse-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

  • Discourse

  • Host an instance of Discourse

  • Create an account on your hosted instance and make sure that you are an admin

Supported authentication methods

Refer to Discourse's API documentation for more information about the service.

To configure this credential, you'll need:

  • The URL of your Discourse instance, for example https://community.n8n.io
  • An API Key: Create an API key through the Discourse admin panel. Refer to the Discourse create and configure an API key documentation for instructions on creating an API key and specifying a username.
  • A Username: Use your own name, system, or another user.

Refer to the Authentication section of the Discourse API documentation for examples.


Microsoft Excel 365 node

URL: llms-txt#microsoft-excel-365-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Microsoft Excel node to automate work in Microsoft Excel, and integrate Microsoft Excel with other applications. n8n has built-in support for a wide range of Microsoft Excel features, including adding and retrieving lists of table data and workbooks, as well as getting worksheets.

On this page, you'll find a list of operations the Microsoft Excel node supports and links to more resources.

Refer to Microsoft credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Table
    • Adds rows to the end of the table
    • Retrieve a list of table columns
    • Retrieve a list of table rows
    • Looks for a specific column value and then returns the matching row
  • Workbook
    • Adds a new worksheet to the workbook.
    • Get data of all workbooks
  • Worksheet
    • Get all worksheets
    • Get worksheet content

Templates and examples

Automated Web Scraping: email a CSV, save to Google Sheets & Microsoft Excel

View template details

Get all Excel workbooks

View template details

Daily Newsletter Service using Excel, Outlook and AI

View template details

Browse Microsoft Excel 365 integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Groq credentials

URL: llms-txt#groq-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Groq account.

Supported authentication methods

Refer to Groq's documentation for more information about the service.

View n8n's Advanced AI documentation.

To configure this credential, you'll need an API Key. To generate one:

  1. Go to the API Keys page of your Groq console.
  2. Select Create API Key.
  3. Enter a display name for the key, like n8n integration, and select Submit.
  4. Copy the key and paste it into your n8n credential.

Refer to Groq's API Keys documentation for more information.

Groq binds API keys to the organization, not the user.


TheHive 5 node

URL: llms-txt#thehive-5-node

Contents:

  • Operations
  • Templates and examples
  • Related resources

Use the TheHive 5 node to automate work in TheHive, and integrate TheHive with other applications. n8n has built-in support for a wide range of TheHive features, including creating alerts, counting task logs, cases, and observables.

On this page, you'll find a list of operations the TheHive node supports and links to more resources.

TheHive and TheHive 5

n8n provides two nodes for TheHive. Use this node (TheHive 5) if you want to use TheHive's version 5 API. If you want to use version 3 or 4, use TheHive.

Refer to TheHive credentials for guidance on setting up authentication.

  • Alert
    • Create
    • Delete
    • Execute Responder
    • Get
    • Merge Into Case
    • Promote to Case
    • Search
    • Update
    • Update Status
  • Case
    • Add Attachment
    • Create
    • Delete Attachment
    • Delete Case
    • Execute Responder
    • Get
    • Get Attachment
    • Get Timeline
    • Search
    • Update
  • Comment
    • Create
    • Delete
    • Search
    • Update
  • Observable
    • Create
    • Delete
    • Execute Analyzer
    • Execute Responder
    • Get
    • Search
    • Update
  • Page
    • Create
    • Delete
    • Search
    • Update
  • Query
    • Execute Query
  • Task
    • Create
    • Delete
    • Execute Responder
    • Get
    • Search
    • Update
  • Task Log
    • Add Attachment
    • Create
    • Delete
    • Delete Attachment
    • Execute Responder
    • Get
    • Search

Templates and examples

Browse TheHive 5 integration templates, or search all templates

n8n provides a trigger node for TheHive. You can find the trigger node docs here.

Refer to TheHive's documentation for more information about the service.


MQTT Trigger node

URL: llms-txt#mqtt-trigger-node

MQTT is an open OASIS and ISO standard lightweight, publish-subscribe network protocol that transports messages between devices.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's MQTT Trigger integrations page.


ActiveCampaign credentials

URL: llms-txt#activecampaign-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to ActiveCampaign's API documentation for more information about working with the service.

To configure this credential, you'll need an ActiveCampaign account and:

  • An API URL
  • An API Key

To get both and set up the credential:

  1. In ActiveCampaign, select Settings (the gear cog icon) from the left menu.
  2. Select Developer.
  3. Copy the API URL and enter it in your n8n credential.
  4. Copy the API Key and enter it in your n8n credential.

Refer to How to obtain your ActiveCampaign API URL and Key for more information or for instructions on resetting your API key.


Edit Fields (Set)

URL: llms-txt#edit-fields-(set)

Contents:

  • Node parameters
    • Mode
    • Fields to Set
    • Keep Only Set Fields
    • Include in Output
  • Node options
    • Include Binary Data
    • Ignore Type Conversion Errors
    • Support Dot Notation
  • Templates and examples

Use the Edit Fields node to set workflow data. This node can set new data as well as overwrite data that already exists. This node is crucial in workflows which expect incoming data from previous nodes, such as when inserting values to Google Sheets or databases.

These are the settings and options available in the Edit Fields node.

You can either use Manual Mapping to edit fields using the GUI or JSON Output to write JSON that n8n adds to the input data.

If you select Mode > Manual Mapping, you can configure the fields by dragging and dropping values from INPUT.

The default behavior when you drag a value is:

  • n8n sets the value's name as the field name.
  • The field value contains an expression which accesses the value.

If you don't want to use expressions:

  1. Hover over a field. n8n displays the Fixed | Expressions toggle.
  2. Select Fixed.

You can do this for both the name and value of the field.

Keep Only Set Fields

Enable this to discard any input data that you don't use in Fields to Set.

Include in Output

Choose which input data to include in the node's output data.

Use these options to customize the behavior of the node.

Include Binary Data

If the input data includes binary data, choose whether to include it in the Edit Fields node's output data.

Ignore Type Conversion Errors

Enabling this allows n8n to ignore some data type errors when mapping fields.

Support Dot Notation

By default, n8n supports dot notation.

For example, when using manual mapping, the node follows the dot notation for the Name field. That means if you set the name in the Name field as number.one and the value in the Value field as 20, the resulting JSON is:

You can prevent this behavior by selecting Add Option > Support Dot Notation, and setting the Dot Notation field to off. Now the resulting JSON is:

Templates and examples

Creating an API endpoint

View template details

Scrape and summarize webpages with AI

View template details

Very quick quickstart

View template details

Browse Edit Fields (Set) integration templates, or search all templates

Arrays and expressions in JSON Output mode

You can use arrays and expressions when creating your JSON Output.

For example, given this input data generated by the Customer Datastore node:

Add the following JSON in the JSON Output field, with Include in Output set to All Input Fields:

Examples:

Example 1 (unknown):

{ "number": { "one": 20} }

Example 2 (unknown):

{ "number.one": 20 }

Example 3 (unknown):

[
  {
    "id": "23423532",
    "name": "Jay Gatsby",
    "email": "gatsby@west-egg.com",
    "notes": "Keeps asking about a green light??",
    "country": "US",
    "created": "1925-04-10"
  },
  {
    "id": "23423533",
    "name": "José Arcadio Buendía",
    "email": "jab@macondo.co",
    "notes": "Lots of people named after him. Very confusing",
    "country": "CO",
    "created": "1967-05-05"
  },
  {
    "id": "23423534",
    "name": "Max Sendak",
    "email": "info@in-and-out-of-weeks.org",
    "notes": "Keeps rolling his terrible eyes",
    "country": "US",
    "created": "1963-04-09"
  },
  {
    "id": "23423535",
    "name": "Zaphod Beeblebrox",
    "email": "captain@heartofgold.com",
    "notes": "Felt like I was talking to more than one person",
    "country": null,
    "created": "1979-10-12"
  },
  {
    "id": "23423536",
    "name": "Edmund Pevensie",
    "email": "edmund@narnia.gov",
    "notes": "Passionate sailor",
    "country": "UK",
    "created": "1950-10-16"
  }
]

Example 4 (unknown):

{
  "newKey": "new value",
  "array": [{{ $json.id }},"{{ $json.name }}"],
  "object": {
    "innerKey1": "new value",
    "innerKey2": "{{ $json.id }}",
    "innerKey3": "{{ $json.name }}",
 }
}

Role-based access control (RBAC)

URL: llms-txt#role-based-access-control-(rbac)

Contents:

  • Create a project
  • Add and remove users in a project
  • Delete a project
  • Move workflows and credentials between projects or users
  • Using external secrets in projects

RBAC is available on all plans except the Community edition. Different plans have different numbers of projects and roles. Refer to n8n's pricing page for plan details.

Role types and account types

Role types and account types are different things. Every account has one type. The account can have different role types for different projects.

RBAC is a way of managing access to workflows and credentials based on user roles and projects. You group workflows into projects, and user access depends on the user's project role. This section provides guidance on using RBAC in n8n.

n8n uses projects to group workflows and credentials, and assigns roles to users in each project. This means that a single user can have different roles in different projects, giving them different levels of access.

Create a project

Instance owners and instance admins can create projects.

  1. Select Add project.
  2. Fill out the project settings.
  3. Select Save.

Add and remove users in a project

Project admins can add and remove users.

To add a user to a project:

  1. Select the project.
  2. Select Project settings.
  3. Under Project members, browse for users or search by username or email address.
  4. Select the user you want to add.
  5. Check the role type and change it if needed.
  6. Select Save.

To remove a user from a project:

  1. Select the project.
  2. Select Project settings.
  3. In the three-dot menu for the user you want to remove, select Remove user.
  4. Select Save.

Delete a project

To delete a project:

  1. Select the project.
  2. Select Project settings.
  3. Select Delete project.
  4. Choose what to do with the project's workflows and credentials. You can select:

    • Transfer its workflows and credentials to another project: n8n prompts you to choose a project to move the data to.
    • Delete its workflows and credentials: n8n prompts you to confirm that you want to delete all the data in the project.

Move workflows and credentials between projects or users

Workflow and credential owners can move workflows or credentials (changing ownership) to other users or projects they have access to.

Moving revokes sharing

Moving workflows or credentials removes all existing sharing. Be aware that this could impact other workflows currently sharing these resources.

  1. Select Workflow menu or Credential menu > Move.

Moving workflows with credentials

When moving a workflow with credentials you have permission to share, you can choose to share the credentials as well. This ensures that the workflow continues to have access to the credentials it needs to execute. n8n will note any credentials that can't be moved (credentials you don't have permission to share).

  2. Select the project or user you want to move to.

  3. Confirm you understand the impact of the move: workflows may stop working if the credentials they need aren't available in the target project, and n8n removes any current individual sharing.

  4. Select Confirm move to new project.

Using external secrets in projects

To use external secrets in a project, you must have an instance owner or instance admin as a member of the project.




S3 node

URL: llms-txt#s3-node

Contents:

  • Operations
  • Templates and examples
  • Node reference
    • Setting file permissions in Wasabi

Use the S3 node to automate work in non-AWS S3 storage and integrate S3 with other applications. n8n has built-in support for a wide range of S3 features, including creating, deleting, and getting buckets, files, and folders. For AWS S3, use AWS S3.

Use the S3 node for S3-compatible storage providers other than AWS, such as Wasabi.

On this page, you'll find a list of operations the S3 node supports and links to more resources.

Refer to S3 credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Bucket

    • Create a bucket
    • Delete a bucket
    • Get all buckets
    • Search within a bucket
  • File

    • Copy a file
    • Delete a file
    • Download a file
    • Get all files
    • Upload a file

Attach file for upload

To attach a file for upload, use another node to pass the file as a data property. Nodes like the Read/Write Files from Disk node or the HTTP Request node work well.

  • Folder

    • Create a folder
    • Delete a folder
    • Get all folders

Templates and examples

Flux AI Image Generator

View template details

Hacker News to Video Content

View template details

Transcribe audio files from Cloud Storage

View template details

Browse S3 integration templates, or search all templates

Setting file permissions in Wasabi

When uploading files to Wasabi, you must set permissions for the files using the ACL dropdown and not the toggles.


LingvaNex credentials

URL: llms-txt#lingvanex-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a LingvaNex account.

Supported authentication methods

Refer to Lingvanex's Cloud API documentation for more information about the service.

To configure this credential, you'll need an API Key.


OpenAI credentials

URL: llms-txt#openai-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create an OpenAI account.

Supported authentication methods

Refer to OpenAI's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key
  • An Organization ID: Required if you belong to multiple organizations; otherwise, leave this blank.

To generate your API Key:

  1. Login to your OpenAI account or create an account.
  2. Open your API keys page.
  3. Select Create new secret key to create an API key, optionally naming the key.
  4. Copy your key and add it as the API Key in n8n.

Refer to the API Quickstart Account Setup documentation for more information.

To find your Organization ID:

  1. Go to your Organization Settings page.
  2. Copy your Organization ID and add it as the Organization ID in n8n.

Refer to Setting up your organization for more information. Note that API requests made using an Organization ID will count toward the organization's subscription quota.


Outlook.com Send Email credentials

URL: llms-txt#outlook.com-send-email-credentials

Contents:

  • Set up the credential
  • Use an app password
    • Security Info app password

Follow these steps to configure the Send Email credentials with an Outlook.com account.

Set up the credential

To configure the Send Email credential to use an Outlook.com account:

  1. Enter your Outlook.com email address as the User.

  2. Enter your Outlook.com password as the Password.

Outlook.com doesn't require you to use an app password, but if you'd like to for security reasons, refer to Use an app password.

  3. Enter smtp-mail.outlook.com as the Host.

  4. Enter 587 for the Port.

  5. Turn on the SSL/TLS toggle.

Refer to Microsoft's POP, IMAP, and SMTP settings for Outlook.com documentation for more information. If the settings above don't work for you, check with your email administrator.

Use an app password

If you'd prefer to use an app password instead of your email account password:

  1. Log into the My Account page.
  2. If you have a left navigation option for Security Info, jump to Security Info app password. If you don't have an option for Security Info, continue with these instructions.
  3. Go to the Additional security verification page.
  4. Select App passwords and Create.
  5. Enter a Name for your app password, like n8n credential.
  6. Use the option to copy password to clipboard and enter this as the Password in n8n instead of your email account password.

Refer to Outlook's Manage app passwords for 2-step verification page for more information.

Security Info app password

If you have a left navigation option for Security Info:

  1. Select Security Info. The Security Info page opens.
  2. Select + Add method.
  3. On the Add a method page, select App password and then select Add.
  4. Enter a Name for your app password, like n8n credential.
  5. Copy the Password and enter this as the Password in n8n instead of your email account password.

Refer to Outlook's Create app passwords from the Security info (preview) page for more information.


Peekalink node

URL: llms-txt#peekalink-node

Contents:

  • Operations
  • Templates and examples

Use the Peekalink node to automate work in Peekalink, and integrate Peekalink with other applications. n8n supports checking, and reviewing links with Peekalink.

On this page, you'll find a list of operations the Peekalink node supports and links to more resources.

Refer to Peekalink credentials for guidance on setting up authentication.

  • Check whether preview for a given link is available
  • Return the preview for a link

Templates and examples

Browse Peekalink integration templates, or search all templates


Drift credentials

URL: llms-txt#drift-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API personal access token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API personal access token
  • OAuth2

Refer to Drift's API documentation for more information about the service.

Using API personal access token

To configure this credential, you'll need:

  • A Personal Access Token: To get a token, create a Drift app. Install the app to generate an OAuth Access token. Add this to the n8n credential as your Personal Access Token.

Using OAuth2

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you need to configure OAuth2 from scratch or need more detail on what's happening in the OAuth web flow, refer to the instructions in the Drift Authentication and Scopes documentation to set up OAuth for your app.


Facebook Trigger node

URL: llms-txt#facebook-trigger-node

Contents:

  • Objects
  • Related resources

Facebook is a social networking site to connect and share with family and friends online.

Use the Facebook Trigger node to trigger a workflow when events occur in Facebook.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Facebook Trigger integrations page.

  • Ad Account: Get updates for certain ads changes.

  • Application: Get updates sent to the application.

  • Certificate Transparency: Get updates when new security certificates are generated for your subscribed domains, including new certificates and potential phishing attempts.

  • Activity and events in a Group

  • Instagram: Get updates when someone comments on the Media objects of your app users; @mentions your app users; or when Stories of your app users expire.

  • Link: Get updates about the links for rich previews by an external provider

  • Page updates

  • Permissions: Updates when granting or revoking permissions

  • User profile updates

  • WhatsApp Business Account

n8n recommends using the WhatsApp Trigger node with the WhatsApp credentials instead of the Facebook Trigger node for these events. The WhatsApp Trigger node has more events to listen to.

For each Object, use the Field Names or IDs dropdown to select more details on what data to receive. Refer to the linked pages for more details.

View example workflows and related content on n8n's website.

Refer to Meta's Graph API documentation for details about their API.


Storyblok node

URL: llms-txt#storyblok-node

Contents:

  • Operations
    • Content API
    • Management API
  • Templates and examples

Use the Storyblok node to automate work in Storyblok, and integrate Storyblok with other applications. n8n has built-in support for a wide range of Storyblok features, including getting, deleting, and publishing stories.

On this page, you'll find a list of operations the Storyblok node supports and links to more resources.

Refer to Storyblok credentials for guidance on setting up authentication.

  • Content API: Story

    • Get a story
    • Get all stories
  • Management API: Story

    • Delete a story
    • Get a story
    • Get all stories
    • Publish a story
    • Unpublish a story

Templates and examples

Browse Storyblok integration templates, or search all templates


Vector Store Question Answer Tool node

URL: llms-txt#vector-store-question-answer-tool-node

Contents:

  • Node parameters
    • Description of Data
    • Limit
  • How n8n populates the tool description
  • Related resources

The Vector Store Question Answer node is a tool that allows an agent to summarize results and answer questions based on chunks from a vector store.

On this page, you'll find the node parameters for the Vector Store Question Answer node, and links to more resources.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Vector Store Question Answer Tool integrations page.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Description of Data

Enter a description of the data in the vector store.

Limit

The maximum number of results to return.

How n8n populates the tool description

n8n uses the node name (select the name to edit) and Description of Data parameter to populate the tool description for AI agents using the following format:

Useful for when you need to answer questions about [node name]. Whenever you need information about [Description of Data], you should ALWAYS use this. Input should be a fully formed question.

Spaces in the node name are converted to underscores in the tool description.
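
For example, assuming a node named Product Docs with a Description of Data of "internal product documentation" (both hypothetical), the generated tool description would read:

> Useful for when you need to answer questions about Product_Docs. Whenever you need information about internal product documentation, you should ALWAYS use this. Input should be a fully formed question.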

Avoid special characters in node names

Using special characters in the node name will cause errors when the agent runs:

Use only alphanumeric characters, spaces, dashes, and underscores in node names.

View example workflows and related content on n8n's website.

Refer to LangChain's documentation on tools for more information about tools in LangChain.

View n8n's Advanced AI documentation.


One Simple API node

URL: llms-txt#one-simple-api-node

Contents:

  • Operations
  • Templates and examples
  • Related resources

Use the One Simple API node to automate work in One Simple API, and integrate One Simple API with other applications. n8n has built-in support for a wide range of One Simple API features, including getting profiles, retrieving information, and generating utilities.

On this page, you'll find a list of operations the One Simple API node supports and links to more resources.

Refer to One Simple API credentials for guidance on setting up authentication.

  • Information
    • Convert a value between currencies
    • Retrieve image metadata from a URL
  • Social Profile
    • Get details about an Instagram profile
    • Get details about a Spotify Artist
  • Utility
    • Expand a shortened url
    • Generate a QR Code
    • Validate an email address
  • Website
    • Generate a PDF from a webpage
    • Get SEO information from website
    • Create a screenshot from a webpage

Templates and examples

Validate email of new contacts in Mautic

View template details

Validate email of new contacts in Hubspot

View template details

🛠️ One Simple API Tool MCP Server 💪 all 10 operations

View template details

Browse One Simple API integration templates, or search all templates

Refer to One Simple API's documentation for more information about the service.


Redis Vector Store node

URL: llms-txt#redis-vector-store-node

Contents:

  • Prerequisites
  • Node usage patterns
    • Use as a regular node to insert and retrieve documents
    • Connect directly to an AI agent as a tool
    • Use a retriever to fetch documents
    • Use the Vector Store Question Answer Tool to answer questions
  • Node parameters
    • Operation Mode
    • Rerank Results
    • Get Many parameters

Use the Redis Vector Store node to interact with your Redis database as a vector store. You can insert documents into the vector database, get documents from the vector database, retrieve documents using a retriever connected to a chain, or connect it directly to an agent to use as a tool.

On this page, you'll find the node parameters for the Redis Vector Store node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Before using this node, you need a Redis database with the Redis Query Engine enabled. Use one of the following:

  • Redis Open Source (v8.0 and later) - includes the Redis Query Engine by default
  • Redis Cloud - fully managed service
  • Redis Software - self-managed deployment

A new index will be created if you don't have one.

Creating your own indices in advance is only necessary if you want to use a custom index schema or reuse an existing index. Otherwise, you can skip this step and let the node create a new index for you based on the options you specify.
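
If you just want to experiment locally, a minimal sketch (the image tag and container name are assumptions; any Redis Open Source 8.0+ image should work, since it bundles the Redis Query Engine):

```bash
# Start a local Redis 8 Open Source instance to use as a vector store
docker run -d --name redis-vector -p 6379:6379 redis:8
```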

Node usage patterns

You can use the Redis Vector Store node in the following patterns:

Use as a regular node to insert and retrieve documents

You can use the Redis Vector Store as a regular node to insert or get documents. This pattern places the Redis Vector Store in the regular connection flow without using an agent.

You can see an example of this in scenario 1 of this template (the template uses the Supabase Vector Store, but the pattern is the same).

Connect directly to an AI agent as a tool

You can connect the Redis Vector Store node directly to the tool connector of an AI agent to use a vector store as a resource when answering queries.

Here, the connection would be: AI agent (tools connector) -> Redis Vector Store node.

Use a retriever to fetch documents

You can use the Vector Store Retriever node with the Redis Vector Store node to fetch documents from the Redis Vector Store node. This is often used with the Question and Answer Chain node to fetch documents from the vector store that match the given chat input.

An example of the connection flow (the linked example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Redis Vector Store.

Use the Vector Store Question Answer Tool to answer questions

Another pattern uses the Vector Store Question Answer Tool to summarize results and answer questions from the Redis Vector Store node. Rather than connecting the Redis Vector Store directly as a tool, this pattern uses a tool specifically designed to summarizes data in the vector store.

The connections flow (the linked example uses Qdrant, but the pattern is the same) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Redis Vector store.

Operation Mode

This Vector Store node has four modes: Get Many, Insert Documents, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent). The mode you select determines the operations you can perform with the node and what inputs and outputs are available.

Get Many

In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context.

Insert Documents

Use insert documents mode to insert new documents into your vector database.

Retrieve Documents (as Vector Store for Chain/Tool)

Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.

Retrieve Documents (as Tool for AI Agent)

Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details.

Rerank Results

Enables reranking. If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the Get Many, Retrieve Documents (As Vector Store for Chain/Tool) and Retrieve Documents (As Tool for AI Agent) modes.

Get Many parameters

  • Redis Index: Enter the name of the Redis vector search index to use. Optionally choose an existing one from the list.
  • Prompt: Enter the search query.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

This Operation Mode includes one Node option, the Metadata Filter.

Insert Documents parameters

  • Redis Index: Enter the name of the Redis vector search index to use. Optionally choose an existing one from the list.

Retrieve Documents (As Vector Store for Chain/Tool) parameters

  • Redis Index: Enter the name of the Redis vector search index to use. Optionally choose an existing one from the list.

This Operation Mode includes one Node option, the Metadata Filter.

Retrieve Documents (As Tool for AI Agent) parameters

  • Name: The name of the vector store.
  • Description: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
  • Redis Index: Enter the name of the Redis vector search index to use. Optionally choose an existing one from the list.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Whether to include document metadata.

You can use this with the Get Many and Retrieve Documents (As Tool for AI Agent) modes.

Metadata filters are available for the Get Many, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent) operation modes. This is an OR query. If you specify more than one metadata filter field, at least one of them must match. When inserting data, the metadata is set using the document loader. Refer to Default Data Loader for more information on loading documents.

Redis Configuration Options

Available for all operation modes:

  • Metadata Key: Enter the key for the metadata field in the Redis hash (default: metadata).
  • Key Prefix: Enter the key prefix for storing documents (default: doc:).
  • Content Key: Enter the key for the content field in the Redis hash (default: content).
  • Embedding Key: Enter the key for the embedding field in the Redis hash (default: embedding).

Available for the Insert Documents operation mode:

  • Overwrite Documents: Select whether to overwrite existing documents (turned on) or not (turned off). Also deletes the index.
  • Time-to-Live: Enter the time-to-live for documents in seconds. Does not expire the index.

Templates and examples

Explore n8n Nodes in a Visual Reference Library

View template details

🐶 AI Agent for PetShop Appointments (Agente de IA para agendamentos de PetShop)

View template details

🤖 AI-Powered WhatsApp Assistant for Restaurants & Delivery Automation

View template details

Browse Redis Vector Store integration templates, or search all templates

View n8n's Advanced AI documentation.

Self-hosted AI Starter Kit

New to working with AI and using self-hosted n8n? Try n8n's self-hosted AI Starter Kit to get started with a proof-of-concept or demo playground using Ollama, Qdrant, and PostgreSQL.


AI Workflow Builder

URL: llms-txt#ai-workflow-builder

Contents:

  • Working with the builder
    • Commands you can run in the builder
  • Understanding credits
    • How credits work
    • Getting more credits
  • AI model and data handling

AI Workflow Builder enables you to create, refine, and debug workflows using natural language descriptions of your goals.

It handles the entire workflow construction process, including node selection, placement, and configuration, thereby reducing the time required to build functional workflows.

For details of pricing and availability of AI Workflow Builder, see n8n Plans and Pricing.

Working with the builder

  1. Describe your workflow: Either select an example prompt or describe your requirements in natural language.
  2. Monitor the build: The builder provides real-time feedback through several phases.
  3. Review and refine the generated workflow: Review required credentials and other parameters. Refine the workflow using prompts.

Commands you can run in the builder

  • /clear: Clears the context for the LLM and lets you start from scratch

Understanding credits

Each time you send a message to the builder asking it to create or modify a workflow, that counts as one interaction, which is worth one credit.

Counts as an interaction

  • Sending a message to create a new workflow
  • Asking the builder to modify an existing workflow
  • Clicking the Execute and refine button in the builder window after a workflow is built

Does NOT count as an interaction

  • Messages that fail or produce generation errors
  • Requests you manually stop by clicking the stop button

Getting more credits

If you've used your monthly limit, you can upgrade to a higher plan.

For details on plans and pricing, see n8n Plans and Pricing.

AI model and data handling

The following data are sent to the LLM:

  • Text prompts that you provide to create, refine, or debug the workflow
  • Node definitions, parameters, and connections, as well as the current workflow definition
  • Any mock execution data that is loaded when using the builder

The following data are not sent:

  • Details of any credentials you use
  • Past executions of the workflow

seven node

URL: llms-txt#seven-node

Contents:

  • Operations
  • Templates and examples

Use the seven node to automate work in seven, and integrate seven with other applications. n8n has built-in support for a wide range of seven features, including sending SMS, and converting text to voice.

On this page, you'll find a list of operations the seven node supports and links to more resources.

Refer to seven credentials for guidance on setting up authentication.

  • SMS
    • Send SMS
  • Voice Call
    • Converts text to voice and calls a given number

Templates and examples

Automate WhatsApp Booking System with GPT-4 Assistant, Cal.com and SMS Reminders

View template details

Sending an SMS using sms77

View template details

🛠️ seven Tool MCP Server with both available operations

View template details

Browse seven integration templates, or search all templates


CircleCI node

URL: llms-txt#circleci-node

Contents:

  • Operations
  • Templates and examples

Use the CircleCI node to automate work in CircleCI, and integrate CircleCI with other applications. n8n has built-in support for a wide range of CircleCI features, including getting and triggering pipelines.

On this page, you'll find a list of operations the CircleCI node supports and links to more resources.

Refer to CircleCI credentials for guidance on setting up authentication.

  • Pipeline
    • Get a pipeline
    • Get all pipelines
    • Trigger a pipeline

Templates and examples

Browse CircleCI integration templates, or search all templates


Troubleshooting OIDC SSO

URL: llms-txt#troubleshooting-oidc-sso

Contents:

  • Known issues
    • State parameter not supported
    • PKCE not supported

State parameter not supported

When using OIDC providers that enforce the use of the state CSRF token parameter, authentication fails with the error shown in Example 1 below.

n8n's current OIDC implementation doesn't handle the state parameter that some OIDC providers send as a security measure against CSRF attacks.

For now, the only workaround is to configure your OIDC provider to disable the state parameter, if possible.

n8n is working on adding full support for the OIDC state parameter in a future release.

PKCE not supported

OIDC providers that require PKCE (Proof Key for Code Exchange) may fail authentication or reject n8n's authorization requests. n8n's current OIDC implementation doesn't support PKCE.

The only workaround is to configure your OIDC provider not to require PKCE for the n8n client, if this option is available in your provider's settings.

n8n plans to add PKCE support in a future release.

Examples:

Example 1 (unknown):

{"code":0,"message":"authorization response from the server is an error"}

Google Business Profile node

URL: llms-txt#google-business-profile-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Google Business Profile node to automate work in Google Business Profile and integrate Google Business Profile with other applications. n8n has built-in support for a wide range of Google Business Profile features, which includes creating, updating, and deleting posts and reviews.

On this page, you'll find a list of operations the Google Business Profile node supports, and links to more resources.

You can find authentication information for this node here.

  • Post
    • Create
    • Delete
    • Get
    • Get Many
    • Update
  • Review
    • Delete Reply
    • Get
    • Get Many
    • Reply

Templates and examples

🛠️ Google Business Profile Tool MCP Server 💪 all 9 operations

View template details

Automated Google Business Reports with GPT Insights to Slack & Email

View template details

Automate Google Business Profile Posts with GPT-4 & Google Sheets

by Muhammad Qaisar Mehmood

View template details

Browse Google Business Profile integration templates, or search all templates

n8n provides a trigger node for Google Business Profile. You can find the trigger node docs here.

Refer to Google Business Profile's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Queue mode

URL: llms-txt#queue-mode

Contents:

  • How it works
  • Configuring workers
    • Set encryption key
    • Set executions mode
    • Start Redis
    • Start workers
  • Running n8n with queues
  • Webhook processors
    • Configure webhook URL
    • Configure load balancer

You can run n8n in different modes depending on your needs. The queue mode provides the best scalability.

n8n doesn't support queue mode with binary data storage in filesystem. If your workflows need to persist binary data in queue mode, you can use S3 external storage.

How it works

When running in queue mode, you have multiple n8n instances set up, with one main instance receiving workflow information (such as triggers) and the worker instances performing the executions.

Each worker is its own Node.js instance, able to handle multiple simultaneous workflow executions thanks to its high IOPS (input-output operations per second).

By using worker instances and running in queue mode, you can scale n8n up (by adding workers) and down (by removing workers) as needed to handle the workload.

This is the process flow:

  1. The main n8n instance handles timers and webhook calls, generating (but not running) a workflow execution.
  2. It passes the execution ID to a message broker, Redis, which maintains the queue of pending executions and allows the next available worker to pick them up.
  3. A worker in the pool picks up the message from Redis.
  4. The worker uses the execution ID to get workflow information from the database.
  5. After completing the workflow execution, the worker:
    • Writes the results to the database.
    • Posts to Redis, saying that the execution has finished.
  6. Redis notifies the main instance.

Configuring workers

Workers are n8n instances that do the actual work. They receive information from the main n8n process about the workflows that have to get executed, execute the workflows, and update the status after each execution is complete.

Set encryption key

n8n automatically generates an encryption key upon first startup. You can also provide your own custom key using an environment variable if desired.

The encryption key of the main n8n instance must be shared with all worker and webhook processor nodes so that they can access credentials stored in the database.

Set the encryption key for each worker node in a configuration file or by setting the corresponding environment variable:

Set executions mode

Database considerations

n8n recommends using Postgres 13+. Running n8n with execution mode set to queue with an SQLite database isn't recommended.

Set the environment variable EXECUTIONS_MODE to queue on the main instance and any workers using the following command.

Alternatively, you can set executions.mode to queue in the configuration file.
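
As a minimal sketch, assuming a JSON configuration file where the dotted executions.mode key maps to nested objects:

{
  "executions": {
    "mode": "queue"
  }
}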

Start Redis

Running Redis on a separate machine

You can run Redis on a separate machine, just make sure that it's accessible by the n8n instance.

To run Redis in a Docker container, follow the instructions below:

Run the following command to start a Redis instance:

By default, Redis runs on localhost on port 6379 with no password. Based on your Redis configuration, set the following configurations for the main n8n process. These will allow n8n to interact with Redis.

Using configuration file Using environment variables Description
queue.bull.redis.host:localhost QUEUE_BULL_REDIS_HOST=localhost By default, Redis runs on localhost.
queue.bull.redis.port:6379 QUEUE_BULL_REDIS_PORT=6379 The default port is 6379. If Redis is running on a different port, configure the value.

You can also set the following optional configurations:

Using configuration file Using environment variables Description
queue.bull.redis.username:USERNAME QUEUE_BULL_REDIS_USERNAME By default, Redis doesn't require a username. If you're using a specific user, configure this value.
queue.bull.redis.password:PASSWORD QUEUE_BULL_REDIS_PASSWORD By default, Redis doesn't require a password. If you're using a password, configure this value.
queue.bull.redis.db:0 QUEUE_BULL_REDIS_DB The default value is 0. If you change this value, update the configuration.
queue.bull.redis.timeoutThreshold:10000ms QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD Tells n8n how long it should wait if Redis is unavailable before exiting. The default value is 10000 (ms).
queue.bull.gracefulShutdownTimeout:30 N8N_GRACEFUL_SHUTDOWN_TIMEOUT A graceful shutdown timeout for workers to finish executing jobs before terminating the process. The default value is 30 seconds.
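
For example, a sketch of the environment-variable form, assuming Redis runs on a separate, password-protected host (the host name and password are placeholders):

export QUEUE_BULL_REDIS_HOST=redis.example.com
export QUEUE_BULL_REDIS_PORT=6379
export QUEUE_BULL_REDIS_PASSWORD=<your-redis-password>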

Now you can start your n8n instance and it will connect to your Redis instance.

Start workers

You will need to start worker processes to allow n8n to execute workflows. If you want to host workers on a separate machine, install n8n on the machine and make sure that it's connected to your Redis instance and the n8n database.

Start worker processes by running the following command from the root directory:

If you're using Docker, use the following command:
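
As a sketch only (your image tag, database settings, and Redis settings will differ), a worker container might be started along these lines, passing the same queue, Redis, and encryption key settings described above:

docker run --name n8n-worker \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=<redis-host> \
  -e N8N_ENCRYPTION_KEY=<main_instance_encryption_key> \
  docker.n8n.io/n8nio/n8n worker

The worker also needs the same database configuration as the main instance so it can read workflow data.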

You can set up multiple worker processes. Make sure that all the worker processes have access to Redis and the n8n database.

Each worker process runs a server that exposes optional endpoints:

  • /healthz: returns whether the worker is up, if you enable the QUEUE_HEALTH_CHECK_ACTIVE environment variable
  • /healthz/readiness: returns whether worker's DB and Redis connections are ready, if you enable the QUEUE_HEALTH_CHECK_ACTIVE environment variable
  • credentials overwrite endpoint
  • /metrics
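
For example, to turn on and probe the health check endpoints listed above (a sketch assuming the worker's HTTP server is reachable on the default port 5678; adjust the host and port to your setup):

export QUEUE_HEALTH_CHECK_ACTIVE=true
curl http://<worker-host>:5678/healthz
curl http://<worker-host>:5678/healthz/readiness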

View running workers

  • Available on Self-hosted Enterprise plans.
  • If you want access to this feature on Cloud Enterprise, contact n8n.

You can view running workers and their performance metrics in n8n by selecting Settings > Workers.

Running n8n with queues

When running n8n with queues, all the production workflow executions get processed by worker processes. This means that even the webhook calls get delegated to the worker processes, which might add some overhead and extra latency.

Redis acts as the message broker, and the database persists data, so access to both is required. Running a distributed system with this setup over SQLite isn't supported.

If you want to migrate data from one database to another, you can use the Export and Import commands. Refer to the CLI commands for n8n documentation to learn how to use these commands.
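
For example, a sketch of the export and import commands (the file names are placeholders; run the exports against the old database and the imports against the new one, and see the CLI commands documentation for the full set of flags):

n8n export:workflow --all --output=workflows.json
n8n export:credentials --all --output=credentials.json
n8n import:workflow --input=workflows.json
n8n import:credentials --input=credentials.json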

Webhook processors

Webhook processors rely on Redis and also need the EXECUTIONS_MODE environment variable set. Follow the Configuring workers section above to set up webhook processor nodes.

Webhook processors are another layer of scaling in n8n. Configuring the webhook processor is optional, and allows you to scale the incoming webhook requests.

This method allows n8n to process a huge number of parallel requests. All you have to do is add more webhook processes and workers as needed. Each webhook process listens for requests on the same port (default: 5678). Run these processes in containers or on separate machines, and use a load balancer to route requests to them.

n8n doesn't recommend adding the main process to the load balancer pool. If you add the main process to the pool, it will receive requests and possibly a heavy load. This will result in degraded performance for editing, viewing, and interacting with the n8n UI.

You can start the webhook processor by executing the following command from the root directory:
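
A sketch, assuming the webhook processor is started with the same n8n binary as the workers:

./packages/cli/bin/n8n webhook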

If you're using Docker, use the following command:
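
Again a sketch only, passing the same queue, Redis, and encryption key settings as the workers:

docker run --name n8n-webhook -p 5678:5678 \
  -e EXECUTIONS_MODE=queue \
  -e QUEUE_BULL_REDIS_HOST=<redis-host> \
  -e N8N_ENCRYPTION_KEY=<main_instance_encryption_key> \
  docker.n8n.io/n8nio/n8n webhook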

Configure webhook URL

To configure your webhook URL, execute the following command on the machine running the main n8n instance:
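
A sketch, using n8n.example.com as a placeholder for your own domain:

export WEBHOOK_URL=https://n8n.example.com/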

You can also set this value in the configuration file.

Configure load balancer

When using multiple webhook processes you will need a load balancer to route requests. If you are using the same domain name for your n8n instance and the webhooks, you can set up your load balancer to route requests as follows:

  • Redirect any request that matches /webhook/* to the webhook servers pool
  • All other paths (the n8n internal API, the static files for the editor, etc.) should get routed to the main process

Note: The default URL for manual workflow executions is /webhook-test/*. Make sure that these URLs route to your main process.

You can change this path using the endpoints.webhook setting in the configuration file or the N8N_ENDPOINT_WEBHOOK environment variable. If you change these, update your load balancer accordingly.

Disable webhook processing in the main process (optional)

With webhook processors in place, you can disable webhook processing in the main process so that all production webhook executions run on the webhook processors. To do this, set endpoints.disableProductionWebhooksOnMainProcess to true in the configuration file.

Alternatively, you can use the following command:
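
A sketch of the environment-variable form (double-check the exact variable name against the environment variables reference for your n8n version):

export N8N_DISABLE_PRODUCTION_MAIN_PROCESS=true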

When disabling the webhook process in the main process, run the main process and don't add it to the load balancer's webhook pool.

Configure worker concurrency

You can define the number of jobs a worker can run in parallel by using the concurrency flag. It defaults to 10. To change it:
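
For example, a sketch that starts a worker with a concurrency of 5, mirroring the worker start command above:

./packages/cli/bin/n8n worker --concurrency=5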

Concurrency and scaling recommendations

n8n recommends setting concurrency to 5 or higher for your worker instances. Setting low concurrency values with a large number of workers can exhaust your database's connection pool, leading to processing delays and failures.

  • Available on Self-hosted Enterprise plans.

In queue mode you can run more than one main process for high availability.

In a single-mode setup, the main process does two sets of tasks:

  • regular tasks, such as running the API, serving the UI, and listening for webhooks, and
  • at-most-once tasks, such as running non-HTTP triggers (timers, pollers, and persistent connections like RabbitMQ and IMAP), and pruning executions and binary data.

In a multi-main setup, there are two kinds of main processes:

  • followers, which run regular tasks, and
  • the leader, which runs both regular and at-most-once tasks.

Leader designation

In a multi-main setup, all main instances handle the leadership process transparently to users. In case the current leader becomes unavailable, for example because it crashed or its event loop became too busy, other followers can take over. If the previous leader becomes responsive again, it becomes a follower.

Configuring multi-main setup

To deploy n8n in multi-main setup, ensure:

  • All main processes are running in queue mode and are connected to Postgres and Redis.
  • All main and worker processes are running the same version of n8n.
  • All main processes have set the environment variable N8N_MULTI_MAIN_SETUP_ENABLED to true.
  • All main processes are running behind a load balancer with session persistence (sticky sessions) enabled.
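
As a minimal sketch, each main process would then be started with at least the following environment variables, in addition to the Postgres and Redis settings described earlier:

export EXECUTIONS_MODE=queue
export N8N_MULTI_MAIN_SETUP_ENABLED=true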

If needed, you can adjust the leader key options:

Using configuration file Using environment variables Description
multiMainSetup.ttl:10 N8N_MULTI_MAIN_SETUP_KEY_TTL=10 Time to live (in seconds) for leader key in multi-main setup.
multiMainSetup.interval:3 N8N_MULTI_MAIN_SETUP_CHECK_INTERVAL=3 Interval (in seconds) for leader check in multi-main setup.

Examples:

Example 1 (unknown):

export N8N_ENCRYPTION_KEY=<main_instance_encryption_key>

Example 2 (unknown):

export EXECUTIONS_MODE=queue

Example 3 (unknown):

docker run --name some-redis -p 6379:6379  -d redis

Example 4 (unknown):

./packages/cli/bin/n8n worker

TimescaleDB node

URL: llms-txt#timescaledb-node

Contents:

  • Operations
  • Templates and examples
  • Specify a column's data type

Use the TimescaleDB node to automate work in TimescaleDB, and integrate TimescaleDB with other applications. n8n has built-in support for a wide range of TimescaleDB features, including executing an SQL query, as well as inserting and updating rows in a database.

On this page, you'll find a list of operations the TimescaleDB node supports and links to more resources.

Refer to TimescaleDB credentials for guidance on setting up authentication.

  • Execute an SQL query
  • Insert rows in database
  • Update rows in database

Templates and examples

Browse TimescaleDB integration templates, or search all templates

Specify a column's data type

To specify a column's data type, append the column name with :type, where type is the data type you want for the column. For example, if you want to specify the type int for the column id and type text for the column name, you can use the following snippet in the Columns field: id:int,name:text.


Filter

URL: llms-txt#filter

Contents:

  • Node parameters
    • Combining conditions
  • Node options
  • Templates and examples
  • Available data type comparisons
    • String
    • Number
    • Date & Time
    • Boolean
    • Array

Filter items based on a condition. If the item meets the condition, the Filter node passes it on to the next node in the Filter node output. If the item doesn't meet the condition, the Filter node omits the item from its output.

Create filter comparison Conditions to perform your filter.

  • Use the data type dropdown to select the data type and comparison operation type for your condition. For example, to filter for dates after a particular date, select Date & Time > is after.
  • The fields and values to enter into the condition change based on the data type and comparison you select. Refer to Available data type comparisons for a full list of all comparisons by data type.

Select Add condition to create more conditions.

Combining conditions

You can choose to keep items:

  • When they meet all conditions: Create two or more conditions and select AND in the dropdown between them.
  • When they meet any of the conditions: Create two or more conditions and select OR in the dropdown between them.

You can't create a mix of AND and OR rules.

  • Ignore Case: Whether to ignore letter case (turned on) or be case sensitive (turned off).
  • Less Strict Type Validation: Whether you want n8n to attempt to convert value types based on the operator you choose (turned on) or not (turned off). Turn this on when facing a "wrong type:" error in your node.

Templates and examples

Scrape business emails from Google Maps without the use of any third party APIs

View template details

Build Your First AI Data Analyst Chatbot

View template details

Generate Leads with Google Maps

View template details

Browse Filter integration templates, or search all templates

Available data type comparisons

String data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • is equal to
  • is not equal to
  • contains
  • does not contain
  • starts with
  • does not start with
  • ends with
  • does not end with
  • matches regex
  • does not match regex

Number data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • is equal to
  • is not equal to
  • is greater than
  • is less than
  • is greater than or equal to
  • is less than or equal to

Date & Time data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • is equal to
  • is not equal to
  • is after
  • is before
  • is after or equal to
  • is before or equal to

Boolean data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • is true
  • is false
  • is equal to
  • is not equal to

Array data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • contains
  • does not contain
  • length equal to
  • length not equal to
  • length greater than
  • length less than
  • length greater than or equal to
  • length less than or equal to

Object data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty

LangChain learning resources

URL: llms-txt#langchain-learning-resources

You don't need to know details about LangChain to use n8n, but it can be helpful to learn a few concepts. This page lists some learning resources that people at n8n have found helpful.

The LangChain documentation includes introductions to key concepts and possible use cases. Choose the LangChain | Python or LangChain | JavaScript documentation for quickstarts, code examples, and API documentation. LangChain also provides code templates (Python only), offering ideas for potential use cases and common patterns.

What Product People Need To Know About LangChain provides a list of terminology and concepts, explained with helpful metaphors. Aimed at a wide audience.

If you prefer video, this YouTube series by Greg Kamradt works through the LangChain documentation, providing code examples as it goes.

n8n offers space to discuss LangChain on the Discord. Join to share your projects and discuss ideas with the community.


Intercom node

URL: llms-txt#intercom-node

Contents:

  • Operations
  • Templates and examples

Use the Intercom node to automate work in Intercom, and integrate Intercom with other applications. n8n has built-in support for a wide range of Intercom features, including creating, updating, deleting, and getting companies, leads, and users.

On this page, you'll find a list of operations the Intercom node supports and links to more resources.

Refer to Intercom credentials for guidance on setting up authentication.

  • Company
    • Create a new company
    • Get data of a company
    • Get data of all companies
    • Update a company
    • List company's users
  • Lead
    • Create a new lead
    • Delete a lead
    • Get data of a lead
    • Get data of all leads
    • Update new lead
  • User
    • Create a new user
    • Delete a user
    • Get data of a user
    • Get data of all users
    • Update a user

Templates and examples

Enrich new Intercom users with contact details and more from ExactBuyer

View template details

Create a new user in Intercom

View template details

Autonomous Customizable Support Chatbot on Intercom + Discord Thread Reports

View template details

Browse Intercom integration templates, or search all templates


Mailgun node

URL: llms-txt#mailgun-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Mailgun node to automate work in Mailgun, and integrate Mailgun with other applications. n8n has built-in support for sending emails with Mailgun.

On this page, you'll find a list of operations the Mailgun node supports and links to more resources.

Refer to Mailgun credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Templates and examples

Handle errors from a different workflow

View template details

Report phishing websites to Steam and CloudFlare

View template details

AI Agent Creates Content to Be Picked by ChatGPT, Gemini, Google

View template details

Browse Mailgun integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Microsoft Outlook node

URL: llms-txt#microsoft-outlook-node

Contents:

  • Operations
  • Waiting for a response
    • Response Type
    • Approval response customization
    • Free Text response customization
    • Custom Form response customization
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Microsoft Outlook node to automate work in Microsoft Outlook, and integrate Microsoft Outlook with other applications. n8n has built-in support for a wide range of Microsoft Outlook features, including creating, updating, deleting, and getting folders, messages, and drafts.

On this page, you'll find a list of operations the Microsoft Outlook node supports and links to more resources.

Refer to Microsoft credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Calendar
    • Create
    • Delete
    • Get
    • Get Many
    • Update
  • Contact
    • Create
    • Delete
    • Get
    • Get Many
    • Update
  • Draft
    • Create
    • Delete
    • Get
    • Send
    • Update
  • Event
    • Create
    • Delete
    • Get
    • Get Many
    • Update
  • Folder
    • Create
    • Delete
    • Get
    • Get Many
    • Update
  • Folder Message
    • Get Many
  • Message
    • Delete
    • Get
    • Get Many
    • Move
    • Reply
    • Send
    • Send and Wait for Response
    • Update
  • Message Attachment
    • Add
    • Download
    • Get
    • Get Many

Waiting for a response

By choosing the Send and Wait for Response operation, you can send a message and pause the workflow execution until a person confirms the action or provides more information.

You can choose between the following types of waiting and approval actions:

  • Approval: Users can approve or disapprove from within the message.
  • Free Text: Users can submit a response with a form.
  • Custom Form: Users can submit a response with a custom form.

You can customize the waiting and response behavior depending on which response type you choose. You can configure these options in any of the above response types:

  • Limit Wait Time: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time.
  • Append n8n Attribution: Whether to mention in the message that it was sent automatically with n8n (turned on) or not (turned off).

Approval response customization

When using the Approval response type, you can choose whether to present only an approval button or both approval and disapproval buttons.

You can also customize the button labels for the buttons you include.

Free Text response customization

When using the Free Text response type, you can customize the message button label, the form title and description, and the response button label.

Custom Form response customization

When using the Custom Form response type, you build a form using the fields and options you want.

You can customize each form element with the settings outlined in the n8n Form trigger's form elements. To add more fields, select the Add Form Element button.

You'll also be able to customize the message button label, the form title and description, and the response button label.

Templates and examples

Create a Branded AI-Powered Website Chatbot

View template details

Auto Categorise Outlook Emails with AI

View template details

Phishing Analysis - URLScan.io and VirusTotal

View template details

Browse Microsoft Outlook integration templates, or search all templates

Refer to Outlook's API documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Handling API rate limits

URL: llms-txt#handling-api-rate-limits

Contents:

  • Identify rate limit issues
  • Handle rate limits for integrations
    • Enable Retry On Fail
    • Use Loop Over Items and Wait
  • Handle rate limits in the HTTP Request node
    • Batch requests
    • Paginate results

API rate limits are restrictions on request frequency. For example, an API may limit the number of requests you can make per minute, or per day.

APIs can also limit how much data you can send in one request, or how much data the API sends in a single response.

Identify rate limit issues

When an n8n node hits a rate limit, it errors. n8n displays the error message in the node output panel. This includes the error message from the service.

If n8n received error 429 (too many requests) from the service, the error message is The service is receiving too many requests from you.

To check the rate limits for the service you're using, refer to the API documentation for the service.

Handle rate limits for integrations

There are two ways to handle rate limits in n8n's integrations: using the Retry On Fail setting, or using a combination of the Loop Over Items and Wait nodes:

  • Retry On Fail adds a pause between API request attempts.
  • With Loop Over Items and Wait, you can break your request data into smaller chunks and pause between requests.

Enable Retry On Fail

When you enable Retry On Fail, the node automatically tries the request again if it fails the first time.

  1. Open the node.
  2. Select Settings.
  3. Enable the Retry On Fail toggle.
  4. Configure the retry settings: if using this to work around rate limits, set Wait Between Tries (ms) to more than the rate limit. For example, if the API you're using allows one request per second, set Wait Between Tries (ms) to 1000 to allow a 1 second wait.

Use Loop Over Items and Wait

Use the Loop Over Items node to batch the input items, and the Wait node to introduce a pause between each request.

  1. Add the Loop Over Items node before the node that calls the API. Refer to Loop Over Items for information on how to configure the node.
  2. Add the Wait node after the node that calls the API, and connect it back to the Loop Over Items node. Refer to Wait for information on how to configure the node.

For example, you can use this pattern to handle rate limits when calling the OpenAI API.

Handle rate limits in the HTTP Request node

The HTTP Request node has built-in settings for handling rate limits and large amounts of data.

Batch requests

Use the Batching option to split the input into multiple smaller requests and introduce a pause between them. This is the equivalent of using Loop Over Items and Wait.

  1. In the HTTP Request node, select Add Option > Batching.
  2. Set Items per Batch: this is the number of input items to include in each request.
  3. Set Batch Interval (ms) to introduce a delay between requests. For example, if the API you're using allows one request per second, set Batch Interval (ms) to 1000 to allow a 1 second wait.

Paginate results

APIs paginate their results when they need to send more data than they can handle in a single response. For more information on pagination in the HTTP Request node, refer to HTTP Request node | Pagination.


OpenRouter credentials

URL: llms-txt#openrouter-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create an OpenRouter account.

Supported authentication methods

Refer to OpenRouter's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key

To generate your API Key:

  1. Log in to your OpenRouter account or create an account.
  2. Open your API keys page.
  3. Select Create new secret key to create an API key, optionally naming the key.
  4. Copy your key and add it as the API Key in n8n.

Refer to the OpenRouter Quick Start page for more information.


Webflow credentials

URL: llms-txt#webflow-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API access token
  • OAuth2

Refer to Webflow's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • A Site Access Token: Access tokens are site-specific. Go to your site's Site Settings > Apps & integrations > API access and select Generate API token. Refer to Get a Site Token for more information.

Using OAuth2

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you need to configure OAuth2 from scratch, register an application in your workspace.

Use these settings for your application:

  • Copy the OAuth callback URL from n8n and add it as a Redirect URI in your application.
  • Once you've created your application, copy the Client ID and Client Secret and enter them in your n8n credential.
  • If you are using the Webflow Data API V1 (deprecated), enable the Legacy toggle. Otherwise, leave this inactive.

Refer to OAuth for more information on Webflow's OAuth web flow.


Populate a Pinecone vector database from a website

URL: llms-txt#populate-a-pinecone-vector-database-from-a-website

Contents:

  • Key features
  • Using the example

Use n8n to scrape a website, load the data into Pinecone, then query it using a chat workflow. This workflow uses the HTTP Request node to get website data, extracts the relevant content using the HTML node, then uses the Pinecone Vector Store node to send it to Pinecone.

View workflow file

To load the template into your n8n instance:

  1. Download the workflow JSON file.
  2. Open a new workflow in your n8n instance.
  3. Copy in the JSON, or select Workflow menu > Import from file....

The example workflows use Sticky Notes to guide you:

  • Yellow: notes and information.
  • Green: instructions to run the workflow.
  • Orange: you need to change something to make the workflow work.
  • Blue: draws attention to a key feature of the example.

Google Drive Shared Drive operations

URL: llms-txt#google-drive-shared-drive-operations

Contents:

  • Create a shared drive
    • Options
  • Delete a shared drive
  • Get a shared drive
    • Options
  • Get many shared drives
    • Options
  • Update a shared drive
    • Update Fields

Use this operation to create, delete, get, and update shared drives in Google Drive. Refer to Google Drive for more information on the Google Drive node itself.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Create a shared drive

Use this operation to create a new shared drive.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.

  • Resource: Select Shared Drive.

  • Operation: Select Create.

  • Name: The name to use for the new shared drive.

  • Capabilities: The capabilities to set for the new shared drive (see REST Resources: drives | Google Drive for more details):

    • Can Add Children: Whether the current user can add children to folders in this shared drive.
    • Can Change Copy Requires Writer Permission Restriction: Whether the current user can change the copyRequiresWriterPermission restriction on this shared drive.
    • Can Change Domain Users Only Restriction: Whether the current user can change the domainUsersOnly restriction on this shared drive.
    • Can Change Drive Background: Whether the current user can change the background on this shared drive.
    • Can Change Drive Members Only Restriction: Whether the current user can change the driveMembersOnly restriction on this shared drive.
    • Can Comment: Whether the current user can comment on files in this shared drive.
    • Can Copy: Whether the current user can copy files in this shared drive.
    • Can Delete Children: Whether the current user can delete children from folders in this shared drive.
    • Can Delete Drive: Whether the current user can delete this shared drive. This operation may still fail if there are items not in the trash in the shared drive.
    • Can Download: Whether the current user can download files from this shared drive.
    • Can Edit: Whether the current user can edit files from this shared drive.
    • Can List Children: Whether the current user can list the children of folders in this shared drive.
    • Can Manage Members: Whether the current user can add, remove, or change the role of members of this shared drive.
    • Can Read Revisions: Whether the current user can read the revisions resource of files in this shared drive.
    • Can Rename Drive: Whether the current user can rename this shared drive.
    • Can Share: Whether the current user can share files or folders in this shared drive.
    • Can Trash Children: Whether the current user can trash children from folders in this shared drive.
  • Color RGB: The color of this shared drive as an RGB hex string.

  • Hidden: Whether to hide this shared drive in the default view.

  • Restrictions: Restrictions to add to this shared drive (see REST Resources: drives | Google Drive for more details):

    • Admin Managed Restrictions: When enabled, restrictions here will override the similarly named fields to true for any file inside of this shared drive.
    • Copy Requires Writer Permission: Whether the options to copy, print, or download files inside this shared drive should be disabled for readers and commenters.
    • Domain Users Only: Whether to restrict access to this shared drive and items inside this shared drive to users of the domain to which this shared drive belongs.
    • Drive Members Only: Whether to restrict access to items inside this shared drive to its members.

Refer to the Method: drives.insert | Google Drive API documentation for more information.

Delete a shared drive

Use this operation to delete a shared drive.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.
  • Resource: Select Shared Drive.
  • Operation: Select Delete.
  • Shared Drive: Choose the shared drive you want to delete.
    • Select From list to choose the title from the dropdown list, By URL to enter the URL of the drive, or By ID to enter the driveId.
    • You can find the driveId in the URL for the shared Google Drive: https://drive.google.com/drive/u/0/folders/driveID.

Refer to the Method: drives.delete | Google Drive API documentation for more information.

Get a shared drive

Use this operation to get a shared drive.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.

  • Resource: Select Shared Drive.

  • Operation: Select Get.

  • Shared Drive: Choose the shared drive you want to get.

    • Select From list to choose the title from the dropdown list, By URL to enter the URL of the drive, or By ID to enter the driveId.
    • You can find the driveId in the URL for the shared Google Drive: https://drive.google.com/drive/u/0/folders/driveID.
  • Use Domain Admin Access: Whether to issue the request as a domain administrator. When enabled, grants the requester access if they're an administrator of the domain to which the shared drive belongs.

Refer to the Method: drives.get | Google Drive API documentation for more information.

Get many shared drives

Use this operation to get many shared drives.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.

  • Resource: Select Shared Drive.

  • Operation: Select Get Many.

  • Return All: Choose whether to return all results or only up to a given limit.

  • Limit: The maximum number of items to return when Return All is disabled.

  • Shared Drive: Choose the shared drive you want to get.

    • Select From list to choose the title from the dropdown list, By URL to enter the URL of the drive, or By ID to enter the driveId.
    • You can find the driveId in the URL for the shared Google Drive: https://drive.google.com/drive/u/0/folders/driveID.
  • Query: The query string to use to search for shared drives. See Search for shared drives | Google Drive for more information.

  • Use Domain Admin Access: Whether to issue the request as a domain administrator. When enabled, grants the requester access if they're an administrator of the domain to which the shared drive belongs.

Refer to the Method: drives.list | Google Drive API documentation for more information.

Update a shared drive

Use this operation to update a shared drive.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.

  • Resource: Select Shared Drive.

  • Operation: Select Update.

  • Shared Drive: Choose the shared drive you want to update.

    • Select From list to choose the drive from the dropdown list, By URL to enter the URL of the drive, or By ID to enter the driveId.
    • You can find the driveId in the URL for the shared Google Drive: https://drive.google.com/drive/u/0/folders/driveID.
  • Color RGB: The color of this shared drive as an RGB hex string.

  • Name: The updated name for the shared drive.

  • Restrictions: Restrictions for this shared drive (see REST Resources: drives | Google Drive for more details):

    • Admin Managed Restrictions: When enabled, restrictions here will override the similarly named fields to true for any file inside of this shared drive.
    • Copy Requires Writer Permission: Whether the options to copy, print, or download files inside this shared drive should be disabled for readers and commenters.
    • Domain Users Only: Whether to restrict access to this shared drive and items inside this shared drive to users of the domain to which this shared drive belongs.
    • Drive Members Only: Whether to restrict access to items inside this shared drive to its members.

Refer to the Method: drives.update | Google Drive API documentation for more information.


Workflow 3: Monitoring workflow errors

URL: llms-txt#workflow-3:-monitoring-workflow-errors

Last but not least, let's help Nathan know if there are any errors running the workflow.

To accomplish this task, create an Error workflow that monitors the main workflow:

  1. Create a new workflow.

  2. Add an Error Trigger node (and execute it as a test).

  3. Connect a Discord node to the Error Trigger node and configure these fields:

  • Webhook URL: The Discord URL that you received in the email from n8n when you signed up for this course.

  • Text: "The workflow {workflow name} failed, with the error message: {execution error message}. Last node executed: {name of the last executed node}. Check this workflow execution here: {execution URL} My Unique ID: " followed by the unique ID emailed to you when you registered for this course.

Note that you need to replace the text in curly brackets {} with expressions that take the respective information from the Error Trigger node.

  4. Execute the Discord node.

  5. Set the newly created workflow as the Error Workflow for the main workflow you created in the previous lesson.

The workflow should look like this:

Workflow 3 for monitoring workflow errors

  • What fields does the Error Trigger node return?
  • What information about the execution does the Error Trigger node return?
  • What information about the workflow does the Error Trigger node return?
  • What's the expression to reference the workflow name?

APITemplate.io credentials

URL: llms-txt#apitemplate.io-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create an APITemplate.io account.

Supported authentication methods

Refer to APITemplate.io's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: Once you've created an APITemplate.io account, go to API Integration to copy the API Key.

Custom variables

URL: llms-txt#custom-variables

Contents:

  • Create variables
  • Edit and delete variables
  • Use variables in workflows

  • Available on Self-hosted Enterprise and Pro Cloud plans.
  • Only instance owners and admins can create variables.

Custom variables are read-only variables that you can use to store and reuse values in n8n workflows.

Variable scope and availability

  • Global variables are available to everyone on your n8n instance, across all projects.
  • Project-scoped variables are available only within the specific project they're created in.
  • Project-scoped variables are available in 1.118.0 and above. Previous versions only support global variables accessible from the left side menu.

Create variables

You can access the Variables tab from either the overview page or a specific project.

To create a new variable:

  1. On the Variables tab, select Add Variable.
  2. Enter a Key and Value. The maximum key length is 50 characters, and the maximum value length is 1000 characters. n8n limits the characters you can use in the key and value to lowercase and uppercase letters, numbers, and underscores (A-Z, a-z, 0-9, _).
  3. Select the Scope (only available when creating from the overview page):
    • Global: The variable is available across all projects in the n8n instance.
    • Project: The variable is available only within a specific project (you can select which project).
    • When creating from a project page, the scope is automatically set to that project.
  4. Select Save. The variable is now available for use in workflows according to its scope.

Edit and delete variables

To edit or delete a variable:

  1. On the Variables tab, hover over the variable you want to change.
  2. Select Edit or Delete.

Use variables in workflows

You can access variables in the Code node and in expressions:

All variables are strings.
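
For example, assuming you've created a variable with the key api_base_url (a hypothetical name), you could reference it like this:

// In an expression field
{{ $vars.api_base_url }}

// In the Code node (JavaScript)
const baseUrl = $vars.api_base_url;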

During workflow execution, n8n replaces the variables with the variable value. If the variable has no value, n8n treats its value as undefined. Workflows don't automatically fail in this case.

When a project-scoped variable has the same key as a global variable, the project-scoped variable value takes precedence and overrides the global variable value within that project's workflows.

Variables are read-only. You must use the UI to change the values. If you need to set and access custom data within your workflow, use Workflow static data.

Examples:

Example 1 (unknown):

// Access a variable
$vars.<variable-name>

Set up SAML

URL: llms-txt#set-up-saml

Contents:

  • Enable SAML
  • Generic IdP setup
  • Setup resources for common IdPs

  • Available on Enterprise plans.
  • You need to be an instance owner or admin to enable and configure SAML.

Enable SAML

  1. In n8n, go to Settings > SSO.
  2. Make a note of the n8n Redirect URL and Entity ID.
    1. Optional: if your IdP allows you to set up SAML from imported metadata, navigate to the Entity ID URL and save the XML.
    2. Optional: if you are running n8n behind a load balancer make sure you have N8N_EDITOR_BASE_URL configured.
  3. Set up SAML with your IdP (identity provider). You need the redirect URL and entity ID. You may also need an email address and name for the IdP user.
  4. After completing setup in your IdP, load the metadata XML into n8n. You can use a metadata URL or raw XML:
    1. Metadata URL: Copy the metadata URL from your IdP into the Identity Provider Settings field in n8n.
    2. Raw XML: Download the metadata XML from your IdP, toggle Identity Provider Settings to XML, then copy the raw XML into Identity Provider Settings.
  5. Select Save settings.
  6. Select Test settings to check your SAML setup is working.
  7. Set SAML 2.0 to Activated.

Note that n8n currently doesn't support the SAML POST binding. Configure your IdP to use the HTTP-Redirect binding instead.

Generic IdP setup

The steps to configure the IdP vary depending on your chosen IdP. These are some common setup tasks:

  • Create an app for n8n in your IdP.

  • Map n8n attributes to IdP attributes:

Name Name format Value (IdP side)
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress URI Reference User email
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/firstname URI Reference User First Name
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/lastname URI Reference User Last Name
http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn URI Reference User Email

Setup resources for common IdPs

Documentation links for common IdPs.

IdP Documentation
Auth0 Configure Auth0 as SAML Identity Provider: Manually configure SSO integrations
Authentik Applications and the SAML Provider
Azure AD SAML authentication with Azure Active Directory
JumpCloud How to setup SAML (SSO) applications with JumpCloud (using Zoom as an example)
Keycloak Choose a Getting Started guide depending on your hosting.
Okta n8n provides a Workforce Identity setup guide
PingIdentity PingOne SSO

GitHub node

URL: llms-txt#github-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the GitHub node to automate work in GitHub, and integrate GitHub with other applications. n8n has built-in support for a wide range of GitHub features, including creating, updating, deleting, and editing files, repositories, issues, releases, and users.

On this page, you'll find a list of operations the GitHub node supports and links to more resources.

Refer to GitHub credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • File
    • Create
    • Delete
    • Edit
    • Get
    • List
  • Issue
    • Create
    • Create Comment
    • Edit
    • Get
    • Lock
  • Organization
    • Get Repositories
  • Release
    • Create
    • Delete
    • Get
    • Get Many
    • Update
  • Repository
    • Get
    • Get Issues
    • Get License
    • Get Profile
    • Get Pull Requests
    • List Popular Paths
    • List Referrers
  • Review
    • Create
    • Get
    • Get Many
    • Update
  • User
    • Get Repositories
    • Invite
  • Workflow
    • Disable
    • Dispatch
    • Enable
    • Get
    • Get Usage
    • List

Templates and examples

Back Up Your n8n Workflows To Github

View template details

Building RAG Chatbot for Movie Recommendations with Qdrant and Open AI

View template details

Chat with GitHub API Documentation: RAG-Powered Chatbot with Pinecone & OpenAI

View template details

Browse GitHub integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


CrateDB credentials

URL: llms-txt#cratedb-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using account connection

You can use these credentials to authenticate the following nodes:

An available instance of CrateDB.

Supported authentication methods

Refer to CrateDB's documentation for more information about the service.

Using account connection

To configure this credential, you'll need:

Refer to the Connect to a CrateDB cluster documentation for detailed instructions on these fields and their default values.


Facebook Trigger User object

URL: llms-txt#facebook-trigger-user-object

Contents:

  • Trigger configuration
  • Related resources

Use this object to receive updates when changes to a user's profile occur. Refer to Facebook Trigger for more information on the trigger itself.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Facebook Trigger integrations page.

Trigger configuration

To configure the trigger with this Object:

  1. Select the Credential to connect with. Select an existing or create a new Facebook App credential.
  2. Enter the APP ID of the app connected to your credential. Refer to the Facebook App credential documentation for more information.
  3. Select User as the Object.
  4. Field Names or IDs: By default, the node will trigger on all the available events using the * wildcard filter. If you'd like to limit the events, use the X to remove the star and use the dropdown or an expression to select the updates you're interested in.
  5. In Options, choose whether to turn on the toggle to Include Values. When turned on, the node includes the new values for the changes.

Refer to Meta's User Graph API reference for more information.


GetResponse credentials

URL: llms-txt#getresponse-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key
  • Using OAuth2
  • Configure OAuth2 credentials for a local environment

You can use these credentials to authenticate the following nodes:

Create a GetResponse account.

Supported authentication methods

Refer to GetResponse's API documentation for more information about the service.

Using API key

To configure this credential, you'll need:

  • An API Key: To view or generate an API key, go to Integrations and API > API. Refer to the GetResponse Help Center for more detailed instructions.

Using OAuth2

To configure this credential, you'll need:

When you register your application, copy the OAuth Redirect URL from n8n and add it as the Redirect URL in GetResponse.

Redirect URL with localhost

The Redirect URL should be a URL in your domain, for example: https://mytemplatemaker.example.com/gr_callback. GetResponse doesn't accept a localhost callback URL. Refer to the FAQs to configure the credentials for the local environment.

Configure OAuth2 credentials for a local environment

GetResponse doesn't accept the localhost callback URL. Follow the steps below to configure the OAuth credentials for a local environment:

  1. Use ngrok to expose the local server running on port 5678 to the internet. In your terminal, run the following command:

  2. Run the following command in a new terminal. Replace <YOUR-NGROK-URL> with the URL that you got from the previous step.

  3. Follow the Using OAuth2 instructions to configure your credentials, using this URL as your Redirect URL.

Examples:

Example 1 (unknown):

ngrok http 5678

Example 2 (unknown):

export WEBHOOK_URL=<YOUR-NGROK-URL>

NASA node

URL: llms-txt#nasa-node

Contents:

  • Operations
  • Templates and examples

Use the NASA node to automate work in NASA, and integrate NASA with other applications. n8n has built-in support for a wide range of NASA features, including retrieving imagery and data.

On this page, you'll find a list of operations the NASA node supports and links to more resources.

Refer to NASA credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Astronomy Picture of the Day
    • Get the Astronomy Picture of the Day
  • Asteroid Neo-Feed
    • Retrieve a list of asteroids based on their closest approach date to Earth
  • Asteroid Neo-Lookup
    • Look up an asteroid based on its NASA SPK-ID
  • Asteroid Neo-Browse
    • Browse the overall asteroid dataset
  • DONKI Coronal Mass Ejection
    • Retrieve DONKI coronal mass ejection data
  • DONKI Interplanetary Shock
    • Retrieve DONKI interplanetary shock data
  • DONKI Solar Flare
    • Retrieve DONKI solar flare data
  • DONKI Solar Energetic Particle
    • Retrieve DONKI solar energetic particle data
  • DONKI Magnetopause Crossing
    • Retrieve data on DONKI magnetopause crossings
  • DONKI Radiation Belt Enhancement
    • Retrieve DONKI radiation belt enhancement data
  • DONKI High Speed Stream
    • Retrieve DONKI high speed stream data
  • DONKI WSA+EnlilSimulation
    • Retrieve DONKI WSA+EnlilSimulation data
  • DONKI Notifications
    • Retrieve DONKI notifications data
  • Earth Imagery
    • Retrieve Earth imagery
  • Earth Assets
    • Retrieve Earth assets

Templates and examples

Set credentials dynamically using expressions

View template details

Send the astronomy picture of the day daily to a Telegram channel

View template details

Retrieve NASA Space Weather & Asteroid Data with GPT-4o-mini and Telegram

View template details

Browse NASA integration templates, or search all templates


ERPNext credentials

URL: llms-txt#erpnext-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key
  • How to find the subdomain of an ERPNext cloud-hosted account

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to ERPNext's documentation for more information about the service.

Refer to ERPNext's developer documentation for more information about working with the framework.

To configure this credential, you'll need:

  • An API Key: Generate this from your own ERPNext user account in Settings > My Settings > API Access.
  • An API Secret: Generated with the API key.
  • Your ERPNext Environment:
    • For Cloud-hosted:
      • Your ERPNext Subdomain: Refer to the FAQs
      • Your Domain: Choose between erpnext.com and frappe.cloud.
    • For Self-hosted:
      • The fully qualified Domain where you host ERPNext
  • Choose whether to Ignore SSL Issues: When selected, n8n will connect even if SSL certificate validation is unavailable.

If you are an ERPNext System Manager, you can also generate API keys and secrets for other users. Refer to the ERPNext Adding Users documentation for more information.

How to find the subdomain of an ERPNext cloud-hosted account

You can find your ERPNext subdomain by reviewing the address bar of your browser. The string between https:// and either .erpnext.com or frappe.cloud is your subdomain.

For example, if the URL in the address bar is https://n8n.erpnext.com, the subdomain is n8n.


Autopilot credentials

URL: llms-txt#autopilot-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Autopilot branding change

Autopilot has become Ortto. The Autopilot credentials and nodes are only compatible with Autopilot, not the new Ortto API.

Create an Autopilot account.

Supported authentication methods

Refer to Autopilot's API documentation for more information about the service.

To configure this credential, you'll need:


n8n metadata

URL: llms-txt#n8n-metadata

Methods for working with n8n metadata.

  • Access to n8n environment variables for self-hosted n8n.
  • Metadata about workflows, executions, and nodes.
  • Information about instance Variables and External secrets.

You can use Python in the Code node. It isn't available in expressions.

JavaScript syntax:

Method Description
$env Contains n8n instance configuration environment variables.
$execution.customData Set and get custom execution data. Refer to Custom executions data for more information.
$execution.id The unique ID of the current workflow execution.
$execution.mode Whether the execution was triggered automatically, or by manually running the workflow. Possible values are test and production.
$execution.resumeUrl The webhook URL to call to resume a workflow waiting at a Wait node.
$getWorkflowStaticData(type) Gives access to the static workflow data. Static data doesn't persist when testing workflows. The workflow must be active and called by a trigger or webhook to save static data. View an example.
$("<node-name>").isExecuted Check whether a node has already executed.
$itemIndex The index of an item in a list of items.
$nodeVersion Get the version of the current node.
$prevNode.name The name of the node that the current input came from. When using the Merge node, note that $prevNode always uses the first input connector.
$prevNode.outputIndex The index of the output connector that the current input came from. Use this when the previous node had multiple outputs (such as an If or Switch node). When using the Merge node, note that $prevNode always uses the first input connector.
$prevNode.runIndex The run of the previous node that generated the current input. When using the Merge node, note that $prevNode always uses the first input connector.
$runIndex How many times n8n has executed the current node. Zero-based (the first run is 0, the second is 1, and so on).
$secrets Contains information about your External secrets setup.
$vars Contains the Variables available in the active environment.
$version The node version.
$workflow.active Whether the workflow is active (true) or not (false).
$workflow.id The workflow ID.
$workflow.name The workflow name.
Python syntax (Code node only):

Method Description
_items Contains incoming items in "Run once for all items" mode.
_item Contains the item being iterated on in "Run once for each item" mode.
Method Description
_env Contains n8n instance configuration environment variables.
_execution.customData Set and get custom execution data. Refer to Custom executions data for more information.
_execution.id The unique ID of the current workflow execution.
_execution.mode Whether the execution was triggered automatically, or by manually running the workflow. Possible values are test and production.
_execution.resumeUrl The webhook URL to call to resume a workflow waiting at a Wait node.
_getWorkflowStaticData(type) Gives access to the static workflow data. Static data doesn't persist when testing workflows. The workflow must be active and called by a trigger or webhook to save static data. View an example.
_("<node-name>").isExecuted Check whether a node has already executed.
_nodeVersion Get the version of the current node.
_prevNode.name The name of the node that the current input came from. When using the Merge node, note that _prevNode always uses the first input connector.
_prevNode.outputIndex The index of the output connector that the current input came from. Use this when the previous node had multiple outputs (such as an If or Switch node). When using the Merge node, note that _prevNode always uses the first input connector.
_prevNode.runIndex The run of the previous node that generated the current input. When using the Merge node, note that _prevNode always uses the first input connector.
_runIndex How many times n8n has executed the current node. Zero-based (the first run is 0, the second is 1, and so on).
_secrets Contains information about your External secrets setup.
_vars Contains the Variables available in the active environment.
_workflow.active Whether the workflow is active (true) or not (false).
_workflow.id The workflow ID.
_workflow.name The workflow name.
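As a quick illustration, here's a minimal sketch of a JavaScript Code node (Run Once for All Items mode) that uses a few of these methods; the meta field name is just an example:

// Annotate every incoming item with execution and workflow metadata.
for (const item of $input.all()) {
  item.json.meta = {
    executionId: $execution.id,
    executionMode: $execution.mode,
    workflowName: $workflow.name,
    previousNode: $prevNode.name,
    runIndex: $runIndex,
  };
}
return $input.all();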

Postgres Trigger node

URL: llms-txt#postgres-trigger-node

Contents:

  • Events
  • Related resources

Use the Postgres Trigger node to respond to events in Postgres and integrate Postgres with other applications. n8n has built-in support for responding to insert, update, and delete events.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Postgres Trigger integrations page.

You can configure how the node listens for events.

  • Select Listen and Create Trigger Rule, then choose the events to listen for:
    • Insert
    • Update
    • Delete
  • Select Listen to Channel, then enter a channel name that the node should monitor.

n8n provides an app node for Postgres. You can find the node docs here.

View example workflows and related content on n8n's website.


Azure Cosmos DB credentials

URL: llms-txt#azure-cosmos-db-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API Key
  • Common issues
    • Need admin approval

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Azure Cosmos DB's API documentation for more information about the service.

To configure this credential, you'll need:

  • An Account: The name of your Azure Cosmos DB account.
  • A Key: A key for your Azure Cosmos DB account. Select Overview > Keys in the Azure portal for your Azure Cosmos DB. You can use either of the two account keys for this purpose.
  • A Database: The name of the Azure Cosmos DB database to connect to.

Refer to Get your primary key | Microsoft for more detailed steps.

Here are the known common errors and issues with Azure Cosmos DB credentials.

Need admin approval

When attempting to add credentials for a Microsoft 365 or Microsoft Entra account, users may see a message when following the procedure that this action requires admin approval.

This message appears when the account attempting to grant permissions for the credential is managed by Microsoft Entra. In order to issue the credential, the administrator account needs to grant permission to the user (or "tenant") for that application.

The procedure for this is covered in the Microsoft Entra documentation.


Schedule Trigger node common issues

URL: llms-txt#schedule-trigger-node-common-issues

Contents:

  • Invalid cron expression
  • Scheduled workflows run at the wrong time
    • Adjust the timezone globally
    • Adjust the timezone for an individual workflow
    • Variables not working as expected
    • Changing the trigger interval

Here are some common errors and issues with the Schedule Trigger node and steps to resolve or troubleshoot them.

Invalid cron expression

This error occurs when you set Trigger Interval to Custom (Cron) and n8n doesn't understand your cron expression. This may mean that there is a mistake in your cron expression or that you're using an incompatible syntax.

To debug, check the following:

Scheduled workflows run at the wrong time

If the Schedule Trigger node runs at the wrong time, it may mean that you need to adjust the time zone n8n uses.

Adjust the timezone globally

If you're using n8n Cloud, follow the instructions on the set the Cloud instance timezone page to ensure that n8n executes in sync with your local time.

If you're self hosting, set your global timezone using the GENERIC_TIMEZONE environment variable.

Adjust the timezone for an individual workflow

To set the timezone for an individual workflow:

  1. Open the workflow on the canvas.
  2. Select the Three dots icon in the upper-right corner.
  3. Select Settings.
  4. Change the Timezone setting.
  5. Select Save.

Variables not working as expected

While variables can be used in the Schedule Trigger, their values only get evaluated when the workflow is activated. After activating the workflow, you can alter a variable's value in the settings, but it won't change how often the workflow runs. To work around this, you must stop and then re-activate the workflow to apply the updated variable value.

Changing the trigger interval

You can update the scheduled trigger interval at any time but it only gets updated when the workflow is activated. If you change the trigger interval after the workflow is active, the changes won't take effect until you stop and then re-activate the workflow.

Also, the schedule begins from the time when you activate the workflow. For example, suppose you originally set a schedule of every hour, with the next execution due at 12:00. If you then change it to a two-hour schedule and re-activate the workflow at 11:30, the next execution will be at 13:30, two hours after you re-activated it.


AWS DynamoDB node

URL: llms-txt#aws-dynamodb-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the AWS DynamoDB node to automate work in AWS DynamoDB, and integrate AWS DynamoDB with other applications. n8n has built-in support for a wide range of AWS DynamoDB features, including creating, reading, updating, and deleting items and records in a database.

On this page, you'll find a list of operations the AWS DynamoDB node supports and links to more resources.

Refer to AWS credentials for guidance on setting up authentication.

  • Item
  • Create a new record, or update the current one if it already exists (upsert/put)
  • Delete an item
  • Get an item
  • Get all items

Templates and examples

Browse AWS DynamoDB integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Design your node's user interface

URL: llms-txt#design-your-node's-user-interface

Contents:

  • Design guidance
  • Standards
    • UI text style
    • UI text terminology
    • Node naming conventions
    • Showing and hiding fields
    • Conventions by field type
    • Common patterns and exceptions

Most nodes are a GUI (graphical user interface) representation of an API. Designing the interface means finding a user-friendly way to represent API endpoints and parameters. Directly translating an entire API into form fields in a node may not result in a good user experience.

This document provides design guidance and standards to follow. These guidelines are the same as those used by n8n. This helps provide a smooth and consistent user experience for users mixing community and built-in nodes.

All nodes use n8n's node UI elements, so you don't need to consider style details such as colors, borders, and so on. However, it's still useful to go through a basic design process:

  • Review the documentation for the API you're integrating. Ask yourself:
    • What can you leave out?
    • What can you simplify?
    • Which parts of the API are confusing? How can you help users understand them?
  • Use a wireframe tool to try out your field layout. If you find your node has a lot of fields and is getting confusing, consider n8n's guidance on showing and hiding fields.
UI text style

Element Style
Drop-down value Title case
Hint Sentence case
Info box Sentence case. Don't use a period (.) for one-sentence information. Always use a period if there's more than one sentence. This field can include links, which should open in a new tab.
Node name Title case
Parameter name Title case
Subtitle Title case
Tooltip Sentence case. Don't use a period (.) for one-sentence tooltips. Always use a period if there's more than one sentence. This field can include links, which should open in a new tab.

UI text terminology

  • Use the same terminology as the service the node connects to. For example, a Notion node should refer to Notion blocks, not Notion paragraphs, because Notion calls these elements blocks. There are exceptions to this rule, usually to avoid technical terms (for example, refer to the guidance on name and description for upsert operations).
  • Sometimes a service has different terms for something in its API and in its GUI. Use the GUI language in your node, as this is what most users are familiar with. If you think some users may need to refer to the service's API docs, consider including this information in a hint.
  • Don't use technical jargon when there are simpler alternatives.
  • Be consistent when naming things. For example, choose one of directory or folder then stick to it.

Node naming conventions

  • If a node is a trigger node, the displayed name should have 'Trigger' at the end, with a space before it. Correct: Shopify Trigger. Incorrect: ShopifyTrigger, Shopify trigger.
  • Don't include 'node' in the name. Correct: Asana. Incorrect: Asana Node, Asana node.

Showing and hiding fields

Fields can either be:

  • Displayed when the node opens: use this for resources and operations, and required fields.
  • Hidden in the Optional fields section until a user clicks on that section: use this for optional fields.

Progressively disclose complexity: hide a field until any earlier fields it depends on have values. For example, if you have a Filter by date toggle, and a Date to filter by datepicker, don't display Date to filter by until the user enables Filter by date.
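For example, a node's properties can express this with displayOptions. The sketch below is illustrative rather than a complete node definition; the parameter names (filterByDate, dateToFilterBy) are assumptions:

const properties = [
  {
    displayName: 'Filter by Date',
    name: 'filterByDate',
    type: 'boolean',
    default: false,
    description: 'Whether to only return records created after a given date',
  },
  {
    displayName: 'Date to Filter By',
    name: 'dateToFilterBy',
    type: 'dateTime',
    default: '',
    // Only displayed once the toggle above is enabled
    displayOptions: {
      show: {
        filterByDate: [true],
      },
    },
  },
];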

Conventions by field type

n8n automatically displays credential fields as the top fields in the node.

Resources and operations

APIs usually involve doing something to data. For example, "get all tasks." In this example, "task" is the resource, and "get all" is the operation.

When your node has this resource and operation pattern, your first field should be Resource, and your second field should be Operation.

  • Most important to least important.

  • Scope: from broad to narrow. For example, if you have fields for Document, Page, and Text to insert, put them in that order.

  • Order fields alphabetically. To group similar things together, you can rename them. For example, rename Email and Secondary Email to Email (primary) and Email (secondary).

  • If an optional field has a default value that the node uses when the value isn't set, load the field with that value. Explain this in the field description. For example, Defaults to false.

  • Connected fields: if one optional field depends on another, bundle them together. They should both be under a single option that shows both fields when selected.

  • If you have a lot of optional fields, consider grouping them by theme.

There are five types of help built in to the GUI:

  • Info boxes: yellow boxes that appear between fields. Refer to UI elements | Notice for more information.
  • Use info boxes for essential information. Don't over-use them. By making them rare, they stand out more and grab the user's attention.
  • Parameter hints: lines of text displayed beneath a user input field. Use this when there's something the user needs to know, but an info box would be excessive.
  • Node hints: provide help in the input panel, output panel, or node details view. Refer to UI elements | Hints for more information.
  • Tooltips: callouts that appear when the user hovers over the tooltip icon. Use tooltips for extra information that the user might need.
  • You don't have to provide a tooltip for every field. Only add one if it contains useful information.
  • When writing tooltips, think about what the user needs. Don't just copy-paste API parameter descriptions. If the description doesn't make sense, or has errors, improve it.
  • Placeholder text: n8n can display placeholder text in a field where the user hasn't entered a value. This can help the user know what's expected in that field.

Info boxes, hints, and tooltips can contain links to more information.

Make it clear which fields are required.

Add validation rules to fields if possible. For example, check for valid email patterns if the field expects an email.

When displaying errors, make sure only the main error message displays in the red error title. More information should go in Details.

Refer to Node Error Handling for more information.

  • Tooltips for binary states should start with something like "Whether to ...".

  • You may need a list rather than a toggle:

    • Use toggles when it's clear what happens in a false state. For example, Simplify Output?. The alternative (don't simplify output) is clear.
    • Use a dropdown list with named options when you need more clarity. For example, Append?. What happens if you don't append is unclear (it could be that nothing happens, or information is overwritten, or discarded).
  • Set default values for lists whenever possible. The default should be the most-used option.

  • Sort list options alphabetically.

  • You can include list option descriptions. Only add descriptions if they provide useful information.

  • If there is an option like All, use the word All, not shorthand like *.

Trigger node inputs

When a trigger node has a parameter for specifying which events to trigger on:

  • Name the parameter Trigger on.
  • Don't include a tooltip.

Set subtitles based on the values of the main parameters. For an example, see Example 1 at the end of this page.

When performing an operation on a specific record, such as "update a task comment", you need a way to specify which record you want to change.

  • Wherever possible, provide two ways to specify a record:
    • By choosing from a pre-populated list. You can generate this list using the loadOptions parameter (a sketch follows this list). Refer to Base files for more information.
    • By entering an ID.
  • Name the field <Record name> name or ID. For example, Workspace Name or ID. Add a tooltip saying "Choose a name from the list, or specify an ID using an expression." Link to n8n's Expressions documentation.
  • Build your node so that it can handle users providing more information than required. For example:
    • If you need a relative path, handle the user pasting in the absolute path.
    • If the user needs to get an ID from a URL, handle the user pasting in the entire URL.
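A sketch of such a field for a programmatic-style node, using loadOptions to populate the list; the loadOptions method name (getWorkspaces) is hypothetical:

const workspaceField = {
  displayName: 'Workspace Name or ID',
  name: 'workspaceId',
  type: 'options',
  typeOptions: {
    // 'getWorkspaces' is a hypothetical loadOptions method defined elsewhere in the node
    loadOptionsMethod: 'getWorkspaces',
  },
  default: '',
  description: 'Choose a name from the list, or specify an ID using an expression',
};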

Dates and timestamps

n8n uses ISO timestamp strings for dates and times. Make sure that any date or timestamp field you add supports all ISO 8601 formats.
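For example, each of the following ISO 8601 forms should parse cleanly. This sketch uses Luxon's DateTime, which n8n makes available in the Code node:

// A date only, a UTC timestamp, and a timestamp with a UTC offset.
const dateOnly = DateTime.fromISO('2025-03-14');
const utcTimestamp = DateTime.fromISO('2025-03-14T09:30:00Z');
const withOffset = DateTime.fromISO('2025-03-14T09:30:00+02:00');
return [{ json: { dateOnly: dateOnly.toISO(), utcTimestamp: utcTimestamp.toISO(), withOffset: withOffset.toISO() } }];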

You should support two ways of specifying the content of a text input that expects JSON (a sketch follows this list):

  • Typing JSON directly into the text input: you need to parse the resulting string into a JSON object.
  • Using an expression that returns JSON.
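A minimal sketch of that normalization, assuming the raw parameter value is available as a variable named value (parseJsonParameter is an illustrative helper, not an n8n API):

// Accept either an object (from an expression) or a JSON string typed by the user.
function parseJsonParameter(value) {
  if (value !== null && typeof value === 'object') {
    return value; // an expression already returned JSON
  }
  try {
    return JSON.parse(value); // the user typed JSON into the text input
  } catch (error) {
    throw new Error(`The parameter isn't valid JSON: ${error.message}`);
  }
}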

Common patterns and exceptions

This section provides guidance on handling common design patterns, including some edge cases and exceptions to the main standards.

Simplify responses

APIs can return a lot of data that isn't useful. Consider adding a toggle that allows users to choose to simplify the response data:

  • Name: Simplify Response
  • Description: Whether to return a simplified version of the response instead of the raw data

Upsert operations

This should always be a separate operation with:

  • Name: Create or Update
  • Description: Create a new record, or update the current one if it already exists (upsert)

Boolean operators

n8n doesn't have good support for combining boolean operators, such as AND and OR, in the GUI. Whenever possible, provide options for all ANDs or all ORs.

For example, if you have a field called Must match to test whether values match, offer Any and All as separate options.

Source keys or binary properties

Binary data is file data, such as spreadsheets or images. In n8n, you need a named key to reference the data. Don't use the terms "binary data" or "binary property" for this field. Instead, use a more descriptive name: Input data field name / Output data field name.

Examples:

Example 1 (unknown):

subtitle: '={{$parameter["operation"] + ": " + $parameter["resource"]}}',

Adalo node

URL: llms-txt#adalo-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Adalo node to automate work in Adalo, and integrate Adalo with other applications. n8n has built-in support for a wide range of Adalo features, including creating, getting, updating, and deleting databases, records, and collections.

On this page, you'll find a list of operations the Adalo node supports and links to more resources.

Refer to Adalo credentials for guidance on setting up authentication.

  • Collection
    • Create
    • Delete
    • Get
    • Get Many
    • Update

Templates and examples

Browse Adalo integration templates, or search all templates

Refer to Adalo's documentation for more information on using Adalo. Their External Collections with APIs page gives more detail about what you can do with Adalo collections.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


SQL AI Agent node

URL: llms-txt#sql-ai-agent-node

Contents:

  • Node parameters
    • Data Source
    • Prompt
  • Node options
    • Ignored Tables
    • Include Sample Rows
    • Included Tables
    • Prefix Prompt
    • Suffix Prompt
    • Limit

n8n removed this functionality in February 2025.

The SQL Agent uses a SQL database as a data source. It can understand natural language questions, convert them into SQL queries, execute the queries, and present the results in a user-friendly format. This agent is valuable for building natural language interfaces to databases.

Refer to AI Agent for more information on the AI Agent node itself.

Configure the SQL Agent using the following parameters.

Choose the database to use as a data source for the node. Options include:

  • MySQL: Select this option to use a MySQL database.
    • Also select the Credential for MySQL.
  • SQLite: Select this option to use a SQLite database.
    • You must add a Read/Write File From Disk node before the Agent to read your SQLite file.
    • Also enter the Input Binary Field name of your SQLite file coming from the Read/Write File From Disk node.
  • Postgres: Select this option to use a Postgres database.
    • Also select the Credential for Postgres.

Postgres and MySQL Agents

If you are using Postgres or MySQL, this agent doesn't support the credential tunnel options.

Select how you want the node to construct the prompt (also known as the user's query or input from the chat).

  • Take from previous node automatically: If you select this option, the node expects an input from a previous node called chatInput.
  • Define below: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the Prompt (User Message) field.

Refine the SQL Agent node's behavior using these options:

If you'd like the node to ignore any tables from the database, enter a comma-separated list of tables you'd like it to ignore.

If left empty, the agent doesn't ignore any tables.

Include Sample Rows

Enter the number of sample rows to include in the prompt to the agent. Default is 3.

Sample rows help the agent understand the schema of the database, but they also increase the number of tokens used.

If you'd only like to include specific tables from the database, enter a comma-separated list of tables to include.

If left empty, the agent includes all tables.

Enter a message you'd like to send to the agent before the Prompt text. This initial message can provide more context and guidance to the agent about what it can and can't do, and how to format the response.

n8n fills this field with an example.

Enter a message you'd like to send to the agent after the Prompt text.

Available LangChain expressions:

  • {chatHistory}: A history of messages in this conversation, useful for maintaining context.
  • {input}: Contains the user prompt.
  • {agent_scratchpad}: Information to remember for the next iteration.

n8n fills this field with an example.

Enter the maximum number of results to return.

Templates and examples

Refer to the main AI Agent node's Templates and examples section.

For common questions or issues and suggested solutions, refer to Common issues.


Salesforce node

URL: llms-txt#salesforce-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported
  • Working with Salesforce custom fields

Use the Salesforce node to automate work in Salesforce, and integrate Salesforce with other applications. n8n has built-in support for a wide range of Salesforce features, including creating, updating, deleting, and getting accounts, attachments, cases, and leads, as well as uploading documents.

On this page, you'll find a list of operations the Salesforce node supports and links to more resources.

Refer to Salesforce credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Account
    • Add note to an account
    • Create an account
    • Create a new account, or update the current one if it already exists (upsert)
    • Get an account
    • Get all accounts
    • Returns an overview of an account's metadata.
    • Delete an account
    • Update an account
  • Attachment
    • Create an attachment
    • Delete an attachment
    • Get an attachment
    • Get all attachments
    • Returns an overview of an attachment's metadata.
    • Update an attachment
  • Case
    • Add a comment to a case
    • Create a case
    • Get a case
    • Get all cases
    • Returns an overview of a case's metadata
    • Delete a case
    • Update a case
  • Contact
    • Add lead to a campaign
    • Add note to a contact
    • Create a contact
    • Create a new contact, or update the current one if it already exists (upsert)
    • Delete a contact
    • Get a contact
    • Returns an overview of a contact's metadata
    • Get all contacts
    • Update a contact
  • Custom Object
    • Create a custom object record
    • Create a new record, or update the current one if it already exists (upsert)
    • Get a custom object record
    • Get all custom object records
    • Delete a custom object record
    • Update a custom object record
  • Document
    • Upload a document
  • Flow
    • Get all flows
    • Invoke a flow
  • Lead
    • Add lead to a campaign
    • Add note to a lead
    • Create a lead
    • Create a new lead, or update the current one if it already exists (upsert)
    • Delete a lead
    • Get a lead
    • Get all leads
    • Returns an overview of a lead's metadata
    • Update a lead
  • Opportunity
    • Add note to an opportunity
    • Create an opportunity
    • Create a new opportunity, or update the current one if it already exists (upsert)
    • Delete an opportunity
    • Get an opportunity
    • Get all opportunities
    • Returns an overview of an opportunity's metadata
    • Update an opportunity
  • Search
    • Execute a SOQL query that returns all the results in a single response
  • Task
    • Create a task
    • Delete a task
    • Get a task
    • Get all tasks
    • Returns an overview of a task's metadata
    • Update a task
  • User
    • Get a user
    • Get all users

Templates and examples

Create and update lead in Salesforce

View template details

Create Salesforce accounts based on Google Sheets data

View template details

Create Salesforce accounts based on Excel 365 data

View template details

Browse Salesforce integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

Working with Salesforce custom fields

To add custom fields to your request:

  1. Select Additional Fields > Add Field.
  2. In the dropdown, select Custom Fields.

You can then find and add your custom fields.


FileMaker credentials

URL: llms-txt#filemaker-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using database connection

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • Database connection

Refer to FileMaker's Data API Guide for more information about the service.

Using database connection

To configure this credential:

  1. Enter the Host name or IP address of your FileMaker Server.
  2. Enter the Database name. This should match the database name as it appears in the Databases list within FileMaker.
  3. Enter the user account Login for the account with the fmrest extended privilege. Refer to the previous Prerequisites section for more information.
  4. Enter the Password for that user account.

Action Network node

URL: llms-txt#action-network-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Action Network node to automate work in Action Network, and integrate Action Network with other applications. n8n has built-in support for a wide range of Action Network features, including creating, updating, and deleting events, people, tags, and signatures.

On this page, you'll find a list of operations the Action Network node supports, and links to more resources.

Refer to Action Network credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Attendance
    • Create
    • Get
    • Get All
  • Event
    • Create
    • Get
    • Get All
  • Person
    • Create
    • Get
    • Get All
    • Update
  • Person Tag
    • Add
    • Remove
  • Petition
    • Create
    • Get
    • Get All
    • Update
  • Signature
    • Create
    • Get
    • Get All
    • Update
  • Tag
    • Create
    • Get
    • Get All

Templates and examples

Browse Action Network integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Processing data with code

URL: llms-txt#processing-data-with-code

Contents:

  • Function

A function is a block of code designed to perform a certain task. In n8n, you can write custom JavaScript or Python code snippets to add, remove, and update the data you receive from a node.

The Code node gives you access to the incoming data and you can manipulate it. With this node you can create any function you want using JavaScript code.
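For example, a Code node set to Run Once for All Items could filter and reshape the incoming items like this (the total field is illustrative):

// Keep items with a total over 100 and add a derived field to each one.
const items = $input.all();

return items
  .filter((item) => item.json.total > 100)
  .map((item) => ({
    json: {
      ...item.json,
      totalWithTax: item.json.total * 1.2,
    },
  }));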


Pipedrive Trigger node

URL: llms-txt#pipedrive-trigger-node

Pipedrive is a cloud-based sales software company that aims to improve the productivity of businesses through the use of their software.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Pipedrive Trigger integrations page.


Clockify Trigger node

URL: llms-txt#clockify-trigger-node

Clockify is a free time tracker and timesheet app for tracking work hours across projects.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Clockify Trigger integrations page.

This node uses the workflow timezone setting to determine the starting time range for time entries. Configure the timezone in your Workflow Settings if you want this trigger node to retrieve the right time entries.


KoboToolbox credentials

URL: llms-txt#kobotoolbox-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token

You can use these credentials to authenticate the following nodes:

Create a KoboToolbox account.

Supported authentication methods

Refer to KoboToolbox's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Root URL: Enter the URL of the KoboToolbox server where you created your account. For the Global KoboToolbox Server, use https://kf.kobotoolbox.org. For the European Union KoboToolbox Server, use https://eu.kobotoolbox.org.
  • An API Token: Displayed in your Account Settings. Refer to Getting your API token for more information.

Recursive Character Text Splitter node

URL: llms-txt#recursive-character-text-splitter-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources

The Recursive Character Text Splitter node splits document data recursively, keeping paragraphs, then sentences, then words together for as long as possible.

On this page, you'll find the node parameters for the Recursive Character Text Splitter node, and links to more resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Chunk Size: Enter the number of characters in each chunk.
  • Chunk Overlap: Enter how much overlap to have between chunks. (A sketch illustrating both parameters follows this list.)
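To illustrate what these two parameters control, here's a sketch that calls LangChain's JavaScript splitter directly; the package name, sample values, and longDocumentText variable are assumptions, and the node handles all of this for you:

import { RecursiveCharacterTextSplitter } from '@langchain/textsplitters';

// 500-character chunks that overlap by 50 characters. By default the splitter
// tries paragraph breaks first, then line breaks, then spaces.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,
  chunkOverlap: 50,
});
const chunks = await splitter.splitText(longDocumentText);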

Templates and examples

Building Your First WhatsApp Chatbot

View template details

Scrape and summarize webpages with AI

View template details

Ask questions about a PDF using AI

View template details

Browse Recursive Character Text Splitter integration templates, or search all templates

Refer to LangChain's text splitter documentation and LangChain's recursively split by character documentation for more information about the service.

View n8n's Advanced AI documentation.


Declarative-style parameters

URL: llms-txt#declarative-style-parameters

Contents:

  • methods and loadOptions
  • routing
  • version

These are the parameters available for node base file of declarative-style nodes.

This document gives short code snippets to help understand the code structure and concepts. For a full walk-through of building a node, including real-world code examples, refer to Build a declarative-style node.

Refer to Standard parameters for parameters available to all nodes.

methods and loadOptions

Object | Optional

methods contains the loadOptions object. You can use loadOptions to query the service to get user-specific settings, then return them and render them in the GUI so the user can include them in subsequent queries. The object must include routing information for how to query the service, and output settings that define how to handle the returned options. For example:

routing

Object | Required

routing is an object used within an options array in operations and input field objects. It contains the details of an API call.

The code example below comes from the Declarative-style tutorial. It sets up an integration with a NASA API. It shows how to use requestDefaults to set up the basic API call details, and routing to add information for each operation.

version

Number or Array | Optional

If you have one version of your node, this can be a number. If you want to support more than one version, turn this into an array, containing numbers for each node version.

n8n supports two methods of node versioning, but declarative-style nodes must use the light versioning approach. Refer to Node versioning for more information.
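For example (version numbers here are illustrative):

// A node that supports a single version:
description: INodeTypeDescription = {
  // Other node info here
  version: 1,
};

// A node using light versioning to support several versions:
description: INodeTypeDescription = {
  // Other node info here
  version: [1, 1.1, 2],
};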

Examples:

Example 1 (unknown):

methods : {
	loadOptions: {
		routing: {
			request: {
				url: '/webhook/example-option-parameters',
				method: 'GET',
			},
			output: {
				postReceive: [
					{
						// When the returned data is nested under another property
						// Specify that property key
						type: 'rootProperty',
						properties: {
							property: 'responseData',
						},
					},
					{
						type: 'setKeyValue',
						properties: {
							name: '={{$responseItem.key}} ({{$responseItem.value}})',
							value: '={{$responseItem.value}}',
						},
					},
					{
						// If incoming data is an array of objects, sort alphabetically by key
						type: 'sort',
						properties: {
							key: 'name',
						},
					},
				],
			},
		},
	}
},

Example 2 (unknown):

description: INodeTypeDescription = {
  // Other node info here
  requestDefaults: {
			baseURL: 'https://api.nasa.gov',
			url: '',
			headers: {
				Accept: 'application/json',
				'Content-Type': 'application/json',
			},
		},
    properties: [
      // Resources here
      {
        displayName: 'Operation'
        // Other operation details
        options: [
          {
            name: 'Get'
            value: 'get',
            description: '',
            routing: {
              request: {
                method: 'GET',
                url: '/planetary/apod'
              }
            }
          }
        ]
      }
    ]
}

SolarWinds IPAM credentials

URL: llms-txt#solarwinds-ipam-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using Username & Password

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Supported authentication methods

  • Username & Password

Refer to SolarWinds IPAM's API documentation for more information about the service.

Using Username & Password

To configure this credential, you'll need a SolarWinds IPAM account and:

  • URL: The base URL of your SolarWinds IPAM server
  • Username: The username you use to access SolarWinds IPAM
  • Password: The password you use to access SolarWinds IPAM

Refer to SolarWinds IPAM's API documentation for more information about authenticating to the service.


Chargebee Trigger node

URL: llms-txt#chargebee-trigger-node

Contents:

  • Add webhook URL in Chargebee

Chargebee is a billing platform for subscription based SaaS and eCommerce businesses. Chargebee integrates with payment gateways to let you automate recurring payment collection along with invoicing, taxes, accounting, email notifications, SaaS Metrics and customer management.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Chargebee Trigger integrations page.

Add webhook URL in Chargebee

To add a Webhook URL in Chargebee:

  1. Open your Chargebee dashboard.
  2. Go to Settings > Configure Chargebee.
  3. Scroll down and select Webhooks.
  4. Select the Add Webhook button.
  5. Enter the Webhook Name and the Webhook URL.
  6. Select Create.

Mistral AI node

URL: llms-txt#mistral-ai-node

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Mistral AI node to automate work in Mistral AI and integrate Mistral AI with other applications. n8n has built-in support for extracting text with various models, file types, and input methods.

On this page, you'll find a list of operations the Mistral AI node supports, and links to more resources.

You can find authentication information for this node here.

  • Resource: The resource that Mistral AI should operate on. The current implementation supports the "Document" resource.

  • Operation: The operation to perform:

    • Extract Text: Extracts text from a document or image using optical character recognition (OCR).
  • Model: The model to use for the given operation. The current version requires the mistral-ocr-latest model.

  • Document Type: The document format to process. Can be "Document" or "Image".

  • Input Type: How to input the document:

    • Binary Data: Pass the document to this node as a binary field.
    • URL: Fetch the document from a given URL.
  • Input Binary Field: When using the "Binary Data" input type, defines the name of the input binary field containing the file.

  • URL: When using the "URL" input type, the URL of the document or image to process.

  • Enable Batch Processing: Whether to process multiple documents in the same API call. This may reduce your costs by bundling requests.

  • Batch Size: When using "Enable Batch Processing", sets the maximum number of documents to process per batch.

  • Delete Files After Processing: When using "Enable Batch Processing", whether to delete the files from Mistral Cloud after processing.

Templates and examples

🤖 AI content generation for Auto Service 🚘 Automate your social media📲!

View template details

Build a PDF Document RAG System with Mistral OCR, Qdrant and Gemini AI

View template details

Organise Your Local File Directories With AI

View template details

Browse Mistral AI integration templates, or search all templates

Refer to Mistral AI's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Telegram node

URL: llms-txt#telegram-node

Contents:

  • Operations
  • Related resources
  • Common issues

Use the Telegram node to automate work in Telegram and integrate Telegram with other applications. n8n has built-in support for a wide range of Telegram features, including getting files as well as deleting and editing messages.

On this page, you'll find a list of operations the Telegram node supports and links to more resources.

Refer to Telegram credentials for guidance on setting up authentication.

To use most of the Message operations, you must add your bot to a channel so that it can send messages to that channel. Refer to Common Issues | Add a bot to a Telegram channel for more information.

Templates and examples

Browse Telegram integration templates, or search all templates

Refer to Telegram's API documentation for more information about the service.

n8n provides a trigger node for Telegram. Refer to the trigger node docs here for more information.

For common errors or issues and suggested resolution steps, refer to Common Issues.


External hooks environment variables

URL: llms-txt#external-hooks-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

You can define external hooks that n8n executes whenever a specific operation runs. Refer to Backend hooks for examples of available hooks and Hook files for information on file formatting.

  • EXTERNAL_HOOK_FILES (string): Files containing backend external hooks. Provide multiple files as a colon-separated list (":").
  • EXTERNAL_FRONTEND_HOOKS_URLS (string): URLs to files containing frontend external hooks. Provide multiple URLs as a colon-separated list (":").
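As a rough sketch, a backend hook file is a JavaScript module that exports hook functions grouped by operation. The hook group and name used below (workflow.create) and the function signature are assumptions; check the Backend hooks and Hook files references for what your n8n version actually exposes:

// hooks.js - referenced via EXTERNAL_HOOK_FILES=/path/to/hooks.js
module.exports = {
  workflow: {
    create: [
      async function (workflowData) {
        // Assumed hook: runs when a workflow is created.
        console.log(`External hook: workflow "${workflowData.name}" created`);
      },
    ],
  },
};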

DeepL credentials

URL: llms-txt#deepl-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a DeepL developer account. n8n works with both Free and Pro API Plans.

Supported authentication methods

Refer to DeepL's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: Refer to DeepL's Authentication documentation for more information on getting your API key.
  • To identify which API Plan you're on. DeepL has different API endpoints for each plan, so be sure you select the correct one:
    • Pro Plan
    • Free Plan

Google BigQuery node

URL: llms-txt#google-bigquery-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Google BigQuery node to automate work in Google BigQuery, and integrate Google BigQuery with other applications. n8n has built-in support for a wide range of Google BigQuery features, including creating and retrieving records.

On this page, you'll find a list of operations the Google BigQuery node supports and links to more resources.

Refer to Google BigQuery credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Execute Query
  • Insert

Templates and examples

🗼 AI Powered Supply Chain Control Tower with BigQuery and GPT-4o

View template details

Send location updates of the ISS every minute to a table in Google BigQuery

View template details

Auto-Generate And Post Tweet Threads Based On Google Trends Using Gemini AI

View template details

Browse Google BigQuery integration templates, or search all templates

Refer to Google BigQuery's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Automating a business workflow

URL: llms-txt#automating-a-business-workflow

Contents:

  • Workflow design
  • Workflow prerequisites

Remember our friend Nathan?

Nathan 🙋: Hello, it's me again. My manager was so impressed with my first workflow automation solution that she entrusted me with more responsibility.
You 👩‍🔧: More work and responsibility. Congratulations, I guess. What do you need to do now?
Nathan 🙋: I got access to all our sales data and I'm now responsible for creating two reports: one for regional sales and one for order prices. They're based on data from different sources and come in different formats.
You 👩‍🔧: Sounds like a lot of manual work, but the kind that can be automated. Let's do it!

Now that we know what Nathan wants to automate, let's list the steps he needs to take to achieve this:

  1. Get and combine data from all necessary sources.
  2. Sort the data and format the dates.
  3. Write binary files.
  4. Send notifications using email and Discord.

n8n provides core nodes for all these steps. This use case is somewhat complex. We should build it from three separate workflows:

  1. A workflow that merges the company data with external information.
  2. A workflow that generates the reports.
  3. A workflow that monitors errors in the second workflow.

Workflow prerequisites

To build the workflows, you will need the following:

Next, you will build these three workflows with step-by-step instructions.


Netscaler ADC credentials

URL: llms-txt#netscaler-adc-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth

You can use these credentials to authenticate the following nodes:

Install a NetScaler/Citrix ADC appliance.

Supported authentication methods

Refer to Netscaler ADC's 14.1 NITRO API documentation for more information about the service.

To configure this credential, you'll need:

  • A URL: Enter the URL of your NetScaler/Citrix ADC instance.
  • A Username: Enter your NetScaler/Citrix ADC username.
  • A Password: Enter your NetScaler/Citrix ADC password.

Refer to Performing Basic Netscaler ADC Operations for more information.


PhantomBuster credentials

URL: llms-txt#phantombuster-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a PhantomBuster account.

Supported authentication methods

Refer to PhantomBuster's API documentation for more information about the service.

To configure this credential, you'll need:


CoinGecko node

URL: llms-txt#coingecko-node

Contents:

  • Operations
  • Templates and examples

Use the CoinGecko node to automate work in CoinGecko, and integrate CoinGecko with other applications. n8n has built-in support for a wide range of CoinGecko features, including getting coins and events.

On this page, you'll find a list of operations the CoinGecko node supports and links to more resources.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Coin
    • Get a candlestick open-high-low-close chart for the selected currency
    • Get current data for a coin
    • Get all coins
    • Get historical data (name, price, market, stats) at a given date for a coin
    • Get prices and market related data for all trading pairs that match the selected currency
    • Get historical market data include price, market cap, and 24h volume (granularity auto)
    • Get the current price of any cryptocurrencies in any other supported currencies that you need
    • Get coin tickers
  • Event
    • Get all events

Templates and examples

Analyze Crypto Market with CoinGecko: Volatility Metrics & Investment Signals

View template details

Tracking your crypto portfolio in Airtable

View template details

Get the price of BTC in EUR and send an SMS

View template details

Browse CoinGecko integration templates, or search all templates


Securing n8n

URL: llms-txt#securing-n8n

Securing your n8n instance can take several forms.

At a high level, you can:

More granularly, consider blocking or opting out of features or data collection you don't want:


Calendly Trigger node

URL: llms-txt#calendly-trigger-node

Contents:

  • Events

Calendly is an automated scheduling software that's designed to help find meeting times.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Calendly Trigger integrations page.

  • Event created
  • Event canceled

Weaviate Vector Store node

URL: llms-txt#weaviate-vector-store-node

Contents:

  • Node usage patterns
    • Use as a regular node to insert and retrieve documents
    • Connect directly to an AI agent as a tool
    • Use a retriever to fetch documents
    • Use the Vector Store Question Answer Tool to answer questions
  • Node parameters
    • Operation Mode
    • Get Many parameters
    • Insert Documents parameters
    • Retrieve Documents (As Vector Store for Chain/Tool) parameters

Use the Weaviate node to interact with your Weaviate collection as a vector store. You can insert documents into or retrieve documents from a vector database. You can also retrieve documents to provide them to a retriever connected to a chain or connect this node directly to an agent to use as a tool. On this page, you'll find the node parameters for the Weaviate node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Node usage patterns

You can use the Weaviate Vector Store node in the following patterns.

Use as a regular node to insert and retrieve documents

You can use the Weaviate Vector Store as a regular node to insert or get documents. This pattern places the Weaviate Vector Store in the regular connection flow without using an agent.

Connect directly to an AI agent as a tool

You can connect the Weaviate Vector Store node directly to the tool connector of an AI agent to use a vector store as a resource when answering queries.

Here, the connection would be: AI agent (tools connector) -> Weaviate Vector Store node.

Use a retriever to fetch documents

You can use the Vector Store Retriever node with the Weaviate Vector Store node to fetch documents from the Weaviate Vector Store node. This is often used with the Question and Answer Chain node to fetch documents from the vector store that match the given chat input.

Use the Vector Store Question Answer Tool to answer questions

Another pattern uses the Vector Store Question Answer Tool to summarize results and answer questions from the Weaviate Vector Store node. Rather than connecting the Weaviate Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store.

You can separate your data into isolated tenants within the same collection (for example, for different customers). To do this, you must always provide a Tenant Name both when inserting and retrieving objects. Read more about multi-tenancy in the Weaviate docs.

Operation Mode

This Vector Store node has four modes: Get Many, Insert Documents, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent). The mode you select determines the operations you can perform with the node and what inputs and outputs are available.

Get Many

In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context.

Insert Documents

Use insert documents mode to insert new documents into your vector database.

Retrieve Documents (as Vector Store for Chain/Tool)

Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.

Retrieve Documents (as Tool for AI Agent)

Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details.

Get Many parameters

  • Weaviate Collection: Enter the name of the Weaviate collection to use.
  • Prompt: Enter the search query.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Insert Documents parameters

  • Weaviate Collection: Enter the name of the Weaviate collection to use.
  • Embedding Batch Size: The number of documents to embed in a single batch. The default is 200 documents.

Retrieve Documents (As Vector Store for Chain/Tool) parameters

  • Weaviate Collection: Enter the name of the Weaviate collection to use.

Retrieve Documents (As Tool for AI Agent) parameters

  • Name: The name of the vector store.
  • Description: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
  • Weaviate Collection: Enter the name of the Weaviate collection to use.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Whether to include document metadata.

You can use this with the Get Many and Retrieve Documents (As Tool for AI Agent) modes.

Enables reranking. If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the Get Many, Retrieve Documents (As Vector Store for Chain/Tool) and Retrieve Documents (As Tool for AI Agent) modes.

Available for the Get Many, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent) operation modes.

When searching for data, use this to match metadata associated with documents. You can learn more about the operators and query structure in Weaviate's conditional filters documentation.

You can use both AND and OR with different operators. Operators are case insensitive:

Operator Required Field(s) Description
'equal' valueString or valueNumber Checks if the property is equal to the given string or number.
'like' valueString Checks if the string property matches a pattern (for example, sub-string match).
'containsAny' valueTextArray (string[]) Checks if the property contains any of the given values.
'containsAll' valueTextArray (string[]) Checks if the property contains all of the given values.
'greaterThan' valueNumber Checks if the property value is greater than the given number.
'lessThan' valueNumber Checks if the property value is less than the given number.
'isNull' valueBoolean (true/false) Checks if the property is null or not. (must enable before ingestion)
'withinGeoRange' valueGeoCoordinates (object with geolocation data) Filters by proximity to geographic coordinates.
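For example, a filter combining AND with containsAny might look like the following sketch. The source and tags metadata properties here are hypothetical; use the properties your own document loader actually sets.

{
  "AND": [
    {
        "path": ["source"],
        "operator": "Equal",
        "valueString": "source1"
    },
    {
        "path": ["tags"],
        "operator": "ContainsAny",
        "valueTextArray": ["invoices", "2024"]
    }
  ]
}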

When inserting data, the document loader sets the metadata. Refer to Default Data Loader for more information on loading documents.

You can define which metadata keys you want Weaviate to return on your queries. This can reduce network load, as you will only get properties you have defined. Returns all properties from the server by default.

Available for the Get Many, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent) operation modes.

The specific tenant to store or retrieve documents for.

Must enable at creation

You must pass a tenant name at first ingestion to enable multitenancy for a collection. You can't enable or disable multitenancy after creation.

The key in the document that contains the embedded text.

Whether to skip initialization checks when instantiating the client.

Number of seconds to wait before timing out during initial checks.

Number of seconds to wait before timing out during inserts.

Number of seconds to wait before timing out during queries.

A proxy to use for gRPC requests.

Available for the Insert Documents operation mode.

Whether to clear the collection or tenant before inserting new data.

Templates and examples

Build a Weekly AI Trend Alerter with arXiv and Weaviate

View template details

Build a PDF Search System with Mistral OCR and Weaviate DB

View template details

Document Q&A with RAG: Query PDF Content using Weaviate and OpenAI

View template details

Browse Weaviate Vector Store integration templates, or search all templates

Refer to LangChain's Weaviate documentation for more information about the service.

Refer to Weaviate Installation to set up a self-hosted Weaviate cluster.

View n8n's Advanced AI documentation.

Examples:

Example 1 (json):

{
  "OR": [
    {
        "path": ["source"],
        "operator": "Equal",
        "valueString": "source1"
    },
    {
        "path": ["source"],
        "operator": "Equal",
        "valueString": "source1"
    }
  ]
}

Workflow templates

URL: llms-txt#workflow-templates

Contents:

  • Access templates
  • Add your workflow to the n8n library
  • Self-hosted n8n: Use your own library
    • Endpoints
    • Query parameters
    • Data schema

When creating a new workflow, you can choose whether to start with an empty workflow, or use an existing template.

  • Help getting started: n8n might already have a template that does what you need.
  • Examples of what you can build
  • Best practices for creating your own workflows

Select Templates to view the templates library.

If you use n8n's template library, this takes you to browse Workflows on the n8n website. If you use a custom library provided by your organization, you'll be able to search and browse the templates within the app.

Add your workflow to the n8n library

You can submit your workflows to n8n's template library.

n8n is working on a creator program, and developing a marketplace of templates. This is an ongoing project, and details are likely to change.

Refer to n8n Creator hub for information on how to submit templates and become a creator.

Self-hosted n8n: Use your own library

In your environment variables, set N8N_TEMPLATES_HOST to the base URL of your API.

Your API must provide the same endpoints and data structure as n8n's.
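For example, a hypothetical setup pointing n8n at an internal template server might look like this (the URL is a placeholder):

export N8N_TEMPLATES_HOST=https://templates.internal.example.com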

Endpoints

Method Path
GET /templates/workflows/<id>
GET /templates/search
GET /templates/collections/<id>
GET /templates/collections
GET /templates/categories
GET /health

Query parameters

The /templates/search endpoint accepts the following query parameters:

Parameter Type Description
page integer The page of results to return
rows integer The maximum number of results to return per page
category comma-separated list of strings (categories) The categories to search within
search string The search query

The /templates/collections endpoint accepts the following query parameters:

Parameter Type Description
category comma-separated list of strings (categories) The categories to search within
search string The search query
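As an illustration, a search request combining these parameters against n8n's own template API might look like the following; the search term is only an example:

curl 'https://api.n8n.io/templates/search?page=1&rows=10&search=slack'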

Data schema

You can explore the data structure of the items in the response objects in the examples at the end of this section: Example 1 shows the workflow item schema, Example 2 the category item schema, and Example 3 the collection item schema.

You can also interactively explore n8n's API endpoints:

https://api.n8n.io/templates/categories
https://api.n8n.io/templates/collections
https://api.n8n.io/templates/search
https://api.n8n.io/health

You can contact us for more support.

Examples:

Example 1 (json):

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Generated schema for Root",
  "type": "object",
  "properties": {
    "id": {
      "type": "number"
    },
    "name": {
      "type": "string"
    },
    "totalViews": {
      "type": "number"
    },
    "price": {},
    "purchaseUrl": {},
    "recentViews": {
      "type": "number"
    },
    "createdAt": {
      "type": "string"
    },
    "user": {
      "type": "object",
      "properties": {
        "username": {
          "type": "string"
        },
        "verified": {
          "type": "boolean"
        }
      },
      "required": [
        "username",
        "verified"
      ]
    },
    "nodes": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "id": {
            "type": "number"
          },
          "icon": {
            "type": "string"
          },
          "name": {
            "type": "string"
          },
          "codex": {
            "type": "object",
            "properties": {
              "data": {
                "type": "object",
                "properties": {
                  "details": {
                    "type": "string"
                  },
                  "resources": {
                    "type": "object",
                    "properties": {
                      "generic": {
                        "type": "array",
                        "items": {
                          "type": "object",
                          "properties": {
                            "url": {
                              "type": "string"
                            },
                            "icon": {
                              "type": "string"
                            },
                            "label": {
                              "type": "string"
                            }
                          },
                          "required": [
                            "url",
                            "label"
                          ]
                        }
                      },
                      "primaryDocumentation": {
                        "type": "array",
                        "items": {
                          "type": "object",
                          "properties": {
                            "url": {
                              "type": "string"
                            }
                          },
                          "required": [
                            "url"
                          ]
                        }
                      }
                    },
                    "required": [
                      "primaryDocumentation"
                    ]
                  },
                  "categories": {
                    "type": "array",
                    "items": {
                      "type": "string"
                    }
                  },
                  "nodeVersion": {
                    "type": "string"
                  },
                  "codexVersion": {
                    "type": "string"
                  }
                },
                "required": [
                  "categories"
                ]
              }
            }
          },
          "group": {
            "type": "string"
          },
          "defaults": {
            "type": "object",
            "properties": {
              "name": {
                "type": "string"
              },
              "color": {
                "type": "string"
              }
            },
            "required": [
              "name"
            ]
          },
          "iconData": {
            "type": "object",
            "properties": {
              "icon": {
                "type": "string"
              },
              "type": {
                "type": "string"
              },
              "fileBuffer": {
                "type": "string"
              }
            },
            "required": [
              "type"
            ]
          },
          "displayName": {
            "type": "string"
          },
          "typeVersion": {
            "type": "number"
          },
          "nodeCategories": {
            "type": "array",
            "items": {
              "type": "object",
              "properties": {
                "id": {
                  "type": "number"
                },
                "name": {
                  "type": "string"
                }
              },
              "required": [
                "id",
                "name"
              ]
            }
          }
        },
        "required": [
          "id",
          "icon",
          "name",
          "codex",
          "group",
          "defaults",
          "iconData",
          "displayName",
          "typeVersion"
        ]
      }
    }
  },
  "required": [
    "id",
    "name",
    "totalViews",
    "price",
    "purchaseUrl",
    "recentViews",
    "createdAt",
    "user",
    "nodes"
  ]
}

Example 2 (json):

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "id": {
      "type": "number"
    },
    "name": {
      "type": "string"
    }
  },
  "required": [
    "id",
    "name"
  ]
}

Example 3 (json):

{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "id": {
      "type": "number"
    },
    "rank": {
      "type": "number"
    },
    "name": {
      "type": "string"
    },
    "totalViews": {},
    "createdAt": {
      "type": "string"
    },
    "workflows": {
      "type": "array",
      "items": {
        "type": "object",
        "properties": {
          "id": {
            "type": "number"
          }
        },
        "required": [
          "id"
        ]
      }
    },
    "nodes": {
      "type": "array",
      "items": {}
    }
  },
  "required": [
    "id",
    "rank",
    "name",
    "totalViews",
    "createdAt",
    "workflows",
    "nodes"
  ]
}

seven credentials

URL: llms-txt#seven-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a seven developer account.

Supported authentication methods

Refer to seven's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API key: Go to Account > Developer > API Keys to create an API key. Refer to API First Steps for more information.

RabbitMQ credentials

URL: llms-txt#rabbitmq-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using user connection
  • guest user issues

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to RabbitMQ's Connections documentation for more information about the service.

Using user connection

To configure this credential, you'll need to have a RabbitMQ broker installed and:

  1. Enter the Hostname for the RabbitMQ broker.
  2. Enter the Port the connection should use.
  3. Enter a User the connection should use to log in as.
    • The default is guest. RabbitMQ recommends using a different user in production environments. Refer to Access Control | The Basics for more information. If you're using the guest account with a non-localhost connection, refer to guest user issues below for troubleshooting tips.
  4. Enter the user's Password.
    • The default password for the guest user is guest.
  5. Enter the virtual host the connection should use as the Vhost. The default virtual host is /.
  6. Select whether the connection should use SSL. If turned on, also set:
    • Passwordless: Select whether the SSL certificate connection uses the SASL mechanism EXTERNAL (turned off) or doesn't use a password (turned on). If turned on, you'll also need to enter:
      • The Client Certificate: Paste the text of the SSL client certificate to use.
      • The Client Key: Paste the SSL client key to use.
      • The Passphrase: Paste the SSL passphrase to use.
    • CA Certificates: Paste the text of the SSL CA certificates to use.

If you use the guest user for the credential and you try to access a remote host, you may see a connection error. The RabbitMQ logs show an error like this:

This happens because RabbitMQ prohibits the default guest user from connecting from remote hosts; it can only connect over localhost.

To resolve this error, you can:

  • Update the guest user to allow it remote host access.
  • Create or use a different user to connect to the remote host. The guest user is the only user limited by default.

Refer to "guest" user can only connect from localhost for more information.

Examples:

Example 1 (text):

[error] <0.918.0> PLAIN login refused: user 'guest' can only connect via localhost

QuickBooks Online node

URL: llms-txt#quickbooks-online-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the QuickBooks node to automate work in QuickBooks, and integrate QuickBooks with other applications. n8n has built-in support for a wide range of QuickBooks features, including creating, updating, deleting, and getting bills, customers, employees, estimates, and invoices.

On this page, you'll find a list of operations the QuickBooks node supports and links to more resources.

Refer to QuickBooks credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Bill
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • Customer
    • Create
    • Get
    • Get All
    • Update
  • Employee
    • Create
    • Get
    • Get All
    • Update
  • Estimate
    • Create
    • Delete
    • Get
    • Get All
    • Send
    • Update
  • Invoice
    • Create
    • Delete
    • Get
    • Get All
    • Send
    • Update
    • Void
  • Item
    • Get
    • Get All
  • Payment
    • Create
    • Delete
    • Get
    • Get All
    • Send
    • Update
    • Void
  • Purchase
    • Get
    • Get All
  • Transaction
    • Get Report
  • Vendor
    • Create
    • Get
    • Get All
    • Update

Templates and examples

Create a customer and send the invoice automatically

View template details

Create QuickBooks Online Customers With Sales Receipts For New Stripe Payments

View template details

Create a QuickBooks invoice on a new Onfleet Task creation

View template details

Browse QuickBooks Online integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


AWS Elastic Load Balancing node

URL: llms-txt#aws-elastic-load-balancing-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the AWS Elastic Load Balancing node to automate work in AWS ELB, and integrate AWS ELB with other applications. n8n has built-in support for a wide range of AWS ELB features, including adding, getting, and removing listener certificates, and creating, getting, and deleting load balancers.

On this page, you'll find a list of operations the AWS ELB node supports and links to more resources.

Refer to AWS ELB credentials for guidance on setting up authentication.

  • Listener Certificate
    • Add
    • Get Many
    • Remove
  • Load Balancer
    • Create
    • Delete
    • Get
    • Get Many

This node supports creating and managing application and network load balancers. It doesn't currently support gateway load balancers.

Templates and examples

Transcribe audio files from Cloud Storage

View template details

Extract and store text from chat images using AWS S3

View template details

Sync data between Google Drive and AWS S3

View template details

Browse AWS Elastic Load Balancing integration templates, or search all templates

Refer to AWS ELB's documentation for more information on this service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


CrateDB node

URL: llms-txt#cratedb-node

Contents:

  • Operations
  • Templates and examples
  • Node reference
    • Specify a column's data type

Use the CrateDB node to automate work in CrateDB, and integrate CrateDB with other applications. n8n has built-in support for a wide range of CrateDB features, including executing, inserting, and updating rows in the database.

On this page, you'll find a list of operations the CrateDB node supports and links to more resources.

Refer to CrateDB credentials for guidance on setting up authentication.

  • Execute an SQL query
  • Insert rows in database
  • Update rows in database

Templates and examples

Browse CrateDB integration templates, or search all templates

Specify a column's data type

To specify a column's data type, append the column name with :type, where type is the data type you want for the column. For example, if you want to specify the type int for the column id and type text for the column name, you can use the following snippet in the Columns field: id:int,name:text.


Gmail node Draft Operations

URL: llms-txt#gmail-node-draft-operations

Contents:

  • Create a draft
    • Create draft options
  • Delete a draft
  • Get a draft
    • Get draft options
  • Get Many drafts
    • Get Many drafts options
  • Common issues

Use the Draft operations to create, delete, or get a draft or list drafts in Gmail. Refer to the Gmail node for more information on the Gmail node itself.

Use this operation to create a new draft.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Draft.
  • Operation: Select Create.
  • Subject: Enter the subject line.
  • Select the Email Type. Choose from Text or HTML.
  • Message: Enter the email message body.

Create draft options

Use these options to further refine the node's behavior:

  • Attachments: Select Add Attachment to add an attachment. Enter the Attachment Field Name (in Input) to identify which field from the input node contains the attachment.
    • For multiple properties, enter a comma-separated list.
  • BCC: Enter one or more email addresses for blind copy recipients. Separate multiple email addresses with a comma, for example jay@gatsby.com, jon@smith.com.
  • CC: Enter one or more email addresses for carbon copy recipients. Separate multiple email addresses with a comma, for example jay@gatsby.com, jon@smith.com.
  • From Alias Name or ID: Select an alias to send the draft from. This field populates based on the credential you selected in the parameters.
  • Send Replies To: Enter an email address to set as the reply to address.
  • Thread ID: If you want this draft attached to a thread, enter the ID for that thread.
  • To Email: Enter one or more email addresses for recipients. Separate multiple email addresses with a comma, for example jay@gatsby.com, jon@smith.com.

Refer to the Gmail API Method: users.drafts.create documentation for more information.

Use this operation to delete a draft.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Draft.
  • Operation: Select Delete.
  • Draft ID: Enter the ID of the draft you wish to delete.

Refer to the Gmail API Method: users.drafts.delete documentation for more information.

Use this operation to get a single draft.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Draft.
  • Operation: Select Get.
  • Draft ID: Enter the ID of the draft you wish to get information about.

Get draft options

Use these options to further refine the node's behavior:

  • Attachment Prefix: Enter a prefix for the name of the binary property the node should write any attachments to. n8n adds an index starting with 0 to the prefix. For example, if you enter 'attachment_' as the prefix, the first attachment saves to 'attachment_0'.
  • Download Attachments: Select whether the node should download the draft's attachments (turned on) or not (turned off).

Refer to the Gmail API Method: users.drafts.get documentation for more information.

Use this operation to get multiple drafts.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Draft.
  • Operation: Select Get Many.
  • Return All: Choose whether the node returns all drafts (turned on) or only up to a set limit (turned off).
  • Limit: Enter the maximum number of drafts to return. Only used if you've turned off Return All.

Get Many drafts options

Use these options to further refine the node's behavior:

  • Attachment Prefix: Enter a prefix for the name of the binary property the node should write any attachments to. n8n adds an index starting with 0 to the prefix. For example, if you enter 'attachment_' as the prefix, the first attachment saves to 'attachment_0'.
  • Download Attachments: Select whether the node should download the draft's attachments (turned on) or not (turned off).
  • Include Spam and Trash: Select whether the node should get drafts in the Spam and Trash folders (turned on) or not (turned off).

Refer to the Gmail API Method: users.drafts.list documentation for more information.

For common errors or issues and suggested resolution steps, refer to Common Issues.


Webex by Cisco credentials

URL: llms-txt#webex-by-cisco-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a Webex by Cisco account (this should automatically get you developer account access).

Supported authentication methods

Refer to Webex's API documentation for more information about the service.

Note for n8n Cloud users

You'll only need to enter the Credentials Name and select the Connect my account button in the OAuth credential to connect your Webex by Cisco account to n8n.

Should you need to configure OAuth2 from scratch, you'll need to create an integration to use this credential. Refer to the instructions in the Webex Registering your Integration documentation to begin.

n8n recommends using the following Scopes for your integration:

  • spark:rooms_read
  • spark:messages_write
  • spark:messages_read
  • spark:memberships_read
  • spark:memberships_write
  • meeting:recordings_write
  • meeting:recordings_read
  • meeting:preferences_read
  • meeting:schedules_write
  • meeting:schedules_read

Call n8n Workflow Tool node

URL: llms-txt#call-n8n-workflow-tool-node

Contents:

  • Node parameters
    • Description
    • Source
    • Workflow Inputs
  • Templates and examples
  • Related resources

The Call n8n Workflow Tool node is a tool that allows an agent to run another n8n workflow and fetch its output data.

On this page, you'll find the node parameters for the Call n8n Workflow Tool node, and links to more resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Description

Enter a description for this tool. This tells the agent when to use this tool. For example:

Call this tool to get a random color. The input should be a string with comma separated names of colors to exclude.

Source

Tell n8n which workflow to call. You can choose either:

  • Database to select the workflow from a list or enter a workflow ID.
  • Define Below and copy in a complete workflow JSON.

Workflow Inputs

When using Database as workflow source, once you choose a sub-workflow (and define the Workflow Input Schema in the sub-workflow), you can define the Workflow Inputs.

Select the Refresh button to pull in the input fields from the sub-workflow.

You can define the workflow input values using any combination of the following options:

  • providing fixed values
  • using expressions to reference data from the current workflow
  • letting the AI model specify the parameter by selecting the AI button on the right side of the field
  • using the $fromAI() function in expressions to control the way the model fills in data and to mix AI generated input with other custom input

To reference data from the current workflow, drag fields from the input panel to the field with the Expressions mode selected.

To get started with the $fromAI() function, select the "Let the model define this parameter" button on the right side of the field. The field changes to an expression field pre-populated with the $fromAI() expression (you can use the X on the box to revert to user-defined values). From here, you can customize the expression to add other static or dynamic content, or tweak the $fromAI() function parameters.
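For example, a workflow input field using $fromAI() might contain an expression like the following sketch. The excluded_colors key and its description are hypothetical, chosen to match the random-color tool description above:

{{ $fromAI('excluded_colors', 'Comma-separated list of color names to exclude') }}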

Templates and examples

AI agent that can scrape webpages

View template details

Build Your First AI Data Analyst Chatbot

View template details

Create a Branded AI-Powered Website Chatbot

View template details

Browse Call n8n Workflow Tool integration templates, or search all templates

Refer to LangChain's documentation on tools for more information about tools in LangChain.

View n8n's Advanced AI documentation.


OpenWeatherMap credentials

URL: llms-txt#openweathermap-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to OpenWeatherMap's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need an OpenWeatherMap account and:

  • An Access Token

To get your Access Token:

  1. After you verify your email address, OpenWeatherMap includes an API Key in your welcome email.
  2. Copy that key and enter it in your n8n credential.

If you'd prefer to create a new key:

  1. Go to Account > API Keys.
  2. In the Create Key section, enter an API Key Name, like n8n integration.
  3. Select Generate to generate your key.
  4. Copy the generated key and enter it in your n8n credential.

Chargebee credentials

URL: llms-txt#chargebee-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Chargebee account.

Supported authentication methods

Refer to Chargebee's API documentation for more information about the service.

To configure this credential, you'll need:

  • An Account Name: This is your Chargebee Site Name or subdomain, for example if https://n8n.chargebee.com is the full site name, the Account Name is n8n.
  • An API Key: Refer to the Chargebee Creating an API key documentation for steps on how to generate an API key.

Refer to their more general API authentication documentation for further clarification.


Elastic Security node

URL: llms-txt#elastic-security-node

Contents:

  • Operations
  • Templates and examples

Use the Elastic Security node to automate work in Elastic Security, and integrate Elastic Security with other applications. n8n has built-in support for a wide range of Elastic Security features, including creating, updating, deleting, retrieving, and getting cases.

On this page, you'll find a list of operations the Elastic Security node supports and links to more resources.

Refer to Elastic Security credentials for guidance on setting up authentication.

  • Case
    • Create a case
    • Delete a case
    • Get a case
    • Retrieve all cases
    • Retrieve a summary of all case activity
    • Update a case
  • Case Comment
    • Add a comment to a case
    • Get a case comment
    • Retrieve all case comments
    • Remove a comment from a case
    • Update a comment in a case
  • Case Tag
    • Add a tag to a case
    • Remove a tag from a case
  • Connector
    • Create a connector

Templates and examples

Browse Elastic Security integration templates, or search all templates


Metabase credentials

URL: llms-txt#metabase-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth

You can use these credentials to authenticate the following nodes:

Create a Metabase account with access to a Metabase instance.

Supported authentication methods

Refer to Metabase's API documentation for more information about the service.

To configure this credential, you'll need:

  • A URL: Enter the base URL of your Metabase instance. If you're using a custom domain, use that URL.
  • A Username: Enter your Metabase username.
  • A Password: Enter your Metabase password.

Copy work between environments

URL: llms-txt#copy-work-between-environments

Contents:

  • Single branch
  • Multiple branches
  • Automatically send changes to n8n

The steps to send work from one n8n instance to another are different depending on whether you use a single Git branch or multiple branches.

If you have a single Git branch the steps to copy work are:

  1. Push work from one instance to the Git branch.
  2. Log in to the other instance to pull the work from Git. You can automate pulls.

If you have more than one Git branch, you need to merge the branches in your Git provider to copy work between environments. You can't copy work directly between environments in n8n.

  1. Do work in your development instance.
  2. Push the work to the development branch in Git.
  3. Merge your development branch into your production branch. Refer to the documentation for your Git provider for guidance on doing this.
  4. In your production n8n instance, pull the changes. You can automate pulls.

Automatically send changes to n8n

You can automate parts of the process of copying work, using the /source-control/pull API endpoint. Call the API after merging the changes:

This means you can use a GitHub Action or GitLab CI/CD to automatically pull changes to the production instance on merge.

A GitHub Action example:

Examples:

Example 1 (shell):

curl --request POST \
	--location '<YOUR-INSTANCE-URL>/api/v1/source-control/pull' \
	--header 'Content-Type: application/json' \
	--header 'X-N8N-API-KEY: <YOUR-API-KEY>' \
	--data '{"force": true}'

Example 2 (yaml):

name: CI
on:
  # Trigger the workflow on push or pull request events for the "production" branch
  push:
    branches: [ "production" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  run-pull:
    runs-on: ubuntu-latest
    steps:
      - name: PULL
        # Use GitHub secrets to protect sensitive information
        run: >
          curl --request POST --location '${{ secrets.INSTANCE_URL }}/api/v1/source-control/pull' --header
          'Content-Type: application/json' --header 'X-N8N-API-KEY: ${{ secrets.INSTANCE_API_KEY }}'
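A GitLab CI equivalent is sketched below under the assumption that INSTANCE_URL and INSTANCE_API_KEY are defined as masked CI/CD variables in your GitLab project:

# .gitlab-ci.yml
pull-to-production:
  rules:
    - if: '$CI_COMMIT_BRANCH == "production"'
  script:
    - >
      curl --request POST --location "$INSTANCE_URL/api/v1/source-control/pull"
      --header 'Content-Type: application/json'
      --header "X-N8N-API-KEY: $INSTANCE_API_KEY"
      --data '{"force": true}'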

Mist credentials

URL: llms-txt#mist-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Mist account and organization. Refer to Create a Mist account and Organization for detailed instructions.

Supported authentication methods

Refer to Mist's documentation for more information about the service. If you're logged in to your Mist account, go to https://api.mist.com/api/v1/docs/Home to view the full API documentation.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

  • An API Token: You can use either a User API token or an Org API token. Refer to How to generate a user API token for instructions on generating a User API token. Refer to Org API token for instructions on generating an Org API token.
  • Select the Region you're in. Options include:
    • Europe: Select this option if your cloud environment is in any of the EMEA regions.
    • Global: Select this option if your cloud environment is in any of the global regions.

Credential sharing

URL: llms-txt#credential-sharing

Contents:

  • Share a credential
  • Remove access to a credential

Available on all Cloud plans, and Enterprise self-hosted plans.

You can share a credential directly with other users to use in their own workflows. Or share a credential in a project for all members of that project to use. Any users using a shared credential won't be able to view or edit the credential details.

Users can share credentials they created and own. Only project admins can share credentials created in and owned by a project. Instance owners and instance admins can view and share all credentials on an instance.

Refer to Account types for more information about owners and admins.

In projects, a user's role controls how they can interact with the workflows and credentials associated with the projects they're a member of.

Share a credential

To share a credential:

  1. From the left menu, select either Overview or a project.
  2. Select Credentials to see a list of your credentials.
  3. Select the credential you want to share.
  4. Select Sharing.
  5. In the Share with projects or users dropdown, browse or search for the user or project with which you want to share your credentials.
  6. Select a user or project.
  7. Select Save to apply the changes.

Remove access to a credential

To unshare a credential:

  1. From the left menu, select either Overview or a project.
  2. Select Credentials to see a list of your credentials.
  3. Select the credential you want to unshare.
  4. Select Sharing.
  5. Select the trash icon next to the user or project you want to remove from the list of shared users and projects.
  6. Select Save to apply the changes.

Snowflake credentials

URL: llms-txt#snowflake-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using database connection

You can use these credentials to authenticate the following nodes:

Create a Snowflake account.

Supported authentication methods

  • Database connection

Refer to Snowflake's API documentation and SQL Command Reference for more information about the service.

Using database connection

To configure this credential, you'll need:

  • An Account name: Your account name is the string of characters located between https:// and snowflakecomputing.com in your Snowflake URL. For example, if the URL of your Snowflake account is https://abc.eu-central-1.snowflakecomputing.com then the name of your account is abc.eu-central-1.
  • A Database: Enter the name of the database the credential should connect to.
  • A Warehouse: Enter the name of the default virtual warehouse to use for the session after connecting. n8n uses this warehouse for performing queries, loading data, and so on.
  • A Username
  • A Password
  • A Schema: Enter the schema you want to use after connecting.
  • A Role: Enter the security role you want to use after connecting.
  • Client Session Keep Alive: By default, client connections typically time out three or four hours after the most recent query execution. Turning this setting on sets the clientSessionKeepAlive parameter to true: the server will keep the client's connection alive indefinitely, even if the connection doesn't execute any queries.

Refer to Session Commands for more information on these settings.


Set a save location for the log file

URL: llms-txt#set-a-save-location-for-the-log-file

export N8N_LOG_FILE_LOCATION=/home/jim/n8n/logs/n8n.log


Humantic AI node

URL: llms-txt#humantic-ai-node

Contents:

  • Operations
  • Templates and examples

Use the Humantic AI node to automate work in Humantic AI, and integrate Humantic AI with other applications. n8n has built-in support for a wide range of Humantic AI features, including creating, retrieving, and updating profiles.

On this page, you'll find a list of operations the Humantic AI node supports and links to more resources.

Refer to Humantic AI credentials for guidance on setting up authentication.

  • Profile
    • Create a profile
    • Retrieve a profile
    • Update a profile

Templates and examples

Enrich and manage candidates data in Notion

View template details

Create, update, and get a profile in Humantic AI

View template details

Get, Create, Upadte Profiles 🛠️ Humantic AI Tool MCP Server

View template details

Browse Humantic AI integration templates, or search all templates


Google Calendar node

URL: llms-txt#google-calendar-node

Contents:

  • Operations
  • Templates and examples
  • Related resources

Use the Google Calendar node to automate work in Google Calendar, and integrate Google Calendar with other applications. n8n has built-in support for a wide range of Google Calendar features, including adding, retrieving, deleting and updating calendar events.

On this page, you'll find a list of operations the Google Calendar node supports and links to more resources.

Refer to Google Calendar credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Calendar
  • Event
    • Create: Add an event to calendar
    • Delete: Delete an event
    • Get: Retrieve an event
    • Get Many: Retrieve all events from a calendar
    • Update: Update an event

Templates and examples

AI Agent : Google calendar assistant using OpenAI

View template details

Build an MCP Server with Google Calendar and Custom Functions

View template details

Actioning Your Meeting Next Steps using Transcripts and AI

View template details

Browse Google Calendar integration templates, or search all templates

n8n provides a trigger node for Google Calendar. You can find the trigger node docs here.

Refer to Google Calendar's documentation for more information about the service.

View example workflows and related content on n8n's website.


ActiveCampaign Trigger node

URL: llms-txt#activecampaign-trigger-node

Contents:

  • Events
  • Related resources

ActiveCampaign is a cloud software platform for small-to-mid-sized businesses. The company offers software for customer experience automation, which combines the email marketing, marketing automation, sales automation, and CRM categories.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's ActiveCampaign Trigger integrations page.

  • New ActiveCampaign event

n8n provides an app node for ActiveCampaign. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to ActiveCampaign's documentation for details about their API.


RabbitMQ Trigger node

URL: llms-txt#rabbitmq-trigger-node

Contents:

  • Related resources

RabbitMQ is an open-source message broker that accepts and forwards messages.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Rabbit MQ Trigger integrations page.

n8n provides an app node for RabbitMQ. You can find the node docs here.

View example workflows and related content on n8n's website.


Google: OAuth2 generic

URL: llms-txt#google:-oauth2-generic

Contents:

  • Prerequisites
  • Set up OAuth
    • Create a Google Cloud Console project
    • Enable APIs
    • Configure your OAuth consent screen
    • Create your Google OAuth client credentials
    • Finish your n8n credential
  • Video
  • Scopes
  • Troubleshooting

This document contains instructions for creating a generic OAuth2 Google credential for use with custom operations.

Note for n8n Cloud users

For the following nodes, you can authenticate by selecting Sign in with Google in the OAuth section:

There are five steps to connecting your n8n credential to Google services:

  1. Create a Google Cloud Console project.
  2. Enable APIs.
  3. Configure your OAuth consent screen.
  4. Create your Google OAuth client credentials.
  5. Finish your n8n credential.

Create a Google Cloud Console project

First, create a Google Cloud Console project. If you already have a project, jump to the next section:

  1. Log in to your Google Cloud Console using your Google credentials.

  2. Select the project dropdown in the top navigation and select New project, or go directly to the New Project page.

  3. Enter a Project name and select the Location for your project.

  4. Select Create.

  5. Check the top navigation and make sure the project dropdown has your project selected. If not, select the project you just created.


With your project created, enable the APIs you'll need access to:

  1. Access your Google Cloud Console - Library. Make sure you're in the correct project.

  2. Go to APIs & Services > Library.

  3. Search for and select the API(s) you want to enable. For example, for the Gmail node, search for and enable the Gmail API.

  4. Some integrations require other APIs or require you to request access:

Google Drive API required

The following integrations require the Google Drive API, as well as their own API:

  • Google Docs
  • Google Sheets
  • Google Slides

In addition to the Vertex AI API, you will also need to enable the Cloud Resource Manager API.

  5. Select ENABLE.

If you haven't used OAuth in your Google Cloud project before, you'll need to configure the OAuth consent screen:

  1. Access your Google Cloud Console. Make sure you're in the correct project.

  2. Open the left navigation menu and go to APIs & Services > OAuth consent screen. Google will redirect you to the Google Auth Platform overview page.

  3. Select Get started on the Overview tab to begin configuring OAuth consent.

  4. Enter an App name and User support email to include on the OAuth screen. Select Next to continue.

  5. For the Audience, select Internal for user access within your organization's Google Workspace or External for any user with a Google account. Refer to Google's User type documentation for more information on user types. Select Next to continue.

  6. Select the Email addresses Google should use to contact you about changes to your project. Select Next to continue.

  7. Read and accept Google's User Data Policy. Select Continue and then select Create.

  8. In the left-hand menu, select Branding.

  9. In the Authorized domains section, select Add domain:
    • If you're using n8n's Cloud service, add n8n.cloud
    • If you're self-hosting, add the domain of your n8n instance.

  10. Select Save at the bottom of the page.

Create your Google OAuth client credentials

Next, create the OAuth client credentials in Google:

  1. Access your Google Cloud Console. Make sure you're in the correct project.
  2. In the APIs & Services section, select Credentials.
  3. Select + Create credentials > OAuth client ID.
  4. In the Application type dropdown, select Web application.
  5. Google automatically generates a Name. Update the Name to something you'll recognize in your console.
  6. From your n8n credential, copy the OAuth Redirect URL. Paste it into the Authorized redirect URIs in Google Console.
  7. Select Create.

Finish your n8n credential

With the Google project and credentials fully configured, finish the n8n credential:

  1. From Google's OAuth client created modal, copy the Client ID. Enter this in your n8n credential.

  2. From the same Google modal, copy the Client Secret. Enter this in your n8n credential.

  3. You must provide the scopes for this credential. Refer to Scopes for more information. Enter multiple scopes in a space-separated list, for example:

  4. In n8n, select Sign in with Google to complete your Google authentication.

  5. Save your new credentials.

The following video demonstrates the steps described above:

Google services have one or more possible access scopes. A scope limits what a user can do. Refer to OAuth 2.0 Scopes for Google APIs for a list of scopes for all services.

n8n doesn't support all scopes. When creating a generic Google OAuth2 API credential, you can enter scopes from the Supported scopes list below. If you enter a scope that n8n doesn't already support, it won't work.

Service Available scopes
Gmail - https://www.googleapis.com/auth/gmail.labels - https://www.googleapis.com/auth/gmail.addons.current.action.compose - https://www.googleapis.com/auth/gmail.addons.current.message.action - https://mail.google.com/ - https://www.googleapis.com/auth/gmail.modify - https://www.googleapis.com/auth/gmail.compose
Google Ads - https://www.googleapis.com/auth/adwords
Google Analytics - https://www.googleapis.com/auth/analytics - https://www.googleapis.com/auth/analytics.readonly
Google BigQuery - https://www.googleapis.com/auth/bigquery
Google Books - https://www.googleapis.com/auth/books
Google Calendar - https://www.googleapis.com/auth/calendar - https://www.googleapis.com/auth/calendar.events
Google Cloud Natural Language - https://www.googleapis.com/auth/cloud-language - https://www.googleapis.com/auth/cloud-platform
Google Cloud Storage - https://www.googleapis.com/auth/cloud-platform - https://www.googleapis.com/auth/cloud-platform.read-only - https://www.googleapis.com/auth/devstorage.full_control - https://www.googleapis.com/auth/devstorage.read_only - https://www.googleapis.com/auth/devstorage.read_write
Google Contacts - https://www.googleapis.com/auth/contacts
Google Docs - https://www.googleapis.com/auth/documents - https://www.googleapis.com/auth/drive - https://www.googleapis.com/auth/drive.file
Google Drive - https://www.googleapis.com/auth/drive - https://www.googleapis.com/auth/drive.appdata - https://www.googleapis.com/auth/drive.photos.readonly
Google Firebase Cloud Firestore - https://www.googleapis.com/auth/datastore - https://www.googleapis.com/auth/firebase
Google Firebase Realtime Database - https://www.googleapis.com/auth/userinfo.email - https://www.googleapis.com/auth/firebase.database - https://www.googleapis.com/auth/firebase
Google Perspective - https://www.googleapis.com/auth/userinfo.email
Google Sheets - https://www.googleapis.com/auth/drive.file - https://www.googleapis.com/auth/spreadsheets
Google Slides - https://www.googleapis.com/auth/drive.file - https://www.googleapis.com/auth/presentations
Google Tasks - https://www.googleapis.com/auth/tasks
Google Translate - https://www.googleapis.com/auth/cloud-translation
GSuite Admin - https://www.googleapis.com/auth/admin.directory.group - https://www.googleapis.com/auth/admin.directory.user - https://www.googleapis.com/auth/admin.directory.domain.readonly - https://www.googleapis.com/auth/admin.directory.userschema.readonly

Google hasn't verified this app

If using the OAuth authentication method, you might see the warning Google hasn't verified this app. To avoid this:

  • If your app User Type is Internal, create OAuth credentials from the same account you want to authenticate.
  • If your app User Type is External, you can add your email to the list of testers for the app: go to the Audience page and add the email you're signing in with to the list of Test users.

If you need to use credentials generated by another account (by a developer or another third party), follow the instructions in Google Cloud documentation | Authorization errors: Google hasn't verified this app.

Google Cloud app becoming unauthorized

For Google Cloud apps with Publishing status set to Testing and User type set to External, consent and tokens expire after seven days. Refer to Google Cloud Platform Console Help | Setting up your OAuth consent screen for more information. To resolve this, reconnect the app in the n8n credentials modal.

Examples:

Example 1 (unknown):

https://www.googleapis.com/auth/gmail.labels https://www.googleapis.com/auth/gmail.addons.current.action.compose

Stripe credentials

URL: llms-txt#stripe-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key
    • Live mode Secret key
    • Test mode Secret key
  • Test mode and live mode
  • Key prefixes

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Stripe's API documentation for more information about the service.

To configure this credential, you'll need a Stripe admin or developer account and:

  • An API Secret Key

Before you generate an API key, decide whether to generate it in live mode or test mode. Refer to Test mode and live mode for more information about the two modes.

Live mode Secret key

To generate a Secret key in live mode:

  1. Open the Stripe developer dashboard and select API Keys.
  2. In the Standard Keys section, select Create secret key.
  3. Enter a Key name, like n8n integration.
  4. Select Create. The new API key displays.
  5. Copy the key and enter it in your n8n credential as the Secret Key.

Refer to Stripe's Create a secret API key for more information.

Test mode Secret key

To use a Secret key in test mode, you must copy the existing one:

  1. Go to your Stripe test mode developer dashboard and select API Keys.
  2. In the Standard Keys section, select Reveal test key for the Secret key.
  3. Copy the key and enter it in your n8n credential as the Secret Key.

Refer to Stripe's Create a secret API key for more information.

Test mode and live mode

All Stripe API requests happen within either test mode or live mode. Each mode has its own API key.

Use test mode to access simulated test data and live mode to access actual account data. Objects in one mode aren't accessible to the other.

Refer to API keys | Test mode versus live mode for more information about what's available in each mode and guidance on when to use each.

n8n credentials for both modes

If you want to work with both live mode and test mode keys, store each mode's key in a separate n8n credential.

Key prefixes

Stripe's Secret keys always begin with sk_:

  • Live keys begin with sk_live_.
  • Test keys begin with sk_test_.

n8n hasn't tested these credentials with Restricted keys (prefixed rk_).

Don't use the Publishable keys (prefixed pk_) with your n8n credential.


Airtop credentials

URL: llms-txt#airtop-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create an Airtop account.

Supported authentication methods

Refer to Airtop's API documentation for more information about the service.

To configure this credential, you'll need an Airtop account and an API key. To generate a new key:

  1. Log in to the Airtop Portal.
  2. Go to API Keys.
  3. Select the + Create new key button.
  4. Enter a name for the API key.
  5. Select the generated key to copy the key.
  6. Enter this as the API Key in your n8n credential.

Refer to Airtop's Support for assistance if you have any issues creating your API key.


SearXNG Tool node

URL: llms-txt#searxng-tool-node

Contents:

  • Node Options
  • Running a SearXNG instance
  • Templates and examples
  • Related resources

The SearXNG Tool node allows you to integrate search capabilities into your workflows using SearXNG. SearXNG aggregates results from multiple search engines without tracking you.

On this page, you'll find the node options for the SearXNG Tool node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Number of Results: The number of results to retrieve. The default is 10.
  • Page Number: The page number of the search results to retrieve. The default is 1.
  • Language: A two-letter language code to filter search results by language. For example: en for English, fr for French. The default is en.
  • Safe Search: Enables or disables filtering explicit content in the search results. Can be None, Moderate, or Strict. The default is None.

Running a SearXNG instance

This node requires running the SearXNG service on the same network as your n8n instance. Ensure your n8n instance has network access to the SearXNG service.

This node requires results in JSON format, which isn't enabled in the default SearXNG configuration. To enable JSON output, add json to the search.formats section of your SearXNG instance's settings.yml file:

If the formats section isn't there, add it. The exact location of the settings.yml file depends on how you installed SearXNG. You can find more by visiting the SearXNG configuration documentation.

The quality and availability of search results depend on the configuration and health of the SearXNG instance you use.
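
As a minimal, hypothetical sketch of the networking requirement (assuming a Docker-based setup, the public searxng/searxng image, and that the image's default listening port of 8080 is unchanged; all container and network names here are illustrative):

# Put SearXNG and n8n on the same Docker network
docker network create n8n-net
docker run -d --name searxng --network n8n-net searxng/searxng
docker run -d --name n8n --network n8n-net -p 5678:5678 docker.n8n.io/n8nio/n8n
# In the n8n credential for SearXNG, point the base URL at http://searxng:8080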

Templates and examples

Browse SearXNG Tool integration templates, or search all templates

Refer to SearXNG's documentation for more information about the service. You can also view LangChain's documentation on their SearXNG integration.

View n8n's Advanced AI documentation.

Examples:

Example 1 (unknown):

search:
  # options available for formats: [html, csv, json, rss]
  formats:
    - html
    - json

Build a node

URL: llms-txt#build-a-node

This section provides tutorials on building nodes. It covers:


Data filtering

URL: llms-txt#data-filtering

Available on Cloud Pro and Enterprise plans.

Search and filter data in the node INPUT and OUTPUT panels. Use this to check your node's data.

  1. In a node, select Search in the INPUT or OUTPUT panel.
  2. Enter your search term.

n8n filters as you type your search, displaying the objects or rows containing the term.

Filtering is purely visual: n8n doesn't change or delete data. The filter resets when you close and reopen the node.


Evaluation node

URL: llms-txt#evaluation-node

Contents:

  • Operations
    • Set Outputs
    • Set Metrics
    • Check If Evaluating
  • Templates and examples
  • Related resources

The Evaluation node performs various operations related to evaluations to validate your AI workflow reliability.

Use the Evaluation node in these scenarios:

  • To conditionally execute logic based on whether the workflow is under evaluation
  • To write evaluation outcomes back to a Google Sheet dataset, or
  • To log scoring metrics for your evaluation performance to n8n's evaluations tab

Credentials for Google Sheets

The Evaluation node's Set Outputs operation records evaluation results to data tables or Google Sheets. To use Google Sheets as a recording location, configure a Google Sheets credential.

The Evaluation node offers the following operations:

  • Set Outputs: Write the results of an evaluation back to a data table or Google Sheet dataset.
  • Set Metrics: Record metrics scoring the evaluation performance to n8n's Evaluations tab.
  • Check If Evaluating: Branches the workflow execution logic depending on whether the current execution is an evaluation.

The parameters and options available depend on the operation you select.

Set Outputs

The Set Outputs operation has the following parameters:

  • Source: Select the location to which you want to output the evaluation results. Default value is Data table.

Source settings differ depending on Source selection.

You define the items to write to the data table or Google Sheet in the Outputs section. For each output, you set the following:

  • Name: The Google Sheet column name to write the evaluation results to.
  • Value: The value to write to the Google Sheet.

Set Metrics

The Set Metrics operation includes a Metrics to Return section where you define the metrics to record and track for your evaluations. You can see the metric results in your workflow's Evaluations tab.

For each metric you wish to record, you set the following details:

  • Name: The name to use for the metric.
  • Value: The numeric value to record. Once you run your evaluation, you can drag and drop values from previous nodes here. Metric values must be numeric.

Check If Evaluating

The Check If Evaluating operation doesn't have any parameters. This operation provides branching output connectors so that you can conditionally execute logic depending on whether the current execution is an evaluation or not.

Templates and examples

AI Automated HR Workflow for CV Analysis and Candidate Evaluation

View template details

HR Job Posting and Evaluation with AI

View template details

AI-Powered Candidate Screening and Evaluation Workflow using OpenAI and Airtable

View template details

Browse Evaluation integration templates, or search all templates

To learn more about n8n evaluations, check out the evaluations documentation.

n8n provides a trigger node for evaluations. You can find the node docs here.

For common questions or issues and suggested solutions, refer to the evaluations tips and common issues page.

Examples:

Example 1 (unknown):

* When **Source** is **Data table**:
    * **Data table:** Select a data table by name or ID
* When **Source** is **Google Sheets**:
    * **Credential to connect with**: Create or select an existing [Google Sheets credentials](/integrations/builtin/credentials/google/index.md).
    * **Document Containing Dataset**: Choose the spreadsheet document you want to write the evaluation results to. Usually this is the same document you select in the [Evaluation Trigger](/integrations/builtin/core-nodes/n8n-nodes-base.evaluationtrigger.md) node.
    * Select **From list** to choose the spreadsheet title from the dropdown list, **By URL** to enter the url of the spreadsheet, or **By ID** to enter the `spreadsheetId`. 
        * You can find the `spreadsheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0`.
    * **Sheet Containing Dataset**: Choose the sheet you want to write the evaluation results to. Usually this is the same sheet you select in the [Evaluation Trigger](/integrations/builtin/core-nodes/n8n-nodes-base.evaluationtrigger.md) node.
        * Select **From list** to choose the sheet title from the dropdown list, **By URL** to enter the url of the sheet, **By ID** to enter the `sheetId`, or **By Name** to enter the sheet title. 
        * You can find the `sheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId`.

Google Sheets Sheet Within Document operations

URL: llms-txt#google-sheets-sheet-within-document-operations

Contents:

  • Append or Update Row
    • Options
  • Append Row
    • Options
  • Clear a sheet
  • Create a new sheet
    • Options
  • Delete a sheet
  • Delete Rows or Columns
  • Get Row(s)
  • Update Row

Use this operation to create, update, clear or delete a sheet in a Google spreadsheet from Google Sheets. Refer to Google Sheets for more information on the Google Sheets node itself.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Append or Update Row

Use this operation to update an existing row or add a new row at the end of the data if a matching entry isn't found in a sheet.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Sheets credentials.

  • Resource: Select Sheet Within Document.

  • Operation: Select Append or Update Row.

  • Document: Choose a spreadsheet that contains the sheet you want to append or update row(s) to.

    • Select From list to choose the spreadsheet title from the dropdown list, By URL to enter the url of the spreadsheet, or By ID to enter the spreadsheetId.
    • You can find the spreadsheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0.
  • Sheet: Choose a sheet you want to append or update row(s) to.

    • Select From list to choose the sheet title from the dropdown list, By URL to enter the url of the sheet, By ID to enter the sheetId, or By Name to enter the sheet title.
    • You can find the sheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId.
  • Mapping Column Mode:

    • Map Each Column Manually: Enter Values to Send for each column.
    • Map Automatically: n8n looks for incoming data that matches the columns in Google Sheets automatically. In this mode, make sure the incoming data fields are the same as the columns in Google Sheets. (Use an Edit Fields node before this node to change them if required.)
    • Nothing: Don't map any data.
  • Cell Format: Use this option to choose how to format the data in cells. Refer to Google Sheets API | CellFormat for more information.

    • Let Google Sheets format (default): n8n formats text and numbers in the cells according to Google Sheets' default settings.
    • Let n8n format: New cells in your sheet will have the same data types as the input data provided by n8n.
  • Data Location on Sheet: Use this option when you need to specify the data range on your sheet.

    • Header Row: Specify the row index that contains the column headers.
    • First Data Row: Specify the row index where the actual data starts.
  • Handling extra fields in input: When using Mapping Column Mode > Map Automatically, use this option to decide how to handle fields in the input data that don't match any existing columns in the sheet.

    • Insert in New Column(s) (default): Adds new columns for any extra data.
    • Ignore Them: Ignores extra data that don't match the existing columns.
    • Error: Throws an error and stops execution.
  • Use Append: Turn on this option to use the Google API append endpoint for adding new data rows.

    • By default, n8n appends empty rows or columns and then adds the new data. This approach can ensure data alignment but may be less efficient. Using the append endpoint can lead to better performance by minimizing the number of API calls and simplifying the process. But if the existing sheet data has inconsistencies such as gaps or breaks between rows and columns, n8n may add the new data in the wrong place, leading to misalignment issues.
    • Use this option when performance is a priority and the data structure in the sheet is consistent without gaps.

Refer to the Method: spreadsheets.values.update | Google Sheets API documentation for more information.

Append Row

Use this operation to append a new row at the end of the data in a sheet.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Sheets credentials.

  • Resource: Select Sheet Within Document.

  • Operation: Select Append Row.

  • Document: Choose a spreadsheet with the sheet you want to append a row to.

    • Select From list to choose the spreadsheet title from the dropdown list, By URL to enter the url of the spreadsheet, or By ID to enter the spreadsheetId.
    • You can find the spreadsheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0.
  • Sheet: Choose a sheet you want to append a row to.

    • Select From list to choose the sheet title from the dropdown list, By URL to enter the url of the sheet, By ID to enter the sheetId, or By Name to enter the sheet title.
    • You can find the sheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId.
  • Mapping Column Mode:

    • Map Each Column Manually: Select the Column to Match On when finding the rows to update. Enter Values to Send for each column.
    • Map Automatically: n8n looks for incoming data that matches the columns in Google Sheets automatically. In this mode, make sure the incoming data fields are the same as the columns in Google Sheets. (Use an Edit Fields node before this node to change them if required.)
    • Nothing: Don't map any data.
  • Cell Format: Use this option to choose how to format the data in cells. Refer to Google Sheets API | CellFormat for more information.

    • Let Google Sheets format (default): n8n formats text and numbers in the cells according to Google Sheets' default settings.
    • Let n8n format: New cells in your sheet will have the same data types as the input data provided by n8n.
  • Data Location on Sheet: Use this option when you need to specify the data range on your sheet.

    • Header Row: Specify the row index that contains the column headers.
    • First Data Row: Specify the row index where the actual data starts.
  • Handling extra fields in input: When using Mapping Column Mode > Map Automatically, use this option to decide how to handle fields in the input data that don't match any existing columns in the sheet.

    • Insert in New Column(s) (default): Adds new columns for any extra data.
    • Ignore Them: Ignores extra data that don't match the existing columns.
    • Error: Throws an error and stops execution.
  • Use Append: Turn on this option to use the Google API append endpoint for adding new data rows.

    • By default, n8n appends empty rows or columns and then adds the new data. This approach can ensure data alignment but may be less efficient. Using the append endpoint can lead to better performance by minimizing the number of API calls and simplifying the process. But if the existing sheet data has inconsistencies such as gaps or breaks between rows and columns, n8n may add the new data in the wrong place, leading to misalignment issues.
    • Use this option when performance is a priority and the data structure in the sheet is consistent without gaps.

Refer to the Method: spreadsheets.values.append | Google Sheets API documentation for more information.

Clear a sheet

Use this operation to clear all data from a sheet.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Sheets credentials.
  • Resource: Select Sheet Within Document.
  • Operation: Select Clear.
  • Document: Choose a spreadsheet with the sheet you want to clear data from.
    • Select From list to choose the spreadsheet title from the dropdown list, By URL to enter the url of the spreadsheet, or By ID to enter the spreadsheetId.
    • You can find the spreadsheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0.
  • Sheet: Choose a sheet you want to clear data from.
    • Select From list to choose the sheet title from the dropdown list, By URL to enter the url of the sheet, By ID to enter the sheetId, or By Name to enter the sheet title.
    • You can find the sheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId.
  • Clear: Select what data you want cleared from the sheet.
    • Whole Sheet: Clear the entire sheet's data. Turn on Keep First Row to keep the first row of the sheet.
    • Specific Rows: Clear data from specific rows. Also enter:
      • Start Row Number: Enter the first row number you want to clear.
      • Number of Rows to Delete: Enter the number of rows to clear. 1 clears data only in the row specified in Start Row Number.
    • Specific Columns: Clear data from specific columns. Also enter:
      • Start Column: Enter the first column you want to clear using the letter notation.
      • Number of Columns to Delete: Enter the number of columns to clear. 1 clears data only in the Start Column.
    • Specific Range: Enter the table range to clear data from, in A1 notation (for example, A2:D10).

Refer to the Method: spreadsheets.values.clear | Google Sheets API documentation for more information.

Create a new sheet

Use this operation to create a new sheet.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Sheets credentials.

  • Resource: Select Sheet Within Document.

  • Operation: Select Create.

  • Document: Choose a spreadsheet in which you want to create a new sheet.

    • Select From list to choose the spreadsheet title from the dropdown list, By URL to enter the url of the spreadsheet, or By ID to enter the spreadsheetId.
    • You can find the spreadsheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0.
  • Title: Enter the title for your new sheet.

  • Hidden: Turn on this option to keep the sheet hidden in the UI.

  • Right To Left: Turn on this option to use an RTL sheet instead of an LTR sheet.

  • Sheet ID: Enter the ID of the sheet.

    • You can find the sheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId
  • Sheet Index: By default, the new sheet is the last sheet in the spreadsheet. To override this behavior, enter the index you want the new sheet to use. When you add a sheet at a given index, Google increments the indices for all following sheets. Refer to Sheets | SheetProperties documentation for more information.

  • Tab Color: Enter the color as hex code or use the color picker to set the color of the tab in the UI.

Refer to the Method: spreadsheets.batchUpdate | Google Sheets API documentation for more information.

Delete a sheet

Use this operation to permanently delete a sheet.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Sheets credentials.
  • Resource: Select Sheet Within Document.
  • Operation: Select Delete.
  • Document: Choose a spreadsheet that contains the sheet you want to delete.
    • Select From list to choose the spreadsheet title from the dropdown list, By URL to enter the url of the spreadsheet, or By ID to enter the spreadsheetId.
    • You can find the spreadsheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0.
  • Sheet: Choose the sheet you want to delete.
    • Select From list to choose the sheet title from the dropdown list, By URL to enter the url of the sheet, By ID to enter the sheetId, or By Name to enter the name of the sheet.
    • You can find the sheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId.

Refer to the Method: spreadsheets.batchUpdate | Google Sheets API documentation for more information.

Delete Rows or Columns

Use this operation to delete rows or columns in a sheet.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Sheets credentials.
  • Resource: Select Sheet Within Document.
  • Operation: Select Delete Rows or Columns.
  • Document: Choose a spreadsheet that contains the sheet you want to delete rows or columns from.
    • Select From list to choose the spreadsheet title from the dropdown list, By URL to enter the url of the spreadsheet, or By ID to enter the spreadsheetId.
    • You can find the spreadsheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0.
  • Sheet: Choose the sheet in which you want to delete rows or columns.
    • Select From list to choose the sheet title from the dropdown list, By URL to enter the url of the sheet, By ID to enter the sheetId, or By Name to enter the name of the sheet.
    • You can find the sheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId.
  • Start Row Number or Start Column: Enter the row number or column letter to start deleting.
  • Number of Rows to Delete or Number of Columns to delete: Enter the number of rows or columns to delete.

Refer to the Method: spreadsheets.batchUpdate | Google Sheets API documentation for more information.

Get Row(s)

Use this operation to read one or more rows from a sheet.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Sheets credentials.

  • Resource: Select Sheet Within Document.

  • Operation: Select Get Row(s).

  • Document: Choose a spreadsheet that contains the sheet you want to get rows from.

    • Select From list to choose the spreadsheet title from the dropdown list, By URL to enter the url of the spreadsheet, or By ID to enter the spreadsheetId.
    • You can find the spreadsheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0.
  • Sheet: Choose a sheet you want to read rows from.

    • Select From list to choose the sheet title from the dropdown list, By URL to enter the url of the sheet, By ID to enter the sheetId, or By Name to enter the name of the sheet.
    • You can find the sheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId.
  • Filters: By default, the node returns all rows in the sheet. Set filters to return a limited set of results:

    • Column: Select the column in your sheet to search against.
    • Value: Enter a cell value to search for. You can drag input data parameters here. If your filter matches multiple rows, n8n returns the first result. If you want all matching rows:
      1. Under Options, select Add Option > When Filter Has Multiple Matches.
      2. Change When Filter Has Multiple Matches to Return All Matches.
  • Data Location on Sheet: Use this option to specify a data range. By default, n8n will detect the range automatically until the last row in the sheet.

  • Output Formatting: Use this option to choose how n8n formats the data returned by Google Sheets.

  • General Formatting:

    • Values (unformatted) (default): n8n removes currency signs and other special formatting. Data type remains as number.
    • Values (formatted): n8n displays the values as they appear in Google Sheets (for example, retaining commas or currency signs) by converting the data type from number to string.
    • Formulas: n8n returns the formula. It doesn't calculate the formula output. For example, if a cell B2 has the formula =A2, n8n returns B2's value as =A2 (in text). Refer to About date & time values | Google Sheets for more information.
  • Date Formatting: Refer to DateTimeRenderOption | Google Sheets for more information.

    • Formatted Text (default): As displayed in Google Sheets, which depends on the spreadsheet locale. For example 01/01/2024.
    • Serial Number: Number of days since December 30th 1899.
  • When Filter Has Multiple Matches: Set to Return All Matches to get multiple matches. By default only the first result gets returned.

n8n treats the first row in a Google Sheet as a heading row, and doesn't return it when reading all rows. If you want to read the first row, use the Options to set Data Location on Sheet.

Refer to the Method: spreadsheets.batchUpdate | Google Sheets API documentation for more information.

Update Row

Use this operation to update an existing row in a sheet. This operation only updates existing rows. To append rows when a matching entry isn't found in a sheet, use the Append or Update Row operation instead.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Sheets credentials.

  • Resource: Select Sheet Within Document.

  • Operation: Select Update Row.

  • Document: Choose a spreadsheet with the sheet you want to update.

    • Select From list to choose the spreadsheet title from the dropdown list, By URL to enter the url of the spreadsheet, or By ID to enter the spreadsheetId.
    • You can find the spreadsheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0.
  • Sheet: Choose a sheet you want to update.

    • Select From list to choose the sheet title from the dropdown list, By URL to enter the url of the sheet, By ID to enter the sheetId, or By Name to enter the sheet title.
    • You can find the sheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId.
  • Mapping Column Mode:

    • Map Each Column Manually: Enter Values to Send for each column.
    • Map Automatically: n8n looks for incoming data that matches the columns in Google Sheets automatically. In this mode, make sure the incoming data fields are the same as the columns in Google Sheets. (Use an Edit Fields node before this node to change them if required.)
    • Nothing: Don't map any data.
  • Cell Format: Use this option to choose how to format the data in cells. Refer to Google Sheets API | CellFormat for more information.

    • Let Google Sheets format (default): n8n formats text and numbers in the cells according to Google Sheets' default settings.
    • Let n8n format: New cells in your sheet will have the same data types as the input data provided by n8n.
  • Data Location on Sheet: Use this option when you need to specify the data range on your sheet.

    • Header Row: Specify the row index that contains the column headers.
    • First Data Row: Specify the row index where the actual data starts.

Refer to the Method: spreadsheets.batchUpdate | Google Sheets API documentation for more information.


One Simple API credentials

URL: llms-txt#one-simple-api-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token

You can use these credentials to authenticate the following nodes:

Create a One Simple API account.

Supported authentication methods

Refer to One Simple API's documentation for more information about the service.

To configure this credential, you'll need:

  • An API token: Create a new API token on the API Tokens page. Be sure you select appropriate permissions for the token.

You can also access the API Tokens page by selecting your Profile > API Tokens.


Zendesk node

URL: llms-txt#zendesk-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Zendesk node to automate work in Zendesk, and integrate Zendesk with other applications. n8n has built-in support for a wide range of Zendesk features, including creating and deleting tickets, users, and organizations.

On this page, you'll find a list of operations the Zendesk node supports and links to more resources.

Refer to Zendesk credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Ticket
    • Create a ticket
    • Delete a ticket
    • Get a ticket
    • Get all tickets
    • Recover a suspended ticket
    • Update a ticket
  • Ticket Field
    • Get a ticket field
    • Get all system and custom ticket fields
  • User
    • Create a user
    • Delete a user
    • Get a user
    • Get all users
    • Get a user's organizations
    • Get data related to the user
    • Search users
    • Update a user
  • Organization
    • Create an organization
    • Delete an organization
    • Count organizations
    • Get an organization
    • Get all organizations
    • Get data related to the organization
    • Update an organization

Templates and examples

Automate SIEM Alert Enrichment with MITRE ATT&CK, Qdrant & Zendesk in n8n

View template details

Sync Zendesk tickets with subsequent comments to Jira issues

View template details

Sync Zendesk tickets to Slack thread

View template details

Browse Zendesk integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Set the self-hosted instance timezone

URL: llms-txt#set-the-self-hosted-instance-timezone

The default timezone is America/New_York. For instance, the Schedule node uses it to know at what time the workflow should start. To set a different default timezone, set GENERIC_TIMEZONE to the appropriate value. For example, if you want to set the timezone to Berlin (Germany):

You can find the name of your timezone here.

Refer to Environment variables reference for more information on this variable.

Examples:

Example 1 (unknown):

export GENERIC_TIMEZONE=Europe/Berlin
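
If you run n8n in Docker, you can pass the same variable to the container instead of exporting it in a shell. A minimal sketch, assuming the official docker.n8n.io/n8nio/n8n image and omitting volume mounts:

# Pass the timezone variable to the n8n container
docker run -it --rm -p 5678:5678 -e GENERIC_TIMEZONE="Europe/Berlin" docker.n8n.io/n8nio/n8n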

crowd.dev node

URL: llms-txt#crowd.dev-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the crowd.dev node to automate work in crowd.dev and integrate crowd.dev with other applications. n8n has built-in support for a wide range of crowd.dev features, which includes creating, updating, and deleting members, notes, organizations, and tasks.

On this page, you'll find a list of operations the crowd.dev node supports, and links to more resources.

You can find authentication information for this node here.

  • Activity
    • Create or Update with a Member
    • Create
  • Automation
    • Create
    • Destroy
    • Find
    • List
    • Update
  • Member
    • Create or Update
    • Delete
    • Find
    • Update
  • Note
    • Create
    • Delete
    • Find
    • Update
  • Organization
    • Create
    • Delete
    • Find
    • Update
  • Task
    • Create
    • Delete
    • Find
    • Update

Templates and examples

Browse crowd.dev integration templates, or search all templates

n8n provides a trigger node for crowd.dev. You can find the trigger node docs here.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


TimescaleDB credentials

URL: llms-txt#timescaledb-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using database connection

You can use these credentials to authenticate the following nodes:

An available instance of TimescaleDB.

Supported authentication methods

  • Database connection

Refer to the Timescale documentation for more information about the service.

Using database connection

To configure this credential, you'll need:

  • The Host: The fully qualified server name or IP address of your TimescaleDB server.
  • The Database: The name of the database to connect to.
  • A User: The user name you want to log in with.
  • A Password: Enter the password for the database user you are connecting to.
  • Ignore SSL Issues: If turned on, n8n will connect even if SSL certificate validation fails and you won't see the SSL selector.
  • SSL: This setting controls the ssl-mode connection string for the connection. Options include:
    • Allow: Sets the ssl-mode parameter to allow. First try a non-SSL connection; if that fails, try an SSL connection.
    • Disable: Sets the ssl-mode parameter to disable. Only try a non-SSL connection.
    • Require: Sets the ssl-mode parameter to require, which is the default for TimescaleDB connection strings. Only try an SSL connection. If a root CA file is present, verify that a trusted certificate authority (CA) issued the server certificate.
  • Port: The port number of the TimescaleDB server.

Refer to the Timescale connection settings documentation for more information about the non-SSL fields. Refer to Connect with a stricter SSL for more information about the SSL options.


Help Scout node

URL: llms-txt#help-scout-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Help Scout node to automate work in Help Scout, and integrate Help Scout with other applications. n8n has built-in support for a wide range of Help Scout features, including creating, updating, deleting, and getting conversations and customers.

On this page, you'll find a list of operations the Help Scout node supports and links to more resources.

Refer to Help Scout credentials for guidance on setting up authentication.

  • Conversation
    • Create a new conversation
    • Delete a conversation
    • Get a conversation
    • Get all conversations
  • Customer
    • Create a new customer
    • Get a customer
    • Get all customers
    • Get customer property definitions
    • Update a customer
  • Mailbox
    • Get data of a mailbox
    • Get all mailboxes
  • Thread
    • Create a new chat thread
    • Get all chat threads

Templates and examples

Browse Help Scout integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


SecurityScorecard credentials

URL: llms-txt#securityscorecard-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a SecurityScorecard account.

Supported authentication methods

Refer to SecurityScorecard's Developer documentation and API documentation for more information about the service.

To configure this credential, you'll need:

  • An API key

Ollama credentials

URL: llms-txt#ollama-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using instance URL
    • Ollama and self-hosted n8n

You can use these credentials to authenticate the following nodes:

Create and run an Ollama instance with one user. Refer to the Ollama Quick Start for more information.

Supported authentication methods

Refer to Ollama's API documentation for more information about the service.

View n8n's Advanced AI documentation.

Using instance URL

To configure this credential, you'll need:

  • The Base URL of your Ollama instance or remote authenticated Ollama instances.
  • (Optional) The API Key for Bearer token authentication if connecting to a remote, authenticated proxy.

The default Base URL is http://localhost:11434, but if you've set the OLLAMA_HOST environment variable, enter that value. If you have issues connecting to a locally running Ollama server, try 127.0.0.1 instead of localhost.

If you're connecting to Ollama through authenticated proxy services (such as Open WebUI) you must include an API key. If you don't need authentication, leave this field empty. When provided, the API key is sent as a Bearer token in the Authorization header of the request to the Ollama API.

Refer to How do I configure Ollama server? for more information.

Ollama and self-hosted n8n

If you're self-hosting n8n on the same machine as Ollama, you may run into issues if they're running in different containers.

For this setup, open a specific port for n8n to communicate with Ollama by setting the OLLAMA_ORIGINS variable or adjusting OLLAMA_HOST to an address the other container can access.

Refer to Ollama's How can I allow additional web origins to access Ollama? for more information.
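
As an illustrative sketch only (assumptions: Ollama runs directly on the host, n8n runs in a Docker container, and your Docker version supports the host.docker.internal host-gateway mapping):

# Bind Ollama to all interfaces so containers can reach it
OLLAMA_HOST=0.0.0.0:11434 ollama serve
# On Linux, start the n8n container with host.docker.internal mapped to the host
docker run -it --rm -p 5678:5678 --add-host=host.docker.internal:host-gateway docker.n8n.io/n8nio/n8n
# Then set the credential's Base URL to http://host.docker.internal:11434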


Freshservice credentials

URL: llms-txt#freshservice-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Freshservice account.

Supported authentication methods

Refer to Freshservice's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: Refer to the Freshservice API authentication documentation for detailed instructions on getting your API key.
  • Your Freshservice Domain: Use the subdomain of your Freshservice account. This is part of the URL, for example https://<subdomain>.freshservice.com. So if you access Freshservice through https://n8n.freshservice.com, enter n8n as your Domain.

Dropcontact node

URL: llms-txt#dropcontact-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Dropcontact node to automate work in Dropcontact, and integrate Dropcontact with other applications. n8n has built-in support for a wide range of Dropcontact features, including fetching contacts.

On this page, you'll find a list of operations the Dropcontact node supports and links to more resources.

Refer to Dropcontact credentials for guidance on setting up authentication.

  • Enrich
  • Fetch Request

Templates and examples

Create HubSpot contacts from LinkedIn post interactions

View template details

Enrich up to 1500 emails per hour with Dropcontact batch requests

View template details

Enrich Google Sheet contacts with Dropcontact

View template details

Browse Dropcontact integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


S3 credentials

URL: llms-txt#s3-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using S3 endpoint
    • Using DigitalOcean Spaces
    • Using Wasabi

You can use these credentials to authenticate the following nodes:

Create an account on an S3-compatible server. Use the S3 node for generic or non-AWS S3-compatible providers, such as DigitalOcean Spaces or Wasabi.

Supported authentication methods

Refer to your S3-compatible provider's documentation for more information on the services. For example, refer to Wasabi's REST API documentation or DigitalOcean's Spaces API Reference Documentation.

To configure this credential, you'll need:

  • An S3 Endpoint: Enter the URL endpoint for the S3 storage backend.
  • A Region: Enter the region for your S3 storage. Some providers call this the "region slug."
  • An Access Key ID: Enter the S3 access key your S3 provider uses to access the bucket or space. Some providers call this API keys.
  • A Secret Access Key: Enter the secret access key for the Access Key ID.
  • Force Path Style: When turned on, the connection uses path-style addressing for buckets.
  • Ignore SSL Issues: When turned on, n8n will connect even if SSL certificate validation fails.

More detailed instructions for DigitalOcean Spaces and Wasabi follow. If you're using a different provider, refer to their documentation for more information.

Using DigitalOcean Spaces

To configure the credential for use with DigitalOcean spaces:

  1. In DigitalOcean, go to the control panel and open Settings. Your endpoint should be listed there. Prepend https:// to that endpoint and enter it as the S3 Endpoint in n8n.
    • Your DigitalOcean endpoint depends on the data center region your bucket's in.
  2. For the Region, enter the region your bucket's located in, for example, nyc3.
    • If you plan to use this credential to create new Spaces, enter us-east-1 instead.
  3. From your DigitalOcean control panel, go to API.
  4. Open the Spaces Keys tab.
  5. Select Generate New Key.
  6. Enter a Name for your key, like n8n integration and select the checkmark.
  7. Copy the Key displayed next to the name and enter this as the Access Key ID in n8n.
  8. Copy the Secret value and enter this as the Secret Access Key in n8n.
  9. Keep the Force Path Style toggle turned off unless you want to use subdomain/virtual calling format.
  10. Decide how you want the n8n credential to handle SSL:
    • To respect SSL certificate validation, keep the default of Ignore SSL Issues turned off.
    • To connect even if SSL certificate validation fails, turn on Ignore SSL Issues.

Refer to DigitalOcean's Spaces API Reference Documentation for more information.

Using Wasabi

To configure the credential for use with Wasabi:

  1. For the S3 Endpoint, enter the service URL for your bucket's region. Start it with https://.
  2. For the Region, enter the region slug portion of the service URL. For example, if you entered https://s3.us-east-2.wasabisys.com as the S3 Endpoint, us-east-2 is the region.
  3. Log in to your Wasabi Console as the root user.
  4. Open the Menu and select Access Keys.
  5. Select CREATE NEW ACCESS KEY.
  6. Select whether the key is for the Root User or a Sub-User and select CREATE.
  7. Copy the Access Key and enter it in n8n as the Access Key ID.
  8. Copy the Secret Key and enter it in n8n as the Secret Access Key.
  9. Wasabi recommends turning on the Force Path Style toggle "because the path-style offers the greatest flexibility in bucket names, avoiding domain name issues." Refer to the Wasabi REST API Introduction for more information.
  10. Decide how you want the n8n credential to handle SSL:
    • To respect SSL certificate validation, keep the default of Ignore SSL Issues turned off.
    • To connect even if SSL certificate validation fails, turn on Ignore SSL Issues.

Google Cloud Natural Language node

URL: llms-txt#google-cloud-natural-language-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Google Cloud Natural Language node to automate work in Google Cloud Natural Language, and integrate Google Cloud Natural Language with other applications. n8n has built-in support for a wide range of Google Cloud Natural Language features, including analyzing documents.

On this page, you'll find a list of operations the Google Cloud Natural Language node supports and links to more resources.

Refer to Google Cloud Natural Language credentials for guidance on setting up authentication.

  • Document
    • Analyze Sentiment

Templates and examples

ETL pipeline for text processing

View template details

Automate testimonials in Strapi with n8n

View template details

Add positive feedback messages to a table in Notion

View template details

Browse Google Cloud Natural Language integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Task runner environment variables

URL: llms-txt#task-runner-environment-variables

Contents:

  • n8n instance environment variables
  • Task runner launcher environment variables
  • Task runner environment variables (all languages)
  • Task runner environment variables (JavaScript)
  • Task runner environment variables (Python)

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

Task runners execute code defined by the Code node.

n8n instance environment variables

| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| N8N_RUNNERS_ENABLED | Boolean | false | Are task runners enabled. |
| N8N_RUNNERS_MODE | Enum string: internal, external | internal | How to launch and run the task runner. internal means n8n will launch a task runner as child process. external means an external orchestrator will launch the task runner. |
| N8N_RUNNERS_AUTH_TOKEN | String | Random string | Shared secret used by a task runner to authenticate to n8n. Required when using external mode. |
| N8N_RUNNERS_BROKER_PORT | Number | 5679 | Port the task broker listens on for task runner connections. |
| N8N_RUNNERS_BROKER_LISTEN_ADDRESS | String | 127.0.0.1 | Address the task broker listens on. |
| N8N_RUNNERS_MAX_PAYLOAD | Number | 1 073 741 824 | Maximum payload size in bytes for communication between a task broker and a task runner. |
| N8N_RUNNERS_MAX_OLD_SPACE_SIZE | String | | The --max-old-space-size option to use for a task runner (in MB). By default, Node.js will set this based on available memory. |
| N8N_RUNNERS_MAX_CONCURRENCY | Number | 5 | The number of concurrent tasks a task runner can execute at a time. |
| N8N_RUNNERS_TASK_TIMEOUT | Number | 60 | How long (in seconds) a task can take to complete before the task aborts and the runner restarts. Must be greater than 0. |
| N8N_RUNNERS_HEARTBEAT_INTERVAL | Number | 30 | How often (in seconds) the runner must send a heartbeat to the broker, else the task aborts and the runner restarts. Must be greater than 0. |
| N8N_RUNNERS_INSECURE_MODE | Boolean | false | Whether to disable all security measures in the task runner, for compatibility with modules that rely on insecure JS features. Discouraged for production use. |
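
For example, a minimal sketch for enabling internal-mode task runners on a self-hosted instance could set the following before starting n8n (the values shown are illustrative, not recommendations):

# Enable task runners launched as a child process of the n8n instance
export N8N_RUNNERS_ENABLED=true
export N8N_RUNNERS_MODE=internal
export N8N_RUNNERS_MAX_CONCURRENCY=5
export N8N_RUNNERS_TASK_TIMEOUT=60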

Task runner launcher environment variables

| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| N8N_RUNNERS_LAUNCHER_LOG_LEVEL | Enum string: debug, info, warn, error | info | Which log messages to show. |
| N8N_RUNNERS_AUTH_TOKEN | String | - | Shared secret used to authenticate to n8n. |
| N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT | Number | 15 | The number of seconds to wait before shutting down an idle runner. |
| N8N_RUNNERS_TASK_BROKER_URI | String | http://127.0.0.1:5679 | The URI of the task broker server (n8n instance). |
| N8N_RUNNERS_LAUNCHER_HEALTH_CHECK_PORT | Number | 5680 | Port for the launcher's health check server. |
| N8N_RUNNERS_MAX_PAYLOAD | Number | 1 073 741 824 | Maximum payload size in bytes for communication between a task broker and a task runner. |
| N8N_RUNNERS_MAX_CONCURRENCY | Number | 5 | The number of concurrent tasks a task runner can execute at a time. |

Task runner environment variables (all languages)

| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| N8N_RUNNERS_GRANT_TOKEN | String | Random string | Token the runner uses to authenticate with the task broker. This is automatically provided by the launcher. |
| N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT | Number | 15 | The number of seconds to wait before shutting down an idle runner. |
| N8N_RUNNERS_TASK_BROKER_URI | String | http://127.0.0.1:5679 | The URI of the task broker server (n8n instance). |
| N8N_RUNNERS_LAUNCHER_HEALTH_CHECK_PORT | Number | 5680 | Port for the launcher's health check server. |
| N8N_RUNNERS_MAX_PAYLOAD | Number | 1 073 741 824 | Maximum payload size in bytes for communication between a task broker and a task runner. |
| N8N_RUNNERS_MAX_CONCURRENCY | Number | 5 | The number of concurrent tasks a task runner can execute at a time. |

Task runner environment variables (JavaScript)

| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| NODE_FUNCTION_ALLOW_BUILTIN | String | - | Permit users to import specific built-in modules in the Code node. Use * to allow all. n8n disables importing modules by default. |
| NODE_FUNCTION_ALLOW_EXTERNAL | String | - | Permit users to import specific external modules (from n8n/node_modules) in the Code node. n8n disables importing modules by default. |
| N8N_RUNNERS_ALLOW_PROTOTYPE_MUTATION | Boolean | false | Whether to allow prototype mutation for external libraries. Set to true to allow modules that rely on runtime prototype mutation (for example, puppeteer) at the cost of relaxing security. |
| GENERIC_TIMEZONE | * | America/New_York | The same default timezone as configured for the n8n instance. |
| NODE_OPTIONS | String | - | Options for Node.js. |
| N8N_RUNNERS_MAX_OLD_SPACE_SIZE | String | | The --max-old-space-size option to use for a task runner (in MB). By default, Node.js will set this based on available memory. |
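
As an illustrative sketch, to let Code node users import the built-in crypto module and the external moment module (assuming moment is actually present in n8n/node_modules), you could set:

# Allow-list one built-in and one external module for the Code node
export NODE_FUNCTION_ALLOW_BUILTIN=crypto
export NODE_FUNCTION_ALLOW_EXTERNAL=moment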

Task runner environment variables (Python)

| Variable | Type | Default | Description |
|----------|------|---------|-------------|
| N8N_RUNNERS_STDLIB_ALLOW | String | - | Python standard library modules that you can use in the Code node, including their submodules. Use * to allow all stdlib modules. n8n disables all Python standard library imports by default. |
| N8N_RUNNERS_EXTERNAL_ALLOW | String | - | Third-party Python modules that are allowed to be used in the Code node, including their submodules. Use * to allow all external modules. n8n disables all third-party Python modules by default. Third-party Python modules must be included in the n8nio/runners image. |
| N8N_RUNNERS_BUILTINS_DENY | String | eval,exec,compile,open,input,breakpoint,getattr,object,type,vars,setattr,delattr,hasattr,dir,memoryview,__build_class__,globals,locals | Python built-ins that you can't use in the Code node. Set to an empty string to allow all built-ins. |
| N8N_BLOCK_RUNNER_ENV_ACCESS | Boolean | true | Whether to block access to the runner's environment from within Python code tasks. Set to false to enable all Python code node users access to the runner's environment via os.environ. For security reasons, environment variable access is blocked by default. |
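
A comparable sketch for Python tasks, assuming the allow-lists take comma-separated module names like their JavaScript counterparts and that any third-party module is included in the runner image (numpy here is only an example):

# Allow selected standard library and third-party Python modules in the Code node
export N8N_RUNNERS_STDLIB_ALLOW=json,datetime
export N8N_RUNNERS_EXTERNAL_ALLOW=numpy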

Clockify credentials

URL: llms-txt#clockify-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Clockify account.

Supported authentication methods

Refer to Clockify's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API key

Mindee node

URL: llms-txt#mindee-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Mindee node to automate work in Mindee, and integrate Mindee with other applications. n8n has built-in support for a wide range of Mindee features, including predicting invoices.

On this page, you'll find a list of operations the Mindee node supports and links to more resources.

Refer to Mindee credentials for guidance on setting up authentication.

  • Invoice
    • Predict
  • Receipt
    • Predict

Templates and examples

Extract expenses from emails and add to Google Sheets

View template details

Notify on new emails with invoices in Slack

View template details

Extract information from an image of a receipt

View template details

Browse Mindee integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


QuestDB node

URL: llms-txt#questdb-node

Contents:

  • Operations
  • Templates and examples
  • Node reference
    • Specify a column's data type

Use the QuestDB node to automate work in QuestDB, and integrate QuestDB with other applications. n8n supports executing an SQL query and inserting rows in a database with QuestDB.

On this page, you'll find a list of operations the QuestDB node supports and links to more resources.

Refer to QuestDB credentials for guidance on setting up authentication.

  • Execute an SQL query.
  • Insert rows in a database.

Templates and examples

Browse QuestDB integration templates, or search all templates

Specify a column's data type

To specify a column's data type, append the column name with :type, where type is the data type you want for the column. For example, if you want to specify the type int for the column id and type text for the column name, you can use the following snippet in the Columns field: id:int,name:text.


TheHive 5 Trigger node

URL: llms-txt#thehive-5-trigger-node

Contents:

  • Events
  • Related resources
  • Configure a webhook in TheHive

Use the TheHive 5 Trigger node to respond to events in TheHive and integrate TheHive with other applications. n8n has built-in support for a wide range of TheHive events, including alerts, cases, comments, pages, and tasks.

On this page, you'll find a list of events the TheHive 5 Trigger node can respond to and links to more resources.

TheHive and TheHive 5

n8n provides two nodes for TheHive. Use this node (TheHive 5 Trigger) if you want to use TheHive's version 5 API. If you want to use version 3 or 4, use TheHive Trigger.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's TheHive 5 Trigger integrations page.

  • Alert
    • Created
    • Deleted
    • Updated
  • Case
    • Created
    • Deleted
    • Updated
  • Comment
    • Created
    • Deleted
    • Updated
  • Observable
    • Created
    • Deleted
    • Updated
  • Page
    • Created
    • Deleted
    • Updated
  • Task
    • Created
    • Deleted
    • Updated
  • Task log
    • Created
    • Deleted
    • Updated

n8n provides an app node for TheHive 5. You can find the node docs here.

Refer to TheHive's documentation for more information about the service.

Configure a webhook in TheHive

To configure the webhook for your TheHive instance:

  1. Copy the testing and production webhook URLs from the TheHive 5 Trigger node.

  2. Add the following lines to the application.conf file. This is TheHive's configuration file:

  3. Replace TESTING_WEBHOOK_URL and PRODUCTION_WEBHOOK_URL with the URLs you copied in the previous step.

  4. Replace TESTING_WEBHOOK_NAME and PRODUCTION_WEBHOOK_NAME with your preferred endpoint names.

  5. Replace ORGANIZATION_NAME with your organization name.

  6. Execute the following cURL command to enable notifications:

Examples:

Example 1 (unknown):

notification.webhook.endpoints = [
   	{
   		name: TESTING_WEBHOOK_NAME
   		url: TESTING_WEBHOOK_URL
   		version: 1
   		wsConfig: {}
   		includedTheHiveOrganisations: ["ORGANIZATION_NAME"]
   		excludedTheHiveOrganisations: []
   	},
   	{
   		name: PRODUCTION_WEBHOOK_NAME
   		url: PRODUCTION_WEBHOOK_URL
   		version: 1
   		wsConfig: {}
   		includedTheHiveOrganisations: ["ORGANIZATION_NAME"]
   		excludedTheHiveOrganisations: []
   	}
   ]

Example 2 (unknown):

curl -XPUT -uTHEHIVE_USERNAME:THEHIVE_PASSWORD -H 'Content-type: application/json' THEHIVE_URL/api/config/organisation/notification -d '
   {
   	"value": [
   		{
   		"delegate": false,
   		"trigger": { "name": "AnyEvent"},
   		"notifier": { "name": "webhook", "endpoint": "TESTING_WEBHOOK_NAME" }
   		},
   		{
   		"delegate": false,
   		"trigger": { "name": "AnyEvent"},
   		"notifier": { "name": "webhook", "endpoint": "PRODUCTION_WEBHOOK_NAME" }
   		}
   	]
   }'

MessageBird node

URL: llms-txt#messagebird-node

Contents:

  • Operations
  • Templates and examples

Use the MessageBird node to automate work in MessageBird, and integrate MessageBird with other applications. n8n has built-in support for a wide range of MessageBird features, including sending messages, and getting balances.

On this page, you'll find a list of operations the MessageBird node supports and links to more resources.

Refer to MessageBird credentials for guidance on setting up authentication.

  • SMS
    • Send text messages (SMS)
  • Balance
    • Get the balance

Templates and examples

Browse MessageBird integration templates, or search all templates


Text courses

URL: llms-txt#text-courses

Contents:

  • Available courses

If you've found your way here, it means you're serious about your interest in automation. Maybe you're tired of manually entering data into the same spreadsheet every day, of clicking through a series of tabs and buttons for that one piece of information you need, of managing tens of different tools and systems.

Whatever the reason, one thing is clear: you shouldn't spend precious time doing things that don't spark joy or contribute to your personal and professional growth.

These tasks can and should be automated! And you don't need advanced technical knowledge or excellent coding skills to do this: with no-code tools like n8n, automation is for everyone.


AMQP Sender node

URL: llms-txt#amqp-sender-node

Contents:

  • Operations
  • Templates and examples

Use the AMQP Sender node to automate work in AMQP Sender, and integrate AMQP Sender with other applications. n8n has built-in support for a wide range of AMQP Sender features, including sending messages.

On this page, you'll find a list of operations the AMQP Sender node supports and links to more resources.

Refer to AMQP Sender credentials for guidance on setting up authentication.

Templates and examples

Browse AMQP Sender integration templates, or search all templates


Cloud data management

URL: llms-txt#cloud-data-management

Contents:

  • Memory limits on each Cloud plan
  • How to reduce memory consumption in your workflow
  • How to manage execution data on Cloud
  • Cloud data pruning and out of memory incident prevention
    • Automatic data pruning
    • Manual data pruning

There are two concerns when managing data on Cloud:

  • Memory usage: complex workflows processing large amounts of data can exceed n8n's memory limits. If this happens, the instance can crash and become inaccessible.
  • Data storage: depending on your execution settings and volume, your n8n database can grow in size and run out of storage.

To avoid these issues, n8n recommends that you build your workflows with memory efficiency in mind, and don't save unnecessary data.

Memory limits on each Cloud plan

  • Trial: 320MiB RAM, 10 millicore CPU burstable

  • Starter: 320MiB RAM, 10 millicore CPU burstable

  • Pro-1 (10k executions): 640MiB RAM, 20 millicore CPU burstable

  • Pro-2 (50k executions): 1280MiB RAM, 80 millicore CPU burstable

  • Enterprise: 4096MiB RAM, 80 millicore CPU burstable

  • Start: 320MiB RAM, 10 millicore CPU burstable

  • Power: 1280MiB RAM, 80 millicore CPU burstable

n8n gives each instance up to 100GB of data storage.

How to reduce memory consumption in your workflow

The way you build workflows affects how much data they consume when executed. Although these guidelines aren't applicable to all cases, they provide a baseline of best practices to avoid exceeding instance memory.

  • Split the data processed into smaller chunks. For example, instead of fetching 10,000 rows with each execution, process 200 rows with each execution.
  • Avoid using the Code node where possible.
  • Avoid manual executions when processing larger amounts of data.
  • Split the workflow up into sub-workflows and ensure each sub-workflow returns a limited amount of data to its parent workflow.

Splitting the workflow might seem counter-intuitive at first as it usually requires adding at least two more nodes: the Loop Over Items node to split up the items into smaller batches and the Execute Workflow node to start the sub-workflow.

However, as long as your sub-workflow does the heavy lifting for each batch and then returns only a small result set to the main workflow, this reduces memory consumption. This is because the sub-workflow only holds the data for the current batch in memory, after which the memory is free again.

Note that n8n itself consumes memory to run. On average, the software alone uses around 180MiB RAM.

Interactions with the UI also consume memory. Playing around with the workflow UI while it performs heavy executions could also push the memory capacity over the limit.

How to manage execution data on Cloud

Execution data includes node data, parameters, variables, execution context, and binary data references. It's text-based.

Binary data is non-textual data that n8n can't represent as plain text. This is files and media such as images, documents, audio files, and videos. It's much larger than textual data.

If a workflow consumes a large amount of data and is past the testing stage, it's a good idea to stop saving successful executions.

There are two ways you can control how much execution data n8n stores in the database:

In the admin dashboard:

  1. From your workspace or editor, navigate to Admin Panel.
  2. Select Manage.
  3. In Executions to Save deselect the executions you don't want to log.

In your workflow settings:

  1. Select the Options menu.
  2. Select Settings. n8n opens the Workflow settings modal.
  3. Change Save successful production executions to Do not save.

Cloud data pruning and out of memory incident prevention

Automatic data pruning

n8n automatically prunes execution logs after a certain time or once you reach the max storage limit, whichever comes first. The pruning always happens from oldest to newest, and the limits depend on your Cloud plan:

  • Start and Starter plans: max 2,500 executions saved and 7 days of execution log retention.
  • Pro and Power plans: max 25,000 executions saved and 30 days of execution log retention.
  • Enterprise plan: max 50,000 executions saved and unlimited execution log retention.

Manual data pruning

Heavier executions and use cases can exceed database capacity despite the automatic pruning practices. In cases like this, n8n will manually prune data to protect instance stability.

  1. An alert system warns n8n if an instance is at 85% disk capacity.
  2. n8n prunes execution data. n8n does this by running a backup of the instance (workflows, users, credentials and execution data) and restoring it without execution data.

Due to the human steps in this process, the alert system isn't perfect. If warnings are triggered after hours or if data consumption rates are high, there might not be time to prune the data before the remaining disk space fills up.


Arrays

URL: llms-txt#arrays

Contents:

  • average(): Number
  • chunk(size: Number): Array
  • compact(): Array
  • difference(arr: Array): Array
  • intersection(arr: Array): Array
  • first(): Array item
  • isEmpty(): Boolean
  • isNotEmpty(): Boolean
  • last(): Array item
  • max(): Number

A reference document listing built-in convenience functions to support data transformation in expressions for arrays.

JavaScript in expressions

You can use any JavaScript in expressions. Refer to Expressions for more information.

average(): Number

Returns the average of the values in the array.
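
For example (the sample values are illustrative):

// Input
{{ [2, 4, 6].average() }}
// Output
4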


chunk(size: Number): Array

Splits arrays into chunks with a length of size

Function parameters

The size of each chunk.
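
For example (sample values are illustrative; the output is shown as plain JSON):

// Input
{{ [1, 2, 3, 4, 5].chunk(2) }}
// Output
[[1,2],[3,4],[5]]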


compact(): Array

Removes empty values from the array.


difference(arr: Array): Array

Compares two arrays. Returns all elements in the base array that aren't present in arr.

Function parameters

The array to compare to the base array.
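
For example (sample values are illustrative; the output is shown as plain JSON):

// Input
{{ [1, 2, 3, 4].difference([2, 4]) }}
// Output
[1,3]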


intersection(arr: Array): Array

Compares two arrays. Returns all elements in the base array that are present in arr.

Function parameters

The array to compare to the base array.
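
For example (sample values are illustrative; the output is shown as plain JSON):

// Input
{{ [1, 2, 3, 4].intersection([2, 4, 6]) }}
// Output
[2,4]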


first(): Array item

Returns the first element of the array.


isEmpty(): Boolean

Checks if the array doesn't have any elements.


isNotEmpty(): Boolean

Checks if the array has elements.


last(): Array item

Returns the last element of the array.


max(): Number

Returns the highest value in an array.


merge(arr: Array): Array

Merges two Object-arrays into one array by merging the key-value pairs of each element.

Function parameters

The array to merge into the base array.


min(): Number

Gets the minimum value from a number-only array.


pluck(fieldName?: String): Array

Returns an array of Objects where keys equal the given field names.

Function parameters

fieldNameOptionalString

The key(s) you want to retrieve. You can enter as many keys as you want, as comma-separated strings.


randomItem(): Array item

Returns a random element from an array.


removeDuplicates(key?: String): Array

Removes duplicates from an array.

Function parameters

A key, or comma-separated list of keys, to check for duplicates.
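
For example, deduplicating by the id key, based on the description above (the sample items are illustrative; the output is shown as plain JSON):

// Input
{{ [{"id":1,"name":"apple"},{"id":1,"name":"apple"},{"id":2,"name":"pear"}].removeDuplicates("id") }}
// Output
[{"id":1,"name":"apple"},{"id":2,"name":"pear"}]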


renameKeys(from: String, to: String): Array

Renames all matching keys in the array. You can rename more than one key by entering a series of comma-separated strings, in the pattern oldKeyName, newKeyName.

Function parameters

The key you want to rename.
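
For example, renaming the name key to fruit, based on the description above (the sample items are illustrative; the output is shown as plain JSON):

// Input
{{ [{"name":"apple"},{"name":"pear"}].renameKeys("name", "fruit") }}
// Output
[{"fruit":"apple"},{"fruit":"pear"}]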


smartJoin(keyField: String, nameField: String): Array

Operates on an array of objects where each object contains key-value pairs. Creates a new object containing key-value pairs, where the key is the value of the first pair, and the value is the value of the second pair. Removes non-matching and empty values and trims any whitespace before joining.

Function parameters

keyFieldRequiredString

nameFieldRequiredString


sum(): Number

Returns the total sum of all the values in an array of parsable numbers.


toJsonString(): String

Converts an array to a JSON string. Equivalent to JSON.stringify.


union(arr: Array): Array

Concatenates two arrays and then removes duplicates.

Function parameters

The array to compare to the base array.
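
For example (sample values are illustrative; the output is shown as plain JSON):

// Input
{{ [1, 2, 3].union([2, 3, 4]) }}
// Output
[1,2,3,4]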


unique(key?: String): Array

Removes duplicates from an array.

Function parameters

A key, or comma-separated list of keys, to check for duplicates.


Examples:

Example 1 (unknown):

// Input
{{ [{"type":"fruit", "name":"apple"},{"type":"vegetable", "name":"carrot"} ].smartJoin("type","name") }}
// Output
[Object: {"fruit":"apple","vegetable":"carrot"}]

Install community nodes from npm in the n8n app

URL: llms-txt#install-community-nodes-from-npm-in-the-n8n-app

Contents:

  • Install a community node
  • Uninstall a community node
  • Upgrade a community node
    • Upgrade to the latest version
    • Upgrade to a specific version
  • Downgrade a community node

Only for instance owners of self-hosted n8n instances

Only the n8n instance owner of a self-hosted n8n instance can install and manage community nodes from npm. The instance owner is the person who sets up and manages user management.

Admin accounts can also uninstall any community node, verified or unverified. This helps them remove problematic nodes that may affect the instance's health and functionality.

Install a community node

To install a community node from npm:

  1. Go to Settings > Community Nodes.
  2. Select Install.
  3. Find the node you want to install:
    1. Select Browse. n8n takes you to an npm search results page, showing all npm packages tagged with the keyword n8n-community-node-package.
    2. Browse the list of results. You can filter the results or add more keywords.
    3. Once you find the package you want, make a note of the package name. If you want to install a specific version, make a note of the version number as well.
    4. Return to n8n.
  4. Enter the npm package name, and version number if required. For example, consider a community node designed to access a weather API called "Storms." The package name is n8n-nodes-storms, and it has three major versions.
    • To install the latest version of the package: enter n8n-nodes-storms in Enter npm package name.
    • To install version 2.3: enter n8n-nodes-storms@2.3 in Enter npm package name.
  5. Agree to the risks of using community nodes: select I understand the risks of installing unverified code from a public source.
  6. Select Install. n8n installs the node, and returns to the Community Nodes list in Settings.

Nodes on the blocklist

n8n maintains a blocklist of community nodes that it prevents you from installing. Refer to n8n community node blocklist for more information.

Uninstall a community node

To uninstall a community node:

  1. Go to Settings > Community nodes.
  2. On the node you want to uninstall, select Options.
  3. Select Uninstall package.
  4. Select Uninstall Package in the confirmation modal.

Upgrade a community node

Breaking changes in versions

Node developers may introduce breaking changes in new versions of their nodes. A breaking change is an update that breaks previous functionality. Depending on the node versioning approach that a node developer chooses, upgrading to a version with a breaking change could cause all workflows using the node to break. Be careful when upgrading your nodes. If you find that an upgrade causes issues, you can downgrade.

Upgrade to the latest version

You can upgrade community nodes to the latest version from the node list in Settings > community nodes.

When a new version of a community node is available, n8n displays an Update button on the node. Click the button to upgrade to the latest version.

Upgrade to a specific version

To upgrade to a specific version (a version other than the latest), uninstall the node, then reinstall it, making sure to specify the target version. Follow the Installation instructions for more guidance.

Downgrade a community node

If there is a problem with a particular version of a community node, you may want to roll back to a previous version.

To do this, uninstall the community node, then reinstall it, targeting a specific node version. Follow the Installation instructions for more guidance.


Emelia Trigger node

URL: llms-txt#emelia-trigger-node

Contents:

  • Events

Emelia is a cold-mailing tool.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Emelia Trigger integrations page.

  • Email Bounced
  • Email Opened
  • Email Replied
  • Email Sent
  • Link Clicked
  • Unsubscribed Contact

What's a chain in AI?

URL: llms-txt#what's-a-chain-in-ai?

Contents:

  • Chains in n8n

Chains bring together different components of AI to create a cohesive system. They set up a sequence of calls between the components. These components can include models and memory (though note that in n8n chains can't use memory).

n8n provides three chain nodes:

  • Basic LLM Chain: use to interact with an LLM, without any additional components.
  • Question and Answer Chain: can connect to a vector store using a retriever, or to an n8n workflow using the Workflow Retriever node. Use this if you want to create a workflow that supports asking questions about specific documents.
  • Summarization Chain: takes an input and returns a summary.

There's an important difference between chains in n8n and in other tools such as LangChain: none of the chain nodes support memory. This means they can't remember previous user queries. If you use LangChain to code an AI application, you can give your application memory. In n8n, if you need your workflow to support memory, use an agent. This is essential if you want users to be able to have a natural ongoing conversation with your app.


AI Agent node common issues

URL: llms-txt#ai-agent-node-common-issues

Contents:

  • Internal error: 400 Invalid value for 'content'
  • Error in sub-node Simple Memory
  • A Chat Model sub-node must be connected error
  • No prompt specified error

Here are some common errors and issues with the AI Agent node and steps to resolve or troubleshoot them.

Internal error: 400 Invalid value for 'content'

A full error message might look like this:

This error can occur if the Prompt input contains a null value.

You might see this in one of two scenarios:

  1. When you've set the Prompt to Define below and have an expression in your Text that isn't generating a value.
    • To resolve, make sure your expressions reference valid fields and that they resolve to valid input rather than null (see the sketch after this list).
  2. When you've set the Prompt to Connected Chat Trigger Node and the incoming data has null values.
    • To resolve, remove any null values from the chatInput field of the input node.
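
For the first scenario, a minimal sketch of such a guard (the chatInput field and fallback text are only illustrative) is to add a fallback directly in the expression so it never resolves to null:

{{ $json.chatInput || 'No input provided' }}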

Error in sub-node Simple Memory

This error displays when n8n runs into an issue with the Simple Memory sub-node.

It most often occurs when your workflow or the workflow template you copied uses an older version of the Simple memory node (previously known as "Window Buffer Memory").

Try removing the Simple Memory node from your workflow and re-adding it, which will guarantee you're using the latest version of the node.

A Chat Model sub-node must be connected error

This error displays when n8n tries to execute the node without having a Chat Model connected.

To resolve this, click the + Chat Model button at the bottom of your screen when the node is open, or click the Chat Model + connector when the node is closed. n8n will then open a selection of possible Chat Models to pick from.

No prompt specified error

This error occurs when the agent expects to get the prompt from the previous node automatically. Typically, this happens when you're using the Chat Trigger Node.

To resolve this issue, find the Prompt parameter of the AI Agent node and change it from Connected Chat Trigger Node to Define below. This allows you to manually build your prompt by referencing output data from other nodes or by adding static text.

Examples:

Example 1 (unknown):

Internal error
Error: 400 Invalid value for 'content': expected a string, got null.
<stack-trace>

Jira credentials

URL: llms-txt#jira-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using SW Cloud API token
  • Using SW Server account

You can use these credentials to authenticate the following nodes:

Create a Jira Software Cloud or Server account.

Supported authentication methods

Refer to Jira's API documentation for more information about the service.

Using SW Cloud API token

To configure this credential, you'll need an account on Jira Software Cloud.

  1. Log in to your Atlassian profile > Security > API tokens page, or jump straight there using this link.
  2. Select Create API Token.
  3. Enter a good Label for your token, like n8n integration.
  4. Select Create.
  5. Copy the API token.
  6. In n8n, enter the Email address associated with your Jira account.
  7. Paste the API token you copied as your API Token.
  8. Enter the Domain you access Jira on, for example https://example.atlassian.net.

Refer to Manage API tokens for your Atlassian account for more information.

New tokens may take up to a minute before they work. If your credential verification fails the first time, wait a minute before retrying.

Using SW Server account

To configure this credential, you'll need an account on Jira Software Server.

  1. Enter the Email address associated with your Jira account.
  2. Enter your Jira account Password.
  3. Enter the Domain you access Jira on.

Google Cloud Storage node

URL: llms-txt#google-cloud-storage-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Google Cloud Storage node to automate work in Google Cloud Storage, and integrate Google Cloud Storage with other applications. n8n has built-in support for a wide range of Google Cloud Storage features, including creating, updating, deleting, and getting buckets and objects.

On this page, you'll find a list of operations the Google Cloud Storage node supports and links to more resources.

Refer to Google Cloud Storage credentials for guidance on setting up authentication.

  • Bucket
    • Create
    • Delete
    • Get
    • Get Many
    • Update
  • Object
    • Create
    • Delete
    • Get
    • Get Many
    • Update

Templates and examples

Transcribe audio files from Cloud Storage

View template details

Automatic Youtube Shorts Generator

by Samautomation.work

View template details

Vector Database as a Big Data Analysis Tool for AI Agents [1/3 anomaly][1/2 KNN]

View template details

Browse Google Cloud Storage integration templates, or search all templates

Refer to Google's Cloud Storage API documentation for detailed information about the API that this node integrates with.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Push and pull

URL: llms-txt#push-and-pull

Contents:

  • Fetch other people's work
    • Workflow and credential owner may change on pull
    • Pulling may cause brief service interruption
  • Send your work to Git
  • What gets committed
  • Merge behaviors and conflicts
    • Workflows
    • Credentials, variables and workflow tags

If your n8n instance connects to a Git repository, you need to keep your work in sync with Git.

This document assumes some familiarity with Git concepts and terminology. Refer to Git and n8n for an introduction to how n8n works with Git.

Recommendation: don't push and pull to the same n8n instance

You can push work from an instance to a branch, and pull to the same instance. n8n doesn't recommend this. To reduce the risk of merge conflicts and overwriting work, try to create a process where work goes in one direction: either to Git, or from Git, but not both.

Fetch other people's work

n8n roles control which users can pull (fetch) changes

You must be an instance owner or instance admin to pull changes from git.

To pull work from Git, select Pull in the main menu.

Pull and push buttons when menu is closed

Pull and push buttons when menu is open

n8n may display a warning about overriding local changes. Select Pull and override to override your local work with the content in Git.

When the changes include new variable or credential stubs, n8n notifies you that you need to populate the values for the items before using them.

How deleted resources are handled

When workflows, credentials, variables, and tags are deleted from the repository, your local versions of these resources aren't deleted automatically. Instead, when you pull repository changes, n8n notifies you about any outdated resources and asks if you'd like to delete them.

Workflow and credential owner may change on pull

When you pull from Git to an n8n instance, n8n tries to assign workflows and credentials to a matching user or project.

If the original owner is a user:

If the same owner is available on both instances (matching email), the owner remains the same. If the original owner isn't on the new instance, n8n sets the user performing the pull as the workflow owner.

If the original owner is a project:

n8n tries to match the original project name to a project name on the new instance. If no matching project exists, n8n creates a new project with the name, assigns the current user as project owner, and imports the workflows and credentials to the project.

Pulling may cause brief service interruption

If you pull changes to an active workflow, n8n sets the workflow to inactive while pulling, then reactivates it. This may result in a few seconds of downtime for the workflow.

Send your work to Git

n8n roles control which users can push changes

You must be an instance owner, instance admin, or project admin to push changes to git.

  1. Select Push in the main menu.

Pull and push buttons when menu is closed

Pull and push buttons when menu is open

  2. In the Commit and push changes modal, select which workflows you want to push. You can filter by status (new, modified, deleted) and search for workflows. n8n automatically pushes tags, and variable and credential stubs.

  3. Enter a commit message. This should be a one-sentence description of the changes you're making.

  4. Select Commit and Push. n8n sends the work to Git, and displays a success message on completion.

What gets committed

n8n commits the following to Git:

  • Workflows, including their tags and the email address of the workflow owner. You can choose which workflows to push.
  • Credential stubs (ID, name, type)
  • Variable stubs (ID and name)
  • Projects
  • Folders

Merge behaviors and conflicts

n8n's implementation of source control is opinionated. It resolves merge conflicts for credentials and variables automatically. n8n can't detect conflicts on workflows.

You have to explicitly tell n8n what to do about workflows when pushing or pulling. The Git repository acts as the source of truth.

When pulling, you might get warned that your local copy of a workflow differs from Git, and if you accept, your local copy would be overridden. Be careful not to lose relevant changes when pulling.

When you push, your local workflow will override what's in Git, so make sure that you have the most up to date version or you risk overriding recent changes.

To prevent the issue described above, you should immediately push your changes to a workflow once you finish working on it. Then it's safe to pull.

To avoid losing data:

  • Design your source control setup so that workflows flow in one direction. For example, make edits on a development instance, push to Git, then pull to production. Don't make edits on the production instance and push them.
  • Don't push all workflows. Select the ones you need.
  • Be cautious about manually editing files in the Git repository.

Credentials, variables and workflow tags

Credentials and variables can't have merge issues, as n8n chooses the version to keep.

  • If the tag, variable or credential doesn't exist, n8n creates it.

  • If the tag, variable or credential already exists, n8n doesn't update it, unless:

    • You set the value of a variable using the API or externally. The new value overwrites any existing value.
    • The credential name has changed. n8n uses the version in Git.
    • The name of a tag has changed. n8n updates the tag name. Be careful when renaming tags: tag names must be unique, so renaming can cause uniqueness conflicts in the database during the pull process.
  • n8n overwrites the entire variables and tags files.

  • If a credential already exists, n8n overwrites it with the changes, but doesn't apply these changes to existing credentials on pull.

Manage credentials with an external secrets vault

If you need different credentials on different n8n environments, use external secrets.


AWS S3 node

URL: llms-txt#aws-s3-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the AWS S3 node to automate work in AWS S3, and integrate AWS S3 with other applications. n8n has built-in support for a wide range of AWS S3 features, including creating and deleting buckets, copying and downloading files, as well as getting folders.

On this page, you'll find a list of operations the AWS S3 node supports and links to more resources.

Refer to AWS credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Bucket
    • Create a bucket
    • Delete a bucket
    • Get all buckets
    • Search within a bucket
  • File
    • Copy a file
    • Delete a file
    • Download a file
    • Get all files
    • Upload a file
  • Folder
    • Create a folder
    • Delete a folder
    • Get all folders

Templates and examples

Transcribe audio files from Cloud Storage

View template details

Extract and store text from chat images using AWS S3

View template details

Sync data between Google Drive and AWS S3

View template details

Browse AWS S3 integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Kafka Trigger node

URL: llms-txt#kafka-trigger-node

Kafka is an open-source distributed event streaming platform that one can use for high-performance data pipelines, streaming analytics, data integration, and mission-critical applications.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Kafka Trigger integrations page.


Welcome to n8n Docs

URL: llms-txt#welcome-to-n8n-docs

Contents:

  • Where to start
  • About n8n

This is the documentation for n8n, a fair-code licensed workflow automation tool that combines AI capabilities with business process automation.

It covers everything from setup to usage and development. It's a work in progress and all contributions are welcome.

Jump in with n8n's quickstart guides.

Try it out

  • Choose the right n8n for you

Cloud, npm, self-host . . .

Options

  • Explore integrations

Browse n8n's integrations library.

Find your apps

  • Build AI functionality

n8n supports building AI functionality and tools.

Advanced AI

n8n (pronounced n-eight-n) helps you connect any app that has an API with any other app, and manipulate its data with little or no code.

  • Customizable: highly flexible workflows and the option to build custom nodes.
  • Convenient: use npm or Docker to try out n8n, or use the Cloud hosting option if you want us to handle the infrastructure.
  • Privacy-focused: self-host n8n for privacy and security.

MultiQuery Retriever node

URL: llms-txt#multiquery-retriever-node

Contents:

  • Node options
  • Templates and examples
  • Related resources

The MultiQuery Retriever node automates the process of prompt tuning by using an LLM to generate multiple queries from different perspectives for a given user input query.

On this page, you'll find the node parameters for the MultiQuery Retriever node, and links to more resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Query Count: Enter how many different versions of the query to generate.

Templates and examples

Browse MultiQuery Retriever integration templates, or search all templates

Refer to LangChain's retriever conceptual documentation and LangChain's multiquery retriever API documentation for more information about the service.

View n8n's Advanced AI documentation.


Slack node

URL: llms-txt#slack-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • Required scopes
  • What to do if your operation isn't supported

Use the Slack node to automate work in Slack, and integrate Slack with other applications. n8n has built-in support for a wide range of Slack features, including creating, archiving, and closing channels, getting users and files, as well as deleting messages.

On this page, you'll find a list of operations the Slack node supports and links to more resources.

Refer to Slack credentials for guidance on setting up authentication.

  • Channel
    • Archive a channel.
    • Close a direct message or multi-person direct message.
    • Create a public or private channel-based conversation.
    • Get information about a channel.
    • Get Many: Get a list of channels in Slack.
    • History: Get a channel's history of messages and events.
    • Invite a user to a channel.
    • Join an existing channel.
    • Kick: Remove a user from a channel.
    • Leave a channel.
    • Member: List the members of a channel.
    • Open or resume a direct message or multi-person direct message.
    • Rename a channel.
    • Replies: Get a thread of messages posted to a channel.
    • Sets purpose of a channel.
    • Sets topic of a channel.
    • Unarchive a channel.
  • File
    • Get a file.
    • Get Many: Get and filter team files.
    • Upload: Create or upload an existing file.
  • Message
    • Delete a message
    • Get permalink: Get a message's permalink.
    • Search for messages
    • Send a message
    • Send and Wait for Approval: Send a message and wait for approval from the recipient before continuing.
    • Update a message
  • Reaction
    • Add a reaction to a message.
    • Get a message's reactions.
    • Remove a reaction from a message.
  • Star
    • Add a star to an item.
    • Delete a star from an item.
    • Get Many: Get a list of an authenticated user's stars.
  • User
    • Get information about a user.
    • Get Many: Get a list of users.
    • Get User's Profile.
    • Get User's Status.
    • Update User's Profile.
  • User Group
    • Create a user group.
    • Disable a user group.
    • Enable a user group.
    • Get Many: Get a list of user groups.
    • Update a user group.

Templates and examples

Back Up Your n8n Workflows To Github

View template details

Slack chatbot powered by AI

View template details

IT Ops AI SlackBot Workflow - Chat with your knowledge base

View template details

Browse Slack integration templates, or search all templates

Refer to Slack's documentation for more information about the service.

Once you create a Slack app for your Slack credentials, you must add the appropriate scopes to your Slack app for this node to work. Start with the scopes listed in the Scopes | Slack credentials page.

If those aren't enough, use the table below to look up the resource and operation you want to use, then follow the link to Slack's API documentation to find the correct scopes.

Resource Operation Slack API method
Channel Archive conversations.archive
Channel Close conversations.close
Channel Create conversations.create
Channel Get conversations.info
Channel Get Many conversations.list
Channel History conversations.history
Channel Invite conversations.invite
Channel Join conversations.join
Channel Kick conversations.kick
Channel Leave conversations.leave
Channel Member conversations.members
Channel Open conversations.open
Channel Rename conversations.rename
Channel Replies conversations.replies
Channel Set Purpose conversations.setPurpose
Channel Set Topic conversations.setTopic
Channel Unarchive conversations.unarchive
File Get files.info
File Get Many files.list
File Upload files.upload
Message Delete chat.delete
Message Get Permalink chat.getPermalink
Message Search search.messages
Message Send chat.postMessage
Message Send and Wait for Approval chat.postMessage
Message Update chat.update
Reaction Add reactions.add
Reaction Get reactions.get
Reaction Remove reactions.remove
Star Add stars.add
Star Delete stars.remove
Star Get Many stars.list
User Get users.info
User Get Many users.list
User Get User's Profile users.profile.get
User Get User's Status users.getPresence
User Update User's Profile users.profile.set
User Group Create usergroups.create
User Group Disable usergroups.disable
User Group Enable usergroups.enable
User Group Get Many usergroups.list
User Group Update usergroups.update

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Facebook Trigger Application object

URL: llms-txt#facebook-trigger-application-object

Contents:

  • Trigger configuration
  • Related resources

Use this object to receive updates sent to a specific app. Refer to Facebook Trigger for more information on the trigger itself.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Facebook Trigger integrations page.

Trigger configuration

To configure the trigger with this Object:

  1. Select the Credential to connect with. Select an existing or create a new Facebook App credential.
  2. Enter the APP ID of the app connected to your credential. Refer to the Facebook App credential documentation for more information.
  3. Select Application as the Object.
  4. Field Names or IDs: By default, the node will trigger on all the available events using the * wildcard filter. If you'd like to limit the events, use the X to remove the star and use the dropdown or an expression to select the updates you're interested in. Options include:
    • Add Account
    • Ads Rules Engine
    • Async Requests
    • Async Sessions
    • Group Install
    • Oe Reseller Onboarding Request Created
    • Plugin Comment
    • Plugin Comment Reply
  5. In Options, turn on the toggle to Include Values. This Object type fails without the option enabled.

Refer to Meta's Application Graph API reference for more information.


If

URL: llms-txt#if

Contents:

  • Add conditions
    • Combining conditions
  • Templates and examples
  • Branch execution with If and Merge nodes
  • Related resources
  • Available data type comparisons
    • String
    • Number
    • Date & Time
    • Boolean

Use the If node to split a workflow conditionally based on comparison operations.

Create comparison Conditions for your If node.

  • Use the data type dropdown to select the data type and comparison operation type for your condition. For example, to filter for dates after a particular date, select Date & Time > is after.
  • The fields and values to enter into the condition change based on the data type and comparison you select. Refer to Available data type comparisons for a full list of all comparisons by data type.

Select Add condition to create more conditions.

Combining conditions

You can choose to keep data:

  • When it meets all conditions: Create two or more conditions and select AND in the dropdown between them.
  • When it meets any of the conditions: Create two or more conditions and select OR in the dropdown between them.

Templates and examples

AI agent that can scrape webpages

View template details

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

Pulling data from services that n8n doesn't have a pre-built integration for

View template details

Browse If integration templates, or search all templates

Branch execution with If and Merge nodes

n8n removed this execution behavior in version 1.0. This section applies to workflows using the v0 (legacy) workflow execution order. By default, this is all workflows built before version 1.0. You can change the execution order in your workflow settings.

If you add a Merge node to a workflow containing an If node, it can result in both output data streams of the If node executing.

One data stream triggers the Merge node, which then goes and executes the other data stream.

For example, in the screenshot below there's a workflow containing an Edit Fields node, If node, and Merge node. The standard If node behavior is to execute one data stream (in the screenshot, this is the true output). However, due to the Merge node, both data streams execute, despite the If node not sending any data down the false data stream.

Refer to Splitting with conditionals for more information on using conditionals to create complex logic in n8n.

If you need more than two conditional outputs, use the Switch node.

Available data type comparisons

String data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • is equal to
  • is not equal to
  • contains
  • does not contain
  • starts with
  • does not start with
  • ends with
  • does not end with
  • matches regex
  • does not match regex

Number data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • is equal to
  • is not equal to
  • is greater than
  • is less than
  • is greater than or equal to
  • is less than or equal to

Date & Time data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • is equal to
  • is not equal to
  • is after
  • is before
  • is after or equal to
  • is before or equal to

Boolean data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • is true
  • is false
  • is equal to
  • is not equal to

Array data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • contains
  • does not contain
  • length equal to
  • length not equal to
  • length greater than
  • length less than
  • length greater than or equal to
  • length less than or equal to

Object data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty

Specify location for your custom nodes

URL: llms-txt#specify-location-for-your-custom-nodes

Every user can add custom nodes that get loaded by n8n on startup. The default location is in the subfolder .n8n/custom of the user who started n8n.

You can define more folders with an environment variable:

Refer to Environment variables reference for more information on this variable.

Examples:

Example 1 (unknown):

export N8N_CUSTOM_EXTENSIONS="/home/jim/n8n/custom-nodes;/data/n8n/nodes"

Cloud concurrency

URL: llms-txt#cloud-concurrency

Contents:

  • Concurrency limits
  • Details
  • Comparison to queue mode

This document discusses concurrency in n8n Cloud. Read self-hosted n8n concurrency control to learn how concurrency works with self-hosted n8n instances.

Too many concurrent executions can cause performance degradation and unresponsiveness. To prevent this and improve instance stability, n8n sets concurrency limits for production executions in regular mode.

Any executions beyond the limits queue for later processing. These executions remain in the queue until concurrency capacity frees up, and are then processed in FIFO order.

Concurrency limits

n8n limits the number of concurrent executions for Cloud instances according to their plan. Refer to Pricing for details.

You can view the number of active executions and your plan's concurrency limit at the top of a project's or workflow's executions tab.

Some other details about concurrency to keep in mind:

  • Concurrency control applies only to production executions: those started from a webhook or trigger node. It doesn't apply to any other kinds, such as manual executions, sub-workflow executions, or error executions.
  • Test evaluations don't count towards concurrency limits. Your test evaluation concurrency limit is equal to, but separate from, your plan's regular concurrency limit.
  • You can't retry queued executions. Cancelling or deleting a queued execution also removes it from the queue.
  • On instance startup, n8n resumes queued executions up to the concurrency limit and re-enqueues the rest.

Comparison to queue mode

Queue mode is available for Cloud Enterprise plans. To enable it, contact n8n.

Concurrency in queue mode is a separate mechanism from concurrency in regular mode. In queue mode, the concurrency settings determine how many jobs each worker can run in parallel. In regular mode, concurrency limits apply to the entire instance.


Nextcloud credentials

URL: llms-txt#nextcloud-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using basic auth
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • Basic auth
  • OAuth2

Refer to Nextcloud's API documentation for more information about the service.

Refer to Nextcloud's user manual for more information on installing and configuring Nextcloud.

To configure this credential, you'll need a Nextcloud account and:

  • Your WebDAV URL
  • Your User name
  • Your Password or an app password
  1. To create your WebDAV URL: if Nextcloud is in the root of your domain, enter the URL you use to access Nextcloud and add /remote.php/webdav/. For example, if you access Nextcloud at https://cloud.n8n.com, your WebDAV URL is https://cloud.n8n.com/remote.php/webdav.
    • If you have Nextcloud installed in a subdirectory, enter the URL you use to access Nextcloud and add /<subdirectory>/remote.php/webdav/. Replace <subdirectory> with the subdirectory Nextcloud's installed in.
    • Refer to Nextcloud's Third-party WebDAV clients documentation for more information on constructing your WebDAV URL.
  2. Enter your User name.
  3. For the Password, Nextcloud recommends using an app password rather than your user password. To create an app password:
    1. In the Nextcloud Web interface, select your avatar in the top right and select Personal settings.
    2. In the left menu, choose Security.
    3. Scroll to the bottom to the App Password section and create a new app password.
    4. Copy that app password and enter it in n8n as your Password.

To configure this credential, you'll need a Nextcloud account and:

  • An Authorization URL and Access Token URL: These depend on the URL you use to access Nextcloud.
  • A Client ID: Generated once you add an OAuth2 client application in Administrator Security Settings.
  • A Client Secret: Generated once you add an OAuth2 client application in Administrator Security Settings.
  • A WebDAV URL: This depends on the URL you use to access Nextcloud.
  1. In Nextcloud, open your Administrator Security Settings.

  2. Find the Add client section under OAuth 2.0 clients.

  3. Enter a Name for your client, like n8n integration.

  4. Copy the OAuth Callback URL from n8n and enter it as the Redirection URI.

  5. Then select Add in Nextcloud.

  6. In n8n, update the Authorization URL to replace https://nextcloud.example.com with the URL you use to access Nextcloud. For example, if you access Nextcloud at https://cloud.n8n.com, the Authorization URL is https://cloud.n8n.com/apps/oauth2/authorize.

  7. In n8n, update the Access Token URL to replace https://nextcloud.example.com with the URL you use to access Nextcloud. For example, if you access Nextcloud at https://cloud.n8n.com, the Access Token URL is https://cloud.n8n.com/apps/oauth2/api/v1/token.

Pretty URL configuration

The Authorization URL and Access Token URL assume that you've configured Nextcloud to use Pretty URLs. If you haven't, you must add /index.php/ between your Nextcloud URL and the /apps/oauth2 portion, for example: https://cloud.n8n.com/index.php/apps/oauth2/api/v1/token.

  1. Copy the Nextcloud Client Identifier for your OAuth2 client and enter it as the Client ID in n8n.

  2. Copy the Nextcloud Secret and enter it as the Client Secret in n8n.

  3. In n8n, to create your WebDAV URL: If Nextcloud is in the root of your domain, enter the URL you use to access Nextcloud and add /remote.php/webdav/. For example, if you access Nextcloud at https://cloud.n8n.com, your WebDAV URL is https://cloud.n8n.com/remote.php/webdav.

  • If you have Nextcloud installed in a subdirectory, enter the URL you use to access Nextcloud and add /<subdirectory>/remote.php/webdav/. Replace <subdirectory> with the subdirectory Nextcloud's installed in.

Refer to the Nextcloud OAuth2 Configuration documentation for more detailed instructions.


Server setups

URL: llms-txt#server-setups

Self-host with Docker Compose:

Self-host with Google Cloud Run (with access to n8n workflow tools for Google Workspace, e.g. Gmail, Drive):

Starting points for a Kubernetes setup:

Configuration guides to help you get started on other platforms:


Postmark credentials

URL: llms-txt#postmark-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token

You can use these credentials to authenticate the following nodes:

Create a Postmark account on a Postmark server.

Supported authentication methods

Refer to Postmark's API documentation for more information about the service.

To configure this credential, you'll need:

  • A Server API Token: The Server API token is accessible by Account Owners, Account Admins, and users who have Server Admin privileges on a server. Get yours from the API Tokens tab under your Postmark server. Refer to API Authentication for more information.

Loop Over Items

URL: llms-txt#loop-over-items

Contents:

  • When to use the Loop Over Items node
  • Node parameters
    • Batch Size
  • Node options
    • Reset
  • Templates and examples
    • Read RSS feed from two different sources
    • Check that the node has processed all items
    • Get the current running index of the node

The Loop Over Items node helps you loop through data when needed.

The node saves the original incoming data, and with each iteration, returns a predefined amount of data through the loop output.

When the node execution completes, it combines all of the processed data and returns it through the done output.

When to use the Loop Over Items node

By default, n8n nodes are designed to process a list of input items (with some exceptions, detailed below). Depending on what you're trying to achieve, you often don't need the Loop Over Items node in your workflow. You can learn more about how n8n processes multiple items on the looping in n8n page.

These links highlight some of the cases where the Loop Over Items node can be useful:

  • Loop until all items are processed: describes how the Loop Over Items node differs from normal item processing and when you might want to incorporate this node.
  • Node exceptions: outlines specific cases and nodes where you may need to use the Loop Over Items node to manually build looping logic.
  • Avoiding rate limiting: demonstrates how to batch API requests to avoid rate limits from other services.

Enter the number of items to return with each call.

If turned on, the node resets with each loop, treating the current input data as newly initialized. Use this when you want the Loop Over Items node to treat incoming data as a new set of data instead of a continuation of previous items.

For example, you can use the Loop Over Items node with the reset option and an If node to query a paginated service when you don't know how many pages you need in advance. The loop queries pages one at a time, performs any processing, and increments the page number. The loop reset ensures the loop recognizes each iteration as a new set of data. The If node evaluates an exit condition to decide whether to perform another iteration or not.

Include a valid termination condition

For workflows like the example described above, it's critical to include a valid termination condition for the loop. If your termination condition never matches, your workflow execution will get stuck in an infinite loop.
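
As a minimal sketch of such a termination condition (the results field name is hypothetical), the If node in the pagination example could check whether the last request returned an empty page, for instance with a Boolean condition that evaluates:

{{ $json.results.length === 0 }}

If the condition is true, the workflow exits the loop; otherwise it routes back to the Loop Over Items node for another iteration.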

When enabled, you can adjust the reset conditions by switching the parameter representation from Fixed to Expression. The results of your expression evaluation determine when the node will reset item processing.

Templates and examples

Scrape business emails from Google Maps without the use of any third party APIs

View template details

Generate Leads with Google Maps

View template details

🚀Transform Podcasts into Viral TikTok Clips with Gemini+ Multi-Platform Posting

View template details

Browse Loop Over Items (Split in Batches) integration templates, or search all templates

Read RSS feed from two different sources

This workflow allows you to read an RSS feed from two different sources using the Loop Over Items node. You need the Loop Over Items node in the workflow as the RSS Feed Read node only processes the first item it receives. You can also find the workflow on n8n.io.

The example walks through building the workflow, but assumes you are already familiar with n8n. To build your first workflow, including learning how to add nodes to a workflow, refer to Try it out.

The final workflow looks like this:

View workflow file

Copy the workflow file above and paste into your instance, or manually build it by following these steps:

  1. Add the manual trigger.

  2. Add the Code node.

  3. Copy this code into the Code node:

  4. Add the Loop Over Items node.

  5. Configure Loop Over Items: set the batch size to 1 in the Batch Size field.

  6. Add the RSS Feed Read node.

  7. Select Execute Workflow. This runs the workflow to load data into the RSS Feed Read node.

  8. Configure RSS Feed Read: map url from the input to the URL field. You can do this by dragging and dropping from the INPUT panel, or using this expression: {{ $json.url }}.

  9. Select Execute Workflow to run the workflow and see the resulting data.

Check that the node has processed all items

To check if the node still has items to process, use the following expression: {{$node["Loop Over Items"].context["noItemsLeft"]}}. This expression returns a boolean value. If the node still has data to process, the expression returns false, otherwise it returns true.

Get the current running index of the node

To get the current running index of the node, use the following expression: {{$node["Loop Over Items"].context["currentRunIndex"]}}.

Examples:

Example 1 (unknown):

return [
  {
    json: {
      url: 'https://medium.com/feed/n8n-io',
    }
  },
  {
    json: {
      url: 'https://dev.to/feed/n8n',
    }
  }
];

Agile CRM credentials

URL: llms-txt#agile-crm-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create an Agile CRM account.

Supported authentication methods

Refer to Agile CRM's API documentation for more information about working with the service.

To configure this credential, you'll need:

  • An Email Address registered with AgileCRM
  • A REST API Key: Access your Agile CRM API key through Admin Settings > Developers & API > REST API key.
  • An Agile CRM Subdomain (for example, n8n)

Telegram Trigger node common issues

URL: llms-txt#telegram-trigger-node-common-issues

Contents:

  • Stuck waiting for trigger event
  • Bad request: bad webhook: An HTTPS URL must be provided for webhook
  • Workflow only works in testing or production

Here are some common errors and issues with the Telegram Trigger node and steps to resolve or troubleshoot them.

Stuck waiting for trigger event

When testing the Telegram Trigger node with the Execute step or Execute workflow buttons, the execution may appear stuck and unable to stop listening for events. If this occurs, you may need to exit the workflow and open it again to reset the canvas.

Executions that get stuck listening for events are usually caused by network configuration issues outside of n8n. Specifically, this behavior often occurs when you run n8n behind a reverse proxy without configuring websocket proxying.

To resolve this issue, check your reverse proxy configuration (Nginx, Caddy, Apache HTTP Server, Traefik, etc.) to enable websocket support.

Bad request: bad webhook: An HTTPS URL must be provided for webhook

This error occurs when you run n8n behind a reverse proxy and there is a problem with your instance's webhook URL.

When running n8n behind a reverse proxy, you must configure the WEBHOOK_URL environment variable with the public URL where your n8n instance is running. For Telegram, this URL must use HTTPS.

To fix this issue, configure TLS/SSL termination in your reverse proxy. Afterward, update your WEBHOOK_URL environment variable to use the HTTPS address.

Workflow only works in testing or production

Telegram only allows you to register a single webhook per app. This means that every time you switch from using the testing URL to the production URL (and vice versa), Telegram overwrites the registered webhook URL.

You may have trouble with this if you try to test a workflow that's also active in production. The Telegram bot will only send events to one of the two webhook URLs, so the other will never receive event notifications.

To work around this, you can either disable your workflow when testing or create separate Telegram bots for testing and production.

To create a separate telegram bot for testing, repeat the process you completed to create your first bot. Reference Telegram's bot documentation and the Telegram bot API reference for more information.

To disable your workflow when testing, try the following:

Halts production traffic

This workaround temporarily disables your production workflow for testing. Your workflow will no longer receive production traffic while it's deactivated.

  1. Go to your workflow page.
  2. Toggle the Active switch in the top panel to disable the workflow temporarily.
  3. Test your workflow using the test webhook URL.
  4. When you finish testing, toggle the switch back to Active to enable the workflow again. The production webhook URL should resume working.

urlscan.io credentials

URL: llms-txt#urlscan.io-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create an urlscan.io account.

Supported authentication methods

Refer to urlscan.io's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: Get your API key from Settings & API > API Keys.

AWS Textract node

URL: llms-txt#aws-textract-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the AWS Textract node to automate work in AWS Textract, and integrate AWS Textract with other applications. n8n has built-in support for a wide range of AWS Textract features, including analyzing invoices.

On this page, you'll find a list of operations the AWS Textract node supports and links to more resources.

Refer to AWS Textract credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Analyze Receipt or Invoice

Templates and examples

Browse AWS Textract integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Google Sheets Document operations

URL: llms-txt#google-sheets-document-operations

Contents:

  • Create a spreadsheet
    • Options
  • Delete a spreadsheet

Use this operation to create or delete a Google spreadsheet from Google Sheets. Refer to Google Sheets for more information on the Google Sheets node itself.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Create a spreadsheet

Use this operation to create a new spreadsheet.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Sheets credentials.

  • Resource: Select Document.

  • Operation: Select Create.

  • Title: Enter the title of the new spreadsheet you want to create.

  • Sheets: Add the Title(s) of the sheet(s) you want to create within the spreadsheet.

  • Locale: Enter the locale of the spreadsheet. This affects formatting details such as functions, dates, and currency. Use one of the following formats:
    • An ISO 639-1 language code, such as en.
    • An ISO 639-2 language code, such as fil, if no 639-1 code exists.
    • A combination of the ISO language code and country code, such as en_US.

  • Recalculation Interval: Enter the desired recalculation interval for the spreadsheet functions. This affects how often NOW, TODAY, RAND, and RANDBETWEEN are updated. Select On Change to recalculate whenever there is a change in the spreadsheet, Minute to recalculate every minute, or Hour to recalculate every hour. Refer to Set a spreadsheet's location & calculation settings for more information about these options.

Refer to the Method: spreadsheets.create | Google Sheets API documentation for more information.

Delete a spreadsheet

Use this operation to delete an existing spreadsheet.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Sheets credentials.
  • Resource: Select Document.
  • Operation: Select Delete.
  • Document: Choose a spreadsheet you want to delete.
    • Select From list to choose the title from the dropdown list, By URL to enter the url of the spreadsheet, or By ID to enter the spreadsheetId.
    • You can find the spreadsheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0.

Refer to the Method: files.delete | Google Drive API documentation for more information.


KoboToolbox Trigger node

URL: llms-txt#kobotoolbox-trigger-node

KoboToolbox is a field survey and data collection tool to design interactive forms to be completed offline from mobile devices. It's available both as a free cloud solution or as a self-hosted version.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's KoboToolbox Trigger integrations page.

This node starts a workflow upon new submissions of a specified form. The trigger node handles the creation/deletion of the hook, so you don't need to do any setup in KoboToolbox.

It works the same way as the Get Submission operation in the KoboToolbox node, including supporting the same reformatting options.


VirusTotal credentials

URL: llms-txt#virustotal-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a VirusTotal account.

Supported authentication methods

Refer to VirusTotal's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

  • An API Token: Go to your user account menu > API key to get your API key. Enter this as the API Token in your n8n credential. Refer to API authentication for more information.

Form.io Trigger credentials

URL: llms-txt#form.io-trigger-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using basic auth

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Form.io's API documentation for more information about the service.

To configure this credential, you'll need a Form.io account and:

  • Your Environment
  • Your login Email address
  • Your Password

To set up the credential:

  1. Select your Environment:
    • Choose Cloud hosted if you aren't hosting Form.io yourself.
    • Choose Self-hosted if you're hosting Form.io yourself. Then add:
      • Your Self-Hosted Domain. Use only the domain itself. For example, if you view a form at https://yourserver.com/yourproject/manage/view, the Self-Hosted Domain is https://yourserver.com.
  2. Enter the Email address you use to log in to Form.io.
  3. Enter the Password you use to log in to Form.io.

Embeddings Mistral Cloud node

URL: llms-txt#embeddings-mistral-cloud-node

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Related resources

Use the Embeddings Mistral Cloud node to generate embeddings for a given text.

On this page, you'll find the node parameters for the Embeddings Mistral Cloud node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model to use to generate the embedding.

Learn more about available models in Mistral's models documentation.

  • Batch Size: Enter the maximum number of documents to send in each request.
  • Strip New Lines: Select whether to remove new line characters from input text (turned on) or not (turned off). n8n enables this by default.

Templates and examples

Breakdown Documents into Study Notes using Templating MistralAI and Qdrant

View template details

Build a Financial Documents Assistant using Qdrant and Mistral.ai

View template details

Build a Tax Code Assistant with Qdrant, Mistral.ai and OpenAI

View template details

Browse Embeddings Mistral Cloud integration templates, or search all templates

Refer to Langchain's Mistral embeddings documentation for more information about the service.

View n8n's Advanced AI documentation.


Box node

URL: llms-txt#box-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Box node to automate work in Box, and integrate Box with other applications. n8n has built-in support for a wide range of Box features, including creating, copying, deleting, searching, uploading, and downloading files and folders.

On this page, you'll find a list of operations the Box node supports and links to more resources.

Refer to Box credentials for guidance on setting up authentication.

  • File
    • Copy a file
    • Delete a file
    • Download a file
    • Get a file
    • Search files
    • Share a file
    • Upload a file
  • Folder
    • Create a folder
    • Get a folder
    • Delete a folder
    • Search files
    • Share a folder
    • Update folder

Templates and examples

Automated Video Translation & Distribution with DubLab to Multiple Platforms

View template details

Create a new folder in Box

View template details

Receive updates for events in Box

View template details

Browse Box integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Deployment

URL: llms-txt#deployment

Contents:

  • User data
  • Backups
  • Restarting

Embed requires an embed license. For more information about when to use Embed, as well as costs and licensing processes, refer to Embed on the n8n website.

See the hosting documentation for detailed setup options.

n8n recommends that you follow the same or similar practices used internally for n8n Cloud: save user data using Rook so that, if an n8n server goes down, a new instance can start on another machine using the same data.

Due to this, you don't need to use backups except in case of a catastrophic failure, or when a user wants to reactivate their account within your prescribed retention period (two weeks for n8n Cloud).

n8n recommends creating nightly backups by attaching another container and copying all data to this second container. In this manner, RAM usage is negligible and doesn't impact the number of users you can place on the server.

If your instance is down or restarting, executions missed during that time (for example, from Cron or Webhook nodes) aren't recoverable. If it's important for you to maintain 100% uptime, you need to build another proxy in front of n8n which caches the data.


Item linking in the Code node

URL: llms-txt#item-linking-in-the-code-node

Contents:

  • pairedItem usage example

Use n8n's item linking to access data from items that precede the current item. Item linking also has implications when using the Code node. Most nodes link every output item to an input item, creating a chain of items that you can work back along to access previous items. For a deeper conceptual overview of this topic, refer to Item linking concepts. This document focuses on practical usage examples.

When using the Code node, there are some scenarios where you need to manually supply item linking information if you want to be able to use $("<node-name>").item later in the workflow. All these scenarios only apply if you have more than one incoming item. n8n automatically handles item linking for single items.

These scenarios are when you:

  • Add new items: the new items aren't linked to any input.
  • Return new items.
  • Want to manually control the item linking.

n8n's automatic item linking handles the other scenarios.

To control item linking, set pairedItem when returning data. For example, to link to the item at index 0:

pairedItem usage example

Take this input data:

And use it to generate new items, containing just the name, along with a new piece of data:

newItems is an array of items with no pairedItem. This means there's no way to trace back from these items to the items used to generate them.

Add the pairedItem object:

Each new item now links to the item used to create it.

Examples:

Example 1 (unknown):

[
	{
		"json": {
			. . . 
		},
		// The index of the input item that generated this output item
		"pairedItem": 0
	}
]

Example 2 (unknown):

[
  {
    "id": "23423532",
    "name": "Jay Gatsby"
  },
  {
    "id": "23423533",
    "name": "José Arcadio Buendía"
  },
  {
    "id": "23423534",
    "name": "Max Sendak"
  },
  {
    "id": "23423535",
    "name": "Zaphod Beeblebrox"
  },
  {
    "id": "23423536",
    "name": "Edmund Pevensie"
  }
]

Example 3 (unknown):

let newItems = [];
for (let i = 0; i < items.length; i++) {
  newItems.push({
    "json": {
      "name": items[i].json.name,
      "aBrandNewField": "New data for item " + i
    }
  });
}

return newItems;

Example 4 (unknown):

let newItems = [];
for (let i = 0; i < items.length; i++) {
  newItems.push({
    "json": {
      "name": items[i].json.name,
      "aBrandNewField": "New data for item " + i
    },
    // Link each output item to the input item at the same index
    "pairedItem": i
  });
}
return newItems;

GraphQL

URL: llms-txt#graphql

Contents:

  • Node parameters
    • Authentication
    • HTTP Request Method
    • Endpoint
    • Ignore SSL Issues
    • Query
    • Response Format
  • Headers
  • Templates and examples
  • Related resources

GraphQL is an open-source data query and manipulation language for APIs, and a runtime for fulfilling queries with existing data. Use the GraphQL node to query a GraphQL endpoint.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Select the type of authentication to use.

If you select anything other than None, the Credential for parameter appears for you to select an existing or create a new authentication credential for that authentication type.

HTTP Request Method

Select the underlying HTTP Request method the node should use. Choose from:

  • GET
  • POST: If you select this method, you'll also need to select the Request Format the node should use for the query payload. Choose from:
    • GraphQL (Raw)
    • JSON

Enter the GraphQL Endpoint you'd like to hit.

Ignore SSL Issues

When you turn on this control, n8n ignores SSL certificate validation failures.

Enter the GraphQL query you want to execute.

Refer to Related Resources for information on writing your query.

Select the format you'd like to receive query results in. Choose between:

  • JSON
  • String: If you select this format, enter a Response Data Property Name to define the property the string is written to.

Enter any Headers you want to pass as part of the query as Name / Value pairs.

Templates and examples

Get top 5 products on Product Hunt every hour

View template details

API queries data from GraphQL

View template details

Bulk Create Shopify Products with Inventory Management from Google Sheets

View template details

Browse GraphQL integration templates, or search all templates

To use the GraphQL node, you need to understand the GraphQL query language. The GraphQL project provides its own Introduction to GraphQL tutorial.


Level two: Introduction

URL: llms-txt#level-two:-introduction

Contents:

  • Is this course right for me?
  • What will I learn in this course?
  • What do I need to get started?
  • How long does the course take?
  • How do I complete the course?

Welcome to the n8n Course Level 2!

Is this course right for me?

This course is for you if you:

  • Want to automate somewhat complex business processes.
  • Want to dive deeper into n8n after taking the Level 1 course.

What will I learn in this course?

The focus in this course is on working with data. You will learn how to:

  • Use the data structure of n8n correctly.
  • Process different data types (for example, XML, HTML, date, time, and binary data).
  • Merge data from different sources (for example, a database, spreadsheet, or CRM).
  • Use functions and JavaScript code in the Code node.
  • Deal with error workflows and workflow errors.

You will learn all this by completing short practical exercises after the theoretical explanations and building a business workflow following instructions.

What do I need to get started?

To follow along with this course (at a comfortable pace), you will need the following:

  • n8n set up: You can use the self-hosted version or n8n Cloud.
  • A user ID: Sign up here to get your unique ID and other credentials you will need in this course (Level 2). If you're a Level 1 finisher, please sign up again as you'll get different credentials for the Level 2 workflows.
  • Basic n8n skills: We strongly recommend taking the Level 1 course before this one.
  • Basic JavaScript understanding

How long does the course take?

Completing the course should take around two hours. You don't have to complete it in one go; feel free to take breaks and resume whenever you are ready.

How do I complete the course?

There are two milestones in this course that test your knowledge of what you have learned in the lessons:

You can always check your progress throughout the course by entering your unique ID here.

If you successfully complete the milestones above, you will get a badge and an avatar in your forum profile. You can then share your profile and course verification ID to showcase your n8n skills to others.

Let's get started!


Metabase node

URL: llms-txt#metabase-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Metabase node to automate work in Metabase, and integrate Metabase with other applications. n8n has built-in support for a wide range of Metabase features, including adding, and getting alerts, databases, metrics, and questions.

On this page, you'll find a list of operations the Metabase node supports and links to more resources.

Refer to Metabase credentials for guidance on setting up authentication.

  • Alert
    • Get
    • Get All
  • Database
    • Add
    • Get All
    • Get Fields
  • Metric
    • Get
    • Get All
  • Question
    • Get
    • Get All
    • Result Data

Templates and examples

Browse Metabase integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Autopilot Trigger node

URL: llms-txt#autopilot-trigger-node

Contents:

  • Events

Autopilot is a visual marketing software that allows you to automate and personalize your marketing across the entire customer journey.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Autopilot Trigger integrations page.

  • Contact added
  • Contact added to a list
  • Contact entered a segment
  • Contact left a segment
  • Contact removed from a list
  • Contact unsubscribed
  • Contact updated

Venafi TLS Protect Cloud credentials

URL: llms-txt#venafi-tls-protect-cloud-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Venafi TLS Protect Cloud account.

Supported authentication methods

Refer to Venafi TLS Protect Cloud's API documentation for more information about the service.

To configure this credential, you'll need:

  • A Region: Select the region that matches your business needs. Choose EU if you're located in the European Union. Otherwise, choose US.
  • An API Key: Go to your avatar > Preferences > API Keys to get your API key. You can also use VCert to get your API key. Refer to Obtaining an API Key for more information.

Objects

URL: llms-txt#objects

Contents:

  • isEmpty(): Boolean
  • merge(object: Object): Object
  • hasField(fieldName: String): Boolean
  • removeField(key: String): Object
  • removeFieldsContaining(value: String): Object
  • keepFieldsContaining(value: String): Object
  • compact(): Object
  • toJsonString(): String
  • urlEncode(): String

A reference document listing built-in convenience functions to support data transformation in expressions for objects.

JavaScript in expressions

You can use any JavaScript in expressions. Refer to Expressions for more information.

isEmpty(): Boolean

Checks if the Object has no key-value pairs.
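For example, with an illustrative object literal in an expression:

{{ {"name": "n8n"}.isEmpty() }}

This returns false; calling it on an object with no keys returns true.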


merge(object: Object): Object

Merges two Objects into a single Object using the first as the base Object. If a key exists in both Objects, the key in the base Object takes precedence.

Function parameters

The Object to merge with the base Object.
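For example, with illustrative object literals in an expression:

{{ {"a": 1}.merge({"a": 2, "b": 3}) }}

Following the precedence described above, this returns an Object with a: 1 (kept from the base Object) and b: 3.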


hasField(fieldName: String): Boolean

Checks if the Object has a given field. Only top-level keys are supported.

Function parameters

fieldName (String, required)

The field to search for.
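For example, using an illustrative object literal in an expression:

{{ {"name": "n8n"}.hasField("name") }}

This returns true, while checking for a key that doesn't exist, such as "id", returns false.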


removeField(key: String): Object

Removes a given field from the Object

Function parameters

The field key of the field to remove.
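A minimal sketch with an illustrative object literal:

{{ {"name": "n8n", "id": 1}.removeField("id") }}

This returns an Object containing only the name field.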


removeFieldsContaining(value: String): Object

Removes fields with a given value from the Object.

Function parameters

The field value of the field to remove.


keepFieldsContaining(value: String): Object

Removes fields that do not match the given value from the Object.

Function parameters

The field value of the field to keep.


compact(): Object

Removes empty values from an Object.
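A minimal sketch with an illustrative object literal:

{{ {"name": "n8n", "id": null}.compact() }}

This returns an Object containing only the name field, since the null value counts as empty.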


toJsonString(): String

Convert an object to a JSON string. Equivalent of JSON.stringify.
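For example, with an illustrative object literal in an expression:

{{ {"name": "n8n"}.toJsonString() }}

This returns the string {"name":"n8n"}.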


urlEncode(): String

Transforms an Object into a URL parameter list. Only top-level keys are supported.
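For example, with an illustrative object literal in an expression:

{{ {"name": "n8n", "page": "2"}.urlEncode() }}

This returns name=n8n&page=2.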



MongoDB Atlas Vector Store node

URL: llms-txt#mongodb-atlas-vector-store-node

Contents:

  • Prerequisites
  • Node usage patterns
    • Use as a regular node to insert and retrieve documents
    • Connect directly to an AI agent as a tool
    • Use a retriever to fetch documents
    • Use the Vector Store Question Answer Tool to answer questions
  • Node parameters
    • Operation Mode
    • Rerank Results
    • Get Many parameters

MongoDB Atlas Vector Search is a feature of MongoDB Atlas that enables users to store and query vector embeddings. Use this node to interact with Vector Search indexes in your MongoDB Atlas collections. You can insert documents, retrieve documents, and use the vector store in chains or as a tool for agents.

On this page, you'll find the node parameters for the MongoDB Atlas Vector Store node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Before using this node, create a Vector Search index in your MongoDB Atlas collection. Follow these steps to create one:

  1. Log in to the MongoDB Atlas dashboard.

  2. Select your organization and project.

  3. Find the "Search & Vector Search" section.

  4. Select your cluster and click "Go to search".

  5. Click "Create Search Index".

  6. Choose "Vector Search" mode and use the visual or JSON editors. For example:

  7. Adjust the "dimensions" value according to your embedding model (for example, 1536 for OpenAI's text-embedding-3-small).

  8. Name your index and create it.

Make sure to note the following values which are required when configuring the node:

  • Collection name
  • Vector index name
  • Field names for embeddings and metadata

Node usage patterns

You can use the MongoDB Atlas Vector Store node in the following patterns:

Use as a regular node to insert and retrieve documents

You can use the MongoDB Atlas Vector Store as a regular node to insert or get documents. This pattern places the MongoDB Atlas Vector Store in the regular connection flow without using an agent.

You can see an example of this in scenario 1 of this template (the template uses the Supabase Vector Store, but the pattern is the same).

Connect directly to an AI agent as a tool

You can connect the MongoDB Atlas Vector Store node directly to the tool connector of an AI agent to use the vector store as a resource when answering queries.

Here, the connection would be: AI agent (tools connector) -> MongoDB Atlas Vector Store node.

Use a retriever to fetch documents

You can use the Vector Store Retriever node with the MongoDB Atlas Vector Store node to fetch documents from the MongoDB Atlas Vector Store node. This is often used with the Question and Answer Chain node to fetch documents from the vector store that match the given chat input.

An example of the connection flow (the linked example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> MongoDB Atlas Vector Store.

Use the Vector Store Question Answer Tool to answer questions

Another pattern uses the Vector Store Question Answer Tool to summarize results and answer questions from the MongoDB Atlas Vector Store node. Rather than connecting the MongoDB Atlas Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store.

The connections flow (the linked example uses the In-Memory Vector Store, but the pattern is the same) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> In-Memory Vector store.

Operation Mode

This Vector Store node has four modes: Get Many, Insert Documents, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent). The mode you select determines the operations you can perform with the node and what inputs and outputs are available.

Get Many

In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context.

Insert Documents

Use insert documents mode to insert new documents into your vector database.

Retrieve Documents (as Vector Store for Chain/Tool)

Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.

Retrieve Documents (as Tool for AI Agent)

Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details.

Rerank Results

Enables reranking. If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the Get Many, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent) modes.

Get Many parameters

  • Mongo Collection: Enter the name of the MongoDB collection to use.
  • Vector Index Name: Enter the name of the Vector Search index in your MongoDB Atlas collection.
  • Embedding Field: Enter the field name in your documents that contains the vector embeddings.
  • Metadata Field: Enter the field name in your documents that contains the text metadata.

Insert Documents parameters

  • Mongo Collection: Enter the name of the MongoDB collection to use.
  • Vector Index Name: Enter the name of the Vector Search index in your MongoDB Atlas collection.
  • Embedding Field: Enter the field name in your documents that contains the vector embeddings.
  • Metadata Field: Enter the field name in your documents that contains the text metadata.

Retrieve Documents parameters (As Vector Store for Chain/Tool)

  • Mongo Collection: Enter the name of the MongoDB collection to use.
  • Vector Index Name: Enter the name of the Vector Search index in your MongoDB Atlas collection.
  • Embedding Field: Enter the field name in your documents that contains the vector embeddings.
  • Metadata Field: Enter the field name in your documents that contains the text metadata.

Retrieve Documents (As Tool for AI Agent) parameters

  • Name: The name of the vector store.

  • Description: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.

  • Mongo Collection: Enter the name of the MongoDB collection to use.

  • Vector Index Name: Enter the name of the Vector Search index in your MongoDB Atlas collection.

  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

  • Metadata Filter: Filters results based on metadata.

Templates and examples

AI-Powered WhatsApp Chatbot for Text, Voice, Images, and PDF with RAG

View template details

Build a Knowledge Base Chatbot with OpenAI, RAG and MongoDB Vector Embeddings

View template details

Build a Chatbot with Reinforced Learning Human Feedback (RLHF) and RAG

View template details

Browse MongoDB Atlas Vector Store integration templates, or search all templates

View n8n's Advanced AI documentation.

Self-hosted AI Starter Kit

New to working with AI and using self-hosted n8n? Try n8n's self-hosted AI Starter Kit to get started with a proof-of-concept or demo playground using Ollama, Qdrant, and PostgreSQL.

Examples:

Example 1 (unknown):

{
  "fields": [
    {
      "type": "vector",
      "path": "<field-name>",
      "numDimensions": 1536, // adjust to match your embedding model's dimensions
      "similarity": "<similarity-function>"
    }
  ]
}

Gmail Send Email credentials

URL: llms-txt#gmail-send-email-credentials

Contents:

  • Prerequisites
    • Enable 2-step Verification
    • Generate an app password
  • Set up the credential

Follow these steps to configure the Send Email credentials with a Gmail account.

To follow these instructions, you must first:

  1. Enable 2-step Verification on your Gmail account.
  2. Generate an app password.

Enable 2-step Verification

To enable 2-step Verification:

  1. Log in to your Google Account.
  2. Select Security from the left navigation.
  3. Under How you sign in to Google, select 2-Step Verification.
    • If 2-Step Verification is already enabled, skip to the next section.
  4. Select Get started.
  5. Follow the on-screen steps to configure 2-Step Verification.

Refer to Turn on 2-step Verification for more information.

If you can't turn on 2-step Verification, check with your email administrator.

Generate an app password

To generate an app password:

  1. In your Google account, go to App passwords.
  2. Enter an App name for your new app password, like n8n credential.
  3. Select Create.
  4. Copy the generated app password. You'll use this in your n8n credential.

Refer to Google's Sign in with app passwords documentation for more information.

Set up the credential

To set up the Send Email credential to use Gmail:

  1. Enter your Gmail email address as the User.
  2. Enter the app password you generated above as the Password.
  3. Enter smtp.gmail.com as the Host.
  4. For the Port:
    • Keep the default 465 for SSL or if you're unsure what to use.
    • Enter 587 for TLS.
  5. Turn on the SSL/TLS toggle.

Refer to the Outgoing Mail (SMTP) Server settings in Read Gmail messages on other email clients using POP for more information. If the settings above don't work for you, check with your email administrator.


Unleashed Software credentials

URL: llms-txt#unleashed-software-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create an Unleashed Software account.

Supported authentication methods

Refer to Unleashed's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API ID: Go to Integrations > Unleashed API Access to find your API ID.
  • An API Key: Go to Integrations > Unleashed API Access to find your API Key.

Refer to Unleashed API Access for more information.

Account owner required

You must log in as an Unleashed account owner to view the API ID and API Key.


Root nodes

URL: llms-txt#root-nodes

Root nodes are the foundational nodes within a group of cluster nodes.

Cluster nodes are node groups that work together to provide functionality in an n8n workflow. Instead of using a single node, you use a root node and one or more sub-nodes that extend the functionality of the node.


Unleashed Software node

URL: llms-txt#unleashed-software-node

Contents:

  • Operations
  • Templates and examples

Use the Unleashed Software node to automate work in Unleashed Software, and integrate Unleashed Software with other applications. n8n has built-in support for a wide range of Unleashed Software features, including getting sales orders and stock on hand.

On this page, you'll find a list of operations the Unleashed Software node supports and links to more resources.

Refer to Unleashed Software credentials for guidance on setting up authentication.

  • Sales Order
    • Get all sales orders
  • Stock On Hand
    • Get a stock on hand
    • Get all stocks on hand

Templates and examples

Browse Unleashed Software integration templates, or search all templates


Download workflows

URL: llms-txt#download-workflows

Contents:

  • How to download workflows
  • Accessing workflows after your free trial

n8n Cloud instance owners can download workflows from the most recent backup.

You can do this with the Cloud admin dashboard.

How to download workflows

  1. Log in to n8n.
  2. Select Admin Dashboard to open the dashboard.
  3. In the Manage section, select the Export tab.
  4. Select Download Workflows.

Accessing workflows after your free trial

You have 90 days to download your workflows after your free trial ends. After that, all workflows will be permanently deleted and are unrecoverable.


Home Assistant credentials

URL: llms-txt#home-assistant-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Home Assistant's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need to Install Home Assistant, create a Home Assistant account, and have:

  • Your Host
  • The Port
  • A Long-Lived Access Token

To generate an access token and set up the credential:

  1. To generate your Access Token, log in to Home Assistant and open your User profile.
  2. In the Long-Lived Access Tokens section, generate a new token.
  3. Copy this token and enter it in n8n as your Access Token.
  4. Enter the URL or IP address of your Home Assistant Host, without the http:// or https:// protocol, for example your.awesome.home.
  5. For the Port, enter the appropriate port:
    • If you've made no port changes and access Home Assistant at http://, keep the default of 8123.
    • If you've made no port changes and access Home Assistant at https://, enter 443.
    • If you've configured Home Assistant to use a specific port, enter that port.
  6. If you've enabled SSL in your Home Assistant configuration (the http section of configuration.yaml), turn on the SSL toggle in n8n. If you're not sure, it's best to turn this setting on if you access your Home Assistant UI using https:// instead of http://.

Error handling in n8n nodes

URL: llms-txt#error-handling-in-n8n-nodes

Contents:

  • NodeApiError
    • Common usage patterns
  • NodeOperationError
    • Common usage patterns

Proper error handling is crucial for creating robust n8n nodes that provide clear feedback to users when things go wrong. n8n provides two specialized error classes to handle different types of failures in node implementations:

  • NodeApiError: For API-related errors and external service failures
  • NodeOperationError: For operational errors, validation failures, and configuration issues

Use NodeApiError when dealing with external API calls and HTTP requests. This error class is specifically designed to handle API response errors and provides enhanced features for parsing and presenting API-related failures such as:

  • HTTP request failures
  • external API errors
  • authentication/authorization failures
  • rate limiting errors
  • service unavailable errors

Initialize new NodeApiError instances using the following pattern:

Common usage patterns

For basic API request failures, catch the error and wrap it in NodeApiError:

Handle specific HTTP status codes with custom messages:

NodeOperationError

Use NodeOperationError for:

  • operational errors
  • validation failures
  • configuration issues that aren't related to external API calls
  • input validation errors
  • missing required parameters
  • data transformation errors
  • workflow logic errors

Initialize new NodeOperationError instances using the following pattern:

Common usage patterns

Use NodeOperationError for validating user inputs. When processing multiple items, include the item index for better error context. Example 5 below sketches both patterns.

Examples:

Example 1 (unknown):

new NodeApiError(node: INode, errorResponse: JsonObject, options?: NodeApiErrorOptions)

Example 2 (unknown):

try {
	const response = await this.helpers.requestWithAuthentication.call(
		this,
		credentialType,
		options
	);
	return response;
} catch (error) {
	throw new NodeApiError(this.getNode(), error as JsonObject);
}

Example 3 (unknown):

try {
	const response = await this.helpers.requestWithAuthentication.call(
		this,
		credentialType,
		options
	);
	return response;
} catch (error) {
	if (error.httpCode === "404") {
		const resource = this.getNodeParameter("resource", 0) as string;
		const errorOptions = {
			message: `${
				resource.charAt(0).toUpperCase() + resource.slice(1)
			} not found`,
			description:
				"The requested resource could not be found. Please check your input parameters.",
		};
		throw new NodeApiError(
			this.getNode(),
			error as JsonObject,
			errorOptions
		);
	}

	if (error.httpCode === "401") {
		throw new NodeApiError(this.getNode(), error as JsonObject, {
			message: "Authentication failed",
			description: "Please check your credentials and try again.",
		});
	}

	throw new NodeApiError(this.getNode(), error as JsonObject);
}

Example 4 (unknown):

new NodeOperationError(node: INode, error: Error | string | JsonObject, options?: NodeOperationErrorOptions)
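Example 5 (TypeScript): a minimal, illustrative sketch of the two patterns described above (input validation and passing the item index). The email parameter and the surrounding loop are assumptions for illustration, not taken from a specific node.

import { NodeOperationError } from 'n8n-workflow';

// Inside execute():
const items = this.getInputData();

for (let i = 0; i < items.length; i++) {
	const email = this.getNodeParameter('email', i) as string;

	// Validation failure: throw with a clear message and description
	if (!email || !email.includes('@')) {
		throw new NodeOperationError(this.getNode(), 'Invalid email address', {
			description: 'The "email" parameter must contain a valid email address.',
			// Including the item index tells the user which input item failed
			itemIndex: i,
		});
	}
}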

Azure OpenAI Chat Model node

URL: llms-txt#azure-openai-chat-model-node

Contents:

  • Node parameters
  • Node options
  • Proxy limitations
  • Templates and examples
  • Related resources

Use the Azure OpenAI Chat Model node to use OpenAI's chat models with conversational agents.

On this page, you'll find the node parameters for the Azure OpenAI Chat Model node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model to use to generate the completion.

  • Frequency Penalty: Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself.

  • Maximum Number of Tokens: Enter the maximum number of tokens used, which sets the completion length.

  • Response Format: Choose Text or JSON. JSON ensures the model returns valid JSON.

  • Presence Penalty: Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics.

  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

  • Timeout: Enter the maximum request time in milliseconds.

  • Max Retries: Enter the maximum number of times to retry a request.

  • Top P: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.

Proxy limitations

This node doesn't support the NO_PROXY environment variable.

Templates and examples

🤖 AI content generation for Auto Service 🚘 Automate your social media📲!

View template details

Build Your Own Counseling Chatbot on LINE to Support Mental Health Conversations

View template details

CallForge - 05 - Gong.io Call Analysis with Azure AI & CRM Sync

View template details

Browse Azure OpenAI Chat Model integration templates, or search all templates

Refer to LangChain's Azure OpenAI documentation for more information about the service.

View n8n's Advanced AI documentation.


Get help with n8n

URL: llms-txt#get-help-with-n8n

Contents:

  • Where to get help
    • n8n community forum
    • Email support
  • What to include in your message
    • Your n8n instance details
    • Details about your problem

n8n provides different support options depending on your plan and the nature of your problem.

n8n community forum

n8n provides free community support for all n8n users through the forum.

This is the best source for answers of all kinds, as both the n8n support team and community members can help.

Email support

n8n offers email support through help@n8n.io for the following plans:

  • Enterprise plans can use email support with an SLA for technical, account, billing, and other inquiries.
  • Other Cloud plans can use email support for admin and billing issues. For technical support, please refer to the forum.

What to include in your message

When posting to the forum or emailing customer support, you'll get help faster if you provide details in your first message about your n8n instance and the issue you're experiencing.

Your n8n instance details

To collect basic information about your n8n instance:

  1. Open the left-side panel.
  2. Select Help.
  3. Select About n8n.
  4. The About n8n modal opens to display your current information.
  5. Select Copy debug information to copy your information.
  6. Include this information in your forum post or support email.

Details about your problem

To help resolve your issues more efficiently, here are some things you can include to provide more context:

  • Screenshots or video recordings: A quick Loom or screen recording that shows what's happening.
  • Relevant documentation: If you've followed any guides or documentation, include links to them in your message.
  • n8n Cloud workspace (if possible): If contacting support, provide the workspace URL for your n8n Cloud instance. It looks something like https://xxxxx.app.n8n.cloud.
  • Steps to reproduce the issue: A simple step-by-step outline of what you did before encountering the issue.
  • Workflow or Configuration files: Sharing relevant workflows or configuration files can be a huge help.

It may also be helpful to include a HAR (HTTP Archive) file in your message. You can learn how to generate a HAR file in your browser and how to redact sensitive details before posting using the HAR Analyzer.


Keap Trigger node

URL: llms-txt#keap-trigger-node

Keap is an e-mail marketing and sales platform for small businesses, including products to manage and optimize the customer lifecycle, customer relationship management, marketing automation, lead capture, and e-commerce.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Keap Trigger integrations page.


Don't save manually launched executions

URL: llms-txt#don't-save-manually-launched-executions

export EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false




marketstack node

URL: llms-txt#marketstack-node

Contents:

  • Operations
  • Templates and examples

Use the marketstack node to automate work in marketstack, and integrate marketstack with other applications. n8n has built-in support for a wide range of marketstack features, including getting exchanges, end-of-day data, and tickers.

On this page, you'll find a list of operations the marketstack node supports and links to more resources.

Refer to marketstack credentials for guidance on setting up authentication.

  • End-of-Day Data
    • Get All
  • Exchange
    • Get
  • Ticker
    • Get

Templates and examples

AI-Powered Financial Chart Analyzer | OpenRouter, MarketStack, macOS Shortcuts

View template details

AI agents can get end of day market data with this Marketstack Tool MCP Server

View template details

Detect Stock Price Anomalies & Send News Alerts with Marketstack, HackerNews & DeepL

View template details

Browse marketstack integration templates, or search all templates


Troubleshooting SAML SSO

URL: llms-txt#troubleshooting-saml-sso

If you get an error when testing your SAML setup, check the following:

  • Does the app you created in your IdP support SAML?
  • Did you enter the n8n redirect URL and entity ID in the correct fields in your IdP?
  • Is the metadata XML correct? Check that the metadata you copied into n8n is formatted correctly.

For more support, use the forum, or contact your support representative if you have a paid support plan.


Invoice Ninja node

URL: llms-txt#invoice-ninja-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Invoice Ninja node to automate work in Invoice Ninja, and integrate Invoice Ninja with other applications. n8n has built-in support for a wide range of Invoice Ninja features, including creating, updating, deleting, and getting clients, expenses, invoices, payments, and quotes.

On this page, you'll find a list of operations the Invoice Ninja node supports and links to more resources.

Refer to Invoice Ninja credentials for guidance on setting up authentication.

  • Client
    • Create a new client
    • Delete a client
    • Get data of a client
    • Get data of all clients
  • Expense
    • Create a new expense
    • Delete an expense
    • Get data of an expense
    • Get data of all expenses
  • Invoice
    • Create a new invoice
    • Delete an invoice
    • Email an invoice
    • Get data of an invoice
    • Get data of all invoices
  • Payment
    • Create a new payment
    • Delete a payment
    • Get data of a payment
    • Get data of all payments
  • Quote
    • Create a new quote
    • Delete a quote
    • Email a quote
    • Get data of a quote
    • Get data of all quotes
  • Task
    • Create a new task
    • Delete a task
    • Get data of a task
    • Get data of all tasks

Templates and examples

Receive updates on a new invoice via Invoice Ninja

View template details

Get multiple clients' data from Invoice Ninja

View template details

Automate Invoice Creation and Delivery with Google Sheets, Invoice Ninja and Gmail

View template details

Browse Invoice Ninja integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Level one: Introduction

URL: llms-txt#level-one:-introduction

Contents:

  • Is this course right for me?
  • What will I learn in this course?
  • What do I need to get started?
  • How long does the course take?
  • How do I complete the course?

Welcome to the n8n Course Level 1!

Is this course right for me?

This course introduces you to the fundamental concepts within n8n and develops your low-code automation expertise.

This course is for you if you:

  • Are starting to use n8n for the first time.
  • Are looking for some extra help creating your first workflow.
  • Want to automate processes in your personal or working life.

This course introduces n8n concepts and demonstrates practical workflow building without assuming any prior familiarity with n8n. If you'd like to get a feel for the basics without as much explanation, consult our quickstart guide.

What will I learn in this course?

We believe in learning by doing. You can expect some theoretical information about the basic concepts and components of n8n, followed by practice of building workflows step by step.

By the end of this course you will know:

  • How to set up n8n and navigate the Editor UI.
  • How n8n structures data.
  • How to configure different node parameters and add credentials.
  • When and how to use conditional logic in workflows.
  • How to schedule and control workflows.
  • How to import, download, and share workflows with others.

You will build two workflows:

  • A two-node workflow to get articles from Hacker News
  • A seven-node workflow to help your client get records from a data warehouse, filter them, make calculations, and notify team members about the results

What do I need to get started?

  1. n8n set up: You can use n8n Cloud (or the self-hosted version if you have experience hosting services).
  2. A course user ID: Sign up here to get your unique ID and other credentials you will need in this course (Level 1).
  3. Basic knowledge of JavaScript and APIs would be helpful, but isn't necessary.
  4. An account on the n8n community forum if you wish to receive a profile badge and avatar upon successful completion.

How long does the course take?

Completing the course should take around two hours. You don't have to complete it in one go; feel free to take breaks and resume whenever you are ready.

How do I complete the course?

There are two milestones in this course that test your knowledge of what you have learned in the lessons:

You can always check your progress throughout the course by entering your unique ID here.

If you complete the milestones above, you will get a badge and an avatar in your forum profile. You can then share your profile and course verification ID to showcase your n8n skills to others.

Let's get started!


Triggers library

URL: llms-txt#triggers-library

This section provides information about n8n's Triggers.


For example, /myDirectory/

URL: llms-txt#for-example,-/mydirectory/

Contents:

  • Templates and examples

Templates and examples

Breakdown Documents into Study Notes using Templating MistralAI and Qdrant

View template details

Build a Financial Documents Assistant using Qdrant and Mistral.ai

View template details

Organise Your Local File Directories With AI

View template details

Browse Local File Trigger integration templates, or search all templates


Slack credentials

URL: llms-txt#slack-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API access token
    • Slack Trigger configuration
  • Using OAuth2
  • Scopes
  • Common issues
    • Token expired

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API access token
  • OAuth2

Refer to Slack's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need a Slack account and:

  • An Access Token

To generate an access token, create a Slack app:

  1. Open your Slack API Apps page.
  2. Select Create New App > From scratch.
  3. Enter an App Name.
  4. Select the Workspace where you'll be developing your app.
  5. Select Create App. The app details open.
  6. In the left menu under Features, select OAuth & Permissions.
  7. In the Scopes section, select appropriate scopes for your app. Refer to Scopes for a list of recommended scopes.
  8. After you've added scopes, go up to the OAuth Tokens section and select Install to Workspace. You must be a Slack workspace admin to complete this action.
  9. Select Allow.
  10. Copy the Bot User OAuth Token and enter it as the Access Token in your n8n credential.
  11. If you're using this credential for the Slack Trigger, follow the steps in Slack Trigger configuration to finish setting up your app.

Refer to the Slack API Quickstart for more information.

Slack Trigger configuration

To use your Slack app with the Slack Trigger node:

  1. Go to Your Apps in Slack and select the app you want to use.

  2. Go to Features > Event Subscriptions.

  3. Turn on the Enable Events control.

  4. In n8n, copy the Webhook URL and enter it as the Request URL in your Slack app.

Slack only allows one request URL per app. If you want to test your workflow, you'll need to do one of the following:

  • Test with your Test URL first, then change your Slack app to use the Production URL once you've verified everything's working.
  • Use the Production URL with execution logging.

  5. Once verified, select the bot events to subscribe to. Use the Trigger on field in n8n to filter these requests. To use an event not in the list, add it as a bot event and select Any Event in the n8n node.

Refer to Quickstart | Configuring the app for event listening for more information.

n8n recommends enabling request signature verification for your Slack Trigger for additional security:

  1. Go to Your Apps in Slack and select the app you want to use.
  2. Go to Settings > Basic Information.
  3. Copy the value of the Signing Secret.
  4. In n8n, paste this value into the Signature Secret field for the credential.

Using OAuth2

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you're self-hosting n8n and need to configure OAuth2 from scratch, you'll need a Slack account and:

  • A Client ID
  • A Client Secret

To get both, create a Slack app:

  1. Open your Slack API Apps page.
  2. Select Create New App > From scratch.
  3. Enter an App Name.
  4. Select the Workspace where you'll be developing your app.
  5. Select Create App. The app details open.
  6. In Settings > Basic Information, open the App Credentials section.
  7. Copy the Client ID and Client Secret. Paste these into the corresponding fields in n8n.
  8. In the left menu under Features, select OAuth & Permissions.
  9. In the Redirect URLs section, select Add New Redirect URL.
  10. Copy the OAuth Callback URL from n8n and enter it as the new Redirect URL in Slack.
  11. Select Add.
  12. Select Save URLs.
  13. In the Scopes section, select appropriate scopes for your app. Refer to Scopes for a list of scopes.
  14. After you've added scopes, go up to the OAuth Tokens section and select Install to Workspace. You must be a Slack workspace admin to complete this action.
  15. Select Allow.
  16. At this point, you should be able to select the OAuth button in your n8n credential to connect.

Refer to the Slack API Quickstart for more information. Refer to the Slack Installing with OAuth documentation for more details on the OAuth flow itself.

Scopes determine what permissions an app has.

  • If you want your app to act on behalf of users who authorize the app, add the required scopes under the User Token Scopes section.
  • If you're building a bot, add the required scopes under the Bot Token Scopes section.

Here's the list of scopes the OAuth credential requires, which are a good starting point:

Scope name Notes
channels:read
channels:write Not available as a bot token scope
channels:history
chat:write
files:read
files:write
groups:read
groups:history
im:read
im:history
mpim:read
mpim:history
reactions:read
reactions:write
stars:read Not available as a bot token scope
stars:write Not available as a bot token scope
usergroups:read
usergroups:write
users.profile:read
users.profile:write Not available as a bot token scope
users:read
search:read

Common issues

Token expired

Slack offers token rotation that you can turn on for bot and user tokens. This makes every token expire after 12 hours. While this may be useful for testing, n8n credentials that use tokens with rotation enabled will fail once the token expires. If you want to use your Slack credentials in production, this feature must be off.

To check if your Slack app has token rotation turned on, refer to the Slack API Documentation | Token Rotation.

If your app uses token rotation

Please note, if your Slack app uses token rotation, you can't turn it off again. You need to create a new Slack app with token rotation disabled instead.


vars

URL: llms-txt#vars

  • Available on Self-hosted Enterprise and Pro and Enterprise Cloud plans.
  • You need access to the n8n instance owner account to create variables.

vars contains all Variables for the active environment. It's read-only: you can access variables using vars, but must set them using the UI.

Examples:

Example 1 (unknown):

// Access a variable
$vars.<variable-name>
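
For instance, you can reference a variable anywhere expressions are supported, such as in an HTTP Request node's URL field (apiBaseUrl is a hypothetical variable name):

Example 2 (unknown):

// Use a variable inside an expression
{{ $vars.apiBaseUrl }}/users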

n8n

URL: llms-txt#n8n

Contents:

  • Operations
  • Generate audit
  • Create credential
  • Delete credential
  • Get credential schema
  • Get execution
    • Get execution option
  • Get many executions
    • Get many executions filters
    • Get many execution options

A node to integrate with n8n itself. This node allows you to consume the n8n API in your workflows.

Refer to the n8n REST API documentation for more information on using the n8n API. Refer to API endpoint reference for working with the API endpoints directly.

You can find authentication information for this node in the API authentication documentation.

This node doesn't support SSL. If your server requires an SSL connection, use the HTTP Request node to call the n8n API. The HTTP Request node has options to provide the SSL certificate.

Generate audit

This operation has no parameters. Configure it with these options:

  • Categories: Select the risk categories you want the audit to include. Options include:
    • Credentials
    • Database
    • Filesystem
    • Instance
    • Nodes
  • Days Abandoned Workflow: Use this option to set the number of days without execution after which a workflow should be considered abandoned. Enter a number of days. The default is 90.

Create credential

Configure this operation with these parameters:

  • Name: Enter the name of the credential you'd like to create.
  • Credential Type: Enter the credential's type. The available types depend on nodes installed on the n8n instance. Some built-in types include githubApi, notionApi, and slackApi.
  • Data: Enter a valid JSON object with the required properties for this Credential Type. To see the expected format, use the Get Schema operation.
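
For illustration only, the Data for a simple API-key-style credential type might look like the following; the real property names vary by credential type, so check them with the Get Credential Schema operation first:

{
  "apiKey": "<your-api-key>"
}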

Delete credential

Configure this operation with this parameter:

  • Credential ID: Enter the ID of the credential you want to delete.

Get credential schema

Configure this operation with this parameter:

  • Credential Type: Enter the credential's type. The available types depend on nodes installed on the n8n instance. Some built-in types include githubApi, notionApi, and slackApi.

Get execution

Configure this operation with this parameter:

  • Execution ID: Enter the ID of the execution you want to retrieve.

Get execution option

You can further configure this operation with this Option:

  • Include Execution Details: Use this control to set whether to include the detailed execution data (turned on) or not (turned off).

Get many executions

Configure this operation with these parameters:

  • Return All: Set whether to return all results (turned on) or to limit the results to the entered Limit (turned off).
  • Limit: Set the number of results to return if the Return All control is turned off.

Get many executions filters

You can further configure this operation with these Filters:

  • Workflow: Filter the executions by workflow. Options include:
    • From list: Select a workflow to use as a filter.
    • By URL: Enter a workflow URL to use as a filter.
    • By ID: Enter a workflow ID to use as a filter.
  • Status: Filter the executions by status. Options include:
    • Error
    • Success
    • Waiting

Get many execution options

You can further configure this operation with this Option:

  • Include Execution Details: Use this control to set whether to include the detailed execution data (turned on) or not (turned off).

Delete execution

Configure this operation with this parameter:

  • Execution ID: Enter the ID of the execution you want to delete.

Activate, deactivate, delete, and get workflow

The Activate, Deactivate, Delete, and Get workflow operations all include the same parameter for you to select the Workflow you want to perform the operation on. Options include:

  • From list: Select the workflow from the list.
  • By URL: Enter the URL of the workflow.
  • By ID: Enter the ID of the workflow.

Create a workflow

Configure this operation with this parameter:

  • Workflow Object: Enter a valid JSON object with the new workflow's details. The object requires these fields:
    • name
    • nodes
    • connections
    • settings

Refer to n8n API reference for more information.
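
As a rough illustration, a minimal Workflow Object could look like the following (the single manual trigger node and the empty connections and settings objects are just an example; your instance may require additional fields):

{
  "name": "My new workflow",
  "nodes": [
    {
      "parameters": {},
      "name": "When clicking Execute workflow",
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [0, 0]
    }
  ],
  "connections": {},
  "settings": {}
}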

Get many workflows

Configure this operation with these parameters:

  • Return All: Set whether to return all results (turned on) or to limit the results to the entered Limit (turned off).
  • Limit: Set the number of results to return if the Return All control is turned off.

Get many workflows filters

You can further configure this operation with these Filters:

  • Return Only Active Workflows: Select whether to return only active workflows (turned on) or active and inactive workflows (turned off).
  • Tags: Enter a comma-separated list of tags the returned workflows must have.

Update a workflow

Configure this operation with these parameters:

  • Workflow: Select the workflow you want to update. Options include:
    • From list: Select the workflow from the list.
    • By URL: Enter the URL of the workflow.
    • By ID: Enter the ID of the workflow.
  • Workflow Object: Enter a valid JSON object to update the workflow with. The object requires these fields:
    • name
    • nodes
    • connections
    • settings

Refer to the n8n API | Update a workflow documentation for more information.

Templates and examples

Very quick quickstart

View template details

AI agent that can scrape webpages

View template details

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

Browse n8n integration templates, or search all templates


OpenRouter Chat Model node

URL: llms-txt#openrouter-chat-model-node

Contents:

  • Node parameters
    • Model
  • Node options
    • Frequency Penalty
    • Maximum Number of Tokens
    • Response Format
    • Presence Penalty
    • Sampling Temperature
    • Timeout
    • Max Retries
    • Top P

Use the OpenRouter Chat Model node to use OpenRouter's chat models with conversational agents.

On this page, you'll find the node parameters for the OpenRouter Chat Model node and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Select the model to use to generate the completion.

n8n dynamically loads models from OpenRouter and you'll only see the models available to your account.

Use these options to further refine the node's behavior.

Frequency Penalty

Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself.

Maximum Number of Tokens

Enter the maximum number of tokens used, which sets the completion length.

Response Format

Choose Text or JSON. JSON ensures the model returns valid JSON.

Presence Penalty

Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics.

Sampling Temperature

Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

Timeout

Enter the maximum request time in milliseconds.

Max Retries

Enter the maximum number of times to retry a request.

Top P

Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.

Templates and examples

Automate SEO-Optimized WordPress Posts with AI & Google Sheets

View template details

Personal Life Manager with Telegram, Google Services & Voice-Enabled AI

View template details

Publish WordPress Posts to Social Media X, Facebook, LinkedIn, Instagram with AI

View template details

Browse OpenRouter Chat Model integration templates, or search all templates

As OpenRouter is API-compatible with OpenAI, you can refer to LangChain's OpenAI documentation for more information about the service.

View n8n's Advanced AI documentation.


Linear credentials

URL: llms-txt#linear-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a Linear account.

Supported authentication methods

  • API key
  • OAuth2

Refer to Linear's API documentation for more information about the service.

Using API key

To configure this credential, you'll need:

  • An API Key: A personal API key from your Linear account.

Using OAuth2

To configure this credential, you'll need:

  • A Client ID: Generated when you create a new OAuth2 application.
  • A Client Secret: Generated when you create a new OAuth2 application.
  • Select the Actor: The actor defines how the OAuth2 application should create issues, comments and other changes. Options include:
    • User (Linear's default): The application creates resources as the authorizing user. Use this option if you want each user to do their own authentication.
    • Application: The application creates resources as itself. Use this option if you have only one user (like an admin) authorizing the application.
  • To use this credential with the Linear Trigger node, you must enable the Include Admin Scope toggle.

Refer to the Linear OAuth2 Authentication documentation for more detailed instructions and explanations. Use the n8n OAuth Redirect URL as the Redirect callback URL in your Linear OAuth2 application.


TheHive credentials

URL: llms-txt#thehive-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

TheHive and TheHive 5

n8n provides two nodes for TheHive. Use these credentials with the TheHive node for TheHive 3 or TheHive 4. If you're using the TheHive 5 node, use TheHive 5 credentials.

Install TheHive on your server.

Supported authentication methods

Refer to TheHive 3's API documentation and TheHive 4's API documentation for more information about the services.

To configure this credential, you'll need:

  • An API Key: Create an API key from Organization > Create API Key. Refer to API Authentication for more information.
  • Your URL: The URL of your TheHive server.
  • An API Version: Choose between:
  • Ignore SSL Issues: When turned on, n8n will connect even if SSL certificate validation fails.

Demonstration of key differences between agents and chains

URL: llms-txt#demonstration-of-key-differences-between-agents-and-chains

Contents:

  • Key features
  • Using the example

In this workflow you can choose whether your chat query goes to an agent or chain. It shows some of the ways that agents are more powerful than chains.

View workflow file

Key features

  • Chat Trigger: start your workflow and respond to user chat interactions. The node provides a customizable chat interface.
  • Switch node: directs your query to either the agent or chain, depending on which you specify in your query. If you say "agent" it sends it to the agent. If you say "chain" it sends it to the chain.
  • Agent: the Agent node interacts with other components of the workflow and makes decisions about what tools to use.
  • Basic LLM Chain: the Basic LLM Chain node supports chatting with a connected LLM, but doesn't support memory or tools.

Using the example

To load the template into your n8n instance:

  1. Download the workflow JSON file.
  2. Open a new workflow in your n8n instance.
  3. Copy in the JSON, or select Workflow menu > Import from file....

The example workflows use Sticky Notes to guide you:

  • Yellow: notes and information.
  • Green: instructions to run the workflow.
  • Orange: you need to change something to make the workflow work.
  • Blue: draws attention to a key feature of the example.

Gmail node common issues

URL: llms-txt#gmail-node-common-issues

Contents:

  • Remove the n8n attribution from sent messages
  • Forbidden - perhaps check your credentials
  • 401 unauthorized error
  • Bad request - please check your parameters

Here are some common errors and issues with the Gmail node and steps to resolve or troubleshoot them.

Remove the n8n attribution from sent messages

If you're using the node to send a message or reply to a message, the node appends this statement to the end of the email:

This email was sent automatically with n8n

To remove this attribution:

  1. In the node's Options section, select Add option.
  2. Select Append n8n attribution.
  3. Turn the toggle off.

Refer to Send options and Reply options for more information.

Forbidden - perhaps check your credentials

This error displays next to certain dropdowns in the node, like the Label Names or IDs dropdown. The full text looks something like this:

The error most often displays when you're using a Google Service Account as the credential and the credential doesn't have Impersonate a User turned on.

Refer to Google Service Account: Finish your n8n credential for more information.

401 unauthorized error

The full text of the error looks like this:

This error occurs when there's an issue with the credential you're using and its scopes or permissions.

  1. For OAuth2 credentials, make sure you've enabled the Gmail API in APIs & Services > Library. Refer to Google OAuth2 Single Service - Enable APIs for more information.
  2. For Service Account credentials:
    1. Enable domain-wide delegation.
    2. Make sure you add the Gmail API as part of the domain-wide delegation configuration.

Bad request - please check your parameters

This error most often occurs if you enter a Message ID, Thread ID, or Label ID that doesn't exist.

Try a Get operation with the ID to confirm it exists.

Examples:

Example 1 (unknown):

There was a problem loading the parameter options from server: "Forbidden - perhaps check your credentials?"

Example 2 (unknown):

401 - {"error":"unauthorized_client","error_description":"Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested."}

Vercel AI Gateway credentials

URL: llms-txt#vercel-ai-gateway-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key
  • Using OIDC token

You can use these credentials to authenticate the following nodes:

Create a Vercel account.

Supported authentication methods

  • API key
  • OIDC token

Refer to the Vercel AI Gateway documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key

To generate your API Key:

  1. Log in to Vercel or create an account.
  2. Go to the Vercel dashboard and select the AI Gateway tab.
  3. Select API keys in the left sidebar.
  4. Select Add key and proceed with Create key from the dialog.
  5. Copy your key and add it as the API Key in n8n.

To configure this credential, you'll need:

  • An OIDC token

To generate your OIDC token:

  1. In local development, link your application to a Vercel project with the vc link command.
  2. Run the vercel env pull command to pull the environment variables from Vercel.
  3. Copy your token and add it as the OIDC TOKEN in n8n.

YouTube node

URL: llms-txt#youtube-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the YouTube node to automate work in YouTube, and integrate YouTube with other applications. n8n has built-in support for a wide range of YouTube features, including retrieving and updating channels, as well as creating and deleting playlists.

On this page, you'll find a list of operations the YouTube node supports and links to more resources.

Refer to YouTube credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Channel
    • Retrieve a channel
    • Retrieve all channels
    • Update a channel
    • Upload a channel banner
  • Playlist
    • Create a playlist
    • Delete a playlist
    • Get a playlist
    • Retrieve all playlists
    • Update a playlist
  • Playlist Item
    • Add an item to a playlist
    • Delete an item from a playlist
    • Get a playlist's item
    • Retrieve all playlist items
  • Video
    • Delete a video
    • Get a video
    • Retrieve all videos
    • Rate a video
    • Update a video
    • Upload a video
  • Video Category
    • Retrieve all video categories

Templates and examples

Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram

View template details

Generate AI Videos with Google Veo3, Save to Google Drive and Upload to YouTube

View template details

AI-Powered YouTube Video Summarization & Analysis

View template details

Browse YouTube integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Set the Cloud instance timezone

URL: llms-txt#set-the-cloud-instance-timezone

You can change the timezone for your n8n instance. This affects the Schedule Trigger and Date & Time node. Users can configure the timezone for individual workflows in Workflow settings.

  1. On your dashboard, select Manage.
  2. Change the Timezone dropdown to the timezone you want.

Wise node

URL: llms-txt#wise-node

Contents:

  • Operations
  • Templates and examples

Use the Wise node to automate work in Wise, and integrate Wise with other applications. n8n has built-in support for a wide range of Wise features, including getting profiles, exchange rates, and recipients.

On this page, you'll find a list of operations the Wise node supports and links to more resources.

Refer to Wise credentials for guidance on setting up authentication.

  • Account
    • Retrieve balances for all account currencies of this user.
    • Retrieve currencies in the borderless account of this user.
    • Retrieve the statement for the borderless account of this user.
  • Exchange Rate
    • Get
  • Profile
    • Get
    • Get All
  • Recipient
    • Get All
  • Quote
    • Create
    • Get
  • Transfer
    • Create
    • Delete
    • Execute
    • Get
    • Get All

Templates and examples

Browse Wise integration templates, or search all templates


Elasticsearch node

URL: llms-txt#elasticsearch-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Elasticsearch node to automate work in Elasticsearch, and integrate Elasticsearch with other applications. n8n has built-in support for a wide range of Elasticsearch features, including creating, updating, deleting, and getting documents and indexes.

On this page, you'll find a list of operations the Elasticsearch node supports and links to more resources.

Refer to Elasticsearch credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Document
    • Create a document
    • Delete a document
    • Get a document
    • Get all documents
    • Update a document
  • Index
    • Create
    • Delete
    • Get
    • Get All

Templates and examples

Build Your Own Image Search Using AI Object Detection, CDN and ElasticSearch

View template details

Create an automated workitem(incident/bug/userstory) in azure devops

View template details

Dynamic Search Interface with Elasticsearch and Automated Report Generation

View template details

Browse Elasticsearch integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


SurveyMonkey Trigger node

URL: llms-txt#surveymonkey-trigger-node

SurveyMonkey is an online cloud-based SaaS survey platform that also provides a suite of paid back-end programs.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's SurveyMonkey Trigger integrations page.


Configure n8n to use your own certificate authority or self-signed certificate

URL: llms-txt#configure-n8n-to-use-your-own-certificate-authority-or-self-signed-certificate

Contents:

  • Docker
    • Docker CLI
    • Docker Compose
  • Certificate requirements for Custom Trust Store

You can add your own certificate authority (CA) or self-signed certificate to n8n. This means you are able to trust a certain SSL certificate instead of trusting all invalid certificates, which is a potential security risk.

Added in version 1.42.0

This feature is available in version 1.42.0 and above.

To use this feature you need to place your certificates in a folder and mount the folder to /opt/custom-certificates in the container. The external path that you map to /opt/custom-certificates must be writable by the container.

The examples below assume you have a folder called pki that contains your certificates in either the directory you run the command from or next to your docker compose file.

When using the Docker CLI, mount the folder with the -v flag, as shown in Example 1 below. With Docker Compose, add the folder as a volume instead, as shown in Example 2.

You should also give the right permissions to the imported certificates. You can do this once the container is running (assuming n8n is the container name), as shown in Example 3.

Certificate requirements for Custom Trust Store

Supported certificate types:

  • Root CA Certificates: these are certificates from Certificate Authorities that sign other certificates. Trust these to accept all certificates signed by that CA.
  • Self-Signed Certificates: certificates that servers create and sign themselves. Trust these to accept connections to that specific server only.

You must use PEM format:

  • Text-based format with BEGIN/END markers
  • Supported file extensions: .pem, .crt, .cer
  • Contains the public certificate (no private key needed)

The system doesn't accept:

  • DER/binary format files
  • PKCS#7 (.p7b) files
  • PKCS#12 (.pfx, .p12) files
  • Private key files

Convert these formats to PEM before use.

Examples:

Example 1 (unknown):

docker run -it --rm \
 --name n8n \
 -p 5678:5678 \
 -v ./pki:/opt/custom-certificates \
 docker.n8n.io/n8nio/n8n

Example 2 (unknown):

name: n8n
services:
    n8n:
        volumes:
            - ./pki:/opt/custom-certificates
        container_name: n8n
        ports:
            - 5678:5678
        image: docker.n8n.io/n8nio/n8n

Example 3 (unknown):

docker exec --user 0 n8n chown -R 1000:1000 /opt/custom-certificates

Example 4 (unknown):

-----BEGIN CERTIFICATE-----
MIIDXTCCAkWgAwIBAgIJAKoK/heBjcOuMA0GCSqGSIb3DQEBBQUAMEUxCzAJBgNV
[base64 encoded data]
-----END CERTIFICATE-----

Webex by Cisco node

URL: llms-txt#webex-by-cisco-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Webex by Cisco node to automate work in Webex, and integrate Webex with other applications. n8n has built-in support for a wide range of Webex features, including creating, getting, updating, and deleting meetings and messages.

On this page, you'll find a list of operations the Webex node supports and links to more resources.

Refer to Webex credentials for guidance on setting up authentication.

Examples and Templates

For usage examples and templates to help you get started, take a look at n8n's Webex integrations list.

  • Meeting
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • Message
    • Create
    • Delete
    • Get
    • Get All
    • Update

Templates and examples

Browse Webex by Cisco integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Source control environment variables

URL: llms-txt#source-control-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

n8n uses Git-based source control to support environments. Refer to Source control and environments for more information on how to link a Git repository to an n8n instance and configure your source control.

Variable Type Default Description
N8N_SOURCECONTROL_DEFAULT_SSH_KEY_TYPE String ed25519 Set to rsa to make RSA the default SSH key type for Source control setup.
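
As a quick illustration, you could set this variable directly, or, if you keep sensitive values in files, use the _FILE suffix described above (the file path here is only an example):

# Make RSA the default SSH key type for source control setup
export N8N_SOURCECONTROL_DEFAULT_SSH_KEY_TYPE=rsa

# Or read the value from a separate file
export N8N_SOURCECONTROL_DEFAULT_SSH_KEY_TYPE_FILE=/run/secrets/n8n_ssh_key_type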

HubSpot credentials

URL: llms-txt#hubspot-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using App token
  • Using Developer API key
    • Required scopes for HubSpot Trigger node
  • Using OAuth2
  • Required scopes for HubSpot node

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • App token
  • Developer API key
  • OAuth2

HubSpot deprecated the regular API key authentication method. The option still appears in n8n, but you should use one of the authentication methods listed above instead. If you have existing integrations using the API key method, refer to HubSpot's Migrate an API key integration to a private app guide and set up an app token.

Refer to HubSpot's API documentation for more information about the service. The HubSpot Trigger node uses the Webhooks API; refer to HubSpot's Webhooks API documentation for more information about that service.

Using App token

To configure this credential, you'll need a HubSpot account or HubSpot developer account and:

  • An App Token

To generate an app token, create a private app in HubSpot:

  1. In your HubSpot account, select the settings icon in the main navigation bar.
  2. In the left sidebar menu, go to Integrations > Private Apps.
  3. Select Create private app.
  4. On the Basic Info tab, enter your app's Name.
  5. Hover over the placeholder logo and select the upload icon to upload a square image that will serve as the logo for your app.
  6. Enter a Description for your app.
  7. Open the Scopes tab and add the appropriate scopes. Refer to Required scopes for HubSpot node for a complete list of scopes you should add.
  8. Select Create app to finish the process.
  9. In the modal, review the info about your app's access token, then select Continue creating.
  10. Once your app's created, open the Access token card and select Show token to reveal the token.
  11. Copy this token and enter it in your n8n credential.

Refer to the HubSpot Private Apps documentation for more information.

Using Developer API key

To configure this credential, you'll need a HubSpot developer account and:

  • A Client ID: Generated once you create a public app.
  • A Client Secret: Generated once you create a public app.
  • A Developer API Key: Generated from your Developer Apps dashboard.
  • An App ID: Generated once you create a public app.

To create the public app and set up the credential:

  1. Log into your HubSpot app developer account.
  2. Select Apps from the main navigation bar.
  3. Select Get HubSpot API key. You may need to select the option to Show key.
  4. Copy the key and enter it in n8n as the Developer API Key.
  5. Still on the HubSpot Apps page, select Create app.
  6. On the App Info tab, add an App name, Description, Logo, and any support contact info you want to provide. Anyone encountering the app would see these.
  7. Open the Auth tab.
  8. Copy the App ID and enter it in n8n.
  9. Copy the Client ID and enter it in n8n.
  10. Copy the Client Secret and enter it in n8n.
  11. In the Scopes section, select Add new scope.
  12. Add all the scopes listed in Required scopes for HubSpot Trigger node to your app.
  13. Select Update.
  14. Copy the n8n OAuth Redirect URL and enter it as the Redirect URL in your HubSpot app.
  15. Select Create app to finish creating the HubSpot app.

Refer to the HubSpot Public Apps documentation for more detailed instructions.

Required scopes for HubSpot Trigger node

If you're creating an app for use with the HubSpot Trigger node, n8n recommends starting with these scopes:

Element Object Permission Scope name
n/a n/a n/a oauth
CRM Companies Read crm.objects.companies.read
CRM Companies schemas Read crm.schemas.companies.read
CRM Contacts Read crm.objects.contacts.read
CRM Contacts schemas Read crm.schemas.contacts.read
CRM Deals Read crm.objects.deals.read
CRM Deals schemas Read crm.schemas.deals.read

Some HubSpot accounts don't have access to all the scopes. HubSpot is migrating accounts gradually. If you can't find all the scopes in your current HubSpot developer account, try creating a fresh developer account.

Using OAuth2

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you're self-hosting n8n, you'll need to configure OAuth2 from scratch by creating a new public app:

  1. Log into your HubSpot app developer account.
  2. Select Apps from the main navigation bar.
  3. Select Create app.
  4. On the App Info tab, add an App name, Description, Logo, and any support contact info you want to provide. Anyone encountering the app would see these.
  5. Open the Auth tab.
  6. Copy the App ID and enter it in n8n.
  7. Copy the Client ID and enter it in n8n.
  8. Copy the Client Secret and enter it in n8n.
  9. In the Scopes section, select Add new scope.
  10. Add all the scopes listed in Required scopes for HubSpot node to your app.
  11. Select Update.
  12. Copy the n8n OAuth Redirect URL and enter it as the Redirect URL in your HubSpot app.
  13. Select Create app to finish creating the HubSpot app.

Refer to the HubSpot Public Apps documentation for more detailed instructions. If you need more detail on what's happening in the OAuth web flow, refer to the HubSpot Working with OAuth documentation.

Required scopes for HubSpot node

If you're creating an app for use with the HubSpot node, n8n recommends starting with these scopes:

Element Object Permission Scope name(s)
n/a n/a n/a oauth
n/a n/a n/a forms
n/a n/a n/a tickets
CRM Companies Read Write crm.objects.companies.read crm.objects.companies.write
CRM Companies schemas Read crm.schemas.companies.read
CRM Contacts schemas Read crm.schemas.contacts.read
CRM Contacts Read Write crm.objects.contacts.read crm.objects.contacts.write
CRM Deals Read Write crm.objects.deals.read crm.objects.deals.write
CRM Deals schemas Read crm.schemas.deals.read
CRM Owners Read crm.objects.owners.read
CRM Lists Write crm.lists.write

Some HubSpot accounts don't have access to all the scopes. HubSpot is migrating accounts gradually. If you can't find all the scopes in your current HubSpot developer account, try creating a fresh developer account.


API pagination

URL: llms-txt#api-pagination

The default page size is 100 results. You can change the page size using the limit parameter. The maximum permitted size is 250.

When a response contains more than one page, it includes a cursor, which you can use to request the next pages.

For example, say you want to get all active workflows, 150 at a time.
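
A minimal sketch using curl (the instance URL and API key are placeholders; the X-N8N-API-KEY header and /api/v1/workflows endpoint follow the n8n public API's conventions):

# First request: up to 150 active workflows
curl -H "X-N8N-API-KEY: <your-api-key>" \
  "https://<your-instance>/api/v1/workflows?active=true&limit=150"

# If the response contains a nextCursor value, pass it back to fetch the next page
curl -H "X-N8N-API-KEY: <your-api-key>" \
  "https://<your-instance>/api/v1/workflows?active=true&limit=150&cursor=<nextCursor-from-previous-response>"

Repeat until the response no longer includes a cursor.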


HTTP Request Tool node

URL: llms-txt#http-request-tool-node

Contents:

  • Templates and examples
  • Related resources

New instances of the HTTP Request tool node that you add to workflows use the standard HTTP Request node as a tool. This page describes the legacy, standalone HTTP Request tool node.

You can identify which tool version is in your workflow by checking if the node has an Add option property when you open the node on the canvas. If that button is present, you're using the new version, not the one described on this page.

The HTTP Request tool works just like the HTTP Request node, but it's designed to be used with an AI agent as a tool to collect information from a website or API.

On this page, you'll find a list of operations the HTTP Request node supports and links to more resources.

Refer to HTTP Request credentials for guidance on setting up authentication.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Templates and examples

Browse HTTP Request Tool node documentation integration templates, or search all templates

Refer to LangChain's documentation on tools for more information about tools in LangChain.

View n8n's Advanced AI documentation.


Sticky Notes

URL: llms-txt#sticky-notes

Contents:

  • Create a Sticky Note
  • Edit a Sticky Note
  • Change the color
  • Sticky Note positioning
  • Writing in Markdown

Sticky Notes allow you to annotate and comment on your workflows.

n8n recommends using Sticky Notes heavily, especially on template workflows, to help other users understand your workflow.

Create a Sticky Note

Sticky Notes are a core node. To add a new Sticky Note:

  1. Open the nodes panel.
  2. Search for note.
  3. Click the Sticky Note node. n8n adds a new Sticky Note to the canvas.

Edit a Sticky Note

  1. Double click the Sticky Note you want to edit.
  2. Write your note. This guide explains how to format your text with Markdown. n8n uses markdown-it, which implements the CommonMark specification.
  3. Click away from the note, or press Esc, to stop editing.

To change the Sticky Note color:

  1. Hover over the Sticky Note
  2. Select Change color

Sticky Note positioning

  • Drag a Sticky Note anywhere on the canvas.
  • Drag Sticky Notes behind nodes. You can use this to visually group nodes.
  • Resize Sticky Notes by hovering over the edge of the note and dragging to resize.
  • Change the color: select Options to open the color selector.

Writing in Markdown

Sticky Notes support Markdown formatting. This section describes some common options.

The text in double asterisks will be **bold**

The text in single asterisks will be *italic*

Use # to indicate headings. For example, # Heading produces a top-level heading and ## Heading produces a second-level heading.

---

## Embeddings OpenAI node

**URL:** llms-txt#embeddings-openai-node

**Contents:**
- Node options
- Templates and examples
- Related resources

Use the Embeddings OpenAI node to generate [embeddings](../../../../../glossary/#ai-embedding) for a given text.

On this page, you'll find the node parameters for the Embeddings OpenAI node, and links to more resources.

You can find authentication information for this node [here](../../../credentials/openai/).

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name.

- **Model**: Select the model to use for generating embeddings.
- **Base URL**: Enter the URL to send the request to. Use this if you are using a self-hosted OpenAI-like model.
- **Batch Size**: Enter the maximum number of documents to send in each request.
- **Strip New Lines**: Select whether to remove new line characters from input text (turned on) or not (turned off). n8n enables this by default.
- **Timeout**: Enter the maximum amount of time a request can take in seconds. Set to `-1` for no timeout.

## Templates and examples

**Building Your First WhatsApp Chatbot**

[View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/)

**Ask questions about a PDF using AI**

[View template details](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/)

**Chat with PDF docs using AI (quoting sources)**

[View template details](https://n8n.io/workflows/2165-chat-with-pdf-docs-using-ai-quoting-sources/)

[Browse Embeddings OpenAI integration templates](https://n8n.io/integrations/embeddings-openai/), or [search all templates](https://n8n.io/workflows/)

Refer to [LangChain's OpenAI embeddings documentation](https://js.langchain.com/docs/integrations/text_embedding/openai/) for more information about the service.

View n8n's [Advanced AI](../../../../../advanced-ai/) documentation.

---

## Wekan credentials

**URL:** llms-txt#wekan-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using basic auth

You can use these credentials to authenticate the following nodes:

- [Wekan](../../app-nodes/n8n-nodes-base.wekan/)

Install [Wekan](https://github.com/wekan/wekan/wiki) on your server.

## Supported authentication methods

Refer to [Wekan's API documentation](https://github.com/wekan/wekan/wiki/REST-API) for more information about authenticating with the service.

To configure this credential, you'll need:

- A **Username**: Enter your Wekan username.
- A **Password**: Enter your Wekan password.
- A **URL**: Enter your Wekan domain.

---

## Facebook Trigger Ad Account object

**URL:** llms-txt#facebook-trigger-ad-account-object

**Contents:**
- Trigger configuration
- Related resources

Use this object to receive updates on certain ads changes in an Ad Account. Refer to [Facebook Trigger](../) for more information on the trigger itself.

You can find authentication information for this node [here](../../../credentials/facebookapp/).

Examples and templates

For usage examples and templates to help you get started, refer to n8n's [Facebook Trigger integrations](https://n8n.io/integrations/facebook-trigger/) page.

## Trigger configuration

To configure the trigger with this Object:

1. Select the **Credential to connect with**. Select an existing or create a new [Facebook App credential](../../../credentials/facebookapp/).
1. Enter the **APP ID** of the app connected to your credential. Refer to the [Facebook App credential](../../../credentials/facebookapp/) documentation for more information.
1. Select **Ad Account** as the **Object**.
1. **Field Names or IDs**: By default, the node will trigger on all the available Ad Account events using the `*` wildcard filter. If you'd like to limit the events, use the `X` to remove the star and use the dropdown or an expression to select the updates you're interested in. Options include:
   - **In Process Ad Objects**: Notifies you when a campaign, ad set, or ad exits the `IN_PROCESS` status. Refer to Meta's [Post-processing for Ad Creation and Edits](https://developers.facebook.com/docs/marketing-api/using-the-api/post-processing/) for more information.
   - **With Issues Ad Objects**: Notifies you when a campaign, ad set, or ad under the ad account receives the `WITH_ISSUES` status.
1. In **Options**, turn on the toggle to **Include Values**. This Object type fails without the option enabled.

Refer to [Webhooks for Ad Accounts](https://developers.facebook.com/docs/graph-api/webhooks/getting-started/webhooks-for-ad-accounts) and Meta's [Ad Account](https://developers.facebook.com/docs/graph-api/webhooks/reference/ad-account/) Graph API reference for more information.

---

## Configure n8n webhooks with reverse proxy

**URL:** llms-txt#configure-n8n-webhooks-with-reverse-proxy

n8n creates the webhook URL by combining `N8N_PROTOCOL`, `N8N_HOST` and `N8N_PORT`. If n8n runs behind a reverse proxy, that won't work. That's because n8n runs internally on port 5678 but the reverse proxy exposes it to the web on port 443.

When running n8n behind a reverse proxy, it's important to do the following:

- Set the webhook URL manually with the `WEBHOOK_URL` environment variable so that n8n can display it in the editor UI and register the correct webhook URLs with external services.
- Set the `N8N_PROXY_HOPS` environment variable to `1`.
- On the last proxy on the request path, set the following headers to pass on information about the initial request:
  - [`X-Forwarded-For`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Forwarded-For)
  - [`X-Forwarded-Host`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Forwarded-Host)
  - [`X-Forwarded-Proto`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Forwarded-Proto)

Refer to [Environment variables reference](../../environment-variables/endpoints/) for more information on this variable.
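
As a rough sketch, an nginx reverse proxy in front of n8n might set these headers like this (the server name and internal port are placeholders; adjust them, and any TLS settings, for your setup):

```nginx
server {
    listen 443 ssl;
    server_name n8n.example.com;

    location / {
        # Forward traffic to the n8n process or container on its internal port
        proxy_pass http://localhost:5678;

        # Pass on information about the original request
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Allow WebSocket connections used by the editor UI
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```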

**Examples:**

Example 1 (unknown):
```unknown
export WEBHOOK_URL=https://n8n.example.com/
export N8N_PROXY_HOPS=1
```

Examples using n8n's HTTP Request node

URL: llms-txt#examples-using-n8n's-http-request-node

Contents:

  • Related resources

The HTTP Request node is one of the most versatile nodes in n8n. Use this node to make HTTP requests to query data from any app or service with a REST API.

Refer to HTTP Request for information on node settings.


Toggl credentials

URL: llms-txt#toggl-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth

You can use these credentials to authenticate the following nodes:

Create a Toggl account.

Supported authentication methods

Refer to Toggl's API documentation for more information about the service.

To configure this credential, you'll need:

  • A Username: Enter your user email address.
  • A Password: Enter your user password.

Refer to Authentication for more information.


Mailcheck node

URL: llms-txt#mailcheck-node

Contents:

  • Operations
  • Templates and examples

Use the Mailcheck node to automate work in Mailcheck, and integrate Mailcheck with other applications. n8n has built-in support for a wide range of Mailcheck features, including checking emails.

On this page, you'll find a list of operations the Mailcheck node supports and links to more resources.

Refer to Mailcheck credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Templates and examples

Browse Mailcheck integration templates, or search all templates


Cohere credentials

URL: llms-txt#cohere-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Cohere account.

You'll need an account with the following access:

  • For the Trial API, you need User or Owner permissions.
  • For Production API, you need Owner permissions.

Refer to Cohere Teams and Roles documentation for more information.

Supported authentication methods

Refer to Cohere's documentation for more information about the service.

View n8n's Advanced AI documentation.

To configure this credential, you'll need:


User management

URL: llms-txt#user-management

Contents:

  • Setup guides

User management in n8n allows you to invite people to work in your n8n instance. It includes:

  • Login and password management
  • Adding and removing users
  • Three account types: Owner and Member (and Admin for Pro & Enterprise plans)

The user management feature doesn't send personal information, such as email or username, to n8n.

This section contains most usage information for user management, and the Cloud setup guide. If you self-host n8n, there are extra steps to configure your n8n instance. Refer to the Self-hosted guide.

This section includes guides to configuring LDAP and SAML in n8n.


Jina AI credentials

URL: llms-txt#jina-ai-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Jina AI's reader API documentation and Jina AI's search API documentation for more information about the service.

To configure this credential, you'll need:

  • API key: A Jina AI API key. You can get your free API key without creating an account by doing the following:
    1. Visit the Jina AI website.
    2. Select API on the page.
    3. Select API KEY & BILLING in the API app widget.
    4. Copy the key labeled "This is your unique key. Store it securely!".

Jina AI API keys start with 10 million free tokens that you can use non-commercially. To top up your key or use commercially, scroll on the API KEY & BILLING tab of the API widget and select the top up option that best fits your needs.


OpenAI node common issues

URL: llms-txt#openai-node-common-issues

Contents:

  • The service is receiving too many requests from you
  • Insufficient quota
  • Bad request - please check your parameters
  • Referenced node is unexecuted

Here are some common errors and issues with the OpenAI node and steps to resolve or troubleshoot them.

The service is receiving too many requests from you

This error displays when you've exceeded OpenAI's rate limits.

There are two ways to work around this issue:

  1. Split your data up into smaller chunks using the Loop Over Items node and add a Wait node at the end for a time amount that will help. Copy the code below and paste it into a workflow to use as a template.

  2. Use the HTTP Request node with the built-in batch-limit option against the OpenAI API instead of using the OpenAI node.

Insufficient quota

There are a number of OpenAI issues surrounding quotas, including failures when quotas have been recently topped up. To avoid these issues, ensure that there is credit in the account and issue a new API key from the API keys screen.

This error displays when your OpenAI account doesn't have enough credits or capacity to fulfill your request. This may mean that your OpenAI trial period has ended, that your account needs more credit, or that you've gone over a usage limit.

To troubleshoot this error, on your OpenAI settings page:

  • Select the correct organization for your API key in the first selector in the upper-left corner.
  • Select the correct project for your API key in the second selector in the upper-left corner.
  • Check the organization-level billing overview page to ensure that the organization has enough credit. Double-check that you select the correct organization for this page.
  • Check the organization-level usage limits page. Double-check that you select the correct organization for this page and scroll to the Usage limits section to verify that you haven't exceeded your organization's usage limits.
  • Check your OpenAI project's usage limits. Double-check that you select the correct project in the second selector in the upper-left corner. Select Project > Limits to view or change the project limits.
  • Check that the OpenAI API is operating as expected.

Balance waiting period

After topping up your balance, there may be a delay before your OpenAI account reflects the new balance.

If you find yourself frequently running out of account credits, consider turning on auto recharge in your OpenAI billing settings to automatically reload your account with credits when your balance reaches $0.

Bad request - please check your parameters

This error displays when the request results in an error but n8n wasn't able to interpret the error message from OpenAI.

To begin troubleshooting, try running the same operation using the HTTP Request node, which should provide a more detailed error message.

Referenced node is unexecuted

This error displays when a previous node in the workflow hasn't executed and isn't providing output that this node needs as input.

The full text of this error will tell you the exact node that isn't executing in this format:

To begin troubleshooting, test the workflow up to the named node.

For nodes that call JavaScript or other custom code, determine if a node has executed before trying to use the value by calling:

Examples:

Example 1 (unknown):

{
       "nodes": [
       {
           "parameters": {},
           "id": "35d05920-ad75-402a-be3c-3277bff7cc67",
           "name": "When clicking Execute workflow",
           "type": "n8n-nodes-base.manualTrigger",
           "typeVersion": 1,
           "position": [
           880,
           400
           ]
       },
       {
           "parameters": {
           "batchSize": 500,
           "options": {}
           },
           "id": "ae9baa80-4cf9-4848-8953-22e1b7187bf6",
           "name": "Loop Over Items",
           "type": "n8n-nodes-base.splitInBatches",
           "typeVersion": 3,
           "position": [
           1120,
           420
           ]
       },
       {
           "parameters": {
           "resource": "chat",
           "options": {},
           "requestOptions": {}
           },
           "id": "a519f271-82dc-4f60-8cfd-533dec580acc",
           "name": "OpenAI",
           "type": "n8n-nodes-base.openAi",
           "typeVersion": 1,
           "position": [
           1380,
           440
           ]
       },
       {
           "parameters": {
           "unit": "minutes"
           },
           "id": "562d9da3-2142-49bc-9b8f-71b0af42b449",
           "name": "Wait",
           "type": "n8n-nodes-base.wait",
           "typeVersion": 1,
           "position": [
           1620,
           440
           ],
           "webhookId": "714ab157-96d1-448f-b7f5-677882b92b13"
       }
       ],
       "connections": {
       "When clicking Execute workflow": {
           "main": [
           [
               {
               "node": "Loop Over Items",
               "type": "main",
               "index": 0
               }
           ]
           ]
       },
       "Loop Over Items": {
           "main": [
           null,
           [
               {
               "node": "OpenAI",
               "type": "main",
               "index": 0
               }
           ]
           ]
       },
       "OpenAI": {
           "main": [
           [
               {
               "node": "Wait",
               "type": "main",
               "index": 0
               }
           ]
           ]
       },
       "Wait": {
           "main": [
           [
               {
               "node": "Loop Over Items",
               "type": "main",
               "index": 0
               }
           ]
           ]
       }
       },
       "pinData": {}
   }

Example 2 (error message):

An expression references the node '<node-name>', but it hasn't been executed yet. Either change the expression, or re-wire your workflow to make sure that node executes first.

Example 3 (JavaScript):

$("<node-name>").isExecuted

Clockify node

URL: llms-txt#clockify-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Clockify node to automate work in Clockify, and integrate Clockify with other applications. n8n has built-in support for a wide range of Clockify features, including creating, updating, getting, and deleting tasks, time entries, projects, and tags.

On this page, you'll find a list of operations the Clockify node supports and links to more resources.

Refer to Clockify credentials for guidance on setting up authentication.

  • Project
    • Create a project
    • Delete a project
    • Get a project
    • Get all projects
    • Update a project
  • Tag
    • Create a tag
    • Delete a tag
    • Get all tags
    • Update a tag
  • Task
    • Create a task
    • Delete a task
    • Get a task
    • Get all tasks
    • Update a task
  • Time Entry
    • Create a time entry
    • Delete a time entry
    • Get time entry
    • Update a time entry

Templates and examples

Time logging on Clockify using Slack

View template details

Manage projects in Clockify

View template details

Update time-tracking projects based on Syncro status changes

View template details

Browse Clockify integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Chat Memory Manager node

URL: llms-txt#chat-memory-manager-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources

The Chat Memory Manager node manages chat message memories within your workflows. Use this node to load, insert, and delete chat messages in an in-memory vector store.

This node is useful when you:

  • Can't add a memory node directly.
  • Need to do more complex memory management, beyond what the memory nodes offer. For example, you can add this node to check the memory size of the Agent node's response, and reduce it if needed.
  • Want to inject messages to the AI that look like user messages, to give the AI more context.

On this page, you'll find a list of operations that the Chat Memory Manager node supports, along with links to more resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Operation Mode: Choose between Get Many Messages, Insert Messages, and Delete Messages operations.
  • Insert Mode: Available in Insert Messages mode. Choose from:
    • Insert Messages: Insert messages alongside existing messages.
    • Override All Messages: Replace current memory.
  • Delete Mode: Available in Delete Messages mode. Choose from:
    • Last N: Delete the last N messages.
    • All Messages: Delete all messages from memory.
  • Chat Messages: Available in Insert Messages mode. Define the chat messages to insert into the memory, including:
    • Type Name or ID: Set the message type. Select one of:
      • AI: Use this for messages from the AI.
      • System: Add a message containing instructions for the AI.
      • User: Use this for messages from the user. This message type is sometimes called the 'human' message in other AI tools and guides.
    • Message: Enter the message contents.
    • Hide Message in Chat: Select whether n8n should display the message to the user in the chat UI (turned off) or not (turned on).
  • Messages Count: Available in Delete Messages mode when you select Last N. Enter the number of latest messages to delete.
  • Simplify Output: Available in Get Many Messages mode. Turn on to simplify the output to include only the sender (AI, user, or system) and the text.

Templates and examples

Chat with OpenAI Assistant (by adding a memory)

View template details

Personal Life Manager with Telegram, Google Services & Voice-Enabled AI

View template details

AI Voice Chat using Webhook, Memory Manager, OpenAI, Google Gemini & ElevenLabs

View template details

Browse Chat Memory Manager integration templates, or search all templates

Refer to LangChain's Memory documentation for more information about the service.

View n8n's Advanced AI documentation.


Motorhead node

URL: llms-txt#motorhead-node

Contents:

  • Node parameters
  • Node reference
  • Templates and examples
  • Related resources
  • Single memory instance

Use the Motorhead node to use Motorhead as a memory server.

On this page, you'll find a list of operations the Motorhead node supports, and links to more resources.

You can find authentication information for this node here.

  • Session ID: Enter the ID to use to store the memory in the workflow data.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Templates and examples

Browse Motorhead integration templates, or search all templates

Refer to LangChain's Motorhead documentation for more information about the service.

View n8n's Advanced AI documentation.

Single memory instance

If you add more than one Motorhead node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the Chat Memory Manager node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes.


Brevo Trigger node

URL: llms-txt#brevo-trigger-node

Contents:

  • Events
  • Related resources

Brevo is a digital marketing platform to help users grow their business.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Brevo Trigger integrations page.

  • Email blocked
  • Email clicked
  • Email deferred
  • Email delivered
  • Email hard bounce
  • Email invalid
  • Email marked spam
  • Email opened
  • Email sent
  • Email soft bounce
  • Email unique open
  • Email unsubscribed

n8n provides an app node for Brevo. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Brevo's documentation for details about their API.


Strava Trigger node

URL: llms-txt#strava-trigger-node

Contents:

  • Events

Strava is an internet service for tracking human exercise which incorporates social network features.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Strava Trigger integrations page.

  • [All]
    • [All]
    • Created
    • Deleted
    • Updated
  • Activity
    • [All]
    • Created
    • Deleted
    • Updated
  • Athlete
    • [All]
    • Created
    • Deleted
    • Updated

Sub-workflow conversion

URL: llms-txt#sub-workflow-conversion

Contents:

  • Selecting nodes for a sub-workflow
  • How to convert part of a workflow to a sub-workflow
  • Things to keep in mind

Available on all plans from n8n version 1.97.0.

Use sub-workflow conversion to refactor your workflows into reusable parts. Expressions referencing other nodes are automatically updated and added as parameters in the Execute Workflow Trigger node.

See sub-workflows for a general introduction to the concept.

Selecting nodes for a sub-workflow

To convert part of a workflow to a sub-workflow, you must select the nodes in the original workflow that you want to convert.

Do this by selecting a group of valid nodes. The selection must be continuous and must connect to the rest of the workflow from at most one start node and one end node. The selection must fulfill these conditions:

  • Must not include trigger nodes.
  • Only a single node in the selection can have incoming connections from nodes outside of the selection.
    • That node can have multiple incoming connections, but only a single input branch (which means it can't be a Merge node for example).
    • That node can't have incoming connections from other nodes in the selection.
  • Only a single node in the selection can have outgoing connections to nodes outside of the selection.
    • That node can have multiple outgoing connections, but only a single output branch (it can't be an If node for example).
    • That node can't have outgoing connections to other nodes in the selection.
  • The selection must include all nodes between the input and output nodes.

How to convert part of a workflow to a sub-workflow

Select the desired nodes on the canvas. Right-click the canvas background and select Convert to sub-workflow.

Things to keep in mind

Most sub-workflow conversions work without issues, but there are some caveats and limitations to keep in mind:

  • You must set type constraints for input and output manually: By default, sub-workflow input and output allow all types. You can set expected types in the sub-workflow's Execute Sub-workflow Trigger node and Edit Fields (set) node (labeled as Return and only included if the sub-workflow has outputs).

  • Limited support for AI nodes: When dealing with sub-nodes like AI tools, you must select them all and may need to duplicate any nodes shared with other AI Agents before conversion.

  • Uses v1 execution ordering: New workflows use v1 execution ordering regardless of the parent workflow's settings - you can change this back in the settings.

  • Accessor functions like first(), last(), and all() require extra care: Expressions using these functions don't always translate cleanly to a sub-workflow context. n8n may transform them to try to preserve their functionality, but you should check that they work as intended in their new context.

Sub-node parameter suffixes

n8n adds suffixes like _firstItem, _lastItem, and _allItems to variable names accessed by these functions. This helps preserve information about the original expression, since item ordering may be different in the sub-workflow context.

  • The itemMatching function requires a fixed index: You can't use expressions for the index value when using the itemMatching function. You must pass it a fixed number.
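
For example, here's a minimal sketch of the difference (the node name 'Loop Over Items' and the field email are hypothetical):

    Supported after conversion (fixed, literal index):
    {{ $('Loop Over Items').itemMatching(0).json.email }}

    Not supported (an expression such as $itemIndex as the index):
    {{ $('Loop Over Items').itemMatching($itemIndex).json.email }}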

Submit community nodes

URL: llms-txt#submit-community-nodes

Contents:

  • Standards
  • Submit your node for verification by n8n

Community nodes are npm packages, hosted in the npm registry.

When building a node to submit to the community node repository, use the following resources to make sure your node setup is correct:

Developing with the n8n-node tool ensures that your node adheres to the following standards required to make your node available in the n8n community node repository:

  • Make sure the package name starts with n8n-nodes- or @<scope>/n8n-nodes-. For example, n8n-nodes-weather or @weatherPlugins/n8n-nodes-weather.
  • Include n8n-community-node-package in your package keywords.
  • Make sure that you add your nodes and credentials to the package.json file inside the n8n attribute.
  • Check your node using the linter (npm run lint) and test it locally (npm run dev) to ensure it works.
  • Submit the package to the npm registry. Refer to npm's documentation on Contributing packages to the registry for more information.
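
As a rough sketch, the relevant package.json fields might look like this (the package name and file paths are hypothetical; refer to the scaffolding generated by the n8n-node tool for the authoritative layout):

    {
      "name": "n8n-nodes-weather",
      "version": "0.1.0",
      "keywords": ["n8n-community-node-package"],
      "n8n": {
        "n8nNodesApiVersion": 1,
        "nodes": ["dist/nodes/Weather/Weather.node.js"],
        "credentials": ["dist/credentials/WeatherApi.credentials.js"]
      }
    }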

Submit your node for verification by n8n

n8n vets verified community nodes. Users can discover and install verified community nodes from the nodes panel in n8n. These nodes need to adhere to certain technical and UX standards and constraints.

Before submitting your node for review by n8n, you must:

  • Start from the n8n-node tool generated scaffolding. While this isn't strictly required, n8n strongly suggests using the n8n-node CLI tool for any community node you plan to submit for verification. Using the tool ensures that your node follows the expected conventions and adheres to the community node requirements.
  • Make sure that your node follows the technical guidelines for verified community nodes and that all automated checks pass. Specifically, verified community nodes aren't allowed to use any run-time dependencies.
  • Ensure that your node follows the UX guidelines.
  • Make sure that the node has appropriate documentation in the form of a README in the npm package or a related public repository.
  • Submit your node to npm as n8n will fetch it from there for final vetting.

If your node meets all the above requirements, sign up or log in to the n8n Creator Portal and submit your node for verification. Note that n8n reserves the right to reject nodes that compete with any of n8n's paid features, especially enterprise functionality.


Embeddings HuggingFace Inference node

URL: llms-txt#embeddings-huggingface-inference-node

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Related resources

Use the Embeddings HuggingFace Inference node to generate embeddings for a given text.

On this page, you'll find the node parameters for the Embeddings HuggingFace Inference, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model to use to generate the embedding.

Refer to the Hugging Face models documentation for available models.

  • Custom Inference Endpoint: Enter the URL of your deployed model, hosted by HuggingFace. If you set this, n8n ignores the Model Name.

Refer to HuggingFace's guide to inference for more information.

Templates and examples

Browse Embeddings HuggingFace Inference integration templates, or search all templates

Refer to Langchain's HuggingFace Inference embeddings documentation for more information about the service.

View n8n's Advanced AI documentation.


Keap node

URL: llms-txt#keap-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Keap node to automate work in Keap, and integrate Keap with other applications. n8n has built-in support for a wide range of Keap features, including creating, updating, deleting, and getting companies, products, ecommerce orders, emails, and files.

On this page, you'll find a list of operations the Keap node supports and links to more resources.

Refer to Keap credentials for guidance on setting up authentication.

  • Company
    • Create a company
    • Retrieve all companies
  • Contact
    • Create/update a contact
    • Delete a contact
    • Retrieve a contact
    • Retrieve all contacts
  • Contact Note
    • Create a note
    • Delete a note
    • Get a note
    • Retrieve all notes
    • Update a note
  • Contact Tag
    • Add a list of tags to a contact
    • Delete a contact's tag
    • Retrieve all contact's tags
  • Ecommerce Order
    • Create an ecommerce order
    • Get an ecommerce order
    • Delete an ecommerce order
    • Retrieve all ecommerce orders
  • Ecommerce Product
    • Create an ecommerce product
    • Delete an ecommerce product
    • Get an ecommerce product
    • Retrieve all ecommerce products
  • Email
    • Create a record of an email sent to a contact
    • Retrieve all sent emails
    • Send Email
  • File
    • Delete a file
    • Retrieve all files
    • Upload a file

Templates and examples

Verify mailing address deliverability of contacts in Keap/Infusionsoft Using Lob

View template details

Get all contacts from Keap

View template details

Receive updates when a new contact is added in Keap

View template details

Browse Keap integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Microsoft Graph Security node

URL: llms-txt#microsoft-graph-security-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Microsoft Graph Security node to automate work in Microsoft Graph Security, and integrate Microsoft Graph Security with other applications. n8n has built-in support for a wide range of Microsoft Graph Security features, including getting secure scores and getting and updating secure score control profiles.

On this page, you'll find a list of operations the Microsoft Graph Security node supports and links to more resources.

Refer to Microsoft credentials for guidance on setting up authentication.

  • Secure Score
    • Get
    • Get All
  • Secure Score Control Profile
    • Get
    • Get All
    • Update

Templates and examples

Browse Microsoft Graph Security integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Wekan node

URL: llms-txt#wekan-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported
  • Load all the parameters for the node

Use the Wekan node to automate work in Wekan, and integrate Wekan with other applications. n8n has built-in support for a wide range of Wekan features, including creating, updating, deleting, and getting boards and cards.

On this page, you'll find a list of operations the Wekan node supports and links to more resources.

Refer to Wekan credentials for guidance on setting up authentication.

  • Board
    • Create a new board
    • Delete a board
    • Get the data of a board
    • Get all user boards
  • Card
    • Create a new card
    • Delete a card
    • Get a card
    • Get all cards
    • Update a card
  • Card Comment
    • Create a comment on a card
    • Delete a comment from a card
    • Get a card comment
    • Get all card comments
  • Checklist
    • Create a new checklist
    • Delete a checklist
    • Get the data of a checklist
    • Returns all checklists for the card
  • Checklist Item
    • Delete a checklist item
    • Get a checklist item
    • Update a checklist item
  • List
    • Create a new list
    • Delete a list
    • Get the data of a list
    • Get all board lists

Templates and examples

Browse Wekan integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

Load all the parameters for the node

To load all the parameters, for example, Author ID, you need to give admin permissions to the user. Refer to the Wekan documentation to learn how to change permissions.


Help Scout Trigger node

URL: llms-txt#help-scout-trigger-node

Help Scout is a help desk software that provides an email-based customer support platform, knowledge base tool, and an embeddable search/contact widget for customer service professionals.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Help Scout Trigger integrations page.


Facebook Trigger Group object

URL: llms-txt#facebook-trigger-group-object

Contents:

  • Trigger configuration
  • Related resources

Use this object to receive updates about activities and events in a group. Refer to Facebook Trigger for more information on the trigger itself.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Facebook Trigger integrations page.

Trigger configuration

To configure the trigger with this Object:

  1. Select the Credential to connect with. Select an existing or create a new Facebook App credential.
  2. Enter the APP ID of the app connected to your credential. Refer to the Facebook App credential documentation for more information.
  3. Select Group as the Object.
  4. Field Names or IDs: By default, the node will trigger on all the available events using the * wildcard filter. If you'd like to limit the events, use the X to remove the star and use the dropdown or an expression to select the updates you're interested in.
  5. In Options, turn on the toggle to Include Values. This Object type fails without the option enabled.

Refer to Meta's Groups Workplace API reference for more information.


Mistral Cloud Chat Model node

URL: llms-txt#mistral-cloud-chat-model-node

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Related resources

Use the Mistral Cloud Chat Model node to combine Mistral Cloud's chat models with conversational agents.

On this page, you'll find the node parameters for the Mistral Cloud Chat Model node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model to use to generate the completion. n8n dynamically loads models from Mistral Cloud and you'll only see the models available to your account.

  • Maximum Number of Tokens: Enter the maximum number of tokens used, which sets the completion length.

  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

  • Timeout: Enter the maximum request time in milliseconds.

  • Max Retries: Enter the maximum number of times to retry a request.

  • Top P: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.

  • Enable Safe Mode: Enable safe mode by injecting a safety prompt at the beginning of the completion. This helps prevent the model from generating offensive content.

  • Random Seed: Enter a seed to use for random sampling. If set, different calls will generate deterministic results.

Templates and examples

🤖 AI content generation for Auto Service 🚘 Automate your social media📲!

View template details

Breakdown Documents into Study Notes using Templating MistralAI and Qdrant

View template details

Build a Financial Documents Assistant using Qdrant and Mistral.ai

View template details

Browse Mistral Cloud Chat Model integration templates, or search all templates

Refer to LangChain's Mistral documentation for more information about the service.

View n8n's Advanced AI documentation.


Lemlist Trigger node

URL: llms-txt#lemlist-trigger-node

Contents:

  • Events

Lemlist is an email outreach platform that allows you to automatically generate personalized images and videos and send personalized cold emails.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Lemlist Trigger integrations page.

  • *
  • Aircall Created
  • Aircall Done
  • Aircall Ended
  • Aircall Interested
  • Aircall Not Interested
  • Api Done
  • Api Failed
  • Api Interested
  • Api Not Interested
  • Attracted
  • Connection Issue
  • Contacted
  • Custom Domain Errors
  • Emails Bounced
  • Emails Clicked
  • Emails Failed
  • Emails Interested
  • Emails Not Interested
  • Emails Opened
  • Emails Replied
  • Emails Send Failed
  • Emails Sent
  • Emails Unsubscribed
  • Hooked
  • Interested
  • Lemwarm Paused
  • LinkedIn Interested
  • LinkedIn Invite Accepted
  • LinkedIn Invite Done
  • LinkedIn Invite Failed
  • LinkedIn Not Interested
  • LinkedIn Replied
  • LinkedIn Send Failed
  • LinkedIn Sent
  • LinkedIn Visit Done
  • LinkedIn Visit Failed
  • LinkedIn Voice Note Done
  • LinkedIn Voice Note Failed
  • Manual Interested
  • Manual Not Interested
  • Not Interested
  • Opportunities Done
  • Paused
  • Resumed
  • Send Limit Reached
  • Skipped
  • Warmed

Google Vertex Chat Model node

URL: llms-txt#google-vertex-chat-model-node

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Related resources

Use the Google Vertex AI Chat Model node to use Google's Vertex AI chat models with conversational agents.

On this page, you'll find the node parameters for the Google Vertex AI Chat Model node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Project ID: Select the project ID from your Google Cloud account to use. n8n dynamically loads projects from the Google Cloud account, but you can also enter it manually.

  • Model Name: Select the name of the model to use to generate the completion, for example gemini-1.5-flash-001, gemini-1.5-pro-001, etc. Refer to Google models for a list of available models.

  • Maximum Number of Tokens: Enter the maximum number of tokens used, which sets the completion length.

  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

  • Thinking Budget: Controls reasoning tokens for thinking models. Set to 0 to disable automatic thinking. Set to -1 for dynamic thinking. Leave empty for auto mode.

  • Top K: Enter the number of token choices the model uses to generate the next token.

  • Top P: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.

  • Safety Settings: Gemini supports adjustable safety settings. Refer to Google's Gemini API safety settings for information on the available filters and levels.

Templates and examples

Extract text from PDF and image using Vertex AI (Gemini) into CSV

View template details

Automated Stale User Re-Engagement System with Supabase, Google Sheets & Gmail

View template details

Create Structured Notion Workspaces from Notes & Voice Using Gemini & GPT

View template details

Browse Google Vertex Chat Model integration templates, or search all templates

Refer to LangChain's Google Vertex AI documentation for more information about the service.

View n8n's Advanced AI documentation.


ProfitWell credentials

URL: llms-txt#profitwell-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token

You can use these credentials to authenticate the following nodes:

Create a ProfitWell account.

Supported authentication methods

Refer to ProfitWell's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Token: To get an API key or token, go to Account Settings > Integrations and select ProfitWell API.

Disqus node

URL: llms-txt#disqus-node

Contents:

  • Operations
  • Templates and examples

Use the Disqus node to automate work in Disqus, and integrate Disqus with other applications. n8n has built-in support for a wide range of Disqus features, including returning forums.

On this page, you'll find a list of operations the Disqus node supports and links to more resources.

Refer to Disqus credentials for guidance on setting up authentication.

  • Forum
    • Return forum details
    • Return a list of categories within a forum
    • Return a list of threads within a forum
    • Return a list of posts within a forum

Templates and examples

Browse Disqus integration templates, or search all templates


Reddit credentials

URL: llms-txt#reddit-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a Reddit account.

Supported authentication methods

Refer to Reddit's developer documentation for more information about the service.

To configure this credential, you'll need:

  • A Client ID
  • A Client Secret

Reddit's developer program is in a closed beta. The instructions below are for regular Reddit users, not members of the developer platform.

Generate both by creating a third-party app. Visit the previous link or go to your profile > Settings > Safety & Privacy > Manage third-party app authorization > are you a developer? create an app.

Use these settings for your app:

  • Copy the OAuth Callback URL from n8n and use it as your app's redirect uri.
  • The app's client ID displays underneath your app name. Copy that and add it as your n8n Client ID.
  • Copy the app's secret and add it as your n8n Client Secret.

Default Data Loader node

URL: llms-txt#default-data-loader-node

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Related resources

Use the Default Data Loader node to load binary data files or JSON data for vector stores or summarization.

On this page, you'll find a list of parameters the Default Data Loader node supports, and links to more resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Text Splitting: Choose from:

  • Type of Data: Select Binary or JSON.

  • Mode: Choose from:

    • Load All Input Data: Use all the node's input data.
    • Load Specific Data: Use expressions to define the data you want to load. You can add text as well as expressions, which means you can create a custom document from a mix of text and expressions, as shown in the sketch after this list.
  • Data Format: Displays when you set Type of Data to Binary. Select the file MIME type for your binary data. Set to Automatically Detect by MIME Type if you want n8n to set the data format for you. If you set a specific data format and the incoming file MIME type doesn't match it, the node errors. If you use Automatically Detect by MIME Type, the node falls back to text format if it can't match the file MIME type to a supported data format.

  • Metadata: Set the metadata that should accompany the document in the vector store. This is what you match to using the Metadata Filter option when retrieving data using the vector store nodes.
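
For example, in Load Specific Data mode you might build a custom document by mixing literal text with expressions (the field names name and feedback are hypothetical):

    Customer: {{ $json.name }}
    Feedback: {{ $json.feedback }}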

Templates and examples

Building Your First WhatsApp Chatbot

View template details

Scrape and summarize webpages with AI

View template details

Chat with PDF docs using AI (quoting sources)

View template details

Browse Default Data Loader integration templates, or search all templates

Refer to LangChain's documentation on document loaders for more information about the service.

View n8n's Advanced AI documentation.


GitLab node

URL: llms-txt#gitlab-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the GitLab node to automate work in GitLab, and integrate GitLab with other applications. n8n has built-in support for a wide range of GitLab features, including creating, updating, deleting, and editing issues, repositories, releases and users.

On this page, you'll find a list of operations the GitLab node supports and links to more resources.

Refer to GitLab credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • File
    • Create
    • Delete
    • Edit
    • Get
    • List
  • Issue
    • Create a new issue
    • Create a new comment on an issue
    • Edit an issue
    • Get the data of a single issue
    • Lock an issue
  • Release
    • Create a new release
    • Delete a release
    • Get a release
    • Get all releases
    • Update a release
  • Repository
    • Get the data of a single repository
    • Returns issues of a repository
  • User
    • Returns the repositories of a user

Templates and examples

ChatGPT Automatic Code Review in Gitlab MR

View template details

Save your workflows into a Gitlab repository

View template details

GitLab Merge Request Review & Risk Analysis with Claude/GPT AI

View template details

Browse GitLab integration templates, or search all templates

Refer to GitLab's documentation for more information about the service.

n8n provides a trigger node for GitLab. You can find the trigger node docs here.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Sekoia credentials

URL: llms-txt#sekoia-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Sekoia SOC platform account.

Supported authentication methods

Refer to Sekoia's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

  • An API Key: To generate an API key, select + API Key. Refer to Create an API key for more information.

Google Chat node

URL: llms-txt#google-chat-node

Contents:

  • Operations
  • Waiting for a response
    • Response Type
    • Approval response customization
    • Free Text response customization
    • Custom Form response customization
  • Templates and examples

Use the Google Chat node to automate work in Google Chat, and integrate Google Chat with other applications. n8n has built-in support for a wide range of Google Chat features, including getting membership and spaces, as well as creating and deleting messages.

On this page, you'll find a list of operations the Google Chat node supports and links to more resources.

Refer to Google credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Member
    • Get a membership
    • Get all memberships in a space
  • Message
    • Create a message
    • Delete a message
    • Get a message
    • Send and Wait for Response
    • Update a message
  • Space
    • Get a space
    • Get all spaces the caller is a member of

Waiting for a response

By choosing the Send and Wait for Response operation, you can send a message and pause the workflow execution until a person confirms the action or provides more information.

You can choose between the following types of waiting and approval actions:

  • Approval: Users can approve or disapprove from within the message.
  • Free Text: Users can submit a response with a form.
  • Custom Form: Users can submit a response with a custom form.

You can customize the waiting and response behavior depending on which response type you choose. You can configure these options in any of the above response types:

  • Limit Wait Time: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time.
  • Append n8n Attribution: Whether to mention in the message that it was sent automatically with n8n (turned on) or not (turned off).

Approval response customization

When using the Approval response type, you can choose whether to present only an approval button or both approval and disapproval buttons.

You can also customize the button labels for the buttons you include.

Free Text response customization

When using the Free Text response type, you can customize the message button label, the form title and description, and the response button label.

Custom Form response customization

When using the Custom Form response type, you build a form using the fields and options you want.

You can customize each form element with the settings outlined in the n8n Form trigger's form elements. To add more fields, select the Add Form Element button.

You'll also be able to customize the message button label, the form title and description, and the response button label.

Templates and examples

View template details

Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram

View template details

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

Browse Google Chat integration templates, or search all templates


Build an AI chat agent with n8n

URL: llms-txt#build-an-ai-chat-agent-with-n8n

Contents:

  • What you will need
  • What you will learn
  • AI concepts in n8n
    1. Create a new workflow
    2. Add a trigger node
    3. Add an AI Agent Node
    4. Configure the node
    5. Add credentials (if needed)
    6. Test the node
    7. Changing the prompt
    8. Adding persistence
    9. Saving the workflow

Welcome to the introductory tutorial for building AI workflows with n8n. Whether you have used n8n before, or this is your first time, we will show you how the building blocks of AI workflows fit together and construct a working AI-powered chat agent which you can easily customize for your own purposes.

Many people find it easier to take in new information in video format. This tutorial is based on one of n8n's popular videos, linked below. Watch the video or read the steps here, or both!

What you will need

  • n8n: For this tutorial we recommend using the n8n cloud service - there is a free trial for new users! For a self-hosted service, refer to the installation pages.
  • Credentials for a chat model: This tutorial uses OpenAI, but you can easily use DeepSeek, Google Gemini, Groq, Azure, and others (see the sub-nodes documentation for more).

What you will learn

  • AI concepts in n8n
  • How to use the AI Agent node
  • Working with Chat input
  • Connecting with AI models
  • Customising input
  • Observing the conversation
  • Adding persistence

AI concepts in n8n

If you're already familiar with AI, feel free to skip this section. This is a basic introduction to AI concepts and how they can be used in n8n workflows.

An AI agent builds on Large Language Models (LLMs), which generate text based on input by predicting the next word. While LLMs only process input to produce output, AI agents add goal-oriented functionality. They can use tools, process their outputs, and make decisions to complete tasks and solve problems.

In n8n, the AI agent is represented as a node with some extra connections.

Feature | LLM | AI Agent
Core Capability | Text generation | Goal-oriented task completion
Decision-Making | None | Yes
Uses Tools/APIs | No | Yes
Workflow Complexity | Single-step | Multi-step
Scope | Generates language | Performs complex, real-world tasks
Example | LLM generating a paragraph | An agent scheduling an appointment

By incorporating the AI agent as a node, n8n can combine AI-driven steps with traditional programming for efficient, real-world workflows. For instance, simpler tasks, like validating an email address, do not require AI, whereas complex tasks, like processing the content of an email or dealing with multimodal inputs (e.g., images, audio), are excellent uses of an AI agent.

1. Create a new workflow

When you open n8n, you'll see either:

  • An empty workflow: if you have no workflows and you're logging in for the first time. Use this workflow.
  • The Workflows list on the Overview page. Select the button to create a new workflow.

2. Add a trigger node

Every workflow needs somewhere to start. In n8n these are called 'trigger nodes'. For this workflow, we want to start with a chat node.

  1. Select Add first step or press Tab to open the node menu.
  2. Search for Chat Trigger. n8n shows a list of nodes that match the search.
  3. Select Chat Trigger to add the node to the canvas. n8n opens the node.
  4. Close the node details view (Select Back to canvas) to return to the canvas.

More about the Chat Trigger node...

The trigger node generates output when there is an event causing it to trigger. In this case we want to be able to type in text to cause the workflow to run. In production, this trigger can be hooked up to a public chat interface as provided by n8n or embedded into another website. To start this simple workflow we will just use the built-in local chat interface to communicate, so no further setup is required.

View workflow file

3. Add an AI Agent Node

The AI Agent node is the core of adding AI to your workflows.

  1. Select the Add node connector on the trigger node to bring up the node search.
  2. Start typing "AI" and choose the AI agent node to add it.
  3. The editing view of the AI agent will now be displayed.
  4. There are some fields which can be changed. As we're using the Chat Trigger node, the default settings for the prompt source and specification don't need to be changed.

View workflow file

4. Configure the node

AI agents require a chat model to be attached to process the incoming prompts.

  1. Add a chat model by clicking the plus button underneath the Chat Model connection on the AI Agent node (it's the first connection along the bottom of the node).
  2. The search dialog will appear, filtered on 'Language Models'. These are the models with built-in support in n8n. For this tutorial we will use OpenAI Chat Model.
  3. Selecting the OpenAI Chat model from the list will attach it to the AI Agent node and open the node editor. One of the parameters which can be changed is the 'Model'. Note that for the basic OpenAI accounts, only the 'gpt-4o-mini' model is allowed.

As mentioned earlier, the LLM is the component which generates the text according to a prompt it is given. LLMs have to be created and trained, usually an intensive process. Different LLMs may have different capabilities or specialties, depending on the data they were trained with.

5. Add credentials (if needed)

In order for n8n to communicate with the chat model, it will need some credentials (login data giving it access to an account on a different online service). If you already have credentials set up for OpenAI, these should appear by default in the credentials selector. Otherwise you can use the Credentials selector to help you add a new credential.

  1. To add a new credential, click the text that says 'Select credential'. An option to add a new credential will appear.
  2. This credential just needs an API key. When adding credentials of any type, check the text to the right-hand side. In this case it has a handy link to take you straight to your OpenAI account to retrieve the API key.
  3. The API key is just one long string. That's all you need for this particular credential. Copy it from the OpenAI website and paste it into the API key section.

Keeping your credentials safe

Credentials are private pieces of information issued by apps and services to authenticate you as a user and allow you to connect and share information between the app or service and the n8n node. The type of information required varies depending on the app/service concerned. You should be careful about sharing or revealing the credentials outside of n8n.

6. Test the node

Now that the node is connected to the Chat Trigger and a chat model, we can test this part of the workflow.

  1. Click on the 'Chat' button near the bottom of the canvas. This opens up a local chat window on the left and the AI agent logs on the right.
  2. Type in a message and press Enter. You will now see the response from the chat model appear below your message.
  3. The log window displays the inputs to and outputs from the AI Agent.

Accessing the logs...

You can access the logs for the AI node even when you aren't using the chat interface. Open up the AI Agent node and click on the Logs tab in the right hand panel.

7. Changing the prompt

The logs in the previous step reveal some extra data - the system prompt. This is the default message that the AI Agent primes the chat model with. From the log you can see this is set to "You are a helpful assistant". We can however change this prompt to alter the behavior of the chat model.

  1. Open the AI Agent node. At the bottom of the panel is a section labeled 'Options' and a selector labeled 'Add Option'. Use this to select 'System message'.
  2. The system message is now displayed. This is the same priming prompt we noticed before in the logs. Change the prompt to something else to prime the chat model in a different way. You could try something like "You are a brilliant poet who always replies in rhyming couplets" for example.
  3. Close the node and return to the chat window. Repeat your message and notice how the output has changed.

8. Adding persistence

The chat model is now giving us useful output, but there is something wrong with it which will become apparent when you try to have a conversation.

  1. Use the chat and tell the chat model your name, for example "Hi there, my name is Nick".
  2. Wait for the response, then type the message "What's my name?". The AI will not be able to tell you, however apologetic it may seem. The reason for this is we are not saving the context. The AI Agent has no memory.
  3. In order to remember what has happened in the conversation, the AI Agent needs to preserve context. We can do this by adding memory to the AI Agent node. On the canvas, click the connector labeled "Memory" on the bottom of the AI Agent node.
  4. From the panel which appears, select "Simple Memory". This will use the memory of the instance running n8n, and is usually sufficient for simple usage. The default value of 5 interactions should be sufficient here, but remember where this option is in case you want to change it later.
  5. Repeat the exercise of having a conversation above, and see that the AI Agent now remembers your name.

9. Saving the workflow

Before we leave the workflow editor, remember to save the workflow or all your changes will be lost.

  1. Click on the "Save" button in the top right of the editor window. Your workflow will now be saved and you can return to it later to chat again or add new features.

You have taken your first steps in building useful and effective workflows with AI. In this tutorial we have investigated the basic building blocks of an AI workflow, added an AI Agent and a chat model, and adjusted the prompt to get the kind of output we wanted. We also added memory so the chat could retain context between messages.

View workflow file

Now that you have seen how to create a basic AI workflow, there are plenty of resources to build on that knowledge and plenty of examples to give you ideas of where to go next:


Vero credentials

URL: llms-txt#vero-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API auth token

You can use these credentials to authenticate the following nodes:

Create a Vero account.

Supported authentication methods

Refer to Vero's API documentation for more information about the service.

Using API auth token

To configure this credential, you'll need:


Pipedrive credentials

URL: llms-txt#pipedrive-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API token
  • Using OAuth2
    • Pipedrive node scopes
    • Pipedrive Trigger node scopes

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Pipedrive's developer documentation for more information about the service.

To configure this credential, you'll need a Pipedrive account and:

To get your API token:

  1. Open your API Personal Preferences.
  2. Copy Your personal API token and enter it in your n8n credential.

If you have multiple companies, you'll need to select the correct company first:

  1. Select your account name and be sure you're viewing the correct company.
  2. Then select Company Settings.
  3. Select Personal Preferences.
  4. Select the API tab.
  5. Copy Your personal API token and enter it in your n8n credential.

Refer to How to find the API token for more information.

To configure this credential, you'll need a Pipedrive developer sandbox account and:

  • A Client ID
  • A Client Secret

To get both, you'll need to register a new app:

  1. Select your profile name in the upper right corner.

  2. Find the company name of your sandbox account and select Developer Hub.

If you don't see Developer Hub in your account dropdown, sign up for a developer sandbox account.

  1. Select Create an app.

  2. Select Create public app. The app's Basic info tab opens.

  3. Enter an App name for your app, like n8n integration.

  4. Copy the OAuth Redirect URL from n8n and add it as the app's Callback URL.

  5. Select Save. The app's OAuth & access scopes tab opens.

  6. Turn on appropriate Scopes for your app. Refer to Pipedrive node scopes and Pipedrive Trigger node scopes below for more guidance.

  7. Copy the Client ID and enter it in your n8n credential.

  8. Copy the Client Secret and enter it in your n8n credential.

Refer to Registering a public app for more information.

Pipedrive node scopes

The scopes you add to your app depend on which node(s) you want to use it for in n8n and what actions you want to complete with those.

Scopes you may need for the Pipedrive node:

Object | Node action | UI scope | Actual scope
Activity | Get data of an activity, Get data of all activities | Activities: Read only or Activities: Full Access | activities:read or activities:full
Activity | Create, Delete, Update | Activities: Full Access | activities:full
Deal | Get data of a deal, Get data of all deals, Search a deal | Deals: Read only or Deals: Full Access | deals:read or deals:full
Deal | Create, Delete, Duplicate, Update | Deals: Full Access | deals:full
Deal Activity | Get all activities of a deal | Activities: Read only or Activities: Full Access | activities:read or activities:full
Deal Product | Get all products in a deal | Products: Read Only or Products: Full Access | products:read or products:full
File | Download, Get data of a file | Refer to note below | Refer to note below
File | Create, Delete | Refer to note below | Refer to note below
Lead | Get data of a lead, Get data of all leads | Leads: Read only or Leads: Full access | leads:read or leads:full
Lead | Create, Delete, Update | Leads: Full access | leads:full
Note | Get data of a note, Get data of all notes | Refer to note below | Refer to note below
Note | Create, Delete, Update | Refer to note below | Refer to note below
Organization | Get data of an organization, Get data of all organizations, Search | Contacts: Read Only or Contacts: Full Access | contacts:read or contacts:full
Organization | Create, Delete, Update | Contacts: Full Access | contacts:full
Person | Get data of a person, Get data of all persons, Search | Contacts: Read Only or Contacts: Full Access | contacts:read or contacts:full
Person | Create, Delete, Update | Contacts: Full Access | contacts:full
Product | Get data of all products | Products: Read Only | products:read

The scopes for Files and Notes depend on which object they relate to:

  • Files relate to Deals, Activities, or Contacts.
  • Notes relate to Deals or Contacts.

Refer to those objects' scopes.

The Pipedrive node also supports Custom API calls. Add relevant scopes for whatever custom API calls you intend to make.

Refer to Scopes and permissions explanations for more information.

Pipedrive Trigger node scopes

The Pipedrive Trigger node requires the Webhooks: Full access (webhooks:full) scope.


Airtable node common issues

URL: llms-txt#airtable-node-common-issues

Contents:

  • Forbidden - perhaps check your credentials
  • Service is receiving too many requests from you

Here are some common errors and issues with the Airtable node and steps to resolve or troubleshoot them.

Forbidden - perhaps check your credentials

This error displays when trying to perform actions not permitted by your current level of access. The full text looks something like this:

The error most often displays when the credential you're using doesn't have the scopes it requires on the resources you're attempting to manage.

Refer to the Airtable credentials and Airtable scopes documentation for more information.

Service is receiving too many requests from you

Airtable has a hard API limit on the number of requests generated using personal access tokens.

If you send more than five requests per second to a base, you will receive a 429 error indicating that you have sent too many requests, and you'll have to wait 30 seconds before your requests succeed again. The same limit applies if you send more than 50 requests per second across all bases with a single personal access token.

You can find out more in Airtable's rate limits documentation. If you find yourself running into rate limits with the Airtable node, consider implementing one of the suggestions on the handling rate limits page.

Examples:

Example 1 (unknown):

There was a problem loading the parameter options from server: "Forbidden - perhaps check your credentials?"

Twake credentials

URL: llms-txt#twake-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using Cloud API key
  • Using Server API key

You can use these credentials to authenticate the following nodes:

Create a Twake account.

Supported authentication methods

  • Cloud API key
  • Server API key

Refer to Twake's documentation for more information about the service.

Using Cloud API key

To configure this credential, you'll need:

  • A Workspace Key: Generated when you install the n8n application to your Twake Cloud environment and select Configure. Refer to How to connect n8n to Twake for more detailed instructions.

Using Server API key

To configure this credential, you'll need:

  • A Host URL: The URL of your Twake self-hosted instance.
  • A Public ID: Generated when you create an app.
  • A Private API Key: Generated when you create an app.

To generate your Public ID and Private API Key, create a Twake application:

  1. Go to Workspace Settings > Applications and connectors > Access your applications and connectors > Create an application.
  2. Enter appropriate details.
  3. Once you've created your app, view its API Details.
  4. Copy the Public identifier and add it as the n8n Public ID.
  5. Copy the Private key and add it as the n8n Private API Key.

Refer to API settings for more information.


Google Contacts node

URL: llms-txt#google-contacts-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Google Contacts node to automate work in Google Contacts, and integrate Google Contacts with other applications. n8n has built-in support for a wide range of Google Contacts features, including creating, updating, retrieving, and deleting contacts.

On this page, you'll find a list of operations the Google Contacts node supports and links to more resources.

Refer to Google Contacts credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Contact
    • Create a contact
    • Delete a contact
    • Get a contact
    • Retrieve all contacts
    • Update a contact

Templates and examples

Manage contacts in Google Contacts

View template details

Daily Birthday Reminders from Google Contacts to Slack

View template details

Enrich Google Sheet contacts with Dropcontact

View template details

Browse Google Contacts integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


FTP credentials

URL: llms-txt#ftp-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using FTP account
  • Using SFTP account

You can use these credentials to authenticate the following nodes:

Create an account on a File Transfer Protocol (FTP) server like JSCAPE, OpenSSH, or FileZilla Server.

Supported authentication methods

  • FTP account: Use this method if your FTP server doesn't support SSH tunneling or encrypted connections.
  • SFTP account: Use this method if your FTP server supports SSH tunneling and encrypted connections.

File Transfer Protocol (FTP) and Secure Shell File Transfer Protocol (SFTP) are protocols for transferring files directly between an FTP/SFTP client and server.

Using FTP account

Use this method if your FTP server doesn't support SSH tunneling or encrypted connections.

To configure this credential, you'll need to:

  1. Enter the name or IP address of your FTP server's Host.
  2. Enter the Port number the connection should use.
  3. Enter the Username the credential should connect as.
  4. Enter the user's Password.

Review your FTP server provider's documentation for instructions on getting the information you need.

Using SFTP account

Use this method if your FTP server supports SSH tunneling and encrypted connections.

To configure this credential, you'll need to:

  1. Enter the name or IP address of your FTP server's Host.
  2. Enter the Port number the connection should use.
  3. Enter the Username the credential should connect as.
  4. Enter the user's Password.
  5. For the Private Key, enter a string for either key-based or host-based user authentication.
    • Enter your Private Key in OpenSSH format. This is most often generated using the ssh-keygen -o parameter, for example: ssh-keygen -o -a 100 -t ed25519.
  6. If the Private Key is encrypted, enter the Passphrase used to decrypt it.
    • If the Private Key doesn't use a passphrase, leave this field blank.

Review your FTP server provider's documentation for instructions on getting the information you need.
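For example, building on the ssh-keygen command mentioned above, you could generate a key pair for this credential like this (the output path is a placeholder, not a required value):

ssh-keygen -o -a 100 -t ed25519 -f ~/.ssh/n8n_sftp  # writes the key pair in OpenSSH format to an example path

You'd then paste the contents of the private key file into the Private Key field and, if you set one, enter its passphrase in Passphrase.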


Code in n8n

URL: llms-txt#code-in-n8n

Contents:

  • Code in your workflows
  • Other technical resources
    • Technical nodes
    • Other developer resources

n8n is a low-code tool. This means you can do a lot without code, then add code when needed.

Code in your workflows

There are two places in your workflows where you can use code:

Use expressions to transform data in your nodes. You can use JavaScript in expressions, as well as n8n's Built-in methods and variables and Data transformation functions.

Expressions

Use the Code node to add JavaScript or Python to your workflow.

Code node

Other technical resources

These are features that are relevant to technical users.

n8n provides core nodes, which simplify adding key functionality such as API requests, webhooks, scheduling, and file handling.

  • Write a backend

The HTTP Request, Webhook, and Code nodes help you make API calls, respond to webhooks, and write any JavaScript in your workflow.

Use these nodes to do things like Create an API endpoint.

Core nodes

  • Represent complex logic

You can build complex flows, using nodes like If, Switch, and Merge nodes.

Flow logic

Other developer resources

n8n provides an API, where you can programmatically perform many of the same tasks as you can in the GUI. There's an n8n API node to access the API in your workflows.

You can self-host n8n. This keeps your data on your own infrastructure.

Hosting

  • Build your own nodes

You can build custom nodes, install them on your n8n instance, and publish them to npm.

Creating nodes


Nodes

URL: llms-txt#nodes

Contents:

  • Add a node to your workflow
    • Add a node to an empty workflow
    • Add a node to an existing workflow
  • Node operations: Triggers and Actions
  • Node controls
  • Node settings

Nodes are the key building blocks of a workflow. They perform a range of actions, including:

  • Starting the workflow.
  • Fetching and sending data.
  • Processing and manipulating data.

n8n provides a collection of built-in nodes, as well as the ability to create your own nodes. Refer to:

Add a node to your workflow

Add a node to an empty workflow

  1. Select Add first step. n8n opens the nodes panel, where you can search or browse trigger nodes.

  2. Select the trigger you want to use.

Choose the correct app event

If you select On App Event, n8n shows a list of all the supported services. Use this list to browse n8n's integrations and trigger a workflow in response to an event in your chosen service. Not all integrations have triggers. To see which ones you can use as a trigger, select the node. If a trigger is available, you'll see it at the top of the available operations list.

For example, this is the trigger for Asana:

Add a node to an existing workflow

Select the Add node connector. n8n opens the nodes panel, where you can search or browse all nodes.

Node operations: Triggers and Actions

When you add a node to a workflow, n8n displays a list of available operations. An operation is something a node does, such as getting or sending data.

There are two types of operation:

  • Triggers start a workflow in response to specific events or conditions in your services. When you select a Trigger, n8n adds a trigger node to your workflow, with the Trigger operation you chose pre-selected. When you search for a node in n8n, Trigger operations have a bolt icon .
  • Actions are operations that represent specific tasks within a workflow, which you can use to manipulate data, perform operations on external systems, and trigger events in other systems as part of your workflows. When you select an Action, n8n adds a node to your workflow, with the Action operation you chose pre-selected.

Node controls

To view node controls, hover over the node on the canvas:

  • Execute step: Run the node.
  • Deactivate: Deactivate the node.
  • Delete: Delete the node.
  • Node context menu: Select node actions. Available actions:
    • Open node
    • Execute step
    • Rename node
    • Deactivate node
    • Pin node
    • Copy node
    • Duplicate node
    • Tidy up workflow
    • Convert node to sub-workflow
    • Select all
    • Clear selection
    • Delete node

Node settings

The node settings under the Settings tab allow you to control node behaviors and add node notes.

When active or set, they do the following:

  • Always Output Data: The node returns an empty item even if the node returns no data during execution. Be careful setting this on IF nodes, as it could cause an infinite loop.
  • Execute Once: The node executes once, with data from the first item it receives. It doesn't process any extra items.
  • Retry On Fail: When an execution fails, the node reruns until it succeeds.
  • On Error:
    • Stop Workflow: Halts the entire workflow when an error occurs, preventing further node execution.
    • Continue: Proceeds to the next node despite the error, using the last valid data.
    • Continue (using error output): Continues workflow execution, passing error information to the next node for potential handling.

You can document your workflow using node notes:

  • Notes: Note to save with the node.
  • Display note in flow: If active, n8n displays the note in the workflow as a subtitle.

Data pinning

URL: llms-txt#data-pinning

Contents:

  • Pin data
  • Unpin data

You can 'pin' data during workflow development. Data pinning means saving the output data of a node, and using the saved data instead of fetching fresh data in future workflow executions.

You can use this when working with data from external sources to avoid having to repeat requests to the external system. This can save time and resources:

  • If your workflow relies on an external system to trigger it, such as a webhook call, being able to pin data means you don't need to use the external system every time you test the workflow.
  • If the external resource has data or usage limits, pinning data during tests avoids consuming your resource limits.
  • You can fetch and pin the data you want to test, then have confidence that the data is consistent in all your workflow tests.

You can only pin data for nodes that have a single main output ("error" outputs don't count for this purpose).

Data pinning isn't available for production workflow executions. It's a feature to help test workflows during development.

Pin data

To pin data in a node:

  1. Run the node to load data.
  2. In the OUTPUT view, select Pin data. When data pinning is active, the button is disabled and a "This data is pinned" banner is displayed in the OUTPUT view.

Nodes that output binary data

You can't pin data if the output data includes binary data.

Unpin data

When data pinning is active, a banner appears at the top of the node's output panel indicating that n8n has pinned the data. To unpin data and fetch fresh data on the next execution, select the Unpin link in the banner.


All executions

URL: llms-txt#all-executions

Contents:

  • Filter executions
  • Retry failed workflows
  • Load data from previous executions into your current workflow

To view all executions from an n8n instance, navigate to the Overview page and then click into the Executions tab. This will show you all executions from the workflows you have access to.

If your n8n instance supports projects, you'll also be able to view the executions tab within projects you have access to. This will show you executions only from the workflows within the specified project.

When you delete a workflow, n8n deletes its execution history as well. This means you can't view executions for deleted workflows.

Filter executions

You can filter the executions list:

  1. Select the Executions tab either from within the Overview page or a specific project to open the list.
  2. Select Filters.
  3. Enter your filters. You can filter by:
    • Workflows: choose all workflows, or a specific workflow name.
    • Status: choose from Failed, Running, Success, or Waiting.
    • Execution start: see executions that started in the given time.
    • Saved custom data: this is data you create within the workflow using the Code node. Enter the key and value to filter. Refer to Custom executions data for information on adding custom data.

Custom executions data is available on:

  • Cloud: Pro, Enterprise
  • Self-Hosted: Enterprise, registered Community

Retry failed workflows

If your workflow execution fails, you can retry the execution. To retry a failed workflow:

  1. Select the Executions tab from within either the Overview page or a specific project to open the list.
  2. On the execution you want to retry, select Retry execution.
  3. Select either of the following options to retry the execution:
    • Retry with currently saved workflow: Retries the execution using the latest saved version of the workflow. Choose this if you've changed the workflow since the failed execution.
    • Retry with original workflow: Retries the execution using the workflow as it was when the execution originally ran, ignoring any changes you've made since.

Load data from previous executions into your current workflow

You can load data from a previous workflow back into the canvas. Refer to Debug executions for more information.


Environments in n8n

URL: llms-txt#environments-in-n8n

Contents:

  • Environments: What and why
  • Environments in n8n

n8n has built its environments feature on top of Git, a version control software. This document helps you understand:

  • The purpose of environments.
  • How environments work in n8n.

Environments: What and why

In software development, the environment is all the infrastructure and tooling around the code, including the tools that run the software, and the specific configuration of those tools. For a more detailed introduction to environments in software development, refer to Codecademy | Environments.

Low-code development in n8n is similar. n8n is where you build and run your workflows. Your instance may have particular configurations: on Cloud, n8n determines the configuration. On self-hosted instances, there are extensive configuration options. You may also have made changes to the settings of your instance. This combination of n8n and your instance's specific configuration and settings is the environment your workflows run in.

There are advantages to having more than one environment. A common pattern is to have different environments for development and production:

  • Development: do work and make changes.
  • Production: the live environment.

A setup like this helps you make changes to workflows without breaking workflows that are in use.

Environments in n8n

In n8n, an environment comprises two parts, an n8n instance and a Git branch:

  • The n8n instance is where you build and run workflows.
  • The Git branch stores copies of the workflows, as well as tags, and variable and credential stubs.

n8n doesn't sync credentials and variable values with Git. You must set up the credentials and variable values manually when setting up a new instance. For more information, refer to Push and pull | What gets committed.

How you copy work between environments depends on your branch and n8n instance configuration:

  • Multiple instances, one branch: you can push from one instance to the Git branch, then pull the work to another instance.
  • Multiple instances, multiple branches: you need to create a pull request and merge in your Git provider. For example, if you have development, test, and production branches, each linked to their own instance, you need to merge the development branch into test to make the work from the development instance available on the test instance. Refer to Copy work between environments for more information, including steps to partially automate the process.

For detailed guidance on pushing and pulling work, refer to Push and pull.

Refer to Set up source control to learn more about linking your n8n instance to Git, or follow the Tutorial: Create environments with source control to set up your environments using one of n8n's recommended configurations.


Microsoft Azure Monitor credentials

URL: llms-txt#microsoft-azure-monitor-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

  • Create a Microsoft Azure account or subscription
  • An app registered in Microsoft Entra ID

Supported authentication methods

  • OAuth2

Refer to Microsoft Azure Monitor's API documentation for more information about the service.

Using OAuth2

To configure this credential, you'll need a Microsoft Azure account and:

  • A Client ID
  • A Client Secret
  • A Tenant ID
  • The Resource you plan to access

Refer to Microsoft Azure Monitor's API documentation for more information about authenticating to the service.


Send Email credentials

URL: llms-txt#send-email-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using SMTP account
    • Provider instructions
    • My provider isn't listed

You can use these credentials to authenticate the following nodes:

  • Send Email

  • Create an email account on a service that supports SMTP.

  • Some email providers require that you enable or set up outgoing SMTP or generate an app password. Refer to your provider's documentation to see if there are other required steps.

Supported authentication methods

  • SMTP account

Simple Mail Transfer Protocol (SMTP) is a standard protocol for sending email. Most email providers offer instructions on setting up their service with SMTP. Refer to your provider's SMTP instructions.

Using SMTP account

To configure this credential, you'll need:

  • A User email address
  • A Password: This may be the user's password or an app password. Refer to the documentation for your email provider.
  • The Host: The SMTP host address for your email provider, often formatted as smtp.<provider>.com. Check with your provider.
  • A Port number: The port depends on the encryption method:
    • Port 465 for SSL/TLS (implicit encryption)
    • Port 587 for STARTTLS (explicit encryption)
    • Port 25 for no encryption (not recommended)
    Check with your email provider for their specific requirements.
  • SSL/TLS: This toggle controls the encryption method:
    • Turn ON for port 465 (uses implicit SSL/TLS encryption)
    • Turn OFF for port 587 (uses STARTTLS explicit encryption)
    • Turn OFF for port 25 (no encryption)
  • Disable STARTTLS: When SSL/TLS is disabled, the SMTP server can still try to upgrade the TCP connection using STARTTLS. Turning this on prevents that behaviour.
  • Client Host Name: If needed by your provider, add a client host name. This name identifies the client to the server.

Provider instructions

Refer to the quickstart guides for these common email providers.

Refer to Gmail.

Refer to Outlook.com.

Refer to Yahoo.

My provider isn't listed

If your email provider isn't listed here, search for SMTP settings to find their instructions. (These instructions may also be included with IMAP settings or POP settings.)


Set the logging level to 'debug'

URL: llms-txt#set-the-logging-level-to-'debug'

export N8N_LOG_LEVEL=debug
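If you run n8n in Docker, you can set the same variable with the -e flag instead (a minimal sketch; keep whatever other flags and volumes you normally use):

docker run -it --rm --name n8n -p 5678:5678 -e N8N_LOG_LEVEL=debug docker.n8n.io/n8nio/n8n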


Nextcloud node

URL: llms-txt#nextcloud-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Nextcloud node to automate work in Nextcloud, and integrate Nextcloud with other applications. n8n has built-in support for a wide range of Nextcloud features, including creating, updating, deleting, and getting files and folders, as well as retrieving and inviting users.

On this page, you'll find a list of operations the Nextcloud node supports and links to more resources.

Refer to Nextcloud credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • File
    • Copy a file
    • Delete a file
    • Download a file
    • Move a file
    • Share a file
    • Upload a file
  • Folder
    • Copy a folder
    • Create a folder
    • Delete a folder
    • Return the contents of a given folder
    • Move a folder
    • Share a folder
  • User
    • Invite a user to a Nextcloud organization
    • Delete a user.
    • Retrieve information about a single user.
    • Retrieve a list of users.
    • Edit attributes related to a user.

Templates and examples

Save email attachments to Nextcloud

View template details

Backs up n8n Workflows to NextCloud

View template details

Move a nextcloud folder file by file

View template details

Browse Nextcloud integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Docker

URL: llms-txt#docker

docker run -it --rm \
 --name n8n \
 -p 5678:5678 \
 -e EXECUTIONS_DATA_PRUNE=true \
 -e EXECUTIONS_DATA_MAX_AGE=168 \
 docker.n8n.io/n8nio/n8n


Register the docker group membership with current session without changing your primary group

URL: llms-txt#register-the-docker-group-membership-with-current-session-without-changing-your-primary-group

Contents:

    3. DNS setup
    4. Create an .env file

exec sg docker newgrp

To grant access to a different user, type the following, substituting `<USER_TO_RUN_DOCKER>` with the appropriate username:

sudo usermod -aG docker <USER_TO_RUN_DOCKER>

You will need to run `exec sg docker newgrp` from any of that user's existing sessions for it to access the new group permissions.

You can verify that your current session recognizes the `docker` group by typing:

## 3. DNS setup

To host n8n online or on a network, create a dedicated subdomain pointed at your server.

Add an A record to route the subdomain accordingly:

| Record type | Name                              | Destination                |
| ----------- | --------------------------------- | -------------------------- |
| A           | `n8n` (or your desired subdomain) | `<your_server_IP_address>` |

## 4. Create an `.env` file

Create a project directory to store your n8n environment configuration and Docker Compose files and navigate inside:

mkdir n8n-compose
cd n8n-compose
Inside the `n8n-compose` directory, create an `.env` file to customize your n8n instance's details. Change it to match your own information:
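As a rough illustration only, an `.env` file for this kind of setup often holds values like the following. The variable names here are assumptions based on a typical n8n Docker Compose setup; use whatever names your compose file actually references:

DOMAIN_NAME=example.com         # top-level domain the subdomain belongs to (assumed variable name)
SUBDOMAIN=n8n                   # subdomain you created in the DNS step above (assumed variable name)
GENERIC_TIMEZONE=Europe/Berlin  # default timezone for schedule-based nodes (assumed variable name)
SSL_EMAIL=user@example.com      # email address used for the TLS certificate (assumed variable name)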

Telegram node Chat operations

URL: llms-txt#telegram-node-chat-operations

Contents:

  • Get Chat
  • Get Administrators
  • Get Chat Member
  • Leave Chat
  • Set Description
  • Set Title

Use these operations to get information about chats, members, and administrators; leave chats; and set chat titles and descriptions. Refer to Telegram for more information on the Telegram node itself.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Get Chat

Use this operation to get up-to-date information about a chat using the Bot API getChat method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Chat.
  • Operation: Select Get.
  • Chat ID: Enter the Chat ID or username of the target channel in the format @channelusername.

Refer to the Telegram Bot API getChat documentation for more information.

Get Administrators

Use this operation to get a list of all administrators in a chat using the Bot API getChatAdministrators method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Chat.
  • Operation: Select Get Administrators.
  • Chat ID: Enter the Chat ID or username of the target channel in the format @channelusername.

Refer to the Telegram Bot API getChatAdministrators documentation for more information.

Get Chat Member

Use this operation to get the details of a chat member using the Bot API getChatMember method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Chat.
  • Operation: Select Get Member.
  • Chat ID: Enter the Chat ID or username of the target channel in the format @channelusername.
  • User ID: Enter the unique identifier of the user whose information you want to get.

Refer to the Telegram Bot API getChatMember documentation for more information.

Leave Chat

Use this operation to leave a chat using the Bot API leaveChat method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Chat.
  • Operation: Select Leave.
  • Chat ID: Enter the Chat ID or username of the channel you wish to leave in the format @channelusername.

Refer to the Telegram Bot API leaveChat documentation for more information.

Set Description

Use this operation to set the description of a chat using the Bot API setChatDescription method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Chat.
  • Operation: Select Set Description.
  • Chat ID: Enter the Chat ID or username of the channel whose description you want to set, in the format @channelusername.
  • Description: Enter the new description you'd like the chat to use, maximum of 255 characters.

Refer to the Telegram Bot API setChatDescription documentation for more information.

Set Title

Use this operation to set the title of a chat using the Bot API setChatTitle method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Chat.
  • Operation: Select Set Title.
  • Chat ID: Enter the Chat ID or username of the channel whose title you want to set, in the format @channelusername.
  • Title: Enter the new title you'd like the chat to use, maximum of 128 characters.

Refer to the Telegram Bot API setChatTitle documentation for more information.


Ollama Model node common issues

URL: llms-txt#ollama-model-node-common-issues

Contents:

  • Processing parameters
  • Can't connect to a remote Ollama instance
  • Can't connect to a local Ollama instance when using Docker
    • If only Ollama is in Docker
    • If only n8n is in Docker
    • If Ollama and n8n are running in separate Docker containers
    • If Ollama and n8n are running in the same Docker container
  • Error: connect ECONNREFUSED ::1:11434
  • Ollama and HTTP/HTTPS proxies

Here are some common errors and issues with the Ollama Model node and steps to resolve or troubleshoot them.

Processing parameters

The Ollama Model node is a sub-node. Sub-nodes behave differently than other nodes when processing multiple items using expressions.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Can't connect to a remote Ollama instance

The Ollama Model node supports Bearer token authentication for connecting to remote Ollama instances behind authenticated proxies (such as Open WebUI).

For remote authenticated connections, configure both the remote URL and API key in your Ollama credentials.

Follow the Ollama credentials instructions for more information.

Can't connect to a local Ollama instance when using Docker

The Ollama Model node connects to a locally hosted Ollama instance using the base URL defined by Ollama credentials. When you run either n8n or Ollama in Docker, you need to configure the network so that n8n can connect to Ollama.

Ollama typically listens for connections on localhost, the local network address. In Docker, by default, each container has its own localhost which is only accessible from within the container. If either n8n or Ollama are running in containers, they won't be able to connect over localhost.

The solution depends on how you're hosting the two components.

If only Ollama is in Docker

If only Ollama is running in Docker, configure Ollama to listen on all interfaces by binding to 0.0.0.0 inside of the container (the official images are already configured this way).

When running the container, publish the ports with the -p flag. By default, Ollama runs on port 11434, so your Docker command should look like this:

When configuring Ollama credentials, the localhost address should work without a problem (set the base URL to http://localhost:11434).

If only n8n is in Docker

If only n8n is running in Docker, configure Ollama to listen on all interfaces by binding to 0.0.0.0 on the host.

If you are running n8n in Docker on Linux, use the --add-host flag to map host.docker.internal to host-gateway when you start the container. For example:

If you are using Docker Desktop, this is automatically configured for you.

When configuring Ollama credentials, use host.docker.internal as the host address instead of localhost. For example, to bind to the default port 11434, you could set the base URL to http://host.docker.internal:11434.

If Ollama and n8n are running in separate Docker containers

If both n8n and Ollama are running in Docker in separate containers, you can use Docker networking to connect them.

Configure Ollama to listen on all interfaces by binding to 0.0.0.0 inside of the container (the official images are already configured this way).

When configuring Ollama credentials, use the Ollama container's name as the host address instead of localhost. For example, if you call the Ollama container my-ollama and it listens on the default port 11434, you would set the base URL to http://my-ollama:11434.
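As a minimal sketch (the container and network names below are examples, not required values), you could put both containers on a user-defined Docker network:

docker network create demo
docker run -d --name my-ollama --network demo -v ollama:/root/.ollama ollama/ollama
docker run -it --rm --name n8n --network demo -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n

With this layout, the base URL in the Ollama credentials would be http://my-ollama:11434, as described above.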

If Ollama and n8n are running in the same Docker container

If Ollama and n8n are running in the same Docker container, the localhost address doesn't need any special configuration. You can configure Ollama to listen on localhost and configure the base URL in the Ollama credentials in n8n to use localhost: http://localhost:11434.

Error: connect ECONNREFUSED ::1:11434

This error occurs when your computer has IPv6 enabled, but Ollama is listening to an IPv4 address.

To fix this, change the base URL in your Ollama credentials to connect to 127.0.0.1, the IPv4-specific local address, instead of the localhost alias that can resolve to either IPv4 or IPv6: http://127.0.0.1:11434.

Ollama and HTTP/HTTPS proxies

Ollama doesn't support custom HTTP agents in its configuration. This makes it difficult to use Ollama behind custom HTTP/HTTPS proxies. Depending on your proxy configuration, it might not work at all, despite setting the HTTP_PROXY or HTTPS_PROXY environment variables.

Refer to Ollama's FAQ for more information.

Examples:

Example 1 (unknown):

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Example 2 (unknown):

docker run -it --rm --add-host host.docker.internal:host-gateway --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n

Postgres Chat Memory node

URL: llms-txt#postgres-chat-memory-node

Contents:

  • Node parameters
  • Related resources
  • Single memory instance

Use the Postgres Chat Memory node to use Postgres as a memory server for storing chat history.

On this page, you'll find a list of operations the Postgres Chat Memory node supports, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Session Key: Enter the key to use to store the memory in the workflow data.
  • Table Name: Enter the name of the table to store the chat history in. The system creates the table if it doesn't exist.
  • Context Window Length: Enter the number of previous interactions to consider for context.

Refer to LangChain's Postgres Chat Message History documentation for more information about the service.

View n8n's Advanced AI documentation.

Single memory instance

If you add more than one Postgres Chat Memory node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the Chat Memory Manager node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes.


npm node

URL: llms-txt#npm-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the npm node to automate work in npm, and integrate npm with other applications.

On this page, you'll find a list of operations the npm node supports and links to more resources.

Refer to npm credentials for guidance on setting up authentication.

  • Package
    • Get Package Metadata
    • Get Package Versions
    • Search for Packages
  • Distribution Tag
    • Get All Tags
    • Update a Tag

Templates and examples

Browse npm integration templates, or search all templates

Refer to npm's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Twist node

URL: llms-txt#twist-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported
  • Get the User ID

Use the Twist node to automate work in Twist, and integrate Twist with other applications. n8n has built-in support for a wide range of Twist features, including creating conversations in a channel, as well as creating and deleting comments on a thread.

On this page, you'll find a list of operations the Twist node supports and links to more resources.

Refer to Twist credentials for guidance on setting up authentication.

  • Channel
    • Archive a channel
    • Initiate a public or private channel-based conversation
    • Delete a channel
    • Get information about a channel
    • Get all channels
    • Unarchive a channel
    • Update a channel
  • Comment
    • Create a new comment to a thread
    • Delete a comment
    • Get information about a comment
    • Get all comments
    • Update a comment
  • Message Conversation
    • Create a message in a conversation
    • Delete a message in a conversation
    • Get a message in a conversation
    • Get all messages in a conversation
    • Update a message in a conversation
  • Thread
    • Create a new thread in a channel
    • Delete a thread
    • Get information about a thread
    • Get all threads
    • Update a thread

Templates and examples

Browse Twist integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

To get the User ID for a user:

  1. Open the Team tab.
  2. Select a user's avatar.
  3. Copy the string of characters located after /u/ in your Twist URL. This string is the User ID. For example, if the URL is https://twist.com/a/4qw45/people/u/475370 the User ID is 475370.

Using the n8n-node tool

URL: llms-txt#using-the-n8n-node-tool

Contents:

  • Get n8n-node
    • Run n8n-node without installing
    • Install n8n-node globally
  • Command overview
    • new
    • build
    • dev
    • lint
    • release
  • Creating a new node

The n8n-node tool is the official CLI for developing community nodes for n8n. You can use it to scaffold out new nodes, build your projects, and run your node as you develop it.

Using n8n-node, you can create nodes that adhere to the guidelines for verified community nodes.

Run n8n-node without installing

You can create an n8n-node project directly without installing by using the @n8n/create-node initializer with your package manager:

This sets up the initial project files locally (an alternative to installing n8n-node and explicitly running the new command). Afterward, you run the rest of the n8n-node commands through your package manager's script runner inside the project directory (for example, npm run dev).

Install n8n-node globally

You can install n8n-node globally with npm:

Verify access to the command by typing:

The n8n-node tool provides the following commands:

The new command creates the file system structure and metadata for a new node. This command initializes the same structure as outlined in run n8n-node without installing.

When called, it interactively prompts for details about your project to customize your starting code. You'll provide the project name, choose a node type, and select the starting template that best matches your needs. The n8n-node tool will create your project file structure and optionally install your initial project dependencies.

Learn more about how to use the new command in the creating a new node section.

The build command compiles your node and copies all the required assets.

Learn more about how to use the build command in the building your node section.

The dev command runs n8n with your node. It monitors your project directory and automatically rebuilds the live preview when it detects changes.

Learn more about how to use the dev command in the testing your node in n8n section.

The lint command checks the code for the node in the current directory. You can optionally use with the --fix option to attempt to automatically fix any issues it identifies.

Learn more about how to use the lint command in the lint your node section.

The release command publishes your community node package to npm. It uses release-it to clean, check and cleanly build your package before publishing it to npm.

Learn more about how to use the release command in the release your node section.

Creating a new node

To create a new node with n8n-node, call n8n-node new. You can call this command entirely interactively or provide details on the command line.

Create new node without installing

You can optionally create an n8n-node project directly without installing n8n-node by using the @n8n/create-node initializer with your package manager.

In the commands below, substitute n8n-node new with npm create @n8n/node@latest. When using this form, you must add a double dash (--) before including any options (like --template). For example:

The command will prompt for any missing information about your node and then generate a project structure to get you started. By default, it will follow up by installing the initial project dependencies (you can disable this by passing the --skip-install flag).

Setting node details interactively

When called without arguments, n8n-node new prompts you for details about your new node interactively:

This will start an interactive prompt where you can define the details of your project:

  • What is your node called? The name of your node. This impacts the name of your project directory, package name, and the n8n node itself. The name must use one of the following formats:
    • n8n-nodes-<YOUR_NODE_NAME>
    • @<YOUR_ORG>/n8n-nodes-<YOUR_NODE_NAME>
  • What kind of node are you building? The node type you want to build:
    • HTTP API: A low-code, declarative node structure that's designed for faster approval for n8n Cloud.
    • Other: A programmatic style node with full flexibility.
  • What template do you want to use? When using the HTTP API, you can choose the template to start from:
    • GitHub Issues API: A demo node that includes multiple operations and credentials. This can help you get familiar with the node structure and conventions.
    • Start from scratch: A blank template that will guide you through your custom setup with some further prompts.

When choosing HTTP API > Start from scratch, n8n-node will ask you the following:

  • What's the base URL of the API? The root URL for the API you plan to integrate with.
  • What type of authentication does your API use? The authentication your node should provide:
    • API Key: Send a secret key using headers, query parameters, or the request body.
    • Bearer Token: Send a token using the Authorization header (Authorization: Bearer <token>).
    • OAuth2: Use an OAuth 2.0 flow to get access tokens on behalf of a user or app.
    • Basic Auth: Send the base64-encoded username and password through Authorization headers.
    • Custom: Create your own credential logic. This will create an empty credential class that you can customize according to your needs.
    • None: No authentication necessary. Don't create a credential class for the node.

Once you've made your selections, n8n-node will create a new project directory for your node in the current directory. By default, it will also install the initial project dependencies (you can disable this by passing the --skip-install flag).

Providing node details on the command line

You can provide some of your node details on the command line to avoid prompts.

You can include the name you want to use for your node as an argument:
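For example (the package name here is a placeholder):

n8n-node new n8n-nodes-mynode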

Node names must use one of the following formats:

  • @<YOUR_ORG>/n8n-nodes-<YOUR_NODE_NAME>
  • n8n-nodes-<YOUR_NODE_NAME>

If you know the template you want to use ahead of time, you can also pass the value using the --template flag:

The template must be one of the following:

  • declarative/github-issues: A demo node that includes multiple operations and credentials. This can help you get familiar with the node structure and conventions.
  • declarative/custom: A blank template that will guide you through your custom setup with some further prompts.
  • programmatic/example: A programmatic style node with full flexibility.

Building your node

You can build your node by running the build command in your project's root directory:
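A minimal example, assuming the @n8n/node-cli package is installed globally or available through npx:

n8n-node build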

n8n-node will compile your TypeScript files and bundle your other project assets. You can also call the build script from your package manager. For instance, if you're using npm, this works the same:
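Assuming the default build script that n8n-node adds to your package.json:

npm run build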

The n8n-node tool automatically creates a lint script for your project as well. You can run the lint command directly from your project's root directory.
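For example, assuming the CLI is available on your PATH or through npx:

n8n-node lint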

You can also run through your package manager's script runner:
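With npm, assuming the default scripts generated by n8n-node:

npm run lint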

If you include the --fix option (also callable with npm run lint:fix), n8n-node will attempt to fix the issues that it identifies:
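Either form should work, assuming the default scripts generated by n8n-node:

n8n-node lint --fix
npm run lint:fix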

Testing your node in n8n

To test your node in n8n, you run the dev command in your project's root directory:
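For example, assuming the CLI is available on your PATH or through npx:

n8n-node dev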

As with the build command, you can also run this through your package manager. For example:
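With npm, using the dev script mentioned earlier:

npm run dev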

n8n-node will compile your project and then start up a local n8n instance through npm with your node loaded.

Visit localhost:5678 to sign in to your n8n instance. If you open a workflow, your node appears in the nodes panel.

From there, you can add it to your workflow and test the node's functionality as you develop.

To publish your node, run the release command in your project directory. This command uses release-it to build and publish your node.

To use the release command, you must log in to npm using the npm login command. Without this, n8n-node won't have authorization to publish your project files.

To run with npm, type:
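Assuming the default release script in the generated package.json:

npm run release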

When you run the release command, n8n-node will perform the following actions:

  • build the node
  • run lint checks against your files
  • update the changelog
  • create git tags
  • create a GitHub release
  • publish the package to npm

Examples:

Example 1 (unknown):

npm create @n8n/node@latest

Example 2 (unknown):

npm install --global @n8n/node-cli

Example 3 (unknown):

n8n-node --version

Example 4 (unknown):

npm create @n8n/node@latest n8n-nodes-mynode -- --template declarative/custom

Twilio Trigger node

URL: llms-txt#twilio-trigger-node

Contents:

  • Events
  • Related resources

Use the Twilio Trigger node to respond to events in Twilio and integrate Twilio with other applications. n8n has built-in support for a wide range of Twilio events, including new SMS and calls.

On this page, you'll find a list of events the Twilio Trigger node can respond to and links to more resources.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Twilio integrations page.

  • On New SMS
  • On New Call

It can take Twilio up to thirty minutes to generate a summary for a completed call.

n8n provides an app node for Twilio. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Twilio's documentation for details about their API.


Mapping in the UI

URL: llms-txt#mapping-in-the-ui

Contents:

  • How to drag and drop data
    • Understand what you're mapping with drag and drop
    • Understand nested data

Data mapping means referencing data from previous nodes. It doesn't include changing (transforming) data, just referencing it.

You can map data in the following ways:

  • Using the expressions editor.
  • By dragging and dropping data from the INPUT into parameters. This generates the expression for you.

For information on errors with mapping and linking items, refer to Item linking errors.

How to drag and drop data

  1. Run your workflow to load data.
  2. Open the node where you need to map data.
  3. You can map in table, JSON, and schema view:
    • In table view: click and hold a table heading to map top level data, or a field in the table to map nested data.
    • In JSON view: click and hold a key.
    • In schema view: click and hold a key.
  4. Drag the item into the field where you want to use the data.

Understand what you're mapping with drag and drop

Data mapping maps the key path, and loads the key's value into the field. For example, given the following data:

You can map fruit by dragging and dropping fruit from the INPUT into the field where you want to use its value. This creates an expression, {{ $json.fruit }}. When the node iterates over input items, the value of the field becomes the value of fruit for each item.

Understand nested data

Given the following data:

n8n displays it in table form like this:

Examples:

Example 1 (unknown):

[
	{
		"fruit": "apples",
		"color": "green"
	}
]

Example 2 (unknown):

[
  {
    "name": "First item",
    "nested": {
      "example-number-field": 1,
      "example-string-field": "apples"
    }
  },
  {
    "name": "Second item",
    "nested": {
      "example-number-field": 2,
      "example-string-field": "oranges"
    }
  }
]

Call an API to fetch data

URL: llms-txt#call-an-api-to-fetch-data

Contents:

  • Key features
  • Using the example

Use n8n to bring data from any API to your AI. This workflow uses the Chat Trigger to provide the chat interface, and the Call n8n Workflow Tool to call a second workflow that calls the API. The second workflow uses AI functionality to refine the API request based on the user's query.

View workflow file

  • Chat Trigger: start your workflow and respond to user chat interactions. The node provides a customizable chat interface.
  • Agent: the key piece of the AI workflow. The Agent interacts with other components of the workflow and makes decisions about what tools to use.
  • Call n8n Workflow Tool: plug in n8n workflows as custom tools. In AI, a tool is an interface the AI can use to interact with the world (in this case, the data provided by your workflow). The AI model uses the tool to access information beyond its built-in dataset.
  • A Basic LLM Chain with an Auto-fixing Output Parser and Structured Output Parser to read the user's query and set parameters for the API call based on the user input.

To load the template into your n8n instance:

  1. Download the workflow JSON file.
  2. Open a new workflow in your n8n instance.
  3. Copy in the JSON, or select Workflow menu > Import from file....

The example workflows use Sticky Notes to guide you:

  • Yellow: notes and information.
  • Green: instructions to run the workflow.
  • Orange: you need to change something to make the workflow work.
  • Blue: draws attention to a key feature of the example.

urlscan.io node

URL: llms-txt#urlscan.io-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the urlscan.io node to automate work in urlscan.io, and integrate urlscan.io with other applications. n8n has built-in support for a wide range of urlscan.io features, including getting and performing scans.

On this page, you'll find a list of operations the urlscan.io node supports and links to more resources.

Refer to urlscan.io credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Scan
    • Get
    • Get All
    • Perform

Templates and examples

Phishing Analysis - URLScan.io and VirusTotal

View template details

Scan URLs with urlscan.io and Send Results via Gmail

by Calistus Christian

View template details

Perform, Get Scans 🛠️ urlscan.io Tool MCP Server 💪 all 3 operations

View template details

Browse urlscan.io integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Dates

URL: llms-txt#dates

Contents:

  • beginningOf(unit?: DurationUnit): Date
  • endOfMonth(): Date
  • extract(datePart?: DurationUnit): Number
  • format(fmt: TimeFormat): String
  • isBetween(date1: Date | DateTime, date2: Date | DateTime): Boolean
  • isDst(): Boolean
  • isInLast(n?: Number, unit?: DurationUnit): Boolean
  • isWeekend(): Boolean
  • minus(n: Number, unit?: DurationUnit): Date
  • plus(n: Number, unit?: DurationUnit): Date

A reference document listing built-in convenience functions to support data transformation in expressions for dates.

JavaScript in expressions

You can use any JavaScript in expressions. Refer to Expressions for more information.

beginningOf(unit?: DurationUnit): Date

Transforms a Date to the start of the given time period. Returns either a JavaScript Date or Luxon Date, depending on input.

Function parameters

unitOptionalString enum

A valid string specifying the time unit.

One of: second, minute, hour, day, week, month, year


endOfMonth(): Date

Transforms a Date to the end of the month.


extract(datePart?: DurationUnit): Number

Extracts the part defined in datePart from a Date. Returns either a JavaScript Date or Luxon Date, depending on input.

Function parameters

datePartOptionalString enum

A valid string specifying the time unit.

One of: second, minute, hour, day, week, month, year


format(fmt: TimeFormat): String

Formats a Date in the given structure.

Function parameters

fmtRequiredString enum

A valid string specifying the time format. Refer to Luxon | Table of tokens for formats.


isBetween(date1: Date | DateTime, date2: Date | DateTime): Boolean

Checks if a Date is between two given dates.

Function parameters

date1RequiredDate or DateTime

The first date in the range.

date2RequiredDate or DateTime

The last date in the range.


isDst(): Boolean

Checks if a Date is within Daylight Savings Time.


isInLast(n?: Number, unit?: DurationUnit): Boolean

Checks if a Date is within a given time period.

Function parameters

nOptionalNumber

The number of units. For example, to check if the date is in the last nine weeks, enter 9.

unitOptionalString enum

A valid string specifying the time unit.

One of: second, minute, hour, day, week, month, year


isWeekend(): Boolean

Checks if the Date falls on a Saturday or Sunday.


minus(n: Number, unit?: DurationUnit): Date

Subtracts a given time period from a Date. Returns either a JavaScript Date or Luxon Date, depending on input.

Function parameters

nRequiredNumber

The number of units. For example, to subtract nine seconds, enter 9 here.

unitOptionalString enum

A valid string specifying the time unit.

Default: milliseconds

One of: second, minute, hour, day, week, month, year


plus(n: Number, unit?: DurationUnit): Date

Adds a given time period to a Date. Returns either a JavaScript Date or Luxon Date, depending on input.

Function parameters

nRequiredNumber

The number of units. For example, to add nine seconds, enter 9 here.

unitOptionalString enum

A valid string specifying the time unit.

Default: milliseconds

One of: second, minute, hour, day, week, month, year


toDateTime(): Date

Converts a JavaScript date to a Luxon date object.
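
As a quick illustration, here's a minimal sketch of how these functions can be chained inside an expression. It assumes the incoming item has a hypothetical orderDate field that already holds a Date or DateTime value (for string dates, convert first, for example with toDateTime()):

// Add two weeks to the hypothetical orderDate field
{{ $json.orderDate.plus(2, 'week') }}

// Go back one month and format the result with Luxon tokens
{{ $json.orderDate.minus(1, 'month').format('yyyy-MM-dd') }}

// Check whether the date falls within the last seven days
{{ $json.orderDate.isInLast(7, 'day') }}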



WordPress node

URL: llms-txt#wordpress-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the WordPress node to automate work in WordPress, and integrate WordPress with other applications. n8n has built-in support for a wide range of WordPress features, including creating, updating, and getting posts and users.

On this page, you'll find a list of operations the WordPress node supports and links to more resources.

Refer to WordPress credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Post
    • Create a post
    • Get a post
    • Get all posts
    • Update a post
  • Pages
    • Create a page
    • Get a page
    • Get all pages
    • Update a page
  • User
    • Create a user
    • Get a user
    • Get all users
    • Update a user

Templates and examples

Write a WordPress post with AI (starting from a few keywords)

View template details

🔍🛠️Generate SEO-Optimized WordPress Content with AI Powered Perplexity Research

View template details

Automate Content Generator for WordPress with DeepSeek R1

View template details

Browse WordPress integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Miro credentials

URL: llms-txt#miro-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Miro account.

Supported authentication methods

  • OAuth2

Refer to Miro's API documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need a Miro account and app, as well as:

  • A Client ID: Generated when you create a new OAuth2 application.
  • A Client Secret: Generated when you create a new OAuth2 application.

Refer to Miro's API documentation for more information about authenticating to the service.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you're self-hosting n8n, you'll need to create an app to configure OAuth2. Refer to Miro's OAuth documentation for more information about setting up OAuth2.


Jenkins credentials

URL: llms-txt#jenkins-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token

You can use these credentials to authenticate the following nodes:

Create an account on a Jenkins instance.

Supported authentication methods

  • API token

Jenkins doesn't provide public API documentation; API documentation for each page is available from the user interface in the bottom right. Refer to those detailed pages for more information about the service. Refer to Jenkins Remote Access API for information on the API and API wrappers.

To configure this credential, you'll need:

  • The Jenkins Username: The username of the account the token belongs to.
  • A Personal API Token: Generate this from the user's profile details > Configure > Add new token. Refer to these Stack Overflow instructions for more detail.
  • The Jenkins Instance URL

Jenkins rebuilt their API token setup in 2018. If you're working with an older Jenkins instance, be sure you're using a non-legacy API token. Refer to Security Hardening: New API token system in Jenkins 2.129+ for more information.


Airtable Trigger node

URL: llms-txt#airtable-trigger-node

Contents:

  • Events
  • Related resources
  • Node parameters
    • Poll Times
    • Base
    • Table
    • Trigger Field
    • Download Attachments
    • Download Fields
    • Additional Fields

Airtable is a spreadsheet-database hybrid, with the features of a database but applied to a spreadsheet. The fields in an Airtable table are similar to cells in a spreadsheet, but have types such as 'checkbox', 'phone number', and 'drop-down list', and can reference file attachments like images.

On this page, you'll find a list of events the Airtable Trigger node can respond to and links to more resources.

You can find authentication information for this node here.

  • New Airtable event

n8n provides an app node for Airtable. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Airtable's documentation for details about their API.

Use these parameters to configure your node.

Poll Times

n8n's Airtable Trigger node uses polling to check for updates on configured Airtable resources. The Poll Times parameter configures the polling frequency:

  • Every Minute
  • Every Hour
  • Every Day
  • Every Week
  • Every Month
  • Every X: Check for updates every given number of minutes or hours.
  • Custom: Customize the polling interval by providing a cron expression.
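
For example, a Custom cron expression like the following (a sketch only; adjust it to your own schedule) polls every 15 minutes between 09:00 and 17:00, Monday to Friday:

*/15 9-17 * * 1-5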

Use the Add Poll Time button to add more polling intervals.

Base

The Airtable base you want to check for updates on. You can provide your base's URL or base ID.

Table

The Airtable table within the Airtable base that you want to check for updates on. You can provide the table's URL or table ID.

Trigger Field

A created or last modified field in your table. The Airtable Trigger node uses this to determine what updates occurred since the previous check.

Download Attachments

Whether to download attachments from the table. When enabled, the Download Fields parameter defines the attachment fields.

Download Fields

When you enable the Download Attachments toggle, this field defines which table fields to download. Field names are case sensitive. Use a comma to separate multiple field names.

Additional Fields

Use the Add Field button to add the following parameters:

  • Fields: A comma-separated list of fields to include in the output. If you don't specify anything here, the output will contain only the Trigger Field.
  • Formula: An Airtable formula to further filter the results. You can use this to add further constraints to the events that trigger the workflow. Note that formula values aren't taken into account for manual executions, only for production polling.
  • View ID: The name or ID of a table view. When defined, only returns records available in the given view.

Embeddings Google Vertex node

URL: llms-txt#embeddings-google-vertex-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources

Use the Embeddings Google Vertex node to generate embeddings for a given text.

On this page, you'll find the node parameters for the Embeddings Google Vertex node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model to use to generate the embedding.

Learn more about available embedding models in the Google Vertex AI embeddings API documentation.

Templates and examples

Ask questions about a PDF using AI

View template details

Chat with PDF docs using AI (quoting sources)

View template details

RAG Chatbot for Company Documents using Google Drive and Gemini

View template details

Browse Embeddings Google Vertex integration templates, or search all templates

Refer to LangChain's Google Generative AI embeddings documentation for more information about the service.

View n8n's Advanced AI documentation.


AWS credentials

URL: llms-txt#aws-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API access key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API access key

Refer to AWS's Identity and Access Management documentation for more information about the service.

Using API access key

To configure this credential, you'll need an AWS account and:

  • Your AWS Region
  • The Access Key ID: Generated when you create an access key.
  • The Secret Access Key: Generated when you create an access key.

To create an access key and set up the credential:

  1. In your n8n credential, select your AWS Region.
  2. Log in to the IAM console.
  3. In the navigation bar on the upper right, select your user name and then select Security credentials.
  4. In the Access keys section, select Create access key.
  5. On the Access key best practices & alternatives page, choose your use case. If it doesn't prompt you to create an access key, select Other.
  6. Select Next.
  7. Set a description tag value for the access key to make it easier to identify, for example n8n integration.
  8. Select Create access key.
  9. Reveal the Access Key ID and Secret Access Key and enter them in n8n.
  10. To use a Temporary security credential, turn that option on and add a Session token. Refer to the AWS Temporary security credential documentation for more information on working with temporary security credentials.
  11. If you use Amazon Virtual Private Cloud (VPC) to host n8n, you can establish a connection between your VPC and some apps. Use Custom Endpoints to enter relevant custom endpoint(s) for this connection. This setup works with these apps:
    • Rekognition
    • Lambda
    • SNS
    • SES
    • SQS
    • S3

You can also generate access keys through the AWS CLI and AWS API. Refer to the AWS Managing Access Keys documentation for instructions on generating access keys using these methods.


APITemplate.io node

URL: llms-txt#apitemplate.io-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the APITemplate.io node to automate work in APITemplate.io, and integrate APITemplate.io with other applications. n8n has built-in support for a wide range of APITemplate.io features, including getting account information and creating images and PDFs.

On this page, you'll find a list of operations the APITemplate.io node supports and links to more resources.

Refer to APITemplate.io credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Account
    • Get
  • Image
    • Create
  • PDF
    • Create

Templates and examples

🤖 AI content generation for Auto Service 🚘 Automate your social media📲!

View template details

Create an invoice based on the Typeform submission

View template details

Generate Dynamic Images with Text & Templates using ImageKit.

View template details

Browse APITemplate.io integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


PagerDuty credentials

URL: llms-txt#pagerduty-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a PagerDuty account.

Supported authentication methods

  • API token
  • OAuth2

Refer to PagerDuty's API documentation for more information about the service.

To configure this credential, you'll need:

  • A general access API Token: To generate an API token, go to Integrations > Developer Tools > API Access Keys > Create New API Key. Refer to Generate a General Access REST API key for more information.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you need to configure OAuth2 from scratch, register a new PagerDuty app.

Use these settings for registering your app:

  • In the Category dropdown list, select Infrastructure Automation.
  • In the Functionality section, select OAuth 2.0.

Once you Save your app, open the app details and edit your app configuration to use these settings:

  • Within the OAuth 2.0 section, select Add.
  • Copy the OAuth Callback URL from n8n and paste it into the Redirect URL field.
  • Copy the Client ID and Client Secret from PagerDuty and add these to your n8n credentials.
  • Select Read/Write from the Set Permission Scopes dropdown list.

Refer to the instructions in App functionality for more information on available functionality. Refer to the PagerDuty OAuth Functionality documentation for more information on the OAuth flow.


Hugging Face Inference Model node

URL: llms-txt#hugging-face-inference-model-node

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Related resources

Use the Hugging Face Inference Model node to use Hugging Face's models.

On this page, you'll find the node parameters for the Hugging Face Inference Model node, and links to more resources.

This node lacks tools support, so it won't work with the AI Agent node. Instead, connect it with the Basic LLM Chain node.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model to use to generate the completion.

  • Custom Inference Endpoint: Enter a custom inference endpoint URL.

  • Frequency Penalty: Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself.

  • Maximum Number of Tokens: Enter the maximum number of tokens used, which sets the completion length.

  • Presence Penalty: Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics.

  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.

  • Top K: Enter the number of token choices the model uses to generate the next token.

  • Top P: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.

Templates and examples

Browse Hugging Face Inference Model integration templates, or search all templates

Refer to LangChain's Hugging Face Inference Model documentation for more information about the service.

View n8n's Advanced AI documentation.


Xata node

URL: llms-txt#xata-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources
  • Single memory instance

Use the Xata node to use Xata as a memory server. On this page, you'll find a list of operations the Xata node supports, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Session ID: Enter the ID to use to store the memory in the workflow data.
  • Context Window Length: Enter the number of previous interactions to consider for context.

Templates and examples

Building Your First WhatsApp Chatbot

View template details

Scrape and summarize webpages with AI

View template details

Pulling data from services that n8n doesn't have a pre-built integration for

View template details

Browse Xata integration templates, or search all templates

Refer to LangChain's Xata documentation for more information about the service.

View n8n's Advanced AI documentation.

Single memory instance

If you add more than one Xata node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the Chat Memory Manager node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes.


Save executions ending in errors

URL: llms-txt#save-executions-ending-in-errors

export EXECUTIONS_DATA_SAVE_ON_ERROR=all


F5 Big-IP credentials

URL: llms-txt#f5-big-ip-credentials

Contents:

  • Prerequisites
  • Authentication methods
  • Related resources
  • Using account login

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create an F5 Big-IP account.

Authentication methods

  • Account login

Refer to F5 Big-IP's API documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

Using account login

To configure this credential, you'll need:

  • A Username: Use the username you use to log in to F5 Big-IP.
  • A Password: Use the user password you use to log in to F5 Big-IP.

Gmail node

URL: llms-txt#gmail-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported
  • Common issues

Use the Gmail node to automate work in Gmail, and integrate Gmail with other applications. n8n has built-in support for a wide range of Gmail features, including creating, updating, deleting, and getting drafts, messages, labels, and threads.

On this page, you'll find a list of operations the Gmail node supports and links to more resources.

Refer to Google credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Templates and examples

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

Automated Web Scraping: email a CSV, save to Google Sheets & Microsoft Excel

View template details

Suggest meeting slots using AI

View template details

Browse Gmail integration templates, or search all templates

Refer to Google's Gmail API documentation for detailed information about the API that this node integrates with.

n8n provides a trigger node for Gmail. You can find the trigger node docs here.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

For common errors or issues and suggested resolution steps, refer to Common Issues.


Structure of the node base file

URL: llms-txt#structure-of-the-node-base-file

Contents:

  • Outline structure for a declarative-style node
  • Outline structure for a programmatic-style node

The node base file follows this basic structure:

  1. Add import statements.
  2. Create a class for the node.
  3. Within the node class, create a description object, which defines the node.

A programmatic-style node also has an execute() method, which reads incoming data and parameters, then builds a request. The declarative style handles this using the routing key in the properties object, within descriptions.

Outline structure for a declarative-style node

This code snippet gives an outline of the node structure.

Refer to Standard parameters for information on parameters available to all node types. Refer to Declarative-style parameters for the parameters available for declarative-style nodes.

Outline structure for a programmatic-style node

This code snippet gives an outline of the node structure.

Refer to Standard parameters for information on parameters available to all node types. Refer to Programmatic-style parameters and Programmatic-style execute method for more information on working with programmatic-style nodes.

Examples:

Example 1 (unknown):

import { INodeType, INodeTypeDescription } from 'n8n-workflow';

export class ExampleNode implements INodeType {
	description: INodeTypeDescription = {
		// Basic node details here
		properties: [
			// Resources and operations here
		]
	};
}

Example 2 (unknown):

import { IExecuteFunctions } from 'n8n-core';
import { INodeExecutionData, INodeType, INodeTypeDescription } from 'n8n-workflow';

export class ExampleNode implements INodeType {
	description: INodeTypeDescription = {
    // Basic node details here
    properties: [
      // Resources and operations here
    ]
  };

  async execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
    // Process data and return
  }
};

n8n Cloud

URL: llms-txt#n8n-cloud

n8n Cloud is n8n's hosted solution. It provides:

  • No technical set up or maintenance for your n8n instance
  • Continual uptime monitoring
  • Managed OAuth for authentication
  • One-click upgrades to the newest n8n versions

Sign up for n8n Cloud

n8n Cloud isn't available in Russia and Belarus. Refer to this blog post: Update on n8n cloud accounts in Russia and Belarus for more information.


Cloud admin dashboard

URL: llms-txt#cloud-admin-dashboard

Contents:

  • Access the dashboard from the app
  • Access the dashboard if the app is offline

Instance owners can access the admin dashboard to manage their Cloud instance. This is where you can upgrade your n8n version and set the timezone.

Access the dashboard from the app

  1. Log in to n8n
  2. Select Admin Dashboard. n8n opens the dashboard.

Access the dashboard if the app is offline

If your instance is down, you can still access the admin dashboard. When you log in to the app, n8n will ask you if you want a magic link to access your dashboard. Select Send magic link, then check your email for the link.


Merging and splitting data

URL: llms-txt#merging-and-splitting-data

Contents:

  • Merging data
    • Merge Exercise
  • Looping
  • Splitting data in batches
    • Loop/Batch Exercise

In this chapter, you will learn how to merge and split data, and in what cases it might be useful to perform these operations.

In some cases, you might need to merge (combine) and process data from different sources.

Merging data can involve:

  • Creating one data set from multiple sources.
  • Synchronizing data between multiple systems. This could include removing duplicate data or updating data in one system when it changes in another.

One-way vs. two-way sync

In a one-way sync, data is synchronized in one direction. One system serves as the single source of truth. When information changes in that main system, it automatically changes in the secondary system; but if information changes in the secondary system, the changes aren't reflected in the main system.

In a two-way sync, data is synchronized in both directions (between both systems). When information changes in either of the two systems, it automatically changes in the other one as well.

This blog tutorial explains how to sync data one-way and two-way between two CRMs.

In n8n, you can merge data from two different nodes using the Merge node, which provides several merging options.

Notice that Combine > Merge by Fields requires you to enter input fields to match on. These fields should contain identical values between the data sources so n8n can properly match data together. In the Merge node, they're called Input 1 Field and Input 2 Field.

Property Input fields in the Merge node

Property Input in dot notation

If you want to reference nested values in the Merge node parameters Input 1 Field and Input 2 Field, you need to enter the property key in dot-notation format (as text, not as an expression). For example, to match on the nested country code from the exercise below, you would enter country.code.

You can also find the Merge node under the alias Join. This might be more intuitive if you're familiar with SQL joins.

Build a workflow that merges data from the Customer Datastore node and Code node.

  1. Add a Merge node that takes Input 1 from a Customer Datastore node and Input 2 from a Code node.
  2. In the Customer Datastore node, run the operation Get All People.
  3. In the Code node, create an array of two objects with three properties: name, language, and country, where the property country has two sub-properties code and name.
    • Fill out the values of these properties with the information of two characters from the Customer Database.
    • For example, Jay Gatsby's language is English and country name is United States.
  4. In the Merge node, try out different merge options.

The workflow for this exercise looks like this:

Workflow exercise for merging data

If you merge data with the option Keep Matches using the name as the input fields to match, the result should look like this (note this example only contains Jay Gatsby; yours might look different depending on which characters you selected):

Output of Merge node with option to keep matches

To check the configuration of the nodes, you can copy the JSON workflow code below and paste it into your Editor UI:

In some cases, you might need to perform the same operation on each element of an array or each data item (for example sending a message to every contact in your address book). In technical terms, you need to iterate through the data (with loops).

n8n generally handles this repetitive processing automatically, as the nodes run once for each item, so you don't need to build loops into your workflows.

However, there are some exceptions of nodes and operations that will require you to build a loop into your workflow.

To create a loop in an n8n workflow, you need to connect the output of one node to the input of a previous node, and add an If node to check when to stop the loop.

Splitting data in batches

If you need to process large volumes of incoming data, execute the Code node multiple times, or avoid API rate limits, it's best to split the data into batches (groups) and process these batches.

For these processes, use the Loop Over Items node. This node splits input data into a specified batch size and, with each iteration, returns a predefined amount of data.

Execution of Loop Over Items node

The Loop Over Items node stops executing after all the incoming items get divided into batches and passed on to the next node in the workflow, so it's not necessary to add an If node to stop the loop.

Loop/Batch Exercise

Build a workflow that reads the RSS feed from Medium and dev.to. The workflow should consist of three nodes:

  1. A Code node that returns the URLs of the RSS feeds of Medium (https://medium.com/feed/n8n-io) and dev.to (https://dev.to/feed/n8n).

  2. A Loop Over Items node with Batch Size: 1, that takes in the inputs from the Code node and RSS Read node and iterates over the items.

  3. An RSS Read node that gets the URL of the Medium RSS feed, passed as an expression: {{ $json.url }}.

    • The RSS Read node is one of the exception nodes which processes only the first item it receives, so the Loop Over Items node is necessary for iterating over multiple items.
  4. Add a Code node. You can format the code in several ways; one way is:
    • Set Mode to Run Once for All Items.
    • Set Language to JavaScript.
    • Copy the code below and paste it into the JavaScript Code editor:
  5. Add a Loop Over Items node connected to the Code node.
    • Set Batch Size to 1.
  6. The Loop Over Items node automatically adds a node called "Replace Me". Replace that node with an RSS Read node.
    • Set the URL to use the url from the Code node: {{ $json.url }}.

The workflow for this exercise looks like this:

Workflow for getting RSS feeds from two blogs

To check the configuration of the nodes, you can copy the JSON workflow code below and paste it into your Editor UI:

Examples:

Example 1 (unknown):

{
"meta": {
	"templateCredsSetupCompleted": true,
	"instanceId": "cb484ba7b742928a2048bf8829668bed5b5ad9787579adea888f05980292a4a7"
},
"nodes": [
	{
	"parameters": {
		"mode": "combine",
		"mergeByFields": {
		"values": [
			{
			"field1": "name",
			"field2": "name"
			}
		]
		},
		"options": {}
	},
	"id": "578365f3-26dd-4fa6-9858-f0a5fdfc413b",
	"name": "Merge",
	"type": "n8n-nodes-base.merge",
	"typeVersion": 2.1,
	"position": [
		720,
		580
	]
	},
	{
	"parameters": {},
	"id": "71aa5aad-afdf-4f8a-bca0-34450eee8acc",
	"name": "When clicking \"Execute workflow\"",
	"type": "n8n-nodes-base.manualTrigger",
	"typeVersion": 1,
	"position": [
		260,
		560
	]
	},
	{
	"parameters": {
		"operation": "getAllPeople"
	},
	"id": "497174fe-3cab-4160-8103-78b44efd038d",
	"name": "Customer Datastore (n8n training)",
	"type": "n8n-nodes-base.n8nTrainingCustomerDatastore",
	"typeVersion": 1,
	"position": [
		500,
		460
	]
	},
	{
	"parameters": {
		"jsCode": "return [\n  {\n    'name': 'Jay Gatsby',\n    'language': 'English',\n    'country': {\n      'code': 'US',\n      'name': 'United States'\n    }\n    \n  }\n  \n];"
	},
	"id": "387e8a1e-e796-4f05-8e75-7ce25c786c5f",
	"name": "Code",
	"type": "n8n-nodes-base.code",
	"typeVersion": 2,
	"position": [
		500,
		720
	]
	}
],
"connections": {
	"When clicking \"Execute workflow\"": {
	"main": [
		[
		{
			"node": "Customer Datastore (n8n training)",
			"type": "main",
			"index": 0
		},
		{
			"node": "Code",
			"type": "main",
			"index": 0
		}
		]
	]
	},
	"Customer Datastore (n8n training)": {
	"main": [
		[
		{
			"node": "Merge",
			"type": "main",
			"index": 0
		}
		]
	]
	},
	"Code": {
	"main": [
		[
		{
			"node": "Merge",
			"type": "main",
			"index": 1
		}
		]
	]
	}
},
"pinData": {}
}

Example 2 (unknown):

let urls = [
  {
    json: {
      url: 'https://medium.com/feed/n8n-io'
    }
  },
  {
    json: {
      url: 'https://dev.to/feed/n8n'
    }
  }
];

return urls;

Example 3 (unknown):

{
"meta": {
	"templateCredsSetupCompleted": true,
	"instanceId": "cb484ba7b742928a2048bf8829668bed5b5ad9787579adea888f05980292a4a7"
},
"nodes": [
	{
	"parameters": {},
	"id": "ed8dc090-ae8c-4db6-a93b-0fa873015c25",
	"name": "When clicking \"Execute workflow\"",
	"type": "n8n-nodes-base.manualTrigger",
	"typeVersion": 1,
	"position": [
		460,
		460
	]
	},
	{
	"parameters": {
		"jsCode": "let urls = [\n  {\n    json: {\n      url: 'https://medium.com/feed/n8n-io'\n    }\n  },\n  {\n   json: {\n     url: 'https://dev.to/feed/n8n'\n   } \n  }\n]\n\nreturn urls;"
	},
	"id": "1df2a9bf-f970-4e04-b906-92dbbc9e8d3a",
	"name": "Code",
	"type": "n8n-nodes-base.code",
	"typeVersion": 2,
	"position": [
		680,
		460
	]
	},
	{
	"parameters": {
		"options": {}
	},
	"id": "3cce249a-0eab-42e2-90e3-dbdf3684e012",
	"name": "Loop Over Items",
	"type": "n8n-nodes-base.splitInBatches",
	"typeVersion": 3,
	"position": [
		900,
		460
	]
	},
	{
	"parameters": {
		"url": "={{ $json.url }}",
		"options": {}
	},
	"id": "50e1c1dc-9a5d-42d3-b7c0-accc31636aa6",
	"name": "RSS Read",
	"type": "n8n-nodes-base.rssFeedRead",
	"typeVersion": 1,
	"position": [
		1120,
		460
	]
	}
],
"connections": {
	"When clicking \"Execute workflow\"": {
	"main": [
		[
		{
			"node": "Code",
			"type": "main",
			"index": 0
		}
		]
	]
	},
	"Code": {
	"main": [
		[
		{
			"node": "Loop Over Items",
			"type": "main",
			"index": 0
		}
		]
	]
	},
	"Loop Over Items": {
	"main": [
		null,
		[
		{
			"node": "RSS Read",
			"type": "main",
			"index": 0
		}
		]
	]
	},
	"RSS Read": {
	"main": [
		[
		{
			"node": "Loop Over Items",
			"type": "main",
			"index": 0
		}
		]
	]
	}
},
"pinData": {}
}

ClickUp node

URL: llms-txt#clickup-node

Contents:

  • Operations
  • Operation details
    • Get a task
  • Templates and examples
  • What to do if your operation isn't supported

Use the ClickUp node to automate work in ClickUp, and integrate ClickUp with other applications. n8n has built-in support for a wide range of ClickUp features, including creating, getting, deleting, and updating folders, checklists, tags, comments, and goals.

On this page, you'll find a list of operations the ClickUp node supports and links to more resources.

Refer to ClickUp credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Checklist
    • Create a checklist
    • Delete a checklist
    • Update a checklist
  • Checklist Item
    • Create a checklist item
    • Delete a checklist item
    • Update a checklist item
  • Comment
    • Create a comment
    • Delete a comment
    • Get all comments
    • Update a comment
  • Folder
    • Create a folder
    • Delete a folder
    • Get a folder
    • Get all folders
    • Update a folder
  • Goal
    • Create a goal
    • Delete a goal
    • Get a goal
    • Get all goals
    • Update a goal
  • Goal Key Result
    • Create a key result
    • Delete a key result
    • Update a key result
  • List
    • Create a list
    • Retrieve list's custom fields
    • Delete a list
    • Get a list
    • Get all lists
    • Get list members
    • Update a list
  • Space Tag
    • Create a space tag
    • Delete a space tag
    • Get all space tags
    • Update a space tag
  • Task
    • Create a task
    • Delete a task
    • Get a task
    • Get all tasks
    • Get task members
    • Set a custom field
    • Update a task
  • Task List
    • Add a task to a list
    • Remove a task from a list
  • Task Tag
    • Add a tag to a task
    • Remove a tag from a task
  • Task Dependency
    • Create a task dependency
    • Delete a task dependency
  • Time Entry
    • Create a time entry
    • Delete a time entry
    • Get a time entry
    • Get all time entries
    • Start a time entry
    • Stop the current running timer
    • Update a time Entry
  • Time Entry Tag
    • Add tag to time entry
    • Get all time entry tags
    • Remove tag from time entry

When using the Get a task operation, you can optionally enable the following:

  • Include Subtasks: When enabled, also fetches and includes subtasks for the specified task.
  • Include Markdown Description: When enabled, includes the markdown_description field in the response, which preserves links and formatting in the task description. This is useful if your task descriptions contain links or rich formatting.

Templates and examples

Zoom AI Meeting Assistant creates mail summary, ClickUp tasks and follow-up call

by Friedemann Schuetz

View template details

Create a task in ClickUp

View template details

Sync Notion database pages as ClickUp tasks

View template details

Browse ClickUp integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Get the binary data buffer

URL: llms-txt#get-the-binary-data-buffer

The binary data buffer contains all the binary file data processed by a workflow. You need to access it if you want to perform operations on the binary data, such as:

  • Manipulating the data: for example, adding column headers to a CSV file.
  • Using the data in calculations: for example, calculating a hash value based on it.
  • Complex HTTP requests: for example, combining file upload with sending other data formats.

Not available in Python

getBinaryDataBuffer() isn't supported when using Python.

You can access the buffer using n8n's getBinaryDataBuffer() function:

You should always use the getBinaryDataBuffer() function, and avoid using older methods of directly accessing the buffer, such as targeting it with expressions like items[0].binary.data.data.

Examples:

Example 1 (unknown):

/* 
* itemIndex: number. The index of the item in the input data.
* binaryPropertyName: string. The name of the binary property. 
* The default in the Read/Write File From Disk node is 'data'. 
*/
let binaryDataBufferItem = await this.helpers.getBinaryDataBuffer(itemIndex, binaryPropertyName);

Example 2 (unknown):

let binaryDataBufferItem = await this.helpers.getBinaryDataBuffer(0, 'data');
// Returns the data in the binary buffer for the first input item
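
As a minimal sketch, here's how the buffer might be used inside a Code node (Run Once for All Items mode), assuming the first input item carries a binary property named data:

// 'data' is the assumed binary property name of the first input item
const buffer = await this.helpers.getBinaryDataBuffer(0, 'data');

// Use the buffer in a calculation, for example to report the file size in bytes
return [
  {
    json: {
      fileSizeBytes: buffer.length,
    },
  },
];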

SSH credentials

URL: llms-txt#ssh-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using password
  • Using private key

You can use these credentials to authenticate the following nodes:

  • SSH

  • Create a remote server with SSH enabled.

  • Create a user account that can ssh into the server using one of the following:
    • The account's password
    • An SSH key pair

Supported authentication methods

  • Password: Use this method if you have a user account that can ssh into the server using their own password.
  • Private key: Use this method if you have a user account that uses an SSH key for the server or service.

Secure Shell (SSH) protocol is a method for securely sending commands over a network. Refer to Connecting to GitHub with SSH for an example of SSH setup.

Use this method if you have a user account that can ssh into the server using their own password.

To configure this credential, you'll need to:

  1. Enter the IP address of the server you're connecting to as the Host.
  2. Enter the Port to use for the connection. SSH uses port 22 by default.
  3. Enter the Username for a user account with ssh access on the server.
  4. Enter the Password for that user account.

Use this method if you have a user account that uses an SSH key for the server or service.

To configure this credential, you'll need to:

  1. Enter the IP address of the server you're connecting to as the Host.
  2. Enter the Port to use for the connection. SSH uses port 22 by default.
  3. Enter the Username of the account that generated the private key.
  4. Enter the entire contents of your SSH Private Key.
  5. If you created a Passphrase for the Private Key, enter the passphrase.
    • If you didn't create a passphrase for the key, leave blank.

Account types

URL: llms-txt#account-types

There are three account types: owner, admin, and member. The account type affects the user permissions and access.

To use admin accounts, you need a pro or enterprise plan.

Account types and role types

Account types and role types are different things. Role types are part of RBAC.

Every account has one type. The account can have different role types for different projects.

Create a member-level account for the owner

n8n recommends that owners create a member-level account for themselves. Owners can see and edit all workflows, credentials, and projects. However, there is no way to see who created a particular workflow, so there is a risk of overriding other people's work if you build and edit workflows as an owner.

Permission Owner Admin Member
Manage own email and password
Manage own workflows
View, create, and use tags
Delete tags
View and share all workflows
View, edit, and share all credentials
Set up and use Source control
Create projects
View all projects
Add and remove users
Access the Cloud dashboard

Gmail Trigger node Poll Mode options

URL: llms-txt#gmail-trigger-node-poll-mode-options

Contents:

  • Poll mode options
    • Every Hour mode
    • Every Day mode
    • Every Week mode
    • Every Month mode
    • Every X mode
    • Custom mode

Use the Gmail Trigger node's Poll Time parameter to set how often to trigger the poll. Your Mode selection will add or remove relevant fields.

Refer to the sections below for details on using each Mode.

Every Hour mode

Enter the Minute of the hour to trigger the poll, from 0 to 59.

Every Day mode

  • Enter the Hour of the day to trigger the poll in 24-hour format, from 0 to 23.

  • Enter the Minute of the hour to trigger the poll, from 0 to 59.

Every Week mode

  • Enter the Hour of the day to trigger the poll in 24-hour format, from 0 to 23.

  • Enter the Minute of the hour to trigger the poll, from 0 to 59.

  • Select the Weekday to trigger the poll.

Every Month mode

  • Enter the Hour of the day to trigger the poll in 24-hour format, from 0 to 23.

  • Enter the Minute of the hour to trigger the poll, from 0 to 59.

  • Enter the Day of the Month to trigger the poll, from 1 to 31.

Every X mode

  • Enter the Value of measurement for how often to trigger the poll in either minutes or hours.

  • Select the Unit for the value. Supported units are Minutes and Hours.

Custom mode

Enter a custom Cron Expression to trigger the poll. Use these values and ranges:

  • Seconds: 0 - 59
  • Minutes: 0 - 59
  • Hours: 0 - 23
  • Day of Month: 1 - 31
  • Months: 0 - 11 (Jan - Dec)
  • Day of Week: 0 - 6 (Sun - Sat)

To generate a Cron expression, you can use crontab guru. Paste the Cron expression that you generated using crontab guru in the Cron Expression field in n8n.

If you want to trigger your workflow every day at 04:08:30, enter the following in the Cron Expression field.

If you want to trigger your workflow every day at 04:08, enter the following in the Cron Expression field.

Why there are six asterisks in the Cron expression

The sixth asterisk in the Cron expression represents seconds. Setting this is optional. The node will execute even if you don't set the value for seconds.

* * * * * *
second minute hour day of month month day of week

Examples:

Example 1 (unknown):

30 8 4 * * *

Example 2 (unknown):

8 4 * * *

X (Formerly Twitter) node

URL: llms-txt#x-(formerly-twitter)-node

Contents:

  • Operations
  • Templates and examples

Use the X node to automate work in X and integrate X with other applications. n8n has built-in support for a wide range of X features, including creating direct messages and deleting, searching, liking, and retweeting a tweet.

On this page, you'll find a list of operations the X node supports and links to more resources.

Refer to X credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Direct Message
    • Create a direct message
  • Tweet
    • Create or reply to a tweet
    • Delete a tweet
    • Search tweets
    • Like a tweet
    • Retweet a tweet
  • User
    • Get a user
  • List
    • Add a member to a list

Templates and examples

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

AI-Powered Social Media Content Generator & Publisher

View template details

🩷Automated Social Media Content Publishing Factory + System Prompt Composition

View template details

Browse X (Formerly Twitter) integration templates, or search all templates


Set a single piece of custom execution data

URL: llms-txt#set-a-single-piece-of-custom-execution-data

_execution.customData.set("key", "value");


Switch

URL: llms-txt#switch

Contents:

  • Node parameters
    • Rules
    • Expression
  • Templates and examples
  • Related resources
  • Available data type comparisons
    • String
    • Number
    • Date & Time
    • Boolean
    • Array
    • Object

Use the Switch node to route a workflow conditionally based on comparison operations. It's similar to the IF node, but supports multiple output routes.

Select the Mode the node should use:

  • Rules: Select this mode to build a matching rule for each output.
  • Expression: Select this mode to write an expression to return the output index programmatically.

Node configuration depends on the Mode you select.

To configure the node with this operation, use these parameters:

  • Create Routing Rules to define comparison conditions.
    • Use the data type dropdown to select the data type and comparison operation type for your condition. For example, to create a rule for dates after a particular date, select Date & Time > is after.
    • The fields and values to enter into the condition change based on the data type and comparison you select. Refer to Available data type comparisons for a full list of all comparisons by data type.
  • Rename Output: Turn this control on to rename the output field to put matching data into. Enter your desired Output Name.

Select Add Routing Rule to add more rules.

You can further configure the node with this operation using these Options:

  • Fallback Output: Choose how to route the workflow when an item doesn't match any of the rules or conditions.
    • None: Ignore the item. This is the default behavior.
    • Extra Output: Send items to an extra, separate output.
    • Output 0: Send items to the same output as those matching the first rule.
  • Ignore Case: Set whether to ignore letter case when evaluating conditions (turned on) or enforce letter case (turned off).
  • Less Strict Type Validation: Set whether you want n8n to attempt to convert value types based on the operator you choose (turned on) or not (turned off).
  • Send data to all matching outputs: Set whether to send data to all outputs meeting conditions (turned on) or whether to send the data to the first output matching the conditions (turned off).

To configure the node with this operation, use these parameters:

  • Number of Outputs: Set how many outputs the node should have.
  • Output Index: Create an expression to calculate which input item should be routed to which output. The expression must return a number.
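
For example, with Number of Outputs set to 3, an expression like the following (a sketch that assumes incoming items have a priority field) routes each item to output 0, 1, or 2:

{{ $json.priority === 'high' ? 0 : $json.priority === 'medium' ? 1 : 2 }}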

Templates and examples

Building Your First WhatsApp Chatbot

View template details

Telegram AI Chatbot

View template details

Respond to WhatsApp Messages with AI Like a Pro!

View template details

Browse Switch integration templates, or search all templates

Refer to Splitting with conditionals for more information on using conditionals to create complex logic in n8n.

Available data type comparisons

String data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • is equal to
  • is not equal to
  • contains
  • does not contain
  • starts with
  • does not start with
  • ends with
  • does not end with
  • matches regex
  • does not match regex

Number data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • is equal to
  • is not equal to
  • is greater than
  • is less than
  • is greater than or equal to
  • is less than or equal to

Date & Time data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • is equal to
  • is not equal to
  • is after
  • is before
  • is after or equal to
  • is before or equal to

Boolean data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • is true
  • is false
  • is equal to
  • is not equal to

Array data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty
  • contains
  • does not contain
  • length equal to
  • length not equal to
  • length greater than
  • length less than
  • length greater than or equal to
  • length less than or equal to

Object data type supports these comparisons:

  • exists
  • does not exist
  • is empty
  • is not empty

Numbers

URL: llms-txt#numbers

Contents:

  • ceil(): Number
  • floor(): Number
  • format(locales?: LanguageCode, options?: FormatOptions): String
  • isEven(): Boolean
  • isOdd(): Boolean
  • round(decimalPlaces?: Number): Number
  • toBoolean(): Boolean
  • toDateTime(format?: String): Date

A reference document listing built-in convenience functions to support data transformation in expressions for numbers.

JavaScript in expressions

You can use any JavaScript in expressions. Refer to Expressions for more information.

ceil(): Number

Rounds up a number to a whole number.


floor(): Number

Rounds down a number to a whole number.


format(locales?: LanguageCode, options?: FormatOptions): String

This is a wrapper around Intl.NumberFormat(). Returns a formatted string of a number based on the given LanguageCode and FormatOptions. When no arguments are given, it formats the number in a format like 1.234.

Function parameters

localesOptionalString

An IETF BCP 47 language tag.

optionsOptionalObject

Configure options for number formatting. Refer to MDN | Intl.NumberFormat() for more information.


isEven(): Boolean

Returns true if the number is even. Only works on whole numbers.


isOdd(): Boolean

Returns true if the number is odd. Only works on whole numbers.


round(decimalPlaces?: Number): Number

Returns the value of a number rounded to the nearest whole number, unless a decimal place is specified.

Function parameters

decimalPlacesOptionalNumber

How many decimal places to round to.


toBoolean(): Boolean

Converts a number to a boolean. 0 converts to false. All other values convert to true.


toDateTime(format?: String): Date

Converts a number to a Luxon date object.

Function parameters

formatOptionalString enum

Can be ms (milliseconds), s (seconds), or excel (Excel 1900). Defaults to milliseconds.
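
As a quick illustration, here's a minimal sketch of how these functions look inside expressions (the literal values are placeholders):

// Round to two decimal places: returns 3.14
{{ (3.14159).round(2) }}

// Check whether a whole number is odd: returns true
{{ (7).isOdd() }}

// Locale-aware formatting with Intl.NumberFormat: returns "1,234.5" for en-US
{{ (1234.5).format('en-US') }}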



Qdrant credentials

URL: llms-txt#qdrant-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API key

Refer to Qdrant's documentation for more information.

View n8n's Advanced AI documentation.

To configure this credential, you'll need a Qdrant cluster and:

  • An API Key
  • Your Qdrant URL

To create an API key and set up the credential:

  1. Go to the Cloud Dashboard.
  2. Select Access Management to display available API keys (or go to the API Keys section of the Cluster detail page).
  3. Select Create.
  4. Select the cluster you want the key to have access to in the dropdown.
  5. Select OK.
  6. Copy the API Key and enter it in your n8n credential.
  7. Enter the URL for your Qdrant cluster in the Qdrant URL. Refer to Qdrant Web UI for more information.

Refer to Qdrant's authentication documentation for more information on creating and using API keys.


WhatsApp Trigger node

URL: llms-txt#whatsapp-trigger-node

Contents:

  • Events
  • Related resources
  • Common issues
    • Workflow only works in testing or production

Use the WhatsApp Trigger node to respond to events in WhatsApp and integrate WhatsApp with other applications. n8n has built-in support for a wide range of WhatsApp events, including account, message, and phone number events.

On this page, you'll find a list of events the WhatsApp Trigger node can respond to, and links to more resources.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's WhatsApp integrations page.

  • Account Review Update
  • Account Update
  • Business Capability Update
  • Message Template Quality Update
  • Message Template Status Update
  • Messages
  • Phone Number Name Update
  • Phone Number Quality Update
  • Security
  • Template Category Update

n8n provides an app node for WhatsApp. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to WhatsApp's documentation for details about their API.

Here are some common errors and issues with the WhatsApp Trigger node and steps to resolve or troubleshoot them.

Workflow only works in testing or production

WhatsApp only allows you to register a single webhook per app. This means that every time you switch from using the testing URL to the production URL (and vice versa), WhatsApp overwrites the registered webhook URL.

You may have trouble with this if you try to test a workflow that's also active in production. WhatsApp will only send events to one of the two webhook URLs, so the other will never receive event notifications.

To work around this, you can disable your workflow when testing:

Halts production traffic

This workaround temporarily disables your production workflow for testing. Your workflow will no longer receive production traffic while it's deactivated.

  1. Go to your workflow page.
  2. Toggle the Active switch in the top panel to disable the workflow temporarily.
  3. Test your workflow using the test webhook URL.
  4. When you finish testing, toggle the switch back to Active to re-enable the workflow. The production webhook URL should resume working.

MailerLite Trigger node

URL: llms-txt#mailerlite-trigger-node

Contents:

  • Events

MailerLite is an email marketing solution that provides you with a user-friendly content editor, simplified subscriber management, and campaign reports with the most important statistics.

On this page, you'll find a list of events the MailerLite Trigger node can respond to and links to more resources.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's MailerLite Trigger integrations page.

  • Campaign Sent
  • Subscriber Added to Group
  • Subscriber Automation Completed
  • Subscriber Automation Triggered
  • Subscriber Bounced
  • Subscriber Created
  • Subscriber Complained
  • Subscriber Removed from Group
  • Subscriber Unsubscribe
  • Subscriber Updated

7. Scheduling the Workflow

URL: llms-txt#7.-scheduling-the-workflow

Contents:

  • Remove the Manual Trigger node
  • Add the Schedule Trigger node
  • Connect the Schedule Trigger node
  • What's next?

In this step, you will learn how to schedule your workflow so that it runs automatically at a set time or interval using the Schedule Trigger node. After this step, your workflow should look like this:

View workflow file

The workflow you've built so far executes only when you click on Execute Workflow. But Nathan needs it to run automatically every Monday morning. You can do this with the Schedule Trigger, which allows you to schedule workflows to run periodically at fixed dates, times, or intervals.

To achieve this, we'll remove the Manual Trigger node we started with and replace it with a Schedule Trigger node instead.

Remove the Manual Trigger node

First, let's remove the Manual Trigger node:

  1. Select the Manual Trigger node connected to your HTTP Request node.
  2. Select the trash can icon to delete.

This removes the Manual Trigger node and you'll see an "Add first step" option.

Add the Schedule Trigger node

  1. Open the nodes panel and search for Schedule Trigger.
  2. Select it when it appears in the search results.

In the Schedule Trigger node window, configure these parameters:

  • Trigger Interval: Select Weeks.
  • Weeks Between Triggers: Enter 1.
  • Trigger on weekdays: Select Monday (and remove Sunday if added by default).
  • Trigger at Hour: Select 9am.
  • Trigger at Minute: Enter 0.

Your Schedule Trigger node should look like this:

Schedule Trigger Node

To ensure accurate scheduling with the Schedule Trigger node, be sure to set the correct timezone for your n8n instance or the workflow's settings. The Schedule Trigger node will use the workflow's timezone if it's set; it will fall back to the n8n instance's timezone if it's not.
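
If you self-host n8n, one way to set the instance timezone is the GENERIC_TIMEZONE environment variable; a minimal sketch (the timezone value is just an example):

    export GENERIC_TIMEZONE="Europe/Berlin"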

Connect the Schedule Trigger node

Return to the canvas and connect your Schedule Trigger node to the HTTP Request node by dragging a connection from its output to the HTTP Request node's input.

Your full workflow should look like this:

View workflow file

You 👩‍🔧: That was it for the workflow! I've added and configured all necessary nodes. Now every time you click on Execute workflow, n8n will execute all the nodes: getting, filtering, calculating, and transferring the sales data.

Nathan 🙋: This is just what I needed! My workflow will run automatically every Monday morning, correct?

You 👩‍🔧: Not so fast. To do that, you need to activate your workflow. I'll do this in the next step and show you how to interpret the execution log.


Iterable credentials

URL: llms-txt#iterable-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create an Iterable account.

Supported authentication methods

Refer to Iterable's API documentation for more information about the service:

To configure this credential, you'll need:


Execute Command node common issues

URL: llms-txt#execute-command-node-common-issues

Contents:

  • Command failed: /bin/sh: : not found
  • Error: stdout maxBuffer length exceeded

Here are some common errors and issues with the Execute Command node and steps to resolve or troubleshoot them.

Command failed: /bin/sh: : not found

This error occurs when the shell environment can't find one of the commands in the Command parameter.

To fix this error, review the following:

  • Check that the command and its arguments don't have typos in the Command parameter.

  • Check that the command is in the PATH of the user running n8n.

  • If you are running n8n with Docker, check if the command is available within the container by trying to run it manually. If your command isn't included in the container, you might have to extend the official n8n image with a custom image that includes your command.

    • If n8n is already running, try running the command inside the existing container (see Example 1 below).
    • If n8n isn't running, start a temporary container that runs the command instead of n8n (see Example 2 below).

Error: stdout maxBuffer length exceeded

This error happens when your command returns more output than the Execute Command node is able to process at one time.

To avoid this error, reduce the output your command produces. Check your command's manual page or documentation to see if there are flags to limit or filter output. If not, you may need to pipe the output to another command to remove unneeded information.
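
For example, a hypothetical sketch of trimming or filtering output before it reaches n8n (the command and pattern are placeholders):

    # Keep only the first 100 lines of output
    some-command | head -n 100
    # Or keep only the lines you actually need
    some-command | grep "ERROR"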

Examples:

Example 1 (unknown):

# Find n8n's container ID, it will be the first column
    docker ps | grep n8n
    # Try to execute the command within the running container
    docker container exec <container_ID> <command_to_run>

Example 2 (unknown):

# Start up a new container that runs the command instead of n8n
    # Use the same image and tag that you use to run n8n normally
    docker run -it --rm --entrypoint /bin/sh docker.n8n.io/n8nio/n8n -c <command_to_run>

Google Workspace Admin node

URL: llms-txt#google-workspace-admin-node

Contents:

  • Operations
  • Templates and examples
  • How to control which custom fields to fetch for a user

Use the Google Workspace Admin node to automate work in Google Workspace Admin, and integrate Google Workspace Admin with other applications. n8n has built-in support for a wide range of Google Workspace Admin features, including creating, updating, deleting, and getting users, groups, and ChromeOS devices.

On this page, you'll find a list of operations the Google Workspace Admin node supports and links to more resources.

Refer to Google credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • ChromeOS Device
    • Get a ChromeOS device
    • Get many ChromeOS devices
    • Update a ChromeOS device
    • Change the status of a ChromeOS device
  • Group
    • Create a group
    • Delete a group
    • Get a group
    • Get many groups
    • Update a group
  • User
    • Add an existing user to a group
    • Create a user
    • Delete a user
    • Get a user
    • Get many users
    • Remove a user from a group
    • Update a user

Templates and examples

Browse Google Workspace Admin integration templates, or search all templates

How to control which custom fields to fetch for a user

There are three different ways to control which custom fields to retrieve when getting a user's information. Use the Custom Fields parameter to select one of the following:

  • Don't Include: Doesn't include any custom fields.
  • Custom: Includes the custom fields from schemas in Custom Schema Names or IDs.
  • Include All: Includes all the fields associated with the user.

To include custom fields, follow these steps:

  1. Select Custom from the Custom Fields dropdown list.
  2. Select the schema names you want to include in the Custom Schema Names or IDs dropdown list.

What you can do

URL: llms-txt#what-you-can-do

Contents:

  • All users
  • Self-hosted users
    • GDPR for self-hosted users

It's also your responsibility as a customer to ensure you are securing your code and data. This document lists some steps you can take.

If you self-host n8n, there are additional steps you can take:

  • Set up a reverse proxy to handle TLS, ensuring data is encrypted in transit.
  • Ensure data is encrypted at rest by using encrypted partitions or encryption at the hardware level, and ensuring n8n and its database are written to that location.
  • Run a Security audit.
  • Be aware of the Risks when installing community nodes, or choose to disable them.
  • Make sure users can't import external modules in the Code node. Refer to Environment variables | Nodes for more information.
  • Choose to exclude certain nodes. For example, you can disable nodes like Execute Command or SSH (see the sketch after this list). Refer to Environment variables | Nodes for more information.
  • For maximum privacy, you can Isolate n8n.
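
For example, a minimal sketch of excluding the Execute Command and SSH nodes with the NODES_EXCLUDE environment variable described in Environment variables | Nodes (treat the exact node type names as an assumption to verify against that page):

    export NODES_EXCLUDE='["n8n-nodes-base.executeCommand","n8n-nodes-base.ssh"]'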

GDPR for self-hosted users

If you self-host n8n, you are responsible for deleting user data. If you need to delete data on behalf of one of your users, you can delete the respective execution. n8n recommends configuring n8n to prune execution data automatically every few days to minimize the effort of handling GDPR deletion requests. Configure this using the EXECUTIONS_DATA_MAX_AGE environment variable. Refer to Environment variables for more information.
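
For example, a minimal sketch of enabling automatic pruning, assuming the related EXECUTIONS_DATA_PRUNE variable (which turns pruning on) and an age expressed in hours:

    # Assumption: EXECUTIONS_DATA_PRUNE enables pruning; EXECUTIONS_DATA_MAX_AGE is the maximum age in hours
    export EXECUTIONS_DATA_PRUNE=true
    export EXECUTIONS_DATA_MAX_AGE=168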


MISP node

URL: llms-txt#misp-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the MISP node to automate work in MISP, and integrate MISP with other applications. n8n has built-in support for a wide range of MISP features, including creating, updating, deleting and getting events, feeds, and organizations.

On this page, you'll find a list of operations the MISP node supports and links to more resources.

Refer to MISP credentials for guidance on setting up authentication.

  • Attribute
    • Create
    • Delete
    • Get
    • Get All
    • Search
    • Update
  • Event
    • Create
    • Delete
    • Get
    • Get All
    • Publish
    • Search
    • Unpublish
    • Update
  • Event Tag
    • Add
    • Remove
  • Feed
    • Create
    • Disable
    • Enable
    • Get
    • Get All
    • Update
  • Galaxy
    • Delete
    • Get
    • Get All
  • Noticelist
    • Get
    • Get All
  • Object
    • Search
  • Organisation
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • Tag
    • Create
    • Delete
    • Get All
    • Update
  • User
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • Warninglist
    • Get
    • Get All

Templates and examples

Browse MISP integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Webflow node

URL: llms-txt#webflow-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Webflow node to automate work in Webflow, and integrate Webflow with other applications. n8n has built-in support for a wide range of Webflow features, including creating, updating, deleting, and getting items.

On this page, you'll find a list of operations the Webflow node supports and links to more resources.

Refer to Webflow credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Item
    • Create
    • Delete
    • Get
    • Get All
    • Update

Templates and examples

Enrich FAQ sections on your website pages at scale with AI

View template details

Sync blog posts from Notion to Webflow

View template details

Real-time lead routing in Webflow

View template details

Browse Webflow integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Workable Trigger node

URL: llms-txt#workable-trigger-node

Contents:

  • Events
  • Related resources

Use the Workable Trigger node to respond to events in the Workable recruiting platform and integrate Workable with other applications. n8n has built-in support for a wide range of Workable events, including candidate created and moved.

On this page, you'll find a list of events the Workable Trigger node can respond to and links to more resources.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Workable Trigger integrations page.

  • Candidate Created
  • Candidate Moved

View example workflows and related content on n8n's website.

Refer to Workable's API documentation for details about using the service.


LangChain concepts in n8n

URL: llms-txt#langchain-concepts-in-n8n

Contents:

  • Trigger nodes
  • Cluster nodes
    • Root nodes
    • Sub-nodes

This page explains how LangChain concepts and features map to n8n nodes.

This page includes lists of the LangChain-focused nodes in n8n. You can use any n8n node in a workflow where you interact with LangChain, to link LangChain to other services. The LangChain features use n8n's Cluster nodes.

n8n implements LangChain JS

This feature is n8n's implementation of LangChain's JavaScript framework.

Chat Trigger

Cluster nodes are node groups that work together to provide functionality in an n8n workflow. Instead of using a single node, you use a root node and one or more sub-nodes that extend the functionality of the node.

Each cluster starts with one root node.

A chain is a series of LLMs, and related tools, linked together to support functionality that can't be provided by a single LLM alone.

Learn more about chaining in LangChain.

An agent has access to a suite of tools, and determines which ones to use depending on the user input. Agents can use multiple tools, and use the output of one tool as the input to the next. Source

Learn more about Agents in LangChain.

Vector stores store embedded data, and perform vector searches on it.

Learn more about Vector stores in LangChain.

LangChain Code: lets you import LangChain code directly. This means that if there's functionality you need that n8n hasn't created a node for, you can still use it.

Each root node can have one or more sub-nodes attached to it.

Document loaders

Document loaders add data to your chain as documents. The data source can be a file or web service.

Learn more about Document loaders in LangChain.

LLMs (large language models) are programs that analyze datasets. They're the key element of working with AI.

Learn more about Language models in LangChain.

Memory retains information about previous queries in a series of queries. For example, when a user interacts with a chat model, it's useful if your application can remember and call on the full conversation, not just the most recent query entered by the user.

Learn more about Memory in LangChain.

Output parsers take the text generated by an LLM and format it to match the structure you require.

Learn more about Output parsers in LangChain.

Text splitters break down data (documents), making it easier for the LLM to process the information and return accurate results.

n8n's text splitter nodes implement parts of LangChain's text_splitter API.

Utility tools.

Embeddings capture the "relatedness" of text, images, video, or other types of information. (source)

Learn more about Text embeddings in LangChain.


ConvertAPI credentials

URL: llms-txt#convertapi-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API Token

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Supported authentication methods

Refer to ConvertAPI's API documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need a ConvertAPI account and:

  • An API Token to authenticate requests to the service.

Refer to ConvertAPI's API documentation for more information about authenticating to the service.


Structured Output Parser node common issues

URL: llms-txt#structured-output-parser-node-common-issues

Contents:

  • Processing parameters
  • Adding the structured output parser node to AI nodes
  • Using the structured output parser to format intermediary steps
  • Structuring output from agents

Here are some common errors and issues with the Structured Output Parser node and steps to resolve or troubleshoot them.

Processing parameters

The Structured Output Parser node is a sub-node. Sub-nodes behave differently than other nodes when processing multiple items using expressions.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Adding the structured output parser node to AI nodes

You can attach output parser nodes to select AI root nodes.

To add the Structured Output Parser to a node, enable the Require Specific Output Format option in the AI root node you wish to format. Once the option is enabled, a new output parser attachment point is displayed. Click the output parser attachment point to add the Structured Output Parser node to the node.

Using the structured output parser to format intermediary steps

The Structured Output Parser node structures the final output from AI agents. It's not intended to structure intermediary output to pass to other AI tools or stages.

To request a specific format for intermediary output, include the response structure in the System Message for the AI Agent. The message can include either a schema or example response for the agent to use as a template for its results.

Structuring output from agents

Structured output parsing is often not reliable when working with agents.

If your workflow uses agents, n8n recommends using a separate LLM-chain to receive the data from the agent and parse it. This leads to better, more consistent results than parsing directly in the agent workflow.


External storage

URL: llms-txt#external-storage

Contents:

  • Storing n8n's binary data in S3

    • Setup
    • Usage
  • Available on Self-hosted Enterprise plans

  • If you want access to this feature on Cloud Enterprise, contact n8n.

n8n can store binary data produced by workflow executions externally. This feature is useful to avoid relying on the filesystem for storing large amounts of binary data.

n8n will introduce external storage for other data types in the future.

Storing n8n's binary data in S3

n8n supports AWS S3 as an external store for binary data produced by workflow executions. You can use other S3-compatible services like Cloudflare R2 and Backblaze B2, but n8n doesn't officially support these.

Enterprise-tier feature

You will need an Enterprise license key for external storage. If your license key expires and you remain on S3 mode, the instance will be able to read from, but not write to, the S3 bucket.

Create and configure a bucket following the AWS documentation. You can use the following policy, replacing <bucket-name> with the name of the bucket you created:

Set a bucket-level lifecycle configuration so that S3 automatically deletes old binary data. n8n delegates pruning of binary data to S3, so setting a lifecycle configuration is required unless you want to preserve binary data indefinitely.

Once you finish creating the bucket, you will have a host, bucket name and region, and an access key ID and secret access key. You need to set them in n8n's environment:

If your provider doesn't require a region, you can set N8N_EXTERNAL_STORAGE_S3_BUCKET_REGION to 'auto'.

Tell n8n to store binary data in S3:

To automatically detect credentials to authenticate your S3 calls, set N8N_EXTERNAL_STORAGE_S3_AUTH_AUTO_DETECT to true. This will use the default credential provider chain.

Restart the server to load the new configuration.

After you enable S3, n8n writes and reads any new binary data to and from the S3 bucket. n8n writes binary data to your S3 bucket in this format:

n8n continues to read older binary data from the filesystem, as long as filesystem remains listed as an option in N8N_AVAILABLE_BINARY_DATA_MODES.

If you store binary data in S3 and later switch to filesystem mode, the instance continues to read any data stored in S3, as long as s3 remains listed in N8N_AVAILABLE_BINARY_DATA_MODES and your S3 credentials remain valid.

Binary data pruning operates on the active binary data mode. For example, if your instance stored data in S3, and you later switched to filesystem mode, n8n only prunes binary data in the filesystem. This may change in the future.

Examples:

Example 1 (unknown):

{
 "Version": "2012-10-17",
 "Statement": [
  {
   "Sid": "VisualEditor0",
   "Effect": "Allow",
   "Action": ["s3:*"],
   "Resource": ["arn:aws:s3:::<bucket-name>", "arn:aws:s3:::<bucket-name>/*"]
  }
 ]
}

Example 2 (unknown):

export N8N_EXTERNAL_STORAGE_S3_HOST=... # example: s3.us-east-1.amazonaws.com
export N8N_EXTERNAL_STORAGE_S3_BUCKET_NAME=...
export N8N_EXTERNAL_STORAGE_S3_BUCKET_REGION=...
export N8N_EXTERNAL_STORAGE_S3_ACCESS_KEY=...
export N8N_EXTERNAL_STORAGE_S3_ACCESS_SECRET=...

Example 3 (unknown):

export N8N_AVAILABLE_BINARY_DATA_MODES=filesystem,s3
export N8N_DEFAULT_BINARY_DATA_MODE=s3

Example 4 (unknown):

workflows/{workflowId}/executions/{executionId}/binary_data/{binaryFileId}

Set up Single Sign-On (SSO)

URL: llms-txt#set-up-single-sign-on-(sso)

  • Available on Enterprise plans.
  • You need to be an instance owner or admin to enable and configure SAML or OIDC.

n8n supports the SAML and OIDC authentication protocols for single sign-on (SSO). See OIDC vs SAML for more general information on the two protocols, the differences between them, and their respective benefits.

  • Set up SAML: a general guide to setting up SAML in n8n, and links to resources for common identity providers (IdPs).
  • Set up OIDC: a general guide to setting up OpenID Connect (OIDC) SSO in n8n.

Paddle node

URL: llms-txt#paddle-node

Contents:

  • Operations
  • Templates and examples

Use the Paddle node to automate work in Paddle, and integrate Paddle with other applications. n8n has built-in support for a wide range of Paddle features, including creating, updating, and getting coupons, as well as getting plans, products, and users.

On this page, you'll find a list of operations the Paddle node supports and links to more resources.

Refer to Paddle credentials for guidance on setting up authentication.

  • Coupon
    • Create a coupon.
    • Get all coupons.
    • Update a coupon.
  • Payment
    • Get all payments.
    • Reschedule a payment.
  • Plan
    • Get a plan.
    • Get all plans.
  • Product
    • Get all products.
  • User
    • Get all users

Templates and examples

Browse Paddle integration templates, or search all templates


Salesmate node

URL: llms-txt#salesmate-node

Contents:

  • Operations
  • Templates and examples

Use the Salesmate node to automate work in Salesmate, and integrate Salesmate with other applications. n8n has built-in support for a wide range of Salesmate features, including creating, updating, deleting, and getting activities, companies, and deals.

On this page, you'll find a list of operations the Salesmate node supports and links to more resources.

Refer to Salesmate credentials for guidance on setting up authentication.

  • Activity
    • Create an activity
    • Delete an activity
    • Get an activity
    • Get all activities
    • Update an activity
  • Company
    • Create a company
    • Delete a company
    • Get a company
    • Get all companies
    • Update a company
  • Deal
    • Create a deal
    • Delete a deal
    • Get a deal
    • Get all deals
    • Update a deal

Templates and examples

Browse Salesmate integration templates, or search all templates


Configure self-hosted n8n for user management

URL: llms-txt#configure-self-hosted-n8n-for-user-management

Contents:

  • Setup
    • Step one: SMTP
    • Step two: In-app setup
    • Step three: Invite users

User management in n8n allows you to invite people to work in your n8n instance.

This document describes how to configure your n8n instance to support user management, and the steps to start inviting users.

Refer to the main User management guide for more information about usage, including:

For LDAP setup information, refer to LDAP.

For SAML setup information, refer to SAML.

Basic auth and JWT removed

n8n removed support for basic auth and JWT in version 1.0.

There are three stages to set up user management in n8n:

  1. Configure your n8n instance to use your SMTP server.
  2. Start n8n and follow the setup steps in the app.
  3. Invite users.

n8n recommends setting up an SMTP server for user invites and password resets.

Optional from 0.210.1

From version 0.210.1 onward, this step is optional. You can choose to manually copy and send invite links instead of setting up SMTP. Note that if you skip this step, users can't reset passwords.

Get the following information from your SMTP provider:

  • Server name
  • SMTP username
  • SMTP password
  • SMTP sender name

To set up SMTP with n8n, configure the SMTP environment variables for your n8n instance. For information on how to set environment variables, refer to Configuration

| Variable | Type | Description | Required? |
| --- | --- | --- | --- |
| N8N_EMAIL_MODE | string | smtp | Required |
| N8N_SMTP_HOST | string | your_SMTP_server_name | Required |
| N8N_SMTP_PORT | number | your_SMTP_server_port. Default is 465. | Optional |
| N8N_SMTP_USER | string | your_SMTP_username | Optional |
| N8N_SMTP_PASS | string | your_SMTP_password | Optional |
| N8N_SMTP_OAUTH_SERVICE_CLIENT | string | your_OAuth_service_client | Optional |
| N8N_SMTP_OAUTH_PRIVATE_KEY | string | your_OAuth_private_key | Optional |
| N8N_SMTP_SENDER | string | Sender email address. You can optionally include the sender name. Example with name: N8N <contact@n8n.com> | Required |
| N8N_SMTP_SSL | boolean | Whether to use SSL for SMTP (true) or not (false). Defaults to true. | Optional |
| N8N_UM_EMAIL_TEMPLATES_INVITE | string | Full path to your HTML email template. This overrides the default template for invite emails. | Optional |
| N8N_UM_EMAIL_TEMPLATES_PWRESET | string | Full path to your HTML email template. This overrides the default template for password reset emails. | Optional |
| N8N_UM_EMAIL_TEMPLATES_WORKFLOW_SHARED | string | Overrides the default HTML template for notifying users that a workflow was shared. Provide the full path to the template. | Optional |
| N8N_UM_EMAIL_TEMPLATES_CREDENTIALS_SHARED | string | Overrides the default HTML template for notifying users that a credential was shared. Provide the full path to the template. | Optional |
| N8N_UM_EMAIL_TEMPLATES_PROJECT_SHARED | string | Overrides the default HTML template for notifying users that a project was shared. Provide the full path to the template. | Optional |
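
For example, a minimal sketch of setting the required variables in a shell before starting n8n (the host, username, and password values are placeholders for your provider's details):

    export N8N_EMAIL_MODE=smtp
    export N8N_SMTP_HOST=smtp.example.com
    export N8N_SMTP_PORT=465
    export N8N_SMTP_USER=your_smtp_username
    export N8N_SMTP_PASS=your_smtp_password
    export N8N_SMTP_SENDER="N8N <contact@n8n.com>"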

If your n8n instance is already running, you need to restart it to enable the new SMTP settings.

More configuration options

There are more configuration options available as environment variables. Refer to Environment variables for a list. These include options to disable tags, workflow templates, and the personalization survey, if you don't want your users to see them.

If you're not familiar with SMTP, this blog post by SendGrid offers a short introduction, while Wikipedia's Simple Mail Transfer Protocol article provides more detailed technical background.

Step two: In-app setup

When you set up user management for the first time, you create an owner account.

  1. Open n8n. The app displays a signup screen.
  2. Enter your details. Your password must be at least eight characters, including at least one number and one capital letter.
  3. Click Next. n8n logs you in with your new owner account.

Step three: Invite users

You can now invite other people to your n8n instance.

  1. Sign in to your workspace with your owner account. (If you're in the Admin Panel, open your Workspace from the Dashboard.)
  2. Click the three dots next to your user icon at the bottom left and click Settings. n8n opens your Personal settings page.
  3. Click Users to go to the Users page.
  4. Click Invite.
  5. Enter the new user's email address.
  6. Click Invite user. n8n sends an email with a link for the new user to join.

Humantic AI credentials

URL: llms-txt#humantic-ai-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Humantic AI account.

You can also get a free trial API key from the Humantic AI API page.

Supported authentication methods

Refer to Humantic AI's API documentation for more information about the service.

To configure this credential, you'll need:


Troubleshooting and errors

URL: llms-txt#troubleshooting-and-errors

Contents:

  • Error: Missing packages
  • Prevent loading community nodes on n8n cloud

Error: Missing packages

n8n installs community nodes directly onto the hard disk. The files must be available at startup for n8n to load them. If the packages aren't available at startup, you get an error warning of missing packages.

If running n8n using Docker: depending on your Docker setup, you may lose the packages when you recreate your container or upgrade your n8n version. You must either:

  • Persist the contents of the ~/.n8n/nodes directory. This is the best option. If you follow the Docker installation guide, the setup steps include persisting this directory.
  • Set the N8N_REINSTALL_MISSING_PACKAGES environment variable to true.

The second option might increase startup time and may cause health checks to fail.
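
For example, if you go with the second option, a minimal sketch of setting the variable in a shell (with Docker, pass it with -e or add it to your compose file instead):

    export N8N_REINSTALL_MISSING_PACKAGES=true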

Prevent loading community nodes on n8n cloud

If your n8n cloud instance crashes and fails to start, you can prevent installed community nodes from loading on instance startup. Visit the Cloud Admin Panel > Manage and toggle Disable all community nodes to true. This toggle is only visible when you allow community node installation.


Hosting n8n on Azure

URL: llms-txt#hosting-n8n-on-azure

Contents:

  • Prerequisites
  • Hosting options
  • Open the Azure Kubernetes Service
  • Create a cluster
  • Set Kubectl context
  • Clone configuration repository
  • Configure Postgres
    • Configure volume for persistent storage
    • Postgres environment variables
  • Configure n8n

This hosting guide shows you how to self-host n8n on Azure. It runs n8n with Postgres as the database backend, using Kubernetes to manage the necessary resources and reverse proxy.

You need the Azure command line tool.

Self-hosting knowledge prerequisites

Self-hosting n8n requires technical knowledge, including:

  • Setting up and configuring servers and containers
  • Managing application resources and scaling
  • Securing servers and applications
  • Configuring n8n

n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends n8n Cloud.

Latest and Next versions

n8n releases a new minor version most weeks. The latest version is for production use. next is the most recent release. You should treat next as a beta: it may be unstable. To report issues, use the forum.

Current latest: 1.118.2
Current next: 1.119.0

Azure offers several services suitable for hosting n8n, including Azure Container Instances (optimized for running containers), Linux Virtual Machines, and Azure Kubernetes Service (containers running with Kubernetes).

This guide uses the Azure Kubernetes Service (AKS) as the hosting option. Using Kubernetes requires some additional complexity and configuration, but is the best method for scaling n8n as demand changes.

The steps in this guide use a mix of the Azure UI and command line tool, but you can use either to accomplish most tasks.

Open the Azure Kubernetes Service

From the Azure portal select Kubernetes services.

From the Kubernetes services page, select Create > Create a Kubernetes cluster.

You can select any of the configuration options that suit your needs, then select Create when done.

Set Kubectl context

The remainder of the steps in this guide require you to set the Azure instance as the Kubectl context. You can find the connection details for a cluster instance by opening its details page and selecting the Connect button. The resulting code snippets show the commands to paste and run in a terminal to switch your local Kubernetes settings to the new cluster.
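
The snippet Azure shows usually amounts to fetching the cluster credentials with the Azure CLI; a sketch with placeholder names:

    az aks get-credentials --resource-group <resource-group-name> --name <cluster-name>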

Clone configuration repository

Kubernetes and n8n require a series of configuration files. You can clone these from this repository. The following steps tell you which file configures what and what you need to change.

Clone the repository with the following command:

And change directory:

Configure Postgres

For larger scale n8n deployments, Postgres provides a more robust database backend than SQLite.

Configure volume for persistent storage

To maintain data between pod restarts, the Postgres deployment needs a persistent volume. The default storage class is suitable for this purpose and is defined in the postgres-claim0-persistentvolumeclaim.yaml manifest.

Specialized storage classes

If you have specialised or higher requirements for storage classes, read more on the options Azure offers in the documentation.

Postgres environment variables

Postgres needs some environment variables set to pass to the application running in the containers.

The example postgres-secret.yaml file contains placeholders you need to replace with your own values. Postgres will use these details when creating the database.

The postgres-deployment.yaml manifest then uses the values from this manifest file to send to the application pods.

Create a volume for file storage

While not essential for running n8n, using persistent volumes is required for:

  • Using nodes that interact with files, such as the binary data node.
  • Persisting manually set n8n encryption keys between restarts. During startup, n8n saves a file containing the key into file storage.

The n8n-claim0-persistentvolumeclaim.yaml manifest creates this, and the n8n Deployment mounts that claim in the volumes section of the n8n-deployment.yaml manifest.

Kubernetes lets you optionally specify the minimum resources application containers need and the limits they can consume. The example YAML files cloned above contain the following in the resources section of the n8n-deployment.yaml file:

This defines a minimum of 250mb per container, a maximum of 500mb, and lets Kubernetes handle CPU. You can change these values to match your own needs. As a guide, here are the resources values for the n8n cloud offerings:

  • Start: 320mb RAM, 10 millicore CPU burstable
  • Pro (10k executions): 640mb RAM, 20 millicore CPU burstable
  • Pro (50k executions): 1280mb RAM, 80 millicore CPU burstable

Optional: Environment variables

You can configure n8n settings and behaviors using environment variables.

Create an n8n-secret.yaml file. Refer to Environment variables for n8n environment variables details.

The two deployment manifests (n8n-deployment.yaml and postgres-deployment.yaml) define the n8n and Postgres applications to Kubernetes.

The manifests define the following:

  • The environment variables to send to each application pod
  • The container image to use
  • Resource consumption limits, set with the resources object
  • The volumes defined earlier, and volumeMounts to define the path in the container where volumes are mounted
  • Scaling and restart policies. The example manifests define one instance of each pod; change this to meet your needs.

The two service manifests (postgres-service.yaml and n8n-service.yaml) expose the Postgres and n8n services to the outside world through the Kubernetes load balancer, on ports 5432 and 5678 respectively.

Send to Kubernetes cluster

Send all the manifests to the cluster with kubectl apply.
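
A minimal sketch, assuming you run kubectl from the kubernetes directory of the cloned repository so it picks up every manifest file:

    kubectl apply -f .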

You may see an error message about not finding an "n8n" namespace, as that resource isn't ready yet. You can run the same command again, or apply the namespace manifest first.
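
A sketch of applying the namespace on its own first, assuming the repository names that manifest namespace.yaml:

    kubectl apply -f namespace.yaml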

n8n typically operates on a subdomain. Create a DNS record with your provider for the subdomain and point it to the IP address of the n8n service. You can find the IP address of the n8n service under the External IP column of the Services & ingresses menu item for the cluster you want to use. You need to add the n8n port, "5678", to the URL.

Static IP addresses with AKS

Read this tutorial for more details on how to use a static IP address with AKS.

Remove the resources created by the manifests with kubectl delete.
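
A sketch, again assuming you run the command from the directory containing the manifests:

    kubectl delete -f .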

Examples:

Example 1 (unknown):

git clone https://github.com/n8n-io/n8n-hosting.git

Example 2 (unknown):

cd n8n-hosting/kubernetes

Example 3 (unknown):

…
volumes:
  - name: n8n-claim0
    persistentVolumeClaim:
      claimName: n8n-claim0
…

Example 4 (unknown):

…
resources:
  requests:
    memory: "250Mi"
  limits:
    memory: "500Mi"
…

Item linking

URL: llms-txt#item-linking

Programmatic-style nodes only

This guidance applies to programmatic-style nodes. If you're using declarative style, n8n handles paired items for you automatically.

Use n8n's item linking to access data from items that precede the current item. n8n needs to know which input item a given output item comes from. If this information is missing, expressions in other nodes may break. As a node developer, you must ensure any items returned by your node support this.

This applies to programmatic nodes (including trigger nodes). You don't need to consider item linking when building a declarative-style node. Refer to Choose your node building approach for more information on node styles.

Start by reading Item linking concepts, which provides a conceptual overview of item linking, and details of the scenarios where n8n can handle the linking automatically.

If you need to handle item linking manually, do this by setting pairedItem on each item your node returns:

Examples:

Example 1 (unknown):

// Use the pairedItem information of the incoming item
newItem = {
	"json": { . . . },
	"pairedItem": {
		"item": item.pairedItem,
		// Optional: choose the input to use
		// Set this if your node combines multiple inputs
		"input": 0
	},
};

// Or set the index manually
newItem = {
	"json": { . . . },
	"pairedItem": {
		"item": i,
		// Optional: choose the input to use
		// Set this if your node combines multiple inputs
		"input": 0
	},
};

Shopify Trigger node

URL: llms-txt#shopify-trigger-node

Shopify is an e-commerce platform that allows users to set up an online store and sell their products.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Shopify Trigger integrations page.


Agile CRM node

URL: llms-txt#agile-crm-node

Contents:

  • Operations
  • Templates and examples

Use the Agile CRM node to automate work in Agile CRM, and integrate Agile CRM with other applications. n8n has built-in support for a wide range of Agile CRM features, including creating, getting, updating, and deleting companies, contacts, and deals.

On this page, you'll find a list of operations the Agile CRM node supports and links to more resources.

Refer to Agile CRM credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Company
    • Create a new company
    • Delete a company
    • Get a company
    • Get all companies
    • Update company properties
  • Contact
    • Create a new contact
    • Delete a contact
    • Get a contact
    • Get all contacts
    • Update contact properties
  • Deal
    • Create a new deal
    • Delete a deal
    • Get a deal
    • Get all deals
    • Update deal properties

Templates and examples

Browse Agile CRM integration templates, or search all templates


Brandfetch credentials

URL: llms-txt#brandfetch-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following node:

Create a Brandfetch developer account.

Supported authentication methods

Refer to Brandfetch's API documentation for more information about the service.

To configure this credential, you'll need:


Workflow 1: Merging data

URL: llms-txt#workflow-1:-merging-data

Nathan's company stores its customer data in Airtable. This data contains information about the customers' ID, country, email, and join date, but lacks data about their respective region and subregion. You need to fill in these last two fields in order to create the reports for regional sales.

To accomplish this task, you first need to make a copy of this table in your Airtable account:

When setting up your Airtable, ensure that the customerSince column is configured as a Date type field with the Include time option enabled. Without this setting, you may encounter errors in step 4 when updating the table.

Next, build a small workflow that merges data from Airtable and a REST Countries API:

  1. Use the Airtable node to list the data in the Airtable table named customers.
  2. Use the HTTP Request node to get data from the REST Countries API at https://restcountries.com/v3.1/all, sending the query parameter named fields with the value name,region,subregion (see the equivalent curl request after this list). This will return data about world countries, split out into separate items.
  3. Use the Merge node to merge data from Airtable and the Countries API by country name, represented as customerCountry in Airtable and name.common in the Countries API, respectively.
  4. Use another Airtable node to update the fields region and subregion in Airtable with the data from the Countries API.
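
For reference, an equivalent request from the command line returns the same country data the HTTP Request node fetches in step 2:

    curl "https://restcountries.com/v3.1/all?fields=name,region,subregion"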

The workflow should look like this:

Workflow 1 for merging data from Airtable and the Countries API

  • How many items does the HTTP Request node return?
  • How many items does the Merge node return?
  • How many unique regions are assigned in the customers table?
  • What's the subregion assigned to the customerID 10?

Set log output to both console and a log file

URL: llms-txt#set-log-output-to-both-console-and-a-log-file

export N8N_LOG_OUTPUT=console,file
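
If you also want to control where the log file is written, a sketch assuming the N8N_LOG_FILE_LOCATION variable (the path is a placeholder):

    export N8N_LOG_FILE_LOCATION=/var/log/n8n/n8n.log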


Google Drive Folder operations

URL: llms-txt#google-drive-folder-operations

Contents:

  • Create a folder
    • Options
  • Delete a folder
    • Options
  • Share a folder
    • Options

Use this operation to create, delete, and share folders in Google Drive. Refer to Google Drive for more information on the Google Drive node itself.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Use this operation to create a new folder in a drive.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.
  • Resource: Select Folder.
  • Operation: Select Create.
  • Folder Name: The name to use for the new folder.
  • Parent Drive: Select From list to choose the drive from the dropdown list, By URL to enter the URL of the drive, or By ID to enter the driveId.
  • Parent Folder: Select From list to choose the folder from the dropdown list, By URL to enter the URL of the folder, or By ID to enter the folderId.

You can find the driveId and folderId by visiting the shared drive or folder in your browser and copying the last URL component: https://drive.google.com/drive/u/1/folders/driveId.

  • Simplify Output: Choose whether to return a simplified version of the response instead of including all fields.
  • Folder Color: The color of the folder as an RGB hex string.

Refer to the Method: files.insert | Google Drive API documentation for more information.

Use this operation to delete a folder from a drive.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.

  • Resource: Select Folder.

  • Operation: Select Delete.

  • Folder: Choose a folder you want to delete.

    • Select From list to choose the folder from the dropdown list, By URL to enter the URL of the folder, or By ID to enter the folderId.
    • You can find the folderId in a Google Drive folder URL: https://drive.google.com/drive/u/0/folders/folderID.
  • Delete Permanently: Choose whether to delete the folder now instead of moving it to the trash.

Refer to the Method: files.delete | Google Drive API documentation for more information.

Use this operation to add sharing permissions to a folder.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.

  • Resource: Select Folder.

  • Operation: Select Share.

  • Folder: Choose the folder you want to share.

    • Select From list to choose the folder from the dropdown list, By URL to enter the URL of the folder, or By ID to enter the folderId.
    • You can find the folderId in a Google Drive folder URL: https://drive.google.com/drive/u/0/folders/folderID.
  • Permissions: The permissions to add to the folder:

    • Role: Select what users can do with the folder. Can be one of Commenter, File Organizer, Organizer, Owner, Reader, Writer.
    • Type: Select the scope of the new permission:
      • User: Grant permission to a specific user, defined by entering their Email Address.
      • Group: Grant permission to a specific group, defined by entering its Email Address.
      • Domain: Grant permission to a complete domain, defined by the Domain.
      • Anyone: Grant permission to anyone. Can optionally Allow File Discovery to make the file discoverable through search.
  • Email Message: A plain text custom message to include in the notification email.

  • Move to New Owners Root: Available when trying to transfer ownership while sharing an item not in a shared drive. When enabled, moves the folder to the new owner's My Drive root folder.

  • Send Notification Email: Whether to send a notification email when sharing to users or groups.

  • Transfer Ownership: Whether to transfer ownership to the specified user and downgrade the current owner to writer permissions.

  • Use Domain Admin Access: Whether to perform the action as a domain administrator.

Refer to the REST Resources: files | Google Drive API documentation for more information.


Light evaluations

URL: llms-txt#light-evaluations

Contents:

  • What are light evaluations?
  • How it works
      1. Create a dataset
      2. Wire the dataset up to your workflow
      3. Write workflow outputs back to dataset
      4. Run evaluation

Available on registered community and paid plans

Light evaluations are available to registered community users and on all paid plans.

What are light evaluations?

When building your workflow, you often want to test it with a handful of examples to get a sense of how it performs and make improvements. At this stage of workflow development, looking over workflow outputs for each example is often enough. The benefits of setting up more formal scoring or metrics don't yet justify the effort.

Light evaluation allows you to run the examples in a test dataset through your workflow one-by-one, writing the outputs back to your dataset. You can then examine those outputs next to each other, and visually compare them to the expected outputs (if you have them).

Credentials for Google Sheets

Evaluations use data tables or Google Sheets to store the test dataset. To use Google Sheets as a dataset source, configure a Google Sheets credential.

Light evaluations take place in the 'Editor' tab of your workflow, although you'll find instructions on how to set it up in the 'Evaluations' tab.

  1. Create a dataset
  2. Wire the dataset up to the workflow
  3. Write workflow outputs back to dataset
  4. Run evaluation

The following explanation will use a sample workflow that assigns a category and priority to incoming support tickets.

1. Create a dataset

Create a data table or Google Sheet with a handful of examples for your workflow. Your dataset should contain columns for:

  • The workflow input
  • (Optional) The expected or correct workflow output
  • The actual output

Leave the actual output column or columns blank, since you'll be filling them during the evaluation.

A sample dataset for the support ticket classification workflow.

2. Wire the dataset up to your workflow

Insert an evaluation trigger to pull in your dataset

Each time the evaluation trigger runs, it will output a single item representing one row of your dataset.

Clicking the 'Evaluate all' button to the left of the evaluation trigger will run your workflow multiple times in sequence, once for each row in your dataset. This is a special behavior of the evaluation trigger.

While wiring the trigger up, you often only want to run it once. You can do this by either:

  • Setting the trigger's 'Max rows to process' to 1
  • Clicking on the 'Execute node' button on the trigger (rather than the 'Evaluate all' button)

Wire the trigger up to your workflow

You can now connect the evaluation trigger to the rest of your workflow and reference the data that it outputs. At a minimum, you need to use the dataset's input column(s) later in the workflow.

If you have multiple triggers in your workflow, you will need to merge their branches together.

The support ticket classification workflow with the evaluation trigger added in and wired up.

3. Write workflow outputs back to dataset

To populate the output column(s) of your dataset when the evaluation runs:

  • Insert the 'Set outputs' action of the evaluation node
  • Wire it up to your workflow at a point after it has produced the outputs you're evaluating
  • In the node's parameters, map the workflow outputs into the correct dataset column

The support ticket classification workflow with the 'set outputs' node added in and wired up.

4. Run evaluation

Click on the Execute workflow button to the left of the evaluation trigger. The workflow will execute multiple times, once for each row of the dataset:

Review the outputs of each execution in the data table or Google Sheet, and examine the execution details using the workflow's 'executions' tab if you need to.

Once your dataset grows past a handful of examples, consider metric-based evaluation to get a numerical view of performance. See also tips and common issues.


Merging data

URL: llms-txt#merging-data

Contents:

  • Merge data from different data streams
  • Merge data from different nodes
  • Merge data from multiple node executions
  • Compare, merge, and split again

Merging brings multiple data streams together. You can achieve this using different nodes depending on your workflow requirements.

  • Merge data from different data streams or nodes: Use the Merge node to combine data from various sources into one.
  • Merge data from multiple node executions: Use the Code node for complex scenarios where you need to merge data from multiple executions of a node or multiple nodes.
  • Compare and merge data: Use the Compare Datasets node to compare, merge, and output data streams based on the comparison.

Explore each method in more detail in the sections below.

Merge data from different data streams

If your workflow splits into separate data streams, you can combine them back into a single stream.

Here's an example workflow showing different types of merging: appending data sets, keeping only new items, and keeping only existing items. The Merge node documentation contains details on each of the merge operations.

View template details

Merge data from different nodes

You can use the Merge node to combine data from two previous nodes, even if the workflow hasn't split into separate data streams. This can be useful if you want to generate a single dataset from the data generated by multiple nodes.

Merging data from two previous nodes

Merge data from multiple node executions

Use the Code node to merge data from multiple node executions. This is useful in some Looping scenarios.

Node executions and workflow executions

This section describes merging data from multiple node executions. This is when a node executes multiple times during a single workflow execution.

Refer to this example workflow using Loop Over Items and Wait to artificially create multiple executions.

View template details

Compare, merge, and split again

The Compare Datasets node compares data streams before merging them. It outputs up to four different data streams.

Refer to this example workflow for an example.

View template details


Mattermost node

URL: llms-txt#mattermost-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported
  • Channel ID field error
  • Find the channel ID

Use the Mattermost node to automate work in Mattermost, and integrate Mattermost with other applications. n8n has built-in support for a wide range of Mattermost features, including creating, deleting, and getting channels and users, as well as posting messages and adding reactions.

On this page, you'll find a list of operations the Mattermost node supports and links to more resources.

Refer to Mattermost credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Channel
    • Add a user to a channel
    • Create a new channel
    • Soft delete a channel
    • Get a page of members for a channel
    • Restores a soft deleted channel
    • Search for a channel
    • Get statistics for a channel
  • Message
    • Soft delete a post, by marking the post as deleted in the database
    • Post a message into a channel
    • Post an ephemeral message into a channel
  • Reaction
    • Add a reaction to a post.
    • Remove a reaction from a post
    • Get all the reactions to one or more posts
  • User
    • Create a new user
    • Deactivates the user and revokes all its sessions by archiving its user object.
    • Retrieve all users
    • Get a user by email
    • Get a user by ID
    • Invite user to team

Templates and examples

Standup bot (4/4): Worker

View template details

Receive a Mattermost message when a user updates their profile on Facebook

View template details

Send Instagram statistics to Mattermost

View template details

Browse Mattermost integration templates, or search all templates

Refer to Mattermost's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

Channel ID field error

If you're not the System Administrator, you might see an error next to the Channel ID field: there was a problem loading the parameter options from server: "Mattermost error response: You do not have the appropriate permissions."

Ask your system administrator to grant you the post:channel permission.

Find the channel ID

To find the channel ID in Mattermost:

  1. Select the channel from the left sidebar.
  2. Select the channel name at the top.
  3. Select View Info.

Embeddings AWS Bedrock node

URL: llms-txt#embeddings-aws-bedrock-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources

Use the Embeddings AWS Bedrock node to generate embeddings for a given text.

On this page, you'll find the node parameters for the Embeddings AWS Bedrock node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model to use to generate the embedding.

Learn more about available models in the Amazon Bedrock documentation.

Templates and examples

Browse Embeddings AWS Bedrock integration templates, or search all templates

Refer to LangChain's AWS Bedrock embeddings documentation and the AWS Bedrock documentation for more information about AWS Bedrock.

View n8n's Advanced AI documentation.


Navigate to the directory containing your docker compose file

URL: llms-txt#navigate-to-the-directory-containing-your-docker-compose-file

cd </path/to/your/compose/file/directory>


Travis CI node

URL: llms-txt#travis-ci-node

Contents:

  • Operations
  • Templates and examples

Use the Travis CI node to automate work in Travis CI, and integrate Travis CI with other applications. n8n has built-in support for a wide range of Travis CI features, including cancelling and getting builds.

On this page, you'll find a list of operations the Travis CI node supports and links to more resources.

Refer to Travis CI credentials for guidance on setting up authentication.

  • Build
    • Cancel a build
    • Get a build
    • Get all builds
    • Restart a build
    • Trigger a build

Templates and examples

Browse Travis CI integration templates, or search all templates


Programmatic-style parameters

URL: llms-txt#programmatic-style-parameters

Contents:

  • defaultVersion
  • methods and loadOptions
  • version

These are the parameters available for the node base file of programmatic-style nodes.

This document gives short code snippets to help understand the code structure and concepts. For a full walk-through of building a node, including real-world code examples, refer to Build a programmatic-style node.

Programmatic-style nodes also use the execute() method. Refer to Programmatic-style execute method for more information.

Refer to Standard parameters for parameters available to all nodes.

defaultVersion

Number | Optional

Use defaultVersion when using the full versioning approach.

n8n supports two methods of node versioning. Refer to Node versioning for more information.
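
For example, a fully versioned node's base description might look like this (a minimal sketch; the node name and fields shown are illustrative):

const baseDescription: INodeTypeBaseDescription = {
	displayName: 'FriendGrid',
	name: 'friendGrid',
	group: ['transform'],
	description: 'Consume the FriendGrid API',
	// The version n8n opens when a user adds a new instance of the node
	defaultVersion: 2,
};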

methods and loadOptions

Object | Optional

Contains the loadOptions method for programmatic-style nodes. You can use this method to query the service to get user-specific settings (such as getting a user's email labels from Gmail), then return them and render them in the GUI so the user can include them in subsequent queries.

For example, n8n's Gmail node uses loadOptions to get all email labels, as shown in Example 1 below.

version

Number or Array | Optional

Use version when using the light versioning approach.

If you have one version of your node, this can be a number. If you want to support multiple versions, turn this into an array, containing numbers for each node version.

n8n supports two methods of node versioning. Programmatic-style nodes can use either. Refer to Node versioning for more information.
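
For example, in the node's description object (a minimal sketch):

// Light versioning, single version
version: 1,

// Light versioning, several versions handled by the same node class
version: [1, 1.1, 2],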

Examples:

Example 1 (unknown):

methods = {
		loadOptions: {
			// Get all the labels and display them
			async getLabels(
				this: ILoadOptionsFunctions,
			): Promise<INodePropertyOptions[]> {
				const returnData: INodePropertyOptions[] = [];
				const labels = await googleApiRequestAllItems.call(
					this,
					'labels',
					'GET',
					'/gmail/v1/users/me/labels',
				);
				for (const label of labels) {
					const labelName = label.name;
					const labelId = label.id;
					returnData.push({
						name: labelName,
						value: labelId,
					});
				}
				return returnData;
			},
		},
	};

Zulip credentials

URL: llms-txt#zulip-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Zulip account.

Supported authentication methods

Refer to Zulip's API documentation for more information about the service.

To configure this credential, you'll need:

  • A URL: Enter the URL of your Zulip domain.
  • An Email address: Enter the email address you use to log in to Zulip.
  • An API Key: Get your API key in the Gear cog > Personal Settings > Account & privacy > API Key. Refer to API Keys for more information.

Beeminder node

URL: llms-txt#beeminder-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Beeminder node to automate work in Beeminder, and integrate Beeminder with other applications. n8n has built-in support for a wide range of Beeminder features, including creating, deleting, and updating data points.

On this page, you'll find a list of operations the Beeminder node supports and links to more resources.

Refer to Beeminder credentials for guidance on setting up authentication.

  • Create data point for a goal
  • Delete a data point
  • Get all data points for a goal
  • Update a data point

Templates and examples

Browse Beeminder integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


metrics

URL: llms-txt#metrics


Memory-related errors

URL: llms-txt#memory-related-errors

Contents:

  • Identifying out of memory situations
  • Typical causes
  • Avoiding out of memory situations
    • Increase available memory
    • Reduce memory consumption
    • Increase old memory

n8n doesn't restrict the amount of data each node can fetch and process. While this gives you freedom, it can lead to errors when workflow executions require more memory than available. This page explains how to identify and avoid these errors.

Only for self-hosted n8n

This page describes memory-related errors when self-hosting n8n. Visit Cloud data management to learn about memory limits for n8n Cloud.

Identifying out of memory situations

n8n provides error messages that warn you about some out of memory situations. For example, you may see a message such as Execution stopped at this node (n8n may have run out of memory while executing it).

Error messages including Problem running workflow, Connection Lost, or 503 Service Temporarily Unavailable suggest that an n8n instance has become unavailable.

When self-hosting n8n, you may also see error messages such as Allocation failed - JavaScript heap out of memory in your server logs.

On n8n Cloud, or when using n8n's Docker image, n8n restarts automatically when encountering such an issue. However, when running n8n with npm you might need to restart it manually.

Such problems occur when a workflow execution requires more memory than available to an n8n instance. Factors increasing the memory usage for a workflow execution include:

  • Amount of JSON data.
  • Size of binary data.
  • Number of nodes in a workflow.
  • Some nodes are memory-heavy: the Code node and the older Function node can increase memory consumption significantly.
  • Manual or automatic workflow executions: manual executions increase memory consumption as n8n makes a copy of the data for the frontend.
  • Additional workflows running at the same time.

Avoiding out of memory situations

When encountering an out of memory situation, there are two options: either increase the amount of memory available to n8n or reduce the memory consumption.

Increase available memory

When self-hosting n8n, increasing the amount of memory available to n8n means provisioning your n8n instance with more memory. This may incur additional costs with your hosting provider.

On n8n Cloud, you need to upgrade to a larger plan.

Reduce memory consumption

This approach is more complex and means re-building the workflows causing the issue. This section provides some guidelines on how to reduce memory consumption. Not all suggestions are applicable to all workflows.

  • Split the data processed into smaller chunks. For example, instead of fetching 10,000 rows with each execution, process 200 rows with each execution.
  • Avoid using the Code node where possible.
  • Avoid manual executions when processing larger amounts of data.
  • Split the workflow up into sub-workflows and ensure each sub-workflow returns a limited amount of data to its parent workflow.

Splitting the workflow might seem counter-intuitive at first as it usually requires adding at least two more nodes: the Loop Over Items node to split up the items into smaller batches and the Execute Workflow node to start the sub-workflow.

However, as long as your sub-workflow does the heavy lifting for each batch and then returns only a small result set to the main workflow, this reduces memory consumption. This is because the sub-workflow only holds the data for the current batch in memory, after which the memory is free again.

Increase old memory

This applies to self-hosting n8n. When encountering JavaScript heap out of memory errors, it's often useful to allocate additional memory to the old memory section of the V8 JavaScript engine. To do this, set the appropriate V8 option --max-old-space-size=SIZE either through the CLI or through the NODE_OPTIONS environment variable.
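
For example, to give n8n roughly 4 GB of old space when starting it from the command line (adjust the value to what your server can actually spare):

NODE_OPTIONS="--max-old-space-size=4096" n8n start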


Troubleshooting

URL: llms-txt#troubleshooting

Contents:

  • Credentials
    • Error message: 'Credentials of type "*" aren't known'
  • Editor UI
    • Error message: 'There was a problem loading init data: API-Server can not be reached. It's probably down'
    • Node icon doesn't show up in the Add Node menu and the Editor UI
    • Node icon doesn't fit
    • Node doesn't show up in the Add Node menu
    • Changes to the description properties don't show in the UI on refreshing
    • Linter incorrectly warning about file name case

Error message: 'Credentials of type "*" aren't known'

Check that the name in the node's credentials array matches the name property of the credential class.
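
For example, for a hypothetical FriendGrid node, both strings must be identical (a minimal sketch):

// In the node's description
credentials: [
	{
		name: 'friendGridApi',
		required: true,
	},
],

// In the credential class
export class FriendGridApi implements ICredentialType {
	name = 'friendGridApi'; // must match the entry above
	displayName = 'FriendGrid API';
	properties: INodeProperties[] = [];
}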

Error message: 'There was a problem loading init data: API-Server can not be reached. It's probably down'

  • Check that the names of the node file, node folder, and class match the path added to packages/nodes-base/package.json.
  • Check that the names used in the displayOptions property are names used by UI elements in the node.

Node icon doesn't show up in the Add Node menu and the Editor UI

  • Check that the icon is in the same folder as the node.
  • Check that it's either in PNG or SVG format.
  • When the icon property references the icon file, check that it includes the logo extension (.png or .svg) and that it prefixes it with file:. For example, file:friendGrid.png or file:friendGrid.svg.

Node icon doesn't fit

  • If you use an SVG file, make sure the canvas size is square. You can find instructions to change the canvas size of an SVG file using GIMP here.
  • If you use a PNG file, make sure that it's 60x60 pixels.

Node doesn't show up in the Add Node menu

Check that you registered the node in the package.json file in your project.
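
For a community node package, that registration usually sits under the n8n key in package.json, along these lines (the paths shown are illustrative):

{
	"n8n": {
		"n8nNodesApiVersion": 1,
		"nodes": [
			"dist/nodes/FriendGrid/FriendGrid.node.js"
		]
	}
}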

Changes to the description properties don't show in the UI on refreshing

Every time you change the description properties, you have to stop the current n8n process (ctrl + c) and run it again. You may also need to re-run npm link.

Linter incorrectly warning about file name case

The node linter has rules for file names, including what case they should be. Windows users may encounter an issue when renaming files that causes the linter to continue giving warnings, even after you rename the files. This is due to a known Windows issue with changing case when renaming files.


Update its data

URL: llms-txt#update-its-data

nodeStaticData.lastExecution = new Date().getTime()
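
For context, this line typically sits inside a node's execute method after retrieving the static data object, along these lines (a minimal sketch):

// Get the static data object scoped to this node
const nodeStaticData = this.getWorkflowStaticData('node');

// Record when this run happened so a later run can compare against it
nodeStaticData.lastExecution = new Date().getTime();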


JotForm Trigger node

URL: llms-txt#jotform-trigger-node

JotForm is an online form building service. JotForm's software creates forms with a drag and drop creation tool and an option to encrypt user data.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's JotForm Trigger integrations page.


Allows usage of only crypto and fs

URL: llms-txt#allows-usage-of-only-crypto-and-fs

export NODE_FUNCTION_ALLOW_BUILTIN=crypto,fs
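
With this set, JavaScript in the Code node can then load those modules, for example:

const crypto = require('crypto');
const fs = require('fs');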


LoneScale node

URL: llms-txt#lonescale-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the LoneScale node to automate work in LoneScale and integrate LoneScale with other applications. n8n has built-in support for managing Lists and Items in LoneScale.

On this page, you'll find a list of operations the LoneScale node supports, and links to more resources.

You can find authentication information for this node here.

  • List
    • Create
  • Item
    • Create

Templates and examples

Browse LoneScale integration templates, or search all templates

Refer to LoneScale's documentation for more information about the service.

n8n provides a trigger node for LoneScale. You can find the trigger node docs here.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Block access to nodes

URL: llms-txt#block-access-to-nodes

Contents:

  • Exclude nodes
  • Suggested nodes to block
  • Related resources

For security reasons, you may want to block your users from accessing or working with specific n8n nodes. This is helpful if your users might be untrustworthy.

Use the NODES_EXCLUDE environment variable to prevent your users from accessing specific nodes.

Update your NODES_EXCLUDE environment variable to include an array of strings containing any nodes you want to block your users from using.

For example, setting the variable as shown in Example 1 below blocks the Execute Command and Read/Write Files from Disk nodes.

Your n8n users won't be able to search for or use these nodes.

Suggested nodes to block

The nodes that can pose security risks vary based on your use case and user profile. Here are some nodes you might want to start with:

Refer to Nodes environment variables for more information on this environment variable.

Refer to Configuration for more information on setting environment variables.

Examples:

Example 1 (unknown):

NODES_EXCLUDE: "[\"n8n-nodes-base.executeCommand\", \"n8n-nodes-base.readWriteFile\"]"

Vonage node

URL: llms-txt#vonage-node

Contents:

  • Operations
  • Templates and examples

Use the Vonage node to automate work in Vonage, and integrate Vonage with other applications. n8n supports sending SMS with Vonage.

On this page, you'll find a list of operations the Vonage node supports and links to more resources.

Refer to Vonage credentials for guidance on setting up authentication.

Templates and examples

Receive messages from a topic via Kafka and send an SMS

View template details

Receive messages from a queue via RabbitMQ and send an SMS

View template details

Get data from Hacker News and send to Airtable or via SMS

View template details

Browse Vonage integration templates, or search all templates


Pull latest (stable) version

URL: llms-txt#pull-latest-(stable)-version

docker pull docker.n8n.io/n8nio/n8n


Git and n8n

URL: llms-txt#git-and-n8n

Contents:

  • Git overview
  • Branches: Multiple copies of a project
  • Local and remote: Moving work between your machine and a Git provider
  • Push, pull, and commit

n8n uses Git to provide source control. To use this feature, it helps to have some knowledge of basic Git concepts. n8n doesn't implement all Git functionality: you shouldn't view n8n's source control as full version control.

New to Git and source control?

If you're new to Git, don't panic. You don't need to learn Git to use n8n. This document explains the concepts you need. You do need some Git knowledge to set up the source control, as this involves work in your Git provider.

Familiar with Git and source control?

If you're familiar with Git, don't rely on behaviors matching exactly. In particular, be aware that source control in n8n doesn't support a pull request-style review and merge process, unless you do this outside n8n in your Git provider.

This page introduces the Git concepts and terminology used in n8n. It doesn't cover everything you need to set up and manage a repository. The person doing the Setup should have some familiarity with Git and with their Git hosting provider.

This is a brief introduction

Git is a complex topic. This section provides a brief introduction to the key terms you need when using environments in n8n. If you want to learn about Git in depth, refer to GitHub | Git and GitHub learning resources.

Git is a tool for managing, tracking, and collaborating on multiple versions of documents. It's the basis for widely used platforms such as GitHub and GitLab.

Branches: Multiple copies of a project

Git uses branches to maintain multiple copies of a document alongside each other. Every branch has its own version. A common pattern is to have a main branch, and then everyone who wants to contribute to the project works on their own branch (copy). When they finish their work, their branch is merged back into the main branch.

Local and remote: Moving work between your machine and a Git provider

A common pattern when using Git is to install Git on your own computer, and use a Git provider such as GitHub to work with Git in the cloud. In effect, you have a Git repository (project) on GitHub, and work with copies of it on your local machine.

n8n uses this pattern for source control: you'll work with your workflows on your n8n instance, but send them to your Git provider to store them.

Push, pull, and commit

n8n uses three key Git processes:

  • Push: send work from your instance to Git. This saves a copy of your workflows and tags, as well as credential and variable stubs, to Git. You can choose which workflows you want to save.

  • Pull: get the workflows, tags, and variables from Git and load them into n8n. You will need to populate any credentials or variable stubs included in the refreshed items.

Pulling overwrites your work

If you have made changes to a workflow in n8n, you must push the changes to Git before pulling. When you pull, it overwrites any changes you've made if they aren't stored in Git.

  • Commit: a commit in n8n is a single occurrence of pushing work to Git. In n8n, commit and push happen at the same time.

Refer to Push and pull for detailed information about how n8n interacts with Git.


Examples using n8n's built-in methods and variables

URL: llms-txt#examples-using-n8n's-built-in-methods-and-variables

Contents:

  • Related resources

n8n provides built-in methods and variables for working with data and accessing n8n data. This section provides usage examples.


Figma Trigger (Beta) node

URL: llms-txt#figma-trigger-(beta)-node

Contents:

  • Events

Figma is a prototyping tool which is primarily web-based, with more offline features enabled by desktop applications for macOS and Windows.

Supported Figma Plans

Figma doesn't support webhooks on the free "Starter" plan. Your team needs to be on the "Professional" plan to use this node.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Figma Trigger integrations page.

  • File Commented: Triggers when someone comments on a file.
  • File Deleted: Triggers when someone deletes an individual file, but not when someone deletes an entire folder with all files.
  • File Updated: Triggers when someone saves or deletes a file. A save occurs when someone closes a file within 30 seconds after making changes.
  • File Version Updated: Triggers when someone creates a named version in the version history of a file.
  • Library Publish: Triggers when someone publishes a library file.

Gmail node Label Operations

URL: llms-txt#gmail-node-label-operations

Contents:

  • Create a label
    • Create label options
  • Delete a label
  • Get a label
  • Get Many labels
  • Common issues

Use the Label operations to create, delete, or get a label or list labels in Gmail. Refer to the Gmail node for more information on the Gmail node itself.

Use this operation to create a new label.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Label.
  • Operation: Select Create.
  • Name: Enter a display name for the label.

Create label options

Use these options to further refine the node's behavior:

  • Label List Visibility: Sets the visibility of the label in the label list in the Gmail web interface. Choose from:
    • Hide: Don't show the label in the label list.
    • Show (default): Show the label in the label list.
    • Show if Unread: Show the label if there are any unread messages with that label.
  • Message List Visibility: Sets the visibility of messages with this label in the message list in the Gmail web interface. Choose whether to Show or Hide messages with this label.

Refer to the Gmail API Method: users.labels.create documentation for more information.

Use this operation to delete an existing label.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Label.
  • Operation: Select Delete.
  • Label ID: Enter the ID of the label you want to delete.

Refer to the Gmail API Method: users.labels.delete documentation for more information.

Use this operation to get an existing label.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Label.
  • Operation: Select Get.
  • Label ID: Enter the ID of the label you want to get.

Refer to the Gmail API Method: users.labels.get documentation for more information.

Use this operation to get two or more labels.

Enter these parameters:

  • Select the Credential to connect with or create a new one.
  • Resource: Select Label.
  • Operation: Select Get Many.
  • Return All: Choose whether the node returns all labels (turned on) or only up to a set limit (turned off).
  • Limit: Enter the maximum number of labels to return. Only used if you've turned off Return All.

Refer to the Gmail API Method: users.labels.list documentation for more information.

For common errors or issues and suggested resolution steps, refer to Common Issues.


GitHub credentials

URL: llms-txt#github-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token
    • Generate personal access token
    • Set up the credential
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a GitHub account.

Supported authentication methods

Refer to GitHub's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need a GitHub account.

There are two steps to setting up this credential:

  1. Generate a GitHub personal access token.
  2. Set up the credential.

Refer to the sections below for detailed instructions.

Generate personal access token

Recommended access token type

n8n recommends using a personal access token (classic). GitHub's fine-grained personal access tokens are still in beta and can't access all endpoints.

To generate your personal access token:

  1. If you haven't done so already, verify your email address with GitHub. Refer to Verifying your email address for more information.
  2. Open your GitHub profile Settings.
  3. In the left navigation, select Developer settings.
  4. In the left navigation, under Personal access tokens, select Tokens (classic).
  5. Select Generate new token > Generate new token (classic).
  6. Enter a descriptive name for your token in the Note field, like n8n integration.
  7. Select the Expiration you'd like for the token, or select No expiration.
  8. Select Scopes for your token. For most of the n8n GitHub nodes, add the repo scope.
    • A token without assigned scopes can only access public information.
  9. Select Generate token.
  10. Copy the token.

Refer to Creating a personal access token (classic) for more information. Refer to Scopes for OAuth apps for more information on GitHub scopes.

Set up the credential

Then, in your n8n credential:

  1. If you aren't using GitHub Enterprise Server, don't change the GitHub server URL.
  2. Enter your User name as it appears in your GitHub profile.
  3. Enter the Access Token you generated above.

Using OAuth2

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you're self-hosting n8n, create a new GitHub OAuth app:

  1. Open your GitHub profile Settings.
  2. In the left navigation, select Developer settings.
  3. In the left navigation, select OAuth apps.
  4. Select New OAuth App.
    • If you haven't created an app before, you may see Register a new application instead. Select it.
  5. Enter an Application name, like n8n integration.
  6. Enter the Homepage URL for your app's website.
  7. If you'd like, add the optional Application description, which GitHub displays to end-users.
  8. From n8n, copy the OAuth Redirect URL and paste it into the GitHub Authorization callback URL.
  9. Select Register application.
  10. Copy the Client ID and Client Secret this generates and add them to your n8n credential.

Refer to the GitHub Authorizing OAuth apps documentation for more information on the authorization process.


Tutorial: Create environments with source control

URL: llms-txt#tutorial:-create-environments-with-source-control

Contents:

  • Choose your source control pattern

    • Multiple instances, multiple branches
    • Multiple instances, one branch
  • Set up your repository

  • Connect your n8n instances to your repository

    • Configure Git in n8n
    • Set up a deploy key
    • Connect n8n and configure your instance
  • Push work from development

  • Pull work to production

  • Available on Enterprise.

  • You must be an n8n instance owner or instance admin to enable and configure source control.

  • Instance owners and instance admins can push changes to and pull changes from the connected repository.

  • Project admins can push changes to the connected repository. They can't pull changes from the repository.

This tutorial walks through the process of setting up environments end-to-end. You'll create two environments: development and production. It uses GitHub as the Git provider. The process is similar for other providers.

n8n has built its environments feature on top of Git, a version control software. You link an n8n instance to a Git branch, and use a push-pull pattern to move work between environments. You should have some understanding of environments and Git. If you need more information on these topics, refer to:

Choose your source control pattern

Before setting up source control and environments, you need to plan your environments, and how they relate to Git branches. n8n supports different Branch patterns. For environments, you need to choose between two patterns: multi-instance, multi-branch, or multi-instance, single-branch. This tutorial covers both patterns.

Recommendation: don't push and pull to the same n8n instance

You can push work from an instance to a branch, and pull to the same instance. n8n doesn't recommend this. To reduce the risk of merge conflicts and overwriting work, try to create a process where work goes in one direction: either to Git, or from Git, but not both.

Multiple instances, multiple branches

The advantages of this pattern are:

  • An added safety layer to prevent changes getting into your production environment by mistake. You have to do a pull request in GitHub to copy work between environments.
  • It supports more than two instances.

The disadvantage is more manual steps to copy work between environments.

Multiple instances, one branch

The advantage of this pattern is that work is instantly available to other environments when you push from one instance.

The disadvantages are:

  • If you push by mistake, there is a risk the work will make it into your production instance. If you use a GitHub Action to automate pulls to production, you must either use the multi-instance, multi-branch pattern, or be careful to never push work that you don't want in production.
  • Pushing and pulling to the same instance can cause data loss as changes are overridden when performing these actions. You should set up processes to ensure content flows in one direction.

Set up your repository

Once you've chosen your pattern, you need to set up your GitHub repository.

If you're using the multiple instances, multiple branches pattern:

  1. Create a new repository.
    • Make sure the repository is private, unless you want your workflows, tags, and variable and credential stubs exposed to the internet.
    • Create the new repository with a README so you can immediately create branches.
  2. Create one branch named production and another named development. Refer to Creating and deleting branches within your repository for guidance.

If you're using the multiple instances, one branch pattern:

  1. Create a new repository.
    • Make sure the repository is private, unless you want your workflows, tags, and variable and credential stubs exposed to the internet.
    • Create the new repository with a README. This creates the main branch, which you'll connect to.

Connect your n8n instances to your repository

Create two n8n instances, one for development, one for production.

Configure Git in n8n

  1. Go to Settings > Environments.
  2. Choose your connection method:
    • SSH: In Git repository URL, enter the SSH URL for your repository (for example, git@github.com:username/repo.git).
    • HTTPS: In Git repository URL enter the HTTPS URL for your repository (for example, https://github.com/username/repo.git).
  3. Configure authentication based on your connection method:
    • For SSH: n8n supports ED25519 and RSA public key algorithms. ED25519 is the default. Select RSA under SSH Key if your git host requires RSA. Copy the SSH key.
    • For HTTPS: Enter your credentials:
      • Username: Your Git provider username.
      • Token: Your Personal Access Token (PAT) from your Git provider.

Set up a deploy key

Set up SSH access by creating a deploy key for the repository using the SSH key from n8n. The key must have write access. Refer to GitHub | Managing deploy keys for guidance.

Connect n8n and configure your instance

If you're using the multiple instances, multiple branches pattern:

  1. In Settings > Environments in n8n, select Connect. n8n connects to your Git repository.

  2. Under Instance settings, choose which branch you want to use for the current n8n instance. Connect the production branch to the production instance, and the development branch to the development instance.

  3. Production instance only: select Protected instance to prevent users editing workflows in this instance.

  4. Select Save settings.

If you're using the multiple instances, one branch pattern:

  1. In Settings > Environments in n8n, select Connect. n8n connects to your Git repository.

  2. Under Instance settings, select the main branch.

  3. Production instance only: select Protected instance to prevent users editing workflows in this instance.

  4. Select Save settings.

Push work from development

In your development instance, create a few workflows, tags, variables, and credentials.

  1. Select Push in the main menu.

Pull and push buttons when menu is closed

Pull and push buttons when menu is open

  2. In the Commit and push changes modal, select which workflows you want to push. You can filter by status (new, modified, deleted) and search for workflows. n8n automatically pushes tags, and variable and credential stubs.

  3. Enter a commit message. This should be a one-sentence description of the changes you're making.

  4. Select Commit and Push. n8n sends the work to Git, and displays a success message on completion.

Pull work to production

Your work is now in GitHub. If you're using a multi-branch setup, it's on the development branch. If you chose the single-branch setup, it's on main.

If you're using the multiple instances, multiple branches pattern:

  1. In GitHub, create a pull request to merge development into production.
  2. Merge the pull request.
  3. In your production instance, select Pull in the main menu.

If you're using the multiple instances, one branch pattern, your work is already on main: in your production instance, select Pull in the main menu.

Pull and push buttons when menu is closed

Pull and push buttons when menu is open

Optional: Use a GitHub Action to automate pulls

If you want to avoid logging in to your production instance to pull, you can use a GitHub Action and the n8n API to automatically pull every time you push new work to your production or main branch.

A GitHub Action example:

Examples:

Example 1 (unknown):

name: CI
on:
  # Trigger the workflow on push or pull request events for the "production" branch
  push:
    branches: [ "production" ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
jobs:
  run-pull:
    runs-on: ubuntu-latest
    steps:
      - name: PULL
        # Use GitHub secrets to protect sensitive information
        run: >
          curl --location '${{ secrets.INSTANCE_URL }}/version-control/pull' --header
          'Content-Type: application/json' --header 'X-N8N-API-KEY: ${{ secrets.INSTANCE_API_KEY }}'

execution

URL: llms-txt#execution

Contents:

  • execution.id
  • execution.resumeUrl
  • execution.customData

Contains the unique ID of the current workflow execution.

execution.resumeUrl

The webhook URL to call to resume a waiting workflow.

See the Wait > On webhook call documentation to learn more.

execution.resumeUrl is available in workflows containing a Wait node, along with a node that waits for a webhook response.
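
For example, you could capture it in a Code node and include it in a notification so someone can resume the workflow later (a minimal sketch):

let resumeUrl = $execution.resumeUrl;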

execution.customData

This is only available in the Code node.

Examples:

Example 1 (unknown):

let executionId = $execution.id;

Example 2 (unknown):

executionId = _execution.id

Example 3 (unknown):

// Set a single piece of custom execution data
$execution.customData.set("key", "value");

// Set the custom execution data object
$execution.customData.setAll({"key1": "value1", "key2": "value2"})

// Access the current state of the object during the execution
var customData = $execution.customData.getAll()

// Access a specific value set during this execution
var customData = $execution.customData.get("key")

Text Classifier node

URL: llms-txt#text-classifier-node

Contents:

  • Node parameters
  • Node options
  • Related resources

Use the Text Classifier node to classify (categorize) incoming data. Using the categories provided in the parameters (see below), each item is passed to the model to determine its category.

On this page, you'll find the node parameters for the Text Classifier node, and links to more resources.

  • Input Prompt defines the input to classify. This is usually an expression that references a field from the input items. For example, this could be {{ $json.chatInput }} if the input is a chat trigger. By default it references the text field.

  • Categories: Add the categories that you want to classify your input as. Categories have a name and a description. Use the description to tell the model what the category means. This is important if the meaning isn't obvious. You can add as many categories as you like.

  • Allow Multiple Classes To Be True: You can configure the classifier to always output a single class per item (turned off), or allow the model to select multiple classes (turned on).

  • When No Clear Match: Define what happens if the model can't find a good match for an item. There are two options:

    • Discard Item (the default): If the node doesn't detect any of the categories, it drops the item.
    • Output on Extra, 'Other' Branch: Creates a separate output branch called Other. When the node doesn't detect any of the categories, it outputs items in this branch.
  • System Prompt Template: Use this option to change the system prompt that's used for the classification. It uses the {categories} placeholder for the categories.

  • Enable Auto-Fixing: When enabled, the node automatically fixes model outputs to ensure they match the expected format. It does this by sending the schema parsing error to the LLM and asking it to fix it.

View n8n's Advanced AI documentation.


NocoDB node

URL: llms-txt#nocodb-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the NocoDB node to automate work in NocoDB, and integrate NocoDB with other applications. n8n has built-in support for a wide range of NocoDB features, including creating, updating, deleting, and retrieving rows.

On this page, you'll find a list of operations the NocoDB node supports and links to more resources.

Refer to NocoDB credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Row
    • Create
    • Delete
    • Get
    • Get Many
    • Update a row

Templates and examples

Scrape and summarize posts of a news site without RSS feed using AI and save them to a NocoDB

View template details

Multilanguage Telegram bot

View template details

Create LinkedIn Contributions with AI and Notify Users On Slack

View template details

Browse NocoDB integration templates, or search all templates

Refer to NocoDB's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


WooCommerce credentials

URL: llms-txt#woocommerce-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key
  • Resolve "Consumer key is missing" error

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to WooCommerce's REST API documentation for more information about the service.

To configure this credential, you'll need:

  • A Consumer Key: Created when you generate an API key.
  • A Consumer Secret: Created when you generate an API key.
  • A WooCommerce URL

To generate an API key and set up your credential:

  1. Go to WooCommerce > Settings > Advanced > REST API > Add key.
  2. Select Read/Write from the Permissions dropdown.
  3. Copy the generated Consumer Key and Consumer Secret and enter them into your n8n credentials.
  4. Enter your WordPress site URL as the WooCommerce URL.
  5. By default, n8n passes your credential details in the Authorization header. If you need to pass them as query string parameters instead, turn on Include Credentials in Query.

Refer to Generate Keys for more information.

Resolve "Consumer key is missing" error

When you try to connect your credentials, you may receive an error like this: Consumer key is missing.

This occurs when the server can't parse the Authorization header details when authenticating over SSL.

To resolve it, turn on the Include Credentials in Query toggle to pass the consumer key/secret as query string parameters instead and retry the credential.


ConvertKit node

URL: llms-txt#convertkit-node

Contents:

  • Operations
  • Templates and examples

Use the ConvertKit node to automate work in ConvertKit, and integrate ConvertKit with other applications. n8n has built-in support for a wide range of ConvertKit features, including creating and deleting custom fields, getting tags, and adding subscribers.

On this page, you'll find a list of operations the ConvertKit node supports and links to more resources.

Refer to ConvertKit credentials for guidance on setting up authentication.

  • Custom Field
    • Create a field
    • Delete a field
    • Get all fields
    • Update a field
  • Form
    • Add a subscriber
    • Get all forms
    • List subscriptions to a form including subscriber data
  • Sequence
    • Add a subscriber
    • Get all sequences
    • Get all subscriptions to a sequence including subscriber data
  • Tag
    • Create a tag
    • Get all tags
  • Tag Subscriber
    • Add a tag to a subscriber
    • List subscriptions to a tag including subscriber data
    • Delete a tag from a subscriber

Templates and examples

Enrich lead captured by ConvertKit and save it in Hubspot

by Ricardo Espinozaas

View template details

Manage subscribers in ConvertKit

View template details

Receive updates on a subscriber added in ConvertKit

View template details

Browse ConvertKit integration templates, or search all templates


Facebook Lead Ads credentials

URL: llms-txt#facebook-lead-ads-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Facebook Lead Ads' documentation for more information about the service.

View example workflows and related content on n8n's website.

To configure this credential, you'll need a Meta for Developers account and:

  • A Client ID
  • A Client Secret

To get both, create a Meta app with either the Facebook Login product or the Facebook Login for Business product.

To create your app and set up the credential with Facebook Login for Business:

  1. Go to the Meta Developer App Dashboard and select Create App.
  2. If you have a business portfolio and you're ready to connect the app to it, select the business portfolio. If you don't have a business portfolio or you're not ready to connect the app to the portfolio, select I don't want to connect a business portfolio yet and select Next. The Use cases page opens.
  3. Select Other, then select Next.
  4. Select Business and Next.
  5. Complete the essential information:
    • Add an App name.
    • Add an App contact email.
    • Here again you can connect to a business portfolio or skip it.
  6. Select Create app. The Add products to your app page opens.
  7. Select Facebook Login for Business. The Settings page for this product opens.
  8. Copy the OAuth Redirect URL from your n8n credential.
  9. In your Meta app settings in Client OAuth settings, paste that URL as the Valid OAuth Redirect URIs.
  10. Select App settings > Basic from the left menu.
  11. Copy the App ID and enter it as the Client ID within your n8n credential.
  12. Copy the App Secret and enter it as the Client Secret within your n8n credential.

Your credential should successfully connect now, but you'll need to go through the steps to take your Meta app live before you can use it with the Facebook Lead Ads trigger. Here's a summary of what you'll need to do:

  1. In your Meta app, select App settings > Basic from the left menu.
  2. Enter a Privacy Policy URL. (Required to take the app "Live.")
  3. Select Save changes.
  4. At the top of the page, toggle the App Mode from Development to Live.
  5. Facebook Login for Business requires Advanced Access for public_profile. To add it, go to App Review > Permissions and Features.
  6. Search for public_profile and select Request advanced access.
  7. Complete the steps for business verification.
  8. Use the Lead Ads Testing Tool to trigger some demo form submissions and test your workflow.

Refer to Meta's Create an app documentation for more information on creating an app, required fields like the Privacy Policy URL, and adding products.

For more information on the app modes and switching to Live mode, refer to App Modes and Publish | App Types.


How can you contribute?

URL: llms-txt#how-can-you-contribute?

Contents:

  • Share some love: Review us
  • Help out the community
  • Contribute a workflow template
  • Build a node
  • Contribute to the code
  • Contribute to the docs
  • Contribute to community tutorials
    • How to submit a post
  • Refer a candidate

There are several ways in which you can contribute to n8n, depending on your skills and interests. Each form of contribution is valuable to us!

Share some love: Review us

Help out the community

You can participate in the forum and help the community members out with their questions.

When sharing workflows in the community forum for debugging, use code blocks. Use triple backticks (```) to wrap the workflow JSON in a code block.

The following video demonstrates the steps of sharing workflows on the community forum:

Contribute a workflow template

You can submit your workflows to n8n's template library.

n8n is working on a creator program, and developing a marketplace of templates. This is an ongoing project, and details are likely to change.

Refer to n8n Creator hub for information on how to submit templates and become a creator.

Build a node

Create an integration for a third party service. Check out the node creation docs for guidance on how to create and publish a community node.

Contribute to the code

There are different ways in which you can contribute to the n8n code base:

  • Fix issues reported on GitHub. The CONTRIBUTING guide will help you get your development environment ready in minutes.
  • Add additional functionality to an existing third party integration.
  • Add a new feature to n8n.

Contribute to the docs

You can contribute to the n8n documentation, for example by documenting nodes or fixing issues.

The repository for the docs is here and the guidelines for contributing to the docs are here.

Contribute to community tutorials

Share your own video or written guides on our community-driven, searchable library of n8n tutorials and training materials. Tag them for easy discovery, and post in your language's subcategory. Follow the contribution guidelines to help keep our growing library high-quality and accessible to everyone.

How to submit a post

n8n appreciates all contributions. Publishing a tutorial on your own site that supports the community is a great contribution. If you want n8n to highlight your post on the blog, follow these steps:

  1. Email your idea to marketing@n8n.io with the subject "Blog contribution: [Your Topic]."
  2. Submit your draft:
    • Write your post in a Google Doc following the style guide.
    • If your blog post includes example workflows, include the workflow JSON in a separate section at the end.
    • For author credit, provide a second Google Doc with your full name, a short byline, and your image. n8n will use this to create your author page and credit you as the author of the post.
  3. Wait for feedback. We will respond if your draft fits with the blog's strategy and requirements. If you don't hear back within 30 days, it means we won't be moving forward with your blog post.

Refer a candidate

Do you know someone who would be a great fit for one of our open positions? Refer them to us! In return, we'll pay you €1,000 when the referral successfully passes their probationary period.

Here's how this works:

  1. Search: Have a look at the description and requirements of each role, and consider if someone you know would be a great fit.
  2. Referral: Once you've identified a potential candidate, send an email to Jobs at n8n with the subject line Employee referral - [job title] and a short description of the person you're referring (and the reason why). Also, tell your referral to apply for the job through our careers page.
  3. Evaluation: We'll screen the application and inform you about the next steps of the hiring process.
  4. Reward: As soon as your referral has successfully finished the probationary period, we'll reward you for your efforts by transferring the €1,000 to your bank account.

Wufoo Trigger node

URL: llms-txt#wufoo-trigger-node

Wufoo is an online form builder that helps you create custom HTML forms without writing code.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Wufoo Trigger integrations page.


The top level domain to serve from

URL: llms-txt#the-top-level-domain-to-serve-from

DOMAIN_NAME=example.com


Mattermost credentials

URL: llms-txt#mattermost-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API access token
  • Enable personal access tokens

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Mattermost's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need a Mattermost account and:

  • A personal Access Token
  • Your Mattermost Base URL

  1. In Mattermost, go to Profile > Security > Personal Access Tokens.

No Personal Access Tokens option

If you don't see the Personal Access Tokens option, refer to the troubleshooting steps in Enable personal access tokens below.

  2. Select Create Token.

  3. Enter a Token description, like n8n integration.

  4. Copy the Token ID and enter it as the Access Token in your n8n credential.

  5. Enter your Mattermost URL as the Base URL.

  6. By default, n8n connects only if SSL certificate validation succeeds. To connect even if SSL certificate validation fails, turn on Ignore SSL Issues.

Refer to the Mattermost Personal access tokens documentation for more information.

Enable personal access tokens

Not seeing the Personal Access Tokens option has two possible causes:

  • Mattermost doesn't have the personal access tokens integration enabled.
  • You're trying to generate a personal access token as a non-admin user who doesn't have permission to generate personal access tokens.

To identify the root cause and resolve it:

  1. Log in to Mattermost as an admin.
  2. Go to System Console > Integrations > Integration Management.
  3. Confirm that Enable personal access tokens is set to true. If it's not, change it to true.
  4. Go to System Console > User Management > Users.
  5. Search for the user account you want to allow to generate personal access tokens.
  6. Select the Actions dropdown for the user and select Manage roles.
  7. Check the box for Allow this account to generate personal access tokens and Save.

Refer to the Mattermost Personal access tokens documentation for more information.


SSE Trigger node

URL: llms-txt#sse-trigger-node

Contents:

  • Node parameters
  • Templates and examples

Server-Sent Events (SSE) is a server push technology enabling a client to receive automatic updates from a server using HTTP connection. The SSE Trigger node is used to receive server-sent events.

The SSE Trigger node has one parameter, the URL. Enter the URL from which to receive the server-sent events (SSE).

Templates and examples

Browse SSE Trigger integration templates, or search all templates


Choose your n8n

URL: llms-txt#choose-your-n8n

Contents:

  • Platforms
  • Licenses
  • Free versions
  • Paid versions

This section contains information on n8n's range of platforms, pricing plans, and licenses.

There are different ways to set up n8n depending on how you intend to use it:

  • n8n Cloud: hosted solution, no need to install anything.
  • Self-host: recommended method for production or customized use cases.
  • Embed: n8n Embed allows you to white label n8n and build it into your own product. Contact n8n on the Embed website for pricing and support.

Self-hosting knowledge prerequisites

Self-hosting n8n requires technical knowledge, including:

  • Setting up and configuring servers and containers
  • Managing application resources and scaling
  • Securing servers and applications
  • Configuring n8n

n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends n8n Cloud.

n8n's Sustainable Use License and n8n Enterprise License are based on the fair-code model.

For a detailed explanation of the license, refer to Sustainable Use License.

n8n offers the following free options:

  • A free trial of Cloud
  • A free self-hosted community edition for self-hosted users

n8n has two paid versions:

  • n8n Cloud: choose from a range of paid plans to suit your usage and feature needs.
  • Self-hosted: there are both free and paid versions of self-hosted.

For details of the Cloud plans and contact details for Enterprise Self-hosted, refer to Pricing on the n8n website.


Facebook Lead Ads Trigger node

URL: llms-txt#facebook-lead-ads-trigger-node

Contents:

  • Events
  • Related resources
  • Common issues
    • Workflow only works in testing or production

Use the Facebook Lead Ads Trigger node to respond to events in Facebook Lead Ads and integrate Facebook Lead Ads with other applications. n8n has built-in support for responding to new leads.

On this page, you'll find a list of events the Facebook Lead Ads Trigger node can respond to, and links to more resources.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Facebook Lead Ads Trigger integrations page.

View example workflows and related content on n8n's website.

Refer to Facebook Lead Ads' documentation for details about their API.

Here are some common errors and issues with the Facebook Lead Ads Trigger node and steps to resolve or troubleshoot them.

Workflow only works in testing or production

Facebook Lead Ads only allows you to register a single webhook per app. This means that every time you switch from using the testing URL to the production URL (and vice versa), Facebook Lead Ads overwrites the registered webhook URL.

You may have trouble with this if you try to test a workflow that's also active in production. Facebook Lead Ads will only send events to one of the two webhook URLs, so the other will never receive event notifications.

To work around this, you can disable your workflow when testing:

Halts production traffic

This workaround temporarily disables your production workflow for testing. Your workflow will no longer receive production traffic while it's deactivated.

  1. Go to your workflow page.
  2. Toggle the Active switch in the top panel to disable the workflow temporarily.
  3. Test your workflow using the test webhook URL.
  4. When you finish testing, toggle the switch back to Active to enable the workflow again. The production webhook URL should resume working.

Cloud free trial

URL: llms-txt#cloud-free-trial

Contents:

  • Upgrade to a paid account
  • Trial expiration
    • Cancelling your trial
  • Enterprise trial

When you create a new n8n cloud trial, you have 14 days to try all the features of the Pro plan, including:

  • Global variables
  • Insights dashboard
  • Execution search
  • 5 days of workflow history to roll back to

The trial gives you Pro plan features with limits of 1000 executions and the same computing power as the Starter plan.

Upgrade to a paid account

You can upgrade to a paid n8n account at any time. To upgrade:

  1. Log in to your account.
  2. Click the Upgrade button in the upper-right corner.
  3. Select your plan and whether to pay annually or by the month.
  4. Select a payment method.

If you don't upgrade by the end of your trial, the trial will automatically expire and your workspace will be deleted.

Download your workflows

You can download your workflows to reuse them later. You have 90 days to download your workflows after your free trial ends.

Cancelling your trial

You don't need to cancel your trial. Your trial will automatically expire at the end of the trial period and no charges will occur. All your data will be deleted soon after.

You can contact the sales team if you want to test the Enterprise plan, which includes features such as:

  • SSO SAML and LDAP
  • Different environments
  • External secret store integration
  • Log streaming
  • Version control using Git

Click the Contact button on the n8n website.


Todoist node

URL: llms-txt#todoist-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Todoist node to automate work in Todoist, and integrate Todoist with other applications. n8n has built-in support for a wide range of Todoist features, including creating, updating, deleting, and getting tasks.

On this page, you'll find a list of operations the Todoist node supports and links to more resources.

Refer to Todoist credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Task
    • Create a new task
    • Close a task
    • Delete a task
    • Get a task
    • Get all tasks
    • Reopen a task
    • Update a task

Templates and examples

Realtime Notion Todoist 2-way Sync with Redis

View template details

Sync tasks automatically from Todoist to Notion

View template details

Effortless Task Management: Create Todoist Tasks Directly from Telegram with AI

View template details

Browse Todoist integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Google Analytics node

URL: llms-txt#google-analytics-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Google Analytics node to automate work in Google Analytics, and integrate Google Analytics with other applications. n8n has built-in support for a wide range of Google Analytics features, including returning reports and user activities.

On this page, you'll find a list of operations the Google Analytics node supports and links to more resources.

Refer to Google Analytics credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Report
    • Get
  • User Activity
    • Search

Templates and examples

AI marketing report (Google Analytics & Ads, Meta Ads), sent via email/Telegram

by Friedemann Schuetz

View template details

Automate Google Analytics Reporting

View template details

Create a Google Analytics Data Report with AI and sent it to E-Mail and Telegram

by Friedemann Schuetz

View template details

Browse Google Analytics integration templates, or search all templates

Refer to Google Analytics' documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Azure Storage node

URL: llms-txt#azure-storage-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

The Azure Storage node has built-in support for a wide range of features, which includes creating, getting, and deleting blobs and containers. Use this node to automate work within the Azure Storage service or integrate it with other services in your workflow.

On this page, you'll find a list of operations the Azure Storage node supports, and links to more resources.

You can find authentication information for this node here.

  • Blob
    • Create blob: Create a new blob or replace an existing one.
    • Delete blob: Delete an existing blob.
    • Get blob: Retrieve data for a specific blob.
    • Get many blobs: Retrieve a list of blobs.
  • Container
    • Create container: Create a new container.
    • Delete container: Delete an existing container.
    • Get container: Retrieve data for a specific container.
    • Get many containers: Retrieve a list of containers.

Templates and examples

Browse Azure Storage integration templates, or search all templates

Refer to Microsoft's Azure Storage documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Hosting n8n on Hetzner cloud

URL: llms-txt#hosting-n8n-on-hetzner-cloud

Contents:

  • Create a server
  • Log in to your server
  • Install Docker Compose
  • Clone configuration repository
  • Default folders and files
    • Create Docker volume
  • Set up DNS
  • Open ports
  • Configure n8n
  • The Docker Compose file

This hosting guide shows you how to self-host n8n on a Hetzner cloud server. It uses:

  • Caddy (a reverse proxy) to allow access to the server from the internet.
  • Docker Compose to create and define the application components and how they work together.

Self-hosting knowledge prerequisites

Self-hosting n8n requires technical knowledge, including:

  • Setting up and configuring servers and containers
  • Managing application resources and scaling
  • Securing servers and applications
  • Configuring n8n

n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends n8n Cloud.

Latest and Next versions

n8n releases a new minor version most weeks. The latest version is for production use. next is the most recent release. You should treat next as a beta: it may be unstable. To report issues, use the forum.

Current latest: 1.118.2
Current next: 1.119.0

  1. Log in to the Hetzner Cloud Console.
  2. Select the project to host the server, or create a new project by selecting + NEW PROJECT.
  3. Select + CREATE SERVER on the project tile you want to add it to.

You can change most of the settings to suit your needs, but as this guide uses Docker to run the application, under the Image section, select "Docker CE" from the APPS tab.

When creating the server, Hetzner asks you to choose a plan. For most usage levels, the CPX11 type is enough.

Hetzner lets you choose between SSH and password-based authentication. SSH is more secure. The rest of this guide assumes you are using SSH.

Log in to your server

The rest of this guide requires you to log in to the server using a terminal with SSH. Refer to Access with SSH/rsync/BorgBackup for more information. You can find the public IP in the listing of the servers in your project.

Install Docker Compose

The Hetzner Docker app image doesn't have Docker Compose installed. Install it with the following commands:

Clone configuration repository

Docker Compose, n8n, and Caddy require a series of folders and configuration files. You can clone these from this repository into the root user folder of the server. The following steps will tell you which file to change and what changes to make.

Clone the repository with the following command:

And change directory to the root of the repository you cloned:

Default folders and files

The host operating system (the server) shares two folders from the cloned repository with the Docker containers. The two folders are:

  • caddy_config: Holds the Caddy configuration files.
  • local_files: A folder for files you upload or add using n8n.

Create Docker volume

To persist the Caddy cache and speed up start times, create a Docker volume that Docker reuses between restarts:

Create a Docker volume for the n8n data:
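For example (the volume name n8n_data is an assumption here; use whatever volume name the docker-compose.yml in the repository references):

```bash
docker volume create n8n_data
```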

Set up DNS

n8n typically operates on a subdomain. Create a DNS record with your provider for the subdomain and point it to the IP address of the server. The exact steps for this depend on your DNS provider, but typically you need to create a new "A" record for the n8n subdomain. DigitalOcean provides An Introduction to DNS Terminology, Components, and Concepts.

Open ports

n8n runs as a web application, so the server needs to allow incoming traffic on port 80 for non-secure traffic, and port 443 for secure traffic.

Open the following ports in the server's firewall by running the following two commands:
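For example, assuming the server uses ufw to manage its firewall (adjust the commands if you use a different firewall, or manage rules through the Hetzner Cloud firewall instead; prefix with sudo if you aren't logged in as root):

```bash
ufw allow 80
ufw allow 443
```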

Configure n8n

n8n needs some environment variables set to pass to the application running in the Docker container. The example .env file contains placeholders you need to replace with values of your own.

Open the file with the following command:
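For example, assuming you use nano (any text editor on the server works):

```bash
nano .env
```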

The file contains inline comments to help you know what to change.
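As an illustrative sketch, the kinds of values you set include the following. The variable names follow the example .env file in the n8n-docker-caddy repository; the values shown are placeholders, and your copy of the file may contain additional variables:

```bash
# Placeholder values - replace with your own
DOMAIN_NAME=example.com
SUBDOMAIN=n8n
GENERIC_TIMEZONE=Europe/Berlin
SSL_EMAIL=you@example.com
```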

Refer to Environment variables for n8n environment variables details.

The Docker Compose file

The Docker Compose file (docker-compose.yml) defines the services the application needs, in this case Caddy and n8n.

  • The Caddy service definition defines the ports it uses and the local volumes to copy to the containers.
  • The n8n service definition defines the ports it uses, the environment variables n8n needs to run (some defined in the .env file), and the volumes it needs to copy to the containers.

The Docker Compose file uses the environment variables set in the .env file, so you shouldn't need to change its contents. To take a look, run the following command:
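For example, to print the file to the terminal:

```bash
cat docker-compose.yml
```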

Caddy needs to know which domains it should serve, and which port to expose to the outside world. Edit the Caddyfile file in the caddy_config folder.

Change the placeholder subdomain to yours. If you followed the steps to name the subdomain n8n, your full domain is similar to n8n.example.com. The n8n in the reverse_proxy setting tells Caddy to use the service definition defined in the docker-compose.yml file:
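As a minimal sketch, assuming your subdomain is n8n.example.com and the n8n service listens on port 5678 (the repository's Caddyfile may include additional options, so adapt the existing file rather than replacing it wholesale):

```bash
# Sketch only: writes a minimal Caddyfile to the caddy_config folder
cat > caddy_config/Caddyfile <<'EOF'
n8n.example.com {
    reverse_proxy n8n:5678
}
EOF
```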

Start Docker Compose

Start n8n and Caddy with the following command:
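For example, using the Docker Compose plugin you installed earlier, run this from the repository folder:

```bash
docker compose up -d
```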

This may take a few minutes.

In your browser, open the URL formed of the subdomain and domain name defined earlier. Enter the user name and password defined earlier, and you should be able to access n8n.

Stop n8n and Caddy

You can stop n8n and Caddy with the following command:
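For example, from the repository folder:

```bash
docker compose stop
```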

If you run n8n using a Docker Compose file, follow these steps to update n8n:

Examples:

Example 1 (unknown):

apt update && apt -y upgrade
apt install docker-compose-plugin

Example 2 (unknown):

git clone https://github.com/n8n-io/n8n-docker-caddy.git

Example 3 (unknown):

cd n8n-docker-caddy

Example 4 (unknown):

docker volume create caddy_data

Google Gemini node

URL: llms-txt#google-gemini-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Google Gemini node to automate work in Google Gemini and integrate Google Gemini with other applications. n8n has built-in support for a wide range of Google Gemini features, including working with audio, videos, images, documents, and files to analyze, generate, and transcribe.

On this page, you'll find a list of operations the Google Gemini node supports, and links to more resources.

You can find authentication information for this node here.

  • Audio:
    • Analyze Audio: Take in audio and answer questions about it.
    • Transcribe a Recording: Transcribes audio into text.
  • Document:
    • Analyze Document: Take in documents and answer questions about them.
  • File:
    • Upload File: Upload a file to the Google Gemini API for later use.
  • Image:
    • Analyze Image: Take in images and answer questions about them.
    • Generate an Image: Creates an image from a text prompt.
  • Text:
    • Message a Model: Create a completion with a Google Gemini model.
  • Video:
    • Analyze Video: Take in videos and answer questions about them.
    • Generate a Video: Creates a video from a text prompt.
    • Download Video: Download a generated video from the Google Gemini API using a URL.

Templates and examples

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

AI-Powered Social Media Content Generator & Publisher

View template details

Build Your First AI Agent

View template details

Browse Google Gemini integration templates, or search all templates

Refer to Google Gemini's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


OpenAI Chat Model node common issues

URL: llms-txt#openai-chat-model-node-common-issues

Contents:

  • Processing parameters
  • The service is receiving too many requests from you
  • Insufficient quota
  • Bad request - please check your parameters

Here are some common errors and issues with the OpenAI Chat Model node and steps to resolve or troubleshoot them.

Processing parameters

The OpenAI Chat Model node is a sub-node. Sub-nodes behave differently than other nodes when processing multiple items using expressions.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

The service is receiving too many requests from you

This error displays when you've exceeded OpenAI's rate limits.

There are two ways to work around this issue:

  1. Split your data up into smaller chunks using the Loop Over Items node and add a Wait node at the end with a delay long enough to stay under the rate limit. Copy the code below and paste it into a workflow to use as a template.

  2. Use the HTTP Request node with its built-in Batching option to call the OpenAI API instead of using the OpenAI node.

Insufficient quota

There are a number of OpenAI issues surrounding quotas, including failures when quotas have been recently topped up. To avoid these issues, ensure that there is credit in the account and issue a new API key from the API keys screen.

This error displays when your OpenAI account doesn't have enough credits or capacity to fulfill your request. This may mean that your OpenAI trial period has ended, that your account needs more credit, or that you've gone over a usage limit.

To troubleshoot this error, on your OpenAI settings page:

  • Select the correct organization for your API key in the first selector in the upper-left corner.
  • Select the correct project for your API key in the second selector in the upper-left corner.
  • Check the organization-level billing overview page to ensure that the organization has enough credit. Double-check that you select the correct organization for this page.
  • Check the organization-level usage limits page. Double-check that you select the correct organization for this page and scroll to the Usage limits section to verify that you haven't exceeded your organization's usage limits.
  • Check your OpenAI project's usage limits. Double-check that you select the correct project in the second selector in the upper-left corner. Select Project > Limits to view or change the project limits.
  • Check that the OpenAI API is operating as expected.

Balance waiting period

After topping up your balance, there may be a delay before your OpenAI account reflects the new balance.

If you find yourself frequently running out of account credits, consider turning on auto recharge in your OpenAI billing settings to automatically reload your account with credits when your balance reaches $0.

Bad request - please check your parameters

This error displays when the request results in an error but n8n wasn't able to interpret the error message from OpenAI.

To begin troubleshooting, try running the same operation using the HTTP Request node, which should provide a more detailed error message.

Examples:

Example 1 (unknown):

{
       "nodes": [
       {
           "parameters": {},
           "id": "35d05920-ad75-402a-be3c-3277bff7cc67",
           "name": "When clicking Execute workflow",
           "type": "n8n-nodes-base.manualTrigger",
           "typeVersion": 1,
           "position": [
           880,
           400
           ]
       },
       {
           "parameters": {
           "batchSize": 500,
           "options": {}
           },
           "id": "ae9baa80-4cf9-4848-8953-22e1b7187bf6",
           "name": "Loop Over Items",
           "type": "n8n-nodes-base.splitInBatches",
           "typeVersion": 3,
           "position": [
           1120,
           420
           ]
       },
       {
           "parameters": {
           "resource": "chat",
           "options": {},
           "requestOptions": {}
           },
           "id": "a519f271-82dc-4f60-8cfd-533dec580acc",
           "name": "OpenAI",
           "type": "n8n-nodes-base.openAi",
           "typeVersion": 1,
           "position": [
           1380,
           440
           ]
       },
       {
           "parameters": {
           "unit": "minutes"
           },
           "id": "562d9da3-2142-49bc-9b8f-71b0af42b449",
           "name": "Wait",
           "type": "n8n-nodes-base.wait",
           "typeVersion": 1,
           "position": [
           1620,
           440
           ],
           "webhookId": "714ab157-96d1-448f-b7f5-677882b92b13"
       }
       ],
       "connections": {
       "When clicking Execute workflow": {
           "main": [
           [
               {
               "node": "Loop Over Items",
               "type": "main",
               "index": 0
               }
           ]
           ]
       },
       "Loop Over Items": {
           "main": [
           null,
           [
               {
               "node": "OpenAI",
               "type": "main",
               "index": 0
               }
           ]
           ]
       },
       "OpenAI": {
           "main": [
           [
               {
               "node": "Wait",
               "type": "main",
               "index": 0
               }
           ]
           ]
       },
       "Wait": {
           "main": [
           [
               {
               "node": "Loop Over Items",
               "type": "main",
               "index": 0
               }
           ]
           ]
       }
       },
       "pinData": {}
   }

Datadog credentials

URL: llms-txt#datadog-credentials

Contents:

  • Prerequisites
  • Related resources
  • Using API Key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Datadog account.

Refer to Datadog's API documentation for more information about authenticating with the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

  • Your Datadog instance Host
  • An API Key
  • An App Key

Refer to Authentication on Datadog's website for more information.
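For illustration only, one way to confirm your keys work before building a workflow is to call Datadog's key validation endpoint directly. The DD-API-KEY and DD-APPLICATION-KEY header names are Datadog's standard authentication headers; the host shown assumes the default US1 site, so substitute your instance Host if it differs:

```bash
curl -s https://api.datadoghq.com/api/v1/validate \
  -H "DD-API-KEY: <your-api-key>" \
  -H "DD-APPLICATION-KEY: <your-application-key>"
```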


Telegram node Callback operations

URL: llms-txt#telegram-node-callback-operations

Contents:

  • Answer Query
    • Answer Query additional fields
  • Answer Inline Query
    • Answer Inline Query additional fields

Use these operations to respond to callback queries sent from the in-line keyboard or in-line queries. Refer to Telegram for more information on the Telegram node itself.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Use this operation to send answers to callback queries sent from inline keyboards using the Bot API answerCallbackQuery method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Callback.
  • Operation: Select Answer Query.
  • Query ID: Enter the unique identifier of the query you want to answer.
    • To feed a Query ID directly into this node, use the Telegram Trigger node triggered on the Callback Query.
  • Results: Enter a JSON-serialized array of results you want to use as answers to the query. Refer to the Telegram InlineQueryResults documentation for more information on formatting your array.

Refer to the Telegram Bot API answerCallbackQuery documentation for more information.

Answer Query additional fields

Use the Additional Fields to further refine the behavior of the node. Select Add Field to add any of the following:

  • Cache Time: Enter the maximum amount of time in seconds that the client may cache the result of the callback query. Telegram defaults to 0 seconds for this method.
  • Show Alert: Telegram can display the answer as a notification at the top of the chat screen or as an alert. Choose whether you want to keep the default notification display (turned off) or display the answer as an alert (turned on).
  • Text: If you want the answer to show text, enter up to 200 characters of text here.
  • URL: Enter a URL that will be opened by the user's client. Refer to the url parameter instructions at the Telegram Bot API answerCallbackQuery documentation for more information.

Answer Inline Query

Use this operation to send answers to inline queries using the Bot API answerInlineQuery method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Callback.
  • Operation: Select Answer Inline Query.
  • Query ID: Enter the unique identifier of the query you want to answer.
    • To feed a Query ID directly into this node, use the Telegram Trigger node triggered on the Inline Query.
  • Results: Enter a JSON-serialized array of results you want to use as answers to the query. Refer to the Telegram InlineQueryResults documentation for more information on formatting your array.

Telegram allows a maximum of 50 results per query.

Refer to the Telegram Bot API answerInlineQuery documentation for more information.

Answer Inline Query additional fields

Use the Additional Fields to further refine the behavior of the node. Select Add Field to add any of the following:

  • Cache Time: The maximum amount of time in seconds that the result of the inline query may be cached on the server. Telegram defaults to 300 seconds for this method.
  • Show Alert: Telegram can display the answer as a notification at the top of the chat screen or as an alert. Choose whether you want to keep the default notification display (turned off) or display the answer as an alert (turned on).
  • Text: If you want the answer to show text, enter up to 200 characters of text here.
  • URL: Enter a URL that the user's client will open.

Branch patterns

URL: llms-txt#branch-patterns

Contents:

  • Multiple instances, multiple branches
  • Multiple instances, one branch
  • One instance, multiple branches
  • One instance, one branch

The relationship between n8n instances and Git branches is flexible. You can create different setups depending on your needs.

Recommendation: don't push and pull to the same n8n instance

You can push work from an instance to a branch, and pull to the same instance. n8n doesn't recommend this. To reduce the risk of merge conflicts and overwriting work, try to create a process where work goes in one direction: either to Git, or from Git, but not both.

Multiple instances, multiple branches

This pattern involves having multiple n8n instances, each one linked to its own branch.

You can use this pattern for environments. For example, create two n8n instances, development and production. Link them to their own branches. Push work from your development instance to its branch, do a pull request to move work to the production branch, then pull to the production instance.

The advantages of this pattern are:

  • An added safety layer to prevent changes getting into your production environment by mistake. You have to do a pull request in GitHub to copy work between environments.
  • It supports more than two instances.

The disadvantage is more manual steps to copy work between environments.

Multiple instances, one branch

Use this pattern if you want the same workflows, tags, and variables everywhere, but want to use them in different n8n instances.

You can use this pattern for environments. For example, create two n8n instances, development and production. Link them both to the same branch. Push work from development, and pull it into production.

This pattern is also useful when testing a new version of n8n: you can create a new n8n instance with the new version, connect it to the Git branch and test it, while your production instance remains on the older version until you're confident it's safe to upgrade.

The advantage of this pattern is that work is instantly available to other environments when you push from one instance.

The disadvantages are:

  • If you push by mistake, there is a risk the work will make it into your production instance. If you use a GitHub Action to automate pulls to production, you must either use the multi-instance, multi-branch pattern, or be careful to never push work that you don't want in production.
  • Pushing and pulling to the same instance can cause data loss as changes are overridden when performing these actions. You should set up processes to ensure content flows in one direction.

One instance, multiple branches

The instance owner can change which Git branch connects to the instance. The full setup in this case is likely to be a Multiple instances, multiple branches pattern, but with one instance switching between branches.

This is useful to review work. For example, different users could work on their own instance and push to their own branch. The reviewer could work in a review instance, and switch between branches to load work from different users.

n8n doesn't clean up the existing contents of an instance when changing branches. Switching branches in this pattern results in all the workflows from each branch being in your instance.

One instance, one branch

This is the simplest pattern.


Asana credentials

URL: llms-txt#asana-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using Access token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • Access token
  • OAuth2

Refer to Asana's Developer Guides for more information about working with the service.

Using Access token

To configure this credential, you'll need an Asana account and:

  • A Personal Access Token (PAT)

To create a Personal Access Token:

  1. Open the Asana developer console.
  2. In the Personal access tokens section, select Create new token.
  3. Enter a Token name, like n8n integration.
  4. Check the box to agree to the Asana API terms.
  5. Select Create token.
  6. Copy the token and enter it as the Access Token in your n8n credential.

Refer to the Asana Quick start guide for more information.
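If you want to check the token outside n8n first, a PAT works as a standard Bearer token against Asana's API; for example (a sketch only, using Asana's documented users/me endpoint):

```bash
curl -s https://app.asana.com/api/1.0/users/me \
  -H "Authorization: Bearer <your-personal-access-token>"
```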

To configure this credential, you'll need an Asana account.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you're self-hosting n8n, you'll need to register an application to set up OAuth:

  1. Open the Asana developer console.
  2. In the My apps section, select Create new app.
  3. Enter an App name for your application, like n8n integration.
  4. Select a purpose for your app.
  5. Check the box to agree to the Asana API terms.
  6. Select Create app. The page opens to the app's Basic Information.
  7. Select OAuth from the left menu.
  8. In n8n, copy the OAuth Redirect URL.
  9. In Asana, select Add redirect URL and enter the URL you copied from n8n.
  10. Copy the Client ID from Asana and enter it in your n8n credential.
  11. Copy the Client Secret from Asana and enter it in your n8n credential.

Refer to the Asana OAuth register an application documentation for more information.


Home Assistant node

URL: llms-txt#home-assistant-node

Contents:

  • Operations
  • Templates and examples
  • Related resources

Use the Home Assistant node to automate work in Home Assistant, and integrate Home Assistant with other applications. n8n has built-in support for a wide range of Home Assistant features, including getting, creating, and checking camera proxies, configurations, logs, services, and templates.

On this page, you'll find a list of operations the Home Assistant node supports and links to more resources.

Refer to Home Assistant credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Camera Proxy
    • Get the camera screenshot
  • Config
    • Get the configuration
    • Check the configuration
  • Event
    • Create an event
    • Get all events
  • Log
    • Get a log for a specific entity
    • Get all logs
  • Service
    • Call a service within a specific domain
    • Get all services
  • State
    • Create a new record, or update the current one if it already exists (upsert)
    • Get a state for a specific entity
    • Get all states
  • Template
    • Create a template

Templates and examples

Turn on a light to a specific color on any update in GitHub repository

View template details

Birthday and Ephemeris Notification (Google Contact, Telegram & Home Assistant)

View template details

📍 Daily Nearby Garage Sales Alerts via Telegram

View template details

Browse Home Assistant integration templates, or search all templates

Refer to Home Assistant's documentation for more information about the service.


Venafi TLS Protect Datacenter node

URL: llms-txt#venafi-tls-protect-datacenter-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Venafi TLS Protect Datacenter node to automate work in Venafi TLS Protect Datacenter, and integrate Venafi TLS Protect Datacenter with other applications. n8n has built-in support for a wide range of Venafi TLS Protect Datacenter features, including creating, deleting, and getting certificates.

On this page, you'll find a list of operations the Venafi TLS Protect Datacenter node supports and links to more resources.

Refer to Venafi TLS Protect Datacenter credentials for guidance on setting up authentication.

  • Certificate
    • Create
    • Delete
    • Download
    • Get
    • Get Many
    • Renew
  • Policy
    • Get

Templates and examples

Browse Venafi TLS Protect Datacenter integration templates, or search all templates

  • A node and trigger node for Venafi TLS Protect Cloud.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Data editing

URL: llms-txt#data-editing

Contents:

  • Edit output data
  • Use data from previous executions

n8n allows you to edit pinned data. This means you can check different scenarios without setting up each scenario and sending the relevant data from your external system. It makes it easier to test edge cases.

Data editing isn't available for production workflow executions. It's a feature to help test workflows during development.

  1. Run the node to load data.
  2. In the OUTPUT view, select JSON to switch to JSON view.
  3. Select Edit.
  4. Edit your data.
  5. Select Save. n8n saves your data changes and pins your data.

Use data from previous executions

You can copy data from nodes in previous workflow executions:

  1. Open the left menu.
  2. Select Executions.
  3. Browse the workflow executions list to find the one with the data you want to copy.
  4. Select Open Past Execution.
  5. Double click the node whose data you want to copy.
  6. If it's table layout, select JSON to switch to JSON view.
  7. There are two ways to copy the JSON:
  8. Select the JSON you want by highlighting it, like selecting text. Then use ctrl + c to copy it.
  9. Select the JSON you want to copy by clicking on a parameter. Then:
    1. Hover over the JSON. n8n displays the Copy button.
    2. Select Copy.
    3. You can choose what to copy:
      • Copy Item Path and Copy Parameter Path gives you expressions that access parts of the JSON.
      • Copy Value: copies the entire selected JSON.
  10. Return to the workflow you're working on:
    1. Open the left menu.
    2. Select Workflows.
    3. Select Open.
    4. Select the workflow you want to open.
  11. Open the node where you want to use the copied data.
  12. If there is no data, run the node to load data.
  13. In the OUTPUT view, select JSON to switch to JSON view.
  14. Select Edit.
  15. Paste in the data from the previous execution.
  16. Select Save. n8n saves your data changes and pins your data.

Jira Software node

URL: llms-txt#jira-software-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported
  • Related resources
  • Fetch issues for a specific project

Use the Jira Software node to automate work in Jira, and integrate Jira with other applications. n8n has built-in support for a wide range of Jira features, including creating, updating, deleting, and getting issues, and users.

On this page, you'll find a list of operations the Jira Software node supports and links to more resources.

Refer to Jira credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Issue
    • Get issue changelog
    • Create a new issue
    • Delete an issue
    • Get an issue
    • Get all issues
    • Create an email notification for an issue and add it to the mail queue
    • Return either all transitions or a transition that can be performed by the user on an issue, based on the issue's status
    • Update an issue
  • Issue Attachment
    • Add attachment to issue
    • Get an attachment
    • Get all attachments
    • Remove an attachment
  • Issue Comment
    • Add comment to issue
    • Get a comment
    • Get all comments
    • Remove a comment
    • Update a comment
  • User
    • Create a new user.
    • Delete a user.
    • Retrieve a user.

Templates and examples

Automate Customer Support Issue Resolution using AI Text Classifier

View template details

Create a new issue in Jira

View template details

Analyze & Sort Suspicious Email Contents with ChatGPT

View template details

Browse Jira Software integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

Refer to the official JQL documentation about Jira Query Language (JQL) to learn more about it.

Fetch issues for a specific project

The Get All operation returns all the issues from Jira. To fetch issues for a particular project, you need to use Jira Query Language (JQL).

For example, if you want to receive all the issues of a project named n8n, you'd do something like this:

  • Select Get All from the Operation dropdown list.
  • Toggle Return All to true.
  • Select Add Option and select JQL.
  • Enter project=n8n in the JQL field.

This query will fetch all the issues in the project named n8n. Enter the name of your project instead of n8n to fetch all the issues for your project.


Auto-fixing Output Parser node

URL: llms-txt#auto-fixing-output-parser-node

Contents:

  • Templates and examples
  • Related resources

The Auto-fixing Output Parser node wraps another output parser. If the first one fails, it calls out to another LLM to fix any errors.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Templates and examples

Notion AI Assistant Generator

View template details

Proxmox AI Agent with n8n and Generative AI Integration

View template details

Handling Appointment Leads and Follow-up With Twilio, Cal.com and AI

View template details

Browse Auto-fixing Output Parser integration templates, or search all templates

Refer to LangChain's output parser documentation for more information about the service.

View n8n's Advanced AI documentation.


Typeform credentials

URL: llms-txt#typeform-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Typeform's API documentation for more information about the service.

To configure this credential, you'll need a Typeform account and:

  • A personal Access Token

To get your personal access token:

  1. Log into your Typeform account.
  2. Select your profile avatar in the upper right and go to Account > Your settings > Personal Tokens.
  3. Select Generate a new token.
  4. Give your token a Name, like n8n integration.
  5. For Scopes, select Custom scopes. Select these scopes:
    • Forms: Read
    • Webhooks: Read, Write
  6. Select Generate token.
  7. Copy the token and enter it in your n8n credential.

Refer to Typeform's Personal access token documentation for more information.

To configure this credential, you'll need a Typeform account and:

  • A Client ID: Generated when you register an app.
  • A Client Secret: Generated when you register an app.

To get your Client ID and Client Secret, register a new Typeform app:

  1. Log into your Typeform account.
  2. In the upper left, select the dropdown for your organization and select Developer apps.
  3. Select Register a new app.
  4. Enter an App Name that makes sense, like n8n OAuth2 integration.
  5. Enter your n8n base URL as the App website, for example https://n8n-sample.app.n8n.cloud/.
  6. From n8n, copy the OAuth Redirect URL. Enter this in Typeform as the Redirect URI(s).
  7. Select Register app.
  8. Copy the Client Secret and enter it in your n8n credential.
  9. In Typeform, select Got it to close the Client Secret modal.
  10. The Developer apps panel displays your new app. Copy the Client ID and enter it in your n8n credential.
  11. Once you enter both the Client ID and Client Secret in n8n, select Connect my account and follow the on-screen prompts to finish authorizing the app.

Refer to Create applications that integrate with Typeform's APIs for more information.


Magento 2 credentials

URL: llms-txt#magento-2-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following node:

  • Magento 2

  • Create a Magento (Adobe Commerce) account.

  • Set your store to Allow OAuth Access Tokens to be used as standalone Bearer tokens:

    • Go to Admin > Stores > Configuration > Services > OAuth > Consumer Settings.
    • Set the Allow OAuth Access Tokens to be used as standalone Bearer tokens option to Yes.
    • You can also enable this setting from the CLI by running the following command:

This step is necessary until n8n updates the Magento 2 credentials to use OAuth. Refer to Integration Tokens for more information.

Supported authentication methods

Refer to Magento's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • A Host: Enter the address of your Magento store.
  • An Access Token: Get an access token from the Admin Panel:
    1. Go to System > Extensions > Integrations.
    2. Add a new Integration.
    3. Go to the API tab and select the Magento resources you'd like the n8n integration to access.
    4. From the Integrations page, Activate the new integration.
    5. Select Allow to display your access token so you can copy it and enter it in n8n.

Examples:

Example 1 (unknown):

bin/magento config:set oauth/consumer/enable_integration_as_bearer 1

Freshworks CRM credentials

URL: llms-txt#freshworks-crm-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Freshworks CRM account.

Supported authentication methods

Refer to Freshworks CRM's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: Refer to the Freshworks CRM API authentication documentation for detailed instructions on getting your API key.
  • Your Freshworks CRM Domain: Use the subdomain of your Freshworks CRM account. This is part of the URL, for example https://<subdomain>.myfreshworks.com. So if you access Freshworks CRM through https://n8n.myfreshworks.com, enter n8n as your Domain.

pandas==2.2.2

URL: llms-txt#pandas==2.2.2

Contents:

  • Allowlist packages for the Code node
  • Build your custom image
  • Run it

Pin versions (for example, `==2.3.2`) for deterministic builds.

3) Allowlist packages for the Code node

Open `docker/images/runners/n8n-task-runners.json` and add your packages to the env overrides:

{
  "task-runners": [
    {
      "runner-type": "javascript",
      "env-overrides": {
        "NODE_FUNCTION_ALLOW_BUILTIN": "crypto",
        "NODE_FUNCTION_ALLOW_EXTERNAL": "moment,uuid" // <-- add JS packages here
      }
    },
    {
      "runner-type": "python",
      "env-overrides": {
        "PYTHONPATH": "/opt/runners/task-runner-python",
        "N8N_RUNNERS_STDLIB_ALLOW": "json",
        "N8N_RUNNERS_EXTERNAL_ALLOW": "numpy,pandas" // <-- add Python packages here
      }
    }
  ]
}

  • `NODE_FUNCTION_ALLOW_BUILTIN`: comma-separated list of allowed node builtin modules.
  • `NODE_FUNCTION_ALLOW_EXTERNAL`: comma-separated list of allowed JS packages.
  • `N8N_RUNNERS_STDLIB_ALLOW`: comma-separated list of allowed Python standard library packages.
  • `N8N_RUNNERS_EXTERNAL_ALLOW`: comma-separated list of allowed Python packages.

4) Build your custom image

For example, from the n8n repository root:

docker buildx build \
  -f docker/images/runners/Dockerfile \
  -t n8nio/runners:custom \
  .

5) Run it

For example:

docker run --rm -it \
  -e N8N_RUNNERS_AUTH_TOKEN=test \
  -e N8N_RUNNERS_LAUNCHER_LOG_LEVEL=debug \
  -e N8N_RUNNERS_TASK_BROKER_URI=http://host.docker.internal:5679 \
  -p 5680:5680 \
  n8nio/runners:custom

Hardening task runners

URL: llms-txt#hardening-task-runners

Contents:

  • Run task runners as sidecars in external mode

Task runners are responsible for executing code from the Code node. While Code node executions are secure, you can follow these recommendations to further harden your task runners.

Run task runners as sidecars in external mode

To increase the isolation between the core n8n process and code in the Code node, run task runners in external mode. External task runners launch as separate containers, providing a fully isolated environment to execute the JavaScript defined in the Code node.
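A minimal sketch of the environment variables involved, assuming the current task runner variable names (N8N_RUNNERS_MODE in particular; check the task runners documentation for your n8n version before relying on these):

```bash
# On the main n8n container: enable task runners and switch to external mode
export N8N_RUNNERS_ENABLED=true
export N8N_RUNNERS_MODE=external
export N8N_RUNNERS_AUTH_TOKEN=<shared-secret>

# On the runner sidecar container: use the same secret and point at n8n's task broker
export N8N_RUNNERS_AUTH_TOKEN=<shared-secret>
export N8N_RUNNERS_TASK_BROKER_URI=http://<n8n-hostname>:5679
```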


Sort

URL: llms-txt#sort

Contents:

  • Node parameters
    • Simple
    • Random
    • Code
  • Templates and examples
  • Related resources

Use the Sort node to organize lists of items in a desired ordering, or generate a random selection.

The Sort operation uses the default JavaScript operation where the elements to be sorted are converted into strings and their values compared. Refer to Mozilla's guide to Array sort to learn more.

Configure this node using the Type parameter.

Use the dropdown to select how you want to input the sorting from these options.

Performs an ascending or descending sort using the selected fields.

When you select this Type:

  • Use the Add Field To Sort By button to input the Field Name.
  • Select whether to use Ascending or Descending order.

When you select Simple as the Type, you have the option to Disable Dot Notation. By default, n8n enables dot notation to reference child fields in the format parent.child. Turn this option on to disable dot notation, or leave it off to continue using dot notation.

Creates a random order in the list.

Input custom JavaScript code to perform the sort operation. This is a good option if a simple sort won't meet your needs.

Enter your custom JavaScript code in the Code input field.

Templates and examples

Automated Web Scraping: email a CSV, save to Google Sheets & Microsoft Excel

View template details

Transcribing Bank Statements To Markdown Using Gemini Vision AI

View template details

Allow Users to Send a Sequence of Messages to an AI Agent in Telegram

View template details

Browse Sort integration templates, or search all templates

Learn more about data structure and data flow in n8n workflows.


WooCommerce node

URL: llms-txt#woocommerce-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the WooCommerce node to automate work in WooCommerce, and integrate WooCommerce with other applications. n8n has built-in support for a wide range of WooCommerce features, including creating and deleting customers, orders, and products.

On this page, you'll find a list of operations the WooCommerce node supports and links to more resources.

Refer to WooCommerce credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Customer
    • Create a customer
    • Delete a customer
    • Retrieve a customer
    • Retrieve all customers
    • Update a customer
  • Order
    • Create an order
    • Delete an order
    • Get an order
    • Get all orders
    • Update an order
  • Product
    • Create a product
    • Delete a product
    • Get a product
    • Get all products
    • Update a product

Templates and examples

AI-powered WooCommerce Support-Agent

View template details

Personal Shopper Chatbot for WooCommerce with RAG using Google Drive and openAI

View template details

Create, update and get a product from WooCommerce

View template details

Browse WooCommerce integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Affinity node

URL: llms-txt#affinity-node

Contents:

  • Operations
  • Templates and examples

Use the Affinity node to automate work in Affinity, and integrate Affinity with other applications. n8n has built-in support for a wide range of Affinity features, including creating, getting, updating and deleting lists, entries, organization, and persons.

On this page, you'll find a list of operations the Affinity node supports and links to more resources.

Refer to Affinity credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • List
    • Get a list
    • Get all lists
  • List Entry
    • Create a list entry
    • Delete a list entry
    • Get a list entry
    • Get all list entries
  • Organization
    • Create an organization
    • Delete an organization
    • Get an organization
    • Get all organizations
    • Update an organization
  • Person
    • Create a person
    • Delete a person
    • Get a person
    • Get all persons
    • Update a person

Templates and examples

Create an organization in Affinity

View template details

Receive updates when a new list is created in Affinity

View template details

🛠️ Affinity Tool MCP Server 💪 all 16 operations

View template details

Browse Affinity integration templates, or search all templates


Custom API operations

URL: llms-txt#custom-api-operations

Contents:

  • Predefined credential types
    • Using predefined credential types
    • Credential scopes

One of the most complex parts of setting up API calls is managing authentication. n8n provides credentials support for operations and services beyond those supported by built-in nodes.

  • Custom operations for existing nodes: n8n supplies hundreds of nodes to create workflows that link multiple products. However, some nodes don't include all the possible operations supported by a product's API. You can work around this by making a custom API call using the HTTP Request node.
  • Credential-only nodes: n8n includes credential-only nodes. These are integrations where n8n supports setting up credentials for use in the HTTP Request node, but doesn't provide a standalone node. You can find a credential-only node in the nodes panel, as you would for any other integration.

Predefined credential types

A predefined credential type is a credential that already exists in n8n. You can use predefined credential types instead of generic credentials in the HTTP Request node.

For example: you create an Asana credential, for use with the Asana node. Later, you want to perform an operation that isn't supported by the Asana node, using Asana's API. You can use your existing Asana credential in the HTTP Request node to perform the operation, without additional authentication setup.

Using predefined credential types

To use a predefined credential type:

  1. Open your HTTP Request node, or add a new one to your workflow.
  2. In Authentication, select Predefined Credential Type.
  3. In Credential Type, select the API you want to use.
  4. In Credential for <API name>, you can:
    1. Select an existing credential for that platform, if available.
    2. Select Create New to create a new credential.

Credential scopes

Some existing credential types have specific scopes: endpoints that they work with. n8n warns you about this when you select the credential type.

For example, follow the steps in Using predefined credential types, and select Google Calendar OAuth2 API as your Credential Type. n8n displays a box listing the two endpoints you can use this credential type with:


Manual, partial, and production executions

URL: llms-txt#manual,-partial,-and-production-executions

Contents:

  • Manual executions
  • Partial executions
    • Troubleshooting partial executions
  • Production executions

There are some important differences in how n8n executes workflows manually (by clicking the Execute Workflow button) and automatically (when the workflow is Active and triggered by an event or schedule).

Manual executions allow you to run workflows directly from the canvas to test your workflow logic. These executions are "ad-hoc": they run only when you manually select the Execute workflow button.

Manual executions make building workflows easier by allowing you to iteratively test as you go, following the flow logic and seeing data transformations. You can test conditional branching, data formatting changes, and loop behavior by providing different input items and modifying node options.

Pinning execution data

When performing manual executions, you can use data pinning to "pin" or "freeze" the output data of a node. You can optionally edit the pinned data as well.

On future runs, instead of executing the pinned node, n8n will substitute the pinned data and continue following the flow logic. This allows you to iterate without operating on variable data or repeating queries to external services. Production executions ignore all pinned data.

Partial executions

Clicking the Execute workflow button at the bottom of the workflow in the Editor tab manually runs the entire workflow. You can also perform partial executions to run specific steps in your workflow. Partial executions are manual executions that only run a subset of your workflow nodes.

To perform a partial execution, select a node, open its detail view, and select Execute step. This executes the specific node and any preceding nodes required to fill in its input data. You can also temporarily disable specific nodes in the workflow chain to avoid interacting with those services while building.

In particular, partial executions are useful when updating the logic of a specific node since they allow you to re-execute the node with the same input data.

Troubleshooting partial executions

Some common issues you might come across when running partial executions include the following:

The destination node is not connected to any trigger. Partial executions need a trigger.

This error message appears when you try to perform a partial execution without connecting the workflow to a trigger. Manual executions, including partial executions, attempt to mimic production executions when possible. Part of this includes requiring a trigger node to describe when the workflow logic should execute.

To work around this, connect a trigger node to the workflow with the node you're trying to execute. Most often, a manual trigger is the simplest option.

Please execute the whole workflow, rather than just the node. (Existing execution data is too large.)

This error can appear when performing partial executions on workflows with large numbers of branches. Partial executions involve sending data and workflow logic to the n8n backend in a way that isn't required for full executions. This error occurs when your workflow exceeds the maximum size allowed for these messages.

To work around this, consider using the limit node to limit node output while running partial executions. Once the workflow is running as intended, you can disable or delete the limit node before enabling production execution.

Production executions

Production executions occur when a triggering event or schedule automatically runs a workflow.

To configure production executions, you must attach a trigger node (any trigger other than the manual trigger works) and switch the workflow's toggle to Active. Once activated, the workflow automatically executes whenever the trigger condition occurs.

The execution flow for production executions doesn't display in the Editor tab of the workflow as with manual executions. Instead, you can see executions in the workflow's Executions tab according to your workflow settings. From there, you can explore and troubleshoot problems using the debug in editor feature.


Executions

URL: llms-txt#executions

Contents:

  • Execution modes
  • Execution lists

An execution is a single run of a workflow.

There are two execution modes:

  • Manual: run workflows manually when testing. Select Execute Workflow to start a manual execution. You can do manual executions of active workflows, but n8n recommends keeping your workflow set to Inactive while developing and testing.
  • Production: a production workflow is one that runs automatically. To enable this, set the workflow to Active.

n8n provides two execution lists:

n8n supports adding custom data to executions.


Standard parameters

URL: llms-txt#standard-parameters

Contents:

  • displayName
  • name
  • icon
  • group
  • description
  • defaults
  • forceInputNodeExecution
  • inputs
  • outputs
  • requiredInputs

These are the standard parameters for the node base file. They're the same for all node types.

String | Required

This is the name users see in the n8n GUI.

String | Required

The internal name of the object. Used to reference it from other places in the node.

String or Object | Required

Specifies an icon for a particular node. n8n recommends uploading your own image file.

You can provide the icon file name as a string, or as an object to handle different icons for light and dark modes. If the icon works in both light and dark modes, use a string that starts with file:, indicating the path to the icon file. For example:

To provide different icons for light and dark modes, use an object with light and dark properties. For example:

n8n recommends using an SVG for your node icon, but you can also use PNG. If using PNG, the icon resolution should be 60x60px. Node icons should have a square or near-square aspect ratio.

Don't reference Font Awesome

If you want to use a Font Awesome icon in your node, download and embed the image.

Array of strings | Required

Tells n8n how the node behaves when the workflow runs. Options are:

  • trigger: node waits for a trigger.
  • schedule: node waits for a timer to expire.
  • input, output, transform: these currently have no effect.
  • An empty array, []. Use this as the default option if you don't need trigger or schedule.

String | Required

A short description of the node. n8n uses this in the GUI.

Object | Required

Contains essential brand and name settings.

The object can include:

  • name: String. Used as the node name on the canvas if the displayName is too long.
  • color: String. Hex color code. Provide the brand color of the integration for use in n8n.

forceInputNodeExecution

Boolean | Optional

When building a multi-input node, you can choose to force all preceding nodes on all branches to execute before the node runs. The default is false (requiring only one input branch to run).

Array of strings | Required

Names the input connectors. Controls the number of connectors the node has on the input side. If you need only one connector, use inputs: ['main'].

Array of strings | Required

Names the output connectors. Controls the number of connectors the node has on the output side. If you need only one connector, use outputs: ['main'].

Integer or Array | Optional

Used for multi-input nodes. Specify inputs by number that must have data (their branches must run) before the node can execute.
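
Putting the parameters above together, a node's description object might look like the following sketch. This is illustrative only: the node name, icon file name, and brand color are placeholder assumptions, and real nodes also set a version number.

description: INodeTypeDescription = {
  displayName: 'Example Node',          // name shown in the n8n GUI
  name: 'exampleNode',                  // internal name used to reference the node
  icon: 'file:exampleNode.svg',         // or { light: '...', dark: '...' }
  group: [],                            // no trigger or schedule behavior
  version: 1,                           // assumption: real nodes also declare a version
  description: 'A short description shown in the GUI',
  defaults: {
    name: 'Example Node',               // canvas name if displayName is too long
    color: '#1A82E2',                   // placeholder brand color
  },
  inputs: ['main'],                     // one input connector
  outputs: ['main'],                    // one output connector
  // requiredInputs: [0],               // optional: inputs that must have data first
  properties: [],                       // resource, operation, and field objects go here
};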

Array of objects | Required

This parameter tells n8n the credential options. Each object defines an authentication type.

The object must include:

  • name: the credential name. Must match the name property in the credential file. For example, name: 'asanaApi' in Asana.node.ts links to name = 'asanaApi' in AsanaApi.credential.ts.
  • required: Boolean. Specify whether authentication is required to use this node.
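
For example, a node that reuses the Asana credential mentioned above could declare the following. This is a sketch; only the asanaApi name comes from this page.

credentials: [
  {
    name: 'asanaApi',   // must match the name property in the credential file
    required: true,     // authentication is required to use this node
  },
],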

Object | Required

Set up the basic information for the API calls the node makes.

This object must include:

  • baseURL: The API base URL.

  • headers: an object describing the API call headers, such as content type.

  • url: string. Appended to the baseURL. You can usually leave this out. It's more common to provide this in the operations.
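
A typical requestDefaults block looks like the sketch below. The base URL and headers are illustrative assumptions, not values from this document.

requestDefaults: {
  baseURL: 'https://api.example.com/v1',   // placeholder API base URL
  headers: {
    Accept: 'application/json',
    'Content-Type': 'application/json',
  },
},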

Array of objects | Required

This contains the resource and operations objects that define node behaviors, as well as objects to set up mandatory and optional fields that can receive user input.

A resource object includes the following parameters:

  • displayName: String. This should always be Resource.
  • name: String. This should always be resource.
  • type: String. Tells n8n which UI element to use, and what input type to expect. For example, options results in n8n adding a dropdown that allows users to choose one option. Refer to Node UI elements for more information.
  • noDataExpression: Boolean. Prevents using an expression for the parameter. Must always be true for resource.
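
A resource object following these rules might look like the sketch below. The Card option, its value, and the default are assumptions added to make the dropdown usable; only displayName, name, type, and noDataExpression come from the list above.

{
  displayName: 'Resource',
  name: 'resource',
  type: 'options',          // renders a dropdown
  noDataExpression: true,   // expressions aren't allowed for resource
  options: [
    {
      name: 'Card',
      value: 'card',
    },
  ],
  default: 'card',
},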

Operations objects

The operations object defines the available operations on a resource.

  • displayName: String. This should always be Operation.
  • name: String. This should always be operation.
  • type: String. Tells n8n which UI element to use, and what input type to expect. For example, dateTime results in n8n adding a date picker. Refer to Node UI elements for more information.
  • noDataExpression: Boolean. Prevents using an expression for the parameter. Must always be true for operation.
  • options: Array of objects. Each object describes an operation's behavior, such as its routing, the REST verb it uses, and so on. An options object includes:
    • name. String.
    • value. String.
    • action: String. This parameter combines the resource and operation. You should always include it, as n8n will use it in future versions. For example, given a resource called "Card" and an operation "Get all", your action is "Get all cards".
    • description: String.
    • routing: Object containing request details.
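
Putting this together for the Card example used above, an operations object could look like the sketch below. The value, displayOptions, and routing details are illustrative assumptions.

{
  displayName: 'Operation',
  name: 'operation',
  type: 'options',
  noDataExpression: true,
  displayOptions: {
    show: {
      resource: ['card'],   // only show these operations for the Card resource
    },
  },
  options: [
    {
      name: 'Get All',
      value: 'getAll',
      action: 'Get all cards',       // combines the resource and operation
      description: 'Retrieve all cards',
      routing: {
        request: {
          method: 'GET',
          url: '/cards',             // appended to requestDefaults.baseURL
        },
      },
    },
  ],
  default: 'getAll',
},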

Additional fields objects

These objects define optional parameters. n8n displays them under Additional Fields in the GUI. Users can choose which parameters to set.

The objects must include:

For more information about UI element types, refer to UI elements.

Examples:

Example 1 (unknown):

icon: 'file:exampleNodeIcon.svg'

Example 2 (unknown):

icon: { 
  light: 'file:exampleNodeIcon.svg', 
  dark: 'file:exampleNodeIcon.dark.svg' 
}

Example 3 (unknown):

displayName: 'Additional Fields',
name: 'additionalFields',
// The UI element type
type: '',
placeholder: 'Add Field',
default: {},
displayOptions: {
  // Set which resources and operations this field is available for
  show: {
    resource: [
      // Resource names
    ],
    operation: [
      // Operation names
    ]
  },
}

Logging in n8n

URL: llms-txt#logging-in-n8n

Contents:

  • Setup

Logging is an important feature for debugging. n8n uses the winston logging library.

n8n Self-hosted Enterprise tier includes Log streaming, in addition to the logging options described in this document.

To set up logging in n8n, you need to set the following environment variables (you can also set the values in the configuration file).

| Setting in the configuration file | Using environment variables | Description |
| --- | --- | --- |
| n8n.log.level | N8N_LOG_LEVEL | The log output level. The available options, from lowest to highest level, are error, warn, info, and debug. The default value is info. You can learn more about these options here. |
| n8n.log.output | N8N_LOG_OUTPUT | Where to output logs. The available options are console and file. You can use multiple values separated by a comma (,). The default is console. |
| n8n.log.file.location | N8N_LOG_FILE_LOCATION | The log file location, used only if log output is set to file. The default is <n8nFolderPath>/logs/n8n.log. |
| n8n.log.file.fileSizeMax | N8N_LOG_FILE_SIZE_MAX | The maximum size (in MB) for each log file. By default, n8n uses 16 MB. |
| n8n.log.file.fileCountMax | N8N_LOG_FILE_COUNT_MAX | The maximum number of log files to keep. The default value is 100. Set this value when using workers. |

AWS SNS node

URL: llms-txt#aws-sns-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the AWS SNS node to automate work in AWS SNS, and integrate AWS SNS with other applications. n8n has built-in support for a wide range of AWS SNS features, including publishing messages.

On this page, you'll find a list of operations the AWS SNS node supports and links to more resources.

Refer to AWS SNS credentials for guidance on setting up authentication.

  • Publish a message to a topic

Templates and examples

Browse AWS SNS integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Bitly credentials

URL: llms-txt#bitly-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token
  • Using OAuth2

You can use these credentials to authenticate the following node:

Create a Bitly account.

Supported authentication methods

Refer to Bitly's API documentation for more information about the service.

To configure this credential, you'll need:

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you need to configure OAuth2 from scratch or need more detail on what's happening in the OAuth web flow, refer to the Bitly API Authentication documentation for more information.


Redis Chat Memory node

URL: llms-txt#redis-chat-memory-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources
  • Single memory instance

Use the Redis Chat Memory node to use Redis as a memory server.

On this page, you'll find a list of operations the Redis Chat Memory node supports, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Session Key: Enter the key to use to store the memory in the workflow data.
  • Session Time To Live: Use this parameter to make the session expire after a given number of seconds.
  • Context Window Length: Enter the number of previous interactions to consider for context.

Templates and examples

Build your own N8N Workflows MCP Server

View template details

Conversational Interviews with AI Agents and n8n Forms

View template details

Telegram AI Bot-to-Human Handoff for Sales Calls

View template details

Browse Redis Chat Memory integration templates, or search all templates

Refer to LangChain's Redis Chat Memory documentation for more information about the service.

View n8n's Advanced AI documentation.

Single memory instance

If you add more than one Redis Chat Memory node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the Chat Memory Manager node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes.


Pinecone credentials

URL: llms-txt#pinecone-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Pinecone's documentation for more information about the service.

View n8n's Advanced AI documentation.

To configure this credential, you'll need a Pinecone account and:

  1. Open your Pinecone console.
  2. Select the project you want to create an API key for. If you don't have any existing projects, create one. Refer to Pinecone's Quickstart for more information.
  3. Go to API Keys.
  4. Copy the API Key displayed there and enter it in your n8n credential.

Refer to Pinecone's API Authentication documentation for more information.


Compression

URL: llms-txt#compression

Contents:

  • Node parameters
    • Compress
    • Decompress
  • Templates and examples

Use the Compression node to compress and decompress files. Supports Zip and Gzip formats.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

The node parameters depend on which Operation you select. Choose to:

  • Compress: Create a compressed file from your input data.
  • Decompress: Decompress an existing compressed file.

Refer to the sections below for parameters specific to each Operation.

  • Input Binary Field(s): Enter the name of the fields in the input data that contain the binary files you want to compress. To compress more than one file, use a comma-separated list.

  • Output Format: Choose whether to format the compressed output as Zip or Gzip.

  • File Name: Enter the name of the zip file the node creates.

  • Put Output File in Field: Enter the name of the field in the output data to contain the file.

  • Input Binary Field(s): Enter the name of the fields in the input data that contain the binary files you want to decompress. To decompress more than one file, use a comma-separated list.

  • Output Prefix: Enter a prefix to add to the output file name.

Templates and examples

Talk to your SQLite database with a LangChain AI Agent 🧠💬

View template details

Transcribing Bank Statements To Markdown Using Gemini Vision AI

View template details

Build a Tax Code Assistant with Qdrant, Mistral.ai and OpenAI

View template details

Browse Compression integration templates, or search all templates


AWS SQS node

URL: llms-txt#aws-sqs-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the AWS SQS node to automate work in AWS SQS, and integrate AWS SQS with other applications. n8n has built-in support for a wide range of AWS SQS features, including sending messages.

On this page, you'll find a list of operations the AWS SQS node supports and links to more resources.

Refer to AWS SQS credentials for guidance on setting up authentication.

  • Send a message to a queue.

Templates and examples

Browse AWS SQS integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Sub nodes

URL: llms-txt#sub-nodes

Sub nodes attach to root nodes within a group of cluster nodes. They configure the overall functionality of the cluster.

Cluster nodes are node groups that work together to provide functionality in an n8n workflow. Instead of using a single node, you use a root node and one or more sub-nodes that extend the functionality of the node.


HighLevel credentials

URL: llms-txt#highlevel-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a HighLevel developer account.

Supported authentication methods

  • API key: Use with API v1
  • OAuth2: Use with API v2

HighLevel deprecated API v1.0 and no longer maintains it. Use OAuth2 to set up new credentials.

Refer to HighLevel's API 2.0 documentation for more information about the service.

For existing integrations with the API v1.0, refer to HighLevel's API 1.0 documentation.

To configure this credential, you'll need:

To configure this credential, you'll need:

  • A Client ID
  • A Client Secret

To generate both, create an app in My Apps > Create App. Use these settings:

  1. Set Distribution Type to Sub-Account.

  2. Add these Scopes:

    • locations.readonly
    • contacts.readonly
    • contacts.write
    • opportunities.readonly
    • opportunities.write
    • users.readonly

  3. Copy the OAuth Redirect URL from n8n and add it as a Redirect URL in your HighLevel app.

  4. Copy the Client ID and Client Secret from HighLevel and add them to your n8n credential.

  5. Add the same scopes added above to your n8n credential in a space-separated list. For example:

locations.readonly contacts.readonly contacts.write opportunities.readonly opportunities.write users.readonly

Refer to HighLevel's API Authorization documentation for more details. Refer to HighLevel's API Scopes documentation for more information about available scopes.


Trello credentials

URL: llms-txt#trello-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Trello's API documentation for more information about the service.

To configure this credential, you'll need a Trello account and:

  • An API Key
  • An API Token

To generate both the API Key and API Token, create a Trello Power-Up:

  1. Open the Trello Power-Up Admin Portal.
  2. Select New.
  3. Enter a Name for your Power-Up, like n8n integration.
  4. Select the Workspace the Power-Up should have access to.
  5. Leave the iframe connector URL blank.
  6. Enter appropriate contact information.
  7. Select Create.
  8. This should open the Power-Up to the API Key page. (If it doesn't, open that page.)
  9. Select Generate a new API Key.
  10. Copy the API key from Trello and enter it in your n8n credential.
  11. In your Trello API key page, enter your n8n base URL as an Allowed origin.
  12. In Capabilities make sure to select the necessary options.
  13. Select the Token link next to your Trello API Key.
  14. When prompted, select Allow to grant all the permissions asked for.
  15. Copy the Trello Token and enter it as the n8n API Token.

Refer to Trello's API Introduction for more information on API keys and tokens. Refer to Trello's Power-Up Admin Portal for more information on creating Power-Ups.


Anthropic credentials

URL: llms-txt#anthropic-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Anthropic's documentation for more information about the service.

View n8n's Advanced AI documentation.

To configure this credential, you'll need an Anthropic Console account with access to Claude.

  1. In the Anthropic Console, open Settings > API Keys.
  2. Select + Create Key.
  3. Give your key a Name, like n8n-integration.
  4. Select Copy Key to copy the key.
  5. Enter this as the API Key in your n8n credential.

Refer to Anthropic's Intro to Claude and Quickstart for more information.


Expressions

URL: llms-txt#expressions

Contents:

  • Writing expressions
    • Example: Get data from webhook body
    • Example: Writing longer JavaScript
  • Common issues

Expressions are a powerful feature implemented in all n8n nodes. They allow node parameters to be set dynamically based on data from:

  • Previous node executions
  • The workflow
  • Your n8n environment

You can also execute JavaScript within an expression, making this a convenient and easy way to manipulate data into useful parameter values without writing extensive extra code.

n8n created and uses a templating language called Tournament, and extends it with custom methods, variables, and data transformation functions. These features make it easier to perform common tasks like getting data from other nodes or accessing workflow metadata.

n8n additionally supports two libraries:

  • Luxon, for working with dates and time.
  • JMESPath, for querying JSON.

When writing expressions, it's helpful to understand data structure and behavior in n8n. Refer to Data for more information on working with data in your workflows.

Writing expressions

To use an expression to set a parameter value:

  1. Hover over the parameter where you want to use an expression.
  2. Select Expressions in the Fixed/Expression toggle.
  3. Write your expression in the parameter, or select Open expression editor to open the expressions editor. If you use the expressions editor, you can browse the available data in the Variable selector. All expressions have the format {{ your expression here }}.

Example: Get data from webhook body

Consider the following scenario: you have a webhook trigger that receives data through the webhook body. You want to extract some of that data for use in the workflow.

Your webhook data looks similar to this:

In the next node in the workflow, you want to get just the value of city. You can use the following expression:

  1. Accesses the incoming JSON-formatted data using n8n's custom $json variable.
  2. Finds the value of city (in this example, "New York") using dot notation. You can also write this expression with bracket notation as {{$json['body']['city']}}.

Example: Writing longer JavaScript

You can do things like variable assignments or multiple statements in an expression, but you need to wrap your code using the syntax for an IIFE (Immediately Invoked Function Expression).

The following code uses the Luxon date and time library to find the time between two dates in months. We wrap the code in both the handlebar brackets for an expression and the IIFE syntax.

For common errors or issues with expressions and suggested resolution steps, refer to Common Issues.

Examples:

Example 1 (unknown):

[
  {
    "headers": {
      "host": "n8n.instance.address",
      ...
    },
    "params": {},
    "query": {},
    "body": {
      "name": "Jim",
      "age": 30,
      "city": "New York"
    }
  }
]

Example 2 (unknown):

{{$json.body.city}}

Example 3 (unknown):

{{(()=>{
  let end = DateTime.fromISO('2017-03-13');
  let start = DateTime.fromISO('2017-02-13');
  let diffInMonths = end.diff(start, 'months');
  return diffInMonths.toObject();
})()}}

Ollama Chat Model node common issues

URL: llms-txt#ollama-chat-model-node-common-issues

Contents:

  • Processing parameters
  • Can't connect to a remote Ollama instance
  • Can't connect to a local Ollama instance when using Docker
    • If only Ollama is in Docker
    • If only n8n is in Docker
    • If Ollama and n8n are running in separate Docker containers
    • If Ollama and n8n are running in the same Docker container
  • Error: connect ECONNREFUSED ::1:11434
  • Ollama and HTTP/HTTPS proxies

Here are some common errors and issues with the Ollama Chat Model node and steps to resolve or troubleshoot them.

Processing parameters

The Ollama Chat Model node is a sub-node. Sub-nodes behave differently than other nodes when processing multiple items using expressions.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Can't connect to a remote Ollama instance

The Ollama Chat Model node supports Bearer token authentication for connecting to remote Ollama instances behind authenticated proxies (such as Open WebUI).

For remote authenticated connections, configure both the remote URL and API key in your Ollama credentials.

Follow the Ollama credentials instructions for more information.

Can't connect to a local Ollama instance when using Docker

The Ollama Chat Model node connects to a locally hosted Ollama instance using the base URL defined by Ollama credentials. When you run either n8n or Ollama in Docker, you need to configure the network so that n8n can connect to Ollama.

Ollama typically listens for connections on localhost, the local network address. In Docker, by default, each container has its own localhost which is only accessible from within the container. If either n8n or Ollama are running in containers, they won't be able to connect over localhost.

The solution depends on how you're hosting the two components.

If only Ollama is in Docker

If only Ollama is running in Docker, configure Ollama to listen on all interfaces by binding to 0.0.0.0 inside of the container (the official images are already configured this way).

When running the container, publish the ports with the -p flag. By default, Ollama runs on port 11434, so your Docker command should look like this:

When configuring Ollama credentials, the localhost address should work without a problem (set the base URL to http://localhost:11434).

If only n8n is in Docker

If only n8n is running in Docker, configure Ollama to listen on all interfaces by binding to 0.0.0.0 on the host.

If you are running n8n in Docker on Linux, use the --add-host flag to map host.docker.internal to host-gateway when you start the container. For example:

If you are using Docker Desktop, this is automatically configured for you.

When configuring Ollama credentials, use host.docker.internal as the host address instead of localhost. For example, to bind to the default port 11434, you could set the base URL to http://host.docker.internal:11434.

If Ollama and n8n are running in separate Docker containers

If both n8n and Ollama are running in Docker in separate containers, you can use Docker networking to connect them.

Configure Ollama to listen on all interfaces by binding to 0.0.0.0 inside of the container (the official images are already configured this way).

When configuring Ollama credentials, use the Ollama container's name as the host address instead of localhost. For example, if you call the Ollama container my-ollama and it listens on the default port 11434, you would set the base URL to http://my-ollama:11434.

If Ollama and n8n are running in the same Docker container

If Ollama and n8n are running in the same Docker container, the localhost address doesn't need any special configuration. You can configure Ollama to listen on localhost and configure the base URL in the Ollama credentials in n8n to use localhost: http://localhost:11434.

Error: connect ECONNREFUSED ::1:11434

This error occurs when your computer has IPv6 enabled, but Ollama is listening to an IPv4 address.

To fix this, change the base URL in your Ollama credentials to connect to 127.0.0.1, the IPv4-specific local address, instead of the localhost alias that can resolve to either IPv4 or IPv6: http://127.0.0.1:11434.

Ollama and HTTP/HTTPS proxies

Ollama doesn't support custom HTTP agents in its configuration. This makes it difficult to use Ollama behind custom HTTP/HTTPS proxies. Depending on your proxy configuration, it might not work at all, despite setting the HTTP_PROXY or HTTPS_PROXY environment variables.

Refer to Ollama's FAQ for more information.

Examples:

Example 1 (unknown):

docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama

Example 2 (unknown):

docker run -it --rm --add-host host.docker.internal:host-gateway --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n

Matrix credentials

URL: llms-txt#matrix-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following nodes:

Create an account on a Matrix server. Refer to Creating an account for more information.

Supported authentication methods

Refer to the Matrix Specification for more information about the service.

Refer to the documentation for the specific client you're using to access the Matrix server.

Using API access token

To configure this credential, you'll need:

  • An Access Token: This token is tied to the account you use to log into Matrix with.
  • A Homeserver URL: This is the URL of the homeserver you entered when you created your account. n8n prepopulates this with matrix.org's own server; adjust this if you're using a server hosted elsewhere.

Instructions for getting these details vary depending on the client you're using to access the server. Both the Access Token and the Homeserver URL can most commonly be found in Settings > Help & About > Advanced, but refer to your client's documentation for more details.


Ollama Model node

URL: llms-txt#ollama-model-node

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Related resources
  • Common issues
  • Self-hosted AI Starter Kit

The Ollama Model node allows you to use local Llama 2 models.

On this page, you'll find the node parameters for the Ollama Model node, and links to more resources.

This node lacks tools support, so it won't work with the AI Agent node. Instead, connect it with the Basic LLM Chain node.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model that generates the completion. Choose from:
    • Llama2
    • Llama2 13B
    • Llama2 70B
    • Llama2 Uncensored

Refer to the Ollama Models Library documentation for more information about available models.

  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.
  • Top K: Enter the number of token choices the model uses to generate the next token.
  • Top P: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.

Templates and examples

Chat with local LLMs using n8n and Ollama

View template details

🔐🦙🤖 Private & Local Ollama Self-Hosted AI Assistant

View template details

Auto Categorise Outlook Emails with AI

View template details

Browse Ollama Model integration templates, or search all templates

Refer to LangChains's Ollama documentation for more information about the service.

View n8n's Advanced AI documentation.

For common questions or issues and suggested solutions, refer to Common issues.

Self-hosted AI Starter Kit

New to working with AI and using self-hosted n8n? Try n8n's self-hosted AI Starter Kit to get started with a proof-of-concept or demo playground using Ollama, Qdrant, and PostgreSQL.


Milvus Vector Store node

URL: llms-txt#milvus-vector-store-node

Contents:

  • Node usage patterns
    • Use as a regular node to insert and retrieve documents
    • Connect directly to an AI agent as a tool
    • Use a retriever to fetch documents
    • Use the Vector Store Question Answer Tool to answer questions
  • Node parameters
    • Operation Mode
    • Rerank Results
    • Get Many parameters
    • Insert Documents parameters

Use the Milvus node to interact with your Milvus database as vector store. You can insert documents into a vector database, get documents from a vector database, retrieve documents to provide them to a retriever connected to a chain, or connect directly to an agent as a tool.

On this page, you'll find the node parameters for the Milvus node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Node usage patterns

You can use the Milvus Vector Store node in the following patterns.

Use as a regular node to insert and retrieve documents

You can use the Milvus Vector Store as a regular node to insert, or get documents. This pattern places the Milvus Vector Store in the regular connection flow without using an agent.

See this example template for how to build a system that stores documents in Milvus and retrieves them to support cited, chat-based answers.

Connect directly to an AI agent as a tool

You can connect the Milvus Vector Store node directly to the tool connector of an AI agent to use a vector store as a resource when answering queries.

Here, the connection would be: AI agent (tools connector) -> Milvus Vector Store node. See this example template where data is embedded and indexed in Milvus, and the AI Agent uses the vector store as a knowledge tool for question-answering.

Use a retriever to fetch documents

You can use the Vector Store Retriever node with the Milvus Vector Store node to fetch documents from the Milvus Vector Store node. This is often used with the Question and Answer Chain node to fetch documents from the vector store that match the given chat input.

A typical node connection flow looks like this: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Milvus Vector Store.

Check out this workflow example to see how to ingest external data into Milvus and build a chat-based semantic Q&A system.

Use the Vector Store Question Answer Tool to answer questions

Another pattern uses the Vector Store Question Answer Tool to summarize results and answer questions from the Milvus Vector Store node. Rather than connecting the Milvus Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store.

The connections flow would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Milvus Vector store.

This Vector Store node has four modes: Get Many, Insert Documents, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent). The mode you select determines the operations you can perform with the node and what inputs and outputs are available.

In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context.

Insert Documents

Use insert documents mode to insert new documents into your vector database.

Retrieve Documents (as Vector Store for Chain/Tool)

Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.

Retrieve Documents (as Tool for AI Agent)

Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details.

Enables reranking. If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the Get Many, Retrieve Documents (As Vector Store for Chain/Tool) and Retrieve Documents (As Tool for AI Agent) modes.

Get Many parameters

  • Milvus Collection: Select or enter the Milvus Collection to use.
  • Prompt: Enter your search query.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Insert Documents parameters

  • Milvus Collection: Select or enter the Milvus Collection to use.
  • Clear Collection: Specify whether to clear the collection before inserting new documents.

Retrieve Documents (As Vector Store for Chain/Tool) parameters

  • Milvus collection: Select or enter the Milvus Collection to use.

Retrieve Documents (As Tool for AI Agent) parameters

  • Name: The name of the vector store.
  • Description: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
  • Milvus Collection: Select or enter the Milvus Collection to use.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Available in Get Many mode. When searching for data, use this to match with metadata associated with the document.

This is an AND query. If you specify more than one metadata filter field, all of them must match.

When inserting data, the metadata is set using the document loader. Refer to Default Data Loader for more information on loading documents.

Available in Insert Documents mode. Deletes all data from the collection before inserting the new data.

Refer to LangChain's Milvus documentation for more information about the service.

View n8n's Advanced AI documentation.


AWS SNS Trigger node

URL: llms-txt#aws-sns-trigger-node

Contents:

  • Events
  • Related resources

AWS SNS is a notification service provided as part of Amazon Web Services. It provides a low-cost infrastructure for the mass delivery of messages, predominantly to mobile users.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's AWS SNS Trigger integrations page.

n8n provides an app node for AWS SNS. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to AWS SNS's documentation for details about their API.


SurveyMonkey credentials

URL: llms-txt#surveymonkey-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token
  • Using OAuth
  • Required app scopes

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API access token
  • OAuth2

Refer to SurveyMonkey's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • An Access Token: Generated once you create an app.
  • A Client ID: Generated once you create an app.
  • A Client Secret: Generated once you create an app.

Once you've created your app and assigned appropriate scopes, go to Settings > Credentials. Copy the Access Token, Client ID, and Secret and add them to n8n.

To configure this credential, you'll need:

  • A Client ID: Generated once you create an app.
  • A Client Secret: Generated once you create an app.

Once you've created your app and assigned appropriate scopes:

  1. Go to the app's Settings > Settings.
  2. From n8n, copy the OAuth Redirect URL.
  3. Overwrite the app's existing OAuth Redirect URL with that URL.
  4. Select Submit Changes.
  5. Be sure the Scopes section contains the Required app scopes.

From the app's Settings > Credentials, copy the Client ID and Client Secret and add them to your n8n credential. You can now select Connect my account from n8n.

SurveyMonkey Test OAuth Flow

This option only works if you keep the default SurveyMonkey OAuth Redirect URL and add the n8n OAuth Redirect URL as an Additional Redirect URL.

Required app scopes

Once you create your app, go to Settings > Scopes. Select these scopes for your n8n credential to work:

  • View Surveys
  • View Collectors
  • View Responses
  • View Response Details
  • Create/Modify Webhooks
  • View Webhooks

Select Update Scopes to save them.


HTML

URL: llms-txt#html

Contents:

  • Operations
  • Generate HTML template
  • Extract HTML Content
    • Source Data
    • Extraction Values
    • Extract HTML Content options
  • Convert to HTML Table
  • Templates and examples

The HTML node provides operations to help you work with HTML in n8n.

The HTML node replaces the HTML Extract node from version 0.213.0 on. If you're using an older version of n8n, you can still view the HTML Extract node documentation.

When using the HTML node to generate an HTML template, you can introduce XSS (cross-site scripting). This is a security risk. Be careful with untrusted inputs.

  • Generate HTML template: Use this operation to create an HTML template. This allows you to take data from your workflow and output it as HTML.
  • Extract HTML content: Extract contents from an HTML-formatted source. The source can be in JSON or a binary file (.html).
  • Convert to HTML Table: Convert content to an HTML table.

The node parameters and options depend on the operation you select. Refer to the sections below for more details on configuring each operation.

Generate HTML template

Create an HTML template. This allows you to take data from your workflow and output it as HTML.

The template can include:

  • Standard HTML
  • CSS in <style> tags.
  • JavaScript in <script> tags. n8n doesn't execute the JavaScript.
  • Expressions, wrapped in {{}}.

You can use Expressions in the template, including n8n's Built-in methods and variables.

Extract HTML Content

Extract contents from an HTML-formatted source. The source can be in JSON or a binary file (.html).

Use these parameters:

Select the source type for your HTML content. Choose between:

  • JSON: If you select this source data, enter the JSON Property: the name of the input containing the HTML you want to extract. The property can contain a string or an array of strings.
  • Binary: If you select this source data, enter the Input Binary Field: the name of the input containing the HTML you want to extract. The property can contain a string or an array of strings.

Extraction Values

  • Key: Enter the key to save the extracted value under.
  • CSS Selector: Enter the CSS selector to search for.
  • Return Value: Select the type of data to return. Choose from:
    • Attribute: Return an attribute value like class from an element.
      • If you select this option, enter the name of the Attribute to return the value of.
    • HTML: Return the HTML that the element contains.
    • Text: Return the text content of the element.
      • If you choose this option, you can also enter a comma-separated list of selectors to skip in the Skip Selectors.
    • Value: Return the value of an input, select, or text area.
  • Return Array: Choose whether to return multiple extraction values as an array (turned on) or as a single string (turned off).

Extract HTML Content options

You can also configure this operation with these options:

  • Trim Values: Controls whether to remove all spaces and newlines from the beginning and end of the values (turned on) or leaves them (turned off).
  • Clean Up Text: Controls whether to remove leading whitespaces, trailing whitespaces, and line breaks (newlines) and condense multiple consecutive whitespaces into a single space (turned on) or to leave them as-is (turned off).

Convert to HTML Table

This operation expects data from another node. It has no parameters. It includes these options:

  • Capitalize Headers: Controls whether to capitalize the table's headers (turned on) or not (turned off).
  • Custom Styling: Controls whether to use custom styling (turned on) or not (turned off).
  • Caption: Enter a caption to add to the table.
  • Table Attributes: Enter any attributes to apply to the <table>, such as style attributes.
  • Header Attributes: Enter any attributes to apply to the table's headers <th>.
  • Row Attributes: Enter any attributes to apply to the table's rows <tr>.
  • Cell Attributes: Enter any attributes to apply to the table's cells <td>.

Templates and examples

Scrape and summarize webpages with AI

View template details

Pulling data from services that n8n doesn't have a pre-built integration for

View template details

Automated Web Scraping: email a CSV, save to Google Sheets & Microsoft Excel

View template details

Browse HTML integration templates, or search all templates


Navigating the Editor UI

URL: llms-txt#navigating-the-editor-ui

Contents:

  • Getting started
  • Editor UI settings
    • Left-side panel
    • Top bar
    • Canvas
  • Nodes
    • Finding nodes
    • Adding nodes
    • Node buttons
  • Summary

In this lesson you will learn how to navigate the Editor UI. We will walk through the canvas and show you what each icon means and where to find things you will need while building workflows in n8n.

This course is based on n8n version 1.82.1. In other versions, some user interfaces might look different, but this shouldn't impact the core functionality.

Begin by setting up n8n.

We recommend starting with n8n Cloud, a hosted solution that doesn't require installation and includes a free trial.

If n8n Cloud isn't a good option for you, you can self-host with Docker. This is an advanced option recommended only for technical users familiar with hosting services, Docker, and the command line.

For more details on the different ways to set up n8n, see our platforms documentation.

Once you have n8n running, open the Editor UI in a browser window. Log in to your n8n instance. Select Overview and then Create Workflow to view the main canvas.

It should look like this:

Editor UI settings

The editor UI is the web interface where you build workflows. You can access all your workflows and credentials, as well as support pages, from the Editor UI.

On the left side of the Editor UI, there is a panel which contains the core functionalities and settings for managing your workflows. Expand and collapse it by selecting the small arrow icon.

The panel contains the following sections:

  • Overview: Contains all the workflows, credentials, and executions you have access to. During this course, create new workflows here.
  • Personal: Every user gets a default personal project. If you don't create a custom project, your workflows and credentials are stored here.
  • Projects: Projects let you group workflows and credentials together. You can assign roles to users in a project to control what they can do. Projects aren't available on the Community edition.
  • Admin Panel: n8n Cloud only. Access your n8n instance usage, billing, and version settings.
  • Templates: A collection of pre-made workflows. Great place to get started with common use cases.
  • Variables: Used to store and access fixed data across your workflows. This feature is available on the Pro and Enterprise Plans.
  • Insights: Provides analytics and insights about your workflows.
  • Help: Contains resources around n8n product and community.
  • What's New: Shows the latest product updates and features.

Editor UI left-side menu

The top bar of the Editor UI contains the following information:

  • Workflow Name: By default, n8n names a new workflow as "My workflow", but you can edit the name at any time.
  • + Add Tag: Tags help you organise your workflows by category, use case, or whatever is relevant for you. Tags are optional.
  • Inactive/active toggle: This button activates or deactivates the current workflow. By default, workflows are deactivated.
  • Share: You can share and collaborate with others on workflows on the Starter, Pro, and Enterprise plans.
  • Save: This button saves the current workflow.
  • History: Once you save your workflow, you can view previous versions here.

The canvas is the gray dotted grid background in the Editor UI. It displays several icons and a node with different functionalities:

  • Buttons to zoom the canvas to fit the screen, zoom in or out of the canvas, reset zoom, and tidy up the nodes on screen.
  • A button to Execute workflow once you add your first node. When you click on it, n8n executes all nodes on the canvas in sequence.
  • A button with a + sign inside. This button opens the nodes panel.
  • A button with a note icon inside. This button adds a sticky note to the canvas (visible when hovering on the top right + icon).
  • A button labeled Ask Assistant appears on the right side of the canvas. You can ask the AI Assistant for help with building workflows.
  • A dotted square with the text "Add first step." This is where you add your first node.

You can move the workflow canvas around in three ways:

  • Hold Ctrl and the left mouse button on the canvas and drag.
  • Hold the middle mouse button on the canvas and drag.
  • Place two fingers on your touchpad and slide.

Don't worry about workflow execution and activation for now; we'll explain these concepts later on in the course.

You can think of nodes as building blocks that serve different functions that, when put together, make up a functioning machine: an automated workflow.

A node is an individual step in your workflow: one that either (a) loads, (b) processes, or (c) sends data.

Based on their function, n8n classifies nodes into four types:

  • App or Action Nodes add, remove, and edit data; request and send external data; and trigger events in other systems. Refer to the Action nodes library for a full list of these nodes.
  • Trigger Nodes start a workflow and supply the initial data. Refer to the Trigger nodes library for a list of trigger nodes.
  • Core Nodes can be trigger or app nodes. Whereas most nodes connect to a specific external service, core nodes provide functionality such as logic, scheduling, or generic API calls. Refer to the Core Nodes library for a full list of core nodes.
  • Cluster Nodes are node groups that work together to provide functionality in a workflow, primarily for AI workflows. Refer to Cluster nodes for more information.

Refer to Node types for a more detailed explanation of all node types.

You can find all available nodes in the nodes panel on the right side of the Editor UI. There are three ways in which you can open the nodes panel:

  • Click the + icon in the top right corner of the canvas.
  • Click the + icon on the right side of an existing node on the canvas (the node to which you want to add another one).
  • Press the Tab key on your keyboard.

In the nodes panel, notice that when adding your first node, you will see the different trigger node categories. After you have added your trigger node, you'll see that the nodes panel changes to show Advanced AI, Actions in an App, Data transformation, Flow, Core, and Human in the loop nodes.

If you want to find a specific node, use the search input at the top of the nodes panel.

There are two ways to add nodes to your canvas:

  • Select the node you want in the nodes panel. The new node will automatically connect to the selected node on the canvas.
  • Drag and drop the node from the nodes panel to the canvas.

If you hover on a node, you'll notice that three icons appear on top:

  • Execute the node (Play icon)
  • Deactivate/Activate the node (Power icon)
  • Delete the node (Trash icon)

There will also be an ellipsis icon, which opens a context menu containing other node options.

To move a whole workflow around the canvas, select all nodes with your mouse or Ctrl+A, then click and hold any node and drag it to wherever you want on the canvas.

In this lesson you learned how to navigate the Editor UI, what the icons mean, how to access the left-side and node panels, and how to add nodes to the canvas.

In the next lesson, you will build a mini-workflow to put into practice what you've learned so far.


External secrets environment variables

URL: llms-txt#external-secrets-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

You can use an external secrets store to manage credentials for n8n. Refer to External secrets for details.

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| N8N_EXTERNAL_SECRETS_UPDATE_INTERVAL | Number | 300 (5 minutes) | How often (in seconds) to check for secret updates. |

or

URL: llms-txt#or

Contents:

  • Next steps
  • Updating
  • n8n with tunnel
  • Reverting an upgrade
  • Windows troubleshooting

## Next steps

Try out n8n using the [Quickstarts](../../../try-it-out/).

## Updating

To update your n8n instance to the `latest` version, run:

To install the `next` version:

npm install -g n8n@next

## n8n with tunnel

Danger

Use this for local development and testing. It isn't safe to use it in production.

To use webhooks for trigger nodes of external services like GitHub, n8n has to be reachable from the web. n8n runs a [tunnel service](https://github.com/localtunnel/localtunnel) that can redirect requests from n8n's servers to your local n8n instance.

Start n8n with `--tunnel` by running:

n8n start --tunnel

## Reverting an upgrade

Install the older version that you want to go back to.

If the upgrade involved a database migration:

1. Check the feature documentation and release notes to see if there are any manual changes you need to make.
2. Run `n8n db:revert` on your current version to roll back the database. If you want to revert more than one database migration, you need to repeat this process.

## Windows troubleshooting

If you are experiencing issues running n8n on Windows, make sure your Node.js environment is correctly set up. Follow Microsoft's guide to [Install NodeJS on Windows](https://learn.microsoft.com/en-us/windows/dev-environment/javascript/nodejs-on-windows).

Evaluation Trigger node

URL: llms-txt#evaluation-trigger-node

Contents:

  • Parameters
  • Templates and examples
  • Related resources

Use the Evaluation Trigger node when setting up evaluations to validate your AI workflow reliability. During evaluation, the Evaluation Trigger node reads your evaluation dataset from Google Sheets, sending the items through the workflow one at a time, in sequence.

On this page, you'll find the Evaluation Trigger node parameters and options.

Credentials for Google Sheets

The Evaluation Trigger node uses data tables or Google Sheets to store the test dataset. To use Google Sheets as a dataset source, configure a Google Sheets credential.

  • Source: Select the location where your evaluation dataset is stored. Default value is Data table.

Source settings differ depending on Source selection.

  • When Source is Data table:
    • Data table: Select a data table by name or ID.
    • Limit Rows: Whether to limit the number of rows in the data table to process. Default state is off.
      • Max Rows to Process: When Limit Rows is enabled, the maximum number of rows to read and process during the evaluation. Default value is 10.
    • Filter Rows: Whether to filter rows in the data table to process. Default state is off.
  • When Source is Google Sheets:
    • Credential to connect with: Create or select an existing Google Sheets credential.
    • Document Containing Dataset: Choose the spreadsheet document with the sheet containing your test dataset.
      • Select From list to choose the spreadsheet title from the dropdown list, By URL to enter the URL of the spreadsheet, or By ID to enter the spreadsheetId.
      • You can find the spreadsheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0.
    • Sheet Containing Dataset: Choose the sheet containing your test dataset.
      • Select From list to choose the sheet title from the dropdown list, By URL to enter the URL of the sheet, By ID to enter the sheetId, or By Name to enter the sheet title.
      • You can find the sheetId in a Google Sheets URL: https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId.
    • Limit Rows: Whether to limit the number of rows in the sheet to process.
      • Max Rows to Process: When Limit Rows is enabled, the maximum number of rows to read and process during the evaluation.
    • Filters: Filter the evaluation dataset based on column values.
      • Column: Choose a sheet column you want to filter by. Select From list to choose the column name from the dropdown list, or By ID to specify an ID using an expression.
      • Value: The column value you want to filter by. The evaluation will only process rows with the given value for the selected column.

Templates and examples

AI Automated HR Workflow for CV Analysis and Candidate Evaluation

View template details

HR Job Posting and Evaluation with AI

View template details

AI-Powered Candidate Screening and Evaluation Workflow using OpenAI and Airtable

View template details

Browse Evaluation Trigger integration templates, or search all templates

To learn more about n8n evaluations, check out the evaluations documentation

n8n provides an app node for evaluations. You can find the node docs here.

For common questions or issues and suggested solutions, refer to the evaluations tips and common issues page.


Allow usage of external npm modules.

URL: llms-txt#allow-usage-of-external-npm-modules.

export NODE_FUNCTION_ALLOW_EXTERNAL=moment,lodash


If using Task Runners

If your n8n instance is set up with [Task Runners](../../task-runners/), add the environment variables to the Task Runners instead of to the main n8n node.

Refer to [Environment variables reference](../../environment-variables/nodes/) for more information on these variables.
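
Once allowed, you can require these modules from a Code node. A minimal sketch, assuming a JavaScript Code node set to run once for all items (the output fields are illustrative):

// moment and lodash must be listed in NODE_FUNCTION_ALLOW_EXTERNAL for this to work
const moment = require('moment');
const lodash = require('lodash');

// return a single item with illustrative fields
return [{ json: { timestamp: moment().toISOString(), largest: lodash.max([1, 2, 3]) } }];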

---

## Google Cloud Firestore node

**URL:** llms-txt#google-cloud-firestore-node

**Contents:**
- Operations
- Templates and examples
- What to do if your operation isn't supported

Use the Google Cloud Firestore node to automate work in Google Cloud Firestore, and integrate Google Cloud Firestore with other applications. n8n has built-in support for a wide range of Google Cloud Firestore features, including creating, deleting, and getting documents.

On this page, you'll find a list of operations the Google Cloud Firestore node supports and links to more resources.

Refer to [Google credentials](../../credentials/google/) for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/).

- Document
  - Create a document
  - Create/Update a document
  - Delete a document
  - Get a document
  - Get all documents from a collection
  - Runs a query against your documents
- Collection
  - Get all root collections

## Templates and examples

**Create, update, and get a document in Google Cloud Firestore**

[View template details](https://n8n.io/workflows/839-create-update-and-get-a-document-in-google-cloud-firestore/)

**🛠️ Google Cloud Firestore Tool MCP Server 💪 all 7 operations**

[View template details](https://n8n.io/workflows/5252-google-cloud-firestore-tool-mcp-server-all-7-operations/)

**Automated AI News Curation and LinkedIn Posting with GPT-5 and Firebase**

[View template details](https://n8n.io/workflows/9886-automated-ai-news-curation-and-linkedin-posting-with-gpt-5-and-firebase/)

[Browse Google Cloud Firestore integration templates](https://n8n.io/integrations/google-cloud-firestore/), or [search all templates](https://n8n.io/workflows/)

## What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**.
1. Select the service you want to connect to.
1. Select your credential.

Refer to [Custom API operations](../../../custom-operations/) for more information.

---

## Privacy

**URL:** llms-txt#privacy

**Contents:**
- GDPR
  - Data processing agreement
  - Submitting an account deletion request
  - Sub-processors
  - GDPR for self-hosted users
- Data collection
  - Data collection in self-hosted n8n
  - Data collection in n8n Cloud
  - AI in n8n
  - Documentation telemetry

This page describes n8n's data privacy practices.

### Data processing agreement

For Cloud versions of n8n, n8n is considered both a Controller and a Processor as defined by the GDPR. As a Processor, n8n implements policies and practices that secure the personal data you send to the platform, and includes a [Data Processing Agreement](https://n8n.io/legal/#data) as part of the company's standard [Terms of Service](https://n8n.io/legal/#terms).

The n8n Data Processing Agreement includes the [Standard Contractual Clauses (SCCs)](https://ec.europa.eu/info/law/law-topic/data-protection/international-dimension-data-protection/standard-contractual-clauses-scc_en). These clarify how n8n handles your data, and they update n8n's GDPR policies to cover the latest standards set by the European Commission.

You can find a list of n8n sub-processors [here](#sub-processors).

For self-hosted versions, n8n is neither a Controller nor a Processor, as we don't manage your data.

### Submitting an account deletion request

Email help@n8n.io to make an account deletion request.

This is a list of sub-processors authorized to process customer data for n8n's service. n8n audits each sub-processor's security controls and applicable regulations for the protection of personal data.

| Sub-processor name | Purpose                | Contact details                                                                                                                                         | Geographic location of processing |
| ------------------ | ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- |
| Microsoft Azure    | Cloud service provider | Microsoft Azure 1 Microsoft Way Redmond WA 98052 USA Contact information: https://privacy.microsoft.com/en-GB/privacystatement#mainhowtocontactusmodule | Germany (West Central Region)     |
| Hetzner Online     | Cloud service provider | Hetzner Online GmbH Industriestr. 25 91710 Gunzenhausen Germany data-protection@hetzner.com                                                             | Germany                           |
| OpenAI             | AI provider            | 1455 3rd Street San Francisco, CA 94158 United States                                                                                                   | US                                |
| Anthropic          | AI provider            | Anthropic Ireland, Limited 6th Floor South Bank House, Barrow Street, Dublin 4 Ireland                                                                  | US                                |
| Google Vertex AI   | AI provider            | Google LLC, 1600 Amphitheatre Parkway, Mountain View, CA 94043, United States                                                                           | EU, US                            |
| LangChain          | AI provider            | LangChain, Inc. Delaware                                                                                                                                | US                                |

Subscribe [here](https://n8n-community.typeform.com/to/FdeRxSkH?typeform-source=n8n.io) to receive updates when n8n adds or changes a sub-processor.

### GDPR for self-hosted users

If you self-host n8n, you are responsible for deleting user data. If you need to delete data on behalf of one of your users, you can delete the respective execution. n8n recommends configuring n8n to prune execution data automatically every few days to minimize the effort of handling GDPR requests. Configure this using the `EXECUTIONS_DATA_MAX_AGE` environment variable. Refer to [Environment variables](../../hosting/configuration/environment-variables/) for more information.

n8n collects selected usage and performance data to help diagnose problems and improve the platform. Read about how n8n stores and processes this information in the [privacy policy](https://n8n.io/legal/#privacy).

The data gathered is different in self-hosted n8n and n8n Cloud.

### Data collection in self-hosted n8n

n8n takes care to keep self-hosted data anonymous and avoids collecting sensitive data.

#### What n8n collects

- Error codes and messages of failed executions (excluding any payload data, and not for custom nodes)
- Error reports for app crashes and API issues
- The graph of a workflow (types of nodes used and how they're connected)
- From node parameters:
  - The 'resource' and 'operation' that a node is set to (if applicable)
  - For HTTP request nodes, the domain, path, and method (with personal data anonymized)
- Data around workflow executions:
  - Status
  - The user ID of the user who ran the execution
  - The first time a workflow loads data from an external source
  - The first successful production (non-manual) workflow execution
- The domain of webhook calls, if specified (excluding subdomain).
- Details on how the UI is used (for example, navigation, nodes panel searches)
- Diagnostic information:
  - n8n version
  - Selected settings:
    - DB_TYPE
    - N8N_VERSION_NOTIFICATIONS_ENABLED
    - N8N_DISABLE_PRODUCTION_MAIN_PROCESS
    - [Execution variables](../../hosting/configuration/environment-variables/executions/)
  - OS, RAM, and CPUs
  - Anonymous instance ID
- IP address

#### What n8n doesn't collect

n8n doesn't collect private or sensitive information, such as:

- Personally identifiable information (except IP address)
- Credential information
- Node parameters (except 'resource' and 'operation')
- Execution data
- Sensitive settings (for example, endpoints, ports, DB connections, username/password)
- Error payloads

#### How collection works

Most data is sent to n8n as the events that generate it occur. Workflow execution counts and an instance pulse are sent periodically (every 6 hours).

#### Opting out of telemetry

Telemetry collection is enabled by default. To disable it, you can configure the following environment variables.

To opt out of telemetry events, set `N8N_DIAGNOSTICS_ENABLED` to false.

To opt out of checking for new versions of n8n, set `N8N_VERSION_NOTIFICATIONS_ENABLED` to false.

To disable the templates feature (prevents background health check calls), set `N8N_TEMPLATES_ENABLED` to false.

See [configuration](../../hosting/configuration/configuration-methods/) for more info on how to set environment variables.

### Data collection in n8n Cloud

n8n Cloud collects everything listed in [Data collection in self-hosted n8n](#data-collection-in-self-hosted-n8n).

Additionally, in n8n Cloud, n8n uses [PostHog](https://posthog.com/) to track events and visualise usage, including using session recordings. Session recordings comprise the data seen by a user on screen, with the exception of credential values. n8n's product team uses this data to improve the product. All recordings are deleted after 21 days.

To provide enhanced assistance, n8n integrates AI-powered features that leverage Large Language Models (LLMs).

To assist and improve user experience, n8n may send specific context data to LLMs. This context data is strictly limited to information about the current workflow. n8n does not send any values from credential fields or actual output data to AI services. The data will not be incorporated, used, or retained to train the models of the AI services. Any data will be deleted after 30 days.

#### When n8n shares data

Data is only sent to AI services if workspaces have opted in to use the assistant. The Assistant is enabled by default for n8n Cloud users. When a workspace opts in to use the assistant, node-specific data is transmitted only during direct interactions and active sessions with the AI assistant, ensuring no unnecessary data sharing occurs.

- **General Workflow Information**: This includes details about which nodes are present in your workflow, the number of items currently in the workflow, and whether the workflow is active.
- **Input & Output Schemas of Nodes**: This includes the schema of all nodes with incoming data and the output schema of a node in question. We do not send the actual data value of the schema.
- **Node Configuration**: This includes the operations, options, and settings chosen in the referenced node.
- **Code and Expressions**: This includes any code or expressions in the node in question to help with debugging potential issues and optimizations.

#### What n8n doesn't share

- **Credentials**: Any values of the credential fields of your nodes.
- **Output Data**: The actual data processed by your workflows.
- **Sensitive Information**: Any personally identifiable information or other sensitive data that could compromise your privacy or security, unless you have explicitly included it in node parameters or in the code of a [Code Node](../../integrations/builtin/core-nodes/n8n-nodes-base.code/).

### Documentation telemetry

n8n's documentation (this website) uses cookies to recognize your repeated visits and preferences, as well as to measure the effectiveness of n8n's documentation and whether users find what they're searching for. With your consent, you're helping n8n to make our documentation better. You can control cookie consent using the cookie widget.

## Retention and deletion of personal identifiable data

PID (personal identifiable data) is data that's personal to you and would identify you as an individual.

n8n only retains data for as long as necessary to provide the core service.

For n8n Cloud, n8n stores your workflow code, credentials, and other data indefinitely, until you choose to delete it or close your account. The platform stores execution data according to the retention rules on your account.

n8n deletes most internal application logs and logs tied to subprocessors within 90 days. The company retains a subset of logs for longer periods where required for security investigations.

If you choose to delete your n8n account, n8n deletes all customer data and event data associated with your account. n8n deletes customer data in backups within 90 days.

Self-hosted users should have their own PID policy and data deletion processes. Refer to [What you can do](../what-you-can-do/) for more information.

n8n uses Paddle.com to process payments. When you sign up for a paid plan, Paddle transmits and stores the details of your payment method according to their security policy. n8n stores no information about your payment method.

**Examples:**

Example 1 (unknown):

export N8N_DIAGNOSTICS_ENABLED=false

Example 2 (unknown):

export N8N_VERSION_NOTIFICATIONS_ENABLED=false

Example 3 (unknown):

export N8N_TEMPLATES_ENABLED=false

Core nodes library

URL: llms-txt#core-nodes-library

This section provides information about n8n's core nodes.


Self-hosted AI Starter Kit

URL: llms-txt#self-hosted-ai-starter-kit

Contents:

  • What's included
  • What you can build
  • Get the kit

The Self-hosted AI Starter Kit is an open Docker Compose template that bootstraps a fully featured local AI and low-code development environment.

Curated by n8n, it combines the self-hosted n8n platform with a list of compatible AI products and components to get you started building self-hosted AI workflows.

Self-hosted n8n: Low-code platform with over 400 integrations and advanced AI components.

Ollama: Cross-platform LLM platform to install and run the latest local LLMs.

Qdrant: Open-source, high performance vector store with a comprehensive API.

PostgreSQL: The workhorse of the Data Engineering world, handles large amounts of data safely.

What you can build

AI Agents that can schedule appointments

Summaries of company PDFs without leaking data

Smarter Slackbots for company communications and IT-ops

Private, low-cost analyses of financial documents

Head to the GitHub repository to clone the repo and get started!

n8n designed this kit to help you get started with self-hosted AI workflows. While it's not fully optimized for production environments, it combines robust components that work well together for proof-of-concept projects. Customize it to meet your needs. Secure and harden it before using it in production.


Try it out

URL: llms-txt#try-it-out

The best way to learn n8n is by using our tutorials to get familiar with the user interface and the many different types of nodes and integrations available. Here is a selection of material to get you started:


License environment variables

URL: llms-txt#license-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

To enable certain licensed features, you must first activate your license. You can do this either through the UI or by setting environment variables. For more information, see license key.

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| N8N_HIDE_USAGE_PAGE | Boolean | false | Hide the usage and plans page in the app. |
| N8N_LICENSE_ACTIVATION_KEY | String | '' | Activation key to initialize license. Not applicable if the n8n instance was already activated. |
| N8N_LICENSE_AUTO_RENEW_ENABLED | Boolean | true | Enables (true) or disables (false) autorenewal for licenses. If disabled, you need to manually renew the license every 10 days by navigating to Settings > Usage and plan, and pressing F5. Failure to renew the license will disable all licensed features. |
| N8N_LICENSE_DETACH_FLOATING_ON_SHUTDOWN | Boolean | true | Controls whether the instance releases floating entitlements back to the pool upon shutdown. Set to true to allow other instances to reuse the entitlements, or false to retain them. For production instances that must always keep their licensed features, set this to false. |
| N8N_LICENSE_SERVER_URL | String | https://license.n8n.io/v1 | Server URL to retrieve license. |
| N8N_LICENSE_TENANT_ID | Number | 1 | Tenant ID associated with the license. Only set this variable if explicitly instructed by n8n. |
| https_proxy_license_server | String | https://user:pass@proxy:port | Proxy server URL for HTTPS requests to retrieve license. This variable name needs to be lowercase. |
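
For example, to activate an instance at startup without using the UI (the key shown is a placeholder):

export N8N_LICENSE_ACTIVATION_KEY=<your-activation-key>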

TYPE n8n_scaling_mode_queue_jobs_failed counter

URL: llms-txt#type-n8n_scaling_mode_queue_jobs_failed-counter

n8n_scaling_mode_queue_jobs_failed 0


Copper credentials

URL: llms-txt#copper-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Copper account at the Professional or Business plan level.

Supported authentication methods

  • API key

Refer to Copper's API documentation for more information about the service.

To configure this credential, you'll need:


Google: OAuth2 single service

URL: llms-txt#google:-oauth2-single-service

Contents:

  • Prerequisites
  • Set up OAuth
    • Create a Google Cloud Console project
    • Enable APIs
    • Configure your OAuth consent screen
    • Create your Google OAuth client credentials
    • Finish your n8n credential
  • Video
  • Troubleshooting
    • Google hasn't verified this app

This document contains instructions for creating a Google credential for a single service. They're also available as a video.

Note for n8n Cloud users

For the following nodes, you can authenticate by selecting Sign in with Google in the OAuth section:

There are five steps to connecting your n8n credential to Google services:

  1. Create a Google Cloud Console project.
  2. Enable APIs.
  3. Configure your OAuth consent screen.
  4. Create your Google OAuth client credentials.
  5. Finish your n8n credential.

Create a Google Cloud Console project

First, create a Google Cloud Console project. If you already have a project, jump to the next section:

  1. Log in to your Google Cloud Console using your Google credentials.

  2. In the top menu, select the project dropdown in the top navigation and select New project or go directly to the New Project page.

  3. Enter a Project name and select the Location for your project.

  4. Select Create.

  5. Check the top navigation and make sure the project dropdown has your project selected. If not, select the project you just created.

Check the project dropdown in the Google Cloud top navigation

With your project created, enable the APIs you'll need access to:

  1. Access your Google Cloud Console - Library. Make sure you're in the correct project.

Check the project dropdown in the Google Cloud top navigation

  2. Go to APIs & Services > Library.

  3. Search for and select the API(s) you want to enable. For example, for the Gmail node, search for and enable the Gmail API.

  4. Some integrations require other APIs or require you to request access:

Google Drive API required

The following integrations require the Google Drive API, as well as their own API:

  • Google Docs
  • Google Sheets
  • Google Slides

In addition to the Vertex AI API, you will also need to enable the Cloud Resource Manager API.

  5. Select ENABLE.

If you haven't used OAuth in your Google Cloud project before, you'll need to configure the OAuth consent screen:

  1. Access your Google Cloud Console - Library. Make sure you're in the correct project.

Check the project dropdown in the Google Cloud top navigation

  2. Open the left navigation menu and go to APIs & Services > OAuth consent screen. Google will redirect you to the Google Auth Platform overview page.

  3. Select Get started on the Overview tab to begin configuring OAuth consent.

  4. Enter an App name and User support email to include on the OAuth screen. Select Next to continue.

  5. For the Audience, select Internal for user access within your organization's Google workspace or External for any user with a Google account. Refer to Google's User type documentation for more information on user types. Select Next to continue.

  6. Select the Email addresses Google should use to contact you about changes to your project. Select Next to continue.

  7. Read and accept Google's User Data Policy. Select Continue and then select Create.

  8. In the left-hand menu, select Branding.

  9. In the Authorized domains section, select Add domain:

  • If you're using n8n's Cloud service, add n8n.cloud
  • If you're self-hosting, add the domain of your n8n instance.

  10. Select Save at the bottom of the page.

Create your Google OAuth client credentials

Next, create the OAuth client credentials in Google:

  1. Access your Google Cloud Console. Make sure you're in the correct project.
  2. In the APIs & Services section, select Credentials.
  3. Select + Create credentials > OAuth client ID.
  4. In the Application type dropdown, select Web application.
  5. Google automatically generates a Name. Update the Name to something you'll recognize in your console.
  6. From your n8n credential, copy the OAuth Redirect URL. Paste it into the Authorized redirect URIs in Google Console.
  7. Select Create.

Finish your n8n credential

With the Google project and credentials fully configured, finish the n8n credential:

  1. From Google's OAuth client created modal, copy the Client ID. Enter this in your n8n credential.
  2. From the same Google modal, copy the Client Secret. Enter this in your n8n credential.
  3. In n8n, select Sign in with Google to complete your Google authentication.
  4. Save your new credentials.

Google hasn't verified this app

If using the OAuth authentication method, you might see the warning Google hasn't verified this app. To avoid this:

  • If your app User Type is Internal, create OAuth credentials from the same account you want to authenticate.
  • If your app User Type is External, you can add your email to the list of testers for the app: go to the Audience page and add the email you're signing in with to the list of Test users.

If you need to use credentials generated by another account (by a developer or another third party), follow the instructions in Google Cloud documentation | Authorization errors: Google hasn't verified this app.

Google Cloud app becoming unauthorized

For Google Cloud apps with Publishing status set to Testing and User type set to External, consent and tokens expire after seven days. Refer to Google Cloud Platform Console Help | Setting up your OAuth consent screen for more information. To resolve this, reconnect the app in the n8n credentials modal.


Security environment variables

URL: llms-txt#security-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| N8N_BLOCK_ENV_ACCESS_IN_NODE | Boolean | false | Whether to allow users to access environment variables in expressions and the Code node (false) or not (true). |
| N8N_BLOCK_FILE_ACCESS_TO_N8N_FILES | Boolean | true | Set to true to block access to all files in the .n8n directory and user defined configuration files. |
| N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS | Boolean | false | Set to true to try to set 0600 permissions for the settings file, giving only the owner read and write access. |
| N8N_RESTRICT_FILE_ACCESS_TO | String | | Limits access to files in these directories. Provide multiple files as a colon-separated list (":"). |
| N8N_SECURITY_AUDIT_DAYS_ABANDONED_WORKFLOW | Number | 90 | Number of days to consider a workflow abandoned if it's not executed. |
| N8N_SECURE_COOKIE | Boolean | true | Ensures that cookies are only sent over HTTPS, enhancing security. |
| N8N_SAMESITE_COOKIE | Enum string: strict, lax, none | lax | Controls cross-site cookie behavior (learn more): strict: Sent only for first-party requests. lax (default): Sent with top-level navigation requests. none: Sent in all contexts (requires HTTPS). |
| N8N_GIT_NODE_DISABLE_BARE_REPOS | Boolean | false | Set to true to prevent the Git node from working with bare repositories, enhancing security. |
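
For example, a hardened self-hosted instance served over HTTPS might combine several of these settings (the values and paths are illustrative):

export N8N_BLOCK_ENV_ACCESS_IN_NODE=true
export N8N_SECURE_COOKIE=true
export N8N_SAMESITE_COOKIE=lax
export N8N_RESTRICT_FILE_ACCESS_TO=/home/node/allowed:/data/exports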

SendGrid node

URL: llms-txt#sendgrid-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the SendGrid node to automate work in SendGrid, and integrate SendGrid with other applications. n8n has built-in support for a wide range of SendGrid features, including creating, updating, deleting, and getting contacts and lists, as well as sending emails.

On this page, you'll find a list of operations the SendGrid node supports and links to more resources.

Refer to SendGrid credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Contact
    • Create/update a contact
    • Delete a contact
    • Get a contact by ID
    • Get all contacts
  • List
    • Create a list
    • Delete a list
    • Get a list
    • Get all lists
    • Update a list
  • Mail
    • Send an email.

Templates and examples

Track investments using Baserow and n8n

View template details

Automated Email Optin Form with n8n and Hunter io for verification

View template details

Add contacts to SendGrid automatically

View template details

Browse SendGrid integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Retrieve linked items from earlier in the workflow

URL: llms-txt#retrieve-linked-items-from-earlier-in-the-workflow

Every item in a node's input data links back to the items used in previous nodes to generate it. This is useful if you need to retrieve linked items from further back than the immediate previous node.

To access the linked items from earlier in the workflow, use $("<node-name>").itemMatching(currentNodeInputIndex).
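
For example, in an expression field this might look like the following (the node name and field are illustrative):

{{ $("Customer Datastore (n8n training)").itemMatching($itemIndex).json.email }}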

For example, consider a workflow that does the following:

  1. The Customer Datastore node generates example data:

  2. The Edit Fields node simplifies this data:

  3. The Code node restores the email address to the correct person:

The Code node does this using the following code:

You can view and download the example workflow from the n8n website | itemMatching usage example.

Examples:

Example 1 (unknown):

[
   	{
   		"id": "23423532",
   		"name": "Jay Gatsby",
   		"email": "gatsby@west-egg.com",
   		"notes": "Keeps asking about a green light??",
   		"country": "US",
   		"created": "1925-04-10"
   	},
   	{
   		"id": "23423533",
   		"name": "José Arcadio Buendía",
   		"email": "jab@macondo.co",
   		"notes": "Lots of people named after him. Very confusing",
   		"country": "CO",
   		"created": "1967-05-05"
   	},
   	...
   ]

Example 2 (unknown):

[
   	{
   		"name": "Jay Gatsby"
   	},
   	{
   		"name": "José Arcadio Buendía"
   	},
       ...
   ]

Example 3 (unknown):

[
   	{
   		"name": "Jay Gatsby",
   		"restoreEmail": "gatsby@west-egg.com"
   	},
   	{
   		"name": "José Arcadio Buendía",
   		"restoreEmail": "jab@macondo.co"
   	},
   	...
   ]

Example 4 (unknown):

for(let i=0; i<$input.all().length; i++) {
	$input.all()[i].json.restoreEmail = $('Customer Datastore (n8n training)').itemMatching(i).json.email;
}
return $input.all();

OpenAI Audio operations

URL: llms-txt#openai-audio-operations

Contents:

  • Generate Audio
    • Options
  • Transcribe a Recording
    • Options
  • Translate a Recording
    • Options
  • Common issues

Use this operation to generate audio, or to transcribe or translate a recording in OpenAI. Refer to OpenAI for more information on the OpenAI node itself.

Use this operation to create audio from a text prompt.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Audio.

  • Operation: Select Generate Audio.

  • Model: Select the model you want to use to generate the audio. Refer to TTS | OpenAI for more information.

    • TTS-1: Use this to optimize for speed.
    • TTS-1-HD: Use this to optimize for quality.
  • Text Input: Enter the text to generate the audio for. The maximum length is 4096 characters.

  • Voice: Select a voice to use when generating the audio. Listen to the previews of the voices in Text to speech guide | OpenAI.

  • Response Format: Select the format for the audio response. Choose from MP3 (default), OPUS, AAC, FLAC, WAV, and PCM.

  • Audio Speed: Enter the speed for the generated audio as a value from 0.25 to 4.0. Defaults to 1.

  • Put Output in Field: Defaults to data. Enter the name of the output field to put the binary file data in.

Refer to Create speech | OpenAI documentation for more information.

Transcribe a Recording

Use this operation to transcribe audio into text. OpenAI API limits the size of the audio file to 25 MB. OpenAI will use the whisper-1 model by default.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Audio.

  • Operation: Select Transcribe a Recording.

  • Input Data Field Name: Defaults to data. Enter the name of the binary property that contains the audio file in one of these formats: .flac, .mp3, .mp4, .mpeg, .mpga, .m4a, .ogg, .wav, or .webm.

  • Language of the Audio File: Enter the language of the input audio in ISO-639-1. Use this option to improve accuracy and latency.

  • Output Randomness (Temperature): Defaults to 1.0. Adjust the randomness of the response. The range is between 0.0 (deterministic) and 1.0 (maximum randomness). We recommend altering this or Output Randomness (Top P) but not both. Start with a medium temperature (around 0.7) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature. If theyre too chaotic or off-track, decrease it.

Refer to Create transcription | OpenAI documentation for more information.

Translate a Recording

Use this operation to translate audio into English. OpenAI API limits the size of the audio file to 25 MB. OpenAI will use the whisper-1 model by default.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Audio.

  • Operation: Select Translate a Recording.

  • Input Data Field Name: Defaults to data. Enter the name of the binary property that contains the audio file in one of these formats: .flac, .mp3, .mp4, .mpeg, .mpga, .m4a, .ogg, .wav, or .webm.

  • Output Randomness (Temperature): Defaults to 1.0. Adjust the randomness of the response. The range is between 0.0 (deterministic) and 1.0 (maximum randomness). We recommend altering this or Output Randomness (Top P) but not both. Start with a medium temperature (around 0.7) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature. If theyre too chaotic or off-track, decrease it.

Refer to Create transcription | OpenAI documentation for more information.

For common errors or issues and suggested resolution steps, refer to Common Issues.


Help Scout credentials

URL: llms-txt#help-scout-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a Help Scout account.

Supported authentication methods

  • OAuth2

Refer to Help Scout's API documentation for more information about the service.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you need to configure OAuth2 from scratch or need more detail on what's happening in the OAuth web flow, you'll need to create a Help Scout app. Refer to the instructions in the Help Scout OAuth documentation for more information.


Marketstack credentials

URL: llms-txt#marketstack-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Marketstack account.

Supported authentication methods

  • API key

Refer to Marketstack's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: View and generate API keys in your Marketstack account dashboard.
  • Select whether to Use HTTPS: Make this selection based on your Marketstack account plan level:
    • Free plan: Turn off Use HTTPS
    • All other plans: Turn on Use HTTPS

Let AI specify the tool parameters

URL: llms-txt#let-ai-specify-the-tool-parameters

Contents:

  • Let the model fill in the parameter
  • Use the $fromAI() function
    • Parameters
    • Examples
    • Templates

When configuring tools connected to the Tools Agent, many parameters can be filled in by the AI model itself. The AI model will use the context from the task and information from other connected tools to fill in the appropriate details.

There are two ways to do this, and you can switch between them.

Let the model fill in the parameter

Each appropriate parameter field in the tool's editing dialog has an extra button at the end:

When you activate this button, the AI Agent fills in the expression for you, with no need for any further user input. The field itself shows a message indicating that the parameter has been defined automatically by the model.

If you want to define the parameter yourself, click the 'X' in this box to revert to user-defined values. The 'expression' field will then contain the expression generated by this feature; you can edit it further to add extra details, as described in the following section.

Activating this feature will overwrite any manual definition you may have already added.

Use the $fromAI() function

The $fromAI() function uses AI to dynamically fill in parameters for tools connected to the Tools AI agent.

The $fromAI() function is only available for tools connected to the AI Agent node. The $fromAI() function doesn't work with the Code tool or with other non-tool cluster sub-nodes.

To use the $fromAI() function, call it with the required key parameter:

The key parameter and other arguments to the $fromAI() function aren't references to existing values. Instead, think of these arguments as hints that the AI model will use to populate the right data.

For instance, if you choose a key called email, the AI Model will look for an email address in its context, other tools, and input data. In chat workflows, it may ask the user for an email address if it can't find one elsewhere. You can optionally pass other parameters like description to give extra context to the AI model.

The $fromAI() function accepts the following parameters:

| Parameter | Type | Required? | Description |
| --- | --- | --- | --- |
| key | string | Yes | A string representing the key or name of the argument. This must be between 1 and 64 characters in length and can only contain lowercase letters, uppercase letters, numbers, underscores, and hyphens. |
| description | string | No | A string describing the argument. |
| type | string | No | A string specifying the data type. Can be string, number, boolean, or json (defaults to string). |
| defaultValue | any | No | The default value to use for the argument. |

As an example, you could use the following $fromAI() expression to dynamically populate a field with a name:

If you don't need the optional parameters, you could simplify this as:

To dynamically populate the number of items you have in stock, you could use a $fromAI() expression like this:

If you only want to fill in parts of a field with a dynamic value from the model, you can use it in a normal expression as well. For example, if you want the model to fill out the subject parameter for an e-mail, but always prefix the generated value with the string 'Generated by AI:', you could use the following expression:

You can see the $fromAI() function in action in the following templates:

Examples:

Example 1 (unknown):

{{ $fromAI('email') }}

Example 2 (unknown):

$fromAI("name", "The commenter's name", "string", "Jane Doe")

Example 3 (unknown):

$fromAI("name")

Example 4 (unknown):

$fromAI("numItemsInStock", "Number of items in stock", "number", 5)

Self-hosted concurrency control

URL: llms-txt#self-hosted-concurrency-control

Contents:

  • Comparison to queue mode

Only for self-hosted n8n

This document is for self-hosted concurrency control. Read Cloud concurrency to learn how concurrency works with n8n Cloud accounts.

In regular mode, n8n doesn't limit how many production executions may run at the same time. This can lead to a scenario where too many concurrent executions thrash the event loop, causing performance degradation and unresponsiveness.

To prevent this, you can set a concurrency limit for production executions in regular mode. Use this to control how many production executions run concurrently, and queue up any concurrent production executions over the limit. These executions remain in the queue until concurrency capacity frees up, and are then processed in FIFO order.

Concurrency control is disabled by default. To enable it, set the N8N_CONCURRENCY_PRODUCTION_LIMIT environment variable to the maximum number of production executions you want to allow to run at the same time (see the example below). Keep the following in mind:

  • Concurrency control applies only to production executions: those started from a webhook or trigger node. It doesn't apply to any other kinds, such as manual executions, sub-workflow executions, error executions, or executions started from the CLI.

  • You can't retry queued executions. Cancelling or deleting a queued execution also removes it from the queue.

  • On instance startup, n8n resumes queued executions up to the concurrency limit and re-enqueues the rest.

  • To monitor concurrency control, watch logs for executions being added to the queue and released. In a future version, n8n will show concurrency control in the UI.

When you enable concurrency control, you can view the number of active executions and the configured limit at the top of a project's or workflow's executions tab.

Comparison to queue mode

In queue mode, you can control how many jobs a worker may run concurrently using the --concurrency flag.

Concurrency control in queue mode is a separate mechanism from concurrency control in regular mode, but the environment variable N8N_CONCURRENCY_PRODUCTION_LIMIT controls both of them. In queue mode, n8n takes the limit from this variable if set to a value other than -1, falling back to the --concurrency flag or its default.
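
For example, a worker started with the flag below would process up to 10 jobs at the same time (the value is illustrative):

n8n worker --concurrency=10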

Examples:

Example 1 (unknown):

export N8N_CONCURRENCY_PRODUCTION_LIMIT=20

Custom Code Tool node

URL: llms-txt#custom-code-tool-node

Contents:

  • Node parameters
    • Description
    • Language
    • JavaScript / Python box
  • Templates and examples
  • Related resources

Use the Custom Code Tool node to write code that an agent can run.

On this page, you'll find the node parameters for the Custom Code Tool node and links to more resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Give your custom code a description. This tells the agent when to use this tool. For example:

Call this tool to get a random color. The input should be a string with comma separated names of colors to exclude.

You can use JavaScript or Python.

JavaScript / Python box

You can access the tool input using query. For example, to take the input string and lowercase it:

Templates and examples

AI: Conversational agent with custom tool written in JavaScript

View template details

Custom LangChain agent written in JavaScript

View template details

OpenAI assistant with custom tools

View template details

Browse Custom Code Tool integration templates, or search all templates

Refer to LangChain's documentation on tools for more information about tools in LangChain.

View n8n's Advanced AI documentation.

Examples:

Example 1 (unknown):

let myString = query;
return myString.toLowerCase();

Strava node

URL: llms-txt#strava-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Strava node to automate work in Strava, and integrate Strava with other applications. n8n has built-in support for a wide range of Strava features, including creating new activities, and getting activity information.

On this page, you'll find a list of operations the Strava node supports and links to more resources.

Refer to Strava credentials for guidance on setting up authentication.

  • Activity
    • Create a new activity
    • Get an activity
    • Get all activities
    • Get all activity comments
    • Get all activity kudos
    • Get all activity laps
    • Get all activity zones
    • Update an activity

Templates and examples

AI Fitness Coach Strava Data Analysis and Personalized Training Insights

View template details

Export all Strava Activity Data to Google Sheets

View template details

Receive updates when a new activity gets created and tweet about it

View template details

Browse Strava integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Remove the container with the <container_id>

URL: llms-txt#remove-the-container-with-the-<container_id>

docker rm <container_id>


Airtop node

URL: llms-txt#airtop-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported
  • Node reference
    • Create a session and window
    • Extract content
    • Interacting with pages
    • Terminate a session

Use the Airtop node to automate work in Airtop, and integrate Airtop with other applications. n8n has built-in support for a wide range of Airtop features, enabling you to control a cloud-based web browser for tasks like querying, scraping, and interacting with web pages.

On this page, you'll find a list of operations the Airtop node supports, and links to more resources.

Refer to Airtop credentials for guidance on setting up authentication.

  • Session
    • Create session
    • Save profile on termination
    • Terminate session
  • Window
    • Create a new browser window
    • Load URL
    • Take screenshot
    • Close window
  • Extraction
    • Query page
    • Query page with pagination
    • Smart scrape page
  • Interaction
    • Click an element
    • Hover on an element
    • Type

Templates and examples

Automated LinkedIn Profile Discovery with Airtop and Google Search

View template details

Automate Web Interactions with Claude 3.5 Haiku and Airtop Browser Agent

View template details

Web Site Scraper for LLMs with Airtop

View template details

Browse Airtop integration templates, or search all templates

Refer to Airtop's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

Contact Airtop's Support for assistance or to create a feature request.

Create a session and window

Create an Airtop browser session to get a Session ID, then use it to create a new browser window. After this, you can use any extraction or interaction operation.

Extract content from a web browser using these operations:

  • Query page: Extract information from the current window.
  • Query page with pagination: Extract information from pages with pagination or infinite scrolling.
  • Smart scrape page: Get the window content as markdown.

Get JSON responses by using the JSON Output Schema parameter in query operations.

Interacting with pages

Click, hover, or type on elements by describing the element you want to interact with.

Terminate a session

End your session to save resources. Sessions are automatically terminated based on the Idle Timeout set in the Create Session operation or can be manually terminated using the Terminate Session operation.


ClickUp credentials

URL: llms-txt#clickup-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API access token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API access token
  • OAuth2

Refer to ClickUp's documentation for more information about the service.

Using API access token

To configure this credential, you'll need a ClickUp account and:

  • A Personal API Access Token

To get your personal API token:

  1. If you're using ClickUp 2.0, select your avatar in the lower-left corner and select Apps. If you're using ClickUp 3.0, select your avatar in the upper-right corner, select Settings, and scroll down to select Apps in the sidebar.
  2. Under API Token, select Generate.
  3. Copy your Personal API token and enter it in your n8n credential as the Access Token.

Refer to ClickUp's Personal Token documentation for more information.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you're self-hosting n8n, you'll need to create an OAuth app:

  1. In ClickUp, select your avatar and select Integrations.
  2. Select ClickUp API.
  3. Select Create an App.
  4. Enter a Name for your app.
  5. In n8n, copy the OAuth Redirect URL. Enter this as your ClickUp app's Redirect URL.
  6. Once you create your app, copy the client_id and secret and enter them in your n8n credential.
  7. Select Connect my account and follow the on-screen prompts to finish connecting the credential.

Refer to the ClickUp Oauth flow documentation for more information.


Google Cloud Realtime Database node

URL: llms-txt#google-cloud-realtime-database-node

Contents:

  • Operations
  • Templates and examples

Use the Google Cloud Realtime Database node to automate work in Google Cloud Realtime Database, and integrate Google Cloud Realtime Database with other applications. n8n has built-in support for a wide range of Google Cloud Realtime Database features, including writing, deleting, getting, and appending databases.

On this page, you'll find a list of operations the Google Cloud Realtime Database node supports and links to more resources.

Refer to Google Cloud Realtime Database credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Write data to a database
  • Delete data from a database
  • Get a record from a database
  • Append to a list of data
  • Update item on a database

Templates and examples

Browse Google Cloud Realtime Database integration templates, or search all templates


Medium node

URL: llms-txt#medium-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Medium node to automate work in Medium, and integrate Medium with other applications. n8n has built-in support for a wide range of Medium features, including creating posts, and getting publications.

On this page, you'll find a list of operations the Medium node supports and links to more resources.

Medium API no longer supported

Medium has stopped supporting the Medium API. The Medium node still appears within n8n, but you won't be able to configure new API keys to authenticate with.

Refer to Medium credentials for guidance on setting up existing API keys.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Post
    • Create a post
  • Publication
    • Get all publications

Templates and examples

Cross-post your blog posts

View template details

Posting from Wordpress to Medium

View template details

Publish a post to a publication on Medium

View template details

Browse Medium integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Aggregate

URL: llms-txt#aggregate

Contents:

  • Node parameters
    • Individual Fields
    • All Item Data
  • Node options
  • Templates and examples
  • Related resources

Use the Aggregate node to take separate items, or portions of them, and group them together into individual items.

To begin using the node, select the Aggregate you'd like to use:

Individual Fields

  • Input Field Name: Enter the name of the field in the input data to aggregate together.
  • Rename Field: This toggle controls whether to give the field a different name in the aggregated output data. Turn this on to add a different field name. If you're aggregating multiple fields, you must provide new output field names. You can't leave multiple fields undefined.
    • Output Field Name: This field is displayed when you turn on Rename Field. Enter the field name for the aggregated output data.

Refer to Node options for more configuration options.
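
As a minimal illustration of the Individual Fields mode (the items shown are hypothetical), aggregating the name field combines separate input items into a single output item containing a list:

Input items:

[
  { "name": "Alice" },
  { "name": "Bob" }
]

Output item, with Input Field Name set to name:

[
  { "name": ["Alice", "Bob"] }
]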

All Item Data

  • Put Output in Field: Enter the name of the field to output the data in.
  • Include: Select which fields to include in the output. Choose from:
    • All fields: The output includes data from all fields with no further parameters.
    • Specified Fields: If you select this option, enter a comma-separated list of fields the output should include data from in the Fields To Include parameter. The output will include only the fields in this list.
    • All Fields Except: If you select this option, enter a comma-separated list of fields the output should exclude data from in the Fields To Exclude parameter. The output will include all fields not in this list.

Refer to Node options for more configuration options.

You can further configure this node using these Options:

  • Disable Dot Notation: The node displays this toggle when you select the Individual Fields Aggregate. It controls whether to disallow referencing child fields using parent.child in the field name (turned on), or allow it (turned off, default).
  • Merge Lists: The node displays this toggle when you select the Individual Fields Aggregate. Turn it on if the field to aggregate is a list and you want to output a single flat list rather than a list of lists.
  • Include Binaries: The node displays this toggle for both Aggregate types. Turn it on if you want to include binary data from the input in the new output.
  • Keep Missing And Null Values: The node displays this toggle when you select the Individual Fields Aggregate. Turn it on to add a null (empty) entry in the output list when there is a null or missing value in the input. If turned off, the output ignores null or empty values.

Templates and examples

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

Scrape business emails from Google Maps without the use of any third party APIs

View template details

Build Your First AI Data Analyst Chatbot

View template details

Browse Aggregate integration templates, or search all templates

Learn more about data structure and data flow in n8n workflows.


QuestDB credentials

URL: llms-txt#questdb-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using database connection

You can use these credentials to authenticate the following nodes:

Create a user account on an instance of QuestDB.

Supported authentication methods

  • Database connection

Refer to QuestDB's documentation for more information about the service.

Using database connection

To configure this credential, you'll need:

  • The Host: Enter the host name or IP address for the server.
  • The Database: Enter the database name, for example qdb.
  • A User: Enter the username for the user account, as configured in the pg.user or pg.readonly.user property in server.conf. The default value is admin.
  • A Password: Enter the password for the user account, as configured in the pg.password or pg.readonly.password property in server.conf. The default value is quest.
  • SSL: Select whether the connection should use SSL, which sets the sslmode parameter. Options include:
    • Allow
    • Disable
    • Require
  • The Port: Enter the port number to use for the connection. Default is 8812.

Refer to List of supported connection properties for more information.


Quick Base node

URL: llms-txt#quick-base-node

Contents:

  • Operations
  • Templates and examples

Use the Quick Base node to automate work in Quick Base, and integrate Quick Base with other applications. n8n has built-in support for a wide range of Quick Base features, including creating, updating, deleting, and getting records, as well as getting fields, and downloading files.

On this page, you'll find a list of operations the Quick Base node supports and links to more resources.

Refer to Quick Base credentials for guidance on setting up authentication.

  • Field
    • Get all fields
  • File
    • Delete a file
    • Download a file
  • Record
    • Create a record
    • Delete a record
    • Get all records
    • Update a record
    • Upsert a record
  • Report
    • Get a report
    • Run a report

Templates and examples

Browse Quick Base integration templates, or search all templates


AI Agent Tool node

URL: llms-txt#ai-agent-tool-node

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Dynamic parameters for tools with $fromAI()

The AI Agent Tool node allows a root-level agent in your workflow to call other agents as tools to simplify multi-agent orchestration.

The primary agent can supervise and delegate work to AI Agent Tool nodes that specialize in different tasks and knowledge. This allows you to use multiple agents in a single workflow without the complexity of managing context and variables that sub-workflows require. You can nest AI Agent Tool nodes into multiple layers for more complex multi-tiered use cases.

On this page, you'll find the node parameters for the AI Agent Tool node, and links to more resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Configure the AI Agent Tool node using these parameters:

  • Description: Give a description to the LLM of this agent's purpose and scope of responsibility. A good, specific description tells the parent agent when to delegate tasks to this agent for processing.
  • Prompt (User Message): The prompt to the LLM explaining what actions to perform and what information to return.
  • Require Specific Output Format: Whether you want the node to require a specific output format. When turned on, n8n prompts you to connect one of the output parsers described on the main agent page.
  • Enable Fallback Model: Whether to enable a fallback model. When enabled, n8n prompts you to connect a backup chat model to use in case the primary model fails or isn't available.

Refine the AI Agent Tool node's behavior using these options:

  • System Message: A message to send to the agent before the conversation starts.
  • Max Iterations: The maximum number of times the model should run to generate a response before stopping.
  • Return Intermediate Steps: Whether to include intermediate steps the agent took in the final output.
  • Automatically Passthrough Binary Images: Whether binary images should be automatically passed through to the agent as image type messages.
  • Batch Processing: Whether to enable the following batch processing options for rate limiting:
    • Batch Size: The number of items to process in parallel. This helps with rate limiting but may impact the log output ordering.
    • Delay Between Batches: The number of milliseconds to wait between batches.

Templates and examples

Building Your First WhatsApp Chatbot

View template details

Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram

View template details

AI agent that can scrape webpages

View template details

Browse AI Agent Tool integration templates, or search all templates

Dynamic parameters for tools with $fromAI()

To learn how to dynamically populate parameters for app node tools, refer to Let AI specify tool parameters with $fromAI().
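As a quick illustration (a sketch only; the parameter name and description here are hypothetical), a tool parameter can be filled in by the model at runtime with an expression such as:

```
{{ $fromAI('subject', 'A short subject line for the email', 'string') }}
```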


Airtable node

URL: llms-txt#airtable-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported
  • Node reference
    • Get the Record ID
    • Create a Record ID column in Airtable
    • Use the List operation
    • Filter records when using the List operation
  • Common issues

Use the Airtable node to automate work in Airtable, and integrate Airtable with other applications. n8n has built-in support for a wide range of Airtable features, including creating, reading, listing, updating and deleting tables.

On this page, you'll find a list of operations the Airtable node supports and links to more resources.

Refer to Airtable credentials for guidance on setting up authentication.

  • Append the data to a table
  • Delete data from a table
  • List data from a table
  • Read data from a table
  • Update data in a table

Templates and examples

Handling Appointment Leads and Follow-up With Twilio, Cal.com and AI

View template details

Website Content Scraper & SEO Keyword Extractor with GPT-5-mini and Airtable

View template details

AI-Powered Social Media Amplifier

View template details

Browse Airtable integration templates, or search all templates

n8n provides a trigger node for Airtable. You can find the trigger node docs here.

Refer to Airtable's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

Get the Record ID

To fetch data for a particular record, you need the Record ID. There are two ways to get the Record ID.

Create a Record ID column in Airtable

To create a Record ID column in your table, refer to this article. You can then use this Record ID in your Airtable node.

Use the List operation

To get the Record ID of your record, you can use the List operation of the Airtable node. This operation will return the Record ID along with the fields. You can then use this Record ID in your Airtable node.

Filter records when using the List operation

To filter records from your Airtable base, use the Filter By Formula option. For example, if you want to return all the users that belong to the organization n8n, follow the steps mentioned below:

  1. Select 'List' from the Operation dropdown list.
  2. Enter the base ID and the table name in the Base ID and Table field, respectively.
  3. Click on Add Option and select 'Filter By Formula' from the dropdown list.
  4. Enter the following formula in the Filter By Formula field: {Organization}='n8n'.

Similarly, if you want to return all the users that don't belong to the organization n8n, use the following formula: NOT({Organization}='n8n').

Refer to the Airtable documentation to learn more about the formulas.

For common errors or issues and suggested resolution steps, refer to Common Issues.


Telegram credentials

URL: llms-txt#telegram-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API bot access token

You can use these credentials to authenticate the following nodes:

Create a Telegram account.

Supported authentication methods

  • API bot access token

Refer to Telegram's Bot API documentation for more information about the service.

Refer to the Telegram Bot Features documentation for more information on creating and working with bots.

Using API bot access token

To configure this credential, you'll need:

  • A bot Access Token

To generate your access token:

  1. Start a chat with the BotFather.
  2. Enter the /newbot command to create a new bot.
  3. The BotFather will ask you for a name and username for your new bot:
    • The name is the bot's name displayed in contact details and elsewhere. You can change the bot name later.
    • The username is a short name used in search, mentions, and t.me links. Use these guidelines when creating your username:
      • Must be between 5 and 32 characters long.
      • Not case sensitive.
      • May only include Latin characters, numbers, and underscores.
      • Must end in bot, like tetris_bot or TetrisBot.
      • You can't change the username later.
  4. Copy the bot token the BotFather generates and add it as the Access Token in n8n.

Refer to the BotFather Create a new bot documentation for more information.


Redis credentials

URL: llms-txt#redis-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using database connection

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • Database connection

Refer to Redis's developer documentation for more information about the service.

Using database connection

You'll need a user account on a Redis server and:

  • A Password
  • The Host name
  • The Port number
  • A Database Number
  • SSL

To configure this credential:

  1. Enter your user account Password.
  2. Enter the Host name of the Redis server. The default is localhost.
  3. Enter the Port number the connection should use. The default is 6379.
    • This number should match the tcp_port listed when you run the INFO command.
  4. Enter the Database Number. The default is 0.
  5. If the connection should use SSL, turn on the SSL toggle. If this toggle is off, the connection uses TCP only.
  6. If you enable SSL, you can also turn off TLS verification to allow self-signed certificates. WARNING: This makes the connection less secure.

Refer to Connecting to Redis | Generic client for more information.


Workflow 2: Generating reports

URL: llms-txt#workflow-2:-generating-reports

Contents:

  • Part 1: Getting data from different sources
  • Part 2: Generating file for regional sales
  • Part 3: Generating files for total sales

In this workflow, you will merge data from different sources, transform binary data, generate files, and send notifications about them. The final workflow should look like this:

Workflow 2 for aggregating data and generating files

To make things easier, let's split the workflow into three parts.

Part 1: Getting data from different sources

The first part of the workflow consists of five nodes:

Part 1: Getting data from different sources

  1. Use the HTTP Request node to get data from the API endpoint that stores company data. Configure the following node parameters:
    • Method: GET
    • URL: The Dataset URL you received in the email when you signed up for this course.
    • Authentication: Generic Credential Type
      • Generic Auth Type: Header Auth
      • Credentials for Header Auth: The Header Auth name and Header Auth value you received in the email when you signed up for this course.
    • Send Headers: Toggle to true
      • Specify Headers: Select Using Fields Below
      • Name: unique_id
      • Value: The unique ID you received in the email when you signed up for this course.
  2. Use the Airtable node to list data from the customers table (where you updated the fields region and subregion).

  3. Use the Merge node to merge data from the Airtable and HTTP Request nodes, based on matching the input fields for customerID.

  4. Use the Sort node to sort data by orderPrice in descending order.

  • What's the name of the employee assigned to customer 1?
  • What's the order status of customer 2?
  • What's the highest order price?

Part 2: Generating file for regional sales

The second part of the workflow consists of four nodes:

Part 2: Generating file for regional sales

  1. Use the If node to filter to only display orders from the region Americas.
  2. Use the Convert to File node to transform the incoming data from JSON to binary format. Convert each item to a separate file. (Bonus points if you can figure out how to name each report based on the orderID!)
  3. Use the Gmail node (or another email node) to send the files via email to an address you have access to. Note that you need to add an attachment with the data property.
  4. Use the Discord node to send a message in the n8n Discord channel #course-level-two. In the node, configure the following parameters:
    • Webhook URL: The Discord URL you received in the email when you signed up for this course.
    • Text: "I sent the file using email with the label ID {label ID}. My ID: " followed by the unique ID emailed to you when you registered for this course.
      Note that you need to replace the text in curly braces {} with expressions that reference the data from the nodes.
  • How many orders are assigned to the Americas region?
  • What's the total price of the orders in the Americas region?
  • How many items does the Convert to File node return?

Part 3: Generating files for total sales

The third part of the workflow consists of five nodes:

Part 3: Generating files for total sales

  1. Use the Loop Over Items node to split data from the Sort node into batches of 5.
  2. Use the Set node to set four values, referenced with expressions from the previous node: customerEmail, customerRegion, customerSince, and orderPrice.
  3. Use the Date & Time node to change the date format of the field customerSince to the format MM/DD/YYYY.
    • Set the Include Input Fields option to keep all the data together.
  4. Use the Convert to File node to create a CSV spreadsheet with the file name set as the expression: {{$runIndex > 0 ? 'file_low_orders':'file_high_orders'}}.
  5. Use the Discord node to send a message in the n8n Discord channel #course-level-two. In the node, configure the following parameters:
    • Webhook URL: The Discord URL you received in the email when you signed up for this course.
    • Text: "I created the spreadsheet {file name}. My ID:" followed by the unique ID emailed to you when you registered for this course.
      Note that you need to replace {file name} with an expression that references data from the previous Convert to File node.
  • What's the lowest order price in the first batch of items?
  • What's the formatted date of customer 7?
  • How many items does the Convert to File node return?

To check the configuration of the nodes, you can copy the JSON workflow code below and paste it into your Editor UI:

Examples:

Example 1 (unknown):

{
"meta": {
    "templateCredsSetupCompleted": true,
    "instanceId": "cb484ba7b742928a2048bf8829668bed5b5ad9787579adea888f05980292a4a7"
},
"nodes": [
    {
    "parameters": {
        "sendTo": "bart@n8n.io",
        "subject": "Your TPS Reports",
        "emailType": "text",
        "message": "Please find your TPS report attached.",
        "options": {
        "attachmentsUi": {
            "attachmentsBinary": [
            {}
            ]
        }
        }
    },
    "id": "d889eb42-8b34-4718-b961-38c8e7839ea6",
    "name": "Gmail",
    "type": "n8n-nodes-base.gmail",
    "typeVersion": 2.1,
    "position": [
        2100,
        500
    ],
    "credentials": {
        "gmailOAuth2": {
        "id": "HFesCcFcn1NW81yu",
        "name": "Gmail account 7"
        }
    }
    },
    {
    "parameters": {},
    "id": "c0236456-40be-4f8f-a730-e56cb62b7b5c",
    "name": "When clicking \"Execute workflow\"",
    "type": "n8n-nodes-base.manualTrigger",
    "typeVersion": 1,
    "position": [
        780,
        600
    ]
    },
    {
    "parameters": {
        "url": "https://internal.users.n8n.cloud/webhook/level2-erp",
        "authentication": "genericCredentialType",
        "genericAuthType": "httpHeaderAuth",
        "sendHeaders": true,
        "headerParameters": {
        "parameters": [
            {
            "name": "unique_id",
            "value": "recFIcD6UlSyxaVMQ"
            }
        ]
        },
        "options": {}
    },
    "id": "cc106fa0-6630-4c84-aea4-a4c7a3c149e9",
    "name": "HTTP Request",
    "type": "n8n-nodes-base.httpRequest",
    "typeVersion": 4.1,
    "position": [
        1000,
        500
    ],
    "credentials": {
        "httpHeaderAuth": {
        "id": "qeHdJdqqqaTC69cm",
        "name": "Course L2 Credentials"
        }
    }
    },
    {
    "parameters": {
        "operation": "search",
        "base": {
        "__rl": true,
        "value": "apprtKkVasbQDbFa1",
        "mode": "list",
        "cachedResultName": "All your base",
        "cachedResultUrl": "https://airtable.com/apprtKkVasbQDbFa1"
        },
        "table": {
        "__rl": true,
        "value": "tblInZ7jeNdlUOvxZ",
        "mode": "list",
        "cachedResultName": "Course L2, Workflow 1",
        "cachedResultUrl": "https://airtable.com/apprtKkVasbQDbFa1/tblInZ7jeNdlUOvxZ"
        },
        "options": {}
    },
    "id": "e5ae1927-b531-401c-9cb2-ecf1f2836ba6",
    "name": "Airtable",
    "type": "n8n-nodes-base.airtable",
    "typeVersion": 2,
    "position": [
        1000,
        700
    ],
    "credentials": {
        "airtableTokenApi": {
        "id": "MIplo6lY3AEsdf7L",
        "name": "Airtable Personal Access Token account 4"
        }
    }
    },
    {
    "parameters": {
        "mode": "combine",
        "mergeByFields": {
        "values": [
            {
            "field1": "customerID",
            "field2": "customerID"
            }
        ]
        },
        "options": {}
    },
    "id": "1cddc984-7fca-45e0-83b8-0c502cb4c78c",
    "name": "Merge",
    "type": "n8n-nodes-base.merge",
    "typeVersion": 2.1,
    "position": [
        1220,
        600
    ]
    },
    {
    "parameters": {
        "sortFieldsUi": {
        "sortField": [
            {
            "fieldName": "orderPrice",
            "order": "descending"
            }
        ]
        },
        "options": {}
    },
    "id": "2f55af2e-f69b-4f61-a9e5-c7eefaad93ba",
    "name": "Sort",
    "type": "n8n-nodes-base.sort",
    "typeVersion": 1,
    "position": [
        1440,
        600
    ]
    },
    {
    "parameters": {
        "conditions": {
        "options": {
            "caseSensitive": true,
            "leftValue": "",
            "typeValidation": "strict"
        },
        "conditions": [
            {
            "id": "d3afe65c-7c80-4caa-9d1c-33c62fbc2197",
            "leftValue": "={{ $json.region }}",
            "rightValue": "Americas",
            "operator": {
                "type": "string",
                "operation": "equals",
                "name": "filter.operator.equals"
            }
            }
        ],
        "combinator": "and"
        },
        "options": {}
    },
    "id": "2ed874a9-5bcf-4cc9-9b52-ea503a562892",
    "name": "If",
    "type": "n8n-nodes-base.if",
    "typeVersion": 2,
    "position": [
        1660,
        500
    ]
    },
    {
    "parameters": {
        "operation": "toJson",
        "mode": "each",
        "options": {
        "fileName": "=report_orderID_{{ $('If').item.json.orderID }}.json"
        }
    },
    "id": "d93b4429-2200-4a84-8505-16266fedfccd",
    "name": "Convert to File",
    "type": "n8n-nodes-base.convertToFile",
    "typeVersion": 1.1,
    "position": [
        1880,
        500
    ]
    },
    {
    "parameters": {
        "authentication": "webhook",
        "content": "I sent the file using email with the label ID  and wrote the binary file {file name}. My ID: 123",
        "options": {}
    },
    "id": "26f43f2c-1422-40de-9f40-dd2d80926b1c",
    "name": "Discord",
    "type": "n8n-nodes-base.discord",
    "typeVersion": 2,
    "position": [
        2320,
        500
    ],
    "credentials": {
        "discordWebhookApi": {
        "id": "WEBrtPdoLrhlDYKr",
        "name": "L2 Course Discord Webhook account"
        }
    }
    },
    {
    "parameters": {
        "batchSize": 5,
        "options": {}
    },
    "id": "0fa1fbf6-fe77-4044-a445-c49a1db37dec",
    "name": "Loop Over Items",
    "type": "n8n-nodes-base.splitInBatches",
    "typeVersion": 3,
    "position": [
        1660,
        700
    ]
    },
    {
    "parameters": {
        "assignments": {
        "assignments": [
            {
            "id": "ce839b80-c50d-48f5-9a24-bb2df6fdd2ff",
            "name": "customerEmail",
            "value": "={{ $json.customerEmail }}",
            "type": "string"
            },
            {
            "id": "0c613366-3808-45a2-89cc-b34c7b9f3fb7",
            "name": "region",
            "value": "={{ $json.region }}",
            "type": "string"
            },
            {
            "id": "0f19a88c-deb0-4119-8965-06ed62a840b2",
            "name": "customerSince",
            "value": "={{ $json.customerSince }}",
            "type": "string"
            },
            {
            "id": "a7e890d6-86af-4839-b5df-d2a4efe923f7",
            "name": "orderPrice",
            "value": "={{ $json.orderPrice }}",
            "type": "number"
            }
        ]
        },
        "options": {}
    },
    "id": "09b8584c-4ead-4007-a6cd-edaa4669a757",
    "name": "Edit Fields",
    "type": "n8n-nodes-base.set",
    "typeVersion": 3.3,
    "position": [
        1880,
        700
    ]
    },
    {
    "parameters": {
        "operation": "formatDate",
        "date": "={{ $json.customerSince }}",
        "options": {
        "includeInputFields": true
        }
    },
    "id": "c96fae90-e080-48dd-9bff-3e4506aafb86",
    "name": "Date & Time",
    "type": "n8n-nodes-base.dateTime",
    "typeVersion": 2,
    "position": [
        2100,
        700
    ]
    },
    {
    "parameters": {
        "options": {
        "fileName": "={{$runIndex > 0 ? 'file_low_orders':'file_high_orders'}}"
        }
    },
    "id": "43dc8634-2f16-442b-a754-89f47c51c591",
    "name": "Convert to File1",
    "type": "n8n-nodes-base.convertToFile",
    "typeVersion": 1.1,
    "position": [
        2320,
        700
    ]
    },
    {
    "parameters": {
        "authentication": "webhook",
        "content": "I created the spreadsheet {file name}. My ID: 123",
        "options": {}
    },
    "id": "05da1c22-d1f6-4ea6-9102-f74f9ae2e9d3",
    "name": "Discord1",
    "type": "n8n-nodes-base.discord",
    "typeVersion": 2,
    "position": [
        2540,
        700
    ],
    "credentials": {
        "discordWebhookApi": {
        "id": "WEBrtPdoLrhlDYKr",
        "name": "L2 Course Discord Webhook account"
        }
    }
    }
],
"connections": {
    "Gmail": {
    "main": [
        [
        {
            "node": "Discord",
            "type": "main",
            "index": 0
        }
        ]
    ]
    },
    "When clicking \"Execute workflow\"": {
    "main": [
        [
        {
            "node": "HTTP Request",
            "type": "main",
            "index": 0
        },
        {
            "node": "Airtable",
            "type": "main",
            "index": 0
        }
        ]
    ]
    },
    "HTTP Request": {
    "main": [
        [
        {
            "node": "Merge",
            "type": "main",
            "index": 0
        }
        ]
    ]
    },
    "Airtable": {
    "main": [
        [
        {
            "node": "Merge",
            "type": "main",
            "index": 1
        }
        ]
    ]
    },
    "Merge": {
    "main": [
        [
        {
            "node": "Sort",
            "type": "main",
            "index": 0
        }
        ]
    ]
    },
    "Sort": {
    "main": [
        [
        {
            "node": "Loop Over Items",
            "type": "main",
            "index": 0
        },
        {
            "node": "If",
            "type": "main",
            "index": 0
        }
        ]
    ]
    },
    "If": {
    "main": [
        [
        {
            "node": "Convert to File",
            "type": "main",
            "index": 0
        }
        ]
    ]
    },
    "Convert to File": {
    "main": [
        [
        {
            "node": "Gmail",
            "type": "main",
            "index": 0
        }
        ]
    ]
    },
    "Loop Over Items": {
    "main": [
        null,
        [
        {
            "node": "Edit Fields",
            "type": "main",
            "index": 0
        }
        ]
    ]
    },
    "Edit Fields": {
    "main": [
        [
        {
            "node": "Date & Time",
            "type": "main",
            "index": 0
        }
        ]
    ]
    },
    "Date & Time": {
    "main": [
        [
        {
            "node": "Convert to File1",
            "type": "main",
            "index": 0
        }
        ]
    ]
    },
    "Convert to File1": {
    "main": [
        [
        {
            "node": "Discord1",
            "type": "main",
            "index": 0
        }
        ]
    ]
    },
    "Discord1": {
    "main": [
        [
        {
            "node": "Loop Over Items",
            "type": "main",
            "index": 0
        }
        ]
    ]
    }
},
"pinData": {}
}

Credentials file

URL: llms-txt#credentials-file

Contents:

  • Structure of the credentials file
    • Outline structure
  • Parameters
    • name
    • displayName
    • documentationUrl
    • properties
    • authenticate
    • test

The credentials file defines the authorization methods for the node. The settings in this file affect what n8n displays in the Credentials modal, and must reflect the authentication requirements of the service you're connecting to.

In the credentials file, you can use all the n8n UI elements. n8n uses an encryption key to encrypt the credential data it stores.

Structure of the credentials file

The credentials file follows this basic structure:

  1. Import statements
  2. Create a class for the credentials
  3. Within the class, define the properties that control authentication for the node.

Outline structure

Refer to Example 1 under Examples below for a complete credentials file that follows this structure.

Parameters

name

String. The internal name of the object. Used to reference it from other places in the node.

displayName

String. The name n8n uses in the GUI.

documentationUrl

String. URL to your credentials documentation.

properties

Each object contains:

  • displayName: the name n8n uses in the GUI.
  • name: the internal name of the object. Used to reference it from other places in the node.
  • type: the data type expected, such as string.
  • default: the URL that n8n should use to test credentials.

authenticate

Object. Contains objects that tell n8n how to inject the authentication data as part of the API request.

type

String. If you're using an authentication method that sends data in the header, body, or query string, set this to 'generic'.

properties

Object. Defines the authentication methods. Options are:

  • body: Object. Sends authentication data in the request body. Can contain nested objects.

  • header: Object. Send authentication data in the request header.

  • qs: Object. Stands for "query string." Send authentication data in the request query string.

  • auth: Object. Used for Basic Auth. Requires username and password as the key names.

test

Provide a request object containing a URL and authentication type that n8n can use to test the credential.

Examples:

Example 1 (unknown):

import {
	IAuthenticateGeneric,
	ICredentialTestRequest,
	ICredentialType,
	INodeProperties,
} from 'n8n-workflow';

export class ExampleNode implements ICredentialType {
	name = 'exampleNodeApi';
	displayName = 'Example Node API';
	documentationUrl = '';
	properties: INodeProperties[] = [
		{
			displayName: 'API Key',
			name: 'apiKey',
			type: 'string',
			default: '',
		},
	];
	authenticate: IAuthenticateGeneric = {
		type: 'generic',
		properties: {
    		// Can be body, header, qs or auth
			qs: {
        		// Use the value from `apiKey` above
				'api_key': '={{$credentials.apiKey}}'
			}

		},
	};
	test: ICredentialTestRequest = {
		request: {
			baseURL: '={{$credentials?.domain}}',
			url: '/bearer',
		},
	};
}

Example 2 (unknown):

authenticate: IAuthenticateGeneric = {
  	type: 'generic',
  	properties: {
  		body: {
  			username: '={{$credentials.username}}',
  			password: '={{$credentials.password}}',
  		},
  	},
  };

Example 3 (unknown):

authenticate: IAuthenticateGeneric = {
  	type: 'generic',
  	properties: {
  		header: {
  			Authorization: '=Bearer {{$credentials.authToken}}',
  		},
  	},
  };

Example 4 (unknown):

authenticate: IAuthenticateGeneric = {
  	type: 'generic',
  	properties: {
  		qs: {
  			token: '={{$credentials.token}}',
  		},
  	},
  };

Executions environment variables

URL: llms-txt#executions-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

This page lists environment variables to configure workflow execution settings.

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| EXECUTIONS_MODE | Enum string: regular, queue | regular | Whether executions should run directly or using queue. Refer to Queue mode for more details. |
| EXECUTIONS_TIMEOUT | Number | -1 | Sets a default timeout (in seconds) to all workflows after which n8n stops their execution. Users can override this for individual workflows up to the duration set in EXECUTIONS_TIMEOUT_MAX. Set EXECUTIONS_TIMEOUT to -1 to disable. |
| EXECUTIONS_TIMEOUT_MAX | Number | 3600 | The maximum execution time (in seconds) that users can set for an individual workflow. |
| EXECUTIONS_DATA_SAVE_ON_ERROR | Enum string: all, none | all | Whether n8n saves execution data on error. |
| EXECUTIONS_DATA_SAVE_ON_SUCCESS | Enum string: all, none | all | Whether n8n saves execution data on success. |
| EXECUTIONS_DATA_SAVE_ON_PROGRESS | Boolean | false | Whether to save progress for each node executed (true) or not (false). |
| EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS | Boolean | true | Whether to save data of executions when started manually. |
| EXECUTIONS_DATA_PRUNE | Boolean | true | Whether to delete data of past executions on a rolling basis. |
| EXECUTIONS_DATA_MAX_AGE | Number | 336 | The execution age (in hours) before it's deleted. |
| EXECUTIONS_DATA_PRUNE_MAX_COUNT | Number | 10000 | Maximum number of executions to keep in the database. 0 = no limit. |
| EXECUTIONS_DATA_HARD_DELETE_BUFFER | Number | 1 | How old (hours) the finished execution data has to be to get hard-deleted. By default, this buffer excludes recent executions as the user may need them while building a workflow. |
| EXECUTIONS_DATA_PRUNE_HARD_DELETE_INTERVAL | Number | 15 | How often (minutes) execution data should be hard-deleted. |
| EXECUTIONS_DATA_PRUNE_SOFT_DELETE_INTERVAL | Number | 60 | How often (minutes) execution data should be soft-deleted. |
| N8N_CONCURRENCY_PRODUCTION_LIMIT | Number | -1 | Max production executions allowed to run concurrently, in both regular and scaling modes. Set to -1 to disable in regular mode. |
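For example (a minimal sketch; adapt it to however you launch n8n), you could cap workflow runtime by exporting the variables before starting n8n:

```bash
# Stop workflows after 5 minutes by default, but let users raise it to 1 hour
export EXECUTIONS_TIMEOUT=300
export EXECUTIONS_TIMEOUT_MAX=3600
n8n start
```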

WhatsApp Business Cloud credentials

URL: llms-txt#whatsapp-business-cloud-credentials

Contents:

  • Requirements
  • Supported authentication methods
  • Related resources
  • Using API key
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

To create credentials for WhatsApp, you need the following Meta assets:

  • A Meta developer account: A developer account allows you to create and manage Meta apps, including WhatsApp integrations.

Set up a Meta developer account

  1. Visit the Facebook Developers site.
  2. Click Getting Started in the upper-right corner (if the link says My Apps, you've already set up a developer account).
  3. Agree to terms and conditions.
  4. Provide a phone number for verification.
  5. Select your occupation or role.
  • A Meta business portfolio: WhatsApp messaging services require a Meta business portfolio, formerly called a Business Manager account. The UI may still show either option.

Set up a Meta business portfolio

  1. Visit the Facebook Business site.
  2. Select Create an account.
    • If you already have a Facebook Business account and portfolio, but want a new portfolio, open the business portfolio selector in the left-side menu and select Create a business portfolio.
  3. Enter a Business portfolio name.
  4. Enter your name.
  5. Enter a business email.
  6. Select Submit or Create.
  • A Meta business app configured with WhatsApp: Once you have a developer account, you will create a Meta business app.

Set up a Meta business app with WhatsApp

  1. Visit the Meta for Developers Apps dashboard
  2. Select Create app.
  3. In Add products to your app, select Set up in the WhatsApp tile. Refer to Add the WhatsApp Product for more detail.
  4. This opens the WhatsApp Quickstart page. Select your business portfolio.
  5. Select Continue.
  6. In the left-side menu, go to App settings > Basic.
  7. Set the Privacy Policy URL and Terms of Service URL for the app.
  8. Change the App Mode to Live.

Supported authentication methods

  • API key
  • OAuth2

Refer to WhatsApp's API documentation for more information about the service.

Meta classifies users who create WhatsApp business apps as Tech Providers; refer to Meta's Get Started for Tech Providers for more information.

Using API key

You need WhatsApp API key credentials to use the WhatsApp Business Cloud node.

To configure this credential, you'll need:

  • An API Access Token
  • A Business Account ID

To generate an access token, follow these steps:

  1. Visit the Meta for Developers Apps dashboard.
  2. Select your Meta app.
  3. In the left-side menu, select WhatsApp > API Setup.
  4. Select Generate access token and confirm the access you want to grant.
  5. Copy the Access token and add it to n8n as the Access Token.
  6. Copy the WhatsApp Business Account ID and add it to n8n as the Business Account ID.

Refer to Test Business Messaging on WhatsApp for more information on the above steps.

Fully verifying and launching your app will take further configuration. Refer to Meta's Get Started for Tech Providers Steps 5 and beyond for more information. Refer to App Review for more information on the Meta App Review process.

Using OAuth2

You need WhatsApp OAuth2 credentials to use the WhatsApp Trigger node.

To configure this credential, you'll need:

  • A Client ID
  • A Client Secret

To retrieve these items, follow these steps:

  1. Visit the Meta for Developers Apps dashboard.
  2. Select your Meta app.
  3. In the left-side menu, select App settings > Basic.
  4. Copy the App ID and enter it as the Client ID within the n8n credential.
  5. Copy the App Secret and enter it as the Client Secret within the n8n credential.

Fully verifying and launching your app will take further configuration. Refer to Meta's Get Started for Tech Providers Steps 5 and beyond for more information. Refer to App Review for more information on the Meta App Review process.


Chargebee node

URL: llms-txt#chargebee-node

Contents:

  • Operations
  • Templates and examples

Use the Chargebee node to automate work in Chargebee, and integrate Chargebee with other applications. n8n has built-in support for a wide range of Chargebee features, including creating customers, returning invoices, and canceling subscriptions.

On this page, you'll find a list of operations the Chargebee node supports and links to more resources.

Refer to Chargebee credentials for guidance on setting up authentication.

  • Customer
    • Create a customer
  • Invoice
    • Return the invoices
    • Get URL for the invoice PDF
  • Subscription
    • Cancel a subscription
    • Delete a subscription

Templates and examples

Browse Chargebee integration templates, or search all templates


Notion node common issues

URL: llms-txt#notion-node-common-issues

Contents:

  • Relation property not displaying
  • Create toggle heading
  • Handle null and empty values

Here are some common errors and issues with the Notion node and steps to resolve or troubleshoot them.

Relation property not displaying

The Notion node only supports displaying the data relation property for two-way relations. When you connect two Notion databases with a two-way relationship, you can select or filter by the relation property when working with the Notion node's Database Page resource.

To enable two-way relations, edit the relation property in Notion and enable the Show on [name of related database] option to create a reverse relation. Select a name to use for the relation in the new context. The relation is now accessible in n8n when filtering or selecting.

If you need to work with Notion databases that have a one-way relationship, you can use the HTTP Request node with your existing Notion credentials. For example, to update a one-way relationship, you can send a PATCH request to the URL shown in Example 1 below.

Enable Send Body, set the Body Content Type to JSON, and set Specify Body to Using JSON. Afterward, you can enter a JSON object like the one shown in Example 2 below into the JSON field.

Create toggle heading

The Notion node allows you to create headings and toggles when adding blocks to Page, Database Page, or Block resources. Creating toggleable headings isn't yet supported by the Notion node itself.

You can work around this by creating a regular heading and then modifying it to enable the is_toggleable property:

  1. Add a heading with Notion node.
  2. Select the resource you want to add a heading to:
    • To add a new page with a heading, select the Page or Database Page resources with the Create operation.
    • To add a heading to an existing page, select the Block resource with the Append After operation.
  3. Select Add Block and set the Type Name or ID to either Heading 1, Heading 2, or Heading 3.
  4. Add an HTTP Request node connected to the Notion node and select the GET method.
  5. Set the URL to https://api.notion.com/v1/blocks/<block_ID>. For example, if you added the heading to an existing page, you could use the following URL: https://api.notion.com/v1/blocks/{{ $json.results[0].id }}. If you created a new page instead of appending a block, you may need to discover the block ID by querying the page contents first.
  6. Select Predefined Credential Type and connect your existing Notion credentials.
  7. Add an Edit Fields (Set) node after the HTTP Request node.
  8. Add heading_1.is_toggleable as a new Boolean field set to true. Swap heading_1 for a different heading number as necessary.
  9. Add a second HTTP Request node after the Edit Fields (Set) node.
  10. Set the Method to PATCH and use https://api.notion.com/v1/blocks/{{ $json.id }} as the URL value.
  11. Select Predefined Credential Type and connect your existing Notion credentials.
  12. Enable Send Body and set a parameter.
  13. Set the parameter Name to heading_1 (substitute heading_1 for the heading level you are using).
  14. Set the parameter Value to {{ $json.heading_1 }} (substitute heading_1 for the heading level you are using).

The above sequence will create a regular heading block. It will query the newly created header, add the is_toggleable property, and update the heading block.
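For reference, the body sent by the final PATCH request ends up shaped roughly like this (an illustrative sketch of a Notion heading_1 block object with hypothetical heading text, not output captured from the workflow above):

```json
{
  "heading_1": {
    "rich_text": [
      { "type": "text", "text": { "content": "Quarterly results" } }
    ],
    "is_toggleable": true
  }
}
```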

Handle null and empty values

You may receive a validation error when working with the Notion node if you submit fields with empty or null values. This can occur any time you populate fields from previous nodes when that data is missing.

To work around this, check for the existence of the field data before sending it to Notion or use a default value.

To check for the data before executing the Notion node, use an If node to check whether the field is unset. This allows you to use the Edit Fields (Set) node to conditionally remove the field when it doesn't have a valid value.

As an alternative, you can set a default value if the incoming data doesn't provide one.
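For example (a sketch assuming a hypothetical name field coming from the previous node), you can supply a fallback directly in the expression you map into the Notion field, since n8n expressions accept JavaScript's nullish coalescing operator:

```
{{ $json.name ?? 'Unknown' }}
```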

Examples:

Example 1 (unknown):

https://api.notion.com/v1/pages/<page_id>

Example 2 (unknown):

{
	"properties": {
		"Account": {
			"relation": [
				{
					"id": "<your_relation_ID>"
				}
			]
		}
	}
}

HTTP Request credentials

URL: llms-txt#http-request-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Using predefined credential type
  • Using basic auth
  • Using digest auth
  • Using header auth
  • Using bearer auth
  • Using OAuth1
  • Using OAuth2
    • Authorization Code grant type

You can use these credentials to authenticate the following nodes:

You must use the authentication method required by the app or service you want to query.

If you need to secure the authentication with an SSL certificate, refer to Provide an SSL certificate for the information you'll need.

Supported authentication methods

  • Predefined credential type
  • Basic auth (generic credential type)
  • Custom auth (generic credential type)
  • Digest auth (generic credential type)
  • Header auth (generic credential type)
  • Bearer auth (generic credential type)
  • OAuth1 (generic credential type)
  • OAuth2 (generic credential type)
  • Query auth (generic credential type)

Refer to HTTP authentication for more information relating to generic credential types.

Predefined credential types

n8n recommends using predefined credential types whenever there's a credential type available for the service you want to connect to. It offers an easier way to set up and manage credentials, compared to configuring generic credentials.

You can use Predefined credential types to perform custom operations with some APIs where n8n has a node for the platform. For example, n8n has an Asana node, and supports using your Asana credentials in the HTTP Request node. Refer to Custom operations for more information.

Using predefined credential type

To use a predefined credential type:

  1. Open your HTTP Request node, or add a new one to your workflow.
  2. In Authentication, select Predefined Credential Type.
  3. In Credential Type, select the API you want to use.
  4. In Credential for <API name>, you can:
    1. Select an existing credential for that platform, if available.
    2. Select Create New to create a new credential.

Refer to Custom API operations for more information.

Use this generic authentication if your app or service supports basic authentication.

To configure this credential, enter:

  • The Username you use to access the app or service your HTTP Request is targeting
  • The Password that goes with that username

Use this generic authentication if your app or service supports digest authentication.

To configure this credential, enter:

  • The Username you use to access the app or service your HTTP Request is targeting
  • The Password that goes with that username

Use this generic authentication if your app or service supports header authentication.

To configure this credential, enter:

  • The header Name you need to pass to the app or service your HTTP request is targeting
  • The Value for the header

Read more about HTTP headers

Use this generic authentication if your app or service supports bearer authentication. This authentication type is actually just header authentication with the Name set to Authorization and the Value set to Bearer <token>.

To configure this credential, enter:

  • The Bearer Token you need to pass to the app or service your HTTP request is targeting

Read more about bearer authentication.

Use this generic authentication if your app or service supports OAuth1 authentication.

To configure this credential, enter:

  • An Authorization URL: Also known as the Resource Owner Authorization URI. This URL typically ends in /oauth1/authorize. The temporary credentials are sent here to prompt a user to complete authorization.
  • An Access Token URL: This is the URI used for the initial request for temporary credentials. This URL typically ends in /oauth1/request or /oauth1/token.
  • A Consumer Key: Also known as the client key, like a username. This specifies the oauth_consumer_key to use for the call.
  • A Consumer Secret: Also known as the client secret, like a password.
  • A Request Token URL: This is the URI used to switch from temporary credentials to long-lived credentials after authorization. This URL typically ends in /oauth1/access.
  • Select the Signature Method the auth handshake uses. This specifies the oauth_signature_method to use for the call. Options include:
    • HMAC-SHA1
    • HMAC-SHA256
    • HMAC-SHA512

For most OAuth1 integrations, you'll need to configure an app, service, or integration to generate the values for most of these fields. Use the OAuth Redirect URL in n8n as the redirect URL or redirect URI for such a service.

Read more about OAuth1 and the OAuth1 authorization flow.

Use this generic authentication if your app or service supports OAuth2 authentication.

Requirements to configure this credential depend on the Grant Type selected. Refer to OAuth Grant Types for more information on each grant type.

For most OAuth2 integrations, you'll need to configure an app, service, or integration. Use the OAuth Redirect URL in n8n as the redirect URL or redirect URI for such a service.

Read more about OAuth2.

Authorization Code grant type

Use Authorization Code grant type to exchange an authorization code for an access token. The auth flow uses the redirect URL to return the user to the client. Then the application gets the authorization code from the URL and uses it to request an access token. Refer to Authorization Code Request for more information.

To configure this credential, select Authorization Code as the Grant Type.

  • An Authorization URL
  • An Access Token URL
  • A Client ID: The ID or username to log in with.
  • A Client Secret: The secret or password used to log in with.
  • Optional: Enter one or more Scopes for the credential. If unspecified, the credential will request all scopes available to the client.
  • Optional: Some services require more query parameters. If your service does, add them as Auth URI Query Parameters.
  • An Authentication type: Select the option that best suits your use case. Options include:
    • Header: Send the credentials as a basic auth header.
    • Body: Send the credentials in the body of the request.
  • Optional: Choose whether to Ignore SSL Issues. If turned on, n8n will connect even if SSL validation fails.

Client Credentials grant type

Use the Client Credentials grant type when applications request an access token to access their own resources, not on behalf of a user. Refer to Client Credentials for more information.

To configure this credential, select Client Credentials as the Grant Type.

  • An Access Token URL: The URL to hit to begin the OAuth2 flow. Typically this URL ends in /token.
  • A Client ID: The ID or username to use to log in to the client.
  • A Client Secret: The secret or password used to log in to the client.
  • Optional: Enter one or more Scopes for the credential. Most services don't support scopes for Client Credentials grant types; only enter scopes here if yours does.
  • An Authentication type: Select the option that best suits your use case. Options include:
    • Header: Send the credentials as a basic auth header.
    • Body: Send the credentials in the body of the request.
  • Optional: Choose whether to Ignore SSL Issues. If turned on, n8n will connect even if SSL validation fails.

Proof Key for Code Exchange (PKCE) grant type is an extension to the Authorization Code flow to prevent CSRF and authorization code injection attacks.

To configure this credential, select PKCE as the Grant Type.

  • An Authorization URL
  • An Access Token URL
  • A Client ID: The ID or username to log in with.
  • A Client Secret: The secret or password used to log in with.
  • Optional: Enter one or more Scopes for the credential. If unspecified, the credential will request all scopes available to the client.
  • Optional: Some services require more query parameters. If your service does, add them as Auth URI Query Parameters.
  • An Authentication type: Select the option that best suits your use case. Options include:
    • Header: Send the credentials as a basic auth header.
    • Body: Send the credentials in the body of the request.
  • Optional: Choose whether to Ignore SSL Issues. If turned on, n8n will connect even if SSL validation fails.

Using query auth

Use this generic authentication if your app or service supports passing authentication as a single key/value query parameter. (For multiple query parameters, use Custom Auth.)

To configure this credential, enter:

  • A query parameter key or Name
  • A query parameter Value
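For example, with the Name set to apikey and the Value set to my-api-key (hypothetical values, against a hypothetical endpoint), the authenticated request URL ends up looking like this:

```
https://api.example.com/v1/items?apikey=my-api-key
```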

Using custom auth

Use this generic authentication if your app or service supports passing authentication as multiple key/value query parameters or you need more flexibility than the other generic auth options.

The Custom Auth credential expects JSON data to define your credential. You can use headers, qs, body or a mix. Review the examples below to get started.

Sending two headers

Sending header and query string

Provide an SSL certificate

You can send an SSL certificate with your HTTP request. Create the SSL certificate as a separate credential for use by the node:

  1. In the HTTP Request node Settings, turn on SSL Certificates.
  2. On the Parameters tab, add an existing SSL Certificate credential to Credential for SSL Certificates or create a new one.

To configure your SSL Certificates credential, you'll need to add:

  • The Certificate Authority CA bundle
  • The Certificate (CRT): May also appear as a Public Key, depending on who your issuing CA was and how they format the cert
  • The Private Key (KEY)
  • Optional: If the Private Key is encrypted, enter a Passphrase for the private key.

If your SSL certificate is in a single file (such as a .pfx file), you'll need to open the file to copy details from it to paste into the appropriate fields:

  • Enter the Public Key/CRT as the Certificate
  • Enter the Private Key/KEY in that field

Examples:

Example 1 (unknown):

{
	"headers": {
		"X-AUTH-USERNAME": "username",
		"X-AUTH-PASSWORD": "password"
	}
}

Example 2 (unknown):

{
	 "body" : {
		"user": "username",
		"pass": "password"
	}
}

Example 3 (unknown):

{
	"qs": { 
		"appid": "123456",
		"apikey": "my-api-key"
	}
}

Example 4 (unknown):

{
	"headers": {
		"api-version": "202404"
	},
	"qs": {
		"apikey": "my-api-key"
	}
}

Data table

URL: llms-txt#data-table

Contents:

  • Node parameters
    • Resource
    • Operations
  • Related resources

Use the Data Table node to permanently save data across workflow executions in a table format. It provides functionality to perform various data operations on stored data. See Data tables.

Select the resource on which you want to operate.

Select the operation you want to run on the resource:

  • Delete: Delete one or more rows.
    • Dry Run: Simulate a deletion before finalizing it. If you switch on this option, n8n returns the rows that will be deleted by the operation. Default state is off.
  • Get: Get one or more rows from your table based on defined filters.
    • Limit: The number of rows you want to return, specified as a number. Default value is 50.
    • Return all: Switch on to return all data. Default value is off.
  • If Row Exists: Specify a set of conditions to match input items that exist in the data table.
  • If Row Does Not Exist: Specify a set of conditions to match input items that don't exist in the data table.
  • Insert: Insert rows into an existing table.
    • Optimize Bulk: Optimize the speed of insertions when working with many rows. If you switch on this option, n8n won't return the data that was inserted. Default state is off.
  • Update: Update one or more rows.
  • Upsert: Upsert one or more rows. If the row exists, it's updated; otherwise, a new row is created.

Data tables explains how to create and manage data tables.


For n8n Cloud

URL: llms-txt#for-n8n-cloud

curl -X 'GET'
'/api/v/workflows?active=true&limit=150&cursor=MTIzZTQ1NjctZTg5Yi0xMmQzLWE0NTYtNDI2NjE0MTc0MDA'
-H 'accept: application/json'


---

## Bitwarden credentials

**URL:** llms-txt#bitwarden-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API key

You can use these credentials to authenticate the following node:

- [Bitwarden](../../app-nodes/n8n-nodes-base.bitwarden/)

Create a [Bitwarden](https://vault.bitwarden.com/#/register?org=teams) Teams organization or Enterprise organization account. (Bitwarden only makes the Bitwarden Public API available for these [organization](https://bitwarden.com/help/about-organizations/) plans.)

## Supported authentication methods

- API key

Refer to [Bitwarden's Public API documentation](https://bitwarden.com/help/public-api/) for more information about the service.

## Using API key

To configure this credential, you'll need:

- A **Client ID**: Provided when you generate an API key
- A **Client Secret**: Provided when you generate an API key
- The **Environment**:
  - Choose **Cloud-hosted** if you don't self-host Bitwarden. No further configuration required.
  - Choose **Self-hosted** if you host Bitwarden on your own server. Enter your **Self-hosted domain** in the appropriate field.

The Client ID and Client Secret must be for an **Organization API Key**, not a Personal API Key. Refer to the [Bitwarden Public API Authentication documentation](https://bitwarden.com/help/public-api/#authentication) for instructions on generating an Organization API Key.

---

## Self-hosting n8n

**URL:** llms-txt#self-hosting-n8n

This section provides guidance on setting up n8n for both the Enterprise and Community self-hosted editions. The Community edition is free; the Enterprise edition isn't.

See [Community edition features](community-edition-features/) for a list of available features.

- **Installation and server setups**

Install n8n on any platform using npm or Docker. Or follow our guides to popular hosting platforms.

[Docker installation guide](installation/docker/)

Learn how to configure n8n with environment variables.

[Environment Variables](configuration/environment-variables/)

- **Users and authentication**

Choose and set up user authentication for your n8n instance.

[Authentication](configuration/user-management-self-hosted/)

Manage data, modes, and processes to keep n8n running smoothly at scale.

[Scaling](scaling/queue-mode/)

Secure your n8n instance by setting up SSL, SSO, or 2FA or blocking or opting out of some data collection or features.

[Securing n8n guide](securing/overview/)

New to n8n or AI? Try our Self-hosted AI Starter Kit. Curated by n8n, it combines the self-hosted n8n platform with compatible AI products and components to get you started building self-hosted AI workflows.

[Starter kits](starter-kits/ai-starter-kit/)

Self-hosting knowledge prerequisites

Self-hosting n8n requires technical knowledge, including:

- Setting up and configuring servers and containers
- Managing application resources and scaling
- Securing servers and applications
- Configuring n8n

n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends [n8n Cloud](https://n8n.io/cloud/).

---

## MySQL node common issues

**URL:** llms-txt#mysql-node-common-issues

**Contents:**
- Update rows by composite key
- Can't connect to a local MySQL server when using Docker
  - If only MySQL is in Docker
  - If only n8n is in Docker
  - If MySQL and n8n are running in separate Docker containers
  - If MySQL and n8n are running in the same Docker container
- Decimal numbers returned as strings

Here are some common errors and issues with the [MySQL node](../) and steps to resolve or troubleshoot them.

## Update rows by composite key

The MySQL node's **Update** operation lets you update rows in a table by providing a **Column to Match On** and a value. This works for tables where a single column value can uniquely identify individual rows.

You can't use this pattern for tables that use [composite keys](https://en.wikipedia.org/wiki/Composite_key), where you need multiple columns to uniquely identify a row. An example of this is MySQL's [`user` table](https://mariadb.com/kb/en/mysql-user-table/) in the `mysql` database, where you need both the `user` and `host` columns to uniquely identify rows.

To update tables with composite keys, write the query manually with the **Execute SQL** operation instead. There, you can match on multiple values, as in Example 1 at the end of this section, which matches on both `customer_id` and `product_id`.

## Can't connect to a local MySQL server when using Docker

When you run either n8n or MySQL in Docker, you need to configure the network so that n8n can connect to MySQL.

The solution depends on how you're hosting the two components.

### If only MySQL is in Docker

If only MySQL is running in Docker, configure MySQL to listen on all interfaces by binding to `0.0.0.0` inside the container (the official images are already configured this way).

When running the container, [publish the port](https://docs.docker.com/get-started/docker-concepts/running-containers/publishing-ports/) with the `-p` flag. By default, MySQL runs on port 3306, so your Docker command should look like Example 2 at the end of this section.

When configuring [MySQL credentials](../../../credentials/mysql/), the `localhost` address should work without a problem (set the **Host** to `localhost`).

### If only n8n is in Docker

If only n8n is running in Docker, configure MySQL to listen on all interfaces by binding to `0.0.0.0` on the host.

If you are running n8n in Docker on **Linux**, use the `--add-host` flag to map `host.docker.internal` to `host-gateway` when you start the container, as shown in Example 3 at the end of this section.

If you are using Docker Desktop, this is automatically configured for you.

When configuring [MySQL credentials](../../../credentials/mysql/), use `host.docker.internal` as the **Host** address instead of `localhost`.

### If MySQL and n8n are running in separate Docker containers

If both n8n and MySQL are running in Docker in separate containers, you can use Docker networking to connect them.

Configure MySQL to listen on all interfaces by binding to `0.0.0.0` inside of the container (the official images are already configured this way). Add both the MySQL and n8n containers to the same [user-defined bridge network](https://docs.docker.com/engine/network/drivers/bridge/).

When configuring [MySQL credentials](../../../credentials/mysql/), use the MySQL container's name as the host address instead of `localhost`. For example, if you call the MySQL container `my-mysql`, you would set the **Host** to `my-mysql`.

### If MySQL and n8n are running in the same Docker container

If MySQL and n8n are running in the same Docker container, the `localhost` address doesn't need any special configuration. You can configure MySQL to listen on `localhost` and configure the **Host** in the [MySQL credentials in n8n](../../../credentials/mysql/) to use `localhost`.

## Decimal numbers returned as strings

By default, the MySQL node returns [`DECIMAL` values](https://dev.mysql.com/doc/refman/8.4/en/fixed-point-types.html) as strings. This is intentional, to avoid the loss of precision that can occur due to limitations in the way JavaScript represents numbers. You can learn more about the decision in the documentation for the [MySQL library](https://sidorares.github.io/node-mysql2/docs/api-and-configurations) that n8n uses.

To output decimal values as numbers instead of strings, accepting the risk of precision loss, enable the **Output Decimals as Numbers** option.

As an alternative, you can manually convert from the string to a decimal using the [`toFloat()` function](../../../../../code/builtin/data-transformation-functions/strings/#string-toFloat) with [`toFixed()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/toFixed) or with the [Edit Fields (Set) node](../../../core-nodes/n8n-nodes-base.set/) after the MySQL node. Be aware that you may still need to account for a potential loss of precision.
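
For illustration, here's a minimal Code node sketch of that manual conversion. It assumes a hypothetical `DECIMAL` column named `price` returned as a string; adjust the field name and rounding to your data.

```javascript
// Minimal sketch for a Code node (mode: Run Once for Each Item), placed after the
// MySQL node. It assumes a hypothetical DECIMAL column named "price" returned as a
// string such as "19.9900".
const item = $input.item;
const asNumber = parseFloat(item.json.price); // convert the string to a JavaScript number
item.json.price = Number(asNumber.toFixed(2)); // round to two decimal places
return item;
```

In an expression, `{{ $json.price.toFloat() }}` produces a number directly, while chaining `.toFixed(2)` returns a string rounded to two decimals.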

**Examples:**

Example 1 (unknown):
```unknown
UPDATE orders SET quantity = 3 WHERE customer_id = 538 AND product_id = 800;
```

Example 2 (unknown):
```unknown
docker run -p 3306:3306 --name my-mysql -d mysql:latest
```

Example 3 (unknown):
```unknown
docker run -it --rm --add-host host.docker.internal:host-gateway --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n
```

Kitemaker credentials

URL: llms-txt#kitemaker-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following nodes:

Create a Kitemaker account.

Supported authentication methods

Refer to Kitemaker's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • A Personal Access Token: Generate a personal access token from Manage > Developer settings. Refer to API Authentication for more detailed instructions.

Sub-workflows

URL: llms-txt#sub-workflows

Contents:

  • Set up and use a sub-workflow
    • Create the sub-workflow
    • Call the sub-workflow
  • How data passes between workflows
  • Sub-workflow conversion

You can call one workflow from another workflow. This allows you to build modular, microservice-like workflows. It can also help if your workflow grows large enough to encounter memory issues. Creating sub-workflows uses the Execute Workflow and Execute Sub-workflow Trigger nodes.

Sub-workflow executions don't count towards your plan's monthly execution or active workflow limits.

Set up and use a sub-workflow

This section walks through setting up both the parent workflow and sub-workflow.

Create the sub-workflow

  1. Create a new workflow.

Create sub-workflows from existing workflows

You can optionally create a sub-workflow directly from an existing parent workflow using the Execute Sub-workflow node. In the node, select the Database and From list options and select Create a sub-workflow in the list.

You can also extract selected nodes directly using Sub-workflow conversion in the context menu.

  2. Optional: configure which workflows can call the sub-workflow:

    1. Select the Options menu > Settings. n8n opens the Workflow settings modal.
    2. Change the This workflow can be called by setting. Refer to Workflow settings for more information on configuring your workflows.

  3. Add the Execute Sub-workflow trigger node (if you are searching under trigger nodes, this is also titled When Executed by Another Workflow).

  4. Set the Input data mode to choose how you will define the sub-workflow's input data:

    • Define using fields below: Choose this mode to define individual input names and data types that the calling workflow needs to provide. The Execute Sub-workflow node or Call n8n Workflow Tool node in the calling workflow will automatically pull in the fields defined here.
    • Define using JSON example: Choose this mode to provide an example JSON object that demonstrates the expected input items and their types.
    • Accept all data: Choose this mode to accept all data unconditionally. The sub-workflow won't define any required input items. This sub-workflow must handle any input inconsistencies or missing values.

  5. Add other nodes as needed to build your sub-workflow functionality.

  6. Save the sub-workflow.

Sub-workflow mustn't contain errors

If there are errors in the sub-workflow, the parent workflow can't trigger it.

Load data into sub-workflow before building

This requires the ability to load data from previous executions, which is available on n8n Cloud and registered Community plans.

If you want to load data into your sub-workflow to use while building it:

  1. Create the sub-workflow and add the Execute Sub-workflow Trigger.
  2. Set the node's Input data mode to Accept all data or define the input items using fields or JSON if they're already known.
  3. In the sub-workflow settings, set Save successful production executions to Save.
  4. Skip ahead to setting up the parent workflow, and run it.
  5. Follow the steps to load data from previous executions.
  6. Adjust the Input data mode to match the input sent by the parent workflow if necessary.

You can now pin example data in the trigger node, enabling you to work with real data while configuring the rest of the workflow.

Call the sub-workflow

  1. Open the workflow where you want to call the sub-workflow.

  2. Add the Execute Sub-workflow node.

  3. In the Execute Sub-workflow node, set the sub-workflow you want to call. You can choose to call the workflow by ID, load a workflow from a local file, add workflow JSON as a parameter in the node, or target a workflow by URL.

Find your workflow ID

Your sub-workflow's ID is the alphanumeric string at the end of its URL.

  4. Fill in the required input items defined by the sub-workflow.

  5. Save your workflow.

When your workflow executes, it will send data to the sub-workflow, and run it.

You can follow the execution flow from the parent workflow to the sub-workflow by opening the Execute Sub-workflow node and selecting the View sub-execution link. Likewise, the sub-workflow's execution contains a link back to the parent workflow's execution to navigate in the other direction.

How data passes between workflows

As an example, imagine you have an Execute Sub-workflow node in Workflow A. The Execute Sub-workflow node calls another workflow called Workflow B:

  1. The Execute Sub-workflow node passes the data to the Execute Sub-workflow Trigger node (titled "When Executed by Another Workflow" in the canvas) of Workflow B.
  2. The last node of Workflow B sends the data back to the Execute Sub-workflow node in Workflow A.

Sub-workflow conversion

See sub-workflow conversion for how to divide your existing workflows into sub-workflows.


Built-in date and time methods

URL: llms-txt#built-in-date-and-time-methods

Methods for working with date and time.

You can use Python in the Code node. It isn't available in expressions.

JavaScript (expressions and Code node):

| Method | Description | Available in Code node? |
| --- | --- | --- |
| $now | A Luxon object containing the current timestamp. Equivalent to DateTime.now(). | Yes |
| $today | A Luxon object containing the current timestamp, rounded down to the day. Equivalent to DateTime.now().set({ hour: 0, minute: 0, second: 0, millisecond: 0 }). | Yes |

Python (Code node only):

| Method | Description |
| --- | --- |
| _now | A Luxon object containing the current timestamp. Equivalent to DateTime.now(). |
| _today | A Luxon object containing the current timestamp, rounded down to the day. Equivalent to DateTime.now().set({ hour: 0, minute: 0, second: 0, millisecond: 0 }). |

Don't mix native JavaScript and Luxon dates

While you can use both native JavaScript dates and Luxon dates in n8n, they aren't directly interoperable. It's best to convert JavaScript dates to Luxon to avoid problems.

n8n provides built-in convenience functions to support data transformation in expressions for dates. Refer to Data transformation functions | Dates for more information.
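
As a quick illustration of the conversion advice above, here's a minimal Code node sketch. It assumes the default Code node environment, where Luxon's DateTime, $now, and $today are available; the example date is arbitrary.

```javascript
// Minimal sketch for a Code node (mode: Run Once for All Items): converting between
// native JavaScript dates and Luxon instead of mixing them.
const jsDate = new Date('2025-01-15T09:30:00Z'); // arbitrary native Date
const luxonFromJs = DateTime.fromJSDate(jsDate); // native Date -> Luxon DateTime
const jsFromLuxon = $today.toJSDate();           // Luxon DateTime -> native Date

return [
  {
    json: {
      luxonIso: luxonFromJs.toISO(),           // ISO string from the Luxon object
      startOfToday: jsFromLuxon.toISOString(), // ISO string from the native Date
    },
  },
];
```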


Model Selector

URL: llms-txt#model-selector

Contents:

  • Node parameters
    • Number of Inputs
    • Rules
  • Templates and examples
  • Related resources

The Model Selector node dynamically selects one of the connected language models during workflow execution based on a set of defined conditions. This enables implementing fallback mechanisms for error handling or choosing the optimal model for specific tasks.

This page covers node parameters for the Model Selector node and includes links to related resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Specifies the number of input connections available for attaching language models.

Each rule defines the model to use when specific conditions match.

The Model Selector node evaluates rules sequentially, starting from the first input, and stops evaluation as soon as it finds a match. This means that if multiple rules would match, n8n will only use the model defined by the first matching rule.

Templates and examples

AI Orchestrator: dynamically Selects Models Based on Input Type

View template details

Dynamic AI Model Selector with GDPR Compliance via Requesty and Google Sheets

View template details

Hotel Receptionist with WhatsApp, Gemini Model-Switching, Redis & Google Sheets

View template details

Browse Model Selector integration templates, or search all templates

View n8n's Advanced AI documentation.


Formstack Trigger credentials

URL: llms-txt#formstack-trigger-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a Formstack account.

Supported authentication methods

  • API access token
  • OAuth2

Refer to Formstack's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • An API Access Token: To generate an Access Token, create a new application in Formstack using the following details:
    • Redirect URI: For cloud n8n instances, enter https://oauth.n8n.cloud/oauth2/callback.
      • For self-hosted n8n instances, enter the OAuth callback URL for your n8n instance in the format https://<n8n_url>/rest/oauth2-credential/callback. For example https://localhost:5678/rest/oauth2-credential/callback.
    • Platform: Select Website.

Once you've created the application, copy the access token either from the applications list or by selecting the application to view its details.

Refer to Formstack's API Authorization documentation for more detailed instructions.

Access token permissions

Formstack ties access tokens to a Formstack user. Access tokens follow Formstack (in-app) user permissions.

To configure this credential, you'll need:

  • A Client ID
  • A Client Secret

To generate both of these, create a new application in Formstack using the following details:

  • Redirect URI: Copy the OAuth Redirect URL from the n8n credential to enter here.
    • For self-hosted n8n instances, enter the OAuth callback URL for your n8n instance in the format https://<n8n_url>/rest/oauth2-credential/callback. For example https://localhost:5678/rest/oauth2-credential/callback.
  • Platform: Select Website.

Once you've created the application, select it from the applications list to view the Application Details. Copy the Client ID and Client Secret and add them to n8n. Once you've added both, select the Connect my account button to begin the OAuth2 flow and authorization process.

Refer to Formstack's API Authorization documentation for more detailed instructions.

Access token permissions

Formstack ties access tokens to a Formstack user. Access tokens follow Formstack (in-app) user permissions.


Pull next (unstable) version

URL: llms-txt#pull-next-(unstable)-version

docker pull docker.n8n.io/n8nio/n8n:next

After pulling the updated image, stop your n8n container and start it again. You can also use the command line. Replace `<container_id>` in the commands below with the container ID you find in the first command:

OpenAI node

URL: llms-txt#openai-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported
  • Using tools with OpenAI assistants
    • Operations that support tool connectors
  • Common issues

Use the OpenAI node to automate work in OpenAI and integrate OpenAI with other applications. n8n has built-in support for a wide range of OpenAI features, including creating images and assistants, as well as chatting with models.

On this page, you'll find a list of operations the OpenAI node supports and links to more resources.

Previous node versions

The OpenAI node replaces the OpenAI assistant node from version 1.29.0 on. n8n version 1.117.0 introduces V2 of the OpenAI node that supports the OpenAI Responses API and removes support for the to-be-deprecated Assistants API.

Refer to OpenAI credentials for guidance on setting up authentication.

Templates and examples

View template details

Building Your First WhatsApp Chatbot

View template details

Scrape and summarize webpages with AI

View template details

Browse OpenAI integration templates, or search all templates

Refer to OpenAI's documentation for more information about the service.

Refer to OpenAI's assistants documentation for more information about how assistants work.

For help dealing with rate limits, refer to Handling rate limits.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

Using tools with OpenAI assistants

Some operations allow you to connect tools. Tools act like addons that your AI can use to access extra context or resources.

Select the Tools connector to browse the available tools and add them.

Once you add a tool connection, the OpenAI node becomes a root node, allowing it to form a cluster node with the tools sub-nodes. See Node types for more information on cluster nodes and root nodes.

Operations that support tool connectors

For common questions or issues and suggested solutions, refer to Common issues.


Pagination in the HTTP Request node

URL: llms-txt#pagination-in-the-http-request-node

Contents:

  • Enable pagination
  • Use a URL from the response to get the next page using $response
  • Get the next page by number using $pageCount
  • Navigate pagination through body parameters
  • Set the page size in the query

The HTTP Request node supports pagination. This page provides some example configurations, including using the HTTP node variables.

Refer to HTTP Request for more information on the node.

Different APIs implement pagination in different ways. Check the API documentation for the API you're using for details. You need to find out things like:

  • Does the API provide the URL for the next page?
  • Are there API-specific limits on page size or page number?
  • The structure of the data that the API returns.

In the HTTP Request node, select Add Option > Pagination.

Use a URL from the response to get the next page using $response

If the API returns the URL of the next page in its response:

  1. Set Pagination Mode to Response Contains Next URL. n8n displays the parameters for this option.

  2. In Next URL, use an expression to set the URL. The exact expression depends on the data returned by your API. For example, if the API includes a parameter called next-page in the response body, you can use the expression shown in Example 1 at the end of this section.

Get the next page by number using $pageCount

If the API you're using supports targeting a specific page by number:

  1. Set Pagination Mode to Update a Parameter in Each Request.
  2. Set Type to Query.
  3. Enter the Name of the query parameter. This depends on your API and is usually described in its documentation. For example, some APIs use a query parameter named page to set the page. So Name would be page.
  4. Hover over Value and toggle Expression on.
  5. Enter {{ $pageCount + 1 }}

$pageCount is the number of pages the HTTP Request node has fetched so far, starting at zero. Most APIs count pages from one (the first page is page one), so adding 1 to $pageCount makes the node fetch page one on its first request, page two on its second, and so on.

Navigate pagination through body parameters

If the API you're using allows you to paginate through the body parameters:

  1. Set the HTTP Request Method to POST
  2. Set Pagination Mode to Update a Parameter in Each Request.
  3. Select Body in the Type parameter.
  4. Enter the Name of the body parameter. This depends on the API you're using. page is a common key name.
  5. Hover over Value and toggle Expression on.
  6. Enter {{ $pageCount + 1 }}

Set the page size in the query

If the API you're using supports choosing the page size in the query:

  1. Select Send Query Parameters in the main node parameters (these are the parameters you see when you first open the node, not the settings within Options).
  2. Enter the Name of the query parameter. This depends on your API. For example, a lot of APIs use a query parameter named limit to set page size. So Name would be limit.
  3. In Value, enter your page size.

Examples:

Example 1 (unknown):

{{ $response.body["next-page"] }}

Magento 2 node

URL: llms-txt#magento-2-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Magento 2 node to automate work in Magento 2, and integrate Magento 2 with other applications. n8n has built-in support for a wide range of Magento 2 features, including creating, updating, deleting, and getting customers, invoices, orders, and products.

On this page, you'll find a list of operations the Magento 2 node supports and links to more resources.

Refer to Magento 2 credentials for guidance on setting up authentication.

  • Customer
    • Create a new customer
    • Delete a customer
    • Get a customer
    • Get all customers
    • Update a customer
  • Invoice
    • Create an invoice
  • Order
    • Cancel an order
    • Get an order
    • Get all orders
    • Ship an order
  • Product
    • Create a product
    • Delete a product
    • Get a product
    • Get all products
    • Update a product

Templates and examples

Automate Your Magento 2 Weekly Sales & Performance Reports

by Kanaka Kishore Kandregula

View template details

Automatic Magento 2 Product & Coupon Alerts to Telegram with Duplicate Protection

by Kanaka Kishore Kandregula

View template details

Daily Magento 2 Customer Sync to Google Contacts & Sheets without Duplicates

by Kanaka Kishore Kandregula

View template details

Browse Magento 2 integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Execute Sub-workflow

URL: llms-txt#execute-sub-workflow

Contents:

  • Node parameters
    • Source
    • Workflow Inputs
    • Mode
  • Node options
  • Templates and examples
  • Set up and use a sub-workflow
    • Create the sub-workflow
    • Call the sub-workflow
  • How data passes between workflows

Use the Execute Sub-workflow node to run a different workflow on the host machine that runs n8n.

Select where the node should get the sub-workflow's information from:

  • Database: Select this option to load the workflow from the database by ID. You must also enter either:
    • From list: Select the workflow from a list of workflows available to your account.
    • Workflow ID: Enter the ID for the workflow. The URL of the workflow contains the ID after /workflow/. For example, if the URL of a workflow is https://my-n8n-acct.app.n8n.cloud/workflow/abCDE1f6gHiJKL7, the Workflow ID is abCDE1f6gHiJKL7.
  • Local File: Select this option to load the workflow from a locally saved JSON file. You must also enter:
    • Workflow Path: Enter the path to the local JSON workflow file you want the node to execute.
  • Parameter: Select this option to load the workflow from a parameter. You must also enter:
    • Workflow JSON: Enter the JSON code you want the node to execute.
  • URL: Select this option to load the workflow from a URL. You must also enter:
    • Workflow URL: Enter the URL you want to load the workflow from.

If you select a sub-workflow using the database and From list options, the sub-workflow's input items will automatically display, ready for you to fill in or map values.

You can optionally remove requested input items, in which case the sub-workflow receives null as the item's value. You can also enable Attempt to convert types to try to automatically convert data to the sub-workflow item's requested type.

Input items won't appear if the sub-workflow's Workflow Input Trigger node uses the "Accept all data" input data mode.

Use this parameter to control the execution mode for the node. Choose from these options:

  • Run once with all items: Pass all input items into a single execution of the node.
  • Run once for each item: Execute the node once for each input item in turn.

This node includes one option: Wait for Sub-Workflow Completion. This lets you control whether the main workflow should wait for the sub-workflow's completion before moving on to the next step (turned on) or whether the main workflow should continue without waiting (turned off).

Templates and examples

Scrape business emails from Google Maps without the use of any third party APIs

View template details

Back Up Your n8n Workflows To Github

View template details

Host Your Own AI Deep Research Agent with n8n, Apify and OpenAI o3

View template details

Browse Execute Sub-workflow integration templates, or search all templates

Set up and use a sub-workflow

This section walks through setting up both the parent workflow and sub-workflow.

Create the sub-workflow

  1. Create a new workflow.

Create sub-workflows from existing workflows

You can optionally create a sub-workflow directly from an existing parent workflow using the Execute Sub-workflow node. In the node, select the Database and From list options and select Create a sub-workflow in the list.

You can also extract selected nodes directly using Sub-workflow conversion in the context menu.

  2. Optional: configure which workflows can call the sub-workflow:

    1. Select the Options menu > Settings. n8n opens the Workflow settings modal.
    2. Change the This workflow can be called by setting. Refer to Workflow settings for more information on configuring your workflows.

  3. Add the Execute Sub-workflow trigger node (if you are searching under trigger nodes, this is also titled When Executed by Another Workflow).

  4. Set the Input data mode to choose how you will define the sub-workflow's input data:

    • Define using fields below: Choose this mode to define individual input names and data types that the calling workflow needs to provide. The Execute Sub-workflow node or Call n8n Workflow Tool node in the calling workflow will automatically pull in the fields defined here.
    • Define using JSON example: Choose this mode to provide an example JSON object that demonstrates the expected input items and their types.
    • Accept all data: Choose this mode to accept all data unconditionally. The sub-workflow won't define any required input items. This sub-workflow must handle any input inconsistencies or missing values.

  5. Add other nodes as needed to build your sub-workflow functionality.

  6. Save the sub-workflow.

Sub-workflow mustn't contain errors

If there are errors in the sub-workflow, the parent workflow can't trigger it.

Load data into sub-workflow before building

This requires the ability to load data from previous executions, which is available on n8n Cloud and registered Community plans.

If you want to load data into your sub-workflow to use while building it:

  1. Create the sub-workflow and add the Execute Sub-workflow Trigger.
  2. Set the node's Input data mode to Accept all data or define the input items using fields or JSON if they're already known.
  3. In the sub-workflow settings, set Save successful production executions to Save.
  4. Skip ahead to setting up the parent workflow, and run it.
  5. Follow the steps to load data from previous executions.
  6. Adjust the Input data mode to match the input sent by the parent workflow if necessary.

You can now pin example data in the trigger node, enabling you to work with real data while configuring the rest of the workflow.

Call the sub-workflow

  1. Open the workflow where you want to call the sub-workflow.

  2. Add the Execute Sub-workflow node.

  3. In the Execute Sub-workflow node, set the sub-workflow you want to call. You can choose to call the workflow by ID, load a workflow from a local file, add workflow JSON as a parameter in the node, or target a workflow by URL.

Find your workflow ID

Your sub-workflow's ID is the alphanumeric string at the end of its URL.

  4. Fill in the required input items defined by the sub-workflow.

  5. Save your workflow.

When your workflow executes, it will send data to the sub-workflow, and run it.

You can follow the execution flow from the parent workflow to the sub-workflow by opening the Execute Sub-workflow node and selecting the View sub-execution link. Likewise, the sub-workflow's execution contains a link back to the parent workflow's execution to navigate in the other direction.

How data passes between workflows

As an example, imagine you have an Execute Sub-workflow node in Workflow A. The Execute Sub-workflow node calls another workflow called Workflow B:

  1. The Execute Sub-workflow node passes the data to the Execute Sub-workflow Trigger node (titled "When Executed by Another Workflow" in the canvas) of Workflow B.
  2. The last node of Workflow B sends the data back to the Execute Sub-workflow node in Workflow A.

Coda node

URL: llms-txt#coda-node

Contents:

  • Operations
  • Templates and examples

Use the Coda node to automate work in Coda, and integrate Coda with other applications. n8n has built-in support for a wide range of Coda features, including creating, getting, and deleting controls, formulas, tables, and views.

On this page, you'll find a list of operations the Coda node supports and links to more resources.

Refer to Coda credentials for guidance on setting up authentication.

  • Control
    • Get a control
    • Get all controls
  • Formula
    • Get a formula
    • Get all formulas
  • Table
    • Create/Insert a row
    • Delete one or multiple rows
    • Get all columns
    • Get all the rows
    • Get a column
    • Get a row
    • Pushes a button
  • View
    • Delete view row
    • Get a view
    • Get all views
    • Get all views columns
    • Get all views rows
    • Update row
    • Push view button

Templates and examples

Browse Coda integration templates, or search all templates


Check incoming data

URL: llms-txt#check-incoming-data

At times, you may want to check the incoming data and return a different value when it doesn't match a condition. For example, you may want to check whether a variable from the previous node is empty and return a string if it is. The expression in Example 1 below returns not found if the variable is empty.

This expression uses the JavaScript ternary (conditional) operator.

As an alternative, you can use the nullish coalescing operator (??) or the logical OR operator (||), shown in Example 2 below:

With the nullish coalescing operator, n8n uses the value of $x unless it's null or undefined; with the logical OR operator, it uses $x unless it's falsy (for example, an empty string or 0). In both cases, the string default value is the fallback.

Examples:

Example 1 (unknown):

{{$json["variable_name"]? $json["variable_name"] :"not found"}}

Example 2 (unknown):

{{ $x ?? "default value" }}
{{ $x || "default value" }}

TOTP

URL: llms-txt#totp

Contents:

  • Node parameters
    • Credential to connect with
    • Operation
  • Node options
    • Algorithm
    • Digits
    • Period
  • Templates and examples

The TOTP node provides a way to generate a TOTP (time-based one-time password).

Refer to TOTP credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Configure this node with these parameters.

Credential to connect with

Select or create a TOTP credential for the node to use.

Generate Secret is the only operation currently supported.

Use these Options to further configure the node.

Select the HMAC hashing algorithm to use. Default is SHA1.

Enter the number of digits in the generated code. Default is 6.

Enter how many seconds the TOTP is valid for. Default is 30.

Templates and examples

Browse TOTP integration templates, or search all templates


Flow logic

URL: llms-txt#flow-logic

Contents:

  • Related sections

n8n allows you to represent complex logic in your workflows.

You need some understanding of Data in n8n, including Data structure and Data flow within nodes.

When building your logic, you'll use n8n's Core nodes, including:


Emelia credentials

URL: llms-txt#emelia-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create an Emelia account.

Supported authentication methods

Refer to Emelia's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: To generate an API Key in Emelia, access your API Keys by selecting the avatar in the top right (your Settings). Refer to the Authentication section of Emelia's API documentation for more information.

Twake node

URL: llms-txt#twake-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Twake node to automate work in Twake, and integrate Twake with other applications. n8n supports sending messages with Twake.

On this page, you'll find a list of operations the Twake node supports and links to more resources.

Refer to Twake credentials for guidance on setting up authentication.

  • Message
    • Send a message

Templates and examples

Browse Twake integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Netlify Trigger node

URL: llms-txt#netlify-trigger-node

Contents:

  • Related resources

Netlify offers hosting and serverless backend services for web applications and static websites.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Netlify Trigger integrations page.

n8n provides an app node for Netlify. You can find the node docs here.

View example workflows and related content on n8n's website.


JWT

URL: llms-txt#jwt

Contents:

  • Operations
  • Node parameters
  • Payload Claims
    • Audience
    • Expires In
    • Issuer
    • JWT ID
    • Not Before
    • Subject
  • Node options

Work with JSON web tokens in your n8n workflows.

You can find authentication information for this node here.

  • Decode
  • Sign
  • Verify

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Credential to connect with: Select or create a JWT credential to connect with.
  • Token: Enter the token to Verify or Decode.
  • If you select the Sign operation, you'll also have this parameter:
    • Use JSON to Build Payload: When turned on, the node uses JSON to build the claims. The selection here influences what appears in the Payload Claims section.

The node only displays payload claims if you select the Sign operation. What you see depends on what you select for Use JSON to Build Payload:

  • If you select Use JSON to Build Payload, this section displays a JSON editor where you can construct the claims.
  • If you don't select Use JSON to Build Payload, this section prompts you to Add Claim.

You can add the following claims.

The Audience or aud claim identifies the intended recipients of the JWT.

Refer to "aud" (Audience) Claim for more information.

The Expires In or exp claim identifies the time after which the JWT expires and must not be accepted for processing.

Refer to "exp" (Expiration Time) Claim for more information.

The Issuer or iss claim identifies the principal that issued the JWT.

Refer to "iss" (Issuer) Claim for more information.

The JWT ID or jti claim provides a unique identifier for the JWT.

Refer to "jti" (JWT ID) Claim for more information.

The Not Before or nbf claim identifies the time before which the JWT must not be accepted for processing.

Refer to "nbf" (Not Before) Claim for more information.

The Subject or sub claim identifies the principal that's the subject of the JWT.

Refer to "sub" (Subject) Claim for more information.

Decode node options

The Return Additional Info toggle controls how much information the node returns.

When turned on, the node returns the complete decoded token with information about the header and signature. When turned off, the node only returns the payload.

Sign node options

Use the Override Algorithm control to select the algorithm to use for signing the token. This algorithm will override the algorithm selected in the credentials.

Verify node options

This operation includes several node options:

  • Return Additional Info: This toggle controls how much information the node returns. When turned on, the node returns the complete decoded token with information about the header and signature. When turned off, the node only returns the payload.
  • Ignore Expiration: This toggle controls whether the node should ignore the token's expiration time claim (exp). Refer to "exp" (Expiration Time) Claim for more information.
  • Ignore Not Before Claim: This toggle controls whether to ignore the token's not before claim (nbf). Refer to "nbf" (Not Before) Claim for more information.
  • Clock Tolerance: Enter the number of seconds to tolerate when checking the nbf and exp claims. This allows you to deal with small clock differences among different servers. Refer to "exp" (Expiration Time) Claim for more information.
  • Override Algorithm: The algorithm to use for verifying the token. This algorithm will override the algorithm selected in the credentials.

Templates and examples

Validate Auth0 JWT Tokens using JWKS or Signing Cert

View template details

Build Production-Ready User Authentication with Airtable and JWT

View template details

Host Your Own JWT Authentication System with Data Tables and Token Management

View template details

Browse JWT integration templates, or search all templates


LinkedIn node

URL: llms-txt#linkedin-node

Contents:

  • Operations
  • Parameters
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the LinkedIn node to automate work in LinkedIn, and integrate LinkedIn with other applications. n8n supports creating posts.

On this page, you'll find a list of operations the LinkedIn node supports and links to more resources.

Refer to LinkedIn credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Post As: choose whether to post as a Person or Organization.

  • Person Name or ID and Organization URN: enter an identifier for the person or organization.

Posting as organization

If posting as an Organization, enter the organization number in the URN field. For example, 03262013 not urn:li:company:03262013.

  • Text: the post contents.

  • Media Category: use this when including images or article URLs in your post.

Templates and examples

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

AI-Powered Social Media Content Generator & Publisher

View template details

🩷Automated Social Media Content Publishing Factory + System Prompt Composition

View template details

Browse LinkedIn integration templates, or search all templates

Refer to LinkedIn's API documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


MQTT credentials

URL: llms-txt#mqtt-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using broker connection

You can use these credentials to authenticate the following nodes:

Install an MQTT broker.

MQTT provides a list of Servers/Brokers at MQTT Software.

Supported authentication methods

Refer to MQTT's documentation for more information about the MQTT protocol.

Refer to your broker provider's documentation for more detailed configuration and details.

Using broker connection

To configure this credential, you'll need:

  • Your MQTT broker's Protocol
  • The Host
  • The Port
  • A Username and Password to authenticate with
  • If you're using SSL, the relevant certificates and keys
  1. Select the broker's Protocol, which determines the URL n8n uses. Options include:
    • Mqtt: Begin the URL with the standard mqtt: protocol.
    • Mqtts: Begin the URL with the secure mqtts: protocol.
    • Ws: Begin the URL with the WebSocket ws: protocol.
  2. Enter your broker Host.
  3. Enter the Port number n8n should use to connect to the broker host.
  4. Enter the Username to log into the broker as.
  5. Enter that user's Password.
  6. If you want to receive QoS 1 and 2 messages while offline, turn off the Clean Session toggle.
  7. Enter a Client ID you'd like the credential to use. If you leave this blank, n8n will generate one for you. You can use a fixed or expression-based Client ID.
    • Client IDs can be useful to identify and track connection access. n8n recommends using something with n8n in it for easier auditing.
  8. If your MQTT broker uses SSL, turn the SSL toggle on. Once you turn it on:
    1. Select whether to use Passwordless connection with certificates, which is like the SASL mechanism EXTERNAL. If turned on:
      1. Select whether to Reject Unauthorized Certificate: If turned off, n8n will connect even if the certificate validation fails.
      2. Add an SSL Client Certificate.
      3. Add an SSL Client Key for the Client Certificate.
    2. Add one or more SSL CA Certificates.

Refer to your MQTT broker provider's documentation for more detailed configuration instructions.


Hacker News node

URL: llms-txt#hacker-news-node

Contents:

  • Operations
  • Templates and examples

Use the Hacker News node to automate work in Hacker News, and integrate Hacker News with other applications. n8n has built-in support for a wide range of Hacker News features, including getting articles, and users.

On this page, you'll find a list of operations the Hacker News node supports and links to more resources.

This node doesn't require authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • All
    • Get all items
  • Article
    • Get a Hacker News article
  • User
    • Get a Hacker News user

Templates and examples

Hacker News to Video Content

View template details

AI chat with any data source (using the n8n workflow tool)

View template details

Community Insights using Qdrant, Python and Information Extractor

View template details

Browse Hacker News integration templates, or search all templates


Segment node

URL: llms-txt#segment-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Segment node to automate work in Segment, and integrate Segment with other applications. n8n has built-in support for a wide range of Segment features, including adding users to groups, creating identities, and tracking activities.

On this page, you'll find a list of operations the Segment node supports and links to more resources.

Refer to Segment credentials for guidance on setting up authentication.

  • Group
    • Add a user to a group
  • Identify
    • Create an identity
  • Track
    • Record the actions your users perform. Every action triggers an event, which can also have associated properties.
    • Record page views on your website, along with optional extra information about the page being viewed.

Templates and examples

Auto-Scrape TikTok User Data via Dumpling AI and Segment in Airtable

View template details

Weekly Google Search Console SEO Pulse: Catch Top Movers Across Keyword Segments

View template details

Create a customer and add them to a segment in Customer.io

View template details

Browse Segment integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Test your knowledge

URL: llms-txt#test-your-knowledge

Contents:

  • What's next?

Congratulations, you finished the n8n Course Level 2!

You've learned a lot about workflow automation and built quite a complex business workflow. Why not showcase your skills?

You can test your knowledge by taking a quiz, which consists of questions about the theoretical concepts and workflows covered in this course.

  • You need to have at least 80% correct answers to pass the quiz.
  • You can take the quiz as many times as you want.
  • There's no time limit on answering the quiz questions.

Take the quiz!

  • Create new workflows for your work or personal use and share them with us. Don't have any ideas? Find inspiration on the workflows page and on our blog.
  • Dive deeper into n8n's features by reading the docs.

Start the container

URL: llms-txt#start-the-container

Contents:

  • Next steps

docker compose up -d


- Learn more about [configuring](../../../configuration/environment-variables/) and [scaling](../../../scaling/overview/) n8n.
- Or explore using n8n: try the [Quickstarts](../../../../try-it-out/).

---

## Stackby node

**URL:** llms-txt#stackby-node

**Contents:**
- Operations
- Templates and examples

Use the Stackby node to automate work in Stackby, and integrate Stackby with other applications. n8n has built-in support for a wide range of Stackby features, including appending, deleting, listing and reading.

On this page, you'll find a list of operations the Stackby node supports and links to more resources.

Refer to [Stackby credentials](../../credentials/stackby/) for guidance on setting up authentication.

- Append
- Delete
- List
- Read

## Templates and examples

[Browse Stackby integration templates](https://n8n.io/integrations/stackby/), or [search all templates](https://n8n.io/workflows/)

---

## Kibana credentials

**URL:** llms-txt#kibana-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using basic auth

You can use these credentials to authenticate when using the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to make a [Custom API call](../../../custom-operations/).

- Create an [Elasticsearch](https://www.elastic.co/) account.
- If you're creating a new account to test with, load some sample data into Kibana. Refer to the [Kibana quick start](https://www.elastic.co/guide/en/kibana/current/get-started.html) for more information.

## Supported authentication methods

Refer to [Kibana's API documentation](https://www.elastic.co/guide/en/kibana/current/api.html) for more information about the service.

This is a credential-only node. Refer to [Custom API operations](../../../custom-operations/) to learn more. View [example workflows and related content](https://n8n.io/integrations/kibana/) on n8n's website.

To configure this credential, you'll need:

- The **URL** you use to access Kibana, for example `http://localhost:5601`
- A **Username**: Use the same username that you use to log in to Elastic.
- A **Password**: Use the same password that you use to log in to Elastic.

---

## Customer.io node

**URL:** llms-txt#customer.io-node

**Contents:**
- Operations
- Templates and examples
- What to do if your operation isn't supported

Use the Customer.io node to automate work in Customer.io, and integrate Customer.io with other applications. n8n has built-in support for a wide range of Customer.io features, including creating and updating customers, tracking events, and getting campaigns.

On this page, you'll find a list of operations the Customer.io node supports and links to more resources.

Refer to [Customer.io credentials](../../credentials/customerio/) for guidance on setting up authentication.

- Customer
  - Create/Update a customer.
  - Delete a customer.
- Event
  - Track a customer event.
  - Track an anonymous event.
- Campaign
  - Get
  - Get All
  - Get Metrics
- Segment
  - Add Customer
  - Remove Customer

## Templates and examples

**Create a customer and add them to a segment in Customer.io**

[View template details](https://n8n.io/workflows/646-create-a-customer-and-add-them-to-a-segment-in-customerio/)

**Receive updates when a subscriber unsubscribes in Customer.io**

[View template details](https://n8n.io/workflows/645-receive-updates-when-a-subscriber-unsubscribes-in-customerio/)

**AI Agent Powered Marketing 🛠️ Customer.io Tool MCP Server 💪 all 9 operations**

[View template details](https://n8n.io/workflows/5314-ai-agent-powered-marketing-customerio-tool-mcp-server-all-9-operations/)

[Browse Customer.io integration templates](https://n8n.io/integrations/customerio/), or [search all templates](https://n8n.io/workflows/)

## What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**.
1. Select the service you want to connect to.
1. Select your credential.

Refer to [Custom API operations](../../../custom-operations/) for more information.

---

## Error handling

**URL:** llms-txt#error-handling

**Contents:**
- Create and set an error workflow
- Error data
- Cause a workflow execution failure using Stop And Error

When designing your flow logic, it's a good practice to consider potential errors, and set up methods to handle them gracefully. With an error workflow, you can control how n8n responds to a workflow execution failure.

To investigate failed executions, you can:

- Review your [Executions](../../workflows/executions/), for a [single workflow](../../workflows/executions/single-workflow-executions/) or [all workflows you have access to](../../workflows/executions/all-executions/). You can [load data from previous execution](../../workflows/executions/debug/) into your current workflow.
- Enable [Log streaming](../../log-streaming/).

## Create and set an error workflow

For each workflow, you can set an error workflow in **Workflow Settings**. It runs if an execution fails. This means you can, for example, send email or Slack alerts when a workflow execution errors. The error workflow must start with the [Error Trigger](../../integrations/builtin/core-nodes/n8n-nodes-base.errortrigger/).

You can use the same error workflow for multiple workflows.

1. Create a new workflow, with the Error Trigger as the first node.
1. Give the workflow a name, for example `Error Handler`.
1. Select **Save**.
1. In the workflow where you want to use this error workflow:
   1. Select **Options** > **Settings**.
   1. In **Error workflow**, select the workflow you just created. For example, if you used the name Error Handler, select **Error handler**.
   1. Select **Save**. Now, when this workflow errors, the related error workflow runs.

The default error data received by the Error Trigger is shown in Example 1 at the end of this section.

All information is always present, except:

- `execution.id`: requires the execution to be saved in the database. Not present if the error is in the trigger node of the main workflow, as the workflow doesn't execute.
- `execution.url`: requires the execution to be saved in the database. Not present if the error is in the trigger node of the main workflow, as the workflow doesn't execute.
- `execution.retryOf`: only present when the execution is a retry of a failed execution.

If the error is caused by the trigger node of the main workflow, rather than a later stage, the data sent to the error workflow is different. There's less information in `execution{}` and more in `trigger{}`, as shown in Example 2 at the end of this section.

## Cause a workflow execution failure using Stop And Error

When you create and set an error workflow, n8n runs it when an execution fails. Usually, this is due to things like errors in node settings, or the workflow running out of memory.

You can add the [Stop And Error](../../integrations/builtin/core-nodes/n8n-nodes-base.stopanderror/) node to your workflow to force executions to fail under your chosen circumstances, and trigger the error workflow.
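
As a sketch of what an error workflow might do with this data, the Code node below builds an alert message from the fields described above. It assumes the error came from a regular node, so the `execution` object is present (see Example 1 below); trigger-node failures provide a `trigger` object instead.

```javascript
// Minimal sketch for a Code node (mode: Run Once for Each Item) inside an error workflow.
const { workflow, execution } = $input.item.json;

$input.item.json.alertMessage =
  `Workflow "${workflow.name}" failed at node "${execution.lastNodeExecuted}": ` +
  `${execution.error.message}`;
$input.item.json.executionUrl = execution.url; // may be missing in some cases (see above)

return $input.item;
```

You could then map `alertMessage` into a Slack or email node to deliver the notification.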

**Examples:**

Example 1 (unknown):
```unknown
[
	{
		"execution": {
			"id": "231",
			"url": "https://n8n.example.com/execution/231",
			"retryOf": "34",
			"error": {
				"message": "Example Error Message",
				"stack": "Stacktrace"
			},
			"lastNodeExecuted": "Node With Error",
			"mode": "manual"
		},
		"workflow": {
			"id": "1",
			"name": "Example Workflow"
		}
	}
]
```

Example 2 (unknown):

```unknown
{
  "trigger": {
    "error": {
      "context": {},
      "name": "WorkflowActivationError",
      "cause": {
        "message": "",
        "stack": ""
      },
      "timestamp": 1654609328787,
      "message": "",
      "node": {
        . . . 
      }
    },
    "mode": "trigger"
  },
  "workflow": {
    "id": "",
    "name": ""
  }
}

Stackby credentials

URL: llms-txt#stackby-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Stackby account.

Supported authentication methods

  • API key

Refer to Stackby's API documentation for more information about the service.

To configure this credential, you'll need:


Google Calendar Event operations

URL: llms-txt#google-calendar-event-operations

Contents:

  • Create
    • Options
  • Delete
    • Options
  • Get
    • Options
  • Get Many
    • Options
  • Update

Use these operations to create, delete, get, and update events in Google Calendar. Refer to Google Calendar for more information on the Google Calendar node itself.

Use this operation to add an event to a Google Calendar.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Calendar credential.

  • Resource: Select Event.

  • Operation: Select Create.

  • Calendar: Choose a calendar you want to add an event to. Select From list to choose the title from the dropdown list or By ID to enter a calendar ID.

  • Start Time: The start time for the event. By default, uses an expression evaluating to the current time ({{ $now }}).

  • End Time: The end time for the event. By default, this uses an expression evaluating to an hour from now ({{ $now.plus(1, 'hour') }}).

  • Use Default Reminders: Whether to enable default reminders for the event according to the calendar configuration.

  • All Day: Whether the event is all day or not.

  • Attendees: Attendees to invite to the event.

  • Color Name or ID: The color of the event. Choose from the list or specify the ID using an expression.

  • Conference Data: Creates a conference link (Hangouts, Meet, etc.) and attaches it to the event.

  • Description: A description for the event.

  • Guests Can Invite Others: Whether attendees other than the organizer can invite others to the event.

  • Guests Can Modify: Whether attendees other than the organizer can modify the event.

  • Guests Can See Other Guests: Whether attendees other than the organizer can see who the event's attendees are.

  • ID: Opaque identifier of the event.

  • Location: Geographic location of the event as free-form text.

  • Max Attendees: The maximum number of attendees to include in the response. If there are more than the specified number of attendees, only returns the participant.

  • Repeat Frequency: The repetition interval for recurring events.

  • Repeat How Many Times?: The number of instances to create for recurring events.

  • Repeat Until: The date at which recurring events should stop.

  • RRULE: Recurrence rule. When set, ignores the Repeat Frequency, Repeat How Many Times, and Repeat Until parameters.

  • Send Updates: Whether to send notifications about the creation of the new event.

  • Show Me As: Whether the event blocks time on the calendar.

  • Summary: The title of the event.

Refer to the Events: insert | Google Calendar API documentation for more information.
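
For example, the Start Time and End Time fields accept n8n expressions built on Luxon date-times, like the defaults shown above. A minimal sketch, assuming you want an event tomorrow from 09:00 to 09:30 (adjust the units to your own schedule):

```js
// Start Time: tomorrow at 09:00 in the instance timezone
{{ $now.plus({ days: 1 }).set({ hour: 9, minute: 0, second: 0 }) }}

// End Time: tomorrow at 09:30
{{ $now.plus({ days: 1 }).set({ hour: 9, minute: 30, second: 0 }) }}
```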

Use this operation to delete an event from a Google Calendar.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Calendar credential.

  • Resource: Select Event.

  • Operation: Select Delete.

  • Calendar: Choose a calendar you want to delete an event from. Select From list to choose the title from the dropdown list or By ID to enter a calendar ID.

  • Event ID: The ID of the event to delete.

  • Send Updates: Whether to send notifications about the deletion of the event.

Refer to the Events: delete | Google Calendar API documentation for more information.

Use this operation to retrieve an event from a Google Calendar.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Calendar credential.

  • Resource: Select Event.

  • Operation: Select Get.

  • Calendar: Choose a calendar you want to get an event from. Select From list to choose the title from the dropdown list or By ID to enter a calendar ID.

  • Event ID: The ID of the event to get.

  • Max Attendees: The maximum number of attendees to include in the response. If there are more than the specified number of attendees, only returns the participant.

  • Return Next Instance of Recurrent Event: Whether to return the next instance of a recurring event instead of the event itself.

  • Timezone: The timezone used in the response. By default, uses the n8n timezone.

Refer to the Events: get | Google Calendar API documentation for more information.

Use this operation to retrieve more than one event from a Google Calendar.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Calendar credential.

  • Resource: Select Event.

  • Operation: Select Get Many.

  • Calendar: Choose a calendar you want to get an event from. Select From list to choose the title from the dropdown list or By ID to enter a calendar ID.

  • Return All: Whether to return all results or only up to a given limit.

  • Limit: (When "Return All" isn't selected) The maximum number of results to return.

  • After: Retrieve events that occur after this time. At least part of the event must be after this time. By default, this uses an expression evaluating to the current time ({{ $now }}). Switch the field to "fixed" to select a date from a date widget.

  • Before: Retrieve events that occur before this time. At least part of the event must be before this time. By default, this uses an expression evaluating to the current time plus a week ({{ $now.plus({ week: 1 }) }}). Switch the field to "fixed" to select a date from a date widget.

  • Fields: Specify the fields to return. By default, returns a set of commonly used fields predefined by Google. Use "*" to return all fields. You can find out more in Google Calendar's documentation on working with partial resources.

  • iCalUID: Specifies an event ID (in the iCalendar format) to include in the response.

  • Max Attendees: The maximum number of attendees to include in the response. If there are more than the specified number of attendees, only returns the participant.

  • Order By: The order to use for the events in the response.

  • Query: Free text search terms to find events that match. This searches all fields except for extended properties.

  • Recurring Event Handling: What to do for recurring events:

    • All Occurrences: Return all instances of the recurring event for the specified time range.
    • First Occurrence: Return the first event of a recurring event within the specified time range.
    • Next Occurrence: Return the next instance of a recurring event within the specified time range.
  • Show Deleted: Whether to include deleted events (with status equal to "cancelled") in the results.

  • Show Hidden Invitations: Whether to include hidden invitations in the results.

  • Timezone: The timezone used in the response. By default, uses the n8n timezone.

  • Updated Min: The lower bound for an event's last modification time (as an RFC 3339 timestamp).

Refer to the Events: list | Google Calendar API documentation for more information.
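
For example, to retrieve only today's events you could set After and Before with expressions like the following (a sketch; both use Luxon helpers on $now):

```js
// After: start of today
{{ $now.startOf('day') }}

// Before: end of today
{{ $now.endOf('day') }}
```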

Use this operation to update an event in a Google Calendar.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Calendar credential.

  • Resource: Select Event.

  • Operation: Select Update.

  • Calendar: Choose the calendar containing the event you want to update. Select From list to choose the title from the dropdown list or By ID to enter a calendar ID.

  • Event ID: The ID of the event to update.

  • Modify: For recurring events, choose whether to update the recurring event or a specific instance of the recurring event.

  • Use Default Reminders: Whether to enable default reminders for the event according to the calendar configuration.

  • Update Fields: The fields of the event to update:

    • All Day: Whether the event is all day or not.
    • Attendees: Attendees to invite to the event. You can choose to either add attendees or replace the existing attendee list.
    • Color Name or ID: The color of the event. Choose from the list or specify the ID using an expression.
    • Description: A description for the event.
    • End: The end time of the event.
    • Guests Can Invite Others: Whether attendees other than the organizer can invite others to the event.
    • Guests Can Modify: Whether attendees other than the organizer can make changes to the event.
    • Guests Can See Other Guests: Whether attendees other than the organizer can see who the event's attendees are.
    • ID: Opaque identifier of the event.
    • Location: Geographic location of the event as free-form text.
    • Max Attendees: The maximum number of attendees to include in the response. If there are more than the specified number of attendees, only returns the participant.
    • Repeat Frequency: The repetition interval for recurring events.
    • Repeat How Many Times?: The number of instances to create for recurring events.
    • Repeat Until: The date at which recurring events should stop.
    • RRULE: Recurrence rule. When set, ignores the Repeat Frequency, Repeat How Many Times, and Repeat Until parameters.
    • Send Updates: Whether to send notifications about the update to the event.
    • Show Me As: Whether the event blocks time on the calendar.
    • Start: The start time of the event.
    • Summary: The title of the event.
    • Visibility: The visibility of the event:
      • Confidential: The event is private. This value is provided for compatibility.
      • Default: Uses the default visibility for events on the calendar.
      • Public: The event is public and the event details are visible to all readers of the calendar.
      • Private: The event is private and only event attendees may view event details.

Refer to the Events: update | Google Calendar API documentation for more information.


Mautic node

URL: llms-txt#mautic-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Mautic node to automate work in Mautic, and integrate Mautic with other applications. n8n has built-in support for a wide range of Mautic features, including creating, updating, deleting, and getting companies and contacts, as well as adding and removing campaign contacts.

On this page, you'll find a list of operations the Mautic node supports and links to more resources.

Refer to Mautic credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Campaign Contact
    • Add contact to a campaign
    • Remove contact from a campaign
  • Company
    • Create a new company
    • Delete a company
    • Get data of a company
    • Get data of all companies
    • Update a company
  • Company Contact
    • Add contact to a company
    • Remove a contact from a company
  • Contact
    • Create a new contact
    • Delete a contact
    • Edit contact's points
    • Add contacts to or remove contacts from the do not contact list
    • Get data of a contact
    • Get data of all contacts
    • Send email to contact
    • Update a contact
  • Contact Segment
    • Add contact to a segment
    • Remove contact from a segment
  • Segment Email
    • Send

Templates and examples

Validate email of new contacts in Mautic

View template details

Add new customers from WooCommerce to Mautic

View template details

Send sales data from Webhook to Mautic

View template details

Browse Mautic integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Credentials

URL: llms-txt#credentials

Credentials are private pieces of information issued by apps and services to authenticate you as a user and allow you to connect and share information between the app or service and the n8n node.

Access the credentials UI by opening the left menu and selecting Credentials. n8n lists credentials you created on the My credentials tab. The All credentials tab shows all credentials you can use, including credentials shared with you by other users.


Transforming data

URL: llms-txt#transforming-data

n8n uses a predefined data structure that allows all nodes to process incoming data correctly.

Your incoming data may have a different data structure, in which case you will need to transform it to allow each item to be processed individually.

For example, an HTTP Request node might return data that's incompatible with n8n's data structure: the node returns the data, but reports that only one item was returned.

To transform this kind of structure into the n8n data structure you can use the data transformation nodes:

  • Aggregate: take separate items, or portions of them, and group them together into individual items.
  • Limit: remove items beyond a defined maximum number.
  • Remove Duplicates: identify and delete items that are identical across all fields or a subset of fields.
  • Sort: organize lists of items in a desired ordering, or generate a random selection.
  • Split Out: separate a single data item containing a list into multiple items.
  • Summarize: aggregate items together, in a manner similar to Excel pivot tables.
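
If you prefer code over the nodes above, a Code node can do the same reshaping. The sketch below assumes the single incoming item wraps its records in a hypothetical results array (the field name will differ per API) and produces one n8n item per record, which is what Split Out does for you without code:

```js
// Code node, mode: Run Once for All Items.
// Assumes the previous node returned one item whose json contains a `results` array (hypothetical field name).
const records = $input.first().json.results ?? [];

// Emit one item per record using n8n's required { json: ... } structure.
return records.map((record) => ({ json: record }));
```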

Calculator node

URL: llms-txt#calculator-node

Contents:

  • Templates and examples
  • Related resources

The Calculator node is a tool that allows an agent to run mathematical calculations.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Templates and examples

Build Your First AI Data Analyst Chatbot

View template details

Chat with OpenAI Assistant (by adding a memory)

View template details

AI marketing report (Google Analytics & Ads, Meta Ads), sent via email/Telegram

by Friedemann Schuetz

View template details

Browse Calculator integration templates, or search all templates

Refer to LangChain's documentation on tools for more information about tools in LangChain.

View n8n's Advanced AI documentation.


Motorhead credentials

URL: llms-txt#motorhead-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API key

Refer to Motorhead's API documentation for more information about the service.

View n8n's Advanced AI documentation.

To configure this credential, you'll need a Motorhead account and:

  • Your Host URL
  • An API Key
  • A Client ID

To set it up, you'll generate an API key:

  1. If you're self-hosting Motorhead, update the Host URL to match your Motorhead URL.
  2. In Motorhead, go to Settings > Organization.
  3. In the API Keys section, select Create.
  4. Enter a Name for your API Key, like n8n integration.
  5. Select Generate.
  6. Copy the apiKey and enter it in your n8n credential.
  7. Return to the API key list.
  8. Copy the clientID for the key and enter it as the Client ID in your n8n credential.

Refer to Generate an API key for more information.


Pushbullet credentials

URL: llms-txt#pushbullet-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a Pushbullet account.

Supported authentication methods

  • OAuth2

Refer to Pushbullet's API documentation for more information about the service.

To configure this credential, you'll need:

  • A Client ID: Generated when you create a Pushbullet app, also known as an OAuth client.
  • A Client Secret: Generated when you create a Pushbullet app, also known as an OAuth client.

To generate the Client ID and Client Secret, go to the create client page. Copy the OAuth Redirect URL from n8n and add this as your redirect_uri for the app/client. Use the client_id and client_secret from the OAuth Client in your n8n credential.

Refer to Pushbullet's OAuth2 Guide for more information.

Pushbullet OAuth test link

Pushbullet offers a test link during the client creation process described above. This link isn't compatible with n8n. To verify the authentication works, use the Connect my account button in n8n.


Rundeck node

URL: llms-txt#rundeck-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported
  • Find the job ID

Use the Rundeck node to automate work in Rundeck, and integrate Rundeck with other applications. n8n has built-in support for executing jobs and getting metadata.

On this page, you'll find a list of operations the Rundeck node supports and links to more resources.

Refer to Rundeck credentials for guidance on setting up authentication.

  • Job
    • Execute a job
    • Get metadata of a job

Templates and examples

Browse Rundeck integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

  1. Access your Rundeck dashboard.
  2. Open the project that contains the job you want to use with n8n.
  3. In the sidebar, select JOBS.
  4. Under All Jobs, select the name of the job you want to use with n8n.
  5. In the top left corner, copy the string displayed in a smaller font directly below the job name. This is your job ID.
  6. Paste this job ID in the Job Id field in n8n.

Google Drive File and Folder operations

URL: llms-txt#google-drive-file-and-folder-operations

Contents:

  • Search files and folders
    • Options

Use this operation to search for files and folders in Google Drive. Refer to Google Drive for more information on the Google Drive node itself.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Search files and folders

Use this operation to search for files and folders in a drive.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.

  • Resource: Select File/Folder.

  • Operation: Select Search.

  • Search Method: Choose how you want to search:

    • Search File/Folder Name: Fill out the Search Query with the name of the file or folder you want to search for. Returns files and folders that are partial matches for the query as well.
    • Advanced Search: Fill out the Query String to search for files and folders using Google query string syntax.
  • Return All: Choose whether to return all results or only up to a given limit.

  • Limit: The maximum number of items to return when Return All is disabled.

  • Filter: Choose whether to limit the scope of your search:

    • Drive: The drive you want to search in. By default, uses your personal "My Drive". Select From list to choose the drive from the dropdown list, By URL to enter the URL of the drive, or By ID to enter the driveId.
      • You can find the driveId by visiting the shared drive in your browser and copying the last URL component: https://drive.google.com/drive/u/1/folders/driveId.
    • Folder: The folder to search in. Select From list to choose the folder from the dropdown list, By URL to enter the URL of the folder, or By ID to enter the folderId.
      • You can find the folderId by visiting the shared folder in your browser and copying the last URL component: https://drive.google.com/drive/u/1/folders/folderId.
    • What to Search: Whether to search for Files and Folders, Files, or Folders.
    • Include Trashed Items: Whether to also return items in the Drive's trash.
  • Fields: Select the fields to return. Can be one or more of the following: [All], explicitlyTrashed, exportLinks, hasThumbnail, iconLink, ID, Kind, mimeType, Name, Permissions, Shared, Spaces, Starred, thumbnailLink, Trashed, Version, or webViewLink.

Refer to the Method: files.list | Google Drive API documentation for more information.
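
As an illustration, the Query String field for Advanced Search uses Google Drive's search query syntax (the q parameter of the files.list method). A few hedged examples; the names and dates are hypothetical:

```js
// Possible values for the "Query String" field (Advanced Search):
const exampleQueries = [
  "name contains 'invoice'",                                              // partial name match
  "mimeType = 'application/vnd.google-apps.folder' and trashed = false",  // folders not in the trash
  "modifiedTime > '2024-01-01T00:00:00'",                                 // items changed since a date
];
```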


PagerDuty node

URL: llms-txt#pagerduty-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the PagerDuty node to automate work in PagerDuty, and integrate PagerDuty with other applications. n8n has built-in support for a wide range of PagerDuty features, including creating and updating incidents, creating incident notes, and getting log entries and users.

On this page, you'll find a list of operations the PagerDuty node supports and links to more resources.

Refer to PagerDuty credentials for guidance on setting up authentication.

  • Incident
    • Create an incident
    • Get an incident
    • Get all incidents
    • Update an incident
  • Incident Note
    • Create an incident note
    • Get all incident's notes
  • Log Entry
    • Get a log entry
    • Get all log entries
  • User
    • Get a user

Templates and examples

Manage custom incident response in PagerDuty and Jira

View template details

Incident Response Workflow - Part 3

View template details

Incident Response Workflow - Part 2

View template details

Browse PagerDuty integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Calendly credentials

URL: llms-txt#calendly-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API access token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported Calendly plans

The Calendly Trigger node relies on Calendly webhooks. Calendly only offers access to webhooks in their paid plans.

Supported authentication methods

  • API access token
  • OAuth2

Refer to Calendly's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need a Calendly account and:

  • An API Key or Personal Access Token

To get your access token:

  1. Go to the Calendly Integrations & apps page.
  2. Select API & Webhooks.
  3. In Your Personal Access Tokens, select Generate new token.
  4. Enter a Name for your access token, like n8n integration.
  5. Select Create token.
  6. Select Copy token and enter it in your n8n credential.

Refer to Calendly's API authentication documentation for more information.

To configure this credential, you'll need a Calendly developer account and:

  • A Client ID
  • A Client Secret

To get both, create a new OAuth app in Calendly:

  1. Log in to Calendly's developer portal and go to My apps.
  2. Select Create new app.
  3. Enter a Name of app, like n8n integration.
  4. In Kind of app, select Web.
  5. In Environment type, select the environment that corresponds to your usage, either Sandbox or Production.
    • Calendly recommends starting with Sandbox for development and creating a second application for Production when you're ready to go live.
  6. Copy the OAuth Redirect URL from n8n and enter it as a Redirect URI in the OAuth app.
  7. Select Save & Continue. The app details display.
  8. Copy the Client ID and enter this as your n8n Client ID.
  9. Copy the Client secret and enter this as your n8n Client Secret.
  10. Select Connect my account in n8n and follow the on-screen prompts to finish authorizing the credential.

Refer to Registering your application with Calendly for more information.


Gotify credentials

URL: llms-txt#gotify-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token

You can use these credentials to authenticate the following nodes:

Install Gotify on your server.

Supported authentication methods

  • API token

Refer to Gotify's API documentation for more information about the service.

To configure this credential, you'll need:

  • An App API Token: Only required if you'll use this credential to create messages. To generate an App API token, create an application from the Apps menu. Refer to Gotify's Push messages documentation for more information.
  • A Client API Token: Required for all actions other than creating messages (such as deleting or retrieving messages). To generate a Client API token, create a client from the Clients menu.
  • The URL of the Gotify host

WhatsApp Business Cloud node common issues

URL: llms-txt#whatsapp-business-cloud-node-common-issues

Contents:

  • Bad request - please check your parameters
  • Working with non-text media

Here are some common errors and issues with the WhatsApp Business Cloud node and steps to resolve or troubleshoot them.

Bad request - please check your parameters

This error occurs when WhatsApp Business Cloud rejects your request because of a problem with its parameters. It's common to see this when using the Send Template operation if the data you send doesn't match the format of your template.

To resolve this issue, review the parameters in your message template. Pay attention to each parameter's data type and the order they're defined in the template.

Check the data that n8n is mapping to the template parameters. If you're using expressions to set parameter values, check the input data to make sure each item resolves to a valid value. You may want to use the Edit Fields (Set) node or set a fallback value to ensure you send a value with the correct format.

Working with non-text media

The WhatsApp Business Cloud node can work with non-text messages and media like images, audio, documents, and more.

If your operation includes an Input Data Field Name or Property Name parameter, set this to the field name itself rather than referencing the data in an expression.

For example, if you are trying to send a message with an "Image" MessageType and Take Image From set to "n8n", set Input Data Field Name to a field name like data instead of an expression like {{ $json.input.data }}.


OpenAI Video operations

URL: llms-txt#openai-video-operations

Contents:

  • Generate Video
    • Options

Use this operation to generate a video in OpenAI. Refer to OpenAI for more information on the OpenAI node itself.

Use this operation to generate a video from a text prompt.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Video.

  • Operation: Select Generate Video.

  • Model: Select the model you want to use to generate a video. Currently supports sora-2 and sora-2-pro.

  • Prompt: The prompt to generate a video from.

  • Seconds: Clip duration in seconds (up to 25).

  • Size: Output resolution formatted as width x height. 1024x1792 and 1792x1024 are only supported by Sora 2 Pro.

  • Reference: Optional image reference that guides generation. Has to be passed in as a binary item.

  • Wait Timeout: Time to wait for the video to be generated in seconds. Defaults to 300.

  • Output Field Name: The name of the output field to put the binary file data in. Defaults to data.

Refer to Video Generation | OpenAI for more information.


5. Calculating Booked Orders

URL: llms-txt#5.-calculating-booked-orders

Contents:

  • About the Code node
  • Configure the Code node
  • What's next?

In this step of the workflow you will learn how n8n structures data and how to add custom JavaScript code to perform calculations using the Code node. After this step, your workflow should look like this:

View workflow file

The next step in Nathan's workflow is to calculate two values from the booked orders:

  • The total number of booked orders
  • The total value of all booked orders

To calculate data and add more functionality to your workflows you can use the Code node, which lets you write custom JavaScript code.

About the Code node

The Code node has two operational modes, depending on how you want to process items:

  • Run Once for All Items allows you to write code to process all input items at once, as a group.
  • Run Once for Each Item executes your code once for each input item.

Learn more about how to use the Code node.

In n8n, the data that's passed between nodes is an array of objects with the following JSON structure (see Example 1 at the end of this section; the numbers below correspond to its (1) to (7) markers):

  1. (required) n8n stores the actual data within a nested json key. This property is required, but can be set to anything from an empty object (like {}) to arrays and deeply nested data. The Code node automatically wraps the data in a json object and parent array ([]) if it's missing.
  2. (optional) Binary data of the item. Most items in n8n don't contain binary data.
  3. (required) Arbitrary key name for the binary data.
  4. (required) Base64-encoded binary data.
  5. (optional) The MIME type of the binary data; set it if possible.
  6. (optional) The file extension of the binary data; set it if possible.
  7. (optional) The file name of the binary data; set it if possible.

You can learn more about the expected format on the n8n data structure page.

Configure the Code node

Now let's see how to accomplish Nathan's task using the Code node.

In your workflow, add a Code node connected to the false branch of the If node.

With the Code node window open, configure these parameters:

  • Mode: Select Run Once for All Items.

  • Language: Select JavaScript.

Using Python in code nodes

While we use JavaScript below, you can also use Python in the Code node. To learn more, refer to the Code node documentation.

  • Copy the code shown in Example 2 at the end of this section and paste it into the Code box to replace the existing code.

Notice the format in which we return the results of the calculation (shown in Example 3 at the end of this section).

If you don't use the correct data structure, you will get an error message: Error: Always an Array of items has to be returned!

Now select Execute step. The output should contain the calculated totalBooked and bookedSum values.

Nathan 🙋: Wow, the Code node is powerful! This means that if I have some basic JavaScript skills I can power up my workflows.

You 👩‍🔧: Yes! You can progress from no-code to low-code!

Nathan 🙋: Now, how do I send the calculations for the booked orders to my team's Discord channel?

You 👩‍🔧: There's an n8n node for that. I'll set it up in the next step.

Examples:

Example 1 (unknown):

[
    {
   	 "json": { // (1)!
   		 "apple": "beets",
   		 "carrot": {
   			 "dill": 1
   		 }
   	 },
   	 "binary": { // (2)!
   		 "apple-picture": { // (3)!
   			 "data": "....", // (4)!
   			 "mimeType": "image/png", // (5)!
   			 "fileExtension": "png", // (6)!
   			 "fileName": "example.png", // (7)!
   		 }
   	 }
    },
    ...
]

Example 2 (unknown):

let items = $input.all();
let totalBooked = items.length;
let bookedSum = 0;

for (let i = 0; i < items.length; i++) {
  bookedSum = bookedSum + items[i].json.orderPrice;
}

return [{ json: { totalBooked, bookedSum } }];

Example 3 (unknown):

return [{ json: {totalBooked, bookedSum} }]

Binary data

URL: llms-txt#binary-data

Contents:

  • Enable filesystem mode
  • Binary data pruning

Binary data is any file-type data, such as image files or documents generated or processed during the execution of a workflow.

Enable filesystem mode

When handling binary data, n8n keeps the data in memory by default. This can cause crashes when working with large files.

To avoid this, change the N8N_DEFAULT_BINARY_DATA_MODE environment variable to filesystem. This causes n8n to save data to disk, instead of using memory.

If you're using queue mode, keep the default setting: n8n doesn't support filesystem mode with queue mode.
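
For example, on a self-hosted instance you could set the variable before starting n8n (a minimal sketch; set it wherever you manage environment variables, such as your Docker or systemd configuration):

export N8N_DEFAULT_BINARY_DATA_MODE=filesystem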

Binary data pruning

n8n executes binary data pruning as part of execution data pruning. Refer to Execution data | Enable executions pruning for details.

If you configure multiple binary data modes, binary data pruning operates on the active binary data mode. For example, if your instance stored data in S3, and you later switched to filesystem mode, n8n only prunes binary data in the filesystem. Refer to External storage for details.


Webhook credentials

URL: llms-txt#webhook-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Using basic auth
  • Using header auth
  • Using JWT auth

You can use these credentials to authenticate the following nodes:

You must use the authentication method required by the app or service you want to query.

Supported authentication methods

  • Basic auth
  • Header auth
  • JWT auth
  • None

Use this generic authentication if your app or service supports basic authentication.

To configure this credential, enter:

  • The Username you use to access the app or service your HTTP Request is targeting
  • The Password that goes with that username

Use this generic authentication if your app or service supports header authentication.

To configure this credential, enter:

  • The header Name you need to pass to the app or service your HTTP request is targeting
  • The Value for the header

Read more about HTTP headers
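
As an illustration, a caller would then send that header with every request to the webhook's URL. The sketch below uses a hypothetical header name, value, and URL; they must match whatever you configured in the credential and the Webhook node:

```js
// Hypothetical caller (any HTTP client works; fetch is shown here).
const response = await fetch("https://your-n8n-instance/webhook/my-path", {
  method: "POST",
  headers: {
    "X-Api-Key": "the-value-from-the-credential", // the header Name and Value set in the Header Auth credential
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ hello: "world" }),
});
```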

JWT Auth is a method of authentication that uses JSON Web Tokens (JWT) to digitally sign data. This authentication method uses the JWT credential and can use either a Passphrase or PEM Key as key type. Refer to JWT credential for more information.


Query JSON with JMESPath

URL: llms-txt#query-json-with-jmespath

Contents:

  • The jmespath() method
  • Common tasks
    • Apply a JMESPath expression to a collection of elements with projections
    • Select multiple elements and create a new list or object
    • An alternative to arrow functions in expressions

JMESPath is a query language for JSON that you can use to extract and transform elements from a JSON document. For full details of how to use JMESPath, refer to the JMESPath documentation.

The jmespath() method

n8n provides a custom method, jmespath(). Use this method to perform a search on a JSON object using the JMESPath query language.

To help understand what the method does, Example 3 at the end of this section shows the equivalent longer JavaScript.

Expressions must be single-line

The longer code example doesn't work in Expressions, as they must be single-line.

In $jmespath(object, searchString) (shown in Example 1 at the end of this section), object is a JSON object, such as the output of a previous node. searchString is an expression written in the JMESPath query language. The JMESPath Specification provides a list of supported expressions, while their Tutorial and Examples provide interactive examples.

Search parameter order

The examples in the JMESPath Specification follow the pattern search(searchString, object). The JMESPath JavaScript library, which n8n uses, supports search(object, searchString) instead. This means that when using examples from the JMESPath documentation, you may need to change the order of the search function parameters.

This section provides examples for some common operations. More examples, and detailed guidance, are available in JMESPath's own documentation.

When trying out these examples, you need to set the Code node Mode to Run Once for Each Item.

Apply a JMESPath expression to a collection of elements with projections

From the JMESPath projections documentation:

Projections are one of the key features of JMESPath. Use them to apply an expression to a collection of elements. JMESPath supports five kinds of projections:

  • List Projections
  • Slice Projections
  • Object Projections
  • Flatten Projections
  • Filter Projections

The following example shows basic usage of list, slice, and object projections. Refer to the JMESPath projections documentation for detailed explanations of each projection type, and more examples.

Given this JSON from a webhook node (shown in Example 4 at the end of this section):

Retrieve a list of all the people's first names:

Get a slice of the first names:

Get a list of the dogs' ages using object projections:
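
The inline expressions for the three queries above aren't reproduced in this extract. As a sketch using n8n's $jmespath() helper against the webhook payload shown in Example 4 (note the data sits under body):

```js
// All first names (list projection)
{{ $jmespath($json.body.people, "[*].first") }}
// → ["James", "Jacob", "Jayden"]

// The first two first names (slice projection)
{{ $jmespath($json.body.people, "[:2].first") }}
// → ["James", "Jacob"]

// The dogs' ages (object projection)
{{ $jmespath($json.body.dogs, "*.age") }}
// → [7, 5]
```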

Select multiple elements and create a new list or object

Use Multiselect to select elements from a JSON object and combine them into a new list or object.

Given JSON from a webhook node with the same people structure as Example 4:

Use multiselect list to get the first and last names and create new lists containing both names:
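
The inline expression for this example isn't reproduced in this extract; a sketch:

```js
// Multiselect list: one new list per person containing both names
{{ $jmespath($json.body.people, "[*].[first, last]") }}
// → [["James", "Green"], ["Jacob", "Jones"], ["Jayden", "Smith"]]

// Multiselect hash: one new object per person with renamed keys
{{ $jmespath($json.body.people, "[*].{firstName: first, lastName: last}") }}
// → [{"firstName": "James", "lastName": "Green"}, ...]
```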

An alternative to arrow functions in expressions

When an expression would otherwise need an arrow function (for example, filtering an array with .find() or .filter()), a JMESPath filter projection can often do the same job in a single expression. For example, generate some input data by returning code like the sketch below from a Code node:

You could do a search like "find the item with the name Lenovo and tell me their category ID."
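
The sample input code for this example isn't included in this extract. The sketch below uses hypothetical field names (products, name, category_id) purely for illustration:

```js
// Code node: return one item whose json contains a small product list (hypothetical data).
return [
  {
    json: {
      products: [
        { name: "Lenovo", category_id: 101 },
        { name: "Dell", category_id: 102 },
      ],
    },
  },
];
```

A later node could then answer that question with a filter projection, where you might otherwise reach for an arrow function such as .find():

```js
{{ $jmespath($json.products, "[?name=='Lenovo'].category_id") }}
// → [101]
```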

Examples:

Example 1 (unknown):

$jmespath(object, searchString)

Example 2 (unknown):

_jmespath(object, searchString)

Example 3 (unknown):

var jmespath = require('jmespath');
jmespath.search(object, searchString);

Example 4 (unknown):

[
  {
    "headers": {
      "host": "n8n.instance.address",
      ...
    },
    "params": {},
    "query": {},
    "body": {
      "people": [
        {
          "first": "James",
          "last": "Green"
        },
        {
          "first": "Jacob",
          "last": "Jones"
        },
        {
          "first": "Jayden",
          "last": "Smith"
        }
      ],
      "dogs": {
        "Fido": {
          "color": "brown",
          "age": 7
        },
        "Spot": {
          "color": "black and white",
          "age": 5
        }
      }
    }
  }
]

Yourls node

URL: llms-txt#yourls-node

Contents:

  • Operations
  • Templates and examples

Use the Yourls node to automate work in Yourls, and integrate Yourls with other applications. n8n has built-in support for a wide range of Yourls features, including expanding and shortening URLs.

On this page, you'll find a list of operations the Yourls node supports and links to more resources.

Refer to Yourls credentials for guidance on setting up authentication.

  • URL
    • Expand a URL
    • Shorten a URL
    • Get stats about one short URL

Templates and examples

Browse Yourls integration templates, or search all templates


Customer.io credentials

URL: llms-txt#customer.io-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key
  • Why you need a Tracking API Key and an App API Key

You can use these credentials to authenticate the following nodes with Customer.io.

Create a Customer.io account.

Supported authentication methods

  • API key

Refer to Customer.io's summary API documentation for more information about the service.

For detailed API reference documentation for each API, refer to the Track API documentation and the App API documentation.

To configure this credential, you'll need:

  • A Tracking API Key: For use with the Track API at https://track.customer.io/api/v1/. See the FAQs below for more details.
  • Your Region: Customer.io uses different API subdomains depending on the region you select. Options include:
    • Global region: Keeps the default URLs for both APIs; for use in all non-EU countries/regions.
    • EU region: Adjusts the Track API subdomain to track-eu and the App API subdomain to api-eu; only use this if you are in the EU.
  • A Tracking Site ID: Required with your Tracking API Key
  • An App API Key: For use with the App API at https://api.customer.io/v1/api/. See the FAQs below for more details.

Refer to the Customer.io Finding and managing your API credentials documentation for instructions on creating both Tracking API and App API keys.

Why you need a Tracking API Key and an App API Key

Customer.io has two different API endpoints and generates and stores the keys for each slightly differently:

  • The Track API at https://track.customer.io/api/v1/
  • The App API at https://api.customer.io/v1/api/

The Track API requires a Tracking Site ID; the App API doesn't.

Based on the operation you want to perform, n8n uses the correct API key and its corresponding endpoint.


Google Drive File operations

URL: llms-txt#google-drive-file-operations

Contents:

  • Copy a file
    • Options
  • Create from text
    • Options
  • Delete a file
    • Options
  • Download a file
    • Options
  • Move a file
  • Share a file
  • Update a file
  • Upload a file

Use these operations to create, delete, change, and manage files in Google Drive. Refer to Google Drive for more information on the Google Drive node itself.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Use this operation to copy a file to a drive.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.

  • Resource: Select File.

  • Operation: Select Copy.

  • File: Choose a file you want to copy.

    • Select From list to choose the title from the dropdown list, By URL to enter the URL of the file, or By ID to enter the fileId.
    • You can find the fileId in a shareable Google Drive file URL: https://docs.google.com/document/d/fileId/edit#gid=0. In your Google Drive, select Share > Copy link to get the shareable file URL.
  • File Name: The name to use for the new copy of the file.

  • Copy In The Same Folder: Choose whether to copy the file to the same folder. If disabled, set the following:

    • Parent Drive: Select From list to choose the drive from the dropdown list, By URL to enter the URL of the drive, or By ID to enter the driveId.
    • Parent Folder: Select From list to choose the folder from the dropdown list, By URL to enter the URL of the folder, or By ID to enter the folderId.
    • You can find the driveId and folderID by visiting the shared drive or folder in your browser and copying the last URL component: https://drive.google.com/drive/u/1/folders/driveId.
  • Copy Requires Writer Permissions: Select whether to enable readers and commenters to copy, print, or download the new file.

  • Description: A short description of the file.

Refer to the Method: files.copy | Google Drive API documentation for more information.

Use this operation to create a new file in a drive from provided text.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.
  • Resource: Select File.
  • Operation: Select Create From Text.
  • File Content: Enter the file content to use to create the new file.
  • File Name: The name to use for the new file.
  • Parent Drive: Select From list to choose the drive from the dropdown list, By URL to enter the URL of the drive, or By ID to enter the driveId.
  • Parent Folder: Select From list to choose the folder from the dropdown list, By URL to enter the URL of the folder, or By ID to enter the folderId.

You can find the driveId and folderID by visiting the shared drive or folder in your browser and copying the last URL component: https://drive.google.com/drive/u/1/folders/driveId.

  • APP Properties: A bundle of arbitrary key-value pairs which are private to the requesting app.

  • Properties: A bundle of arbitrary key-value pairs which are visible to all apps.

  • Keep Revision Forever: Choose whether to set the keepForever field in the new head revision. This only applies to files with binary content. You can keep a maximum of 200 revisions, after which you must delete the pinned revisions.

  • OCR Language: An ISO 639-1 language code to help the OCR interpret the content during import.

  • Use Content As Indexable Text: Choose whether to mark the uploaded content as indexable text.

  • Convert to Google Document: Choose whether to create a Google Document instead of the default .txt format. You must enable the Google Docs API in the Google API Console for this to work.

Refer to the Method: files.insert | Google Drive API documentation for more information.

Use this operation to delete a file from a drive.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.

  • Resource: Select File.

  • Operation: Select Delete.

  • File: Choose a file you want to delete.

    • Select From list to choose the title from the dropdown list, By URL to enter the URL of the file, or By ID to enter the fileId.
    • You can find the fileId in a shareable Google Drive file URL: https://docs.google.com/document/d/fileId/edit#gid=0. In your Google Drive, select Share > Copy link to get the shareable file URL.
  • Delete Permanently: Choose whether to delete the file now instead of moving it to the trash.

Refer to the Method: files.delete | Google Drive API documentation for more information.

Use this operation to download a file from a drive.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.

  • Resource: Select File.

  • Operation: Select Download.

  • File: Choose a file you want to download.

    • Select From list to choose the title from the dropdown list, By URL to enter the URL of the file, or By ID to enter the fileId.
    • You can find the fileId in a shareable Google Drive file URL: https://docs.google.com/document/d/fileId/edit#gid=0. In your Google Drive, select Share > Copy link to get the shareable file URL.
  • Put Output File in Field: Choose the field name to place the binary file contents to make it available to following nodes.

  • Google File Conversion: Choose the formats to export as when downloading Google Files:

    • Google Docs: Choose the export format to use when downloading Google Docs files: HTML, MS Word Document, Open Office Document, PDF, Rich Text (rtf), or Text (txt).
    • Google Drawings: Choose the export format to use when downloading Google Drawing files: JPEG, PDF, PNG, or SVG.
    • Google Slides: Choose the export format to use when downloading Google Slides files: MS PowerPoint, OpenOffice Presentation, or PDF.
    • Google Sheets: Choose the export format to use when downloading Google Sheets files: CSV, MS Excel, Open Office Sheet, or PDF.
  • File Name: The name to use for the downloaded file.

Refer to the Method: files.get | Google Drive API documentation for more information.

Use this operation to move a file to a different location in a drive.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.
  • Resource: Select File.
  • Operation: Select Move.
  • File: Choose a file you want to move.
    • Select From list to choose the title from the dropdown list, By URL to enter the URL of the file, or By ID to enter the fileId.
    • You can find the fileId in a shareable Google Drive file URL: https://docs.google.com/document/d/fileId/edit#gid=0. In your Google Drive, select Share > Copy link to get the shareable file URL.
  • Parent Drive: Select From list to choose the drive from the dropdown list, By URL to enter the URL of the drive, or By ID to enter the driveId.
  • Parent Folder: Select From list to choose the folder from the dropdown list, By URL to enter the URL of the folder, or By ID to enter the folderId.

You can find the driveId and folderID by visiting the shared drive or folder in your browser and copying the last URL component: https://drive.google.com/drive/u/1/folders/driveId.

Refer to the Method: parents.insert | Google Drive API documentation for more information.

Use this operation to add sharing permissions to a file.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.

  • Resource: Select File.

  • Operation: Select Share.

  • File: Choose a file you want to share.

    • Select From list to choose the title from the dropdown list, By URL to enter the URL of the file, or By ID to enter the fileId.
    • You can find the fileId in a shareable Google Drive file URL: https://docs.google.com/document/d/fileId/edit#gid=0. In your Google Drive, select Share > Copy link to get the shareable file URL.
  • Permissions: The permissions to add to the file:

    • Role: Select what users can do with the file. Can be one of Commenter, File Organizer, Organizer, Owner, Reader, Writer.
    • Type: Select the scope of the new permission:
      • User: Grant permission to a specific user, defined by entering their Email Address.
      • Group: Grant permission to a specific group, defined by entering its Email Address.
      • Domain: Grant permission to a complete domain, defined by the Domain.
      • Anyone: Grant permission to anyone. Can optionally Allow File Discovery to make the file discoverable through search.
  • Email Message: A plain text custom message to include in the notification email.

  • Move to New Owners Root: Available when trying to transfer ownership while sharing an item not in a shared drive. When enabled, moves the file to the new owner's My Drive root folder.

  • Send Notification Email: Whether to send a notification email when sharing to users or groups.

  • Transfer Ownership: Whether to transfer ownership to the specified user and downgrade the current owner to writer permissions.

  • Use Domain Admin Access: Whether to perform the action as a domain administrator.

Refer to the REST Resources: files | Google Drive API documentation for more information.

Use this operation to update a file.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.

  • Resource: Select File.

  • Operation: Select Update.

  • File to Update: Choose a file you want to update.

    • Select From list to choose the title from the dropdown list, By URL to enter the URL of the file, or By ID to enter the fileId.
    • You can find the fileId in a shareable Google Drive file URL: https://docs.google.com/document/d/fileId/edit#gid=0. In your Google Drive, select Share > Copy link to get the shareable file URL.
  • Change File Content: Choose whether to send new binary data to replace the existing file content. If enabled, fill in the following:

    • Input Data Field Name: The name of the input field that contains the binary file data you wish to use.
  • New Updated File Name: A new name for the file if you want to update the filename.

  • APP Properties: A bundle of arbitrary key-value pairs which are private to the requesting app.

  • Properties: A bundle of arbitrary key-value pairs which are visible to all apps.

  • Keep Revision Forever: Choose whether to set the keepForever field in the new head revision. This only applies to files with binary content. You can keep a maximum of 200 revisions, after which you must delete the pinned revisions.

  • OCR Language: An ISO 639-1 language code to help the OCR interpret the content during import.

  • Use Content As Indexable Text: Choose whether to mark the uploaded content as indexable text.

  • Move to Trash: Whether to move the file to the trash. Only possible for the file owner.

  • Return Fields: Return metadata fields about the file. Can be one or more of the following: [All], explicitlyTrashed, exportLinks, hasThumbnail, iconLink, ID, Kind, mimeType, Name, Permissions, Shared, Spaces, Starred, thumbnailLink, Trashed, Version, or webViewLink.

Refer to the Method: files.update | Google Drive API documentation for more information.

Use this operation to upload a file.

Enter these parameters:

  • Credential to connect with: Create or select an existing Google Drive credential.
  • Resource: Select File.
  • Operation: Select Upload.
  • Input Data Field Name: The name of the input field that contains the binary file data you wish to use.
  • File Name: The name to use for the new file.
  • Parent Drive: Select From list to choose the drive from the dropdown list, By URL to enter the URL of the drive, or By ID to enter the driveId.
  • Parent Folder: Select From list to choose the folder from the dropdown list, By URL to enter the URL of the folder, or By ID to enter the folderId.

You can find the driveId and folderID by visiting the shared drive or folder in your browser and copying the last URL component: https://drive.google.com/drive/u/1/folders/driveId.

  • APP Properties: A bundle of arbitrary key-value pairs which are private to the requesting app.

  • Properties: A bundle of arbitrary key-value pairs which are visible to all apps.

  • Keep Revision Forever: Choose whether to set the keepForever field in the new head revision. This only applies to files with binary content. You can keep a maximum of 200 revisions, after which you must delete the pinned revisions.

  • OCR Language: An ISO 639-1 language code to help the OCR interpret the content during import.

  • Use Content As Indexable Text: Choose whether to mark the uploaded content as indexable text.

  • Simplify Output: Choose whether to return a simplified version of the response instead of including all fields.

Refer to the Method: files.insert | Google Drive API documentation for more information.


No Operation, do nothing

URL: llms-txt#no-operation,-do-nothing

Contents:

  • Templates and examples

Use the No Operation, do nothing node when you don't want to perform any operations. The purpose of this node is to make the workflow easier to read and understand where the flow of data stops. This can help others visually get a better understanding of the workflow.

Templates and examples

Back Up Your n8n Workflows To Github

View template details

🩷Automated Social Media Content Publishing Factory + System Prompt Composition

View template details

Host Your Own AI Deep Research Agent with n8n, Apify and OpenAI o3

View template details

Browse No Operation, do nothing integration templates, or search all templates


MessageBird credentials

URL: llms-txt#messagebird-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Bird account.

Supported authentication methods

  • API key

Refer to MessageBird's API documentation for more information about the service.

To configure this credential, you'll need:


Grist credentials

URL: llms-txt#grist-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Grist account.

Supported authentication methods

  • API key

Refer to Grist's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: Refer to the Grist API authentication documentation for instructions on creating an API key.
  • To select your Grist Plan Type. Options include:
    • Free
    • Paid: If selected, provide your Grist Custom Subdomain. This is the portion that comes before .getgrist.com. For example, if our full Grist domain was n8n.getgrist.com, we'd enter n8n here.
    • Self-Hosted: If selected, provide your Grist Self-Hosted URL. This should be the full URL.

Emelia node

URL: llms-txt#emelia-node

Contents:

  • Operations
  • Templates and examples

Use the Emelia node to automate work in Emelia, and integrate Emelia with other applications. n8n has built-in support for a wide range of Emelia features, including creating campaigns, and adding contacts to a list.

On this page, you'll find a list of operations the Emelia node supports and links to more resources.

Refer to Emelia credentials for guidance on setting up authentication.

  • Campaign
    • Add Contact
    • Create
    • Get
    • Get All
    • Pause
    • Start
  • Contact List
    • Add
    • Get All

Templates and examples

Send a message on Mattermost when you get a reply in Emelia

View template details

Create a campaign, add a contact, and get the campaign from Emelia

View template details

🛠️ Emelia Tool MCP Server 💪 all 9 operations

View template details

Browse Emelia integration templates, or search all templates


Don't save successful executions

URL: llms-txt#don't-save-successful-executions

export EXECUTIONS_DATA_SAVE_ON_SUCCESS=none


Plivo node

URL: llms-txt#plivo-node

Contents:

  • Operations
  • Templates and examples

Use the Plivo node to automate work in Plivo, and integrate Plivo with other applications. n8n has built-in support for a wide range of Plivo features, including making calls, and sending SMS/MMS.

On this page, you'll find a list of operations the Plivo node supports and links to more resources.

Refer to Plivo credentials for guidance on setting up authentication.

  • Call
    • Make a voice call
  • MMS
    • Send an MMS message (US/Canada only)
  • SMS
    • Send an SMS message

Templates and examples

Send daily weather updates to a phone number via Plivo

View template details

Create and Join Call Sessions with Plivo and UltraVox AI Voice Assistant

View template details

🛠️ Plivo Tool MCP Server 💪 all 3 operations

View template details

Browse Plivo integration templates, or search all templates


Community node verification guidelines

URL: llms-txt#community-node-verification-guidelines

Contents:

  • Use the n8n-node tool
  • Package source verification
  • No external dependencies
  • Proper documentation
  • No access to environment variables or file system
  • Follow n8n best practices
  • Use English language only

Do you want n8n to verify your node?

Consider following these guidelines while building your node if you want to submit it for verification by n8n. Any user with verified community nodes enabled can discover and install verified nodes from n8n's nodes panel across all deployment types (self-hosted and n8n Cloud).

Use the n8n-node tool

All verified community node authors should strongly consider using the n8n-node tool to create and check their package. This helps n8n ensure quality and consistency by:

  • Generating the expected package file structure
  • Adding the required metadata and configuration to the package.json file
  • Making it easy to lint your code against n8n's standards
  • Allowing you to load your node in a local n8n instance for testing

Package source verification

  • Verify that your npm package repository URL matches the expected GitHub (or other platform) repository.
  • Confirm that the package author / maintainer matches between npm and the repository.
  • Confirm that the git link in npm works and that the repository is public.
  • Make sure your package has proper documentation (README, usage examples, etc.).
  • Make sure your package license is MIT.

No external dependencies

  • Ensure that your package does not include any external dependencies to keep it lightweight and easy to maintain.

Proper documentation

  • Provide clear documentation, whether it's a README on GitHub or links to relevant API documentation.
  • Include usage instructions, example workflows, and any necessary authentication details.

No access to environment variables or file system

  • The code must not interact with environment variables or attempt to read/write files.
  • Pass all necessary data through node parameters.

Follow n8n best practices

  • Maintain a clear and consistent coding style.
  • Use TypeScript and follow n8n's node development guidelines.
  • Ensure proper error handling and validation.
  • Make sure the linter passes (in other words, make sure running npx @n8n/scan-community-package n8n-nodes-PACKAGE passes).

Use English language only

  • Both the node interface and all documentation must be in English only.
  • This includes parameter names, descriptions, help text, error messages and README content.

Oracle Database credentials

URL: llms-txt#oracle-database-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using database connection

You can use these credentials to authenticate the following nodes:

These nodes do not support SSH tunnels. They require Oracle Database 19c or later. For thick mode, use Oracle Client Libraries 19c or later.

Create a user account on an Oracle Database server.

Supported authentication methods

  • Database connection

Refer to Oracle Database documentation for more information about the service.

Using database connection

To configure this credential, you'll need:

  • A User name.
  • A Password for that user.
  • Connection String: The Oracle Database instance to connect to. The string can be an Easy Connect string, a TNS Alias from a tnsnames.ora file, or the name of the Oracle Database instance.
  • Use Optional Oracle Client Libraries: If you want to use node-oracledb Thick mode for working with advanced Oracle Database features, turn this on. This option isn't available in the official n8n Docker images. Additional settings are required to enable Thick mode. Refer to the Enabling Thick mode documentation for more information.
  • Use SSL: If your Connection String is using SSL, turn this on and configure additional details for the SSL Authentication.
  • Wallet Password: The password to decrypt the Privacy Enhanced Mail (PEM)-encoded private certificate, if it is encrypted.
  • Wallet Content: The security credentials required to establish a mutual TLS (mTLS) connection to Oracle Database.
  • Distinguished Name: The distinguished name (DN) that should be matched with the certificate DN.
  • Match Distinguished Name: Whether the server certificate DN should be matched in addition to the regular certificate verification that is performed.
  • Allow Weak Distinguished Name Match: Whether the secure DN matching behavior which checks both the listener and server certificates has to be performed.
  • Pool Min: The number of connections established to the database when a pool is created.
  • Pool Max: The maximum number of connections to which a connection pool can grow.
  • Pool Increment: The number of connections that are opened whenever a connection request exceeds the number of currently open connections.
  • Pool Maximum Session Life Time: The maximum number of seconds that a pooled connection can exist in the pool after it's first created.
  • Pool Connection Idle Timeout: The number of seconds after which idle connections (unused in the pool) are terminated.
  • Connection Class Name: DRCP/PRCP Connection Class. Refer to Enabling DRCP for more information.
  • Connection Timeout: The timeout duration in seconds for an application to establish an Oracle Net connection.
  • Transport Connection Timeout: The maximum number of seconds to wait to establish a connection to the database host.
  • Keepalive Probe Interval: The number of minutes between the sending of keepalive probes.

To set up your database connection credential:

  1. Enter your database's username as the User in your n8n credential.

  2. Enter the user's Password.

  3. Enter your database's connection string as the Connection String in your n8n credential.

  4. If your database uses SSL and you'd like to configure SSL for the connection, turn Use SSL on in the credential and enter the details of your Oracle Database SSL certificate in the SSL fields described above.

  5. In the Wallet Content field, enter the contents of the PEM-encoded wallet file, ewallet.pem, keeping the new lines intact. You can use the command shown in the example below to dump the file contents in the required format.

Refer to node-oracledb for more information on working with TLS connections.

Examples:

Example 1 (unknown):

node -e "console.log('{{\"' + require('fs').readFileSync('ewallet.pem', 'utf8').split('\n').join('\\\\n') + '\"}}')"

Disqus credentials

URL: llms-txt#disqus-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Disqus's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • An Access Token: Once you've registered an API application, copy the API Key and add it to n8n as the Access Token.

Use LangSmith with n8n

URL: llms-txt#use-langsmith-with-n8n

Contents:

  • Connect your n8n instance to LangSmith

LangSmith is a developer platform created by the LangChain team. You can connect your n8n instance to LangSmith to record and monitor runs in n8n, just as you can in a LangChain application.

Self-hosted n8n only.

Connect your n8n instance to LangSmith

  1. Log in to LangSmith and get your API key.

  2. Set the LangSmith environment variables:

| Variable | Value |
| --- | --- |
| LANGCHAIN_ENDPOINT | "https://api.smith.langchain.com" |
| LANGCHAIN_TRACING_V2 | true |
| LANGCHAIN_API_KEY | Set this to your API key |

Set the variables so that they're available globally in the environment where you host your n8n instance. You can do this in the same way as the rest of your general configuration.

For information on using LangSmith, refer to LangSmith's documentation.


Invoice Ninja Trigger node

URL: llms-txt#invoice-ninja-trigger-node

Invoice Ninja is a free open-source online invoicing app for freelancers & businesses. It offers invoicing, payments, expense tracking, & time-tasks.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Invoice Ninja Trigger integrations page.


Zep Vector Store node

URL: llms-txt#zep-vector-store-node

Contents:

  • Node usage patterns
    • Use as a regular node to insert, update, and retrieve documents
    • Connect directly to an AI agent as a tool
    • Use a retriever to fetch documents
    • Use the Vector Store Question Answer Tool to answer questions
  • Node parameters
    • Operation Mode
    • Rerank Results
    • Insert Documents parameters
    • Get Many parameters

This node is deprecated, and will be removed in a future version.

Use the Zep Vector Store to interact with Zep vector databases. You can insert documents into a vector database, get documents from a vector database, retrieve documents to provide them to a retriever connected to a chain, or connect it directly to an agent to use as a tool.

On this page, you'll find the node parameters for the Zep Vector Store node, and links to more resources.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Zep Vector Store integrations page.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Node usage patterns

You can use the Zep Vector Store node in the following patterns.

Use as a regular node to insert, update, and retrieve documents

You can use the Zep Vector Store as a regular node to insert or get documents. This pattern places the Zep Vector Store in the regular connection flow without using an agent.

You can see an example of this in scenario 1 of this template (the example uses Supabase, but the pattern is the same).

Connect directly to an AI agent as a tool

You can connect the Zep Vector Store node directly to the tool connector of an AI agent to use a vector store as a resource when answering queries.

Here, the connection would be: AI agent (tools connector) -> Zep Vector Store node.

Use a retriever to fetch documents

You can use the Vector Store Retriever node with the Zep Vector Store node to fetch documents from the Zep Vector Store. This is often used with the Question and Answer Chain node to fetch documents from the vector store that match the given chat input.

An example of the connection flow (the example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Zep Vector Store.

Use the Vector Store Question Answer Tool to answer questions

Another pattern uses the Vector Store Question Answer Tool to summarize results and answer questions from the Zep Vector Store node. Rather than connecting the Zep Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store.

The connection flow (this example uses Supabase, but the pattern is the same) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Zep Vector Store.

Operation Mode

This Vector Store node has four modes: Get Many, Insert Documents, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent). The mode you select determines the operations you can perform with the node and what inputs and outputs are available.

Get Many

In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context.

Insert Documents

Use insert documents mode to insert new documents into your vector database.

Retrieve Documents (as Vector Store for Chain/Tool)

Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.

Retrieve Documents (as Tool for AI Agent)

Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details.

Rerank Results

Enables reranking. If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the Get Many, Retrieve Documents (As Vector Store for Chain/Tool) and Retrieve Documents (As Tool for AI Agent) modes.

Insert Documents parameters

  • Collection Name: Enter the collection name to store the data in.

Get Many parameters

  • Collection Name: Enter the collection name to retrieve the data from.
  • Prompt: Enter the search query.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Retrieve Documents (As Vector Store for Chain/Tool) parameters

  • Collection Name: Enter the collection name to retrieve the data from.

Retrieve Documents (As Tool for AI Agent) parameters

  • Name: The name of the vector store.
  • Description: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.
  • Collection Name: Enter the collection name to retrieve the data from.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Embedding Dimensions

Must be the same when embedding the data and when querying it.

This sets the size of the array of floats used to represent the semantic meaning of a text document.

Available in the Insert Documents Operation Mode, enabled by default.

Disable this to configure your embeddings in Zep instead of in n8n.

Metadata Filter

Available in Get Many mode. When searching for data, use this to match with metadata associated with the document.

This is an AND query. If you specify more than one metadata filter field, all of them must match.

When inserting data, the metadata is set using the document loader. Refer to Default Data Loader for more information on loading documents.

Templates and examples

Browse Zep Vector Store integration templates, or search all templates

Refer to LangChain's Zep documentation for more information about the service.

View n8n's Advanced AI documentation.


Telegram node Message operations

URL: llms-txt#telegram-node-message-operations

Contents:

  • Delete Chat Message
  • Edit Message Text
    • Edit Message Text additional fields
  • Pin Chat Message
    • Pin Chat Message additional fields
  • Send Animation
    • Send Animation additional fields
    • Send Audio
    • Send Audio additional fields
  • Send Chat Action

Use these operations to send, edit, and delete messages in a chat; send files to a chat; and pin or unpin messages in a chat. Refer to Telegram for more information on the Telegram node itself.

To use most of these operations, you must add your bot to a channel so that it can send messages to that channel. Refer to Common Issues | Add a bot to a Telegram channel for more information.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Delete Chat Message

Use this operation to delete a message from chat using the Bot API deleteMessage method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Delete Chat Message.
  • Chat ID: Enter the Chat ID or username of the channel containing the message you want to delete, in the format @channelusername.
  • Message ID: Enter the unique identifier of the message you want to delete.

Refer to the Telegram Bot API deleteMessage documentation for more information.

Use this operation to edit the text of an existing message using the Bot API editMessageText method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Edit Message Text.
  • Chat ID: Enter the Chat ID or username of the channel containing the message you want to edit, in the format @channelusername.
  • Message ID: Enter the unique identifier of the message you want to edit.
  • Reply Markup: Select Inline Keyboard to display an inline keyboard with the message, or None to not include one. This sets the reply_markup parameter. Refer to the InlineKeyboardMarkup documentation for more information.
  • Text: Enter the new text for the message.

Refer to the Telegram Bot API editMessageText documentation for more information.

Edit Message Text additional fields

Use the Additional Fields to further refine the behavior of the node. Select Add Field to add any of the following:

  • Disable WebPage Preview: Select whether you want to enable link previews for links in this message (turned off) or disable link previews for links in this message (turned on). This sets the link_preview_options parameter for is_disabled. Refer to the LinkPreviewOptions documentation for more information.
  • Parse Mode: Choose whether the message should be parsed using HTML (default), Markdown (Legacy), or MarkdownV2. This sets the parse_mode parameter.

Use this operation to pin a message for the chat using the Bot API pinChatMessage method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Pin Chat Message.
  • Chat ID: Enter the Chat ID or username of the channel you wish to pin the message to in the format @channelusername.
  • Message ID: Enter the unique identifier of the message you want to pin.

Refer to the Telegram Bot API pinChatMessage documentation for more information.

Pin Chat Message additional fields

Use the Additional Fields to further refine the behavior of the node. Select Add Field to add any of the following:

  • Disable Notifications: By default, Telegram will notify all chat members that the message has been pinned. If you don't want these notifications to go out, turn this control on. Sets the disable_notification parameter to true.

Use this operation to send GIFs or H.264/MPEG-4 AVC videos without sound up to 50 MB in size to the chat using the Bot API sendAnimation method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Send Animation.
  • Chat ID: Enter the Chat ID or username of the channel you wish to send the animation to in the format @channelusername.
  • Binary File: To send a binary file from the node itself, turn this option on. If you turn this parameter on, you must enter the Input Binary Field containing the file you want to send.
  • Animation: If you aren't using the Binary File, enter the animation to send here. Pass a file_id to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet.
  • Reply Markup: Use this parameter to set more interface options. Refer to Reply Markup parameters for more information on these options and how to use them.

Refer to the Telegram Bot API sendAnimation documentation for more information.

Send Animation additional fields

Use the Additional Fields to further refine the behavior of the node using optional fields in Telegram's sendAnimation method. Select Add Field to add any of the following:

  • Caption: Enter a caption text for the animation, max of 1024 characters.
  • Disable Notification: Choose whether to send the notification silently (turned on) or with a standard notification (turned off).
  • Duration: Enter the animation's duration in seconds.
  • Height: Enter the height of the animation.
  • Parse Mode: Enter the parser to use for any related text. Options include HTML (default), Markdown (Legacy), MarkdownV2. Refer to Telegram's Formatting options for more information on these options.
  • Reply To Message ID: If the message is a reply, enter the ID of the message it's replying to.
  • Message Thread ID: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only.
  • Thumbnail: Add the thumbnail of the file sent. Ignore this field if thumbnail generation for the file is supported server-side. The thumbnail should meet these specs:
    • JPEG format
    • Less than 200 KB in size
    • Width and height less than 320px.
  • Width: Enter the width of the video clip.

Use this operation to send an audio file to the chat and display it in the music player using the Bot API sendAudio method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Send Audio.
  • Chat ID: Enter the Chat ID or username of the channel you wish to send the audio to in the format @channelusername.
  • Binary File: To send a binary file from the node itself, turn this option on. If you turn this parameter on, you must enter the Input Binary Field containing the file you want to send.
  • Audio: If you aren't using the Binary File, enter the audio to send here. Pass a file_id to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet.
  • Reply Markup: Use this parameter to set more interface options. Refer to Reply Markup parameters for more information on these options and how to use them.

Refer to the Telegram Bot API sendAudio documentation for more information.

Send Audio additional fields

Use the Additional Fields to further refine the behavior of the node using optional fields in Telegram's sendAudio method. Select Add Field to add any of the following:

  • Caption: Enter a caption text for the audio, max of 1024 characters.
  • Disable Notification: Choose whether to send the notification silently (turned on) or with a standard notification (turned off).
  • Duration: Enter the audio's duration in seconds.
  • Parse Mode: Enter the parser to use for any related text. Options include HTML (default), Markdown (Legacy), MarkdownV2. Refer to Telegram's Formatting options for more information on these options.
  • Performer: Enter the name of the performer.
  • Reply To Message ID: If the message is a reply, enter the ID of the message it's replying to.
  • Message Thread ID: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only.
  • Title: Enter the audio track's name.
  • Thumbnail: Add the thumbnail of the file sent. Ignore this field if thumbnail generation for the file is supported server-side. The thumbnail should meet these specs:
    • JPEG format
    • Less than 200 KB in size
    • Width and height less than 320px.

Use this operation when you need to tell the user that something is happening on the bot's side. The status is set for 5 seconds or less using the Bot API sendChatAction method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Send Chat Action.
  • Chat ID: Enter the Chat ID or username of the channel you wish to send the chat action to in the format @channelusername.
  • Action: Select the action you'd like to broadcast the bot as taking. The options here include: Find Location, Typing, Recording audio or video, and Uploading file types.

Refer to Telegram's Bot API sendChatAction documentation for more information.

Use this operation to send a document to the chat using the Bot API sendDocument method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Send Document.
  • Chat ID: Enter the Chat ID or username of the channel you wish to send the document to in the format @channelusername.
  • Binary File: To send a binary file from the node itself, turn this option on. If you turn this parameter on, you must enter the Input Binary Field containing the file you want to send.
  • Document: If you aren't using the Binary File, enter the document to send here. Pass a file_id to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet.
  • Reply Markup: Use this parameter to set more interface options. Refer to Reply Markup parameters for more information on these options and how to use them.

Refer to Telegram's Bot API sendDocument documentation for more information.

Send Document additional fields

Use the Additional Fields to further refine the behavior of the node using optional fields in Telegram's sendDocument method. Select Add Field to add any of the following:

  • Caption: Enter a caption text for the file, max of 1024 characters.
  • Disable Notification: Choose whether to send the notification silently (turned on) or with a standard notification (turned off).
  • Parse Mode: Enter the parser to use for any related text. Options include HTML (default), Markdown (Legacy), MarkdownV2. Refer to Formatting options for more information on these options.
  • Reply To Message ID: If the message is a reply, enter the ID of the message it's replying to.
  • Message Thread ID: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only.
  • Thumbnail: Add the thumbnail of the file sent. Ignore this field if thumbnail generation for the file is supported server-side. The thumbnail should meet these specs:
    • JPEG format
    • Less than 200 KB in size
    • Width and height less than 320px.

Use this operation to send a geolocation to the chat using the Bot API sendLocation method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Send Location.
  • Chat ID: Enter the Chat ID or username of the channel you wish to send the location to in the format @channelusername.
  • Latitude: Enter the latitude of the location.
  • Longitude: Enter the longitude of the location.
  • Reply Markup: Use this parameter to set more interface options. Refer to Reply Markup parameters for more information on these options and how to use them.

Refer to Telegram's Bot API sendLocation documentation for more information.

Send Location additional fields

Use the Additional Fields to further refine the behavior of the node using optional fields in Telegram's sendLocation method. Select Add Field to add any of the following:

  • Disable Notification: Choose whether to send the notification silently (turned on) or with a standard notification (turned off).
  • Reply To Message ID: If the message is a reply, enter the ID of the message it's replying to.
  • Message Thread ID: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only.

Use this operation to send a group of photos and/or videos using the Bot API sendMediaGroup method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Send Media Group.
  • Chat ID: Enter the Chat ID or username of the channel you wish to send the media group to in the format @channelusername.
  • Media: Use Add Media to add different media types to your media group. For each medium, select:
    • Type: The type of media this is. Choose from Photo and Video.
    • Media File: Enter the media file to send. Pass a file_id to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet.
    • Additional Fields: For each media file, you can choose to add these fields:
      • Caption: Enter a caption text for the file, max of 1024 characters.
      • Parse Mode: Enter the parser to use for any related text. Options include HTML (default), Markdown (Legacy), MarkdownV2. Refer to Formatting options for more information on these options.

Refer to Telegram's Bot API sendMediaGroup documentation for more information.

Send Media Group additional fields

Use the Additional Fields to further refine the behavior of the node using optional fields in Telegram's sendMediaGroup method. Select Add Field to add any of the following:

  • Disable Notification: Choose whether to send the notification silently (turned on) or with a standard notification (turned off).
  • Reply To Message ID: If the message is a reply, enter the ID of the message it's replying to.
  • Message Thread ID: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only.

Use this operation to send a message to the chat using the Bot API sendMessage method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Send Message.
  • Chat ID: Enter the Chat ID or username of the channel you wish to send the message to in the format @channelusername.
  • Text: Enter the text to send, max 4096 characters after entities parsing.

Refer to Telegram's Bot API sendMessage documentation for more information.

Telegram limits the number of messages you can send to 30 per second. If you expect to hit this limit, refer to Send more than 30 messages per second for a suggested workaround.

Send Message additional fields

Use the Additional Fields to further refine the behavior of the node using optional fields in Telegram's sendMessage method. Select Add Field to add any of the following:

  • Append n8n Attribution: Choose whether to append the phrase This message was sent automatically with n8n to the end of the message (turned on, default) or not (turned off).
  • Disable Notification: Choose whether to send the notification silently (turned on) or with a standard notification (turned off).
  • Disable WebPage Preview: Select whether you want to enable link previews for links in this message (turned off) or disable link previews for links in this message (turned on). This sets the link_preview_options parameter for is_disabled. Refer to the LinkPreviewOptions documentation for more information.
  • Parse Mode: Enter the parser to use for any related text. Options include HTML (default), Markdown (Legacy), MarkdownV2. Refer to Telegram's Formatting options for more information on these options.
  • Reply To Message ID: If the message is a reply, enter the ID of the message it's replying to.
  • Message Thread ID: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only.

Send and Wait for Response

Use this operation to send a message to the chat using the Bot API sendMessage method and pause the workflow execution until the user confirms the operation.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Send and Wait for Response.
  • Chat ID: Enter the Chat ID or username of the channel you wish to send the message to in the format @channelusername.
  • Message: Enter the text to send.
  • Response Type: The approval or response type to use:
    • Approval: Users can approve or disapprove from within the message.
    • Free Text: Users can submit a response with a form.
    • Custom Form: Users can submit a response with a custom form.

Refer to Telegram's Bot API sendMessage documentation for more information.

Telegram limits the number of messages you can send to 30 per second. If you expect to hit this limit, refer to Send more than 30 messages per second for a suggested workaround.

Send and Wait for Response additional fields

The additional fields depend on which Response Type you choose.

The Approval response type adds these options:

  • Type of Approval: Whether to present only an approval button or both approval and disapproval buttons.
  • Button Label: The label for the approval or disapproval button. The default choice is ✅ Approve and ❌ Decline for approval and disapproval actions respectively.
  • Limit Wait Time: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time.

When using the Free Text response type, the following options are available:

  • Message Button Label: The label to use for the message button. The default choice is Respond.
  • Response Form Title: The title of the form where users provide their response.
  • Response Form Description: A description for the form where users provide their response.
  • Response Form Button Label: The label for the button on the form to submit their response. The default choice is Submit.
  • Limit Wait Time: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time.

When using the Custom Form response type, you build a form using the fields and options you want.

You can customize each form element with the settings outlined in the n8n Form trigger's form elements. To add more fields, select the Add Form Element button.

The following options are also available:

  • Message Button Label: The label to use for the message button. The default choice is Respond.
  • Response Form Title: The title of the form where users provide their response.
  • Response Form Description: A description for the form where users provide their response.
  • Response Form Button Label: The label for the button on the form to submit their response. The default choice is Submit.
  • Limit Wait Time: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time.

Use this operation to send a photo to the chat using the Bot API sendPhoto method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Send Photo.
  • Chat ID: Enter the Chat ID or username of the channel you wish to send the photo to in the format @channelusername.
  • Binary File: To send a binary file from the node itself, turn this option on. If you turn this parameter on, you must enter the Input Binary Field containing the file you want to send.
  • Photo: If you aren't using the Binary File, enter the photo to send here. Pass a file_id to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet.
  • Reply Markup: Use this parameter to set more interface options. Refer to Reply Markup parameters for more information on these options and how to use them.

Refer to Telegram's Bot API sendPhoto documentation for more information.

Send Photo additional fields

Use the Additional Fields to further refine the behavior of the node using optional fields in Telegram's sendPhoto method. Select Add Field to add any of the following:

  • Caption: Enter a caption text for the file, max of 1024 characters.
  • Disable Notification: Choose whether to send the notification silently (turned on) or with a standard notification (turned off).
  • Parse Mode: Enter the parser to use for any related text. Options include HTML (default), Markdown (Legacy), MarkdownV2. Refer to Telegram's Formatting options for more information on these options.
  • Reply To Message ID: If the message is a reply, enter the ID of the message it's replying to.
  • Message Thread ID: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only.

Use this method to send static .WEBP, animated .TGS, or video .WEBM stickers using the Bot API sendSticker method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Send Sticker.
  • Chat ID: Enter the Chat ID or username of the channel you wish to send the sticker to in the format @channelusername.
  • Binary File: To send a binary file from the node itself, turn this option on. If you turn this parameter on, you must enter the Input Binary Field containing the file you want to send.
  • Sticker: If you aren't using the Binary File, enter the sticker to send here. Pass a file_id to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet.
  • Reply Markup: Use this parameter to set more interface options. Refer to Reply Markup parameters for more information on these options and how to use them.

Refer to Telegram's Bot API sendSticker documentation for more information.

Send Sticker additional fields

Use the Additional Fields to further refine the behavior of the node using optional fields in Telegram's sendSticker method. Select Add Field to add any of the following:

  • Disable Notification: Choose whether to send the notification silently (turned on) or with a standard notification (turned off).
  • Reply To Message ID: If the message is a reply, enter the ID of the message it's replying to.
  • Message Thread ID: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only.

Use this operation to send a video to the chat using the Bot API sendVideo method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Send Video.
  • Chat ID: Enter the Chat ID or username of the channel you wish to send the video to in the format @channelusername.
  • Binary File: To send a binary file from the node itself, turn this option on. If you turn this parameter on, you must enter the Input Binary Field containing the file you want to send.
  • Video: If you aren't using the Binary File, enter the video to send here. Pass a file_id to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet.
  • Reply Markup: Use this parameter to set more interface options. Refer to Reply Markup parameters for more information on these options and how to use them.

Refer to Telegram's Bot API sendVideo documentation for more information.

Send Video additional fields

Use the Additional Fields to further refine the behavior of the node using optional fields in Telegram's sendVideo method. Select Add Field to add any of the following:

  • Caption: Enter a caption text for the video, max of 1024 characters.
  • Disable Notification: Choose whether to send the notification silently (turned on) or with a standard notification (turned off).
  • Duration: Enter the video's duration in seconds.
  • Height: Enter the height of the video.
  • Parse Mode: Enter the parser to use for any related text. Options include HTML (default), Markdown (Legacy), MarkdownV2. Refer to Telegram's Formatting options for more information on these options.
  • Reply To Message ID: If the message is a reply, enter the ID of the message it's replying to.
  • Message Thread ID: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only.
  • Thumbnail: Add the thumbnail of the file sent. Ignore this field if thumbnail generation for the file is supported server-side. The thumbnail should meet these specs:
    • JPEG format
    • Less than 200 KB in size
    • Width and height less than 320px.
  • Width: Enter the width of the video.

Unpin Chat Message

Use this operation to unpin a message from the chat using the Bot API unpinChatMessage method.

Enter these parameters:

  • Credential to connect with: Create or select an existing Telegram credential.
  • Resource: Select Message.
  • Operation: Select Unpin Chat Message.
  • Chat ID: Enter the Chat ID or username of the channel you wish to unpin the message from in the format @channelusername.
  • Message ID: Enter the unique identifier of the message you want to unpin.

Refer to the Telegram Bot API unpinChatMessage documentation for more information.

Reply Markup parameters

For most of the Message Send actions (such as Send Animation, Send Audio), use the Reply Markup parameter to set more interface options:

  • Force Reply: The Telegram client will act as if the user has selected the bot's message and tapped Reply, automatically displaying a reply interface to the user. Refer to Force Reply parameters for further guidance on this option.
  • Inline Keyboard: Display an inline keyboard right next to the message. Refer to Inline Keyboard parameters for further guidance on this option.
  • Reply Keyboard: Display a custom keyboard with reply options. Refer to Reply Keyboard parameters for further guidance on this option.
  • Reply Keyboard Remove: The Telegram client will remove the current custom keyboard and display the default letter-keyboard. Refer to Reply Keyboard parameters for further guidance on this option.

Telegram Business accounts

Telegram restricts the following options in channels and for messages sent on behalf of a Telegram Business account:

  • Force Reply
  • Reply Keyboard
  • Reply Keyboard Remove

Force Reply parameters

Force Reply is useful if you want to create user-friendly step-by-step interfaces without having to sacrifice privacy mode.

If you select Reply Markup > Force Reply, choose from these Force Reply parameters:

  • Force Reply: Turn on to show the reply interface to the user, as described above.
  • Selective: Turn this on if you want to force reply from these users only:
    • Users that are @mentioned in the text of the message.
    • The sender of the original message, if this message is a reply to another message.

Refer to ForceReply for more information.

Inline Keyboard parameters

If you select Reply Markup > Inline Keyboard, define the inline keyboard buttons you want to display using the Add Button option. To add more rows to your keyboard, use Add Keyboard Row.

Refer to InlineKeyboardMarkup and InlineKeyboardButtons for more information.
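
As a rough sketch of the payload this builds (the button labels, URL, and callback data below are hypothetical), an InlineKeyboardMarkup object has this shape:

```js
// Hypothetical reply_markup payload: one keyboard row containing two inline buttons.
// Each button has a text label plus exactly one action field, such as url or callback_data.
const replyMarkup = {
  inline_keyboard: [
    [
      { text: 'Open docs', url: 'https://docs.n8n.io' },
      { text: 'Approve', callback_data: 'approve' },
    ],
  ],
};
```

Each Add Button entry corresponds to one button object, and each Add Keyboard Row starts a new inner array.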

Reply Keyboard parameters

If you select Reply Markup > Reply Keyboard, use the Reply Keyboard section to define the buttons and rows in your Reply Keyboard.

Use the Reply Keyboard Options to further refine the keyboard's behavior:

  • Resize Keyboard: Choose whether to request the Telegram client to resize the keyboard vertically for optimal fit (turned on) or whether to use the same height as the app's standard keyboard (turned off).
  • One Time Keyboard: Choose whether the Telegram client should hide the keyboard as soon as a user uses it (turned on) or to keep displaying it (turned off).
  • Selective: Turn this on if you want to show the keyboard to these users only:
    • Users that are @mentioned in the text of the message.
    • The sender of the original message, if this message is a reply to another message.

Refer to ReplyKeyboardMarkup for more information.

Reply Keyboard Remove parameters

If you select Reply Markup > Reply Keyboard Remove, choose from these Reply Keyboard Remove parameters:

  • Remove Keyboard: Choose whether to request the Telegram client to remove the custom keyboard (turned on) or to keep it (turned off).
  • Selective: Turn this on if you want to remove the keyboard for these users only:
    • Users that are @mentioned in the text of the message.
    • The sender of the original message, if this message is a reply to another message.

Refer to ReplyKeyboardRemove for more information.


Execution Data

URL: llms-txt#execution-data

Contents:

  • Operations
  • Data to Save
  • Limitations
  • Templates and examples

Use this node to save metadata for workflow executions. You can then search by this data in the Executions list.

You can retrieve custom execution data during workflow execution using the Code node. Refer to Custom executions data for more information.

Custom executions data is available on:

  • Cloud: Pro, Enterprise

  • Self-Hosted: Enterprise, registered Community

Operations

  • Save Execution Data for Search

Data to Save

Add a Saved Field for each key/value pair of metadata you'd like to save.

Limitations

The Execution Data node has the following restrictions when storing execution metadata:

  • key: limited to 50 characters
  • value: limited to 512 characters

If either the key or value exceeds these limits, n8n truncates it to the maximum length and outputs a log entry.
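
As a minimal sketch (assuming a Code node set to Run Once for All Items; the keys and values below are hypothetical), the same metadata can be written and read from a Code node like this:

```js
// Save individual key/value pairs (keys up to 50 characters, values up to 512 characters)
$execution.customData.set('customerId', 'cust-1234');
$execution.customData.set('status', 'processed');

// Or save several values at once
$execution.customData.setAll({ source: 'webhook', region: 'eu' });

// Read values back later in the same execution
const status = $execution.customData.get('status');

return [{ json: { status, allCustomData: $execution.customData.getAll() } }];
```

Refer to Custom executions data for the full set of methods.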

Templates and examples

Host Your Own AI Deep Research Agent with n8n, Apify and OpenAI o3

View template details

API Schema Extractor

View template details

Realtime Notion Todoist 2-way Sync with Redis

View template details

Browse Execution Data integration templates, or search all templates


Set 60 as the maximum number of log files to be kept

URL: llms-txt#set-60-as-the-maximum-number-of-log-files-to-be-kept

Contents:

  • Log levels
  • Development
    • Implementation details
    • Adding logs
  • Front-end logs

export N8N_LOG_FILE_COUNT_MAX=60

// You have to import the LoggerProxy. We rename it to Logger to make it easier to use.

import { LoggerProxy as Logger } from 'n8n-workflow';

// Info-level logging of a trigger function, with workflow name and workflow ID as additional metadata properties

Logger.info(`Polling trigger initiated for workflow "${workflow.name}"`, { workflowName: workflow.name, workflowId: workflow.id });


When creating new loggers, some useful standards to keep in mind are:

- Craft log messages to be as human-readable as possible. For example, always wrap names in quotes.
- Duplicating information in the log message and metadata, like workflow name in the above example, can be useful as messages are easier to search and metadata enables easier filtering.
- Include multiple IDs (for example, `executionId`, `workflowId`, and `sessionId`) throughout all logs.
- Use node types instead of node names (or both) as this is more consistent, and so easier to search.
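
As an illustrative sketch of these standards (the identifiers below are hypothetical placeholders rather than values from a real execution), a debug-level log might look like this:

```js
import { LoggerProxy as Logger } from 'n8n-workflow';

// Hypothetical IDs; in real code these come from the current execution context
const executionId = '1024';
const workflowId = '42';
const sessionId = 'abc123';

// Node type quoted in the message, and the same IDs duplicated into metadata for easier filtering
Logger.debug('Executing node of type "n8n-nodes-base.httpRequest"', {
  nodeType: 'n8n-nodes-base.httpRequest',
  executionId,
  workflowId,
  sessionId,
});
```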

As of now, front-end logs aren't available. Using `Logger` or `LoggerProxy` would yield errors in the `editor-ui` package. This functionality will be implemented in future versions.

**Examples:**

Example 1 (unknown):
```unknown
### Log levels

n8n uses standard log levels to report:

- `silent`: outputs nothing at all
- `error`: outputs only errors and nothing else
- `warn`: outputs errors and warning messages
- `info`: contains useful information about progress
- `debug`: the most verbose output. n8n outputs a lot of information to help you debug issues.

## Development

During development, adding log messages is a good practice. It assists in debugging errors. To configure logging for development, follow the guide below.

### Implementation details

n8n uses the `LoggerProxy` class, located in the `workflow` package. Calling `LoggerProxy.init()` and passing in an instance of `Logger` initializes the class before use.

The initialization process happens only once. The [`start.ts`](https://github.com/n8n-io/n8n/blob/master/packages/cli/src/commands/start.ts) file already does this process for you. If you are creating a new command from scratch, you need to initialize the `LoggerProxy` class.

Once the `Logger` implementation gets created in the `cli` package, it can be obtained by calling the `getInstance` convenience method from the exported module.

Check the [start.ts](https://github.com/n8n-io/n8n/blob/master/packages/cli/src/commands/start.ts) file to learn more about how this process works.

### Adding logs

Once the `LoggerProxy` class gets initialized in the project, you can import it to any other file and add logs.

Convenience methods are provided for all logging levels, so new logs can be added whenever needed using the format `Logger.<logLevel>('<message>', ...meta)`, where `meta` represents any additional properties desired beyond `message`.

In the example above, we use the standard log levels described [above](#log-levels). The `message` argument is a string, and `meta` is a data object.
```
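
As a rough sketch of the initialization step described above (the import path for the cli package's Logger module is hypothetical and may differ between n8n versions), a new command would initialize the proxy before making any log calls:

```js
import { LoggerProxy } from 'n8n-workflow';
// Hypothetical import path; the cli package exposes a getInstance convenience method for its Logger
import { getInstance } from './Logger';

// Initialize the proxy once, before any Logger calls are made
LoggerProxy.init(getInstance());
```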

Pull specific version

URL: llms-txt#pull-specific-version

docker pull docker.n8n.io/n8nio/n8n:1.81.0


Mistral Cloud credentials

URL: llms-txt#mistral-cloud-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Mistral's API documentation for more information about the APIs.

View n8n's Advanced AI documentation.

To configure this credential, you'll need:

Once you've added payment information to your Mistral Cloud account:

  1. Sign in to your Mistral account.
  2. Go to the API Keys page.
  3. Select Create new key.
  4. Copy the API key and enter it in your n8n credential.

Refer to Account setup for more information.

Paid account required

Mistral requires you to add payment information and activate payments to use API keys. Refer to the Prerequisites section above for more information.


In ~/.n8n directory run

URL: llms-txt#in-~/.n8n-directory-run

mkdir custom
cd custom
npm init


---

## X (formerly Twitter) credentials

**URL:** llms-txt#x-(formerly-twitter)-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using OAuth2
- X rate limits

You can use these credentials to authenticate the following nodes:

- [X (formerly Twitter)](../../app-nodes/n8n-nodes-base.twitter/)

- Create an [X developer](https://developer.x.com/en) account.
- Create a [Twitter app](https://developer.x.com/en/docs/apps) or use the default project and app created when you sign up for the developer portal. Refer to each supported authentication method below for more details on the app's configuration.

## Supported authentication methods

n8n used to support an **OAuth** authentication method, which used X's [OAuth 1.0a](https://developer.x.com/en/docs/authentication/oauth-1-0a) authentication method. n8n deprecated this method with the release of V2 of the X node in n8n version [0.236.0](../../../../release-notes/0-x/#n8n02360).

Refer to [X's API documentation](https://developer.x.com/en/docs/twitter-api) for more information about the service. Refer to [X's API authentication documentation](https://developer.x.com/en/docs/authentication/overview) for more information about authenticating with the service.

Refer to [Application-only Authentication](https://developer.twitter.com/en/docs/authentication/oauth-2-0/application-only) for more information about app-only authentication.

Use this method if you're using n8n version 0.236.0 or later.

To configure this credential, you'll need:

- A **Client ID**
- A **Client Secret**

To generate your Client ID and Client Secret:

1. In the Twitter [developer portal](https://developer.x.com/en/portal/dashboard), open your project.
1. On the project's **Overview** tab, find the **Apps** section and select **Add App**.
1. Give your app a **Name** and select **Next**.
1. Go to the **App Settings**.
1. In the **User authentication settings**, select **Set Up**.
1. Set the **App permissions**. Choose **Read and write and Direct message** if you want to use all functions of the n8n X node.
1. In the **Type of app** section, select **Web App, Automated App or Bot**.
1. In n8n, copy the **OAuth Redirect URL**.
1. In your X app, find the **App Info** section and paste that URL in as the **Callback URI / Redirect URL**.
1. Add a **Website URL**.
1. Save your changes.
1. Copy the **Client ID** and **Client Secret** displayed in X and add them to the corresponding fields in your n8n credential.

Refer to X's [OAuth 2.0 Authentication documentation](https://developer.x.com/en/docs/authentication/oauth-2-0) for more information on working with this authentication method.

This credential uses the OAuth 2.0 Bearer Token authentication method, so you'll be subject to app rate limits. Refer to [X rate limits](#x-rate-limits) below for more information.

X has time-based rate limits per endpoint based on your developer access plan level. X calculates app rate limits and user rate limits independently. Refer to [Rate limits](https://developer.x.com/en/docs/twitter-api/rate-limits) for the access plan level rate limits and guidance on avoiding hitting them.

Use the guidance below for calculating rate limits:

- If you're using the deprecated OAuth method, user rate limits apply. You'll have one limit per time window for each set of users' access tokens.
- If you're [Using OAuth2](#using-oauth2), app rate limits apply. You'll have a limit per time window for requests made by your app.

X calculates user rate limits and app rate limits independently.

Refer to X's [Rate limits and authentication methods](https://developer.x.com/en/docs/twitter-api/rate-limits#auth) for more information about these rate limit types.

---

## Cohere Chat Model node

**URL:** llms-txt#cohere-chat-model-node

**Contents:**
- Node parameters
- Node options
- Templates and examples
- Related resources

Use the Cohere Chat Model node to access Cohere's large language models for conversational AI and text generation tasks.

On this page, you'll find the node parameters for the Cohere Chat Model node, and links to more resources.

You can find authentication information for this node [here](../../../credentials/cohere/).

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name.

- **Model**: Select the model which will generate the completion. n8n dynamically loads available models from the Cohere API. Learn more in the [Cohere model documentation](https://docs.cohere.com/v2/docs/models#command).

- **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.
- **Max Retries**: Enter the maximum number of times to retry a request.

## Templates and examples

**Automate Sales Cold Calling Pipeline with Apify, GPT-4o, and WhatsApp**

[View template details](https://n8n.io/workflows/5449-automate-sales-cold-calling-pipeline-with-apify-gpt-4o-and-whatsapp/)

**Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG**

by Ezema Kingsley Chibuzo

[View template details](https://n8n.io/workflows/5589-create-a-multi-modal-telegram-support-bot-with-gpt-4-and-supabase-rag/)

**Build a Document QA System with RAG using Milvus, Cohere, and OpenAI for Google Drive**

[View template details](https://n8n.io/workflows/3848-build-a-document-qa-system-with-rag-using-milvus-cohere-and-openai-for-google-drive/)

[Browse Cohere Chat Model integration templates](https://n8n.io/integrations/cohere-chat-model/), or [search all templates](https://n8n.io/workflows/)

Refer to [Cohere's API documentation](https://docs.cohere.com/v2/reference/about) for more information about the service.

View n8n's [Advanced AI](../../../../../advanced-ai/) documentation.

---

## Rundeck credentials

**URL:** llms-txt#rundeck-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API token

You can use these credentials to authenticate the following nodes:

- [Rundeck](../../app-nodes/n8n-nodes-base.rundeck/)

Create a user account on a [Rundeck](https://www.rundeck.com/) server.

## Supported authentication methods

Refer to [Rundeck's API documentation](https://docs.rundeck.com/docs/api/) for more information about the service.

To configure this credential, you'll need:

- Your **URL**: Enter the base URL of your Rundeck server, for example `http://myserver:4440`. Refer to [URLs](https://docs.rundeck.com/docs/api/#urls) for more information.
- A user API **Token**: To generate a user API token, go to your **Profile > User API Tokens**. Refer to [User API tokens](https://docs.rundeck.com/docs/manual/10-user.html#user-api-tokens) for more information.

---

## Information Extractor node

**URL:** llms-txt#information-extractor-node

**Contents:**
- Node parameters
- Node options
- Related resources

Use the Information Extractor node to extract structured information from incoming data.

On this page, you'll find the node parameters for the Information Extractor node, and links to more resources.

- **Text** defines the input text to extract information from. This is usually an expression that references a field from the input items. For example, this could be `{{ $json.chatInput }}` if the input is a chat trigger, or `{{ $json.text }}` if a previous node is Extract from PDF.
- Use **Schema Type** to choose how you want to describe the desired output data format. You can choose between:
  - **From Attribute Descriptions**: This option allows you to define the schema by specifying the list of attributes and their descriptions.
  - **Generate From JSON Example**: Input an example JSON object to automatically generate the schema. The node uses the object property types and names. It ignores the actual values. n8n treats every field as mandatory when generating schemas from JSON examples.
  - **Define using JSON Schema**: Manually input the JSON schema. Read the JSON Schema [guides and examples](https://json-schema.org/learn/miscellaneous-examples) for help creating a valid JSON schema.
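
For example, if you select **Generate From JSON Example** and provide the sample object `{"name": "Jane", "email": "jane@example.com"}`, the node builds a schema with two required string attributes, `name` and `email`, and the model returns its output in that shape. (The field names here are only an illustration.)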

- **System Prompt Template**: Use this option to change the system prompt that's used for the information extraction. n8n automatically appends format specification instructions to the prompt.

View n8n's [Advanced AI](../../../../../advanced-ai/) documentation.

---

## Zammad node

**URL:** llms-txt#zammad-node

**Contents:**
- Operations
- Templates and examples

Use the Zammad node to automate work in Zammad, and integrate Zammad with other applications. n8n has built-in support for a wide range of Zammad features, including creating, retrieving, and deleting groups and organizations.

On this page, you'll find a list of operations the Zammad node supports and links to more resources.

Refer to [Zammad credentials](../../credentials/zammad/) for guidance on setting up authentication.

- Group
  - Create
  - Delete
  - Get
  - Get many
  - Update
- Organization
  - Create
  - Delete
  - Get
  - Get many
  - Update
- Ticket
  - Create
  - Delete
  - Get
  - Get many
- User
  - Create
  - Delete
  - Get
  - Get many
  - Get self
  - Update

## Templates and examples

**Update people through Zulip about open tickets in Zammad**

[View template details](https://n8n.io/workflows/1575-update-people-through-zulip-about-open-tickets-in-zammad/)

**Export Zammad Objects (Users, Roles, Groups, Organizations) to Excel**

[View template details](https://n8n.io/workflows/2596-export-zammad-objects-users-roles-groups-organizations-to-excel/)

**Sync Entra User to Zammad User**

[View template details](https://n8n.io/workflows/2587-sync-entra-user-to-zammad-user/)

[Browse Zammad integration templates](https://n8n.io/integrations/zammad/), or [search all templates](https://n8n.io/workflows/)

---

## LangChain Code node

**URL:** llms-txt#langchain-code-node

**Contents:**
- Node parameters
  - Add Code
  - Inputs
  - Outputs
- Node inputs and outputs configuration
- Built-in methods
- Templates and examples
- Related resources

Use the LangChain Code node to import and use LangChain functionality directly in n8n. This means that if you need LangChain functionality n8n hasn't created a node for, you can still use it. By configuring the LangChain Code node's connectors, you can use it as a normal node, root node, or sub-node.

On this page, you'll find the node parameters, guidance on configuring the node, and links to more resources.

Not available on Cloud

This node is only available on self-hosted n8n.

Add your custom code. Choose either **Execute** or **Supply Data** mode. You can only use one mode.

Unlike the [Code node](../../../core-nodes/n8n-nodes-base.code/), the LangChain Code node doesn't support Python.

- **Execute**: use the LangChain Code node like n8n's own Code node. This takes input data from the workflow, processes it, and returns it as the node output. This mode requires a main input and output. You must create these connections in **Inputs** and **Outputs**.
- **Supply Data**: use the LangChain Code node as a sub-node, sending data to a root node. This uses an output other than main.

By default, you can't load built-in or external modules in this node. Self-hosted users can [enable built-in and external modules](../../../../../hosting/configuration/configuration-methods/).

Choose the input types.

The main input is the normal connector found in all n8n workflows. If you have a main input and output set in the node, **Execute** code is required.

Choose the output types.

The main output is the normal connector found in all n8n workflows. If you have a main input and output set in the node, **Execute** code is required.

## Node inputs and outputs configuration

By configuring the LangChain Code node connectors (inputs and outputs) you can use it as an app node, root node or sub-node.

| Node type                                                                       | Inputs                        | Outputs                                                                   | Code mode   |
| ------------------------------------------------------------------------------- | ----------------------------- | ------------------------------------------------------------------------- | ----------- |
| App node. Similar to the [Code node](../../../core-nodes/n8n-nodes-base.code/). | Main                          | Main                                                                      | Execute     |
| Root node                                                                       | Main; at least one other type | Main                                                                      | Execute     |
| Sub-node                                                                        | -                             | A type other than main. Must match the input type you want to connect to. | Supply Data |
| Sub-node with sub-nodes                                                         | A type other than main        | A type other than main. Must match the input type you want to connect to. | Supply Data |

n8n provides these methods to make it easier to perform common tasks in the LangChain Code node.

| Method                                                           | Description                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                   |
| ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `this.addInputData(inputName, data)`                             | Populate the data of a specified non-main input. Useful for mocking data. - `inputName` is the input connection type, and must be one of: `ai_agent`, `ai_chain`, `ai_document`, `ai_embedding`, `ai_languageModel`, `ai_memory`, `ai_outputParser`, `ai_retriever`, `ai_textSplitter`, `ai_tool`, `ai_vectorRetriever`, `ai_vectorStore` - `data` contains the data you want to add. Refer to [Data structure](../../../../../data/data-structure/) for information on the data structure expected by n8n.   |
| `this.addOutputData(outputName, data)`                           | Populate the data of a specified non-main output. Useful for mocking data. - `outputName` is the input connection type, and must be one of: `ai_agent`, `ai_chain`, `ai_document`, `ai_embedding`, `ai_languageModel`, `ai_memory`, `ai_outputParser`, `ai_retriever`, `ai_textSplitter`, `ai_tool`, `ai_vectorRetriever`, `ai_vectorStore` - `data` contains the data you want to add. Refer to [Data structure](../../../../../data/data-structure/) for information on the data structure expected by n8n. |
| `this.getInputConnectionData(inputName, itemIndex, inputIndex?)` | Get data from a specified non-main input. - `inputName` is the input connection type, and must be one of: `ai_agent`, `ai_chain`, `ai_document`, `ai_embedding`, `ai_languageModel`, `ai_memory`, `ai_outputParser`, `ai_retriever`, `ai_textSplitter`, `ai_tool`, `ai_vectorRetriever`, `ai_vectorStore` - `itemIndex` should always be `0` (this parameter will be used in upcoming functionality) - Use `inputIndex` if there is more than one node connected to the specified input.                      |
| `this.getInputData(inputIndex?, inputName?)`                     | Get data from the main input.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                 |
| `this.getNode()`                                                 | Get the current node.                                                                                                                                                                                                                                                                                                                                                                                                                                                                                         |
| `this.getNodeOutputs()`                                          | Get the outputs of the current node.                                                                                                                                                                                                                                                                                                                                                                                                                                                                          |
| `this.getExecutionCancelSignal()`                                | Use this to stop the execution of a function when the workflow stops. In most cases n8n handles this, but you may need to use it if building your own chains or agents. It replaces the [Cancelling a running LLMChain](https://js.langchain.com/docs/modules/chains/foundational/llm_chain#cancelling-a-running-llmchain) code that you'd use if building a LangChain application normally.                                                                                                                  |

## Templates and examples

**🤖 AI Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant**

[View template details](https://n8n.io/workflows/2982-ai-powered-rag-chatbot-for-your-docs-google-drive-gemini-qdrant/)

**Custom LangChain agent written in JavaScript**

[View template details](https://n8n.io/workflows/1955-custom-langchain-agent-written-in-javascript/)

**Use any LangChain module in n8n (with the LangChain code node)**

[View template details](https://n8n.io/workflows/2082-use-any-langchain-module-in-n8n-with-the-langchain-code-node/)

[Browse LangChain Code integration templates](https://n8n.io/integrations/langchain-code/), or [search all templates](https://n8n.io/workflows/)

View n8n's [Advanced AI](../../../../../advanced-ai/) documentation.

---

## Workflow components

**URL:** llms-txt#workflow-components

This section contains:

- [Nodes](nodes/): integrations and operations.
- [Connections](connections/): node connectors.
- [Sticky notes](sticky-notes/): document your workflows.

---

## Advanced AI examples and concepts

**URL:** llms-txt#advanced-ai-examples-and-concepts

This section provides explanations of important AI concepts, and workflow templates that highlight those concepts, with explanations and configuration guides. The examples cover common use cases and highlight different features of advanced AI in n8n.

- **Agents and chains**

Learn about [agents](../../../glossary/#ai-agent) and [chains](../../../glossary/#ai-chain) in AI, including exploring key differences using the example workflow.

[What's a chain in AI?](../understand-chains/)\
  [What's an agent in AI?](../understand-agents/)\
  [Demonstration of key differences between agents and chains](../agent-chain-comparison/)

- **Call n8n Workflow Tool**

Learn about [tools](../../../glossary/#ai-tool) in AI, then explore examples that use n8n workflows as custom tools to give your AI workflow access to more data.

[What's a tool in AI?](../understand-tools/)\
  [Chat with Google Sheets](../data-google-sheets/)\
  [Call an API to fetch data](../api-workflow-tool/)\
  [Set up a human fallback](../human-fallback/)\
  [Let AI specify tool parameters with `$fromAI()`](../using-the-fromai-function/)

- **Vector databases**

Learn about [vector databases](../../../glossary/#ai-vector-store) in AI, along with related concepts including [embeddings](../../../glossary/#ai-embedding) and retrievers.

[What's a vector database?](../understand-vector-databases/)\
  [Populate a Pinecone vector database from a website](../vector-store-website/)

- **Memory**

Learn about [memory](../../../glossary/#ai-memory) in AI.

[What's memory in AI?](../understand-memory/)

- **AI workflow templates**

You can browse AI templates, including community contributions, on the n8n website.

[Browse all AI templates](https://n8n.io/workflows/?categories=25)

---

## Discord credentials

**URL:** llms-txt#discord-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using bot
- Using OAuth2
- Using webhook
- Choose an authentication method

You can use these credentials to authenticate the following nodes:

- [Discord](../../app-nodes/n8n-nodes-base.discord/)

- Create a [Discord](https://www.discord.com/) account.
- For Bot and OAuth2 credentials:
  - [Set up your local developer environment](https://discord.com/developers/docs/quick-start/getting-started#step-0-project-setup).
  - [Create an application and a bot user](https://discord.com/developers/docs/quick-start/getting-started#step-1-creating-an-app).
- For webhook credentials, [create a webhook](https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks).

## Supported authentication methods

- Bot
- OAuth2
- Webhook

Not sure which method to use? Refer to [Choose an authentication method](#choose-an-authentication-method) for more guidance.

Refer to [Discord's Developer documentation](https://discord.com/developers/docs/intro) for more information about the service.

Use this method if you want to add the bot to your Discord server using a bot token rather than OAuth2.

To configure this credential, you'll need:

- A **Bot Token**: Generated once you create an application with a bot.

To create an application with a bot and generate the **Bot Token**:

1. If you don't have one already, create an app in the [developer portal](https://discord.com/developers/applications?new_application=true).
1. Enter a **Name** for your app.
1. Select **Create**.
1. Select **Bot** from the left menu.
1. Under **Token**, select **Reset Token** to generate a new bot token.
1. Copy the token and add it to your n8n credential.
1. In **Bot > Privileged Gateway Intents**, add any privileged intents you want your bot to have. Refer to [Configuring your bot](https://discord.com/developers/docs/quick-start/getting-started#configuring-your-bot) for more information on privileged intents.
   - n8n recommends activating **SERVER MEMBERS INTENT: Required for your bot to receive events listed under GUILD_MEMBERS**.
1. In **Installation > Installation Contexts**, select the installation contexts you want your bot to use:
   - Select **Guild Install** for server-installed apps. (Most common for n8n users.)
   - Select **User Install** for user-installed apps. (Less common for n8n users, but may be useful for testing.)
   - Refer to Discord's [Choosing installation contexts](https://discord.com/developers/docs/quick-start/getting-started#choosing-installation-contexts) documentation for more information about these installation contexts.
1. In **Installation > Install Link**, select **Discord Provided Link** if it's not already selected.
1. Still on the **Installation** page, in the **Default Install Settings** section, select `applications.commands` and `bot` scopes. Refer to Discord's [Scopes](https://discord.com/developers/docs/topics/oauth2#shared-resources-oauth2-scopes) documentation for more information about these and other scopes.
1. Add permissions on the **Bot > Bot Permissions** page. Refer to Discord's [Permissions](https://discord.com/developers/docs/topics/permissions) documentation for more information. n8n recommends selecting these permissions for the [Discord](../../app-nodes/n8n-nodes-base.discord/) node:
   - Manage Roles
   - Manage Channels
   - Read Messages/View Channels
   - Send Messages
   - Create Public Threads
   - Create Private Threads
   - Send Messages in Threads
   - Send TTS Messages
   - Manage Messages
   - Manage Threads
   - Embed Links
   - Attach Files
   - Read Message History
   - Add Reactions
1. Add the app to your server or test server:
   1. Go to **Installation > Install Link** and copy the link listed there.
   1. Paste the link in your browser and hit Enter.
   1. Select **Add to server** in the installation prompt.
   1. Once your app's added to your server, you'll see it in the member list.

These steps outline the basic functionality needed to set up your n8n credential. Refer to the [Discord Creating an App](https://discord.com/developers/docs/quick-start/getting-started#step-1-creating-an-app) guide for more information on creating an app, especially:

- [Fetching your credentials](https://discord.com/developers/docs/quick-start/getting-started#fetching-your-credentials) for getting your app's credentials into your local developer environment.
- [Handling interactivity](https://discord.com/developers/docs/quick-start/getting-started#step-3-handling-interactivity) for information on setting up public endpoints for interactive `/slash` commands.

Use this method if you want to add the bot to Discord servers using the OAuth2 flow, which simplifies the process for those installing your app.

To configure this credential, you'll need:

- A **Client ID**
- A **Client Secret**
- Choose whether to send **Authentication** in the **Header** or **Body**
- A **Bot Token**

For details on creating an application with a bot and generating the token, follow the same steps as in [Using bot](#using-bot) above.

1. Copy the **Bot Token** you generate and add it into the n8n credential.
1. Open the **OAuth2** page in your Discord application to access your **Client ID** and generate a **Client Secret**. Add these to your n8n credential.
1. From n8n, copy the **OAuth Redirect URL** and add it into the Discord application in **OAuth2 > Redirects**. Be sure you save these changes.

To configure this credential, you'll need:

- A **Webhook URL**: Generated once you create a webhook.

To get a Webhook URL, you create a webhook and copy the URL that gets generated:

1. Open your Discord **Server Settings** and open the **Integrations** tab.
1. Select **Create Webhook** to create a new webhook.
1. Give your webhook a **Name** that makes sense.
1. Select the **avatar** next to the **Name** to edit or upload a new avatar.
1. In the **CHANNEL** dropdown, select the channel the webhook should post to.
1. Select **Copy Webhook URL** to copy the Webhook URL. Enter this URL in your n8n credential.

Refer to the [Discord Making a Webhook documentation](https://support.discord.com/hc/en-us/articles/228383668-Intro-to-Webhooks) for more information.

## Choose an authentication method

The simplest installation is a **webhook**. You create and add webhooks to a single channel on a Discord server. Webhooks can post messages to a channel. They don't require a bot user or authentication. But they can't listen or respond to user requests or commands. If you need a straightforward way to send messages to a channel without the need for interaction or feedback, use a webhook.
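
For example, a Discord webhook URL is a plain HTTP endpoint that accepts a POST request with a JSON body. A minimal sketch using curl (the URL below is a placeholder for your own webhook URL):

curl -H "Content-Type: application/json" \
  -d '{"content": "Hello from n8n"}' \
  "https://discord.com/api/webhooks/<WEBHOOK_ID>/<WEBHOOK_TOKEN>"

The n8n credential only needs the webhook URL itself; the Discord node sends the request for you.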

A **bot** is an interactive step up from a webhook. You add bots to the Discord server (referred to as a `guild` in the Discord API documentation) or to user accounts. Bots added to the server can interact with users on all the server's channels. They can manage channels, send and retrieve messages, retrieve the list of all users, and change their roles. If you need to build an interactive, complex, or multi-step workflow, use a bot.

**OAuth2** is basically a **bot** that uses an OAuth2 flow rather than just the bot token. As with bots, you add these to the Discord server or to user accounts. These credentials offer the same functionalities as bots, but they can simplify the installation of the bot on your server.

---

## AWS Cognito node

**URL:** llms-txt#aws-cognito-node

**Contents:**
- Operations
- Templates and examples
- Related resources
- What to do if your operation isn't supported

Use the AWS Cognito node to automate work in AWS Cognito and integrate AWS Cognito with other applications. n8n has built-in support for a wide range of AWS Cognito features, which includes creating, retrieving, updating, and deleting groups, users, and user pools.

On this page, you'll find a list of operations the AWS Cognito node supports, and links to more resources.

You can find authentication information for this node [here](../../credentials/aws/).

- Group:
  - Create: Create a new group.
  - Delete: Delete an existing group.
  - Get: Retrieve details about an existing group.
  - Get Many: Retrieve a list of groups.
  - Update: Update an existing group.
- User:
  - Add to Group: Add an existing user to a group.
  - Create: Create a new user.
  - Delete: Delete a user.
  - Get: Retrieve information about an existing user.
  - Get Many: Retrieve a list of users.
  - Remove From Group: Remove a user from a group.
  - Update: Update an existing user.
- User Pool:
  - Get: Retrieve information about an existing user pool.

## Templates and examples

**Transcribe audio files from Cloud Storage**

[View template details](https://n8n.io/workflows/1394-transcribe-audio-files-from-cloud-storage/)

**Extract and store text from chat images using AWS S3**

[View template details](https://n8n.io/workflows/1393-extract-and-store-text-from-chat-images-using-aws-s3/)

**Sync data between Google Drive and AWS S3**

[View template details](https://n8n.io/workflows/1396-sync-data-between-google-drive-and-aws-s3/)

[Browse AWS Cognito integration templates](https://n8n.io/integrations/aws-cognito/), or [search all templates](https://n8n.io/workflows/)

Refer to [AWS Cognito's documentation](https://docs.aws.amazon.com/cognito/) for more information about the service.

## What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**.
1. Select the service you want to connect to.
1. Select your credential.

Refer to [Custom API operations](../../../custom-operations/) for more information.

---

## Install and manage community nodes

**URL:** llms-txt#install-and-manage-community-nodes

There are three ways to install community nodes:

- Within n8n using the [nodes panel](verified-install/) (for verified community nodes only).
- Within n8n [using the GUI](gui-install/): Use this method to install community nodes from the npm registry.
- [Manually from the command line](manual-install/): use this method to install community nodes from npm if your n8n instance doesn't support installation through the in-app GUI.

Installing from npm only available on self-hosted instances

Unverified community nodes aren't available on n8n cloud and require [self-hosting](../../../hosting/) n8n.

---

## Pipedrive node

**URL:** llms-txt#pipedrive-node

**Contents:**
- Operations
- Templates and examples
- What to do if your operation isn't supported

Use the Pipedrive node to automate work in Pipedrive, and integrate Pipedrive with other applications. n8n has built-in support for a wide range of Pipedrive features, including creating, updating, deleting, and getting activity, files, notes, organizations, and leads.

On this page, you'll find a list of operations the Pipedrive node supports and links to more resources.

Refer to [Pipedrive credentials](../../credentials/pipedrive/) for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/).

- Activity
  - Create an activity
  - Delete an activity
  - Get data of an activity
  - Get data of all activities
  - Update an activity
- Deal
  - Create a deal
  - Delete a deal
  - Duplicate a deal
  - Get data of a deal
  - Get data of all deals
  - Search a deal
  - Update a deal
- Deal Activity
  - Get all activities of a deal
- Deal Product
  - Add a product to a deal
  - Get all products in a deal
  - Remove a product from a deal
  - Update a product in a deal
- File
  - Create a file
  - Delete a file
  - Download a file
  - Get data of a file
- Lead
  - Create a lead
  - Delete a lead
  - Get data of a lead
  - Get data of all leads
  - Update a lead
- Note
  - Create a note
  - Delete a note
  - Get data of a note
  - Get data of all notes
  - Update a note
- Organization
  - Create an organization
  - Delete an organization
  - Get data of an organization
  - Get data of all organizations
  - Update an organization
  - Search organizations
- Person
  - Create a person
  - Delete a person
  - Get data of a person
  - Get data of all persons
  - Search all persons
  - Update a person
- Product
  - Get data of all products

## Templates and examples

**Two way sync Pipedrive and MySQL**

[View template details](https://n8n.io/workflows/1822-two-way-sync-pipedrive-and-mysql/)

**Upload leads from a CSV file to Pipedrive CRM**

[View template details](https://n8n.io/workflows/1787-upload-leads-from-a-csv-file-to-pipedrive-crm/)

**Enrich new leads in Pipedrive and send an alert to Slack for high-quality ones**

[View template details](https://n8n.io/workflows/2135-enrich-new-leads-in-pipedrive-and-send-an-alert-to-slack-for-high-quality-ones/)

[Browse Pipedrive integration templates](https://n8n.io/integrations/pipedrive/), or [search all templates](https://n8n.io/workflows/)

## What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**.
1. Select the service you want to connect to.
1. Select your credential.

Refer to [Custom API operations](../../../custom-operations/) for more information.

---

## Delete data

**URL:** llms-txt#delete-data

**Contents:**
- Templates and examples

delete nodeStaticData.lastExecution

## Templates and examples

View template details


---

## Send Email

**URL:** llms-txt#send-email

**Contents:**
- Node parameters
  - Credential to connect with
  - Operation
  - From Email
  - To Email
  - Subject
  - Email Format
- Node options
  - Append n8n Attribution
  - Attachments

The Send Email node sends emails using an SMTP email server.

You can find authentication information for this node here.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Configure this node using the following parameters.

Credential to connect with

Select or create an SMTP account credential for the node to use.

The Send Email node supports the following operations:

  • Send: Send an email.
  • Send and Wait for Response: Send an email and wait for a response from the receiver. This operation pauses the workflow execution until the user submits a response.

Choosing Send and Wait for Response will activate parameters and options as discussed in waiting for a response.

Enter the email address you want to send the email from. You can also include a name using this format: Name Name <email@sample.com>, for example: Nathan Doe <nate@n8n.io>.

Enter the email address you want to send the email to. You can also include a name using this format: Name Name <email@sample.com>, for example: Nathan Doe <nate@n8n.io>. Use a comma to separate multiple email addresses: first@sample.com, "Name" <second@sample.com>.

This email format also applies to the CC and BCC fields.

Enter the subject line for the email.

Select the format to send the email in. This parameter is available when using the Send operation. Choose from:

  • Text: Send the email in plain-text format.
  • HTML: Send the email in HTML format.
  • Both: Send the email in both formats. If you choose this option, the recipient's email client determines which format to display.

Use these Options to further refine the node's behavior.

Append n8n Attribution

Set whether to include the phrase This email was sent automatically with n8n at the end of the email (turned on) or not (turned off).

Attachments

Enter the name of the binary properties that contain data to add as an attachment. Some tips on using this option:

  • Use the Read/Write Files from Disk node or the HTTP Request node to upload the file to your workflow.
  • Add multiple attachments by entering a comma-separated list of binary properties.
  • Reference embedded images or other content within the body of an email message, for example <img src="cid:image_1">.
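
For example, if a previous HTTP Request node downloaded a file into the binary property data (the default property name), enter data here; to attach several files you might enter data,invoice_pdf (the property names are illustrative and depend on your workflow).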

Enter an email address for the cc: field.

Enter an email address for the bcc: field.

Ignore SSL Issues

Set whether n8n should ignore failures with TLS/SSL certificate validation (turned on) or enforce them (turned off).

Enter an email address for the Reply To field.

Waiting for a response

By choosing the Send and Wait for Response operation, you can send an email message and pause the workflow execution until a person confirms the action or provides more information.

You can choose between the following types of waiting and approval actions:

  • Approval: Users can approve or disapprove from within the message.
  • Free Text: Users can submit a response with a form.
  • Custom Form: Users can submit a response with a custom form.

Different options are available depending on which type you choose.

Approval parameters and options

When using the Approval response type, the following options are available:

  • Type of Approval: Whether to present only an approval button or both approval and disapproval buttons.
  • Button Label: The label for the approval or disapproval button. The default choice is Approve and Decline for approval and disapproval actions respectively.
  • Button Style: The style (primary or secondary) for the button.

This mode also offers the following options:

  • Limit Wait Time: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time.
  • Append n8n Attribution: Set whether to include the phrase This email was sent automatically with n8n at the end of the email (turned on) or not (turned off).

Free Text parameters and options

When using the Free Text response type, the following options are available:

  • Message Button Label: The label to use for the message button. The default choice is Respond.
  • Response Form Title: The title of the form where users provide their response.
  • Response Form Description: A description for the form where users provide their response.
  • Response Form Button Label: The label for the button on the form to submit their response. The default choice is Submit.
  • Limit Wait Time: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time.
  • Append n8n Attribution: Set whether to include the phrase This email was sent automatically with n8n at the end of the email (turned on) or not (turned off).

Custom Form parameters and options

When using the Custom Form response type, you build a form using the fields and options you want.

You can customize each form element with the settings outlined in the n8n Form trigger's form elements. To add more fields, select the Add Form Element button.

The following options are also available:

  • Message Button Label: The label to use for the message button. The default choice is Respond.
  • Response Form Title: The title of the form where users provide their response.
  • Response Form Description: A description for the form where users provide their response.
  • Response Form Button Label: The label for the button on the form to submit their response. The default choice is Submit.
  • Limit Wait Time: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time.
  • Append n8n Attribution: Set whether to include the phrase This email was sent automatically with n8n at the end of the email (turned on) or not (turned off).

## Templates and examples

Personalize marketing emails using customer data and AI

View template details

Automated Stock Analysis Reports with Technical & News Sentiment using GPT-4o

View template details

AI marketing report (Google Analytics & Ads, Meta Ads), sent via email/Telegram

by Friedemann Schuetz

View template details

Browse Send Email integration templates, or search all templates


---

## DFIR-IRIS credentials

**URL:** llms-txt#dfir-iris-credentials

**Contents:**
- Prerequisites
- Related resources
- Using API Key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

An accessible instance of DFIR-IRIS.

Refer to DFIR-IRIS's API documentation for more information about authenticating with the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:


---

## Facebook Trigger Workplace Security object

**URL:** llms-txt#facebook-trigger-workplace-security-object

**Contents:**
- Trigger configuration
- Related resources

Use this object to receive updates when Workplace security events occur, like adding or removing admins, users joining or leaving a Workplace, and more. Refer to Facebook Trigger for more information on the trigger itself.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Facebook Trigger integrations page.

## Trigger configuration

To configure the trigger with this Object:

  1. Select the Credential to connect with. Select an existing or create a new Facebook App credential.
  2. Enter the APP ID of the app connected to your credential. Refer to the Facebook App credential documentation for more information.
  3. Select Workplace Security as the Object.
  4. Field Names or IDs: By default, the node will trigger on all the available events using the * wildcard filter. If you'd like to limit the events, use the X to remove the star and use the dropdown or an expression to select the updates you're interested in.
  5. In Options, turn on the toggle to Include Values. This Object type fails without the option enabled.

Refer to Meta's Security Workplace API reference for more information.


---

## HTTP Request node

**URL:** llms-txt#http-request-node

**Contents:**
- Node parameters
  - Method
  - URL
  - Authentication
  - Send Query Parameters
  - Send Headers
  - Send Body
- Node options
  - Array Format in Query Parameters
  - Batching

The HTTP Request node is one of the most versatile nodes in n8n. It allows you to make HTTP requests to query data from any app or service with a REST API. You can use the HTTP Request node as a regular node or attach it to an AI agent to use it as a tool.

When using this node, you're creating a REST API call. You need some understanding of basic API terminology and concepts.

There are two ways to create an HTTP request: configure the node parameters or import a curl command.

Refer to HTTP Request credentials for guidance on setting up authentication.

Select the method to use for the request:

  • DELETE
  • GET
  • HEAD
  • OPTIONS
  • PATCH
  • POST
  • PUT

Enter the endpoint you want to use.

n8n recommends using the Predefined Credential Type option when it's available. It offers an easier way to set up and manage credentials, compared to configuring generic credentials.

Predefined credentials

Credentials for integrations supported by n8n, including both built-in and community nodes. Use Predefined Credential Type for custom operations without extra setup. Refer to Custom API operations for more information.

Generic credentials

Credentials for integrations not supported by n8n. You'll need to manually configure the authentication process, including specifying the required API endpoints, necessary parameters, and the authentication method.

You can select one of the following methods:

  • Basic auth
  • Custom auth
  • Digest auth
  • Header auth
  • OAuth1 API
  • OAuth2 API
  • Query auth

Refer to HTTP request credentials for more information on setting up each credential type.

Send Query Parameters

Query parameters act as filters on HTTP requests. If the API you're interacting with supports them and the request you're making needs a filter, turn this option on.

Specify your query parameters using one of the available options:

  • Using Fields Below: Enter Name/Value pairs of Query Parameters. To enter more query parameter name/value pairs, select Add Parameter. The name is the name of the field you're filtering on, and the value is the filter value.
  • Using JSON: Enter JSON to define your query parameters.
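
For example, an endpoint that supports name and limit filters might take the name/value pairs name=Alice and limit=10, or the equivalent JSON {"name": "Alice", "limit": 10} (the field names are illustrative; check your API's documentation).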

Refer to your service's API documentation for detailed guidance.

Use this parameter to send headers with your request. Headers contain metadata or context about your request.

Specify Headers using one of the available options:

  • Using Fields Below: Enter Name/Value pairs of Header Parameters. To enter more header parameter name/value pairs, select Add Parameter. The name is the header you wish to set, and the value is the value you want to pass for that header.
  • Using JSON: Enter JSON to define your header parameters.

Refer to your service's API documentation for detailed guidance.

If you need to send a body with your API request, turn this option on.

Then select the Body Content Type that best matches the format for the body content you wish to send.

Use this option to send your body as application/x-www-form-urlencoded.

Specify Body using one of the available options:

  • Using Fields Below: Enter Name/Value pairs of Body Parameters. To enter more body parameter name/value pairs, select Add Parameter. The name should be the form field name, and the value is what you wish to set that field to.
  • Using Single Field: Enter your name/value pairs in a single Body parameter with format fieldname1=value1&fieldname2=value2.

Refer to your service's API documentation for detailed guidance.

Use this option to send your body as multipart/form-data.

Configure your Body Parameters by selecting the Parameter Type:

  • Choose Form Data to enter Name/Value pairs.
  • Choose n8n Binary File to pull the body from a file the node has access to.
    • Name: Enter the ID of the field to set.
    • Input Data Field Name: Enter the name of the incoming field containing the binary file data you want to process.

Select Add Parameter to enter more parameters.

Refer to your service's API documentation for detailed guidance.

Use this option to send your body as JSON.

Specify Body using one of the available options:

  • Using Fields Below: Enter Name/Value pairs of Body Parameters. To enter more body parameter name/value pairs, select Add Parameter.
  • Using JSON: Enter JSON to define your body.

Refer to your service's API documentation for detailed guidance.

Use this option to send the contents of a file stored in n8n as the body.

Enter the name of the incoming field that contains the file as the Input Data Field Name.

Refer to your service's API documentation for detailed guidance on how to format the file.

Use this option to send raw data in the body.

  • Content Type: Enter the Content-Type header to use for the raw body content. Refer to the IANA Media types documentation for a full list of MIME content types.
  • Body: Enter the raw body content to send.

Refer to your service's API documentation for detailed guidance.

Select Add Option to view and select these options. Options are available to all parameters unless otherwise noted.

Array Format in Query Parameters

This option is only available when you turn on Send Query Parameters.

Use this option to control the format for arrays included in query parameters. Choose from these options:

  • No Brackets: Arrays will format as the name=value for each item in the array, for example: foo=bar&foo=qux.
  • Brackets Only: The node adds square brackets after each array name, for example: foo[]=bar&foo[]=qux.
  • Brackets with Indices: The node adds square brackets with an index value after each array name, for example: foo[0]=bar&foo[1]=qux.

Refer to your service's API documentation for guidance on which option to use.

Control how to batch large numbers of input items:

  • Items per Batch: Enter the number of input items to include in each batch.
  • Batch Interval: Enter the time to wait between each batch of requests in milliseconds. Enter 0 for no batch interval.

Ignore SSL Issues

By default, n8n only downloads the response if SSL certificate validation succeeds. If you'd like to download the response even if SSL certificate validation fails, turn this option on.

Lowercase Headers

Choose whether to lowercase header names (turned on, default) or not (turned off).

Choose whether to follow redirects (turned on by default) or not (turned off). If turned on, enter the maximum number of redirects the request should follow in Max Redirects.

Use this option to set some details about the expected API response, including:

  • Include Response Headers and Status: By default, the node returns only the body. Turn this option on to return the full response (headers and response status code) as well as the body.
  • Never Error: By default, the node returns success only when the response returns with a 2xx code. Turn this option on to return success regardless of the code returned.
  • Response Format: Select the format in which the data gets returned. Choose from:
    • Autodetect (default): The node detects and formats the response based on the data returned.
    • File: Select this option to put the response into a file. Enter the field name where you want the file returned in Put Output in Field.
    • JSON: Select this option to format the response as JSON.
    • Text: Select this option to format the response as plain text. Enter the field name where you want the file returned in Put Output in Field.

Use this option to paginate results, useful for handling query results that are too big for the API to return in a single call.

Inspect the API data first

Some options for pagination require knowledge of the data returned by the API you're using. Before setting up pagination, either check the API documentation, or do an API call without pagination, to see the data it returns.

Understand pagination

Pagination means splitting a large set of data into multiple pages. The amount of data on each page depends on the limit you set.

For example, you make an API call to an endpoint called /users. The API wants to send back information on 300 users, but this is too much data for the API to send in one response.

If the API supports pagination, you can incrementally fetch the data. To do this, you call /users with a pagination limit, and a page number or URL to tell the API which page to send. In this example, say you use a limit of 10, and start from page 0. The API sends the first 10 users in its response. You then call the API again, increasing the page number by 1, to get the next 10 results.

Configure the pagination settings:

  • Pagination Mode:
    • Off: Turn off pagination.
    • Update a Parameter in Each Request: Use this when you need to dynamically set parameters for each request.
    • Response Contains Next URL: Use this when the API response includes the URL of the next page. Use an expression to set Next URL.
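
For example, if the API returns the address of the next page in a next field of the response body, you could set Next URL to an expression like {{ $response.body.next }} (the field name is hypothetical; inspect your API's actual response structure).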

For example setups, refer to HTTP Request node cookbook | Pagination.

n8n provides built-in variables for working with HTTP node requests and responses when using pagination:

| Variable | Description |
| -------- | ----------- |
| $pageCount | The pagination count. Tracks how many pages the node has fetched. |
| $request | The request object sent by the HTTP node. |
| $response | The response object from the HTTP call. Includes $response.body, $response.headers, and $response.statusCode. The contents of body and headers depend on the data sent by the API. |

Different APIs implement pagination in different ways. Check the API documentation for the API you're using for details. You need to find out things like:

  • Does the API provide the URL for the next page?
  • Are there API-specific limits on page size or page number?
  • The structure of the data that the API returns.

Use this option if you need to specify an HTTP proxy.

Enter the Proxy the request should use. This takes precedence over global settings defined with the HTTP_PROXY, HTTPS_PROXY, or ALL_PROXY environment variables.

Use this option to set how long the node should wait for the server to send response headers (and start the response body). The node aborts requests that exceed this value for the initial response.

Enter the Timeout time to wait in milliseconds.

The following options are only available when attached to an AI agent as a tool.

Optimize Response

Whether to optimize the tool response to reduce the amount of data passed to the LLM. Optimizing the response can reduce costs and can help the LLM ignore unimportant details, often leading to better results.

When optimizing responses, you select an expected response type, which determines other options you can configure. The supported response types are JSON, HTML, and Text.

When expecting a JSON response, you can configure which parts of the JSON data to use as a response with the following choices:

  • Field Containing Data: This field identifies a specific part of the JSON object that contains your relevant data. You can leave this blank to use the entire response.
  • Include Fields: This is how you choose which fields you want in your response object. There are three choices:
    • All: Include all fields in the response object.
    • Selected: Include only the fields specified below.
      • Fields: A comma-separated list of fields to include in the response. You can use dot notation to specify nested fields. You can drag fields from the Input panel to add them to the field list.
    • Exclude: Include all fields except the fields specified below.
      • Fields: A comma-separated list of fields to exclude from the response. You can use dot notation to specify nested fields. You can drag fields from the Input panel to add them to the field list.

When expecting HTML, you can identify the part of an HTML document relevant to the LLM and optimize the response with the following options:

  • Selector (CSS): A specific element or element type to include in the response HTML. Uses the body element by default.
  • Return Only Content: Whether to strip HTML tags and attributes from the response, leaving only the actual content. This uses fewer tokens and may be easier for the model to understand.
    • Elements To Omit: A comma-separated list of CSS selectors to exclude when extracting content.
  • Truncate Response: Whether to limit the response size to save tokens.
    • Max Response Characters: The maximum number of characters to include in the HTML response. The default value is 1000.

When expecting a generic Text response, you can optimize the results with the following options:

  • Truncate Response: Whether to limit the response size to save tokens.
    • Max Response Characters: The maximum number of characters to include in the text response. The default value is 1000.

## Import curl command

curl is a command line tool and library for transferring data with URLs.

You can use curl to call REST APIs. If the API documentation of the service you want to use provides curl examples, you can copy them out of the documentation and into n8n to configure the HTTP Request node.
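
For example, a command like the following (with a placeholder URL and token) is the kind of snippet you can paste into the node:

curl -H "Authorization: Bearer <YOUR_TOKEN>" "https://api.example.com/v1/users?limit=10"

n8n reads the method, URL, headers, and any body from the command and fills in the matching node parameters.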

Import a curl command:

  1. From the HTTP Request node's Parameters tab, select Import cURL. The Import cURL command modal opens.
  2. Paste your curl command into the text box.
  3. Select Import. n8n loads the request configuration into the node fields. This overwrites any existing configuration.

## Templates and examples

Building Your First WhatsApp Chatbot

View template details

Scrape and summarize webpages with AI

View template details

Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram

View template details

Browse HTTP Request integration templates, or search all templates

For common questions or issues and suggested solutions, refer to Common Issues.


---

## Zabbix credentials

**URL:** llms-txt#zabbix-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Zabbix Cloud account or self-host your own Zabbix server.

## Supported authentication methods

Refer to Zabbix's API documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

  • an API Token: An API key for your Zabbix user.
  • the URL: The URL of your Zabbix server. Don't include /zabbix as part of the URL.

Refer to Zabbix's API documentation for more information about authenticating to the service.


---

## Docker Installation

**URL:** llms-txt#docker-installation

**Contents:**
- Prerequisites
- Starting n8n
- Using with PostgreSQL
- Updating

n8n recommends using Docker for most self-hosting needs. It provides a clean, isolated environment, avoids operating system and tooling incompatibilities, and makes database and environment management simpler.

You can also use n8n in Docker with Docker Compose. You can find Docker Compose configurations for various architectures in the n8n-hosting repository.

Self-hosting knowledge prerequisites

Self-hosting n8n requires technical knowledge, including:

  • Setting up and configuring servers and containers
  • Managing application resources and scaling
  • Securing servers and applications
  • Configuring n8n

n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends n8n Cloud.

Before proceeding, install Docker:

  • Docker Desktop is available for Mac, Windows, and Linux. Docker Desktop includes the Docker Engine and Docker Compose.
  • Docker Engine and Docker Compose are also available as separate packages for Linux. Use this for Linux machines without a graphical environment or when you don't want the Docker Desktop UI.

Latest and Next versions

n8n releases a new minor version most weeks. The latest version is for production use. next is the most recent release. You should treat next as a beta: it may be unstable. To report issues, use the forum.

Current latest: 1.118.2
Current next: 1.119.0

From your terminal, run the following commands (shown in Example 1 below), replacing the <YOUR_TIMEZONE> placeholders with your timezone:

This command creates a volume to store persistent data, downloads the required n8n image, and starts the container with the following settings:

  • Maps and exposes port 5678 on the host.
  • Sets the timezone for the container using the GENERIC_TIMEZONE and TZ environment variables.
  • Enforces secure file permissions for the n8n configuration file.
  • Enables task runners, the recommended way of executing tasks in n8n.
  • Mounts the n8n_data volume to the /home/node/.n8n directory to persist your data across container restarts.

Once running, you can access n8n by opening: http://localhost:5678

## Using with PostgreSQL

By default, n8n uses SQLite to save credentials, past executions, and workflows. n8n also supports PostgreSQL, configurable using environment variables as detailed below.

Persisting the .n8n directory still recommended

When using PostgreSQL, n8n doesn't need to use the .n8n directory for the SQLite database file. However, the directory still contains other important data like encryption keys, instance logs, and source control feature assets. While you can work around some of these requirements (for example, by setting the N8N_ENCRYPTION_KEY environment variable), it's best to continue mapping a persistent volume for the directory to avoid potential issues.

To use n8n with PostgreSQL, execute the following commands, replacing the placeholders (depicted within angled brackets, for example <POSTGRES_USER>) with your actual values:

You can find a complete docker-compose file for PostgreSQL in the n8n hosting repository.

To update n8n in Docker Desktop, navigate to the Images tab and select Pull from the context menu to download the latest n8n image.

You can also use the command line to pull the latest version or a specific version:
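
For example, using the image name from the commands above (replace <VERSION> with the release you want; after pulling, stop and re-create the container so it runs the new image):

docker pull docker.n8n.io/n8nio/n8n

docker pull docker.n8n.io/n8nio/n8n:<VERSION>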

Examples:

Example 1 (unknown):

docker volume create n8n_data

docker run -it --rm \
 --name n8n \
 -p 5678:5678 \
 -e GENERIC_TIMEZONE="<YOUR_TIMEZONE>" \
 -e TZ="<YOUR_TIMEZONE>" \
 -e N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true \
 -e N8N_RUNNERS_ENABLED=true \
 -v n8n_data:/home/node/.n8n \
 docker.n8n.io/n8nio/n8n

Example 2 (unknown):

docker volume create n8n_data

docker run -it --rm \
 --name n8n \
 -p 5678:5678 \
 -e GENERIC_TIMEZONE="<YOUR_TIMEZONE>" \
 -e TZ="<YOUR_TIMEZONE>" \
 -e N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true \
 -e N8N_RUNNERS_ENABLED=true \
 -e DB_TYPE=postgresdb \
 -e DB_POSTGRESDB_DATABASE=<POSTGRES_DATABASE> \
 -e DB_POSTGRESDB_HOST=<POSTGRES_HOST> \
 -e DB_POSTGRESDB_PORT=<POSTGRES_PORT> \
 -e DB_POSTGRESDB_USER=<POSTGRES_USER> \
 -e DB_POSTGRESDB_SCHEMA=<POSTGRES_SCHEMA> \
 -e DB_POSTGRESDB_PASSWORD=<POSTGRES_PASSWORD> \
 -v n8n_data:/home/node/.n8n \
 docker.n8n.io/n8nio/n8n

Set a custom encryption key

URL: llms-txt#set-a-custom-encryption-key

n8n creates a random encryption key automatically on the first launch and saves it in the ~/.n8n folder. n8n uses that key to encrypt the credentials before they get saved to the database. If the key isn't yet in the settings file, you can set it using an environment variable, so that n8n uses your custom key instead of generating a new one.

In queue mode, you must specify the encryption key environment variable for all workers.

Refer to Environment variables reference for more information on this variable.

Examples:

Example 1 (unknown):

export N8N_ENCRYPTION_KEY=<SOME RANDOM STRING>
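
If you run n8n in Docker, as shown in the Docker installation section, you can pass the same variable to the container instead. A minimal sketch, reusing the n8n_data volume and image from that section:

docker run -it --rm \
 --name n8n \
 -p 5678:5678 \
 -e N8N_ENCRYPTION_KEY="<SOME RANDOM STRING>" \
 -v n8n_data:/home/node/.n8n \
 docker.n8n.io/n8nio/n8n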

Processing different data types

URL: llms-txt#processing-different-data-types

Contents:

  • HTML and XML data
    • HTML Exercise
    • XML Exercise
  • Date, time, and interval data
    • Date Exercise
  • Binary data
    • Binary Exercise 1
    • Binary Exercise 2

In this chapter, you will learn how to process different types of data using n8n core nodes.

You're most likely familiar with HTML and XML.

HTML is a markup language used to describe the structure and semantics of a web page. XML looks similar to HTML, but the tag names are different, as they describe the kind of data they hold.

If you need to process HTML or XML data in your n8n workflows, use the HTML node or the XML node.

Use the HTML node to extract the HTML content of a webpage by referencing CSS selectors. This is useful if you want to collect structured information from a website (web scraping).

Let's get the title of the latest n8n blog post:

  1. Use the HTTP Request node to make a GET request to the URL https://blog.n8n.io/ (this endpoint requires no authentication).

  2. Connect an HTML node and configure it to extract the title of the first blog post on the page.

    • Hint: If you're not familiar with CSS selectors or reading HTML, the CSS selector .post .item-title a should help!
  3. Configure the HTTP Request node with the following parameters:

    • Authentication: None
    • Request Method: GET
    • URL: https://blog.n8n.io/

The result should look like this:

Result of HTTP Request node

  1. Connect an HTML node to the HTTP Request node and configure the former's parameters:
    • Operation: Extract HTML Content
    • Source Data: JSON
    • JSON Property: data
    • Extraction Values:
      • Key: title
      • CSS Selector: .post .item-title a
      • Return Value: HTML

You can add more values to extract more data.

The result should look like this:

Result of HTML Extract node

Use the XML node to convert XML to JSON and JSON to XML. This operation is useful if you work with different web services that use either XML or JSON and need to get and submit data between them in the two formats.

In the final exercise of Chapter 1, you used an HTTP Request node to make a request to the PokéAPI. In this exercise, we'll return to that same API but we'll convert the output to XML:

  1. Add an HTTP Request node that makes the same request to the PokéAPI at https://pokeapi.co/api/v2/pokemon.

  2. Use the XML node to convert the JSON output to XML.

  3. To get the Pokémon data from the PokéAPI, execute the HTTP Request node with the following parameters:

    • Request Method: GET
    • URL: https://pokeapi.co/api/v2/pokemon

  4. Connect an XML node to it with the following parameters:

    • Mode: JSON to XML
    • Property name: data

The result should look like this:

XML node (JSON to XML) Table View

To transform data the other way around, select the mode XML to JSON.

Date, time, and interval data

Date and time data types include DATE, TIME, DATETIME, TIMESTAMP, and YEAR. The dates and times can be passed in different formats, for example:

  • DATE: March 29 2022, 29-03-2022, 2022/03/29
  • TIME: 08:30:00, 8:30, 20:30
  • DATETIME: 2022/03/29 08:30:00
  • TIMESTAMP: 1616108400 (Unix timestamp), 1616108400000 (Unix ms timestamp)
  • YEAR: 2022, 22

There are a few ways you can work with dates and times:

  • Use the Date & Time node to convert date and time data to different formats and calculate dates.
  • Use Schedule Trigger node to schedule workflows to run at a specific time, interval, or duration.
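
You can also work with dates directly in expressions, which expose Luxon's DateTime object. As a minimal sketch, this expression takes the created field used later in this chapter, adds five days, and formats the result (the field name and format are illustrative):

{{ DateTime.fromISO($json.created).plus({ days: 5 }).toFormat('yyyy-MM-dd') }}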

Sometimes, you might need to pause the workflow execution. This might be necessary if you know that a service doesn't process the data instantly or it's slow to return all the results. In these cases, you don't want n8n to pass incomplete data to the next node.

If you run into situations like this, use the Wait node after the node that you want to delay. The Wait node pauses the workflow execution and will resume execution:

  • At a specific time.
  • After a specified time interval.
  • On a webhook call.

Build a workflow that adds five days to an input date from the Customer Datastore node that you used before. Then, if the calculated date occurred after 1959, the workflow waits 1 minute before setting the calculated date as a value. The workflow should be triggered every 30 minutes.

  1. Add the Customer Datastore (n8n training) node with the Get All People action selected. Return All.

  2. Add the Date & Time node to Round Up the created Date from the datastore to End of Month. Output this to field new-date. Include all input fields.

  3. Add the If node to check if that new rounded date is after 1960-01-01 00:00:00.

  4. Add the Wait node to the True output of that node and set it to wait for one minute.

  5. Add the Edit Fields (Set) node to set a new field called outputValue to a String containing new-date. Include all input fields.

  6. Add the Schedule Trigger node at the beginning of the workflow to trigger it every 30 minutes. (You can keep the Manual Trigger node for testing!)

  7. Add the Customer Datastore (n8n training) node with the Get All People action selected.

    • Select the option to Return All.
  8. Add a Date & Time node connected to the Customer Datastore node. Select the option to Round a Date.

    • Add the created date as the Date to round.
    • Select Round Up as the Mode and End of Month as the To.
    • Set the Output Field Name as new-date.
    • In Options, select Add Option and use the control to Include Input Fields
  9. Add an If node connected to the Date & Time node.

    • Add the new-date field as the first part of the condition.
    • Set the comparison to Date & Time > is after.
    • Add 1960-01-01 00:00:00 as the second part of the expression. (This should produce 3 items in the True Branch and 2 items in the False Branch)
  10. Add a Wait node to the True output of the If node.

    • Set Resume to After Time interval.
    • Set Wait Amount to 1.00.
    • Set Wait Unit to Minutes.
  11. Add an Edit Fields (Set) node to the Wait node.

    • Use either JSON or Manual Mapping Mode.
    • Set a new field called outputValue to be the value of the new-date field.
    • Select the option to Include Other Input Fields and include All fields.
  12. Add a Schedule Trigger node at the beginning of the workflow.

    • Set the Trigger Interval to use Minutes.
    • Set the Minutes Between Triggers to 30.
    • To test your schedule, be sure to activate the workflow.
    • Be sure to connect this node to the Customer Datastore (n8n training) node you began with!

The workflow should look like this:

Workflow for transforming dates

To check the configuration of each node, you can copy the JSON code of this workflow and either paste it into the Editor UI or save it as a file and import from file into a new workflow. See Export and import workflows for more information.

Up to now, you have mainly worked with text data. But what if you want to process data that's not text, like images or PDF files? These types of files are represented in the binary numeral system, so they're considered binary data. In this form, binary data doesn't offer you useful information, so you'll need to convert it into a readable form.

In n8n, you can process binary data with the following nodes:

Reading and writing files is only available on self-hosted n8n

Reading and writing files to disk isn't available on n8n Cloud. You'll read and write to the machine where you installed n8n. If you run n8n in Docker, your command runs in the n8n container and not the Docker host. The Read/Write Files From Disk node looks for files relative to the n8n install path. n8n recommends using absolute file paths to prevent any errors.
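
If you run n8n in Docker and want to read or write files that live on the Docker host, one option is to mount a host directory into the container when you start it. A sketch, assuming you want the host folder /local-files available as /files inside the container (both paths are examples):

docker run -it --rm \
 --name n8n \
 -p 5678:5678 \
 -v n8n_data:/home/node/.n8n \
 -v /local-files:/files \
 docker.n8n.io/n8nio/n8n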

To read or write a binary file, you need to write the path (location) of the file in the node's File(s) Selector parameter (for the Read operation) or in the node's File Path and Name parameter (for the Write operation).

Naming the right path

The file path looks slightly different depending on how you are running n8n:

  • npm: ~/my_file.json
  • n8n cloud / Docker: /tmp/my_file.json

Binary Exercise 1

For our first binary exercise, let's convert a PDF file to JSON:

  1. Make an HTTP request to get this PDF file: https://media.kaspersky.com/pdf/Kaspersky_Lab_Whitepaper_Anti_blocker.pdf.
  2. Use the Extract From File node to convert the file from binary to JSON.

In the HTTP Request node, you should see the PDF file, like this:

HTTP Request node to get PDF

When you convert the PDF from binary to JSON using the Extract From File node, the result should look like this:

Extract From File node

To check the configuration of the nodes, you can copy the JSON workflow code below and paste it into your Editor UI:

Binary Exercise 2

For our second binary exercise, let's convert some JSON data to binary:

  1. Make an HTTP request to the Poetry DB API https://poetrydb.org/random/1.
  2. Convert the returned data from JSON to binary using the Convert to File node.
  3. Write the new binary file data to the machine where n8n is running using the Read/Write Files From Disk node.
  4. To check that it worked out, use the Read/Write Files From Disk node to read the generated binary file.

The workflow for this exercise looks like this:

Workflow for moving JSON to binary data

To check the configuration of the nodes, you can copy the JSON workflow code below and paste it into your Editor UI:

Examples:

Example 1 (unknown):

{
"name": "Course 2, Ch 2, Date exercise",
"nodes": [
	{
	"parameters": {},
	"id": "6bf64d5c-4b00-43cf-8439-3cbf5e5f203b",
	"name": "When clicking \"Execute workflow\"",
	"type": "n8n-nodes-base.manualTrigger",
	"typeVersion": 1,
	"position": [
		620,
		280
	]
	},
	{
	"parameters": {
		"operation": "getAllPeople",
		"returnAll": true
	},
	"id": "a08a8157-99ee-4d50-8fe4-b6d7e16e858e",
	"name": "Customer Datastore (n8n training)",
	"type": "n8n-nodes-base.n8nTrainingCustomerDatastore",
	"typeVersion": 1,
	"position": [
		840,
		360
	]
	},
	{
	"parameters": {
		"operation": "roundDate",
		"date": "={{ $json.created }}",
		"mode": "roundUp",
		"outputFieldName": "new-date",
		"options": {
		"includeInputFields": true
		}
	},
	"id": "f66a4356-2584-44b6-a4e9-1e3b5de53e71",
	"name": "Date & Time",
	"type": "n8n-nodes-base.dateTime",
	"typeVersion": 2,
	"position": [
		1080,
		360
	]
	},
	{
	"parameters": {
		"conditions": {
		"options": {
			"caseSensitive": true,
			"leftValue": "",
			"typeValidation": "strict"
		},
		"conditions": [
			{
			"id": "7c82823a-e603-4166-8866-493f643ba354",
			"leftValue": "={{ $json['new-date'] }}",
			"rightValue": "1960-01-01T00:00:00",
			"operator": {
				"type": "dateTime",
				"operation": "after"
			}
			}
		],
		"combinator": "and"
		},
		"options": {}
	},
	"id": "cea39877-6183-4ea0-9400-e80523636912",
	"name": "If",
	"type": "n8n-nodes-base.if",
	"typeVersion": 2,
	"position": [
		1280,
		360
	]
	},
	{
	"parameters": {
		"amount": 1,
		"unit": "minutes"
	},
	"id": "5aa860b7-c73c-4df0-ad63-215850166f13",
	"name": "Wait",
	"type": "n8n-nodes-base.wait",
	"typeVersion": 1.1,
	"position": [
		1480,
		260
	],
	"webhookId": "be78732e-787d-463e-9210-2c7e8239761e"
	},
	{
	"parameters": {
		"assignments": {
		"assignments": [
			{
			"id": "e058832a-2461-4c6d-b584-043ecc036427",
			"name": "outputValue",
			"value": "={{ $json['new-date'] }}",
			"type": "string"
			}
		]
		},
		"includeOtherFields": true,
		"options": {}
	},
	"id": "be034e9e-3cf1-4264-9d15-b6760ce28f91",
	"name": "Edit Fields",
	"type": "n8n-nodes-base.set",
	"typeVersion": 3.3,
	"position": [
		1700,
		260
	]
	},
	{
	"parameters": {
		"rule": {
		"interval": [
			{
			"field": "minutes",
			"minutesInterval": 30
			}
		]
		}
	},
	"id": "6e8e4308-d0e0-4d0d-bc29-5131b57cf061",
	"name": "Schedule Trigger",
	"type": "n8n-nodes-base.scheduleTrigger",
	"typeVersion": 1.1,
	"position": [
		620,
		480
	]
	}
],
"pinData": {},
"connections": {
	"When clicking \"Execute workflow\"": {
	"main": [
		[
		{
			"node": "Customer Datastore (n8n training)",
			"type": "main",
			"index": 0
		}
		]
	]
	},
	"Customer Datastore (n8n training)": {
	"main": [
		[
		{
			"node": "Date & Time",
			"type": "main",
			"index": 0
		}
		]
	]
	},
	"Date & Time": {
	"main": [
		[
		{
			"node": "If",
			"type": "main",
			"index": 0
		}
		]
	]
	},
	"If": {
	"main": [
		[
		{
			"node": "Wait",
			"type": "main",
			"index": 0
		}
		]
	]
	},
	"Wait": {
	"main": [
		[
		{
			"node": "Edit Fields",
			"type": "main",
			"index": 0
		}
		]
	]
	},
	"Schedule Trigger": {
	"main": [
		[
		{
			"node": "Customer Datastore (n8n training)",
			"type": "main",
			"index": 0
		}
		]
	]
	}
}
}

Example 2 (unknown):

{
	"name": "Binary to JSON",
	"nodes": [
		{
		"parameters": {},
		"id": "78639a25-b69a-4b9c-84e0-69e045bed1a3",
		"name": "When clicking \"Execute Workflow\"",
		"type": "n8n-nodes-base.manualTrigger",
		"typeVersion": 1,
		"position": [
			480,
			520
		]
		},
		{
		"parameters": {
			"url": "https://media.kaspersky.com/pdf/Kaspersky_Lab_Whitepaper_Anti_blocker.pdf",
			"options": {}
		},
		"id": "a11310df-1287-4e9a-b993-baa6bd4265a6",
		"name": "HTTP Request",
		"type": "n8n-nodes-base.httpRequest",
		"typeVersion": 4.1,
		"position": [
			700,
			520
		]
		},
		{
		"parameters": {
			"operation": "pdf",
			"options": {}
		},
		"id": "88697b6b-fb02-4c3d-a715-750d60413e9f",
		"name": "Extract From File",
		"type": "n8n-nodes-base.extractFromFile",
		"typeVersion": 1,
		"position": [
			920,
			520
		]
		}
	],
	"pinData": {},
	"connections": {
		"When clicking \"Execute Workflow\"": {
		"main": [
			[
			{
				"node": "HTTP Request",
				"type": "main",
				"index": 0
			}
			]
		]
		},
		"HTTP Request": {
		"main": [
			[
			{
				"node": "Extract From File",
				"type": "main",
				"index": 0
			}
			]
		]
		}
	}
}

Example 3 (unknown):

{
	"name": "JSON to file and Read-Write",
	"nodes": [
		{
		"parameters": {},
		"id": "78639a25-b69a-4b9c-84e0-69e045bed1a3",
		"name": "When clicking \"Execute Workflow\"",
		"type": "n8n-nodes-base.manualTrigger",
		"typeVersion": 1,
		"position": [
			480,
			520
		]
		},
		{
		"parameters": {
			"url": "https://poetrydb.org/random/1",
			"options": {}
		},
		"id": "a11310df-1287-4e9a-b993-baa6bd4265a6",
		"name": "HTTP Request",
		"type": "n8n-nodes-base.httpRequest",
		"typeVersion": 4.1,
		"position": [
			680,
			520
		]
		},
		{
		"parameters": {
			"operation": "toJson",
			"options": {}
		},
		"id": "06be18f6-f193-48e2-a8d9-35f4779d8324",
		"name": "Convert to File",
		"type": "n8n-nodes-base.convertToFile",
		"typeVersion": 1,
		"position": [
			880,
			520
		]
		},
		{
		"parameters": {
			"operation": "write",
			"fileName": "/tmp/poetrydb.json",
			"options": {}
		},
		"id": "f2048e5d-fa8f-4708-b15a-d07de359f2e5",
		"name": "Read/Write Files from Disk",
		"type": "n8n-nodes-base.readWriteFile",
		"typeVersion": 1,
		"position": [
			1080,
			520
		]
		},
		{
		"parameters": {
			"fileSelector": "={{ $json.fileName }}",
			"options": {}
		},
		"id": "d630906c-09d4-49f4-ba14-416c0f4de1c8",
		"name": "Read/Write Files from Disk1",
		"type": "n8n-nodes-base.readWriteFile",
		"typeVersion": 1,
		"position": [
			1280,
			520
		]
		}
	],
	"pinData": {},
	"connections": {
		"When clicking \"Execute Workflow\"": {
		"main": [
			[
			{
				"node": "HTTP Request",
				"type": "main",
				"index": 0
			}
			]
		]
		},
		"HTTP Request": {
		"main": [
			[
			{
				"node": "Convert to File",
				"type": "main",
				"index": 0
			}
			]
		]
		},
		"Convert to File": {
		"main": [
			[
			{
				"node": "Read/Write Files from Disk",
				"type": "main",
				"index": 0
			}
			]
		]
		},
		"Read/Write Files from Disk": {
		"main": [
			[
			{
				"node": "Read/Write Files from Disk1",
				"type": "main",
				"index": 0
			}
			]
		]
		}
	}
}

Outlook.com IMAP credentials

URL: llms-txt#outlook.com-imap-credentials

Contents:

  • Set up the credentials
  • Connection errors
  • Use an app password
    • Security Info app password

Follow these steps to configure the IMAP credentials with an Outlook.com account.

Set up the credentials

To set up the IMAP credential with Outlook.com account, use these settings:

  1. Enter your Outlook.com email address as the User.

  2. Enter your Outlook.com password as the Password.

Outlook.com doesn't require you to use an app password, but if you'd like to use one for security reasons, refer to Use an app password.

  3. Enter outlook.office365.com as the Host.

  4. For the Port, keep the default port number of 993.

  5. Turn on the SSL/TLS toggle.

  6. Check with your email administrator about whether to Allow Self-Signed Certificates.

Refer to Microsoft's POP, IMAP, and SMTP settings for Outlook.com documentation for more information.

You may receive a connection error if you configured your Outlook.com account as IMAP in multiple email clients. Microsoft is working on a fix for this. For now, try this workaround:

  1. Go to account.live.com/activity and sign in using the email address and password of the affected account.
  2. Under Recent activity, find the Session Type event that matches the most recent time you received the connection error. Select it to expand the details.
  3. Select This was me to approve the IMAP connection.
  4. Retest your n8n credential.

Refer to What is the Recent activity page? for more information on using this page.

The source for these instructions is Outlook.com IMAP connection errors. Refer to that documentation for more information.

Use an app password

If you'd prefer to use an app password instead of your email account password:

  1. Log into the My Account page.
  2. If you have a left navigation option for Security Info, jump to Security Info app password. If you don't have an option for Security Info, continue with these instructions.
  3. Go to the Additional security verification page.
  4. Select App passwords and Create.
  5. Enter a Name for your app password, like n8n credential.
  6. Use the option to copy password to clipboard and enter this as the Password in n8n instead of your email account password.

Refer to Outlook's Manage app passwords for 2-step verification page for more information.

Security Info app password

If you have a left navigation option for Security Info:

  1. Select Security Info. The Security Info page opens.
  2. Select + Add method.
  3. On the Add a method page, select App password and then select Add.
  4. Enter a Name for your app password, like n8n credential.
  5. Copy the Password and enter this as the Password in n8n instead of your email account password.

Refer to Outlook's Create app passwords from the Security info (preview) page for more information.


For example, **/myfile.txt

URL: llms-txt#for-example,-**/myfile.txt


Examples:

Example 1 (unknown):

Ignore a sub-directory of a directory you're watching:

Google Drive node common issues

URL: llms-txt#google-drive-node-common-issues

Contents:

  • Google hasn't verified this app
  • Google Cloud app becoming unauthorized
  • Google Drive OAuth error
  • Get recent files from Google Drive

Here are some common errors and issues with the Google Drive node and steps to resolve or troubleshoot them.

Google hasn't verified this app

If using the OAuth authentication method, you might see the warning Google hasn't verified this app. To avoid this:

  • If your app User Type is Internal, create OAuth credentials from the same account you want to authenticate.
  • If your app User Type is External, you can add your email to the list of testers for the app: go to the Audience page and add the email you're signing in with to the list of Test users.

If you need to use credentials generated by another account (by a developer or another third party), follow the instructions in Google Cloud documentation | Authorization errors: Google hasn't verified this app.

Google Cloud app becoming unauthorized

For Google Cloud apps with Publishing status set to Testing and User type set to External, consent and tokens expire after seven days. Refer to Google Cloud Platform Console Help | Setting up your OAuth consent screen for more information. To resolve this, reconnect the app in the n8n credentials modal.

Google Drive OAuth error

If using the OAuth authentication method, you may see an error indicating that you can't sign in because the app doesn't meet Google's expectations for keeping apps secure.

Most often, the actual cause of this issue is that the URLs don't match between Google's OAuth configuration and n8n. To troubleshoot, start by reviewing any links included in Google's error message; these contain details about the exact error that occurred.

If you are self-hosting n8n, check the n8n configuration items used to construct external URLs. Verify that the N8N_EDITOR_BASE_URL and WEBHOOK_URL environment variables use fully qualified domains.

Get recent files from Google Drive

To retrieve recent files from Google Drive, you need to sort files by modification time. To do this, search for existing files and retrieve their modification times. Next, sort the files to find the most recent one and use another Google Drive node to target that file by ID.

The process looks like this:

  1. Add a Google Drive node to your canvas.
  2. Select the File/Folder resource and the Search operation.
  3. Enable Return All to sort through all files.
  4. Set the What to Search filter to Files.
  5. In the Options, set the Fields to All.
  6. Connect a Sort node to the output of the Google Drive node.
  7. Choose Simple sort type.
  8. Enter modifiedTime as the Field Name in the Fields To Sort By section.
  9. Choose Descending sort order.
  10. Add a Limit node to the output of the Sort node.
  11. Set Max Items to 1 to keep the most recent file.
  12. Connect another Google Drive node to the output of the Limit node.
  13. Select File as the Resource and the operation of your choice.
  14. In the File selection, choose By ID.
  15. Select Expression and enter {{ $json.id }} as the expression.

View workflow file


Medium credentials

URL: llms-txt#medium-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Medium API no longer supported

Medium has stopped supporting the Medium API. These credentials still appear within n8n, but you can't configure new integrations using them.

Supported authentication methods

  • API access token
  • OAuth2

Refer to Medium's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • An API Access Token: Generate a token in Settings > Security and apps > Integration tokens. Use the integration token this generates as your n8n Access Token.

Refer to the Medium API Self-issued access tokens documentation for more information.

Using OAuth2

To configure this credential, you'll need:

  • A Client ID
  • A Client Secret

To generate a Client ID and Client Secret, you'll need access to the Developers menu. From there, create a new application to generate the Client ID and Secret.

Use these settings for your new application:

  • Select OAuth 2 as the Authorization Protocol
  • Copy the OAuth Callback URL from n8n and use this as the Callback URL in Medium.

Gong credentials

URL: llms-txt#gong-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API access token
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API access token
  • OAuth2

Refer to Gong's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need a Gong account and:

  • An Access Key
  • An Access Key Secret

You can create both of these items on the Gong API Page (you must be a technical administrator in Gong to access this resource).

Refer to Gong's API documentation for more information about authenticating to the service.

Using OAuth2

To configure this credential, you'll need a Gong account, a Gong developer account, and:

  • A Client ID: Generated when you create an OAuth app for Gong.
  • A Client Secret: Generated when you create an OAuth app for Gong.

If you're self-hosting n8n, you'll need to create an app to configure OAuth2. Refer to Gong's OAuth documentation for more information about setting up OAuth2.


Create a workflow

URL: llms-txt#create-a-workflow

Contents:

  • Create a workflow
  • Run workflows manually
  • Run workflows automatically

A workflow is a collection of nodes connected together to automate a process. You build workflows on the workflow canvas.

You can create a workflow in two ways.

From the side menu:

  1. Select the button in the upper-left corner of the side menu. Select workflow.

  2. If your n8n instance supports projects, you'll also need to choose whether to create the workflow inside your personal space or a specific project you have access to. If you're using the community version, you'll always create workflows inside your personal space.

  3. Get started by adding a trigger node: select Add first step...

From the Overview page or a project:

  1. Select the create button in the upper-right corner of either the Overview page or a specific project. Select workflow.

  2. If you're doing this from the Overview page, you'll create the workflow inside your personal space. If you're doing this from inside a project, you'll create the workflow inside that specific project.

  3. Get started by adding a trigger node: select Add first step...

If it's your first time building a workflow, you may want to use the quickstart guides to quickly try out n8n features.

Run workflows manually

You may need to run your workflow manually when building and testing, or if your workflow doesn't have a trigger node.

To run manually, select Execute Workflow.

Run workflows automatically

All new workflows are inactive by default.

You need to activate workflows that start with a trigger node or Webhook node so that they can run automatically. When a workflow is inactive, you must run it manually.

To activate or deactivate your workflow, open your workflow and toggle Inactive / Active.

Once a workflow is active, it runs whenever its trigger conditions are met.


Rocket.Chat node

URL: llms-txt#rocket.chat-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Rocket.Chat node to automate work in Rocket.Chat, and integrate Rocket.Chat with other applications. n8n supports posting messages to channels and sending direct messages with Rocket.Chat.

On this page, you'll find a list of operations the Rocket.Chat node supports and links to more resources.

Refer to Rocket.Chat credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Chat
    • Post a message to a channel or a direct message

Templates and examples

Post latest Twitter mentions to Slack

View template details

Post a message to a channel in RocketChat

View template details

Render custom text over images

View template details

Browse Rocket.Chat integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


uProc node

URL: llms-txt#uproc-node

Contents:

  • Operations
    • Audio
    • Communication
    • Company
    • Finance
    • Geographical
    • Image
    • Internet
    • Personal
    • Product

Use the uProc node to automate work in uProc, and integrate uProc with other applications. n8n has built-in support for a wide range of uProc features, including getting advanced human audio files and retrieving communication, company, finance, and product information.

On this page, you'll find a list of operations the uProc node supports and links to more resources.

Refer to uProc credentials for guidance on setting up authentication.

  • Get advanced human audio file by provided text and language

  • Get an audio file by provided text and language

  • Discover if a domain has a social network presence

  • Discover if an email is valid, hard bounce, soft bounce, spam-trap, free, temporary, and recipient exists

  • Discover if the email recipient exists, returning email status

  • Check if an email domain has an SMTP server to receive emails

  • Discover if the email has a social network presence

  • Check if an email has a valid format

  • Check if an email domain belongs to a disposable email service

  • Check if email belongs to free service provider like Gmail

  • Check if email is catchall

  • Discover if an email exists in the Robinson list (only Spain)

  • Check if email belongs to a system or role-based account

  • Check if an email is a spam trap

  • Discover if an IMEI number has a valid format

  • Check if a LinkedIn profile is a first-degree contact

  • Discover if mobile phone number exists in network operator, with worldwide coverage

  • Discover if a mobile phone number has a valid format with worldwide coverage

  • Discover if a mobile phone number has a valid format (only Spain)

  • Discover if a mobile phone number has a valid prefix, with worldwide coverage

  • Discover if a Spanish mobile phone number has a valid prefix

  • Discover if a mobile number is switched on to call it later, with worldwide coverage

  • Discover if a mobile number can receive SMS with worldwide coverage

  • Discover if a phone (landline or mobile) exists in a Robinson list (only Spain)

  • Discover if a landline or mobile number has a valid prefix

  • Discover if a landline phone number is valid, with Spain coverage

  • Allows discovering if landline number has a good international format, depending on the country

  • Discover if a landline phone number prefix exists, with worldwide coverage

  • Clean a phone removing non allowed characters

  • Allows getting country code of a mobile phone number with international format

  • Allows getting a domain from an email

  • Discover an email by company website or domain and prospect's first-name and last-name

  • Check if an email is personal or generic

  • Get emails list found on the internet by domain or URI

  • Get an emails list found on the internet by non-free email

  • Get emails list found inside the website by domain or URI

  • Get three first web references of an email published on the internet

  • Allows you to fix the email domain of those misspelled emails

  • Fix the international prefix of a phone based on the ISO code of a country

  • Get GDPR compliant emails list by domain for your Email Marketing campaigns in Europe

  • Discover if mobile exist using real-time HLR query

  • Get personal email by social network profile

  • Get portability data about a landline or mobile number, only for Spain

  • Extract results from a LinkedIn search (employees in a company)

  • Get members in a LinkedIn group

  • Get 'Search LinkedIn Contacts' URL

  • Extract the last 80 connections from your LinkedIn profile

  • Extract the last 80 invitations sent from your LinkedIn

  • Get users who comment on a post on LinkedIn

  • Get users who like a post on LinkedIn

  • Extract a LinkedIn profile

  • Extract results from a LinkedIn search (profiles)

  • Extract last profiles that have published content on LinkedIn by specific keywords

  • Discover if mobile exist using real-time HLR query, as well as portability and roaming data

  • Get existence, portability, and roaming of a mobile phone using MNP query

  • Discover if mobile or landline prefix exists in Spain

  • Allows normalizing email address, removing non allowed characters

  • Allows normalizing a mobile phone, removing non-allowed characters

  • Parse phone number in multiple fields and verify format and prefix validity

  • Allows getting country prefix number by country code

  • Discover an email by company website or domain and prospect's first-name and last-name

  • This tool parses a social URI address and extracts any available indicators

  • Search all social networks by domain, parses all found URLs, and returns social networks data

  • Discover if a domain or a website has social activity and returns all social network profiles found

  • Discover if an email has social activity, and get all social network profiles found

  • Discover if a mobile phone has social activity, and get all social network profiles found

  • Get web references for an email published on the internet

  • Send a custom message invitation to a non connected LinkedIn profile

  • Send a custom email to a recipient

  • Send a custom SMS to a recipient with worldwide coverage

  • Send a custom invitation message if a profile is connected or a custom message otherwise

  • Visits a profile to show interest and get profile views in return from contact, increasing your LinkedIn network

  • Send a custom private message to a connected LinkedIn profile

  • Get an email by contact's LinkedIn profile URI

  • Discover an email by company's name and prospect's full name

  • Discover an email by company's website or domain and prospect's full name

  • Get email by first name, last name, and company

  • Get parsed and validated phone

  • Discover if a CIF card number is valid

  • Check if a company is a debtor by TaxID

  • Check if the ISIN number is valid

  • Check if the SS number is valid, only for Spain

  • Identify and classify a prospecting role in detecting the right area and seniority to filter later

  • Get a company's contact, social, and technology data by domain

  • Get a company's contact, social, and technology data by email

  • Get a company's data by CIF

  • Get a company's data by DUNS

  • Get a company's data by domain

  • Get a company's data by email

  • Get a company's data by IP address

  • Get a company's data by name

  • Get a company's data by phone number

  • Get a company's data by social networks URI (LinkedIn, Twitter)

  • Get a company's name by company domain

  • Get professional data of a decision-maker by company name/domain and area

  • Discover more suitable decision-maker using search engines (Bing) by company name and area

  • Get professional emails of decision-makers by company domain and area

  • Discover up to ten decision-makers using search engines (Bing) by company name and area

  • Get a company's domain by company name

  • Get employees by company name or domain, area, seniority, and country

  • Get a company's Facebook profile by name without manually searching on Google or Facebook

  • Get geocoded company data by IP address

  • Get a company's LinkedIn profile by name without manually searching on Google or LinkedIn

  • Allows normalizing a CIF number, removing non-allowed characters

  • Get a company's phone by company domain

  • Get a company's sales data by a company's DUNS number

  • Get a company's sales data by a company's domain name

  • Get a company's sales data by a company's name

  • Get a company's sales data by a company's tax ID (CIF)

  • Get a company's Twitter profile by name without manually searching on Google or Twitter

  • Get decision maker by search engine

  • Get decision makers by search engine

  • Get Facebook URI by company's domain

  • Get GitHub URI by company's domain

  • Get Instagram URI by company's domain

  • Get LinkedIn URI by company's domain

  • Get Pinterest URI by company's domain

  • Get Twitter URI by company's domain

  • Get YouTube URI by company's domain

  • Check if crypto wallet is valid

  • Discover if a BIC number has a valid format

  • Discover if an account number has a valid format

  • Check if credit card number checksum is valid

  • Discover if an IBAN account number has a valid format

  • Discover if an ISO currency code is valid

  • Check if a TIN exists in Europe

  • Convert amount between supported currencies and an exchange date

  • Get credit card type

  • Get multiple ISO currency codes by a country name

  • Get all ISO currency by an IP address

  • Get multiple ISO currency codes by a country ISO code

  • Get ISO currency code by IP address

  • Get ISO currency code by a currency ISO code

  • Get ISO currency code by an ISO country code

  • Get ISO currency code by a country name

  • Get related European TIN in Europe

  • Get IBAN by account number of the country

  • Get to search data bank information by IBAN account number

  • Get country VAT by address

  • Get country VAT by coordinates

  • Get Swift code lookup

  • Get VAT by IP address

  • Get VAT value by country ISO code

  • Get VAT by phone number, with worldwide coverage

  • Get VAT by zip code

  • Check if a country's ISO code exists

  • Discover if the distance between two coordinates is equal to another

  • Discover if the distance (kilometers) between two coordinates is greater than the given input

  • Discover if the distance (kilometers) between two coordinates is greater or equal to the given input

  • Discover if the distance(kilometers) between two coordinates is lower than the given input

  • Check if an address exists by a partial address search

  • Check if a house number exists by a partial address search

  • Check if coordinates have a valid format

  • Discover if a zip code number prefix exists (only for Spain)

  • Discover if a zip code number has a valid format (only for Spain)

  • Get cartesian coordinates(X, Y, Z/WGS84) by Latitude and Longitude

  • Get location by parameters

  • Get multiple cities by phone prefix (only for Spain)

  • Get multiple cities by partial initial text (only for Spain)

  • Get multiple cities by zip code prefix (only for Spain)

  • Get a city from IP

  • City search by partial name (only for Spain)

  • Discover the city name by a local phone number (only for Spain)

  • Discover the city name by the zip code (only for Spain)

  • Discover the community name from a zip code (only for Spain)

  • Discover latitude and longitude coordinates of an IP address

  • Discover latitude and longitude coordinates of a postal address

  • Get multiple country names by currency ISO code

  • Get multiple countries by ISO code

  • Get multiple country names by initial name

  • Get country name by currency ISO code

  • Get country name by IP address

  • Get country name by its ISO code

  • Get country by a prefix

  • Get country name by phone number, with worldwide coverage

  • Get Alpha2 code by a country prefix or a name

  • Get decimal coordinates (degrees, minutes, and seconds) by latitude and longitude

  • Returns straight-line distance (kilometers) between two addresses

  • Returns straight-line distance (kilometers) between two GPS coordinates (latitude and longitude)

  • Returns straight-line distance (kilometers) between two IP addresses

  • Returns straight-line distance (kilometers) between two landline phones, using city and province of every phone

  • Returns straight-line distance (kilometers) between two zip codes, using city and province of every zip code

  • Get an exact address by a partial address search

  • Discover geographical, company, timezone, and reputation data by IPv4 address

  • Discover the city name, zip code, province, country, latitude, and longitude from an IPv4 or IPv6 address and geocodes it

  • Parse postal address into separated fields, getting an improved resolution

  • Discover locale data (currency, language) by IPv4 or IPv6 address

  • Discover the city name, zip code, province, or country by latitude and longitude

  • Discover the city name, zip code, province, country, latitude, and longitude from an IPv4 or IPv6 address

  • Discover the city and the province from a landline phone number (only Spain)

  • Discover location data by name

  • Discover the city and the province from a zip code number (only Spain)

  • Get the most relevant locations by name

  • Get the most relevant locations by name, category, location, and radius

  • Get multiple personal names by a prefix

  • Discover network data by IPv4 or IPv6 address

  • Allow normalizing an address by removing non allowed characters

  • Allow normalizing a city by removing non allowed characters

  • Allow normalizing a country by removing non allowed characters

  • Allow normalizing a province by removing non allowed characters

  • Allow normalizing a zip code by removing non allowed characters

  • Get normalized country

  • Parse postal address into separated fields, getting a basic resolution

  • Discover the province name from an IP address

  • Get the first province by a name prefix (only for Spain)

  • Discover the province name from a landline phone number (only for Spain)

  • Discover the province name from a zip code number (only for Spain)

  • Get a province list by a name prefix (only for Spain)

  • Get a province list by a phone prefix (only for Spain)

  • Get a province list by a zip code prefix (only for Spain)

  • Discover reputation by IPv4 or IPv6 address

  • Returns driving routing time, distance, fuel consumption, and cost between two addresses

  • Returns driving routing time, distance, fuel consumption, and cost between two GPS coordinates

  • Returns driving routing time, distance, fuel consumption, and cost between two IP addresses

  • Returns driving routing time, distance, fuel consumption, and cost between two landline phones, using city and province of every phone (only for Spain)

  • Returns driving routing time, distance, fuel consumption, and cost between two zip codes, using city and province of every zip code

  • Discover date-time data by IPv4 or IPv6 address

  • Get USNG coordinates by latitude and longitude

  • Get UTM coordinates by latitude and longitude

  • Discover the zip code if you have an IP address

  • Get the first zip code by prefix, only for Spain

  • Get multiple zip codes by prefix, with worldwide coverage

  • Get time data by coordinates

  • Get time data by postal address

  • Get QR code decoded content by an image URL

  • It allows discovering all geographical and technical EXIF metadata present in a photographic JPEG image

  • Get an encoded barcode by number and a required standard

  • Get QR code encoded by a text

  • Generate a new image by URL and text

  • Discover logo (favicon) used in a domain

  • Generate a screenshot by URL provided using Chrome browser

  • Get OCR text from image

  • Check if a domain exists

  • Check if a domain has a DNS record

  • Check if a domain has the given IP address assigned

  • Check if a domain has an MX record

  • Check if a domain has a valid SSL certificate

  • Check if a domain has a valid format

  • Check if a domain accepts all emails, existing or not

  • Check if a domain is a free service domain provider

  • Check if a domain is temporary or not

  • Discover if a computer is switched on

  • Discover if service in a port is available

  • Check if an URL contains a string or regular expression

  • Check if an URL exists

  • Check that an URL has a valid format

  • Get full SSL certificate data by a domain (or website) and monitor your certificate status

  • Get feed entries by domain

  • Get last feed entry by domain

  • Get text data from web, PDF or image allowing to filter some elements by regular expressions or field names

  • Decode URL to recover original

  • Get valid, existing, and default URL when accessing a domain using a web browser

  • Get long version of shortened URL

  • Discover device features by a user agent

  • Get the network name of an IP address

  • Get the domain record by its type

  • Encode URL to avoid problems

  • Copy file from one URL to another URL

  • Fix an IP address to the right format

  • Get the IPv4 address linked with a domain

  • Convert a number to an IP address

  • Get ISP known name of email domain name

  • Convert an IP address to numeric notation

  • Scan a host and returns the most commonly open ports

  • Obtains a list with multiple results from a website

  • Obtains the content of a website

  • Decode URL into multiple fields

  • Generate a PDF file by URL (provided using Chrome browser)

  • Get the root domain of any web address, removing non needed characters

  • Generates shareable URIs to use on social networks and email using a content URI and a text

  • Get data from the existing table in an HTML page or a PDF file

  • Discover client and server technologies used in a domain

  • Discover client and server technologies used in web pages

  • Analyze URL's health status about SSL, broken links, conflictive HTTP links with SSL, and more

  • Get website visits and rank of any domain

  • Get a domain's WHOIS data by fields

  • Get WHOIS data fields by IP address provided

  • Check if age is between two numbers

  • Check if date returns an age between 20 and 29

  • Check if date returns an age between 40 and 49

  • Check if age is greater than another

  • Check if birth date returns an age greater than 64

  • Check if birth date belongs to an adult (18 years for Spain)

  • Check if age is lower than another

  • Check if age is lower or equal than another

  • Check if ages are equal

  • Discover if a date is between two dates

  • Discover if a date is greater

  • Discover if a date is greater or equal

  • Discover if a date belongs to a leap year

  • Discover if a date is lower

  • Discover if a date is lower or equal

  • Discover if a date has a valid format

  • Discover if a gender value is valid

  • Discover if an NIE card number is valid

  • Discover if a NIF card number is valid

  • Check if a personal name exists in the INE data source (only for Spain)

  • Check if a name contains accepted characters

  • Discover if a NIF exists in the Robinson list (only for Spain)

  • Check if surname contains accepted characters

  • Check if a personal surname appears in INE data source (only for Spain)

  • Discover if a DNI card number is valid

  • Discover the age of a birth date

  • Discover the age range of a person by birth date

  • Get the difference between two dates

  • Discover the gender of a person by the email

  • Discover the gender of a person or company by the name

  • Get LinkedIn employee profile URI by business email

  • Get LinkedIn employee profile URI by first name, last name, and company

  • Discover the letter of a DNI card number

  • Get first personal name matching by prefix and gender from INE data source (only for Spain)

  • Get LinkedIn URI by email

  • Get LinkedIn URI by phone

  • Allow normalizing a DNI number by removing non allowed characters

  • Allow normalizing an NIE number by removing non allowed characters

  • Normalize name by removing non allowed characters

  • Normalize surname

  • Get parsed date-time

  • Normalize full name, fixing abbreviations, sorting if necessary, and returning first name, last name, and gender

  • Get prospect's contact data and the company's location and social data by email

  • Get contact, location, and social data by email and company name and location

  • Get personal and social data by social profile

  • Get personal data by email

  • Get personal data by first name, last name, company, and location

  • Get personal data by mobile

  • Get personal data by social network profile

  • Generate random fake data

  • Get first personal surname matching by prefix from INE data source (only for Spain)

  • Get personal surname matching by prefix from INE data source (only for Spain)

  • Get Twitter profile by first name, last name, and company

  • Get XING profile by first name, last name, and company

  • Add a contact email to a person list

  • Check if an ASIN code exists on the Amazon Marketplace

  • Check if an ASIN code has a valid format

  • Check if an EAN code exists on Amazon Marketplace

  • Check if an EAN barcode has a valid format

  • Check if an EAN barcode of 13 digits has a valid format

  • Check if an EAN barcode of 14 digits has a valid format

  • Check if an EAN barcode of 18 digits has a valid format

  • Check if an EAN barcode of 8 digits has a valid format

  • Check if a GTIN barcode has a valid format

  • Check if a GTIN barcode of 13 digits has a valid format

  • Check if a GTIN barcode of 14 digits has a valid format

  • Check if a GTIN barcode of 8 digits has a valid format

  • Check if VIN Number is valid

  • Allows checking if an ISBN book exists

  • Allows checking if an ISBN10/13 code has a valid format

  • Allows checking if an ISBN10 code has a valid format

  • Allows checking if an ISBN13 code has a valid format

  • Check if a UPC exists

  • Check if a UPC has a valid format

  • Get ASIN by EAN

  • Get a book by author's surname

  • Get all publications by category

  • Get book data by an editor's name

  • Get book or publication data by 10 or 13 digits ISBN code

  • Get book data by title

  • Get books by author's surname

  • Get all books by category

  • Get all books by editor

  • Get all books by title

  • Get EAN code by ASIN code

  • Get product data on a UPC on Amazon Marketplace

  • Get ISBN10 code by ISBN13 code

  • Get ISBN13 code by ISBN10 code

  • Get data By VIN number

  • Check if a Luhn number is valid

  • Check if a password is strong

  • Check if a UUID number is valid

  • Get blacklists for a domain

  • Get blacklists for an IP address

  • Check if a string only contains alphabets

  • Check if a string is alphanumeric

  • Check if a string is boolean

  • Check if the largest item in a list matches the provided item

  • Check if IPv4 or IPv6 address has a valid format

  • Check if IPv4 address has a valid format

  • Check if IPv6 address has a valid format

  • Check if the length of a list is between two quantities

  • Checks if the length of a list equals a specified quantity

  • Checks if the length of a list is greater than or equal to a certain amount

  • Check if the length of a list is lower than a certain amount

  • Check if the list contains a specific item

  • Check if the list ends with a specific element

  • Check if a list is sorted in ascending order

  • Check if the list starts with a specific element

  • Checks if the smallest element in a list matches the provided element

  • Check if a string contains only numbers

  • Check if a string contains a character

  • Check if a string ends with a character

  • Check if a string has no content

  • Check if a string contains random characters

  • Check if a string contains a value that matches with a regular expression

  • Check if the length of a string is between two numbers

  • Check if the length of a string is equal to a number

  • Check if the length of a string is greater than a number

  • Check if the length of a string is greater or equal to a number

  • Check if the length of a string is lower than a number

  • Check if the length of a string is lower or equal to a number

  • Check if a string starts with a character

  • Check if a string contains only lowercase characters

  • Check if a string contains only uppercase characters

  • Check if a list consists of unique elements

  • Check if the supplied values form a valid list of elements

  • Check if the number of words in a sentence is between two determined quantities

  • Check if the number of words in a sentence equals a certain amount

  • Check if the number of words in a sentence is greater than a certain amount

  • Check if the number of words in a sentence is greater than or equal to a certain amount

  • Check if the number of words in a sentence is lower than a certain amount

  • Check if the number of words present in a sentence is less than or equal to a quantity

  • Convert a string to Base64 encoded value

  • Discover banned English words in an email body or subject

  • Get field names by analyzing the field value provided

  • Get HTML code from Markdown

  • Get Markdown text from HTML

  • Get text without HTML

  • Get spin string

  • Format a string using a format pattern

  • Generate random string using a regular expression as a pattern

  • Return the largest item in a list

  • Return the smallest item in a list

  • Convert to lowercase

  • Convert a string to MD5 encoded value

  • Merge two strings

  • Normalize a string depending on the field name

  • Analyze string and return all emails, phones, zip codes, and links

  • Convert a string to an SHA encoded value

  • Analyze an English text with emojis and detect sentiment

  • Returns an ascending sorted list

  • Split a value into two parts and join them using a separator from the original string

  • Split a value into two parts using a separator from the original string

  • Get the length of a string

  • Lookup string between multiple values by fuzzy logic and regex patterns

  • Clean abuse words from a string

  • Replace the first value found in a string with another

  • Replace all values found in a string with another

  • Translate a text into any language

  • Return a single list with no repeating elements

  • Convert all letters to uppercase

  • Count total words in a text

Templates and examples

Scrape and store data from multiple website pages

View template details

Create a website screenshot and send via Telegram Channel

View template details

Monitor SSL certificate of any domain with uProc

View template details

Browse uProc integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Google: Service Account

URL: llms-txt#google:-service-account

Contents:

  • Prerequisites
  • Set up Service Account
    • Create a Google Cloud Console project
    • Enable APIs
    • Set up Google Cloud Service Account
    • Finish your n8n credential
  • Video
  • Troubleshooting
    • Service Account can't access Google Drive files
    • Enable domain-wide delegation

Using service accounts is more complex than OAuth2. Before you begin, you'll need a Google account with access to the Google Cloud Console.

Set up Service Account

There are four steps to connecting your n8n credential to a Google Service Account:

  1. Create a Google Cloud Console project.
  2. Enable APIs.
  3. Set up Google Cloud Service Account.
  4. Finish your n8n credential.

Create a Google Cloud Console project

First, create a Google Cloud Console project. If you already have a project, jump to the next section:

  1. Log in to your Google Cloud Console using your Google credentials.

  2. In the top menu, select the project dropdown in the top navigation and select New project or go directly to the New Project page.

  3. Enter a Project name and select the Location for your project.

  4. Select Create.

  5. Check the top navigation and make sure the project dropdown has your project selected. If not, select the project you just created.

Check the project dropdown in the Google Cloud top navigation

Enable APIs

With your project created, enable the APIs you'll need access to:

  1. Access your Google Cloud Console - Library. Make sure you're in the correct project.

Check the project dropdown in the Google Cloud top navigation

  2. Go to APIs & Services > Library.

  3. Search for and select the API(s) you want to enable. For example, for the Gmail node, search for and enable the Gmail API.

  4. Some integrations require other APIs or require you to request access:

Google Drive API required

The following integrations require the Google Drive API, as well as their own API:

  • Google Docs
  • Google Sheets
  • Google Slides

Google Vertex AI: In addition to the Vertex AI API, you'll also need to enable the Cloud Resource Manager API.

  5. Select ENABLE.

Set up Google Cloud Service Account

  1. Access your Google Cloud Console - Library. Make sure you're in the correct project.

Check the project dropdown in the Google Cloud top navigation

  2. Open the left navigation menu and go to APIs & Services > Credentials. Google takes you to your Credentials page.

  3. Select + Create credentials > Service account.

  4. Enter a name in Service account name and an ID in Service account ID. Refer to Creating a service account for more information.

  5. Select Create and continue.

  6. Based on your use case, you may want to Select a role and Grant users access to this service account using the corresponding sections.

  7. Select your newly created service account under the Service Accounts section. Open the Keys tab.

  8. Select Add key > Create new key.

  9. In the modal that appears, select JSON, then select CREATE. Google saves the file to your computer.

Finish your n8n credential

With the Google project and credentials fully configured, finish the n8n credential:

  1. Open the downloaded JSON file.

  2. Copy the client_email and enter it in your n8n credential as the Service Account Email.

  3. Copy the private_key. Don't include the surrounding " marks. Enter this as the Private Key in your n8n credential. The example key file after these steps shows where both fields appear.

Older versions of n8n

If you're running an n8n version older than 0.156.0, replace all instances of \n in the JSON file with new lines.

  4. Optional: Choose if you want to Impersonate a User (turned on). To use this option, you must Enable domain-wide delegation for the service account as a Google Workspace super admin.

    • Enter the Email of the user you want to impersonate.

  5. If you plan to use this credential with the HTTP Request node, turn on Set up for use in HTTP Request node. With this setting turned on, you'll need to add Scope(s) for the node. n8n prepopulates some scopes. Refer to OAuth 2.0 Scopes for Google APIs for more information.

  6. Save your credentials.
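
For reference, the downloaded key file is a JSON document. A trimmed sketch of its typical shape is shown below (the values are placeholders and the real file contains a few more fields); the client_email and private_key fields are the ones you copy into the n8n credential:

{
    "type": "service_account",
    "project_id": "example-project-id",
    "private_key_id": "0123456789abcdef",
    "private_key": "-----BEGIN PRIVATE KEY-----\nMIIEvQ...\n-----END PRIVATE KEY-----\n",
    "client_email": "my-service-account@example-project-id.iam.gserviceaccount.com",
    "client_id": "123456789012345678901",
    "token_uri": "https://oauth2.googleapis.com/token"
}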

Service Account can't access Google Drive files

No access to My Drive

Google no longer allows Service Accounts created after April 15, 2025 to access My Drive. Service Accounts now only have access to shared drives.

While not recommended, if you need to use a Service Account to access My Drive, you can do so by enabling domain-wide delegation. You can learn more in this post in the community.

A Service Account can't access Google Drive files and folders that weren't shared with its email address. To share a file or folder with the Service Account:

  1. Access your Google Cloud Console and copy your Service Account email.
  2. Access your Google Drive and go to the designated file or folder.
  3. Right-click on the file or folder and select Share.
  4. Paste your Service Account email into Add People and groups.
  5. Select Editor for read-write access or Viewer for read-only access.

Enable domain-wide delegation

To impersonate a user with a service account, you must enable domain-wide delegation for the service account.

Google recommends you avoid using domain-wide delegation, as it allows impersonation of any user (including super admins) and can pose a security risk.

To delegate domain-wide authority to a service account, you must be a super administrator for the Google Workspace domain. Then:

  1. From your Google Workspace domain's Admin console, select the hamburger menu, then select Security > Access and data control > API Controls.
  2. In the Domain wide delegation pane, select Manage Domain Wide Delegation.
  3. Select Add new.
  4. In the Client ID field, enter the service account's Client ID. To get the Client ID:
    • Open your Google Cloud Console project, then open the Service Accounts page.
    • Copy the OAuth 2 Client ID and use this as the Client ID for the Domain Wide Delegation.
  5. In the OAuth scopes field, enter a list of comma-separated scopes to grant your application access. For example, if your application needs domain-wide full access to the Google Drive API and the Google Calendar API, enter: https://www.googleapis.com/auth/drive, https://www.googleapis.com/auth/calendar.
  6. Select Authorize.

It can take from 5 minutes up to 24 hours before you can impersonate all users in your Workspace.


Cisco Umbrella credentials

URL: llms-txt#cisco-umbrella-credentials

Contents:

  • Prerequisites
  • Authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Authentication methods

  • API key

Refer to Cisco Umbrella's API documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

  • An API Key
  • A Secret: Provided when you generate an API key

Refer to the Cisco Umbrella Manage API Keys documentation for instructions on creating an Umbrella API key.


MongoDB node

URL: llms-txt#mongodb-node

Contents:

  • Operations
  • Templates and examples

Use the MongoDB node to automate work in MongoDB, and integrate MongoDB with other applications. n8n has built-in support for a wide range of MongoDB features, including aggregating, updating, finding, deleting, and getting documents, as well as creating, updating, listing, and dropping search indexes. All operations in this node use the MongoDB Node driver.

On this page, you'll find a list of operations the MongoDB node supports and links to more resources.

Refer to MongoDB credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Document
    • Aggregate documents
    • Delete documents
    • Find documents
    • Find and replace documents
    • Find and update documents
    • Insert documents
    • Update documents
  • Search Index
    • Create search indexes
    • Drop search indexes
    • List search indexes
    • Update search indexes

Templates and examples

Scrape and store data from multiple website pages

View template details

AI-Powered WhatsApp Chatbot for Text, Voice, Images, and PDF with RAG

View template details

Content Farming: AI-Powered Blog Automation for WordPress

View template details

Browse MongoDB integration templates, or search all templates


MySQL node

URL: llms-txt#mysql-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • Use query parameters
  • Common issues

Use the MySQL node to automate work in MySQL, and integrate MySQL with other applications. n8n has built-in support for a wide range of MySQL features, including executing an SQL query, as well as inserting and updating rows in a database.

On this page, you'll find a list of operations the MySQL node supports and links to more resources.

Refer to MySQL credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Delete
  • Execute SQL
  • Insert
  • Insert or Update
  • Select
  • Update

Templates and examples

Generate SQL queries from schema only - AI-powered

View template details

Generate Monthly Financial Reports with Gemini AI, SQL, and Outlook

View template details

Import CSV into MySQL

View template details

Browse MySQL integration templates, or search all templates

Refer to MySQL's Connectors and APIs documentation for more information about the service.

Refer to MySQL's SELECT statement documentation for more information on writing SQL queries.

Use query parameters

When creating a query to run on a MySQL database, you can use the Query Parameters field in the Options section to load data into the query. n8n sanitizes data in query parameters, which prevents SQL injection.

For example, you want to find a person by their email address. Given the following input data:

You can write a query like:

Then in Query Parameters, provide the field values to use. You can provide fixed values or expressions. For this example, use expressions so the node can pull the email address from each input item in turn:

For common errors or issues and suggested resolution steps, refer to Common issues.

Examples:

Example 1 (unknown):

[
    {
        "email": "alex@example.com",
        "name": "Alex",
        "age": 21 
    },
    {
        "email": "jamie@example.com",
        "name": "Jamie",
        "age": 33 
    }
]

Example 2 (unknown):

SELECT * FROM $1:name WHERE email = $2;

Example 3 (unknown):

// users is an example table name
users, {{ $json.email }}

Wait

URL: llms-txt#wait

Contents:

  • Operations
    • After Time Interval
    • At Specified Time
    • On Webhook Call
    • On Form Submitted
  • Templates and examples
  • Time-based operations

Use the Wait node to pause your workflow's execution. When the workflow pauses, it offloads the execution data to the database. When the resume condition is met, the workflow reloads the data and the execution continues.

The Wait node can Resume on the following conditions:

  • After Time Interval
  • At Specified Time
  • On Webhook Call
  • On Form Submitted

Refer to the sections below for more detailed instructions.

After Time Interval

Wait for a certain amount of time.

This parameter includes two more fields:

  • Wait Amount: Enter the amount of time to wait.
  • Wait Unit: Select the unit of measure for the Wait Amount. Choose from:
    • Seconds
    • Minutes
    • Hours
    • Days

Refer to Time-based operations for more detail on how these intervals work and the timezone used.

At Specified Time

Wait until a specific date and time to continue. Use the date and time picker to set the Date and Time.

Refer to Time-based operations for more detail on the timezone used.

On Webhook Call

This parameter enables your workflows to resume when the Wait node receives an HTTP call.

The webhook URL that resumes the execution when called is generated at runtime. The Wait node provides the $execution.resumeUrl variable so that you can reference and send the yet-to-be-generated URL wherever needed, for example to a third-party service or in an email.

When the workflow executes, the Wait node generates the resume URL and the webhook(s) in your workflow that use $execution.resumeUrl. The generated URL is unique to each execution, so your workflow can contain multiple Wait nodes; as the webhook URL is called, it resumes each Wait node sequentially.
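
For example, you could embed the resume URL in a message or email sent earlier in the same execution using an expression like the following (a minimal sketch; the surrounding sentence is just an illustration):

Your order is waiting for approval. Approve it here: {{ $execution.resumeUrl }}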

For this Resume style, set more parameters listed below.

Select if and how incoming resume webhook requests to $execution.resumeUrl should be authenticated. Options include:

  • Basic Auth: Use basic authentication. Select or enter a new Credential for Basic Auth to use.
  • Header Auth: Use header authentication. Select or enter a new Credential for Header Auth to use.
  • JWT Auth: Use JWT authentication. Select or enter a new Credential for JWT Auth to use.
  • None: Don't use authentication.

Refer to the Webhook node | Authentication documentation for more information on each auth type.

Select the HTTP method the webhook should use. Refer to the Webhook node | HTTP Method documentation for more information.

Enter the Response Code the webhook should return. You can use common codes or enter a custom code.

Set when and how to respond to the webhook from these options:

  • Immediately: Respond as soon as the node executes.
  • When Last Node Finishes: Return the response code and the data output from the last node executed in the workflow. If you select this option, also set:
    • Response Data: Select what data should be returned and what format to use. Options include:
      • All Entries: Returns all the entries of the last node in an array.
      • First Entry JSON: Return the JSON data of the first entry of the last node in a JSON object.
      • First Entry Binary: Return the binary data of the first entry of the last node in a binary file.
      • No Response Body: Return with no body.
  • Using 'Respond to Webhook' Node: Respond as defined in the Respond to Webhook node.

Set whether the workflow will automatically resume execution after a specific limit type (turned on) or not (turned off). If turned on, also set:

  • Limit Type: Select what type of limit to enforce from these options:
    • After Time Interval: Wait for a certain amount of time.
      • Enter the limit's Amount of time.
      • Select the limit's Unit of time.
    • At Specified Time: Wait until a specific date and time to resume.
      • Max Date and Time: Use the date and time picker to set the specified time the node should resume.

On Webhook Call options

  • Binary Property: Enter the name of the binary property to write the data of the received file to. This option's only relevant if binary data is received.
  • Ignore Bots: Set whether to ignore requests from bots like link previewers and web crawlers (turned on) or not (turned off).
  • IP(s) Whitelist: Enter IP addresses here to limit who (or what) can invoke the webhook URL. Enter a comma-separated list of allowed IP addresses. Access from IPs outside the whitelist throws a 403 error. If left blank, all IP addresses can invoke the webhook URL.
  • No Response Body: Set whether n8n should send a body in the response (turned off) or prevent n8n from sending a body in the response (turned on).
  • Raw Body: Set whether to return the body in a raw format like JSON or XML (turned on) or not (turned off).
  • Response Data: Enter any custom data you want to send in the response.
  • Response Headers: Send more headers in the webhook response. Refer to MDN Web Docs | Response header to learn more about response headers.
  • Webhook Suffix: Enter a suffix to append to the resume URL. This is useful for creating unique webhook URLs for each Wait node when a workflow contains multiple Wait nodes. Note that the generated $resumeWebhookUrl won't automatically include this suffix; you must manually append it to the webhook URL before exposing it.
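
For example, if you set Webhook Suffix to order-123 (a hypothetical value), you could build the full URL to expose with an expression like the one below, assuming the suffix is appended as an extra path segment:

{{ $execution.resumeUrl }}/order-123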

On Webhook Call limitations

There are some limitations to keep in mind when using On Webhook Call:

  • Partial executions of your workflow change the $resumeWebhookUrl, so be sure that the node sending this URL to your desired third-party service runs in the same execution as the Wait node.

On Form Submitted

Wait for a form submission before continuing. Set up these parameters:

Form Title

Enter the title to display at the top of the form.

Form Description

Enter a form description to display beneath the title. This description can help prompt the user on how to complete the form.

Set up each field you want to appear on your form using these parameters:

  • Field Label: Enter the field label you want to appear in the form.
  • Field Type: Select the type of field to display in the form. Choose from:
    • Date
    • Dropdown List: Enter each dropdown option in the Field Options.
      • Multiple Choice: Select whether the user can select a single dropdown option (turned off) or multiple dropdown options (turned on).
    • Number
    • Password
    • Text
    • Textarea
  • Required Field: Set whether the user must complete this field in order to submit the form (turned on) or if the user can submit the form without completing it (turned off).

Set when to respond to the form submission. Choose from:

  • Form Is Submitted: Respond as soon as this node receives the form submission.
  • Workflow Finishes: Respond when the last node of this workflow finishes.
  • Using 'Respond to Webhook' Node: Respond when the Respond to Webhook node executes.

Set whether the workflow will automatically resume execution after a specific limit type (turned on) or not (turned off).

If turned on, also set:

  • Limit Type: Select what type of limit to enforce from these options:
    • After Time Interval: Wait for a certain amount of time.
      • Enter the limit's Amount of time.
      • Select the limit's Unit of time.
    • At Specified Time: Wait until a specific date and time to resume.
      • Max Date and Time: Use the date and time picker to set the specified time the node should resume.

On Form Response options

  • Form Response: Choose how and what you want the form to Respond With from these options:
    • Form Submitted Text: The form displays whatever text is entered in Text to Show after a user fills out the form. Use this option if you want to display a confirmation message.
    • Redirect URL: The form will redirect the user to the URL to Redirect to after they fill out the form. This must be a valid URL.
  • Webhook Suffix: Enter a suffix to append to the resume URL. This is useful for creating unique webhook URLs for each Wait node when a workflow contains multiple Wait nodes. Note that the generated $resumeWebhookUrl won't automatically include this suffix; you must manually append it to the webhook URL before exposing it.

Templates and examples

Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram

View template details

Generate AI Videos with Google Veo3, Save to Google Drive and Upload to YouTube

View template details

Scrape business emails from Google Maps without the use of any third party APIs

View template details

Browse Wait integration templates, or search all templates

Time-based operations

For the time-based resume operations, note that:

  • For wait times less than 65 seconds, the workflow doesn't offload execution data to the database. Instead, the process continues to run and the execution resumes after the specified interval passes.
  • The n8n server time is always used regardless of the timezone setting. Workflow timezone settings, and any changes made to them, don't affect the Wait node interval or specified time.

Designing the Workflow

URL: llms-txt#designing-the-workflow

Now that we know what Nathan wants to automate, let's consider the steps he needs to take to achieve his goals:

  1. Get the relevant data (order id, order status, order value, employee name) from the data warehouse
  2. Filter the orders by their status (Processing or Booked)
  3. Calculate the total value of all the Booked orders
  4. Notify the team members about the Booked orders in the company's Discord channel
  5. Insert the details about the Processing orders in Airtable for follow-up
  6. Schedule this workflow to run every Monday morning

Nathan's workflow involves sending data from the company's data warehouse to two external services: Discord and Airtable.

Before that, the data has to be wrangled with general functions (conditional filtering, calculation, scheduling).

n8n provides integrations for all these steps, so Nathan's workflow in n8n would look like this:

View workflow file

You will build this workflow in eight steps:

  1. Getting data from the data warehouse
  2. Inserting data into Airtable
  3. Filtering orders
  4. Setting values for processing orders
  5. Calculating booked orders
  6. Notifying the team
  7. Scheduling the workflow
  8. Activating and examining the workflow

To build this workflow, you will need the credentials found in the email you received from n8n when you signed up for this course. If you haven't signed up already, you can do it here. If you haven't received a confirmation email after signing up, contact us.

Start building!


Question and Answer Chain node

URL: llms-txt#question-and-answer-chain-node

Contents:

  • Node parameters
    • Query
  • Templates and examples
  • Related resources
  • Common issues

Use the Question and Answer Chain node to use a vector store as a retriever.

On this page, you'll find the node parameters for the Question and Answer Chain node, and links to more resources.

Query

The question you want to ask.

Templates and examples

Ask questions about a PDF using AI

View template details

AI Crew to Automate Fundamental Stock Analysis - Q&A Workflow

View template details

Advanced AI Demo (Presented at AI Developers #14 meetup)

View template details

Browse Question and Answer Chain integration templates, or search all templates

Refer to LangChain's documentation on retrieval chains for examples of how LangChain can use a vector store as a retriever.

View n8n's Advanced AI documentation.

For common errors or issues and suggested resolution steps, refer to Common Issues.


PostBin node

URL: llms-txt#postbin-node

Contents:

  • Operations
  • Templates and examples
  • Send requests
  • Create and manage bins

PostBin is a service that helps you test API clients and webhooks. Use the PostBin node to automate work in PostBin, and integrate PostBin with other applications. n8n has built-in support for a wide range of PostBin features, including creating and deleting bins, and getting and sending requests.

On this page, you'll find a list of operations the PostBin node supports, and links to more resources.

  • Bin
    • Create
    • Get
    • Delete
  • Request
    • Get
    • Remove First
    • Send

Templates and examples

Browse PostBin integration templates, or search all templates

To send requests to a PostBin bin:

  1. Go to PostBin and follow the steps to generate a new bin. PostBin gives you a unique URL, including a bin ID.
  2. In the PostBin node, select the Request resource.
  3. Choose the type of Operation you want to perform.
  4. Enter your bin ID in Bin ID.

Create and manage bins

You can create and manage PostBin bins using the PostBin node.

  1. In Resource, select Bin.
  2. Choose an Operation. You can create, delete, or get a bin.

ServiceNow credentials

URL: llms-txt#servicenow-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a ServiceNow developer account.

Supported authentication methods

  • Basic auth
  • OAuth2

Refer to ServiceNow's API documentation for more information about the service.

To configure this credential, you'll need:

  • A User name: Enter your ServiceNow username.
  • A Password: Enter your ServiceNow password.
  • A Subdomain: The subdomain for your ServiceNow instance is in your instance URL: https://<subdomain>.service-now.com/. For example, if the full URL is https://dev99890.service-now.com, then the subdomain is dev99890.

To configure this credential, you'll need:

  • A Client ID: Generated once you register a new app.
  • A Client Secret: Generated once you register a new app.
  • A Subdomain: The subdomain for your ServiceNow instance is in your instance URL: https://<subdomain>.service-now.com/. For example, if the full URL is https://dev99890.service-now.com, then the subdomain is dev99890.

To generate your Client ID and Client Secret, register a new app in System OAuth > Application Registry > New > Create an OAuth API endpoint for external clients. Use these settings for your app:

  • Copy the Client ID and add it to your n8n credential.
  • Enter a Client Secret or leave it blank to automatically generate a random secret. Add this secret to your n8n credential.
  • Copy the n8n OAuth Redirect URL and add it as a Redirect URL.

Refer to How to setup OAuth2 authentication for RESTMessageV2 integrations for more information.


Peekalink credentials

URL: llms-txt#peekalink-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Peekalink account.

Supported authentication methods

  • API key

Refer to Peekalink's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: To get your API key, access your Peekalink dashboard and copy the key in the Your API Key section. Refer to Get your API key for more information.

Eventbrite credentials

URL: llms-txt#eventbrite-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API private key
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create an Eventbrite account.

Supported authentication methods

  • API private key
  • OAuth2

Refer to Eventbrite's API documentation for more information about the service.

Using API private key

To configure this credential, you'll need:

  • An API private key

Using OAuth2

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you need to configure OAuth2 from scratch or need more detail on what's happening in the OAuth web flow, refer to the instructions in the Eventbrite API authentication For App Partners documentation to set up OAuth.


Vero node

URL: llms-txt#vero-node

Contents:

  • Operations
  • Templates and examples

Use the Vero node to automate work in Vero and integrate Vero with other applications. n8n has built-in support for a wide range of Vero features, including creating and deleting users.

On this page, you'll find a list of operations the Vero node supports and links to more resources.

Refer to Vero credentials for guidance on setting up authentication.

  • User
    • Create or update a user profile
    • Change a user's identifier
    • Unsubscribe a user
    • Resubscribe a user
    • Delete a user
    • Add a tag to a user's profile
    • Remove a tag from a user's profile
  • Event
    • Track an event for a specific customer

Templates and examples

Browse Vero integration templates, or search all templates


Philips Hue node

URL: llms-txt#philips-hue-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Philips Hue node to automate work in Philips Hue, and integrate Philips Hue with other applications. n8n has built-in support for a wide range of Philips Hue features, including deleting, retrieving, and updating lights.

On this page, you'll find a list of operations the Philips Hue node supports and links to more resources.

Refer to Philips Hue credentials for guidance on setting up authentication.

  • Light
    • Delete a light
    • Retrieve a light
    • Retrieve all lights
    • Update a light

Templates and examples

Turn on a light and set its brightness

View template details

Google Calendar to Slack Status and Philips Hue

View template details

🛠️ Philips Hue Tool MCP Server 💪 all 4 operations

View template details

Browse Philips Hue integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Built-in methods and variables

URL: llms-txt#built-in-methods-and-variables

n8n provides built-in methods and variables for working with data and accessing n8n data. This section provides a reference of available methods and variables for use in expressions, with a short description.

Availability in the expressions editor and the Code node

Some methods and variables aren't available in the Code node. These aren't in the documentation.

All data transformation functions are only available in the expressions editor.

The Cookbook contains examples for some common tasks, including some Code node only functions.
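
As a small illustration (the email field name is hypothetical), an expression can combine built-in variables with data transformation methods like this:

Processed {{ $json.email.toLowerCase() }} at {{ $now }}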


Anthropic Chat Model node

URL: llms-txt#anthropic-chat-model-node

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Related resources

Use the Anthropic Chat Model node to use Anthropic's Claude family of chat models with conversational agents.

On this page, you'll find the node parameters for the Anthropic Chat Model node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model that generates the completion. Choose from:
    • Claude
    • Claude Instant

Learn more in the Anthropic model documentation.

  • Maximum Number of Tokens: Enter the maximum number of tokens used, which sets the completion length.
  • Sampling Temperature: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.
  • Top K: Enter the number of token choices the model uses to generate the next token.
  • Top P: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.

Templates and examples

Notion AI Assistant Generator

View template details

Gmail AI Email Manager

View template details

🤖 AI content generation for Auto Service 🚘 Automate your social media📲!

View template details

Browse Anthropic Chat Model integration templates, or search all templates

Refer to LangChain's Anthropic documentation for more information about the service.

View n8n's Advanced AI documentation.


SyncroMSP credentials

URL: llms-txt#syncromsp-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a SyncroMSP account.

Supported authentication methods

  • API key

Refer to SyncroMSP's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: Called an API token in SyncroMSP. To create an API token, go to your user menu > Profile/Password > API Tokens and select the option to Create New Token. Select Custom Permissions to enter a name for your token and adjust the permissions to match your requirements.
  • Your Subdomain: Enter your SyncroMSP subdomain. This is visible in the URL of your SyncroMSP, located between https:// and .syncromsp.com. If your full URL is https://n8n-instance.syncromsp.com, you'd enter n8n-instance as the subdomain.

Refer to API Tokens for more information on creating new tokens.


Code standards

URL: llms-txt#code-standards

Contents:

  • Use the linter
  • Use the starter
  • Write in TypeScript
  • Detailed guidelines for writing a node
    • Resources and operations
    • Reuse internal parameter names
  • Detailed guidelines for writing a programmatic-style node
    • Don't change incoming data
    • Use the built in request library

Following defined code standards when building your node makes your code more readable and maintainable, and helps avoid errors. This document provides guidance on good code practices for node building. It focuses on code details. For UI standards and UX guidance, refer to Node UI design.

Use the linter

The n8n node linter provides automatic checking for many of the node-building standards. You should ensure your node passes the linter's checks before publishing it. Refer to the n8n node linter documentation for more information.

Use the starter

The n8n node starter project includes a recommended setup, dependencies (including the linter), and examples to help you get started. Begin new projects with the starter.

Write in TypeScript

All n8n code is TypeScript. Writing your nodes in TypeScript can speed up development and reduce bugs.

Detailed guidelines for writing a node

These guidelines apply to any node you build.

Resources and operations

If your node can perform several operations, call the parameter that sets the operation Operation. If your node can do these operations on more than one resource, create a Resource parameter. The following code sample shows a basic resource and operations setup:

Reuse internal parameter names

All resource and operation fields in an n8n node have two settings: a display name, set using the name parameter, and an internal name, set using the value parameter. Reusing the internal name for fields allows n8n to preserve user-entered data if a user switches operations.

For example: you're building a node with a resource named 'Order'. This resource has several operations, including Get, Edit, and Delete. Each of these operations uses an order ID to perform the operation on the specified order. You need to display an ID field for the user. This field has a display label, and an internal name. By using the same internal name (set in value) for the operation ID field on each resource, a user can enter the ID with the Get operation selected, and not lose it if they switch to Edit.

When reusing the internal name, you must ensure that only one field is visible to the user at a time. You can control this using displayOptions.
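
As a brief sketch (the order resource, its operations, and the orderId name are hypothetical, not taken from an existing node), a reusable ID field might look like this:

import type { INodeProperties } from 'n8n-workflow';

// One 'orderId' field shared by the Get, Edit, and Delete operations of a
// hypothetical 'order' resource. Because the internal name ('orderId') is the
// same for every operation, n8n preserves the value a user entered if they
// switch operations, and displayOptions ensures the field only shows when one
// of those operations is selected.
const orderIdField: INodeProperties = {
    displayName: 'Order ID',
    name: 'orderId',
    type: 'string',
    default: '',
    description: 'ID of the order to operate on',
    displayOptions: {
        show: {
            resource: ['order'],
            operation: ['get', 'edit', 'delete'],
        },
    },
};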

Detailed guidelines for writing a programmatic-style node

These guidelines apply when building nodes using the programmatic node-building style. They aren't relevant when using the declarative style. For more information on different node-building styles, refer to Choose your node building approach.

Don't change incoming data

Never change the incoming data a node receives (data accessible with this.getInputData()) as all nodes share it. If you need to add, change, or delete data, clone the incoming data and return the new data. If you don't do this, sibling nodes that execute after the current one will operate on the altered data and process incorrect data.

It's not necessary to always clone all the data. For example, if a node changes the binary data but not the JSON data, you can create a new item that reuses the reference to the JSON item.
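
For example, a minimal execute() sketch that returns new items instead of mutating the input might look like this (the added processedAt field is only an illustration):

import type { IExecuteFunctions, INodeExecutionData } from 'n8n-workflow';

// A sketch of an execute() body that leaves the incoming items untouched.
// It builds new items that copy the JSON data and reuse the binary reference,
// so sibling nodes that run after this one still receive the original data.
export async function execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
    const items = this.getInputData();

    const newItems: INodeExecutionData[] = items.map((item, index) => ({
        // Shallow-copy the JSON instead of editing item.json in place
        json: { ...item.json, processedAt: new Date().toISOString() },
        // The binary data isn't changed, so reusing the reference is fine
        binary: item.binary,
        pairedItem: { item: index },
    }));

    return [newItems];
}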

Use the built in request library

Some third-party services have their own libraries on npm, which make it easier to create an integration. The problem with these packages is that you add another dependency (plus all the dependencies of the dependencies). This adds more and more code, which has to be loaded and can introduce security vulnerabilities, bugs, and so on. Instead, use the built-in module:

This uses the npm package Axios.

Refer to HTTP helpers for more information, and for migration instructions for the removed this.helpers.request.

Examples:

Example 1 (unknown):

import type { INodeType, INodeTypeDescription } from 'n8n-workflow';

export class ExampleNode implements INodeType {
    description: INodeTypeDescription = {
        displayName: 'Example Node',
        ...
        properties: [
            {
                displayName: 'Resource',
                name: 'resource',
                type: 'options',
                options: [
                    {
                        name: 'Resource One',
                        value: 'resourceOne'
                    },
                    {
                        name: 'Resource Two',
                        value: 'resourceTwo'
                    }
                ],
                default: 'resourceOne'
            },
            {
                displayName: 'Operation',
                name: 'operation',
                type: 'options',
                // Only show these operations for Resource One
                displayOptions: {
                    show: {
                        resource: [
                            'resourceOne'
                        ]
                    }
                },
                options: [
                    {
                        name: 'Create',
                        value: 'create',
                        description: 'Create an instance of Resource One'
                    }
                ]
            }
        ]
    }
}

Example 2 (unknown):

// If no auth needed
const response = await this.helpers.httpRequest(options);

// If auth needed
const response = await this.helpers.httpRequestWithAuthentication.call(
	this, 
	'credentialTypeName', // For example: pipedriveApi
	options,
);

Risks when using community nodes

URL: llms-txt#risks-when-using-community-nodes

Contents:

  • Report bad community nodes
  • Disable community nodes

Installing community nodes from npm means you are installing unverified code from a public source into your n8n instance. This has some risks.

  • System security: community nodes have full access to the machine that n8n runs on, and can do anything, including malicious actions.
  • Data security: any community node that you use has access to data in your workflows.
  • Breaking changes: node developers may introduce breaking changes in new versions of their nodes. A breaking change is an update that breaks previous functionality. Depending on the node versioning approach that a node developer chooses, upgrading to a version with a breaking change could cause all workflows using the node to break. Be careful when upgrading your nodes.

n8n vets verified community nodes

In addition to publicly available community nodes from npm, n8n inspects some nodes and makes them available as verified community nodes inside the nodes panel. These nodes have to meet a set of data and system security requirements for approval.

Report bad community nodes

You can report bad community nodes to [security@n8n.io](mailto:security@n8n.io).

Disable community nodes

If you are self-hosting n8n, you can disable community nodes by setting N8N_COMMUNITY_PACKAGES_ENABLED to false. On n8n cloud, visit the Cloud Admin Panel and disable community nodes from there. See troubleshooting for more information.


Install private nodes

URL: llms-txt#install-private-nodes

Contents:

  • Install your node in a Docker n8n instance
  • Install your node in a global n8n instance

You can build your own nodes and install them in your n8n instance without publishing them on npm. This is useful for nodes that you create for internal use only at your company.

Install your node in a Docker n8n instance

If you're running n8n using Docker, you need to create a Docker image with the node installed in n8n.

  1. Create a Dockerfile and paste the code from this Dockerfile.

Your Dockerfile should look like this:

  2. Compile your custom node code (npm run build if you are using nodes starter). Copy the node and credential folders from within the dist folder into your container's ~/.n8n/custom/ directory. This makes them available to Docker.

  3. Download the docker-entrypoint.sh file, and place it in the same directory as your Dockerfile.

  4. Build your Docker image:

You can now use your node in Docker.

Install your node in a global n8n instance

If you've installed n8n globally, make sure that you install your node inside n8n. n8n will find the module and load it automatically.
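
The exact commands depend on how and where n8n is installed, but as a rough, hedged sketch (the package path and name are placeholders, and this isn't the only way to do it), you could build your package and then install it into the globally installed n8n module so n8n can load it:

# Build your node package first (for example, npm run build in your node project),
# then install it into the global n8n module directory.
cd "$(npm root -g)/n8n"
npm install /path/to/your/n8n-nodes-my-node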

Examples:

Example 1 (unknown):

FROM node:16-alpine

   ARG N8N_VERSION

   RUN if [ -z "$N8N_VERSION" ] ; then echo "The N8N_VERSION argument is missing!" ; exit 1; fi

   # Update everything and install needed dependencies
   RUN apk add --update graphicsmagick tzdata git tini su-exec

   # Set a custom user to not have n8n run as root
   USER root

   # Install n8n and the packages it needs to build it correctly.
   RUN apk --update add --virtual build-dependencies python3 build-base ca-certificates && \
   	npm config set python "$(which python3)" && \
   	npm_config_user=root npm install -g full-icu n8n@${N8N_VERSION} && \
   	apk del build-dependencies \
   	&& rm -rf /root /tmp/* /var/cache/apk/* && mkdir /root;


   # Install fonts
   RUN apk --no-cache add --virtual fonts msttcorefonts-installer fontconfig && \
   	update-ms-fonts && \
   	fc-cache -f && \
   	apk del fonts && \
   	find  /usr/share/fonts/truetype/msttcorefonts/ -type l -exec unlink {} \; \
   	&& rm -rf /root /tmp/* /var/cache/apk/* && mkdir /root

   ENV NODE_ICU_DATA /usr/local/lib/node_modules/full-icu

   WORKDIR /data

   COPY docker-entrypoint.sh /docker-entrypoint.sh
   ENTRYPOINT ["tini", "--", "/docker-entrypoint.sh"]

   EXPOSE 5678/tcp

Example 2 (unknown):

# Replace <n8n-version-number> with the n8n release version number. 
   # For example, N8N_VERSION=0.177.0
   docker build --build-arg N8N_VERSION=<n8n-version-number> --tag=customizedn8n .

AI Agent node

URL: llms-txt#ai-agent-node

Contents:

  • Templates and examples
  • Related resources
  • Common issues

An AI agent is an autonomous system that receives data, makes rational decisions, and acts within its environment to achieve specific goals. The AI agent's environment is everything the agent can access that isn't the agent itself. This agent uses external tools and APIs to perform actions and retrieve information. It can understand the capabilities of different tools and determine which tool to use depending on the task.

You must connect at least one tool sub-node to an AI Agent node.

Prior to version 1.82.0, the AI Agent had a setting for working as different agent types. This has now been removed and all AI Agent nodes work as a Tools Agent which was the recommended and most frequently used setting. If you're working with older versions of the AI Agent in workflows or templates, as long as they were set to 'Tools Agent', they should continue to behave as intended with the updated node.

Templates and examples

View template details

Building Your First WhatsApp Chatbot

View template details

Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram

View template details

Browse AI Agent integration templates, or search all templates

Refer to LangChain's documentation on agents for more information about the service.

New to AI Agents? Read the n8n blog introduction to AI agents.

View n8n's Advanced AI documentation.

For common errors or issues and suggested resolution steps, refer to Common Issues.


Node file structure

URL: llms-txt#node-file-structure

Contents:

  • Required files and directories
  • Modular structure
  • Versioning
  • Decide how many nodes to include in a package
  • A best-practice example for programmatic nodes

Following best practices and standards in your node structure makes your node easier to maintain. It's helpful if other people need to work with the code.

The file and directory structure of your node depends on:

  • Your node's complexity.
  • Whether you use node versioning.
  • How many nodes you include in the npm package.

n8n recommends using the n8n-node tool to create the expected node file structure. You can customize the generated scaffolding as required to meet more complex needs.

Required files and directories

Your node must include:

  • A package.json file at the root of the project. Every npm module requires this.
  • A nodes directory, containing the code for your node:
    • This directory must contain the base file, in the format <node-name>.node.ts. For example, MyNode.node.ts.
    • n8n recommends including a codex file, containing metadata for your node. The codex filename must match the node base filename. For example, given a node base file named MyNode.node.ts, the codex name is MyNode.node.json.
    • The nodes directory can contain other files and subdirectories, including directories for versions, and node code split across more than one file to create a modular structure.
  • A credentials directory, containing your credentials code. This code lives in a single credentials file. The filename format is <node-name>.credentials.ts. For example, MyNode.credentials.ts.

You can choose whether to place all your node's functionality in one file, or split it out into a base file and other modules, which the base file then imports. Unless your node is very simple, it's a best practice to split it out.

A basic pattern is to separate out operations. Refer to the HttpBin starter node for an example of this.

For more complex nodes, n8n recommends a directory structure. Refer to the Airtable node or Microsoft Outlook node as examples, or see the sketch after the following list.

  • actions: a directory containing sub-directories that represent resources.
    • Each sub-directory should contain two types of files:
    • An index file with resource description (named either <resourceName>.resource.ts or index.ts)
    • Files for operations <operationName>.operation.ts. These files should have two exports: description of the operation and an execute function.
  • methods: an optional directory for dynamic parameters' functions.
  • transport: a directory containing the communication implementation.
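
As a rough sketch (names such as MyNode and the order resource are placeholders, not a required convention), a modular node laid out this way might look like:

nodes/
  MyNode/
    MyNode.node.ts
    MyNode.node.json
    actions/
      order/
        index.ts
        get.operation.ts
        create.operation.ts
    methods/
      loadOptions.ts
    transport/
      index.ts
credentials/
  MyNodeApi.credentials.ts
package.json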

If your node has more than one version, and you're using full versioning, this makes the file structure more complex. You need a directory for each version, along with a base file that sets the default version. Refer to Node versioning for more information on working with versions, including types of versioning.

Decide how many nodes to include in a package

There are two possible setups when building a node:

  • One node in one npm package.
  • More than one node in a single npm package.

n8n supports both approaches. If you include more than one node, each node should have its own directory in the nodes directory.

A best-practice example for programmatic nodes

n8n's built-in Airtable node implements a modular structure and versioning, following recommended patterns.


OpenAI Functions Agent node

URL: llms-txt#openai-functions-agent-node

Contents:

  • Node parameters
    • Prompt
    • Require Specific Output Format
  • Node options
    • System Message
    • Max Iterations
    • Return Intermediate Steps
  • Templates and examples
  • Common issues

Use the OpenAI Functions Agent node to use an OpenAI functions model. These are models that detect when a function should be called and respond with the inputs that should be passed to the function.

Refer to AI Agent for more information on the AI Agent node itself.

You can use this agent with the Chat Trigger node. Attach a memory sub-node so that users can have an ongoing conversation with multiple queries. Memory doesn't persist between sessions.

OpenAI Chat Model required

You must use the OpenAI Chat Model with this agent.

Configure the OpenAI Functions Agent using the following parameters.

Select how you want the node to construct the prompt (also known as the user's query or input from the chat).

  • Take from previous node automatically: If you select this option, the node expects an input from a previous node called chatInput.
  • Define below: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the Prompt (User Message) field.

Require Specific Output Format

This parameter controls whether you want the node to require a specific output format. When turned on, n8n prompts you to connect one of these output parsers to the node:

Refine the OpenAI Functions Agent node's behavior using these options:

System Message

If you'd like to send a message to the agent before the conversation starts, enter the message you'd like to send.

Use this option to guide the agent's decision-making.

Max Iterations

Enter the number of times the model should run to try and generate a good answer from the user's prompt.

Return Intermediate Steps

Select whether to include intermediate steps the agent took in the final output (turned on) or not (turned off).

This could be useful for further refining the agent's behavior based on the steps it took.

Templates and examples

Refer to the main AI Agent node's Templates and examples section.

For common questions or issues and suggested solutions, refer to Common issues.


Video courses

URL: llms-txt#video-courses

Contents:

  • Beginner
  • Advanced

n8n provides two video courses on YouTube.

For support, join the Forum.

The Beginner course covers the basics of n8n:

The Advanced course covers more complex workflows, more technical nodes, and enterprise features:


Building a Mini-workflow

URL: llms-txt#building-a-mini-workflow

Contents:

    1. Add a Manual Trigger node
    2. Add the Hacker News node
    3. Configure the Hacker News node
    • Parameters
    • Settings
    4. Execute the node
    • Node executions
    5. Save the workflow
  • Summary

In this lesson, you will build a small workflow that gets 10 articles about automation from Hacker News. The process consists of five steps:

  1. Add a Manual Trigger node
  2. Add the Hacker News node
  3. Configure the Hacker News node
  4. Execute the node
  5. Save the workflow

The finished workflow will look like this:

View workflow file

1. Add a Manual Trigger node

Open the nodes panel (reminder: you can open this by selecting the + icon in the top right corner of the canvas or selecting Tab on your keyboard).

  1. Search for the Manual Trigger node.
  2. Select it when it appears in the search.

This will add the Manual Trigger node to your canvas, which allows you to run the workflow at any time by selecting the Execute workflow button.

For faster workflow creation, you can skip this step in the future. Adding any other node without a trigger will add the Manual Trigger node to the workflow.

In a real-world scenario, you would probably want to set up a schedule or some other trigger to run the workflow.

2. Add the Hacker News node

Select the + icon to the right of the Manual Trigger node to open the nodes panel.

  1. Search for the Hacker News node.
  2. Select it when it appears in the search.
  3. In the Actions section, select Get many items.

n8n adds the node to your canvas and the node window opens to display its configuration details.

3. Configure the Hacker News node

When you add a new node to the Editor UI, the node is automatically activated. The node details will open in a window with several options:

  • Parameters: Adjust parameters to refine and control the node's functionality.
  • Settings: Adjust settings to control the node's design and executions.
  • Docs: Open the n8n documentation for this node in a new window.

Parameters vs. Settings

  • Parameters are different for each node, depending on its functionality.
  • Settings are the same for all nodes.

We need to configure several parameters for the Hacker News node to make it work:

  • Resource: All
    This resource selects all data records (articles).
  • Operation: Get Many
    This operation fetches all the selected articles.
  • Limit: 10
    This parameter sets a limit to the number of results the Get Many operation returns.
  • Additional Fields > Add Field > Keyword: automation
    Additional fields are options that you can add to certain nodes to make your request more specific or filter the results. For this example, we want to get only articles that include the keyword "automation."

The configuration of the parameters for the Hacker News node should now look like this:

Hacker News node parameters

The Settings section includes several options for node design and executions. In this case, we'll configure only the final two settings, which set the node's appearance in the Editor UI canvas.

In the Hacker News node Settings, edit:

  • Notes: Get the 10 latest articles.

It's often helpful to add a short description in the node about what it does. This is helpful for complex or shared workflows in particular!

  • Display note in flow?: toggle to true
    This option will display the Note under the node in the canvas.

The configuration of the settings for the Hacker News node should now look like this:

Hacker News node settings

You can rename the node with a name that's more descriptive for your use case. There are three ways to do this:

  • Select the node you want to rename and at the same time press the F2 key on your keyboard.
  • Double-click on the node to open the node window. Click on the name of the node in the top left corner of the window, rename it as you like, then click Rename to save the node under the new name.
  • Right-click on the node and select the Rename option.

Renaming a node from the keyboard

To find the original node name (the type of node), open the node window and select Settings. The bottom of the page contains the node type and version.

4. Execute the node

Select the Execute step button in the node details window. You should see 10 results in the Output Table view.

Results in Table view for the Hacker News node

A node execution represents a run of that node to retrieve or process the specified data.

If a node executes successfully, a small green checkmark appears on top of the node in the canvas.

Successfully executed workflow

If there are no problems with the parameters and everything works fine, the requested data displays in the node window in Table, JSON, and Schema format. You can switch between these views by selecting the one you want from the Table | JSON | Schema button at the top of the node window.

The Table view is the default. It displays the requested data in a table, where the rows are the records and the columns are the available attributes of those records.

Here's our Hacker News output in JSON view:

Results in JSON view for the Hacker News node

The node window displays more information about the node execution:

  • Next to the Output title, notice a small icon (this will be a green checkmark if the node execution succeeded). Beside it, there is an info icon. If you hover on it, you'll get two more pieces of information that can provide insights into the performance of each individual node in a workflow:
    • Start Time: When the node execution started.
    • Execution Time: How long it took for the node to return the results from the moment it started executing.
  • Just below the Output title, you'll notice another piece of information: 10 items. This field displays the number of items (records) that the node request returned. In this example, it's expected to be 10, since this is the limit we set in step 3. But if you don't set a limit, it's useful to see how many records are actually returned.

A red warning icon on a node means that the node has errors. This might happen if the node credentials are missing or incorrect or the node parameters aren't configured correctly.

5. Save the workflow

Once you're finished editing the node, select Back to canvas to return to the main canvas.

By default, your workflow is automatically saved as "My workflow."

For this lesson, rename the workflow to be "Hacker News workflow."

You can rename a workflow by clicking on the workflow's name at the top of the Editor UI.

Once you've renamed the workflow, be sure to save it.

There are two ways in which you can save a workflow:

  • From the Canvas in the Editor UI, press Ctrl + S or Cmd + S on your keyboard.
  • Select the Save button in the top right corner of the Editor UI. You may need to leave the node editor first by clicking outside the dialog.

If you see a grey Saved text instead of the Save button, your workflow was automatically saved.

Congratulations, you just built your first workflow! In this lesson, you learned how to use actions in app nodes, configure their parameters and settings, and save and execute your workflow.

In the next lesson, you'll meet your new client, Nathan, who needs to automate his sales reporting work. You will build a more complex workflow for his use case, helping him become more productive at work.


8. Activating and Examining the Workflow

URL: llms-txt#8.-activating-and-examining-the-workflow

Contents:

  • Workflow Executions
  • Workflow Settings
  • What's next?

In this step of the workflow, you will learn how to activate your workflow and change the default workflow settings.

Activating a workflow means that it will run automatically every time a trigger node receives input or meets a condition. By default, all newly created workflows start deactivated.

To activate your workflow, switch the Inactive toggle in the top navigation of the Editor UI to Active. Nathan's workflow will now run automatically every Monday at 9 AM:

Workflow Executions

An execution represents a completed run of a workflow, from the first to the last node. n8n logs workflow executions, allowing you to see if the workflow succeeded or not. The execution log is useful for debugging your workflow and seeing at what stage it runs into issues.

To view the executions for a specific workflow, you can switch to the Executions tab when the workflow is open on the canvas. Use the Editor tab to swap back to the node editor.

To see the execution log for the entire n8n instance, in your Editor UI, select Overview and then select the Executions tab in the main panel.

The Executions window displays a table with the following information:

  • Name: The name of the workflow
  • Started At: The date and time when the workflow started
  • Status: The status of the workflow (Waiting, Running, Succeeded, Cancelled, or Failed) and the amount of time it took the workflow to execute
  • Execution ID: The ID of this workflow execution

Workflow execution status

You can filter the displayed Executions by workflow and by status (Any Status, Failed, Cancelled, Running, Success, or Waiting). The information displayed here depends on which executions you configure to save in the Workflow Settings.

You can customize your workflows and executions, or overwrite some global default settings in Workflow Settings.

Access these settings by selecting the three dots in the upper right corner of the Editor UI when the workflow is open on the canvas, then select Settings.

In the Workflow Settings window you can configure the following settings:

  • Execution Order: Choose the execution logic for multi-branch workflows. You should leave this set to v1 if you don't have workflows that rely on the legacy execution ordering.
  • Error Workflow: A workflow to run if the execution of the current workflow fails.
  • This workflow can be called by: Workflows allowed to call this workflow using the Execute Sub-workflow node.
  • Timezone: The timezone to use in the current workflow. If not set, the global timezone. In particular, this setting is important for the Schedule Trigger node, as you want to make sure that the workflow gets executed at the right time.
  • Save failed production executions: If n8n should save the Execution data of the workflow when it fails. Default is to save.
  • Save successful production executions: If n8n should save the Execution data of the workflow when it succeeds. Default is to save.
  • Save manual executions: If n8n should save executions started from the Editor UI. Default is to save.
  • Save execution progress: If n8n should save the execution data of each node. If set to Save, you can resume the workflow from where it stopped in case of an error, though keep in mind that this might make the execution slower. Default is to not save.
  • Timeout Workflow: Whether to cancel a workflow execution after a specific period of time. Default is to not timeout.

You 👩‍🔧: That was it! Now you have a 7-node workflow that will run automatically every Monday morning. You don't have to worry about remembering to wrangle the data. Instead, you can start your week with more meaningful or exciting work.

Nathan 🙋: This workflow is incredibly helpful, thank you! Now, what's next for you?

You 👩‍🔧: I'd like to build more workflows, share them with others, and use some workflows built by other people.


Pushcut Trigger node

URL: llms-txt#pushcut-trigger-node

Contents:

  • Configure a Pushcut action

Pushcut is an app for iOS that lets you create smart notifications to kick off shortcuts, URLs, and online automation.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Pushcut Trigger integrations page.

Configure a Pushcut action

Follow these steps to configure your Pushcut Trigger node with your Pushcut app.

  1. In your Pushcut app, select a notification from the Notifications screen.
  2. Select the Add Action button.
  3. Enter an action name in the Label field.
  4. Select the Server tab.
  5. Select the Integration tab.
  6. Select Integration Trigger.
  7. In n8n, enter a name for the action and select Execute step.
  8. Select this action under the Select Integration Trigger screen in your Pushcut app.
  9. Select Done in the top right to save the action.

Data collection

URL: llms-txt#data-collection

Contents:

  • Collected data
  • How collection works
  • Opting out of data collection
    • Opt out of telemetry events
    • Opt out of checking for new versions of n8n
  • Disable all connection to n8n servers
  • Related resources

n8n collects some anonymous data from self-hosted n8n installations. Use the instructions below to opt out of data telemetry collection.

Refer to Privacy | Data collection in self-hosted n8n for details on the data n8n collects.

How collection works

Your n8n instance sends most data to n8n as the events that generate it occur. Workflow execution counts and an instance pulse are sent periodically (every 6 hours). Most of this data falls under n8n's telemetry collection.

Opting out of data collection

n8n enables telemetry collection by default. To disable it, configure the following environment variables.

Opt out of telemetry events

To opt out of telemetry events, set the N8N_DIAGNOSTICS_ENABLED environment variable to false, for example:

Opt out of checking for new versions of n8n

To opt out of checking for new versions of n8n, set the N8N_VERSION_NOTIFICATIONS_ENABLED environment variable to false, for example:

Disable all connection to n8n servers

If you want to fully prevent all communication with n8n's servers, refer to Isolate n8n.

Refer to Deployment environment variables for more information on these environment variables.

Refer to Configuration for more information on setting environment variables.

Examples:

Example 1 (unknown):

export N8N_DIAGNOSTICS_ENABLED=false

Example 2 (unknown):

export N8N_VERSION_NOTIFICATIONS_ENABLED=false

Microsoft Entra ID credentials

URL: llms-txt#microsoft-entra-id-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2
    • Register an application
    • Generate a client secret
  • Setting custom scopes
  • Common issues
    • Need admin approval

You can use these credentials to authenticate the following nodes:

  • Microsoft Entra ID

  • Create a Microsoft Entra ID account or subscription.

  • If the user account is managed by a corporate Microsoft Entra account, make sure the administrator has enabled the option “User can consent to apps accessing company data on their behalf” for this user (see the Microsoft Entra documentation).

Microsoft includes an Entra ID free plan when you create a Microsoft Azure account.

Supported authentication methods

Refer to Microsoft Entra ID's documentation for more information about the service.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

For self-hosted users, there are two main steps to configure OAuth2 from scratch:

  1. Register an application with the Microsoft Identity Platform.
  2. Generate a client secret for that application.

Follow the detailed instructions for each step below. For more detail on the Microsoft OAuth2 web flow, refer to Microsoft authentication and authorization basics.

Register an application

Register an application with the Microsoft Identity Platform:

  1. Open the Microsoft Application Registration Portal.
  2. Select Register an application.
  3. Enter a Name for your app.
  4. In Supported account types, select Accounts in any organizational directory (Any Azure AD directory - Multi-tenant) and personal Microsoft accounts (for example, Skype, Xbox).
  5. In Register an application:
    1. Copy the OAuth Callback URL from your n8n credential.
    2. Paste it into the Redirect URI (optional) field.
    3. Select Select a platform > Web.
  6. Select Register to finish creating your application.
  7. Copy the Application (client) ID and paste it into n8n as the Client ID.

Refer to Register an application with the Microsoft Identity Platform for more information.

Generate a client secret

With your application created, generate a client secret for it:

  1. On your Microsoft application page, select Certificates & secrets in the left navigation.
  2. In Client secrets, select + New client secret.
  3. Enter a Description for your client secret, such as n8n credential.
  4. Select Add.
  5. Copy the Secret in the Value column.
  6. Paste it into n8n as the Client Secret.
  7. Select Connect my account in n8n to finish setting up the connection.
  8. Log in to your Microsoft account and allow the app to access your info.

Refer to Microsoft's Add credentials for more information on adding a client secret.

Setting custom scopes

Microsoft Entra ID credentials use the following scopes by default:

To select different scopes for your credentials, enable the Custom Scopes slider and edit the Enabled Scopes list. Keep in mind that some features may not work as expected with more restrictive scopes.

Here are the known common errors and issues with Microsoft Entra credentials.

Need admin approval

When attempting to add credentials for a Microsoft 365 or Microsoft Entra account, users may see a message that this action requires admin approval.

This message appears when the account attempting to grant permissions for the credential is managed by a Microsoft Entra organization. To issue the credential, the administrator account needs to grant permission to the user (or "tenant") for that application.

The procedure for this is covered in the Microsoft Entra documentation.


CLI commands for n8n

URL: llms-txt#cli-commands-for-n8n

Contents:

  • Running CLI commands
  • Start a workflow
  • Change the active status of a workflow
  • Export entities
  • Export workflows and credentials
    • Workflows
    • Credentials
  • Import entities
  • Import workflows and credentials
    • Workflows

n8n includes a CLI (command line interface), allowing you to perform actions using the CLI rather than the n8n editor. These include starting workflows, and exporting and importing workflows and credentials.

Running CLI commands

You can use CLI commands with self-hosted n8n. Depending on how you choose to install n8n, there are differences in how to run the commands:

  • npm: the n8n command is directly available. The documentation uses this in the examples below.

  • Docker: the n8n command is available within your Docker container:

You can start workflows directly using the CLI.

Execute a saved workflow by its ID:

Change the active status of a workflow

You can change the active status of a workflow using the CLI.

These commands operate on your n8n database. If you execute them while n8n is running, the changes don't take effect until you restart n8n.

Set the active status of a workflow by its ID to false:

Set the active status of a workflow by its ID to true:

Set the active status to false for all the workflows:

Set the active status to true for all the workflows:
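
The bulk variants described above would look like the following minimal sketch (the single-workflow forms appear in the Examples section at the end of this page):

```bash
# Deactivate all workflows
n8n update:workflow --all --active=false

# Activate all workflows
n8n update:workflow --all --active=true
```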

You can export your database entities from n8n using the CLI. This tooling allows you to export all entity types from one database type, such as SQLite, and import them into another database type, such as Postgres.

| Flag | Description |
| --- | --- |
| --help | Help prompt. |
| --outputDir | Output directory path. |
| --includeExecutionHistoryDataTables | Include execution history data tables. These are excluded by default because they can be very large. |
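
As a hedged sketch of an export:entities invocation, assuming the flags take the same --flag=value form as the other commands on this page (the output directory is a placeholder):

```bash
# Export all entities, including the large execution history tables
n8n export:entities --outputDir=./entities-backup --includeExecutionHistoryDataTables
```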

Export workflows and credentials

You can export your workflows and credentials from n8n using the CLI.

| Flag | Description |
| --- | --- |
| --help | Help prompt. |
| --all | Exports all workflows/credentials. |
| --backup | Sets --all --pretty --separate for backups. You can optionally set --output. |
| --id | The ID of the workflow to export. |
| --output | Output file name, or directory if using separate files. |
| --pretty | Formats the output in an easier-to-read fashion. |
| --separate | Exports one file per workflow (useful for versioning). Must set a directory using --output. |
| --decrypted | Exports the credentials in a plain text format. |

Export all your workflows to the standard output (terminal):

Export a workflow by its ID and specify the output file name:

Export all workflows to a specific directory in a single file:

Export all the workflows to a specific directory using the --backup flag (details above):

Export all your credentials to the standard output (terminal):

Export credentials by their ID and specify the output file name:

Export all credentials to a specific directory in a single file:

Export all the credentials to a specific directory using the --backup flag (details above):

Export all the credentials in plain text format. You can use this to migrate from one installation to another that has a different secret key in the configuration file.

Sensitive information

All sensitive information is visible in the files.
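
As a hedged sketch of the export commands described above (IDs, file names, and directories are placeholders):

```bash
# Export all workflows to the terminal
n8n export:workflow --all

# Export a single workflow to a named file
n8n export:workflow --id=<ID> --output=file.json

# Export all workflows as separate, pretty-printed files for backup
n8n export:workflow --backup --output=backups/latest/

# Export all credentials to the terminal
n8n export:credentials --all

# Export all credentials in plain text, for migrating to an instance with a different encryption key
n8n export:credentials --all --decrypted --output=backups/decrypted/
```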

You can import entities from a previous export:entities command using this command. It allows importing entities into a database type that differs from the exported database type. Currently supported database types are SQLite and Postgres.

The database is expected to be empty prior to import. You can force this with the --truncateTables parameter.

| Flag | Description |
| --- | --- |
| --help | Help prompt. |
| --inputDir | Input directory that holds output files for import. |
| --truncateTables | Truncate tables before import. |
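
A minimal sketch, assuming the import command mirrors export:entities and uses the flags listed above (the input directory is a placeholder):

```bash
# Import previously exported entities, truncating existing tables first
n8n import:entities --inputDir=./entities-backup --truncateTables
```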

Import workflows and credentials

You can import your workflows and credentials from n8n using the CLI.

When exporting workflows and credentials, n8n also exports their IDs. If you have workflows and credentials with the same IDs in your existing database, they will be overwritten. To avoid this, delete or change the IDs before importing.

| Flag | Description |
| --- | --- |
| --help | Help prompt. |
| --input | Input file name or directory if you use --separate. |
| --projectId | Import the workflow or credential to the specified project. Can't be used with --userId. |
| --separate | Imports *.json files from directory provided by --input. |
| --userId | Import the workflow or credential to the specified user. Can't be used with --projectId. |

n8n limits workflow and credential names to 128 characters, but SQLite doesn't enforce size limits.

This might result in errors like Data too long for column name during the import process.

In this case, you can edit the names from the n8n interface and export again, or edit the JSON file directly before importing.

Import workflows from a specific file:

Import all the workflow files as JSON from the specified directory:

Import credentials from a specific file:

Import all the credentials files as JSON from the specified directory:
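
As a hedged sketch of the import commands described above (file names and directories are placeholders):

```bash
# Import workflows from a single file
n8n import:workflow --input=file.json

# Import all workflow JSON files from a directory
n8n import:workflow --separate --input=backups/latest/

# Import credentials from a single file
n8n import:credentials --input=file.json

# Import all credential JSON files from a directory
n8n import:credentials --separate --input=backups/latest/
```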

Clear your existing license from n8n's database and reset n8n to default features:

If your license includes floating entitlements, running this command will also attempt to release them back to the pool, making them available for other instances.

Display information about the existing license:
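
For example, a minimal sketch of the license commands described above:

```bash
# Clear the license and reset n8n to default features
n8n license:clear

# Display information about the existing license
n8n license:info
```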

You can reset user management using the n8n CLI. This returns user management to its pre-setup state. It removes all user accounts.

Use this if you forget your password, and don't have SMTP set up to do password resets by email.

Disable MFA for a user

If a user loses their recovery codes, you can disable MFA for them with this command. The user will then be able to log back in and set up MFA again.
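
A minimal sketch of these commands; the email flag and address for mfa:disable are assumptions used for illustration:

```bash
# Reset user management to its pre-setup state (removes all user accounts)
n8n user-management:reset

# Disable MFA for a single user (assumed flag and address)
n8n mfa:disable --email=user@example.com
```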

You can reset the LDAP settings using the command below.

Uninstall community nodes and credentials

You can manage community nodes using the n8n CLI. For now, you can only uninstall community nodes and credentials, which is useful if a community node causes instability.

| Flag | Description |
| --- | --- |
| --help | Show CLI help. |
| --credential | The credential type. Get this value by visiting the node's <NODE>.credential.ts file and getting the value of name. |
| --package | Package name of the community node. |
| --uninstall | Uninstalls the node. |
| --userId | The ID of the user who owns the credential. On self-hosted, query the database. On cloud, query the API with your API key. |

Uninstall a community node by package name:

For example, to uninstall the Evolution API community node, type:

Uninstall a community node credential:

For example, to uninstall the Evolution API community node credential, visit the repository and navigate to the credentials.ts file to find the name:

You can run a security audit on your n8n instance, to detect common security issues.
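
A minimal sketch of running the audit:

```bash
# Run a security audit of the instance and print the report
n8n audit
```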

Examples:

Example 1 (unknown):

docker exec -u node -it <n8n-container-name> <n8n-cli-command>

Example 2 (unknown):

n8n execute --id <ID>

Example 3 (unknown):

n8n update:workflow --id=<ID> --active=false

Example 4 (unknown):

n8n update:workflow --id=<ID> --active=true

Cortex node

URL: llms-txt#cortex-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Cortex node to automate work in Cortex, and integrate Cortex with other applications. n8n has built-in support for a wide range of Cortex features, including executing analyzers, and responders, as well as getting job details.

On this page, you'll find a list of operations the Cortex node supports and links to more resources.

Refer to Cortex credentials for guidance on setting up authentication.

  • Analyzer
    • Execute Analyzer
  • Job
    • Get job details
    • Get job report
  • Responder
    • Execute Responder

Templates and examples

Browse Cortex integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Workflow sharing

URL: llms-txt#workflow-sharing

Contents:

  • Share a workflow
  • View shared workflows
  • Workflow roles and permissions
    • Permissions
  • Node editing restrictions with unshared credentials

Available on Pro and Enterprise Cloud plans, and Enterprise self-hosted plans.

Workflow sharing allows you to share workflows between users of the same n8n instance.

Users can share workflows they created. Instance owners, and users with the admin role, can view and share all workflows in the instance. Refer to Account types for more information about owners and admins.

  1. Open the workflow you want to share.
  2. Select Share.
  3. In Add users, find and select the users you want to share with.
  4. Select Save.

View shared workflows

You can browse and search workflows on the Workflows list. The workflows in the list depend on the project:

  • Overview lists all workflows you can access. This includes:
    • Your own workflows.
    • Workflows shared with you.
    • Workflows in projects you're a member of.
    • If you log in as the instance owner or admin: all workflows in the instance.
  • Other projects: all workflows in the project.

Workflow roles and permissions

There are two workflow roles: creator and editor. The creator is the user who created the workflow. Editors are other users with access to the workflow.

You can't change the workflow owner, except when deleting the user.

Workflow sharing allows editors to use all credentials used in the workflow. This includes credentials that aren't explicitly shared with them using credential sharing.

Permissions Creator Editor
View workflow (read-only)
View executions
Update (including tags)
Run
Share
Export
Delete

Node editing restrictions with unshared credentials

Sharing in n8n works on the principle of least privilege. This means that if a user shares a workflow with you, but they don't share their credentials, you can't edit the nodes within the workflow that use those credentials. You can view and run the workflow, and edit nodes that don't use unshared credentials.

Refer to Credential sharing for guidance on sharing credentials.


Built-in integrations

URL: llms-txt#built-in-integrations

Contents:

  • Node operations: Triggers and Actions
  • Core nodes
  • Cluster nodes
  • Credentials
  • Community nodes

This section contains the node library: reference documentation for every built-in node in n8n, and their credentials.

Node operations: Triggers and Actions

When you add a node to a workflow, n8n displays a list of available operations. An operation is something a node does, such as getting or sending data.

There are two types of operation:

  • Triggers start a workflow in response to specific events or conditions in your services. When you select a Trigger, n8n adds a trigger node to your workflow, with the Trigger operation you chose pre-selected. When you search for a node in n8n, Trigger operations have a bolt icon .
  • Actions are operations that represent specific tasks within a workflow, which you can use to manipulate data, perform operations on external systems, and trigger events in other systems as part of your workflows. When you select an Action, n8n adds a node to your workflow, with the Action operation you chose pre-selected.

Core nodes can be actions or triggers. Whereas most nodes connect to a specific external service, core nodes provide functionality such as logic, scheduling, or generic API calls.

Cluster nodes are node groups that work together to provide functionality in an n8n workflow. Instead of using a single node, you use a root node and one or more sub-nodes that extend the functionality of the node.

External services need a way to identify and authenticate users. This data can range from an API key over an email/password combination to a long multi-line private key. You can save these in n8n as credentials.

Nodes in n8n can then request that credential information. As another layer of security, only node types with specific access rights can access the credentials.

To make sure that the data is secure, it gets saved to the database encrypted. n8n uses a random personal encryption key, which it automatically generates on the first run of n8n and then saves under ~/.n8n/config.

To learn more about creating, managing, and sharing credentials, refer to Manage credentials.

n8n supports custom nodes built by the community. Refer to Community nodes for guidance on installing and using these nodes.

For help building your own custom nodes and publishing them to npm, refer to Creating nodes.


External data storage environment variables

URL: llms-txt#external-data-storage-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

Refer to External storage for more information on using external storage for binary data.

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| N8N_EXTERNAL_STORAGE_S3_HOST | String | - | Host of the n8n bucket in S3-compatible external storage. For example, s3.us-east-1.amazonaws.com. |
| N8N_EXTERNAL_STORAGE_S3_BUCKET_NAME | String | - | Name of the n8n bucket in S3-compatible external storage. |
| N8N_EXTERNAL_STORAGE_S3_BUCKET_REGION | String | - | Region of the n8n bucket in S3-compatible external storage. For example, us-east-1. |
| N8N_EXTERNAL_STORAGE_S3_ACCESS_KEY | String | - | Access key in S3-compatible external storage. |
| N8N_EXTERNAL_STORAGE_S3_ACCESS_SECRET | String | - | Access secret in S3-compatible external storage. |
| N8N_EXTERNAL_STORAGE_S3_AUTH_AUTO_DETECT | Boolean | - | Use automatic credential detection to authenticate S3 calls for external storage. This will ignore the access key and access secret and use the default credential provider chain. |
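
A minimal sketch of configuring these variables for an S3 bucket; the host, bucket name, region, and keys are placeholders:

```bash
export N8N_EXTERNAL_STORAGE_S3_HOST=s3.us-east-1.amazonaws.com
export N8N_EXTERNAL_STORAGE_S3_BUCKET_NAME=my-n8n-bucket
export N8N_EXTERNAL_STORAGE_S3_BUCKET_REGION=us-east-1
export N8N_EXTERNAL_STORAGE_S3_ACCESS_KEY=<access-key>
export N8N_EXTERNAL_STORAGE_S3_ACCESS_SECRET=<access-secret>
```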

HubSpot Trigger node

URL: llms-txt#hubspot-trigger-node

Contents:

  • Events
  • Related resources

HubSpot provides tools for social media marketing, content management, web analytics, landing pages, customer support, and search engine optimization.

If you activate a second trigger, the previous trigger stops working. This is because the trigger registers a new webhook with HubSpot when activated. HubSpot only allows one webhook at a time.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's HubSpot Trigger integrations page.

  • Company
    • Created
    • Deleted
    • Property changed
  • Contact
    • Created
    • Deleted
    • Privacy deleted
    • Property changed
  • Conversation
    • Created
    • Deleted
    • New message
    • Privacy deletion
    • Property changed
  • Deal
    • Created
    • Deleted
    • Property changed
  • Ticket
    • Created
    • Deleted
    • Property changed

n8n provides an app node for HubSpot. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to HubSpot's documentation for details about their API.


Facebook Graph API credentials

URL: llms-txt#facebook-graph-api-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using app access token
    • Create a Meta app
    • Generate an App Access Token

You can use these credentials to authenticate the following nodes:

Facebook Trigger credentials

If you want to create credentials for the Facebook Trigger node, follow the instructions mentioned in the Facebook App credentials documentation.

Supported authentication methods

Refer to Meta's Graph API documentation for more information about the service.

Using app access token

To configure this credential, you'll need a Meta for Developers account and:

  • An app Access Token

There are two steps in setting up your credential:

  1. Create a Meta app with the products you need to access.
  2. Generate an App Access Token for that app.

Refer to the detailed instructions below for each step.

Create a Meta app

To create a Meta app:

  1. Go to the Meta Developer App Dashboard and select Create App.
  2. If you have a business portfolio and you're ready to connect the app to it, select the business portfolio. If you don't have a business portfolio or you're not ready to connect the app to the portfolio, select I don't want to connect a business portfolio yet and select Next. The Use cases page opens.
  3. Select the Use case that aligns with how you wish to use the Facebook Graph API. For example, for products in Meta's Business suite (like Messenger, Instagram, WhatsApp, Marketing API, App Events, Audience Network, Commerce API, Fundraisers, Jobs, Threat Exchange, and Webhooks), select Other, then select Next.
  4. Select Business and Next.
  5. Complete the essential information:
    • Add an App name.
    • Add an App contact email.
    • Here again you can connect to a business portfolio or skip it.
  6. Select Create app.
  7. The Add products to your app page opens.
  8. Select App settings > Basic from the left menu.
  9. Enter a Privacy Policy URL. (Required to take the app "Live.")
  10. Select Save changes.
  11. At the top of the page, toggle the App Mode from Development to Live.
  12. In the left menu, select Add Product.
  13. The Add products to your app page appears. Select the products that make sense for your app and configure them.

Refer to Meta's Create an app documentation for more information on creating an app, required fields like the Privacy Policy URL, and adding products.

For more information on the app modes and switching to Live mode, refer to App Modes and Publish | App Types.

Generate an App Access Token

Next, create an app access token to use with your n8n credential and the products you selected:

  1. In a separate tab or window, open the Graph API explorer.

  2. Select the Meta App you just created in the Access Token section.

  3. In User or Page, select Get App Token.

  4. Select Generate Access Token.

  5. The page prompts you to log in and grant access. Follow the on-screen prompts.

You may receive a warning that the app isn't available. Once you take an app live, there may be a few minutes' delay before you can generate an access token.

  6. Copy the token and enter it in your n8n credential as the Access Token.

Refer to the Meta instructions for Your First Request for more information on generating the token.


DeepL node

URL: llms-txt#deepl-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the DeepL node to automate work in DeepL, and integrate DeepL with other applications. n8n has built-in support for a wide range of DeepL features, including translating languages.

On this page, you'll find a list of operations the DeepL node supports and links to more resources.

Refer to DeepL credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Language
    • Translate data

Templates and examples

Translate PDF documents from Google drive folder with DeepL

View template details

Translate cocktail instructions using DeepL

View template details

Real-time Chat Translation with DeepL

View template details

Browse DeepL integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Autopilot node

URL: llms-txt#autopilot-node

Contents:

  • Operations
  • Templates and examples

Use the Autopilot node to automate work in Autopilot, and integrate Autopilot with other applications. n8n has built-in support for a wide range of Autopilot features, including creating, deleting, and updating contacts, as well as adding contacts to a list.

On this page, you'll find a list of operations the Autopilot node supports and links to more resources.

Autopilot branding change

Autopilot has become Ortto. The Autopilot credentials and nodes are only compatible with Autopilot, not the new Ortto API.

Refer to Autopilot credentials for guidance on setting up authentication.

  • Contact
    • Create/Update a contact
    • Delete a contact
    • Get a contact
    • Get all contacts
  • Contact Journey
    • Add contact to list
  • Contact List
    • Add contact to list
    • Check if contact is on list
    • Get all contacts on list
    • Remove a contact from a list
  • List
    • Create a list
    • Get all lists

Templates and examples

Twitch Auto-Clip-Generator: Fetch from Streamers, Clip & Edit on Autopilot

View template details

Viral ASMR Video Factory: Automatically generate viral videos on autopilot.

View template details

Manage contacts via Autopilot

View template details

Browse Autopilot integration templates, or search all templates


Create and edit credentials

URL: llms-txt#create-and-edit-credentials

Contents:

  • Create a credential
  • Expressions in credentials
    • Example workflow

Credentials are securely stored authentication information used to connect n8n workflows to external services such as APIs, or databases.

Create a credential

  1. Select the Create button in the upper-left corner of the side menu, then select Credential. If your n8n instance supports projects, you'll also need to choose whether to create the credential inside your personal space or a specific project you have access to. If you're using the community version, you'll create the credential inside your personal space.

  2. Alternatively, select the Create button in the upper-right corner of either the Overview page or a specific project, then select Credential. If you're doing this from the Overview page, you'll create the credential inside your personal space. If you're doing this from inside a project, you'll create the credential inside that specific project.

  3. Select the app or service you wish to connect to.

You can also create a new credential from the credential dropdown when editing a node in the workflow editor.

Once in the credential modal, enter the details required by your service. Refer to your service's page in the credentials library for guidance.

When you save a credential, n8n tests it to confirm it works.

n8n names new credentials "node name account" by default. You can rename the credentials by clicking on the name, similarly to renaming nodes. It's good practice to give them names that identify the app or service, type, and purpose of the credential. A naming convention makes it easier to keep track of and identify your credentials.

Expressions in credentials

You can use expressions to set credentials dynamically as your workflow runs:

  1. In your workflow, find the data path containing the credential. This varies depending on the exact parameter names in your data. Make sure that the data containing the credential is available in the workflow when you get to the node that needs it.
  2. When creating your credential, hover over the field where you want to use an expression.
  3. Toggle Expression on.
  4. Enter your expression.

View workflow file

Using the example

To load the template into your n8n instance:

  1. Download the workflow JSON file.
  2. Open a new workflow in your n8n instance.
  3. Copy in the JSON, or select Workflow menu > Import from file....

The example workflows use Sticky Notes to guide you:

  • Yellow: notes and information.
  • Green: instructions to run the workflow.
  • Orange: you need to change something to make the workflow work.
  • Blue: draws attention to a key feature of the example.

Dealing with errors in workflows

URL: llms-txt#dealing-with-errors-in-workflows

Contents:

  • Checking failed workflows
  • Catching erroring workflows
    • Exercise
  • Throwing exceptions in workflows

Sometimes you build a nice workflow, but it fails when you try to execute it. Workflow executions may fail for a variety of reasons, ranging from straightforward problems with incorrectly configuring a node or a failure in a third-party service to more mysterious errors.

But don't panic. In this lesson, you'll learn how you can troubleshoot errors so you can get your workflow up and running as soon as possible.

Checking failed workflows

n8n tracks executions of your workflows.

When one of your workflows fails, you can check the Executions log to see what went wrong. The Executions log shows you a list of the latest execution time, status, mode, and running time of your saved workflows.

Open the Executions log by selecting Executions in the left-side panel.

To investigate a specific failed execution from the list, select the name or the View button that appears when you hover over the row of the respective execution.

This will open the workflow in read-only mode, where you can see the execution of each node. This representation can help you identify at what point the workflow ran into issues.

To toggle between viewing the execution and the editor, select the Editor | Executions button at the top of the page.

Workflow execution view

Catching erroring workflows

To catch failed workflows, create a separate Error Workflow with the Error Trigger node. This workflow will only execute if the main workflow execution fails.

Use additional nodes in your Error Workflow that make sense, like sending notifications about the failed workflow and its errors using email or Slack.

To receive error messages for a failed workflow, set the Error Workflow in the Workflow Settings to an Error Workflow that uses an Error Trigger node.

The only difference between a regular workflow and an Error Workflow is that the latter contains an Error Trigger node. Make sure to create this node before you set this as another workflow's designated Error Workflow.

  • If a workflow uses the Error Trigger node, you don't have to activate the workflow.
  • If a workflow contains the Error Trigger node, by default, the workflow uses itself as the error workflow.
  • You can't test error workflows when running workflows manually. The Error trigger only runs when an automatic workflow errors.
  • You can set the same Error Workflow for multiple workflows.

In the previous chapters, you've built several small workflows. Now, pick one of them that you want to monitor and create an Error Workflow for it:

  1. Create a new Error Workflow.
  2. Add the Error Trigger node.
  3. Connect a node for the communication platform of your choice to the Error Trigger node, like Slack, Discord, Telegram, or even Gmail or a more generic Send Email.
  4. In the workflow you want to monitor, open the Workflow Settings and select the new Error Workflow you just created. Note that this workflow needs to run automatically to trigger the error workflow.

The workflow for this exercise looks like this:

To check the configuration of the nodes, you can copy the JSON workflow code below and paste it into your Editor UI:

Throwing exceptions in workflows

Another way of troubleshooting workflows is to include a Stop and Error node in your workflow. This node throws an error. You can specify the error type:

  • Error Message: returns a custom message about the error
  • Error Object: returns the type of error

You can only use the Stop and Error node as the last node in a workflow.

Throwing exceptions with the Stop and Error node is useful for verifying the data (or assumptions about the data) from a node and returning custom error messages.

If you are working with data from a third-party service, you may come across problems such as:

  • Wrongly formatted JSON output
  • Data with the wrong type (for example, numeric data that has a non-numeric value)
  • Missing values
  • Errors from remote servers

Though this kind of invalid data might not cause the workflow to fail right away, it could cause problems later on, when it becomes difficult to track down the source of the error. This is why it's better to throw an error at the time you know there might be a problem.

Stop and Error node with error message

Examples:

Example 1 (unknown):

{
	"nodes": [
		{
			"parameters": {},
			"name": "Error Trigger",
			"type": "n8n-nodes-base.errorTrigger",
			"typeVersion": 1,
			"position": [
				720,
				-380
			]
		},
		{
			"parameters": {
				"channel": "channelname",
				"text": "=This workflow {{$node[\"Error Trigger\"].json[\"workflow\"][\"name\"]}}failed.\nHave a look at it here: {{$node[\"Error Trigger\"].json[\"execution\"][\"url\"]}}",
				"attachments": [],
				"otherOptions": {}
			},
			"name": "Slack",
			"type": "n8n-nodes-base.slack",
			"position": [
				900,
				-380
			],
			"typeVersion": 1,
			"credentials": {
				"slackApi": {
					"id": "17",
					"name": "slack_credentials"
				}
			}
		}
	],
	"connections": {
		"Error Trigger": {
			"main": [
				[
					{
						"node": "Slack",
						"type": "main",
						"index": 0
					}
				]
			]
		}
	}
}

GitHub Document Loader node

URL: llms-txt#github-document-loader-node

Contents:

  • Node parameters
  • Node options
  • Templates and examples
  • Related resources

Use the GitHub Document Loader node to load data from a GitHub repository for vector stores or summarization.

On this page, you'll find the node parameters for the GitHub Document Loader node, and links to more resources.

You can find authentication information for this node here. This node doesn't support OAuth for authentication.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Text Splitting: Choose from:

  • Repository Link: Enter the URL of your GitHub repository.

  • Branch: Enter the branch name to use.

  • Recursive: Select whether to include sub-folders and files (turned on) or not (turned off).

  • Ignore Paths: Enter directories to ignore.

Templates and examples

Browse GitHub Document Loader integration templates, or search all templates

Refer to LangChain's documentation on document loaders for more information about the service.

View n8n's Advanced AI documentation.


Returns all the items of the given node and current run

URL: llms-txt#returns-all-the-items-of-the-given-node-and-current-run

allItems = _("").all();


Respond to Webhook

URL: llms-txt#respond-to-webhook

Contents:

  • How to use Respond to Webhook
  • Node parameters
    • Respond With
  • Node options
  • How n8n secures HTML responses
  • Templates and examples
  • Workflow behavior
  • Output the response sent to the webhook
  • Return more than one data item (deprecated)

Use the Respond to Webhook node to control the response to incoming webhooks. This node works with the Webhook node.

Runs once for the first data item

The Respond to Webhook node runs once, using the first incoming data item. Refer to Return more than one data item for more information.

How to use Respond to Webhook

To use the Respond to Webhook node:

  1. Add a Webhook node as the trigger node for the workflow.
  2. In the Webhook node, set Respond to Using 'Respond to Webhook' node.
  3. Add the Respond to Webhook node anywhere in your workflow. If you want it to return data from other nodes, place it after those nodes.

Configure the node behavior using these parameters.

Choose what data to send in the webhook response.

  • All Incoming Items: Respond with all the JSON items from the input.
  • Binary File: Respond with a binary file defined in Response Data Source.
  • First Incoming Item: Respond with the first incoming item's JSON.
  • JSON: Respond with a JSON object defined in Response Body.
  • JWT Token: Respond with a JSON Web Token (JWT).
  • No Data: No response payload.
  • Redirect: Redirect to a URL set in Redirect URL.
  • Text: Respond with text set in Response Body. This sends HTML by default (Content-Type: text/html).

Select Add Option to view and set the options.

  • Response Code: Set the response code to use.
  • Response Headers: Define the response headers to send.
  • Put Response in Field: Available when you respond with All Incoming Items or First Incoming Item. Set the field name for the field containing the response data.
  • Enable Streaming: When enabled, sends the data back to the user using streaming. Requires a trigger configured with the Response mode Streaming.

How n8n secures HTML responses

Starting with n8n version 1.103.0, n8n automatically wraps HTML responses to webhooks in <iframe> tags. This is a security mechanism to protect the instance users.

This has the following implications:

  • HTML renders in a sandboxed iframe instead of directly in the parent document.
  • JavaScript code that attempts to access the top-level window or local storage will fail.
  • Authentication headers aren't available in the sandboxed iframe (for example, basic auth). You need to use an alternative approach, like embedding a short-lived access token within the HTML.
  • Relative URLs (for example, <form action="/">) won't work. Use absolute URLs instead.

Templates and examples

Creating an API endpoint

View template details

Create a Branded AI-Powered Website Chatbot

View template details

AI-Powered YouTube Video Summarization & Analysis

View template details

Browse Respond to Webhook integration templates, or search all templates

When using the Respond to Webhook node, workflows behave as follows:

  • The workflow finishes without executing the Respond to Webhook node: it returns a standard message with a 200 status.
  • The workflow errors before the first Respond to Webhook node executes: the workflow returns an error message with a 500 status.
  • A second Respond to Webhook node executes after the first one: the workflow ignores it.
  • A Respond to Webhook node executes but there was no webhook: the workflow ignores the Respond to Webhook node.

Output the response sent to the webhook

By default, the Respond to Webhook node has a single output branch that contains the node's input data.

You can optionally enable a second output branch containing the response sent to the webhook. To enable this secondary output, open the Respond to Webhook node on the canvas and select the Settings tab. Activate the Enable Response Output Branch option.

The node will now have two outputs:

  • Input Data: The original output, passing on the node's input.
  • Response: The response object sent to the webhook.

Return more than one data item (deprecated)

n8n 1.22.0 added support for returning all data items using the All Incoming Items option. n8n recommends upgrading to the latest version of n8n, instead of using the workarounds described in this section.

The Respond to Webhook node runs once, using the first incoming data item. This includes when using expressions. You can't force looping using the Loop node: the workflow will run, but the webhook response will still only contain the results of the first execution.

If you need to return more than one data item, choose one of these options:

  • Instead of using the Respond to Webhook node, use the When Last Node Finishes option in Respond in the Webhook node. Use this when you want to return the final data that the workflow outputs.
  • Use the Aggregate node to turn multiple items into a single item before passing the data to the Respond to Webhook node. Set Aggregate to All Item Data (Into a Single List).

Auth0 Management credentials

URL: llms-txt#auth0-management-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API client secret

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create an Auth0 account.

Supported authentication methods

Refer to Auth0 Management's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

Using API client secret

To configure this credential, you'll need:

  • An Auth0 Domain
  • A Client ID
  • A Client Secret

Refer to the Auth0 Management API Get Access Tokens documentation for instructions on obtaining the Client ID and Client Secret from the application's Settings tab.


Mailchimp Trigger node

URL: llms-txt#mailchimp-trigger-node

Mailchimp is an integrated marketing platform that allows business owners to automate their email campaigns and track user engagement.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Mailchimp Trigger integrations page.


Configuration examples

URL: llms-txt#configuration-examples

This section contains examples for how to configure n8n to solve particular use cases.


Google Ads node

URL: llms-txt#google-ads-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Google Ads node to automate work in Google Ads, and integrate Google Ads with other applications. n8n has built-in support for a wide range of Google Ads features, including getting campaigns.

On this page, you'll find a list of operations the Google Ads node supports and links to more resources.

Refer to Google Ads credentials for guidance on setting up authentication.

  • Campaign
    • Get all campaigns
    • Get a campaign

Templates and examples

AI marketing report (Google Analytics & Ads, Meta Ads), sent via email/Telegram

by Friedemann Schuetz

View template details

Generating New Keywords and their Search Volumes using the Google Ads API

View template details

Get Meta Ads insights and save them into Google Sheets

View template details

Browse Google Ads integration templates, or search all templates

Refer to Google Ads' documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Sentiment Analysis node

URL: llms-txt#sentiment-analysis-node

Contents:

  • Node parameters
  • Node options
  • Usage Notes
    • Model Temperature Setting
    • Language Considerations
    • Processing Large Volumes
    • Iterative Refinement
  • Example Usage
    • Basic Sentiment Analysis
    • Custom Category Analysis

Use the Sentiment Analysis node to analyze the sentiment of incoming text data.

The language model uses the Sentiment Categories in the node options to determine each item's sentiment.

  • Text to Analyze defines the input text for sentiment analysis. This is an expression that references a field from the input items. For example, this could be {{ $json.chatInput }} if the input is from a chat or message source. By default, it expects a text field.

  • Sentiment Categories: Define the categories that you want to classify your input as.

    • By default, these are Positive, Neutral, Negative. You can customize these categories to fit your specific use case, such as Very Positive, Positive, Neutral, Negative, Very Negative for more granular analysis.
  • Include Detailed Results: When turned on, this option includes sentiment strength and confidence scores in the output. Note that these scores are estimates generated by the language model and are rough indicators rather than precise measurements.

  • System Prompt Template: Use this option to change the system prompt that's used for the sentiment analysis. It uses the {categories} placeholder for the categories.

  • Enable Auto-Fixing: When enabled, the node automatically fixes model outputs to ensure they match the expected format. It does this by sending the schema parsing error to the LLM and asking it to fix it.

Model Temperature Setting

It's strongly advised to set the temperature of the connected language model to 0 or a value close to 0. This helps ensure that the results are as deterministic as possible, providing more consistent and reliable sentiment analysis across multiple runs.

Language Considerations

The node's performance may vary depending on the language of the input text.

For best results, ensure your chosen language model supports the input language.

Processing Large Volumes

When analyzing large amounts of text, consider splitting the input into smaller chunks to optimize processing time and resource usage.

Iterative Refinement

For complex sentiment analysis tasks, you may need to iteratively refine the system prompt and categories to achieve the desired results.

Basic Sentiment Analysis

  1. Connect a data source (for example, RSS Feed, HTTP Request) to the Sentiment Analysis node.
  2. Set the "Text to Analyze" field to the relevant item property (for example, {{ $json.content }} for blog post content).
  3. Keep the default sentiment categories.
  4. Connect the node's outputs to separate paths for processing positive, neutral, and negative sentiments differently.

Custom Category Analysis

  1. Change the Sentiment Categories to Excited, Happy, Neutral, Disappointed, Angry.
  2. Adjust your workflow to handle these five output categories.
  3. Use this setup to analyze customer feedback with more nuanced emotional categories.

View n8n's Advanced AI documentation.


Returns all items the node "IF" outputs (index: 1 which is Output "false" of run 0 which is the first run)

URL: llms-txt#returns-all-items-the-node-"if"-outputs-(index:-1-which-is-output-"false"-of-run-0-which-is-the-first-run)

Contents:

  • Accessing item data

allItems = _("IF").all(1, 0);

previousNodeData = $("").all(); for(let i=0; i<previousNodeData.length; i++) { console.log(previousNodeData[i].json); }

previousNodeData = _("").all(); for item in previousNodeData: # item is of type <class 'pyodide.ffi.JsProxy'> # You need to convert it to a Dict itemDict = item.json.to_py() print(itemDict)


**Examples:**

Example 1 (unknown):
```unknown
## Accessing item data

Get all items output by a previous node, and log out the data they contain:

Example 2 (unknown):



Customer.io Trigger node

URL: llms-txt#customer.io-trigger-node

Contents:

  • Events
  • Related resources

Customer.io enables users to send newsletters to selected segments of customers using their website data. You can send targeted emails, push notifications, and SMS to lower churn, create stronger relationships, and drive subscriptions.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Customer.io Trigger integrations page.

  • Customer
    • Subscribed
    • Unsubscribe
  • Email
    • Bounced
    • Clicked
    • Converted
    • Delivered
    • Drafted
    • Failed
    • Opened
    • Sent
    • Spammed
  • Push
    • Attempted
    • Bounced
    • Clicked
    • Delivered
    • Drafted
    • Failed
    • Opened
    • Sent
  • Slack
    • Attempted
    • Clicked
    • Drafted
    • Failed
    • Sent
  • Sms
    • Attempted
    • Bounced
    • Clicked
    • Delivered
    • Drafted
    • Failed
    • Sent

n8n provides an app node for Customer.io. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Customer.io's documentation for details about their API.


Automating a (Real-world) Use Case

URL: llms-txt#automating-a-(real-world)-use-case

Contents:

  • Understanding the scenario

Meet Nathan 🙋. Nathan works as an Analytics Manager at ABCorp. His job is to support the ABCorp team with reporting and analytics. Being a true jack of all trades, he also handles several miscellaneous initiatives.

Some things that Nathan does are repetitive and mind-numbing. He wants to automate some of these tasks so that he doesn't burn out. As an Automation Expert, you are meeting with Nathan today to help him understand how he can offload some of his responsibilities to n8n.

Understanding the scenario

You 👩‍🔧: Nice to meet you, Nathan. Glad to be doing this! What's a repetitive task that's error-prone and that you'd like to get off your plate first?

Nathan 🙋: Thanks for coming in! The most annoying one's gotta be the weekly sales reporting.

I have to collect sales data from our legacy data warehouse, which manages data from the main business processes of an organization, such as sales or production. Now, each sales order can have the status Processing or Booked. I have to calculate the sum of all the Booked orders and announce them in the company Discord every Monday. Then I have to create a spreadsheet of all the Processing sales so that the Sales Managers can review them and check if they need to follow up with customers.

This manual work is tough and requires high attention to detail to make sure that all the numbers are right. Inevitably, I lose my focus and mistype a number or I don't get it done on time. I've been criticized once by my manager for miscalculating the data.

You 👩‍🔧: Oh no! Doesn't the data warehouse have a way to export the data?

Nathan 🙋: The data warehouse was written in-house ages ago. It doesn't have a CSV export but they recently added a couple of API endpoints that expose this data, if that helps.

You 👩‍🔧: Perfect! That's a good start. If you have a generic API, we can add some custom code and a couple of services to make an automated workflow. This gig has n8n written all over it. Let's get started!


Pushcut node

URL: llms-txt#pushcut-node

Contents:

  • Operations
  • Templates and examples

Use the Pushcut node to automate work in Pushcut, and integrate Pushcut with other applications. n8n supports sending notifications with Pushcut.

On this page, you'll find a list of operations the Pushcut node supports and links to more resources.

Refer to Pushcut credentials for guidance on setting up authentication.

  • Notification
    • Send a notification

Templates and examples

Browse Pushcut integration templates, or search all templates


Set up OIDC

URL: llms-txt#set-up-oidc

Contents:

  • Setting up and enabling OIDC

  • Provider-specific OIDC setup

    • Auth0
  • Discovery endpoints reference

  • Available on Enterprise plans.

  • You need to be an instance owner or admin to enable and configure OIDC.

Setting up and enabling OIDC

  1. In n8n, go to Settings > SSO.

  2. Under Select Authentication Protocol, choose OIDC from the dropdown.

  3. Copy the redirect URL shown (for example, https://yourworkspace.app.n8n.cloud/rest/sso/oidc/callback).

Extra configuration for load balancers or proxies

If you are running n8n behind a load balancer, make sure you set the N8N_EDITOR_BASE_URL environment variable.

  4. Set up OIDC with your identity provider (IdP). You'll need to:
    • Create a new OIDC client/application in your IdP.
    • Configure the redirect URL from the previous step.
    • Note down the Client ID and Client Secret provided by your IdP.
  5. In your IdP, locate the Discovery Endpoint (also called the well-known configuration endpoint). It typically has the following format:

  6. In n8n, complete the OIDC configuration:

    • Discovery Endpoint: Enter the discovery endpoint URL from your IdP.
    • Client ID: Enter the client ID you received when registering your application with your IdP.
    • Client Secret: Enter the client secret you received when registering your application with your IdP.
  7. Select Save settings.

  8. Set OIDC to Activated.

Provider-specific OIDC setup

Auth0

  1. Create an application in Auth0:
    • Log in to your Auth0 Dashboard.
    • Go to Applications > Applications.
    • Click Create Application.
    • Enter a name (for example, "n8n SSO") and select Regular Web Applications.
    • Click Create.
  2. Configure the application:
    • Go to the Settings tab of your new application.
    • Allowed Callback URLs: Add your n8n redirect URL from Settings > SSO > OIDC.
    • Allowed Web Origins: Add your n8n base URL (for example, https://yourworkspace.app.n8n.cloud).
    • Click Save Changes.
  3. Get your credentials:
    • Client ID: Found in the Settings tab.
    • Client Secret: Found in the Settings tab.
    • Discovery Endpoint: https://{your-auth0-domain}.auth0.com/.well-known/openid-configuration.
  4. In n8n, complete the OIDC configuration:
    • Discovery Endpoint: Enter the discovery endpoint URL from Auth0.
    • Client ID: Enter the client ID you found in your Auth0 settings.
    • Client Secret: Enter the client secret you found in your Auth0 settings.
  5. Select Save settings.
  6. Set OIDC to Activated.

Discovery endpoints reference

  • Google discovery endpoint example:

  • Microsoft Azure AD discovery endpoint example:

  • Auth0 discovery endpoint example:

  • Okta discovery endpoint example:

  • Amazon Cognito discovery endpoint example:

Examples:

Example 1 (unknown):

https://your-idp-domain/.well-known/openid-configuration

Example 2 (unknown):

https://accounts.google.com/.well-known/openid-configuration

Example 3 (unknown):

https://login.microsoftonline.com/{tenant-id}/v2.0/.well-known/openid-configuration

Example 4 (unknown):

https://{your-domain}.auth0.com/.well-known/openid-configuration

Nodes environment variables

URL: llms-txt#nodes-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

This page lists the environment variables configuration options for managing nodes in n8n, including specifying which nodes to load or exclude, importing built-in or external modules in the Code node, and enabling community nodes.

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| N8N_COMMUNITY_PACKAGES_ENABLED | Boolean | true | Enables (true) or disables (false) the functionality to install and load community nodes. If set to false, neither verified nor unverified community packages will be available, regardless of their individual settings. |
| N8N_COMMUNITY_PACKAGES_PREVENT_LOADING | Boolean | false | Prevents (true) or allows (false) loading installed community nodes on instance startup. Use this if a faulty node prevents the instance from starting. |
| N8N_COMMUNITY_PACKAGES_REGISTRY | String | https://registry.npmjs.org | NPM registry URL to pull community packages from (license required). |
| N8N_CUSTOM_EXTENSIONS | String | - | Specify the path to directories containing your custom nodes. |
| N8N_PYTHON_ENABLED | Boolean | true | Whether to enable Python execution on the Code node. |
| N8N_UNVERIFIED_PACKAGES_ENABLED | Boolean | true | When N8N_COMMUNITY_PACKAGES_ENABLED is true, this variable controls whether to enable the installation and use of unverified community nodes from an NPM registry (true) or not (false). |
| N8N_VERIFIED_PACKAGES_ENABLED | Boolean | true | When N8N_COMMUNITY_PACKAGES_ENABLED is true, this variable controls whether to show verified community nodes in the nodes panel for installation and use (true) or to hide them (false). |
| NODE_FUNCTION_ALLOW_BUILTIN | String | - | Permit users to import specific built-in modules in the Code node. Use * to allow all. n8n disables importing modules by default. |
| NODE_FUNCTION_ALLOW_EXTERNAL | String | - | Permit users to import specific external modules (from n8n/node_modules) in the Code node. n8n disables importing modules by default. |
| NODES_ERROR_TRIGGER_TYPE | String | n8n-nodes-base.errorTrigger | Specify which node type to use as Error Trigger. |
| NODES_EXCLUDE | Array of strings | - | Specify which nodes not to load. For example, to block nodes that can be a security risk if users aren't trustworthy: NODES_EXCLUDE: "[\"n8n-nodes-base.executeCommand\", \"@n8n/n8n-nodes-langchain.lmChatDeepSeek\"]" |
| NODES_INCLUDE | Array of strings | - | Specify which nodes to load. |
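As an illustration of how NODE_FUNCTION_ALLOW_BUILTIN affects the Code node: with the variable set to crypto (an example value, not a default), a Code node could import and use that built-in module. This is only a sketch; which modules you allow depends on your own security requirements.

```js
// Works only if the instance allows the module, for example:
// NODE_FUNCTION_ALLOW_BUILTIN=crypto
const crypto = require('crypto');

return $input.all().map((item) => ({
  json: {
    ...item.json,
    // add a hash of each item so downstream nodes can detect duplicates
    hash: crypto.createHash('sha256').update(JSON.stringify(item.json)).digest('hex'),
  },
}));
```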

Serp credentials

URL: llms-txt#serp-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a SerpApi account.

Supported authentication methods

Refer to Serp's API documentation for more information about the service.

View n8n's Advanced AI documentation.

Using API key

To configure this credential, you'll need a SerpApi API key:

  1. Go to Your Account > API Key.
  2. Copy Your Private API Key and enter it as the API Key in your n8n credential.

Copper Trigger node

URL: llms-txt#copper-trigger-node

Contents:

  • Events
  • Related resources

Copper is a CRM that focuses on strong integration with Google Workspace. It's mainly targeted towards small and medium-sized businesses.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Copper Trigger integrations page.

  • Delete
  • New
  • Update

n8n provides an app node for Copper. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Copper's documentation for details about their API.


GitHub Trigger node

URL: llms-txt#github-trigger-node

Contents:

  • Events
  • Related resources

GitHub provides hosting for software development and version control using Git. It offers the distributed version control and source code management (SCM) functionality of Git, access control and several collaboration features such as bug tracking, feature requests, task management, and wikis for every project.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's GitHub Trigger integrations page.

  • Check run
  • Check suite
  • Commit comment
  • Create
  • Delete
  • Deploy key
  • Deployment
  • Deployment status
  • Fork
  • GitHub app authorization
  • Gollum
  • Installation
  • Installation repositories
  • Issue comment
  • Label
  • Marketplace purchase
  • Member
  • Membership
  • Meta
  • Milestone
  • Org block
  • Organization
  • Page build
  • Project
  • Project card
  • Project column
  • Public
  • Pull request
  • Pull request review
  • Pull request review comment
  • Push
  • Release
  • Repository
  • Repository import
  • Repository vulnerability alert
  • Security advisory
  • Star
  • Status
  • Team
  • Team add
  • Watch

n8n provides an app node for GitHub. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to GitHub's documentation for details about their API.


Zoom node

URL: llms-txt#zoom-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Zoom node to automate work in Zoom, and integrate Zoom with other applications. n8n has built-in support for a wide range of Zoom features, including creating, retrieving, deleting, and updating meetings.

On this page, you'll find a list of operations the Zoom node supports and links to more resources.

Refer to Zoom credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Meeting
    • Create a meeting
    • Delete a meeting
    • Retrieve a meeting
    • Retrieve all meetings
    • Update a meeting

Templates and examples

Zoom AI Meeting Assistant creates mail summary, ClickUp tasks and follow-up call

by Friedemann Schuetz

View template details

Streamline Your Zoom Meetings with Secure, Automated Stripe Payments

View template details

Create Zoom meeting link from Google Calendar invite

View template details

Browse Zoom integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


FileMaker node

URL: llms-txt#filemaker-node

Contents:

  • Operations
  • Templates and examples

Use the FileMaker node to automate work in FileMaker, and integrate FileMaker with other applications. n8n has built-in support for a wide range of FileMaker features, including creating, finding, getting, editing, and duplicating records.

On this page, you'll find a list of operations the FileMaker node supports and links to more resources.

Refer to FileMaker credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Find Records
  • Get Records
  • Get Records by Id
  • Perform Script
  • Create Record
  • Edit Record
  • Duplicate Record
  • Delete Record

Templates and examples

Create, update, and retrieve a record from FileMaker

View template details

Convert FileMaker Data API to Flat File Array

View template details

Integrate Xero with FileMaker using Webhooks

View template details

Browse FileMaker integration templates, or search all templates


Debug Helper

URL: llms-txt#debug-helper

Contents:

  • Operations
  • Node parameters
    • Throw Error
    • Out Of Memory
    • Generate Random Data
  • Templates and examples

Use the Debug Helper node to trigger different error types or generate random datasets to help test n8n workflows.

Define the operation by selecting the Category:

  • Do Nothing: Don't do anything.
  • Throw Error: Throw an error with the specified type and message.
  • Out Of Memory: Generate a specific memory size to simulate being out of memory.
  • Generate Random Data: Generate some random data in a selected format.

The node parameters depend on the Category selected. The Do Nothing Category has no other parameters.

Throw Error

  • Error Type: Select the type of error to throw. Choose from:
    • NodeApiError
    • NodeOperationError
    • Error
  • Error Message: Enter the error message to throw.

The Out of Memory Category adds one parameter, the Memory Size to Generate. Enter the approximate amount of memory to generate.

Generate Random Data

  • Data Type: Choose the type of random data you'd like to generate. Options include:
    • Address
    • Coordinates
    • Credit Card
    • Email
    • IPv4
    • IPv6
    • MAC
    • Nanoids: If you select this data type, you'll also need to enter:
      • Nanoid Alphabet: The alphabet the generator will use to generate the nanoids.
      • Nanoid Length: The length of each nanoid.
    • URL
    • User Data
    • UUID
    • Version
  • Seed: If you'd like to generate the data using a specific seed, enter it here. This ensures the data gets generated consistently. If you'd rather use random data generation, leave this field empty.
  • Number of Items to Generate: Enter the number of random items you'd like to generate.
  • Output as Single Array: Whether to generate the data as a single array (turned on) or multiple items (turned off).

Templates and examples

Build an MCP Server with Google Calendar and Custom Functions

View template details

Test Webhooks in n8n Without Changing WEBHOOK_URL (PostBin & BambooHR Example)

View template details

Extract Domain and verify email syntax on the go

View template details

Browse Debug Helper integration templates, or search all templates


Cockpit credentials

URL: llms-txt#cockpit-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Cockpit's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • Your Cockpit URL: The URL you use to access your Cockpit instance
  • An Access Token: Refer to the Cockpit Managing tokens documentation for instructions on creating an API token. Use the API token as the n8n Access Token.

Creating nodes

URL: llms-txt#creating-nodes

Contents:

  • Prerequisites

Learn how to build your own custom nodes.

This section includes:

This section assumes the following:

  • Some familiarity with JavaScript and TypeScript.
  • Ability to manage your own development environment, including git.
  • Knowledge of npm, including creating and submitting packages.
  • Familiarity with n8n, including a good understanding of data structures and item linking.

Read/Write Files from Disk

URL: llms-txt#read/write-files-from-disk

Contents:

  • Operations
  • Read File(s) From Disk
    • Read File(s) From Disk options
  • Write File to Disk
    • Write File to Disk options
  • Templates and examples
  • File locations

Use the Read/Write Files from Disk node to read and write files from/to the machine where n8n is running.

This node isn't available on n8n Cloud.

  • Read File(s) From Disk: Use this operation to retrieve one or more files from the computer that runs n8n.
  • Write File to Disk: Use this operation to create a binary file on the computer that runs n8n.

Refer to the sections below for more information on configuring the node for each operation.

Read File(s) From Disk

Configure this operation with these parameters:

  • File(s) Selector: Enter the path of the file you want to read.
    • To select multiple files, enter a path pattern. You can use these characters to define a path pattern:
      • *: Matches any character zero or more times, excluding path separators.
      • **: Matches any character zero or more times, including path separators.
      • ?: Matches any single character, except for path separators.
      • []: Matches any character inside the brackets. For example, [abc] matches the characters a, b, or c, and nothing else.

Refer to Picomatch's Basic globbing documentation for more information on these characters and their expected behavior.

Read File(s) From Disk options

You can also configure this operation with these Options:

  • File Extension: Enter the extension for the file in the node output.
  • File Name: Enter the name for the file in the node output.
  • MIME Type: Enter the file's MIME type in the node output. Refer to Common MIME types for a list of file extensions and their MIME types.
  • Put Output File in Field: Enter the name of the field in the output data to contain the file.

Write File to Disk

Configure this operation with these parameters:

  • File Path and Name: Enter the destination for the file, the file's name, and the file's extension.
  • Input Binary Field: Enter the name of the field in the node input data that will contain the binary file.

Write File to Disk options

You can also configure this operation with these Options:

This operation includes a single option, whether to Append data to an existing file instead of creating a new one (turned on) or to create a new file instead of appending to existing (turned off).

Templates and examples

Generate SQL queries from schema only - AI-powered

View template details

Breakdown Documents into Study Notes using Templating MistralAI and Qdrant

View template details

Talk to your SQLite database with a LangChain AI Agent 🧠💬

View template details

Browse Read/Write Files from Disk integration templates, or search all templates

File locations

If you run n8n in Docker, your command runs in the n8n container and not the Docker host.

This node looks for files relative to the n8n install path. n8n recommends using absolute file paths to prevent any errors.


Set up source control for environments

URL: llms-txt#set-up-source-control-for-environments

Contents:

  • Prerequisites
  • Step 1: Set up your repository and branches
  • Step 2: Configure Git in n8n
  • Step 3: Set up authentication
    • SSH authentication (using deploy keys)
    • HTTPS authentication (using Personal Access Tokens)
  • Step 4: Connect n8n and configure your instance

Link a Git repository to an n8n instance and configure your source control.

n8n uses source control to provide environments. Refer to Environments in n8n for more information.

To use source control with n8n, you need a Git repository with either:

  • SSH access (using deploy keys), or
  • HTTPS access (using Personal Access Tokens)

This document assumes you are familiar with Git and your Git provider.

Step 1: Set up your repository and branches

  1. Create a new repository for use with n8n.
  2. Create the branches you need. For example, if you plan to have different environments for test and production, set up a branch for each.

To help decide what branches you need for your use case, refer to Branch patterns.

Step 2: Configure Git in n8n

  1. Go to Settings > Environments.
  2. Choose your connection method:
    • SSH: In Git repository URL, enter the SSH URL for your repository (for example, git@github.com:username/repo.git).
    • HTTPS: In Git repository URL enter the HTTPS URL for your repository (for example, https://github.com/username/repo.git).
  3. Configure authentication based on your connection method:
    • For SSH: n8n supports ED25519 and RSA public key algorithms. ED25519 is the default. Select RSA under SSH Key if your git host requires RSA. Copy the SSH key.
    • For HTTPS: Enter your credentials:
      • Username: Your Git provider username.
      • Token: Your Personal Access Token (PAT) from your Git provider.

Step 3: Set up authentication

Configure authentication based on your chosen connection method.

SSH authentication (using deploy keys)

Set up SSH access by creating a deploy key for the repository using the SSH key from n8n. The key must have write access.

The steps depend on your Git provider. Help links for common providers:

HTTPS authentication (using Personal Access Tokens)

Create a Personal Access Token (PAT) with repository access permissions.

Help links for creating PATs with common providers:

Required permissions for your token:

  • Repository read/write access
  • Contents read/write (for GitHub)
  • Source code pull/push (for GitLab)

Step 4: Connect n8n and configure your instance

  1. In Settings > Environments in n8n, select Connect. n8n connects to your Git repository.
  2. Under Instance settings, choose which branch you want to use for the current n8n instance.
  3. Optional: select Protected instance to prevent users editing workflows in this instance. This is useful for protecting production instances.
  4. Optional: choose a custom color for the instance. This will appear in the menu next to the source control push and pull buttons. It helps users know which instance they're in.
  5. Select Save settings.

OpenWeatherMap node

URL: llms-txt#openweathermap-node

Contents:

  • Operations
  • Templates and examples

Use the OpenWeatherMap node to automate work in OpenWeatherMap, and integrate OpenWeatherMap with other applications. n8n supports retrieving current and upcoming weather data with OpenWeatherMap.

On this page, you'll find a list of operations the OpenWeatherMap node supports and links to more resources.

Refer to OpenWeatherMap credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Returns the current weather data
  • Returns the weather data for the next 5 days

Templates and examples

Get Weather Forecast via Telegram

View template details

Get information about the weather for any city

View template details

Receive the weather information of any city

View template details

Browse OpenWeatherMap integration templates, or search all templates


Bubble node

URL: llms-txt#bubble-node

Contents:

  • Operations
  • Templates and examples

Use the Bubble node to automate work in Bubble, and integrate Bubble with other applications. n8n has built-in support for a wide range of Bubble features, including creating, deleting, getting, and updating objects.

On this page, you'll find a list of operations the Bubble node supports and links to more resources.

Refer to Bubble credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Object
    • Create
    • Delete
    • Get
    • Get All
    • Update

Templates and examples

Create, update, and get an object from Bubble

View template details

Access data from bubble application

View template details

AI Agent Integration for Bubble Apps with MCP Protocol Data Access

View template details

Browse Bubble integration templates, or search all templates


Odoo node

URL: llms-txt#odoo-node

Contents:

  • Operations
  • Templates and examples

Use the Odoo node to automate work in Odoo, and integrate Odoo with other applications. n8n has built-in support for a wide range of Odoo features, including creating, updating, deleting, and getting contacts, resources, and opportunities.

On this page, you'll find a list of operations the Odoo node supports and links to more resources.

Refer to Odoo credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Contact
    • Create a new contact
    • Delete a contact
    • Get a contact
    • Get all contacts
    • Update a contact
  • Custom Resource
    • Create a new item
    • Delete an item
    • Get an item
    • Get all items
    • Update an item
  • Note
    • Create a new note
    • Delete a note
    • Get a note
    • Get all notes
    • Update a note
  • Opportunity
    • Create a new opportunity
    • Delete an opportunity
    • Get an opportunity
    • Get all opportunities
    • Update an opportunity

Templates and examples

ERP AI chatbot for Odoo sales module with OpenAI

View template details

Summarize emails and save them as notes on sales opportunity in Odoo

View template details

Import Odoo Product Images from Google Drive

View template details

Browse Odoo integration templates, or search all templates


Dropbox node

URL: llms-txt#dropbox-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Dropbox node to automate work in Dropbox, and integrate Dropbox with other applications. n8n has built-in support for a wide range of Dropbox features, including creating, downloading, moving, and copying files and folders.

On this page, you'll find a list of operations the Dropbox node supports and links to more resources.

Refer to Dropbox credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • File
    • Copy a file
    • Delete a file
    • Download a file
    • Move a file
    • Upload a file
  • Folder
    • Copy a folder
    • Create a folder
    • Delete a folder
    • Return the files and folders in a given folder
    • Move a folder
  • Search
    • Query

Templates and examples

Hacker News to Video Content

View template details

Nightly n8n backup to Dropbox

View template details

Explore n8n Nodes in a Visual Reference Library

View template details

Browse Dropbox integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Enable executions pruning

URL: llms-txt#enable-executions-pruning

export EXECUTIONS_DATA_PRUNE=true


How old (hours) a finished execution must be to qualify for soft-deletion

URL: llms-txt#how-old-(hours)-a-finished-execution-must-be-to-qualify-for-soft-deletion

export EXECUTIONS_DATA_MAX_AGE=168


Embeddings Google PaLM node

URL: llms-txt#embeddings-google-palm-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources

Use the Embeddings Google PaLM node to generate embeddings for a given text.

On this page, you'll find the node parameters for the Embeddings Google PaLM node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model to use to generate the embedding.

n8n dynamically loads models from the Google PaLM API and you'll only see the models available to your account.

Templates and examples

Ask questions about a PDF using AI

View template details

Chat with PDF docs using AI (quoting sources)

View template details

RAG Chatbot for Company Documents using Google Drive and Gemini

View template details

Browse Embeddings Google PaLM integration templates, or search all templates

Refer to Langchain's Google PaLM embeddings documentation for more information about the service.

View n8n's Advanced AI documentation.


Binary data environment variables

URL: llms-txt#binary-data-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

By default, n8n uses memory to store binary data. Enterprise users can choose to use an external service instead. Refer to External storage for more information on using external storage for binary data.

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| N8N_AVAILABLE_BINARY_DATA_MODES | String | filesystem | A comma separated list of available binary data modes. |
| N8N_BINARY_DATA_STORAGE_PATH | String | N8N_USER_FOLDER/binaryData | The path where n8n stores binary data. |
| N8N_DEFAULT_BINARY_DATA_MODE | String | default | The default binary data mode. default keeps binary data in memory. Set to filesystem to use the filesystem, or s3 to use AWS S3. Note that binary data pruning operates on the active binary data mode. For example, if your instance stored data in S3, and you later switched to filesystem mode, n8n only prunes binary data in the filesystem. This may change in future. |

Workflow Trigger node

URL: llms-txt#workflow-trigger-node

Contents:

  • Node parameters
  • Templates and examples

The Workflow Trigger node gets triggered when a workflow is updated or activated.

n8n has deprecated the Workflow Trigger node and moved its functionality to the n8n Trigger node.

If you want to use the Workflow Trigger node for a workflow, add the node to the workflow. You don't have to create a separate workflow.

The Workflow Trigger node gets triggered for the workflow it's added to. You can use it, for example, to send a notification when that workflow is updated or activated.

The node includes a single parameter to identify the Events that should trigger it. Choose from these events:

  • Active Workflow Updated: If you select this event, the node triggers when this workflow is updated.
  • Workflow Activated: If you select this event, the node triggers when this workflow is activated.

You can select one or both of these events.

Templates and examples

Qualys Vulnerability Trigger Scan SubWorkflow

View template details

Pattern for Multiple Triggers Combined to Continue Workflow

View template details

Unify multiple triggers into a single workflow

by Guillaume Duvernay

View template details

Browse Workflow Trigger integration templates, or search all templates


Data item linking

URL: llms-txt#data-item-linking

An item is a single piece of data. Nodes receive one or more items, operate on them, and output new items. Each item links back to previous items.

You need to understand this behavior if you're:

  • Building a programmatic-style node that implements complex behaviors with its input and output data.
  • Using the Code node or expressions editor to access data from earlier items in the workflow.
  • Using the Code node for complex behaviors with input and output data.
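As a quick illustration of item linking in the Code node, you can make the link explicit by setting pairedItem on each output item (a minimal sketch; the orderStatus field is just an assumed example):

```js
// Return one output item per input item and record which input item
// each output came from, so later nodes can trace items back.
return $input.all().map((item, index) => ({
  json: {
    orderStatus: item.json.orderStatus, // assumed example field
    processedAt: new Date().toISOString(),
  },
  pairedItem: { item: index },
}));
```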

This section provides:


Intercom credentials

URL: llms-txt#intercom-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Intercom's API documentation for more information about the service.

To configure this credential, you'll need:


Google Business Profile Trigger node

URL: llms-txt#google-business-profile-trigger-node

Contents:

  • Events
  • Related resources

Use the Google Business Profile Trigger node to respond to events in Google Business Profile and integrate Google Business Profile with other applications. n8n has built-in support for responding to new reviews.

On this page, you'll find a list of events the Google Business Profile Trigger node can respond to and links to more resources.

You can find authentication information for this node here.

n8n provides an app node for Google Business Profile. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to Google Business Profile's documentation for details about their API.


Telegram node common issues

URL: llms-txt#telegram-node-common-issues

Contents:

  • Add a bot to a Telegram channel
  • Get the Chat ID
  • Send more than 30 messages per second
  • Remove the n8n attribution from sent messages

Here are some common errors and issues with the Telegram node and steps to resolve or troubleshoot them.

Add a bot to a Telegram channel

For a bot to send a message to a channel, you must add the bot to the channel. If you haven't added the bot to the channel, you'll see an error with a description like: Error: Forbidden: bot is not a participant of the channel.

To add a bot to a channel:

  1. In the Telegram app, access the target channel and select the channel name.
  2. Set the channel as a public channel.
  3. Select Administrators > Add Admin.
  4. Search for the bot's username and select it.
  5. Select the checkmark on the top-right corner to add the bot to the channel.

Get the Chat ID

You can only use @channelusername on public channels. To interact with a Telegram group, you need that group's Chat ID.

There are three ways to get that ID:

  1. From the Telegram Trigger: Use the Telegram Trigger node in your workflow to get a Chat ID. This node can trigger on different events and returns a Chat ID on successful execution.
  2. From your web browser: Open Telegram in a web browser and open the group chat. The group's Chat ID is the series of digits behind the letter "g." Prefix your group Chat ID with a - when you enter it in n8n.
  3. Invite Telegram's @RawDataBot to the group: Once you add it, the bot outputs a JSON file that includes a chat object. The id for that object is the group Chat ID. Then remove the RawDataBot from your group.

Send more than 30 messages per second

The Telegram API has a limitation of sending only 30 messages per second. Follow these steps to send more than 30 messages:

  1. Loop Over Items node: Use the Loop Over Items node to get at most 30 chat IDs from your database.
  2. Telegram node: Connect the Telegram node with the Loop Over Items node. Use the Expression Editor to select the Chat IDs from the Loop Over Items node.
  3. Code node: Connect the Code node with the Telegram node. Use the Code node to wait for a few seconds before fetching the next batch of chat IDs (see the sketch below). Connect this node with the Loop Over Items node.
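For step 3, a minimal Code node sketch for the wait could look like this (adjust the delay to the size of your batches; this is one possible approach, not the only one):

```js
// Pause briefly so the next batch of chat IDs stays under Telegram's
// limit of roughly 30 messages per second, then pass items through unchanged.
const delayMs = 2000;

await new Promise((resolve) => setTimeout(resolve, delayMs));

return $input.all();
```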

You can also use this workflow.

Remove the n8n attribution from sent messages

If you're using the node to send Telegram messages, the message automatically gets an n8n attribution appended to the end:

This message was sent automatically with n8n

To remove this attribution:

  1. In the node's Additional Fields section, select Add Field.
  2. Select Append n8n attribution.
  3. Turn the toggle off.

Refer to Send Message additional fields for more information.


AI Transform

URL: llms-txt#ai-transform

Contents:

  • Node parameters
    • Instructions
    • Transformation Code
  • Templates and examples

Use the AI Transform node to generate code snippets based on your prompt. The AI is context-aware, understanding the workflow's nodes and their data types.

Available only on Cloud plans.

Instructions

Enter your prompt for the AI and click the Generate code button to automatically populate the Transformation Code. For example, you can specify how you want to process or categorize your data. Refer to Writing good prompts for more information.

The prompt should be in plain English and under 500 characters.

Transformation Code

The code snippet generated by the node is read-only. To edit this code, adjust your prompt in Instructions or copy and paste it into a Code node.
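To give a sense of the output, for a prompt like "Add a totalPrice field by multiplying price and quantity", the generated Transformation Code might look roughly like the snippet below. This is a hypothetical illustration: the actual code depends on your prompt, your workflow's nodes, and their data.

```js
// Hypothetical example of generated transformation code
for (const item of $input.all()) {
  item.json.totalPrice = item.json.price * item.json.quantity;
}
return $input.all();
```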

Templates and examples

Customer Support WhatsApp Bot with Google Docs Knowledge Base and Gemini AI

View template details

Explore n8n Nodes in a Visual Reference Library

View template details

Parse Gmail Inbox and Transform into Todoist tasks with Solve Propositions

View template details

Browse AI Transform integration templates, or search all templates


Storyblok credentials

URL: llms-txt#storyblok-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using Content API key
  • Using Management API key

You can use these credentials to authenticate the following nodes:

Create a Storyblok account.

Supported authentication methods

  • Content API key: For read-only access
  • Management API key: For full CRUD operations

n8n supports Content API v1 only.

Refer to Storyblok's Content v1 API documentation and Management API documentation for more information about the services.

Using Content API key

To configure this credential, you'll need:

  • A Content API Key: Go to your Storyblok workspace's Settings > Access Tokens to get an API key. Choose an Access Level of either Public (version=published) or Preview (version=published and version=draft). Enter this access token as your API Key. Refer to How to retrieve and generate access tokens for more detailed instructions.

Refer to Content v1 API Authentication for more information about supported operations with each Access Level.

Using Management API key

To configure this credential, you'll need:

  • A Personal Access Token: Go to My Account > Personal access tokens to generate a new access token. Enter this access token as your Personal Access Token.

Embeddings Cohere node

URL: llms-txt#embeddings-cohere-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources

Use the Embeddings Cohere node to generate embeddings for a given text.

On this page, you'll find the node parameters for the Embeddings Cohere node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model to use to generate the embedding. Choose from:
    • Embed-English-v2.0 (4096 Dimensions)
    • Embed-English-Light-v2.0 (1024 Dimensions)
    • Embed-Multilingual-v2.0 (768 Dimensions)

Learn more about available models in Cohere's models documentation.

Templates and examples

Automate Sales Cold Calling Pipeline with Apify, GPT-4o, and WhatsApp

View template details

Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG

by Ezema Kingsley Chibuzo

View template details

Build a Document QA System with RAG using Milvus, Cohere, and OpenAI for Google Drive

View template details

Browse Embeddings Cohere integration templates, or search all templates

Refer to Langchain's Cohere embeddings documentation for more information about the service.

View n8n's Advanced AI documentation.


Linear Trigger node

URL: llms-txt#linear-trigger-node

Contents:

  • Events

Linear is a SaaS issue tracking tool.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Linear Trigger integrations page.

  • Comment Reaction
  • Cycle
  • Issue
  • Issue Comment
  • Issue Label
  • Project

Gmail Trigger node common issues

URL: llms-txt#gmail-trigger-node-common-issues

Contents:

  • 401 unauthorized error

Here are some common errors and issues with the Gmail Trigger node and steps to resolve or troubleshoot them.

401 unauthorized error

The full text of the error looks like this:

This error occurs when there's an issue with the credential you're using and its scopes or permissions.

  1. For OAuth2 credentials, make sure you've enabled the Gmail API in APIs & Services > Library. Refer to Google OAuth2 Single Service - Enable APIs for more information.
  2. For Service Account credentials:
    1. Enable domain-wide delegation.
    2. Make sure you add the Gmail API as part of the domain-wide delegation configuration.

Examples:

Example 1 (unknown):

401 - {"error":"unauthorized_client","error_description":"Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested."}

Environment variables overview

URL: llms-txt#environment-variables-overview

This section lists the environment variables that you can use to change n8n's configuration settings when self-hosting n8n.

File-based configuration

You can provide a configuration file for n8n. You can also append _FILE to certain variables to provide their configuration in a separate file.


3. Filtering Orders

URL: llms-txt#3.-filtering-orders

Contents:

  • Add If node before the Airtable node
  • Configure the If node
  • Insert data into Airtable
  • What's next?

In this step of the workflow, you will learn how to filter data using conditional logic and how to use expressions in nodes using the If node.

After this step, your workflow should look like this:

View workflow file

To insert only processing orders into Airtable, we need to filter our data by orderStatus. Basically, we want to tell the program that if the orderStatus is processing, then insert all records with this status into Airtable; else (if the orderStatus isn't processing), calculate the sum of all orders with the other orderStatus (booked).

This if-then-else command is conditional logic. In n8n workflows, you can add conditional logic with the If node, which splits a workflow conditionally based on comparison operations.

If you need to filter data on more than boolean values (true and false), use the Switch node. The Switch node is similar to the If node, but supports multiple output connectors.

Add If node before the Airtable node

First, let's add an If node between the connection from the HTTP Request node to the Airtable node:

  1. Hover over the arrow connecting the HTTP Request node and the Airtable node.
  2. Select the + sign between the HTTP Request node and the Airtable node.

Configure the If node

Selecting the plus removes the connection between the HTTP Request node and the Airtable node. Now, let's add an If node connected to the HTTP Request node:

  1. Search for the If node.
  2. Select it when it appears in the search.

For the If node, we'll use an expression.

An expression is a string of characters and symbols in a programming language that can be evaluated to get a value, often according to its input. In n8n workflows, you can use expressions in a node to refer to another node for input data. In our example, the If node references the data output by the HTTP Request node.
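To make this concrete, both of the expressions below resolve to each order's orderStatus value (such as processing or booked). The first uses the incoming item directly; the second references the HTTP Request node by name (shown here as a sketch; in this tutorial you only need the first form):

```
{{ $json.orderStatus }}

{{ $('HTTP Request').item.json.orderStatus }}
```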

In the If node window, configure the parameters:

  • Set the value1 placeholder to {{ $json.orderStatus }} with the following steps:

  1. Hover over the value1 field.

  2. Select the Expression tab on the right side of the value1 field.

  3. Next, open the expression editor by selecting the link icon:

Opening the Expression Editor

  4. Use the left-side panel to select HTTP Request > orderStatus and drag it into the Expression field in the center of the window.

Expression Editor in the If node

  5. Once you add the expression, close the Edit Expression dialog.

  • Operation: Select String > is equal to

  • Set the value2 placeholder to processing.

Make sure to select the correct data type (boolean, date & time, number, or string) when you select the Operation.

Select Execute step to test the If node.

Your results should look like this:

Note that the orders with a processing order status should show in the True Branch output, while the orders with a booked order status should show in the False Branch output.

Close the If node detail view when you're finished.

Insert data into Airtable

Next, we want to insert this data into Airtable. Remember what Nathan said at the end of the Inserting data into Airtable lesson?

I actually need to insert only processing orders in the table...

Since Nathan only needs the processing orders in the table, we'll connect the Airtable node to the If node's true connector.

In this case, since the Airtable node is already on our canvas, select the If node true connector and drag it to the Airtable node.

It's a good idea at this point to retest the Airtable node. Before you do, open your table in Airtable and delete all existing rows. Then open the Airtable node window in n8n and select Execute step.

Review your data in Airtable to be sure your workflow only added the correct orders (those with orderStatus of processing). There should be 14 records now instead of 30.

At this stage, your workflow should look like this:

View workflow file

Nathan 🙋: This If node is so useful for filtering data! Now I have all the information about processing orders. I actually only need the employeeName and orderID, but I guess I can keep all the other fields just in case.

You 👩‍🔧: Actually, I wouldn't recommend doing that. Inserting more data requires more computational power, makes data transfer slower, and takes up more storage in your table. In this particular case, 14 records with 5 fields might not seem like it'd make a significant difference, but if your business grows to thousands of records and dozens of fields, things add up and even one extra column can affect performance.

Nathan 🙋: Oh, that's good to know. Can you select only two fields from the processing orders?

You 👩‍🔧: Sure, I'll do that in the next step.


Zendesk Trigger node

URL: llms-txt#zendesk-trigger-node

Zendesk is a support ticketing system, designed to help track, prioritize, and solve customer support interactions. More than just a help desk, Zendesk Support helps nurture customer relationships with personalized, responsive support across any channel.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Zendesk Trigger integrations page.


Hosting n8n on Google Kubernetes Engine

URL: llms-txt#hosting-n8n-on-google-kubernetes-engine

Contents:

  • Prerequisites
  • Create project
  • Enable the Kubernetes Engine API
  • Create a cluster
  • Set Kubectl context
  • Clone configuration repository
  • Configure Postgres
    • Create a volume for persistent storage
    • Postgres environment variables
  • Configure n8n

Google Cloud offers several options suitable for hosting n8n, including Cloud Run (optimized for running containers), Compute Engine (VMs), and Kubernetes Engine (containers running with Kubernetes).

This guide uses the Google Kubernetes Engine (GKE) as the hosting option. If you want to use Cloud Run, refer to these instructions.

Most of the steps in this guide use the Google Cloud UI, but you can also use the gcloud command line tool instead to undertake all the steps.

Self-hosting knowledge prerequisites

Self-hosting n8n requires technical knowledge, including:

  • Setting up and configuring servers and containers
  • Managing application resources and scaling
  • Securing servers and applications
  • Configuring n8n

n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends n8n Cloud.

Latest and Next versions

n8n releases a new minor version most weeks. The latest version is for production use. next is the most recent release. You should treat next as a beta: it may be unstable. To report issues, use the forum.

Current latest: 1.118.2
Current next: 1.119.0

Create project

GCP encourages you to create projects to logically organize resources and configuration. Create a new project for your n8n deployment from your Google Cloud Console: select the project dropdown menu and then the NEW PROJECT button. Then select the newly created project. As you follow the other steps in this guide, make sure you have the correct project selected.

Enable the Kubernetes Engine API

GKE isn't enabled by default. Search for "Kubernetes" in the top search bar and select "Kubernetes Engine" from the results.

Select ENABLE to enable the Kubernetes Engine API for this project.

Create a cluster

From the GKE service page, select Clusters > CREATE. Make sure you select the "Standard" cluster option; n8n doesn't work with an "Autopilot" cluster. You can leave the cluster configuration on defaults unless there's anything specifically you need to change, such as location.

Set Kubectl context

The rest of the steps in this guide require you to set the GCP instance as the Kubectl context. You can find the connection details for a cluster instance by opening its details page and selecting CONNECT. The displayed code snippet shows a connection string for the gcloud CLI tool. Paste and run the code snippet in the gcloud CLI to change your local Kubernetes settings to use the new gcloud cluster.

Clone configuration repository

Kubernetes and n8n require a series of configuration files. You can clone these from this repository locally. The following steps explain the file configuration and how to add your information.

Clone the repository with the following command:

And change directory:

Configure Postgres

For larger scale n8n deployments, Postgres provides a more robust database backend than SQLite.

Create a volume for persistent storage

To maintain data between pod restarts, the Postgres deployment needs a persistent volume. Running Postgres on GCP requires a specific Kubernetes Storage Class. You can read this guide for specifics, but the storage.yaml manifest creates it for you. You may want to change the regions where the storage is created under the allowedTopologies > matchLabelExpressions > values key. By default, they're set to us-central.

Postgres environment variables

Postgres needs some environment variables set to pass to the application running in the containers.

The example postgres-secret.yaml file contains placeholders you need to replace with your own values. Postgres will use these details when creating the database.

The postgres-deployment.yaml manifest then uses the values from this manifest file to send to the application pods.

Configure n8n

Create a volume for file storage

While not essential for running n8n, using persistent volumes is required for:

  • Using nodes that interact with files, such as the binary data node.
  • Persisting manual n8n encryption keys between restarts. This saves a file containing the key into file storage during startup.

The n8n-claim0-persistentvolumeclaim.yaml manifest creates this, and the n8n Deployment mounts that claim in the volumes section of the n8n-deployment.yaml manifest.

Kubernetes lets you optionally specify the minimum resources application containers need and the limits they can run to. The example YAML files cloned above contain the following in the resources section of the n8n-deployment.yaml and postgres-deployment.yaml files:

This defines a minimum of 250mb per container, a maximum of 500mb, and lets Kubernetes handle CPU. You can change these values to match your own needs. As a guide, here are the resources values for the n8n cloud offerings:

  • Start: 320mb RAM, 10 millicore CPU burstable
  • Pro (10k executions): 640mb RAM, 20 millicore CPU burstable
  • Pro (50k executions): 1280mb RAM, 80 millicore CPU burstable

Optional: Environment variables

You can configure n8n settings and behaviors using environment variables.

Create an n8n-secret.yaml file. Refer to Environment variables for n8n environment variables details.

The two deployment manifests (n8n-deployment.yaml and postgres-deployment.yaml) define the n8n and Postgres applications to Kubernetes.

The manifests define the following:

  • The environment variables defined earlier, sent to each application pod.
  • The container image to use.
  • Resource consumption limits, set with the resources object.
  • The volumes defined earlier, and volumeMounts to define the path in the container where the volumes are mounted.
  • Scaling and restart policies. The example manifests define one instance of each pod; change this to meet your needs.

The two service manifests (postgres-service.yaml and n8n-service.yaml) expose the services to the outside world through the Kubernetes load balancer, using ports 5432 and 5678 respectively.

Send to Kubernetes cluster

Send all the manifests to the cluster with the following command:

You may see an error message about not finding an "n8n" namespace, as that resource isn't ready yet. You can run the same command again, or apply the namespace manifest first with the following command:

n8n typically operates on a subdomain. Create a DNS record with your provider for the subdomain and point it to the IP address of the n8n service. Find the IP address of the n8n service from the Services & Ingress menu item of the cluster you want to use under the Endpoints column.

Read this GKE tutorial for more details on how reserved IP addresses work with GKE and Kubernetes resources.

Remove the resources created by the manifests with the following command:

Examples:

Example 1 (unknown):

git clone https://github.com/n8n-io/n8n-hosting.git

Example 2 (unknown):

cd n8n-hosting/kubernetes

Example 3 (unknown):

…
allowedTopologies:
  - matchLabelExpressions:
      - key: failure-domain.beta.kubernetes.io/zone
        values:
          - us-central1-b
          - us-central1-c

Example 4 (unknown):

…
volumes:
  - name: n8n-claim0
    persistentVolumeClaim:
      claimName: n8n-claim0
…

Asana node

URL: llms-txt#asana-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Asana node to automate work in Asana, and integrate Asana with other applications. n8n has built-in support for a wide range of Asana features, including creating, updating, deleting, and getting users, tasks, projects, and subtasks.

On this page, you'll find a list of operations the Asana node supports and links to more resources.

Refer to Asana credentials for guidance on setting up authentication.

Update to 1.22.2 or above

Due to changes in Asana's API, some operations in this node stopped working on 17th January 2023. Upgrade to n8n 1.22.2 or above.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Project
    • Create a new project
    • Delete a project
    • Get a project
    • Get all projects
    • Update a project
  • Subtask
    • Create a subtask
    • Get all subtasks
  • Task
    • Create a task
    • Delete a task
    • Get a task
    • Get all tasks
    • Move a task
    • Search for tasks
    • Update a task
  • Task Comment
    • Add a comment to a task
    • Remove a comment from a task
  • Task Tag
    • Add a tag to a task
    • Remove a tag from a task
  • Task Project
    • Add a task to a project
    • Remove a task from a project
  • User
    • Get a user
    • Get all users

Templates and examples

Automated Customer Service Ticket Creation & Notifications with Asana & WhatsApp

View template details

Sync tasks data between Notion and Asana

View template details

Receive updates when an event occurs in Asana

View template details

Browse Asana integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Think Tool node

URL: llms-txt#think-tool-node

Contents:

  • Templates and examples
    • Prompting example
  • Related resources

The Think Tool node is a tool that invites an agent to do some thinking before answering. Essentially, it allows the agent to reflect on the question by talking to itself before providing an answer. In particular, this is useful for complex queries that require a thoughtful response.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Templates and examples

Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram

View template details

Generate & Auto-post AI Videos to Social Media with Veo3 and Blotato

View template details

Automated Stock Analysis Reports with Technical & News Sentiment using GPT-4o

View template details

Browse Think Tool integration templates, or search all templates

Prompting example

As an example, say you ask the agent:

Jack is looking at Anne. Anne is looking at George. Jack is married, George is not, and we don't know if Anne is married. Is a married person looking at an unmarried person?

The agent will do its thinking with the tool:

We have the following information: 1. Jack is looking at Anne. 2. Anne is looking at George. 3. Jack is married. 4. George is not married. 5. We don't know if Anne is married or not.

We need to determine if a married person is looking at an unmarried person. Let's consider both cases for Anne:

Case 1: Anne is married.

  • If Anne is married, then she (married) is looking at George (unmarried). In this case, a married person (Anne) is looking at an unmarried person (George).

Case 2: Anne is unmarried.

  • If Anne is unmarried, then Jack (married) is looking at Anne (unmarried). In this case, a married person (Jack) is looking at an unmarried person (Anne).

In both scenarios, we find that a married person is looking at an unmarried person. Therefore, the answer to the question is yes.

and then afterward provide the answer:

Yes, a married person is looking at an unmarried person. In both possible scenarios regarding Anne's marital status, we find that a married individual (either Jack or Anne) is looking at an unmarried individual (either George or Anne).

Refer to LangChain's documentation on tools for more information about tools in LangChain.

View n8n's Advanced AI documentation.


Disable the public REST API

URL: llms-txt#disable-the-public-rest-api

Contents:

  • Disable the API playground
  • Related resources

The n8n public REST API allows you to programmatically perform many of the same tasks as you can in the n8n GUI.

If you don't plan on using this API, n8n recommends disabling it to improve the security of your n8n installation.

To disable the public REST API, set the N8N_PUBLIC_API_DISABLED environment variable to true, for example:

Disable the API playground

To disable the API playground, set the N8N_PUBLIC_API_SWAGGERUI_DISABLED environment variable to true, for example:

Refer to Deployment environment variables for more information on these environment variables.

Refer to Configuration for more information on setting environment variables.

Examples:

Example 1 (bash):

export N8N_PUBLIC_API_DISABLED=true

Example 2 (bash):

export N8N_PUBLIC_API_SWAGGERUI_DISABLED=true

MCP Server Trigger node

URL: llms-txt#mcp-server-trigger-node

Contents:

  • How the MCP Server Trigger node works
  • Node parameters
    • MCP URL
    • Authentication
    • Path
  • Templates and examples
    • Integrating with Claude Desktop
  • Limitations
    • Configuring the MCP Server Trigger node with webhook replicas
  • Related resources

Use the MCP Server Trigger node to allow n8n to act as a Model Context Protocol (MCP) server, making n8n tools and workflows available to MCP clients.

You can find authentication information for this node here.

How the MCP Server Trigger node works

The MCP Server Trigger node acts as an entry point into n8n for MCP clients. It operates by exposing a URL that MCP clients can interact with to access n8n tools.

Unlike conventional trigger nodes, which respond to events and pass their output to the next connected node, the MCP Server Trigger node only connects to and executes tool nodes. Clients can list the available tools and call individual tools to perform work.

You can expose n8n workflows to clients by attaching them with the Custom n8n Workflow Tool node.

Server-Sent Events (SSE) and streamable HTTP support

The MCP Server Trigger node supports both Server-Sent Events (SSE), a long-lived transport built on top of HTTP, and streamable HTTP for connections between clients and the server. It currently doesn't support standard input/output (stdio) transport.

Use these parameters to configure your node.

The MCP Server Trigger node has two MCP URLs: test and production. n8n displays the URLs at the top of the node panel.

Select Test URL or Production URL to toggle which URL n8n displays.

  • Test: n8n registers a test MCP URL when you select Listen for Test Event or Execute workflow, if the workflow isn't active. When you call the MCP URL, n8n displays the data in the workflow.
  • Production: n8n registers a production MCP URL when you activate the workflow. When using the production URL, n8n doesn't display the data in the workflow. You can still view workflow data for a production execution: select the Executions tab in the workflow, then select the workflow execution you want to view.

You can require authentication for clients connecting to your MCP URL. Choose from these authentication methods:

  • Bearer auth
  • Header auth

Refer to the HTTP request credentials for more information on setting up each credential type.

By default, this field contains a randomly generated MCP URL path, to avoid conflicts with other MCP Server Trigger nodes.

You can manually specify a URL path, including adding route parameters. For example, you may need to do this if you use n8n to prototype an API and want consistent endpoint URLs.

Templates and examples

Build an MCP Server with Google Calendar and Custom Functions

View template details

Build your own N8N Workflows MCP Server

View template details

Build a Personal Assistant with Google Gemini, Gmail and Calendar using MCP

View template details

Browse MCP Server Trigger integration templates, or search all templates

Integrating with Claude Desktop

You can connect to the MCP Server Trigger node from Claude Desktop by running a gateway, such as mcp-remote, that bridges Claude Desktop's stdio transport to the node's remote SSE or streamable HTTP endpoint.

To do so, add the following to your Claude Desktop configuration:

Be sure to replace the <MCP_URL> and <MCP_BEARER_TOKEN> placeholders with the values from your MCP Server Trigger node parameters and credentials.

Configuring the MCP Server Trigger node with webhook replicas

The MCP Server Trigger node relies on Server-Sent Events (SSE) or streamable HTTP, which require the same server instance to handle persistent connections. This can cause problems when running n8n in queue mode depending on your webhook processor configuration:

  • If you use queue mode with a single webhook replica, the MCP Server Trigger node works as expected.
  • If you run multiple webhook replicas, you need to route all /mcp* requests to a single, dedicated webhook replica. Create a separate replica set with one webhook container for MCP requests. Afterward, update your ingress or load balancer configuration to direct all /mcp* traffic to that instance.

Caution when running with multiple webhook replicas

If you run an MCP Server Trigger node with multiple webhook replicas and don't route all /mcp* requests to a single, dedicated webhook replica, your SSE and streamable HTTP connections will frequently break or fail to reliably deliver events.

n8n also provides an MCP Client Tool node that allows you to connect your n8n AI agents to external tools.

Refer to the MCP documentation and MCP specification for more details about the protocol, servers, and clients.

Here are some common errors and issues with the MCP Server Trigger node and steps to resolve or troubleshoot them.

Running the MCP Server Trigger node with a reverse proxy

When running n8n behind a reverse proxy like nginx, you may experience problems if the MCP endpoint isn't configured for SSE or streamable HTTP.

Specifically, you need to disable proxy buffering for the endpoint. Other items you might want to adjust include disabling gzip compression (n8n handles this itself), disabling chunked transfer encoding, and setting the Connection to an empty string to remove it from the forwarded headers. Explicitly disabling these in the MCP endpoint ensures they're not inherited from other places in your nginx configuration.

An example nginx location block for serving MCP traffic with these settings may look like this:

Examples:

Example 1 (json):

{
  "mcpServers": {
    "n8n": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "<MCP_URL>",
        "--header",
        "Authorization: Bearer ${AUTH_TOKEN}"
      ],
      "env": {
        "AUTH_TOKEN": "<MCP_BEARER_TOKEN>"
      }
    }
  }
}

Example 2 (nginx):

location /mcp/ {
    proxy_http_version          1.1;
    proxy_buffering             off;
    gzip                        off;
    chunked_transfer_encoding   off;

    proxy_set_header            Connection '';

    # The rest of your proxy headers and settings
    # . . .
}

Prerequisites

URL: llms-txt#prerequisites

Contents:

  • CPU considerations
  • Database considerations
    • Best practices
  • Memory considerations

Embed requires an embed license. For more information about when to use Embed, as well as costs and licensing processes, refer to Embed on the n8n website.

The requirements provided here are an example based on n8n Cloud and are for illustrative purposes only. Your requirements may vary depending on the number of users, workflows, and executions. Contact n8n for more information.

| Component | Sizing | Supported |
| --- | --- | --- |
| CPU/vCPU | Minimum 10 millicores (burstable), scaling as needed | Any public or private cloud |
| Database | 512 MB - 4 GB SSD | SQLite or PostgreSQL |
| Memory | 320 MB - 2 GB | |

CPU considerations

n8n isn't CPU intensive, so even small instances from providers such as AWS and GCP should be enough for most use cases. Memory requirements usually outweigh CPU requirements, so focus your resources there when planning your infrastructure.

Database considerations

n8n uses its database to store credentials, past executions, and workflows.

A core feature of n8n is the flexibility to choose a database. All the supported databases have different advantages and disadvantages, which you have to consider individually and pick the one that best suits your needs. By default n8n creates an SQLite database if no database exists at the given location.

n8n recommends that every n8n instance have a dedicated database. This helps to prevent dependencies and potential performance degradation. If it isn't possible to provide a dedicated database for every n8n instance, n8n recommends making use of Postgres's schema feature.

For Postgres, the database must already exist on the DB-instance. The database user for the n8n process needs to have full permissions on all tables that they're using or creating. n8n creates and maintains the database schema.

  • SSD storage.
  • In containerized cloud environments, ensure that the volume is persisted and mounted when stopping/starting a container. If not, all data is lost.
  • If using Postgres, don't use the tablePrefix configuration option. It will be deprecated in the near future.
  • Pay attention to the changelog of new versions and consider reverting migrations before downgrading.
  • Set up at least the basic database security and stability mechanisms such as IP allow lists and backups.

Memory considerations

An n8n instance doesn't typically require large amounts of available memory. For example, an idle n8n Cloud instance requires around 100MB. It's the nature of your workflows and the data being processed that determines your memory requirements.

For example, while most nodes just pass data to the next node in the workflow, the Code node creates a pre-processing and post-processing copy of the data. When dealing with large binary files, this can consume all available resources.


CrowdStrike credentials

URL: llms-txt#crowdstrike-credentials

Contents:

  • Prerequisites
  • Authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a CrowdStrike account.

Authentication methods

Refer to CrowdStrike's documentation for more information about the service. Their documentation is behind a login, so you must log in to your account on their website to access the API documentation.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

  • The URL of your CrowdStrike instance
  • A Client ID: Generated by creating a new API client in CrowdStrike under Support > API Clients and Keys.
  • A Client Secret: Generated when creating the API client in CrowdStrike under Support > API Clients and Keys.

When setting up your API client, grant it the usermgmt:read scope. n8n relies on this to test that the credential is working.

A broad outline of the appropriate steps is available publicly on the CrowdStrike blog: Getting Access to the CrowdStrike API. CrowdStrike's full documentation is behind a login, so you must log in to your account to access the full API documentation.


Freshdesk node

URL: llms-txt#freshdesk-node

Contents:

  • Operations
  • Templates and examples

Use the Freshdesk node to automate work in Freshdesk and integrate Freshdesk with other applications. n8n has built-in support for a wide range of Freshdesk features, including creating, updating, deleting, and getting contacts and tickets.

On this page, you'll find a list of operations the Freshdesk node supports and links to more resources.

Refer to Freshdesk credentials for guidance on setting up authentication.

  • Contact
    • Create a new contact
    • Delete a contact
    • Get a contact
    • Get all contacts
    • Update a contact
  • Ticket
    • Create a new ticket
    • Delete a ticket
    • Get a ticket
    • Get all tickets
    • Update a ticket

Templates and examples

Create ticket on specific customer messages in Telegram

View template details

Create a new Freshdesk ticket

View template details

Automate CSAT Surveys with Freshdesk & Store Responses in Google Sheets

View template details

Browse Freshdesk integration templates, or search all templates


Kitemaker node

URL: llms-txt#kitemaker-node

Contents:

  • Operations
  • Templates and examples

Use the Kitemaker node to automate work in Kitemaker, and integrate Kitemaker with other applications. n8n has built-in support for a wide range of Kitemaker features, including retrieving data on organizations, spaces and users, as well as creating, getting, and updating work items.

On this page, you'll find a list of operations the Kitemaker node supports and links to more resources.

Refer to Kitemaker credentials for guidance on setting up authentication.

  • Organization
    • Retrieve data on the logged-in user's organization.
  • Space
    • Retrieve data on all the spaces in the logged-in user's organization.
  • User
    • Retrieve data on all the users in the logged-in user's organization.
  • Work Item
    • Create
    • Get
    • Get All
    • Update

Templates and examples

Browse Kitemaker integration templates, or search all templates


AWS Comprehend node

URL: llms-txt#aws-comprehend-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the AWS Comprehend node to automate work in AWS Comprehend, and integrate AWS Comprehend with other applications. n8n has built-in support for a wide range of AWS Comprehend features, including identifying and analyzing texts.

On this page, you'll find a list of operations the AWS Comprehend node supports and links to more resources.

Refer to AWS Comprehend credentials for guidance on setting up authentication.

  • Identify the dominant language
  • Analyse the sentiment of the text

Templates and examples

Browse AWS Comprehend integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Cloudflare node

URL: llms-txt#cloudflare-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Cloudflare node to automate work in Cloudflare, and integrate Cloudflare with other applications. n8n has built-in support for a wide range of Cloudflare features, including deleting, getting, and uploading zone certificates.

On this page, you'll find a list of operations the Cloudflare node supports and links to more resources.

Refer to Cloudflare credentials for guidance on setting up authentication.

  • Zone Certificate
    • Delete
    • Get
    • Get Many
    • Upload

Templates and examples

Report phishing websites to Steam and CloudFlare

View template details

KV - Cloudflare Key-Value Database Full API Integration Workflow

View template details

Extract University Term Dates from Excel using CloudFlare Markdown Conversion

View template details

Browse Cloudflare integration templates, or search all templates

Refer to Cloudflare's API documentation on zone-level authentication for more information on this service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Tools AI Agent node

URL: llms-txt#tools-ai-agent-node

Contents:

  • Node parameters
    • Prompt
    • Require Specific Output Format
  • Node options
    • System Message
    • Max Iterations
    • Return Intermediate Steps
    • Automatically Passthrough Binary Images
    • Enable Streaming
  • Templates and examples

The Tools Agent uses external tools and APIs to perform actions and retrieve information. It can understand the capabilities of different tools and determine which tool to use depending on the task. This agent helps integrate LLMs with various external services and databases.

This agent has an enhanced ability to work with tools and can ensure a standard output format.

The Tools Agent implements LangChain's tool calling interface. This interface describes available tools and their schemas. The agent also has improved output parsing capabilities, as it passes the parser to the model as a formatting tool.

Refer to AI Agent for more information on the AI Agent node itself.

You can use this agent with the Chat Trigger node. Attach a memory sub-node so that users can have an ongoing conversation with multiple queries. Memory doesn't persist between sessions.

This agent supports the following chat models:

The Tools Agent can use the following tools...

Configure the Tools Agent using the following parameters.

Select how you want the node to construct the prompt (also known as the user's query or input from the chat).

  • Take from previous node automatically: If you select this option, the node expects an input from a previous node called chatInput.
  • Define below: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the Prompt (User Message) field.

Require Specific Output Format

This parameter controls whether you want the node to require a specific output format. When turned on, n8n prompts you to connect one of these output parsers to the node:

Refine the Tools Agent node's behavior using these options:

If you'd like to send a message to the agent before the conversation starts, enter the message you'd like to send.

Use this option to guide the agent's decision-making.

Enter the number of times the model should run to try and generate a good answer from the user's prompt.

Return Intermediate Steps

Select whether to include intermediate steps the agent took in the final output (turned on) or not (turned off).

This could be useful for further refining the agent's behavior based on the steps it took.

Automatically Passthrough Binary Images

Use this option to control whether binary images should be automatically passed through to the agent as image type messages (turned on) or not (turned off).

Enable Streaming

When enabled, the AI Agent sends data back to the user in real time as it generates the answer. This is useful for long-running generations. Streaming is enabled by default.

Streaming requirements

For streaming to work, your workflow must use a trigger that supports streaming responses, such as the Chat Trigger or Webhook node with Response Mode set to Streaming.

Templates and examples

Refer to the main AI Agent node's Templates and examples section.

Dynamic parameters for tools with $fromAI()

To learn how to dynamically populate parameters for app node tools, refer to Let AI specify tool parameters with $fromAI().

For common questions or issues and suggested solutions, refer to Common issues.


TYPE n8n_scaling_mode_queue_jobs_completed counter

URL: llms-txt#type-n8n_scaling_mode_queue_jobs_completed-counter

n8n_scaling_mode_queue_jobs_completed 0


Twilio credentials

URL: llms-txt#twilio-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using Auth Token
  • Using API key
    • Selecting an API key type

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • Auth token: Twilio recommends this method for local testing only.
  • API key: Twilio recommends this method for production.

Refer to Twilio's API documentation for more information about the service.

To configure this credential, you'll need a Twilio account and:

  • Your Twilio Account SID
  • Your Twilio Auth Token

To set up the credential:

  1. In n8n, select Auth Token as the Auth Type.
  2. In Twilio, go to Console Dashboard > Account Info.
  3. Copy your Account SID and enter this in your n8n credential. This acts as a username.
  4. Copy your Auth Token and enter this in your n8n credential. This acts as a password.

Refer to Auth Tokens and How to Change Them for more information.

To configure this credential, you'll need a Twilio account and:

  • Your Twilio Account SID
  • An API Key SID: Generated when you create an API key.
  • An API Key Secret: Generated when you create an API key.

To set up the credential:

  1. In n8n, select API Key as the Auth Type.
  2. In Twilio, go to Console Dashboard > Account Info.
  3. Copy your Account SID and enter it in your n8n credential.
  4. In Twilio, go to your account's API keys & tokens page.
  5. Select Create API Key.
  6. Enter a Friendly name for your API key, like n8n integration.
  7. Select your Key type. n8n works with either Main or Standard. Refer to Selecting an API key type for more information.
  8. Select Create API Key to finish creating the key.
  9. On the Copy secret key page, copy the SID displayed with the key and enter it in your n8n credential API Key SID.
  10. On the Copy secret key page, copy the Secret displayed with the key and enter it in your n8n credential API Key Secret.

Refer to Create an API key for more detailed instructions.

Selecting an API key type

When you create a Twilio API key, you must select a key type. The n8n credential works with Main and Standard key types.

Here are more details on the different API key types:

  • Main: This key type gives you the same level of access as using your Account SID and Auth Token in API requests.
  • Standard: This key type gives you access to all the functionality in Twilio's APIs except the API key resources and Account resources.
  • Restricted: This key type is in beta. n8n hasn't tested the credential against this key type; if you try it, let us know if you run into any issues.

Refer to Types of API keys for more information on the key types.


New York is the default value if not set

URL: llms-txt#new-york-is-the-default-value-if-not-set

GENERIC_TIMEZONE=Europe/Berlin


Wise Trigger node

URL: llms-txt#wise-trigger-node

Contents:

  • Events

Wise allows you to transfer money abroad with low-cost money transfers, receive money with international account details, and track transactions on your phone.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Wise Trigger integrations page.

Events

  • Triggered every time a balance account is credited
  • Triggered every time a balance account is credited or debited
  • Triggered every time a transfer's list of active cases is updated
  • Triggered every time a transfer's status is updated

Building community nodes

URL: llms-txt#building-community-nodes

Contents:

  • Standards
  • Submit your node for verification by n8n

Community nodes are npm packages, hosted in the npm registry.

When building a node to submit to the community node repository, use the following resources to make sure your node setup is correct:

Developing with the n8n-node tool ensures that your node adheres to the following standards required to make your node available in the n8n community node repository:

  • Make sure the package name starts with n8n-nodes- or @<scope>/n8n-nodes-. For example, n8n-nodes-weather or @weatherPlugins/n8n-nodes-weather.
  • Include n8n-community-node-package in your package keywords.
  • Make sure that you add your nodes and credentials to the package.json file inside the n8n attribute.
  • Check your node using the linter (npm run lint) and test it locally (npm run dev) to ensure it works.
  • Submit the package to the npm registry. Refer to npm's documentation on Contributing packages to the registry for more information.

Submit your node for verification by n8n

n8n vets verified community nodes. Users can discover and install verified community nodes from the nodes panel in n8n. These nodes need to adhere to certain technical and UX standards and constraints.

Before submitting your node for review by n8n, you must:

  • Start from the n8n-node tool generated scaffolding. While this isn't strictly required, n8n strongly suggests using the n8n-node CLI tool for any community node you plan to submit for verification. Using the tool ensures that your node follows the expected conventions and adheres to the community node requirements.
  • Make sure that your node follows the technical guidelines for verified community nodes and that all automated checks pass. Specifically, verified community nodes aren't allowed to use any run-time dependencies.
  • Ensure that your node follows the UX guidelines.
  • Make sure that the node has appropriate documentation in the form of a README in the npm package or a related public repository.
  • Submit your node to npm as n8n will fetch it from there for final vetting.

If your node meets all the above requirements, sign up or log in to the n8n Creator Portal and submit your node for verification. Note that n8n reserves the right to reject nodes that compete with any of n8n's paid features, especially enterprise functionality.


Acuity Scheduling credentials

URL: llms-txt#acuity-scheduling-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create an Acuity Scheduling account.

Supported authentication methods

Refer to Acuity's API documentation for more information about working with the service.

To configure this credential, you'll need:

  • A numeric User ID
  • An API Key

Refer to the Acuity API Quick Start authentication instructions to generate an API key and view your User ID.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you need to set this up from scratch, complete the Acuity OAuth2 Account Registration page. Use the Client ID and Client Secret provided from that registration.


Respond to Chat node

URL: llms-txt#respond-to-chat-node

Contents:

  • Node parameters
    • Message
    • Wait for User Reply
  • Node options
    • Add Memory Input Connection
    • Limit Wait Time
  • Related resources
  • Common issues

Use the Respond to Chat node in correspondence with the Chat Trigger node to send a response into the chat and optionally wait for a response from the user. This allows you to have multiple chat interactions within a single execution and enables human-in-the-loop use cases in the chat.

The Respond to Chat node requires a Chat Trigger node to be present in the workflow, with the Response Mode set to 'Using Response Nodes'.

The message to send to the chat.

Wait for User Reply

Set whether the workflow execution should wait for a response from the user (enabled) or continue immediately after sending the message (disabled).

Add Memory Input Connection

Choose whether you want to commit the messages from the Respond to Chat node to a connected memory. Using a shared memory between an agent or chain root node and the Respond to Chat node attaches the same session key to these messages and lets you capture the full message history.

Limit Wait Time

When you enable Wait for User Reply, this option decides whether the workflow automatically resumes execution after a specified time limit (enabled) or keeps waiting for the user indefinitely (disabled).

View n8n's Advanced AI documentation.

For common questions or issues and suggested solutions, refer to Common Issues.


Brandfetch node

URL: llms-txt#brandfetch-node

Contents:

  • Operations
  • Templates and examples

Use the Brandfetch node to automate work in Brandfetch, and integrate Brandfetch with other applications. n8n has built-in support for a wide range of Brandfetch features, including returning a company's information.

On this page, you'll find a list of operations the Brandfetch node supports and links to more resources.

Refer to Brandfetch credentials for guidance on setting up authentication.

  • Return a company's colors
  • Return a company's data
  • Return a company's fonts
  • Return a company's industry
  • Return a company's logo & icon

Templates and examples

Browse Brandfetch integration templates, or search all templates


Community Edition Features

URL: llms-txt#community-edition-features

Contents:

  • Registered Community Edition

The community edition includes almost the complete feature set of n8n, except for the features listed here.

The community edition doesn't include these features:

These features are available on the Enterprise Cloud plan and the self-hosted Enterprise edition. Some of these features are also available on the Starter and Pro Cloud plans.

See pricing for reference.

Registered Community Edition

You can unlock extra features by registering your n8n community edition. You register with your email and receive a license key.

Registering unlocks these features for the community edition:

To register a new community edition instance, select the option during your initial account creation.

To register an existing community edition instance:

  1. Select the three dots icon in the lower-left corner.
  2. Select Settings and then Usage and plan.
  3. Select Unlock to enter your email and then select Send me a free license key.
  4. Check your email for the account you entered.

Once you have a license key, activate it by clicking the button in the license email or by visiting Options > Settings > Usage and plan and selecting Enter activation key.

Once activated, your license will not expire. We may change the unlocked features in the future. This will not impact previously unlocked features.


Workflows

URL: llms-txt#workflows

A workflow is a collection of nodes connected together to automate a process.

If it's your first time building a workflow, you may want to use the quickstart guides to quickly try out n8n features.


Beeminder credentials

URL: llms-txt#beeminder-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API user token

You can use these credentials to authenticate the following node:

Create a Beeminder account.

Supported authentication methods

Refer to Beeminder's API documentation for more information about the service.

Using API user token

To configure this credential, you'll need:

  • A User name: Should match the user who the Auth Token is generated for.
  • A personal Auth Token for that user. Generate this using either method below:

Baserow credentials

URL: llms-txt#baserow-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth

You can use these credentials to authenticate the following node:

Create a Baserow account on any hosted Baserow instance or a self-hosted instance.

Supported authentication methods

Refer to Baserow's documentation for more information about the service.

Refer to Baserow's auto-generated API documentation for more information about the API specifically.

To configure this credential, you'll need:

  • Your Baserow Host
  • A Username and Password to log in with

To set up the credential:

  1. Enter the Host for the Baserow instance:
    • For a Baserow-hosted instance: leave as https://api.baserow.io.
    • For a self-hosted instance: set it to your self-hosted instance's API URL.
  2. Enter the Username for the user account n8n should use.
  3. Enter the Password for that user account.

Refer to Baserow's API Authentication documentation for information on creating user accounts.


Structured Output Parser node

URL: llms-txt#structured-output-parser-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources
  • Common issues

Use the Structured Output Parser node to return fields based on a JSON Schema.

On this page, you'll find the node parameters for the Structured Output Parser node, and links to more resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Schema Type: Define the output structure and validation. You have two options to provide the schema:
    • Generate from JSON Example: Input an example JSON object to automatically generate the schema. The node uses the object's property types and names and ignores the actual values. n8n treats every field as mandatory when generating schemas from JSON examples.
    • Define using JSON Schema: Manually input the JSON schema. Read the JSON Schema guides and examples for help creating a valid JSON schema. Note that n8n doesn't support references (using $ref) in JSON schemas.
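For illustration, here's a hypothetical example; the field names are made up and not taken from the n8n documentation. Given this JSON example as input for Generate from JSON Example:

{
  "name": "Alice",
  "age": 30,
  "tags": ["customer"]
}

an equivalent hand-written schema for Define using JSON Schema might look roughly like this (every field is listed as required here, matching how the generated schema treats fields):

{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "age": { "type": "number" },
    "tags": {
      "type": "array",
      "items": { "type": "string" }
    }
  },
  "required": ["name", "age", "tags"]
}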

Templates and examples

Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram

View template details

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

AI-Powered Social Media Content Generator & Publisher

View template details

Browse Structured Output Parser integration templates, or search all templates

Refer to LangChain's output parser documentation for more information about the service.

View n8n's Advanced AI documentation.

For common questions or issues and suggested solutions, refer to Common issues.


Tapfiliate node

URL: llms-txt#tapfiliate-node

Contents:

  • Operations
  • Templates and examples

Use the Tapfiliate node to automate work in Tapfiliate, and integrate Tapfiliate with other applications. n8n has built-in support for a wide range of Tapfiliate features, including creating and deleting affiliates, and adding affiliate metadata.

On this page, you'll find a list of operations the Tapfiliate node supports and links to more resources.

Refer to Tapfiliate credentials for guidance on setting up authentication.

  • Affiliate
    • Create an affiliate
    • Delete an affiliate
    • Get an affiliate by ID
    • Get all affiliates
  • Affiliate Metadata
    • Add metadata to affiliate
    • Remove metadata from affiliate
    • Update affiliate's metadata
  • Program Affiliate
    • Add affiliate to program
    • Approve an affiliate for a program
    • Disapprove an affiliate
    • Get an affiliate in a program
    • Get all affiliates in program

Templates and examples

Browse Tapfiliate integration templates, or search all templates


GoTo Webinar credentials

URL: llms-txt#goto-webinar-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a GoToWebinar account with Developer Center access.

Supported authentication methods

Refer to GoToWebinar's API documentation for more information about authenticating with the service.

To configure this credential, you'll need:

  • A Client ID: Provided once you create an OAuth client
  • A Client Secret: Provided once you create an OAuth client

Refer to the Create an OAuth client documentation for detailed instructions on creating an OAuth client. Copy the OAuth Callback URL from n8n to use as the Redirect URI in your OAuth client. The Client ID and Client secret are provided once you've finished setting up your client.


Understanding the data structure

URL: llms-txt#understanding-the-data-structure

Contents:

  • Data structure of n8n
  • Creating data sets with the Code node
    • Exercise
  • Referencing node data with the Code node
    • Exercise
  • Transforming data
    • Exercise

In this chapter, you will learn about the data structure of n8n and how to use the Code node to transform data and simulate node outputs.

Data structure of n8n

In a basic sense, n8n nodes function as an Extract, Transform, Load (ETL) tool. The nodes allow you to access (extract) data from multiple disparate sources, modify (transform) that data in a particular way, and pass (load) it along to where it needs to be.

The data that moves along from node to node in your workflow must be in a format (structure) that can be recognized and interpreted by each node. In n8n, this required structure is an array of objects.

About array of objects

An array is a list of values. The array can be empty or contain several elements. Each element is stored at a position (index) in the list, starting at 0, and can be referenced by the index number. For example, in the array ["Leonardo", "Michelangelo", "Donatello", "Raphael"]; the element Donatello is stored at index 2.

An object stores key-value pairs, instead of values at numbered indexes as in arrays. The order of the pairs isn't important, as the values can be accessed by referencing the key name. For example, the object below contains two properties (name and color):

An array of objects is an array that contains one or more objects. For example, the array turtles below contains four objects:

You can access the properties of an object using dot notation with the syntax object.property. For example, turtles[1].color gets the color of the second turtle.

Data sent from one node to another is sent as an array of JSON objects. The elements in this collection are called items.

An n8n node performs its action on each item of incoming data.

Items in the Customer Datastore node

Creating data sets with the Code node

Now that you are familiar with the n8n data structure, you can use it to create your own data sets or simulate node outputs. To do this, use the Code node to write JavaScript code defining your array of objects with the following structure:

For example, the array of objects representing the Ninja turtles would look like this in the Code node:
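The screenshot itself isn't reproduced here; based on the turtles array shown earlier, and with each object wrapped under a json key as explained below, the Code node contents would look something like this:

return [
  {
    json: {
      name: 'Michelangelo',
      color: 'orange',
    }
  },
  {
    json: {
      name: 'Donatello',
      color: 'purple',
    }
  },
  {
    json: {
      name: 'Raphael',
      color: 'red',
    }
  },
  {
    json: {
      name: 'Leonardo',
      color: 'blue',
    }
  }
];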

Array of objects in the Code node

Notice that this array of objects contains an extra key: json. n8n expects each object in the array to be wrapped in another object with the key json.

Illustration of data structure in n8n

It's good practice to pass the data in the structure n8n expects. But don't worry if you forget to add the json key to an item: n8n (version 0.166.0 and above) adds it automatically.

You can also have nested pairs, for example if you want to define a primary and a secondary color. In this case, you need to further wrap the key-value pairs in curly braces {}.

n8n data structure video

This talk offers a more detailed explanation of data structure in n8n.

In a Code node, create an array of objects named myContacts that contains the properties name and email, and the email property is further split into personal and work.

In the Code node, in the JavaScript Code field you have to write the following code:

When you execute the Code node, the result should look like this:

Result of Code node

Referencing node data with the Code node

Just like you can use expressions to reference data from other nodes, you can also use some methods and variables in the Code node.

Please make sure you read these pages before continuing to the next exercise.

Let's build on the previous exercise, in which you used the Code node to create a data set of two contacts with their names and emails. Now, connect a second Code node to the first one. In the new node, write code to create a new column named workEmail that references the work email of the first contact.

In the Code node, in the JavaScript Code field you have to write the following code:
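The original solution isn't reproduced on this page. One possible sketch, assuming the myContacts structure from the first exercise and the Run Once for All Items mode, is:

// Get all incoming items from the previous Code node
let items = $input.all();

// Add a new field that references the first contact's work email
items[0].json.workEmail = items[0].json.email.work;

return items;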

When you execute the Code node, the result should look like this:

Code node reference

Transforming data

The incoming data from some nodes may have a different data structure than the one used in n8n. In this case, you need to transform the data so that each item can be processed individually.

The two most common operations for data transformation are:

  • Creating multiple items from one item
  • Creating a single item from multiple items

There are several ways to transform data for the purposes mentioned above:

  • Use n8n's data transformation nodes. Use these nodes to modify the structure of incoming data that contain lists (arrays) without needing to use JavaScript code in the Code node:

    • Use the Split Out node to separate a single data item containing a list into multiple items.
    • Use the Aggregate node to take separate items, or portions of them, and group them together into individual items.
  • Use the Code node to write JavaScript functions to modify the data structure of incoming data using the Run Once for All Items mode:

    • To create multiple items from a single item, you can map over the array inside the item, as shown in the sketch after this list. This assumes the item has a key named data set to an array of items in the form [{ "data": [{<item_1>}, {<item_2>}, ...] }].
    • To create a single item from multiple items, you can collect all incoming items into a single array field, as shown in the sketch after this list.

These JavaScript examples assume your entire input is what you want to transform. As in the exercise above, you can also apply either operation to a specific field by referencing it in the items list. For example, if the workEmail example had multiple emails in a single field, you could run code like the sketch below:
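None of the original snippets are reproduced on this page; the following is a minimal sketch of the three operations. Each snippet would go in its own Code node, set to Run Once for All Items:

// To create multiple items from a single item
// (assumes input shaped like [{ "data": [ {...}, {...} ] }]):
return $input.first().json.data.map(entry => {
  return { json: entry };
});

// To create a single item from multiple items:
return [
  {
    json: {
      data: $input.all().map(item => item.json)
    }
  }
];

// To apply the split to a specific field instead of the whole input,
// for example if workEmail contained an array of addresses:
return $input.first().json.workEmail.map(email => {
  return { json: { workEmail: email } };
});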

Exercise

  1. Use the HTTP Request node to make a GET request to the PokéAPI https://pokeapi.co/api/v2/pokemon (this API requires no authentication).

  2. Transform the data in the results field with the Split Out node.

  3. Transform the data in the results field with the Code node.

To solve the exercise:

  1. To get the pokemon from the PokéAPI, execute the HTTP Request node with the following parameters:

  2. To transform the data with the Split Out node, connect this node to the HTTP Request node and set the following parameters:
    • Field To Split Out: results
    • Include: No Other Fields

  3. To transform the data with the Code node, connect this node to the HTTP Request node and write the following code in the JavaScript Code field:
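The original solution isn't shown on this page; a sketch that turns each element of the results array into its own item might be:

// Run Once for All Items: one output item per pokemon in `results`
return $input.first().json.results.map(pokemon => {
  return { json: pokemon };
});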

Examples:

Example 1 (javascript):

{
	name: 'Michelangelo',
	color: 'blue',
}

Example 2 (javascript):

var turtles = [
	{
		name: 'Michelangelo',
		color: 'orange',
	},
	{
		name: 'Donatello',
		color: 'purple',
	},
	{
		name: 'Raphael',
		color: 'red',
	},
	{
		name: 'Leonardo',
		color: 'blue',
	}
];

Example 3 (javascript):

return [
	{
		json: {
			apple: 'beets',
		}
	}
];

Example 4 (javascript):

var myContacts = [
	{
		json: {
			name: 'Alice',
			email: {
				personal: 'alice@home.com',
				work: 'alice@wonderland.org'
			},
		}
	},
	{
		json: {
			name: 'Bob',
			email: {
				personal: 'bob@mail.com',
				work: 'contact@thebuilder.com'
				},
		}
	},
];

return myContacts;

MQTT node

URL: llms-txt#mqtt-node

Contents:

  • Operations
  • Templates and examples
  • Related resources

Use the MQTT node to automate work in MQTT, and integrate MQTT with other applications. n8n supports transporting messages with MQTT.

On this page, you'll find a list of operations the MQTT node supports and links to more resources.

Refer to MQTT credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Use the MQTT node to send a message. You can set the message topic, and choose whether to send the node input data as part of the message.

Templates and examples

IOT Button Remote / Spotify Control Integration with MQTT

View template details

Receive messages for a MQTT queue

View template details

Send location updates of the ISS to a topic in MQTT

View template details

Browse MQTT integration templates, or search all templates

n8n provides a trigger node for MQTT. You can find the trigger node docs here.

Refer to MQTT's documentation for more information about the service.


What are vector databases?

URL: llms-txt#what-are-vector-databases?

Contents:

  • A simplified example
  • Demonstrating the power of similarity search
  • Embeddings, retrievers, text splitters, and document loaders

Vector databases store information as numbers:

A vector database is a type of database that stores data as high-dimensional vectors, which are mathematical representations of features or attributes. (source)

This enables fast and accurate similarity searches. With a vector database, instead of using conventional database queries, you can search for relevant data based on semantic and contextual meaning.

A simplified example

A vector database could store the sentence "n8n is a source-available automation tool that you can self-host", but instead of storing it as text, the vector database stores an array of dimensions (numbers between 0 and 1) that represent its features. This doesn't mean turning each letter in the sentence into a number. Instead, the vectors in the vector database describe the sentence.

Suppose that in a vector store 0.1 represents automation tool, 0.2 represents source available, and 0.3 represents can be self-hosted. You could end up with the following vectors:

| Sentence | Vector (array of dimensions) |
| --- | --- |
| n8n is a source-available automation tool that you can self-host | [0.1, 0.2, 0.3] |
| Zapier is an automation tool | [0.1] |
| Make is an automation tool | [0.1] |
| Confluence is a wiki tool that you can self-host | [0.3] |

This example is very simplified

In practice, vectors are far more complex. A vector can range in size from tens to thousands of dimensions. The dimensions don't have a one-to-one relationship to a single feature, so you can't translate individual dimensions directly into single concepts. This example gives an approximate mental model, not a true technical understanding.
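To make the idea of similarity search slightly more concrete, here's a toy sketch in JavaScript using the vectors from the table above (padded with zeros so they're all the same length). It isn't how n8n or a real vector database computes similarity; real systems use learned embeddings and optimized distance metrics. It only illustrates comparing vectors instead of comparing text:

// Toy vectors from the simplified example above
const docs = [
  { text: 'n8n is a source-available automation tool that you can self-host', vector: [0.1, 0.2, 0.3] },
  { text: 'Zapier is an automation tool', vector: [0.1, 0, 0] },
  { text: 'Make is an automation tool', vector: [0.1, 0, 0] },
  { text: 'Confluence is a wiki tool that you can self-host', vector: [0, 0, 0.3] },
];

// Cosine similarity: closer to 1 means "more similar"
function cosine(a, b) {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = v => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// A query vector that roughly means "self-hosted automation tool"
const query = [0.1, 0, 0.3];

// Rank the stored sentences by similarity to the query
const ranked = docs
  .map(doc => ({ text: doc.text, score: cosine(doc.vector, query) }))
  .sort((a, b) => b.score - a.score);

console.log(ranked);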

Qdrant provides vector search demos to help users understand the power of vector databases. The food discovery demo shows how a vector store can help match pictures based on visual similarities.

This demo uses data from Delivery Service. Users may like or dislike the photo of a dish, and the app will recommend more similar meals based on how they look. It's also possible to choose to view results from the restaurants within the delivery radius. (source)

For full technical details, refer to the Qdrant demo-food-discovery GitHub repository.

Embeddings, retrievers, text splitters, and document loaders

Vector databases require other tools to function:

  • Document loaders and text splitters: document loaders pull in documents and data, and prepare them for embedding. Document loaders can use text splitters to break documents into chunks.
  • Embeddings: these are the tools that turn the data (text, images, and so on) into vectors, and back into raw data. Note that n8n only supports text embeddings.
  • Retrievers: retrievers fetch documents from vector databases. You need to pair them with an embedding to translate the vectors back into data.

Access its data

URL: llms-txt#access-its-data

lastExecution = nodeStaticData.lastExecution


SerpApi (Google Search) node

URL: llms-txt#serpapi-(google-search)-node

Contents:

  • Node options
  • Templates and examples
  • Related resources

The SerpAPI node allows an agent in your workflow to call Google's Search API.

On this page, you'll find the node parameters for the SerpAPI node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Country: Enter the country code you'd like to use. Refer to Google GL Parameter: Supported Google Countries for supported countries and country codes.
  • Device: Select the device to use to get the search results.
  • Explicit Array: Choose whether to force SerpApi to fetch the Google results even if a cached version is already present (turned on) or not (turned off).
  • Google Domain: Enter the Google Domain to use. Refer to Supported Google Domains for supported domains.
  • Language: Enter the language code you'd like to use. Refer to Google HL Parameter: Supported Google Languages for supported languages and language codes.

Templates and examples

View template details

🤖Automate Multi-Platform Social Media Content Creation with AI

View template details

AI chatbot that can search the web

View template details

Browse SerpApi (Google Search) integration templates, or search all templates

Refer to Serp's documentation for more information about the service. You can also view LangChain's documentation on their Serp integration.

View n8n's Advanced AI documentation.


Node versioning

URL: llms-txt#node-versioning

Contents:

  • Light versioning
  • Full versioning

n8n supports node versioning. You can make changes to existing nodes without breaking the existing behavior by introducing a new version.

Be aware of how n8n decides which node version to load:

  • If a user builds and saves a workflow using version 1, n8n continues to use version 1 in that workflow, even if you create and publish a version 2 of the node.
  • When a user creates a new workflow and browses for nodes, n8n always loads the latest version of the node.

Versioning type restricted by node style

If you build a node using the declarative style, you can't use full versioning.

Light versioning

This is available for all node types.

One node can contain more than one version, allowing small version increments without code duplication. To use this feature:

  1. Change the main version parameter to an array, and add your version numbers, including your existing version.
  2. You can then access the version parameter with @version in your displayOptions in any object (to control which versions n8n displays the object with). You can also query the version from a function using const nodeVersion = this.getNode().typeVersion;.

As an example, say you want to add versioning to the NasaPics node from the Declarative node tutorial, then configure a resource so that n8n only displays it in version 2 of the node. In your base NasaPics.node.ts file:

Full versioning

This isn't available for declarative-style nodes.

As an example, refer to the Mattermost node.

Full versioning summary:

  • The base node file should extend NodeVersionedType instead of INodeType.
  • The base node file should contain a description including the defaultVersion (usually the latest), other basic node metadata such as name, and a list of versions. It shouldn't contain any node functionality.
  • n8n recommends using v1, v2, and so on, for version folder names.
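A rough sketch of that structure, following the summary above, might look like the base node file below. The class names, version subclasses, and exact imports are assumptions for illustration; check the n8n-workflow package and the Mattermost node for the real names and signatures.

import { INodeTypeBaseDescription, IVersionedNodeType, NodeVersionedType } from 'n8n-workflow';

// Hypothetical per-version implementations living in v1/ and v2/ folders
import { MyNodeV1 } from './v1/MyNodeV1.node';
import { MyNodeV2 } from './v2/MyNodeV2.node';

export class MyNode extends NodeVersionedType {
  constructor() {
    // Basic metadata only: the base file contains no node functionality
    const baseDescription: INodeTypeBaseDescription = {
      displayName: 'My Node',
      name: 'myNode',
      icon: 'file:mynode.svg',
      group: ['transform'],
      description: 'Example of a fully versioned node',
      defaultVersion: 2,
    };

    // Map each version number to its implementation
    const nodeVersions: IVersionedNodeType['nodeVersions'] = {
      1: new MyNodeV1(baseDescription),
      2: new MyNodeV2(baseDescription),
    };

    super(nodeVersions, baseDescription);
  }
}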

Examples:

Example 1 (unknown):

{
    displayName: 'NASA Pics',
    name: 'NasaPics',
    icon: 'file:nasapics.svg',
    // List the available versions
    version: [1,2,3],
    // More basic parameters here
    properties: [
        // Add a resource that's only displayed for version 2
        {
            displayName: 'Resource name',
            // More resource parameters
            displayOptions: {
                show: {
                    '@version': 2,
                },
            },
        },
    ],
}
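Example 1 shows light versioning. For full versioning, the base node file only wires the versions together, as described in the summary above. The following is a minimal, hedged sketch of what such a base file might look like; the class name, import paths, constructor signature, and folder layout are assumptions, so check the Mattermost node in the n8n codebase for the authoritative pattern.

// Hypothetical base file for a fully versioned NasaPics node (sketch only).
// It contains shared metadata and a map of versioned implementations,
// but no node functionality of its own.
import { NodeVersionedType } from 'n8n-workflow';
import type { INodeTypeBaseDescription } from 'n8n-workflow';

import { NasaPicsV1 } from './v1/NasaPicsV1.node';
import { NasaPicsV2 } from './v2/NasaPicsV2.node';

export class NasaPics extends NodeVersionedType {
    constructor() {
        const baseDescription: INodeTypeBaseDescription = {
            displayName: 'NASA Pics',
            name: 'NasaPics',
            icon: 'file:nasapics.svg',
            description: 'Get data from NASA APIs',
            defaultVersion: 2,
        };
        super(
            {
                1: new NasaPicsV1(baseDescription),
                2: new NasaPicsV2(baseDescription),
            },
            baseDescription,
        );
    }
}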

Bitwarden node

URL: llms-txt#bitwarden-node

Contents:

  • Operations
  • Templates and examples

Use the Bitwarden node to automate work in Bitwarden, and integrate Bitwarden with other applications. n8n has built-in support for a wide range of Bitwarden features, including creating, getting, deleting, and updating collections, events, groups, and members.

On this page, you'll find a list of operations the Bitwarden node supports and links to more resources.

Refer to Bitwarden credentials for guidance on setting up authentication.

  • Collection
    • Delete
    • Get
    • Get All
    • Update
  • Event
    • Get All
  • Group
    • Create
    • Delete
    • Get
    • Get All
    • Get Members
    • Update
    • Update Members
  • Member
    • Create
    • Delete
    • Get
    • Get All
    • Get Groups
    • Update
    • Update Groups

Templates and examples

Browse Bitwarden integration templates, or search all templates


Elastic Security credentials

URL: llms-txt#elastic-security-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • Basic auth
  • API Key

Refer to Elastic Security's documentation for more information about the service.

Using basic auth

To configure this credential, you'll need:

  • A Username: For the user account you log into Elasticsearch with.

  • A Password: For the user account you log into Elasticsearch with.

  • Your Elasticsearch application's Base URL (also known as the Elasticsearch application endpoint):

  1. In Elasticsearch, select the option to Manage this deployment.
  2. In the Applications section, copy the endpoint of the Elasticsearch application.
  3. Add this in n8n as the Base URL.

Custom endpoint aliases

If you add a custom endpoint alias to a deployment, update your n8n credential Base URL with the new endpoint.

Using API key

To configure this credential, you'll need:

  • An API Key: For the user account you log into Elasticsearch with. Refer to Elasticsearch's Create API key documentation for more information.

  • Your Elasticsearch application's Base URL (also known as the Elasticsearch application endpoint):

  1. In Elasticsearch, select the option to Manage this deployment.
  2. In the Applications section, copy the endpoint of the Elasticsearch application.
  3. Add this in n8n as the Base URL.

Freshworks CRM node

URL: llms-txt#freshworks-crm-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Freshworks CRM node to automate work in Freshworks CRM, and integrate Freshworks CRM with other applications. n8n has built-in support for a wide range of Freshworks CRM features, including creating, updating, deleting, and retrieving accounts, appointments, contacts, deals, notes, sales activities, and more.

On this page, you'll find a list of operations the Freshworks CRM node supports and links to more resources.

Refer to Freshworks CRM credentials for guidance on setting up authentication.

  • Account
    • Create an account
    • Delete an account
    • Retrieve an account
    • Retrieve all accounts
    • Update an account
  • Appointment
    • Create an appointment
    • Delete an appointment
    • Retrieve an appointment
    • Retrieve all appointments
    • Update an appointment
  • Contact
    • Create a contact
    • Delete a contact
    • Retrieve a contact
    • Retrieve all contacts
    • Update a contact
  • Deal
    • Create a deal
    • Delete a deal
    • Retrieve a deal
    • Retrieve all deals
    • Update a deal
  • Note
    • Create a note
    • Delete a note
    • Update a note
  • Sales Activity
    • Retrieve a sales activity
    • Retrieve all sales activities
  • Task
    • Create a task
    • Delete a task
    • Retrieve a task
    • Retrieve all tasks
    • Update a task

Templates and examples

Search LinkedIn companies, Score with AI and add them to Google Sheet CRM

View template details

Real Estate Lead Generation with BatchData Skip Tracing & CRM Integration

View template details

📄🌐PDF2Blog - Create Blog Post on Ghost CRM from PDF Document

View template details

Browse Freshworks CRM integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Onfleet Trigger node

URL: llms-txt#onfleet-trigger-node

Contents:

  • Events

Onfleet is a logistics platform offering a last-mile delivery solution.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Onfleet Trigger integrations page.

Trigger a workflow on:

  • SMS recipient opt out
  • SMS recipient response missed
  • Task arrival
  • Task assigned
  • Task cloned
  • Task completed
  • Task created
  • Task delayed
  • Task ETA
  • Task failed
  • Task started
  • Task unassigned
  • Task updated
  • Worker created
  • Worker deleted
  • Worker duty

Cisco Secure Endpoint credentials

URL: llms-txt#cisco-secure-endpoint-credentials

Contents:

  • Prerequisites
  • Authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Authentication methods

Refer to Cisco Secure Endpoint's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

To configure this credential, you'll need:

  • The Region for your Cisco Secure Endpoint. Options are:
    • Asia Pacific, Japan, and China
    • Europe
    • North America
  • A Client ID: Provided when you register a SecureX API Client
  • A Client Secret: Provided when you register a SecureX API Client

To get a Client ID and Client Secret, you'll need to Register a SecureX API Client. Refer to Cisco Secure Endpoint's authentication documentation for detailed instructions. Use the SecureX Client Password as the Client Secret within the n8n credential.


Tags

URL: llms-txt#tags

Contents:

  • Add a tag to a workflow
  • Filter by tag
  • Manage tags

Workflow tags allow you to label your workflows. You can then filter workflows by tag.

Tags are global. This means when you create a tag, it's available to all users on your n8n instance.

Add a tag to a workflow

To add a tag to your workflow:

  1. In your workflow, select + Add tag.
  2. Select an existing tag, or enter a new tag name.
  3. Once you select a tag and click away from the tag modal, n8n displays the tag next to the workflow name.

You can add more than one tag.

Filter by tag

When browsing the workflows on your instance, you can filter by tag.

  1. On the Workflows page, select Filters.
  2. Select Tags.
  3. Select the tag or tags you want to filter by. n8n lists the workflows with that tag.

Manage tags

You can edit existing tags. Instance owners can delete tags.

  1. Select Manage tags. This is available from Filters > Tags on the Workflows page, or in the + Add tag modal in your workflow.
  2. Hover over the tag you want to change.
  3. Select Edit to rename it, or Delete to delete it.

Tags are global. If you edit or delete a tag, this affects all users of your n8n instance.


PayPal Trigger node

URL: llms-txt#paypal-trigger-node

PayPal is a digital payment service that supports online fund transfers that customers can use when shopping online.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's PayPal Trigger integrations page.


Hosting n8n on Heroku

URL: llms-txt#hosting-n8n-on-heroku

Contents:

  • Use the deployment template to create a Heroku project
    • Configure environment variables
    • Deploy n8n
  • Changing the deployment template
    • The Dockerfile
    • Heroku and exposing ports
    • Configuring Heroku
  • Next steps

This hosting guide shows you how to self-host n8n on Heroku. It uses:

  • Docker Compose to create and define the application components and how they work together.
  • Heroku's PostgreSQL service to host n8n's data storage.
  • A Deploy to Heroku button offering a one-click deployment with minor configuration.

Self-hosting knowledge prerequisites

Self-hosting n8n requires technical knowledge, including:

  • Setting up and configuring servers and containers
  • Managing application resources and scaling
  • Securing servers and applications
  • Configuring n8n

n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends n8n Cloud.

Latest and Next versions

n8n releases a new minor version most weeks. The latest version is for production use. next is the most recent release. You should treat next as a beta: it may be unstable. To report issues, use the forum.

Current latest: 1.118.2
Current next: 1.119.0

Use the deployment template to create a Heroku project

The quickest way to get started with deploying n8n to Heroku is using the Deploy to Heroku button:

This opens the Create New App page on Heroku. Set a name for the project, and choose the region to deploy the project to.

Configure environment variables

Heroku pre-fills the configuration options defined in the env section of the app.json file, which also sets default values for the environment variables n8n uses.

You can change any of these values to suit your needs. You must change the following values:

  • N8N_ENCRYPTION_KEY, which n8n uses to encrypt user account details before saving to the database.
  • WEBHOOK_URL should match the application name you create to ensure that webhooks have the correct URL.
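For example, if you name the Heroku app my-n8n (a hypothetical name), Heroku typically serves it at a URL such as https://my-n8n.herokuapp.com/, and that's the value to use for WEBHOOK_URL.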

Select Deploy app.

After Heroku builds and deploys the app it provides links to Manage App or View the application.

Refer to the Heroku documentation to find out how to connect your domain to a Heroku application.

Changing the deployment template

You can make changes to the deployment template by forking the repository and deploying from your fork.

By default, the Dockerfile pulls the latest n8n image. If you want to use a different or fixed version, update the image tag on the top line of the Dockerfile.
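For example, to pin the deployment to the current latest version listed above, the first line of the Dockerfile might look like the following (keep the image name the template already uses; the tag is illustrative):

FROM n8nio/n8n:1.118.2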

Heroku and exposing ports

Heroku doesn't allow Docker-based applications to define an exposed port with the EXPOSE command. Instead, Heroku provides a PORT environment variable that it dynamically populates at application runtime. The entrypoint.sh file overrides the default Docker image command to instead set the port variable that Heroku provides. You can then access n8n on port 80 in a web browser.

Docker limitations with Heroku

Read this guide for more details on the limitations of using Docker with Heroku.

Configuring Heroku

The heroku.yml file defines the application you want to create on Heroku. It consists of two sections:

  • setup > addons defines the Heroku addons to use. In this case, the PostgreSQL database addon.

  • The build section defines how Heroku builds the application. In this case it uses the Docker buildpack to build a web service based on the supplied Dockerfile.
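As a rough illustration only (the template's actual file may differ), a heroku.yml with these two sections generally has this shape:

setup:
  addons:
    - plan: heroku-postgresql
build:
  docker:
    web: Dockerfile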

Next steps

  • Learn more about configuring and scaling n8n.

  • Or explore using n8n: try the Quickstarts.


ServiceNow node

URL: llms-txt#servicenow-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the ServiceNow node to automate work in ServiceNow, and integrate ServiceNow with other applications. n8n has built-in support for a wide range of ServiceNow features, including getting business services, departments, configuration items, and dictionary as well as creating, updating, and deleting incidents, users, and table records.

On this page, you'll find a list of operations the ServiceNow node supports and links to more resources.

Refer to ServiceNow credentials for guidance on setting up authentication.

  • Business Service
    • Get All
  • Configuration Items
    • Get All
  • Department
    • Get All
  • Dictionary
    • Get All
  • Incident
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • Table Record
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • User
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • User Group
    • Get All
  • User Role
    • Get All

Templates and examples

ServiceNow Incident Notifications to Slack Workflow

View template details

List recent ServiceNow Incidents in Slack Using Pop Up Modal

View template details

Display ServiceNow Incident Details in Slack using Slash Commands

View template details

Browse ServiceNow integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Postgres credentials

URL: llms-txt#postgres-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using database connection
    • SSH tunnel limitations

You can use these credentials to authenticate the following nodes:

The Agent node doesn't support SSH tunnels.

Create a user account on a Postgres server.

Supported authentication methods

  • Database connection

Refer to Postgres's documentation for more information about the service.

Using database connection

To configure this credential, you'll need:

  • The Host or domain name for the server.
  • The Database name.
  • A User name.
  • A user Password.
  • Ignore SSL Issues: Set whether the credential connects if SSL validation fails.
  • SSL: Choose whether to use SSL in your connection.
  • The Port number to use for the connection.
  • SSH Tunnel: Choose if you want to use SSH to encrypt the network connection with the Postgres server.

To set up the database connection:

  1. Enter the Host or domain name for the Postgres server. You can either run the /conninfo command to confirm the host name or run this query:

  2. Enter the Database name. Run the /conninfo command to confirm the database name.

  3. Enter the User name of the user you wish to connect as.

  4. Enter the user's Password.

  5. Ignore SSL Issues: If you turn this on, the credential will connect even if SSL validation fails.

  6. SSL: Choose whether to use SSL in your connection. Refer to Postgres SSL Support for more information. Options include:

    • Allow: Sets the ssl-mode parameter to allow. First try a non-SSL connection; if that fails, try an SSL connection.
    • Disable: Sets the ssl-mode parameter to disable. Only try a non-SSL connection.
    • Require: Sets the ssl-mode parameter to require. Only try an SSL connection. If a root CA file is present, verify that a trusted certificate authority (CA) issued the server certificate.
  7. Enter the Port number to use for the connection. You can either run the /conninfo command to confirm the port number or run this query:

  8. SSH Tunnel: Turn this setting on to connect to the database over SSH. Refer to SSH tunnel limitations for some guidance around using SSH. Once turned on, you'll need to fill in the SSH settings below:

  9. Select SSH Authenticate with to choose the type of SSH tunnel to build:

    • Select Password if you want to connect to SSH using a password.
    • Select Private Key if you want to connect to SSH using an identity file (private key) and a passphrase.
    1. Enter the remote bind address you're connecting to as the SSH Host.
    2. SSH Port: Enter the local port number for the SSH tunnel.
    3. SSH Postgres Port: Enter the remote end of the tunnel, the port number the database server is using.
    4. SSH User: Enter the username to log in as.
    5. If you selected Password for SSH Authenticate with, add the user's SSH Password.
    6. If you selected Private Key for SSH Authenticate with:
      1. Add the contents of the Private Key or identity file used for SSH.
      2. If the Private Key was created with a passphrase, enter that Passphrase. If the Private Key has no passphrase, leave this field blank.

Refer to Secure TCP/IP Connections with SSH Tunnels for more information.

SSH tunnel limitations

Only use the SSH Tunnel setting if:

  • You're using the credential with the Postgres node (Agent node doesn't support SSH tunnels).
  • You have an SSH server running on the same machine as the Postgres server.
  • You have a user account that can log in using ssh.

Examples:

Example 1 (unknown):

SELECT inet_server_addr();

Example 2 (unknown):

SELECT inet_server_port();

Xata credentials

URL: llms-txt#xata-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Xata database or an account on an existing database.

Supported authentication methods

Refer to Xata's documentation for more information about the service.

View n8n's Advanced AI documentation.

To configure this credential, you'll need:

  • The Database Endpoint: The Workspace API requires that you identify the database you're requesting information from using this format: https://{workspace-display-name}-{workspace-id}.{region}.xata.sh/db/{dbname}. Refer to Workspace API for more information.
    • {workspace-display-name}: The workspace display name is an optional identifier you can include in your Database Endpoint. The API ignores it, but including it can make it easier to figure out which workspace this database is in if you're saving multiple credentials.
    • {workspace-id}: The unique ID of the workspace, 6 alphanumeric characters.
    • {region}: The hosting region for the database. This value must match the database region configuration.
    • {dbname}: The name of the database you're interacting with.
  • A Branch: Enter the name of the GitHub branch for your database.
  • An API Key: To generate an API key, go to Account Settings and select + Add a key. Refer to Generate an API Key for more information.
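For example, a hypothetical Database Endpoint following the format above might look like this (all values are illustrative): https://my-workspace-abc123.us-east-1.xata.sh/db/customers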

Strings

URL: llms-txt#strings

Contents:

  • base64Encode(): A base64 encoded string.
  • base64Decode(): A plain string.
  • extractDomain(): String
  • extractEmail(): String
  • extractUrl(): String
  • extractUrlPath(): String
  • hash(algo?: Algorithm): String
  • isDomain(): Boolean
  • isEmail(): Boolean
  • isEmpty(): Boolean

A reference document listing built-in convenience functions to support data transformation in expressions for strings.

JavaScript in expressions

You can use any JavaScript in expressions. Refer to Expressions for more information.
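For example, the following expressions apply a few of the functions on this page to illustrative input values, with the results shown as comments:

{{ "hello world".base64Encode() }}                   // "aGVsbG8gd29ybGQ="
{{ "Contact us at support@n8n.io".extractEmail() }}  // "support@n8n.io"
{{ "https://n8n.io/workflows".extractDomain() }}     // "n8n.io"
{{ "12345".isNumeric() }}                            // true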

base64Encode(): A base64 encoded string.

Encode a string as base64.


base64Decode(): A plain string.

Convert a base64 encoded string to a normal string.


extractDomain(): String

Extracts a domain from a string containing a valid URL. Returns undefined if none is found.


extractEmail(): String

Extracts an email from a string. Returns undefined if none is found.


extractUrl(): String

Extracts a URL from a string. Returns undefined if none is found.


extractUrlPath(): String

Extract the path but not the root domain from a URL. For example, "https://example.com/orders/1/details".extractUrlPath() returns "/orders/1/details/".


hash(algo?: Algorithm): String

Returns a string hashed with the given algorithm.

Function parameters

algo (optional, String enum)

Which hashing algorithm to use.

One of: md5, base64, sha1, sha224, sha256, sha384, sha512, sha3, ripemd160


isDomain(): Boolean

Checks if a string is a domain.


isEmail(): Boolean

Checks if a string is an email.


isEmpty(): Boolean

Checks if a string is empty.


isNotEmpty(): Boolean

Checks if a string has content.


isNumeric(): Boolean

Checks if a string only contains digits.


isUrl(): Boolean

Checks if a string is a valid URL.


parseJson(): Object

Equivalent of JSON.parse(). Parses a string as a JSON object.


quote(mark?: String): String

Returns the string wrapped in quotation marks. The default quotation mark is ".

Function parameters

mark (optional, String)

Which quote mark style to use.


removeMarkdown(): String

Removes Markdown formatting from a string.


replaceSpecialChars(): String

Replaces non-ASCII characters in a string with an ASCII representation.


removeTags(): String

Remove tags, such as HTML or XML, from a string.


toBoolean(): Boolean

Convert a string to a boolean. "false", "0", "", and "no" convert to false.


toDateTime(): Date

Converts a string to a Luxon date object.


toDecimalNumber(): Number

See toFloat


toFloat(): Number

Converts a string to a decimal number.


toInt(): Number

Converts a string to an integer.


toSentenceCase(): String

Formats a string to sentence case.


toSnakeCase(): String

Formats a string to snake case.


toTitleCase(): String

Formats a string to title case. Will not change already uppercase letters to prevent losing information from acronyms and trademarks such as iPhone or FAANG.


toWholeNumber(): Number

Converts a string to a whole number.


urlDecode(entireString?: Boolean): String

Decodes a URL-encoded string. It decodes any percent-encoded characters in the input string, and replaces them with their original characters.

Function parameters

entireString (optional, Boolean)

Whether to decode characters that are part of the URI syntax (true) or not (false).


urlEncode(entireString?: Boolean): String

Encodes a string to be used/included in a URL.

Function parameters

entireString (optional, Boolean)

Whether to encode characters that are part of the URI syntax (true) or not (false).



Elasticsearch credentials

URL: llms-txt#elasticsearch-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using basic auth

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Elasticsearch's documentation for more information about the service.

To configure this credential, you'll need an Elasticsearch account with a deployment and:

  • A Username
  • A Password
  • Your Elasticsearch application's Base URL (also known as the Elasticsearch application endpoint)

To set up the credential:

  1. Enter your Elasticsearch Username.
  2. Enter your Elasticsearch Password.
  3. In Elasticsearch, go to Deployments.
  4. Select your deployment.
  5. Select Manage this deployment.
  6. In the Applications section, copy the endpoint of the Elasticsearch application.
  7. Enter this in n8n as the Base URL.
  8. By default, n8n connects only if SSL certificate validation succeeds. If you'd like to connect even if SSL certificate validation fails, turn on Ignore SSL Issues.

Custom endpoint aliases

If you add a custom endpoint alias to a deployment, update your n8n credential Base URL with the new endpoint.


Discord node

URL: llms-txt#discord-node

Contents:

  • Operations
  • Waiting for a response
    • Response Type
    • Approval response customization
    • Free Text response customization
    • Custom Form response customization
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported
  • Common issues

Use the Discord node to automate work in Discord, and integrate Discord with other applications. n8n has built-in support for a wide range of Discord features, including sending messages in a Discord channel and managing channels.

On this page, you'll find a list of operations the Discord node supports and links to more resources.

Refer to Discord credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Channel
    • Create
    • Delete
    • Get
    • Get Many
    • Update
  • Message
    • Delete
    • Get
    • Get Many
    • React with Emoji
    • Send
    • Send and Wait for Response
  • Member
    • Get Many
    • Role Add
    • Role Remove

Waiting for a response

By choosing the Send and Wait for Response operation, you can send a message and pause the workflow execution until a person confirms the action or provides more information.

You can choose between the following types of waiting and approval actions:

  • Approval: Users can approve or disapprove from within the message.
  • Free Text: Users can submit a response with a form.
  • Custom Form: Users can submit a response with a custom form.

You can customize the waiting and response behavior depending on which response type you choose. You can configure these options in any of the above response types:

  • Limit Wait Time: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time.
  • Append n8n Attribution: Whether to mention in the message that it was sent automatically with n8n (turned on) or not (turned off).

Approval response customization

When using the Approval response type, you can choose whether to present only an approval button or both approval and disapproval buttons.

You can also customize the button labels for the buttons you include.

Free Text response customization

When using the Free Text response type, you can customize the message button label, the form title and description, and the response button label.

Custom Form response customization

When using the Custom Form response type, you build a form using the fields and options you want.

You can customize each form element with the settings outlined in the n8n Form trigger's form elements. To add more fields, select the Add Form Element button.

You'll also be able to customize the message button label, the form title and description, and the response button label.

Templates and examples

Fully Automated AI Video Generation & Multi-Platform Publishing

by Juan Carlos Cavero Gracia

View template details

AI-Powered Short-Form Video Generator with OpenAI, Flux, Kling, and ElevenLabs

View template details

Discord AI-powered bot

View template details

Browse Discord integration templates, or search all templates

Refer to Discord's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

For common errors or issues and suggested resolution steps, refer to Common Issues.


OpenAI Image operations

URL: llms-txt#openai-image-operations

Contents:

  • Analyze Image
    • Options
  • Generate an Image
    • Options
  • Edit an Image
    • Options
  • Common issues

Use this operation to analyze or generate an image in OpenAI. Refer to OpenAI for more information on the OpenAI node itself.

Use this operation to take in images and answer questions about them.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Image.

  • Operation: Select Analyze Image.

  • Model: Select the model you want to use to analyze an image.

  • Text Input: Ask a question about the image.

  • Input Type: Select how you'd like to input the image. Options include:

    • Image URL(s): Enter the URL(s) of the image(s) to analyze. Add multiple URLs in a comma-separated list.
    • Binary File(s): Enter the name of the binary property which contains the image(s) in the Input Data Field Name.
  • Detail: Specify the balance between response time versus token usage.

  • Length of Description (Max Tokens): Defaults to 300. Fewer tokens result in a shorter, less detailed image description.

Refer to Images | OpenAI documentation for more information.

Use this operation to create an image from a text prompt.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Image.

  • Operation: Select Generate an Image.

  • Model: Select the model you want to use to generate an image.

  • Prompt: Enter the text description of the desired image(s). The maximum length is 1000 characters for dall-e-2 and 4000 characters for dall-e-3.

  • Quality: The quality of the image you generate. HD creates images with finer details and greater consistency across the image. This option is only supported for dall-e-3. Otherwise, choose Standard.

  • Resolution: Select the resolution of the generated images. Select 1024x1024 for dall-e-2. Select one of 1024x1024, 1792x1024, or 1024x1792 for dall-e-3 models.

  • Style: Select the style of the generated images. This option is only supported for dall-e-3.

    • Natural: Use this to produce more natural looking images.
    • Vivid: Use this to produce hyper-real and dramatic images.
  • Respond with image URL(s): Whether to return image URL(s) instead of binary file(s).

  • Put Output in Field: Defaults to data. Enter the name of the output field to put the binary file data in. Only available if Respond with image URL(s) is turned off.

Refer to Create image | OpenAI documentation for more information.

Use this operation to edit an image from a text prompt.

Enter these parameters:

  • Credential to connect with: Create or select an existing OpenAI credential.

  • Resource: Select Image.

  • Operation: Select Edit Image.

  • Model: Select the model you want to use to generate an image. Supports dall-e-2 and gpt-image-1.

  • Prompt: Enter the text description of the desired edits to the input image(s).

  • Image(s): Add one or more binary fields to include images with your prompt. Each image should be a png, webp, or jpg file less than 50MB. You can provide up to 16 images.

  • Number of Images: The number of images to generate. Must be between 1 and 10.

  • Size: The size and dimensions of the generated images (in px).

  • Quality: The quality of the image that will be generated (auto, low, medium, high, standard). Only supported for gpt-image-1.

  • Output Format: The format in which the generated images are returned (png, webp, or jpg). Only supported for gpt-image-1.

  • Output Compression: The compression level (0-100%) for the generated images. Only supported for gpt-image-1 with webp or jpeg output formats.

  • Background: Allows you to set transparency for the background of the generated image(s). Only supported for gpt-image-1.

  • Input Fidelity: Control how much effort the model will exert to match the style and features of input images. Only supported for gpt-image-1.

  • Image Mask: Name of the binary property that contains the image. A second image whose fully transparent areas (for example, where alpha is zero) show where the image should be edited. If there are multiple images provided, the mask is applied to the first image. Must be a valid PNG file, less than 4MB, and have the same dimensions as the image.

  • User: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.

For common errors or issues and suggested resolution steps, refer to Common Issues.


Cockpit node

URL: llms-txt#cockpit-node

Contents:

  • Operations
  • Templates and examples

Use the Cockpit node to automate work in Cockpit, and integrate Cockpit with other applications. n8n has built-in support for a wide range of Cockpit features, including creating a collection entry, storing data from a form submission, and getting singletons.

On this page, you'll find a list of operations the Cockpit node supports and links to more resources.

Refer to Cockpit credentials for guidance on setting up authentication.

  • Collection
    • Create a collection entry
    • Get all collection entries
    • Update a collection entry
  • Form
    • Store data from a form submission
  • Singleton
    • Get a singleton

Templates and examples

Browse Cockpit integration templates, or search all templates


Azure Storage credentials

URL: llms-txt#azure-storage-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2
    • Register an application
    • Generate a client secret
  • Using Shared Key
  • Common issues
    • Need admin approval

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • OAuth2
  • Shared Key

Refer to Azure Storage's API documentation for more information about the service.

Using OAuth2

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

For self-hosted users, there are two main steps to configure OAuth2 from scratch:

  1. Register an application with the Microsoft Identity Platform.
  2. Generate a client secret for that application.

Follow the detailed instructions for each step below. For more detail on the Microsoft OAuth2 web flow, refer to Microsoft authentication and authorization basics.

Register an application

Register an application with the Microsoft Identity Platform:

  1. Open the Microsoft Application Registration Portal.
  2. Select Register an application.
  3. Enter a Name for your app.
  4. In Supported account types, select Accounts in any organizational directory (Any Azure AD directory - Multi-tenant) and personal Microsoft accounts (for example, Skype, Xbox).
  5. In Register an application:
    1. Copy the OAuth Callback URL from your n8n credential.
    2. Paste it into the Redirect URI (optional) field.
    3. Select Select a platform > Web.
  6. Select Register to finish creating your application.
  7. Copy the Application (client) ID and paste it into n8n as the Client ID.

Refer to Register an application with the Microsoft Identity Platform for more information.

Generate a client secret

With your application created, generate a client secret for it:

  1. On your Microsoft application page, select Certificates & secrets in the left navigation.
  2. In Client secrets, select + New client secret.
  3. Enter a Description for your client secret, such as n8n credential.
  4. Select Add.
  5. Copy the Secret in the Value column.
  6. Paste it into n8n as the Client Secret.
  7. Select Connect my account in n8n to finish setting up the connection.
  8. Log in to your Microsoft account and allow the app to access your info.

Refer to Microsoft's Add credentials for more information on adding a client secret.

Using Shared Key

To configure this credential, you'll need:

  • An Account: The name of your Azure Storage account.
  • A Key: A shared key for your Azure Storage account. Select Security + networking and then Access keys. You can use either of the two account keys for this purpose.

Refer to Manage storage account access keys | Microsoft for more detailed steps.

Here are the known common errors and issues with Azure Storage credentials.

Need admin approval

When attempting to add credentials for a Microsoft 365 or Microsoft Entra account, users may see a message when following the procedure that this action requires admin approval.

This message appears when the account attempting to grant permissions for the credential is managed by a Microsoft Entra tenant. To issue the credential, an administrator needs to grant permission to the user (or "tenant") for that application.

The procedure for this is covered in the Microsoft Entra documentation.


Date & Time

URL: llms-txt#date-&-time

Contents:

  • Operations
  • Add to a Date
    • Add to a Date options
  • Extract Part of a Date
    • Extract Part of a Date options
  • Format a Date
    • Format a Date options
  • Get Current Date
    • Get Current Date options
  • Get Time Between Dates

The Date & Time node manipulates date and time data and converts it to different formats.

The node relies on the timezone setting. n8n uses either:

  1. The workflow timezone, if set. Refer to Workflow settings for more information.
  2. The n8n instance timezone, if the workflow timezone isn't set. The default is America/New_York for self-hosted instances. n8n Cloud tries to detect the instance owner's timezone when they sign up, falling back to GMT as the default. Self-hosted users can change the instance setting using Environment variables. Cloud admins can change the instance timezone in the Admin dashboard.

Date and time in other nodes

You can work with date and time in the Code node, and in expressions in any node. n8n supports Luxon to help work with date and time in JavaScript. Refer to Date and time with Luxon for more information.

  • Add to a Date: Add a specified amount of time to a date.
  • Extract Part of a Date: Extract part of a date, such as the year, month, or day.
  • Format a Date: Transform a date's format to a new format using preset options or a custom expression.
  • Get Current Date: Get the current date and choose whether to include the current time or not. Useful for triggering other flows and conditional logic.
  • Get Time Between Dates: Calculate the amount of time in specific units between two dates.
  • Round a Date: Round a date up or down to the nearest unit of your choice, such as month, day, or hour.
  • Subtract From a Date: Subtract a specified amount of time from a date.

Refer to the sections below for parameters and options specific to each operation.

Add to a Date

Configure the node for this operation using these parameters:

  • Date to Add To: Enter the date you want to change.
  • Time Unit to Add: Select the time unit for the Duration parameter.
  • Duration: Enter the number of time units to add to the date.
  • Output Field Name: Enter the name of the field to output the new date to.

Add to a Date options

This operation has one option: Include Input Fields. If you'd like to include all of the input fields in the output, turn this option on. If turned off, only the Output Field Name and its contents are output.

Extract Part of a Date

Configure the node for this operation using these parameters:

  • Date: Enter the date you want to round or extract part of.
  • Part: Select the part of the date you want to extract. Choose from:
    • Year
    • Month
    • Week
    • Day
    • Hour
    • Minute
    • Second
  • Output Field Name: Enter the name of the field to output the extracted date part to.

Extract Part of a Date options

This operation has one option: Include Input Fields. If you'd like to include all of the input fields in the output, turn this option on. If turned off, only the Output Field Name and its contents are output.

Format a Date

Configure the node for this operation using these parameters:

  • Date: Enter the date you want to format.
  • Format: Select the format you want to change the date to. Choose from:
    • Custom Format: Enter your own custom format using Luxon's special tokens. Tokens are case-sensitive.
    • MM/DD/YYYY: For 4 September 1986, this formats the date as 09/04/1986.
    • YYYY/MM/DD: For 4 September 1986, this formats the date as 1986/09/04.
    • MMMM DD YYYY: For 4 September 1986, this formats the date as September 04 1986.
    • MM-DD-YYYY: For 4 September 1986, this formats the date as 09-04-1986.
    • YYYY-MM-DD: For 4 September 1986, this formats the date as 1986-09-04.
  • Output Field Name: Enter the name of the field to output the formatted date to.
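For the Custom Format option above, a couple of hedged examples using standard Luxon tokens: for 4 September 1986, the custom format dd MMM yyyy produces 04 Sep 1986, and yyyy-MM-dd produces 1986-09-04.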

Format a Date options

This operation includes these options:

  • Include Input Fields: If you'd like to include all of the input fields in the output, turn this option on. If turned off, only the Output Field Name and its contents are output.
  • From Date Format: If the node isn't recognizing the Date format correctly, enter the format for that Date here so the node can process it properly. Use Luxon's special tokens to enter the format. Tokens are case-sensitive
  • Use Workflow Timezone: Whether to use the input's time zone (turned off) or the workflow's timezone (turned on).

Get Current Date

Configure the node for this operation using these parameters:

  • Include Current Time: Choose whether to include the current time (turned on) or to set the time to midnight (turned off).
  • Output Field Name: Enter the name of the field to output the current date to.

Get Current Date options

This operation includes these options:

  • Include Input Fields: If you'd like to include all of the input fields in the output, turn this option on. If turned off, only the Output Field Name and its contents are output.
  • Timezone: Set the timezone to use. If left blank, the node uses the n8n instance's timezone.

Use GMT for +00:00 timezone.

Get Time Between Dates

Configure the node for this operation using these parameters:

  • Start Date: Enter the earlier date you want to compare.
  • End Date: Enter the later date you want to compare.
  • Units: Select the units you want to calculate the time between. You can include multiple units. Choose from:
    • Year
    • Month
    • Week
    • Day
    • Hour
    • Minute
    • Second
    • Millisecond
  • Output Field Name: Enter the name of the field to output the calculated time between to.

Get Time Between Dates options

The Get Time Between Dates operation includes the Include Input Fields option as well as an Output as ISO String option. If you leave this option off, each unit you selected will return its own time difference calculation, for example:

If you turn on the Output as ISO String option, the node formats the output as a single ISO duration string, for example: P1Y3M13D.

The ISO duration format follows the pattern P<n>Y<n>M<n>DT<n>H<n>M<n>S, where <n> is the number for the unit that follows it.

  • P = period (duration). It begins all ISO duration strings.
  • Y = years
  • M = months
  • W = weeks
  • D = days
  • T = delineator between dates and times, used to avoid confusion between months and minutes
  • H = hours
  • M = minutes
  • S = seconds

Milliseconds don't get their own unit, but instead are decimal seconds. For example, 2.1 milliseconds is 0.0021S.
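For example, applying this legend, P2DT3H30M represents a duration of 2 days, 3 hours, and 30 minutes, and PT0.5S represents half a second.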

Round a Date

Configure the node for this operation using these parameters:

  • Date: Enter the date you'd like to round.
  • Mode: Choose whether to Round Down or Round Up.
  • To Nearest: Select the unit you'd like to round to. Choose from:
    • Year
    • Month
    • Week
    • Day
    • Hour
    • Minute
    • Second
  • Output Field Name: Enter the name of the field to output the rounded date to.

Round a Date options

This operation has one option: Include Input Fields. If you'd like to include all of the input fields in the output, turn this option on. If turned off, only the Output Field Name and its contents are output.

Subtract From a Date

Configure the node for this operation using these parameters:

  • Date to Subtract From: Enter the date you'd like to subtract from.
  • Time Unit to Subtract: Select the unit for the Duration amount you want to subtract.
  • Duration: Enter the amount of the time units you want to subtract from the Date to Subtract From.
  • Output Field Name: Enter the name of the field to output the new date to.

Subtract From a Date options

This operation has one option: Include Input Fields. If you'd like to include all of the input fields in the output, turn this option on. If turned off, only the Output Field Name and its contents are output.

Templates and examples

Working with dates and times

View template details

Create an RSS feed based on a website's content

View template details

Customer Support WhatsApp Bot with Google Docs Knowledge Base and Gemini AI

View template details

Browse Date & Time integration templates, or search all templates

The Date & Time node uses Luxon. You can also use Luxon in the Code node and expressions. Refer to Date and time with Luxon for more information.

Supported date formats

n8n supports all date formats supported by Luxon. Tokens are case-sensitive.

Examples:

Example 1 (unknown):

timeDifference
years : 1
months : 3
days : 13

Drift node

URL: llms-txt#drift-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Drift node to automate work in Drift, and integrate Drift with other applications. n8n has built-in support for a wide range of Drift features, including creating, updating, deleting, and getting contacts.

On this page, you'll find a list of operations the Drift node supports and links to more resources.

Refer to Drift credentials for guidance on setting up authentication.

  • Contact
    • Create a contact
    • Get custom attributes
    • Delete a contact
    • Get a contact
    • Update a contact

Templates and examples

Create a contact in Drift

View template details

🛠️ Drift Tool MCP Server 💪 5 operations

View template details

Track SDK Documentation Drift with GitHub, Notion, Google Sheets, and Slack

View template details

Browse Drift integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Freshdesk credentials

URL: llms-txt#freshdesk-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Freshdesk account.

Supported authentication methods

Refer to Freshdesk's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: Refer to the Freshdesk API authenticaton documentation for detailed instructions on getting your API key.
  • A Freshdesk Domain: Use the subdomain of your Freshdesk account. This is part of the URL, for example https://<subdomain>.freshdesk.com. So if you access Freshdesk through https://n8n.freshdesk.com, enter n8n as your Domain.

Your first workflow

URL: llms-txt#your-first-workflow

Contents:

  • Step one: Create a new workflow
  • Step two: Add a trigger node
  • Step three: Add the NASA node and set up credentials
  • Step four: Add logic with the If node
  • Step five: Output data from your workflow
  • Step six: Test the workflow
  • Congratulations
  • Next steps

This guide will show you how to construct a workflow in n8n, explaining key concepts along the way. You will:

  • Create a workflow from scratch.
  • Understand key concepts and skills, including:
    • Starting workflows with trigger nodes
    • Configuring credentials
    • Processing data
    • Representing logic in an n8n workflow
    • Using expressions

This quickstart uses n8n Cloud, which is recommended for new users. A free trial is available - if you haven't already done so, sign up for an account now.

Step one: Create a new workflow

When you open n8n, you'll see either:

  • A window with a welcome message and two large buttons: Choose Start from Scratch to create a new workflow.
  • The Workflows list on the Overview page. Select Create Workflow to create a new workflow.

Step two: Add a trigger node

n8n provides two ways to start a workflow:

  • Manually, by selecting Execute Workflow.
  • Automatically, using a trigger node as the first node. The trigger node runs the workflow in response to an external event, or based on your settings.

For this tutorial, we'll use the Schedule trigger. This allows you to run the workflow on a schedule:

  1. Select Add first step.
  2. Search for Schedule. n8n shows a list of nodes that match the search.
  3. Select Schedule Trigger to add the node to the canvas. n8n opens the node.
  4. For Trigger Interval, select Weeks.
  5. For Weeks Between Triggers, enter 1.
  6. Enter a time and day. For this example, select Monday in Trigger on Weekdays, select 9am in Trigger at Hour, and enter 0 in Trigger at Minute.
  7. Close the node details view to return to the canvas.

Step three: Add the NASA node and set up credentials

The NASA node interacts with NASA's public APIs to fetch useful data. We will use the real-time data from the API to find solar events.

Credentials are private pieces of information issued by apps and services to authenticate you as a user and allow you to connect and share information between the app or service and the n8n node. The type of information required varies depending on the app/service concerned. You should be careful about sharing or revealing the credentials outside of n8n.

  1. Select the Add node connector on the Schedule Trigger node.

  2. Search for NASA. n8n shows a list of nodes that match the search.

  3. Select NASA to view a list of operations.

  4. Search for and select Get a DONKI solar flare. This operation returns a report about recent solar flares. When you select the operation, n8n adds the node to the canvas and opens it.

  5. To access the NASA APIs, you need to set up credentials:

  6. Select the Credential for NASA API dropdown.

    1. Select Create new credential. n8n opens the credentials view.
    2. Go to NASA APIs and fill out the form from the Generate API Key link. The NASA site generates the key and emails it to the address you entered.
    3. Check your email account for the API key. Copy the key, and paste it into API Key in n8n.
    4. Select Save.
    5. Close the credentials screen. n8n returns to the node. The new credentials should be automatically selected in Credential for NASA API.
  7. By default, DONKI Solar Flare provides data for the past 30 days. To limit it to just the last week, use Additional Fields:

  8. Select Add field.

  9. Select Start date.

  10. To get a report starting from a week ago, you can use an expression: next to Start date, select the Expression tab, then select the expand button to open the full expressions editor.

  11. In the Expression field, enter the following expression:

This generates a date in the correct format, seven days before the current date.

Date and time formats in n8n

n8n uses Luxon to work with date and time, and also provides two variables for convenience: $now and $today. For more information, refer to Expressions > Luxon.

  1. Close the Edit Expression modal to return to the NASA node.

  2. You can now check that the node is working and returning the expected data: select Execute step to run the node manually. n8n calls the NASA API and displays details of solar flares in the past seven days in the OUTPUT section.

  3. Close the NASA node to return to the workflow canvas.

Step four: Add logic with the If node

n8n supports complex logic in workflows. In this tutorial we will use the If node to create two branches that each generate a report from the NASA data. Solar flares have five possible classifications; we will add logic that sends a report with the lower classifications to one output, and the higher classifications to another.

  1. Select the Add node connector on the NASA node.

  2. Search for If. n8n shows a list of nodes that match the search.

  3. Select If to add the node to the canvas. n8n opens the node.

  4. You need to check the value of the classType property in the NASA data. To do this:

  5. Drag classType into Value 1.

Make sure you ran the NASA node in the previous section

If you didn't follow the step in the previous section to run the NASA node, you won't see any data to work with in this step.

  1. Change the comparison operation to String > Contains.

  2. In Value 2, enter X. This is the highest classification of solar flare. In the next step, you will create two reports: one for X class solar flares, and one for all the smaller solar flares.

  3. You can now check that the node is working and returning the expected data: select Execute step to run the node manually. n8n tests the data against the condition, and shows which results match true or false in the OUTPUT panel.

Weeks without large solar flares

In this tutorial, you are working with live data. If you find there aren't any X class solar flares when you run the workflow, try replacing X in Value 2 with either A, B, C, or M.

  1. Once you are happy the node will return some events, you can close the node to return to the canvas.

Step five: Output data from your workflow

The last step of the workflow is to send the two reports about solar flares. For this example, you'll send data to Postbin. Postbin is a service that receives data and displays it on a temporary web page.

  1. On the If node, select the Add node connector labeled true.

  2. Search for PostBin. n8n shows a list of nodes that match the search.

  3. Select PostBin.

  4. Select Send a request. n8n adds the node to the canvas and opens it.

  5. Go to Postbin and select Create Bin. Leave the tab open so you can come back to it when testing the workflow.

  6. Copy the bin ID. It looks similar to 1651063625300-2016451240051.

  7. In n8n, paste your Postbin ID into Bin ID.

  8. Now, configure the data to send to Postbin. Next to Bin Content, select the Expression tab (you will need to mouse-over the Bin Content for the tab to appear), then select the expand button to open the full expressions editor.

  9. You can now click and drag the correct field from the If Node output into the expressions editor to automatically create a reference for this label. In this case the input we want is 'classType'.

  10. Once dropped into the expressions editor it will transform into this reference: {{$json["classType"]}}. Add a message to it, so that the full expression is:

  11. Close the expressions editor to return to the node.

  12. Close the Postbin node to return to the canvas.

  13. Add another Postbin node, to handle the false output path from the If node:

  14. Hover over the Postbin node, then select Node context menu > Duplicate node to duplicate the first Postbin node.

  15. Drag the false connector from the If node to the left side of the new Postbin node.

Step six: Test the workflow

  1. You can now test the entire workflow. Select Execute Workflow. n8n runs the workflow, showing each stage in progress.
  2. Go back to your Postbin bin. Refresh the page to see the output.
  3. If you want to use this workflow (in other words, if you want it to run once a week automatically), you need to activate it by selecting the Active toggle.

Postbin's bins exist for 30 minutes after creation. You may need to create a new bin and update the ID in the Postbin nodes, if you exceed this time limit.

You now have a fully functioning workflow that does something useful! It should look something like this:

View workflow file

Along the way you have discovered:

  • How to find the nodes you want and join them together
  • How to use expressions to manipulate data
  • How to create credentials and attach them to nodes
  • How to use logic in your workflows

There are plenty of things you could add to this (perhaps add some more credentials and a node to send you an email of the results), or maybe you have a specific project in mind. Whatever your next steps, the resources linked below should help.

Examples:

Example 1 (unknown):

{{ $today.minus(7, 'days') }}

Example 2 (unknown):

There was a solar flare of class {{$json["classType"]}}

Hunter node

URL: llms-txt#hunter-node

Contents:

  • Operations
  • Templates and examples

Use the Hunter node to automate work in Hunter, and integrate Hunter with other applications. n8n has built-in support for a wide range of Hunter features, including getting, generating, and verifying email addresses.

On this page, you'll find a list of operations the Hunter node supports and links to more resources.

Refer to Hunter credentials for guidance on setting up authentication.

  • Get every email address found on the internet using a given domain name, with sources
  • Generate or retrieve the most likely email address from a domain name, a first name and a last name
  • Verify the deliverability of an email address

Templates and examples

Find and email ANYONE on LinkedIn with OpenAI, Hunter & Gmail

View template details

Automated Job Hunter: Upwork Opportunity Aggregator & AI-Powered Notifier

View template details

Automatically email great leads when they submit a form and record in HubSpot

View template details

Browse Hunter integration templates, or search all templates


UX guidelines for community nodes

URL: llms-txt#ux-guidelines-for-community-nodes

Contents:

  • Credentials
    • OAuth
  • Node structure
    • Operations to include
    • Resource Locator
    • Consistency with other nodes
    • Sorting options
  • Node functionality
    • Deleting operations output
    • Simplifying output fields

Your node's UI must conform to these guidelines to be a verified community node candidate.

Credentials

API key and sensitive credentials should always be password fields.

OAuth

Always include the OAuth credential if available.

Node structure

Operations to include

Try to include CRUD operations for each resource type.

Try to include common operations in nodes for each resource. n8n uses some CRUD operations to keep the experience consistent and allow users to perform basic operations on the resource. The suggested operations are:

  • Create
  • Create or Update (Upsert)
  • Delete
  • Get
  • Get Many: also used when some filtering or search is available
  • Update
  1. These operations can apply to the resource itself or an entity inside of the resource (for example, a row inside a Google Sheet). When operating on an entity inside of the resource, you must specify the name of the entity in the operation's name.
  2. The naming could change depending on the node and the resource. Check the following guidelines for details.

Resource Locator

  • Use a Resource Locator component whenever possible. This provides a much better UX for users. The Resource Locator component is most often useful when you have to select a single item.
  • The default option for the Resource Locator component should be From list (if available).

Consistency with other nodes

  • Maintain UX consistency: n8n tries to keep its UX consistent. This means following existing UX patterns, in particular, those used in the latest new or overhauled nodes.

  • Check similar nodes: For example, if you're working on a database node, it's worth checking the Postgres node.

Sorting options

  • You can enhance certain "Get Many" operations by providing users with sorting options.

  • Add sorting in a dedicated collection (below the "Options" collection). Follow the example of Airtable Record:Search.

Node functionality

Deleting operations output

When deleting an item (like a record or a row), return an array with a single object: {"deleted": true}. This confirms to the user that the deletion was successful, and the returned item makes sure the following node still triggers.
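As a rough sketch of this in a programmatic-style node (where execute() returns an array of output branches), the delete operation could end like this:

// Inside execute(), after the delete API call has succeeded.
// Returning a single confirmation item lets the following node trigger.
return [[{ json: { deleted: true } }]];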

Simplifying output fields

Normal nodes: 'Simplify' parameter

When an endpoint returns data with more than 10 fields, add a "Simplify" boolean parameter that returns a simplified version of the output with a maximum of 10 fields.

  • One of the main issues with n8n can be the size of data, and the Simplify parameter limits that problem by reducing data size.
  • Select the most useful fields to output in the simplified output and sort them so the most used ones are at the top.
  • In Simplify mode, it's often best to flatten nested fields.
  • Display Name: Simplify
  • Description: Whether to return a simplified version of the response instead of the raw data
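For illustration, a possible property definition for this toggle might look like the following sketch (the internal name and default value are assumptions, not a fixed convention):

// Entry in the node's properties array
{
	displayName: 'Simplify',
	name: 'simplify',
	type: 'boolean',
	default: false,
	description: 'Whether to return a simplified version of the response instead of the raw data',
}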

AI tool nodes: Output parameter

When an endpoint returns data with more than 10 fields, add the 'Output' option parameter with 3 modes.

In AI tool nodes, allow the user to be more granular and select the fields to output. The rationale is that tools may run out of context window and they can get confused by too many fields, so it's better to pass only the ones they need.

  • Simplified: Works the same as the "Simplify" parameter described above.
  • Raw: Returns all the available fields.
  • Selected fields: Shows a multi-option parameter for selecting the fields to add to the output and send to the AI agent. By default, this option always returns the ID of the record/entity.

Use Title Case for the node name, parameter display names (labels), and dropdown titles. Title Case is when you capitalize the first letter of each word, except for certain small words, such as articles and short prepositions.

Use Sentence case for node action names, node descriptions, parameter descriptions (tooltips), hints, and dropdown descriptions.

  • Use the third-party service terminology: Try to use the same terminology as the service you're interfacing with (for example, Notion 'blocks', not Notion 'paragraphs').
  • Use the terminology used in the UI: Stick to the terminology used in the user interface of the service, rather than that used in the APIs or technical documentation (for example, in Trello you "archive" cards, but in the API they show up as "closed". In this case, you might want to use "archive").
  • No tech jargon: Don't use technical jargon where simple words will do. For example, use "field" instead of "key".
  • Consistent naming: Choose one term for something and stick to it. For example, don't mix "directory" and "folder".

It's often helpful to insert examples of content in parameters placeholders. These should start with "e.g." and use camel case for the demo content in fields.

Placeholder examples to copy:

  • image: e.g. https://example.com/image.png
  • video: e.g. https://example.com/video.mp4
  • search term: e.g. automation
  • email: e.g. nathan@example.com
  • Twitter user (or similar): e.g. n8n
  • Name and last name: e.g. Nathan Smith
  • First name: e.g. Nathan
  • Last name: e.g. Smith

Operations name, action, and description

  • Name: This is the name displayed in the select when the node is open on the canvas. It must use title case and doesn't have to include the resource (for example, "Delete").
  • Action: This is the name of the operation displayed in the panel where the user selects the node. It must be in sentence case and must include the resource (for example, "Delete record").
  • Description: This is the sub-text displayed below the name in the select when the node is open on the canvas. It must use sentence case and must include the resource. It can add a bit of information and use alternative words than the basic resource/operation (for example, "Retrieve a list of users").
  • If the operation acts on an entity that's not the Resource (for example, a row in a Google Sheet), specify that in the operation name (for example, "Delete Row").

As a general rule, it's important to understand what the object of an operation is. Sometimes, the object of an Operation is the resource itself (for example, Sheet:Delete to delete a Sheet).

In other cases, the object of the operation isn't the resource, but something contained inside the resource (for example, Table:Delete rows, here the resource is the table, but what you are operating on are the rows inside of it).

This is the name displayed in the select when the node is open on the canvas.

  • Parameter: name

  • Case: Title Case

  • Don't repeat the resource (if the resource selection is above): The resource is often displayed above the operation, so it's not necessary to repeat it in the operation (this is the case if the object of the operation is the resource itself).

    • For example: Sheet:Delete → No need to repeat Sheet in Delete, because n8n displays Sheet in the field above and what you're deleting is the Sheet.
  • Specify the resource if there's no resource selection above: In some nodes, you won't have a resource selection (because there's only one resource). In these cases, specify the resource in the operation.

    • For example: Delete Records → In Airtable, there's no resource selection, so it's better to specify that the Delete operation will delete records.
  • Specify the object of the operation if it's not the resource: Sometimes, the object of the operation isn't the resource. In these cases, specify the object in the operation as well.

    • For example: Table:Get Columns → Specify Columns because the resource is Table, while the object of the operation is Columns.

This is the name of the operation displayed in the panel where the user selects the node.

  • Parameter: action

  • Case: Sentence case

  • Omit articles: To keep the text shorter, get rid of articles (a, an, the…).
    • correct: Update row in sheet
    • incorrect: Update a row in a sheet
  • Repeat the resource: In this case, it's okay to repeat the resource. Even if the resource is visible in the list, the user might not notice and it's useful to repeat it in the operation label.
  • Specify the object of the operation if it is not the resource: Same as for the operation name. In this case, you don't need to repeat the resource.
    • For example: Append Rows → You have to specify Rows because rows are what you're actually appending to. Don't add the resource (Sheet) since you aren't appending to the resource.

Naming description

This is the subtext displayed below the name in the selection when the node is open on the canvas.

  • Parameter: description

  • Case: Sentence case

  • If possible, add more information than that specified in the operation name

  • Use alternative wording to help users better understand what the operation is doing. Some people might not understand the text used in the operation (maybe English isn't their native language), and using alternative wording could help them.

n8n uses a general vocabulary and some context-specific vocabulary for groups of similar applications (for example, databases or spreadsheets).

The general vocabulary takes inspiration from CRUD operations:

  • Clear
    • Delete all the contents of the resource (empty the resource).
    • Description: Delete all the <CHILD_ELEMENT>s inside the <RESOURCE>
  • Create
    • Create a new instance of the resource.
    • Description: Create a new <RESOURCE>
  • Create or Update
    • Create or update an existing instance of the resource.
    • Description: Create a new <RESOURCE> or update an existing one (upsert)
  • Delete
    • You can use "Delete" in two different ways:
      1. Delete a resource:
        • Description: Delete a <RESOURCE> permanently (use "permanently" only if that's the case)
      2. Delete something inside of the resource (for example, a row):
        • In this case, always specify the object of the operation: for example, Delete Rows or Delete Records.
        • Description: Delete a <CHILD_ELEMENT> permanently
  • Get
    • You can use "Get" in two different ways:
      1. Get a resource:
        • Description: Retrieve a <RESOURCE>
      2. Get an item inside of the resource (for example, records):
        • In this case, always specify the object of the operation: for example, Get Row or Get Record.
        • Description: Retrieve a <CHILD_ELEMENT> from the/a <RESOURCE>
  • Get Many
    • You can use "Get Many" in two different ways:
      1. Get a list of resources (without filtering):
        • Description: Retrieve a list of <RESOURCE>s
      2. Get a list of items inside of the resource (for example, records):
        • In this case, always specify the object of the operation: for example, Get Many Rows or Get Many Records.
        • You can omit Many: Get Many Rows can be Get Rows.
        • Description: List all <CHILD_ELEMENT>s in the/a <RESOURCE>
  • Insert or Append
    • Add something inside of a resource.
    • Use insert for database nodes.
    • Description: Insert <CHILD_ELEMENT>(s) in a <RESOURCE>
  • Insert or Update or Append or Update
    • Add or update something inside of a resource.
    • Use insert for database nodes.
    • Description: Insert <CHILD_ELEMENT>(s) or update an existing one(s) (upsert)
  • Update
    • You can use "Update" in two different ways:
      1. Update a resource:
        • Description: Update one or more <RESOURCE>s
      2. Update something inside of a resource (for example, a row):
        • In this case, always specify the object of the operation: for example, Update Rows or Update Records.
        • Description: Update <CHILD_ELEMENT>(s) inside a <RESOURCE>

Referring to parameter and field name

When you need to refer to parameter names or field names in copy, wrap them in single quotation marks (for example, "Please fill the 'name' parameter").

Boolean description

Start the description of boolean components with 'Whether...'

General philosophy

Errors are sources of pain for users. For this reason, n8n always wants to tell the user:

  • What happened: a description of the error and what went wrong.
  • How to solve the problem: or at least how to get unstuck and continue using n8n without problems. n8n doesn't want users to remain blocked, so use this as an opportunity to guide them to success.

Error structure in the Output panel

Error Message - What happened

This message explains to the user what happened, and the current issue that prevents the execution from completing.

  • If you have the displayName of the parameter that triggered the error, include it in the error message or description (or both).
  • Item index: if you have the index of the item that triggered the error, append [Item X] to the error message. For example, The ID of the release in the parameter “Release ID” could not be found [item 2].
  • Avoid using words like "error", "problem", "failure", "mistake".

Error Description - How to solve or get unstuck

The description explains to users how to solve the problem, what to change in the node configuration (if that's the case), or how to get unstuck. Here, you should guide them to the next step and unblock them.

Avoid using words like "error", "problem", "failure", "mistake".
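In a programmatic-style node, one way to provide both parts is to throw a NodeOperationError with a message and a description. The exact wording and item index below are illustrative, not prescribed:

// At the top of the node file
import { NodeOperationError } from 'n8n-workflow';

// Inside execute(), when the lookup fails for a given item
throw new NodeOperationError(
	this.getNode(),
	'The ID of the release in the parameter "Release ID" could not be found [item 2]',
	{
		description: 'Check that the release exists and that the "Release ID" parameter points to it',
		itemIndex: 2,
	},
);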


Allows usage of only crypto

URL: llms-txt#allows-usage-of-only-crypto

export NODE_FUNCTION_ALLOW_BUILTIN=crypto
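With this variable set on a self-hosted instance, a JavaScript Code node should be able to require the module. A minimal sketch:

// Requires NODE_FUNCTION_ALLOW_BUILTIN=crypto (or *) on the instance
const crypto = require('crypto');

const hash = crypto.createHash('sha256').update('hello world').digest('hex');
return [{ json: { hash } }];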


Netlify node

URL: llms-txt#netlify-node

Contents:

  • Operations
  • Templates and examples

Use the Netlify node to automate work in Netlify, and integrate Netlify with other applications. n8n has built-in support for a wide range of Netlify features, including getting and cancelling deployments, as well as deleting and getting sites.

On this page, you'll find a list of operations the Netlify node supports and links to more resources.

Refer to Netlify credentials for guidance on setting up authentication.

  • Deploy
    • Cancel a deployment
    • Create a new deployment
    • Get a deployment
    • Get all deployments
  • Site
    • Delete a site
    • Get a site
    • Returns all sites

Templates and examples

Deploy site when new content gets added

View template details

Send notification when deployment fails

View template details

Add Netlify Form submissions to Airtable

View template details

Browse Netlify integration templates, or search all templates


Execution order in multi-branch workflows

URL: llms-txt#execution-order-in-multi-branch-workflows

n8n's node execution order depends on the version of n8n you're using:

  • For workflows created before version 1.0: n8n executes the first node of each branch, then the second node of each branch, and so on.
  • For workflows created in version 1.0 and above: n8n executes each branch in turn, completing one branch before starting another. n8n orders the branches based on their position on the canvas, from topmost to bottommost. If two branches are at the same height, the leftmost branch executes first.

You can change the execution order in your workflow settings.


Metric-based evaluations

URL: llms-txt#metric-based-evaluations

Contents:

  • What are metric-based evaluations?
  • How it works
      1. Set up light evaluation
      2. Add metrics to workflow
      3. Run evaluation and view results

Available on Pro and Enterprise plans

Metric-based evaluation is available on Pro and Enterprise plans. Registered community and Starter plan users can also use it for a single workflow.

What are metric-based evaluations?

Once your workflow is ready for deployment, you often want to test it on more examples than when you were building it.

For example, when production executions start to turn up edge cases, you want to add them to your test dataset so that you can make sure they're covered.

For large datasets like the ones built from production data, it can be hard to get a sense of performance just by eyeballing the results. Instead, you must measure performance. Metric-based evaluations can assign one or more scores to each test run, which you can compare to previous runs. Individual scores get rolled up to measure performance on the whole dataset.

This feature allows you to run evaluations that calculate metrics, track how those metrics change between runs and drill down into the reasons for those changes.

Metrics can be deterministic functions (such as the distance between two strings) or you can calculate them using AI. Metrics often involve checking how far away the output is from a reference output (also called ground truth). To do so, the dataset must contain that reference output. Some evaluations don't need this reference output though (for example, checking text for sentiment or toxicity).

Credentials for Google Sheets

Evaluations use data tables or Google Sheets to store the test dataset. To use Google Sheets as a dataset source, configure a Google Sheets credential.

  1. Set up light evaluation
  2. Add metrics to workflow
  3. Run evaluation and view results

1. Set up light evaluation

Follow the setup instructions to create a dataset and wire it up to your workflow, writing outputs back to the dataset.

The following steps use the same support ticket classification workflow from the light evaluation docs:

2. Add metrics to workflow

Metrics are dimensions used to score the output of your workflow. They often compare the actual workflow output with a reference output. It's common to use AI to calculate metrics, although it's sometimes possible to just use code. In n8n, metrics are always numbers.

You need to add the logic to calculate the metrics for your workflow, at a point after it has produced the outputs. You can add any reference outputs your metric uses as a column in your dataset. This makes sure they're available in the workflow, since the evaluation trigger outputs them.

Use the Set Metrics operation to calculate:

  • Correctness (AI-based): Whether the answer's meaning is consistent with a supplied reference answer. Uses a scale of 1 to 5, with 5 being the best.
  • Helpfulness (AI-based): Whether the response answers the given query. Uses a scale of 1 to 5, with 5 being the best.
  • String Similarity: How close the answer is to the reference answer, measured character-by-character (edit distance). Returns a score between 0 and 1.
  • Categorization: Whether the answer is an exact match with the reference answer. Returns 1 when matching and 0 otherwise.
  • Tools Used: Whether the execution used tools or not. Returns a score between 0 and 1.

You can also add custom metrics. Just calculate the metrics within the workflow and then map them into an Evaluation node. Use the Set Metrics operation and choose Custom Metrics as the Metric. You can then set the names and values for the metrics you want to return.

Examples of custom metrics include:

  • RAG document relevance: when working with a vector database, whether the documents retrieved are relevant to the question.
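As an illustration, a Code node placed after your workflow's output (and before the Evaluation node's Set Metrics operation) could compute a simple categorization metric. The field names category and expected_category are assumptions about your dataset columns:

// Run Once for Each Item: compare the workflow's answer with the reference output from the dataset
const actual = ($json.category || '').trim().toLowerCase();
const expected = ($json.expected_category || '').trim().toLowerCase();

// 1 for an exact match, 0 otherwise; map this value into the Set Metrics operation
return { json: { categorization: actual === expected ? 1 : 0 } };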

Calculating metrics can add latency and cost, so you may only want to do it when running an evaluation and avoid it when making a production execution. You can do this by putting the metric logic after a 'check if evaluating' operation.

3. Run evaluation and view results

Switch to the Evaluations tab on your workflow and click the Run evaluation button. An evaluation will start. Once the evaluation has finished, it will display a summary score for each metric.

You can see the results for each test case by clicking on the test run row. Clicking on an individual test case will open the execution that produced it (in a new tab).


LoneScale credentials

URL: llms-txt#lonescale-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a LoneScale account.

Supported authentication methods

Refer to LoneScale's API documentation for more information about the service.

To configure this credential, you'll need:


Vector Store Retriever node

URL: llms-txt#vector-store-retriever-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources

Use the Vector Store Retriever node to retrieve documents from a vector store.

On this page, you'll find the node parameters for the Vector Store Retriever node, and links to more resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Limit: Enter the maximum number of results to return.

Templates and examples

Ask questions about a PDF using AI

View template details

AI Crew to Automate Fundamental Stock Analysis - Q&A Workflow

View template details

Advanced AI Demo (Presented at AI Developers #14 meetup)

View template details

Browse Vector Store Retriever integration templates, or search all templates

Refer to LangChain's vector store retriever documentation for more information about the service.

View n8n's Advanced AI documentation.


Using the Code node

URL: llms-txt#using-the-code-node

Contents:

  • Usage
    • Choose a mode
  • JavaScript
    • Supported JavaScript features
    • External libraries
    • Built-in methods and variables
    • Keyboard shortcuts
  • Python (Pyodide - legacy)
    • Built-in methods and variables
    • Keyboard shortcuts

Use the Code node to write custom JavaScript or Python and run it as a step in your workflow.

This page gives usage information about the Code node. For more guidance on coding in n8n, refer to the Code section. It includes:

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Code integrations page.

Function and Function Item nodes

The Code node replaces the Function and Function Item nodes from version 0.198.0. If you're using an older version of n8n, you can still view the Function node documentation and Function Item node documentation.

How to use the Code node.

  • Run Once for All Items: this is the default. When your workflow runs, the code in the code node executes once, regardless of how many input items there are.
  • Run Once for Each Item: choose this if you want your code to run for every input item.
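For example, the same transformation looks slightly different in each mode. A minimal sketch, assuming each input item has a name field:

// Run Once for All Items: you receive every input item at once
const items = $input.all();
return items.map(item => ({ json: { greeting: `Hello ${item.json.name}` } }));

// Run Once for Each Item: the code runs per item and returns a single item
// return { json: { greeting: `Hello ${$input.item.json.name}` } };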

The Code node supports Node.js.

Supported JavaScript features

The Code node supports:

  • Promises. Instead of returning the items directly, you can return a promise which resolves accordingly.
  • Writing to your browser console using console.log. This is useful for debugging and troubleshooting your workflows.
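A minimal sketch combining both features in Run Once for All Items mode:

// The log appears in your browser console; n8n waits for the returned promise to resolve
console.log('Waiting before returning items...');
return new Promise((resolve) => {
	setTimeout(() => resolve([{ json: { waited: true } }]), 500);
});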

External libraries

If you self-host n8n, you can import and use built-in and external npm modules in the Code node. To learn how to enable external modules, refer to the Enable modules in Code node guide.

If you use n8n Cloud, you can't import external npm modules. n8n makes two modules available for you:

Built-in methods and variables

n8n provides built-in methods and variables for working with data and accessing n8n data. Refer to Built-in methods and variables for more information.

The syntax to use the built-in methods and variables is $variableName or $methodName(). Type $ in the Code node or expressions editor to see a list of suggested methods and variables.
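For instance, a JavaScript Code node in Run Once for Each Item mode might combine a few of them (the node name Webhook is illustrative):

// $json is the current item's JSON, $now is a Luxon DateTime
const webhookData = $('Webhook').first().json; // output of an earlier node named "Webhook"
return { json: { name: $json.name, receivedAt: $now.toISO(), webhook: webhookData } };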

Keyboard shortcuts

The Code node editing environment supports time-saving and useful keyboard shortcuts for a range of operations from autocompletion to code-folding and using multiple-cursors. See the full list of keyboard shortcuts.

Python (Pyodide - legacy)

Pyodide is a legacy feature. Future versions of n8n will no longer support this feature.

n8n added Python support in version 1.0. It doesn't include a Python executable. Instead, n8n provides Python support using Pyodide, which is a port of CPython to WebAssembly. This limits the available Python packages to the Packages included with Pyodide. n8n downloads the package automatically the first time you use it.

Slower than JavaScript

The Code node takes longer to process Python than JavaScript. This is due to the extra compilation steps.

Built-in methods and variables

n8n provides built-in methods and variables for working with data and accessing n8n data. Refer to Built-in methods and variables for more information.

The syntax to use the built-in methods and variables is _variableName or _methodName(). Type _ in the Code node to see a list of suggested methods and variables.

Keyboard shortcuts

The Code node editing environment supports time-saving and useful keyboard shortcuts for a range of operations from autocompletion to code-folding and using multiple-cursors. See the full list of keyboard shortcuts.

File system and HTTP requests

You can't access the file system or make HTTP requests. Use the following nodes instead:

Python (Native - beta)

n8n added native Python support using task runners (beta) in version 1.111.0.

Main differences from Pyodide:

  • Native Python supports only _items in all-items mode and _item in per-item mode. It doesn't support other n8n built-in methods and variables.
  • Native Python supports importing native Python modules from the standard library and from third-parties, if the n8nio/runners image includes them and explicitly allowlists them. See adding extra dependencies for task runners for more details.
  • Native Python denies insecure built-ins by default. See task runners environment variables for more details.
  • Unlike Pyodide, which accepts dot access notation, for example, item.json.myNewField, native Python only accepts bracket access notation, for example, item["json"]["my_new_field"]. There may be other minor syntax differences where Pyodide accepts constructs that aren't legal in native Python.

Keep in mind upgrading to native Python is a breaking change, so you may need to adjust your Python scripts to use the native Python runner.

This feature is in beta and is subject to change. As it becomes stable, n8n will roll it out progressively to n8n cloud users during 2025. Self-hosting users can try it out and provide feedback.

There are two places where you can use code in n8n: the Code node and the expressions editor. When using either area, there are some key concepts you need to know, as well as some built-in methods and variables to help with common tasks.

When working with the Code node, you need to understand the following concepts:

  • Data structure: understand the data you receive in the Code node, and requirements for outputting data from the node.
  • Item linking: learn how data items work, and how to link to items from previous nodes. You need to handle item linking in your code when the number of input and output items doesn't match.
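A rough sketch of the expected output structure, including an explicit item link (the pairedItem mapping here is the simple one-to-one case):

// Each output item wraps its fields in a json key; pairedItem links it back to an input item
const results = $input.all().map((item, index) => ({
	json: { ...item.json, processed: true },
	pairedItem: { item: index },
}));
return results;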

Built-in methods and variables

n8n includes built-in methods and variables. These provide support for:

  • Accessing specific item data
  • Accessing data about workflows, executions, and your n8n environment
  • Convenience variables to help with data and time

Refer to Built-in methods and variables for more information.

Use AI in the Code node

AI assistance in the Code node is available to Cloud users. It isn't available in self-hosted n8n.

AI generated code overwrites your code

If you've already written some code on the Code tab, the AI generated code will replace it. n8n recommends using AI as a starting point to create your initial code, then editing it as needed.

To use ChatGPT to generate code in the Code node:

  1. In the Code node, set Language to JavaScript.
  2. Select the Ask AI tab.
  3. Write your query.
  4. Select Generate Code. n8n sends your query to ChatGPT, then displays the result in the Code tab.

Netlify credentials

URL: llms-txt#netlify-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following nodes:

Create a Netlify account.

Supported authentication methods

Refer to Netlify's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • An Access Token: Generate an Access Token in Applications > Personal Access Tokens. Refer to Netlify API Authentication for more detailed instructions.

Oura credentials

URL: llms-txt#oura-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following nodes:

Create an Oura account.

Supported authentication methods

Refer to Oura's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • A Personal Access Token: To generate a personal access token, go to the Personal Access Tokens page and select Create A New Personal Access Token.

Refer to How to Generate Personal Access Tokens for more information.


Gong node

URL: llms-txt#gong-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Gong node to automate work in Gong and integrate Gong with other applications. n8n has built-in support for a wide range of Gong features, which includes getting one or more calls and users.

On this page, you'll find a list of operations the Gong node supports, and links to more resources.

You can find authentication information for this node here.

  • Call
    • Get
    • Get Many
  • User
    • Get
    • Get Many

Templates and examples

CallForge - 05 - Gong.io Call Analysis with Azure AI & CRM Sync

View template details

CallForge - 04 - AI Workflow for Gong.io Sales Calls

View template details

CallForge - 06 - Automate Sales Insights with Gong.io, Notion & AI

View template details

Browse Gong integration templates, or search all templates

Refer to Gong's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


What's memory in AI?

URL: llms-txt#what's-memory-in-ai?

Contents:

  • AI memory in n8n

Memory is a key part of AI chat services. The memory keeps a history of previous messages, allowing for an ongoing conversation with the AI, rather than every interaction starting fresh.

To add memory to your AI workflow you can use either:

If you need to do advanced AI memory management in your workflows, use the Chat Memory Manager node.

This node is useful when you:

  • Can't add a memory node directly.
  • Need to do more complex memory management, beyond what the memory nodes offer. For example, you can add this node to check the memory size of the Agent node's response, and reduce it if needed.
  • Want to inject messages to the AI that look like user messages, to give the AI more context.

Coda credentials

URL: llms-txt#coda-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following nodes:

Create a Coda account.

Supported authentication methods

Refer to Coda's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • An API Access Token: Generate an API access token in your Coda Account settings.


Supabase Vector Store node

URL: llms-txt#supabase-vector-store-node

Contents:

  • Node usage patterns
    • Use as a regular node to insert, update, and retrieve documents
    • Connect directly to an AI agent as a tool
    • Use a retriever to fetch documents
    • Use the Vector Store Question Answer Tool to answer questions
  • Node parameters
    • Operation Mode
    • Rerank Results
    • Get Many parameters
    • Insert Documents parameters

Use the Supabase Vector Store node to interact with your Supabase database as a vector store. You can insert documents into a vector database, get documents from a vector database, retrieve documents to provide them to a retriever connected to a chain, or connect it directly to an agent to use as a tool. You can also update an item in a vector store by its ID.

On this page, you'll find the node parameters for the Supabase node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Supabase provides a quickstart for setting up your vector store. If you use settings other than the defaults in the quickstart, this may affect parameter settings in n8n. Make sure you understand what you're doing.

Node usage patterns

You can use the Supabase Vector Store node in the following patterns.

Use as a regular node to insert, update, and retrieve documents

You can use the Supabase Vector Store as a regular node to insert, update, or get documents. This pattern places the Supabase Vector Store in the regular connection flow without using an agent.

You can see an example of this in scenario 1 of this template.

Connect directly to an AI agent as a tool

You can connect the Supabase Vector Store node directly to the tool connector of an AI agent to use a vector store as a resource when answering queries.

Here, the connection would be: AI agent (tools connector) -> Supabase Vector Store node.

Use a retriever to fetch documents

You can use the Vector Store Retriever node with the Supabase Vector Store node to fetch documents from the Supabase Vector Store node. This is often used with the Question and Answer Chain node to fetch documents from the vector store that match the given chat input.

An example of the connection flow (the example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Supabase Vector Store.

Use the Vector Store Question Answer Tool to answer questions

Another pattern uses the Vector Store Question Answer Tool to summarize results and answer questions from the Supabase Vector Store node. Rather than connecting the Supabase Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store.

The connections flow in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Supabase Vector Store.

Operation Mode

This Vector Store node has five modes: Get Many, Insert Documents, Retrieve Documents (As Vector Store for Chain/Tool), Retrieve Documents (As Tool for AI Agent), and Update Documents. The mode you select determines the operations you can perform with the node and what inputs and outputs are available.

Get Many

In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt will be embedded and used for similarity search. The node will return the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context.

Insert Documents

Use Insert Documents mode to insert new documents into your vector database.

Retrieve Documents (As Vector Store for Chain/Tool)

Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.

Retrieve Documents (As Tool for AI Agent)

Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details.

Update Documents

Use Update Documents mode to update documents in a vector database by ID. Fill in the ID with the ID of the embedding entry to update.

Rerank Results

Enables reranking. If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the Get Many, Retrieve Documents (As Vector Store for Chain/Tool), and Retrieve Documents (As Tool for AI Agent) modes.

Get Many parameters

  • Table Name: Enter the Supabase table to use.
  • Prompt: Enter the search query.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Insert Documents parameters

  • Table Name: Enter the Supabase table to use.

Retrieve Documents (As Vector Store for Chain/Tool) parameters

  • Table Name: Enter the Supabase table to use.

Retrieve Documents (As Tool for AI Agent) parameters

  • Name: The name of the vector store.

  • Description: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often.

  • Table Name: Enter the Supabase table to use.

  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Update Documents parameters

  • Table Name: Enter the Supabase table to use.

  • ID: The ID of an embedding entry.

Query Name

The name of the matching function you set up in Supabase. If you follow the Supabase quickstart, this will be match_documents.

Metadata Filter

Available in Get Many mode. When searching for data, use this to match with metadata associated with the document.

This is an AND query. If you specify more than one metadata filter field, all of them must match.

When inserting data, the metadata is set using the document loader. Refer to Default Data Loader for more information on loading documents.

Templates and examples

AI Agent To Chat With Files In Supabase Storage

View template details

Supabase Insertion & Upsertion & Retrieval

View template details

Automate Sales Cold Calling Pipeline with Apify, GPT-4o, and WhatsApp

View template details

Browse Supabase Vector Store integration templates, or search all templates

Refer to LangChain's Supabase documentation for more information about the service.

View n8n's Advanced AI documentation.


WooCommerce Trigger node

URL: llms-txt#woocommerce-trigger-node

Contents:

  • Events

WooCommerce is a customizable, open-source e-commerce plugin for WordPress.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's WooCommerce Trigger integrations page.

Events

  • coupon.created
  • coupon.updated
  • coupon.deleted
  • customer.created
  • customer.updated
  • customer.deleted
  • order.created
  • order.updated
  • order.deleted
  • product.created
  • product.updated
  • product.deleted

Microsoft SharePoint node

URL: llms-txt#microsoft-sharepoint-node

Contents:

  • Operations
  • Templates and examples
  • Related resources

Use the Microsoft SharePoint node to automate work in Microsoft SharePoint and integrate Microsoft SharePoint with other applications. n8n has built-in support for a wide range of Microsoft SharePoint features, which includes downloading, uploading, and updating files, managing items in a list, and getting lists and list items.

On this page, you'll find a list of operations the Microsoft SharePoint node supports, and links to more resources.

You can find authentication information for this node here.

  • File:
    • Download: Download a file.
    • Update: Update a file.
    • Upload: Upload an existing file.
  • Item:
    • Create: Create an item in an existing list.
    • Create or Update: Create a new item, or update the current one if it already exists (upsert).
    • Delete: Delete an item from a list.
    • Get: Retrieve an item from a list.
    • Get Many: Get specific items in a list or list many items.
    • Update: Update an item in an existing list.
  • List:
    • Get: Retrieve details of a single list.
    • Get Many: Retrieve a list of lists.

Templates and examples

Upload File to SharePoint Using Microsoft Graph API

View template details

Track Top Social Media Trends with Reddit, Twitter, and GPT-4o to SP/Drive

View template details

🛠️ Microsoft SharePoint Tool MCP Server 💪 all 11 operations

View template details

Browse Microsoft SharePoint integration templates, or search all templates

Refer to Microsoft's SharePoint documentation for more information about the service.


Microsoft Dynamics CRM node

URL: llms-txt#microsoft-dynamics-crm-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Microsoft Dynamics CRM node to automate work in Microsoft Dynamics CRM, and integrate Microsoft Dynamics CRM with other applications. n8n has built-in support for creating, updating, deleting, and getting Microsoft Dynamics CRM accounts.

On this page, you'll find a list of operations the Microsoft Dynamics CRM node supports and links to more resources.

Refer to Microsoft credentials for guidance on setting up authentication.

  • Account
    • Create
    • Delete
    • Get
    • Get All
    • Update

Templates and examples

Browse Microsoft Dynamics CRM integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Compare Datasets

URL: llms-txt#compare-datasets

Contents:

  • Node parameters
  • Understand item comparison
  • Node options
    • Fields to Skip Comparing
    • Disable Dot Notation
    • Multiple Matches
  • Understand the output
  • Templates and examples

The Compare Datasets node helps you compare data from two input streams.

  1. Decide which fields to compare. In Input A Field, enter the name of the field you want to use from input stream A. In Input B Field, enter the name of the field you want to use from input stream B.
  2. Optional: You can compare by multiple fields. Select Add Fields to Match to set up more comparisons.
  3. Choose how to handle differences between the datasets. In When There Are Differences, select one of the following:
    • Use Input A Version to treat input stream A as the source of truth.
    • Use Input B Version to treat input stream B as the source of truth.
    • Use a Mix of Versions to use different inputs for different fields.
      • Use Prefer to select either Input A Version or Input B Version as the main source of truth.
      • In For Everything Except, enter the input fields that are exceptions and should pull from the other input source. To add multiple input fields, enter a comma-separated list.
    • Include Both Versions to include both input streams in the output, which may make the structure more complex.
  4. Decide whether to use Fuzzy Compare. When turned on, the comparison will tolerate small type differences when comparing fields. For example, the number 3 and the string 3 are treated as the same with Fuzzy Compare turned on, but wouldn't be treated the same with it turned off.

Understand item comparison

Item comparison is a two stage process:

  1. n8n checks if the values of the fields you selected to compare match across both inputs.
  2. If the fields to compare match, n8n then compares all fields within the items, to determine if the items are the same or different.

Use the node Options to refine your comparison or tweak comparison behavior.

Fields to Skip Comparing

Enter field names that you want to ignore in the comparison.

For example, if you compare the two datasets below using person.language as the Fields to Match, n8n returns them as different. If you add person.name to Fields to Skip Comparing, n8n returns them as matching.

Disable Dot Notation

Whether to disallow referencing child fields using parent.child in the field name (turned on) or allow it (turned off, default).

Multiple Matches

Choose how to handle duplicate data. The default is Include All Matches. You can choose Include First Match Only.

For example, given these two datasets:

n8n returns three items in the Same Branch tab. The data is the same in both branches.

If you select Include First Match Only, n8n returns two items in the Same Branch tab. The data is the same in both branches, but n8n only returns the first occurrence of the matching "apple" items.

Understand the output

There are four output options:

  • In A only Branch: Contains data that occurs only in the first input.
  • Same Branch: Contains data that's the same in both inputs.
  • Different Branch: Contains data that's different between inputs.
  • In B only Branch: Contains data that occurs only in the second input.

Templates and examples

Intelligent Email Organization with AI-Powered Content Classification for Gmail

View template details

Two way sync Pipedrive and MySQL

View template details

Sync Google Sheets data with MySQL

View template details

Browse Compare Datasets integration templates, or search all templates

Examples:

Example 1 (unknown):

// Input 1
	[
		{
			"person":
			{
				"name":	"Stefan",
				"language":	"de"
			}
		},
		{
			"person":
			{
				"name":	"Jim",
				"language":	"en"
			}
		},
		{
			"person":
			{
				"name":	"Hans",
				"language":	"de"
			}
		}
	]
	// Input 2
		[
		{
			"person":
			{
				"name":	"Sara",
				"language":	"de"
			}
		},
		{
			"person":
			{
				"name":	"Jane",
				"language":	"en"
			}
		},
		{
			"person":
			{
				"name":	"Harriet",
				"language":	"de"
			}
		}
	]

Example 2 (unknown):

// Input 1
	[
		{
			"fruit": {
				"type": "apple",
				"color": "red"
			}
		},
				{
			"fruit": {
				"type": "apple",
				"color": "red"
			}
		},
				{
			"fruit": {
				"type": "banana",
				"color": "yellow"
			}
		}
	]
	// Input 2
	[
		{
			"fruit": {
				"type": "apple",
				"color": "red"
			}
		},
				{
			"fruit": {
				"type": "apple",
				"color": "red"
			}
		},
				{
			"fruit": {
				"type": "banana",
				"color": "yellow"
			}
		}
	]

Set the custom execution data object

URL: llms-txt#set-the-custom-execution-data-object

_execution.customData.setAll({"key1": "value1", "key2": "value2"})
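The underscore prefix is the Python (Pyodide) syntax; in a JavaScript Code node the equivalent call uses the $ prefix:

$execution.customData.setAll({ key1: 'value1', key2: 'value2' })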


NASA credentials

URL: llms-txt#nasa-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using an API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to the Browse APIs section of the NASA Open APIs for more information about the service.

To configure this credential, you'll need:

  • An API Key

To generate an API key:

  1. Go to the NASA Open APIs page.
  2. Complete the fields in the Generate API Key section.
  3. Copy the API Key and enter it in your n8n credential.

E-goi credentials

URL: llms-txt#e-goi-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create an E-goi account.

Supported authentication methods

Refer to E-goi's API documentation for more information about the service.

To configure this credential, you'll need:


Reranker Cohere

URL: llms-txt#reranker-cohere

Contents:

  • Node parameters
    • Model
  • Templates and examples
  • Related resources

The Reranker Cohere node allows you to rerank the resulting chunks from a vector store. You can connect this node to a vector store.

The reranker reorders the list of documents retrieved from a vector store for a given query in order of descending relevance.

On this page, you'll find the node parameters for the Reranker Cohere node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Choose the reranking model to use. You can find out more about the available models in Cohere's model documentation.

Templates and examples

Automate Sales Cold Calling Pipeline with Apify, GPT-4o, and WhatsApp

View template details

Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG

by Ezema Kingsley Chibuzo

View template details

Build an All-Source Knowledge Assistant with Claude, RAG, Perplexity, and Drive

View template details

Browse Reranker Cohere integration templates, or search all templates

View n8n's Advanced AI documentation.


Mandrill node

URL: llms-txt#mandrill-node

Contents:

  • Operations
  • Templates and examples

Use the Mandrill node to automate work in Mandrill, and integrate Mandrill with other applications. n8n supports sending messages based on templates or HTML with Mandrill.

On this page, you'll find a list of operations the Mandrill node supports and links to more resources.

Refer to Mandrill credentials for guidance on setting up authentication.

  • Message
    • Send message based on template.
    • Send message based on HTML.

Templates and examples

Browse Mandrill integration templates, or search all templates


Microsoft SQL credentials

URL: llms-txt#microsoft-sql-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using SQL database connection

You can use these credentials to authenticate the following nodes:

Create a user account on a Microsoft SQL server database.

Supported authentication methods

  • SQL database connection

Refer to Microsoft's Connect to SQL Server documentation for more information about connecting to the service.

Using SQL database connection

To configure this credential, you'll need:

  • The Server name
  • The Database name
  • Your User account/ID
  • Your Password
  • The Port to use for the connection
  • The Domain name
  • Whether to use TLS
  • Whether to Ignore SSL Issues
  • The Connect Timeout
  • The Request Timeout
  • The TDS Version the connection should use

To set up the database connection:

  1. Enter the SQL Server Host Name as the Server. In an existing SQL Server connection, the host name comes before the instance name in the format HOSTNAME\INSTANCENAME. Find the host name:

  2. Enter the SQL Server Instance Name as the Database name. Find this name using the same steps listed above for finding the host name.

    • If you don't see an instance name in any of these places, then your database uses the default MSSQLSERVER instance name.

  3. Enter your User account name or ID.

  4. Enter your Password.

  5. Enter the Port to use for the connection.

    • SQL Server defaults to 1433.
    • If you can't connect over port 1433, check the Error logs for the phrase Server is listening on to identify the port number you should enter.

  6. You only need to enter the Domain name if users in multiple domains access your database. Run this SQL query to get the domain name:

  7. Select whether to use TLS.

  8. Select whether to Ignore SSL Issues: If turned on, the credential will connect even if SSL certificate validation fails.

  9. Enter the number of milliseconds n8n should allow the initial connection to take before disconnecting as the Connect Timeout. Refer to the SqlConnection.ConnectionTimeout property documentation for more information.

    • SQL Server stores this timeout as seconds, while n8n stores it as milliseconds. If you're copying your SQL Server defaults, multiply by 1,000 before entering the number here.

  10. Enter the number of milliseconds n8n should wait on a given request before timing out as the Request Timeout. This is essentially a query timeout parameter. Refer to Troubleshoot query time-out errors for more information.

  11. Select the Tabular Data Stream (TDS) protocol to use from the TDS Version dropdown. If the server doesn't support the version you select here, the connection uses a negotiated alternate version. Refer to Appendix A: Product Behavior for a more detailed breakdown of the TDS versions' compatibility with different SQL Server versions and .NET frameworks. Options include:

    • 7_4 (SQL Server 2012 ~ 2019): TDS version 7.4.
    • 7_3_B (SQL Server 2008R2): TDS version 7.3.B.
    • 7_3_A (SQL Server 2008): TDS version 7.3.A.
    • 7_2 (SQL Server 2005): TDS version 7.2.
    • 7_1 (SQL Server 2000): TDS version 7.1.

Examples:

Example 1 (unknown):

SELECT DEFAULT_DOMAIN()[DomainName];

Sysdig Management credentials

URL: llms-txt#sysdig-management-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access key

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create a Sysdig account or configure a local instance.

Supported authentication methods

Refer to Sysdig's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more.

Using API access key

To configure this credential, you'll need:

Refer to the Sysdig Agent Access Keys documentation for instructions on obtaining the Access Key from the application.


SIGNL4 credentials

URL: llms-txt#signl4-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using webhook secret

You can use these credentials to authenticate the following nodes:

Create a SIGNL4 account.

Supported authentication methods

Refer to SIGNL4's Inbound Webhook documentation for more information about the service.

Using webhook secret

To configure this credential, you'll need:

  • A Team Secret: SIGNL4 includes this secret in the "Sign up complete" email as the last part of the webhook URL. If your webhook URL is https://connect.signl4.com/webhook/helloworld, your team secret would be helloworld.

Output to the browser console with console.log() or print() in the Code node

URL: llms-txt#output-to-the-browser-console-with-console.log()-or-print()-in-the-code-node

Contents:

  • console.log (JavaScript)
  • print (Python)
    • Handling an output of [object Object]

You can use console.log() or print() in the Code node to help when writing and debugging your code.

For help opening your browser console, refer to this guide by Balsamiq.

console.log (JavaScript)

For technical information on console.log(), refer to the MDN developer docs.

For example, copy the following code into a Code node, then open your console and run the node:

print (Python)

For technical information on print(), refer to Real Python's guide.

For example, set your Code node Language to Python, copy the following code into the node, then open your console and run the node:

Handling an output of [object Object]

If the console displays [object Object] when you print, check the data type, then convert it as needed.

To check the data type:

If type() outputs <class 'pyodide.ffi.JsProxy'>, you need to convert the JsProxy to a native Python object using to_py(). This occurs when working with data in the n8n node data structure, such as node inputs and outputs. For example, if you want to print the data from a previous node in the workflow:

Refer to the Pyodide documentation on JsProxy for more information on this class.

Examples:

Example 1 (unknown):

let a = "apple";
console.log(a);

Example 2 (unknown):

a = "apple"
print(a)

Example 3 (unknown):

print(type(myData))

Example 4 (unknown):

previousNodeData = _("<node-name>").all()
for item in previousNodeData:
	# item is of type <class 'pyodide.ffi.JsProxy'>
	# You need to convert it to a Dict
	itemDict = item.json.to_py()
	print(itemDict)

Build a programmatic-style node

URL: llms-txt#build-a-programmatic-style-node

Contents:

  • Prerequisites
  • Build your node
    • Step 1: Set up the project
    • Step 2: Add an icon
    • Step 3: Define the node in the base file
    • Step 4: Add the execute method
    • Step 5: Set up authentication
    • Step 6: Add node metadata
    • Step 7: Update the npm package details
  • Test your node

This tutorial walks through building a programmatic-style node. Before you begin, make sure this is the node style you need. Refer to Choose your node building approach for more information.

You need the following installed on your development machine:

  • git
  • Node.js and npm. Minimum version Node 18.17.0. You can find instructions on how to install both using nvm (Node Version Manager) for Linux, Mac, and WSL here. For Windows users, refer to Microsoft's guide to Install NodeJS on Windows.

You need some understanding of:

  • JavaScript/TypeScript
  • REST APIs
  • git
  • Expressions in n8n

In this section, you'll clone n8n's node starter repository and build a node that integrates with SendGrid. You'll create a node that implements one piece of SendGrid functionality: create a contact.

n8n has a built-in SendGrid node. To avoid clashing with the existing node, you'll give your version a different name.

Step 1: Set up the project

n8n provides a starter repository for node development. Using the starter ensures you have all necessary dependencies. It also provides a linter.

Clone the repository and navigate into the directory:

  1. Generate a new repository from the template repository.

  2. Clone your new repository:

The starter contains example nodes and credentials. Delete the following directories and files:

  • nodes/ExampleNode
  • nodes/HTTPBin
  • credentials/ExampleCredentials.credentials.ts
  • credentials/HttpBinApi.credentials.ts

Now create the following directories and files:

nodes/FriendGrid
nodes/FriendGrid/FriendGrid.node.json
nodes/FriendGrid/FriendGrid.node.ts
credentials/FriendGridApi.credentials.ts

These are the key files required for any node. Refer to Node file structure for more information on required files and recommended organization.

Now install the project dependencies:

Step 2: Add an icon

Save the SendGrid SVG logo from here as friendGrid.svg in nodes/FriendGrid/.

n8n recommends using an SVG for your node icon, but you can also use PNG. If using PNG, the icon resolution should be 60x60px. Node icons should have a square or near-square aspect ratio.

Don't reference Font Awesome

If you want to use a Font Awesome icon in your node, download and embed the image.

Step 3: Define the node in the base file

Every node must have a base file. Refer to Node base file for detailed information about base file parameters.

In this example, the file is FriendGrid.node.ts. To keep this tutorial short, you'll place all the node functionality in this one file. When building more complex nodes, you should consider splitting out your functionality into modules. Refer to Node file structure for more information.

Step 3.1: Imports

Start by adding the import statements:

Step 3.2: Create the main class

The node must export a class that implements INodeType. This class must include a description property (of type INodeTypeDescription), which in turn contains the properties array.

Class names and file names

Make sure the class name and the file name match. For example, given a class FriendGrid, the filename must be FriendGrid.node.ts.

Step 3.3: Add node details

All programmatic nodes need some basic parameters, such as their display name and icon. Add the following to the description:

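As a sketch (the values shown here are illustrative, not the only valid choices), the basic details could look like this, replacing the // Basic node details will go here comment in the class skeleton shown in Example 4:

displayName: 'FriendGrid',
name: 'friendGrid',
icon: 'file:friendGrid.svg',
group: ['transform'],
version: 1,
description: 'Create contacts in SendGrid',
defaults: {
	name: 'FriendGrid',
},
inputs: [NodeConnectionType.Main],
outputs: [NodeConnectionType.Main],
// References the credential defined in Step 5; the name must match
// the `name` property of the credential class.
credentials: [
	{
		name: 'friendGridApi',
		required: true,
	},
],
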
n8n uses some of the properties set in description to render the node in the Editor UI. These properties are displayName, icon, and description.

Step 3.4: Add the resource

The resource object defines the API resource that the node uses. In this tutorial, you're creating a node to access one of SendGrid's API endpoints: /v3/marketing/contacts. This means you need to define a resource for this endpoint. Update the properties array with the resource object:

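A minimal sketch of such a resource object (the contact value is an assumption that later steps reuse):

{
	displayName: 'Resource',
	name: 'resource',
	type: 'options',
	noDataExpression: true,
	options: [
		{
			name: 'Contact',
			value: 'contact',
		},
	],
	default: 'contact',
	description: 'The resource to operate on',
},
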
type controls which UI element n8n displays for the resource, and tells n8n what type of data to expect from the user. options results in n8n adding a dropdown that allows users to choose one option. Refer to Node UI elements for more information.

Step 3.5: Add operations

The operations object defines what you can do with a resource. It usually relates to REST API verbs (GET, POST, and so on). In this tutorial, there's one operation: create a contact. It has one required field, the email address for the contact the user creates.

Add the following to the properties array, after the resource object:

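As a sketch, assuming the contact resource value from the previous step, the operation and its required Email field could look like this:

{
	displayName: 'Operation',
	name: 'operation',
	type: 'options',
	noDataExpression: true,
	displayOptions: {
		show: {
			resource: ['contact'],
		},
	},
	options: [
		{
			name: 'Create',
			value: 'create',
			description: 'Create a contact',
			action: 'Create a contact',
		},
	],
	default: 'create',
},
{
	displayName: 'Email',
	name: 'email',
	type: 'string',
	required: true,
	displayOptions: {
		show: {
			resource: ['contact'],
			operation: ['create'],
		},
	},
	default: '',
	placeholder: 'name@email.com',
	description: 'Primary email for the contact',
},
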
Step 3.6: Add optional fields

Most APIs, including the SendGrid API that you're using in this example, have optional fields you can use to refine your query.

To avoid overwhelming users, n8n displays these under Additional Fields in the UI.

For this tutorial, you'll add two additional fields, to allow users to enter the contact's first name and last name. Add the following to the properties array:

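A sketch of the collection, again assuming the contact resource and create operation values used above:

{
	displayName: 'Additional Fields',
	name: 'additionalFields',
	type: 'collection',
	placeholder: 'Add Field',
	default: {},
	displayOptions: {
		show: {
			resource: ['contact'],
			operation: ['create'],
		},
	},
	options: [
		{
			displayName: 'First Name',
			name: 'firstName',
			type: 'string',
			default: '',
		},
		{
			displayName: 'Last Name',
			name: 'lastName',
			type: 'string',
			default: '',
		},
	],
},
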
Step 4: Add the execute method

You've set up the node UI and basic information. It's time to map the node UI to API requests, and make the node actually do something.

The execute method runs every time the node runs. In this method, you have access to the input items and to the parameters that the user set in the UI, including the credentials.

Add the following execute method to FriendGrid.node.ts:

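A condensed sketch of that method is shown below. It assumes SendGrid's PUT /v3/marketing/contacts endpoint, the friendGridApi credential (with an apiKey field) that you'll define in Step 5, and the parameter names used in the previous steps; a production implementation would also add error handling.

async execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
	// Handle data coming from previous nodes
	const items = this.getInputData();
	const returnData: IDataObject[] = [];
	const resource = this.getNodeParameter('resource', 0) as string;
	const operation = this.getNodeParameter('operation', 0) as string;
	const credentials = await this.getCredentials('friendGridApi');

	// For each input item, make an API call to create a contact
	for (let i = 0; i < items.length; i++) {
		if (resource === 'contact' && operation === 'create') {
			// Fields set in the node UI, or mapped from previous nodes
			const email = this.getNodeParameter('email', i) as string;
			const additionalFields = this.getNodeParameter('additionalFields', i) as IDataObject;

			const data: IDataObject = { email };
			Object.assign(data, additionalFields);

			// Build the request to SendGrid's marketing contacts endpoint
			const options: OptionsWithUri = {
				headers: {
					'Accept': 'application/json',
					'Authorization': `Bearer ${credentials.apiKey}`,
				},
				method: 'PUT',
				body: {
					contacts: [data],
				},
				uri: 'https://api.sendgrid.com/v3/marketing/contacts',
				json: true,
			};

			const responseData = await this.helpers.request(options);
			returnData.push(responseData as IDataObject);
		}
	}

	// Map the API responses back to n8n's data structure
	return [this.helpers.returnJsonArray(returnData)];
}

returnJsonArray wraps plain objects in the { json: ... } structure n8n expects, so each API response becomes one output item.
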
Note the following lines of this code:

Users can provide data in two ways:

  • Entered directly in the node fields
  • By mapping data from earlier nodes in the workflow

getInputData(), and the subsequent loop, allow the node to handle situations where data comes from a previous node, including multiple input items. This means that if, for example, the previous node outputs contact information for five people, your FriendGrid node can create five contacts.

Step 5: Set up authentication

The SendGrid API requires users to authenticate with an API key.

Add the following to FriendGridApi.credentials.ts:

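A minimal sketch of the credential class (the apiKey field name is an assumption that matches the execute method above):

import {
	ICredentialType,
	INodeProperties,
} from 'n8n-workflow';

export class FriendGridApi implements ICredentialType {
	// Must match the credential name referenced in the node's description
	name = 'friendGridApi';
	displayName = 'FriendGrid API';
	properties: INodeProperties[] = [
		{
			displayName: 'API Key',
			name: 'apiKey',
			type: 'string',
			typeOptions: { password: true },
			default: '',
		},
	];
}
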
For more information about credentials files and options, refer to Credentials file.

Step 6: Add node metadata

Metadata about your node goes in the JSON file at the root of your node. n8n refers to this as the codex file. In this example, the file is FriendGrid.node.json.

Add the following code to the JSON file:

For more information on these parameters, refer to Node codex files.

Step 7: Update the npm package details

Your npm package details are in the package.json at the root of the project. It's essential to include the n8n object with links to the credentials and base node file. Update this file to include the following information:

You need to update the package.json to include your own information, such as your name and repository URL. For more information on npm package.json files, refer to npm's package.json documentation.

You can test your node as you build it by running it in a local n8n instance.

  1. Install n8n using npm:

  2. When you are ready to test your node, publish it locally:

  3. Install the node into your local n8n instance:

Make sure you run npm link <node-name> in the nodes directory within your n8n installation. This can be:

  • ~/.n8n/custom/
  • ~/.n8n/<your-custom-name>: if your n8n installation set a different name using N8N_CUSTOM_EXTENSIONS.
  4. Open n8n in your browser. You should see your nodes when you search for them in the nodes panel.

Make sure you search using the node name, not the package name. For example, if your npm package name is n8n-nodes-weather-nodes, and the package contains nodes named rain, sun, snow, you should search for rain, not weather-nodes.

If there's no custom directory in your ~/.n8n local installation, you need to create the custom directory manually and run npm init:

Examples:

Example 1 (unknown):

git clone https://github.com/<your-organization>/<your-repo-name>.git n8n-nodes-friendgrid
   cd n8n-nodes-friendgrid

Example 2 (unknown):

npm i

Example 3 (unknown):

import {
	IExecuteFunctions,
} from 'n8n-core';

import {
	IDataObject,
	INodeExecutionData,
	INodeType,
	INodeTypeDescription,
	NodeConnectionType,
} from 'n8n-workflow';

import {
	OptionsWithUri,
} from 'request';

Example 4 (unknown):

export class FriendGrid implements INodeType {
	description: INodeTypeDescription = {
		// Basic node details will go here
		properties: [
			// Resources and operations will go here
		],
	};
	// The execute method will go here
	async execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> {
	}
}

Microsoft SQL node

URL: llms-txt#microsoft-sql-node

Contents:

  • Operations
  • Templates and examples

Use the Microsoft SQL node to automate work in Microsoft SQL, and integrate Microsoft SQL with other applications. n8n has built-in support for a wide range of Microsoft SQL features, including executing SQL queries, and inserting rows into the database.

On this page, you'll find a list of operations the Microsoft SQL node supports and links to more resources.

Refer to Microsoft SQL credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Execute an SQL query
  • Insert rows in database
  • Update rows in database
  • Delete rows in database

Templates and examples

Generate Monthly Financial Reports with Gemini AI, SQL, and Outlook

View template details

Execute an SQL query in Microsoft SQL

View template details

Export SQL table into CSV file

View template details

Browse Microsoft SQL integration templates, or search all templates


Get the global workflow static data

URL: llms-txt#get-the-global-workflow-static-data

workflowStaticData = _getWorkflowStaticData('global')


Git

URL: llms-txt#git

Contents:

  • Operations
  • Add
  • Add Config
    • Add Config options
  • Clone
  • Commit
    • Commit options
  • Fetch
  • List Config
  • Log

Git is a free and open-source distributed version control system designed to handle everything from small to large projects with speed and efficiency.

You can find authentication information for this node here.

Refer to the sections below for more details on the parameters and options for each operation.

Configure this operation with these parameters:

  • Repository Path: Enter the local path of the git repository.
  • Paths to Add: Enter a comma-separated list of paths of files or folders to add in this field. You can use absolute paths or relative paths from the Repository Path.

Configure this operation with these parameters:

  • Repository Path: Enter the local path of the git repository.
  • Key: Enter the name of the key to set.
  • Value: Enter the value of the key to set.

Add Config options

The add config operation adds the Mode option. Choose whether to Set or Append the setting in the local config.

Configure this operation with these parameters:

  • Repository Path: Enter the local path of the git repository.
  • Authentication: Select Authenticate to pass credentials in. Select None to not use authentication.
    • Credential for Git: If you select Authenticate, you must select or create credentials for the node to use. Refer to Git credential for more information.
  • New Repository Path: Enter the local path where you'd like to locate the cloned repository.
  • Source Repository: Enter the URL or path of the repository you want to clone.

Configure this operation with these parameters:

  • Repository Path: Enter the local path of the git repository.
  • Message: Enter the commit message to use in this field.

The commit operation adds the Paths to Add option. To commit all "added" files and folders, leave this field blank. To commit specific "added" files and folders, enter a comma-separated list of paths of files or folders in this field.

You can use absolute paths or relative paths from the Repository Path.

This operation only prompts you to enter the local path of the git repository in the Repository Path parameter.

This operation only prompts you to enter the local path of the git repository in the Repository Path parameter.

Configure this operation with these parameters:

  • Repository Path: Enter the local path of the git repository.
  • Return All: When turned on, the node will return all results. When turned off, the node will return results up to the set Limit.
  • Limit: Only available when you turn off Return All. Enter the maximum number of results to return.

The log operation adds the File option. Enter the path of a file or folder to get the history of in this field.

You can use absolute paths or relative paths from the Repository Path.

This operation only prompts you to enter the local path of the git repository in the Repository Path parameter.

Configure this operation with these parameters:

  • Repository Path: Enter the local path of the git repository.
  • Authentication: Select Authenticate to pass credentials in or None to not use authentication.
    • If you select Authenticate, you must select or create Credential for Git for the node to use. Refer to Git credential for more information.

The push operation adds the Target Repository option. Enter the URL or path of the repository to push to in this field.

This operation only prompts you to enter the local path of the git repository in the Repository Path parameter.

This operation only prompts you to enter the local path of the git repository in the Repository Path parameter.

Configure this operation with these parameters:

  • Repository Path: Enter the local path of the git repository.
  • Name: Enter the name of the tag to create in this field.

This operation only prompts you to enter the local path of the git repository in the Repository Path parameter.

Templates and examples

Back Up Your n8n Workflows To Github

View template details

Building RAG Chatbot for Movie Recommendations with Qdrant and Open AI

View template details

ChatGPT Automatic Code Review in Gitlab MR

View template details

Browse Git integration templates, or search all templates


Set up user management on n8n Cloud

URL: llms-txt#set-up-user-management-on-n8n-cloud

Contents:

  • Step one: In-app setup
  • Step two: Invite users

To access user management, upgrade to version 0.195.0 or newer.

Once you upgrade your Cloud instance to an n8n version with user management, you can't downgrade your version.

Step one: In-app setup

When you set up user management for the first time, you create an owner account.

  1. Open n8n. The app displays a signup screen.
  2. Enter your details. Your password must be at least eight characters, including at least one number and one capital letter.
  3. Click Next. n8n logs you in with your new owner account.

Step two: Invite users

You can now invite other people to your n8n instance.

  1. Sign in to your workspace with your owner account. (If you're in the Admin Panel, open your Workspace from the Dashboard.)
  2. Click the three dots next to your user icon at the bottom left and click Settings. n8n opens your Personal settings page.
  3. Click Users to go to the Users page.
  4. Click Invite.
  5. Enter the new user's email address.
  6. Click Invite user. n8n sends an email with a link for the new user to join.

LinkedIn credentials

URL: llms-txt#linkedin-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related Resources
  • Using Community Management OAuth2
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • Community Management OAuth2: Use this method if you're a new LinkedIn user or creating a new LinkedIn app.
  • OAuth2: Use this method for older LinkedIn apps and user accounts.

Refer to LinkedIn's Community Management API documentation for more information about the service.

This credential works with API version 202404.

Using Community Management OAuth2

Use this method if you're a new LinkedIn user or creating a new LinkedIn app.

To configure this credential, you'll need a LinkedIn account, a LinkedIn Company Page, and:

  • A Client ID: Generated after you create a new developer app.
  • A Client Secret: Generated after you create a new developer app.

To create a new developer app and set up the credential:

  1. Log into LinkedIn and select this link to create a new developer app.
  2. Enter an App name for your app, like n8n integration.
  3. For the LinkedIn Page, enter a LinkedIn Company Page or use the Create a new LinkedIn Page link to create one on-the-fly. Refer to Associate an App with a LinkedIn Page for more information.
  4. Add an App logo.
  5. Check the box to agree to the Legal agreement.
  6. Select Create app.
  7. This should open the Products tab. Select the products/APIs you want to enable for your app. For the LinkedIn node to work properly, you must include and configure:
    • Share on LinkedIn
    • Sign In with LinkedIn using OpenID Connect
    • Advertising API (if using it as an organization account rather than an individual)
  8. Once you've requested access to the products you need, open the Auth tab.
  9. Copy the Client ID and enter it in your n8n credential.
  10. Select the icon to Copy the Primary Client Secret. Enter this in your n8n credential as the Client Secret.

Posting from organization accounts

To post as an organization, you need to put your app through LinkedIn's Community Management App Review process.

Refer to Getting Access to LinkedIn APIs for more information on scopes and permissions.

Using OAuth2

Only use this method for older LinkedIn apps and user accounts.

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

All users must select:

  • Organization Support: If turned on, the credential requests permission to post as an organization using the w_organization_social scope.
  • Legacy: If turned on, the credential uses legacy scopes for r_liteprofile and r_emailaddress instead of the newer profile and email scopes.

If you're self-hosting n8n, you'll need to configure OAuth2 from scratch by creating a new developer app:

  1. Log into LinkedIn and select this link to create a new developer app.
  2. Enter an App name for your app, like n8n integration.
  3. For the LinkedIn Page, enter a LinkedIn Company Page or use the Create a new LinkedIn Page link to create one on-the-fly. Refer to Associate an App with a LinkedIn Page for more information.
  4. Add an App logo.
  5. Check the box to agree to the Legal agreement.
  6. Select Create app.
  7. This should open the Products tab. Select the products/APIs you want to enable for your app. For the LinkedIn node to work properly, you must include:
    • Share on LinkedIn
    • Sign In with LinkedIn using OpenID Connect
  8. Once you've requested access to the products you need, open the Auth tab.
  9. Copy the Client ID and enter it in your n8n credential.
  10. Select the icon to Copy the Primary Client Secret. Enter this in your n8n credential as the Client Secret.

Posting from organization accounts

To post as an organization, you need to put your app through LinkedIn's Community Management App Review process.

Refer to Getting Access to LinkedIn APIs for more information on scopes and permissions.


Zammad credentials

URL: llms-txt#zammad-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth
  • Using token auth

You can use these credentials to authenticate the following nodes:

  • Zammad

  • Create a hosted Zammad account or set up your own Zammad instance.

  • For token authentication, enable API Token Access in Settings > System > API. Refer to Setting up a Zammad for more information.

Supported authentication methods

  • Basic auth
  • Token auth: Zammad recommends using this authentication method.

Refer to Zammad's API Authentication documentation for more information about authenticating with the service.

Using basic auth

To configure this credential, you'll need:

  • A Base URL: Enter the URL of your Zammad instance.
  • An Email address: Enter the email address you use to log in to Zammad.
  • A Password: Enter your Zammad password.
  • Ignore SSL Issues: When turned on, n8n will connect even if SSL certificate validation fails.

Using token auth

To configure this credential, you'll need:

  • A Base URL: Enter the URL of your Zammad instance.
  • An Access Token: Once API Token Access is enabled for the Zammad instance, any user with the user_preferences.access_token permission can generate an Access Token by going to your avatar > Profile > Token Access and Create a new token.
    • The access token permissions depend on what actions you'd like to complete with this credential. For all functionality within the Zammad node, select:
      • admin.group
      • admin.organization
      • admin.user
      • ticket.agent
      • ticket.customer
  • Ignore SSL Issues: When turned on, n8n will connect even if SSL certificate validation fails.

Sentry.io credentials

URL: llms-txt#sentry.io-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token
  • Using OAuth
  • Using Server API token

You can use these credentials to authenticate the following nodes:

Create a Sentry.io account.

Supported authentication methods

Refer to Sentry.io's API documentation for more information about the service.

Using API token

To configure this credential, you'll need:

Using OAuth

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you need to configure OAuth2 from scratch, create an integration with these settings:

  • Copy the n8n OAuth Callback URL and add it as an Authorized Redirect URI.
  • Copy the Client ID and Client Secret and add them to your n8n credential.

Refer to Public integrations for more information on creating the integration.

Using Server API token

To configure this credential, you'll need:

  • An API Token: Generate a User Auth Token in Account > Settings > User Auth Tokens. Refer to User Auth Tokens for more information.
  • The URL of your self-hosted Sentry instance.

Logs environment variables

URL: llms-txt#logs-environment-variables

Contents:

  • n8n logs
  • Log streaming

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

This page lists environment variables to set up logging for debugging. Refer to Logging in n8n for details.

Variable Type Default Description
N8N_LOG_LEVEL Enum string: info, warn, error, debug info Log output level. Refer to Log levels for details.
N8N_LOG_OUTPUT Enum string: console, file console Where to output logs. Provide multiple values as a comma-separated list.
N8N_LOG_FORMAT Enum string: text, json text The log format to use. text prints human readable messages. json prints one JSON object per line containing the message, level, timestamp, and all metadata. This is useful for production monitoring as well as debugging.
N8N_LOG_CRON_ACTIVE_INTERVAL Number 0 Interval in minutes to log currently active cron jobs. Set to 0 to disable.
N8N_LOG_FILE_COUNT_MAX Number 100 Max number of log files to keep.
N8N_LOG_FILE_SIZE_MAX Number 16 Max size of each log file in MB.
N8N_LOG_FILE_LOCATION String <n8n-directory-path>/logs/n8n.log Log file location. Requires N8N_LOG_OUTPUT set to file.
DB_LOGGING_ENABLED Boolean false Whether to enable database-specific logging.
DB_LOGGING_OPTIONS Enum string: query, error, schema, warn, info, log error Database log output level. To enable all logging, specify all. Refer to TypeORM logging options
DB_LOGGING_MAX_EXECUTION_TIME Number 1000 Maximum execution time (in milliseconds) before n8n logs a warning. Set to 0 to disable long running query warning.
CODE_ENABLE_STDOUT Boolean false Set to true to send Code node logs from console.log or print to the process's stdout, only for production executions.
NO_COLOR any undefined Set to any value to output logs without ANSI colors. For more information, see the no-color.org website.

Refer to Log streaming for more information on this feature.

Variable Type Default Description
N8N_EVENTBUS_CHECKUNSENTINTERVAL Number 0 How often (in milliseconds) to check for unsent event messages. In rare cases, this can send a message twice. Set to 0 to disable it.
N8N_EVENTBUS_LOGWRITER_SYNCFILEACCESS Boolean false Whether all file access happens synchronously within the thread (true) or not (false).
N8N_EVENTBUS_LOGWRITER_KEEPLOGCOUNT Number 3 Number of event log files to keep.
N8N_EVENTBUS_LOGWRITER_MAXFILESIZEINKB Number 10240 Maximum size (in kilobytes) of an event log file before a new one starts.
N8N_EVENTBUS_LOGWRITER_LOGBASENAME String n8nEventLog Basename of the event log file.

Contextual Compression Retriever node

URL: llms-txt#contextual-compression-retriever-node

Contents:

  • Templates and examples
  • Related resources

The Contextual Compression Retriever node improves the answers returned from vector store document similarity searches by taking into account the context from the query.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

Templates and examples

Generate Contextual YouTube Comments Automatically with GPT-4o

View template details

Dynamic MCP Server Selection with OpenAI GPT-4.1 and Contextual AI Reranker

View template details

Generate Contextual Recommendations from Slack using Pinecone

View template details

Browse Contextual Compression Retriever integration templates, or search all templates

Refer to LangChain's contextual compression retriever documentation for more information about the service.

View n8n's Advanced AI documentation.


Build a declarative-style node

URL: llms-txt#build-a-declarative-style-node

Contents:

  • Prerequisites
  • Build your node
    • Step 1: Set up the project
    • Step 2: Add an icon
    • Step 3: Create the node
    • Step 4: Set up authentication
    • Step 5: Add node metadata
    • Step 6: Update the npm package details
  • Test your node
    • Troubleshooting

This tutorial walks through building a declarative-style node. Before you begin, make sure this is the node style you need. Refer to Choose your node building approach for more information.

You need the following installed on your development machine:

  • git
  • Node.js and npm. Minimum version Node 18.17.0. You can find instructions on how to install both using nvm (Node Version Manager) for Linux, Mac, and WSL here. For Windows users, refer to Microsoft's guide to Install NodeJS on Windows.

You need some understanding of:

  • JavaScript/TypeScript
  • REST APIs
  • git

In this section, you'll clone n8n's node starter repository and build a node that integrates with the NASA API. You'll create a node that uses two of NASA's services: APOD (Astronomy Picture of the Day) and Mars Rover Photos. To keep the code examples short, the node won't implement every available option for the Mars Rover Photos endpoint.

n8n has a built-in NASA node. To avoid clashing with the existing node, you'll give your version a different name.

Step 1: Set up the project

n8n provides a starter repository for node development. Using the starter ensures you have all necessary dependencies. It also provides a linter.

Clone the repository and navigate into the directory:

  1. Generate a new repository from the template repository.

  2. Clone your new repository:

The starter contains example nodes and credentials. Delete the following directories and files:

  • nodes/ExampleNode
  • nodes/HTTPBin
  • credentials/ExampleCredentials.credentials.ts
  • credentials/HttpBinApi.credentials.ts

Now create the following directories and files:

nodes/NasaPics
nodes/NasaPics/NasaPics.node.json
nodes/NasaPics/NasaPics.node.ts
credentials/NasaPicsApi.credentials.ts

These are the key files required for any node. Refer to Node file structure for more information on required files and recommended organization.

Now install the project dependencies:

Step 2: Add an icon

Save the NASA SVG logo from here as nasapics.svg in nodes/NasaPics/.

n8n recommends using an SVG for your node icon, but you can also use PNG. If using PNG, the icon resolution should be 60x60px. Node icons should have a square or near-square aspect ratio.

Don't reference Font Awesome

If you want to use a Font Awesome icon in your node, download and embed the image.

Step 3: Create the node

Every node must have a base file. Refer to Node base file for detailed information about base file parameters.

In this example, the file is NasaPics.node.ts. To keep this tutorial short, you'll place all the node functionality in this one file. When building more complex nodes, you should consider splitting out your functionality into modules. Refer to Node file structure for more information.

Step 3.1: Imports

Start by adding the import statements:

Step 3.2: Create the main class

The node must export a class that implements INodeType. This class must include a description property (of type INodeTypeDescription), which in turn contains the properties array.

Class names and file names

Make sure the class name and the file name match. For example, given a class NasaPics, the filename must be NasaPics.node.ts.

Step 3.3: Add node details

All nodes need some basic parameters, such as their display name, icon, and the basic information for making a request using the node. Add the following to the description:

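A sketch of these details, replacing the // Basic node details will go here comment in the class skeleton shown in Example 4 (the values, including the requestDefaults.baseURL, are illustrative):

displayName: 'NASA Pics',
name: 'nasaPics',
icon: 'file:nasapics.svg',
group: ['transform'],
version: 1,
subtitle: '={{$parameter["operation"] + ": " + $parameter["resource"]}}',
description: 'Get data from the NASA API',
defaults: {
	name: 'NASA Pics',
},
inputs: ['main'],
outputs: ['main'],
// References the credential defined in Step 4; the name must match
// the `name` property of the credential class.
credentials: [
	{
		name: 'NasaPicsApi',
		required: true,
	},
],
// Declarative-style nodes define shared request settings here;
// operation routing then only needs relative URLs.
requestDefaults: {
	baseURL: 'https://api.nasa.gov',
	headers: {
		Accept: 'application/json',
		'Content-Type': 'application/json',
	},
},
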
n8n uses some of the properties set in description to render the node in the Editor UI. These properties are displayName, icon, description, and subtitle.

Step 3.4: Add resources

The resource object defines the API resource that the node uses. In this tutorial, you're creating a node to access two of NASA's API endpoints: planetary/apod and mars-photos. This means you need to define two resource options in NasaPics.node.ts. Update the properties array with the resource object:

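A minimal sketch of the resource object, assuming the astronomyPictureOfTheDay and marsRoverPhotos values reused in later steps:

{
	displayName: 'Resource',
	name: 'resource',
	type: 'options',
	noDataExpression: true,
	options: [
		{
			name: 'Astronomy Picture of the Day',
			value: 'astronomyPictureOfTheDay',
		},
		{
			name: 'Mars Rover Photos',
			value: 'marsRoverPhotos',
		},
	],
	default: 'astronomyPictureOfTheDay',
},
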
type controls which UI element n8n displays for the resource, and tells n8n what type of data to expect from the user. options results in n8n adding a dropdown that allows users to choose one option. Refer to Node UI elements for more information.

Step 3.5: Add operations

The operations object defines the available operations on a resource.

In a declarative-style node, the operations object includes routing (within the options array). This sets up the details of the API call.

Add the following to the properties array, after the resource object:

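A condensed sketch of these operations and the roverName field (the option values and the exact URL expression are illustrative):

{
	displayName: 'Operation',
	name: 'operation',
	type: 'options',
	noDataExpression: true,
	displayOptions: {
		show: {
			resource: ['astronomyPictureOfTheDay'],
		},
	},
	options: [
		{
			name: 'Get',
			value: 'get',
			action: 'Get the Astronomy Picture of the Day',
			routing: {
				request: {
					method: 'GET',
					url: '/planetary/apod',
				},
			},
		},
	],
	default: 'get',
},
{
	displayName: 'Operation',
	name: 'operation',
	type: 'options',
	noDataExpression: true,
	displayOptions: {
		show: {
			resource: ['marsRoverPhotos'],
		},
	},
	options: [
		{
			name: 'Get',
			value: 'get',
			action: 'Get Mars Rover photos',
			routing: {
				request: {
					method: 'GET',
					// Builds the URL from the roverName parameter defined below
					url: '=/mars-photos/api/v1/rovers/{{$parameter["roverName"]}}/photos',
				},
			},
		},
	],
	default: 'get',
},
{
	displayName: 'Rover Name',
	name: 'roverName',
	type: 'options',
	required: true,
	displayOptions: {
		show: {
			resource: ['marsRoverPhotos'],
		},
	},
	options: [
		{ name: 'Curiosity', value: 'curiosity' },
		{ name: 'Opportunity', value: 'opportunity' },
		{ name: 'Perseverance', value: 'perseverance' },
		{ name: 'Spirit', value: 'spirit' },
	],
	default: 'curiosity',
	description: 'Choose which Mars Rover to get photos from',
},

Defining a separate Operation property per resource, toggled with displayOptions, is a common declarative pattern: each option carries its own routing block.
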
This code creates two operations: one to get today's APOD image, and another to send a get request for photos from one of the Mars Rovers. The object named roverName requires the user to choose which Rover they want photos from. The routing object in the Mars Rover operation references this to create the URL for the API call.

Step 3.6: Optional fields

Most APIs, including the NASA API that you're using in this example, have optional fields you can use to refine your query.

To avoid overwhelming users, n8n displays these under Additional Fields in the UI.

For this tutorial, you'll add one additional field, to allow users to pick a date to use with the APOD endpoint. Add the following to the properties array:

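A sketch of the collection, assuming the APOD endpoint accepts a date query string parameter in YYYY-MM-DD format:

{
	displayName: 'Additional Fields',
	name: 'additionalFields',
	type: 'collection',
	placeholder: 'Add Field',
	default: {},
	displayOptions: {
		show: {
			resource: ['astronomyPictureOfTheDay'],
			operation: ['get'],
		},
	},
	options: [
		{
			displayName: 'Date',
			name: 'apodDate',
			type: 'dateTime',
			default: '',
			routing: {
				request: {
					// Sent as the `date` query string parameter, formatted YYYY-MM-DD
					qs: {
						date: '={{ new Date($value).toISOString().substr(0, 10) }}',
					},
				},
			},
		},
	],
},
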
Step 4: Set up authentication

The NASA API requires users to authenticate with an API key.

Add the following to NasaPicsApi.credentials.ts:

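A minimal sketch of the credential class, assuming the NASA API expects the key as an api_key query string parameter and that the credential name matches the one referenced in the node's description:

import {
	IAuthenticateGeneric,
	ICredentialType,
	INodeProperties,
} from 'n8n-workflow';

export class NasaPicsApi implements ICredentialType {
	// Must match the credential name referenced in the node's description
	name = 'NasaPicsApi';
	displayName = 'NASA Pics API';
	properties: INodeProperties[] = [
		{
			displayName: 'API Key',
			name: 'apiKey',
			type: 'string',
			typeOptions: { password: true },
			default: '',
		},
	];
	// Attach the key as a query string parameter on every request
	authenticate: IAuthenticateGeneric = {
		type: 'generic',
		properties: {
			qs: {
				api_key: '={{$credentials.apiKey}}',
			},
		},
	};
}

With an authenticate block like this, the declarative routing set up in Step 3 can attach the key to every request without any custom code.
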
For more information about credentials files and options, refer to Credentials file.

Step 5: Add node metadata

Metadata about your node goes in the JSON file at the root of your node. n8n refers to this as the codex file. In this example, the file is NasaPics.node.json.

Add the following code to the JSON file:

For more information on these parameters, refer to Node codex files.

Step 6: Update the npm package details

Your npm package details are in the package.json at the root of the project. It's essential to include the n8n object with links to the credentials and base node file. Update this file to include the following information:

You need to update the package.json to include your own information, such as your name and repository URL. For more information on npm package.json files, refer to npm's package.json documentation.

You can test your node as you build it by running it in a local n8n instance.

  1. Install n8n using npm:

  2. When you are ready to test your node, publish it locally:

  3. Install the node into your local n8n instance:

Make sure you run npm link <node-name> in the nodes directory within your n8n installation. This can be:

  • ~/.n8n/custom/
  • ~/.n8n/<your-custom-name>: if your n8n installation set a different name using N8N_CUSTOM_EXTENSIONS.
  4. Open n8n in your browser. You should see your nodes when you search for them in the nodes panel.

Make sure you search using the node name, not the package name. For example, if your npm package name is n8n-nodes-weather-nodes, and the package contains nodes named rain, sun, snow, you should search for rain, not weather-nodes.

If there's no custom directory in your ~/.n8n local installation, you need to create the custom directory manually and run npm init:

Examples:

Example 1 (unknown):

git clone https://github.com/<your-organization>/<your-repo-name>.git n8n-nodes-nasa-pics
   cd n8n-nodes-nasa-pics

Example 2 (unknown):

npm i

Example 3 (unknown):

import { INodeType, INodeTypeDescription } from 'n8n-workflow';

Example 4 (unknown):

export class NasaPics implements INodeType {
	description: INodeTypeDescription = {
		// Basic node details will go here
		properties: [
		// Resources and operations will go here
		]
	};
}

GoToWebinar node

URL: llms-txt#gotowebinar-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the GoToWebinar node to automate work in GoToWebinar, and integrate GoToWebinar with other applications. n8n has built-in support for a wide range of GoToWebinar features, including creating, getting, and deleting attendees, organizers, and registrants.

On this page, you'll find a list of operations the GoToWebinar node supports and links to more resources.

Refer to GoToWebinar credentials for guidance on setting up authentication.

  • Attendee
    • Get
    • Get All
    • Get Details
  • Co-Organizer
    • Create
    • Delete
    • Get All
    • Re-invite
  • Panelist
    • Create
    • Delete
    • Get All
    • Re-invite
  • Registrant
    • Create
    • Delete
    • Get
    • Get All
  • Session
    • Get
    • Get All
    • Get Details
  • Webinar
    • Create
    • Get
    • Get All
    • Update

Templates and examples

Browse GoToWebinar integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Freshservice node

URL: llms-txt#freshservice-node

Contents:

  • Operations
  • Templates and examples

Use the Freshservice node to automate work in Freshservice and integrate Freshservice with other applications. n8n has built-in support for a wide range of Freshservice features, including creating, updating, deleting, and getting agent information and departments.

On this page, you'll find a list of operations the Freshservice node supports and links to more resources.

Refer to Freshservice credentials for guidance on setting up authentication.

  • Agent
    • Create an agent
    • Delete an agent
    • Retrieve an agent
    • Retrieve all agents
    • Update an agent
  • Agent Group
    • Create an agent group
    • Delete an agent group
    • Retrieve an agent group
    • Retrieve all agent groups
    • Update an agent group
  • Agent Role
    • Retrieve an agent role
    • Retrieve all agent roles
  • Announcement
    • Create an announcement
    • Delete an announcement
    • Retrieve an announcement
    • Retrieve all announcements
    • Update an announcement
  • Asset Type
    • Create an asset type
    • Delete an asset type
    • Retrieve an asset type
    • Retrieve all asset types
    • Update an asset type
  • Change
    • Create a change
    • Delete a change
    • Retrieve a change
    • Retrieve all changes
    • Update a change
  • Department
    • Create a department
    • Delete a department
    • Retrieve a department
    • Retrieve all departments
    • Update a department
  • Location
    • Create a location
    • Delete a location
    • Retrieve a location
    • Retrieve all locations
    • Update a location
  • Problem
    • Create a problem
    • Delete a problem
    • Retrieve a problem
    • Retrieve all problems
    • Update a problem
  • Product
    • Create a product
    • Delete a product
    • Retrieve a product
    • Retrieve all products
    • Update a product
  • Release
    • Create a release
    • Delete a release
    • Retrieve a release
    • Retrieve all releases
    • Update a release
  • Requester
    • Create a requester
    • Delete a requester
    • Retrieve a requester
    • Retrieve all requesters
    • Update a requester
  • Requester Group
    • Create a requester group
    • Delete a requester group
    • Retrieve a requester group
    • Retrieve all requester groups
    • Update a requester group
  • Software
    • Create a software application
    • Delete a software application
    • Retrieve a software application
    • Retrieve all software applications
    • Update a software application
  • Ticket
    • Create a ticket
    • Delete a ticket
    • Retrieve a ticket
    • Retrieve all tickets
    • Update a ticket

Templates and examples

Browse Freshservice integration templates, or search all templates


Endpoints environment variables

URL: llms-txt#endpoints-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

This page lists environment variables for customizing endpoints in n8n.

Variable Type Default Description
N8N_PAYLOAD_SIZE_MAX Number 16 The maximum payload size in MiB.
N8N_FORMDATA_FILE_SIZE_MAX Number 200 Max payload size for files in form-data webhook payloads in MiB.
N8N_METRICS Boolean false Whether to enable the /metrics endpoint.
N8N_METRICS_PREFIX String n8n_ Optional prefix for n8n specific metrics names.
N8N_METRICS_INCLUDE_DEFAULT_METRICS Boolean true Whether to expose default system and node.js metrics.
N8N_METRICS_INCLUDE_CACHE_METRICS Boolean false Whether to include metrics (true) for cache hits and misses, or not include them (false).
N8N_METRICS_INCLUDE_MESSAGE_EVENT_BUS_METRICS Boolean false Whether to include metrics (true) for events, or not include them (false).
N8N_METRICS_INCLUDE_WORKFLOW_ID_LABEL Boolean false Whether to include a label for the workflow ID on workflow metrics.
N8N_METRICS_INCLUDE_NODE_TYPE_LABEL Boolean false Whether to include a label for the node type on node metrics.
N8N_METRICS_INCLUDE_CREDENTIAL_TYPE_LABEL Boolean false Whether to include a label for the credential type on credential metrics.
N8N_METRICS_INCLUDE_API_ENDPOINTS Boolean false Whether to expose metrics for API endpoints.
N8N_METRICS_INCLUDE_API_PATH_LABEL Boolean false Whether to include a label for the path of API invocations.
N8N_METRICS_INCLUDE_API_METHOD_LABEL Boolean false Whether to include a label for the HTTP method (GET, POST, ...) of API invocations.
N8N_METRICS_INCLUDE_API_STATUS_CODE_LABEL Boolean false Whether to include a label for the HTTP status code (200, 404, ...) of API invocations.
N8N_METRICS_INCLUDE_QUEUE_METRICS Boolean false Whether to include metrics for jobs in scaling mode. Not supported in multi-main setup.
N8N_METRICS_QUEUE_METRICS_INTERVAL Integer 20 How often (in seconds) to update queue metrics.
N8N_ENDPOINT_REST String rest The path used for REST endpoint.
N8N_ENDPOINT_WEBHOOK String webhook The path used for webhook endpoint.
N8N_ENDPOINT_WEBHOOK_TEST String webhook-test The path used for test-webhook endpoint.
N8N_ENDPOINT_WEBHOOK_WAIT String webhook-waiting The path used for waiting-webhook endpoint.
WEBHOOK_URL String - Used to manually provide the Webhook URL when running n8n behind a reverse proxy. See here for more details.
N8N_DISABLE_PRODUCTION_MAIN_PROCESS Boolean false Disable production webhooks from main process. This helps ensure no HTTP traffic load to main process when using webhook-specific processes.

Token Splitter node

URL: llms-txt#token-splitter-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources

The Token Splitter node splits a raw text string by first converting the text into BPE tokens, then splits these tokens into chunks and converts the tokens within a single chunk back into text.

On this page, you'll find the node parameters for the Token Splitter node, and links to more resources.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Chunk Size: Enter the number of characters in each chunk.
  • Chunk Overlap: Enter how much overlap to have between chunks.

Templates and examples

🤖 AI Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant

View template details

AI Voice Chatbot with ElevenLabs & OpenAI for Customer Service and Restaurants

View template details

Complete business WhatsApp AI-Powered RAG Chatbot using OpenAI

View template details

Browse Token Splitter integration templates, or search all templates

Refer to LangChain's token documentation and LangChain's text splitter documentation for more information about the service.

View n8n's Advanced AI documentation.


AWS Rekognition node

URL: llms-txt#aws-rekognition-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the AWS Rekognition node to automate work in AWS Rekognition, and integrate AWS Rekognition with other applications. n8n has built-in support for a wide range of AWS Rekognition features, including analyzing images.

On this page, you'll find a list of operations the AWS Rekognition node supports and links to more resources.

Refer to AWS Rekognition credentials for guidance on setting up authentication.

Templates and examples

Browse AWS Rekognition integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Access the current state of the object during the execution

URL: llms-txt#access-the-current-state-of-the-object-during-the-execution

customData = _execution.customData.getAll();


Mandrill credentials

URL: llms-txt#mandrill-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

If you already have a Mailchimp account with a Standard plan or higher, enable Transactional Emails within that account to use Mandrill.

Supported authentication methods

Refer to Mailchimp's Transactional API documentation for more information about the service.

To configure this credential, you'll need:


Bubble credentials

URL: llms-txt#bubble-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

You need a paid plan to access the Bubble APIs.

Supported authentication methods

Refer to Bubble's API documentation for more information about the service.

To configure this credential, you'll need a paid Bubble account and:

  • An API Token
  • An App Name
  • Your Domain, if you're using a custom domain

To set it up, you'll need to create an app:

  1. Go to the Apps page in Bubble.
  2. Select Create an app.
  3. Enter a Name for your app, like n8n-integration.
  4. Select Get started. The app's details open.
  5. In the left navigation, select Settings (the gear cog icon).
  6. Select the API tab.
  7. In the Public API Endpoints section, check the box to Enable Data API.
  8. The page displays the Data API root URL, for example: https://n8n-integration.bubbleapps.io/version-test/api/1.1/obj.
  9. Copy the part of the URL after https:// and before .bubbleapps.io and enter it in n8n as the App Name. In the above example, you'd enter n8n-integration.
  10. Select Generate a new API token.
  11. Enter an API Token Label, like n8n integration.
  12. Copy the Private key and enter it as the API Token in your n8n credential.
  13. In n8n, select the Environment that best matches your app:
    • Select Development for an app that you haven't deployed, accessed at https://appname.bubbleapps.io/version-test or https://www.mydomain.com/version-test.
    • Select Live for an app that you've deployed, accessed at https://appname.bubbleapps.io or https://www.mydomain.com.
  14. In n8n, select your Hosting:
    • If you haven't set up a custom domain, select Bubble Hosting.
    • If you've set up a custom domain, select Self Hosted and enter your custom Domain.

Refer to Bubble's Creating and managing apps documentation for more information.


UpLead node

URL: llms-txt#uplead-node

Contents:

  • Operations
  • Templates and examples

Use the UpLead node to automate work in UpLead, and integrate UpLead with other applications. n8n supports several UpLead operations, including getting company information.

On this page, you'll find a list of operations the UpLead node supports and links to more resources.

Refer to UpLead credentials for guidance on setting up authentication.

  • Company
    • Enrich
  • Person
    • Enrich

Templates and examples

Browse UpLead integration templates, or search all templates


Anthropic node

URL: llms-txt#anthropic-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Anthropic node to automate work in Anthropic and integrate Anthropic with other applications. n8n has built-in support for a wide range of Anthropic features, including analyzing, uploading, getting, and deleting documents, files, and images, and generating, improving, or templatizing prompts.

On this page, you'll find a list of operations the Anthropic node supports, and links to more resources.

You can find authentication information for this node here.

  • Document:
    • Analyze Document: Take in documents and answer questions about them.
  • File:
    • Upload File: Upload a file to the Anthropic API for later use.
    • Get File Metadata: Get metadata for a file from the Anthropic API.
    • List Files: List files from the Anthropic API.
    • Delete File: Delete a file from the Anthropic API.
  • Image:
    • Analyze Image: Take in images and answer questions about them.
  • Prompt:
    • Generate Prompt: Generate a prompt for a model.
    • Improve Prompt: Improve a prompt for a model.
    • Templatize Prompt: Templatize a prompt for a model.
  • Text:
    • Message a Model: Create a completion with an Anthropic model.

Templates and examples

Notion AI Assistant Generator

View template details

Gmail AI Email Manager

View template details

🤖 AI content generation for Auto Service 🚘 Automate your social media📲!

View template details

Browse Anthropic integration templates, or search all templates

Refer to Anthropic's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Pushcut credentials

URL: llms-txt#pushcut-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Download the Pushcut app.

Supported authentication methods

Refer to Pushcut's Guides documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: To generate an API key, go to Account > Integrations > Add API Key. Refer to Create an API key for more information.

Deployment environment variables

URL: llms-txt#deployment-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

This page lists the deployment configuration options for your self-hosted n8n instance, including setting up access URLs, enabling templates, customizing encryption, and configuring server details.

Proxy variable priorities

The proxy-from-env package that n8n uses to handle proxy environment variables (those ending with _PROXY) imposes a certain variable precedence. Notably, for proxy variables, lowercase versions (like http_proxy) have precedence over uppercase variants (for example HTTP_PROXY) when both are present.

To learn more about proxy environment variables, check the environment variables section of the package details.

Variable Type Default Description
HTTP_PROXY String - A URL to proxy unencrypted HTTP requests through. When set, n8n proxies all unencrypted HTTP traffic from nodes through the proxy URL.
HTTPS_PROXY String - A URL to proxy TLS/SSL encrypted HTTP requests through. When set, n8n proxies all TLS/SSL encrypted HTTP traffic from nodes through the proxy URL.
ALL_PROXY String - A URL to proxy both unencrypted and encrypted HTTP requests through. When set, n8n uses this value when more specific variables (HTTP_PROXY or HTTPS_PROXY) aren't present.
NO_PROXY String - A comma-separated list of hostnames or URLs that should bypass the proxy. When using HTTP_PROXY, HTTPS_PROXY, or ALL_PROXY, n8n will connect directly to the URLs or hostnames defined here instead of using the proxy.
N8N_EDITOR_BASE_URL String - Public URL where users can access the editor. Also used for emails sent from n8n and the redirect URL for SAML based authentication.
N8N_CONFIG_FILES (deprecated) String - Use to provide the path to a JSON configuration file. This option is deprecated and will be removed in a future version. Use .env files or *_FILE environment variables instead.
N8N_DISABLE_UI Boolean false Set to true to disable the UI.
N8N_PREVIEW_MODE Boolean false Set to true to run in preview mode.
N8N_TEMPLATES_ENABLED Boolean false Enables workflow templates (true) or disable (false).
N8N_TEMPLATES_HOST String https://api.n8n.io Change this if creating your own workflow template library. Note that to use your own workflow templates library, your API must provide the same endpoints and response structure as n8n's. Refer to Workflow templates for more information.
N8N_ENCRYPTION_KEY String Random key generated by n8n Provide a custom key used to encrypt credentials in the n8n database. By default n8n generates a random key on first launch.
N8N_USER_FOLDER String user-folder Provide the path where n8n will create the .n8n folder. This directory stores user-specific data, such as database file and encryption key.
N8N_PATH String / The path n8n deploys to.
N8N_HOST String localhost Host name n8n runs on.
N8N_PORT Number 5678 The HTTP port n8n runs on.
N8N_LISTEN_ADDRESS String :: The IP address n8n should listen on.
N8N_PROTOCOL Enum string: http, https http The protocol used to reach n8n.
N8N_SSL_KEY String - The SSL key for HTTPS protocol.
N8N_SSL_CERT String - The SSL certificate for HTTPS protocol.
N8N_PERSONALIZATION_ENABLED Boolean true Whether to ask users personalisation questions and then customise n8n accordingly.
N8N_VERSION_NOTIFICATIONS_ENABLED Boolean true When enabled, n8n sends notifications of new versions and security updates.
N8N_VERSION_NOTIFICATIONS_ENDPOINT String https://api.n8n.io/versions/ The endpoint to retrieve version information from.
N8N_VERSION_NOTIFICATIONS_INFO_URL String https://docs.n8n.io/getting-started/installation/updating.html The URL displayed in the New Versions panel for more information.
N8N_DIAGNOSTICS_ENABLED Boolean true Whether to share selected, anonymous telemetry with n8n. Note that if you set this to false, you can't enable Ask AI in the Code node.
N8N_DIAGNOSTICS_CONFIG_FRONTEND String 1zPn9bgWPzlQc0p8Gj1uiK6DOTn;https://telemetry.n8n.io Telemetry configuration for the frontend.
N8N_DIAGNOSTICS_CONFIG_BACKEND String 1zPn7YoGC3ZXE9zLeTKLuQCB4F6;https://telemetry.n8n.io/v1/batch Telemetry configuration for the backend.
N8N_PUSH_BACKEND String websocket Choose whether the n8n backend uses server-sent events (sse) or WebSockets (websocket) to send changes to the UI.
VUE_APP_URL_BASE_API String http://localhost:5678/ Used when building the n8n-editor-ui package manually to set how the frontend can reach the backend API. Refer to Configure the Base URL.
N8N_HIRING_BANNER_ENABLED Boolean true Whether to show the n8n hiring banner in the console (true) or not (false).
N8N_PUBLIC_API_SWAGGERUI_DISABLED Boolean false Whether the Swagger UI (API playground) is disabled (true) or not (false).
N8N_PUBLIC_API_DISABLED Boolean false Whether to disable the public API (true) or not (false).
N8N_PUBLIC_API_ENDPOINT String api Path for the public API endpoints.
N8N_GRACEFUL_SHUTDOWN_TIMEOUT Number 30 How long should the n8n process wait (in seconds) for components to shut down before exiting the process.
N8N_DEV_RELOAD Boolean false When working on the n8n source code, set this to true to automatically reload or restart the application when changes occur in the source code files.
N8N_REINSTALL_MISSING_PACKAGES Boolean false If set to true, n8n will automatically attempt to reinstall any missing packages.
N8N_TUNNEL_SUBDOMAIN String - Specifies the subdomain for the n8n tunnel. If not set, n8n generates a random subdomain.
N8N_PROXY_HOPS Number 0 Number of reverse-proxies n8n is running behind.

Mailgun credentials

URL: llms-txt#mailgun-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key
  • Working with multiple email domains

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Mailgun's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Domain: If your Mailgun account is based in Europe, select api.eu.mailgun.net; otherwise, select api.mailgun.net. Refer to Mailgun Base URLs for more information.
  • An Email Domain: Enter the email sending domain you're working with. If you have multiple sending domains, refer to Working with multiple email domains for more information.
  • An API Key: View your API key in Settings > API Keys. Refer to Mailgun's API Authentication documentation for more detailed instructions.

Working with multiple email domains

If your Mailgun account includes multiple sending domains, create a separate credential for each email domain you're working with.


Brevo credentials

URL: llms-txt#brevo-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • API key

You can use these credentials to authenticate the following nodes:

Create a Brevo developer account.

Supported authentication methods

Refer to Brevo's API documentation for more information about authenticating with the service.

To configure this credential, you'll need:


Lemlist credentials

URL: llms-txt#lemlist-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create an account on a Lemlist instance.

Supported authentication methods

Refer to Lemlist's API documentation for more information about the service.

To configure this credential, you'll need:


Email Trigger (IMAP) node

URL: llms-txt#email-trigger-(imap)-node

Contents:

  • Operations
  • Node parameters
    • Credential to connect with
    • Mailbox Name
    • Action
    • Download Attachments
    • Format
  • Node options
    • Custom Email Rules
    • Force Reconnect Every Minutes

Use the IMAP Email node to receive emails using an IMAP email server. This node is a trigger node.

You can find authentication information for this node here.

Configure the node using the following parameters.

Credential to connect with

Select or create an IMAP credential to connect to the server with.

Enter the mailbox from which you want to receive emails.

Choose whether you want an email marked as read when n8n receives it. None will leave it marked unread. Mark as Read will mark it as read.

Download Attachments

This toggle controls whether to download email attachments (turned on) or not (turned off). Only set this if necessary, since it increases processing.

Choose the format to return the message in from these options:

  • RAW: This format returns the full email message data with body content in the raw field as a base64url encoded string. It doesn't use the payload field.
  • Resolved: This format returns the full email with all data resolved and attachments saved as binary data.
  • Simple: This format returns the full email. Don't use it if you want to gather inline attachments.

You can further configure the node using these Options.

Custom Email Rules

Enter custom email fetching rules to determine which emails the node fetches.

Refer to node-imap's search function criteria for more information.
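For example, a rule that only fetches unread messages received after a given date could look like the following. This is a minimal sketch using node-imap's criteria array format; the date value is illustrative.

```json
["UNSEEN", ["SINCE", "May 20, 2025"]]
```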

Force Reconnect Every Minutes

Set an interval in minutes to force reconnection.

Templates and examples

Effortless Email Management with AI-Powered Summarization & Review

View template details

AI Email Analyzer: Process PDFs, Images & Save to Google Drive + Telegram

View template details

A Very Simple "Human in the Loop" Email Response System Using AI and IMAP

View template details

Browse Email Trigger (IMAP) integration templates, or search all templates


Keyboard shortcuts when using the Code editor

URL: llms-txt#keyboard-shortcuts-when-using-the-code-editor

Contents:

  • Cursor Movement
  • Selection
  • Basic Operations
  • Delete Operations
  • Line Operations
  • Autocomplete
  • Indentation
  • Code Folding
  • Multi-cursor
  • Formatting

The Code node editing environment supports a range of keyboard shortcuts to speed up and enhance your experience. Select the appropriate tab to see the relevant shortcuts for your operating system.

Action Shortcut
Move cursor left Left
Move cursor right Right
Move cursor up Up
Move cursor down Down
Move cursor by word left Ctrl+Left
Move cursor by word right Ctrl+Right
Move to line start Home or Ctrl+Left
Move to line end End or Ctrl+Right
Move to document start Ctrl+Home
Move to document end Ctrl+End
Move page up Page Up
Move page down Page Down
Action Shortcut
Move cursor left Left or Ctrl+B
Move cursor right Right or Ctrl+F
Move cursor up Up or Ctrl+P
Move cursor down Down or Ctrl+N
Move cursor by word left Option+Left
Move cursor by word right Option+Right
Move to line start Cmd+Left or Ctrl+A
Move to line end Cmd+Right or Ctrl+E
Move to document start Cmd+Up
Move to document end Cmd+Down
Move page up Page Up or Option+V
Move page down Page Down or Ctrl+V
Action Shortcut
Move cursor left Left
Move cursor right Right
Move cursor up Up
Move cursor down Down
Move cursor by word left Ctrl+Left
Move cursor by word right Ctrl+Right
Move to line start Home or Ctrl+Left
Move to line end End or Ctrl+Right
Move to document start Ctrl+Home
Move to document end Ctrl+End
Move page up Page Up
Move page down Page Down
Action Shortcut
Selection with any movement key Shift + [Movement Key]
Select all Ctrl+A
Select line Ctrl+L
Select next occurrence Ctrl+D
Select all occurrences Shift+Ctrl+L
Go to matching bracket Shift+Ctrl+\
Action Shortcut
Selection with any movement key Shift + [Movement Key]
Select all Cmd+A
Select line Cmd+L
Select next occurrence Cmd+D
Go to matching bracket Shift+Cmd+\
Action Shortcut
Selection with any movement key Shift + [Movement Key]
Select all Ctrl+A
Select line Ctrl+L
Select next occurrence Ctrl+D
Select all occurrences Shift+Ctrl+L
Go to matching bracket Shift+Ctrl+\
Action Shortcut
New line with indentation Enter
Undo Ctrl+Z
Redo Ctrl+Y or Ctrl+Shift+Z
Undo selection Ctrl+U
Copy Ctrl+C
Cut Ctrl+X
Paste Ctrl+V
Action Shortcut
New line with indentation Enter
Undo Cmd+Z
Redo Cmd+Y or Cmd+Shift+Z
Undo selection Cmd+U
Copy Cmd+C
Cut Cmd+X
Paste Cmd+V
Action Shortcut
New line with indentation Enter
Undo Ctrl+Z
Redo Ctrl+Y or Ctrl+Shift+Z
Undo selection Ctrl+U
Copy Ctrl+C
Cut Ctrl+X
Paste Ctrl+V
Action Shortcut
Delete character left Backspace
Delete character right Del
Delete word left Ctrl+Backspace
Delete word right Ctrl+Del
Delete line Shift+Ctrl+K
Action Shortcut
Delete character left Backspace
Delete character right Del
Delete word left Option+Backspace or Ctrl+Cmd+H
Delete word right Option+Del or Fn+Option+Backspace
Delete line Shift+Cmd+K
Delete to line start Cmd+Backspace
Delete to line end Cmd+Del or Ctrl+K
Action Shortcut
Delete character left Backspace
Delete character right Del
Delete word left Ctrl+Backspace
Delete word right Ctrl+Del
Delete line Shift+Ctrl+K
Action Shortcut
Move line up Alt+Up
Move line down Alt+Down
Copy line up Shift+Alt+Up
Copy line down Shift+Alt+Down
Toggle line comment Ctrl+/
Add line comment Ctrl+K then Ctrl+C
Remove line comment Ctrl+K then Ctrl+U
Toggle block comment Shift+Alt+A
Action Shortcut
Move line up Option+Up
Move line down Option+Down
Copy line up Shift+Option+Up
Copy line down Shift+Option+Down
Toggle line comment Cmd+/
Add line comment Cmd+K then Cmd+C
Remove line comment Cmd+K then Cmd+U
Toggle block comment Shift+Option+A
Split line Ctrl+O
Transpose characters Ctrl+T
Action Shortcut
Move line up Alt+Up
Move line down Alt+Down
Copy line up Shift+Alt+Up
Copy line down Shift+Alt+Down
Toggle line comment Ctrl+/
Add line comment Ctrl+K then Ctrl+C
Remove line comment Ctrl+K then Ctrl+U
Toggle block comment Shift+Alt+A
Action Shortcut
Start completion Ctrl+Space
Accept completion Enter or Tab
Close completion Esc
Navigate completion options Up or Down
Action Shortcut
Start completion Ctrl+Space
Accept completion Enter or Tab
Close completion Esc
Navigate completion options Up or Down
Action Shortcut
Start completion Ctrl+Space
Accept completion Enter or Tab
Close completion Esc
Navigate completion options Up or Down
Action Shortcut
Indent more Tab or Ctrl+]
Indent less Shift+Tab or Ctrl+[
Action Shortcut
Indent more Cmd+]
Indent less Cmd+[
Action Shortcut
Indent more Tab or Ctrl+]
Indent less Shift+Tab or Ctrl+[
Action Shortcut
Fold code Ctrl+Shift+[
Unfold code Ctrl+Shift+]
Fold all Ctrl+K then Ctrl+0
Unfold all Ctrl+K then Ctrl+J
Action Shortcut
Fold code Cmd+Option+[
Unfold code Cmd+Option+]
Fold all Cmd+K then Cmd+0
Unfold all Cmd+K then Cmd+J
Action Shortcut
Fold code Ctrl+Shift+[
Unfold code Ctrl+Shift+]
Fold all Ctrl+K then Ctrl+0
Unfold all Ctrl+K then Ctrl+J
Action Shortcut
Add cursor at click position Alt+Left Button
Add cursor above Ctrl+Alt+Up
Add cursor below Ctrl+Alt+Down
Add cursors to line ends Shift+Alt+I
Clear multiple cursors Esc
Action Shortcut
Add cursor at click position Option+Left Button
Add cursor above Ctrl+Option+Up
Add cursor below Ctrl+Option+Down
Add cursors to line ends Shift+Option+I
Clear multiple cursors Esc
Action Shortcut
Add cursor at click position Alt+Left Button
Add cursor above Shift+Alt+Up
Add cursor below Shift+Alt+Down
Add cursors to line ends Shift+Alt+I
Clear multiple cursors Esc
Action Shortcut
Format document Shift+Alt+F
Action Shortcut
Format document Shift+Cmd+F
Action Shortcut
Format document Ctrl+Shift+I

Search & Navigation

Action Shortcut
Open Search Ctrl+F
Select All Alt+Enter
Replace All Ctrl+Alt+Enter
Go To Line Ctrl+G
Next Diagnostic F8
Previous Diagnostic Shift+F8
Open Lint Panel Ctrl+Shift+M
Action Shortcut
Open Search Cmd+F
Select All Cmd+Enter
Replace All Cmd+Option+Enter
Go To Line Cmd+G
Next Diagnostic F8
Previous Diagnostic Shift+F8
Open Lint Panel Cmd+Shift+M
Action Shortcut
Open Search Ctrl+F
Select All Alt+Enter
Replace All Ctrl+Alt+Enter
Go To Line Ctrl+G
Next Diagnostic F8
Previous Diagnostic Shift+F8
Open Lint Panel Ctrl+Shift+M

Okta credentials

URL: llms-txt#okta-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using SSWS API access token

You can use these credentials to authenticate the following nodes:

Create an Okta free trial or create an admin account on an existing Okta org.

Supported authentication methods

  • SSWS API Access token

Refer to Okta's documentation for more information about the service.

Using SSWS API access token

To configure this credential, you'll need:

  • The URL: The base URL of your Okta org, also referred to as your unique subdomain. There are two quick ways to access it:
    1. In the Admin Console, select your Profile, hover over the domain listed below your username, and select the Copy icon. Paste this into n8n, but be sure to add https:// before it.
    2. Copy the base URL of your Admin Console URL, for example https://dev-123456-admin.okta.com. Paste it into n8n and remove -admin, for example: https://dev-123456.okta.com.
  • An SSWS Access Token: Create a token by going to Security > API > Tokens > Create token. Refer to Create Okta API tokens for more information.

Conversational AI Agent node

URL: llms-txt#conversational-ai-agent-node

Contents:

  • Node parameters
    • Prompt
    • Require Specific Output Format
  • Node options
    • Human Message
    • System Message
    • Max Iterations
    • Return Intermediate Steps
  • Templates and examples
  • Common issues

n8n removed this functionality in February 2025.

The Conversational Agent has human-like conversations. It can maintain context, understand user intent, and provide relevant answers. This agent is typically used for building chatbots, virtual assistants, and customer support systems.

The Conversational Agent describes tools in the system prompt and parses JSON responses for tool calls. If your preferred AI model doesn't support tool calling or you're handling simpler interactions, this agent is a good general option. It's more flexible but may be less accurate than the Tools Agent.

Refer to AI Agent for more information on the AI Agent node itself.

You can use this agent with the Chat Trigger node. Attach a memory sub-node so that users can have an ongoing conversation with multiple queries. Memory doesn't persist between sessions.

Configure the Conversational Agent using the following parameters.

Select how you want the node to construct the prompt (also known as the user's query or input from the chat).

  • Take from previous node automatically: If you select this option, the node expects the previous node to provide an input field named chatInput.
  • Define below: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the Prompt (User Message) field.

Require Specific Output Format

This parameter controls whether you want the node to require a specific output format. When turned on, n8n prompts you to connect one of these output parsers to the node:

Refine the Conversational Agent node's behavior using these options:

Tell the agent about the tools it can use and add context to the user's input.

You must include these expressions and this variable:

  • {tools}: A LangChain expression that provides a string of the tools you've connected to the Agent. Provide some context or explanation about who should use the tools and how they should use them.
  • {format_instructions}: A LangChain expression that provides the schema or format from the output parser node you've connected. Since the instructions themselves are context, you don't need to provide context for this expression.
  • {{input}}: A LangChain variable containing the user's prompt. This variable populates with the value of the Prompt parameter. Provide some context that this is the user's input.

Here's an example of how you might use these strings:

If you'd like to send a message to the agent before the conversation starts, enter the message you'd like to send.

Use this option to guide the agent's decision-making.

Enter the number of times the model should run to try and generate a good answer from the user's prompt.

Return Intermediate Steps

Select whether to include intermediate steps the agent took in the final output (turned on) or not (turned off).

This could be useful for further refining the agent's behavior based on the steps it took.

Templates and examples

Refer to the main AI Agent node's Templates and examples section.

For common questions or issues and suggested solutions, refer to Common issues.

Examples:

Example 1 (unknown):

TOOLS
------
Assistant can ask the user to use tools to look up information that may be helpful in answering the user's original question. The tools the human can use are:

{tools}

{format_instructions}

USER'S INPUT
--------------------
Here is the user's input (remember to respond with a markdown code snippet of a JSON blob with a single action, and NOTHING else):

{{input}}

Travis CI credentials

URL: llms-txt#travis-ci-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token

You can use these credentials to authenticate the following nodes:

Create a Travis CI account.

Supported authentication methods

Refer to Travis CI's API documentation for more information about the service.

To configure this credential, you'll need:


Monica CRM credentials

URL: llms-txt#monica-crm-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token

You can use these credentials to authenticate the following nodes:

Sign up for a Monica CRM account or self-host an instance.

Supported authentication methods

Refer to Monica's API documentation for more information about the service.

To configure this credential, you'll need:

  • Your Environment:
    • Select Cloud-Hosted if you access your Monica instance through Monica.
    • Select Self-Hosted if you have self-hosted Monica on your own server. Provide your Self-Hosted Domain.
  • An API Token: Generate a token in Settings > API.

Pushbullet node

URL: llms-txt#pushbullet-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Pushbullet node to automate work in Pushbullet, and integrate Pushbullet with other applications. n8n has built-in support for a wide range of Pushbullet features, including creating, updating, deleting, and getting a push.

On this page, you'll find a list of operations the Pushbullet node supports and links to more resources.

Refer to Pushbullet credentials for guidance on setting up authentication.

  • Push
    • Create a push
    • Delete a push
    • Get all pushes
    • Update a push

Templates and examples

Browse Pushbullet integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


ReAct AI Agent node

URL: llms-txt#react-ai-agent-node

Contents:

  • Node parameters
    • Prompt
    • Require Specific Output Format
  • Node options
    • Human Message Template
    • Prefix Message
    • Suffix Message for Chat Model
    • Suffix Message for Regular Model
    • Return Intermediate Steps
  • Related resources

n8n removed this functionality in February 2025.

The ReAct Agent node implements ReAct logic. ReAct (reasoning and acting) brings together the reasoning powers of chain-of-thought prompting and action plan generation.

The ReAct Agent reasons about a given task, determines the necessary actions, and then executes them. It follows the cycle of reasoning and acting until it completes the task. The ReAct agent can break down complex tasks into smaller sub-tasks, prioritise them, and execute them one after the other.

Refer to AI Agent for more information on the AI Agent node itself.

The ReAct agent doesn't support memory sub-nodes. This means it can't recall previous prompts or simulate an ongoing conversation.

Configure the ReAct Agent using the following parameters.

Select how you want the node to construct the prompt (also known as the user's query or input from the chat).

  • Take from previous node automatically: If you select this option, the node expects the previous node to provide an input field named chatInput.
  • Define below: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the Prompt (User Message) field.

Require Specific Output Format

This parameter controls whether you want the node to require a specific output format. When turned on, n8n prompts you to connect one of these output parsers to the node:

Use the options to create a message to send to the agent at the start of the conversation. The message type depends on the model you're using:

  • Chat models: These models have the concept of three components interacting (AI, system, and human). They can receive system messages and human messages (prompts).
  • Instruct models: These models don't have the concept of separate AI, system, and human components. They receive one body of text, the instruct message.

Human Message Template

Use this option to extend the user prompt. This is a way for the agent to pass information from one iteration to the next.

Available LangChain expressions:

  • {input}: Contains the user prompt.
  • {agent_scratchpad}: Information to remember for the next iteration.

Enter text to prefix the tools list at the start of the conversation. You don't need to add the list of tools. LangChain automatically adds the tools list.

Suffix Message for Chat Model

Add text to append after the tools list at the start of the conversation when the agent uses a chat model. You don't need to add the list of tools. LangChain automatically adds the tools list.

Suffix Message for Regular Model

Add text to append after the tools list at the start of the conversation when the agent uses a regular/instruct model. You don't need to add the list of tools. LangChain automatically adds the tools list.

Return Intermediate Steps

Select whether to include intermediate steps the agent took in the final output (turned on) or not (turned off).

This could be useful for further refining the agent's behavior based on the steps it took.

Refer to LangChain's ReAct Agents documentation for more information.

Templates and examples

Refer to the main AI Agent node's Templates and examples section.

For common questions or issues and suggested solutions, refer to Common issues.


Security Assertion Markup Language (SAML)

URL: llms-txt#security-assertion-markup-language-(saml)

  • Available on Enterprise plans.
  • You need to be an instance owner or admin to enable and configure SAML.

This section tells you how to enable SAML SSO (single sign-on) in n8n. It assumes you're familiar with SAML. If you're not, SAML Explained in Plain English can help you understand how SAML works, and its benefits.


Demio credentials

URL: llms-txt#demio-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Demio account.

Supported authentication methods

Refer to Demio's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key
  • An API Secret

You must have Owner status in Demio to generate API keys and secrets. To view and generate API keys and secrets, go to Account Settings > API. Refer to the Demio Account Owner Settings documentation for more detailed steps.


Advanced AI

URL: llms-txt#advanced-ai

Contents:

  • Related resources
    • Node types
    • Workflow templates
    • Chat trigger
    • Chatbot widget

Build AI functionality using n8n: from creating your own chat bot, to using AI to process documents and data from other sources.

This feature is available on Cloud and self-hosted n8n, in version 1.19.4 and above.

Work through the short tutorial to learn the basics of building AI workflows in n8n.

Tutorial

  • Use a Starter Kit

Try n8n's Self-hosted AI Starter Kit to quickly start building AI workflows.

Self-hosted AI Starter Kit

  • Explore examples and concepts

Browse examples and workflow templates to help you build. Includes explanations of important AI concepts.

Examples

  • How n8n uses LangChain

Learn more about how n8n builds on LangChain.

LangChain in n8n

  • Browse AI templates

Explore a wide range of AI workflow templates on the n8n website.

AI workflows on n8n.io

Related documentation and tools.

This feature uses Cluster nodes: groups of root and sub nodes that work together.

Cluster nodes are node groups that work together to provide functionality in an n8n workflow. Instead of using a single node, you use a root node and one or more sub-nodes that extend the functionality of the node.

Workflow templates

You can browse workflow templates in-app or on the n8n website Workflows page.

Refer to Templates for information on accessing templates in-app.

Use the n8n Chat Trigger to trigger a workflow based on chat interactions.

n8n provides a chatbot widget that you can use as a frontend for AI-powered chat workflows. Refer to the @n8n/chat npm page for usage information.
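As a rough sketch, embedding the widget in a web page might look like the following. This assumes the createChat export and webhookUrl option described on the @n8n/chat npm page; replace the webhook URL with the production URL of your own Chat Trigger.

```javascript
import '@n8n/chat/style.css';
import { createChat } from '@n8n/chat';

// Point the widget at the production webhook URL of a workflow
// that starts with a Chat Trigger node (URL below is a placeholder).
createChat({
  webhookUrl: 'https://your-n8n-instance/webhook/REPLACE-WITH-YOUR-CHAT-TRIGGER-URL',
});
```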


Clearbit credentials

URL: llms-txt#clearbit-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following node:

Create a Clearbit account.

Supported authentication methods

Refer to Clearbit's API documentation for more information about authenticating with the service.

To configure this credential, you'll need:


SSH

URL: llms-txt#ssh

Contents:

  • Operations
    • Execute Command
    • Download File
    • Upload File
  • Templates and examples

The SSH node is useful for executing commands using the Secure Shell Protocol.

You can find authentication information for this node here.

To attach a file for upload, you will need to use an extra node such as the Read/Write Files from Disk node or the HTTP Request node to pass the file as a data property.

Configure this operation with these parameters:

  • Credential to connect with: Select an existing or create a new SSH credential to connect with.

  • Command: Enter the command to execute on the remote device.

  • Working Directory: Enter the directory where n8n should execute the command.

  • Credential to connect with: Select an existing or create a new SSH credential to connect with.

  • Path: Enter the path for the file you want to download. This path must include the file name. The downloaded file will use this file name. To use a different name, use the File Name option. Refer to Download File options for more information.

  • File Property: Enter the name of the object property that holds the binary data you want to download.

Download File options

You can further configure this operation with the File Name option. Use this option to override the binary data file name with a name of your choice.

  • Credential to connect with: Select an existing or create a new SSH credential to connect with.
  • Input Binary Field: Enter the name of the input binary field that contains the file you want to upload.
  • Target Directory: The directory to upload the file to. The name of the file is taken from the binary data file name. To enter a different name, use the File Name option. Refer to Upload File options for more information.

Upload File options

You can further configure this operation with the File Name option. Use this option to override the binary data file name with a name of your choice.

Templates and examples

Send Email if server has upgradable packages

View template details

Check VPS resource usage every 15 minutes

View template details

Docker Registry Cleanup Workflow

View template details

Browse SSH integration templates, or search all templates


Embeddings Google Gemini node

URL: llms-txt#embeddings-google-gemini-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources

Use the Embeddings Google Gemini node to generate embeddings for a given text.

On this page, you'll find the node parameters for the Embeddings Google Gemini node, and links to more resources.

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.

  • Model: Select the model to use to generate the embedding.

Learn more about available models in Google Gemini's models documentation.

Templates and examples

RAG Chatbot for Company Documents using Google Drive and Gemini

View template details

🤖 AI Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant

View template details

API Schema Extractor

View template details

Browse Embeddings Google Gemini integration templates, or search all templates

Refer to Langchain's Google Generative AI embeddings documentation for more information about the service.

View n8n's Advanced AI documentation.


E-goi node

URL: llms-txt#e-goi-node

Contents:

  • Operations
  • Templates and examples

Use the E-goi node to automate work in E-goi, and integrate E-goi with other applications. n8n has built-in support for a wide range of E-goi features, including creating, updating, deleting, and getting contacts.

On this page, you'll find a list of operations the E-goi node supports and links to more resources.

Refer to E-goi credentials for guidance on setting up authentication.

  • Create a member
  • Get a member
  • Get all members
  • Update a member

Templates and examples

Browse E-goi integration templates, or search all templates


Install verified community nodes in the n8n app

URL: llms-txt#install-verified-community-nodes-in-the-n8n-app

Contents:

  • Install a community node
  • Uninstall a community node

Limited to n8n instance owners

Only the n8n instance owner can install and manage verified community nodes. The instance owner is the person who sets up and manages user management for the instance. All members of an n8n instance can use installed community nodes in their workflows.

Admin accounts can also uninstall any community node, verified or unverified. This helps them remove problematic nodes that may affect the instance's health and functionality.

Install a community node

To install a verified community node:

  1. Go to the Canvas and open the nodes panel (either by selecting '+' or pressing Tab).
  2. Search for the node that you're looking for. If there is a matching verified community node, you will see a More from the community section at the bottom of the nodes panel.
  3. Select the node you want to install. This takes you to a detailed view of the node, showing all the supported actions.
  4. Select install. This will install the node for your instance and enable all members to use it in their workflows.
  5. You can now add the node to your workflows.

Enable installation of verified community nodes

Some users may not want to show verified community nodes in the nodes panel of their instances. On n8n cloud, instance owners can toggle this in the Cloud Admin Panel. Self-hosted users can use environment variables to control the availability of this feature.

Uninstall a community node

To uninstall a community node:

  1. Go to Settings > Community nodes.
  2. On the node you want to uninstall, select Options.
  3. Select Uninstall package.
  4. Select Uninstall Package in the confirmation modal.

Credentials environment variables

URL: llms-txt#credentials-environment-variables

File-based configuration

You can add _FILE to individual variables to provide their configuration in a separate file. Refer to Keeping sensitive data in separate files for more details.

Enable credential overwrites using the following environment variables. Refer to Credential overwrites for details.

| Variable | Type | Default | Description |
| --- | --- | --- | --- |
| CREDENTIALS_OVERWRITE_DATA /_FILE | * | - | Overwrites for credentials. |
| CREDENTIALS_OVERWRITE_ENDPOINT | String | - | The API endpoint to fetch credentials. |
| CREDENTIALS_DEFAULT_NAME | String | My credentials | The default name for credentials. |
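For illustration, the overwrite data is a JSON object keyed by credential type, with the fields you want to pre-fill as values. The credential type (slackApi) and field (accessToken) below are only examples; check the credential's definition for the exact names. You can pass this JSON directly in CREDENTIALS_OVERWRITE_DATA, or point the _FILE variant at a file containing it.

```json
{
  "slackApi": {
    "accessToken": "xoxb-your-preconfigured-token"
  }
}
```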

LoneScale Trigger node

URL: llms-txt#lonescale-trigger-node

Contents:

  • Events
  • Related resources

Use the LoneScale Trigger node to respond to workflow events in LoneScale and integrate LoneScale with other applications.

On this page, you'll find a list of operations the LoneScale node supports, and links to more resources.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's LoneScale Trigger integrations page.

  • On new LoneScale event

n8n provides an app node for LoneScale. You can find the node docs here.

View example workflows and related content on n8n's website.


HTTP node variables

URL: llms-txt#http-node-variables

Variables for working with HTTP node requests and responses when using pagination.

Refer to HTTP Request for guidance on using the HTTP node, including configuring pagination.

Refer to HTTP Request node cookbook | Pagination for example pagination configurations.

These variables are for use in expressions in the HTTP node. You can't use them in other nodes.

| Variable | Description |
| --- | --- |
| $pageCount | The pagination count. Tracks how many pages the node has fetched. |
| $request | The request object sent by the HTTP node. |
| $response | The response object from the HTTP call. Includes $response.body, $response.headers, and $response.statusCode. The contents of body and headers depend on the data sent by the API. |
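For example, cursor-based pagination might use expressions like the ones below in the node's pagination settings. The next_cursor field name is only an assumption about the API's response shape; substitute whatever your API actually returns.

```
// Pagination parameter value: the cursor returned by the previous call
{{ $response.body.next_cursor }}

// "Complete when" style check: stop once the API stops returning a cursor
{{ !$response.body.next_cursor }}
```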

Update self-hosted n8n

URL: llms-txt#update-self-hosted-n8n

It's important to keep your n8n version up to date. This ensures you get the latest features and fixes.

Some tips when updating:

  • Update frequently: this avoids having to jump multiple versions at once, reducing the risk of a disruptive update. Try to update at least once a month.
  • Check the Release notes for breaking changes.
  • Use Environments to create a test version of your instance. Test the update there first.

For instructions on how to update, refer to the documentation for your installation method:


Returns all items the node "IF" outputs (index: 0 which is Output "true" of the same run as current node)

URL: llms-txt#returns-all-items-the-node-"if"-outputs-(index:-0-which-is-output-"true"-of-the-same-run-as-current-node)

allItems = _("IF").all(0, _runIndex);


Code node common issues

URL: llms-txt#code-node-common-issues

Contents:

  • Code doesn't return items properly
  • A 'json' property isn't an object
  • Code doesn't return an object
  • 'import' and 'export' may only appear at the top level
  • Cannot find module ''
  • Using global variables

Here are some common errors and issues with the Code node and steps to resolve or troubleshoot them.

Code doesn't return items properly

This error occurs when the code in your Code node doesn't return data in the expected format.

In n8n, all data passed between nodes is an array of objects. Each of these objects wraps another object with the json key:

To troubleshoot this error, check the following:

  • Read the data structure to understand the data you receive in the Code node and the requirements for outputting data from the node.
  • Understand how data items work and how to connect data items from previous nodes with item linking.

A 'json' property isn't an object

This error occurs when the Code node returns data where the json key isn't pointing to an object.

This may happen if you set json to a different data structure, like an array:

To resolve this, ensure that the json key references an object in your return data:

Code doesn't return an object

This error may occur when your Code node doesn't return anything or if it returns an unexpected result.

To resolve this, ensure that your Code node returns the expected data structure:

This error may also occur if the code you provided returns 'undefined' instead of the expected result. In that case, ensure that the data you are referencing in your Code node exists in each execution and that it has the structure your code expects.

'import' and 'export' may only appear at the top level

This error occurs if you try to use import or export in the Code node. These aren't supported by n8n's JavaScript sandbox. Instead, use the require function to load modules.

To resolve this issue, try changing your import statements to use require:
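For example, an ES module import could be rewritten as a require call like this. The lodash module is only an example; on self-hosted instances the module must also be allowed through NODE_FUNCTION_ALLOW_EXTERNAL (or NODE_FUNCTION_ALLOW_BUILTIN for built-in modules).

```javascript
// Not supported in the Code node:
// import { camelCase } from 'lodash';

// Use require instead:
const { camelCase } = require('lodash');

return [{ json: { example: camelCase('hello world') } }];
```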

Cannot find module ''

This error occurs if you try to use require in the Code node and n8n can't find the module.

n8n doesn't support importing modules in the Cloud version.

If you're self-hosting n8n, follow these steps:

  • Install the module into your n8n environment.
    • If you are running n8n with npm, install the module in the same environment as n8n.
    • If you are running n8n with Docker, you need to extend the official n8n image with a custom image that includes your module.
  • Set the NODE_FUNCTION_ALLOW_BUILTIN and NODE_FUNCTION_ALLOW_EXTERNAL environment variables to allow importing modules.

Using global variables

Sometimes you may wish to set and retrieve simple global data related to a workflow across and within executions. For example, you may wish to include the date of the previous report when compiling a report with a list of project updates.

To set, update, and retrieve data directly to a workflow, use the static data functions within your code. You can manage data either globally or tied to specific nodes.
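As a minimal sketch, the $getWorkflowStaticData function gives you that storage from a Code node. The lastReportDate property is just an illustrative name; note that static data is only saved for production executions, not manual test runs.

```javascript
// 'global' scopes the data to the whole workflow; use 'node' to scope it to this node.
const staticData = $getWorkflowStaticData('global');

// Read the value stored by a previous execution (undefined on the first run).
const lastReportDate = staticData.lastReportDate;

// Update it for the next execution.
staticData.lastReportDate = new Date().toISOString();

return [{ json: { lastReportDate } }];
```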

Use Remove Duplicates when possible

If you're interested in using variables to avoid processing the same data items more than once, consider using the Remove Duplicates node instead. The Remove Duplicates node can save information across executions to avoid processing the same items multiple times.

Examples:

Example 1 (unknown):

[
  {
    "json": {
      // your data goes here
    }
  }
]

Example 2 (unknown):

[
  {
    "json": [
      // Setting `json` to an array like this will produce an error
    ]
  }
]

Example 3 (unknown):

[
  {
    "json": {
      // Setting `json` to an object as expected
    }
  }
]

Example 4 (unknown):

[
  {
    "json": {
      // your data goes here
    }
  }
]

Pushover node

URL: llms-txt#pushover-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Pushover node to automate work in Pushover, and integrate Pushover with other applications. n8n supports sending push notifications with Pushover.

On this page, you'll find a list of operations the Pushover node supports and links to more resources.

Refer to Pushover credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Templates and examples

Weekly reminder on your notion tasks with a deadline

View template details

Send daily weather updates via push notification

View template details

Error Handling System with PostgreSQL Logging and Rate-Limited Notifications

by Davi Saranszky Mesquita

View template details

Browse Pushover integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Google Drive node

URL: llms-txt#google-drive-node

Contents:

  • Operations
  • Templates and examples
  • Common issues
  • What to do if your operation isn't supported

Use the Google Drive node to automate work in Google Drive, and integrate Google Drive with other applications. n8n has built-in support for a wide range of Google Drive features, including creating, updating, listing, deleting, and getting drives, files, and folders.

On this page, you'll find a list of operations the Google Drive node supports and links to more resources.

Refer to Google Drive credentials for guidance on setting up authentication.

Templates and examples

Generate AI Videos with Google Veo3, Save to Google Drive and Upload to YouTube

View template details

Fully Automated AI Video Generation & Multi-Platform Publishing

by Juan Carlos Cavero Gracia

View template details

Ask questions about a PDF using AI

View template details

Browse Google Drive integration templates, or search all templates

For common questions or issues and suggested solutions, refer to Common issues.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


DOMAIN_NAME and SUBDOMAIN together determine where n8n will be reachable from

URL: llms-txt#domain_name-and-subdomain-together-determine-where-n8n-will-be-reachable-from


TYPE n8n_scaling_mode_queue_jobs_waiting gauge

URL: llms-txt#type-n8n_scaling_mode_queue_jobs_waiting-gauge

n8n_scaling_mode_queue_jobs_waiting 0


---

## Affinity credentials

**URL:** llms-txt#affinity-credentials

**Contents:**
- Prerequisites
- Supported authentication methods
- Related resources
- Using API key

You can use these credentials to authenticate the following nodes:

- [Affinity](../../app-nodes/n8n-nodes-base.affinity/)
- [Affinity Trigger](../../trigger-nodes/n8n-nodes-base.affinitytrigger/)

Create an [Affinity](https://www.affinity.co/) account at the Scale, Advanced, or Enterprise subscription tiers.

## Supported authentication methods

Refer to [Affinity's API documentation](https://support.affinity.co/s/article/Getting-started-with-the-Affinity-API-FAQs) for more information about working with the service.

To configure this credential, you'll need:

- An **API Key**: Refer to [How to obtain your Affinity API key documentation](https://support.affinity.co/hc/en-us/articles/360032633992-How-to-obtain-your-Affinity-API-key) to get your API key.

---

## Redis node

**URL:** llms-txt#redis-node

**Contents:**
- Operations
- Templates and examples

Use the Redis node to automate work in Redis, and integrate Redis with other applications. n8n has built-in support for a wide range of Redis features, including deleting keys, getting key values, setting key value, and publishing messages to the Redis channel.

On this page, you'll find a list of operations the Redis node supports and links to more resources.

Refer to [Redis credentials](../../credentials/redis/) for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/).

- Delete a key from Redis.
- Get the value of a key from Redis.
- Returns generic information about the Redis instance.
- Atomically increments a key by 1. Creates the key if it doesn't exist.
- Returns all the keys matching a pattern.
- Set the value of a key in Redis.
- Publish message to Redis channel.

## Templates and examples

**Build your own N8N Workflows MCP Server**

[View template details](https://n8n.io/workflows/3770-build-your-own-n8n-workflows-mcp-server/)

**Conversational Interviews with AI Agents and n8n Forms**

[View template details](https://n8n.io/workflows/2566-conversational-interviews-with-ai-agents-and-n8n-forms/)

**Advanced Telegram Bot, Ticketing System, LiveChat, User Management, Broadcasting**

[View template details](https://n8n.io/workflows/2045-advanced-telegram-bot-ticketing-system-livechat-user-management-broadcasting/)

[Browse Redis integration templates](https://n8n.io/integrations/redis/), or [search all templates](https://n8n.io/workflows/)

---

## Google Sheets Trigger node

**URL:** llms-txt#google-sheets-trigger-node

**Contents:**
- Events
- Related resources
- Common issues

[Google Sheets](https://www.google.com/sheets) is a web-based spreadsheet program that's part of Google's office software suite within its Google Drive service.

You can find authentication information for this node [here](../../credentials/google/).

Examples and templates

For usage examples and templates to help you get started, refer to n8n's [Google Sheets Trigger integrations](https://n8n.io/integrations/google-sheets-trigger/) page.

- Row added
- Row updated
- Row added or updated

Refer to [Google Sheet's API documentation](https://developers.google.com/sheets/api) for more information about the service.

n8n provides an app node for Google Sheets. You can find the node docs [here](../../app-nodes/n8n-nodes-base.googlesheets/).

View [example workflows and related content](https://n8n.io/integrations/google-sheets-trigger/) on n8n's website.

For common questions or issues and suggested solutions, refer to [Common issues](common-issues/).

---

## Node types: Trigger and Action

**URL:** llms-txt#node-types:-trigger-and-action

**Contents:**
- Trigger nodes
- Action nodes

There are two node types you can build for n8n: trigger nodes and action nodes.

Both types provide integrations with external services.

[Trigger nodes](../../../../glossary/#trigger-node-n8n) start a workflow and supply the initial data. A workflow can contain multiple trigger nodes, but on each execution only one of them runs, depending on the triggering event.

There are three types of trigger nodes in n8n:

| Type    | Description                                                                                                                                 | Example Nodes                                                                                                                                                                                                                                                                                                                                                                                                                                                                                     |
| ------- | ------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| Webhook | Nodes for services that support webhooks. These nodes listen for events and trigger workflows in real time.                                 | [Zendesk Trigger](https://github.com/n8n-io/n8n/tree/master/packages/nodes-base/nodes/Zendesk), [Telegram Trigger](https://github.com/n8n-io/n8n/tree/master/packages/nodes-base/nodes/Telegram), [Brevo Trigger](https://github.com/n8n-io/n8n/tree/master/packages/nodes-base/nodes/Brevo)                                                                                                                                                                                                      |
| Polling | Nodes for services that don't support webhooks. These nodes periodically check for new data, triggering workflows when they detect updates. | [Airtable Trigger](https://github.com/n8n-io/n8n/tree/master/packages/nodes-base/nodes/Airtable), [Gmail Trigger](https://github.com/n8n-io/n8n/tree/master/packages/nodes-base/nodes/Google/Gmail), [Google Sheet Trigger](https://github.com/n8n-io/n8n/tree/master/packages/nodes-base/nodes/Google/Sheet), [RssFeed Read Trigger](https://github.com/n8n-io/n8n/tree/master/packages/nodes-base/nodes/RssFeedRead)                                                                            |
| Others  | Nodes that handle real-time responses not related to HTTP requests or polling. This includes message queue nodes and time-based triggers.   | [AMQP Trigger](https://github.com/n8n-io/n8n/tree/master/packages/nodes-base/nodes/Amqp), [RabbitMQ Trigger](https://github.com/n8n-io/n8n/tree/master/packages/nodes-base/nodes/RabbitMQ), [MQTT Trigger](https://github.com/n8n-io/n8n/tree/master/packages/nodes-base/nodes/MQTT), [Schedule Trigger](https://github.com/n8n-io/n8n/tree/master/packages/nodes-base/nodes/Schedule), [Email Trigger (IMAP)](https://github.com/n8n-io/n8n/tree/master/packages/nodes-base/nodes/EmailReadImap) |

Action nodes perform operations as part of your workflow. These can include manipulating data, and triggering events in other systems.

---

## HaloPSA node

**URL:** llms-txt#halopsa-node

**Contents:**
- Operations
- Templates and examples

Use the HaloPSA node to automate work in HaloPSA, and integrate HaloPSA with other applications. n8n has built-in support for a wide range of HaloPSA features, including creating, updating, deleting, and getting clients, sites and tickets.

On this page, you'll find a list of operations the HaloPSA node supports and links to more resources.

Refer to [HaloPSA credentials](../../credentials/halopsa/) for guidance on setting up authentication.

- Client
  - Create a client
  - Delete a client
  - Get a client
  - Get all clients
  - Update a client
- Site
  - Create a site
  - Delete a site
  - Get a site
  - Get all sites
  - Update a site
- Ticket
  - Create a ticket
  - Delete a ticket
  - Get a ticket
  - Get all tickets
  - Update a ticket
- User
  - Create a user
  - Delete a user
  - Get a user
  - Get all users
  - Update a user

## Templates and examples

[Browse HaloPSA integration templates](https://n8n.io/integrations/halopsa/), or [search all templates](https://n8n.io/workflows/)

---

## What's an agent in AI?

**URL:** llms-txt#what's-an-agent-in-ai?

**Contents:**
- Agents in n8n

One way to think of an [agent](../../../glossary/#ai-agent) is as a [chain](../understand-chains/) that knows how to make decisions. Where a chain follows a predetermined sequence of calls to different AI components, an agent uses a language model to determine which actions to take.

Agents are the part of AI that act as decision-makers. They can interact with other agents and [tools](../../../glossary/#ai-tool). When you send a query to an agent, it tries to choose the best tools to use to answer. Agents adapt to your specific queries, as well as the prompts that configure their behavior.

n8n provides one Agent node, which can act as different types of agent depending on the settings you choose. Refer to the [Agent node documentation](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/) for details on the available agent types.

When you execute a workflow containing an agent, the agent runs multiple times. For example, it may do an initial setup, followed by a run to call a tool, then another run to evaluate the tool response and respond to the user.

---

## Microsoft OneDrive node

**URL:** llms-txt#microsoft-onedrive-node

**Contents:**
- Operations
- Templates and examples
- Related resources
- Find the folder ID

Use the Microsoft OneDrive node to automate work in Microsoft OneDrive, and integrate Microsoft OneDrive with other applications. n8n has built-in support for a wide range of Microsoft OneDrive features, including creating, updating, deleting, and getting files, and folders.

On this page, you'll find a list of operations the Microsoft OneDrive node supports and links to more resources.

Refer to [Microsoft credentials](../../credentials/microsoft/) for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/).

- File
  - Copy a file
  - Delete a file
  - Download a file
  - Get a file
  - Rename a file
  - Search a file
  - Share a file
  - Upload a file up to 4MB in size
- Folder
  - Create a folder
  - Delete a folder
  - Get Children (get items inside a folder)
  - Rename a folder
  - Search a folder
  - Share a folder

## Templates and examples

**Hacker News to Video Content**

[View template details](https://n8n.io/workflows/2557-hacker-news-to-video-content/)

**Working with Excel spreadsheet files (xls & xlsx)**

[View template details](https://n8n.io/workflows/1826-working-with-excel-spreadsheet-files-xls-and-xlsx/)

**📂 Automatically Update Stock Portfolio from OneDrive to Excel**

[View template details](https://n8n.io/workflows/2507-automatically-update-stock-portfolio-from-onedrive-to-excel/)

[Browse Microsoft OneDrive integration templates](https://n8n.io/integrations/microsoft-onedrive/), or [search all templates](https://n8n.io/workflows/)

Refer to [Microsoft's OneDrive API documentation](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/) for more information about the service.

## Find the folder ID

To perform operations on folders, you need to supply the ID. You can find this:

- In the URL of the folder
- By searching for it using the node. You need to do this if using MS 365 (where OneDrive uses SharePoint behind the scenes):
  1. Select **Resource** > **Folder**.
  1. Select **Operation** > **Search**.
  1. In **Query**, enter the folder name.
  1. Select **Execute step**. n8n runs the query and returns data about the folder, including an `id` field containing the folder ID.

---

## 2. Inserting data into Airtable

**URL:** llms-txt#2.-inserting-data-into-airtable

**Contents:**
- Configure your table
- Add an Airtable node to the HTTP Request node
- Test the Airtable node
- What's next?

In this step of the workflow, you will learn how to insert the data received from the HTTP Request node into Airtable using the [Airtable node](../../../../integrations/builtin/app-nodes/n8n-nodes-base.airtable/).

You can replace the Airtable node with another spreadsheet app/service. For example, n8n also has a node for [**Google Sheets**](../../../../integrations/builtin/app-nodes/n8n-nodes-base.googlesheets/).

After this step, your workflow should look like this:

[View workflow file](/_workflows//courses/level-one/chapter-5/chapter-5.2.json)

## Configure your table

If we're going to insert data into Airtable, we first need to set up a table there. To do this:

1. [Create an Airtable account](https://airtable.com/signup).

1. In your Airtable workspace add a new base from scratch and name it, for example, *beginner course*.

*Create an Airtable base*

1. In the beginner course base, by default, you have a table called **Table 1** with four fields: `Name`, `Notes`, `Assignee`, and `Status`. These fields aren't relevant for us since they aren't in our "orders" data set. This brings us to the next point: the names of the fields in Airtable have to match the names of the columns in the node result. Prepare the table by doing the following:

   - Rename the table from **Table 1** to **orders** to make it easier to identify.
   - Delete the 3 blank records created by default.
   - Delete the `Notes`, `Assignee`, and `Status` fields.
   - Edit the `Name` field (the primary field) to read `orderID`, with the **Number** field type.
   - Add the rest of the fields, and their field types, using the table below as a reference:

   | Field name     | Field type       |
   | -------------- | ---------------- |
   | `orderID`      | Number           |
   | `customerID`   | Number           |
   | `employeeName` | Single line text |
   | `orderPrice`   | Number           |
   | `orderStatus`  | Single line text |

Now your table should look like this:

*Orders table in Airtable*

Now that the table is ready, let's return to the workflow in the n8n Editor UI.

## Add an Airtable node to the HTTP Request node

Add an Airtable node connected to the HTTP Request node.

You can add a node connected to an existing node by selecting the **+** icon next to the existing node.

1. Search for Airtable.
1. Select **Create a record** from the **Record Actions** search results.

This will add the Airtable node to your canvas and open the node details window.

In the Airtable node window, configure the following parameters:

- **Credential to connect with**:
  - Select **Create new credential**.
  - Keep the default option **Connect using: Access Token** selected.
  - **Access token**: Follow the instructions from the [Airtable credential](../../../../integrations/builtin/credentials/airtable/) page to create your token. Use the recommended scopes and add access to your beginners course base. Save the credential and close the Credential window when you're finished.
- **Resource**: Record.
- **Operation**: Create. This operation will create new records in the table.
- **Base**: You can pick your base from a list (for example, beginner course).
- **Table**: orders.
- **Mapping Column Mode**: Map automatically. In this mode, the names of the incoming data fields must match the column names in Airtable (see the example item below).
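
Here's a hedged sketch of what an incoming item needs to look like for **Map automatically** to work; the values are made up, but the JSON keys mirror the Airtable columns you just created:

```javascript
// Hypothetical example item: the JSON keys must match the Airtable column
// names exactly for "Map automatically" to work.
return [
  {
    json: {
      orderID: 1001,              // Number
      customerID: 23,             // Number
      employeeName: 'Jane Doe',   // Single line text
      orderPrice: 129.5,          // Number
      orderStatus: 'processing'   // Single line text
    }
  }
];
```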

## Test the Airtable node

Once you've finished configuring the Airtable node, execute it by selecting **Execute step**. This might take a moment to process, but you can follow the progress by viewing the base in Airtable.

Your results should look like this:

*Airtable node results*

All 30 data records will now appear in the orders table in Airtable:

*Imported records in the orders table*

**Nathan 🙋**: Wow, this automation is already so useful! But this inserts all collected data from the HTTP Request node into Airtable. Remember that I actually need to insert only processing orders in the table and calculate the price of booked orders?

**You 👩‍🔧**: Sure, no problem. As a next step, I'll use a new node to filter the orders based on their status.

---

## RSS Feed Trigger node

**URL:** llms-txt#rss-feed-trigger-node

**Contents:**
- Node parameters
  - Every Hour mode
  - Every Day mode
  - Every Week mode
  - Every Month mode
  - Every X mode
  - Custom mode
- Templates and examples
- Related resources

The RSS Feed Trigger node allows you to start an n8n workflow when a new RSS feed item has been published.

On this page, you'll find a list of operations the RSS Feed Trigger node supports, and links to more resources.

## Node parameters

- **Poll Times**: Select a poll **Mode** to set how often to trigger the poll. Your **Mode** selection will add or remove relevant fields. Refer to the sections below to configure the parameters for each mode type.
- **Feed URL**: Enter the URL of the RSS feed to poll.

### Every Hour mode

Enter the **Minute** of the hour to trigger the poll, from `0` to `59`.

### Every Day mode

- Enter the **Hour** of the day to trigger the poll in 24-hour format, from `0` to `23`.
- Enter the **Minute** of the hour to trigger the poll, from `0` to `59`.

### Every Week mode

- Enter the **Hour** of the day to trigger the poll in 24-hour format, from `0` to `23`.
- Enter the **Minute** of the hour to trigger the poll, from `0` to `59`.
- Select the **Weekday** to trigger the poll.

### Every Month mode

- Enter the **Hour** of the day to trigger the poll in 24-hour format, from `0` to `23`.
- Enter the **Minute** of the hour to trigger the poll, from `0` to `59`.
- Enter the **Day of the Month** to trigger the poll, from `1` to `31`.

### Every X mode

- Enter the **Value** of measurement for how often to trigger the poll in either minutes or hours.
- Select the **Unit** for the value. Supported units are **Minutes** and **Hours**.

### Custom mode

Enter a custom **Cron Expression** to trigger the poll. Use these values and ranges:

- Seconds: `0` - `59`
- Minutes: `0` - `59`
- Hours: `0` - `23`
- Day of Month: `1` - `31`
- Months: `0` - `11` (Jan - Dec)
- Day of Week: `0` - `6` (Sun - Sat)

To generate a Cron expression, you can use [crontab guru](https://crontab.guru). Paste the Cron expression that you generated using crontab guru in the **Cron Expression** field in n8n.

If you want to trigger your workflow every day at 04:08:30, enter the Cron expression shown in Example 1 at the end of this section in the **Cron Expression** field.

If you want to trigger your workflow every day at 04:08, enter the expression shown in Example 2 in the **Cron Expression** field.

#### Why there are six asterisks in the Cron expression

The sixth asterisk in the Cron expression represents seconds. Setting this is optional. The node will execute even if you don't set the value for seconds.

| \*     | \*     | \*   | \*           | \*    | \*          |
| ------ | ------ | ---- | ------------ | ----- | ----------- |
| second | minute | hour | day of month | month | day of week |

## Templates and examples

**Create an RSS feed based on a website's content**

[View template details](https://n8n.io/workflows/1418-create-an-rss-feed-based-on-a-websites-content/)

**Scrape and summarize posts of a news site without RSS feed using AI and save them to a NocoDB**

[View template details](https://n8n.io/workflows/2180-scrape-and-summarize-posts-of-a-news-site-without-rss-feed-using-ai-and-save-them-to-a-nocodb/)

**Generate Youtube Video Metadata (Timestamps, Tags, Description, ...)**

[View template details](https://n8n.io/workflows/4506-generate-youtube-video-metadata-timestamps-tags-description/)

[Browse RSS Feed Trigger integration templates](https://n8n.io/integrations/rss-feed-trigger/), or [search all templates](https://n8n.io/workflows/)

n8n provides an app node for RSS Feeds. You can find the node docs [here](../n8n-nodes-base.rssfeedread/).

**Examples:**

Example 1 (unknown):
```unknown
30 8 4 * * *
```

Example 2 (unknown):

```unknown
8 4 * * *
```

Looping in n8n

URL: llms-txt#looping-in-n8n

Contents:

  • Using loops in n8n
    • Executing nodes once
  • Creating loops
    • Loop until a condition is met
    • Loop until all items are processed
  • Node exceptions

Looping is useful when you want to process multiple items or perform an action repeatedly, such as sending a message to every contact in your address book. n8n handles this repetitive processing automatically, meaning you don't need to specifically build loops into your workflows. There are some nodes where this isn't true.

Using loops in n8n

n8n nodes take any number of items as input, process these items, and output the results. You can think of each item as a single data point, or a single row in the output table of a node.

Nodes usually run once for each item. For example, if you wanted to send the name and notes of the customers in the Customer Datastore node as a message on Slack, you would:

  1. Connect the Slack node to the Customer Datastore node.
  2. Configure the parameters.
  3. Execute the node.

You would receive five messages: one for each item.

This is how you can process multiple items without having to explicitly connect nodes in a loop.

Executing nodes once

If you don't want a node to process all received items, for example when sending a Slack message only to the first customer, you can toggle the Execute Once parameter in the Settings tab of that node. This setting is helpful when the incoming data contains multiple items and you only want to process the first one.

n8n typically handles the iteration for all incoming items. However, there are certain scenarios where you will have to create a loop to iterate through all items. Refer to Node exceptions for a list of nodes that don't automatically iterate over all incoming items.

Loop until a condition is met

To create a loop in an n8n workflow, connect the output of one node to the input of a previous node. Add an IF node to check when to stop the loop.

Here is an example workflow that implements a loop with an IF node:

Loop until all items are processed

Use the Loop Over Items node when you want to loop until all items are processed. To process each item individually, set Batch Size to 1.

You can batch the data in groups and process these batches. This approach is useful for avoiding API rate limits when processing large incoming data or when you want to process a specific group of returned items.

The Loop Over Items node stops executing after all the incoming items get divided into batches and passed on to the next node in the workflow, so it's not necessary to add an IF node to stop the loop.

Node exceptions

Nodes and operations where you need to design a loop into your workflow:

  • CrateDB executes once for insert and update.
  • Code node in Run Once for All Items mode: processes all the items based on the entered code snippet.
  • Execute Workflow node in Run Once for All Items mode.
  • HTTP Request: you must handle pagination yourself. If your API call returns paginated results, you must create a loop to fetch one page at a time (see the sketch after this list).
  • Microsoft SQL executes once for insert, update, and delete.
  • MongoDB executes once for insert and update.
  • QuestDB executes once for insert.
  • Redis:
    • Info: this operation executes only once, regardless of the number of items in the incoming data.
  • RSS Read executes once for the requested URL.
  • TimescaleDB executes once for insert and update.
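
For the HTTP Request case above, a Code node inside the loop can keep track of which page to request next. This is only a hedged sketch: the page and totalPages fields are assumptions about the API's response, not part of the HTTP Request node itself.

```javascript
// Hypothetical pagination helper: compute the next page number inside a loop.
// Assumes the previous response reported a `totalPages` field.
const previous = $input.first().json;

const nextPage = (previous.page ?? 0) + 1;
const done = previous.totalPages !== undefined && nextPage > previous.totalPages;

// An IF node after this Code node can stop the loop once `done` is true.
return [{ json: { page: nextPage, done } }];
```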

Mocean credentials

URL: llms-txt#mocean-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Mocean account.

Supported authentication methods

  • API key

Refer to Mocean's API documentation for more information about the service.

Using API key

To configure this credential, you'll need:

  • An API Key
  • An API Secret

Both the key and secret are accessible in your Mocean Dashboard. Refer to API Authentication for more information.


Data tables

URL: llms-txt#data-tables

Contents:

  • Overview
  • How to use data tables
    • Step 1: Creating a data table
    • Step 2: Interacting with Data tables in workflows
  • Considerations and limitations of data tables
  • Data tables versus variables
  • Exporting and importing data

Data tables integrate data storage within your n8n environment. Using data tables, you can save, manage, and interact with data directly inside your workflows without relying on external database systems for scenarios such as:

  • Persisting data across workflows in the same project
  • Storing markers to prevent duplicate runs or control workflow triggers
  • Reusing prompts or messages across workflows
  • Storing evaluation data for AI workflows
  • Storing data generated from workflow executions
  • Combining data from different sources to enrich your datasets
  • Creating lookup tables as quick reference points within workflows

How to use data tables

There are two parts to working with data tables: creating them and interacting with them in workflows.

Step 1: Creating a data table

  1. In your n8n project, select the Data tables tab.
  2. Click the split button located in the top right corner and select Create Data table.
  3. Enter a descriptive name for your table.

In the table view that appears, you can:

  • Add and reorder columns to organize your data
  • Add, delete, and update rows
  • Edit existing data

Step 2: Interacting with Data tables in workflows

Interact with data tables in your workflow using the Data table node, which allows you to retrieve, update, and manipulate the data stored in a Data table.

See Data table node.

Considerations and limitations of data tables

  • Data tables are suitable for light to moderate data storage. By default, a data table can't contain more than 50MB of data. In self-hosted environments, you can increase this default size limit using the environment variable N8N_DATA_TABLES_MAX_SIZE_BYTES.
  • When a data table approaches 80% of your storage limit, a warning will alert you. A final warning appears when you reach the storage limit. Exceeding this limit will disable manual additions to tables and cause workflow execution errors during attempts to insert or update data.
  • By default, data tables created within a project are accessible to all team members in that project.
  • Tables created in a Personal space are only accessible by their creator.

Data tables versus variables

Feature Data tables Variables
Unified tabular view
Row-column relationships
Cross-project access
Individual value display
Optimized for short values
Structured data
Scoped to projects
Use values as expressions

Exporting and importing data

To transfer data between n8n and external tools, use workflows that:

  1. Retrieve data from a data table.
  2. Export it using an API or file export.
  3. Import data into another system or data table accordingly.

Form.io Trigger node

URL: llms-txt#form.io-trigger-node

Form.io is an enterprise class combined form and API data management platform for building complex form-based business process applications.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Form.io Trigger integrations page.


SendGrid credentials

URL: llms-txt#sendgrid-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API key

Refer to SendGrid's API documentation for more information about the service.

Using API key

To configure this credential, you'll need a SendGrid account and:

  • An API key

To create an API key:

  1. In the Twilio SendGrid app, go to Settings > API Keys.
  2. Select Create API Key.
  3. Enter a Name for your API key, like n8n integration.
  4. Select Full Access.
  5. Select Create & View.
  6. Copy the key and enter it in your n8n credential.

Refer to Create API Keys for more information.


Supabase credentials

URL: llms-txt#supabase-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using access token

You can use these credentials to authenticate the following nodes:

Create a Supabase account.

Supported authentication methods

Refer to Supabase's API documentation for more information about the service.

Using access token

To configure this credential, you'll need:

  • A Host
  • A Service Role Secret

To generate your API Key:

  1. In your Supabase account, go to the Dashboard and create or select a project for which you want to create an API key.
  2. Go to Project Settings > API to see the API Settings for your project.
  3. Copy the URL from the Project URL section and enter it as your n8n Host. Refer to API URL and keys for more detailed instruction.
  4. Reveal and copy the Project API key for the service_role. Copy that key and enter it as your n8n Service Role Secret. Refer to Understanding API Keys for more information on the service_role privileges.

Line credentials

URL: llms-txt#line-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using Notify OAuth2

Deprecated: End of service

LINE Notify is discontinuing service as of April 1st 2025 and this node will no longer work after that date. View LINE Notify's end of service announcement for more information.

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Line Notify's API documentation for more information about the service.

Using Notify OAuth2

To configure this credential, you'll need a Line account and:

  • A Client ID
  • A Client Secret

To generate both, connect Line with Line Notify. Then:

  1. Open the Line Notify page to add a new service.
  2. Enter a Service name. This name displays when someone tries to connect to the service.
  3. Enter a Service description.
  4. Enter a Service URL.
  5. Enter your Company/Enterprise.
  6. Select your Country/region.
  7. Enter your name or team name as the Representative.
  8. Enter a valid Email address. Line will verify this email address before the service is fully registered. Use an email address you have ready access to.
  9. Copy the OAuth Redirect URL from your n8n credential and enter it as the Callback URL in Line Notify.
  10. Select Agree and continue to agree to the terms of service.
  11. Verify the information you entered is correct and select Add.
  12. Check your email and open the Line Notify Registration URL to verify your email address.
  13. Once verification is complete, open My services.
  14. Select the service you just added.
  15. Copy the Client ID and enter it in your n8n credential.
  16. Select the option to Display the Client Secret. Copy the Client Secret and enter it in your n8n credential.
  17. In n8n, select Connect my account and follow the on-screen prompts to finish the credential.

Refer to the Authentication section of Line Notify's API documentation for more information.


Exporting and importing workflows

URL: llms-txt#exporting-and-importing-workflows

Contents:

  • Exporting and importing workflows

In this chapter, you will learn how to export and import workflows.

Exporting and importing workflows

You can save n8n workflows locally as JSON files. This is useful if you want to share your workflow with someone else or import a workflow from someone else.

Exported workflow JSON files include credential names and IDs. While IDs aren't sensitive, the names could be, depending on how you name your credentials. HTTP Request nodes may contain authentication headers when imported from cURL. Remove or anonymize this information from the JSON file before sharing to protect your credentials.

Import & Export workflows menu

You can export and import workflows in three ways:

  • From the Editor UI menu:
    • Export: From the top navigation bar, select the three dots in the upper right, then select Download. This will download your current workflow as a JSON file on your computer.
    • Import: From the top navigation bar, select the three dots in the upper right, then select Import from URL (to import a published workflow) or Import from File (to import a workflow as a JSON file).
  • From the Editor UI canvas:
    • Export: Select all the nodes on the canvas and use Ctrl+C to copy the workflow JSON. You can paste this into a file or share it directly with other people.
    • Import: You can paste a copied workflow JSON directly into the canvas with Ctrl+V.
  • From the command line:
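
For example, the n8n CLI provides `n8n export:workflow` and `n8n import:workflow` commands for this; refer to the CLI commands documentation for the supported flags (such as `--all`, `--id`, `--input`, and `--output`).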

Mailchimp credentials

URL: llms-txt#mailchimp-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key
  • Using OAuth2
  • Selecting an authentication method

You can use these credentials to authenticate the following nodes:

Create a Mailchimp account.

Supported authentication methods

Refer to Selecting an authentication method for guidance on which method to use.

Refer to Mailchimp's API documentation for more information about the service.

Using API key

To configure this credential, you'll need:

  • An API key

Using OAuth2

Note for n8n Cloud users

Cloud users don't need to provide connection details. Select Connect my account to connect through your browser.

If you need to configure OAuth2 from scratch, register an application. Refer to the Mailchimp OAuth2 documentation for more information.

Selecting an authentication method

Mailchimp suggests using an API key if you're only accessing your own Mailchimp account's data:

Use an API key if you're writing code that tightly couples your application's data to your Mailchimp account's data. If you ever need to access someone else's Mailchimp account's data, you should be using OAuth 2 (source)


Access a variable

URL: llms-txt#access-a-variable

`$vars.<variable-name>`


`vars` gives access to user-created variables. It's part of the [Environments](../../../../source-control-environments/) feature. `env` gives access to the [configuration environment variables](../../../../hosting/configuration/environment-variables/) for your n8n instance.
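
A minimal usage sketch (the variable name `apiBaseUrl` is hypothetical, and access to `$env` can be restricted on some instances):

```javascript
// In an expression field:
// {{ $vars.apiBaseUrl }}

// In a Code node:
const baseUrl = $vars.apiBaseUrl;   // user-created variable (hypothetical name)
const port = $env.N8N_PORT;         // instance configuration environment variable
return [{ json: { baseUrl, port } }];
```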

---

## Error Trigger node

**URL:** llms-txt#error-trigger-node

**Contents:**
- Usage
- Templates and examples
- Related resources
- Error data

You can use the Error Trigger node to create error workflows. When another linked workflow fails, this node gets details about the failed workflow and the errors, and runs the error workflow.

1. Create a new workflow, with the Error Trigger as the first node.
1. Give the workflow a name, for example `Error Handler`.
1. Select **Save**.
1. In the workflow where you want to use this error workflow:
   1. Select **Options** > **Settings**.
   1. In **Error workflow**, select the workflow you just created. For example, if you used the name Error Handler, select **Error handler**.
   1. Select **Save**. Now, when this workflow errors, the related error workflow runs.

- If a workflow uses the Error Trigger node, you don't have to activate the workflow.
- If a workflow contains the Error Trigger node, by default, the workflow uses itself as the error workflow.
- You can't test error workflows when running workflows manually. The Error Trigger only runs when an automatic workflow errors.

## Templates and examples

[Browse Error Trigger integration templates](https://n8n.io/integrations/error-trigger/), or [search all templates](https://n8n.io/workflows/)

You can use the [Stop And Error](../n8n-nodes-base.stopanderror/) node to send custom messages to the Error Trigger.

Read more about [Error workflows](../../../../flow-logic/error-handling/) in n8n workflows.

The default error data received by the Error Trigger is:

All information is always present, except:

- `execution.id`: requires the execution to be saved in the database. Not present if the error is in the trigger node of the main workflow, as the workflow doesn't execute.
- `execution.url`: requires the execution to be saved in the database. Not present if the error is in the trigger node of the main workflow, as the workflow doesn't execute.
- `execution.retryOf`: only present when the execution is a retry of a failed execution.
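
For example, a Code node placed after the Error Trigger can turn this data into a short notification message. This is a minimal sketch that assumes the default error data shape shown in Example 1 at the end of this section:

```javascript
// Hypothetical sketch: summarize the failed execution for a notification node.
const { execution, workflow } = $input.first().json;

const text =
  `Workflow "${workflow?.name}" failed on node "${execution?.lastNodeExecuted}": ` +
  `${execution?.error?.message ?? 'unknown error'} (${execution?.url ?? 'no execution URL'})`;

return [{ json: { text } }];
```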

If the error is caused by the trigger node of the main workflow, rather than a later stage, the data sent to the error workflow is different. There's less information in `execution{}` and more in `trigger{}`:

**Examples:**

Example 1 (unknown):
```unknown
[
	{
		"execution": {
			"id": "231",
			"url": "https://n8n.example.com/execution/231",
			"retryOf": "34",
			"error": {
				"message": "Example Error Message",
				"stack": "Stacktrace"
			},
			"lastNodeExecuted": "Node With Error",
			"mode": "manual"
		},
		"workflow": {
			"id": "1",
			"name": "Example Workflow"
		}
	}
]
```

Example 2 (unknown):

```unknown
{
  "trigger": {
    "error": {
      "context": {},
      "name": "WorkflowActivationError",
      "cause": {
        "message": "",
        "stack": ""
      },
      "timestamp": 1654609328787,
      "message": "",
      "node": {
        . . . 
      }
    },
    "mode": "trigger"
  },
  "workflow": {
    "id": "",
    "name": ""
  }
}
```

Flow node

URL: llms-txt#flow-node

Contents:

  • Operations
  • Templates and examples

Use the Flow node to automate work in Flow, and integrate Flow with other applications. n8n has built-in support for a wide range of Flow features, including creating, updating, and getting tasks.

On this page, you'll find a list of operations the Flow node supports and links to more resources.

Refer to Flow credentials for guidance on setting up authentication.

  • Task
    • Create a new task
    • Update a task
    • Get a task
    • Get all the tasks

Templates and examples

Automate Blog Content Creation with OpenAI, Google Sheets & Email Approval Flow

View template details

Automated PDF Invoice Processing & Approval Flow using OpenAI and Google Sheets

View template details

Scale Deal Flow with a Pitch Deck AI Vision, Chatbot and QDrant Vector Store

View template details

Browse Flow integration templates, or search all templates


Tapfiliate credentials

URL: llms-txt#tapfiliate-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Tapfiliate account.

Supported authentication methods

  • API key

Refer to Tapfiliate's API documentation for more information about the service.

Using API key

To configure this credential, you'll need:

  • An API key

Refer to Your API key for more information.


Zscaler ZIA credentials

URL: llms-txt#zscaler-zia-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth and API key combo

You can use these credentials to authenticate when using the HTTP Request node to make a Custom API call.

Create an admin account on a Zscaler Internet Access (ZIA) cloud instance.

Supported authentication methods

  • Basic auth and API key combo

Refer to Zscaler ZIA's documentation for more information about the service.

This is a credential-only node. Refer to Custom API operations to learn more. View example workflows and related content on n8n's website.

Using basic auth and API key combo

To configure this credential, you'll need:

  • A Base URL: Enter the base URL of your Zscaler ZIA cloud name. To get your base URL, log in to the ZIA Admin Portal and go to Administration > Cloud Service API Security. The base URL is displayed in both the Cloud Service API Key tab and the OAuth 2.0 Authorization Servers tab.
  • A Username: Enter your ZIA admin username.
  • A Password: Enter your ZIA admin password.
  • An Api Key: Get an API key by creating one from Administration > Cloud Service API Security > Cloud Service API Key.

Refer to About Cloud Service API Key for more detailed instructions.


Microsoft Teams Trigger node

URL: llms-txt#microsoft-teams-trigger-node

Contents:

  • Events
  • Related resources

Use the Microsoft Teams Trigger node to respond to events in Microsoft Teams and integrate Microsoft Teams with other applications.

On this page, you'll find a list of events the Microsoft Teams Trigger node can respond to and links to more resources.

You can find authentication information for this node here.

  • New Channel
  • New Channel Message
  • New Chat
  • New Chat Message
  • New Team Member

n8n provides an app node for Microsoft Teams. You can find the node docs here.

View example workflows and related content on n8n's website.

Refer to the Microsoft Teams documentation for details about their API.


Crypto

URL: llms-txt#crypto

Contents:

  • Actions
  • Node parameters
    • Generate parameters
    • Hash parameters
    • Hmac parameters
    • Sign parameters
  • Templates and examples

Use the Crypto node to encrypt data in workflows.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Node parameters depend on the action you select.

Generate parameters

  • Property Name: Enter the name of the property to write the random string to.

  • Type: Select the encoding type to use to generate the string. Choose from:

    • ASCII
    • BASE64
    • HEX
    • UUID
Hash parameters

  • Type: Select the hash type to use. Choose from:

    • MD5
    • SHA256
    • SHA3-256
    • SHA3-384
    • SHA3-512
    • SHA384
    • SHA512
  • Binary File: Turn this parameter on if the data you want to hash is from a binary file.

    • Value: If you turn off Binary File, enter the value you want to hash.
    • Binary Property Name: If you turn on Binary File, enter the name of the binary property that contains the data you want to hash.
  • Property Name: Enter the name of the property you want to write the hash to.

  • Encoding: Select the encoding type to use. Choose from:

    • BASE64
    • HEX
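
For orientation only, the Hash action is roughly comparable to the Code node sketch below. It assumes the built-in crypto module is allowed for Code node use and that the value to hash sits in a field named value; both are assumptions for illustration.

```javascript
// Rough sketch of what the Hash action does, expressed as Code node logic.
// Assumes the built-in 'crypto' module is allowed for Code nodes and that the
// incoming item has a 'value' field.
const crypto = require('crypto');

const value = String($input.first().json.value ?? '');
const hash = crypto.createHash('sha256').update(value).digest('hex');

return [{ json: { data: hash } }];
```
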
Hmac parameters

  • Binary File: Turn this parameter on if the data you want to encrypt is from a binary file.

    • Value: If you turn off Binary File, enter the value you want to encrypt.
    • Binary Property Name: If you turn on Binary File, enter the name of the binary property that contains the data you want to encrypt.
  • Type: Select the encryption type to use. Choose from:

    • MD5
    • SHA256
    • SHA3-256
    • SHA3-384
    • SHA3-512
    • SHA384
    • SHA512
  • Property Name: Enter the name of the property you want to write the hash to.

  • Secret: Enter the secret or secret key used for decoding.

  • Encoding: Select the encoding type to use. Choose from:

    • BASE64
    • HEX
Sign parameters

  • Value: Enter the value you want to sign.

  • Property Name: Enter the name of the property you want to write the signed value to.

  • Algorithm Name or ID: Choose an algorithm name from the list or specify an ID using an expression.

  • Encoding: Select the encoding type to use. Choose from:

    • BASE64
    • HEX
  • Private Key: Enter a private key to use when signing the string.

Templates and examples

Conversational Interviews with AI Agents and n8n Forms

View template details

Analyze Crypto Markets with the AI-Powered CoinMarketCap Data Analyst

View template details

Send a ChatGPT email reply and save responses to Google Sheets

View template details

Browse Crypto integration templates, or search all templates


Execute Sub-workflow Trigger node

URL: llms-txt#execute-sub-workflow-trigger-node

Contents:

  • Usage
    • Create the sub-workflow
    • Call the sub-workflow
  • Templates and examples
  • How data passes between workflows

Use this node to start a workflow in response to another workflow. It should be the first node in the workflow.

n8n allows you to call workflows from other workflows. This is useful if you want to:

  • Reuse a workflow: for example, you could have multiple workflows pulling and processing data from different sources, then have all those workflows call a single workflow that generates a report.
  • Break large workflows into smaller components.

This node runs in response to a call from the Execute Sub-workflow or Call n8n Workflow Tool nodes.

Create the sub-workflow

  1. Create a new workflow.

Create sub-workflows from existing workflows

You can optionally create a sub-workflow directly from an existing parent workflow using the Execute Sub-workflow node. In the node, select the Database and From list options and select Create a sub-workflow in the list.

You can also extract selected nodes directly using Sub-workflow conversion in the context menu.

  2. Optional: configure which workflows can call the sub-workflow:

    1. Select the Options menu > Settings. n8n opens the Workflow settings modal.
    2. Change the This workflow can be called by setting. Refer to Workflow settings for more information on configuring your workflows.

  3. Add the Execute Sub-workflow trigger node (if you are searching under trigger nodes, this is also titled When Executed by Another Workflow).

  4. Set the Input data mode to choose how you will define the sub-workflow's input data:

    • Define using fields below: Choose this mode to define individual input names and data types that the calling workflow needs to provide. The Execute Sub-workflow node or Call n8n Workflow Tool node in the calling workflow will automatically pull in the fields defined here.
    • Define using JSON example: Choose this mode to provide an example JSON object that demonstrates the expected input items and their types.
    • Accept all data: Choose this mode to accept all data unconditionally. The sub-workflow won't define any required input items. This sub-workflow must handle any input inconsistencies or missing values.

  5. Add other nodes as needed to build your sub-workflow functionality.

  6. Save the sub-workflow.

Sub-workflow mustn't contain errors

If there are errors in the sub-workflow, the parent workflow can't trigger it.

Load data into sub-workflow before building

This requires the ability to load data from previous executions, which is available on n8n Cloud and registered Community plans.

If you want to load data into your sub-workflow to use while building it:

  1. Create the sub-workflow and add the Execute Sub-workflow Trigger.
  2. Set the node's Input data mode to Accept all data or define the input items using fields or JSON if they're already known.
  3. In the sub-workflow settings, set Save successful production executions to Save.
  4. Skip ahead to setting up the parent workflow, and run it.
  5. Follow the steps to load data from previous executions.
  6. Adjust the Input data mode to match the input sent by the parent workflow if necessary.

You can now pin example data in the trigger node, enabling you to work with real data while configuring the rest of the workflow.

Call the sub-workflow

  1. Open the workflow where you want to call the sub-workflow.

  2. Add the Execute Sub-workflow node.

  3. In the Execute Sub-workflow node, set the sub-workflow you want to call. You can choose to call the workflow by ID, load a workflow from a local file, add workflow JSON as a parameter in the node, or target a workflow by URL.

Find your workflow ID

Your sub-workflow's ID is the alphanumeric string at the end of its URL.

  4. Fill in the required input items defined by the sub-workflow.

  5. Save your workflow.

When your workflow executes, it will send data to the sub-workflow, and run it.

You can follow the execution flow from the parent workflow to the sub-workflow by opening the Execute Sub-workflow node and selecting the View sub-execution link. Likewise, the sub-workflow's execution contains a link back to the parent workflow's execution to navigate in the other direction.

Templates and examples

Browse Execute Sub-workflow Trigger integration templates, or search all templates

How data passes between workflows

As an example, imagine you have an Execute Sub-workflow node in Workflow A. The Execute Sub-workflow node calls another workflow called Workflow B:

  1. The Execute Sub-workflow node passes the data to the Execute Sub-workflow Trigger node (titled "When Executed by Another Workflow" on the canvas) of Workflow B.
  2. The last node of Workflow B sends the data back to the Execute Sub-workflow node in Workflow A.
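
As a hedged sketch of step 2, a final Code node in Workflow B could shape what the Execute Sub-workflow node in Workflow A receives back (the processedBy field is illustrative):

```javascript
// Hypothetical last node of Workflow B: whatever it returns becomes the output
// of the Execute Sub-workflow node back in Workflow A.
const items = $input.all();

return items.map(item => ({
  json: {
    ...item.json,
    processedBy: 'Workflow B'   // illustrative marker added by the sub-workflow
  }
}));
```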

Overview

URL: llms-txt#overview

Contents:

  • What are evaluations?
  • Why is evaluation needed?
  • Two types of evaluation
    • Light evaluation (pre-deployment)
    • Metric-based evaluation (post-deployment)
    • Comparison of evaluation types
  • Learn more

What are evaluations?

Evaluation is a crucial technique for checking that your AI workflow is reliable. It can be the difference between a flaky proof of concept and a solid production workflow. It's important both in the building phase and after deploying to production.

The foundation of evaluation is running a test dataset through your workflow. This dataset contains multiple test cases. Each test case contains a sample input for your workflow, and often includes the expected output(s) too.

Evaluation allows you to:

  • Test your workflow over a range of inputs so you know how it performs on edge cases
  • Make changes with confidence without inadvertently making things worse elsewhere
  • Compare performance across different models or prompts

The following video explains what evaluations are, why they're useful, and how they work:

Why is evaluation needed?

AI models are fundamentally different than code. Code is deterministic and you can reason about it. This is difficult to do with LLMs, since they're black boxes. Instead, you must measure LLM output by running data through them and observing the output.

You can only build confidence that your model performs reliably after you have run it over multiple inputs that accurately reflect all the edge cases that it will have to deal with in production.

Two types of evaluation

Light evaluation (pre-deployment)

Building a clean, comprehensive dataset is hard. In the initial building phase, it often makes sense to generate just a handful of examples. These can be enough to iterate the workflow to a releasable state (or a proof of concept). You can visually compare the results to get a sense of the workflow's quality, without setting up formal metrics.

Metric-based evaluation (post-deployment)

Once you deploy your workflow, it's easier to build a bigger, more representative dataset from production executions. When you discover a bug, you can add the input that caused it to the dataset. When fixing the bug, it's important to run the whole dataset over the workflow again as a regression test to check that the fix hasn't inadvertently made something else worse.

Since there are too many test cases to check individually, evaluations measure the quality of the outputs using a metric, a numeric value representing a particular characteristic. This also allows you to track quality changes between runs.
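
For instance, a metric can be as simple as the share of test cases whose actual output matches the expected output. This is a minimal sketch, assuming fields named actualOutput and expectedOutput exist on each test case:

```javascript
// Hypothetical exact-match metric computed over all evaluated test cases.
const cases = $input.all();

const correct = cases.filter(
  item => item.json.actualOutput === item.json.expectedOutput
).length;

const score = cases.length ? correct / cases.length : 0;

return [{ json: { metric: 'exact_match', score } }];
```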

Comparison of evaluation types

|  | Light evaluation (pre-deployment) | Metric-based evaluation (post-deployment) |
| --- | --- | --- |
| Performance improvements with each iteration | Large | Small |
| Dataset size | Small | Large |
| Dataset sources | Hand-generated, AI-generated, Other | Production executions, AI-generated, Other |
| Actual outputs | Required | Required |
| Expected outputs | Optional | Required (usually) |
| Evaluation metric | Optional | Required |
Learn more

  • Light evaluations: Perfect for evaluating your AI workflows against hand-selected test cases during development.
  • Metric-based evaluations: Advanced evaluations to maintain performance and correctness in production by using scoring and metrics with large datasets.
  • Tips and common issues: Learn how to set up specific evaluation use cases and work around common issues.

Paddle credentials

URL: llms-txt#paddle-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token (Classic)

You can use these credentials to authenticate the following nodes:

Create a Paddle account.

Supported authentication methods

  • API access token (Classic)

This credential works with Paddle Classic's API. If you joined Paddle after August 2023, you're using the Paddle Billing API and this credential may not work for you.

Refer to Paddle Classic's API documentation for more information about the service.

Using API access token (Classic)

To configure this credential, you'll need:

  • A Vendor Auth Code: Created when you generate an API key.
  • A Vendor ID: Displayed when you generate an API key.
  • Use Sandbox Environment API: When turned on, nodes using this credential will hit the Sandbox API endpoint instead of the live API endpoint.

To generate an auth code and view your Vendor ID, go to Paddle > Developer Tools > Authentication > Generate Auth Code. Select Reveal Auth Code to display the Auth Code. Refer to API Authentication for more information.


Best practices for user management

URL: llms-txt#best-practices-for-user-management

Contents:

  • All platforms
  • Self-hosted

This page contains advice on best practices relating to user management in n8n.

All platforms

  • n8n recommends that owners create a member-level account for themselves. Owners can see all workflows, but there is no way to see who created a particular workflow, so there is a risk of overwriting other people's work if you build and edit workflows as an owner.
  • Users must be careful not to edit the same workflow simultaneously. It's possible to do it, but the users will overwrite each other's changes.
  • To move workflows between accounts, export the workflow as JSON, then import it to the new account. Note that this action loses the workflow history.
  • Webhook paths must be unique across the entire instance. This means each webhook path must be unique for all workflows and all users. By default, n8n generates a long random value for the webhook path, but users can edit this to their own custom path. If two users set the same path value:
    • The path works for the first workflow that's run or activated.
    • Other workflows will error if they try to run with the same path.

Self-hosted

If you run n8n behind a reverse proxy, set the following environment variables so that n8n generates emails with the correct URL:

  • N8N_HOST
  • N8N_PORT
  • N8N_PROTOCOL
  • N8N_EDITOR_BASE_URL

More information on these variables is available in Environment variables.


Brevo node

URL: llms-txt#brevo-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Brevo node to automate work in Brevo, and integrate Brevo with other applications. n8n has built-in support for a wide range of Brevo features, including creating, updating, deleting, and getting contacts and contact attributes, as well as sending emails.

On this page, you'll find a list of operations the Brevo node supports and links to more resources.

Refer to Brevo credentials for guidance on setting up authentication.

  • Contact
    • Create
    • Create or Update
    • Delete
    • Get
    • Get All
    • Update
  • Contact Attribute
    • Create
    • Delete
    • Get All
    • Update
  • Email
    • Send
    • Send Template
  • Sender
    • Create
    • Delete
    • Get All

Templates and examples

Smart Email Auto-Responder Template using AI

View template details

Automate Lead Generation with Apollo, AI Scoring and Brevo Email Outreach

View template details

Create Leads in SuiteCRM, synchronize with Brevo and notify in NextCloud

View template details

Browse Brevo integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Have a human fallback for AI workflows

URL: llms-txt#have-a-human-fallback-for-ai-workflows

Contents:

  • Key features
  • Using the example

This is a workflow that tries to answer user queries using the standard GPT-4 model. If it can't answer, it sends a message to Slack to ask for human help. It prompts the user to supply an email address.

This workflow uses the Chat Trigger to provide the chat interface, and the Call n8n Workflow Tool to call a second workflow that handles checking for email addresses and sending the Slack message.

View workflow file

  • Chat Trigger: start your workflow and respond to user chat interactions. The node provides a customizable chat interface.
  • Agent: the key piece of the AI workflow. The Agent interacts with other components of the workflow and makes decisions about what tools to use.
  • Call n8n Workflow Tool: plug in n8n workflows as custom tools. In AI, a tool is an interface the AI can use to interact with the world (in this case, the data provided by your workflow). It allows the AI model to access information beyond its built-in dataset.

To load the template into your n8n instance:

  1. Download the workflow JSON file.
  2. Open a new workflow in your n8n instance.
  3. Copy in the JSON, or select Workflow menu > Import from file....

The example workflows use Sticky Notes to guide you:

  • Yellow: notes and information.
  • Green: instructions to run the workflow.
  • Orange: you need to change something to make the workflow work.
  • Blue: draws attention to a key feature of the example.

Task runners

URL: llms-txt#task-runners

Contents:

  • How it works
  • Task runner modes
    • Internal mode
    • External mode
  • Setting up external mode
    • Configuring n8n container in external mode
    • Configuring runners container in external mode
    • Configuring launcher in runners container in external mode
  • Adding extra dependencies
      1. JavaScript packages

Task runners are a generic mechanism to execute tasks in a secure and performant way. They're used to execute user-provided JavaScript and Python code in the Code node.

Task runner support for native Python and the n8nio/runners image are in beta. Until this feature is stable, you must use the N8N_NATIVE_PYTHON_RUNNER=true environment variable to enable the Python runner.

This document describes how task runners work and how you can configure them.

How it works

The task runner feature consists of these components: one or more task runners, a task broker, and a task requester.

Task runners connect to the task broker using a websocket connection. A task requester submits a task request to the broker where an available task runner can pick it up for execution.

The runner executes the task and submits the results to the task requester. The task broker coordinates communication between the runner and the requester.

The n8n instance (main and worker) acts as the broker. The Code node in this case is the task requester.

Task runner modes

You can use task runners in two different modes: internal and external.

Internal mode

In internal mode, the n8n instance launches the task runner as a child process. The n8n process monitors and manages the life cycle of the task runner. The task runner process shares the same uid and gid as n8n. This is not recommended for production.

External mode

In external mode, a launcher application launches task runners on demand and manages their lifecycle. Typically, this means that next to n8n you add a sidecar container running the n8nio/runners image containing the launcher, the JS task runner and the Python task runner. This sidecar container is independent from the n8n instance.

When using Queue mode, each worker needs to have its own sidecar container for task runners.

In addition, if you haven't enabled offloading manual executions to workers (if you aren't setting OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=true in your configuration), then your main instance will run manual executions and needs its own sidecar container for task runners as well. Please note that running n8n with offloading disabled isn't recommended for production.

Setting up external mode

In external mode, you run the n8nio/runners image as a sidecar container next to n8n. Below you will find a docker compose as a reference. Keep in mind that the n8nio/runners image version must match that of the n8nio/n8n image, and the n8n version must be >=1.111.0.

Configuring n8n container in external mode

These are the main environment variables that you can set on the n8n container running in external mode:

| Environment variables | Description |
| --- | --- |
| `N8N_RUNNERS_ENABLED=true` | Enables task runners. |
| `N8N_RUNNERS_MODE=external` | Use task runners in external mode. |
| `N8N_RUNNERS_AUTH_TOKEN=<random secure shared secret>` | A shared secret task runners use to connect to the broker. |
| `N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0` | By default, the task broker only listens to localhost. When using multiple containers (for example, with Docker Compose), it needs to be able to accept external connections. |

For full list of environment variables see task runner environment variables.

Configuring runners container in external mode

These are the main environment variables that you can set on the runners container running in external mode:

| Environment variables | Description |
| --- | --- |
| `N8N_RUNNERS_AUTH_TOKEN=<random secure shared secret>` | The shared secret the task runner uses to connect to the broker. |
| `N8N_RUNNERS_TASK_BROKER_URI=localhost:5679` | The address of the task broker server within the n8n instance. |
| `N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT=15` | Number of seconds of inactivity to wait before shutting down the task runner process. The launcher will automatically start the runner again when there are new tasks to execute. Set to 0 to disable automatic shutdown. |

For full list of environment variables see task runner environment variables.

Configuring launcher in runners container in external mode

The launcher reads environment variables from the runners container environment and passes them along to each runner as defined in the default launcher configuration file, located in the container at /etc/n8n-task-runners.json. The default launcher configuration file is locked down, but you will likely want to edit it, for example, to allowlist first- or third-party modules. To customize the launcher configuration file, mount your own file to this path:

For further information about the launcher config file, see here.

Adding extra dependencies

You can customize the n8nio/runners image. To do so, you will find the runners Dockerfile at this directory in the n8n repository. The manifests referred to below are also found in this directory.

To make additional packages available on the Code node, you can bake extra packages into your custom runners image at build time:

  • JavaScript: edit docker/images/runners/package.json (package.json manifest used to install runtime-only deps into the JS runner)
  • Python (Native): edit docker/images/runners/extras.txt (requirements.txt-style list installed into the Python runner venv)

Important: for security, any external libraries must be explicitly allowed for Code node use. Update n8n-task-runners.json to allowlist what you add.

1) JavaScript packages

Edit the runtime extras manifest docker/images/runners/package.json:

Add any packages you want under "dependencies" (pin them for reproducibility), e.g.:

2) Python packages

Edit the requirements file docker/images/runners/extras.txt:

Examples:

Example 1 (unknown):

services:
  n8n:
    image: n8nio/n8n:1.111.0
    container_name: n8n-main
    environment:
      - N8N_RUNNERS_ENABLED=true
      - N8N_RUNNERS_MODE=external
      - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0
      - N8N_RUNNERS_AUTH_TOKEN=your-secret-here
      - N8N_NATIVE_PYTHON_RUNNER=true
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n
    # etc.

  task-runners:
    image: n8nio/runners:1.111.0
    container_name: n8n-runners
    environment:
      - N8N_RUNNERS_TASK_BROKER_URI=http://n8n-main:5679
      - N8N_RUNNERS_AUTH_TOKEN=your-secret-here
      # etc.
    depends_on:
      - n8n

volumes:
  n8n_data:

Example 2 (unknown):

path/to/n8n-task-runners.json:/etc/n8n-task-runners.json

Example 3 (unknown):

{
  "name": "task-runner-runtime-extras",
  "description": "Runtime-only deps for the JS task-runner image, installed at image build.",
  "private": true,
  "dependencies": {
    "moment": "2.30.1"
  }
}

Example 4 (unknown):

"dependencies": {
  "moment": "2.30.1",
  "uuid": "9.0.0"
}

Google credentials

URL: llms-txt#google-credentials

Contents:

  • OAuth2 and Service Account
  • Compatible nodes

This section contains:

OAuth2 and Service Account

There are two authentication methods available for Google services nodes:

  • OAuth2
  • Service Account

Note for n8n Cloud users

For the following nodes, you can authenticate by selecting Sign in with Google in the OAuth section:

Once configured, you can use your credentials to authenticate the following nodes. Most nodes are compatible with OAuth2 authentication. Support for Service Account authentication is limited.

Node OAuth Service Account
Google Ads
Gmail
Google Analytics
Google BigQuery
Google Books
Google Calendar
Google Chat
Google Cloud Storage
Google Contacts
Google Cloud Firestore
Google Cloud Natural Language
Google Cloud Realtime Database
Google Docs
Google Drive
Google Drive Trigger
Google Perspective
Google Sheets
Google Slides
Google Tasks
Google Translate
Google Workspace Admin
YouTube

Gmail and Service Accounts

Google technically supports Service Accounts for use with Gmail, but it requires enabling domain-wide delegation, which Google discourages, and its behavior can be inconsistent.

n8n recommends using OAuth2 with the Gmail node.


Copper node

URL: llms-txt#copper-node

Contents:

  • Operations
  • Templates and examples

Use the Copper node to automate work in Copper, and integrate Copper with other applications. n8n has built-in support for a wide range of Copper features, including getting, updating, deleting, and creating companies, customer sources, leads, projects and tasks.

On this page, you'll find a list of operations the Copper node supports and links to more resources.

Refer to Copper credentials for guidance on setting up authentication.

  • Company
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • Customer Source
    • Get All
  • Lead
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • Opportunity
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • Person
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • Project
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • Task
    • Create
    • Delete
    • Get
    • Get All
    • Update
  • User
    • Get All

Templates and examples

Create, update, and get a person from Copper

View template details

Receive updates on a new project created in Copper

View template details

Let AI Agents Run Your CRM with Copper Tool MCP Server 💪 all 32 operations

View template details

Browse Copper integration templates, or search all templates


Acuity Scheduling Trigger node

URL: llms-txt#acuity-scheduling-trigger-node

Contents:

  • Events

Acuity Scheduling is a cloud-based appointment scheduling software solution that enables business owners to manage their appointments online. It has the capability to automatically sync calendars according to users' time zones and can send regular alerts and reminders to users regarding their appointment schedules.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Acuity Scheduling Trigger integrations page.

  • Appointment canceled
  • Appointment changed
  • Appointment rescheduled
  • Appointment scheduled
  • Order completed

Google Tasks node

URL: llms-txt#google-tasks-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the Google Tasks node to automate work in Google Tasks, and integrate Google Tasks with other applications. n8n has built-in support for a wide range of Google Tasks features, including adding, updating, and retrieving tasks.

On this page, you'll find a list of operations the Google Tasks node supports and links to more resources.

Refer to Google Tasks credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Task
    • Add a task to task list
    • Delete a task
    • Retrieve a task
    • Retrieve all tasks from a task list
    • Update a task

Templates and examples

Automate Image Validation Tasks using AI Vision

View template details

Sync Google Calendar tasks to Trello every day

View template details

Add a task to Google Tasks

View template details

Browse Google Tasks integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Google Sheets node common issues

URL: llms-txt#google-sheets-node-common-issues

Contents:

  • Append an array
  • Column names were updated after the node's setup

Here are some common errors and issues with the Google Sheets node and steps to resolve or troubleshoot them.

Append an array

To insert an array of data into Google Sheets, you must convert the array into a valid JSON (key, value) format.

To do so, consider using:

  1. The Split Out node.

  2. The AI Transform node. For example, try entering something like:

  3. The Code node.
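
For the Code node option, here's a hedged sketch that flattens an array field into separate items; the languages field name is illustrative, echoing the AI Transform prompt in Example 1 below:

```javascript
// Hypothetical sketch: turn each entry of a 'languages' array into its own
// item so Google Sheets receives flat (key, value) JSON instead of an array.
const results = [];

for (const item of $input.all()) {
  for (const language of item.json.languages ?? []) {
    results.push({ json: { language } });
  }
}

return results;
```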

Column names were updated after the node's setup

You'll receive this error if the Google Sheet's column names have changed since you set up the node.

To refresh the column names, re-select Mapping Column Mode. This should prompt the node to fetch the column names again.

Once the column names refresh, update the node parameters.

Examples:

Example 1 (unknown):

Convert 'languages' array to JSON (key, value) pairs.

PostHog node

URL: llms-txt#posthog-node

Contents:

  • Operations
  • Templates and examples

Use the PostHog node to automate work in PostHog, and integrate PostHog with other applications. n8n has built-in support for a wide range of PostHog features, including creating aliases, events, and identity, as well as tracking pages.

On this page, you'll find a list of operations the PostHog node supports and links to more resources.

Refer to PostHog credentials for guidance on setting up authentication.

  • Alias
    • Create an alias
  • Event
    • Create an event
  • Identity
    • Create
  • Track
    • Track a page
    • Track a screen

Templates and examples

Browse PostHog integration templates, or search all templates


Stop And Error

URL: llms-txt#stop-and-error

Contents:

  • Operations
  • Node parameters
    • Error Message parameters
    • Error Object parameters
  • Templates and examples
  • Related resources

Use the Stop And Error node to display custom error messages, cause executions to fail under certain conditions, and send custom error information to error workflows.

  • Error Message
  • Error Object

The node includes one parameter, Error Type. Use this parameter to select the type of error to throw: Error Message or Error Object.

The remaining parameters depend on which error type you select.

Error Message parameters

The Error Message Error Type adds one parameter, the Error Message field. Enter the message you'd like to throw.

Error Object parameters

The Error Object Error Type adds one parameter, the Error Object. Enter a JSON object that contains the error properties you'd like to throw.
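
For example, you could enter an object like the one below. This is only an illustrative sketch: the property names (code, message, description) and their values are placeholders for whatever your error workflow expects.

{
  "code": 404,
  "message": "Customer record not found",
  "description": "The CRM lookup returned no results for the supplied email address."
}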

Templates and examples

Generate Leads with Google Maps

View template details

Host Your Own AI Deep Research Agent with n8n, Apify and OpenAI o3

View template details

Telegram chat with PDF

by felipe biava cataneo

View template details

Browse Stop And Error integration templates, or search all templates

You can use the Stop And Error node with the Error trigger node.

Read more about Error workflows in n8n workflows.


Plivo credentials

URL: llms-txt#plivo-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using basic auth

You can use these credentials to authenticate the following nodes:

Create a Plivo account.

Supported authentication methods

Refer to Plivo's API documentation for more information about the service.

To configure this credential, you'll need:

  • An Auth ID: Acts like your username. Copy yours from the Overview page of the Plivo console.
  • An Auth Token: Acts like a password. Copy yours from the Overview page of the Plivo console.

Refer to How can I change my Auth ID or Auth Token? for more detailed instructions.


AWS Transcribe node

URL: llms-txt#aws-transcribe-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the AWS Transcribe node to automate work in AWS Transcribe, and integrate AWS Transcribe with other applications. n8n has built-in support for a wide range of AWS Transcribe features, including creating, deleting, and getting transcription jobs.

On this page, you'll find a list of operations the AWS Transcribe node supports and links to more resources.

Refer to AWS Transcribe credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

Transcription Job

  • Create a transcription job
  • Delete a transcription job
  • Get a transcription job
  • Get all transcription jobs

Templates and examples

Transcribe audio files from Cloud Storage

View template details

Create transcription jobs using AWS Transcribe

View template details

🛠️ AWS Transcribe Tool MCP Server 💪 all operations

View template details

Browse AWS Transcribe integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Facebook Trigger WhatsApp Business Account object

URL: llms-txt#facebook-trigger-whatsapp-business-account-object

Contents:

  • Prerequisites
  • Trigger configuration

Use this object to receive updates when your WhatsApp Business Account (WABA) changes. Refer to Facebook Trigger for more information on the trigger itself.

n8n recommends using the WhatsApp Trigger node with the WhatsApp credentials instead of the Facebook Trigger node. That trigger node offers twice as many events to subscribe to.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Facebook Trigger integrations page.

This Object requires some configuration in your app and WhatsApp account before you can use the trigger:

  1. Subscribe your app under your WhatsApp business account. You must subscribe an app owned by your business. Apps shared with your business can't receive webhook notifications.
  2. If you are working as a Solution Partner, make sure your app has completed App Review and requested the whatsapp_business_management permission.

Trigger configuration

To configure the trigger with this Object:

  1. Select the Credential to connect with. Select an existing or create a new Facebook App credential.
  2. Enter the APP ID of the app connected to your credential. Refer to the Facebook App credential documentation for more information.
  3. Select WhatsApp Business Account as the Object.
  4. Field Names or IDs: By default, the node will trigger on all the available events using the * wildcard filter. If you'd like to limit the events, use the X to remove the star and use the dropdown or an expression to select the updates you're interested in. Options include:
    • Message Template Status Update
    • Phone Number Name Update
    • Phone Number Quality Update
    • Account Review Update
    • Account Update
  5. In Options, turn on the toggle to Include Values. This Object type fails without the option enabled.

Refer to Webhooks for WhatsApp Business Accounts and Meta's WhatsApp Business Account Graph API reference for more information.


Run your node locally

URL: llms-txt#run-your-node-locally

Contents:

  • Troubleshooting

You can test your node as you build it by running it in a local n8n instance.

  1. Install n8n using npm:

  2. When you are ready to test your node, publish it locally:

  3. Install the node into your local n8n instance:

Make sure you run npm link <node-name> in the nodes directory within your n8n installation. This can be:

  • ~/.n8n/custom/
  • ~/.n8n/<your-custom-name>: if your n8n installation set a different name using N8N_CUSTOM_EXTENSIONS.

  4. Start n8n:

  5. Open n8n in your browser. You should see your nodes when you search for them in the nodes panel.

Make sure you search using the node name, not the package name. For example, if your npm package name is n8n-nodes-weather-nodes, and the package contains nodes named rain, sun, snow, you should search for rain, not weather-nodes.

If there's no custom directory in your local ~/.n8n installation, you have to create the custom directory manually and run npm init inside it.

Examples:

Example 1 (unknown):

npm install n8n -g

Example 2 (unknown):

# In your node directory
   npm run build
   npm link

Example 3 (unknown):

# In the nodes directory within your n8n installation
   # node-package-name is the name from the package.json
   npm link <node-package-name>

Example 4 (unknown):

n8n start

Remove Duplicates node

URL: llms-txt#remove-duplicates-node

Contents:

  • Operation modes
    • Remove Items Repeated Within Current Input
    • Remove Items Processed in Previous Executions
    • Clear Deduplication History
  • Templates and examples
  • Related resources

Use the Remove Duplicates node to identify and delete items that are:

  • identical across all fields or a subset of fields in a single execution
  • identical to or surpassed by items seen in previous executions

This is helpful in situations where you can end up with duplicate data, such as a user creating multiple accounts, or a customer submitting the same order multiple times. When working with large datasets it becomes more difficult to spot and remove these items.

By comparing against data from previous executions, the Remove Duplicates node can delete items seen in earlier executions. It can also ensure that new items have a later date or a higher value than previous values.

Major changes in 1.64.0

The n8n team overhauled this node in n8n 1.64.0. This document reflects the latest version of the node. If you're using an older version of n8n, you can find the previous version of this document here.

The Remove Duplicates node works differently depending on the value of the Operation parameter:

Remove Items Repeated Within Current Input

When you set the "Operations" field to Remove Items Repeated Within Current Input, the Remove Duplicate node identifies and removes duplicate items in the current input. It can do this across all fields, or within a subset of fields.

Remove Items Repeated Within Current Input parameters

When using the Remove Items Repeated Within Current Input operation, the following parameter is available:

  • Compare: Select which fields of the input data n8n should compare to check if they're the same. The following options are available:
    • All Fields: Compares all fields of the input data.
    • All Fields Except: Enter which input data fields n8n should exclude from the comparison. You can provide multiple values separated by commas.
    • Selected Fields: Enter which input data fields n8n should include in the comparison. You can provide multiple values separated by commas.

Remove Items Repeated Within Current Input options

If you choose All Fields Except or Selected Fields as your compare type, you can add these options:

  • Disable Dot Notation: Set whether to use dot notation to reference child fields in the format parent.child (turned off) or not (turned on).
  • Remove Other Fields: Set whether to remove any fields that aren't used in the comparison (turned on) or not (turned off).

Remove Items Processed in Previous Executions

When you set the "Operation" field to Remove Items Processed in Previous Executions, the Remove Duplicate node compares items in the current input to items from previous executions.

Remove Items Processed in Previous Executions parameters

When using the Remove Items Processed in Previous Executions operation, the following parameters are available:

  • Keep Items Where: Select how n8n decides which items to keep. The following options are available:
    • Value Is New: n8n removes items if their value matches items from earlier executions.
    • Value Is Higher than Any Previous Value: n8n removes items if the current value isn't higher than previous values.
    • Value Is a Date Later than Any Previous Date: n8n removes date items if the current date isn't later than previous dates.
  • Value to Dedupe On: The input field or fields to compare (see the example after this list). The option you select for the Keep Items Where parameter determines the exact format you need:
    • When using Value Is New, this must be an input field or combination of fields with a unique ID.
    • When using Value Is Higher than Any Previous Value, this must be an input field or combination of fields that has an incremental value.
    • When using Value Is a Date Later than Any Previous Date, this must be an input field that has a date value in ISO format.
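
For example, when keeping items where the value is a date later than any previous date, you could point Value to Dedupe On at a timestamp field with an expression like the one below. This sketch assumes your items carry a hypothetical updatedAt field holding an ISO 8601 timestamp such as 2025-01-15T09:30:00Z:

{{ $json.updatedAt }}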

Remove Items Processed in Previous Executions options

When using the Remove Items Processed in Previous Executions operation, the following option is available:

  • Scope: Sets how n8n stores and uses the deduplication data for comparisons. The following options are available:
    • Node: (default) Stores the data for this node independently from other Remove Duplicates instances in the workflow. When you use this scope, you can clear the duplication history for this node instance without affecting other nodes.
    • Workflow: Stores the duplication data at the workflow level. This shares duplication data with any other Remove Duplicate nodes set to use "workflow" scope. n8n will still manage the duplication data for other Remove Duplicate nodes set to "node" scope independently.

When you select Value Is New as your Keep Items Where choice, this option is also available:

  • History Size: The number of items for n8n to store to track duplicates across executions. The value of the Scope option determines whether this history size is specific to this individual Remove Duplicate node instance or shared with other instances in the workflow. By default, n8n stores 10,000 items.

Clear Deduplication History

When you set the "Operation" field to Clear Deduplication History, the Remove Duplicates node manages and clears the stored items from previous executions. This operation doesn't affect any items in the current input. Instead, it manages the database of items that the "Remove Items Processed in Previous Executions" operation uses.

Clear Deduplication History parameters

When using the Clear Deduplication History operation, the following parameter is available:

  • Mode: How you want to manage the key / value items stored in the database. The following option is available:
    • Clean Database: Deletes all duplication data stored in the database. This resets the duplication database to its original state.

Clear Deduplication History options

When using the Clear Deduplication History operation, the following option is available:

  • Scope: Sets the scope n8n uses when managing the duplication database.
    • Node: (default) Manages the duplication database specific to this Remove Duplicates node instance.
    • Workflow: Manages the duplication database shared by all Remove Duplicate node instances that use workflow scope.

Templates and examples

For templates using the Remove Duplicates node and examples of how to use it, refer to Templates and examples.

Learn more about data structure and data flow in n8n workflows.


Microsoft Teams node

URL: llms-txt#microsoft-teams-node

Contents:

  • Operations
  • Waiting for a response
    • Response Type
    • Approval response customization
    • Free Text response customization
    • Custom Form response customization
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Microsoft Teams node to automate work in Microsoft Teams, and integrate Microsoft Teams with other applications. n8n has built-in support for a wide range of Microsoft Teams features, including creating and deleting channels, messages, and tasks.

On this page, you'll find a list of operations the Microsoft Teams node supports and links to more resources.

Refer to Microsoft credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Channel
    • Create
    • Delete
    • Get
    • Get Many
    • Update
  • Channel Message
    • Create
    • Get Many
  • Chat Message
    • Create
    • Get
    • Get Many
    • Send and Wait for Response
  • Task
    • Create
    • Delete
    • Get
    • Get Many
    • Update

Waiting for a response

By choosing the Send and Wait for Response operation, you can send a message and pause the workflow execution until a person confirms the action or provides more information.

You can choose between the following types of waiting and approval actions:

  • Approval: Users can approve or disapprove from within the message.
  • Free Text: Users can submit a response with a form.
  • Custom Form: Users can submit a response with a custom form.

You can customize the waiting and response behavior depending on which response type you choose. You can configure these options in any of the above response types:

  • Limit Wait Time: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time.
  • Append n8n Attribution: Whether to mention in the message that it was sent automatically with n8n (turned on) or not (turned off).

Approval response customization

When using the Approval response type, you can choose whether to present only an approval button or both approval and disapproval buttons.

You can also customize the button labels for the buttons you include.

Free Text response customization

When using the Free Text response type, you can customize the message button label, the form title and description, and the response button label.

Custom Form response customization

When using the Custom Form response type, you build a form using the fields and options you want.

You can customize each form element with the settings outlined in the n8n Form trigger's form elements. To add more fields, select the Add Form Element button.

You'll also be able to customize the message button label, the form title and description, and the response button label.

Templates and examples

Create, update and send a message to a channel in Microsoft Teams

View template details

Meraki Packet Loss and Latency Alerts to Microsoft Teams

View template details

Create Teams Notifications for new Tickets in ConnectWise with Redis

View template details

Browse Microsoft Teams integration templates, or search all templates

Refer to Microsoft Teams' API documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Workflow history

URL: llms-txt#workflow-history

Contents:

  • Understand workflow history
  • View workflow history
  • Restore or copy previous versions

Full workflow history is available on Enterprise Cloud and Enterprise Self-hosted. Versions from the last five days are available for Cloud Pro users. Versions from the last 24 hours are available for registered Community users.

Use workflow history to view and restore previous versions of your workflows.

Understand workflow history

n8n creates a new version when you:

  • Save your workflow.
  • Restore an old version. n8n saves the latest version before restoring.
  • Pull from a Git repository using Source control. Note that n8n saves versions to the instance database, not to Git.

Workflow history and execution history

Don't confuse workflow history with the Workflow-level executions list.

Executions are workflow runs. With the executions list, you can see previous runs of the current version of the workflow. You can copy previous executions into the editor to Debug and re-run past executions in your current workflow.

Workflow history is previous versions of the workflow: for example, a version with a different node, or different parameters set.

View workflow history

To view a workflow's history:

  1. Open the workflow.
  2. Select Workflow history. n8n opens a menu showing the saved workflow versions, and a canvas with a preview of the selected version.

Restore or copy previous versions

You can restore a previous workflow version, or make a copy of it:

  1. On the version you want to restore or copy, select Options.
  2. Choose what you want to do:
    • Restore this version: replace your current workflow with the selected version.
    • Clone to new workflow: create a new workflow based on the selected version.
    • Open version in new tab: open a second tab displaying the selected version. Use this to compare versions.
    • Download: download the version as JSON.

n8n v1.0 migration guide

URL: llms-txt#n8n-v1.0-migration-guide

Contents:

  • New features
    • Python support in the Code node
    • Execution order
  • Deprecations
    • MySQL and MariaDB
    • EXECUTIONS_PROCESS and "own" mode
  • Breaking changes
    • Docker
    • Workflow failures due to expression errors
    • Mandatory owner account

This document provides a summary of what you should be aware of before updating to version 1.0 of n8n.

The release of n8n 1.0 marks a milestone in n8n's journey to make n8n available for demanding production environments. Version 1.0 represents the hard work invested over the last four years to make n8n the most accessible, powerful, and versatile automation tool. n8n 1.0 is now ready for use in production.

Python support in the Code node

Although JavaScript remains the default language, you can now also select Python as an option in the Code node and even make use of many Python modules. Note that Python is unavailable in Code nodes added to a workflow before v1.0.

PR #4295, PR #6209

n8n 1.0 introduces a new execution order for multi-branch workflows:

In multi-branch workflows, n8n needs to determine the order in which to execute nodes on branches. Previously, n8n executed the first node of each branch, then the second of each branch, and so on (breadth-first). The new execution order ensures that each branch executes completely before starting the next one (depth-first). Branches execute based on their position on the canvas, from top to bottom. If two branches are at the same height, the leftmost one executes first.

n8n used to execute multi-input nodes as long as they received data on their first input. Nodes connected to the second input of multi-input nodes automatically executed regardless of whether they received data. The new execution order introduced in n8n 1.0 simplifies this behavior: Nodes are now executed only when they receive data, and multi-input nodes require data on at least one of their inputs to execute.

Your existing workflows will use the legacy order, while new workflows will execute using the v1 order. You can configure the execution order for each workflow in workflow settings.

PR #4238, PR #6246, PR #6507

MySQL and MariaDB

n8n has deprecated support for MySQL and MariaDB as storage backends for n8n. These database systems are used by only a few users, yet they require continuous development and maintenance efforts. n8n recommends migrating to PostgreSQL for better compatibility and long-term support.

PR #6189

EXECUTIONS_PROCESS and "own" mode

Previously, you could use the EXECUTIONS_PROCESS environment variable to specify whether executions should run in the main process or in their own processes. This option and own mode are now deprecated and will be removed in a future version of n8n. This is because it led to increased code complexity while offering marginal benefits. Starting from n8n 1.0, main will be the new default.

Note that executions start much faster in main mode than in own mode. However, if a workflow consumes more memory than is available, it might crash the entire n8n application instead of just the worker thread. To mitigate this, make sure to allocate enough system resources or configure queue mode to distribute executions among multiple workers.

PR #6196

Permissions change

When using Docker-based deployments, the n8n process is now run by the user node instead of root. This change increases security.

If permission errors appear in your n8n container logs when starting n8n, you may need to update the permissions by executing the following command on the Docker host:

We've removed the Debian and RHEL images. If you were using these, you need to change the image you use. This shouldn't result in any errors unless you were making a custom image based on one of those images.

Entrypoint change

The entrypoint for the container has changed and you no longer need to specify the n8n command. For example, if you were previously running n8n worker --concurrency=5, you now run worker --concurrency=5.

PR #6365

Workflow failures due to expression errors

Workflow executions may fail due to syntax or runtime errors in expressions, such as those that reference non-existent nodes. While expressions already throw errors on the frontend, this change ensures that n8n also throws errors on the backend, where they were previously silently ignored. To receive notifications of failing workflows, n8n recommends setting up an "error workflow" under workflow settings.
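
For example, an expression like the one below fails at runtime if the referenced node has been renamed or removed ("Fetch Orders" is a hypothetical node name); before this change, that failure could pass unnoticed on the backend:

{{ $('Fetch Orders').item.json.total }}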

PR #6352

Mandatory owner account

This change makes User Management mandatory and removes support for other authentication methods, such as BasicAuth and External JWT. Note that the number of permitted users on n8n.cloud or custom plans still varies depending on your subscription.

PR #6362

Directory for installing custom nodes

n8n will no longer load custom nodes from its global node_modules directory. Instead, you must install (or link) them to ~/.n8n/custom (or a directory defined by N8N_CUSTOM_EXTENSIONS). Custom nodes that are npm packages will be located in ~/.n8n/nodes. If you have custom nodes that were linked using npm link into the global node_modules directory, you need to link them again, into ~/.n8n/nodes instead.

PR #6396

The N8N_PUSH_BACKEND environment variable can be used to configure one of two available methods for pushing updates to the user interface: sse and websocket. Starting with n8n 1.0, websocket is the default method.

PR #6196

Date transformation functions

n8n provides various transformation functions that operate on dates. These functions may return either a JavaScript Date or a Luxon DateTime object. With the new behavior, the return type always matches the input. If you call a date transformation function on a Date, it returns a Date. Similarly, if you call it on a DateTime object, it returns a DateTime object.

To identify any workflows and nodes that might be impacted by this change, you can use this utility workflow.

For more information about date transformation functions, please refer to the official documentation.

PR #6435

Execution data retention

Starting from n8n 1.0, all successful, failed, and manual workflow executions will be saved by default. These settings can be modified for each workflow under "Workflow Settings," or globally using the respective environment variables. Additionally, the EXECUTIONS_DATA_PRUNE setting will be enabled by default, with EXECUTIONS_DATA_PRUNE_MAX_COUNT set to 10,000. These default settings are designed to prevent performance degradation when using SQLite. Make sure to configure them according to your individual requirements and system capacity.

PR #6577

Removed N8N_USE_DEPRECATED_REQUEST_LIB

The legacy request library has been deprecated for some time now. As of n8n 1.0, the ability to fall back to it in the HTTP Request node by setting the N8N_USE_DEPRECATED_REQUEST_LIB environment variable has been fully removed. The HTTP Request node will now always use the new HttpRequest interface.

If you build custom nodes, refer to HTTP request helpers for more information on migrating to the new interface.

PR #6413

Removed WEBHOOK_TUNNEL_URL

As of version 0.227.0, n8n has renamed the WEBHOOK_TUNNEL_URL configuration option to WEBHOOK_URL. In n8n 1.0, WEBHOOK_TUNNEL_URL has been removed. Update your setup to reflect the new name. For more information about this configuration option, refer to the docs.

PR #1408

Remove Node 16 support

n8n now requires Node 18.17.0 or above.

Updating to n8n 1.0

  1. Create a full backup of n8n.
  2. n8n recommends updating to the latest n8n 0.x release before updating to n8n 1.x. This will allow you to pinpoint any potential issues to the correct release. Once you have verified that n8n 0.x starts up without any issues, proceed to the next step.
  3. Carefully read the Deprecations and Breaking Changes sections above to assess how they may affect your setup.
  4. Update to n8n 1.0:
    • During beta (before July 24th 2023): If using Docker, pull the next Docker image.
    • After July 24th 2023: If using Docker, pull the latest Docker image.
  5. If you encounter any issues, redeploy the previous n8n version and restore the backup.

If you encounter any issues during the process of updating to n8n 1.0, please seek help in the community forum.

We would like to take a moment to express our gratitude to all of our users for their continued support and feedback. Your contributions are invaluable in helping us make n8n the best possible automation tool. We're excited to continue working with you as we move forward with the release of version 1.0 and beyond. Thank you for being a part of our journey!

Examples:

Example 1 (unknown):

docker run --rm -it --user root -v ~/.n8n:/home/node/.n8n --entrypoint chown n8nio/base:16 -R node:node /home/node/.n8n

HaloPSA credentials

URL: llms-txt#halopsa-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a HaloPSA account.

Supported authentication methods

Refer to HaloPSA's API documentation for more information about the service.

To configure this credential, you'll need:

  • To select your Hosting Type:
    • On Premise Solution: Choose this option if you're hosting the Halo application on your own server
    • Hosted Solution Of Halo: Choose this option if your application is hosted by Halo. If this option is selected, you'll need to provide your Tenant.
  • The HaloPSA Authorisation Server URL: Your Authorisation Server URL is displayed within HaloPSA in Configuration > Integrations > Halo API in API Details.
  • The Resource Server URL: Your Resource Server is displayed within HaloPSA in Configuration > Integrations > Halo API in API Details.
  • A Client ID: Obtained by registering the application in the Halo API settings. Refer to HaloPSA's Authorisation documentation for detailed instructions. n8n recommends using these settings:
    • Choose Client Credentials as your Authentication Method.
    • Use the all permission.
  • A Client Secret: Obtained by registering the application in the Halo API settings.
  • Your Tenant name: If Hosted Solution of Halo is selected as the Hosting Type, you must provide your tenant name. Your tenant name is displayed within HaloPSA in Configuration > Integrations > Halo API in API Details.

HaloPSA uses both the application permissions and the agent's permissions to determine API access.


Twist credentials

URL: llms-txt#twist-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2
    • Local environment redirect URL

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Twist's API documentation for more information about authenticating with the service.

To configure this credential, you'll need:

  • A Client ID: Generated once you create a general integration.
  • A Client Secret: Generated once you create a general integration.

To generate your Client ID and Client Secret, create a general integration.

Use these settings for your integration's OAuth Authentication:

  • Copy the OAuth Redirect URL from n8n and enter it as the OAuth 2 redirect URL in Twist.

OAuth Redirect URL for self-hosted n8n

Twist doesn't accept a localhost Redirect URL. The Redirect URL should be a URL in your domain, for example: https://mytemplatemaker.example.com/gr_callback. If your n8n OAuth Redirect URL contains localhost, refer below to Local environment redirect URL for generating a URL that Twist will allow.

  • Select Update OAuth settings to save those changes.

  • Copy the Client ID and Client Secret from Twist and enter them in the appropriate fields in n8n.

Local environment redirect URL

Twist doesn't accept a localhost callback URL. These steps should allow you to configure the OAuth credentials for the local environment:

  1. Use ngrok to expose the local server running on port 5678 to the internet. In your terminal, run the following command:

  2. Run the following command in a new terminal. Replace <YOUR-NGROK-URL> with the URL that you get from the previous step.

  3. Use the generated URL as your OAuth 2 redirect URL in Twist.

Examples:

Example 1 (unknown):

ngrok http 5678

Example 2 (unknown):

export WEBHOOK_URL=<YOUR-NGROK-URL>

Extract From File

URL: llms-txt#extract-from-file

Contents:

  • Operations
  • Example workflow
  • Node parameters
    • Input Binary Field
    • Destination Output Field
  • Templates and examples

A common pattern in n8n workflows is to receive a file, either from an HTTP Request node (for files you are fetching from a website), a Webhook Node (for files which are sent to your workflow from elsewhere), or from a local source. Data obtained in this way is often in a binary format, for example a spreadsheet or PDF.

The Extract From File node extracts data from a binary format file and converts it to JSON, which can then be easily manipulated by the rest of your workflow. For converting JSON back into a binary file type, please see the Convert to File node.

Use the Operations drop-down to select the format of the source file to extract data from.

  • Extract From CSV: The "Comma Separated Values" file type is commonly used for tabulated data.
  • Extract From HTML: Extract fields from standard web page HTML format files.
  • Extract From JSON: Extract JSON data from a binary file.
  • Extract From ICS: Extract fields from iCalendar format files.
  • Extract From ODS: Extract fields from ODS spreadsheet files.
  • Extract From PDF: Extract fields from Portable Document Format files.
  • Extract From RTF: Extract fields from Rich Text Format files.
  • Extract From Text File: Extract fields from a standard text file format.
  • Extract From XLS: Extract fields from a Microsoft Excel file (older format).
  • Extract From XLSX: Extract fields from a Microsoft Excel file.
  • Move File to Base64 String: Converts binary data to a text-friendly base64 format.

Example workflow

In this example, a Webhook node is used to trigger the workflow. When a CSV file is sent to the webhook address, the file data is output and received by the Extract From File node.

View workflow file

Set to operate as 'Extract from CSV', the node then outputs the data as a series of JSON 'row' objects:

Receiving files with a webhook

Select the Webhook Node's Add Options button and select Raw body, then enable that setting to get the node to output the binary file that the subsequent node is expecting.

Node parameters

Input Binary Field

Enter the name of the field from the node input data that contains the binary file. The default is 'data'.

Destination Output Field

Enter the name of the field in the node output that will contain the extracted data.

This parameter is only available for these operations:

  • Extract From JSON
  • Extract From ICS
  • Extract From Text File
  • Move File to Base64 String

Templates and examples

Building Your First WhatsApp Chatbot

View template details

Extract text from a PDF file

View template details

Scrape and store data from multiple website pages

View template details

Browse Extract From File integration templates, or search all templates

Examples:

Example 1 (unknown):

{
  "row": {
    "0": "apple",
    "1": "1",
    "2": "2",
    "3": "3"
  }
  ...

RabbitMQ node

URL: llms-txt#rabbitmq-node

Contents:

  • Operations
  • Templates and examples

Use the RabbitMQ node to automate work in RabbitMQ, and integrate RabbitMQ with other applications. n8n has built-in support for a wide range of RabbitMQ features, including accepting, and forwarding messages.

On this page, you'll find a list of operations the RabbitMQ node supports and links to more resources.

Refer to RabbitMQ credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Delete From Queue
  • Send a Message to RabbitMQ

Templates and examples

Browse RabbitMQ integration templates, or search all templates


Taiga node

URL: llms-txt#taiga-node

Contents:

  • Operations
  • Templates and examples

Use the Taiga node to automate work in Taiga, and integrate Taiga with other applications. n8n has built-in support for a wide range of Taiga features, including creating, updating, deleting, and getting issues.

On this page, you'll find a list of operations the Taiga node supports and links to more resources.

Refer to Taiga credentials for guidance on setting up authentication.

  • Issue
    • Create an issue
    • Delete an issue
    • Get an issue
    • Get all issues
    • Update an issue

Templates and examples

Create, update, and get an issue on Taiga

View template details

Receive updates when an event occurs in Taiga

View template details

Automate Service Ticket Triage with GPT-4o & Taiga

View template details

Browse Taiga integration templates, or search all templates


Convert to File

URL: llms-txt#convert-to-file

Contents:

  • Operations
    • Convert to CSV
    • Convert to HTML
    • Convert to ICS
    • Convert to JSON
    • Convert to ODS
    • Convert to RTF
    • Convert to Text File
    • Convert to XLS
    • Convert to XLSX

Use the Convert to File node to take input data and output it as a file. This converts the input JSON data into a binary format.

To extract data from a file and convert it to JSON, use the Extract from File node.

Node parameters and options depend on the operation you select.

Convert to CSV

Configure the node for this operation with the Put Output File in Field parameter. Enter the name of the field in the output data to contain the file.

Convert to CSV options

You can also configure this operation with these Options:

  • File Name: Enter the file name for the generated output file.
  • If the first row of the file contains header names, turn on the Header Row option.

Convert to HTML

Configure the node for this operation with the Put Output File in Field parameter. Enter the name of the field in the output data to contain the file.

Convert to HTML options

You can also configure this operation with these Options:

  • File Name: Enter the file name for the generated output file.

  • If the first row of the file contains header names, turn on the Header Row option.

Convert to ICS

Configure the node for this operation with these parameters:

  • Put Output File in Field: Enter the name of the field in the output data to contain the file.
  • Event Title: Enter the title for the event.
  • Start: Enter the date and time the event will start. All-day events ignore the time.
  • End: Enter the date and time the event will end. All-day events ignore the time. If unset, the node uses the start date.
  • All Day: Select whether the event is an all day event (turned on) or not (turned off).

Convert to ICS options

You can also configure this operation with these Options:

  • File Name: Enter the file name for the generated output file.
  • Attendees: Use this option to add attendees to the event. For each attendee, add:
    • Name
    • Email
    • RSVP: Select whether the attendee needs to confirm attendance (turned on) or doesn't (turned off).
  • Busy Status: Use this option to set the busy status for Microsoft applications like Outlook. Choose from:
    • Busy
    • Tentative
  • Calendar Name: For Apple and Microsoft calendars, enter the calendar name for the event.
  • Description: Enter an event description.
  • Geolocation: Enter the Latitude and Longitude for the event's location.
  • Location: Enter the event's intended venue/location.
  • Recurrence Rule: Enter a rule to define the repeat pattern of the event (RRULE). Generate rules using the iCalendar.org RRULE Tool.
  • Organizer: Enter the organizer's Name and Email.
  • Sequence: If you're sending an update for an event with the same universally unique ID (UID), enter the revision sequence number.
  • Status: Set the status of the event. Choose from:
    • Confirmed
    • Cancelled
    • Tentative
  • UID: Enter a universally unique ID (UID) for the event. The UID should be globally unique. The node automatically generates a UID if you don't enter one.
  • URL: Enter a URL associated with the event.
  • Use Workflow Timezone: Whether to use UTC time zone (turned off) or the workflow's timezone (turned on). Set the workflow's timezone in the Workflow Settings.

Convert to JSON

Choose the best output Mode for your needs from these options:

  • All Items to One File: Send all input items to a single file.
  • Each Item to Separate File: Create a file for every input item.

Convert to JSON options

You can also configure this operation with these Options:

  • File Name: Enter the file name for the generated output file.
  • Format: Choose whether to format the JSON for easier reading (turned on) or not (turned off).
  • Encoding: Choose the character set to use to encode the data. The default is utf8.

Convert to ODS

Configure the node for this operation with the Put Output File in Field parameter. Enter the name of the field in the output data to contain the file.

Convert to ODS options

You can also configure this operation with these Options:

  • File Name: Enter the file name for the generated output file.
  • Compression: Choose whether to compress and reduce the file's output size.
  • Header Row: Turn on if the first row of the file contains header names.
  • Sheet Name: Enter the Sheet Name to create in the spreadsheet.

Convert to RTF

Configure the node for this operation with the Put Output File in Field parameter. Enter the name of the field in the output data to contain the file.

Convert to RTF options

You can also configure this operation with these Options:

  • File Name: Enter the file name for the generated output file.
  • If the first row of the file contains header names, turn on the Header Row option.

Convert to Text File

Enter the name of the Text Input Field that contains a string to convert to a file. Use dot-notation for deep fields, for example level1.level2.currentKey.

Convert to Text File options

You can also configure this operation with these Options:

  • File Name: Enter the file name for the generated output file.
  • Encoding: Choose the character set to use to encode the data. The default is utf8.

Convert to XLS

Configure the node for this operation with the Put Output File in Field parameter. Enter the name of the field in the output data to contain the file.

Convert to XLS options

You can also configure this operation with these Options:

  • File Name: Enter the file name for the generated output file.
  • Header Row: Turn on if the first row of the file contains header names.
  • Sheet Name: Enter the Sheet Name to create in the spreadsheet.

Convert to XLSX

Configure the node for this operation with the Put Output File in Field parameter. Enter the name of the field in the output data to contain the file.

Convert to XLSX options

You can also configure this operation with these Options:

  • File Name: Enter the file name for the generated output file.
  • Compression: Choose whether to compress and reduce the file's output size.
  • Header Row: Turn on if the first row of the file contains header names.
  • Sheet Name: Enter the Sheet Name to create in the spreadsheet.

Move Base64 String to File

Enter the name of the Base64 Input Field that contains the Base64 string to convert to a file. Use dot-notation for deep fields, for example level1.level2.currentKey.

Move Base64 String to File options

You can also configure this operation with these Options:

  • File Name: Enter the file name for the generated output file.
  • MIME Type: Enter the MIME type of the output file. Refer to Common MIME types for a list of common MIME types and the file extensions they relate to.

Templates and examples

Automated Web Scraping: email a CSV, save to Google Sheets & Microsoft Excel

View template details

🤖 Telegram Messaging Agent for Text/Audio/Images

View template details

Ultimate Scraper Workflow for n8n

View template details

Browse Convert to File integration templates, or search all templates


NocoDB credentials

URL: llms-txt#nocodb-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API token
  • Using user auth token

You can use these credentials to authenticate the following nodes:

Supported authentication methods

  • API token (recommended)

User auth token deprecation

NocoDB deprecated user auth tokens in v0.205.1. Use API tokens instead.

Refer to NocoDB's API documentation for more information about the service.

To configure this credential, you'll need a NocoDB instance and:

  • An API Token
  • Your database Host

To generate an API token:

  1. Log into NocoDB and select the User menu in the bottom left sidebar.
  2. Select Account Settings.
  3. Open the Tokens tab.
  4. Select Add new API token.
  5. Enter a Name for your token, like n8n integration.
  6. Select Save.
  7. Copy the API Token and enter it in your n8n credential.
  8. Enter the Host of your NocoDB instance in your n8n credential, for example http://localhost:8080.

Refer to the NocoDB API Tokens documentation for more detailed instructions.

Using user auth token

Before NocoDB deprecated it, user auth token was a temporary token designed for quick experiments with the API, valid for a session until the user logs out or for 10 hours.

User auth token deprecation

NocoDB deprecated user auth tokens in v0.205.1. Use API tokens instead.

To configure this credential, you'll need a NocoDB instance and:

  • A User Token
  • Your database Host

To generate a user auth token:

  1. Log into NocoDB and select the User menu in the bottom left sidebar.
  2. Select Copy Auth token.
  3. Enter that auth token as the User Token in n8n.
  4. Enter the Host of your NocoDB instance, for example http://localhost:8080.

Refer to the NocoDB Auth Tokens documentation for more information.


Salesmate credentials

URL: llms-txt#salesmate-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API token

You can use these credentials to authenticate the following nodes:

Create a Salesmate account.

Supported authentication methods

Refer to Salesmate's API documentation for more information about the service.

To configure this credential, you'll need:

  • A Session Token: An Access Key. Generate an access key in My Account > Access Key. Refer to Access Rights and Keys for more information.
  • A URL: Your Salesmate domain name/base URL, for example n8n.salesmate.io.

Supabase node

URL: llms-txt#supabase-node

Contents:

  • Operations
  • Using custom schemas
  • Templates and examples
  • What to do if your operation isn't supported
  • Common issues

Use the Supabase node to automate work in Supabase, and integrate Supabase with other applications. n8n has built-in support for a wide range of Supabase features, including creating, deleting, and getting rows.

On this page, you'll find a list of operations the Supabase node supports and links to more resources.

Refer to Supabase credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Row
    • Create a new row
    • Delete a row
    • Get a row
    • Get all rows
    • Update a row

Using custom schemas

By default, the Supabase node only fetches the public schema. To fetch custom schemas, enable Use Custom Schema.

In the new Schema field, provide the custom schema the Supabase node should use.

Templates and examples

AI Agent To Chat With Files In Supabase Storage

View template details

Autonomous AI crawler

View template details

Supabase Insertion & Upsertion & Retrieval

View template details

Browse Supabase integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.

For common errors or issues and suggested resolution steps, refer to Common issues.


Yahoo Send Email credentials

URL: llms-txt#yahoo-send-email-credentials

Contents:

  • Prerequisites
  • Set up the credential

Follow these steps to configure the Send Email credentials with a Yahoo account.

To follow these instructions, you must first generate an app password:

  1. Log in to your Yahoo account Security page.
  2. Select Generate app password or Generate and manage app passwords.
  3. Select Get Started.
  4. Enter an App name for your new app password, like n8n credential.
  5. Select Generate password.
  6. Copy the generated app password. You'll use this in your n8n credential.

Refer to Yahoo's Generate and manage 3rd-party app passwords for more information.

Set up the credential

To configure the Send Email credential to use Yahoo Mail:

  1. Enter your Yahoo email address as the User.
  2. Enter the app password you generated above as the Password.
  3. Enter smtp.mail.yahoo.com as the Host.
  4. For the Port:
    • Keep the default 465 for SSL or if you're unsure what to use.
    • Enter 587 for TLS.
  5. Turn on the SSL/TLS toggle.

Refer to IMAP server settings for Yahoo Mail for more information. If the settings above don't work for you, check with your email administrator.


Find your workflow ID

URL: llms-txt#find-your-workflow-id

Your workflow ID is available in:

  • The URL of the open workflow.
  • The workflow settings title.
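
For example, in a workflow URL like the one below, the final path segment is the workflow ID (the host and ID shown here are placeholders):

https://your-instance.example.com/workflow/h2AbC9dEfG01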

Venafi TLS Protect Datacenter credentials

URL: llms-txt#venafi-tls-protect-datacenter-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API integration

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Venafi's API integration documentation for more information about the service.

Using API integration

To configure this credential, you'll need:

  • A Domain: Enter your Venafi TLS Protect Datacenter domain.
  • A Client ID: Enter the Client ID from your API integration. Refer to the information and links in Prerequisites for more information on creating an API integration.
  • A Username: Enter your username.
  • A Password: Enter your password.
  • Allow Self-Signed Certificates: If turned on, the credential will allow self-signed certificates.

MCP Client Tool node

URL: llms-txt#mcp-client-tool-node

Contents:

  • Node parameters
  • Templates and examples
  • Related resources

The MCP Client Tool node is a Model Context Protocol (MCP) client, allowing you to use the tools exposed by an external MCP server. You can connect the MCP Client Tool node to your models to call external tools with n8n agents.

The MCP Client Tool node supports both Bearer and generic header authentication methods.

Configure the node with the following parameters.

  • SSE Endpoint: The SSE endpoint for the MCP server you want to connect to.
  • Authentication: The authentication method to use with your MCP server. The MCP tool supports bearer and generic header authentication. Select None to attempt to connect without authentication.
  • Tools to Include: Choose which tools you want to expose to the AI Agent:
    • All: Expose all the tools given by the MCP server.
    • Selected: Activates a Tools to Include parameter where you can select the tools you want to expose to the AI Agent.
    • All Except: Activates a Tools to Exclude parameter where you can select the tools you want to avoid sharing with the AI Agent. The AI Agent will have access to all of the MCP server's tools that aren't selected.

Templates and examples

Build an MCP Server with Google Calendar and Custom Functions

View template details

Build your own N8N Workflows MCP Server

View template details

Build a Personal Assistant with Google Gemini, Gmail and Calendar using MCP

View template details

Browse MCP Client Tool integration templates, or search all templates

n8n also has an MCP Server Trigger node that allows you to expose n8n tools to external AI Agents.

Refer to the MCP documentation and MCP specification for more details about the protocol, servers, and clients.

Refer to LangChain's documentation on tools for more information about tools in LangChain.

View n8n's Advanced AI documentation.


Bannerbear credentials

URL: llms-txt#bannerbear-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key

You can use these credentials to authenticate the following nodes:

Create a Bannerbear account.

Supported authentication methods

Refer to Bannerbear's API documentation for more information about the service.

To configure this credential, you'll need:


Sentry.io node

URL: llms-txt#sentry.io-node

Contents:

  • Operations
  • Templates and examples
  • Related resources
  • What to do if your operation isn't supported

Use the Sentry.io node to automate work in Sentry.io, and integrate Sentry.io with other applications. n8n has built-in support for a wide range of Sentry.io features, including creating, updating, deleting, and getting issues, projects, and releases, as well as getting all events.

On this page, you'll find a list of operations the Sentry.io node supports and links to more resources.

Refer to Sentry.io credentials for guidance on setting up authentication.

  • Event
    • Get event by ID
    • Get all events
  • Issue
    • Delete an issue
    • Get issue by ID
    • Get all issues
    • Update an issue
  • Project
    • Create a new project
    • Delete a project
    • Get project by ID
    • Get all projects
    • Update a project
  • Release
    • Create a release
    • Delete a release
    • Get release by version identifier
    • Get all releases
    • Update a release
  • Organization
    • Create an organization
    • Get organization by slug
    • Get all organizations
    • Update an organization
  • Team
    • Create a new team
    • Delete a team
    • Get team by slug
    • Get all teams
    • Update a team

Templates and examples

Browse Sentry.io integration templates, or search all templates

Refer to Sentry.io's documentation for more information about the service.

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.


Mailjet Trigger node

URL: llms-txt#mailjet-trigger-node

Mailjet is a cloud-based email sending and tracking system. The platform allows professionals to send both marketing emails and transactional emails. It includes tools for designing emails, sending massive volumes and tracking these messages.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Mailjet Trigger integrations page.


HubSpot node

URL: llms-txt#hubspot-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the HubSpot node to automate work in HubSpot, and integrate HubSpot with other applications. n8n has built-in support for a wide range of HubSpot features, including creating, updating, deleting, and getting contacts, deals, lists, engagements and companies.

On this page, you'll find a list of operations the HubSpot node supports and links to more resources.

Refer to HubSpot credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Contact
    • Create/Update a contact
    • Delete a contact
    • Get a contact
    • Get all contacts
    • Get recently created/updated contacts
    • Search contacts
  • Contact List
    • Add contact to a list
    • Remove a contact from a list
  • Company
    • Create a company
    • Delete a company
    • Get a company
    • Get all companies
    • Get recently created companies
    • Get recently modified companies
    • Search companies by domain
    • Update a company
  • Deal
    • Create a deal
    • Delete a deal
    • Get a deal
    • Get all deals
    • Get recently created deals
    • Get recently modified deals
    • Search deals
    • Update a deal
  • Engagement
    • Create an engagement
    • Delete an engagement
    • Get an engagement
    • Get all engagements
  • Form
    • Get all fields from a form
    • Submit data to a form
  • Ticket
    • Create a ticket
    • Delete a ticket
    • Get a ticket
    • Get all tickets
    • Update a ticket

Templates and examples

Real Estate Lead Generation with BatchData Skip Tracing & CRM Integration

View template details

Create HubSpot contacts from LinkedIn post interactions

View template details

Update HubSpot when a new invoice is registered in Stripe

View template details

Browse HubSpot integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.
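To see what the underlying request looks like, here's a minimal Python sketch of a HubSpot CRM v3 call the node doesn't list, reading deal pipelines. The endpoint path and the private-app token variable are assumptions, so confirm them in HubSpot's API reference before relying on them.

```python
# Minimal sketch (outside n8n): read HubSpot deal pipelines via the CRM v3 API.
# The endpoint and the private-app token env var are assumptions -- confirm
# them in HubSpot's API reference.
import os
import requests

HUBSPOT_TOKEN = os.environ["HUBSPOT_TOKEN"]   # private app access token (assumed)

response = requests.get(
    "https://api.hubapi.com/crm/v3/pipelines/deals",
    headers={"Authorization": f"Bearer {HUBSPOT_TOKEN}"},
    timeout=30,
)
response.raise_for_status()
for pipeline in response.json().get("results", []):
    print(pipeline["label"])
```

In the HTTP Request node, the predefined HubSpot credential takes care of the Authorization header for you.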


Contentful credentials

URL: llms-txt#contentful-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API access token

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Contentful's API documentation for more information about the service.

Using API access token

To configure this credential, you'll need:

  • Your Contentful Space ID: The Space ID displays as you generate the tokens. You can also refer to Contentful's Find space ID documentation to view the Space ID.
  • A Content Delivery API Access Token: Required if you want to use the Content Delivery API. Leave blank if you don't intend to use this API.
  • A Content Preview API Access Token: Required if you want to use the Content Preview API. Leave blank if you don't intend to use this API.

View and generate access tokens in Contentful in Settings > API keys. Contentful generates tokens for both Content Delivery API and Content Preview API as part of a single key. Refer to the Contentful API authentication documentation for detailed instructions.
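If it helps to see where these values end up, the sketch below shows a raw Content Delivery API request (in Python, outside n8n) that combines the Space ID and the delivery token. The URL shape and the default `master` environment are assumptions to verify against Contentful's Content Delivery API reference.

```python
# Minimal sketch: fetch a few entries from Contentful's Content Delivery API
# using the Space ID and Content Delivery API access token. The URL shape and
# the "master" environment are assumptions -- verify against Contentful's docs.
import os
import requests

SPACE_ID = "your_space_id"                       # hypothetical Space ID
CDA_TOKEN = os.environ["CONTENTFUL_CDA_TOKEN"]   # delivery token (assumed env var)

response = requests.get(
    f"https://cdn.contentful.com/spaces/{SPACE_ID}/environments/master/entries",
    params={"access_token": CDA_TOKEN, "limit": 5},
    timeout=30,
)
response.raise_for_status()
print(response.json().get("total"), "entries in total")
```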


Privacy and security at n8n

URL: llms-txt#privacy-and-security-at-n8n

n8n is committed to the privacy and security of your data. This section outlines how n8n handles and secures data. This isn't an exhaustive list of practices, but an overview of key policies and procedures.

If you have any questions related to data privacy, email privacy@n8n.io.

If you have any security-related questions, or if you want to report a suspected vulnerability, email security@n8n.io.


UptimeRobot credentials

URL: llms-txt#uptimerobot-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using API key
    • API key types

You can use these credentials to authenticate the following nodes:

Create an UptimeRobot account.

Supported authentication methods

Refer to UptimeRobot's API documentation for more information about the service.

To configure this credential, you'll need:

  • An API Key: Get your API Key from My Settings > API Settings. Create a Main API Key and enter this key in your n8n credential.

UptimeRobot supports three API key types:

  • Account-specific (also known as main): Pulls data for multiple monitors.
  • Monitor-specific: Pulls data for a single monitor.
  • Read-only: Only runs GET API calls.

To perform all of the operations in the UptimeRobot node, use the account-specific (main) API key type. Refer to API authentication for more information.
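To make the distinction concrete, an account-specific (main) key can pull data for every monitor in a single call, which is the kind of request the n8n node makes. The sketch below targets UptimeRobot's v2 API; the endpoint and field names are assumptions to double-check against the API docs.

```python
# Minimal sketch: list all monitors with an account-specific (main) API key
# via UptimeRobot's v2 API. Endpoint and field names are assumptions -- check
# UptimeRobot's API documentation.
import os
import requests

API_KEY = os.environ["UPTIMEROBOT_API_KEY"]   # main API key (assumed env var)

response = requests.post(
    "https://api.uptimerobot.com/v2/getMonitors",
    data={"api_key": API_KEY, "format": "json"},
    timeout=30,
)
response.raise_for_status()
for monitor in response.json().get("monitors", []):
    print(monitor["friendly_name"], monitor["status"])
```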


Salesforce Trigger node

URL: llms-txt#salesforce-trigger-node

Contents:

  • Events
  • Related resources

Use the Salesforce Trigger node to respond to events in Salesforce and integrate Salesforce with other applications. n8n has built-in support for a wide range of Salesforce events.

On this page, you'll find a list of events the Salesforce Trigger node can respond to, and links to more resources.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Salesforce trigger integrations page.

  • On Account Created
  • On Account Updated
  • On Attachment Created
  • On Attachment Updated
  • On Case Created
  • On Case Updated
  • On Contact Created
  • On Contact Updated
  • On Custom Object Created
  • On Custom Object Updated
  • On Lead Created
  • On Lead Updated
  • On Opportunity Created
  • On Opportunity Updated
  • On Task Created
  • On Task Updated
  • On User Created
  • On User Updated

n8n provides an app node for Salesforce. You can find the node docs here.

View example workflows and related content on n8n's website.


This is a top-level heading

URL: llms-txt#this-is-a-top-level-heading

Contents:

  • This is a sub-heading
    • This is a smaller sub-heading
  • Make images full width
  • Embed a YouTube video

This is a sub-heading

This is a smaller sub-heading

You can add links: Example

Create lists with asterisks:

  • Item one
  • Item two

Or create ordered lists with numbers:

  1. Item one
  2. Item two

For a more detailed guide, refer to [CommonMark's help](https://commonmark.org/help/). n8n uses [markdown-it](https://github.com/markdown-it/markdown-it), which implements the CommonMark specification.

Make images full width

You can force images to be 100% width of the sticky note by appending `#full-width` to the filename.

Embed a YouTube video

To display a YouTube video in a note, use the `@[youtube](<video-id>)` directive with the video's ID. For this to work, the video's creator must allow embedding.

For example:

@[youtube](ZCuL2e4zC_4)

To embed your own video, copy the above syntax, replacing `ZCuL2e4zC_4` with your video ID. The YouTube video ID is the string that follows `v=` in the YouTube URL.

Formstack Trigger node

URL: llms-txt#formstack-trigger-node

Formstack is a workplace productivity platform that helps organizations streamline digital work through no-code online forms, documents, and signatures.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Formstack Trigger integrations page.


GetResponse Trigger node

URL: llms-txt#getresponse-trigger-node

Contents:

  • Events

GetResponse is an online platform that offers email marketing software, a landing page creator, webinar hosting, and much more.

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's GetResponse Trigger integrations page.

  • Receive notifications when a customer is subscribed to a list
  • Receive notifications when a customer is unsubscribed from a list
  • Receive notifications when an email is opened
  • Receive notifications when an email is clicked
  • Receive notifications when a survey is submitted

Weaviate credentials

URL: llms-txt#weaviate-credentials

Contents:

  • Supported authentication methods
  • Related resources
  • Using API key
    • Connection type: Weaviate Cloud
    • Connection type: Custom Connection

You can use these credentials to authenticate the following nodes:

Supported authentication methods

Refer to Weaviate's connection documentation for more information on how to connect to Weaviate.

View n8n's Advanced AI documentation.

Connection type: Weaviate Cloud

Create your Weaviate Cloud database, then follow these instructions to get the following parameter values from it:

  • Weaviate Cloud Endpoint
  • Weaviate Api Key

Note: Weaviate provides a free sandbox option for testing.

Connection type: Custom Connection

For this Connection Type, you need to deploy Weaviate on your own server, configured so n8n can access it. Refer to Weaviate's authentication documentation for information on creating and using API keys.

You can then provide the arguments for your custom connection (a client-side sketch mapping these fields follows the list):

  • Weaviate Api Key: Your Weaviate API key.
  • Custom Connection HTTP Host: The domain name or IP address of your Weaviate instance to use for HTTP API calls.
  • Custom Connection HTTP Port: The port your Weaviate instance is running on for HTTP API calls. By default, this is 8080.
  • Custom Connection HTTP Secure: Whether to connect to the Weaviate instance through HTTPS for HTTP API calls.
  • Custom Connection gRPC Host: The hostname or IP address of your Weaviate instance to use for gRPC.
  • Custom Connection gRPC Port: The gRPC API port for your Weaviate instance. By default, this is 50051.
  • Custom Connection gRPC Secure: Whether to use a secure (TLS) connection to the Weaviate instance for gRPC calls.
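As a point of reference, here's how the same fields map onto a connection made with the Weaviate Python client (v4) outside n8n. The client calls shown (`weaviate.connect_to_custom`, `Auth.api_key`) are assumptions based on that client version, so check the docs for the client you actually use.

```python
# Minimal sketch: connect to a self-hosted Weaviate instance with the v4
# Python client, using the same values the n8n credential asks for. Hostnames
# are hypothetical; the client API is assumed from weaviate-client v4.
import os
import weaviate
from weaviate.classes.init import Auth

client = weaviate.connect_to_custom(
    http_host="weaviate.example.internal",   # Custom Connection HTTP Host (hypothetical)
    http_port=8080,                          # Custom Connection HTTP Port
    http_secure=False,                       # Custom Connection HTTP Secure
    grpc_host="weaviate.example.internal",   # Custom Connection gRPC Host (hypothetical)
    grpc_port=50051,                         # Custom Connection gRPC Port
    grpc_secure=False,                       # Custom Connection gRPC Secure
    auth_credentials=Auth.api_key(os.environ["WEAVIATE_API_KEY"]),  # Weaviate Api Key
)
print(client.is_ready())
client.close()
```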

For community support, refer to Weaviate Forums.


Xero credentials

URL: llms-txt#xero-credentials

Contents:

  • Prerequisites
  • Supported authentication methods
  • Related resources
  • Using OAuth2

You can use these credentials to authenticate the following nodes:

Create a Xero account.

Supported authentication methods

Refer to Xero's API documentation for more information about the service.

To configure this credential, you'll need:

  • A Client ID: Generated when you create a new app for a custom connection.
  • A Client Secret: Generated when you create a new app for a custom connection.

To generate your Client ID and Client Secret, create an OAuth2 custom connection app in your Xero developer portal My Apps.

Use these settings for your app:

Note that Xero doesn't support apps in the Xero Developer Centre that contain n8n in their name.

  • Select Web app as the Integration Type.
  • For the Company or Application URL, enter the URL of your n8n server or reverse proxy address. For cloud users, for example, this is: https://your-username.app.n8n.cloud/.
  • Copy the OAuth Redirect URL from n8n and add it as an OAuth 2.0 redirect URI in your app.
  • Select appropriate scopes for your app. Refer to OAuth2 Scopes for more information.
    • To use all functionality in the Xero node, add the accounting.contacts and accounting.transactions scopes.

Refer to Xero's OAuth Custom Connections documentation for more information.


Waiting

URL: llms-txt#waiting

Waiting allows you to pause a workflow mid-execution, then resume where the workflow left off, with the same data. This is useful if you need to rate limit your calls to a service, or wait for an external event to complete. You can wait for a specified duration, or until a webhook fires.

Making a workflow wait uses the Wait node. Refer to the node documentation for usage details.

n8n provides a workflow template with a basic example of Rate limiting and waiting for external events.


ERPNext node

URL: llms-txt#erpnext-node

Contents:

  • Operations
  • Templates and examples
  • What to do if your operation isn't supported

Use the ERPNext node to automate work in ERPNext, and integrate ERPNext with other applications. n8n has built-in support for a wide range of ERPNext features, including creating, updating, retrieving, and deleting documents.

On this page, you'll find a list of operations the ERPNext node supports and links to more resources.

Refer to ERPNext credentials for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the AI tool parameters documentation.

  • Create a document
  • Delete a document
  • Retrieve a document
  • Retrieve all documents
  • Update a document

Templates and examples

Browse ERPNext integration templates, or search all templates

What to do if your operation isn't supported

If this node doesn't support the operation you want to do, you can use the HTTP Request node to call the service's API.

You can use the credential you created for this service in the HTTP Request node:

  1. In the HTTP Request node, select Authentication > Predefined Credential Type.
  2. Select the service you want to connect to.
  3. Select your credential.

Refer to Custom API operations for more information.
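For reference, here's a minimal Python sketch of a direct call to the ERPNext (Frappe) REST API. The `/api/resource` path, the `token key:secret` header format, and the placeholder instance URL are assumptions to verify against the API docs for your ERPNext version.

```python
# Minimal sketch (outside n8n): list Customer documents via the ERPNext
# (Frappe) REST API. The URL, auth header format, and env vars are
# assumptions -- verify against your ERPNext version's API docs.
import os
import requests

BASE_URL = "https://erp.example.com"          # hypothetical ERPNext instance
API_KEY = os.environ["ERPNEXT_API_KEY"]       # assumed env vars
API_SECRET = os.environ["ERPNEXT_API_SECRET"]

response = requests.get(
    f"{BASE_URL}/api/resource/Customer",
    headers={"Authorization": f"token {API_KEY}:{API_SECRET}"},
    params={"limit_page_length": 5},
    timeout=30,
)
response.raise_for_status()
print(response.json().get("data"))
```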