Integrating n8n with Open WebUI: Building advanced AI chatbots and workflows


Everyone and their mum knows about the capabilities and productivity gains of AI chatbots like ChatGPT. However, as we outlined in our post about Open WebUI, hosted solutions (like ChatGPT) have their fair share of limitations. Most notably, there are valid privacy concerns, but you are also limited to the features the provider has implemented. This is one of the main reasons for using open source, self-hosted solutions like Open WebUI or LibreChat.

In this post, we will explore how to extend Open WebUI with virtually any conceivable feature by integrating it with n8n, a powerful open-source workflow automation tool. n8n allows you to create complex workflows that can automate tasks, connect to various APIs, and perform data transformations, all through a simple low-code interface.

In this guide, we'll walk through the setup process, explore practical use cases, and provide code examples to help you get started with your own n8n and Open WebUI integration.

Understanding n8n and Open WebUI

What is Open WebUI?

Open WebUI is a self-hosted, open-source alternative to ChatGPT that provides a user-friendly interface for interacting with large language models. It offers advanced features like document RAG (Retrieval Augmented Generation), custom model creation, and an extensible pipeline system.

Key features include:

  • Support for multiple LLM providers (OpenAI, Ollama, etc.)
  • Built-in RAG capabilities for document interaction
  • Python code execution directly in the browser
  • Extensible pipeline system for custom functionality

For a comprehensive introduction to Open WebUI, check out our detailed guide on Open WebUI.

What is n8n?

n8n is an open-source workflow automation platform that allows you to create complex workflows with a visual, low-code interface. It's designed to connect different services and APIs, making it ideal for building automated processes without extensive coding.

Key capabilities include:

  • Visual workflow builder with drag-and-drop interface
  • 1,200+ pre-built integrations with various services
  • AI agent nodes for LLM integration
  • Custom code nodes for extending functionality
  • Webhook support for triggering workflows

To learn more about n8n and how to build AI agents with it, see our guide on building AI agents with n8n.

Setting up the integration between n8n and Open WebUI

Before we get started, please make sure you have a running Open WebUI and n8n instance at your disposal. You can find instructions for setting up Open WebUI in our Open WebUI guide and n8n in our n8n guide.

How does the integration work?

The integration between n8n and Open WebUI works by creating a bidirectional communication channel between the two platforms. This is accomplished through two main components:

  1. Open WebUI Function: Open WebUI provides a function system that allows you to execute custom code when certain events occur. In our case, we'll create a function that triggers whenever a user sends a message in the chat interface.

  2. n8n Webhook Trigger: On the n8n side, we'll set up a workflow that starts with a webhook trigger. This webhook will be called by our Open WebUI function, allowing n8n to process the message and return a response.

The basic flow works like this:

  1. User sends a message in Open WebUI
  2. Open WebUI function intercepts the message
  3. Function makes an HTTP request to the n8n webhook
  4. n8n processes the message through its workflow
  5. n8n returns a response to Open WebUI function
  6. Open WebUI displays the response to the user

This approach allows you to use n8n's powerful workflow capabilities while maintaining the user-friendly interface of Open WebUI. You can process messages, connect to external services, transform data, and much more, all without the user ever leaving the chat interface.

Benefits of this integration approach

  • Extensibility: Connect your chatbot to any of n8n's 1,200+ integrations
  • Automation: Create complex, multi-step processes triggered by simple chat messages
  • Separation of concerns: Keep your UI clean while handling complex logic in n8n
  • Low-code development: Build sophisticated workflows without extensive programming

In the following sections, we'll walk through setting up both sides of this integration, starting with the n8n webhook workflow and then creating the Open WebUI function to connect to it.

Creating the n8n webhook workflow

Setting up the required n8n webhook workflow is rather simple:

  1. Create a new workflow: In your n8n instance, create a new workflow and add a "Webhook" node as the starting point.

  2. Configure the webhook node:

    • Set the HTTP method to "POST" and copy the generated webhook URL. This URL will be used in the Open WebUI function to send messages to n8n.
    • Set the "Response Mode" to "Using 'Respond to Webhook' node" to ensure that n8n only responds to the webhook request after the workflow has finished processing.
    • Set the authentication to "Header Auth".
    • Create a new credential for the webhook node. Note down the header name and key you used for the credentials.

    Webhook node configuration

  3. Create your workflow: Connect whatever workflow you want to the webhook node. For example, you can use the "OpenAI" node to process the incoming message and generate a response.

  4. Finally, add a "Respond to Webhook" node: This node will send the response back to Open WebUI. Connect it to the last node in your workflow. In the node settings, choose Respond with: "First incoming item".

A simple flow might look like this:

Simple n8n workflow with webhook

Tip: Testing your workflow

As it might be cumbersome to test your workflow with only a webhook trigger, add a "When chat message received" trigger in addition to your webhook trigger. Connect both triggers to an "Edit Fields" node. This node allows you to combine both triggers into one workflow. In the "Edit Fields" node, select "JSON mode" and add the following JSON:

{
  "chatInput": {{ JSON.stringify($json.body?.chatInput ?? $json.chatInput) }},
  "sessionId": "{{ $json.body?.sessionId ?? $json.sessionId }}",
  "user": "{{ $json.body?.user?.name ?? '' }}"
}

This will take either the webhook input or the chat message input and create a unified input for the rest of the workflow.
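For readers who prefer plain code, the coalescing logic of the "Edit Fields" node can be sketched in Python. This is purely an illustration of what the node does, not n8n code; the field names follow the JSON above:

```python
def unify_input(data: dict) -> dict:
    """Mimic the "Edit Fields" node: accept either a webhook payload
    (fields nested under "body") or a chat trigger payload (fields at
    the top level) and produce one unified input object."""
    body = data.get("body") or data  # the webhook wraps its fields in "body"
    user = body.get("user") or {}
    return {
        "chatInput": body.get("chatInput"),
        "sessionId": body.get("sessionId"),
        # chat trigger payloads carry no user, so fall back to an empty string
        "user": user.get("name", "") if isinstance(user, dict) else user,
    }
```

Either way, downstream nodes only ever see the three unified fields.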

Combine webhook and manual chat message trigger

To test your workflow, simply click the "Open Chat" button and enter your user message.
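You can also exercise the webhook path directly from a small Python script, much like the Open WebUI pipe will later do. The URL and token below are placeholders (substitute your own values), and we assume the header-auth credential uses the "Authorization" header, matching the pipe implementation further below:

```python
import json
import urllib.request

# Hypothetical values -- replace with your own webhook URL and header-auth credential.
WEBHOOK_URL = "https://n8n.example.com/webhook/abcd-1234"
AUTH_HEADERS = {"Authorization": "Bearer my-secret-key"}

def build_payload(chat_input: str, session_id: str = "test-session") -> dict:
    """Assemble the JSON body the Open WebUI pipe will later send."""
    return {"chatInput": chat_input, "sessionId": session_id}

def call_webhook(chat_input: str) -> dict:
    """POST a test message to the n8n webhook and return the parsed JSON reply."""
    data = json.dumps(build_payload(chat_input)).encode()
    req = urllib.request.Request(
        WEBHOOK_URL,
        data=data,
        headers={"Content-Type": "application/json", **AUTH_HEADERS},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        # The "Respond to Webhook" node returns the workflow result as JSON.
        return json.loads(resp.read())
```

If the workflow is wired up correctly, the returned JSON should contain your workflow's answer in the field your "Respond to Webhook" node emits.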

Creating the Open WebUI function

Now that we have ourselves an n8n webhook workflow, we need to create an Open WebUI pipe function.

What are Open WebUI functions?

Open WebUI provides a powerful extension system called "functions" that allows you to customize and extend the platform's capabilities. One of the most powerful types of functions is the "Pipe" function, which essentially lets you create custom "models" or "agents" within Open WebUI.

Think of Pipes as custom processing pathways that can intercept messages, process them in unique ways, and return responses. In our integration with n8n, we'll create a Pipe function that sends user messages to our n8n webhook and displays the responses.

Basic structure of a Pipe function

A Pipe function in Open WebUI follows this basic structure:

Note: This is just a simplified example for understanding pipes. The real example is shown further below.

from pydantic import BaseModel, Field

class Pipe:
    # Configuration options (Valves)
    class Valves(BaseModel):
        WEBHOOK_URL: str = Field(
            default="",
            description="The n8n webhook URL to send messages to"
        )
        AUTH_KEY: str = Field(
            default="",
            description="Authentication key for the webhook"
        )

    # Initialize the Pipe
    def __init__(self):
        self.valves = self.Valves()

    # Define the pipe function that processes messages
    def pipe(self, body: dict, __user__: dict = None):
        try:
            # The helpers below are placeholders -- the full implementation follows
            message = extract_user_message(body)
            payload = prepare_payload(message, __user__)
            result = send_to_n8n(payload)
            return format_response(result)

        except Exception as e:
            return {"error": str(e)}

Let's break down the key components:

  1. Valves: These are configuration options that users can set when using your Pipe. In our case, we need the n8n webhook URL and an authentication key.

  2. __init__ method: Initializes the Pipe and sets up the Valves.

  3. pipe method: This is where the magic happens. It receives the user's message, processes it, and returns a response. In our integration:

    • We extract the user's message from the input
    • We prepare a payload for n8n with the message and user information
    • We send a request to the n8n webhook
    • We format the response from n8n to match what Open WebUI expects

This structure allows our Pipe to act as a bridge between Open WebUI and n8n.

Implementing the n8n integration pipe

Now that we understand the basic structure of an Open WebUI pipe function and have our n8n webhook ready, let's create the actual implementation.

Note: We took heavy inspiration from this community pipe from @coleam. We adjusted the session handling and the user handling to make it more robust and easier to use. We also changed the request handling to allow for async status messages. Nothing you, dear reader, need to worry about, but just acknowledge that we built on top of Cole's work.

  1. Fill out the name and description of your pipe function. You can use whatever you like, but we recommend something like "n8n Assistant" and "Pipe to call n8n workflows" to follow this tutorial.

  2. Create the function code:

    """
    title: n8n Pipe Function
    author: Cole Medin / Andreas Nigg (Pondhouse Data GmbH)
    version: 0.2.0

    This module defines a Pipe class that utilizes an n8n workflow for an agent
    """

    from typing import Optional, Callable, Awaitable
    from pydantic import BaseModel, Field
    import os
    import time
    import aiohttp
    import asyncio


    class Pipe:
        class Valves(BaseModel):
            n8n_url: str = Field(
                default="https://n8n.[your domain].com/webhook/[your webhook URL]"
            )
            n8n_bearer_token: str = Field(default="...")
            input_field: str = Field(default="chatInput")
            response_field: str = Field(default="output")
            emit_interval: float = Field(
                default=2.0, description="Interval in seconds between status emissions"
            )
            enable_status_indicator: bool = Field(
                default=True, description="Enable or disable status indicator emissions"
            )

        def __init__(self):
            self.type = "pipe"
            self.id = "n8n_pipe"
            self.name = "N8N Pipe"
            self.valves = self.Valves()
            self.last_emit_time = 0

        async def emit_status(
            self,
            __event_emitter__: Callable[[dict], Awaitable[None]],
            level: str,
            message: str,
            done: bool,
        ):
            current_time = time.time()
            if (
                __event_emitter__
                and self.valves.enable_status_indicator
                and (
                    current_time - self.last_emit_time >= self.valves.emit_interval or done
                )
            ):
                await __event_emitter__(
                    {
                        "type": "status",
                        "data": {
                            "status": "complete" if done else "in_progress",
                            "level": level,
                            "description": message,
                            "done": done,
                        },
                    }
                )
                self.last_emit_time = current_time

        async def make_n8n_request(self, payload: dict) -> dict:
            """Separate async function to handle the n8n API request"""
            headers = {
                "Authorization": f"Bearer {self.valves.n8n_bearer_token}",
                "Content-Type": "application/json",
            }

            async with aiohttp.ClientSession() as session:
                async with session.post(
                    self.valves.n8n_url, json=payload, headers=headers
                ) as response:
                    if response.status == 200:
                        response_data = await response.json()
                        return response_data[self.valves.response_field]
                    else:
                        error_text = await response.text()
                        raise Exception(f"Error: {response.status} - {error_text}")

        async def pipe(
            self,
            body: dict,
            __user__: Optional[dict] = None,
            __metadata__: Optional[dict] = None,
            __event_emitter__: Callable[[dict], Awaitable[None]] = None,
            __event_call__: Callable[[dict], Awaitable[dict]] = None,
        ) -> Optional[dict]:
            n8n_response = None

            try:
                await self.emit_status(
                    __event_emitter__, "info", "Calling N8N Workflow...", False
                )

                messages = body.get("messages", [])

                # Verify a message is available
                if messages:
                    question = messages[-1]["content"]
                    if "Prompt: " in question:
                        question = question.split("Prompt: ")[-1]

                    await self.emit_status(
                        __event_emitter__, "info", "Processing request...", False
                    )

                    # Prepare payload
                    payload = {"sessionId": __metadata__["chat_id"]}
                    payload[self.valves.input_field] = question
                    payload["user"] = __user__

                    # Make the API request
                    n8n_response = await self.make_n8n_request(payload)

                    # Set assistant message with chain reply
                    body["messages"].append({"role": "assistant", "content": n8n_response})

                    await self.emit_status(
                        __event_emitter__, "info", "Processing response...", False
                    )

                else:
                    await self.emit_status(
                        __event_emitter__,
                        "error",
                        "No messages found in the request body",
                        True,
                    )
                    body["messages"].append(
                        {
                            "role": "assistant",
                            "content": "No messages found in the request body",
                        }
                    )
                    return "No messages found in the request body"

            except Exception as e:
                error_message = f"Error: {str(e)}"
                await self.emit_status(
                    __event_emitter__,
                    "error",
                    error_message,
                    True,
                )
                body["messages"].append({"role": "assistant", "content": error_message})
                return {"error": error_message}

            finally:
                await self.emit_status(__event_emitter__, "info", "Complete", True)

            return n8n_response
  3. Click "Save" to create your pipe function

Configuring the pipe

After creating the pipe, you need to configure it with your n8n webhook URL and authentication key:

  1. Go to the Admin Panel -> Functions tab in your Open WebUI instance.
  2. Find and select your "n8n Assistant" function
  3. Click on the settings icon next to the model name
  4. Enter your n8n webhook URL and bearer token in the corresponding fields
  5. Click "Save"

Configure n8n pipe in Open WebUI

How the pipe works

Let's break down how our n8n integration pipe works in detail:

  1. Configuration through Valves:

    • n8n_url: The webhook URL from your n8n workflow
    • n8n_bearer_token: The authentication token you set up in n8n
    • input_field: The field name where the user's message will be sent (default: "chatInput")
    • response_field: The field name where n8n's response is expected (default: "output")
    • emit_interval and enable_status_indicator: Controls status updates during processing
  2. Initialization:

    • The __init__ method sets up the pipe with a unique ID and name
    • It initializes the valves with default values that can be overridden by the user
  3. Status Updates:

    • The emit_status method provides real-time feedback to the user during processing
    • It shows "in_progress" indicators while waiting for n8n to respond
    • This creates a better user experience by showing that something is happening
  4. Making the n8n Request:

    • The make_n8n_request method handles the actual HTTP communication with n8n
    • It sends the user's message and session information to the webhook
    • It includes proper authentication headers and handles the response
  5. The Main Pipe Logic:

    • Extracts the latest user message from the conversation
    • Creates a payload with the message, session ID, and user information
    • Makes an asynchronous request to the n8n webhook
    • Processes the response and adds it to the conversation
    • Handles errors gracefully, showing error messages in the chat if something goes wrong
  6. Response Handling:

    • The pipe simply appends the n8n response to the messages array with the role "assistant"
    • This is all Open WebUI needs to display the response in the chat interface

The beauty of this implementation is its simplicity on the Open WebUI side. All the complex logic can be handled in your n8n workflow, while the pipe just acts as a bridge between the two systems.
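To make this contract concrete, here is a sketch of the JSON exchanged between the pipe and n8n. All values are illustrative; the field names are the valve defaults described above:

```python
# What the pipe sends to the n8n webhook (field names are the valve defaults):
payload = {
    "sessionId": "chat-1234",  # Open WebUI chat id, useful for n8n memory nodes
    "chatInput": "How many products do we have in stock?",  # valve: input_field
    "user": {"name": "Ada", "email": "ada@example.com"},  # Open WebUI user object
}

# What the "Respond to Webhook" node is expected to return:
n8n_response = {"output": "There are 42 products in stock."}  # valve: response_field

# The pipe reads the configured response field and appends it
# to the conversation as an assistant message:
reply = n8n_response["output"]
assistant_message = {"role": "assistant", "content": reply}
```

As long as your workflow returns JSON with the configured response field, the pipe needs no further changes.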

Use cases for the n8n and Open WebUI integration

Now that we have our integration set up, let's explore some practical use cases:

1. Database-powered chatbot

Connect your chatbot to your company database to answer questions about inventory, customer data, or sales metrics:

  • User asks: "How many products do we have in stock?"
  • Open WebUI sends the question to n8n
  • n8n workflow queries your database
  • Results are formatted and sent back to the user

2. Multi-API orchestration

Create workflows that combine multiple APIs to provide comprehensive responses:

  • User asks: "What's the weather forecast for my upcoming trip to Berlin?"
  • n8n workflow checks the user's calendar for trip dates
  • Then queries a weather API for the forecast
  • Combines the information into a single, coherent response

3. Document processing and generation

Build workflows that generate and process documents:

  • User requests: "Create a sales report for Q1"
  • n8n workflow pulls data from your CRM
  • Generates a PDF report
  • Uploads it to cloud storage
  • Returns a download link to the user

4. Multi-model orchestration

Create workflows that use different AI models for different tasks:

  • User uploads an image with text
  • n8n workflow uses a vision model to extract text
  • Then uses a language model to analyze the content
  • Finally, uses a specialized model to generate a response

Conclusion

The integration between n8n and Open WebUI opens up a world of possibilities for creating powerful, customized AI chatbots and workflows. By combining Open WebUI's user-friendly interface with n8n's extensive integration capabilities, you can build solutions that go far beyond what either platform could achieve alone.

This approach gives you:

  • Complete control over your data and workflows
  • Unlimited extensibility through n8n's 1,200+ integrations
  • Low-code development that speeds up implementation
  • Self-hosted infrastructure for maximum privacy and security

We encourage you to experiment with different workflows and share your creations with the community. The combination of these two powerful open-source tools represents a significant step forward in democratizing access to advanced AI capabilities.

Further Reading

Interested in how to train your very own Large Language Model?

We prepared a well-researched guide on how to use the latest advancements in open-source technology to fine-tune your own LLM. This has many advantages, such as:

  • Cost control
  • Data privacy
  • Excellent performance - adjusted specifically for your intended use
More information on our managed RAG solution? To Pondhouse AI

More tips and tricks on how to work with AI? To our Blog