Enterprise AI Portals: Five Open-Source Interfaces Compared
LobeChat, OpenWebUI, LibreChat, chatbot-ui, and very-ai: five enterprise AI portals compared. Features, SSO, PII protection, governance, self-hosting.
The Problem: A Model Without an Interface
An AI model without a controlled interface is like a server without a frontend. The technology is there, but nobody can use it in an orderly way. What happens next is predictable: employees turn to public AI services (ChatGPT, Gemini, Claude.ai) with their personal accounts. They enter corporate data into systems that are outside IT’s control. There is no audit trail, no data classification, no access control.
That is shadow AI. The question is not whether it is happening in your organization. The question is how pervasive it is.
The solution is not to ban AI usage. The solution is to provide an internal system that works better than the public alternatives, while running under corporate control. A simple chat interface is not enough for that. What you need is an enterprise AI portal.
What an Enterprise AI Portal Must Deliver
An enterprise AI portal is more than a chat window. It is the central platform through which all employees interact with AI, controlled, logged, and integrated into the existing technology landscape. Six requirements distinguish an enterprise portal from a consumer chat:
1. Multi-Model Routing
The portal must connect multiple models simultaneously: proprietary cloud APIs and self-hosted models. The routing logic automatically decides which model serves which request, by task type, data sensitivity, and cost parameters. Employees see a unified interface. Which model runs in the background is invisible to them, but it remains traceable.
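The routing logic described above can be sketched in a few lines. This is a minimal illustration, not any portal's actual implementation; the model names, sensitivity labels, and the `choose_model()` function are assumptions for the example.

```python
from dataclasses import dataclass

@dataclass
class Request:
    task: str           # e.g. "summarize", "code", "translate"
    sensitivity: str    # "public", "internal", "confidential"

def choose_model(req: Request) -> str:
    # Confidential data never leaves the self-hosted model.
    if req.sensitivity == "confidential":
        return "ollama/gpt-oss-120b"
    # Cheap, high-volume tasks go to a small cloud model.
    if req.task in ("summarize", "translate"):
        return "openai/gpt-4o-mini"
    # Everything else defaults to the strongest general-purpose model.
    return "anthropic/claude-sonnet"

print(choose_model(Request(task="code", sensitivity="confidential")))
# -> ollama/gpt-oss-120b
```

In practice this decision table lives in central configuration, so governance rules (e.g. "confidential never goes to a cloud API") are enforced in one place rather than by each user.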
2. Assistant Sharing
Departments create specialized assistants, with their own system prompt, their own documents, and their own rule set. An assistant for the legal department that prepares contract reviews. An assistant for HR that summarizes application materials. An assistant for procurement that compares supplier proposals. These assistants are shared within the department, versioned, and centrally managed.
This is the critical difference from a plain chat interface: not every employee has to write prompts from scratch. Instead, they use an assistant configured and optimized by domain colleagues. This lowers the barrier to entry and raises output quality.
3. Agent Integration
An enterprise portal must go beyond chat. It must integrate AI agents: specialized workflows that process documents, extract data, prepare decisions, or call external systems. The agent is triggered through the portal, its progress is displayed, and its result is documented in the portal.
4. SSO and Role-Based Access Control (RBAC)
Employees sign in through the existing identity management system (Azure AD, Okta, Google Workspace). No separate accounts, no separate passwords. Access control is role-based: who may use which models? Who may create assistants? Who may access which document sources? Who has access to agent workflows?
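The four questions above reduce to a role-to-permission mapping. A minimal sketch, assuming illustrative role and permission names; real portals typically derive the roles from the identity provider's groups rather than hard-coding them:

```python
# Hypothetical role/permission names -- illustration only.
ROLE_PERMISSIONS = {
    "employee": {"use_models"},
    "editor":   {"use_models", "create_assistants"},
    "admin":    {"use_models", "create_assistants", "manage_agents", "export_audit"},
}

def is_allowed(roles: list[str], permission: str) -> bool:
    # A user may hold several roles (e.g. synced from IdP groups);
    # access is granted if any role carries the permission.
    return any(permission in ROLE_PERMISSIONS.get(r, set()) for r in roles)

print(is_allowed(["employee"], "create_assistants"))            # False
print(is_allowed(["employee", "editor"], "create_assistants"))  # True
```

The design choice that matters: permissions are checked server-side per request, so hiding a button in the UI is never the only barrier.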
5. Audit Trail
Every interaction is logged. Who submitted which request and when? Which model responded? Which documents were referenced? What costs were incurred? The audit trail is exportable, for internal audit, for compliance reviews, for EU AI Act documentation.
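A single audit-trail record can be sketched as follows. The field names are assumptions chosen to answer exactly the questions above (who, when, which model, which documents, what cost), not any portal's actual schema:

```python
import json
from datetime import datetime, timezone

def audit_record(user, model, assistant, tokens_in, tokens_out, cost_eur, documents):
    # One record per interaction, serializable as JSON lines or CSV rows.
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "model": model,
        "assistant": assistant,
        "tokens": {"prompt": tokens_in, "completion": tokens_out},
        "cost_eur": round(cost_eur, 4),
        "documents": documents,  # referenced sources, for traceability
    }

entry = audit_record("jdoe", "gpt-4o-mini", "legal-review",
                     1200, 350, 0.0042, ["contract_v3.pdf"])
print(json.dumps(entry))
```

Because each record is self-contained, exports for internal audit or EU AI Act documentation are simple filters over time range and user.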
6. Deployment Flexibility
The portal must be deployable in different environments: as a cloud service (Supabase, Vercel), as a container in a European data center, or on-premises. The hosting decision for the portal follows the same criteria as the hosting decision for the models (see Hosting Strategies).
Open-Source Interfaces Compared
Five open-source projects have positioned themselves as candidates for enterprise AI portals: LobeChat, OpenWebUI, LibreChat, chatbot-ui and very-ai. All five are self-hosted, model-agnostic and provide a chat interface for language models. The differences lie in SSO integration, governance capabilities, PII protection and works council compatibility.
Transparency note: very-ai is developed by Gosign GmbH, the publisher of this article series. We present the strengths and limitations of all five portals equally. very-ai is based on a fork of chatbot-ui (MIT license) and has evolved into a standalone product through 16 enterprise extensions.
Comparison: Five Enterprise AI Portals
| Criterion | LobeChat | OpenWebUI | LibreChat | chatbot-ui | very-ai |
|---|---|---|---|---|---|
| License | Apache 2.0 | MIT | MIT | MIT | Apache 2.0 |
| Origin | Original project | Original project | Original project | Original project | Fork of chatbot-ui |
| Model-Agnostic | ✅ OpenAI, Anthropic, Google, Ollama | ✅ OpenAI, Ollama, LiteLLM | ✅ OpenAI, Anthropic, Google, Mistral | ✅ OpenAI, Anthropic, Google, Ollama | ✅ OpenAI, Anthropic, Google (Vertex AI), Ollama |
| SSO | ❌ Not native | OAuth 2.0 (no native Entra ID) | OAuth 2.0, OpenID Connect | ❌ Not native | ✅ Azure Entra ID native with group & permission sync |
| PII Protection | ❌ | ❌ | ❌ | ❌ | ✅ Detection, anonymization and re-anonymization |
| PII per Assistant/Model | - | - | - | - | ✅ Configurable per assistant AND per model |
| Group Assistants | ❌ | Community models (limited) | Shared conversations | ❌ | ✅ Controlled via Entra ID groups |
| Audit Trail | ❌ | Basic logging | Basic logging | ❌ | ✅ Complete, exportable (CSV/JSON) |
| GDPR Statistics | ❌ | ❌ | ❌ | ❌ | ✅ Anonymized usage statistics |
| Trigger.dev Integration | ❌ | ❌ | ❌ | ❌ | ✅ Workflow trigger from chat |
| Thinking Level | ❌ | ❌ | ❌ | ❌ | ✅ Extended thinking / reasoning control |
| Web/Maps Search | Plugin system | Web search (RAG) | Plugin system | ❌ | ✅ Integrated |
| Docker Self-Hosted | ✅ | ✅ | ✅ | ✅ | ✅ |
| GitHub Stars (Feb 2026) | ~50k | ~60k | ~20k | ~28k | New (open-source launch) |
| Works Council Compatibility | ⚠️ Limited (no audit, no RBAC) | ⚠️ Basic RBAC | ⚠️ Basic RBAC | ❌ No governance | ✅ Audit trail + RBAC + PII + Entra ID |
LobeChat
LobeChat is a visually polished chat interface with a plugin architecture. Its strength lies in its plugin ecosystem and cloud API variety. For enterprise, it lacks robust RBAC, an exportable audit trail, and native agent integration. Suitable as a quick prototype or for small teams; too limited for organization-wide rollout.
OpenWebUI
OpenWebUI is the de facto standard for Ollama-based self-hosting setups. Integration with locally running models is excellent. SSO and basic logging are available. It also includes a built-in RAG pipeline. What is missing: assistant sharing, enterprise agent integration, and centralized management for several hundred users.
LibreChat
LibreChat is an open-source clone of the ChatGPT interface with multi-model support. SSO and basic RBAC are implemented. For organizations that want to replicate a ChatGPT-like experience internally, LibreChat is a solid starting point. The limits lie in agent integration and assistant sharing.
very-ai: Enterprise Portal with PII Protection and Governance
very-ai is an enterprise AI portal based on chatbot-ui (MIT) that adds 16 enterprise features not found in any of the other four portals. It is developed by Gosign GmbH and available under Apache 2.0 on GitHub.
Origin and differentiation: chatbot-ui provides a solid chat interface but lacks SSO integration, audit trail and PII protection. very-ai addresses exactly that: the codebase has been extended with enterprise features required for productive use in regulated environments. Attribution to the original project is documented in the NOTICES file.
PII detection and re-anonymization: The core differentiator. very-ai detects personal data (names, email addresses, phone numbers, IBANs) in user prompts, replaces it with placeholders ([PERSON_1], [EMAIL_1]), sends the anonymized text to the language model, and re-inserts the original data into the response. The user sees real names; the language model never did.
This PII behavior is configurable per assistant and per model: Assistant A may allow PII, while Assistant B anonymizes automatically. Model X receives anonymized data; Model Y (a locally hosted model) receives raw data.
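The placeholder round-trip can be sketched as follows. This is a deliberately reduced illustration, assuming regex-only email detection; production PII detection requires NER models for names, phone numbers, and IBANs, and is not very-ai's actual code:

```python
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def anonymize(text: str):
    # Replace each detected email with a placeholder, remembering the mapping.
    mapping = {}
    def repl(match):
        key = f"[EMAIL_{len(mapping) + 1}]"
        mapping[key] = match.group(0)
        return key
    return EMAIL.sub(repl, text), mapping

def deanonymize(text: str, mapping: dict) -> str:
    # Re-insert the original values into the model's response.
    for key, original in mapping.items():
        text = text.replace(key, original)
    return text

anon, mapping = anonymize("Please reply to anna.schmidt@example.com")
print(anon)                        # Please reply to [EMAIL_1]
print(deanonymize(anon, mapping))  # original address restored
```

The mapping never leaves the portal, which is what makes the round-trip safe: the language model only ever sees the placeholders.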
Azure Entra ID with group sync: Not just authentication but automatic synchronization of Entra ID groups and roles. Employees in the Entra ID group “HR” automatically see HR assistants. Employees in “Finance” see finance assistants. No manual permission management in the portal required. When group membership changes in Entra ID, portal access changes on next login.
Group assistants: Administrators create assistants and assign them to Entra ID groups. These assistants are only visible and usable by members of the respective group. This enables department-specific AI tools without a separate permission management system.
Audit trail and GDPR statistics: Every interaction is logged: user, model, assistant, prompt, response, timestamp, token usage, PII mode. The audit trail is exportable (CSV, JSON) and filterable by time range and user. Usage statistics are anonymized in a GDPR-compliant way: they show model and assistant usage without user-identifying data.
Trigger.dev workflow integration: Users can trigger Trigger.dev workflows from within the chat. This connects the AI portal with the automation layer (→ Article 10: Agent Orchestration Platforms).
Limitations (honest): very-ai is a new open-source project. The community is small compared to LobeChat (50k stars) or OpenWebUI (60k stars). The plugin ecosystems of the established portals are more extensive. Those looking for a portal with maximum community support and plugin variety are better served by LobeChat or OpenWebUI. Those who need PII protection, Entra ID group sync and works-council-compatible logging will currently find this combination only in very-ai.
GitHub: github.com/gosign-de/very-ai
Which portal for which use case?
Maximum model variety and plugin ecosystem: LobeChat offers the largest plugin system, the most active community, and broad model support. Ideal for teams that prioritize flexibility and rapid innovation.
Easiest start with Ollama: OpenWebUI provides native Ollama integration, quick installation, and an intuitive interface. Ideal for local LLM hosting and teams starting with open-source models.
Maximum configurability: LibreChat gives the finest control over model endpoints and parameters. Ideal for technical teams operating multiple providers with different configurations.
Enterprise governance with PII protection: very-ai is the only option with native PII anonymization, Entra ID group sync, and a complete audit trail. Ideal for regulated environments where works councils, data protection, and compliance have a seat at the table.
Evaluation and development: chatbot-ui has a clean codebase and is a good starting point for custom development. Note: chatbot-ui has no active enterprise development; very-ai is the enterprise evolution of this codebase.
Most enterprises evaluate two or three portals in parallel using Docker containers, which is achievable in an afternoon. What matters is not the interface but governance capability: SSO, audit trail, PII protection, and works council compatibility determine which portal makes it to production.
Why “Just a Chat” Is Not Enough
The difference between a chat interface and an enterprise AI portal becomes clear in operation. A comparison:
| Aspect | Chat Interface | Enterprise AI Portal |
|---|---|---|
| Usage | Individual Q&A | Organization-wide tool |
| Knowledge | Every user starts from zero | Assistants bundle domain expertise |
| Control | The user decides what to input | Routing and RBAC manage data flow |
| Traceability | None or limited | Complete audit trail |
| Integration | Standalone | Connected to SSO, agents, document systems |
| Scaling | Per user | Per organization |
| Shadow AI Risk | High (inadequate internal offering) | Low (superior internal offering) |
The central insight: shadow AI does not arise because employees act maliciously. It arises because the internal offering is worse than the public alternative. When the internal portal is as intuitive as ChatGPT but additionally offers specialized assistants, access to corporate documents, and agent workflows, there is no reason to resort to external services.
In Practice: A Mid-Sized Organization with 2,000 Employees
A concrete example illustrates the impact. A manufacturing company with 2,000 employees faced the following starting position:
Before the portal: An internal survey revealed that 340 employees regularly used public AI services for work tasks. Of these, 180 with free accounts (no DPA in place), 120 with personal pro accounts (corporate data in personal accounts), and 40 with company-provided accounts (but without audit trail or access control). IT had no visibility into which data was flowing into which systems.
Portal rollout: Within four weeks, very-ai was deployed, connected to Azure AD for SSO, with three initial assistants (legal, HR, procurement) and a gpt-oss-120b endpoint for confidential data.
After 90 days:
- 15 specialized assistants created by departments
- 1,200 active users per month (out of 2,000 employees)
- Shadow AI usage down by 85% (follow-up survey)
- Complete audit trail: 47,000 logged interactions
- Identification of three processes suited for dedicated agent workflows
- Total cost (portal + hosting + cloud APIs): approximately EUR 4,800 per month
The decisive factor was not technology but adoption. The portal was embraced because it was better than the alternative, not because it was mandated.
Five Success Factors for Rollout
From practical experience, five factors determine whether an enterprise AI portal succeeds or fails:
1. First impressions count. If the internal portal is slower, more cumbersome, or less capable than ChatGPT, employees will not use it again after the first attempt. Answer quality must match public services from day one.
2. Assistants over prompts. Most employees are not prompt engineers. They want to use a tool, not configure one. Specialized assistants prepared by domain colleagues significantly lower the barrier to entry.
3. Visible added value. The portal must offer something public services cannot: access to internal documents (via RAG), specialized assistants for company-specific tasks, integration into existing workflows.
4. IT ownership, not IT control. IT operates the portal and sets governance rules. But departments create their own assistants. This division, infrastructure centrally, content decentrally, has proven to be the most successful model.
5. Measure and communicate. Usage numbers, time saved, reduced shadow AI: these metrics must be collected and communicated to leadership. Without measurable results, there is no basis for the next expansion phase.
Next Step: From Portal to Agent
The enterprise AI portal is the foundation. It gives employees access to AI, controlled and logged. The next step is integrating agents, specialized workflows that go beyond simple question-and-answer interactions. How to deploy AI agents in an enterprise context, which architecture is required, and where the limits lie is covered in another article of this series.
Further reading: AI Infrastructure | Decision Layer & Shadow AI
📘 Enterprise AI Infrastructure Blueprint 2026: Article Series
| ← Previous | Overview | Next → |
|---|---|---|
| AI Hosting: EU SaaS, German Data Center, or Self-Hosted? | Overview | RAG & Document Intelligence: How AI Understands Your Documents |
All articles in this series: Enterprise AI Infrastructure Blueprint 2026
Gosign supports organizations in selecting and deploying Enterprise AI Portals, vendor-neutral.
Book a consultation. We show you very-ai in a live demo and discuss your rollout plan.