netbird-platform — Round 1 Research Report

Session: 20260321-0115
Domain: NetBird Platform Features & Architecture
Date: 2026-03-21
Tools Used: mcp__claude_ai_Tavily__tavily_search (7 queries), mcp__claude_ai_Tavily__tavily_research (1 query), WebSearch (2 queries), mcp__exa_websearch__web_search_exa (2 queries), WebFetch (8 pages), mcp__context7__resolve-library-id (1 query), mcp__context7__query-docs (1 query)


NetBird is a mature, actively-developed open-source WireGuard-based mesh VPN platform with 23,700+ GitHub stars, 131+ contributors, and 306+ releases as of March 2026. The latest version is v0.66.4 (March 11, 2026) with releases averaging 2-3 per week. The project is backed by a German company that received funding from the German Federal Ministry of Education and Research through the StartUpSecure program (November 2022), in partnership with CISPA Helmholtz Center for Information Security.

NetBird is a strong candidate for replacing Palo Alto GlobalProtect for GSISG’s 100+ user deployment. Its self-hosted architecture can run on modest hardware (1 vCPU / 2GB RAM minimum), it supports pfSense integration (though via manual install, not yet in the official pfSense package manager), and it provides split DNS, network routing, and a new built-in reverse proxy feature. However, there are notable caveats: high-availability for the management server requires an enterprise license, the pfSense package is still early-stage (v0.1.0, bundled with NetBird client v0.55.1), and some reliability concerns exist around automatic reconnection after outages.

Overall Assessment: NetBird is technically capable for the GSISG use case but requires careful planning around HA, relay infrastructure sizing, and pfSense integration maturity. For a 100+ user deployment across two offices, the self-hosted option is viable on an Azure B2s (2 vCPU / 4GB RAM) VM with separated relay servers, but an enterprise license should be evaluated for HA guarantees.


| Metric | Value | Confidence | Source |
| --- | --- | --- | --- |
| GitHub Stars | ~23,700 | HIGH | GitHub repo |
| Contributors | 131+ | HIGH | GitHub repo |
| Total Releases | 306+ | HIGH | GitHub releases page |
| Latest Version | v0.66.4 (March 11, 2026) | HIGH | GitHub releases |
| Release Cadence | 2-3 per week (avg. every 2-4 days) | HIGH | Release dates analysis |
| Language | Go (96.5%) | HIGH | GitHub repo |
| License | BSD-3-Clause (client) + AGPLv3 (management/signal/relay, since v0.53.0) | HIGH | GitHub repo |
| Backing | German Federal Ministry of Education and Research / CISPA (StartUpSecure program, Nov 2022) | HIGH | GitHub README |
| Founded | Germany-based company | MEDIUM | Multiple sources |

Release History (last 10):

  • v0.66.4 — March 11, 2026
  • v0.66.3 — March 9, 2026
  • v0.66.2 — March 4, 2026
  • v0.66.1 — March 3, 2026
  • v0.66.0 — February 24, 2026
  • v0.65.3 — February 19, 2026
  • v0.65.2 — February 17, 2026
  • v0.65.1 — February 14, 2026
  • v0.65.0 — February 13, 2026
  • v0.64.6 — February 12, 2026

The project is very actively maintained with a rapid release cadence. Major features ship roughly monthly (v0.60 in Nov 2025, v0.62 in Jan 2026, v0.65 in Feb 2026, v0.66 in late Feb 2026).


| Component | Description | Required? | Resource Notes |
| --- | --- | --- | --- |
| Management Server | Core control plane — peer registration, auth, network map, policy enforcement, API | Yes | Lightweight Go binary; main CPU consumer during peer sync |
| Signal Server | WebRTC-style signaling for peer connection negotiation; encrypted candidate exchange | Yes | Low resource; can stay embedded in combined server |
| Relay Server | QUIC/WebSocket relay for peers that cannot establish direct P2P (replaces legacy Coturn TURN) | Yes | Most resource-intensive under load; CPU scales with relayed traffic |
| STUN Server | Embedded in relay server; used for NAT type discovery and public IP detection | Yes (embedded) | Minimal overhead |
| Dashboard | Web UI for administration (embedded nginx) | Yes | Static files; minimal resources |
| Traefik | Reverse proxy for TLS termination (ports 80/443) | Yes (for self-hosted) | Standard Traefik resource profile |
| Identity Provider (IdP) | SSO authentication; can be embedded (v0.62+) or external | Optional | Embedded: minimal additional overhead; external (Authentik/Keycloak): 2 vCPU + 2GB RAM additional |
| Database | SQLite (default) or PostgreSQL (recommended for 100+ peers) | Yes | SQLite for small; PostgreSQL for production scale |

Architecture Evolution (Important):

  • Pre-v0.62: Required separate containers for management, signal, relay, coturn, dashboard, plus an external IdP (e.g., Zitadel). Total: 7+ containers, 2-4GB RAM minimum.
  • v0.62+: Unified netbird-server container combines management, signal, relay, and embedded STUN. Built-in local user management eliminates mandatory external IdP. Total: 3-4 containers (server, dashboard, traefik, optional DB), ~1GB RAM minimum.
  • v0.65+: Further consolidation with unified server binary. Reverse proxy capability added.

Minimum Infrastructure Requirements (from official docs):

  • 1 Linux VM with at least 1 vCPU and 2GB RAM
  • TCP ports 80, 443 publicly accessible
  • UDP port 3478 publicly accessible
  • A public domain name pointing to the VM’s IP
  • Docker Compose installed
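
These requirements translate to a very small Docker Compose stack. The sketch below is illustrative only: the service layout reflects the v0.62+ unified server described above, but the image names, tags, and port list are assumptions rather than the official compose file. Generate the real file with the quickstart at docs.netbird.io.

```yaml
# Illustrative compose sketch for a v0.62+ unified deployment.
# Image names and env config are assumptions; use the official quickstart.
services:
  netbird-server:        # management + signal + relay + embedded STUN
    image: netbirdio/server:latest
    ports:
      - "3478:3478/udp"  # STUN/relay, must be publicly reachable
    restart: unless-stopped
  dashboard:             # static web UI behind embedded nginx
    image: netbirdio/dashboard:latest
    restart: unless-stopped
  traefik:               # TLS termination on the public 80/443
    image: traefik:v3.0
    ports:
      - "80:80"
      - "443:443"
    restart: unless-stopped
```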

Confidence: HIGH — sourced from official documentation at docs.netbird.io


Short answer: An Azure B1ms (1 vCPU, 2GB RAM) will likely work for the management plane but is risky under relay load. An Azure B2s (2 vCPU, 4GB RAM) is the safer minimum.

| Factor | Assessment |
| --- | --- |
| Management server on 1 vCPU / 2GB RAM | Officially supported as the minimum; will handle peer registration and policy sync |
| 100+ peers simultaneously | The historical 100-peer limit (GitHub issue #1824) was a buffer size bug, not an architectural limit. It was fixed (closed Feb 2025). NetBird states "you can have hundreds of thousands of nodes" |
| Relay traffic at scale | CPU-intensive; one user reported 50% CPU with only 25 peers when many connections are relayed. This is the main concern for B1ms |
| SQLite vs. PostgreSQL | SQLite is default; for 100+ peers, PostgreSQL on a separate instance is recommended |
| With external IdP (Authentik/Keycloak) | Requires 2+ vCPU and 4+ GB RAM total — B1ms is insufficient |

Recommended Azure Architecture for 100+ Users:

| Component | Azure SKU | Est. Cost |
| --- | --- | --- |
| Main server (management + signal + dashboard) | B2s (2 vCPU, 4GB) | ~$30/mo |
| Relay server(s) — 1-2 depending on geography | B1ms or B2s | ~$15-30/mo each |
| PostgreSQL | Azure Database for PostgreSQL (Basic) or co-located | ~$25/mo |
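
Using the table's rough estimates, the monthly bill can be tallied for one or two relay servers. The numbers below are the estimates above, not Azure quotes, and the B1ms relay price is used for both relays.

```python
# Back-of-envelope monthly cost for the recommended Azure layout.
# Prices are the rough estimates from the table above, not quotes.
COSTS = {
    "main server (B2s)": 30,
    "relay server (B1ms)": 15,   # per relay; one per region to start
    "PostgreSQL (Basic)": 25,
}

def monthly_total(relay_count: int = 1) -> int:
    """Sum the estimated monthly cost for a given number of relays."""
    base = COSTS["main server (B2s)"] + COSTS["PostgreSQL (Basic)"]
    return base + relay_count * COSTS["relay server (B1ms)"]

print(monthly_total(1))  # single-relay starting point: 70
print(monthly_total(2))  # one relay per office: 85
```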

Scaling path (from NetBird docs):

  1. Start with single-server deployment
  2. Extract relay servers to separate VMs (most common first step)
  3. Move PostgreSQL to dedicated instance
  4. Signal server extraction (rarely needed)
  5. Management + Signal HA requires enterprise commercial license

Confidence: MEDIUM-HIGH — official docs confirm 1vCPU/2GB minimum; scaling recommendations from docs + community reports. Azure-specific sizing is extrapolated.


The NetBird pfSense package is NOT in the official pfSense package manager. It requires manual installation.

| Detail | Status |
| --- | --- |
| Official pfSense repo | NOT available — PR submitted in Aug 2025, still pending review as of March 2026 |
| Installation method | Manual pkg add from GitHub releases |
| pfSense package version | v0.1.0 |
| Bundled NetBird client version | v0.55.1 (significantly behind latest v0.66.4) |
| pfSense versions supported | Not explicitly documented; works on FreeBSD-based pfSense CE and Plus |
| OPNsense support | Also available (announced Sep 2025 newsletter) |
| FreeBSD ports | NetBird was accepted into the FreeBSD ports repository (Jun 2025) |
| Update mechanism | Manual download and reinstall of .pkg files |

Known Limitations on pfSense:

  1. Version lag: pfSense package bundles NetBird v0.55.1 while the mainline is at v0.66.4 — an 11-version gap
  2. NAT traversal conflict: pfSense’s automatic outbound NAT randomizes source ports, which breaks NetBird’s NAT traversal. Requires manual static port NAT rule configuration
  3. Site-to-site routing issues: GitHub issue reports problems with LAN resource routing through pfSense (#5 in pfsense-netbird repo)
  4. Crash on update: NetBird 0.61.0 and pfSense package v0.2.1 reported segfault when trying to update
  5. No automatic updates: Must manually download and install new versions
  6. LAN scanning disruption: Reported that network scanning tools cause disruption when NetBird is running on pfSense
  7. WireGuard package conflicts: Installing NetBird can cause issues with existing WireGuard packages on pfSense

Confidence: HIGH — sourced from official NetBird pfSense docs, GitHub issues, and pfSense package list


5. Always-On Mode & Management Server Resilience

NetBird supports always-on operation through two mechanisms:

Setup Key Peers (servers, routing peers, headless devices):

  • No login expiration — tunnels stay up indefinitely
  • No SSO interaction required
  • Automatically reconnects after reboots (daemon auto-starts)
  • Setup keys can be one-off or reusable, with optional expiration dates
  • Revoking a key does NOT immediately disconnect already-connected peers

SSO Peers (user workstations):

  • Default session expiration: 24 hours (configurable from 1 hour to 180 days)
  • Session expiration can be disabled globally or per-peer
  • When session expires, tunnel is torn down and user must re-authenticate via browser
  • After re-authentication, tunnel automatically re-establishes
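
The difference between the two peer classes reduces to whether a session can expire. The toy model below encodes the behavior described above; the function and field names are invented for illustration, not NetBird's data model.

```python
from datetime import datetime, timedelta

# Toy model of the two peer classes described above.
# Names are illustrative, not NetBird's internal API.
DEFAULT_SSO_EXPIRY = timedelta(hours=24)  # configurable 1h-180d, or disabled

def session_valid(peer_type: str, last_auth: datetime,
                  now: datetime, expiry: timedelta = DEFAULT_SSO_EXPIRY) -> bool:
    """Setup-key peers never expire; SSO peers must re-auth after `expiry`."""
    if peer_type == "setup-key":
        return True  # tunnels stay up indefinitely, no SSO interaction
    return now - last_auth < expiry

now = datetime(2026, 3, 21, 12, 0)
print(session_valid("setup-key", now - timedelta(days=90), now))  # True
print(session_valid("sso", now - timedelta(hours=23), now))       # True
print(session_valid("sso", now - timedelta(hours=25), now))       # False
```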

Behavior during management server outages and network interruptions:

| Scenario | Behavior |
| --- | --- |
| Existing P2P tunnels | Survive — WireGuard tunnels are end-to-end encrypted between peers; the management server is not in the data path |
| Existing relayed connections | Continue working as long as the relay server is reachable (relay is a separate component) |
| New peer connections | Cannot be established — peers need the management server for initial configuration and peer discovery |
| Policy changes | Cannot be applied until the management server returns |
| Peer reconnection after brief network glitch | Generally works automatically |
| Peer reconnection after prolonged outage | Known issue — clients sometimes stay offline indefinitely and require a manual netbird up command |
| DNS resolution | Continues working if DNS servers are reachable; cached entries persist |

Critical architectural insight: The management server maintains “a control channel open to each peer sending network updates.” This is used for configuration sync, not for data plane traffic. The data plane (WireGuard tunnels) operates independently once established.

Known reliability concerns (from GitHub issues and Tavily research):

  • Automatic reconnection after network outages is not always reliable (issues #1361, #1853)
  • UI may show “connected” status while the underlying WireGuard handshake has timed out (issue #2109)
  • After a management server restart, in-memory sessions are lost; clients must reconnect
  • NetBird auto-starts after reboot even if user previously ran netbird down (may be unexpected)

Confidence: HIGH for architecture details; MEDIUM for reliability concerns (sourced from multiple GitHub issues and community reports, but may be version-specific)


NetBird uses ICE (Interactive Connectivity Establishment) — the same protocol underlying WebRTC — for NAT traversal.

  1. Candidate gathering: Discovers host candidates (local IPs), server reflexive candidates (via STUN), and relay candidates
  2. Candidate exchange: Peers share candidates via the Signal server (encrypted with peer keys)
  3. Connectivity checks: Tests candidate pairs, prioritizing direct connections
  4. NAT hole punching: Simultaneous UDP packets from both peers to “punch holes” through NAT
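
Step 3's prioritization follows the standard ICE formula from RFC 8445 section 5.1.2.1. The sketch below uses the RFC's recommended type-preference values; it is not NetBird-specific code.

```python
# RFC 8445 candidate priority: higher wins during connectivity checks.
# Standard type preferences: host > peer-reflexive > server-reflexive > relay.
TYPE_PREFERENCE = {"host": 126, "prflx": 110, "srflx": 100, "relay": 0}

def candidate_priority(cand_type: str, local_pref: int = 65535,
                       component_id: int = 1) -> int:
    """Compute the RFC 8445 section 5.1.2.1 priority value."""
    return ((2**24) * TYPE_PREFERENCE[cand_type]
            + (2**8) * local_pref
            + (256 - component_id))

# Direct (host) candidates always outrank relayed ones, which is why
# connectivity checks prioritize direct connections.
assert candidate_priority("host") > candidate_priority("srflx") > candidate_priority("relay")
print(candidate_priority("host"))   # 2130706431
```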

| NAT Type | P2P Likelihood | NetBird Behavior |
| --- | --- | --- |
| Easy/Full Cone NAT (home routers) | High | Direct P2P via STUN discovery |
| Port-Restricted Cone NAT | Medium-High | P2P usually succeeds with hole punching |
| Symmetric NAT (corporate firewalls) | Low | Falls back to relay; public IP:port mapping changes per destination |
| CGNAT (mobile/ISP) | Low-Medium | Double NAT, shared IPs, short timeouts; often relayed |
| 1:1 NAT (cloud VMs) | High | Behaves like easy NAT if security groups permit UDP |
| No NAT (data center) | Very High | Direct connection trivial |

  • Primary transport: QUIC (UDP-based, high performance)
  • Fallback transport: WebSocket over TCP port 443 (for networks that block UDP)
  • Both protocols are raced simultaneously — whichever succeeds first is used
  • Relay traffic remains end-to-end encrypted via WireGuard — relay cannot decrypt
  • NetBird Cloud provides globally distributed relay servers
  • Self-hosted deployments must provision their own relay infrastructure
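
The transport racing can be sketched with a first-success-wins pattern. The dialers below are stand-ins that simulate a UDP-blocked network; this is not NetBird's actual relay client API.

```python
# Sketch of "race both transports, first success wins" -- the pattern
# described for QUIC vs. WebSocket. The dialers are toy stand-ins.
from concurrent.futures import ThreadPoolExecutor, as_completed

def race(dialers):
    """Run all dialers concurrently; return the first successful result."""
    with ThreadPoolExecutor(max_workers=len(dialers)) as pool:
        futures = {pool.submit(d): name for name, d in dialers.items()}
        for fut in as_completed(futures):
            try:
                return futures[fut], fut.result()
            except ConnectionError:
                continue  # that transport failed; wait for the other
    raise ConnectionError("all transports failed")

def quic_dial():       # pretend UDP is blocked on this network
    raise ConnectionError("udp blocked")

def websocket_dial():  # TCP 443 almost always gets through
    return "ws-conn"

transport, conn = race({"quic": quic_dial, "websocket": websocket_dial})
print(transport, conn)  # websocket ws-conn
```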

No official percentage is published. From the NetBird forum: “there isn’t a direct percentage to give” as it depends entirely on the NAT environments involved. Based on Tailscale’s published data (which uses similar ICE/STUN/TURN techniques), typical deployments achieve >90% direct connections in favorable network conditions. Corporate environments with symmetric NAT will see higher relay usage.

Confidence: HIGH for architecture; MEDIUM for percentage estimates (extrapolated from Tailscale data and NAT traversal theory)


Networks is the newer feature; Network Routes is the legacy feature. Both are actively maintained.

Networks:

  • Configuration container mapping on-prem/cloud infrastructure into logical resources
  • Resources can be IP ranges, domains, or subnets
  • Routing peers forward traffic between the NetBird mesh and internal networks
  • Supports domain-based resources with automatic DNS resolution
  • Access control is enforced through policies (not bypassed by default)
  • Better integration with the newer Access Control system
  • Supports lazy DNS-based routing (routing rules set up only when DNS resolves)

Network Routes (legacy):

  • IP-range-based routing (CIDR notation)
  • Routing peer forwards packets between the NetBird mesh and private networks
  • Bypasses Access Control rules by default — traffic flows freely unless explicitly restricted
  • Supports overlapping route handling and route selection
  • Supports exit nodes (route all internet traffic through a peer)
  • More flexible for raw IP-based routing scenarios

| Feature | Networks | Network Routes |
| --- | --- | --- |
| Access control | Enforced by default | Bypassed by default |
| Resource types | IPs, domains, subnets | CIDR ranges only |
| Domain-based routing | Yes | No |
| Exit nodes | No | Yes |
| Overlapping routes | Limited | Full support |
| Recommended for new setups | Yes | Only for unsupported scenarios |
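
The access-control row is the one most likely to surprise operators. The toy sketch below illustrates the difference in defaults; the function and names are invented for illustration, and the "unless explicitly restricted" branch of Network Routes is omitted for brevity.

```python
# Toy model of the default-enforcement difference:
# Networks resources are deny-by-default (a policy must allow access),
# while legacy Network Routes pass traffic unless explicitly restricted.
def allowed(feature: str, has_matching_policy: bool) -> bool:
    if feature == "networks":
        return has_matching_policy        # enforced by default
    if feature == "network-routes":
        return True                       # bypassed by default
    raise ValueError(feature)

print(allowed("networks", False))         # False: no policy, no access
print(allowed("network-routes", False))   # True: flows freely
```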

For GSISG Site-to-Site (Honolulu + Boulder)

Networks is the recommended approach for new deployments. Each office would have:

  1. A routing peer (pfSense or dedicated Linux box) running the NetBird agent
  2. A Network defined for each office’s LAN subnet
  3. Resources representing the internal services at each location
  4. Policies controlling which users/groups can access which resources

Confidence: HIGH — sourced from official Networks docs and Network Routes docs


Yes, NetBird supports split DNS (split-horizon DNS). This is a mature feature with multiple configuration options.

  1. Primary Nameserver: Handles all DNS queries not matched by specific domains. Configured with a public DNS provider (e.g., Cloudflare 1.1.1.1). Leave “Match Domains” empty.

  2. Match Domain Nameservers: Handle queries for specific internal domains. Point to your internal DNS servers. Configure with match domains (e.g., company.internal).

  3. Custom DNS Zones (v0.63+): Define private DNS records directly in NetBird, distributed to peers without needing an external DNS server. Takes precedence over nameservers.

Example configuration:

  • Primary: Cloudflare (1.1.1.1, 1.0.0.1) for All Peers, no match domains
  • Match: Internal DNS (10.0.0.1, 10.0.0.2) for All Peers, match domain company.internal
  • Result: google.com goes to Cloudflare; app.company.internal goes to internal DNS

Additional capabilities:

  • Search domain expansion (server expands to server.company.internal)
  • Distribution group-based application (different DNS configs for different peer groups)
  • DNS failover across multiple nameservers
  • Custom Zones for infrastructure-less internal DNS
  • Automatic peer FQDN resolution (hostname.netbird.cloud)

Limitations:

  • No query caching (except on routing peers during failures)
  • Match domain support requires macOS, Windows 10+, or Linux with systemd-resolved
  • Android requires Private DNS to be disabled
  • Known issues with DNS resolution conflicts when roaming between networks
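
The primary/match-domain selection described above amounts to longest-suffix matching with fallback to the primary nameserver. The sketch below mirrors the example configuration; it is not NetBird's resolver implementation.

```python
# Split-DNS selection sketch: the longest matching match-domain suffix
# wins; queries matching nothing fall back to the primary nameserver.
def pick_nameserver(query: str, nameservers: list[tuple[str, list[str]]]) -> str:
    """nameservers: (server, match_domains); empty match_domains = primary."""
    best, best_len = None, -1
    for server, domains in nameservers:
        for d in domains:
            if query == d or query.endswith("." + d):
                if len(d) > best_len:
                    best, best_len = server, len(d)
    if best:
        return best
    # fall back to the primary nameserver (no match domains configured)
    return next(s for s, d in nameservers if not d)

ns = [("1.1.1.1", []), ("10.0.0.1", ["company.internal"])]
print(pick_nameserver("google.com", ns))            # 1.1.1.1
print(pick_nameserver("app.company.internal", ns))  # 10.0.0.1
```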

Confidence: HIGH — sourced from official DNS docs and custom zones docs


NetBird v0.65+ includes a built-in reverse proxy in the management server that exposes internal services to the public internet with optional authentication, automatic TLS, and WireGuard tunnel routing. It is currently in beta.

How it works:

  1. Configure a “Service” in the NetBird dashboard mapping a public domain to an internal target
  2. NetBird provisions TLS certificates automatically (Let’s Encrypt)
  3. Incoming HTTPS requests are terminated at the NetBird proxy cluster
  4. Traffic is forwarded through an encrypted WireGuard tunnel to the target peer/resource
  5. No inbound ports or public IP needed on the internal service
Authentication options:

  • SSO (Single Sign-On via OIDC)
  • Password (shared password)
  • PIN Code (numeric PIN)
  • None (public access)

Target types:

  • Peers (machines running the NetBird agent)
  • Hosts (IP addresses)
  • Domains (domain-identified resources)
  • Subnets (CIDR ranges)

Domain options:

  • Cloud: auto-generated {subdomain}.{nonce}.{cluster}.proxy.netbird.io
  • Self-hosted: cluster domains {subdomain}.{proxy-domain}
  • Custom domains via CNAME records

Availability:

  • Self-hosted: available now (beta)
  • Cloud: available now (beta)

Limitations:

  • Does not support pre-shared keys or Rosenpass
  • Self-hosted deployments must use Traefik (not Nginx/Caddy/HAProxy) due to the TLS passthrough requirement
  • UDP traffic is not supported through the reverse proxy
  • Some deployment issues reported (GitHub issue #5492 — embedded client auth failures)
  • No IP-based auth bypass yet (feature request #5556)

For GSISG, this could potentially replace:

  • Cloudflare Tunnels for exposing internal services
  • Standalone reverse proxy configurations for authenticated service access

However, it is beta software and should not be relied upon for production without thorough testing.

Confidence: HIGH for features; MEDIUM for production readiness (beta status, known bugs)


10. Known Limitations vs. Tailscale and Traditional VPNs

| Area | NetBird Limitation | Tailscale Advantage |
| --- | --- | --- |
| Auto-reconnection | Clients sometimes stay offline after network outages; manual intervention needed | Continuously retries and usually restores automatically |
| System service mode | Cannot run as Windows SYSTEM account (feature request open) | Runs as a system service natively on Windows/Linux |
| Status reporting | UI may show "connected" while the WireGuard handshake has timed out | UI reliably reflects connection state |
| Client maturity | Younger client with more rough edges | More mature, polished client experience |
| Ecosystem size | Smaller ecosystem, fewer integrations | Larger ecosystem, Mullvad VPN integration, Taildrop file sharing |
| HA for control plane | Enterprise license required for management HA | Built-in HA in SaaS offering |
| Mobile experience | Functional but less polished | More refined mobile apps |
| Post-manual-stop behavior | Auto-starts on reboot even after netbird down | Respects manual stop across reboots |

| Area | NetBird Advantage |
| --- | --- |
| Self-hosting | Full control plane self-hosting (official, not reverse-engineered) |
| Data sovereignty | Complete control over all data and metadata |
| Reverse proxy | Built-in with custom domains, SSO, PIN auth |
| Access control UI | GUI-based policy management vs. JSON ACL files |
| License | BSD-3/AGPLv3 (fully open server) vs. proprietary control plane |
| Custom domains | Supported in reverse proxy vs. only *.ts.net in Tailscale Funnel |
| MSP multi-tenant | Native MSP portal for managing multiple client networks |

vs. Traditional VPNs (Palo Alto GlobalProtect)

| Area | NetBird Consideration |
| --- | --- |
| Central management | Less mature admin console than GlobalProtect Panorama |
| Compliance certifications | No FIPS 140-2 or Common Criteria; not SOC2 certified (GlobalProtect is) |
| Enterprise support | Community + ticketing support; no 24/7 TAC equivalent |
| Traffic inspection | No inline DLP/threat prevention (GlobalProtect has it) |
| IPv6 | Single flat IPv4 network only (100.64.0.0/10); no IPv6 overlay support |
| Static IP per user | Not supported (Defguard comparison confirms this) |
| Active-Active HA | Not available in open-source; requires enterprise license |
| Audit/compliance logging | Traffic event logging is cloud-only or enterprise |
| SCIM provisioning | Requires commercial license for self-hosted |

  1. Single flat IPv4 network — all peers share the 100.64.0.0/10 address space; no multiple network segments
  2. No IPv6 overlay support — IPv4 only for the mesh network
  3. Persistent keepalive fixed at 25 seconds — not configurable; wastes battery on mobile
  4. Self-hosted HA requires enterprise license — management and signal HA not available in open-source
  5. Some enterprise features cloud-only — traffic event logging, EDR integration, SCIM provisioning
  6. pfSense integration immature — manual install, version lag, known bugs
  7. DNS roaming issues — switching networks can break DNS resolution (community workaround: disable DNS management)
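
Limitations 1 and 2 can be checked with the standard library: the whole mesh shares the CGNAT block 100.64.0.0/10, a single IPv4 space.

```python
# The entire mesh lives in the CGNAT range 100.64.0.0/10 -- one flat
# IPv4 space, no segments, no IPv6 overlay.
import ipaddress

mesh = ipaddress.ip_network("100.64.0.0/10")
print(mesh.num_addresses)  # 4194304 -- room for many peers, one segment
print(mesh.version)        # 4 -- IPv4 only, matching limitation 2
```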

Confidence: HIGH — cross-referenced across multiple comparison articles, GitHub issues, and official documentation


Q1: Current stable version, release cadence, and project maturity?

Latest stable: v0.66.4 (March 11, 2026). Release cadence: 2-3 releases per week, major features monthly. Maturity: 23,700+ GitHub stars, 131+ contributors, 306+ releases, 2,703+ commits. Backed by German Federal Ministry of Education and Research through StartUpSecure program (since Nov 2022). Dual-licensed BSD-3-Clause (client) + AGPLv3 (server). Written in Go (96.5%). Very active development and growing community.

Q2: Exact components of a self-hosted deployment?

Since v0.62+, a minimal self-hosted deployment requires: (1) netbird-server container (combines management, signal, relay, STUN), (2) dashboard container (web UI with embedded nginx), (3) Traefik container (reverse proxy, TLS), and optionally (4) PostgreSQL for production scale. An external IdP is no longer mandatory — local user management is built-in. Minimum resources: 1 vCPU, 2GB RAM, public IP, domain name.

Q3: Can it run on Azure B1ms for 100+ peers?

Technically yes for the management plane, but risky. The management server itself is lightweight, and the 100-peer limit bug was fixed. However, relay traffic is CPU-intensive (50% CPU reported with 25 peers on relay). Recommendation: Use Azure B2s (2 vCPU, 4GB RAM) for the main server, with separate relay server(s) on B1ms/B2s instances. Use PostgreSQL instead of SQLite for 100+ peers. If using an external IdP, double the resources.

Q4: Is the NetBird package available in the official pfSense package manager?

NOT in the official pfSense package manager. A PR was submitted in Aug 2025 and is still pending review as of March 2026. Installation is manual via pkg add from GitHub releases. The pfSense package is v0.1.0 with NetBird client v0.55.1 (11 versions behind current). Known issues include NAT traversal conflicts with pfSense’s automatic outbound NAT, segfaults on update, and WireGuard package conflicts. pfSense version compatibility is not explicitly documented. NetBird is available in the FreeBSD ports repository (since June 2025).

Q5: How do always-on peers behave, and what happens when the management server is down?

Setup key peers: No session expiration; tunnels stay up indefinitely; auto-reconnect after reboot. Best for servers and routing peers. SSO peers: Default 24-hour session expiration (configurable 1h to 180 days, or disabled). When expired, tunnel is torn down until re-authentication. Management server down: Existing P2P tunnels survive (WireGuard data plane is independent). Existing relayed connections survive if relay is up. New connections cannot be established. Known issue: Clients sometimes fail to automatically reconnect after prolonged outages.

Q6: How does NAT traversal work, and what share of connections are direct vs. relayed?

Uses ICE protocol (same as WebRTC) with STUN for NAT discovery and custom relay (QUIC primary, WebSocket fallback) for unreachable peers. Handles easy NAT well (direct P2P). Symmetric NAT and CGNAT typically fall back to relay. Corporate firewalls with UDP blocking use WebSocket relay over TCP 443. No official direct-vs-relayed percentage is published; estimated >90% direct in favorable conditions based on similar technology benchmarks. All relay traffic remains end-to-end encrypted via WireGuard.

Q7: Networks vs. Network Routes for site-to-site routing?

Networks (newer) is the recommended feature for site-to-site routing. It supports domain-based resources, enforces access control by default, and integrates with the modern policy system. Network Routes (legacy) supports CIDR-based routing, exit nodes, and overlapping routes but bypasses ACL by default. For GSISG’s Honolulu-Boulder site-to-site, use Networks with routing peers at each office and policies controlling inter-site access.

Q8: Is split DNS supported?

Yes, fully supported. Configure primary nameservers for internet DNS and match domain nameservers for internal domains. Custom DNS Zones (v0.63+) allow defining records directly in NetBird without external DNS infrastructure. Search domain expansion is supported. Distribution groups allow different DNS configurations for different peer groups. Limitations: no query caching, requires systemd-resolved on Linux, Android requires Private DNS disabled.

Q9: What is the new reverse proxy feature?

Introduced in v0.65.0 (February 2026). Exposes internal services to the public internet with automatic TLS, optional SSO/password/PIN authentication, and WireGuard tunnel routing. Available for both self-hosted (beta) and cloud. Self-hosted requires Traefik. Could replace Cloudflare Tunnels for authenticated service exposure. netbird expose CLI command (v0.66) enables one-command ephemeral service exposure. Still in beta with known deployment issues.

Q10: Known limitations vs. Tailscale and traditional VPNs?

Key limitations: (1) auto-reconnection reliability issues, (2) no Windows SYSTEM service support, (3) single flat IPv4 network only, (4) no IPv6 overlay, (5) fixed 25s keepalive, (6) self-hosted HA requires enterprise license, (7) some enterprise features cloud-only, (8) pfSense integration immature, (9) smaller ecosystem than Tailscale, (10) no FIPS/SOC2 compliance certifications (important vs. GlobalProtect). Key advantages: full self-hosting, data sovereignty, built-in reverse proxy with custom domains, GUI policy management, open-source server components.


  1. Exact resource consumption at 100+ peers — No official benchmarks or sizing guides for specific peer counts. Community reports are anecdotal.

  2. Direct vs. relayed connection percentages — NetBird does not publish statistics. Real-world ratios depend heavily on the specific network environments.

  3. pfSense version compatibility — Not documented which pfSense CE/Plus versions are explicitly supported.

  4. Enterprise license pricing — Custom pricing only; not publicly available. Required for self-hosted HA, SCIM, and some advanced features.

  5. Management server HA architecture — Details of the enterprise HA implementation are not publicly documented.

  6. Long-term reliability at scale — Limited public data on deployments with 100+ sustained peers. One user switched back to Nebula after reliability issues (GitHub #3121).

  7. pfSense package update timeline — The PR to the official pfSense repo has been pending since Aug 2025 with no timeline for review.

  8. Reverse proxy production readiness — Beta status with known authentication bugs (#5492). Production viability for GSISG needs validation.

  9. Mobile client maturity — Limited data on iOS/Android stability for always-on enterprise use.

  10. Compliance certifications — No information found about SOC2, FedRAMP, HIPAA, or other compliance frameworks. This may be a concern for a company replacing enterprise-grade GlobalProtect.


| Tool | Queries | Purpose |
| --- | --- | --- |
| mcp__claude_ai_Tavily__tavily_search | 7 | Primary web search for current info across all topics |
| mcp__claude_ai_Tavily__tavily_research | 1 | Deep research on always-on mode and tunnel persistence |
| WebSearch | 2 | Cross-reference GitHub stats and Azure scaling info |
| mcp__exa_websearch__web_search_exa | 2 | Deep search for Tailscale comparisons and pfSense details |
| WebFetch | 8 | Direct documentation page fetching |
| mcp__context7__resolve-library-id | 1 | Library ID resolution for NetBird docs |
| mcp__context7__query-docs | 1 | Structured documentation query |

Official Documentation:

GitHub:

Comparison Articles:

Community:

Product Pages: