
Azure Sizing & GlobalProtect Coexistence — Round 2 Research Report


Session: 20260321-0115
Domain: Azure VM Sizing for 100+ Peers and GlobalProtect Coexistence During Migration
Date: 2026-03-21
Round: 2 (resolving R1 conflicts)
Tools Used: mcp__claude_ai_Tavily__tavily_search (12 queries), WebFetch (7 pages), Bash (1 command)


Round 1 produced conflicting Azure VM sizing recommendations (B1ms at $15/mo vs B2s at $30/mo). After deeper investigation, the answer is nuanced: B1ms is sufficient for the management plane with 100-150 peers under normal conditions, but B2s provides essential headroom for relay traffic spikes and policy evaluation at scale. The key variable is not peer count but policy complexity — GitHub issue #4488 demonstrates that 70 peers with 480 policies consumed 8 vCPU due to posture check inefficiency. For GSISG’s simpler policy set (likely <20 policies), B1ms is viable, but B2s at $30/mo eliminates risk for only $15/mo more.

SQLite is adequate for 100-150 peers with modest policy sets. PostgreSQL becomes necessary at 300+ peers or when events.db bloat causes I/O contention. A separate relay server is unnecessary for this deployment — most connections will be direct P2P, and the embedded relay in the combined server handles the minority of relayed connections without issue.

Regarding GlobalProtect coexistence: the issue is real but well-understood. GP triggers NetBird’s network monitor to restart the WireGuard interface, killing active sessions. The workaround (--network-monitor=false) is safe during migration with manageable side effects. The fix (PR #5156) remains open and unmerged as of March 2026. Both VPNs CAN run simultaneously with the workaround — they do not fight over default routes or DNS by design, because NetBird uses split routing (100.64.0.0/10 overlay) while GP uses its own tunnel for corporate routes.


Question 1: Azure VM Sizing for 100-150 Peers


The Definitive Answer: B2s ($30/mo) Is the Right Choice, B1ms Is Viable Fallback


The confusion in R1 arose because both agents were partly correct but examined different aspects:

Management plane (control traffic only): B1ms (1 vCPU / 2 GB) is sufficient. The management server is a lightweight Go binary handling peer registration, policy sync, and signaling. Official docs confirm “1 CPU and 2 GB of memory” as the minimum. A Hacker News commenter reports running “1k active users setup, super efficient and stable.”

With relay traffic: This is where B1ms becomes risky. One community report noted 50% CPU with 25 peers when many connections were relayed. However, this was on older versions with Coturn TURN relay. The v0.62+ built-in relay uses QUIC, which is more CPU-efficient.

With complex policies: GitHub issue #4488 reveals the real scaling bottleneck. With 480 network policies and 70 peers, 8 vCPU was required because posture checks (OS version, NB version) used excessive regexp operations on every peer sync. This is a pathological case — GSISG will have far fewer policies.

Resource Consumption Estimates for GSISG (100-150 peers)

| Component | CPU Impact | RAM Impact | Notes |
| --- | --- | --- | --- |
| Management server (peer sync) | ~5-15% of 1 vCPU | ~200-400 MB | Scales with peer count * policy count |
| Signal server (connection setup) | <5% of 1 vCPU | ~50-100 MB | Only active during connection establishment |
| Embedded relay (relayed traffic) | 0-30% of 1 vCPU | ~50-150 MB | Depends on how many peers relay; see Q3 |
| Dashboard + Traefik | <5% of 1 vCPU | ~100-200 MB | Static serving, minimal impact |
| SQLite I/O | Minimal | N/A | See Q4 |
| Total estimated | ~15-50% of 1 vCPU | ~400-850 MB | Under normal operation |
| Peak (all peers reconnecting) | ~80-100% of 1 vCPU | ~1-1.5 GB | After server restart or network event |

| Factor | B1ms (1 vCPU / 2 GB) | B2s (2 vCPU / 4 GB) |
| --- | --- | --- |
| Normal operation | Sufficient | Comfortable |
| Mass reconnection event | CPU-constrained; peers queue | Handles gracefully |
| Relay under load | Risk of CPU starvation | Adequate headroom |
| Future growth (150-250 peers) | Requires migration | Handles without changes |
| Docker overhead (4 containers) | Tight on 2 GB RAM | Comfortable on 4 GB RAM |
| Monthly cost (pay-as-you-go) | ~$15.11 | ~$30.37 |
| Cost difference | Baseline | +$15.23/mo ($183/yr) |

Verdict: B2s at $30/mo buys meaningful risk reduction for $15/mo more. The B-series burstable model is ideal because NetBird’s workload is bursty (low baseline, spikes during peer registration).

  1. Start with B2s (single server, embedded relay, SQLite)
  2. If relay CPU becomes a concern, extract relay to separate B1ms
  3. If database I/O becomes a concern (300+ peers), migrate to PostgreSQL
  4. B2s handles up to ~300 peers comfortably; beyond that, consider B4ms or separated components
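The escalation path above is low-friction because a B-series VM can be resized in place with the Azure CLI. A minimal sketch, assuming hypothetical resource group and VM names (`rg-netbird`, `vm-netbird`):

```shell
# Sketch: resize the NetBird VM if it outgrows B2s. Resource group and
# VM names are placeholders; deallocating first is the safe path for a resize.
az vm deallocate --resource-group rg-netbird --name vm-netbird
az vm resize --resource-group rg-netbird --name vm-netbird --size Standard_B4ms
az vm start --resource-group rg-netbird --name vm-netbird
```

The resize causes a short outage while the VM restarts, so schedule it outside office hours.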

Question 2: Combined Server Binary Resource Usage (v0.62+)


Before v0.62, NetBird required 7+ containers: management, signal, relay (Coturn), dashboard, Traefik, external IdP (e.g., Zitadel), and optional database. This consumed 2-4 GB RAM minimum.

Since v0.62, the unified netbird-server container combines management, signal, relay, and embedded STUN into a single Go binary. Built-in local user management eliminates the mandatory external IdP. The deployment now requires only 3-4 containers total.

| Source | Scale | Infrastructure | Observation |
| --- | --- | --- | --- |
| NetBird official docs | Minimum | Any | “1 CPU and 2 GB of memory” |
| Cloudron Forum user | ~20 peers | Hetzner CX11 (1 vCPU, 2 GB) | “Running smoothly for over a year” |
| Carl Pearson (Hetzner guide) | Small deployment | “Even the leanest VPS is enough” | No resource issues |
| HN commenter | 1,000 active users | Not specified | “super efficient and stable” |
| GitHub #4488 | 70 peers, 480 policies | 8 vCPU required | Pathological: posture check CPU burn |
| GitHub #1473 | 370 users, 141 groups | 16 CPU / 16 GB insufficient | SQLite locking + high policy count |
| dev.to self-hosting guide | Small/medium | VPS with 1-2 GB | Successful deployment |

Key Insight: Policy Complexity Matters More Than Peer Count


The 1k-user success and the 70-peer failure are not contradictory. The difference is policy complexity:

  • Simple policies (e.g., “All Peers can access Routing Peer”): O(n) computation per peer sync
  • Complex policies (480 policies with posture checks): O(n * p * g) where n=peers, p=policies, g=groups

For GSISG with likely 5-15 policies and 2-3 groups, the compute overhead per peer sync is trivial. The management server will idle at <10% CPU between authentication events.
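The O(n * p * g) point can be made concrete with back-of-envelope arithmetic. The group count for issue #4488 below is an assumption (the issue reports only peers and policies); GSISG figures use the upper end of the estimates above:

```shell
# Rough policy-evaluation counts per full sync cycle (n peers * p policies * g groups).
gsisg=$((150 * 15 * 3))            # GSISG worst case: 150 peers, 15 policies, 3 groups
pathological=$((70 * 480 * 10))    # issue #4488 scale, assuming ~10 groups
echo "GSISG: $gsisg evaluations"              # 6750
echo "Issue #4488 scale: $pathological"       # 336000, roughly 50x more work per sync
```

Even a 10x error in these assumptions leaves GSISG orders of magnitude below the pathological case.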

| Component | Estimated RAM | Basis |
| --- | --- | --- |
| netbird-server (management + signal + relay) | 200-500 MB | Go binary, scales with peer count |
| dashboard (nginx) | 50-100 MB | Static files |
| Traefik | 50-100 MB | Reverse proxy |
| Docker overhead | 100-200 MB | Container runtime |
| Total | 400-900 MB | For 100-150 peers |

Conclusion: 2 GB RAM (B1ms) is tight but workable. 4 GB RAM (B2s) provides comfortable headroom for spikes and eliminates OOM risk.


Question 3: Is a Separate Relay Server Needed?

Short Answer: The Embedded Relay Is Sufficient for 100-150 Peers


A separate relay server is not needed for GSISG’s deployment. Here is the analysis:

Direct P2P vs. Relayed Connection Estimation


NetBird uses ICE (Interactive Connectivity Establishment) to maximize direct P2P connections. The relay is only used when both peers are behind symmetric NAT or restrictive firewalls.

GSISG’s network profile:

| User Category | Count | NAT Type | Expected Connection Type |
| --- | --- | --- | --- |
| Office workers (Honolulu, on corporate LAN) | ~60-70 | Behind pfSense, likely port-restricted NAT | P2P to other office; P2P or relay to remote |
| Office workers (Boulder, on corporate LAN) | ~20-30 | Behind pfSense, likely port-restricted NAT | P2P to other office; P2P or relay to remote |
| Remote workers (home) | ~20-30 | Home router (easy NAT / full cone) | P2P in most cases |
| Field workers (cellular) | ~5-10 | CGNAT (carrier-grade NAT) | Likely relayed |
| Azure routing peer | 1 | 1:1 NAT (cloud) | P2P to most peers |

Estimated relay percentage: Based on this profile, approximately 5-15% of active connections will require relay. This is consistent with Tailscale’s published data showing >90% direct connections in typical deployments.

Most peers are idle (maintaining control channel only). Active data transfer occurs for ~10-15 users at any time (SMB, RDP).

| Metric | Estimate | Reasoning |
| --- | --- | --- |
| Total active connections needing relay | 1-3 simultaneously | 5-15% of 10-15 active users |
| Average SMB/RDP bandwidth per user | 2-5 Mbps | Typical file share browsing / RDP |
| Peak relay bandwidth | 5-15 Mbps | 3 users * 5 Mbps |
| Relay CPU per Mbps (QUIC) | ~1-2% of 1 vCPU | WireGuard encryption is lightweight |
| Total relay CPU at peak | ~5-30% of 1 vCPU | Well within B2s capacity |
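The peak figures in the table are simple arithmetic and can be re-derived if the input estimates change:

```shell
# Peak relay load: 3 simultaneously relayed users at 5 Mbps each,
# with relay CPU cost of roughly 1-2% of a vCPU per Mbps.
relayed_users=3
mbps_per_user=5
peak_mbps=$((relayed_users * mbps_per_user))    # 15 Mbps
cpu_low=$((peak_mbps * 1))                      # ~15% of 1 vCPU
cpu_high=$((peak_mbps * 2))                     # ~30% of 1 vCPU
echo "Peak relay: ${peak_mbps} Mbps, ~${cpu_low}-${cpu_high}% of 1 vCPU"
```

Doubling every input still leaves the peak under one full vCPU of the B2s.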

Why the “50% CPU with 25 peers” Report Does Not Apply


The R1 report cited a user experiencing 50% CPU with 25 peers on relay. Context matters:

  1. That report was likely from an older version using Coturn (TCP TURN), which is heavier than the v0.62+ QUIC relay
  2. 25 peers ALL relaying is an unusual scenario — GSISG will have ~1-3 relayed at any time
  3. The report may have included management overhead, not just relay

A separate relay server makes sense when:

  • >50 peers are simultaneously relaying (unlikely for GSISG)
  • Geographic distribution requires relay servers in multiple regions (GSISG is US-only, single Azure region works)
  • Relay bandwidth exceeds 100 Mbps sustained (GSISG’s estimate is <15 Mbps peak)
  • HA is required for the relay independently of management (not needed at this scale)

Verdict: The embedded relay in the combined server is more than sufficient. Deploying a separate relay adds cost ($15-30/mo) and complexity with no benefit for this deployment size. If relay load becomes a concern later, it can be extracted as a separate container or VM without disruption.


Question 4: SQLite vs. PostgreSQL for 100+ Peers


Short Answer: SQLite Is Fine for 100-150 Peers; PostgreSQL Is Overkill


The R1 recommendation to use PostgreSQL was based on generic guidance. After examining actual NetBird-specific evidence:

| Source | Peer Count | Database | Result |
| --- | --- | --- | --- |
| GitHub #1473 (original report) | 11 users, 10 peers | SQLite | events.db reached 1 GB; slow login performance |
| GitHub #1473 (follow-up) | 370 users, 141 groups | SQLite | Insufficient performance; requested PostgreSQL |
| GitHub #4488 | 70 peers, 480 policies | SQLite (implicit) | CPU issue, not database issue |
| Headscale benchmark (#2001) | 600 clients | SQLite vs PostgreSQL | PostgreSQL recommended for >500 clients |
| HN commenter | 1,000 users | Unknown | “super efficient and stable” |
| NetBird official docs | Any | SQLite default | “optional” PostgreSQL migration |

The primary SQLite issue is not peer count but events.db growth. NetBird logs every event (peer connect, disconnect, policy change, etc.) to a SQLite database. With 100+ peers connecting/disconnecting throughout the day, events.db can grow to multiple GB within months.

Impact: Large events.db causes:

  • Slow dashboard loading (activity page queries entire events table)
  • SQLite write locking during event insertion (single-writer limitation)
  • Disk I/O spikes during periodic event purges

Mitigation without PostgreSQL:

  1. Periodically archive/truncate events.db (simple cron job)
  2. Use WAL mode for SQLite (NetBird enables this by default)
  3. Place SQLite on SSD (Azure Premium SSD is default)
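The “simple cron job” in step 1 might look like the sketch below. The database path, table name (`events`), and timestamp column are assumptions about NetBird’s events.db schema, not confirmed names; inspect the actual schema with `sqlite3 events.db '.schema'` and adjust before scheduling:

```shell
#!/bin/sh
# Sketch: monthly events.db cleanup. Archive first, then delete events
# older than the retention window and reclaim disk space.
# DB path, table name, and column name are assumptions; verify your schema.
DB=/var/lib/netbird/events.db
KEEP_DAYS=90

cp "$DB" "${DB}.bak-$(date +%Y%m%d)"    # keep an archive before deleting
sqlite3 "$DB" "DELETE FROM events WHERE timestamp < strftime('%s','now','-${KEEP_DAYS} days');"
sqlite3 "$DB" "VACUUM;"                 # shrink the file after the delete
```

Run it while the management server is stopped (or during a quiet window) to avoid SQLite write-lock contention.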
When to migrate:

| Threshold | Symptom | Action |
| --- | --- | --- |
| <150 peers, <20 policies | No issues | Stay on SQLite |
| 150-300 peers, moderate policies | Occasional slow dashboard loads | Consider PostgreSQL or events cleanup |
| 300+ peers OR 100+ policies | Consistent login delays, SQLite locking | Migrate to PostgreSQL |
| 500+ peers | Management server CPU spikes on peer sync | PostgreSQL + consider component separation |

PostgreSQL hosting options, if migration becomes necessary:

| Option | Monthly Cost | Notes |
| --- | --- | --- |
| Co-located on same VM | $0 (already in B2s) | Simplest; some resource contention |
| Azure Database for PostgreSQL Flexible (Burstable B1ms) | ~$12-15/mo | Managed; automatic backups |
| Azure Database for PostgreSQL Flexible (GP D2s_v3) | ~$100/mo | Overkill for NetBird |

Verdict: Start with SQLite. Monitor events.db size monthly. Implement events cleanup script. Migrate to PostgreSQL only if symptoms appear, which is unlikely at 100-150 peers.


Question 5: Total Azure Cost Estimate

| Component | Specification | Monthly Cost | Annual Cost |
| --- | --- | --- | --- |
| VM: Standard_B2s | 2 vCPU, 4 GB RAM, Linux | $30.37 | $364.44 |
| OS Disk: P4 Premium SSD | 32 GB (sufficient for OS + Docker + NetBird) | $5.28 | $63.36 |
| Public IP: Standard Static IPv4 | Required for peer connectivity | $3.65 | $43.80 |
| Bandwidth (egress) | Estimated 5-10 GB/mo (signaling + relay) | $0.00-$0.44 | $0.00-$5.28 |
| Total (pay-as-you-go) | | $39.30-$39.74 | $471.60-$476.88 |
| Commitment | VM Monthly | VM Annual | Total Monthly (all components) | Total Annual |
| --- | --- | --- | --- | --- |
| Pay-as-you-go | $30.37 | $364.44 | ~$39.30 | ~$471.60 |
| 1-year reserved (~37% savings on VM) | ~$19.13 | ~$229.56 | ~$28.06 | ~$336.72 |
| 3-year reserved (~60% savings on VM) | ~$12.15 | ~$145.80 | ~$21.08 | ~$252.96 |
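The reserved-instance rows are simply the stated discounts applied to the pay-as-you-go VM rate, which can be sanity-checked:

```shell
# Verify the reserved VM rates from the $30.37/mo pay-as-you-go B2s price.
awk 'BEGIN {
  payg = 30.37
  printf "1-year reserved: $%.2f/mo\n", payg * (1 - 0.37)   # ~$19.13
  printf "3-year reserved: $%.2f/mo\n", payg * (1 - 0.60)   # ~$12.15
}'
```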

NetBird management traffic is minimal. Here is the breakdown:

| Traffic Type | Direction | Monthly Volume | Cost |
| --- | --- | --- | --- |
| Peer signaling (100-150 peers) | Egress | ~100-500 MB | Free (within 100 GB free tier) |
| Relay traffic (5-15% of active users) | Egress | ~2-5 GB | Free (within 100 GB free tier) |
| STUN responses | Egress | ~10-50 MB | Free |
| Dashboard/API | Egress | ~50-200 MB | Free |
| Total egress | | ~2-6 GB/mo | $0.00 (within free 100 GB) |

Key finding: Azure’s first 100 GB/mo of egress from North America is free. NetBird’s total egress for 100-150 peers is well under this threshold. Bandwidth is effectively free.

Comparison with the Round 1 recommendations:

| R1 Agent | Recommended Config | Monthly Cost |
| --- | --- | --- |
| R1 Agent A (cost report) | B1ms + separate components | ~$29.76 |
| R1 Agent B (platform report) | B2s + separate relay + PostgreSQL | ~$70-85 |
| R2 (this report) | B2s single VM, SQLite, embedded relay | ~$39.30 |

The R2 recommendation splits the difference: it uses the safer B2s VM but avoids the unnecessary cost of a separate relay server and managed PostgreSQL. The total is $39.30/mo pay-as-you-go or ~$28/mo with 1-year reserved pricing.


Question 6: GlobalProtect + NetBird Coexistence (GitHub #5077)


The problem is precisely documented in GitHub issue #5077 (opened January 9, 2026, status: OPEN).

When a user activates GlobalProtect VPN while NetBird is already running:

  1. GlobalProtect creates a new virtual network adapter (“PANGP Virtual Ethernet Adapter Secure”)
  2. GP adds a default route (or modifies the routing table) via this adapter
  3. NetBird’s network monitor detects the route change as a “significant network change”
  4. NetBird’s engine interprets this as a network switch (e.g., Wi-Fi to Ethernet) and initiates a full client restart
  5. The restart tears down the WireGuard interface (wt0), destroying all active TCP sessions
  6. NetBird recreates the interface and reconnects, but established SSH/RDP sessions are lost

Root cause: NetBird’s network monitor on Windows watches for default route changes as a signal that the network environment has changed. It does not distinguish between a physical network change (switching Wi-Fi networks) and a VPN adding a virtual adapter with its own routes.

The workaround, the --network-monitor=false flag, disables NetBird’s network change detection entirely.

What it disables:

  • Detection of default route changes
  • Automatic WireGuard interface restart when network changes occur
  • Recovery logic that re-establishes connections after network switches

Side effects of disabling:

| Scenario | With network-monitor (default) | With --network-monitor=false |
| --- | --- | --- |
| Switch Wi-Fi to Ethernet | Auto-reconnects within seconds | May take 25+ seconds (WireGuard keepalive timeout) |
| Switch between Wi-Fi networks | Auto-reconnects within seconds | May lose connection; manual netbird down && netbird up needed |
| VPN connects/disconnects | Interface restart (the bug) | No disruption (desired behavior) |
| Resume from sleep/hibernate | Quick reconnection | May require manual reconnection |
| IP address change (DHCP renewal) | Detected and handled | May not recover automatically |

Assessment for GSISG migration: The side effects are manageable. Most users are on stable office Ethernet or home Wi-Fi — they rarely switch networks during the workday. The 25-second WireGuard keepalive timeout provides passive reconnection for most scenarios. Field workers on cellular may need to manually toggle NetBird occasionally.
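On an already-enrolled client, the flag is applied through `netbird up`. A minimal sketch; whether the setting persists across reboots should be verified on your client version:

```shell
# Apply the workaround on an enrolled client. --network-monitor is a
# flag of "netbird up"; verify it persists across reboots on your version.
netbird down
netbird up --network-monitor=false
netbird status    # should report the client connected again shortly
```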

| PR | Title | Status | Notes |
| --- | --- | --- | --- |
| #5155 | “Check Windows Interfaces by Name and Description” | CLOSED (not merged) | Code review found test failures and regression risk |
| #5156 | “Change Soft Interface detection to include Interface Description” | OPEN (not merged) | Actively addresses #5077; passed SonarQube quality gate; awaiting maintainer approval |

PR #5156 approach: Extends Windows network monitor to check both interface Name and Description against a list of known soft/virtual adapter keywords. “pangp” (Palo Alto Networks GlobalProtect) is added to this list, so route changes from GP’s virtual adapter are ignored.

No version includes the fix yet. The flag --network-monitor=false remains the only workaround. When PR #5156 eventually merges, it will likely appear in a v0.67.x or v0.68.x release.


Question 7: Routing and DNS Conflict Analysis


Can Both Tunnel Interfaces Run Simultaneously?


Yes, with important caveats.

GlobalProtect and NetBird use fundamentally different routing approaches:

| Aspect | GlobalProtect | NetBird |
| --- | --- | --- |
| Interface name | “PANGP Virtual Ethernet Adapter” | wt0 (WireGuard) |
| Address space | Corporate subnets (e.g., 10.x.x.x) | Overlay 100.64.0.0/10 |
| Routing mode | Split-tunnel or full-tunnel (configurable by admin) | Split routing (only NetBird overlay + configured routes) |
| Default route | May add 0.0.0.0/0 (full-tunnel mode) | Does NOT add default route (unless exit node is configured) |

If GlobalProtect is in split-tunnel mode: No routing conflict. GP routes corporate traffic (e.g., 10.100.7.0/24, 10.15.0.0/24) and NetBird routes overlay traffic (100.64.0.0/10). The routes are non-overlapping and coexist in the routing table.

If GlobalProtect is in full-tunnel mode: GP claims the default route (0.0.0.0/0). This means:

  • All internet traffic goes through GP
  • NetBird’s WireGuard traffic to its management server/relay goes through GP’s tunnel (if the management server IP is not excluded)
  • This may cause MTU issues (double encapsulation: WireGuard inside GP’s IPsec/SSL tunnel)
  • NetBird’s specific overlay routes (100.64.0.0/10) still work because they are more specific than 0.0.0.0/0

Recommendation: Ensure GlobalProtect is in split-tunnel mode during migration. If it must be full-tunnel, add NetBird management server IP to GP’s split-tunnel exclusion list to avoid double encapsulation.

DNS behavior of the two clients:

| Aspect | GlobalProtect | NetBird |
| --- | --- | --- |
| DNS modification | Pushes corporate DNS servers to client | Configures match-domain DNS (split DNS) |
| DNS scope | May override all DNS or just internal domains | Only manages DNS for configured match domains |
| Conflict potential | Medium | Low (if using match domains) |

Potential conflict: If GP pushes DNS servers that override the system default, and NetBird also configures DNS for match domains, the order of DNS resolver configuration matters. On Windows, the “more specific” configuration typically wins for match domains, but GP’s DNS push can occasionally override NetBird’s settings.

Mitigation: During migration, configure NetBird with match-domain DNS (e.g., *.company.internal -> internal DNS) and leave primary DNS to GP or the system default. This avoids conflicts because NetBird only intercepts DNS for its configured domains.
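A quick spot-check of this split-DNS setup on a machine running both clients is to resolve one internal and one public name. `fileserver.company.internal` is a placeholder for a real match-domain name at GSISG:

```shell
# With both VPNs up: a match-domain name should resolve via the internal
# DNS that NetBird configures, while public names resolve via GP/system DNS.
nslookup fileserver.company.internal    # expect the internal DNS answer
nslookup example.com                    # expect normal public resolution
```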

A critical clarification: NetBird does NOT add a default route by default. It adds routes only for:

  1. The NetBird overlay network (100.64.0.0/10) — peer-to-peer traffic
  2. Configured network routes (e.g., 10.100.7.0/24 via a routing peer)

These are specific routes that coexist peacefully with GP’s routing. The conflict in issue #5077 is NOT a routing conflict — it is a network monitor detection issue. The routes themselves do not fight.


Question 8: Recommended Migration Sequence

Phase 1: Install NetBird (GP Remains Primary) — Week 1-2

  1. Deploy NetBird management server on Azure B2s
  2. Configure Entra ID integration and test with 3-5 pilot users
  3. Install NetBird client on pilot machines WITH --network-monitor=false flag
  4. Verify NetBird overlay connectivity (ping between peers on 100.x.x.x addresses)
  5. GP continues handling all production VPN traffic — no changes to GP

Validation checklist:

  • NetBird peers show “Connected” in dashboard
  • Peer-to-peer connectivity verified (netbird status -d shows P2P connections)
  • GP sessions remain stable (no SSH/RDP drops from issue #5077)
  • DNS resolution works for both GP and NetBird domains
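The checklist above can be spot-checked from each pilot machine’s CLI. The grep pattern is illustrative; the exact wording of the `netbird status -d` output varies by client version:

```shell
# Pilot validation from a client machine. "netbird status -d" prints
# per-peer detail, including whether a connection is direct or relayed.
netbird status                                    # overall state: expect Connected
netbird status -d | grep -iE 'p2p|direct|relayed' # per-peer connection types
```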

Phase 2: Configure NetBird Routing (GP Still Active) — Week 2-3

  1. Deploy routing peers at Honolulu and Boulder offices
  2. Configure Network Routes for office subnets (10.100.7.0/24, 10.15.0.0/24)
  3. Add routes to pilot users’ NetBird configuration
  4. Pilot users now have TWO paths to office resources: GP and NetBird
  5. Test accessing critical resources (file shares, printers, internal apps) via NetBird routes
  6. Compare performance and reliability between GP and NetBird paths

Key test: Disconnect GP on a pilot machine. Verify all office resources are accessible via NetBird alone. Reconnect GP. Verify both still work.

Phase 3: Expand to All Users (GP as Fallback) — Week 3-5

  1. Deploy NetBird to all endpoints via TacticalRMM (MSI silent install with --network-monitor=false)
  2. Configure all users with NetBird routes to office resources
  3. GP remains installed and functional on all machines
  4. Communicate to users: “NetBird is your new VPN. If you have issues, GlobalProtect still works as backup.”
  5. Monitor NetBird dashboard for connection issues, relay percentage, DNS problems

Phase 4: Disable GP (NetBird Primary) — Week 5-8

  1. Wait for 2+ weeks of stable NetBird operation with all users
  2. Disable GP’s auto-connect on endpoints (but do NOT uninstall)
  3. Remove --network-monitor=false flag IF PR #5156 has been merged (check NetBird release notes)
  4. If the PR is not merged, keep the flag — it is safe for production
  5. Monitor for 2 more weeks
Phase 5: Decommission GlobalProtect

  1. Uninstall GlobalProtect client from all endpoints (via TacticalRMM)
  2. Remove --network-monitor=false flag if still set (no GP = no trigger)
  3. Power down PA-2020 but keep configuration backed up
  4. Update documentation and close migration project

Critical rules for the whole migration:

  1. NEVER remove GP before NetBird is verified working for all users — always have a fallback
  2. Always use --network-monitor=false while both VPNs are installed — this is non-negotiable
  3. Test rollback before expanding beyond pilot group — verify GP still works after NetBird is installed
  4. Field workers on cellular should be in a later wave — they are most likely to need relay and most affected by network-monitor=false
  5. Keep PA-2020 powered on for 30 days after full migration — emergency fallback

Infrastructure decisions:

| Decision | Recommendation | Rationale |
| --- | --- | --- |
| VM size | B2s (2 vCPU, 4 GB) | $15/mo more than B1ms; eliminates scaling risk |
| Database | SQLite (default) | Adequate for 100-150 peers; migrate to PostgreSQL only if symptoms appear |
| Relay | Embedded (in combined server) | Separate relay unnecessary; <15% of connections will relay |
| Region | West US 2 | Lowest latency to both Hawaii and Colorado |
| Monthly cost | ~$39/mo (pay-as-you-go) or ~$28/mo (1-year reserved) | |
| Reserved instance | 1-year commitment recommended | 28% savings; low commitment risk |

Coexistence decisions:

| Decision | Recommendation | Rationale |
| --- | --- | --- |
| Coexistence approach | Install NetBird alongside GP; both run simultaneously | Zero-risk migration path |
| Network monitor | --network-monitor=false on all clients during migration | Prevents issue #5077 (WireGuard restart) |
| Remove flag post-migration? | Yes, after GP is fully removed | No GP = no trigger for the bug |
| GP tunnel mode during migration | Split-tunnel (if configurable) | Avoids default route conflict |
| Migration waves | Pilot (5) -> Office (30) -> All office (100) -> Remote (30) -> Field (10) | Risk-graduated approach |
| Rollback time | <1 hour (just re-enable GP) | GP remains installed throughout |

| Gap | Impact | Mitigation |
| --- | --- | --- |
| PR #5156 merge timeline unknown | MEDIUM — determines when --network-monitor=false can be removed | Flag is safe to run indefinitely; removal is an optimization, not a requirement |
| No official NetBird sizing benchmarks | LOW — community evidence is sufficient | Monitor actual resource usage post-deployment; Azure VM resize is trivial |
| Relay bandwidth under corporate workloads not benchmarked | LOW — estimated <15 Mbps peak | Monitor relay connections in NetBird dashboard; extract relay if >30% connections relay |
| SQLite events.db growth rate unknown at scale | LOW — mitigated by periodic cleanup | Implement cron job to archive events.db after 90 days |
| GP full-tunnel vs split-tunnel mode at GSISG unknown | MEDIUM — affects coexistence strategy | Ask GSISG IT admin for current GP configuration |
| Exact Azure reserved instance pricing varies | LOW — estimates within 5% | Use Azure Pricing Calculator for exact quote |

| Tool | Count | Purpose |
| --- | --- | --- |
| mcp__claude_ai_Tavily__tavily_search | 12 | Web search for resource usage, SQLite/PostgreSQL, relay sizing, GP coexistence, Azure pricing, network-monitor flag |
| WebFetch | 7 | GitHub issues #5077, #4488, #1473, PRs #5155, #5156, Azure VM pricing, NetBird scaling docs |
| Bash | 1 | Directory listing for output path |
