Purpose
These notes summarize the main iMaster NCE-Campus deployment models: when they are used, how high availability and geographic redundancy are implemented, and what the minimum requirements for CPU, RAM, disk, and NICs are.
Overview of Deployment Variants
1) Single-node system
- All product components run on a single node.
- Suitable for small environments and simple installations.
- PM-based (physical machine) deployment is recommended.
- Typical scale:
- LAN only: up to 5,000 LAN-side NEs and up to 20,000 online users
- LAN-WAN convergence: LAN-side NEs + 5 x WAN-side NEs <= 5,000 (weighted devices) and up to 20,000 online users
Useful for small environments, lab setups, or scenarios where simplicity is more important than built-in cluster redundancy.
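The LAN-WAN capacity rule above can be sketched as a small check. The function names are mine; the weighting (each WAN-side NE counts five times) and the 5,000/20,000 limits come from these notes:

```python
def weighted_devices(lan_nes: int, wan_nes: int) -> int:
    """Weighted device count: each WAN-side NE counts five times."""
    return lan_nes + 5 * wan_nes

def fits_single_node(lan_nes: int, wan_nes: int, online_users: int) -> bool:
    """Check the single-node LAN-WAN convergence limits from the notes."""
    return weighted_devices(lan_nes, wan_nes) <= 5_000 and online_users <= 20_000

# 3,000 LAN NEs + 300 WAN NEs -> 3,000 + 1,500 = 4,500 weighted devices: fits
print(fits_single_node(3_000, 300, 15_000))  # True
# 4,000 LAN NEs + 500 WAN NEs -> 6,500 weighted devices: too large
print(fits_single_node(4_000, 500, 15_000))  # False
```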
2) Minimum cluster
- 3 nodes.
- Controller/service, middleware, and big data analysis services are co-deployed on each node.
- PM-based deployment is recommended, but VM-based deployment is also possible.
- This is the typical smallest production-grade cluster variant.
- Typical scale:
- LAN only: up to 30,000 LAN-side NEs and up to 100,000 online users
- LAN-WAN convergence: up to 15,000 weighted devices and up to 50,000 online users
Useful when more availability than a single-node system is required, but a larger distributed architecture is not yet necessary.
3) Distributed cluster
- Starts at 6 nodes and scales in the standard designs to 9 or 17 nodes.
- VM-based deployment is recommended.
- Services are separated by role instead of being co-deployed on all nodes.
- Typical roles:
- service/controller nodes
- middleware nodes
- big data analysis nodes
Typical scaling levels:
- 6-node distributed cluster:
- up to 30,000 weighted devices
- up to 100,000 online users
- 9-node distributed cluster:
- up to 60,000 weighted devices
- up to 300,000 online users
- 17-node distributed cluster:
- up to 200,000 weighted devices, with WAN-side NEs <= 20,000
- up to 700,000 online users
Useful when more scale, clearer role separation, and better performance isolation are required.
4) Multi-cluster system
- A global cluster is combined with multiple regional clusters.
- In the resource planning guide, this model is described as 3 global nodes plus up to 10 regional clusters.
- In the installation scenarios guide, one example shows 3 OMP nodes, 3 global nodes, and at least two regional distributed clusters with 9 or 17 nodes.
- Regional clusters operate like independent controller systems for their own region.
- Regional clusters cannot be single-node systems; they must be minimum or distributed clusters.
Useful for very large or geographically distributed organizations that want to centrally manage multiple regions.
Choosing the Right Variant
- Single-node: for labs, PoCs, or small campus environments.
- Minimum cluster: for production cluster deployments of medium size.
- Distributed cluster: for large current or future growth, and when roles should be cleanly separated.
- Multi-cluster: when several regional deployments must be centrally aggregated.
Huawei's recommendation from the installation scenarios document:
- LAN-only:
- if <= 30,000 devices and no foreseeable expansion: minimum cluster
- if > 30,000 devices or expansion is planned: distributed cluster
- LAN-WAN convergence:
- if <= 15,000 weighted devices and no foreseeable expansion: minimum cluster
- if > 15,000 weighted devices or expansion is planned: distributed cluster
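The recommendation rules above can be expressed as a small selector. `recommend_cluster` and the scenario labels are hypothetical names of mine; the thresholds (30,000 LAN-only, 15,000 weighted for LAN-WAN) come from these notes:

```python
def recommend_cluster(scenario: str, lan_nes: int, wan_nes: int = 0,
                      expansion_planned: bool = False) -> str:
    """Map the device count onto the minimum-vs-distributed cluster rule."""
    if scenario == "lan-only":
        load, limit = lan_nes, 30_000
    elif scenario == "lan-wan":
        # Weighted devices: each WAN-side NE counts five times.
        load, limit = lan_nes + 5 * wan_nes, 15_000
    else:
        raise ValueError(f"unknown scenario: {scenario}")
    if load <= limit and not expansion_planned:
        return "minimum cluster"
    return "distributed cluster"

print(recommend_cluster("lan-only", 20_000))                # minimum cluster
# 10,000 + 5 * 2,000 = 20,000 weighted devices > 15,000:
print(recommend_cluster("lan-wan", 10_000, wan_nes=2_000))  # distributed cluster
```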
HA, Redundancy, and DR
Cluster HA within one data center
- A minimum cluster already provides more availability than a single-node system because services are distributed across 3 nodes.
- A distributed cluster further increases resilience because service, middleware, and big data roles are separated across multiple nodes or VMs.
- In VM deployments, nodes of the same type must use anti-affinity.
- This means two VMs with the same role must not run on the same physical host.
This is important so that the failure of a single host does not cause several identical roles to fail at the same time.
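The anti-affinity requirement can be checked mechanically against a VM placement map. This is a minimal sketch with an invented inventory format, not a virtualization-platform API:

```python
from collections import defaultdict

def anti_affinity_violations(placement: dict[str, tuple[str, str]]) -> list[str]:
    """Return role@host entries where two or more VMs of the same role
    share one physical host (an anti-affinity violation).

    placement maps VM name -> (role, physical host).
    """
    counts: dict[str, dict[str, int]] = defaultdict(lambda: defaultdict(int))
    for _vm, (role, host) in placement.items():
        counts[role][host] += 1
    return sorted(
        f"{role}@{host}"
        for role, hosts in counts.items()
        for host, n in hosts.items()
        if n > 1
    )

placement = {
    "svc-1": ("service", "host-a"),
    "svc-2": ("service", "host-a"),   # violation: two service VMs on host-a
    "svc-3": ("service", "host-b"),
    "mid-1": ("middleware", "host-a"),  # different role on host-a is fine
}
print(anti_affinity_violations(placement))  # ['service@host-a']
```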
Geographic redundancy / disaster recovery across multiple sites
- Huawei explicitly refers to a "Geographic Redundancy System Installation" for:
  - single-node DR
  - cluster DR
- An automatic DR solution with arbitration is recommended.
- The primary site, secondary site, and arbitration site must be deployed at different locations.
- Geographic redundancy should be understood as an active/standby design across separate locations.
- If the heartbeat between clusters fails, the arbitration service decides active/standby status.
Important: geographic redundancy is not simply "more nodes in one DC", but a true multi-site DR design.
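A minimal sketch of the arbitration decision, assuming a simple reachability-based vote: when the inter-site heartbeat fails, the arbitration site promotes a site it can still reach. The function and its tie-breaking rules are illustrative, not Huawei's actual algorithm:

```python
def resolve_roles(heartbeat_ok: bool, current_active: str,
                  arbiter_reachable: dict[str, bool]) -> dict[str, str]:
    """Decide active/standby for the primary and secondary sites.

    Healthy heartbeat: roles stay unchanged. Heartbeat lost: the site the
    arbiter can still reach wins; if it reaches both, the current active
    site keeps its role; if it reaches neither, no site is promoted
    (avoids split-brain).
    """
    sites = list(arbiter_reachable)
    if heartbeat_ok:
        return {s: ("active" if s == current_active else "standby") for s in sites}
    reachable = [s for s in sites if arbiter_reachable[s]]
    if current_active in reachable:
        winner = current_active
    elif reachable:
        winner = reachable[0]
    else:
        return {s: "standby" for s in sites}
    return {s: ("active" if s == winner else "standby") for s in sites}

# Heartbeat lost and the arbiter only reaches the secondary site -> failover
print(resolve_roles(False, "primary", {"primary": False, "secondary": True}))
# {'primary': 'standby', 'secondary': 'active'}
```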
Arbitration node
- Used for the automatic DR solution with arbitration.
- According to Hedex, the arbitration service consists of ETCD and Monitor. Monitor checks connectivity between sites and writes the result into ETCD.
- Minimum requirements:
- 1 node
- 4 vCPUs
- 16 GB RAM
- 150 GB disk
- 1 or 2 GE NICs
Minimum Resource Requirements
Single-node system
LAN-only single node
- 1 PM or 1 VM
- CPU:
- PM: 40 threads, no overcommitment, 2.2 GHz+
- VM: 40 vCPUs, 2.2 GHz+
- RAM:
- 128 GB standard
- 192 GB with IoT management
- 256 GB if network configuration verification and runbook are installed together
- Disk:
- 960 GB system disk
- 1200 GB data disk
- NIC:
- 1 or 2 GE
LAN-WAN single node
- 1 PM or 1 VM
- CPU:
- PM: 80 threads, no overcommitment, 2.2 GHz+
- VM: 80 vCPUs, 2.2 GHz+
- RAM: 256 GB
- Disk:
- 960 GB system disk
- 1200 GB data disk
- NIC:
- 1 or 2 GE
Minimum cluster
- 3 PMs or 3 VMs
- CPU:
- PM: 48 threads, no overcommitment, 2.2 GHz+
- VM: 48 vCPUs, 2.2 GHz+
- RAM: 128 GB per node
- Disk:
- 960 GB system disk
- 1800 GB data disk
- NIC:
- 2 to 4 GE
Notes:
- Network configuration verification and runbook cannot be installed together at this size.
- If AI-based terminal fingerprint identification is required, the minimum cluster grows to 4 nodes.
Distributed cluster
6-node distributed cluster
- 6 VMs total
- 3 service and middleware nodes:
- 32 vCPUs
- 96 GB RAM
- 960 GB disk
- 3 big data nodes:
- 32 vCPUs
- 128 GB RAM
- 300 GB system disk + 1800 GB data disk
- NIC:
- 2 to 4 GE
9-node distributed cluster
- 9 VMs total
- 3 service nodes:
- 20 vCPUs
- 96 GB RAM
- 960 GB disk
- 3 middleware nodes:
- 20 vCPUs
- 64 GB RAM
- 960 GB disk
- 3 big data nodes:
- 20 vCPUs
- 128 GB RAM
- 300 GB system disk + 4000 GB data disk
- NIC:
- 2 to 4 GE
17-node distributed cluster
- 17 VMs total
- 7 service nodes:
- 20 vCPUs
- 96 GB RAM
- 960 GB disk
- 5 middleware nodes:
- 20 vCPUs
- 64 GB RAM
- 960 GB disk
- 5 big data nodes:
- 20 vCPUs
- 128 GB RAM
- 300 GB system disk + 7000 GB data disk recommended
- NIC:
- 2 to 4 GE
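Summing the per-node figures gives the total footprint of a layout. A sketch for the 9-node cluster, with the per-role numbers taken from the specs above; the data structure and function are mine:

```python
# Per-role node specs for the 9-node distributed cluster (from the notes)
NODES_9 = {
    "service":    {"count": 3, "vcpu": 20, "ram_gb": 96,  "disk_gb": 960},
    "middleware": {"count": 3, "vcpu": 20, "ram_gb": 64,  "disk_gb": 960},
    "big_data":   {"count": 3, "vcpu": 20, "ram_gb": 128, "disk_gb": 300 + 4000},
}

def cluster_totals(nodes: dict) -> dict:
    """Sum vCPU, RAM, and disk across all nodes of a cluster layout."""
    totals = {"vcpu": 0, "ram_gb": 0, "disk_gb": 0}
    for spec in nodes.values():
        for key in totals:
            totals[key] += spec["count"] * spec[key]
    return totals

# vCPU: 9 x 20 = 180; RAM: 3 x (96 + 64 + 128) = 864 GB;
# disk: 3 x (960 + 960 + 4300) = 18,660 GB
print(cluster_totals(NODES_9))
```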
Multi-cluster system
- Global cluster:
- 3 nodes
- 16 vCPUs per node
- 64 GB RAM per node
- 600 GB disk per node
- 2 to 4 GE NICs
- Regional clusters:
- are sized according to the selected minimum-cluster or distributed-cluster model
- cannot be single-node systems
Additional Design Notes
- On-premises supports PM- and VM-based deployment.
- In VM mode, VMware, FusionCompute, and HCS are supported.
- Huawei servers are required for PM-based deployment.
- For Huawei servers with virtualization, FusionCompute or HCS is recommended.
- For third-party servers prepared by the customer, VMware is recommended.
- Public cloud deployment is currently supported on Huawei Cloud ECS.
- SSD-based and HDD-based servers must not be mixed in the same deployment network.
- In VM environments, the overhead of the virtualization platform must be included in planning.
- Huawei gives a typical example of around 8 vCPUs, 32 GB RAM, and 100 GB disk of additional overhead for virtualization software.
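That overhead figure can be folded into per-host planning. A sketch assuming the typical ~8 vCPU / 32 GB / 100 GB overhead cited above; the helper and the example VM mix are mine:

```python
# Typical virtualization-software overhead per host cited in the notes
OVERHEAD = {"vcpu": 8, "ram_gb": 32, "disk_gb": 100}

def host_capacity_needed(vms: list[dict]) -> dict:
    """Resources one physical host must provide for a set of VMs,
    including the virtualization-software overhead."""
    need = dict(OVERHEAD)
    for vm in vms:
        for key in need:
            need[key] += vm[key]
    return need

# One service VM and one middleware VM of a 9-node cluster on the same host
# (different roles, so anti-affinity is not violated)
vms = [{"vcpu": 20, "ram_gb": 96, "disk_gb": 960},
       {"vcpu": 20, "ram_gb": 64, "disk_gb": 960}]
print(host_capacity_needed(vms))  # {'vcpu': 48, 'ram_gb': 192, 'disk_gb': 2020}
```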
- According to newer Hedex documentation, some deployment and DR functions depend on release, edition, and operating model.
- Therefore, multi-cluster, DR, and public cloud capabilities should always be checked against the exact support matrix of the deployed release.
Authentication Component
- Deployed as a single-node system.
- Huawei explicitly recommends PM-based deployment here.
- Typical minimum requirements:
- 1 PM or 1 VM
- 20 threads or 20 vCPUs
- 64 GB RAM
- 960 GB disk
- 1 or 2 GE NICs
Key Takeaways
- Single-node = simplest and smallest variant, but with the lowest resilience.
- Minimum cluster = smallest true production-grade cluster variant.
- Distributed cluster = preferred for larger environments and clear role separation.
- Multi-cluster = central management of multiple regional clusters.
- HA in one DC and geographic DR across multiple DCs are two different layers in the design.
- For geographic DR, Huawei recommends an automatic DR solution with arbitration plus separate primary, secondary, and arbitration sites.
Sources
- 001_Docs/IMasterNCE/[iMaster NCE-Campus Encyclopedia] Installation Resource Planning and Requirements.pdf
- 001_Docs/IMasterNCE/【iMaster NCE-Campus Encyclopedia】Installation Scenarios.pdf
- 001_Docs/IMasterNCE/profile.xml
- 001_Docs/IMasterNCE/resources/