Examples of using Cluster controller in English and their translations into Russian
This command produces an output: a string with the Cluster Controller IP Address.
When the Cluster Controller is running, the site can start serving clients (if you do not use Frontend Servers).
Cluster-wide Directory Units are processed on the Cluster Controller server.
When the Cluster Controller and at least one Backend Server are running, they both can serve all accounts in the Shared Domains.
Use this command to get the IP address of the current Dynamic Cluster Controller.
When a Cluster-wide Local Unit is used on the Cluster Controller, the request is performed locally.
The Server will poll all specified Backend Server IP Addresses until it finds the active Cluster Controller.
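The example above describes a simple discovery loop: the Server tries each configured Backend Server address in turn until one of them responds as the active Cluster Controller. The sketch below only illustrates that polling pattern; the address list, the port, and the is_active_controller probe are hypothetical stand-ins and do not represent the actual CommuniGate Pro inter-cluster protocol.

    # Illustrative sketch of "poll the Backend Server addresses until the active
    # Controller is found"; the probe, port, and addresses below are hypothetical.
    import socket
    import time

    BACKEND_IPS = ["192.168.1.10", "192.168.1.11", "192.168.1.12"]  # hypothetical list
    CONTROLLER_PORT = 106                                           # hypothetical port

    def is_active_controller(ip: str, port: int, timeout: float = 2.0) -> bool:
        """Hypothetical probe: treat a successful TCP connect as 'active Controller'."""
        try:
            with socket.create_connection((ip, port), timeout=timeout):
                return True
        except OSError:
            return False

    def find_cluster_controller() -> str:
        """Poll every configured Backend Server address until one answers."""
        while True:
            for ip in BACKEND_IPS:
                if is_active_controller(ip, CONTROLLER_PORT):
                    return ip
            time.sleep(5)  # no Controller found yet; retry the whole list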
The Dynamic Cluster Controller is informed that this Server can execute HTTP requests for other Cluster members.
They become available again within 5-10 seconds, when the Cluster Controller detects the failure.
The Dynamic Cluster Controller collects and distributes information about all active Cluster members that have this option selected.
Use the WebAdmin Interface of this first Backend Server to verify that the Cluster Controller is running.
The Dynamic Cluster Controller is informed that this Server can create Real-Time Task and Call Leg objects for other Cluster members.
If the main Controller fails, the Backup Controller becomes the Cluster Controller.
When the Cluster Controller detects a change in the Cluster members belonging to this Load Balancer Group, the program receives the following command.
When a Dynamic Cluster has at least 2 Backend Servers, the Cluster Controller assigns the Controller Backup duties to one of the other Backend Servers.
The Dynamic Cluster Controller is informed that this Server can accept (enqueue) E-mail messages composed or received with the other Cluster members.
In the CommuniGate Pro Dynamic Cluster environment, the Chronos component scheduler runs on the Cluster Controller, distributing the Chronos tasks to all available Backend Servers.
For each "balancing group", the Cluster Controller selects one of the available Load Balancers and activates it, while the other Load Balancers work in "backup" mode.
If the Cluster Member running the active Load Balancer fails or is switched into the "non-ready" state, the Cluster Controller activates some other Load Balancer member in that group (if it can find one).
When the current Cluster Controller stops or fails, and the Backup Controller assumes the Cluster Controller role, it re-mounts all Cluster-wide Local Units and processes them as regular Local Units, while other Cluster members redirect requests for those Cluster-wide Local Units to this new Cluster Controller.
When this option is enabled, the Server sends all TFTP requests to the Cluster Controller (unless this Server is the active Controller itself), using the inter-cluster CLI protocol.
RPOP activity is scheduled on the active Cluster Controller, so if Backend Servers do not have direct access to the Internet, their RPOP setting should be set to Remotely.
As soon as the first Load Balancer helper application starts on some Cluster Member, the Cluster Controller activates that Helper, making it direct all incoming traffic to its Cluster member, and distribute that traffic to all active Cluster members in its Load Balancer Group.
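Several of the examples above describe one rule: within each "balancing group" the Cluster Controller keeps exactly one Load Balancer helper active and leaves the others in "backup" mode, activating another member of the group if the active one fails or becomes non-ready. The sketch below models only that selection rule, assuming hypothetical member records; it is not CommuniGate Pro's internal data structure or API.

    # Minimal sketch of "one active Load Balancer per balancing group";
    # the member records and field names are hypothetical.
    from typing import Dict, List, Optional

    def pick_active_balancer(members: List[dict]) -> Optional[dict]:
        """Return the member whose Load Balancer helper should be active, if any."""
        candidates = [m for m in members if m["ready"] and m["has_balancer"]]
        return candidates[0] if candidates else None

    def reassign_groups(groups: Dict[str, List[dict]]) -> Dict[str, Optional[str]]:
        """Activate one helper per group; every other helper stays in backup mode."""
        active = {}
        for group_name, members in groups.items():
            chosen = pick_active_balancer(members)
            for m in members:
                m["balancer_state"] = "active" if m is chosen else "backup"
            active[group_name] = chosen["name"] if chosen else None
        return active

Re-running reassign_groups after a member's ready flag is cleared picks another helper in the same group, which mirrors the fail-over behaviour described in the examples above.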
Opennebula: controller which executes the OpenNebula cluster services (package info), adoption requested 1297 days ago.
With the ability to manage up to 64 (upgradable to 256) wireless access points and up to a maximum of 1,024 wireless access points in a controller cluster, the DWC-2000 is a cost-effective mobility solution suitable for medium- to large-scale deployments.
Each cluster has its own dedicated DDR3 SDRAM controller, and a memory bank with its own address space.
With the controller clustering feature, the administrator can easily log into one wireless controller and perform essential configurations on others within the group.
All Servers send the resynchronization information to the Backup Controller and the Cluster continues to operate without interruption.
If there are no other Backend Servers in the Cluster, the Controller continues to serve all new sessions itself.
All other Cluster members maintain connections with the Backup Controller.