If the quorum device is not the same as the Availability Suite configuration-data device, move the configuration data to an available slice on the quorum device. If you moved the configuration data, configure Availability Suite software to use the new location. Perform this procedure to upgrade the Solaris OS, Java ES shared components, volume-manager software, and Sun Cluster software by using the live upgrade method.
For information about live upgrade of the Solaris OS, refer to the documentation for the Solaris version that you are using. The cluster must already run on, or be upgraded to, at least the minimum required level of the Solaris OS to support upgrade to Sun Cluster 3. See Supported Products in the Sun Cluster 3 release documentation. You can use the cconsole utility to perform this procedure on all nodes simultaneously.
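As a sketch, the cconsole utility is typically started from an administrative console with the cluster name as its argument (clustername here is a placeholder):

    # cconsole clustername &

Each cluster node then appears in its own console window, and input typed into the common window is sent to all of the node windows at once.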
If your operating system is an older version, perform the following steps. If you will upgrade the Solaris OS and your cluster uses dual-string mediators for Solaris Volume Manager software, unconfigure your mediators. See Configuring Dual-String Mediators for more information about mediators. If the value in the Status field is Bad, repair the affected mediator host. Save this information for when you restore the mediators during the procedure How to Finish Upgrade to Sun Cluster 3.
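As a hedged sketch of the mediator check described above, the status of each mediator host can be displayed with the medstat command (setname is a placeholder for your disk set name):

    # medstat -s setname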
For a disk set that uses mediators, take ownership of the disk set if no node already has ownership. See the mediator(7D) man page for further information about mediator-specific options to the metaset command. Build an inactive boot environment (BE). For information about important options to the lucreate command, see Solaris 10 Installation Guide: Solaris Live Upgrade and Upgrade Planning and the lucreate(1M) man page.
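The following is a minimal sketch of these two steps; the disk set name, BE names, and slice are placeholders, and the lucreate options you need depend on your disk layout:

    # metaset -s setname -t
    # lucreate -c sc31u2 -n sc32 -m /:/dev/dsk/c0t0d0s4:ufs

The metaset -t option takes ownership of the disk set. For lucreate, -c names the currently active BE, -n names the new inactive BE, and -m places the new root (/) file system on the specified slice; sc31u2 and sc32 are borrowed from the example later in this procedure, and the slice is an assumption.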
If the cluster already runs on a properly patched version of the Solaris OS that supports Sun Cluster 3, further upgrade of the Solaris OS is optional. Sun Cluster software requires at least version 1.x of the Java software. If you upgraded to a version of Solaris that installs an earlier version of Java, the upgrade might have changed the symbolic link to point to a version of Java that does not meet the minimum requirement for Sun Cluster 3.
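A quick, hedged way to check which Java release the node currently uses is shown below; /usr/java is the conventional Solaris location of the symbolic link, and your path might differ:

    # java -version
    # ls -l /usr/java

The first command prints the version of the Java runtime found in the default path, and the second shows where the /usr/java symbolic link currently points.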
Commands such as those shown above display the version of the currently installed release of Java software. You might need to patch your Solaris software to use the Live Upgrade feature. If your cluster hosts software applications that require an upgrade and that you can upgrade by using the live upgrade method, upgrade those software applications. If your cluster hosts software applications that cannot use the live upgrade method, you will upgrade them in a later step. Specify the name to give the state file and the absolute or relative path where the file should be created.
Follow the instructions on the screen to select and upgrade Shared Components software packages on the node. The installation wizard program displays the status of the installation. When the installation is complete, the program displays an installation summary and the installation logs.
Run the installer program in silent mode and direct the installation to the alternate boot environment. The installer program must be the same version that you used to create the state file. For more information, see the scinstall(1M) man page. Specify the name of the alternate BE that you built in Step 3. Repeat Step 1 through Step 22 for each node in the cluster. Do not use the reboot or halt command.
These commands do not activate a new BE. Use only the shutdown or init command to reboot into a new BE.
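As a hedged illustration, after the upgraded BE has been activated with luactivate, the node is rebooted with shutdown or init rather than reboot or halt (the BE name sc32 is an assumption carried over from the earlier sketch):

    # luactivate sc32
    # shutdown -y -g0 -i6

Using reboot or halt at this point would bring the node back up on the previously active BE instead of the newly activated one.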
Optional: If your cluster hosts software applications that require upgrade for which you cannot use the live upgrade method, perform the following steps. Throughout the process of software-application upgrade, always reboot into noncluster mode until all upgrades are complete. In the GRUB menu, use the arrow keys to select the appropriate Solaris entry and type e to edit its commands.
In the boot parameters screen, use the arrow keys to select the kernel entry and type e to edit the entry. Add -x to the end of the kernel boot parameter command to specify that the system boot into noncluster mode. This change to the kernel boot parameter command does not persist over the system boot.
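A hedged sketch of the edited kernel line on an x86 node follows; the exact kernel path varies with the Solaris release and boot architecture:

    grub edit> kernel /platform/i86pc/multiboot -x

Pressing Enter accepts the change, and typing b then boots the node with the modified parameters for this boot only.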
The next time you reboot the node, it will boot into cluster mode. To boot into noncluster mode instead, perform these steps again to add the -x option to the kernel boot parameter command. Remember to boot into noncluster mode if you are directed to reboot, until all applications have been upgraded. The GRUB menu appears similar to the following:
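The menu below is only an illustrative sketch; the entries, version string, and memory figures on your system will differ:

    GNU GRUB version 0.95 (631K lower / 2095488K upper memory)
    +-------------------------------------------------------------------+
    | Solaris 10 /sol_10_x86                                             |
    | Solaris failsafe                                                   |
    +-------------------------------------------------------------------+
    Use the ^ and v keys to select which entry is highlighted.
    Press enter to boot the selected OS, 'e' to edit the
    commands before booting, or 'c' for a command-line.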
This example shows a live upgrade of a cluster node. In this example, sc31u2 is the original boot environment (BE) and the Java ES installer state file is named sc32state.

SIOS clustering products provide the intelligence, automation, flexibility, high availability, and ease of use IT managers need to protect business-critical applications from downtime or data loss.
This leading HIS provider has more than 10,000 U.S. customers. To support these customers, the organization had more than 20 SQL Server clusters located in two geographically dispersed data centers, as well as a few smaller servers and SQL Server log shipping for disaster recovery (DR). The organization has a large customer base and vast IT infrastructure and needed a solution that could handle heavy network traffic and eliminate network bandwidth problems when replicating data to its DR site.
RPO is the maximum amount of data loss that can be tolerated when a server fails or a disaster occurs. RTO is the maximum tolerable duration of any outage. See the full case study to learn more. SIOS software is an essential part of your cluster solution, protecting your choice of Windows or Linux environments in any configuration or combination of physical, virtual, and cloud (public, private, and hybrid) environments without sacrificing performance or availability.
To see how SIOS clustering software works to protect Windows and Linux environments, request a demo or get a free trial.
Check out recent blog posts about our clustering products.

Each interconnect consists of a cable that is connected in one of the following ways: between two transport adapters, or between a transport adapter and a transport switch. For more information about the purpose and function of the cluster interconnect, see Cluster Interconnect in Oracle Solaris Cluster Concepts Guide. Note - You do not need to configure a cluster interconnect for a single-host cluster.
However, if you anticipate eventually adding more nodes to a single-host cluster configuration, you might want to configure the cluster interconnect for future use. During Oracle Solaris Cluster configuration, you specify configuration information for one or two cluster interconnects.
If the number of available adapter ports is limited, you can use tagged VLANs to share the same adapter with both the private and public network. You can set up from one to six cluster interconnects in a cluster. While a single cluster interconnect reduces the number of adapter ports that are used for the private interconnect, it provides no redundancy and less availability.
If a single interconnect fails, the cluster is at a higher risk of having to perform automatic recovery. Whenever possible, install two or more cluster interconnects to provide redundancy and scalability, and therefore higher availability, by avoiding a single point of failure.
You can configure additional cluster interconnects, up to six interconnects total, after the cluster is established by using the clsetup utility. For guidelines about cluster interconnect hardware, see Interconnect Requirements and Restrictions in Oracle Solaris Cluster 4. For the transport adapters, such as ports on network interfaces, specify the transport adapter names and transport type.
If your configuration is a two-host cluster, you also specify whether your interconnect is a point-to-point connection (adapter to adapter) or uses a transport switch. Link-local IPv6 addresses, which are required on private-network adapters to support IPv6 public-network addresses for scalable data services, are derived from the local MAC addresses.
You must use the dladm create-vlan command to configure the adapter as a tagged VLAN adapter before you configure it with the cluster. The tagged VLAN adapter name is composed of the adapter name plus the VLAN instance number. You therefore specify this derived name as the adapter name, to indicate that the adapter is part of a shared virtual LAN.
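A minimal sketch of creating a tagged VLAN link with dladm follows; the underlying adapter name (net1), the VLAN ID (73), and the resulting link name (net73001) are illustrative assumptions, so verify the exact derived name on your system:

    # dladm create-vlan -l net1 -v 73 net73001
    # dladm show-vlan

The -l option names the underlying datalink, -v sets the VLAN ID, and the final operand is the name of the new tagged VLAN link that you would then supply to the cluster as the transport adapter.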
Logical network interfaces — Logical network interfaces are reserved for use by Oracle Solaris Cluster software. If you use transport switches, such as a network switch, specify a transport switch name for each interconnect. You can use the default name switchN, where N is a number that is automatically assigned during configuration, or create another name. Also specify the switch port name or accept the default name.
The default port name is the same as the internal node ID number of the Oracle Solaris host that hosts the adapter end of the cable.
However, you cannot use the default port name for certain adapter types. Clusters with three or more nodes must use transport switches. Direct connection between cluster nodes is supported only for two-host clusters. If your two-host cluster is direct connected, you can still specify a transport switch for the interconnect. Tip - If you specify a transport switch, you can more easily add another node to the cluster in the future. Fencing is a mechanism that is used by the cluster to protect the data integrity of a shared disk during split-brain situations.
By default, the scinstall utility in Typical Mode leaves global fencing enabled, and each shared disk in the configuration uses the default global fencing setting of prefer3. With the prefer3 setting, the SCSI-3 protocol is used. If any device is unable to use the SCSI-3 protocol, the pathcount setting should be used instead, where the fencing protocol for the shared disk is chosen based on the number of DID paths that are attached to the disk. However, data integrity for such devices cannot be guaranteed during split-brain situations.
In Custom Mode, the scinstall utility prompts you to choose whether to disable global fencing. For most situations, respond No to keep global fencing enabled. However, you can disable global fencing in certain situations. Caution - If you disable fencing in situations other than the ones described here, your data might be vulnerable to corruption during application failover.
Examine this data corruption possibility carefully when you consider turning off fencing. If you turn off fencing for a shared disk that you then configure as a quorum device, the device uses the software quorum protocol. One situation in which you might disable fencing is when you want to enable systems that are outside the cluster to gain access to storage that is attached to the cluster. If you disable global fencing during cluster configuration, fencing is turned off for all shared disks in the cluster.
After the cluster is configured, you can change the global fencing protocol or override the fencing protocol of individual shared disks. However, to change the fencing protocol of a quorum device, you must first unconfigure the quorum device. Then set the new fencing protocol of the disk and reconfigure it as a quorum device. For more information about setting the fencing protocol of individual shared disks, see the cldevice(1CL) man page.
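The following is a hedged sketch of the sequence just described for a quorum device; d3 is a placeholder DID device name and prefer3 is only one of the possible fencing settings:

    # clquorum remove d3
    # cldevice set -p default_fencing=prefer3 d3
    # clquorum add d3

To change the global setting instead, a single command such as cluster set -p global_fencing=prefer3 can be used.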
For more information about the global fencing setting, see the cluster(1CL) man page. Oracle Solaris Cluster configurations use quorum devices to maintain data and resource integrity. If the cluster temporarily loses connection to a node, the quorum device prevents amnesia or split-brain problems when the cluster node attempts to rejoin the cluster.
During Oracle Solaris Cluster installation of a two-host cluster, you can choose to have the scinstall utility automatically configure an available shared disk in the configuration as a quorum device.
The scinstall utility assumes that all available shared disks are supported as quorum devices. After installation, you can also configure additional quorum devices by using the clsetup utility.
If your cluster configuration includes third-party shared storage devices that are not supported for use as quorum devices, you must use the clsetup utility to configure quorum manually. Minimum — A two-host cluster must have at least one quorum device, which can be a shared disk, a quorum server, or a NAS device. For other topologies, quorum devices are optional. Odd-number rule — If more than one quorum device is configured in a two-host cluster or in a pair of hosts directly connected to the quorum device, configure an odd number of quorum devices.
This configuration ensures that the quorum devices have completely independent failure pathways. Distribution of quorum votes — For highest availability of the cluster, ensure that the total number of votes that are contributed by quorum devices is less than the total number of votes that are contributed by nodes. Otherwise, the nodes cannot form a cluster if all quorum devices are unavailable even if all nodes are functioning.
Changing the fencing protocol of quorum devices — For SCSI disks that are configured as a quorum device, you must unconfigure the quorum device before you can enable or disable its SCSI fencing protocol. Software quorum protocol — Shared disks that do not support the SCSI protocol can be configured as quorum devices, but you must disable fencing for such disks; they then use the software quorum protocol. The software quorum protocol is also used by SCSI shared disks if fencing is disabled for those disks. Replicated devices — Oracle Solaris Cluster software does not support replicated devices as quorum devices.
When a configured quorum device is added to a ZFS storage pool, the disk is relabeled as an EFI disk and quorum configuration information is lost. The disk can then no longer provide a quorum vote to the cluster. After a disk is in a storage pool, you can configure that disk as a quorum device.
Or, you can unconfigure the quorum device, add it to the storage pool, then reconfigure the disk as a quorum device. A zone cluster is a cluster of Oracle Solaris non-global zones. You can use the clsetup utility to create a zone cluster and add a network address, file system, ZFS storage pool, or storage device. You can also use a command-line interface (the clzonecluster utility) to create a zone cluster, make configuration changes, and remove a zone cluster. For more information about using the clzonecluster utility, see the clzonecluster(1CL) man page.
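As a hedged sketch of the command-line workflow (sczone is a placeholder zone-cluster name, and the configure step is interactive):

    # clzonecluster configure sczone
    # clzonecluster install sczone
    # clzonecluster boot sczone
    # clzonecluster status sczone

The configure subcommand opens an interactive session in which you set properties such as the zone path and add nodes; install and boot then install and start the zone cluster on its member hosts, and status reports the state of each zone-cluster node.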
Supported brands for zone clusters are solaris, solaris10, and labeled. The labeled brand is used exclusively in a Trusted Extensions environment. To use the Trusted Extensions feature of Oracle Solaris, you must configure the Trusted Extensions feature for use in a zone cluster. You can also specify a shared-IP zone cluster or an exclusive-IP zone cluster when you run the clsetup utility.
Shared-IP zone clusters work with solaris or solaris10 brand zones. A shared-IP zone cluster shares a single IP stack between all the zones on the node, and each zone is allocated an IP address. Exclusive-IP zone clusters work only with solaris brand zones, and do not work with solaris10 brand zones. An exclusive-IP zone cluster supports a separate IP instance stack.
Global cluster — The zone cluster must be configured on a global Oracle Solaris Cluster configuration. A zone cluster cannot be configured without an underlying global cluster. Cluster mode — The global-cluster node from which you create or modify a zone cluster must be in cluster mode. If any other nodes are in noncluster mode when you administer a zone cluster, the changes that you make are propagated to those nodes when they return to cluster mode. Adequate private-IP addresses — The private IP-address range of the global cluster must have enough free IP-address subnets for use by the new zone cluster.
If the number of available subnets is insufficient, the creation of the zone cluster fails. Changes to the private IP-address range — The private IP subnets and the corresponding private IP-addresses that are available for zone clusters are automatically updated if the global cluster's private IP-address range is changed. If a zone cluster is deleted, the cluster infrastructure frees the private IP-addresses that were used by that zone cluster, making the addresses available for other use within the global cluster and by any other zone clusters that depend on the global cluster.
Supported devices — Devices that are supported with Oracle Solaris zones can be exported to a zone cluster. Such devices include disk devices, DID devices, and Solaris Volume Manager disk sets. Distribution of nodes — You cannot host multiple nodes of the same zone cluster on the same host machine.
A host can support multiple zone-cluster nodes as long as each zone-cluster node on that host is a member of a different zone cluster. Node creation — You must create at least one zone-cluster node at the time that you create the zone cluster. You can use the clsetup utility or the clzonecluster command to create the zone cluster. The name of the zone-cluster node must be unique within the zone cluster.
The infrastructure automatically creates an underlying non-global zone on each host that supports the zone cluster. Each non-global zone is given the same zone name, which is derived from, and identical to, the name that you assign to the zone cluster when you create the cluster.