Initial configuration consists of the following tasks:
Setting up logical devices (pools).
Setting up and starting the Cluster Configuration System (CCS).
Starting clustering and locking systems.
Setting up and mounting file systems.
Note: GFS kernel modules must be loaded prior to performing initial configuration tasks. Refer to Section 3.2.2 Loading the GFS Kernel Modules.
Note: For examples of GFS configurations, refer to Appendix C Basic GFS Examples.
The following sections describe the initial configuration tasks.
To set up logical devices (pools), follow these steps:
Create file system pools.
Create pool configuration files. Refer to Section 5.4 Creating a Configuration File for a New Volume.
Create a pool for each file system. Refer to Section 5.5 Creating a Pool Volume.
Command usage:
pool_tool -c ConfigFile
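As an illustrative sketch only (the pool name, subpool layout, device path, and file name below are assumptions, not taken from this guide), a minimal configuration file for one file system pool, and the pool_tool command that creates the pool from it, might look like this:

```shell
# Hypothetical pool configuration file, e.g. pool_gfs01.cfg:
#
#   poolname    pool_gfs01
#   subpools    1
#   subpool     0 0 1
#   pooldevice  0 0 /dev/sdb1
#
# Create the pool described by the configuration file:
pool_tool -c pool_gfs01.cfg
```

Refer to Section 5.4 for the authoritative configuration file syntax.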
Create a CCS pool.
Create a pool configuration file. Refer to Section 5.4 Creating a Configuration File for a New Volume.
Create a pool to be the Cluster Configuration Archive (CCA) device. Refer to Section 5.5 Creating a Pool Volume.
Command usage:
pool_tool -c ConfigFile
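The CCA pool is created the same way; it only needs to be large enough to hold the cluster configuration archive. In this sketch the pool name, device, and file name are again hypothetical:

```shell
# Hypothetical configuration file for the CCA pool, e.g. pool_cca.cfg:
#
#   poolname    alpha_cca
#   subpools    1
#   subpool     0 0 1
#   pooldevice  0 0 /dev/sdc1
#
pool_tool -c pool_cca.cfg
```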
At each node, activate pools. Refer to Section 5.6 Activating/Deactivating a Pool Volume.
Command usage:
pool_assemble
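For example, assuming the -a (activate) and -r (deactivate) options described in Section 5.6, activation at each node might look like this:

```shell
# At each node, scan attached storage and activate all pools found:
pool_assemble -a

# To deactivate all pools (for example, before shutting a node down):
pool_assemble -r
```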
Note: You can use GFS init.d scripts included with GFS to automate activating and deactivating pools. For more information about GFS init.d scripts, refer to Chapter 12 Using GFS init.d Scripts.
To set up and start the Cluster Configuration System, follow these steps:
Create CCS configuration files and place them into a temporary directory. Refer to Chapter 6 Creating the Cluster Configuration System Files.
Create a CCS archive on the CCA device. (The CCA device is the pool created in Step 2 of Section 4.2.1 Setting Up Logical Devices.) Put the CCS files (created in Step 1) into the CCS archive. Refer to Section 7.1 Creating a CCS Archive.
Command usage:
ccs_tool create Directory CCADevice
At each node, start the CCS daemon, specifying the CCA device at the command line. Refer to Section 7.2 Starting CCS in the Cluster.
Command usage:
ccsd -d CCADevice
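Putting the two commands together, a session might look like the following. The directory name and pool device are hypothetical placeholders, not values from this guide:

```shell
# Create the CCS archive on the CCA device from the configuration
# files staged in a temporary directory (both names hypothetical):
ccs_tool create /root/alpha /dev/pool/alpha_cca

# Then, at each node, start the CCS daemon against that device:
ccsd -d /dev/pool/alpha_cca
```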
Note: You can use GFS init.d scripts included with GFS to automate starting and stopping the Cluster Configuration System. For more information about GFS init.d scripts, refer to Chapter 12 Using GFS init.d Scripts.
To start clustering and locking systems, start lock_gulmd at each node. Refer to Section 8.2.3 Starting LOCK_GULM Servers.
Command usage:
lock_gulmd
Note: You can use GFS init.d scripts included with GFS to automate starting and stopping lock_gulmd. For more information about GFS init.d scripts, refer to Chapter 12 Using GFS init.d Scripts.
To set up and mount file systems, follow these steps:
Create GFS file systems on the pools created in Step 1 of Section 4.2.1 Setting Up Logical Devices. Choose a unique name for each file system. Refer to Section 9.1 Making a File System.
Command usage:
gfs_mkfs -p lock_gulm -t ClusterName:FSName -j NumberJournals BlockDevice
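For instance, filling in the placeholders with hypothetical values (a cluster named alpha, a file system named gfs01 on pool /dev/pool/pool_gfs01, and journals for three nodes):

```shell
# Create a GFS file system using the LOCK_GULM protocol.
# Cluster name, file system name, journal count, and pool device
# are example values only:
gfs_mkfs -p lock_gulm -t alpha:gfs01 -j 3 /dev/pool/pool_gfs01
```

The -j value should match the number of nodes that will mount the file system, since each node requires its own journal.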
At each node, mount the GFS file systems. Refer to Section 9.2 Mounting a File System.
Command usage:
mount -t gfs BlockDevice MountPoint
mount -t gfs -o acl BlockDevice MountPoint
The -o acl mount option allows manipulating file ACLs. If a file system is mounted without the -o acl mount option, users are allowed to view ACLs (with getfacl), but are not allowed to set them (with setfacl).
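Continuing the hypothetical example above, mounting the file system at a node (with and without ACL support) might look like this; the device and mount point are assumptions:

```shell
# Mount the GFS file system at a node:
mount -t gfs /dev/pool/pool_gfs01 /gfs01

# Or mount it with ACL manipulation enabled:
mount -t gfs -o acl /dev/pool/pool_gfs01 /gfs01
```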
Note: You can use GFS init.d scripts included with GFS to automate mounting and unmounting GFS file systems. For more information about GFS init.d scripts, refer to Chapter 12 Using GFS init.d Scripts.