---
slug: /plan-disks
---

# Plan disks

seekdb servers depend on data disks, transaction log disks, and seekdb installation disks. If you are a personal user, you can put all data on a single disk and skip this step. If you are an enterprise user, it is recommended to mount the data on three separate disks. If your machine does not have three disks, or if you are using a RAID disk array, partition the disk or the logical volumes of the disk array. The following partitioning scheme is recommended:

* Data disk

  The data disk stores baseline data, and its path is specified by the configuration parameter `data_dir`. When you start seekdb for the first time, `${data_dir}/{sstable,slog}` is created automatically. The size of the data disk is determined by the `datafile_disk_percentage`/`datafile_size` parameters. You can also dynamically expand disk data files after deployment through the `datafile_next` and `datafile_maxsize` configuration items. For details, see [Configure dynamic expansion of disk data files](https://en.oceanbase.com/docs/common-oceanbase-database-10000000001971412).

* Transaction log disk

  The path of the transaction log disk is specified by the configuration parameter `redo_dir`. It is recommended to set the size of the transaction log disk to at least 3 to 4 times the memory allocated to seekdb. When you start seekdb for the first time, `${redo_dir}` is created automatically. The transaction log disk contains multiple fixed-size files, and transaction logs are created and cleared automatically as needed. When transaction logs reach 80% of the total disk capacity, automatic clearing is triggered. However, a transaction log can be deleted only after the in-memory data it covers has been merged into the baseline data. For the same data volume, transaction logs occupy approximately three times the space of the corresponding in-memory data.
  Therefore, the upper limit of the space required for the transaction log disk is proportional to the total data volume after two merges. Empirical formula: transaction log disk size = 3 to 4 times the upper limit of the incremental data memory.

* seekdb installation disk

  The path of the seekdb installation disk is specified by the configuration parameter `home_path`. The seekdb RPM package is installed under `${home_path}`. Baseline data files and transaction log files are linked to the independent data disk and transaction log disk respectively through soft links. seekdb runtime logs are located under `${home_path}/log`. Runtime logs grow continuously, and seekdb cannot delete them automatically, so you need to delete runtime logs regularly.

## Disk mounting

The disk mount requirements for seekdb are as follows.

* Personal users

  For personal users, disk mounting is not required. At least 5 GB of available disk space is recommended.

* Enterprise users

  | Directory  | Size                                        | Purpose                          | File system format      |
  |------------|---------------------------------------------|----------------------------------|-------------------------|
  | /home      | 100 GB to 300 GB                            | seekdb database installation disk | ext4 or xfs recommended |
  | /data/log1 | 2 times the memory allocated to seekdb      | seekdb process log disk          | ext4 or xfs recommended |
  | /data/1    | Depends on the size of the data to be stored | seekdb process data disk         | ext4 or xfs recommended |

## Disk mounting operations

Disk mounting must be performed as the root user. There are two methods:

* Mount disks using LVM tools (recommended).
* Mount disks using fdisk tools.

### Mount disks using LVM tools

1. Check disk information.

   Use the `fdisk -l` command to identify available disks and partitions, and confirm the target device (such as `/dev/sdb1`).

   ```shell
   fdisk -l
   ```

2. Install LVM tools.

   If LVM is not pre-installed, run the following command to install it.
   If LVM is already installed, skip this step.

   * Debian/Ubuntu systems

     ```shell
     apt-get install lvm2
     ```

   * CentOS/RHEL systems

     ```shell
     yum install lvm2
     ```

3. Create a physical volume (PV).

   1. Initialize the partition as a physical volume.

      ```shell
      pvcreate /dev/sdb1
      ```

   2. Verify the PV creation result.

      ```shell
      pvs
      ```

4. Create a volume group (VG).

   1. Combine one or more physical volumes into a VG.

      ```shell
      vgcreate vg01 /dev/sdb1 /dev/sdc1
      ```

   2. View VG information.

      ```shell
      vgs
      ```

5. Create a logical volume (LV).

   1. Create a 100 GB logical volume from the VG. Set the size of the logical volume according to your actual needs.

      ```shell
      lvcreate -L 100G -n lv01 vg01
      ```

   2. View LV information.

      ```shell
      lvs
      ```

6. Format and mount.

   1. Format the logical volume as an ext4 file system.

      ```shell
      mkfs.ext4 /dev/vg01/lv01
      ```

   2. Create a mount point.

      ```shell
      mkdir -p /data/1
      ```

   3. Temporarily mount the logical volume.

      ```shell
      mount /dev/vg01/lv01 /data/1
      ```

7. Set automatic mounting on boot.

   Edit the `/etc/fstab` file:

   ```shell
   vim /etc/fstab
   ```

   Add the following line to the configuration file:

   ```shell
   /dev/vg01/lv01 /data/1 ext4 defaults,noatime,nodiratime,nodelalloc,barrier=0 0 0
   ```

### Mount disks using fdisk tools

1. Check disk information.

   Use the `fdisk -l` command to identify available disks, and confirm the target device (such as `/dev/sdb`).

   ```shell
   fdisk -l
   ```

2. Create a partition.

   Use the fdisk tool to create a new partition on the target disk: run `fdisk /dev/sdb`, enter `n` to create a primary partition, and finally enter `w` to save.

   ```shell
   fdisk /dev/sdb
   ```

3. Format and mount.

   1. Format the new partition as an ext4 file system.

      ```shell
      mkfs.ext4 /dev/sdb1
      ```

   2. Create a mount point.

      ```shell
      mkdir -p /data/1
      ```

   3. Temporarily mount the partition.

      ```shell
      mount /dev/sdb1 /data/1
      ```

4. Set automatic mounting on boot.
   Edit the `/etc/fstab` file:

   ```shell
   vim /etc/fstab
   ```

   Add the following line to the configuration file:

   ```shell
   /dev/sdb1 /data/1 ext4 defaults,noatime,nodiratime,nodelalloc,barrier=0 0 0
   ```

## Check disks

After the disks are mounted, run the following command to check the mount status:

```shell
df -h
```

The following result is returned:

```shell
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs         31G     0   31G   0% /dev
tmpfs            31G     0   31G   0% /dev/shm
tmpfs            31G   516K  31G   1% /run
tmpfs            31G     0   31G   0% /sys/fs/cgroup
/dev/vda1       493G  171G  302G  37% /
tmpfs           6.2G     0  6.2G   0% /run/user/0
/dev/sdb1       984G   77M  934G   1% /data/1
/dev/vdc1       196G   61M  186G   1% /data/log1
/dev/vdb1       492G   73M  467G   1% /home/admin/seekdb
```

Result description:

* `/data/1` is the data disk, with a size of about 1 TB.
* `/data/log1` stores transaction logs.
* `/home/admin/seekdb` stores the seekdb binary files and runtime logs.

Ensure that the disks corresponding to `data_dir`, `redo_dir`, and `home_path` in the configuration file have been mounted, that the directories corresponding to `data_dir` and `redo_dir` are empty, and that the disk usage of the directory corresponding to `data_dir` is less than 4%.

## Set directory permissions

After disk mounting is complete, check the permissions of the directories corresponding to the mounted disks. Run the following command to check the permissions of the cluster-related directories. Here, the `/data` directory is used as an example:

```shell
[root@test001 data]# ls -al
```

The following result is returned:

```shell
drwxr-xr-x 2 admin admin 4096 Feb  9 18:43 .
drwxr-xr-x 2 admin admin 4096 Feb  9 18:43 log1
```

If the `admin` user does not have permissions on the related files, run the following commands to change the file owner:

```shell
[root@test001 ~]# chown -R admin:admin /data/log1
[root@test001 ~]# chown -R admin:admin /data
```

Here, `/data/log1` and `/data` are example mount directories.
You need to replace them with your actual mount directories.
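
The ownership check above can also be scripted. The following is a minimal sketch: the `check_owner` helper is illustrative (not part of seekdb), and the directories and the `admin` user come from the examples above, so adjust them to your deployment.

```shell
#!/bin/sh
# Minimal sketch: verify that a mount directory is owned by the expected
# user before starting seekdb. The helper name check_owner is illustrative.

check_owner() {
  dir="$1"; want="$2"
  # stat -c '%U' works with GNU coreutils; the fallback covers BSD/macOS stat
  owner=$(stat -c '%U' "$dir" 2>/dev/null || stat -f '%Su' "$dir")
  if [ "$owner" = "$want" ]; then
    echo "OK: $dir is owned by $owner"
  else
    echo "FIX: chown -R $want:$want $dir  (current owner: $owner)"
  fi
}

# Example: check a scratch directory against the current user
dir=$(mktemp -d)
check_owner "$dir" "$(id -un)"
rmdir "$dir"
```

In a real deployment, you would call `check_owner /data/1 admin` and `check_owner /data/log1 admin` for each mounted directory before starting the server.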