Areca Raid Commands

1. Command to check the status of raid.

CLI> rsf info
# Name Disks TotalCap FreeCap DiskChannels State
===============================================================================
1 Raid Set # 00 4 4000.0GB 0.0GB 12x4 Degraded
===============================================================================

Here we can see that the server has 4 drives with a total capacity of 4TB and that the RAID set is degraded.

2. Command to check the status of disks.

CLI> disk info
# Ch# ModelName Capacity Usage
===============================================================================
1 1 WDC WD1002FBYS-02A6B0 1000.2GB Raid Set # 00
2 2 WDC WD1002FBYS-02A6B0 1000.2GB Raid Set # 00
3 3 WDC WD1002FBYS-02A6B0 1000.2GB Failed
4 4 WDC WD1002FBYS-02A6B0 1000.2GB Raid Set # 00
===============================================================================

The above output shows that the drive connected to channel/port 3 has failed.

3. Command to check detailed information for a specific disk

CLI> disk info drv=3
Drive Information
===============================================================
IDE Channel : 3
Model Name : WDC WD1002FBYS-02A6B0
Serial Number : WD-WMATV5700033
Firmware Rev. : 03.00C06
Disk Capacity : 1000.2GB
Device State : NORMAL
Timeout Count : 0
Media Error Count : 55
Device Temperature : 26 C
SMART Read Error Rate : 200(51)
SMART Spinup Time : 253(21)
SMART Reallocation Count : 200(140)
SMART Seek Error Rate : 200(0)
SMART Spinup Retries : 100(0)
SMART Calibration Retries : 100(0)
===============================================================

This command shows the serial number of the hard disk, so we can confirm we are pulling the correct drive before replacing the failed one.
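If you also want to read the SMART data from the operating system side, smartmontools can usually query disks behind an Areca controller. This is only a sketch; the SCSI generic device (/dev/sg0) and the areca channel number below are assumptions that may differ on your server.

smartctl -a -d areca,3 /dev/sg0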

4. Checking the event log

CLI> event info
Date-Time Device Event Type Elapsed Time Errors
===============================================================================
2015-01-01 08:02:48 IDE Channel #03 Device Failed
2015-01-01 08:02:48 Raid Set # 00 RaidSet Degraded
2015-01-01 08:02:48 ARC-1110-VOL#00 Volume Degraded
2014-12-18 03:46:09 ARC-1110-VOL#00 Complete Rebuild 041:54:30
2014-12-16 09:51:39 ARC-1110-VOL#00 Start Rebuilding

===============================================================================

This shows when the RAID became degraded, whether any rebuild is in progress, and so on.

5. Starting manual rebuild

If the RAID does not start rebuilding automatically after the drive replacement, the new drive will show up as Free.

CLI> disk info
# Ch# ModelName Capacity Usage
===============================================================================
1 1 WDC WD1002FBYS-02A6B0 1000.2GB Raid Set # 00
2 2 WDC WD1002FBYS-02A6B0 1000.2GB Raid Set # 00
3 3 WDC WD1002FBYS-02A6B0 1000.2GB Free
4 4 WDC WD1002FBYS-02A6B0 1000.2GB Raid Set # 00
===============================================================================

In this case we need to manually add the drive to the array as a hot spare so that the rebuild starts.

CLI> rsf createhs drv=3
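Once the drive has been added as a hot spare, the rebuild should start on its own. To follow the progress, list the volume sets; the State column should show Rebuilding along with a completion percentage (the exact output format varies with firmware).

CLI> vsf info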

6. Password errors

If any command returns a password-required error like the one below,

CLI> rsf createhs drv=3
GuiErrMsg<0x08>: Password Required

we can simply supply the controller password (0000 is the factory default) and then re-run the command.

CLI> set password=0000
GuiErrMsg<0x00>: Success.

CLI> rsf createhs drv=3
GuiErrMsg<0x00>: Success.

7. Changing disk priority to speed up the RAID rebuild

CLI> sys changept p=3
GuiErrMsg<0x00>: Success.

Use the above command to change the priority of the disk on port 3 and speed up the RAID rebuild.

configure: error: Please reinstall the libmagic distribution

I was installing the fileinfo PECL extension from source on a LiteSpeed server.

cd /usr/src
wget http://pecl.php.net/get/Fileinfo-1.0.4.tgz
tar -zvxf Fileinfo-1.0.4.tgz
cd Fileinfo-1.0.4
/usr/local/lsws/lsphp5/bin/phpize
./configure --with-php-config=/usr/local/lsws/lsphp5/bin/php-config

Got the following error while trying to run ./configure

===
checking for magic files in default path... not found
configure: error: Please reinstall the libmagic distribution
===

Fix :

yum install file-devel

Once file-devel was installed, ./configure ran just fine and the installation completed.
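Note that the newly built extension still has to be loaded by PHP. A minimal sketch, assuming the LiteSpeed PHP ini file is at /usr/local/lsws/lsphp5/lib/php.ini (adjust the path to your build):

echo "extension=fileinfo.so" >> /usr/local/lsws/lsphp5/lib/php.ini
/usr/local/lsws/lsphp5/bin/php -m | grep fileinfo

The second command should print fileinfo if the extension loads correctly.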

 

Raid Controllers

Here we are dealing with the CLI installation for different RAID controllers. You can check which controller your server has using the command below:

lspci | grep -i raid
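For example, on a server with an LSI MegaRAID card the output will look something like the line below; the exact model string will of course differ from server to server.

01:00.0 RAID bus controller: LSI Logic / Symbios Logic MegaRAID SAS 2108 [Liberator] (rev 05)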

The commonly used controllers are 3ware, Adaptec, MegaRAID, etc.

1. 3ware

cd /usr/src/
wget http://cpanelstuffs.linuxcabin.com/downloads/3DM2_CLI-Linux_10.2.1_9.5.4.zip
unzip 3DM2_CLI-Linux_10.2.1_9.5.4.zip
sh install.sh -i
The installer will ask which 3DM2 support mode to use (3DM2 supports two modes of operation); select the WEB interface.

After installation you can find the binary at:

/opt/3ware/CLI/tw_cli

Create a symlink to /bin for easy usage.

ln -s /opt/3ware/CLI/tw_cli /bin/tw_cli
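A couple of basic tw_cli commands to confirm everything works (c0 below assumes the first controller):

tw_cli show
tw_cli /c0 show

The first lists the controllers detected, and the second shows the units and drives on controller 0.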
2. Adaptec

cd /usr/src/
wget http://cpanelstuffs.linuxcabin.com/downloads/StorMan-7.00.x86_64.rpm
rpm -ivh StorMan-7.00.x86_64.rpm

After installation you can find the binary at:

/usr/StorMan/arcconf

Create a symlink to /bin for easy usage.

ln -s /usr/StorMan/arcconf /bin/arcconf
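To verify the installation, arcconf can dump the controller, logical drive and physical drive details (controller number 1 below assumes a single Adaptec card):

arcconf getconfig 1
arcconf getconfig 1 LD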
3. Megaraid

We can use the storcli binary for the MegaRAID controller.

wget -O /sbin/storcli http://cpanelstuffs.linuxcabin.com/downloads/storcli
chmod +x /sbin/storcli

This downloads the storcli binary to /sbin and makes it executable.
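A quick sanity check, assuming the card shows up as controller 0:

storcli show
storcli /c0 show

The first command lists the controllers storcli can see, and the second shows the virtual and physical drives on controller 0.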

4. Areca

You can use the vendor's CLI utility to manage the Areca RAID controller. You can download it from the following link:

wget ftp://ftp.areca.com.tw/RaidCards/AP_Drivers/Linux/CLI/v1.9.0_120503.zip;type=i
unzip v1.9.0_120503.zip
cd v1.9.0_120503/x86_64/cli64

Now run ./cli64 and you will get an interactive prompt like the following:

CLI64>
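At this prompt you can run the Areca commands covered earlier in this post, for example:

CLI64> rsf info
CLI64> disk info
CLI64> event info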

Formatting Ext4 volumes beyond the 16TB limit

We can use a recent e2fsprogs (1.42 or newer) to format volumes larger than 16 TB. Installation steps are as below:

wget http://downloads.sourceforge.net/project/e2fsprogs/e2fsprogs/v1.42.7/e2fsprogs-1.42.7.tar.gz
tar -zxvf e2fsprogs-1.42.7.tar.gz
cd e2fsprogs-1.42.7
mkdir build
cd build
../configure
make
make install

Once done, open the file /etc/mke2fs.conf and add the below line in the ext4 section:

auto_64-bit_support = 1
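The ext4 block in /etc/mke2fs.conf should then look roughly like this; the feature list shown is just the stock one and may differ on your system:

[fs_types]
	ext4 = {
		features = has_journal,extent,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize
		auto_64-bit_support = 1
		inode_size = 256
	}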

Now you can manually format the drive using the following command:

mke2fs -O 64bit,has_journal,extents,huge_file,flex_bg,uninit_bg,dir_nlink,extra_isize -i 4194304 /dev/sdb1
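After formatting, you can confirm that the 64bit feature is really enabled before mounting the volume (same example device as above):

tune2fs -l /dev/sdb1 | grep -i 'filesystem features'

The output should include 64bit in the feature list.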

Find Inode Of a File

You can find the inode number of a file using the following command.

ls -li filename

ls -li test
3932172 -rw-r--r-- 1 root root 0 Sep 11 18:34 test

Here 3932172 is the inode number of the file test.

You can also use the stat command:

stat test
File: `test'
Size: 0 Blocks: 0 IO Block: 4096 regular empty file
Device: 803h/2051d Inode: 3932172 Links: 1
Access: (0644/-rw-r--r--) Uid: ( 0/ root) Gid: ( 0/ root)
Access: 2014-09-11 18:34:43.114367181 -0700
Modify: 2014-09-11 18:34:43.114367181 -0700
Change: 2014-09-11 18:34:43.114367181 -0700
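The reverse is also handy: once you know an inode number, find can locate or delete the file by inode, which is useful for files with unprintable names.

find . -inum 3932172
find . -inum 3932172 -delete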

Caching Modes in Flashcache

Flashcache supports writeback, writethrough and writearound caching modes.

Writethrough – safest, all writes are cached to ssd but also written to disk
immediately. If your ssd has slower write performance than your disk (likely
for early generation SSDs purchased in 2008-2010), this may limit your system
write performance. All disk reads are cached (tunable).

Writearound – again, very safe, writes are not written to ssd but directly to
disk. Disk blocks will only be cached after they are read. All disk reads
are cached (tunable).

Writeback – fastest but less safe. Writes only go to the ssd initially, and
based on various policies are written to disk later. All disk reads are
cached (tunable).

Writeonly – variant of writeback caching. In this mode, only incoming writes
are cached. No reads are ever cached.

Writethrough and Writearound caches are not persistent across a device removal
or a reboot. Only Writeback caches are persistent across device removals
and reboots. This reinforces ‘writeback is fastest’, ‘writethrough is safest’.

 

Ref: https://github.com/facebook/flashcache/blob/master/doc/flashcache-sa-guide.txt

What is Flashcache?

Flash cache temporarily stores copies of data on flash memory chips so that requests can be handled with greater speed.

Here, a temporary copy of the most active data is kept in the flash cache, while the permanent copy stays on a normal hard disk drive (HDD). The goal of flash caching is to store previously requested data so it can be retrieved quickly when it is needed again. Keeping previously requested data in temporary storage, or cache, speeds up access to it and reduces the load on the slower disks. Normally SSDs are used as the flash cache and are paired with a larger conventional HDD.

Could not open device at /dev/ipmi0 or /dev/ipmi/0

Got the following error while trying to configure IPMI on a server.

# ipmitool lan set

Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory
Get Channel Info command failed
Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory
Get Channel Info command failed

Fix:

The issue was caused by missing IPMI kernel modules. Just run the following commands to fix it:

modprobe ipmi_msghandler
modprobe ipmi_devintf
modprobe ipmi_si
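Once the modules are loaded, the device node should exist and a basic query should succeed:

ls -l /dev/ipmi0
ipmitool mc info

If the modules need to be loaded automatically at boot, the OpenIPMI service (on RHEL/CentOS) can take care of that, but that depends on the packages installed.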

“maildir_use_size_file” option set for the second time

Got the following error while trying to reset the Exim configuration via WHM.

Error message from syntax check: 2012-11-01 16:25:09 Exim configuration error in line 1611 of etc/exim.conf.buildtest.work.gg44IbxpiNRr8DMO: “maildir_use_size_file” option set for the second time

Fix:

Delete the maildir_use_size_file line from the file /usr/local/cpanel/etc/exim/matchcf/maildir_format.

If you encounter another error like

“quota_is_inclusive” option set for the second time

remove the quota_is_inclusive line from the file /usr/local/cpanel/etc/exim/matchcf/quota_directory as well.
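To confirm the duplicate option is gone without going through WHM again, exim itself can syntax-check the configuration; the path below is the standard cPanel one.

exim -C /etc/exim.conf -bV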

 

shell-init: error retrieving current directory

Got the following error while trying to restart iptables on a newly created VPS.

# service iptables restart

shell-init: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

chdir: error retrieving current directory: getcwd: cannot access parent directories: No such file or directory

iptables: Flushing firewall rules:                         [  OK  ]

iptables: Setting chains to policy ACCEPT: nat mangle filte[  OK  ]

iptables: Unloading modules:                               [  OK  ]

iptables: Applying firewall rules:                         [  OK  ]

Fix:

Just run cd (or cd /) on the console and the error will be resolved. It occurs because the shell's current working directory no longer exists.

# cd

# service iptables restart

iptables: Flushing firewall rules:                         [  OK  ]

iptables: Setting chains to policy ACCEPT: nat mangle filte[  OK  ]

iptables: Unloading modules:                               [  OK  ]

iptables: Applying firewall rules:                         [  OK  ]