VDP 6.1.3 – Expanding the backup storage space does not work without issues.

Expand Storage on VDP appliance 6.1.3

Issues

I ran into several issues after extending the storage of a VDP appliance, for example from 4 TB to 6 TB.

The issues you can run into are:

  • “There are incorrect number of disks associated with VDP appliance.”
  • Balancing the data onto the new disks does not seem to work; the VDP appliance stays in the error “The VDP appliance storage is nearly full”.

Solutions

There are incorrect number of disks associated with VDP appliance.

Issue

The message indicates that the newly added disks have not been added to the configuration of the VDP appliance.

VMware KB

No VMware KB found.

Workaround/Solution

This is caused by the “numberOfDisk” parameter in the vdr-configuration.xml file not being updated correctly.
When you expand from 4 TB to 6 TB, the number of data disks changes from 6 to 9, which means the “numberOfDisk” parameter should have been set to 9.
This can be fixed by editing the vdr-configuration.xml file located in /usr/local/vdr/etc and changing this parameter to the number of data drives present on your VDP appliance (do not include the OS drive):
<numberOfDisk>9</numberOfDisk>
NOTE: Do not include the VDP OS disks.
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 32G 5.8G 25G 20% /
udev 4.9G 200K 4.9G 1% /dev
tmpfs 4.9G 0 4.9G 0% /dev/shm
/dev/sda1 128M 37M 85M 31% /boot
/dev/sda7 1.5G 129M 1.3G 10% /var
/dev/sda9 138G 3.7G 127G 3% /space
/dev/sdd1 1.0T 721G 303G 71% /data01
/dev/sdg1 1.0T 719G 306G 71% /data02
/dev/sdj1 1.0T 718G 307G 71% /data03
/dev/sdb1 1.0T 155G 869G 16% /data04
/dev/sde1 1.0T 155G 870G 16% /data05
/dev/sdh1 1.0T 155G 869G 16% /data06
/dev/sdc1 1.0T 35M 1.0T 1% /data07
/dev/sdf1 1.0T 35M 1.0T 1% /data08
/dev/sdk1 1.0T 35M 1.0T 1% /data09
The /data01 through /data09 mount points are the disks used for backup data; the partitions on /dev/sda belong to the OS and do not count.
Save the file after editing; this resolves the error message.
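
As an illustration, the edit can also be scripted from an SSH session. This is a minimal sketch, assuming the <numberOfDisk> element appears exactly once in the file and that every /dataNN mount point is a VDP data disk:

# Count the /dataNN data partitions; the OS partitions on /dev/sda are not matched
DISKS=$(df -h | grep -c '/data[0-9]')
echo "Data disks found: ${DISKS}"

# Keep a backup of the configuration file before editing it
cp /usr/local/vdr/etc/vdr-configuration.xml /usr/local/vdr/etc/vdr-configuration.xml.bak

# Update the numberOfDisk element in place and verify the change
sed -i "s|<numberOfDisk>[0-9]*</numberOfDisk>|<numberOfDisk>${DISKS}</numberOfDisk>|" /usr/local/vdr/etc/vdr-configuration.xml
grep numberOfDisk /usr/local/vdr/etc/vdr-configuration.xml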

Balancing the data onto the new disks does not seem to work.

Issue

It can happen that, after you have added new storage to the VDP appliance, the “nearly full” or “full” storage warning is not cleared and the new disks do not receive any data: the rebalancing is not happening. Normally it takes a while before you see the effect.

If you do not see any effect after a while, check the distribution by entering the following command in an SSH session on the VDP appliance:

#: df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 32G 5.8G 25G 20% /
udev 4.9G 200K 4.9G 1% /dev
tmpfs 4.9G 0 4.9G 0% /dev/shm

/dev/sda1 128M 37M 85M 31% /boot
/dev/sda7 1.5G 129M 1.3G 10% /var
/dev/sda9 138G 3.7G 127G 3% /space
/dev/sdd1 1.0T 721G 303G 71% /data01
/dev/sdg1 1.0T 719G 306G 71% /data02
/dev/sdj1 1.0T 718G 307G 71% /data03
/dev/sdb1 1.0T 155G 869G 16% /data04
/dev/sde1 1.0T 155G 870G 16% /data05
/dev/sdh1 1.0T 155G 869G 16% /data06
/dev/sdc1 1.0T 35M 1.0T 1% /data07
/dev/sdf1 1.0T 35M 1.0T 1% /data08
/dev/sdk1 1.0T 35M 1.0T 1% /data09

This shows that the data is not evenly balanced across the 9 data disks.
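
As a quick check, you can let the appliance compute the usage spread for you. This is a minimal sketch, assuming the data partitions are mounted as /dataNN as in the listing above:

# List the data partitions with their usage percentage, emptiest first
df -h | awk '$NF ~ /^\/data[0-9]+$/ { use = $(NF-1); gsub("%", "", use); print use, $NF }' | sort -n

# Report the difference between the fullest and the emptiest data disk
df -h | awk '$NF ~ /^\/data[0-9]+$/ {
    use = $(NF-1); gsub("%", "", use); use += 0
    if (count == 0 || use < min) min = use
    if (use > max) max = use
    count++
} END { print "Spread between fullest and emptiest data disk: " max - min "%" }'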

VMware KB

https://kb.vmware.com/kb/2132781

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2095746 (This is also the case for VDP 6.X)

Workaround/Solution

 

There is no direct workaround anymore in the newer versions of VDP. The only thing you can do is wait until the added disks are filled up by the VDP rebalancing process, as in the example below.

#: df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 32G 5.8G 25G 20% /
udev 4.9G 220K 4.9G 1% /dev
tmpfs 4.9G 0 4.9G 0% /dev/shm
/dev/sda1 128M 37M 85M 31% /boot
/dev/sda7 1.5G 135M 1.3G 10% /var
/dev/sda9 138G 6.1G 125G 5% /space
/dev/sdc1 1.0T 652G 373G 64% /data01
/dev/sdb1 1.0T 648G 376G 64% /data02
/dev/sdd1 1.0T 648G 376G 64% /data03
/dev/sde1 1.0T 649G 375G 64% /data04
/dev/sdf1 1.0T 650G 374G 64% /data05
/dev/sdg1 1.0T 649G 376G 64% /data06
/dev/sdh1 1.0T 85G 940G 9% /data07
/dev/sdi1 1.0T 85G 940G 9% /data08
/dev/sdj1 1.0T 83G 941G 9% /data09
/dev/sdl1 1.0T 85G 940G 9% /data10
/dev/sdk1 1.0T 84G 940G 9% /data11
/dev/sdm1 1.0T 84G 941G 9% /data12
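
While waiting, you can follow the progress of the rebalance from an SSH session. A minimal sketch (adjust the interval to taste):

# Print a timestamped snapshot of the data disk usage every 30 minutes
while true; do
    date
    df -h | grep '/data[0-9]'
    echo
    sleep 1800
done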

OLD data - the workaround below no longer applies to newer VDP versions and is kept for reference only.

When we look deeper into the Avamar configuration, we see that the free space unbalance percentage is set to 10%. This means that when the free space difference between the disks is greater than 10%, the balancing process stops working.
~/#: avmaint config --ava | grep freespaceunbalance
freespaceunbalance="10"
freespaceunbalancedisk0="30"

#: df -h
Filesystem Size Used Avail Use% Mounted on

/dev/sdd1 1.0T 721G 303G 71% /data01
/dev/sdg1 1.0T 719G 306G 71% /data02
/dev/sdj1 1.0T 718G 307G 71% /data03
/dev/sdb1 1.0T 155G 869G 16% /data04
/dev/sde1 1.0T 155G 870G 16% /data05
/dev/sdh1 1.0T 155G 869G 16% /data06
/dev/sdc1 1.0T 35M 1.0T 1% /data07
/dev/sdf1 1.0T 35M 1.0T 1% /data08
/dev/sdk1 1.0T 35M 1.0T 1% /data09

#: avmaint config --ava freespaceunbalance=80
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<gsanconfig freespaceunbalance="10"/>
#: avmaint config --ava | grep freespaceunbalance
freespaceunbalance="80"
freespaceunbalancedisk0="30"

 

This change starts the balancing process after the next integrity check. Monitor the balancing process closely and change the value back to a sensible level when needed.
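
Once the usage has evened out, you can set the threshold back using the same avmaint syntax as above. For example, to restore the original value of 10:

# Restore the default free space unbalance threshold
avmaint config --ava freespaceunbalance=10

# Verify the new value
avmaint config --ava | grep freespaceunbalance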
