vSphere 4.x iSCSI Heap Usage

When using more than 64 devices with the software iSCSI initiator, the vmkernel logs will start to produce errors such as:

Jul 19 08:01:01 esx vmkernel: 0:03:38:54.053 cpu2:1039)iSCSI: bus 0 target 46 cannot allocate new session (size %Zu) at 10464
Jul 19 08:01:01 esx vmkernel: 0:03:38:54.054 cpu4:1040)WARNING: Heap: 1419: Heap vmk-iscsi (6288144/6291456): Maximum allowed growth (3320) too small for size (20480)

I most recently saw this problem crop up with a customer during a large XIV deployment across 32 vSphere hosts.  To resolve it, increase the iSCSI heap size from the default of 6 MB to 8 MB with this command:

esxcfg-module -s heap_max=8388608 iscsi_mod

Why 8388608?  That is the size in bytes (8 * 1024 * 1024 = 8388608).  Once this modification is made, the ESX host must be rebooted for the change to take effect.
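The arithmetic is easy to sanity-check in any shell before passing the value to esxcfg-module (note that the 6 MB figure below matches the 6291456 limit in the Heap warning above; on the host itself, `esxcfg-module -g iscsi_mod` should echo back the option string you set):

```shell
# Heap sizes are given to esxcfg-module in bytes: MB * 1024 * 1024.
echo $((6 * 1024 * 1024))   # default heap: 6291456 bytes (6 MB)
echo $((8 * 1024 * 1024))   # raised heap:  8388608 bytes (8 MB)
```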

How to test vmKernel connectivity

If you are ever stuck trying to get the iSCSI initiator to connect to its target, or are troubleshooting vMotion problems, you can do the following:

From the console or a terminal on an ESX host (the CLI works for ESXi as well), run the following:

vmkping <destination>

This uses the vmkernel (vmk) port groups to test connectivity to the iSCSI target or other vSphere hosts.  A successful ping looks like this:

PING server(10.0.0.1): 56 data bytes
64 bytes from 10.0.0.1: icmp_seq=0 ttl=64 time=10.245 ms
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.935 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=0.926 ms

A failed ping looks like this:

[root@server]# vmkping server
PING server (10.0.0.2) 56(84) bytes of data.

--- server ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 3017ms

This method helps to troubleshoot the "unable to connect to destination" errors that a vMotion can produce.
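If you would rather script this check than eyeball the output, the summary line can be parsed for the loss percentage.  A minimal sketch, using a hard-coded sample summary line (in a real script you would capture vmkping's output instead):

```shell
# Extract the packet-loss percentage from a ping-style summary line.
# The summary is hard-coded sample output here for illustration; on an
# ESX host you would capture the real thing from vmkping instead.
summary="3 packets transmitted, 0 received, 100% packet loss, time 3017ms"
loss=$(echo "$summary" | sed -n 's/.*[ ,]\([0-9][0-9]*\)% packet loss.*/\1/p')
if [ "$loss" -gt 0 ]; then
  echo "vmkernel connectivity FAILED (${loss}% loss)"
else
  echo "vmkernel connectivity OK"
fi
```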

vMotion Changes in vSphere 4.1

From VMware:

In vSphere 4.1:

- VMDirectPath and USB device passthrough with vMotion are supported
- Migration with vMotion and DRS for virtual machines configured with USB device passthrough from an ESX/ESXi host is supported
- Fault Tolerance (FT) protected virtual machines can now be vMotioned via DRS; however, Storage vMotion is unsupported at this time

Note: Ensure that the ESX hosts are at the same version and build.
In addition to the above, vSphere 4.1 has improved vMotion performance and allows:

- 4 concurrent vMotion operations per host on a 1 Gb network
- 8 concurrent vMotion operations per host on a 10 Gb network
- 8 concurrent vMotion operations per datastore

Finally, VMware is letting us take advantage of the larger 10 Gbit pipes by allowing more than one vMotion at a time.

vSphere 4.1 ESX(i) Major Security Flaw

About a week ago while testing vSphere 4.1 upgrades and deployment methods, I signed into a host’s service console as root, and I could have sworn that I typed the password incorrectly.  So I signed back out and tried it again, and it let me in with an extra digit at the end of the password.  More testing showed that the PAM software was only checking the first 8 characters of the password and ignoring the rest of it.

After some fancy Googling (and wishing), I found this article on the virtuallyGhetto blog:  http://www.virtuallyghetto.com/2010/07/esxi-41-major-security-issue.html

The good news is that VMware is aware of the issue and has a workaround to correct the problem: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1024500

VMware’s Solution

For ESX:
Add md5 to the file /etc/pam.d/system-auth.

  1. Log in to the service console and acquire root privileges.
  2. Change to the directory /etc/pam.d/.
  3. Use a text editor to open the file system-auth.
  4. Add md5 to the following line, as shown:

password sufficient /lib/security/$ISA/pam_unix.so use_authtok nullok shadow md5

Optionally, you can use the following sed command to accomplish this (the sed expression must be quoted, since it contains a space and a `$`):

sed -e '/password.*pam_unix.so/s/$/ md5/' -i /etc/pam.d/system-auth
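If you want to see what the substitution does before touching the live PAM configuration, you can run the same pattern against a scratch copy; the temp file below is purely for illustration (the `-i` flag as used here assumes GNU sed, which the ESX service console provides):

```shell
# Try the md5 substitution on a scratch file rather than the live config.
tmp=$(mktemp)
echo 'password sufficient /lib/security/$ISA/pam_unix.so use_authtok nullok shadow' > "$tmp"
sed -i -e '/password.*pam_unix.so/s/$/ md5/' "$tmp"
cat "$tmp"   # the line now ends in "shadow md5"
rm -f "$tmp"
```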

For ESXi:

Add md5 to the file /etc/pam.d/system-auth.

  1. Access tech support mode.
  2. Change to the directory /etc/pam.d/.
  3. Use a text editor to open the file system-auth.
  4. Add md5 to the following line, as shown:

password sufficient /lib/security/$ISA/pam_unix.so use_authtok nullok shadow md5

(Optional) If you want the change to persist when you restart ESXi, you must add the following line to the file /etc/rc.local:

sed -e '/password.*pam_unix.so.* md5/q' -e '/password.*pam_unix.so/s/$/ md5/' -i /etc/pam.d/system-auth

VMware expects to release a permanent solution to this issue sometime in the future. We recommend that you remove the workaround from ESXi systems when you install the permanent solution.

VCDX Application Status

The best news a guy could get…

Thomas,

Congratulations! We have completed a preliminary technical review of your VCDX Application, and you have been accepted to advance to the next and final step: the VCDX Defense.

The VCDX Defense will be conducted at the VMware headquarters:
3401 Hillview Ave
Palo Alto, CA 94304

XXX has been reserved for your defense. Please confirm that this date works for you, and a specific timeslot will be communicated to you at a later time.

Again, congratulations on your progress thus far in the VCDX Program.

Regards,
The VMware Technical Certification Team

And now the worry…

vCenter Installation Tips

Since vCenter now requires a 64-bit operating system to install, the DSN must be updated as well to support the 64-bit application.  vCenter 4.0 and older required a 32-bit DSN.  To switch to the proper DSN, you will need to remove the old one using the 32-bit ODBC administrator and create a new 64-bit DSN.

1) Remove the old DSN, if it exists, from: %windir%\SysWoW64\odbcad32.exe

2) Install the 64 bit SQL native client from: http://go.microsoft.com/fwlink/?LinkId=123718&clcid=0x409

3) Create a new DSN from the utility in Administrative Tools or here: %windir%\system32\odbcad32.exe (this is the 64-bit version)

4) Install vCenter and get rocking and rolling.
