Category: VMware ESXi

There was a “secret” movement of server equipment in our hardened site. I call it secret since notification of the plan was minimal. One of the servers hit was a VMware ESXi host which happened to have two important VMs.

After the host was restored to the network with a new IP, I was surprised by this message when I tried to connect to it:

The VMRC console has disconnected.. attempting to reconnect

I tried a few things but nothing worked. This ESXi host was running version 5.0, and I remembered seeing an error message earlier when the VMware client tried to update.

Due to the critical nature of the VMs, I did not have enough time to debug the issue. I ended up using another server and installing the VMware client from the host itself (i.e., http://<esxi host>).

After that, I was able to connect to the host, power up the VMs and apply the IP changes.

I needed to add a new virtual machine to a VMware ESXi 5.5 host and found an entry for an old NetApp filer which had since been upgraded. I tried to unmount and delete it, but I only received an error message. Normally I would post it, but I lost it. 😦

This is one of those times where the VMware client can’t help you.

I started PuTTY and used SSH to access the host.

There I could use esxcli to list the NFS mounts:

~ # esxcli storage nfs list

Volume Name  Host      Share           Accessible  Mounted  Read-Only  Hardware Acceleration
-----------  --------  --------------  ----------  -------  ---------  ---------------------
windows      oldfiler  /windows/mount  true        true     false      Not Supported

The only thing needed is the volume name. The client already showed it, but here is a chance to use a CLI command.

To remove the mount, simply enter the following:

~ # esxcli storage nfs remove -v windows

The command will only return output if there is an error. To verify it worked, list the mounts again.

~ # esxcli storage nfs list

Volume Name  Host      Share           Accessible  Mounted  Read-Only  Hardware Acceleration
-----------  --------  --------------  ----------  -------  ---------  ---------------------

When I returned to the client, I found it had refreshed and the entry had disappeared.

I added the new NFS mount and continued on with my install.
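Adding the replacement mount can also be done from the same SSH session with esxcli; a sketch, with a hypothetical filer hostname, export path, and volume name:

```shell
# Mount a new NFS export as a datastore
# (newfiler, /windows/mount and the volume name are hypothetical)
esxcli storage nfs add -H newfiler -s /windows/mount -v windows

# Confirm the new mount appears
esxcli storage nfs list
```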

I completed an install of ESXi 5.5 and returned to my desk, which is in another building.  I needed to check something on the new host and discovered I had forgotten to enable SSH.

Not wanting to drive back to the host, I had a look around the VMware client.

There is a way to remotely enable SSH.

1) Access the host with the client.
2) Click the Configuration tab.
3) You will see Services and Firewall.
4) Click the Properties link for Services.
5) This will bring up the Services Properties window.
6) Scroll through and look for SSH, which should be in a stopped state.
7) Click on SSH and then click the Options button.
8) Here you can configure how you would like SSH to run.  If you want it to run
   all the time, select “Start and stop with host”.
9) Click the Start button under Service Commands.
10) Click OK twice to get back to the client’s main section.

After that SSH will work.
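Once the service is started, you can confirm it without leaving your desk; a quick check, assuming root access and a hypothetical hostname:

```shell
# Should now connect and print the host's version/build
# (esxi-host is a hypothetical hostname)
ssh root@esxi-host esxcli system version get
```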

I had an ESXi host with a bad disk drive. Even though the disks were RAIDed and the fix was a simple shutdown, drive replacement, boot, and auto-repair selection, the server owner wanted the virtual machines backed up.

There was no urgency as there would be no outage since the virtual machines were not in use at the moment.

The host is an HP DL360 G6 with 2 volumes.  The first held the hypervisor and two templates.
The second volume was over a terabyte and housed twenty-two virtual machines.  Each VM had a 62 gigabyte vmdk.

There are a few ways to handle virtual machine backups.  If you don’t have a backup solution in place, probably the easiest way is to use the vSphere client and the “Export OVF Template” option under the File/Export menu. The Open Virtualization Format is used to transport virtual machines. One advantage is that it compresses your VMDK files, which is a good thing if you don’t have a large amount of free server space to store copies of the virtual machines.
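The same OVF export can also be scripted from a workstation with VMware’s ovftool; a sketch, with hypothetical host, VM, and output names:

```shell
# Export a powered-off VM from the host into a local OVF package
# (esxi-host, myvm, and the output path are hypothetical)
ovftool vi://root@esxi-host/myvm /backups/myvm.ovf
```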

Server space was not an issue so I thought I would simply move the virtual machines to storage.

I mounted the NFS directory and brought down the virtual machines. I used the Datastore Browser and selected the datastore where the virtual machines are stored.  I selected a folder and clicked Move.  There is a warning message you can OK through, and then I selected the NFS mount as the destination.

This process will move the virtual machine’s files but leave the configuration on the host. If you were to reboot, you would see “unreachable” for the entry of any virtual machine you moved.

Running the moves serially, I found 66 gigabytes (the vmdk and the other files) would move in about 20 minutes on average. If I started more moves, the time obviously increased.  I found going past 4 simultaneous moves would really start to bog things down, and the client would have issues.   In one case, I started 10 moves and went to bed.  The next morning I found 4 timeout errors and about 5 moves still listed, the longest showing about 66 minutes.  A quick check of the folders found that everything had actually moved in about 2 hours or so.  The client was confused, so I closed the stale migration task windows.

One thing to consider is that there isn’t a label on what is being moved, just the progress bar.

For the 22 virtual machines, it was about 8 hours or so to move them.

After doing the disk swap, I started the process to move them back.  The time was less, as I ran the 10-VM move through the night and finished up with a couple of batches of 4.

The host needed a reboot to recognize they had returned.  After that, I powered them up and there were no issues.
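A reboot works, but a moved VM can also be re-registered from the shell without one; a sketch, with hypothetical datastore and VM names:

```shell
# Register a VM whose files were moved back (paths are hypothetical)
vim-cmd solo/registervm /vmfs/volumes/datastore1/myvm/myvm.vmx

# List registered VMs to confirm the entry is no longer "unreachable"
vim-cmd vmsvc/getallvms
```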

This was not the most efficient way to handle the disk swap and backup.  However, it was one of the rare times to simply play and see what happened.

One of the most annoying events in a virtual farm is not having the root password of a host. Such was a recent case with a VMware ESXi 4.0 host. I tried all known passwords and variants and asked around, but none worked, and of course nobody had changed the password.

It would have been easy if this were an ESX host, as all you would need to do is reboot the system and place it into single-user mode. The problem with ESXi is that there is no GRUB loader, so this is not possible.

If you look on the VMware site, you will find instructions to reset root for ESX, but for ESXi you get this nice message:

Reinstalling the ESXi host is the only supported way to reset a password on ESXi.  Any other method may lead to a host failure or an unsupported configuration due to the complex nature of the ESXi architecture. ESXi does not have a service console and as such traditional Linux methods of resetting a password, such as single-user mode do not apply.

It’s fine for VMware to suggest this approach, but the problem is you will lose all your VMs. The author was probably thinking of ESX, where backups are rather simple. You can back up ESXi too, but even if you have backups, it still takes time to reinstall and restore them, and the users may not like the wait.

The good thing is you don’t have to do this as there are two ways you can recover a lost password.

1) You can run a repair of the OS.

2) You can use a Linux Live CD.

I thought the easiest approach would be to run a repair, since all it would do is reset the configuration of the system and leave the VMFS datastore alone.  The VMs would be forgotten, but you can add them back with the client.

One of the things you will read in the documentation is that the VMFS datastore is preserved if it’s still on the same partition it was set up on when you installed the host, or if it’s on another disk.

This is not always the case, as will be explained later.

There is a size issue if the VMFS location is on the boot disk beyond the 900 MB partition and the partition table is corrupt: the VMFS datastore cannot be recovered automatically by the repair process and you will need help from VMware.

This was not the case for this host.

Make sure you use the original install CD for the repair. If you don’t have it, get the version number from the login screen on the host, download the appropriate ISO from VMware, and burn a new CD.

You will need to power cycle the host, so inform your users.

 1) Insert the ESXi 4.0 Installation CD.
 2) Power cycle the machine. Depending on how your BIOS boot order is set,
    select the CD-ROM via setup or the boot menu if your BIOS has one.
 3) The installation will proceed.  Don't worry about it installing as a new
    machine, as the Repair option will appear. Press R to repair.
 4) Accept the EULA by pressing F11.
 5) You will get a screen to select the disk with the OS.
    Note that VMware defaults to all disks, and despite what you read about
    VMFS partitions being left alone, that may not be the case.

    As you will also read:

     If you do not choose the same installation disk, the damaged ESXi 4.0 image 
     is not fixed and a new image is installed.

    Such was the situation I found.  Rather than selecting the OS disk, I just
    accepted the default and went on.

    For your situation, highlight the OS disk and press Enter.

    Here you will get the confusing warning that the data on the selected disk
    is about to be overwritten and that, if there were no changes to the
    partitions, your VMFS datastores will be preserved.

    Again, such is not the case if you leave all drives selected.

 6) If you have the right disk, press Enter.  Otherwise, press Backspace and
    select the correct disk.
 7) Now you will get your last chance to back out, as you will get a
    confirmation request. If all is in order, press F11 to start the recovery.

The process will run and you will end with one of two messages:

Repair Complete
The ESXi 4.0 image was repaired successfully and the partition table was restored. The installer recreated the partition table to recover your VMFS partitions or custom partitions. The repair operation added these entries in the partition table.

If you get this message, simply redo the general configuration and then add the missing VMs.

Repair Incomplete
The ESXi 4.0 image was repaired successfully, but the partition table could not be restored. The installer could not recreate the partition table to recover your VMFS partitions or custom partitions. You must manually add the partition entries to the partition table to recover your data. Call VMware support for help.

This was the message I received, and it was misleading. The repair completed, but nothing was changed, as I still could not log in with the root account.

Now comes the problem with ESXi: since it is free to use, it is not supported. This would be a paid support call, and it would have been opened as a critical problem had we decided to contact support.

What to do?

Before any attempt at recovery, I needed access to the system. Since the host did boot, the original OS was still usable.

To be continued.

It appears VMware is at it again.

I wanted to upgrade a 5.0 ESXi box to 5.1.  I booted off the DVD and the upgrade ran without issues.

When it came time to add the license, I received this nice little message:

An error occurred when assigning the specified license key: The System
Memory is not satisfied with the 32 GB of Maximum Memory limit.
Current with 72.00 GB of Memory.

This left me wondering what was going on, as 5.0 had no problem and left me with 32 GB of usable memory.

I spoke to VMware about it and was informed that when they found people did not like paying to use their RAM and had to remove the silly vRAM scheme, they added this limit instead, IMHO a “sour grapes” move.

The only way to install 5.1 with this license is to disable the extra RAM in the BIOS, which in my case was not possible with an HP DL360 G7, or to pull DIMMs.  This is a rather annoying move by VMware, as my company tends to purchase servers with large amounts of RAM.

I am not sure what VMware seeks to accomplish with this move, as I suspect it will push more systems to Hyper-V.

It will only be a matter of time before they end this foolish stunt.

No more vRAM gouging!

I had been away for a long time due to workloads being crazy.  A good thing overall but it does get in the way of writing.

What better way to mark a return than VMware’s announcement that they have ended vRAM licensing! I never understood why VMware didn’t think this one through. They probably thought they could make a great deal of money but ended up losing market share to Microsoft. It is interesting that this announcement comes a week after the release of the RTM of Windows Server 2012.

This will keep interest in VMware, but in some cases it might be too late, as Windows Server 2012 with Hyper-V 3 looks very promising.

Time will tell.

CRN Article

CIO Article

A recent discussion on an administrator list was about monitoring snapshot deletions.  A snapshot was deleted, and the administrator became concerned about the length of time it was taking to disappear.

As with most management functions, you normally use the VMware vSphere client. For small snapshots this is sufficient, but for larger snapshots, the progress will jump to 95% and remain there for a time that depends on how long the snapshot has been active.

Another way to monitor the process is with the watch command.

You will need to access the host via SSH (of course, you will have had to configure the host to allow it).

After you log in, cd to where your vmdk files are stored.


watch "ls -luth *.vmdk"

Basically, an ls command will be issued against the .vmdk files every two seconds.  The options are as follows:

“l” uses the long listing format.

“h” displays the file sizes in human-readable format.

“u” shows access time instead of modification time.

“t” sorts by time, newest first (combined with “u”, that means access time).

Once you see the -delta files disappear, the deletion process is finished.

To kill the command, simply press Ctrl-C.
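If watch is not available in your shell, the same effect can be had with a plain loop that exits on its own when the delta files are gone; a sketch, with a hypothetical datastore path:

```shell
# VMDIR is a hypothetical datastore/VM path; adjust to your own.
VMDIR=/vmfs/volumes/datastore1/myvm

# Poll every two seconds until the snapshot -delta files are gone
while ls "$VMDIR"/*-delta.vmdk >/dev/null 2>&1; do
    ls -lh "$VMDIR"/*-delta.vmdk
    sleep 2
done
echo "snapshot deletion finished"
```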

VMware KB article with more information