Attention PC Users

I assume that if you’re reading this, you’re a PC user and not just a smartphone or tablet user. I’ve worked in technology for almost 20 years and enjoy building my own PCs and servers for home use, but until recently I never paid much attention to arguably the most important components of the computer – the human interface devices. Or to put it in layman’s terms: the mouse, keyboard and monitor.

Many PC builders spend a lot of time and money on the CPU, graphics card, memory, case, cooling and so on, but neglect the keyboard and mouse. I recently decided to rectify that and bought myself two mechanical keyboards – one for home and one for the office. I’ve also splashed out on some new mice. I spend probably 10-12 hours a day at the computer, which sounds like a lot, but I do work in technology and much of my time is spent typing – so why skimp on the most important components?


The first keyboard I bought for home use was the Corsair K70 with Cherry MX Brown switches. This is a great keyboard for both gaming and typing, although I favour it more for gaming. It is far better than any cheap rubber dome keyboard out there and is a joy to use. It has a lovely metal rotary volume control and an aluminium case with red LEDs, although I generally keep the LEDs turned off. It’s quite noisy compared to a rubber dome keyboard, though, so I installed some rubber o-rings, which dampened the sound quite a bit and also improved the feel.

Corsair K70

I decided that mechanical keyboards felt so much better that I couldn’t possibly go back to a rubber dome keyboard, and that meant replacing my work keyboard as well. I chose a smaller form factor for work to maximise desk space and improve ergonomics by bringing the mouse closer to my hands: a Ten Keyless (TKL) keyboard, which is a full keyboard but without the number pad. A big factor in my choice for work was that it must be optimised for typing but also be quiet, so I went for a non-tactile linear Cherry MX switch and opted for the Cherry MX Black. I was tempted by the Cherry MX Red, but I had heard that for typing the MX Red requires so little force to press that typos become too easy to make. The MX Black is stiffer, so I’m less likely to make typos.

Three months in, I’m happy with my decision – I bought a Ducky Zero DK2087S with Cherry MX Black switches. This thing is a joy to type on and I think I prefer it over the Corsair, although to be honest there’s not much in it. The Corsair is definitely better for gaming, but I think the Ducky is better for typing, and it’s all down to the type of switch. I do wonder if I should eventually try a TKL with Cherry MX Clear switches – essentially a stiffer MX Brown – as that might be the perfect switch for me. It’s a very minor thing though, so I’ll be sticking with my current choices for now.

Ducky Zero

Typing Speed

So, I have two great new keyboards but I can’t touch type. What’s the point in owning a great keyboard if I can’t use it to its full potential? I have been using computers since the Commodore VIC-20 came out and I never learned to touch type — how stupid is that? So I’ve committed myself in 2015 to becoming a touch typist by the end of the year. I did an online typing test and my score was 56wpm with 80% accuracy. The bad part is my accuracy: I’m constantly backspacing, and that’s not efficient. I decided to take an online typing course and started it in mid December 2014. At the start my typing speed plummeted to 15wpm but my accuracy shot up to 94% – slow, but much more accurate.

From that moment on I committed myself to typing only with the touch typing method taught in the course, with certain fingers allocated to certain keys. It’s the only way to unlearn years and years of bad habits. It’s been quite slow going, but I’m improving, and now I’m proud to say that I am back to my original speed of 55wpm with accuracy up from 80% to 98%. That’s pretty remarkable in the space of just one month. I thought it would take much longer to unlearn the bad habits, but it really hasn’t. I’m hoping that by the end of the year both speed and accuracy will be higher still, and I’m committed to it – after all, my keyboards deserve it!

I have migrated from ESXi 5.5 to Hyper-V 2012 R2 Core

Although I have been happy with VMware’s product there were some severe limitations and I don’t like the path that VMware is heading down for home enthusiasts. The limitations for me are:

  • ESXi 5.5 free can only be managed from the vSphere desktop client, which cannot manage hardware versions greater than v8.
  • To manage all features of VMware requires vCenter, an expensive paid add-on not aimed at the home enthusiast; it is priced for medium and large businesses running data centres. It can be trialled for 60 days, and that is what I have been doing, but I am getting bored of rebuilding my home data centre every two months.
  • The vSphere web client is required to use a lot of the new VMware features, and the web client requires vCenter (not free).

There are several alternatives to ESXi that spring to mind:

  1. Microsoft Hyper-V
  2. Linux KVM
  3. Citrix XenServer
  4. Proxmox

Since most of my professional work is with Microsoft products, I decided to jump in with Microsoft Hyper-V 2012 R2 Core. I have a fair amount of PowerShell experience, so I’m quite comfortable administering Hyper-V from a PowerShell prompt.
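To give a flavour of what that looks like, day-to-day administration can be done over PowerShell remoting – a sketch only, where the host name "hyperv01" and VM name "web01" are placeholders:

```powershell
# Remoting is enabled by default on Hyper-V Server 2012 R2;
# "hyperv01" stands in for your host's name.
Enter-PSSession -ComputerName hyperv01

# Once connected, the Hyper-V module covers the basics:
Get-VM                       # list VMs and their state
Start-VM -Name "web01"       # power one on
Get-VMHost | Select-Object LogicalProcessorCount, MemoryCapacity
```

Anything Hyper-V Manager can do has a cmdlet equivalent, which is handy on a Core install with no GUI.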

Migration experience

I had great difficulty migrating because I did not have any swing kit or other large storage area to move the data to. I had about 6-8 TB of data, so I decided to trash it all and start over. The data I had in VMware was backups of all my desktop machines, some experimental machines and some movies, all of which could be replaced easily. I use BackBlaze to back up my irreplaceable data off-site, so if my desktop died before I could back it up again, I was OK.

I did migrate a couple of machines onto my desktop PC using 5nine V2V Easy Converter. I could not successfully convert any machines using the Microsoft tool (Microsoft Virtual Machine Converter); I always got this error:

VERBOSE: Microsoft.Accelerators.Mvmc.Engine.DiskCopyFailureException: The virtual disk(s) attached to the source virtual machine were not successfully converted to VHD(s) and copied to the workspace folder path. ---> System.AggregateException: One or more errors occurred. ---> System.AggregateException: One or more errors occurred. ---> System.Security.Principal.IdentityNotMappedException: Some or all identity references could not be translated.

My storage configuration in mooZilla (VMware host) looked like this:

VMware storage layout

Boot Hyper-V from USB?

I wanted to boot Hyper-V from USB, but it took me several hours to figure out using Hyper-V 2012, and the responsiveness of the server was dire. Guest VMs ran perfectly fine, but anything on the PowerShell command line was painfully slow. I could not get it to work with the 2012 R2 image at all, either using this guide or this TechNet article. I always got this INACCESSIBLE_BOOT_DEVICE error:


I could have persevered by messing around with the boot image and slipstreaming in the storage drivers, but frankly I couldn’t be arsed with it, so I decided to add another disk to the server and install Hyper-V to the C drive of a 250GB spinning hard disk. While I was at it, I put in a cheap 120GB SSD as well, on top of my existing RAID-5 and RAID-0 arrays, for a few select VMs.

Storage now looks like this:

home server

The rebuild

With my two extra drives installed, I copied the Hyper-V ISO boot image to a USB drive and proceeded to install Hyper-V. The installation was very quick indeed and completely painless, except that once installed I could not Remote Desktop to it. This was fixed with a PowerShell one-liner:

Set-NetFirewallProfile -Profile Domain,Public,Private -Enabled False

This disables the firewall for all network profiles. It’s probably not recommended outside of a home situation – even at home I’m not entirely comfortable with it, and I will probably lock it down a bit later. Once the firewall was disabled I did the following:

  1. Added another local admin account – I have a habit of never using Administrator
  2. Changed the computer name
  3. Changed the network interface to use a static IP address.
  4. Set DNS servers.
  5. Installed all the latest Windows updates.
  6. Configured date and time and NTP settings.

Once that was done I could connect happily from my Windows 8.1 workstation using the built-in Hyper-V Manager and also with 5nine Manager for Hyper-V. I then formatted my storage with diskpart using 64k allocation unit sizes and mounted all my drives into folders on the C drive – I hate drive letters, so I laid out the storage as shown in the diagram above. Useful diskpart commands:

DISKPART> list disk

  Disk ###  Status         Size     Free     Dyn  Gpt
  --------  -------------  -------  -------  ---  ---
  Disk 0    Online           10 TB     0 B         *
  Disk 1    Online         3724 GB     0 B         *
  Disk 2    Online           14 GB     0 B
  Disk 3    Online         6000 MB     0 B

DISKPART> sel disk 0

Disk 0 is now the selected disk.


DISKPART> clean

DiskPart succeeded in cleaning the disk.

DISKPART> list part

There are no partitions on this disk to show.

DISKPART> create partition primary

Disk is uninitialized, initializing it to GPT.

DiskPart succeeded in creating the specified partition.

DISKPART> list part

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Reserved           128 MB    17 KB
* Partition 2    Primary             10 TB   129 MB

DISKPART> select partition 2

Partition 2 is now the selected partition.

DISKPART> format quick fs=ntfs unit=64k label=RAID-5

100 percent completed
DiskPart successfully formatted the volume.

DISKPART> assign mount=C:\RAID-5
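One gotcha worth noting: diskpart’s "assign mount=" expects the target folder to already exist as an empty directory on an NTFS volume, so create the mount points first:

```powershell
# Mount points must exist (and be empty) before "assign mount=" succeeds.
# Folder names match my layout; adjust to taste.
mkdir C:\RAID-5
mkdir C:\RAID-0
```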

Installing my VMs

Now that the storage is ready, time to create some VMs. Before that though, I configured the default locations like so:

Hard disk file storage location
VM Files location

Then the network switches needed setting up. I have three network cards: two for my twin WAN links to the internet and one for the LAN. I’m not fiddling about with VLANs at the moment, mainly because I don’t really know what I’m doing with regard to setting up VLANs (maybe another day).

Network switches
  • Home External: External WAN link for home internet use
  • Business External: External WAN link for business internet use
  • Internal Network: LAN
  • Lab: a virtual switch for isolated VM only network for my test labs
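Creating those switches from PowerShell looks roughly like this – the physical adapter names are placeholders for however your NICs happen to be labelled:

```powershell
# External switches bound to the two WAN NICs, not shared with the host
New-VMSwitch -Name "Home External"     -NetAdapterName "NIC-WAN1" -AllowManagementOS $false
New-VMSwitch -Name "Business External" -NetAdapterName "NIC-WAN2" -AllowManagementOS $false

# LAN switch, shared with the management OS so the host stays reachable
New-VMSwitch -Name "Internal Network"  -NetAdapterName "NIC-LAN"  -AllowManagementOS $true

# Private switch: VM-to-VM traffic only, no host or physical connectivity
New-VMSwitch -Name "Lab" -SwitchType Private
```

The Private switch type is what gives the lab its isolation – guests on it can only see each other.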

All done. I then proceeded to build my VMs, and even built a template VM for my Windows Server 2012 R2 builds. Here’s a quick outline of what I did for that:

Windows Server 2012 Template

Using notes mainly from here (but adapted for my own use) I built a VM and then configured it. At the end I sysprepped it with:

sysprep /oobe /generalize

Then I exported the VM with Export-VM. Each time I create a new VM I just use the Import Virtual Machine… action in Hyper-V Manager, and a new VM is created from my template. It’s not as nice as templates in vCenter, but it will do for me. I’m sure that Microsoft’s equivalent of vCenter (System Center Virtual Machine Manager) is much better at this. I’ll get round to playing with it one day, but like vCenter it’s not free, although I believe 90 day trials are available.
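The export/import round trip can also be scripted; this is a sketch assuming a template exported to C:\Templates – the paths, names and the GUID in the config file name are illustrative:

```powershell
# Export the sysprepped VM once to use as a template
Export-VM -Name "Template-2012R2" -Path "C:\Templates"

# Each new VM is a copied import with a fresh ID, so the template
# files on disk stay untouched (the <GUID>.xml name is illustrative)
Import-VM -Path "C:\Templates\Template-2012R2\Virtual Machines\<GUID>.xml" `
    -Copy -GenerateNewId `
    -VirtualMachinePath "C:\VMs" -VhdDestinationPath "C:\RAID-5\VHDs"
```

Without -Copy and -GenerateNewId the import would register the template in place, which defeats the point of having one.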

Import Virtual Machine
Import Virtual Machine Summary

I have over ten VMs already and have plans for quite a few more.

Hyper-V annoyances

So, all sounds good so far, what is missing from Hyper-V for the home enthusiast?

Boot from USB could be improved

Booting from USB was a huge pain, and when I finally got it working under an older version of Hyper-V it was dog slow. I know that few data centres would contemplate this configuration and I empathise with Microsoft for not giving it any priority, but for the home enthusiast it would be great if it worked as well as it does in VMware. VMware boots from USB quite slowly, but from then on it performs well and is robust, and saves you the time and expense of a boot hard disk.

Lack of hot plug features

VMware really surpasses Hyper-V at hot adding hardware. You can add memory, CPU, network interfaces, hard drives and more with the machine running; Hyper-V only allows hot adding of virtual hard disks. It’s not a major issue for me, but for some people it could be a pain, and it does show that Microsoft are lagging behind VMware in this area.

Linux/BSD support not as good

The main Linux distros are well supported in Hyper-V, including:

  • CentOS, Red Hat Enterprise
  • Debian
  • Oracle Linux
  • SUSE
  • Ubuntu
  • FreeBSD 10 onwards

That should be fine for most needs, but it’s not as comprehensive as VMware. I am using pfSense as my virtual firewall/router, and Hyper-V is only fully supported from version 10 of FreeBSD. The latest stable version of pfSense (2.1.5 at the time of writing) runs on FreeBSD version 8, so you’re out of luck if you want to run a stable pfSense firewall on Hyper-V. Having said that, the Release Candidate for pfSense 2.2, based on FreeBSD 10, is out now, so the stable supported version should arrive in Q1 2015 – not too long to wait. I’m running the 2.2 RC version on my network on Hyper-V and it’s running just fine.

pfSense router running as a Hyper-V guest


Hyper-V good things

Guest drivers are built-in

With VMware, you need to install VMware Tools in your guest OSes for maximum features and performance. From VMware:

VMware Tools is a suite of utilities that enhances the performance of the virtual machine’s guest operating system and improves management of the virtual machine. Without VMware Tools installed in your guest operating system, guest performance lacks important functionality.

Microsoft seem to have worked with the OS distributions listed above, because none of that is needed in Hyper-V – all the drivers are there when you install your guest OS. If you import from VMware or another hypervisor, you will need to uninstall that hypervisor’s tools from the guest first, then insert the Integration Services disk in Hyper-V Manager to install the correct drivers. For new guest machines in Hyper-V, none of that is needed. With regard to performance, I haven’t seen a noticeable difference between VMware and Hyper-V. I think Windows VMs run slightly faster, but that could be perception bias on my part.

Memory management

This seems better in Hyper-V. I’ve noticed my Windows VMs hardly require any RAM at all (512MB for Windows Servers) and they still run fast.
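I believe that low footprint is Hyper-V’s Dynamic Memory at work, which hands RAM back to the host when the guest isn’t using it. Tuning it per-VM is one cmdlet – the VM name and the limits here are illustrative, not a recommendation:

```powershell
# Let Hyper-V balloon the guest between 512MB and 4GB as demand changes.
# The VM must be powered off when you change these settings.
Set-VMMemory -VMName "web01" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB
```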


Windows skills transfer

If you are a Windows person then a lot of those skills can be used for Hyper-V. PowerShell scripting, setting up shares on your host so you can access them easily from any machine, permissions and AD integration all seem smoother and easier on Hyper-V. From my Windows machine I can connect to the Hyper-V server and check event logs, users and groups, disk management, performance monitor counters, services, task schedules and so on. Quite nice. For example, I created a Tails VM and its network card would not connect to the internet. A quick look at the event log – launching Computer Management on my Windows 8.1 workstation and connecting to the Hyper-V host – showed that MAC address spoofing needed to be enabled. Just a simple example of easy management.
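For reference, the fix itself is a single cmdlet (the VM name is whatever appears in Hyper-V Manager):

```powershell
# Tails can present MAC addresses other than the one Hyper-V assigned,
# so the virtual switch port must allow spoofed addresses
Set-VMNetworkAdapter -VMName "Tails" -MacAddressSpoofing On
```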

Event logs in Hyper-V viewed from Windows 8.1 workstation

Now Tails is running fine.

Tails running in Hyper-V

Pause AND Save

Hyper-V has two “pause” modes: Pause and Save. I really like this, and I believe VMware only has the equivalent of Save (though I could be wrong). If you Save a machine, Hyper-V writes the memory out to disk and saves the state of the machine, so you don’t need to close your applications and shut down your VM to free up resources on the host. Save frees up CPU and memory, but takes a few seconds to write the memory to disk – the same as VMware.

Pause, though, freezes the VM and releases the CPU for other machines while keeping the guest memory on the host, so pausing is virtually instant. Perhaps another guest suddenly needs a lot of CPU – you can pause other VMs instantly and reassign resources. I’m not sure it would see much use in an enterprise environment, but it could be handy in a lab.
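Both modes are exposed as cmdlets too, which makes the difference concrete (the VM name is a placeholder):

```powershell
Suspend-VM -Name "lab01"   # Pause: CPU released, RAM stays allocated - instant
Resume-VM  -Name "lab01"

Save-VM    -Name "lab01"   # Save: RAM written to disk, all resources freed
Start-VM   -Name "lab01"   # resumes from the saved state
```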


I’m pretty happy with Hyper-V so far, although I haven’t explored AD integration, replication or high availability, mainly because I only have the one host. What I’d really like for my home lab is three machines – two Hyper-V hosts and one storage host – but for me that’s overkill and it would cost a lot in power to run.

Migration from VMware was a pain and I would not recommend an in-place migration. Use swing kit with separate storage, so if the migration doesn’t go well you have something to fall back to.

The thing that I’m really happy about is I don’t have to rebuild my lab every 60 days! Hooray!


mooZilla build – a tour

I thought it might be useful to document how I built my bare metal hypervisor server for home use, also known as a Whitebox build. At the time of writing I am using VMware ESXi 5.1 but will be looking to upgrade to 5.5 shortly when it’s released.

My build was inspired by Paul Braren’s vZilla build. Thanks Paul for an awesome site! 😀

Component list

It took me quite a bit of research to carefully choose the components for my build. I did get one of them wrong and had to send it back to the supplier (described later).

  • Case Antec Performance One P182 (from my old server). Really like this case, great for cable tidying, plenty of room and cooling. 4 years old and not showing its age one bit.
  • Motherboard ASRock Z87 Extreme 3. I went for this board because it has three PCI slots and three PCIe 3.0 slots. I am using three network cards and one RAID controller card, so I need lots of slots with room for expansion later. I have one PCI and one PCIe 3.0 slot left.
  • CPU Intel Core i7 4771 Haswell 3.5 GHz not overclocked (yet) :D. I just went for the fastest CPU on the market at a reasonable price. Currently using stock fan, seems good and quiet enough at the moment.
  • Memory To be honest I’m not really sure I have the right type, but I’ve gone for Corsair Vengeance Low Profile 4x 8GB for a total of 32 GB. PC3-12800 (1600) CAS 10-10-10-27. Seems to work OK.
  • PSU Power supply is a 1000W Akasa venom Hybrid modular supply to power lots of drives. Probably overkill but I don’t want to risk starving the system of power.
  • RAID Controller. LSI MegaRAID 9270-8i. Has two mini-SAS slots which can be used with SAS-SATA fan-out cables to connect 8 drives. It’s expandable to over 100 drives with expansion ports, but I don’t think I’ll need that. I have the MegaRAID LSIiBBU09 battery backup installed too (extra cost)
  • Storage as RAID-5 4x 4TB Hitachi Deskstar SATA III drives. Main storage area. 10.91 TB usable
  • Storage as RAID-0 4x 1TB Samsung SATA II drives (from my old server). Will be used for backups. 3.64 TB usable.
  • Akasa C31 5.25″ Expansion Bay HDD Adaptor for 4x 3.5″ HDD in three 5.25″ bays. This lets me use the 5.25″ space in the case for 3.5″ drives in a tidy, rubber-insulated way that dampens noise.
  • 1x PCIE Network card Intel EXPI9301CTBLK PCI-e Gigabit. This card connects to my home network.
  • 2x PCI Network cards Intel PWLA8391GTBLK Pro 1000GT Gigabit Adapter PC (from my old server). Connects to two separate ADSL internet connections.
  • 2x USB Stick Kingston DataTraveler 100 G3 8GB

I’m in the UK, so all components were purchased from Scan Computers except for the RAID controller, battery and drive cage (Scan didn’t stock them, so I got those from Amazon UK). Notice there’s no optical drive or floppy in the build 😉

Final build photo

Components in

The process

All my components were laid out on a small table and connected up so that I could fiddle with them unimpeded. If I had put everything in the case and then needed to change it, it would have been a pain. When building complicated systems like this I think it’s always a good idea to do a “bread board” build first and get the system running and configured exactly how it should be, then load it all into a case. Powering on the system like this can be done with either a power switch stolen from an old system (preferred) or using any bit of conductive metal lying around that can short the Power Switch pins on the motherboard. I used my screwdriver to do that as I didn’t have any old power switches to use. Be very careful not to short the wrong pins! This can potentially kill your motherboard!

I started the bread board build with only the motherboard, PSU, CPU and memory connected. The first time I powered it up, it didn’t work: nothing on the screen, and the CPU fans powered up, spun down after a minute, then spun up again. I phoned Scan and asked what the problem might be, and they told me to check that none of the CPU pins were damaged. I found this highly unlikely, but I checked anyway. Sure enough, one of the CPU socket pins on the motherboard was bent – I’m not sure whether that was me, or whether the motherboard arrived like that. Anyway, my wife straightened it out for me (she has better eyes than me, and is better with the tweezers!). This little problem cost me half a day of troubleshooting. With the pin straightened, the system booted up, and I connected some more components – the RAID card, keyboard and mouse.

My original build had the LSI MegaRAID 9240-4i card as recommended by Scan Computers UK. However I could not get this card to work under VMware or Windows 7 installed to bare metal.

Windows 7 on bare metal with 9240-4i failure

I flashed the motherboard and the card to the latest firmware, and I slipstreamed the latest drivers into the ESXi boot ISO, but still no luck.

ESXi frozen trying to load megaraid driver for 9240-4i

DO NOT USE the 9240-4i card. Maybe I was unlucky, but I’ve since learned that this card offloads processing to the host CPU – I didn’t know that when I ordered it. This mistake cost me more than a day, so I sent it back to Scan and replaced it with the 9270-8i, which is a true hardware RAID controller with processing done on the card rather than offloaded. I had no issues with this card at all, though I flashed the firmware to the latest version anyway to make sure it would be fully compatible and stable.

Test with one drive attached to one SATA port on the motherboard

I decided to quickly create a Windows Server 2012 R2 virtual machine on a disk attached directly to the motherboard, without the RAID controller, as a control for benchmarking the RAID controller. I used both CrystalDiskMark and SQLIO to test the disk. Here are the results:

Single 4TB Hitachi disk attached to Mobo SATA

I will leave out the SQLIO test results here because they would take up too much room.
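For anyone wanting to reproduce the SQLIO side of the testing, an invocation looks roughly like this – the parameters are illustrative rather than the exact ones I used:

```powershell
# 64KB random writes, 8 threads, 8 outstanding IOs per thread,
# 2 minutes, no OS buffering, with latency statistics
sqlio -kW -frandom -b64 -t8 -o8 -s120 -LS -BN testfile.dat
```

Swap -kW for -kR and -frandom for -fsequential to cover the read and sequential cases.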

RAID Controller configuration

This part consumed another day of my time. 🙁

With the new 9270-8i card in (without the battery at this point) I proceeded to build the RAID-5 array with the following settings:

  • RAID level: 5
  • Strip size: 256KB
  • Write policy: Write through
  • Read policy: Read ahead

This was done using the WebBIOS utility at boot. You’re supposed to press CTRL-H when the system boots to get into it, but on my ASRock board pressing CTRL-H took me to the ASRock BIOS boot settings screen. This confused me, but after an hour of research I realised I needed to press CTRL-H, then F11 at the ASRock screen, and select the RAID controller as the boot device (sigh). I know this configuration is also possible from a Windows VM with LSI Storage Manager installed, but I hadn’t set that up yet.
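For what it’s worth, the same array can also be built from the command line with LSI’s MegaCLI tool instead of WebBIOS. A rough sketch only – the enclosure:slot IDs are placeholders for my drives, and the exact token order is worth checking against the MegaCLI documentation:

```powershell
# List physical drives to find the enclosure:slot addresses
MegaCli64 -PDList -aAll

# RAID-5 across four drives, 256KB strip, write-through, read-ahead,
# on adapter 0
MegaCli64 -CfgLdAdd -r5 "[252:0,252:1,252:2,252:3]" WT RA -strpsz256 -a0
```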

I then installed ESXi from one bootable ISO USB Stick onto the other empty USB stick. Took about 10 minutes. I removed the install USB stick and booted into VMware ESXi for the first time, and set up my network so I could connect from the vSphere windows client.

I created a Windows Server 2012 R2 virtual machine to do some tests. While uploading the ISO image I noticed something was wrong: the upload was taking a very long time. On my old server an ISO upload took about a minute, but this one took nearly twenty minutes. Creating the virtual machine and installing Windows Server also took quite a while, but I decided to be patient and run a disk IO test. Here is the result:

Thin provisioned with write-through policy

Read speed is acceptable, but the write speed is totally unacceptable. I spent a while reading on the web and doing some more testing. I tried an eager-zeroed thick provisioned disk and got this result:

Thick provisioned eager-zeroed with write-through policy

An improvement in read speed, but nothing in the write speed. It didn’t occur to me at this point to enable write-back caching, because I didn’t have the battery installed and it felt wrong to use a write cache with no battery backup – a power loss during writes would likely cause filesystem corruption. After another few hours of tinkering, I decided to wipe the array and try RAID-0 to see if the controller was just really bad at RAID-5.

RAID-0 with write-through caching

Hmm, a HUGE performance increase, which is not that surprising for RAID-0, although I was surprised at just how much the write performance had improved. It then occurred to me to try RAID-0 with write-back caching:

RAID-0 with write-back caching

Much better performance. This encouraged me to wipe the array again and try RAID-5 with write-back caching, even though my battery hadn’t arrived yet. Bear in mind all this testing took hours and hours – each configuration also had an SQLIO run, but I won’t bore you with the details of those tests.

RAID-5 with Write-back caching (no battery)

Ta-daaaaa! 😀

Problem solved – performance is acceptable to me now. I then proceeded to load the storage with my backed-up VMs from the old server using a combination of Trilead VM Explorer backups and the VMware ovftool (and a little bit of WinSCP). Several hours later (like, 24) most of my VMs were online on my new mooZilla server. At this stage I was just awaiting the battery backup for the RAID card. As the server is connected to a UPS, the missing write-back cache battery isn’t a big concern and I can live with the small risk.

I then connected my four year old SATA II disks (4x 1TB) to the LSI controller and configured them as RAID-0 with write-back caching. This array will hold my backups inside my BackupPC VM. I know RAID-0 is not redundant, but I just don’t care because it’s only used for local backups – I use BackBlaze for off-site. The old SATA II disks aren’t too shabby in RAID-0:

RAID-0 Array with 4x1TB SATA 2 disks

A few days later the battery backup for the LSI controller arrived. I plugged it in and changed the write policy on both arrays to Write-Back with BBU (Battery Back-Up). When I powered up the server I got a warning saying the battery was low and that performance would suffer:

Write Back with BBU Warning

Sure enough, a test with the fresh battery in showed the controller had reverted to write-through mode to protect the data. That makes sense: the battery exists to protect the write cache, and if it’s low it won’t hold the cache for long, so the controller reverts to write-through at the expense of performance.

Here are the results for both arrays:

RAID-5 Write-Back BBU (new battery)
RAID-0 Write Back BBU (new battery)

After an hour I rebooted ESXi and checked the disks again: the RAID controller had switched back to write-back mode from the temporary write-through it used while the battery was charging. Nice.


RAID-5 Write-Back BBU, charged battery


RAID-0 Write-back BBU (charged battery)

Even though the RAID-0 SATA II disks are nearly five years old, they are holding up pretty well.

LSI Storage Manager in Windows Server 2012 R2

The next thing I did was build a Windows Server 2012 R2 VM and install LSI Storage Manager in it. I followed Paul’s article on how to get ESXi to present the controller through the virtualisation layer, and it worked straight away.

LSI Storage Manager (Physical view)
RAID-5 Details
RAID-0 Details

Interestingly, the temperature of the RAID chip is 72°C – pretty hot. I have seen it reach 99°C under load.

vSphere client

My vSphere client currently looks like this, until I get all my VMs restored from backup.

vSphere client at time of writing


This server build took a lot of effort and about £1500. I think it was worth it, and that the server will last four to five years. My VMs load very quickly compared to my old server, and I have capacity to spare. I also have a ton of storage to play with, so I can keep plenty of backups and media. I would really love to get vCenter running without having to buy it (hint, hint Paul), and I will have a go when ESXi 5.5 is officially released. I already have the ISO downloaded, as I applied for the trial, so I might get to that next week.

I hope this was helpful, please leave feedback or questions in the comments.