1. Create Nutanix CE VM Nested on ESXi
This post explains how to install Nutanix CE nested on ESXi 6.
My lab is fully nested on a single Dell R320 server.
I use Nutanix Community Edition, a free, community-supported version of the Nutanix software. The CE version is fully functional and runs on the Acropolis Hypervisor.
ESXi host configuration:
CPU :
RAM :
HBA :
DISK :
500GB SSD RAID1
VMware :
Nutanix CE VM configuration:
- 4 vCPUs (2 sockets x 2 cores)
- 18GB RAM
- 7GB Nutanix CE image as vmdk
- 500GB vmdk on SSD
- 500GB vmdk on HDD
- 1 x e1000 virtual NIC
I assume that registering for and downloading Nutanix CE is already done.
Prepare the boot disk:
Extract the ce-2015.07.16-beta.img.gz archive.
Once the image file is extracted, rename ce-2015.07.16-beta.img to NutanixCE-flat.vmdk.
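These two steps can be done from a shell in one go; a minimal sketch (a throwaway stand-in file replaces the real downloaded archive here, purely for illustration):

```shell
# Stand-in for the real downloaded archive, so the sketch is self-contained
cd "$(mktemp -d)"
printf 'image-bytes' > ce-2015.07.16-beta.img
gzip ce-2015.07.16-beta.img

# Extract the compressed image and rename it to match the flat-vmdk name
# referenced by the disk descriptor below
gunzip ce-2015.07.16-beta.img.gz
mv ce-2015.07.16-beta.img NutanixCE-flat.vmdk
ls -l NutanixCE-flat.vmdk
```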
Keep the following lines in a .txt file. This is the disk descriptor used later to create the first disk of the VM (the boot disk):
#Disk DescriptorFile
version=4
encoding="UTF-8"
CID=a63adc2a
parentCID=ffffffff
isNativeSnapshot="no"
createType="vmfs"

# Extent description
RW 14540800 VMFS "NutanixCE-flat.vmdk"

# The Disk Data Base
#DDB
ddb.adapterType = "lsilogic"
ddb.geometry.cylinders = "905"
ddb.geometry.heads = "255"
ddb.geometry.sectors = "63"
ddb.longContentID = "3f748e79041db88d7194ebf11c32c88d"
ddb.uuid = "60 00 C2 9b 69 2f c9 76-74 c4 07 9e 10 87 3b f9"
ddb.virtualHWVersion = "11"
- Boot the ESXi 6.0 host.
- Enable HV (hardware virtualization) so the host can run nested 64-bit guest VMs
Log in to your ESXi host as root and run:
grep -i "vhv.enable" /etc/vmware/config || echo "vhv.enable = \"TRUE\"" >> /etc/vmware/config
A reboot of the system is not necessary.
- Create a port group on a virtual switch to attach the Nutanix CE VM, and make sure the security settings accept Promiscuous Mode
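This can also be set from the ESXi shell instead of the UI; a sketch assuming the default vSwitch0 (substitute your own vSwitch name):

```shell
# Allow promiscuous mode on the standard vSwitch (needed for nested networking)
esxcli network vswitch standard policy security set --vswitch-name=vSwitch0 --allow-promiscuous=true

# Verify the resulting security policy
esxcli network vswitch standard policy security get --vswitch-name=vSwitch0
```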
- Connect to the ESXi host with an SSH client and create a directory for the Nutanix CE VM:
[root@esxi-lab:~] mkdir /vmfs/volumes/datastore1/NTX
- Create a NutanixCE.vmdk file in the directory:
[root@esxi-lab:~] touch /vmfs/volumes/datastore1/NTX/NutanixCE.vmdk
[root@esxi-lab:~] vi /vmfs/volumes/datastore1/NTX/NutanixCE.vmdk

Paste the disk descriptor lines saved earlier into the vi editor, then type ":wq" to write out and save the file.
- Copy the NutanixCE-flat.vmdk file over SSH with a client such as WinSCP to the same directory
List the directory to verify that both files are present.
Create the VM:
- Create a new CentOS 4/5/6/7 (64-bit) VM
- Configure 4 vCPUs (minimum of 4 vCPUs)
- Configure 18GB RAM (minimum of 16GB RAM)
Select Typical:
Type the name of the VM:
Select the datastore:
Select the guest operating system:
Select the dedicated port group created previously:
You need to create a temporary disk; it will be removed in the next step.
Check Edit and click Continue.
Edit the vRAM, vCPU, and vSCSI controller, and remove the floppy drive and the temporary 16GB virtual disk:
- Add an existing hard disk and use the NutanixCE.vmdk image sent earlier with WinSCP.
Map it to SCSI 0:0 on the PVSCSI adapter.
- Add a new hard disk, 500GB thin provisioned, for the virtual SSD.
Map it to SCSI 0:1 on the PVSCSI adapter.
- Add a new hard disk, 500GB thin provisioned, for the virtual HDD.
Map it to SCSI 0:2 on the PVSCSI adapter.
- Check that the network adapter is an Intel e1000.
- Check that the guest OS version is CentOS 4/5/6/7 (64-bit).
- Click VM Options > General > Configuration Parameters and add a new row to simulate an SSD disk:
This parameter makes the disk attached to the SCSI 0:1 controller appear as an SSD.
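The row to add is presumably VMware's standard virtualSSD flag for that device; a sketch of the parameter (name/value per VMware's documented vmx option, with 0:1 being the disk created as the virtual SSD):

```
scsi0:1.virtualSSD = "1"
```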
- You can now validate the configuration
The VM directory should contain the following files:
- Expose the "hardware assisted virtualization" option to the guest OS in the vSphere Web Client:
If you don't have vCenter and can't edit a Virtual Hardware 11 VM through the vSphere Web Client, edit your .vmx file with the vi editor and add these lines:
vhv.enable = "TRUE"
featMask.vm.hv.capable = "Min:1"
- Keep this VM for future deployments and use clones of it to create multiple Nutanix CE nodes.
If your physical host does not have sufficient resources to run all the nodes and your installation fails, go to part 2 to customise your Acropolis and CVM configuration.
2. Customise Nutanix CE VM before Install
If your physical host has limited performance, edit some config files to bypass several checks before deploying the nested Acropolis host and the CVM.
Log on to the VM (login: root, password: nutanix/4u).
Edit the minimum_reqs.py and sysUtil.py files in /home/install/phx_iso/phoenix:
1 – Edit MIN_MEMORY_GB and MIN_CORES to change the minimum memory and cores required by the Nutanix CE VM
2 – Edit SVM_NUM_VCPUS and CUSTOM_RAM to deploy the CVM with fewer vCPUs and less memory
3 – Edit SSD_rdIOPS_thresh and SSD_wrIOPS_thresh to change the minimum read/write IOPS required to deploy the CVM
If you have an AMD processor, disable the CheckVtx and CheckIsIntel checks in /home/install/phx_iso/phoenix/minimum_reqs.py by commenting out the lines checkVtx(cpuinfo) and checkIsIntel(cpuinfo).
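These edits can also be scripted with sed; a minimal sketch (the variable names come from the files above, but the values are purely illustrative, and a stand-in copy of minimum_reqs.py is used here instead of the real file):

```shell
# Stand-in copy of minimum_reqs.py so the sketch is self-contained;
# in practice, edit /home/install/phx_iso/phoenix/minimum_reqs.py in place
cd "$(mktemp -d)"
cat > minimum_reqs.py <<'EOF'
MIN_MEMORY_GB = 16
MIN_CORES = 4
EOF

# Lower the minimum requirements (illustrative values)
sed -i 's/^MIN_MEMORY_GB = .*/MIN_MEMORY_GB = 12/' minimum_reqs.py
sed -i 's/^MIN_CORES = .*/MIN_CORES = 2/' minimum_reqs.py
grep '^MIN_' minimum_reqs.py
```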
Once the customisation is finished, you can clone the VM and use the clones to deploy multiple Nutanix CE nodes.
3. Install Nutanix CE
Boot the 3 VMs.
Log in with "install" to run the various pre-installation checks
Select the keyboard layout and Proceed
Complete the IP configuration, accept the end user licence agreement, and click Start.
Do not check "Create single-node" if you want to create 3 or more nodes!
VM 1 :
VM 2 :
VM 3 :
Wait for the hypervisors to be configured and the CVM to be deployed.
The CVM starts:
Connect to the hypervisor (login: root, password: nutanix/4u)
We now have 3 Nutanix CE nodes:
4. Edit virtual CVM resources with virsh after install
By default the CVM runs with X vCPUs and XX GB of vRAM; to change this, use an SSH client to connect to the KVM hypervisor.
Log on using the default Nutanix KVM credentials:
Login: root / password: nutanix/4u
Get the name of your Nutanix CVM:
virsh list --all
Run virsh dominfo <Name> to confirm the number of vCPUs and the amount of vRAM.
To change the amount of vRAM, run the following commands, substituting the appropriate CVM name:
# Shutdown CVM
virsh shutdown "Name"
# Set vRAM
virsh setmaxmem "Name" 6G --config
virsh setmem "Name" 6G --config
# Start VM again
virsh start "Name"
To change the number of vCPUs, edit the domain XML with virsh.
# Edit virsh xml
virsh edit "Name"
This opens the VM's XML file in the vi editor:
- Press "i" to enter insert mode
- Use the arrow keys to move to the line <vcpu placement='static'>x</vcpu>
- Change the value to the number of vCPUs you want
- Press "Esc" to exit insert mode
- Type ":wq" to write out and save the file
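As an alternative to editing the XML by hand, virsh can change the vCPU count directly; a sketch (same placeholder "Name" as above, and --maximum/--config assume a reasonably recent virsh):

```shell
# Raise the maximum allowed vCPUs, then the active count, in the persistent config
virsh setvcpus "Name" 2 --maximum --config
virsh setvcpus "Name" 2 --config
```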
# Shutdown the Nutanix CVM
virsh shutdown "Name"
# Start the Nutanix CVM
virsh start "Name"
Run virsh dominfo again to confirm the changes:
virsh dominfo "Name"
Enjoy ;-p

