Create Nutanix CE VM Nested on ESXi 6

1. Create Nutanix CE VM Nested on ESXi


This post explains how to install Nutanix CE nested on ESXi 6.

My lab is fully nested on a single Dell R320 server.

I use Nutanix Community Edition, a community-supported and free version of the Nutanix software. The CE version is fully functional and runs on the Acropolis Hypervisor.

ESXi host configuration:

  • CPU :
  • RAM :
  • HBA :
  • DISK : 500GB SSD RAID1
  • VMware :

Nutanix CE VM configuration:

I assume that registering for and downloading Nutanix CE is already done.

Prepare the boot disk:

Extract the ce-2015.07.16-beta.img.gz archive


After you have extracted the image file


Rename the ce-2015.07.16-beta.img to NutanixCE-flat.vmdk
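For example, from a Linux shell (7-Zip and a rename in Explorer work just as well on Windows):

    # extract the raw disk image from the gzip archive
    gunzip ce-2015.07.16-beta.img.gz

    # rename it so it becomes the flat extent of the VMDK
    mv ce-2015.07.16-beta.img NutanixCE-flat.vmdk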


Keep the following lines in a .txt file. This is the disk descriptor used later to create the first disk of the VM, the « boot disk » :
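A minimal sketch of the descriptor — the RW value must equal your image size in bytes divided by 512 (14540800 fits the ~7.4 GB 2015.07.16 image), so adjust it and the geometry if your image differs:

    # Disk DescriptorFile
    version=1
    CID=fffffffe
    parentCID=ffffffff
    createType="vmfs"

    # Extent description
    RW 14540800 VMFS "NutanixCE-flat.vmdk"

    # The Disk Data Base
    ddb.adapterType = "lsilogic"
    ddb.geometry.cylinders = "905"
    ddb.geometry.heads = "255"
    ddb.geometry.sectors = "63"
    ddb.virtualHWVersion = "11"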


  • Boot the ESXi 6.0 host.
  • Enable HV (hardware virtualization, to run nested 64-bit guest VMs).

Log in to your ESXi host as root and type this :
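    # enable nested hardware virtualization globally on the host
    echo 'vhv.enable = "TRUE"' >> /etc/vmware/config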

A reboot of the host is not necessary.

  • Create a port group on a virtual switch to attach the Nutanix CE VM, and ensure that the security settings allow Promiscuous Mode.


Accept Promiscuous Mode
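If you prefer the command line, here is a sketch with esxcli (the port group name « NutanixCE » and « vSwitch0 » are assumptions, substitute your own):

    # create the port group on an existing standard vSwitch
    esxcli network vswitch standard portgroup add -p NutanixCE -v vSwitch0

    # allow promiscuous mode on that port group
    esxcli network vswitch standard portgroup policy security set -p NutanixCE --allow-promiscuous=true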

  • Connect to the ESXi host with an SSH client.
  • Create a directory for the Nutanix CE VM :
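For example (« datastore1 » is an assumption, use your own datastore name):

    mkdir /vmfs/volumes/datastore1/NutanixCE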

  • Create a NutanixCE.vmdk file in the directory :
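For example, open it with vi and paste in the descriptor lines saved earlier:

    vi /vmfs/volumes/datastore1/NutanixCE/NutanixCE.vmdk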

  • Upload the NutanixCE-flat.vmdk file over SSH with the WinSCP client to the same directory.

You can list the directory to see the following :
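    ls -lh /vmfs/volumes/datastore1/NutanixCE/
    # NutanixCE-flat.vmdk   <- the ~7 GB raw image uploaded with WinSCP
    # NutanixCE.vmdk        <- the small text descriptor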

Create the VM:

  • Create a new CentOS 4/5/6/7 (64-bit) VM
  • Configure 4 vCPUs (the minimum is 4 vCPUs)
  • Configure 18 GB RAM (the minimum is 16 GB)

Select Typical :

Type the name of the VM :

Select a datastore :

Select the guest operating system :

Select the dedicated port group created previously :

You need to create a temporary disk.

Check Edit and Continue.

Edit the vRAM, vCPUs, and vSCSI controller, and remove the floppy drive and the temporary virtual disk (16GB):

  • Add an Existing Hard Disk and use the NutanixCE.vmdk disk created earlier.

    Map it to SCSI 0:0 on the PVSCSI Adapter

     








  • Add a new Hard Disk, 500GB thin provisioned, for the virtual SSD.

    Map it to SCSI 0:1 on the PVSCSI Adapter

     






  • Add a new Hard disk, 500GB thin provisioned, for the virtual HDD

    Map it to SCSI 0:2 on the PVSCSI Adapter

  • Check that the Network Adapter is an Intel e1000.

  • Check that the guest OS version is CentOS 4/5/6/7 (64-bit).

  • Click on VM Options > General > Configuration Parameters and add a new row to simulate an SSD disk:
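    scsi0:1.virtualSSD = 1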

This parameter simulates an SSD disk attached to the SCSI 0:1 controller.

  • You can now validate the configuration.

You should see the following in the VM directory :
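    ls /vmfs/volumes/datastore1/NutanixCE/
    # NutanixCE.vmx, NutanixCE.vmdk, NutanixCE-flat.vmdk,
    # plus the two new 500 GB thin-provisioned disks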


  • Expose the « hardware assisted virtualization » option to the guest OS in the vSphere Web Client :

If you don’t have vCenter and you can’t edit a Virtual Hardware 11 VM through the vSphere Web Client, edit your .vmx file with the vi editor and add this line :
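    vhv.enable = "TRUE"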

  • Keep this VM for future deployments and use VM clones to create multiple Nutanix CE nodes.

If your physical host does not have sufficient resources to run all the nodes and your installation fails, go to part 2 to customise your Acropolis and CVM config.

2. Customise Nutanix CE VM before Install

If you have a physical host with limited resources, edit some config files to bypass some checks before you deploy the nested Acropolis hypervisor and the CVM.

Log on to the VM with login « root » and password « nutanix/4u ».

Edit the « minimum_reqs.py » and « sysUtil.py » files in /home/install/phx_iso/phoenix :


1 – Edit « MIN_MEMORY_GB » and « MIN_CORES » to change the minimum memory and cores needed by the Nutanix CE VM

2 – Edit « SVM_NUM_VCPUS » and « CUSTOM_RAM » to deploy the CVM with fewer vCPUs and less memory

3 – Edit « SSD_rdIOPS_thresh » and « SSD_wrIOPS_thresh » to change the minimum read/write IOPS required to deploy the CVM
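A sketch of what the edited constants can look like — the values are illustrative only, and which of the two files holds each constant can vary by build, so search both:

    MIN_MEMORY_GB = 12        # minimum host memory check, lowered to fit the VM
    MIN_CORES = 2             # minimum core-count check
    SVM_NUM_VCPUS = 2         # vCPUs given to the CVM
    CUSTOM_RAM = 8            # CVM memory (GB, assumed unit)
    SSD_rdIOPS_thresh = 50    # relaxed minimum read IOPS for the SSD
    SSD_wrIOPS_thresh = 50    # relaxed minimum write IOPS for the SSD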

 

If you have an AMD processor, disable the « checkVtx » and « checkIsIntel » checks in /home/install/phx_iso/phoenix/minimum_reqs.py by commenting out the lines checkVtx(cpuinfo) and checkIsIntel(cpuinfo) :
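    #checkVtx(cpuinfo)
    #checkIsIntel(cpuinfo)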

After the customisation is finished, you can clone the VM and use the clones to deploy multiple Nutanix CE nodes.

3. Install Nutanix CE

Boot the 3 VMs


Log in with « install » to run the various checks before installation


Select the keyboard layout and Proceed


Complete the IP configuration, accept the end-user license agreement, and Start.

Do not check « Create single-node » if you want to create 3 or more nodes!

VM 1 :


VM 2 :

VM 3 :

Wait for the configuration of the hypervisor and the deployment of the CVM.



CVM start :


Connect to the hypervisor (login : root | password : nutanix/4u) :



We now have 3 Nutanix CE nodes :


4. Edit virtual CVM resources with virsh after install

By default the CVM runs with X vCPUs and XX GB of vRAM; if you want to change this, use an SSH client to connect to the KVM hypervisor.

Log on using the default Nutanix KVM credentials :

Login: root /  password: nutanix/4u

Get the name of your Nutanix CVM :
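    virsh list --all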


Run virsh dominfo <Name> to confirm the number of vCPUs and the amount of vRAM.

To change the amount of vRAM, run the following commands, substituting the appropriate CVM name (the 10 GB value below is only an example):

# Shutdown CVM
virsh shutdown <Name>

# Set vRAM (10 GB expressed in KiB; run virsh setmaxmem first if you are increasing it)
virsh setmem <Name> 10485760 --config

# Start VM again
virsh start <Name>

To change the number of vCPUs, edit the virsh XML file.

# Edit virsh xml
virsh edit <Name>

This opens the VM’s XML file in the vi editor :

  • Press « i » to enter insert mode
  • Use the arrow keys to move to the line <vcpu placement='static'>x</vcpu>
  • Change the value to whatever you want
  • Press « Esc » to exit insert mode
  • Type « :wq » to save the file and quit

# Shutdown the Nutanix CVM
virsh shutdown <Name>

# Start the Nutanix CVM
virsh start <Name>

Run virsh dominfo again to confirm the changes.


Enjoy ;-p
