
On CVM:

Set cluster timezone

ncli cluster set-timezone timezone=America/New_York

Set AHV host timezone

hostssh "date; mv /etc/localtime /etc/localtime.bak; ln -s /usr/share/zoneinfo/America/New_York /etc/localtime; date"

Set name servers

ncli cluster add-to-name-servers servers=8.8.8.8,8.8.4.4

Set NTP servers

ncli cluster add-to-ntp-servers servers=0.us.pool.ntp.org,1.us.pool.ntp.org,2.us.pool.ntp.org,3.us.pool.ntp.org

Set cluster name

ncli cluster edit-info new-name=<etc>

Set cluster IP

ncli cluster set-external-ip-address external-ip-address=<x.x.x.x>

Set data services IP

ncli cluster edit-params external-data-services-ip-address=<x.x.x.x>

Health checks

ncc health_checks run_all

Diags

~/diagnostics/diagnostics.py run

~/diagnostics/diagnostics.py cleanup
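
To confirm the settings took, a few read-only checks are handy (a quick sketch; exact ncli subcommand names may vary slightly by AOS version):

ncli cluster info
ncli cluster get-ntp-servers
ncli cluster get-name-servers
hostssh date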

Nutanix is the best thing since sliced bread…more to come…

Deploy ScaleIO 1.32 in a virtual lab – I’m using VMware vSphere 5.5, but you can use any virtualization platform.

Server minimum hardware requirements are 2 CPU cores and 2 GB RAM – I’m using 2 cores and 4 GB of RAM.

Deploy ScaleIO 1.32 Gateway on Windows 2008/2012 R2 (if you followed this, skip to the next section)
* Configure disk (I’m using 40 GB) and networking
* Update – if necessary – obviously
* Install JRE 1.7 or higher (ideally 64-bit) – if not included in image
* Turn off the Windows Firewall with Advanced Security (netsh advfirewall set allprofiles state off) – if not included in image
* Install the Gateway binary (choose 64-bit if you chose 64-bit JRE or 32-bit for 32-bit JRE)
* Done and done…you can use the gateway to do systems installs/upgrades/etc. from now on…

Deploy 4 Linux VMs (I used CentOS 6.6 x86_64 Minimal)

Deploy ScaleIO 1.32 SDC/SDS Nodes on Linux (on each server)
* Configure disk (I’m using 8 GB for OS and 100 GB for data) and networking
* Update – if necessary – obviously (wash, rinse, repeat)
* Disable the firewall (service iptables stop && service ip6tables stop && chkconfig iptables off && chkconfig ip6tables off) and selinux (sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux)
* Install dependencies – yum -y install openssh-clients libaio numactl
* Reboot
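
If you’d rather paste the node prep in one go, here’s the same sequence from the bullets above consolidated into a single block (run as root on each node):

yum -y update
service iptables stop && service ip6tables stop
chkconfig iptables off && chkconfig ip6tables off
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
yum -y install openssh-clients libaio numactl
reboot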

Use ScaleIO Installer via ScaleIO Gateway to deploy
* Connect to https://<ip address>, where <ip address> is the IP address of the Gateway (accept all the warnings)
* Enter “admin” as the username and the password you provided when you installed the Gateway binaries
* Click “Get Started” under “Install using this web interface”
* Click “Browse” and select the installation packages you wish to deploy – for Linux this consists of all the rpm files under ScaleIO_1.32_RHEL6_Download, excluding the *callhome* rpm…
* Then click Open, then Upload
* Then click Proceed to Install

At this point, you can decide whether to use the installation wizard or perform a configured installation using a CSV file to provide the necessary details.  If you use the installation wizard, it will create a default protection domain and storage pool, which is fine for a demo environment, but I’d prefer a custom install, so I’m choosing the “Upload installation CSV” option and supplying the following CSV contents:

Domain,Username,Password,Operating System,Is MDM/TB,MDM Mgmt IP,MDM IPs,Is SDS,SDS Name,SDS All IPs,SDS-SDS Only IPs,SDS-SDC Only IPs,Protection Domain,Fault Set,SDS Device List,SDS Pool List,SDS Device Names,Optimize IOPS,Is SDC
,root,ScaleIO,linux,Primary,,192.168.50.55,Yes,Linux-55,192.168.50.55,,,PD1,,/dev/sdb,SP1,,,Yes
,root,ScaleIO,linux,Secondary,,192.168.50.56,Yes,Linux-56,192.168.50.56,,,PD1,,/dev/sdb,SP1,,,Yes
,root,ScaleIO,linux,TB,,192.168.50.57,Yes,Linux-57,192.168.50.57,,,PD1,,/dev/sdb,SP1,,,Yes
,root,ScaleIO,linux,,,,Yes,Linux-58,192.168.50.58,,,PD1,,/dev/sdb,SP1,,,Yes

* To upload that CSV, click the “Browse” button, navigate to the CSV file, then click “Upload installation CSV” button.
* Verify the MDM and LIA passwords
* Check the “I accept the terms…” check box
* Set any advanced options or syslog details
* Uncheck “Call Home” check box
* Verify the content supplied in the CSV
* Click “Start Installation” button (If you’ve done everything correctly up until this point, you should see no errors in the query, upload, install and configure phases to follow. If you do see any failures, first retry, then troubleshoot using the information provided via the “Details” button next to each of the failed tasks.)
* Click the “Monitor” button to view the status of the installation process
* Once the Query Phase completes successfully, click the “Start upload phase” button to continue
* Once the Upload Phase completes successfully, click the “Start install phase” button to continue
* Once the Install Phase completes successfully, click the “Start configure phase” button to continue
* Once the Configure Phase completes successfully, click the “Mark operation completed” button and follow the “Post installation instructions…” displayed to add and map volumes – you should already have SDS devices, if you followed my instructions.

* Install the EMC ScaleIO GUI using the EMC-ScaleIO-gui-1.32-402.1.msi in the ScaleIO_Windows_SW_Download\ScaleIO_1.32_GUI_for_Windows_Download folder from the downloaded installation zip.
* Connect to your primary MDM at 192.168.50.55 with the username “admin” and the password “ScaleIO1” (if you used my IP addresses)…
* From there, you can perform some minor configuration changes – like renaming objects or configuring capacity and spare percentage…
* You’ll need to create volumes and map them to SDCs using the command line…
* For example, I’ll create 4 24GB volumes and map each of them to all 4 SDCs below:
cd /opt/emc/scaleio/mdm/bin/
scli --login --username admin --password 1nT3ll1g3nt
scli --add_volume --protection_domain_name PD1 --storage_pool_name SP1 --size_gb 24 --volume_name VOL1
scli --add_volume --protection_domain_name PD1 --storage_pool_name SP1 --size_gb 24 --volume_name VOL2
scli --add_volume --protection_domain_name PD1 --storage_pool_name SP1 --size_gb 24 --volume_name VOL3
scli --add_volume --protection_domain_name PD1 --storage_pool_name SP1 --size_gb 24 --volume_name VOL4
scli --map_volume_to_sdc --volume_name VOL1 --sdc_ip 192.168.50.55 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL1 --sdc_ip 192.168.50.56 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL1 --sdc_ip 192.168.50.57 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL1 --sdc_ip 192.168.50.58 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL2 --sdc_ip 192.168.50.55 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL2 --sdc_ip 192.168.50.56 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL2 --sdc_ip 192.168.50.57 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL2 --sdc_ip 192.168.50.58 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL3 --sdc_ip 192.168.50.55 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL3 --sdc_ip 192.168.50.56 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL3 --sdc_ip 192.168.50.57 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL3 --sdc_ip 192.168.50.58 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL4 --sdc_ip 192.168.50.55 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL4 --sdc_ip 192.168.50.56 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL4 --sdc_ip 192.168.50.57 --allow_multi_map
scli --map_volume_to_sdc --volume_name VOL4 --sdc_ip 192.168.50.58 --allow_multi_map
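
If typing all of those scli commands feels tedious, the same create-and-map work can be scripted with a couple of shell loops on the primary MDM (a sketch using the volume names and SDC IPs above):

cd /opt/emc/scaleio/mdm/bin/
scli --login --username admin --password 1nT3ll1g3nt
for vol in VOL1 VOL2 VOL3 VOL4; do
  scli --add_volume --protection_domain_name PD1 --storage_pool_name SP1 --size_gb 24 --volume_name $vol
  for ip in 192.168.50.55 192.168.50.56 192.168.50.57 192.168.50.58; do
    scli --map_volume_to_sdc --volume_name $vol --sdc_ip $ip --allow_multi_map
  done
done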

* Now each volume is mapped to the SDCs as follows:
VOL1 = /dev/scinia
VOL2 = /dev/scinib
VOL3 = /dev/scinic
VOL4 = /dev/scinid
* You can now partition, create file systems and mount those file systems on the SDCs…
* I’d suggest using tune2fs to disable the automatic file system check on mount, and performing those checks manually (if you feel they’re necessary) from one of the SDCs (e.g. tune2fs -c 0 /dev/scinia1)
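
For completeness, here’s roughly what that looks like on one SDC for the first volume (a sketch; the mount point and ext4 choice are mine):

fdisk /dev/scinia           # interactively create one primary partition (n, p, 1, defaults, w)
mkfs.ext4 /dev/scinia1      # create the file system
tune2fs -c 0 /dev/scinia1   # disable the mount-count-based fsck
mkdir -p /mnt/vol1
mount /dev/scinia1 /mnt/vol1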

Deploy ScaleIO 1.32 in a virtual lab – I’m using VMware vSphere 5.5, but you can use any virtualization platform.

Server minimum hardware requirements are 2 CPU cores and 2 GB RAM – I’m using 2 cores and 4 GB of RAM.

Deploy your choice of Windows – I’m using Server 2012 R2, so YMMV.

Deploy ScaleIO 1.32 Gateway on Windows
* Configure disk (I’m using 40 GB) and networking
* Update – if necessary – obviously
* Install JRE 1.7 or higher (ideally 64-bit) – if not included in image
* Turn off the Windows Firewall with Advanced Security (netsh advfirewall set allprofiles state off) – if not included in image
* Install the Gateway binary (choose 64-bit if you chose 64-bit JRE or 32-bit for 32-bit JRE)
* Done and done…you can use the gateway to do systems installs/upgrades/etc. from now on…

Deploy ScaleIO 1.32 SDC/SDS Nodes on Windows (on each server)
* Configure disk (I’m using 40 GB for OS and 100 GB for data) and networking
* Update – if necessary – obviously (wash, rinse, repeat)
* Install JRE 1.7 or higher (ideally 64-bit) – if not included in image
* Turn off the Windows Firewall with Advanced Security (netsh advfirewall set allprofiles state off) – if not included in image
* Go to diskmgmt.msc, bring Disk 1 online, then initialize, then create a New Simple Volume assigned to E, but DO NOT FORMAT (leave it RAW) – when the window pops up asking to format, just click Cancel or top-right X
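
If you’d rather script that than click through diskmgmt.msc, a rough diskpart equivalent looks like this (assuming the data disk shows up as Disk 1; note there is deliberately no format step):

diskpart
select disk 1
online disk
attributes disk clear readonly
convert mbr
create partition primary
assign letter=E
exit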

Use ScaleIO Installer via ScaleIO Gateway to deploy
* Connect to https://<ip address>, where <ip address> is the IP address of the Gateway (accept all the warnings)
* Enter “admin” as the username and the password you provided when you installed the Gateway binaries
* Click “Get Started” under “Install using this web interface”
* Click “Browse” and select the installation packages you wish to deploy – for Windows this consists of all the msi files under ScaleIO_1.32_Windows_Download, excluding the *callhome* msi…
* Then click Open, then Upload
* Then click Proceed to Install

At this point, you can decide whether to use the installation wizard or perform a configured installation using a CSV file to provide the necessary details.  If you use the installation wizard, it will create a default protection domain and storage pool, which is fine for a demo environment, but I’d prefer a custom install, so I’m choosing the “Upload installation CSV” option and supplying the following CSV contents:

Domain,Username,Password,Operating System,Is MDM/TB,MDM Mgmt IP,MDM IPs,Is SDS,SDS Name,SDS All IPs,SDS-SDS Only IPs,SDS-SDC Only IPs,Protection Domain,Fault Set,SDS Device List,SDS Pool List,SDS Device Names,Optimize IOPS,Is SDC
localhost,administrator,ScaleIO1,windows,Primary,,192.168.50.51,Yes,Win-51,192.168.50.51,,,PD1,,e,SP1,,,Yes
localhost,administrator,ScaleIO1,windows,Secondary,,192.168.50.52,Yes,Win-52,192.168.50.52,,,PD1,,e,SP1,,,Yes
localhost,administrator,ScaleIO1,windows,TB,,192.168.50.53,Yes,Win-53,192.168.50.53,,,PD1,,e,SP1,,,Yes
localhost,administrator,ScaleIO1,windows,,,,Yes,Win-54,192.168.50.54,,,PD1,,e,SP1,,,Yes

* To upload that CSV, click the “Browse” button, navigate to the CSV file, then click “Upload installation CSV” button.
* Verify the MDM and LIA passwords
* Check the “I accept the terms…” check box
* Set any advanced options or syslog details
* Uncheck “Call Home” check box
* Verify the content supplied in the CSV
* Click “Start Installation” button (If you’ve done everything correctly up until this point, you should see no errors in the query, upload, install and configure phases to follow. If you do see any failures, first retry, then troubleshoot using the information provided via the “Details” button next to each of the failed tasks.)
* Click the “Monitor” button to view the status of the installation process
* Once the Query Phase completes successfully, click the “Start upload phase” button to continue
* Once the Upload Phase completes successfully, click the “Start install phase” button to continue
* Once the Install Phase completes successfully, click the “Start configure phase” button to continue
* Once the Configure Phase completes successfully, click the “Mark operation completed” button and follow the “Post installation instructions…” displayed to add and map volumes – you should already have SDS devices, if you followed my instructions.

* Install the EMC ScaleIO GUI using the EMC-ScaleIO-gui-1.32-402.1.msi in the ScaleIO_Windows_SW_Download\ScaleIO_1.32_GUI_for_Windows_Download folder from the downloaded installation zip.
* Connect to your primary MDM at 192.168.50.51 with the username “admin” and the password “ScaleIO1” (if you used my IP addresses)…
* From there, you can perform some minor configuration changes – like renaming objects or configuring capacity and spare percentage…
* You’ll need to create volumes and map them to SDCs using the command line…
* For example, I’ll create 4 24GB volumes and map each of them to all 4 SDCs below:
cd /d C:\Progra~1\EMC\scaleio\mdm\bin
cli --login --username admin --password ScaleIO1
cli --add_volume --protection_domain_name PD1 --storage_pool_name SP1 --size_gb 24 --volume_name VOL1
cli --add_volume --protection_domain_name PD1 --storage_pool_name SP1 --size_gb 24 --volume_name VOL2
cli --add_volume --protection_domain_name PD1 --storage_pool_name SP1 --size_gb 24 --volume_name VOL3
cli --add_volume --protection_domain_name PD1 --storage_pool_name SP1 --size_gb 24 --volume_name VOL4
cli --map_volume_to_sdc --volume_name VOL1 --sdc_ip 192.168.50.51 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL1 --sdc_ip 192.168.50.52 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL1 --sdc_ip 192.168.50.53 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL1 --sdc_ip 192.168.50.54 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL2 --sdc_ip 192.168.50.51 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL2 --sdc_ip 192.168.50.52 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL2 --sdc_ip 192.168.50.53 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL2 --sdc_ip 192.168.50.54 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL3 --sdc_ip 192.168.50.51 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL3 --sdc_ip 192.168.50.52 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL3 --sdc_ip 192.168.50.53 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL3 --sdc_ip 192.168.50.54 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL4 --sdc_ip 192.168.50.51 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL4 --sdc_ip 192.168.50.52 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL4 --sdc_ip 192.168.50.53 --allow_multi_map
cli --map_volume_to_sdc --volume_name VOL4 --sdc_ip 192.168.50.54 --allow_multi_map

* You can then go to each SDC and open diskmgmt.msc to bring those ScaleIO volumes online as disks and format them to start using them.  In order to find out which “Disk” is associated with each ScaleIO volume, right-click on the Disk and choose properties, then open the Details tab and select Device instance path from the drop-down.  The last part of the Device instance path value is the ScaleIO volume ID, which you can see via running “cli --query_all_volumes” on the MDM command line.


Since nslookup blows…this is the best method to install DIG on Windows (x64 – is there any other?) from the ISC BIND package…

  1. http://www.isc.org/downloads/
  2. Download the 64-bit BIND package
    note: it is dependent on the Microsoft Visual C++ Redistributable package, which is included in the package or can be downloaded from Microsoft http://www.microsoft.com/en-us/download/details.aspx?id=30679#
  3. Extract the zip file that you just downloaded
  4. Copy dig.exe, host.exe, libbind9.dll, libdns.dll, libeay32.dll, libisc.dll, libisccfg.dll, liblwres.dll, and libxml2.dll from the extracted zip to %systemroot%\system32
    note: you will need to provide admin creds or do so using an admin shell
  5. Get out of your seat (assuming you’re seated) and do a happy dance!
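
Once the files are in system32, a quick sanity check from any command prompt (any resolvable name will do):

dig www.isc.org A
dig @8.8.8.8 www.isc.org MX +short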

After fighting a bit with EMC ScaleIO to set up a demo on VMware, I decided there has to be an easier and more cost-effective way to implement fault-tolerant software-defined storage for VMware vSphere using existing open source packages.  I have to admit I’m a fan of RHEL-based solutions, so a quick Google search for suitable packages quickly revealed GlusterFS.  So, here’s a quick procedure that I followed to deploy in my 3-host vSphere 5.5 home lab – your mileage may vary and your subnets will likely be different too…  My basic setup is pretty simple – I have 3 ESXi 5.5 hosts with multiple NICs that are connected to trunk ports on a gig switch.  Each ESXi host boots from a 128GB SSD, which also hosts a small local datastore for ISO images and temporary storage – along with host cache when I was playing around with that.  In addition, each host has local HDD storage (1 with a 900 GB RAID 5 and the other 2 with 3 x 1 TB non-RAID SATA drives each) that is primarily unused because it’s not sufficiently fault-tolerant, but that’s why I’m implementing this solution.  So, without further ado…

Some basic setup info…

  • If you don’t already have one, create a CentOS 6 (x86_64) or 7 template…I’m using CentOS 6, so if you’re using CentOS 7, you’ll need to make some adjustments…
  • Make sure you have a local datastore on each ESXi host to which you’ll deploy from that template, preferably on a local SSD and not on the same local datastore that you’ll be leveraging for the shared storage…
  • For the sake of simplicity (this is a home lab after all), I’m going to use my regular lab subnet (192.168.0.0/24) for internetworking and a standard vSwitch on each host on a dedicated subnet (172.16.1.0/29, 172.16.1.8/29 and 172.16.1.16/29) for the NFS connection to the local SVM.  I used non-overlapping subnets just in case I ever decide to add an uplink.  You could just as easily use the same subnet on each host if you want…
  • You’ll also need a Virtual Machine network on that vSwitch in order to connect the SVM to the host; I named mine SVM.
  • I’ve disabled the firewall and selinux in my templates – I have plenty of network security on my border, so no need to stifle the OS with that responsibility…
  • Obviously, start with an updated OS via yum -y update too…

First things first…

  • Create a standard vSwitch that is not attached to a physical uplink (this is a host-only network, so no need for uplink) on each host and name the associated VMkernel Port something appropriate – I used HostNFS – and assign the appropriate IP address – I used 172.16.1.1/255.255.255.248 on esxi1, 172.16.1.9/255.255.255.248 on esxi2 and 172.16.1.17/255.255.255.248 on esxi3.
  • Deploy a storage VM from your template to the local storage on each ESXi host – I’m deploying them with 1 CPU, 1 GB RAM, a thin-provisioned 8 GB (overkill) vmdk and 2 vNICs – one connected to my 192.168.0.0/24 subnet (for mgmt, Internet and GlusterFS replication/synchronization) and one connected to the SVM virtual network.  Also, I’m naming my VMs by the host and function – so esxi1-svm, esxi2-svm and esxi3-svm (for future reference).
  • Power on each SVM and assign an IP address to the NICs as appropriate – mine are 192.168.0.31, 32 and 33 for the external NIC and 172.16.1.2, 172.16.1.10 and 172.16.1.18 for the HostNFS connectivity.
  • Install the EPEL repo
    yum -y install http://mirror.umd.edu/fedora/epel/6/x86_64/epel-release-6-8.noarch.rpm
  • Enable the GlusterFS repo
    yum -y install wget
    cd /etc/yum.repos.d
    wget http://download.gluster.org/pub/gluster/glusterfs/LATEST/CentOS/glusterfs-epel.repo
  • Install some packages
    yum -y install glusterfs-server ntp
  • Configure NTP – I point to 0/1/2/3.us.pool.ntp.org, but you can choose to point wherever you want for time services
  • Enable glusterd and ntp
    chkconfig ntpd on
    chkconfig glusterd on
  • REBOOT
  • To get the servers talking glusterfs, on esxi1-svm
    gluster peer probe 192.168.0.32
    gluster peer probe 192.168.0.33
  • To verify they’re talking to each other, on each host
    gluster peer status
  • Add virtual disks to the SVMs or configure passthrough for a dedicated controller, then create a partition and file system on that partition – I created a 512 GB vmdk on each SVM, then a primary partition of max size and ext4 file system within…
  • Turn off auto check: tune2fs -c 0 /dev/sdb1
  • Make a directory for GlusterFS brick and GlusterFS volume – I created /gfs within which I created b1 for the first GlusterFS brick: mkdir -p /gfs/b1
  • Mount new file system: mount /dev/sdb1 /gfs/b1
  • Update /etc/fstab accordingly:
    /dev/sdb1 /gfs/b1 ext4 defaults 1 2
  • Create and start a new GlusterFS volume:
    gluster volume create v1 replica 3 transport tcp 192.168.0.31:/gfs/b1/v1 192.168.0.32:/gfs/b1/v1 192.168.0.33:/gfs/b1/v1
    gluster volume start v1
  • Set the quorum count for the volume: gluster volume set v1 cluster.quorum-count 2
  • Check the volume status: gluster volume info
  • Allow NFS connectivity to the GlusterFS volume from each ESXi host to each SVM:
    gluster volume set v1 nfs.rpc-auth-allow 172.16.1.1,172.16.1.9,172.16.1.17
  • Mount the new NFS export of the GlusterFS volume on each ESXi host (see the esxcli sketch after this list):
    On esxi1, mount 172.16.1.2:/v1 to esxi1-gfs-b1-v1
    On esxi2, mount 172.16.1.10:/v1 to esxi2-gfs-b1-v1
    On esxi3, mount 172.16.1.18:/v1 to esxi3-gfs-b1-v1
  • Have fun!
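
For that mount step, the vSphere Client works fine, but it can also be done from each host’s shell; a minimal sketch assuming esxcli on ESXi 5.5 and the datastore names above:

On esxi1: esxcli storage nfs add --host=172.16.1.2 --share=/v1 --volume-name=esxi1-gfs-b1-v1
On esxi2: esxcli storage nfs add --host=172.16.1.10 --share=/v1 --volume-name=esxi2-gfs-b1-v1
On esxi3: esxcli storage nfs add --host=172.16.1.18 --share=/v1 --volume-name=esxi3-gfs-b1-v1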

BRO on CentOS 6.6

  • Install CentOS 6.6 minimal
  • chkconfig iptables off
  • chkconfig ip6tables off
  • Disable selinux and reboot
  • yum -y install http://mirror.umd.edu/fedora/epel/6/i386/epel-release-6-8.noarch.rpm
  • yum -y update
  • yum -y install bison cmake file-devel flex gcc gcc-c++ libpcap-devel libunwind make openssl-devel python-devel swig wget zlib-devel
  • yum -y install gperftools GeoIP GeoIP-update GeoIP-update6 xerces-c xqilla
  • cd /usr/local/src
  • wget http://www.bro.org/downloads/release/bro-2.3.2.tar.gz
  • tar xvzf bro-2.3.2.tar.gz
  • cd bro*
  • ./configure --enable-debug --enable-perftools --prefix=/usr/local/bro
  • make
  • make install
  • export PATH=/usr/local/bro/bin:$PATH
  • broctl
  • install
  • start
  • exit

Done…https://www.bro.org/sphinx/quickstart/index.html…
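
If broctl picks the wrong capture interface or doesn’t know your local networks, the files to tweak are under /usr/local/bro/etc (a sketch, assuming eth0 and my 192.168.0.0/24 lab subnet; rerun install and restart inside broctl afterwards):

node.cfg:
[bro]
type=standalone
host=localhost
interface=eth0

networks.cfg:
192.168.0.0/24    Home lab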

  1. Install minimal install of CentOS 6 – either i386 or x86_64 and either virtual or native
  2. Login to root console
  3. Verify and/or configure network access – in minimal install, one must actively enable network interfaces via “ifup”
  4. Install prerequisite packages identified on www.ipxe.org
    # yum install git gcc binutils make perl syslinux genisoimage
  5. Install packages to write ISO contents to a CD
    # yum install cdrecord
  6. Retrieve the iPXE source
    # git clone git://git.ipxe.org/ipxe.git
    # cd ipxe/src
    # make
  7. Troubleshoot and resolve any errors to achieve a successful build before continuing
  8. Create the embedded script (see www.ipxe.org/embed for more info) by entering the following as the contents of boot.ipxe in the ipxe/src directory using your preferred text editor (e.g. vi boot.ipxe):
    #!ipxe
    # The next 2 lines continuously attempt to retrieve an IP assignment
    :retry_dhcp
    dhcp || goto retry_dhcp
    # Attempt to evaluate first iPXE file, then the second, then the third
    # || = “or” and prevents the script from exiting at the first error
    chain tftp://<ip_address_of_source1>/<name_of_ipxe_file1> ||
    chain tftp://<ip_address_of_source2>/<name_of_ipxe_file1> ||
    chain tftp://<ip_address_of_source3>/<name_of_ipxe_file1>

    e.g.
    #!ipxe
    :retry_dhcp
    dhcp || goto retry_dhcp
    chain tftp://192.168.0.200/boot.ipxe ||
    chain tftp://172.16.20.200/boot.ipxe ||
    chain tftp://10.10.10.200/boot.ipxe
     

    This iPXE script will continuously attempt to retrieve an IP address from a DHCP server.  Once it receives an IP assignment, it will proceed to attempt to load the specified iPXE file (boot.ipxe in the example above) from each of the sources in the order entered.  This allows the use of a single boot CD across several network segments.  It is important to note that the double pipes (vertical bars) at the end of the first two “chain” lines are necessary in order to prevent the iPXE script from exiting on the first error – which would occur if the file couldn’t be retrieved from 192.168.0.200 in the example.

  9. Compile the changes into the bin/ipxe.iso that we’ll later use as the source for the boot CD
    # make bin/ipxe.iso EMBED=boot.ipxe
  10. Identify your CD recording device
    # cdrecord -scanbus
    Note the 3 comma-delimited numbers in the left-most column of the line in which the CD recording device is identified (e.g. 1,0,0)
  11. Place a blank CD in the CD recording device and record the ISO to the blank CD
    # cdrecord -v dev=1,0,0 bin/ipxe.iso
  12. The embedded script (boot.ipxe) can be altered again, then recompiled and recorded to CD by editing boot.ipxe and repeating steps 9-11.
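
For reference, the boot.ipxe that lives on the TFTP server (the file the embedded script chains to) is just another iPXE script.  A purely illustrative example with hypothetical kernel/initrd paths might look like this:

    #!ipxe
    kernel tftp://192.168.0.200/centos6/vmlinuz
    initrd tftp://192.168.0.200/centos6/initrd.img
    boot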

I recently posted the following in a forum topic comparing VMware ESXi and Microsoft Hyper-V hypervisors and thought I’d slap it here for posterity:

Hyper-V does not run as a service on top of Windows – which would be a Type 2 hypervisor, rather it is a Type 1 hypervisor (just like ESXi) that runs natively on the hardware.  If one has installed the GUI/Core version of Windows Server 2008 R2 or 2012 (instead of the Hyper-V Server 2008/2012 products), then once the Hyper-V role is enabled, that OS is “virtualized” into the Parent Partition as a part of that process (one of the reasons for the required reboot).  This Parent Partition is a virtual container that must go through the hypervisor (Hyper-V) to access the hardware, but it is not visible in the Hyper-V Manager or SCVMM because it’s not a Child Partition (see http://en.wikipedia.org/wiki/File:Hyper-V.png).  The attack surface for Hyper-V itself is minimal – similar to ESXi – and more likely penetrated via vulnerabilities introduced through misconfiguration – similar to ESXi.  In addition, the attack surface of the OS in the Parent Partition can be sufficiently limited through the use of the Hyper-V Server 2008/2012 products or the Core install of Windows Server 2008 R2/2012.  If the environment is managed through SCVMM (an all-inclusive management tool – similar to vCenter w/ added features) and PowerShell, then there is no need for a GUI at the Hyper-V host level – in fact, one might argue that it’s irresponsible to install a GUI at the Hyper-V host level due to the increased resource utilization (minimal, but there), attack surface and management (patching) requirement.

As for the comparison of the 2 major players (ESXi and Hyper-V), they each have their pros and cons that must be weighed against the requirements of each target environment to settle on the best product. To me, it’s a bit like choosing a mode of transportation; you wouldn’t want a Ferrari in Antarctica or a snowmobile in Miami.  While each has its own pros/cons, the environment dictates the better choice…  😉

However, I will say that, in my opinion, Microsoft will likely penetrate the hypervisor market by leaps and bounds over the next several years if only because there are decidedly more Windows-centric environments than any other OS, there’s more Microsoft-skilled labor out there, the price comparison is currently lopsided in their direction, and they offer an all-inclusive cloud-enabling management tool (SCVMM), cluster-aware updating via WSUS/SCCM, and the flexibility/performance of SMB 3 for Hyper-V clustering.  One of the primary reasons I think Microsoft’s penetration of the market won’t be swift and overpowering (i.e. taking several years) is that VMware has been a leader for quite a long time, which means it will likely take herculean efforts to convince VMware professionals to consider the alternative (bottom-up) and to overcome the existing investment in and ROI projections for licensing and hardware (NFS-based SAN storage) in environments with a significant VMware footprint (top-down).  But those speed bumps will likely be flattened as time passes and Microsoft continues to strengthen the Hyper-V solution, unless VMware comes off the ropes swinging with some serious R&D (technical) and a new pricing model (financial).

While I think the Hyper-V solution is compelling – especially considering the advancements in 2012, I also think that VMware vSphere is a solid/reliable product with a proven track record and a well-trained, loyal workforce out there.  Also, vSphere just makes better sense in some environments – as I said initially, each solution (and others – Xen, KVM, QEMU, VirtualBox, etc.) has its pros and cons that need to be considered and weighed against the requirements of each environment.

Additional MS References:
Hyper-V Architecture Poster: http://www.microsoft.com/en-us/download/details.aspx?id=29189
MS Virtualization Team Blog: http://blogs.technet.com/b/virtualization/
MS System Center Blog: http://blogs.technet.com/b/systemcenter/
MS Storage Team Blog: http://blogs.technet.com/b/filecab/
MS Jose Barreto’s Blog: http://blogs.technet.com/b/josebda/
MS Edge on Channel 9: http://channel9.msdn.com/Shows/Edge

If you’re like me and you sometimes do things rather hastily in your home lab (NEVER IN PRODUCTION THOUGH), then you might have found yourself with an orphaned vCenter Server in your SCVMM 2012 console after moving away from VMware’s vSphere hypervisor (for whatever valid reason – choose from many).  Well, you’re in luck – well, kinda!  If you’re comfortable with SQL Server, then you’re in luck!  Otherwise, I’d recommend you steer clear of this solution and lean on Microsoft for theirs (http://support.microsoft.com/kb/2730029).  If the former, then strap on your SQL Server Management Studio, point it at your SCVMM database, and execute the following query (replacing <servername> with the name of your orphaned vCenter Server):

DECLARE @computername varchar(255)
SET @computername = '<servername>'
DELETE FROM [tbl_ADHC_VmwResourcePool] WHERE [HostID] = (select top 1 HostID from tbl_ADHC_Host where ComputerName = @computername)
DELETE FROM [tbl_ADHC_AgentServerRelation] WHERE AgentServerID = (select top 1 AgentServerID from tbl_ADHC_AgentServer where Computername = @computername)
DELETE FROM [tbl_ADHC_AgentServer] WHERE AgentServerID = (select top 1 AgentServerID from tbl_ADHC_AgentServer where Computername = @computername)
DELETE FROM [tbl_ADHC_Host] WHERE [HostID] = (select top 1 HostID from tbl_ADHC_Host where ComputerName = @computername)

 

Voila!
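
Tip: if you don’t remember the exact name SCVMM recorded for the orphaned vCenter Server, a quick read-only query against the same database will show it before you run the deletes above:

SELECT HostID, ComputerName FROM tbl_ADHC_Host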
