Server Stuff

Manage your Stack with WAC and OpenManage

In a previous post, I briefly mentioned managing your Azure Stack HCI with Windows Admin Center (WAC). When you are using Dell hardware, it’s even better with OpenManage!

Imagine being able to securely control your server stack from any browser. Even if you don’t allow access from outside the network, imagine how nice it would be to have a ‘single pane of glass’ to manage most of your day-to-day tasks. Now imagine that it’s free…

Setting up

First, you have a decision to make. Do you want to run WAC from your PC or as a service on a server? I like to run the latest General Availability release on a server for our whole team to use, and the latest Insider build on my laptop for testing and comparison.

So how do you get it? Simply go to the Windows Admin Center website and download the MSI file. It’s the same installer whether you are installing on Windows 10 or Windows Server, so that’s easy.

Before you install, you have to decide on the ‘type’ of install you want. There are four main types of installation, ranging from the super simple ‘Local Client’ install all the way up to a failover cluster of management servers. Here are the differences.

First is the simplest of installations – the Local Client. You simply install on your Windows 10 PC, which needs network connectivity to all of the servers to be managed. This is fine for small, single-administrator environments or, as I noted above, for testing Insider builds.

Next is the ‘Gateway’ version. It’s installed on a designated Windows Server 2016 or higher machine and runs as a service. This is the best option for access by a small team of admins.

Our third option is worth noting because it is a supported and documented configuration. It’s called ‘Managed Server’. In this scenario, you install the software directly on one of the servers to be managed. I don’t recommend it unless you have some reason not to create a dedicated “gateway” VM. Of course, I’m a fan of single-purpose servers, so your situation may vary.

The fourth and final option for a LAN installation is a failover cluster. Truthfully, if you are in a large environment, this is certainly the way to go. It gives you high availability and reliability for your management tools. In a small or medium business, at this time, it’s a little overkill.

My deployment is a bit of a hybrid. I’m running a gateway server on a VM sitting atop an Azure Stack HCI cluster for the team to use. So it’s highly available enough for our needs.
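
If you go the gateway route, the MSI also supports a quiet, scripted install. Here’s a minimal sketch – the log file name, port 443 and the self-signed certificate option are just example choices of mine; check Microsoft’s install documentation for the full parameter list:

    # Run from an elevated prompt on the gateway server; WindowsAdminCenter.msi
    # is assumed to be sitting in the current folder after download.
    msiexec /i WindowsAdminCenter.msi /qn /L*v wac-install.log SME_PORT=443 SSL_CERTIFICATE_OPTION=generate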

One of the big advantages of WAC over traditional management tools is its extensibility. There is a Windows Admin Center SDK along with examples on GitHub if you want to write your own. However, there are several ready-made extensions provided by Microsoft and a few hardware vendors. Microsoft’s extensions cover things like DNS, DHCP and Active Directory. The real benefit is if your hardware vendor provides an extension.

For example – as noted in my Azure Stack HCI post – I’m using Dell hardware. The Dell EMC OpenManage extension lets me use Windows Admin Center to manage not only my software environment but my server hardware as well! Disk health, temperatures, fan speeds, and more, all in one handy dashboard.

Azure Stack with Only Two Servers

Ok, so that title is a teeny bit of an overstatement. What we’re going to discuss today is not the Azure Stack you have heard so much about. No, today we’re talking about an Azure Stack HCI Cluster.

Essentially, it’s a Storage Spaces Direct cluster with brand new shiny marketing and management. While you can use the old “*.msc” tools, it’s a far, far better thing to use the new Windows Admin Center (WAC)! More on that soon.

I’m not going to dig into the details of how S2D works or the fine details of building clusters large and small. Instead, I want to share with you some of the reasons why small and medium businesses need to pay attention to this technology pronto.

  1. Scalability: Sure, most of the documentation you find today focuses on building clusters of 4-6 nodes or more. That’s great unless you are a small business that doesn’t need that kind of compute and storage. That’s where the two-node “back to back” cluster comes in. Think “entry-level”. The beauty of this solution is scalability. If you do outgrow this, you buy a new node, possibly a new network switch and you are up to a 3-node cluster!
  2. Compact: I had two old servers that took up two Rack Units (RU) of space each plus an external drive array that took another 2 RU. That totaled up to 6 Rack Units or 10.5 inches of rack space. The new servers that replaced them are 1 Rack Unit each for a total of 3.5 inches! That doesn’t even touch on noise, heat and power consumption.
  3. Fast and easy: Ok yes, it is a complicated technology. However, you can follow Dell’s Deployment Guide for 2-Node hyper-converged back-to-back connected infrastructure with R640 Storage Spaces Direct Ready Node and you’ll be 90% of the way there. It’s a mouthful, but it’s 40 pages of step-by-step instructions and checklists. *TIP* I’ve included some tips below for a couple of places that aren’t super clear (at least not to me).
  4. Well Documented: If you are like me and want to understand how this all works before you bet your mission-critical VMs on it, there is a ton of good information out there. Here are some options depending on your preferred method.

    • The Book: Dave Kawula’s Master Storage Spaces Direct. It’s over 300 pages of detailed explanation for a very reasonable price. Although you can get it for free, those people spent a lot of time working on it, so pay something. You’ll see what I mean on the Leanpub site.
    • The Video: Pluralsight’s Greg Shields has a course on Storage Spaces Direct that is 4 hours of in-depth instruction on Windows Failover Clusters and Storage Spaces Direct in Windows Server 2019. If you aren’t a subscriber to Pluralsight, they offer trial periods!
    • The ‘Official’ Documentation: Microsoft’s Storage Spaces Direct Overview is the place for the official documentation.

There are a few tips and gotchas that I want to share from my experience.

First, the hardware. These servers aren’t cripplingly expensive, but they certainly aren’t disposably cheap either. This means there are a lot of critical hardware decisions to make. What’s more important to your organization – budget, speed, or a balance? On the one end, you can go all-flash storage, which will give you a fast system, but the price tag goes up fast too. The less expensive but slightly more complicated setup is a hybrid of SSD and HDD storage. Making certain that you have the right mix of memory, storage and network adapters can be a daunting task.

Honing a specification and shopping it around to get the perfect balance of form, function, and price is great if you are a hobbyist or plan on building out a ton of these at one time.

However, for IT admins in most small companies, the more important thing is that it gets done quickly, quietly and correctly. The business owners don’t care about the fine nuances. They want to start realizing the business benefits of the new technology.

I chose a much more economical option, both in time and cash. I searched Dell’s website for “Microsoft Storage Spaces Direct Ready Nodes” and picked out a pair of their “Ready Nodes” that looked like they matched my budget and needs.

Then it was a small matter of reaching out to my Dell team. My sales rep put me in touch with their server/storage specialist. He asked a couple of questions about workloads, storage, and networking. Presto, we had our servers on order.

*Pro-tip* Buy the fiber patch cables at the same time. They are no less expensive elsewhere and you have less chance of getting crap cables.

If you don’t already have a relationship with Dell, there are several other Microsoft certified hardware vendors. There is a list here: http://bit.ly/S2D2NodeOptimized

Tips for the build

You’ve only got 2 servers to configure, so typing each command is feasible. However, in the interest of both documenting what I did and saving on silly typos, I opened up VS Code and wrote up all the PowerShell commands described in the Dell docs (a rough sketch of those commands follows the step list below).

The steps (much simplified) look something like:

  1. Install the OS if not preinstalled, then patch to current.
  2. Plan and Map out IP networking. Use the checklists in the back of the Dell guide to map out your IP scheme. It’s a time saver!
  3. Pre-stage the Active Directory accounts for the Cluster as documented here. Trust me, it’s faster to do it on the front side than after the fact.
  4. Install Windows Features on each node
  5. Virtual Switch and Network configuration on each node
  6. Test-Cluster and Review/Remediation
  7. Create Cluster – with no storage
  8. Enable Storage Spaces Direct on the cluster.
  9. Configure a witness – either another server on the site or a cloud witness.
  10. Create virtual disks – creating the cluster and enabling Storage Spaces Direct only creates a storage pool; it does not provision any virtual disks in that pool.
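
As promised, here’s a heavily abbreviated sketch of roughly what steps 4 through 10 look like in PowerShell. The node names, NIC names, IP address, Azure storage account and volume size are all placeholders of mine, not values from the Dell guide – follow the guide itself for the real commands, RDMA settings and ordering.

    # Rough sketch only – every name, address and size below is a placeholder.
    $nodes = 'S2D-NODE1', 'S2D-NODE2'

    # Step 4: roles and features on each node
    Invoke-Command -ComputerName $nodes -ScriptBlock {
        Install-WindowsFeature -Name Hyper-V, Failover-Clustering, FS-FileServer -IncludeManagementTools -Restart
    }

    # Step 5: a Switch Embedded Teaming (SET) virtual switch on each node
    Invoke-Command -ComputerName $nodes -ScriptBlock {
        New-VMSwitch -Name 'ManagementSwitch' -NetAdapterName 'NIC1', 'NIC2' -EnableEmbeddedTeaming $true -AllowManagementOS $true
    }

    # Steps 6-8: validate, create the cluster with no storage, then enable S2D
    Test-Cluster -Node $nodes -Include 'Storage Spaces Direct', 'Inventory', 'Network', 'System Configuration'
    New-Cluster -Name 'S2D-CLUSTER' -Node $nodes -StaticAddress '192.168.1.50' -NoStorage
    Enable-ClusterStorageSpacesDirect -CimSession 'S2D-CLUSTER'

    # Step 9: a cloud witness (a file share witness on another server works too)
    Set-ClusterQuorum -Cluster 'S2D-CLUSTER' -CloudWitness -AccountName '<storageaccount>' -AccessKey '<key>'

    # Step 10: carve a volume out of the pool that Enable-ClusterStorageSpacesDirect created
    New-Volume -CimSession 'S2D-CLUSTER' -FriendlyName 'Volume01' -StoragePoolFriendlyName 'S2D*' -FileSystem CSVFS_ReFS -Size 2TB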

Next time, we’ll go into depth on the management of our newly built Azure Stack HCI setup!

Windows 2016 and DSC – Like Peanut Butter and Chocolate

We’ve all heard about DSC, right? Sure you have. Maybe you’ve been playing around a bit in labs or using it for test environments. Why haven’t we all taken the plunge to Infrastructure as Code (or, more accurately, Infrastructure from Code)? Because it’s hard, and we’re busy?

Likely, it’s because we don’t have the opportunity to go all ‘greenfield’ in our daily jobs. Most of us live in the depressingly named ‘brownfields.’ We have servers already, and we have workloads on them of varying importance to our companies. We can’t just rip everything out and replace it! Oh, but wait! Most of those machines are running older versions of Windows Server or have old hardware. We’re going to need to do an upgrade/replacement plan anyway.

I know a lot of us have those old servers that run some important job. Maybe we virtualized when the hardware broke, but otherwise, they are still on 2008 or, god forbid, 2003. We can’t manage them with the latest tools because they have an old version of PowerShell (if any at all). We want to get rid of them, but what a daunting task!

Why not combine these two tasks? Two birds, one stone and all that. Build out a pull server, and rebuild your infrastructure the “Modern” way. Don’t upgrade those VMs; construct new ones and shift the workload over. That ensures the cleanest installation and configuration. Since you’re configuring them from scratch, why not do it with DSC?

Here’s what I’m in the middle of right now: I have several physical servers at or near end of life. I have a few 2008 servers still lingering. I want to get all my servers on 2016 to take advantage of several newer technologies. To make our Hyper-V hosts more efficient, I want to move as much as possible to Server Core. I had used DSC for several small servers, but not in a truly “production” manner. Time to upgrade!

Here’s my plan in broad terms:

  1. Build out a pair of new load-balanced, secure Pull Servers – using DSC Push Mode.
  2. From there, make a BASE configuration shared by all servers and inject that MOF into the image I’m using to build new VMs. This base config contains things like domain join, setting up the LCM with where to look for the pull server, network setup, etc. (see the sketch after this list).
  3. Create configurations for the server Archetypes – File Server – Web Server – App Server – Backup Server – Domain Controller – etc.
  4. Write some basic Pester tests to verify that the configurations are doing what I expect.
  5. Start standing up servers, pushing configs and testing. Once tests pass…..
  6. Move to production mode!
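
To give you a flavor of item 2, here’s a cut-down sketch of the LCM meta-configuration that would point a new server at the pull servers. The URL, registration key and configuration name are placeholders, not my production values:

    # Sketch only – ServerURL, RegistrationKey and ConfigurationNames are placeholders.
    [DSCLocalConfigurationManager()]
    configuration BaseLcm {
        Node 'localhost' {
            Settings {
                RefreshMode        = 'Pull'
                ConfigurationMode  = 'ApplyAndAutoCorrect'
                RebootNodeIfNeeded = $true
            }
            ConfigurationRepositoryWeb PullServer {
                ServerURL          = 'https://pull.contoso.local:8080/PSDSCPullServer.svc'
                RegistrationKey    = '<registration-key-guid>'
                ConfigurationNames = @('Base')
            }
        }
    }

    # Compile the meta-MOF and apply it to the local LCM
    BaseLcm -OutputPath .\BaseLcm
    Set-DscLocalConfigurationManager -Path .\BaseLcm -Verbose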

At some point, I plan to move the whole shooting match from GitHub to Visual Studio Team Services for source control and test beds. It would be nice to be able to apply a MOF file to a VM in Azure, run Pester tests, and upon a full pass, have it deploy that new MOF to the on-site pull servers. But that’s a learning curve for another day!

TestLab v2 – The aborted build

If you missed the first two parts to this, start here and continue here…

<SNIP!>

So the reason for the long delay in finishing this is some hardware problems with my test server. What was going to work fine for a 2012 server doesn’t work for crap in 2016.

The problem is the CPU. The old server I had planned on using as a lab does NOT have a SLAT-capable chip. Since that’s a requirement for 2016 Hyper-V, it’s kind of a show stopper.
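
If you want to check your own hardware before you get this far, here’s a quick sketch of how I’d look for SLAT support. Run it on a box that doesn’t already have the Hyper-V role installed, since the requirements section is hidden once the hypervisor is running:

    # Look for the 'Second Level Address Translation' line under Hyper-V Requirements
    systeminfo | Select-String 'Second Level Address Translation'

    # Or, with PowerShell 5.1 on Windows 10 / Server 2016 and later
    (Get-ComputerInfo).HyperVRequirementSecondLevelAddressTranslation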

However – all is not lost! Jason Helmick and Melissa Januszko cooked up a PowerShell automated lab environment that uses Virtual Engine’s Lability module to easily stand up a lab environment on any Windows 10 machine. You don’t even have to manually download the ISO files for the OS install. Now I can very easily stand up or tear down a lab with little fuss.
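
For the curious, the broad strokes of the Lability workflow look roughly like this – the configuration data file name is a placeholder of mine, and the exact configuration data format is covered in the module’s documentation:

    # Rough sketch; TestLab.psd1 is a placeholder for your DSC configuration data file
    Install-Module -Name Lability -Scope CurrentUser

    # One-time host prep: creates the lab folder structure and checks Hyper-V
    Start-LabHostConfiguration

    # Stage the lab VMs described in the configuration data, then start them
    Start-LabConfiguration -ConfigurationData .\TestLab.psd1
    Start-Lab -ConfigurationData .\TestLab.psd1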

So with the lab situation handled, I’m moving on!

My goals this year are to get better with DSC and Pester testing, and to complete a build pipeline for work. Let’s see how it goes…..

Test Lab v2 – the Planning Stage

(For the introduction post : Rebuilding the Test Lab v2 )

Before getting into the actual planning let me describe the environment and restrictions.

  • Hardware
    • Dell 2950 w/ 2 Xeon 3 GHz CPUs, 32 GB RAM, 12 TB direct-attached JBOD
  • Software
    • Windows 2012 R2 (full install), Hyper-V role, with Windows Deduplication running on the JBOD to keep file sizes on VHDs down. The license is through our company MSDN account, permissible as this is for testing and development.
    • PowerShell v5 is installed
  • Network
    • Dual gigabit ethernet adapters connected to the company LAN.
  • Restrictions
    • As a “guest” on the company network, I have to be very careful to isolate traffic in and out of my test environment. I’ll use a VyOS VM router to do this.
    • I have no System Center VMM, no VMware, just plain vanilla out of the box Windows.

 

Alright, so with our tools laid out, let’s talk about goals. What do I want to be able to develop and test on this box? What’s that going to take? I’ve got to keep this simple or I’ll go down a rabbit hole of setting up the setup of the environment and end up bogged down in minutiae. That may come later, but for now – simple wins over cool new stuff.

Goal 1: Learning to work in more of a DevOps kind of environment, with source control and a development pipeline for my PowerShell-based tools. For this we’ll need TWO virtual subnets – one for Dev and one for Test. Since there will only be one or two people on this at a time, I can build this all on the same box for now. Later, when this process becomes more mainstream, it won’t be difficult to rebuild the infrastructure on a production box.

Goal 2: Build as much as possible with DSC – within reason. This is that rabbit hole I mentioned above. True, you can build out a full network from scratch with DSC and a server WIM, but I’ve never done that, and in the interest of getting stuff running right now I’m going a more old-school route: build a “Base” server in each subnet that is a multifunction server. It’ll be a domain controller with DHCP, DNS, Windows Deployment Services and a DSC pull server. From THERE I can work on things that I’m either rusty or inexperienced on. Walk before run before fly and all that good jazz.
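
As a rough sketch, standing up that multi-role “Base” server starts with something like the following – the domain name is a placeholder, and you’d still configure the DHCP scopes, WDS and the pull server endpoint afterward:

    # Sketch only – 'testlab.local' is a placeholder domain for the isolated lab
    Install-WindowsFeature -Name AD-Domain-Services, DNS, DHCP, WDS, DSC-Service -IncludeManagementTools

    # Promote to the first (and only) DC of a brand-new test forest
    Install-ADDSForest -DomainName 'testlab.local' -SafeModeAdministratorPassword (Read-Host -AsSecureString 'DSRM password') -InstallDns -Force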

I might add a goal 3 later, but for now this is good. Let’s diagram this out so we can get that big-picture overview.

[Diagram: test lab overview]

Right. Next step: we build 3 VM switches, a VyosRouter VM and 2 “Base” servers.
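
Here’s roughly what that looks like in Hyper-V PowerShell – the switch names, VHD path and sizes are placeholders of mine:

    # Placeholder names and paths – adjust to your own environment.
    # One uplink to the company LAN for the router, two isolated subnets behind it.
    New-VMSwitch -Name 'Lab-External' -NetAdapterName 'Ethernet 1' -AllowManagementOS $true
    New-VMSwitch -Name 'Lab-Dev'  -SwitchType Private
    New-VMSwitch -Name 'Lab-Test' -SwitchType Private

    # The VyOS router VM gets a leg on all three switches
    New-VM -Name 'VyosRouter' -MemoryStartupBytes 1GB -Generation 1 -NewVHDPath 'D:\VMs\VyosRouter.vhdx' -NewVHDSizeBytes 10GB -SwitchName 'Lab-External'
    Add-VMNetworkAdapter -VMName 'VyosRouter' -SwitchName 'Lab-Dev'
    Add-VMNetworkAdapter -VMName 'VyosRouter' -SwitchName 'Lab-Test'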

See ya then!

 

Rebuilding the Test Lab v2

Last year I wrote an article for Powershell.org extolling the benefits of a home lab and how it didn’t cost much to build a basic one. You can read it here.

That lab has done me well, but things change, needs increase and opportunities arise. The need changing, obviously, is “I want to be able to run more VMs without sitting and listening to my disk thrash for minutes to start one”. The answer to that need is “buy more RAM or an SSD”, both of which have that nasty side effect of costing money. So I gritted my teeth and waited…

Fast forward, and now my work is decommissioning physical servers because they are no longer covered under a 4-hour service agreement – and also due to a ton of virtualization by yours truly. So there are a few functioning servers with older CPUs and smaller disks sitting idle…. yeah, right. Time for TestLab v2!

This time I’m doing things a little differently. First of all, obviously, I’m not building on a Windows 8/10 machine. Secondly, this box, while small by server standards, is a big improvement over my home PC. Also, I’m building this as a test lab for our team, so it’s a little more “official”. I am using their hardware and network, after all; I should share *grin*!

Now I’ve recognized a flaw in my process of “build, test, document”. Really it’s a side effect of my mild ADD and the hectic work pace I keep up. Once I’ve built it and solved those problems, and tested it and solved those problems, I kind of lose interest. There’s no more puzzle to solve, just homework to do. Bah.

So we’re going to try a NEW (to me) technique. I’m going to write AS I build it, typing during those install waits and reboots. I’m going to break this into a few parts: first this introduction, followed by a section on “pre-build planning”, one on the actual build, then a wrap-up “post-mortem” post.

Let’s see how this goes!