Packing up for Ignite!

Everyone has their own opinions about what to take when packing for a conference. Obviously, most conferences are smaller than Ignite, and that makes for a slightly different packing list.

Here are a few of the things on my checklist:

1. Two pairs of shoes. It seems silly, but I can vouch for the fact that swapping shoes every day makes a difference. I, for one, usually don’t walk 17,000 steps in a normal day. However, at Ignite, a 17k day is not unusual for me. Good, well-broken-in, comfortable shoes are invaluable.

2. Powerbanks. Those little USB power chargers will keep your phone juiced up all day. Power plugs can be scarce at times. Having a big battery to tide you over can be the difference between being able to tap out a quick note vs. having to write it down in a pad.

3. A notebook. As much as you try, you may run out of juice and need to take notes the old-fashioned way. I prefer the half-size spirals. Usually there are some vendors in the Expo/Hub giving away branded notepads as well.

4. Light jacket/windbreaker. The temperature can swing wildly from hot outside and warm in the halls to chilly in the rooms for breakouts. I like to take a light warmup jacket that will fit easily in my pack.

5. USB multiport charger. I have a little USB charger with 5 ports that plugs into a regular wall plug. This is for your hotel room to charge all of your stuff at the same time. There are never enough plugs accessible in hotel rooms.

6. Hand sanitizer. You will be shaking hands a lot. Nuff said.

7. Business cards. If your company does not provide them, most Office Depots will make you a set of 50 or 100 inexpensively and ready in a couple of hours.

8. Extra space. You will be bringing back t-shirts and other swag. Even if you don’t want the stuff, it’s nice to bring it back to your team. Either pack your suitcase only 2/3 full or bring an empty gym bag in your main suitcase. It’s amazing how many t-shirts can be stuffed in a decent-sized gym bag!

Of course, some of these seem obvious, but I personally have forgotten at least a few of these things! Trust me, it’s much cheaper to bring them than to buy them at the airport or hotel.

Other than that, eat and drink responsibly, stay hydrated, and have fun. If you tend to be a little introverted, step out of your comfort zone and introduce yourself to at least one other attendee per day. Vendors do not count since they are working at meeting YOU!

That about sums it up for now. I’m going to try to post more during the week, time permitting.

Powershell Tuesday Quick Tip #7

Back at it again!

This tip is a little snippet of code for building dynamic distribution lists based on something other than the usual examples like Organizational Units or branch offices. This one builds a list of managers.

How does one differentiate managers in Active Directory? Title? No, because at least in our organization the management team has a variety of titles: things like ‘Vice President’, ‘Director’, or ‘Controller’ in addition to the more mundane ‘Manager’. It’s further complicated by the fact that we refer to our outside sales force as ‘Territory Managers’. So the Title property is right out.

What is the one other property that defines a member of a leadership/management team? Yep – they have someone reporting to them. Enter the “Direct Reports” property!

Team that up with the New-DynamicDistributionGroup cmdlet and you get an example like this:

New-DynamicDistributionGroup -DisplayName 'My Company Managers' `
 -Name MCManagers `
 -PrimarySmtpAddress 'mcmanagers@mycompany.com' `
 -RecipientFilter { DirectReports -ne $null}

*Standard disclaimer about minding the backtick line continuations. This is actually a one-liner, but since the important part is at the end, this format is easier to read.

The result is a list of everyone in your organization who has someone reporting to them. Exactly what the HR folks want when they ask for a mailing list of all the managers!
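
If you want to sanity-check who the filter actually picks up before HR starts using the list, you can preview the membership. A quick sketch, assuming the group name from the example above:

# Preview which recipients currently match the dynamic group's filter
$ddg = Get-DynamicDistributionGroup -Identity 'MCManagers'
Get-Recipient -RecipientPreviewFilter $ddg.RecipientFilter | Select-Object DisplayName, Title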

Manage your Stack with WAC and OpenManage

In a previous post, I briefly mentioned managing your Azure Stack HCI with Windows Admin Center (WAC). When you are using Dell hardware, it’s even better with OpenManage!

Imagine being able to securely control your server stack from any browser. Even if you don’t allow access from outside the network, imagine how nice it would be to have a ‘single pane of glass’ to manage most of your day-to-day tasks. Now imagine that it’s free…

Setting up

First you have a decision to make. Do you want to run WAC from your PC or as a service on a server? I like to run the latest General Availability release on a server for our whole team to use, and the latest Insider build on my laptop for testing and comparison.

So how do you get it? Simply go to the Windows Admin Center website and download the MSI file. It’s the same installer whether you are installing on Windows 10 or Windows Server, so that’s easy.
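
If you are scripting the install on a server, the MSI can also be installed unattended. A minimal sketch, assuming the downloaded file is named WindowsAdminCenter.msi and you want a self-signed certificate on port 443:

# Unattended install: listen on 443 and generate a self-signed certificate
msiexec /i WindowsAdminCenter.msi /qn /L*v wac-install.log SME_PORT=443 SSL_CERTIFICATE_OPTION=generate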

Before you install, you obviously have to decide on the ‘type’ of install you want. There are four main types of installations, ranging from the super simple ‘Local Client’ install all the way up to a failover cluster of management servers. Here are the differences.

First is the simplest of installations – the Local Client. This is just WAC installed on your Windows 10 PC, which needs network connectivity to all of the servers to be managed. This is fine for small, single-administrator environments or, as I noted above, for testing Insider builds.

Next is the ‘Gateway’ version. It’s installed on a designated Windows Server 2016 or higher server and runs as a service. This is the best one for access by a small team of admins.

Our third option is worth noting as it is a supported and documented configuration. It’s called ‘Managed Server’. In this scenario, you install the software directly on one of the servers to be managed. I don’t recommend it unless you have some reason not to create a dedicated ‘gateway’ VM. Of course, I’m a fan of single-purpose servers, so your situation may vary.

The fourth and final option for a LAN installation is a failover cluster. Truthfully, if you are in a large environment, this is certainly the way to go. It gives you high availability and reliability for your management tools. In a small or medium business, at this time, it’s a little overkill.

My deployment is a bit of a hybrid. I’m running a gateway server on a VM sitting atop an Azure Stack HCI cluster for the team to use. So it’s highly available enough for our needs.

One of the big advantages of WAC over traditional management tools is the extensibility. There is a Windows Admin Center SDK along with examples on GitHub if you want to write your own. However, there are several ready-made extensions provided by Microsoft and a few hardware vendors. Microsoft extensions cover things like DNS, DHCP, and Active Directory. The real benefit is if your hardware vendor provides an extension.
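
Extensions are easiest to manage from the WAC settings page, but there is also an ExtensionTools Powershell module that ships with the gateway install. A rough sketch, assuming a gateway at https://wac.mycompany.com (the URL and extension ID below are placeholders – pick the real ID from the Get-Extension output):

# The module lives under the Windows Admin Center install directory
Import-Module "$env:ProgramFiles\Windows Admin Center\PowerShell\Modules\ExtensionTools"

# List the extensions the gateway knows about, then install one by its ID
Get-Extension 'https://wac.mycompany.com'
Install-Extension -GatewayEndpoint 'https://wac.mycompany.com' -ExtensionId 'vendor.extension-id-placeholder'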

For example – as noted in my Azure Stack HCI post – I’m using Dell hardware. The Dell EMC OpenManage extension lets me use Windows Admin Center to manage not only my software environment but my server hardware as well! Disk health, temperatures, fan speeds, and more, all in one handy dashboard.

Azure Stack with Only Two Servers

Ok, so that title is a teeny bit of an overstatement. What we’re going to discuss today is not the Azure Stack you have heard so much about. No, today we’re talking about an Azure Stack HCI Cluster.

Essentially, it’s a Storage Spaces Direct cluster with shiny new marketing and management. While you can still use the old “*.msc” tools, it’s a far, far better thing to use the new Windows Admin Center (WAC)! More on that soon.

I’m not going to dig into the details of how S2D works or the fine details of building clusters large and small. Instead, I want to share with you some of the reasons why small and medium businesses need to pay attention to this technology pronto.

  1. Scalability: Sure, most of the documentation you find today focuses on building clusters of 4-6 nodes or more. That’s great unless you are a small business that doesn’t need that kind of compute and storage. That’s where the two-node “back to back” cluster comes in. Think “entry-level”. The beauty of this solution is scalability. If you do outgrow this, you buy a new node, possibly a new network switch and you are up to a 3-node cluster!
  2. Compact: I had two old servers that took up two Rack Units (RU) of space each plus an external drive array that took another 2 RU. That totaled up to 6 Rack Units or 10.5 inches of rack space. The new servers that replaced them are 1 Rack Unit each for a total of 3.5 inches! That doesn’t even touch on noise, heat and power consumption.
  3. Fast and easy: Ok yes, it is a complicated technology. However, you can follow Dell’s Deployment Guide for 2-Node hyper-converged back-to-back connected infrastructure with R640 Storage Spaces Direct Ready Nodes and you’ll be 90% of the way there. It’s a mouthful, but it’s 40 pages of step-by-step instructions and checklists. *TIP* I’ve included some tips below for a couple of places that aren’t super clear (or at least weren’t to me).
  4. Well Documented: If you are like me and want to understand how this all works before you trust your mission-critical VMs to it, there is a ton of good information out there. Here are some options depending on your preferred method.

    • The Book: Dave Kawula’s Master Storage Spaces Direct. It’s over 300 pages of detailed explanation for a very reasonable price. Although you can get it for free, those people spent a lot of time working on it, so pay something. You’ll see what I mean on the Leanpub site.
    • The Video: Pluralsight’s Greg Shields has a course on Storage Spaces Direct with 4 hours of in-depth instruction on Windows Failover Clustering and Storage Spaces Direct in Windows Server 2019. If you aren’t a subscriber to Pluralsight, they offer trial periods!
    • The ‘Official’ Documentation: Microsoft’s Storage Spaces Direct Overview is the place for the official documentation.

There are a few tips and gotchas that I want to share from my experience.

First, the hardware. These servers aren’t cripplingly expensive, but they certainly aren’t disposable-cheap either. This means there are a lot of critical hardware decisions to make. What’s more important to your organization – budget, speed, or a balance? On one end, you can go all-flash storage, which will give you a fast system, but the price tag goes up fast too. The less expensive but slightly more complicated setup is a hybrid of SSD and HDD storage. Making certain that you have the right mix of memory, storage, and network adapters can be a daunting task.

Honing a specification and shopping it around to get the perfect balance of form, function, and price is great if you are a hobbyist or plan on building out a ton of these at one time.

However, for IT admins in most small companies, the more important thing is that it gets done quickly, quietly, and correctly. The business owners don’t care about the fine nuances. They want to start realizing the business benefits of the new technology.

I chose a much more economical option, both in time and cash. I searched Dell’s website for “Microsoft Storage Spaces Direct Ready Nodes” and picked out a pair of their “Ready Nodes” that looked like they matched my budget and needs.

Then it was a small matter of reaching out to my Dell team. My sales rep put me in touch with their server/storage specialist. He asked a couple of questions about workloads, storage, and networking. Presto, we had our servers on order.

*Pro-tip* Buy the fiber patch cables at the same time. They are no less expensive elsewhere and you have less chance of getting crap cables.

If you don’t already have a relationship with Dell, there are several other Microsoft certified hardware vendors. There is a list here: http://bit.ly/S2D2NodeOptimized

Tips for the build

You’ve only got 2 servers to configure, so typing each command is feasible. However, in the interest of both documenting what I did and saving on silly typos, I opened up VSCode and wrote up all the Powershell commands described in the Dell docs.

The steps (much simplified) look something like this, with a condensed Powershell sketch after the list:

  1. Install OS if not preinstalled and then patch to current.
  2. Plan and Map out IP networking. Use the checklists in the back of the Dell guide to map out your IP scheme. It’s a time saver!
  3. Pre-stage the Active Directory accounts for the Cluster as documented here. Trust me, it’s faster to do it on the front side than after the fact.
  4. Install the required Windows features on each node.
  5. Configure virtual switches and networking on each node.
  6. Run Test-Cluster and review/remediate any findings.
  7. Create the cluster – with no storage.
  8. Enable Storage Spaces Direct on the cluster.
  9. Configure a witness – either another server on the site or a cloud witness.
  10. Create virtual disks – Cluster creation and enabling Storage Spaces Direct on the cluster creates only a storage pool and does not provision any virtual disks in the storage pool.
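
For reference, here is a heavily condensed sketch of the Powershell behind steps 4 through 10. It is not a substitute for the Dell guide – the node names, adapter names, addresses, and witness details are placeholder assumptions, and the real deployment involves many more settings:

# Step 4 - run on each node: roles and features for Hyper-V plus S2D clustering
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, FS-FileServer, Data-Center-Bridging -IncludeManagementTools -Restart

# Step 5 - run on each node: a Switch Embedded Teaming virtual switch for management traffic
New-VMSwitch -Name 'Mgmt-vSwitch' -NetAdapterName 'NIC1','NIC2' -EnableEmbeddedTeaming $true -AllowManagementOS $true

# Steps 6-7 - validate, then build the cluster with no storage
Test-Cluster -Node 'Node1','Node2' -Include 'Storage Spaces Direct','Inventory','Network','System Configuration'
New-Cluster -Name 'HCICluster' -Node 'Node1','Node2' -StaticAddress 192.168.1.50 -NoStorage

# Steps 8-10 - enable S2D, add a cloud witness, and carve a volume out of the pool
Enable-ClusterStorageSpacesDirect -CimSession 'HCICluster'
Set-ClusterQuorum -Cluster 'HCICluster' -CloudWitness -AccountName '<StorageAccountName>' -AccessKey '<StorageAccountKey>'
New-Volume -CimSession 'HCICluster' -FriendlyName 'Volume01' -FileSystem CSVFS_ReFS -StoragePoolFriendlyName 'S2D*' -Size 1TB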

Next time, we’ll go into depth on the management of our newly built Azure Stack HCI setup!

Convert DHCP Lease to Reservation with Powershell

Scenario: You have just created (by whatever means) a new server. It has gotten an IP address from DHCP and all is working as it should… However… perhaps this server provides a service that requires a static IP (for whatever reason).

In the old days, you would fire up the DHCP tool from the RSAT tools or <shudder> RDP into your domain controller to convert the dynamic DHCP lease into a reservation. There is a faster way… if you guessed Powershell, you are correct 🙂

It took a little spelunking around to work this out, but here are the steps to follow.

First, create a remote Powershell session to your DHCP server.

 Enter-PSSession -ComputerName MyDC.houndtech.pri 

Next, find out the IP address of the new server. Easy enough with:

Test-NetConnection -computername Newserver.houndtech.pri 

Let’s make things a little easier for later and put that returned object in a variable.

$x = Test-NetConnection -computername Newserver.houndtech.pri  

But we don’t need the whole object, just the IP address, so let’s narrow it down with

$IP = $x.BasicNameResolution.IPaddress  

Now $IP contains JUST the IP address, which is what we need for the next series of cmdlets.

Next, we retrieve the DHCP lease object with:

 Get-DHCPServerV4Lease -IPAddress $IP 

Finally, pipe it to the cmdlet to add the reservation:

 Get-DHCPServerV4Lease -IPAddress $IP | Add-DHCPServerV4Reservation 

Of course, this could all be piped together into a sweet one-liner that would fit nicely in any automated provisioning script:

 Get-DHCPServerV4Lease -IPAddress ((Test-NetConnection -ComputerName 'NewServer.HoundTech.pri').RemoteAddress.IPAddressToString)| Add-DhcpServerv4Reservation 
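
If you do fold this into a provisioning script, wrapping it in a small function keeps it readable. A rough sketch – the function name and parameters are my own, and it uses Invoke-Command instead of an interactive session so it can run unattended:

function Convert-DhcpLeaseToReservation {
    param(
        [Parameter(Mandatory)] [string] $ComputerName,   # the new server
        [Parameter(Mandatory)] [string] $DhcpServer      # the DHCP server
    )

    # Resolve the new server's current (leased) IP address
    $ip = (Test-NetConnection -ComputerName $ComputerName).RemoteAddress.IPAddressToString

    # Convert the lease to a reservation on the DHCP server
    Invoke-Command -ComputerName $DhcpServer -ScriptBlock {
        Get-DhcpServerv4Lease -IPAddress $using:ip | Add-DhcpServerv4Reservation
    }
}

# Example: Convert-DhcpLeaseToReservation -ComputerName 'NewServer.HoundTech.pri' -DhcpServer 'MyDC.houndtech.pri'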

See you next time!

Powershell and Devops Summit 2018

Last week was the big Powershell/Devops Summit in Bellevue, WA. I say “big” not as in a ginormous 15,000-attendee extravaganza like Ignite or VMworld. No, this 365-attendee Summit was big as in the stature of the people there. All the Powershell superstars were there, sharing their knowledge and enthusiastically pushing the rest of us to excel.

This was my first Summit, and although I have waded into the Powershell Community pool, this was a dive into the deep end! Happily, I managed to keep up and learn quite a few things that I can immediately apply. I also brought back copious notes on things to try out in the old Lab.

It would be a novella to describe all the things I learned, but here are a few highlights and key takeaways.

  • Powershell 6.x is the way forward. Cross-platform and lightweight, it will run on almost anything. There was a demo of a Raspberry Pi with Powershell 6 installed, sending sensor information (heat/humidity sensor) and controlling an attached light. Pretty nifty. Also, Cloud Shell (Azure Powershell in a browser) either runs v6 now or will soon.
  • To utilize old modules, there is a soon-to-be-released Windows Powershell Compatibility Pack. This is a clever solution. It allows you to essentially remote into your own PC’s Windows Powershell (5.1) session. It’s a little confusing until you remember that Powershell 6 and Powershell 5.1 are different executables and can and do run side by side.
  • Powershell Classes + REST APIs = super functions. In a nutshell, use classes to build objects out of data returned from Invoke-RestMethod. Once it’s a fully fleshed-out PSObject, you have many more options for how to interact with that data. Powershell is all about objects, after all. For more info, tweet to Jeremy Murrah or check out his presentation on GitHub. A tiny sketch of the pattern follows this list.
  • Desired State Configuration Pull Server is becoming more of a community/open source project. It appears (and maybe I misunderstood) that Microsoft isn’t doing much development on the Pull Server portion of DSC. They are focusing on the Local Configuration Manager (LCM). This makes a lot of sense; it’s easy and inexpensive to use Azure Automation as your pull server. There are also a few other open source pull servers like TUG.
  • Lean Coffee! For a completely non-Powershell side session, Glenn Sarti introduced a few of us to ‘Lean Coffee’. It’s not a skinny half-caf soy latte; it’s a way to organize small informal meetings. Briefly, everyone gets Post-it Notes (or other slips of paper) and writes down 2-3 things they want the group to discuss at this meeting. Everyone votes on what they want to discuss, and that determines the order of conversation. Someone acts as timekeeper and every 2 minutes polls the group for a simple thumbs up/down vote. If the majority votes up, the conversation stays on that topic; otherwise, you move on to the next highest-voted topic. Repeat until you run out of time or topics. I am definitely going to try this in our next team meeting! For more info, check out: http://agilecoffee.com/leancoffee/
  • I learned a LOT about CI/CD pipelines for Powershell using VSTS and a few other tools, across several separate sessions. This topic needs a blog post or 6 all its own. Two things to remember: 1. it can start simple and build from there, and 2. Plaster frameworks make the dream work.
  • Thomas Rayner showed us how to make custom rules for PSScriptAnalyzer. Time to make some “house rules”!
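
As a taste of the classes-plus-REST idea, here is a tiny sketch of my own. The API and class here are just for illustration – the pattern is simply Invoke-RestMethod output fed into a class constructor:

# A class gives the raw JSON a real shape: typed properties plus methods
class GitHubRepo {
    [string] $Name
    [string] $Owner
    [int]    $Stars

    GitHubRepo ([pscustomobject] $Raw) {
        $this.Name  = $Raw.name
        $this.Owner = $Raw.owner.login
        $this.Stars = $Raw.stargazers_count
    }

    [string] ToString () { return ('{0}/{1} ({2} stars)' -f $this.Owner, $this.Name, $this.Stars) }
}

# Invoke-RestMethod returns PSCustomObjects; wrap each one in the class
$repos = Invoke-RestMethod -Uri 'https://api.github.com/users/PowerShell/repos' |
    ForEach-Object { [GitHubRepo]::new($_) }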

There was a LOT more that I have noted to study up on and try out; those will be later posts, I’m sure.

Of course, Jeffrey Snover continues to amaze me with his enthusiasm and optimism. Some great ‘Snover-isms’ heard: “Line noise should not compile!” and “Like cockroaches in a kitchen full of cowboys”.

Aside from actual sessions, there were several interesting conversations not only with superstars like Don Jones™ and Mark Minasi, but also with the “next-gen” stars like Michael Bender and James Petty and with regular Powershell guys like me.

As for the actual event, the folks that put this together are top-notch. They care about the experience we have, and it shows. Very good food, opportunities to socialize, and a refreshing lack of hard-sell vendors. One of the few conferences I go to that doesn’t result in an email flood in the week following.

Bellevue was great – it only rained 4 of the 5 days I was there! Seriously though, the rain wasn’t an issue, even though I walked everywhere except to and from the airport. A light misting drizzle wasn’t horrible, and the temperatures were great: cool enough that you could vigorously walk up hills without getting sweat-soaked and warm enough that only a light jacket was needed.

 

Powershell Tuesday Quick Tip #6

This week’s tip is a bit longer – it’s actually a little script to remotely create a shortcut to a network file share on a user’s desktop. This one comes in handy when explaining how to make a shortcut over the phone proves… difficult.

# The remote PC, the user, and the share the shortcut should point to
$PC = 'Luke-Laptop.rebellion.org'
$User = 'LSkywalker'
$TargetPath = "\\tactics.rebellion.org\death_star_plans\"

# Build the .lnk path through the remote machine's admin share
$ShortcutFile = "\\$PC\C$\Users\$User\Desktop\Death Star.lnk"

# Use the WScript.Shell COM object to create and save the shortcut
$Obj = New-Object -ComObject WScript.Shell
$Shortcut = $Obj.CreateShortcut($ShortcutFile)
$Shortcut.TargetPath = $TargetPath
$Shortcut.Save()

This one is a good candidate for getting wrapped up in a function and added to an “Admin Toolkit Module”.
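
Something like this rough sketch, for example – the function name and parameters are my own invention, but it keeps the same WScript.Shell approach as above:

function New-RemoteDesktopShortcut {
    param(
        [Parameter(Mandatory)] [string] $ComputerName,
        [Parameter(Mandatory)] [string] $UserName,
        [Parameter(Mandatory)] [string] $TargetPath,
        [Parameter(Mandatory)] [string] $ShortcutName
    )

    # Build the .lnk path through the remote machine's admin share
    $shortcutFile = "\\$ComputerName\C$\Users\$UserName\Desktop\$ShortcutName.lnk"

    # Create and save the shortcut via the WScript.Shell COM object
    $shell = New-Object -ComObject WScript.Shell
    $shortcut = $shell.CreateShortcut($shortcutFile)
    $shortcut.TargetPath = $TargetPath
    $shortcut.Save()
}

# Example: New-RemoteDesktopShortcut -ComputerName 'Luke-Laptop.rebellion.org' -UserName 'LSkywalker' -TargetPath '\\tactics.rebellion.org\death_star_plans\' -ShortcutName 'Death Star'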