Microsoft 365 Migration – The Beginning

There are lots of reasons to move to Microsoft 365. You can apply most of the same logic to Google’s Workspace (formerly G Suite) or Zoho Office, but we’ll focus on Microsoft’s Office 365 portion of the larger M365 product. There are many extras in the margins between Office 365 and Microsoft 365, and we’ll touch on those later. First, let’s back up and ask, “Why do this at all?”

A large part of the case for migrating comes down to cost and the effective use of human and equipment resources. There is a cost associated with even a free email server running on your own hardware. Even if you discount the price of the software, which is rarely free, you have the cost of the hardware, the bandwidth, and someone’s time to maintain it. These small costs add up quickly! Conversely, a cloud option like M365 moves that cost to a monthly fee per user. In exchange for that fee, you get email, storage, chat, websites, security, integration with your on-prem Active Directory, and more. Check any of the above products’ websites; they are happy to tell you why they are great.

The main thing to consider is ‘core vs. context.’ If you are not in the business of running email, why do your customers care what email platform you use? So long as it works and does the main things we expect a mail platform to do around messaging and scheduling, no one cares about the details. It’s context, not a core competency or differentiator. In the last 30 years, no customer has said to a business, “I want to buy what you are selling because you have a great email server.” That is, unless they were talking to someone selling email servers! If it’s not improving your business, hire an outside service.

To summarize: If your company is not in the business of providing email, SharePoint, and/or cloud storage, outsource it to Microsoft or Google (pick one). These things are not core competencies. They are context and, therefore, distractions from the core things your company does to make money.

Depending on the size and complexity of your organization, this could either be a weekend project or a multi-phase beast spanning months. A big decision to make is who does the migration. How do they do it? These are intrinsically linked because if you hire a partner to do it, they have their own processes that dictate the how. If you do it in-house, then you need to determine the how.

Let’s talk today about the decision on hiring it out or doing it in-house. Here are a few leading questions to ask yourself to make that decision.

  • How big is the environment?
    Even a single admin who can follow complex instructions can accomplish the Exchange migration if the environment is small. SharePoint is more difficult without tools, but see the points below.
  • Do you have people who are competent in Exchange and PowerShell?
    While most of the work can be done in the GUI, a fair amount of PowerShell code is involved (see the sketch after this list). Most of it is copied from Microsoft’s website and modified for your environment, but if you or your admin aren’t familiar with PowerShell basics, you could have problems.
  • Is your current on-prem environment very customized or plain vanilla?
    Perhaps you don’t even have SharePoint on-premises, or you only have a few hundred Exchange mailboxes. Or, conversely, you have a ton of old, poorly organized mailboxes left over from a previous management decision to keep everything forever, including thousands of shared mailboxes and distribution lists with no owners. <Sorry, I got triggered a bit there>
  • Are you willing to invest in some tools?
    Tools like Sharegate and some others will GREATLY ease your migration pain. Sure, you could code up some PowerShell scripts to do much of the same, but remember what we said above about core competencies?
  • How FAST do you want to do it?
    It may take a while if you have a small team or a very complex environment. Hiring someone to do it will still take time, but hopefully, you paid by the project, not the man-hour. If you have a very small team already at full capacity just keeping the lights on, they may not be able to get this done in a timely manner.
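To give a flavor of the PowerShell involved in an in-house Exchange move, here is a minimal sketch of creating a migration batch with the ExchangeOnlineManagement module. The endpoint name, domain, account, and CSV path are placeholders for illustration, not values from any real environment, so treat this as a starting point rather than a recipe.

# Minimal sketch, assuming a hybrid setup with an existing migration endpoint.
# All names, paths, and domains below are placeholders.
Install-Module ExchangeOnlineManagement -Scope CurrentUser   # one-time setup
Connect-ExchangeOnline -UserPrincipalName admin@contoso.com

# Create a migration batch from a CSV that has an EmailAddress column
New-MigrationBatch -Name 'Wave1' `
    -SourceEndpoint 'OnPremEndpoint' `
    -TargetDeliveryDomain 'contoso.mail.onmicrosoft.com' `
    -CSVData ([System.IO.File]::ReadAllBytes('C:\Migration\Wave1.csv'))

Start-MigrationBatch -Identity 'Wave1'

# Check on progress
Get-MigrationBatch -Identity 'Wave1' | Format-List Status, TotalCount, SyncedCount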

Another point to consider on the speed front: is the completion of this project going to mean a reduction in staff? If you have one person whose only job is running Exchange and SharePoint servers, they will be understandably nervous. I’m not trying to say anyone would deliberately slow down a project to stretch out their job, but subconsciously this person is likely not working as hard as they could if they fear losing their job.

The flip side is if you have an overworked server admin that maintains Exchange and SharePoint and would love to simplify the environment to concentrate on value-added things like cloud automation and such fun stuff. This admin will work nights and weekends to get these beasts out of the environment. This is the admin that understands that running Exchange in-house is a thankless job. If it works, no one notices, but one thing glitches, and they are out for your head.

The final piece of the do-it-yourself vs. hiring someone debate is cost. Obviously, part of the equation is what this migration will cost in dollars and cents. The benefit of doing it with existing staff is that you are already paying them! The downside is if there is a problem with the existing team, either in competence or capacity. If that balance tips too hard against doing it in-house, either hire out the whole project or, if it’s a capacity problem, hire a contractor to come in and take the lead on the project for you.

Take all of this into account and decide the WHO. That decision dictates the HOW. The HOW could be a book all by itself, but next time I’ll run through a summary of things that need to get done to call this a successful undertaking.

 

Ch-Ch-Changes

It’s been a long time since my “Reflections” post, almost nine months! The last time I posted, I was starting a new journey with a new company, and wow, what a ride! The company has gone through some tough times; I’m not going to sugarcoat it. The week I accepted the offer, they declared bankruptcy! It turns out it was a Chapter 11 reorganization done as part of a change of ownership. I had known about the pending change of ownership; the bankruptcy was a bit of a surprise. It’s turned out ok so far.

The company cleared bankruptcy in the minimum amount of time and started rebuilding. In IT, we’re still reorganizing to match the reduced size of the company and a more pragmatic and lean stance toward hardware, software, and people. Sadly, there were some staff reductions, which is always hard. The good news is that my experience running a lean IT team has come in very handy. There were a lot of extra software and network services that we’ve been able to cut due to the changes in management style. Like half a million dollars of changes.

When I say management style, I’m not just talking about the differences between my predecessor and me or between the current and previous CIOs. The change has been from the CEO on down. We’re on our second CEO since I started, and almost the entire senior leadership team has changed. There has been a lot of “out with the old and in with the new” culture change, and let me say it’s caused a bit of introspection on my part. At my old company, I was old school and skeptical about how the new leadership wanted to change the entire culture of the company overnight. Now I’m the new guy trying to help reshape company culture!

The key is to be respectful to the folks who have been around for a long time and have built the base we are expanding upon now. At the same time, we all need to have a hard, honest look at the parts of the corporate culture that led to excessive spending and poor practices resulting in bankruptcy and buyout by a debtor. Now that I’ve been on both sides of that coin, I can empathize with how both sets of people feel about the changes that inevitably occur when you change the whole leadership of a company. I think it’s made me a better leader.

Of course, the biggest strength of any leader is their team, and I have a damn good one. I’m not just saying that because they might read this, either. We have been through a rough year and still have a lot of hard work ahead of us. We still have room for improvement in many ways, but in the ways that matter, we have a good core team to continue building.

We also still have a lot of building to do. I’m learning that, typical of many larger midsized companies, this one is way ahead in some ways and way behind in others. We are finally fully rolling out Office 365 and modernizing and simplifying much of the network design. Overall, we’re on track to start 2023 in a better place from a technological and budgetary standpoint. I’m proud of our work thus far and will be even more proud when we pull off the plans for the last quarter of the year!

 

 

SOLVED! Unable to open files from SharePoint in the desktop Office app.

As part of our rollout of Office 365, we had a few users report that after the automated process to uninstall Office 2016 and install Office 365, they could not open some files from our SharePoint 2013 servers. The affected users could not use the “Open in App” feature. They would get the error “We’re sorry, Excel cannot open .. <Filename>” and then a second error stating that the file was either missing, corrupt, or open in another application. This one stumped us for a bit, and the old Google-Fu wasn’t helping much.

Here were our steps and the resolution.

First, diagnostically, we utilized the old “Wolf in Siberia” tactic of isolating the problem. 

  • It wasn’t all users, but not just one, either. Therefore, not a problem with the software package.
  • The file could be opened in the browser in both view and edit mode and could be downloaded and opened. Therefore, the file wasn’t corrupt.
  • Other users could open the file without a problem. So, not a SharePoint problem.
  • We had the user log into a different computer to see if the user profile or SharePoint permissions were a problem. They could open the file without error. So the issue was on their PC.
  • We logged into the user’s computer as a different user and could open the file.  

So the problem was specific to that user on that PC!

Considering the error stated that one of the possible causes was “the file is open in another application,” we started looking for a cache or flag saying the file was open for editing. First, we did the easy thing and ran a disk cleanup on the main drive, emptying temp folders and internet caches. No luck. So we did some internet searching and found several ways to clear the Office cache in earlier versions, but not Office 365! Finally, we broadened the net, searched simply for “Office 365 cache” without any specific error, and hit upon a post about a Teams issue that was resolved by deleting the files in the cache folder. Eureka!

So, after all of that, the solution is straightforward. Close all Office programs, then delete all the files in “%userprofile%\AppData\Local\Microsoft\Office\16.0\OfficeFileCache\” (the OfficeFileCache folder under the user’s local AppData).

There is one caveat – be careful. In our tests, we found at least one user whose cache folder had a “1” appended (OfficeFileCache1). 
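If you end up doing this for more than a couple of users, here is a minimal PowerShell sketch of the same fix. It is an illustration rather than a supported procedure: it assumes all Office apps are closed, and it wildcards the folder name so the OfficeFileCache1 variant gets caught too.

# Minimal sketch: clear the Office file cache for the current user.
# Close all Office apps first; the wildcard also catches variants like OfficeFileCache1.
$cacheRoot = Join-Path $env:LOCALAPPDATA 'Microsoft\Office\16.0'
Get-ChildItem -Path $cacheRoot -Directory -Filter 'OfficeFileCache*' -ErrorAction SilentlyContinue |
    ForEach-Object {
        Write-Host "Clearing $($_.FullName)"
        Remove-Item -Path (Join-Path $_.FullName '*') -Recurse -Force -ErrorAction SilentlyContinue
    }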

Reflections

In late May 1992, I walked into the Houston warehouse of Swiff-Train Company for my first day of work. I was 22 years old and in desperate need of a job. My last few had been odd jobs, short construction gigs, etc. Nothing that would help with that end-of-the-month stress around making rent. Hopefully, this would be different, I thought.

My best friend Darryl had worked there for a couple of years and always spoke highly of the company. I had met most of the people from the local branch at various birthday parties and BBQs at Darryl’s house, and they seemed like a good bunch. So when things ended at my telemarketing job, I called up Monnie, the warehouse manager, and asked for a job.

I started sweeping the floors as contract labor for a week until my paperwork went through corporate, and I became a full-time warehouse worker. It was hard, hot work, but the pay was fair and, most importantly, stable. I stayed in the warehouse through the summer until I saw an “Inventory Control Clerk” notice. I asked around and realized that position was for the San Antonio branch, not Houston. So I asked the branch manager why we didn’t have that position in Houston, as we sure needed someone to manage all the shipping and receiving paperwork. She agreed, and a few weeks later, I was the new Inventory Control Clerk in Houston! I was on a roll now!

After a few months, I was happy, and learning the computer systems was fun. I had a good time flirting with the girls from other branches on the phone and chatting on the computer. Let me pause to say: these weren’t PCs we were using. Remember – 1992. No, we did all of our work on TN5250 green screens, with big clunky keyboards and square CRT screens. But computer work sure beat hauling rolls of carpet pad in the Houston heat! Then another change came, and it was a big one.

I had gotten pretty friendly with the purchasing team at HQ in Corpus Christi. I handled the receiving of goods in Houston, and when the purchase order didn’t match what arrived, I needed them to adjust it so I could receive the PO. Still, I was surprised when the purchasing manager, Steve, called and asked if I would be interested in a job in his department! It required moving to Corpus Christi, but most importantly, the company would help me move, and it was a dollar-and-a-half raise! It sounds crazy now, but I only made $7 an hour back then! I talked to my girlfriend about it, and we were in! We traveled to Corpus Christi to find an apartment and visit with Steve. The interview went well, and we found a place we liked, so we went back to Houston to pack! In January of 1993, we moved to Corpus Christi using a Swiff-Train delivery truck and the help of one of the drivers.

That moving day was an excellent example of the company’s spirit that helped keep me there for almost 30 years. Rona Train, the president’s wife, stopped by and brought us a sandwich platter and snacks. That small act of kindness from the company owners impressed me greatly. That ‘family’ feeling persisted for a long time. Over the next three years, L.A. and Rona Train were my business parents of sorts. When I started doing extra projects that involved meeting with vendors in person rather than over the phone, L.A. sat me down for a talk.

He told me that I was a bright, well-spoken young man and would do well in the business world. HOWEVER. He advised me that I was making less than a perfect impression with my long pony-tailed hair. My hair was halfway down my back at that time, and I was pretty proud of it. The girls loved it! But my mentor told me that I was starting each new conversation two steps backward because older business people didn’t take me seriously. That afternoon, I cut my hair so short my girlfriend didn’t recognize me.

While all of this was happening, I had developed computing into a hobby of sorts. The IT guys Tim and Wade had put an actual Windows 3.11 PC in each department with instructions to “see if you can find a use for it.”  I was poking around when I found that I could record and write something called a ‘macro’ on the terminal emulator installed on the PC. It took a couple of weeks to master, but I wrote a macro that automated one of my very tedious purchasing jobs. That macro took ordering updates from a manual two-week process to a semi-automated 2-hour process. I was hooked! I started borrowing PC magazines from the IT Department and bugging them to explain things I didn’t quite understand. I spent my spare time hacking on a used PC I bought from a friend. I was on a new track!

I was terribly bored pushing paper as a purchasing clerk by this time. My friend Jason had moved to Dallas and gotten a job with a big tech company, and I started looking at making a career change myself. I was pretty good with PCs and spent a lot of my spare time around the office helping people with their software or hardware when the IT guys were busy. With that in mind, I started sending out my resume to companies trying to get a help desk job, tape technician, or ANYTHING in IT. I wasn’t gaining much traction with that, and I had also spread my inquiries to the Dallas area. Shortly after the big haircut, Tim, the IT manager, approached me about learning to be a programmer. Of course, I jumped at it!

My education was more in the line of an apprenticeship than formal classes. Every day, after I finished my work in Purchasing, I would go back to the IT office, and Tim would teach me how to code in RPG (Report Program Generator). I didn’t care that I wasn’t getting overtime or even paid at all. I was LEARNING and CREATING something! Tim told me that if he didn’t feel like I had what it took to do this full-time, I could go back to purchasing. Yeah – no way in hell! In November 1996, I officially became a “Programmer/Analyst II” and left boredom behind!

Over the next few years, I helped roll out PCs for everyone in the office, replace old Twinax and Token Ring networks with IP over Ethernet, and support people through their problems with new technology. I installed our first Windows server running Microsoft BackOffice Small Business Server 4.0. It had Windows NT 4.0, Exchange 5.0, Internet Information Services 3.0, and SQL Server 6.5, and I LOVED IT. For the first time, we had company email. Granted, we dialed into our service provider three times a day to download it, but it was a start. We upgraded to SBS 4.5 to prepare for Y2K and moved our interbranch communications from leased lines to Frame Relay. By this time, I wasn’t programming much anymore. Most of my time was spent managing servers and networks and supporting the growing number of PCs.

We survived the Y2K ‘Millennium Bug’ without much fanfare because Tim and I worked our butts off to ensure every piece of equipment was compliant. Life was pretty good, and our small IT team of two grew to three with the addition of Catrina. She was the receptionist in Corpus Christi until I noticed her writing HTML at her desk in her spare time. I told Tim about this, and he immediately hired her. I convinced Tim and the rest of management to get rid of Frame Relay and go to a pure internet-based Wide Area Network (WAN). We secured connections between branches with IPsec VPNs using NetScreen firewalls. That saved a bunch of money and opened up our bandwidth to a whole new level. We opened, closed, and moved branches. And so we went. I was still learning new things but had gotten caught up in the day-to-day. I started filtering new technologies based on whether they applied to Swiff-Train or not. If we couldn’t use it, I didn’t bother. I was stagnating and didn’t know it. It was around 2010, and everyone was pretty happy.

By 2014, I asked for some time to go to one of the big conferences I kept hearing about. Tim and I had gone to COMMON, the big IBM mainframe/midrange conference, sometime in the early 2000s. We had gone to learn more about tools and such around our AS/400 and how to tie it to the broader internet. I had learned a lot, but Tim had felt it was a waste of time and that it was “just a bunch of people trying to sell you stuff.” But in 2014, Microsoft’s Tech-Ed conference was in Houston, and I would be 30 minutes away from the branch if something happened and they needed me. So Tim relented and let me go. I think he did it to reward me and to shut me up about it. *grin*

So in May 2014, at Tech-Ed, my life began to change once more. I went to learn more about how to manage Exchange and Windows servers, etc. That was going along great until Wednesday of that week. I had heard “Powershell, Powershell, Powershell” for the last two days. I thought it was just a replacement for VBScript, which was clunky and not a part of how I managed computers. However, everyone seemed pretty hyped up about it, so I checked the session calendar and went to see Don Jones’ “Windows PowerShell Best Practices and Patterns: Time to Get Serious” session.

Damn. Don looked and sounded a little like Tony Stark from the Iron Man movies, and his message ignited my brain. During the Q&A, he said, “either learn Powershell or memorize the phrase ‘Do you want fries with that?'” He also said, “learn Powershell, and when you get that big raise or new promotion or a new job, I like whiskey. Good whiskey.”

Wow. I quickly switched my session plans to be all about PowerShell. I learned about Desired State Configuration, and I heard that Microsoft would do away with the Graphical User Interface (GUI) on servers. I downloaded podcasts about Powershell, and I was on fire!

When I got back to work, I was like a man possessed. I was writing Powershell scripts and automating things; I was innovating again. When my evaluation came up, I got a BIG raise, like a 5 figure raise. So I wrote an email to Don Jones to thank him and ask if he would be at Ignite (the new Tech-Ed) Conference. I owed the man a bottle of whiskey. He was very gracious and introduced me to Jason Helmick, Jeffery Snover, and several other influential folks. He liked the whiskey as well.

So fast forward a year, and I let several opportunities pass, mainly due to crushing Imposter’s Syndrome. People kept saying, “blog more, speak at conferences,” and I would freeze up. Heck, I still have trouble writing tech blogs, as you can tell by the long gaps on this one. Despite all of that, I was crushing it at work and got another big raise. We split our IT department into Dev and Ops, and I became IT Ops Manager – later Infrastructure Manager. We hired a new SysAdmin/Help desk person, and suddenly I was a manager and a mentor.

This time I took a bottle of whiskey to Jason Helmick at Ignite 2016 in Atlanta and introduced my young apprentice Bradley around. I was going to two conferences a year; each was a shot in the arm, and I would come back all fired up and full of new ideas. My bosses were loving it and quite willing to invest the money to keep me lit up.

Ignite 2017’s bottle went to Greg Shields for his advice and all of the classes on Pluralsight that helped me get certified. I was still fired up at work, but the ownership had changed, and the culture was starting to shift. I started spending more time managing than doing tech work, but I was still happy. I moved our stack of OLD servers to a hyper-converged Storage Spaces Direct cluster, which Microsoft later rebranded “Azure Stack HCI.” That went well. In 2019, Dell/EMC did a case study on our deployment of Azure Stack HCI and even did a video interview with me talking about the benefits of Dell and Azure Stack!

Then came COVID-19.

I admit that, at first, I thought that the media was blowing things out of proportion for the impact on the US. That had been the case with SARS and the Bird Flu, but it quickly became apparent I was wrong. Over a week in the spring of 2020, we took a workforce that was 95% in the office to one that was 80% remote. It wasn’t pretty, but it worked.

Lucky for us, I had started an initiative to make the company “branch independent.” The idea was that no matter which branch was “down” for whatever reason, the company could keep functioning. We had long had redundant servers in different branches and had started moving servers to a co-located rack in a local data center. That didn’t cover the case where employees couldn’t come to the office due to weather, power issues, or, in this case, a pandemic. Honestly, I was planning for hurricanes and building fires, not pandemics, but it worked out well.

We had just started a rollout of RingCentral’s cloud-based PBX, so with some accelerated training and some long nights, the company was no longer tied to phone servers or phone lines, for that matter. The problem was we didn’t have enough laptops to go around, and we had to send desktops home with people and help them set them up via phone call and Facetime. This is where a good relationship with Dell paid off. We were able to get a bunch of laptops to enable our people to work not just from home but from anywhere with an internet connection! A real hero moment for our IT team. No lost work, no security issues, all formulated and executed in a matter of 2-3 weeks.

Unfortunately, that was the last big hurrah, it turns out. Technology growth in the company slowed to a crawl and then stopped. Ask three people why and you’ll get three different answers. The bottom line was that I had gotten the company to a level of technology where the execs were comfortable. The mission was to keep the lights on and cut costs every year rather than innovate and improve. I took the opportunity to use some vacation time to make some motorcycle trips and think about the future. I realized the 29-year relationship between me and STC had withered beyond repair.

I opened up my LinkedIn profile to be “Open to a conversation” with recruiters and had several nibbles over the next few months. Then in November, I got the call that changed it all. A phone interview and a face-to-face interview later, I had an offer letter in my hands! And what an offer! A much larger company, a larger team, a larger paycheck, and opportunities to grow.

While I was going through my interview process, I was worried about leaving Bradley in a bad spot. I was planning on spending my last weeks trimming and automating as much as possible so he would have an easier time handling the IT Operations alone until the company could hire a new junior help desk person. Plot twist – Bradley had been finalizing a new job as well!

So on Friday, December 3rd, I met with the COO and handed him both letters of resignation. Two weeks later, Bradley gave me a bottle of whiskey as thanks for a leg up in his career, much as I had given that bottle to Don a few years back. I cracked it open, and we shared a small taste before I left for the last time.

Before we left, Bradley and I worked hard to smooth the transition to an outsourced IT service provider. I still have the company laptop and a consulting contract if they need my help, but I doubt they will. The new service company was pretty impressed with the work Bradley and I had done, given it had been just the two of us for the last several years. We cleaned up pretty well before we turned in our keys.

How long was I at Swiff-Train? 10,802 days. 771 paychecks. A lifetime? No, only half of one (I hope). It’s not the end of a career; it’s the end of a job.

Today I’m starting a new job at a new company—a new job with new challenges and rewards.

The difference between a job and a career? Ownership.

 

Get-Command aka Discovering PowerShell

“What’s the PowerShell command for…?” This happens to all of us. We’re working on a task and hit that speedbump where we either have forgotten or just don’t know the command to do that thing. This is where it’s very easy to turn to Google or Bing and start searching. There is an easier way!

The issue with search engine results is that they may be incomplete, written for a different version of PowerShell, or simply not contain the answer. Whether it’s not using the right keywords, date restrictions, or whatever, the better way is right in front of you.

You have a PowerShell window open, right? That’s why you are looking for commands, right? Try this:

Get-Command

That’s nice, but on my machine (with a few modules installed, granted) I get over 6,500 commands that way!
That’s not helpful without some limiting parameters. I do assume that you have an idea about what you are trying to accomplish. This is where the -noun and -verb parameters come in handy.

Let’s take a random example to walk through: What is the command to determine what services are running on a computer?
Our plain-English request has verbs and nouns, and PowerShell commands have verbs and nouns too.
With a little knowledge of PowerShell structure, it’s pretty easy to work through.

Quick refresh: PowerShell commands (cmdlets) are structured in a “Verb-Noun” syntax.
Of all the verbs in the English language, PowerShell limits itself to (at this writing) 100 approved verbs. You can see the list at Approved Verbs for PowerShell Commands.

That’s a lot, but we can narrow those down pretty quickly by remembering a few things. Common verbs are things like Add, Clear, Close, Copy, Enter, Exit, Find, Get, Hide, etc. We recognize ‘Get’ from other things like GET-COMMAND! ‘Find’ also looks attractive in this case, so let’s have a quick look at that.

PS> Get-command -verb find

CommandType     Name                                               Version    Source
-----------     ----                                               -------    ------
Function        Find-Certificate                                   2.5.0.0    xCertificate
Function        Find-Command                                       2.2.5      PowerShellGet
Function        Find-Command                                       1.0.0.1    PowerShellGet
Function        Find-DSCResource                                   2.2.5      PowerShellGet
Function        Find-DscResource                                   1.0.0.1    PowerShellGet
Function        Find-IpamFreeAddress                               2.0.0.0    IpamServer
Function        Find-IpamFreeRange                                 2.0.0.0    IpamServer
Function        Find-IpamFreeSubnet                                2.0.0.0    IpamServer
Function        Find-Module                                        2.2.5      PowerShellGet
Function        Find-Module                                        1.0.0.1    PowerShellGet
Function        Find-NetIPsecRule                                  2.0.0.0    NetSecurity
Function        Find-NetRoute                                      1.0.0.0    NetTCPIP
Function        Find-RoleCapability                                2.2.5      PowerShellGet
Function        Find-RoleCapability                                1.0.0.1    PowerShellGet
Function        Find-Script                                        2.2.5      PowerShellGet
Function        Find-Script                                        1.0.0.1    PowerShellGet
Cmdlet          Find-Package                                       1.4.7      PackageManagement
Cmdlet          Find-PackageProvider                               1.4.7      PackageManagement

Ok, that’s better, but none of those seem to have anything to do with services. Let’s come at it from the ‘service’ side. You can list all the commands with a certain noun or noun fragment using the -noun parameter.

PS> get-command -noun service

CommandType     Name                                     Version    Source
-----------     ----                                     -------    ------
Cmdlet          Get-Service                              7.0.0.0    Microsoft.PowerShell.Management
Cmdlet          New-Service                              7.0.0.0    Microsoft.PowerShell.Management
Cmdlet          Remove-Service                           7.0.0.0    Microsoft.PowerShell.Management
Cmdlet          Restart-Service                          7.0.0.0    Microsoft.PowerShell.Management
Cmdlet          Resume-Service                           7.0.0.0    Microsoft.PowerShell.Management
Cmdlet          Set-Service                              7.0.0.0    Microsoft.PowerShell.Management
Cmdlet          Start-Service                            7.0.0.0    Microsoft.PowerShell.Management
Cmdlet          Stop-Service                             7.0.0.0    Microsoft.PowerShell.Management
Cmdlet          Suspend-Service                          7.0.0.0    Microsoft.PowerShell.Management

Aha! That first one, Get-Service, looks good! Let’s try that one.

PS> Get-Service

Status   Name               DisplayName
------   ----               -----------
Running  AarSvc_11d5ae      Agent Activation Runtime_11d5ae
Running  AdobeARMservice    Adobe Acrobat Update Service
Running  AdobeUpdateService AdobeUpdateService
Running  AGMService         Adobe Genuine Monitor Service
Running  AGSService         Adobe Genuine Software Integrity Serv…

Bingo! Of course I abbreviated that list. There are 299 services on this machine right now. Perhaps we should filter that down a bit. Parameters are a good way to do just that. You can find the parameters using the Get-Help cmdlet.

PS> get-help get-service

NAME
    Get-Service

SYNTAX
    Get-Service [[-Name] <string[]>] [-DependentServices] [-RequiredServices]
    [-Include <string[]>] [-Exclude <string[]>] [<CommonParameters>]

    Get-Service -DisplayName <string[]> [-DependentServices]
    [-RequiredServices] [-Include <string[]>] [-Exclude <string[]>]
    [<CommonParameters>]

    Get-Service [-DependentServices] [-RequiredServices] [-Include
    <string[]>] [-Exclude <string[]>] [-InputObject <ServiceController[]>]
    [<CommonParameters>]


ALIASES
    gsv

Let’s say that the service we are interested in is something to do with printers. We can use wildcards in the -DisplayName parameter to narrow this down.

PS > Get-Service -DisplayName *print*

Status   Name               DisplayName
------   ----               -----------
Running  DLPWD              Dell Printer Status Watcher
Running  DLSDB              Dell Printer Status Database
Stopped  PrintNotify        Printer Extensions and Notifications
Stopped  PrintWorkflowUser… PrintWorkflow_11d5ae
Running  Spooler            Print Spooler

Of course, you can use the PowerShell pipeline to do Where-Object filtering or to start, stop, and restart the services you discover. Here’s a quick taste below, but the rest is another story for another time!
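As a quick, hedged taste of that pipeline idea (the Spooler service name is just an example):

# Show only running services whose display name mentions "print"
Get-Service |
    Where-Object { $_.Status -eq 'Running' -and $_.DisplayName -like '*print*' }

# Restart-Service -Name Spooler   # uncomment to actually bounce the Print Spooler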

Cybersecurity Executive Briefing – Why Defense in Depth

This started as an email to a few people, but as the word count climbed, I started thinking that there may be other SMB IT folks out there trying to explain cybersecurity to executive teams. I am NOT a security ‘expert’, but I do keep as up to date as possible. I hope this helps.

There are two key concepts in cybersecurity today – Defense in Depth and Assumed Breach. To understand them fully takes an entire career of knowledge and research, but we can boil them down to a few base principles.

Back in the Olden Days, there were no smartphones and only a few laptops, and the main concern was to Keep the Bad Guys Out. We installed firewalls to block access into our networks, added some filters for email viruses, and called it done.

Alas, the villainous hordes were not so easily defeated! The increasingly mobile user started being a Typhoid Mary, carrying in bad things from the outside. Since the payload arrives on a ‘trusted’ device, it’s welcomed into the ‘safe’ network.

Oops.

Like a sick child bringing home colds and flu, these mobile devices coming back from the outside world brought back various malicious software (MALWARE). Some were like viruses and spread from machine to machine by infecting documents or programs. Some were worms that moved through the network by tunneling around the files and folders accessible to a “trusted machine”. Some simply downloaded remote control software and awaited instructions as part of a robot network of attackers (BOTNET). All of this happens inside the ‘safe’ perimeter.

The way to defend against this is a posture of “Assumed Breach”. Stated simply, this is acknowledging that at some point an infected machine or file will be on your network. Now what? Detection and containment are usually the first steps, done by implementing machine-level firewalls, restricting network shares, and running routine backups and scans. (A small sketch of the host-level pieces follows for the technically curious.)
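Here is a minimal sketch of two of those host-level pieces on a Windows machine, using the built-in NetSecurity and Defender cmdlets. It is illustrative only, not a complete containment plan, and it needs an elevated PowerShell session.

# Minimal sketch of host-level defenses; run from an elevated PowerShell session.
# Turn the built-in firewall on for every network profile
Set-NetFirewallProfile -Profile Domain, Private, Public -Enabled True

# Kick off a quick Microsoft Defender scan
Start-MpScan -ScanType QuickScan

# Review any recent detections
Get-MpThreatDetection | Select-Object -First 5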

The best practices of today all recommend a ‘Defense in Depth’ strategy. Conceptually this is pretty straightforward, but a little trickier to implement. The base idea is like a soldier in a hostile country.

  • At the most personal level, our trooper has a helmet and body armor. On your laptop, this is limiting administrative authorities, disk encryption, and anti-malware software.
  • Next out are the protections for the group. For our troopers, it’s walls and gates with guards. Your network has a firewall with intrusion detection and filters to keep the bad stuff out.
  • Finally, there is air support. For our trooper, this may be a group of fighter jets flying around to detect and intercept incoming ‘bogeys’. For our network, this is a cloud-based filter that stops problems before they even get to the perimeter.

The issue with many corporate solutions goes back to the whole “mobile” concept. When our troopers are out in the jungles of the cyber world (aka the Airport Lounge), they don’t have the group protection of the firewall layer. With no cloud-based systems, it’s down to just body armor or what security they have locally installed.

Just like how our troops have more advanced body armor than their great-grandfathers in World War II, advances in weapons require advances in protection. It’s an arms race between the security experts and the bad actors, and it’s a fast-moving one.

Keeping up is difficult even for large companies with dedicated security staff, and it’s exponentially more difficult for smaller companies where the technical staff are forced to wear more hats. That’s why SaaS (Software as a Service) offerings are so attractive. Most well-reviewed SaaS products are very good at early detection and protection from outside attacks. The best will also prevent a botnet-compromised machine from ‘reporting back’ for instructions. A few thousand dollars per year is still cheaper than a very public breach or hiring a dedicated security person! Some executives will doubt whether all this security is worth it. Sure, responsible leadership has a cybersecurity clause on their business insurance, and even with multiple layers and backups, you may still get hit. But to switch analogies a bit: we all have a theft clause on our homeowner’s or renter’s insurance, right? We also still lock our doors when we leave.

So what’s the answer? What is the solution that fixes all of this so we can move on with business? Sadly, there is no one-size-fits-all solution. The answer is “It Depends”. What is your budget? What is your tolerance for risk? A very small business with few digital files would likely be ok with a daily or weekly backup to a drive stored offline. You could call that 1.5 layers. Larger businesses with a larger digital presence and less tolerance for loss of files and capability will need more protection.

 

Packing up for Ignite!

Everyone has their own opinions about what to take when packing for a conference. Obviously, a lot of conferences are smaller than Ignite, and that makes for a slightly different packing list.

Here are a few of the things on my checklist:

1. Two pairs of shoes. It seems silly, but I can vouch for the fact that swapping shoes every day makes a difference. I for one usually don’t walk 17,000 steps in a normal day. However, at Ignite, a 17k day is not unusual for me. Good, well broken in, comfortable shoes are invaluable.

2. Powerbanks. Those little USB power chargers will keep your phone juiced up all day. Power plugs can be scarce at times. Having a big battery to tide you over can be the difference between being able to tap out a quick note vs. having to write it down in a pad.

3. A notebook. As much as you try, you may run out of juice and need to take notes the old fashioned way. I prefer the half size spirals. Usually there are some vendors in the Expo/Hub giving away branded notepads as well.

4. Light jacket/windbreaker. The temperature can swing wildly from hot outside and warm in the halls to chilly in the rooms for breakouts. I like to take a light warmup jacket that will fit easily in my pack.

5. USB multiport charger. I have a little USB charger with 5 ports that plugs into a regular wall plug. This is for your hotel room to charge all of your stuff at the same time. There are never enough plugs accessible in hotel rooms.

6. Hand sanitizer. You will be shaking hands a lot. Nuff said.

7. Business cards. If your company does not provide them, most Office Depots will make you a set of 50 or 100 inexpensively and ready in a couple of hours.

8. Extra space. You will be bringing back t-shirts and other swag. Even if you don’t want the stuff, it’s nice to bring it back to your team. Either only pack your suitcase 2/3 full or bring an empty gym bag in your main suitcase. It’s amazing how many t-shirts can be stuffed in a decent size gym bag!

Of course, some of these seem obvious, but I personally have forgotten at least a few of these things! Trust me, it’s much cheaper to bring these things than to buy at the airport or hotel.

Other than that, eat and drink responsibly, stay hydrated, and have fun. If you tend to be a little introverted, step out of your comfort zone and introduce yourself to at least one other attendee per day. Vendors do not count since they are working at meeting YOU!

That about sums it up for now. I’m going to try to post more during the week, time permitting.

Powershell Tuesday Quick Tip #7

Back at it again!

This tip is a little snippet of code for building dynamic distribution lists for something other than the usual examples of Organizational Units or branch offices. This one is to build a list of managers.

How does one differentiate managers in Active Directory? Title? No, because at least in our organization, the management team has a variety of titles. Things like ‘Vice President’, ‘Director’, or ‘Controller’ in addition to the more mundane ‘Manager’. This is complicated by the fact that we refer to our outside salesforce as ‘Territory Managers’. So the Title property is right out.

What is the one other property that defines a member of a leadership/management team? Yep – they have someone reporting to them. Enter the “Direct Reports” property!

Team that up with the New-DynamicDistributionGroup cmdlet, and you get an example like this:

New-DynamicDistributionGroup -DisplayName 'My Company Managers' `
 -Name MCManagers `
 -PrimarySmtpAddress 'mcmanagers@mycompany.com' `
 -RecipientFilter { DirectReports -ne $null}

*Standard disclaimer about minding the backtick line continuations. This is actually a one-liner, but considering the important part is at the end, this format is easier to read.

The result is a list of anyone in your organization who has someone reporting to them. Exactly what the HR folks want when they ask for a mailing list of all the managers!
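If you want to sanity-check the membership before handing the address to HR, here is a small sketch that previews who the dynamic group will resolve to. The ‘MCManagers’ name simply matches the example above; adjust it for your environment.

# Preview the recipients the dynamic group resolves to right now
$ddg = Get-DynamicDistributionGroup -Identity 'MCManagers'
Get-Recipient -RecipientPreviewFilter $ddg.RecipientFilter |
    Select-Object DisplayName, PrimarySmtpAddress |
    Sort-Object DisplayName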

Manage your Stack with WAC and OpenManage

In a previous post, I briefly mentioned managing your Azure Stack HCI with Windows Admin Center (WAC). When you are using Dell hardware, it’s even better with OpenManage!

Imagine being able to securely control your server stack from any browser. Even if you don’t allow access from outside the network, imagine how nice it would be to have a ‘single pane of glass’ to manage most of your day-to-day tasks. Now imagine that it’s free…

Setting up

First, you have a decision to make. Do you want to run WAC from your PC or as a service on a server? I like to run the latest General Availability release on a server for our whole team to use, and the latest Insider build on my laptop for testing and comparison.

So how do you get it? Simply go to the Windows Admin Center website and download the MSI file. It’s the same install whether you are installing on Windows 10 or Windows Server, so that’s easy.

Before you install, obviously you have to make the decision on the ‘type’ of install you want. There are four main types of installations ranging from the super simple ‘Local Client’ install all the way up to a failover cluster of management servers. Here are the differences.

First is the simplest of installations – the Local Client. This is just an install on your Windows 10 PC that has network connectivity to all of the servers to be managed. This is fine for small, single-administrator environments or, as I noted above, for testing Insider builds.

Next is the ‘Gateway’ version. It’s installed on a designated Windows Server 2016 or higher server and runs as a service. This is the best one for access by a small team of admins.

Our third option is worth noting as it is a supported and documented configuration. It’s called ‘Managed Server’. In this scenario, you install the software directly on one of the servers to be managed. I don’t recommend it unless you have some reason not to create a dedicated “gateway” VM. Of course I’m a fan of single purpose servers, so your situation may vary.

The fourth and final option for a LAN installation is a failover cluster. Truthfully, if you are in a large environment, this is certainly the way to go. It gives you high availability and reliability for your management tools. In a small or medium business, at this time, it’s a little overkill.

My deployment is a bit of a hybrid. I’m running a gateway server on a VM sitting atop an Azure Stack HCI cluster for the team to use. So it’s highly available enough for our needs.
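If you go the gateway route, the MSI can be installed silently from an elevated PowerShell prompt. This is a sketch based on the commonly documented MSI properties; the file path, log path, and port below are assumptions, so check the current Microsoft docs before running it.

# Sketch: silent install of Windows Admin Center as a gateway service.
# The paths and port below are assumptions; adjust them for your download.
$msi = 'C:\Temp\WindowsAdminCenter.msi'
Start-Process msiexec.exe -Wait -ArgumentList @(
    '/i', $msi,
    '/qn',                               # quiet install
    '/L*v', 'C:\Temp\wac-install.log',   # verbose log
    'SME_PORT=443',                      # port the gateway listens on
    'SSL_CERTIFICATE_OPTION=generate'    # self-signed cert; supply your own thumbprint if you have one
)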

One of the big advantages of WAC over traditional management tools is its extensibility. There is a Windows Admin Center SDK along with examples on GitHub if you want to write your own. However, there are several ready-made extensions provided by Microsoft and a few hardware vendors. Microsoft extensions cover things like DNS, DHCP, and Active Directory. The real benefit is if your hardware vendor provides an extension.

For example – as noted in my Azure Stack HCI post – I’m using Dell hardware. The Dell EMC OpenManage extension lets me use Windows Admin Center to manage not only my software environment but my server hardware as well! Disk health, temperatures, fan speed, and more, all in one handy dashboard.

Azure Stack with Only Two Servers

Ok, so that title is a teeny bit of an overstatement. What we’re going to discuss today is not the Azure Stack you have heard so much about. No, today we’re talking about an Azure Stack HCI Cluster.

Essentially, it’s a Storage Spaces Direct cluster with brand new shiny marketing and management. While you can use the old “*.msc” tools, it’s a far, far better thing to use the new Windows Admin Center (WAC)! More on that soon.

I’m not going to dig into the details of how S2D works or the fine details of building clusters large and small. Instead, I want to share with you some of the reasons why small and medium businesses need to pay attention to this technology pronto.

  1. Scalability: Sure, most of the documentation you find today focuses on building clusters of 4-6 nodes or more. That’s great unless you are a small business that doesn’t need that kind of compute and storage. That’s where the two-node “back to back” cluster comes in. Think “entry-level”. The beauty of this solution is scalability. If you do outgrow this, you buy a new node, possibly a new network switch and you are up to a 3-node cluster!
  2. Compact: I had two old servers that took up two Rack Units (RU) of space each plus an external drive array that took another 2 RU. That totaled up to 6 Rack Units or 10.5 inches of rack space. The new servers that replaced them are 1 Rack Unit each for a total of 3.5 inches! That doesn’t even touch on noise, heat and power consumption.
  3. Fast and easy: Ok yes, it is a complicated technology. However, you can follow Dell’s Azure Stack HCI Deployment Guide and you’ll be 90% of the way there. It’s a mouthful, but it’s 40 pages of step-by-step instructions and checklists. *TIP* I’ve included some tips below for a couple of places that aren’t super clear (or at least weren’t to me).
  4. Well Documented: If you are like me and want to understand how this all works before you trust your mission-critical VMs to it, there is a ton of good information out there. Here are some options depending on your preferred method.

    • The Book: Dave Kawula’s Master Storage Spaces Direct. It’s over 300 pages of detailed explanation for a very reasonable price. Although you can get it free, those people spent a lot of time working on it, so pay something. You’ll see what I mean on the Leanpub site.
    • The Video: Pluralsight’s Greg Shields has a course on Storage Spaces Direct that is 4 hours of in-depth instruction on Windows Failover Clusters and Storage Spaces Direct in Windows 2019. If you aren’t a subscriber to Pluralsight, they offer trial periods!
    • The ‘Official’ Documentation: Microsoft’s Storage Spaces Direct Overview is the place for the official documentation.

There are a few tips and gotchas that I want to share from my experience.
First, the hardware. These servers aren’t cripplingly expensive, but they certainly aren’t disposable cheap either. This means there are a lot of critical hardware decisions to make. What’s more important to your organization – budget, speed, or a balance? On one end, you can go all-flash storage, which will give you a fast system, but the price tag goes up fast too. The less expensive but slightly more complicated setup is a hybrid of SSD and HDD storage. Making certain that you have the right mix of memory, storage, and network adapters can be a daunting task.

Honing a specification and shopping it around to get the perfect balance of form, function, and price is great if you are a hobbyist or plan on building out a ton of these at one time.

However, for IT admins in most small companies, the more important thing is that it gets done quickly, quietly, and correctly. The business owners don’t care about the fine nuances. They want to start realizing the business benefits of the new technology.

I chose a much more economical option, both in time and cash. I searched Dell’s website for “Microsoft Storage Spaces Direct Ready Nodes” and picked out a pair of their “Ready Nodes” that looked like they matched my budget and needs.

Then it was a small matter of reaching out to my Dell team. My sales rep put me in touch with their server/storage specialist. He asked a couple of questions about workloads, storage, and networking. Presto, we had our servers on order.

*Pro-tip* Buy the fiber patch cables at the same time. They are no less expensive elsewhere and you have less chance of getting crap cables.

If you don’t already have a relationship with Dell, there are several other Microsoft certified hardware vendors. There is a list here: http://bit.ly/S2D2NodeOptimized

Tips for the build

You’ve only got two servers to configure, so typing each command is feasible. However, in the interest of both documenting what I did and saving on silly typos, I opened up VS Code and wrote up all the PowerShell commands described in the Dell docs.

The steps (much simplified) look something like the list below, with a condensed PowerShell sketch after it:

  1. Install OS if not preinstalled and then patch to current.
  2. Plan and Map out IP networking. Use the checklists in the back of the Dell guide to map out your IP scheme. It’s a time saver!
  3. Pre-stage the Active Directory accounts for the Cluster as documented here. Trust me, it’s faster to do it on the front side than after the fact.
  4. Install Windows Features on each node
  5. Virtual Switch and Network configuration on each node
  6. Test-Cluster and Review/Remediation
  7. Create Cluster – with no storage
  8. Enable Storage Spaces Direct on the cluster.
  9. Configure a witness – either another server on the site or a cloud witness.
  10. Create virtual disks – creating the cluster and enabling Storage Spaces Direct only creates a storage pool; it does not provision any virtual disks in the pool.
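For reference, here is a condensed PowerShell sketch of steps 4 through 10 (skipping the virtual-switch configuration, which is hardware-specific). It uses the generic failover clustering and Storage Spaces Direct cmdlets rather than Dell’s exact commands, and the node names, addresses, witness account, and volume size are placeholders; the Dell guide remains the authority.

# Condensed sketch of steps 4-10; node names, IPs, and sizes are placeholders.
$nodes = 'HCI-NODE1', 'HCI-NODE2'

# Step 4: install the required roles and features on each node
Invoke-Command -ComputerName $nodes -ScriptBlock {
    Install-WindowsFeature -Name Hyper-V, Failover-Clustering, FS-FileServer -IncludeManagementTools -Restart
}

# Step 6: validate the configuration and review the resulting report
Test-Cluster -Node $nodes -Include 'Storage Spaces Direct', 'Inventory', 'Network', 'System Configuration'

# Step 7: create the cluster with no storage
New-Cluster -Name 'HCI-CLUSTER' -Node $nodes -StaticAddress 10.0.0.50 -NoStorage

# Step 8: enable Storage Spaces Direct (this creates the storage pool)
Enable-ClusterStorageSpacesDirect -CimSession 'HCI-CLUSTER'

# Step 9: configure a cloud witness (a file share witness on another server also works)
Set-ClusterQuorum -Cluster 'HCI-CLUSTER' -CloudWitness -AccountName 'mystorageacct' -AccessKey '<storage-account-key>'

# Step 10: carve a volume out of the new pool
New-Volume -CimSession 'HCI-CLUSTER' -FriendlyName 'Volume01' -FileSystem CSVFS_ReFS -StoragePoolFriendlyName 'S2D*' -Size 1TB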

Next time, we’ll go into depth on the management of our newly built Azure Stack HCI setup!