Note to Readers

This post also gets picked up by myITforum and published there. It started off with CM12 lab building issues and has morphed into covering CM12 in general.


When will Microsoft ever get Role Based Access (RBA) working for Automatic Deployment Rules (ADRs)? I need to know that a server admin can use an ADR to set up his patches and that a workstation admin can't go in and edit the server ADRs. And vice versa.

Well, RBA is there. Already. Right now. At least in CM12 R2 it is. Was it always there? I could swear that when RTM came out this wasn't possible. But I verified this works yesterday. What isn't there is the option to right-click an ADR and assign the scope, but that's really not important.

The server admin can see the workstation admin's ADRs, but all the properties are grayed out and no changes can be made. The guts of this (as with all RBA) revolves around the collections each admin has access to. When a server admin creates an ADR which targets his collection that a workstation admin doesn't have access to, RBA kicks in and protects the admin.

So what's not to like about ADRs now?

Well, other than wishing they'd use saved searches instead of filters (which is another DCR submitted long ago), not much. I have just one thing driving me nuts before I let the admins know that they can start using ADRs now. Packages.

You can't make an ADR without filling out the package prompts in the wizard. I'd have to let these admins also make patch packages on their own. And I can even grant that specific feature in our SUM role. So why could this be bad, especially if our single-instance store in the Content Library is saving us space?

Well, for one, it isn't saving us space on the source files (and for that I really need to move that share to a dedupe volume). The other is that one admin could download a patch everyone is using and later just delete it, breaking a lot of deployments. Sure, I could fix that by quickly downloading the patch myself, but that could leave clients sitting around for a day before they retry. Maybe I'm overthinking this?

Ignore Ignite?

At our last user group meeting we discussed the inevitability of the cloud in IT and what that would mean for the future of IT Pros. One thing we all agreed on was that knowing PowerShell was probably the best investment of time right now for hope of having a meaningful job down the road (and heck, really today). It was also rather clear that for most attendees, we still have a hard time just doing today's job and continue to look for help via our user group and conferences. Microsoft Ignite came up and few seemed interested in attending. Why not?

Ignite is seen more as a crowded marketing show; a search shows 91 sessions listed for System Center (but that is a cloudy list), plus crowded hotels and daily busing to the convention center. But we have so many better options today:

Each conference is on track to repeat around the same time and location each year so attendees can plan on making at least one and budget for them in advance.

These smaller conferences give attendees a better chance to network with others. With SCU, you can attend a user group broadcasting it in your area so that you can talk about the sessions you just saw with the rest of the group and go over common issues and ideas. And SCU is free. So you have no excuse not to go. Even if you have no local group you can watch from home or work. The speakers there are all the top speakers out there. MNSCUG plans to simulcast SCU next week.

I went to my first Connections conference last year in Vegas and was surprised how well it went. Smaller rooms and a crowd not spread too far from System Center. In fact, many of the CM sessions bored me simply because the product hasn't changed much over the past few years, so I found myself drifting into SQL sessions (something all System Center products rely on). They were great. There should be a good 80-90 System Center sessions this year. And the Aria is just a gorgeous hotel!

And then there's my favorite: MMS. It's right here at the MoA. It's just 3 days, but very long days. Early risers can start with birds of a feather sessions and sessions can start as late as 6pm (some with beer served!). Small rooms and many great speakers where "attendees don't get lost in the crowd." Feedback for the 1st year was overwhelmingly positive. An evening party plus the mall and a great bar right at the hotel make after hours mingling with others easy and fun. No busing to a convention center, no long lines, no crappy catered food. We've also revised Henry Wilson's old comparison doc as it might help get you funding. And MMS sessions from 2014 are still up to give an idea of what 2015 sessions should look like. And we just got word that our dates should be Nov 9-10-11 this year.

How to melt a SUP

We have 3 primary sites under a CAS (bad, but we have no choice with so many clients). Because we also have Nomad, we don't care where clients get assigned. We care only that each site has roughly the same client count as the others. But we drifted about 30K clients too many on one site and simply made use of CM12 R2's function to move clients. So we moved them to level set the count.

The downside, and we knew this, was that each client would have to do a full inventory and SUP scan. That's a lot of traffic but we've done this before without issue. But this time we melted the SUPs with many full scans. And the wonderful Rapid Fail detection built into IIS decided to protect us by stopping our WSUS App pool. Late at night.

Now in CM12 post-SP1 (we're on R2), clients make use of the SUP list, which lists all possible SUPs available. Clients pick one SUP off that list and stick to it. They never change unless they can't reach their SUP after 4 attempts (30 minutes between each; the 5th attempt goes to a new SUP). Well, with the app pool off, all clients trying to scan would fail and start looking for new SUPs. A new SUP means a full scan. A full scan from 110K clients is far worse than from just 10K. Needless to say, our SUPs were working very hard the next morning to serve clients. On a normal day the NIC on one of our SUPs shows about 1 Mbps of traffic, but after starting the WSUS App pool we were at over 850 Mbps going out per SUP.

Disabling Rapid Fail is one nice fix to help keep that app pool from stopping, but we also increased the app pool's Private Memory Limit from 5 GB to 20 GB (the SUPs have 24 GB of RAM, so we were clearly wasting most of it). I know of another company with 85K clients on 2 SUPs who boosted their RAM from 24 GB to 48 GB to help IIS serve clients. Another option is to add more SUPs, but RAM is probably cheaper than another VM. The default Private Memory Limit is 5 GB, so for those of us weirdos with lots of RAM, it makes sense to crank this up if you can. We actually did this long ago, but we're thinking the Server 2012 R2 upgrade over Server 2012 wiped our settings out.
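Both changes can be made without touching the IIS GUI. This is just a sketch, assuming the default WsusPool app pool name; note the memory value is in KB:

```powershell
Import-Module WebAdministration

# Stop Rapid Fail Protection from killing the pool under heavy scan load
Set-ItemProperty IIS:\AppPools\WsusPool -Name failure.rapidFailProtection -Value $false

# Raise the Private Memory Limit; the value is in KB, so 20 GB = 20971520
Set-ItemProperty IIS:\AppPools\WsusPool -Name recycling.periodicRestart.privateMemory -Value 20971520
```

Run it on each SUP; a quick `Get-ItemProperty IIS:\AppPools\WsusPool -Name recycling.periodicRestart` afterward confirms the setting stuck.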

By the way, the obvious 'treatment' during such a meltdown is to throttle IIS. We set our servers down to 50 Mbps and the network team was happy; your setting will vary based on client count and bandwidth. Our long term insurance here will be QoS. UPDATE: Jeff Carreon just posted a tidbit on how to throttle quickly in case of an emergency using PowerShell.
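I haven't reproduced Jeff's script here, but the idea looks roughly like this (a sketch only, assuming the default site name; maxBandwidth is in bytes per second, so 50 Mbps works out to 6,250,000):

```powershell
Import-Module WebAdministration

# Emergency brake: throttle the WSUS site's bandwidth to roughly 50 Mbps
Set-WebConfigurationProperty -Filter "/system.applicationHost/sites/site[@name='Default Web Site']/limits" `
    -Name maxBandwidth -Value 6250000
```

Setting the value back to 4294967295 (the default, effectively unlimited) removes the throttle once the storm passes.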

So how do we keep our settings? We ask Sherry who knows DCM! Read more on her CIs to enforce our settings here.

CM12 R2 and SQL14 - Sneak Peek

Does Microsoft support SQL 2014 on ConfigMgr 2012 R2 yet? Nope. But we should get word soon about it, and I'm basing that speculation on the fact that I went ahead and tried SQL14 in my lab last week and it works! A week later, and into Patch Week, it's still working.

I simply stopped/disabled CM, ran the SQL14 upgrade over my SQL12 CU9 install, rebooted, and enabled/started CM. The only issue I ran into is one Steve Thompson blogged about way back for the old 08R2 upgrade to SQL12. Same issue and same easy fix.
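The stop/disable half of that dance can be scripted. This is a sketch, not a supported procedure; the two service names below are the usual site-server services on my box, so verify yours first:

```powershell
# Stop and disable the main CM site services before the SQL upgrade
'SMS_EXECUTIVE','SMS_SITE_COMPONENT_MANAGER' | ForEach-Object {
    Stop-Service -Name $_ -Force
    Set-Service  -Name $_ -StartupType Disabled
}

# After the SQL14 upgrade and reboot, reverse it
'SMS_EXECUTIVE','SMS_SITE_COMPONENT_MANAGER' | ForEach-Object {
    Set-Service  -Name $_ -StartupType Automatic
    Start-Service -Name $_
}
```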

Now I'm not saying you should go upgrade yourself unless you like playing in your lab too, but at least this went more smoothly than SQL12 did. So I'm very hopeful we'll get word on support soon.

Now the bad news. The item I had been looking forward to seeing in SQL14 the most is the ability to right click a table and move it into memory. In-Memory tables sound like they'd be amazing for performance. But this feature doesn't look promising for CM12 support. Looking at a few results I can see that Microsoft would have to rewrite a lot of tables before we could make use of this feature.

In-Memory Optimization Advisor

CM12 MP and DP with no Server GUI

Here is something I've wanted to try forever - heck, since back when they called it Server Core.

For my role servers like the MP or DP servers, would CM still work if I remove the GUI from the OS?  Because Server 2012 R2 lets you take the Windows shell off and put it back on, it's easy to test.  So I did just that.

I mix my MP and DP servers on the same VM.  So my test here is to see if those roles will still work after I take the UI away (and manage the servers strictly with PowerShell).


By using Server Manager, I ask to remove the User Interfaces and Infrastructure feature.  Well, that's a bit too extreme because we'd evidently lose the IIS BITS Server Extensions and Remote Differential Compression.  And I know I need those for CM.  So I back off and select only the Server Graphical Shell for removal (essentially Explorer and IE).  That works!
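If you'd rather skip Server Manager entirely (fitting, given the point of the exercise), the same removal is one cmdlet. A sketch:

```powershell
# Remove only the shell (Explorer/IE); Server-Gui-Mgmt-Infra stays in place
# so the management tools and the features CM needs keep working
Uninstall-WindowsFeature Server-Gui-Shell -Restart

# Putting it back later is just as easy:
# Install-WindowsFeature Server-Gui-Shell -Restart
```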

So why am I even playing with it?  Theoretically, the loss of the UI means a smaller attack surface so my server should be safer.  And it could mean fewer patches might be needed in the future which could lead to fewer reboots and more uptime.

In reality, I doubt I'm gaining much here.  The actual best benefit would be that my team is forced to manage more using PowerShell and quit playing with things one at a time in a UI.  When you RDP to this server, you just get a cmd box and no Explorer.  This isn't supported by Microsoft yet as far as I know, but because my MP and DP logs (and CM client logs) look good, I'm sure it's simply a matter of Microsoft not having tested this setup yet to support it.

I'll let this server in the lab sit like this for a couple months and decide then if I'd like to do the rest in the lab (role servers only; I highly doubt a primary site could work like this).  Also, I have other internal apps to consider beyond CM.  Like, is Symantec Endpoint Protection still fine?  Other server-based apps I'm required to run also need to be checked.

Many apps might fail if you start with no UI, but it seems they mostly work if you remove it after the install.  And if I change my mind about this or run into an issue, it's easy to put the Server Graphical Shell back on.  Oh, and Kaido has a tip regarding this as your source files for the GUI can become stale.

Considerations for Enabling Distributed Views on a CAS

So what are distributed views and why use them?

I recently posted about an issue where I was forced into enabling distributed views (DV) for my primary sites.  Technically, I enabled DV for the hardware inventory link (there are two others as well).  We had looked at using DV earlier, but even in SP1 there were issues.  But with R2, we’ve found them to be fully functional.

So what are DV?  Simply put, you tell your primary sites to keep client data instead of replicating it up to the CAS.  Normally DRS replicates data up via SQL (CM07 would copy files into central inboxes and let SQL import that data all over again).  DRS uses a variation of the SQL Service Broker (instead of merge or transactional replication) to move this data between sites.  But for 3 major links you can tell the primary site just to hold onto that data, and if you need it for reporting, SRS can grab it from the primary sites on the fly.

Distributed Views

CM accomplishes this on-the-fly generation of data via linked servers in SQL.  When you enable DV on a link to a primary site, all of the replica groups in that link stop and a linked server is created in SQL.  The local ConfigMgr_DViewAccess group on the primary site is populated with the machine account of the CAS as well as the execution account SRS uses.

Note that if you have other users running ad hoc queries against SQL and they require client information, you’d want to put their AD group or ID into that local group.
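Adding those users is a one-liner on the primary site server; the domain group below is a made-up example:

```powershell
# Run on the primary site server; DOMAIN\CM-SQL-Readers is a hypothetical group
# of ad hoc query users who need access to the DV linked server data
net localgroup ConfigMgr_DViewAccess "DOMAIN\CM-SQL-Readers" /add
```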

Now think about R2 and how SRS respects RBAC.  When a user wanting information about clients comes in via SRS, an RBAC function has to figure out what that user is permitted to see: he is simply not granted permission to see all collections or devices but is limited to his scope.  And for SRS to grab that data, which sits on the primary sites and not the CAS, we need to worry about constrained delegation.

Constrained delegation is where you set the machine account (CAS$) to be trusted to pass on a user's ID via Kerberos to a child site.  Open the CM console against the CAS and look at devices: it's CAS$ talking directly to the primary site to get you the data in the console.  It can do this because there is one hop from the CAS to the primary site.  But open the SRS site from a desktop, which connects to the CAS and has to get data from the primary, and you're making 2 hops and Kerberos won't recognize you.  To make that leap you need constrained delegation.


Note: the SPN on the server should match what you have granted in the image shown.
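A quick way to eyeball what's actually registered is setspn; the server name here is a placeholder:

```powershell
# List the SPNs registered for the primary site's SQL Server
# (PRI01SQL is a hypothetical machine account name)
setspn -L PRI01SQL
```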

Generally, constrained delegation should work without issue.  But we found in practice that certain scenarios can break it.  For everyone.  How?  We're thinking that anyone coming in from a non-trusted domain poisons the Kerberos cache, giving mixed results to users.  Sometimes users will be shown an anonymous logon error instead of seeing their SRS report results.  We had a case open with Microsoft for a week on the issue and they never found the source of the failure no matter how many data dumps we gave them.  So we temporarily cheated with local SQL accounts on the linked servers.

So back to the why of when you enable DV.  We were told by the lead developer of CM that we’d be crazy not to use DV if a primary site was next to a CAS.  And so we did test that and we actually ran with it for a while. But back in SP1 there were issues and we decided to just go back to full DRS.  But what about servers not close to the CAS?

I explained previously that enabling DV saved DRS from sending 1 terabyte of hardware inventory back to the CAS.  That is a lot of data no matter how you look at it.  From one point of view, it actually makes sense to enable DV everywhere: save your network all of that replication and leave the data on the primary servers.  Grab it only when you need to view it.  Heck, why isn't it the default?  Why didn't Microsoft just have CM use DV all the time?  The answer is that there is a bit of a cost.

Reports are obviously slower because the primary sites are not close: they have to run the query SRS wants locally and then send the data across the WAN.  That doesn't give the best experience to the end user.  But it gets worse.  We found that if we simply ran a query directly against each primary instead of going through the CAS, reports would run even faster.  Microsoft has yet to explain why, though they are looking into an answer for us.  I suspect that answer is going to be the explanation for why DV isn't enabled by default: reports are just going to be slower this way.

That isn’t all bad.  Recall that we can tell SRS to run reports on a schedule and cache the results when we want it to.  So if you set a report to run at 8am and a user wants that data off SRS at 9am, it can send the 8am results back nearly instantly to the user.  There is even a benefit here in that the user isn’t able to pound on the server by running the report often.  You get full control via SRS on how long that cache is maintained and how often SRS should just run the report in the 1st place.

And of course, this means extra work on your part to find the most commonly used reports and start setting them up for schedules in SRS.  That’s work.  Most admins will just take a pass and let the SRS user wait.  I’d caution against that for one reason only:  managers and execs tend to think a report is a reflection of how fast CM is.  A slow report must mean CM is on its last legs.  Better go find a new product!  So you’d really better consider some schedules on common reports.

Key takeaways here (the 1st being: don't set up a CAS) are that if you have a primary next to the CAS, you might very well want to enable DV for one or all replication links, and if you do, make sure your SPNs are cleanly set up and enable constrained delegation.  And watch that local DV group on the primary, as you might need to add other users who run CM queries.  Finally, the recommendation is to enable DV on a primary very close to the CAS to help speed those reports, but if your WAN links are fast enough like ours are, it does work if you enable it on others too.

Also note that turning DV back off on a replication link means a lot of data suddenly replicating all at once to your CAS, so keep that in mind before enabling DV in the 1st place or deciding you want to abandon it.