<p>Ephing Admin - Tips and Tricks for Windows Admins, by Ryan Ephgrave (feed generated by Jekyll, 2022-05-04, https://www.ephingadmin.com/feed.xml)</p>

<p>Passwordless PowerShell (published 2021-02-06, https://www.ephingadmin.com/PasswordLessPowerShell)</p>

<h1 id="passwordless-powershell---how-to-use-gmsas-in-your-scripts">Passwordless PowerShell - How to use gMSAs In Your Scripts</h1>
<p>One thing I’ve always hated doing with PowerShell is storing/retrieving passwords, because that always feels like a weak link in the security chain. I love running code as multiple accounts (my team probably has 75+ AD accounts we own), but hate everything else we have to do with those accounts.</p>
<p>A few years ago we heard about these things called gMSAs. They are accounts, managed by Active Directory, and are passwordless (not really, but you don’t have to care about the password)! Instead of getting a traditional password, you tell AD who is allowed to use that password, and then they can use the credential whenever they want! Sounds awesome, right?</p>
<p><img src="https://media.giphy.com/media/VGthqYKqyKhipYxK2s/giphy.gif" alt="Awesome" /></p>
<p>There’s a catch. They were made to be used by Windows and don’t have great support outside of that use case. I’m not Windows.</p>
<p><img src="https://media.giphy.com/media/qcoocjBhD5Zle/giphy.gif" alt="Quit playing games" /></p>
<p>So, years ago a team member (Jeff Scripter) wrote a function to get a PSCredential object from a gMSA. This work was based on the open source tool <a href="https://github.com/MichaelGrafnetter/DSInternals">DSInternals</a> which has functionality built in to get the password.</p>
<p>After seeing some talk on Twitter about gMSAs, I decided to write a module so everyone can use them like our team does!</p>
<p><img src="https://media.giphy.com/media/t75AqiyT97TmIZZf3V/giphy.gif" alt="Alright" /></p>
<p>I’m not going to go over setting up gMSAs, mostly because I already did that in the <a href="https://github.com/Ryan2065/gMSACredentialModule">documentation</a> for the module (at the bottom). I’m also not going to go super in-depth into the module, because it’s pretty small. Instead, I’m going to talk about how to set up your environment so you can achieve passwordless PowerShell!</p>
<p>First off, you’ll want to decide how you’ll get access to your script passwords. Your choices are realistically “AD User” or “AD Group”. We’ve set things up with an AD group and only one user in it, but do whatever makes you happy.</p>
<p>Now that you have your password retriever, make your gMSAs!</p>
<pre><code class="language-PowerShell">$ADGroupName = 'Not Password Retrievers' # to fool the hackers
$GMSAName = 'gMSASQLRead'
$DomainFqdn = 'home.lab'
$ServiceAccount = New-ADServiceAccount -Name $GMSAName -DNSHostName "$GMSAName.$($DomainFqdn)" -PrincipalsAllowedToRetrieveManagedPassword $ADGroupName -Enabled $true
</code></pre>
<p>With the above code, any AD object (computer or user) in the group “Not Password Retrievers” will be able to get the gMSA password. So just drop your PowerShell service account in that group.</p>
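<p>For completeness, dropping an account into that group is one line with the ActiveDirectory module. This is a sketch - the account name below is a placeholder for whatever your PowerShell service account is actually called:</p>

<pre><code class="language-PowerShell"># 'svc-PoshRunner' is a hypothetical name - substitute your own service account
Add-ADGroupMember -Identity 'Not Password Retrievers' -Members 'svc-PoshRunner'
# Group membership is evaluated at logon, so log the account off and back on
# (or reboot the computer, for computer accounts) before retrieving the password
</code></pre>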
<p>Next, use it!</p>
<pre><code class="language-PowerShell">Install-Module GMSACredential
$Cred = Get-GMSACredential -GMSAName 'gMSASQLRead' -Domain 'Home.Lab'
$Results = Invoke-GMSACommand -Credential $Cred -ScriptBlock {
# Code to query remote SQL server
}
</code></pre>
<p>So there’s a little to unpack in the above code. First, line 1 installs the module. Ok, we’re starting out easy.</p>
<p>Next, we get the gMSA credential from AD. This line will only work if you are running as the user who can get the password.</p>
<p>Lastly, I’m running Invoke-GMSACommand (which is based on Invoke-Command). Why am I not just running Invoke-Command? Invoke-GMSACommand creates a token for the gMSA account and then executes Invoke-Command as that token. This works <em>extremely</em> well when you are accessing resources off-box like a remote SQL server and you can’t run a remote PowerShell command against that server. Almost all my team’s scripting is done against remote servers so this is what we always use.</p>
<p><em>Note</em> if you only want to do something locally (ie, I’m on Computer1 and I want to access resources on Computer1) - feel free to just use Invoke-Command -ComputerName localhost -Credential $Cred.</p>
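<p>That local-only note in code form - a minimal sketch, assuming PS Remoting is enabled on the box:</p>

<pre><code class="language-PowerShell"># Local-only use: no token magic needed, plain Invoke-Command works
$Cred = Get-GMSACredential -GMSAName 'gMSASQLRead' -Domain 'Home.Lab'
Invoke-Command -ComputerName localhost -Credential $Cred -ScriptBlock {
    whoami  # should report the gMSA identity
}
</code></pre>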
<p>Aaaand that’s all there is to it! There’s not a lot to this module, just one command to get a credential and another to use it.</p>
<p>Please let me know if this does or does not work in your environment (it works in my lab), or if you have any suggestions for making it better - open an issue on <a href="https://github.com/Ryan2065/gMSACredentialModule/issues">GitHub</a>, or find me on <a href="https://twitter.com/ephingposh?lang=en">Twitter</a>.</p>
<p>Thanks for reading, and remember</p>
<p><img src="https://media.giphy.com/media/65GiuFnyEuMjm/giphy.gif" alt="BackstreetsBack" /></p>

<p>CMG for the lab - Free Lets Encrypt Certs! (published 2021-02-06, https://www.ephingadmin.com/LetsEncryptWithCMG)</p>

<p>Let’s say you’re an IT Pro with your own ConfigMgr lab, and you want to hook up CMG, but you don’t like messing with certificates. It seems like your only option is to buy a certificate. But… you’re cheap!</p>
<p>Well, you can use Let’s Encrypt to generate a trusted certificate for you if you already have a domain, with just a little bit of command-line work!</p>
<p>First up, start up WSL. If you don’t have WSL, Step 0 is to install it!</p>
<p>Let’s Encrypt recommends a tool called Certbot (made by the EFF) to do your certificate work. So you first have to get that tool. Run these commands to get it:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install certbot
sudo apt-get install openssl
</code></pre></div></div>
<p>The first command adds the certbot repository to your computer, so you can install directly from the command line. Then apt-get update refreshes the package lists so apt knows about the new repository, and lastly you do the installs: one for certbot, and one for openssl so we can use its tools to export the cert to a pfx.</p>
<p>Now that you have certbot installed, generate the certificate!</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo certbot -d mycoolanduniquecmgname.ephingadmin.com --manual --preferred-challenges dns certonly
</code></pre></div></div>
<p>You probably don’t want to generate a cert for my domain, so change that part. The string mycoolanduniquecmgname needs to be the name of your CMG server, and it needs to be unique across <em>all</em> CMG instances in your Azure cloud. So be unique - pick something other than cmg1.</p>
<p>The above command will start the process of generating a certificate and give you a manual step to do. The manual step comes after you say (y) to their questions, and give your email. It’ll pop up with this prompt:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Please deploy a DNS TXT record under the name
_acme-challenge.mycoolanduniquecmgname.ephingadmin.com with the following value:
aa2DaK2Ckg-IaR17YDDEMWb2SJdSwaxRrx6S9T3y3BB
Before continuing, verify the record is deployed.
</code></pre></div></div>
<p>What does that mean? Let’s Encrypt doesn’t just give out certs for your domain to anyone, so you have to prove you own it. How do you do that? Create a TXT record in your domain. Look up your host’s instructions. For Cloudflare, go to their <a href="https://dash.cloudflare.com/login">management dashboard</a>, click on your domain, click DNS, then click Add Record.</p>
<p>Once the record is added, it doesn’t take effect immediately!!!!! You must verify it’s implemented before hitting enter in the manual steps.</p>
<p>You can validate it with nslookup:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>nslookup -q=TXT _acme-challenge.mycoolanduniquecmgname.ephingadmin.com
</code></pre></div></div>
<p>You should get this output if it’s working:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Server: UnKnown
Address: 192.168.85.154
Non-authoritative answer:
_acme-challenge.mycoolanduniquecmgname.ephingadmin.com text =
"aa2DaK2Ckg-IaR17YDDEMWb2SJdSwaxRrx6S9T3y3BB"
</code></pre></div></div>
<p>Now, press enter and it will finish!</p>
<p>You’ll get some good output that tells you where the cert is - that’s super helpful!</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>IMPORTANT NOTES:
- Congratulations! Your certificate and chain have been saved at:
/etc/letsencrypt/live/mycoolanduniquecmgname.ephingadmin.com/fullchain.pem
Your key file has been saved at:
/etc/letsencrypt/live/mycoolanduniquecmgname.ephingadmin.com/privkey.pem
Your cert will expire on 2022-08-02. To obtain a new or tweaked
version of this certificate in the future, simply run certbot
again. To non-interactively renew *all* of your certificates, run
"certbot renew"
</code></pre></div></div>
<p>Now, you have to go to that directory, but it’s protected! So, enter sudo super mode!</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo su
cd /etc/letsencrypt/live/mycoolanduniquecmgname.ephingadmin.com
</code></pre></div></div>
<p>Lastly, create the cert. It will ask for an export password - this is the password that will protect the pfx, so you’re creating one here. You could leave it blank, but when I did that and tried to use it, blank was not a password I could give. So be sure you add a password!</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>openssl pkcs12 -export -out /tmp/cmgCert.pfx -inkey privkey.pem -in cert.pem -certfile chain.pem
</code></pre></div></div>
<p>This writes the cert to the /tmp folder as cmgCert.pfx.</p>
<p>Lastly, let’s bring it to civilization so we can access it in Windows.</p>
<p>Create a folder at the root of C - or do it somewhere else, I just hate typing paths with wrong slashes.</p>
<p>I created <code class="language-plaintext highlighter-rouge">C:\CMGCert</code></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cp /tmp/cmgCert.pfx /mnt/c/cmgcert
</code></pre></div></div>
<p>Now you have a cert you can import to ConfigMgr for your cloud CMG!</p>
<p>Join me in 30 days when it expires and I have to figure out how to automatically update it!</p>

<p>CMPivot - How to change the scope (published 2020-10-02, https://www.ephingadmin.com/CMPivotScope)</p>

<h1 id="cmpivot---changing-the-scope">CMPivot - Changing the scope</h1>
<p>CMPivot is cool and all, but you are required to have the Default security scope in order to use it. This really makes it hard to get your users to adopt it if you have RBAC set up the way Microsoft has been asking us to do it for forever.</p>
<p>“Here’s a great tool for your helpdesk, just let them see everything!”</p>
<p><img src="https://www.ephingadmin.com/images/2020/ComeOn.gif" alt="ComeOn" /></p>
<p>So I’ve played around with it on the backend, and have come up with a solution that lets you select what SCOPEs can run CMPivot!</p>
<p>First, the proof!</p>
<p>I created a SCOPE called CMPivotDemo and assigned a user to that scope, removing Default. I then tried to run CMPivot:</p>
<p><img src="https://www.ephingadmin.com/images/2020/CantRunCMPivot.jpg" alt="CantRunCMPivot" /></p>
<p>As we all know, CMPivot is a fancy UI on top of a CM Script. If you just assign the CM Script called “CMPivot” to the scope your user is in, they can then launch CMPivot! This is achievable through PowerShell:</p>
<pre><code class="language-PowerShell">$Script = Get-CMScript -ScriptName 'CMPivot' -Fast
Add-CMObjectSecurityScope -Name 'CMPivotDemo' -InputObject $Script
</code></pre>
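<p>If you want to confirm the assignment took, the ConfigMgr module can also list the scopes on an object - a quick sanity check, assuming Get-CMObjectSecurityScope works the way I remember:</p>

<pre><code class="language-PowerShell"># List the security scopes now assigned to the CMPivot script
$Script = Get-CMScript -ScriptName 'CMPivot' -Fast
Get-CMObjectSecurityScope -InputObject $Script
</code></pre>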
<p>In your environment, just change <code class="language-plaintext highlighter-rouge">CMPivotDemo</code> to the name of the scope you want to access CMPivot. Run the script and…</p>
<p><img src="https://www.ephingadmin.com/images/2020/CMPivotWorks.gif" alt="CMPivotWorks" /></p>

<p>EFPosh - How I use it (published 2020-08-10, https://www.ephingadmin.com/EFPosh-HowIUseIt)</p>

<h1 id="how-i-use-efposh">How I use EFPosh</h1>
<p>I developed EFPosh just a few months ago to make interacting with SQL easy from PowerShell, but the tooling has evolved a lot the more I use it at work. I thought it’d be interesting to write how I use this PowerShell module in my day-to-day work to give people their own ideas.</p>
<h2 id="database-context">Database Context</h2>
<p>The idea behind a database context is you build out the database schema in Entity Framework first, then you can query the DB, edit data, or even build the database from scratch. In working with EFPosh, I’ve gravitated towards building a database context file, and then just loading the file in my scripts. EFPosh has the ability (and I use it all the time) to build out your context for you!</p>
<pre><code class="language-PowerShell">$SplattedParams = @{
'MSSQLServer' = 'Lab-CM.Home.Lab'
'MSSQLDatabase' = 'CM_PS1'
'MSSQLIntegratedSecurity' = $true
'FilePath' = "$($env:Temp)\CMContext.ps1"
'Overwrite' = $true
'EntitesToMap' = @(
'v_Collections',
'v_R_System'
)
}
Start-EFPoshModel @SplattedParams
</code></pre>
<p>The above code will create a DBContext file CMContext.ps1 with parameters for Server, Database, and Credential, and it has all the plumbing to query the views v_Collections and v_R_System.</p>
<p>At work, I’ll generally wrap the above in a function like this to handle the creation if it doesn’t exist:</p>
<pre><code class="language-PowerShell">Function Get-CMContext {
Param(
$Server,
$Database
)
$ContextFile = "$($env:Temp)\CMContext.ps1"
if(-not ( Test-Path $ContextFile )){
$SplattedParams = @{
'MSSQLServer' = 'Lab-CM.Home.Lab'
'MSSQLDatabase' = 'CM_PS1'
'MSSQLIntegratedSecurity' = $true
'FilePath' = $ContextFile
'Overwrite' = $true
'EntitesToMap' = @(
'v_Collections',
'v_R_System'
)
}
Start-EFPoshModel @SplattedParams
}
return . $ContextFile -Server $Server -Database $Database
}
</code></pre>
<h2 id="querying-data">Querying data</h2>
<p>Once I have the context, the rest is a breeze! I’ll re-use the function from above to get the context and then query for a collection:</p>
<pre><code class="language-PowerShell">$Context = Get-CMContext -Server 'Lab-CM.Home.Lab' -Database 'CM_PS1'
Search-EFPosh -Entity $Context.v_Collections -Expression { $_.CollectionName -eq 'MyNewDeviceCollection' }
</code></pre>
<p>In Entity Framework, entities are classes that map to database objects. If I open up my CMContext.ps1 file, I can see the entity v_Collections:</p>
<pre><code class="language-PowerShell">Class v_Collections {
[int] $CollectionID
[string] $SiteID
[string] $CollectionName
[string] $LimitToCollectionID
[string] $LimitToCollectionName
}
</code></pre>
<p>Each property on this class corresponds to a column on the SQL view. I can edit this class and remove properties to limit what’s brought back from SQL, or I can leave it as is. For the sake of space, I edited the class to only include properties I care about. I highly recommend you edit these and only bring back the columns you care about.</p>
<p>Back to the searching example!</p>
<pre><code class="language-PowerShell">Search-EFPosh -Entity $Context.v_Collections -Expression { $_.CollectionName -eq 'MyNewDeviceCollection' }
</code></pre>
<p>So now that I know what entities are, we can see above I’m telling EFPosh to query the view v_Collections for the context $Context (which has the server/connection info). Then, I’m telling it to filter v_Collections and only return the collections that have a CollectionName of MyNewDeviceCollection.</p>
<p>Run it and you get great output</p>
<pre><code class="language-PowerShell">PS C:\Users\Ryan> Search-EFPosh -Entity $Context.v_Collections -Expression { $_.CollectionName -eq 'MyNewDeviceCollection' }
CollectionID : 16777229
SiteID : PS100014
CollectionName : MyNewDeviceCollection
LimitToCollectionID : SMS00001
LimitToCollectionName : All Systems
</code></pre>
<p>What is happening on the backend? There’s an “easter egg” (undocumented) feature that lets you see what Entity Framework does in the background. Create an environment variable called EFPoshLog and set it to ‘true’, then re-create the DB context.</p>
<pre><code class="language-PowerShell">$env:EFPoshLog= 'true'
$Context = Get-CMContext -Server 'Lab-CM.Home.Lab' -Database 'CM_PS1'
Search-EFPosh -Entity $Context.v_Collections -Expression { $_.CollectionName -eq 'MyNewDeviceCollection' }
info: Microsoft.EntityFrameworkCore.Infrastructure[10403]
Entity Framework Core 2.2.6-servicing-10079 initialized 'PoshContext' using provider 'Microsoft.EntityFrameworkCore.SqlServer' with options: None
info: Microsoft.EntityFrameworkCore.Database.Command[20101]
Executed DbCommand (5ms) [Parameters=[], CommandType='Text', CommandTimeout='30']
SELECT [p].[CollectionID], [p].[CollectionName], [p].[LimitToCollectionID], [p].[LimitToCollectionName], [p].[SiteID]
FROM [dbo].[v_Collections] AS [p]
WHERE [p].[CollectionName] = N'MyNewDeviceCollection'
CollectionID : 16777229
SiteID : PS100014
CollectionName : MyNewDeviceCollection
LimitToCollectionID : SMS00001
LimitToCollectionName : All Systems
</code></pre>
<p>What happened is EFPosh takes your PowerShell binary expression, translates it to a Linq binary expression, and then Entity Framework translates that to a SQL query. If you think this might be prone to errors, yeah. It’s not perfect. Because of this the only supported expressions right now are ones that have a left and right side of the expression (ie, <code class="language-plaintext highlighter-rouge">$_ -eq 5</code>). Expressions that simply run methods are unsupported (ie, <code class="language-plaintext highlighter-rouge">$_.Equals(5)</code>).</p>
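<p>To make that concrete, here’s a sketch of both shapes against the v_R_System view (the column name is just illustrative):</p>

<pre><code class="language-PowerShell"># Supported: a binary expression with a left side and a right side
Search-EFPosh -Entity $Context.v_R_System -Expression { $_.Netbios_Name0 -eq 'LAB-CM' }

# Unsupported: a bare method call - there's no left/right side to translate
# Search-EFPosh -Entity $Context.v_R_System -Expression { $_.Netbios_Name0.Equals('LAB-CM') }
</code></pre>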
<p>One really cool thing about entity framework is the ability to create complex logic with little effort. One function I use a lot at work is getting the children of a parent collection recursively. There’s not a lot to the function, so let’s just put it out there:</p>
<pre><code class="language-PowerShell">Function Get-RecursiveCollections{
Param(
[string[]]$CollectionNames
)
$Results = Search-EFPosh -Entity $Context.v_Collections -Expression { $0 -contains $_.LimitToCollectionName } -Arguments @(,$CollectionNames)
if($Results.Count -gt 0){
Get-RecursiveCollections -CollectionNames $Results.CollectionName
}
$Results
}
</code></pre>
<p>This function calls Search-EFPosh and passes the expression <code class="language-plaintext highlighter-rouge">{ $0 -contains $_.LimitToCollectionName }</code>. What in the world is $0? This corresponds to index 0 in the Arguments array, so we also have <code class="language-plaintext highlighter-rouge">-Arguments @(,$CollectionNames)</code>. Before you mosey on down to the comments to talk about the “misplaced” comma, that’s supposed to be there! The unary comma wraps $CollectionNames in an outer array, so -Arguments receives a single element: the whole array. Without it, @($CollectionNames) flattens back to just $CollectionNames, and $0 would be the first collection name instead of the full list - which is not what we want!</p>
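<p>You can see that unrolling behavior in isolation - this is plain PowerShell, nothing EFPosh-specific:</p>

<pre><code class="language-PowerShell">$CollectionNames = @('Collection 1', 'Collection 2')
@($CollectionNames).Count    # 2 - @() flattens an existing array
@(,$CollectionNames).Count   # 1 - the comma wraps it as a single element
</code></pre>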
<p>So what does $0 -contains $_.LimitToCollectionName turn into? Here’s the generated SQL!</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code> SELECT [p].[CollectionID], [p].[CollectionName], [p].[LimitToCollectionID], [p].[LimitToCollectionName], [p].[SiteID]
FROM [dbo].[v_Collections] AS [p]
WHERE [p].[LimitToCollectionName] IN (N'Collection 1')
</code></pre></div></div>
<p>So when we say “Array” -contains “propertyName” it turns into the SQL equivalent of Column IN ().</p>
<p>Back to our function, I then check if there are any results, and if there are, call this function again to get the children! Lastly, return the results.</p>
<p>With 5 minutes of programming, I was able to create a function that dynamically generates SQL queries to recursively pull back data from SQL!</p>
<p>So that’s a little bit of how I use EFPosh at work, from context management to using that context, it’s all pretty easy!</p>

<p>EFPosh - Job State (published 2020-05-26, https://www.ephingadmin.com/EFPosh-SavingJobState)</p>

<h1 id="efposh---job-state">EFPosh - Job State</h1>
<p>For over a year I’ve been trying to put Entity Framework inside of PowerShell because of how powerful the toolset is in C#. I finally made headway a month ago and was able to create the PowerShell module <a href="https://github.com/Ryan2065/EFPosh">EFPosh</a>.</p>
<p>It’s hard to explain exactly how amazing Entity Framework is, so I’m going to go through a few use cases to show it off!</p>
<h2 id="job-tracker">Job Tracker</h2>
<p>One thing that’s not the easiest to do in PowerShell is creating a job tracker. I was working recently on a migration job where we had distinct entities in one system and wanted to move them to another. There were thousands of objects and each migration takes time to run, so the risk of this long-running job being stopped was high. I wanted an easy way to look up whether an object was already processed, and skip it if it was.</p>
<h3 id="building-a-job-tracking-class">Building a Job Tracking Class</h3>
<p>Getting started with Entity Framework is as easy as building a PowerShell class! First, we have to put some thought into this class, mainly we have to identify something called the Primary Key. The Primary Key is something that’s unique and will always be unique. If I’m migrating SCCM collections, that’s going to be the Collection Id. If I’m migrating Active Directory objects, it’s the SID. The point is data you use usually has a primary key and you can just use the existing one. For my purposes, I’m migrating collections so I’ll use the collection Id.</p>
<pre><code class="language-PowerShell">Class JobTracker {
[string]$CollectionId
[bool]$Complete
}
$Tables = @( New-EFPoshEntityDefinition -Type 'JobTracker' -PrimaryKeys 'CollectionId' )
$Context = New-EFPoshContext -SQLiteFile '.\JobTracker.sqlite' -Entities $Tables -EnsureCreated
</code></pre>
<p>The above code will create a Sqlite database in the current directory called JobTracker with a table JobTracker!</p>
<p>To create a database, all Entity Framework needs is a class describing each table and to be told what kind of database to create. First, I create a $Tables array saying we want a table of type JobTracker with CollectionId as the primary key. Then I create the context with the -EnsureCreated switch, which will create the database if it does not exist!</p>
<blockquote>
<p>The Context (output from New-EFPoshContext) is required for everything in Entity Framework. You can keep it in a variable or re-create it as needed.</p>
</blockquote>
<p>Now, the details of how we get collections to migrate doesn’t matter for this blog, so let’s handwave it and get all those collections!</p>
<pre><code class="language-PowerShell">$CollectionsToMigrate = Get-CollectionsToMigrate
</code></pre>
<p>After getting the collections, we’ll skeleton out a loop:</p>
<pre><code class="language-PowerShell">Foreach($Collection in $CollectionsToMigrate) {
}
</code></pre>
<p>First we want to pull up the record for this collection:</p>
<pre><code class="language-PowerShell"> $Record = Search-EFPosh -Entity $Context.JobTracker -Expression { $_.CollectionId -eq $Collection.CollectionId } -FirstOrDefault
</code></pre>
<p>Entity Framework uses Linq BinaryExpressions to build queries. PowerShell doesn’t have a good interface for those, but it has BinaryExpressionAst objects, which are the expressions found in Where-Object filters, if statements, etc. EFPosh converts these to Linq binary expressions, so you can write your expression in normal PowerShell and it’s translated behind the scenes to what it needs to be. Above, we say what table we want to query against, then give it the expression we are looking for. Simple!</p>
<p>After we get the record, we should look to see if it contains anything. If it did not find a record, create one to track progress:</p>
<pre><code class="language-PowerShell">if($null -eq $Record){
$Record = [JobTracker]::new()
$Record.CollectionId = $Collection.CollectionId
$Context.Add($Record)
$Context.SaveChanges()
}
</code></pre>
<p>To add a new record to the database, you simply create a new object of type JobTracker, and fill out all the information. In this case, we only have to fill out CollectionId. Then, add it to the context, and then save the changes.</p>
<p>Now that we have a $Record object (either from the database or newly created), let’s evaluate it:</p>
<pre><code class="language-PowerShell">if($false -eq $Record.Complete){
Write-Host "Migrating $($Collection.CollectionId)"
Start-CollectionMigration -Collection $Collection
$Record.Complete = Test-CollectionMigration -Collection $Collection
$Context.SaveChanges()
}
</code></pre>
<p>Here, we say if the Record is not complete, start the migration. After running the migration, test if it was complete with Test-CollectionMigration. Store the results in <code class="language-plaintext highlighter-rouge">$Record.Complete</code> and run <code class="language-plaintext highlighter-rouge">$Context.SaveChanges()</code> to save the results! Entity Framework has a Change Tracker that runs in the background. Any changes to entities are automagically tracked and when you call SaveChanges() are immediately written to the database.</p>
<p>And that’s it! We are now saving the job status to the database, so if we have to stop the job we can easily resume, or if there’s some bug in our code and only 90% of collections migrated, we can fix the bug and only resume the 10% that didn’t migrate!</p>
<p>How cool was that? This blog showed you how to create a database, query, add, and modify data all without writing ANY SQL!</p>
<p>You can see a <a href="https://gist.github.com/Ryan2065/436d851fc2d45d3804db7ca0d2057fa3">gist</a> of the entire script here. I’ve included fake functions for the “Collection Migration” pieces. To follow with the blog, start on line 19.</p>
<script src="https://gist.github.com/Ryan2065/436d851fc2d45d3804db7ca0d2057fa3.js"></script>

<p>Ephing Power Pool (published 2020-05-21, https://www.ephingadmin.com/EphingPowerPool)</p>

<h1 id="introducing-the-ephing-power-pool">Introducing the Ephing Power Pool</h1>
<p>Let’s travel back in time 20 years to February 2020 in the Ephing kitchen. Ephing Momma and EphingAdmin were having a discussion:</p>
<blockquote>
<p>EphingMomma: We should get a pool this summer!</p>
<p>EphingAdmin: That sounds like a great idea!</p>
<p>EphingMomma: Looks like there’s a few options at Target and Lowes</p>
<p>EphingAdmin: Yeah, let’s wait until summer to get one. Not like anything big will happen between now and then to close everything down and make everyone run out and buy them.</p>
</blockquote>
<p><img src="https://media.giphy.com/media/w3QsOYBlbAURq/giphy.gif" alt="WahWah" /></p>
<p>So now, it’s almost summer and we start looking for a pool!</p>
<p>Look at Target…</p>
<p><img src="https://media.giphy.com/media/baPIkfAo0Iv5K/giphy.gif" alt="Nothing" /></p>
<p>Look at every Target within 100 miles (I’m in Minnesota, there’s a Lot)</p>
<p><img src="https://media.giphy.com/media/kzxOVNpKLWDyL9tTTn/giphy.gif" alt="Nothing" /></p>
<p>Look online…</p>
<p><img src="https://media.giphy.com/media/QZOaeparxsNOfKWbER/giphy.gif" alt="Nothing" /></p>
<p>My wife and I start watching Target’s stock. Wednesday we see limited availability at a Target 30 minutes away, and I race there!</p>
<p><img src="https://media.giphy.com/media/h8UQPAvp7LOUJJkhac/giphy.gif" alt="Nothing" /></p>
<p>Then this morning I get a text from my wife:</p>
<p>“It shows in stock at Target, GO GO GO!”</p>
<p>I race to Target, get there, run to the Pools and…</p>
<p><img src="https://media.giphy.com/media/JoJGxeheao5mQaSiBK/giphy.gif" alt="Nothing" /></p>
<p>I go home, dejected, and decide to check some links I have saved from Lowes. Bam, in stock AND order online. Surely this can’t fail! I click add to cart, checkout,</p>
<p>“There’s a problem with availability, please choose a different delivery option.”</p>
<p>Ok, no problem, I check - all delivery options are grayed out!</p>
<p>I go back to the product page and… All sold out!</p>
<p><img src="https://media.giphy.com/media/3o7TKA3ypeMbOXSrp6/giphy.gif" alt="Damnit" /></p>
<p>So, I click refresh just to see what happens, and it’s in stock!</p>
<p><img src="https://media.giphy.com/media/5VKbvrjxpVJCM/giphy.gif" alt="InStock" /></p>
<p>I go to click add to cart, and it gets grayed out and shows out of stock!</p>
<p><img src="https://media.giphy.com/media/5fcc4PADD7ax2/giphy.gif" alt="WhatIsGoingOn" /></p>
<p>What - The - Hell</p>
<p>I hit refresh, same thing happens. Shows in stock, but then is out of stock in a few seconds.</p>
<p>So that got me thinking, what if they load the page and then call an API to check the stock…</p>
<p>To developer mode!</p>
<p>I go to <a href="https://www.lowes.com/pd/Intex-24-ft-x-12-ft-x-52-in-Rectangle-Above-Ground-Pool/1002623858">the Lowes pool link</a> let it load, then hit F12 on my browser, go to the network tab, and hit Refresh!</p>
<p>This now shows me all the extra content the page is loading in the background, which I hope shows me whatever is updating the In Stock button…</p>
<p><img src="https://www.ephingadmin.com/images/2020/LowesInitialF12.jpg" alt="F12" /></p>
<p>I then start scrolling through them! To help myself narrow this down, I decide to first look at lowes.com things (lowesCDN sounds like places they store images and static files, and the other locations all look like ads) AND look for json responses - like these:</p>
<p><img src="https://www.ephingadmin.com/images/2020/Lowes-TheseLookPromising.jpg" alt="Look Promising" /></p>
<p>As you can see in the image above, I clicked the “Response” tab on the right so I can see the JSON output. That link doesn’t look great, but what about the one that says Guest?</p>
<p>Click on that and you see some great json!</p>
<p><img src="https://www.ephingadmin.com/images/2020/Lowes-GreatJson.jpg" alt="Lowes-GreatJSON" /></p>
<p>Now, this isn’t exactly what we are looking for (how to tell if it’s in stock) but there’s a lot of product details here. If I scroll down…</p>
<p><img src="https://www.ephingadmin.com/images/2020/Lowes-ItemInventory.jpg" alt="Lowes-ItemInventory" /></p>
<p>A json object called <code class="language-plaintext highlighter-rouge">ItemInventory</code> with a <code class="language-plaintext highlighter-rouge">totalAvailableQty</code> property! This is exactly what we need. If you click back to the Headers tab on the right, you can get the link that produced this JSON:</p>
<blockquote>
<p>https://www.lowes.com/pd/1002623858/productdetail/1955/Guest</p>
</blockquote>
<p>Looking at this link, and the data in the JSON blob, I believe this is how the link is organized:</p>
<p><code class="language-plaintext highlighter-rouge">https://www.lowes.com/pd/<productId>/productdetail/<storeId>/Guest</code></p>
<p>So, this link is specific to my product and my store, which is really all I need. Now, to PowerShell!</p>
<pre><code class="language-PowerShell">$Quantity = 0
while($Quantity -eq 0) {
    $LowesData = Invoke-RestMethod -Uri 'https://www.lowes.com/pd/1002623858/productdetail/1955/Guest' -Method Get
    $LowesData.inventory.totalAvailableQty
    $Quantity = $LowesData.inventory.totalAvailableQty
    if($null -eq $Quantity) { $Quantity = 0 }
    if($Quantity -eq 0){
        Start-Sleep -Seconds 60
    }
}
</code></pre>
<p>I wrote this simple script to check every 60 seconds whether the pool is in stock. Line 4 outputs the current quantity (a sanity check), and lines 5 and 6 switch $Quantity to 0 if it’s $null, meaning no data was returned.</p>
<p>So this script will check if the item is in stock, and it’s going to run on my home machine. But how do I make it really let me know when it’s in stock?! Since I work from home right next to this computer, I decided to simply have PowerShell tell me!</p>
<pre><code class="language-PowerShell">Start 'https://www.lowes.com/pd/Intex-24-ft-x-12-ft-x-52-in-Rectangle-Above-Ground-Pool/1002623858'
while($true) {
    $null = (New-Object -ComObject SAPI.SPVoice).Speak("It is in stock")
    Start-Sleep -Seconds 5
}
</code></pre>
<p>First I have it open the Pool link in my preferred browser, then I have it yell “It is in stock” at me every 5 seconds.</p>
<p>It took about 4 hours, but eventually my computer started yelling it is in stock! I ordered the pool and we’ll be in the new Ephing Power Pool in a week or two!</p>

<h1 id="cm-pivot-revisited">CM Pivot Revisited</h1>
<p>2020-02-11 | https://www.ephingadmin.com/CMPivotRevisited</p>
<p>I wanted to write a follow-up to my blog post <a href="https://www.ephingadmin.com/CMPivotInternals/">CMPivot Internals</a> because in MEMCM 1910 CMPivot had a small, but huge change.</p>
<p>Now, after 1910, CMPivot processes queries client-side instead of server-side!</p>
<p><img src="https://media.giphy.com/media/KFt2DA9T82paOA1Yci/giphy.gif" alt="Huh" /></p>
<p>Yeah, it’s not obvious what that will do, so let’s dive in!</p>
<h2 id="cmpivot-is-a-script">CMPivot is a Script!</h2>
<p>Ok, not <em>all</em> of CMPivot is a script, but most of the code running on your client-side computers is still a script. You can still access this script with the SQL query:</p>
<pre><code class="language-SQL">SELECT CONVERT(varchar(max), Script) as 'ScriptText'
FROM Scripts
WHERE ScriptName = 'CMPivot'
</code></pre>
<p>Note: In SQL Server Management Studio, if you want the full script text, you’ll need to change your query results settings to retain line breaks (“Retain CR/LF on copy or save”) and increase the maximum number of characters returned per column.</p>
<p>Wait, increase the max size of data returned?! Yeah, they embed a DLL inside the script now, so it’s a lot longer. And that brings about the first change I want to talk about.</p>
<h2 id="client-side-query-processing">Client Side Query Processing</h2>
<p>Previously, CMPivot would process your Pivot queries server-side, tell the script client-side what data to gather, then sift through the data server-side and apply filters. Because of how it worked, a lot of interesting limitations applied to CMPivot - the worst was you could only search through the last 50 or 100 lines of text files and Windows event logs.</p>
<p>Now that they embed the query parsing DLL in the PowerShell script, they parse the query client-side and have adapted the script to use that information. Now if you say you want to search smsts.log for a specific line, it will search <strong>ALL</strong> of smsts.log instead of just the last 100 lines. It will also search through entire Windows event logs for whatever you want to search for.</p>
<p>This makes CMPivot SO much more powerful AND lighter on your server infrastructure.</p>
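<p>For example, a query like this one (using the same syntax as the queries below; the log name and search string are just illustrations) will now scan the whole log on each client instead of a 100-line tail:</p>

<pre><code class="language-Kestrel">CcmLog('smsts') | where (LogText contains 'Failed') | project Device, LogText, DateTime
</code></pre>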
<h2 id="comparing-previous-examples">Comparing previous examples</h2>
<p>If you look at the previous blog post, I showed how hard CMPivot could be on your environment by running the following query:</p>
<pre><code class="language-Kestrel">CcmLog('Scripts') | where (Device == 'DeviceName') | order by DateTime desc | project Device, LogText, DateTime
</code></pre>
<p>Why was this query hard on your environment? If you pasted this into CMPivot and ran it, you’d see 0 results. This is because no one really has a device named “DeviceName” in their environment. If you went to SQL though, and ran this query:</p>
<pre><code class="language-SQL">SELECT [ResourceID]
,[ScriptOutput]
,[LastUpdateTime]
FROM [vSMS_CMPivotResult]
ORDER BY LastUpdateTime DESC
</code></pre>
<p>You’d see one result row for every device in your environment and in vSMS_CMPivotStatus you could even find the 50 lines of text from the Scripts.log file for every device. So that’s a lot of data depending what collection you ran it on!</p>
<p>That process was all pre-1910. Now, if I run the above query, there are 0 results in vSMS_CMPivotResult because if you specify “DeviceName=x” it will only grab data from that specific device name! It’s easy to say “duh, why didn’t they do that before?!” but this comes with a trade-off. If you want to “pivot” and suddenly want to query all devices, it’s going to have to create a new request and can’t use cached results. I think this change in process is 100% fine, but just know Pivot might <em>seem</em> slower now.</p>
<h2 id="any-areas-of-concern-still">Any areas of concern still?</h2>
<p>This isn’t a limitation of the technology, or a bug, but do watch out for database bloat and CMG costs. Each client can send a maximum of 128KB of data, which means every 8 clients can send 1MB of data that’s stored temporarily in your database. Not huge numbers, but if you do a lot of querying of “Get me the entire contents of this log file” and then after that operation you filter to what you want, you could see issues with cost and database growth. Where I work, if I ran this CMPivot query against “All Systems”:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>CcmLog('ccmexec') | order by DateTime desc | project Device, LogText, DateTime
</code></pre></div></div>
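<p>The math behind that estimate, assuming every client returns the full 128KB payload (the 400,000 figure is my environment’s client count):</p>

<pre><code class="language-PowerShell"># Worst case: every client sends the full 128KB back
$clients      = 400000
$maxPerClient = 128KB
($clients * $maxPerClient) / 1GB   # roughly 48.8, so call it ~50GB
</code></pre>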
<p>I could expect the database to grow by about 50GB if all 400,000 machines are on. So any queries you need to run against ‘All Systems’ should probably first be fine-tuned against a smaller collection.</p>

<h1 id="status-message-presentation">Status Message Presentation</h1>
<p>2019-12-19 | https://www.ephingadmin.com/CMStatusMessages</p>
class: center, middle
# Status Message Triggers
## Ryan Ephgrave
## @EphingPosh
Wifi Code: msevent363hb
Presentation At: https://EphingAdmin.com/CMStatusMessages-presentation
---
# Status Message Triggers (Wifi Code: msevent363hb)
### Pros
* Event-based automations (no polling the DB every minute)
--
* Has been in SCCM, unchanged, for over a decade
--
* You can use these to automate a ton surrounding OSD, Application Deployment, Software Updates, Process Enforcement, and Approvals
--
### Cons
--
* Has been in SCCM, unchanged, for over a decade
--
* Single-Threaded (like collections)
--
* Has "quirks" with parameters
--
* Only one program action per "type"
---
# What can we automate? (Wifi Code: msevent363hb)
--
```
https://gallery.technet.microsoft.com/Enumerate-status-message-6e7e1761
```
```
https://blogs.technet.microsoft.com/saudm/2015/01/19/enumerating-status-message-strings-in-powershell/
```
--
### Examples of status message events
--
* Task sequence step completes
--
* Task sequence completes
--
* Anything in CM created, edited, or deleted
--
* PXE boot happened on a DP
--
* Package installed
---
# What can we automate?
``` SQL
WITH cte AS (
SELECT RecordId
FROM vStatusMessageAttributes
WHERE
AttributeTime BETWEEN '2019-07-01' AND '2019-12-31'
AND AttributeValue = 'PS100014'
)
SELECT
*
FROM vStatusMessagesWithStrings
WHERE RecordID IN ( SELECT RecordId FROM cte )
order by Time desc
```
---
# So how do I use these without killing CM?
--
* Option: A poor man's multi-thread
--
```
@echo off
for %%F in (%1) do set filename=%%~nxF
SET fileCount=0
for /f "tokens=1,*" %%a in ('tasklist ^| find /I /C "%filename%"') do set fileCount=%%a
for /f "tokens=1,* delims= " %%a in ("%*") do set ALL_BUT_FIRST=%%b
IF %fileCount% gtr 5 (
%1 %ALL_BUT_FIRST%
) ELSE (
start "" %1 %ALL_BUT_FIRST%
)
```
---
# So how do I use these without killing CM?
* Option: Use Math to verify you won't kill CM
--
![Math](https://media.giphy.com/media/BmmfETghGOPrW/giphy.gif)
--
---
# So how do I use these without killing CM?
* Option: Use Math to verify you won't kill CM
There are 60 seconds in a minute and 60 minutes in an hour, so there are 3,600 seconds in an hour.
If my automation would run 1000 times in an hour, how quick does it have to run?
--
Answer: 3.6 seconds
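Spelled out:
```
3,600 seconds per hour / 1,000 runs per hour = 3.6 seconds per run
```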
---
# So how do I use these without killing CM?
* Option: Use Math to verify you won't kill CM
``` SQL
WITH cte AS (
SELECT RecordId, [Time]
FROM vStatusMessages
WHERE
[Time] BETWEEN '2019-07-01' AND '2019-12-31'
AND MessageID IN (
30001,30004,30006,30007,30008,30016,30068,30152,30226,30227,30228
)
)
SELECT
DATEADD(Hour, DATEDIFF(Hour, 0, [Time]),0) AS 'Hour'
,COUNT(RecordId) AS 'Count'
FROM cte
GROUP BY DATEADD(Hour, DATEDIFF(Hour, 0, [Time]),0)
ORDER BY COUNT(RecordId) DESC
```
---
# So how do I use these without killing CM?
* Option: Use Math to verify you won't kill CM
RESULTS:
| Hour | Count |
| ---- | ------|
|2019-07-22 21:00:00.000 | 1262 |
|2019-10-07 22:00:00.000 | 1109 |
|2019-07-11 22:00:00.000 | 627 |
|2019-07-12 21:00:00.000 | 589 |
|2019-07-26 13:00:00.000 | 389 |
--
Means:
Script has to run in 2.9 seconds to always be quicker than the worst, but as long as it completes in 6.11 seconds it would have only backed up 3 times in the past 5 months.
---
# So how do I use these without killing CM?
Putting both together:
--
If the multi-threading wrapper takes 1 second to run and it can spin up 5 instances, the effective run time of your script is approximately
```
(( RunTime ) / 5) + 1
```
--
This means, in the previous example where the script had 2.9 seconds to run, using multi-threading it has:
```
(( RunTime ) / 5) + 1 = 2.9
( RunTime ) / 5 = 1.9
RunTime = 9.5
```
--
```
(( RunTime ) / 5) + 1 = 6.11
( RunTime ) / 5 = 5.11
RunTime = 25.55
```
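The same algebra as a quick PowerShell check (numbers from the slides; the 1-second wrapper overhead is an estimate):
``` PowerShell
$budget    = 6.11   # seconds available per status message
$instances = 5
$overhead  = 1      # seconds the batch wrapper takes
($budget - $overhead) * $instances   # 25.55 - max run time per instance
```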
---
class: center, middle
# Demo

<h1 id="building-a-better-fake-configmgr-client">Building A Better Fake ConfigMgr Client</h1>
<p>2019-05-03 | https://www.ephingadmin.com/ContainerizeCMClient</p>
<p>I’ve <a href="https://www.ephingadmin.com/CMMessaging/">posted before</a> about ways to fake a CM client, but that only fakes the inventory data. I wanted to up my demo game and had a goal - make a fake client that could respond to CM Pivot!</p>
<p>I decided the best course of action would be to put the ConfigMgr client in a Docker container so I could just spin up multiple containers and have live clients that acted like real machines. Here’s how I did it:</p>
<p>First off, you want <a href="https://docs.docker.com/docker-for-windows/">Docker for Windows</a> installed on your machine.</p>
<p>After Docker is installed, make a new folder somewhere and then make a new file called <code class="language-plaintext highlighter-rouge">Dockerfile</code> (note, this has no extension, just Dockerfile). This is basically a script that will build our container. Open the file with your favorite editor (VS Code, NotePad, etc…) and put this in:</p>
<div class="language-Dockerfile highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">FROM</span><span class="s"> microsoft/dotnet-framework:4.6.2</span>
<span class="k">WORKDIR</span><span class="s"> /Client</span>
<span class="k">RUN </span><span class="nb">echo </span>10.0.0.6 MMSMOAPS1.contoso.com <span class="o">>></span> C:<span class="se">\W</span>indows<span class="se">\S</span>ystem32<span class="se">\d</span>rivers<span class="se">\e</span>tc<span class="se">\h</span>osts
<span class="k">COPY</span><span class="s"> Client .</span>
<span class="k">RUN </span>powershell.exe <span class="nt">-file</span> install.ps1
<span class="k">ENTRYPOINT</span><span class="s"> ["powershell.exe", "-file", "start.ps1"]</span>
</code></pre></div></div>
<p>In my lab, I don’t have DNS so I have to use the hosts file. If you are confident you can get to your MP without editing the hosts file, then feel free to remove the RUN echo line. If you need to edit your hosts file, update the RUN echo line to have the IP address and FQDN of your MP.</p>
<p>Next, copy the contents of <code class="language-plaintext highlighter-rouge">\\SCCMPrimaryServer\SMS_PS1\Client</code> to a folder called Client where your Dockerfile is. It should look like this:</p>
<pre><code class="language-FileSystem">root
  -Client
    -ccmsetup.exe
    -All other files
  -Dockerfile
</code></pre>
<p>Now, in your client folder, we need to put two files. One is install.ps1 and the other is start.ps1. The contents of install.ps1:</p>
<pre><code class="language-PowerShell">Write-Output 'Starting ccmsetup'
& cmd /c ccmsetup.exe /mp:MMSMOAPS1.contoso.com SMSSITECODE=PS1 SMSMP=MMSMOAPS1.contoso.com DNSSUFFIX=contoso.com
Start-Sleep 10
while(Get-Process -Name 'ccmsetup' -ErrorAction SilentlyContinue) {
    Get-ChildItem 'C:\Windows\ccmsetup' -Filter 'ccmsetup.log' -Recurse | ForEach-Object {
        Get-Content $_.FullName -Tail 5
    }
    Start-Sleep 10
}
$service = Get-Service -Name 'ccmexec' -ErrorAction SilentlyContinue
if($null -eq $service) {
    Get-ChildItem 'C:\Windows\ccmsetup' -Filter 'ccmsetup.log' -Recurse | ForEach-Object {
        Get-Content $_.FullName
    }
    $ErrorActionPreference = 'Stop'
    throw 'Was not able to install client'
}
</code></pre>
<p>Change the ccmsetup.exe line to match your environment (update the MP with the FQDN and the DNSSuffix).</p>
<p>This file installs the client, and since visibility is hard inside a Docker container it outputs the ccmsetup.log file as the install happens. Finally, it checks to ensure the ccmexec service exists and if it does not, it will output the full ccmsetup.log file and throw an error.</p>
<p>Now create start.ps1 and here’s the contents of that file:</p>
<pre><code class="language-PowerShell">Remove-Item C:\Windows\SMSCFG.INI -Force
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\SystemCertificates\SMS\Certificates\*' | Remove-Item -Force
Start-Service -Name 'ccmexec'
ping localhost -t
</code></pre>
<p>This simply removes the SMSCFG file and any client certificates (which ensures each new docker container is a unique client) and then starts ccmexec. The docker container will exit when the script ends, so at the end it just pings localhost forever. This way you can stop it when you want.</p>
<p>And that’s it! Now all you have to do is build the image, which runs the steps in the Dockerfile:</p>
<p>1) Download the image microsoft/dotnet-framework:4.6.2
2) Set up the hosts file
3) Copy the folder .\Client to the container
4) Run the file install.ps1 to install the SCCM client
5) Set up the container to run start.ps1 when it starts.</p>
<p>Go to the root of the main folder with your Dockerfile in it, and run this command:</p>
<pre><code class="language-cmd">docker build --no-cache --pull -t ephingcmclient .
</code></pre>
<p><code class="language-plaintext highlighter-rouge">-t ephingcmclient</code> is the name (tag) of the image, so feel free to name it something else. <code class="language-plaintext highlighter-rouge">--no-cache</code> is there because if you have to rebuild (let’s say the client didn’t fully install) you don’t want it using cached layers. <code class="language-plaintext highlighter-rouge">--pull</code> is there because no-cache doesn’t always work if pull isn’t specified; it just says always use the newest base image.</p>
<p>After you wait 10-15 minutes for the image to download, this step will take another 10-15 minutes to install the client. Be patient.</p>
<p>The most common error I’ve seen is a network error where the client couldn’t contact the MP. Docker at the build stage doesn’t always share a VPN connection with your container, so you might have problems if you use a VPN. I got around it by putting my primary site server on the internet, opening port 80 and 443, and updated my hosts file to hit the external IP.</p>
<p><img src="https://media.giphy.com/media/3oz8xUJsD8AsihJrtC/giphy.gif" alt="SafetyFirst" /></p>
<p>After you have successfully built the container, you can run it!</p>
<pre><code class="language-cmd">docker run -d ephingcmclient
</code></pre>
<p>This runs it in detached mode so you can run multiples…</p>
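<p>Since each container comes up as a unique client, you can launch a handful in one go (the count of 5 here is arbitrary):</p>

<pre><code class="language-PowerShell"># Start 5 detached containers - each registers in CM as its own client
1..5 | ForEach-Object { docker run -d ephingcmclient }

# Check that they are running
docker ps
</code></pre>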
<pre><code class="language-cmd">docker run -it ephingcmclient
</code></pre>
<p>This runs it in interactive mode so you can see if there are any errors.</p>
<p>If everything goes well, after a minute you’ll start seeing your new clients pop up in CM. Note, these will be workgroup computers, so if you don’t have your site set up to “Automatically approve all computers (not recommended)” you’ll have to approve them.</p>
<p>Once you do, they will be fully working CM Clients that respond to CM Pivot!</p>
<p><img src="../images/2019/2019-05-03-21-29-56.png" alt="DockerWorking" /></p>
<strike>There is a slight bug right now where only one Docker container will reply to CM Pivot, the rest will show up as Clients and Active, just not respond to Pivot. Once I figure that out I'll update the post. </strike>
<p>Update 5/5/2019: All clients are now fully working. The code to remove certificates wasn’t correct. It’s updated now in this blog post and should work.</p>

<h2 id="cm-pivot-internals">CM Pivot Internals</h2>
<p>2019-04-25 | https://www.ephingadmin.com/CMPivotInternals</p>
<p>CM Pivot is one of the coolest new features Microsoft has put in SCCM. In this blog, we are going to rip the covers off CM Pivot and look at how it works behind the scenes. This information will help you understand results as they come back from CM Pivot, and hopefully help you avoid issues with the product as you start to use it more and more in your environment.</p>
<h2 id="getting-started">Getting started</h2>
<p>If you are looking for a quick start guide on CM Pivot, look no further than the <a href="https://docs.microsoft.com/en-us/sccm/core/servers/manage/cmpivot">Microsoft Docs</a>. These should get you started! This post will assume you know a little about CM Pivot already.</p>
<h3 id="client-side">Client Side</h3>
<p>I wanted to start with client side because this is the most eye-opening piece we’ll be talking about. Knowing how CM Pivot works on clients is vital to understanding the data you have coming back.</p>
<h3 id="cm-scripts">CM Scripts</h3>
<p>Microsoft has said a few times that CM Pivot is built on scripts, which is a nice way of saying the client-side piece of CM Pivot is a PowerShell script. You heard that right: when you type your query in the CM Pivot window, it triggers a CM Script with parameters and sends that to the client. What script does it trigger? The one called CMPivot!</p>
<p>If you want the text of this script, here’s a quick SQL query to get the script:</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">SELECT</span> <span class="k">CONVERT</span><span class="p">(</span><span class="nb">varchar</span><span class="p">(</span><span class="k">max</span><span class="p">),</span> <span class="n">Script</span><span class="p">)</span> <span class="k">as</span> <span class="s1">'ScriptText'</span>
<span class="k">FROM</span> <span class="n">Scripts</span>
<span class="k">WHERE</span> <span class="n">ScriptName</span> <span class="o">=</span> <span class="s1">'CMPivot'</span>
</code></pre></div></div>
<p>If you run this query, you’ll see a large script that accepts two parameters and if you read through you’ll find the logic for all parts of CM Pivot. How do they query WMI? How do they parse logs? It’s all right here.</p>
<p>Why does this matter? If you look at the script, you’ll start noticing something. There are no filters in the script! There are some parameters it can take (for instance, what log file do we want to read?), but there is no way on the client side to limit the data sent back from the CM Pivot query.</p>
<h3 id="cm-pivot-queries">CM Pivot Queries</h3>
<p>What does this mean? Let’s look at an example I’ll steal from <a href="https://www.systemcenterdudes.com/sccm-cmpivot-query/">SystemCenterDudes</a> CM Pivot example queries. Note, this is a fantastic page and I reference it a lot. I’m going to pick on one specific query on the page, but it is still a good query, it’s just not obvious what’s happening.</p>
<p>Here’s the query:</p>
<p>List 50 last lines of a specific SCCM log file on a specific computer:</p>
<pre><code class="language-cmd">CcmLog('Scripts') | where (Device == 'DeviceName') | order by DateTime desc | project Device, LogText, DateTime
</code></pre>
<p>This query when run as-is will return no results. Why? You probably don’t have a device named DeviceName in your organization. You’ll see results like this:</p>
<p><img src="../images/2019/2019-04-25-13-58-13.png" alt="NoResults" /></p>
<p>If I query SQL with this query:</p>
<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">SELECT</span> <span class="p">[</span><span class="n">ResourceID</span><span class="p">]</span>
<span class="p">,[</span><span class="n">ScriptOutput</span><span class="p">]</span>
<span class="p">,[</span><span class="n">LastUpdateTime</span><span class="p">]</span>
<span class="k">FROM</span> <span class="p">[</span><span class="n">vSMS_CMPivotResult</span><span class="p">]</span>
<span class="k">ORDER</span> <span class="k">BY</span> <span class="n">LastUpdateTime</span> <span class="k">DESC</span>
</code></pre></div></div>
<p>Now, I see results for all systems I queried! What’s going on? In a CM Pivot query, the only thing <em>you</em> can send to clients to filter results client side is to the left of the first |. This means the filter “where (Device == ‘DeviceName’)” is only processed AFTER the results come in. If this query was run against “All Systems”, every single system in your environment will run the CM Pivot query, send back the last 50 lines of the Scripts log file, put that in SQL, and then do nothing with it because the Device I wanted wasn’t there.</p>
<p>Note, this isn’t a dig at CM Pivot - it’s understanding how the technology works so good decisions can be made. Let’s look at an example where this could give us potentially wrong information:</p>
<pre><code class="language-cmd">EventLog('System') | where Source == 'Iphlpsvc'
</code></pre>
<p>In this query, I <em>think</em> I’m searching the event log of all my systems for the source ‘Iphlpsvc’. But remember, only the pieces on the left of the first | get sent to the client, so what actually happens? Every client sends back the newest 50 records in the System Event Log, and then in SQL the results are filtered and displayed based only on the newest 50 records. So since the System Event log is much larger than 50 records, it’s not a true search. I could very easily think this source isn’t recorded in any device in my organization, but it could in fact be in all of them.</p>
<h3 id="data-layer">Data Layer</h3>
<p>We’ve talked a little about CM Pivot and SQL in the queries section, but I did want to take a moment to talk about what happens in SQL.</p>
<p>SQL is used to temporarily store all CM Pivot results pre-filter. If you wanted to dig into the results, the view <code class="language-plaintext highlighter-rouge">vSMS_CMPivotResult</code> is where you’d go to look at Pivot results. CM Pivot is meant to be live, so no historical results are saved. When you close the UI, the data is cleared out. If there is any orphaned data, it will be cleared within 7 days.</p>
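<p>If you’re curious what’s cached at any given moment, a quick peek at that view (using the same columns as the query earlier in this post) will tell you:</p>

<pre><code class="language-sql">-- How many cached CM Pivot result rows exist right now, and how old is the oldest?
SELECT COUNT(*) AS CachedRows, MIN(LastUpdateTime) AS OldestResult
FROM vSMS_CMPivotResult
</code></pre>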
<h3 id="conclusion">Conclusion</h3>
<p>CM Pivot is an amazing addition to our toolkit and it has a bright future, however it does have limits and some quirks about it. Hopefully this blog post will help you make good decisions with CM Pivot so you can get the most out of it!</p>