
Tuesday, November 18, 2014

The $500 Checkbox

It's been a long time since I've posted, but my desire to write has returned, and I've written down a number of projects to post about in the coming days and weeks.

Today's tale is a cautionary one that hopefully can save you some grief (and money) down the road.

A couple of months ago, I was replacing my primary domain controller. By "primary", I mean that this was DC number 1. It's a physical box (as opposed to my other 4 virtual DCs), and it runs my DHCP scope, the domain's external NTP client, and holds all of the FSMO roles.

I've done this a few times and have a checklist, so I really wasn't that concerned. I had some issues when migrating DHCP to another 2008R2 DC. Old DHCP superscopes which had been removed somehow made their way back, and not all server options were transferred correctly. This wouldn't be apparent for a couple of days because, silly me, I trusted a Microsoft migration process. Next time I'll just rebuild the DHCP from scratch, thankyouverymuch.

My problem came when I was demoting DC1. I couldn't. I kept getting access denied messages, despite trying multiple logins and multiple group membership combinations (Domain Admins, Enterprise Admins, DNS Admins, Schema Admins, all of the above, some but not others). Furthermore, my DCDIAGs, which I run daily, had all been sparkling clean for some time. Finally, at the end of my rope and with my Google-Fu failing me, I decided to bite the bullet and call MS support (which I've found is really good, by the way). This was after hours, so it ended up costing about $500.

After poking around, support came to the conclusion that we should forcefully remove the problematic domain controller and do a manual cleanup of Active Directory. We still had problems altering the computer object, due to some funky permission issues. Changing those permissions manually rectified the problem, but neither they nor I knew the actual root cause of the issue.

As often occurs, after stepping away from the problem the answer hit me like a lightning bolt. The next day while eating breakfast, I remembered:

A while ago I used Powershell to enable the "Protect object from accidental deletion" feature on every user, computer, and group object in my Active Directory. I did this to protect myself from doing something stupid, like pressing a sequence of keyboard shortcuts while my window focus was in the wrong place and screwing up AD.
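For reference, the bulk change can be done with something along these lines (a rough sketch using the Microsoft ActiveDirectory module, not the exact command I ran back then; adjust the filter to suit your environment):

#Sketch: enable "Protect object from accidental deletion" on every user, computer, and group object
Import-Module ActiveDirectory
Get-ADObject -Filter 'ObjectClass -eq "user" -or ObjectClass -eq "computer" -or ObjectClass -eq "group"' |
    Set-ADObject -ProtectedFromAccidentalDeletion $true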

I looked at my other domain controllers and they all had the "Protect object from accidental deletion" option enabled. You can see this if you open Active Directory Users and Computers, click the View menu and then "Advanced Features". Now open a computer or user and click on the "Object" tab.

Fast forward a couple of weeks later and it was time to rebuild another DC. I had the same issue with demoting the domain controller. I went into AD and unchecked this box, then let it sit for 15 minutes to let AD replicate. When I tried again, I had no issues!
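So the fix going forward is simple: clear that flag on the DC's computer object before demoting, with something like this (a sketch; the distinguished name here is made up, so substitute your own):

#Sketch: un-protect the DC's computer object, then give replication a few minutes before demoting
Set-ADObject -Identity "CN=DC1,OU=Domain Controllers,DC=contoso,DC=com" -ProtectedFromAccidentalDeletion $false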

Friday, August 22, 2014

Get Internet Explorer Versions from a List of Computers

If you haven't been following the tech news, last week Microsoft announced that they were revising which versions of Internet Explorer they would support on different operating systems. You can find the original article with a list of what's what here. Having just completed testing for IE10, I have a lot of work to do this next year, as the new policy goes into effect on 1/12/2016. I don't have many internal websites to worry about, but the Finance department's reliance on stupid banks that seem incapable of being current is making things difficult. I'm hoping that Internet Explorer Enterprise Mode will help me out, but I haven't had the time to really delve into it yet.

I track my client computers' software inventories in Spiceworks, and it was easy enough to get a report from Spiceworks showing who had what, but I don't track my servers in Spiceworks. For that, I needed a script to go through a list of my servers and tell me what version of IE they were running. I found a couple of scripts online, but they weren't to my liking. Some even gave me inaccurate info. But they did point me to which registry tree I should look at, so I made my own, and here it is:

#=====================BEGIN SCRIPT==========================
$Computers = Get-content "C:\Temp\QueryInternetExplorerVersionsComputerList.txt"
$TempFile = "C:\Temp\IEVersion.csv"
$Delimiter = ","

Foreach ($Computer in $Computers){
    $Version = (Invoke-Command -ComputerName $Computer {$reg = Get-Item ('HKLM:\Software\Microsoft\Internet Explorer'); $reg.GetValue("svcVersion")})
    $Computer + $Delimiter + $Version + $Delimiter | Add-Content $TempFile
}
#=====================END SCRIPT==========================

So, what we're doing here is as follows:

  1. Get the list of computers from a text file
  2. Specify a temp file for output
  3. Specify a delimiter
  4. For each computer in the list, pull the svcVersion value from the registry
  5. Combine the computer name and svcVersion, along with the delimiters, to make a CSV file, which I can then import into Excel
  6. .....
  7. Profit!
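If you'd rather not fiddle with the delimiters by hand, the same data can be gathered as objects and pushed through Export-Csv instead. Here's a quick sketch along the same lines (paths and value name are the same ones used above):

$Computers = Get-Content "C:\Temp\QueryInternetExplorerVersionsComputerList.txt"

$Results = Foreach ($Computer in $Computers){
    #Pull the svcVersion value from the remote machine's registry
    $Version = Invoke-Command -ComputerName $Computer {
        (Get-Item 'HKLM:\Software\Microsoft\Internet Explorer').GetValue("svcVersion")
    }
    New-Object PSObject -Property @{ComputerName = $Computer; IEVersion = $Version}
}

$Results | Export-Csv "C:\Temp\IEVersion.csv" -NoTypeInformation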

Good luck and Godspeed, my fellow admins!

Windows Filtering Platform events in Security Log ...OR... Don't Screw with Advanced Audit Policy Configuration

I have 4 domain controllers, and they are set up to forward events in their Security Logs to a Kiwi Syslog Server (Solarwinds). I have a 30 day rotation, and I monitor the space usage in a dedicated pie chart on my heads-up display. This past week the free space started getting lower. After poking around, I noticed that the file sizes of each day's log files were getting bigger and bigger. I opened one up in the Kiwi log viewer (which is the most horrible part of this software, by the way), and noticed a TON of messages from "Windows Filtering Platform". These were basically noting success every time an event was sent to the syslog server. Well..... thanks, I guess?

I rarely have to deal with auditing, so I consult the Google and find instructions for how to stop success auditing for Windows Filtering Platform events. It seemed straightforward enough: open your Default Domain Controllers group policy, drill down into the Advanced Audit Policy Configuration, and there are two options there dedicated to it. I enable the audit policy but check neither Success nor Failure, so that the policy is defined yet tells Windows not to audit either of them. I document my changes in the ticket I made for myself and close it out.

A few minutes later I get an email from our Netwrix AD auditing software telling me that auditing isn't configured correctly. I go into Active Directory Users and Computers, add a period to the end of a computer account's description, and run a new report. The change is noted, but there are big red warning letters telling me something with auditing isn't right. Huh. I decide to see what the overnight report says, make another benign change in ADUC, and head out for the day.

The next morning, the report comes in showing changes in AD, but it still has big red warning letters saying that auditing isn't configured correctly. I go to check my syslog file size, and it's 5KB (down from 4GB), and the only thing in there are messages over and over saying that the audit settings have been changed.

So, back to Google, and APPARENTLY, when you use any part of the Advanced Audit Policy Configuration, it supersedes ALL of the normal auditing settings. So, by simply turning off logging for the Windows Filtering Platform, I had negated all of the regular auditing settings. Super.

Now, I don't know if you've ever had to reverse a group policy setting, but it is not intuitive. Simply turning the setting off does not reverse what has been done. A group policy restore from backup cannot reverse what you have done; you actually need to reverse the setting. Likewise, clearing the checkboxes in the advanced auditing section would not restore auditing to the way it used to be. Sadly.

Here is what I had to do to reverse this calamity:

  1. Put the GPO back to the way it was.
  2. Get on a domain controller.
  3. Find out what your Group Policy Object's policyID is. I used get-gpo -name "<name>". (See the sketch after this list.)
  4. Now, go to C:\Windows\SYSVOL\Domain\Policies\<PolicyID>\Machine\Microsoft\Windows NT
  5. In this folder delete the Audit folder. 
  6. Now get on a command shell, and type the following: auditpol /get /category:*
  7. What you see is that nothing is being audited.
  8. Run a gpupdate /force, then run the command again - SUCCESS!!!
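For steps 3 through 5, the lookup and the delete boil down to something like this (just a sketch; it assumes the GroupPolicy module is available and that the GPO in question is the Default Domain Controllers Policy):

#Sketch: find the GPO's GUID and remove the Audit folder from its SYSVOL policy folder
$Gpo = Get-GPO -Name "Default Domain Controllers Policy"
$AuditFolder = "C:\Windows\SYSVOL\Domain\Policies\{$($Gpo.Id)}\Machine\Microsoft\Windows NT\Audit"
Remove-Item $AuditFolder -Recurse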


While researching this I ran across the command that I should have used to get rid of those Windows Filtering Platform events, which is this:
auditpol /set /subcategory:"Filtering Platform Connection" /Success:disable

You can check the setting before and after you run that command with this command:
auditpol /get /subcategory:"Filtering Platform Connection"

Thursday, August 14, 2014

Starting a Backup Exec job from Powershell (After Veeam jobs finish)

When my weekly Veeam jobs finish, I have a post-job script that triggers a Backup Exec job that writes the Veeam backup files to tape. Yes, Veeam has built-in tape writing abilities now, but I still have physical servers to care for and feed, and therefore Backup Exec still handles my tapes. Having just upgraded from Backup Exec 2010 to Backup Exec 2014, I learned that bemcmd.exe had been removed. This was the old, antiquated way of running commands from the CLI to control Backup Exec. Smartly, they have replaced it with a Powershell module.

At a high level, here are the commands that your script needs to start a Backup Exec job:
#This command loads the module
Import-Module "C:\program files\symantec\backup exec\modules\bemcli\bemcli"

#This command starts the job, with inline confirmation
Start-bejob -inputobject "YourBackupExecJobName" -confirm:$false

For your entertainment, I'll now paste in my entire post-job script, where I have built in a couple of handy features. Namely, an email telling me whether the tape job has started or not, and an if statement that only starts the tape job if there are no failed backup jobs. If there's a failure, this gives me an opportunity to fix it before starting the tape-writing job.

Further commentary within this Veeam post-job script:

#-------------------------BEGIN SCRIPT------------------------
#Add the Veeam snap-in and the Backup Exec module
Add-PSSnapin VeeamPSSnapin
Import-Module "C:\program files\symantec\backup exec\modules\bemcli\bemcli"

#Set the initial counter, used to count Veeam job failures, used later in the IF statement
$Result = 0

#Email variables
$To = "me@contoso.com"
$From = "helpdesk@contoso.com"
$SMTPServer = "mailserver.contoso.com"

#Get all of the weekly Veeam jobs. My weekly jobs all have a "Weekly" prefix
$Statuses = (get-vbrjob | where {$_.Name -like "Weekly*"})

#Go through each Veeam job and increment $Result if a job has an unsuccessful status
#I'm not counting warnings, which can be triggered when a VM has been resized and Change Block Tracking has been reset
Foreach ($Job in $Statuses){
    If ($Job.GetLastResult() -notlike "Success" -and $Job.GetLastResult() -notlike "Warning"){
        $Result++
    } #End If
} #End Foreach

#If the $Result equals 0, all the jobs were successful. Start the tape copy and email me
If ($Result -eq 0){
    #The following line uses the old (Backup Exec 2010) method for starting the tape job
    #start-process "C:\Program Files\Symantec\Backup Exec\bemcmd.exe" -ArgumentList '-o1 -j"MyTapeJob"'
    #This next line is the new way of calling the tape job in Backup Exec
    Start-BEJob -InputObject "MyTapeJob" -Confirm:$false
    $Subject = "Veeam to Tape Job Started"
    $Body = "The Veeam to Tape job has started"
    Send-MailMessage -To $To -Subject $Subject -From $From -Body $Body -SmtpServer $SMTPServer
} #End If

#If the result does not equal 0, email me
If ($Result -gt 0){
    $Body = "The Veeam to Tape job has NOT started due to errors"
    $Subject = "VEEAM BACKUP ERRORS: Veeam to Tape Job NOT Started"
    Send-MailMessage -To $To -Subject $Subject -From $From -Body $Body -SmtpServer $SMTPServer
} #End If

#-------------------------END SCRIPT------------------------

Tuesday, August 12, 2014

New Backup Exec 2014 Upgrade

I have a new and now working Backup Exec 2014 installation! Keep in mind that I skipped 2012 after all of the horror stories, so things that I think are new might not be to someone who followed the regular upgrade cycles. I have a simple setup; I just have the one server, and with it I back up maybe a dozen physical boxes, and write my Veeam backup files to tape. I also don't use many of the advanced features, nor do I know much about them. Backup Exec is good at writing tapes and backing up physical boxes. I don't see them surpassing Veeam anytime soon. My previous installation was up-to-date Backup Exec 2010 R3, and worked reliably.

I can report that running Backup Exec 2014 along with Veeam v7 on the same server has not caused any issues. The only thing I did have to do was set some of the Veeam services to delayed start so that Backup Exec could grab control of the tape library. Namely, the Catalog Data Service, the Enterprise Manager, the Restful API, and the Veeam Backup Service itself.
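If you'd rather script that than click through services.msc, sc.exe can flip them to delayed start. Here's a sketch - double-check the actual service names with Get-Service on your Veeam server first, since the ones below are assumptions:

#Sketch: set the Veeam services to Automatic (Delayed Start) - service names are guesses, verify yours
$VeeamServices = "VeeamCatalogSvc", "VeeamEnterpriseManagerSvc", "VeeamRESTSvc", "VeeamBackupSvc"
Foreach ($Service in $VeeamServices){
    sc.exe config $Service start= delayed-auto
}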

Anyway, enough about Veeam....

After doing a little bit of reading, mostly concerning the upgrade paths and methods, I decided to just pull the trigger and sink or swim. I'm happy to report that the installation of Backup Exec 2014 went very smoothly. All of my jobs and media sets were moved over, though I did need to remove and add back a couple of selections from the jobs. There were problems with some of the jobs' system states and I had some failures, but there are problems with any upgrades, so I dealt. One thing I didn't like was that I didn't have selection lists to change; I had to change the selections for each server in the job. One edit for the full job that runs weekly, and then another edit for the daily differential job. I liked the previous method better. In Backup Exec 2010 you specified what to back up in a selection list, then applied a policy which backed up those selections in a certain way, which is where you set up the schedule and the backup type. You applied the policy to the selection list and that made your jobs. Of course, you could create an entire job, but if you're taking care of multiple servers, the selection/policy was the way to go. Now, in the 2014 version, there aren't policies that I could find. I didn't look too hard, though. I edited the jobs directly.

Seriously, the hardest thing about this installation was getting the license keys straightened out. I just got that figured out today after a 45 minute call with support!!!

One laughable thing about my journey: I attended (for a few minutes, anyway) a Symantec webinar "unveiling" 2014 and showcasing some of its new features. The video was garbage. Conclusion: overall, good job! Work on your presentation and licensing though, Symantec!

Thursday, July 31, 2014

Happy Birthday to me - I got a(nother) SAN, and a Backup Exec 2014 Upgrade

So today is my birthday, and I get to do the remote installation on my new Dell EqualLogic PS6100XV SAN! This will be my 3rd Dell SAN, with 1 more to go, so I'm an old hand at this by now. I could almost do the whole thing myself at this point, but configuration of the switch stack and iSCSI VLANs is not my cup of tea. The Dell tech has it pretty easy.....


In other news, yesterday I upgraded (in place) my aging Backup Exec 2010 installation to Backup Exec 2014. I only had one job give me any issues, but it was quickly resolved. I would add that my Backup Exec installation runs on the same server as Veeam Backup and Replication v7, and so far all of my jobs are running successfully on Veeam and I'm not seeing any coexistence issues.

Why both, you ask? I have (and will for a very long time, maybe forever) both physical servers and virtual servers. Backup Exec can theoretically back up virtual servers, but why use anything other than Veeam - THE BEST THERE IS?????

Friday, July 25, 2014

Happy Sysadmin Day!

I'd just like to take a minute to wish all of my fellow sysadmins a happy sysadmin day! I didn't get squat, because apparently the only people that know about this vaunted holiday are actual sysadmins. Which is too bad really. We drive organizations. We are the ones that surmount all kinds of obstacles to keep the environments that we manage running as securely as possible and at peak performance.

So when you get home tonight, pour yourself a glass of scotch (or your beverage of choice) and contemplate all the great things that you do. Hopefully someone else has expressed their thanks to you, but probably not. :)


Friday, July 18, 2014

Powershell - Exchange Mailbox Size Reports and Bonus Rant

We limit our users to 1GB mailboxes. If they're high up in the chain we'll let them go to 3GB mailboxes. We do run a Barracuda Email Archiver with a 6 year retention policy, and allow our users to make their own PSTs if they need to (but only with the knowledge that IT will not help them recover or back up PST files).
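For anyone curious how those limits get applied, it's just a quota on the mailbox (or on the database, if you use the defaults). Roughly like this from the Exchange 2010 shell - a sketch, with a made-up user and thresholds:

#Sketch: cap a single mailbox at 1GB, with a warning as it gets close
Set-Mailbox -Identity "jdoe" -UseDatabaseQuotaDefaults $false `
    -IssueWarningQuota 900MB -ProhibitSendQuota 950MB -ProhibitSendReceiveQuota 1GB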

I have talked to a lot of email admins, and many have no mailbox size limits at all, or a very high limit. The consensus seems to be that our limits are ridiculous. If you have an opinion on this, I'd love to hear from you, but my take is: seriously, do you really need to keep all of that mail? Be judicious. It's my opinion that if all of these digital documents were converted into actual pieces of paper, there would be a sea change in users' attitudes with regard to email and files in general. I mean seriously, if the documents took up actual physical space, you'd get rid of things that you don't need, as opposed to spending the money for lateral file cabinets and a warehouse to hold all of this crap. Delete the office jokes and the cat pictures. If there's a long chain of emails, save the last one.

Saving all of this stuff digitally has real costs. There's the amount of space required (and SAN space is not cheap). If you want your IT department to implement some sort of document lifecycle and automate what each user could easily do, bear in mind that those systems are not cheap and are not painless to implement. Also, we have to back all that crap up, so there's backup storage, too. Now the backup solution has to churn through more data during a static backup window, and speed can be expensive.

Ok, I'm done with my rant. Here's what I use to keep tabs on my top mailbox sizes. I am running Exchange 2010.

#----------------------BEGIN SCRIPT-----------------------------------
Add-PSSnapin Microsoft.Exchange.Management.PowerShell.E2010

function Get-DatabaseStatistics {
$Databases = Get-MailboxDatabase -Status
foreach($Database in $Databases) {
    $DBSize = $Database.DatabaseSize
    $MBCount = @(Get-MailboxStatistics -Database $Database.Name).Count
    $MBAvg = Get-MailboxStatistics -Database $Database.Name | %{$_.TotalItemSize.value.ToMb()} | Measure-Object -Average          
    New-Object PSObject -Property @{
        Server = $Database.Server.Name
        DatabaseName = $Database.Name
        LastFullBackup = $Database.LastFullBackup
        MailboxCount = $MBCount
        "DatabaseSize (MB)" = $DBSize.ToMB()
        "AverageMailboxSize (MB)" = $MBAvg.Average
        "WhiteSpace (MB)" = $Database.AvailableNewMailboxSpace.ToMb()
        } #End New-Object
    } #End Foreach
} #End Get-DatabaseStatistics Function

Get-DatabaseStatistics | Export-Csv c:\temp\report.csv -Force -NoType

Send-MailMessage -To IT@DOMAIN.org -From helpdesk@DOMAIN.org -Subject "Database Statistics for $((get-date).ToShortDateString())" -SmtpServer MailServer.DOMAIN.org -Attachments c:\temp\report.csv

Remove-Item c:\temp\report.csv

#----------------------END SCRIPT-----------------------------------

Tuesday, July 15, 2014

After P2V (VMware), VM loses its Default Gateway Address

I ran into this issue this morning. Yesterday I P2V'd a Windows 7 "server" (don't get me started) that synchronizes nightly with a remote server over the internet. The P2V was successful, and the application passed inspection by the department that used it.

This morning we found out that the server did not synchronize. The first thing I did was run an ipconfig, which revealed that there was no default gateway set! I KNOW I set that. I also know that I've seen this issue before, and it has to do with two different network adapters having the same default gateway setting.

There are two things you need to do to fix this issue, but both are pretty easy. You can do it while the server is running if you are careful not to delete the wrong network adapter.

The first thing you need to do:

  1. Open an administrative command prompt
  2. type: set devmgr_show_nonpresent_devices=1
  3. In the same command prompt window, type: devmgmt.msc
  4. Now, in the device manager window that just opened, click View-> Show Hidden Devices
  5. Under the Network Adapters branch, you should see a "greyed out" adapter. This is the inactive adapter.
  6. Right click on the inactive adapter and choose uninstall. I left the drivers behind.

** NOTE **: It seems like this is also a great way to see if a system you are working on was P2V'd at some point. I also notice other now-disconnected hardware, like the old physical video adapter, when I'm in this view.

The second thing you need to do:

  1. Open Regedit
  2. Navigate to HKLM\SYSTEM\CurrentControlSet\services\Tcpip\Parameters\Interfaces
  3. Underneath that branch, you should see at least one long GUID that looks like this {542A742-AF234.....}
  4. Click on that, and you should recognize the settings - it's your active network adapter. If you have more than one GUID, click between the different GUIDs to find the one with the IPAddress that corresponds to your main adapter.
  5. Write down what your default gateway is, if you don't know it by heart. 
  6. Right-click on the "DefaultGateway" item and click modify.
  7. Select all of the text in the large "Value Data" field and delete it, then carefully type in your default gateway and click the OK button.


The reason behind step two is that sometimes an invisible line return character gets added to the value, either in front or behind, and that can make the setting misbehave on reboot.
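A quick way to spot that stray character without squinting at regedit is to dump each interface's DefaultGateway value with brackets around it; any extra whitespace or line break shows up between the brackets. A sketch:

#Sketch: list each TCP/IP interface and its DefaultGateway value, bracketed to expose stray characters
Get-ChildItem "HKLM:\SYSTEM\CurrentControlSet\services\Tcpip\Parameters\Interfaces" | ForEach-Object {
    $Gateway = (Get-ItemProperty $_.PSPath).DefaultGateway
    "{0} -> [{1}]" -f $_.PSChildName, ($Gateway -join ",")
}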



Monday, July 14, 2014

Report on Group Membership for Sensitive Groups

Here's another script I have that runs weekly, and gives me a run-down on accounts that have membership to sensitive groups in Active Directory. I used Quest's Activeroles Powershell plugin for this. Quest is now owned by Dell, and you can find the plugin here.

I recommend that you run this to identify accounts that may have more access than you'd prefer. If you just add someone to a group temporarily, this can help save you from forgetting that they're a member (long term).

This covers #20 in my list of scheduled reports, which I highly suggest that you check out...

#-------------------BEGIN SCRIPT---------------------------

#Add the snapin
add-pssnapin Quest.ActiveRoles.ADManagement

#Specify a temp file
$TempFile = "c:\temp\GroupAudit.txt"

#Here we list the groups that we'd like to display members for
$Groups = `
"DOMAIN\Administrators",
"DOMAIN\DnsAdmins",
"DOMAIN\Domain Admins",
"DOMAIN\Enterprise Admins",
"DOMAIN\Exchange Admins",
"DOMAIN\Schema Admins"

#For each group, add a header, then output the members of the group. Pipe everything to the temp file
Foreach ($Group in $Groups){
$Header = "`r`nThe current members of the $group group are:"
$Header | Add-Content $TempFile
get-qadgroup $Group | get-qadgroupmember | add-content $TempFile
} #End Foreach

#Get the content of the temp file to form the body of the email
$body = (get-content $TempFile | out-string)

#Specify Email variables
$From = "helpdesk@DOMAIN.org"
$Subject = "PS Report - Sensitive AD Group Memberships"
$To = "me@DOMAIN.org"
$SMTPServer = "smtpserver.DOMAIN.org

#Send the email
Send-MailMessage -To $To -Subject $Subject -Body $body -From $From -SmtpServer $SMTPServer

#Delete the temp file
Remove-Item $TempFile

#-------------------END SCRIPT---------------------------
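If you don't have the Quest snapin handy, the same report can be approximated with the built-in ActiveDirectory module. This is just a sketch, not a drop-in replacement (the group names are the ones from the script above, minus the DOMAIN\ prefix, and $TempFile is the same temp file):

#Sketch: list members of the sensitive groups using the native ActiveDirectory module
Import-Module ActiveDirectory
$Groups = "Administrators", "DnsAdmins", "Domain Admins", "Enterprise Admins", "Exchange Admins", "Schema Admins"
Foreach ($Group in $Groups){
    "`r`nThe current members of the $Group group are:" | Add-Content $TempFile
    Get-ADGroupMember -Identity $Group | Select-Object -ExpandProperty SamAccountName | Add-Content $TempFile
}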

Thursday, July 10, 2014

I Love You, Chrome, but these processes have to die....

I was sitting around in my living room looking at random Wireshark captures (What does telnet look like? How about a DNS query?), trying to reduce the background network "noise" as much as possible. I have no apps running on my taskbar and have killed some background fluff that was running, and then I see Hangouts running in my system tray. A right-click reveals no way to exit. I open Task Manager to find 30 chrome.exe processes. Well, maybe not thirty, but more than 10. I have quite a few extensions.....

I know WHY it's like this. Separate processes are more easily secured. It's a sandbox thing. What I don't like is that there's not a better description, like "Master Chrome Process" so I can kill one and it will take down all of the child processes with it. I'm not entirely sure that "child processes" is accurate.

There's a lot about this that I don't know, and don't really care about at this point in time. All I want is for Chrome to stop chattering and mucking up my Wireshark capture. I ran a search to see if there was a way to kill all of these processes in some sane fashion, but then I thought, why am I doing this? I can write a script to kill all of these in less time than it would take me to scour the results for a workable process that may or may not exist. Pffft - one liner time:

#Kill-Chrome.ps1
#This script kills all processes named chrome*
#Do not pass go. Do not collect more marketing data.
get-process | where {$_.name -like "chrome*"} | %{stop-process -id $_.id}

Wednesday, July 9, 2014

Monthly WSUS Database ReIndexing - The Automated Way

A buddy of mine clued me in on this post that recommends re-indexing the WSUS database every month. Sounds like a good candidate for automation to me....

First off, the environment I'm operating under runs WSUS 3.0 SP2 on a Windows Server 2008 R2 64-bit box that is fully patched. WSUS utilizes the Windows Internal Database, and not a full blown SQL database.

The first thing you'll need is "sqlcmd". I tried just copying sqlcmd.exe from one of my SQL servers, but that didn't turn out so well. I was worried that I was going to have to install all of SSMS on my WSUS server, which wouldn't be horrible, but I like to keep things tidy and this seemed like overkill. Some research led me to this download page for the SQL 2005 Feature Pack.


  • Download the Microsoft SQL Server Native Client from that download page (skip the red "Download" button at the top for ala carte offerings below; more rejoicing).
  • Scroll down farther and also grab the appropriate Microsoft SQL Server 2005 Command Line Query Utility.
  • On your WSUS server, install the Native Client, and then install the Command Line Query Utility. In my case, sqlcmd.exe was created in C:\Program Files\Microsoft SQL Server\90\Tools\binn.
  • Now, copy the WSUSDBMaintenance.sql script on Technet to a folder (I'll use C:\PS).
  • Create a new batch file in C:\PS containing the following:

c:
cd "C:\Program Files\Microsoft SQL Server\90\Tools\binn"

sqlcmd -I -S np:\\.\pipe\MSSQL$MICROSOFT##SSEE\sql\query -i C:\PS\WsusDBMaintenance.sql

  • Finally, schedule the task on the WSUS server so it runs once a month. I picked the last Sunday.
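If you want to script the scheduling as well, schtasks can create the monthly task. This is a sketch - the batch file name is whatever you called yours, and I'm assuming the last Sunday of the month at 11 PM:

schtasks /create /tn "WSUS DB Reindex" /tr "C:\PS\WsusReindex.bat" /sc monthly /mo LAST /d SUN /st 23:00 /ru SYSTEM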

If you're familiar with the blog, you may wonder why I'm not using a Powershell script. :)
The answer is because this is the easiest way to implement this tiny operation. Why fool around with Invoke-Expression when a batch file works just fine?

When I ran this for the first time, I assumed it would take a long time, since I'd never done it before. To my surprise, the process took about 3 minutes.

Monday, July 7, 2014

Using Powershell to find out which Windows Updates were Installed since $Date?

I wrote this script some time ago, and now I am going back through my old scripts and making functions out of them.

The purpose of this script was to enable me to find out which Windows Update patches were installed on a server in the past X days. I wanted to be able to fully answer the "what changed" question when it came up to troubleshoot some server issue.

Here is the script as it was originally:

#------------- BEGIN SCRIPT ----------------------

#Prompt for the computer name and how far to go back
$HostChoice = (Read-Host -Prompt "Please Enter the computer name you'd like to query")
$DaysBackChoice = (Read-Host -Prompt "How many days would you like to go back?")

#Get the date from X Days ago
$DateXDaysAgo = ((get-date).adddays(-$DaysBackChoice).toshortdatestring())
$DateXDaysAgo = ($DateXDaysAgo + " 12:00:00 AM")

#Get the info from the remote computer and pass it to GridView
Get-WMIObject -ComputerName $HostChoice -Class Win32_QuickFixEngineering | 
where {$_.InstalledOn -ge $DateXDaysAgo} | sort InstalledOn | 
out-gridview

#------------- END SCRIPT ----------------------

Since writing this, I've learned that "Read-Host" kills puppies every time you use it. Or something. I've therefore decided to turn this script into a full-blown function using this as a template:

#------------- BEGIN SCRIPT ----------------------

function Do-Something {
  <#
  .SYNOPSIS
  What does this script do?
  .DESCRIPTION
  A better list of what the script does
  .EXAMPLE
  Command example
  .PARAMETER <ParameterName>
  What is the purpose of this parameter?
  #>
  [CmdletBinding()]
  param
  (
    [Parameter(Mandatory=$True,
    ValueFromPipeline=$True,
    ValueFromPipelineByPropertyName=$True,
      HelpMessage='What does this parameter do?')]
    [Alias('AliasHere')]
    [ValidateLength(3,30)]
    [string]$ParameterVariableName
  )

#Put the meat of your function inside the Process block
  process {
    write-verbose "Beginning process loop"
  } #End Process
} #End Function

#------------- END SCRIPT ----------------------

After much trial and error (I had to reformat the dates for some reason), here is the final product:

#------------- BEGIN SCRIPT ----------------------


function Get-UpdatesInstalled{
  <#
  .SYNOPSIS
  Find out which updates have been installed on a computer in the past X days
  .DESCRIPTION
  This script is passed two arguments, Computername and DaysBack. You should be a member of the local administrators group on the target machine. The script then returns, in GridView, which updates have been installed within that timeframe.
  .EXAMPLE
  Get-UpdatesInstalled -ComputerName <NameString> -DaysBack <PositiveInteger>
  .PARAMETER <ComputerName>
  This sets the target computer to query.
  .PARAMETER <DaysBack>
  This sets how far back to query for updates.
  #>
  [CmdletBinding()]
  param(
    [Parameter(Mandatory=$True,
    Position=1,
    ValueFromPipeline=$True,
    ValueFromPipelineByPropertyName=$True,
      HelpMessage='Name of computer to query')]
    [ValidateLength(3,30)]
    [string]$ComputerName, 

    [Parameter(Mandatory=$True,
    Position=2,
    ValueFromPipeline=$True,
    ValueFromPipelineByPropertyName=$True,
      HelpMessage='Updates installed within how many days?')]
    [string]$DaysBack
  ) #End Param


#Put the meat of your function inside the Process block
  process {
    write-verbose "Beginning process loop"
      #Ping Test
      $PingTest = test-connection $ComputerName -quiet
      
      If ($PingTest -eq $true){
        Write-Verbose "Ping Test Successful"
#Get the date from X Days ago and reformat for use in this context
$DaysBack = "-" + $DaysBack
        $DaysBackDouble = $DaysBack -as [double]
        $DateXDaysAgo = ((get-date).adddays($DaysBackDouble).toshortdatestring())
$DateXDaysAgo = ($DateXDaysAgo + " 12:00:00 AM")

#Get the info from the remote computer and pass it to GridView
        Get-WMIObject -ComputerName $ComputerName -Class Win32_QuickFixEngineering | 
        where {$_.InstalledOn -ge $DateXDaysAgo} | sort InstalledOn | 
        out-gridview -Title "Updates installed on $ComputerName since $DateXDaysAgo"
      } #End If Pingtest
      Else {
      Write-Host "Unable to Ping Host"
      } #End Else
  } #End Process
} #End Function

#------------- END SCRIPT ----------------------



Thursday, July 3, 2014

Powershell Shenanigans - Kill Remote RANDOM Processes

It's Friday for me, so here's something a little off-beat!

A few months ago my Padawan and I were sitting around and I was showing him some intro-to-Powershell type things. It occurred to me that it might be funny to write a script that would randomly kill processes on a user's machine.

DISCLAIMER: Using this tool could result in the loss or corruption of data since you aren't closing files properly. I haven't ever used this on an end-user, and you shouldn't either. It's mean. Also, this isn't the cleanest code possible. It was for fun!

#--------------------- BEGIN SCRIPT -------------------------

#Get the target computer name
$cpname = read-host "Enter the computer name"

#Get a list of running processes from the target system.
#This was refined so that I wouldn't kill system processes
do {
$Processes = get-process -computername $cpname | where {$_.ProcessName -ne "svchost" -and $_.ProcessName -ne "winlogon" -and $_.ProcessName -ne "wininit" -and $_.ProcessName -ne "wmiprvse" -and $_.ProcessName -ne "system" -and `
$_.ProcessName -ne "spoolsv" -and $_.ProcessName -ne "lsass" -and $_.ProcessName -ne "csrss" -and $_.ProcessName -ne "conhost" -and $_.ProcessName -ne "smss" -and $_.ProcessName -ne "services" -and $_.ProcessName -ne "idle"}

#Spit out a list of processes
$Processes | select id, processname | ft -autosize

#Prompt for a course of action. At this point the script isn't entirely without merit. I could use it to kill a stuck process on a user's system.
$Choice = Read-Host "Enter Process Number to kill, R for Random, or Q to quit"

#Kill a random process
If ($Choice -like "R"){
$RandProc = $Processes | get-random
get-wmiobject -computername $cpname win32_process | where-object {$_.handle -eq $RandProc.ID} | foreach-object {$_.terminate()}
$ProcessKilled = $RandProc.Processname + " has been killed"
Write-Host $ProcessKilled
} #End If

#Quit Choice
If ($Choice -like "Q"){
exit
} #End If

#If you chose a specific process number to kill
If ($Choice -notlike "R" -and $Choice -gt 0){
$Result = get-wmiobject -computername $cpname win32_process | where-object {$_.handle -eq $Choice}
$Result | foreach-object {$_.terminate()}
$ProcessKilled = $Result.Processname + " has been killed"
Write-Host $ProcessKilled
} #End If
$answer = read-host "Press 1 to kill another process, or q to quit"
}
while ($answer -eq 1)

Wednesday, July 2, 2014

Are you Monitoring PasteBin for Your Employer? You should be!

Last year I went to a great Hacker Security convention called GrrCon here in Grand Rapids, Michigan. I'll be going this year too, so give me a shout if you want to meet up. It's not an expensive ticket, and the content was SO amazing last year.

One of the things I took away was that I needed to be monitoring things like PasteBin. If you monitor security websites, you probably recognize Pastebin as a popular place for hackers to post pilfered data, though the site has many more worthwhile uses. I was surprised to find computer system data for an old employer of mine (including usernames and passwords!).

Getting it taken down was easy enough, but I then went out and found PasteLert, which gives me alerting on any search terms that I plug into it. I can even pipe them into an RSS feed so I can read them along with my daily news in Feedly, which is pretty great.

I highly recommend that everyone looks into ways to find leaked data about your organization!

Tuesday, July 1, 2014

Stepping up my Powershell Game

I watched a fascinating video about Powershell in the enterprise from a talk at TechEd given by Don Jones. It made me realize that it was time to step up my game a bit when it came to Powershell.

Two things I've decided to get more serious about are functions and version control. Version control doesn't look like it will be too hard to implement. We don't have any developers in-house so I don't have anything at-hand to use, but Sapien makes a version control system that I'm going to take a look at.

Most of the scripts I write automate certain things. What I've found over time is that I'm constantly stealing code from one script to use in another. Most of the time the syntax is even identical. I really should automate these snippets into full blown functions. I also think that it's time for me to make some tools for use by our helpdesk. We have a little command line batch file that predates me and is pretty popular within the department. So, for my first function, I'm going to create an uptime function, but I'm going to tie in a bunch of other handy information, too.

From the video, I gathered that [CmdletBinding()] is the cat's meow, and I can now confirm this. Using the Cmdletbinding tag allows me to use Write-Debug and Write-Verbose within my scripts, and then use -verbose and -debug options while calling the function to get that info (among a lot of other things).

Here's what I've come up with so far:

function Get-PCInfo {
  <#
  .SYNOPSIS
  Lists a number of useful pieces of information from a computer
  .DESCRIPTION
  Gets the following information:
  Logged on User
  Uptime
  CPU Usage
  Memory Usage
  C Drive Free Space
  .EXAMPLE
  Get-PCInfo <ComputerName>
  .PARAMETER computername
  The computer name to query. Just one.
  #>
  [CmdletBinding()]
  param
  (
    [Parameter(Mandatory=$True,
    ValueFromPipeline=$True,
    ValueFromPipelineByPropertyName=$True,
      HelpMessage='What computer name would you like to target?')]
    [Alias('host')]
    [ValidateLength(3,30)]
    [string[]]$computername

  )

  process {

    write-verbose "Beginning process loop"

    foreach ($computer in $computername) {
      Write-Verbose "Processing $computer"
      # use $computer to target a single computer

      #Ping Test
      $PingTest = test-connection $computer -quiet
      Write-Verbose "Ping Test Successful"

      If ($PingTest -eq $true){
          #Get Logged On User
          $LoggedOnUser = (Get-WMIObject -ComputerName $Computer -class Win32_ComputerSystem | select username).username

          #Get Uptime
          $lastboottime = (Get-WmiObject -Class Win32_OperatingSystem -computername $computer).LastBootUpTime
          $sysuptime = (Get-Date) - [System.Management.ManagementDateTimeconverter]::ToDateTime($lastboottime) 
          $Uptime = ($sysuptime.days).ToString() + " days, " + ($sysuptime.hours).ToString() + " hours, " + 
            ($sysuptime.minutes).ToString() + " minutes, " + ($sysuptime.seconds).ToString() + " seconds"
      
          #Get CPU Usage
          $AVGProc = (Get-WmiObject -computername $computer win32_processor | Measure-Object -property LoadPercentage -Average | Select Average).Average

          #Get Memory Usage
          $MemoryUsage = (Get-WMIObject -Class win32_operatingsystem -computername $computer | 
            Select-Object @{Name = "MemoryUsage"; Expression = {“{0:N2}” -f ((($_.TotalVisibleMemorySize - $_.FreePhysicalMemory)*100)/ $_.TotalVisibleMemorySize) }}).MemoryUsage

          #C Drive Free Space
          $CDriveFree = (Get-WMIObject -Class Win32_LogicalDisk -ComputerName $computer | where {$_.DeviceID -Like "C:"} | ForEach-Object {[math]::truncate($_.freespace / 1GB)}).ToString()

          # create an object with your output info
          $InfoItem = New-Object System.Object
          $InfoItem | Add-member -type NoteProperty -Name 'Logged On User' -value $LoggedOnUser
          $InfoItem | Add-member -type NoteProperty -Name 'Uptime' -value $Uptime
          $InfoItem | Add-member -type NoteProperty -Name 'CPU Usage (Load Percentage)' -value $AVGProc
          $InfoItem | Add-member -type NoteProperty -Name 'MemoryUsage (Percent)' -value $MemoryUsage
          $InfoItem | Add-member -type NoteProperty -Name 'C Drive Free (GB)' -value $CDriveFree
      
          #Display the Info
          $InfoItem
      } #End If
      Else {
        Write-Host "Unable to Ping Host"
      } #End Else
    } #End Foreach
  } #End Process
} #End Function
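Calling it looks like this (the computer name is just an example), and tacking on -Verbose shows the Write-Verbose breadcrumbs mentioned above:

Get-PCInfo -computername "SERVER01" -Verbose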

Monday, June 30, 2014

Adsense.... I Did It

So I got approved for Google AdSense. I really wavered for quite a while on whether I should burden my dear readers with ads, but I've been writing on here for a couple of years now and I figured I might as well get something out of this aside from the great feeling that I'm helping my fellow sysadmins.

So, click on some ads, and help me buy some lab gear if you are so inclined. I'm still going to post whether it makes me money or not.

Adventures in My Lab

I just wanted to post (brag, heh) about the lab setup that I've created. I have VMware Workstation set up and I've created an internal (Host Only) network containing pretty much every Microsoft OS since DOS 6.22. I'm tempted to get my old SpitFire BBS (I'm a packrat) set up, which I ran on a 2400 baud modem at night back in junior high, but I don't have the slightest clue how I would simulate modem communication, and frankly I have better things to do. Booting into Windows for Workgroups 3.11 was definitely a trip! I've also got Windows 95, 98, ME, 2000, 2003, 2008R2 (with MS SQL), and 2012 R2 for kicks. All of them are networked except for the DOS 6.22 VM. Also, I have a Kali Linux VM set up so that I can look into the security pen-testing world, which mainly consists of NMap, Wireshark, and MetaSploit at this point. I was a little sad when I ran Armitage (which is a graphical front-end for MetaSploit noobs like me) against my ancient Windows 3.11 and 95 boxes and it didn't come back with any detected exploits, but I am assuming that MetaSploit probably doesn't contain modules for antique OSes out of the box. I was so expecting to see the Ping of Death available for my entertainment!

I set up a Linux Mint 17 VM in bridged mode that I've been using as much as possible for my day-to-day computer use, in order to get used to using Linux for normal tasks. Dropbox helps me keep a running list of handy commands for reference that I can access from any of my other boxes. My plans are to get this machine on my work domain so that I can start playing with Samba and learn how to use Linux as a file server. I'm following the LPIC-1 curriculum now that I've wrapped up a great YouTube series by TutoriaLinux on the generalities of Linux. I am definitely getting more comfortable using the terminal and I now better understand the uses of the various folders in the Linux filesystem. Looks like I'll be spending a lot of time in /etc (configuration files) and /var (logs) going forward.

On the Cisco front, I've acquired a copy of Cisco Packet Tracer, which is an amazing learning tool. I've managed to use the clients, switches, and routers within it to create a functional Cisco network and have gotten used to at least some of the commands used to provision Cisco devices. I'm now able to bring ports up and down and configure some of the basic security and connectivity options like console and enable passwords, timeouts, and telnet channels. So far so good. I've also acquired GNS3, which uses Dynamips to boot actual Cisco IOS images to do simulations, but at this point in my education (about a third of the way through the CBT Nuggets CCENT course) it's pretty advanced and I'm holding off until I know what I'm doing a little more.

My strategy is to use the CCENT curriculum as a vehicle to expand my understanding of network functionality. My next step depends on my employer. We currently use an ASA, so a CCNA: Security seems like a worthwhile pursuit, especially given my interest in network security. We're shopping for a VoIP solution, and if we go with a Cisco implementation, then I'll head down the CCNA: Voice path instead.  I feel like branching out into Voice would be more useful to my career and add a new area of expertise where there was once only a hatred of phones and telephony in general. If for some reason neither of those pan out, or look like they won't serve my current employer well, then I'll just continue on with the CCNA: Routing and Switching as a fallback.

Friday, June 20, 2014

Starting my Journey - Bonus: Resetting Cisco Routers/Switches to Factory

I've been eagerly watching the CBT Nuggets CCENT training videos, and so far I'm loving it. TCP/IP networking is like fricking magic! It's complicated, but it makes a lot of sense. I have been looking at Wireshark captures and already found some interesting things occurring on the network. Did you know that if Dropbox is installed on a computer, it sends out broadcasts every 30 seconds looking for other Dropbox installations? It's called the "Dropbox LAN Sync Discovery Protocol".

In other fun news, in VMware Workstation I successfully got Windows 3.11 for Workgroups, Windows 95, and Windows 98 talking to each other over TCP. Just because. :)

After some more introspection, I have decided that it's very good that I'm getting into networking with the CCENT. I might even go farther than that. When you get right down to it, IT is all about the DATA. Up to this point (and I don't see this changing soon - I'm just branching out), I have dealt mainly with presenting the data to the employees. Email, file shares, whatever. But I got to thinking: what are the core things that need to be done with data? Data needs to be:

  • Copied (backed up)
  • Secured (ensure CIA: confidentiality, integrity, availability)
  • Transmitted (Networking)
  • Converted into information (Monitoring, reporting, database mining)
  • Stored (SANs, DAS, etc)

The closer I get to the core functions of the DATA, I figure, the better my job prospects will be.

Work had some old Cisco gear laying around that I can use, so I now have at my disposal:

1 Cisco Catalyst 3550 switch
1 Cisco 2600 router with 2x WIC 1DSU-T1 cards (ports look like cat5/6 - do they take something special or can I plug an ethernet cable in there?)
2 Cisco 1841 routers each with 1 WIC 1DSU-T1 V2 card.

I managed to hunt down instructions on how to reset them and have done the needful. Here are the instructions I collated and tested (as much for my reference as yours!):

Resetting the Cisco 1841 and 2600 routers to factory:



  1. Make sure router is powered off
  2. Connect console cable, and bring up putty in the correct COM port
  3. Boot router
  4. Send the break command (right-click on the window bar and choose special command->break)
  5. type confreg 0x2142
  6. type reset
  7. Once the router reboots (say no to initial config dialog), enter enable mode, then type reload
  8. Once the router reboots (say no to initial config dialog), enable, and conf t
  9. type config-register 0x2102
  10. type exit to get back to enable mode
  11. type write memory
  12. type reload


Resetting the Cisco Catalyst 3550 to factory:


  1. Connect the console cable to the switch and start your terminal program (HyperTerminal/Secure CRT). Console port settings are 9600,8,N,1
  2. Hold the MODE button (on the front of the switch) while you power on the switch.
  3. Hold the MODE button for a few seconds until the System light stops flashing.
  4. At this point, the switch should be in ROMmon mode. 
  5. From ROMmon mode, type: flash_init
  6. From ROMmon mode, type: delete flash:config.text
  7. From ROMmon mode, type: boot


Monday, June 16, 2014

Getting a little too..... comfortable.....

After two years this month at my post, I've finally got this environment running as well as I possibly can. I've got a reliable SAN behind a solid VMware cluster, and I've got the environment automated and monitored to the nth degree. Time to sit down and rest on my laurels, right? WRONG.

Now is the time to look around and see what I can learn to either make my environment better, or to make me a better sysadmin.

I'm learning Linux, but that's going to be an ongoing thing that's going to take lots of gradual doing for me to get comfortable with. I have a dedicated Kali Linux laptop that I use every opportunity I have to step outside of the Windows world. I am also doing security stuff (hence using Kali Linux) as I can, but again that's a long, slow slog; not something you pick up a book and learn all of over the course of a couple of months.

I'm finally in the process of learning to write T-SQL statements and turn a big pile of data into information that my department can use to make better decisions. I've wanted to learn T-SQL for the longest time, but could never get interested enough in the data to write my own questions, which is how I learn best. Sales and marketing data never piqued my interest, but give me some helpdesk and inventory data, and I'm MOTIVATED! We run Spiceworks, and it dumps a ton of data into a SQLite database. I discovered that I can use the SQLite Database Browser to mount an offline copy of the Spiceworks database and start working with the data. My biggest challenge right now is understanding JOIN statements. This is giving me a headache. This is the last hurdle I need to clear before I can write a Powershell script to start pulling out some nice monthly helpdesk reports for my manager.

Besides these, I'm going to start some more structured learning. For a little while there, I had come to the conclusion that I wasn't going to play the certification game anymore. I have real projects and a decade now of real experience under my belt, so why bother? After analyzing things, I changed my mind. I want to learn x, y, and z. Why NOT go through a structured curriculum and seize the reward at the other side of the journey? I'd almost be a fool not to get certified after learning the subject material. With that in mind.....

One of my weaknesses has always been networking, and I want a better understanding of it. Not only am I the network admin's backup, but it will come in handy if I decide to move my SAN/VMware backend to 10Gb ethernet. I'd like to start working on some VMware certifications down the road, and this is definitely my weakest subject. I got my Network+ and went through a Cisco CCNA course like, 8 years ago, but the knowledge has faded over time. Also, I don't fully understand VLANs, and that bothers me. CCENT here I come!

I'm not sure if I'll continue on the CCNA track because we don't run Cisco gear, but I'm finding that CCENT is a thorough gauge for understanding the fundamentals of networking, and the subject matter is quite in-depth. I just completed CBT modules on Layer 2 communication via ARP and the TCP 3-way handshake; fascinating stuff! We have an ASA, so maybe the CCNA Security track is viable. I get 3 years to figure it out before my CCENT expires, so I'll mull it over.

Following up on that cert, which I hope to complete in a couple of months, I might as well upgrade my MCITP: Server 2008 Enterprise Administrator cert to the 2012 versions. I have deployed a couple of 2012 servers so far, so it makes sense. It took too much work to get my MCSE: Security on 2003, and then upgrade it to 2008, for me to let it wither on the vine....

After I complete those, I'll step back and see what's what. VMware looks enticing; I've already taken two of their classes, so I figure I might as well get the paper to back them up. I've had my eye on a SQL administration (not dev) cert for a while now, and I have learned a lot about backups, database structure, and maintenance in the past year. Veeam has a new certification program that also looks interesting to me.

A couple of other subjects that look interesting to me are Project Management and Storage. I see CompTIA has entry level certs for both of those. I don't really need to get heavily involved, so these look like low-hanging fruit after I brush up on the basics of these subjects.

I want to learn how to use a couple of apps that have intrigued me for a long time, but that I just never had time to learn: Wireshark (which will actually help me along quite nicely with my CCENT) and Windows Remote Desktop Services. Frankly, I'm not too jazzed about Windows RDS. I've managed Terminal Servers in the past, and I loathe them. That said, 2012 looks like it might have made the RDS situation a little easier to use, so I'll probably look into it. I might even be able to help my employer save some money on software licensing. I'm looking at you, Adobe Acrobat. Fifteen people need to sporadically use your software, and only the professional version will suffice, of course. Maybe an RDS server is the answer....

As you can see, I'm really excited about a lot of different technologies right now. I'll write when I can. :)

Thursday, May 29, 2014

Neat New Piece of Software: Documentor

I stopped scanning my server subnets with Spiceworks recently and needed something to fill the gap. I've been using a very nice piece of software called Documentor to accomplish this. It's not the fanciest thing out there, but it does its job! The developer, Ranma, is active on the ArsTechnica forums.

Be sure to read the wiki to get a bit more functionality. It will tell you how to scan a number of objects from a text file, which makes the tool much easier to use....


Tuesday, May 27, 2014

A local Linux Users' group? Sign me up.....

Some time ago I found out that there was a Linux users' group in my area. I put off attending a meeting for a long time because I felt I was too much of a Linux noob to attend something specialized like this. How wrong I was!

There were 6 people at the meeting of the Kalamazoo Linux Users Group (KLUG) of varying degrees of knowledge. I was actually able to help one gentleman get his wireless working on a fresh copy of Mint. I learned a few new tricks and made some new friends. I'm confident that I'll be able to contribute more as my skills develop, but I think it's just great to be at a table messing around with computers with other people who are also interested in them. I really wish there were more user groups that met in meat-space.

Now that I'm married and have kids and am out of school, I find that it's more difficult to meet people with the same interests. Hell, I've even considered taking computer classes just to meet other people who like computers.

My advice: Get out there and physically meet some of the people you consort with online as much as you can!

Friday, April 25, 2014

Facebook edits are visible?

Just a quickie: Did you know that Facebook edits are visible? You will see a link titled "Edited" under the name of the poster; comments and posts will have this link if they get edited. Click on it and you can see the original. Kind of disturbing, really. Look through your feed and I'm sure you'll find a couple. No takebacks. Be careful before posting....

Thursday, April 10, 2014

HeartBleed made me finally make the leap... LastPass here I come

The HeartBleed SSL vulnerability has been at the forefront of tech news for the past couple of days now. This new vulnerability has shaken my personal password policy and management to its core. I knew it would happen someday, but I just kept kicking the can down the road.

I've had a pretty good run with KeePass. My KeePass database spans 5 years of creating passwords on the internet. Are all of my passwords original? Nope. I have several variations and combinations of very strong passwords. I've also simply used generic passwords at sites that I don't care about. To my knowledge, nothing has been compromised yet.

HeartBleed leapfrogs current password harvesting methods. Until now, hackers have typically compromised servers and then pilfered improperly secured password databases. Every time this happens, more and more passwords are added to hackers' dictionaries. Hackers use dictionaries to try to guess passwords, and these lists are getting better and better. With HeartBleed, hackers don't have to mess around with password database security, or salted hashes, or anything like that. They can just scrape the unencrypted, unsalted passwords directly out of the server's memory. Awesome. Oh wait, it gets better! This vulnerability has been wide open for a couple of years!

I used my most complex combo passwords on my most important sites: Facebook, Gmail, Dropbox, etc. The ones I don't want people to get into. Facebook, Gmail, and Dropbox were all vulnerable. For example, let's say my awesome strong passwords consist of 1-4 different words:

M0t0b0at
Gr@v1t33
L3tsGO
M@RVEL-us

Since I value my GMail account's security, my password would have been gM0t0b0atGr@v1t33L3tsGOM@RVEL-us. On other less important sites, it would have been just one, like L3tsGO.

I admit it. This is not good security practice. All my passwords should be strong. And different. The problem is I have too many passwords, and my brain just isn't big enough. I've been fighting the monster for so long.... and now I'm done.

The problem is that some enterprising hacker can now take the passwords they've harvested and add them to their dictionary. Most password cracking programs have the ability to do all kinds of neat things, like change all e's to 3's, try both upper- and lower-case letters, etc. They can also put passwords together in different combinations. So, my risk has increased quite a bit.

Yesterday I plunked down the $13 for LastPass and started changing my passwords. They're all different now; random and 20 characters long. If a site's password database gets hacked, I'll just change the password for that site and be done with it. I can't take the worry or the management overhead anymore. I have thrown in the towel. Take my money, LastPass.

I recommend that you check out this link on mashable to find out where you should be changing your passwords.




Friday, April 4, 2014

MalwareBytes Blocked IP Reporting

We had a spate of malware attacks on our website recently. We run MalwareBytes Anti-Malware on the box. In the log file, I see that it's blocking IP addresses. Shouldn't there be some reporting? I think so. I start poking around the interface, but there are no reporting options.

Surprisingly, it doesn't even write to the event logs. Only log files. So much for alerting us when there's an issue. I talk to my network admin and we decide that we'd like a daily report that tells us what IPs have been blocked, and then he can investigate further and decide if he wants to block the IP at the Firewall.

On a daily basis, we need to pull yesterday's log file, select any lines with "IP-BLOCK" in them, and send him an email with the entries so he can look into the IP addresses. Sounds like a job for Powershell!

As it happens, MalwareBytes writes its log files using a weird encoding format. My Get-Content fails miserably, resulting in text output where there appears to be a space between every single character. In Notepad++, the text looks fine, but I notice that in the bottom right-hand corner it says "UCS-2 LE w/o BOM". Weird, this must be encoded differently. Get-Content works with some encoding schemes, but this one is not in the list. After much Googling, trial, and error, I figure out that I need to read the file with Get-Content, then write it back out with different encoding using Out-File's -Encoding UTF8 switch. Now, however, the text file I have contains a bunch of NUL characters. To get rid of these I have to do a -replace "`0","". That's a zero, and the backtick-zero symbolizes the NUL character. NOW I have some data to work with!

Great, so I put it all together, test it, and schedule it to run nightly at 12:01AM.

Here's the script:

#---------------------- BEGIN SCRIPT -----------------------

#Path to Malwarebytes Log Files
$PathToLogs = "C:\ProgramData\Malwarebytes\Malwarebytes' Anti-Malware\Logs\"

#Temp File
$TempFile = "C:\Temp\TempFile.txt"

#Yesterday date, as a string formatted yyyy-MM-dd
$date = (Get-Date).AddDays(-1).ToString('yyyy-MM-dd')

#Put together the filename we'll be looking for
$FileName = "protection-log-" + $date + ".txt"

#Put together the entire path
$FileFullPath = $PathToLogs + $FileName

#Read the content from the log file, and send it out with UTF8 encoding to the Temp File
Get-Content $FileFullPath | out-file -Encoding UTF8 $TempFile

#Read the new content
$UTF8File = Get-Content $TempFile

#Delete the Temp File, since we've read it now
Remove-Item $TempFile -Force

#Specify the character to be removed
$RemoveString = "`0"

#Remove the null characters from the file, creating a usable file
$CleanedLog = $UTF8File -replace $RemoveString,""

#Get the lines that have blocked IPs
$BlockedIPs = $CleanedLog | select-string -pattern "IP-BLOCK"

#Send an email if there are greater than 0 IP-BLOCK messages.
#(Count the matches before converting them to a string; once piped through out-string,
#even an empty result counts as one object and the test would always pass.)
If ((($BlockedIPs | measure-object).count) -gt 0){
    #Convert the matches to a single string, so they can be used in the body of the email
    $Body = $BlockedIPs | out-string
    Send-MailMessage -To netadmin@contoso.com -Subject "PS Report - IPs Blocked by MalwareBytes" -Body $Body -From "helpdesk@contoso.com" -SmtpServer "mailserver.contoso.com"
}

#---------------------- END SCRIPT -----------------------
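
If you'd rather script the scheduling piece as well, the nightly 12:01 AM task can be registered from an elevated prompt with schtasks. This is just a sketch; the script path (C:\PS\Get-MBAMBlockedIPs.ps1) and the task name are made up, so adjust them to wherever you saved the script:

schtasks /Create /TN "MBAM Blocked IP Report" /SC DAILY /ST 00:01 /RU SYSTEM /TR "powershell.exe -ExecutionPolicy Bypass -File C:\PS\Get-MBAMBlockedIPs.ps1"

Running it as SYSTEM means there's no stored password, but your mail server does have to accept mail coming from the machine itself.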

Thursday, April 3, 2014

Exchange Calendar Issues

A coworker of mine went to Exchange World in Austin, TX this week, and we have finally figured out why we've been having such strange calendar behavior! Apparently, there are many issues between third-party vendors' ActiveSync software and Exchange Servers.

This article over at Network World gives a great description of the problem and what IT needs to do to resolve it.

Monday, March 31, 2014

Automated Backup of a MySQL Database

I have a couple of MySQL databases out here in the wilderness. I use Veeam Backup and Replication to back up my virtual machines, but as far as I'm aware, Veeam doesn't have any way to quiesce MySQL database activity so that it can get a proper backup. So, I got my hands dirty and created the following process to automatically back up MySQL databases.


  • The first thing that needs to be done is to create a user within the MySQL instance that can be used for backups. I accomplish this on the server in MySQL Workbench. Under Server Administration, click on Security. Add a user account, give it a nice password, and add it to the BackupAdmin role.
  • The MySQL server is not a domain-joined computer, so I created a local Windows account (MySQLWinBackup), and assigned the account the "Log on as batch" right.
  • I created a local folder named C:\DB_Backup. This folder should be shared to "Everyone", and then the NTFS permissions modified so that the local Windows backup account has full permissions.
  • The following Powershell script file needs to be created. I store my scripts in C:\PS:

#--------------------- BEGIN SCRIPT -------------------------------

#Name of Server
$ServerName = "WebServer"

#The Path to the mysqldump.exe program. This is the program that backs up the MySQL database natively.
$PathToMySQLDump = "C:\MySQL\bin\mysqldump.exe"

#Credentials of the MySQL user account
$Username = "MySQLBackupAccount"
$Password = "MySQLBackupAccountPassword"

#Get today's date, to be used for naming purposes
$Date = (get-date).ToString('MM-dd-yyyy')

#Where to store the backup files
$LocalBackupPath = "C:\DB_Backup"

#Backup all of the Databases
cmd /c " `"$PathToMySQLDump`" --routines --events --user=$UserName --password=$Password --all-databases > $LocalBackupPath\$ServerName-AllDataBasesBackup-$Date.sql "

#--------------------- END SCRIPT -------------------------------

  • Now, I set that Powershell script to execute daily at 5:10PM in Task Scheduler.
  • I run the scheduled task and find out how long it's going to take. This information will be used later on.
  • Over on my backup server, I create another script. This script handles moving the backup files created earlier off-SAN and off-site:
#--------------------- BEGIN SCRIPT -------------------------------

#Name of Server
$ServerName = "WebServer"

#Path to the location where the MySQL server put the backups
$RemoteBackupPath = "C:\DB_Backup"

#Off-SAN copy location
$OffSANCopyLocation = "\\OnSiteNASDevice\MySQLBackup"

#Off-Site copy location
$OffSiteCopyLocation = "\\DR_NAS_Device\MySQLBackup"

#Body of the notification email
$Body = "."

#Map Y drive to the MySQL server's backup location:
net use Y: \\WebServer\DB_Backup /user:"MySQLWinBackup" MySQLWinBackupPassword

#Get a file count for success/failure test
$FileTest = (get-childitem Y:\ | measure).count

#Success/Failure Test - If the file count does not come back as expected, send an email, THEN EXIT THE SCRIPT
If ($FileTest -ne 1){
$Body = "The correct number of files is not present on the MySQL Server to be distributed off-site and off-SAN."
Send-Mailmessage -from "helpdesk@contoso.com" -to "itreporting@contoso.com" -subject "PS Report - MySQL Backup Status - FAILURE - WEBSERVER" -smtpserver contosomailserver -body $body
exit
}

#Copy the files from the MySQL Server to the off-site and off-SAN locations
copy-item -Path "Y:\*.*" -Destination "$OffSiteCopyLocation"
copy-item -Path "Y:\*.*" -Destination "$OffSANCopyLocation"

#Delete the backup files stored locally on the MySQL server
remove-item "Y:\*.*"

#For off-site location, get the creationtime and delete anything older than 7 days
$Files = (Get-childitem $OffSiteCopyLocation)
Foreach ($file in $files){
$FileAge = ((get-date) - ($file.creationtime)).totaldays
If ($FileAge -gt 7){
remove-item $File.FullName -Force
} #End If
} #End Foreach

#For off-SAN location, get the creationtime and delete anything older than 7 days
$Files = (Get-childitem $OffSANCopyLocation)
Foreach ($file in $files){
$FileAge = ((get-date) - ($file.creationtime)).totaldays
If ($FileAge -gt 7){
remove-item $File.FullName -Force
} #End If
} #End Foreach

#Send a success notification email
Send-Mailmessage -from "helpdesk@contoso.com" -to "itreporting@contoso.com" -subject "PS Report - MySQL Backup Status - SUCCESS - WEBSERVER" -smtpserver contosomailserver -body $body

#Unmap the MySQL Backup drive
net use Y: /delete

#--------------------- END SCRIPT -------------------------------

  • I save this script in the C:\PS folder on my backup server, then set a scheduled task to run the script at an appropriate time. How do I determine what's appropriate? I look at the run time of the backup on the server itself, and then pick what's comfortable to me. In my case, the backup takes about 2 minutes to run, so I set this task to execute 10 minutes after the start of the MySQL backup script.

The resulting workflow is:
  1. MySQL Server backs up databases to its C drive.
  2. Backup server checks for the existence of the new backup file. If not ok, send failure email alert.
  3. Backup server copies that file to 2 other locations, then deletes the originals on the MySQL Server.
  4. Backup server deletes backup files older than 7 days at the 2 other locations.
  5. Backup Server sends success email alert.
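
One thing this process doesn't cover is the restore. In a pinch, the dump file should just feed back through the mysql client. Here's a rough sketch in the same style as the backup script; the paths, the example file name, and the root credentials are all placeholders I haven't tested against this exact setup:

#Path to the mysql.exe client (assumed to sit next to mysqldump.exe)
$PathToMySQL = "C:\MySQL\bin\mysql.exe"

#The dump file to restore (example name only)
$DumpFile = "C:\DB_Backup\WebServer-AllDataBasesBackup-03-31-2014.sql"

#Feed the dump back into the server; this recreates every database contained in the file
cmd /c " `"$PathToMySQL`" --user=root --password=RootPassword < `"$DumpFile`" "

As always, try a restore on a scratch instance before you need it for real.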

Friday, March 28, 2014

Some PC App Recommendations

This past week I had to bite the bullet and reformat my work laptop. Piriform's CCleaner was a big help, as I was able to export a list of all installed apps. To do so, open up CCleaner and click on "Tools". At this point "Uninstall" should be highlighted, and on the bottom right-hand side, click on "Save to text file...".
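
If you don't happen to have CCleaner on the machine, roughly the same list can be scraped out of the registry with PowerShell. A quick sketch (these are the standard uninstall keys; 32-bit apps on 64-bit Windows live under Wow6432Node):

Get-ItemProperty "HKLM:\Software\Microsoft\Windows\CurrentVersion\Uninstall\*", "HKLM:\Software\Wow6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*" -ErrorAction SilentlyContinue |
    where {$_.DisplayName} |
    select DisplayName, DisplayVersion, Publisher |
    sort DisplayName |
    out-file C:\Temp\InstalledApps.txt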

Once I had my list, I installed all of the apps on my new laptop. I figured that I would note some helpful items here. I use Windows 7, so if you're running Windows 8, then your experience could be different.


  • Actual Multiple Monitors allows me to have the Windows 7 taskbar extend to my second screen. I can pin separate applications to this taskbar.
  • Advanced IP Scanner is my go-to address space scanner.
  • Autohotkey is something I would like to play with more, but my killer app for this so far is this gem, which allows me to use Ctrl+V to paste into the command prompt and Powershell.
  • Beyond Compare allows me to compare files, folders, and even registry entries. I've been using this for many years.
  • EMCO makes two utilities that I really like: MAC Address Scanner and Remote Console. Usually I use Powershell remoting, but this has come in handy when there is a configuration issue on the client side and I'm not able to use remote Powershell.
  • Image Resizer Powertoy Clone for Windows allows me to right click on picture files and quickly resize them.
  • NetSetMan allows me to quickly switch between different network adapter configurations, such as when I need a static IP address at a remote site, or when I want a static address on the iSCSI network.
  • QtTabBar gives me the ability to have tabs in Windows Explorer. This is a killer app......
  • Multiplicity allows me to have a kind of software KVM. If you've seen my setup, I have two screens above and two below. Each set of two is a different computer, and this lets me control all four screens with one keyboard and mouse.
  • Tabs for Excel was a big win with the accounting department when I discovered it and installed it for them. Now I can have multiple spreadsheets open in one Excel window.
  • VirtualCloneDrive runs in my system tray and allows me to mount ISO files.
  • VistaSwitcher (don't be scared off by the "Vista" in the name) gives you a much better interface when using Alt-Tab to switch between open applications. Instead of just showing the names of the running apps, it shows you what's within each window.
  • Windows Grep will let me search through a bunch of text files for certain strings. I know I can do this through Powershell, and this is a total crutch, but sometimes things are just easier with a GUI.
  • WizMouse allows me to use my mouse's scroll wheel to scroll through something without changing my active window. All I have to do is put my mouse over the inactive app's window and scroll, and it works!




Monday, March 24, 2014

Pulling out lines that contain X from a text file

This is going to be a short one, and really is more for my reference, but hopefully it helps other people.

I had a long log file, about 490MB, and I was looking for entries that had certain content. Specifically, I wanted any line that had "5156" in it.

Get-Content c:\temp\LogFile.txt | where-object {$_ -match '5156'} | set-content c:\temp\output.txt

After the line above executes, only lines that contain 5156 are copied to the output file.
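
On a file that big, Select-String may be quicker than piping Get-Content through Where-Object, since it reads and matches in a single cmdlet. A variation on the same idea (I haven't timed it against this particular log):

Select-String -Path c:\temp\LogFile.txt -Pattern '5156' | foreach {$_.Line} | set-content c:\temp\output.txt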

I'm going to be using this a lot in the future, I think.....

Friday, March 21, 2014

Use This Powershell Script to List Which VMs are Running on each VMware Host in Your Cluster

I don't use this that often, but sometimes I just want to know which VMs are running on each of my VMware ESXi hosts. I find it easier to fire off the script than to open up vCenter if I want to know which host a VM is running on, and what other VMs are running on that host.

#-------------------------------- BEGIN SCRIPT --------------------------------

#Get Credentials
$credential = get-credential stemalyc@kalamazoocity.org

#Add the VMware Snapin
add-pssnapin Vmware.VimAutomation.Core

#Connect to the vCenter Server
connect-viserver -server cityvc.kalamazoocity.org -credential $credential

#Get all the hosts
$VMHosts = (Get-VMHost | select Name | sort name)

#Get all the VMs
$VMs = (Get-VM | select name, vmhost)

#Find the number of hosts, which will be our counter maximum later on
$HostQty = (($VMhosts | measure).count)

#Initialize the counter to zero
$i = 0

#While the counter is less than the number of hosts (array indexes run from 0 to $HostQty - 1)
While ($i -lt $HostQty){
#List the VMs on the host. I use $i here to reference the exact host in the $VMHosts array.
$Listing = ($VMs | where {$_.vmhost -like ($VMHosts[$i].name)} | sort name)
#Output the list to the screen with Format-Table (ft)
$Listing | ft
#Blank out the list variable so it can be reused
$Listing = $null
#Increment the counter
$i++
} #End While

#Create a pause at the end of the script
$Pause = Read-Host "Press Enter to Continue"

#-------------------------------- END SCRIPT --------------------------------
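
If you just want a quick view and don't need the per-host loop, Format-Table can do the grouping for you. A shorter sketch, assuming the VMware snap-in is already loaded and you're connected to vCenter:

Get-VM | sort VMHost, Name | Format-Table Name, PowerState -GroupBy VMHost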