“No Response Ping” during network discovery on Server Core

No Response Ping

When running a discovery for network devices in SCOM 2012 R2 from a server running Server 2012 R2 Core, the discovery may fail with the above error message. This doesn’t seem to occur with a full installation of Server 2012 R2. The solution, as per this guide, is to enable the pre-defined firewall rules. To do this in PowerShell:

Get-NetFirewallRule | ? { $_.DisplayName -match "operations" -and $_.Enabled -ne "True" } | Set-NetFirewallRule -Enabled "True"

I know that querying and setting the Enabled property in this way seems odd, but this property is an enum rather than a Boolean.

After setting the above rules, run the discovery again and the devices should be detected.

Reduce Server image size

A useful feature of Server 2012 R2 for reducing the size of an image (such as a template) is the ability to compress the manifest files and inactive payloads, i.e. uninstallation data for updates. Reducing the size of an image is especially useful on Server Core, where Disk Cleanup isn’t readily available, as it reduces the time needed to deploy a VM. In a recent test, the image size was reduced by 10 %. Be warned that this command can take several hours to run.

dism /Online /Cleanup-Image /StartComponentCleanup

If you would like to go further, the binaries for disabled features can be removed completely. This removes both the features and any associated updates, so installing the features again will require access to the installation media and the relevant updates.

Get-WindowsFeature | ? { $_.InstallState -eq "Available" } | Uninstall-WindowsFeature -Remove

Source: TechNet.

“The source files could not be found” when installing a Server feature

The source files could not be found. Error 0x800f0906.

This infamous error, which may be returned when installing a Windows Server feature, indicates that the installer cannot access the source files, or the correct version of them. As per this blog, even specifying the installation media may not resolve the problem, as the installer may be expecting a different version of the binaries. Allowing updates from WSUS may work, but in my experience this isn’t usually the case.

There are several options for resolving this problem, e.g.:

Install-WindowsFeature AD-Domain-Services -Source Online
Install-WindowsFeature AD-Domain-Services -Source wim:"X:\sources\install.wim":2
Install-WindowsFeature AD-Domain-Services -Source "X:\sources\sxs\"

Obviously, adjust the drive letter to the appropriate letter of your mounted installation ISO.

One particularly troublesome feature is AD Domain Services, for which the only solution I have found is to provide access to a copy of the SxS store of the same binary version, e.g. the SxS store on another similarly patched server. Install-WindowsFeature accesses network shares in the server’s context, which generally won’t have access to an admin share on a DC, so the Enable-WindowsOptionalFeature cmdlet, which runs in the current user’s context, must be used instead:

Enable-WindowsOptionalFeature -Online -FeatureName DirectoryServices-DomainController -All -Source "\\DC01\C$\Windows\WinSxS\"

Remote administration of Server 2012 R2 Core

Windows Server 2012 R2 Core requires some configuration changes to the firewall and registry to enable remote administration. I’ve struggled to find a comprehensive list, so I’ll keep one here of the commands I use regularly.

General

Enable-NetFirewallRule -DisplayGroup "Remote Desktop"
Enable-NetFirewallRule -DisplayGroup "Windows Management Instrumentation (WMI)"
Enable-NetFirewallRule -DisplayGroup "Remote Event Log Management"
Enable-NetFirewallRule -DisplayGroup "Remote Service Management"
Enable-NetFirewallRule -DisplayGroup "File and Printer Sharing"
Enable-NetFirewallRule -DisplayGroup "Remote Scheduled Tasks Management"
Enable-NetFirewallRule -DisplayGroup "Performance Logs and Alerts"
Enable-NetFirewallRule -DisplayGroup "Windows Remote Management"
Enable-NetFirewallRule -DisplayGroup "Windows Firewall Remote Management"
Enable-NetFirewallRule -DisplayGroup "Routing and Remote Access"

IIS

Install-WindowsFeature Web-Mgmt-Service
Set-ItemProperty -Path "HKLM:\SOFTWARE\Microsoft\WebManagement\Server" -Name "EnableRemoteManagement" -Value "1"
New-NetFirewallRule -Action Allow -Direction Inbound -DisplayName "IIS Management" -Service "WMSVC"
Set-Service "WMSVC" -StartupType Automatic
Start-Service "WMSVC"

The vertical blanking interval

I’ve often idly wondered how teletext worked. How was digital information somehow stuffed into or alongside an analogue television transmission? Or VHS copy protection. How was it possible to add copy protection to an analogue medium? The answer is the vertical blanking interval (VBI).

The VBI is the time between drawing the last line of one frame and the first line of the next, and it was originally required because the magnetic field used to deflect the electron beam in CRTs took time to reset from one corner of the screen to the other. Over time, this interval was used to carry information not required for the primary signal, including subtitles, teletext and – in the case of video cassettes – copy protection.

On Macrovision’s VHS copy protection, from Wikipedia:

Macrovision’s legacy analog copy protection (ACP) works by implanting a series of excessive voltage pulses within the off-screen VBI lines of video. These pulses were included physically within pre-existing recordings on VHS and Betamax, and were generated upon playback by a chip in DVD players and digital cable or satellite boxes. A DVD recorder receiving an analog signal featuring these pulses would detect them and display a message saying that the source is “copy-protected” followed by aborting the recording. VCRs, in turn, react to these excessive voltage pulses by compensating with their automatic gain control circuitry, causing the recorded picture to wildly change brightness, rendering it annoying to watch. The system was only effective on VCRs made at around the mid-1980s and later.

Bug in SCOM’s DayTimeExpression operator of System.ExpressionFilter

Matthew Long’s post on the handy DayTimeExpression expression of the System.ExpressionFilter condition detection module is the only useful information I could find on its use, save for the MSDN documentation. However, in using it, I’ve noticed what appears to be a bug in its implementation.

The problem

The module works by checking whether a date-time is bound (or not bound, as defined by <InRange />) by a window that’s defined by (a) a days of the week mask and (b) a time window, the start and end of which are specified by the number of seconds since midnight. The problem appears to be that the date-time is interpreted as local time, despite being defined in ISO 8601 format, e.g. 2015-05-29T17:15:00Z, rather than being correctly interpreted as UTC and converted to local time before being checked against the defined window. This problem occurs whether a date-time is manually specified, e.g. <Value Type="DateTime">2015-05-29T17:15:00Z</Value>, or is obtained at runtime, e.g. <XPathQuery Type="DateTime">./@time</XPathQuery>.
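To illustrate the discrepancy outside SCOM, here’s a sketch in Python (purely illustrative, with BST hard-coded as UTC+1) contrasting the seconds-since-midnight value the expression should compare against the window with the value it appears to use:

```python
from datetime import datetime, timedelta, timezone

BST = timezone(timedelta(hours=1))  # UK summer time, UTC+1

# 2015-05-29T17:15:00Z, as it might appear in the expression's input.
stamp = datetime(2015, 5, 29, 17, 15, tzinfo=timezone.utc)

# Correct behaviour: convert to local time, then take seconds since midnight.
local = stamp.astimezone(BST)
correct = local.hour * 3600 + local.minute * 60 + local.second  # 18:15 local

# Observed behaviour: the wall-clock digits are used as-is, as if local.
buggy = stamp.hour * 3600 + stamp.minute * 60 + stamp.second    # 17:15

print(correct, buggy)  # 65700 62100 -- an hour apart during BST
```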

The workaround

It may be possible to solve this problem using XQuery/XPath, but my solution is to return the current local date-time from the PowerShell script called by the data source, albeit presented as UTC in ISO 8601 format. For example, if the script were executed at 17:15 (local time), it would return, in its property bag, a property with a value of 2015-05-29T17:15:00Z. This is a hack: because I’m on BST (UTC+1), the value should really be 2015-05-29T16:15:00Z, but a correctly expressed UTC value triggers the bug.

In my PowerShell script:

$Bag.AddValue("Date-time executed (local time)", (Get-Date).ToString("yyyy-MM-ddTHH:mm:ssZ"))

In my monitor type:

<Expression>
	<DayTimeExpression>
		<ValueExpression>
			<XPathQuery Type="DateTime">Property[@Name='Date-time executed (local time)']</XPathQuery>
		</ValueExpression>
		<StartTime>$Config/SecondsFromMidnightCheckWindowStart$</StartTime>
		<EndTime>$Config/SecondsFromMidnightCheckWindowEnd$</EndTime>
		<Days>$Config/DaysOfWeekMask$</Days>
		<InRange>true</InRange>
	</DayTimeExpression>
</Expression>

I am now able to define the window using the number of seconds from midnight, local time, and System.ExpressionFilter correctly checks this window against the execution time.

SCOM data warehouse daily state aggregations stored against wrong date

Daily state aggregations appear to be stored against the wrong date in the data warehouse database when the time zone of the server hosting the SQL Server instance is at an offset to UTC. This is evident in the UK, because for approximately half the year the UK is on UTC, at which point the daily aggregations are stored against the correct date, but during BST (UTC+1) they are stored against the previous date. I suspect this may only affect time zones at a positive offset to UTC, and that the problem is in dbo.StandardDatasetAggregate, possibly related to the calculation of @IntervalStartDateTime, but I haven’t been able to pinpoint the bug. I also haven’t checked whether the same situation exists with the other aggregate types, such as performance.

The problem can best be illustrated with a script, which works against a database running in the UK for web application transaction monitor data generated in 2014:

USE OperationsManagerDW

-- Determine a web application transaction monitor with relevant data, i.e. during BST and where an unhealthy state has been recorded.
DECLARE	@ManagedEntityMonitorRowId INT,
		@Date DATE

SELECT TOP 1
	@Date = sdf.[Date],
	@ManagedEntityMonitorRowId = sdf.ManagedEntityMonitorRowId
FROM
			dbo.vStateDailyFull sdf

INNER JOIN	dbo.vManagedEntity me
ON			sdf.ManagedEntityRowId = me.ManagedEntityRowId
AND			me.FullName LIKE 'WebApplication[_]%'
AND			me.FullName NOT LIKE '%WatcherComputersGroup'

WHERE
	sdf.MonitorRowId =	(
							SELECT TOP 1
								MonitorRowId
							FROM
								dbo.vMonitor
							WHERE
								MonitorSystemName = 'System.Health.EntityState'
						)
AND	(
			sdf.InYellowStateMilliSeconds > 10000
		OR	sdf.InRedStateMilliseconds > 10000
	)
AND	sdf.[Date] BETWEEN '31/Mar/2014' AND '25/Oct/2014'		-- ~BST.
ORDER BY
	sdf.[Date]

-- Return the data.
SELECT
	*
FROM
	dbo.vStateFull
WHERE
	ManagedEntityMonitorRowId = @ManagedEntityMonitorRowId
AND	[DateTime] >= @Date
AND	[DateTime] < DATEADD(d, 2, @Date)
ORDER BY
	[DateTime]

For a particular web application transaction monitor, my results for the daily aggregation (AggregationTypeId = 30) for 31/03/2014 show that InYellowStateMilliseconds is 206173, but the sum of InYellowStateMilliseconds across the hourly aggregations (AggregationTypeId = 20) for this date is 0. The figure of 206173 instead appears against the hourly aggregations of the following day, 01/04/2014.
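The direction of the shift is consistent with the daily interval start being taken as local midnight and converted to UTC before the date component is extracted. A quick sketch (Python, purely illustrative; I haven’t confirmed this is what the stored procedure actually does):

```python
from datetime import datetime, timedelta, timezone

BST = timezone(timedelta(hours=1))  # UK summer time, UTC+1

# Daily interval start for 1 April 2014, taken as *local* midnight.
local_midnight = datetime(2014, 4, 1, 0, 0, tzinfo=BST)

# Converting to UTC before extracting the date shifts the bucket back a day.
utc_equivalent = local_midnight.astimezone(timezone.utc)
print(utc_equivalent.date())  # 2014-03-31 -- the previous date
```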

I have logged this bug on Microsoft Connect.

Invoke-WebRequest’s TimeoutSec parameter defaults to 100 seconds

According to TechNet, when the TimeoutSec parameter of Invoke-WebRequest is omitted or specified as 0, the timeout is indefinite. There appears to be a bug that causes the timeout to default to 100 seconds, at least in PowerShell 4. I presume this is because the default value of the underlying System.Net.HttpWebRequest.Timeout property is 100,000 ms.

The workaround is to specify a large value for TimeoutSec, e.g.

Invoke-WebRequest -Uri $Uri -TimeoutSec ([int]::MaxValue)

That is, assuming that you want the request to return within 68 years.
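The 68-year figure checks out, since [int]::MaxValue is 2³¹ − 1 seconds:

```python
max_int = 2**31 - 1                  # [int]::MaxValue
seconds_per_year = 365.25 * 24 * 3600
print(max_int / seconds_per_year)    # roughly 68 years
```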

Edit (07/03/2024)

In PowerShell 7, the maximum time-out is 24.20:31:23.6470000, so the command becomes:

Invoke-WebRequest -Uri $Uri -TimeoutSec ([timespan]::Parse("24.20:31:23").TotalSeconds)
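That oddly precise limit is simply [int]::MaxValue interpreted as milliseconds, as the arithmetic shows:

```python
# 24 days, 20 hours, 31 minutes and 23 seconds, expressed in whole seconds
cap_seconds = 24 * 86400 + 20 * 3600 + 31 * 60 + 23
print(cap_seconds)            # 2147483
print((2**31 - 1) // 1000)    # 2147483 -- int.MaxValue in milliseconds
```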

SCOM’s SQL Server 2014 management pack doesn’t discover file groups containing filestreams or partition schemes

Edit (13/08/2015): This bug has been fixed in management pack version 6.6.0.0.

I have discovered a bug in the latest SQL Server 2014 SCOM management pack (6.5.1.0).

The problem is that file groups are not discovered for databases containing filestreams or partition schemes, due to a bug in the discovery script. The health service nevertheless goes on to discover the files associated with all the file groups, regardless of whether each file group has been discovered, and forwards the data to the management server. The management server then rejects the discovery data for the files, because some of them are associated with file groups that haven’t yet been discovered, i.e. it is unable to map those files to file groups. The result is that none of the database’s files are discovered, and they are therefore not monitored.

The symptoms are event ID 10801 on management servers, when the management server processes the discovery data, and missing database file groups and files in the inventory. The cause is a bug in the DiscoverSQL2014FileGroups.js script in the Microsoft.SQLServer.2014.Discovery management pack, where it only accepts file group types FX and FG, but, according to MSDN, there are two more possible values: FD and PS.
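The effect of the filter can be reconstructed in miniature. This is an illustrative Python sketch of the filtering logic, not the actual JScript, and the file group names are invented:

```python
# Hypothetical reconstruction of the filter in DiscoverSQL2014FileGroups.js.
file_groups = [
    {"name": "PRIMARY", "type": "FG"},  # rows data
    {"name": "MemOpt",  "type": "FX"},  # memory-optimised
    {"name": "Blobs",   "type": "FD"},  # filestream -- dropped by the bug
    {"name": "Parts",   "type": "PS"},  # partition scheme -- also dropped
]

buggy    = [g["name"] for g in file_groups if g["type"] in ("FX", "FG")]
complete = [g["name"] for g in file_groups
            if g["type"] in ("FX", "FG", "FD", "PS")]

print(buggy)     # ['PRIMARY', 'MemOpt'] -- two file groups never discovered
print(complete)  # all four file groups
```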

I have logged this bug on Microsoft Connect. Please vote for this bug if it’s a problem for you.

Until Microsoft resolve the problem, I’ve implemented a workaround that restricts file and log file discovery to the file group types that are discovered, i.e. FX and FG. This allows monitoring of those file group types but leaves the missing types unmonitored. Unfortunately I can’t distribute the updated management pack, as it contains Microsoft code.

SCOM web application transaction monitor: error code 2147954430

The cause of an error code of 2147954430 from a web application transaction monitor can be difficult to determine, especially as it’s often intermittent. In my case, investigating the watcher node revealed that too many long-running calls had been included in the monitor and this had caused a backlog on the node, as described by event ID 10503 in the Operations Manager log:

The HTTP URL Monitoring Module detected that a backlog of processing has happened. It might be an indication of too many URL monitors configured for this watcher node.

The solution was to increase the interval between executions of the monitor to one that more realistically reflected the likely execution time. It might also be useful to create a rule that raises alerts from these events on the watcher nodes.
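The arithmetic behind the backlog is simple: if the monitor’s combined calls take longer than its scheduling interval, executions are queued faster than they complete. With purely illustrative figures (not taken from my environment), and assuming executions run serially:

```python
interval_seconds = 120    # how often the monitor is scheduled (illustrative)
execution_seconds = 300   # how long one execution actually takes (illustrative)

scheduled_per_hour = 3600 / interval_seconds   # 30 executions scheduled
completed_per_hour = 3600 / execution_seconds  # 12 executions completed
print(scheduled_per_hour - completed_per_hour) # 18.0 queue up every hour
```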