IOmeter execution


Download IOmeter from iometer.org. I also download the documentation to be sure that I have the current version of it.

Manual Setup

The drive must be defined on each worker before any other configuration in the IOmeter GUI.
You can only define one drive ("target") per worker.
Set up 20 workers if the server has that many cores. If you have more cores than that, you still don't need to define more than 20 workers in IOmeter.

IOmeter needs a storage size to test on. It should be three times larger than the complete amount of cache storage in the stack to the drive. On a SAN drive this means that the test file might be rather large. If you don't know the cache size and are working on local disks, you could start with 40 GB.
The maximum disk size is a little tricky, as it is measured in number of sectors in IOmeter.
To get from a storage amount like GB to a number of sectors you need to know the sector size. On a lot of disks it is 512 B, but some large disks with rotating platters might have a sector size of 4 KB. The number can be found in PowerShell with the CIM class Win32_DiskDrive, where the attribute BytesPerSector gives the value.
When you have the sector size and know the size you want for the maximum, the calculation can be like this for a 40 GB maximum:
(40 * 1024 * 1024 * 1024 B) / 512 B = 83,886,080 sectors
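The calculation can be scripted, so the sector size is read directly from the disk. A minimal sketch; the disk index 0 and the 40 GB maximum are assumptions for the example:

# Read the sector size of the first physical disk (Win32_DiskDrive.BytesPerSector).
$Disk = Get-CimInstance -ClassName Win32_DiskDrive -Filter 'Index = 0'
[UInt64]$MaxSizeByte = 40GB   # The wanted maximum size of the test area.
$SectorCount = $MaxSizeByte / $Disk.BytesPerSector
"Maximum disk size = $SectorCount sectors."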

In the tab "Access Specifications" I chose Default and added it as assigned.
The definition then can be altered by editing the default specification.
The "Transfer Request Size" is changed to the sized wanted for the test, e.g. 64 Kilobytes.
As SQL Server Database Engine mainly works sequential setting "Percent Random/Sequential Distribution" should be set to 100% Sequential. This setting is for some tests set to 100% Random.
A lot of SQL Server storage activity on data i reading, and then the setting "Percent Read/Write Distribution" should be set to 100% Read. This setting is for some test set to 100% Write.

On the tab "Results Display" the setting "Update Frequency (seconds)" should be set for 2 seconds to display the testing in a usefull way.

Start with 12 outstanding I/Os. Adjust up and down to find the highest bandwidth ("Total MBs per Second (Decimal)") before latency ("Average I/O Response Time (ms)") grows.


A test series could be like this

  • Read 8 KB random I/O
  • Read 32 KB random I/O
  • Read 64 KB random I/O
  • Read 256 KB random I/O
  • Read 8 KB sequential I/O
  • Read 64 KB sequential I/O
  • Read 256 KB sequential I/O
  • Read 1024 KB sequential I/O. IOmeter changes this entry to 1 MB.
  • Write 8 KB random I/O
  • Write 32 KB random I/O
  • Write 64 KB random I/O
  • Write 256 KB random I/O
  • Write 8 KB sequential I/O
  • Write 32 KB sequential I/O
  • Write 64 KB sequential I/O
  • Write 256 KB sequential I/O


Set up a local Performance Monitor data collector, logging to local disk in .blg files.
Measure Logical Disk, not Physical Disk.
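Such a data collector can be created from PowerShell with the logman tool. This is only a sketch; the collector name, the 2-second sample interval and the output path are my own choices:

# Create and start a Performance Monitor counter collector logging Logical Disk to binary (.blg) files.
& logman.exe create counter 'SqlIoTest' -c '\LogicalDisk(*)\*' -si 2 -f bin -o 'C:\PerfLogs\SqlIoTest'
& logman.exe start 'SqlIoTest'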

Manual Execution

The IOmeter status shows "Preparing Drives" while creating the test file, and it will take quite some time; you should plan on a few hours.
After the file is created it is reused for each run on the same disk, even if the testing parameters are altered.

Let each test run for about ten minutes.


The test file "iobw.tst" is in the root of the drive. When IOmeter is stopped this file can be deleted. The deletion requires administrator rights.
Be nice and clean up after yourself ;-)


The IOmeter execution could be automated like I have done before with SQLIO. But as a manual evaluation is needed for the adjustments of outstanding I/Os, an automated execution is of little practical use.

To be fair, I have not worked out the detailed values in handling IOmeter myself. They were delivered by Michael Frandsen (LinkedIn: michaelfrandsendk) in another context than this.


Drive info

The CIM cmdlets in PowerShell are, I think, a collection of great features and a long-needed replacement for WMI.
One very useful cmdlet is Get-CimAssociatedInstance, which makes traversing the hierarchy of classes nice and easy.
I had use for this lately when checking the drives on some new database servers. These were large servers with several drives on various storage technologies.

By travelling through the associated CIM objects I was able to get from the drive with a drive letter to the physical disk drive.
The chain could be described like:

Drive or Volume - Logical Disk - Disk Partition - Disk Drive

The Volume (Win32_Volume) has no direct association to the Logical Disk (Win32_LogicalDisk), but the Logical Disk has the drive letter as key in the attribute DeviceID. The Volume object still holds some interesting attributes like BlockSize.

With a Logical Disk object I got the Disk Partition (Win32_DiskPartition). And from there I got the Disk Drive (Win32_DiskDrive).
The complete chain on the C-drive can look like this in PowerShell:

[String]$DriveLetter = 'C'

# The Volume (Win32_Volume) holds attributes like BlockSize.
$WqlVolume = "SELECT * FROM Win32_Volume WHERE DriveLetter='$($DriveLetter):'"
$CimVolume = Get-CimInstance -Query $WqlVolume
$CimVolume | fl *

# The Logical Disk (Win32_LogicalDisk) has the drive letter as key in DeviceID.
$WqlLogicalDisk = "SELECT * FROM Win32_LogicalDisk WHERE DeviceID='$($DriveLetter):'"
$CimDisk = Get-CimInstance -Query $WqlLogicalDisk
$CimDisk | fl *

# From the Logical Disk to the associated Disk Partition (Win32_DiskPartition)...
$CimDiskPartition = Get-CimAssociatedInstance -InputObject $CimDisk -ResultClassName 'Win32_DiskPartition'
$CimDiskPartition | fl *

# ...and from the Disk Partition to the Disk Drive (Win32_DiskDrive).
$CimDiskDrive = Get-CimAssociatedInstance -InputObject $CimDiskPartition -ResultClassName 'Win32_DiskDrive'
$CimDiskDrive | fl *

Now all the info is stored in variables and can be accessed and processed directly.
If you like, a selection of the data can be combined in a custom PowerShell object (PSObject), as sketched below.
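A sketch of such a custom object, with my own selection of attributes from the CIM objects above:

# Combine selected attributes from the CIM objects in one custom object.
$DriveProperties = @{
  DriveLetter    = $DriveLetter
  BlockSize      = $CimVolume.BlockSize
  SizeGB         = [Math]::Round($CimDisk.Size / 1GB, 1)
  PartitionType  = $CimDiskPartition.Type
  DiskModel      = $CimDiskDrive.Model
  BytesPerSector = $CimDiskDrive.BytesPerSector
}
New-Object -TypeName PSObject -Property $DriveProperties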

It is obvious that the classes for the more physical elements were defined for disks with spinning platters; e.g. an SSD on SATA has tracks and cylinders according to Win32_DiskDrive.


SQL Server 2016 installation notes

These are some personal notes I have made on installing SQL Server 2016. I intend to update the notes when there are new releases or when I have looked deeper into a detail.


I have selected these SQL Server features:
  • Database Engine Services, no sub-features (FEATURE=SQLENGINE).
  • Client Tools Connectivity (FEATURE=CONN).
  • Client Tools Backwards Compatibility (FEATURE=BC).
  • Client Tools SDK (FEATURE=SDK).
  • Management Tools - Complete (FEATURE=ADV_SSMS).
  • SQL Client Connectivity SDK (FEATURE=SNAC_SDK).
With RC0, SQL Server Management Studio (SSMS) is not part of the general setup. I look more into this in a later section of this post.
The installation is with these products:
  • Windows Server 2012 R2 virtualized on VMware Workstation 12 Pro.
  • SQL Server 2016 RC1.
  • Windows 10 Pro virtualized on VMware Workstation 12 Pro.


Before installation I make some notes on what is already installed. How to do this is nicely described by Marc Carter in the blog entry „Use PowerShell to Find Installed Software“.
I also make notes on which services are running. This can be collected with the cmdlet Get-Service, as sketched below.
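Both notes can be dumped to files, so a comparison after the installation is easy. A minimal sketch; the file names are just examples:

# Snapshot installed software and running services before the installation.
Get-CimInstance -ClassName Win32_Product |
  Select-Object -Property Name, Version, Vendor |
  Export-Csv -Path 'C:\SQLAdmin\Products_before.csv' -NoTypeInformation
Get-Service |
  Select-Object -Property Name, DisplayName, Status |
  Export-Csv -Path 'C:\SQLAdmin\Services_before.csv' -NoTypeInformation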

Then I check if .NET 3.5 is enabled on the computer, as this is a requirement for my SQL Server installation. If .NET 3.5 is not enabled I have to do so.
This I have to describe in more detail later...
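Until then, a short sketch of the check on Windows Server; the source path to the installation media is an assumption:

# Check if .NET Framework 3.5 is enabled, and enable it if not.
if (-not (Get-WindowsFeature -Name 'NET-Framework-Core').Installed) {
  # The feature files are in the side-by-side store on the installation media.
  Install-WindowsFeature -Name 'NET-Framework-Core' -Source 'D:\sources\sxs'
}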


There are several new features on storage configuration with SQL Server 2016, like configuring multiple data files for tempdb on installation. To test a SQL Server installation with multiple drives I have made these drives for the Windows Server installation (a preparation sketch follows after the list):
  • C-drive: System drive with Windows Server and basic application files.
  • D-drive: Program drive with application files. Also the SQLAdmin folder for local DBA files.
    • GPT partition.
  • E-drive: Data drive with data files for user databases.
    • GPT partition.
    • NTFS allocation unit is set to 64 KiB.
  • F-drive: Transaction log drive with transaction log files for user databases and tempdb.
    • GPT partition.
    • NTFS allocation unit is set to 64 KiB.
  • G-drive: tempdb drive with data files for tempdb database.
    • GPT partition.
    • NTFS allocation unit is set to 64 KiB.
  • H-drive: Backup drive for any backup file.
    • GPT partition.
The Z-drive is reserved for removable drives.
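The drive preparation, sketched here for the E-drive with the Storage module cmdlets; the disk number 2 is an assumption for the example:

# GPT partition and NTFS with 64 KiB allocation unit for the data drive.
Initialize-Disk -Number 2 -PartitionStyle GPT
New-Partition -DiskNumber 2 -DriveLetter E -UseMaximumSize
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 64KB -NewFileSystemLabel 'Data'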


The SQL Server installation is usually done by the wizard or by command line. I will look into the details later.

In this case I use the default service accounts given by the SQL Server installation, but I change the SQL Server Agent Startup Type to Automatic. These issues are subjects for many great discussions, but not here and now.
SQL Server 2016 gives the possibility to enable Database Instant File Initialization on installation, but I have chosen not to. The subject is greatly described by Kimberly Tripp in the blog post „Instant Initialization – What, Why and How?“.

The default collation for the SQL Server installation is still the SQL collation „SQL_Latin1_General_CP1_CI_AS“, where the recommendation for some years has been a Windows collation. Usually I recommend the SharePoint collation „Latin1_General_CI_AS_KS_WS“ to ensure a better information integration. In this case I changed to the collation „Latin1_General_100_CI_AS_KS_WS_SC“ to get Supplementary Characters (SC) available.
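The server collation can be verified right after the installation. A quick check, assuming Invoke-Sqlcmd is available on the computer and the instance is the default instance:

# Verify the server collation of the new instance.
Invoke-Sqlcmd -ServerInstance '.' -Query "SELECT SERVERPROPERTY('Collation') AS [ServerCollation];"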

With SQL Server 2016 it is now possible to configure tempdb on installation. This is a really great feature – that I have not used in this case.

The installation took about 44 minutes according to the summary log file.


As before the installation, I note what is installed and which services are running.
Also I take a look into the summary log file.

SQL Server Management Studio

As mentioned before, SQL Server Management Studio (SSMS) is with RC0 removed from the general setup and placed in "Tools\SSMS-Full-Setup.exe". When running the GUI setup, there is no selection of features.
With RC1, SSMS is removed from the installation set, but can be downloaded as described in BOL „Download SQL Server Management Studio (SSMS)“. There is an item in the setup GUI called "Install SQL Server Management Tools", but it just opens the webpage I mentioned.
As SSMS is removed from the SQL Server setup, the release notes for SSMS are also removed from the general SQL Server release notes. You might like to take a quick look at the SSMS release notes in "SQL Server Management Studio - Release Notes".


Usually I stop and disable the SQL Server VSS Writer service as I take backup with T-SQL through the Database Engine:
Get-Service -Name SQLWriter | Set-Service -StartupType Disabled -Status Stopped

Finally I run Windows Update to get the latest patches for the new components on the computer. In this case there were a Security Update and a regular Update for Visual Studio 2010 SP1 (KB2645410 & KB2635973).


SQL Server Errorlog parsing with PowerShell

In a recent incident I browsed the SQL Server Errorlog, but with a daily cycle of the Errorlog there were quite a few log files to browse. This is a description of some ways to parse the SQL Server Errorlog for specific messages.

Ways to read Errorlog

There are several more or less common ways to read the Errorlog. I have collected some with a short description.

  • T-SQL: The stored procedure master.sys.sp_readerrorlog uses the extended stored procedure master.sys.xp_readerrorlog to search for a somewhat simple string. This can be enough in simple situations, but not when in need of more detailed information. Also, these procedures are not documented by Microsoft, and therefore not supported.
  • SMO: The method Server.ReadErrorLog() reads an Errorlog file and returns an array of Errorlog lines. This is rather useful in most cases, but sometimes I need to combine several lines from the Errorlog.
  • SQLPSX: The CodePlex project "SQL Server PowerShell Extensions" has defined the cmdlet Get-SqlErrorLog. The cmdlet is using the SMO method Server.ReadErrorLog() described above. Unfortunately it looks like the project has been dormant since 2011.
  • .NET: The method [System.IO.File]::ReadAllText() reads the entire contents of the Errorlog file into one String. This makes it convenient for combining several lines of the Errorlog in one search filter.
  • PowerShell: The cmdlet Get-Content, especially with the -Raw parameter set, gives direct access to the contents of an Errorlog file. I prefer the parameter -LiteralPath to the parameter -Path, as it ensures the precise path on casing, spaces and everything else.
In general I prefer to use the PowerShell cmdlet Get-Content. Get-Content -Raw can be used to get one string instead of an array of SQL Server Errorlog lines. This makes it easy to match the entire event with one regular expression (regex) pattern.
To match across several lines in the Errorlog, the regular expression pattern includes \s to match both CR&LF and LF. Actually in this case both symbols are in the Errorlog to generate a NewLine in the presentation of the Errorlog.
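This can be verified directly on small test strings with both NewLine variants:

# \s matches whitespace characters like CR and LF, so one pattern covers both variants.
"bufs`r`nwrites" -match 'bufs\s+writes'   # True on CR&LF
"bufs`nwrites" -match 'bufs\s+writes'     # True on LF alone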

When the parsing is implemented in PowerShell advanced functions, the input can be both one SQL Server Errorlog file and a folder holding several Errorlog files. The function will implicitly traverse the files in the pipeline through the Process part of the function.
Calling a function with one Errorlog file can be like this
Get-FlushCache -ErrorLogFile 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG.1'
while calling the function on all Errorlog files in a folder can be like this
Get-ChildItem -Path 'C:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\Log\ERRORLOG*' |
Get-FlushCache

A PowerShell advanced function has this general structure
function Get-FlushCache {
<#
.SYNOPSIS
  Find FlushCache entries in SQL Server Errorlog and get metrics from the entry.
.PARAMETER ErrorLogFile
  Name of SQL Server Errorlog file with full path.
.INPUTS
  SQL Server Database Engine Errorlog file by full path.
.OUTPUTS
  Collection of FlushCache events.
#>
[CmdletBinding()]
Param(
  [Parameter(Mandatory=$true, ValueFromPipeline=$true)]
  [System.IO.FileInfo]$ErrorLogFile
)
Begin {
  $FlushCaches = @() # Array to hold all FlushCache objects that are created on the matches
  $TotalWatch = [System.Diagnostics.Stopwatch]::StartNew()
}
Process {
  "Check file '$($ErrorLogFile.Name)'..." | Write-Verbose
  if ($ErrorLogFile.Exists -eq $false) {
    "File '$($ErrorLogFile.FullName)' does not exist" | Write-Error -Category ObjectNotFound
    return
  }

  "Matching pattern (File size = $($ErrorLogFile.Length) B)..." | Write-Verbose
  [String]$ErrorLog = Get-Content -LiteralPath $ErrorLogFile.FullName -Raw
  [String]$regex = ... # The regular expression pattern; examples are shown below.

  $MatchWatch = [System.Diagnostics.Stopwatch]::StartNew()
  $FlushCacheMatches = $ErrorLog | Select-String -Pattern $regex -AllMatches
  if ($FlushCacheMatches -eq $null) {
    "No matches on pattern in '$($ErrorLogFile.Name)' (Match timer = $($MatchWatch.Elapsed.ToString()))." | Write-Verbose
  }
  else {
    "$($FlushCacheMatches.Matches.Count) matches found in '$($ErrorLogFile.Name)' (Match timer = $($MatchWatch.Elapsed.ToString()))." | Write-Verbose
    foreach($Match in $FlushCacheMatches.Matches) {
      $TimeStamp = [System.DateTime]$Match.Groups['timestamp'].Value
      $BufCount = [Int]$Match.Groups['bufs'].Value
      $WriteCount = [Int]$Match.Groups['writes'].Value
      $WriteTime = [Int]$Match.Groups['writetime'].Value
      $BufAvoidCount = [Int]$Match.Groups['bufsavoided'].Value
      $DbId = [String]$Match.Groups['db'].Value
      $AvgWritesPerSecond = [Double]$Match.Groups['avgwrites'].Value
      $AvgThroughput = [Double]$Match.Groups['avgthroughput'].Value
      $IoSaturation = [Int]$Match.Groups['iosaturation'].Value
      $ContextSwitchCount = [Int]$Match.Groups['contxtsw'].Value
      $LastTarget = [Int]$Match.Groups['lasttarget'].Value
      $AvgWriteLatency = [Int]$Match.Groups['avgwritelat'].Value

      $FlushCacheProperties = @{
        TimeStamp = $TimeStamp
        BufCount = $BufCount
        WriteCount = $WriteCount
        WriteTimeMs = $WriteTime
        BufAvoidCount = $BufAvoidCount
        DbId = $DbId
        AvgWritesPerSecond = $AvgWritesPerSecond
        AvgThroughput = $AvgThroughput
        IoSaturation = $IoSaturation
        ContextSwitchCount = $ContextSwitchCount
        LastTarget = $LastTarget
        AvgWriteLatency = $AvgWriteLatency
      }
      $FlushCache = New-Object -TypeName PSObject -Property $FlushCacheProperties
      $FlushCache.PSObject.TypeNames.Insert(0, 'SqlAdmin.FlushCache')
      $FlushCaches += $FlushCache
    }
  }
}
End {
  "$($FlushCaches.Count) matches in extract collection (Total watch = $($TotalWatch.Elapsed.ToString()))." | Write-Verbose
  $FlushCaches
}
} # Get-FlushCache()

The content of the String $regex is the regular expression pattern. Each pattern will define different output variables, and the function must be defined with matching attributes on the custom object. Some examples of regular expression patterns are shown below.

How to match an event

When the content of an Errorlog file is read, the next step is to find the event(s) in the file. In PowerShell this can be done somewhat directly with the cmdlet Select-String, using the -Pattern parameter with a regular expression as value. The individual parts of the match can be isolated by defining matching groups in the regular expression.
Select-String returns a MatchInfo object that contains a collection of Matches. This collection can be traversed with a foreach statement on a variable that will be a Match object. The value of a match group element is in the Value property of each Group object in the Groups collection.
The group values of a match I put in a custom object (PSObject), and the objects are put in a collection. This collection is the basic output of the script.
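A minimal example of these mechanics, using a fragment of the FlushCache entry shown below:

# Find a named group in the text and read its value from the Matches collection.
$Text = 'FlushCache: cleaned up 70601 bufs with 2731 writes'
$Result = $Text | Select-String -Pattern 'cleaned up (?<bufs>\d+) bufs' -AllMatches
foreach ($Match in $Result.Matches) {
  $Match.Groups['bufs'].Value   # Gives '70601'
}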


FlushCache

The FlushCache event is logged in the SQL Server Errorlog like this example:
2013-01-13 12:13:14.37 spid16s     FlushCache: cleaned up 70601 bufs with 2731 writes in 130454 ms (avoided 13308 new dirty bufs) for db 42:0
2013-01-13 12:13:14.37 spid16s                 average writes per second:  20.93 writes/sec
2013-01-13 12:13:14.37 spid16s                 average throughput:   4.22 MB/sec, I/O saturation: 5536, context switches 12391
2013-01-13 12:13:14.37 spid16s                 last target outstanding: 498, avgWriteLatency 39

This is an example of an event that is logged across several lines in the Errorlog file.

The challenge in this case is immediately two-fold:
  1. Multiple lines in SQL Server Errorlog in one event entry.
  2. Multiple events in one SQL Server Errorlog.
The regular expression pattern used in this case is
[String]$regex = '(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{2}).{13}' `
+ 'FlushCache: cleaned up (?<bufs>\d+) bufs ' `
+ 'with (?<writes>\d+) writes ' `
+ 'in (?<writetime>\d+) ms ' `
+ '\(avoided (?<bufsavoided>\d+) new dirty bufs\) ' `
+ 'for db (?<db>\d+:\d+)' `
+ '\s+.+average writes per second: (?<avgwrites>\d+\.\d+) writes/sec' `
+ '\s+.+average throughput: (?<avgthroughput>\d+\.\d+) MB/sec, ' `
+ 'I/O saturation: (?<iosaturation>\d+), ' `
+ 'context switches (?<contxtsw>\d+)' `
+ '\s+.+last target outstanding: (?<lasttarget>\d+), ' `
+ 'avgWriteLatency (?<avgwritelat>\d+)'

The pattern string is in the script code spread over several lines, with one line per regex group. This gives me the opportunity to describe each grouping pattern.

I/O requests taking longer than 15 seconds

2013-01-13 12:13:14.39 spid17s     SQL Server has encountered 1 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [C:\Program Files\Microsoft SQL Server\MSSQL12.MSSQLSERVER\MSSQL\DATA\tempdb.mdf] in database id 2. The OS file handle is 0x0000000000001178. The offset of the latest long I/O is: 0x000002dc510000
This issue is described in detail in the Microsoft Support article „Diagnostics in SQL Server help detect stalled and stuck I/O operations“.
The regular expression pattern to find these events is
[String]$regex = '(?<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{2}).{13}' `
+ 'SQL Server has encountered (?<occurrence>\d+) occurrence\(s\) of I/O requests taking longer than 15 seconds to complete ' `
+ 'on file \[(?<file>.+)\] ' `
+ 'in database id (?<db>\d+)\. ' `
+ 'The OS file handle is (?<handle>0x[0-9a-f]+). ' `
+ 'The offset of the latest long I/O is: (?<offset>0x[0-9a-f]+)'

Login failed

This is something I want to look more into later. I mention the event now in a rather incomplete form as a teaser - to myself.
2013-01-13 12:13:14.64 Logon Error: 18456, Severity: 14, State: 38.
2013-01-13 12:13:14.16 Logon Login failed for user 'SQLAdmin\Albert'. Reason: Failed to open the explicitly specified database 'Adventureworks'. [CLIENT:]

2013-01-13 12:13:14.24 Logon Error: 18456, Severity: 14, State: 40.
2013-01-13 12:13:14.24 Logon Login failed for user 'Bobby'. Reason: Failed to open the database 'Adventureworks' specified in the login properties. [CLIENT:]

This error is described in the blog entry „Understanding "login failed" (Error 18456) error messages in SQL Server 2005“.

Anonymous login

When using Windows Authentication through a double-hop, like a linked server or SharePoint, and Kerberos delegation is not implemented, it will fail with an entry in the SQL Server Errorlog like:
Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'

This situation is very well described in SQL Server Protocols Blog: "SQL Linked Server Query failed with "Login failed for user...""

Memory page size - Large Pages

This is not an error, but an Errorlog entry on memory configuration.
By default Large Pages are not used, and this is logged in the Errorlog when SQL Server starts:
2013-01-13 12:13:14.15 Server Using conventional memory in the memory manager.
When Large Pages is enabled and SQL Server is restarted, the Errorlog has an entry like "Using large pages in the memory manager."
The regular expression pattern to detect Large Pages is like
[String]$regex = 'TBD'


The PowerShell documentation in the TechNet Library is not updated. Actually some of it is wrong, like for Get-Content. Use the Get-Help cmdlet instead.

A more general function that takes a regular expression pattern as a parameter value could be nice. One thing I have to figure out is how to handle the groups in a regular expression pattern.

As the PowerShell advanced functions can take a folder's content as input by the pipeline, it might speed up the handling of several files if the pattern matching is done in parallel. This might require another structure of the function, with some thread handling.

This post will be updated when I have news or better ways to parse SQL Server Errorlog.


Happy SysAdmin: „Reading large text files with Powershell“


SQL Server Native Client WMI name

I am developing a SQL Server deployment package. And again I do not hit the nail on the first stroke.
This makes me install - and uninstall - SQL Server components several times. Rather quickly I got tired of clicking through Uninstall in Windows Control Panel. This can usually be fixed with a PowerShell script, but uninstalling SQL Server Native Client (SNAC) gave me some trouble.

To get the metadata on the installed programs through CIM I created a variable to work on
$CimProduct = Get-CimInstance -ClassName Win32_Product -Filter "Name LIKE 'Microsoft SQL Server 2012 Native Client'"
But the result in $CimProduct was empty.

A general request showed me that SNAC is installed with the name that I filtered on
Get-CimInstance -ClassName Win32_Product
gave a long list which I have narrowed down here
Name             Caption                                              Vendor                                               Version
----             -------                                              ------                                               -------
Microsoft SQL... Microsoft SQL Server 2012 Native Client              Microsoft Corporation                                11.0.2100.60

Then I tried a more general filter
$CimProduct = Get-CimInstance -ClassName Win32_Product -Filter "Name LIKE '%Native Client%'"
"Name = '$($CimProduct.Name)'."

with success
Name = 'Microsoft SQL Server 2012 Native Client '.
Immediately it looked like the text that I filtered on in the beginning.
But notice the trailing space!
If I some day meet the Product Manager we have something to talk about ;-)

With the new knowledge I got the search right
$CimProduct = Get-CimInstance -ClassName Win32_Product -Filter "Name LIKE 'Microsoft SQL Server 2012 Native Client '"

Now I was able to uninstall SNAC in one line
$Uninstall = Invoke-CimMethod -Query "SELECT * FROM Win32_Product WHERE Name = 'Microsoft SQL Server 2012 Native Client '" -MethodName Uninstall

And with success.
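The Uninstall method on Win32_Product returns a result code, where 0 to my knowledge means success. A small check can confirm it:

# ReturnValue 0 means that the uninstall completed successfully.
if ($Uninstall.ReturnValue -eq 0) {
  'SQL Server Native Client was uninstalled.'
}
else {
  "Uninstall failed with return value $($Uninstall.ReturnValue)."
}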