2018-01-04

SSMS installation on off-line computer

In secure networks I have several times found it necessary to install a tool station computer without internet access. Most tools, like editors, can easily be downloaded on another computer and then installed on the tool station. But Microsoft SQL Server Management Studio (SSMS) has some issues when it is installed without internet access.

This is an example of the type of error message I usually get. In this case it is from installing SSMS 17.2 on a Danish Windows 7, but I have had similar messages with previous SSMS versions and on other Windows versions.
I have not found an official description from Microsoft of this situation. One of the first times I ran into it I was actually sitting with several Microsoft SQL Server consultants and PFEs, and they came up with a solution over a few mails.

When a SSMS Setup Failure happens it is also logged in a SsmsSetup log file. This log file is in the folder "C:\Users\<User Name>\AppData\Local\Temp\SsmsSetup".
(** Insert example **)

The short description is that SSMS is missing two certificates that it normally downloads during installation. The two certificates are

  • MicRooCerAut2011_2011_03_22 (link)
  • MicRooCerAut_2010-06-23 (link)
You will have to install the certificates before installing SSMS.
After you have downloaded the certificates as cer-files you will have to import them into Windows. It does not matter where you place the cer-files.
I know there are several guides on importing certificates into Windows, but I go through the details anyhow to avoid doubt.

  1. Start Microsoft Management Console (mmc.exe)
  2. Add the snap-in "Certificates" (Ctrl + M)
  3. When you mark the snap-in to be added to MMC you will have to select which account type the snap-in should manage. Select Computer Account 
  4. You will also have to select which computer the snap-in should manage. Select Local Computer
  5. After adding the Certificates snap-in you should expand Certificates > Root Key Centers... (or Trusted Root Certification Authorities) > Certificates
  6. Right-click the item Certificates and select All Tasks > Import...
  7. This will start the guide Certificate Import
  8. Select to place all certificates in the Certificate Store named Root Key Center... (or Trusted Root Certification Authorities)
  9. When both certificates are imported with success you can close MMC
All this can surely be automated by a script, but as the computer is off-line there are also some issues getting the script file to the computer.
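For reference, the import could be scripted like this with certutil.exe, which is available on an off-line Windows computer. The C:\Temp path is my assumption; adjust it to where you placed the cer-files:

```powershell
# Sketch: import the two root certificates into the local machine's
# Trusted Root Certification Authorities store with certutil.exe.
# Assumption: the cer-files have been copied to C:\Temp.
$CerFiles = @(
    'C:\Temp\MicRooCerAut2011_2011_03_22.cer',
    'C:\Temp\MicRooCerAut_2010-06-23.cer'
)
foreach ($CerFile in $CerFiles) {
    # "Root" is the machine-level Trusted Root Certification Authorities store
    certutil.exe -addstore -f Root $CerFile
    if ($LASTEXITCODE -ne 0) { Write-Error "Import failed: $CerFile" }
}
```

Run the script from an elevated PowerShell session, as writing to the machine certificate store requires administrator rights.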

2017-08-12

Windows 10 on ThinkPad T440s

The other day I wanted to install Windows 10 on a Lenovo ThinkPad T440s.
It seemed like a simple task:
  1. Download Windows 10 image from Visual Studio subscription.
  2. Extract installation files to USB stick.
  3. Boot ThinkPad on USB stick.
  4. Install Windows 10.
Not quite.
The installation set is not especially large and fits easily on a medium (8 GiB) USB stick.
But the file "install.wim" is larger than 4 GiB in the latest Windows 10 image, and since a USB stick is usually formatted with FAT32, it can't hold a file larger than 4 GiB.
My first thought was to format the USB stick with exFAT, but the T440s couldn't boot on the USB stick with the installed BIOS level.

My solution was:

  1. Format the USB stick with FAT32.
  2. Download the RTM image of Windows 10 where the file "install.wim" is smaller than 4 GiB.
  3. Extract installation files to the USB stick.
  4. Boot the ThinkPad on the USB stick.
  5. Install Windows 10 RTM.
  6. Update Windows 10 to current release - with several reboots.
How to install Windows 10 later, when the RTM image is no longer available on a Visual Studio subscription, might be a larger challenge.
If I meet that challenge then I will update this post.
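A possible way around the FAT32 limit, without an older image, is to split the large install.wim into .swm files below 4 GiB with DISM; Windows Setup can boot and install from the split files. The drive letters and paths below are my assumptions:

```powershell
# Sketch: split install.wim into .swm files that fit on a FAT32 stick.
# Assumptions: the image is extracted to C:\Win10 and the USB stick is E:.
Dism.exe /Split-Image `
    /ImageFile:C:\Win10\sources\install.wim `
    /SWMFile:E:\sources\install.swm `
    /FileSize:3800
```

The FileSize value is in MB per .swm file, chosen safely below the 4 GiB FAT32 limit; copy the rest of the installation files to the stick as usual.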

2017-08-09

Getting and using PowerShell AzureRM

PowerShell


Check if PowerShell is available:
powershell.exe
and which version is installed:
prompt> $Host
or
prompt> $Host.Version

If PowerShell is an old version or not installed, then the installation is done by installing Windows Management Framework (WMF) in the latest version. Take a look at the PowerShell site (link) on MSDN or use your favorite search engine to find the latest version of WMF.
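As a supplement to $Host, the automatic variable $PSVersionTable describes the engine itself, which is usually what matters when deciding whether to install a newer WMF:

```powershell
# $Host.Version describes the host application (console, ISE, ...);
# $PSVersionTable.PSVersion describes the PowerShell engine.
$PSVersionTable.PSVersion

# Example guard: warn if the engine is older than PowerShell 5
if ($PSVersionTable.PSVersion.Major -lt 5) {
    Write-Warning 'Consider installing the latest WMF to get PowerShell 5.'
}
```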

PowerShellGet

PowerShellGet is a PowerShell module that is used to get modules from a repository like PowerShell Gallery.

Check if the module PowerShellGet is installed and, if so, in which version:
Get-Module -Name PowerShellGet -ListAvailable
The output is a ModuleInfoGrouping object, where the Version property holds a Version object.
Check which version is the latest in the repository:
Find-Module -Name PowerShellGet -Repository PSGallery
Compare the Version elements in the outputs.
The output is a ModuleInfoGrouping object, where the Version property holds a Version object, like the output from the cmdlet Get-Module.
You might have several versions of the same module installed. Then you should compare against the latest installed version.
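The comparison of the two outputs above can be sketched like this, picking the newest locally installed version first:

```powershell
# Sketch: compare the newest installed version of PowerShellGet
# with the newest version in the PSGallery repository.
$Installed = Get-Module -Name PowerShellGet -ListAvailable |
    Sort-Object -Property Version -Descending |
    Select-Object -First 1
$Gallery = Find-Module -Name PowerShellGet -Repository PSGallery

if ($Gallery.Version -gt $Installed.Version) {
    "Newer version available: $($Gallery.Version) (installed: $($Installed.Version))"
}
else {
    'PowerShellGet is up to date.'
}
```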

Install the latest version of the PowerShellGet module:
Install-Module -Name PowerShellGet -AllowClobber -Force
I have sometimes seen this warning (error?):
WARNING: The version '1.1.4.0' of module 'PackageManagement' is currently in use. Retry the operation after closing the applications.
A PowerShell restart does not help. I have not looked deeper into this issue.

Update the PowerShellGet module to the latest version:
Update-Module -Name PowerShellGet
I have seen that version 1.0.0.1 of PowerShellGet is not actually updated, but the update doesn't fail either.

Uninstall the module in all versions installed:
Uninstall-Module -Name PowerShellGet -AllVersions -Force
Again PowerShellGet version 1.0.0.1 stands out, as it is not uninstalled, but the uninstall doesn't fail.
If I try a version specific uninstall
Uninstall-Module -Name PowerShellGet -RequiredVersion '1.0.0.1'
then there is an error
PackageManagement\Uninstall-Package : No match was found for the specified search criteria and module names 'PowerShellGet'.
At C:\Program Files\WindowsPowerShell\Modules\PowerShellGet\1.1.3.2\PSModule.psm1:2252 char:21
+ ... $null = PackageManagement\Uninstall-Package @PSBoundParameters
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (Microsoft.Power...ninstallPackage:UninstallPackage) [Uninstall-Package], Exception
+ FullyQualifiedErrorId : NoMatchFound,Microsoft.PowerShell.PackageManagement.Cmdlets.UninstallPackage

where it looks like a newer version is installed, but that does not show through Get-Module -ListAvailable. It might be that the installation on my computer is broken after several Install-Update-Uninstall sessions. I have not looked deeper into this issue.
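One way to look a bit closer at such a discrepancy is to compare what the module path contains with what PackageManagement itself thinks is installed. Get-InstalledModule is part of PowerShellGet 1.1+, so this assumes a fairly recent version:

```powershell
# Get-Module -ListAvailable scans the folders in PSModulePath, while
# Get-InstalledModule asks PackageManagement about its own records.
# A mismatch between the two can indicate a broken installation.
Get-Module -Name PowerShellGet -ListAvailable |
    Select-Object -Property Name, Version, ModuleBase

Get-InstalledModule -Name PowerShellGet -AllVersions -ErrorAction SilentlyContinue |
    Select-Object -Property Name, Version, InstalledLocation
```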

Import the PowerShellGet module in the current PowerShell session with this statement:
Import-Module -Name PowerShellGet
This is usually not needed as the module is loaded when installed.

List functions available in module:
Get-Command -Module PowerShellGet

Remove the PowerShellGet module with
Remove-Module -Name PowerShellGet
but why???

AzureRM

These are the same cmdlets as used with PowerShellGet above, but I go through the phases anyway to show the differences.

Check the installed and available version:
Get-Module -Name AzureRm -ListAvailable
Find-Module -Name AzureRM -Repository PSGallery


Install the latest version:
Install-Module -Name AzureRM -AllowClobber -Force

Update to the latest version:
Update-Module -Name AzureRM

Uninstall the module:
Uninstall-Module -Name AzureRM -AllVersions -Force

Import the module in the current PowerShell session:
Import-Module -Name AzureRM

List functions available in module:
Get-Command -Module AzureRM

Remove the module from the current PowerShell session:
Remove-Module -Name AzureRM

Context

Windows 7 (Enterprise), WMF 5

2017-07-30

SqlBulkCopy with PowerShell

The challenge

A rather common task is to copy many (or all) rows from one SQL Server database table to another as smoothly and fast as possible. There is some tuning of the platform that is important, but the copy itself can be done in several very different ways.

A direct table-to-table copy can be done with the .NET SqlBulkCopy class using PowerShell.

Alternatively, T-SQL statements can be used, with reduced possibilities and (maybe) better performance:
INSERT INTO ... SELECT ... FROM, where the target table must be created beforehand but can be placed in any filegroup.
Or SELECT ... INTO, where the target table will be created in the primary filegroup if it does not already exist.

Copying data between a table and a file can also be done with the SQL Server utility bcp.exe.
Copying data from a file to a database table can also be done with the T-SQL statement BULK INSERT.
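As a sketch, a file-based copy with bcp.exe could look like this; the instance, database and table names match the example objects used later in this post, and the C:\Temp path is my assumption:

```powershell
# Sketch: export from source table to a native-format file, then import
# into the target table. -T = trusted connection, -n = native data format.
bcp.exe 'source.test.business' out 'C:\Temp\business.dat' -S '(local)\SQL2016A' -T -n
bcp.exe 'target.test.business' in  'C:\Temp\business.dat' -S '(local)\SQL2016A' -T -n
```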

But for now I will focus on SqlBulkCopy with PowerShell.
Yan Pan wrote the great post „Use PowerShell to Copy a Table Between Two SQL Server Instances“ at the Hey, Scripting Guy! blog, but I will go through the details on my own to get a deeper understanding.

Using SqlBulkCopy

There is a special case with identity insert, as key constraints and NOT NULL are not checked by default. This can be changed using the SqlBulkCopyOptions enumeration value CheckConstraints.
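SqlBulkCopyOptions is a [Flags] enumeration, so several options can be combined with -bor, e.g. keeping identity values while still checking constraints. The $CnnStrTarget variable is the target connection string as used in the Copy-SqlTable function later in this post:

```powershell
# Sketch: combine SqlBulkCopyOptions flags with the -bor operator.
$Options = [System.Data.SqlClient.SqlBulkCopyOptions]::KeepIdentity -bor
    [System.Data.SqlClient.SqlBulkCopyOptions]::CheckConstraints

# Pass the combined flags to the SqlBulkCopy constructor.
$SqlBulkCopy = New-Object -TypeName System.Data.SqlClient.SqlBulkCopy($CnnStrTarget, $Options)
```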

With large amounts of data it is important to use streaming with a SqlDataReader object instead of a static DataSet object, as the DataSet object will hold all data in memory. This can really stress a server and might bring it down.

There are some interesting articles in the MSDN Library and a good discussion on Stack Overflow on SqlBulkCopy specifically and SQL Server bulk operations in general.

I made the PowerShell function Copy-SqlTable to handle the copy of the data:
function Copy-SqlTable {
<#
.DESCRIPTION
  Copy single table from source database to target database on same SQL Server Database Engine instance.
#>
[CmdletBinding()]
[OutputType([void])]
Param()

Begin {
  $mywatch = [System.Diagnostics.Stopwatch]::StartNew()
  "{0:s}Z :: Copy-SqlTable()" -f [System.DateTime]::UtcNow | Write-Verbose

  [string]$ApplicationName = 'SqlBulkCopy.ps1'

  #Candidates for function parameters:
  [string]$SourceInstanceName = '(local)\SQL2016A'
  [string]$SourceDatabaseName = 'source'
  [string]$SourceTableName = '[test].[business]'

  [string]$TargetInstanceName = $SourceInstanceName
  [string]$TargetDatabaseName = 'target'
  [string]$TargetTableName = $SourceTableName
}

Process {
  'Connect to source...' | Write-Verbose
  [string]$CnnStrSource = "Data Source=$SourceInstanceName;Integrated Security=SSPI;Initial Catalog=$SourceDatabaseName;Application Name=$ApplicationName"
  "Source connection string: '$CnnStrSource'" | Write-Debug
  $SqlCnnSource = New-Object -TypeName System.Data.SqlClient.SqlConnection $CnnStrSource
  $SqlCommand = New-Object -TypeName System.Data.SqlClient.SqlCommand("SELECT * FROM $SourceTableName;", $SqlCnnSource)
  $SqlCnnSource.Open()
  [System.Data.SqlClient.SqlDataReader]$SqlReader = $SqlCommand.ExecuteReader()

  'Copy to target...' | Write-Verbose
  [string]$CnnStrTarget = "Data Source=$TargetInstanceName;Integrated Security=SSPI;Initial Catalog=$TargetDatabaseName;Application Name=$ApplicationName"
  "Target connection string: '$CnnStrTarget'" | Write-Debug
  try {
    $SqlBulkCopy = New-Object -TypeName System.Data.SqlClient.SqlBulkCopy($CnnStrTarget, [System.Data.SqlClient.SqlBulkCopyOptions]::KeepIdentity)
    $SqlBulkCopy.EnableStreaming = $true
    $SqlBulkCopy.DestinationTableName = $TargetTableName
    $SqlBulkCopy.BatchSize = 1000000 # Another candidate for function parameter
    $SqlBulkCopy.BulkCopyTimeout = 0 # seconds, 0 (zero) = no timeout limit
    $SqlBulkCopy.WriteToServer($SqlReader)
  }
  catch [System.Exception] {
    $_.Exception | Write-Output
  }
  finally {
    'Copy complete. Closing...' | Write-Verbose
    $SqlReader.Close()
    $SqlCnnSource.Close()
    $SqlCnnSource.Dispose()
    $SqlBulkCopy.Close()
  }
}

End {
  $mywatch.Stop()
  [string]$Message = "Copy-SqlTable finished with success. Duration = $($mywatch.Elapsed.ToString()). [hh:mm:ss.ddd]"
  "{0:s}Z $Message" -f [System.DateTime]::UtcNow | Write-Output
}
} # Copy-SqlTable


Measure

Execution time is measured on each run. For the PowerShell function I used a .NET Stopwatch object. The T-SQL statements are timed by default in Management Studio.

Also, I kept an eye on memory and CPU usage in Windows Performance Monitor on the Process object, with all counters (*), for the processes sqlservr.exe and powershell/powershell_ise during each run.

Finally, I captured the actual execution plan of the T-SQL statements and kept an eye on the SQL Server Activity Monitor, e.g. the wait statistics. I also enabled the SQL Server Query Store on both databases at creation to have some extra figures to look at.

Create test data

Use the batch delimiter "GO" in Management Studio or SQLCMD with the number of rows as the count parameter value to generate the rows of source data:
USE [source];
GO
SET NOCOUNT ON;
GO
INSERT INTO [test].[business] ([test_str],[test_nr])
VALUES (NEWID(), CONVERT(int, RAND()*2147483647));

GO 1000000000


Define SQL Server objects

The source database and table are created simply, but still with parallel I/O in mind:
CREATE DATABASE [source]
ON PRIMARY
  (NAME = N'source_data', FILENAME = N'C:\MSSQL_data\source.primary.mdf',
  SIZE = 8MB, FILEGROWTH = 10MB ),
FILEGROUP [user_data]
  ( NAME = N'user_data00',
   FILENAME = N'C:\MSSQL_data\source.user_data00.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data01',
   FILENAME = N'C:\MSSQL_data\source.user_data01.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data02',
   FILENAME = N'C:\MSSQL_data\source.user_data02.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data03',
   FILENAME = N'C:\MSSQL_data\source.user_data03.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data04',
   FILENAME = N'C:\MSSQL_data\source.user_data04.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data05',
   FILENAME = N'C:\MSSQL_data\source.user_data05.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data06',
   FILENAME = N'C:\MSSQL_data\source.user_data06.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB),
  ( NAME = N'user_data07',
   FILENAME = N'C:\MSSQL_data\source.user_data07.ndf',
   SIZE = 128MB, FILEGROWTH = 10MB)
LOG ON
  ( NAME = N'source_log',
   FILENAME = N'C:\MSSQL_translog\source_log.ldf',
   SIZE = 56MB, FILEGROWTH = 10MB);
GO

ALTER DATABASE [source] SET QUERY_STORE = ON;
ALTER DATABASE [source] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
GO

USE [source];
GO
CREATE SCHEMA [test];
GO
CREATE TABLE [test].[business] (
  [test_id] bigint NOT NULL IDENTITY (1, 1),
  [test_str] nvarchar(256) NOT NULL,
  [test_nr] int NOT NULL
  ) ON [user_data];
GO

USE [master];
GO


The target database and table are created in a similar way, but on another drive to further spread the I/O:
CREATE DATABASE [target]
ON PRIMARY
  (NAME = N'target_data', FILENAME = N'M:\MSSQL_data\target.primary.mdf',
  SIZE = 8MB, FILEGROWTH = 8MB ) ,
FILEGROUP [user_data]
  ( NAME = N'user_data00',
   FILENAME = N'M:\MSSQL_data\target.user_data00.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data01',
   FILENAME = N'M:\MSSQL_data\target.user_data01.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data02',
   FILENAME = N'M:\MSSQL_data\target.user_data02.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data03',
   FILENAME = N'M:\MSSQL_data\target.user_data03.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data04',
   FILENAME = N'M:\MSSQL_data\target.user_data04.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data05',
   FILENAME = N'M:\MSSQL_data\target.user_data05.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data06',
   FILENAME = N'M:\MSSQL_data\target.user_data06.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB),
  ( NAME = N'user_data07',
   FILENAME = N'M:\MSSQL_data\target.user_data07.ndf',
   SIZE = 1792MB, FILEGROWTH = 32MB)
LOG ON
  ( NAME = N'target_log',
   FILENAME = N'C:\MSSQL_translog\target_log.ldf',
   SIZE = 56MB, FILEGROWTH = 16MB);
GO

ALTER DATABASE [target] SET QUERY_STORE = ON;
ALTER DATABASE [target] SET QUERY_STORE (OPERATION_MODE = READ_WRITE);
GO

USE [target];
GO
CREATE SCHEMA [test];
GO
CREATE TABLE [test].[business] (
  [test_id] bigint NOT NULL IDENTITY (1, 1),
  [test_str] nvarchar(256) NOT NULL,
  [test_nr] int NOT NULL
) ON [user_data];
GO

USE [master];
GO


Evaluation

The first measurement is a basic run with the default batch size; the following runs vary the BatchSize.

Copy-SqlTable finished with success. Duration = 00:13:28.0940294. [hh:mm:ss.ddd]; 134 308 637 rows; 12.3 GB data
Copy-SqlTable finished with success. Duration = 00:16:12.9162091. [hh:mm:ss.ddd]; BatchSize = 1 000
Copy-SqlTable finished with success. Duration = 00:11:34.3647701. [hh:mm:ss.ddd]; BatchSize = 10 000
Copy-SqlTable finished with success. Duration = 00:10:15.7085043. [hh:mm:ss.ddd]; BatchSize = 100 000
Copy-SqlTable finished with success. Duration = 00:10:00.1098163. [hh:mm:ss.ddd]; BatchSize = 1 000 000

2017-07-06

COM objects with PowerShell

COM (Component Object Model) is a rather old technology from Microsoft, but it is still relevant in practice and recognized by Microsoft. There is actually a new COM implementation in Windows Management Framework (WMF) 5.0, with significant performance improvements as described in „What's New in Windows PowerShell“. Also, some issues are fixed with WMF 5.1 (Bug Fixes).

To get a list of the COM components available on the computer you can use this PowerShell statement:
Get-ChildItem HKLM:\Software\Classes -ErrorAction SilentlyContinue |
  Where-Object {
    $_.PSChildName -match '^\w+\.\w+$' -and (Test-Path -Path "$($_.PSPath)\CLSID")
  } |
  Select-Object -ExpandProperty PSChildName |
  Out-GridView

On the computer I am writing this on there are 1219 COM components…
The statement above is part of the short and fine article „Get a list of all Com objects available“ by Jaap Brasser.

On MSDN there are some nice introductions:
MSDN Library: „Creating .NET and COM Objects (New-Object)“.

A COM object is created with the PowerShell cmdlet New-Object using the parameter -Strict to stabilize the script when the COM component is using an Interop Assembly.

The integration between COM and .NET is based on COM Interop.

Microsoft Office

Each product in Microsoft Office has at least one COM component. Usually they are used inside an Office product with Visual Basic for Applications (VBA), but VBA code can quite easily be refactored to VBScript.

Excel

The central COM component when working with Excel is the „Application Object“. This is a very short and incomplete example:
$Excel = New-Object -ComObject Excel.Application -Property @{Visible = $true} -Strict -ErrorAction SilentlyContinue

$Workbook = $Excel.Workbooks.Add()
$Workbook.Author = 'Niels Grove-Rasmussen'
$Workbook.Title = 'PowerShell COM sandbox'
$Workbook.Subject = 'Sandbox on using COM objects in PowerShell'

$Workbook.Sheets.Item(3).Delete()
$Workbook.Sheets.Item(2).Delete()

$Sheet1 = $Workbook.Sheets.Item(1)
$Sheet1.Name = 'COM'

$Sheet1.Range('A1:A1').Cells = 'Cell A1'
$Sheet1.Range('B1:B1').Cells = 2
$Sheet1.Range('B2:B2').Cells = 3.1
$Sheet1.Range('B3:B3').Cells.Formula = '=SUM(B1:B2)'

$Excel.Quit()

Using -Strict on the cmdlet New-Object also generates 5+ lines of output about IDispatch. This is suppressed with -ErrorAction SilentlyContinue.

Working with Excel through COM will often - if not always - end up using Application.Sheets. In the example above it is used to give the sheet a custom name and to add values and a formula to cells.

The Excel object does not close nicely, so some COM cleanup is required to reset the COM environment after an Excel object is closed.
while ([System.Runtime.InteropServices.Marshal]::ReleaseComObject($Excel)) {
  'Released one Excel COM count.' | Write-Verbose
}
Remove-Variable Excel
[System.GC]::Collect()

As this is an old challenge, there is a Microsoft article on the more general subject: „Getting Rid of a COM Object (Once and For All)“. One interesting point about Excel in the article is that when Excel is closed manually, an EXCEL.EXE process may still be active.

Word

This is another example of a COM component in Microsoft Office. The naming is similar to Excel, so the central class is the Application Class.

$Word = New-Object -ComObject Word.Application -Property @{Visible = $true} -Strict -ErrorAction SilentlyContinue

$Word.Quit()


Internet Explorer

The old Microsoft web browser has a COM component that used to be popular for GUI automation. Some naming is aligned with the Microsoft Office COM components – or the other way around. The central class is the Application Class.

$IE = New-Object -ComObject InternetExplorer.Application -Strict
$IE.Visible = $true

$IE = $null


Windows Scripting Host

Windows Scripting Host (WSH) is the „old“ Windows automation technology from before PowerShell. By default there are script engines for Visual Basic Scripting Edition (VBScript) and JScript. Other languages have been implemented to integrate with WSH, e.g. Perl and REXX. There are many old but good books, articles and examples around. Personally I prefer JScript, as it has more robust error handling than VBScript - and other subjective benefits.

WScript

In this part of WSH the central COM class is WshShell.

$vbInformation = 64
$vbOKCancel = 1

$Timeout = 0
$Wsh = New-Object -ComObject WScript.Shell -Strict
[int]$Button = $Wsh.Popup('Hello from PowerShell!', $Timeout, 'PowerShell', $vbInformation + $vbOKCancel)
$Wsh = $null
''
switch($Button) {
  -1 { '???' }
  1 { ':-)' }
  2 { ':-(' }
}

The Popup method used above is a less nice way to put a popup window in a PowerShell script. I have elsewhere written a nicer solution with a PowerShell MessageBox.
I think that if you need GUI elements then you should look at .NET Windows Forms before WSH.
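For comparison, the same popup could be done with Windows Forms instead of WScript.Shell, avoiding the magic VBScript constants:

```powershell
# Sketch: a popup with .NET Windows Forms instead of the WScript.Shell
# Popup method; the named enumeration values replace the VBScript constants.
Add-Type -AssemblyName System.Windows.Forms

$Button = [System.Windows.Forms.MessageBox]::Show(
    'Hello from PowerShell!',                        # text
    'PowerShell',                                    # caption
    [System.Windows.Forms.MessageBoxButtons]::OKCancel,
    [System.Windows.Forms.MessageBoxIcon]::Information)

if ($Button -eq [System.Windows.Forms.DialogResult]::OK) { ':-)' } else { ':-(' }
```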

Controller

WshController

Network

WshNetwork

Windows Shell

The central COM component to Windows Shell is the Shell object.

Actually the name „Windows Shell“ is somewhat misleading, as Windows Shell is usually used for the Windows command-line shell cmd.exe. So if you are talking with others about the COM class Shell.Application, you should make sure to distinguish it from the cmd.exe Windows Shell.

$Shell = New-Object -ComObject Shell.Application -Strict
$Shell.Open('C:\')

$Shell = $null


ScriptControl

The ScriptControl COM component can host a script engine, like VBScript, inside another application.

$Script = New-Object -ComObject MSScriptControl.ScriptControl -Strict
$Script.Language = 'vbscript'
$Script.AddCode('function getInput() getInput = inputbox(...')
$Input = $Script.eval('getInput')

This is a rather poisonous way to call a piece of VBScript code, as the eval method is used. The method opens the script up to injection attacks.

Disclaimer: The example above should not be used in production!

When I have a more suitable example on using ScriptControl this example will be replaced.

History

2017-06-06 Blog entry started as placeholder for ideas and old notes. Not released.
2017-07-06 Notes rewritten, restructured and released.