2018-12-28

Sandbox Active Directory in VMware Workstation

This is to create an isolated sandbox where network, storage and patching can be isolated to the given sandbox. The idea is also to be able to create several independent sandboxes for different purposes, and to create sandboxes of different complexity, from a single database installation to a full environment.
I am sure this could also be accomplished with Microsoft Hyper-V or another virtualisation platform, but I am currently using VMware, so that is also what I use in this case.

This starts as a very simple description of the process with some central configurations. Later I intend to expand this to a scripted installation. The final goal is to be able to provide a complete AD forest automatically.

Clone Windows Server

I have a stand-alone Windows Server that I clone in the current state where it is shut down (cold), and the clone is a full clone as opposed to a linked clone. The installation and configuration is described in the blog post "Sandbox Windows Server in VMware Workstation".

Generalize Windows with sysprep

Microsoft Docs: "Sysprep (System Preparation) Overview"

Prepare virtual network

  • Custom network (VMnet13, host-only)
  • DHCP disabled
  • Subnet IP: 192.168.42.0
  • Subnet mask: 255.255.255.0

Prepare server

Network profile:
  • IP : 192.168.42.42
  • Subnet : 255.255.255.0
  • Preferred DNS : 192.168.42.42 (this host)
  • Alternate DNS : 192.168.42.1 (the vmnet)
  • Default gateway: None (host-only)
Steps to prepare the server:
  1. Rename computer, e.g. "DC00".
    • Start PowerShell as administrator.
      Rename-Computer -NewName 'DC00' -Restart
  2. Danish keyboard - as a Dane I usually end up with a Danish keyboard.
  3. Windows location: UK, to get UTC time. This is to avoid issues with daylight saving time and other local quirks.
  4. English UK ISO-8601'ish date time format.
  5. Windows SmartScreen enabled.
  6. Set password on Administrator user and rename user, e.g. "Ragnar". Password never expires.
  7. Personal administrator user, e.g. "SuperNiels". Password never expires.
  8. vmxnet3 Ethernet Adapter:
    • Do not allow computer to turn off device to save power.
    • See the post "vmxnet3 network adapter" about vmxnet paravirtualized network adapter.
    • Remove Ethernet0 after vmxnet3 configuration.
    • Rename adapter, e.g. to "Ethernet42".
  9. Remove old network adapter. Show hidden devices in Device Manager and remove Intel Gigabit Network Connection.
  10. Static network definition; IP and DNS. This should be done after changing the ethernet adapter as the configuration is assigned to the adapter (see the sketch below).
  11. Print Spooler service stopped and disabled
    • Start PowerShell as administrator.
      Stop-Service -Name 'spooler'
      Set-Service -Name 'spooler' -StartupType 'Disabled'
  12. Activate Windows Server. This can be done offline as described in the post "Activate Windows Server offline". Or you can temporarily change the virtual network to a network with NAT to the internet.
Also you might want to consider disabling the screensaver and keeping the screen on at all times.
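A minimal PowerShell sketch for step 10, run as administrator, assuming the adapter is already renamed to "Ethernet42" and using the addresses from the network profile above:
  # Static IP without default gateway (host-only network)
  New-NetIPAddress -InterfaceAlias 'Ethernet42' -IPAddress '192.168.42.42' -PrefixLength 24
  # Preferred DNS is this host, alternate DNS is the vmnet
  Set-DnsClientServerAddress -InterfaceAlias 'Ethernet42' -ServerAddresses '192.168.42.42','192.168.42.1'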

Configure Domain Controller

Domain: sandbox.lan
  1. Add Windows Server roles: Active Directory Domain Services (AD DS) and DNS Server. Both with all features and tools.
  2. Promote server to Domain Controller (dcpromo) in a new forest.
  3. Specify domain controller capabilities; DNS server and Global Catalog on the domain "sandbox.lan".
  4. Enter password for DSRM.
  5. Do not delegate DNS.
  6. Accept the NetBIOS name "SANDBOX".
  7. Accept the default folders. If you are installing a DC in Production this is a configuration to consider.
The generated script can be viewed, but it is actually only one cmdlet call, and because of the limited coverage I find the "script" rather incomplete.
Import-Module ADDSDeployment
Install-ADDSForest -CreateDnsDelegation:$false -DatabasePath "C:\Windows\NTDS" -DomainMode "WinThreshold" -DomainName "sandbox.lan" -DomainNetbiosName "SANDBOX" -ForestMode "WinThreshold" -InstallDns:$true -LogPath "C:\Windows\NTDS" -NoRebootOnCompletion:$false -SysvolPath "C:\Windows\SYSVOL" -Force:$true


Review the validation and start installation of AD DS.
When the installation of AD DS is complete you will be logged off and the server will be restarted.

Verify Domain Controller

The Server Manager now also shows the items "AD DS" and "DNS" in the menu to the left.
In the System window the full computer name is now "DC00.sandbox.lan". Also the workgroup field is now a domain field where the domain name "sandbox.lan" is shown.
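A quick check can also be done in PowerShell. This is a minimal sketch; the ActiveDirectory module comes with the AD DS tools and dcdiag with the Domain Controller role:
  # Confirm the new forest and domain
  Get-ADDomain | Select-Object DNSRoot, NetBIOSName, DomainMode
  # Confirm this server is a Domain Controller and Global Catalog
  Get-ADDomainController -Identity 'DC00' | Select-Object HostName, IsGlobalCatalog
  # Run the standard Domain Controller diagnostics
  dcdiag.exe /s:DC00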

Patch Windows Server offline

  1. Get the latest cumulative update from the Microsoft Update Catalog.
  2. Copy the installation set to the virtual server.
  3. Run the installation on the virtual server as administrator (see the sketch below).
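A minimal sketch of step 3, assuming the cumulative update has been copied to the server as an msu-file; the file name is hypothetical:
  # Install the update quietly and restart afterwards
  wusa.exe 'C:\Temp\windows10.0-kb4346877-x64.msu' /quiet /norestart
  Restart-Computer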

History

2023-09-21  Creation of VM migrated to SQLAdmin blog post.
2018-12-28  Post migrated to SQLAdmin blog.
2018-04-30  Post released on AzureAdmin blog.

2018-11-15

Excel datetime from Nagios CSV export

When you export a counter set like Processor % Privileged Time from Nagios XI to a CSV-file, you get a file that begins with something like this:

'timestamp','\Processor($)\% Privileged Time_value'
'1540989000','6.0683333333e+00'
'1540990800','1.2750000000e+00'
...


The timestamp column is just an integer, but I would like a human readable timestamp. At first I looked at a conversion from a file time, but the integer from Nagios is too small to give a correct datetime.

A good colleague then introduced me to the Epoch timestamp, and with this formula in Excel the Nagios timestamp is converted correctly:
=(A2/86400)+25569
where
  • "A2" is the reference to the cell holding the Nagios timestamp integer
  • 86400 is the number of seconds in a day, used to convert the Unix timestamp (seconds since 1970-01-01) to a number of days.
  • 25569 is the Excel serial date for the same day (1970-01-01), which can be calculated with =DATE(1970;1;1).
When you enter the formula in Excel the first result is not that human readable, e.g. 43404.52083. But by formatting the cell as a timestamp or a user defined type like
yyyy-mm-dd hh:mm:ss,000
you get a nice and precise timestamp:
2018-10-31 12:30:00,000
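If you want to sanity-check the conversion outside Excel, a small PowerShell sketch can do the same calculation; the timestamp is the first value from the CSV example and the result is in UTC:
  # Convert a Unix epoch timestamp (seconds since 1970-01-01 UTC) to a readable datetime
  [DateTimeOffset]::FromUnixTimeSeconds(1540989000).UtcDateTime.ToString('yyyy-MM-dd HH:mm:ss')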

The number in the second column can also be formatted to something more readable by formatting the cells to a number, e.g. with one decimal.

Please notice that I am working in a Danish Excel where the decimal delimiter is comma (,). I have tried to convert to the more international decimal delimiter dot (.), but the conversion is done manually in this text and not tested. So you could experience some errors on this minor subject, but I am quite confident that you can fix them quite fast.

You can play with Epoch conversions back and forth on EpochConverter.

2018-10-09

TFS Backup

Backup share is required by the TFS backup wizard. Actually the wizard checks if the share is on the local host. To be sure that the rights are correct for a SQL Server backup the SQL Server service account should be owner of the folder that is shared. Subfolders should be created with the (undocumented) extended stored procedure xp_create_subdir to ensure that the SQL Server service account has the correct rights on the given folder.
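For a sandbox the folder and the share could be created with a sketch like this; the path and share name are examples. The point is to let the Database Engine create the folders so the SQL Server service account gets the correct rights, and only then share the folder:
  EXECUTE master.sys.xp_create_subdir 'C:\TfsBackup';
and share it, e.g. in PowerShell as administrator:
  # Sandbox only - grant proper, restricted permissions in a real installation
  New-SmbShare -Name 'TfsBackup' -Path 'C:\TfsBackup' -FullAccess 'Everyone'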

Backup by TFS Job Tasks defined in TFS Administration Console (TfsMgmt.exe). Please be aware that files are written in two contexts - the TFS Service (TFSService) and the SQL Server service:
  • Recovery Model on all TFS and SSRS databases is set to Full in the context of the administrator.
  • Backup configuration is written in xml-file in TFS backup folder (BackupSettings.xml) in the context of the TFS Service.
  • Backup of Reporting Services encryption keys are done by the wizard - not by a TFS backup job. This is why there only is one snk-file on a key backup in the TFS backup share.
Actual process in backup job:
  1. Verify installation, running status and capacity.
  2. Prepare databases for synchronized backup
  3. Grant permissions to backup path
  4. Full database backup of the TFS databases; Configuration, Collection(-s) and Warehouse. Also the SSRS databases; ReportServer and ReportServerTempDB. Backup files are created and written in the context of the SQL Server service - like a normal SQL Server backup.
  5. Marking TFS databases using stored procedure prc_SetTransactionMark in each database; EXEC prc_SetTransactionMark TfsMarkTfpt
  6. Backup transaction log on TFS and SSRS databases. Backup files are created and written in the context of the SQL Server service.
Log files on the TFS job task are in the folder "%ProgramData%\Microsoft\Team Foundation\Server Configuration\Logs".
Also configuration and log files on each TFS backup run in the TFS backup folder.
  • Transactional_<number>.log; TFS job log on transaction log backup.
  • <database name>_<number>L.trn; SQL Server transaction log backup file.
  • Delete_<number>.log; TFS job log on deletion of old files.
  • Full_<number>.log; TFS job log on full database backup.
  • <database name>_<number>F.bak; SQL Server full database backup file.
  • BackupSets.xml; TFS backup history on existing backup files.
  • RSKey<number>.snk; SQL Server Reporting Services encryption key backup file.
The log file BackupSets.xml holds information about the existing backup files and their history.
Existing backup files are controlled by the TFS Delete job, which deletes backup files after the given retention period, where the recommended minimum value is seven days.

TFS Scheduled Backups does not
    • back up system databases; master, msdb, model. You could use Ola's SQL Server Maintenance Solution (GitHub).
    • back up Service Master Key (SMK).
    • back up Database Master Key (DMK) on master.
    • verify backup files.

    2018-10-04

    Access to vmware shared folders in PowerShell administrator session

    When you define shared folders to a vmware guest as a drive they are also available to a PowerShell session as a drive. Usually the Z-drive. This you can see by the command "net use"

    PS> net use
    New connections will be remembered.

    Status       Local     Remote                    Network
    -------------------------------------------------------------------------------
                 Z:        \\vmware-host\Shared Folders
                                                    VMware Shared Folders
    The command completed successfully.


    But if the PowerShell session is an administrator session, you get something like a network error. This is because the share is not defined to the administrator session. If "net use" is run and the result compared with the one from the normal user session, you can see that the Z-drive is missing.

    PS> net use
    New connections will be remembered.

    There are no entries in the list.


    The drive is then mapped to the session. A new administrator session will also have the Z-drive mapped.

    PS> net use Z: '\\vmware-host\shared folders'
    The command completed successfully


    It is possible to persist the mapping with the parameter "persistent", but you might experience that the persistence varies. I have not looked deeper into the details on this.
    /persistent:{yes | no}
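    As a sketch, the mapping above with persistence enabled would be:
    net use Z: '\\vmware-host\Shared Folders' /persistent:yes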

    The short help on "net use" is available with the command:
    net use /?
    The longer help is available with the command:
    net use /help

    This is not only a PowerShell thing, but a general Windows UAC thing. You will have much the same experience in other tools like Windows Shell (cmd) or Windows Scripting Host (WSH).

    Setting the Z-drive with the PowerShell cmdlet New-PSDrive does not make the vmware shared folders available to administrator sessions.

    2018-09-19

    Team Foundation Server on a single server

    Preparations

    This is a description of installing Microsoft Team Foundation Server (TFS) 2017 or 2018 on a single server. I am doing it on a virtual machine running Microsoft Windows Server 2016 in vmware Workstation version 14.

    Microsoft has provided the document "Install TFS on a single server" (TFS single server) and this document describes several activities and hints quite nicely. I am quite new to TFS and similar platforms, which gives me some experiences that are not described in the Microsoft document. This is the motivation for this text and the work behind it.

    There is a document from Microsoft on "Install and configure Team Foundation Server". But for now this document is on TFS 2015 and not TFS 2018.

    Microsoft SQL Server

    The Microsoft document on TFS single server installation describes a TFS installation using Microsoft SQL Server Express as database platform. But this solution is rather limited with SQL Server Express, so I have decided to use a "real" Microsoft SQL Server installation in this scenario.

    Besides SQL Server Database Engine - with Full-Text Search - the SQL Server installation also includes SQL Server Analysis Services (SSAS) with a Multidimensional database model, SQL Server Reporting Services (SSRS) in a Native setup and SQL Server Client Connectivity Tools.

    If you are using SQL Server 2017 then Reporting Services is a separate download and installation. But you will need the product key for the installation.

    The trick is
    1. Install SQL Server Database Engine first. During installation copy the product key; you can put it in a text-file for future reference. Do not patch to the required CU level yet.
    2. Install SQL Server Reporting Services with the product key from the general SQL Server installation. During installation you are required to restart the server. Do that and configure Reporting Services with the Report Server Configuration Manager. You can name the Reporting Services databases the same way as the databases you plan for TFS, e.g. "Tfs_SQLAdmin_*".
    3. Patch the SQL Server platform with the latest approved Cumulative Update (CU). The CU installation set will also patch Reporting Services.

    To make TFS backup and restore work a network path to a share is required. This requirement is to (try to) ensure that the backup files are placed on another machine than TFS.
    To make the TFS backup and restore work on a stand-alone sandbox server like this a local share can be created.
    DO NOT DO THIS IN PRODUCTION!!!

    If you want the database files placed in another path than the default location you should change the database default locations on the SQL Server Database Engine instance.
    You might run into some difficulties if you create the path in your own context. To get the ownership of the folders correct you can use the (undocumented) extended stored procedure master.sys.xp_create_subdir. I would not recommend enabling xp_cmdshell as described in some "solutions".
    Or you could move the databases right after completion of the TFS installation. Still you should do it through SQL Server with the xp's.

    Disclaimer: I have worked with SQL Server as my primary field for 10+ years, so this description might be too thin to some. If you can't find what you need in this text or the documentation from Microsoft on installing SQL Server then please write a comment to this text.

    Oracle JRE

    TFS is using the 3rd party product Elasticsearch for Code Search, and Elasticsearch requires a local Java runtime installation. The documentation from Microsoft specifies the requirement to be Oracle Java Runtime Environment (JRE) 8 update 20 or later. I usually prefer the latest version and on this day it is JRE 8 update 172.

    Get the Server JRE from Oracle Java downloads (link). You might have to scroll a few screens down to find Java 8.

    Unlike other Java installations it is packed several times into a tar.gz-file. This is how I unpacked the installation using 7-zip before putting it on the server:
    1. Unpack the tar.gz-file to the same folder.
    2. Unpack the tar-file to a subfolder, e.g. "\server-jre-8u172-windows-x64\".
    3. The subfolder then holds only one element with a JDK subfolder, e.g. "\jdk1.8.0_172\"
    4. Copy this folder to the server in the path you see fit, e.g. %ProgramFiles%\Java.
    It looks like we actually get a JRE plus some JDK elements...

    After installing JRE the Windows environment variable JAVA_HOME is required by Elasticsearch.

    Set the value of the variable to the full path of the JRE folder:

    Please notice that the full path is to the JRE installation on the given update level. When the JRE is updated, the environment variable JAVA_HOME must also be updated.
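    A minimal PowerShell sketch for setting the machine-level variable; the path is an example matching the JDK folder above:
    # Set JAVA_HOME for the machine (requires an administrator session)
    [Environment]::SetEnvironmentVariable('JAVA_HOME', 'C:\Program Files\Java\jdk1.8.0_172', 'Machine')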

    Service accounts

    The SQL Server components are installed with local virtual accounts as service accounts. If the installation is in an Active Directory (AD) domain then I would consider using AD Managed Service Accounts (MSA) for the SQL Server components.

    For the TFS components these service accounts are created locally:
    • TFS Service (tfsservice); Log on as a service
    • TFS Reports (tfsreports)
    • TFS Build (tfsbuild); Log on as a service
    • TFS Search (tfssearch); Log on as a service
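    For a sandbox the local accounts listed above can be created with a PowerShell sketch like this. The password handling is simplified, and the "Log on as a service" right still has to be granted as described below:
    # Create the local TFS service accounts (sandbox only)
    $password = Read-Host -AsSecureString -Prompt 'Password for the TFS service accounts'
    'tfsservice', 'tfsreports', 'tfsbuild', 'tfssearch' |
    ForEach-Object {
      New-LocalUser -Name $_ -Password $password -PasswordNeverExpires -AccountNeverExpires
    }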
    Most services require the Windows local security policy "Log on as a service".
    1. Open "Local Security Policy" (secpol.msc) under "Administrative Tools".
    2. Expand "Local Policies".
    3. Click "User Rights Assignment".
    4. Right-click "Log on as a service" in the right pane and select Properties.
    5. Click "Add User or Group...".
    6. Enter user to add and click OK.
    7. Click OK.
    8. Close "Local Security Policy".
    If the installation is in an AD domain then I would create the service accounts as domain accounts.

    When preparing for a TFS installation in Production I would definitely ensure that the service accounts are hardened and secured. Generally I think that source code and development documentation is core business information and must be secured as much as possible.

    Installation

    The installation set can be downloaded as either an EXE- or an ISO-file. I prefer the ISO-file as I like the way it can be mounted on the virtual server and used without much ado.

    TFS installation

    The installation is started with the Tfs exe-file in the root of the installation set, e.g. "Tfs2018.exe". In the first screen the installation path can be entered, which I think should be considered if your server standard aims to protect the system drive by separation of system and applications.

    The basic TFS installation takes several minutes:


    When the basic TFS installation is completed the machine must be restarted.

    After the restart the TFS installation automatically starts the TFS Configuration Wizard:
    The TFS Configuration Wizard collects configuration data, verifies the configuration and executes the configuration.

    As this is an isolated sandbox installation I choose to participate in the Microsoft Improvements Program:
    This I would not do in Production as it requires the server to be connected to the internet, and I really don't know or control what is sent to Microsoft and how it is sent.

    This is a new TFS installation and I don't have databases or any other configuration items on TFS:

    With TFS 2018 Update 3 you are asked if the installation is a trial. The default answer is a trial installation. If you have Visual Studio or MSDN subscription I recommend not to install as trial. One thing is that the trial installation will expire after 60 days:

    At first it would seem obvious to do a basic deployment of TFS, but as I would like integration with SSRS I have to select an Advanced TFS deployment:

    The TFS language selection is English. While I am in a very small language area with Danish, I generally prefer servers to "speak" English. This removes a lot of communication issues - also with consultants not speaking Danish:

    The SQL Server Database Engine is here installed with a named instance. I have not created empty and pre-sized databases. This I will get back to later:

    The default service account for the TFS service is Local Service, but as mentioned earlier I have created dedicated service accounts. Please notice the possibility to test the service account:

    The TFS Application Tier is configured with defaults:

    The TFS Search is also configured with a dedicated service account and not the default Local Service. As this installation is a single server installation there is no existing Search Service. When installing TFS in Production a separate TFS Search server is highly recommended:

    With Update 3 for TFS 2018 Microsoft has introduced basic authentication to secure the communication with TFS Code Search. There is a note on the Microsoft DevOps blog "Introducing Search service authorization to make communications with TFS more secure" with some background and an upgrade path.

    This is where the integration with SSRS is indicated:

    When the SSRS installation is configured and running before TFS installation the TFS Configuration Wizard can detect not only the SSRS instance but also the SSRS URLs:

    The TFS Reporting also uses SSAS, and when the SSAS installation is configured and running before TFS installation then the TFS Configuration Wizard can detect the SSAS instance:

    To segregate the TFS Reporting from the rest of the TFS installation it is highly recommended to use a dedicated service account for TFS Reporting:

    I do not go with the default TFS project collection as this will give some nasty restrictions on future consolidations:

    When the TFS configuration definition is completed, the wizard gives you the possibility to manually review the complete TFS configuration:

    Finally the TFS Configuration Wizard verifies the complete configuration. It is at this point things like the SSAS database model are verified. The verification takes several minutes:

    When the verification is complete you get a picture like this. I had to do some reruns of the verification before getting it right:

    Then it is time for the actual configuration of the TFS installation. This will take some minutes:

    Again you will get a nice status picture when the TFS configuration is complete:

    Finally the TFS Configuration Wizard shows a link to the log file, a link to TFS and other useful configuration details.

    The TFS site is running on Microsoft Internet Information Server (IIS) in a dedicated application pool. The default application pool is also running, but in a Production installation I would really consider stopping the default application pool to comply with the Principle of Least Service:

    And the scrolling text is something I think should be in the general documentation:

    When the TFS Configuration Wizard is closed the installation automatically starts the TFS Administration Console:

    Configurations

    After the TFS installation I pinned some tools to the Windows Start:

    Also I created these shortcuts on the Windows Desktop:

    • Report Server (http://localhost/reportserver)
    • Reports (http://localhost/reports)
    • TFS (http://localhost:8080/tfs)
    • TFS Logs Folder (C:\ProgramData\Microsoft\Team Foundation\Server Configuration\Logs)

    TFS Scheduled Backup

    Even in a TFS sandbox a backup should be scheduled. See more below in the section about Database Configuration.

    The TFS Scheduled Backups Wizard requires a Network Backup Path that must be a UNC-path. Please see the section in the beginning of this text about the SQL Server preparations.

    A simple TFS backup schedule could look like this:
    Please notice that this is a TFS backup schedule for a sandbox installation. In Production or another more important environment I would really consider changing the retention to a higher value.

    Database Configuration

    The TFS Configuration Wizard is running in the user context of the administrator who started the TFS installation. This means that the user is the owner of the databases. Such a configuration should be changed to your standard as it otherwise could generate problems when the user is disabled or deleted in the AD domain:
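    The owner can be changed with a statement like this; the database name is an example from this installation and "sa" is just used for illustration - use your own standard:
    -- Change the database owner away from the installing administrator
    ALTER AUTHORIZATION ON DATABASE::[Tfs_SQLADMIN_Configuration] TO [sa];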

    The database recovery model on the created TFS databases is Full - even if the model database is configured for Recovery Model Simple before the TFS installation. Usually I would change the database Recovery Model to Simple in non-Production installations, but scheduling TFS database backup in the TFS Administration Console will change the Recovery Model to Full on all databases, also ReportServerTempDB. And actually this makes sense as TFS uses marked transactions on the backup to ensure consistency across the databases.
    Actually the change of Recovery Model is done when the Backup Schedule is configured and in the context of the user working in the TFS Administration Console. The backup itself is taken in the context of the TFS service account (tfsservice). Please remember that the backup files are created in the context of the Database Engine service account, and that this user must have create and write access to the shared storage on the backup share.

    If the TFS databases are created on SQL Server 2017 then the Compatibility Level of the TFS databases will not match the Compatibility Level of the Database Engine instance. To fix this the Compatibility Level of the TFS databases should be raised from 130 to 140. The TFS installation will still be supported as SQL Server 2017 is on the list of requirements for TFS 2018.
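    A sketch for one of the databases; repeat the statement for each TFS database:
    -- Raise the Compatibility Level to match SQL Server 2017
    ALTER DATABASE [Tfs_SQLADMIN_Collection] SET COMPATIBILITY_LEVEL = 140;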

    The initial size of the databases is defined by the size of the system database model. During the TFS installation the databases grow several times. The databases should be re-sized with additional space to minimize autogrow events. When installing TFS in Production I would really go with empty predefined databases to avoid database autogrows:

    Also I would reconfigure the database autogrow size - especially on the TFS Collection database, which is initially defined with exponential growth:

    To optimise performance on the TFS Collection database I added some extra data files for parallel I/O:
    ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_01', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_01.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];
    ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_02', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_02.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];
    ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_03', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_03.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];
    ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_04', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_04.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];
    ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_05', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_05.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];
    ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_06', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_06.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];
    ALTER DATABASE [Tfs_SQLADMIN_Collection] ADD FILE ( NAME = N'Tfs_SQLADMIN_Collection_07', FILENAME = N'C:\MSSQL\Data\Tfs_SQLADMIN_Collection_07.ndf' , SIZE = 64MB , FILEGROWTH = 64MB ) TO FILEGROUP [PRIMARY];

    Discussion

    Continuity

    When installing a more permanent TFS installation you must plan and test full or partly recovery of the TFS installation. There is a possibility to configure Scheduled Backups in TFS Administration Console:

    If you choose to go with this possibility you should not reduce the recovery tests.

    Security

    The default authentication on TFS is NTLM, but it is according to the documentation possible to use Kerberos authentication. This I would like to look into before building a TFS installation in Production.

    History

    2018-09-19 Moved to SQLAdmin blog.
    2018-02-13 First published on AzureAdmin blog.

    2018-09-11

    Multiple data files for TFS Collection database

    Unfortunately Microsoft Team Foundation Server (TFS) creates a collection database with only one data file. Like other products with high-activity databases, like SharePoint, the database should have multiple data files to optimise for parallel I/O.
    Paul Randal has written a fine piece on "Benchmarking: do multiple data files make a difference?" that you also should read.

    Findings

    1111 autogrow events on the data file in 22 days. This is partly because the growth rate is only 50 MiB.
    The data file is about 176 GiB.
    Unfortunately I can't get the file fragmentation as I don't have access to the OS. But worst case could be the difference between current size and model database size divided by 50 MiB, which is about 3600 fragments. Even if I have to expand the extra data files in small steps of 512 MiB, the final fragmentation on each extra data file will only be about 48 fragments. The total fragmentation after this activity will theoretically be around 830 fragments, which is a significant reduction (77 %) in external fragmentation.
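    A back-of-envelope check of these estimates in PowerShell, ignoring the model database size as it is insignificant compared to 176 GiB:
    # Worst case fragments today: one 50 MiB growth per fragment
    [math]::Round(176GB / 50MB)                      # about 3600
    # Each extra data file grown to 24 GiB in steps of 512 MiB
    24GB / 512MB                                     # 48
    # After the activity: about 24 GiB left in the mdf plus 7 ndf files
    [math]::Round(24GB / 50MB) + 7 * (24GB / 512MB)  # about 830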

    General solution

    1. Create extra data files (ndf).
    2. Move data to extra data files from first data file (mdf).
    3. Shrink first data file to equal size of the extra data files.

    Sounds easy - but...

    • Mind the transaction log. It should not autogrow as a result of this activity. And movement of data pages is registered in the transaction log.
    • Data pages are not moved from end of first data file automatically by SQL Server.
    • Limited storage...

    Testing in a sandbox with sufficient storage showed some challenges.
    Looks like the target size on the extra data files is 24 GiB. With increments of 512 MiB there are about 48 rounds.

    The real deal

    • Automated translog backup once an hour scheduled and managed by an external operator. 48 rounds times one hour gives 48 hours. At least - but on the other hand each round takes only about 10-15 minutes on the given hardware.
    • sysadmin. But not local administrator on the Windows Server. Outsourced operations...
    • Trace Flag 1117 set by default by the outsourcing vendor. Otherwise this would have been a different challenge.
    All this considered the activity can be described as two major activities where the second will have several rounds:
    1. Create the extra data files (ndf) with small initial size, e.g. 512 MiB. You could disable autogrow on the first data file (mdf) to avoid further unwanted autogrows.
    2. Repeat these steps until the first data file (mdf) has equal size compared to the extra data files (ndf).
      1. Disable automatic filegrowth on extra data files (ndf). This is to avoid the extra data files (ndf) growing wild and consuming storage out of control while data pages are moved from the first data file (mdf).
      2. Move data pages to extra datafiles using DBCC SHRINKFILE (databasename, EMPTYFILE).
      3. Enable autogrow on the extra data files, e.g. 512 MiB.
      4. Move allocated data pages in the first data file from the end of the file to the front of the file using DBCC SHRINKFILE (databasename, NOTRUNCATE).
      5. Truncate the first data file (mdf) using DBCC SHRINKFILE (databasename, targetsize in MiB).
      6. Expand the extra data files in a small increment, e.g. 512 MiB. This step can be omitted the first time, right after the extra data files (ndf) are created.
      7. Check data space in each data file through the DMV sys.database_files or the SSMS report "Disk Usage".
      8. Wait for translog backup.
      9. Confirm translog backup and free space in translog.
    The steps above are described and discussed in detail in the following. I have not created a complete turn-key script as the naming will differ from one TFS installation to another. But you should be able to create your own script by copying and altering the statements.

      First - Create extra data files

      In this case I add seven extra data files as the database server has more than eight processors (cores). This gives a total of eight data files which I find a fine compromise between multiple data files and simple administration.
      I add each extra data file in a separate batch to minimise the impact on the running database installation:
      ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
        NAME = N'TFS_SQLADMIN_Collection_1',
        FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_1.ndf',
        SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
      GO
      ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
        NAME = N'TFS_SQLADMIN_Collection_2',
        FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_2.ndf',
        SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
      GO
      ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
        NAME = N'TFS_SQLADMIN_Collection_3',
        FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_3.ndf',
        SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
      GO
      ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
        NAME = N'TFS_SQLADMIN_Collection_4',
        FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_4.ndf',
        SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
      GO
      ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
        NAME = N'TFS_SQLADMIN_Collection_5',
        FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_5.ndf',
        SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
      GO
      ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
        NAME = N'TFS_SQLADMIN_Collection_6',
        FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_6.ndf',
        SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
      GO
      ALTER DATABASE [TFS_SQLADMIN_Collection] ADD FILE (
        NAME = N'TFS_SQLADMIN_Collection_7',
        FILENAME = N'G:\MSSQL\data\TFS_SQLADMIN_Collection_7.ndf',
        SIZE = 512MB , FILEGROWTH = 512MB ) TO FILEGROUP [PRIMARY];
      GO


      After adding the extra data files the autogrowth is disabled on the first data file (mdf):
      ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE (NAME = N'TFS_SQLADMIN_Collection', FILEGROWTH = 0);

      Second - Balance data files

      The second activity could be automated, but in my case the server hosts other databases, and maybe some developers were working late... So I chose to do this manually step-by-step while keeping a sharp eye on the general behaviour of the server.

      Filegrowth on extra data files

      Automatic filegrowth on the extra data files is disabled during movement of data pages from the first data file (mdf) to the extra data files (ndf). This is to avoid autogrow events and keep the storage usage in control during the entire operation.

      If you set all data files, both the first and the newly added, to zero filegrowth then you can reduce the fragmentation of the files and spare some time by not completely emptying the first data file.

      The filegrowth on the extra data files (ndf) is handled manually during the operation. In this case I ended up with a growth on 512 MiB in each step by several experiments in a sandbox.

      The growth rate should be small enough to make it possible to handle the amount of data within one log backup without autogrow events on the transaction log file (ldf). This is again to keep the storage usage in control during the operation.

      Move data pages to extra files

      Data pages are moved in the database "TFS_SQLADMIN_Collection" from the first data file (mdf) to the extra data files (ndf) with the command
      DBCC SHRINKFILE (TFS_SQLADMIN_Collection, EMPTYFILE);
      It is the logical name of the data file that is required as the first parameter. Here it is a bit unfortunate that the database and the first data file have the same name.
      The string holding the logical filename is not wrapped in single quotes like a string usually is in T-SQL.
      The parameter EMPTYFILE will move all data pages from the given data file and prepare the file for deletion. In this case I only want the file reduced to a size equal to the other data files. According to the documentation on DBCC SHRINKFILE an ALTER statement on the data file will remove the emptyfile-flag from the file.

      While the data are moved to the extra data files you can see a session with your user running the command "DbccFilesCompact". Also you can see heavy I/O activity on all data files; reads on the first data file and writes on the extra data files.

      Please notice that the moved data pages also end up in the transaction log together with their data.

      Move allocated data pages to front of data file

      This can be visualised as a movement of allocated data pages from right to left on a line, where the left end of the line represents the beginning of the file.
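      As listed in the overall steps this is done with the NOTRUNCATE option on the first data file
      DBCC SHRINKFILE (TFS_SQLADMIN_Collection, NOTRUNCATE);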

      Truncate data files

      When the end of the data file is liberated of allocated data pages, the data file can be truncated down to the last extent holding allocated data pages with the command
      DBCC SHRINKFILE (TFS_SQLADMIN_Collection, 24576);

      The command will truncate the file to 24 GiB (24,576 MB), which is my initial qualified guess on the final size of all data files. If the file can't be truncated to this size because the file holds more allocated data pages than the limit, then the shrink operation will stop at the last extent holding allocated data pages. But then the command will not throw an error - just finish gracefully as documented.

      Now you can set the desired autogrow rate on all the data files. The growth rate must be the same on all the data files.

      Expand the extra datafiles

      The extra data files (ndf) are then expanded with the increment described above with a statement like
      USE [master]
      GO
      DECLARE @msg nvarchar(2047) = N'[' + CONVERT(nchar(23), GETUTCDATE(), 126) + 'Z]  Expand extra data files (ndf) to 6 GiB each...';
      RAISERROR( @msg, 0, 0);
      GO
      ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_1', SIZE = 6GB );
      ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_2', SIZE = 6GB );
      ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_3', SIZE = 6GB );
      ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_4', SIZE = 6GB );
      ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_5', SIZE = 6GB );
      ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_6', SIZE = 6GB );
      ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_7', SIZE = 6GB );
      GO
      DECLARE @msg nvarchar(2047) = N'[' + CONVERT(nchar(23), GETUTCDATE(), 126) + 'Z] Extra data files (ndf) expanded to 6 GiB each.';
      RAISERROR( @msg, 0, 0);
      GO

      The size can only be defined in integers on the given scale like "GB" or "MB". So the next expansion of the extra data files (ndf) will have the final size defined in a lower scale but a larger number like
      ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_1', SIZE = 6656MB );
      as 6656 MiB equals 6.5 GiB.

      Before and after I have put a small message with a timestamp. This makes it possible to log the execution and get some durations.
      During the entire activity I have copied the output, like these messages, to a Word document. As the activity is spread over several days I think it is useful to compile the output and take a look at it when the dust has settled.

      As there are several rounds with expansion statements like above I have created a PowerShell script to generate the statements.
      function Get-SqlExpandNdf {
        for ([int]$Size = 512; $Size -ile (24*1024); $Size += 512) {
          if ($Size % 1024 -ieq 0) {
            [string]$FileSize = "$($Size/1024)GB"
          }
          else {
            [string]$FileSize = "$($Size)MB"
          }

          [string]$SizeLabel = "$(([System.Math]::Round($Size/1024,1)).ToString().Replace(',','.')) GiB"

          'USE [master];'
          'GO'
          "DECLARE @msg nvarchar(2047) = N'[' + CONVERT(nchar(23), GETUTCDATE(), 126) + 'Z] Expand extra data files (ndf) to $SizeLabel each...';"
          'RAISERROR( @msg, 0,0 );'
          'GO'

          (1..7) |
          ForEach-Object {
            "ALTER DATABASE [TFS_SQLADMIN_Collection] MODIFY FILE ( NAME = N'TFS_SQLADMIN_Collection_$_', SIZE = $FileSize );"
          }

          'GO'
          "DECLARE @msg nvarchar(2047) = N'[' + CONVERT(nchar(23), GETUTCDATE(), 126) + 'Z] Extra data files (ndf) expanded to $SizeLabel each.';"
          'RAISERROR( @msg, 0,0 );'
          'GO'
          ''
        }
      }


      The output can be streamed to a sql-file with the cmdlet Out-File
      Get-SqlExpandNdf | Out-File -LiteralPath 'C:\temp\ExpandNdf.sql' -NoClobber
      and added to a general script file for the activity.

      Check data space

      The data space and the distribution of data on the data files can be seen with a query like this
      USE [TFS_SQLADMIN_Collection];
      GO
      SELECT
        [database_files].[name],
        [database_files].[file_id],
        [database_files].[physical_name],
        ([database_files].[size] * 8.0/1024/1024) AS [size_in_gb],
        (([database_files].[size] * 8.0/1024) - (FILEPROPERTY([database_files].[name], 'SpaceUsed') * 8.0/1024)) AS [free_space_in_mb],
        (([database_files].[size] * 8.0/1024/1024) - (FILEPROPERTY([database_files].[name], 'SpaceUsed') * 8.0/1024/1024)) AS [free_space_in_gb]
      FROM [sys].[database_files];
      GO
      USE [master];
      GO

      The query will return a result set with six columns and as many rows as there are files in the database, including transaction log files (ldf). A result set could look like this
      name                         file_id  physical_name                                   size_in_gb  free_space_in_mb  free_space_in_gb
      TFS_SQLADMIN_Collection      1        G:\MSSQL\data\TFS_SQLADMIN_Collection.mdf       132.54      22.50             0.02
      TFS_SQLADMIN_Collection_log  2        T:\MSSQL\data\TFS_SQLADMIN_Collection_log.ldf   28.78       22171.10          21.65
      TFS_SQLADMIN_Collection_1    3        G:\MSSQL\data\TFS_SQLADMIN_Collection_1.ndf     6.50        502.06            0.49
      TFS_SQLADMIN_Collection_2    4        G:\MSSQL\data\TFS_SQLADMIN_Collection_2.ndf     6.50        509.56            0.49
      TFS_SQLADMIN_Collection_3    5        G:\MSSQL\data\TFS_SQLADMIN_Collection_3.ndf     6.50        509.37            0.49
      TFS_SQLADMIN_Collection_4    6        G:\MSSQL\data\TFS_SQLADMIN_Collection_4.ndf     6.50        510.25            0.49
      TFS_SQLADMIN_Collection_5    7        G:\MSSQL\data\TFS_SQLADMIN_Collection_5.ndf     6.50        509.62            0.49
      TFS_SQLADMIN_Collection_6    8        G:\MSSQL\data\TFS_SQLADMIN_Collection_6.ndf     6.50        510.62            0.49
      TFS_SQLADMIN_Collection_7    9        G:\MSSQL\data\TFS_SQLADMIN_Collection_7.ndf     6.50        510.43            0.49

      Discussion

      This activity should have been done when the TFS Collection database was created. But when this is not the case one might as well get on with it as soon as possible, as the technical debt only will increase.

      I could have expanded the extra data files in each run just after autogrow was re-enabled on the files, but this order gave me a better script file structure.

      I have seen a few recommendations on moving the table "tbl_Content" to a custom filegroup. This filegroup should also be configured with multiple data files. I have not gone further with this configuration as the organisation is only about 1100 employees with around 300 active TFS users and then I figure the TFS collection will not be big enough for this to really matter.

      2018-08-07

      Migrate from OLEDB to ODBC - or not

      On August 11, 2011 (2011-08-11) Microsoft announced with the blog post "Microsoft is Aligning with ODBC for Native Relational Data Access" that they recognized ODBC as the de-facto standard for data access and that OLE DB was scheduled for deprecation. The blog post was later updated with links to tools that should help the migration. Also Microsoft made the FAQ "Microsoft is Aligning with ODBC for Native Relational Data Access - FAQ" to help with the details in the migration.
      This was at the time big news as Microsoft for several years had promoted OLE DB over ODBC when building applications involving Microsoft technology. And of course this blog post and its consequences were discussed a lot, e.g. by Dan Guzman in the blog posts "RIP OLE DB" (2011-09-04) and "Deprecated SQL Server Data Access Technologies" (2017-02-04).
      If you are interested in some background then Hal Berenson wrote a blog post about "OLE DB and SQL Server: History, End-Game, and some Microsoft “dirt”" where he gives some great insight as a driving force behind OLE DB.

      But then in October 2017 Microsoft released a new OLE DB driver for SQL Server. In the blog post "Announcing the new release of OLE DB Driver for SQL Server" (2017-10-06) Microsoft also announced that the OLE DB data access technology was un-deprecated.
      A somewhat wobbling course on a rather important subject. But actually I do think it makes sense as both technologies are very widespread and we in reality are beyond the point of no return on each technology.

      It was highly recommended to migrate your solutions and applications from the COM-based OLE DB to ODBC to stay in support, as described in the document "Data Access Technologies Road Map".

      A minor detail is that SQL Server Native Client (SNAC) and MDAC/WDAC continue to be deprecated. So when using OLE DB you should still migrate, but to the new OLE DB driver (msoledbsql).

      Unfortunately there is not much literature on either OLE DB or ODBC, but with some luck you can find the book "Inside ODBC" by Kyle Geiger (1995, Microsoft Press). The book holds a lot of ODBC examples in C/C++ as ODBC is designed for native code.
      The "Windows Data Access Components SDK" has a lot of interesting documentation on both OLE DB, ODBC and ADO.

      OLE DB

      OLE DB is a COM technology with low-level interfaces in a provider for a database service like the SQL Server Database Engine RDBMS. Back in the days before .NET a Microsoft application could either use OLE DB directly or use the high-level interface ADO. There is still a lot of ADO connectivity in the real world. Especially with Office, like Excel in VBA solutions, I often meet OLE DB through ADO.
      There are several OLE DB providers for various data platforms in WDAC, formerly called MDAC. For SQL Server there are also OLE DB providers in SNAC, but as .NET providers were included by default in .NET, SNAC in general became obsolete for .NET.

      The blog post "OLE DB Supportability Guidelines" (2017-08-15) on the SQLNCli blog is written before October 2017 but recommend to continue using OLE DB if no blocking issues are experienced. This I think is the first hint on OLE DB continuation with Microsoft.

      SQLNCli blog: "Released: Update to Microsoft OLE DB Driver for SQL Server" (18.1)

      ODBC

      ODBC is a cross-platform interface for data sources. The interface is, as a Call Level Interface (CLI), defined in an ISO standard (ISO/IEC 9075-3:2016).

      When ODBC was announced as the de-facto standard on SQL Server connectivity, Microsoft released some material on ODBC and how to convert applications. A general introduction to ODBC was given in the document "ODBC How-to Topics". The technical article "Converting SQL Server Applications from OLE DB to ODBC" gives some details on converting from OLE DB to ODBC e.g. with a description on mapping OLE DB objects to ODBC APIs.

      The Microsoft document "Download ODBC Driver for SQL Server" lists the various downloads for the SQL Server ODBC driver. There are several as Microsoft not only deliver a driver for Windows but also for other operating systems like macOS and a collection of Linux.

      Microsoft SQLNCli team blog: ODBC Driver 17.2 for SQL Server Released

      SNAC

      SQL Server Native Client (SNAC) was created to support special SQL Server features like database mirroring. Actually the installation set is a single dynamic-link library (DLL) with a special version of the SQL Server OLE DB provider and ODBC driver. Also the library contains a special version of the ADO API. That is quite a few versions in SNAC...

      SNAC is now deprecated, and the document "SNAC lifecycle explained" gives a short description of the SNAC versions, their general contents and support.

      2018-08-03

      Manual update Windows Server 2016

      I have to update Windows Server installations manually as they are used for sandbox installations on virtual machines, and some of the installations, like AD Domain Controllers, I don't want to put on the internet. Also I don't take the time to build a WSUS installation for each sandbox installation or domain.

      The notes on manual updates to Windows Server 2016 are personal and strictly coupled to my given installations. Experiences are then also rather personal, and the notes should not be used without review and test.

      Updates are downloaded from Microsoft Update Catalog. A search for "Windows Server 2016" (link) and a sort on latest update date gives a nice list to pick from.
      Usually updates are released to the catalog at the same time as to Windows Update. That is 2nd Tuesday of the month. Also called Patch Tuesday.


      The 2018-07 Cumulative Update (KB4346877) fails with an error message about the update not being "applicable".
      Searching the error message I found this short and precise description from PDQ.com (link). It points to KB4132216 on a "Servicing stack update for Windows..." where the text mentions stability improvements to "... an issue that might sometimes lead to incorrect checks for applicability during installation of Windows Updates...". The text is at first sight on Windows 10, but a lot of Windows Server 2016 updates are named Windows 10 in the KB articles and even in the files in the installations.
      The update can be downloaded from Microsoft Update Catalog (link). Installing the servicing stack update takes a few seconds, and after that the Cumulative Update runs as expected.
      This takes several minutes and requires a server restart. The restart itself also takes several minutes.
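      After the restart the update can be confirmed with a small PowerShell check; the KB number is the one from the example above:
      # List the latest installed updates and confirm the cumulative update
      Get-HotFix | Sort-Object -Property InstalledOn -Descending | Select-Object -First 5
      Get-HotFix -Id 'KB4346877'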

      2018-01-04

      SSMS installation on off-line computer

      In secure networks I have several times found it necessary to install a tool station computer without internet access. Most tools like editors can easily be downloaded from another computer and then installed on the tool station. But Microsoft SQL Server Management Studio (SSMS) has some issues when installing without internet access.

      This is an example of the type of error message I usually get. In this case it is from installing SSMS 17.2 on a Danish Windows 7. But I have had similar messages on previous SSMS versions and on other Windows versions.
      I have not found an official description from Microsoft on this situation. One of the first times I ran into this I was actually sitting with several Microsoft SQL Server consultants and PFEs, and they came up with a solution over some mails.

      When a SSMS Setup Failure happens it is also logged in a SsmsSetup log file. This log file is in the folder "C:\Users\<User Name>\AppData\Local\Temp\SsmsSetup".
      (** Insert example **)

      The short description is that SSMS is missing two certificates that it usually downloads during the SSMS installation. The two certificates are

      • MicRooCerAut2011_2011_03_22 (link)
      • MicRooCerAut_2010-06-23 (link)
      You will have to install the certificates before installing SSMS.
      After you have downloaded the certificates as cer-files you will have to import the certificates into Windows. It does not matter where you place the cer-files.
      I know there are several guides on importing certificates into Windows, but I go through the details anyhow to avoid doubt.

      1. Start Microsoft Management Console (mmc.exe)
      2. Add the snap-in "Certificates" (Ctrl + M)
      3. When you mark the snap-in to be added to MMC you will have to select which account type the snap-in should manage. Select Computer Account 
      4. You will also have to select which computer the snap-in should manage. Select Local Computer
      5. After adding the Certificates snap-in you should expand Certificates > Root Key Centers... (or Trusted Root Certification Authorities) > Certificates
      6. Right-click the item Certificates and select All Tasks > Import...
      7. This will start the guide Certificate Import
      8. Select to place all certificates in the Certificate Store named Root Key Center... (or Trusted Root Certification Authorities)
      9. When both certificates are imported with success you can close MMC
      All this I am sure can be automated by a script, but as the computer is off-line there are also some issues getting the script file to the computer.
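      As a sketch the import could be scripted with certutil, which is also available on Windows 7. The path is an example of where the cer-files were copied to:
      # Import both root certificates into the Trusted Root store of the local machine
      Get-ChildItem -Path 'C:\Temp\*.cer' |
      ForEach-Object {
        certutil.exe -addstore Root $_.FullName
      }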