Category Archives: SQL

My SSMS Settings: A Visual Guide

As a database professional, I’ve optimized my SQL Server Management Studio (SSMS) settings for maximum efficiency. Here’s a visual breakdown of my key configurations:

 

1. Editor Tab and Status Bar

  • Status bar shows execution time, database name, login, row count, and server
  • Pink for group connections, khaki for single server
  • Tab text includes only the file name

 

2. Advanced Execution Settings

  • SET CONCAT_NULL_YIELDS_NULL and SET ARITHABORT enabled
  • Transaction isolation: READ COMMITTED
  • Suppressed provider message headers
  • Completion time shown after query execution
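For reference, the first two bullets correspond to session-level SET statements; a rough T-SQL equivalent of what SSMS issues when it opens a connection (shown only to illustrate what the settings do) would be:

    SET CONCAT_NULL_YIELDS_NULL ON;   -- string + NULL yields NULL
    SET ARITHABORT ON;                -- abort the query on overflow or divide-by-zero
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;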

 

3. Query Results to Grid

  • Max characters for non-XML: 65535
  • XML data: Unlimited

 

4. Query Results to Text

  • Custom delimiter: | (vertical bar)
  • Include column headers
  • Scroll as results are received
  • Max characters per column: 8192

 

5. SQL Server Object Explorer Options

  • Audit Log: Top 1000 records
  • Drag-and-drop: Prepend schema, surround with brackets
  • Prompt for schema object renaming
  • Edit Top: 200 rows
  • Select Top: 1000 rows

 

These settings streamline my SSMS workflow. Adjust them to fit your needs and boost your productivity!

Easily attach StackOverflow database

Brent Ozar maintains a Microsoft SQL Server version of the public Stack Overflow data export.

He provides a detailed explanation of how to download, extract and use it.

You need to download the desired versions and put them on your network.

For easy deployment of the StackOverflow database, I’ve put together a PowerShell function that takes care of downloading the files from your domain (if needed), extracting the archives and mounting the files on an existing instance.

The function uses the dbatools and 7Zip4PowerShell PowerShell modules. If they are not present on the host, they will be installed from the official PSGallery.
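A hypothetical invocation could look like this (the parameter names below are illustrative; check the script on GitHub for the actual ones):

    # Extract StackOverflow2010 to D:\Temp and attach it to the 'server\instance' instance
    Get-StackOverflow -SqlInstance 'server\instance' -Name 'StackOverflow2010' -Path 'D:\Temp'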

This will extract the StackOverflow2010.7z archive to D:\Temp, retrieve the default data and log paths from the ‘server\instance’ instance, move the data and log files to the appropriate locations and then attach the database.

The function also offers the possibility to specify that the data and log files should be placed in custom locations.

The full version of the script is available on my GitHub account.

https://github.com/cviorel/PowerShell/blob/main/Get-StackOverflow.ps1

Hope this comes in handy!

Enjoy!

SQL Server Management Studio – automatically get the latest version

When working with SQL Server, I prefer to have a jump host with my tools already installed.

You may also have them on your main laptop/workstation, but I would also recommend having a VM in the cloud or in your datacenter.

Everyone has a different list of tools they use, but most of the time you’ll find these apps on those lists:

  • command line tools to interact with various cloud providers
  • putty
  • winscp
  • chocolatey
  • SQL Server Management Studio
  • Azure Data Studio
  • git
  • vscode
  • vim
  • various PowerShell modules

In this post I will focus on SQL Server Management Studio.

Since version 18.7, Azure Data Studio is also included in the installer.

By default, Azure Data Studio is installed along with SSMS, but this can be excluded by using:
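For the 18.x installer this is a command-line property passed to the setup executable; treat the exact spelling below as something to verify against the release notes for your version:

    SSMS-Setup-ENU.exe /Passive DoNotInstallAzureDataStudio=1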

Eventually, this will not be a valid option, as SSMS will require dependencies provided by Azure Data Studio.

This is a good time to update the script I’m using to automatically install the latest version of SSMS.

I could not find an official manifest with the SSMS versions that I could query, so the script is using the GitHub page where Microsoft publishes the SSMS release notes.

If SSMS is already installed, the script will compare the existing version with the latest one available and it will notify the user.
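A minimal sketch of the idea (it assumes the evergreen https://aka.ms/ssmsfullsetup download link and the documented silent-install switches; the full script also parses the release notes page to compare versions):

    # Download the latest SSMS installer and install it silently
    $installer = Join-Path $env:TEMP 'SSMS-Setup-ENU.exe'
    Invoke-WebRequest -Uri 'https://aka.ms/ssmsfullsetup' -OutFile $installer -UseBasicParsing
    Start-Process -FilePath $installer -ArgumentList '/Install', '/Quiet', '/Norestart' -Wait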

You can also reference this script directly from my GitHub account.

Feel free to grab it and share it if you find it useful.

Configure Active Directory authentication with SQL Server on Linux

Microsoft just released adutil in public preview, a CLI-based utility developed to ease AD authentication configuration for both SQL Server on Linux and SQL Server Linux containers.

We no longer need to switch to a Windows machine to create the AD user for SQL Server and to set the SPNs.

In the following steps I will install a SQL Server instance on Linux and configure AD authentication for it using only the Linux command line and the adutil tool.

We will need 2 VMs:
  • tf-wincore01.lab.local – Domain Controller (DC) running on Windows Server 2019 Core (will
    host the lab.local domain)
  • tf-ubuntu01.lab.local – Ubuntu 18.04 LTS – SQL Server Instance on port 20001 will be
    installed here

I will be creating a brand new environment for this test, and I am using Terraform to provision the VMs.

Prepare the Domain Controller

Once the VMs are created we need to configure the domain controller:
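A minimal sketch of that step (the domain name comes from the lab setup; the DSRM password is prompted for, and this is lab-only configuration):

    # Install the AD DS role and promote the server to a domain controller for lab.local
    Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools
    Install-ADDSForest -DomainName 'lab.local' `
        -SafeModeAdministratorPassword (Read-Host -AsSecureString 'DSRM password') `
        -InstallDns -Force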

Let’s set up our DNS zones:
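For example, assuming a 10.0.0.0/24 lab subnet (adjust the network ID to your own; the forward lookup zone for lab.local was already created during the forest installation):

    # Add a reverse lookup zone for the lab subnet
    Add-DnsServerPrimaryZone -NetworkId '10.0.0.0/24' -ReplicationScope 'Domain'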

Note that this AD configuration is just the bare minimum for our lab and it’s not fit for a Production environment!

Join the Linux host to the domain

It’s now time to join the Linux box to our new domain. The YAML file used by netplan needs to point to the domain controller for DNS:

Confirm the configuration and apply it.
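For example:

    sudo netplan try     # validate the configuration, roll back on timeout
    sudo netplan apply   # make it permanent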

In my case, the file looks like this:
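The interface name and addresses below are specific to my lab (the DC is assumed to be at 10.0.0.10); yours will differ:

    network:
      version: 2
      ethernets:
        eth0:
          dhcp4: no
          addresses: [10.0.0.11/24]
          gateway4: 10.0.0.1
          nameservers:
            search: [lab.local]
            addresses: [10.0.0.10]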

The /etc/resolv.conf file should also point to the domain:
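Something along these lines (again, the DC address is the one from my lab):

    # /etc/resolv.conf
    search lab.local
    nameserver 10.0.0.10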

Next, we install the packages that will allow us to join the machine to the domain:
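These are the packages typically used for an SSSD-based domain join on Ubuntu 18.04:

    sudo apt-get update
    sudo apt-get install -y realmd sssd sssd-tools libnss-sss libpam-sss \
        krb5-user adcli samba-common-bin packagekit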

Let’s also set the hostname:
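Using the FQDN from the lab setup:

    sudo hostnamectl set-hostname tf-ubuntu01.lab.local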

We are now ready to join the machine to the domain:
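Something like the following (the account used here is a placeholder; any account allowed to join machines to the domain will work):

    sudo realm join lab.local -U 'Administrator@LAB.LOCAL' -v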

This command:

  • creates a new computer account in AD
  • creates the /etc/krb5.keytab host keytab file
  • configures the domain in /etc/sssd/sssd.conf
  • updates /etc/krb5.conf

Let’s verify that we can now gather information about a user from the domain, and that we can acquire a Kerberos ticket as that user. The following example uses id, kinit, and klist commands for this.
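For example (using the built-in Administrator account here; any domain account works):

    # Look up a domain account through SSSD
    id Administrator@lab.local

    # Obtain a Kerberos ticket for that account and list it
    kinit Administrator@LAB.LOCAL
    klist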

Install adutil

We now need to install adutil so we can interact with the Domain Controller directly from the Linux box.
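On Ubuntu 18.04 this means registering the Microsoft repository and installing the package (the repository path and package name below are what the public preview used; double-check against the current docs):

    # Register the Microsoft repository and install adutil
    curl -fsSL https://packages.microsoft.com/config/ubuntu/18.04/prod.list | \
        sudo tee /etc/apt/sources.list.d/msprod.list
    sudo apt-get update
    sudo apt-get install -y adutil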

Create a domain user using adutil

Let’s try to create a regular AD user:
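adutil authenticates with the Kerberos ticket obtained earlier via kinit; the user name, distinguished name and password below are placeholders:

    adutil user create --name labuser01 \
        --distname 'CN=labuser01,CN=Users,DC=lab,DC=local' \
        --password 'SomeStr0ngP@ssword'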

At this point adutil cannot list users, but we can check whether an account exists in AD.

Install SQL Server instance on the Linux host

From this point on, I can proceed with installing the SQL Server instance on the Linux host:
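The steps below are the standard ones from the Microsoft documentation for Ubuntu 18.04, plus moving the instance to port 20001 for this lab:

    # Register the SQL Server 2019 repository and install the engine
    wget -qO- https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
    sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/18.04/mssql-server-2019.list)"
    sudo apt-get update
    sudo apt-get install -y mssql-server

    # Pick the edition, accept the EULA and set the SA password
    sudo /opt/mssql/bin/mssql-conf setup

    # Move the instance to port 20001 and restart
    sudo /opt/mssql/bin/mssql-conf set network.tcpport 20001
    sudo systemctl restart mssql-server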

Create an AD user for SQL Server and set the ServicePrincipalName (SPN) using adutil

The SQL Server instance is running; let’s now create an AD user for SQL Server and set the ServicePrincipalName (SPN) using the adutil tool.
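The commands below follow the adutil documentation; the account name and password are placeholders, and the keytab handling is only summarised (the SPN uses the host name and the non-default port 20001):

    # Create the AD account that the SQL Server service will use
    adutil user create --name sqlsvc \
        --distname 'CN=sqlsvc,CN=Users,DC=lab,DC=local' \
        --password 'SomeStr0ngP@ssword'

    # Register the MSSQLSvc SPNs for this host and port on that account
    adutil spn addauto -n sqlsvc -s MSSQLSvc -H tf-ubuntu01.lab.local -p 20001

    # A keytab for this account must then be created (adutil keytab ...) and referenced
    # through mssql-conf (network.kerberoskeytabfile) before restarting SQL Server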

Test the connections and the authentication scheme

Let’s create an AD-based SQL Server login:
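Run from inside the instance (connected as sa, for example); the Windows login name matches the domain user created earlier:

    CREATE LOGIN [LAB\labuser01] FROM WINDOWS;
    GO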

Connecting as a domain user from the Linux box:
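For example, using the lab user created earlier:

    # Get a ticket for the domain user, then connect with integrated authentication (-E)
    kinit labuser01@LAB.LOCAL
    sqlcmd -S tf-ubuntu01.lab.local,20001 -E -Q "SELECT SUSER_SNAME();"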

Let’s verify the authentication scheme:
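A quick check from the same session; for the AD login this should report KERBEROS:

    SELECT c.auth_scheme, s.login_name
    FROM sys.dm_exec_connections AS c
        JOIN sys.dm_exec_sessions AS s ON s.session_id = c.session_id
    WHERE c.session_id = @@SPID;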

Conclusion

Our setup is now complete and we managed to perform all the required operations from a Linux machine. The same can be applied to provision SQL Server running on Linux containers. This should also apply if you’re running in the cloud.

Always On Availability Groups using containers

The complete code can be found on my GitHub account.

As a SQL Server person, I usually need to work with full blown Availability Groups for my various test scenarios.
I need to have a reliable and consistent way to rebuild the whole setup, multiple times a day.

For this purpose, Docker containers are a perfect fit.
This approach serves multiple scenarios (T-SQL development, performance tuning, infrastructure changes, etc.)

Target

Using the process I’ll explain below, I will deploy:

  • 3 nodes running SQL Server 2019 Dev on top of Ubuntu 18.04
  • 1 Clusterless Availability Group (also known as Read-Scale Availability Group)

Note that our Clusterless AG is not a high availability or disaster recovery solution.
It only provides a mechanism to synchronize databases across multiple servers (containers).
Only manual failover without data loss and forced failover with data loss are possible when using Read-Scale availability groups.

For production-ready, true HA and DR, one should look into traditional availability groups running on top of a Windows Server Failover Cluster.
Another viable solution is to run SQL Server instance on Kubernetes in Azure Kubernetes Service (AKS), with persistent storage for high availability.

How

The workflow consists of the following steps:

  • prepare a custom docker image running Ubuntu 18.04 and SQL Server 2019
  • create a configuration file that will be used by docker-compose to spin up the 3 nodes (sketched below)
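A trimmed-down sketch of that docker-compose.yml (the image tag and published ports are made up for illustration; the env files are the ones described under Credentials below, and the full version is on GitHub):

    version: "3.7"
    services:
      sql01:
        image: sqlag:2019          # custom Ubuntu 18.04 + SQL Server 2019 image
        hostname: sql01
        ports:
          - "14331:1433"
        env_file:
          - env/sqlserver.env
          - env/sapassword.env
      # sql02 and sql03 follow exactly the same pattern;
      # only the service name, hostname and published port change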


The actual build of the Availability Group will be performed by the entrypoint.sh script that will run on all the containers based on the image we just created.

The entrypoint.sh file is used to configure the container.
We just need to add a few .sql scripts that will get executed using the sqlcmd utility.
In this case it is the ag.sql file that contains the commands to create the logins, certificates, endpoints and finally the Availability Group.

Remember, we’re using a Clusterless Availability Group, so the SQL Server service on Linux uses certificates to authenticate communication between the mirroring endpoints.
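Heavily trimmed, the certificate/endpoint/AG part of ag.sql on the primary looks something like this (names and passwords are placeholders; CLUSTER_TYPE = NONE is what makes it clusterless):

    -- Certificate used to authenticate the mirroring endpoints
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword>';
    CREATE CERTIFICATE dbm_certificate WITH SUBJECT = 'dbm';

    CREATE ENDPOINT [Hadr_endpoint]
        AS TCP (LISTENER_PORT = 5022)
        FOR DATABASE_MIRRORING (AUTHENTICATION = CERTIFICATE dbm_certificate, ROLE = ALL);
    ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED;

    -- Clusterless (read-scale) Availability Group; sql02 and sql03 are added the same way
    CREATE AVAILABILITY GROUP [AG1]
        WITH (CLUSTER_TYPE = NONE)
        FOR REPLICA ON
            N'sql01' WITH (
                ENDPOINT_URL = N'tcp://sql01:5022',
                AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                FAILOVER_MODE = MANUAL,
                SEEDING_MODE = AUTOMATIC,
                SECONDARY_ROLE (ALLOW_CONNECTIONS = ALL));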

In a matter of minutes I have a fully working AG.

Credentials

During the build of the Docker image, and later to create the AG, I need to specify various variables and credentials.

For production environments the recommended approach to manage secrets is to use a vault.

In my case, I’m storing the various variables and credentials in plain-text files in the env folder.
Docker will parse those files and they will be available as environment variables.

  • sapassword.env – this contains the SA password and it’s needed when the custom image is built.

  • sqlserver.env – various variables are set here and are needed when the custom image is built.

  • miscpassword.env – will be needed to create the login and certificate needed by the Availability Group. This file is actually added to the container and it will be deleted after the Availability Group is created.
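For illustration, these are plain KEY=value files; the variable name below is made up, the real ones are in the repository:

    # env/sapassword.env
    SA_PASSWORD=<YourStrongSAPassword>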

The advantage of this approach is that I have only one place where I store all these variables and credentials, but as I mentioned earlier, it’s not a proper solution from a security standpoint.

A few alternative approaches would be:
– use a tool to manage secrets, like Vault
– use multi-stage builds
– use BuildKit

Conclusion

From a testing and development point of view, this solution works very well for me as I can rebuild the environment in a fast and consistent way.

It’s not by any means the best option out there, but it’s really simple to use and reproduce.

See it in action


Get SMO version on your server

A quick way to find out what SMO versions are installed:
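A sketch of one approach, assuming the SMO assemblies were installed into the GAC (which is the case when they come from SQL Server setup or the Shared Management Objects package):

    # Each subfolder name starts with the SMO assembly version
    Get-ChildItem "$env:windir\assembly\GAC_MSIL\Microsoft.SqlServer.Smo" |
        Select-Object -ExpandProperty Name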

DTExec: The package execution returned DTSER_FAILURE (1)

What to do when a SQL backup job created with SQL Server Maintenance Plans fails with this error?

Investigate! There was no useful information in the logs.
Earlier that day I had dropped a few unneeded databases, so maybe the Maintenance Plan was still referencing them. Everything looked good when the Maintenance Plan was opened in SSMS (it was configured to back up “SELECTED” databases) and I could not find any reference to the deleted databases. Under the hood, however, the Maintenance Plan was still trying to back up those deleted databases. Once I clicked OK and then hit Save, the Maintenance Plan was refreshed and the job worked just fine.

So always try to find the root cause.

T-SQL to generate backup and restore commands

Hi guys,

Since my day-to-day job is being a DBA, I figured I would share stuff that others might find useful.
To start, I will share a small script which will help you generate the T-SQL commands needed to script the backup/restore process of your SQL Server databases.
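A trimmed-down version of the idea (the real script also generates the matching RESTORE commands; the backup path is a placeholder):

    -- Generate one BACKUP DATABASE command per user database
    SELECT 'BACKUP DATABASE ' + QUOTENAME(name) +
           ' TO DISK = N''D:\Backup\' + name + '.bak'' WITH INIT, STATS = 10;'
    FROM sys.databases
    WHERE database_id > 4;   -- skip the system databases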

This was tested on Microsoft SQL Server 2012 (SP1) – Express Edition (64-bit). Feel free to send your comments.