
SQL Server Management Studio – automatically get the latest version

When working with SQL Server, I prefer to have a jump host with my tools already installed.

You may also have them on your main laptop/workstation, but I would also recommend having a VM in the cloud or in your datacenter.

Everyone has a different list of tools they use, but most of the time you’ll find these apps on those lists:

  • command line tools to interact with various cloud providers
  • putty
  • winscp
  • chocolatey
  • SQL Server Management Studio
  • Azure Data Studio
  • git
  • vscode
  • vim
  • various PowerShell modules

In this post I will focus on SQL Server Management Studio.

Since version 18.7, Azure Data Studio is also included in the SSMS installer.

By default Azure Data Studio is installed along with SSMS, but this can be excluded by using:
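
If I recall the installer switches correctly, the property for this is DoNotInstallAzureDataStudio, so an unattended install that skips Azure Data Studio looks like this:

```powershell
# Unattended SSMS install, skipping the bundled Azure Data Studio
.\SSMS-Setup-ENU.exe /Install /Passive DoNotInstallAzureDataStudio=1
```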

Eventually, this will not be a valid option, as SSMS will require dependencies provided by Azure Data Studio.

This is a good time to update the script I’m using to automatically install the latest version of SSMS.

I could not find an official manifest with the SSMS versions that I could query, so the script is using the GitHub page where Microsoft publishes the SSMS release notes.

If SSMS is already installed, the script will compare the existing version with the latest one available and it will notify the user.
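
Here is a trimmed-down sketch of the idea. The release-notes location on GitHub, the version regex, and the registry path where the installed version lives are all assumptions you may need to adjust, and the real script does a proper version comparison rather than just printing both values:

```powershell
# Raw markdown of the SSMS release notes in the MicrosoftDocs/sql-docs repo (assumption)
$url = 'https://raw.githubusercontent.com/MicrosoftDocs/sql-docs/live/docs/ssms/release-notes-ssms.md'
$md  = (Invoke-WebRequest -Uri $url -UseBasicParsing).Content

# Assume the first "18.x[.y]"-looking token is the latest release
$latest = [regex]::Match($md, '18\.\d+(\.\d+)?').Value

# Installed version; the registry location varies by release (assumption for SSMS 18)
$key = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server Management Studio\18'
$installed = (Get-ItemProperty $key -ErrorAction SilentlyContinue).Version

if ($installed) {
    # Notify the user; note the registry stores a build number, not the marketing version
    Write-Output "Installed SSMS build: $installed / latest release: $latest"
}
else {
    # aka.ms/ssmsfullsetup always redirects to the newest installer
    Invoke-WebRequest -Uri 'https://aka.ms/ssmsfullsetup' -OutFile "$env:TEMP\SSMS-Setup-ENU.exe"
    Start-Process "$env:TEMP\SSMS-Setup-ENU.exe" -ArgumentList '/Install', '/Passive' -Wait
}
```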

You can also reference this script directly from my GitHub account.

Feel free to grab it and share it if you find it useful.

Configure Active Directory authentication with SQL Server on Linux

Microsoft just released adutil in public preview, a CLI-based utility developed to ease the AD authentication configuration for both SQL Server on Linux and SQL Server Linux containers.

We no longer need to switch to a Windows machine to create the AD user for SQL Server and to set the SPNs.

In the following steps I will install a SQL Server instance on Linux and configure AD authentication using only the Linux CLI, including the adutil tool.

We will need 2 VMs:
  • tf-wincore01.lab.local – Domain Controller (DC) running on Windows Server 2019 Core (will
    host the lab.local domain)
  • tf-ubuntu01.lab.local – Ubuntu 18.04 LTS – SQL Server Instance on port 20001 will be
    installed here

I will be creating a brand new environment for this test and I am using Terraform to provision the VMs.

Prepare the Domain Controller

Once the VMs are created we need to configure the domain controller:
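
As a rough, lab-only sketch (run on tf-wincore01), promoting the server to a DC for a new lab.local forest boils down to:

```powershell
# Install the AD DS role and management tools
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Promote the server to a DC hosting a brand new lab.local forest
Install-ADDSForest -DomainName 'lab.local' `
    -DomainNetbiosName 'LAB' `
    -SafeModeAdministratorPassword (Read-Host -AsSecureString 'DSRM password') `
    -InstallDns -Force
```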

Let’s set up our zones:
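
Something along these lines; the 10.0.0.0/24 subnet and the host addresses are assumptions from my lab layout:

```powershell
# Reverse lookup zone for the lab subnet
Add-DnsServerPrimaryZone -NetworkId '10.0.0.0/24' -ReplicationScope 'Forest'

# A record for the Linux host, in case dynamic registration doesn't cover it
Add-DnsServerResourceRecordA -ZoneName 'lab.local' -Name 'tf-ubuntu01' -IPv4Address '10.0.0.11'
```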

Note that this AD configuration is just the bare minimum for our lab and it’s not fit for a Production environment!

Join the Linux host to the domain

It’s now time to join the Linux box to our new domain. The yaml file used by netplan needs to point to the domain controller for DNS. In my case, the file looks like this:
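
The interface name, file name and addresses below are assumptions from my lab; the part that matters is the nameservers entry pointing at the DC and the search domain:

```yaml
# /etc/netplan/01-netcfg.yaml
network:
  version: 2
  ethernets:
    eth0:
      addresses: [10.0.0.11/24]
      gateway4: 10.0.0.1
      nameservers:
        addresses: [10.0.0.10]   # the domain controller
        search: [lab.local]
```

Confirm the configuration and apply it:

```bash
sudo netplan try
sudo netplan apply
```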

The /etc/resolv.conf file should also point to the domain:
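
On Ubuntu 18.04 this file is managed by systemd-resolved, so it typically shows the local stub resolver plus the search domain:

```
# /etc/resolv.conf (managed by systemd-resolved)
nameserver 127.0.0.53
search lab.local
```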

Next, we install the packages that will allow us to join the machine to the domain:
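
This is the usual package set for an sssd/realmd-based join on Ubuntu 18.04:

```bash
sudo apt-get update
sudo apt-get install -y realmd sssd sssd-tools adcli krb5-user packagekit
```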

Let’s also set the hostname:
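
The FQDN matters here, since the domain join and the SPNs will be based on it:

```bash
sudo hostnamectl set-hostname tf-ubuntu01.lab.local
```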

We are now ready to join the machine to the domain:
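
Using realmd, with the domain Administrator account (any account allowed to join machines to the domain works):

```bash
sudo realm join lab.local --user Administrator --verbose
```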

This command:

  • creates a new computer account in AD
  • creates the /etc/krb5.keytab host keytab file
  • configures the domain in /etc/sssd/sssd.conf
  • updates /etc/krb5.conf

Let’s verify that we can now gather information about a user from the domain, and that we can acquire a Kerberos ticket as that user. The following example uses id, kinit, and klist commands for this.
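
For example (the user name is a lab value):

```bash
# Resolve a domain account through sssd
id Administrator@lab.local

# Acquire a Kerberos ticket as that user, then list the ticket cache
kinit Administrator@LAB.LOCAL
klist
```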

Install adutil

We now need to install adutil so we can interact with the Domain Controller directly from the Linux box:
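
At the time of writing, adutil ships from Microsoft's package repository; the repo registration below is the standard pattern, but double-check the paths against the official docs:

```bash
# Register Microsoft's key and Ubuntu 18.04 repository
curl https://packages.microsoft.com/keys/microsoft.asc | sudo apt-key add -
curl https://packages.microsoft.com/config/ubuntu/18.04/prod.list | sudo tee /etc/apt/sources.list.d/msprod.list

# Install adutil; the package requires accepting the EULA
sudo apt-get update
sudo ACCEPT_EULA=Y apt-get install -y adutil
```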

Create a domain user using adutil

Let’s try to create a regular AD user:
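
adutil authenticates with your Kerberos ticket, so kinit as a privileged account first; the user name, distinguished name and password below are lab values:

```bash
kinit Administrator@LAB.LOCAL

# Create a regular domain user
adutil user create --name labuser \
    --distname 'CN=labuser,CN=Users,DC=lab,DC=local' \
    --password 'MyStr0ngPassword!'
```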

At this point adutil cannot list users, but we can check whether an account exists in AD:
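
One quick way is to resolve the account through sssd:

```bash
# sssd should be able to resolve the account we just created
id labuser@lab.local
```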

Install SQL Server instance on the Linux host

From this point on, I can proceed with installing the SQL Server instance on the Linux host:
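
These are the standard repo-based install steps from Microsoft's documentation, plus moving the instance to port 20001 for this lab:

```bash
# Register the SQL Server 2019 repository and install the engine
sudo add-apt-repository "$(wget -qO- https://packages.microsoft.com/config/ubuntu/18.04/mssql-server-2019.list)"
sudo apt-get update
sudo apt-get install -y mssql-server

# Choose the edition and set the SA password
sudo /opt/mssql/bin/mssql-conf setup

# This lab runs the instance on port 20001
sudo /opt/mssql/bin/mssql-conf set network.tcpport 20001
sudo systemctl restart mssql-server
```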

Create an AD user for SQL Server and set the ServicePrincipalName (SPN) using adutil

The SQL Server instance is up and running, so let’s now create an AD user for SQL Server and set the ServicePrincipalName (SPN) using the adutil tool:
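
A sketch of the sequence; the flag names follow the public-preview docs as I remember them, so verify them with adutil --help, and all names/passwords are lab values:

```bash
# AD account SQL Server will use for Kerberos
adutil user create --name sqlsvc \
    --distname 'CN=sqlsvc,CN=Users,DC=lab,DC=local' \
    --password 'MyStr0ngPassword!'

# Register the MSSQLSvc SPNs for this host and port against that account
adutil spn addauto -n sqlsvc -s MSSQLSvc -H tf-ubuntu01.lab.local -p 20001

# Generate the keytab SQL Server will authenticate with
adutil keytab createauto -k /var/opt/mssql/secrets/mssql.keytab \
    -p 20001 -H tf-ubuntu01.lab.local -s MSSQLSvc --password 'MyStr0ngPassword!'

# Point SQL Server at the keytab and the AD account, then restart
sudo /opt/mssql/bin/mssql-conf set network.kerberoskeytabfile /var/opt/mssql/secrets/mssql.keytab
sudo /opt/mssql/bin/mssql-conf set network.privilegedadaccount sqlsvc
sudo systemctl restart mssql-server
```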

Test the connections and the authentication scheme

Let’s create an AD-based SQL Server login:
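
Connecting once as sa to create the Windows login (labuser is the account created earlier):

```bash
sqlcmd -S tf-ubuntu01.lab.local,20001 -U sa \
    -Q "CREATE LOGIN [LAB\labuser] FROM WINDOWS;"
```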

Connecting as a domain user from the Linux box:
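
With a valid Kerberos ticket, sqlcmd's -E switch uses integrated authentication:

```bash
kinit labuser@LAB.LOCAL
sqlcmd -S tf-ubuntu01.lab.local,20001 -E -Q "SELECT SUSER_NAME();"
```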

Let’s verify the authentication scheme:
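
If everything is wired up correctly, the connection should report KERBEROS:

```bash
sqlcmd -S tf-ubuntu01.lab.local,20001 -E \
    -Q "SELECT auth_scheme FROM sys.dm_exec_connections WHERE session_id = @@SPID;"
```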

Conclusion

Our setup is now complete and we managed to perform all the required operations from a Linux machine. The same approach can be applied to provision SQL Server running on Linux containers, and it should also work if you’re running in the cloud.

Always On Availability Groups using containers

The complete code can be found on my GitHub account.

As a SQL Server person, I usually need to work with full blown Availability Groups for my various test scenarios.
I need to have a reliable and consistent way to rebuild the whole setup, multiple times a day.

For this purpose, docker containers are a perfect fit.
This approach will serve multiple scenarios (T-SQL development, performance tuning, infrastructure changes, etc.)

Target

Using the process I’ll explain below, I will deploy:

  • 3 nodes running SQL Server 2019 Dev on top of Ubuntu 18.04
  • 1 Clusterless Availability Group (also known as Read-Scale Availability Group)

Note that our Clusterless AG is not a high availability or disaster recovery solution.
It only provides a mechanism to synchronize databases across multiple servers (containers).
Only manual failover without data loss and forced failover with data loss are possible when using Read-Scale availability groups.

For production-ready, true HA and DR, one should look into traditional availability groups running on top of a Windows failover cluster.
Another viable solution is to run a SQL Server instance on Kubernetes in Azure Kubernetes Service (AKS), with persistent storage for high availability.

How

The workflow consists of the following steps:

  • prepare a custom docker image running Ubuntu 18.04 and SQL Server 2019 (a Dockerfile sketch follows below)
  • create a configuration file that will be used by docker-compose to spin up the 3 nodes (a docker-compose sketch follows below)
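
A minimal sketch of what the image could look like; the official mcr.microsoft.com/mssql/server:2019-latest image is already Ubuntu-based, and the file names match the entrypoint.sh and ag.sql described below, but the exact layout is an assumption:

```dockerfile
FROM mcr.microsoft.com/mssql/server:2019-latest

# Add the bootstrap script and the T-SQL that builds the AG
USER root
COPY entrypoint.sh ag.sql /usr/src/ag/
RUN chmod +x /usr/src/ag/entrypoint.sh

# The 2019 images run as the non-root mssql user
USER mssql
ENTRYPOINT ["/usr/src/ag/entrypoint.sh"]
```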

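And a sketch of the docker-compose file; the service names, ports and env wiring are assumptions that mirror the Credentials section below:

```yaml
version: "3.7"

# Shared definition reused by all three replicas
x-sqlnode: &sqlnode
  build: .
  env_file:
    - env/sapassword.env
    - env/sqlserver.env

services:
  sqlNode1:
    <<: *sqlnode
    hostname: sqlNode1
    ports: ["2501:1433"]
  sqlNode2:
    <<: *sqlnode
    hostname: sqlNode2
    ports: ["2502:1433"]
  sqlNode3:
    <<: *sqlnode
    hostname: sqlNode3
    ports: ["2503:1433"]
```

From here, docker-compose up -d --build brings up the three nodes.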

The actual build of the Availability Group will be performed by the entrypoint.sh script that will run on all the containers based on the image we just created.

The entrypoint.sh file is used to configure the container.
We just need to add a few .sql scripts that will get executed using the sqlcmd utility.
In this case it’s the ag.sql file that contains the commands to create the logins, certificates, endpoints and finally the Availability Group.

Remember, we’re using a Clusterless Availability Group, so the SQL Server service on Linux uses certificates to authenticate communication between the mirroring endpoints.
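
To give a flavor of ag.sql, the certificate and endpoint part typically looks like this on the primary (the $(MISC_PASSWORD) sqlcmd variable stands in for the value from miscpassword.env, and the names are illustrative):

```sql
-- Master key and certificate used to authenticate the mirroring endpoint
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '$(MISC_PASSWORD)';
CREATE CERTIFICATE dbm_certificate WITH SUBJECT = 'dbm';

-- Back up the certificate so the other replicas can import it
BACKUP CERTIFICATE dbm_certificate
    TO FILE = '/var/opt/mssql/data/dbm_certificate.cer'
    WITH PRIVATE KEY (
        FILE = '/var/opt/mssql/data/dbm_certificate.pvk',
        ENCRYPTION BY PASSWORD = '$(MISC_PASSWORD)');

-- Mirroring endpoint authenticated by the certificate
CREATE ENDPOINT [Hadr_endpoint]
    AS TCP (LISTENER_PORT = 5022)
    FOR DATABASE_MIRRORING (
        ROLE = ALL,
        AUTHENTICATION = CERTIFICATE dbm_certificate,
        ENCRYPTION = REQUIRED ALGORITHM AES);
ALTER ENDPOINT [Hadr_endpoint] STATE = STARTED;
```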

In a matter of minutes I have a fully working AG.

Credentials

During the build of the docker image and the creation of the AG, I need to specify various variables and credentials.

For production environments the recommended approach to manage secrets is to use a vault.

In my case, I’m storing various variables and credentials in plain text files in the env folder.
Docker will parse those files and they will be available as environment variables.

  • sapassword.env – this contains the SA password and it’s needed when the custom image is built.

  • sqlserver.env – various variables are set here and are needed when the custom image is built.

  • miscpassword.env – contains the password used to create the login and certificate required by the Availability Group. This file is actually added to the container and will be deleted after the Availability Group is created.

The advantage of this approach is that I have only one place where I store all these variables and credentials, but as I mentioned earlier, it’s not a proper solution from a security standpoint.

A few alternative approaches would be:
  • use a tool to manage secrets, like Vault
  • use multi-stage builds
  • use BuildKit

Conclusion

From a testing and development point of view, this solution works very well for me as I can rebuild the environment in a fast and consistent way.

It’s not by any means the best option out there, but it’s really simple to use and reproduce.

See it in action

Click on the image for the full gif