
The end game is to create a database when building a docker container, and persist the data so that if the container is removed, I can start the container again and have my database with my persisted data.

I'm using microsoft/mssql-server-windows-developer with Windows containers and docker-compose.

The relevant part of my docker-compose file is (other services removed):

version: "3.9"

services:
    db:
        build: 
            context: .
            dockerfile: Database/Dockerfile
        volumes:
            - C:\temp:C:\temp
        ports:
            - 1433:1433

Basically, the db Dockerfile runs a powershell script (very similar to https://github.com/microsoft/mssql-docker/blob/master/windows/mssql-server-windows-developer/start.ps1). My powershell script starts MSSQLSERVER, then runs SQL files to create a database, create tables, create procs, etc.

All of this works. docker-compose build then docker-compose up will create and run my database on localhost and everything is great. But, if I manipulate the data at all and remove the database then call docker-compose up again, my data is gone.

Everything I've read about persisting data includes using attach_db. I would like to do some sort of if exists, attach_db else create database.
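
Roughly, I'm picturing something like this in my startup script (just a sketch; the database name and MDF path are placeholders for wherever the data file ends up living):

# sketch: attach the existing data file if one is there, otherwise run the creation scripts
$mdf = "C:\temp\db1.mdf"   # placeholder path for wherever the persisted MDF lives
if (Test-Path $mdf) {
    Invoke-Sqlcmd -Query "CREATE DATABASE [db1] ON (FILENAME = N'$mdf') FOR ATTACH;"
}
else {
    Invoke-Sqlcmd -InputFile ".\db1\creation.sql"
}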

The question (finally)... Why don't I have an mdf file after I create the database? Am I supposed to? I've messed with different ways to add volumes but my volume is always empty. It doesn't appear I'm creating an mdf file to add to my volume.

EDIT - Adding the Dockerfile and the PowerShell script it calls

Dockerfile:

FROM microsoft/mssql-server-windows-developer

ENV sa_password="nannynannybooboo" \
    ACCEPT_EULA="Y" \
    db1="db1" \
    db2="db2"

EXPOSE 1433

RUN mkdir -p ./db1
RUN mkdir -p ./db2

COPY /Database/startsql.ps1 .
COPY /Database/db1/ ./db1
COPY /Database/db2/ ./db2

HEALTHCHECK CMD [ "sqlcmd", "-Q", "select 2" ]

RUN .\startsql -sa_password $env:sa_password -ACCEPT_EULA $env:ACCEPT_EULA -db_name $env:db2 -Verbose

RUN .\startsql -sa_password $env:sa_password -ACCEPT_EULA $env:ACCEPT_EULA -db_name $env:db1 -Verbose

startsql.ps1

# based off https://github.com/microsoft/mssql-docker/blob/master/windows/mssql-server-windows-developer/start.ps1

param(
[Parameter(Mandatory=$false)]
[string]$sa_password,

[Parameter(Mandatory=$false)]
[string]$ACCEPT_EULA,

[Parameter(Mandatory=$true)]
[string]$db_name
)

if($ACCEPT_EULA -ne "Y" -And $ACCEPT_EULA -ne "y")
{
    Write-Verbose "ERROR: You must accept the End User License Agreement before this container can start."
    Write-Verbose "Set the environment variable ACCEPT_EULA to 'Y' if you accept the agreement."

    exit 1
}

# start the service
Write-Verbose "Starting SQL Server"
start-service MSSQLSERVER

if($sa_password -eq "_") {
    # fall back to a Docker secret file when the placeholder password is passed
    $secretPath = $env:sa_password_path
    if (Test-Path $secretPath) {
        $sa_password = Get-Content -Raw $secretPath
    }
    else {
        Write-Verbose "WARN: Using default SA password, secret file not found at: $secretPath"
    }
}

Write-Verbose $sa_password
if($sa_password -ne "_")
{
    Write-Verbose "Changing SA login credentials"
    $sqlcmd = "ALTER LOGIN sa with password=" +"'" + $sa_password + "'" + ";ALTER LOGIN sa ENABLE;"
    & sqlcmd -Q $sqlcmd
}

Write-Verbose "Started SQL Server"
Write-Verbose "Starting set up scripts..."

Write-Verbose $db_name
# check whether the database already exists so the set up scripts only run once
$exists = @(Invoke-Sqlcmd -Query "SELECT name FROM sys.databases" | ForEach-Object { $_.name }) -contains $db_name

$creation = ".\"+$db_name+"\creation.sql"
$creation_rpt = ".\"+$db_name+"\creation.rpt"

$userdefined = ".\"+$db_name+"\userdefined.sql"
$userdefined_rpt = ".\"+$db_name+"\userdefined.rpt"

$presetup = ".\"+$db_name+"\pre.setup.sql"
$presetup_rpt = ".\"+$db_name+"\presetup.rpt"

$tables = ".\"+$db_name+"\tables.sql"
$tables_rpt = ".\"+$db_name+"\tables.rpt"

$procs = ".\"+$db_name+"\procs.sql"
$procs_rpt = ".\"+$db_name+"\procs.rpt"

$triggers = ".\"+$db_name+"\triggers.sql"
$triggers_rpt = ".\"+$db_name+"\triggers.rpt"

Write-Verbose $creation

Write-Verbose $exists
if ($exists -ne $true){
  Write-Verbose "Starting creation script..."
  Invoke-Sqlcmd -InputFile $creation | Out-File -FilePath $creation_rpt

  Write-Verbose "Starting user defined script..."
  Invoke-Sqlcmd -InputFile $userdefined | Out-File -FilePath $userdefined_rpt

  Write-Verbose "Starting pre.setup script..."
  Invoke-Sqlcmd -InputFile $presetup | Out-File -FilePath $presetup_rpt

  Write-Verbose "Starting tables script..."
  Invoke-Sqlcmd -InputFile $tables | Out-File -FilePath $tables_rpt

  Write-Verbose "Starting triggers script..."
  Invoke-Sqlcmd -InputFile $triggers | Out-File -FilePath $triggers_rpt

  Write-Verbose "Starting procs script..."
  Invoke-Sqlcmd -InputFile $procs | Out-File -FilePath $procs_rpt
}

Get-EventLog -LogName Application -Source "MSSQL*" -After (Get-Date).AddSeconds(-2) | Select-Object TimeGenerated, EntryType, Message

I can't share the sql files startsql calls, but 99% of the sql is SSMS-generated scripts from an existing DB that I am replicating. The 1% that isn't generated by SSMS is a command to link the two databases being created.

3 Comments
  • I think you need volumes but I've always failed at finding out how to do this. Do you install odbc drivers as well? Commented Aug 4, 2021 at 20:50
  • @Umar.H, I think the odbc drivers depend on the docker image you're using. I believe all of the drivers exist in microsoft/mssql-server-windows-developer as they are used to log the container in to SQL Server (I think). Commented Aug 5, 2021 at 14:47
  • This is incredibly helpful now that I am starting to learn about docker compose, thanks a lot for sharing! Commented Dec 28, 2023 at 19:16

1 Answer


Volumes

You're spot on: volumes can (and should!) be used to persist your data.

Microsoft themselves have docs on how to persist data from containerised SQL servers, including the required commands: https://learn.microsoft.com/en-us/sql/linux/sql-server-linux-docker-container-configure?view=sql-server-ver15&pivots=cs1-bash#persist
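
For reference, the pattern in those docs boils down to mounting a volume over /var/opt/mssql, the data directory inside the Linux container, roughly:

docker run -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=<YourStrong!Passw0rd>" \
  -p 1433:1433 \
  -v sqlvolume:/var/opt/mssql \
  -d mcr.microsoft.com/mssql/server:2019-latest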

However, this is for Linux, not Windows, so the paths will be different (very likely the defaults used by a non-containerised installation).

To find that location, you could use a query like the one in the link below, or hop into the container while it is running (using docker exec) and navigate around:

https://www.netwrix.com/how_to_view_sql_server_database_file_location.html
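
If you'd rather query it than click around, sys.master_files lists the physical path of every data and log file the instance knows about; for example, from the host (replace <container_name> with the name docker ps gives you):

docker exec -it <container_name> sqlcmd -Q "SELECT name, physical_name FROM sys.master_files"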

When using volumes with docker-compose, the spec can be found here and is really simple to follow:

https://docs.docker.com/storage/volumes/#use-a-volume-with-docker-compose

(Edit) Proof of Concept

I played around with the Windows container and managed to get the volumes working fine.

I ditched your Dockerfile, and just used the base container image, see below.

version: "3.9"

services:
  db:
    image: microsoft/mssql-server-windows-developer
    volumes:
      - .\db:C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA
    ports:
      - 1433:1433
    environment:
      SA_PASSWORD: "Abc12345678"
      ACCEPT_EULA: "Y"

This works for me because I specified the MDF file location upon database creation:

/*
 https://learn.microsoft.com/en-us/sql/relational-databases/databases/create-a-database?view=sql-server-ver15
 */
USE master ;
GO  
CREATE DATABASE Sales  
ON   
( NAME = Sales_dat,  
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\saledat.mdf',  
    SIZE = 10,  
    MAXSIZE = 50,  
    FILEGROWTH = 5 )  
LOG ON  
( NAME = Sales_log,  
    FILENAME = 'C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\salelog.ldf',  
    SIZE = 5MB,  
    MAXSIZE = 25MB,  
    FILEGROWTH = 5MB ) ;  
GO


EXEC sp_databases ;

You can see that the file path in the container there correlates to the volume path in the docker-compose file. When I stopped the container, I could successfully see the MDF file in the .\db folder of my project. If you managed to locate the file path from running your query, you can simply add that to the volume spec in the same fashion as above. When restarting the container, everything loaded fine, and the SP returned a valid list of all DBs.
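
For your setup, that would mean something roughly like this in your compose file (a sketch only; the MSSQL13.MSSQLSERVER path is the default for this image, so swap in whatever location your query actually reports):

version: "3.9"

services:
  db:
    build:
      context: .
      dockerfile: Database/Dockerfile
    volumes:
      - .\db:C:\Program Files\Microsoft SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA
    ports:
      - 1433:1433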

Windows Containers

I knew they were regarded as a bad idea, but me oh my, did I not expect the base image to be 15GB. This is ridiculously large and, depending on your use case, will present issues with the development and deployment process, simply in terms of the time required to download the image. If you can use Linux containers for your purposes, I highly recommend it, as they are production ready, small, lightweight, and better supported. They can even be run as the Developer edition, and the Microsoft docs clearly state how to persist data from these containers.

Linux: https://hub.docker.com/_/microsoft-mssql-server

Windows: https://hub.docker.com/r/microsoft/mssql-server-windows-developer/

Ex:

# Using Windows containers in Powershell
PS> docker image ls
REPOSITORY                                 TAG       IMAGE ID       CREATED          SIZE
microsoft/mssql-server-windows-developer   latest    19873f41b375   3 years ago      15.1GB

# Using Linux containers in WSL
$ docker image ls
REPOSITORY                        TAG           IMAGE ID       CREATED        SIZE
mcr.microsoft.com/mssql/server    2019-latest   56beb1db7406   10 days ago    1.54GB

11 Comments

The query to find mdf files in the netwrix link helped a lot. The path for my MDF does not exist on the host or inside the container when navigating via docker exec. This leads me to believe the MDF file is created in the docker-compose build container (which is a different container), so it's gone forever?
I just ran docker exec on the build container during docker-compose build. Kind of confusing, because most of the time that container will stay running after the build. This time, the container removed itself. But, while the container was running, I navigated to where the MDF file should be and it wasn't there either. Stumped!
Ultimately, how does the MDF file work? If I attach_db an MDF file, then manipulate the data, the next time I run the container, and attach_db, I'll be attaching the modified MDF file from the last time the container was running when I manipulated the data?
I'm glad it helped! There may be a wee gap in your understanding of the docker build process there currently. To elaborate, when building you create an image. That image can then be docker run to create a container. The key difference is that when building, there will not be an MSSQL server initialised (simply installed, ready to be started when a container is run from the image). It's not possible to call docker exec on an image while it is building, so I'm not too sure what you've done! Might I recommend adding the commands you've run, and your compose file, to your post?
Ha! Try it though, it definitely worked. You have to have a docker-compose/Dockerfile that doesn't exit, so it stays active long enough to docker exec into it. I used docker exec -t -i sharp_mcnulty powershell (you can tell because it's the docker generated container name, ha). Maybe that's only possible on Windows containers? I've got a better understanding of a few things. I'll tinker then update my original post with more commands/file contents. I think the solution is create database on <mdf> for attach to set where the MDF lives, pointing to the volume.
