HIMEM.SYS

Dockerizing a .NET Framework application

2017-02-12 09:30:00 +0000 ·

Introduction

Docker is now available natively on Microsoft Windows. There are ready-made images for .NET Core applications and respective tooling for Visual Studio.

However, there is not that much information available for .NET full framework applications, that is, those not based on .NET Core, but on .NET 4.6.2, etc.

Having a rather large .NET server-side application at hand, I was interested in whether and how it could be run inside a Docker container. This is especially valuable for .NET Framework applications, as the latter cannot be installed in multiple versions side by side, as is possible with .NET Core versions. For example, .NET 4.6.2 and its predecessors are in-place replacements for all .NET versions down to 4.0. New releases of the .NET Framework will most likely follow this modus operandi. And even though Microsoft goes out of their way to make sure that such versions are compatible, in a corporate IT environment you just don’t easily update a central component like the .NET Framework.

Having our application run inside a container would largely make those arguments moot, as we could include whatever .NET framework version we require, without having to deal with compatibility issues with other applications or the central IT department.

Note that classical virtualization technologies (VMware, Hyper-V, etc.) do provide comparable benefits, but at least in our corporate environment each virtual machine counts as a separate server cost- and infrastructure-wise, and is thus a totally different beast. This article will not dive further into the pros and cons of containers and how they differ from virtual machines. You can find plenty of information, discussions and documentation about these fundamentals on the internet.

Of course, adopting Docker in said environment is another issue altogether, but for the sake of the following we assume that doing so is a one-time endeavor and thus doable with enough arguments on the plus side.

The application

The application we want to run inside a Docker container is based on the full .NET Framework version 4.6.2. It consists of about 16 Windows Services that are hosted in three operating system processes (typically; you could change that via configuration if necessary). Also, functionality-wise, it makes no difference whether the services actually run as Windows Services or as simple console programs. Both are possible, and developers typically use the latter because of less hassle.

As for the separation into three OS processes: the first one contains infrastructure services, the second one contains the actual application services and the third one contains monitoring and administration services.

The services communicate with each other using the WCF stack; more specifically the net.tcp and http bindings.

Finally, the different services don’t need to be hosted on the same computer, but could (and in some setups are) also be distributed amongst different computers.

Multiple instances of the application can be installed on the same computer by means of so-called installation prefixes. That is, every normally global resource the application uses (directory names, performance counter names, URIs, etc.) is uniquely identified by a specific ID (e.g. DEV01).

While this already allows us to run multiple versions in parallel on the same computer, it does not allow us to use different versions that require different .NET Framework versions (or other global services, for that matter). Hence the attempt to use lightweight virtualization as described in the previous section.

Finally, the application imports data from a multitude of sources including files and online communication with other systems. Results are also exported by the means of files.

All data is stored in an SQL Server database (in production on a separate cluster instance) and a second database contains “volatile” monitoring data (e.g. log records for easier searching).

The application is fully configurable in the sense that the actual location of the directories for the import and export files (which could also be network shares) and the location of said database instances are not fixed, and no assumptions about them are made.

The Plan

Given the flexibility of the application and its configuration, quite a range of options for containerizing come to mind.

For example, we could make it so that each application (and infrastructure) service is hosted in its own container. While this would be true to the spirit of containers and even microservices, for the sake of this article the main scope was to allow for a lightweight way to run multiple application versions with different .NET Framework dependencies on the same computer. Fully splitting the services into multiple containers doesn’t prevent this, of course, but introduces quite some additional orchestration and maintenance liability. Tools exist for this, of course, but we first want to gain experience with Docker as such and not introduce additional factors into the equation (like Docker Compose or Kubernetes). The good news is, though, that there is nothing preventing us from doing so later on. We’ll pick up on this thought at the end of this article.

So, to start our endeavor, we’ll go with the attempt to simply put the complete application (i.e. all services) into a single container. The databases will be assumed to be outside of the container, and we’ll need to configure the application inside the container in a way that accommodates for that.

The import, export and also log file directories will be inside the container for the first attempts. But they must be outside the container in later iterations for them to be properly accessible. For example, you most likely want to look at log files produced by the application inside the container, and also be able to provide the application with import files and grab export files.

Docker for Windows

Docker for Windows requires Windows 10 Pro (Build 10586 or later) or Windows Server 2016 and Microsoft Hyper-V must be enabled (and supported by the hardware/CPU).

After installing Docker, it is set to use Linux containers, which is not what we need. So we need to switch to Windows containers, which can be done via the “Settings” in the Docker task bar icon.
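For scripted setups, the same switch can also be done from the command line; the DockerCli.exe path below assumes a default Docker for Windows installation:

```shell
REM Switch the Docker daemon to Windows containers (same effect as
REM switching via the Docker task bar icon).
"C:\Program Files\Docker\Docker\DockerCli.exe" -SwitchDaemon

REM Afterwards, the server's OS/Arch should read windows/amd64.
docker version
```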

Note that Windows containers themselves come in two different container styles:

  • Windows Server Containers, which provide application isolation through process and namespace isolation. They share a kernel with the container host and all other containers running on the same computer. Thus, this is the style that is actually similar to Docker containers on Linux. However, this type is (or will be) only supported on Windows Server itself.

  • Hyper-V Containers, which basically use Hyper-V in the background to simulate the above style. They are supported on Windows 10 as well.

The whole business about Docker on Windows is currently still rather convoluted for the first-time user. For example, to support older Windows versions there is also a Docker Toolbox for Windows, which does not support Windows containers and thus will not be discussed further here.

Besides that, the online documentation is pretty good and you are advised to follow the getting-started chapter to gain first experience, if you haven’t done so already.

Building the application image

In this article we’ll not explore all the basics of how to build a Docker image, but only as much as helps the narrative of the immediate task. There is plenty of respective information on the internet.

In the following, we have to distinguish between commands being run inside the container and outside, on the container host. For simplicity, when a command is run on the container host, we indicate so by using a prompt of C:\sources\main (which is also the root of our application’s sources; most commands could of course be run from any directory of the host). Commands being run inside the container use the prompt C:\>.

Also, most issues and especially workarounds described below are, naturally, pretty application specific, but it is hoped that they still provide value for other applications that have a similar structure and thus similar requirements.

A first attempt

The first thing you normally do when creating a custom image is to select a base image that it builds upon (compare FROM directive in a Dockerfile).

.NET Core images can use the microsoft/nanoserver (Windows Server 2016 Nano Server, ~900 MB) image as a base, because the .NET Core runtime requires no more. That is not the case for the full .NET Framework, which requires more OS parts to start with, and thus the microsoft/windowsservercore (Windows Server 2016 Server Core, ~10 GB) image must be used. That is still a stripped-down OS compared to the full server, but clocks in at about 10 times the size of the Nano Server.
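Since the base image is several gigabytes, it can be worthwhile to pull it up front rather than implicitly during the first build:

```shell
REM Pull the Server Core base image once up front; at roughly 10 GB this
REM takes a while.
docker pull microsoft/windowsservercore

REM Check the downloaded image and its size.
docker images microsoft/windowsservercore
```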

Given that, a possible Dockerfile would look like this:

    FROM microsoft/windowsservercore

    COPY parts/services/build/Homer.Services-1.0.0.0.exe C:/

    EXPOSE 20001 20002

    RUN powershell -Command \
     C:\Homer.Services-1.0.0.0.exe -noia C:\Homer\@DEV01 DEV01 -nodb

Let’s digest this:

  • As said, we use microsoft/windowsservercore as the base image.
  • We copy our application’s installer executable into the image’s root directory.
  • We expose ports 20001 and 20002, which are the ports our application will use in the given DEV01 configuration (one for net.tcp and one for http, respectively).
  • We actually invoke the installer, passing it some options:
    • -noia - non-interactive mode.
    • C:\Homer\@DEV01 - the base installation directory.
    • DEV01 - the installation prefix to use; as said above, this controls various configuration settings of the application.
    • -nodb - don’t install application databases; as said, we want to talk to external/existing databases.

We save this Dockerfile to the root of our source tree (here C:\Sources\main). Additionally, since we don’t want to send all source tree files to the Docker daemon during build, we add a .dockerignore file in the same directory.

    # Ignore everything
    *
    # Explicitly include what we need
    !parts/services/build/Homer.Services-*.exe   

This massively speeds up the build process, as the whole source directory is around 3 GB, while the installer exe itself is only about 40 MB. This is actually something that you might not notice to be lacking until you do multiple build attempts one after another.

A different possibility would have been to simply have a post-build step in the application’s build that does the following:

  • Copy the installer exe into a separate directory (e.g. C:\Sources\main\docker-stage).
  • Copy the Dockerfile into the same directory.
  • Run the following docker build command from that directory.
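Such a post-build step could be sketched like this (directory and file names are the hypothetical ones from above):

```shell
REM Hypothetical post-build staging step: copy only what the image build
REM needs into a dedicated directory and build from there, which keeps the
REM build context small without needing a .dockerignore file.
xcopy /Y parts\services\build\Homer.Services-1.0.0.0.exe C:\Sources\main\docker-stage\
copy /Y Dockerfile C:\Sources\main\docker-stage\
cd /d C:\Sources\main\docker-stage
docker build -t hmr .
```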

The next step is to actually build the image using the docker build command.

    C:\Sources\main> docker build -t hmr .

    Sending build context to Docker daemon 36.61 MB
    Step 1/4 : FROM microsoft/windowsservercore
    ---> fa4b9d0c02d2
    Removing intermediate container e3b7e5eca9ad
    Step 2/4 : EXPOSE 20001 20002
    ---> Running in f523e7085e3a
    ---> bb3817b0f865
    Removing intermediate container f523e7085e3a
    Step 3/4 : COPY parts/services/build/Homer.Services-1.0.0.0.exe C:/
    ---> a0bad05caa8d
    Removing intermediate container 9ec6e076f3cd
    Step 4/4 : RUN powershell -Command       C:\Homer.Services-1.0.0.0.exe
          -noia C:\Homer\@DEV01             DEV01 -nodb
    ---> Running in f0879787f328

The installation runs (Step 4/4) for just a few seconds, then exits with the error message that the current user is not an administrator. This seems strange, since it should be. (The user is called “ContainerAdministrator” for a reason, one would assume.) The installer code that performs this check has proven to work reliably. Several different attempts, like adding the following to the Dockerfile:

    RUN net.exe user admin "password" /add
    RUN net.exe localgroup Administrators admin /add
    ENV HOMER_APP_USER=admin

yielded no difference: still the installer would complain about the user not being part of the Administrators group, i.e. not being an administrator. So most likely there is a bug in the relevant code that does the checks. To circumvent this issue and get going, adding the -nochecks command line option works for now.

The installation step (Step 4/4) now runs a bit further, until it fails to register the Windows Services. Again, a strange error, given that the actual registration of the service shows no error. But when the installer tries to modify the service’s command line in the next step, it fails with the error that the respective service does not exist?! After some tinkering around, it looks as if the user that runs the service is the issue. Using the NT AUTHORITY\System aka “LocalSystem” user to run the services, the service registration works.

To force the application to use this user instead of the configured one, we need to add the HOMER_FORCE_SVCACCOUNT environment variable to the Dockerfile, having it look like this:

    FROM microsoft/windowsservercore

    COPY parts/services/build/Homer.Services-1.0.0.0.exe C:/

    EXPOSE 20001 20002

    ENV HOMER_FORCE_SVCACCOUNT=LocalSystem

    RUN powershell -Command \
     C:\Homer.Services-1.0.0.0.exe -noia C:\Homer\@DEV01 DEV01 -nodb

Having done that, the installation step (Step 4/4) actually runs for some time, before it fails with an error message about an assembly that could not be loaded - not showing which one exactly. This calls for more in-depth troubleshooting, which we’ll see in the next chapter.

Troubleshooting failed docker builds

To find out which assembly is missing, we can do the following, which enables fusion logging during the image build process and should give us more information.

    FROM microsoft/windowsservercore

    COPY parts/services/build/Homer.Services-1.0.0.0.exe C:/

    EXPOSE 20001 20002

    ENV HOMER_FORCE_SVCACCOUNT=LocalSystem

    # DEBUG -- START
    RUN powershell -Command \
        mkdir c:\temp\fusion; \
        reg add HKLM\SOFTWARE\Microsoft\Fusion /v LogFailures /t REG_DWORD /d 1; \
        reg add HKLM\SOFTWARE\Microsoft\Fusion /v LogPath /t REG_SZ /d C:\temp\fusion
    # DEBUG -- END

    # Make sure the failing installer's exit code (!= 0) is ignored by manually
    # exiting with code 0.
    RUN powershell -Command \
        C:\Homer.Services-1.0.0.0.exe -noia C:\Homer\@DEV01 DEV01 -nochecks -nodb; exit 0

Note the exit 0 addition to the installer call. This is required because if a RUN command exits with a code not equal to 0, the respective layer will not be persisted in the image. That would not be helpful in our (troubleshooting) case, because then any fusion logs written during that process would not be persisted either.

Running docker build -t hmr . again naturally fails with the same error as previously, but now, we can docker run the image and look inside:

    c:\Sources\main> docker run --rm -i hmr cmd.exe
    Microsoft Windows [Version 10.0.14393]
    (c) 2016 Microsoft Corporation. All rights reserved.

    C:\>

As you can see, this opens a CMD session inside the container and we can use the command line tools at our disposal to navigate to the fusion logs.
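For example (the fusion log directory is the one configured in the Dockerfile above; the file name shown is just a placeholder, as fusion generates one HTM file per failed assembly load):

```shell
REM List all fusion log files written during the failed installation.
dir /s /b C:\temp\fusion

REM Display one of them (pick an actual file name from the listing above).
type "C:\temp\fusion\placeholder.htm"
```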

Without going into all the gory details here, because they are largely application specific, suffice it to say that the installation could not run because some SQL Server assemblies were not found (inside the container). This was to be expected for two reasons: (a) we don’t package them up in our installer because they are not redistributable and should be present on the target computer anyway, and (b) they cannot possibly be inside the container, because that is only based on Windows Server Core, which contains no SQL Server installation.

Note also that even though the specific issue here was very application specific (missing assemblies), the technique (running a partially complete container and inspecting it for details) is not, and can be applied to all sorts of build issues.

Creating a new base container

OK, to recap: the installation process needs SQL Server files inside the container to work. And most likely the running application will need them as well. So we need to make sure they are present.

One option would be to simply deliver them with our installer. But even if we ignore the distribution (license) issues for the sake of it being an internal application, that still isn’t a proper solution: the fusion log has told us which assemblies were missing, but that may only be the tip of the iceberg - those assemblies potentially having missing dependencies as well.

Another option would be to install SQL Server while building our image, as a separate RUN command. While technically viable, that has the issue of unnecessarily long build times, because there is really no need to install SQL Server every time we create a new image out of a new build of our application - SQL Server won’t have changed!

The best way to handle such scenarios is to pick a different base image to derive from. In other words, microsoft/windowsservercore was not the best choice to pick. We should rather pick an image that contains SQL Server as well.

Actually there is such an image, microsoft/mssql-server-windows (which clocks in at a whopping 14 GB), but we cannot use it because it contains SQL Server 2016 and our application needs SQL Server 2014. Also, since we don’t really want to use SQL Server inside the container, but only need the dependent assemblies, having a full SQL Server installation inside the container seems wasteful. So I chose the following route: install SQL Server 2014 Express in a Windows Server Core image and create a new base image for our application image from that.

Actually, doing that using a Dockerfile of its own, which could have looked something like this

    FROM microsoft/windowsservercore

    COPY distrib/sqlexpress.exe /
    WORKDIR /

    RUN C:\sqlexpress.exe /q /x:c:\setup
    RUN C:\setup\setup.exe \
            /Q \
            /ACTION=Install \
            /INSTANCENAME=SQLEXPRESS \
            /FEATURES=SQL \
            /UPDATEENABLED=0 \
            /SQLSVCACCOUNT="NT AUTHORITY\System" \
            /SQLSYSADMINACCOUNTS="BUILTIN\ADMINISTRATORS" \
            /TCPENABLED=1 \
            /NPENABLED=0 \
            /SECURITYMODE=SQL \
            /IACCEPTSQLSERVERLICENSETERMS="True" \
            /INDICATEPROGRESS="True" \
            /SAPWD="Start#001"

    RUN del /F /Q c:\sqlexpress.exe && rd /q /s c:\setup

turned out to be fragile, to say the least. It might be due to my lack of experience with Docker, the SQL Server Express installer, the pre-release state of Windows containers or a combination of all, but the above simply didn’t work reliably. The installer would sometimes exit with issues like “There is insufficient system memory in resource pool ‘internal’ to run this query” or such.

Manually creating the image worked, however. To do so, do the following:

    C:\Sources\main> docker run -i --name stage2014 -v e:\Install:C:\Install microsoft/windowsservercore cmd

That runs the Windows Server Core image and makes the container host’s e:\Install directory available as C:\Install inside the container. That directory contains the SqlExpress.exe installer binary; actually, I had already extracted the installer into a sub directory, so that part didn’t need to be done from inside the container (sqlexpress.exe /q /x:e:\install\setup). It then provides a shell inside the container to execute the further installation steps manually. It also assigns the name stage2014 to the container.

The first thing that needs to be done is to enable the Windows feature “NetFx3” (.NET Framework 3.5) in the container OS. This is necessary because SQL Server 2014 depends on it. However, that is not easily done, because Windows Server Core not only doesn’t enable it, but also has the respective payload removed (see the output of dism.exe /online /get-features and look for NetFx3). So what needs to be done is to extract the Windows Server 2016 ISO into the e:\Install folder as well (here e:\Install\ws16). Then run inside the container:

    C:\> dism.exe /online /enable-feature /all /featurename:NetFx3 /NoRestart /Source:c:\Install\ws16\sources\sxs
    Deployment Image Servicing and Management tool
    Version: 10.0.14393.0
 
    Image Version: 10.0.14393.0
 
    Enabling feature(s)
    [==========================100.0%==========================]
    The operation completed successfully.
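To double-check, the feature state can be queried again from inside the container:

```shell
REM The NetFx3 feature should now be reported as "Enabled".
dism.exe /online /get-features /format:table | findstr /i NetFx3
```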

Now to run the actual SQL Server Express installation:

    C:\> cd Install\setup
    C:\> setup.exe /Q ^
        /ACTION=Install ^
        /INSTANCENAME=SQLEXPRESS ^
        /FEATURES=SQL ^
        /UPDATEENABLED=0 ^
        /SQLSVCACCOUNT="NT AUTHORITY\System" ^
        /SQLSYSADMINACCOUNTS="BUILTIN\ADMINISTRATORS" ^
        /TCPENABLED=1 ^
        /NPENABLED=0 ^
        /SECURITYMODE=SQL ^
        /IACCEPTSQLSERVERLICENSETERMS="True" ^
        /INDICATEPROGRESS="True" ^
        /SAPWD="Start#001"

Then exit the container using the standard CMD exit command. The modified container “stage2014” can then be committed to an image:

    C:\sources\main> docker commit stage2014 servercore-sql2014exp

This will run for a couple of minutes, having to digest some GB of container data, but afterwards we have our image upon which to build the application image:

    C:\sources\main> docker image list
    REPOSITORY                       TAG                 IMAGE ID            CREATED             SIZE
    servercore-sql2014exp            latest              fa4b9d0c02d2        5 minutes ago       11.5 GB
    microsoft/windowsservercore      latest              4d83c32ad497        4 weeks ago         9.56 GB
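Once the image is committed, the stopped staging container itself is no longer needed and can be removed:

```shell
REM Remove the staging container; the committed servercore-sql2014exp
REM image is independent of it.
docker rm stage2014
```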

Using a new container base and further troubleshooting

With the new base image, our application Dockerfile now looks like this:

    FROM servercore-sql2014exp

    COPY parts/services/build/Homer.Services-1.0.0.0.exe C:/

    EXPOSE 20001 20002

    ENV HOMER_FORCE_SVCACCOUNT=LocalSystem       

    RUN powershell -Command C:\Homer.Services-1.0.0.0.exe -noia C:\Homer\@DEV01 DEV01 -nochecks -nodb

Running docker build -t hmr . with this Dockerfile actually succeeds - sometimes.

The application installer would sometimes run to completion, sometimes fail at some step, and sometimes simply appear to hang.

After some head scratching I figured that the issue might be related to the console interaction the installer does. Normally, the installer detects the presence of a console and, if so, uses a “fancy” progress indicator. If no console is present, i.e. the output is redirected, it will simply skip this and plainly write the output. However, when looking at the progress of the docker build command, it seemed as if there were issues with that. For example, the docker build command does seem to sport a console of some kind, but things like positioning the output on the previous line don’t work.

More out of intuition than scientific research, I thus forced the installer to not use any fancy console output at all (auto-detection would also have been possible). This is of course application specific, but maybe other applications have similar issues and workarounds as well. In our case the trick is to use the -v (verbose) command line option with the installer. This disables all status console output and issues messages that would normally go to the installer’s log instead, thereby simply emitting plain log messages. Since that is the case, we don’t need the installer’s log file, so we also use the -nolog option.

The Dockerfile than looks like this:

    FROM servercore-sql2014exp

    COPY parts/services/build/Homer.Services-1.0.0.0.exe C:/

    EXPOSE 20001 20002

    ENV HOMER_FORCE_SVCACCOUNT=LocalSystem       

    RUN powershell -Command C:\Homer.Services-1.0.0.0.exe -v -nolog -noia C:\Homer\@DEV01 DEV01 -nochecks -nodb

Docker builds now run reliably, without any apparent hiccups.

But then, life would be too easy. One thing that still failed sometimes, and sometimes not, was the provisioning of a “current link” inside the container.

As one of many steps, the application’s installer creates an NTFS junction, C:\Homer\@DEV01\APP-Current, that points to the versioned (here) C:\Homer\@DEV01\APP-1.0.0.0. This has proven good practice when installing new versions: one can always refer to APP-Current inside scripts, commands, etc., while still having the versioned directory in place. Also, falling back to a previously installed version was just a matter of re-creating the junction to point to a previous version. Of course, inside a container the whole point is kind of moot, because the container itself would only host a single version, and falling back to a previous one would simply mean using an (older) container.

Yet the feature as such should still just work, shouldn’t it? However, it just wouldn’t. For some builds it would create the following, which is correct:

       c:\> cd Homer\@DEV01
       c:\> dir
       02/01/2017  11:50 AM    <DIR>          APP-1.0.0.0
       02/01/2017  11:50 AM    <JUNCTION>     APP-Current [\??\C:\homer\@DEV01\APP-1.0.0.0]

While for other builds it would create the following, which is faulty:

       c:\> cd Homer\@DEV01
       c:\> dir
       02/01/2017  11:50 AM    <DIR>          APP-1.0.0.0
       02/01/2017  11:50 AM    <JUNCTION>     APP-Current []

Up to this point I have not found a particular reason for this behavior. Also, if the installer fails to create the current link, it should rather error out, but it doesn’t; it just leaves the “zombie” link, which simply points nowhere.

As a workaround, we simply reconfigure the installation inside the container after the installer has already done so. This is a lightweight process and doesn’t cost many resources. That step will also create the current link, should it not exist. Some tests show that this finally “reliably” creates the current link.

    FROM servercore-sql2014exp

    COPY parts/services/build/Homer.Services-1.0.0.0.exe C:/

    EXPOSE 20001 20002

    ENV HOMER_FORCE_SVCACCOUNT=LocalSystem       

    RUN powershell -Command C:\Homer.Services-1.0.0.0.exe -noia C:\Homer\@DEV01 DEV01 -nochecks -nodb

    RUN powershell -Command C:\Homer\@DEV01\APP-1.0.0.0\bin\appctl.exe ip change -currentLink DEV01

We finally have a “working” image. Though whether it actually works, i.e. running the image, is another story altogether, which we’ll pursue in the following.

Running the image

Now that we have our image hmr ready, we can run it:

  C:\Sources\main> docker run --rm -i hmr cmd.exe

While this works, we quickly see an issue: all sorts of commands fail with out-of-memory errors.

A Docker container, by default it seems, has about 1 GB of memory assigned. Running the docker stats command shows that our container instance is at that limit. Which comes as no real surprise, for basically two reasons:

  • Our application hosting processes, by default, use the .NET GC in server mode, that is, about 1.5 GB is reserved per core already (albeit not committed, but that seems to make no difference here), and we have three hosting processes.
  • We have installed our services with the default start type, which is Automatic, that is, services begin to start when the container starts. This also explains why we don’t get OOM errors right away, but only after some time when the container is already running: services continue to start up in the background until we’re out of memory.

To prevent this there are actually two things we can do:

  • Run our hosting processes with the workstation-mode GC instead (which should be sufficient for our testing purposes anyway).

    To do so, we change the last RUN instruction in the Dockerfile to this:

    RUN powershell -Command C:\Homer\@DEV01\APP-1.0.0.0\bin\appctl.exe ip change -gc:wks -currentLink DEV01
    
  • Run the container with more memory grant, by passing the -m option to the docker run command, e.g.:

    C:\Sources\main> docker run --rm -i -m 8GB hmr cmd.exe
    

Additionally, we could also make it so that the application services don’t start automatically, by passing the -ssm:Manual (service start mode manual) option to the installer in the Dockerfile.

Considering all this, the Dockerfile now looks like this:

    FROM servercore-sql2014exp

    COPY parts/services/build/Homer.Services-1.0.0.0.exe C:/

    EXPOSE 20001 20002

    ENV HOMER_FORCE_SVCACCOUNT=LocalSystem       

    RUN powershell -Command C:\Homer.Services-1.0.0.0.exe -noia \
        C:\Homer\@DEV01 DEV01 -nochecks -nodb

    RUN powershell -Command C:\Homer\@DEV01\APP-1.0.0.0\bin\appctl.exe \
        ip change -gc:wks -currentLink DEV01

Lo and behold! It looks like things finally work: we have the application running inside a container with Windows Server 2016 Server Core as the operating system. Actually, I’m not quite sure whether some of the stranger issues encountered above (creating the “current” link, registering the Windows Services with a custom user, etc.) should be attributed to the Windows Server version (2016; in development we currently only use 2012 R2), or to the fact that it is Server Core and not the full SKU.

An alternative approach

Now that the application is runnable per se, we should think about things that could be improved to better utilize Docker, or containers as such.

For example, there is not really a need to run the application services as Windows Services. We could opt for running them as console applications, started using the CMD or even ENTRYPOINT Dockerfile instruction.

Also, one could skip the whole installation procedure and simply deploy the application’s files, XCOPY-style, from the build directory into the container, then run the (still) relevant registration steps manually, skipping the actual installer. Also, since versioning inside a container is not really necessary (we can simply create a new container for a new software version), we could directly copy into the APP-Current directory, so there would be no need for a junction/link.

For the sake of completeness, a Dockerfile could look like this then:

    FROM servercore-sql2014exp

    COPY parts/services/build/ C:/Homer/@DEV01/APP-Current

    EXPOSE 20001 20002

    ENV HOMER_FORCE_SVCACCOUNT=LocalSystem       

    RUN powershell -Command C:\Homer\@DEV01\APP-Current\bin\appctl.exe ip change -gc:wks DEV01

    CMD ["C:\\Homer\\@DEV01\\APP-Current\\bin\\apphost.exe", "-m:ApplicationServiceHost.config;DashboardServiceHost.config;WebUIServiceHost.config"]

Loose ends

Things not discussed above include how the application in the container finds its database(s), import/export/log directories, etc. They have been left out because they are even more application specific. Again, for the sake of completeness, here is how those things could be taken care of as well.

The database connection strings are configured in the configuration file of the DEV01 installation prefix, so for them there is no visible change in the Dockerfile. Likewise for the import and export directories, which can be shares and can be configured as such in said configuration file. Of course, they must be accessible from the container, but that is not specific to Docker.

The only thing left is to make sure that the log directory is accessible from the outside. To do so, start the container with the -v option. For more information see the respective Docker documentation.
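For example, assuming the DEV01 installation writes its logs below C:\Homer\@DEV01\log (the exact path depends on the application’s configuration):

```shell
REM Bind-mount a host directory over the application's log directory, so
REM the log files can be inspected from the host and survive the container.
docker run --rm -i -m 8GB -v C:\HomerLogs:C:\Homer\@DEV01\log hmr cmd.exe
```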

What’s next?

With a basically working application inside the container, experiments with multiple containers for failover, etc. become feasible. Also, deploying different application services into different containers is possible. For this particular application it seems reasonable to split services into containers according to their role.

That is, we could have one container for the infrastructure services, one for the monitoring and administration services and one (or more) for the application services.

The database instances themselves should probably not be containerized - at least not the actual data files (“mdf” files of SQL Server). On the other hand, having the SQL Server software in a container could be beneficial for reasons similar to why the application is in a container: being able to test with different SQL Server versions in parallel. Of course, that argument is not as strong in this case, because SQL Server versions can be installed side by side.

Other things that should be improved include the management of the Dockerfile. There should be LABEL directives for the maintainer, the version, a proper description, etc. The variable parts (e.g. version numbers) should be passed in by the docker build command, e.g. using the ARG directive or environment variables. The build of the image should be part of the build process - at least optionally.
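A sketch of what that could look like (label keys and the build argument are illustrative):

```dockerfile
# The version number is passed in at build time, e.g.:
#   docker build --build-arg APP_VERSION=1.0.0.0 -t hmr .
FROM servercore-sql2014exp
ARG APP_VERSION=1.0.0.0

LABEL maintainer="homer-team" \
      version="${APP_VERSION}" \
      description="Homer application services (prefix DEV01)"

COPY parts/services/build/Homer.Services-${APP_VERSION}.exe C:/
```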

Finally, for the above the application code itself has not been changed. There are quite some things that could be improved (or fixed) in the code base to better accommodate for Docker.








