Tuesday, December 1, 2015

Visual Studio Multiple Monitors

I've been using Visual Studio 2012 for some time now (yes, I'm aware that 2015 is already out; sometimes you don't have a choice) and it's been bugging me that when I split a "raft" out onto my second monitor it doesn't behave like a new instance of VS.  It only bugs me because it works that way for a bunch of other people and my old system worked that way.

Today I finally found the option to make it work the way I want it to.  Tools > Options > Environment > Tabs and Windows > Tab Well > Uncheck "Floating tab wells always stay on top of the main window".

This is so much better.

Tuesday, September 29, 2015

Searching all Fields in all Tables in all Databases in SQL Server

Sometimes you just want to push the limits of what you think you can do with a particular technology.  This was one of those times.  I wanted to see if there was an easy way to search every field in every table in every database on a server for a particular value.  Here's what I came up with.  Before anyone goes crazy (if anyone is actually reading this), I copied bits from the built-in stored procedures sp_MSforeachdb and sp_MSforeach_worker to make this work.

I really don't like building the dynamic SQL like this, but it did get the job done so I guess I can't really complain about it.

DECLARE
     @DatabaseName NVARCHAR(517)
    ,@SearchText NVARCHAR(1000) = 'Some value'

DECLARE DatabaseCursor CURSOR GLOBAL FOR
SELECT [name]
FROM master.dbo.sysdatabases d
WHERE
    d.[status] & 992 = 0
    AND DATABASEPROPERTY(d.[name], 'issingleuser') = 0
    AND HAS_DBACCESS(d.[name]) = 1
    AND [name] NOT IN ('master', 'model', 'msdb', 'tempdb')

OPEN DatabaseCursor
FETCH NEXT FROM DatabaseCursor INTO @DatabaseName

WHILE @@FETCH_STATUS = 0
BEGIN

    DECLARE @SQL NVARCHAR(MAX)

    SET @SQL = N'USE [' + @DatabaseName + N'] '
    SET @SQL = @SQL + N'DECLARE @SchemaName NVARCHAR(256), @TableName NVARCHAR(256), @ColumnName NVARCHAR(256) '
    SET @SQL = @SQL + N'DECLARE TableCursor CURSOR FOR '
    SET @SQL = @SQL + N'SELECT sch.[Name] AS SchemaName, st.[Name] AS TableName, sc.[Name] AS ColumnName FROM sys.tables st WITH (NOLOCK) INNER JOIN sys.columns sc WITH (NOLOCK) ON st.object_id = sc.object_id INNER JOIN sys.schemas sch WITH (NOLOCK) ON st.schema_id = sch.schema_id '
    SET @SQL = @SQL + N'OPEN TableCursor FETCH NEXT FROM TableCursor INTO @SchemaName, @TableName, @ColumnName '
    SET @SQL = @SQL + N'WHILE @@FETCH_STATUS = 0 BEGIN '
    SET @SQL = @SQL + N'DECLARE @InternalSQL NVARCHAR(MAX) = ''SELECT @CountParam = COUNT(*) FROM ['' + @SchemaName + ''].['' + @TableName + ''] WHERE ['' + @ColumnName + ''] LIKE ''''%' + @SearchText + '%'''''' '
    SET @SQL = @SQL + N'DECLARE @Count INT '
    SET @SQL = @SQL + N'EXEC sp_executesql @InternalSQL, N''@CountParam INT OUT'', @Count OUT '
    SET @SQL = @SQL + N'IF (@Count > 0) '
    SET @SQL = @SQL + N'BEGIN '
    SET @SQL = @SQL + N'PRINT @SchemaName + ''.'' + @TableName + ''.'' + @ColumnName '
    SET @SQL = @SQL + N'END '
    SET @SQL = @SQL + N'FETCH NEXT FROM TableCursor INTO @SchemaName, @TableName, @ColumnName '
    SET @SQL = @SQL + N'END '
    SET @SQL = @SQL + N'CLOSE TableCursor '
    SET @SQL = @SQL + N'DEALLOCATE TableCursor '

    EXEC (@SQL)

    FETCH NEXT FROM DatabaseCursor INTO @DatabaseName

END

CLOSE DatabaseCursor
DEALLOCATE DatabaseCursor

Wednesday, September 23, 2015

ANSI_WARNINGS Problems

I've got a set of stored procedures that work together to accomplish a single goal.  I did things that way to make everything easier to unit test, and it worked like a charm.

Unfortunately, during actual testing a really strange error started occurring.  The last stored procedure called in the process (it's called by the "master" stored procedure) wasn't actually executing.  I added logging everywhere and couldn't figure it out until a coworker suggested wrapping it in a TRY/CATCH.  Doing that exposed the issue, this guy: UPDATE failed because the following SET options have incorrect settings: 'ANSI_WARNINGS'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or filtered indexes and/or query notifications and/or XML data type methods and/or spatial index operations.

Does that make sense to you?  No?  Good, I don't feel so bad now.  It didn't make sense to me either.  Fortunately, I work with an awesome team and one of the database guys figured it out pretty quickly.  It turns out that the stored procedure that wasn't getting called (we'll refer to it as "dbo.UpdateRecords") needed to have ANSI_WARNINGS set to ON: SET ANSI_WARNINGS ON.  Additionally, the calling stored procedure (we'll refer to it as "dbo.DoItAll") needed to set ANSI_WARNINGS to ON right before calling dbo.UpdateRecords.  So the end result is like this:

SET ANSI_NULLS ON
GO
SET ANSI_WARNINGS OFF
GO
CREATE PROCEDURE dbo.DoItAll
AS
BEGIN
    ...
    SET ANSI_WARNINGS ON
    EXEC dbo.UpdateRecords
    ...
END
GO

SET ANSI_NULLS, ANSI_WARNINGS ON
GO
CREATE PROCEDURE dbo.UpdateRecords
AS
BEGIN
    ...
END
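
If you ever need to confirm which settings a session is actually running with, SESSIONPROPERTY can tell you (a quick diagnostic I find handy; it isn't part of the fix itself):

```sql
-- returns 1 when the option is ON for the current session, 0 when OFF
SELECT SESSIONPROPERTY('ANSI_WARNINGS') AS AnsiWarningsOn
```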

Friday, September 4, 2015

Unit Testing in SQL Server (Part 2a)

I mentioned in a previous post that I had written a bunch of tests to fully test the CustOrderHist stored procedure included in the Northwind database.  It turns out there were only two more.  Here they are:
CREATE PROCEDURE [CustOrderHist.Tests].[test Should Return Correct Quantity For Customer And Product]
AS
BEGIN

    -- Arrange
    -- Fake data insert statements moved to Setup stored procedure

    CREATE TABLE #Results (ProductName NVARCHAR(80), Quantity INT)

    -- Act
    INSERT INTO #Results
    EXEC dbo.CustOrderHist 'ABCDE'

    -- Assert
    DECLARE @Total INT
    SELECT @Total = Quantity FROM #Results WHERE ProductName = 'First Product'

    EXEC tSQLt.AssertEquals 513, @Total

END

CREATE PROCEDURE [CustOrderHist.Tests].[test Should Return Nothing For Customer With No Orders]
AS
BEGIN

    -- Arrange
    CREATE TABLE #Results (ProductName NVARCHAR(80), Quantity INT)

    -- Act
    INSERT INTO #Results
    EXEC dbo.CustOrderHist 'FGHIJ'

    -- Assert
    DECLARE @RowCount INT
    SELECT @RowCount = COUNT(*) FROM #Results

    EXEC tSQLt.AssertEquals 0, @RowCount

END
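
As with the first test, these can be run as a whole class or one at a time; tSQLt.Run accepts either a test class name or a fully qualified test name:

```sql
-- run every test in the class
EXEC tSQLt.Run 'CustOrderHist.Tests'

-- or run just one test by its full name
EXEC tSQLt.Run '[CustOrderHist.Tests].[test Should Return Nothing For Customer With No Orders]'
```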

Unit Testing in SQL Server (Part 2)

This is the second in a series of posts intended to get people up to speed using the tSQLt testing framework to test their stored procedures.  If you missed it, check out the first post here.  We'll be continuing where that one leaves off.

Now that we've got Northwind set up, we're ready to start learning what tSQLt does for us, and how it does it.  The main goal of tSQLt is to allow us to test our stored procedures.  Since stored procedures often rely on data and we can't really count on that data being in a specific state before our test, we're first going to learn how to set up our tests.

First up: test classes. A test class is just a schema in SQL Server. We use them to group our tests together. You should choose the best method for your organization when it comes to naming your test classes. At one client we decided to create a test class for each stored procedure. That way all of the tests for that stored procedure could be easily contained within a single test class. If something changed in the stored procedure we'd only have to modify the tests that were in that test class. I'll be following that convention here so I'm going to create a test class for all of the tests I write that test the functionality of dbo.CustOrderHist. I do that by running the following code on the Northwind database:

EXEC tSQLt.NewTestClass 'CustOrderHist.Tests'

Now that our test class exists we're ready to set up our data so we can test our stored procedure. When your tests run, tSQLt will first check the test class (in this case CustOrderHist.Tests) for a stored procedure named "setup" (case insensitive). If such a stored procedure is found, it is executed before each test in the test class. That makes it a great place to start setting up our data for our tests.

First, let's create the Setup procedure:

CREATE PROCEDURE [CustOrderHist.Tests].[Setup]
AS
BEGIN
END
GO

By examining CustOrderHist we can see that it relies on data from the Products, [Order Details], Orders, and Customers tables. Since we rely on that data, we'll need to set up that data before each of our tests run. Fortunately, tSQLt makes that really easy for us. We just have to execute tSQLt.FakeTable for each table. Here's the code:

CREATE PROCEDURE [CustOrderHist.Tests].[Setup]
AS
BEGIN

    EXEC tSQLt.FakeTable 'Products'
    EXEC tSQLt.FakeTable 'Order Details'
    EXEC tSQLt.FakeTable 'Orders'
    EXEC tSQLt.FakeTable 'Customers'

END


The FakeTable procedure opens a transaction, renames the table being passed, then recreates a table with the same name and structure, but without any constraints, defaults, or triggers. After the code above runs, all four tables will be empty shells of their normal selves, allowing us to populate whatever data we want into them.
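
Because the faked copies have no constraints, a test only has to populate the columns it actually cares about.  A minimal sketch (the table and values here are just illustrative):

```sql
EXEC tSQLt.FakeTable 'dbo.Products'

-- ProductName may be NOT NULL on the real table; on the faked copy this insert succeeds anyway
INSERT INTO dbo.Products (ProductID) VALUES (42)
```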

The stored procedure we're testing is pretty basic so that's all the setup we're going to need.  Remember that from here on out all of the tests that we create in the CustOrderHist.Tests schema (test class) will first execute those FakeTable calls.  Now we're ready to create our first test.  For reference, this is what the CustOrderHist stored procedure actually looks like:


SELECT ProductName, Total=SUM(Quantity)
FROM Products P, [Order Details] OD, Orders O, Customers C
WHERE C.CustomerID = @CustomerID
AND C.CustomerID = O.CustomerID AND O.OrderID = OD.OrderID AND OD.ProductID = P.ProductID
GROUP BY ProductName
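
If the old-style comma joins are hard to read, the same query can be written with explicit INNER JOINs (an equivalent rewrite for reference, not the actual procedure text):

```sql
SELECT ProductName, Total = SUM(Quantity)
FROM Products P
INNER JOIN [Order Details] OD ON OD.ProductID = P.ProductID
INNER JOIN Orders O ON O.OrderID = OD.OrderID
INNER JOIN Customers C ON C.CustomerID = O.CustomerID
WHERE C.CustomerID = @CustomerID
GROUP BY ProductName
```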

The first thing we're going to test is whether we get any results when we pass in a customer id that should have results.  Since a test in tSQLt is just a stored procedure we can create a test pretty darn easily.  Here's what it looks like before we actually have any contents in our test:


CREATE PROCEDURE [CustOrderHist.Tests].[test Should Return Results When Records Exist For Customer]
AS
BEGIN
END


As long as the stored procedure name starts with "test", tSQLt will pick it up as a test when the tests run.  Now that we have the test technically created, let's actually test something, shall we?  In our setup procedure we faked four tables.  Now we're going to populate those tables with some fake data to prove that our procedure under test does what we think it does:

CREATE PROCEDURE [CustOrderHist.Tests].[test Should Return Results When Records Exist For Customer]
AS
BEGIN

    -- Arrange
    INSERT INTO dbo.Products (ProductID, ProductName)
    VALUES (1, 'First Product'), (2, 'Second Product'), (3, 'Third Product')

    INSERT INTO dbo.Customers (CustomerID)
    VALUES ('ABCDE'), ('FGHIJ'), ('KLMNO'), ('PQRST')

    INSERT INTO dbo.Orders (OrderID, CustomerID)
    VALUES (1, 'ABCDE'), (2, 'ABCDE'), (3, 'PQRST'), (4, 'ABCDE')

    INSERT INTO dbo.[Order Details] (OrderID, ProductID, Quantity)
    VALUES (1, 1, 10), (2, 3, 100), (3, 2, 1000), (4, 1, 503)

    CREATE TABLE #Results (ProductName NVARCHAR(80), Quantity INT)

    -- Act

    -- Assert
END

You can see that in addition to populating fake data, I also created a temp table, which I'll use to store my results.  Now that we're all set up, we can go ahead and execute the stored procedure, putting the results into the temp table:

CREATE PROCEDURE [CustOrderHist.Tests].[test Should Return Results When Records Exist For Customer]
AS
BEGIN

    -- Arrange
    INSERT INTO dbo.Products (ProductID, ProductName)
    VALUES (1, 'First Product'), (2, 'Second Product'), (3, 'Third Product')

    INSERT INTO dbo.Customers (CustomerID)
    VALUES ('ABCDE'), ('FGHIJ'), ('KLMNO'), ('PQRST')

    INSERT INTO dbo.Orders (OrderID, CustomerID)
    VALUES (1, 'ABCDE'), (2, 'ABCDE'), (3, 'PQRST'), (4, 'ABCDE')

    INSERT INTO dbo.[Order Details] (OrderID, ProductID, Quantity)
    VALUES (1, 1, 10), (2, 3, 100), (3, 2, 1000), (4, 1, 503)

    CREATE TABLE #Results (ProductName NVARCHAR(80), Quantity INT)

    -- Act
    INSERT INTO #Results
    EXEC dbo.CustOrderHist 'ABCDE'

    -- Assert
END

The last step is to check whether what we have in our results temp table is what we expect to have there:

CREATE PROCEDURE [CustOrderHist.Tests].[test Should Return Results When Records Exist For Customer]
AS
BEGIN

    -- Arrange
    INSERT INTO dbo.Products (ProductID, ProductName)
    VALUES (1, 'First Product'), (2, 'Second Product'), (3, 'Third Product')

    INSERT INTO dbo.Customers (CustomerID)
    VALUES ('ABCDE'), ('FGHIJ'), ('KLMNO'), ('PQRST')

    INSERT INTO dbo.Orders (OrderID, CustomerID)
    VALUES (1, 'ABCDE'), (2, 'ABCDE'), (3, 'PQRST'), (4, 'ABCDE')

    INSERT INTO dbo.[Order Details] (OrderID, ProductID, Quantity)
    VALUES (1, 1, 10), (2, 3, 100), (3, 2, 1000), (4, 1, 503)

    CREATE TABLE #Results (ProductName NVARCHAR(80), Quantity INT)

    -- Act
    INSERT INTO #Results
    EXEC dbo.CustOrderHist 'ABCDE'

    -- Assert
    DECLARE @RowCount INT
    SELECT @RowCount = COUNT(*) FROM #Results

    EXEC tSQLt.AssertEquals 2, @RowCount

END

That last line runs a tSQLt procedure that compares the first parameter (expected result) to the second parameter (actual result). In this test we only wanted to see that the number of rows returned from the stored procedure was what we'd expect, and it was.
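
When a row count isn't a strong enough check, tSQLt also provides AssertEqualsTable, which compares the full contents of two tables.  Here's how this test's assert could be tightened (a sketch; the expected rows follow from the fake data above):

```sql
CREATE TABLE #Expected (ProductName NVARCHAR(80), Quantity INT)
INSERT INTO #Expected VALUES ('First Product', 513), ('Third Product', 100)

EXEC tSQLt.AssertEqualsTable '#Expected', '#Results'
```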
That's our first test. You run it like this:

EXEC tSQLt.Run 'CustOrderHist.Tests'

In order to fully test the CustOrderHist stored procedure, though, we'd need to write a few more tests. I've gone ahead and just written them and posted them here to give you an idea of what they would look like.

Friday, August 28, 2015

Proper karma-ng-html2js-preprocessor Setup

If you want to test directives in AngularJS, using Karma, and those directives use external templates, you'll have to use the karma-ng-html2js-preprocessor plugin for Karma.  It took me a few tries to get it right so hopefully these instructions help somebody (even if it's me) down the road.

The first thing you'll want to do is install the preprocessor through npm using this command: npm install karma-ng-html2js-preprocessor.  You can use the -g switch to install it globally, or not.

Once you have it installed, you'll need to update your Karma config file to pull it in and use it, and that's where my first issue came about.  First, include the preprocessor in the plugins section of the config:

plugins: [
    'karma-phantomjs-launcher',
    'karma-jasmine',
    'karma-ng-html2js-preprocessor',
    'karma-coverage'
],

The next thing you'll need to do is set up the preprocessor to pull in your html files:

preprocessors: {
    './Templates/*.html': ['ng-html2js'],
    './Content/Controls/*.html': ['ng-html2js']
}

Now, you can't forget to do the next thing, which is to include your html files in your list of files:

files: [
    'Templates/*.html',
    'Content/Controls/*.html'
]

If you don't do that part, the templates won't be imported by the preprocessor and your tests will fail.

You can configure the preprocessor if you need to, and you may have to.  I'm not gonna lie here: I don't know how to configure this completely.  I know what I had to do to get everything to work and that's what I'm sharing here.  If you want more details on the configuration options, check out the Github page for the preprocessor.

In our solution we use the forward slash to reference our external templates.  In the directive, the templateUrl may look like this: /templates/sample.html.  Karma won't automatically know to do that (I think this has to do with the way the web server is started and the way the browser finds the "root" of the application, but I'm not 100% on that).  What we have to do to make this work is prepend the templates with a forward slash.  Easy enough once you know it needs to be done:

ngHtml2JsPreprocessor: {
    // setting this option will create only a single module that contains templates
    // from all the files, so you can load them all with module('templates')
    moduleName: 'templates',
    // our templateUrls are prefixed with a forward slash during normal rendering,
    // so we prepend the slash here so Karma can find the templates and the
    // preprocessor can cache them under the same path
    prependPrefix: '/'
}

There's also an option to stripPrefix, but as you can see I didn't have to use that one.  The point here is to be aware that you can configure the preprocessor and it can be confusing when you try to do it so play with it for a while.  If I can find the blog that put me on the right track, I'll throw up a link to it.

Bonus: If you need to see what's going on while you're trying to figure this out, you can change the log level (and therefore the verbosity of the logging) by finding the logLevel section in the Karma config and changing it to config.LOG_DEBUG.  That's how I figured out my issues with prependPrefix.
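
In the Karma config file that change is just one property (fragment only; the other options are snipped):

```javascript
module.exports = function (config) {
    config.set({
        // valid values include config.LOG_DISABLE, config.LOG_ERROR,
        // config.LOG_WARN, config.LOG_INFO, and config.LOG_DEBUG
        logLevel: config.LOG_DEBUG
    });
};
```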

Second Bonus: If you need even more logging than that, you can hack away on the logging module itself.  Since everything in node.js (AFAIK) is JavaScript, you can make changes on the fly and watch the enhanced output.  To do that, just find the logger JavaScript file and start making changes to it.  Rerun your tests to see the modified output.

Monday, August 24, 2015

Setup VS for "Integrated" Karma Testing

If you're at all interested in unit testing your JavaScript, specifically your AngularJS areas, you may be using Karma and Jasmine.  If you're like me and you do most of your coding in Visual Studio you might wish there was some integration to be had between VS and your Karma tests.  Good news, there is!  Sort of.

There's an extension you can install that will reroute the output from Karma to your Output window in Visual Studio.  It's not the best way to integrate, but it definitely works as long as you get it all set up properly.  Hopefully these instructions get you there.
  1. Download the extension from here and install it
  2. Make sure your Karma config file is named karma.unit.conf.js (or karma.e2e.conf.js) and is saved at the root of your project (seriously, you can't put it anywhere else or name it anything else or it won't work)
  3. Everything else is standard for Karma so there shouldn't be any other setup necessary specifically for this extension
The extension will run your tests every time you launch the solution, and you can toggle it on and off by going to Tools > Enable/disable Karma unit testing...

The only other caveat is that in your Output window you'll need to make sure the "Show output from:" dropdown is set to Karma.

Custom Config Sections

Sometimes when you're developing a small application (or I suppose it could be a large application) you'll find that you wish the app.config/web.config had some additional options.  Coincidentally, that exact scenario arose for me recently.  I'm writing a small console application to combine reports for three different types of code coverage.  One of the reporting tools I'm using is OpenCover, which allows you to specify - through the use of command line arguments - which assemblies to check when running.  I wanted to set something up in my config file to allow me to specify this information, so I needed some custom configuration nodes.

I started by reading through this blog, which was very informative, but didn't totally do what I needed it to do.  I extended that example to meet my needs, and this is what I ended up with:
<configSections>
    <section name="AssemblySettings" type="UnitTestCoverage.ConfigurationExtensions.AssemblySettings, UnitTestCoverage.ConfigurationExtensions"/>
    <section name="FilterSettings" type="UnitTestCoverage.ConfigurationExtensions.FilterSettings, UnitTestCoverage.ConfigurationExtensions"/>
</configSections>
<appSettings>...snipped...</appSettings>
<AssemblySettings>
    <Assemblies>
        <clear />
        <add key="SomeProject.Test" value="C:\Dev\SomeProject.Test\bin\Debug\SomeProject.Test.dll" outputDirectory="C:\Output\SomeProject.Test\" outputFile="SomeProject.Test.xml">
            <filters>
                <clear />
                <add key="FirstName" value="[FirstNamespace]SomeProject.FirstNamespace.*" type="Include" />
                <add key="SecondName" value="[SecondNamespace]SomeProject.SecondNamespace.*" type="Include" />
                <add key="Test" value="[SomeProject.Test]" type="Ignore" />
            </filters>
        </add>
        <add key="SomeOtherProject.Test" value="C:\Dev\SomeOtherProject.Test\bin\Debug\SomeOtherProject.Test.dll" outputDirectory="C:\Output\SomeOtherProject.Test\" outputFile="SomeOtherProject.Test.xml">
            <filters>
                <clear />
                <add key="SomeOtherProject" value="[SomeOtherNamespace]*" type="Include" />
                <add key="SomeOtherProjectTest" value="[SomeOtherNamespace.Test]*" type="Ignore" />
            </filters>
        </add>
    </Assemblies>
</AssemblySettings>

So how did I do that?  I'd love to show you.  To get what I have above, I had to create five new objects in my solution.  I decided to keep this part separate from my business logic by creating a ConfigurationExtensions project to hold everything.  You don't have to do that, of course, but I found it to be easiest.  The first class I created was the AssemblySettings class, which inherits from System.Configuration.ConfigurationSection and corresponds to the <AssemblySettings> element up there.
public class AssemblySettings : ConfigurationSection
{
    [ConfigurationProperty("Assemblies", IsDefaultCollection = false)]
    [ConfigurationCollection(typeof(AssemblyCollection), AddItemName = "add", RemoveItemName = "remove",
        ClearItemsName = "clear")]
    public AssemblyCollection Assemblies
    {
        get
        {
            AssemblyCollection assemblyCollection = (AssemblyCollection)base["Assemblies"];
            return assemblyCollection;
        }
    }
}

The next class is AssemblyCollection (which you may have noticed is referenced in the ConfigurationCollection attribute of AssemblySettings). This class inherits from ConfigurationElementCollection and it doesn't have a direct partner in the web.config. However, this is the class that allows me to have multiple Assembly items in the web.config so it is very important.
public class AssemblyCollection : ConfigurationElementCollection
{
    public override ConfigurationElementCollectionType CollectionType
    {
        get { return ConfigurationElementCollectionType.AddRemoveClearMap; }
    }

    protected override ConfigurationElement CreateNewElement()
    {
        return new AssemblyElement();
    }

    protected override object GetElementKey(ConfigurationElement element)
    {
        return ((AssemblyElement)element).Key;
    }

    public AssemblyElement this[int index]
    {
        get { return (AssemblyElement)BaseGet(index); }
        set
        {
            if (BaseGet(index) != null)
            {
                BaseRemoveAt(index);
            }
            BaseAdd(index, value);
        }
    }

    public new AssemblyElement this[string Name]
    {
        get { return (AssemblyElement)BaseGet(Name); }
    }

    public int IndexOf(AssemblyElement assembly)
    {
        return BaseIndexOf(assembly);
    }

    public void Add(AssemblyElement assembly)
    {
        BaseAdd(assembly);
    }

    protected override void BaseAdd(ConfigurationElement element)
    {
        BaseAdd(element, false);
    }

    public void Remove(AssemblyElement assembly)
    {
        if (BaseIndexOf(assembly) >= 0)
        {
            BaseRemove(assembly.Key);
        }
    }

    public void RemoveAt(int index)
    {
        BaseRemoveAt(index);
    }

    public void Remove(string name)
    {
        BaseRemove(name);
    }

    public void Clear()
    {
        BaseClear();
    }
}

The last Assembly related class is AssemblyElement, which inherits from ConfigurationElement.
public class AssemblyElement : ConfigurationElement
{
    public AssemblyElement(string key, string value, string outputDirectory, string outputFile)
    {
        Key = key;
        Value = value;
        OutputDirectory = outputDirectory;
        OutputFile = outputFile;
    }

    public AssemblyElement()
        : this("", @"SomeValue.dll", @"C:\Output\", "results.xml")
    {
    }

    [ConfigurationProperty("key", DefaultValue = "", IsRequired = true, IsKey = true)]
    public string Key
    {
        get { return (string)this["key"]; }
        set { this["key"] = value; }
    }

    [ConfigurationProperty("value", DefaultValue = "", IsRequired = true)]
    public string Value
    {
        get { return (string)this["value"]; }
        set { this["value"] = value; }
    }

    [ConfigurationProperty("noshadow", DefaultValue = "true", IsRequired = false)]
    public bool NoShadow
    {
        get { return (bool)this["noshadow"]; }
        set { this["noshadow"] = value; }
    }

    [ConfigurationProperty("outputFile", DefaultValue = @"results.xml", IsRequired = true)]
    public string OutputFile
    {
        get { return (string)this["outputFile"]; }
        set { this["outputFile"] = value; }
    }

    [ConfigurationProperty("outputDirectory", DefaultValue = @"C:\Output\", IsRequired = true)]
    public string OutputDirectory
    {
        get { return (string)this["outputDirectory"]; }
        set { this["outputDirectory"] = value; }
    }

    [ConfigurationProperty("filters")]
    public FilterCollection Filters
    {
        get { return (FilterCollection)this["filters"]; }
        set { this["filters"] = value; }
    }
}

You can see in the config file (and in AssemblyElement) that we have a nested configuration for filters that uses a new class called FilterCollection. FilterCollection inherits from ConfigurationElementCollection.
public class FilterCollection : ConfigurationElementCollection
{
    public override ConfigurationElementCollectionType CollectionType
    {
        get { return ConfigurationElementCollectionType.AddRemoveClearMap; }
    }

    protected override ConfigurationElement CreateNewElement()
    {
        return new FilterElement();
    }

    protected override object GetElementKey(ConfigurationElement element)
    {
        return ((FilterElement)element).Key;
    }

    public FilterElement this[int index]
    {
        get { return (FilterElement)BaseGet(index); }
        set
        {
            if (BaseGet(index) != null)
            {
                BaseRemoveAt(index);
            }
            BaseAdd(index, value);
        }
    }

    public new FilterElement this[string Name]
    {
        get { return (FilterElement)BaseGet(Name); }
    }

    public int IndexOf(FilterElement filter)
    {
        return BaseIndexOf(filter);
    }

    public void Add(FilterElement filter)
    {
        BaseAdd(filter);
    }

    protected override void BaseAdd(ConfigurationElement element)
    {
        BaseAdd(element, false);
    }

    public void Remove(FilterElement filter)
    {
        if (BaseIndexOf(filter) >= 0)
        {
            BaseRemove(filter.Key);
        }
    }

    public void RemoveAt(int index)
    {
        BaseRemoveAt(index);
    }

    public void Remove(string name)
    {
        BaseRemove(name);
    }

    public void Clear()
    {
        BaseClear();
    }
}

Since a FilterCollection is made up of individual filters, we'll have to create those as well. The FilterElement class inherits from ConfigurationElement.
public class FilterElement : ConfigurationElement
{
    public FilterElement(string key, string type, string value)
    {
        Type = type;
        Value = value;
        Key = key;
    }

    public FilterElement()
        : this("Namespace", "Add", "[*]*")
    {
    }

    [ConfigurationProperty("key", DefaultValue = "", IsRequired = true, IsKey = true)]
    public string Key
    {
        get { return (string)this["key"]; }
        set { this["key"] = value; }
    }

    [ConfigurationProperty("type", DefaultValue = "Add", IsRequired = true)]
    public string Type
    {
        get { return (string)this["type"]; }
        set { this["type"] = value; }
    }

    [ConfigurationProperty("value", DefaultValue = "[*]*")]
    public string Value
    {
        get { return (string)this["value"]; }
        set { this["value"] = value; }
    }
}
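
The classes above only define the section, so for completeness, here's roughly how they get consumed at runtime (a sketch; it assumes the AssemblySettings section name from the config above and a project reference to System.Configuration):

```csharp
using System;
using System.Configuration;

class Program
{
    static void Main()
    {
        // GetSection returns our custom ConfigurationSection subclass
        var settings = (AssemblySettings)ConfigurationManager.GetSection("AssemblySettings");

        foreach (AssemblyElement assembly in settings.Assemblies)
        {
            Console.WriteLine("{0} -> {1}", assembly.Key, assembly.Value);

            foreach (FilterElement filter in assembly.Filters)
            {
                Console.WriteLine("  {0}: {1} ({2})", filter.Key, filter.Value, filter.Type);
            }
        }
    }
}
```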

Monday, August 3, 2015

Unit Testing in SQL Server (Part 1)


I'm a huge advocate of unit testing your .NET code, but until recently I didn't really know it was possible to test your SQL stored procedures as well. Now I'm a huge advocate of testing those, too.  Hopefully at the end of this tutorial you'll have a solid understanding of how to use the tSQLt framework to write unit tests for your SQL code.

A couple of things to keep in mind as you consider writing unit tests for SQL:
  • It's really easy to over-test in SQL; much easier than in .NET because it's tempting to test for impossible scenarios. If scenarios are impossible (because of database constraints, business rules, or some other restriction), strongly consider whether you need to test them before diving in
  • Until I wrote unit tests I never considered writing stored procedures in the same way I write .NET code. I would write monolithic stored procedures instead of breaking them into reusable (and testable) pieces. It's a lot easier to test a bunch of small stored procedures than it is to test a single huge stored procedure
First I want to send you to the tSQLt website to get their rundown of this whole thing. You can check it out here.

Now that you've done that, let's get you squared away with the Northwind database. Yes, seriously. Don't look at me like that. Northwind has everything we need and it's still pretty small. It's perfect for this. You can download Northwind from here. That should have given you a backup file that you'll want to restore to a server somewhere. If you have SQLExpress installed that'll work just fine. Before you can run any tests, though, you need to run one command against the Northwind database after you restore it.

DECLARE @Command VARCHAR(MAX)


SELECT @Command = REPLACE(REPLACE(@Command, '<<DatabaseName>>', sd.[name]), '<<LoginName>>', sl.[name])
FROM master..sysdatabases sd
INNER JOIN master..syslogins sl
    ON sd.[sid] = sl.[sid]
WHERE sd.[name] = DB_NAME()
Update: I went back and tried to follow these instructions and found that the above script no longer worked.  I'm leaving it here for historic purposes, but below is the script that I ran this time around.

DECLARE @User VARCHAR(50)


SELECT @User = QUOTENAME(sl.[name])
FROM master..sysdatabases sd
INNER JOIN master..syslogins sl
    ON sd.[sid] = sl.[sid]
WHERE sd.[name] = DB_NAME()

-- NOTE: the post as published stops after the SELECT; the statement below is the
-- likely missing step that actually changes the owner (reconstructed, not from the original)
EXEC ('ALTER AUTHORIZATION ON DATABASE::' + QUOTENAME(DB_NAME()) + ' TO ' + @User)

All that does is change the database so that you're the owner.

Update: I forgot to include a step here. You need to run the next bit of SQL before you run tSQLt.class.sql or it won't work properly. This is for SQL Server 2008. If you have a different version, search for SP_DBCMPTLEVEL to find out which value you should use.
EXEC SP_DBCMPTLEVEL 'Northwind', 100
OK, now that you have that, you're ready to install tSQLt. Go ahead and get the .zip file from here and extract it somewhere you can find. First run SetClrEnabled.sql on Northwind, then run tSQLt.class.sql.

At this point, tSQLt is installed on Northwind and you should be good to go to write tests. I made a couple of small changes to my instance that I'll share here. We have many users on our environment and we started to see situations where people were renaming tables outside of transactions (we'll get to that, don't worry) and it was causing us some headaches. In order to find the offenders and coach them on what they should do differently I modified a few objects. You can run the below code to make these same changes.
IF OBJECT_ID('tSQLt.Private_RenamedObjectLog') IS NOT NULL
BEGIN
    DROP TABLE tSQLt.Private_RenamedObjectLog
END
GO

CREATE TABLE tSQLt.Private_RenamedObjectLog (
    ID INT IDENTITY(1, 1) CONSTRAINT pk__private_renamedobjectlog_id PRIMARY KEY CLUSTERED
    ,ObjectId INT NOT NULL
    ,OriginalName NVARCHAR(MAX) NOT NULL
    ,[NewName] NVARCHAR(MAX) NULL
    ,RenamedBy VARCHAR(1000) NULL
    ,RenamedOn DATETIME2
)
GO

IF OBJECT_ID('tSQLt.Private_MarkObjectBeforeRename') IS NOT NULL
BEGIN
    DROP PROCEDURE tSQLt.Private_MarkObjectBeforeRename
END
GO

---Build+
CREATE PROCEDURE tSQLt.Private_MarkObjectBeforeRename
(
     @SchemaName NVARCHAR(MAX)
    ,@OriginalName NVARCHAR(MAX)
    ,@NewName NVARCHAR(MAX) = NULL
)
AS
BEGIN

    INSERT INTO tSQLt.Private_RenamedObjectLog (ObjectId, OriginalName, [NewName], RenamedBy, RenamedOn)
    VALUES (OBJECT_ID(@SchemaName + '.' + @OriginalName), @OriginalName, @NewName, SYSTEM_USER, GETDATE())

END
GO
In the next installment, we'll actually check out the framework a little more and write a very basic test.

Monday, June 1, 2015

Custom Media Formatters in Web API

If you read my post on HATEOAS you may be wondering how to build a custom media formatter in WebAPI.  Although this example is straight from a school assignment, it worked out pretty well and I'll definitely be using it as the basis of any future work.  Part of the assignment was to develop a Domain Access Protocol specific to the assignment.  In this case, the content type of the DAP was to be along the lines of "application/vnd.class-name-assignment+xml".  Of course, WebAPI doesn't know what that type is or how to format the result unless you tell it, which is where the idea of a custom media formatter comes into play.

For starters, any custom media formatter we create will need to inherit from a MediaTypeFormatter base class.  I chose to inherit from BufferedMediaTypeFormatter because that's what I found first when I searched for it:

   1:  public class CustomXmlFormatter: BufferedMediaTypeFormatter

Once you do that, you'll need to add the media type to the SupportedMediaTypes list in the constructor:

   1:  public CustomXmlFormatter()
   2:  {
   3:      SupportedMediaTypes.Add(new MediaTypeHeaderValue("application/vnd.class-name-assignment+xml"));
   4:  }

OK, great.  But all that's really done is set up the media type.  It doesn't tell the server when to use that media type.  This next part is going to go in the WebApiConfig file, in the Register method:

   1:  config.Formatters.Add(new CustomXmlFormatter());

At this point we're all set up to send and receive content from a client using the aforementioned "application/vnd.class-name-assignment+xml" content type.  As long as the request comes from the client with that content type, our new custom media formatter will be used to process it.  Unfortunately, nothing will happen with it at this point (actually, the service won't even compile yet because we haven't overridden a couple of important methods).  We have to override CanReadType and CanWriteType in order to read and write (respectively) the data as it comes in and goes out:

   1:  public override bool CanReadType(Type type)
   2:  {
   3:      return true;
   4:  }
   5:   
   6:  public override bool CanWriteType(Type type)
   7:  {
   8:      return true;
   9:  }

An important note about that code: it just passes everything right on through.  You may want to set those methods up so that they only read or write certain types.  I didn't want to do that (the assignment was coming due and I was short on time) so I just put in return true.

This is all great, but we still aren't actually doing anything here.  We have to override a couple more methods in order to actually process the data that comes and goes through this formatter.  Since it's shorter, I'll show you ReadFromStream first:

   1:  public override object ReadFromStream(Type type, Stream readStream, System.Net.Http.HttpContent content, IFormatterLogger formatterLogger)
   2:  {
   3:      var serializer = new XmlSerializer(type);
   4:      var val = serializer.Deserialize(readStream);
   5:      return val;
   6:  }

What we've said there is that anything that comes in should be deserialized using the XmlSerializer.  We're trusting that the input is properly formatted XML.  If it isn't, we'll throw an error.  As for the data on the way out, well, hopefully I commented it well enough to make sense:


   1:  public override void WriteToStream(Type type, object value, Stream writeStream, System.Net.Http.HttpContent content)
   2:  {
   3:      // create a stream to work with
   4:      using (var writer = new StreamWriter(writeStream))
   5:      {
   6:          // check whether the object being written out is null
   7:          if (value == null)
   8:          {
   9:              throw new Exception("Cannot serialize type");
  10:          }
  11:   
  12:          // if the object isn't null, build the output as an XML string
  13:          var output = BuildSingleItemOutputAsXml(type, value);
  14:   
  15:          // write the XML string into the stream
  16:          writer.Write(output);
  17:      }
  18:  }
  19:   
  20:  private string BuildSingleItemOutputAsXml<T>(Type type, T viewModel)
  21:  {
  22:      // get the basic XML rendering of the object
  23:      var output = AsXml(type, viewModel);
  24:   
  25:      // strip off and store the closing tag
  26:      var closingNodeTag = output.Substring(output.LastIndexOf("</", StringComparison.InvariantCulture));
  27:      output = output.Substring(0, output.LastIndexOf("</", StringComparison.InvariantCulture));
  28:   
  29:      // use reflection to get the properties of the object
  30:      var properties = type.GetProperties();
  31:   
  32:      // iterate the properties of the object until the Links are found (this is related to the HATEOAS requirement)
  33:      // create a custom node for each link found in the Links property
  34:      output = (from property in properties
  35:                where property.PropertyType == typeof(List<LinkViewModel>)
  36:                select (List<LinkViewModel>)property.GetValue(viewModel, null)
  37:                    into links
  38:                    where links != null
  39:                    from link in links
  40:                    select link).Aggregate(output,
  41:                                       (current, link) =>
  42:                                       current +
  43:                                       string.Format("<link rel=\"{0}\" href=\"{1}\" />", link.Rel, link.Href));
  44:   
  45:      // append the closing tag back on the output
  46:      output += closingNodeTag;
  47:   
  48:      return output;
  49:  }
  50:   
  51:  private string AsXml<T>(Type type, T viewModel)
  52:  {
  53:      // Build an XML string representation of our object
  54:      string xmlResult;
  55:   
  56:      var settings = new XmlWriterSettings
  57:          {
  58:              Encoding = new UnicodeEncoding(false, false),
  59:              Indent = true,
  60:              OmitXmlDeclaration = true
  61:          };
  62:   
  63:      var xmlSerializer = new XmlSerializer(type);
  64:      using (var stringWriter = new StringWriter())
  65:      {
  66:          using (var xmlWriter = XmlWriter.Create(stringWriter, settings))
  67:          {
  68:              xmlSerializer.Serialize(xmlWriter, viewModel);
  69:          }
  70:   
  71:          //Strip out namespace info
  72:          xmlResult =
  73:              stringWriter.ToString()
  74:                          .Replace("xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\"", "")
  75:                          .Replace("xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"", "");
  76:              //This is the output as a string
  77:      }
  78:   
  79:      //Load the XML doc
  80:      var xdoc = new XmlDocument();
  81:      xdoc.LoadXml(xmlResult.Replace("xsi:nil", "nullable"));
  82:      //Remove all NULL values 
  83:      var xmlNodeList = xdoc.SelectNodes("//*[@nullable]");
  84:      if (xmlNodeList != null)
  85:          foreach (XmlNode node in xmlNodeList)
  86:          {
  87:              if (node.ParentNode != null)
  88:              {
  89:                  node.ParentNode.RemoveChild(node);
  90:              }
  91:          }
  92:   
  93:      return xdoc.OuterXml;
  94:  }

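The trickiest part of BuildSingleItemOutputAsXml is the closing-tag surgery, so here is the same Substring/LastIndexOf idea reduced to a few lines of JavaScript (the sample XML and link values are made up for illustration):

```javascript
// Inject <link .../> elements just before the final closing tag of an XML string.
// Mirrors the Substring/LastIndexOf logic used in BuildSingleItemOutputAsXml.
function injectLinks(xml, links) {
    var closeAt = xml.lastIndexOf("</");          // find the final closing tag
    var closingTag = xml.substring(closeAt);      // e.g. "</grade>"
    var body = xml.substring(0, closeAt);         // everything before it
    links.forEach(function (link) {
        body += '<link rel="' + link.rel + '" href="' + link.href + '" />';
    });
    return body + closingTag;                     // re-append the closing tag
}

var xml = injectLinks("<grade><id>6</id></grade>",
    [{ rel: "view", href: "/api/grade/get/6" }]);
// xml is '<grade><id>6</id><link rel="view" href="/api/grade/get/6" /></grade>'
```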
Everything up there ends up with one thing: a single XML string written into the stream that is being sent back to the client.  There's only one part left now, which is to specify in our responses when to use this new custom media formatter.  I use the CreateResponse extension of the HttpRequestMessage object to build my responses so this was a simple matter of specifying the content of the response like this:

   1:  response.Content = new ObjectContent(typeof(T), content, new CustomXmlFormatter(), "application/vnd.class-name-assignment+xml");


And that's all there is to it!

HATEOAS

HATEOAS: the much ignored tenet of REST services.  What does it mean?  What does it do?  How do you do it?  Those are the three questions I can hopefully answer in this post.

What does it mean?  This one is easy: Hypertext As The Engine Of Application State.

Great, so what does it actually mean?  It means you use links (yes, the same kind you put on a webpage in an anchor tag) to tell the client what it can do next.  Ergo, you use Hypertext (links) as the Engine of Application State (to tell the client what it can do).

The basic idea behind HATEOAS is that the server should be able to tell the client(s) what comes next in the process, based on the current state of the resource being returned.  Allow me to use an analogy.  Big shocker here, but I'm going with the old building a house analogy.

So let's say you're building a house.  Right from the beginning you have a resource that has a state.  For the sake of argument (and because I've never actually built a house), we'll say the resource is House and at the beginning of the process the State of House is Not Started (you can envision an empty lot if that helps).  When House has a State of Not Started, the Construction Crew (i.e. the client) may only take a few actions.  They can Change Plans, Lay Foundation, or Scrap It.  There's a ton of stuff that will come later, but right away they can't do those things (like Frame).

Once the Construction Crew chooses an action from the three available, the State of House changes.  So let's say the Construction Crew chooses Lay Foundation.  The State of House is now Foundation Laid and there are new actions that can be taken: Change Plans, Frame, Landscape, or Scrap It.  You'll notice (hopefully) that they have two of the same options.  When House is in Foundation Laid or Not Started, the actions Change Plans and Scrap It are both available.

As you can see, the State of House dictates what the Construction Crew can do next.  If House is the resource returned from the server, it's pretty easy to see that the server gets to dictate what steps are available for the client to take.  If the server decides that new actions should be available in a particular state, the service can be modified to include those links in the response.  Going back to the House example, if the service decided that Install Basketball Hoop should be available when the State of House is Foundation Laid, it can do that.  The client (Construction Crew) can then choose to do it or not.  See, the server isn't telling the client what to do next, only what it can do next.

So that should answer "What does it mean" and "What does it do", but the really important question is "How do you do it".  Again, there's a short answer and a long answer.  The short answer is that you return a links property in your response that contains the actions the client can take.  The longer answer involves code.  Keep in mind that this is just how I've done it one time so it's not The Way or anything, just a suggestion.

It's pretty easy to do the JSON version of this because serializing a class to a JSON object using JsonConvert is really simple. What I do is create a base class for every class that will be returned to a client, usually named something obvious like BaseViewModel:
   1:  public class BaseViewModel
   2:  {
   3:      [XmlIgnore]
   4:      public virtual List<LinkViewModel> Links
   5:      {
   6:          get { return new List<LinkViewModel>(); }
   7:      }
   8:  }

And the LinkViewModel is pretty basic as well:
   1:  [XmlRoot("link")]
   2:  public class LinkViewModel
   3:  {
   4:      [XmlAttribute("rel")]
   5:      public string Rel { get; set; }
   6:   
   7:      [XmlAttribute("href")]
   8:      public string Href { get; set; }
   9:  }

Once we have that, it's a single line to render the output into a usable JSON result:
   1:  JsonConvert.SerializeObject(value);

All of that leads to a result that looks like this:
{"Comments":"Happy","Id":6,"Links":[{"Rel":"view","Href":"/api/grade/get/6"},{"Rel":"update","Href":"/api/grade/update"},{"Rel":"appeal","Href":"/api/appeal/add"}],"State":0,"StudentId":123,"Value":0.0}
With that result I can use a little client-side code (in this case it's pure JavaScript) to get the links from the response and see what I can do:
   1:  function getLink(object, name) {
   2:      var links = getLinks(object);
   3:      for (var i = links.length; --i >= 0;) {
   4:          if (links[i].rel != null && links[i].rel == name) {
   5:              return links[i].href;
   6:          } else if (links[i].Rel != null && links[i].Rel == name) {
   7:              return links[i].Href;
   8:          }
   9:      }
  10:   
  11:      return null;
  12:  }
  13:          
  14:  function getLinks(object) {
  15:      var links = [];
  16:      if (object != null && object.link != null && object.link.constructor === Array) {
  17:          for (var i = object.link.length; --i >= 0;) {
  18:              links.push(object.link[i]);
  19:          }
  20:      } else if (object != null && object.link != null) {
  21:          links.push(object.link);
  22:      } else if (object != null && object.Links != null && object.Links.constructor === Array) {
  23:          for (var j = object.Links.length; --j >= 0;) {
  24:              links.push(object.Links[j]);
  25:          }
  26:      }
  27:      else if (object != null && object.Links != null) {
  28:          links.push(object.Links);
  29:      }
  30:   
  31:      return links;
  32:  }
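As a quick usage check, here are condensed versions of those helpers (same logic, compacted so the snippet stands alone) run against the grade response shown earlier:

```javascript
// Condensed getLinks/getLink, repeating the logic from the post so this runs standalone
function getLinks(object) {
    if (object == null) return [];
    var source = object.link != null ? object.link : object.Links;
    if (source == null) return [];
    return source.constructor === Array ? source.slice() : [source];
}

function getLink(object, name) {
    var links = getLinks(object);
    for (var i = links.length; --i >= 0;) {
        if (links[i].rel === name) return links[i].href;
        if (links[i].Rel === name) return links[i].Href;
    }
    return null;
}

// The grade object from the JSON result above
var grade = {
    "Comments": "Happy", "Id": 6,
    "Links": [
        { "Rel": "view", "Href": "/api/grade/get/6" },
        { "Rel": "update", "Href": "/api/grade/update" },
        { "Rel": "appeal", "Href": "/api/appeal/add" }
    ],
    "State": 0, "StudentId": 123, "Value": 0.0
};

var updateUrl = getLink(grade, "update");   // "/api/grade/update"
```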

So there you go. HATEOAS and how to do it.