Monday, November 26, 2018

Using Assert.Throws and Assert.ThrowsAsync with XUnit

I keep having to dig through my old code to find instances where I've tested particular attributes about an exception being thrown. XUnit has a couple of methods that allow you to check exceptions, but I always forget exactly how they're used. Since that's pretty much what this blog is for, here it is.

The first one (Assert.Throws) is pretty straightforward. It accepts a parameter of type Func&lt;object&gt; that should basically be a call to the method you want to test. Let's say you want to test a method called CheckMyName in a controller called WhoController. That might look like this:
Assert.Throws<CustomException>(() => _whoController.CheckMyName());

All that test is doing is checking that when your method is called it throws an instance of CustomException (presumably you've set up the test so that something in CheckMyName causes an exception to be thrown). You can always get access to the instance of CustomException that was thrown by using the result of Throws, which - because it is generic - will be an instance of whatever type you specified (in this case, CustomException).
var actual = Assert.Throws<CustomException>(() => _whoController.CheckMyName());

Once you have the instance of CustomException you can examine individual properties or whatever else you want to do with it.
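For example, a quick sketch (CustomException, the controller, and the expected message are just this post's placeholders, not real library types):

```csharp
// grab the thrown exception and assert on whatever properties matter
var actual = Assert.Throws<CustomException>(() => _whoController.CheckMyName());
Assert.Equal("Name check failed", actual.Message);
Assert.Null(actual.InnerException);
```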


The other method provided by XUnit is for testing asynchronous methods: Assert.ThrowsAsync&lt;T&gt;. I always get screwed up with this one because of async/await and when I should use what. Assuming my test method is async (i.e. public async Task MyTestShouldDoSomething()), where do I put the await? That would look like this:
var actual = await Assert.ThrowsAsync<CustomException>(async () => await _whoController.CheckMyName());

It seems a little tricksy, but that's how you do it. Or, at least, that's how I do it and it works. It's possible I'm using it wrong, but I know that it definitely works so I'm fine with it.
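Putting it all together, a full async test might be sketched like this (the method and message names are made up for illustration):

```csharp
[Fact]
public async Task CheckMyName_ThrowsCustomException()
{
    // await the ThrowsAsync call itself; the lambda awaits the method under test
    var actual = await Assert.ThrowsAsync<CustomException>(
        async () => await _whoController.CheckMyName());

    Assert.Equal("Name check failed", actual.Message);
}
```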

Tuesday, October 30, 2018

Find a Stored Procedure By Searching Its Contents

Today I had to do something for the first time in a while and it took me a moment to remember the syntax so I figured I'd better write a post about it so that doesn't happen again. In SQL Server you can search the contents of a stored procedure. This is useful when you're trying to figure out which stored procedure(s) update(s) a particular field, for example. I'm sure there are other uses, but I want to keep this as short as possible.

Basically what you can do is use the built-in sys tables to search the text (contents) of a stored procedure. It's pretty straightforward so I'll just get right to the code. This simple SQL statement will get the names of any stored procedures that use the field "CurrentMarriageStatus".
SELECT DISTINCT so.[name]
FROM sysobjects so
INNER JOIN syscomments sc ON so.id = sc.id
WHERE sc.[text] LIKE '%CurrentMarriageStatus%'


That's it! Then you can take the results and go through them one at a time to see how they're using the field you searched for.
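One caveat worth noting: syscomments stores a procedure's text in 4000-character chunks spread across multiple rows, so a name that happens to straddle a chunk boundary can be missed. On newer versions of SQL Server the same search can be done against sys.sql_modules, which holds the full definition in a single column:

```sql
SELECT OBJECT_NAME(object_id) AS [name]
FROM sys.sql_modules
WHERE [definition] LIKE '%CurrentMarriageStatus%';
```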

Friday, October 5, 2018

Git: Forcing Local to Match Remote

Every now and then I've found myself making changes directly on a branch that can't be updated from local to remote. Let's say we have the "master" branch and from that we create the "working" branch. It is impossible (by rules) to update "master" directly. Instead, we must create a pull request so that our code may be reviewed. But sometimes I've already done the work on "master", committed my changes locally on "master" and tried to push them to the remote repository. That, of course, leads to an error message along the lines of "Pushes to this branch are not permitted; you must use a pull request to update this branch." That's exactly the error message we want to see, but now I'm stuck with code in the wrong branch and my "master" doesn't match the remote "master". Here's the super easy way to fix that.

git fetch --prune
git checkout -b new-branch-with-my-changes
git push --set-upstream origin new-branch-with-my-changes
git checkout master
git reset --hard origin/master

These simple steps will 1) create a new branch called new-branch-with-my-changes on the local and remote repositories, and 2) overwrite the local master branch to match the remote master branch.

Super simple, but I always have to Google it so now it's here for future me to find it more easily next time.

Tuesday, July 17, 2018

Angular Error: Illegal state: Could not load the summary for directive SomeComponent

Apparently I had encountered this issue before, but it came up again and I had to research it again (I only know I came across it before because the link to the answer on SO was purple).

When your tests fail with this message, make sure you're including the component under test (SomeComponent) in the declarations of the TestBed. It's super simple, but apparently it's bitten me at least twice.
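For Future Me, the fix is a one-line addition to the TestBed setup (SomeComponent standing in for whatever component the spec exercises):

```typescript
TestBed.configureTestingModule({
  // the component under test has to be declared here, otherwise Angular
  // can't load its summary and the spec fails with the error above
  declarations: [SomeComponent],
}).compileComponents();
```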

Wednesday, July 11, 2018

Angular Tests: Error during cleanup of component

I've come across this error quite a few times and it always takes me a few minutes to remember how to overcome it so I figured I should write about it.

This error typically arises for me when I subscribe to an observable in my ngOnInit but don't unsubscribe in ngOnDestroy. That's it. If you (or Future Me) start seeing this error - followed by a ton of text written to the console - check whether you've added a .subscribe in ngOnInit without a matching unsubscribe in ngOnDestroy.
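A minimal sketch of the pattern (the component, service, and field names here are invented for illustration):

```typescript
export class MyComponent implements OnInit, OnDestroy {
  private subscription: Subscription;
  value: string;

  constructor(private someService: SomeService) {}

  ngOnInit(): void {
    // subscribing here without a matching unsubscribe is what
    // triggers "Error during cleanup of component" in tests
    this.subscription = this.someService.values$.subscribe(v => (this.value = v));
  }

  ngOnDestroy(): void {
    this.subscription.unsubscribe();
  }
}
```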

Monday, June 11, 2018

C# Substring Extensions

Most of the time when I'm using Substring for something, what I'm actually trying to achieve is getting the contents of the string between two other well-known parts. For example, in a query string I might have something like this: http://mysite.com/page?user=mickey&location=Disney&bestie=Donald. If I want to get the location from that string (and yes, I know there are utilities specifically designed to get values out of query strings, but this is the best example I could come up with on short notice so get over it) I'd have to find "location" in the string, then find the next & and then get the contents between them.

Unfortunately, the out-of-the-box versions of Substring only allow us to specify the beginning index or the beginning index and a length. If I don't know the length then I have to do some mucking about with the contents and... you know what, you've probably dealt with this before. I finally decided to write some extension methods for Substring (well, for strings) that take the string I'm searching for and give me back what I want.
public static class StringExtensions
{
    public static string Substring(this string input, string searchText, StringComparison comparisonType = StringComparison.InvariantCultureIgnoreCase)
    {
        // find the first occurrence of the search text
        var index = input.IndexOf(searchText, comparisonType);

        // either return null or the entire string that comes after the search text;
        // add the length of the search string to the index so the search string isn't included in the result
        return index == -1 ? null : input.Substring(index + searchText.Length);
    }

    public static string Substring(this string input, string primarySearchText, string secondarySearchText,
        StringComparison comparisonType = StringComparison.InvariantCultureIgnoreCase)
    {
        // find the first occurrence of the primary search text
        var index = input.IndexOf(primarySearchText, comparisonType);

        if (index == -1)
        {
            // if the primary search text doesn't exist in the string, just return null
            return null;
        }

        // add the length of the primary search string to the index so the search string isn't included in the result
        index += primarySearchText.Length;

        // find the first occurrence of the secondary search text
        var searchUntilIndex = input.IndexOf(secondarySearchText, index, comparisonType);

        // if the secondary search text doesn't exist in the string (after the primary search text occurs),
        // return the entire string that comes after the primary search text;
        // otherwise return whatever is between the primary search text and the secondary search text
        var length = searchUntilIndex == -1 ? input.Length - index : searchUntilIndex - index;

        return input.Substring(index, length);
    }
}

These new methods let me do this:
var location = input.Substring("location=", "&");

and what I'll get back from that call is "Disney" (without the quotes, of course).

I've probably written some form of this over a dozen times. At least now I've saved it for future reference so I can copy/paste it next time I need it.
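As a quick sanity check of both overloads against the example query string from earlier:

```csharp
var input = "http://mysite.com/page?user=mickey&location=Disney&bestie=Donald";

// two search strings: returns the text between them
var location = input.Substring("location=", "&");   // "Disney"

// one search string: returns everything after it
var bestie = input.Substring("bestie=");            // "Donald"
```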

Friday, June 8, 2018

Swashbuckle for Swagger (Markdown in XML Comments and Protecting the Documentation)

I just introduced Swashbuckle to a project I've been working on. If you're not familiar with Swashbuckle and/or Swagger UI, their project pages are worth a look. They work together to provide really simple out-of-the-box documentation, which you can configure in a number of ways.

In my case I wanted to allow markdown in my XML comments that would then be displayed to my user. I also wanted to be able to control access to my API documentation. That may seem counter-intuitive (why document it if you're going to restrict access?) so you'll just have to trust me that it was necessary.

Both of these requirements proved to be pretty simple once I found the right posts and put them all together. I figured I'd centralize them so next time I have to do this I have all the information in one place.

Before we get into configuration and tweaking, you'll need to actually install Swagger, which can be done by adding the Swashbuckle.AspNetCore package from NuGet. If you want to follow these instructions, you'll also need to install the Microsoft.Extensions.PlatformAbstractions package.

For the markdown in my XML, I had to take the following steps:
  1. Configure my project to generate XML comment documentation
  2. Configure Swagger UI to use the generated XML document
  3. Create an operation filter to format comments as markdown
  4. Set Swagger UI to use the new filter

Configuring the project to generate XML comment documentation is pretty easy. Right-click on your project and choose Properties. On the Build tab, check "XML documentation file" and in the textbox enter the name and path you want your file to be generated into. For this post I used "bin\Debug\net461\my-awesome-comments.xml" (without the quotes of course).
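As an aside, in SDK-style projects the same setting can live directly in the .csproj instead of the Properties dialog (the file name below is just this post's example):

```xml
<PropertyGroup>
  <DocumentationFile>bin\Debug\net461\my-awesome-comments.xml</DocumentationFile>
</PropertyGroup>
```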

Assuming you go with the out-of-the-box, most-simple usage of Swagger UI to get started, your startup.cs includes something like this:

public void ConfigureServices(IServiceCollection services)
{
    ...snip...
    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new Info {Title = "Identity Server", Version = "v1"});
    });
    ...snip...
}

To configure Swagger UI to use the generated XML document you have to add two lines to your AddSwaggerGen invocation. After our changes we have this:
public void ConfigureServices(IServiceCollection services)
{
    ...snip...
    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new Info {Title = "Identity Server", Version = "v1"});

        var filePath = Path.Combine(PlatformServices.Default.Application.ApplicationBasePath, "my-awesome-comments.xml");
        c.IncludeXmlComments(filePath);
    });
    ...snip...
}

That's it. Our Swagger UI now uses our XML comments. We're not able to use markdown yet, but we're getting there.

I'm not super up to speed on operation filters, but I found this solution in an answer to an issue someone opened on the Github repo for Swashbuckle. Scroll down to the answer on April 23, 2015 from user geokaps. I like having everything separated in folder structures, but you don't necessarily have to do it that way. In my case, I created a folder called Filters and created a new file in that folder called FormatXmlCommentSwaggerFilter.cs. Here are the contents of that file:
public class FormatXmlCommentSwaggerFilter : IOperationFilter
{
    public void Apply(Operation operation, OperationFilterContext context)
    {
        operation.Description = Formatted(operation.Description);
        operation.Summary = Formatted(operation.Summary);
    }

    private string Formatted(string text)
    {
        if (text == null) return null;

        // Strip out the whitespace that messes up the markdown in the xml comments,
        // but don't touch the whitespace in <code> blocks. Those get fixed below.
        string resultString = Regex.Replace(text, @"(^[ \t]+)(?![^<]*>|[^>]*<\/)", "", RegexOptions.Multiline);
        resultString = Regex.Replace(resultString, @"<code[^>]*>", "<pre>", RegexOptions.IgnoreCase | RegexOptions.Singleline | RegexOptions.Multiline);
        resultString = Regex.Replace(resultString, @"</code[^>]*>", "</pre>", RegexOptions.IgnoreCase | RegexOptions.Singleline | RegexOptions.Multiline);
        resultString = Regex.Replace(resultString, @"<!--", "", RegexOptions.Multiline);
        resultString = Regex.Replace(resultString, @"-->", "", RegexOptions.Multiline);

        try
        {
            string pattern = @"<pre\b[^>]*>(.*?)</pre>";

            foreach (Match match in Regex.Matches(resultString, pattern, RegexOptions.IgnoreCase | RegexOptions.Singleline | RegexOptions.Multiline))
            {
                var formattedPreBlock = FormatPreBlock(match.Value);
                resultString = resultString.Replace(match.Value, formattedPreBlock);
            }
            return resultString;
        }
        catch
        {
            // Something went wrong so just return the original resultString
            return resultString;
        }
    }

    private string FormatPreBlock(string preBlock)
    {
        // Split the <pre> block into multiple lines
        var linesArray = preBlock.Split('\n');
        if (linesArray.Length < 2)
        {
            return preBlock;
        }
        else
        {
            // Get the 1st line after the <pre>
            string line = linesArray[1];
            int lineLength = line.Length;
            string formattedLine = line.TrimStart(' ', '\t');
            int paddingLength = lineLength - formattedLine.Length;

            // Remove the padding from all of the lines in the <pre> block
            for (int i = 1; i < linesArray.Length - 1; i++)
            {
                linesArray[i] = linesArray[i].Substring(paddingLength);
            }

            // Re-join with newlines (Split removed them) so the <pre> formatting survives
            var formattedPreBlock = string.Join("\n", linesArray);
            return formattedPreBlock;
        }
    }
}

Once I had that filter created I just had to modify my Swagger UI configuration to use it. After these changes here's my whole Swagger UI configuration in startup.cs:
public void ConfigureServices(IServiceCollection services)
{
    ...snip...
    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new Info {Title = "Identity Server", Version = "v1"});

        var filePath = Path.Combine(PlatformServices.Default.Application.ApplicationBasePath, "my-awesome-comments.xml");
        c.IncludeXmlComments(filePath);
        c.OperationFilter<FormatXmlCommentSwaggerFilter>();
    });
    ...snip...
}

That's all I had to do to enable markdown in my XML comments and it works pretty well I have to say. The next part was a little bit trickier because I wanted to lock down my entire Swagger instance so that only authorized users (with a particular permission) would be able to access it. To do this, I took the following steps:

  1. Create Swagger authorization middleware
  2. Create an extension method to use the middleware
  3. Use the extension method to wire up the middleware

In my particular scenario I'm using Swagger to document our Identity Server API so I already had the ability to secure my other endpoints. On the surface it should have been just as straightforward to secure my Swagger endpoints. But I didn't want just anyone to be able to authenticate with our Identity Server and then view my APIs. I wanted to keep those private so only certain people (developers in my organization) could see the APIs after they're authenticated. To do that I created the following middleware to validate that the user is authenticated (logged in) and also should be able to access Swagger:
public class SwaggerAuthorizedMiddleware
{
    private readonly RequestDelegate _next;

    public SwaggerAuthorizedMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // if the request is for /swagger, check for authentication
        if (context.Request.Path.Equals("/swagger/index.html")
            && (!context.User.Identity.IsAuthenticated || context.User.Claims.All(c => c.Type != "can_access_swagger") ||
                context.User.Claims.First(c => c.Type == "can_access_swagger").Value != "true"))
        {
            // the user is trying to access /swagger/index.html, but is either not authenticated (logged in) or
            // is authenticated but does not have access to Swagger, so redirect them to the login page
            await context.ChallengeAsync();
            return;
        }

        // either the user was not trying to access /swagger/index.html or the user is authenticated and allowed,
        // so carry on to the next middleware in the pipeline
        await _next.Invoke(context);
    }
}

That's all the middleware has to do. Now, keep in mind that we already had our authentication setup and I'm just plugging into that existing authentication and checking whether the user is authenticated and has access. If you don't already have that setup (i.e. your API is not already secured) then securing your Swagger UI is going to be more complicated. Even if that's the case, hopefully this helps steer you in the right direction.
Now that I have the middleware, I want to create a really simple extension method so I can use the middleware in my startup class:
public static class SwaggerAuthorizeExtensions
{
    public static IApplicationBuilder UseSwaggerAuthorized(this IApplicationBuilder builder)
    {
        return builder.UseMiddleware<SwaggerAuthorizedMiddleware>();
    }
}

Then once I have that extension method it's just a matter of wiring it up. In startup.cs I have a Configure method. In there, after the UseMvc invocation, I want to add UseSwaggerAuthorized:
app.UseSwaggerAuthorized();


That's it! My Swagger UI is now protected with my pre-existing authentication process with an added check for whether the user should be able to access my Swagger documentation. Hope this helps someone!

Thursday, June 7, 2018

Covariance and Contravariance in C#

Covariance and Contravariance are two crazy terms that I've only ever really seen come up when Resharper tells me I'm doing something questionable. But then I was in an interview and somebody assumed - based on my other answers - that I knew what they were, but I didn't. So I figured I should probably learn.

Let me start off by saying that I've found this fantastic answer on Stack Overflow (I seriously probably spend too much time there). StuartLC goes into the right amount of detail on a really good explanation of what Covariance and Contravariance are and how and when to use them. I'm going to copy parts of his answer here just in case something ever happens to that post on SO and I can't go back and reference it.

Microsoft actually has an answer that's pretty good, if you can get past the big words and whatnot: "In C#, covariance and contravariance enable implicit reference conversion for array types, delegate types, and generic type arguments. Covariance preserves assignment compatibility and contravariance reverses it." Well, it's a good answer until that last part (at least for me) when it mentions that Covariance preserves assignment compatibility and Contravariance reverses it. I read that sentence a hundred times and still had no idea what it meant.

So let me break it down as simply as I can. Covariance enables a collection (such as an array), delegate type, or generic type argument of a more-derived type to be assigned to a collection, delegate type, or generic type argument of a less-derived type. Let's say we have the following classes:
public class LifeForm { }
public class Animal : LifeForm { }
public class Giraffe : Animal { }
public class Zebra : Animal { }

Animal is a more-derived type than LifeForm (because Animal inherits LifeForm) while Zebra and Giraffe are more-derived types than Animal (because they inherit Animal).

If we then have the following interface with a generic type argument:
public interface IDoStuff<T> { }
and the following class that implements that interface:
public class StuffDoer<T> : IDoStuff<T> { }
we can't do this:
static void Main(string[] args)
{
    IDoStuff<LifeForm> animal = new StuffDoer<Animal>();
}

That doesn't work because Animal is a more derived type than LifeForm and right now IDoStuff is Invariant and not Covariant. By making a small change our code will work. In our IDoStuff interface we just have to enable Covariance by adding the out keyword, like this:
public interface IDoStuff<out T> { }
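As a compilable sanity check, a top-level-statements sketch using this post's toy types:

```csharp
using System;

// with "out T" this assignment compiles; remove "out" and it's a compile error
IDoStuff<LifeForm> doer = new StuffDoer<Animal>();
Console.WriteLine(doer is IDoStuff<LifeForm>); // the covariant conversion succeeded

public interface IDoStuff<out T> { }
public class StuffDoer<T> : IDoStuff<T> { }
public class LifeForm { }
public class Animal : LifeForm { }
```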

Now that we've established what Covariance is and means (enabling implicit conversion from a more-derived type to a less-derived type) what is it good for? That brings me back to the SO answer I linked at the beginning of this post. StuartLC writes "Covariance is widely used with immutable collections (i.e. where new elements cannot be added or removed from a collection)". It's simple, but it's worth diving more into.

Consider the use of IList, which is Invariant. We have a method that accepts an IList of type LifeForm:
public void PrintLifeForms(IList<LifeForm> lifeForms)
{
    foreach (var lifeForm in lifeForms)
    {
        Console.WriteLine(lifeForm.GetType().ToString());
    }
}

We should be (and are) able to pass in a "heterogeneous collection" (i.e. a collection of objects that are derived from LifeForm, but aren't necessarily LifeForm itself), so this would work:
IList<LifeForm> animals = new List<LifeForm>
{
    new Giraffe(),
    new Zebra(),
    new Animal()
};

PrintLifeForms(animals);
but this would fail:
IList<Giraffe> giraffes = new List<Giraffe>
{
    new Giraffe(),
    new Giraffe(),
    new Giraffe()
};

PrintLifeForms(giraffes);

I know for me at least, this doesn't seem right. It seems like, with a collection of a derived type, I should be able to pass it to a parameter accepting a collection of a less-derived type. But I can't. Not when I'm using an IList anyway, because IList is invariant. Now, my ah-ha moment was in the next part of the SO answer when he wrote "If I maliciously change the method implementation of PrintLifeForms (but leave the same method signature), the reason why the compiler prevents passing List&lt;Giraffe&gt; becomes obvious:"

public void PrintLifeForms(IList<LifeForm> lifeForms)
{
    lifeForms.Add(new Zebra());
}

As soon as I saw that code sample, it clicked. As long as my IList is LifeForm objects I can add a zebra no problem, but if I pass in an IList of giraffes then I can't add a zebra to my list because a zebra is not a giraffe.

In this particular case, implementations of IList are allowed to have items added to them so we want IList to be invariant. In other words we don't want to accidentally expect implicit conversion from a list of Giraffes to a list of LifeForms because then we'd break if we added a zebra. If we want to be able to pass in a List of Giraffe objects then we should modify our parameter to accept an IEnumerable of LifeForm objects because IEnumerable uses a covariant generic type:
public void PrintLifeForms(IEnumerable<LifeForm> lifeForms)
{
    foreach (var lifeForm in lifeForms)
    {
        Console.WriteLine(lifeForm.GetType().ToString());
    }
}

And just like that we can now pass in our List of Giraffe objects without a problem.

That was all to explain Covariance, but the other half of this post is about Contravariance so I guess we should get to that. Contravariance is the inverse of Covariance. That is, while Covariance enables a more-derived type to be provided when a less-derived type is expected, Contravariance enables a less-derived type to be provided when a more-derived type is expected. But, what does that mean?

Let's say we have this method:
public void DoSomething(IDoStuff<Zebra> doer)
{
    doer.WriteMessage($"{doer.GetType()} was passed");
}

We can pass an instance of IDoStuff of type Zebra to the method... I guess that's obvious. But we can't pass an instance of IDoStuff of type Animal or Giraffe or LifeForm. That's because right now IDoStuff is Invariant (I reverted it from earlier) and it looks like this:
public interface IDoStuff<T>
{
    void WriteMessage(string message);
}
and it's implemented in StuffDoer like this:
public class StuffDoer<T> : IDoStuff<T>
{
    public void WriteMessage(string message)
    {
        Console.WriteLine(message);
    }
}

Right now, this would work:
DoSomething(new StuffDoer<Zebra>());
but this wouldn't:
DoSomething(new StuffDoer<Animal>());

If we want the second call to work we need to enable Contravariance on IDoStuff. We can do that by using the in keyword, like this:
public interface IDoStuff<in T>
{
    void WriteMessage(string message);
}

Now we can pass an instance of StuffDoer&lt;Animal&gt; to DoSomething. Why would we do this? What would be the purpose of enabling Contravariance? Going back to StuartLC's answer we see "Contravariance is frequently used when functions are passed as parameters." He uses Action&lt;T&gt; to explain the point. Using our example with giraffes and zebras (which is actually his example) it makes a little bit more sense. Consider this method:

public void PerformZebraAction(Action<Zebra> zebraAction)
{
    var zebra = new Zebra();
    zebraAction(zebra);
}

We can see that passing in an instance of Action&lt;Zebra&gt; would be perfectly acceptable, but we can also pass in an instance of Action&lt;Animal&gt;. Both of these work:
var zebraAction = new Action<Zebra>(z => Console.WriteLine("I'm a zebra!"));
PerformZebraAction(zebraAction);

var animalAction = new Action<Animal>(z => Console.WriteLine("I'm an animal!"));
PerformZebraAction(animalAction);

but this won't work:
var giraffeAction = new Action<Giraffe>(z => Console.WriteLine("I'm a giraffe!"));
PerformZebraAction(giraffeAction);

If we just think about what we're asking for here, we're saying we want the giraffe to do zebra stuff. That doesn't make sense. But we could ask a generic animal to do zebra stuff.


At the end of the day, in 12+ years of writing C# I've never really had to worry about Covariance or Contravariance. But now at least I understand what they mean and do so I'm a bit better informed. Hopefully I remember I wrote this post so I can reference it if I ever get confused again.

Wednesday, June 6, 2018

Fun with CSS

I recently had to figure out how to do a bunch of stuff with CSS that I'd normally do with flexbox, but without flexbox. You may have read my other post about generating a PDF from HTML. In that post I used PhantomJS to render the HTML to generate the PDF and Phantom doesn't play well with flexbox (or at least I think that was the problem I was having, but it may have been something else). At any rate, I couldn't use flexbox, but I needed to imitate flexbox. Here are the various ways I did what I needed to do to get the result I wanted without flexbox.

Sticky-ish Footer

Stick a div to the bottom of the page, regardless of whether there is enough content to push the footer to the bottom. I didn't want to (or maybe I wasn't able to) use position: fixed so I used this alternative I found here via this Stack Overflow answer.

There are only a couple of things I'd change about the solution. When I set the container div's margin to -330px the footer ends up being slightly below the end of the body, which causes the page to scroll vertically even when there isn't enough content. Changing the margin on the container div to -300px resolved that problem. The other change is that you don't need the clearfooter div anymore. That may have been necessary in 2008 when that article was written, but it's not anymore.

Here's the markup I ended up using to demonstrate this nifty little capability:
<html>
  <head>
    <style>
      html, body {
        height: 100%;
        margin: 0;
      }
      
      #container {
        min-height: 100%;
        margin-bottom: -300px;
        position: relative;
      }
      
      #footer {
        height: 300px;
        position: relative;
        background-color: red;
      }
    </style>
  </head>
  <body>
    <div id="container">
      <div id="header">Header</div>
      <div id="nav">
        <ul>
          <li><a href="#">Home</a></li>
          <li><a href="#">Page 1</a></li>
          <li><a href="#">Page 2</a></li>
        </ul>
      </div>
      <div id="content">
        Content Here.
      </div>
    </div>
    <div id="footer">Footer Here.</div>
  </body>
</html>

Equal Height Columns

The next issue I had to tackle was having two columns right next to each other taking up the exact same amount of vertical space even if one of their contents was significantly larger than the other. I found this answer on Stack Overflow, which was amazing and simple and so much fun to implement.

The markup:
<html>
  <head>
    <style>
      #container {
        overflow: hidden;
        width: 100%;
      }
      
      #left-col {
        float: left;
        width: 50%;
        background-color: orange;
        padding-bottom: 500em;
        margin-bottom: -500em;
      }
      #right-col {
        float: left;
        width: 50%;
        margin-right: -1px;
        border-left: 1px solid black;
        background-color: red;
        padding-bottom: 500em;
        margin-bottom: -500em;
      }
    </style>
  </head>
  <body>
    <div id="container">
      <div id="left-col">
        <p>Test Content</p>
        <p>longer</p>
      </div>
      <div id="right-col">
        <p>Test Content</p>
      </div>
    </div>
  </body>
</html>

Forcing a Page Break with Flexbox

This one isn't so much a solution as a pointer to an answer on Stack Overflow explaining why I couldn't use flexbox in the first place: I was trying to force a page break, and page breaks are ignored when any of the element's ancestors is a flex container.
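To make that concrete, here's a minimal sketch of the workaround (the class names are invented for illustration): the break properties only take effect once the element's ancestors stop using flex layout.

```css
/* Hypothetical print stylesheet. break-inside/page-break-inside are
   ignored while any ancestor of .report-section is a flex container,
   so fall back to block layout for print. */
@media print {
  .flex-wrapper {
    display: block; /* on screen this is display: flex; */
  }
  .report-section {
    break-inside: avoid;      /* modern property */
    page-break-inside: avoid; /* legacy fallback */
  }
}
```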

Non-Bullet Bullets

The final issue I had to resolve was creating bullets without using the <li> tag. I don't remember the entirety of the problem, but it had to do with the way list items align themselves with their bullets and how multi-line items are displayed when you use list-style-position: inside. What I ultimately wanted was a non-indented list where the bullets were always aligned with each other and the text was always aligned with itself, no matter how many times the text wrapped to a new line. I used this answer on Stack Overflow as a starting point, then made some minor changes to get a solution I was happy with.

\2022 is the CSS escape sequence for the Unicode bullet character and \00a0 is the escape sequence for a non-breaking space.
<html>
  <head>
    <style>
      .assumptions {
        float: left;
        width: 50%;
        margin-right: -1px;
      }
      
      .assumption {
        overflow: hidden;
        width: 100%;
      }
      
      .assumption > .bullet {
        float: left;
        width: 1em;
      }
      
      .assumption > .bullet:before {
        content: "\2022 \00a0";
        text-align: right;
        font-weight: bold;
      }
      
      .assumption > .text {
        float: left;
        width: 90%;
        margin-right: -1px;
      }
    </style>
  </head>
  <body>
    <div class="assumptions">
      <div class="assumption">
        <div class="bullet"></div>
        <div class="text">Something</div>
      </div>
      <div class="assumption">
        <div class="bullet"></div>
        <div class="text">Something else</div>
      </div>
      <div class="assumption">
        <div class="bullet"></div>
        <div class="text">Another thing</div>
      </div>
      <div class="assumption">
        <div class="bullet"></div>
        <div class="text">A really really really really really really really really really really really really really really really really really really really really really really long thing</div>
      </div>
    </div>
    <div class="assumptions">
      <div class="assumption">
        <div class="bullet"></div>
        <div class="text">Something</div>
      </div>
      <div class="assumption">
        <div class="bullet"></div>
        <div class="text">Something else</div>
      </div>
      <div class="assumption">
        <div class="bullet"></div>
        <div class="text">Another thing</div>
      </div>
    </div>
  </body>
</html>

Upon further review (and actually writing this blog post) I can see that my problems weren't really caused by flexbox or Phantom at all. It was more a case of having to figure out how to do everything in new and different ways for this specific project. But, whatever, I made it work. And now I've written about it so I don't forget.

Oh, here's the markup using all of these approaches together.
<html>
  <head>
    <style>
      html, body {
        height: 100%;
        margin: 0;
      }
      
      #container {
        min-height: 100%;
        margin-bottom: -330px;
        position: relative;
      }
      
      #footer {
        height: 330px;
        position: relative;
        background-color: red;
      }
      
      #content {
        overflow: hidden;
        width: 100%;
      }
      
      #left-col {
        float: left;
        width: 50%;
        background-color: yellow;
        padding-bottom: 500em;
        margin-bottom: -500em;
      }
      
      #right-col {
        float: left;
        width: 50%;
        background-color: blue;
        padding-bottom: 500em;
        margin-bottom: -500em;
      }

      .list-container {
        float: left;
        width: 50%;
        margin-right: -1px;
      }
      
      .list-item {
        overflow: hidden;
        width: 100%;
      }
      
      .list-item > .bullet {
        float: left;
        width: 1em;
      }
      
      .list-item > .bullet:before {
        content: "\2022 \00a0";
        text-align: right;
        font-weight: bold;
      }
      
      .list-item > .text {
        float: left;
        width: 90%;
        margin-right: -1px;
      }
    </style>
  </head>
  <body>
    <div id="container">
      <div id="header">Header</div>
      <div id="nav">
        <ul>
          <li><a href="#">Home</a></li>
          <li><a href="#">Page 1</a></li>
          <li><a href="#">Page 2</a></li>
        </ul>
      </div>
      <div id="content">
        <div class="list-container" id="left-col">
          <div class="list-item">
            <div class="bullet"></div>
            <div class="text">Something</div>
          </div>
          <div class="list-item">
            <div class="bullet"></div>
            <div class="text">Something else</div>
          </div>
          <div class="list-item">
            <div class="bullet"></div>
            <div class="text">Another thing</div>
          </div>
          <div class="list-item">
            <div class="bullet"></div>
            <div class="text">A really really really really really really really really really really really really really really really really really really really really really really long thing</div>
          </div>
        </div>
        <div class="list-container" id="right-col">
          <div class="list-item">
            <div class="bullet"></div>
            <div class="text">Something</div>
          </div>
          <div class="list-item">
            <div class="bullet"></div>
            <div class="text">Something else</div>
          </div>
          <div class="list-item">
            <div class="bullet"></div>
            <div class="text">Another thing</div>
          </div>
        </div>
      </div>
    </div>
    <div id="footer">This is my footer</div>
  </body>
</html>

Converting HTML to PDF in .Net Core

A couple of weeks ago I wrote a post about merging multiple PDFs together. After we got that piece up and running we decided we wanted to create a PDF on the fly from some markup. Once again I turned to Google to find a way.

This turned out to be one of the greatest discoveries I could have possibly made about .Net Core 2.0: you can use node.js from right inside! It's even really easy!

First, if you're not referencing the default Microsoft.AspNetCore.All metapackage, you'll want to install the Microsoft.AspNetCore.NodeServices NuGet package.

Once you have the package installed you need to add node services to your configuration. Just add this to the ConfigureServices method of Startup.cs:
services.AddNodeServices();

The next part seems a bit counter-intuitive to me, but it's what we have to do: change the controller method that is going to invoke node.js so that it accepts a parameter of type INodeServices decorated with the FromServices attribute. So if my old controller method signature was this:
public async Task<IActionResult> GeneratePdf()
it would look like this after my changes:
public async Task<IActionResult> GeneratePdf([FromServices] INodeServices nodeServices)

Now we're free to invoke node.js functions from inside this controller method. When we're ready to invoke a node.js function we simply call InvokeAsync on our nodeServices parameter and pass in the location of the node.js function file and whatever parameters we are passing to the function.
var pdf = await nodeServices.InvokeAsync<byte[]>("./create-pdf", html);

In my particular case I'm passing in the markup I want to convert to a PDF. The node function is in a file named "create-pdf.js" at the root of my project structure (it's a sibling of Startup.cs).
So that's how you get it all set up, and this is how you actually use node.js to convert markup to a PDF:
module.exports = function (callback, html) {
  var jsreport = require('jsreport-core')();

  jsreport.init().then(function () {
    return jsreport.render({
      template: {
        content: html,
        engine: 'jsrender',
        recipe: 'phantom-pdf',
        phantom: {
          format: 'Letter'
        }
      }
    }).then(function (resp) {
      callback(/* error */ null, resp.content.toJSON().data);
    });
  }).catch(function (e) {
    callback(/* error */ e, null);
  });
};

This approach uses jsreport-core to generate the PDF from the provided markup, then provides the byte array representing the PDF back to the caller (my controller method). From there I can do what I want with the PDF.
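To round that out, here's a sketch of what the full controller method might look like end to end; the HTML string and file name are placeholders I made up for the example, not code from the actual project:

```csharp
// Hypothetical end-to-end controller method: generate the PDF via the
// node function and stream it back to the browser.
public async Task<IActionResult> GeneratePdf([FromServices] INodeServices nodeServices)
{
    var html = "<h1>Hello from jsreport</h1>"; // placeholder markup
    var pdf = await nodeServices.InvokeAsync<byte[]>("./create-pdf", html);
    return File(pdf, "application/pdf", "generated.pdf");
}
```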

Force Merge With Git

We use git through VSTS for our source control repository and CI/CD processes, and I love it. However, sometimes it can make my life a bit tricky. Recently, another developer accidentally merged code straight from the dev branch into the master branch. We caught the mistake and used the Revert feature of VSTS to roll the changes back. We thought we were all good then, but we were wrong.

When you use Revert in VSTS what you're actually doing is creating new commits: rather than rewriting history, VSTS adds two additional commits that take your source code back to what it was before the commit you're reverting. I wouldn't have thought that'd be a problem, but it was. When we completed our work and tried to merge up to master we got all sorts of merge conflicts that we were unable to resolve. (On a side note, there's a cool plug-in for VSTS from Microsoft DevLabs that allows you to resolve merge conflicts in the browser: https://marketplace.visualstudio.com/items?itemName=ms-devlabs.conflicts-tab.)

What I ended up doing was temporarily allowing myself to override branch policies, reverting the master branch to the commit before the accidental merge, force pushing master to VSTS, then creating a new pull request. It was pretty easy once I figured out what to do, but as usual I wanted to document it here for future reference.

Here are the git commands I used:
git checkout master
git pull
git reset --hard [commit id]
git push -f
Create new Pull Request in VSTS
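If you'd rather rehearse the reset before pointing it at a real repository, a throwaway repo shows the effect; everything below is made-up demo data, not my actual project:

```shell
# Build a scratch repo, make a "good" commit and a "bad" one on top,
# then hard-reset the branch back to the good commit.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"

echo "good" > file.txt
git add file.txt
git commit -qm "good commit"
good=$(git rev-parse HEAD)

echo "accidental merge" > file.txt
git commit -qam "accidental merge from dev"

git reset --hard "$good"   # branch and working tree now match the good commit
cat file.txt               # prints: good
```

From there a `git push -f` would overwrite the remote branch, which is why the branch policy override was needed.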

Tuesday, June 5, 2018

Get String of Enum

Several times I've had a situation come up where I wanted to get the string value of a custom Enum, and I've gone a different route each time. Now, there's an easy, built-in, really fast way to get the name of an Enum. Start with an Enum like this:
public enum PersonAttributes
{
    FirstName,
    LastName,
    Age
}
If all I'm after is the name of the Enum then I can do this:
var name = Enum.GetName(typeof(PersonAttributes), PersonAttributes.FirstName);
The advantage of this approach is that it's fast. In my tests, this call completes in about 00:00:00.0001430, which is less than a millisecond. The drawback is that what I end up with is "FirstName", which isn't properly cased. If I wanted to use that Enum name in a form or a message or something, it isn't particularly helpful.

This time around what I really needed was a way to specify some rather arbitrary text instead of the name of the Enum. I stumbled across this Stack Overflow answer and I really like the way it works. As is the purpose of this blog, I'm writing about it for my future use. All we have to do is write an extension method, which the poster calls ToDescription, and then add a Description attribute to each Enum value. Here's the new Enum:
public enum PersonAttributes
{
    [Description("First Name")]
    FirstName,
    [Description("Last Name")]
    LastName,
    Age
}

And then the extension method looks like this:
// DescriptionAttribute requires a using System.ComponentModel; directive
public static class AttributesHelperExtension
{
    public static string ToDescription(this Enum value)
    {
        var da = (DescriptionAttribute[])value.GetType().GetField(value.ToString()).GetCustomAttributes(typeof(DescriptionAttribute), false);
        return da.Length > 0 ? da[0].Description : value.ToString();
    }
}

Now when we want to get the description we can just call it like this:
var description = PersonAttributes.FirstName.ToDescription();

The advantages and drawbacks of this approach are the inverse of the earlier method. This is slower, but gives us a useful value. Also, when I say slower, it takes 00:00:00.0132981, which is about 13 milliseconds. Totally acceptable to me.
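If those 13 milliseconds ever became a problem, one option (my own sketch, not part of the Stack Overflow answer) is to pay the reflection cost only once per value and cache the result:

```csharp
// Hypothetical caching wrapper around ToDescription. Assumes the
// extension method above and a using System.Collections.Concurrent;
public static class CachedDescriptionExtension
{
    private static readonly ConcurrentDictionary<Enum, string> Cache =
        new ConcurrentDictionary<Enum, string>();

    public static string ToCachedDescription(this Enum value)
    {
        // Reflection runs only on the first call per enum value.
        return Cache.GetOrAdd(value, v => v.ToDescription());
    }
}
```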

Simple Polymorphism

What is polymorphism?

According to Wikipedia it "is the provision of a single interface to entities of different types".

According to Techopedia it is a "concept that refers to the ability of a variable, function or object to take on multiple forms".

Basically, polymorphism means we have code that can be reused through the construction of super/base/parent and sub/child classes. That's really it.

My favorite example is the Animal base class. In nature we might say that all (or nearly all, so let's just say all) animals have certain characteristics and abilities. For example, all animals can move, eat, and breathe (again, let's make some assertions we know are not strictly true for the sake of the argument). However, the manner in which animals do those things varies wildly. If we were to code animals using polymorphism we'd end up with a base class that lets each child class supply its own version of certain methods. It might look like this:
public class Animal
{
    public virtual string Move()
    {
        return "The animal moves";
    }
}

This base class defines that an Animal can move, but allows child classes to override the way the animal moves so they can be more specific. Some child classes might look like this:
public class Horse : Animal
{
    public override string Move()
    {
        return "The horse trots";
    }
}

public class Fish : Animal
{
    public override string Move()
    {
        return "The fish swims";
    }
}

public class Bird : Animal
{
    public override string Move()
    {
        return "The bird flies";
    }
}

Now when we want to create an animal we can either create a generic Animal, a Horse, a Fish, or a Bird. No matter which animal we create, we can always invoke the Move method on our instance because we know that every Animal (or derived animal) can Move.
class Program
{
    static void Main(string[] args)
    {
        Animal animal = new Animal();
        Console.WriteLine(animal.Move());
        animal = new Horse();
        Console.WriteLine(animal.Move());
        animal = new Fish();
        Console.WriteLine(animal.Move());
        animal = new Bird();
        Console.WriteLine(animal.Move());
    }
}

What we end up with after all of that is one type taking many forms. Our Animal variable took the form of a Horse, a Fish, and a Bird. Let's look at one more example to get a better idea of why we might do this. We'll create a zoo.
public class Zoo
{
    public Zoo(List<Animal> animals)
    {
        Animals = animals;
    }

    public List<Animal> Animals { get; set; }
}
Then we'll populate our zoo:
class Program
{
    static void Main(string[] args)
    {
        var zoo = new Zoo(new List<Animal> {new Horse(), new Fish(), new Bird()});

        foreach (var animal in zoo.Animals)
        {
            Console.WriteLine(animal.Move());
        }
    }
}

This is simple polymorphism. A complicated word to describe a relatively simple concept. Objects can take many forms.

Thursday, May 24, 2018

Combining Two PDFs Using .Net Core and a Free Library

I recently wrote a four-part series of posts describing how to display SSRS reports inside an Angular application. Once we delivered the requested features a few more stories were added, among them the ability to attach multiple PDFs to each other before displaying them to the user. Since we already had everything in place to retrieve and display a single PDF at a time, the (obviously) tricky part of this task was figuring out how to merge multiple PDFs into a single document on the fly.

I think my first step was probably the same as most other developers': I Googled it. I kept coming across iTextSharp as the most common solution others had used, so I dug into it and it looked perfect. Unfortunately, it wasn't free for our scenario (the license allows certain free usages, but we didn't fall under any of them) so it wasn't really an option. I found a couple of other ways to do what I needed, but none were as good or as fast. I ultimately came across a Stack Overflow answer where someone mentioned a .Net Core port (did I mention this was in .Net Core 2.0?) of the last version of iTextSharp released under the LGPL (read: essentially free) license. Perfect! All I had to do was figure out how to make it work, which led me to this post.

The NuGet package is iTextSharp.LGPLv2.Core (there is also iTextSharp.LGPLv2.Core.Fix, but I'm not sure what the differences are). Once I had that installed it was a simple matter of writing a method to merge multiple PDFs together. Since I wanted the method to be reusable and injectable, I created a basic interface with a single method that accepts a variable-length array (via the params keyword) of byte arrays representing the PDFs to merge.
public interface IProvidePdfMerging
{
    byte[] Merge(params byte[][] originals);
}

And then I created the implementing method.
public class PdfMerger : IProvidePdfMerging
{
    public byte[] Merge(params byte[][] originals)
    {
        var files = originals.ToList();

        using (var stream = new MemoryStream())
        {
            var doc = new Document();
            var pdf = new PdfCopy(doc, stream);
            doc.Open();

            PdfReader reader;
            PdfImportedPage page;

            files.ForEach(file =>
            {
                reader = new PdfReader(file);
                for (var i = 0; i < reader.NumberOfPages; i++)
                {
                    page = pdf.GetImportedPage(reader, i + 1);
                    pdf.AddPage(page);
                }

                pdf.FreeReader(reader);
                reader.Close();
            });

            doc.Close();

            return stream.ToArray();
        }
    }
}

And that's pretty much it. I pass in the byte arrays representing the PDFs I want to merge, in the order I want to merge them, and then the resulting byte array is my new PDF. It's clean, it's fast, it's reusable, it's injectable. I covered all the bases pretty easily here.
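Calling it ends up being just a few lines; something like this (the file names are invented for the example):

```csharp
// Hypothetical usage: merge two PDFs from disk and save the result.
IProvidePdfMerging merger = new PdfMerger();
var merged = merger.Merge(
    File.ReadAllBytes("cover.pdf"),
    File.ReadAllBytes("report.pdf"));
File.WriteAllBytes("combined.pdf", merged);
```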

Saturday, May 19, 2018

Angular Material Full Size Dialog on Small Devices

I restarted a project recently and decided to use Angular Material to build it. So far it's been working out pretty well. The latest version of Material (6.0.1) works nicely. The documentation leaves a lot to be desired, but it's not that difficult to muck around in the source and find what I need.

Today I started working on the sign-in/sign-out dialog for my site. I want the dialog to open full screen when the user is on a small device, and open at some smaller size when the user is on a regular computer (laptop, desktop, whatever). This didn't end up being very hard, but I had to mash together a couple of solutions I found, so I wanted to put this post up for future reference.

If you Google something like "angular material full screen dialog mobile" you'll get a bunch of responses where people are looking for this exact feature. I started with this comment on one of the GitHub issues that was opened and then tweaked it a little bit using the BreakpointObserver that comes with Material (at least it does in version 6). I haven't finalized what I want my sign-in/sign-out page/dialog to look like yet, so for now let's just say we want the dialog to open at half the width and half the height of the window on larger devices and full screen on smaller devices.

Here's the code, and I'll explain it a bit more afterward.
   1:  import { Component } from '@angular/core';
   2:  import { BreakpointObserver, Breakpoints, BreakpointState } from '@angular/cdk/layout';
   3:  import { Observable } from 'rxjs';
   4:  import { MatDialog, MatDialogRef } from '@angular/material';
   5:  import { SignInComponent } from '../sign-in/sign-in.component';
...snip...
  12:  export class SidenavComponent {
  13:    isExtraSmall: Observable<BreakpointState> = this.breakpointObserver.observe(Breakpoints.XSmall);
  14:  
  15:    constructor(private breakpointObserver: BreakpointObserver, private dialog: MatDialog) {}
  16:  
  17:    openSignInDialog(): void {
  18:      const signInDialogRef = this.dialog.open(SignInComponent, {
  19:        width: '50%',
  20:        height: '50%',
  21:        maxWidth: '100vw',
  22:        maxHeight: '100vh',
  23:      });
  24:  
  25:      const smallDialogSubscription = this.isExtraSmall.subscribe(result => {
  26:        if (result.matches) {
  27:          signInDialogRef.updateSize('100%', '100%');
  28:        } else {
  29:          signInDialogRef.updateSize('50%', '50%');
  30:        }
  31:      });
  32:  
  33:      signInDialogRef.afterClosed().subscribe(result => {
  34:        smallDialogSubscription.unsubscribe();
  35:      });
  36:    }
  37:  }

The only real issue I have with this code is that I have to specify the exact height and width for larger devices twice: once when opening the dialog and again in the updateSize call. All things considered, that's not really a big deal to me.

This is also pretty self-explanatory (I think).

On line 18 we open the dialog with a size that is half the height and width of the current window. We also explicitly set the max height and max width of the dialog to match the height and width of the viewport (the device, essentially). If you leave that part off you'll end up with your dialog being pulled over to the left of the screen and not taking up the full width.

On line 25 we subscribe to the observable that's watching the device's size to see if it drops below the predefined XSmall breakpoint. There are other breakpoints we could have used just by changing the definition on line 13. When we subscribe we immediately get the last value from the observable. On devices that fall under the XSmall breakpoint, our result variable has a matches property that is set to true. On other devices, matches is set to false. All we have to do is invoke the updateSize function on the dialog ref we received back when we opened the dialog.

Finally, on line 33 we make sure to unsubscribe from the observable. This is just a clean-up precaution to make sure we avoid memory leaks.