Thursday, December 9, 2021

Linq DistinctBy

I've had to search for this quite a few times so I figured it was probably time to write it up. I actually did find this answer somewhere else (here, if you're interested) and modified it only slightly to follow my standards.

As I often (somehow) forget, the default .Distinct() extension in System.Linq for an IEnumerable just checks whether objects are equal using the default equality comparer. For reference types that don't override Equals, that means two objects are only "equal" if they're actually references to the same exact object. I can say with a good amount of confidence that in 16+ years of working in C#, that's never been what I was trying to do when I used .Distinct(). My most common usage is determining whether two objects have the same data on them - usually one specific field (an ID field, for example). There are Stack Overflow posts and extension libraries out there that include this, but I'm pretty loath to bring in a whole library for one simple extension method. Which brings us to today.

I'm using Dapper to get a list of objects from the database and then using the splitOn feature to get the children of those objects. Think of a teacher and students: I'm getting all the teachers in the school and all of each teacher's students in a single query, then breaking the students out under their teachers. That's no problem and doesn't even require .Distinct(). But if I also include the clubs that each teacher oversees, I can easily end up with duplicate students in my results. The easiest way to get the distinct students would be the .Distinct() extension method included in System.Linq - if only it worked the way it seems like it should. Instead, I'll have to write my own. So here we are.


public static IEnumerable<T> DistinctBy<T>(this IEnumerable<T> list, Func<T, object> propertySelector) where T : class
{
  return list.GroupBy(propertySelector).Select(x => x.First());
}
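
Usage ends up looking like this (a quick sketch; the Student type and the data source are hypothetical stand-ins):

using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical type for illustration
public class Student
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// students may contain duplicates (e.g. from the Dapper query described above)
IEnumerable<Student> students = GetStudentsWithDuplicates(); // hypothetical data source
var distinctStudents = students.DistinctBy(s => s.Id).ToList();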

That's it, really. The somewhat obvious flaw is that we'll take the first match we find, but if you're looking for distinct objects, that really shouldn't be too big of a deal. Hopefully this helps you (even if "you" are really just future me).

Wednesday, December 1, 2021

Removing A Single Commit From Master

I was asked today how you would go about rolling back a single commit in git, but keeping the commits that came after that one. It turned out to be pretty easy* so I figured I'd write it up for future me to refer back to.

To set the stage, I created a new directory and ran git init on it, then added nine blank text files. For simplicity I just named them First.txt, Second.txt, etc. All nine files were committed in the first commit after init. I then edited each file, adding some arbitrary text, and committed after each one: I added "some text goes here" to First.txt, saved it, and committed that change, then repeated that process for each of the remaining files. That gave me a total of 10 commits (the initial commit plus one for each file change).

I reviewed my commits by running git log --oneline (the --oneline parameter just shows a summary view instead of the full log) to find the commit I wanted to skip, deciding on the changes to Third.txt, which is commit b8b2f8c. Since that's the one I want to skip, I actually need the ID of the commit just before it to use as the base of my rebase, which is 5b124e0.

Now that I know which commit I'm going to skip I'm ready to use an interactive rebase to drop it from the history.
  • git rebase -i 5b124e0
  • <text editor launches displaying all commits beginning with the one I want to drop in ascending order>
  • change "pick" in the first line (pick b8b2f8c <commit message>) to "drop"
  • save
  • close text editor
That's it! Since none of the subsequent commits touched Third.txt and only Third.txt was affected by the commit I dropped, there were no conflicts. If I look at my commit log now I see it has all of the commits except b8b2f8c and I can check Third.txt to see that it is empty (as it was after the initial commit).
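
For reference, here's what the top of the rebase todo list looks like after the edit (commit messages elided, just as above; every line after the drop stays as "pick"):

drop b8b2f8c <commit message>
pick <commit id> <commit message>
pick <commit id> <commit message>
...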

That's the easy case. What happens when you have subsequent commits that have touched the same files as the commit you're trying to drop? You're going to have to manually merge them in. Using the same setup as before, I made additional changes to Third.txt and created a new commit for it (941a0d1). Then I went through the same steps as above, but this time instead of getting a nice friendly message about how everything worked I get the following:
Auto-merging Third.txt
CONFLICT (content): Merge conflict in Third.txt
error: could not apply 941a0d1... <commit message>
hint: Resolve all conflicts manually, mark them as resolved with
hint: 'git add/rm <conflicted_files>', then run 'git rebase --continue'.
hint: You can instead skip this commit: run 'git rebase --skip'.
hint: To abort and get back to the state before 'git rebase', run 'git rebase --abort'.
Could not apply 941a0d1... <commit message>
When I open up Third.txt I can see that there's a merge conflict that I can clear up. I remove the unwanted changes, leave in the changes from my last commit, save, and close. Now I run git add Third.txt, then git commit -m "Resolve conflict in Third.txt", and then git rebase --continue and I'm done! I can see that the commit I wanted to drop is gone, but everything else is there. The only difference is that the latest commit now has my new commit message "Resolve conflict in Third.txt" instead of the original commit message, and the SHA1 (commit ID) of that latest commit has changed (it is no longer 941a0d1).
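
In command form, the whole resolution is just:

# after editing Third.txt to remove the conflict markers
git add Third.txt
git commit -m "Resolve conflict in Third.txt"
git rebase --continue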

* This is easy and straightforward if none of the commits after the one you're dropping touch the same files as the commit you're dropping. If they do, things get a bit messier, but it's still doable.

Wednesday, November 3, 2021

Angular Forms: setValue, setValidators, and updateValueAndValidity

This one was a doozy. When working with Reactive Forms in Angular you may find yourself changing the values and/or validators of controls on the form based on some user input. Doing so is a pretty simple operation. Angular provides built-in functions called setValue and setValidators, whose names are hopefully self-explanatory.

In my case I have a payment form where the payment type can be cash, check, or credit card and each payment type requires different data. When the user changes from credit card to check we don't need to require the card number field any longer. Makes sense, right? Using the built-in functions, this should just be a simple matter of calling setValidators on the credit card and check fields and moving on.

this.checkoutForm.controls.address.setValidators(null);
this.checkoutForm.controls.cardNumber.setValidators(null);
this.checkoutForm.controls.city.setValidators(null);
this.checkoutForm.controls.cvvCode.setValidators(null);
this.checkoutForm.controls.expirationMonth.setValidators(null);
this.checkoutForm.controls.expirationYear.setValidators(null);
this.checkoutForm.controls.firstName.setValidators(null);
this.checkoutForm.controls.lastName.setValidators(null);
this.checkoutForm.controls.postalCode.setValidators(null);
this.checkoutForm.controls.state.setValidators(null);

this.checkoutForm.controls.checkName.setValidators(Validators.required);
this.checkoutForm.controls.checkNumber.setValidators(Validators.required);

After running this code, you'd expect address, cardNumber, city, etc. to no longer be required and checkName and checkNumber to be required. At least, that's what I'd expect. It turns out there's one more step. We have to tell Angular to update the value and validity of each of those controls whose validators were changed. This is another easy one and it uses a built-in function again, called updateValueAndValidity.

this.checkoutForm.controls.address.updateValueAndValidity();
this.checkoutForm.controls.cardNumber.updateValueAndValidity();
this.checkoutForm.controls.city.updateValueAndValidity();
this.checkoutForm.controls.cvvCode.updateValueAndValidity();
this.checkoutForm.controls.expirationMonth.updateValueAndValidity();
this.checkoutForm.controls.expirationYear.updateValueAndValidity();
this.checkoutForm.controls.firstName.updateValueAndValidity();
this.checkoutForm.controls.lastName.updateValueAndValidity();
this.checkoutForm.controls.postalCode.updateValueAndValidity();
this.checkoutForm.controls.state.updateValueAndValidity();

this.checkoutForm.controls.checkName.updateValueAndValidity();
this.checkoutForm.controls.checkNumber.updateValueAndValidity();

This makes Angular aware that the validators have changed and reevaluates the validity of each control (and the form itself). This is all fine so far. Next we have a requirement that when the user changes from credit card to check we want to clear the credit card information and vice versa. No problem. We'll just use the setValue function.

this.checkoutForm.controls.address.setValue(null);
this.checkoutForm.controls.cardNumber.setValue(null);
this.checkoutForm.controls.city.setValue(null);
this.checkoutForm.controls.cvvCode.setValue(null);
this.checkoutForm.controls.expirationMonth.setValue(null);
this.checkoutForm.controls.expirationYear.setValue(null);
this.checkoutForm.controls.firstName.setValue(null);
this.checkoutForm.controls.lastName.setValue(null);
this.checkoutForm.controls.postalCode.setValue(null);
this.checkoutForm.controls.state.setValue(null);

Simple and straightforward again, right? I think so. To recap, we're updating the validators, updating the values, and letting Angular know that we did that. Cool. When I wrote some unit tests against this code, I found something very weird. I wrote a test to expect updateValueAndValidity to have been called one time for each control. But the test failed because updateValueAndValidity was being called twice for each control. But that doesn't make any sense at all. I'm only calling it once. I spent hours trying to figure this out, with all kinds of console.log statements in my code until I finally realized that updateValueAndValidity was being called immediately after I called setValue.

This was really confusing for me because the way I learned to do this was that you called setValidators, setValue, then updateValueAndValidity and went on your way. I had even previously written unit tests for this exact sequence of steps and they all passed. So what gives!? I wrote my tests slightly differently this time, which exposed the issue. This time when I spied on checkoutForm.controls.address.setValue, I specified .and.callThrough(), which means keep an eye on it, but let it happen the way it always would anyway. In the past I had always just spied on it, which prevents it from calling through the way it normally would, thus hiding that updateValueAndValidity was being called twice.
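
Here's the shape of the test that finally exposed it (a Jasmine sketch; the component and its switchPaymentType method are hypothetical stand-ins for the code above):

it('updates value and validity when the payment type changes', () => {
  const address = component.checkoutForm.controls.address;

  // .and.callThrough() keeps an eye on the call but still lets the real setValue run
  spyOn(address, 'setValue').and.callThrough();
  spyOn(address, 'updateValueAndValidity').and.callThrough();

  component.switchPaymentType('check'); // hypothetical method containing the code above

  // I expected 1 call here; it fails with 2 because setValue triggers one internally
  expect(address.updateValueAndValidity).toHaveBeenCalledTimes(2);
});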

That's right, it turns out setValue actually calls updateValueAndValidity for us, but setValidators doesn't. When you really stop to think about it, that makes perfect sense. Just because you updated the validators doesn't mean you want to check the validity of the controls immediately. You may want to wait until something else happens. My confusion had to do with the way I learned to use these three functions to modify form controls on the fly. Hopefully this helps you (or better yet, future me) at some point.


BONUS!!!

You don't actually have to setValue and updateValueAndValidity on every single control on the form. You can invoke patchValue and updateValueAndValidity directly on the form to make your code cleaner. Here's how my code ended up looking using those.

this.checkoutForm.patchValue({
  address: null,
  cardNumber: null,
  city: null,
  cvvCode: null,
  expirationMonth: null,
  expirationYear: null,
  firstName: null,
  lastName: null,
  postalCode: null,
  state: null
});
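
And then, as mentioned above, a single form-level call takes the place of the per-control calls:

this.checkoutForm.updateValueAndValidity();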

Angular source: https://github.com/angular/angular/blob/e49fc96ed33c26434a14b80487dd912d8c76cace/packages/forms/src/model.ts

Reference for patchValue vs. setValue: https://ultimatecourses.com/blog/angular-2-form-controls-patch-value-set-value

Friday, October 1, 2021

Checking Authentication Without The Authorize Attribute

I wrote a post a couple of years ago about securing an endpoint with an API key that is generated dynamically. I recently came across a scenario where I wanted to use the same endpoint for authenticated requests and non-authenticated requests. That is, regardless of whether someone is logged into my app or they've used a valid application to create their request, I want the same endpoint to service that request.

That set me down the path of trying to figure out how to check whether the request is authenticated without using the [Authorize] attribute on the controller class or the action method. It turned out to be pretty easy and testable, but it took me a while to find it.

We can check that User (the ClaimsPrincipal of the controller's HttpContext) is not null and then - if it isn't null - check whether the IsAuthenticated property of User.Identity is true. That's it. Just a couple of simple checks to tell us whether the request is authenticated.

if (User == null || !User.Identity.IsAuthenticated) {/*Request is not authenticated*/}
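
In context, that check might sit in an action like this (a minimal ASP.NET Core sketch; the route, the header name, and the ValidateApiKey helper are all hypothetical):

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/orders")]
public class OrdersController : ControllerBase
{
    [HttpGet] // note: no [Authorize] attribute anywhere
    public IActionResult Get()
    {
        if (User == null || !User.Identity.IsAuthenticated)
        {
            // Request is not authenticated - fall back to validating the API key
            if (!ValidateApiKey(Request.Headers["X-Api-Key"]))
            {
                return Unauthorized();
            }
        }

        return Ok(); // same endpoint services the request either way
    }

    private static bool ValidateApiKey(string apiKey) =>
        !string.IsNullOrEmpty(apiKey); // stand-in for the real validation
}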

Hopefully this helps you (or future me) down the road.

Thursday, July 15, 2021

Patch with JsonPatchDocument in dotnet 5

As I mentioned recently in an update to my PUT vs POST... post, I've been using PATCH all wrong for a long time. It turns out the spec for PATCH says you're supposed to send a set of instructions for how the server should modify an existing resource.

With PATCH, however, the enclosed entity contains a set of instructions describing how a resource currently residing on the origin server should be modified to produce a new version.

That seems easy enough, I guess. Just tell the server, using a particular format, how to change the resource. And it is easy enough. But I ran into a small problem that took me way too long to solve so I'm writing it up.

I'm communicating from an Angular app to a dotnet 5 Web API. Here are some surprisingly good instructions for consuming a JsonPatchDocument on the server in my scenario. On the front-end I decided not to use a library and instead just send up the JSON in the expected format, like this:

const patch = { op: 'replace', path: `/firstName`, value: 'Fred' } as JsonPatchDocument;

That should have worked (so I thought), but it didn't. My JsonPatchDocument<Person> was resolving to null instead of a JsonPatchDocument. Eventually, I stumbled on a github issue for a totally separate library that I'm not using and one of the comments said this: "A correct JSONPatch body must contain an array of operations." Which, in hindsight, duh. The solution ended up being super simple: just send an array containing my single operation instead of sending the single operation.
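
In code, the fix is just wrapping the operation in an array (same hypothetical JsonPatchDocument type as above):

const patch = [{ op: 'replace', path: `/firstName`, value: 'Fred' }] as JsonPatchDocument;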

Other than that small issue, Microsoft's instructions were spot on. I guess technically their instructions were spot on and I just overlooked that detail. Hopefully this helps you (or future me).

Friday, June 18, 2021

Font Awesome for Angular with Data-Driven Icons

Font Awesome is an easy-to-use library full of tons of icons delivered as fonts. There's also an Angular implementation for it, but it's a little trickier to use. We're using the Angular version and needed to be able to render icons based on data that would only be known at runtime. With vanilla Font Awesome you'd just bind the class based on the data and that's that, but with Angular Font Awesome you have to use a componentFactoryResolver to build the icon based on the data. There's technically documentation on exactly how to do this, but even with that it still took me quite a while to figure out exactly what needed to happen so I figured I'd write it up.

First things first, I was set on the right path by this SO answer so feel free to start there yourself or just keep reading. That answer links to this official documentation and between the two I was on my way, but still confused. The documentation says you just bind the icon you want (in the case of the example, faUser). That's not very dynamic and it's definitely not data-driven. What we need is a way to specify the prefix and the icon we want and having that render based on dynamic data. This issue on github shows that you can use the icon() function to generate an icon, but that wasn't working either. Eventually, through some significant debugging and source code review, I figured out that you can just pass an array to componentRef.instance.icon and your icon will show up, if you've already registered that icon in your icon library. Here's the final solution.

Step 1: Import the icons you're going to use. In our case we have a separate module to import all of the icons we want to use. We also have a pro license so we've got multiple icon types from Font Awesome to bring in. Here's what that module looks like.

import { NgModule } from '@angular/core';

import { FontAwesomeModule, FaIconLibrary } from '@fortawesome/angular-fontawesome';
import { faSearch as falSearch } from '@fortawesome/pro-light-svg-icons'; // fal
import { faCheckCircle, faShoppingCart as farShoppingCart } from '@fortawesome/pro-regular-svg-icons'; // far
import {
  faBars,
  faBuilding,
  faHeadset,
  faHome,
  faSearch as fasSearch,
  faShoppingCart as fasShoppingCart,
  faTimes,
  faUnlock,
  faUser
} from '@fortawesome/pro-solid-svg-icons'; // fas
import { FontAwesomeIconHostComponent } from './components/font-awesome-host/font-awesome-host.component';

@NgModule({
  declarations: [ FontAwesomeIconHostComponent ],
  exports: [ FontAwesomeIconHostComponent, FontAwesomeModule ]
})
export class CustomFontAwesomeModule {
  constructor(library: FaIconLibrary) {
    library.addIcons(faBars);
    library.addIcons(faBuilding);
    library.addIcons(faCheckCircle);
    library.addIcons(faHeadset);
    library.addIcons(faHome);
    library.addIcons(falSearch);
    library.addIcons(farShoppingCart);
    library.addIcons(fasSearch);
    library.addIcons(fasShoppingCart);
    library.addIcons(faTimes);
    library.addIcons(faUnlock);
    library.addIcons(faUser);
  }
}


Step 2: Create a Font Awesome Host Component to build the icons dynamically based on the data
Update: I added a size property
import { Component, ComponentFactoryResolver, Input, OnInit, ViewChild, ViewContainerRef } from '@angular/core';

import { FaIconComponent } from '@fortawesome/angular-fontawesome';
import { IconName, IconPrefix, SizeProp } from '@fortawesome/fontawesome-svg-core';

@Component({
  selector: 'app-fa-host',
  template: '<ng-container #host></ng-container>'
})
export class FontAwesomeIconHostComponent implements OnInit {
  @ViewChild('host', { static: true, read: ViewContainerRef }) container: ViewContainerRef;

  @Input() icon: IconName;
  @Input() prefix: IconPrefix;
  @Input() size: SizeProp;

  constructor(private componentFactoryResolver: ComponentFactoryResolver) {
  }

  public ngOnInit(): void {
    this.createIcon();
  }

  public createIcon(): void {
    const factory = this.componentFactoryResolver.resolveComponentFactory(FaIconComponent);
    const componentRef = this.container.createComponent(factory);
    componentRef.instance.icon = [this.prefix, this.icon];
    componentRef.instance.size = this.size;
    // Note that FaIconComponent.render() should be called to update the
    // rendered SVG after setting/updating component inputs.
    componentRef.instance.render();
  }
}


Step 3: Use the Font Awesome Host Component and provide the icon and prefix based on the data
<app-fa-host [icon]="item.icon" [prefix]="item.iconPrefix" *ngIf="!!item.icon"></app-fa-host>
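
For context, the data driving this might look something like the following (a hypothetical shape that matches the bindings above):

const items = [
  { icon: 'shopping-cart', iconPrefix: 'fas' },
  { icon: 'search', iconPrefix: 'fal' }
];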

In this use case "item" is an object that has an icon property and an iconPrefix property. This still isn't perfect in my book because any time you add a row to the data that represents an icon you haven't imported yet, you'd have to update the Angular code to import it. But this does allow us to iterate through a list of objects and dynamically render an icon based on the data for each object. That's progress.

Monday, March 22, 2021

Searching all Fields in all Tables in a Single Database

Way back in 2015 I provided the SQL to search for a specific string in any field in any table in any database on a server. I don't remember why I needed that, but I guess I did. What I've come to use more frequently, however, is a search of every field in every table in a single database. So here's that query:


DECLARE @SearchText NVARCHAR(1000) = 'Some value'
DECLARE @SchemaName NVARCHAR(256), @TableName NVARCHAR(256), @ColumnName NVARCHAR(256)
DECLARE TableCursor CURSOR FOR
SELECT sch.[Name] AS SchemaName, st.[Name] AS TableName, sc.[Name] AS ColumnName
FROM sys.tables st WITH (NOLOCK)
INNER JOIN sys.columns sc WITH (NOLOCK)
    ON st.object_id = sc.object_id
INNER JOIN sys.schemas sch WITH (NOLOCK)
    ON st.schema_id = sch.schema_id

OPEN TableCursor
FETCH NEXT FROM TableCursor INTO @SchemaName, @TableName, @ColumnName

WHILE @@FETCH_STATUS = 0
BEGIN

    DECLARE @InternalSQL NVARCHAR(MAX) = 'SELECT @CountParam = COUNT(*) FROM [' + @SchemaName + '].[' + @TableName + '] WHERE [' + @ColumnName + '] LIKE ''%' + @SearchText + '%'''
    DECLARE @Count INT

    EXEC sp_executesql @InternalSQL, N'@CountParam INT OUT', @Count OUT

    IF (@Count > 0)
    BEGIN
        PRINT @SchemaName + '.' + @TableName + '.' + @ColumnName
    END

    FETCH NEXT FROM TableCursor INTO @SchemaName, @TableName, @ColumnName

END

CLOSE TableCursor
DEALLOCATE TableCursor

Friday, February 12, 2021

Scrum Branching Strategy


I've been using git for a few years now and I don't see how I could ever go back to TFVC or *shudder* Visual SourceSafe. Git makes managing branches and work significantly easier than other source control technologies I've used in the past, but there seems to be some concern or confusion regarding a good branching strategy to use along with Scrum.

I was pointed to this article by a colleague and while I agreed with some of it, some of it just seemed flat out wrong to me. We've worked really hard through several iterations to land on our branching strategy in my current role and it works really well. I figured I'd share it and our reasons for going this route.

Before I start, it's important to point out that we use Azure DevOps for our CI/CD pipelines so we use their Pull Request tool to merge between some branches. If you're using a different tool then you may follow slightly different processes that make sense for that tool. I've done my best to lay out the way we do it using git terminology, but there are definitely parts that are specific to Azure DevOps.

The Basic Premise

We started with the idea that you should always have a branch that matches what is live right now. We decided that would be our main branch. We also have the classic 4 environments (dev, QA, staging, production), but we determined that we don't necessarily need to have corresponding branches for those environments at all times. That's because at any given time those environments could change. Though there are rules for promoting to QA and staging, dev is free to be updated at any time and is frequently changing (as we develop). We also recognized that in the event of a hotfix we wanted to be able to test the hotfix in the QA environment before promoting the change directly to the production environment (once the hotfix is approved).

The Branches

  • main - this branch matches our production environment nearly all the time
  • staging - this branch is created only when necessary
  • release/<release number> - this branch is our iteration branch and contains all changes that are expected to be available as a result of our iteration
  • <task branches> - these branches are transient, abundant, and created as necessary to complete individual pieces of the iteration (i.e. stories or bugs), but could also be created as sub-task branches to work on tasks within stories or bugs

The Strategy

Once we made the decision to change to this branching strategy, we deleted all branches except main because they were no longer useful. So assume these steps start with only the main branch in existence and that the contents of the main branch exactly represent what is in the production environment.
  1. Create a branch from main called release/<release number> where <release number> represents the next version number
    • If 1.0 is in production you'd create a branch called release/1.1
  2. When a new story is started in the sprint, create a branch for it, named something meaningful to the story (I'll refer to this as the story branch the rest of the way)
    • If the story is about accepting online orders you might name the branch online-orders
  3. Every task in the story could have a separate task branch named whatever you want (I'll refer to these as task branches the rest of the way)
    • If the story to accept online orders has separate pieces to accept credit card payments and invoices you might create a branch called accept-credit-cards when you start working on that task and another developer might create a branch called accept-invoices when they start working on the other task
  4. As the work is completed on each task, the task branch is merged into the story branch
    • Depending on team dynamics, you could use a pull request to do this, but on our team we just review our own changes and make sure there are no merge conflicts
    • It doesn't matter what merge type you use (fast-forward, rebase, etc.) because these commits will be squashed in the next step
  5. When a story is complete, the story branch can be deployed to the dev environment to validate everything is working together as expected
  6. When the story is ready to be tested (this varies by team and project, but on our team a story is ready to be tested when the developers feel confident that it could safely go live right at that moment; in other words, we don't "throw it over the wall" and wait to see what the tester finds), we merge the story branch into the release/<release number> branch and deploy that branch to the QA environment
    1. We do this using a pull request in Azure DevOps
    2. The important part here is to do a squash commit and provide a useful commit message for the work that was done on the story branch (e.g. "Made changes to allow online orders to be placed via credit card and invoice")
  7. If the tester finds something, create a new branch based on the release/<release number> branch, fix the bug, and repeat step 6
  8. At the end of the sprint, only completed stories are in the release/<release number> branch (that's important)
  9. Create a staging branch from main
  10. Perform a squash commit from release/<release number> into staging and make the commit message the release number
    1. We do this via pull request in Azure DevOps and it allows us to use the release number as the title of the pull request and then we fill in the individual commit message from the story branches as the description of the pull request; that way we have the detailed information from each story as well as the release number in the commit log
    2. This is also the time to associate work items with the commit if your team does that
  11. Once the staging branch has been updated, the release/<release number> branch can safely be deleted and the staging branch can be deployed to the staging environment
  12. When the next sprint starts, create a new release/<release number> branch based on the next release (so if our previous release was 1.1 this branch would be named release/1.2) from the staging branch (that's important)
  13. When it's time to deploy the code to production, rebase the changes from the staging branch onto the main branch
    1. Rebasing one branch onto another applies each commit from the source onto the destination, but there's only one commit on staging that gets rebased onto main at this point
    2. Since we create the new release/<release number> branch from the staging branch we already have the same exact commit (with the same SHA1) in the release/<release number> branch that is now in main
  14. Delete the staging branch and repeat this process starting at step 2 for every sprint until the project is complete (the full cycle is sketched in git commands below)
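
Pulling it all together, one full sprint cycle looks roughly like this in plain git commands (branch names and release numbers follow the example above; where we'd use an Azure DevOps pull request I've shown the equivalent local squash merge):

# step 1: create the sprint's release branch (from main the first sprint, from staging thereafter)
git checkout main
git checkout -b release/1.1

# step 6: squash a finished story branch into the release branch
git checkout release/1.1
git merge --squash online-orders
git commit -m "Made changes to allow online orders to be placed via credit card and invoice"

# steps 9-10: at sprint end, squash the release branch into a fresh staging branch
git checkout main
git checkout -b staging
git merge --squash release/1.1
git commit -m "Release 1.1"

# step 11: the release branch can now be deleted
git branch -D release/1.1

# step 12: the next sprint's release branch starts from staging
git checkout -b release/1.2 staging

# step 13: at go-live, rebase staging onto main and fast-forward main
git checkout staging
git rebase main
git checkout main
git merge --ff-only staging

# step 14: delete staging and repeat
git branch -d staging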

Conclusion

Obviously every team is different and what works for us may not work for you. But this does work very well for us. The commit log on main is clean and concise, but contains all of the important information for which stories are in which release. I'll try to do a separate write-up on our hotfix approach tomorrow (or next week or something).

Tuesday, February 2, 2021

Flexbox Row/Column

I've been working more towards using Flexbox and away from Bootstrap's column layout as much as possible. I love the column layout, but it still feels janky to me sometimes and Flexbox seems a lot smoother. I was recently updating a page that uses columns to display 3 items on a row, where each item (with margins and padding) takes up 33% of the row. When there are 4 items, there are 2 rows with 3 items on the first row and 1 item on the second row - but that single item on the second row should still only take up 33% of its row.

I was able to accomplish this using the following markup and CSS.

<div class="d-flex flex-wrap">
  <div class="item">
    ...single item contents...
  </div>
  <div class="item">
    ...single item contents...
  </div>
  <div class="item">
    ...single item contents...
  </div>
  <div class="item">
    ...single item contents...
  </div>
</div>

.item {
  flex: 0 0 33%;
}

At one point I had flex: 1 0 33% and that was really close, except the one item on the second line took up the entire line - with flex-grow set to 1, the lone item grew to fill all the free space in its row. Setting flex-grow to 0 keeps every item at 33%. This works beautifully and I'm happy because I get to use flexbox.
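
For reference, the shorthand breaks down like this:

.item {
  flex: 0 0 33%; /* flex-grow: 0, flex-shrink: 0, flex-basis: 33% */
}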