Thursday, February 19, 2026

SQL Server Temp Table vs. Table Variable

This post was written by ChatGPT. I used ChatGPT as a natural language search engine to learn more about the modern differences between temp tables and table variables in SQL Server. This is ChatGPT's summary of that conversation.


Temp Tables vs. Table Variables in SQL Server: Stop Arguing, Start Measuring

Every SQL Server team eventually has this debate:

Should we use a temp table or a table variable?

And like most long-running arguments in engineering, the answer is:

It depends — but not in the way people think.

Let’s simplify this.


First: They Both Live in tempdb

There’s a persistent myth that table variables are “in memory” and temp tables are “on disk.”

That’s not how SQL Server works.

Unless you explicitly use memory-optimized objects, both temp tables and table variables use tempdb.
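You can verify this yourself. A quick sketch (column names are real DMV columns; exact page counts will vary by version, row size, and workload):

```sql
-- Sketch: a table variable still allocates pages in tempdb.
DECLARE @t TABLE (id int NOT NULL, payload char(200) NOT NULL);

INSERT INTO @t (id, payload)
SELECT TOP (10000)
       ROW_NUMBER() OVER (ORDER BY (SELECT NULL)), 'x'
FROM sys.all_objects AS a
CROSS JOIN sys.all_objects AS b;

-- User-object pages allocated by this session live in tempdb,
-- even though only a table variable was used above.
SELECT user_objects_alloc_page_count
FROM sys.dm_db_session_space_usage
WHERE session_id = @@SPID;
```

If table variables were purely "in memory," that page count would stay at zero. It doesn't.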

So if your argument starts with:

“Table variables are faster because they’re in memory…”

It’s already off track.


The Real Difference: Statistics

This is the dividing line.

Temp Tables (#Temp)

  • Automatically create and maintain statistics

  • Support indexes (clustered and nonclustered)

  • Give the optimizer accurate cardinality estimates

  • Can trigger recompiles when row counts change

In short: the optimizer understands them.
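A minimal sketch of those capabilities (the table and column names are illustrative):

```sql
-- Temp table: indexable, with statistics the optimizer can use.
CREATE TABLE #Orders
(
    OrderID    int   NOT NULL,
    CustomerID int   NOT NULL,
    Total      money NOT NULL
);

CREATE CLUSTERED INDEX IX_Orders ON #Orders (OrderID);
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID ON #Orders (CustomerID);

-- After queries touch #Orders, auto-created statistics appear here:
SELECT name
FROM tempdb.sys.stats
WHERE object_id = OBJECT_ID('tempdb..#Orders');
```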


Table Variables (@Table)

  • Historically had no meaningful statistics

  • Often get a low row-count estimate

  • Can produce bad join strategies under load

  • Don’t trigger recompiles as data volume changes

This is where performance problems come from.

Not storage.
Not memory.
Not logging.

Cardinality estimation.


Why This Matters

SQL Server chooses join strategies based on row estimates.

If it thinks a table has 1 row when it actually has 50,000:

  • You’ll get nested loops instead of hash joins

  • You’ll scan more than you should

  • You’ll watch CPU spike and blame the wrong thing

Temp tables give the optimizer better information.
Table variables often hide it.
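You can watch the mismatch happen in an actual execution plan. A sketch (on pre-2019 compatibility levels, the table variable side is typically estimated at one row no matter how many it holds):

```sql
-- Populate a table variable with thousands of rows.
DECLARE @Ids TABLE (Id int NOT NULL PRIMARY KEY);

INSERT INTO @Ids (Id)
SELECT object_id FROM sys.all_objects;

-- Run with "Include Actual Execution Plan" and compare estimated
-- vs. actual rows for @Ids: on older compatibility levels the
-- estimate is typically 1, which pushes the optimizer toward
-- nested loops even when a hash join would be far cheaper.
SELECT o.name
FROM @Ids AS i
JOIN sys.all_objects AS o
  ON o.object_id = i.Id;
```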


SQL Server 2019/2022 Improvements

SQL Server 2019 introduced table variable deferred compilation: a statement that references a table variable is compiled on first execution, after the variable is populated, so the optimizer sees a real row count instead of guessing one row. That helps.

But it doesn’t magically make them equal to temp tables in complex workloads.

The rule still holds:

If the row count is non-trivial or the query is complex, temp tables usually produce more stable and predictable plans.
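On versions without deferred compilation, or when the deferred estimate still isn't enough, the classic workaround is OPTION (RECOMPILE), which compiles the statement after the variable is populated so the optimizer sees the real row count. A sketch:

```sql
DECLARE @Ids TABLE (Id int NOT NULL PRIMARY KEY);

INSERT INTO @Ids (Id)
SELECT object_id FROM sys.all_objects;

SELECT o.name
FROM @Ids AS i
JOIN sys.all_objects AS o
  ON o.object_id = i.Id
OPTION (RECOMPILE);  -- compile now, with the actual row count visible
```

The trade-off is a compilation on every execution, which is why it's a patch, not a design.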


When Table Variables Make Sense

They’re not evil. They’re just specialized.

Use them when:

  • Row counts are small (think dozens, maybe low hundreds)

  • You’re inside a function and can’t use temp tables

  • The data is truly simple and short-lived

  • You want limited scope and simpler lifecycle

They shine in small, controlled scenarios.

They struggle in large, evolving pipelines.
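The function case in particular leaves no choice: a multi-statement table-valued function returns a table variable by definition, and temp tables aren't allowed inside one. A hypothetical sketch:

```sql
-- Hypothetical numbers function: inside a multi-statement TVF,
-- the return table is a table variable -- there is no temp table option.
CREATE FUNCTION dbo.GetNumbers (@n int)
RETURNS @Numbers TABLE (n int NOT NULL PRIMARY KEY)
AS
BEGIN
    DECLARE @i int = 1;
    WHILE @i <= @n
    BEGIN
        INSERT INTO @Numbers (n) VALUES (@i);
        SET @i += 1;
    END;
    RETURN;
END;
```

Small, scoped, short-lived: exactly the scenario where a table variable is the right tool.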


When Temp Tables Are the Right Tool

Use temp tables when:

  • You’re joining multiple large sets

  • You need indexing

  • You need reliable cardinality estimates

  • You reuse intermediate results multiple times

  • Plan stability matters

Yes, there’s tempdb overhead.

But modern SQL Server versions have significantly improved tempdb performance. For most real systems, the bottleneck isn’t tempdb allocation — it’s bad plans.
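A sketch of the pattern, with illustrative table and column names: materialize once, index, reuse.

```sql
-- Materialize the intermediate result once...
SELECT o.CustomerID, SUM(o.Total) AS Total
INTO #CustomerTotals
FROM dbo.Orders AS o
GROUP BY o.CustomerID;

-- ...index it...
CREATE CLUSTERED INDEX IX_CustomerTotals ON #CustomerTotals (CustomerID);

-- ...then reuse it in several later queries, each of which gets
-- accurate cardinality estimates from the temp table's statistics.
SELECT CustomerID, Total
FROM #CustomerTotals
WHERE Total > 10000;
```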


The Pattern I Trust

If I don’t know the final row count yet?

I default to a temp table.

Because I’d rather pay a small, predictable cost up front than debug a production incident caused by a bad estimate.


The Anti-Pattern

The worst pattern is this:

    DECLARE @Results TABLE (...);

    INSERT @Results
    SELECT ...
    FROM LargeTable
    JOIN OtherLargeTable ...

It works in dev.
It works with 200 rows.
It melts under 200,000.

Then someone adds an index hint.
Then someone rewrites it.
Then you’re refactoring under pressure.

All because the optimizer was flying blind.
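The fix is usually mechanical: same query, materialized as a temp table so the optimizer has real row counts. A sketch (column names are illustrative):

```sql
-- SELECT ... INTO creates the temp table and gives it statistics,
-- so the optimizer can switch join strategies as the data grows.
SELECT l.Id, l.Amount, o.Category
INTO #Results
FROM dbo.LargeTable AS l
JOIN dbo.OtherLargeTable AS o
  ON o.Id = l.Id;
```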


The Decision Framework

Instead of arguing syntax, ask:

  1. How many rows might this hold?

  2. Will I join against it?

  3. Will I reuse it?

  4. Does plan stability matter here?

If the answers trend toward “large / complex / reused,”
use a temp table.

If they trend toward “small / simple / scoped,”
a table variable is fine.


Final Thought

This isn’t about micro-optimizing.

It’s about giving the optimizer the information it needs to do its job.

Temp tables give it visibility.
Table variables limit it.

Pick accordingly.

And as always:

Measure first.
Optimize second.
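A minimal way to measure rather than argue, assuming you can run both variants against realistic row counts:

```sql
SET STATISTICS IO ON;   -- logical reads per table
SET STATISTICS TIME ON; -- CPU and elapsed time

-- Variant A: temp table version of the query goes here.
-- Variant B: table variable version of the query goes here.

-- Compare logical reads, CPU time, and (most importantly)
-- estimated vs. actual row counts in the actual execution plan.
```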