Optimal PartitionKey & RowKey Design for Blog Posts

Following on from a previous article on good practice for defining the PartitionKey and RowKey in Azure Table storage, here we take a real-life scenario and look at some of the values we could use when storing a blog post.

🎯 Your main query patterns will likely be:

  • Get a single post by slug or ID
  • List posts by publish date
  • List posts by category or tag
  • List posts by author
  • Show the latest N posts

Azure Tables don’t support secondary indexes, so you design your keys around these patterns.


✔️ Recommended Primary Table: Posts

PartitionKey: yyyy-MM (publish month)

RowKey: slug or inverted timestamp

This gives you:

  • Fast “latest posts” queries
  • Natural grouping by month
  • Even distribution (no hotspots)
  • Easy retrieval of a single post

Example

| PartitionKey | RowKey | Title |
|---|---|---|
| 2025-02 | my-first-post | My First Post |
| 2025-02 | MaxTicks-638742... | Another Post |
| 2025-03 | building-a-cms | Building a CMS |

Why this works

  • Blog posts are naturally time‑series data.
  • Monthly partitions keep partitions from getting too large.
  • You can fetch the latest posts by querying the most recent partitions.

✔️ Alternative RowKey Options

Option A: Slug (human‑friendly)

  • Easy to retrieve a post directly
  • Great for SEO‑style URLs
  • Good if slugs are unique

Option B: Inverted timestamp (machine‑friendly)

  • Perfect for “latest first” queries
  • RowKey sorts newest → oldest
  • Format:
    RowKey = (DateTime.MaxValue - PublishDate).Ticks.ToString("d19")
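As a quick sketch of the same trick outside C# (Node.js here, with BigInt standing in for .NET's 64-bit tick values; the constants mirror `DateTime` internals):

```javascript
// Sketch: computing an inverted-timestamp RowKey in JavaScript.
// .NET ticks are 100-ns intervals since 0001-01-01; BigInt avoids precision loss.
const TICKS_AT_UNIX_EPOCH = 621355968000000000n; // ticks from 0001-01-01 to 1970-01-01
const MAX_TICKS = 3155378975999999999n;          // DateTime.MaxValue.Ticks

function invertedTimestampRowKey(date) {
  const ticks = BigInt(date.getTime()) * 10000n + TICKS_AT_UNIX_EPOCH;
  // Zero-pad to 19 digits so the strings sort lexicographically, newest first
  return (MAX_TICKS - ticks).toString().padStart(19, "0");
}

// A later publish date produces a lexicographically smaller RowKey
const older = invertedTimestampRowKey(new Date("2025-02-01T00:00:00Z"));
const newer = invertedTimestampRowKey(new Date("2025-03-01T00:00:00Z"));
console.log(newer < older); // true: newest sorts first
```

Because Azure Tables returns rows in ascending RowKey order, this makes "latest N posts" a plain top-N query with no sorting on the client.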

✔️ Handling Categories, Tags, and Authors

Azure Tables don’t support secondary indexes, so you create additional tables for fast lookups.

Table: PostsByCategory

  • PartitionKey: category
  • RowKey: yyyy-MM-dd-HH-mm-ss + slug
  • Value: pointer to the main post (PartitionKey + RowKey)

Table: PostsByTag

Same pattern as categories.

Table: PostsByAuthor

Same pattern again.

These tables act like manual indexes—super cheap, super fast.
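A minimal sketch of how the main entity and its pointer rows might be built together (`buildPostEntities` and the property names are illustrative, not part of any SDK; the entity shapes match the tables above):

```javascript
// Sketch: building the main Posts entity plus manual "index" entities.
function buildPostEntities(post) {
  const month = post.publishDate.toISOString().slice(0, 7);  // "yyyy-MM"
  // "yyyy-MM-dd-HH-mm-ss" stamp for the index RowKeys
  const stamp = post.publishDate.toISOString().slice(0, 19).replace(/[T:]/g, "-");

  const main = {
    partitionKey: month,
    rowKey: post.slug,
    title: post.title,
    authorId: post.authorId
  };

  // Each index row stores only a pointer back to the main entity
  const pointer = { postPartitionKey: main.partitionKey, postRowKey: main.rowKey };
  const byCategory = post.categories.map(c => ({
    partitionKey: c, rowKey: `${stamp}-${post.slug}`, ...pointer
  }));
  const byAuthor = [{ partitionKey: post.authorId, rowKey: `${stamp}-${post.slug}`, ...pointer }];

  return { main, byCategory, byAuthor };
}
```

Writing all of these in one go (for example inside a batch per partition) keeps the manual indexes consistent with the main table.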


✔️ Summary of the Best Practice

| Table | PartitionKey | RowKey | Purpose |
|---|---|---|---|
| Posts | yyyy-MM | slug or inverted timestamp | Main storage |
| PostsByCategory | category | timestamp+slug | Fast category queries |
| PostsByTag | tag | timestamp+slug | Fast tag queries |
| PostsByAuthor | authorId | timestamp+slug | Fast author queries |

This structure scales beautifully and keeps queries predictable and cheap.

Best Practices for PartitionKey and RowKey in Azure Table Storage

1. PartitionKey: Design for Scalability and Query Patterns

The PartitionKey determines how your data is distributed across storage nodes. Good partitioning avoids hotspots and keeps queries fast.

✔️ Best Practices

  • Group entities that you frequently query together
    Azure Tables can only efficiently query within a partition. If you often query “all orders for a customer,” then PartitionKey = CustomerId is a strong choice.

  • Avoid extremely large single partitions
    A single partition can only scale so far. If you expect millions of entities in one partition, consider adding a secondary dimension, such as:

    • CustomerId + Month
    • DeviceId + Date
    • Region + Category
  • Avoid extremely small partitions
    Too many tiny partitions can slow down scans and increase overhead.

  • Choose keys that evenly distribute load
    If all writes go to the same partition (e.g., PartitionKey = "Orders"), you create a hotspot. Spread writes across partitions.

✔️ Good PartitionKey examples

| Scenario | Good PartitionKey |
|---|---|
| IoT telemetry | DeviceId or DeviceId + Date |
| Multi-tenant SaaS | TenantId |
| Logging | Date (e.g., 2025-02-04) |
| E-commerce orders | CustomerId or CustomerId + Year |

2. RowKey: Ensure Uniqueness and Fast Lookup

The RowKey uniquely identifies an entity within a partition. Azure Tables sort RowKeys lexicographically.

✔️ Best Practices

  • Make RowKey unique within the partition
    Common patterns:

    • GUID
    • Timestamp (inverted for newest-first)
    • Natural key (OrderId, UserId, etc.)
  • Use RowKey to optimize query order
    Because RowKeys are sorted, you can:

    • Store newest items first using a descending timestamp trick:
      RowKey = (DateTime.MaxValue - timestamp).Ticks.ToString("d19")
      (zero-padded to 19 digits so the strings sort lexicographically)
    • Store items alphabetically or numerically for range queries.
  • Keep RowKeys short
    Long keys increase storage cost and slow down queries.
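A quick illustration of why the fixed-width formatting above matters: Azure Tables compares RowKeys as strings, so unpadded numbers sort in surprising order:

```javascript
// RowKeys sort lexicographically, not numerically
const unpadded = ["9", "10", "100"].sort();
console.log(unpadded); // ["10", "100", "9"] — numeric order is lost

// Zero-padding to a fixed width restores the intended order
const padded = ["9", "10", "100"].map(n => n.padStart(5, "0")).sort();
console.log(padded); // ["00009", "00010", "00100"]
```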

✔️ Good RowKey examples

| Scenario | Good RowKey |
|---|---|
| Logging | Inverted timestamp (RowKey = MaxTicks - Now.Ticks) |
| Orders | OrderId |
| IoT telemetry | Timestamp or sequence number |
| User profiles | UserId |

3. General Key Design Principles

✔️ Keep keys ASCII-safe

Avoid the characters that Azure Table storage disallows in keys (/, \, #, ?), along with control characters.

✔️ Keep keys predictable

You want to be able to compute the key without extra lookups.

✔️ Keep keys immutable

Changing keys means deleting and re‑inserting the entity.

✔️ Think about your query patterns first

Azure Tables are not relational. You design keys based on how you read data, not how you model it.


4. Common Patterns (with examples)

Pattern A: Time-series data

PartitionKey: DeviceId
RowKey: Inverted timestamp

  • Fast “latest first” queries
  • Even distribution across devices

Pattern B: Multi-tenant SaaS

PartitionKey: TenantId
RowKey: EntityId

  • Easy to isolate tenant data
  • Scales well

Pattern C: Event logs

PartitionKey: Date (e.g., 2025-02-04)
RowKey: GUID or timestamp

  • Efficient daily queries
  • Avoids giant partitions

5. Anti‑Patterns (Avoid These)

❌ PartitionKey = same value for all rows

Creates a massive hotspot.

❌ RowKey = random GUID when you need sorted queries

GUIDs destroy ordering.

❌ PartitionKey = GUID

You lose the ability to query groups of related data.

❌ Too many partitions (e.g., PartitionKey = GUID per row)

Forces range queries to span many partitions, turning them into slow cross-partition scans.


Summary

PartitionKey

  • Group related data
  • Spread load
  • Match your query patterns
  • Avoid hotspots

RowKey

  • Unique within partition
  • Sorted for fast range queries
  • Short and predictable

Together, they define your performance, scalability, and cost.

Best Practices: Securing Azure AD Configuration in Single Page Applications

Reading Time: 10 minutes

When building Single Page Applications (SPAs) with Azure AD authentication, developers often ask: “How do I protect my Tenant ID and Client ID from being exposed?”

The short answer might surprise you: You don’t—and you shouldn’t try to.

This article explains what information is truly public vs. secret in SPAs, debunks common security misconceptions, and provides practical best practices for securing your Azure AD-enabled applications.

Understanding What’s Secret and What’s Not

First, let’s clarify a critical distinction that causes much confusion:

Public Values (NOT Secrets)

These values cannot and should not be kept secret in SPAs:

  • Client ID (Application ID) – Identifies your application
  • Tenant ID – Identifies your Azure AD tenant
  • Authority URLs – The login endpoints
  • Redirect URIs – Where authentication flows return
  • API Scopes – Permissions your app requests

Why? Microsoft designs these values to be public. Any code running in a browser can be inspected by users—there are no secrets in client-side code.

Actual Secrets (NEVER in SPAs)

These must never appear in client-side code:

  • Client Secrets – Use server-side applications only
  • API Keys – Backend only
  • Connection Strings – Backend only
  • Private Keys – Backend only

Why? If these appear in your SPA, any user can extract and abuse them.

The PKCE Flow: Security Without Secrets

SPAs use the Proof Key for Code Exchange (PKCE) OAuth 2.0 flow, which is specifically designed to work securely without client secrets.

How PKCE Works:

  1. Your SPA generates a random code verifier
  2. Creates a code challenge from the verifier
  3. Sends the challenge during authentication
  4. Azure AD validates using the verifier

This cryptographic exchange prevents token interception attacks without requiring secrets.

Example: MSAL.js Configuration

// This is perfectly secure - no secrets needed
const msalConfig = {
    auth: {
        clientId: "1F5125A6-0098-493D-9C4E-2CA6DDD11998",  // Public
        authority: "https://login.microsoftonline.com/A588175B-C520-4961-BF1F-2C583DD047C8",  // Public
        redirectUri: "http://localhost:5173"  // Public, but controlled in Azure AD
    }
};

// MSAL.js automatically uses PKCE flow
const msalInstance = new msal.PublicClientApplication(msalConfig);

Best Practice #1: Register Allowed Redirect URIs

While Client IDs are public, you control where authentication tokens can be sent through Azure AD app registration.

Configuration in Azure Portal

Navigate to: Azure Portal → App Registrations → Your App → Authentication

Redirect URIs: register each exact URI your application uses (the production HTTPS URL plus any local development URLs).

Key Principle: Even if someone steals your Client ID, they cannot receive tokens unless they control a registered redirect URI. This is your primary defense mechanism.

Best Practices for Redirect URIs:

  • Only register legitimate URIs you control
  • Use HTTPS in production (never HTTP)
  • Never use wildcards (e.g., https://*.yourdomain.com)
  • Avoid overly permissive patterns

Best Practice #2: Validate Tokens on the Backend

The real security happens on your API server, not in the SPA. Always validate every token.

ASP.NET Core API Token Validation

// Program.cs
builder.Services.AddAuthentication("Bearer")
    .AddJwtBearer("Bearer", options => 
    {
        var tenantId = builder.Configuration["AzureAd:TenantId"];
        var apiClientId = builder.Configuration["AzureAd:ApiClientId"];

        options.Authority = $"https://login.microsoftonline.com/{tenantId}/v2.0";
        
        // Critical validation settings
        options.TokenValidationParameters = new()
        {
            ValidAudience = $"api://{apiClientId}",
            ValidateIssuer = true,      // Verify token from correct tenant
            ValidateAudience = true,    // Verify token for this API
            ValidateLifetime = true,    // Reject expired tokens
            ValidateIssuerSigningKey = true  // Verify signature
        };
    });

// Apply authorization to endpoints
app.MapGet("/api/secure-data", () => "Sensitive data")
    .RequireAuthorization("RequiredPolicy");

Security Checklist:

  • Always validate issuer (prevents tokens from other tenants)
  • Always validate audience (prevents tokens for other APIs)
  • Always validate lifetime (rejects expired tokens)
  • Always validate signature (prevents tampering)
  • Use claims and scopes for authorization

Best Practice #3: Secure Token Storage

Where you store tokens matters significantly for security.

✅ Good: Let MSAL.js Manage the Token Cache

// Let MSAL manage the token cache; "memoryStorage" offers the strongest isolation
// (note: MSAL.js's default cacheLocation is sessionStorage, managed internally by MSAL)
const msalInstance = new msal.PublicClientApplication({
    ...msalConfig,
    cache: { cacheLocation: "memoryStorage" }
});

// Tokens are acquired and cached by MSAL; never handle raw token storage yourself
const token = await msalInstance.acquireTokenSilent({
    scopes: ["api://your-api/cms.read"]
});

❌ Bad: localStorage or sessionStorage

// NEVER DO THIS - Vulnerable to XSS attacks
localStorage.setItem('accessToken', token.accessToken);  // ❌ BAD
sessionStorage.setItem('accessToken', token.accessToken);  // ❌ ALSO BAD

Why localStorage is Dangerous:

  • Accessible to any JavaScript on your domain
  • Vulnerable to XSS (Cross-Site Scripting) attacks
  • Persists across browser sessions
  • No built-in security features

Best Practice #4: Configure CORS Properly

Control which origins can call your API to prevent unauthorized cross-origin requests.

ASP.NET Core CORS Configuration

// Program.cs - Configure CORS restrictively
builder.Services.AddCors(options =>
{
    options.AddPolicy("AllowFrontend", policy =>
    {
        policy.WithOrigins("https://yourdomain.com")  // Specific origins only
              .AllowAnyHeader()
              .AllowAnyMethod()
              .AllowCredentials();  // Required for authentication
    });
});

// Apply CORS middleware
app.UseCors("AllowFrontend");

Important:

  • Never use AllowAnyOrigin() in production
  • Explicitly list allowed origins
  • Use environment-specific configurations

Configuration Management Strategies

While Client IDs can be hardcoded without security risk, proper configuration management improves maintainability.

Option 1: Hardcoded Values (Acceptable)

// index.html - Perfectly acceptable for public values
const clientId = "1F5125A6-0098-493D-9C4E-2CA6DDD11998";
const tenantId = "A588175B-C520-4961-BF1F-2C583DD047C8";

Pros: Simple, fast, no dependencies
Cons: Requires rebuild for environment changes

Option 2: Environment Variables (Better for Multi-Environment)

// Use build-time environment variables (Vite example)
const clientId = import.meta.env.VITE_CLIENT_ID;
const tenantId = import.meta.env.VITE_TENANT_ID;

// .env.production
VITE_CLIENT_ID=1F5125A6-0098-493D-9C4E-2CA6DDD11998
VITE_TENANT_ID=A588175B-C520-4961-BF1F-2C583DD047C8

Pros: Environment-specific, industry standard
Cons: Still requires rebuild per environment

Option 3: Configuration Endpoint (Best for Centralization)

Serve configuration from your backend API for runtime flexibility.

Backend Configuration Endpoint (.NET)

// Program.cs
app.MapGet("/config", (IConfiguration config) => Results.Ok(new
{
    azureAd = new
    {
        clientId = config["AzureAd:ClientId"],
        tenantId = config["AzureAd:TenantId"],
        apiClientId = config["AzureAd:ApiClientId"]
    }
}));

appsettings.json

{
  "AzureAd": {
    "ClientId": "1F5125A6-0098-493D-9C4E-2CA6DDD11998",
    "TenantId": "A588175B-C520-4961-BF1F-2C583DD047C8",
    "ApiClientId": "ABDBD38F-384E-4640-B7C6-34C8340A442E"
  }
}

Frontend Initialization

// index.html - Fetch configuration at runtime
let msalInstance;
let apiScope;

(async function initializeApp() {
    try {
        // Fetch configuration from API
        const response = await fetch('https://api.yourdomain.com/config');
        const config = await response.json();

        // Initialize MSAL with fetched configuration
        const msalConfig = {
            auth: {
                clientId: config.azureAd.clientId,
                authority: `https://login.microsoftonline.com/${config.azureAd.tenantId}`,
                redirectUri: window.location.origin
            }
        };

        apiScope = `api://${config.azureAd.apiClientId}/cms.read`;
        msalInstance = new msal.PublicClientApplication(msalConfig);

        console.log('App initialized successfully');
    } catch (error) {
        console.error('Failed to load configuration:', error);
    }
})();

Pros: Single source of truth, no rebuilds needed, runtime flexibility
Cons: Additional HTTP request on startup

Additional Security Measures

1. Implement Content Security Policy (CSP)

<!-- Add to HTML head or HTTP headers -->
<meta http-equiv="Content-Security-Policy" 
      content="default-src 'self'; 
               script-src 'self' https://alcdn.msauth.net; 
               connect-src 'self' https://login.microsoftonline.com">

2. Enable Azure AD Conditional Access

Configure in Azure Portal to enforce:

  • Multi-factor authentication (MFA)
  • Trusted device requirements
  • IP address restrictions
  • Risk-based access policies

3. Use Short-Lived Tokens

Azure AD defaults to 1-hour access token lifetimes. Don’t increase this unnecessarily.

// MSAL.js handles token refresh automatically
const token = await msalInstance.acquireTokenSilent({
    scopes: [apiScope],
    account: msalInstance.getActiveAccount()
});

4. Implement API Rate Limiting

// ASP.NET Core rate limiting
builder.Services.AddRateLimiter(options =>
{
    options.AddFixedWindowLimiter("api", opt =>
    {
        opt.Window = TimeSpan.FromMinutes(1);
        opt.PermitLimit = 100;
    });
});

app.UseRateLimiter();

app.MapGet("/api/data", () => "data")
   .RequireRateLimiting("api");

5. Monitor and Audit

  • Enable Azure AD sign-in logs
  • Monitor failed authentication attempts
  • Set up alerts for suspicious activity
  • Review API access patterns regularly

Common Security Anti-Patterns to Avoid

❌ Anti-Pattern 1: Trying to Hide Public Values

// Don't waste time obfuscating public values
const secret = atob("ZjFmYWY2NGYtMWE2NC00NWNjLTlkNGItZTIwMDhkMWNhZmM5");  // ❌ Pointless

Anyone can decode this. Accept that Client IDs are public.

❌ Anti-Pattern 2: Proxying Authentication Through Your Backend

// DON'T do this to "hide" client ID
app.post('/auth/login', (req, res) => {
    // Server exchanges credentials for token
    // Then sends token to client
});  // ❌ Defeats PKCE security, adds complexity

Use proper OAuth 2.0 flows designed for SPAs.

❌ Anti-Pattern 3: Client Secrets in SPAs

// NEVER include client secrets
const msalConfig = {
    auth: {
        clientId: "...",
        clientSecret: "super-secret-key"  // ❌ EXTREMELY DANGEROUS
    }
};  // Anyone can extract this and impersonate your app

Security Checklist for Production

Before deploying your SPA to production, verify:

Azure AD Configuration

  • Redirect URIs are restrictively configured
  • No wildcards in redirect URIs
  • HTTPS enforced for production URIs
  • Appropriate API permissions configured
  • Admin consent granted (if required)

API Security

  • Token validation enabled (issuer, audience, lifetime)
  • Authorization policies implemented
  • CORS properly configured
  • Rate limiting implemented
  • HTTPS enforced

SPA Security

  • No client secrets in code
  • Tokens stored in memory (not localStorage)
  • Content Security Policy implemented
  • Dependencies updated and scanned
  • Proper error handling (no token leaks in errors)

Monitoring

  • Sign-in logs enabled
  • Failed authentication alerts configured
  • API access logging implemented
  • Regular security reviews scheduled

Conclusion

Securing SPAs with Azure AD doesn’t require hiding Client IDs or Tenant IDs—these values are designed to be public. Instead, focus your security efforts on:

  1. Properly configuring redirect URIs in Azure AD
  2. Validating tokens rigorously on your backend API
  3. Using secure token storage (in-memory, not localStorage)
  4. Implementing proper CORS policies
  5. Enabling monitoring and alerts for suspicious activity

The security of your SPA relies on proper OAuth 2.0 flows (PKCE), token validation, and infrastructure configuration—not on attempting to hide public identifiers.

Remember: The PKCE flow is specifically designed to work securely with public Client IDs. Trust the design, follow best practices, and focus on the security measures that truly matter.


Have questions or feedback? Share your thoughts in the comments below!

GitHub Identity vs Azure Token in Azure Static Web Apps

Understanding the differences, risks, benefits, and when to use each

Azure Static Web Apps supports two main authentication/authorization approaches during deployment:

  1. GitHub Identity (OIDC or GitHub Actions permissions)
  2. Azure Token (Service Principal / Azure AD App Registration)

They both let GitHub Actions deploy your app, but they work very differently.

1. GitHub Identity (OIDC)

✔️ What it is

GitHub Actions uses OpenID Connect (OIDC) to request a short‑lived Azure token at deploy time, without storing any secrets. Azure trusts GitHub’s identity provider and issues a temporary token.
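A minimal workflow sketch, assuming a federated identity credential is already configured on the Entra ID app (the variable names are placeholders):

```yaml
permissions:
  id-token: write   # allow the job to request an OIDC token from GitHub
  contents: read

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/login@v2
        with:
          client-id: ${{ vars.AZURE_CLIENT_ID }}          # public app ID, not a secret
          tenant-id: ${{ vars.AZURE_TENANT_ID }}
          subscription-id: ${{ vars.AZURE_SUBSCRIPTION_ID }}
      # ...deployment steps follow, using the short-lived Azure session
```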

⭐ Benefits

  • No secrets stored in GitHub
    Nothing to rotate, leak, or accidentally commit.
  • Short‑lived tokens
    Tokens expire quickly, reducing blast radius.
  • Least privilege by design
    You grant GitHub Actions a specific role on a specific resource.
  • Automatic rotation
    No manual maintenance.
  • Recommended by Microsoft for modern deployments
    It’s the “secure-by-default” option.

⚠️ Risks / Limitations

  • Requires Azure setup
    You must configure a federated identity credential on an Entra ID app.
  • Only works with GitHub Actions
    If you switch CI/CD providers, you must reconfigure.
  • More complex initial setup
    Especially if you’re not familiar with Entra ID.

🧭 When to use GitHub Identity

Use it when:

  • You deploy from GitHub Actions (most SWA users do).
  • You want maximum security with no secrets.
  • You want zero maintenance authentication.
  • You’re building a long‑term, production‑grade pipeline.

This is the best practice for modern Azure deployments.

2. Azure Token (Service Principal)

✔️ What it is

A Service Principal (SP) is an Azure AD (Entra ID) application identity with a client ID and client secret. GitHub Actions uses this secret to authenticate to Azure.

⭐ Benefits

  • Simple to understand
    It’s just a username/password for Azure.
  • Works with any CI/CD provider
    GitHub, Azure DevOps, GitLab, Jenkins, etc.
  • Good for legacy pipelines
    Many older workflows rely on SPs.

⚠️ Risks / Limitations

  • Secrets must be stored in GitHub
    Even in encrypted secrets, this is a risk.
  • Secrets can leak
    Through logs, PRs, misconfigured workflows, or compromised GitHub accounts.
  • Secrets must be rotated manually
    Developers often forget.
  • Long‑lived credentials
    If compromised, attackers have persistent access.

🧭 When to use Azure Token

Use it when:

  • You need multi‑platform CI/CD (not just GitHub).
  • You have existing SP‑based pipelines and can’t migrate yet.
  • You need fine‑grained control over the identity (e.g., custom API permissions).

This is the legacy but still valid option.

Side‑by‑Side Comparison

| Feature | GitHub Identity (OIDC) | Azure Token (Service Principal) |
|---|---|---|
| Secrets stored in GitHub | ❌ None | ✔️ Yes (client secret) |
| Token lifetime | Short-lived | Long-lived |
| Rotation | Automatic | Manual |
| Setup complexity | Medium | Low |
| Works outside GitHub | ❌ No | ✔️ Yes |
| Security posture | ⭐ Strong | ⚠️ Weaker |
| Recommended by Microsoft | ✔️ Yes | ⚠️ Only for legacy |

How to Secure Each Approach

GitHub Identity (OIDC)

  • Restrict trust to specific GitHub repo + branch
    (e.g., only main can request tokens)
  • Assign least-privilege roles
    Usually Contributor or Static Web App Contributor.
  • Use environment protection rules
    Require approvals for production deployments.
  • Use branch protection
    Prevent unauthorized pushes to the trusted branch.

This is already extremely secure.

Azure Token (Service Principal)

If you must use SPs, harden them:

  • Store secrets only in GitHub Encrypted Secrets
  • Rotate secrets every 90 days
  • Use least-privilege roles
  • Enable Conditional Access (IP restrictions, MFA for portal access)
  • Use Managed Identity inside Azure where possible
  • Avoid using the same SP for multiple apps

SPs can be secure, but they require discipline.

Which Should You Use?

For most new, Azure-focused projects, such as a real-world CMS, the best choice is:

👉 Use GitHub Identity (OIDC) for all new Static Web App deployments.

It’s:

  • more secure
  • easier to maintain
  • aligned with modern Azure DevOps practices
  • ideal for production workloads

Use a Service Principal only if:

  • you need cross-platform CI/CD
  • you’re integrating with tools outside GitHub
  • you’re migrating legacy pipelines

Pattern Matching in .NET: A Journey from .NET 6 to .NET 9

Introduction

Pattern matching is one of those language features that quietly becomes indispensable: once you’re used to it, you wonder how you’d live without it. In the .NET ecosystem — specifically in C# — pattern matching has matured significantly in recent versions. This post walks through how pattern matching evolved from .NET 6 up through .NET 9, with examples, explanations, and a bit of history to ground things.


A Bit of History: Where Pattern Matching Comes From

To understand how we got here, it’s helpful to look back:

  • Pattern matching as a concept goes way back in computer science — it’s deeply rooted in functional programming languages like ML, Haskell, and others. [1]
  • Early languages such as SNOBOL (1960s) also had pattern matching for strings. [1]
  • In .NET, C# first introduced pattern matching in version 7, but it’s grown a lot since then. ([Microsoft for Developers][2])
  • Over successive C# versions, Microsoft has added more expressive patterns: relational operators, logical combinators (and, or, not), parenthesized patterns, and more. ([codemag.com][3])

Why Pattern Matching Matters

Pattern matching makes your code more expressive and concise:

  • You can match on types, not just use if (obj is SomeType) { … }.
  • You can deconstruct objects (e.g., with switch expressions or property patterns).
  • You can combine conditions very cleanly using logical patterns, making your intent clearer.
  • Using relational patterns (like <, >=) in is or switch avoids boilerplate when clauses.

Pattern Matching in .NET 6 / C# 10 (Baseline)

By the time .NET 6 came around (C# 10), most of the core pattern matching features were already well established:

  • Type patterns: if (obj is MyType t) { … }
  • Constant patterns: case 42: … or if (x is 42) …
  • Property patterns: e.g., if (p is Point { X: 0, Y: 0 }) …
  • Positional patterns (with deconstruct): if (p is Point(var x, var y)) …
  • Switch expressions using patterns.

Here’s a small example in C# 10 / .NET 6:

public record Point(int X, int Y);

object GetSomething() => new Point(3, 4);

void Process()
{
    object o = GetSomething();

    if (o is Point { X: 0, Y: 0 })
    {
        Console.WriteLine("Origin");
    }
    else if (o is Point(var x, var y))
    {
        Console.WriteLine($"Point with coordinates ({x}, {y})");
    }
    else
    {
        Console.WriteLine("Not a Point");
    }

    // Using switch expression:
    string description = o switch
    {
        Point { X: 0, Y: 0 } => "At origin",
        Point(var x, var y) => $"At ({x}, {y})",
        _ => "Unknown object"
    };

    Console.WriteLine(description);
}

How it works:

  • if (o is Point { X: 0, Y: 0 }): Checks if o is a Point and destructures it, matching its properties.
  • Point(var x, var y): Uses deconstruction (positional) pattern.
  • The switch expression is very readable: pattern on left, result on right.

While the major pattern matching leaps came earlier, .NET 7 / C# 11 did bring one notable addition: list patterns (e.g., numbers is [1, 2, ..]), which match against the shape of arrays and lists. Beyond that, the improvements were incremental (optimization, compiler refinement) rather than huge syntactic breakthroughs, which is why most blog posts emphasize C# 8 and C# 9 when discussing pattern matching.


Big Leap: Pattern Matching in .NET 8 / C# 12 and .NET 9 / C# 13

Because it is the C# language version that drives most pattern-matching syntax, and C# 12 ships with .NET 8 while C# 13 ships with .NET 9, it’s useful to talk about what’s new or expected around those releases.

According to community and preview sources: ([DEV Community][4])

  • Enhanced pattern matching: There’s continuing work to make pattern expressions more concise and expressive, particularly logical and relational patterns.
  • Some sources discuss C# 13 (which aligns with .NET 9) introducing further pattern-matching improvements, but these are not always clearly documented in official release notes, because features may still be in preview or evolving in the language design process.

Deep Dive: What C# 9 Introduced (Pattern Matching Enhancements)

Most of the big pattern-matching improvements came in C# 9, released with .NET 5, but still very relevant for .NET 6–9, since C# 9 features continued to be supported.

Here’s a breakdown:

1. Relational Patterns

You can now write patterns using <, >, <=, >= directly.

int age = 25;

string ageGroup = age switch
{
    < 13 => "Child",
    < 20 => "Teenager",
    < 65 => "Adult",
    _ => "Senior"
};

Console.WriteLine(ageGroup);  // Output: "Adult"

Explanation:
Instead of using when clauses, the relational operators are part of the pattern, making it concise and expressive. ([Microsoft for Developers][2])


2. Logical Patterns: and, or, not

These let you combine patterns in more English-like ways.

char c = 'G';

bool isLetter = c is (>= 'a' and <= 'z') or (>= 'A' and <= 'Z');
Console.WriteLine(isLetter);  // True

string? maybe = null;
if (maybe is not null)
{
    Console.WriteLine($"You said: {maybe}");
}
else
{
    Console.WriteLine("Nothing typed");
}

Explanation:

  • >= 'a' and <= 'z' is a conjunctive (AND) pattern: both conditions must hold.
  • or lets you express alternate shapes (either lowercase or uppercase letter).
  • not provides a simpler, more readable null check (is not null) vs. != null. ([anthonygiretti.com][5])

3. Parenthesized Patterns

Parentheses help clarify grouping, especially when combining logical and relational patterns.

if (c is (>= 'a' and <= 'z') or (>= 'A' and <= 'Z'))
{
    Console.WriteLine("It's a letter!");
}

Here the parentheses make it clear which relational checks are grouped together.


4. Negated (not) Pattern

As shown above, not is handy.

object? obj = new object();
if (obj is not null)
{
    Console.WriteLine("Got non-null object");
}

This is effectively a more pattern-based way to say obj != null. ([Telerik.com][6])


5. Combining Patterns in is and switch

You can use these new patterns both in is expressions and switch statements / expressions.

object? value = 100;

string description = value switch
{
    int i and > 0 => "Positive integer",
    int i and <= 0 => "Zero or negative integer",
    string s => $"String of length {s.Length}",
    null => "Null value",
    _ => "Other type"
};

Console.WriteLine(description);

How this works:

  • int i and > 0: first checks if value is an int (type pattern), captures it in i, then also checks the relational condition that i > 0.
  • null is matched explicitly.
  • _ is the discard pattern: anything else.

Putting It All Together: A Mini Real-World Example

Let’s build a mini console app that classifies messages:

public record Message(string Sender, string Content, DateTime Timestamp);

string ClassifyMessage(object? msg)
{
    return msg switch
    {
        Message { Sender: "System", Content: "Ping", Timestamp: var ts } when ts < DateTime.UtcNow.AddMinutes(-5)
            => "Stale system ping",
        Message { Sender: "System", Content: "Ping", Timestamp: _ }
            => "Recent system ping",
        Message { Sender: var s and not "", Content: var c } => $"User '{s}' said: {c}",
        string s when s.Length == 0 => "Empty user message",
        null => "No message",
        _ => "Unknown object"
    };
}

// Usage:
var m1 = new Message("Alice", "Hello", DateTime.UtcNow);
var m2 = "Just a random string";
var m3 = (Message?)null;

Console.WriteLine(ClassifyMessage(m1));  
Console.WriteLine(ClassifyMessage(m2));  
Console.WriteLine(ClassifyMessage(m3));

Explanation:

  • We match on a Message record, using a property pattern (Sender, Content, Timestamp).
  • The first case even combines a when guard for a “stale” timestamp.
  • We use and not to ensure Sender is not empty.
  • We match plain strings, nulls, and anything else.

What About .NET 9 and Beyond?

As of the latest previews and community writing:

  • .NET 9 (which ships with C# 13) continues to enhance pattern matching. One blog notes “enhancements in pattern matching provide more concise and expressive coding capabilities.” ([DEV Community][4])
  • There’s interest in even more expressive patterns, possibly more advanced “type expressions” within switch statements. ([ByteHide][7])
  • That said, some features remain in preview, and full official documentation may lag behind blog/community discussions.

Tips for Developers (Junior → Senior)

  • Junior developers: Start by using type patterns, switch expressions, and simple property patterns. These make code cleaner and more readable, especially when dealing with polymorphic data.
  • Intermediate developers: Explore relational and logical patterns (and, or, not). They help you reduce boilerplate (no more long if-else chains).
  • Senior developers: Use advanced combinations, parenthesized patterns, and even guard clauses (when) in switch expressions to build expressive, maintainable control flow. Also, think about pattern matching for domain-driven design: matching “shapes” of data is often more robust than fragile if chains.

References & Further Reading