Why You Need It - and How to Build It Correctly
Caching in Redis is easy until you realize your entity needs two different lookup keys - one internal, one external.
That’s when things get tricky, and if you’re not careful, your application can fall prey to stale data, missed invalidations, and inconsistent state across services.
In this article, we’ll explore:
• Why dual-key caching is necessary
• A real-world example that absolutely requires it
• The correct architecture for dual-key caching
• A robust .NET implementation
• Common pitfalls and how to avoid them
A Real-World Scenario Where Dual-Key Redis Is Not Optional
The Problem: One Entity, Two Worlds
Every user has:
• UserId (GUID, internal, immutable)
• Email (used for login, external, mutable)
Now consider how the system interacts with this user profile.
Workflow 1 - Authentication (Email → User)
On login, the identity service must:
1. Find the user by email
2. Load their credentials
3. Load profile details (timezone, roles, settings)
This requires fast lookup by email.
Doing this against SQL during peak hours (e.g., 3000 logins/minute) is a recipe for outages.
Redis solves that.
But internal systems behave differently…
Workflow 2 - Internal Microservices (UserId → Profile)
Billing, notifications, analytics, and audit logs - they all identify users by:
UserId
They never know the email. They only know the internal GUID.
So they expect fast lookup by UserId.
The Missing Piece
If you only cache under one key, one of these two workflows always loses:
Cache only by UserId?
Logins look up users by email, so login traffic bypasses the cache and hammers your DB.
Cache only by Email?
Internal services constantly miss the cache and fall back to SQL.
Introducing: Dual-Key Caching
Dual-key caching lets you access the same entity using two different keys:
1. By internal, stable key (UserId)
2. By external, user-facing key (Email)
Both paths need to hit the cache - if either lookup misses, that part of the system falls back to SQL and slows down.
But there’s a bigger issue...
Why You Cannot Just Duplicate the Cache Entry
A naïve developer might say:
“Just store the full JSON under both keys!”
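In code, the naïve version looks something like this (a sketch against StackExchange.Redis, using the same _db IDatabase field as the snippets later in this article):
// Naïve dual-write: the same JSON goes under both keys.
var json = JsonSerializer.Serialize(user);

// Two independent writes - if the process dies between them,
// the two keys disagree about what the user looks like.
await _db.StringSetAsync($"user:data:{user.UserId}", json, TimeSpan.FromMinutes(10));
await _db.StringSetAsync($"user:email:{user.Email.ToLowerInvariant()}", json, TimeSpan.FromMinutes(10));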
This works until reality hits:
❌ Users change their email
• the old email key isn't deleted
• the new email key points to stale data
• login and internal systems return different versions of the same object
❌ Cache invalidation becomes error-prone
You must remember to delete two keys every time user data changes.
❌ Partial writes cause inconsistent state
A network hiccup between the two StringSetAsync calls leaves the cache corrupted.
❌ You waste memory storing duplicate JSON
For large objects, the duplication adds up fast.
This is why professional systems use a completely different approach.
The Correct Architecture: Single Source of Truth + Index Key
Instead of storing the user JSON twice:
Store the full user profile ONCE:
user:data:{userId} → JSON
Store an index from Email → UserId:
user:email:{email} → userId
This gives you:
• No duplicate JSON
• No inconsistency between primary and secondary keys
• Safe email updates
• Simple invalidation
Now let’s build it in .NET.
Implementing Dual-Key Redis Caching in .NET
Step 1: DTO + Redis key helpers
public class UserProfileDto
{
    public Guid UserId { get; set; }
    public string Email { get; set; } = default!;
    public string DisplayName { get; set; } = default!;
    public string TimeZone { get; set; } = default!;
}

public static class UserKeys
{
    // Canonical data key - holds the full profile JSON.
    public static string DataKey(Guid id) =>
        $"user:data:{id}";

    // Index key - maps a normalized email to its UserId.
    public static string EmailIndex(string email) =>
        $"user:email:{email.ToLowerInvariant()}";
}
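A quick sanity check of what these helpers produce (the GUID is illustrative):
var id = Guid.Parse("7f3c9a12-5b1e-4c8a-9f2d-3e6a1b4c7d90");
Console.WriteLine(UserKeys.DataKey(id));
// user:data:7f3c9a12-5b1e-4c8a-9f2d-3e6a1b4c7d90
Console.WriteLine(UserKeys.EmailIndex("Jane@Example.com"));
// user:email:jane@example.com  (normalized - mixed-case logins hit the same key)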
Step 2: Caching the user (atomic write)
public async Task CacheUserAsync(UserProfileDto user)
{
    var dataKey = UserKeys.DataKey(user.UserId);
    var emailKey = UserKeys.EmailIndex(user.Email);
    var json = JsonSerializer.Serialize(user);

    var tran = _db.CreateTransaction();

    // Queue both writes; StackExchange.Redis sends them as a single MULTI/EXEC.
    // Don't await these tasks here - they only complete once the transaction executes.
    _ = tran.StringSetAsync(dataKey, json, TimeSpan.FromMinutes(10));
    _ = tran.StringSetAsync(emailKey, user.UserId.ToString(), TimeSpan.FromMinutes(10));

    await tran.ExecuteAsync();
}
✔ Both keys update together
✔ No risk of partial writes
✔ JSON stored only once
Step 3: Lookup by UserId
public async Task<UserProfileDto?> GetByIdAsync(Guid userId)
{
var json = await _db.StringGetAsync(UserKeys.DataKey(userId));
return json.IsNullOrEmpty
? null
: JsonSerializer.Deserialize<UserProfileDto>(json!);
}
Step 4: Lookup by Email (inverse lookup)
public async Task<UserProfileDto?> GetByEmailAsync(string email)
{
var idValue = await _db.StringGetAsync(UserKeys.EmailIndex(email));
if (idValue.IsNullOrEmpty) return null;
var userId = Guid.Parse(idValue!);
return await GetByIdAsync(userId);
}
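Note that neither lookup populates the cache on a miss. In practice you'd wrap them in a cache-aside helper - a minimal sketch, assuming a hypothetical _userRepository that loads the profile from SQL:
public async Task<UserProfileDto?> GetByEmailWithFallbackAsync(string email)
{
    // 1. Try the cache via the email index (two cheap Redis reads).
    var cached = await GetByEmailAsync(email);
    if (cached is not null) return cached;

    // 2. Cache miss: fall back to the database (hypothetical repository call).
    var user = await _userRepository.GetByEmailAsync(email);
    if (user is null) return null;

    // 3. Repopulate the data key and the email index together.
    await CacheUserAsync(user);
    return user;
}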
Handling Email Changes Safely
This is where dual-key caching shines.
When a user updates their email:
public async Task UpdateEmailAsync(Guid userId, string oldEmail, string newEmail)
{
    var oldKey = UserKeys.EmailIndex(oldEmail);
    var newKey = UserKeys.EmailIndex(newEmail);
    var dataKey = UserKeys.DataKey(userId);

    var tran = _db.CreateTransaction();

    // Queue the operations; don't await them before ExecuteAsync.
    _ = tran.KeyDeleteAsync(oldKey);
    _ = tran.StringSetAsync(newKey, userId.ToString(), TimeSpan.FromMinutes(10));
    // The cached profile JSON still contains the old email,
    // so invalidate it and let the next read repopulate it.
    _ = tran.KeyDeleteAsync(dataKey);

    await tran.ExecuteAsync();
}
✔ Old index removed
✔ New index added
✔ Stale profile JSON invalidated (it still held the old email)
✔ No duplicate JSON
✔ No inconsistent cache state
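One detail worth making explicit: update the database first, then the cache. That way a crash between the two steps leaves only a stale cache entry that the next read-through repairs - never a cache that disagrees with the source of truth. A sketch, again with a hypothetical _userRepository:
public async Task ChangeEmailAsync(Guid userId, string oldEmail, string newEmail)
{
    // 1. Persist the change - the database remains the source of truth.
    await _userRepository.UpdateEmailAsync(userId, newEmail);

    // 2. Then swap the index keys (and drop the stale profile JSON) in Redis.
    await UpdateEmailAsync(userId, oldEmail, newEmail);
}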
What Happens If You Don’t Do Dual-Key Caching?
You eventually end up with…
❌ Stale user data
❌ Broken login (email changed, but cache didn’t)
❌ Internal microservices returning outdated values
❌ “Phantom users” in your audit logs
❌ Hard-to-debug production inconsistencies
Most of these bugs will never occur in development - only in production under real load.
Dual-key caching solves all of them.
Other Places Dual-Key Caching Is Mandatory
This pattern is universal across modern systems:
✔ E-commerce
• ProductId → data
• SKU → ProductId
✔ CMS
• ContentId → data
• Slug → ContentId
✔ Payments
• InternalTransactionId
• ExternalProviderId
✔ Identity Systems
• UserId
• Email / Username / Phone / External OAuth ID
✔ IoT
• DeviceId
• MAC Address / Serial Number
When one key is immutable and the other is mutable, dual-key caching is required.
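To make that concrete, here's a hedged generic sketch of the same idea - one canonical data key plus an index key per alternate identifier. The class name and shape are illustrative, not a production library:
public class DualKeyCache<T>
{
    private readonly IDatabase _db;
    private readonly string _prefix;
    private static readonly TimeSpan Ttl = TimeSpan.FromMinutes(10);

    public DualKeyCache(IDatabase db, string prefix)
    {
        _db = db;
        _prefix = prefix;
    }

    // Atomically writes the canonical entry and its index key.
    public async Task SetAsync(string primaryId, string alternateId, T value)
    {
        var tran = _db.CreateTransaction();
        _ = tran.StringSetAsync($"{_prefix}:data:{primaryId}", JsonSerializer.Serialize(value), Ttl);
        _ = tran.StringSetAsync($"{_prefix}:index:{alternateId}", primaryId, Ttl);
        await tran.ExecuteAsync();
    }

    // Inverse lookup: alternate id → primary id → data.
    public async Task<T?> GetByAlternateAsync(string alternateId)
    {
        var primaryId = await _db.StringGetAsync($"{_prefix}:index:{alternateId}");
        if (primaryId.IsNullOrEmpty) return default;

        var json = await _db.StringGetAsync($"{_prefix}:data:{primaryId}");
        return json.IsNullOrEmpty ? default : JsonSerializer.Deserialize<T>(json.ToString());
    }
}
With this shape, new DualKeyCache<UserProfileDto>(db, "user") reproduces the user example above, and the prefix is all that changes for the SKU, slug, or device cases.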
Conclusion
Dual-key Redis caching is not an optimization - it’s a foundational architecture for modern .NET systems that rely on Redis.
You should use it when:
• An entity has multiple identifiers
• One or more of those identifiers are mutable
• You need fast lookups from different contexts
• You want to avoid duplicated JSON in Redis
• You care about cache consistency under load
The correct pattern is:
✔ One canonical data key
✔ Multiple lightweight index keys
✔ Atomic updates for consistency
✔ Clean inverse lookups
✔ Simple invalidation
If you build it this way, you avoid almost all cache inconsistency issues before they ever appear.
That's all from me for today.