🔐 Security & Encryption
How sensitive data is protected at rest and how to keep it that way through deployment
📋 Overview
Blazor Blueprint protects sensitive data in three layers:
- Field-level encryption for credentials and PII the app needs to read back later (API keys, OAuth secrets, names, phone numbers).
- One-way hashing for tokens you only need to verify (invitation links, recovery codes, guest access tokens).
- Storage-layer protection — encrypted backups, network isolation, and disk encryption — handled at deploy time, not in code.
Encryption is on by default. There is a single config knob — Security:Encryption:Mode — but the only production-safe value is the default. The two dev-only modes refuse to start when IHostEnvironment.IsProduction(), so an accidental flip in prod fails loudly at startup rather than silently leaking plaintext into the database.
🔒 Field-Level Encryption
How it works
Tag a string property with the [Encrypted] attribute. A global MongoDB BSON convention swaps the default string serializer for one that runs the value through ISecretProtector (ASP.NET Core Data Protection) on every read and write. Call sites only ever see plaintext; the database only ever stores ciphertext.
public class EmailSettings : IEntity
{
    [Encrypted] public string ApiKey { get; set; } = string.Empty;
    [Encrypted] public string SmtpPassword { get; set; } = string.Empty;
    public string SenderEmail { get; set; } = string.Empty;
    // ...
}
No code change at any call site. Settings services, repositories, and admin pages already deal in plaintext; the BSON layer takes care of the rest.
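The convention can be pictured as a string serializer that wraps Protect/Unprotect around the default read/write path. A hypothetical sketch (the real implementation lives under Infrastructure/BlazorBlueprint.Infrastructure.Persistence.MongoDB/Encryption/, and class names here are illustrative):

```csharp
using MongoDB.Bson.Serialization;
using MongoDB.Bson.Serialization.Serializers;

// Illustrative serializer: encrypts on write, decrypts on read.
// ISecretProtector is the app's thin wrapper over ASP.NET Core Data Protection.
public sealed class EncryptedStringSerializer : SerializerBase<string>
{
    private readonly ISecretProtector _protector;

    public EncryptedStringSerializer(ISecretProtector protector) => _protector = protector;

    public override void Serialize(BsonSerializationContext context,
        BsonSerializationArgs args, string value)
    {
        // Encrypt on the way into Mongo; null/empty values pass through untouched.
        context.Writer.WriteString(
            string.IsNullOrEmpty(value) ? value : _protector.Protect(value));
    }

    public override string Deserialize(BsonDeserializationContext context,
        BsonDeserializationArgs args)
    {
        // Decrypt on the way out; invalid ciphertext surfaces as CryptographicException.
        var stored = context.Reader.ReadString();
        return string.IsNullOrEmpty(stored) ? stored : _protector.Unprotect(stored);
    }
}
```

A matching convention registers this serializer for every string property carrying [Encrypted], which is why call sites never see ciphertext.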
What's encrypted
- EmailSettings.ApiKey — provider API key (SendGrid, MailerSend, Mailgun)
- EmailSettings.SmtpPassword — SMTP password
- ExternalAuthSettings.FacebookAppSecret, GoogleClientSecret, MicrosoftClientSecret — OAuth provider secrets
- PushNotificationSettings.VapidPrivateKey — Web Push private key
- SsoSettings.ClientSecret — per-organisation OIDC client secret (nested on Organisation)
- ApiIntegration.ApiKey — third-party AI / integration provider keys (OpenAI, n8n, etc.)
- ApplicationUser.AuthenticatorKey — TOTP seed
- ApplicationUser.FirstName, LastName, PhoneNumber, DateOfBirth — PII fields (DateOfBirth stored as an ISO string? like "1990-05-15")
What's not encrypted, on purpose
- Email / NormalizedEmail / UserName / NormalizedUserName — Identity uses these for login lookup. Encrypting them breaks FindByEmailAsync.
- DisplayName — used by the registration uniqueness check (DisplayNameExistsAsync).
- WebAuthn passkey CredentialId / PublicKey, VapidPublicKey — public by design.
- DomainVerificationToken — deterministic HMAC of the org name + domain, intentionally readable so DNS verification can replay.
Trade-off: queries against encrypted fields don't work
Data Protection produces non-deterministic ciphertext (same plaintext, different output every time). That defeats frequency analysis — but it also means server-side equality checks, substring searches, and sort operations against encrypted fields return nothing useful. The platform user search at /Platform/Users intentionally restricts its filter to the unencrypted fields (DisplayName, UserName, Email) for this reason.
If you ever need exact-match lookups on an encrypted field (e.g. "find user with this exact phone"), the standard pattern is a blind index: store an HMAC of the normalised value alongside the encrypted field and query by HMAC. Be cautious about adding blind indexes on low-entropy fields like first names — they're vulnerable to frequency attacks where an attacker matches hash counts against public name distributions to recover plaintext.
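As a sketch of that pattern, assuming a dedicated HMAC key held outside the database (for example in your secrets manager), and with illustrative field names:

```csharp
using System;
using System.Linq;
using System.Security.Cryptography;
using System.Text;

public static class BlindIndex
{
    // Deterministic HMAC of the normalised value: queryable, but useless
    // without the HMAC key, unlike a plain unsalted hash.
    public static string Compute(byte[] hmacKey, string value)
    {
        // Normalise first so "07700 900123" and "+44 7700 900123" index alike.
        var normalised = new string(value.Where(char.IsDigit).ToArray());
        using var hmac = new HMACSHA256(hmacKey);
        return Convert.ToHexString(hmac.ComputeHash(Encoding.UTF8.GetBytes(normalised)));
    }
}

// Hypothetical usage, storing the index beside the encrypted field:
//   user.PhoneNumber = phone;                               // [Encrypted] at rest
//   user.PhoneNumberIndex = BlindIndex.Compute(key, phone); // deterministic, queryable
// Then query by the index, never by the encrypted field:
//   collection.Find(u => u.PhoneNumberIndex == BlindIndex.Compute(key, phone))
```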
🎚️ Encryption Mode
A single configuration knob, Security:Encryption:Mode, controls how the BSON serializer behaves. Default is Required. The other two values are escape hatches for local debugging and refuse to start in Production.
"Security": {
  "Encryption": {
    "Mode": "Required" // or "ReadOnly", "Disabled"
  }
}
Required (default)
- Write: values go through ISecretProtector.Protect.
- Read: values go through Unprotect; bad ciphertext throws CryptographicException — fail loud.
- When to use: always, in every environment, unless you're actively debugging.
ReadOnly (dev only)
- Write: plaintext — the protector is not invoked.
- Read: tries to decrypt; if the value isn't valid ciphertext (e.g. it was just written in this mode), the failure is caught and the raw value is returned.
- When to use: debugging an existing dev database where you want to inspect raw values in Mongo Compass without losing the ability to read existing encrypted fields.
Disabled (dev only)
- Write: plaintext.
- Read: plaintext, no decryption attempt. Will throw if existing ciphertext is in the database — use ReadOnly instead in that case.
- When to use: a fresh dev database you've never run with encryption on, where you want to inspect everything raw.
⚠️ Production guard: if IHostEnvironment.IsProduction() is true and the mode is anything other than Required, UseMongoEncryption() throws at startup. The two dev modes also log a startup warning so they're visible in console output. There is no way to silently run with weakened encryption in production.
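The guard inside UseMongoEncryption() can be sketched roughly like this (names and exact wording are illustrative, not the template's actual code):

```csharp
// Hypothetical startup guard: refuse to run with weakened encryption in prod.
var mode = configuration.GetValue<string>("Security:Encryption:Mode") ?? "Required";

if (environment.IsProduction() && mode != "Required")
{
    // Fail loudly at startup rather than ever writing plaintext in production.
    throw new InvalidOperationException(
        $"Security:Encryption:Mode is '{mode}', but only 'Required' is allowed in Production.");
}

if (mode != "Required")
{
    // Dev-only modes are allowed, but always announce themselves.
    logger.LogWarning("Field-level encryption is running in {Mode} mode (dev only).", mode);
}
```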
#️⃣ One-Way Hashing
Some values only need to be verified, never read back. Those are stored as SHA-256 hashes of the original — the raw value is sent to the user once and never stored.
- OrganisationMembership.InvitationToken — random token in the invitation email link; only the SHA-256 hash is persisted. AcceptInvitationAsync hashes the submitted token and looks up by hash.
- ApplicationUser.RecoveryCodes — UserManager surfaces plaintext codes once when generated; only hashes go into the database. RedeemCodeAsync hashes the submitted code and removes the matching hash from the list.
- ChatGroupSupportSettings.GuestAccessTokenHash (LiveChat plugin) — random token returned in the URL when an unauthenticated user opens a support chat; SHA-256 hash + expiry stored on the chat group, constant-time compare on revisit.
A database read alone never reveals a usable token or code — the attacker would need both the database and a way to brute-force the hash, and the values are random enough that brute force isn't practical.
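The hash-then-verify pattern looks roughly like this. Helper names are illustrative, not the template's actual API; the .NET calls (SHA256.HashData, CryptographicOperations.FixedTimeEquals) are real:

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

public static class TokenHasher
{
    // Store only this hash; the raw token is shown to the user once and discarded.
    public static string Hash(string token) =>
        Convert.ToHexString(SHA256.HashData(Encoding.UTF8.GetBytes(token)));

    // Constant-time compare avoids leaking how many leading bytes matched.
    public static bool Verify(string submittedToken, string storedHash) =>
        CryptographicOperations.FixedTimeEquals(
            Convert.FromHexString(Hash(submittedToken)),
            Convert.FromHexString(storedHash));
}
```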
👤 Personal Data & GDPR Export
PII fields on ApplicationUser are tagged with the standard ASP.NET Identity attributes:
- [PersonalData] on FirstName, LastName, DateOfBirth, DisplayName, ProfilePictureUrl
- [ProtectedPersonalData] on AuthenticatorKey, RecoveryCodes
These attributes drive the GDPR data-download endpoint at /Account/Manage/DownloadPersonalData. When a user requests their personal data, the endpoint reflects over the user record and returns a JSON file containing every [PersonalData] value in plaintext (the BSON layer transparently decrypts on read), plus an explicit Authenticator Key entry and a Recovery Codes Remaining count. [ProtectedPersonalData] fields are excluded from the auto-reflection and handled explicitly because their treatment differs (TOTP key needs decryption; recovery codes are hashed and not exportable).
The endpoint is independent of the [Encrypted] attribute — encryption is for storage; [PersonalData] is for export.
🗝️ The Data Protection Key Ring
All field-level encryption is backed by ASP.NET Core Data Protection. The master key ring is what protects every encrypted value in the database. Lose the keys and you lose the data — the ciphertext becomes permanently unreadable.
Where the keys live
Configured by ServiceDefaults.AddDataProtection() in Core/BlazorBlueprint.ServiceDefaults/Extensions/ServiceCollectionExtensions.cs. The default is to persist the key ring in Redis under the prefix BlazorBlueprint-DataProtection-Keys, with a 90-day key lifetime and automatic rotation. All hosts (Web, ApplicationService, BackgroundWorker) connect to the same Redis with the same ApplicationDiscriminator ("BlazorBlueprint"), so they share one ring and can decrypt one another's writes.
Keys are auto-generated by the framework on first use — you never set them by hand. You only configure where they're stored.
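The wiring in ServiceDefaults.AddDataProtection() has approximately this shape; the exact options in the template may differ, but these are the standard Data Protection APIs being described:

```csharp
using Microsoft.AspNetCore.DataProtection;
using StackExchange.Redis;

// Approximate sketch of the ServiceDefaults wiring: one shared ring in Redis.
var redis = ConnectionMultiplexer.Connect(
    configuration.GetConnectionString("redisCache")!);

services.AddDataProtection(options =>
        // Same discriminator in every host, so Web, ApplicationService, and
        // BackgroundWorker can decrypt one another's writes.
        options.ApplicationDiscriminator = "BlazorBlueprint")
    .PersistKeysToStackExchangeRedis(redis, "BlazorBlueprint-DataProtection-Keys")
    .SetDefaultKeyLifetime(TimeSpan.FromDays(90)); // automatic 90-day rotation
```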
Key rotation (automatic)
Data Protection rotates the active key on a fixed schedule — every 90 days by default. Rotation is automatic, transparent, and does not require any action on your part. Here's what happens:
- A few days before the current key expires, a new key is generated and added to the ring.
- From that point on, new writes use the new key.
- Old keys stay in the ring permanently — they're never deleted. This is by design, so existing ciphertext written under any previous key is always decryptable.
- The ciphertext itself carries the ID of the key it was encrypted with, so the framework picks the right key from the ring automatically on read. You don't track this anywhere.
Practical implication: the Redis list at BlazorBlueprint-DataProtection-Keys grows by one entry every ~90 days. Each entry is a few KB of XML. Over a year of running you might have 4–5 keys in the ring. Storage cost is negligible.
Backup implication: back up the ring (the whole Redis list / volume / managed-Redis snapshot), not "the key" — singular. A backup taken today contains every key the app has ever generated; restore that backup at any point in the future and all historical ciphertext is readable. The reason to back up on a regular schedule (daily snapshot of Redis or the host VM) isn't because keys "disappear" — it's so a backup taken today still covers ciphertext written next month with the next rotated key.
The only rotation event you'd ever drive manually is a revocation — explicitly marking a key as compromised via IKeyManager.RevokeKey(...). That stops the key being used for new encryption and renders all ciphertext written under it unreadable. Drastic, deliberate, and not something that happens automatically.
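If you ever do need to revoke, the relevant framework API is IKeyManager (from Microsoft.AspNetCore.DataProtection.KeyManagement). A sketch, with the key ID and reason string purely illustrative:

```csharp
using Microsoft.AspNetCore.DataProtection.KeyManagement;
using Microsoft.Extensions.DependencyInjection;

// Resolve the framework's key manager from the host's service provider.
var keyManager = app.Services.GetRequiredService<IKeyManager>();

// Inspect the ring to find the key in question.
foreach (var key in keyManager.GetAllKeys())
{
    Console.WriteLine($"{key.KeyId}: active {key.ActivationDate:u} → expires {key.ExpirationDate:u}");
}

// Drastic and deliberate: every payload encrypted under this key becomes
// permanently unreadable.
keyManager.RevokeKey(compromisedKeyId, reason: "key material exposed");
```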
Sharing keys across hosts
Any process that (a) connects to the same Redis and (b) uses the same ApplicationDiscriminator reads the same key ring. That's already how Web + ApplicationService + BackgroundWorker share keys today; they all point at the same ConnectionStrings:redisCache.
Scaling out to additional servers? Point them at the same Redis. Done. Different Redis instances generate different key rings and cannot decrypt one another's data.
⚠️ Critical: the Data Protection key ring is as important as your database. Treat it with the same operational care — back it up, monitor it, document its location.
💻 Local Development (Aspire)
The BlazorBlueprint.AppHost Aspire project provisions a Redis container with a persistent volume and snapshotting enabled, so the key ring survives dotnet run cycles, image rebuilds, and machine reboots:
var cache = builder.AddRedis("redisCache")
    .WithDataVolume()
    .WithPersistence(interval: TimeSpan.FromMinutes(1));
WithDataVolume() creates a named Docker volume on first run; Aspire reuses it on subsequent runs. WithPersistence(...) tells Redis to flush its in-memory state to disk every minute, so a hard crash loses at most ~60 seconds of writes. The only ways to lose the keys locally are docker volume rm on the named volume, reformatting your machine, or deleting the volume from Docker Desktop's UI.
🚀 Production Deployment
Both the GitHub Actions and Azure DevOps deploy paths use the same docker-compose.yml shape, with Redis configured for durable storage:
blazorblueprint-redis:
  image: redis:7-alpine
  volumes:
    - redis_data:/data
  restart: unless-stopped
  command: redis-server --appendonly yes
The redis_data named volume persists on the host's Docker volume directory. The --appendonly yes flag enables AOF (append-only file) persistence so every write is durable, not just periodic snapshots.
What survives a deploy, what doesn't
- ✅ New image pushed → server pulls + restarts containers — named volumes are not touched. Keys persist.
- ✅ docker compose down && docker compose up -d — volumes survive a regular down/up cycle.
- ✅ Server reboots — Docker remounts named volumes on restart.
- ❌ docker compose down -v — the -v flag removes volumes. Wipes Redis keys and Mongo data.
- ❌ docker volume rm redis_data — explicit volume deletion.
- ❌ Server's disk fails or VM is destroyed — same risk as your MongoDB.
Backup strategy
The simplest answer: snapshot the host VM. Whatever's running your Docker host (Azure VM, AWS EC2, DigitalOcean, Hetzner, etc.) almost certainly has a "snapshot" or "backup" feature in its console. Schedule daily snapshots, retain ~7 days. That captures the whole disk including all Docker volumes — Redis keys, Mongo data, n8n data — in one operation.
For more granular backup of just the key ring volume:
docker run --rm \
-v redis_data:/data \
-v $(pwd):/backup \
busybox tar czf /backup/redis-$(date +%F).tar.gz -C /data .
Copy the resulting tarball off the server (S3, Azure Blob, etc.). Restore by extracting it back into the same volume location.
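A restore sketch, mirroring the backup command above (adjust the tarball filename to the backup you are restoring; the container name assumes the compose file shown earlier):

```shell
# Stop Redis so it isn't writing while you replace its data directory.
docker compose stop blazorblueprint-redis

# Extract the backup tarball back into the named volume.
docker run --rm \
  -v redis_data:/data \
  -v $(pwd):/backup \
  busybox tar xzf /backup/redis-2025-01-01.tar.gz -C /data

# Start Redis again; it reloads the restored AOF/snapshot on boot.
docker compose start blazorblueprint-redis
```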
🚨 Never run docker compose down -v in production. The -v flag is destructive and irreversible. Stick to docker compose down (no flag) and docker compose up -d for normal restarts.
🛡️ Optional Hardening
The default setup is appropriate for most SaaS applications. The following upgrades are worth considering as the application grows or compliance requirements emerge:
- Encrypt the keys-in-Redis with ProtectKeysWithCertificate(...) (commented out at ServiceCollectionExtensions.cs:296). The keys inside Redis become themselves encrypted by a certificate you control. If Redis is dumped, the contents are useless without the cert. Shifts your "thing to keep safe" from "all of Redis" to one X.509 cert in your secrets manager.
- Move the key ring to Azure Key Vault / AWS KMS (PersistKeysToAzureBlobStorage + ProtectKeysWithAzureKeyVault, or AWS equivalents). Keys live in a managed KMS, multiple regions can read them with appropriate IAM, audit logs are built in. Heavier ops setup.
- Mount the key ring on a network file share (PersistKeysToFileSystem on EFS / Azure Files / NFS). Useful if you want to keep the keys outside Redis but don't want to introduce cloud KMS dependencies.
- Storage-layer encryption-at-rest for MongoDB (Atlas built-in EAR, encrypted EBS volumes, encrypted backups). Protects against an attacker who gets a raw disk image but not against one with database credentials. Transparent — doesn't break any query — and complements field-level encryption rather than replacing it.
📌 Quick Reference
Adding a new encrypted field
- Make sure the property type is string (the convention throws at startup for non-strings).
- Add [Encrypted] from BlazorBlueprint.Domain.Attributes.
- That's it. No service-layer changes, no migration needed for new data.
Existing plaintext rows written before the attribute was added will throw CryptographicException on read — by design, so misconfiguration fails loud rather than silently returning ciphertext as plaintext. For a fresh database this is a non-issue. If you're retrofitting, add a one-time migration that reads + re-saves affected documents.
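One way to sketch that retrofit migration, assuming the MongoDB C# driver: reading through the typed collection would throw on the plaintext rows, so read raw BsonDocuments (which bypass the [Encrypted] serializer), then re-save through the typed collection so the write path encrypts. Collection and field names are illustrative; adjust the Id mapping to your entity's key type:

```csharp
using MongoDB.Bson;
using MongoDB.Driver;

// Raw view: no [Encrypted] serializer, so plaintext rows read without throwing.
var raw = database.GetCollection<BsonDocument>("emailSettings");
// Typed view: ReplaceOneAsync runs through the convention, so writes encrypt.
var typed = database.GetCollection<EmailSettings>("emailSettings");

foreach (var doc in await raw.Find(FilterDefinition<BsonDocument>.Empty).ToListAsync())
{
    var entity = new EmailSettings
    {
        Id = doc["_id"].AsString,            // adjust to your actual Id type
        ApiKey = doc["ApiKey"].AsString,     // still plaintext in the raw doc
        SmtpPassword = doc["SmtpPassword"].AsString,
        SenderEmail = doc["SenderEmail"].AsString,
    };

    await typed.ReplaceOneAsync(
        Builders<EmailSettings>.Filter.Eq("_id", doc["_id"]), entity);
}
```

Run it once under the default Required mode, then the rows read back normally.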
Key file locations
- Core/BlazorBlueprint.Domain/Attributes/EncryptedAttribute.cs — the attribute
- Infrastructure/BlazorBlueprint.Infrastructure.Persistence.MongoDB/Encryption/ — BSON serializer, convention, host
- Core/BlazorBlueprint.Application/Services/Security/DataProtectionSecretProtector.cs — protector implementation
- Core/BlazorBlueprint.ServiceDefaults/Extensions/ServiceCollectionExtensions.cs — Data Protection / Redis wiring
📦 Ready to Download?
Ship a SaaS template with secrets handled correctly out of the box. Free for personal use; Enterprise license required for commercial use.
🔓 Open Source on GitHub • Free for personal/non-commercial use • Enterprise license (£399) required for commercial use • Full source code included