Improving system trust

Fixing system trust in a compliance platform

EXPERTISE

Product Design · Core Platform & Systems

IMPACT

Removed audit risk

When “all good” wasn’t actually true

Secureframe helps companies prove they meet security and compliance requirements through tests that show whether required work is complete and up to date.

When a test is marked as “passing,” customers trust that it reflects reality.

But many tests depend on time, and a core system flaw allowed some to remain “passing” even after the underlying work had expired. That created false confidence, and issues surfaced only during audits, when it was often too late to fix them.

Background

Secureframe tracks compliance through tests that rely on uploaded proof. For many of these tests, validity depends on when the activity happened, not when evidence was uploaded.

When configured correctly, tests clearly show whether required work is current.

When misconfigured or left unconfigured, the system can quietly drift out of sync with reality.
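To ground the distinction, here is a hypothetical illustration (not Secureframe’s actual schema) of the two timestamps involved; validity hinges on the second one, not the first:

```ts
// Illustrative data shape (hypothetical names, not the real schema).
interface Evidence {
  uploadedAt: Date;   // when the proof was uploaded to the platform
  completedAt: Date;  // when the underlying activity actually happened
}
```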

Timeline

Multi-phase rollout across core platform

The Team

1 PM, 1 Designer (me), 1 Engineer, Compliance stakeholders

My Role

I led end-to-end design across problem framing, system modeling, and phased execution, partnering closely with product and engineering to correct a foundational platform issue without disrupting active customers or audits.

Why the system was failing users

I mapped the system end-to-end to understand why “passing” could drift so far from reality. That analysis surfaced three root causes:

Missing defaults

If a customer uploaded evidence without setting an expiration, the test never expired.

Wrong source of truth

Expiration was calculated from upload time rather than when the real-world activity occurred.

Conflicting audit behavior

When work expired, the system archived records in a way that removed them from audit views, exactly when they were needed most.

The system was technically behaving as designed, but the design itself was unsafe at scale.
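To make the failure mode concrete, here is a minimal sketch of the legacy behavior described above. All names are hypothetical; this is an illustration, not Secureframe’s actual code.

```ts
// Sketch of the *flawed* legacy behavior (hypothetical names).
interface LegacyTest {
  expiresAt?: Date;         // optional: nothing forced customers to set it
  evidenceUploadedAt: Date; // the timestamp the system actually used
}

function legacyStatus(test: LegacyTest, now: Date): "passing" | "archived" {
  // Root cause 1: no default. A missing expiration meant the test never expired.
  if (!test.expiresAt) return "passing";

  // Root cause 2: when an expiration *was* derived, it was anchored on
  // evidenceUploadedAt rather than when the activity really happened,
  // so late uploads of old work looked fresh.

  // Root cause 3: on expiry, the record was archived and dropped from
  // audit views, exactly when auditors needed it.
  return now > test.expiresAt ? "archived" : "passing";
}
```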

This wasn’t a theoretical edge case; it was actively impacting customers preparing for audits.

Internal escalation that helped prioritize fixing default expiration behavior.

Design principle: Correctness

Before touching the UI, I made a deliberate call to fix system correctness first.
My guiding principle was simple:

A system that communicates status must be correct by default, even when users forget steps or make mistakes.

How do we make it impossible for the system to say ‘all good’ when it isn’t?

To move quickly without compromising trust, we shipped this work in phases, fixing correctness first, then layering in safer defaults and clarity to reduce risk.

Phase 1: Fix the logic

Correct the system’s source of truth (see the sketch after this list).

  • Prevent expired work from being hidden during audits

  • Introduce a clear “out of date” state instead of silent archiving

  • Calculate expiration using activity completion date, not upload time
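A hedged sketch of the Phase 1 corrections, building on the hypothetical shape from the Background section. The names and surrounding API are assumptions for illustration, not the production implementation.

```ts
// Sketch of the corrected logic (hypothetical names).
type TestStatus = "passing" | "out_of_date";

function evaluate(
  activityCompletedAt: Date, // the new source of truth
  validityDays: number,
  now: Date = new Date()
): TestStatus {
  const expiresAt = new Date(activityCompletedAt);
  expiresAt.setDate(expiresAt.getDate() + validityDays);

  // Expired work is flagged as "out_of_date" instead of being archived,
  // so it stays visible during audits.
  return now <= expiresAt ? "passing" : "out_of_date";
}
```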

Phase 2: Make the safe path the default

Make safety the default, not a setup step.

  • Every time-sensitive test has an expiration by default

  • Existing customer configurations are preserved

  • Tests can no longer remain valid forever by accident

Default intervals were defined in partnership with compliance to reflect real-world risk and audit expectations.
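As a rough illustration of the Phase 2 behavior: a customer-set interval always wins, and the default only fills gaps. The names and the 90-day value below are placeholders, not the real defaults.

```ts
// Sketch of Phase 2 defaulting (hypothetical names, placeholder value).
interface TestConfig {
  validityDays?: number; // interval the customer configured, if any
}

const DEFAULT_VALIDITY_DAYS = 90; // placeholder; real defaults came from compliance

function effectiveValidityDays(config: TestConfig): number {
  // Existing customer configurations are preserved; the default only applies
  // where nothing was set, so no test stays valid forever by accident.
  return config.validityDays ?? DEFAULT_VALIDITY_DAYS;
}
```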

Phase 3: Improve clarity

Use correctness as a foundation for better UX.

With the system behaving correctly by default, I focused on making that behavior clear and predictable for users.

What users can now see at a glance (sketched below):

  • Current quarter clearly identified

  • Past compliance remains visible

  • Overdue evidence no longer hidden
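The list above maps naturally to a small state derivation. A loose sketch under assumed names; the function and the quarter rule are illustrative, not the shipped UI logic.

```ts
// Loose sketch of how the display states could be derived (hypothetical names).
type DisplayState = "current" | "past" | "overdue";

function displayState(completedAt: Date, expiresAt: Date, now: Date): DisplayState {
  if (now > expiresAt) return "overdue"; // overdue work is surfaced, never hidden

  // Same calendar quarter as "now" counts as current; older but still-valid
  // work stays visible as past compliance.
  const sameQuarter =
    completedAt.getFullYear() === now.getFullYear() &&
    Math.floor(completedAt.getMonth() / 3) === Math.floor(now.getMonth() / 3);
  return sameQuarter ? "current" : "past";
}
```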

Validating the change

Because this change touched core platform correctness and active audits, I partnered closely with product, engineering, and compliance to validate the rollout. We initially released the change through an early-access cohort to reduce risk before rolling it out broadly.

I specifically validated that:

  • Active audits retained all historical records

  • Customers with custom configurations were unaffected

  • Tests now expired predictably based on activity completion

This allowed us to ship confidently without breaking trust.
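The configuration and expiration checks can be expressed as small assertions, roughly as below. This is a standalone sketch with hypothetical helpers and a placeholder default, not the team’s actual test suite; the audit-retention check isn’t reproducible in isolation like this.

```ts
// Standalone sketch of the validation checks (hypothetical helpers).
import assert from "node:assert";

const DEFAULT_VALIDITY_DAYS = 90; // placeholder default
const effectiveValidityDays = (config: { validityDays?: number }): number =>
  config.validityDays ?? DEFAULT_VALIDITY_DAYS;

// Customers with custom configurations are unaffected.
assert.strictEqual(effectiveValidityDays({ validityDays: 30 }), 30);

// Missing configuration now falls back to the safe default.
assert.strictEqual(effectiveValidityDays({}), DEFAULT_VALIDITY_DAYS);

// Expiration is anchored on activity completion, so it is predictable.
const completedAt = new Date("2024-01-01T00:00:00Z");
const expiresAt = new Date(completedAt);
expiresAt.setDate(expiresAt.getDate() + DEFAULT_VALIDITY_DAYS);
assert.ok(new Date("2024-02-01T00:00:00Z") <= expiresAt); // still current
assert.ok(new Date("2024-06-01T00:00:00Z") > expiresAt);  // now out of date

console.log("all checks passed");
```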

Impact

This work removed an entire class of system failure.

100% of time-sensitive tests now expire correctly by default

Tests can no longer stay “passing” indefinitely due to missing configuration

Audit-time surprises caused by outdated or missing records were eliminated at the system level

What I learned

Fixing correctness solved the trust problem, but surfaced a new one:

Users still struggled to understand when work was due or overdue

This reinforced an important lesson for me:

Correct systems are necessary, but not sufficient. Clarity and guidance must follow.

Why this work matters

This project wasn’t about intervals.

It was about ensuring the product could never quietly misrepresent reality, especially during high-stakes moments like audits.

By fixing the system at its core, I helped Secureframe move closer to being a product customers can trust without second-guessing.