Note: To maintain confidentiality, the UI featured here is a personal design exploration.
Case Study
Designing trust for invisible systems
Client
Hager Group
Industry
IoT & Energy
Year
2025
Services
Product Design, UX research, Workshops
Energy management taught me how to build trust in automated systems
During Hager Group's pivot from hardware to a platform ecosystem, I led the B2C energy management vision, focusing on how invisible data becomes a controllable asset. I created a vision prototype that secured multi-million-euro funding and an expansion in scope from national to Europe-wide.
"You should have started with that."
— C-suite feedback during funding review.






The Data Wall
60%
of users found raw energy data overwhelming and impractical for daily use.
The Trust Gap
83%
of users needed to verify system accuracy before they would trust a simplified summary.
Progressive disclosure
Energy data is notoriously heavy. For the average user, granular kWh graphs and complex tariff tables create a high cognitive load that fails to answer their most fundamental question: "Am I doing okay today?"
To bridge this gap, the system defaults to high-level "outcome metrics" (Euros saved) while providing an immediate, low-friction path to technical verification.
Emotional signalling in the UI prioritises a "feeling of understanding." Using natural language, we translate abstract electrical production into immediate value.
Flexible data layers enable power users to access the granular data they need to feel confident in the system’s accuracy without burdening casual users.
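The layering described above can be sketched in code. This is a minimal illustration, not product code: the function names, tariff model, and baseline are all assumptions made for the example.

```python
# Sketch of progressive disclosure for energy data (illustrative names,
# not the production API): default to one outcome metric, and expose raw
# readings only on explicit drill-down.

def summarize(readings_kwh, tariff_eur_per_kwh, baseline_kwh):
    """Top layer: a single outcome metric answering 'Am I doing okay today?'"""
    used = sum(readings_kwh)
    saved_eur = (baseline_kwh - used) * tariff_eur_per_kwh
    return f"You saved {saved_eur:.2f} € today"

def drill_down(readings_kwh):
    """Second layer: granular hourly data for power users verifying accuracy."""
    return [{"hour": h, "kwh": v} for h, v in enumerate(readings_kwh)]

# Casual users see only the summary; the raw layer stays one tap away.
summary = summarize([1.0, 1.0], tariff_eur_per_kwh=0.5, baseline_kwh=4.0)
```

The design choice the sketch captures: the default view never shows a kWh figure, but the verification path exists for the 83% who need it.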






You probably won't trust this affordance again; this brings us to our next topic…
Predictability is
#1
source of user trust in testing
Timeline pattern
60%
of users found it reassuring and calming to have this design pattern recap their home's automations


Trust Calibration
Users want to delegate home-IoT cognitive load, but testing revealed a fear of silent failure: the system making expensive decisions (like charging an EV during peak hours) without explanation.
The Timeline Pattern, which lays out past, present, and future automations chronologically, transforms black-box decisions into predictable, verifiable and, as we'll see, controllable ones.
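One way to picture the pattern's underlying shape: every automation becomes an event with a timestamp and a human-readable reason, so the UI can render one continuous past / now / upcoming list. The model below is a hypothetical sketch; field names are assumptions.

```python
# Hypothetical data shape behind the Timeline Pattern: each automation
# carries its own explanation, which is what prevents "silent failure".

from dataclasses import dataclass
from datetime import datetime

@dataclass
class AutomationEvent:
    at: datetime
    title: str
    reason: str          # shown to the user: why the system acted
    overridable: bool    # future events remain user-controllable

def bucket(events, now):
    """Group events so the UI can render Past and Upcoming sections."""
    return {
        "past": [e for e in events if e.at < now],
        "upcoming": [e for e in events if e.at >= now],
    }
```

Because future events are first-class objects, the system can show what it is *about* to do, not just what it did.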






















Closing the Loop
Historical data creates data graveyards: users see what happened but don't know what to do next.
Users need recommendations connected to their actual behavior patterns, not generic tips. "Shift dishwasher to 14:00" means nothing without knowing when they currently run it.
Context-aware recommendation cards surface below the usage data. The system identifies high-cost patterns (e.g. a 10:00 peak), then suggests specific timing shifts based on the user's actual schedule and tariff structure.
Closing the loop from observation → recommendation → action builds agency and reinforces engagement with automation.
Iterations
Set up stakeholder input loops to drive process & iterations
Testing protocols
5+
Worked with UXR to design testing protocols & stimuli
Users tested
100+
Gathered insight and connected with our users
Override logic
When user and automation disagree, trust breaks. Systems that silently override manual input create ghost changes:
Users manually turned the heater down; ten minutes later, the automation turned it back up. Users didn't abandon automation, they abandoned the system, because it felt unreliable.
Working with cloud architecture and engineering, I designed context-dependent derogation levels that prioritise immediate, one-off needs.
Automation without escape hatches feels coercive. Graduated override options let users correct automated decisions at appropriate scope, fixing this interaction without disabling the entire system.
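The graduated scopes can be sketched as a small decision rule. The three levels below are assumptions for illustration; the shipped system's derogation levels may be named and scoped differently.

```python
# Sketch of graduated override ("derogation") scopes: a manual
# correction suppresses automation only at the scope the user chose,
# so one fix never silently disables the whole system.

from enum import Enum

class OverrideScope(Enum):
    THIS_EVENT = "skip this automation only"
    TODAY = "pause this rule until midnight"
    ALWAYS = "disable this rule entirely"

def automation_allowed(scope, same_event, same_day):
    """Decide whether automation may act while an override is active."""
    if scope is OverrideScope.ALWAYS:
        return False
    if scope is OverrideScope.TODAY:
        return not same_day
    return not same_event  # THIS_EVENT blocks only the matching event
```

The key property: the narrow default (THIS_EVENT) resolves the heater scenario above without touching any other automation.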









As we move towards AI-driven products, the challenges remain the same
Energy management was my first encounter with the fundamental challenges of automated systems:
How much should a system explain itself?
When does transparency become noise?
How do you build trust when users shouldn't need to understand underlying complexity?
These questions aren't unique to energy; they're core to every AI product. WattsApp taught me that trust in opaque systems requires three elements: making predictions interpretable (Legibility), ensuring users can act on information (Timing), and giving them control without undermining the value of automation (Override).
I'm looking to apply these principles to AI products where the stakes and complexity are even higher.
View Hager feedback
