Hero image: probe feature screens

From Hidden to Hero: Redesigning Probes to Drive Adoption

Citrix Monitor offers a synthetic monitoring feature called Probes, which simulates user sessions to proactively check the availability of virtual applications and desktops before employees begin their workday.

While this helps IT admins ensure a reliable end-user experience and address potential issues before they impact productivity, the feature had issues with low adoption.

The redesign boosted active use by 84% and increased the install-to-active conversion rate by 14 percentage points within six months of release.

Role

Product Designer — UI/UX Design, Interaction, Prototyping, Visual Design

Collaborated with a Product Manager, UX Researcher, Content Designer, and 2 Engineers.

Time frame

Jun - Oct 2023 (5mo)

Platform

Web (Desktop)

Gif of probes redesign

How it all started 🌱

IT admins proactively monitor virtual assets because they are responsible for their organization's end-user experience, and proactive monitoring increases service reliability and helps mitigate issues before users are affected. From a 2016 survey:

84% of Citrix admins wanted more proactive alerts to fix problems before users noticed.

We learned that admins were resorting to third-party tools because they were either unaware of the built-in monitoring tool or found it lacking in functionality.

This was further validated in a June 2023 research study, in which customers struggled to discover the Probes feature, highlighting gaps in both the feature's visibility and the intuitiveness of the overall information architecture (IA). Based on qualitative and quantitative research, we focused on these pain points:

  1. Low adoption rate

  2. Poor discoverability

  3. Functionality fell short

Picture of previous summary and probe results view before redesign
Screens from before redesign

Low adoption rate 📌

Chart of low adoption funnel

Based on probe usage data from June 2023, there were ~4,800 Monitor customers, but only 106 had installed probe agents (a prerequisite for probe setup) and just 50 of those had active probes. Customers said there was a need for proactive monitoring, yet the adoption rate remained low. As a product team, we anticipated that:

adoption would increase by improving discoverability and adding critical functionality to bring the feature up to par with competitors.
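To put the funnel above in perspective, the counts work out to roughly a 2% install rate and a 47% install-to-active rate, or about 1% overall adoption. A quick back-of-the-envelope sketch (the counts come from the June 2023 data above; the variable names are illustrative):

```typescript
// Funnel math using the June 2023 usage data cited above. The three counts
// come from the case study; the rates are simple arithmetic on those figures.
const totalMonitorCustomers = 4_800;    // approximate number of Monitor customers
const customersWithProbeAgents = 106;   // installed the prerequisite probe agent
const customersWithActiveProbes = 50;   // actually running probes

const installRate = customersWithProbeAgents / totalMonitorCustomers;        // ~2.2%
const activationRate = customersWithActiveProbes / customersWithProbeAgents; // ~47%
const overallAdoption = customersWithActiveProbes / totalMonitorCustomers;   // ~1.0%

console.log(
  `Install: ${(installRate * 100).toFixed(1)}% | ` +
  `Install-to-active: ${(activationRate * 100).toFixed(1)}% | ` +
  `Overall adoption: ${(overallAdoption * 100).toFixed(1)}%`,
);
```

The two ratios map directly onto the two bets above: better discoverability targets the customers-to-install step, while added functionality targets the install-to-active step.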

Poor discoverability 📌

Research participants scored poorly in a June 2023 tree test, conducted as a "health check" of the information architecture (IA), where they navigated a sitemap to complete hypothetical tasks:

  • Where would you go to create an application probe?

  • Where would you go to check the results of a desktop probe?

  • Where would you go to see a summary of probe failures?

The above tasks had a 40–60% success rate, which was considered poor given that the participants were experienced power users and the target success rate was 90%. In the existing IA, users had to navigate to 3 different areas to complete probe-related tasks. Multiple research participants noted that it would be more intuitive to have all probe actions consolidated in one location, describing the current structure as "not logical."

Existing IA
Existing IA diagram
Group probe actions together

I agreed with customer feedback that grouping probe actions together would improve both the information architecture and the discoverability of Probes. When reviewing the overall user journey, I identified two key stages:

  1. Configuration

  2. Reviewing results (either summary or detailed views)

These stages, however, are often intertwined—admins may review results and immediately need to adjust an existing probe’s configuration. Forcing users to navigate back and forth between separate areas to perform these related tasks created unnecessary friction and led to a suboptimal experience.

New "Probes" node

I brought “Probes” into its own dedicated node in the navigation. Previously, probe results were buried under “Trends” and “Applications,” while probe configuration lived separately under “Configuration.” This fragmented experience confused users, and the ambiguous labeling and grouping (addressed separately) didn't help.

In reviewing the existing information architecture (IA), these key insights stood out:

  • Probes data is synthetic—simulated rather than based on real user activity.

  • Probes is the only dataset that requires this kind of configuration.

  • “Configuration” housed only Probes, with no plans to add other configurable features.

As a product team, we concluded that the “Configuration” node no longer served a broader purpose. Replacing it with a “Probes” node would improve clarity, align with the nature of the data, and significantly enhance discoverability.

Updated IA
Updated IA diagram
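For readers skimming past the diagram, here is a rough textual sketch of the same change; the top-level node names come from the prose above, while the child items are illustrative assumptions rather than the product's actual menu entries.

```typescript
// Illustrative sketch of the IA change. Top-level labels follow the prose
// above; the child items are assumptions, not the actual menu structure.
type NavNode = { label: string; children?: NavNode[] };

const beforeNav: NavNode[] = [
  { label: "Trends", children: [{ label: "Probe results" }] },       // results buried here...
  { label: "Applications", children: [{ label: "Probe results" }] }, // ...and here
  { label: "Configuration", children: [{ label: "Probes" }] },       // setup lived separately
];

const afterNav: NavNode[] = [
  // A dedicated "Probes" node replaces "Configuration" and groups
  // configuration and results in one place.
  {
    label: "Probes",
    children: [{ label: "Summary" }, { label: "Results" }, { label: "Configuration" }],
  },
];
```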

Functionality fell short 📌

Market research revealed that Citrix Monitor’s synthetic monitoring tool was falling short compared to competitors. The product manager noted that some customers were turning to third-party solutions to meet their needs.

To understand the gaps, I reviewed the existing Probes feature, competitive analysis, and customer feedback. I also partnered with the product manager to better understand the full admin user journey, from identifying failures through troubleshooting and triaging. Mapping the journey allowed me to pinpoint experience gaps and identify opportunities for meaningful enhancements to bring the feature up to par.

User journey
Comprehensive summary at a glance

The summary view has been redesigned to provide an overall status of all probe activities. Previously, it focused only on probe failures, based on the assumption that admins were only interested in seeing failures when issues occurred. For example, a Canadian government agency shared a use case in which a scheduled probe did not complete due to an error; since the probe did not technically fail, the issue went unnoticed.

Feedback revealed that admins actually wanted to see the overall health and availability of virtual assets, including successful and skipped runs, as they often share this report with managers. To address this, I updated the summary view to include (see the sketch after this list):

  • Status of all probes: scheduled, completed, failed, skipped

  • Failure distribution

  • Probe agent machine health: active vs. inactive

  • Ability to filter the view
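As a rough illustration of what the redesigned summary surfaces, the sketch below captures the list above as a data shape; the type and field names are my own, not the actual Citrix Monitor data model.

```typescript
// Hypothetical data shape for the redesigned summary view — names are
// illustrative, not the actual Citrix Monitor schema.
type ProbeRunStatus = "scheduled" | "completed" | "failed" | "skipped";

interface ProbeSummary {
  // Count of probe runs in each state for the selected time range.
  runCountsByStatus: Record<ProbeRunStatus, number>;
  // How failures break down (keys, e.g. failure stage, are illustrative).
  failureDistribution: Record<string, number>;
  // Health of the probe agent machines that execute the runs.
  agentMachines: { active: number; inactive: number };
}

// The view can also be filtered; these filter fields are assumptions.
interface SummaryFilter {
  probeType?: "application" | "desktop";
  from?: Date;
  to?: Date;
}
```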

Summary cards
Deeper probe run insight

The probe run detail view has been improved to better equip the admin with comprehensive telemetry. The previous version was minimal: application or desktop name, timestamp, endpoint, and, if the run failed, the failure stage. During the troubleshooting process, admins often need to investigate a variety of potential issues.

Given the goal of helping admins resolve issues as quickly as possible, it's crucial to offer relevant information that can aid in troubleshooting the failure. After working with the product manager to gather the relevant telemetry points, I organized the data points by relevance and presented the stages in a clear, easy-to-understand visualization. The improved user experience now includes (see the sketch after this list):

  • What happened during a probe run

  • Telemetry breakdown for each probe run
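In the same spirit, here is a hypothetical shape for a single probe run's detail, combining the original minimal fields with the per-stage breakdown and extra telemetry described above; all names are assumptions rather than the product's actual schema.

```typescript
// Hypothetical shape for the probe run detail drawer — names are illustrative.
type ProbeStage = {
  name: string;                                 // e.g. a launch or logon step (assumed labels)
  status: "passed" | "failed" | "skipped";
  durationMs?: number;
};

interface ProbeRunDetail {
  resourceName: string;                         // application or desktop that was probed
  endpoint: string;                             // machine the probe ran from
  timestamp: Date;
  stages: ProbeStage[];                         // what happened during the run, stage by stage
  telemetry: Record<string, string | number>;   // additional data points for troubleshooting
}
```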

Probe run drawer

Challenges of UX debt 📌

Leaving the legacy look behind

I took the opportunity to make incremental improvements and address some of the UX and technical debt accumulated due to resource and time constraints. Every new component or update sparked debates about whether it should align with the legacy style or the new design system, which made the Probes revamp particularly challenging.

For example, the existing probes feature used outdated filters that consumed valuable vertical space and lacked scalability for future improvements. I worked with engineering to update the filters to the new style and presented two options, considering the level of engineering effort required.

I recommended the "more effort" option, as it aligned with system-wide filter patterns, was straightforward, and offered a more scalable solution. After some discussion, engineering decided on the "less effort" approach due to time constraints, with the understanding that the "more effort" option would be implemented in a future iteration. The "less effort" solution was faster to implement, as it reused existing input fields and aligned stylistically with the time selector in other views.

After many iterations, design reviews, discussions, and trade-offs, the first version of Probes was delivered. I worked with engineering throughout implementation and the design QA process.

Results 📈

Active use

+84%

Conversion rate (probe agent install to active use)

+14%

After the launch, active use increased by 84% between June 2023 (before the update) and July 2024 (about six months post-launch). The conversion rate from probe agent install to active use also improved, rising from around 47% to 61%, a 14-point increase.
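A quick check of how these figures relate, using the June 2023 baseline cited earlier; the post-launch active count below is implied by the reported percentages rather than separately published.

```typescript
// Back-of-the-envelope check of the reported results. Baseline counts are the
// June 2023 figures; post-launch values are implied by the stated percentages.
const activeBefore = 50;
const activeAfter = Math.round(activeBefore * 1.84);              // +84% => ~92 active probes

const conversionBefore = 0.47;                                    // ≈ 50 / 106 agent installs
const conversionAfter = 0.61;
const pointIncrease = (conversionAfter - conversionBefore) * 100; // ≈ 14 percentage points

console.log(`Active probes: ${activeBefore} -> ~${activeAfter}`);
console.log(`Conversion: 47% -> 61% (+${pointIncrease.toFixed(0)} pts)`);
```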

By improving discoverability and making enhancements that help IT admins better troubleshoot probe failures, we increased both active use and the install-to-active conversion rate for the Probes feature.

Reflection 💭

Driving increased adoption for an impactful feature was a win, but there's always room for improvement. If I had the opportunity to continue leading the UX work on the Probes feature, I would have liked to:

  1. Understand why users install the probe agent, but do not follow through with actively using the feature.

  2. Deep dive into the usage metrics of the feature to understand what's working and what's not.

  3. Continue to iterate on the design to offer the optimal experience for IT admins as a proactive monitoring tool.