Ticket Intelligence App for Ticketing Systems

Every day, a tidal wave of support tickets floods help desk agents' dashboards.
Agents waste valuable time sifting through redundant information instead of focusing on helping customers. Frustration grows, efficiency plummets, and customer satisfaction takes a hit.

The team needs a smarter way to manage tickets—no matter which system of record platform they are using. That’s when we stepped in ✨

A stock image of a woman working in an office at a desk, with piles and piles of paperwork all around her.

Team.

  • Volha D – Lead Product Designer
  • Stefanos V – Zendesk widget UI
  • Ashok M, Wei W. – AI, ML
  • Arpitha A – Conversation Server

Role.

I led the designs from concept to implementation, collaborated with stakeholders on finalizing the scope, defined user personas, facilitated usability testing, and communicated the desired experience to the engineering team.

/result

Game-Changer for Support Teams

Zoom
11.3K
Agent hrs saved
90%
Reduced MTTR
Snowflake
32K
Agent hrs saved
63%
Reduced MTTR
Quizlet
30K
Agent hrs saved
70%
Reduced MTTR

Launched MVP

/process

Clarifying the constraints and context.
To kick off the process, I set up a 1:1 meeting with the Product Manager to clarify the constraints and get better context.
✏️ Constraints:
To address the constraints, I focused on designing a system-agnostic architecture that could seamlessly fit into different ticketing environments.

Focus on the user and all else will follow.

Since this was not a problem I was very familiar with, I embedded myself in the world of customer support. Through online research, user interviews, and a journey-map exercise, I uncovered the key frictions:
  • 🤯 A high volume of tickets, many of them repetitive issues, overwhelmed agents.
  • 😤 Companies often struggled with disorganized knowledge bases, leading to endless frustration!
  • 😡 Tickets got routed to the wrong agents, wasting a lot of time!
These frictions interfered with agents' primary goal: resolving issues as quickly as possible. Why did this matter to agents? They tracked KPIs such as Mean Time to Resolution (MTTR) to demonstrate their performance.

Meet Mark, a support agent starting his work day:

Agent journey map and steps they have to go to close a support ticket.
The problem: Inefficient processes prolong ticket resolution, reducing agent productivity.

😡 Mark's present experience:

His dashboard is flooded with tickets.
He opens one: it’s been bouncing between teams.
He reaches out to the requester for clarification.
Once they respond, he hunts through an unorganized database for a solution.
Finally, he applies the fix, then manually updates notes before moving on.

😌 Now, imagine a different scenario.

Aisera's ✨ AI-powered Ticket Intelligence Widget ✨ routes the ticket correctly from the start.
The system instantly suggests a potential solution, and once the requester confirms, the widget closes the ticket and updates resolution notes.
No wasted time, no manual back-and-forth—just seamless, efficient support.

Refining and improving the experience based on feedback and identified issues.

Collaborating with the Product Team, I identified the three most widely used ticketing systems among our customers—Salesforce, Zendesk, and ServiceNow. These needed to be our foundation.
By analyzing these target platforms, I learned that each supported integrating third-party applications within its interface.

Preliminary exploration

Applying Predictions

As a starting point, I explored a design where all predictions meeting the confidence threshold would not only pre-fill ticket fields but also display their confidence scores inline. This approach aimed to optimize widget space by listing only the remaining, non-auto-applied predictions. However, showing the confidence score within the input fields turned out to be a technical constraint.

Failed Design

The final design displays all predicted fields + confidence scores within the widget. Fields that have been auto-applied by Aisera AI have checkboxes in the selected state. An agent can easily apply/remove the predicted values.
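The auto-apply behavior described above can be sketched roughly as follows. This is a minimal illustration, not Aisera's actual implementation: the `Prediction` shape, the `0.8` threshold, and all field names are assumptions made for the example.

```typescript
// Hypothetical shapes; these are illustrative, not Aisera's real API.
interface Prediction {
  field: string;      // ticket field the prediction targets
  value: string;      // predicted value
  confidence: number; // confidence score in the range 0..1
}

interface WidgetRow extends Prediction {
  autoApplied: boolean; // rendered as a checkbox in the selected state
}

// Predictions at or above the threshold are auto-applied (checkbox pre-selected);
// the rest are still listed with their scores so the agent can apply them manually.
function toWidgetRows(predictions: Prediction[], threshold = 0.8): WidgetRow[] {
  return predictions
    .slice()
    .sort((a, b) => b.confidence - a.confidence) // highest confidence first
    .map((p) => ({ ...p, autoApplied: p.confidence >= threshold }));
}
```

Keeping every prediction visible (rather than hiding the auto-applied ones) is what lets the agent easily apply or remove predicted values with a single checkbox toggle.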

A/B Testing: Single vs Multi-Tab Design

Based on the results of the A/B testing, the multi-tab solution was preferred. A few reasons mentioned in its favor:
  • Reduces cognitive load by displaying the key information first
  • Keeps the most important information above the fold (critical for agents)
  • Allows for incorporating additional features such as templates and search functionality (if needed)
A product shot of a widget with a single tab.

A/B Testing: Reviewing and providing feedback for knowledge predictions

Of the two directions I tested, the version with the drawer element had more positive results:
  • A brief preview gave agents a quick sense of the article or section, helping them decide if they needed more details
  • More screen real estate for larger knowledge articles with media
  • Persistent view while scrolling
  • Prevents layout-shifting issues

Version A (accordion)

Version B (drawer)

Optimizing Space and Scalability in Design.

Initially, I explored a design that incorporated a confidence score bar, but it consumed too much vertical space, making the layout less efficient. Recognizing the importance of scalability, I focused on creating modular designs that could be easily adapted and reused across other ticketing systems.
Initial concept
A product shot of Iteration 1 of the widget, keeping the progress bars.
Iteration 1 (predictions auto-applied, search option)
A product shot of Iteration 2 of the widget, using apply buttons.
Iteration 2 (removed search, introduced check boxes)
A product shot of a widget without search and using check boxes.
Iteration 3 (removed score on auto-applied values)
A product shot of Iteration 4 of a widget with scores removed from the predictions already applied.

/final design

Intuitive widget to streamline ticket management.

The solution replaced the space-consuming confidence score bar with a more efficient design, offering agents a simple way to review details, apply predictions, and provide feedback where necessary.
Before
Product shot of a widget before.
After
Product shot of a widget after.

Ensuring Clarity & Alignment

To maintain clarity and alignment—especially under tight deadlines—I partnered early with the engineering lead, collaborating on ongoing fixes and brainstorming solutions in real time.
Component Library

Challenges.

This project wasn’t just about building a widget—it was about transforming the support experience across different ticketing platforms. The team faced challenges along the way:
  • Enhancing knowledge access control in the AI widget, since support agents had varying levels of access to knowledge base articles depending on their roles and permissions
  • Keeping the UI & UX as consistent as possible across different systems of record. Because the designs were modular and component-based, it was significantly easier to apply the same skeleton design to other SORs; we primarily only needed to change font families and colors to match each platform's look and feel
  • Macros and next-best actions behave differently from applying knowledge articles
  • Parsing images, tables, and videos in knowledge article predictions

Customer Feedback.

After a soft launch, we received additional feedback from agents, which our team worked to implement as soon as possible:
  • Minimize or hide the description to free up vertical space for other important information
  • Introduce a summarization of the request / issue reported

Please contact me for the click-through prototype and more details of this project!

volhadouban@gmail.com

/lessons learned

Scalable design.
It was important to keep in mind the strategic business opportunity for the widget, and therefore to keep the designs as consistent as possible across various ticketing systems. So I designed modular components that could adapt to the look and feel of each system. We later adapted the same layout to Salesforce and ServiceNow.
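One way to picture this modular approach: a single shared widget skeleton, with only a small set of theme tokens swapped per host platform. This is a sketch under assumptions; the token names and all font/color values below are placeholders, not the platforms' real brand tokens.

```typescript
// Illustrative per-platform theme tokens. Values are placeholders,
// NOT the actual Zendesk/Salesforce/ServiceNow brand styles.
type Sor = "zendesk" | "salesforce" | "servicenow";

interface SorTheme {
  fontFamily: string;
  primaryColor: string;
}

const sorThemes: Record<Sor, SorTheme> = {
  zendesk:    { fontFamily: "system-ui, sans-serif", primaryColor: "#03363d" },
  salesforce: { fontFamily: "'Salesforce Sans', sans-serif", primaryColor: "#0176d3" },
  servicenow: { fontFamily: "'Lato', sans-serif", primaryColor: "#293e40" },
};

// The widget skeleton (layout, components, interactions) stays identical;
// only these tokens change when the widget is embedded in a different SOR.
function themeFor(sor: Sor): SorTheme {
  return sorThemes[sor];
}
```

Concentrating platform differences into one token map is what made extending the Zendesk design to Salesforce and ServiceNow mostly a matter of changing fonts and colors.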

Want to see more?

Explore other designs