Solvvy needed UX help to build new ML features that had never existed anywhere before.
The intent coach, which helps human labelers train the ML model that's supposed to “understand” intent.
The “matching rules,” which give users explicit control over what should happen when certain keywords appear in a query.
The “Intent Overview” perspective, which shows users a bird's-eye view of an intent.
The “all intents” dashboard, which lets users monitor intent performance at a glance.
Solvvy is an on-site support chat service that helps solve customers’ problems without them ever having to contact an agent. They use ML to facilitate the conversation.
However, they needed a labelling interface built so that human labelers could verify whether the ML models were performing correctly. In order to design it properly, they brought in the big guns. 😎
After talking with engineers (to get a mini-degree in machine learning) and with customers for a few weeks, I built some interactive prototypes (see “labelling interface” above) and had labelers test them out LIVE on video.
"We only wish we could have convinced Marc to join us full-time. He was amazing to work with: always driving at the root of the problem and the user's true need, creatively finding ways to provide real value to the user and truly satisfy their need, fantastic listener, team player, and super friendly and fun! Just all around an outstanding designer to work with. I would work with Marc again and again if I could!!"
Solvvy uses machine learning to help companies answer the questions that their customers share via the chatbot on their sites (pictured below).
Unlike other chatbots you’ve probably seen before, Solvvy’s ML models will actually crawl a company’s website and generate solutions all on their own, improving with each user session.
Given the social stigma around these sorts of chatbots, Solvvy wanted to make sure that the UX of the tool was stellar. They also had a couple of projects in the pipeline, and they wanted to knock them out of the park.
Since Solvvy uses machine learning to match queries to solutions, it’s important that they can properly understand the “intent” of a query from the user’s text alone.
An abstract representation of the “outcome that the user is trying to reach,” extracted from the language of a text query.
However, intent is often very difficult to infer, and doing so consistently requires an enormous amount of domain-specific context.
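To make the idea concrete, here's a toy illustration of what “mapping a query to a fixed list of intents” means. This is purely a sketch: Solvvy's real system uses trained ML models, not keyword lookups, and all intent names and keywords below are invented.

```python
# Toy sketch only: Solvvy's actual product uses trained ML models.
# This just shows the shape of the problem — mapping free-form query
# text to one of a fixed, predefined list of intents.

PREDEFINED_INTENTS = {
    "cancel_subscription": {"cancel", "unsubscribe", "stop billing"},
    "reset_password": {"password", "reset", "locked out"},
    "refund_request": {"refund", "money back", "charged"},
}

def classify(query):
    """Return the first intent whose keywords appear in the query."""
    text = query.lower()
    for intent, keywords in PREDEFINED_INTENTS.items():
        if any(kw in text for kw in keywords):
            return intent
    return None  # no match: the "custom intent" gap this project addresses

print(classify("I was charged twice, I want my money back"))  # refund_request
print(classify("How do I export my data?"))                   # None
```

The `None` case is exactly where customers were getting stuck: queries that fell outside the predefined list had nowhere to go.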
Because of this, Solvvy decided to offer their customers a list of common intents that they could use to assign solutions to queries.
However, customers were becoming increasingly frustrated with the lack of support for very custom intents, so this project was born.
In a nutshell, these are the business problems we were trying to solve:
“Clients cannot properly categorize the queries that users submit with the limited array of intents Solvvy has available by default.”
So—here’s what we did…
As always, the list of potential projects here is essentially infinite, so first, we had to narrow down that list and figure out what sort of project we should do.
As with any huge project like this, we really had to get our stakeholders lined up sooner rather than later. In retrospect, despite being quite proactive about this, we still felt the pain of not having business stakeholders from day 1.
Nonetheless, we eventually had a handful of executive stakeholders that were going to champion the project:
Once we had everyone in the same room (virtually, at least), we started to define two things:
Starting with our assumptions, it was important that we got everything on the table so that we could then test them later:
Having these assumptions out in the open (and labelled as assumptions), we were able to address them one by one.
I’m also a HUGE proponent of identifying outcomes and metrics early. This gives us lowly designers some leverage when we’re advocating for our project to the executive team later.
Here’s what we defined:
These metrics also help me to go back and check if we actually moved the needle later (and advocate that they hire me for another project 😉).
Solvvy has a very particular set of customers. For this project, I was working with 3 different user personas:
Persona #1: The Customer Support Admin
As a CS Admin, Jim is responsible for managing each and every support ticket of which he is made aware.
Here’s what’s important to him:
Persona #2: The Customer Support Leader
As a CS Leader, Pam is responsible for overseeing all things support for her organization.
Specifically, Pam wants to:
Persona #3: The Solvvy Solution Engineer
As a Solvvy Solutions Engineer (SE), Stanley is responsible for preparing new instances for clients who are trying to demo Solvvy.
More specifically, Stanley wants to:
Now that we knew who we were talking to, it was time to actually… well… do some talking!
I worked with our CS sponsor to get a handful of early adopters in a video call (one at a time) to talk about what sorts of problems they had with the current system:
After about a week of these interviews, I started noticing some trends in the sort of feedback I was getting from users.
First off, it was clear that the problem Solvvy assumed their customers had was in fact a very real problem they were experiencing.
Being limited to our “out of the box” intents constrained customers’ ability to “self-service” their users’ questions. This was a painful problem for them, and it was hurting Solvvy’s competitiveness with these companies.
Contrary to what we thought initially though, we uncovered a few more insights about our customers:
This told us a few things:
Now that we had some preliminary customer data on the books, it was time to turn to the engineers to have them calibrate our “pie in the sky” thinking.
I often find that conversations with the engineering team are some of the most useful and valuable conversations you can have.
Generally, engineers understand the systems they’re working with deeply, and can clearly articulate why things have to happen a certain way.
Since we pulled in the engineering team so early, our conversations were mostly theoretical, but we learned quite a bit.
Perhaps our most important learning here was that the technology to allow customers to create their own custom ML models on demand was a net-new innovation that wasn’t being done anywhere else in the space.
Unfortunately for us, this meant it was going to take a while to build…
Since we had the engineering team on the phone though, we decided to map out how customers were solving this problem today, despite not having a dedicated product solution. Here’s what that looked like:
As you can see, there’s a TON of friction and manual labor there. It was clear that this project had even more value than we’d originally thought: the cost savings from automating all of this would be huge!
Now that we’d combed through several user interviews and talked to the engineers, it was time to get some product stories together for the project.
Here are the stories for the customer support admins:
Here are the stories for the customer support leaders:
…And here are the stories for the internal ML engineers:
Now that we had some solid stories together, it was time to get in there and start mocking up some potential solutions in Figma.
Now that we knew we needed some sort of “Intent Management Console,” we embarked on building it out…
We started with some early low-fidelity wireframes, used to get a rough idea of where things would live, so that we could test our insights with users.
Here’s what an example low-fidelity screen looked like:
A Quick Note…
I’ve found that these days (and especially if you have a design system), you can crank out some low-fidelity mockups really quickly. You don’t really need a separate tool anymore…
With this early prototype, I wanted to test some of the core assumptions we made earlier, as well as get a qualitative assessment of how well that particular layout solved the users’ problems.
Now that we had something that users could interact with, I put it to the test with some prototyping sessions:
I find prototyping to be the most fun part of the design process, because you get to see all the crazy ways people try to use your UI. It’s often very humbling. 😉
During these sessions, I don’t help the participants at all; I just ask them to try to complete a certain goal with the UI in front of them, then sit back and take notes.
Usually, this is super insightful, and helps me to polish up the solution until it solves the problem really well.
Here’s a sampling of what we learned during our prototyping sessions:
After allll the discovery, the stakeholder interviews, customer interviews and prototype sessions, we arrived at the final solution.
In the end, we arrived at a series of high-fidelity user flows, and state diagrams (like the one pictured below), describing how the new system would work.
Here’s a synopsis of each of the most important flows that together made the new Intent Management Console a reality.
This is the main screen for the Intent Management Console — this is where users can see all their intents at once, and create new ones when they find a gap.
This is the primary user flow, and the one we spent the most time ironing out. We had to make it really easy for users to create new custom intents.
We also heard from users that they wanted some mechanism for just building an intent with their existing queries, so we implemented this flow that fed into the IMC from the query dashboard:
Remember when we learned that users may want to “augment” intents with some domain-specific language? This is what we came up with:
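Conceptually, a matching rule lets a customer's domain-specific keywords take priority over whatever the model predicts. The sketch below is hypothetical (the function and rule names are invented, not Solvvy's implementation), but it captures the core behavior users asked for: explicit control when certain keywords appear.

```python
# Hypothetical sketch of a "matching rule": user-defined keywords that
# override the ML model's prediction. All names here are illustrative.

def apply_matching_rules(query, ml_prediction, rules):
    """If a rule's keyword appears in the query, its intent wins."""
    text = query.lower()
    for keyword, forced_intent in rules.items():
        if keyword in text:
            return forced_intent  # explicit user rule beats the model
    return ml_prediction  # otherwise, fall back to the ML prediction

# Company-specific jargon the generic model would never learn on its own:
rules = {"widgetizer": "widget_support"}
print(apply_matching_rules("My widgetizer is broken", "generic_hardware", rules))
# widget_support
```

The design choice here is deliberate: rules run before the model, so domain language the model has never seen still routes correctly.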
And finally, users asked several times for a simple way to track the “accuracy” of their intents. Since ML is by nature a black box, we had to give users a qualitative assessment of each intent’s performance.
Here’s what that looked like:
In the end, this was a wonderful project with a bunch of wonderful people.
I learned a TON working with Solvvy. Here are some of my biggest takeaways: