Can we increase sales for this product?

THE PROBLEM

Our customers were not purchasing or using the Competency Evaluation tool built into our LMS platform. The Product Management team wanted to identify specific enhancements that would make the tool part of the workflow users complete within the platform. In particular, Product Management needed to understand what criteria customers used to assign Competency Evaluations, and how many options we should build in to let users customize their Likert-scale evaluations.


THE HYPOTHESIS

All healthcare entities must conduct evaluations of their healthcare employees. Many of our customers use a solution outside of our LMS. If we can understand their workflow, needs, and processes, we can build a better solution for this function within the platform, create additional value for our customers, and increase sales and profitability for this service.

THE PROCESS

This work began with a request and kickoff call from the Product Manager. I then met with the UX Designer to understand his needs from the research. I familiarized myself with the components of the platform, then drafted a request-for-participation email, test plans, and a script, all of which were reviewed with the Product Manager and UX Designer.

THE METHODOLOGY

We decided on interviews as the mechanism to speak directly with users and understand what criteria they used to assign evaluators and what their preferences were for evaluation scales. How many options did they really need? Once I identified five customers who were interested in participating, I launched the study via Microsoft Teams.

Email Invite

Good Day,

My name is Angela Battle, and I am a User Experience Researcher here at Relias. I am reaching out because you have indicated that you would be interested in participating in a study to help us inform decisions about the Competency Management platform. We are specifically looking for participants who assign or complete evaluations in your organization. I’d like to invite you or someone who completes these tasks to participate in our study.

This study will require approximately 30–60 minutes of your time, and we’d like to ask you questions about your experiences. We would like to conduct the study virtually, and you can schedule with us during the week of X, pending your availability. The time slots we are offering include the following:

If you are interested in participating, please reply to this message with the date and time slot that works best for you. I will then forward an invitation to you for the date and time requested. On that day, you would simply click the link to join us virtually and share your feedback.

If you would like to participate but these dates and times do not work for you, please let us know which dates and times would suit you better.

We are interested in hearing your opinions, but if you no longer wish to participate, please let us know and we will remove you from the list.

All the best,

Test Plans

Competency Management Evaluator Research Project Test Plans

Background and Objectives:

The current process for assigning evaluators does not support assignment by location, evaluator experience, specialty, or existing workload. To make assigning evaluators a better experience, we would like to conduct research to learn more about administrators' current processes, pain points, and the criteria required for better role assignments. We also know that evaluators use scales to deliver a rating or ranking to the person being evaluated. To build an effective solution, we need to understand more about evaluators' current processes, pain points, and ideas about the option to customize scales.

Proposed Methodology:

Group 1: Proposed Interviews, Proposed Focus Groups

- Administrators who assign evaluators

- Administrators across verticals

- Administrators who manage one location

- Administrators who assign evaluators across multiple locations

Scope and Focus Areas:

Part 1:

We know that the platform does not currently support assigning evaluators by criteria.

Evaluator assignments:

  1. What criteria would be most beneficial to consider that would help them better manage how they assign and track evaluators?
  2. What are the factors they consider when they complete evaluator assignments?
  3. Prior to assignments, what do they need to know or be able to see?
  4. What is the current process for evaluator assignments?
  5. How frequently do they assign evaluators?
  6. How do they currently ensure that the workload is shared across evaluators?
  7. How often do they have to change assignments?
  8. What pain points do they have around the process?
  9. Are there differences across verticals?
  10. Are there any scenarios in which the identified criteria for evaluator assignments do not apply?
  11. Are there any exceptions to the rules?



Part 2:

Customized Scales

We know that organizations prefer to be able to customize a rating scale for their evaluators. What we want to understand is what they want that customization to look like.

  1. What type of rating scale are they using and why?
  2. What requirements or criteria lead to the use of a certain rating scale?
  3. What factors are considered in the decision to use a certain rating scale?
  4. What are the scales currently missing?
  5. What would they want to change or add to their current model?
  6. How many options would they like to select from?
  7. How many options are too many options?
  8. What scenarios fall outside of this norm?
  9. What are the reporting needs around this customization (rating, labels, score, threshold)?
  10. Can/should customization and rating be shared across subportals?

Hypothesis:

All healthcare entities must conduct evaluations of their healthcare employees. Many of our customers use a solution outside of Relias. If we can understand their workflow, needs, and processes, we can build a better solution for this function within the platform, create additional value for our customers, and increase sales and profitability for this service.

Testing Script

Competency Evaluation Project Script

My name is [name]. I am joined by my colleagues [names], who will be observing our session today. We are user experience research professionals from Relias. We are meeting with you today to review your evaluator assignment process as well as the framework your evaluators use during evaluations. We hope to learn more about how we can improve the experience of these tools in the Relias platform so that we can enhance them in the future. We specifically want to meet with you because you interact with these tools.

This session will last no more than 60 minutes. Please feel free to express your opinion and views clearly and in detail. We'd like to ask you some questions about your experiences. There are no right or wrong answers; we are just looking for your honest feedback and expertise.

We’d like to record this session to allow for analysis and transcription at a later date. May I have your permission to record?

Do you have any questions for us before we begin?

[Start Recording]

Opening Questions

I would like to begin with questions about you and your role within your organization. 

Can you tell us about your organization and your position?

How long have you been in that role?

Do you manage multiple locations?

Evaluator Assignment Questions

Can you begin by telling us about your workflow, or your specific tasks and involvement with competency management tools inside or outside of Relias?

Can you describe how assignments are completed?

What criteria do you consider before making evaluator assignments?

How frequently do these criteria change?

How frequently do you assign evaluators?

Have you had experiences where an evaluator assignment had to be changed? 

What tools do you currently use to assign evaluators and send notifications?

What are the benefits to the tools you currently use?

What are the pain points or problems with the tools you currently use?

How do you manage any changes?

How do you currently notify the parties of evaluator assignment changes?

How frequently do you make assignment changes after the initial assignment?

In your current process, what steps do you have to take to make changes?

Is there anything in your current process that can be modified to make the experience better?

Framework / Customized Scale Questions

Now let's switch gears to the way your evaluators provide an evaluation. I want to understand how your evaluators score the team members they are reviewing.

Are you using the Relias platform to complete evaluations?

If not, what are you currently using?

What are the benefits to the system you are using?

What are the downsides?

Are the scales consistent across all evaluations?

Can you describe the scale they use to rate the team members? 

Can you describe any instances where the scale offers too many choices? 

Can you describe any instances where the scale offers too few choices?

Are there instances where the scale does not apply or fit your needs? If so, can you elaborate on those specifically?

Are the scales you are currently using meeting your needs?

  • Is there anything you might change?

How would you change the scale the evaluators currently use?

What is missing from the scales that your evaluators currently use? 

How would your evaluators like to use scales more efficiently?

Is there anything else you can share with us that might help us understand your evaluator assignments and your evaluators' usage of scales to rate team members?

What haven’t we asked that you think would be valuable for my team to know?

Closing Questions

I have concluded the questions I had for you today. 

Does anyone from my team have any follow-up questions?

Are there any questions you have for me and my team?

Before we end, would you be amenable to meeting with us again for additional questions?

Thank you so much for your time today. Your feedback has been very valuable in helping us to understand how best to enhance the platform. 

FINAL ARTIFACTS

Competency Management Evaluator Research Project Detailed Report

By conducting five interviews, UXR found that competency evaluations are largely decentralized and most users are dissatisfied with their current processes and options for providing evaluations.

Organizations that complete competency evaluations face the following challenges:

  1. Competency Evaluations are decentralized and several methods and systems can be used within the same organization.
  2. Prevalent tools include paper and pencil. Ease of use and mobility are the drivers behind this selection.
  3. Evaluations often require several evaluators and their attestations.
  4. Once completed, these evaluations are generally stored in an integrated HRIS system.

Key Takeaways

CM Evaluations: Frequency varied, and the process was largely decentralized.

Summary

Frequency.

Organizations complete competency evaluations as frequently as they deem necessary. Evaluations may be required annually, biannually, quarterly, based on hiring date or fiscal year, set by program, or on any other schedule the organization prefers.

There was no standard time frame for competency evaluations among the participants' organizations. Frequency is set by the hiring organization.

Our solution should allow users to configure frequency to their needs. Given that one size will not fit all, admins need to be able to customize their competency evaluations globally, by grouping, by date, and by other variables.
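As a purely illustrative sketch of this recommendation (the type and field names below are hypothetical, not our shipped data model), configurable frequency could be expressed along these lines:

```typescript
// Hypothetical model for configurable evaluation frequency.
// Names are illustrative only, not actual platform APIs.

type FrequencyRule =
  | { kind: "fixed"; interval: "annual" | "biannual" | "quarterly" }
  | { kind: "anchored"; anchor: "hireDate" | "fiscalYearStart"; intervalMonths: number }
  | { kind: "program"; programId: string; intervalMonths: number };

interface EvaluationSchedule {
  scope: "global" | "group" | "learner"; // who the rule applies to
  scopeId?: string;                      // group or learner id when not global
  rule: FrequencyRule;
}

// More specific scopes override broader ones, so one size need not fit all.
function resolveSchedule(
  rules: EvaluationSchedule[],
  groupId: string,
  learnerId: string
): EvaluationSchedule | undefined {
  return (
    rules.find(r => r.scope === "learner" && r.scopeId === learnerId) ??
    rules.find(r => r.scope === "group" && r.scopeId === groupId) ??
    rules.find(r => r.scope === "global")
  );
}
```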

Decentralized.

Our research also concluded that there can be more than one standard for capturing competency evaluations within one organization.

Teams within the same organization often capture competency evaluations differently. The process is decentralized and inconsistent across some organizations.

A key selling feature should be a single system and process that provides consistency and cohesion across departments, verticals, and hierarchies.

Evidence

Frequency.

There was very little consistency in frequency and centralization across the participant organizations.

Frequency of competency evaluations should be decided by administrators and not by the constraints of the platform.

The future state for frequency should include the ability to customize competency evaluations within an organization, even when the schedule differs across learners.

Decentralized.

Several participants cited a mixture of tools and processes being used within the organization to capture competency evaluations.

One or two also noted that they were “in the process” of reviewing how they complete evaluations so they could standardize on one approach.

CM Evaluations Must-Haves: Ease of Use, Mobility, and Text Fields/Comments.

Summary

Ease of Use.

Ease of use was referenced many times as the reason why competency evaluations were being completed on paper.

Our biggest competitor is the ease of use of paper and pencil. Among the participants interviewed, many organizations complete competency evaluations and capture this information on paper, even if it is later uploaded into a system.

Paper and pencil offer ease of use because competency evaluations tend to require shadowing a learner to observe various behaviors. Because evaluators need mobility, paper forms are easy to manage and transport. However, they are also easy to lose: one participant noted that they specifically instruct their learners not to lose the evaluation sheet, and credit is given to those who do not lose it.

With so many other responsibilities, admins sought simplicity as a way to ensure that competency evaluations were completed. Paper and pen is an easy method to capture and attest to completed competency evaluations.

Our solution must be easy to use in order for us to compete with their current practices.

Mobility.

Evaluations occur as evaluators shadow or follow learners to document behavior-based criteria. The inherent mobility of the evaluator is therefore necessary.

Most evaluators need to shadow or follow a learner in order to observe a set of behaviors and then document whether the learner has met certain criteria or conditions. This is difficult to do with a laptop or desktop.

Our solution needs to allow for evaluator mobility, since evaluators are tasked with following a learner. In the current state, this could include an easily printable form that is used in the field and later uploaded to close out the process.

The future state could include mobile phone or tablet access, QR codes that direct an evaluator, or other mechanisms that allow usage on mobile devices.

Text Fields/Comments.

The ability to add comments is just as important as the rating scale used by evaluators.

Evaluators often add lengthy comments to an evaluation. Comments are a crucial component of evaluations because the criteria used to evaluate can be subjective. Comments clarify the reasoning behind a grade or selection on the rating scale, they can document observed or absent behaviors, and they provide explanations when rating criteria are ambiguous.

Our solution must provide ample space for comments and explanations in order to be a sound replacement for current processes. We can review our solution's character limits to ensure they meet the needs of our user base.
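A minimal sketch of what this could look like in a data model (all names hypothetical) is below; the point is that the comment field travels with every rated criterion rather than being a single afterthought at the end of the form:

```typescript
// Illustrative sketch: a customizable rating scale where free-text
// comments are first-class. Names are hypothetical.

interface ScaleOption {
  value: number; // numeric score used for reporting
  label: string; // admin-defined label, e.g. "Meets expectations"
}

interface RatingScale {
  name: string;
  options: ScaleOption[]; // admins decide how many options they need
  threshold?: number;     // optional passing score for reporting
}

interface CriterionResult {
  criterion: string;
  rating: number;   // should match one of the scale's option values
  comment?: string; // generous length, per the findings above
}
```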

Evidence

Ease of Use.

Five out of five participants cited ease of use in the workflow as the main driver behind their current approach to competency evaluations.

Mobility.

Evaluators need to be able to complete the evaluation in the learner’s environment.

The typical evaluation process requires an evaluator to witness certain behaviors and criteria in the learner's environment. Mobility is inherent in this process.

Paper and pen was the most common method of documenting evaluation results because of its ease of use and mobility.

Admins seek the simplest ways to get the task done. Four out of five participants were using paper and pen to initially document evaluations, even if the results were later added to a system.

Text Fields/Comments.

Commentary and note capture is a significant need in competency evaluations.

Three out of five participants discussed the necessity to add comments and additional notes. Two of five participants discussed the ambiguous nature of their current rating system and why the comments are so critical.

Because evaluators currently rely on paper and pencil, they can easily take notes, add comments, and discuss ratings.

Because the workflow requires several parties to complete their portion of a competency evaluation, they require a process that will not be cumbersome or difficult to manage. 

CM Evaluations Future Considerations: Multiple Evaluators, Integration, Self Assessments, and Rethinking Job Title Requirements.

Summary

Multiple Evaluators.

Multiple evaluators are often needed to sign off on competency evaluations.

Due to workflow requirements, multiple evaluators often need to sign off on one evaluation for one learner.

In these scenarios, each evaluator must be notified, complete their review and attestation, and then hand off to the next evaluator, who does the same.

Our product should allow for multiple evaluator assignments and attestations. It should also allow an admin to track and manage an in-progress evaluation.
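To make the recommendation concrete, here is a rough sketch of how multiple attestations and progress tracking might be modeled (purely illustrative; the names are not from our product):

```typescript
// Hypothetical sketch of multi-evaluator attestation tracking.

type AttestationStatus = "pending" | "in_review" | "attested";

interface EvaluatorAttestation {
  evaluatorId: string;
  order: number; // evaluators may need to sign off in sequence
  status: AttestationStatus;
  attestedAt?: Date;
}

interface EvaluationRecord {
  learnerId: string;
  attestations: EvaluatorAttestation[];
}

// An admin view could summarize progress on an in-flight evaluation:
function progress(record: EvaluationRecord): string {
  const done = record.attestations.filter(a => a.status === "attested").length;
  return `${done} of ${record.attestations.length} evaluators attested`;
}
```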

Integration.

Often evaluations need to be integrated with an HRIS platform. They can also be integrated with a specific program’s requirements.

Admins interviewed for this study noted that although their initial evaluation capture occurred on paper, it was then necessary to add that information to an HRIS platform, whether by manual data entry, an upload, or copying and pasting into required fields.

One admin also noted that some evaluations were program specific, citing the American Heart Association (AHA) evaluation criteria and other program-specific requirements that were part of a requisition for work.

One admin also referenced integrating with federal and state competency requirements. A similar integration is being built for the Compliance Management Regulations product; it is worth noting so that we may consider a comparable integration here.

Our platform should integrate easily with various HRIS and requirements platforms, as well as with larger programs that have their own evaluation requirements, such as the AHA.
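As a hedged illustration of what such an integration might involve (the endpoint path, payload shape, and field names are all assumptions, not a real HRIS API), a completed evaluation could be pushed as a simple JSON payload:

```typescript
// Purely illustrative HRIS export. Endpoint and fields are assumed.

interface CompletedEvaluationExport {
  employeeId: string;
  completedAt: string; // ISO 8601 timestamp
  overallScore: number;
  results: { criterion: string; rating: number; comment?: string }[];
}

async function exportToHris(
  baseUrl: string,
  payload: CompletedEvaluationExport
): Promise<void> {
  // A hypothetical REST endpoint; real HRIS systems vary widely.
  const res = await fetch(`${baseUrl}/competency-evaluations`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
  if (!res.ok) throw new Error(`HRIS export failed: ${res.status}`);
}
```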

Self Assessments.

The workflow for competency evaluations can often begin with self assessments.

Organizations can require the learner to complete a self assessment to begin the competency evaluation workflow. Admins described a process that begins when learners are notified that they must complete a self assessment. The learner completes and attests to the self evaluation, which then notifies the assigned evaluator that they, too, must complete an evaluation of the learner. Sometimes, after the evaluator completes their portion, the learner and the evaluator meet to discuss the findings and create an action plan. This action plan is then attested to by both parties and used to guide learner development.

Our current workflow does not allow for self assessments or CAPs (corrective action plans). In order to truly compete in this space, we need to consider a self assessment component as part of the workflow, along with corrective action plans based on competency evaluations. This capability is currently being built into our Compliance Management product and should be considered for the future state of our Competency Evaluation product. Additionally, designing for that future state would require changing the current workflow to grant access first to the learner, then the evaluator(s), then all parties in the final stages of discussion and corrective action planning, potentially with multiple attestations.
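A rough sketch of the stages participants described, written as a simple state progression (the stage names are hypothetical):

```typescript
// Illustrative workflow stages for a self-assessment-first evaluation.

type WorkflowStage =
  | "self_assessment"   // learner completes and attests first
  | "evaluator_review"  // assigned evaluator(s) complete their portion
  | "joint_discussion"  // learner and evaluator review the findings
  | "action_plan"       // plan drafted and attested by both parties
  | "closed";

// Completing each stage unlocks the next and notifies the responsible party.
const nextStage: Record<WorkflowStage, WorkflowStage | null> = {
  self_assessment: "evaluator_review",
  evaluator_review: "joint_discussion",
  joint_discussion: "action_plan",
  action_plan: "closed",
  closed: null,
};
```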

Job Title Requirements.

Requiring a job title to build a competency evaluation creates front-end work that is time-consuming for medium-sized or enterprise organizations with many job titles.

Organizations with many job titles find creating competency evaluations for each role up front time-consuming and daunting. Administrators have many tasks to manage both inside and outside of Relias, and setting up our platform demands valuable time that at least one administrator had difficulty devoting.

Our solution should make this requirement optional or offer an easier path for getting started with us.

Evidence

Multiple Evaluators.

Several users noted that those being evaluated must seek the sign-off and attestation of several evaluators. Multiple evaluators are often necessary.

Integration.

In the competency evaluation process, collecting the paper forms is not the end of the work.

Administrators have to complete secondary tasks to ensure the completed evaluations are added to the necessary systems.

Self Assessments.

Self assessment was referenced by two participants. In these scenarios, the learner must also complete and submit a review and attestation for themselves.

Three out of five admins in this study referenced the self assessment and CAP components of their competency evaluations.

Learner self assessments at the start of an evaluation are becoming more standard in the workplace as employers work to gauge employee satisfaction, alignment with work responsibilities, and development goals.

Self assessments are becoming more widely used as part of an evaluation package or platform.

Job Title Requirements.

One participant (out of five) noted that their organization had considered the Relias Competency Evaluation platform in the past; however, upon learning that competency evaluations had to be created by specific job title, they opted not to proceed. Citing more than 300 job titles, they made a different selection.

Getting started with our product may require some changes that make the process easier for admins.

Future state can also consider an upload of competencies by job title, or a secondary method to assign competencies to titles. While only one participant mentioned this challenge, we should consider ways to remove this hurdle. 

Recommendations

Priority // High

Integration: Four out of five admins referenced the need to collect their competency evaluations and add them to an HRIS system.

  • Note: This may be a backlog item, but will be a strong selling feature for many organizations.
  • Research: Potential research could include identifying which HRIS systems are most commonly used so that we may integrate with them.
  • Owner: UXRs and the Competency Evaluation teams can work to further explore this functionality.

Ease of Use/ Mobility / Comments: Five out of five admins referenced they are currently using paper to collect competency evaluations due to ease of use, mobility, and the ability to capture notes and comments.

  • Note: Our product should address these concerns as they will ultimately drive usage and adoption of our solution.
  • Research: Additional research could include workflow mapping and a heuristic review to understand ease of use, mobility, and the ability to add comments.
  • Owner: UXRs and the Competency Evaluation teams can work to further explore this functionality.

Multiple Evaluators: Several of the study participants referenced the need for multiple evaluators and their need to sign off or attest to the evaluation for one learner.

  • Note: Our product should allow for multiple evaluators to access, review, update, and attest to the evaluation of one learner.
  • Research: Additional research could include workflow mapping to understand any limitations for multiple evaluators in our current product.
  • Owner: UXRs and the Competency Evaluation teams can work to further explore this functionality.

Priority // Medium

Job Title Requirement: One admin described reviewing our product and comparing it to others. They found the setup requirement to create and add job titles daunting and time-consuming, as they had more than 300 job titles. They opted for another solution.

  • Note: Our product should address this concern and seek better ways to onboard, particularly for mid-size and enterprise customers.
  • Research: Additional research could include brainstorming alternatives and competitive audits to seek clarity on how competitors connect job titles and competency evaluations.
  • Owner: UXRs and the Competency Evaluation teams can work to further explore this functionality.

Self Assessment: Several of the admins referenced a workflow that began with the learner completing a self evaluation, then an assigned evaluator completing an assessment, and finally the learner and evaluator meeting to discuss the findings, create a Corrective Action Plan if necessary, and close the evaluation. Our solution should roadmap a similar workflow so we can meet the needs of the market and our customers.

  • Note: Future enhancements to our product should address this concern and seek to add this functionality.
  • Research: Additional research could include a market analysis of product offerings.
  • Owner: UXRs, MIT, and the Competency Evaluation teams can work to further explore this functionality.

Priority // Low

Frequency: Evaluation frequency varies by organization. Our product should allow end users to customize the frequency within their platform to best meet their workflow procedures.

  • Note: Current frequency settings have not been explored in our product.
  • Research: Additional research could include a market analysis of product offerings.
  • Owner: UXRs, MIT, and the Competency Evaluation teams can work to further explore this functionality.


THE OUTCOME

This research did not reveal a specific number of rating-scale options or an all-inclusive list of criteria for assigning evaluators, but it did surface some of the criteria. It further revealed that administrators conducting evaluations need the flexibility, mobility, and integrations that our tool lacked at the time. Can we increase sales for this product? Possibly, but customers were unlikely to purchase given the current constraints.

THE HINDSIGHT

A competitive analysis would have been especially helpful for this work, particularly for understanding similar products that integrate with HRIS systems. Additionally, the participants in this study were customers of ours but were not using the evaluation tool in our platform. It would have been more helpful to connect with participants who were already using a software solution to complete their evaluations.