From Late Deliveries to On-Time Success: redesigning task management for better results

How this product design solution helped reduce late deliveries and improve productivity, reaching 92.11% adoption, 88.89% feature retention, and a 4.31/5 satisfaction score. Plus, 51.28% more on-time deliveries.

Play with the prototype


Problem summary

Our team had an OKR to reduce late-delivery escalations on projects. During the discovery phase, we conducted user interviews and found that the existing task management feature was inadequate for our users' needs: many users were resorting to external tools to manage their tasks, which was causing inefficiencies and delays. The challenge was to design a task management feature that met the distinct needs of our three target users - managers, content creators, and reviewers - while also addressing the problem of late deliveries. We needed a system that was intuitive and user-friendly and that integrated seamlessly into our existing platform. Our ultimate goal was to reduce late deliveries, improve productivity, and enhance the overall user experience for all three groups.

Solution

The solution we developed was an integrated task management feature on the project page, which allowed managers to create as many tasks as they needed. This was a significant improvement over the previous task management feature, which only allowed managers two tasks - one for creating the digital content and one for reviewing it. Our new feature enabled managers to assign tasks to content creators and reviewers, set deadlines, and track progress, all within the platform.

KPI metrics results

To measure the success of our new task management feature, we tracked several key performance indicators (KPIs) such as adoption, retention, and satisfaction. We found that adoption of the new feature was strong, with 92.11% of our users using the feature within the first month of its launch. Retention was also positive, with 88.89% of users continuing to use the feature three months after its launch. Furthermore, user satisfaction with the new task management feature was high, with an average rating of 4.31 out of 5. These metrics demonstrated the effectiveness of our solution in meeting the needs of our users and addressing the challenge of late deliveries. The increased adoption and retention rates indicate that users found the feature useful and valuable, while the high satisfaction rating reflects the positive impact the feature had on the overall user experience. Based on these KPIs, we were confident that our new task management feature had a significant and positive impact on our platform and its users.

We not only launched a feature that improved the task management timeline; we also moved our end-of-year OKR on late deliveries, cutting the late-delivery escalation rate from 1.56% to 1.08% - 51.28% fewer late-delivery escalations, and 51.28% more on-time deliveries than before!

Role
Senior Product Designer

Company
Superside

Device
Desktop

Challenge
Prototype a feature for creating and managing digital-content tasks on desktop, with three target users in mind: manager, content creator, and reviewer.

KPI Metrics
Adoption
Retention
Satisfaction

My Key Tasks
UX Design
Interviews
Survey (Google Forms, Slack)
Wireflows
Ideate solutions
Workshop creation and facilitation
Paper sketches
Wireframing (Figma)
Presentation to an audience of 80+ people
Automation discovery design
Analyze results

UI Design
Middle-fidelity interactive prototypes (Figma)
High-fidelity prototypes (Figma)
Moderated remote testing with real users
KPI definition and monitoring
Automation ideation design

Duration
4 weeks

Team
Me (Senior product designer)
Product manager
2 front-end developers
2 back-end developers
COO
User care department

Let's talk about the discovery strategy!


Discover

The project of improving our task management feature was born out of a discovery phase I led, aimed at addressing the issue of late deliveries in our company. We involved different departments and teams in order to gather as many insights as possible. To achieve this, we divided the team into two squads, one dedicated to quantitative data and the other to qualitative data. Both squads worked on an opportunity tree analyzing the late-delivery escalations. Through the analysis of quantitative data, we found that one of the main causes of late deliveries was a lack of information about project workloads. This led us to dig deeper through qualitative data gathering, which involved interviews with a total of 20 users: 10 managers, 5 content creators, and 5 reviewers. I was in charge of creating the scripts for all the target-user interviews, collecting insights in a FigJam structure, interviewing users, and leading the whole discovery together with my PM. I also contributed to the quantitative squad by analyzing spreadsheet data around the late-delivery context.

Define

The results of these interviews confirmed our initial assumption that our platform lacked an overview of cross-project delivery timelines for content creators, managers, and reviewers. We also discovered information disparity and miscommunication among internal teams due to the challenges of working across different time zones. This discovery was the starting point for our team to design and implement an improved task management feature that would meet the needs of all our users and ultimately reduce late deliveries. I was in charge of collecting the qualitative insights in a Notion file and analyzing them to provide recommendations based on the findings.

Hypothesis

After completing the discovery phase, my PM and I organized a How Might We (HMW) questions workshop to generate ideas and potential solutions based on the insights we had gathered. The workshop was attended by different team members, devs included. Together, we brainstormed various HMW questions to navigate the problem of late deliveries and task management. Our team also generated a supporting idea: creating a calendar event for the user once a task was created. This was aimed at providing a more seamless and integrated experience, allowing users to easily keep track of their tasks and deadlines.

KPI definitions

While we were in the process of ideating and testing potential solutions, we decided to validate our assumption of needing a better task management feature by tracking user satisfaction with the old project management timeline. To do this, I was responsible for designing and implementing a survey that would allow us to gather feedback from our target users, including managers, content creators, and reviewers.

I created the survey script, recruited 80 users, and measured their responses. Based on the feedback we received, we were able to gain valuable insights into the areas where our current task management feature fell short and where improvements could be made. We analyzed the suggestions and feedback we received from users and used this information to inform the development of the new task management feature.

By tracking user satisfaction and gathering feedback throughout the design and development process, we were able to ensure that the new feature met the needs of our target users.

During the development of the task management feature, we understood that measuring user satisfaction through surveys was not the only way to track our success. To have a more comprehensive understanding of the feature's impact, we established clear key performance indicators (KPIs) to measure adoption and retention. Our goal was to have 70% of users adopt the feature, meaning they actively used it at least twice. Additionally, we aimed for a retention rate of 35%, which would indicate that users were finding value in the feature and continuing to use it over time, and a user satisfaction score of 3.5/5. These KPIs allowed us to measure the feature's success and make data-driven decisions to improve it over time. Spoiler: we exceeded all of these numbers!
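For illustration, here is a minimal sketch of how adoption and retention defined this way could be computed from raw usage events. The event shape, user IDs, dates, and thresholds below are assumptions made up for the example; they are not the actual event tracking we used (that lived in a Mixpanel dashboard, described later).

```python
from datetime import datetime, timedelta

# Hypothetical usage events: (user_id, timestamp of a "task created" action).
# Adoption mirrors the KPI above: a user counts as an adopter once they have
# actively used the feature at least twice.
events = [
    ("u1", datetime(2023, 3, 2)), ("u1", datetime(2023, 3, 9)),
    ("u2", datetime(2023, 3, 5)),
    ("u3", datetime(2023, 3, 6)), ("u3", datetime(2023, 6, 12)),
]
all_users = {"u1", "u2", "u3", "u4"}          # everyone who could adopt the feature
launch = datetime(2023, 3, 1)
retention_window = timedelta(days=90)          # "still active three months after launch"

usage_counts = {}
for user, ts in events:
    usage_counts[user] = usage_counts.get(user, 0) + 1

adopters = {u for u, count in usage_counts.items() if count >= 2}
retained = {u for u, ts in events if u in adopters and ts >= launch + retention_window}

print(f"Adoption: {len(adopters) / len(all_users):.0%}")   # adopters / eligible users
print(f"Retention: {len(retained) / len(adopters):.0%}")   # still-active adopters / adopters
```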

Let's touch base on my ideation phase!


Check the Figma file

Ideation workshop and sketches

During the wireframes phase, I organized a Crazy-8 workshop to gather as many ideas as possible. The workshop involved stakeholders, developers, other product designers working in different teams, and our product manager. The aim was to generate as many solutions as possible for the task management problem we had identified in the discovery phase. The workshop turned out to be a huge success, not only because it produced some brilliant ideas but also because everyone felt included and had an opportunity to share their opinions. It proved to be an effective way to bring the team together and build a shared understanding of the context and user needs. We used the ideas generated in the workshop to discuss all hypothetical solutions and find suitable patterns to test at a later stage. The wireframes turned out to be extremely useful during stakeholder meetings, letting us show our ideas visually before investing more time in them.

During this phase, I not only facilitated the Crazy-8 workshop, recruiting people and running the session, but also analyzed all the sketched ideas, including my own. After gathering all the possible solutions, I narrowed them down to the most suitable one for testing. Since our team was working remotely, I came up with the idea of using Loom videos to explain our sketched solutions. This was beneficial as it forced everyone to communicate and focus better on the concepts. Additionally, the team could review the videos at any time, adding flexibility to our workload.

KPI process

During the process of creating middle-fidelity prototypes for testing, I also took charge of analyzing the results of our user satisfaction survey about the current project timeline management. As I delved into the data, I realized, and suggested to stakeholders, that we needed to extract more layers of data to understand satisfaction per target user. With the help of ChatGPT, I found the Google Sheets functions needed to calculate averages segmented by target user, which gave us a better understanding of satisfaction for each group. Additionally, I grouped the suggestions by target user, which provided more insight into the specific pain points of each group. This analysis helped us further refine our prototype and prioritize the features that would have the most significant impact on our target users. During the project, I not only established the KPIs to measure success but also helped build the Mixpanel dashboard to track the adoption and retention of the new feature. Having the dashboard allowed us to monitor the adoption rate in real time, and it also helped us track the retention rate, which was equally important. By having this data at our fingertips, we could better understand user behavior and make data-driven decisions to continue improving the feature.
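As an illustration of that segmentation step, the sketch below computes per-role satisfaction averages with pandas. The column names and ratings are made-up placeholders; in practice the work was done with spreadsheet functions such as AVERAGEIF in Google Sheets.

```python
import pandas as pd

# Hypothetical survey export: one row per respondent, with their role and a
# 1-5 satisfaction rating. Column names and values are illustrative only.
survey = pd.DataFrame({
    "role": ["manager", "manager", "content creator", "reviewer", "reviewer"],
    "satisfaction": [4, 3, 5, 4, 3],
})

# Average satisfaction segmented by target user group.
per_role = survey.groupby("role")["satisfaction"].mean().round(2)
print(per_role)
```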

User flow

After the successful crazy-8 workshop, it was time to create some middle-fidelity prototypes to test. The workshop helped me to find the most suitable solution and cover more use cases thanks to the different points of view of the involved team members. Using Figma, I created different versions of the prototype before testing. When I was creating the middle-fidelity prototypes for our task management feature, I found it helpful to first create a user flow. This allowed me to map out all the steps that the user would go through in order to complete a task and gave me a clear understanding of all the different use cases that we needed to cover in our prototypes. By creating the user flow, I could ensure that all necessary features and functionalities were included in the prototypes.

Middle-fidelity prototypes and moderated usability tests

While I was creating middle-fidelity prototypes for testing the redesigned task management feature, I also started recruiting users to participate in the testing phase. Since we were a remote company using Slack as our communication system, I wrote multiple screening messages and shared them on our Slack channels. Many people expressed interest in testing the new feature. It was great to see that our hypothesis about the need for a better task management system resonated with our users, which provided further validation that we were on the right track with our redesign efforts.
For the middle-fidelity prototypes, I created a first version that I presented to the development team, the PM and stakeholders to check its feasibility with code and with the rest of the platform. This step was crucial as it allowed us to have a better understanding of the technical difficulties that could arise with some of the proposed flows, and it also opened up discussions about additional UI requirements that some of the flows might require. Based on the feedback received, we made some changes to the prototypes before moving on to testing with users. This ensured that if the testing was successful, we could easily implement the feature in the product without any major technical issues. The collaboration between the design and development teams during this stage was valuable in ensuring the feasibility and success of the final product.
Figma middle-fidelity prototype
In this phase, I organized a design critique with other product designers to get a fresh pair of eyes on the designs. While not all of the feedback was valuable, some of it was extremely helpful. I carefully considered the relevant feedback and incorporated it into the design where appropriate. Ultimately, this process helped to ensure that the final design was well-considered and addressed any issues that might have been overlooked otherwise.
For the usability tests, I created a script in Notion to help me take notes during the interviews. I conducted all the user testing sessions and took notes on the testers' actions and struggles. I worked alone during the testing sessions, as I am very efficient at taking notes while talking. I interviewed a total of 7 managers, 5 content creators, and 5 reviewers - enough testers to find common patterns and analyze the data. After that, I analyzed the results. The overall test was a success, and the feedback was positive. Although there were minor improvements we could make, the core of the feature was well received. The feedback provided valuable insights, which we used to improve the feature before launching it.

High-fidelity prototypes

For the high-fidelity prototypes, I made sure to incorporate changes that would address the issues and suggestions raised during the usability tests, while also considering the design critique and stakeholder feedback. The end result was a high fidelity prototype that was well-received by all parties involved and ready for implementation. I also made a presentation to the whole department - 80+ people - explaining the design and the qualitative and quantitative insights we got from it.
In this phase, I focused not only on the main user flows but also on the micro-interactions, such as deleting an item. I noticed that our platform lacked any micro-animations, and given that one of our company's OKRs was to make the platform more fun and engaging, I began experimenting with different types of micro-interactions. I explored different animation styles, timings, and feedback to provide a delightful and satisfying user experience. By implementing these micro-interactions, we could not only meet our company's goal of making the platform more engaging but also enhance the overall usability and user satisfaction.
In this phase, I was responsible for creating the final designs for the new feature, while also taking into account the existing Design System elements. In order to maintain consistency throughout the platform, I ensured that the new feature was aligned with our existing Design System. Moreover, I created additional components for the Design System to accommodate any new UI elements that the feature required. This allowed us to maintain a cohesive visual language across the platform and make future design iterations more streamlined. It also meant that I collaborated with our design system developers on the implementation of some components.

High-fidelity prototypes / Automation patterns

During the development process of our task management system, we focused on implementing automation patterns to improve the overall user experience. One of the ideas that we came up with was smart text recognition, which proved to be a valuable addition to the feature. With this new functionality, users can type in the date and time directly into the text field, and the system will automatically update the task's due date and time accordingly. Additionally, we implemented a tagging system that allows users to tag other team members by typing "@" in the text field, which assigns the task to the tagged user. These small but impactful features significantly improved the efficiency and ease of use of our task management system.
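To make the idea concrete, here is a minimal sketch of how this kind of smart text recognition and @-tagging could be parsed. The date format, regular expressions, and function name are assumptions for the example, not the actual implementation that shipped.

```python
import re
from datetime import datetime

# Hypothetical parsing rules: an inline "YYYY-MM-DD HH:MM" sets the due date,
# and "@name" mentions assign the task. The real feature's rules may differ.
DATE_PATTERN = re.compile(r"\b(\d{4}-\d{2}-\d{2})(?:\s+(\d{2}:\d{2}))?\b")
MENTION_PATTERN = re.compile(r"@([A-Za-z0-9_.-]+)")

def parse_task(text):
    """Extract a due date/time and assignees from free-form task text."""
    due = None
    match = DATE_PATTERN.search(text)
    if match:
        raw = match.group(1) + " " + (match.group(2) or "00:00")
        due = datetime.strptime(raw, "%Y-%m-%d %H:%M")
    assignees = MENTION_PATTERN.findall(text)
    return {"due": due, "assignees": assignees}

# Example: typing the deadline and a mention directly in the task field.
print(parse_task("Review hero banner @maria 2023-06-15 17:00"))
```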
I collaborated closely with the development team to ensure that the product was ready for development, and I explained all the designs with Loom videos so the developers could go back to the explanations whenever they wanted. We had additional sync meetings as the team discovered some issues along the way with how our code was built, and we had to prioritize the core elements for the launch and postpone some other elements to a later stage. This process helped us ensure that we were delivering the most essential features and functionalities to users in a timely manner. Overall, the collaboration with the development team was fun - I very much like collaborating with devs. You can see the entire ideation design process, from text to high-fidelity prototypes, detailed below.

Design review

After the implementation of the smart text recognition feature was completed, we conducted a final design review to ensure that everything was in place and ready for release. During this phase, I identified any issues that needed to be fixed before the feature was launched. I wrote down all the feedback and issues in detail so they could be turned into Jira tickets for the development team to address. By doing this, we were able to ensure that the feature was in line with the design and user requirements, and that it would be functional and reliable for users. This process gave us a final check on the feature's design and implementation and ensured that all feedback was properly documented and acted upon.

User insights collection

While analyzing the satisfaction survey related to project timeline management, I came across some interesting insights that were not only useful for the new flexible task management feature but could also help in the late-delivery escalation process. Additionally, during the interviews and feedback after the launch of the feature, we gathered many more insights. However, the company had no proper system for collecting these insights. To address this, I created a Notion sheet where insights were collected by category. This allowed us to be more prepared for future projects in this area, as we could refer back to the insights we had gathered and use them as a basis for future decisions.

The user insights collection became a vital part of our product development strategy. We realized the importance of collecting and organizing user insights to prioritize future challenges and build our strategy. This repository allowed us to keep track of user needs, business needs, and team needs, and helped us make informed decisions about our product development. By analyzing the insights, we could identify patterns and trends to improve the product and better understand our users' pain points. This approach helped us stay focused on our users' needs and, by prioritizing them, develop a product that met their expectations. I'm currently building a case study of a prioritization workshop I ran; it will be available soon.

Results

The results of our project were an absolute success! We achieved a 70% retention rate (35 percentage points above our target) and a 4.31/5 satisfaction score (0.81 above our target). Our adoption metric was based on a user creating a second task, rather than just one task. We got 44% adoption, and at first we thought we hadn't met our adoption KPI. However, after a closer analysis, I found that there was an issue in how the data team was calculating events, and that not all managers needed to create a second task for their smaller projects. When we looked at the adoption rate among users who needed to create multiple tasks because they were dealing with bigger projects, we had an adoption rate of 92.11% (22.11 percentage points above our target). So we actually exceeded our KPI, and we were pleased with the results. In addition, we also increased the platform-wide user satisfaction score for project timeline management: from 3.4/5 to 3.8/5.

Also, we calculated these results just one month after launch, arguably too early to measure the full impact. There was a lot going on among our users, and company changes affected the feature onboarding too, but the percentages were still very high.

But the most rewarding part was the overwhelming positive feedback we received from our users. They reached out to us to express how valuable this feature was for them, and it was a great feeling to know that we were able to help both our users and the business succeed. It was a great accomplishment and we were proud of our team for making it happen.
BEFORE
Project timeline management - User satisfaction score before the new flexible milestones system
AFTER
Project timeline management - User satisfaction score after the new flexible milestones system
Healthy adoption of 92.11%, 22.11 percentage points above our target. Adoption growing week by week. Almost no activity on weekends.
Healthy retention of 88.89%, 53.89 percentage points above our target. Stabilizing.
New task management (internally called flexible milestones) - user satisfaction score of 4.31/5, 0.81 above our target.
Many positive reactions when we launched it.
Another notable success story for this feature was seeing one of our target users share a project delivered on time, attributing the success to the more flexible task management system that we internally call "Milestones". During a company town hall, she publicly commended our team in front of an audience of 700 people.

During user interviews, it became evident that one of the major features contributing to improved delivery time was the automated timezone communication that we developed in the new task management system in the platform. By automatically calculating the time, users no longer had to manually perform calculations, resulting in fewer errors and fewer instances of creatives misunderstanding deadlines.
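As a rough illustration of the underlying idea, the sketch below renders a single stored deadline in each team member's local timezone. The storage format (UTC) and the timezone names are assumptions for the example, not a description of the platform's actual implementation.

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library, Python 3.9+

# Hypothetical: a deadline stored once in UTC, shown to each user in their
# own timezone so nobody has to do the conversion by hand.
deadline_utc = datetime(2023, 6, 15, 17, 0, tzinfo=ZoneInfo("UTC"))

team = {
    "manager": "Europe/Madrid",
    "content creator": "America/Bogota",
    "reviewer": "Asia/Manila",
}

for role, tz in team.items():
    local = deadline_utc.astimezone(ZoneInfo(tz))
    print(f"{role}: {local:%Y-%m-%d %H:%M %Z}")
```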

Ultimately, this feature has greatly helped our users move from late deliveries to on-time deliveries. As some of our users put it:

"On-time delivery has improved a lot since we have the new flexible milestones system, significantly."Reviewer

"The new flexible milestone is helping a lot with deadlines. It’s amazing. It makes a huge difference. It helps people be on track. I use my milestone to see what I have to do for today. It definitely helps people to be on-time with tasks. It’s a reminder of the upcoming milestones so it’s amazing. We improved a lot on time. It’s more organized. It’s causing less stress."Reviewer

We not only launched a feature that improved the task management timeline; we also moved our end-of-year OKR on late deliveries, cutting the late-delivery escalation rate from 1.56% to 0.08% - 51.28% fewer late-delivery escalations, and 51.28% more on-time deliveries than before! We also found that, in general, adopting this feature reduces the rate of escalations by an average of 86% compared to not using it. And 74% of managers who had escalations in their projects over a six-month period saw a reduced rate of escalations per 100 projects once they adopted milestones.

What I learned

Through the process of creating and launching the flexible task management feature, I learned that great collaboration and great documentation are essential factors for success. It was through the collaborative efforts of the design, product, and development teams that we were able to create a product that met the needs of our users and our business goals. Additionally, during user interviews and feedback sessions, we discovered even more ideas to improve the task management system, such as parent tasks, task completion, and Google Calendar integration. This experience taught me the importance of always listening to our users and constantly iterating to create the best product possible. I also learned the importance of being flexible and adaptable in a fast-paced environment: throughout the project, unexpected issues arose, and I had to pivot quickly to find solutions that would keep us on track to meet our goals. Finally, even though we reached our KPIs, there is always room for improvement and innovation, and it's important to keep looking for ways to improve and stay ahead of the competition. I am proud of the results we achieved, and I am confident that this feature will continue to help our users and the business succeed.

Thank you
for reading!

Feel free to try the Figma prototype here from your desktop. If you would like to discuss the project or anything else about UX, you can contact me via e-mail.

Send me an e-mail