CyRide for Smartwatch
Redesigning a public transit app for convenience and functionality.
Skills
User Research (Affinity Diagrams, Personas)
Wireframes and Prototyping
Heuristic Evaluation
Experimental Design
Usability Studies
Technical Communication
Timeline
January 2024 - May 2024
Tools
Figma
Qualtrics
Google Suite
Good Ol’ Paper & Pen
Role
UX Designer
User Research
Introduction
Iowa State's CyRide bus system serves both the university and Ames. While their current digital scheduling app is functional, we saw an opportunity to modernize its design and explore a smartwatch app for enhanced user convenience. This HCI class project aimed to identify pain points with the existing app, assess the feasibility of a smartwatch companion, and create a user-friendly prototype for further testing.
Goals:
Identify pain points with the current CyRide digital experience as well as the feasibility of a smartwatch application.
Brainstorm potential solutions and determine which best meets the needs of the user and the desired outcome.
Generate a functional high-fidelity prototype that incorporates feedback from brainstorming and user research.
Conduct a round of usability testing to further iterate our designs.
Present our findings and design in a presentation for the class.

Research
Empathizing
The project began by asking the foundational question: What are the pain points with the current experience, and how can a smartwatch platform address them?
To answer this question, we ran user interviews that gave the team direct insight into the problems users may be facing. We also limited these initial interviews to smartwatch users so we could gauge the feasibility of a smartwatch experience with people already familiar with its usability.
We asked participants about their thoughts on a smartwatch experience, as well as general likes and dislikes of the current state of CyRide. After gathering the data and finding the most common themes, we made user personas, which give us an anchor when making design decisions.
Ideating
Having a better understanding of the goals of our design solutions, we began putting our ideas onto paper. We started by laying out the architecture of the app with storyboards and use scenarios, applying a technique called Hierarchical Task Analysis, or HTA, to break user goals into subtasks. This gave us a conceptual framework for which screens needed to be made and which user journeys needed to be supported for a Minimum Viable Product, which we would later take to evaluations. We also conducted a competitive analysis to understand trends in the design space and what elements we could borrow. The design space for transit smartwatch applications is narrow, but it still provided great foundational insights.
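To illustrate the kind of structure HTA produces, the sketch below models a task tree for a transit smartwatch app as a nested dictionary. The task names are hypothetical stand-ins, not the actual breakdown from our project.

```python
# A minimal sketch of a Hierarchical Task Analysis (HTA) tree for a
# transit smartwatch app. Task names are illustrative only.
HTA = {
    "0. Check bus stop information": {
        "1. Open the app": {},
        "2. Select a stop": {
            "2.1 Browse nearby stops": {},
            "2.2 Pick a favorited stop": {},
        },
        "3. Read arrival times": {},
    },
}

def flatten_hta(tasks, depth=0):
    """Flatten the task hierarchy into indented outline lines."""
    lines = []
    for task, subtasks in tasks.items():
        lines.append("  " * depth + task)
        lines.extend(flatten_hta(subtasks, depth + 1))
    return lines

outline = flatten_hta(HTA)
print("\n".join(outline))
```

Walking a tree like this makes it easy to see which screens each subtask implies and where journeys overlap.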

Design (Part I)
Paper Prototypes
After the foundational work was done, we started sketching our designs, using the Crazy Eights (technically Fours here) method to quickly brainstorm solutions and then evaluate those with the most potential. The unique constraint of screen size forced us as designers to keep a laser focus on our users, asking questions like:
What are the goals of the user, and how might we guide them with minimal space?
What interaction design elements aren’t here that do exist on smartphones? How might that change the UX?
What compromises must be made given the design constraint?
For the purposes of an evaluation done later, we sketched out the three user journeys we believed would be most relevant to the user’s goals: checking stop information, favoriting a stop, and setting a stop notification. While these tasks aren’t far from each other mechanically, we wanted to make sure the user flow surrounding them was interconnected, so a user could easily switch between them without going back to the start.

Testing
Expert Analysis
For the first round of evaluations, we employed expert analysis: rather than bringing in users, we had “experts” evaluate our interface and provide feedback on our designs. There were two options for expert analysis: heuristic evaluation, where our design was measured against UX design heuristics, or cognitive walkthrough, where fundamental questions are asked about the interface from different perspectives. While this isn’t quite user testing, it’s an opportunity to fall back on well-established design fundamentals in a way that is cheaper and quicker than bringing in real users. Our team elected to do both, gaining a better understanding of both our user flow and our interaction design elements. We learned a lot about our design and identified our most important areas for improvement, those being:
Lack of Simplicity
Lack of Feedback
Lack of Consistency
Refinement
Once our refinement goals were defined, we iterated on our designs. In addition to improving our solution, we converted our paper wireframes to digital using Figma. Given the time constraint of the class, we didn’t have much time after this point to convert our prototype and decided now was the best time to do it.
First, we refined the navigation to be housed in one singular area: previously, bus selection was located in the map UI, but after the feedback we moved it to the main menu and allowed users to access their favorited buses and stops from there. The user journey for setting a notification for a bus was considered confusing and frustrating, with not enough affordances to properly guide the user. We streamlined the journey and gave the user more feedback on their decisions and what to do next. Finally, the lack of consistency was partially improved by converting the prototype to a Figma prototype, which made it much easier to make the small changes that keep the visual design clean and refined, making switching screens less taxing on the eyes.
All of these iterations were guided by the insights learned from the expert analysis. By applying design heuristics and walking through the mental processes of the UX, our team gained quick but important insights into our product, so that testing with real users could offer even more value.
As mentioned earlier, the prototype was digitized alongside iterating on various aspects of the user experience.
Usability Study
The final phase of the project was to conduct a full round of usability testing with participants, giving us one last opportunity to evaluate our product and get feedback on our designs.
The approach to this study was intended to be more scientific than what I’ve come to expect from the research done in the industry. Our hypothesis was that our smartwatch application would be better on a series of metrics compared to existing solutions. For example, we believed setting up a notification for a bus’s arrival would be easier in our app than the built-in timer app of the smartwatch. By comparing our interface with existing solutions on a series of metrics, we could truly answer the question of whether our work improved the user experience.
First, we identified the constructs that would be most informative about whether our interface accomplished what it set out to do: mental workload, user satisfaction, usability, and performance. A mix of qualitative and quantitative data would offer us a balanced perspective on what could be improved. We used an established instrument, the NASA-TLX, a widely used measure of mental workload. Finally, we wrote questions in a Likert-scale format so the data we gathered could be easily analyzed and presented later. To spare the rest of the details, many elements of a proper research study were present, and the process taught the team what it may be like to do research in the field of HCI in the future.
After running the study with a handful of participants, we consolidated the data and visualized it with tables and graphs, choosing the appropriate graph type for each comparison. Looking through the data and insights, we discovered that our interface performed only slightly better than the alternative solutions in two areas, usability and user satisfaction, while the changes in mental workload and performance were negligible. While the comparison may have been biased by history effects or participants’ prior exposure to the alternative solutions, accounting for those factors would likely not change the overall picture.
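As a small illustration of the kind of quantitative comparison this involves, the sketch below computes summary statistics for 5-point Likert satisfaction ratings of a prototype versus an existing alternative. The ratings are invented for the example, not data from our study.

```python
import statistics

# Hypothetical 5-point Likert satisfaction ratings for a prototype
# vs. an existing alternative -- illustrative numbers only.
prototype   = [4, 5, 4, 3, 5, 4]
alternative = [3, 4, 3, 3, 4, 3]

def summarize(ratings):
    """Return (mean, sample standard deviation) for a set of ratings."""
    return statistics.mean(ratings), statistics.stdev(ratings)

proto_mean, proto_sd = summarize(prototype)
alt_mean, alt_sd = summarize(alternative)

# A positive difference means the prototype was rated higher on average;
# a real study would follow up with a significance test (e.g. a t-test).
difference = proto_mean - alt_mean
```

Reporting the mean alongside the standard deviation is what lets a reader judge whether a small difference in averages is meaningful relative to the spread of the ratings.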

Conclusion
In summary, the experience of building a product in I E 572, from conducting the initial research to determine areas of improvement in current solutions, to applying heuristic principles to our design, to launching a full usability study with scientific rigor, taught me a lot on both a small and a large scale.
Psychology is an incredibly powerful tool in UX Design. I had always acknowledged the relationship between Psychology and Design but never fully grasped just how much theory there is to apply when creating technology solutions. Understanding a user’s motivations, thinking, and behavior was crucial to this case study and demonstrated the power of user-centered thinking. Knowing the user’s intentions and goals, while considering elements of perception, will lead to a great product.
Foundational knowledge is crucial in new environments. The design space of the smartwatch was unique for many reasons: the limited screen size, of course, but also the unexpected interaction limitations and context of the device. Unlike a smartphone, a smartwatch is intended to be even more convenient, being housed on your wrist rather than in your pocket. So how might that little nuance affect how the device is experienced? Questions like this up-ended my understanding of design, but falling back on foundational knowledge and designing with those constraints in mind is what enabled me to think creatively and always consider the user. Following design trends or patterns is one thing, but understanding where those patterns come from and why they have stood the test of time is what enables great decision-making.
Scientific Research is time-consuming, but concrete and valuable. During the last round of usability testing, I was exposed to how scientific research can be applied to product design. Before, most of the research I’d done in UX was qualitative and, compared to research found in publications and journals, “casual.” Applying elements like p-values and standard deviation was difficult but fresh and unique compared to user quotes and emotion mapping. Seeing the data processed into graphs and tables showing relationships and statistical power helped me understand how important this type of research can be in more consequential environments, like designing interfaces for pilots or other life-critical scenarios.