The National Center for Atmospheric Research's Research Data Archive (NCAR RDA) is a research archive that serves oceanographic and atmospheric data to an international audience with varying domain knowledge and professional requirements.
Our team of 5 UX researchers was tasked by the NCAR RDA development team with providing recommendations to improve users' ability to quickly discover in-demand datasets, locate spatial or temporal subsets of data, and access the data in understandable, useful formats.
No time for the details? Click through the summary slider below to understand my key contributions and takeaways.
I worked as a UX researcher on a team of 5 graduate students, where we implemented a mixed-methods research protocol to improve the usability and feature discoverability of the National Center for Atmospheric Research's Research Data Archive (NCAR RDA).
Our insights led to recommendations to enhance data discoverability, SEO, accessibility, and overall user experience; the most crucial of these have already been implemented by the RDA team!
The NCAR RDA team faced challenges in managing complaint tickets, understanding conflicting user needs and motivations, and catering to users with varying computational expertise.
Our primary research objective was to uncover problem areas related to data discoverability and accessibility and propose solutions.
While we fulfilled all of our client's research goals, and the NCAR RDA team has begun implementing our most critical suggestions (e.g., advanced filtering, fuzzy search), certain project hurdles went unresolved.
The archive team had a variety of tools and ways to slice data to suit the needs of every user, from undergraduate students getting their first experience with climate data to experienced meteorological scientists. The small NCAR RDA team was:
Our primary and most critical research objective was to uncover problem areas relevant to the discoverability and accessibility of data, and to propose solutions.
None of our team members had experience with climate datasets, GIS, or any related workflows, so our first research goal was to broadly understand the users, technology, and problem space.
After interviewing 7 participants, we affinity-mapped our interview results and identified the following critical insights.
Experienced users rarely interact with the site. Instead, they submit automated Python scripts directly to the NCAR supercomputer to generate compatible data files (a sketch of this workflow follows the insights below).
Subsetting is the main feature that interviewees cared about, and most were confused about how to use it.
Our top 3 insights significantly altered the RDA team's understanding of how the website functioned in their users' workflows. They hadn't considered that power users avoided the website almost entirely, and that their website's audience was actually early to mid-proficiency users with exploratory research motivations.
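To make the power-user workflow concrete, here is a minimal, illustrative sketch of the kind of script such users described running on NCAR's systems instead of the website; the file path, dataset, and variable names are hypothetical, not actual RDA holdings.

```python
# Illustrative sketch only: the path, dataset, and variable names are hypothetical.
# Power users script subsetting directly against archived files on NCAR's systems
# instead of clicking through the RDA website.
import xarray as xr

# Open an archived multi-file NetCDF dataset from shared storage (hypothetical path)
ds = xr.open_mfdataset("/glade/collections/rda/example_dataset/*.nc")

# Pull a single variable over a spatial and temporal subset
subset = ds["air_temperature"].sel(
    lat=slice(25, 50),
    lon=slice(230, 290),
    time=slice("2000-01-01", "2000-12-31"),
)

# Write a compact file compatible with the rest of the user's workflow
subset.to_netcdf("air_temperature_conus_2000.nc")
```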
Our primary and most critical research objective was to uncover problem areas relevant to the discoverability and accessibility of data, and to propose solutions.
We decided to recruit users from NCAR RDA's networks to run remote qualitative interviews.
Our team decided to organize our interview insights into key personas. This would help us structure feature and task-flow analysis more comprehensively, and present the RDA team with persona-based feature recommendations and strategies.
We constructed our 3 personas primarily by profession: academics, professionals, and students.
The goal was to produce a scannable, actionable persona for any NCAR RDA employee to make decisions around.
Our personas created a common understanding between our teams, allowing us to prioritize features based on user relevance and identifiable website traffic.
Now that we had identified the RDA's primary users and most significant issues, we needed to look outward and confirm their severity, so we conducted a competitor analysis to benchmark these issues against the following competitors.
Our team searched through website rankings, articles, blogs, news, and social media, and rated each competitor on a 1-5 scale. Once completed, we aggregated the scores, broken down by competitor type.
Our competitive analysis highlighted several key gaps that NCAR RDA needed to bridge in order to create a competitively usable and powerful experience:
General feature discoverability was poor due to inconsistent design language and affordances.
The homepage should be redesigned so that relevant links and services, such as THREDDS and FAQs, are more prominent.
Tutorialize the homepage so that users can navigate the website more easily, e.g., a “Start Here” button.
Add advanced filtering to the homepage search engine.
Design immediate and consistent visual feedback.
Dataset comprehensibility / scannability is low.
Now that we had some benchmarking, we created an interaction map and ran a task-based heuristic evaluation. This would help corroborate our interview findings and ascertain severity and priority.
The heuristic analysis identified 4 key areas with critical issues, corroborating and expanding on our interview and competitor analysis insights.
One of the core requirements from the RDA team was an understanding of how they could surface underutilized features, so we decided to explore this via usability testing.
The most important of these was the Thematic Real-time Environmental Distributed Data Services (THREDDS): an online, streamable, customizable method of accessing data that was surprisingly underutilized despite its advantages.
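To illustrate why THREDDS matters, here is a minimal sketch of remote, on-demand access through a THREDDS/OPeNDAP endpoint using xarray; the URL and variable name are placeholders, not real RDA endpoints.

```python
# Illustrative sketch of THREDDS-style streaming access.
# The OPeNDAP URL and variable name are placeholders, not real RDA endpoints.
import xarray as xr

# Opening over OPeNDAP is lazy: only metadata is fetched, no bulk download
url = "https://thredds.example.org/thredds/dodsC/example/dataset.nc"
ds = xr.open_dataset(url)  # requires the netCDF4 or pydap engine

# Only the requested slice is streamed from the server when accessed
precip = ds["precipitation"].sel(time="2010-06", lat=slice(30, 45))
precip.load()
```

This kind of on-demand subsetting is what makes THREDDS valuable to users who never want to download an entire archive.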
Our team prioritized tasks that tested:
After completing our individual tests, we collated our notes into a central “NCAR Usability Test Matrix” Google Sheet, color-mapped issues common to all our users, and then prioritized them according to frequency and impact.
Our interviews, heuristic evaluation, and usability testing revealed several persistent issues with the platform. We combined these insights with a thorough competitor analysis to generate key recommendations for the dev team.
The site was missing key functions like breadcrumbs, download progress bars, comprehensible site navigation, and discoverable info tabs. We presented design revisions to satisfy these key heuristics.
The website's design language was noisy, used overly large interaction elements, and was un-scannable. Our branding & design revisions included typography, layout, and element-size changes that would help reduce visual burden.
The lack of filtering options in the search engine and results page significantly hurt users' ability to explore, compare, and export datasets and subsets.
We revised the search functionality design and recommended new, off-the-shelf solutions.
This page was supposed to contain advanced search functionality, but it didn't allow multi-criteria search and comparison. Our redesigns presented new metadata filtering options, drawn from interviews and usability testing.
Our redesigns recommended the inclusion of a progress indicator when users requested manual dataset subsetting.
Selecting data subsets requires finding variables that match other datasets in a user's or organization's workflow. Since some datasets have hundreds of variables, we recommended a variable search function on the dataset info page.
Discovery-focused users are often confused by the different data access / download options and the tradeoffs of each. We recommended clearer explanatory text.
Our design revisions improved the prominence of underused bookmark and dataset comparison tools that were designed to be key parts of the user journey.
The THREDDS feature was also criminally underutilized, despite its clear advantages.
We presented our final high-priority findings and core recommendations, with screen illustrations, collected in a comprehensive, referenceable report.
Our project achieved the following results:
The RDA team has begun implementing some of our changes, such as:
Before: Opaque high-level filters
After: High-level filters expand into terminology-specific filters
Before: Full-page, non-prioritized menu
After: Half-page, simpler menu
In conclusion, our research insights were valuable, but overall we didn't have enough time to implement our insights into designs and then analyze them for measurable improvements.