A neurobiology research lab in the Neurobiology of Aging Department at ISMMS needed an up-to-date utility for processing high volumes of brain activity data. The data is collected as waveforms, which must be clustered by signal similarity to identify the firing of individual brain cells. The lab had inherited an innovative waveform clustering utility from a graduate student's project.
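For readers unfamiliar with this kind of processing, the sketch below is a minimal, hypothetical illustration of clustering waveforms by similarity (feature reduction followed by k-means). It is not the lab's actual pipeline: the data is synthetic, and it assumes MATLAB's Statistics and Machine Learning Toolbox.

```matlab
% Minimal, hypothetical illustration of clustering waveforms by similarity.
% NOT the lab's actual pipeline; uses synthetic data and the Statistics
% and Machine Learning Toolbox functions pca and kmeans.
nSpikes   = 300;
nSamples  = 48;
t         = linspace(0, 1, nSamples);
templates = [sin(2*pi*t); -sin(2*pi*t); sin(4*pi*t)];   % three fake cells

% Each row is one noisy waveform snippet drawn from a random template.
cellIdx   = randi(3, nSpikes, 1);
waveforms = templates(cellIdx, :) + 0.2*randn(nSpikes, nSamples);

% Reduce each waveform to a few principal-component scores.
[~, scores] = pca(waveforms);
features = scores(:, 1:3);

% Group similar waveforms; each cluster approximates one cell's firing.
nClusters = 3;
labels = kmeans(features, nClusters, 'Replicates', 5);

% Plot the mean waveform of each cluster for visual inspection.
figure; hold on;
for k = 1:nClusters
    plot(mean(waveforms(labels == k, :), 1));
end
legend(compose("Cluster %d", 1:nClusters));
```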
My job was to redesign the utility for better usability and a wider user base. This required significantly changing the user interface, designing new interactions and creating training materials.
Most of my research took place in the initial discovery stage, but I often returned to the investigative methods showcased below during later stages of the project, when I needed to evaluate the results of my prototype tests.
In the initial discovery stage I conducted a series of interviews to fully understand the objectives of the project. I focused the interviews on business and user goals and on understanding existing practices, standards, and restrictions.
The stakeholder and user interviews allowed me to develop a clear understanding of the project's overarching agenda and major user groups.
Data Analysts: The data analysts process the raw brain activity data to identify individual cell activity. They use the data processing software nearly every day and are responsible for processing hundreds of files per project. Their backgrounds vary from inexperienced high school student volunteers to postdoctoral fellows with extensive experience in the field. Their common goals are speed and accuracy.
Principal Investigators: These users check the accuracy of the data processing and interpret the results. This group consists of professors and postdoctoral fellows who are usually busy with other scientific and administrative tasks. They require clear, preferably visually distinct feedback about processed data, since they need to quickly yet reliably check identification accuracy for thousands of brain cells per project. These users do not work with the software daily; they usually receive a collection of processed files every two weeks or monthly.
Through competitive product research, testing, and user interviews I learned about the major types of alternative solutions available, identified existing trends, and summarized a set of best practices that would give our product a competitive advantage and longevity:
Avoid bloating the product with extra functions: let it do a few things, but do them really well.
Capitalize on a consistently updated platform, like MATLAB.
Use popular formats for raw data files and the simplest possible data structures to ensure wider applicability (see the sketch after this list).
Make the UI as self-explanatory as possible.
Provide user-centered, task-oriented documentation and onboarding protocols.
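As a concrete, hypothetical example of the "popular formats, simple data structures" guideline: raw waveforms could live in a flat struct saved to a plain .mat file, with an optional flat-text export. The field names, file names, and placeholder values below are invented for illustration and are not the product's actual format.

```matlab
% Hypothetical example of a flat, widely readable data layout; the field
% and file names are invented and are not the product's actual format.
recording.fs         = 30000;                     % sampling rate, Hz
recording.channel    = 1;                         % electrode channel index
recording.timestamps = sort(rand(100, 1)) * 60;   % placeholder spike times, s
recording.waveforms  = randn(100, 48);            % placeholder snippets

% A plain .mat file keeps the data easy to open from other tools.
save('session01_waveforms.mat', '-struct', 'recording');

% Optional flat-text export for tools that cannot read .mat files.
writematrix(recording.waveforms, 'session01_waveforms.csv');
```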
After the initial research gave me a good idea of the project's overarching goals, I conducted a small set of usability tests to find key pain points in the pre-existing product and determine what required immediate redesign. At the end of these tests I compiled an inventory of pressing issues. User observations also revealed the workflow commonalities between different users and projects, and the ways the existing product failed to support that workflow efficiently. Workflow diagrams helped me visualize the users' common complaints; an example is shown below.
After the discovery stage I had a clear map of the areas that needed immediate redesign. However, as I came up with ideas to fix each pain point, I made sure that my solutions linked back to the overarching goals. I therefore tested my prototypes and solutions against their effectiveness in meeting the strategic goals.
After several ideation sessions I compiled a list of the most promising ideas and evaluated them against the original goals. This gave me a good sense of priorities and helped me quickly eliminate several ideas that sounded good but did not fit the project's agenda.
Since most changes could be made in MATLAB's GUIDE by adding labels or altering properties of existing interface elements, testing multiple prototypes was quick and easy. These tests soon revealed the layout users preferred. I focused on disambiguating the text in labels and hints, improving visual clarity, and adding new functionality. Below are a few examples of the wireframes and prototypes I created and tested throughout this process.
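To illustrate why this kind of iteration was fast, here is a hypothetical example of the sort of property tweaks involved. In GUIDE these would target existing handles in the layout; the control below is created programmatically, and its name and text are invented, so the snippet runs on its own rather than reproducing the actual interface.

```matlab
% Hypothetical example of the property tweaks tested during prototyping.
% The control is created here only so the snippet is self-contained.
fig = figure('Name', 'Prototype label test');
btnCluster = uicontrol(fig, 'Style', 'pushbutton', ...
                       'Position', [20 20 160 30], 'String', 'Run');

% Disambiguated label plus an added hint, as in the tested prototypes.
set(btnCluster, 'String', 'Cluster waveforms');
set(btnCluster, 'TooltipString', ...
    'Group similar waveforms into putative cells');
```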






During the redesign process I had to introduce new, highly requested features and determine their appearance and behavior. Below is the specification sketch for the final interaction design, which won the approval of most users and delivered the optimal workflow.
At each stage of the prototyping and development process I conducted usability and benchmark tests to pick the most effective solutions.
To test the prototypes and final product updates, I developed sample data sets representing commonly seen data, plus a couple of standard scenarios for common usage types. Whenever possible, I conducted the usability tests with both experienced users and recently trained or new users, to estimate improvements in both long-term productivity and onboarding speed. During each session I noted the user's comments, pain points, and speed of task completion for each scenario and data set. The tests were the same or very similar for each subsequent prototype or product update, allowing accurate comparison of the tested version's performance against other versions and prototypes.
General instructions: Please try to perform the following tasks to the best of your ability, taking as much time as you need to figure things out. Please narrate your actions and voice any comments or difficulties you might have as you go along.
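Below is a minimal sketch of how per-task completion times could be logged during such a session. The scenario names and log fields are assumptions for illustration, not the actual test script.

```matlab
% Minimal sketch for logging task completion times during a session.
% Scenario names and note fields are assumptions, not the actual protocol.
scenarios  = {'Load file', 'Pre-process', 'Cluster', 'Export results'};
sessionLog = struct('task', {}, 'seconds', {}, 'notes', {});

for i = 1:numel(scenarios)
    fprintf('Start task: %s (press Enter when the user finishes)\n', ...
            scenarios{i});
    tic;
    input('', 's');                          % wait until the task is done
    sessionLog(end+1) = struct('task', scenarios{i}, ...
                               'seconds', toc, 'notes', '');
end
```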
I also developed sample data sets for benchmark productivity testing of the product. To improve the interpretability of the results, I chose users with high proficiency in the tested version of the software. The users were given the task of fully processing files they had not seen before, and their progress was observed and timed. The observations also provided insight into the workflow that skilled users adopted to process large volumes of data. Some experienced users were also asked to run self-timed tests; such sessions offered a more objective estimate of productivity improvements, since the users were not influenced by the knowledge of being observed.
General instructions: Please fully process one of the included files. Please fill in the blanks as you go along.
File Name: ____________________________________
File Size: _________ MB
Processing start time: ________ AM/PM
Expected population size: ________ neurons. (Visual estimation based on raw data)
Optional pre-processing:
Start time: ____________ AM/PM
End time: ____________ AM/PM
Pre-processed population size: ________ neurons.
Final population size: __________ neurons.
Processing end time: ________ AM/PM
Processed data confidence level: ________
Notes:
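Entries from completed forms could then be summarized into comparable metrics. The sketch below is a hypothetical illustration of that step; the field names and example values are assumptions, not real benchmark data.

```matlab
% Hypothetical post-processing of one completed benchmark form.
% Field names and values are illustrative assumptions, not real data.
entry.startTime    = datetime('10:15 AM', 'InputFormat', 'hh:mm a');
entry.endTime      = datetime('11:05 AM', 'InputFormat', 'hh:mm a');
entry.finalNeurons = 42;
entry.fileSizeMB   = 180;

elapsedMin = minutes(entry.endTime - entry.startTime);
fprintf('Processing time: %.0f min\n', elapsedMin);
fprintf('Throughput: %.1f neurons/hour, %.1f MB/hour\n', ...
        entry.finalNeurons / (elapsedMin/60), ...
        entry.fileSizeMB   / (elapsedMin/60));
```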
A comparison of the initial usability tests of the inherited software with same-scenario tests of the redesigned software showed clear improvements. The workflow most users adopted with the inherited interface was inconsistent and confusing, whereas the workflow for the updated interface flowed naturally from left to right and offered an easy-to-learn, logical process for new users.