The PBSi DTV team is deep in the throes of planning for user testing of the new modules later this month. We're close to completing a strategy for the testing that we believe will put us in a strong position for prioritizing the next steps in the development and maintenance of the TV Schedule tools.
We already have strong data from the user feedback many stations have sent us. These "super-fans" (as Jeremy Roberts, our technical lead from The Incredible Pear, calls them) represent the core audience visiting your site in search of schedules, and they have already given us a handful of clear, actionable issues, such as the need for a full-week view. Our aim with user testing is therefore two-fold:
- Gain a better understanding of how users typically interact with TV schedules online (on and off PBS and station sites).
- Direct users through traditional usability tasks within the modules to uncover additional areas where improvement is needed.
Our plan is to seek out testers who are representative of this core, schedule-seeking audience.
To begin each test, the usability firm will first interview each tester to establish some background on how he or she typically interacts with TV schedules: How often do they watch TV? How do they find out what's on? Do they ever use the internet to find TV listings? If so, what sites do they use? How often do they watch PBS? How do they find out what's on PBS?
Once the tester is thinking about TV schedules and how they normally interact with them, we'll present them with one of the DTV tools. First we'll give them a chance to click around on their own and share feedback on what seems confusing and how they expect the grid to work.
Next we'll guide them through some typical tasks: find out when a specific program is on, find out what's on in the next XX days, find out more information about a program, and so on. Tasks will cover the major features of the full grid (including search) and the what's-on tools.
Earlier this week we sent our list of tasks and questions to the user testing firm we're working with; they are using it to craft a script for sessions scheduled for the week of March 30th. The benefit of this timing is that we'll have the opportunity to include a mock-up of the v1.1 design in the testing, to help assess whether the changes address the most severe issues. V1.1 work is being fully spec'd over the next two weeks; from there, we will begin work on the tools themselves. If you have any thoughts or comments on this plan, please let us know!
We'll continue to provide updates as the process moves forward. I'm looking forward to seeing the results of user testing and sharing them with all of you.
If you're curious, we're happy to share the full test-plan doc. Just shoot us a note at email@example.com and we'll pass it on.