ARL’s Service Quality Evaluation Training

a.k.a. “Assessment Boot Camp”

Here I sit with my thirty-odd pages of notes from the jam-packed five days of data analysis training that took place in New Orleans a couple of weeks ago. Allow me to decipher for you the details of qualitative and quantitative methods of assessment and their application to analyzing service quality data.

Part One: Qualitative Data
First off, Colleen Cook was a wonderful instructor! It was clear that she had significant experience working with qualitative data, and her instruction on focus groups in particular was incredibly helpful. She gave us an opportunity to learn through participation, which, for me, really helped things stick.

Some things we learned:

  • Qualitative Data: Professionals who intend to use qualitative data to inform their library planning and decision making should have a certain level of comfort with contradictions and ambiguity. We are not dealing with numbers here; we are discovering relationships and building theories.
  • Focus Group Techniques:
    • We explored and participated in a brainstorming activity and a more traditional, facilitated focus group. The two techniques allowed different types of information to come to the surface. Intentional group formations illustrated how important the choice of participants can be: who is in the room influences what the focus group reveals.
    • Our test topic was “What are your library’s ‘sacred cows’?” Working in two sections, we were able to create a (long) list of sacred cows, analyze them for themes, and select three overarching issues that, if dealt with proactively and directly, would give the libraries more forward momentum. We’ll be trying this exercise in my library very soon.

Day two of the qualitative data section revolved around the software tool Atlas.ti. Having recently been through the exciting process of coding LibQual+ comments and reference transaction transcripts by hand (using a homemade database), I was an instant convert to Atlas.ti. This tool is fabulous! It allows for coding on a more detailed, complex level than what I was previously able to do easily with an Access database, and it saves me quite a bit of time, since I don’t have to monkey around with building a database. (Not my favorite pastime.)
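For anyone new to this, the heart of qualitative coding is simply attaching one or more codes to each chunk of text and then looking at how those codes relate. Atlas.ti does this interactively, and far more richly, than any script could, but as a purely illustrative sketch of the idea, here are a few lines of Python that tag comments using a crude keyword codebook. The comments, codes, and keywords are all invented for the example.

    # Toy illustration of qualitative "coding": attach codes to comments.
    # (Atlas.ti does this interactively and far more flexibly; the codes and
    # keywords below are invented purely for this example.)

    comments = [
        "I can never find the e-journals I need from off campus.",
        "ILL turnaround has been fantastic this year.",
        "The study rooms are always full during finals.",
    ]

    # A crude codebook: code -> keywords that suggest it
    codebook = {
        "e-resource access": ["e-journal", "off campus", "database"],
        "interlibrary loan": ["ill ", "interlibrary"],
        "space": ["study room", "seating", "noise"],
    }

    def code_comment(text):
        """Return the set of codes whose keywords appear in the comment."""
        lowered = text.lower()
        return {code for code, keywords in codebook.items()
                if any(kw in lowered for kw in keywords)}

    for c in comments:
        print(sorted(code_comment(c)), "->", c)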

By focusing on the texts and codes, Atlas reinforces the purpose of using qualitative data: to understand relationships and build theories. Again, this is not about numbers! We don’t care, necessarily, how many comments were about e-resource access, or what percentage of respondents mentioned ILL. Our aim is to understand perceptions of the libraries and their services, how those perceptions relate to users’ expectations of service, and whether a theory relating to library service can be developed through analysis of the data. Again, this is not work for people who are uncomfortable with contradictions and ambiguity, an inherent feature of all social science research and, well, of dealing with people.

Part Two: Quantitative Data
Statistics. Oy. This is what we had all been dreading during the first two days. Bruce Thompson is an amazingly adept teacher, with a casual sense of humor and mannerisms that remind me of my father, a high school math teacher. These guys wear Levi’s and cowboy boots to the office, enjoy life fully, and understand how to maintain that delicate balance of humor and learning in a classroom. Bruce easily gained our respect and confidence, helped along by the fact that we were presented with two large “containers of knowledge” (read: textbooks) filled by the man himself. With Bruce’s guidance, we marked the particularly tricky sections with helpful hints like “this is a one-martini concept.” He was right.

After participating in this course, one thing I am now certain of is that schools of library science should always, always, always offer a course on statistics. (While I’m making recommendations about what library schools should teach, grant writing is a must. And, if we’re really going for it, everyone should take a course on assessment!) We were able to approach statistics in a very practical manner, rooted firmly in concepts. We did not memorize formulas or compute equations by hand. It was a simple format: step one, learn a concept; step two, make SPSS do the work. This was fabulous; very practical, and very empowering. I understood what we were talking about, and developed some confidence that I would be able to run these analyses on my own someday, and understand what they mean.

Some more things we learned:

  • Quantitative analysis allows you to find the story in your data. This is the story we want to be able to communicate to our library and university administrators, and let’s not forget our patrons. Yes, we can all read numbers, but what are the numbers telling us about the library? And, how do we know these numbers reflect reality?
  • Levels of Scale: All numbers do not contain the same amount of information. It’s imperative that you understand what level of data you’re working with. The “amount” of information that numbers contain, i.e., their level of scale, determines the nature of the computations you can apply to that data. Therefore, before you collect data, be sure you understand what level of data you’re gathering, and what types of analyses you’ll reasonably be able to run on that data to tell you what you need to know. We want to collect the highest level of data possible: you can always take a step back on the scale, but never forward. (A small Python sketch of this “step back, never forward” idea follows this list.) At this point in our lesson, I was reminded of Edward Tufte’s theories on data resolution. Tufte would build on that recommendation: not only should we collect the highest level of data possible, we should display it at that level of detail as well. People are capable of interpreting complex data when it is displayed in a visually sensible way. These are the types of things that should be considered in the early stages of planning an assessment project.
  • Validity: Are we measuring what we say we’re measuring, and nothing else? Now, that’s a good question. When I write a survey question, such as “Please rate your level of satisfaction with the service at the Library Information Office,” and supply a ratings scale, how do I know that the numbers actually mean that people think we offer 4-out-of-5 service, 5 being “Superb”? Well, you read Bruce’s red textbook and then run a factor analysis. Good to know.
  • Reliability: How many times have I been asked if the data is reliable! Oh, let me count the ways… “Trust the data!”, I have found myself nearly shrieking at many a committee meeting. Well, that’s only a good thing to shriek if you’re confident that the data is trustworthy, and it had better be if you’re presenting it at a meeting. There are many factors that can affect the reliability of survey data, and many ways to measure whether your data has been affected. Culture, language, audience, a sense of social responsibility (i.e., why the social work school always has a high response rate), and a variety of other psychological factors can influence your data. The design of the survey instrument itself is highly influential: does it take the known influential factors into account? At my library, there’s a high level of concern that patrons are only motivated to respond when they have something negative to say, and that there will therefore always be an inherent bias in our service quality data. This has presented a constant challenge for me. Bruce taught us that no data is perfect, that this is OK, and that a lot of data is reasonably trustworthy. At such moments of doubt, I like to mention that some data is better than no data, and that we can actually measure for bias. (A toy sketch of a factor analysis and a reliability calculation also follows this list.)
  • SPSS makes magic: Again, Bruce approached SPSS in a straightforward, easy-to-remember way. We learned how to apply various analytical concepts and formulas to our data without getting bogged down in heavy theory or lengthy computations. We computed the mean, median, mode, dispersion, range, variance, and standard deviation (which was really exciting, honestly), learned about skewed tails and something called kurtosis, built histograms and box plots, and discussed how all this stuff relates to our data story. Under the “Important stuff” section of my notes, you will find “never tell people a mean without the standard deviation,” a guideline I promptly adopted and highly recommend. (A short Python sketch of these descriptive measures follows the list as well.)
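To make the “step back on the scale, but never forward” idea concrete, here is a minimal Python sketch (the ages below are invented, and pandas stands in for SPSS): once a ratio-level variable like exact age is collapsed into ordinal brackets, the lost detail cannot be recovered.

    import pandas as pd

    # Ratio-level data: exact ages (invented for the example).
    ages = pd.Series([19, 22, 24, 31, 38, 45, 52, 67])

    # Stepping "back" on the scale: collapse ratio data into ordinal brackets.
    brackets = pd.cut(ages, bins=[0, 25, 40, 60, 120],
                      labels=["under 25", "25-39", "40-59", "60+"])
    print(brackets.value_counts().sort_index())

    # You can still compute a meaningful mean from the ratio-level ages...
    print("mean age:", ages.mean())

    # ...but there is no way to recover exact ages (or a defensible mean)
    # from the ordinal brackets alone; the step forward is impossible.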
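For the curious, here is a toy sketch of the validity and reliability checks mentioned above, run on invented five-point ratings for three related survey items. The factor analysis uses scikit-learn rather than SPSS (a real validation study would involve far more items and respondents), and Cronbach’s alpha, while not named in my notes, is a standard reliability estimate computed here from its textbook formula.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Invented 5-point ratings from eight respondents on three related items.
    items = np.array([
        [4, 4, 5],
        [3, 3, 4],
        [5, 5, 5],
        [2, 3, 2],
        [4, 5, 4],
        [3, 2, 3],
        [5, 4, 5],
        [1, 2, 2],
    ], dtype=float)

    # One flavor of validity evidence: do the items load on a single factor?
    fa = FactorAnalysis(n_components=1).fit(items)
    print("factor loadings:", fa.components_.round(2))

    # Reliability: Cronbach's alpha from its textbook formula.
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
    print("Cronbach's alpha:", round(alpha, 2))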
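And finally, a short Python/pandas version of the descriptive measures from that last bullet, using invented one-to-five satisfaction ratings and ending with the habit worth adopting: never a mean without its standard deviation.

    import pandas as pd

    # Invented 1-5 satisfaction ratings.
    ratings = pd.Series([5, 4, 4, 3, 5, 2, 4, 5, 3, 4])

    print("mean:    ", ratings.mean())
    print("median:  ", ratings.median())
    print("mode:    ", list(ratings.mode()))
    print("range:   ", ratings.max() - ratings.min())
    print("variance:", ratings.var())        # dispersion around the mean
    print("std dev: ", ratings.std())        # report this alongside the mean
    print("skew:    ", ratings.skew())       # lopsided tails
    print("kurtosis:", ratings.kurtosis())   # how heavy those tails are

    # The guideline from my notes: never a mean without the standard deviation.
    print(f"satisfaction: {ratings.mean():.2f} (SD {ratings.std():.2f})")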

We were also taught that, much as with qualitative data, statistics are not completely cut and dried. These numbers are not infallible; statistics are made by people. Always keep this in mind. There are many choices to be made when collecting and analyzing statistical data. Bruce posed a question: are we going to be statistically liberal or conservative? Make your choices, and be prepared to defend them. Understand the expectations of your audience in terms of reliability and validity, sample size and output. And, in the end, statistics are also about relationships: relationships that inform and enrich our data story.

Not surprisingly, most of what we learned relates directly to the analysis of LibQual+ data; however, the workshop was not LibQual+ “preachy” at all. I hope that by now most libraries have come to terms with the scope and purpose of LibQual+, and recognize that it does not meet every service quality assessment need but serves a broader purpose, most relevant when used longitudinally and in the context of measuring and comparing libraries on a national or international scale. I see LibQual+ as a temperature-taker for my library, a triennial “check-up” that brings trends to light and helps us gauge our progress as a library system. The skills taught at “assessment boot camp” will hopefully add another, richer level of data analysis, and facilitate more timely analysis, helping us find relevance in LibQual+ for our 22 departmental libraries. These tools have also proven to be immediately applicable to other projects in the library. We are planning to conduct annual focus groups with our patron populations, the first being a faculty focus group in Fall 2007. Our Access Services Department is also launching a pilot Service Quality Feedback mechanism, which looks suspiciously like a survey, this coming fall. I can see how using SPSS to validate the data during our tests over the summer will be invaluable in promoting the survey for broader distribution when we’re ready to move from “pilot” to “beta.”

Thank you to Martha Kyrillidou, Colleen Cook, Bruce Thompson and Dawn Thistle for taking the time to invest their experience, knowledge, and teaching skills in a group of assessment newbies. Also, thanks to ARL for consistently providing such relevant, practical training, and choosing great hotels in great cities at which to hold these assessment conferences. (Heads up, Seattle!)

Conclusion: I highly recommend this training to anyone who is or will be working directly with LibQual+ data, or who is responsible for assessment activities at their library. This was the most relevant, practical assessment-related training I’ve attended thus far. Over the course of the week I was able to build a strong foundation in data analysis that is enabling me to move forward in my job with confidence, and to continue developing my network of assessment colleagues, who have proven to be invaluable advisors and mentors over the past year as I attempt to master this world of library assessment.

2 Responses to ARL’s Service Quality Evaluation Training

  1. Steph Wright April 17, 2007 at 4:42 pm

    Great post, Jennifer!!!
    Really helped me remember some of the overarching concepts I’d already let slip (has it really only been a month?!).

    RE: Atlas.ti. I’m torn on that software. It was interesting, it was cool and I can see where it would help a lot with large amounts of qualitative data, say from lots of focus groups & interviews. For our triennial use of comment coding from our locally-constructed survey, I still think we’re gonna stick with Access. 1) By running our user satisfaction survey every three years for the last 15 years, we already have a pretty good idea what most of the themes are going to be, 2) Our staff & users expect reports back on what we get from the comments. Atlas.ti is great at helping identify themes but is left wanting when it comes to actually creating reports on those themes, 3) We already own Access and I already know Access. Definitely worth downloading the trial version and playing around with it though! (http://www.atlasti.com/demo.html)

    Being a visual person, I so agree with the Tufte remark. I was hoping we’d get a little more about data presentation in the Academy but there was so much to learn as it was, I’m not sure how they could squeeze any more into a week!

    I also lend my heartfelt thanks to Martha, Colleen, Dawn & Bruce. There are a lot of books out there on qualitative & quantitative data (including the small library we brought back from the training) but this training was SO helpful in providing a foundation so I can actually UNDERSTAND what’s in those books.

  2. Kay Chapa April 18, 2007 at 9:24 am

    Yes, thank you! You provided an excellent summary and you also reminded me of some things I had already forgotten (and probably wouldn’t remember until I uncovered my notes, currently buried somewhere on my desk).
