a.k.a. “Assessment Boot Camp”
Here I sit with my thirty-odd pages of notes from the jam-packed five days of data analysis training that took place in New Orleans a couple of weeks ago. Allow me to decipher for you the details of qualitative and quantitative methods of assessment and their application to analyzing service quality data.
Part One: Qualitative Data
First off, Colleen Cook was a wonderful instructor! It was clear that she had significant experience working with qualitative data, and her instruction on focus groups in particular was incredibly helpful. She gave us an opportunity to learn through participation, which for me, really helped things stick.
Some things we learned:
- Qualitative Data: Professionals who intend to use qualitative data to inform their library planning and decision making should have a certain level of comfort with contradictions and ambiguity. We are not dealing with numbers here; we are discovering relationships and building theories.
- Focus Group Techniques:
- We explored and participated in a brainstorming activity and a more traditional, facilitated focus group. The two focus group techniques allowed different types of information to come to the surface. Intentional group formations illustrated how important choosing the participants in your focus group can be: it can influence what the focus group reveals.
- Our test topic was “What are your library’s ‘sacred cows’?” Done in two sections, we were able to create a (long) list of sacred cows, analyze them for themes, and select three overarching issues that, if dealt with proactively and directly, would give the libraries more forward momentum. We’ll be trying this exercise in my library very soon.
Day two of the qualitative data section revolved around the software tool Atlas.ti. Having recently been through the exciting process of coding LibQual+ comments and reference transaction transcripts by hand (using a homemade database), I was an instant convert to Atlas.ti. This tool is fabulous! It allows for coding on a more detailed, complex level than what I was previously able to do easily using an Access Database, and saves me quite a bit of time, since I don’t have to monkey around with building a database. (Not my favorite pastime.)
By focusing on the texts and codes, Atlas.ti reinforces the purpose of using qualitative data: to understand relationships and build theories. Again, this is not about numbers! We don’t care, necessarily, how many comments were about e-resource access, or what percentage of respondents mentioned ILL. Our aim is to understand perceptions of the libraries and their services, how those perceptions relate to users’ expectations of service, and whether a theory of library service can be developed through analysis of the data. Again, this work demands comfort with contradictions and ambiguity, which come with the territory in all social science research, and, well, in dealing with people.
Part Two: Quantitative Data
Statistics. Oy. This is what we had all been fearing the first two days. Bruce Thompson is an amazingly adept teacher, with a casual sense of humor and mannerisms that remind me of my father, a high school math teacher. These guys wear Levi’s and cowboy boots to the office, enjoy life fully, and understand how to maintain that delicate balance of humor and learning in a classroom. Bruce easily gained our respect and confidence, helped along by the fact that we were presented with two large “containers of knowledge” (read: textbooks) filled by the man himself. With Bruce’s guidance, we marked the particularly tricky sections with helpful hints like “this is a one-martini concept.” He was right.
After participating in this course, one thing I am now certain of is that schools of library science should always, always, always offer a course on statistics. (While I’m making recommendations about what library schools should teach, grant writing is a must. And, if we’re really going for it, everyone should take a course on assessment!) We were able to approach statistics in a very practical manner, rooted firmly in concepts. We did not learn formulas or compute equations. The format was simple: step one, learn a concept; step two, make SPSS do the work. This was fabulous: very practical, and very empowering. I understood what we were talking about, and developed some confidence that I would be able to run these analyses on my own someday, and understand what they mean.
Some more things we learned:
- Quantitative analysis allows you to find the story in your data. This is the story we want to be able to communicate to our library and university administrators, and let’s not forget our patrons. Yes, we can all read numbers, but what are the numbers telling us about the library? And how do we know these numbers reflect reality?
- Levels of Scale: Not all numbers contain the same amount of information. It’s imperative that you understand what level of data you’re working with. The “amount” of information that numbers contain, i.e. their level of scale, determines the nature of the computations you can apply to that data. Therefore, before you collect data, be sure you understand what level of data you’re gathering, and what types of analyses you’ll reasonably be able to run on that data to tell you what you need to know. We want to collect the highest level of data possible: you can always take a step back on the scale, but never forward. At this point in our lesson, I was reminded of Edward Tufte’s theories on data resolution. Tufte would build on that recommendation: not only should we collect the highest level of data possible, we should display that data at a high resolution as well. People are capable of interpreting complex data when it is displayed in a visually sensible way. These are the types of things that should be considered in the early stages of planning an assessment project.
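The “step back but never forward” rule can be made concrete with a small sketch (all data here is hypothetical, and Python is standing in for SPSS): interval-level satisfaction scores can always be collapsed into ordinal categories, but the categories can never be turned back into the original scores.

```python
# Hypothetical 1-5 satisfaction ratings, treated as interval-level data.
from statistics import mean

ratings = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4]

# Interval level: means (and standard deviations) are meaningful.
avg = mean(ratings)

# Stepping back to the ordinal level: collapse scores into ordered categories.
def to_ordinal(score):
    if score >= 4:
        return "satisfied"
    elif score == 3:
        return "neutral"
    return "dissatisfied"

categories = [to_ordinal(r) for r in ratings]

# At the ordinal level only order-based summaries (counts, mode, median
# rank) remain valid; the exact mean is unrecoverable from the categories.
print(avg)                            # 3.9
print(categories.count("satisfied"))  # 7
```

This is why the collection stage matters: had the survey gathered only the three ordinal labels, no analysis could reconstruct the finer-grained scores.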
- Validity: Are we measuring what we said we’re measuring, and nothing else? Now, that’s a good question. When I write a survey question such as “Please rate your level of satisfaction with the service at the Library Information Office” and supply a ratings scale, how do I know that the numbers actually mean that people think we offer 4-out-of-5 service, 5 being “Superb”? Well, you read Bruce’s red textbook and then run a factor analysis. Good to know.
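A full factor analysis belongs in a statistics package, but the core idea — checking whether a set of survey items hang together as one underlying construct by examining the eigenvalues of their correlation matrix — can be sketched briefly. The data, items, and threshold below are all hypothetical illustrations, not the workshop’s materials:

```python
import numpy as np

# Hypothetical responses: rows = respondents, columns = three survey items
# that are all supposed to measure "satisfaction with service".
responses = np.array([
    [5, 4, 5],
    [4, 4, 4],
    [2, 3, 2],
    [5, 5, 4],
    [3, 3, 3],
    [1, 2, 1],
])

# Item-by-item correlation matrix, then its eigenvalues (largest first).
corr = np.corrcoef(responses, rowvar=False)
eigenvalues = np.sort(np.linalg.eigvalsh(corr))[::-1]

# Kaiser's rule of thumb: retain factors whose eigenvalue exceeds 1.
# If only the first eigenvalue does, the items plausibly measure a
# single underlying construct -- evidence for the scale's validity.
n_factors = int(np.sum(eigenvalues > 1))
print(n_factors)  # 1
```

If instead two or more eigenvalues exceeded 1, the “satisfaction” items would be picking up more than one thing, i.e. measuring something other than what we said we’re measuring.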
- Reliability: How many times have I been asked if the data is reliable! Oh, let me count the ways… “Trust the data!”, I have found myself nearly shrieking at many a committee meeting. Well, that’s only a good thing to shriek if you’re confident that the data is trustworthy, and it had better be if you’re presenting it at a meeting. There are many factors that can affect the reliability of survey data, and many ways to measure whether your data has been affected. Culture, language, audience, a sense of social responsibility (i.e. why the social work school always has a high response rate), and a variety of other psychological factors can influence your data. The development of the survey tool is highly influential: does it take into account the known influential factors? At my library, there’s a high level of concern about motivation: that patrons are only motivated to respond when they have something negative to say, and that there will therefore always be an inherent bias in our service quality data. This has presented a constant challenge for me. Bruce taught us that no data is perfect, that this is OK, and that a lot of data is reasonably trustworthy. At such moments of doubt, I like to mention that some data is better than no data, and that we can actually measure for bias.
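One standard way to put a number on reliability — not named in my notes, but a staple of this literature — is Cronbach’s coefficient alpha, which compares the variance shared across survey items to the total variance. A minimal sketch with made-up ratings:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha; rows = respondents, columns = items."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 ratings on four related service-quality items.
scores = [
    [4, 5, 4, 4],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 4],
]
alpha = cronbach_alpha(scores)

# Rules of thumb vary, but alpha above roughly 0.7 is commonly read
# as acceptable internal consistency for a survey scale.
print(round(alpha, 2))
```

A high alpha doesn’t rescue a biased sample, of course; it only says the items are consistent with one another, which is exactly why reliability and validity get asked about separately.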
- SPSS makes magic: Again, Bruce approached SPSS in a straightforward, easy-to-remember way. We learned how to apply various analytical concepts and formulas to our data without getting bogged down in heavy concepts and lengthy computations. We measured mean, median, mode, dispersion, range, variance, and standard deviation (which was really exciting, honestly), learned about skewed tails and something called kurtosis, built histograms and box plots, and discussed how all this stuff relates to our data story. Under the “Important stuff” section of my notes, you will find “never tell people a mean without the standard deviation,” a guideline I promptly adopted and highly recommend.
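Those descriptive measures are easy to reproduce outside SPSS too. A quick sketch using Python’s standard library on an invented set of hourly desk counts (both the scenario and the numbers are hypothetical):

```python
from statistics import mean, median, mode, variance, stdev

# Hypothetical visits-per-hour counts at one service desk.
counts = [12, 15, 11, 15, 20, 9, 15, 13]

print(mean(counts))      # 13.75
print(median(counts))    # 14.0
print(mode(counts))      # 15
print(variance(counts))  # sample variance
print(stdev(counts))     # sample standard deviation

# The boot-camp rule: report the mean *with* its standard deviation,
# e.g. "13.75 +/- 3.33 visits per hour", so the reader sees the spread
# and not just the center of the distribution.
```

Two desks could share the identical mean of 13.75 while one swings wildly hour to hour, which is exactly what the standard deviation exposes and the bare mean hides.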
We were also taught that, much as with qualitative data, statistics are not completely cut and dried. These numbers are not infallible; statistics are made by people. Always keep this in mind. There are many choices to be made when collecting and analyzing statistical data. Bruce posed a question: are we going to be statistically liberal or conservative? Make your choices, and be prepared to defend them. Understand the expectations of your audience in terms of reliability and validity, sample size and output. And, in the end, statistics are also about relationships: relationships that inform and enrich our data story.
Not surprisingly, most of what we learned relates directly to the analysis of LibQual+ data; however, the workshop was not LibQual+ “preachy” at all. I hope that by now many libraries have come to terms with the scope and purpose of LibQual+, and recognize that it does not meet every service quality assessment need, but serves a broader purpose, and is most relevant when used longitudinally and within the context of measuring and comparing libraries on a national or international scale. I see LibQual+ as a temperature-taker for my library: a triennial “check-up” that brings trends to light and helps us gauge our progress as a library system. The skills taught at “assessment boot camp” will hopefully add a richer level of data analysis, and facilitate more timely analysis, helping us find relevance in LibQual+ for our 22 departmental libraries. These tools have also proven to be immediately applicable to other projects in the library. We are planning on conducting annual focus groups of our patron populations, the first being a faculty focus group in Fall 2007. Our Access Services Department is launching a pilot Service Quality Feedback mechanism, which looks suspiciously like a survey, this coming fall as well. I can see how using SPSS to validate the data during our tests over the summer will be invaluable in promoting the survey for broader distribution when we’re ready to move from “pilot” to “beta.”
Thank you to Martha Kyrillidou, Colleen Cook, Bruce Thompson and Dawn Thistle for taking the time to invest their experience, knowledge, and teaching skills in a group of assessment newbies. Also, thanks to ARL for consistently providing such relevant, practical training, and choosing great hotels in great cities at which to hold these assessment conferences. (Heads up, Seattle!)
Conclusion: I highly recommend this training to anyone who is or will be working directly with LibQual+ data, or who is responsible for assessment activities at their library. This was the most relevant, practical assessment-related training I’ve attended thus far. Over the course of the week I built a strong foundation in data analysis that is enabling me to move forward in my job with confidence, and I continued to develop my network of assessment colleagues, who have proven to be invaluable advisors and mentors over the past year as I attempt to master this world of library assessment.