ARL Library Assessment Coordinators – Jan 11

We had a fun-filled day of assessment activities at the U of Pennsylvania’s Houston Hall on Jan 11. The morning was devoted to the ARL Library Assessment Coordinators meeting — the first meeting of its kind! The meeting was open to all those in ARL libraries who have responsibility for or work on assessment-related activities (not just assessment “coordinators”) and was part of an ARL effort to support the community of assessment practice. It was designed not only to exchange information about assessment in libraries but also to help develop personal connections with counterparts at other institutions.

The program opened with a welcome by Carton Rogers, the Director of the Library at UPenn, who talked about the value of assessment at UPenn — an institution that is leading the way in data-driven decision making, supported through the work of Joe Zucca over the years.

Participants had the opportunity to receive a hot-off-the-press SPEC Kit on Library Assessment written by Lynda White (U of Virginia) and Stephanie Wright (U of Washington), who presented their findings.

Discussion followed regarding the 2008 Library Assessment Conference, to be held at the U of Washington in Seattle on August 2-4, 2008. The Call for Proposals is out, and proposals are due January 31. Several half-day workshops have already been confirmed for August 7 (Learning Outcomes, Performance Measures/Balanced Scorecard, Usability, Assessment methods for space planning and evaluation); other potential topics include: Developing and implementing assessment plans, E-metrics/usage statistics, Data presentation/visualization, Qualitative assessment methods, and Survey methods and analysis.

Round-table discussions took place on Moving Assessment Forward in Our Libraries. The discussion was built around issues submitted by the libraries:

  • The dashboard – ongoing measurement/assessment tied to library priorities, services and planning
  • Using results/determining outcomes/actions/communicating/marketing results and actions internally/externally

The tables then considered three questions:

  • What are the important issues in the areas above?
  • What role can ARL play to help research libraries address these issues?
  • What should the broader library assessment community do to support assessment?

Summaries of these discussions will be posted as they arrive in the coming days. The afternoon continued with the Library Assessment Forum and wonderful presentations by Joe Zucca (UPenn) and Xin Li (Cornell)!

5 Responses to ARL Library Assessment Coordinators – Jan 11

  1. martha kyrillidou January 23, 2008 at 12:54 am #

    Table discussion from the January 2008 Assessment Coordinators’ meeting.

    Topic: Dashboard for senior administration.
    1. What to include?
    A. What makes your library different or distinctive?
    B. Return on Investment.
    C. Breadth of services and who is benefiting from them – sensitive to institutional context.
    D. The dashboard should not be internally focused; it should focus on what external audiences want. What is a university priority?
    2. How do you get the information?
    A. Organizational intelligence exists on the periphery. Need periodic intelligence audit.
    B. Need plumbing/infrastructure.
    3. Examples of linking stats to university priorities:
    A. Graduate studies. How does the library add value? A barrier for graduate students can be the research process, and the library helps with this. Show how we do that – subject specialists’ meetings with graduate students, dedicated study spaces, etc.
    B. Area studies. Show circulation of foreign language materials to graduate students. Using this material is critical to getting their degree. The library is a key player.
    4. How can ARL help?
    A. Providing good examples/best practices.
    B. Focusing the community effort/leveraging resources/brokering and facilitating. Bringing together institutions with similar interests and capacity for collaborative projects.
    5. How can the broader community of assessment help?
    A. Help build the plumbing. We should not all be investing resources to do this separately. Many libraries use the same systems (ILS, campus information systems, proxy servers) and could collaborate on projects; a minimal sketch of one such piece of plumbing follows.
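
    To make the “plumbing” idea concrete, here is a minimal sketch of a script that tallies e-resource requests by destination host from proxy-server logs — the kind of component collaborating libraries could build once and share. It assumes logs in the common Apache-style combined format that rewriting proxies such as EZproxy can be configured to produce; the file name and field layout are illustrative assumptions, not any particular institution’s setup.

    ```python
    import re
    from collections import Counter

    # Assumed layout: host ident user [time] "METHOD url ..." status bytes
    # Adjust the pattern to match your proxy's actual log configuration.
    LOG_LINE = re.compile(r'\S+ \S+ (?P<user>\S+) \[[^\]]+\] "(?:GET|POST) (?P<url>\S+)')

    def tally_hosts(path):
        """Count requests per destination host in a proxy log file."""
        counts = Counter()
        with open(path) as f:
            for line in f:
                m = LOG_LINE.match(line)
                if m:
                    url = m.group("url")
                    # Reduce the full URL to its host for a per-vendor tally.
                    host = url.split("/")[2] if "://" in url else url
                    counts[host] += 1
        return counts

    if __name__ == "__main__":
        # "proxy.log" is a hypothetical file name.
        for host, n in tally_hosts("proxy.log").most_common(10):
            print(f"{n:8d}  {host}")
    ```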

    Notes by Nancy Slight-Gibney, January 2008

  2. martha kyrillidou January 23, 2008 at 12:58 am #

    Notes from discussion at ARL Library Assessment Coordinators Meeting
    January 11, 2008
    Submitted by: Agnes Tatarka, Assessment Director, University of Chicago

    Why we want data:
    1. Get evidence to dispel myths in which librarians are entrenched.
    2. It helps to have a point to prove as a way to get buy-in, but it’s not easy to remain data-neutral and accept the results. When we go into an assessment, we often go in with an agenda.
    3. Balancing data with our intuition about how we want the library to go forward. You need data to balance and inform decisions.
    4. If the data don’t support their agenda, people often attack the methodology. The problem may lie in establishing authority and convincing people that there is data integrity.
    5. Being cautious about interpreting data and how generalizable it is.
    6. Functional groups: Go to functional groups and ask what measures are important to them. What could be dropped? How to get the measures you need more efficiently?
    7. Data farms: Centralize data and organize it in ways that help to tell the story (e.g., circulations per volume held; see the sketch after this list). Are you counting the same thing — how are people going to compare? If you just set it up, how do you know how people are using the data if you don’t know who the audience is?
    8. Get support from campus administration, and explain why you need their support.
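
    As a hedged illustration of the “data farm” metric in item 7, here is a minimal Python sketch that computes circulations per volume held from a CSV; the file and column names are assumptions for illustration, not a shared ARL data format.

    ```python
    import csv

    def circ_per_volume(path):
        """Compute circulations per volume held for each library in a CSV.

        Assumes columns 'library', 'circulations', and 'volumes_held';
        these names and the file itself are illustrative placeholders.
        """
        ratios = {}
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                vols = int(row["volumes_held"])
                if vols:  # skip empty collections to avoid division by zero
                    ratios[row["library"]] = int(row["circulations"]) / vols
        return ratios

    # Example: 250,000 circulations against 2,000,000 volumes held
    # yields 0.125 circulations per volume.
    ```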

    Role of ARL
    1. Act as a clearinghouse for assessment forms and best practices.
    2. Help with educating staff on how to deal with data and why.
    3. Advocacy.
    4. Think more practically about how assessment can be done.
    5. Build a community of support.
    6. Promote ways that we can trust other people’s research.
    7. Can ARL help us understand what the problems are with Information Control in LibQUAL? Why are we all doing so poorly? [Jim Self mentioned a paper on faculty that he has written: http://www.lib.virginia.edu/mis/reports/PM7paper_Oct21.pdf]
    8. Can question IC-8 somehow be revised so that it is clearer about what it is measuring, or clarified by adding follow-up questions? Can ARL help identify the institutions that are doing better on this question and figure out what shapes their higher results? (A sketch of one way to surface those institutions follows.)
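
    As a rough sketch of how those higher-scoring institutions might be surfaced, the following ranks institutions by their IC-8 adequacy gap (perceived score minus minimum score). The CSV layout and column names are hypothetical stand-ins for however local LibQUAL results are actually stored.

    ```python
    import csv

    def top_ic8_institutions(path, n=10):
        """Rank institutions by IC-8 adequacy gap, highest first.

        The columns 'institution', 'ic8_perceived', and 'ic8_minimum'
        are assumed names, not an official LibQUAL export format.
        """
        rows = []
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                gap = float(row["ic8_perceived"]) - float(row["ic8_minimum"])
                rows.append((gap, row["institution"]))
        return sorted(rows, reverse=True)[:n]

    # "libqual_ic8.csv" is a hypothetical file name.
    for gap, name in top_ic8_institutions("libqual_ic8.csv"):
        print(f"{gap:+.2f}  {name}")
    ```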

    Role of campus assessment community
    Work with others on campus who are involved with student assessment, and learn how to communicate with university administration. It is best to work with courses, which requires understanding how to do assessment for them.

    Ask university administration: What are you trying to collect? Is there anything that we collect that will help? Can any of your data help us?

  3. martha kyrillidou January 29, 2008 at 4:17 pm #

    ARL Survey Coordinators’ Meeting
    ALA Mid-Winter, January 2008, Philadelphia

    Notes provided by Lynda White, University of Virginia

    How ARL can support the broader assessment community:

    • Provide a central website leading to library assessment websites and data
    • Provide a wiki, like LOEX, for assessment data from libraries
    • Provide training: basic stats (but cheaper), train librarians to train in assessment, how to do research, how to help staff understand data and its value
    • Encourage library schools to integrate assessment into curricula
    • Provide a place/website for others to show how they analyze their LibQUAL data

  4. Robert H. McDonald February 8, 2008 at 1:24 pm #

    Notes from ARL Assessment Coordinators Meeting Assessment Discussion – 1.11.08
    Houston Hall
    University of Pennsylvania
    Philadelphia, PA

    The following are discussion notes taken during the ARL Assessment Coordinators Meeting held adjacent to the ALA Midwinter Meeting in Philadelphia, PA on Jan 11, 2008. The discussion took place during the 9:00 am – 1:00 pm session on the campus of the University of Pennsylvania.

    In attendance during this discussion were:
    Nancy Turner – Syracuse University Libraries
    Raynna Bowlby – Library Management Consulting
    Daniel O’Mahony – Brown University Libraries
    Holly Eggleston – University of California, San Diego Libraries
    Pauline Bayne – University of Tennessee Libraries
    Melissa Laning – University of Louisville Libraries
    Robert McDonald (note taker) – San Diego Supercomputer Center/UCSD Libraries

    Our discussion started with several questions concerning the proposed discussion topics, including:

    Staff Assessment:

    How are staff assessed and assigned within each of our libraries? Are staff in the right places or not? Where do those staff need to be assigned? How do we go about assessing and re-assigning staff who are not in the correct place?

    Another key element in the staffing discussion was that libraries do not evaluate often enough what employees are really doing. Evaluations are sometimes still based on the skill sets that jobs required in the past, not on what employees are currently doing.

    Assessment for Academic Departments:

    Some at the table who had done direct focus group work with academic deans reported hearing that the library has nothing to offer their disciplines, either to faculty or to students.

    Some felt that the real issue here was currency. Without longitudinal data from faculty and researchers, librarians do not know how current researchers are using information resources, or library resources for that matter.

    Raynna and Dan mentioned that Brown had done an ethnographic study in the late 80s and that a couple of publications about library use came out of it.

    Some at the table mentioned that in all of their focus group studies, participants felt that the library was still all about books.

    Nancy mentioned that at Syracuse they had been trying an open-ended user interview process: users are interviewed about a recent research project or course assignment to find out what they did to accomplish it. The process is iterative and takes the user from beginning to end through the steps of the research or course assignment. The interviewers did not talk about the library specifically, just the steps involved, in order to remain neutral and to learn more about what users currently do to complete these types of assignments.

    Library Assessment:

    All at the table felt that library assessment should be tied to a story for good effect. However, many felt that if the story we are telling bucks national trends, it is hard to justify without good statistical numbers to show the details of the story.

    Discussion moved into the ARL MINES project. Many felt that more work was needed on this model.

    In light of the discussion on MINES, it was mentioned that there needs to be a way to segment ARL libraries in order to identify peer groups that are working on similar trends. This could lead to trend-type reporting that could be very effective in ARL assessment. One example was reference services: the reference desk is not being used, so how is the service being offered? This means identifying a trend of reference services beyond the desk and then segmenting ARLs into early-adopter, mid-level-adopter, and late-adopter categories.

    Another topic mentioned along these lines of identifying trend groups (there should be a statistical way of showing these trends in a scatter graph; a sketch follows below) was to create an ARL Survey Question DataBank. The bank could share questions with libraries for LibQUAL and other qualitative measures, but it would want to show some sort of effectiveness measurement over time so that questions could be retired or re-worded as necessary. This would lead to better construction of local LibQUAL questions.
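
    To make the scatter-graph idea concrete, here is a minimal matplotlib sketch that plots libraries by the year they adopted a practice against a current usage measure and colors them by adopter group. The data points and the early/mid/late cutoffs are arbitrary placeholders, not ARL figures.

    ```python
    import matplotlib.pyplot as plt

    # Hypothetical per-library data: (label, year adopted, usage index).
    libraries = [
        ("A", 2002, 78), ("B", 2004, 65), ("C", 2006, 52),
        ("D", 2003, 71), ("E", 2007, 40), ("F", 2005, 60),
    ]

    def adopter_group(year):
        """Bin a library into an adopter category; cutoffs are placeholders."""
        if year <= 2003:
            return "early"
        return "mid" if year <= 2005 else "late"

    colors = {"early": "tab:green", "mid": "tab:orange", "late": "tab:red"}
    for label, year, usage in libraries:
        group = adopter_group(year)
        plt.scatter(year, usage, color=colors[group], label=group)
        plt.annotate(label, (year, usage))

    # Collapse duplicate legend entries created by per-point labels.
    handles, labels = plt.gca().get_legend_handles_labels()
    unique = dict(zip(labels, handles))
    plt.legend(unique.values(), unique.keys())
    plt.xlabel("Year practice adopted")
    plt.ylabel("Current usage index")
    plt.title("Segmenting libraries into adopter groups")
    plt.show()
    ```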

    This led to a discussion of the Learning Commons concept — undergraduates love it, but faculty and graduate students do not, hence the need for a Research Commons to meet their needs. This brought up a discussion of how we analyze user comments within qualitative studies such as LibQUAL. How do we make use of individual comments from local LibQUAL questions?

    We then talked about how to present information gathered from assessment tools. Robert mentioned the book Super Crunchers, about ongoing assessment for day-to-day management in various industries. Nancy mentioned a book called Practical Research Methods for Librarians and Practitioners.

    A final discussion arose on how we can help people believe in the data that we collect from assessment, and how this can best be presented to the university community while showing the benefit of a shared library resource for all areas of a university.

  5. Martha Kyrillidou February 12, 2008 at 3:38 pm #

    Notes from ARL Assessment Coordinators Meeting Assessment Discussion – 1.11.08
    Houston Hall
    University of Pennsylvania
    Philadelphia, PA

    By Robert McDonald (University of California, San Diego)

    The following are discussion notes taken during the ARL Assessment Coordinators Meeting held adjacent to the ALA Midwinter Meeting in Philadelphia, PA on Jan 11, 2008. The discussion took place during the 9:00 am – 1:00 pm session on the campus of the University of Pennsylvania.

    In attendance during this discussion were:
    Nancy Turner (nbturner@syr.edu) – Syracuse University Libraries
    Raynna Bowlby (raynna.bowlby@charter.net) – Library Management Consulting
    Daniel O’Mahony (dpo@brown.edu) – Brown University Libraries
    Holly Eggleston (heggleston@ucsd.edu) – University of California, San Diego Libraries
    Pauline Bayne (pbayne@utk.edu) – University of Tennessee Libraries
    Melissa Laning (Melissa.laning@louisville.edu) – University of Louisville Libraries
    Robert McDonald (mcdonald@sdsc.edu) – San Diego Supercomputer Center/UCSD Libraries

    Our discussion started with several questions concerning the proposed discussion topics, including:

    Staff Assessment:

    How are staff assessed and assigned within each of our libraries? Are staff in the right places or not? Where do those staff need to be assigned? How do we go about assessing and re-assigning staff who are not in the correct place?

    Another key element in the staffing discussion was that libraries do not evaluate often enough what employees are really doing. Evaluations are sometimes still based on the skill sets that jobs required in the past, not on what employees are currently doing.

    Assessment for Academic Departments:

    Some at the table who had done direct focus group work with academic deans reported hearing that the library has nothing to offer their disciplines, either to faculty or to students.

    Some felt that the real issue here was currency. Without longitudinal data from faculty and researchers, librarians do not know how current researchers are using information resources, or library resources for that matter.

    Brown had done an ethnographic study in the late 80s about the transition from a card catalog to an online catalog, and a publication came out of this study.

    Some at the table mentioned that in all of their focus group studies, participants felt that the library was still all about books.

    Nancy mentioned that at Syracuse they had been trying an open-ended user interview process to find out how the library user proceeded on their most recent research project or course assignment. The process is iterative and takes the user from beginning to end through their information-seeking steps. The interviewers did not talk about the library specifically, just the steps involved.

    Library Assessment:

    All at the table felt that library assessment should be tied to a story for good effect. However, many felt that if the story we are telling bucks national trends, it is hard to justify without good statistical numbers to show the details of the story.

    Discussion moved into the ARL MINES project. Many felt that more work was needed on this model.

    In light of the discussion on MINES, it was mentioned that there needs to be a way to segment ARL libraries in order to identify peer groups that are working on similar trends. This could lead to trend-type reporting that could be very effective in ARL assessment. One example was reference services: the reference desk is not being used, so how is the service being offered? This means identifying a trend of reference services beyond the desk and then segmenting ARLs into early-adopter, mid-level-adopter, and late-adopter categories.

    Another topic mentioned along these lines of identifying trend groups (there should be a statistical way of showing these trends in a scatter graph) was to create an ARL Survey Question DataBank. The bank could share questions with libraries for LibQUAL and other qualitative measures, but it would want to show some sort of effectiveness measurement over time so that questions could be retired or re-worded as necessary. This would lead to better construction of local LibQUAL questions.

    This led to a discussion of the Learning Commons concept — undergraduates love it, but faculty and graduate students do not, hence the need for a Research Commons to meet their needs. This brought up a discussion of how we analyze user comments within qualitative studies such as LibQUAL. How do we make use of individual comments from local LibQUAL questions?

    We then talked about how to present information gathered from assessment tools. Robert mentioned the book Super Crunchers, about ongoing assessment for day-to-day management in various industries. Nancy mentioned a book called Practical Research Methods for Librarians and Practitioners.

    A final discussion arose on how we can help people believe in the data that we collect from assessment, and how this can best be presented to the university community while showing the benefit of a shared library resource for all areas of a university.
