June 20, 2017
by Yuerong Sweetland
I recently attended two assessment-related conferences: the AALHE (Association for the Assessment of Learning in Higher Education) 7th Annual Conference and the SAARC (Student Affairs Assessment and Research Conference) at The Ohio State University, where I served as a panelist. The two conferences were quite different, with the second leaning more towards assessment and research in student affairs and co-curricular areas, and the first taking a more comprehensive focus on assessment, learning, and teaching. In spite of the many differences, I felt that a common challenge was being addressed at both, either explicitly or implicitly: how to make assessment meaningful and rewarding.
Whether we like to admit it or not, assessment frequently begins as a compliance exercise at some, if not many, higher education institutions. As such, it can become mundane, isolated, and less and less impactful over time. In the ever-changing and evolving environment of higher education, assessment needs to stay meaningful, relevant, and engaging. Based on presentations I heard, conversations with fellow assessment professionals, and my own experiences, here are a few thoughts:
- To stay relevant, assessment must be an agent for change. In the earlier years of my assessment career, I spent a great deal of time and energy on the technical aspects of the assessment process: creating measurable outcomes, identifying actionable survey items, conducting training and norming sessions to improve interrater reliability – the list goes on. Meanwhile, we also had to stay constantly mindful of requirements from accrediting agencies. As a result, we seemed to have created a pretty good system of outcome assessment, in which data were collected and analyzed, and assessment reports were compiled and filed. However, we soon realized that assessment sometimes did not lead anywhere, and that there was sometimes a disconnect between assessment findings and the changes that actually occurred. Consequently, we included specific action items in assessment reports, followed by another item for evaluation and reflection: has student learning changed as a result of the implemented changes? All of these adjustments seem to have helped tremendously. In addition, other strategies can further engage faculty in assessment. For example, Flateby and Gatch (2017) from Georgia Southern University emphasized recognition and reward for excellence in assessment work in their AALHE presentation. Even though some faculty have the internal motivation and intellectual curiosity to assess and improve student learning, it is still important to recognize their devotion and commitment, and to encourage and reward assessment as a form of scholarship. Such recognition and rewards will help sustain and strengthen a culture of assessment and learning.
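As an aside for readers who run the kind of norming sessions mentioned above: interrater agreement is commonly summarized with a statistic such as Cohen's kappa. Here is a minimal, self-contained sketch in Python; the rubric scores are invented for illustration and are not from any Franklin University data.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreements
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement expected from each rater's score distribution
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[c] * cb[c] for c in set(ca) | set(cb)) / (n * n)
    return (po - pe) / (1 - pe)

# Two raters scoring the same ten essays on a 3-point rubric (hypothetical)
a = [3, 2, 3, 1, 2, 2, 3, 1, 2, 3]
b = [3, 2, 2, 1, 2, 3, 3, 1, 2, 3]
# Hand check: po = 8/10 = 0.8, pe = 0.36, kappa = 0.44 / 0.64 = 0.6875
print(f"kappa = {cohens_kappa(a, b):.3f}")
```

Values of kappa near 1 indicate strong agreement beyond chance; norming sessions aim to move this number upward across scoring rounds.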
- We all know that to build a strong assessment culture, assessment folks have to work closely with faculty members. Those of us who are also faculty at our institutions may have a particular advantage: we can better appreciate and address faculty concerns in assessment work. We also have to work closely with instructional designers and centers for teaching and learning, both of which are important partners for driving changes in curriculum and instructional practices across individual courses, no matter where they sit in the organizational structure. Our partners can also include student learning centers, libraries, and other places on campus where co-curricular programs and experiences occur and that support student learning and success. Last but not least, institutional research and institutional effectiveness offices frequently have the data query capacity that allows detailed analysis of learning (e.g., transfer students vs. non-transfer students, Pell recipients vs. non-Pell recipients). Some campuses have also started using learning analytics data to further understand learning and identify improvement opportunities. To work well with this variety of individuals and groups, assessment professionals have to be flexible yet resilient.
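To make the disaggregation idea above concrete, here is a small sketch of a subgroup comparison using only the Python standard library. The subgroups and rubric scores are hypothetical, invented purely for illustration.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: (student subgroup, rubric score on a 4-point outcome)
records = [
    ("transfer", 3.2), ("transfer", 2.8), ("transfer", 3.0),
    ("non-transfer", 3.5), ("non-transfer", 3.1),
    ("non-transfer", 3.4), ("non-transfer", 3.0),
]

# Group scores by subgroup
by_group = defaultdict(list)
for group, score in records:
    by_group[group].append(score)

# Report count and mean for each subgroup
for group, scores in sorted(by_group.items()):
    print(f"{group}: n={len(scores)}, mean={mean(scores):.2f}")
# non-transfer: n=4, mean=3.25
# transfer: n=3, mean=3.00
```

In practice an institutional research office would pull these records from a student information system and would also check sample sizes and statistical significance before acting on any gap.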
- As assessment professionals, we are all aware of the importance of the validity of assessment instruments. However, validity can be contextual (Skinner, 2013). Even though Skinner used contextual validity to refer to implementation considerations for “validated” intervention strategies in different contexts, I argue that contextual validity should be a major concern in our selection of assessment instruments, whether a test or a survey. Two years ago, in revising the course survey instrument at Franklin University (one of the most heavily used indirect assessment instruments at the University), we referred to the well-known SRI (Student Ratings of Instruction) tools from IDEA (IDEA, 2017). Given Franklin’s centralized academic model and curricular design framework, we had to tweak the original SRI items to make them valid for Franklin’s context. Data from the revised survey instrument have been used extensively across the University to inform changes in course design and teaching improvement. Clearly, if we had adopted the SRI instrument as is, we might have been able to benchmark ourselves against other institutions; however, we might not have been able to translate the benchmarking findings into actionable items for our local context.
- In order for assessment to serve as an agent for change, the assessment instrument must have contextual validity. Campus stakeholders need to review an instrument together to determine whether it has a sufficient level of contextual validity.
What are your thoughts on making assessment meaningful and rewarding? I welcome your comments and opinions.
Flateby, T., & Gatch, D. B. (2017). Enhancing the value of assessment: Developing and fostering affective outcomes. Presentation at the AALHE Conference 2017, Louisville, KY.
Skinner, C. H. (2013). Contextual validity: Knowing what works is necessary, but not sufficient. The School Psychologist, Winter 2013, 14–21.
IDEA. (2017). Student Ratings of Instruction. Retrieved from http://www.ideaedu.org/Services/Student-Ratings-of-Instruction-Tools
May 17, 2017
by Barbara Fennema, Ed. D.
Magic mirror on the orange wall. Tyler, 2008 (Flickr). Used under Creative Commons Attribution 2.0 Generic.
“Magic mirror on the wall – who is the fairest of them all?” spoken by the Evil Queen in Snow White and the Seven Dwarfs (1937).
Looking in a mirror (and most of us don’t have a magic mirror!) provides us only with a surface reflection of how we look – not who we are or what we’ve learned. In this blog post, we’ll look at what critical reflection is and its importance in a learning experience. Continue Reading →
May 10, 2017
by Matt Barclay
It is springtime and that means lawns are growing. What do you do to help your lawn along in the spring? Many people just start mowing. They also rely on spring rains for water. A lawn might look okay for a while with just this treatment. However, a minimalist approach does not usually result in a green, healthy lawn for the whole season. Continue Reading →
May 2, 2017
by Younghee Jessie Kong
Most students who enter colleges need basic math skills to succeed in college-level mathematics. Therefore, most colleges provide “a sequence of developmental mathematics courses that starts with basic arithmetic, then goes on to pre-algebra, elementary algebra, and finally intermediate algebra, all of which must be passed before a student can enroll in a transfer-level college mathematics course” (Stigler, Givvin, & Thompson, 2013, p. 1). Continue Reading →
April 26, 2017
by Yi Yang, Ph.D.
A successful instructional designer not only needs to excel in design and development, but also needs to be a leader, a change agent, and a strategist. Continue Reading →
April 20, 2017
by Roberta Niche
Begging the Question: Strategies to Increase Student Performance
If you’re an instructional designer or an instructor, you undoubtedly know a lot about questions. You know that simple yes-no questions are often a dead end and that open-ended questions generally make for more interesting discussions. You know that students typically aren’t given enough think time; teachers’ average wait time is less than one second before they pick someone to answer or answer the question themselves.
But have you considered who owns the questions in your courses? If the answer is only you, your students are missing out on an important opportunity to deepen their learning and reflect on their understanding. Here’s my question for you: What are you doing to create an environment where students create, organize, refine, and answer their own questions?
Let’s begin by examining the importance of questions and what they mean for student understanding and success. Continue Reading →
April 11, 2017
by Patrick A. Bennett
The Textbook Problem
The affordability of and access to lower-cost course resources such as textbooks and supplementary materials in higher education is a growing concern in the United States. Currently, textbook adoption is left mostly unregulated at the federal, state, and university levels (Hill, 2015). In traditional settings, the university ultimately selects the required textbooks, and students are required to purchase them. In economic theory, this is referred to as the Principal-Agent Problem. Essentially, the agent (the university) selects the text that is best for the learning experience but creates a financial commitment for the principal (the student) that may cause hardship, since price and availability may not be part of the consideration set when the university does not itself purchase the books (Investopedia, 2016a). In this case, the student can become a vulnerable stakeholder. There is another economic theory at work as well: Information Asymmetry. Here, one party in the transaction (the publisher, and perhaps the university) possesses more substantive knowledge about the transaction than the other party, the student and ultimate consumer (Investopedia, 2016b). Not only are students potentially vulnerable in this process, but the cost of textbooks has also been increasing at an alarming rate. Continue Reading →
April 4, 2017
by David Ni
The primary reason instructors are interested in integrating real-world tasks into the classroom stems from a belief that learning that emulates real life is more likely to promote student motivation, engagement, transfer of learning, and professional development. Students who learn decontextualized knowledge are likely to be able to answer items on a test, but often struggle to apply what they have learned when attempting to solve real problems. In this post, I would like to discuss the following three questions related to real-world (authentic) tasks:
- What is an authentic task?
- Why should we use authentic tasks?
- How can we create authentic tasks?
What is an Authentic Task?
Simply speaking, an authentic task is one that requires students to apply the knowledge and skills they have learned to solve a problem in a real context. Let’s use the example of teaching instructional design. If an instructor asks a student, “What is instructional design?” or “What does the ADDIE model consist of?” these questions could be called textbook tasks or exercises. If the instructor asks, “How can you use instructional design to help ABC Company solve its performance problem?” or “Students in the XYZ class have low motivation in math; could you help the math teacher enhance their motivation?” then these questions are authentic tasks. In short, an authentic task is a real-world task in a learning context that has the potential to engage students in action and reflection. Continue Reading →
March 29, 2017
by Yuerong Sweetland
One of the challenges with assessment is answering the “so-what” question. After the initial nationwide calls for assessment more than three decades ago, most institutions are conducting assessment. However, when it comes to using assessment data, there are varying levels of success at higher education institutions, even though accrediting bodies are placing more and more emphasis on closing the assessment loop by using evidence of student learning to inform changes in curriculum and instruction (as well as co-curriculum).
What might be the problem? Having been an “assessment person” for more than a decade at two different institutions, I feel that one of the biggest obstacles is the separation of assessment from the rest of the “world.” When this happens, assessment becomes the exclusive arena of a few folks whose titles include “assessment”; assessment is reduced to the simple (annual, in some cases) act of collecting and aggregating data and then writing reports about them. These reports frequently end up in a bureaucratic “black hole,” yielding little to no impact on teaching and learning. Under these conditions, it is no wonder that assessment gets a bad reputation as meaningless busywork — it exists solely for accreditors. Continue Reading →
March 22, 2017
by Dr. Rob L. Wood
“If you can learn to tolerate change, ambiguity, and uncertainty, you will be successful in this field.” That was the answer my supervisor gave me when I asked him how I could be a really good instructional designer. It was 1989, and I had been working at it for a whole year. I really wanted to know! I must have looked crestfallen, because he added, “Don’t worry. You’ll get it later.” Great. Thanks, Boss.
Almost 28 years later, I learned to appreciate the depth and wisdom of those words. Change, ambiguity, and uncertainty have been the hallmarks of nearly every instructional design project on which I have worked. Along that road, I learned much, but one thing stands out: Instructional design is not as much about theories and models as it is about how we become experts. My thought has its roots in what I refer to as “messy instructional design.”
So what does it mean to be an expert instructional designer, if not to simply adhere to the theories and models that represent the field? The answer to that question, indeed an explanation of why it is a significant question in the first place, resides in a brief consideration of “horizontal expertise” (or “boundary crossing”). Continue Reading →