By: Nicole Ramos, M.Ed. (NERDi Consulting) and Stacie R. Furia, Ph.D. (NERDi Consulting)
BUILD is a non-profit organization that uses entrepreneurship and experiential education to propel students through high school to college and career success. BUILD believes that its outstanding success in helping low-income youth of color enroll and persist in college is due to mentorship, entrepreneurship, and the development of Spark Skills.* (A set of 6 skills educators might find familiar – Communication, Collaboration, Problem Solving, Innovation, Grit and Self-Management.)
BUILD set out to create an assessment to help prove and improve impact on students’ Spark Skills growth. Like many of BUILD’s peers, the first attempt at an assessment included a pre-test/post-test, self-perception questionnaire with items largely adapted from existing validated instruments. The result… a flop. After two years of tweaking the tool and implementation, we learned what Angela Duckworth and David Yeager were also learning at the same time: along with implementation difficulties, perception questionnaires have serious limitations.
Tapping into our own entrepreneurial spirit, we learned a lot from this “failure” and used those lessons to help craft a new approach. Here are three lessons learned and how they helped us create an assessment that won 4th place in the Assessment Work Group’s 2018 Design Challenge.
Lesson 1: Assessment methodology should match pedagogy
BUILD’s curriculum and pedagogy are rooted in experiential learning. It is no wonder that sitting down for 30+ minutes and answering a bevy of Likert-scale questions made the experience odd for students! In our new assessment design, we sought to align the assessment methodology with the curriculum.
Lesson 2: Reliability and validity are only as good as meaningful participation
We spent a lot of effort in our first version to ensure we had strong construct validity, face validity, (etc. etc…) and were obsessive about reliability as well. We failed to pay equal attention to getting buy-in from teachers and students. The lack of student engagement with the assessment eliminated any chance of real reliability. In our next attempt, we sought input from practitioners and spent more time on in-person training.
Lesson 3: Don’t put all of the eggs in one basket
It was very painful to have spent two years of excellent, hard work on an assessment that ultimately did not meet our standards. Moving forward, we designed and piloted several assessments, each on a much smaller scale. This increased our chances of finding the tools that best suited BUILD’s needs – or rather, demonstrated the importance of having a portfolio of assessments that gives a holistic picture of students’ Spark Skills attainment.
When one works with an entrepreneurial program, where failure = opportunity, there is only one way to deal with disappointment: learn from it. We hope this mantra inspires the field, and this working group specifically, to continue to look critically at assessment work and try, try again. Collectively, as we share what we learn – not only about what is working, but about what is NOT working – we can advance much more quickly to develop effective assessment and measurement tools.
Have you had any assessments that didn’t work out as planned, or didn’t produce useful data? What did or could you learn from these missteps? What are/were some lessons you can take away as you move forward in your assessment journey?
Disclaimer: The Assessment Work Group is committed to enabling a rich dialogue on key issues in the field and seeking out diverse perspectives. The views and opinions expressed in this blog are those of the authors and do not necessarily reflect the official policy or position of the Assessment Work Group, CASEL or any of the organizations involved with the work group.