I finally finished administering the Dynamic Learning Maps™ (DLM) test! Well, I found out that I was finished. I actually finished on Thursday last week. It just wasn’t clear that I was done because the interface doesn’t tell you. You just stop receiving tests in your KITE inbox. This made the end of the testing process rather anticlimactic.
The DLM project is pioneered by researchers from the University of Kansas. According to the form letter they wanted us to send home (and have signed by our building principal), the DLM is an “exciting process” with assessments that are “revolutionary and important to the future of educating students with cognitive disabilities.”
According to their website (www.dynamiclearningmaps.org), the DLM project is a research-based “innovative way for all students with significant cognitive disabilities to demonstrate their learning.” Interestingly, exploring the website revealed only three ‘research papers’: one was basically a 9-page definition of text complexity, the second was an attempt to simplify their learning nodes, and the third was an evaluation of a pilot conducted by the good people at the DLM, from which they concluded that everything was awesome.
It seems like the DLM researchers glossed over the teacher evaluation portion of their pilot evaluation. Most teachers (68%!) rated the assessment system a C, D, or F. Apparently, teachers only know how to rank things by grades. This poor response from teachers would indicate to me that another pilot is necessary before implementation.
The researchers used other comments, such as teachers asking for more symbol-supported text, as an opportunity to suggest that teachers simply need to be educated on what the DLM was all about rather than seriously considering that symbol supports might help students to perform on the assessments. In other words, teacher complaints were due to their own ignorance of testing strategies rather than due to the deficiency of test items.
My own experience with the tests was similar to that of the teachers who participated in the pilot. I know that different students received different testlets, so my own reflections will be short and sweet! I administered the tests to a young man with classic autism who uses an AAC device for expressive communication.
Inauthentic—My student understands the difference between pipe cleaners and band-aids. He knows that they are different, can sort them easily, and can use the correct item functionally when necessary (when was the last time you used a pipe cleaner functionally?). He was awfully confused when I asked him to hand me one or the other due to his difficulty with auditory processing. The system likely logged that he does not know those items or the concepts of same/different, which is not a fair or accurate reflection of his abilities.
Frustrating—My student learns best with errorless learning. He gets frustrated when he does not know the answer and can sometimes demonstrate that frustration with aggression. Thanks, DLM! Luckily, we have been working on using calming breaks and I had a huge bag of Froot Loops on hand. Unluckily, he has learned, correctly, to ask for help when he doesn’t understand something. It broke my heart that I couldn’t help him. Instead, I praised the heck out of him whenever he gave me any sort of answer: correct, incorrect, building a tower out of the band-aids and pipe cleaners, didn’t matter!
Poorly Designed—When will these companies that make tests for students with significant disabilities, especially autism, learn that you cannot list answer items from top to bottom, or left to right, and expect students to thoughtfully choose what they think is the correct answer? These students will usually choose the last answer when the choices are presented linearly! The answer choices need to be presented in a circle of sorts. To make matters worse, the DLM did not even offset the answer choices, and the questions were often embedded within a text. This made it extremely confusing—the student could not even guess at what the answer might be because he couldn’t visually discern which items on the screen were supposed to be the choices.
Just offsetting the questions from the rest of the text like this would have been an improvement to the on-screen presentation.
Good intentions with accessibility options, still falls short—The testing interface seemed to offer a variety of accessibility options, but all students are expected to interact directly with the computer. I administered the test on an iPad and didn’t have trouble with accessibility. A coworker, however, had a student who needed enlarged text; as a result, only a very small part of the screen could be viewed at one time. The teacher eventually gave up, unchecked the visual impairment box on the student profile, and then presented the test items on the SmartBoard to allow for a larger view of the whole screen.
To conclude, I found that the DLM fell short of its own description of being “exciting” or an “innovative way for all students with significant cognitive disabilities to demonstrate their learning.” In the DLM’s defense, however, I don’t have a better alternative for a standardized assessment for our students with the most significant disabilities. The Illinois Alternate Assessment (IAA) fell short in its own way as well. I only hope that this is another stop on the way to the design of a minimally intrusive test that will highlight a student’s abilities in a meaningful way.