A Group Activity Flop


Lately, my group activities have fallen flat. My district uses the Unique Learning System, and we were doing a lesson straight from the curriculum on the SmartBoard. In a nutshell, the lesson required the students to check boxes indicating whether certain items did or did not use electricity. There were three pages containing items from home, from school, and from the kitchen.

I laid out the ways to tell if something uses electricity in multiple modalities. I had actual objects on hand when possible. I talked, I sang, I basically tap-danced in front of the room to keep everyone’s attention.

The activity was a disaster. Everyone, students and assistants, tried to race through the lesson. People were talking over each other. Students did not engage with the actual lesson or material—they pointed to answers until an assistant or I gave them an ‘ok.’

I took a step back and just watched. Some students were sitting and stimming. Others were loudly talking or whining. Still others were pointing at answers and trying to read staff members’ faces until they thought they landed on a correct answer.

It wasn’t a group lesson at all. It had morphed into a carousel of staff members working 1:1 with students in a poorly designed lesson, with too much wait time and not enough student engagement.

I typically do three whole-class activities per week, and I had the formula down pat last year. This year, the population has shifted—the attention span and ability level of my students have definitely increased. Additionally, fewer of my students have a formal diagnosis of autism.

As a result, I have rightly moved away from super-highly structured group lessons (technical term), but I have over-corrected. I need to re-establish some middle ground so that all students can engage meaningfully with the material, even if they are working on different skills.

I also need to make my expectations for each student clear to the other staff in the classroom so they know what to focus on. Part of the reason the lessons have been failing is that the assistants and I are falling over each other trying to ‘fix’ them. If the design of a lesson isn’t any good, no amount of staff can right the ship mid-lesson!


TouchChat Easter Egg


I have students who use Augmentative and Alternative Communication (AAC). Several of them have ‘graduated’ to using a dynamic display device with a core vocabulary and are doing very well! We have been working on a variety of communicative skills including requesting, commenting, questioning, etc.

One student is using TouchChat with WordPower on an iPad. Today, we were commenting about what we saw in different pictures. One of the pictures was of a man painting a doorway. My student wanted to say that he saw a paintbrush, so we navigated to the “Tools” page.

Imagine my delight when we spotted this little Easter Egg:

Keep looking...

Do you see it? Yep, that’s a glass of orange juice on the rocks with the label ‘screwdriver.’

Someone at TouchChat headquarters has a sense of humor. 🙂

Finally Finished Administering the Dynamic Learning Maps!


I finally finished administering the Dynamic Learning Maps™ (DLM) test! Well, I found out that I was finished. I actually finished last Thursday. It just wasn’t clear that I was done because the interface doesn’t tell you; you simply stop receiving tests in your KITE inbox. This made the end of the testing process rather anticlimactic.

The DLM project is pioneered by researchers from the University of Kansas. According to the form letter they wanted us to send home (and have signed by our building principal), the DLM is an “exciting process” with assessments that are “revolutionary and important to the future of educating students with cognitive disabilities.”

According to their website (www.dynamiclearningmaps.org), the DLM project is a research-based “innovative way for all students with significant cognitive disabilities to demonstrate their learning.” Interestingly, exploring the website revealed only three ‘research papers’: one was basically a 9-page definition of text complexity, the second was an attempt to simplify their learning nodes,

You know...Simplify.

and the third was an evaluation of a pilot conducted by the good people at the DLM from which they concluded that everything was awesome.

It seems like the DLM researchers glossed over the teacher evaluation portion of their pilot evaluation. Most teachers (68%!) rated the assessment system a C, D, or F. Apparently, teachers only know how to rank things by grades. This poor response from teachers would indicate to me that another pilot is necessary before implementation.

The researchers used other comments, such as teachers asking for more symbol-supported text, as an opportunity to suggest that teachers simply needed to be educated on what the DLM is all about, rather than seriously considering that symbol supports might help students perform on the assessments. In other words, teacher complaints were due to their own ignorance of testing strategies rather than to the deficiency of test items.

My own experience with the tests was similar to that of the teachers who participated in the pilot. I know that different students received different testlets, so my own reflections will be short and sweet! I administered the tests to a young man with classic autism who uses an AAC device for expressive communication.

Inauthentic—My student understands the difference between pipe cleaners and band-aids. He knows that they are different, could sort them easily, and could use the correct item functionally when necessary (when was the last time you used a pipe cleaner functionally?). He was awfully confused when I asked him to hand me one or the other because of his difficulty with auditory processing. The system likely logged that he does not know those items or the concepts of same/different, when this is not a fair or accurate conclusion about his abilities.

Frustrating—My student learns best with errorless learning. He gets frustrated when he does not know the answer and can sometimes demonstrate that frustration with aggression. Thanks, DLM! Luckily, we have been working on using calming breaks and I had a huge bag of Froot Loops on hand. Unluckily, he has learned, correctly, to ask for help when he doesn’t understand something. It broke my heart that I couldn’t help him. Instead, I praised the heck out of him whenever he gave me any sort of answer–correct, incorrect, building a tower out of the band-aids and pipe cleaners, didn’t matter!

Poorly Designed—When will the companies that make tests for students with significant disabilities, especially autism, learn that you cannot list answer choices from top to bottom, or left to right, and expect students to thoughtfully choose what they think is the correct answer? These students will usually choose the last answer when choices are presented linearly! The answer choices need to be presented in a circle of sorts. To make matters worse, the DLM did not even off-set the answer choices, and the questions were often embedded within a text. This made it extremely confusing—the student could not even guess at what the answer might be because he couldn’t visually discern which items on the screen were supposed to be the choices.

Just off-setting the questions from the rest of the text like this would have been an improvement to the on-screen presentation.

Good intentions on accessibility, but still falls short—The testing interface seemed to offer a variety of accessibility options, since all students are expected to interact directly with the computer. I administered the test on an iPad and didn’t have trouble with accessibility. A coworker, however, had a student who needed enlarged text; as a result, only a very small part of the screen could be viewed at a time. The teacher eventually gave up, unchecked the visual-impairment box on the student profile, and presented the test items on the SmartBoard to allow a larger view of the whole screen.

To conclude, I found that the DLM fell short of its own description as “exciting” and an “innovative way for all students with significant cognitive disabilities to demonstrate their learning.” In the DLM’s defense, however, I don’t have a better alternative for a standardized assessment of our students with the most significant disabilities. The Illinois Alternate Assessment (IAA) fell short in its own way as well. I only hope that this is another stop on the way to a minimally intrusive test that will highlight a student’s abilities in a meaningful way.