Expectations versus Reality!
Yes, it sounds a bit like an Instagram post, yet the struggle is real! Please note that the first photo is not my score – although I wish it was! 🙂
My experience with the Tripod student feedback has been a bit of a roller-coaster in terms of emotions versus expectations. Generally, I was quite surprised by the areas the student feedback highlighted as needing improvement, by the component scores, and by the item response details. I was also intrigued by the fact that my scores were slightly lower in the spring 2019 survey results than in those I got in the fall of 2018. Although it was the same group of students, and I thought I had addressed the main areas of concern, looking at the drop, I could not tell that I had actually addressed anything in particular.
Here is what I meant by roller-coaster… Although we are not supposed to be alarmed by this scoring flower, I could not help but actually care about it, as I certainly did not expect it. I realize now it was quite presumptuous; nevertheless, it still does not make me feel like a bad teacher – not anymore 🙂
And as if last year’s drop was not enough of a lesson, this year I decided to select the class I struggle with the most (yes, another great idea). I thought I knew them, and I certainly feel I have made so much progress with them compared to last year… Well, that was the wrong decision: as it turns out, they do not seem to feel the same way.
My lowest scores belonged to these three areas:
The challenge component, however, had more “low” scores than the other two components, so it seems reasonable to select it as an area for growth and improvement.
I have to admit I find the results – or the interpretation of the results – very confusing. I am looking at them right now and they do not really make sense. For instance, the first criterion, “My teacher wants us to use our thinking skills, not just to memorize things” (apart from the fact that this question is not really appropriate for language acquisition – yes, students do need to memorize a lot of words), shows 78% of students in favor of the statement, yet it is scored low in the interpretation of results. In contrast, the criterion “My teacher makes us explain our answers – why we think what we think” shows 73% and is scored medium. It does make me wonder whether the interpretation is accurate.
Nevertheless, I did receive a shock. I ran through many different scenarios in my head: I could swap tea for strong alcohol and drink until I passed out and forgot about these surveys, change my career, eat my whole weight in chocolate, or just start a conversation with my class (and yes, of course I did that, although option one was really tempting).
When I mentioned the results of the survey, they seemed as uncomfortable as me talking about it. It seems they didn’t expect I would be upset, the same way I didn’t expect the survey would upset me. I told the students I needed specific goals – detailed ideas about each of the sections that needed improvement. Although they remained fairly quiet at the beginning of the class, they started to talk and bounce ideas and suggestions around. It was actually a lot healthier than what I had imagined. It did seem, though, that on many occasions they were not 100% certain about the meaning of a question or how it applied to language learning, which would explain so many neutral answers.
Perhaps I now have to think of ways to encourage deeper thinking and learning within class and to link content to wider topics, which I sometimes find difficult to do. Perhaps that would make the content more relevant and create opportunities for students to explain why they think what they think.