Sunday, March 8, 2020

HUMS staff blog update, March 8, 2020

I hope this bit of humor helps as we all navigate some very turbulent times. 

ITEMS

1. March Meetings:
March 10 - Shared Staff Meeting (I do not yet know the agenda for this meeting; many moving parts make developing an agenda a bit more complicated. A draft agenda will be sent out no later than Tuesday morning. I will be asking to meet briefly with Social Studies and Science staff from both CBMS and HUMS to discuss curriculum alignment a bit more - that is what I know right now.)
March 17 - Exhibition Team for HS - Middle School Staff meeting (we need to select spring conference dates)
March 20 - Inservice - Building Based
March 24 - Team/Collaboration/Individual Time ⚠️
March 26 - LT/DH Meeting (Thurs) - updated
March 31 - Dept Mtg

2. Middle School Assembly - will be held on March 19th in the Harwood Auditorium during ELO 2

3. I will be meeting with all 7th grade students during ELO 2 (I believe) in the 7th grade team area. This will allow an open discussion about concerns/questions regarding the merger (or not).
4. This is an interesting article:
A Study of Unconscious Bias
            In this article in Educational Researcher, Yasemin Copur-Gencturk and Ian Thacker (University of Southern California/Los Angeles), Joseph Cimpian (New York University), and Sarah Theule Lubienski (Indiana University/Bloomington) describe an experiment they conducted on implicit bias. Teachers recruited for the study were told that the researchers were in the final stage of selecting items for an assessment that would capture the features of middle-school students’ knowledge and skills and accurately predict their mathematical growth. Teachers were asked to look at students’ handwritten solutions to the same math problems and were told that their anonymous feedback would help finalize the best items for the assessment. 
            To generate the students’ work, the researchers looked at a number of released NAEP extended-response math items and identified a set likely to prompt a range of responses. A group of middle-school students answered the questions, and the researchers selected the three items that elicited the widest variety of answers. They then chose two correct, two partially correct, and two incorrect student responses to each question, creating a set of 18 solutions. Here’s one example (with the student’s actual handwritten response in red):
Question #1: 
The growing number pattern below follows a rule.
3, 4, 6, 9, 13, …
(a)   Explain the rule.
Connor
[Connor’s handwritten work shows the subtraction 13 − 9 = 4, with the explanation:] “your adding 1 every time. 1 + 3 = 4 + 2 = 6 + 3 = 9 + 4 = 12”
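A quick aside from me (this gloss is mine, not part of the released NAEP item): the rule is that each step adds one more than the step before it, 3 + 1 = 4, 4 + 2 = 6, 6 + 3 = 9, 9 + 4 = 13, so the next term would be 13 + 5 = 18. Spelling out the rule makes it easier to see how fully correct, partially correct, and incorrect explanations would differ. A few lines of Python, just to check the pattern:

```python
# Generate the growing pattern: each new term adds one more than the previous step did.
def pattern(first_term=3, n_terms=6):
    terms, step = [first_term], 1
    while len(terms) < n_terms:
        terms.append(terms[-1] + step)
        step += 1
    return terms

print(pattern())  # [3, 4, 6, 9, 13, 18]
```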
When selecting the final items, the researchers made sure that the handwriting and language used did not align with potential gender- or ethnicity-related stereotypes. 
The researchers then randomly assigned a student name to each of the solutions, inserting the name in handwritten form by each item, choosing names associated with black, white, and Hispanic girls and boys:
            Girls: Lakisha, Shanice, Tanisha (black); Emily, Katie, Molly (white); Bianca, Esmeralda, Rosalie (Hispanic)
            Boys: Tyrone, DeShawn, Trevon (black); Connor, Ethan, Todd (white); Alejandro, Diego, José (Hispanic)
Students’ names were evenly distributed among correct, partially correct, and incorrect responses. 
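To make that balancing step concrete, here is a minimal sketch of the kind of assignment described: one name from each name group attached to each correctness level. This is my own illustration, not the researchers’ actual procedure or code; the group labels and variable names are mine.

```python
import random

# Names used in the study, grouped as summarized above
# (group labels are shorthand for the name associations described).
name_groups = {
    "black girl":    ["Lakisha", "Shanice", "Tanisha"],
    "white girl":    ["Emily", "Katie", "Molly"],
    "Hispanic girl": ["Bianca", "Esmeralda", "Rosalie"],
    "black boy":     ["Tyrone", "DeShawn", "Trevon"],
    "white boy":     ["Connor", "Ethan", "Todd"],
    "Hispanic boy":  ["Alejandro", "Diego", "José"],
}

qualities = ["correct", "partially correct", "incorrect"]

# 3 items x 2 responses at each quality level = 6 slots per quality, 18 total.
slots = {q: [(item, copy) for item in (1, 2, 3) for copy in (1, 2)]
         for q in qualities}
for q in qualities:
    random.shuffle(slots[q])

assignment = []
for group, names in name_groups.items():
    # Randomly order the group's three names, then send one to each
    # quality level, so every group appears exactly once per level.
    for name, q in zip(random.sample(names, k=3), qualities):
        item, copy = slots[q].pop()
        assignment.append((item, q, copy, name, group))

for item, q, copy, name, group in sorted(assignment):
    print(f"Item {item}, {q} response {copy}: {name} ({group})")
```

The balancing is the point of the design: because every name group shows up equally often at every correctness level, any differences in how teachers rate “ability” cannot be explained by differences in the quality of the work attached to the names.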
Participating teachers were asked to rate each student’s answer based on correctness (on a 10-point scale from “absolutely nothing correct” to “fully mathematically sound”). Teachers were then asked to assess each student’s mathematical ability. 
            The researchers looked at teachers’ responses with two questions in mind: Did teachers’ evaluations of the correctness of students’ solutions and of students’ mathematical ability vary by gender and race? And did teachers’ background characteristics predict biases? Here are the findings:
-   Teachers’ ratings of the correctness of responses did not vary by students’ gender or race. This held for correct, partially correct, and incorrect responses. 
-   Teachers’ gender, certification status, educational level, and teaching experience did not significantly predict their assessment of students’ mathematical ability.
Teachers’ ratings of students’ mathematical ability were most revealing on partially correct answers, showing varying degrees of implicit bias: 
-   Overall, students with white-sounding names (both male and female) were rated significantly higher than those with black- and Hispanic-sounding names.
-   In pairwise comparisons, the lowest-rated group was always non-white females.
-   Non-white teachers assigned higher ability ratings to students with white-sounding names, especially compared to the ratings they gave to girls with black- and Hispanic-sounding names. 
-   White teachers assigned higher ability ratings to students with male-sounding names than to those with female-sounding names.
What explains these rather counterintuitive findings about the biases of white and non-white teachers? Perhaps, the researchers speculate, white teachers “may be more averse to appearing racist and may devote more attention to hiding their biases. White teachers may be especially cautious about hiding their biases in experimental settings. Similarly, white teachers may be more concerned about maintaining their self-image as a non racist person and may thus give higher ratings to students of color to protect their own image of themselves. In contrast, teachers from stigmatized groups may assume that they do not have biases; thus, they may be less cautious, which could have led us to capture only their biases in this study.” 
And the surprising finding that non-white teachers appeared to be biased against students with black- and Hispanic-sounding names could stem from those teachers internalizing the biases they have encountered throughout their lives.
These findings of implicit bias are important, say the researchers, because “students’ perceptions of their academic ability are developed based on messages they receive from their social environment, especially those of their teachers and parents. These messages potentially contribute to their self-efficacy, self-competence, and decision to select a STEM career.”

“Teachers’ Bias Against the Mathematical Ability of Female, Black, and Hispanic Students” by Yasemin Copur-Gencturk, Joseph Cimpian, Sarah Theule Lubienski, and Ian Thacker in Educational Researcher, January/February 2020 (Vol. 49, #1, pp. 30-43), https://bit.ly/2T9Tfx7; the authors can be reached at copurgen@usc.edu, joseph.cimpian@nyu.edu, stlubien@iu.edu, and ithacker@usc.edu
