
Misconception:
“The more recent UC Berkeley / University of Michigan / Fryberg poll results are different”

Summary review:

The Berkeley study under-sampled cisgender/straight men, over-sampled women and LGBTQ respondents, under-sampled the reservations, and over-sampled those living off-reservation. Berkeley did not statistically weight its sample to Census Bureau benchmarks. Berkeley also influenced the results by recruiting all respondents instead of drawing a random sample, then asking them leading questions.

 

If Berkeley would like to commission an independent study and duplicate the Annenberg and Washington Post methodology, we would be happy to co-endorse the results.

 

The truth is 90% of Natives love their culture, names, and imagery, and the opposition doesn’t want to hear our voices. The Native American Guardian’s Association represents this majority and continues to Educate, not Eradicate.


We can review the point-by-point analysis of why the polling is flawed:

 

####

 

This was a biased poll created by Berkeley that misrepresented the Native American community.

  • Berkeley recruited respondents rather than drawing a random sample. This is not in accordance with accepted polling practice.

  • They did this to reach a target number of respondents, and we’ll show you where they admit to it.

  • Berkeley recruited respondents who were not representative of Census data.

  • They under-sampled on-reservation respondents and straight males.

  • They over-sampled off-reservation, female, and LGBTQ respondents.

  • Then Berkeley asked this recruited audience leading questions.

 

Berkeley invalidated its results by recruiting survey respondents rather than random sampling:

  • The Berkeley study recruited 1,021 Native Americans to participate in a survey.

  • The Annenberg and Washington Post polls asked people to self-identify their race, and only those self-identifying as Native American were asked the questions.

  • Berkeley’s recruiting methodology:

    • “Participants were recruited through Qualtrics Panels to participate in a short study regarding their attitudes and experiences with contemporary issues. We aimed to recruit a sample of 1,000 Native American participants, which is twice the size of previous polls and would achieve sufficient variation in Native American demographics and identities to test our hypotheses. The final sample included 1,021 Native American participants.”

    • Berkeley tried to prove its point by recruiting only “sufficient variation in Native American demographics and identities” to test its hypotheses. Berkeley did not statistically weight the sample to Census Bureau benchmarks.

  • Contrast this with the Washington Post, which surveyed approximately 15,000 people and asked the questions pertaining to Native Americans only of self-identified Native Americans, thus achieving a random sample. The Post statistically weighted its results to Census Bureau benchmarks.

  • The Annenberg poll surveyed from October 7, 2003, through September 20, 2004; 65,047 adults were interviewed, of whom 768 identified themselves as Indians or Native Americans and were asked the questions, again a random sample. Annenberg also statistically weighted to Census Bureau benchmarks.

  • Berkeley hired an online data analytics company to recruit Native Americans to fill a respondent quota.

  • Online recruitment of “sufficient variation in Native American demographics and identities” to test a hypothesis, rather than a random sample of Native Americans, combined with the failure to statistically weight to Census Bureau benchmarks, invalidated the results. A sketch of what weighting to benchmarks looks like follows this list.
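For readers unfamiliar with the term, the sketch below shows what “statistical weighting to Census Bureau benchmarks” (post-stratification) means in practice: each respondent is weighted by the ratio of their group’s census share to that group’s share of the sample. The group shares and answers in it are made-up placeholders for illustration only; this is not the code or data from the Berkeley, Post, or Annenberg polls.

```python
# Minimal post-stratification sketch: weight each respondent by
# (census share of their group) / (that group's share of the sample),
# so under-sampled groups count for more and over-sampled groups for less.
# All numbers and answers below are illustrative placeholders.

census_share = {"male": 0.495, "female": 0.505}   # benchmark shares (hypothetical)
sample_share = {"male": 0.31,  "female": 0.69}    # raw, unweighted sample mix

weights = {g: census_share[g] / sample_share[g] for g in census_share}
# male respondents get weight ~1.60, female respondents ~0.73

# Toy responses: (group, answered "offensive")
responses = [("male", False), ("female", True), ("female", False),
             ("male", False), ("female", True)]

raw_pct = sum(ans for _, ans in responses) / len(responses)
weighted_pct = (sum(weights[g] for g, ans in responses if ans)
                / sum(weights[g] for g, _ in responses))

print(f"raw: {raw_pct:.1%}   weighted: {weighted_pct:.1%}")
```

Without that re-weighting step, whatever demographic mix the recruiter happens to deliver is simply taken at face value.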

 

The Berkeley study’s under-representation of men

  • US population data show men make up 49.5% of the population, regardless of race.

  • US population data show 50.5% of the population is female.

  • The Berkeley survey sample was 31% male and 69% female and non-cisgender.

  • Female and non-cisgender respondents were polled more than 2.2 times as often as males.

  • Even if you subtract the 6.7% who identify as non-cisgender (biracial Native Americans included) from the 49.5%, that leaves 42.8%, yet Berkeley admits to polling only 31% male.

  • Under-sampling cisgender men by 11.8 percentage points invalidates the polling methodology; the quick check below restates the arithmetic.
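As a quick check, the arithmetic in the bullets above can be restated directly; the only inputs are the percentages already quoted.

```python
# Reproduces the subtraction in the bullets above: remove the 6.7%
# non-cisgender share from the 49.5% male benchmark, then compare the
# remainder to the 31% male share reported for the Berkeley sample.

male_benchmark  = 49.5                              # % of the US population that is male
non_cisgender   = 6.7                               # % reported as non-cisgender
adjusted_target = male_benchmark - non_cisgender    # 42.8%
male_in_sample  = 31.0                              # % of the Berkeley sample that is male

gap = adjusted_target - male_in_sample              # 11.8 percentage points
print(f"adjusted male target: {adjusted_target:.1f}%, shortfall: {gap:.1f} points")
```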

 

The Berkeley study’s over-representation of non-cisgender / LGBTQ respondents

  • US population data show 7.1% of the US population identifies as LGBTQ.

  • Population data show only 5.5% of Native Americans identify as LGBTQ, an over-representation of 1.6 percentage points.

  • Among those identifying as Native American combined with another race, 6.7% identify as LGBTQ, an over-representation of 0.4 percentage points.

  • The results are distorted and not indicative of the Native American community.

  • Over-representation of non-cisgender respondents invalidates Berkeley’s results; the differences are restated below.
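Restated as simple differences against the 7.1% national estimate cited above:

```python
# Percentage-point differences between the 7.1% national LGBTQ estimate
# and the Native American rates quoted in the bullets above.

national_lgbtq         = 7.1   # % of US adults identifying as LGBTQ
native_lgbtq           = 5.5   # % of Native Americans
native_plus_other_race = 6.7   # % of Native Americans combined with another race

print(f"vs. Native-only rate:         +{national_lgbtq - native_lgbtq:.1f} points")
print(f"vs. Native plus another race: +{national_lgbtq - native_plus_other_race:.1f} points")
```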

 

Under-representation of on-reservation Native Americans

  • National statistics show 22% of Native Americans live on a reservation (see https://www.ncoa.org/article/american-indians-and-alaska-natives-key-demographics-and-characteristics).

  • The Berkeley study under-represented on-reservation participants at 17%, 5 percentage points below the national statistics.

  • National statistics show 78% of Native Americans live off-reservation.

  • The Berkeley study over-represented off-reservation respondents at 83%, 5 percentage points above the national statistics.

  • Incorrectly sampling on- versus off-reservation respondents invalidated the data; the sketch below shows the corrective weights the study would have needed.
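Applying the same weighting idea sketched earlier, the on/off-reservation shares quoted above imply the corrective weights a benchmark-weighted survey would have needed. This is a rough illustration built only from the percentages in this section, not anything the Berkeley study published.

```python
# Gap between the Berkeley sample and the national on/off-reservation shares
# quoted above, plus the post-stratification weight that would bring each
# group back to its benchmark share.

benchmark = {"on_reservation": 22.0, "off_reservation": 78.0}  # national statistics cited above
sample    = {"on_reservation": 17.0, "off_reservation": 83.0}  # Berkeley sample shares

for group in benchmark:
    gap    = sample[group] - benchmark[group]    # percentage points
    weight = benchmark[group] / sample[group]    # corrective weight
    print(f"{group}: {gap:+.0f} points vs. benchmark, weight {weight:.2f}")
```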

 

Their assembled recruits were then asked leading questions

  • The questions were leading in their presentation.

  • Example: “The professional football team in Washington calls itself the Washington Redskins. As a Native American, I find the name offensive.”

  • Quote from Berkeley: “We adapted a second item from the same poll (“I think the term ‘Redskin’ is respectful to Native Americans” [reverse coded]) and created 3 additional items: “I find the term Redskin offensive,” “The term Redskin bothers me,” and “It bothers me when fans of the rival team for the Redskins use insults about Native American culture.”

  • They created sub-questions that lead, and they admit to it, with the full intention of leading respondents.

  • Questions must be clear to all respondents and avoid leading language to produce accurate results. Compare this to the Washington Post question:

  • “The professional football team in Washington calls itself the Washington Redskins. As a Native American, do you find that name offensive, or doesn’t it bother you?” This same question was asked in a 2004 Annenberg Center survey.

  • Example 2: “Native American Mascots.” The Berkeley study starts from an incorrect premise by using the term “mascots.”

  • “I think sports teams’ use of Native mascots is ok.” (1 = strongly disagree, 2 = disagree, 3 = somewhat disagree, 4 = neither agree nor disagree, 5 = somewhat agree, 6 = agree, 7 = strongly agree).

  • Mascots are fuzzy costumes worn by sideline performers; Native American names and iconic symbols or logos are not mascots. Again, these are leading questions that mischaracterize the use of Native culture to obtain a desired outcome.

  • Leading questions invalidated the Berkeley survey.

 

Summary review:

The Berkeley study under-sampled straight men, over-sampled women and LGBTQ respondents, under-sampled the reservations, and over-sampled those living off-reservation. To prove its position, Berkeley recruited all respondents instead of drawing a random sample, then asked them leading questions.

 

If Berkeley would like to commission an independent study and duplicate the Annenberg and Washington Post methodology, we would be happy to co-endorse the results.

 

The truth is 90% of Natives love their culture, names, and imagery, and the opposition doesn’t want to hear our voices. The Native American Guardian’s Association represents this majority and continues to Educate, not Eradicate.


https://journals.sagepub.com/doi/full/10.1177/1948550619898556

 

https://content.gallup.com/origin/gallupinc/GallupSpaces/Production/Cms/POLL/9b9rrrkhnei4nefacq2cgq.gif

 

####

Image: Berkeley / UM Fryberg poll recruited respondent demographics
Image: LGBTQ statistics among Native Americans