[Image: Bearometer survey avatar holding a clipboard]

Author: bearometerkeeper

  • Bearometer 17: Grade Policy

    The Faculty Bearometer sent the following question on February 11, 2026 to 2,668 faculty senate members:
    Which of the following best reflects your preference for undergraduate grading policy at Berkeley?

    • Maintain the current system: grading remains primarily at instructor discretion, without campus-wide grade targets
      or caps
    • Adopt non-binding recommended grade distributions for some or all courses (e.g., broad targets for the share of A-
      range, B-range, and C-range grades, or a recommended cap on A grades), with instructors free to diverge as
      appropriate.
    • Adopt binding constraints on grade distributions for some or all courses (e.g., an explicit curve or cap on As, similar
      to the proposed Harvard College policy capping A grades at 20% in large courses).
    • Other

    N=290. 249 regular faculty responded (14.9%) and 41 emerita responded (4%). Faculty from all 32 of our high-level units participated. The survey had a 5-day window.

  • Bearometer 16: Compliance

    On January 26, 2026, the Faculty Bearometer sent the following question to 2,500 faculty members:

    How often do rules or compliance requirements at Berkeley lead you to make less effective choices in how you approach teaching, research, mentoring, or service than you otherwise would?

    The choices were:

    • Never
    • Rarely (1x a year)
    • Occasionally (1x a semester)
    • Frequently (1x a month)

  • First Annual Beary Awards

    Your Question Keepers are pleased to announce our First Annual Beary Awards: citations that we are honored to bestow on units whose participation in 2025 merited special distinction. But first, we would like to share a few reflections on our survey’s inaugural year.

    The Bearometer’s first question, suggested to us by the Senate’s Faculty Welfare committee, asked “What is the single most important challenge to your overall welfare as a faculty member at Cal?” Fourteen more surveys followed, recording over 3,000 candid responses from faculty across campus on issues central to our shared academic life.

    Your Bearometer responses have become a valued source of insight at the highest levels of the University. They represent a unique channel through which rank-and-file faculty perspectives reach university leadership directly and at scale, offering:

    1. Frictionless information sharing: University leaders often lack timely, systematic feedback about how policies affect faculty in diverse disciplines, career stages, and work environments. The Bearometer lowers barriers to communication and provides a structured alternative to informal channels that tend to amplify the loudest or most senior voices.
    2. Faculty-led communication: By centering faculty-generated questions, the Bearometer complements longer, administration-framed instruments and helps surface concerns and priorities as faculty experience them.
    3. Honest perspectives: In a political climate that can discourage candor on difficult topics, the Bearometer’s anonymity enables forthright responses that would otherwise go unheard. 

    We are delighted to report that more than half of regular Berkeley faculty participated in the Bearometer in 2025, and our analysis shows that responses are strikingly balanced across disciplinary areas and campus units.

    Looking ahead, we are eager to make the Bearometer even more useful to both faculty and university leadership. We invite your anonymous feedback and suggested survey topics as we plan for a 2026 Bearometer with fresh questions and renewed energy.

    Finally, as promised at the beginning of the year, we are recruiting new Question Keepers. If you would like to participate in shaping future surveys, we would be delighted to hear from you.

    And now, without further ado, the 2025 Beary Awards. Note all awards are restricted to units with at least 50 faculty members, and apply to Bearometers 4-15, for which we have department information.

    High Turnout Award: Political Science

    We extend special gratitude to our colleagues in Political Science, whose response rate topped all units at 13.6%, including both regular faculty and emerita. Our political scientists know the value of participatory democracy.

    Infinite Series Expansion Award: Physics

    Our colleagues in the Physics department wrote free responses with the longest median length, at 74.5 words per free response.

    Marginalia Award: English

    For questions where a free response was optional, our English department colleagues were especially generous with their literate commentary, finding remarks to offer 55% of the time.

    Economy Award: Economics

    Our economist colleagues brought to their Bearometer participation a keen awareness of opportunity costs, choosing 78% of the time to do something other than writing optional free responses, and holding themselves to a median of 22 words per free response that they did write. They led the field in both categories.

    Hemingway Award: School of Public Policy

    Faulkner Award: School of Public Health

    Our colleagues in the Goldman School wrote the shortest sentences, with a median of only 13 words per sentence, while our Public Health colleagues wrote the longest, with a median of 20.

    Learned Other Hand Award: Law School

    Berkeley’s legal eagles found the most reason to engage in dialectic with themselves, with each sentence containing an average of 0.23 contrast patterns like “however,” “whereas,” or “nevertheless.”

    F*ck Nuance Award: Sociology

    By contrast, our colleagues in Sociology were the most decisive, with only 0.10 contrast patterns per sentence.

    Most likely to be redacted: [L&S Social Science Unit]

    Our colleagues in [redacted department] distinguished themselves by earning redactions a staggering [censored fraction] of the time, more than doubling the rate achieved by the runners-up in [redacted College of Engineering unit]. Their high redaction rate owed in part to their blistering commentary on [forbidden topic].

    We extend our hearty congratulations to all of the units that won this year and our thanks to every colleague who responded to a Bearometer survey in 2025. We look forward to reading many more of your thoughtful responses in 2026.

    Happy holidays, Chris & Will

  • Bearometer 15: Open Discussion

    On Nov. 19, 2025, the Faculty Bearometer distributed the following question to 2,579 senate members.

    To what extent, if at all, does ideological homogeneity within your department limit open and vigorous discussion of important issues in your discipline?

    N=143

  • Bearometer 14: AI

    The Faculty Bearometer sent the following question on November 11, 2025 to 2,589 faculty senate members:

    How, if at all, have you changed formative or summative assessment of your students because of generative AI?

    N=113. Of the participants, 108 were regular faculty and 5 had emeritus status. 61 had a primary appointment in a STEM department. Members from 30 of our 35 unit/department categories participated.

  • How Representative is the Bearometer?

    The Bearometer is an independent faculty project that poses a single poll question to Berkeley Senate faculty on issues of teaching and governance. It is unaffiliated with the Academic Senate or the administration and is modeled on MIT’s Faculty Pulse. Faculty propose and vote on questions, which are then selected by the Question Keepers (currently Chris Hoofnagle, Law; and William Fithian, Statistics) for distribution via university email. The Bearometer emphasizes anonymity to protect candid participation and typically receives responses from about 200 faculty per poll, providing rapid, faculty-driven feedback between formal Senate surveys. The Bearometer has run a dozen iterations on topics such as academic freedom, student preparedness, and faculty governance.

    The Bearometer emphasizes anonymity to protect faculty speech, though this limits demographic representativeness.

    Faculty speech is strongly protected by academic freedom, yet academics have many incentives to keep their views closely held. The Bearometer therefore uses both technological and procedural safeguards to ensure that no participant can be identified. Anonymity is a fundamental feature of the Bearometer.

    This anonymity leads to a problem: is the Bearometer valid? Or are its participants basically the same people every time?

    On internal validity: we are confident that only Berkeley faculty are completing the Bearometer because we have an authoritative list of senate members, and because we distribute via email with unique links. We receive Qualtrics reports on possible fraud (typically someone submitting twice), and these reports indicate that double dipping is quite rare: just 1 or 2 duplicate submissions per survey.

    The open question is representativeness: is the Bearometer a valid measurement of the faculty as a whole?

    To answer this question, we examined response patterns from Bearometers 5-12 (the last 8 surveys, as we deleted earlier data consistent with privacy protections), focusing on our regular faculty (non-emerita).

    In sum, this is what we found: across eight surveys, 52% of regular faculty never participated. Of those who did, 60% completed just one or two. Only 13% completed five or more.

    Bearometer Participants

    # of Completed Bearometers   Faculty Count   Total Percent
    0                            819             52.3%
    1                            290             18.5%
    2                            159             10.2%
    3                            119             7.6%
    4                            77              4.9%
    5                            49              3.1%
    6                            27              1.7%
    7                            15              1.0%
    8                            11              0.7%

    This table presents whether and how many times faculty completed a Bearometer. It is based on Bearometers 5-12 (8 total). Note that 52% have never done a Bearometer; on the other hand, very few have completed more than 5.

    As one can see, most regular faculty have not done Bearometers 5-12 (we no longer have data for Bearometers 1-4). Among those who do participate, the Bearometer is not dominated by any small faction: only 11 people have done all 8, 15 have done 7, and 27 have done 6.

    Some of these Bearometers were quite popular. We have an overall 14.6% response rate from regular faculty based on Bearometers 5-12.
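    As a rough consistency check, the overall rate can be recomputed directly from the completion counts in the participation table above. This is a sketch, not our official calculation; per-survey distribution list sizes actually varied, which is why the figure lands near, rather than exactly at, 14.6%:

```python
# Faculty counts by number of completed Bearometers 5-12
# (copied from the participation table above; regular faculty only).
counts = {0: 819, 1: 290, 2: 159, 3: 119, 4: 77, 5: 49, 6: 27, 7: 15, 8: 11}

n_faculty = sum(counts.values())                          # size of the panel
total_completions = sum(k * n for k, n in counts.items())
n_surveys = 8                                             # Bearometers 5-12

# Implied overall response rate: completions / (faculty x surveys).
rate = total_completions / (n_faculty * n_surveys)
print(f"{n_faculty} faculty, {total_completions} completions, "
      f"implied rate {100 * rate:.2f}%")
```

    The back-of-the-envelope figure is about 14.9%, in line with the reported overall rate.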

    Bearometer                                       N
    Bearometer 12: Attendance                        180
    Bearometer 11: Best of Berkeley                  138
    Bearometer 10: Student Evaluations of Teaching   238
    Bearometer 9: Student Preparation                238
    Bearometer 8: Getting Reimbursed                 237
    Bearometer 7: External Criticisms                291
    Bearometer 6: Chancellor Vision                  155
    Bearometer 5: Free Speech Temperature            335
    Bearometer 4: Intercollegiate Athletics          341
    Bearometer 3: SAT/ACT Testing                    430
    Bearometer 2: Union                              179
    Bearometer 1: Welfare                            300

    We also studied STEM versus non-STEM participation.

    STEM and Non-STEM Participation

    # of Completed Bearometers   Non-STEM   STEM
    0                            375        444
    1                            151        139
    2                            77         82
    3                            62         57
    4                            39         38
    5                            27         22
    6                            9          18
    7                            7          8
    8                            3          8
    Total                        750        816

    This table presents participation broken out by STEM status. Regular (non-emerita) faculty only.

    Berkeley has more STEM faculty (based on primary appointment). The difference in participation between the two groups is not statistically significant.
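    To illustrate the kind of check behind this claim (a sketch using only the counts in the table above and the standard library, not our actual analysis), a two-proportion z-test compares the share of each group that has ever participated:

```python
import math

# From the STEM table above: regular faculty who completed at least one
# of Bearometers 5-12.
non_stem_total, stem_total = 750, 816
non_stem_ever = 750 - 375          # 375 participated at least once
stem_ever = 816 - 444              # 372 participated at least once

p1 = non_stem_ever / non_stem_total              # 0.500
p2 = stem_ever / stem_total                      # ~0.456
pooled = (non_stem_ever + stem_ever) / (non_stem_total + stem_total)

# Two-proportion z-test for a difference in participation rates.
se = math.sqrt(pooled * (1 - pooled) * (1 / non_stem_total + 1 / stem_total))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))       # two-sided normal p-value

print(f"z = {z:.2f}, two-sided p = {p_value:.3f}")
```

    The non-STEM rate (375/750 = 50.0%) exceeds the STEM rate (372/816, about 45.6%), but the two-sided p-value of roughly 0.08 does not reach significance at the 5% level, consistent with the statement above.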

    [Figure: Mean response rate, STEM versus non-STEM regular faculty.]

    There are no statistically significant differences across departments.

    [Figure: Mean response rate by department category, regular faculty only.]

    Few emerita participate in the Bearometer. Of the 1,016 on our distribution list, 824 have never completed a Bearometer, 100 have completed exactly one, and another 92 have completed more than one. In other words, over eight Bearometers only 192 emerita have ever participated. Nonetheless, we poll emerita because they are senate members and have a career of experiences to share with us.

    What we do not know: we have no data on sex, nor on pre- versus post-tenure status. Both of these variables, especially combined with department, could undermine the anonymity of the Bearometer. But we also know that less powerful faculty may have the strongest speech concerns. One might suspect that more senior faculty attitudes predominate; however, the Bearometer’s anonymity protects precisely these classes of faculty where alternatives such as the climate surveys cannot, given those surveys’ intense focus on demographics and the identifying combinations they create.

    The Bearometer has tradeoffs. It can rapidly and reliably get feedback from about 200 regular faculty in just days, using a single email invitation. Campus climate surveys can achieve higher participation, but only after weeks of recruitment, and they come years apart (2009, 2011, 2019). In addition, the Bearometer is more democratic: the questions come from faculty members themselves. This gives decision makers signals free of the kinds of constraints that bind the Senate and faculty administrators.

  • Bearometer 13: Disability Accommodation

    The Faculty Bearometer sent the following question on October 28, 2025 to 2,596 faculty senate members:

    What have you observed about how disability accommodations are implemented at Berkeley (e.g., benefits, costs, process, communication, fairness, inclusion, workload, small/large class dynamics)?

    N=130 (4 submissions were blank). Of the participants, 119 were regular faculty and 11 had emeritus status. 70 had a primary appointment in a STEM department. Members from 31 departments participated.

  • Bearometer 12: Attendance

    The Faculty Bearometer sent the following question on October 14, 2025 to 2,614 faculty senate members:

    What percentage of undergraduate students attend your lecture on any given day?

    180 faculty answered this survey. The results are available at the link provided to faculty.

  • Bearometer 11: Best of Berkeley

    The Faculty Bearometer sent the following question on October 7, 2025 to 2,620 faculty senate members:

    What do you enjoy most about being a faculty member at Berkeley? (open text response). 138 responded.

    The Bearometer also sought votes for the following nominated questions (in randomized order). 114 responded.

    • Should the faculty have opportunities to rate senior administrators? (Likert)
    • Ought the undergraduate colleges adopt a grade curve? (Scaled response: No, Yes, with X, Y, or Z distribution)
    • How, if at all, have you changed formative and summative assessment of your students because of generative AI? (open text)
    • What percentage of undergraduate students attend your lecture on any given day? (two digit numeric)
    • How often have you “not done the right thing” at Berkeley because it would require cycles of your time spent on paperwork or compliance? (Likert)
    • How would you describe current practices around disability accommodation at Berkeley, based on what you have observed? (open text)

  • Bearometer 10: Student Evaluations of Teaching

    The Faculty Bearometer sent the following question on September 23, 2025 to 2,630 faculty senate members:

    Has the use of student evaluations of teaching in faculty merit and promotion cases affected the difficulty of your course? If so, how? 238 responded.

    Respondents could choose multiple responses:

    • It hasn’t affected the difficulty of my course.
    • Yes, it has led me to reduce course rigor/demand.
    • Yes, it has led me to increase course rigor/demand.
    • Yes, it has led me to award higher grades (more As or lenient distributions)
    • Yes, it has led me to award lower grades (fewer As or stricter distributions)
    • Other