Oxford Admissions

An initial draft of this post was about how to make rankings more meaningful, a piece of mathsy computer science of the kind you would expect from this blog. My motivation is personal: I'm involved in selecting CS undergraduate students. Some people might be curious how Oxford admissions work: it's certainly a favourite subject for UK politicians and news media. Therefore I decided to postpone my original plan and to write this post about Oxford admissions. I hope I'm not playing with too much fire and the blog will not suddenly become famous for the wrong reasons!

The tenor in newspapers and political campaigns is that the Oxbridge admissions process is “opaque” and unfair. Here I am going to offer transparency and some comments on fairness. The views expressed here are my own, and may not necessarily coincide with the university's views.

Oxford's goal is to admit the strongest applicants. One can disagree with this basic premise, but we try to admit those applicants who we think will do best in their studies here. Here is the process for CS:
  1. In October, about 1000 people (upward trend), typically in their last year of school, apply to Oxford to study CS or Maths&CS or CS&Philosophy. They are not allowed to apply to Cambridge at the same time.
  2. In November, the applicants take a written Maths test in their schools. The schools send the test scripts to Oxford, where they are marked.
  3. Based largely on the test scores, about a quarter of the applicants are invited for interviews in Oxford, which take place in December.
  4. Soon after the interviews, the interviewers decide which applicants get a place, based on the interviews but not only on them. About 40% of the interviewees are offered a place, which is about 10% of the applicants.
  5. In January the interviewees are notified whether they are offered a place, and if so, under which conditions.
  6. Over the summer, the applicants take final exams in their schools. Oxbridge offers are conditional on A-level marks or other school exam results. 90% of the candidates with an offer satisfy the conditions.
  7. In October, about 100 (upward trend) “freshers” start their degree in CS or Maths&CS or CS&Philosophy.
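The funnel implied by the steps above can be sketched in a few lines. The percentages are the approximate figures quoted in this post, not official statistics:

```python
# Rough admissions funnel for one CS cycle, using the approximate
# numbers from this post (all figures are illustrative, not official).
applicants = 1000
interviewed = round(applicants * 0.25)  # about a quarter are invited to interview
offers = round(interviewed * 0.40)      # about 40% of interviewees get an offer
freshers = round(offers * 0.90)         # about 90% meet their offer conditions

print(applicants, interviewed, offers, freshers)
```

The output (1000, 250, 100, 90) matches the rough 10% overall admission rate and the roughly 100 freshers who start each October.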
The interviews are very stressful for everyone involved. They are run by Oxford's colleges: about 20 (upward trend) of Oxford's 38 colleges admit CS undergrads. The interviewers are the tutors (I'm the CS tutor at St John's College). Each college makes its own admissions decisions, albeit with a lot of coordination within the CS department.

Oxford assigns each interviewee a first and a second college. The candidates may express a preference for a first college in the application. Some interviews are by Skype, but most interviewees come to Oxford in person, where they are hosted for about 3 nights by their first college. Each candidate gets at least one interview at both their first and their second college. Very often, candidates get 2 or 3 interviews at their first college. All interview marks are recorded in a database that is shared among all CS tutors.

During the second interview day, each candidate is decided upon by their first college. The first college decides either to “offer to” the candidate or to “drop” them. The term “offer” is not very apt, because the candidates don't know about this offer until a month later. Offers and drops are recorded in the database. If a candidate is dropped by the first college, the second college may offer to or drop the candidate. If a candidate is dropped also by the second college but still looks promising, other colleges may schedule additional interviews and then act as third colleges.
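The offer/drop cascade is essentially a first-responder protocol: colleges are consulted in order, and the first one to offer takes the candidate. Here is a minimal sketch; the decision rules and the score threshold for each college are hypothetical stand-ins, since real decisions weigh interviews, test scores, and background together:

```python
# A minimal sketch of the offer/drop cascade. The colleges' decision
# rules below are hypothetical; they stand in for each college's own
# judgement, which is recorded in the shared database.

def decide(candidate, colleges):
    """Consult each college in turn; return the first to offer, else None."""
    for college in colleges:
        if college(candidate):       # the college's own decision rule
            return college.__name__  # an "offer" is recorded
    return None                      # "dropped" by every college consulted

# Hypothetical rules: later colleges are willing to take a lower score.
def first_college(c):  return c["avg_score"] >= 8.0
def second_college(c): return c["avg_score"] >= 7.0
def third_college(c):  return c["avg_score"] >= 6.5

print(decide({"avg_score": 7.2}, [first_college, second_college, third_college]))
# → second_college
```

A candidate dropped by their first college still has a real chance: the cascade only ends when every consulted college has dropped them.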

To make these decisions, the tutors use all available information, including all interview scores, the test score, predicted A-level marks, the personal statement, and the candidate's school and socio-economic background. A weak school or a disadvantaged socio-economic background is taken as a plus, not in order to discriminate positively, but because the other criteria may underestimate the abilities of candidates from such backgrounds. The interview contents vary between colleges, but always strike some balance between probing motivation and technical ability. The CS tutors do not agree on the relative importance of different criteria, particularly of the Maths test and interview scores. Naturally, each college believes it makes better decisions than most other colleges.

The interview scores tend to play an important role. For each interview, the candidate gets a mark. Candidates with a “borderline” interview score are often considered by 3rd or 4th colleges, which may want to schedule additional interviews, and a few cases will be decided upon only after the candidates have left Oxford. The average interview score is less meaningful if tutors apply the marking scale differently. Tutors may do so unwittingly, because some colleges tend to get stronger applicants than others, and tutors may ask different interview questions each year. In a follow-up post I will describe how one could alleviate this problem, based on a principled method for making rankings more meaningful, with other applications in reviewing papers and grant proposals.
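To see why raw interview averages can mislead, here is a toy illustration (not the principled method of the follow-up post). The scores below are invented: one tutor marks generously, the other harshly, yet both agree on the ranking, and standardizing each tutor's marks (z-scores) recovers that agreement:

```python
# Toy example: raw averages differ across tutors who use the marking
# scale differently, but per-tutor z-scores can partly correct for it.
from statistics import mean, pstdev

# Hypothetical marks: tutor A is generous, tutor B is harsh,
# yet both rank the same two candidates identically.
scores = {
    "A": {"alice": 9, "bob": 8},
    "B": {"alice": 6, "bob": 5},
}

def zscores(marks):
    m, s = mean(marks.values()), pstdev(marks.values())
    return {c: (v - m) / s for c, v in marks.items()}

# Raw means disagree (8.5 vs 5.5); z-scores coincide for both tutors.
for tutor, marks in scores.items():
    print(tutor, zscores(marks))
```

Real calibration is harder than this, because tutors see different (and differently strong) pools of candidates rather than the same two, which is exactly the problem the follow-up post addresses.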

Is this process fair? That depends on what fairness means. If you don't live in the UK, you might underestimate the interest of UK politicians and journalists in Oxbridge admissions. In the UK (unlike Germany), the university you have attended is a source of pride and is considered a determining career factor. Suppose you are a politician or a journalist who has looked at the number of Oxford students with certain features (race, gender, socio-economic background). You have compared the numbers with averages from other groups, such as the UK population or students from all universities. You don't like the fact that some groups are underrepresented at Oxbridge. Who are you going to blame? Oxbridge! Reason: we select our students. If this alone does not convince your audience, you can choose from a menu of arguments, here ordered increasingly by sophistication:
  1. You claim that the numbers are the result of tutors discriminating against underrepresented groups. Typically you claim that tutors, suffering from implicit or unconscious bias, accept students that are similar to themselves.
    In reality, tutors are very aware of this potential problem. Avoiding it as far as possible is not only fair but in our interest: teaching smart and motivated students is more rewarding. The decentralized system described above guards against negative discrimination: if a first-college tutor did discriminate against certain candidates, the second college might happily admit these candidates. So negative discrimination wouldn't influence the statistics much. I believe this is similar for other subjects and for Cambridge.
  2. You hint that positive discrimination should take place to counteract the numbers that you don't like.
    In the UK, discrimination is illegal whenever it is based on certain protected characteristics such as gender and race, including colour, nationality, ethnic or national origin. That means positive discrimination is also illegal, and affirmative action is a no-go. Colleagues sometimes indicate in informal discussions that they would nevertheless like to discriminate positively. Some are not at all ashamed of this opinion, but (have to) justify it with elements from the following item.
  3. You claim Oxbridge's admissions process does not sufficiently take into account that the applicants' prior education may hide or exaggerate their true potential.
    This is a potentially serious problem, and tutors strive to avoid it. As mentioned, applicants from disadvantaged backgrounds effectively get a plus to compensate for their disadvantage. For instance, on each applicant's file we have statistics about their school's average exam results. We also have information about how well-off and how well educated people are in the applicant's postcode. The applicants' performances are judged against this background. To what extent this is appropriate is hard to say. For instance, many people believe that selective schools lead to better exam results. Whether this is true depends on what exactly is meant; see a recent scientific article on how little the type of school influences GCSE results once genetics and selection factors are controlled for.
  4. You say that it is Oxbridge's responsibility to attract underrepresented groups.
    Oxbridge largely subscribes to that view. We offer lots of outreach events. For instance, recently I gave a talk about CS and Oxbridge to around 200 pupils south of London. Some CS colleagues do substantially more. We organize events specifically for women, we run open days, and there is a summer school which “[prioritizes] students from low-socio economic backgrounds and those living in areas with low progression to higher education”. Such activities have likely increased the number of applicants.
  5. You claim that Oxbridge is failing because the outreach activities have not drastically changed the numbers you don't like. You demand that Oxbridge give applicants with a disadvantaged background a bonus beyond the plus I described under item 3. You emphasize the socio-economic background, knowing that discrimination based on gender or race is illegal.
    At this point, you effectively demand that universities abandon the principle of admitting the most promising applicants. This would be unpopular among academics, but such demands could gain traction, given the political polarization in the UK.
Here is my take on why some groups are underrepresented at Oxbridge:
  1. How much parents value education (and provide education) depends on their socio-economic background.
  2. How much pupils develop their academic abilities depends on their school and their parents' interest in education.
  3. Oxbridge admissions use academic ability as the primary selection factor, and therefore pick up on these differences.
It is convenient to blame Oxbridge for the resulting imbalances. Everybody wants fairness, but what that means is up for debate. I'd rather have more informed debates.
