Exercise - Racial Bias in the Labor Market

In this question you’ll partially replicate a well-known paper on racial bias in the labor market: “Are Emily and Greg More Employable Than Lakisha and Jamal? A Field Experiment on Labor Market Discrimination” by Marianne Bertrand and Sendhil Mullainathan. The paper, which I’ll refer to as BM for short, appears in Volume 94, Issue 4 of the American Economic Review. You will need to consult this paper to complete this problem.

For convenience, I’ve posted a copy of the dataset from this paper on my website at https://ditraglia.com/data/lakisha_aer.csv. Each row of the dataset corresponds to a single fictitious job applicant. After loading the tidyverse library, you can read the data into a tibble called bm using the read_csv() function as follows:

library(tidyverse)
bm <- read_csv('https://ditraglia.com/data/lakisha_aer.csv')
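If the download succeeds, a quick sanity check is to inspect the dimensions and column types of bm before starting on the questions below. This is just a suggestion, not part of the exercise; glimpse() and dim() are standard tidyverse / base R tools:

```r
# Peek at the structure of bm: one line per column, showing its
# name, type, and the first few values
glimpse(bm)

# dim() returns the number of rows and columns as c(rows, cols)
dim(bm)
```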
  1. Read the introduction and conclusion of BM. Then write a short paragraph answering the following:
    1. What research question do BM try to answer?
    2. What data and methodology do they use to address the question?
    3. What do the authors consider to be their key findings?
  2. Now that you have a rough idea of what the paper is about, it’s time to examine the dataset bm. Carry out the following steps:
    1. Display the tibble bm. How many rows and columns does it have?
    2. Display only the columns sex, race and firstname of bm. What information do these columns contain? How are sex and race encoded?
    3. Add two new columns to bm: female should take the value TRUE if sex is female, and black should take the value TRUE if race is black.
  3. Read parts A-D of section II in BM. Then write a short paragraph answering the following:
    1. How did the experimenters create their bank of resumes for the experiment?
    2. The experimenters classified the resumes into two groups. What were they and how did they make the classification?
    3. How did the experimenters generate identities for their fictitious job applicants?
  4. Randomized controlled trials are all about balance: when the treatment is randomly assigned, the characteristics of the treatment and control groups will be the same on average. To answer the following parts you’ll need a few additional pieces of information. First, the variable computerskills takes on the value 1 if a given resume says that the applicant has computer skills. Second, the variables education and yearsexp indicate level of education and years of experience, while ofjobs indicates the number of previous jobs listed on the resume.
    1. Is sex balanced across race? Use dplyr to answer this question. Hint: what happens if you apply the function sum to a vector of TRUE and FALSE values?
    2. Are computer skills balanced across race? Hint: the summary statistic you’ll want to use is the proportion of individuals in each group with computer skills. If you have a vector of ones and zeros, there is a very easy way to compute this.
    3. Are education and ofjobs balanced across race?
    4. Compute the mean and standard deviation of yearsexp by race. Comment on your findings.
    5. Why do we care if sex, education, ofjobs, computerskills, and yearsexp are balanced across race?
    6. Is computerskills balanced across sex? What about education? What’s going on here? Is it a problem? Hint: re-read section II C of the paper.
  5. The outcome of interest in bm is call, which takes on the value 1 if the corresponding resume elicits an email or telephone callback for an interview. Check your answers to the following against Table 1 of the paper:
    1. Calculate the average callback rate for all resumes in bm.
    2. Calculate the average callback rates separately for resumes with “white-sounding” and “black-sounding” names. What do your results suggest?
    3. Repeat part 2, but calculate the average rates for each combination of race and sex. What do your results suggest?
  6. Read the help files for the dplyr function pull() and the base R function t.test(). Then test the null hypothesis that there is no difference in callback rates across black and white-sounding names against the two-sided alternative. Comment on your results.
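The hints above rely on two facts about R that are worth seeing in isolation before you tackle the questions: arithmetic functions coerce logical vectors to 0/1, and t.test() accepts two numeric vectors, which pull() can extract from a tibble. Here is a minimal sketch using made-up data, so nothing here gives away the answers for the BM dataset:

```r
library(tidyverse)

# sum() coerces TRUE to 1 and FALSE to 0, so it counts the TRUEs;
# mean() of a logical (or 0/1) vector is therefore a proportion
x <- c(TRUE, FALSE, TRUE, TRUE)
sum(x)   # 3
mean(x)  # 0.75

# A toy two-sample t-test on simulated 0/1 outcomes, using pull()
# to extract a tibble column as a plain numeric vector
set.seed(1234)
fake <- tibble(group   = rep(c('a', 'b'), each = 100),
               outcome = rbinom(200, size = 1, prob = 0.5))

t.test(fake %>% filter(group == 'a') %>% pull(outcome),
       fake %>% filter(group == 'b') %>% pull(outcome))
```

Since the two simulated groups are drawn from the same distribution, the resulting p-value should typically be large; with the real bm data your conclusion may differ.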