In this project we will analyze data on Thanksgiving dinner in the US using the pandas library. We'll load the "thanksgiving.csv" file into a DataFrame and use several basic pandas functions to explore the data.
#Import pandas and read the data.
import pandas as pd
data = pd.read_csv("thanksgiving.csv", encoding ="Latin-1")
#Print the column names.
col = data.columns
print(col)
#Prints the table dimensions (rows x columns)
print("rows, columns: "+str(data.shape))
#Outputs the first 5 rows of the dataframe
data.head()
It looks like each column name is a survey question and each row is one respondent's answers to those questions. Let's clean up the data a little bit by removing all rows that didn't answer "Yes" to the first question, "Do you celebrate Thanksgiving?".
We can accomplish this by converting the column into a boolean series, then using that boolean series to filter out everyone who didn't answer "Yes" to the first question.
print(data["Do you celebrate Thanksgiving?"].value_counts())
data = data[data["Do you celebrate Thanksgiving?"] == "Yes"]
data.shape
We can see that the DataFrame went from 1058 rows to 980 rows, so it looks like we've successfully filtered the data. Now we can begin analyzing it. We can use the .value_counts() method on the second question column to see the types of food served on Thanksgiving. The .value_counts() method is especially useful for columns that contain many repeats of the same strings, which is typical of survey data.
data["What is typically the main dish at your Thanksgiving dinner?"].value_counts()
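As an aside, .value_counts() can also report proportions instead of raw counts by passing normalize=True. A minimal sketch on a made-up column (the dish names are illustrative):

```python
import pandas as pd

# Hypothetical mini-survey column to illustrate value_counts
dishes = pd.Series(["Turkey", "Turkey", "Ham/Pork", "Tofurkey", "Turkey"])

# Raw counts per unique value
counts = dishes.value_counts()
print(counts)

# normalize=True returns each value's share of the total instead
props = dishes.value_counts(normalize=True)
print(props)
```

The normalized form is handy when comparing groups of different sizes, as we do later with income brackets.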
Suppose we own a restaurant and we are interested in serving tofu turkey (Tofurkey) with gravy on our menu. We are interested in the number of families that actually serve this dish. We can create a new DataFrame using boolean filtering to show only rows where Tofurkey was the main dish on Thanksgiving. In addition, we'll look at the "Do you typically have gravy?" column to see if people eat this dish with gravy.
data_only_tofurkey = data[data["What is typically the main dish at your Thanksgiving dinner?"] == "Tofurkey"]
Gravy_and_tofurkey = data_only_tofurkey["Do you typically have gravy?"]
Gravy_and_tofurkey.value_counts()
Only 12 out of 980 respondents answered yes, so it might be a good idea not to serve this dish in our restaurant.
Next we want to see how many people in this survey have apple, pumpkin, or pecan pies during Thanksgiving. We can use the .isnull() method to convert each pie column into a boolean series, then combine them with the & operator into a single boolean series. Finally, we can use the .value_counts() method to tally the number of False values in that series.
apple = data["Which type of pie is typically served at your Thanksgiving dinner? Please select all that apply. - Apple"]
apple_isnull = apple.isnull()
pumpkin = data["Which type of pie is typically served at your Thanksgiving dinner? Please select all that apply. - Pumpkin"]
pumpkin_isnull = pumpkin.isnull()
pecan = data["Which type of pie is typically served at your Thanksgiving dinner? Please select all that apply. - Pecan"]
pecan_isnull = pecan.isnull()
did_not_eat_pies = apple_isnull & pumpkin_isnull & pecan_isnull
did_not_eat_pies.value_counts()
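The same result can be reached more compactly by calling .isnull() on a sub-DataFrame of the pie columns and reducing across columns with .all(axis=1). A sketch on toy data (the column names and values here are illustrative, not the survey's exact strings):

```python
import pandas as pd
import numpy as np

# Toy frame standing in for the three pie columns
pies = pd.DataFrame({
    "Apple":   ["Apple", np.nan, np.nan, "Apple"],
    "Pumpkin": [np.nan, "Pumpkin", np.nan, "Pumpkin"],
    "Pecan":   [np.nan, np.nan, np.nan, "Pecan"],
})

# .isnull() gives a boolean frame; .all(axis=1) is True only when
# every pie column in a row is null, i.e. the respondent had no pie.
# This matches chaining the per-column .isnull() series with &.
did_not_eat = pies.isnull().all(axis=1)
print(did_not_eat.value_counts())
```

This scales better than chaining & by hand when there are many columns to combine.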
It looks like most people had apple, pumpkin, or pecan pies for Thanksgiving. It might be a good idea to prepare extra pies to sell to people who are too lazy to bake them for the holidays.
We want to make sure this survey isn't biased towards the older generation and covers all ages. Currently the age column is a bit difficult to analyze. We can write a function and use the .apply() method to convert this column into integers. We'll have to play around with string manipulation methods such as .split() and .replace() to accomplish this.
#Converts a single age string (e.g. "18 - 29" or "60+") to an integer.
def convert_to_int(column):
    if pd.isnull(column):
        return None
    string = column.split(' ')[0]
    string = string.replace('+', '')
    return int(string)
int_age = data["Age"].apply(convert_to_int)
data["int_age"] = int_age
#Outputs summary statistics for the column.
data["int_age"].describe()
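For reference, the same parsing can also be done without .apply(), using pandas' vectorized string methods. A sketch on made-up age strings in the survey's format:

```python
import pandas as pd

# Sample age strings in the survey's format (values are illustrative)
ages = pd.Series(["18 - 29", "30 - 44", "60+", None])

# Take the first space-separated token, drop any '+', and convert to
# a number; missing values propagate through the .str operations
int_age = pd.to_numeric(
    ages.str.split(" ").str[0].str.replace("+", "", regex=False)
)
print(int_age)
```

Both approaches keep only the lower bound of each age range, which is why the resulting averages skew low.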
While we took the lower limit of each age range, the respondents of the survey appear to cover all age groups.
Next we are interested in the income groups of each family. We do this to make sure the average income of the survey respondents is representative of the population.
data["How much total combined money did all members of your HOUSEHOLD earn last year?"].value_counts()
The responses are strings, so we'll have to use the .apply() method along with a function to convert the elements of the column into integers before we can use the .describe() method.
income_col = data["How much total combined money did all members of your HOUSEHOLD earn last year?"]
#Converts a single income string (e.g. "$25,000 to $49,999") to an integer.
def convert_to_int_inc(column):
    if pd.isnull(column):
        return None
    string = column.split(' ')[0]
    if 'Prefer' in string:
        return None
    string = string.replace('$', '')
    string = string.replace(',', '')
    return int(string)
data['int_income'] = income_col.apply(convert_to_int_inc)
data['int_income'].describe()
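An alternative is to pull the first dollar amount out with a regular expression, which handles the "Prefer not to answer" case for free since it contains no dollar amount. A sketch on made-up income strings (the exact response wording is an assumption):

```python
import pandas as pd

# Sample income strings in the survey's format (values are illustrative)
incomes = pd.Series([
    "$25,000 to $49,999",
    "$200,000 and up",
    "Prefer not to answer",
    None,
])

# Extract the first dollar amount, strip commas, and convert to a number;
# rows with no match ("Prefer not to answer", missing) become NaN
int_income = pd.to_numeric(
    incomes.str.extract(r"\$([\d,]+)", expand=False)
           .str.replace(",", "", regex=False)
)
print(int_income)
```

As with the ages, this keeps only the lower bound of each income bracket.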
Once again, we took the lower limit of each income range, so the average skews downward. Even so, the average income is high, and the standard deviation is almost as large as the mean. The median is $75,000, which is relatively close to the average.
Next, let's see if there is any correlation between income and travel distance. We can simply use boolean filtering again with the .value_counts() method.
less_150k = data["int_income"] < 150000
less_150k_data = data[less_150k]
how_far = less_150k_data["How far will you travel for Thanksgiving?"]
how_far.value_counts()
more_150k = data["int_income"] > 150000
more_150k_data = data[more_150k]
how_far_150k_plus = more_150k_data["How far will you travel for Thanksgiving?"]
how_far_150k_plus.value_counts()
#Proportion of each income group that celebrates Thanksgiving at home
high_income_athome = 49/(49+25+16+12)
low_income_athome = 281/(203+150+55+281)
print(high_income_athome)
print(low_income_athome)
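The manual division above can also be done directly with value_counts(normalize=True). A sketch on toy data whose counts mirror the numbers used above (the response labels are illustrative, not the survey's exact strings):

```python
import pandas as pd

# Toy travel-distance responses for the two income groups,
# with counts matching the hand calculation above
low = pd.Series(["home"] * 281 + ["local"] * 203
                + ["few hours"] * 150 + ["far"] * 55)
high = pd.Series(["home"] * 49 + ["local"] * 25
                 + ["few hours"] * 16 + ["far"] * 12)

# normalize=True turns counts into proportions, replacing manual division
print(high.value_counts(normalize=True)["home"])
print(low.value_counts(normalize=True)["home"])
```

Using normalized counts avoids copying the raw numbers out of the output by hand, which is error-prone.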
It looks like high income respondents (> $150k) actually stay home at a higher rate than low income respondents (< $150k). One possible explanation is that higher income respondents tend to be older and have more established families, while many low income respondents are students who live away from home and have to travel back for Thanksgiving.
We can use the .pivot_table() method to see if there is a correlation between age/income and people who spend their Thanksgiving with friends.
data.pivot_table(
    #index: the column whose values label the rows of the table
    index = "Have you ever tried to meet up with hometown friends on Thanksgiving night?",
    #columns: the column whose values label the columns of the table
    columns = 'Have you ever attended a "Friendsgiving?"',
    #values: the column to aggregate; the mean is taken by default
    values = 'int_age'
)
data.pivot_table(
    index = "Have you ever tried to meet up with hometown friends on Thanksgiving night?",
    columns = 'Have you ever attended a "Friendsgiving?"',
    values = 'int_income'
)
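For reference, pivot_table averages the values column by default (aggfunc="mean"), and the same table can be produced with groupby. A sketch on toy data with hypothetical short column names:

```python
import pandas as pd

# Toy data with the same shape as the survey columns (values illustrative)
df = pd.DataFrame({
    "meetup":        ["Yes", "Yes", "No", "No"],
    "friendsgiving": ["Yes", "No", "Yes", "No"],
    "int_age":       [25, 40, 30, 55],
})

# pivot_table averages the values column by default; other aggregations
# (e.g. "median", "count") can be requested via aggfunc
table = df.pivot_table(index="meetup", columns="friendsgiving",
                       values="int_age", aggfunc="mean")
print(table)

# An equivalent groupby formulation: aggregate, then pivot with unstack
grouped = df.groupby(["meetup", "friendsgiving"])["int_age"].mean().unstack()
```

The groupby form makes the aggregation step explicit, which can be clearer when combining several statistics at once.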
It turns out that people who spend their Thanksgiving with friends have a lower average income and an average age of 34.
Learning Summary
Python concepts explored: pandas, functions, boolean filtering
Python functions, methods, and attributes used: .read_csv(), .pivot_table(), .replace(), .describe(), .apply(), .isnull(), .columns, .shape, .head()
The files used for this project can be found in my GitHub repository.