Assignment 4

Before working on this assignment please read these instructions fully. In the submission area, you will notice that you can click the link to Preview the Grading for each step of the assignment. This is the criteria that will be used for peer grading. Please familiarize yourself with the criteria before beginning the assignment.

This assignment requires you to find at least two related datasets on the web and to visualize them to answer a question within the broad topic of sports or athletics (see below) for the region of Geldermalsen, Gelderland, Netherlands, or the Netherlands more broadly.

You can merge these datasets with data from different regions if you like! For instance, you might want to compare Geldermalsen, Gelderland, Netherlands to Ann Arbor, USA. In that case at least one source file must be about Geldermalsen, Gelderland, Netherlands.

You are welcome to choose datasets at your discretion, but keep in mind they will be shared with your peers, so choose appropriate datasets. Sensitive, confidential, illicit, and proprietary materials are not good choices for this assignment. You are also welcome to upload datasets of your own and link to them using a third-party repository such as GitHub, Bitbucket, or Pastebin. Please be aware of the Coursera terms of service with respect to intellectual property.

Also, you are welcome to preserve data in its original language, but for the purposes of grading you should provide English translations. You are welcome to provide multiple visuals in different languages if you would like!

As this assignment is for the whole course, you must incorporate principles discussed in the first week, such as maintaining a high data-ink ratio (Tufte) and aligning with Cairo's principles of truth, beauty, function, and insight.

Here are the assignment instructions:

  • State the region and the domain category that your data sets are about (e.g., Geldermalsen, Gelderland, Netherlands and sports or athletics).
  • You must state a question that you find interesting about the domain category and region you identified.
  • You must provide at least two links to available datasets. These could be links to files such as CSV or Excel files, or links to websites which might have data in tabular form, such as Wikipedia pages.
  • You must upload an image which addresses the research question you stated. In addition to addressing the question, this visual should follow Cairo's principles of truthfulness, functionality, beauty, and insightfulness.
  • You must contribute a short (1-2 paragraph) written justification of how your visualization addresses your stated research question.

What do we mean by sports or athletics? For this category we are interested in sporting events or athletics broadly. Please feel free to interpret the category creatively when building your research question!

Tips

  • Wikipedia is an excellent source of data, and I strongly encourage you to explore it for new data sources.
  • Many governments run open data initiatives at the city, region, and country levels, and these are wonderful resources for localized data sources.
  • Several international organizations and initiatives, such as the United Nations, the World Bank, and the Global Open Data Index, are other great places to look for data.
  • This assignment requires you to convert and clean datafiles. Check out the discussion forums for tips on how to do this from various sources, and share your successes with your fellow students! One possible way to pull a Wikipedia table into a local CSV file with pandas is sketched below.
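
The following is only a minimal sketch of that idea; the URL and the table index are placeholders that you would replace with your own page and table of interest, and pandas.read_html additionally needs an HTML parser such as lxml or html5lib installed.

import pandas as pd

# read_html returns one DataFrame per HTML table found on the page
url = "https://en.wikipedia.org/wiki/Some_page"   # placeholder URL
tables = pd.read_html(url)

# pick the table you need and save it locally for cleaning
tables[0].to_csv("my_table.csv", index=False)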

Example

Looking for an example? Here's what our course assistant put together for the Ann Arbor, MI, USA area using sports and athletics as the topic. Example Solution File

In [22]:
import numpy as np
import pandas as pd
import re

# data cleaning and points calculation for seasons up to 2018
def calcPointsOld(cell):
    # non-classified results (DNF, DNS, DSQ, ND, ...) score no points
    if "DN" in str(cell) or "ND" in str(cell) or "DSQ" in str(cell):
        return 0
    # strip annotation characters so only the finishing position remains
    cell = re.sub(r"[*PS†]", "", str(cell))
    # finishing positions outside the top 10 score no points
    if int(cell) > 10:
        return 0
    # look up the points awarded for this finishing position
    return scoring.loc[cell, "2010-2018"]

# data cleaning and points calculation from 2019 onward (extra point for the fastest lap)
def calcPointsNew(cell):
    # non-classified results (DNF, DNS, DSQ, ND, ...) score no points
    if "DN" in str(cell) or "ND" in str(cell) or "DSQ" in str(cell):
        return 0
    # strip annotation characters, except "S", which marks the fastest lap
    cell = re.sub(r"[*P†]", "", str(cell))
    FastLap = 0
    if re.findall(r"[S]", str(cell)):
        cell = re.sub(r"[S]", "", str(cell))
        FastLap = scoring.loc["Fastest lap", "2019-present"]
    # positions outside the top 10 keep only the fastest-lap point (if any)
    if int(cell) > 10:
        return FastLap
    # points for the finishing position plus the fastest-lap bonus
    return scoring.loc[cell, "2019-present"] + FastLap

# load the scoring table (previously scraped from Wikipedia into scoring.csv) and translate its Dutch labels
scoring = pd.read_csv("scoring.csv")
scoring = scoring.rename(columns={"Plaats": "Rank", "2019-heden": "2019-present"})
scoring = scoring.set_index("Rank").rename(index={"Snelste ronde": "Fastest lap"})
scoring = scoring.drop(["Unnamed: 0"], axis=1)
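
# Rough illustration of the cleaning logic above (the third case assumes the scoring
# table's index holds the rank labels as the same strings that remain after cleaning):
#   calcPointsOld("DNF")  -> 0 (did not finish)
#   calcPointsOld("11")   -> 0 (outside the points-scoring top 10)
#   calcPointsOld("3*")   -> "*" stripped, 3rd place looked up in scoring["2010-2018"]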
In [23]:
# read the season tables (previously scraped from Wikipedia into CSV files)

# 2015
# fillna(99) marks races a driver did not enter; 99 is outside the top 10, so it scores 0 points
df_2015 = pd.read_csv("table_2015.csv").fillna(99)
# index on driver name ("Coureur") and drop the non-race columns (position, car number, season points total)
df_2015 = df_2015.set_index("Coureur").drop(["Unnamed: 0", "Pos.", "Nr.", "Punten"], axis=1)
df_2015.columns = pd.MultiIndex.from_product([["2015"], df_2015.columns], names=["Year", "Race"])

df_2015 = df_2015.applymap(func=calcPointsOld)


# 2016
df_2016 = pd.read_csv("table_2016.csv").fillna(99)
df_2016 = df_2016.set_index("Coureur").drop(["Unnamed: 0", "Pos.", "Nr.", "Punten"], axis=1)
df_2016.columns = pd.MultiIndex.from_product([["2016"], df_2016.columns], names=["Year", "Race"])

df_2016 = df_2016.applymap(func=calcPointsOld)

# 2017
df_2017 = pd.read_csv("table_2017.csv").fillna(99)
df_2017 = df_2017.set_index("Coureur").drop(["Unnamed: 0", "Pos.", "Nr", "Punten", "Unnamed: 24"], axis=1)
df_2017.columns = pd.MultiIndex.from_product([["2017"], df_2017.columns], names=["Year", "Race"])

df_2017 = df_2017.applymap(func=calcPointsOld)

# 2018
df_2018 = pd.read_csv("table_2018.csv").fillna(99)
df_2018 = df_2018.set_index("Coureur").drop(["Unnamed: 0", "Pos.", "Nr", "Punten"], axis=1)
df_2018.columns = pd.MultiIndex.from_product([["2018"], df_2018.columns], names=["Year", "Race"])

df_2018 = df_2018.applymap(func=calcPointsOld)
In [24]:
# read the season tables (previously scraped from Wikipedia into CSV files)

# 2019
df_2019 = pd.read_csv("table_2019.csv").fillna(99)
df_2019 = df_2019.set_index("Coureur").drop(["Unnamed: 0", "Pos.", "Nr", "Punten"], axis=1)
df_2019.columns = pd.MultiIndex.from_product([["2019"], df_2019.columns], names=["Year", "Race"])

df_2019 = df_2019.applymap(func=calcPointsNew)

# 2020
df_2020 = pd.read_csv("table_2020.csv").fillna(99)
df_2020 = df_2020.set_index("Coureur").drop(["Unnamed: 0", "Pos.", "Nr", "Punten"], axis=1)
df_2020.columns = pd.MultiIndex.from_product([["2020"], df_2020.columns], names=["Year", "Race"])

df_2020 = df_2020.applymap(func=calcPointsNew)

# 2021
df_2021 = pd.read_csv("table_2021.csv").fillna(99)
df_2021 = df_2021.set_index("Coureur").drop(["Unnamed: 0", "Pos.", "Nr.", "ABU", "Punten"], axis=1)
df_2021.index.rename("Driver", inplace=True)
df_2021.columns = pd.MultiIndex.from_product([["2021"], df_2021.columns], names=["Year", "Race"])

df_2021 = df_2021.applymap(func=calcPointsNew)
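
The per-season blocks above repeat the same pattern, so they could also be collapsed into a small helper. The sketch below is only an illustration, assuming every table_<year>.csv shares the layout used above; load_season and extra_drop are hypothetical names, and the per-year quirks (the "Nr"/"Nr." spelling, the extra columns dropped in 2017 and 2021) are handled with errors="ignore" and the extra_drop argument.

def load_season(year, calc, extra_drop=()):
    # read one season, mark races a driver did not enter with 99 (scores 0 points)
    df = pd.read_csv(f"table_{year}.csv").fillna(99)
    # index on driver name and drop the bookkeeping columns; names vary slightly between years
    df = df.set_index("Coureur")
    df = df.drop(columns=["Unnamed: 0", "Pos.", "Nr", "Nr.", "Punten", *extra_drop], errors="ignore")
    df.columns = pd.MultiIndex.from_product([[str(year)], df.columns], names=["Year", "Race"])
    return df.applymap(calc)

# e.g. df_2015 = load_season(2015, calcPointsOld)
#      df_2017 = load_season(2017, calcPointsOld, extra_drop=["Unnamed: 24"])
#      df_2021 = load_season(2021, calcPointsNew, extra_drop=["ABU"])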
In [26]:
import matplotlib.pyplot as plt

# create dataframe with data of the three drivers over the years
df_of_interest = df_2015.loc[["Max Verstappen", "Lewis Hamilton", "Valtteri Bottas"]]
df_of_interest = df_of_interest.merge(df_2016, how="inner", left_index=True, right_index=True)
df_of_interest = df_of_interest.merge(df_2017, how="inner", left_index=True, right_index=True)
df_of_interest = df_of_interest.merge(df_2018, how="inner", left_index=True, right_index=True)
df_of_interest = df_of_interest.merge(df_2019, how="inner", left_index=True, right_index=True)
df_of_interest = df_of_interest.merge(df_2020, how="inner", left_index=True, right_index=True)
df_of_interest = df_of_interest.merge(df_2021, how="inner", left_index=True, right_index=True)

# calculate a 20-race moving average (roughly one season) of points per race for each driver
M_data = df_of_interest.loc["Max Verstappen"].rolling(window=20, center=False).mean()
L_data = df_of_interest.loc["Lewis Hamilton"].rolling(window=20, center=False).mean()
V_data = df_of_interest.loc["Valtteri Bottas"].rolling(window=20, center=False).mean()

# create the plot
# note: newer matplotlib releases ship this style under the name 'seaborn-v0_8-bright'
plt.style.use('seaborn-bright')

plt.figure(figsize=[16, 8])
ax = plt.gca()

L_data.plot()
M_data.plot()
V_data.plot()

plt.title("Moving average of championship points for the top three drivers of the 2021 Formula 1 season", fontsize="large", fontweight="semibold")
plt.legend(loc="lower right", fontsize="medium")

# label the x axis with season years instead of race indices
major_XT = ax.get_xaxis().get_majorticklocs()
plt.xticks(major_XT, (M_data.index.get_level_values(0)).unique(), rotation=0)

# hide the first 20 races, where the rolling window is not yet filled
plt.xlim(left=20.0)
plt.ylim(0, 25)
plt.xlabel('Year', fontsize="medium")
plt.ylabel('20-race moving average of championship points', fontsize="medium")
#plt.savefig("Assignment4.png")
plt.show()