Age is just a number

Today, at a teaching and learning symposium, I demonstrated some in-class interactive activities that I had developed for my super large intro statistics lectures. I’ve shared a summary of the activities and the data below.

Quick summary of the activity

1. Head to how-old.net, upload or take a photo of yourself, and record the age given by the #HowOldRobot

2. Complete a Google form (or similar) with your actual age and the age given by the #HowOldRobot

3. Explore the data collected using an awesome free online tool like iNZight Lite (click the link to jump through with the data)

4. Watch this short video featuring Joy Buolamwini to learn more about facial recognition software and discuss algorithmic bias

Some other ideas

If you haven’t already, check out learning.statistics-is-awesome.org/different_strokes/, where you can sample some cat (and other) drawings and learn more about how people draw in the Google game Quick, Draw!

I also get students to draw things in class and use their drawings as data. Below are all the drawings of cats made from the demonstration today, and also from the awesome teachers who helped me out last night. If you click/touch and hold a drawing you will be able to drag it around. How many different ways can you sort the drawings into groups?

Follow the data!

Last week I was down in Wellington for the VUW NZCER NZAMT16 Mathematics & Statistics Education Research Symposium, as well as for the NZAMT16 teacher conference. It was a huge privilege to be one of the keynote speakers and my keynote focused on teaching data science at the school level. I used the example of following music data from the New Zealand Top 40 charts to explore what new ways of thinking about data our students would need to learn (I use “new” here to mean “not currently taught/emphasised”).

It was awesome to be back in Wellington, as not only did I complete a BMus/BSc double degree at Victoria University, I actually taught music at Hutt Valley High School (the venue for the conference) while I was training to become a high school teacher (in maths/stats and music). I didn’t talk much in my keynote about the relationship between music and data analysis, but I did describe my thoughts a few years ago (see below):

All music has some sort of structure sitting behind it, but the beauty of music is in the variation. When you learn music, you learn about key ideas and structures, but then you get to hear how these same key ideas and structures can be used to produce so many different-sounding works of art. This is how I think we need to help students learn statistics – minimal structure, optimal transfer, maximal experience. Imagine how boring it would be if students learning music only ever listened to Bach.

https://www.stat.auckland.ac.nz/en/about/news-and-events-5/news/news-2017/2017/08/the-art-of-teaching-statistics-to-teenagers.html

Due to some unforeseen factors, I ended up Zooming my slides from one laptop at the front of the hall to another laptop in the back room which was connected to the data projector. Since I was using Zoom, I decided to record my talk. However, the recording is not super awesome due to not really thinking about the audio side of things (ironically). If you want to try watching the video, I’ve embedded it below:

You can also view the slides here: bit.ly/followthedataNZAMT. I’m not sure they make a whole lot of sense by themselves, so here’s a quick summary of some of what I talked about:

  • Currently, we pretty much choose data to match the type of analysis we want to teach, and then “back fit” the investigative problem to this analysis. This is not totally a bad thing: we do it in the hope that when students are out there in the real world, they think about all the analytical methods they’ve learned and choose the one that makes sense for the thing they don’t know and the data they have to learn from. But there’s a whole lot of data out there, generated by the computational world our students live in, that we don’t currently teach students how to learn from. If we “follow the data” that students are interacting with, what “new” ways of thinking will our students need to make sense of this data?
  • Album covers are a form of data, but how do we take something we can see visually and turn this into “data”? For the album covers I used from one week of 1975 and one week of 2019, we can see that the album covers from 1975 are not as bright and vibrant as those from 2019; similarly, we can see that people’s faces feature more in the 1975 album covers. We could use the image data for each album cover, extract some overall measure of colour and use this to compare 1975 and 2019 (a minimal sketch of one such measure follows this list). But what measure should we use? What is luminosity, saturation, hue, etc.? How could we overfit a model to predict the year of an album cover by creating lots of super specific rules? What pre-trained models can we use for detecting faces? How are they developed? How well do they work? What’s this thing called a “confusion matrix”?
  • An intended theme across my talk was to compare what humans can do (and to start with this), with what we could try to get computers to do, and also to emphasise how important human thinking is. I showed a video of Joy Buolamwini talking about her Gender Shades project and algorithmic bias: https://www.youtube.com/watch?v=TWWsW1w-BVo and tried to emphasise that we can’t teach about fun things we can do with machine learning etc. without talking about bias, data ethics, data ownership, data privacy and data responsibility. In her video, Joy uses faces of members of parliament – did she need permission to use these people’s faces for her research project since they were already public on websites? What if our students start using photos of our faces for their data projects?
  • I played the song that was number one the week I was born (tragedy!) as a way to highlight the calendar feature of the nztop40 website – as long as you were born after 1975, you can look up your song too. Getting students to notice the URL and how it changes as you navigate a web page is a useful skill – in this case, if you navigate to different chart weeks, you can notice that the “chart id” number changes. We could “hack” the URL to get the chart data for different weeks of the years available. If the website terms and conditions allow us, we could also use “web scraping” to automate the collection of chart data from across a number of weeks (a hedged sketch of this idea also follows this list). We could also set up a “scheduler” to copy the chart data as it appears each week. But then we need to think about what each row in our super data set represents and what visualisations might make sense to communicate trends, features, patterns etc. I gave an example of a visualisation of all the singles that reached number one during 2018, and we discussed things I had decided to do (e.g. reversing the y axis scale) and how the visualisation could be improved [data visualisation could be a whole talk in itself!!!]
  • There are common ways we analyse music – things like key signature, time signature, tempo (speed), genre/style, instrumentation etc. – but I used one that I thought would not be too hard to teach during the talk: whether a song is in the major or minor key. However, listening to music first was really just a fun “gateway” to learn more about how the Spotify API provides “audio features” for songs in its database – and, in particular, about supervised machine learning. According to Spotify, the Ed Sheeran song Beautiful people is in the minor key, but I – and the guitar chords published online – think that it’s in the major key. What’s the lesson here? We can’t just take data that comes from a model as being the truth.
  • I also wanted to talk more about how songs make us feel, extending thinking about the modality of the song (major = happy, minor = sad) to the lyrics used in the song as well. How can we take a set of lyrics for a song and analyse these in terms of overall sentiment – positive or negative? There are lots of approaches, but a common one is to treat each word independently (“bag of words”) and to use a pre-existing lexicon (a toy version is sketched after this list). The slides show the different ways I introduce this type of analysis, but the important point is how common it is to transfer a model trained within one data context (for the bing lexicon, customer reviews online) and use it for a different data context (in this case, music lyrics). There might just be some issues with doing this though!
  • Overall, what I tried to do in this talk was not to showcase computer programming (coding) and mathematics, since often we make these things the “star attraction” in talks about data science education. The talk I gave was totally “powered by code” but do we need to start with code in our teaching? When I teach statistics, I don’t start with pulling out my calculator! We start with the data context. I wanted to give real examples of ways that I have engaged and supported all students to participate in learning data science: by focusing on what humans think, feel and see in the modern world first, then bringing in (new) ways of thinking statistically and computationally, and then teaching the new skills/knowledge needed to support this thinking.
  • We have an opportunity to introduce data science in a real and meaningful way at the school level, and we HAVE to do this in a way that allows ALL students to participate – not just those in enrichment/extension classes, coding clubs, and schools with access to flash technology and gadgets. While my focus is the senior levels (Years 11 to 13), the modern world of data gives so many opportunities for integrating statistical and computational thinking to learn from data across all levels. We need teachers who are confident with exploring and learning from modern data, and we need new pedagogical approaches that build on the effective ones crafted for statistics education. We need to introduce computational thinking and computer programming/coding (which are not the same things!) in ways that support and enrich statistical thinking.
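To make the album cover idea a little more concrete, here’s a minimal sketch (in Python, using the Pillow library) of reducing an image to a single overall brightness number. The file names are hypothetical stand-ins for downloaded album art, and mean luminance is just one of many measures you could try:

```python
# A minimal sketch: reduce an album cover to a single overall brightness
# number. "cover_1975.jpg" and "cover_2019.jpg" are hypothetical local files.
from PIL import Image

def mean_brightness(path):
    """Average pixel luminance (0-255) across the whole image."""
    img = Image.open(path).convert("L")  # "L" = 8-bit greyscale (luminance)
    pixels = list(img.getdata())
    return sum(pixels) / len(pixels)

for path in ["cover_1975.jpg", "cover_2019.jpg"]:
    print(path, round(mean_brightness(path), 1))
```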
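And here’s the hedged “hack the URL” sketch: loop over chart id numbers and save each page for later processing. The URL pattern below is illustrative only – check the real pattern in your browser, and check the site’s terms and conditions before scraping anything:

```python
# Loop over chart id numbers and save each chart page.
# The URL pattern is a guess for illustration, not the site's actual API.
import time
import requests

BASE = "https://nztop40.co.nz/chart/singles?chart={chart_id}"  # hypothetical pattern

for chart_id in range(2000, 2005):  # a few example chart ids
    response = requests.get(BASE.format(chart_id=chart_id))
    if response.ok:
        with open(f"chart_{chart_id}.html", "w", encoding="utf-8") as f:
            f.write(response.text)
    time.sleep(1)  # be polite: pause between requests
```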
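Finally, the toy “bag of words” sentiment scorer. The three-word lexicon here is made up purely for illustration – a real analysis would use an established lexicon (like bing) with thousands of words:

```python
# Score lyrics by treating each word independently ("bag of words").
toy_lexicon = {"beautiful": 1, "love": 1, "lonely": -1}  # word -> sentiment (made up)

def sentiment_score(lyrics):
    """Sum the sentiment of each word; word order is ignored."""
    words = lyrics.lower().split()
    return sum(toy_lexicon.get(word, 0) for word in words)

print(sentiment_score("beautiful people we are beautiful but lonely"))  # 1
```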

If you are a NZ-based teacher, and you are interested in learning more about teaching data science, then please use the “sign-up” form at undercoverdata.science (the “password” is datascience4everyone). I’ll be sending out some emails soon, probably starting with learning more about APIs (for an API in action, check out learning.statistics-is-awesome.org/popularity_contest/).

Different strokes?

Example of sorted data cards

Recently I’ve been developing and trialling learning tasks where the learner is working with a provided data set but has to do something “human” that motivates using a random sample as part of the strategy to learn something from the data.

Since I already had a tool that creates data cards from the Quick, Draw! data set, I’ve created a prototype for the kind of tool that would support this approach using the same data set.

I’ve written about the Quick, Draw! data set already:

For this new tool, called different strokes, users sort drawings into two or more groups based on something visible in the drawing itself. Since you have to drag the drawings around to manually “classify” them, the larger the sample you take, the longer it will take you.

There’s also the novelty and creativity of being able to create your own rules for classifying drawings. I’ll use cats for the example below, but from a teaching and assessment perspective there are SO many drawings of so many things, and so many variables, with so many opportunities to compare and contrast what can be learned about how people draw in the Quick, Draw! game.

Here’s a précis of the kinds of questions I might ask myself to explore the general question What can we learn from the data about how people draw cats in the Quick, Draw! game?

  • Are drawings of cats more likely to be heads only or the whole body? [I can take a sample of cat drawings, and then sort the drawings into heads vs bodies. From here, I could bootstrap a confidence interval for the population proportion – see the sketch after this list.]
  • Is how someone draws a cat linked to the game time? [I can use the same data as above, but compare game times for the two groups I’ve created – heads vs bodies. I could bootstrap a confidence interval for the difference of two population means/medians]
  • Is there a relationship between the number of strokes and the pause time for cat drawings? [And what do these two variables actually measure – I’ll need some contextual knowledge!]
  • Do people draw dogs similarly to cats in the Quick, Draw! game? [I could grab new samples of cat and dog drawings, sort all drawings into “heads” or “bodies”, and then bootstrap a confidence interval for the difference of two population proportions]
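For the first question, here’s a minimal sketch of the bootstrap step, assuming (hypothetically) that 32 of the 50 cat drawings I sampled and sorted were heads only – tools like iNZight Lite will do this visually, but the idea fits in a few lines:

```python
# Bootstrap a 95% confidence interval for the proportion of heads-only
# drawings. The sample counts (32 of 50) are hypothetical.
import random

sample = ["head"] * 32 + ["body"] * 18

boot_props = []
for _ in range(10_000):
    resample = random.choices(sample, k=len(sample))  # resample with replacement
    boot_props.append(resample.count("head") / len(resample))

boot_props.sort()
lower = boot_props[int(0.025 * len(boot_props))]  # 2.5th percentile
upper = boot_props[int(0.975 * len(boot_props))]  # 97.5th percentile
print(f"Bootstrap 95% CI for proportion heads-only: ({lower:.2f}, {upper:.2f})")
```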

Check out the tool and explore for yourself here: http://learning.statistics-is-awesome.org/different_strokes/

A little demo of the tool in action!

A simple app that only does three things

Here’s a scenario. You buy a jumbo bag of marshmallows that contains a mix of pink and white colours. Of the 120 in the bag, 51 are pink, which makes you unhappy because you prefer the taste of pink marshmallows.

Time to write a letter of complaint to the company manufacturing the marshmallows?

The thing we work so hard to get our statistics students to believe is that there’s this crazy little thing called chance, and it’s something we’d like them to consider for situations where random sampling (or something like that) is involved.

For example, let’s assume the manufacturing process overall puts equal proportions of pink and white marshmallows in each jumbo bag. This is not a perfect process – there will be variation – so we wouldn’t expect exactly half pink and half white for any one jumbo bag. But how much variation could we expect? We could get students to flip coins, with each flip representing a marshmallow, and heads representing white and tails representing pink. We then can collate the results for 120 marshmallows/flips – maybe the first time we get 55 pink – and discuss the need to do this process again to build up a collection of results. Often we move to a computer-based tool to get more results, faster. Then we compare what we observed – 51 pink – to what we have simulated.


Created using my learning.statistics-is-awesome.org/modelling-tool – yes, it should be two-tailed; no, my tool doesn’t allow this 🙁
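If you want to run this kind of simulation outside a dedicated tool, here’s a minimal sketch of the chance model in Python – the counts come straight from the marshmallow example above:

```python
# Each of the 120 marshmallows is pink with probability 0.5.
# How often does chance alone give 51 or fewer pink?
import random

def simulate_bag(n=120, p=0.5):
    """Count the pink marshmallows in one simulated bag."""
    return sum(random.random() < p for _ in range(n))

results = [simulate_bag() for _ in range(10_000)]
prop = sum(count <= 51 for count in results) / len(results)
print(f"Proportion of simulated bags with 51 or fewer pink: {prop:.3f}")
# (double this tail proportion for a two-tailed version, as noted above)
```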

I use these kinds of activities with my students, but I wanted something more, so I made a very simple app earlier this year. You can find it here: learning.statistics-is-awesome.org/threethings/. You can only do three things with it (in terms of user interactions) but in terms of learning, you can do way more than three things. Have a play!

In particular, you can show that models other than 50% (for the proportion of pink marshmallows) can also generate data (simulated proportions) consistent with the observed proportion. So, not being able to reject the model used for the test (50% pink) doesn’t mean the 50% model is the one true thing. There are others. Like I told my class – just because my husband and I are compatible (and I didn’t reject him), doesn’t mean I couldn’t find another husband similarly compatible.

Note: The app is in terms of percentages, because that aligns to our approach with NZ high school students when using and interpreting survey/poll results. However, I first use counts for any introductory activities before moving to percentages, as demonstrated with this marshmallow example. The app rounds percentages to the closest 1% to keep the focus on key concepts rather than focusing on (misleading) notions of precision. I didn’t design it to be a tool for conducting formal tests or constructing confidence intervals, more to support the reasoning that goes with those approaches.

Visualising bootstrap confidence intervals and randomisation tests with VIT Online

Simulation-based inference is taught as part of the New Zealand curriculum for Statistics at school level, specifically the randomisation test and bootstrap confidence intervals. Some of the reasons for promoting and using simulation-based inference for testing and for constructing confidence intervals are that:

  • students are working with data (rather than abstracting to theoretical sampling distributions)
  • students can see the re-randomisation/re-sampling process as it happens
  • the “numbers” that are used (e.g. tail proportion or limits for confidence interval) are linked to this process.

If we work with the output only, for example the final histogram/dot plot of re-sampled/bootstrap differences, in my opinion, we might as well just use a graphics calculator to get the values for the confidence interval 🙂

In our intro stats course, we use the suite of VIT (Visual Inference Tools) designed and developed by Chris Wild to construct bootstrap confidence intervals and perform randomisation tests. Below is an example of the randomisation test “in action” using VIT:

Last year, VIT was made available as a web-based app thanks to ongoing work by Ben Halsted! So, in this short post I’ll show how to use VIT Online with Google sheets – my two favourite tools for teaching simulation-based inference 🙂

1. Create a rectangular data set using a Google sheet. If you’re stuck for data, you can make a copy of this Google sheet which contains giraffe height estimates (see this Facebook post for context – read the comments!)

2. Under File -> Publish to web, choose the following settings (this will temporarily make your Google sheet “public” – just “unpublish” once you have the data in VIT Online)

Be careful to select “Sheet1” or whatever sheet you have your data in, not “Entire document”. Then, select “Comma-separated values (.csv)” for the type of file. Directly below is the link to your published data, which you need to copy for step 3.

3. Head to VIT Online -> https://www.stat.auckland.ac.nz/~wild/VITonline/index.html. Choose “Randomisation test” and copy the link from step 2 into the first text box. Then press the “Data from URL” button.

4. At this point, your data is in VIT Online, so you can go back and unpublish your Google sheet by going back to File -> Publish to web, and pressing the button that says “Stop publishing”.

The same steps work to get data from a Google spreadsheet into VIT Online for the other modules (bootstrapping etc.).

[Actually, the steps are pretty similar for getting data from a Google spreadsheet into iNZight Lite. Copy the published sheet link from step 2 into the appropriately named “paste/enter URL” text box under the File -> Import dataset menu option.]

In terms of how to use VIT Online to conduct the randomisation test, I’ll leave you with some videos by Chris Wild to take a look at (scroll down). Before I do, here are a couple of differences between the VIT Chris uses and VIT Online, and a couple of hints for using VIT Online with students.

You will need to hold down ctrl to select more than one variable before pressing the “Analyse” button, e.g. to select both the “Prompt” and “Height estimate in metres” variables in the giraffe data set.

Also, to define the statistic to be tested, in VIT Online you need to press the button that says “Precalculate Display” rather than “Record my choices” as shown in the videos.

Lastly, a really cool thing about VIT Online is that once you have copied over the URL for your published Google sheet, as long as you keep your Google sheet published, you can grab the URL from VIT Online to share with students e.g. https://www.stat.auckland.ac.nz/~wild/VITonline/randomisationTest/RVar.html?file=https://docs.google.com/spreadsheets/d/e/2PACX-1vTcaGSrAbGSntbrUoifNv8g048KJwEnBI–Rmmxqu1N0rb0VRUHoUkIeT-8xo3O9eqTUqZIML_EH523/pub?gid=0&single=true&output=csv&var=%20Prompt,c&var=Height%20estimate%20in%20metres,n. Sure, it’s not the nicest looking URL in the world, so use a URL shortener like bit.ly, goo.gl, tiny.cc etc. if sharing with students to type into their devices.

Note: VIT Online is not optimised to work on small screen devices, due to the nature of the visualisations. For example, it’s important that students can see all three panels at the same time during the process, and can see what is happening!

Now, here are those videos I promised 🙂

Game of data

This post is the second in a series of posts where I’m going to share some strategies for getting real data to use for statistical investigations that require sample to population inference. As I write them, you will be able to find them all on this page.

What’s your favourite board game?

I read an article posted on fivethirtyeight about the worst board games ever invented, and it got me thinking about the board games I like to play. The Game of Life has a low average rating on the online database of games referred to in this article, but I remember kind of enjoying playing it as a kid. boardgamegeek.com features user-submitted information about hundreds of thousands of games (not just board games) and is constantly being updated. While there are some data sets out there that already feature data from this website (e.g. from kaggle datasets), I am purposely demonstrating a non-programming approach to getting this data that maximises the participation of teachers and students in the data collection process.

To end up with data that can be used as part of a sample to population inference task:

  1. You need a clearly defined and nameable population (in this case, all board games listed on boardgamegeek.com)
  2. You need a sampling frame that is a very close match to your population.
  3. You need to select from your sampling frame using a random sampling method to obtain the members of your sample.
  4. You need to define and measure variables from each member of the sample/population so the resulting data is multivariate.

boardgamegeek.com actually provides a link that you can use to select one of the games on their site at random (https://boardgamegeek.com/boardgame/random), so using this “random” link (hopefully) takes care of (2) and (3) – and if you did want to automate this, see the sketch after the variable list below. For (4), there are so many potential variables that could be defined and measured. To decide on what variables to measure, I spent some time exploring the content of the webpages for a few different games to get a feel for what might make for good variables. I decided to stick to variables that are measured directly for each game, rather than ones that were based on user polls, and went with these variables:

  • Millennium the game was released (1000, 2000, all others)
  • Number of words in game title
  • Minimum number of players
  • Maximum number of players
  • Playing time in minutes (if a range was provided, the average of the limits was used)
  • Minimum age in years
  • Game type (strategy or war, family or children’s, other)
  • Game available in multiple languages (yes or no)
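As a hedged aside for anyone who does want a programmatic version of step (3): each request to the “random” link should redirect to a randomly chosen game page, and the final URL identifies the game. This is a sketch only – check boardgamegeek.com’s terms of use before automating anything, and remember that the Google form approach below needs no code at all:

```python
# Collect the page URLs of a few randomly chosen games.
import time
import requests

urls = set()
while len(urls) < 5:  # 5 distinct random games
    response = requests.get("https://boardgamegeek.com/boardgame/random")
    urls.add(response.url)  # the URL of the game page we were redirected to
    time.sleep(1)  # be polite: pause between requests

for url in sorted(urls):
    print(url)
```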

Time to play!

I’ve set up a Google form with instructions for how you can help create a random sample of games from boardgamegeek.com at this link: https://goo.gl/forms/8yBqryGTzrZGhEVx2. As people play along, the sample data will be added here: https://docs.google.com/spreadsheets/d/e/2PACX-1vSzR_VSVzaaeWpCvYbAQCUewaM3Tad2zfTBO7AWuDgFFTj5Jaq2TBo6N-gQGCe5e5t_qKW7Knuq6-pr/pub?gid=552938859&single=true&output=csv. The URL to the game is included so that the data can be checked. Feel free to copy and adapt however you want, but do keep in mind the nature of the variables you use. In particular, be very careful about using any of the aggregate ratings measures (another great article by fivethirtyeight about movie ratings explains some of the reasons why).

Bonus round

I wrote a post recently – Just Google it – which featured real data distributions. boardgamegeek.com also provides simple graphs of the ratings for each game, so we can play a similar matching game. You could also try estimating the mean and standard deviation of the ratings from the graph, with the added game feature of reverse ordering!

Which games do you think match which ratings graphs?

  1. Monopoly
  2. The Lord of the Rings: The Card Game
  3. Risk
  4. Tic-tac-toe

A

B

C

D

I couldn’t find a game that had a clear bimodal distribution for its ratings, but I reckon there must be games out there that people either love or hate 🙂 Let me know if you find one! To get students familiar with boardgamegeek.com, you could ask them to first search for their favourite game and then explore what information and ratings have been provided for this on the site. Let the games begin 🙂

Just Google it

Here’s a really quick idea for a matching activity, totally building off Pip Arnold’s excellent work on shape.

At the bottom of this post are six “Popular times” graphs generated today by Google when searching for the following places of interest:

  1. Cafe
  2. Shopping mall
  3. Library
  4. Swimming pool
  5. Gym
  6. Supermarket

Can you match which graphs go with which places? 🙂

[you can find the answers at the bottom]

A

 

B

 

C

 

D

 

E

 

F

Click here to reveal the answers

Finding real data for real data stories

This post is the first in a series of posts where I’m going to share some strategies for getting real data for real data stories, specifically to use for statistical investigations that require sample to population inference. As I write them, you will be able to find them all on this page.

Key considerations for finding real data for sample to population inference tasks

It’s really important that I stress that the approaches I’ll discuss are not necessarily what I would typically use when finding data to explore. Generally, I’d let the data drive the analysis, not let the analysis drive the data I try to find. These are specific examples so that the data that is obtained can be used sensibly to perform sample to population inference. It’s also really important to talk about why I’m stressing the above 🙂 In NZ we have specific standards that are designed to assess understanding of sample to population inference, using informal and formal methods that have been developed by exploring the behaviour of random samples from populations (AS91035, AS91264, AS91582). So, for the students’ learning about rules of thumb and confidence intervals to make sense, we need to provide students with clearly defined named populations with data that are (or are able to be) randomly sampled from these populations. At high school level at least, these strict conditions are in place so that students can focus on one central question: What can and can’t I say about the population(s) based on the random sample data?

For all the examples I’ll cover in this series of posts, there are four key considerations/requirements:

  1. You need a clearly defined and nameable population. Ideally this should be as simple and clear as possible to help students out, but to ensure (2) the “name” can end up being quite specific.
  2. You need a sampling frame that is a very close match to your population. This means you need a way to access every member of your population to measure stuff about them (variables). Sure, this is not the reality of what happens in the real world in terms of sampling, but remember what I said earlier about what was important 🙂
  3. You need to select from your sampling frame using a random sampling method to obtain the members of your sample. It is sufficient (and recommended) to stick to simple random sampling. In some cases, you may be able to make an assumption that what you have can be considered a random sample, but I’d prefer to avoid these kinds of situations where possible at high school level.
  4. You need to define and measure variables from each member of the sample/population. We want students working with multivariate data sets, with several options possible for numerical and categorical data (but don’t forget there is the option to create new variables from what was measured).

I’ll try to refer back to these four considerations/requirements when I discuss examples in the posts that will follow.

Just one very relevant NZ NCEA assessment-specific comment before we talk data. For AS91035 and AS91582, the standards state that students are to be provided with the sample multivariate data for the task – so all of (1), (2), (3) and (4) are done by the teacher. Similarly with AS91264, the requirement for the standard is that students select a random sample (3) from a provided population dataset – so (1), (2) and (4) are done by the teacher. This does not mean the students can’t do more in terms of the sampling/collecting processes, just that these are not requirements for the standards, and asking students to do more should not limit their ability to complete the task. I’ll try to give some ideas for how to manage any related issues in the examples.

Just one more point. I haven’t made this (5) in the previous section, but something to watch out for is the nature of your “cases”. Tables of data (which we refer to as datasets) that play nicely with statistical software like iNZight are ones where the data is organised so that each row is a case and each column is a variable. Typically at high school level, the datasets we use are ones where each case (e.g. each individual in the defined population) is measured directly to obtain different variables. Things can get a little tricky conceptually when some of the variables for a case are actually measured by grouping/aggregating related but different cases.

For example, if I take five movies from the internet movie database (imdb.com) that have “dog” in the title and another five with “cat” in the title, I could construct a mini dataset like the one below using information from the website:

For this dataset, each row is a different movie, so the cases are the movies. Each column provides a different variable for each movie. The variables Movie title, Year released, Movie length mins, Average rating, Number of ratings, Number photos and Genre were taken straight from the webpage for each movie. I created the variables Number words title, Number letters title, Average letters per word, Animal in title, Years since release and Millennium. [Something I won’t tackle in this post is what to do about the Genre variable to make this usable for analysis.]

The Average rating variable looks like a nice numerical variable to use, for example, to compare ratings of these movies with “dog” in the title and those with “cat”. The thing is, this particular variable has been measured by aggregating individuals’ ratings of the movie using a mean (the related but different cases here are the individuals who rated the movies). You can see why this may be an issue when you look at the variable Number of ratings, which again is an aggregate measure (a count) – some of these movies have received fewer than 200 ratings while others are in the hundreds of thousands. We also can’t see what the distribution of these individual ratings for each movie looks like, to decide whether the mean is telling us something useful about the ratings. [For some more really interesting discussion of using online movie ratings, check out this fivethirtyeight article.]

The variable Average letters per word has been measured directly from each case, using the characteristics of the movie title. There are still some potential issues with using the variable Average letters per word as a measure of, let’s say, complexity of words used in the movie title, since the mean is being used, but at least in this case students can see the movie title.

Another example of case awareness can be seen in the mini dataset below, using data on PhD candidates from the University of Auckland online directory:

For this dataset, each row is a different department, so the cases are the departments. Each column provides a different variable for each department. Gender was estimated based on the information provided in the directory, and the data may be inaccurate for this reason. The % of PhD candidates that are female looks like a nice numerical variable to use, for example, to compare gender rates between these departments from the Arts and Science faculties. Generally with numerical variables we would use the mean or median as a measure of central tendency. But this variable was measured by aggregating information about each PhD candidate in that department and presenting this measure as a percentage (the related but different cases here are the PhD candidates). Just think about it: does it really make sense to make a statement like The mean % of PhD candidates that are female for these departments of the Arts faculty is 73%, whereas the mean % of PhD candidates that are female for these departments of the Science faculty is 44%, especially when the number of PhD candidates varies so much between departments?

Looking at the individual percentages is interesting to see how they vary across departments, but combining them to get an overall measure for each faculty should involve calculating another percentage using the original counts of PhD candidates for each department (e.g. group by faculty). If I want to compare gender rates between the Arts and Science faculties for PhD candidates, I would calculate the proportion of all PhD candidates across these departments that are female for each faculty, e.g. 58% of the PhD candidates from these departments of the Arts faculty are female, and 53% of the PhD candidates from these departments of the Science faculty are female.
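Here’s a tiny sketch of why this matters, using two hypothetical departments of very different sizes – averaging the department percentages weights a small department the same as a large one, while pooling the original counts does not:

```python
# Two hypothetical departments with very different numbers of PhD candidates.
depts = [
    {"name": "Dept A", "female": 45, "total": 50},   # 90% female
    {"name": "Dept B", "female": 10, "total": 100},  # 10% female
]

mean_of_percentages = sum(d["female"] / d["total"] for d in depts) / len(depts)
pooled_percentage = sum(d["female"] for d in depts) / sum(d["total"] for d in depts)

print(f"Mean of department percentages: {mean_of_percentages:.0%}")  # 50%
print(f"Percentage from pooled counts:  {pooled_percentage:.0%}")    # 37%
```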

[If you’d like to read more about structuring data in the context of creating a dataset, then check out this excellent post by Rob Gould.]

Where to next?

This post was not supposed to deter you from finding and creating your own real datasets! But we do need to think carefully about the data that we provide to students, especially our high school students. Not all datasets are the same, and while I’ve seen some really cool and interesting ideas out there for finding/collecting data for investigations, some of these ideas unintentionally produce data that makes it very difficult for students to engage with the core question: What can and can’t I say about the population(s) based on the random sample data?

In the next post, I’ll discuss some examples of finding real data online. Until I find time to write this next post, check out these existing data finding posts:

Using awesome real data

Cat and whisker plots: sampling from the Quick, Draw! dataset

The power of pixels: Modelling with images

The power of pixels: Modelling with images

This post provides the notes for the plenary I gave for the Auckland Mathematical Association (AMA) about using images as a source of data for teaching statistical investigations.

You might be disappointed to find out that my talk (and this post) is not about the movie Pixels, as my husband initially thought it was. It’s probably a good thing I decided to focus on pixels in terms of data about a computer or digital image, as the box office data about Pixels the movie suggests that it didn’t perform so well 🙂 Instead, for this talk I presented some examples of using images as part of statistical investigations that (hopefully) demonstrated how the different combinations of humans, digital technologies, and modelling can lead to some pretty interesting data. The abstract for the talk is below:

How are photos of cats different from photos of dogs? How could someone determine where you come from based on how you draw a circle? How could the human job of counting cars at an intersection be cheaply replaced by technology? I will share some examples of simple models that I and others have developed to answer these kinds of questions through statistical investigations involving the analysis of both static and dynamic images. We will also discuss how the process of creating these models utilises statistical, mathematical and computational thinking.

As I was using a photo of my cat Elliot to explain the different ways we can use images to collect data, a really funny thing happened (see the embedded tweet below).

Yes, an actual real #statscat appeared in the room! What are the chances of that? 🙂

Pixels are the squares of colour that make up computer or digital (raster) images. Each image has a certain number of pixels, e.g. an image that is 41 pixels in width and 15 pixels in height contains 615 pixels, which is an obvious link to concepts of area. The 615 pixels are stored in an ordered list, so the computer knows how to display them, and each pixel contains information about colour. Using RGB colour values (other systems exist), each pixel contains information about the amounts of red, green and blue, each on a scale of 0 to 255 inclusive. Getting at the information about the pixels is going to require some knowledge of digital technologies, and so the use of images within statistical investigations can be a nice way to teach objectives from across the different curriculum learning areas.
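For example, here’s a minimal sketch of getting at the pixel information in Python using the Pillow library (“elliot.png” is a hypothetical stand-in for any photo you have locally):

```python
from PIL import Image

img = Image.open("elliot.png").convert("RGB")
width, height = img.size
print(f"{width} x {height} = {width * height} pixels")

r, g, b = img.getpixel((0, 0))  # colour of the top-left pixel
print(f"Top-left pixel: red={r}, green={g}, blue={b} (each 0-255)")
```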

Using images as a source of data can happen on at least three levels. Using the aforementioned photo of my cat Elliot, humans could extract data from the image by focusing on things they can see, for example, that the image is a black and white photo and not in colour, that there are two cats in the photo, and that Elliot does not appear to be smiling. Data that is also available about the image using digital tech includes variables such as the number of pixels, the file type and the file size. Data that can be generated using models related to this image could be identifying the most prominent shade of grey, the likelihood this photo will get more than 100 likes on Instagram, and what the photo is of (cat vs dog for example, a popular machine learning task).

Static images

The first example used the data, in particular the photos, collected as part of the ongoing data collection project I have running about cats and dogs (the current set of pet data cards can be downloaded here). As humans, we can look at images, notice things that are different, and these features can be used to create variables. For example, if you look at some of the photos submitted: some pets are outside while others are inside; some pets are looking at the camera while others are looking away from the camera; and some are “close ups” while others are taken from a distance.

These potential variables are all initially categorical, but by using digital technologies, numerical variables are also possible. To create a measure of whether a photo is a “close up” shot of a pet, the area the pet takes up of the photo can be measured. This is where pixels are super helpful. I used paint.net, a free image editing program, to show that if I trace around the dog in this photo using the lasso tool, the dog makes up about 61 000 pixels. If you compare this figure to the total number of pixels in the image (90 000), you can calculate the percentage the dog makes up of the photo (about 68%).
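Here’s that calculation as a small sketch, plus one assumed way to automate the pixel count: if the traced pet is saved as a cutout on a transparent background (“dog_cutout.png” is hypothetical), the non-transparent pixels are the pet:

```python
from PIL import Image

pet_pixels, total_pixels = 61_000, 90_000
print(f"The dog makes up {pet_pixels / total_pixels:.0%} of the photo")  # 68%

cutout = Image.open("dog_cutout.png").convert("RGBA")
alpha = cutout.getchannel("A")  # alpha channel: 0 means fully transparent
counted = sum(1 for a in alpha.getdata() if a > 0)
print(f"Counted {counted} pet pixels out of {cutout.width * cutout.height}")
```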

For the current set of pet data cards, each photo now has this percentage displayed. Based on this very small sample of six pets, it kind of looks like maybe cats typically make up a larger percentage of the photo than dogs, but I will leave this up to you to investigate using appropriate statistical modelling 🙂

For a pretty cool example of using static images, humans, digital technologies and models, you should take a look at how-old.net. As humans, we can look at photos of people and estimate their age and compare our estimates to people’s actual ages. What how-old.net has done is use machine learning to train a model to predict someone’s age based on the features of the photo submitted. I asked teachers at the talk to select which of the three photos they thought I looked the youngest in (most said B), which is the same photo that the how-old.net model predicted I looked the youngest in. A good teaching point about the model used by how-old.net is that it does get updated, as new data is used to refine its predictions.

You can also demonstrate how models can be evaluated by comparing what the model predicts to the actual value (if known). Fortunately I have a large number of siblings, and so a handy (and frequently used) range of different-aged people to test the how-old.net model. Students could use public figures, such as athletes, politicians, media personalities or celebrities, to compare each person’s actual age to what the model predicts (since it’s likely that both photos and ages are available on the internet).
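One simple way to summarise how well the model does – sketched below with hypothetical ages – is the mean absolute error of its predictions:

```python
# All ages below are made up for illustration only.
actual    = [34, 29, 41, 16, 62]  # the people's real ages
predicted = [31, 33, 38, 21, 55]  # ages returned by the model

errors = [abs(a - p) for a, p in zip(actual, predicted)]
mae = sum(errors) / len(errors)
print(f"Mean absolute error: {mae:.1f} years")  # 4.4 years here
```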

There is also the possibility of setting up an activity around comparing humans vs models – for the same set of photos, are humans better at predicting ages than how-old.net? Students could be asked to consider how they could set up this kind of activity, what photos they could use, and how they would decide who was better – humans or models?

Drawings

The next example used the set of drawings Google has made available from their Quick, Draw! game and artificial intelligence experiment. I’ve already written a post about this data set, so have a read of that post if you haven’t already 🙂 In this talk, I asked teachers to draw a quick sketch of a cat and then asked them to tell me whether they drew just the face, or the body as well (most drew the face and body – I’m not sure if the appearance of an actual cat during the talk influenced this at all!) I also asked them to think about how many times they lifted their pen off the paper. I probably forgot to say this at the time, but for some things humans are pretty good at providing data, while for others digital technologies are better. In the case of drawing and thinking about how many strokes you made while drawing, we would get more accurate data if we could measure this using a mouse, stylus or touchscreen than by asking people to remember.

Using the random sampler tool that I have set up, which allows you to choose one of the objects players have been asked to draw for Quick, Draw!, I generated a random sample of 200 of the drawings made when asked to draw a cat. The data that can be used from each drawing is a combination of what humans and digital technologies can measure. The drawing itself (similar to the photos of pets in the first example) can be used to create different variables, for example whether the sketch is of the face only, or the face and body. Other variables are also provided, such as the timestamp and country code, both examples of data that is captured from players of the game without them necessarily realising (e.g. digital traces).

After manually reviewing all 200 drawings and recording data about the variables, I used iNZight VIT to construct bootstrap confidence intervals for the proportion of all drawings made of cats in the Quick, Draw! dataset that were only of faces, and for the difference between the mean number of strokes made for drawings of cats that were of bodies and the mean number of strokes made for drawings of cats that were of faces. Interestingly, while the teachers at the talk mostly drew sketches of cats with bodies, most players of Quick, Draw! only sketch the faces of cats. This could be due to the 20 second time limit enforced when playing the game. It makes sense that, on average, Quick, Draw! players use more strokes to draw cats with bodies versus cats with just faces. I wished at the time that I had also recorded information about the other variables provided for each drawing, as it would have been good to further explore how the drawings compare, in terms of whether the game correctly identified more of the face-only drawings of cats than the body drawings.

What is also really interesting is the artificial intelligence aspect of the game. The video below explains this pretty well, but basically the model that is used to guess what object is being drawn is trained on what previous players of the game have drawn.

From a maths teacher’s perspective, this is a good example of what can go wrong with technology and modelling. For example, players are asked to draw a square, and because the model is trained on how they draw the object, players who draw four lines that meet at roughly right angles behave similarly from the machine’s perspective, because the technology is looking for commonalities between the drawings. What the technology is not detecting is that some players do not know what a square is, or think squares and rectangles are the same thing. So the data being used to train the model is biased. The consequence of this bias is that the model will now reinforce players’ misunderstanding that a rectangle is a square by “correctly” predicting they are drawing a square when they draw a rectangle! An interesting investigation I haven’t done yet would be to estimate what percentage of drawings made for squares are rectangles 🙂 I would also suggest checking out some of the other “shape” objects to see other examples e.g. octagons.

Using a more complex form of the Google Quick, Draw! dataset, Thu-Huong Ha and Nikhil Sonnad analysed over 100 000 of the drawings made of circles to show how language and culture influence sketches. For example, they found that 86% of the circles drawn by players in the US were drawn counterclockwise, while 80% of the circles drawn by players in Japan were drawn clockwise. To me, this is really fascinating stuff, and a really cool example of how using images as a source of data can result in really meaningful investigations about the world.

Animation

The last example I used was about using videos as a source of data for probability distribution modelling activities. I’ve presented some workshops before where I used a video (traffic.mp4) from a live streaming traffic camera positioned above a section of the motorway in Wellington. Focusing on the lane of traffic closest to the front of the screen, I got teachers to count how many cars arrived at a fixed point in that lane every five seconds. This gave us a nice set of data which we could then use to test the suitability of a Poisson distribution as a model.

For this talk, I wanted to demonstrate how humans could be replaced (potentially) by digital technologies and models. Since the video is a collection of images shown quickly (around 50 frames per second), we can use pixels, or potentially just a single pixel, in the images to measure various attributes of the cars. About a year ago, I set myself the challenge of exploring whether it would be possible to glean information about car counts, car colours etc. and shared my progress with this personal project at the end of the talk.

So, yes, there does exist pretty fancy video analysis software out there that I could use to extract the data I want, but I wanted to investigate whether I could use a combination of statistical, mathematical and computational thinking to create my own model to generate the data. As part of my PhD, I’m interested in finding out what activities could help introduce students to the modern art and science of learning from data, and what is nice about this example is that the idea of how the model could count how many cars are arriving every five seconds to a fixed point on the motorway is actually pretty simple, and so potentially a good entry point for students.

The basic idea behind the model is that when there are no cars at the point on the motorway, the pixel I am tracking is a certain colour. This colour becomes my reference colour for the model. Using the RGB colour system, for each frame/image in the traffic video, I can compare the current colour of the pixel e.g. rgb(100, 250, 141) to the reference colour e.g. rgb(162, 158, 162). As soon as the colour changes from the reference colour, I can infer this means a car has arrived at the point on the motorway. And as soon as the colour changes back to the reference colour, I can infer that the car has left the point on the motorway. While the car is moving past the point, I can also collect data on the colour of the pixel from each frame, and use this to determine the colour of the car.
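Here’s a minimal sketch of that detection idea for a single tracked pixel. The frame colours and the distance threshold are hypothetical, and my actual model lives in CODAP – a real version would also need to cope with noise, shadows and colour averaging:

```python
REFERENCE = (162, 158, 162)  # pixel colour when no car is present
THRESHOLD = 60               # arbitrary cut-off for "the colour has changed"

def colour_distance(c1, c2):
    """Euclidean distance between two RGB colours."""
    return sum((a - b) ** 2 for a, b in zip(c1, c2)) ** 0.5

# Stand-in for the tracked pixel's colour in each video frame.
frame_colours = [(162, 158, 162), (100, 250, 141), (98, 245, 139), (162, 158, 162)]

car_present = False
cars = 0
for colour in frame_colours:
    changed = colour_distance(colour, REFERENCE) > THRESHOLD
    if changed and not car_present:
        cars += 1          # colour just moved away from the reference: a car arrived
    car_present = changed  # back to the reference colour means the car has left
print(f"Cars counted: {cars}")  # 1 car in this tiny example
```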

I’m still working on the model (in that I haven’t actually modified it since I first played around with the idea last year) and the video below shows where my project within CODAP (Common Online Data Analysis Platform) is currently at. When I get some time, I will share the link to this CODAP data interactive so you and your students can play around with choosing different pixels to track and changing other parameters of the model I’ve developed 🙂

You might notice by watching this video that the model needs some work. The colours being recorded for each car are not always that good (average colour is an interesting concept in itself, and I’ve learned a lot more about how to work with colour since I developed the model) and some cars end up being recorded twice or not at all. But now that I’ve developed an initial model to count the cars that arrive every five seconds, I can compare the data generated from the model to the data generated by humans to see how well my model performed.

You can see that, at the moment, the data looks very different when comparing what the humans counted and what the digital tech + model counted. So maybe the job of traffic counter (my job during university!) is still safe – for now 🙂

Going crackers

I didn’t get time in the talk to show an example of a statistical investigation that used images (photos of animal crackers or biscuits) to create an informal prediction model. I’ll write about this in another post soon – watch this space!

Helping students to estimate mean and standard deviation

Estimating the mean and standard deviation of a discrete random variable is something we expect NZ students to be able to do by the time they finish Year 13 (Grade 12). The idea is that students estimate these properties of a distribution using visual features of a display (e.g. a dot plot) and, ideally, these measures are visually and conceptually attached to a real data distribution with a context and not treated entirely as mathematical concepts.

At the start of this year I went looking for an interactive dot plot to use when reviewing mean and standard deviation with my intro-level statistics students. Initially, I wanted something where I could drag dots around on a dot plot and show what happens to the mean, standard deviation etc. as I do this. Then I wanted something where you could drag dots on and off the dot plot, rather than having an initial starting dot plot, so students could build dot plots based on various situations. I came across a few examples of interactive-ish dot plots out there in Google-land but none quite did what I wanted (or kept the focus on what I wanted), so I decided to write my own. [Note: CODAP would have been my choice if I had just wanted to drag dots around. Extra note: CODAP is pretty awesome for many many reasons].

In my head as I developed the app was an activity I’ve used in the past to introduce standard deviation as a measure – Exploring statistical measures by estimating the ages of famous people – as well as a workshop by the awesome Christine Franklin. For NZ-based teachers (or teachers who want to come to beautiful New Zealand for our national mathematics teachers conference), Chris is one of the keynote speakers at the NZAMT 2017 conference and is running a workshop at this conference called Conceptualizing Variation from the Mean: Evolving from ‘Number of Steps’ to the ‘SAD’ to the ‘MAD’ to the ‘Standard Deviation’, which you should get along to if you can. Also in my head was the idea of the mean of a distribution being like the “balancing point”, and other activities I have used in the past based on this analogy and also see-saws! My teaching colleague Liza Bolton was also super helpful at listening to my ideas, suggesting awesome ones of her own, and testing the app throughout its various versions.

dots – an interactive dot plot

You can access dots at this address: learning.statistics-is-awesome.org/dots/ but you might want to keep reading to find out a little more about how it works 🙂 Below is a screenshot of the app, with some brief descriptions of how things are supposed to work. Current limitations for dots are that no more than 35 dots will be displayed, the axis is fixed between 0 and 34, and dots can only be placed on whole numbers. I had played around with making these aspects of the app more flexible, but then decided not to pursue this as I’m not trying to re-create graphing/statistical software with this interactive.

Since I’ve got the It’s raining cats and dogs (hopefully) project running, I thought I’d use some of the data collected so far to show a few examples of how to use dots. [Note: The data collection phase of the cats and dogs data cards project is still running, so you can get your students involved]. Here are 15 randomly selected cats from the data cards created so far, with the age of each cat removed.

Once you get past how cute these cats are, what do you think the mean age of these cats is (in years)? Can you tell which cat is the oldest? How much variation do you think there is between the ages of these cats?

Dragging dots onto the dot plot

A dot plot can be created by dragging dots onto the plot (don’t forget to add a label for the axis like I did!)

 

Sending data to the dot plot

You can also add the data and the label to the URL so that the plot is ready to go. Use the structure shown below to do this, and then click on the link to see the ages of these cats on the interactive dot plot.

learning.statistics-is-awesome.org/dots/#data=7,1,12,16,4,2,11,8,4,9,5,2,3,1,17&label=ages_of_cats_in_years

Turns out China is the oldest cat in this sample.

Exploring the balance point

You can click below the dots on the axis to indicate your estimate for the mean. You could do a couple of things after this. You could click the Mean button to show the mean, and check how this compares to your estimated mean. Or you could click the Balance test button to turn it on (green), and see how well the dots balance on the point you have estimated as the mean (or both, like I did).

 

Estimating standard deviation

Estimating standard deviation is hard. I try not to use “rules” that only work with Normally distributed-ish data (like take the range and divide by six) and that aren’t based on what the standard deviation is a measure of. Visualising standard deviation is also a tricky thing. In the video below I’ve gone with two approaches: one uses a Chrome extension (Web Paint) to draw on the plot where I think the average distance of each dot from the mean is, and one uses the absolute deviations.
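If you want to check estimates numerically, here’s a quick sketch using the cat ages from the sample above – the mean absolute deviation sits nicely alongside the standard deviation as a stepping stone:

```python
ages = [7, 1, 12, 16, 4, 2, 11, 8, 4, 9, 5, 2, 3, 1, 17]

n = len(ages)
mean = sum(ages) / n
mad = sum(abs(x - mean) for x in ages) / n            # mean absolute deviation
sd = (sum((x - mean) ** 2 for x in ages) / n) ** 0.5  # (population) standard deviation

print(f"mean = {mean:.1f}, MAD = {mad:.1f}, SD = {sd:.1f}")  # 6.8, 4.3, 5.1
```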

 

Using “random distribution”

This is the option I have used the most when working with students individually. Yes, there is no context when using this option, but in my conversations with students when talking about the mean and standard deviation, I’m not sure the lack of context makes it a non-concept-building activity. The short video below shows using the median as a starting point for the estimate of the mean, and then adjusting from there depending on other features of the distribution (e.g. shape). The video ends by dragging a dot around to see what happens to the different measures, since that was the starting point for developing dots 🙂

 

Other ideas for using dots?

Share them below the related Facebook post, on Twitter, or wherever – I’d be super keen to hear whether you find this interactive dot plot useful for teaching students how to estimate mean and standard deviation 🙂

PS no cats were harmed in the making of this GIF