Stats with Cats (and other animals)!

I wrote a guest post earlier this week for Allan Rossman’s excellent blog Ask Good Questions. If you aren’t already subscribed to Allan’s blog, you should be! He spent a year writing a new post every week, so there is a wealth of good advice and ideas for teaching statistics on his blog. Allan’s work with Beth Chance on teaching simulation-based inference has influenced a lot of what we teach in New Zealand, so you’ll also recognise some of the activities (and look out for the shout-out to New Zealand!).

The post I wrote features lots of photos of cats but, more importantly, gives you an idea of the kinds of activities I’ve been using to introduce statistics and data science students to coding. In the post, I also talk about the Popularity contest app that allows you to sample photos from Pixabay.

https://askgoodquestions.blog/2020/08/17/59-popularity-contest/

If cats aren’t your thing, then the guest post the week before features ladybugs and lizards and was written by none other than Christine Franklin – another amazing US-based statistics educator and researcher who fortunately loves visiting us here in New Zealand whenever she can!

https://askgoodquestions.blog/2020/08/10/58-lizards-and-ladybugs-illustrating-the-role-of-questioning/

And, by total coincidence, if counting spots is your thing then check out this new app I developed called 101 dalmatians!

https://learning.statistics-is-awesome.org/dalmatians/

Catch a random sample of dogs and use the sample to estimate the mean number of spots per dog for the small population of 101 dalmatians. For a bit of context, you could re-watch some of the original version of the movie first!

While we’re talking about dogs, awesome educator Julia Crawford (Cognition Education) shared this video on the Stats teachers NZ Facebook page as a good discussion starter for experiments.

I don’t have a dog, so I’ve tried out experiments with my cat Elliot. The video below was made for my students when we first went into lockdown (I also tried the cat and square challenge a few years ago). I’ve also added snippets from the videos the super awesome Dr Michelle Dalrymple and Emma Lehrke sent me of their dogs more successfully engaging with the activity!

Of course, don’t forget you can also contribute to the It’s raining cats and dogs (hopefully) project, by making a data card about each of your dogs or cats. I’m going to create the next set of cards soon, and include a digital platform to work with the cards (similar to Stickland).

And, 😺🐶😺🐶, how could we forget about emoji? Pip Arnold has been making and sharing a bunch of videos and resources for using CODAP with younger statistics students. Did you know you can use emoji in the sampler plugin for CODAP? Just copy them from a web page and paste them into the tool. When you use the emojis as values in formulae, just make sure to put “” quotes around them. You can see emoji in action below, and check out how this all was set up in CODAP here.

https://codap.concord.org/app/static/dg/en/cert/index.html#shared=https%3A%2F%2Fcfm-shared.concord.org%2FVw54ngaPhEKWL6wyRSyP%2Ffile.json

For more modelling activities, this time using TinkerPlots, check out Anne Patel’s presentation for the Auckland Mathematical Association. Her presentation covers a wide variety of important teaching ideas and resources, with lots of practical advice based on her nearly-finished PhD research. Sure, there’s nothing about cats or dogs but she does talk about Census At School, which doesn’t yet ask questions about dogs and cats but maybe could!


Don’t forget, if you’ve got a question about teaching statistics, then feel free to submit this question anonymously using the form below. Who knows? Your question might even inspire a new post 🙂

A small sample of ideas

While I continue to decide whether to quit Facebook, I’ve been trying to keep on top of my admin responsibilities for the Stats Teachers NZ Facebook group while keeping an eye on any stats-related posts on the NZ Maths teachers Facebook group. Since not everyone is on Facebook, I thought I’d do a quick post sharing some of the ideas for teaching stats I’ve recently shared within these groups.

How is the bootstrap confidence interval calculated?

The method of bootstrap confidence interval construction we use at high school level in NZ is to take the central 95% of the bootstrap distribution (the 1000 re-sampled means/medians/proportions/differences etc.), i.e. the interval between the 2.5th and 97.5th percentiles. There are other bootstrap methods (but we don’t cover these at high school level), and because of the percentile approach we use, you can get non-symmetrical confidence intervals.
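If it helps to see the percentile method written out, here’s a minimal sketch in Python (the sample numbers are made up; VIT does this kind of re-sampling for you, the sketch just shows where the interval limits come from):

```python
import numpy as np

rng = np.random.default_rng(42)

# A made-up sample of 20 measurements (any numeric sample works)
sample = np.array([152, 160, 148, 171, 158, 149, 163, 155, 167, 151,
                   159, 144, 170, 153, 162, 157, 146, 165, 150, 168])

# Re-sample from the original sample, with replacement, 1000 times,
# recording the mean of each re-sample
boot_means = np.array([rng.choice(sample, size=sample.size, replace=True).mean()
                       for _ in range(1000)])

# The bootstrap confidence interval is the central 95% of the bootstrap
# distribution, i.e. the 2.5th and 97.5th percentiles
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"Bootstrap 95% confidence interval for the mean: ({lower:.1f}, {upper:.1f})")
```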

Here are a couple of videos featuring Chris Wild talking about bootstrap confidence intervals: 

You can read more about the research project and development of VIT that informed the implementation of simulation-based inference for NZ statistics at the school level here: http://www.tlri.org.nz/sites/default/files/projects/9295_summary%20report_0.pdf

A quick but helpful article with more background about norm-based confidence intervals and bootstrap confidence intervals in terms of teaching: https://new.censusatschool.org.nz/wp-content/uploads/2012/08/Confidence-intervals-what-matters-most.pdf

A recent article by Mark Hooper for the SDSE (Statistics and Data Science Educator) provides an activity for introducing bootstrapping: https://sdse.online/posts/SDSE19-004/

Does the shape of the bootstrap distribution tell us anything about whether some values in a confidence interval are more likely to be the true value of the parameter?

All the values in the confidence interval are plausible in terms of the population parameter (well, except for the case of impossible values e.g. a negative value when estimating the mean length of a piece of string, or 0% when estimating a population proportion when your sample proportion was not 0%!). As an extra note, we often see skewness in the bootstrap distribution when using small samples whose distributions are skewed (since we resample from the original sample). Small samples are not that great at getting a feel for what the shape of the underlying/population distribution is.

Is a bootstrap confidence interval a 95% confidence interval?

Sample size is a key consideration here 🙂 With large sample sizes, the bootstrap method does “work” about 95% of the time, hence giving us 95% confidence. But, just like norm-based methods (e.g. using 1.96 × se), with small samples our confidence level will not be as high using the “central 95%” approach.
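If you want to check this for yourself, a small simulation does the trick: repeatedly take small samples from a skewed “population”, build a percentile bootstrap interval from each, and count how often the interval captures the true mean. Here’s a rough sketch in Python (the exponential population is just a convenient skewed example); with n = 10 the coverage usually lands noticeably under 95%:

```python
import numpy as np

rng = np.random.default_rng(1)
true_mean = 1.0            # mean of the Exponential(1) "population"
n, reps, B = 10, 1000, 1000

covered = 0
for _ in range(reps):
    sample = rng.exponential(scale=true_mean, size=n)
    boot_means = np.array([rng.choice(sample, size=n, replace=True).mean()
                           for _ in range(B)])
    lower, upper = np.percentile(boot_means, [2.5, 97.5])
    covered += (lower <= true_mean <= upper)

print(f"Coverage with n = {n}: {covered / reps:.1%}")  # usually well below 95%
```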

Can students use both the CL5 and CL6 rules when making a call? Why can’t the CL5 rule be used with sample sizes bigger than 40?

The rules are designed to scaffold student understanding of what needs to be taken into account when comparing samples to make inferences about populations. Once students learn and can use a higher level rule, they should use this rule by itself. The two rules use different features of the sample distributions and do not give the same “results”. If you use both at the same time, you are encouraging an approach where you select the method that will give you the result you want!

In terms of whether the rule “works”, we have to consider not just the cases where we “make a call” when we should, but also the cases where we “don’t make a call” when we should have. Yes, the CL5 rule “works” when applied to data from samples bigger than 40, in terms of finding “evidence of a true difference”. The problem is that for larger sample sizes, when using the CL5 rule, you become much more likely to conclude the data provides “no evidence of a true difference” when really it does. In this respect, the rule does not “get better” as you increase sample size 😊 It’s too stringent, which is why we move to higher curriculum level “rules” or approaches, ones where we learn to take sample size (among other things) into account.

If a sample size is larger, does that mean it is more representative of the population?

Let’s say you have access to 5000 people who voted for the National party in the last election, ask them whether they support Judith Collins as the next PM, and obtain a sample proportion. If you used this sample proportion to construct a confidence interval, it would have a small margin of error (narrow interval, high precision), BUT the confidence interval would probably “miss the target” if you were wanting to infer about the proportion of all NZers who support Judith Collins as the next PM, because of high bias/inaccuracy 🙂

It is important to know the “target” population for the inference you want to make using the sample, and to check that the sample you are using was taken from this population. In terms of teaching sample to population inference, we need to use a random sample from this population. Our inference methods only model sampling error (how random samples from populations behave), not nonsampling error (everything else that can go wrong, including the method used to select the sample). If we can’t use a random sample (which in practical terms is pretty difficult to obtain when your sampling frame is not a supplied dataset), then we need to consider how the sample was obtained and also be prepared to assume/indicate even more uncertainty for our inference, in addition to what we are modelling based on sampling variation 🙂

Watch out for a common student misconception that larger populations require larger samples. The population size is not important or relevant (unless you want to get into finite population corrections); it’s the size of the sample that matters in terms of quantifying sampling error. Hence why it was a question in my first stage stats test a couple of weeks ago!
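A quick way to demonstrate this is a simulation like the sketch below (Python, with made-up populations): take samples of the same size from a “small” and a “huge” population with the same underlying proportion, and compare the spread of the sample proportions; it comes out essentially the same.

```python
import numpy as np

rng = np.random.default_rng(7)
p, n, reps = 0.4, 500, 2000     # true proportion, sample size, number of samples

for pop_size in (10_000, 1_000_000):
    n_yes = int(p * pop_size)   # number of "yes" people in the population
    # hypergeometric = count of "yes" in a sample of n taken WITHOUT replacement
    sample_props = rng.hypergeometric(n_yes, pop_size - n_yes, n, size=reps) / n
    print(f"Population of {pop_size:>9}: SD of sample proportions = {sample_props.std():.4f}")

# Both come out close to sqrt(p(1-p)/n) ≈ 0.022 – the population size barely matters.
```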

A tool I developed that is handy for exploring confidence intervals for single proportions and the impact of sample size and the value of the sample proportion can be found here: https://learning.statistics-is-awesome.org/threethings/

How can you find good articles for 2.11 Evaluate a statistically based report?

I’ve written a little bit about finding and adapting statistical reports here. To summarise, I find newspaper articles are often not substantial enough, since 2.11 requires the report to be based on a survey and students need to be given enough info about how the survey-based study was carried out to be able to critique it. Often the executive summary from a national NZ-based survey works better (with some trimming and adaptation). I like NZ On Air surveys, and this recent one looks do-able with some adaptation: Children’s Media Use Survey 2020 – it even mentions TikTok!

Can you create links to iNZight Lite and VIT online with data pre-loaded?

Yes – I made a video about setting up data links to iNZight Lite here:

If you want to use the Time Series module with your data, just change the “land=visualize” part of the URL to “land=timeSeries”.
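If you’re generating a handful of these links (say, one per class data set), even a tiny script saves some fiddling. Here’s a sketch in Python; the URL below is a placeholder for your own iNZight Lite link with data pre-loaded:

```python
# Placeholder: paste your own iNZight Lite link (with data pre-loaded) here
visualize_url = "https://PASTE-YOUR-INZIGHT-LITE-LINK-HERE?data=...&land=visualize"

# Point the same data at the Time Series module instead
time_series_url = visualize_url.replace("land=visualize", "land=timeSeries")
print(time_series_url)
```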

What should a student do if they get negative forecasts from their time series model, when the variable being modelled can’t take on negative values?

You want the student to go back and take a look at the data! And then the model. And ask themselves – what’s gone wrong here? Is it how I’m modelling the trend? Or is it how I’m modelling the seasonality? Or both? Is the trend even “nicely behaved” enough to model? Same with the seasonality 🙂

Often the data shows why the model fitted will not do a good job, even before looking at the forecasts generated. We should be encouraging students to look at the data that was used to build the model, particularly for time series when we are focusing on modelling trend and seasonality. Students should be encouraged to ask – why is the model generating negative values for the forecast? How did it learn to do this from the data I used? Can I develop a better model?

Do you have more questions? Chuck them in the Google form below and I’ll see what I can do 🙂

Go big or go home!

On Tuesday, my good friend Dr Michelle Dalrymple won this year’s Prime Minister’s Science Teacher award. It was so great to be able to fly down to Christchurch with Maxine Pfannkuch to watch the live streaming of the award ceremony with Michelle, her family and her colleagues at Cashmere High School. Michelle was the first mathematics and statistics teacher to win the prize, and it couldn’t have gone to a more deserving teacher!

You can read more about the awesomeness of Michelle in the links below:

In her acceptance speech, Michelle thanked me for being her “statistics hero”. Well, turns out she’s also mine and here’s just one example of why!

A year or so after I moved from teaching high school statistics to teaching a very large introductory statistics course, I had a conversation with Michelle where I complained about how much I missed doing the kinds of hands-on interactive activities that are so important for teaching statistics. I told her what I was being told by others at the university level: that you just can’t do those kinds of things with large lectures, there are too many students, it won’t work, things could go wrong, not all the students will want to do this, etc.

Michelle listened to me first and then suggested that I try doing something small initially. She told me about one of her activities – comparing how long it takes to eat M&Ms using a plastic fork vs chopsticks – and suggested doing this with just 10 of my 500 students. She explained that I could ask for volunteers, bring them down to the front of the lecture theatre, record the data live, and then use this within the same lecture. I tried this activity out and it worked brilliantly – just imagine a whole lecture theatre of students cheering on students eating M&M’s!

In her pragmatic way, Michelle helped me remember that there’s always a way to do what you know is best for teaching and learning. Her encouragement and attitude to “make it happen” inspired the first of many interactive activities I have since developed to use in my teaching of intro stats. It’s natural to focus on the limitations that a teaching environment or system presents, especially for very large introductory statistics classes of over 300 students. But what Michelle helped me re-affirm in terms of my teaching approach for “large scale teaching” is that it can be more helpful and rewarding to think of the opportunities that working with such a large group of students offers.

Which is one of the reasons why we (Rhys Jones, Emma Lehrke and I) have set up a new sub-blog that focuses specifically on teaching large introductory statistics courses. It’s called “Go big or go home!”. In this blog we will share our experiences with trying to build more interactivity and engagement within our very large lecture-based classes. I know that many people reading this blog are statistics teachers based at the school level, so I haven’t assumed you will want to receive emails about new posts for this sub-blog. Check out the Go big or go home! blog if you’re interested in reading more and subscribing.

Age is just a number

Today, at a teaching and learning symposium, I demonstrated some in-class interactive activities that I had developed for my super large intro statistics lectures. I’ve shared a summary of the activities and the data below.

Quick summary of the activity

1. Head to how-old.net, upload or take a photo of yourself, and record the age given by the #HowOldRobot

2. Complete a Google form (or similar) with your actual age and the age given by the #HowOldRobot

3. Explore the data collected using an awesome free online tool like iNZight Lite (click the link to jump through with the data)

4. Watch this short video featuring Joy Buolamwini to learn more about facial recognition software and discuss algorithmic bias

Some other ideas

If you haven’t already, check out learning.statistics-is-awesome.org/different_strokes/, where you can sample some cat (and other) drawings and learn more about how people draw in the Google game Quick, Draw!

I also get students to draw things in class and use their drawings as data. Below are all the drawings of cats made from the demonstration today, and also from the awesome teachers who helped me out last night. If you click/touch and hold a drawing you will be able to drag it around. How many different ways can you sort the drawings into groups?

Follow the data!

Last week I was down in Wellington for the VUW NZCER NZAMT16 Mathematics & Statistics Education Research Symposium, as well as for the NZAMT16 teacher conference. It was a huge privilege to be one of the keynote speakers and my keynote focused on teaching data science at the school level. I used the example of following music data from the New Zealand Top 40 charts to explore what new ways of thinking about data our students would need to learn (I use “new” here to mean “not currently taught/emphasised”).

It was awesome to be back in Wellington, as not only did I complete a BMus/BSc double degree at Victoria University, I actually taught music at Hutt Valley High School (the venue for the conference) while I was training to become a high school teacher (in maths/stats and music). I didn’t talk much in my keynote about the relationship between music and data analysis, but I did describe my thoughts a few years ago (see below):

All music has some sort of structure sitting behind it, but the beauty of music is in the variation. When you learn music, you learn about key ideas and structures, but then you get to hear how these same key ideas and structures can be used to produce so many different-sounding works of art. This is how I think we need to help students learn statistics – minimal structure, optimal transfer, maximal experience. Imagine how boring it would be if students learning music only ever listened to Bach.

https://www.stat.auckland.ac.nz/en/about/news-and-events-5/news/news-2017/2017/08/the-art-of-teaching-statistics-to-teenagers.html

Due to some unforeseen factors, I ended up ZOOMing my slides from one laptop at the front of the hall to another laptop in the back room which was connected to the data projector. Since I was using ZOOM, I decided to record my talk. However, the recording is not super awesome due to not really thinking about the audio side of things (ironically). If you want to try watching the video, I’ve embedded it below:

You can also view the slides here: bit.ly/followthedataNZAMT. I’m not sure they make a whole lot of sense by themselves, so here’s a quick summary of some of what I talked about:

  • Currently, we pretty much choose data to match the type of analysis we want to teach, and then “back fit” the investigative problem to this analysis. This is not totally a bad thing: we do it in the hope that when students are out there in the real world, they think about all the analytical methods they’ve learned and choose the one that makes sense for the thing they don’t know and the data they have to learn from. But there’s a whole lot of data out there, coming from the computational world our students live in, that we don’t currently teach students how to learn from. If we “follow the data” that students are interacting with, what “new” ways of thinking will our students need to make sense of this data?
  • Album covers are a form of data, but how do we take something we can see visually and turn it into “data”? For the album covers I used from one week of 1975 and one week of 2019, we can see that the album covers from 1975 are not as bright and vibrant as those from 2019; similarly, we can see that people’s faces feature more in the 1975 album covers. We could use the image data for each album cover, extract some overall measure of colour and use this to compare 1975 and 2019 (there’s a rough brightness sketch after this list). But what measure should we use? What is luminosity, saturation, hue, etc.? How could we overfit a model to predict the year of an album cover by creating lots of super specific rules? What pre-trained models can we use for detecting faces? How are they developed? How well do they work? What’s this thing called a “confusion matrix”?
  • An intended theme across my talk was to compare what humans can do (and to start with this), with what we could try to get computers to do, and also to emphasise how important human thinking is. I showed a video of Joy Buolamwini talking about her Gender Shades project and algorithmic bias: https://www.youtube.com/watch?v=TWWsW1w-BVo and tried to emphasise that we can’t teach about fun things we can do with machine learning etc. without talking about bias, data ethics, data ownership, data privacy and data responsibility. In her video, Joy uses faces of members of parliament – did she need permission to use these people’s faces for her research project since they were already public on websites? What if our students start using photos of our faces for their data projects?
  • I played the song that was number one the week I was born (tragedy!) as a way to highlight the calendar feature of the nztop40 website – as long as you were born after 1975, you can look up your song too. Getting students to notice the URL and how it changes as you navigate a web page is a useful skill – in this case, if you navigate to different chart weeks, you can notice that the “chart id” number changes. We could “hack” the URL to get the chart data for different weeks of the years available (there’s a rough sketch of this idea after this list). If the website terms and conditions allow us, we could also use “web scraping” to automate the collection of chart data from across a number of weeks. We could also set up a “scheduler” to copy the chart data as it appears each week. But then we need to think about what each row in our super data set represents and what visualisations might make sense to communicate trends, features, patterns etc. I gave an example of a visualisation of all the singles that reached number one during 2018, and we discussed things I had decided to do (e.g. reversing the y axis scale) and how the visualisation could be improved [data visualisation could be a whole talk in itself!!!].
  • There are common ways we analyse music – things like key signature, time signature, tempo (speed), genre/style, instrumentation etc. – but I used one that I thought would not be too hard to teach during the talk: whether a song is in the major or minor key. However, listening to music first was really just a fun “gateway” to learn more about how the Spotify API provides “audio features” about songs in its database, in particular features generated using supervised machine learning. According to Spotify, the Ed Sheeran song Beautiful people is in the minor key, but I (and the guitar chords published online) think it’s in the major key. What’s the lesson here? We can’t just take data that comes from a model as being the truth.
  • I also wanted to talk more about how songs make us feel, extending thinking about the modality of the song (major = happy, minor = sad) to the lyrics used in the song as well. How can we take a set of lyrics for a song and analyse these in terms of overall sentiment – positive or negative? There are lots of approaches, but a common one is to treat each word independently (“bag of words”) and to use a pre-existing lexicon (see the bag-of-words sketch after this list). The slides show the different ways I introduce this type of analysis, but the important point is how common it is to transfer a model trained within one data context (for the bing lexicon, customer reviews online) and use it for a different data context (in this case, music lyrics). There might just be some issues with doing this though!
  • Overall, what I tried to do in this talk was not to showcase computer programming (coding) and mathematics, since often we make these things the “star attraction” in talks about data science education. The talk I gave was totally “powered by code” but do we need to start with code in our teaching? When I teach statistics, I don’t start with pulling out my calculator! We start with the data context. I wanted to give real examples of ways that I have engaged and supported all students to participate in learning data science: by focusing on what humans think, feel and see in the modern world first, then bringing in (new) ways of thinking statistically and computationally, and then teaching the new skills/knowledge needed to support this thinking.
  • We have an opportunity to introduce data science in a real and meaningful way at the school level, and we HAVE to do this in a way that allows ALL students to participate – not just those in enrichment/extension classes, coding clubs, and schools with access to flash technology and gadgets. While my focus is the senior levels (Years 11 to 13), the modern world of data gives so many opportunities for integrating statistical and computational thinking to learn from data across all levels. We need teachers who are confident with exploring and learning from modern data, and we need new pedagogical approaches that build on the effective ones crafted for statistics education. We need to introduce computational thinking and computer programming/coding (which are not the same things!) in ways that support and enrich statistical thinking.
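For the album-cover bullet above, here’s a minimal sketch (Python, using the Pillow imaging library) of one very crude “overall brightness” measure you could extract from a saved cover image. The filenames are placeholders, and there are plenty of fancier measures (luminosity, saturation, hue) you could compute instead:

```python
from PIL import Image  # pip install Pillow

def mean_brightness(path):
    """A very crude measure: the average greyscale pixel value (0 = black, 255 = white)."""
    img = Image.open(path).convert("L")   # convert the image to greyscale
    pixels = list(img.getdata())
    return sum(pixels) / len(pixels)

# Placeholder filenames – swap in your own saved album-cover images
print(mean_brightness("cover_1975.jpg"))
print(mean_brightness("cover_2019.jpg"))
```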
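And for the “hack the URL” idea, here’s a rough sketch of what a loop over chart weeks could look like in Python. The URL pattern and chart ids are made up – check the actual URL in your browser, and the site’s terms and conditions, before collecting anything:

```python
import time
import requests

# HYPOTHETICAL URL pattern – the real one is whatever you see in your browser's address bar
CHART_URL = "https://nztop40.co.nz/chart/singles?chart={chart_id}"

def fetch_chart(chart_id):
    """Download the HTML for one chart week (parsing out the songs is a separate exercise)."""
    response = requests.get(CHART_URL.format(chart_id=chart_id), timeout=30)
    response.raise_for_status()
    return response.text

for chart_id in range(5000, 5005):   # made-up chart ids for a handful of weeks
    html = fetch_chart(chart_id)
    print(chart_id, len(html), "characters of HTML")
    time.sleep(2)                    # be polite: pause between requests
```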
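Finally, for the lyrics bullet, the “bag of words” idea really is this simple: score each word on its own against a lexicon and add the scores up. Here’s a sketch with a tiny made-up lexicon (in practice you’d use a published one such as the bing lexicon):

```python
# Tiny made-up lexicon – a stand-in for a published one such as the bing lexicon
lexicon = {"love": 1, "beautiful": 1, "happy": 1, "lonely": -1, "cry": -1, "bad": -1}

def bag_of_words_sentiment(lyrics):
    """Score lyrics by treating each word independently and summing the word scores."""
    words = lyrics.lower().replace(",", " ").split()
    return sum(lexicon.get(word, 0) for word in words)

# A made-up snippet of lyrics, just to show the mechanics
print(bag_of_words_sentiment("we are, we are, we are beautiful people"))
```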

If you are a NZ-based teacher, and you are interested in learning more about teaching data science, then please use the “sign-up” form at undercoverdata.science (the “password” is datascience4everyone). I’ll be sending out some emails soon, probably starting with learning more about APIs (for an API in action, check out learning.statistics-is-awesome.org/popularity_contest/ ).

Different strokes?

Example of sorted data cards

Recently I’ve been developing and trialling learning tasks where the learner is working with a provided data set but has to do something “human” that motivates using a random sample as part of the strategy to learn something from the data.

Since I already had a tool that creates data cards from the Quick, Draw! data set, I’ve created a prototype for the kind of tool that would support this approach using the same data set.

I’ve written about the Quick, Draw! data set already:

For this new tool, called different strokes, users sort drawings into two or more groups based on something visible in the drawing itself. Since you have to drag the drawings around to manually “classify” them, the larger the sample you take, the longer it will take you.

There’s also the novelty and creativity of being able to create your own rules for classifying drawings. I’ll use cats for the example below, but from a teaching and assessment perspective there are SO many drawings of so many things and so many variables, with so many opportunities to compare and contrast what can be learned about how people draw in the Quick, Draw! game.

Here’s a precis of the kinds of questions I might ask myself to explore the general question What can we learn from the data about how people draw cats in the Quick, Draw! game?

  • Are drawings of cats more likely to be heads only or the whole body? [I can take a sample of cat drawings, and then sort the drawings into heads vs bodies. From here, I could bootstrap a confidence interval for the population proportion].
  • Is how someone draws a cat linked to the game time? [I can use the same data as above, but compare game times between the two groups I’ve created – heads vs bodies. I could bootstrap a confidence interval for the difference of two population means/medians]
  • Is there a relationship between the number of strokes and the pause time for cat drawings? [And what do these two variables actually measure – I’ll need some contextual knowledge!]
  • Do people draw dogs similarly to cats in the Quick, Draw! game? [I could grab new samples of cat and dog drawings, sort all drawings into “heads” or “bodies”, and then bootstrap a confidence interval for the difference of two population proportions – see the sketch below]
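For that last question, the bootstrap for a difference of two proportions is only a few lines of code. Here’s a rough sketch in Python with made-up 0/1 data (1 = “heads only”, 0 = “whole body”):

```python
import numpy as np

rng = np.random.default_rng(3)

# Made-up samples: 1 = drawing shows a head only, 0 = whole body
cats = (rng.random(60) < 0.7).astype(int)   # pretend 70% of sampled cat drawings are heads
dogs = (rng.random(60) < 0.5).astype(int)   # pretend 50% of sampled dog drawings are heads

# Re-sample each group with replacement and record the difference in proportions
boot_diffs = [rng.choice(cats, size=cats.size, replace=True).mean()
              - rng.choice(dogs, size=dogs.size, replace=True).mean()
              for _ in range(1000)]

lower, upper = np.percentile(boot_diffs, [2.5, 97.5])
print(f"Bootstrap 95% CI for the difference in proportions (cats - dogs): ({lower:.2f}, {upper:.2f})")
```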

Check out the tool and explore for yourself here: http://learning.statistics-is-awesome.org/different_strokes/

A little demo of the tool in action!

A simple app that only does three things

Here’s a scenario. You buy a jumbo bag of marshmallows that contains a mix of pink and white colours. Of the 120 in the bag, 51 are pink, which makes you unhappy because you prefer the taste of pink marshmallows.

Time to write a letter of complaint to the company manufacturing the marshmallows?

The thing we work so hard to get our statistics students to believe is that there’s this crazy little thing called chance, and it’s something we’d like them to consider for situations where random sampling (or something like that) is involved.

For example, let’s assume the manufacturing process overall puts equal proportions of pink and white marshmallows in each jumbo bag. This is not a perfect process, there will be variation, so we wouldn’t expect exactly half pink and half white for any one jumbo bag. But how much variation could we expect? We could get students to flip coins, with each flip representing a marshmallow, and heads representing white and tails representing pink. We then can collate the results for 120 marshmallows/flips – maybe the first time we get 55 pink – and discuss the need to do this process again to build up a collection of results. Often we move to a computer-based tool to get more results, faster. Then we compare what we observed – 51 pink – to what we have simulated.


Created using my learning.statistics-is-awesome.org/modelling-tool, yes it should be two-tailed, no my tool doesn’t allow this 🙁
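If you’d rather script the simulation than click through a tool, here’s a sketch of the same idea in Python: simulate lots of 120-marshmallow bags under the 50% model (or any other model you want to try) and see how unusual 51 pink really is. The 120, 51 and 50% are straight from the scenario above; the binomial draw is just a fast way of “flipping 120 coins”.

```python
import numpy as np

rng = np.random.default_rng(2020)

def simulate_pink_counts(p_pink, bag_size=120, reps=1000):
    """Simulate `reps` jumbo bags and return the number of pink marshmallows in each."""
    return rng.binomial(bag_size, p_pink, size=reps)

counts = simulate_pink_counts(0.5)
print("Proportion of simulated bags with 51 or fewer pink:", (counts <= 51).mean())
# (A two-tailed version would also count the bags with 69 or more pink.)

# Other models can also generate data consistent with 51 out of 120 – try a few:
for p in (0.40, 0.45, 0.50):
    counts = simulate_pink_counts(p)
    print(f"p = {p}: middle 95% of simulated pink counts runs from "
          f"{np.percentile(counts, 2.5):.0f} to {np.percentile(counts, 97.5):.0f}")
```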

I use these kinds of activities with my students, but I wanted something more, so I made a very simple app earlier this year. You can find it here: learning.statistics-is-awesome.org/threethings/. You can only do three things with it (in terms of user interactions), but in terms of learning, you can do way more than three things. Have a play!

In particular, you can show that models other than 50% (for the proportion of pink marshmallows) can also generate data (simulated proportions) consistent with the observed proportion. So, not being able to reject the model used for the test (50% pink) doesn’t mean the 50% model is the one true thing. There are others. Like I told my class – just because my husband and I are compatible (and I didn’t reject him), doesn’t mean I couldn’t find another husband similarly compatible.

Note: The app is in terms of percentages, because that aligns to our approach with NZ high school students when using and interpreting survey/poll results. However, I first use counts for any introductory activities before moving to percentages, as demonstrated with this marshmallow example. The app rounds percentages to the closest 1% to keep the focus on key concepts rather than focusing on (misleading) notions of precision. I didn’t design it to be a tool for conducting formal tests or constructing confidence intervals, more to support the reasoning that goes with those approaches.

Visualising bootstrap confidence intervals and randomisation tests with VIT Online

Simulation-based inference is taught as part of the New Zealand curriculum for Statistics at school level, specifically the randomisation test and bootstrap confidence intervals. Some of the reasons for promoting and using simulation-based inference for testing and for constructing confidence intervals are that:

  • students are working with data (rather than abstracting to theoretical sampling distributions)
  • students can see the re-randomisation/re-sampling process as it happens
  • the “numbers” that are used (e.g. tail proportion or limits for confidence interval) are linked to this process.

If we work with the output only, for example the final histogram/dot plot of re-sampled/bootstrap differences, in my opinion, we might as well just use a graphics calculator to get the values for the confidence interval 🙂

In our intro stats course, we use the suite of VIT (Visual Inference Tools) designed and developed by Chris Wild to construct bootstrap confidence intervals and perform randomisation tests. Below is an example of the randomisation test “in action” using VIT:

Last year, VIT was made available as a web-based app thanks to ongoing work by Ben Halsted! So, in this short post I’ll show how to use VIT Online with Google sheets – my two favourite tools for teaching simulation-based inference 🙂

1. Create a rectangular data set using a Google sheet. If you’re stuck for data, you can make a copy of this Google sheet which contains giraffe height estimates (see this Facebook post for context – read the comments!)

2. Under File –> Publish to web, choose the following settings (this will temporarily make your Google sheet “public” – just “unpublish” once you have the data in VIT Online)

Be careful to select “Sheet1”, or whichever sheet your data is in, not “Entire document”. Then select “Comma-separated values (.csv)” as the type of file. Directly below is the link to your published data, which you need to copy for step 3.

3. Head to VIT online –>  https://www.stat.auckland.ac.nz/~wild/VITonline/index.html. Choose “Randomisation test” and copy the link from step 2 into the first text box. Then press the “Data from URL” button.

4. At this point, your data is in VIT online, so you can go back and unpublish your Google sheet by going back to File –> Publish to web, and pressing the button that says “Stop publishing”.

The same steps work to get data from a Google spreadsheet into VIT online for the other modules (bootstrapping etc.).

[Actually, the steps are pretty similar for getting data from a Google spreadsheet into iNZight lite. Copy the published sheet link from step 2 in the appropriately named “paste/enter URL” text box under the File –> Import dataset menu option.]
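One handy side effect of “Publish to web” is that the published link is just a plain CSV file, which is why the same link works for both VIT Online and iNZight Lite. As a quick sanity check of a published sheet, here’s a couple of lines of Python with pandas (the URL below is a placeholder for your own published link):

```python
import pandas as pd

# Placeholder – paste the "Publish to web" CSV link for your own sheet here
published_csv = "https://docs.google.com/spreadsheets/d/e/XXXX/pub?gid=0&single=true&output=csv"

data = pd.read_csv(published_csv)
print(data.head())   # check the variable names and first few rows look right
```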

In terms of how to use VIT online to conduct the randomisation test, I’ll leave you with some videos by Chris Wild to take a look at (scroll down). Before I do, just a couple of differences between the VIT Chris uses and VIT Online and a couple of hints for using VIT Online with students.

You will need to hold down ctrl to select more than one variable before pressing the “Analyse” button e.g. to select both the “Prompt” and “Height estimate in metres” variables in the giraffe data set.

Also, to define the statistic to be tested, in VIT Online you need to press the button that says “Precalculate Display” rather than “Record my choices” as shown in the videos.

Lastly, a really cool thing about VIT Online is that once you have copied over the URL for your published Google sheet, as long as you keep your Google sheet published, you can grab the URL from VIT Online to share with students e.g. https://www.stat.auckland.ac.nz/~wild/VITonline/randomisationTest/RVar.html?file=https://docs.google.com/spreadsheets/d/e/2PACX-1vTcaGSrAbGSntbrUoifNv8g048KJwEnBI–Rmmxqu1N0rb0VRUHoUkIeT-8xo3O9eqTUqZIML_EH523/pub?gid=0&single=true&output=csv&var=%20Prompt,c&var=Height%20estimate%20in%20metres,n. Sure, it’s not the nicest looking URL in the world, so use a URL shortener like bit.ly, goo.gl, tiny.cc etc. if sharing with students to type into their devices.

Note: VIT Online is not optimised to work on small screen devices, due to the nature of the visualisations. For example, it’s important that students can see all three panels at the same time during the process, and can see what is happening!

Now, here are those videos I promised 🙂

Game of data

This post is the second in a series of posts where I’m going to share some strategies for getting real data to use for statistical investigations that require sample to population inference. As I write them, you will be able to find them all on this page.

What’s your favourite board game?

I read an article posted on fivethirtyeight about the worst board games ever invented and it got me thinking about the board games I like to play. The Game of life has a low average rating on the online database of games referred to in this article but I remember kind of enjoying playing it as a kid. boardgamegeek.com features user-submitted information about hundreds of thousands of games (not just board games) and is constantly being updated. While there are some data sets out there that already feature data from this website (e.g. from kaggle datasets), I am purposely demonstrating a non-programming approach to getting this data that maximises the participation of teachers and students in the data collection process.

To end up with data that can be used as part of a sample to population inference task:

  1. You need a clearly defined and nameable population (in this case, all board games listed on boardgamegeek.com)
  2. You need a sampling frame that is a very close match to your population.
  3. You need to select from your sampling frame using a random sampling method to obtain the members of your sample.
  4. You need to define and measure variables from each member of the sample/population so the resulting data is multivariate.

boardgamegeek.com actually provides a link that you can use to select one of the games on their site at random (https://boardgamegeek.com/boardgame/random), so using this “random” link (hopefully) takes care of (2) and (3). For (4), there are so many potential variables that could be defined and measured. To decide on what variables to measure, I spent some time exploring the content of the webpages for a few different games to get a feel for what might make for good variables. I decided to stick to variables that are measured directly for each game, rather than ones that were based on user polls, and went with these variables:

  • Millennium the game was released in (1000, 2000, all others)
  • Number of words in game title
  • Minimum number of players
  • Maximum number of players
  • Playing time in minutes (if a range was provided, the average of the limits was used)
  • Minimum age in years
  • Game type (strategy or war, family or children’s, other)
  • Game available in multiple languages (yes or no)

Time to play!

I’ve set up a Google form with instructions for how you can help create a random sample of games from boardgamegeek.com at this link: https://goo.gl/forms/8yBqryGTzrZGhEVx2. As people play along, the sample data will be added here: https://docs.google.com/spreadsheets/d/e/2PACX-1vSzR_VSVzaaeWpCvYbAQCUewaM3Tad2zfTBO7AWuDgFFTj5Jaq2TBo6N-gQGCe5e5t_qKW7Knuq6-pr/pub?gid=552938859&single=true&output=csv . The URL to the game is included so that the data can be checked. Feel free to copy and adapt however you want, but do keep in mind the nature of the variables you use. In particular, be very careful about using any of the aggregate ratings measures (another great article by fivethirtyeight about movie ratings explains some of the reasons why).

Bonus round

I wrote a post recently – Just Google it – which featured real data distributions. boardgamegeek.com also provides simple graphs of the ratings for each game, so we can play a similar matching game. You could also try estimating the mean and standard deviation of the ratings from the graph, with the added game feature of reverse ordering!

Which games do you think match which ratings graphs?

  1. Monopoly
  2. The Lord of the Rings: The Card Game
  3. Risk
  4. Tic-tac-toe

A

B

C

D

I couldn’t find a game that had a clear bi-modal distribution for its ratings but I reckon there must be games out there that people either love or hate 🙂 Let me know if you find one! To get students familiar with boardgamegeek.com, you could ask them to first search for their favourite game and then explore what information and ratings have been provided for this on the site. Let the games begin 🙂

Just Google it

Here’s a really quick idea for a matching activity, totally building off Pip Arnold’s excellent work on shape.

At the bottom of this post are six “Popular times” graphs generated today by Google when searching for the following places of interest:

  1. Cafe
  2. Shopping mall
  3. Library
  4. Swimming pool
  5. Gym
  6. Supermarket

Can you match which graphs go with which places? 🙂

[you can find the answers at the bottom]

A

B

C

D

E

F

Click here to reveal the answers