Because I had nothing to do on Sunday but babysit my sourdough starter, I decided to do an analysis that has always interested me: finding the distribution of ages among a random sample of coins.
The setup is as follows. About twice a year I empty my 32-ounce glass of change at my local Safeway’s CoinStar. At that point I start filling the empty glass with change generated by my daily cash purchases. So this is a fairly recent, moderately sized sample of coins.
I emptied the glass onto my coffee table, opened up a beer, and started separating. Each coin was placed in a stack of other coins of the same type that shared the same year. In this manner I ended up with about 150 stacks of coins dating from 1940 through 2006. I then added each stack’s count (that is, the number of that coin type for that particular year) to an Excel sheet for some analysis.
First, here you can see the totals for each year and that year’s contributions based on coin type:
As one might expect, the second most recent complete year (2004) was the most common mint date among the coins in my collection. However, there were two surprises.
First, my oldest coin was a nickel struck in 1940 at the Philadelphia mint. My second oldest was a nickel struck in 1941 at the San Francisco mint. I find it curious that these dates fall 20 years before the next oldest nickel and only a couple of years after the famous Buffalo nickel was discontinued in 1938. As it turns out, my 1941 San Francisco mint nickel would be worth either $900 or $9,000, depending on how you read the value tables. Too bad mine is in crap condition.
Here is another graph showing each coin type’s count by year relative to the others:
My first observation from this chart is the incredible number of 1965 quarters. There were 14 quarters from 1965 in my money glass. That’s more than any year all the way up until 2000! Notably, 1965 was the first year the mint struck quarters without silver. Coins from before that date would obviously have been grabbed by anyone collecting silver, but why a surplus of that particular year would appear is pure speculation.
Another interesting observation is the distribution’s correlation to the number of coins handed back in an “average” purchase. Assuming a uniform distribution, each purchase should return 2.47 pennies, 0.98 nickels, 1.31 dimes, and 1.97 quarters. If you look at my distribution, you’re seeing 296 pennies, 94 nickels, 184 dimes, and 253 quarters. Pennies and quarters are in the right proportion to each other, but dimes occur slightly more often, and nickels slightly less often, than expected. I would have expected pennies to be low, since I tend to throw them out. But the under-representation of nickels and over-representation of dimes is a surprise to me. There are several reasons why this could have occurred:
- My assumption about the “average” payout distribution was wrong. I assumed that a cashier would always return the fewest coins possible.
- Next, I assumed that cash prices (with tax) would be evenly distributed. Given that so many items are rounded up to end with nine cents, this assumption is inaccurate. I’d love for someone to suggest an accurate distribution, though.
- Lastly, there is likely a human element to the use of various coins. Cashiers may like nickels less than dimes, or, given their larger size, I may be more inclined to find nickels in my pocket than dimes when looking for change.
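The “fewest coins possible” assumption can be sketched directly: enumerate every change amount from 0 to 99 cents, make change greedily, and average. Note that this is just one way to encode the assumption, so its output won’t necessarily match the spreadsheet figures quoted above; the same script also double-checks the observed counts against the glass’s total value.

```python
# Sketch: expected coins per purchase, assuming change amounts are
# uniform over 0-99 cents and cashiers always return the fewest coins
# (greedy change-making). Also verifies the observed glass totals.

def greedy_change(cents):
    """Return (pennies, nickels, dimes, quarters) for a greedy payout."""
    quarters, rest = divmod(cents, 25)
    dimes, rest = divmod(rest, 10)
    nickels, pennies = divmod(rest, 5)
    return pennies, nickels, dimes, quarters

totals = [0, 0, 0, 0]
for cents in range(100):
    for i, n in enumerate(greedy_change(cents)):
        totals[i] += n
averages = [t / 100 for t in totals]
print("expected per purchase (p, n, d, q):", averages)

# Observed counts from the glass, with each coin's face value in cents.
observed = {"penny": (296, 1), "nickel": (94, 5),
            "dime": (184, 10), "quarter": (253, 25)}
coins = sum(count for count, _ in observed.values())
value = sum(count * cents for count, cents in observed.values()) / 100
print(f"{coins} coins worth ${value:.2f}")
```

This simple model gives 2.0 pennies, 0.4 nickels, 0.8 dimes, and 1.5 quarters per purchase, and the observed counts do come to 827 coins and $89.31.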
Anyway, I’m off to Safeway this evening to cash out these coins. Given that these 827 coins add up to $89.31, I’m anxious to see what total the CoinStar provides. For some reason I expect the machine to under-count, with a small margin of change getting “lost” in the process. It’ll be interesting to know whether those bastards are collecting more than their advertised 9%.
Interesting analysis… however, without thinking about it too much, it seems that you should have performed some statistical tests to determine whether your observed distribution is due to random sampling error. Given the complex distribution here, I’ll need to consult my coworker regarding the appropriate test… however, I think the null hypothesis may still hold given your sample size.
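A natural candidate here is a chi-square goodness-of-fit test. A minimal sketch, assuming a greedy-change-over-uniform-0-to-99-cents null model (an assumption, not something from the post) and using 7.815 as the critical value for chi-square with 3 degrees of freedom at the 5% level:

```python
# Sketch: chi-square goodness-of-fit for the observed coin mix against
# an assumed null model of greedy change over uniform 0-99 cent amounts.
# Under that model the expected coins per purchase are 2.0 pennies,
# 0.4 nickels, 0.8 dimes, and 1.5 quarters (4.7 coins total).
model = [2.0, 0.4, 0.8, 1.5]
observed = [296, 94, 184, 253]  # pennies, nickels, dimes, quarters

total = sum(observed)
expected = [total * m / sum(model) for m in model]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Critical value for chi-square with df = 3 at alpha = 0.05.
CRITICAL = 7.815
print(f"chi2 = {chi2:.1f}, reject null: {chi2 > CRITICAL}")
```

With these numbers the statistic comes out around 30, well past the critical value, so under this particular null model the mix would not look like sampling error; of course, the weakest link is the null model itself.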
Second – have you scoured the web for information regarding the per-year mint rate for various denominations?
Ahhhh….science. You get all the fun of sitting still, being quiet, writing down numbers, paying attention…yes, science has it all.
Thank God we have engineers to do it for us 🙂
According to the mint, there was some vague ‘coin shortage’ in 1964:
http://www.usmint.gov/faqs/circulating_coins/index.cfm?flash=no
I cannot tell whether the shortage was caused by taking silver out of the process or vice versa, though.
As for your distribution, I suspect that sales tax rates largely eliminate the effect of item #2, but that #3 is the biggest factor. I would have little problem accepting that most people make a conscious effort to keep the change in their pocket to a bare minimum (in quantity of coins), which would push preferences towards dimes and quarters.
Fascinating, Mr. Scott. Your logic is unimpeachable.
-Spock
P.S. I also have a pre-sorted random distribution of change, although the quarters tend to get used, so it won’t be accurate. If you can convert to weight equivalents, I can send you the distribution. I’m going to look for old nickels…
You need a girlfriend.
I just got back from the CoinStar and the pile cashed out to $89.44. That’s $0.13 more than I had calculated, but well within an acceptable error margin.
Also, screw you, Nicole.