Fixing relative notation in music

I've been learning a tiny little bit of music theory – major scales and chords and so on – and I would like to change everyone's use of relative notation.

The usual way of writing relative notes in a scale is 1, 2, 3, 4, 5, 6, 7. In C major, this would correspond to C, D, E, F, G, A, B. Chords are usually written in Roman numerals, with capital letters for major chords and lower-case letters for minor chords: I, ii, iii, IV, V, vi, vii° for C, Dm, Em, F, G, Am, Bdim.

The first problem I see with this is when we want to describe secondary chords like V/V, "five of five": we temporarily go to the major scale of the 5, and extract the V chord from that scale. In C major, the V chord is a G major; in the G major scale, the 5 is a D, so the V/V is a D major chord.

I expect that people who work with these things regularly work these out as quickly as I can do my times tables, hopping between scales with ease. But I would like to work things out in terms of modular arithmetic. In the case above, things appear to work out: there are eight notes in an octave, 5 + 5 = 10, and 10 (mod 8) = 2, and D is the 2 in the C scale.

But this breaks down if we want the V/ii chord: 5 + 2 = 7, but the ii is D, and the 5 in the D scale is an A, not a B.

The mathematically-inclined may have already spotted at least one of the mistakes in the above reasoning. There are seven different notes in the scale, so we should be working modulo 7, not modulo 8. The second mistake is with the notation: the scale should start at zero, not one.

To do the calculation properly, we need to subtract 1 after doing the addition. So, V/V is 5 + 5 - 1 (mod 7) = 2. And V/ii is 2 + 5 - 1 (mod 7) = 6. It works.

Having to subtract the 1 is really annoying though, and the special case of ending on a 7 (e.g., V/iii which becomes zero mod 7) needs to be handled. A better scale would be

0, 1, 2, 3, 4, 5, 6

with chords
O, i, ii, III, IV, v, vi°.

Then the normal-notation V/V becomes a IV/IV, and can be calculated as 4 + 4 (mod 7) = 1, the D major chord. And a normal-notation V/ii becomes a IV/i, and can be calculated as 1 + 4 (mod 7) = 5, the A major chord.
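
As a quick check of the arithmetic, here's a minimal sketch in Python (the note table and the function name are mine, purely for illustration):

    # Zero-based degrees of the C major scale.
    NOTES = ["C", "D", "E", "F", "G", "A", "B"]

    def secondary(inner, outer):
        """Zero-based degree of the 'inner/outer' secondary chord."""
        return (inner + outer) % 7

    print(NOTES[secondary(4, 4)])  # D: the traditional V/V, here IV/IV
    print(NOTES[secondary(4, 1)])  # A: the traditional V/ii, here IV/i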

This is much cleaner, and the only minor issue is finding a Roman numeral for zero, which I wrote above as the letter O.

Taking into account how little music theory I know, I figure my proposal is about as optimistic as my suggestion for question marks.

Diamantina Drover

(A longer post than is likely warranted for not hearing lyrics correctly, but perhaps it's worth it if overseas readers (both of you) haven't heard the song before.)

I can't specifically remember it, but I think I first heard Diamantina Drover in the form of John Williamson's cover version on his album Mallee Boy. My parents had a few Williamson albums and that was definitely one of them. I recall later learning the lyrics in a primary school music class; I can't remember actually singing it, but we must have. Whether we sang something closer to Redgum's original or Williamson's more compactly arranged cover, I don't know, but certainly it's Williamson's version which remains one of my favourite songs (of any genre, and certainly within Australian folk).

The song's narrator tells us about how he moved from Sydney a decade ago to become a cattle drover. The first verse and chorus end with "I won't be back till the drovin's done." The last verse ends with "I won't be back when the drovin's done", a change kept in the final chorus as well.

I was watching the YouTube video of the Williamson version of this song, and started reading the comments.

Musically i like Johns version but he should have stuck to the red gum lyrics. Changing that one little word at the end takes out all the impact and kind off the whole point to the song.

?????

I knew when reading this that the commenter was referring to the till/when switch, saying that Williamson had sung 'till' in every case instead of changing to 'when'. And I straight-up didn't believe this, until I played through the YouTube video, hearing "till the drovin's done" always and never hearing "when the drovin's done". I figured I couldn't have misheard the lyrics so consistently over so many years, so I checked the album version that I own... and I had misheard.

I'd obviously been taught the Redgum lyrics, and they'd stuck with me through many dozens of plays of John Williamson's version. Perhaps, many years ago, I noticed that John didn't make the till/when switch – having written this post I now recall noticing this sometime around 2000, listening to the song on cassette in Dad's car. But I'd long forgotten about it, if that is actually a genuine memory and not something I'm inventing for myself.

Someone not hearing lyrics right is hardly earth-shattering news, but this one feels much more interesting to me than others.

Having thought about this, I think a compromise would actually improve the lyrics even further. The final verse should switch to 'when', as in the original. But the final chorus should stay as 'till' – the final verse then would have that brief moment of raw honesty, before the narrator slips back into the lie that one day he'll move back to Sydney.

I V vi (iii) IV (I IV V)

In November 2006, Rob Paravonian posted his Pachelbel Rant to YouTube; it quickly became popular and now has 12 million views.

At the 2009 Melbourne International Comedy Festival, the Axis of Awesome played their Four Chords Song, and it became really popular, with that video having over 30 million views, and their 2011 official music video (with a slightly different set of songs) a tick under 20 million.

If you read the comments on the Pachelbel Rant video, you get things like this:

Was disappointed when the group "AxesofAwesome" completely ripped it off with "Four Chords".

I just realized that Axis of Awesome completely stole the concept of this video.

And OK, maybe those are the only two comments accusing the Axis of Awesome of plagiarising the concept. But I want to respond to them here anyway, because yesterday I discovered this video of Benny Davis (keyboardist for the Axis of Awesome) singing an early version of the Four Chords Song in November 2006.

So there we go, independent (re-?)discoveries of a piece of musical comedy.

YouTube comments

When Google made commenting on YouTube go via Google Plus, it created a loud chorus of online protest. News and tech sites ran with these stories, no doubt hoping to attract the eyeballs of lots of angry YouTube commenters who didn't want to use Google Plus.

Left largely unremarked during the controversy, but generally known, was that YouTube comments sections were typically a cesspit featuring the absolute dregs of humanity. The switch to G+ comments has improved the quality of comments tremendously. You'll still see the occasional hundred-post-long flame war on Israel-Palestine on a video about ducks or whatever, but the percentage of non-offensive and even useful comments is much higher today than it used to be. I often read the comments, and occasionally I even find them useful – perhaps pointing me to an interesting related video, or raising some background information that I can go away and verify.

There's one exception to this general rule that I came across tonight. In the pre-G+ era, the saddest place I ever saw on YouTube was the comments of Mariah Carey's One Sweet Day. Almost all of the comments – literally 95% or more – were RIP messages to lost friends or family. Page after page of people finding some comfort from the song and leaving a little personal message. I don't know what motivated anyone to express their grief in the form of a YouTube comment, but the memory of those comments makes me tear up even now.

There's still some of that in the G+-style comments to One Sweet Day, enough to make me sad if I scroll through enough of them. But people posting the song to Google Plus are often not leaving a comment at all, or perhaps snarking about the evolution of Carey and pop music more generally since the 1990s.

A little bit of good Internet has been lost.

In which I write about myself

When I was little, perhaps eight or nine years old, I experienced a certain phenomenon for the first time. Perhaps it's a common thing that I just haven't read about. I don't know what it's called, and I doubt I can even describe it well enough in prose, let alone in terms that Google might understand. That first occasion was so long ago that I don't even remember whether it was an external sound, or something purely imagined. It might not even have been an auditory thing, but I'll pretend that it was regardless.

Imagine a sort of beat, but the sound isn't something sharp like a drumbeat. Maybe the sound of walking on gravel, or flicking the bristles on a toothbrush. This sound repeats regularly, roughly once a second. In my brain, it is as though these sounds follow a positive feedback loop (alternatively, like a resonance), rising in volume. Eventually (maybe after a few seconds, maybe a few tens of seconds) the imagined noise in my head is intolerably loud. That evening when I was eight or nine, I burst into tears at it. As I recall it, the news was on the TV, and Mum thought that I was reacting to the footage of the war zone (Yugoslavia?), trying to comfort me accordingly.

Over the years, this same sort of feedback-loop-thing repeated itself occasionally, ending more calmly than the wild crying of that first occasion. My memory is that it was something I could almost command at will: imagining those regular, quiet sounds, and having them dominate the apparent noise inside my brain. But it was long ago (how long ago? When did it stop? I don't know – in my teenage years maybe? Early adulthood??), and perhaps not as common or as controlled as I remember. I tried summoning those regular sounds just now, and I can only experience what feels like a faded ghost of what I remember: my brain stubbornly whirring away normally, saying only (in some metaphorical sense) "I know what you're trying to do; the sound used to get loud like this", but it isn't actually overpoweringly loud.

Sometimes it wasn't sound-based. What I associate more (though not exclusively) with dreaming was having a ball (or a spherical rock) grow in size very quickly. Perhaps the rock was on one end of a seesaw, and, without wanting to, I would imagine it rapidly getting many times larger than the seesaw. It would destroy any hope of imagining what I wanted to imagine about that ball or rock.

I was reminded of these old memories today. I'd been reading a discussion about photons and coherent states, and I pondered, as I occasionally do, how little I understand about quantum mechanics. What's an "observable" and why should it be a Hermitian operator in a Hilbert space? (Real eigenvalues, whatever, my main confusion is on representing something physically measured as a matrix. Or why non-commuting operators should exist. Turning Poisson brackets into commutator brackets just because. Totally weird stuff, though perhaps within the realms of "spend a few weeks looking at your old uni notes, in particular representing things in quantum by wavefunctions rather than kets, and you'll work it out".)

I went on to think of how, more generally, I don't understand things. Why does particle physics have Lie algebras in it? Why does anything exist? At the latter question, I imagined galaxies and the Big Bang and atoms and gravity and I had one of those weird positive-feedback-loop-things, my brain getting totally flipped out over the existence of anything at all, matter, energy, physics. It only lasted a second or so, but it was a powerful force in my head for that second, as though it was driving me fast towards a sort of existential madness*. Then it ended. The existence of the universe and physics is still really weird, but it's something that my brain can consider calmly and stably.

*Whichever meanings or connotations of 'existential' apply here, those are the ones that I mean.

I think this "getting briefly and excessively weirded out over the existence of anything" thing has happened to me before. Whether it belongs in the same category as the regular beats that made me cry when I was eight, I don't know, but it feels very similar.

Statement questions

"Why would anyone do this."

Sometimes people ask a rhetorical question and deliver it as a statement, without the upward inflection at the end that we hear in genuine questions. Not all rhetorical questions are delivered in this way, but it's an important enough difference in tone that, when writing, the question mark is often replaced by a full stop.

That is how I used to write these statement-questions, but a friend objected, finding the full stop mentally jarring having just processed the words as a question. Did I misread? Did he mistype? And so I started putting a question mark after the full stop in these cases: "Why would anyone do this.?"

It's a solution that's just satisfying enough for me to keep using it – for people who get what I'm doing, it makes the statement-question unambiguous, and should hopefully reduce the mental jarring. But even assuming that people know what I'm doing, it is unsatisfactory: I've tried to internalise the full-stop-question-mark over a period of years, but that question mark still makes me start upwardly inflecting the end of the sentence that I've just typed specifically to not get upwardly inflected.

It occurred to me this evening that a better solution would be to borrow from Spanish and then modify. Upward-inflecting questions would get an inverted question mark (¿) at the start and a regular question mark (?) at the end. As soon as the question starts, the reader would prepare to upwardly inflect at the end.

Statement-questions, on the other hand, would only get the final question mark.

Upwardly inflect: ¿Why would anyone do this?
Don't upwardly inflect: Why would anyone do this?

So, the final question mark merely confirms that grammatically, the words just written are a question. The presence or otherwise of the inverted question mark primes the reader for the appropriate inflection.

As practical suggestions go, this is somewhere beyond the "utterly useless" end of the spectrum that covers everything I've ever suggested before. It would require the internalisation of a different punctuation system by all English speakers, forgetting entirely the cues that come with the single question mark at the end (cues that would live on in all the centuries of books written before my idea becomes standard). But it seems an elegant solution, and I thought it was worth documenting, albeit in a post which I'll sneakily upload to LiveJournal in the dead of night Australian time and won't link to elsewhere.

Popular below-the-line candidates

Following the publicising of Joe Bullock's speech just prior to the April 5 WA Senate election, I saw many Labor supporters say that they'd vote below the line (BTL) for Louise Pratt, who had the number-two spot on the Labor ticket behind Bullock.

At the time of writing, we don't have the breakdown of BTL votes from the WA election, so I'm taking this opportunity to see how a "below-the-line for Pratt" movement might show up, by looking at some results from 2013. The most interesting thing to me is that Pratt was already very popular BTL relative to Bullock last September.

For each of the major parties and Greens (and all the other parties, but I haven't bothered posting them here), I calculated the ratio of BTL votes for the first candidate on the ticket to the votes for each subsequent candidate. For example, the first entry in the numbers below reads "Brown to Bilyk, 5.5": in Tasmania, Carol Brown received 5.5 times as many BTL votes as Catryna Bilyk.
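
In code, the calculation is trivial. Here's a minimal sketch, with made-up vote counts and generic candidate names (the real figures follow below):

    # Hypothetical BTL vote counts for one party's ticket, in ballot order.
    btl_votes = {"Lead": 5000, "Second": 2000, "Third": 500}
    ticket = ["Lead", "Second", "Third"]

    lead = ticket[0]
    for candidate in ticket[1:]:
        ratio = btl_votes[lead] / btl_votes[candidate]
        print(f"{lead} to {candidate}, {ratio:.1f}")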

I'm looking at the sample of voters who supported a party and chose to vote below the line; perhaps this was because they didn't like their party's group voting ticket, or perhaps they didn't like the order of candidates their party had chosen. I'm interested in this latter case.

The ratio of BTL votes between first and second (or third, fourth, ...) candidates is not a perfect way to capture supporter dissatisfaction with the leading candidate – if the GVT in a state is unpopular, then that would lead to more of that party's supporters voting below the line, and they would likely be giving the top candidate their first preference, thus inflating the ratio that I calculate. The natural way to capture dissatisfaction with the GVT is to just look at the percentage of votes for the party that were below the line, but this is problematic for the inter-state comparisons that I would like to make – ballot paper lengths and ticket preferences can be significantly different between states, leading to large relative changes in BTL voting rates.

With those caveats out of the way, here are the ratios of a party's lead candidate to subsequent candidates, in order of increasing relative popularity of a non-lead candidate.

Labor:

Tas: Brown to Bilyk, 5.5; Brown to Thorp, 1.8; Brown to Dowling, 4.3

NT: Peris to Foley, 2.1

WA: Bullock to Pratt, 2.4; Bullock to Foster, 10.7; Bullock to Ali, 13.8

Vic: Marshall to Collins, 5.7; Marshall to Tillem, 29.0; Marshall to Psaila, 36.5; Marshall to Larkins, 41.4; Marshall to Mileto, 21.1

Qld: Ketter to Moore, 5.7; Ketter to Furner, 21.7; Ketter to Boyd, 9.3

NSW: Carr to Cameron, 8.1; Carr to Stephens, 25.5; Carr to Kolomeitz, 134.2; Carr to Nelmes, 121.1; Carr to Chhibber, 35.0

ACT: Lundy to Sant, 19.0.


Liberals or some sort of LNP:

ACT: Seselja to Nash, 3.5

Tas: Colbeck to Bushby, 4.2; Colbeck to Chandler, 8.9; Colbeck to Courtney, 8.7

Vic: Fifield to Ryan, 8.4; Fifield to Kroger, 4.2; Fifield to Corboy, 6.9

NSW: Payne to Williams, 9.0; Payne to Sinodinos, 4.5; Payne to Hay, 24.0; Payne to C Cameron, 21.6; Payne to A Cameron, 11.0

SA: Bernardi to Birmingham, 5.5; Bernardi to Webb, 16.1; Bernardi to Burgess, 19.8; Bernardi to Cochrane, 67.7; Bernardi to Weaver, 56.1

NT: Scullion to Falzdeen, 6.0

Qld: MacDonald to McGrath, 20.5; MacDonald to Canavan, 31.9; MacDonald to Goodwin, 19.7; MacDonald to Craig, 25.4; MacDonald to Stoker, 13.3

WA: Johnston to Cash, 14.1; Johnston to Reynolds, 15.2; Johnston to Brockman, 29.7; Johnston to Thomas, 22.3; Johnston to Oughton, 14.9


Greens:

NT: Williams to Brand, 6.5

Tas: Whish-Wilson to Burnet, 8.3; Whish-Wilson to Ann, 24.6

WA: Ludlam to Davis, 10.1; Ludlam to Duncan, 38.2

Qld: Stone to Bayley, 10.9; Stone to Yeaman, 47.5

ACT: Sheikh to Esguerra, 22.0

NSW: Faehrmann to Ryan, 58.1; Faehrmann to Blatchford, 36.9; Faehrmann to Ho, 38.4; Faehrmann to Findley, 53.5; Faehrmann to Spies-Butcher, 45.1

Vic: Rice to McCarthy, 41.7; Rice to Truong, 55.1; Rice to Christoe, 124.7; Rice to Sekhon, 132.7; Rice to Humphreys, 40.3

SA: Hanson-Young to Mortier, 74.9; Hanson-Young to Carey, 66.8


A few obvious things jump out at me. As mentioned earlier, Pratt was already popular with BTL Labor voters compared to Bullock. (She also received more BTL votes relative to Labor ticket votes than any other non-lead Labor Senate candidate outside Tasmania and the ACT.) Lin Thorp, formerly a minister at state level in Tasmania, was the most popular non-lead candidate by this metric. On the conservative side, both Arthur Sinodinos and Helen Kroger were high-profile candidates who appear to attract a bit of a personal vote. Greens voters seem pretty happy with their lead candidates. On a lighter note, there's more than a hint that voters disproportionately give their first preference to the last candidate in a large group.

Anyway, my little sociology experiment on WA Labor voters will be to watch the Bullock-Pratt ratio as the votes are counted in the coming weeks; if there was a decent swing from Bullock to Pratt, then we should see the ratio of their BTL votes fall from 2.4 to something lower. My guess is that it will fall a little below 2; I'd put the over-under at around 1.9. (Update: A few hours after posting, and I see that the first couple of hundred BTLs show Pratt getting more votes than Bullock! At this stage it looks like I'd have been closer with 0.9 rather than 1.9.)

Notes on Flappy Bird and clones

The other week I got (from work) one of these for the first time. It was just after Flappy Bird had been taken down from the App/Play Store, but I'd played a Flash clone of it (then-highest score: 15, with no other scores greater than 10), and so I downloaded Clumsy Bird, then and now the top free game on the Play Store.

Clumsy Bird is a little bit easier than Flappy Bird: in the former, you don't get as much impulse from flapping, relative to the size of the gaps you have to fly through. This leaves more margin for error, since a slightly early tap won't necessarily see you crashing into the roof of one of the gaps. Still, it takes some getting used to.

I played it by feel for a while – I tried to avoid any rages and just mindlessly tap away, almost always dying before I scored 5, hoping that eventually my brain would work things out. And it did, eventually. I passed 10 points, I passed 20 points, and suddenly the game was quite fun – an endless series of scrolling trees to fly my way through in some kind of Zen-like state.

It didn't last. I learned to read the score in my peripheral vision, a disastrous new skill that caused me to choke whenever I approached a new record, followed by the inevitable anger each time I died without setting that new record. After at least half a dozen scores in the 60s – an absurd deviation from the distribution of scores I should have had, and indeed would have had with some better sports psychology – I eventually broke the 70 barrier, and then, without the pressure of a long-standing best score, I soon made it over 100, with a score of 140. That's a number that will be satisfyingly large for as long as I don't talk to anyone who's done better.

After all this practice and skill-learning on Clumsy Bird, I wondered what would happen if I tried the Flash Flappy Bird clone again. My early efforts were predictably poor – having built up my feel for the game with the relatively small flaps of Clumsy Bird, the big Flappy Bird flaps saw me crash into the top of the gaps very quickly, typically before I'd scored 4 points.

I thought that this showed something interesting about a lack of transferability of skills in at least one direction between Clumsy Bird and Flappy Bird, but I was wrong. I tried a strategy of, whenever possible, flapping only when the bird had fallen below the bottom of the next gap. The idea was that, to get through the gap itself, I needed to flap near the bottom of it, and it would be easier to do this (a greater margin for error in timing) if the bottom of the gap was only a little below the peak of the bird's pre-flap trajectory.

And it worked. The skills I'd built up by feel on Clumsy Bird transferred to Flappy Bird once I had a sound playing strategy. I very soon had a score of over 20, and now my highest is 43. I found this transfer of skills quite interesting to experience.

---

Above, I mentioned in passing the distribution of scores I should have had. The simplest way to model Flappy-Bird-type games is to say that, at every point during the game, you have some probability p of dying before you score another point. Over a game, this probability is treated as a constant; your hope as a player is that, with practice and/or strategy, you decrease p so that your skill level rises. Of course, p isn't constant throughout the game – some transitions are harder than others, and the probability of making it through the next gap is a function of the position of the previous flap. But as a toy model, a constant p looks OK to me.

It immediately follows from a constant p that your scores will be geometrically distributed (with scores of zero allowed). In other words, they will look a lot like a batsman's cricket scores, only you won't have as many scores of zero in Flappy Bird.

But while in cricket we calculate a batsman's average and can directly infer from that the average probability of being dismissed before scoring the next run, in Flappy Bird we usually only keep track of the highest score. If you've played Flappy Bird N times, all at a constant p, what high score "should" you have?

Let X_1, ..., X_N be independent geometrically-distributed random variables with failure probability p; these are the scores of the N games of Flappy Bird you play. Let H = max(X_1, ..., X_N); this is your highest score. Then

P(H ≤ h) = ∏_i P(X_i ≤ h) = [1 - (1-p)^(h+1)]^N.

This gives us the basic relation between a high score h, skill level p, and the number of times N that you've played the game at your current skill level.
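
As a sanity check on that relation, here's a quick Monte Carlo sketch in Python (the parameter values are arbitrary ones picked for illustration):

    import random

    p, N, h = 0.05, 100, 96   # arbitrary skill level, games played, high score
    trials = 2000

    def game(p):
        """One game under the toy model: points scored before dying."""
        score = 0
        while random.random() > p:
            score += 1
        return score

    # Empirical P(H <= h) over many sets of N games, vs the formula.
    hits = sum(max(game(p) for _ in range(N)) <= h for _ in range(trials))
    print(hits / trials)                  # ~0.5 empirically
    print((1 - (1 - p) ** (h + 1)) ** N)  # ~0.5 from the formula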

For a given p and N, what high score should we "expect"? I think that the natural way to answer this is to ask for the value of h such that P(H ≤ h) = 0.5, i.e., the median high score. We get

h = ln[1 - 0.5^(1/N)] / ln(1-p) - 1.

Alternatively, given your high score of h, you could impute your skill level from

p = 1 - (1 - 0.5^(1/N))^(1/(h+1)).
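
To put numbers to these, a small sketch (using, as in the example below, N = 100 games and a high score of 140):

    from math import log

    def expected_high(p, N):
        """Median high score after N games at skill level p."""
        return log(1 - 0.5 ** (1 / N)) / log(1 - p) - 1

    def imputed_p(h, N):
        """Skill level implied by a high score of h after N games."""
        return 1 - (1 - 0.5 ** (1 / N)) ** (1 / (h + 1))

    p = imputed_p(140, 100)
    print(p)                      # ~0.035
    print((1 - p) / p)            # cricket-style average, ~28
    print(expected_high(p, 100))  # ~140, recovering the input high score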

An interesting exercise for a data-minded Flappy Bird player would be to plot a histogram of their scores and see how it compares to some of these theoretical values. Choking will show up as a spike in the probability density just below the highest score.
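
For the record, a sketch of that exercise, with simulated scores standing in for real ones (swap in your own list of scores):

    import random
    from collections import Counter
    import matplotlib.pyplot as plt

    p = 0.035                  # the imputed skill level quoted below

    def game(p):
        """One simulated game under the constant-p toy model."""
        score = 0
        while random.random() > p:
            score += 1
        return score

    scores = [game(p) for _ in range(100)]  # replace with your real scores
    counts = Counter(scores)
    ks = range(max(scores) + 1)
    plt.bar(ks, [counts[k] / len(scores) for k in ks], label="observed")
    plt.plot(ks, [p * (1 - p) ** k for k in ks], "r", label="geometric pmf")
    plt.xlabel("score")
    plt.ylabel("probability")
    plt.legend()
    plt.show()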

Assuming that I've played 100 games at my current skill level (this could be wrong by a lot), my Clumsy Bird highest score of 140 suggests p ≈ 0.035, and a cricket-style average of (1-p)/p ≈ 28. By the model I should have scored three centuries by now, and I've only scored one, so I suppose you can call me the Shane Watson of Clumsy Bird.