Zeno’s thermometers?

A friend just told me about the xkcd idea for the “Felsius” scale, which is the arithmetic mean of the Fahrenheit and centigrade (Celsius) scales. Naturally, my first thought was that this was a funny but pointless idea, since it discards the advantages of the centigrade scale. (The scale was renamed ‘Celsius’, but I’m using the old name to emphasize the 0-to-100 advantage.) A better discussion is found here: http://www.explainxkcd.com/wiki/index.php/1923:_Felsius

My friend, however, suggested that it was a step in the right direction.

If so, it isn’t enough. If this idea were to take hold, we would need another such step in the right direction, perhaps to be called the “Eelsius”, which would take us another 50% of the way to Celsius, and then another halfway jump to the “Delsius”, and so on: aside from running out of letters between ‘f’ and ‘c’, we would have a nice little Zeno’s paradox of temperature scales asymptotically approaching the logical centigrade scale.
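The joke arithmetic is easy to sketch. Here is a minimal Python illustration (my own, not from xkcd, with made-up names): since each new scale averages the previous one with Celsius, a reading’s offset from the Celsius value halves at every step.

```python
# Each successive scale averages the previous scale's reading with Celsius,
# so the gap to the Celsius reading halves each time (a Zeno sequence).
def fahrenheit(c):
    """Fahrenheit reading for c degrees Celsius."""
    return 9 * c / 5 + 32

def zeno_scales(c, steps):
    """Readings for c degrees Celsius on each successive scale:
    Felsius, then Eelsius, Delsius, and so on."""
    reading = fahrenheit(c)
    readings = []
    for _ in range(steps):
        reading = (reading + c) / 2
        readings.append(reading)
    return readings

# At 0 °C, Fahrenheit reads 32, so the sequence halves toward 0:
print(zeno_scales(0, 5))  # [16.0, 8.0, 4.0, 2.0, 1.0]
```

(Of course, by the Delsius we have already used up ‘e’ and ‘d’, which is part of the joke.)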

Time Travel? (It comes down to causality’s unawareness.)

Forget the usually discussed paradox about time travel; how about how one would know whether one had come back to the proper (intended) present?

Would one rely on clocks and calendars? Clocks move at different speeds (as explained by relativity). And how much should the calendar have changed while you were off in the past? Does the travel itself take any time? How about if everyone else also time-traveled while you were off doing that? Who, or what, decides what the correct time is “back home”? (To say that it would take so much energy that only one person should be able to time-travel or get close to the speed of light at a time would be an economic argument, not a physical one, so in a discussion of physics, we must allow for the possibility that everyone is time-traveling concurrently—let’s say, by starting together but going off in different directions by different amounts.) Then who or what keeps so-called real time going at its regular pace? (It doesn’t even have one, thanks to relativity!)

What we mean by time travel—if it’s not kept track of via clocks—is that causality is unaware of the traveler’s appearance and disappearance. So, what happens to the air molecules where the traveler will suddenly appear? Are they pushed out, or do they somehow combine or coexist with the traveler’s? If they are pushed out, then how massive an object can time-travel? (Because, at some point, we’d be moving so much air, it would lead to catastrophic damage to someone or something. So, the paradox extends to the legal and safety realms as well, although those are merely practical.)

Happy 65th birthday, Deacy!

Thank you for Back Chat, Cool Cat, You And I, Spread Your Wings, In Only Seven Days, Misfire, Who Needs You, Under Pressure, You’re My Best Friend, One Year Of Love, If You Can’t Beat Them, Need Your Loving Tonight, and Pain Is So Close To Pleasure, not to mention AOBTD, FWBF & IWTBF. (And, really, every Queen song!).

You really nailed it whenever you wrote one. By the end of my life, I would like to have written one song that’s like one of yours.

What happened to ‘Windows 9’?

I was talking to an old friend a couple of years ago, just before my band was going to play a show at a bar in Seattle. He’s been at Microsoft for quite a few years, and was saying that one of Vernor Vinge’s old books is like a guidebook for a lot of MS people, in terms of what future technologies the high-ups (perhaps) would like the company to contribute to or pioneer.

Okay, so, MS folks read sci-fi; no surprise there.

Now for the other half of the puzzle: I have been reading the second 2016 issue of THE SKEPTIC, and took some extra time to dig into the arguments for and against the (eventual, distant) possibility of uploading one’s entire consciousness onto silicon (or some other substrate), partly because another friend has already invested in having his (and his wife’s) brain frozen for eventual uploading.

The author of one of the articles from THE SKEPTIC synthesizes results and possibilities from quite a bit of current and recent work in neuroscience and neurobiology, with references to numerous scientific papers from journals such as Nature, Nature Methods, The Journal of Neuroscience, Proc. NAS, Nature Neuroscience, Frontiers in Psychology, Progress in Neurobiology, Neuroscience, Journal of Neural Engineering, etc. (You get the picture.)

He also references several books: some technical, some more along the lines of popular science, and one, a fiction book.

I ordered two of the three technical/scientific books online, and found the fiction (sci-fi) book at Powell’s, so I started reading that one. It’s called Kiln People, and it’s by David Brin. It reads a little like a detective-story version of The Mind’s I: Fantasies and Reflections on Self and Soul, which is an excellent compilation of stories and essays meant to exercise the reader’s understanding of self, soul, consciousness, and the boundaries of such. It was compiled/written (“composed and arranged”) by Douglas Hofstadter and Daniel Dennett. The Brin book flows better, as it is pure fiction (although this is not to say The Mind’s I doesn’t flow; it’s a superbly crafted book).

And like most good sci-fi, Kiln People is well-informed fiction (like Diaspora by Greg Egan), and that’s why, I suppose, it was listed along with dozens of peer-reviewed neuroscience articles.

In any case, my reason for writing this small post is not neuroscience, or consciousness, or spirituality, or the human soul, but a wild conjecture about the naming of Windows 10. If Microsoft programmers, managers, executives, etc., like to read sci-fi, and some of them have read Vinge, it is not unlikely that out of that many people, some have read Kiln People by David Brin. In the edition I bought, on page 396, the narrator and main character lists people whose deaths have come at the hands of their own creation. The list contains Oedipus’ father, Baron Frankenstein, and “William Henry Gates and Windows ’09.” (The book is copyrighted 2002.)

Granted, there was no year-named “Windows ’09” (that I know of). However, the abbreviation “’09” (for the year) could have been pronounced by some people just like the numeral ‘9’, and I’m guessing the sci-fi fans at MS, in marketing, for example, may have read the book, and not wanted to follow Windows 8 with a Windows 9, avoiding Brin’s fictional prediction, or even better, avoiding making Brin look bad when that did not come true.

Otherwise, why go ‘Windows 7’, ‘Windows 8’, ‘Windows 10’?

Auto-(in)correct: Emergent Laziness?

Is it Google? Is it LG? Or is it emergence?

I am leasing an LG tablet running Android to go with my phone service. I thought the large screen and consequently larger keyboard would make my life easier. The first several days of use, however, have been unreasonably annoying. The salesperson had said that this device would be slave to my LG Android cell phone, but my settings did not seem to carry over. What’s worse, no matter how much I dig through menu trees to get to certain settings I’m looking for, I can’t find them. For example, I may want autocorrect off, or I may not want the latest e-mail in my inbox to be previewed. (I prefer to see a bird’s-eye view of all the recent e-mails, packed as tightly as possible, and I can usually set this very quickly and easily, but not on this tablet.) The reasons vary: I might be about to go teach a class in a few minutes and not want to think about the committee issue in the e-mail that just arrived, so I don’t want Gmail to parade it in front of me.

So, the settings seem to be very well hidden, or maybe not even available to the user anymore (because that has been the trend in computer-and-Internet technology: Make the user think less, and have less control; so-called intelligent software will decide all your preferences for you).

And perhaps the software can deduce (or, more likely, induce) your preferences as they were at a certain time under a certain set of circumstances, but human beings expect the freedom to change their minds. Software doesn’t seem to allow this.

Furthermore, crowd-sourcing is considered the ultimate intelligence. I know and understand the algorithms behind most of these ideas, and totally agree that they are beautiful and awesome (and really fun). However, engineers, programmers, mathematicians, and other nerds (like me) finding something super-fun should not be how life is redesigned. The crowd-sourcing of spelling and automatic correction is leading us from artificial intelligence to natural laziness. My device wants to change “I’m” to “imma”. (Before you decry that I’m also ignorant and don’t know to put a period inside the quotation marks, read my disclaimer about switching to British/logical punctuation.) Am I now forced to appear as though I have abandoned capitalization and picked up an excessively colloquial form of spelling? And if I had, then fine, but I haven’t.

It gets worse. The learning algorithm is not learning, at least not from me. The following has now happened with several phrases and words on this new tablet, and I’ve looked further into altering this setting, to no avail.

When I type “I will”, it automatically replaces it with “I silk”. If I backspace and type “I will” again, it replaces it again. And it doesn’t learn from my actions; I have patiently (and later on, a further dozen or so times, impatiently) retyped “I will” more than 30 times, only to watch Gmail running on my Android LG device switch it back to “I silk” immediately.[1]

Where did this come from? Is there a band called “I silk”? Is this a new phrase that’s in these days, and I haven’t been overhearing my students enough to know about it?

Or is it because earlier that day, I tried to write “I seek to …” where the ‘seek’ was autocorrected to ‘silk’? (for who knows what reason)

And what happens when this behavior is pushed beyond e-mail on a tablet, and I’m not able (or allowed) to write either “I will” or “I seek” as I type a blog entry such as this on my laptop, or as I try to type an e-mail to explain what’s going wrong to Google’s tech support, or someone else’s tech support?

This really doesn’t make sense. Shouldn’t machine learning give us results that make sense? (That used to be the idea.) Now, perhaps, it’s just supposed to give us results that are popular or common. It seems we’re not building artificial intelligence; we’re building artificial commonality.

This is not a rant for elitism (which, anyway, is also used in machine learning, in evolutionary algorithms). It’s about the loss of freedom of speech, to be able to say what one is trying to say the exact way one wants to say it. The ability for clear, unequivocal communication is not something to be eliminated from the human experience; it is something to continue to strive for. Likewise, convenience over freedom (or over accuracy) is not a good choice of values. In the end, the person pushing little buttons with letters marked on them will be held responsible for the content. Shouldn’t that person be in charge of what words appear when they push the little buttons? Shouldn’t we at least be able to turn off auto-correct, or have some control over when it applies?

This is being taken away, little by little. “Oh, it’s just a tablet.” … “Oh, it’s just an e-mail. Nobody expects it to be spelled correctly.” Pretty soon, no one will be able to spell anything correctly, even if they know how to, because their devices won’t allow them to have that little bit of control.

 

[1] Also, do not even try to write in a foreign language, or mix English and Turkish in one sentence. In an early e-mail I wrote on this device, I had to repeat the letter ‘i’ (which appeared only once in the actual word) five times (for a total of six ‘i’s) for it to stop auto-(in)correcting “geliyorum” to something like “Selma”. I had to type “geliiiiiiyorum”.

Making sentences. . .

Winter term has been crazy, although in a good way. I’m teaching six classes this term, all of which are going great (I have awesome students), though one’s a new prep, so I haven’t had time to post here. But I recently found some silliness on my phone that I had come up with while waiting for a train: making sentences by combining band names.

Here they are:

As Blood Runs Black The Refused Pierce The Veil

Tower of Power Was (Not Was) Built To Spill Ashes

Barenaked Ladies Poison The Well From First To Last

Bring Me The Horizon Within Temptation At The Drive-in

Blonde Redhead Of Montreal Cursed The Only Ones

(And, of course, a band name, all by itself: I Love You But I’ve Chosen Darkness)

The Bias–Variance Tradeoff

As in statistics, machine-learning models (typically called hypotheses) can suffer from a bias–variance tradeoff: models with low variance (high stability, little sensitivity to the particular training data) tend to exhibit high bias (systematic error, being consistently far off the mark), whereas models with low bias (accurate on average) can exhibit high variance (predictions that swing widely from one training sample to the next).
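As a concrete sketch (my own illustration, with made-up names, not a definitive demonstration): refit two polynomial models on many noisy samples of a known function and compare their predictions at one test point. A constant (degree-0) model barely moves between samples but is systematically off; a degree-9 polynomial is right on average but swings with each sample.

```python
import numpy as np

# Estimate bias and variance of two polynomial models at a single test point
# by refitting on many independently noisy training samples of sin(x).
rng = np.random.default_rng(0)
x = np.linspace(0, np.pi, 20)
x0 = np.pi / 2                      # test point; the true value is sin(x0) = 1
trials = 200

def predictions(degree):
    """Predictions at x0 from `trials` refits of a degree-`degree` polynomial."""
    preds = []
    for _ in range(trials):
        y = np.sin(x) + rng.normal(0, 0.3, x.size)   # fresh noisy sample
        preds.append(np.polyval(np.polyfit(x, y, degree), x0))
    return np.array(preds)

for degree in (0, 9):
    p = predictions(degree)
    # bias = how far the average prediction is from the truth;
    # variance = how much predictions scatter across training samples
    print(f"degree {degree}: bias ~ {abs(p.mean() - 1):.3f}, "
          f"variance ~ {p.var():.4f}")
# Typically, degree 0 shows large bias with tiny variance, and degree 9 the reverse.
```

The exact numbers depend on the noise and the seed, but the ordering (which model has the higher bias, which the higher variance) is the point.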

In US politics, conservatives “stay the course” even if it’s the wrong course, perhaps because to admit error is unpleasant. This is analogous to high bias and low variance. Progressives, on the other hand, may be seen considering the validity of multiple (including opposing) viewpoints. This is considered flip-flopping (an instability of opinion, so high variance) that can lead to choosing a stance that is better supported by evidence (low bias).

These, of course, are not the only two options. In human affairs, the remaining combinations also exist, but rarely in technology.

[After-the-fact clarification: High bias and high variance is not difficult to find in human affairs: Being far off the mark, and wildly adjusting one’s position is not only not unusual, it’s also not a bad thing. Critical thinking requires adjusting one’s views. Whether such adjustment takes the form of a conscious, well-informed homing in or a wild, unprincipled random walk is not addressed by the bias–variance tradeoff. The consistent coexistence of low bias and low variance also occurs with people; those people are quite impressive. It is helpful to remember, for this case, that low variance does not mean no variance, so we can still expect self-critique and adjustment from those individuals.]