“Verbing weirds language.”

“Verbing Weirds Language” (not so fast)

I have heard this saying quite a lot lately, and it certainly gets its point across. However, as a speaker of languages from more than one family (language family, that is), I find it shortsighted.

In languages that use helping verbs to make verbs out of nouns and adjectives (Japanese: suru; Turkish: etmek), there is no such problem and, it seems, very little accompanying debate over descriptivism versus prescriptivism. For a multiculturally informed version of this aphorism, I suggest saying something along the lines of “Verbing weirds English” (or “Verbing weirds Indo-European languages”), because we Turks have been doing it without any weirding for quite some time, not to mention the Japanese and others—recall the infamous ‘bushusuru’.

Furthermore, this kind of conversion is not necessary. Take the new “fail” as a noun: the word ‘fail’ is a verb, and the noun form is ‘failure’. I am familiar with the contention between descriptivism and prescriptivism. Descriptivists usually argue that language evolves: “It always has, so let it continue to do so.” However, we do not accept this line of reasoning in other matters. Human beings form judgments and opinions under the influence of confirmation bias, regression to the mean, and various other well-documented heuristics and biases. Even those who are aware of these mistakes continue to make them much of the time. This, then, is how people behave. By the descriptivist line of reasoning, we should let the biases rule and let informed thinking go by the wayside. But occurring very often as part of human behavior does not bestow a sacred status on anything, be it verbing or bias. Language evolves, as it perhaps ought to, but mindful people can strive for a direction of linguistic evolution that does not reduce clarity, increase redundancy, or encourage laziness of mind. I am all for positive evolution in the English language: changes that will make it more consistent, more rational, easier to understand, less redundant, and more elegant. Many of the changes brought about by the Internet and smartphones do not have this effect. To illustrate my point, I will include many examples of how brilliantly awesome English actually is below, but first, a bit more about verbing (and nouning).

I was reading the membership conditions for a retail chain, and realized that perhaps nouning[i] bothers me more than verbing. But why should either one? English is a language that has many words that serve as a verb and a noun with no change in spelling or pronunciation (‘park’, for example). Yet, as a native speaker of a non-Indo-European language, which has its moments of logical consistency, and which uses helping verbs so that neither verbing nor nouning ever needs to happen, and further, as a native-level English speaker of 33 years[ii], I value the superior logical consistency of English, and don’t like seeing it eroded. (Note: Of course I realize that English pronunciation and spelling are not logical or consistent; there is a wonderful demonstration here: https://www.youtube.com/watch?v=2_dc65V7DV8. But trust me, and read on: English grammar, syntax, and punctuation are superbly, surprisingly, wonderfully logical.) I will start small.

It may not be worth it, because I decided to give everyone full credit. (This means it ain’t worth it.)

It may not be worth it because I decided to give everyone full credit. (This means it may be worth it for some other reason.)

From a student paper: “Humans are over populating the world.” (This seems to indicate that humans have stopped reproducing, that they are no longer interested in populating the world. What the student really meant was “humans are overpopulating the world.”)

Next, I would like to discuss hyphenation. Aniruddh Patel, in one of his talks, describes the hierarchical nature of language as follows: “If you know English, and I say the following sentence, ‘The girl who kissed the boy opened the door.’, … there is a sequence of words in that sentence: ‘the boy opened the door’ … But, if you speak English and understand English, you know it’s not the boy that opened the door; it’s the girl that opened the door. In other words, you don’t just interpret language in a left-to-right fashion. …” He goes on to explain that the phrases are hierarchically related such that ‘girl’ is linked to ‘opened’, not ‘boy’, which is right next to ‘opened’.

Hyphens help us with another hierarchical aspect of language. An arithmetic analogy will help demonstrate this. 4 + 2 × 3 = 10 because precedence tells us to multiply 2 and 3 first. In arithmetic, we can use parentheses to change the hierarchy and override precedence: (4 + 2) × 3 = 18. This is exactly what hyphens do: They group words into concepts. Notice that the hyphen, typographically speaking, is rather short. It’s shorter than any of the letters in a monospaced font. That’s because, unlike dashes, hyphens serve to combine, not separate. Dashes, which are either ‘n’ long or ‘m’ long, serve to push words apart. The en dash, for example, is used for ranges, like 9–5 and Seattle–Atlanta. But let’s get back to hyphens.

Hyphens join two words into one concept, as in two-car garage, one-man band, and land-grant university. ‘Two’ and ‘car’ started life as separate concepts. In ‘two-car garage’, they are combined into a single concept, just as (4 + 2) got combined into a single number, 6. And just as that 6 acted as one single number multiplying 3 in the arithmetic above, ‘two-car’ acts as a single concept of garage size, even though neither ‘two’ nor ‘car’ ordinarily signifies size or width.

A friend once asked (on Facebook): [. . .] purple people eaters: Are they purple people who eat people or people who eat purple people? While this may have been posted in jest, the logic applies to serious cases where the intended meaning matters. I responded: “Purple-people eaters eat people who are purple, while purple people eaters are purple in color themselves. In English, the modifiers gang up on the noun at the end unless you hyphenate.” I followed this up with two examples. “A community college association is a group of college-related people from the community, whereas a community-college association is an association of community colleges. In other languages, nouns have cases, so you don’t have these problems.” (Just as verbing doesn’t weird all languages, this type of problem also completely fails to occur in Turkish, because nouns in noun strings get modified with suffixes that place them in their proper cases, doing the job of hyphens in English. The difficulty for native speakers of English is that they grow up speaking English, wherein it’s difficult to hear the sound of the hyphen. In Turkish, there is no mistaking the suffix; you hear it from the time you’re a little baby.)

My second, and perhaps better, example was the following. “The phrase ‘lake of fire Christians’ implies a lake consisting of ‘fire Christians’, whatever that would mean. What is typically meant is ‘lake-of-fire Christians’. English is ‘endian’ unless you override that with hyphens.” In spoken language, we use inflection to make these things clear (which, again, is why many native speakers have a harder time than some ESL speakers).

The subject line of an e-mail I received said, “Security for the Cloud Lunch in Portland”

This seems to indicate that someone is setting up security for a lunch event in Portland. What they meant, of course, was “Security-for-the-Cloud Lunch in Portland.” This type of mistake can very easily be avoided by rejecting the temptation to produce noun strings, and using the three little powerful words that make English work: ‘of’, ‘for’, and ‘from’. Calling it “Lunch Meeting about the Security of the Cloud, to take place in Portland” would remove all ambiguity.

Likewise, on the back of a 45-RPM record I recently bought[iii], the recording location is identified as “the crazy cat lady house” (another noun chain). It is clear, of course, that what is intended is “the crazy-cat-lady house” where “crazy cat lady” is a compound modifier, a unified concept and single descriptor for the house. Without proper hyphenation, the meaning is open to interpretations such as “the crazy-cat lady-house” (with the last hyphen not strictly necessary, but placed because this blog post is written, not spoken).

My first car was used, so I was a new car owner. I recently bought my third car, and it was brand new, so I am now a new-car owner (but not a new car owner).

The Coursera privacy policy states, “If you participate in an online course, we may collect from you certain student-generated content, such as assignments you submit to instructors, peer-graded assignments and peer grading student feedback.”

Note that the first (correct) expression “peer-graded assignments” and the later (incorrect) expression “peer grading student feedback” ought to be based on the same reasoning, so how could they end up different? It seems, based not only on this example, that people use other mental processes, not reasoning, for determining whether to hyphenate or not. These processes could be memorization, template-matching, or aesthetics.

There is support for this possibility. American English, as opposed to British English, is template-based in its treatment of punctuation with respect to quotation marks: Periods and commas always have to go inside the closing quotation mark; larger characters like ‘?’ are treated differently. This is a purely aesthetic choice, and is not logical. The IT industry has, in recent years, protested this and switched to logical/British punctuation. I saw this reflected in Microsoft Word grammar-correction recommendations as early as 2012. (Way to go, Microsoft!)

“An Early Bird Sound Collage” is the title of a work by an experimental-music band. Do they mean it’s an early bird-sound collage, or an early-bird sound collage? (They are experimental, so the intended meaning could easily have been either.)

“Introducing the Möbius-Twisted Turk’s Head Knot” is a paper title from the Bridges 2015 conference. They got the first one right. Now, is it a twisted Turk and his head knot, or is it the Turk’s-head knot that’s twisted?

Compare “small plane crash”, where we don’t know how big the plane was, but the crash was a minor concern, with “small-plane crash”, where we know that the plane was small, but the crash could still have been quite a big deal. In most cases, it is better not to be stingy with words; something like “a big crash involving a small plane” would be much clearer.

My next example is from course packs. In one case, one might be able to get a refund: “All packets are not refundable.” (unclear) vs. “All packets are non-refundable.” (quite clear!)

Here are some more examples from academia. There is a big difference between the “higher-ed budget” and a “higher ed budget” (we all want the latter). An “online learning report” is a report that gets posted online, while an “online-learning report” does not have to be posted, but it is about online learning. How about “main session outline” versus “main-session outline”?

What is the opposite of the right-hand rule? And what is the opposite of the right hand rule?[iv] The opposite of the former would be the left-hand rule, whereas the opposite of the latter would be the wrong hand rule.

Compare the expressions “proof of concept viruses for Linux” with “proof-of-concept viruses for Linux.” Again, from the tech fields: “no load gain” could be the opposite of “no-load gain”!

Even if it’s a stretch, one of the following could be about athletic performance whereas the other is clearly about academic performance: “college grade-point average” and “college-grade point average”

Here’s one from the field I teach in: Which technical term does not limit the model size: “small-signal model” or “small signal model”?

“Portland’s first clean air cab” was meant to indicate a regular car that doesn’t pollute, but as written it reads like an air cab (a flying car) that is clean.

I saw this on the web as well: THE HAITIAN TERRACING FOR HOPE PROJECT. One wonders if Hope Project will get some Haitian terracing, or if there is a Haitian project called ‘Terracing for Hope’. Since hyphenation cannot be imposed on proper nouns (such as the official name of a project), using one of the magic little words or changing word order could have helped this case: “A Project in Haiti: Terracing for Hope” or “The Haitian Project of Terracing for Hope” or “Terracing for Hope, a Haitian Project,” etc.

And how about all this free stuff we’re being sold all the time? This was seen on a billboard: “NEW TRANS-FAT FREE” with the word “free” on a separate line. It appears to imply that the new product has trans-fat, and is free. Likewise with all these products that sport the expression “gluten free”: apparently, there is gluten in it, but we’re not paying for the gluten.

And then, there is verb hyphenation, which confuses many people I know even more. Verbs are not hyphenated when used as verbs. However, when non-verbs are used with verbs as a compound verb, they do get hyphenated (and this is common sense). For example, “how to fly-fish” is very different from “how to fly fish.” In the latter, one tries to make fish fly. Likewise, “moonbathing” is about a person enjoying moonlight, whereas an expression like “moon bathing” suggests it’s the moon doing the bathing (assuming the rest of a proper sentence surrounds that expression). Similarly, “battle ready” (the battle is ready?) is very different from “battle-ready” (a compound modifier that shows that some person, equipment, or army is ready for battle).

Even after verbing turns a new word like ‘blog’ into a verb as well as a noun, combining it with the noun/adjective ‘video’ requires hyphenation: “learning to video blog” makes no sense, while “learning to video-blog” does.

Alright, perhaps we do not need to be so vigilant about compound modifiers all the time. Here is one I have seen where even I have to admit that context and common sense are quite sufficient to know what is meant even in the absence of a compulsively placed hyphen. It is “sexual abuse hysteria.” Perhaps, no hyphen is needed when an adjective becomes an adverb. I think this one is clear without the need for a hyphen.

Before leaving hyphenation, I must address adverbs. Adverbs are not hyphenated (although many well-meaning and thoughtful individuals do hyphenate them).

You only need hyphenation when the target of a modifier is ambiguous, which is why adverbs do not enter into hyphenated compounds. Recall that in one of the examples above, ‘purple’ had the option of referring to the people being eaten or to the creature doing the eating, so we had to specify which by knowing when to use a hyphen and when not to. In the case of adverbs, as in “culturally sensitive employer,” for instance, there is no question about which word ‘culturally’ is attached to; there is no such thing as a “culturally employee”; so there is no ambiguity, and no need to waste time with hyphens.

Commas are another matter whose consequences are disproportionate to the size of the punctuation mark involved. The following examples come from a variety of sources, but mostly from a delightfully brilliant book, to which I was introduced[v] during University Studies teacher training at Portland State University: Maxwell Nurnberg’s Questions You Always Wanted to Ask about English [1].

  1. Which statement clearly shows that not all bacteria are sphere-shaped?
     a) Christian A. T. Billroth called bacteria which had the shape of tiny spheres ‘cocci’. (In this case, it is implied that only some bacteria are spherical.)
     b) Christian A. T. Billroth called bacteria, which had the shape of tiny spheres, ‘cocci’. (In this case, it is implied that all bacteria are spherical.)
  2. Which sentence shows extraordinary powers of persuasion?
     a) I left him convinced he was a fool. (He is convinced, not I.)
     b) I left him, convinced he was a fool. (I am convinced.)
  3. Which is the dedication of a self-confessed polygamist?
     a) I dedicate this book to my wife, Edith, for telling me what to leave out. (In this case, he has one wife, whose name is Edith.)
     b) I dedicate this book to my wife Edith for telling me what to leave out. (In this case, we are led to believe he has at least one wife other than Edith. If it’s not clear why this is the case, see the ‘somatic, or bodily’ example in #4 below. The comma starts an explanation of who is being referred to.)
  4. In which sentence are you sure that “somatic” and “bodily” mean the same?
     a) Radioactive materials that cause somatic, or bodily, damage are to be limited in their use. (In this case, ‘bodily’ is offered as a more familiar synonym for ‘somatic’.)
     b) Radioactive materials that cause somatic or bodily damage are to be limited in their use. (In this case, the implication is that ‘somatic’ and ‘bodily’ are mutually exclusive, hence mean different things.)

Nurnberg’s examples were my introduction to the power of the comma. They went beyond the boilerplate rules I had been taught, like “Never place a comma before ‘because’!” and “Always put a comma before ‘too’!”

Soon, I was noticing commas where they should not have been, and a lack of commas where they were badly needed.

I read the following at http://www.riskshield.com.au/Glossary.aspx [2]: “Technique, procedure and rule used by risk manager to identify asses and examine the risks.” The missing comma could really have helped with the change in meaning caused by the missing ‘s’ in ‘assess’. On the other hand, perhaps the risk manager’s job really is to identify asses. If so, this is one particularly frank document.

In the book MIDI Systems and Control [3], I came across the following, “RS422 … is a standard for balanced communications over long lines devised by the EIA (Electronics Industries Association)” (p. 23). Without a comma right after ‘long lines’, the sentence is open to the interpretation that it was long lines that were devised by the EIA, as opposed to RS422.

Here is some correct comma use (as one would expect from Stanford University). In The Elements of Statistical Learning: Data Mining, Inference, and Prediction by Hastie, Tibshirani, and Friedman, there is a sentence “. . . in a study to try to predict whether the email was junk email, or ‘spam’” (p. 2). The comma is used, appropriately, in its explanation-signifying role. Later on, however, the authors say “In the handwritten digit example the output is one of 10 different digit classes . . .” (p. 9). This clearly needs a hyphen connecting ‘handwritten’ and ‘digit’. Currently (without the hyphen), it is the example that is handwritten, not the digits. This difference could be meaningful if one were referring to a solutions manual, for example, where examples are often handwritten.

Here’s an example I must have gotten from someone else or a book: “King Charles walked and talked; half an hour after, his head was cut off.” Let’s try it without the comma and the semicolon: “King Charles walked and talked half an hour after his head was cut off.” This is reminiscent of the English-class exercise that was making the rounds on Facebook at one point: “A woman without her man is nothing”, which can be punctuated either as “A woman: without her, man is nothing.” or as “A woman, without her man, is nothing.” [4]

Grammarly also posted this headline about Peter Ustinov’s travels: “Highlights of his global tour include encounters with Nelson Mandela, an 800-year-old demigod and a dildo collector.” The absence of the Oxford comma turns what was meant as a list of three entities into one entity (Mandela) and a description of him (an 800-year-old demigod who collects dildos). As I mention elsewhere, the structure of other languages (such as Turkish) may be such that the Oxford comma is useless, but it clearly makes a difference to the meaning of a sentence in English.

Here’s an instance from an e-mail I once wrote. I had asked my friend, “Did I go too far into unnecessary details by way of explaining what I’m doing and why?” This question addresses my explanation of what I was doing and why I was doing it. It is different from “Did I go too far into unnecessary details by way of explaining what I’m doing, and why?” which asks why my explanation is considered to be excessive.

Nurnberg does a much better job of revealing the power and importance of the comma in his book than my examples here do. I think everyone who writes in English should own a copy and read it.

I also want to address redundancy a little. Someone once said this to me during a conversation: “. . . considering they’re both not at the same time. . .” This would make sense if the two events spoken of were not coincident with a third event, but, in this case, there was no third event. All that was meant was “considering they’re not at the same time . . .”

I also frequently hear “continue on …” and “return back …” (This is frustrating.) To continue is to go on. Therefore, to continue on becomes ‘to go on on’. (We’re all familiar with “ATM machine” and “PIN number”…)

Redundancy is necessary when life-threatening situations are handled by electronic or electro-mechanical systems. Let’s leave the redundancy to those cases, and stop wasting our breath with it.

And then there is the centipede sentence: “I’d take MLK Boulevard would be my suggestion.”

My boss at my old job, fifteen years ago, was amazing at these. I wonder if he composed any regular sentences; they all seemed to be along the lines of “The reason is is that there is an address conflict was what happened.” (Otherwise, he was clearly a genius. I don’t know why he talked like that.)

I have now firmly established myself, in this post, as the worst possible killjoy nerd geek compulsive so-called “grammar N**i” ever, so let me close on a positive note.

I wrote this ridiculously long post because I love the English language, and I want people, especially its native speakers, to treat it well. I also love Turkish, Portuguese, German, and Japanese, and if you have ever read engineering material written in English by Japanese engineers, you know that the structure of non-Indo-European languages must be very different. The agglutinative use of cases in Turkish makes most of the discussion of this post unnecessary, but there is one thing Spanish, Portuguese, English, etc. have that I’m quite envious of: THE SUBJUNCTIIIIIVE (Cue dark, scary music.)

The subjunctive, which is still going strong in Spanish but has only a few surviving occasions of use in English, draws a distinction between fact and possibility, or between wishes and truths: “if he were” marks what follows as hypothetical, so it sounds wrong to present something hypothetical as though it were actual. (Note how it does not go “if he was” but switches to the awkward subjunctive, the non-reality case.) This is scientific thinking built into the language. The speaker is required to differentiate between factual cases and wishing or wondering. (If only we could get the health-care industry to differentiate between factual evidence- and mechanism-based care and wishful-thinking-based care.)

In conclusion: Languages are awesome. Let’s not stand by and watch them get eroded into redundancy and lack of clarity by mental (and technologically aided) sloth. If languages change, fine; let them change well, preserving the characteristics that allow humans to communicate with precision and subtlety. Good communication saves lives.

And what is the deal with “the both” in 2016? The expression ‘both of’ is intended for a different meaning than the expression ‘the two’. There was no reason to make a hybrid that goes ‘the both’. Please, everyone, stop saying this.

“The two of them went away, but we stayed.” (There were more than two people involved.)

“We both went away.” (There were only two people involved.)

English has set up this great way to incorporate set theory into the language. Why are we messing it up? Consider the following:

“Are you ready to compare both files?” (I would be, if I wanted to compare two files each with a third. However, if the comparison were simply between two files, it’s “Are you ready to compare the two files?”)

PS: I need to get this one off my chest, too: “Computation methods” would be methods of computation, whereas “computational methods” would be methods that make use of computation. And don’t get me started on ‘methodologies’: a methodology ought to be the study of methods, so how many people actually study methods? (I’m glad to have witnessed this being brought up at a discussion during the AAWM 2016 conference.)

There.

[1] Nurnberg, M., Questions You Always Wanted to Ask about English (but were afraid to raise your hand), New York: Washington Square Press, 1972.

[2] http://www.riskshield.com.au/Glossary.aspx (not there anymore)

[3] Rumsey, F., MIDI Systems and Control (Second Edition), Oxford, UK: Focal Press, 1994.

[4] facebook.com/Grammarly

[i] “12-month spend of $500” it said. At least it’s hyphenated correctly, but what was wrong with the noun ‘spending’ that we need a new noun to replace it?

[ii] People who know me can attest to this.

[iii] From the band NASALROD

[iv] Again, altering the word order could clarify such cases: After all, we never say “thumb rule” instead of “rule of thumb”; we always take the time to say “rule of thumb”!

[v] Like most of the important things in life

Time Travel? (It comes down to causality’s unawareness.)

Forget the usually discussed paradox about time travel; how about how one would know whether one had come back to the proper (intended) present?

Would one rely on clocks and calendars? Clocks move at different speeds (as explained by relativity). And how much should the calendar have changed while you were off in the past? Does the travel itself take any time? How about if everyone else also time-traveled while you were off doing that? Who, or what, decides what the correct time is “back home”? (To say that it would take so much energy that only one person should be able to time-travel or get close to the speed of light at a time would be an economic argument, not a physical one, so in a discussion of physics, we must allow for the possibility that everyone is time-traveling concurrently—let’s say, by starting together but going off in different directions by different amounts.) Then who or what keeps so-called real time going at its regular pace? (It doesn’t even have one, thanks to relativity!)

What we mean by time travel—if it’s not kept track of via clocks—is that causality is unaware of the traveler’s appearance and disappearance. So, what happens to the air molecules where the traveler will suddenly appear? Are they pushed out, or do they somehow combine or coexist with the traveler’s? If they are pushed out, then how massive an object can time-travel? (Because, at some point, we’d be moving so much air that it would lead to catastrophic damage to someone or something. So, the paradox extends to the legal and safety realms as well, although those are merely practical.)

Happy 65th birthday, Deacy!

Thank you for Back Chat, Cool Cat, You And I, Spread Your Wings, In Only Seven Days, Misfire, Who Needs You, Under Pressure, You’re My Best Friend, One Year Of Love, If You Can’t Beat Them, Need Your Loving Tonight, and Pain Is So Close To Pleasure, not to mention AOBTD, FWBF & IWTBF. (And, really, every Queen song!).

You really nailed it whenever you wrote one. By the end of my life, I would like to have written one song that’s like one of yours.

What happened to ‘Windows 9’?

I was talking to an old friend a couple of years ago, just before my band was going to play a show at a bar in Seattle. He’s been at Microsoft for quite a few years, and was saying that one of Vernor Vinge’s old books is like a guidebook for a lot of MS people, in terms of what future technologies the higher-ups (perhaps) would like the company to contribute to or pioneer.

Okay, so, MS folks read sci-fi; no surprise there.

Now for the other half of the puzzle: I have been reading the second 2016 issue of THE SKEPTIC, and took some extra time to dig into the arguments for and against the (eventual, distant) possibility of uploading one’s entire consciousness onto silicon (or some other substrate), partly because another friend has already invested in having his (and his wife’s) brain frozen for eventual uploading.

The author of one of the articles from THE SKEPTIC synthesizes results and possibilities from quite a bit of current and recent work in neuroscience and neurobiology, with references to numerous scientific papers from journals such as Nature, Nature Methods, The Journal of Neuroscience, Proc. NAS, Nature Neuroscience, Frontiers in Psychology, Progress in Neurobiology, Neuroscience, Journal of Neural Engineering, etc. (You get the picture.)

He also references several books: some technical, some more along the lines of popular science, and one, a fiction book.

I ordered two of the three technical/scientific books online, and found the fiction (sci-fi) book at Powell’s, so I started reading that one. It’s called Kiln People, and it’s by David Brin. It reads a little like a detective-story version of The Mind’s I: Fantasies and Reflections on Self and Soul, which is an excellent compilation of stories and essays meant to exercise the reader’s understanding of self, soul, consciousness, and the boundaries of such. It was compiled/written (“composed and arranged”) by Douglas Hofstadter and Daniel Dennett. The Brin book flows better, as it is pure fiction (although this is not to say The Mind’s I doesn’t flow; it’s a superbly crafted book).

And like most good sci-fi, Kiln People is well-informed fiction (like Diaspora by Greg Egan), and that’s why, I suppose, it was listed along with dozens of peer-reviewed neuroscience articles.

In any case, my reason for writing this small post is not neuroscience, or consciousness, or spirituality, or the human soul, but a wild conjecture about the naming of Windows 10. If Microsoft programmers, managers, executives, etc., like to read sci-fi, and some of them have read Vinge, it is not unlikely that out of that many people, some have read Kiln People by David Brin. In the edition I bought, on page 396, the narrator and main character lists people whose deaths have come at the hands of their own creation. The list contains Oedipus’ father, Baron Frankenstein, and “William Henry Gates and Windows ’09.” (The book is copyrighted 2002.)

Granted, there was no year-named “Windows ’09” (that I know of). However, the abbreviation “’09” (for the year) could have been pronounced by some people just like the numeral ‘9’, and I’m guessing the sci-fi fans at MS, in marketing, for example, may have read the book, and not wanted to follow Windows 8 with a Windows 9, avoiding Brin’s fictional prediction, or even better, avoiding making Brin look bad when that did not come true.

Otherwise, why go ‘Windows 7’, ‘Windows 8’, ‘Windows 10’?

Auto-(in)correct: Emergent Laziness?

Is it Google? Is it LG? Or is it emergence?

I am leasing an LG tablet running Android to go with my phone service. I thought the large screen and consequently larger keyboard would make my life easier. The first several days of use, however, have been unreasonably annoying. The salesperson had said that this device would be slave to my LG Android cell phone, but my settings did not seem to carry over. What’s worse, no matter how much I dig through menu trees to get to certain settings I’m looking for, I can’t find them. For example, I may want autocorrect off, or I may not want the latest e-mail in my inbox to be previewed. (I prefer to see a bird’s-eye view of all the recent e-mails, packed as tightly as possible, and I can usually set this very quickly and easily, but not on this tablet.) The reason might be that I am about to go to class and teach in a few minutes, and I do not want to think about the committee-issue e-mail that just arrived, so I don’t want Gmail to parade it in front of me.

So, the settings seem to be very well hidden, or maybe not even available to the user anymore (because that has been the trend in computer-and-Internet technology: Make the user think less, and have less control; so-called intelligent software will decide all your preferences for you).

And perhaps the software can deduce (or, more likely, induce) your preferences as they were at a certain time under a certain set of circumstances, but human beings expect the freedom to change their minds. Software doesn’t seem to allow this.

Furthermore, crowd-sourcing is considered the ultimate intelligence. I know and understand the algorithms behind most of these ideas, and totally agree that they are beautiful and awesome (and really fun). However, engineers, programmers, mathematicians, and other nerds (like me) finding something super-fun should not be how life is redesigned. The crowd-sourcing of spelling and automatic correction is leading us from artificial intelligence to natural laziness. My device wants to change “I’m” to “imma”. (Before you protest that I’m also ignorant and don’t know to put the period inside the quotation marks, read my disclaimer about switching to British/logical punctuation.) Am I now forced to appear as if I have abandoned capitalization and picked up an excessively colloquial form of spelling? And if I had, then fine, but I haven’t.

It gets worse. The learning algorithm is not learning, at least not from me. The following has now happened with several phrases and words on this new tablet, and I’ve looked further into altering this setting, to no avail.

When I type “I will”, it automatically replaces it with “I silk”. If I backspace and type “I will” again, it replaces it again. And it doesn’t learn from my actions; I have patiently (and later on, a further dozen or so times, impatiently) retyped “I will” more than 30 times, only to watch Gmail running on my Android LG device switch it back to “I silk” immediately.[1]

Where did this come from? Is there a band called “I silk”? Is this a new phrase that’s in these days, and I haven’t been overhearing my students enough to know about it?

Or is it because earlier that day, I tried to write “I seek to …” where the ‘seek’ was autocorrected to ‘silk’? (for who knows what reason)

And what happens when this behavior is pushed beyond e-mail on a tablet, and I’m not able (or allowed) to write either “I will” or “I seek” as I type a blog entry such as this on my laptop, or as I try to type an e-mail to explain what’s going wrong to Google’s tech support, or someone else’s tech support?

This really doesn’t make sense. Shouldn’t machine learning give us results that make sense? (That used to be the idea.) Now, perhaps, it’s just supposed to give us results that are popular or common. It seems we’re not building artificial intelligence; we’re building artificial commonality.

This is not a rant for elitism (which, anyway, is also used in machine learning, in evolutionary algorithms). It’s about the loss of freedom of speech, to be able to say what one is trying to say the exact way one wants to say it. The ability for clear, unequivocal communication is not something to be eliminated from the human experience; it is something to continue to strive for. Likewise, convenience over freedom (or over accuracy) is not a good choice of values. In the end, the person pushing little buttons with letters marked on them will be held responsible for the content. Shouldn’t that person be in charge of what words appear when they push the little buttons? Shouldn’t we at least be able to turn off auto-correct, or have some control over when it applies?

This is being taken away, little by little. “Oh, it’s just a tablet.” … “Oh, it’s just an e-mail. Nobody expects it to be spelled correctly.” Pretty soon, no one will be able to spell anything correctly, even if they know how to, because their devices won’t allow them to have that little bit of control.

 

[1] Also, do not even try to write in a foreign language, or mix English and Turkish in one sentence. In an early e-mail I wrote on this device, I had to repeat the letter ‘i’ (which appeared only once in the actual word) five times (for a total of six ‘i’s) for it to stop auto-(in)correcting “geliyorum” to something like “Selma”. I had to type “geliiiiiiyorum”.

How to Reason in Circuit Analysis

The following conversation played out in my head as I was grading an exam problem that had a supernode composed of two neighboring supernodes. Many students (in introductory circuit analysis) had difficulties with this problem, so here’s what I plan to present when I explain it.

HOW TO REASON IN NODAL ANALYSIS

Q: What is the main type of equation involved in performing nodal analysis?

A: KCL equation

Q: What electrical quantity is represented in each term of a KCL equation?

A: current

Q: Are there any elements for which, if the current is not stated, we do not have a way (a defining equation[1]) to know and express the current?

A: yes

Q: What are these elements?

A: voltage sources of any type[2] that are directly between two non-reference essential nodes (NRENs)

Q: Why is that a problem?

A: There is no defining equation (like Ohm’s Law) for a source, and if it’s directly between two NRENs, then there is no other element in series with it.

Q: So what if there is no other element in series with it?

A: If there were a resistor in series with it, we could use Ohm’s Law on the resistor.

Q: Why not use Ohm’s Law on the source?

A: Ohm’s Law does not apply to sources, does not deal with sources; it’s only for resistors[3].

Q: Fine… What’s with the non-reference thing?

A: If a voltage source (of any kind) has one terminal attached to the reference node (ground), then we automatically know the voltage at the other end (with respect to ground).

 

Conclusion: If there is a voltage source between two NRENs, circle it to make a (super)node, and write KCL out of that node, without going inside it (until later, when you need another equation, at which point you use KVL).
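
To make the recipe concrete, here is a minimal sketch in Python (with made-up component values, not the exam problem itself): the supernode contributes the one KCL equation written out of it, the source inside it contributes the KVL equation, and the sympy library merely does the algebra.

    # Hypothetical circuit: a 10 V source sits directly between the two
    # non-reference essential nodes v1 and v2 (so we circle it as a supernode),
    # a 2-ohm resistor ties v1 to ground, an 8-ohm resistor ties v2 to ground,
    # and a 4 A source pushes current into the supernode.
    import sympy as sp

    v1, v2 = sp.symbols('v1 v2')

    # KCL out of the supernode: the current supplied by the 4 A source
    # equals the current leaving through the two resistors to ground.
    kcl = sp.Eq(v1/2 + v2/8, 4)

    # KVL inside the supernode: the voltage source fixes v1 - v2.
    kvl = sp.Eq(v1 - v2, 10)

    print(sp.solve([kcl, kvl], (v1, v2)))   # {v1: 42/5, v2: -8/5}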

 

[1] A defining equation is an expression that relates current through a two-terminal element to the voltage across a two-terminal element by means of the inertial aspect of the element (its capacitance, resistance, inductance, and I suppose, pretty soon, its memristance) and the passive sign convention (PSC).

[2] i.e., independent voltage source, current-dependent voltage source, voltage-dependent voltage source: It’s about the voltage aspect, not about the dependence aspect.

[3] two-terminal elements with a linear current–voltage relationship; note the en dash : )

Making sentences. . .

Winter term has been crazy, although in a good way. I’m teaching six classes this term, all of which are going great (I have awesome students), though one is a new prep, so I haven’t had time to post here. However, I recently found some silliness on my phone that I had come up with while waiting for a train: making sentences by combining band names.

Here they are:

As Blood Runs Black The Refused Pierce The Veil

Tower of Power Was (Not Was) Built To Spill Ashes

Barenaked Ladies Poison The Well From First To Last

Bring Me The Horizon Within Temptation At The Drive-in

Blonde Redhead Of Montreal Cursed The Only Ones

(And, of course, a band name, all by itself: I Love You But I’ve Chosen Darkness)

Science, Clave, and Understanding

When Dr. Eben Alexander defended, in one of the major news magazines, his book (“proof[i]”) [1] about a spiritual non-physical afterlife realm, part of his argument was that he is a surgeon, and therefore a scientist. Surgeons are highly trained, highly specialized people who perform a very difficult and critically important service. It would be absurd not to recognize their value. Their work is without a doubt science-based, but does that make it “science”? (There are, of course, surgeons who publish scholarly work (although I’ve noticed that in some cases, it’s not about surgery, but in fields as distant as music), and thus function as scholars, and therefore scientists.)

A scientist is not anyone who functions as a professional practitioner of a difficult and science-based field; a scientist is someone who sets up, tests, and evaluates (mostly via statistical data analysis) testable hypotheses (about anything, including the afterlife and spiritual realms, if necessary), and more importantly, does so within the guidelines of rigor, accuracy, objectivity, skepticism, and open-mindedness [2][ii]. It is worrisome to imagine that surgeons are setting up double-blinded clinical trials of surgical practices as part of their work, choosing to apply a known good technique on one patient and an as-yet-unsupported one on another patient. (In other words, I really hope surgeons do not act as scientists.) Maybe they do; I’d like to know, so please give me feedback on this question.

Assuming, though, that they don’t endanger patients’ lives for the sake of science, as we tend not to do anymore, it seems safe to assume that surgeons are highly trained specialists who practice state-of-the-art medicine. In this sense, they are not scientists. They use the findings and results of science in their practical, applied work (medicine). They must, then, fall somewhere between applied scientists and technologists (inclusive).

To say that someone who practices a specialty that is based on scientific findings is therefore a scientist is like saying a sandwich-shop employee is a farmer because they use bacon, lettuce, and tomato in their work. (The fact that surgery is far more specialized does not invalidate the argument.)

The professions that discover, invent, develop, and apply are all different. The roles can overlap—scientists do develop and build new equipment to perform their experiments, but these are not mass-produced. Anything we can purchase repeatedly on Amazon or at Best Buy, say, was not made by scientists. It was designed, developed, tested, and manufactured by engineers, technologists, technicians, and other professionals, not by scientists, even if scientists were involved in the early stages. As for applied scientists, including those who work at laboratories, characterizing soil samples, say, or performing tests, they are also highly trained specialists of scientific background who are not doing science at that point. As one XKCD comic suggested [3], you can simply order a lab coat from a catalog; no one will check your publication record. Science is not about what you’re wearing or what degrees you have; it’s about what, exactly, you’re doing.

The public’s idea of what science is seems to be “mathy and difficult, preferably done in a lab coat while uttering multisyllabic words you don’t want to see in your cereal’s list of ingredients.” This may be a decent shortcut for pop-culture purposes, but it is not what science really is. I will not go into the inductive-method-vs-hypothetico-deductive-method-vs-what-have-you debate here because there are people who do that professionally, and do it very well. (I have been enjoying Salmon’s The Foundations of Scientific Inference [4] immensely.) What I do want to do is draw two parallels in succession: first from the preceding discussion to explanation and understanding, and then from those concepts to explanations and understanding of clave (in music).

The former has been done quite successfully in Paul-Dirac-Medal-and-Prize-winning physicist David Deutsch’s earlier book The Fabric of Reality [5]. I am not concerned here with the bulk and main point of his book, but only with his opening argument about the role of science (explanation) and what it means to understand. Deutsch criticizes instrumentalism because of its emphasis on prediction at the cost of explanation (pp. 3–7). He gives rather good examples of situations in which no scientist (or layperson, for that matter) would be satisfied with good predictions without explanations (p. 6, for example). He does not deny the role and importance of predictions, but argues that “[t]o say that prediction is the purpose of scientific theory is [. . .] like saying that the purpose of a spaceship is to burn fuel” (similar to another author’s argument that the purpose of a car is not to make vrooom–vrooom noises; they just happen to do that as part of their operation[iii]). Deutsch states that just like spaceships have to burn fuel to do what they’re really meant to do, theories have to pass experimental tests in order “to achieve the real purpose of science, which is to explain the world.” (Think about it: Why did we all, as children, get excited about science? To understand the world!)

He then moves on to explain that theories with greater explanatory power than the ones they’ve replaced are not necessarily more difficult to understand, and certainly do not necessarily add to the list of theories one has to understand to be a scientist (or an enthusiast). Theories with better explanatory power can be simpler. Furthermore, not everything that could be learned and understood needs to be: See his example of multiplication with Roman numerals (pp. 9–10). It might be fun, and occasionally necessary to have some source in which to look it up (for purposes of the history of mathematics, say), but it’s not something anyone today needs a working knowledge of; it has been superseded. His example for this is how the Copernican system superseded the Ptolemaic system, and made astronomy simpler in the process (p. 9). All of this is discussed in order to make the point that there is a distinction between “understanding and ‘mere’ knowing” (p. 10), which is where my interest in clave comes into play.

Several “explanations” of clave (sometimes even with that word in the title) that were published in recent years have been of the “mere knowing” type in which clave patterns are listed, without any explanation as to how and why they indicate what other patterns are allowed or disallowed in the idiom. Telling someone that x..x..x...x.x... is 3-2, and ..x.x...x..x..x. is its opposite, so 2-3, and (essentially) “there you go, you now know clave” does nothing towards explaining why a certain piano pattern played over one is “sick” (good) and over the other, sickening (bad) within the idiom.

Imagine if the natural sciences went about education the way we musicians do with clave. A chapter in a high-school biology book would contain a diagram of the Krebs cycle, with all the inputs, outputs (sorry for the electrical-engineer language), and enzymes given by name and formula, followed by “and now you know biochemical pathways,” without any explanation as to how it has anything to do with an organism being alive. I’m flabbergasted that musicians and music scholars find mere listings of clave son, clave rumba, [and . . . you know, the other one that won’t be named[iv]] sufficient as so-called explanations[v].

All of this reminds me of an argument I once had with a very intelligent person. I had said, in my talk at Tuesday Talks, that science is concerned with ‘why’ and ‘how’, not just ‘how’. He disagreed, which I think is because he thought of a different type of ‘why’: the theological ‘why’. I, instead, had in mind Deutsch’s type of ‘why’: “about what must be so, rather than what merely happens to be so; about laws of nature rather than rules of thumb” (p. 11). I would add, about consistency (even given Goedel, because I’m Bayesian like that, and not so solipsistic), which Deutsch mentions immediately afterwards, calling it ‘coherence’.[vi]

I understand that Hume, Goedel, and others have shown us that our confidence in science, or even math, ought not to be infinite. It isn’t. Even in a book like The God Delusion, even Richard Dawkins makes it clear that he is not absolutely certain. Scientific honesty requires that we not be absolutely certain. But we can examine degrees of (un)certainty, and specifically because of the solipsists, we have to ignore them[vii], and be imperfect pursuers of an imperfect truth, improving our understanding, all the while knowing that it could all be wrong.

To that end, I continue to test my clave hypothesis under different genres. Even if it’s wrong, it definitely is elegant.

[1] Alexander, M.D., E., Proof of Heaven: A Neurosurgeon’s Near-Death Experience and Journey into the Afterlife, Simon & Schuster, 2012.

[2] Baron, R. A., and Kalsher, M. J., Essentials of Psychology, Needham, MA: Allyn & Bacon, A Pearson Education Company, 2002.

[3] http://xkcd.com/699/ (last accessed 12/25/2015).

[4] Salmon, W. C., The Foundations of Scientific Inference, Pittsburgh, Pennsylvania: University of Pittsburgh Press, 1966.

[5] Deutsch, D., The Fabric of Reality: A leading scientist interweaves evolution, theoretical physics, and computer science to offer a new understanding of reality, New York: Penguin Books, 1997.

[i] Scientists do not speak of proof; they deal with evidence. Proofs are limited to the realm of mathematics. There are no scientific proofs; there are just statistically significant results, which are presented to laypersons as ‘proof’ because even scientists have quite a lot of difficulty interpreting measures of statistical significance, and the average person has no patience for or interest in the details of philosophy of science.

[ii] The authors of [2] give the following excellent definitions for these precise terms. Accuracy: “gathering and evaluating information in as careful, precise, and error-free a manner as possible”; objectivity: “obtaining and evaluating such information in a manner as free from bias as possible” [Ibid.]. ‘Bias’ in this case refers to the cognitive biases that are natural to human thinking and judgment, such as confirmation bias, the Hawthorne effect, selection bias, etc.; skepticism: the willingness to accept findings “only after they have been verified over and over”; and open-mindedness: not resisting changing one’s own views—even those that are strongly held—in the face of evidence that they are inaccurate [2]. To these we can add principles like transferability and falsifiability, and the key tools of double-blinding, randomization, blocking, and the like. Together, all these techniques and principles constitute science. Simply being trained in science and carrying out science-based work is not sufficient.

[iii] I think it was Philips in The Undercover Philosopher, but I’m not sure.

[iv] If you’ve read my post about running into cool people from SoundCloud at NIPS ’15, you’ll know what pattern I’m talking about: the English-horn-like-named pattern.

[v] Fortunately, we do have work from the likes of Mauleón and Lehmann that show causal relationships between individual notes or phrases in different instrumental lines, but since their work and mine, the trend has reverted to listing three patterns, and calling that an explanation.

[vi] Perhaps this paragraph needs its own blog post. . .

[vii] Because, according to them, they don’t exist.

NIPS 2015: Thoughts about SoundCloud, genres, clave tagging, clave gamification, multi-label classification, and perceptual manifolds

On December 9th, at NIPS 2015, I met two engineers from SoundCloud, which not only provides unsigned artists a venue to get their music heard (and commented on), along with recommendation and music-oriented social networking, but also, if I understand correctly, is interested in content analysis for various purposes. Some of those have to do with identifying work that may not be original, which can range from quotation to plagiarism (the latter being an important issue in my line of work: education), but they also involve the creation of derivative content, like remixing, to which they seem to have a healthy approach. (At the same event, the IBM Watson program director also suggested that they could conceivably be interested in generative tools based on music analysis.)

I got interested in clave-direction recognition to help musicians, because I was one, and I was struggling—clave didn’t make sense. Why were two completely different patterns in the same clave direction, and two very similar patterns not? To make matters worse, in samba batucada, there was a pattern said to be in 3-2, but with two notes in the first half, followed by three notes in the second half. There had to be a consistent explanation. I set out to find it. (If you’re curious, I explained the solution thoroughly in my Current Musicology paper.)

 

Top: Surdo de terceira. Bottom: The 3-2 partido-alto for cuíca and agogô. Note that playing the partido-alto omitting the first and third crotchet’s worth of onsets results in the terceira.

However, clave is relevant not just to music-makers, but to informed listeners and dancers as well. A big part of music-in-society is the communities it forms, and that has a lot to do with expertise and identity in listeners. Automated recognition of clave-direction in sections of music (or entire pieces) can lead to automated tagging of these sections or pieces, increasing listener identification (which can be gamified) or helping music-making.

My clave-recognition scheme (which is an information-theoretically aided neural network) recognizes four output classes (outside, inside, neutral, and incoherent). In my musicological research, I also developed three teacher models, but only from a single cultural perspective. I have since submitted a work-in-progress and accompanying abstract to AAWM 2016 (Analytical Approaches to World Music) about what would happen if I looked at clave direction from different cultural perspectives (which I have encoded as phase shifts), and graphed the results in the complex plane (just like phase shift in electric circuits).
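
As a rough illustration of the encoding (the perspective names and angles below are made up, not the ones in the submitted work), a cultural perspective can be treated as a unit-magnitude complex number whose angle is the phase shift, which is exactly how phase is handled in AC circuit analysis:

    # A rough sketch, with made-up angles: each cultural perspective becomes a
    # point on the unit circle of the complex plane, at its phase-shift angle.
    import cmath, math

    perspective_angles_deg = {"perspective A": 0, "perspective B": 45, "perspective C": 90}
    for name, deg in perspective_angles_deg.items():
        z = cmath.exp(1j * math.radians(deg))   # unit-magnitude phasor
        print(f"{name}: {z.real:+.3f} {z.imag:+.3f}j")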

Another motivating idea came from today’s talk Computational Principles for Deep Neuronal Architectures by Haim Sompolinsky: perceptual manifolds. The simplest manifold proposed was line segments. This is pertinent to clave recognition because among my initial goals was extending my results to non-idealized onset vectors: [0.83, 0.58, 0.06, 0.78] instead of [1101], for example. The line-segment manifold would encode this as onset strengths (“velocity” in MIDI terminology) ranging from 0 (no onset) to 1 (127 in MIDI). This will let me look inside the onset-vector hypercube.
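
As a toy illustration of that generalization (the velocity values and the simple division by 127 are mine, for illustration only, not necessarily the final encoding), MIDI velocities can be rescaled into onset strengths between 0 and 1:

    # Hypothetical MIDI velocities for a four-slot pattern; dividing by 127
    # turns the idealized binary vector [1, 1, 0, 1] into a graded onset vector.
    midi_velocities = [105, 74, 8, 99]
    onset_strengths = [v / 127 for v in midi_velocities]
    print([round(s, 2) for s in onset_strengths])   # [0.83, 0.58, 0.06, 0.78]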

Another tie-in from NIPS conversations is employing Pareto frontiers with my clave data for a version of multi-label learning. Since I can approach each pattern from two phase perspectives, and up to three teacher models (vigilance levels), a good multi-label classifier would have to provide up to 6 correct outputs, and in the case that a classifier cannot be that good, the Pareto frontier would determine which classifiers are undominated.
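
Here is a minimal sketch of that Pareto-frontier idea (the classifier names and per-label accuracies are made up purely for illustration): keep the classifiers that no other classifier matches or beats on every one of the six labels.

    # Each candidate classifier gets one accuracy per label (two phase
    # perspectives x three teacher models = six labels). A classifier is
    # dominated if some other classifier is at least as good on every label
    # and not identical; the undominated ones form the Pareto frontier.
    def pareto_frontier(candidates):
        frontier = []
        for name, scores in candidates.items():
            dominated = any(
                other != scores and all(o >= s for o, s in zip(other, scores))
                for other_name, other in candidates.items()
                if other_name != name
            )
            if not dominated:
                frontier.append(name)
        return frontier

    classifiers = {                      # made-up per-label accuracies
        "net_A": (0.91, 0.88, 0.84, 0.90, 0.79, 0.86),
        "net_B": (0.93, 0.85, 0.80, 0.92, 0.75, 0.88),
        "net_C": (0.90, 0.87, 0.83, 0.89, 0.78, 0.85),  # dominated by net_A
    }
    print(pareto_frontier(classifiers))  # ['net_A', 'net_B']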

Would all this be interesting to musicians? Yes, I think so. Even without going into building clave-trainer software into various percussion gear or automated-accompaniment keyboards, this could allow clave direction to be gamified. Considering all the clave debates that rage in Latin-music-ian circles (such as the “four great clave debates” and the “clave schism” issues, like the one around Giovanni Hidalgo’s labeling scheme quoted in Modern Drummer*), a multi-perspective clave-identification game could be quite a hit.

So, how does a Turkish math nerd get to be obsessed by this? I learned about clave—the Afro-Latin (or even African-Diasporan) concept of rhythmic harmony that many people mistake for the family of fewer than a dozen patterns, or for a purely Cuban or “Latin” organizational principle—around 1992 from the musicians of Bochinche and Sonando, two Seattle bands. I had also grown up listening to Brazilian (and Indian, Norwegian, US, and German) jazz in Turkey. (My first live concert by a foreign band was Hermeto Pascoal e Grupo, featuring former CBC faculty Jovino Santos Neto.) So, I knew that I wanted to learn about Brazilian music. (At the time, most of what I listened to was Brazilian jazz, like Dom Um Romao and Airto, and I had no idea that they mostly drew from nordestino music, like baião, xote, côco, and frevo**―not samba).

Fortunately, I soon moved to Portland, where Brian Davis and Derek Reith of Pink Martini had respectively founded and sustained a bloco called Lions of Batucada. Soon, Brian introduced us to Jorge Alabê, and then to California Brazil Camp, with its dozens of amazing Brazilian teachers. . . But let’s get back to clave.

I said above that clave is “the Afro-Latin (or even African-Diasporan) concept of rhythmic harmony that many people mistake for the family of fewer than a dozen patterns, or for a purely Cuban or ‘Latin’ organizational principle.” What’s wrong with that?

Well, clave certainly is an organizational principle: It tells the skilled musician, dancer, or listener how the rhythm (the temporal organization, or timing) of notes in all the instruments may and may not go during any stretch of the music (as long as the music is from a tradition that has this property, of course).

And clave certainly is a Spanish-language word that took on its current meaning in Cuba, as explained wonderfully in Ned Sublette’s book.

However, the transatlantic slave trade did not move people (forcibly) only to Cuba. The Yorùbá (of today’s southwest Nigeria and southeast Benin), the Malinka (a misnomer, according to Mamady Keïta, for the peoples of Mali, Ivory Coast, Burkina Faso, Gambia, Guinea, and Senegal), and the various Angolan peoples were brought to many of today’s South American, Caribbean, and North American countries, where they culturally and otherwise interacted with Iberians and the natives of the Americas.

Certain musicological interpretations of Rolando Antonio Pérez Fernández’s book La Binarización de los Ritmos Ternarios Africanos en América Latina have argued that the organizational principles of Yoruba 12/8 music, primarily the standard West African timeline (X.X.XX.X.X.X)

Bembé ("Short bell") or the standard West African timeline, along with its major-scale analog

and the Malinka/Manding timelines met the 4/4 time signatures of Angolan and Iberian music, and morphed into the organizational timelines of today’s rumba, salsa, (Uruguayan) candombe, maracatu, samba, and other musics of the Americas.

Some of those timelines we all refer to as clave, but for others, like the partido-alto in Brazil***, it is sometimes culturally better not to refer to them as clave patterns. (This is understandable, in that Brazilians speak Portuguese, and do not always like to be mistaken for Spanish-speakers.)

Conceptually, however, partido-alto in samba plays the same organizational role that clave plays in rumba and salsa, or the gongue pattern plays in maracatu: It immediately tells knowledgeable musicians how not to play.

In my research, I found multiple ways to look at the idiomatic appropriateness of arbitrary timing patterns (more than 10,000 of them, only about a hundred of which are “traditional” [accepted, commonly used] patterns). I identified three “teacher” models, which are just levels of strictness. I also identified four clave-direction categories. (Really, these were taught to me by my teacher-informers, whose reactions to certain patterns informed some of the categories.)
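For a sense of scale, here is one way such a pattern space could be enumerated. The choice of 16 steps and of four to six onsets per pattern is my assumption for illustration, not the criterion actually used in the research; it simply yields a space of the same order of magnitude.

```python
# A minimal sketch of enumerating 16-step binary timing patterns. The onset-count
# bounds are an assumption for illustration, not the research's actual criteria.
from itertools import combinations

STEPS = 16

def patterns(min_onsets=4, max_onsets=6):
    """Yield TUBS strings (X = onset, . = rest) with the given onset counts."""
    for k in range(min_onsets, max_onsets + 1):
        for onsets in combinations(range(STEPS), k):
            yield "".join("X" if i in onsets else "." for i in range(STEPS))

candidates = list(patterns())
print(len(candidates))   # 14196 patterns under this assumption
print(candidates[0])     # 'XXXX............'
```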

Some patterns are in 3-2 (which I call “outside”). While the 3-2 clave son (X..X..X...X.X...):

3-2 (outside) clave son, in northern and TUBS notation

is obvious to anyone who has attempted to play anything remotely Latin, it is not so obvious why the following version of the partido-alto pattern is also in the 3-2 direction****: .X..X.X.X.X..X.X

The plain 3-2 partido-alto pattern. (The pitches are approximate and can vary with cuíca intonation or the agogô maker’s accuracy.) "Bossa clave" in 3-2 and 2-3 are added in TUBS notation to show the degree of match and mismatch with 3-2 and 2-3 patterns, respectively.

 

Some patterns are in 2-3 (which I call “inside”). And many patterns that are heard throughout all Latin American musics are clave-neutral: They provide the same amount of relative offbeatness no matter which way you slice them. The common Brazilian hand-clapping pattern in pagode, X..X..X.X..X..X., is one such pattern:

The clave-neutral hand-clapping pattern in pagode, AKA tresillo (a Cuban name for a rhythm found in Haitian konpa, Jamaican dancehall, and Brazilian xaxado)

It is actually found throughout the world, from India and Turkey, to Japan and Finland, and throughout Africa; from Breakbeats to Bollywood to Metal. (It is very common in Metal.) The parts played by the güiro in salsa and by the first and second surdos in samba have the same role: They are steady ostinati of half-cycle length. They are foundational. They set the tempo, provide a reference, and go a long way towards making the music danceable. (Offbeatness without respite, as Merriam said*****, would make music undanceable.)
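One crude way to see why such patterns are neutral is sketched below. It makes no claim to reproduce my actual grammar: it only checks whether the two half-cycles of a 16-step pattern carry identical onset placement, in which case the pattern offers the same relative offbeatness whichever half you treat as the front.

```python
# A deliberately naive neutrality check: identical half-cycles are a sufficient
# (not necessary) sign of clave-neutrality. This is only a proxy for the fuller grammar.
def looks_neutral(pattern):
    """True if a 16-character TUBS string (X = onset, . = rest) has identical halves."""
    assert len(pattern) == 16
    return pattern[:8] == pattern[8:]

print(looks_neutral("X..X..X.X..X..X."))  # True: the pagode hand-clapping pattern
print(looks_neutral("X...X...X...X..."))  # True: four on the floor
print(looks_neutral("X..X..X...X.X..."))  # False: 3-2 clave son
```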

Here are some neutral patterns: X...X...X...X... (four on the floor, which, with some pitch variation, can be interpreted as the first and second surdos):

Four quarter notes, clave-neutral (from Web, no source available)

....X.X.....X.X. (from ijexá):

surdo part for ijexá (from http://www.batera.com.br/Artigos/dia-do-folclore)

 

and XxxXXxxXXxxXXxxX. (This is a terrible way to represent swung samba 16ths. Below are Jake “Barbudo” Pegg’s diagrams, which work much better.)

Jake "Barbudo" Pegg's samba-sixteenths accent and timing diagrams (along with the same for "Western" music)

The fourth category is incoherent patterns. These are patterns that are not neutral, yet do not conform to either clave direction. (One of my informers gave me the idea of a fourth category when he reacted to one such pattern by making a disgusted face and a sound like bleaaahh.)

A pattern that has the clave property immediately tells all who can sense it that only patterns in that clave direction and patterns that are clave-neutral are okay to play while that pattern (that direction) is present. (We can weaken this sentence to apply only to prominent or repeated patterns. Quietly passing licks that cross clave may be acceptable, depending on the vigilance level of the teacher model.)
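In code, that rule might look like the sketch below; the category strings are just labels, and a real decision would also weigh how prominent the candidate pattern is and which teacher model (vigilance level) is in force.

```python
# A minimal sketch of the compatibility rule described above, for prominent,
# repeated patterns. Vigilance levels and quiet passing licks are ignored here.
DIRECTIONS = {"outside", "inside", "neutral", "incoherent"}

def may_play(established, candidate):
    """May a prominent `candidate` pattern sound while `established` sets the direction?"""
    assert established in DIRECTIONS and candidate in DIRECTIONS
    if candidate == "incoherent":
        return False
    return candidate == "neutral" or candidate == established

print(may_play("outside", "neutral"))  # True
print(may_play("outside", "inside"))   # False: crossing clave
```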

So, why mention all this right now? (After all, I’ve published these thoughts in peer-reviewed venues like Current Musicology, Bridges, and the Journal of Music, Technology and Education.)

For one thing, those are not the typical resources most musicians turn to. Until I can write up a short, highly graphical version of my clave-direction grammar for PAS, I will need to make some of these ideas available here. Secondly, the connections to gamification and to musical-social-networking sites, like SoundCloud, are new ideas I got from talking to people at the NIPS reception, and I wanted to put them out there right away.

 

FOOTNOTES

* Mattingly, R., “Giovanni Hidalgo-Conga Virtuoso,” Modern Drummer, Modern Drummer Publications, Inc., Cedar Grove, NJ, November 1998, p. 86.

** While I was talking to Mr. Fereira of SoundCloud this evening at NIPS, he naturally mentioned genre recognition, which is the topic of my second-to-last post. (I argued for the need for expert listeners from many cultural backgrounds, which could be augmented with a sufficiently good implementation of crowd-sourcing.) I think he was telling me about embolada, or at least that’s how I interpreted his description of this MC-battle-like type of improvised nordeste music. How many genre-recognition researchers even know where to start in telling a street-improvisation embolada from even, say, a pagode-influenced axé song like ‘Entre na Roda’ by Bom Balanço? (Really good swing detection might help, I suppose.)

*** This term has multiple meanings; I’m not referring to the genre partido-alto, but the pattern, which is one of the three primary ingredients of samba, along with the strong surdo beat on 2 (and 4) and the swung samba 16ths.

**** In the sense that, in the idiom, it goes with the so-called 3-2 “bossa clave” (a delightful misnomer), X..X..X...X..X.., as well as with the rather confusing (to some) third-surdo pattern ....X.X.....XX.X, which has two notes in its first half and three notes in its second half. (Yes, it’s in 3-2. My grammar for clave direction explains this thoroughly. [http://academiccommons.columbia.edu/catalog/ac:180566])

The “bossa clave” is a bit like an English horn; it’s neither.

Top: Surdo de terceira. Bottom: The 3-2 partido-alto for cuíca and agogô. Note that playing the partido-alto omitting the first and third crotchet’s worth of onsets results in the terceira.

***** See Merriam: “continual use of off-beating without respite would cause a readjustment on the part of the listener, resulting in a loss of the total effect; thus off-beating [with respite] is a device whereby the listeners’ orientation to a basic rhythmic pulse is threatened but never quite destroyed” (Merriam, Alan P. “Characteristics of African Music.” Journal of the International Folk Music Council 11 (1959): 13–19.)

ALSO, I use the term “offbeatness” instead of ‘syncopation’ because the former is not norm-based, whereas the latter turns out to be so:

Coined by Toussaint as a mathematically measurable rhythmic quantity [1], offbeatness has proven invaluable to the preliminary work of understanding Afro-Brazilian (partido-alto) clave direction. It is interpreted here as a more precise term for rhythmic purposes than ‘syncopation’, which has a formal definition that is culturally rooted: Syncopation is the placement of accents on normally  unaccented notes, or the lack of accent on normally accented notes. It may be assumed that the norm in question is that of the genre, style or cultural/national origin of the music under consideration. However, in all usage around the world (except mine), normal accent placement is taken to be normal European accent placement [2, 3, 4].

For example, according to Kauffman [3, p. 394], syncopation “implies a deviation from the norm of regularly spaced accents or beats.” Various definitions by leading sources cited by Novotney also involve the concepts of “normal position” and “normally weak beat” [2, pp. 104, 108]. Thus, syncopation is seen to be norm-referenced, whereas offbeatness is less contextual, as it depends solely on the tactus.

Kerman, too, posits that syncopation involves “accents in a foreground rhythm away from their normal places in the background meter. This is called syncopation. For example, the accents in duple meter can be displaced so that the accents go on one two, one two, one two instead of the normal one two, one two” [4, p. 20; all emphasis in the original, as written]. Similarly, on p. 18, Kerman reinforces that “[t]he natural way to beat time is to alternate accented (“strong”) and unaccented (“weak”) beats in a simple pattern such as one two, one two, one two or one two three, one two three, one two three.” [4, p. 18]

Hence, placing a greater accent on the second rather than on the first quarter note of a bar may be sufficient to invoke the notion of syncopation. By this definition, the polka is syncopated, and since it is considered the epitome of “straight rhythm” by many performers of Afro-Brazilian music, syncopation clearly is not the correct term for what the concept of clave direction is concerned with. Offbeatness avoids all such cultural referencing because it is defined solely with respect to a pulse, regardless of cultural norms. (Granted, what a pulse is may also be culturally defined, but there is a point at which caveat upon caveat becomes counterproductive.)
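As a minimal sketch of how little context such a measure needs, the count below takes only a pattern and an assumed tactus. The four-beat tactus on steps 0, 4, 8, and 12 of a 16-step cycle is my simplification for illustration; it is not Toussaint’s exact formulation.

```python
# A minimal, tactus-relative offbeatness count (an illustration in the spirit of
# the definition above, not Toussaint's exact measure).
def offbeatness(pattern, tactus=(0, 4, 8, 12)):
    """Count onsets (X) that do not fall on the assumed tactus pulses."""
    return sum(1 for i, step in enumerate(pattern) if step == "X" and i not in tactus)

print(offbeatness("X..X..X...X.X..."))  # 3-2 son clave -> 3 off-tactus onsets
print(offbeatness("X...X...X...X..."))  # four on the floor -> 0
```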

Furthermore, in jazz, samba, and reggae (to name just a few examples) this would not qualify as syncopation (in the sense of accents in abnormal or unusual places) because beats other than “the one” are regularly accented in those genres as a matter of course. In the case of folkloric samba, even the placement of accents on the second eighth note, therefore, is not syncopation because at certain places in the rhythmic cycle, that is the normal—expected—pattern of accents for samba, part of the definition of the style. Hence, it does not constitute syncopation if we are to accept the definition of the term as used and cited by Kauffman, Kerman, and Novotney. In other words, “syncopation” is not necessarily the correct term for the phenomenon of accents off the downbeat when it comes to non-European music.

Moreover, in Meter in Music, Houle observes that “[a]ccent, defined as dynamic stress by seventeenth- and eighteenth-century writers, was one of the means of enhancing the perception of meter, but it became predominant only in the last half of the eighteenth century [emphasis added]. The idea that the measure is a pattern of accents is so widely held today that it is difficult to imagine that notation that looks modern does not have regular accentual patterns. Quite a number of serious scholarly studies of this music [European art music of 1600–1800] make this assumption almost unconsciously by translating the (sometimes difficult) early descriptions of meter into equivalent descriptions of the modern accentual measure” [5, p. viii]. Thus, it turns out that the current view of rhythm and meter is not natural, or even traditional, let alone global. In fact, Essential Dictionary of MUSIC NOTATION: The most practical and concise source for music notation is perfect for all musicians—amateur to professional (the actual book title) says that “the preferred/recommended beaming for the 9/8 compound meter is given as three groups of three eighth notes” [6, p. 73]. This goes against the accent pattern implied by the 9/8 meter in Turkish (and other Balkan) music, which is executed as 4+5, 5+4, 2+2+2+3, etc., but rarely 3+3+3. The 9/8 is one of the most common and typical meters in Turkish music, not an atypical curiosity. This passage is included here to demonstrate the dangers of applying western European norms to other musics (as indicated by the phrase “perfect for all musicians”).

[1]    Toussaint, G., “Mathematical Features for Recognizing Preference in Sub-Saharan African Traditional Rhythm Timelines,” Lecture Notes in Computer Science 3686: 18–27, Springer Berlin/Heidelberg, 2005.
[2]    Novotney, E. D., “The 3-2 Relationship as the Foundation of Timelines in West African Musics,” Ph.D. dissertation, University of Illinois at Urbana-Champaign, Urbana-Champaign, Illinois, 1998.
[3]    Kauffman, R. 1980. African Rhythm: A Reassessment. Ethnomusicology 24 (3):393–415.
[4]    Kerman, J., LISTEN: Brief Edition, New York, NY: Worth Publishers, Inc., 1987, p. 20.
[5]    Houle, G., Meter in Music, 1600–1800: Performance, Perception, and Notation, Bloomington, IN: Indiana University Press, 1999.
[6]    Gerou, T., and Lusk, L., Essential Dictionary of MUSIC NOTATION: The most practical and concise source for music notation is perfect for all musicians—amateur to professional, Van Nuys, CA: Alfred Publishing Co., Inc., 1996.

It’s not only the rent: Old, new, and middle Portland

There has been much ferment, uproar, and outcry against the gentrification of “the old Portland” in the weeklies of Portland, Oregon, and in conversations around town lately. The skyrocketing of rent is a well-known and much discussed issue, as is the second big migration of a certain underrepresented minority out of what has become the new standard boundaries of hip Portland. Another, somewhat less publicized reason to be concerned (except for the recent WW article) is the squeezing out of Portland’s artists, the very people who took a grimy drug-troubled city no one outside the Pacific Northwest had heard of, and turned it into the modern designer clean-living mecca of the United States. To understand this process, and what I think is going wrong, I must clarify a point of definition: Most people talking about “the old Portland” are not actually talking about old Portland; they’re referring to what I will call “middle Portland.” “The old Portland” is what you can see in the movie ‘Drugstore Cowboy’: Crime, drugs, rain, empty streets, and little to do.

I moved to Portland between the old and middle periods, in 1995. My first visit, a few years prior, had me entering the city on a Greyhound bus through the NW Industrial Zone (not exactly a pretty sight, but a necessary one), and staying at a hostel on Hawthorne just to see a famous Senegalese band before I headed back to the small town where I was going to college.

Portland was legendary: It had La Luna and Satyricon. Bands like Dead Moon, WIPERS, and Poison Idea were rumored to play there. I could only imagine what they were like. I later found out I was pretty far off. In any case, I did eventually move to Portland in 1995 to go to grad school, preferring PSU to higher-ranking universities because I wanted to be in a city, no matter how small.

And it was small. Traffic was virtually nonexistent. People wore sweatpants everywhere, unless they cared even less and wore pajama bottoms, or cared more and wore outrageously awesome punk outfits. High-heeled shoes were unknown, unless they were worn by occasional glam holdovers. It was nothing like the Portland of 2005, what I call middle Portland, or the Portland of today, 2015, the new Portland.

In 2005, you could still get from any part of town to any other in 45 minutes by bus (Tri-Met) and 15 minutes if you drove. Downtown to Hillsboro took 20 minutes. A few years prior to that, I lived in the Brooklyn neighborhood, close-in SE, and worked in Hillsboro. My commute took about 25 minutes.

I am not listing these travel times for purposes of complaining, but only for comparison. After all, I grew up in a city of 12 million, and to this day, I’m not especially bothered by even a two-hour commute. My point is that the Portland of 2005 was different both from the Portland of 1995 and from the Portland of today.

What I experienced was the development of Portland into an arts mecca, the next Seattle or Austin (from which we stole our Music Millennium slogan), and the city collectors traveled to from as far away as Japan to buy vinyl records. In 2008, when I attended a conference in Philly, a Drexel student asked me how long I had lived in Portland. When I told her I’d been there since ’95, she said, “Oh, so you’ve been there since before it was cool.”

Yes, I was a small part of making it that way—I’m one of the thousands of musicians, and maybe tens of thousands of artists overall, who helped turn Portland into the place to be if you wanted to be cool... not one of the significant ones who made it big, but I was there, playing behind a few of the big names whenever I could, all because I happened to talk to everyone I met about being a drummer. There weren’t that many around, and I eventually met some awesome people who taught me, encouraged me, and occasionally called me up for something pretty awesome.

But, this story is not about me; it’s about those who are still trying to make music, make art, make films, and maybe even make it in Portland. (I was going to say “make it big” but these days, people are just trying to get by.)

And here’s the rub. When the people moving into old east-side neighborhoods start lobbying to end late-night live music or pressure their neighbors to stop practicing in their basements, they are trying to turn Portland-proper into a suburb. They moved to Portland because it’s “cool,” part of which is that it has interesting jobs and beautiful houses to live in within walking distance of bars, restaurants, and coffee shops. Many of those establishments are staffed by musicians, painters, graphic designers, theater actors, comedians, and writers. What made Portland cool in the first place was the artists! The musicians and graffiti artists are foremost among the people who made Portland visible to the rest of the world (though I should not forget the graphic artists, some of whose work reached me in my crazy third-world hometown back in the ’80s). And everyone contributed to the liberal, progressive, sometimes-so-woo-as-to-be-regressive, but always artistic culture of Portland. These people are being driven away by the rapidly rising cost of living, and also being told to stop making all that noise and mess.

This post was inspired by the entry ‘Manufactured Spaces’ (specifically pp. 47–49) in the book ‘Portlandness: A Cultural Atlas’. Created by a big team of cartographers, designers, students, and teachers, this is both a beautiful and a substantial book. The discussion of the official interpretation of quality of life drove me to add my voice to the uproar over the new Portland. True, I wasn’t born or raised there, but I spent more of my life there than anywhere else. I don’t exactly miss the old Portland, and I don’t mind many of the improvements of the new Portland. But I do worry about the destruction of middle Portland, which to me is all about the arts.