My Seven Eras of Music Delivery, Your Qualia, and My Certain Lack of Free Will

I was reading someone’s post recently about what they called their seven eras of music delivery and noticed that I also have seven, although mine are different. They are:

  1. the 8-track-tape-cartridge era
  2. the era of the 45-RPM 7-inch vinyl single
  3. the cassette-and-12-inch-LP era
  4. the CD era
  5. the early days of downloading, before the MP3, with the UNIX format .au files
  6. the peak era of downloading: MP3s
  7. the era of streaming and ultra-expensive vinyl

Era 1: My earliest memories of recorded music are of 8-track cartridges that my parents and I listened to in the car. It was on these strange storage devices, the primary feature of which, I was told, was that they could skip from the middle of a song to the middle of another song with the push of a button, that I first heard Santana (off this compilation), Demis Roussos (this exact release, from which the songs I remember best are My Only Fascination and Lovely Lady of Arcadia), The Beatles’ She Loves You, Simon & Garfunkel’s El Condor Pasa and Cecilia, Fredrick Davies & Lewis Anton’s Astrology Rap, The Emotions’ I Don’t Wanna Lose Your Love, and a few selections from the musical Cabaret. I didn’t know any English at that time—I still remember the gobbledygook phonetic lyrics I sang along with She Loves You. (Şilagzu, if you can read Turkish.)

Each of those songs is still magical to me.

Era 2: At the age of 3, I’m told, I wanted the household record player to be placed in my room. At first, I mostly listened to the records of children’s stories my parents had bought for me. (Here’s an image of one, at least until it is sold and the page is no longer there.)

Then, sometime between the ages of 6 and 10, my mother and I once stayed on my aunt and uncle’s boat, which was moored at the marina of a hotel on the Mediterranean, outside Antalya. Almost every night, the open-air disco would, at some point, feature the song LE FREAK by Chic. Each night, if it had not already been played while we were sitting at the outdoor disco’s dinner area, I would refuse to go to bed until I heard it. I also had gobbledygook phonetic syllables attached to that song (Frigav). That word happened to be mildly reminiscent of the name of the ice-cream bars you could only get during the intermission at a movie theater: the Alaska Frigo. The aluminum wrapper would send a terrible shock through your teeth.

I had also taken over my parents’ 7-inch 45-RPM singles at this point: I had two singles by my favorite singer (favorite, that is, until I heard Chic) and one by my second-favorite singer. (I have yet to finish finding all six songs on MP3 or something rippable to MP3. As of early 2025, there’s one to go.)

Era 3: Then, sometime around age 11 and shortly after my father died of cancer, my mother must have bought some compilation LPs of pop songs because I was suddenly listening to New Order, Shannon, The Romantics, Local Boy, Gary Low, Kajagoogoo, Indeep, Denise Edwards, Fox The Fox, Icehouse, Natasha King, Matthew Wilder, Gazebo, Chris de Burgh, Alphaville, and Madonna.

Some time after that, I went out and bought a few single-artist albums: Hot Dog by Shakin’ Stevens, the eponymous debut of (the band) Nena, and Michael Jackson’s Thriller, with Flashdance following soon afterwards. (I used to longingly stare at Purple Rain and Victory at stores, but those were always too expensive.)

In ninth grade, one of our teachers arranged a Xmas/NYE gift exchange. A classmate gave me the vinyl record of Brothers In Arms. To this day, I am very touched and grateful.

The other recording medium of this era was the cassette tape. And given how difficult it was to afford LPs, most “record stores” primarily relied on another business model. They held only one copy of each record, and they would make mix tapes for you from their collection. You would choose the length (and type: Chrome, etc.) of blank tape and pay a little more than the cost of the tape. You could specify everything you wanted to go onto the tape; you could specify some of it and leave the rest to them; or you could leave it all up to them (preferably after they’d gotten to know your taste). It was one of the ways we found out about music that was new to us (and, typically, older than us).

You could also buy albums this way. I paid to have POWERSLAVE recorded onto a cheap cassette and my favorite birthday gift of my whole life is still the dubbed tape of Alphaville’s Forever Young my mom got me.

Era 4: CDs came a few years later to my part of the world than they did to the US. My first one was a Glenn Miller compilation, a present from my aunt. I was mesmerized by this shiny object with precise writing printed right on it. I wanted to eat it and worship it. I also remember sniffing it. (It smelled very good.)

My first five CDs still feel miraculous to me—fantastical otherworldly objects. And it continues to make me sad that so many of my friends would, in later decades, say things like “CDs are no good except as coasters.”

I still think they’re beautiful.

Era 5: Before anyone else I knew had heard of downloading songs, people working on UNIX systems started exchanging songs sometime in the mid-’90s. This is how I became familiar with Björk (oodles of whose albums I’ve legally purchased since), Young MC, and the brilliant Ode to My Car by Adam Sandler, as well as some unidentified recordings of gamelan. I had to put them on a cassette to be able to listen to them at home. The resulting sound quality was atrocious; whether that was due to the tape or the .au encoding, I don’t know. By the way, I’m no audiophile. The best and most enjoyable music listening of my life was done on a mono handheld journalist-style cassette player. So, when _I_ think it sounds atrocious, it’s gotta be pretty bad.

Still, the music comes through. The brain makes sure of that. (I read on a discussion board that a music lover uses electronic equipment to listen to music while an audiophile uses music to listen to electronic equipment.)

Era 6: I never downloaded anything from the original Napster, but one of the bands I was in, back when I was in three to six bands at any given time, was being pirated on Napster (and getting royalty checks, too, from elsewhere—for an Australian movie). We felt pretty good about the Napster part.

Also, because I did not own an Apple device until around 2017, all my MP3s came from ripping my CD collection (and from a friend’s Nomad Zen, fully stocked with music from Jamaica and DRC, that he gave me one day, for reasons unknown). I spent most of 2002 and 2003, especially, ripping my CDs to MP3. Then, my favorite band started putting out MP3-only songs, so I started purchasing them, first at 7digital, with whom I’ve since had an unpleasant experience, and then on Qobuz (named after a Central Asian Turkic instrument, the kopuz).

Back in the early aughts, I also legally downloaded comedy from laugh.com, which seems to have disappeared long ago.

I’m still ripping (and still buying) CDs, though at a much lower rate than 20 years ago. These days, it’s because I prefer to do my own tagging. Doing my own tags not only avoids the typical “correcting” most database companies like to do to Portuguese song titles (which they Spanishize), but also lets me, with the help of Discogs, locate the earliest pressing in the leading format of the time in the artist’s country of origin and thus learn the definitive stylization of the album name and the song titles.

Era 7: I found out about Spotify in early 2008 from a friend who worked for CDBaby. I immediately looked it up. I was getting ready to make an account, but in those days I was still reading every word of every contract, agreement, document of terms and conditions, privacy statement, and such. I don’t remember now what I didn’t like about their policies but I did not make a Spotify account—and it turns out I was right. (Who could have guessed so much evil could come out of Sweden!)

I did stream from Tidal for a bit and then Qobuz briefly as well. I suppose streaming is fine if you’re not obsessed with music in the particular way that I am. For me, streaming was an anxiety-producing experience: What if I missed the name of an artist because I was busy with something else? I would have to go back. That would take away from whatever work I was trying to do and whatever concentration I had managed to build. And what if I liked something so much that I wanted to make sure no merger, IPO, hostile takeover, or license change could take it away from me?

What if there’s so much stuff I like that I can’t function as a proper adult?

And that’s why I still buy physical media and then rip it (or purchase digital files on Bandcamp and Qobuz). I need to control the possibility of what I could listen to at any time. I want that full control right up to the moment I die.

Anyway, how about them vinyl releases? (Or, as I’ve heard some people say, “vinyls” [yuck]… I just received marketing from Laufey that said “vinyls”… Needless to say, I’m not buying that product.)

For a while, I was really happy with the upsurge in vinyl availability. That was back when they still came with download cards.

Have you noticed? The download cards are gone. Now, they really just expect you to pay US$35–65 for a single LP and pay more if you want it digital too. Or maybe they assume everyone is streaming, so this is a non-issue. But even streaming technology is going to be replaced by something else eventually. Maybe it won’t be era 8—maybe it’ll be more like era 15—but the streaming player will sooner or later be replaced by an implant. And maybe it won’t stream from a server at all. Maybe we’ll all get an AI that makes up the music, the biographies and histories, the band and artist names, and everything else based on its reading of our brain waves. If it also controls our input and output ports, we could “interface” with friends who have “the same taste” in music and be discussing two entirely different fake songs, fake bands, or fake genres and not even know it, because the two AIs handling our “quality time” together would censor, filter, transform, and augment whatever the other person is saying to match what each of us would most enjoy hearing. If this sounds dystopian, think about our subjective sensory and affective experiences and how none of us has any idea (and could never have any idea) whether the experiences we bond over are, to the other person, what we think they are. The philosophical term ‘qualia’ gives us a handle for asking whether that question even has any meaning. How could anyone—you, me, or a third party—ever know whether what I experience as the color blue is what you experience as the color blue? Maybe it’s your yellowish orange; maybe it’s your ‘sour taste’ or ‘dull pain’. Probably it’s neither. In any case, where in the chain of my sensory nerves would you have to insert yours to find out?

I don’t think there is any point of insertion at which this would work.

And I think this is similar to—in the frustration it can cause and in the realization it leads to—Gazzaniga’s point about free will: At which point along the chains of cause and effect would you want there to be this fully arbitrary freedom?

Let’s say you decide, as an exercise in free will, to refrain from urinating for as long as possible, even if that results in urinating while making a presentation to the board of trustees or at a conference. Now, that would be an act of free will. How many of us are willing to do this?

I’ve always known my answer to the trolley problem. If I were to find myself in the circumstances of the trolley problem, I would be proof against free will. There are two possibilities: I pull the lever; I don’t pull the lever. (If you want to make it more general: I do the thing; I don’t do the thing—I intervene; I don’t intervene.)

If I do the one I believe I would, where’s the free will in that? That action was determined well in advance by the combination of my nature and nurture. After all, I’ve known my answer for over a decade. And if I do the opposite, how would that be free will if some subconscious part of me suddenly acted contrary to what I think I would do?

Either way, I clearly end up not having free will. So, maybe let’s not worry so much about the coming era of implants and sensory cocoons, with AIs repainting all our interactions with other entities to make us think we are connecting at a deep level while experiencing arbitrary AI hallucinations. We won’t know the difference.

No Absurdity Necessary

In a 2022 book about the fear of death and the desire for immortality, Dean Rickles refers to an example the Scottish philosopher Thomas Reid gave to demonstrate his belief that, in Rickles’ words, “It is often difficult to speak of an enduring self even in perfectly ordinary (mortal) scenarios.” (p. 12)

Reid’s example is about an 80-year-old general who can remember being 40 but cannot remember what his life was like when he was a child. (He seems to have no memories left of his childhood.) Reid evidently argued that we would consider the 80-year-old and the 40-year-old the same person (the former being the continuation of the latter), and that we would consider the 40-year-old man who could remember details about his childhood the same person he was as a child, but that we should not consider an 80-year-old who has no recollection of his childhood the same continuing ‘self’ as that child.

If I haven’t done justice to Reid’s argument, it is because I’m anxious to present it in a symbolic way and then to demolish it using more modern ways of thinking we now have that were not available to Reid. (And even though we have access to these ways of thinking, it’s my experience that they are not anywhere close to widely appreciated or employed in the United States.)

The symbolic form of the argument is this:

Let the 80-year-old army general be A, his 40-year-old self be B, and his childhood self be C.

Let’s use the term ‘equals’ to mean “remembers and therefore can be considered a conscious continuation of…”

Reid says we have A equals B; B equals C; A does not equal C.

Rickles calls this “an absurdity.” (p. 97, end note 6 to chapter 2).

My response:

The idea that the 80-year-old general is the same person as the 40-year-old officer he remembers being, but not the child he does not remember being (whom the 40-year-old officer did remember), is only absurd if you subscribe to reductionism and classical logic. This is the old question of whether one can step into the same stream twice, except that what is in question here is the stability of the stream of memory rather than the stability of the stepper. And the question is no less pointless. It appears to have a point because, in classical logic, if the general equals the officer and the officer equals the boy, then the general necessarily equals the boy. However, given that our cells degrade and are ejected or lost, and that all of the molecules that make up each of our bodies are entirely turned over (on some long timescale or another), in the traditional view we should never be able to be ourselves.

All these problems are solved with multivalued logic and with an understanding of emergence.

First, fuzzy logic, being a type of multivalued logic (perhaps a continuous-valued logic), has no difficulty with the boy having zero membership in the-general-ness and the officer having some intermediate membership in the-general-ness. While two membership functions at opposite ends of a continuum may have no overlap with each other, each can still have finite overlap with some intermediate membership function. (In this example there is only one intermediate function, which overlaps with each of the functions at the extreme ends; that arrangement is sufficient, not necessary.)
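Here is a minimal sketch of that idea in Python. It is my own illustration, not anything Reid or Rickles proposed, and the trapezoidal shapes and age breakpoints are invented purely for demonstration:

```python
# Hypothetical membership functions for "boy", "officer", and "general" over age.
# All breakpoints below are made up for illustration only.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], linear in between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def boy(age):     return trapezoid(age, -1, 0, 12, 18)
def officer(age): return trapezoid(age, 16, 25, 55, 70)
def general(age): return trapezoid(age, 35, 60, 100, 120)

for age in (8, 40, 80):
    print(age, round(boy(age), 2), round(officer(age), 2), round(general(age), 2))

# The boy (8) has zero membership in "general"; the 40-year-old has full
# membership in "officer" and a small, intermediate membership in "general";
# the 80-year-old has full membership in "general" and none in "boy".
# "Boy" and "general" never overlap, yet "officer" overlaps with both:
# no contradiction, just graded categories.
```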

This answer alone may not be satisfying enough. Emergence comes to the rescue.

NOTE: The fact that we don’t know the mechanism for something does not mean that something isn’t happening. Emergence seems to be a fact.

Emergence has to do with how a Gestalt is—somehow—constituted from minuscule components, not one of which has any of the high-level characteristics possessed by the Gestalt. The classic introductory examples include how ant colonies can behave in coordinated adaptive ways that no single ant (and no collection of ants on the order of only a few hundred, say) can manage. Similarly, the world’s economy, the immune systems of animals, the complex chemical interactions of many plants with their environment (soil and air, at a minimum), and the fact of human emotions and creativity are instances of complex and adaptive behaviors and capabilities emerging in ways we don’t yet understand out of the complex interactions (communication, feedback, etc.) of interconnected small components, none of which is capable of the same sophistication at any scale or to any degree.
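A tiny, standard toy example of this (my addition, not something from the essay’s sources) is a one-dimensional cellular automaton: each cell follows a trivial local rule involving only itself and its two neighbors, yet the pattern that unfolds across the whole row is not contained in, or predictable from, any single cell:

```python
# Wolfram's Rule 30: each cell's next state depends only on its own state and
# its two neighbors', yet the global pattern is famously complex.

WIDTH, STEPS = 64, 32
RULE = 30  # the rule number encodes the next state for each 3-cell neighborhood

row = [0] * WIDTH
row[WIDTH // 2] = 1  # start with a single "on" cell in the middle

for _ in range(STEPS):
    print("".join("#" if c else "." for c in row))
    row = [
        (RULE >> (row[(i - 1) % WIDTH] * 4 + row[i] * 2 + row[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```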

Our neurons exchange pulse sequences of electric potential via various ions and neurotransmitters. As far as we know, there is/was no mini Tom Stoppard, Ginger Rogers, Prince, or Björk hidden in the synapses of those individuals, just as there is no homunculus at the control center of the brain that sees the colors we see and feels the emotions and sensations we experience. Pain, hunger, joy, love, fear, pleasure, colors, sounds, smells, and all other emotions and sensations are emergent experiences of the whole person, made from—but not made up of—interactions among complex human systems such as the nervous system, gut bacteria, the endocrine system, long- and short-term memory, and more, the effects of each of which are made from—but not made up of—interactions among the constituent subsystems and components. Your spinal cord is not happy, fearful, anxious, or euphoric—you, the whole emergent person, are. Likewise, what makes you you is not only your experiences between ages 35 and 95 or between 0 and 15 or any interval, no matter how wide or narrow: Even without any memories of his life before age twenty and having completely replaced every molecule in his body (more than half of which were water and bacteria anyway), the 80-year-old general is still the person who resulted from his birth, his early life, his genetics, his society, and all the physical, emotional, social, and spiritual experiences his past bodies have been through. Strictly delineating where the person-who-was-once-a-boy ends and the person-who-is-a-general begins is an unnecessary exercise imposed on us by classical logic and the battle between reductionistic and holistic modes of understanding ourselves and the world around us. Systems thinking (which includes the concept of emergence) and fuzzy logic (which admits that some categories are loose in the real world) have removed the need to waste our time fitting things (experiences, people, music, success and failure, etc.) into all-or-nothing categories, about which we can only be completely right or completely wrong.

Teaching machine learning within different fields

Everyone is talking about machine learning (ML) these days. They usually call it “machine learning and artificial intelligence” and I keep wondering what exactly they mean by each term.

It seems the term “artificial intelligence” has shaken off its negative connotations from back when it meant top-down systems (as opposed to the superior bottom-up “computational intelligence” that most of today’s so-called AI actually uses) and has come to mean what cybernetics once was: robotics, machine learning, embedded systems, decision-making, visualization, control, etc., all in one.

Now that ML is important to so many industries, application areas, and fields, it is taught in many types of academic departments. We approach machine learning differently in ECE, in CS, in business schools, in mechanical engineering, and in math and statistics programs. The granularity of focus varies, with math and CS taking the most detailed view, followed by ECE and ME departments, followed by the highest-level applied version in business schools, and with Statistics covering both ends.

In management, students need to be able to understand the potential of machine learning and be able to use it toward management or business goals, but do not have to know how it works under the hood, how to implement it themselves, or how to prove the theorems behind it.

In computer science, students need to know the performance measures (and results) of different ways to implement end-to-end machine learning, and they need to be able to do so on their own with a thorough understanding of the technical infrastructure. (If what I have observed is generalizable, they also tend to be more interested in virtual and augmented reality, artificial life, and other visualization and user-experience aspects of AI.)

In math, students and graduates really need to understand what’s under the hood. They need to be able to prove the theorems and develop new ones. It is the theorems that lead to powerful new techniques.

In computer engineering, students also need to know how it all works under the hood and to have some experience implementing some of it, but they don’t have to be able to develop the most efficient implementations unless they are targeting embedded systems. In either case, though, it is important to understand the concepts, the limitations, and the pros and cons as well as to be able to carry out applications. Engineers have to understand why there is such a thing as PAC (probably approximately correct) learning, what the curse of dimensionality is and what it implies for how one does and does not approach a problem, what the NFL (no-free-lunch) theorems say and how that should condition one’s responses to claims of a single greatest algorithm, and what the history and background of this family of techniques are really like. These things matter because engineers should not expect to be plugging-and-playing cookie-cutter algorithms from ready-made libraries. That’s being an operator of an app, not being an engineer. The engineer should be able to see the trade-offs, plan for them, and take them into account when designing the optimal approach to solving each problem. That requires understanding parameters and structures, and again the history.
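As a small numerical illustration of one of these concepts (my own sketch, with arbitrary sizes and a toy uniform distribution), the following shows the distance-concentration facet of the curse of dimensionality: as the number of dimensions grows, the distances from one random point to all the others bunch together, so “near” and “far” neighbors become hard to tell apart:

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):
    points = rng.random((500, d))                            # 500 points in the unit cube
    dists = np.linalg.norm(points - points[0], axis=1)[1:]   # distances to point 0
    spread = (dists.max() - dists.min()) / dists.mean()      # relative contrast
    print(f"d={d:5d}  relative spread of distances: {spread:.3f}")

# The relative spread shrinks as d grows, which is one reason distance-based
# methods (k-nearest neighbors, some kernels) degrade in very high dimensions.
```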

Today, the field of ‘Neural Networks’ is popular and powerful. That was not always the case, although it has been twice before. Each time, perhaps like an overextended empire, the edifice of artificial neurons came down (though only to come up stronger some years later).

When I entered the field, with an almost religious belief in neural networks, they were quite uncool. The wisdom among graduate students seemed to be that neural nets were outdated, that we had SVMs now, and that with the latter machine learning was solved forever. (This reminds me of the famous, and probably apocryphal, late-1800s patent-office declaration that everything that could be invented had been invented.) Fortunately, I have always benefited from doing whatever was unpopular, so I stuck to my neural nets, fuzzy systems, evolutionary algorithms, and an obsession with Bayes’ rule while others whizzed by on their SVM dissertations. (SVMs are still awesome, but the thing that has set the world on fire is neural nets again.)

One of the other debates raging, at least in my academic environment at the time, was about “ways of knowing.” I have since come to think that science is not a way of knowing. It never was, though societies thought so at first (and many still think so). Science is a way of incrementally increasing confidence in the face of uncertainty.

I bring this up because machine learning, likewise, never promised to have the right answer every time. Machine learning is all about uncertainty; it thrives on uncertainty. It’s built on the promise of PAC learning; i.e., it promises to be only slightly wrong, and to be even that only most of the time. The hype today is making ML seem like some magical panacea for all business, scientific, medical, and social problems. For better or worse, it’s only another technological breakthrough in our centuries-long adventure of making our lives safer and easier. (I’m not saying we haven’t done plenty of wrongs in that process—we have—but no one who owns a pair of glasses, a laptop, a ball-point pen, a digital piano, a smart phone, or a home-security system should be able to fail to see the good that technology has done for humankind.)
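For what that promise looks like numerically, here is a toy sketch (mine, not the essay’s) of the Hoeffding-style bound that underlies PAC-type guarantees: for a single fixed hypothesis with true error 0.10, the empirical error measured on n samples stays within epsilon of the truth except with probability at most 2·exp(−2n·epsilon²). All of the numbers below are made up for illustration:

```python
import math
import random

TRUE_ERROR, EPSILON, N, TRIALS = 0.10, 0.05, 500, 20_000
random.seed(0)

violations = 0
for _ in range(TRIALS):
    # Empirical error of a fixed hypothesis on N independent test samples.
    empirical = sum(random.random() < TRUE_ERROR for _ in range(N)) / N
    if abs(empirical - TRUE_ERROR) > EPSILON:
        violations += 1

print("observed rate of being off by more than epsilon:", violations / TRIALS)
print("Hoeffding bound on that rate:", 2 * math.exp(-2 * N * EPSILON**2))
```

The observed rate comes in well under the bound; the point is simply that the guarantee is probabilistic (“most of the time”) and approximate (“within epsilon”), never exact.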

I left the place of the field of Statistics in machine learning until the end. Statisticians are the true owners of machine learning. We engineering, business, and CS people are leasing property on their philosophical (not real) estate.

 

Science-doing

There are (at least) two types of scientists: scientist-scientists and science-doers.

Both groups do essential, difficult, and demanding work that everyone, including the scientist-scientists, needs. The latter group (like the former) includes people who work in research hospitals, water-quality labs, soil-quality labs, linear accelerators, R&D labs of all kinds, and thousands of other places. They carry out the daily work of science with precision, care, and a lot of hard work. Yet, at the same time, in the process of doing the doing of science, they typically do not get the luxury of stepping back, moving away from the details, starting over, and discovering the less mechanical, less operational connections among the physical sciences, the social sciences, the humanities, technology, business, mathematics, and statistics… especially the humanities and statistics.

I am not a good scientist, and that has given me the opportunity to step back, start over, do some things right this time, and more importantly, through a series of delightful coincidences, learn more about the meaning of science than about the day-to-day doing of it.[1] This began to happen during my Ph.D., but only some of the components of this experience were due to my Ph.D. studies. The others just happened to be there for me to stumble upon.

The sources of these discoveries took the form of two electrical-engineering professors, three philosophy professors, one music professor, one computer-science professor, some linguistics graduate students, and numerous philosophy, math, pre-med, and other undergrads. All of these people exposed me to ideas, ways of thinking, ways of questioning, and ways of teaching that were new to me.

As a result of their collective influence, my studies, and all my academic jobs from that period, I have come to think of science not merely as the wearing of lab coats and the carrying out of mathematically, mechanically, or otherwise challenging complex tasks. I have come to think of science as the following of, for lack of a better expression, the scientific method, although by that I do not necessarily mean the grade-school inductive method with its half-dozen simple steps. I mean all the factors one has to take into account in order to investigate anything rigorously. These include double-blinding (whether clinical or otherwise, to deal with confounding variables, experimenter effects, and other biases); setting up idiot checks in experimental protocols; varying one unknown at a time (or varying all unknowns with a factorial design); not assuming unjustified, convenient probability distributions; using the right statistics and statistical tests for the problem and data types at hand; correctly interpreting results, tests, and statistics; not chasing significance; setting power targets or determining sample sizes in advance; using randomization and blocking in setting up an experiment, or the appropriate level of random or stratified sampling in collecting data [see Box, Hunter, and Hunter’s Statistics for Experimenters for easy-to-understand examples]; and the principles of accuracy, objectivity, skepticism, open-mindedness, and critical thinking. That last set of principles is given on p. 17 and p. 20 of Essentials of Psychology [third edition, Robert A. Baron and Michael J. Kalsher, Needham, MA: Allyn & Bacon, 2002].
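As a toy illustration of just one of those items, setting a sample size in advance to hit a power target, here is a minimal simulation sketch. It is my own addition; the effect size, alpha, and power target are invented for demonstration, and real work would use a dedicated power routine (or the design advice in Box, Hunter, and Hunter) rather than this brute-force loop:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
EFFECT, ALPHA, TARGET_POWER, SIMS = 0.5, 0.05, 0.80, 2000  # all made-up values

def estimated_power(n):
    """Simulated power of a two-sample t-test with n subjects per group."""
    hits = 0
    for _ in range(SIMS):
        a = rng.normal(0.0, 1.0, n)     # control group
        b = rng.normal(EFFECT, 1.0, n)  # treatment group, shifted by EFFECT
        if stats.ttest_ind(a, b).pvalue < ALPHA:
            hits += 1
    return hits / SIMS

n = 10
while estimated_power(n) < TARGET_POWER:
    n += 5
print("approximate per-group sample size for 80% power:", n)
```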

Those two books (Statistics for Experimenters and Essentials of Psychology), along with Hastie, Tibshirani, and Friedman’s The Elements of Statistical Learning and a few other sources, mostly heavily cited papers on the misuses of Statistics, have formed the basis of my view of science. This is why I think science-doing is not necessarily the same thing as being a scientist. In a section called ‘On being a scientist’ in a chapter titled ‘Methodology Wars’, the neuroscientist Fost explains how it’s possible, although not necessarily common, to be on “scientific autopilot” (p. 209) because of the way undergraduate education focuses on science facts and methods[2] over scientific thinking and the way graduate training and faculty life emphasize administration, supervision, managerial oversight, grant-writing, and so on (pp. 208–9). All this leaves a brief graduate or post-doc period in most careers for deep thinking and direct hands-on design of experiments before the mechanical execution and the overwhelming burdens of administration kick in. I am not writing this to criticize those who do what they have to do to further scientific inquiry but to celebrate those who, in the midst of all that, find the mental space to continue to be critical, skeptical questioners of methods, research questions, hypotheses, and experimental designs. (And there are many of those. It is just not as automatic as the public seems to think it is, i.e., a matter of getting a degree and putting on a white coat.)

 

Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, George E. P. Box, William G. Hunter, and J. Stuart Hunter, New York, NY: John Wiley & Sons, Inc., 1978 (0-471-09315-7)

Essentials of Psychology, third edition, Robert A. Baron and Michael J. Kalsher, Needham, MA: Allyn & Bacon, A Pearson Education Company, 2002

The Elements of Statistical Learning: Data Mining, Inference, and Prediction, second edition, Trevor Hastie, Robert Tibshirani, and Jerome Friedman, New York, NY: Springer-Verlag, 2009 (978-0-387-84858-7 and 978-0-387-84857-0)

If Not God, Then What?: Neuroscience, Aesthetics, and the Origins of the Transcendent, Joshua Fost, Clearhead Studios, Inc., 2007 (978-0-6151-6106-8)

[1] Granted, a better path would be the more typical one of working as a science-doer scientist for thirty years, accumulating a visceral set of insights, and moving into the fancier stuff due to an accumulation of experience and wisdom. However, as an educator, I did not have another thirty years to spend working on getting a gut feeling for why it is not such a good idea to (always) rely on a gut feeling. I paid a price, too. I realize I often fail to follow the unwritten rules of social and technical success in research when working on my own research, and I spend more time than I perhaps should on understanding what others have done. Still, I am also glad that I found so much meaning so early on.

[2] In one of my previous academic positions, I was on a very active subcommittee that designed critical-thinking assessments for science, math, and engineering classes with faculty from chemistry, biology, math, and engineering backgrounds. We talked often about the difference between teaching scientific facts and teaching scientific thinking. Among other things, we ended up having the university remove a medical-terminology class from the list of courses that counted as satisfying a science requirement in general studies.