re: future music


to listen to while you read. or whenever else.

the distant past:

a middle-class mirrormaker gives up on using polished metal to capture holy light reflected from religious relics and retools a wine press, going into considerable debt to print words for the first time. Gutenberg’s invention embroils him in a lawsuit with a loanshark called Fust, but catches on all over 15th- and 16th-century europe. much like the advent of the internet, the printing press set off a massive regrouping, reorganization, redistribution, and re-curation of culture and knowledge—everything old was new again. what we might now call nostalgia or retro became downright fashionable, and being printed gave information more cultural weight, because it was inherently so much more accessible.

the distant present:

the initial importance of new content is often not the content itself, but that it gets a nod from buzzfeed, or vox, or tmz, or reddit, or the front page of youtube, or whatever else. music has been, to its detriment and not entirely by choice, at the front of that wave, starting with iTunes. musicians are famously underpaid by streaming services and equally famously pirated outright (for better and worse, Prince and Lars Ulrich pretty much called it, and it still gives me a giggle to hear djs complain about waitstaff with spotify the way musicians used to complain about djs 15-20 years earlier), and plenty of MBAs are waiting to tell us that’s what the markets will bear. the lack of digital liner notes deprives would-be fans of the basic knowledge of who is involved in a song; the focus-groupization of pop songs and the shrinking playlists of radio mega-corporations (clear channel owns over 1,200 stations) narrow our field of sound and lyrics ferociously; the shrewd focus on metrics (the sports approach to art) perpetuates a winner-takes-all economy; and, perhaps most dangerously, there is the vapid idea that music rightly exists only in the realm of feelings, an idea which truncates not just thought but the very feelings modern music claims to value and promote, to say nothing of the infinitely many emotions not addressed in most popular music. quick aside: i love me some pop music, play it often, and defend its right to exist in every form. please peruse this excellent article for a more in-depth look at these factors.

there is yet another element, easily the most impactful to the future of music and art: google neural networks are making music. that’s right, AIs at google are being fed heaps of songs, then given a cell from which to grow a new piece. computer-based algorithmic composition has been around since the 1950s, when two scientists at the university of illinois, Lejaren Hiller and Leonard Isaacson, produced a string quartet called the Illiac Suite, named after the computer which helped create it.
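for a flavor of what early algorithmic composition looked like, here’s a toy sketch in python of a first-order markov chain over pitches, a technique along the lines of what Hiller and Isaacson used; the transition table and note names are invented for illustration, not taken from their actual experiments.

```python
import random

# toy transition table: each pitch maps to weighted choices for the next pitch.
# (these weights are made up for illustration.)
TABLE = {
    "C": {"D": 2, "E": 1, "G": 1},
    "D": {"E": 2, "C": 1},
    "E": {"G": 2, "D": 1, "C": 1},
    "G": {"C": 2, "E": 1},
}

def markov_melody(transitions, start, length, seed=None):
    """Generate a toy melody by a weighted random walk over a pitch table."""
    rng = random.Random(seed)
    note = start
    melody = [note]
    for _ in range(length - 1):
        choices, weights = zip(*transitions[note].items())
        note = rng.choices(choices, weights=weights)[0]
        melody.append(note)
    return melody
```

calling `markov_melody(TABLE, "C", 8)` walks the table and returns eight pitch names; richer versions of the same idea simply add more states (rhythms, dynamics) and deeper context.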

as this technology advances, whole swathes of music making will change drastically. in the article linked above, David Cope predicts that it will ‘rampage through the film music industry,’ rendering obsolete the need for directors to give composers reference tracks, and eliminating much of the bluster which can come out of producers’ or singers’ mouths when they angrily attempt to communicate about music without having bothered to learn any of the language. in fact, as emulations and sample libraries of instruments become ever better (for the cost of an iPad you can buy a sample library of the belarus phil, cheekily billed as the ‘berlin’ phil, offering not just notes, rhythms, and articulations, but a choice of concert hall, of how many and what brands of mics were used, and of their distance from the orchestra), soundtracks with both composers and musicians could become quite a rarity, though for the first time there will be soundtracks with live musicians but no composer, much to the chagrin of ’90s TV composers who were so dedicated to their synths and floppy-disk sound libraries.

we’re at least ten years into the most massive democratization of sound we’ve yet known, and if you’ll pardon the cliché, we’ve truly only just begun.

i’m bored of retro music. let us look to the future. here’s one way it could play out:

the distant future, the year 2000

bone-conducting headphones like these connect wirelessly to the utterly ubiquitous phone, which runs an app with access to your listening histories from any number of current or future music services. the headphones let you keep your ears open to the sounds around you while giving the sensation of music coming from inside your skull. the app is an ambient composition generator which, when enabled, constantly pumps new computer-composed music directly to you. like a nest thermostat it can automatically learn and adjust its parameters to the time of day or type of activity; if connected to an apple watch or fitbit it can adapt to what it believes your mood to be; or a user can manipulate individual components on sliding bars. linear parameters include: electric sounds to acoustic sounds, harmony (moving simple to complex), simple rhythms to complex rhythms, number of elements, repetitiveness, pulse, and local indiscipline (a term stolen from Boulez, but here describing a percent chance that the algorithm will temporarily ignore or alter one of the parameters). it also has a five-sided genre map lifted from your listening history, in the style of one of those mostly-nonsense personality tests. i suspect that, to encourage longer listening, the sounds will be as high fidelity as possible and will tend towards more meditative creations. with the app enabled, anyone with a smartphone has new, custom-made music being created for them all day long, which only they can hear.
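as a thought experiment, the sliding bars above could be modeled something like this python sketch; the parameter names, their 0-to-1 scaling, and the drift applied by ‘local indiscipline’ are my own invented stand-ins for a hypothetical app, not any real software.

```python
import random
from dataclasses import dataclass

@dataclass
class GeneratorParams:
    # each slider is a 0.0–1.0 value (names are hypothetical stand-ins)
    electric_to_acoustic: float = 0.5
    harmonic_complexity: float = 0.5
    rhythmic_complexity: float = 0.5
    num_elements: float = 0.5
    repetitiveness: float = 0.5
    pulse: float = 0.5
    local_indiscipline: float = 0.1  # chance a parameter is temporarily altered

def effective_params(p: GeneratorParams) -> dict:
    """Return the slider values the generator would use this pass, applying
    'local indiscipline' as a per-parameter chance of drifting from the set value."""
    out = {}
    for name, value in vars(p).items():
        if name != "local_indiscipline" and random.random() < p.local_indiscipline:
            # temporarily nudge this slider by a random offset, clamped to [0, 1]
            value = min(1.0, max(0.0, value + random.uniform(-0.3, 0.3)))
        out[name] = value
    return out
```

the point of the sketch is only that the ‘indiscipline’ slider governs how faithfully the other sliders are obeyed, so the same settings never quite produce the same piece twice.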



shops of all sorts no longer need soundtracks, so there is a possibility that the beauty of what vonnegut described as the ‘terrific music [of] the hissing and rolling thunder of steam locomotives’ will be returned to our cultural awareness. indeed many banal sounds might come back into vogue. however, just as likely is R. Murray Schafer’s ‘sewer of the sky’ future, where noise pollution decimates native acoustic ecology, not to mention predictably decreases children’s ability to acquire new knowledge.

anyone who needs music for a project, from youtube content creators to aspiring artists to full-blown tv and movie studios, will have little need for humans when they can simply plug a list of adjectives describing their needs into a bit of software and get not merely song suggestions from a clumsily cross-referenced list, but brand new music which can be remade infinitely at no cost. get ready for a lot of music from the ‘edgy’ column, y’all.

i’ve said here before that i think music nurtures those things in us which are most human, so the question is: will this type of non-human music making continue to do that? the answer is clearly yes. if you’re not told the music is computer generated, and eventually even if you are, our brains will readily remap what we know about sound production to accommodate the machines and accept many of the myriad physical, intellectual, and emotional benefits of humanly composed music. this has already been proven in the abstract by the occasional records made of elephants playing percussion, or by our willingness to listen to many different birds singing at once in unrelated and uncoordinated ways as though it were music. what’s more, composition software called ‘kulitta’ at yale is fooling trained musicians into thinking its output is actually by papa bach. there will of course be composers of classical and popular music who can interact with these systems with either entirely pure artistic goals or purely capital-minded business goals, because it is just a new tool in the box, neither good nor evil. but this is obvious to the point of boring.

there is a missing component to our future music: empathy. nothing we currently recognize as empathy or compassion exists in anything we currently recognize as a computer. it does exist in spades in humanly performed music, and in no genres are empathy and compassion more important than those which rely heavily on improvisation, namely jazz and creative improvised music (‘free music,’ if you like). yes, there are plenty of musicians turned scientists, or vice versa, building ever-better robots and AIs which can learn to improvise, and i’m very excited about some of them, like one at SFSU which i want to play with because it reacts to timbre in addition to pitches and rhythms (Dr. Hsu, call me). these too can work to our advantage in the computing age: a machine works with known and discoverable influences and parameters to arrive at a musical result, and a computer can have perfect ‘self-knowledge,’ if not self-awareness, so we can watch it learn how to improvise. in the distant future there is a real possibility that the compassion and empathy of live performers making and breaking musical rules in real time will have a moment in the sun for exactly the reason that it’s entirely human.

sometime in the past four years i read a breakdown of music sales by genre, and jazz made up about 1% of the market. this included all sub-genres of jazz—big band, bebop, hard bop, free, acid, fusion, modern, smooth, etc. let’s say for argument’s sake that number doubled with the recent successes of Kamasi Washington, Robert Glasper, Flying Lotus, Thundercat, the Kendrick Lamar masterpiece To Pimp a Butterfly, and more modern west coast creative music nodding to J Dilla—big ups, west coast. even doubled, it is still only 1/50th of the market. (yes, jazz and classical sales figures are higher in countries like france and finland, where music is taught in schools and it’s not unusual to see billboards for the local jazz festival with an american jazz musician’s face on them.) this low number, 2%, can be a very freeing element in shaping our future music, because it means we as creative musicians needn’t worry about pandering to an audience at all and are instead free to make whatever music we want to hear and let the chips fall where they may—we can safely take a leaf from Edgard Varèse, who says ‘form is a result; the result of a process. each of my works discovers its own form,’ and in so doing relinquish any unnecessary worry about genre in the creation of new music. Charlie Parker was due to begin studying composition with Varèse when he, Parker, died. that we will never hear the resulting music is one of the great losses in all music. and since a hit instrumental jazz record tends to sell fewer than 5,000 copies, the music alone is unlikely to be our sole source of income in the first place.

we don’t mourn the loss of bowling-pin setters or toothpaste-tube cappers, and we won’t mourn for long the loss of long-haul truckers or paralegals when automation takes their jobs. will we miss musicians? sort of. the linn drum machine sampled the great James Gadson (also this one, always), costing him and others vast amounts of work, but the same machines were used on the brilliant early Prince records. our presently non-existent music generator app would rework our cultural relationship to music in a similar fashion. as musicians we are all obliged, at various stages in our careers, to produce musical puff-pastries for weddings, business lunches, and birthdays, or even to play the same musical for decades, because they pay well. these gigs will easily halve once app-generated music gets involved, and those that are left will often be simply a show of ‘old world’ values or opulence, i.e. yet more “roaring-’20s-but-by-way-of-Baz-so-racism-isn’t-a-thing” parties. these can be way fun, are rites of passage in many genres, grant huge opportunities to learn songs and performance practices, and have been staple gigs for myself and nearly all of my brother and sister musicians, but still their decline can be a good thing. we might be free to slowly remember the intricacies of study or discipline or emotion, or any of the untold reasons we enjoy music, and to therefore commit more time to making the music we most want to play with people whose company we likewise enjoy. which to me sounds lovely.

to wrap up:

1) the techno future is going to be pretty amazing, right up until it’s 100% fucked.
2) music will be made by computers at an exponentially increasing rate, and that’s not all bad.
3) live music with improvisation is in a prime position to bring its unique celebration of humanness, empathy, and compassion to that universe.
4) listen to Sun Ra, think about the future.


the piece at the top is from 2011, when i started thinking about this, and has precisely zero improvising: music you can listen to while doing other things. it’s also on soundcloud if that’s your pleasure.



