Any producer will tell you that one has to dig deep if they wish to surface with some serious, hard-hitting sounds. To discover those pristine kernels of unsung audio beneath the couch cushions of SoundCloud, Bandcamp, and YouTube, one must be entirely, authentically unique — and, here’s the catch, do it using tools literally anyone with a computer can exploit. Nobody, I can guarantee, has dug deeper than Ash Koosha. His debut album, GUUD, re-released on Olde English Spelling Bee, was a dense, highly stylized dip into the artist’s mind. Unconventional sampling techniques constructed dramatic scenes that seemed to reverse the laws of entropy. GUUD was accompanied by a set of equally abstract videos for each song, and for his new album, I AKA I (I Also Known As I), Ash has envisioned a Virtual Reality experience unlike anything ever heard, or seen, before. He’s one of the most adventurous unsung producers in recent memory, a highly method-driven sound architect whose entire approach to sound is so unconventional I’m convinced he’s either a genius or a magician.
I stood at a bus stop waiting for some sort of sign of the elusive producer. After his semi-autobiographical film on the Iranian underground music scene, No One Knows About Persian Cats, won a prize at Cannes, the domestic backlash forced him to seek asylum in the UK. He’s been living here ever since and has yet to return to Tehran. All of these details can enshroud a person in myth. Indeed, from across the road, he looked like some sort of time-traveler, stationary in black and chunky Wayfarers, a knot in the grain of bustling Brits. He appeared from an alleyway as if a single step had taken him city blocks. We entered a cafe that resembled a gutted garage with way too much music gear piled in one corner. Ash described it as a pretty “hip” spot, with great Persian food. Prepared piano trickled out of the speakers. It wasn’t until we ordered and sat down that I realized I was in Cafe Oto, a creative space famous for hosting experimental artists in miniature, multi-day “residencies.” Aaron Dilloway, Bitchin’ Bajas, and FIRE! Orchestra were all scheduled for the upcoming week. Despite the cornucopia of experimental indulgence going on outside his door, Ash claims to be an outsider to the London “scene,” preferring to stay within close proximity to his studio, just in case inspiration strikes.
Pretty soon I had forgotten we were in a cafe at all. All noise became secondary to the presence in front of me. I held my breath and dove in. What follows is a jolting narrative, as detail-obsessed and reactive as his music, concerning the nature of sound, computers, and how we’ll come to understand both in the not-so-distant future.
You did a show at Convergence festival — was that your first?
Yeah, with this setting, this project, it was my first.
How do you think it went? Was it just you?
It was… man, it was massive. It was better than I expected. Just me. I took my semi-studio on stage. I basically redo what I do in the studio in Logic on stage: I’ve got my MIDI controller, sampler, tiny controller/mixer, huge monitor, and I’m getting a touch-screen monitor to do cool mixing stuff as well.
So was the visual part of the gig an analogue to the VR component to your album?
The same people who did my videos did a 35-minute visual, specially for the show. I think my shows are going to be based on visuals from now on, more and more. There’s going to be a VR element to it later on, as I play more.
I’m guessing it would be kind of awkward to have people with headsets on at a gig.
It’s pointless. I was thinking about it, but why would you get all of these people into a venue to put on headsets? They can sit back at home and enjoy it, so I thought I’m going to use it, I’m going to remix the music, deal with the sound physically on stage, and people are going to enjoy the visuals. For myself, I don’t want to be in front of the music. As long as it’s instrumental — a sound collage — I don’t want to be center stage doing things. It’s about the audio-visual experience, so I thought maybe if I am on stage, I’m going to go into this virtual space off to the side, and people can enjoy the show.
Let’s start out with the VR component to I AKA I. I’m curious what I should expect. How did you get the idea for this?
My biggest problem in the last couple years is that — me personally, and some other producers — we improved our skills in creating sound objects and mixing crazy sounds into one piece, in one space that I call “audio room” in my head, where you mix things together and decorate them. But people can’t hear that, can’t feel it, experience that sound “object” that we stretched and gave a physical meaning to. So at some point I thought: music is moving forward, but the consumption of music is not. Especially when electronic music is so popular now. Everyone loves it. Even my mom is listening to Floating Points; she’s not even clubbing.
So then I was dealing with: how can I do it? Animation? Visuals? Not just a nice video. Like, Chris Cunningham’s stuff is very audio-responsive, it gives you the feeling of beats changing, glitch sounds. But I was at this exhibition last year, a couple of exhibitions in different places that were showcasing VR products. VR “experiences.” I was like, they’re giving me this experience where I fly in the air and see things — how many times can I do this in my basement before it gets boring? Where’s the sound element? And that was like, [snaps] why not make music for virtual reality specifically? I realized, actually, that’s my music. Because I have this 360-degree space in my head that describes sound, and I’m not a classical sort of engineer that sits with faders and numbers and spectrums, I just close my eyes and put sounds and position them, edit them. I thought, that’s it, let’s create a visual representation of each sound object and each effect. Each tiny detail in the sound, and put it in the virtual reality environment. Give it a timeline, maybe a movement, maybe a floating effect, and then people can see it in a room. They’ll be basically inside the music.
It’s interesting you bring up you and other producers reaching a pinnacle in designing sound. To me, this kind of music you’re making is moving in an abstract direction, in between sounds, pulling apart waveforms. There are people trying to explore very dynamic aspects of it. But on the other hand, there’s a lot of work in texturing and coloring sounds that are there because it’s ear candy. It’s crazy indulgent. Falling apart, blowing up….
See, the word you’re using, “blows up” — you have to see this. A sound blowing up? We have to see it. We have to experience it, with full sensory occupation. That is the poetry of electronic music. The effects are the poetry of electronic music. Instead of a vocalist saying a poem or telling a story, the poetry in the music is that crazy manipulation. A lot of processing. How can we show it? The audience gets the gist of the results, but beyond that, there’s stuff happening that we could show with new technology.
A lot of the words you’ve used in relation to your compositions, and the way people even think about the composition of music, is different now, because of technology. Could you talk about when technology first came to you as a tool to manipulate sound in a way you couldn’t before?
I started doing improv. I was a bass player in a jazz trio. Everyone would change instruments, and we would just improvise for like two hours, from instrument to instrument, whatever I could get on. That’s when I learned about reaction, how music is reactive, instruments react to each other to create a spectrum. Then I moved on to study more classical composition, studying how structure works, harmony, how melodies are created in contrast to a bass line. How things move. Movement in classical music is the most beautiful thing ever. It’s like growing flowers, or like babies being born. Very difficult to explain.
Like the crab canon?
Yes. That is movement. Pure movement. For me, that’s beautiful. And meanwhile, I was recording sound, putting it into this shitty computer like… 12 years ago. But there was a huge gap between how we would arrange classical music and what we know about “genre” music, and what I do with sound. Like this (he rolls his knuckles across the wood tea platter): you slow that down, it’ll make a whoom, like a huge vibrating bass. What can we do with this? If I use it, who’s going to play it, who will play it live? It’s just a sculpture. Then I realized there is a connection. You can harmonize soundscapes and recorded sound instead of having, say, a violin. So I started using the same principle, both within Iranian classical music and Western classical, and I came to see that there’s harmony in this place. Look at this scene. This room could be sound objects. So imagine all of these objects are suspended in free fall. They can be re-positioned, decorated before they hit… there can be aesthetics in all this.
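To make the gesture concrete — this is my own illustration, not Ash’s actual process — slowing a short recording down by a few octaves really does turn a knuckle-tap into a bass-like swell. A minimal NumPy sketch, using a synthesized knock as a stand-in for the recording:

```python
# Illustrative only: naive resampling turns a short "knock" into a low "whoom".
import numpy as np

sr = 44100                                  # sample rate in Hz
t = np.arange(int(0.05 * sr)) / sr          # 50 ms stand-in knock
knock = np.exp(-t * 80) * np.sin(2 * np.pi * 900 * t)   # decaying 900 Hz tap

def slow_down(signal, factor):
    """Play the same samples back `factor` times slower, which also drops
    the pitch by `factor` (8x slower = three octaves down)."""
    old_idx = np.arange(len(signal))
    new_idx = np.linspace(0, len(signal) - 1, int(len(signal) * factor))
    return np.interp(new_idx, old_idx, signal)

whoom = slow_down(knock, factor=8)          # ~900 Hz tap becomes a ~112 Hz rumble
print(f"{len(knock)/sr:.2f}s knock -> {len(whoom)/sr:.2f}s bass-like event")
```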
You could make a whole album out of the sounds of this cafe.
We can. But as a listener, we are always engaged with melody and harmony. And there’s a lot of experimental music that uses experimental objects and “found sound,” but it’s very figurative. (He strikes the ceramic teapot.) You can basically just hear ding ding! That exact sound. A band of literal objects. I started dealing with sound objects as melodic instruments that can be harmonized. You can go minor, major, add a 7th, it’s all possible.
Was that a trial-and-error process, for you? Just tossing things into an interface and seeing what came out?
Yeah. Yeah, I had around 600 tracks. I threw ’em all out. One, actually — “wait,” on GUUD — is from like 2010. It’s super-old. That one’s from my old sound archives: a lot of pieces of instruments, chairs, mostly indoor sounds.
Are you trying to get some specific feeling or emotion out of the objects you use to record? In the way a saxophone has a lot of cliches attached to it, was there something in water you’re trying to extract? Or are you interested in the physical waveform itself?
Water. Splashing sounds. Drop sounds. Drops on different materials. I do “foley,” you know, designing sound for a scene in cinema. It’s a really old art. You slap someone in the film, thwack! It’s not a recording of someone actually being slapped. I did a film, and I did a lot for that film, experimenting with different sounds. Replicating steps, doors closing, a switch on the wall. There are so many things I had to reimagine, because we didn’t take in the sound from on set. So that can help a lot in understanding what sound objects are, how they can change the meaning of scenery, and vice versa. A piece of visual can change the definition of a sound. I’m trying to get these two realms closer to each other so that I can merge them into a piece where you don’t know which one comes first. Trying to get to that… It’s a new medium. We have to call it something! It’s not music anymore. It’s not visuals anymore. I don’t want you to know where one ends and the other begins.
Would that require one process informing the other?
It can be either way. Sometimes you have a piece of music you want to translate into visuals, sometimes you have a 3D object that is moving, and you want to translate it to music. For now, there are a lot of nifty visualizers people have coded — visual complements that respond to sound via spectrum analysis. You remember Winamp? It had all these visualizers, like Windows Media Player did; they would give you a sense of movement. They splash with the beat and give you different colors with the beat. Different sections would react differently. I’m trying to get to the advanced level of that. That’s the goal. A lot of people do visual accompaniment now, because it’s cool. But I need to do it. There’s no way I can explain to my mom, “this bass is purple, giant, rumbling, moving, spinning fast;” how can I show that? The sound is doing that. “Biutiful” is a track that will need a lot of visual representation. People will ask, what is this? This thing that is crashing down? People have actually told me that they just straight don’t “get” the rhythm section.
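The Winamp-era visualizers he’s describing generally work by spectrum analysis: split a frame of audio into frequency bands and map each band’s energy onto a visual parameter. A toy sketch of that mapping — the band splits and the size/hue/jitter assignments are my own assumptions, not his pipeline:

```python
# Toy audio-reactive mapping: FFT band energies drive visual parameters.
import numpy as np

sr = 44100
frame = np.random.randn(2048) * np.hanning(2048)     # stand-in audio frame

spectrum = np.abs(np.fft.rfft(frame))
freqs = np.fft.rfftfreq(len(frame), d=1 / sr)

def band_energy(lo, hi):
    """RMS magnitude of the FFT bins between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return float(np.sqrt(np.mean(spectrum[mask] ** 2)))

bass = band_energy(20, 200)
mids = band_energy(200, 2000)
highs = band_energy(2000, 16000)

# e.g. bass drives the size of a shape, mids its hue, highs its jitter
visual = {"size": bass, "hue": mids, "jitter": highs}
print(visual)
```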
Do you think the video will help people make that rhythmic connection? Like, you’re at a show dancing, or standing, and the visual is there to help people follow the beat when it drops out and return?
One thing is for certain, I’m not making music to make people dance. Never done that. I make sonic events. I replicate scenery. It’s inspired by nature, or daily life, and I try to put it there in a track. If it’s slower, it’s because in my head there was no movement. If it’s faster and more rough, it’s because I imagined a place with more people, moving faster, if I was in a crowd, or the jungle. I would say it’ll either require a lot of imagination from the audience to see things from my perspective, or they don’t have the imagination and then I need to do something about it.
Other producers might be playing with granulators, and stuff like that might be more on the dance side, so people will try to draw you into that crowd to classify you.
Exactly.
Technology and science definitely play a functional role in music, pushing the boundaries of what we can accomplish, but do you think our understanding of the universe has played a philosophical role in how we approach music?
Yeah, because I think for the past 500 years music has been process-based. People have enjoyed the process — musicians in a live band playing, even in the studio, and then performing the exact same thing, showing the process. Now it’s more result-based — at least my music is. The digital world has given us a chance to showcase an extreme version of classical music, and everything that has happened before, as a result. I’m a digital artist. I’m inspired by creating something out of electricity, just data, basically. Information. And that is very difficult. People think Ableton, laptops, but it’s not easy. Now, it is down to pure imagination. You can download any drum kit off the internet, so to create something new requires pure imagination. That’s why I’m really happy with what’s going on in the experimental electronic scene, and that’s why I’m OK when people compare me to Arca, or Oneohtrix, or whatever. I’m not bothered because I know the process. They’re also visual artists. As long as they have the imagination, it’s legit. It’s not about the process. Who cares what Flying Lotus does in the studio? It’s a beautiful painting. That’s the most important thing about the new electronic scene.
It’s interesting you bring up OPN. People talk about him in terms of “horizontal structures,” like things are next to each other instead of on top of each other. I can see the same thing in your music, in that I’m in between these waves, you know, stretched out, in between the sound…
There are extra dimensions to it. There’s more 3D involved in my stuff. I think it’s just a choice. You choose what you want to use — a 2D, flat synth, or an object of sound that you’ve discovered. ‘Cause that’s what I do, I go in, and there’s an object there, and I listen to it and ask how it can work. Sometimes, I can’t make big synths out of it, but it’s not about creating a familiar form. It’s all about creating the new form. I also let it be a bit ambiguous.
I feel like when you came in [with GUUD] you had a really cohesive idea, and on I AKA I you flesh it out quite a bit more. Have you seen this current endeavor through to some sort of culmination point?
The thing about this is, I’m creating something that is a leap from music. But I’m sure, you’ve noticed, there’s a lot of genre involved in one album. There’s different beats to different sounds — the method and mentality are the same, but there are lots of different genres I’m exploring. I’m trying to get vocals involved. I sing normally, but I’d like to have other singers as well. Maybe record a seamless album, with the GUUD feel to it, but lots of different vocalists. There’s one remix I did for Empress Of, and that actually gave me a lot of confidence for making vocal music. Her track was kind of dance-y, but I took the vocals and used nothing else, and I think it sounded really interesting and beautiful, and an indicator of what I’m looking for.
Would you ever go in and collaborate in the studio with someone? Would it be hard to introduce people to your studio process?
It really depends on who. There will be a clash of methods most of the time. My methods are based on mistakes. I do everything wrong. It might clash if it’s a formal studio, but if it’s someone as crazy as I am, 100%! It would be fun, because you don’t know what results you might get, and that’s exciting.
Thinking about chance, and some references to the “quantum realm” you made in the liner notes to GUUD, have you read or heard any of the work toward a “quantum” system of music?
Yeah. There’s a lot happening in that realm, but the result is not music. With fractal sounds and quantum music experiments, they’re using tones to make soundscapes, using microtones and stuff like that. I deal with music meaning Vivaldi = music. There’s harmony and melody, and everything is in the right place. The biggest goal is to treat sounds like physical objects, and to put those objects into a musical context. That’s the most important part of what I do. At some point I called it “nano composition.” What I was doing was sending a composer’s mind into the realm beyond the microsound, stretching so much, physically, that the wave is destroyed, turned into tiny, random pieces. What is this sound? It’s just a wave, a frequency. It’s like sending a nanobot into your veins, and they’re going to perceive the blood cells in a different way. If you manage to send a camera, it’ll see everything different. The vein is the sample. I’m going in there, between the molecules, taking a chunk out, zooming out again, putting it next to a simple synth.
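One plausible reading of “nano composition” is extreme granular stretching: chop a sample into tiny windowed grains, scatter them across a much longer timeline, and the original waveform dissolves into texture. A rough sketch under that assumption (the grain size, overlap, and jitter here are arbitrary choices, not his settings):

```python
# Sketch of extreme granular stretching: a short sample becomes long texture.
import numpy as np

rng = np.random.default_rng(0)
sr = 44100
source = rng.standard_normal(sr // 2)           # stand-in half-second sample

def nano_stretch(signal, factor=20, grain=256):
    out = np.zeros(int(len(signal) * factor))
    window = np.hanning(grain)
    hop = grain // 4                             # dense overlap in the output
    for out_pos in range(0, len(out) - grain, hop):
        # pick a grain near the corresponding point in the source, with a
        # little random jitter so the original order falls apart
        src_pos = int(out_pos / factor) + rng.integers(-grain, grain)
        src_pos = int(np.clip(src_pos, 0, len(signal) - grain))
        out[out_pos:out_pos + grain] += signal[src_pos:src_pos + grain] * window
    return out / np.max(np.abs(out))

stretched = nano_stretch(source)                 # 0.5 s becomes 10 s of texture
print(f"{len(source)/sr:.1f}s source -> {len(stretched)/sr:.1f}s stretched")
```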
You know Iannis Xenakis? He did really crazy stuff as well, stretching and zooming in.
And he did it back when people were working with big reels of tape. He was doing it physically, cutting and pasting and stretching things. It’s more real, in a way. I have a friend who composes on four vinyls. She does a similar thing where she loads samples and pitches them down so much, and with processing she can pitch up the slow part, and you end up with this flower-like… something. It has a feeling. It’s sound-sculpting.
He also made music from algorithms, which almost seems cute in a way, because we have such a totally different understanding of an algorithm in relation to music now.
He did music as architecture. It’s amazing — you get a better idea of how the pieces are planned out. That’s the architecture side: case studies, looking into details, and planning things. I do it a bit, but I’m more spontaneous. When I’m recording a track, I like to let the moment dictate what is going on, based on feelings and impulses.
Do you think life in London has had a significant effect on the content — or, the sense of narrative — in your music?
There’s a side to London that is progressive in terms of… the future of humanity. That has inspired me a lot. Musically, it’s only technical. The difference between here and anywhere else is you’ve got the equipment and the people who are always discussing cool stuff. I’ve never been in a scene, or with “that” group, so I’d say I’m pretty disconnected from that element of it all. I would say the progress as human beings in London has pushed me, in general.
Give me an example.
I follow these guys called futurists. There’s a department at Oxford University called the Future of Humanity Institute. Nick Bostrom started it. I follow them — Nick Bostrom and Calum Chace — AI, how our brains are moving forward…
Ray Kurzweil?
Yes, he’s an interesting person. Constantly making predictions. I think he’s aware of things that will not be so good in the future for us. One is creating this sexy idea of having robots and artificial intelligence as a separate entity [from] humans. A new species: robots. That is scary. That is actually part of how I’m making music, letting the computer do 50 percent of the job. Trusting the computer. There’s an aspect of chance involved. After I process everything, what phrasing is possible with that sound, 50 percent is me letting the computer decide, and I listen to it and sometimes I say “yes.” It’s my choice. That’s the beauty of it. Now — it is my choice. You give the computer a process to run, because you can’t do the same thing with your brain, and it gives you a result. But imagine where I have to [let the computer decide]. Imagine the day comes where the computer says back to me, “No. You can’t not use it. You have to use it.” That is scary. But, for now, it’s beautiful. We’ve created this thing that serves us well; it’s applied to our species. We are enhanced by computers, but if it goes beyond that… that scares me a lot. It’ll get out of hand.
And how did the creative environment in Tehran differ from the creative environment in London?
There were cultural limitations to what I was doing before. You could call it “cultural” when you play rock music and you write lyrics about society and it’s connected to people. There’s a fear in Iran that that could be a “problem” for the “revolution.” To the government at least. Lack of equipment. Lack of performance spaces. It’s getting better. I haven’t been there for seven years, but I’ve heard it’s getting better in terms of letting people do installations. Exhibition spaces are opening up. It’s more arty now, culture is less restrained. Limitations have upsides and downsides — they help you become more persistent about what you want.
It works in weird ways. When you’re limited you make more. You push it. Here, it’s funny, most of the people here don’t have stories to tell. Isn’t that funny? Life is much more moderate. In America it’s different, it seems like there’s more human interaction. Here it’s so… regimented.
Everyone is in their own lane in Britain. No eye contact!
Yeah! It’s not a bad thing. But in Iran, on the streets some people are getting arrested, some people are having fun, everything is happening at the same time. I kind of balanced my brain here. I can focus. There’s less focus in Iran. You’re dealing with so many fundamental problems — the foundation of the state is in question itself.
Going on our kind of “East/West” divide — you went to conservatory in Iran, so you must have developed two distinct impressions of classical music. In terms of process — actually, fuck it, let’s talk about results — what’s the biggest difference between Iranian and Western classical?
Well, in Iran they adopted a lot of rules and structures of Western classical, but they changed it and created their own forms. You have seven systems (Dastgah), and that dictates how the movements and melodies are made. It’s a huge difference. Also, there are quarter tones. Iranian music has a lot of quarter tones. There’s a note between a full and a half step. It’s not as crazy as Hindi music; they have like 1/8th notes…
You can tell how in Western classical, they are very explicitly describing emotions.
It’s very detail-orientated.
Hindi music seems much more about releasing energy and seeing where it takes you.
It’s like a free-floating thing that is happening for 10 minutes. In Iranian music they do a lot of the same thing; they play nyah nyeer nyeer for hours. But I was not entirely interested in the traditional music. I was more interested in the folk music, the music of small villages. The phrasings of those people are amazing. Insane. I was thinking, how can I put this in a more global structure? Not a Western structure — with Western structure you think of blues and jazz — a proper global mentality. It comes down to using rhythm sections and lines and phrases from Iran and mixing them with digital frequencies, and whatever you do with that set of sounds, it will sound modern, right? But the phrasing will be very folkish, old, and Iranian.
I’m also interested in the music of southern Iran, and the percussion. They make really groovy rhythm sections. I’m trying to put that in more and more — that’s been my direction since the first record, to incorporate the rhythm, the groove, the energy more. In the next record I’ll try and get that involved more and more.
Do you find it difficult making “human” rhythms working on a computer, or “teaching” the computer to make the rhythms?
Strictly, I don’t let the computer do anything with rhythm. I don’t use any sequencers or anything that implicates rhythm. I do it bit by bit myself, try to perform it on a Launchpad or keyboard… there’s a feel to rhythm that has to be… that’s the thing, if you go into detail with rhythm, it is momentary impulses in your body, like a drummer in you, giving you the groove. Like if you see Buddy Rich, he’s up and down, every single second is different. Making AI that does that, maybe?
You know one of the biggest issues with AI is creation? Because “knowing” for AI is processing and looking for answers. That will be easy in the next 20 years. But there are two massive issues: the first is common sense, and then there’s creativity. AI can’t do either. That’s why I don’t use sequencers. Even if it’s the best, most intelligent machine, I don’t trust it, because it’s not creative; it’s mathematical. I don’t like it. I feel like every element in the music is in nature — you get texture and objects, stones, flowers, and life. Even in this room. You’ve got the feeling of bricks. You wanna get that out of music.
And I bet even if you taught a machine to produce its own interpretation of “rhythm,” it would be different from what we’re used to, just because of the structure of our bodies!
Yeah, you obviously can program a computer to give you a phrase, but that phrase is frozen, a piece of what you thought at that moment. But the next moment you are different, so you have to program it every time? That’s the problem — we’re programmed every moment, we have feelings, and a massive range. I think it’s important to leave so many elements in the hands of human beings. What I use computers for mostly is getting random geometry in sound. They give me a reaction based on a glitch effect or stretching, unexpected things.
Is the randomization a source of creativity, where what you get out of it influences where you take your composition?
It does. I like it, because I’m in the Amazon, or in a jungle, or a crazy, unknown place. Anything that I find, I’m asking myself, what can I do with it? [Flicks the teapot.] Okay, let’s do it, let’s take it. I put my taste to work at that moment. I choose, I select, and that’s why I use a lot of collage for my artworks, because it’s the same process. The artist, Negar, she does the same thing, creating this aesthetically interesting object from so many different things. That’s what my music is. Most of the time you’re wondering what part of the picture [a piece] is coming from. That’s what the computer gives me in sound. We lose the original definition of that chunk.
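The chance element he keeps returning to — the machine proposes, the producer auditions and selects — can be sketched as a random chain of simple transforms applied to a chunk of audio. The transform set and parameters below are purely illustrative, not a description of his actual patch:

```python
# Sketch of chance-based processing: random transform chains, human selection.
import numpy as np

rng = np.random.default_rng()

def reverse(x):
    return x[::-1]

def bitcrush(x):
    return np.round(x * 8) / 8           # coarse amplitude quantization

def stutter(x):
    n = len(x) // 8
    return np.tile(x[:n], 8)             # repeat the first eighth, glitch-style

def random_chain(x, depth=2):
    """Apply `depth` randomly chosen transforms in a random order."""
    for f in rng.choice([reverse, bitcrush, stutter], size=depth):
        x = f(x)
    return x

chunk = np.sin(np.linspace(0, 60, 4096))              # stand-in audio chunk
candidates = [random_chain(chunk) for _ in range(5)]  # the machine proposes
# ...the producer listens to each candidate and keeps, at most, the one that works
```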
You mentioned in the notes for GUUD your own musical synesthesia. That really interested me, because when I listened to the album I was struck by how tactile it was. I’d like to hear a bit about how you interpret sound.
Synesthesia, I didn’t know about until a few years ago, when I was remembering a number someone gave to me. She was insisting I write it down — I said no, I’m good at remembering numbers because two is orange, this is red, that is white. She was like, what the fuck are you talking about? And I realized it wasn’t everyone [who felt that]. I started reading about the connection between color and numbers and different (sic) sensories working at the same time. I realized I had always thought about music in a similar way. I close my eyes, I see sound. I have it in this space, I can move sounds around inside it. If it’s distorted, it’s more bloody and red. I try to adjust things and decorate things to match their texture or color. Physical values — which don’t exist, obviously — but it’s just me working it out in a physical way.
I got more into the geometry of sound: how space and time and music can visualize sound physically, dictate the geometry of a sound object. That’s how I got into the recording and experimenting of GUUD, trying to get these “sound objects” to work together. The challenge was to make melodies out of that process. You can put field samples in a two-minute piece, but if you put in a melody, a memorable phrase, that will resonate with people. I’m more comfortable now with creating phrases out of sound objects.
I think everyone is pretty excited to see what is coming. Where will people be able to see you in the next couple months?
I’m playing a show in the Netherlands in April. I have a TED Talk in Oslo on the 31st, then I’m leaving to go to America in May, going to do a couple of festival shows. Then I’m back, gonna go back to work on some new music. Hopefully shooting a film. Always trying to do stuff.
I AKA I will be released on MP3 and CD through Ninja Tune on April 1, with a vinyl release on May 27. You can preorder both here.