A Herculean Task: Producing 3 Albums DAWlessly — Part 4

current audio routing

Last time, we talked about some choices that you have to make early on, which help shape where your process will go. This week, we’ll talk about the actual mixing process, where you take ideas and turn them into full tracks using the same DAWless approach I use for live performances. To the right I’ve included a diagram of what my current audio routing looks like. The routing hasn’t changed at all since 2022, and very little since 2019, when I had to replace the JoMoX AirBase99 after it ran into some problems and limitations.

Sound Design, Composition & Mixing Philosophy

In most traditional music, an artist writes music and/or words and plays them on an instrument they know. But electronic music usually includes a sound design and mixing phase as well, where you create and combine new sounds, and that demands a production process that is absent in traditional music. On top of the songwriting itself, you almost have to understand sound design and production topics like compression and EQ. I’m dividing what happens in this electronic music-making process into three categories: sound design, composition, and mixing.

How Those Three Things Come Together in a Mixdown

When writing a new electronic song, you play notes into a synthesizer or DAW and use a sequencer to record and play them back, whether ITB or not. Then you arrange those parts together, add any other sounds or parts you want, add effects, whatever you want to do, until you’re satisfied. At some point you reach a stage where you say, “let’s record this”. That’s when you enter the mixdown phase, which is where sound design, composition, and mixing all come into play, and knowing how to use those tools is what makes the best recording possible.

Let me give an example: I get to the mix stage, and a drum and a bass sound are interfering and won’t sit properly in the mix. There are essentially three options: change the sound or sounds, change when they occur, or apply processing. By thinking of decisions this way, we can divide them into the categories mentioned above: changing the sounds themselves is the “sound design” part, moving them around in time is the “composition” part, and compression or EQ applied to the sounds after the fact is the “mixing” part. So let’s talk about how I use each of these techniques to solve problems in my DAWless setup.

Sound Design: Kick/Bass Interference

When a kick drum and a bass sound trigger at the same time, there is often a spike in the waveform, a sign that overlapping frequencies are competing to be heard at once. Let’s assume I like how both sound and don’t want to move the notes. There are still a few options that don’t require compression. There’s the “ducking” method, where the volume envelope of the bass is pulled down just for the first part of the note. Or you can change the bass sound so that the VCA and VCF attacks are slower, letting the kick’s transient through before the bass fills in. So what might you do if a kick and a tom play at the same time and interfere?
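If it helps to see that ducking idea outside of a synth, here’s a minimal numpy sketch of what the envelope is doing to the start of the bass note. The timing and depth values are just illustrative, not settings from my actual patches.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def duck_bass(bass: np.ndarray, duck_ms: float = 60.0, depth: float = 0.4) -> np.ndarray:
    """Pull the first part of a bass note down and ramp it back to full
    volume, leaving room for the kick's transient (no compressor needed)."""
    n = min(int(SR * duck_ms / 1000), len(bass))
    env = np.ones_like(bass)
    env[:n] = np.linspace(depth, 1.0, n)  # quiet start, linear ramp back up
    return bass * env

# toy example: a 100 Hz sine standing in for the bass note
t = np.arange(int(SR * 0.5)) / SR
bass_note = 0.8 * np.sin(2 * np.pi * 100 * t)
ducked = duck_bass(bass_note)
```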

Composition: Kick/Tom Interference

Let’s say you’re building a kick and tom rhythm on the drum machine, but the kick and tom end up landing on the same note, causing a spike in the waveform. As you probably know, a kick may be “unvoiced”, but there is still a fundamental note to which it corresponds. So in this case, rather than change the sounds themselves, I might move the tom over an eighth or a sixteenth note in either direction. I usually reach for this because it solves the problem and adds a new rhythmic element to the song at the same time. This is the composition solution. So what would I do to make my snare sound better?
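To put a number on how far that nudge actually moves the tom, here’s a quick back-of-the-envelope calculation; the tempo is just a hypothetical example.

```python
# how far a 1/16- or 1/8-note nudge moves a hit in time
bpm = 130                     # hypothetical tempo
beat_ms = 60_000 / bpm        # one quarter note ≈ 462 ms at 130 BPM
sixteenth_ms = beat_ms / 4    # ≈ 115 ms
eighth_ms = beat_ms / 2       # ≈ 231 ms
print(f"1/16 nudge: {sixteenth_ms:.0f} ms, 1/8 nudge: {eighth_ms:.0f} ms")
```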

Mixing: Snare Sounds Like Shit

So your snare isn’t really poppin’ and you want to copy it to a new track and pan the two copies hard left and right? On the RYTM, you could sample the snare and play it from an unused instrument with the two panned left and right, or route the snare to both the main out and its individual out. But both of those solutions take time and have serious drawbacks. So what I normally do is a “mixing” solution: set up a very short panning delay with no feedback and make sure the wet and dry signals are at equal levels. The effect side should not have any EQ on it, so it sounds as much like the original as possible. Even though I can’t easily make copies of tracks like I would in a DAW, I’ve still more or less created the doubled, widened snare I set out to make.
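If you’d like to see the trick spelled out in code, here’s a minimal numpy sketch of a short panned delay with no feedback and equal wet/dry levels. The delay time is an example value, not the exact setting I use.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def widen(mono: np.ndarray, delay_ms: float = 15.0) -> np.ndarray:
    """Fake a doubled snare: dry signal panned one way, a single short
    delay tap (no feedback, no EQ) at equal level panned the other way."""
    d = int(SR * delay_ms / 1000)
    dry = np.concatenate([mono, np.zeros(d)])  # pad dry to match length
    wet = np.concatenate([np.zeros(d), mono])  # delayed, untouched copy
    return np.stack([dry, wet], axis=1)        # columns: left, right

# toy example: a decaying noise burst standing in for a snare hit
n = SR // 4
snare = np.random.randn(n) * np.exp(-np.linspace(0.0, 8.0, n))
stereo = widen(snare)
```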

So That’s How I Mix & Record DAWlessly

That’s my decision-making process when a song is ready to be recorded. This is where the majority of my time went during these projects: deciding whether to reach for mixing, composition, or sound design. That effort was much more demanding in the electro and drum n bass genres because they required many more details and changes. I could have stopped earlier in the process and just let mastering deal with the rough edges, but I decided to work through all the issues so that they wouldn’t pop up when it came time to perform the songs live.

 

 

A Herculean Task: Producing 3 Albums DAWlessly — Part 3

In last week’s blog post, we talked about the details of my blended studio/live setup. This week, we’ll go into the ideas that underpin my DAWless philosophy and how it influences the decisions I make.

Overall Philosophy

It’s All About Live Performance

A lot of the decisions I make may seem strange to outsiders, but there are reasons why I do things the way I do. First of all, my primary goal is always live performance. It’s one of the only sources of income for modern musicians, and it’s still somewhat rare to see in electronic music because of the technical knowledge required to pull it off. Being a live electronic music performer also means there’s a focus on gear, so I’m always trying to make my live setups as lightweight as possible and to favor simple solutions over complex ones.

Small and Lightweight Footprint

my setup as of 2024

When I first started out, I would take my full-size synthesizer keyboard, a big rack with a mixer, and on and on, and it was so heavy I needed another person to help me move it. But as I’ve played more and more gigs, I’ve realized the importance of a small setup, and now I strive for as much power and flexibility as possible but in the lightest, most compact package I can achieve. So I’ve retired a lot of gear to studio-only use and have a setup that allows me to carry everything I need in one trip, or as I originally put it, “small enough to take on the bus by myself”. You can see the current setup on the left, essentially unchanged since April 2020 when I replaced my JoMoX drum module with an Elektron RYTM MKII. Since that time, I’ve composed only with this setup and without changing the I/O. That way, I can perform any song from any era without having to ever change any physical connections. When I play a techno show, I don’t take the Virus, but everything else is the same and is plugged in like normal. If I need the Virus for the other genres, it plugs right back into its slot.

Efficiency is King

I also strive for the simplest setups, so that I have to do the minimum of preparation once I reach a venue. Rather than take a power strip to plug everything in, I bought a rackmount power unit that has surge protection, uses a simpler power cabling approach, and has a light to help illuminate dark stages. It sits in a 2U rack with the audio interface, and along with the light and power benefits, I can leave all the audio cables for the drum machine plugged in, as well as some of the power cables, eliminating another hassle during setup. I also used to do extra audio cabling for shows. For example, I could route audio from any other instrument or mixdown channel into the Virus or RYTM and then route that audio back out as treated audio, essentially giving me two more FX units that could be applied to various sounds. But I don’t do it anymore: the marginal benefit of a few more audio options is outweighed by the complexity it introduces. I could also run compressors or EQs on the master outputs, but I don’t like to rely on software tools for my sound, so I don’t use them. It’s really easy to throw on a compressor somewhere, forget about it, then realize later how much it’s affecting your sound. So even though I always strive to be as cutting edge as possible, I’m not sacrificing a simple setup/teardown and simple audio routing for that. I will, however, sacrifice some weight in the case of the PSU to help simplify things and ensure the safety of a lot of important gear. In a sense, I’m trying to do as much as possible technically but with the absolute minimum of gear and a minimum of setup and teardown fuss. A “maximalist-minimalist” approach, if you will.

Making “Songs”

Second, my style of music creation is mostly about making songs. In essence, I want my music to sound like a person wrote it, even though a machine may be playing it. In pattern-based music, the difference between good and great songs generally comes down to the details. And details take time: time for parts to be written that fit the other parts of the song, time to get the composition and arrangement just right. But this time isn’t wasted, because these songs don’t exist only inside a laptop somewhere, a snapshot in time forever consigned to slow degradation. They exist in the real world and can be recreated nearly identically by me, or even someone else in the future, no matter what version of software you’re on or what type of Mac you’re using. And to me, there’s value in that. These songs are not tied to a computer; they are tied to hardware that can exist more or less indefinitely, which matters if you want to make a living playing your music live like I do. Great ideas aren’t lost forever or chained to a certain set of software, slowly losing quality as digital recreations of a moment in time. Writing songs this way doesn’t consign your creations to the past; it allows them to be living creatures that can grow and change just like their creator.

No One NEEDS a VST, Although They’re Super Nice to Have

My final thought is that there are no problems that can’t be worked around in my system. I don’t think of my setup as limiting; it just forces me to find solutions that are different from what would be done in a DAW. Don’t get me wrong, DAWs and VSTs are magical and wonderful, but they aren’t necessary to make great, contemporary music. Are there great sounds I don’t have access to because of my setup? To some degree, yes, although I could always sample. But a great sound is just a great sound, regardless of the tools used to make it. And yes, my palette isn’t as bountiful as that of someone using a computer-based production setup. But not only do those limitations help sometimes, they also force better decision-making during the mixing process. Instead of trying to compress two bass sounds together to get them to fit, maybe give them their own space instead. You know?

OK, that’s a lot of information. Let’s close today’s post and continue in Part 4.

 

2021 European Tour Site is Up

The European Tour website is up and running now and more or less complete. Follow the tour as it starts in Budapest and passes through 12 countries on its way back home to Krakow. On the map, you can click the countries or the cities to see photos from that region. Wherever there were performance photos, they were placed first in the slideshow. The graphic was fun, if challenging, to make, and I learned a lot this time that is already helping me improve future maps.

It turned out that every country in this area has red in its flag, so I made each country its specific shade of red, and when you roll over them, they change to another color from that country’s flag. Check out Luxembourg’s cool light blue color! Some countries, like Belgium, and cities, like Stuttgart in Germany, don’t have any photos associated with them, so there’s no rollover for them. And of course, Czechia was completely left out of everything. Maybe next time, Czechs.

Anyway, here’s the link, or click on the 2021 Tour link at the top of my homepage at www.doperobot.com. The West Africa Tour page is starting to come together as well, and you can also find that link on the homepage. Check back often to see how it’s coming along!

A Herculean Task: Producing 3 Albums DAWlessly — Part 2

In the previous blog post, I talked about my “DAWless” production setup; in this one, I’ll go into more detail and discuss some issues and limitations I had to overcome to record these albums.

The Issues

#1: The MPC

The MPC, as great as it is, is limited to 64 tracks, which may seem like a lot, but it’s not when you’re making different variations of a pattern or experimenting with different instrumentation. And the only way to make room for new patterns when you run out of tracks is to delete and/or move them, a laborious, time-consuming physical process that wears out the hardware and wears out the user. Yes, copy and paste is easy, but it becomes problematic when you have to transfer a change to fifty, sometimes sixty sequences. And as things inevitably get moved around and deleted, their position in the track order changes. That’s a real problem because before any show, when it’s time to convert these longer songs into single MIDI sequences for live performance, all tracks must be in the same “lanes”, so to speak, and reorganizing a track for conversion is a very long process. Some of these tracks have required five or more “reorganizations” before reaching a final state, and that can sometimes take half a day or more to complete. I made frequent new versions and many backups during this time because it can be easy to blow past the original idea onto something different, and I think that’s generally a bad idea; if it happens, I can go back and start again with a previous version. Backups also matter because during long composition sessions, things can get accidentally lost or overwritten, and backups keep that from becoming an unrecoverable problem. Essentially, the MPC is a very inefficient way to record, archive, and organize many multiples of tracks, so next time I will surely use my DAW or other tools to help with some of these things.
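The MPC’s own project format isn’t something I script against, but to make the “lanes” idea concrete, here’s a hypothetical sketch using exported standard MIDI files and the mido library: it sorts a song’s tracks into one canonical lane order so every song lines up the same way before conversion. The lane names and file names are invented for the example.

```python
import mido

# hypothetical canonical lane order that every song's tracks should follow
LANE_ORDER = ["Kick", "Snare", "Hats", "Toms", "Bass", "Lead", "Pads", "FX"]

def reorder_lanes(in_path: str, out_path: str) -> None:
    """Rewrite a standard MIDI file so its tracks follow LANE_ORDER;
    unrecognized tracks sink to the end, keeping their relative order."""
    mid = mido.MidiFile(in_path)
    rank = {name: i for i, name in enumerate(LANE_ORDER)}
    mid.tracks.sort(key=lambda track: rank.get(track.name, len(LANE_ORDER)))
    mid.save(out_path)

reorder_lanes("song_v5_export.mid", "song_v5_lanes.mid")
```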

#2: The Size of the Projects

documents with track info

There’s an enormous amount of information that needs to be stored, backed up, and tracked for these albums: hundreds of different sequences; dozens of versions of patches spanning multiple hardware machines; dozens of audio recordings; hundreds of hours of patch tweaks; and dozens of documents (see right) containing information like where drum kit versions live, where Virus patches are stored, which effects are in use, and so on. And of course, all this information needs to be backed up to a computer regularly so that data loss isn’t a death sentence. (I only use hardware machines that are fairly common, so that if a device fails, it can be replaced and reloaded with all the relevant sounds with minimal delay.) On top of that, this project spanned three genres and over two dozen songs, and was interspersed with multiple live shows, studio recordings, my daily street performances, two house moves, a tour to Africa, and so much more. It was a case of information overload and organizational struggle multiplied by project size. The time commitment was unreal too: the very first of these beats were written in December 2020 and January 2021, putting these projects at the 3+ year mark, many multiples of the time it took to produce my previous 3-5 song EPs. And this was all happening while I was compiling the documentation and writing the software for the Roland TB-3!

#3: Limitations in Hardware/Software

UFX & TotalMix

RME UFX with TB-3 & BlackBox connected

The RME UFX is an audio interface with very flexible digital mixing software called TotalMix. Any input can be routed to any output via submixes, and each input and output has its own compressor/gate and EQ. Each output channel can also be recorded using the loopback feature. In my case, I send a +4 dB level to the 5/6 analog outputs and a +10 dB level to the 7/8 outputs. Those outputs are also mirrored to headphone outputs 9/10 and 11/12 so that I can connect to either the front or back panel for live shows. The front panel (headphone) outputs are preferred, though, because they are easily accessible and require a single cable connection. (You can see this routing in the image on the left.) Even though I have a compressor and EQ available on the main mix outputs, I try to keep the TotalMix modifications as small as possible, so that if my audio interface ever fails, I can still more or less play a show through a regular 16-channel house mixer and not have it sound completely different. So far so good, right?

totalmix setup for electro

Well, the main limitation of TotalMix is that it contains only one reverb/delay per snapshot, which is shared across all inputs and outputs. A single effect shared among 14 channels could be a dealbreaker, but all the other instruments have built-in effects, so the limitation is somewhat mitigated. Still, “adding a touch of reverb” to some elements in a mix has to be done on the instrument, since I don’t have the RYTM or Virus wired up to process external signals. In addition, TotalMix currently only allows eight snapshots (mixes) to be recalled instantly; to use more than eight, a new workspace has to be loaded. This is why my live sets are almost always eight tracks long. I do occasionally make longer sets, but then I either try to combine tracks so they share snapshots or, if that’s not possible, I load a new workspace sometime during the set to have more snapshots available.

Virus

The Virus is an amazing machine, with over 500 user RAM locations and 26 more banks of 128 sounds that can be burned to ROM locations. It also has dozens of VA voices available before the CPU starts to cut off notes. But even with that much space and that many voices, notes still cut off with complex combinations of patches, and patch space still runs out fairly quickly. The Virus is usually what I use for any type of melodic sound, from bass to arps to keys, but I only use the stereo digital output, so the sound usually can’t be altered any further once it leaves the machine, since any master effect applies to all sounds on the channel. There’s also no compressor on the Virus, but there is EQ and saturation, which can accomplish some of the same things. So now I have banks of patches containing variations of sounds, backed up to the computer for each song; I do a full machine backup about once a month and keep a regularly updated written inventory that documents which patches the multis point to. This process helps prevent data loss, is absolutely essential for a live-only artist, and has saved me many times.
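To give a sense of what that written inventory looks like, here’s a hypothetical sketch in code form: the multi names, banks, and slot numbers are all invented, but this is the kind of multi-to-patch mapping I keep written down and updated.

```python
# hypothetical excerpt of a Virus inventory: which single patches each multi
# points at, and where those patches live (all names and slots are invented)
VIRUS_INVENTORY = {
    "Multi 12 (electro song 3)": {
        "Part 1 (bass)": "RAM bank B, slot 17",
        "Part 2 (arp)": "RAM bank B, slot 18",
        "Part 3 (keys)": "ROM bank F, slot 102",
    },
}

def patches_for(multi_name: str) -> list[str]:
    """List every patch location a given multi depends on."""
    return list(VIRUS_INVENTORY.get(multi_name, {}).values())

print(patches_for("Multi 12 (electro song 3)"))
```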

RYTM

For the drum machine, I have the snare, hi-hats, rimshot/clap, mid/hi tom, and cymbal/cowbell routed to individual outputs. On the main RYTM stereo output, I put the kick, bass tom, low tom, and onboard effects. So I essentially send the low end and FX to the stereo output and the rest of the instruments to individual outputs to be treated separately. The RYTM has one master compressor/overdrive section. I use the overdrive often, but even though the compressor is powerful and I’m sure I’ll use more of it in the future, for now I use it very sparingly, if at all. I do this for a few reasons: 1) I need to keep a constant loudness across songs from all eras of production, since any of them can potentially be performed live; 2) if there are frequency overlap or transient problems, I fix them with sound design instead; and 3) the compressor is applied to the entire main stereo output (including effects), which is almost never the intended outcome. Other than managing backups and kits, which is fairly easy to do on the RYTM, the other main limitation is that it only has one master reverb/delay, shared among all the instruments. It just reminds me that in the recording and composing process, a few sounds done well is usually better than a lot of sounds all trying to work together, and one effect done well usually works better than a lot of effects competing for space in a mix.

TB-3 & BlackBox

BlackBox and TB-3

The TB-3 and BlackBox are each routed to a dedicated analog stereo input on the front of the audio interface. For the TB-3, I invented a way to back up and recall patches from the MPC, though patch backup and retrieval isn’t nearly the issue it is with the Virus and RYTM. At the end of the projects, I back up all the TB-3 patches I’ve created to a computer using my TB-3 Editor software so I can quickly run through them in the future when I want a new sound. As for the BlackBox, I use it just for vocal samples, which are loaded onto it by SD card, so it too is not complicated to back up and maintain, other than having to carefully design the directory tree so that projects can be recalled properly with MIDI. All in all, these two machines, though important, didn’t cause many headaches with this DAWless approach.
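Purely as an illustration of what I mean by designing the directory tree, here’s a hypothetical layout; the folder and file names are invented and may not match the BlackBox’s exact conventions, but the point is that the structure and ordering stay fixed so MIDI recall keeps pointing at the right material.

```
/Presets
    /01 Electro Set
    /02 Techno Set
    /03 DnB Set
/Vocals
    hook_phrase_A.wav
    hook_phrase_B.wav
```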

 

And Now, On To the Mixing, Composing, and Arrangement

Come back next week when we talk about what the mixing process is like for this live-oriented, DAWless production setup.