Here’s a basic introduction to making the growl basses popularized in genres like Dubstep, DnB and Electro. If you own FL Studio Producer Edition, you should have the tools readily available to make these sounds. If you don’t, you can purchase the VST plugins used in this article separately from Image-Line.
Step 1: Understand the Tools for the Job
Here’s an overview of the process we’ll use in this tutorial:
- First off, we’ll create a simple FM (Frequency Modulation) patch using FL Studio’s FM Synthesizer, Sytrus.
- Next, we’ll feed this signal through a vocoder plugin. In this tutorial we’ll use Vocodex because of how deep its engine lets the user go: it exposes many more parameters than your typical vocoder plugin.
- Finally, we’ll render this out to a .WAV using Edison, and feed it through Harmor via a method of resynthesizing/resampling. This will allow us to further distort and shape the signal, as well as slice up the audio to create interesting patterns.
FM Synthesis in a Nutshell
Before we jump into Sytrus, we must first learn the fundamentals of Frequency Modulation Synthesis. This is not to be confused with Ring Modulation.
Frequency Modulation is the process of using the oscillation of one particular waveform to modulate the frequency of another waveform. So in simple terms, we’re using one oscillator (say, a triangle wave) to change the frequency rate of another oscillator (such as a sine wave). This creates a harsh, grungy sound that we can emulate vowels with.
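The idea can be sketched in a few lines of Python (a toy illustration of the math, not how Sytrus works internally): the modulator's output is added to the carrier's phase, which bends its frequency and creates the sidebands that make the growl. All the frequency values here are arbitrary examples.

```python
import math

def fm_sample(t, carrier_hz=110.0, mod_hz=220.0, mod_index=2.0):
    """One sample of a frequency-modulated sine at time t (seconds).

    The modulator sine wobbles the carrier's phase; mod_index sets how
    hard it pushes, which adds sidebands (the harsh extra harmonics).
    """
    modulator = math.sin(2 * math.pi * mod_hz * t)
    return math.sin(2 * math.pi * carrier_hz * t + mod_index * modulator)

# With mod_index = 0 this is a plain sine; raising it adds harmonics.
sr = 44100
samples = [fm_sample(n / sr) for n in range(sr // 10)]  # 100 ms of audio
```

Note the output stays between -1 and 1 no matter how high the modulation index goes; FM changes the timbre, not the level.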
Now, Sytrus is a powerful FM synthesizer that not only offers numerous oscillators, but also has a separate engine that allows you to add or remove harmonics from the waveforms. It lets you draw your own oscillators, giving you unparalleled freedom to shape your sound how you want.
Finally – and I want to emphasize this point – copy this tutorial, then change a few things until you find a sound that you are happy with.
Step 2: Create a Synth Patch in Sytrus
The Sytrus Interface
Let’s open Sytrus. At first glance it looks extremely confusing, but I’ll explain the basic modules. I won’t cover everything Sytrus can do, but I’ll outline the modules we’ll be using.
- OP 1-6: The operators act as our oscillators. These are the waveforms we will be manipulating to create our sound. If you click an operator, you will gain access to the panels assigned to it. We’ll be using MOD and OSC for this tutorial. MOD is how we will assign our modulation and automation. OSC is where we will draw in our harmonics for each oscillator.
- FM Matrix: The array of encoders in a 9×9 grid. This is where we set where each oscillator is routed, and how much of the signal is sent there.
- MAIN: The main page will only be used to create automation clips for our XY Pad. We can also adjust global settings such as Volume and Pitch.
Instead of using the effects in the Sytrus engine, I’ll be using ones in Harmor or in FL Studio’s Mixer. This will save us time and keep the tutorial from getting too drawn out.
The Basic FM Patch
Here’s what we’ll be making. It’s a harsh, growly type of synth; luckily, it’s not too difficult or complicated to make. It requires only a few oscillators and a bit of FM magic. Before we begin, make sure the Default patch is loaded; on startup, Sytrus likes to load a string/pad patch.
To start, click the grid slot 2×1 (that is, the second node across, on the first row), and drag it all the way counter-clockwise. This is telling Oscillator 1 to have its frequency modulated by Oscillator 2. Make sure that you’re in the FM Matrix, and not the RM Matrix. Now, let’s go into the Operator 2 module, and click the OSC tab. This brings up an image of the waveform, along with a series of vertical bars divided horizontally by a single line.
The bars above this line are the harmonics. Think of them as a series of faders that we can turn up or down to change the sound. There are also five darker bars (not including the first one). These are our fundamentals, and will have the most profound effect on the sound. Dragging bars too far away from these can make the sound disordered or chaotic.
Dragging the bars adds harmonics on a more linear scale. For example, increase the amplitude of the fourth (second-to-last) dark bar. You will notice the sound gains extra frequencies in the higher register, adding sparkle or harshness.
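Those harmonic bars behave like the coefficients of an additive, Fourier-style sum. A minimal sketch of what raising a bar does (the function and the level values here are illustrative, not Sytrus's actual engine):

```python
import math

def waveform(phase, harmonics):
    """Sum sine partials; harmonics[k] is the level of partial k + 1.

    [1.0] gives a pure sine; putting a level in the 4th slot brightens
    the tone, like raising that dark bar in the OSC panel.
    """
    return sum(level * math.sin(2 * math.pi * (k + 1) * phase)
               for k, level in enumerate(harmonics))

pure_sine = waveform(0.25, [1.0])             # fundamental only
brighter = waveform(0.25, [1.0, 0, 0, 0.5])   # add some 4th harmonic
```

Each extra nonzero level adds a higher-pitched partial on top of the fundamental, which is exactly what the vertical bars in the OSC tab control.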
The bars below the line are the phase indicators. Currently everything is in phase and working normally, but if we drag these faders down, we can adjust the phase of the corresponding harmonics above. This can also drastically affect the sound, so feel free to experiment!
At the top left section of Operator 2, we can find a series of small faders. These are a more generalized way to shape the waveform, and will remind seasoned synthesizer programmers of a typical wavetable selection. For example, dragging the first fader up will morph the sine wave into a triangle wave.
Now let’s bring in another oscillator. This one will modulate Oscillator 2 very slightly. Be wary when modulating oscillators—a little can go a long way, so use fine-tuning once you’ve added more than one. Click and drag the 3×2 node clockwise very slightly. If you can, do this while holding down a MIDI key or playing a loop so you can hear the changes live, allowing you to judge things better.
Finally, we need to map our modulation to the X/Y Pad. To do this, head over to the OP1 panel, and click the MOD tab. Find Mod X and drag the first point of the envelope all the way down; this determines the amount of modulation applied to Operator 1 based on the value of X.
Repeat this process for Operator 2, only use Mod Y. Finally, while holding down a note, move the XY Pad around. You should hear a vowel-like sound. To use this in a clip, right-click both the X and Y encoders and hit Create Automation Clip. We can now precisely modulate these parameters over a period of time. If you get stuck, re-read the tutorial or download the sample patch.
Step 3: Process with Vocodex
The second ingredient in our recipe for bass is a vocoder plugin, in particular Image-Line’s own Vocodex. This magnificent plugin not only processes a single signal, but can feed multiple signals through the same processors and affect them synergistically.
So what does this mean? It means we can input, say, our bass patch and a random audio sample of literally anything, such as a vocal phrase, a one-shot, or even another bass patch, and mash them together, creating an extremely filthy, wet-sounding result. So let’s get started!
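Under the hood, a channel vocoder measures the modulator's energy in each frequency band and imposes it on the matching bands of the carrier. Here's a deliberately crude sketch of that idea using a plain DFT (a toy model; Vocodex's actual engine is far more sophisticated):

```python
import cmath
import math

def dft(x):
    """Naive O(n^2) discrete Fourier transform (fine for a demo)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(spec):
    """Inverse DFT, returning only the real part."""
    n = len(spec)
    return [sum(spec[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

def vocode(carrier, modulator, bands=8):
    """Scale each band of the carrier by the modulator's band energy,
    so the carrier 'speaks' with the modulator's spectral shape."""
    c, m = dft(carrier), dft(modulator)
    n = len(c)
    width = n // bands
    out = []
    for k in range(n):
        band = min(k // width, bands - 1)
        lo = band * width
        energy = sum(abs(m[j]) for j in range(lo, lo + width)) / width
        out.append(c[k] * energy)
    return idft(out)

carrier = [math.sin(2 * math.pi * 5 * t / 64) for t in range(64)]
silent = vocode(carrier, [0.0] * 64)  # a silent modulator mutes the carrier
```

This is why the carrier/modulator assignment matters: the carrier supplies the raw tone, while the modulator only contributes its band-by-band energy envelope.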
First, load Vocodex on a new mixer channel. Find any audio sample you like—in this case we’ll be using a “Yeah!” one-shot that ships with this version of FL Studio 11. We can detune this using our Sampler settings, then link it to another mixer channel. Then we’ll route our Sytrus and sample outputs to go only through our Vocodex channel by right-clicking the channel routing button and hitting Route to This Track Only.
Next, we’ll open up our Vocodex plugin and set the inputs at the top (to 1, but experiment!). This tells Vocodex which audio signal is the carrier, and which is the modulator. In most cases, our bass patch will be the carrier.
Just so this tutorial doesn’t go for four years, I’ll explain the basics of Vocodex. The primary tools we’ll be using are the four encoders on the right of the interface, and the small green dots to the left of them.
The first encoder determines the width of the bands. We want a nice thick bandwidth to avoid a thin, tinny sound, so turn the encoder clockwise until it sits at about 1-3 o’clock. The second encoder from the left (pink) determines which input source is most dominant. We can also use the four green dots left of the encoders to cycle through various input settings: whether we hear more of the carrier or the modulator.
The last encoder is the Unison setting. For this type of sound it’s generally good practice to set it very high, if not to maximum. We want a chorus effect and as many voices as possible for as big a sound as possible.
Finally, uncheck the Draft toggle at the top. This will keep us at high quality for previewing. (The actual output will always be full-quality.)
Feel free to experiment with other settings, such as bandwidth allocation or band distribution. I won’t explain everything, nor do I claim to know all the ins and outs of this plugin.
Step 4: Render Using Edison and Harmor
Record into Edison
Now, we should have something wet, dirty and in-your-face. If it’s not, add some effects like EQ, compression, Waveshaper, chorus and reverb:
- For EQ, boost the lows and highs, cut the mids.
- Compression should be harsh and fast to remove any artifacts.
- Waveshaper, chorus and reverb add flavour. Waveshaper should be crunchy, chorus very wet (but with very small unison/panning), and reverb only subtle to make it sound more realistic.
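A waveshaper is just a nonlinear transfer curve applied sample by sample. A common soft-clip curve looks like this (a generic tanh shaper for illustration, not FL Studio's Waveshaper plugin):

```python
import math

def soft_clip(sample, drive=4.0):
    """tanh soft clipper: drive pushes the signal into the curve's
    flat ends, squashing peaks and adding odd harmonics ('crunch')."""
    return math.tanh(drive * sample)

quiet = soft_clip(0.05)  # small signals pass through nearly linearly
loud = soft_clip(0.9)    # big signals get squashed toward +/- 1
```

Raising the hypothetical `drive` parameter makes the curve flatten sooner, which is the "crunchier" setting described above.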
Record the output of our Vocodex channel into an Edison:
- Load up an Edison, set it to On Play and Loop, loop the section of your track containing the bass, and hit play.
- Once the song comes back to the start of the loop, Edison will set a marker. We can double-click this to select all the spillover and hit delete. This will leave us with a sample the perfect length of our pattern/loop.
Resample Using Harmor
The last step of this process is to resample the bass using Harmor. Drag the sample from Edison into the Image section of Harmor. This will resynthesize it, letting us apply additional effects that we couldn’t normally achieve with the mixer.
First we need to set some parameters, as Harmor initially reduces the quality of the input. In the ADV tab, make sure Image/Resynthesis is set to High Precision, Denoise is set to 0, and Precision is set to Perfect.
Next, play around with the Scale and Form controls in the IMG tab. We can also increase the unison value again here, taking care not to increase the Stereo Image (Pan) too much. The Prism and Pitch modules can make interesting sounds, while the FX section is where we can add additional distortion or chorus.
You can repeat the resampling process again and again by adding another Edison to the output of your Harmor channel, recording the result and dragging it back into Harmor as an image. This can lead to some seriously filthy sounds.
Here’s our final result after playing around with a few settings:
So there we are! A very basic introduction to FM synthesis, vocoding and resampling using Image-Line’s native tools. If you don’t have FL Studio, the end result is still possible, although the workflow will be significantly different.
I highly recommend buying these plugins (or buying FL Studio), especially if you’re going to be making EDM tracks that require huge bass. I also recommend checking out Seamless (AKA SeamlessR) on YouTube. He has a complete series of How-To-Bass and goes more in-depth with the inner workings of Harmor, Sytrus and Vocodex, along with other powerful tools.
This tutorial draws on collective knowledge from his techniques, along with other things I’ve picked up along the way, and is meant to introduce these elements to people who are new to FM synthesis, vocoder processing and resampling.
Acer’s P3 convertible Ultrabook sits astride a Serato Scratch rig (running on a conventional laptop, actually). The software is a new touch-enabled version of VirtualDJ, made for Acer and currently available free with their touch range. Photo from the Acer event in Taipei. (And yes, the iPad has something to say about this, as well.)
“Where are my touch laptops?”
It’s becoming the “where are my flying cars?” of the laptop music age.
And so it is that I’m here in Taipei, Taiwan, having spent today hanging out with Acer as they talk about what they’re doing with touch on their computers (laptops and tablets). The touch laptops are here in force – not a couple of netbooks or tablet PC oddities, but with the full-blown force of the PC industry behind them. The question now is whether we actually want them.
2012 was a little early to ask that question for the music audience; now the mature products – with Windows 8 behind them – are in the 2013 generation. I have some specific information to share, but I want to back up and consider some of the broader questions first. (If you just want to look at hardware, check back later this week.)
It’s been nearly a decade since electronic musicians first started seeing touch in the wild. At the time, the power was immediately evident: you had the ability to imagine new ways of interfacing with music without the limitations of hardware knobs and faders. It was Star Trek: The Next Generation-style power, finally appearing in the real world. And that was a natural fit to musicians suddenly facing computer capabilities that lacked obvious form – sounds unfettered by the laws of acoustics and physical instruments. So it was also immediately apparent that eventually, you might want these touch interfaces to merge with your computer.
But since that first epiphany, the marriage of touch with conventional computers has been surprisingly slow in coming. Apple showed the way with iPhone and iPad, in their own categories. But laptops, with their hinged clamshell design, are another animal. Conventional software written for the mouse and keyboard can be simply awful when you start jabbing with your fat fingers, and the hinged design of a laptop leads to the dreaded “gorilla arm”: using a vertically-oriented display feels uncomfortable and makes your arms go numb. (On behalf of the gorillas of the world, I have no idea why this is called gorilla arm; maybe gorillas were unfairly subjected to usability testing in an early computer lab.)
So, why would you want a laptop to be touch-enabled, anyway, instead of a dedicated tablet running touch-centric software? Apple, for their part, has drawn a line in the sand and decided you don’t. Their MacBook line eschews touch beyond the trackpad, and focuses on conventional (still very powerful) software. The iPad is the platform for touch. Even years into a supposed “post-PC” age, software on the two remains very different – and the OS X software is far closer to its Windows brethren than iOS. Whatever the rampant speculation about the two fusing, with the MacBook and iPad leading their respective sales categories, there doesn’t seem to be a logical motivation to merge them – least of all when Microsoft’s strategy of treating the two categories as blurred has initially fallen flat.
And let’s be clear – this can’t be overstated – the iPad is working as a music platform, a new music platform. It’s working so well, in fact, that it’s easy to lose sight of whether its rivals are in the game.
But looking forward, there are reasonable arguments to adding touch to a laptop – itches that neither tablet nor conventional laptop can scratch.
1. You might want a bigger display than a tablet. (With the success of the iPad mini, and sales of smaller Android tablets or even the Kindle, tablets tend toward sizes of 10″ or smaller.)
2. You might want some ports (note: plural) for connecting hardware – for us musicians, audio interfaces or controllers. That can be a difficult, expensive, or even impossible proposition on the iPad.
3. You might want more computational power. Apologies to brilliant chipmakers, but that again returns to size accommodations (think heat and battery requirements, in particular).
4. You might like to run any software, not just what’s available through a store. (Users run software made by developers, and so this means the narrower capabilities of development frameworks on those stores – Microsoft’s as well as Apple’s – don’t always fit what developers create.)
Or, to put it even more directly, why carry a laptop and a tablet if both are basically computers with displays? Why shouldn’t the laptop also offer an alternative to the tablet?
But could another PC maker give you a laptop you would want? Just as you might admire Apple for their focus on separate laptop and tablet categories, I think there’s a place for the PC OEMs’ exuberant experimentation. There, fans of natural selection will find rapid iteration, increasingly centered close to manufacturing in Asia, and a “let’s see what sticks,” all-categories-covered barrage of product ideas. And some of these ideas do stick: Apple alone didn’t – couldn’t – build the one-billion-user global PC market. They’re just so far short of another real hit in this age.
Touch, physical contact, and gestures matter – in every nuance. This happens to be, I’m told, the right way of eating dumplings in Taiwan. (Okay, it also gives you more tactile feedback than a touchscreen.) Come on, I’m doing research.
Musicians don’t exactly make up a good picture of the global PC consumer base. But they do represent people who can push machines to their limits when it comes to expression and performance. They’re the ones using all those ports at once when average consumers don’t – and using their full bandwidth. They’re filling up hard drives and requiring maximum throughput when reading them. They’re the greatest test of every nuance of a touch display, every millisecond of latency, because they don’t just use them as an interface: they use them as an instrument.
And the clamshell laptop is in a way an icon of the revolution in computer music making – and the target of disdain. Think of how often – years into widespread music performance on computers – you hear complaints about “laptop” music, “laptop” performance, people checking their email, even the glowing fruit logo of a certain popular vendor as it hovers over audiences.
We’ve got another shot at seeing a replacement. What’s remarkable to me is, for all the success of the iPad, if you go to clubs or live shows, experimental or dance music, the best you’ll see the iPad do is sit next to a laptop. So there’s clearly something missing here.
Taking nothing away from other hardware and acoustic instruments and all the myriad ways you can make music, this question of whether you can make music with commodity computers remains interesting. Ever since Kraftwerk first wrote songs about the joys of appropriating business machines, musicians have continued to do fun things with mainstream hardware designed for a completely different purpose.
So – let’s become operators of our pocket calculators once again, and see how things are stacking up. I’ll have more on the specific hardware later this week, some impressions from Taipei (of Acer and others) and – because this takes some time to test and cover – at last through the rest of the year. (And yes, readers have been asking for this; now I think the time is right.)
In the meantime, it’s worth pointing to developer Chris Randall, who’s been tackling multi-touch with a very big external display. As the PC makers struggle to get it right in hardware, it turns out that making the actual music making software is an even bigger challenge. Chris writes:
As all regular readers of AI know, I’ve spent the better part of a year (and gone through two fairly expensive touchscreen monitors) trying to come up with a touch-based app for songwriting and live performance. I’ve just got my seventh (!!!) attempt working to the point where it can actually be used to make music, so I thought I’d toss up a quick video.
He puts together a combination of synth hardware, custom computer-generated noises, samples, and touch sequencing. Watch:
Well, the thing about a blank canvas – the thing that makes it beautiful – is that it is open to a lot of possibilities. And those take time.
Tune in soon for the exciting … uh, continuation and very much not final conclusion … to our saga.
Naturally, having endured this editorial, if you have specific hardware or questions you’d like me to investigate while roaming the big PC vendors at Computex, Taipei, let me know now! I hope we’ll have more conversations with these makers in coming months.
The post As Touch and Laptops Converge, Finally Potential for Music Making? [Prelude] appeared first on Create Digital Music.
Spotted over on the Red Bull Music Academy YouTube stream, a complete walkthrough by Four Tet himself of the gear he uses in a live DJ set. Props for Cool Edit Pro and a Roland SP-303 on its own channel opposite an Ableton workstation!
Yes, this looks like an ordinary stompbox, but it is reprogrammable. Can I put this massive “prototype” disclaimer over any photos of me tagged on Facebook? No? Photo courtesy the OWL folks.
There are stompboxes. They are — for lack of a better word — foot worthy. You can step on them, in a way that is less possible with a computer. (Well, sure, somewhere amidst an endless spinning color pinwheel you may have wanted to step on your MacBook Air, but then thought better of it – financial investment and whatnot.)
Then, there are computers. They can do everything. That stompbox is one particular distortion effect. And it is always just that one distortion.
But what if you could have both?
As embedded technology continues its march toward greater user friendliness, lower cost, and greater sonic powers, it seems the time is right for hardware that combines the durability of dedicated sound gear with the open-ended potential of computers. That is, it’s not really clear where the computer ends and the stompbox begins.
OWL isn’t the first project to take on this dream, but it’s looking more practical than those that came before.
The project promises open source hardware, with open code, that can be reprogrammed into new sound effects simply by uploading new code. As with a new generation of low-power tablets and phones and the like, there’s an ARM chip at its heart. (The ARM Cortex M4, to be exact.)
If you’re a guitarist who writes your own C++ code – yes, there’s actually a sizable group of those – you can have a ball making your own DSP routines. If you’re not, OWL promises a library of patches, presumably growing with more contributions from the open source community.
There’s not a whole lot to look at at this point – while they’ve got a GitHub repository going, it includes only a little bit of sample code. But in the video, the results look impressive, perhaps enough – given an experienced team – for some to go ahead and take the leap of supporting the crowd-funded Kickstarter project.
Patches load directly via USB – so reprogramming the pedal is a pretty easy affair for the average user. If you are a coder, you can use simple C++ without the usual mucking about with hardware-specific code. (That’s where, to me, the advantages of newer ARM chips come in: there’s enough horsepower here that you don’t have to fret over every spare cycle, coding close to the iron. But if you do want to use specific ARM functions, those are supported in the framework, too.)
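The shape of a reprogrammable pedal's DSP code is simple: a process callback fills one block of audio samples at a time. Here's a hypothetical patch sketched in Python for brevity (an actual OWL patch would be C++ written against their framework, whose API isn't shown here; the class and parameter names below are invented):

```python
class FeedbackDelayPatch:
    """Hypothetical pedal patch: a feedback delay line processed one
    block of samples at a time, the way a firmware callback would."""

    def __init__(self, delay_samples=4800, feedback=0.5):
        self.buf = [0.0] * delay_samples  # circular delay buffer
        self.pos = 0
        self.feedback = feedback

    def process(self, block):
        """Called once per audio block; returns the processed block."""
        out = []
        for x in block:
            delayed = self.buf[self.pos]
            out.append(x + delayed)  # dry signal plus echo
            # Write input plus decayed echo back for the next repeat.
            self.buf[self.pos] = x + delayed * self.feedback
            self.pos = (self.pos + 1) % len(self.buf)
        return out

patch = FeedbackDelayPatch(delay_samples=4, feedback=0.5)
dry = patch.process([1.0, 0.0, 0.0, 0.0])   # impulse in
echo = patch.process([0.0, 0.0, 0.0, 0.0])  # echo returns one buffer later
```

Swapping in a different `process` body is all "uploading a new effect" amounts to conceptually, which is why a USB-reflashable pedal is such a flexible idea.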
What you get in the product appears to be a no-nonsense hardware platform with the requisite jack connections and stomp-able switches, and a straightforward code framework. It’s not quite as idiot-proof as something like Arduino, but to a growing army of DSP students around the world, it’s a beautiful blank canvas.
It’ll be fun to watch this evolve. And there appears to be at least enough crowd funding to get it rolling – with additional funding “unlocking” additional work from the team on other features. See more at the Kickstarter site:
OWL Programmable Effects Pedal [Kickstarter]
In a way, I like that the product isn’t too ambitious: it’s simple, uses a smart platform as its basis, and focuses on things people need.
It seems there’s more to do in this space. Years ago, the talented originator of Winamp and Reaper made the JesuSonic, dedicated hardware for effects cheekily hidden in a massive crucifix. But now, that sort of technology can easily hit the mainstream – with or without weird religious iconographic housings. The other logical direction seems to be more traditional computers running Linux, the sort which could take uploads dynamically using tools like Pure Data, without having to reprogram the pedal between each set. But both directions – embedded computers and dedicated hardware – hold potential, and both could be reprogrammable. OWL could be the herald of things to come, and if successful, the first real case study in making those things work.
The post A Stompbox That Can Become Whatever You Like, in Crowd-funded OWL appeared first on Create Digital Music.
Disciplined preparation creates amazing performances. If you believe in this statement, then having a robust cue point strategy is a great way to start prepping for success. Often ignored, cue points can be used in a variety of ways that will...