This tutorial explains several audio/midi plug-in examples in detail and explores the open possibilities of plug-in development.
LEVEL: Intermediate
PLATFORMS: Windows, macOS, Linux, iOS
PLUGIN FORMATS: VST, VST3, AU, AAX, Standalone
CLASSES: MidiBuffer, SortedSet, AudioParameterFloat, Synthesiser, MidiMessage, AudioProcessorValueTreeState, GenericAudioProcessorEditor
There are several demo projects to accompany this tutorial. Download links to these projects are provided in the relevant sections of the tutorial.
If you need help with this step in each of these sections, see Tutorial: Projucer Part 1: Getting started with the Projucer.
The demo projects provided with this tutorial illustrate several different examples of audio/midi plugins. In summary, these plugins are:
We use the GenericAudioProcessorEditor class in all of these projects to lay out the GUI components of each plugin example.
The code presented here is broadly similar to the PlugInSamples from the JUCE Examples.
Download the demo project for this section here: PIP | ZIP. Unzip the project and open the first header file in the Projucer.
Make sure to enable the "MIDI Effect Plugin" option in the "Plugin Characteristics" field of the project settings in the Projucer.
The Arpeggiator is a MIDI plugin without any audio processing that can be inserted on a software instrument or MIDI track in a DAW to modify the incoming MIDI signals.

In the Arpeggiator class, we have defined several private member variables to implement our arpeggiator behaviour as shown below:
private:
//==============================================================================
juce::AudioParameterFloat* speed;
int currentNote, lastNoteValue;
int time;
float rate;
juce::SortedSet<int> notes;
//==============================================================================
JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (Arpeggiator)
};
Among these we have a SortedSet object that holds a set of unique int variables according to a certain sorting rule. This will allow us to reorder the MIDI notes efficiently to produce the desired musical patterns.
In the class constructor, we initialise the plugin without any audio bus as we are creating a MIDI plugin. We also add a single parameter for the speed of the arpeggiator as shown here:
Arpeggiator()
: AudioProcessor (BusesProperties()) // add no audio buses at all
{
addParameter (speed = new juce::AudioParameterFloat ("speed", "Arpeggiator Speed", 0.0, 1.0, 0.5));
}
In the prepareToPlay() function, we initialise some variables to prepare for subsequent processing as follows:
void prepareToPlay (double sampleRate, int) override
{
notes.clear(); // [1]
currentNote = 0; // [2]
lastNoteValue = -1; // [3]
time = 0; // [4]
rate = static_cast<float> (sampleRate); // [5]
}
Next, we perform the actual processing in the processBlock() function as follows:
void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midi) override
{
// the audio buffer in a midi effect will have zero channels!
jassert (buffer.getNumChannels() == 0); // [6]
// however we use the buffer to get timing information
auto numSamples = buffer.getNumSamples(); // [7]
// get note duration
auto noteDuration = static_cast<int> (std::ceil (rate * 0.25f * (0.1f + (1.0f - (*speed))))); // [8]
for (const auto metadata : midi) // [9]
{
const auto msg = metadata.getMessage();
if (msg.isNoteOn())
notes.add (msg.getNoteNumber());
else if (msg.isNoteOff())
notes.removeValue (msg.getNoteNumber());
}
midi.clear(); // [10]
if ((time + numSamples) >= noteDuration) // [11]
{
auto offset = juce::jmax (0, juce::jmin ((int) (noteDuration - time), numSamples - 1)); // [12]
if (lastNoteValue > 0) // [13]
{
midi.addEvent (juce::MidiMessage::noteOff (1, lastNoteValue), offset);
lastNoteValue = -1;
}
if (notes.size() > 0) // [14]
{
currentNote = (currentNote + 1) % notes.size();
lastNoteValue = notes[currentNote];
midi.addEvent (juce::MidiMessage::noteOn (1, lastNoteValue, (juce::uint8) 127), offset);
}
}
time = (time + numSamples) % noteDuration; // [15]
}
Download the demo project for this section here: PIP | ZIP. Unzip the project and open the first header file in the Projucer.
The noise gate is an audio plugin that filters out the input sound below a certain sidechain threshold when placed as an insert in a DAW track.

In the NoiseGate class, we have defined several private member variables to implement our noise gate behaviour as shown below:
private:
//==============================================================================
juce::AudioParameterFloat* threshold;
juce::AudioParameterFloat* alpha;
int sampleCountDown;
float lowPassCoeff;
//==============================================================================
JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (NoiseGate)
};
In the class constructor, we initialise the plugin with three stereo buses for the input, output and sidechain respectively [1]. We also add two parameters, namely threshold and alpha [2], as shown here:
NoiseGate()
: AudioProcessor (BusesProperties().withInput ("Input", juce::AudioChannelSet::stereo()) // [1]
.withOutput ("Output", juce::AudioChannelSet::stereo())
.withInput ("Sidechain", juce::AudioChannelSet::stereo()))
{
addParameter (threshold = new juce::AudioParameterFloat ("threshold", "Threshold", 0.0f, 1.0f, 0.5f)); // [2]
addParameter (alpha = new juce::AudioParameterFloat ("alpha", "Alpha", 0.0f, 1.0f, 0.8f));
}
The threshold parameter determines the power level at which the noise gate should act upon the input signal. The alpha parameter controls the filtering of the sidechain signal.
In the isBusesLayoutSupported() function, we ensure that the main bus has the same channel layout on its input and output and that the main input bus is not disabled; the sidechain can take any layout:
bool isBusesLayoutSupported (const BusesLayout& layouts) const override
{
// the sidechain can take any layout, the main bus needs to be the same on the input and output
return layouts.getMainInputChannelSet() == layouts.getMainOutputChannelSet()
&& !layouts.getMainInputChannelSet().isDisabled();
}
In the prepareToPlay() function, we initialise some variables to prepare for subsequent processing as follows:
void prepareToPlay (double, int) override
{
lowPassCoeff = 0.0f; // [3]
sampleCountDown = 0; // [4]
}
Next, we perform the actual processing in the processBlock() function as follows:
void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
auto mainInputOutput = getBusBuffer (buffer, true, 0); // [5]
auto sideChainInput = getBusBuffer (buffer, true, 1);
auto alphaCopy = alpha->get(); // [6]
auto thresholdCopy = threshold->get();
for (auto j = 0; j < buffer.getNumSamples(); ++j) // [7]
{
auto mixedSamples = 0.0f;
for (auto i = 0; i < sideChainInput.getNumChannels(); ++i) // [8]
mixedSamples += sideChainInput.getReadPointer (i)[j];
mixedSamples /= static_cast<float> (sideChainInput.getNumChannels());
lowPassCoeff = (alphaCopy * lowPassCoeff) + ((1.0f - alphaCopy) * mixedSamples); // [9]
if (lowPassCoeff >= thresholdCopy) // [10]
sampleCountDown = (int) getSampleRate();
// a very inefficient way of doing this
for (auto i = 0; i < mainInputOutput.getNumChannels(); ++i) // [11]
*mainInputOutput.getWritePointer (i, j) = sampleCountDown > 0 ? *mainInputOutput.getReadPointer (i, j)
: 0.0f;
if (sampleCountDown > 0) // [12]
--sampleCountDown;
}
}
Note: the sidechain smoothing in step [9] implements the one-pole low-pass filter y[i] = ((1 - alpha) * sidechain[i]) + (alpha * y[i - 1]). The implementation shown here is not how you would typically program a noise gate; there are much more efficient and better algorithms out there.
Download the demo project for this section here: PIP | ZIP. Unzip the project and open the first header file in the Projucer.
Make sure to enable the "Plugin MIDI Input" option in the "Plugin Characteristics" field of the project settings in the Projucer.
The multi-out synth is a software instrument plugin that plays up to five synthesiser voices based on an audio file sample and can route its signal to up to 16 stereo outputs.

In the MultiOutSynth class, we have defined several private member variables to implement our multi-out synth behaviour as shown below:
//==============================================================================
juce::AudioFormatManager formatManager;
juce::OwnedArray<juce::Synthesiser> synth;
juce::SynthesiserSound::Ptr sound;
//==============================================================================
JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (MultiOutSynth)
};
Among these we have an AudioFormatManager in order to register audio file formats to read our sample sound. We also have an array of Synthesiser objects that holds one synth per channel and a smart pointer to the sample sound we use in the tutorial.
We also declare some useful constants as an enum for the maximum number of midi channels and the maximum number of synth voices:
enum {
maxMidiChannel = 16,
maxNumberOfVoices = 5
};
In the class constructor, we initialise the plugin with 16 stereo output buses but no input bus [1] as we are creating a software instrument plugin. We also register basic audio file formats on the AudioFormatManager object in order to read the ".ogg" sample file [2] as shown here:
MultiOutSynth()
: AudioProcessor (BusesProperties()
.withOutput ("Output #1", juce::AudioChannelSet::stereo(), true)
.withOutput ("Output #2", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #3", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #4", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #5", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #6", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #7", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #8", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #9", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #10", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #11", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #12", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #13", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #14", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #15", juce::AudioChannelSet::stereo(), false)
.withOutput ("Output #16", juce::AudioChannelSet::stereo(), false)) // [1]
{
// initialize other stuff (not related to buses)
formatManager.registerBasicFormats(); // [2]
for (auto midiChannel = 0; midiChannel < maxMidiChannel; ++midiChannel) // [3]
{
synth.add (new juce::Synthesiser());
for (auto i = 0; i < maxNumberOfVoices; ++i)
synth[midiChannel]->addVoice (new juce::SamplerVoice()); // [4]
}
loadNewSample (juce::MemoryBlock (singing_ogg, singing_oggSize)); // [5]
}
For each midi/output channel, we instantiate a new Synthesiser object, add it to the array [3] and create 5 SamplerVoice objects per synth [4]. We also load the sample file as binary data [5] using the loadNewSample() private function defined hereafter:
void loadNewSample (const juce::MemoryBlock& sampleData)
{
auto soundBuffer = std::make_unique<juce::MemoryInputStream> (sampleData, false); // [6]
std::unique_ptr<juce::AudioFormatReader> formatReader (formatManager.findFormatForFileExtension ("ogg")->createReaderFor (soundBuffer.release(), true));
juce::BigInteger midiNotes;
midiNotes.setRange (0, 126, true);
juce::SynthesiserSound::Ptr newSound = new juce::SamplerSound ("Voice", *formatReader, midiNotes, 0x40, 0.0, 0.0, 10.0); // [7]
for (auto channel = 0; channel < maxMidiChannel; ++channel) // [8]
synth[channel]->removeSound (0);
sound = newSound; // [9]
for (auto channel = 0; channel < maxMidiChannel; ++channel) // [10]
synth[channel]->addSound (sound);
}
To make sure that no buses are added or removed beyond our requirements, we override two functions from the AudioProcessor class as follows:
bool canAddBus (bool isInput) const override { return (!isInput && getBusCount (false) < maxMidiChannel); }
bool canRemoveBus (bool isInput) const override { return (!isInput && getBusCount (false) > 1); }
This prevents input buses from being added or removed and output buses from being added beyond 16 channels or removed completely.
In the prepareToPlay() function, we prepare for subsequent processing by setting the sample rate for every Synthesiser object in the synth array by calling the setCurrentPlaybackSampleRate() function:
void prepareToPlay (double newSampleRate, int samplesPerBlock) override
{
juce::ignoreUnused (samplesPerBlock);
for (auto midiChannel = 0; midiChannel < maxMidiChannel; ++midiChannel)
synth[midiChannel]->setCurrentPlaybackSampleRate (newSampleRate);
}
Next, we perform the actual processing in the processBlock() function as follows:
void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer& midiBuffer) override
{
auto busCount = getBusCount (false); // [11]
for (auto busNr = 0; busNr < busCount; ++busNr) // [12]
{
auto midiChannelBuffer = filterMidiMessagesForChannel (midiBuffer, busNr + 1);
auto audioBusBuffer = getBusBuffer (buffer, false, busNr);
synth[busNr]->renderNextBlock (audioBusBuffer, midiChannelBuffer, 0, audioBusBuffer.getNumSamples()); // [13]
}
}
We then call the renderNextBlock() function directly on the corresponding Synthesiser object to generate the sound, supplying the correct audio bus buffer and midi channel buffer [13]. The helper function that filters midi messages by channel is implemented as described below:
static juce::MidiBuffer filterMidiMessagesForChannel (const juce::MidiBuffer& input, int channel)
{
juce::MidiBuffer output;
for (auto metadata : input) // [14]
{
auto message = metadata.getMessage();
if (message.getChannel() == channel)
output.addEvent (message, metadata.samplePosition);
}
return output; // [15]
}
Download the demo project for this section here: PIP | ZIP. Unzip the project and open the first header file in the Projucer.
The surround utility is a plugin that monitors the incoming signal on individual channels, including surround configurations, and allows you to ping the channel of your choice with a sine wave.

In the SurroundProcessor class, we have defined several private member variables to implement our surround behaviour as shown below:
juce::Array<int> channelActive;
juce::Array<float> alphaCoeffs;
int channelClicked;
int sampleOffset;
//==============================================================================
JUCE_DECLARE_NON_COPYABLE_WITH_LEAK_DETECTOR (SurroundProcessor)
};
Among these we have an array to keep track of how many samples each channel remains active for and an array to keep track of the alpha coefficients for each channel.
In the class constructor, we initialise the plugin with a stereo input bus and a stereo output bus by default, but the configuration will change according to the bus layout currently used by the host:
SurroundProcessor()
: AudioProcessor (BusesProperties().withInput ("Input", juce::AudioChannelSet::stereo()).withOutput ("Output", juce::AudioChannelSet::stereo()))
{
}
In the isBusesLayoutSupported() function, we ensure that the input/output channels are discrete channels [1], that the number of input channels is identical to the number of output channels [2] and that the input buses are enabled [3] as shown below:
bool isBusesLayoutSupported (const BusesLayout& layouts) const override
{
return ((!layouts.getMainInputChannelSet().isDiscreteLayout()) // [1]
&& (!layouts.getMainOutputChannelSet().isDiscreteLayout())
&& (layouts.getMainInputChannelSet() == layouts.getMainOutputChannelSet()) // [2]
&& (!layouts.getMainInputChannelSet().isDisabled())); // [3]
}
In the prepareToPlay() function, we initialise some variables to prepare for subsequent processing as follows:
void prepareToPlay (double sampleRate, int samplesPerBlock) override
{
channelClicked = 0; // [4]
sampleOffset = static_cast<int> (std::ceil (sampleRate)); // [5]
auto numChannels = getChannelCountOfBus (true, 0); // [6]
channelActive.resize (numChannels);
alphaCoeffs.resize (numChannels);
reset(); // [7]
triggerAsyncUpdate(); // [8]
juce::ignoreUnused (samplesPerBlock);
}
The reset() function is called in several places to clear the active channels array, as defined later. It is also called in the releaseResources() function after the block processing finishes:
void releaseResources() override { reset(); }
The reset() function is implemented by setting every channel value to 0 as follows:
void reset() override
{
for (auto& channel : channelActive)
channel = 0;
}
As for the asynchronous update of the GUI, we handle the callback by calling the updateGUI() function on the AudioProcessorEditor:
void handleAsyncUpdate() override
{
if (auto* editor = getActiveEditor())
if (auto* surroundEditor = dynamic_cast<SurroundEditor*> (editor))
surroundEditor->updateGUI();
}
Since the AudioProcessor inherits from the ChannelClickListener class defined in the SurroundEditor class, we have to override its virtual functions. The channelButtonClicked() callback function is triggered when the user clicks on a channel button. It provides the channel index that was pressed and resets the sample offset variable like so:
void channelButtonClicked (int channelIndex) override
{
channelClicked = channelIndex;
sampleOffset = 0;
}
The isChannelActive() helper function returns whether the specified channel is active by checking whether the active channel array still has samples to process:
bool isChannelActive (int channelIndex) override
{
return channelActive[channelIndex] > 0;
}
Next, we perform the actual processing in the processBlock() function as follows:
void processBlock (juce::AudioBuffer<float>& buffer, juce::MidiBuffer&) override
{
for (auto ch = 0; ch < buffer.getNumChannels(); ++ch) // [9]
{
auto& channelTime = channelActive.getReference (ch);
auto& alpha = alphaCoeffs.getReference (ch);
for (auto j = 0; j < buffer.getNumSamples(); ++j) // [10]
{
auto sample = buffer.getReadPointer (ch)[j];
alpha = (0.8f * alpha) + (0.2f * sample);
if (std::abs (alpha) >= 0.1f) // [11]
channelTime = static_cast<int> (getSampleRate() / 2.0);
}
channelTime = juce::jmax (0, channelTime - buffer.getNumSamples()); // [12]
}
Note: the smoothing in step [11] implements the recursion alpha[i] = ((1 - x) * sample[i]) + (x * alpha[i - 1]), where x = 0.8 in this case.
auto fillSamples = juce::jmin (static_cast<int> (std::ceil (getSampleRate())) - sampleOffset,
buffer.getNumSamples()); // [13]
if (juce::isPositiveAndBelow (channelClicked, buffer.getNumChannels())) // [14]
{
auto* channelBuffer = buffer.getWritePointer (channelClicked);
auto freq = (float) (440.0 / getSampleRate());
for (auto i = 0; i < fillSamples; ++i) // [15]
channelBuffer[i] += std::sin (2.0f * juce::MathConstants<float>::pi * freq * static_cast<float> (sampleOffset++));
}
}
In this tutorial, we have examined some audio/midi plugin examples. In particular, we have: