VST3 and MIDI CC pitfall

Hi Arne!

What I personally think is that MIDI provides two things which are quite valuable:

  1. an ontology of semantic parameter meanings
  2. automatically-linked output parameters

We don’t inherently need MIDI for these, but we are currently falling back to MIDI because there isn’t an alternative.

Semantic ontology

Sure, it’s hacky and defined by convention, and most MIDI controllers lack consistent and widely-respected definitions.

But when supported, things like pitch-bend or sustain-pedal are universal across synths, in terms of their semantic meanings. DAWs and input devices know to make these available, sometimes linked by default to a particular interface (e.g. foot-pedal, or a wheel which snaps back to a neutral position).

VST3 taps into this by using IMidiMapping - but I think that we should acknowledge what this is: VST3 using MIDI as an ontology for parameter meanings, enabling implicit zero-configuration setup for inputs, and hints for interfaces.
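
To make that concrete, here's a minimal sketch of the kind of mapping I mean. `MyController`, `kPitchBendParam` and `kSustainParam` are placeholder names; the class definition, parameter registration and the usual interface boilerplate are omitted:

```cpp
// Minimal sketch of the input-side mapping: an edit controller exposing
// MIDI pitch-bend and the sustain pedal as ordinary VST3 parameters via
// IMidiMapping. The parameters themselves are assumed to be registered
// elsewhere (e.g. in initialize()).
#include "pluginterfaces/vst/ivstmidicontrollers.h"

using namespace Steinberg;
using namespace Steinberg::Vst;

enum MyParams : ParamID { kPitchBendParam = 100, kSustainParam = 101 }; // placeholders

tresult PLUGIN_API MyController::getMidiControllerAssignment (
    int32 busIndex, int16 channel, CtrlNumber midiControllerNumber, ParamID& id)
{
    // Only map controllers arriving on the first event bus; channel ignored here.
    if (busIndex != 0)
        return kResultFalse;

    switch (midiControllerNumber)
    {
        case kPitchBend:        id = kPitchBendParam; return kResultTrue; // pitch wheel
        case kCtrlSustainOnOff: id = kSustainParam;   return kResultTrue; // CC 64
        default:                return kResultFalse;
    }
}
```

The host then delivers those MIDI events to us as plain parameter changes - which is exactly the "MIDI as ontology" behaviour I mean.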

Automatically linked outputs

The other thing that MIDI lets us do (which I’m currently struggling with in VST3) is semantically describe our outputs, so that they are automatically linked to the corresponding inputs of any following effects.

Yes, any decent DAW lets us link parameters between effects, but we have to do it explicitly. MIDI lets us implicitly do some of this routing, using controller numbers (or other MIDI events like pitch-bend or aftertouch).

For example, let’s say I want to write an “explicit vibrato” plugin: for a monophonic instrument, it adds a particular vibrato pattern to each note using pitch-bend. This should be able to work transparently with any synth - and it really should be possible to have that output routed by default to the synth input parameter tagged as “pitch-bend”. Without that, we’re forced to either output legacy MIDI messages, or ask the user to wire us up appropriately.

So, those are the gaps MIDI is currently filling for me.


One option: tag outputs in the same way as inputs

Point (1) above is already covered, because we use IMidiMapping to semantically tag our inputs.

I think the simplest solution would be: an equivalent of IMidiMapping::getMidiControllerAssignment(), but for output parameters.

That is: we continue to use MIDI controller numbers as the semantic ontology, but now tag outputs as well as inputs. DAWs would use these semantic tags to automatically link one plugin’s outputs to the next plugin’s inputs (by default - the user can always un-link them), and they could do it with full floating-point resolution and sample-accurate timing.
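
Something shaped roughly like this - purely hypothetical, just to illustrate what an output-side counterpart of IMidiMapping might look like; nothing like it exists in the SDK today:

```cpp
// Hypothetical sketch: the output-side mirror of IMidiMapping. The plug-in
// declares which of its output parameters carries a given MIDI-controller
// meaning, and the host can auto-route it to the matching input of the
// next plug-in in the chain.
#include "pluginterfaces/base/funknown.h"
#include "pluginterfaces/vst/ivstmidicontrollers.h"

namespace Steinberg {
namespace Vst {

class IMidiOutputMapping : public FUnknown
{
public:
    /** Which output parameter (if any) represents this MIDI controller
        number on the given bus/channel? Same shape as
        IMidiMapping::getMidiControllerAssignment(), but for outputs. */
    virtual tresult PLUGIN_API getMidiControllerOutputAssignment (
        int32 busIndex, int16 channel,
        CtrlNumber midiControllerNumber, ParamID& id /*out*/) = 0;

    static const FUID iid;
};

} // namespace Vst
} // namespace Steinberg
```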

For now, I’m looking at LegacyMIDICCOutEvent, but it does indeed feel like a legacy solution.
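
For reference, this is roughly how I'd emit pitch-bend from the processor with that event today (a sketch; `bend14` is assumed to be a 14-bit pitch-bend value already computed by the vibrato logic, and `offset` the sample offset within the current block):

```cpp
// Sketch: emitting pitch-bend as a LegacyMIDICCOutEvent from process(),
// so a downstream synth receives it as ordinary MIDI.
#include "pluginterfaces/vst/ivstevents.h"
#include "pluginterfaces/vst/ivstmidicontrollers.h"

using namespace Steinberg;
using namespace Steinberg::Vst;

static void sendPitchBend (IEventList* outputEvents, int32 offset, int32 bend14)
{
    if (!outputEvents)
        return;

    Event e = {};
    e.busIndex = 0;
    e.sampleOffset = offset;
    e.type = Event::kLegacyMIDICCOutEvent;
    e.midiCCOut.controlNumber = kPitchBend; // pitch-bend treated as a "controller"
    e.midiCCOut.channel = 0;
    // The 14-bit bend value is split into two 7-bit halves (my reading of
    // the LegacyMIDICCOutEvent docs: value = LSB, value2 = MSB).
    e.midiCCOut.value  = static_cast<int8> (bend14 & 0x7F);
    e.midiCCOut.value2 = static_cast<int8> ((bend14 >> 7) & 0x7F);
    outputEvents->addEvent (e);
}
```

It works, but it throws away the floating-point resolution and the semantic-tag routing I described above - which is why it feels like a step backwards.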