Single Component description / support

Following the discussion in this thread I implemented a quick plugin that follows the SingleComponentEffect pattern and verified that it works fine both in VST3PluginTestHost and as a VST2 plugin (meaning the vst2wrapper code does the “right” thing).

I am trying to figure out whether this is something that is supported by all VST3 hosts, and whether it is specified somewhere in the documentation (I did look but did not find it exactly). From my understanding, the host instantiates the Processor using the Factory. Then, using the getControllerClassId method of the processor, it gets access to the ID of the controller class, which it can then instantiate using the Factory as well. In the case of the SingleComponentEffect class, getControllerClassId returns kNotImplemented, and I assume this is how the host determines that it should check whether the processor also implements the GUI?

Is this how it works? Is it part of the specification on how a host must behave? Is it specified somewhere? Can you point me to it?

I can also see in the comment: “Cubase 4.2 is the first host that supports combined VST 3 Plug-ins”, so are there other VST3 hosts supporting it as well?

Thanks
Yan

First of all, it’s still not recommended to use it. We know that people coming from VST2 like to use it because it allows the same hack-friendly way of coding as VST2, but a clean separation between audio processor and edit controller is the recommended way of writing a VST3 plug-in.
But to answer your question, you can have a look at the SDK source code. The method PlugProvider::setupPlugin implements how a plug-in’s audio processor and edit controller are created. All host examples provided with the SDK support this kind of plug-in (like the validator).
I think all VST3 hosts support single-component plug-ins, apart from Cubase versions before 4.2.
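To make the mechanism concrete, here is a hedged sketch of that logic (not a copy of PlugProvider::setupPlugin; the helper name resolveController is made up): the host asks the component for its controller class ID and, if that fails, queries the component itself for IEditController.

#include "pluginterfaces/base/ipluginbase.h"
#include "pluginterfaces/vst/ivstcomponent.h"
#include "pluginterfaces/vst/ivsteditcontroller.h"

using namespace Steinberg;
using namespace Steinberg::Vst;

// Sketch of a host resolving the edit controller for an already created component.
IEditController* resolveController (IPluginFactory* factory, IComponent* component)
{
	TUID controllerCID;
	IEditController* controller = nullptr;

	if (component->getControllerClassId (controllerCID) == kResultTrue)
	{
		// separate model: instantiate the controller through the factory
		factory->createInstance (controllerCID, IEditController::iid,
		                         reinterpret_cast<void**> (&controller));
	}
	else
	{
		// single-component model: the component implements IEditController itself
		component->queryInterface (IEditController::iid,
		                           reinterpret_cast<void**> (&controller));
	}
	// caller still has to initialize/connect the controller as usual (may be nullptr)
	return controller;
}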

Thank you, I will take a look. I completely agree with you about the separation; it is a lot cleaner. I just think there are scenarios that require sharing a lot of data, which is very expensive in the separated model (it involves messaging, serialization and copying…), and I want to see if there is a better way.

Yan

For what it is worth, the only thing I am interested in with this model is the ability to share (large amounts of) data between the UI and the RT thread without having to use messaging, and I am very well aware of the data race issues involved in this. I am absolutely not interested in writing all the code in a single class.

My ultimate goal is to implement a SingleComponentImpl class in the (Jamba) framework so you don’t have to deal with it: you still write your controller and processor the exact same way, and this SingleComponentImpl simply delegates to the controller and processor implementations. The main difference is that the SingleComponentImpl will be instantiated by the factory and is the one that instantiates the controller and the processor, so it will also be able to create some “SharedState” that is provided to both, something that is not feasible when the factory instantiates the controller and processor separately.

I am still planning to have the default be “no sharing”, but if you want sharing it will be possible while still implementing clean/separate entities. The only addition will be a narrow API to access the shared data.
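As a purely illustrative sketch of the idea (hypothetical names, not the actual Jamba API), the combined class owns the shared state and hands it to the two delegates it creates:

#include <atomic>
#include <memory>

// hypothetical shared data, written by the UI, read by the RT processing
struct SharedState
{
	std::atomic<float> gain{1.0f};
};

// processor and controller are still written as separate, clean entities;
// they just receive the SharedState at construction time
struct MyProcessor
{
	explicit MyProcessor (std::shared_ptr<SharedState> s) : state (std::move (s)) {}
	std::shared_ptr<SharedState> state;
	// process() would read state->gain here
};

struct MyController
{
	explicit MyController (std::shared_ptr<SharedState> s) : state (std::move (s)) {}
	std::shared_ptr<SharedState> state;
	// the UI would write state->gain here
};

// instantiated by the factory; IComponent/IAudioProcessor calls would delegate
// to `processor`, IEditController calls to `controller`
struct SingleComponentImplSketch
{
	std::shared_ptr<SharedState> state = std::make_shared<SharedState> ();
	MyProcessor processor{state};
	MyController controller{state};
};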

Yan

Create a FIFO or socket or shared memory segment in the effect, make it available as a read-only parameter, send the handle of it from your plugin to the editor/s. (Use non-blocking send, and perhaps also let the GUI send occasional “I’m here” messages so the effect can stop trying to send when the GUI isn’t open.)
Still only works on a single machine, but (depending on which specific API you use) may work across processes. It’s also better than just jamming data into a single global buffer, as it still nicely separates multiple instances, and would theoretically let a crash on the GUI end not take down the processing end (again, if the host is appropriately multi-process separated).
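As a rough illustration of the non-blocking part (a sketch only, assuming exactly one real-time writer and one GUI reader; none of this is SDK API), a single-producer/single-consumer ring buffer lets the effect push data and drop it when the queue is full instead of blocking:

#include <array>
#include <atomic>
#include <cstddef>
#include <optional>

// single-producer/single-consumer ring buffer: push from the RT thread,
// pop from the GUI thread, never block either side
template <typename T, std::size_t N>
class SpscFifo
{
public:
	bool push (const T& value) // RT thread
	{
		auto w = write_.load (std::memory_order_relaxed);
		auto next = (w + 1) % N;
		if (next == read_.load (std::memory_order_acquire))
			return false; // full: drop instead of blocking
		buffer_[w] = value;
		write_.store (next, std::memory_order_release);
		return true;
	}

	std::optional<T> pop () // GUI thread
	{
		auto r = read_.load (std::memory_order_relaxed);
		if (r == write_.load (std::memory_order_acquire))
			return std::nullopt; // empty
		T value = buffer_[r];
		read_.store ((r + 1) % N, std::memory_order_release);
		return value;
	}

private:
	std::array<T, N> buffer_{};
	std::atomic<std::size_t> write_{0};
	std::atomic<std::size_t> read_{0};
};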

Looking at the implementation of SingleComponentEffect in the SDK, there is this workaround:

// work around for the name clash of IComponent::setState and IEditController::setState
#define setState setEditorState
#define getState getEditorState
#include "public.sdk/source/vst/vsteditcontroller.h"
#include "pluginterfaces/vst/ivsteditcontroller.h"
#undef setState
#undef getState

I understand why it is happening. The thing I cannot figure out is how exactly getEditorState and setEditorState get called. I can see that AgainSimple implements those methods, but who calls them? A host doesn’t know anything about them, so it is not going to be calling them. Searching the entire VST code, I actually cannot find anything calling those methods either.

Am I missing something?

Yan

As a host you always have a pointer to the IAudioProcessor part and the IEditController part of a plug-in. The host doesn’t care in this case whether the implementation is in one class or two. It just calls audioProcessor->setState() and editController->setState(). The hack is needed so that a plug-in can implement both methods.
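To make the clash concrete, here is a tiny stand-alone sketch (hypothetical interface names, not the SDK headers): when two bases declare a virtual with the same name and signature, a single derived class can only provide one body, and that body fills the slot in both vtables, which is exactly what you don’t want when the two states are different.

#include <cstdio>

struct IComponentLike
{
	virtual void setState (const char* data) = 0; // "processor" state
	virtual ~IComponentLike () = default;
};

struct IEditControllerLike
{
	virtual void setState (const char* data) = 0; // "controller" state
	virtual ~IEditControllerLike () = default;
};

struct Combined : IComponentLike, IEditControllerLike
{
	// only ONE body is possible here; it fills the setState slot in BOTH vtables
	void setState (const char* data) override
	{
		std::printf ("same body for both interfaces: %s\n", data);
	}
};

int main ()
{
	Combined c;
	IComponentLike* comp = &c;
	IEditControllerLike* ctrl = &c;
	comp->setState ("processor stream");  // both calls land in the same body
	ctrl->setState ("controller stream");
}

The SDK’s #define renames the declaration on the controller include path to setEditorState, so the plug-in can write two separate bodies; a host compiled against the unmodified headers still calls setState, and the call goes through the same vtable slot, which is why setEditorState ends up being invoked.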

I guess I just don’t understand how a method called setEditorState ends up being called when setState is invoked, but that must be some magic with function invocation in C/C++…

The “magic” is the vtable. How a vtable works is worth learning for any C++ programmer, just like how a linker works.

Virtual functions are not called directly by name, but through a function pointer: your object has a pointer to a table of function pointers, and each virtual function is assigned an index in that table.
So “someitf->somefunc()” translates to “look up the slot of somefunc() in the vtable, and call through that pointer.”
The only time the “name” of the function matters is when the compiler builds the vtable, and because the compiler lays out the vtable in declaration order, the name of the virtual function ultimately doesn’t matter at the call site, only that it’s “virtual function #18 in declaration order.”
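A rough hand-rolled equivalent of the mechanism (purely illustrative; real compiler-generated vtables are ABI-specific and the names here are made up):

#include <cstdio>

struct Widget; // forward declaration

// slot order mirrors declaration order; the table is shared by all instances
struct WidgetVTable
{
	void (*open) (Widget*);          // slot 0
	void (*setState) (Widget*, int); // slot 1
};

struct Widget
{
	const WidgetVTable* vtbl; // every object carries a pointer to its class's table
	int state = 0;
};

static void widgetOpen (Widget*) { std::printf ("open\n"); }
static void widgetSetState (Widget* w, int s) { w->state = s; std::printf ("state=%d\n", s); }

static const WidgetVTable kWidgetVTable = {widgetOpen, widgetSetState};

int main ()
{
	Widget w{&kWidgetVTable};
	// "w.setState (42)" done by hand: load the vtable pointer, load slot 1,
	// then call indirectly through that pointer; the name widgetSetState
	// never appears at the call site
	w.vtbl->setState (&w, 42);
}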

The specific use of vtables is not guaranteed in the standard, but all compilers use that mechanism for virtual functions (although there may be different layouts, and multiple/virtual inheritance adds gnarls.)
The specific layout of a vtable is not guaranteed in the standard, either, although each ABI generally has a rule to make multiple compilers able to share a vtable within a single compile target (win64, linux, etc.)

Also, yes, this means that calling a virtual function causes three extra cache misses and/or indirect references compared to a direct call: one to load the vtable pointer, one to load the function pointer value in the vtable, and then an indirect jump through that function pointer.
This ends up hurting more than you’d think for things like automation parameter buffers: there’s easily 20+ virtual function calls for resolving even the simplest of automation parameter cases :frowning:
A well-defined struct layout of the parameter name/offset/value would have been so much better. Ideally, ordered by sample offset, rather than parameter id. But, it is what it is …

@jwatte thank you for the very detailed explanation