Re: Parameters & mapping
Posted: Tue Mar 22, 2016 1:39 pm
Hi,
It's not only a matter of our approach; the VST SDK and the MIDI specification play a role as well. According to the VST SDK, VST parameters are intended primarily for automation, while MIDI learn is rather for controlling parameters from an external MIDI device. In practical terms the two are similar, BUT VST automation has much better resolution and refresh rate, so it's designed for rapid value changes, whereas MIDI learn is built on the good old MIDI specification, with its limited number of CCs, their 7-bit resolution, and of course the rate at which the protocol can send messages (a time-resolution factor). Based on that, we prefer to use VST parameters exclusively for controlling sound parameters, while MIDI mapping also extends over some additional editing functions. Of course both methods can be used interchangeably, since parameter smoothing is applied, but even so VST parameters are more precise in terms of both timing and resolution.
That's why we prefer VST automation for parameters whose rapid changes won't cause any unexpected audible effects, and MIDI CC for covering a wider range of controls.
Apart from that, automating some parameters isn't a good idea at all while the device is generating sound. Things like polyphony should rather be changed while the synth isn't playing, which is another reason we excluded that particular parameter from automation. But we could probably enable it for MIDI learn.
Kind regards,
Sebastian