My own limited understanding is that with transducers like speakers/subs we can reverse the phase, which alters the timing of one unit relative to another. So as the cone/piston of one unit is moving out/up, the other unit's is moving in/down. This interacts with the soundwaves being generated in the air, or in our case the vibrations travelling through the materials.
By reversing the phase, IIRC, you can then get cancellation of the generated soundwaves (nodes/antinodes) when the same frequencies are being generated on both units (that are out of phase) at the same time; one may cancel out the other. I too am learning all the time, and this is one subject I do want to cover in my own build, based on my own findings from tests. Yet I need and want to do more exploration to learn from the real-world usage scenario we are applying here, not just from theory.
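As a rough illustration of the basic idea only (idealised tones, nothing specific to any particular transducer or effect), a few lines of Python show how two identical signals that are 180° out of phase sum to almost nothing:

```python
import numpy as np

# Two identical 40 Hz tones, the second phase-reversed (180 degrees / pi radians).
# Assumes ideal conditions: same frequency, same level, perfect alignment.
sample_rate = 48000
t = np.arange(0, 0.1, 1.0 / sample_rate)          # 100 ms of samples

unit_a = np.sin(2 * np.pi * 40.0 * t)             # first unit
unit_b = np.sin(2 * np.pi * 40.0 * t + np.pi)     # second unit, phase reversed

combined = unit_a + unit_b
print("peak of one unit alone:", np.max(np.abs(unit_a)))    # ~1.0
print("peak of both combined: ", np.max(np.abs(combined)))  # ~0.0, cancelled
```

In the real world the cancellation is never that perfect, of course, as levels, timing and the materials in between all vary.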
Trying to accurately monitor the vibrations in the actual materials requires specialist hardware that is expensive. I was even quoted £900 for a single sensor that works with an iPad and is used by professionals as an industrial-level calibration tool. I'm not keen to spend that, and I'm not qualified to use such a tool properly anyway.
Phase Cancellation
This is something I will, however, be able to monitor, at least from the (source) "effects creation" perspective.
What's happening on the output when all the combined frequencies are applied to the channel?
With different effects and timings happening, I will be able to see this using the audio hardware/software I have invested in, mainly to better understand if/when it happens when applying multiple layers or multiple effects on the same output. It will let me see, from the realtime output of the generated effects, which frequencies it is happening with. I also believe this is (possibly) one of the factors behind "less is more" with effects: some effects being generated cancel out others, or the transducer struggles to generate them all. My assumption is that with such tools I can discover how to avoid having effects conflict with each other, to help improve the felt sensations. We can learn which effects not to group on the same channels and use this data to improve tactile profiles.
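To make that concrete, here is a hypothetical software-only sketch (the effect names, frequencies and phases are made up, not taken from any real profile) of summing a few tones on one channel and checking the spectrum of the result to see what survives and what cancels:

```python
import numpy as np

# Hypothetical example: three "effects" sharing one channel.
# Two of them share 35 Hz but one is phase-reversed, so that component cancels.
sample_rate = 48000
t = np.arange(0, 1.0, 1.0 / sample_rate)

road_detail   = 0.5 * np.sin(2 * np.pi * 35.0 * t)           # effect A at 35 Hz
engine_rumble = 0.5 * np.sin(2 * np.pi * 35.0 * t + np.pi)   # effect B, same 35 Hz, out of phase
gear_shift    = 0.8 * np.sin(2 * np.pi * 60.0 * t)           # effect C at 60 Hz

channel = road_detail + engine_rumble + gear_shift

# Inspect the channel's spectrum: the 35 Hz content has cancelled, 60 Hz remains.
spectrum = np.abs(np.fft.rfft(channel)) / len(t) * 2
freqs = np.fft.rfftfreq(len(t), 1.0 / sample_rate)
for f in (35.0, 60.0):
    idx = np.argmin(np.abs(freqs - f))
    print(f"{f:5.1f} Hz amplitude: {spectrum[idx]:.3f}")
```

A realtime spectrum analyser on the output channel does essentially the same job, just on the live signal rather than on generated test tones.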
Personally, I don't believe trying to use tactile as a way to replicate controlled motion would be in any way accurate. I have seen rigs on springs and other solutions, but hey, I am willing to be proved wrong by others sharing what they have learned or done. Taking "suspension" as an example, my views are that.....
We are representing the positional activity of suspension "bump vibration", from low- to high-energy responses, using different frequencies and dB. We are not taking "suspension travel" and accurately converting it somehow, via frequencies and vibration, into actual motion.
We can, however, with SimHub have effects seemingly move from one channel to another, by using different activity thresholds and delays, giving the sensation of an effect moving from the location of one unit to the others.
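A simplified sketch of that idea is below. This is not how SimHub does it internally; the channel count, delays and gains are made-up values just to show the principle of staggering one effect across units so it feels like it travels:

```python
import numpy as np

# Hypothetical 4-channel layout (e.g. front-left, front-right, rear-left, rear-right).
# The same short burst is sent to each channel with an increasing delay and falling gain,
# so the vibration appears to travel from the front units towards the rear ones.
sample_rate = 48000
burst_len = int(0.05 * sample_rate)                     # 50 ms burst
t = np.arange(burst_len) / sample_rate
burst = np.sin(2 * np.pi * 45.0 * t) * np.hanning(burst_len)

channel_delays_ms = [0, 20, 40, 60]                     # assumed per-unit delays
channel_gains    = [1.0, 0.8, 0.6, 0.4]                 # assumed per-unit levels

total_len = burst_len + int(max(channel_delays_ms) / 1000 * sample_rate)
channels = np.zeros((4, total_len))
for ch, (delay_ms, gain) in enumerate(zip(channel_delays_ms, channel_gains)):
    start = int(delay_ms / 1000 * sample_rate)
    channels[ch, start:start + burst_len] += gain * burst

print(channels.shape)   # (4, samples) ready to send to a multi-channel output
```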