
Allow NAM_SAMPLE_FLOAT to switch model input to float instead of double #48

Merged · 2 commits into sdatkinson:main · Jun 23, 2023

Conversation

mikeoliphant (Contributor)

This PR adds a NAM_SAMPLE #define that is used instead of "double" for the input samples in model processing.

If NAM_SAMPLE_FLOAT is defined, NAM_SAMPLE becomes "float". Otherwise it remains "double", so there is no change in existing behavior.

This allows processing loops that work in "float" to avoid converting their buffers to "double".
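For readers skimming the diff, a minimal sketch of the mechanism described above (the exact header and placement in the NAM sources may differ):

```cpp
// Sketch only; the actual define lives in the NAM headers and may be arranged differently.
#ifdef NAM_SAMPLE_FLOAT
  #define NAM_SAMPLE float
#else
  #define NAM_SAMPLE double // default: existing behavior is unchanged
#endif

// Model entry points then take NAM_SAMPLE buffers, e.g. (abbreviated from the diff below):
// void DSP::process(NAM_SAMPLE** inputs, NAM_SAMPLE** outputs, ...);
```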

daleonov (Contributor) commented Jun 9, 2023

Good stuff. JUCE, for instance, uses float buffers, so keeping an extra double-precision buffer as an adapter for NAM is a bit annoying.
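To make the pain point concrete, here is a hypothetical per-block adapter (not the actual plugin code; the function and variable names are illustrative, and the model call is elided) that a float-based host needs while NAM_SAMPLE is double. With NAM_SAMPLE_FLOAT defined, the host's float buffers can be handed to the model directly and this copy disappears:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical adapter; names are illustrative, not the real plugin code.
void processViaDoubleAdapter(const float* in, float* out, int numFrames)
{
  std::vector<double> scratchIn(in, in + numFrames); // float -> double copy in
  std::vector<double> scratchOut(numFrames, 0.0);
  // ... run the NAM model on scratchIn / scratchOut here ...
  std::copy(scratchOut.begin(), scratchOut.end(), out); // double -> float copy out
}
```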

sdatkinson (Owner) left a comment:


Approving since I've called it a nit, but I'd appreciate hearing back from you about having gain parameters be of NAM_SAMPLE type. It feels odd, but I know I haven't seen enough DSP code to feel confident about conventions in the space.

NAM/convnet.cpp Outdated
@@ -99,7 +99,7 @@ convnet::ConvNet::ConvNet(const int channels, const std::vector<int>& dilations,
{
}

-convnet::ConvNet::ConvNet(const double loudness, const int channels, const std::vector<int>& dilations,
+convnet::ConvNet::ConvNet(const NAM_SAMPLE loudness, const int channels, const std::vector<int>& dilations,
sdatkinson (Owner):

Nit: I don't like loudness being of "sample" type--it's a parameter of the model.

But I get that it mainly serves to do arithmetic with samples, so I can live with it.

NAM/dsp.cpp Outdated
-void DSP::process(double** inputs, double** outputs, const int num_channels, const int num_frames,
-                  const double input_gain, const double output_gain,
+void DSP::process(NAM_SAMPLE** inputs, NAM_SAMPLE** outputs, const int num_channels, const int num_frames,
+                  const NAM_SAMPLE input_gain, const NAM_SAMPLE output_gain,
sdatkinson (Owner):

Yeah, it looks weird having all of these parameters being called "samples". Can you point me to somewhere else that follows this convention? It just doesn't sound right.

mikeoliphant (Contributor, Author):

I wasn't sure either, and I'm fine either way. "loudness" is an internal parameter anyway, and "input_gain" and "output_gain" should probably be removed, since the plugin just passes "1.0" and handles gain itself (as does my LV2 plugin).

daleonov (Contributor):

By the way, what about Eigen::MatrixXf and Eigen::VectorXf? Shouldn't they match buffer precision instead of always being float?

mikeoliphant (Contributor, Author):

> By the way, what about Eigen::MatrixXf and Eigen::VectorXf? Shouldn't they match buffer precision instead of always being float?

No - the models operate with floats regardless of the input sample precision.
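In other words, the network's internal Eigen state stays single-precision regardless of NAM_SAMPLE, and input samples are converted at the model boundary. A hypothetical illustration (not the library's actual code; it assumes NAM_SAMPLE is defined as sketched earlier):

```cpp
#include <Eigen/Dense>

// Hypothetical boundary conversion; names are illustrative.
Eigen::VectorXf toModelInput(const NAM_SAMPLE* input, int num_frames)
{
  Eigen::VectorXf frame(num_frames);
  for (int i = 0; i < num_frames; i++)
    frame(i) = (float)input[i]; // internals remain float whether NAM_SAMPLE is float or double
  return frame;
}
```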

mikeoliphant (Contributor, Author):

Ok, I've reverted the parameters to double.

sdatkinson (Owner) left a comment:


LGTM! 👍🏻

@sdatkinson sdatkinson merged commit e14c50b into sdatkinson:main Jun 23, 2023
@mikeoliphant mikeoliphant deleted the nam_sample branch July 28, 2023 16:11