The goal of this issue is to document/keep track of the project of creating a Runtime program for QNN Effective Dimension calculations.
What do we have?
The explanation of the effective dimension algorithm and its relevance can be found in the following research paper, and the code to reproduce the original experiments has been published in this repo. The original implementation is clear and well documented, but it is outdated in several respects and missing certain features needed to become a runtime program. It is based on two main classes:
QuantumNeuralNetwork class - Computes batched forward and backward passes and returns the Fisher information matrix. It uses the Python multiprocessing library to speed up the Monte Carlo estimates needed to compute the Fisher information matrix. Main methods:
forward()
get_gradients()
get_fisher()
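For context, the Monte Carlo estimate behind a get_fisher()-style method can be sketched in plain numpy. This is an illustrative stand-in, not the original implementation: given gradients of the log-likelihood for sampled input/parameter pairs (what the backward pass produces), the Fisher information matrix is the sample average of the outer products of those gradients.

```python
import numpy as np

def empirical_fisher(grads):
    """Monte Carlo estimate of the Fisher information matrix.

    grads: shape (num_samples, num_params), gradients of
    log p(y | x, theta) for sampled (x, y) pairs.
    Returns the (num_params, num_params) average of the outer
    products of the gradient samples.
    """
    grads = np.asarray(grads, dtype=float)
    return np.einsum("si,sj->ij", grads, grads) / grads.shape[0]

# Toy usage with random gradients standing in for a QNN backward pass
rng = np.random.default_rng(seed=0)
grads = rng.normal(size=(200, 3))
fisher = empirical_fisher(grads)  # symmetric 3x3 matrix
```

By construction the estimate is symmetric and positive semi-definite, which the effective dimension computation relies on.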
EffectiveDimension class - Receives a model, a number of parameter sets, and a number of inputs, and computes the normalized Fisher information (f_hat) as well as the effective dimension for a specified dataset size (n). The distributions from which the inputs and parameters are drawn are fixed (a standard normal distribution for data, for example); only the number of inputs/parameters can be changed. Main methods:
get_fhat()
eff_dim()
This implementation works only with statevector simulation, and not with real backends.
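To make the two computations above concrete, here is a minimal numpy sketch of what get_fhat() and eff_dim() compute, following the definitions in the research paper (with gamma = 1 for simplicity). This is a hedged illustration of the math, not the original code: f_hat rescales the Fisher matrices so their average trace equals the parameter dimension d, and the effective dimension is then read off from the log-determinants.

```python
import numpy as np

def get_fhat(fishers):
    """Normalize Fisher matrices so the average trace equals d.

    fishers: shape (num_theta, d, d), one Fisher matrix per sampled
    parameter set. The integral over parameter space in the paper's
    normalization is replaced by a sample mean.
    """
    fishers = np.asarray(fishers, dtype=float)
    d = fishers.shape[-1]
    avg_trace = np.trace(fishers, axis1=-2, axis2=-1).mean()
    return d * fishers / avg_trace

def eff_dim(f_hat, n, gamma=1.0):
    """Effective dimension for dataset size n (gamma = 1 variant)."""
    num_theta, d, _ = f_hat.shape
    kappa = gamma * n / (2 * np.pi * np.log(n))
    # log-determinants via slogdet for numerical stability
    _, logdets = np.linalg.slogdet(np.eye(d) + kappa * f_hat)
    log_avg = np.log(np.mean(np.exp(0.5 * logdets)))
    return 2 * log_avg / np.log(kappa)

# Sanity check: an identity f_hat recovers (approximately) the full
# parameter dimension d for large n
f_hat = np.stack([np.eye(4)] * 10)
ed = eff_dim(f_hat, n=100000)  # close to 4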
What do we want?
We ultimately want to be able to use a runtime program to compute the effective dimension for any given QNN, input set, and parameter set (not necessarily drawn from a standard normal distribution). In other words, we want a runtime program that adapts the pre-existing effective dimension code and adds:
Customizable QNNs (from the qiskit-machine-learning API)
Customizable input data (not only # inputs)
Customizable parameters (not only # parameters)
Access to backends (not only statevector simulator)
Proposed implementation
With this proposed implementation, I try to avoid redundancy and to take advantage of pre-existing qiskit-machine-learning code.
The QuantumNeuralNetwork class used in this project mostly overlaps with qiskit-machine-learning's CircuitQNN class with dense output, except for the get_fisher() method. My proposal is to move the Fisher information matrix calculation into the EffectiveDimension class, so that it encapsulates all the necessary computations and the custom QNN class is no longer needed. This would immediately give us access to real backends (the custom QNN class only works with statevector simulation).
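A rough sketch of what this first step could look like: EffectiveDimension assembles the Fisher matrices itself from any QNN's backward pass, instead of requiring a custom QNN class. Here grad_fn is a hypothetical stand-in for the log-likelihood gradients derived from a qiskit-machine-learning QNN's backward() output (the exact post-processing differs per QNN class, as noted under "Things to keep in mind").

```python
import numpy as np

def fishers_from_qnn(grad_fn, inputs, params):
    """Build Fisher matrices from a generic gradient callable.

    grad_fn(x, theta) -> gradient vector of the log-likelihood
    w.r.t. theta (hypothetical interface for illustration).
    Returns shape (num_param_sets, d, d): one Fisher matrix per
    parameter set, averaged over the input samples.
    """
    fishers = []
    for theta in params:
        grads = np.array([grad_fn(x, theta) for x in inputs])
        fishers.append(np.einsum("si,sj->ij", grads, grads) / len(inputs))
    return np.stack(fishers)

# Toy usage with a classical stand-in gradient function
toy_grad = lambda x, theta: np.cos(theta) * x[0]
fishers = fishers_from_qnn(
    toy_grad,
    inputs=[np.array([1.0]), np.array([0.5])],
    params=[np.zeros(3), np.ones(3)],
)
```

Because the class only needs a gradient callable, any backend that the underlying QNN supports is automatically available, which is the point of this step.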
Once this first step is completed, I propose adding customizable input data and parameters to the EffectiveDimension class. The class can then be introduced as a standalone contribution to qiskit-machine-learning.
Finally, I propose to write a runtime program that uses this EffectiveDimension class.
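As a hypothetical skeleton of that final step, a custom Qiskit Runtime program exposes a main(backend, user_messenger, **kwargs) entry point. The kwargs names below are assumptions for illustration, not a fixed interface; the actual Fisher estimation and eff_dim computation are left as a placeholder where the EffectiveDimension class would run on the given backend.

```python
import numpy as np

def main(backend, user_messenger, **kwargs):
    """Hypothetical runtime entry point for the effective dimension job.

    The signature follows the Qiskit Runtime custom-program
    convention; the input keys below are illustrative assumptions.
    """
    inputs = np.asarray(kwargs["inputs"])    # custom input set
    params = np.asarray(kwargs["params"])    # custom parameter sets
    dataset_sizes = kwargs.get("dataset_sizes", [1000])

    # Publish interim progress so the client can monitor the job.
    user_messenger.publish(
        {"status": "started",
         "num_inputs": len(inputs),
         "num_params": len(params)}
    )

    # Placeholder: here the EffectiveDimension class would run its
    # forward/backward passes on `backend`, assemble f_hat, and
    # compute the effective dimension for each requested n.
    return {"dataset_sizes": dataset_sizes, "effective_dimensions": None}
```

The messenger-based interim results are what distinguish a runtime program from a plain script: the client can stream progress while the job runs on the service.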
Things to keep in mind
I believe the same code can be used for the global and local effective dimension (if I am not mistaken, the only change is the number of parameter sets used), but it might be interesting to explicitly add a LocalEffectiveDimension class. In that case, it would be nice if it also included model training.
By following this implementation, we will not use multiprocessing for the Fisher information matrix estimation. In my opinion, multiprocessing is not a platform-agnostic library and is likely to become troublesome (for example, it currently does not work well with Python 3.10 on macOS). Moreover, it will most likely not be an advantage inside the runtime environment. Still, it might be worth looking into this further to confirm these ideas.
The custom QNN and the CircuitQNN class stack outputs and implement post-processing differently. I believe that this is a minor inconvenience, but it must be taken into account for step 1 (see below).
This implementation assumes the use of CircuitQNN to define the quantum neural networks. Would it be interesting to extend it to ANY qiskit-machine-learning QNN class (e.g. OpflowQNN and its subclasses)?
My proposal is to turn this issue into an epic and create issues for the three steps mentioned:
Create new EffectiveDimension class
Extend EffectiveDimension class functionality (input/param. customization)
Create EffectiveDimension runtime program