Pretty good pretty bad measurements #522
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@ Coverage Diff @@
##           master    #522   +/- ##
====================================
  Coverage    98.1%   98.1%
====================================
  Files         160     162    +2
  Lines        3090    3112   +22
  Branches      751     760    +9
====================================
+ Hits         3032    3054   +22
  Misses         37      37
  Partials       21      21
```
Is it OK if I review this by next Thursday? I want to wrap up the WIP RFC we have for Mitiq over the next few days.
Of course, no rush, and thank you again!
Added some quick review comments as I make my way through the 2 papers used for this PR. Will add more comments/suggestions later!
BTW great job with the turnaround for the pretty_bad_measurement
function! I saw that it appeared on the arXiv just last week. :)
```python
if probs is None:
    probs = n * [1 / n]
```
Are you assuming the input ensemble is always normalized? If not, there could be an additional check before lines 53-54, because assuming a uniform distribution by default might not work in the non-normalized scenario where the input probs are not provided.
Hmm. I'm assuming that the probability vector is normalized (if provided). I check to ensure that the vector is a valid probability vector (i.e., it sums to 1). If such a vector is not provided, I decide to use a uniform distribution as a default (which is the section that you've highlighted here).
Maybe I'm missing the broader point here though of what you're trying to convey.
I am talking about the case where the provided input vector ensemble is not normalized, so it might not agree with a uniform probability distribution.
For example, say the function is given a non-normalized input ensemble of 4 vectors but we skip providing an input probability distribution that would also normalize the ensemble: 1/3, 1/6, 1/6, 1/3 is different from 1/4, 1/4, 1/4, 1/4.
I think it might be better to use the square root of the inner product of each vector in the non-normalized ensemble with itself to assign a probability distribution after the highlighted lines. Hopefully, this makes sense?
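One possible reading of this suggestion, sketched as a hypothetical helper (not code from this PR): weight each vector v by its norm sqrt(⟨v, v⟩) and rescale the weights into a probability distribution.

```python
import numpy as np

def probs_from_norms(states):
    # Assign each (possibly non-normalized) vector a weight equal to its
    # norm, sqrt(<v, v>), then rescale so the weights sum to 1.
    norms = [np.sqrt(np.vdot(v, v).real) for v in states]
    total = sum(norms)
    return [w / total for w in norms]
```

For instance, the vectors (2, 0) and (0, 1) would receive probabilities 2/3 and 1/3 rather than the uniform 1/2 each.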
Hmm, but I think there might be a distinction here, right? The probabilities in this case are meant to be "the probability with which the state is selected from the set of states": the ensemble is a collection of tuples (p_i, rho_i), where rho_i is a density operator and p_i is the probability that rho_i is selected. So, in other words, the 1/2 and -1/2 here aren't necessarily relevant to the probability vector object; they matter only insofar as they are used to define the actual state.
Does that make sense?
I haven't sat down to do the actual calculations but wouldn't a non-normalized state alter the output of what we consider a PGM or PBM?
If not, then go ahead and merge this PR!
Actually, no. I'll go ahead and merge.
Co-authored-by: Purva Thakre <66048318+purva-thakre@users.noreply.github.com>
As an added benefit, I think I nipped that pesky docs bug that was causing pseudo-failures in this diff as well, just FYI! (@purva-thakre)
Modulo those comment responses, I believe this is in a good state to go. Good call on adding the issue for docs--I think that's a neat idea! Let me know if you disagree, but if you do agree, feel free to let me know or merge away! 🚀
Closes #147
Description
Adds the ability to generate a set of either "pretty good" or "pretty bad" measurement operators.
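For context, the pretty good measurement for an ensemble {(p_i, rho_i)} is conventionally defined by M_i = rho^{-1/2} (p_i rho_i) rho^{-1/2}, where rho = sum_i p_i rho_i. A minimal NumPy sketch of that standard construction (not the implementation in this PR, which may differ in structure and edge-case handling):

```python
import numpy as np

def pretty_good_measurement(states, probs):
    # states: density matrices rho_i; probs: selection probabilities p_i.
    # Average state of the ensemble: rho = sum_i p_i * rho_i.
    rho = sum(p * s for p, s in zip(probs, states))
    # Inverse square root of rho on its support (a pseudo-inverse, so a
    # singular average state is handled gracefully).
    vals, vecs = np.linalg.eigh(rho)
    inv_sqrt = np.where(vals > 1e-12, vals, np.inf) ** -0.5
    rho_inv_sqrt = (vecs * inv_sqrt) @ vecs.conj().T
    # PGM operators: M_i = rho^{-1/2} (p_i rho_i) rho^{-1/2}.
    return [rho_inv_sqrt @ (p * s) @ rho_inv_sqrt
            for p, s in zip(probs, states)]
```

As a sanity check, for two orthogonal pure states with equal probabilities the PGM reduces to the projective measurement onto those states, and the operators sum to the identity on the support of rho.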