
Prioritize evaluations with finished grading for review #1668

Merged (8 commits) on Apr 18, 2022

Conversation

PonderKoKo (Collaborator)

Closes #1663

@niklasmohrin (Member)

As discussed, a small test would be nice :)

@richardebeling (Member) left a comment

Thanks for contributing!

We already order the evaluations by vote_end_date and num_unreviewed_textanswers in find_unreviewed_evaluations. For consistency, we should do all sorting in the same place, and I think inside that method is a good spot. I'd suggest removing the order_by call on the queryset and calling sorted once in Python (rough sketch below).

While you're editing that method: I think the final .all() call is a no-op; you could remove that if you want.

Edit: If you feel the method name could be improved, I'd agree 100%, so go for it if you want :)
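
Roughly something like this - just a sketch to show the shape, with evaluations_queryset standing in for the queryset that the method already builds:

    # sketch only: drop the separate order_by and sort once in Python
    return sorted(
        evaluations_queryset,  # stands in for the queryset built inside find_unreviewed_evaluations
        key=lambda e: (-e.grading_process_is_finished, e.vote_end_date, -e.num_unreviewed_textanswers),
    )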

@niklasmohrin (Member)

You can find the span elements that you want to compare with page.html.select("span[data-next-evaluation-index]") (or a similar selector)
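
For example, something along these lines (the exact expected values depend on how you set up the test):

    # sketch: collect the rendered evaluation indices and compare them against the expected order
    spans = page.html.select("span[data-next-evaluation-index]")
    indices = [span["data-next-evaluation-index"] for span in spans]
    self.assertEqual(indices, expected_indices)  # expected_indices is whatever your setup implies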

@janno42 (Member) commented Feb 7, 2022

@PonderKoKo do you want to continue working on this or should we take over?

@PonderKoKo (Collaborator, Author)

Hey @janno42, sorry, but I've been feeling pretty stressed over the last couple of weeks. I'd like to try to finish it up, but I can't really say how soon that will be. If it's fine for this to take a little longer, I'll keep going, but feel free to take over otherwise.

@niklasmohrin (Member)

@PonderKoKo that is totally fine with us, we just wanted to check in on you. Our next update is due in two weeks or so at the earliest; it would of course be great if this could be closed by then - not a hard requirement, though.

Please just make sure to tell us if you want to stop working on this issue at some point :)

@janno42 (Member) commented Mar 25, 2022

Hey @PonderKoKo, how are you doing? We'd like to move forward with this issue. Don't worry if you don't have the time or interest to finish it yourself - we can help you or take it over, just let us know.

@PonderKoKo (Collaborator, Author)

Hey, I couldn't figure out why the following happens in the test I'm writing:
The evaluations I created didn't show up in the suggested unreviewed evaluations, which turned out to be caused by the
.exclude(vote_end_date__gte=exclude_date) in find_unreviewed_evaluations. So I thought I would set vote_end_date, but whenever I set it to a value before the current date, I get a 403 Forbidden when requesting the page. I don't understand why that would be the case, and I can't come up with a way to fix it, because the date needs to be earlier to get past the .exclude.
Here's what I have so far for the test:

    def test_suggested_evaluation_ordering(self):
        semester = baker.make(Semester)
        evaluations = [baker.make(
            Evaluation,
            course__semester=semester,
            participants=[self.student1, self.student2],
            voters=[self.student1],
            state=Evaluation.State.IN_EVALUATION,
            vote_start_datetime=datetime.datetime.now() - datetime.timedelta(days=42),
            vote_end_date=datetime.date.today() - datetime.timedelta(days=2),
        ) for _ in range(3)]
        for evaluation in evaluations:
            evaluation.general_contribution.questionnaires.set([baker.make(Questionnaire, type=Questionnaire.Type.TOP)])
            questionnaire = baker.make(Questionnaire)
            question = baker.make(Question, questionnaire=questionnaire, type=Question.TEXT)
            contribution = baker.make(
                Contribution,
                evaluation=evaluation,
                contributor=baker.make(UserProfile),
                questionnaires=[questionnaire],
            )
            baker.make(TextAnswer, contribution=contribution, question=question, answer=self.answer)
            let_user_vote_for_evaluation(self.app, self.student2, evaluation)

        url = f"/staff/semester/{semester.pk}/evaluation/{evaluations[2].pk}/textanswers"

        with run_in_staff_mode(self):
            # Evaluation 0 should end up further up the list
            evaluations[0].num_unreviewed_textanswers = 15
            # evaluations[0].save()
            page = self.app.get(url, user=self.manager)
            print(page.html.select("span[data-next-evaluation-index]"))

And

def find_unreviewed_evaluations(semester, excluded):
    # as evaluations are open for an offset of hours after vote_end_datetime, the evaluations ending yesterday are also excluded during offset
    exclude_date = date.today()
    if datetime.now().hour < settings.EVALUATION_END_OFFSET_HOURS:
        exclude_date -= timedelta(days=1)
    # Evaluations where the grading process is finished should be shown first, need to be sorted in Python
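    # sort key: finished grading first (the negated boolean sorts True before False),
    # then earliest vote_end_date, then most unreviewed text answers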
    return sorted((
        semester.evaluations
            .exclude(pk__in=excluded)
            .exclude(state=Evaluation.State.PUBLISHED)
            .exclude(vote_end_date__gte=exclude_date)
            .exclude(can_publish_text_results=False)
            .filter(contributions__textanswer_set__state=TextAnswer.State.NOT_REVIEWED)
            .annotate(num_unreviewed_textanswers=Count("contributions__textanswer_set"))
    ), key=lambda e: (-e.grading_process_is_finished, e.vote_end_date, -e.num_unreviewed_textanswers))

@richardebeling (Member)

The 403 Forbidden was caused by let_user_vote_for_evaluation, which goes through the actual vote page, and that page returns an error if the evaluation has already ended. This could be fixed either by calling let_user_vote_for_evaluation first and only then setting the vote_end_date to some point in the past, or by not using let_user_vote_for_evaluation at all: you can simply create the evaluation with voters=[student1, student2], i.e. already in a state where both students "voted" for it.

Another issue then is that can_publish_text_results is not set for the evaluations. This would usually be set when the second voter submits their answers. Here, we can just set can_publish_text_results=True in the baker.make call.
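
Sketched out (untested here, just adapting your code above and taking the second option without let_user_vote_for_evaluation), the setup could look like this:

    evaluations = [baker.make(
        Evaluation,
        course__semester=semester,
        participants=[self.student1, self.student2],
        voters=[self.student1, self.student2],  # both students already count as having voted
        state=Evaluation.State.IN_EVALUATION,
        can_publish_text_results=True,  # normally set when the second voter submits their answers
        vote_start_datetime=datetime.datetime.now() - datetime.timedelta(days=42),
        vote_end_date=datetime.date.today() - datetime.timedelta(days=2),
    ) for _ in range(3)]
    # the let_user_vote_for_evaluation call is then no longer needed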

Then, everything works, and the test gets to the print statement. You can find a working version here: richardebeling@a992679

In case you didn't know: you don't have to wait for the whole test suite if you use --keepdb to keep the test database between runs and -k to select just your test. Debug mode might give more meaningful error messages.

./manage.py test --keepdb --debug-mode -k test_suggested_evaluation_ordering

Does this solve your problem?

@PonderKoKo (Collaborator, Author)

I hope I did this correctly now.

evap/staff/views.py (outdated review comment, resolved)
@richardebeling (Member) left a comment

Code looks good to me. I think we can shrink the test a little and then I'm fine with merging :)

evap/staff/tests/test_views.py (outdated review comment, resolved)
Co-authored-by: Richard Ebeling <dev@richardebeling.de>
@richardebeling (Member) left a comment

Thanks! :)

@janno42 (Member) left a comment

✔️ Meets requirements
✔️ UI functionality checked

@richardebeling richardebeling merged commit 080cb34 into e-valuation:main Apr 18, 2022