[WEB-2001] feat: Cache issues on the client (#5327)
* use common getIssues from issue service instead of multiple different services for modules and cycles

* Use SQLite to store issues locally and load issues from it.

* Fix incorrect total count and filtering on assignees.

* enable parallel API calls

* chore: deleted issue list

* - Handle local mutations
- Implement getting the updates
- Use SWR to update/sync data

* Wait for sync to complete in get issues

* Fix build errors

* Fix build issue

* - Sync updates to local-db
- Fallback to server when the local data is loading
- Wait when the updates are being fetched

* Add issues in batches

* Disable skeleton loaders for first 10 issues

* Load issues in bulk

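Loading issues in bulk rather than row-by-row keeps the local cache responsive for large projects. A minimal sketch with `executemany` over a toy schema — the table and columns here are illustrative, not the actual client schema:

```python
import sqlite3

# Illustrative schema; the real client-side table has many more columns.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS issues (id TEXT PRIMARY KEY, name TEXT, priority TEXT)"
)

def add_issues_bulk(conn, issues, batch_size=500):
    """Insert issues in batches so one giant statement doesn't stall the app."""
    rows = [(i["id"], i["name"], i["priority"]) for i in issues]
    for start in range(0, len(rows), batch_size):
        conn.executemany(
            "INSERT OR IGNORE INTO issues (id, name, priority) VALUES (?, ?, ?)",
            rows[start : start + batch_size],
        )
        conn.commit()

add_issues_bulk(
    conn,
    [{"id": str(n), "name": f"Issue {n}", "priority": "low"} for n in range(1200)],
)
```

Committing per batch also bounds how much work is lost if the tab closes mid-sync.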
* working version of SQLite with grouped issues

* Use window queries for group by

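A window query can page grouped results in one statement: partition by the group column and keep only the first N rows per group. A hedged sketch over a hypothetical schema (SQLite needs 3.25+ for window functions, which the WASM builds used in browsers provide):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issues (id INTEGER PRIMARY KEY, priority TEXT, sort_order REAL)")
rows = [(n, p, float(n)) for n, p in enumerate(["urgent", "high", "low"] * 4)]
conn.executemany("INSERT INTO issues VALUES (?, ?, ?)", rows)

# ROW_NUMBER() numbers issues inside each priority group; the outer filter
# keeps the first two per group -- one query instead of one query per group.
grouped = conn.execute(
    """
    SELECT id, priority FROM (
        SELECT id, priority,
               ROW_NUMBER() OVER (PARTITION BY priority ORDER BY sort_order) AS rn
        FROM issues
    ) WHERE rn <= 2
    """
).fetchall()
```

With 4 issues per priority and a per-group limit of 2, this returns 6 rows.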
* - Fix sort by date fields
- Fix the total count

* - Fix grouping by created by
- Fix order by and limit

* fix pagination

* Fix sorting on issue priority

* - Add secondary sort order
- Fix group by priority

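Priorities sort wrongly as plain strings ("high" < "low" alphabetically), so ordering needs an explicit rank. One common fix is a CASE expression; the value list matches Plane's priority set, but the query itself is an illustrative sketch:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issues (id INTEGER PRIMARY KEY, priority TEXT)")
conn.executemany(
    "INSERT INTO issues VALUES (?, ?)",
    [(1, "low"), (2, "urgent"), (3, "none"), (4, "high"), (5, "medium")],
)

# Map each priority to a numeric rank; id acts as a stable secondary sort key.
ORDER_BY_PRIORITY = """
    ORDER BY CASE priority
        WHEN 'urgent' THEN 1 WHEN 'high' THEN 2 WHEN 'medium' THEN 3
        WHEN 'low' THEN 4 ELSE 5 END, id
"""
ordered = [r[1] for r in conn.execute("SELECT id, priority FROM issues " + ORDER_BY_PRIORITY)]
```

The trailing `id` is the secondary sort order mentioned above: it keeps ties deterministic across refetches.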
* chore: added timestamp filter for deleted issues

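The incremental sync described here reduces to two delta queries keyed on a last-sync timestamp: changed issues via `updated_at__gt`, and tombstones from the deleted-issues endpoint. A sketch with stand-in fetch functions (the real client goes through its issue service, not these lambdas):

```python
from datetime import datetime, timezone

def sync_deltas(local_db, fetch_updated, fetch_deleted, last_synced_at):
    """Pull only what changed since last_synced_at, then advance the watermark."""
    # 1. Upsert issues modified after the watermark (server filters on updated_at__gt).
    for issue in fetch_updated(last_synced_at):
        local_db[issue["id"]] = issue
    # 2. Drop issues the server reports as deleted or archived since then.
    for issue_id in fetch_deleted(last_synced_at):
        local_db.pop(issue_id, None)
    return datetime.now(timezone.utc)

# Stand-in fetchers simulating the two endpoints.
local = {"a": {"id": "a", "name": "old"}, "b": {"id": "b", "name": "stale"}}
new_watermark = sync_deltas(
    local,
    fetch_updated=lambda ts: [{"id": "a", "name": "renamed"}],
    fetch_deleted=lambda ts: ["b"],
    last_synced_at=None,
)
```

Without the deleted-issues feed, locally cached rows for deleted or archived issues would linger forever, since they no longer appear in any list response.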
* - Extract local DB into its own class
- Implement sorting by label names

* Implement subgroup by

* sub group by changes

* Refactor query constructor

* Insert or update issues instead of directly adding them.

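Because synced records may already exist locally, plain inserts become upserts. In SQLite that is `INSERT ... ON CONFLICT DO UPDATE` (available since 3.24); a sketch over a toy schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE issues (id TEXT PRIMARY KEY, name TEXT, updated_at TEXT)")

def upsert_issue(conn, issue):
    # New rows are inserted; rows with a matching id are updated in place.
    conn.execute(
        """
        INSERT INTO issues (id, name, updated_at) VALUES (:id, :name, :updated_at)
        ON CONFLICT(id) DO UPDATE SET name = excluded.name, updated_at = excluded.updated_at
        """,
        issue,
    )

upsert_issue(conn, {"id": "1", "name": "first", "updated_at": "t1"})
upsert_issue(conn, {"id": "1", "name": "renamed", "updated_at": "t2"})
```

`excluded` refers to the row that failed to insert, so the update always carries the incoming values.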
* Segregated queries. Not working though!!

* - Get filtered issues and then group them.
- Cleanup code.
- Implement order by labels.

* Fix build issues

* Remove debuggers

* remove loaders while changing sorting or applying filters

* fix loader while clearing all filters

* Fix issue with project being synced twice

* Improve project sync

* Optimize the queries

* Make create dummy data more realistic

* dev: added total pages in the global paginator

* chore: updated total_paged count

* chore: added state_group in the issues pagination

* chore: removed deleted_at from the issue pagination payload

* chore: replaced state_group with state__group

* Integrate new getIssues API, and fix sync issues bug.

* Fix issue with SWR running twice in workspace wrapper

* Fix DB initialization called when opening project for the first time.

* Add all the tables required for sorting

* Exclude description from getIssues

* Add getIssue function.

* Add only selected fields to get query.

* Fix the count query

* Minor query optimization when no joins are required.

* fetch issue description from local db

* clear local db on signout

* Correct dummy data creation

* Fix sort by assignee

* sync to local changes

* chore: added archived issues in the deleted endpoint

* Sync deletes to local db.

* - Add missing indexes for tables used in sorting in spreadsheet layout.
- Add options table

* Make fallback optional in getOption

* Kanban column virtualization

* persist project sync readiness to SQLite and use it as the source of truth for whether a project's issues are ready

* fix build errors

* Fix calendar view

* fetch a slimmed-down version of modules in project wrapper

* fetch toned-down modules and then fetch the complete modules

* Fix multi-value order by in spreadsheet layout

* Fix sort by

* Fix the query when ordering by multi field names

* Remove unused import

* Fix sort by multi-value fields

* Format queries and fix order by

* fix order by for multi issue

* fix loaders for spreadsheet

* Fall back to manual order when moving away from spreadsheet layout

* fix minor bug

* Move fix for order_by when switching from spreadsheet layout to translateQueryParams

* fix default rendering of kanban groups

* Fix none priority being saved as null

* Remove debugger statement

* Fix issue load

* chore: updated issue paginated query from  to

* Fix sub issues and start and target date filters

* Fix active and backlog filter

* Add default order by

* Update the Query param to match with backend.

* local sqlite db versioning

* When window is hidden, do not perform any db versioning

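Local-DB versioning is often done with SQLite's `user_version` pragma: compare the stored value against the version the code expects and rebuild on mismatch. A sketch — the real client keys this off its own version constant and skips the check while the window is hidden:

```python
import sqlite3

DB_VERSION = 2  # bump whenever the local schema changes

def open_local_db(path=":memory:"):
    conn = sqlite3.connect(path)
    stored = conn.execute("PRAGMA user_version").fetchone()[0]
    if stored != DB_VERSION:
        # Stale or fresh schema: drop cached tables and rebuild from scratch.
        # The cache is rebuildable from the server, so dropping it is safe.
        conn.execute("DROP TABLE IF EXISTS issues")
        conn.execute("CREATE TABLE issues (id TEXT PRIMARY KEY, name TEXT)")
        conn.execute(f"PRAGMA user_version = {DB_VERSION}")
    return conn

conn = open_local_db()
```

`PRAGMA` statements do not accept bound parameters, hence the f-string for the version write.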
* fix error handling and fall back to server when database errors out

* Add ability to disable local db cache

* remove db version check from getIssues function

* change db version to number and remove workspaceInitPromise in storage.sqlite

* - Sync the entire workspace in the background
- Add get sub issue method with distribution

* Make changes to get issues for sync to match backend.

* chore: handled workspace and project in v2 paginated issues

* disable issue description and title until fetched from server

* sync issues post bulk operations

* fix server error

* fix front end build

* Remove full workspace sync

* - Remove the toast message on sync.
- Update the disable local message.

* Add a hardcoded constant to disable the local db caching

* fix lint errors

* Fix order by in grouping

* update yarn lock

* fix build

* fix plane-web imports

* address review comments

---------

Co-authored-by: rahulramesha <rahulramesham@gmail.com>
Co-authored-by: NarayanBavisetti <narayan3119@gmail.com>
Co-authored-by: gurusainath <gurusainath007@gmail.com>
4 people authored Sep 24, 2024
1 parent 8dabe83 commit 3df2303
Showing 48 changed files with 2,084 additions and 154 deletions.
10 changes: 8 additions & 2 deletions apiserver/plane/app/urls/issue.py
@@ -20,6 +20,7 @@
IssueViewSet,
LabelViewSet,
BulkArchiveIssuesEndpoint,
DeletedIssuesListViewSet,
IssuePaginatedViewSet,
)

@@ -39,9 +40,9 @@
),
name="project-issue",
),
# updated v1 paginated issues
# updated v2 paginated issues
path(
"workspaces/<str:slug>/projects/<uuid:project_id>/v2/issues/",
"workspaces/<str:slug>/v2/issues/",
IssuePaginatedViewSet.as_view({"get": "list"}),
name="project-issues-paginated",
),
@@ -311,4 +312,9 @@
),
name="project-issue-draft",
),
path(
"workspaces/<str:slug>/projects/<uuid:project_id>/deleted-issues/",
DeletedIssuesListViewSet.as_view(),
name="deleted-issues",
),
]
1 change: 1 addition & 0 deletions apiserver/plane/app/views/__init__.py
@@ -114,6 +114,7 @@
IssueViewSet,
IssueUserDisplayPropertyEndpoint,
BulkDeleteIssuesEndpoint,
DeletedIssuesListViewSet,
IssuePaginatedViewSet,
)

64 changes: 50 additions & 14 deletions apiserver/plane/app/views/issue/base.py
@@ -234,11 +234,17 @@ def get_queryset(self):
@method_decorator(gzip_page)
@allow_permission([ROLE.ADMIN, ROLE.MEMBER, ROLE.GUEST])
def list(self, request, slug, project_id):
extra_filters = {}
if request.GET.get("updated_at__gt", None) is not None:
extra_filters = {
"updated_at__gt": request.GET.get("updated_at__gt")
}

project = Project.objects.get(pk=project_id, workspace__slug=slug)
filters = issue_filters(request.query_params, "GET")
order_by_param = request.GET.get("order_by", "-created_at")

issue_queryset = self.get_queryset().filter(**filters)
issue_queryset = self.get_queryset().filter(**filters, **extra_filters)
# Custom ordering for priority and state

# Issue queryset
@@ -713,16 +719,43 @@ def delete(self, request, slug, project_id):
)


class DeletedIssuesListViewSet(BaseAPIView):
@allow_permission([ROLE.ADMIN, ROLE.MEMBER, ROLE.GUEST])
def get(self, request, slug, project_id):
filters = {}
if request.GET.get("updated_at__gt", None) is not None:
filters = {"updated_at__gt": request.GET.get("updated_at__gt")}
deleted_issues = (
Issue.all_objects.filter(
workspace__slug=slug,
project_id=project_id,
)
.filter(Q(archived_at__isnull=False) | Q(deleted_at__isnull=False))
.filter(**filters)
.values_list("id", flat=True)
)

return Response(deleted_issues, status=status.HTTP_200_OK)


class IssuePaginatedViewSet(BaseViewSet):
def get_queryset(self):
workspace_slug = self.kwargs.get("slug")
project_id = self.kwargs.get("project_id")

# getting the project_id from the request params
project_id = self.request.GET.get("project_id", None)

issue_queryset = Issue.issue_objects.filter(
workspace__slug=workspace_slug
)

if project_id:
issue_queryset = issue_queryset.filter(project_id=project_id)

return (
Issue.issue_objects.filter(
workspace__slug=workspace_slug, project_id=project_id
issue_queryset.select_related(
"workspace", "project", "state", "parent"
)
.select_related("workspace", "project", "state", "parent")
.prefetch_related("assignees", "labels", "issue_module__module")
.annotate(cycle_id=F("issue_cycle__cycle_id"))
.annotate(
@@ -760,17 +793,18 @@ def process_paginated_result(self, fields, results, timezone):

return paginated_data

@allow_permission([ROLE.ADMIN, ROLE.MEMBER, ROLE.GUEST])
def list(self, request, slug, project_id):
def list(self, request, slug):
project_id = self.request.GET.get("project_id", None)
cursor = request.GET.get("cursor", None)
is_description_required = request.GET.get("description", False)
updated_at = request.GET.get("updated_at__gte", None)
updated_at = request.GET.get("updated_at__gt", None)

# required fields
required_fields = [
"id",
"name",
"state_id",
"state__group",
"sort_order",
"completed_at",
"estimate_point",
@@ -787,7 +821,6 @@ def list(self, request, slug, project_id):
"updated_by",
"is_draft",
"archived_at",
"deleted_at",
"module_ids",
"label_ids",
"assignee_ids",
@@ -800,15 +833,18 @@
required_fields.append("description_html")

# querying issues
base_queryset = Issue.issue_objects.filter(
workspace__slug=slug, project_id=project_id
).order_by("updated_at")
base_queryset = Issue.issue_objects.filter(workspace__slug=slug)

if project_id:
base_queryset = base_queryset.filter(project_id=project_id)

base_queryset = base_queryset.order_by("updated_at")
queryset = self.get_queryset().order_by("updated_at")

# filtering issues with updated_at greater than the timestamp given by the user
if updated_at:
base_queryset = base_queryset.filter(updated_at__gte=updated_at)
queryset = queryset.filter(updated_at__gte=updated_at)
base_queryset = base_queryset.filter(updated_at__gt=updated_at)
queryset = queryset.filter(updated_at__gt=updated_at)

queryset = queryset.annotate(
label_ids=Coalesce(
53 changes: 33 additions & 20 deletions apiserver/plane/bgtasks/dummy_data_task.py
@@ -347,7 +347,7 @@ def create_issues(workspace, project, user_id, issue_count):
)
)

text = fake.text(max_nb_chars=60000)
text = fake.text(max_nb_chars=3000)
issues.append(
Issue(
state_id=states[random.randint(0, len(states) - 1)],
@@ -490,18 +490,23 @@ def create_issue_assignees(workspace, project, user_id, issue_count):
def create_issue_labels(workspace, project, user_id, issue_count):
# labels
labels = Label.objects.filter(project=project).values_list("id", flat=True)
issues = random.sample(
list(
# issues = random.sample(
# list(
# Issue.objects.filter(project=project).values_list("id", flat=True)
# ),
# int(issue_count / 2),
# )
issues = list(
Issue.objects.filter(project=project).values_list("id", flat=True)
),
int(issue_count / 2),
)
)
shuffled_labels = list(labels)

# Bulk issue
bulk_issue_labels = []
for issue in issues:
random.shuffle(shuffled_labels)
for label in random.sample(
list(labels), random.randint(0, len(labels) - 1)
shuffled_labels, random.randint(0, 5)
):
bulk_issue_labels.append(
IssueLabel(
@@ -552,25 +557,33 @@ def create_module_issues(workspace, project, user_id, issue_count):
modules = Module.objects.filter(project=project).values_list(
"id", flat=True
)
issues = random.sample(
list(
# issues = random.sample(
# list(
# Issue.objects.filter(project=project).values_list("id", flat=True)
# ),
# int(issue_count / 2),
# )
issues = list(
Issue.objects.filter(project=project).values_list("id", flat=True)
),
int(issue_count / 2),
)
)

shuffled_modules = list(modules)

# Bulk issue
bulk_module_issues = []
for issue in issues:
module = modules[random.randint(0, len(modules) - 1)]
bulk_module_issues.append(
ModuleIssue(
module_id=module,
issue_id=issue,
project=project,
workspace=workspace,
random.shuffle(shuffled_modules)
for module in random.sample(
shuffled_modules, random.randint(0, 5)
):
bulk_module_issues.append(
ModuleIssue(
module_id=module,
issue_id=issue,
project=project,
workspace=workspace,
)
)
)
# Issue assignees
ModuleIssue.objects.bulk_create(
bulk_module_issues, batch_size=1000, ignore_conflicts=True
@@ -73,7 +73,7 @@ def handle(self, *args: Any, **options: Any) -> str | None:

from plane.bgtasks.dummy_data_task import create_dummy_data

create_dummy_data.delay(
create_dummy_data(
slug=workspace_slug,
email=creator,
members=members,
7 changes: 7 additions & 0 deletions apiserver/plane/utils/global_paginator.py
@@ -1,3 +1,6 @@
# python imports
from math import ceil

# constants
PAGINATOR_MAX_LIMIT = 1000

@@ -36,6 +39,9 @@ def paginate(base_queryset, queryset, cursor, on_result):
total_results = base_queryset.count()
page_size = min(cursor_object.current_page_size, PAGINATOR_MAX_LIMIT)

# getting the total pages available based on the page size
total_pages = ceil(total_results / page_size)

# Calculate the start and end index for the paginated data
start_index = 0
if cursor_object.current_page > 0:
@@ -72,6 +78,7 @@ def paginate(base_queryset, queryset, cursor, on_result):
"next_page_results": next_page_results,
"page_count": len(paginated_data),
"total_results": total_results,
"total_pages": total_pages,
"results": paginated_data,
}

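The `total_pages` value added above is just a ceiling division of the filtered count by the page size; for example:

```python
from math import ceil

# 2505 matching issues at 1000 per page fill two full pages plus a partial third.
total_results = 2505
page_size = 1000
total_pages = ceil(total_results / page_size)
```

`ceil` rather than integer division ensures a trailing partial page is still counted.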
10 changes: 5 additions & 5 deletions apiserver/plane/utils/paginator.py
@@ -82,7 +82,7 @@ def __repr__(self):
return f"<{type(self).__name__}: results={len(self.results)}>"


MAX_LIMIT = 100
MAX_LIMIT = 1000


class BadPaginationError(Exception):
@@ -118,7 +118,7 @@ def __init__(
self.max_offset = max_offset
self.on_results = on_results

def get_result(self, limit=100, cursor=None):
def get_result(self, limit=1000, cursor=None):
# offset is page #
# value is page limit
if cursor is None:
Expand Down Expand Up @@ -727,7 +727,7 @@ class BasePaginator:
cursor_name = "cursor"

# get the per page parameter from request
def get_per_page(self, request, default_per_page=100, max_per_page=100):
def get_per_page(self, request, default_per_page=1000, max_per_page=1000):
try:
per_page = int(request.GET.get("per_page", default_per_page))
except ValueError:
@@ -747,8 +747,8 @@ def paginate(
on_results=None,
paginator=None,
paginator_cls=OffsetPaginator,
default_per_page=100,
max_per_page=100,
default_per_page=1000,
max_per_page=1000,
cursor_cls=Cursor,
extra_stats=None,
controller=None,