[WEB-2001] feat: Cache issues on the client #5327
Conversation
… services for modules and cycles
… feat_no_loader_SQLite
… services for modules and cycles
- Implement getting the updates - Use SWR to update/sync data
…plane into feat_no_loader_SQLite
Walkthrough
This pull request introduces a toggle feature for managing local database caching in the user profile settings, enhancing user control over data storage.

Changes
- Fallback to server when the local data is loading - Wait when the updates are being fetched
…o feat-noloader-sqlite
- Fix the total count
- Fix order by and limit
- Fix group by priority
…o feat-noloader-sqlite
…t-noloader-sqlite
…t-noloader-sqlite
- Update the disable local message.
…lane into feat-noloader-sqlite
@@ -1,4 +1,5 @@
import { MutableRefObject } from "react";
import clone from "lodash/clone";
We are not using this.
@@ -237,6 +233,10 @@ export class CycleIssuesFilter extends IssueFilterHelperStore implements ICycleI
      });
    });

    if (this.getShouldClearIssues(updatedDisplayFilters)) {
      this.rootIssueStore.cycleIssues.clear(true, true); // clear issues for local db when some filters like layout changes
It's not local DB.
Actionable comments posted: 56
Outside diff range and nitpick comments (55)
web/core/local-db/utils/query-executor.ts (1)
1-3: Consider removing or uncommenting the SQL import.
There's a commented-out import statement for SQL from "./sqlite". If this import is no longer needed, it's best to remove it entirely. If it will be used in the future, consider adding a TODO comment explaining why it's currently commented out.
web/core/local-db/utils/constants.ts (3)
1-1: LGTM! Consider adding a brief comment for clarity.
The ARRAY_FIELDS constant is well-defined and aligns with the PR objective of storing issues on the client side. These fields likely require special handling for local storage and synchronization. Consider adding a brief comment above the constant to explain its purpose, e.g.:

    // Fields that contain arrays of values and may require special handling for local storage
    export const ARRAY_FIELDS = ["label_ids", "assignee_ids", "module_ids"];
3-13: LGTM! Consider adding a type annotation for improved type safety.
The GROUP_BY_MAP constant is well-structured and provides a clear mapping for grouping issues by different properties. This aligns with the PR objective of improving performance and UX on the boards by enabling efficient client-side grouping. Consider adding a type annotation to improve type safety and make the constant's structure more explicit:

    export const GROUP_BY_MAP: Record<string, string> = {
      // ... (existing key-value pairs)
    };

This change will ensure that all keys and values in the object are strings, preventing potential type-related issues in the future.
15-21: LGTM! Consider adding a type annotation for improved type safety.
The PRIORITY_MAP constant is well-defined and provides a clear mapping of priority levels to numerical values. This is crucial for sorting and prioritizing issues, aligning with the PR objective of improving UX on the boards. Consider adding a type annotation to improve type safety and make the constant's structure more explicit:

    export const PRIORITY_MAP: Record<string, number> = {
      // ... (existing key-value pairs)
    };

This change will ensure that all keys are strings and all values are numbers, preventing potential type-related issues in the future.
web/ce/constants/issues.ts (1)
33-35: LGTM! Consider making this a configurable setting.
The addition of ENABLE_LOCAL_DB_CACHE aligns well with the PR objectives of implementing client-side storage for issues. Setting it to false by default is a prudent approach for a controlled feature rollout. Consider making this a configurable setting rather than a hard-coded constant. This would allow for easier environment-specific configurations and A/B testing. For example:

    export const ENABLE_LOCAL_DB_CACHE = process.env.ENABLE_LOCAL_DB_CACHE === 'true' || false;

This approach would maintain the default false value while allowing for configuration through environment variables.
web/core/local-db/utils/data.utils.ts (1)
3-6: LGTM, with a suggestion for future-proofing.
The getProjectIds function correctly retrieves distinct project IDs from the 'states' table. It's implemented efficiently using SQL's GROUP BY clause. Consider using parameterized queries in the future if this function ever needs to accept input parameters. This would help prevent SQL injection vulnerabilities.
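For concreteness, the parameterized-query advice can be sketched as below. This is an illustrative sketch only: `ParamQuery`, `distinctProjectIds`, and the whitelist of filter columns are hypothetical names, not code from the PR.

```typescript
// Sketch: return SQL plus a bind array instead of interpolating values.
type ParamQuery = { sql: string; bind: unknown[] };

function distinctProjectIds(filterField?: string, filterValue?: unknown): ParamQuery {
  // The base query takes no user input, so no parameters are needed.
  if (filterField === undefined) {
    return { sql: "SELECT project_id FROM states GROUP BY project_id", bind: [] };
  }
  // Column names cannot be bound as parameters, so whitelist them explicitly.
  const allowedFields = new Set(["workspace_id", "created_by"]);
  if (!allowedFields.has(filterField)) {
    throw new Error(`unexpected filter field: ${filterField}`);
  }
  // Values go through a `?` placeholder so the driver escapes them.
  return {
    sql: `SELECT project_id FROM states WHERE ${filterField} = ? GROUP BY project_id`,
    bind: [filterValue],
  };
}
```

The { sql, bind } pair would then be handed to whatever exec/prepare API the persistence layer exposes.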
web/core/local-db/utils/tables.ts (3)
15-23: Good implementation, but consider some improvements.
The createTableSQLfromSchema function effectively generates SQL statements for table creation. However, there are a few points to consider:
- The console.log statement on line 21 might be unnecessary in production code. Consider removing it or wrapping it in a debug condition.
- The function doesn't protect against potential SQL injection. Although the risk is low since the input is expected to come from predefined schemas, it's a good practice to sanitize inputs or use parameterized queries.
Consider applying these improvements:
- Remove or conditionally include the console.log:

    - console.log("#####", sql);
    + if (process.env.NODE_ENV === 'development') {
    +   console.log("#####", sql);
    + }

- Implement input sanitization:

    const sanitizeIdentifier = (identifier: string): string => {
      // Basic sanitization: remove everything that's not alphanumeric or underscore
      return identifier.replace(/[^a-zA-Z0-9_]/g, '');
    };

    const createTableSQLfromSchema = (tableName: string, schema: Schema) => {
      const sanitizedTableName = sanitizeIdentifier(tableName);
      let sql = `CREATE TABLE IF NOT EXISTS ${sanitizedTableName} (`;
      sql += Object.entries(schema)
        .map(([key, value]) => `'${sanitizeIdentifier(key)}' ${value}`)
        .join(", ");
      sql += `);`;
      return sql;
    };

This sanitization is a basic example and might need to be adjusted based on your specific requirements and the range of valid identifiers in your system.
25-39: Good use of transactions, but consider error handling and parameterization.
The createTables function effectively uses a transaction to ensure atomicity when creating multiple tables. However, there are some areas for improvement:
- Error handling: the function doesn't handle potential errors that might occur during database operations. Consider wrapping the operations in a try-catch block.
- Parameterization: the function creates a fixed set of tables. Consider parameterizing the table creation process for better reusability.
Here's a suggested refactor to address these points:

    export const createTables = async (
      tables: { name: string; schema: Schema }[] = [
        { name: "issues", schema: issueSchema },
        { name: "issue_meta", schema: issueMetaSchema },
        { name: "modules", schema: moduleSchema },
        { name: "labels", schema: labelSchema },
        { name: "states", schema: stateSchema },
        { name: "cycles", schema: cycleSchema },
        { name: "estimate_points", schema: estimatePointSchema },
        { name: "members", schema: memberSchema },
        { name: "options", schema: optionsSchema },
      ]
    ) => {
      try {
        persistence.db.exec("BEGIN TRANSACTION;");
        for (const table of tables) {
          persistence.db.exec(createTableSQLfromSchema(table.name, table.schema));
        }
        persistence.db.exec("COMMIT;");
      } catch (error) {
        persistence.db.exec("ROLLBACK;");
        console.error("Error creating tables:", error);
        throw error; // Re-throw the error for the caller to handle
      }
    };

This refactored version:
- Adds error handling with transaction rollback in case of failure.
- Parameterizes the table creation process, allowing for easy addition or removal of tables.
- Maintains the default behavior while allowing for customization when needed.
1-39: Summary: Good implementation with room for improvements.
This new file introduces crucial functionality for setting up local database tables, which aligns well with the PR objective of storing issues on the client for better performance and UX on the boards. The implementation is generally solid, with good use of transactions and a flexible approach to table creation.
To further enhance this implementation, consider:
- Implementing error handling in the createTables function.
- Parameterizing the createTables function for better reusability.
- Adding input sanitization in the createTableSQLfromSchema function to prevent potential SQL injection.
- Removing or conditionally including debug logs.
These improvements will make the code more robust, secure, and maintainable, contributing to a more reliable local storage solution for issues.
As you proceed with implementing client-side storage for issues, keep in mind:
- Synchronization mechanisms between local and server data.
- Handling of offline scenarios and conflict resolution.
- Performance optimizations for large datasets.
- Clear error handling and user feedback for database operations.
These considerations will be crucial for achieving the performance improvements and enhanced UX outlined in the PR objectives.
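As one concrete (and deliberately simple) option for the conflict-resolution point above, a last-write-wins merge on update timestamps could look like this. It is a hypothetical sketch, not the strategy the PR actually implements, and the `updated_at` field name is an assumption.

```typescript
// Hypothetical last-write-wins merge: the more recently updated copy survives.
type Versioned = { id: string; updated_at: string };

function mergeLastWriteWins<T extends Versioned>(local: T, server: T): T {
  // Compare ISO-8601 timestamps; ties keep the local copy so an in-flight
  // edit is not silently discarded.
  return Date.parse(local.updated_at) >= Date.parse(server.updated_at) ? local : server;
}
```

Real sync code would also need tombstones for deletions and a queue of unsynced local mutations, which plain last-write-wins does not cover.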
web/core/local-db/utils/indexes.ts (3)
4-25: Approved: Efficient index creation for issues table.
The implementation looks good and should improve query performance for the issues table. The use of Promise.all for concurrent index creation is an efficient approach. Consider adding a comment explaining why the "priority" column is commented out. If it's no longer needed, it might be better to remove it entirely to avoid confusion.
32-54: Approved: Comprehensive index creation for workspace tables.
The function creates a good set of indexes for various workspace-related tables, which should significantly improve query performance. The use of Promise.all for concurrent index creation is an efficient approach. There's a "@todo" comment next to the estimate points index creation. Please clarify or address this TODO item:

    // Estimate Points @todo
    promises.push(persistence.db.exec({ sql: `CREATE INDEX estimate_points_name_idx ON estimate_points (id,value)` }));

If there's no pending work, consider removing the "@todo" comment. If there is additional work needed, please add a more descriptive comment or create a separate issue to track it.
56-68: Approved: Well-structured index creation orchestration.
The createIndexes function effectively orchestrates the execution of all index creation functions. The implementation includes proper error handling and performance logging, which is commendable. Consider enhancing the error logging to provide more context:

    try {
      await Promise.all(promises);
    } catch (e) {
    -  console.log((e as Error).message);
    +  console.error('Error creating indexes:', (e as Error).message);
    }

This change uses console.error for errors and provides more context in the log message, which can be helpful for debugging.
apiserver/plane/utils/global_paginator.py (2)
42-43: LGTM: Correct calculation of total pages.
The calculation of total_pages is correct and follows best practices for pagination. It ensures that partial pages are rounded up to a full page. Consider moving this calculation right after the total_results and page_size variables are defined for better code organization:

     total_results = base_queryset.count()
     page_size = min(cursor_object.current_page_size, PAGINATOR_MAX_LIMIT)
    +
    +# getting the total pages available based on the page size
    +total_pages = ceil(total_results / page_size)

     # Calculate the start and end index for the paginated data
     start_index = 0
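The rounding behaviour being reviewed is easy to pin down with a tiny helper (written in TypeScript here purely for illustration; the PR's code is Python's `ceil(total_results / page_size)`):

```typescript
// Any trailing partial page counts as a full page.
function totalPages(totalResults: number, pageSize: number): number {
  return Math.ceil(totalResults / pageSize);
}
```

For example, 250 results at a page size of 100 yield 3 pages: two full pages plus one partial page.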
81-81: LGTM: Valuable addition to pagination information.
Adding total_pages to the returned dictionary provides important information for the client, which aligns with the PR objective of improving performance and UX on the boards. For consistency with other numeric values in the dictionary, consider casting total_pages to an integer:

    - "total_pages": total_pages,
    + "total_pages": int(total_pages),

This ensures that the value is always an integer, even if the ceil function returns a float in some Python implementations.
apiserver/plane/db/management/commands/create_dummy_data.py (3)
Line range hint 40-46: Consider using enum or constants for role values.
The hardcoded role value (20) used here and elsewhere in the file could lead to maintainability issues. It's not immediately clear what this value represents. Consider defining an enum or constants for role values to improve code readability and maintainability. For example:

    from enum import IntEnum

    class WorkspaceRole(IntEnum):
        ADMIN = 20
        MEMBER = 15
        VIEWER = 10

    # Then use it as:
    WorkspaceMember.objects.create(
        workspace=workspace, role=WorkspaceRole.ADMIN, member=user
    )

This change would make the code more self-documenting and easier to maintain.
Line range hint 54-59: Add input validation for numeric inputs.
The code currently doesn't validate that the input for project count and other numeric values are actually valid integers. This could lead to unexpected errors if a user inputs non-numeric values. Consider adding input validation for all numeric inputs. Here's an example of how you could modify the project count input:

    def get_valid_int_input(prompt):
        while True:
            try:
                return int(input(prompt))
            except ValueError:
                print("Please enter a valid integer.")

    project_count = get_valid_int_input("Number of projects to be created: ")

Apply similar validation to other numeric inputs in the file.
Line range hint 86-92: Improve error handling.
The current error handling catches all exceptions and prints a generic error message. This broad exception handling might mask specific issues, making debugging more difficult. Consider catching specific exceptions and providing more detailed error messages. For example:

    try:
        # ... existing code ...
    except ValueError as e:
        self.stdout.write(
            self.style.ERROR(f"Invalid input: {str(e)}")
        )
    except Workspace.DoesNotExist:
        self.stdout.write(
            self.style.ERROR("Specified workspace does not exist")
        )
    except Exception as e:
        self.stdout.write(
            self.style.ERROR(f"Unexpected error occurred: {str(e)}")
        )

This approach will provide more specific error messages, making it easier to identify and resolve issues.
web/core/local-db/utils/schemas.ts (10)
1-3: LGTM! Consider using a more specific type for values.
The Schema type definition is clear and flexible. However, using string for all values might limit type safety. Consider using a union type for values to increase type safety:

    export type Schema = {
      [key: string]: "TEXT" | "INTEGER" | "REAL" | "BOOLEAN" | (string & {});
    };

Note the (string & {}) trick: a plain | string would absorb the literal members, whereas this form keeps arbitrary strings assignable while preserving autocomplete for the common SQL types. This change would allow for more specific type checking while still maintaining flexibility.
5-35: LGTM! Consider adding constraints to some fields.
The issueSchema is comprehensive and well-structured. However, some fields could benefit from additional constraints or more specific types. Consider the following improvements:
- Add NOT NULL constraints to essential fields:

    id: "TEXT UNIQUE NOT NULL",
    name: "TEXT NOT NULL",
    state_id: "TEXT NOT NULL",

- Use more specific types for boolean fields:

    is_draft: "BOOLEAN",

- Consider using FOREIGN KEY constraints for fields referencing other entities:

    state_id: "TEXT NOT NULL REFERENCES states(id)",
    project_id: "TEXT NOT NULL REFERENCES projects(id)",

These changes would enhance data integrity and provide a more accurate representation of the data structure.
42-69: LGTM! Consider optimizing storage for large text fields.
The moduleSchema is well-structured and comprehensive. However, some optimizations could be made for large text fields and data integrity. Consider the following improvements:
- Use TEXT for large fields, but consider MEDIUMTEXT or LONGTEXT for potentially very large content:

    description: "MEDIUMTEXT",
    description_text: "MEDIUMTEXT",
    description_html: "MEDIUMTEXT",

- Add NOT NULL constraints to essential fields:

    id: "TEXT UNIQUE NOT NULL",
    workspace_id: "TEXT NOT NULL",
    project_id: "TEXT NOT NULL",
    name: "TEXT NOT NULL",

- Consider using FOREIGN KEY constraints for fields referencing other entities:

    workspace_id: "TEXT NOT NULL REFERENCES workspaces(id)",
    project_id: "TEXT NOT NULL REFERENCES projects(id)",
    lead_id: "TEXT REFERENCES members(id)",

These changes would enhance data integrity and optimize storage for large text fields.
71-79: LGTM! Consider adding constraints and optimizing data types.
The labelSchema is well-structured. However, some improvements could enhance data integrity and optimize storage. Consider the following improvements:
- Add NOT NULL constraints to essential fields:

    id: "TEXT UNIQUE NOT NULL",
    name: "TEXT NOT NULL",
    color: "TEXT NOT NULL",

- Use a more specific type for color (if it's always a hex color):

    color: "CHAR(7) NOT NULL",

- Consider using FOREIGN KEY constraints for fields referencing other entities:

    parent: "TEXT REFERENCES labels(id)",
    project_id: "TEXT NOT NULL REFERENCES projects(id)",
    workspace_id: "TEXT NOT NULL REFERENCES workspaces(id)",

- Use a more efficient type for sort_order if it's always an integer:

    sort_order: "INTEGER NOT NULL",

These changes would enhance data integrity and optimize storage.
81-102: LGTM! Consider optimizing data types and adding constraints.
The cycleSchema is comprehensive and well-structured. However, some optimizations could enhance data integrity and storage efficiency. Consider the following improvements:
- Add NOT NULL constraints to essential fields:

    id: "TEXT UNIQUE NOT NULL",
    workspace_id: "TEXT NOT NULL",
    project_id: "TEXT NOT NULL",
    name: "TEXT NOT NULL",

- Use DATE type for date fields:

    start_date: "DATE",
    end_date: "DATE",

- Consider using FOREIGN KEY constraints for fields referencing other entities:

    workspace_id: "TEXT NOT NULL REFERENCES workspaces(id)",
    project_id: "TEXT NOT NULL REFERENCES projects(id)",
    owned_by_id: "TEXT REFERENCES members(id)",

- Use more efficient types for numeric fields:

    sort_order: "INTEGER NOT NULL",
    total_issues: "INTEGER NOT NULL DEFAULT 0",
    cancelled_issues: "INTEGER NOT NULL DEFAULT 0",
    completed_issues: "INTEGER NOT NULL DEFAULT 0",
    started_issues: "INTEGER NOT NULL DEFAULT 0",
    unstarted_issues: "INTEGER NOT NULL DEFAULT 0",
    backlog_issues: "INTEGER NOT NULL DEFAULT 0",

These changes would enhance data integrity, optimize storage, and ensure consistent default values for numeric fields.
104-114: LGTM! Consider optimizing data types and adding constraints.
The stateSchema is well-structured. However, some improvements could enhance data integrity and optimize storage. Consider the following improvements:
- Add NOT NULL constraints to essential fields:

    id: "TEXT UNIQUE NOT NULL",
    project_id: "TEXT NOT NULL",
    workspace_id: "TEXT NOT NULL",
    name: "TEXT NOT NULL",
    color: "TEXT NOT NULL",
    group: "TEXT NOT NULL",

- Use a more specific type for color (if it's always a hex color):

    color: "CHAR(7) NOT NULL",

- Use BOOLEAN type for the default field:

    default: "BOOLEAN NOT NULL DEFAULT FALSE",

- Consider using FOREIGN KEY constraints for fields referencing other entities:

    project_id: "TEXT NOT NULL REFERENCES projects(id)",
    workspace_id: "TEXT NOT NULL REFERENCES workspaces(id)",

- Use INTEGER for the sequence field:

    sequence: "INTEGER NOT NULL",

These changes would enhance data integrity, optimize storage, and ensure consistent data types.
116-120: LGTM! Consider adding a NOT NULL constraint to the value field.
The estimatePointSchema is concise and well-structured. The use of REAL for the value field is appropriate for estimate points. Consider adding a NOT NULL constraint to the value field to ensure it always has a value:

    export const estimatePointSchema: Schema = {
      id: "TEXT UNIQUE NOT NULL",
      key: "TEXT NOT NULL",
      value: "REAL NOT NULL",
    };

This change ensures that all estimate points have a defined value, maintaining data integrity.
122-130: LGTM! Consider adding constraints and optimizing data types.
The memberSchema is well-structured. However, some improvements could enhance data integrity and optimize storage. Consider the following improvements:
- Add NOT NULL constraints to essential fields:

    id: "TEXT UNIQUE NOT NULL",
    first_name: "TEXT NOT NULL",
    last_name: "TEXT NOT NULL",
    is_bot: "BOOLEAN NOT NULL DEFAULT FALSE",
    display_name: "TEXT NOT NULL",
    email: "TEXT NOT NULL",

- Add a UNIQUE constraint to the email field:

    email: "TEXT NOT NULL UNIQUE",

- Consider using a more efficient type for the is_bot field:

    is_bot: "BOOLEAN NOT NULL DEFAULT FALSE",

- If applicable, add a check constraint for valid email format:

    email: "TEXT NOT NULL UNIQUE CHECK (email LIKE '%@%.%')",

These changes would enhance data integrity, optimize storage, and ensure consistent data types.
132-135: LGTM! Consider adding a NOT NULL constraint to the value field.
The optionsSchema is concise and flexible, allowing for various option types to be stored. Consider adding a NOT NULL constraint to the value field to ensure it always has a value:

    export const optionsSchema: Schema = {
      key: "TEXT UNIQUE NOT NULL",
      value: "TEXT NOT NULL",
    };

This change ensures that all options have a defined value, maintaining data integrity.
1-135: Overall, excellent work on defining comprehensive schemas!
The schemas.ts file provides a well-structured set of schemas for various entities in the system. The use of a common Schema type allows for consistency across different entity schemas. To further improve the overall architecture and maintainability of the schemas, consider the following:
- Create a base schema with common fields (e.g., id, created_at, updated_at) that other schemas can extend.
- Implement a custom type system that allows for more specific field types (e.g., Email, Date, Color) while still being compatible with the database schema.
- Consider using a schema validation library like Zod or Yup to create runtime validations based on these schemas.
These improvements would enhance type safety, reduce code duplication, and provide a more robust foundation for your data layer.
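To make the third point concrete, a runtime check can be derived directly from a Schema object. The hand-rolled `validateRow` below is a hypothetical stand-in for what a library like Zod or Yup would generate; it only inspects the SQL type strings already present in the schemas.

```typescript
type Schema = { [key: string]: string };

// Check a row object against a Schema's SQL type strings.
// Returns a list of human-readable problems; an empty list means the row passes.
function validateRow(schema: Schema, row: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const [key, sqlType] of Object.entries(schema)) {
    const value = row[key];
    const required = sqlType.includes("NOT NULL");
    if (value === undefined || value === null) {
      if (required) errors.push(`${key} is required`);
      continue; // optional and absent: nothing more to check
    }
    if (sqlType.startsWith("INTEGER") && !Number.isInteger(value)) {
      errors.push(`${key} must be an integer`);
    }
    if (sqlType.startsWith("TEXT") && typeof value !== "string") {
      errors.push(`${key} must be a string`);
    }
  }
  return errors;
}
```

Running such a check before inserting rows into the local database would surface bad data at the boundary instead of as silent SQLite affinity coercions.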
web/next.config.js (1)
23-24: Excellent addition of security headers!
The new Cross-Origin-Opener-Policy (COOP) and Cross-Origin-Embedder-Policy (COEP) headers significantly enhance the application's security posture:
- COOP: same-origin prevents the page from interacting with cross-origin windows, reducing the risk of cross-origin attacks.
- COEP: require-corp ensures that all cross-origin resources are explicitly allowed, further tightening security.
These headers also enable the use of powerful features like SharedArrayBuffer, which could potentially be leveraged for performance improvements in the future, aligning with the PR's performance objectives.
Consider the following:
- Ensure all necessary cross-origin resources have the appropriate CORP (Cross-Origin Resource Policy) headers set.
- Test thoroughly to ensure these strict policies don't break any existing functionality, especially if you're embedding resources from other origins.
- If you plan to use SharedArrayBuffer for performance optimizations, make sure to implement and test it carefully, as it comes with its own set of security considerations.
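For reference, the two headers under discussion have a fixed shape, and a document becomes cross-origin isolated (unlocking SharedArrayBuffer) only when both opt in; the helper names below are illustrative, not from the PR, and COEP also accepts `credentialless` in newer browsers as an alternative to `require-corp`.

```typescript
// The COOP/COEP pair that opts a document into cross-origin isolation.
const isolationHeaders = [
  { key: "Cross-Origin-Opener-Policy", value: "same-origin" },
  { key: "Cross-Origin-Embedder-Policy", value: "require-corp" },
];

// True only when both headers are present with the strict values above.
function isCrossOriginIsolated(headers: Record<string, string>): boolean {
  return (
    headers["Cross-Origin-Opener-Policy"] === "same-origin" &&
    headers["Cross-Origin-Embedder-Policy"] === "require-corp"
  );
}
```

In a Next.js config these entries would be returned from the async `headers()` option for the routes that need isolation.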
web/core/store/issue/profile/issue.store.ts (1)
143-143: LGTM! Consider adding a comment for clarity.
The changes to the this.onfetchIssues call look good and align with the PR objectives of improving performance and UX. The addition of the !isExistingPaginationOptions parameter likely helps in managing pagination more effectively. Consider adding a brief comment explaining the purpose of the two undefined parameters and the !isExistingPaginationOptions boolean. This would improve code readability and make it easier for other developers to understand the intent behind these additions.

    - this.onfetchIssues(response, options, workspaceSlug, undefined, undefined, !isExistingPaginationOptions);
    + // Pass undefined for groupId and subGroupId, and determine if pagination should be reset
    + this.onfetchIssues(response, options, workspaceSlug, undefined, undefined, !isExistingPaginationOptions);

web/core/store/issue/module/filter.store.ts (1)
Line range hint 198-249: Summary: Improved filter update logic with potential performance optimizations.
The changes in this file demonstrate a thoughtful approach to optimizing the filter update process for module issues. Key improvements include:
- Simplified logic for fetching issues after filter updates.
- Introduction of conditional checks for clearing local issues and re-fetching issues.
These changes have the potential to enhance performance and maintain data consistency. However, to fully evaluate their impact, we need more information about the getShouldClearIssues and getShouldReFetchIssues methods.
To ensure these changes align with the PR objectives of improving performance and user experience on the boards, please provide:
- Implementations of getShouldClearIssues and getShouldReFetchIssues.
- Any performance metrics or testing results that demonstrate the impact of these changes.
- Confirmation that these optimizations work as expected across different scenarios, especially with varying numbers of issues and filter combinations.
web/core/store/issue/helpers/issue-filter-helper.store.ts (1)
270-282: LGTM! Consider adding a comment for clarity.
The getShouldClearIssues method is well-implemented and consistent with the existing codebase. It correctly determines if issues should be cleared based on changes in display filters, specifically checking for layout changes. Consider adding a brief comment explaining why layout changes require clearing issues. This would improve code maintainability and help other developers understand the reasoning behind this check. For example:

    /**
     * Determines if issues should be cleared based on display filter changes.
     * Layout changes require clearing issues as they fundamentally alter the structure of the view.
     * @param displayFilters - The current display filter options
     * @returns True if issues should be cleared, false otherwise
     */
    getShouldClearIssues = (displayFilters: IIssueDisplayFilterOptions) => {
      // ... (existing code)
    };

apiserver/plane/bgtasks/dummy_data_task.py (3)
493-509: Approve: Improved issue label distribution for testing.
The changes to the create_issue_labels function are beneficial for creating more comprehensive and controlled test data:
- Using all issues instead of a random subset ensures that every issue has a chance to receive labels, providing more thorough test coverage.
- Shuffling labels before sampling introduces variability while maintaining control over the maximum number of labels per issue.
- Capping the maximum number of labels per issue at 5 prevents outliers with an excessive number of labels, which could skew performance metrics.
These improvements will lead to more realistic test scenarios and potentially more reliable performance testing results. Consider adding a brief comment explaining the reasoning behind the maximum of 5 labels per issue, as this might not be immediately clear to other developers. For example:

    # Cap at 5 labels per issue to maintain realistic data distribution
    for label in random.sample(shuffled_labels, random.randint(0, 5)):

Tools — GitHub Check: Codacy Static Code Analysis
[notice] 501-501: apiserver/plane/bgtasks/dummy_data_task.py#L501 — Trailing whitespace
[warning] 509-509: apiserver/plane/bgtasks/dummy_data_task.py#L509 — Depending on the context, generating weak random numbers may expose cryptographic functions, which rely on these numbers, to be exploitable.
[warning] 509-509: apiserver/plane/bgtasks/dummy_data_task.py#L509 — Standard pseudo-random generators are not suitable for security/cryptographic purposes.
560-585: Approve: Improved module-issue association for testing.
The changes to the create_module_issues function mirror the improvements made in the create_issue_labels function, which is good for consistency and comprehensive testing:
- Using all issues ensures every issue has a chance to be associated with modules.
- Shuffling modules before sampling introduces variability while maintaining control.
- Capping the maximum number of modules per issue at 5 prevents outliers that could skew performance metrics.
These changes will contribute to more realistic and reliable test data for performance testing. For consistency with the create_issue_labels function, consider extracting the maximum number of modules per issue to a constant at the top of the file. This would make it easier to adjust this limit for both labels and modules in the future:

    MAX_LABELS_PER_ISSUE = 5
    MAX_MODULES_PER_ISSUE = 5

    # In create_issue_labels function
    random.sample(shuffled_labels, random.randint(0, MAX_LABELS_PER_ISSUE))

    # In create_module_issues function
    random.sample(shuffled_modules, random.randint(0, MAX_MODULES_PER_ISSUE))

Tools — GitHub Check: Codacy Static Code Analysis
[warning] 577-577: apiserver/plane/bgtasks/dummy_data_task.py#L577 — Depending on the context, generating weak random numbers may expose cryptographic functions, which rely on these numbers, to be exploitable.
[warning] 577-577: apiserver/plane/bgtasks/dummy_data_task.py#L577 — Standard pseudo-random generators are not suitable for security/cryptographic purposes.
501-501: Minor: Remove trailing whitespace.
There's a trailing whitespace on line 501. While this doesn't affect functionality, it's good practice to remove it for code cleanliness. Remove the trailing whitespace from line 501:

    - )
    + )

Regarding the static analysis warnings about weak random number generation (lines 509 and 577): these can be safely ignored in this context. The random module is being used for generating test data, not for security-critical operations. For test data generation, the standard pseudo-random generator is sufficient and appropriate.
web/app/profile/page.tsx (2)
401-401: Improved timezone label display.
The update to the label prop in the CustomSearchSelect component for timezone selection enhances the user experience by displaying the full timezone label. Consider memoizing the label calculation to optimize performance:

    -label={value ? (TIME_ZONES.find((t) => t.value === value)?.label ?? value) : "Select a timezone"}
    +label={useMemo(() => value ? (TIME_ZONES.find((t) => t.value === value)?.label ?? value) : "Select a timezone", [value])}

Don't forget to import useMemo from React if it's not already imported.
421-451: New Local Cache settings section.
The addition of the Local Cache settings section using the Disclosure component is well-implemented. It's properly gated by the ENABLE_LOCAL_DB_CACHE flag and uses the canUseLocalDB and toggleLocalDB functions from the useUserSettings hook. To improve accessibility, consider adding an aria-label to the ToggleSwitch:

    -<ToggleSwitch value={canUseLocalDB} onChange={() => toggleLocalDB()} />
    +<ToggleSwitch value={canUseLocalDB} onChange={() => toggleLocalDB()} aria-label="Toggle local database cache" />

apiserver/plane/utils/paginator.py (5)
85-85: Consider the performance implications of increasing MAX_LIMIT

Increasing `MAX_LIMIT` from 100 to 1000 allows for larger result sets per request, which aligns with the goal of improving performance on the boards. However, this change could potentially impact server performance and response times for large datasets.

Monitor the server's performance after this change, especially for requests that utilize the maximum limit. Consider implementing rate limiting or pagination strategies to prevent potential abuse or performance degradation.
121-121: LGTM: Updated default limit in get_result method

The change to increase the default `limit` from 100 to 1000 in the `get_result` method is consistent with the `MAX_LIMIT` update. This will allow for larger default result sets, potentially improving client-side performance.

Consider updating the method's documentation to reflect this change in default behavior, as it may affect API consumers who rely on the previous default value.
730-731: LGTM: Updated pagination limits in get_per_page method

The changes to increase both `default_per_page` and `max_per_page` from 100 to 1000 are consistent with the previous updates. This allows for larger page sizes in pagination, potentially improving performance for clients requesting large datasets.

Update the error message on line 738 to reflect the new maximum value:

```diff
- detail=f"Invalid per_page value. Cannot exceed {max_per_page}."
+ detail=f"Invalid per_page value. Cannot exceed {max_per_page}. Default is {default_per_page}."
```

This will provide more informative feedback to API consumers.
750-751: LGTM: Updated pagination limits in paginate method

The changes to increase both `default_per_page` and `max_per_page` from 100 to 1000 in the `paginate` method are consistent with the previous updates throughout the file. This ensures uniform pagination behavior across different methods.

Consider defining a module-level constant for the new default value (1000) to improve maintainability:

```python
DEFAULT_PAGE_SIZE = 1000

# Then use it in method signatures:
def paginate(
    self,
    request,
    on_results=None,
    paginator=None,
    paginator_cls=OffsetPaginator,
    default_per_page=DEFAULT_PAGE_SIZE,
    max_per_page=DEFAULT_PAGE_SIZE,
    # ... rest of the parameters
):
    # ... method implementation
```

This will make it easier to update the default page size across the module in the future if needed.
Line range hint 85-751: Summary: Increased pagination limits for improved performance

The changes in this file consistently increase pagination limits from 100 to 1000 across various methods and constants. This aligns well with the PR objective of improving performance on the boards by allowing larger result sets to be fetched in a single request.

Key points:

- `MAX_LIMIT` increased to 1000
- Default and maximum page sizes increased to 1000 in multiple methods
- Changes are consistent throughout the file
These modifications will likely improve client-side performance by reducing the number of requests needed for large datasets. However, they may also increase server load and response times for large result sets.
- Monitor server performance after deploying these changes, especially for requests using the maximum limit.
- Consider implementing or adjusting rate limiting to prevent potential abuse of the increased limits.
- Update API documentation to reflect these new default and maximum values for pagination.
- Evaluate the impact on database query performance and optimize if necessary, especially for queries returning large result sets.
Overall, these changes appear to be a positive step towards improving performance, but careful monitoring and potential further optimizations may be needed to ensure optimal system behavior.
web/core/local-db/utils/load-issues.ts (1)
103-103: Simplify Conditional with Optional Chaining

The condition `if (values && values.length)` can be simplified using optional chaining to make the code more concise and readable.

Suggestion:

```diff
- if (values && values.length) {
+ if (values?.length) {
```

Tools
Biome
[error] 103-103: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
web/core/local-db/utils/utils.ts (4)
72-72: Simplify condition by removing redundant optional chaining

In the condition at line 72, the use of optional chaining is redundant. After confirming that `groupedResults[groupId]` is not `undefined`, you can access `groupedResults[groupId].results` directly.

Apply this diff to simplify the condition:

```diff
- if (groupedResults?.[groupId] !== undefined && Array.isArray(groupedResults?.[groupId]?.results)) {
+ if (groupedResults[groupId] !== undefined && Array.isArray(groupedResults[groupId].results)) {
```
107-109: Simplify condition by removing redundant optional chaining in getSubGroupedIssueResults

Similar to the previous comment, the condition at lines 107-109 has redundant optional chaining. Since you've checked that `subGroupedResults[groupId].results[subGroupId]` is not `undefined`, you can access properties directly.

Apply this diff:

```diff
- if (
-   subGroupedResults[groupId].results[subGroupId] !== undefined &&
-   Array.isArray(subGroupedResults[groupId].results[subGroupId]?.results)
- ) {
+ if (
+   subGroupedResults[groupId].results[subGroupId] !== undefined &&
+   Array.isArray(subGroupedResults[groupId].results[subGroupId].results)
+ ) {
```
134-134: Consider using util.promisify for the delay function

While the `delay` function works as intended, Node.js's built-in `util.promisify` can make the code slightly cleaner:

```diff
- export const delay = (ms: number) => new Promise((resolve) => setTimeout(resolve, ms));
+ import { promisify } from 'util';
+ export const delay = promisify(setTimeout);
```

Note, however, that `util.promisify` is a Node.js API; since this local-db code ships in the browser bundle, the existing Promise-based one-liner is likely the safer choice here.
6-8: Remove unused or commented-out code

Lines 6-8 contain both an active `log` function and a commented-out version. It's good practice to remove unused code to keep the codebase clean.

Apply this diff to remove the unused code:

```diff
- export const log = console.log;
-
- // export const log = () => {};
+ // If needed, define the `log` function appropriately.
```

web/core/local-db/utils/load-workspace.ts (2)
44-44: Add error handling for service calls

The service calls to fetch data may fail or return unexpected results. Consider adding try-catch blocks around these calls to handle exceptions and ensure that the application can handle errors gracefully without crashing.

Example modification:

```diff
- const objects = await issueLabelService.getWorkspaceIssueLabels(workspaceSlug);
+ let objects = [];
+ try {
+   objects = await issueLabelService.getWorkspaceIssueLabels(workspaceSlug);
+ } catch (error) {
+   console.error('Failed to load issue labels:', error);
+   return;
+ }
```

Also applies to: 58-58, 73-73, 87-87, 101-101, 121-121
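Where a single failing loader shouldn't abort the whole batch, `Promise.allSettled` is an alternative to wrapping each call in try-catch. A minimal sketch, assuming hypothetical loader functions (`loadWorkspaceSafely` and the `Loader` type are illustrative, not the actual service code):

```typescript
// Sketch: run loaders concurrently but isolate failures, so one failed
// service call does not reject the entire workspace load.
type Loader = (slug: string) => Promise<void>;

export const loadWorkspaceSafely = async (slug: string, loaders: Loader[]) => {
  const results = await Promise.allSettled(loaders.map((load) => load(slug)));
  const failed = results.filter((r) => r.status === "rejected").length;
  return { total: results.length, failed };
};
```

Failures can then be logged or retried individually instead of crashing the wrapper.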
134-141: Simplify promise handling in loadWorkSpaceData

You can streamline the code by passing the array of promises directly to `Promise.all` without explicitly pushing them into a separate array. This makes the code more concise and easier to read.

Apply this diff to simplify the code:

```diff
- const promises = [];
- promises.push(loadLabels(workspaceSlug));
- promises.push(loadModules(workspaceSlug));
- promises.push(loadCycles(workspaceSlug));
- promises.push(loadStates(workspaceSlug));
- promises.push(loadEstimatePoints(workspaceSlug));
- promises.push(loadMembers(workspaceSlug));
- await Promise.all(promises);
+ await Promise.all([
+   loadLabels(workspaceSlug),
+   loadModules(workspaceSlug),
+   loadCycles(workspaceSlug),
+   loadStates(workspaceSlug),
+   loadEstimatePoints(workspaceSlug),
+   loadMembers(workspaceSlug),
+ ]);
```

web/core/local-db/utils/query-constructor.ts (1)
50-51: Remove Console Logging of SQL Queries in Production

The `console.log` statements outputting the SQL queries may expose sensitive information and clutter the console output in production environments.

Consider removing these logs or using a logging library with appropriate log levels to control when these statements are output.

```diff
- console.log("###", sql);
```

Also applies to: 66-67, 122-123
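The log-level suggestion can be as small as a gate around the existing `console.log`; a minimal sketch — the `logSql` name and the `debug`/`sink` options are hypothetical, not part of the PR:

```typescript
// Sketch: only emit SQL statements when a debug flag is on, so production
// consoles stay clean. The injectable sink makes the gate testable.
export const logSql = (
  sql: string,
  opts: { debug: boolean; sink?: (msg: string) => void }
): void => {
  if (opts.debug) (opts.sink ?? console.log)(`### ${sql}`);
};
```

Call sites then pass a single flag (e.g. derived from the environment) instead of sprinkling raw `console.log` calls.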
web/core/layouts/auth-layout/project-wrapper.tsx (1)
117-122: Add comments to explain sequential calls to 'fetchModulesSlim' and 'fetchModules'

The SWR hook is making sequential calls to `fetchModulesSlim` and `fetchModules`. To improve code readability and maintainability, consider adding comments to explain the necessity of these sequential calls and how they contribute to performance optimization.

web/core/store/issue/project/issue.store.ts (1)
105-105: Improve comment clarity

The comment `// clear while using local to have the no load effect.` may be unclear to readers. Consider rephrasing it for better understanding, such as:

```diff
-// clear while using local to have the no load effect.
+// If not grouping, clear to prevent triggering the loading effect locally.
```

web/core/layouts/auth-layout/workspace-wrapper.tsx (1)
99-99: Remove commented-out code

The line `// persistence.reset();` is commented out. If the `reset` operation is no longer required, consider removing this line to keep the codebase clean. Retaining unused code can lead to confusion and reduce maintainability.

web/core/store/issue/issue-details/issue.store.ts (1)
38-38: Consider renaming isLocalDBIssueDescription for clarity

The property `isLocalDBIssueDescription` suggests it pertains only to the issue description. Since it indicates whether the entire issue was loaded from the local database, consider renaming it to `isIssueFromLocalDB` or `isIssueLoadedFromLocalDB` for better clarity.

Also applies to: 45-45
web/core/local-db/utils/query.utils.ts (1)
302-302: Avoid redundant calls to translateQueryParams

In `getSingleFilterFields`, `translateQueryParams` is called again even if the translated queries are already available. This redundancy can affect performance.

Consider passing the already translated `queries` to `getSingleFilterFields`:

```diff
- export const getSingleFilterFields = (queries: any) => {
+ export const getSingleFilterFields = (queries: QueryParams) => {
    const { order_by, cursor, per_page, group_by, sub_group_by, sub_issue, state_group, ...otherProps } =
-     translateQueryParams(queries);
+     queries;
```

web/core/local-db/storage.sqlite.ts (1)
273-274: Consider removing or adjusting verbose logging statements.

Multiple `console.log` and `console.table` statements are present, which might not be suitable for a production environment. To avoid cluttering the console and potentially exposing sensitive information, consider using a logging library with adjustable log levels or removing these statements if they're no longer necessary.

Example changes:

```diff
-console.log("#### Queries", queries);
...
-console.log("#### Issue Results", issueResults.length);
...
-console.table(times);
```

Also applies to: 293-296, 329-330
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files ignored due to path filters (1)
- `yarn.lock` is excluded by `!**/yarn.lock`, `!**/*.lock`
Files selected for processing (47)
- apiserver/plane/app/urls/issue.py (3 hunks)
- apiserver/plane/app/views/init.py (1 hunks)
- apiserver/plane/app/views/issue/base.py (4 hunks)
- apiserver/plane/bgtasks/dummy_data_task.py (3 hunks)
- apiserver/plane/db/management/commands/create_dummy_data.py (1 hunks)
- apiserver/plane/utils/global_paginator.py (3 hunks)
- apiserver/plane/utils/paginator.py (4 hunks)
- web/app/profile/page.tsx (5 hunks)
- web/ce/constants/issues.ts (1 hunks)
- web/core/components/issues/issue-layouts/kanban/default.tsx (2 hunks)
- web/core/components/issues/peek-overview/view.tsx (3 hunks)
- web/core/layouts/auth-layout/project-wrapper.tsx (4 hunks)
- web/core/layouts/auth-layout/workspace-wrapper.tsx (3 hunks)
- web/core/local-db/storage.sqlite.ts (1 hunks)
- web/core/local-db/utils/constants.ts (1 hunks)
- web/core/local-db/utils/data.utils.ts (1 hunks)
- web/core/local-db/utils/indexes.ts (1 hunks)
- web/core/local-db/utils/load-issues.ts (1 hunks)
- web/core/local-db/utils/load-workspace.ts (1 hunks)
- web/core/local-db/utils/query-constructor.ts (1 hunks)
- web/core/local-db/utils/query-executor.ts (1 hunks)
- web/core/local-db/utils/query.utils.ts (1 hunks)
- web/core/local-db/utils/schemas.ts (1 hunks)
- web/core/local-db/utils/tables.ts (1 hunks)
- web/core/local-db/utils/utils.ts (1 hunks)
- web/core/services/issue/issue.service.ts (16 hunks)
- web/core/store/issue/archived/issue.store.ts (1 hunks)
- web/core/store/issue/cycle/filter.store.ts (2 hunks)
- web/core/store/issue/cycle/issue.store.ts (3 hunks)
- web/core/store/issue/draft/issue.store.ts (1 hunks)
- web/core/store/issue/helpers/base-issues.store.ts (5 hunks)
- web/core/store/issue/helpers/issue-filter-helper.store.ts (1 hunks)
- web/core/store/issue/issue-details/issue.store.ts (4 hunks)
- web/core/store/issue/module/filter.store.ts (2 hunks)
- web/core/store/issue/module/issue.store.ts (2 hunks)
- web/core/store/issue/profile/filter.store.ts (1 hunks)
- web/core/store/issue/profile/issue.store.ts (1 hunks)
- web/core/store/issue/project-views/filter.store.ts (2 hunks)
- web/core/store/issue/project-views/issue.store.ts (2 hunks)
- web/core/store/issue/project/filter.store.ts (2 hunks)
- web/core/store/issue/project/issue.store.ts (2 hunks)
- web/core/store/issue/workspace/issue.store.ts (1 hunks)
- web/core/store/module.store.ts (2 hunks)
- web/core/store/user/index.ts (6 hunks)
- web/core/store/user/settings.store.ts (4 hunks)
- web/next.config.js (1 hunks)
- web/package.json (1 hunks)
Additional context used
Ruff
apiserver/plane/app/views/__init__.py
117-117: `.issue.base.DeletedIssuesListViewSet` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
GitHub Check: Codacy Static Code Analysis
apiserver/plane/bgtasks/dummy_data_task.py
[notice] 501-501: apiserver/plane/bgtasks/dummy_data_task.py#L501
Trailing whitespace
[warning] 509-509: apiserver/plane/bgtasks/dummy_data_task.py#L509
Depending on the context, generating weak random numbers may expose cryptographic functions, which rely on these numbers, to be exploitable.
[warning] 509-509: apiserver/plane/bgtasks/dummy_data_task.py#L509
Standard pseudo-random generators are not suitable for security/cryptographic purposes.
[warning] 577-577: apiserver/plane/bgtasks/dummy_data_task.py#L577
Depending on the context, generating weak random numbers may expose cryptographic functions, which rely on these numbers, to be exploitable.
[warning] 577-577: apiserver/plane/bgtasks/dummy_data_task.py#L577
Standard pseudo-random generators are not suitable for security/cryptographic purposes.
GitHub Check: lint-web
web/core/layouts/auth-layout/project-wrapper.tsx
[failure] 29-29: `@/local-db/storage.sqlite` import should occur before import of `@/plane-web/constants/user-permissions`
web/core/layouts/auth-layout/workspace-wrapper.tsx
[failure] 22-22: `@/local-db/storage.sqlite` import should occur before import of `@/plane-web/constants/user-permissions`
web/core/local-db/storage.sqlite.ts
[failure] 58-58: Include a description after the "@ts-expect-error" directive to explain why the @ts-expect-error is necessary. The description must be 3 characters or longer

web/core/local-db/utils/query.utils.ts
[failure] 82-82: 'sql' is never reassigned. Use 'const' instead

web/core/store/user/index.ts
[failure] 22-22: `@/plane-web/constants/issues` import should occur before import of `@/plane-web/constants/user-permissions`
Biome
web/core/local-db/utils/load-issues.ts
[error] 103-103: Change to an optional chain.
Unsafe fix: Change to an optional chain.
(lint/complexity/useOptionalChain)
Additional comments not posted (63)
web/core/local-db/utils/constants.ts (1)
1-21: Overall, excellent work on defining these constants!

The constants defined in this file (`ARRAY_FIELDS`, `GROUP_BY_MAP`, and `PRIORITY_MAP`) are well-structured and align perfectly with the PR objectives of improving performance and UX on the boards. They provide a solid foundation for consistent handling of issue data, grouping, and prioritization throughout the application.

The use of these constants will likely contribute to more efficient client-side operations and a better user experience, as outlined in the PR objectives.
web/ce/constants/issues.ts (1)
Line range hint 1-35: Summary: Good start for implementing client-side storage

The addition of `ENABLE_LOCAL_DB_CACHE` is a good first step towards implementing the client-side storage feature for issues. This constant will likely be used in other parts of the codebase to conditionally enable or disable local caching.

To ensure this constant is being utilized correctly, let's check for its usage across the codebase. This will help verify that the constant is being imported and used appropriately in other files.

Verification successful

Verification Successful: `ENABLE_LOCAL_DB_CACHE` is correctly utilized across the codebase

The constant `ENABLE_LOCAL_DB_CACHE` is properly exported from `web/ce/constants/issues.ts` and is being imported and used in `web/core/store/user/index.ts` to control local DB caching as intended.

Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Check for usage of ENABLE_LOCAL_DB_CACHE across the codebase

# Test: Search for imports or usage of ENABLE_LOCAL_DB_CACHE
rg --type typescript --type javascript 'ENABLE_LOCAL_DB_CACHE'
```

Length of output: 97

```shell
#!/bin/bash
# Description: Check for usage of ENABLE_LOCAL_DB_CACHE across the codebase

# Test: Search for imports or usage of ENABLE_LOCAL_DB_CACHE in .ts and .js files
rg 'ENABLE_LOCAL_DB_CACHE' -g "*.ts" -g "*.js"
```

Length of output: 314
web/core/local-db/utils/data.utils.ts (1)
26-30: LGTM: Efficient concurrent execution

The `getSubIssuesWithDistribution` function is well-implemented:

- It correctly uses `Promise.all` to run `getSubIssues` and `getSubIssueDistribution` concurrently, which is efficient.
- The destructuring of the `Promise.all` result is correct.
- The function returns an object with the correct structure as per the requirements.
web/core/local-db/utils/tables.ts (1)
1-13: LGTM: Imports are appropriate and well-structured.

The imports are correctly bringing in the necessary dependencies for database operations and schema definitions.
web/core/local-db/utils/indexes.ts (2)
1-68: Overall assessment: Well-implemented index creation for improved query performance.

This new file introduces a robust system for creating database indexes, which should significantly enhance query performance for the local SQLite database. The implementation is well-structured, uses promises effectively for concurrent operations, and includes error handling.

Key strengths:

- Comprehensive coverage of indexes for various tables.
- Efficient use of `Promise.all` for concurrent index creation.
- Performance logging to measure the time taken for index creation.

Minor suggestions for improvement have been provided in the individual comments, including:

- Considering a more robust logging solution.
- Clarifying or addressing TODO comments.
- Enhancing error logging for better debugging.

These changes align well with the PR objectives of improving performance and user experience on the boards by optimizing local data management.
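The concurrent pattern praised above — issuing the CREATE INDEX statements via `Promise.all` and timing the batch — can be sketched as follows. The `Exec` signature is a stand-in for the sqlite-wasm handle (not the library's actual API), and `issues_state_idx` is a hypothetical second index; only `issue_meta_all_idx` appears in the PR:

```typescript
// Sketch: create indexes concurrently and return the elapsed time in ms,
// so the caller can log it. `exec` is an injected stand-in for persistence.db.exec.
type Exec = (opts: { sql: string }) => Promise<void>;

const INDEX_STATEMENTS: string[] = [
  "CREATE INDEX IF NOT EXISTS issue_meta_all_idx ON issue_meta (issue_id, key, value)",
  // Hypothetical example of an additional index:
  "CREATE INDEX IF NOT EXISTS issues_state_idx ON issues (state_id)",
];

export const createIndexesSketch = async (exec: Exec): Promise<number> => {
  const start = Date.now();
  await Promise.all(INDEX_STATEMENTS.map((sql) => exec({ sql })));
  return Date.now() - start;
};
```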
27-30: ⚠️ Potential issue

Remove misleading comment and verify index coverage.

The function creates an index for the `issue_meta` table, which should improve query performance. However, there are two points to address:

- The comment "Drop indexes" is misleading as no indexes are being dropped in this function.
- Only one index is being created. While this might be sufficient, it's worth verifying if additional indexes are needed for other common query patterns on this table.

Remove the misleading comment:

```diff
- // Drop indexes
  await persistence.db.exec({ sql: `CREATE INDEX issue_meta_all_idx ON issue_meta (issue_id,key,value)` });
```

To ensure adequate index coverage, please review the most common queries performed on the `issue_meta` table. You can use the following script to analyze the table structure and existing indexes. Based on the results, determine if additional indexes are needed to support other frequent query patterns.

Verification successful

Remove the misleading "Drop indexes" comment.

The `createIssueMetaIndexes` function only creates an index for the `issue_meta` table and does not perform any index dropping. Please update the comment to accurately reflect the function's behavior.

Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Analyze the issue_meta table structure and indexes

# Test: Retrieve table structure
echo "Table Structure:"
sqlite3 path/to/your/database.sqlite "PRAGMA table_info(issue_meta);"

# Test: List existing indexes
echo "Existing Indexes:"
sqlite3 path/to/your/database.sqlite "SELECT name, sql FROM sqlite_master WHERE type='index' AND tbl_name='issue_meta';"
```

Length of output: 445

```shell
#!/bin/bash
# Description: Search for index creation statements related to the issue_meta table

# Search for CREATE INDEX statements in the codebase
rg --multiline --pretty "CREATE INDEX .* ON issue_meta" .
```

Length of output: 207
web/package.json (1)
36-36: Approved: Addition of SQLite WebAssembly dependency

The addition of `@sqlite.org/sqlite-wasm` aligns well with the PR objectives of improving performance and user experience on the boards by enabling client-side storage of issues. This should contribute to reduced server load and faster response times.

To ensure this new dependency is being used correctly, please run the following script. This script will help us verify that the SQLite WebAssembly library is being imported and used in the codebase, and if there are any configuration files related to its setup.

Verification successful

Approved: Addition of SQLite WebAssembly dependency

The `@sqlite.org/sqlite-wasm` dependency is correctly integrated and utilized across multiple relevant files in the codebase, aligning with the PR objectives to enhance client-side performance and user experience.

Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Description: Verify the usage of @sqlite.org/sqlite-wasm in the codebase

# Test: Search for import statements or usage of SQLite
rg --type typescript --type javascript 'from\s+[\'"]@sqlite\.org/sqlite-wasm[\'"]|import\s+.*[\'"]@sqlite\.org/sqlite-wasm[\'"]|\bsqlite\b'

# Test: Check for any configuration files related to SQLite
fd -e config.js -e config.ts sqlite
```

Length of output: 93

```shell
#!/bin/bash
# Description: Verify the usage of @sqlite.org/sqlite-wasm in the codebase

# Test: Search for import statements or usage of SQLite
rg --type typescript --type javascript "from\s+['\"]@sqlite\.org/sqlite-wasm['\"]|import\s+.*['\"]@sqlite\.org/sqlite-wasm['\"]|\bsqlite\b"

# Test: Check for any configuration files related to SQLite
fd sqlite -e config.js -e config.ts
```

Length of output: 222

```shell
#!/bin/bash
# Description: Verify the usage of @sqlite.org/sqlite-wasm in the codebase

# Test: Search for import statements or usage of SQLite in relevant file types
rg -g '*.ts' -g '*.tsx' -g '*.js' -g '*.jsx' "from\s+['\"]@sqlite\.org/sqlite-wasm['\"]|import\s+.*['\"]@sqlite\.org/sqlite-wasm['\"]|\bsqlite\b"

# Test: Check for any configuration files related to SQLite
fd sqlite -e config.js -e config.ts
```

Length of output: 1600
apiserver/plane/utils/global_paginator.py (2)
2-2: LGTM: Appropriate import for ceiling function.

The import of `ceil` from the `math` module is correct and necessary for the new `total_pages` calculation.

2-2: Summary: Valuable enhancement to pagination information.

The changes in this file effectively implement the calculation and inclusion of the total number of pages in the pagination information. This addition aligns well with the PR objectives of improving performance and UX on the boards. By providing the total page count, clients can make more informed decisions about pagination, potentially reducing the number of unnecessary requests and improving the overall user experience.

The implementation is clean, minimal, and doesn't introduce any breaking changes to the existing functionality. Great job on this improvement!

Also applies to: 42-43, 81-81
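The total-pages value being added is standard ceiling division (the Python side uses `math.ceil` equivalently); a client-side sketch of the same computation:

```typescript
// Ceiling division: number of pages needed to hold `total` items
// at `perPage` items per page.
export const totalPages = (total: number, perPage: number): number =>
  Math.ceil(total / perPage);
```

With the new per-page limit of 1000, a client can derive the request count up front from a single response.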
apiserver/plane/db/management/commands/create_dummy_data.py (1)
Line range hint 76-84: Consider the implications of switching to synchronous execution

The change from `create_dummy_data.delay(...)` to `create_dummy_data(...)` shifts the execution from asynchronous to synchronous. This modification has several potential impacts:

- Performance: The command will now wait for the dummy data creation to complete before proceeding, which could lead to longer execution times, especially for large datasets.
- User Experience: The command line interface will appear to "hang" while creating data, potentially confusing users accustomed to the previous behavior.
- Error Handling: Synchronous execution allows for immediate error feedback, which could be beneficial for debugging.
- Resource Usage: This change might lead to increased memory usage on the machine running the command, as all data creation happens in a single process.

To address these concerns, consider the following options:

- Revert to asynchronous execution if the original behavior was intentional.
- Add a progress indicator to provide feedback during the potentially long-running synchronous operation.
- Implement a hybrid approach, creating data in batches to balance between immediate feedback and performance.

Additionally, please update the success message on line 89 to accurately reflect the synchronous nature of the operation:

```diff
- self.style.SUCCESS("Data is pushed to the queue")
+ self.style.SUCCESS("Dummy data created successfully")
```

To ensure this change doesn't conflict with other parts of the system, please run the following verification.

Verification successful

Verification Successful: No asynchronous usages of `create_dummy_data.delay` found

The change to synchronous execution is consistent across the codebase, ensuring no unintended side effects.

Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Check for other usages of create_dummy_data to ensure consistency
rg "create_dummy_data" --type python
```

Length of output: 67

```shell
#!/bin/bash
# Search for usages of create_dummy_data in Python files
grep -rnw '.' -e 'create_dummy_data' --include \*.py
```

Length of output: 364

```shell
#!/bin/bash
# Search for asynchronous usages of create_dummy_data in Python files
grep -rnw '.' -e 'create_dummy_data\.delay' --include \*.py
```

Length of output: 60
apiserver/plane/app/views/__init__.py (1)
117-117: New import added for DeletedIssuesListViewSet

The addition of `DeletedIssuesListViewSet` aligns with the PR objectives of improving issue management and potentially allowing for the management or display of deleted issues in the application. This is a good addition that supports the feature of storing issues on the client side for better performance and UX on the boards.

However, the static analysis tool Ruff has flagged this import as unused. While it's common in `__init__.py` files to import components for use in other parts of the application, it's worth verifying if this import is indeed used elsewhere.

To ensure this import is being utilized, you can run the following command to search for its usage across the project. If the import is confirmed to be unused, consider removing it or adding it to `__all__` if it's intended for export.

Verification successful

Import of `DeletedIssuesListViewSet` is correctly utilized

The `DeletedIssuesListViewSet` imported in `apiserver/plane/app/views/__init__.py` is actively used in `apiserver/plane/app/urls/issue.py` and `apiserver/plane/app/views/issue/base.py`. The Ruff warning about the unused import is a false positive and can be safely ignored.

Scripts executed

The following scripts were executed for the analysis:

```shell
#!/bin/bash
# Search for usage of DeletedIssuesListViewSet in Python files
rg --type python "DeletedIssuesListViewSet"
```

Length of output: 74

```shell
#!/bin/bash
# Search for usage of DeletedIssuesListViewSet in Python files
rg --type py "DeletedIssuesListViewSet"
```

Length of output: 334

Tools

Ruff

117-117: `.issue.base.DeletedIssuesListViewSet` imported but unused; consider removing, adding to `__all__`, or using a redundant alias (F401)
web/core/store/issue/workspace/issue.store.ts (1)
112-112: LGTM! Consider updating documentation and verifying parent method.

The change aligns with the PR objectives of improving performance and UX on the boards. It potentially optimizes how issues are fetched and stored by providing additional context to the `onfetchIssues` method.

To ensure the change is fully integrated and documented:

- Verify that the parent `onfetchIssues` method in `BaseIssuesStore` correctly handles these new parameters.
- Update the method documentation for `fetchIssues` to reflect the new `isExistingPaginationOptions` parameter and its purpose.

Run the following script to verify the `onfetchIssues` method implementation in the parent class.
109-109
: LGTM. Consider adding a comment for clarity.The update to the
onfetchIssues
method call aligns with the PR objectives of improving performance and user experience. The addition of new parameters suggests an optimization in how issues are fetched and processed.To ensure this change doesn't have unintended consequences, please verify:
- The
onfetchIssues
method in the parent class correctly handles these new parameters.- Any other subclasses of
BaseIssuesStore
have been updated if necessary.Run the following script to check for other occurrences of
onfetchIssues
:Consider adding a brief comment explaining the purpose of the new parameters:
this.onfetchIssues( response, options, workspaceSlug, projectId, undefined, // Optional parameter for future use !isExistingPaginationOptions // Controls pagination behavior );This will improve code maintainability and make it easier for other developers to understand the purpose of these parameters.
Verification successful
LGTM. Consider adding a comment for clarity.
The update to the
onfetchIssues
method call aligns with the PR objectives of improving performance and user experience. The addition of new parameters suggests an optimization in how issues are fetched and processed.Consider adding a brief comment explaining the purpose of the new parameters:
this.onfetchIssues( response, options, workspaceSlug, projectId, undefined, // Optional parameter for future use !isExistingPaginationOptions // Controls pagination behavior );This will improve code maintainability and make it easier for other developers to understand the purpose of these parameters.
Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Search for other occurrences of onfetchIssues method calls rg --type typescript 'onfetchIssues\(' web/core/storeLength of output: 90
Script:
#!/bin/bash # Search for other occurrences of onfetchIssues method calls in TypeScript files rg 'onfetchIssues\(' 'web/core/store' --glob '*.ts' --glob '*.tsx'Length of output: 1379
web/core/components/issues/issue-layouts/kanban/default.tsx (2)
47-47
: LGTM. Consider adding documentation for the new property.The addition of the
sub_group_index
property to theIKanBan
interface is a good enhancement that allows for more granular control over sub-group positioning. This aligns well with the PR objective of improving performance and UX on the boards.To improve maintainability, consider adding a brief comment explaining the purpose and usage of this new property.
2-2
: Overall, these changes enhance the Kanban component's capabilities.The addition of the
clone
import and thesub_group_index
property to theIKanBan
interface suggests preparation for enhanced sub-grouping functionality. These changes align well with the PR objectives of improving performance and user experience on the boards.To ensure these changes don't introduce any regressions:
- Thoroughly test the Kanban board functionality, especially related to sub-grouping.
- Verify that existing features still work as expected.
- Check for any performance impacts, particularly when dealing with large numbers of issues or complex groupings.
Run the following script to identify potentially affected test files:

```shell
#!/bin/bash
# Description: Find test files related to Kanban functionality
# Test: Search for test files containing 'kanban' (case-insensitive)
rg --type-add 'test:*.{test,spec}.{ts,tsx,js,jsx}' --type test -i 'kanban' ./web
```

Also applies to: 47-47
apiserver/plane/app/urls/issue.py (3)
23-23: New view for deleted issues aligns with PR objectives.

The addition of `DeletedIssuesListViewSet` is consistent with the PR's goal of improving issue management. This new view will likely support the feature for listing deleted issues, enhancing the overall user experience on the boards.
315-319: New endpoint for listing deleted issues enhances issue management.

The addition of this endpoint for listing deleted issues (`/deleted-issues/`) is a valuable feature that aligns well with the PR's objective of improving issue management and user experience on the boards. It will allow users to review and potentially restore deleted issues, which is a common requirement in project management tools.

To ensure this new endpoint is properly integrated, please verify the following:
- The `DeletedIssuesListViewSet` is implemented correctly.
- Appropriate permissions are set for accessing deleted issues.
- This endpoint is utilized in the frontend to display deleted issues when needed.
You can use the following script to check the implementation of `DeletedIssuesListViewSet`:

```shell
#!/bin/bash
# Check the implementation of DeletedIssuesListViewSet
ast-grep --lang python --pattern $'class DeletedIssuesListViewSet($$_):
  $$$
'
```
43-47: Verify the impact of the updated paginated issues path.

The path for paginated issues has been changed from project-specific (`projects/<uuid:project_id>/v2/issues/`) to workspace-level (`v2/issues/`). This modification might be related to the client-side storage implementation mentioned in the PR objectives.

Please confirm that:
- This change is intentional and aligns with the new client-side storage mechanism.
- All related frontend code has been updated to use this new path.
- The `IssuePaginatedViewSet` has been modified to handle workspace-level pagination if necessary.

You can use the following script to check for any remaining references to the old path:
web/core/components/issues/peek-overview/view.tsx (2)
63-63: LGTM: Addition of `isLocalDBIssueDescription` property

The addition of the `isLocalDBIssueDescription` property aligns with the PR objective of storing issues on the client side for better performance. This change appears to be correct and necessary for the new functionality.
181-182: Approved with a request for clarification on UX

The addition of `isLocalDBIssueDescription` to the `disabled` prop is consistent with the PR objective of improving UX for locally stored issues. This change ensures that issue details are disabled when the description is stored in the local database.

However, could you please clarify the user experience implications of disabling the issue details when they are stored locally? Are users still able to view the information, or is there an alternative way to interact with locally stored issues?
Also applies to: 220-221
web/core/store/issue/project-views/filter.store.ts (3)
233-233: Clarify the relationship between `getShouldReFetchIssues` and `getShouldClearIssues`.

The addition of `getShouldReFetchIssues` introduces another layer of control over when issues are fetched. To ensure this change aligns with the PR objectives of improving performance:

- Could you explain the implementation of `getShouldReFetchIssues` and how it differs from `getShouldClearIssues`?
- What scenarios would trigger a re-fetch without clearing the local database?
- How do these two methods work together to optimize the balance between data freshness and performance?

To better understand the usage and implementation of `getShouldReFetchIssues`, please run:

```shell
#!/bin/bash
# Description: Find the implementation and usage of getShouldReFetchIssues
rg 'getShouldReFetchIssues' --type typescript
```
230-232: Provide details on `getShouldClearIssues` method and its implications.

The introduction of `getShouldClearIssues` to determine when to clear issues from the local database is a significant change. To ensure this doesn't lead to unnecessary data fetching or performance issues:

- Could you provide more information about the implementation of `getShouldClearIssues`?
- What specific display filter changes trigger this clearing of issues?
- Have you considered any potential performance impacts of frequently clearing and re-fetching issues?

To better understand the usage and implementation of `getShouldClearIssues`, please run:

```shell
#!/bin/bash
# Description: Find the implementation and usage of getShouldClearIssues
rg 'getShouldClearIssues' --type typescript
```
193-193: Verify the impact of always using "mutation" for fetching issues.

The change to always use "mutation" when calling `fetchIssuesWithExistingPagination` simplifies the logic but may affect how issues are fetched.

Please run the following script to check if this change has any unintended consequences on issue fetching:
Verification successful
The change to always use "mutation" for fetching issues is consistent across the codebase and does not introduce any unintended side effects.
Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check for any other occurrences of fetchIssuesWithExistingPagination
# to ensure consistent usage across the codebase
rg 'fetchIssuesWithExistingPagination' --type typescript
```

Length of output: 254
Script:
```shell
#!/bin/bash
# Description: Check for any other occurrences of fetchIssuesWithExistingPagination
# to ensure consistent usage across the codebase
rg 'fetchIssuesWithExistingPagination' --type ts
```

Length of output: 4728
web/core/store/issue/project/filter.store.ts (1)
223-225: LGTM! Consider adding clarifying comments.

The addition of clearing issues based on certain display filter changes aligns well with the PR objective of improving performance and UX on the boards. This change provides more granular control over when to clear local issues versus when to re-fetch them.

Please ensure that the `getShouldClearIssues` method is properly implemented and tested. Can you provide its implementation or the file where it's located?

Consider adding a comment to explain the difference between clearing and re-fetching issues, and why both operations might be necessary in different scenarios. This will help future developers understand the logic better. For example:
```ts
// Clear local issues when certain filters (e.g., layout) change to ensure data consistency
if (this.getShouldClearIssues(updatedDisplayFilters)) {
  this.rootIssueStore.projectIssues.clear(true, true);
}
// Re-fetch issues when necessary to update the view with the latest data
if (this.getShouldReFetchIssues(updatedDisplayFilters)) {
  this.rootIssueStore.projectIssues.fetchIssuesWithExistingPagination(workspaceSlug, projectId, "mutation");
}
```

web/core/store/issue/cycle/filter.store.ts (3)
195-195: Verify the impact of always using "mutation" for fetching issues.

The condition for determining the fetch type has been simplified to always use "mutation". While this simplifies the code, it might lead to unnecessary data fetching in some scenarios.
Please confirm that this change doesn't negatively impact performance by over-fetching data. Consider running performance tests to ensure this simplification doesn't introduce unnecessary network requests.
236-239: New conditions for clearing and re-fetching issues align with PR objectives.

The introduction of `getShouldClearIssues` and `getShouldReFetchIssues` conditions supports the PR objective of storing issues on the client for better performance. This allows for more granular control over when to clear the local cache and when to re-fetch data from the server.

However, to ensure these new conditions are working as expected:

- Can you provide unit tests for the `getShouldClearIssues` and `getShouldReFetchIssues` methods?
- Please document the specific scenarios that trigger clearing and re-fetching of issues to help maintain this logic in the future.
Line range hint 1-308: Overall, changes align with PR objectives and improve client-side issue management.

The modifications in this file contribute to the PR's goal of storing issues on the client for better performance and UX on the boards. The new conditions for clearing and re-fetching issues provide more granular control over client-side data management, which should lead to improved responsiveness and reduced server load.
To further enhance this implementation:
- Consider adding comprehensive unit tests for the new methods (`getShouldClearIssues` and `getShouldReFetchIssues`) to ensure their reliability.
- Document the specific scenarios that trigger clearing and re-fetching of issues to aid in future maintenance and understanding of the system's behavior.
- Monitor the performance impact of always using "mutation" for fetching issues to ensure it doesn't lead to over-fetching in certain scenarios.
These changes form a crucial part of the client-side storage mechanism and should contribute significantly to the overall performance improvements outlined in the PR objectives.
web/core/store/issue/module/filter.store.ts (3)
198-198: LGTM: Simplified filter update logic

The change to always use "mutation" when calling `fetchIssuesWithExistingPagination` simplifies the code and ensures consistent behavior when updating filters. This is a good improvement that reduces complexity while maintaining the expected functionality.
Line range hint 242-249: Approved: Optimized re-fetching of issues

The addition of the `getShouldReFetchIssues` condition before re-fetching issues is a good optimization. It can potentially reduce unnecessary API calls and improve performance.

For a comprehensive review:

- Could you provide more details about the `getShouldReFetchIssues` method?
- What specific conditions trigger the re-fetching of issues?
- How does this new condition interact with the previous `getShouldClearIssues` check?

To better understand the `getShouldReFetchIssues` method, please run the following command:

```shell
#!/bin/bash
# Search for the getShouldReFetchIssues method definition
rg --type typescript -A 10 "getShouldReFetchIssues\s*\("
```
239-241: Approved: New condition for clearing local issues

The addition of the `getShouldClearIssues` check before clearing issues from the local database is a good practice to ensure data consistency when significant display changes occur. This can help prevent stale or inconsistent data from being displayed to the user.

However, for a complete understanding:

- Could you provide more information about the `getShouldClearIssues` method?
- What specific conditions trigger the clearing of issues?
- Have you considered any potential performance impacts of frequently clearing the local database?

To better understand the `getShouldClearIssues` method, please run the following command:

web/core/store/issue/helpers/issue-filter-helper.store.ts (1)
270-282: Verify the usage of `getShouldClearIssues` in the application.

The new `getShouldClearIssues` method is a valuable addition to the `IssueFilterHelperStore` class. To ensure its effectiveness:
- Confirm that this method is being called at the appropriate points in the issue management flow.
- Verify that the clearing of issues based on layout changes doesn't negatively impact user experience or performance.
- Consider adding unit tests for this method to ensure its behavior remains consistent as the codebase evolves.
To help verify the usage of this method, you can run the following script:
This script will help identify where the new method is being used and if it's properly tested.
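From the review's description, the check likely boils down to comparing layout-type display filters before and after an update. A hypothetical sketch of that idea — the real implementation in `issue-filter-helper.store.ts` may differ:

```typescript
interface DisplayFilters {
  layout?: string;
  order_by?: string;
}

// Hypothetical: clear locally cached issues only when the layout changes,
// since a layout switch invalidates how issues were grouped locally.
const getShouldClearIssues = (current: DisplayFilters, updated: DisplayFilters): boolean =>
  updated.layout !== undefined && updated.layout !== current.layout;

console.log(getShouldClearIssues({ layout: "list" }, { layout: "kanban" })); // true
console.log(getShouldClearIssues({ layout: "list" }, { order_by: "priority" })); // false
```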
apiserver/plane/bgtasks/dummy_data_task.py (2)
350-350: Approve: Reduced maximum character limit for issue descriptions

The reduction of the maximum character limit for generated text from 60,000 to 3,000 is a good optimization. This change will result in smaller issue descriptions, which aligns with the PR objective of improving performance and user experience on the boards. Smaller issue descriptions will lead to:
- Reduced data transfer between client and server
- Faster rendering of issues on the client-side
- More efficient client-side storage
These improvements contribute to the overall goal of enhancing performance and user experience.
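The effect of the cap is easy to sketch. The generator below is illustrative only, not the task's actual text generator:

```typescript
// Illustrative sketch: generated filler text is truncated to maxChars,
// mirroring the 60,000 -> 3,000 character reduction described above.
const generateDescription = (maxChars = 3000): string => {
  const words = ["lorem", "ipsum", "dolor", "sit", "amet"];
  let text = "";
  while (text.length < maxChars * 2) {
    text += words[Math.floor(Math.random() * words.length)] + " ";
  }
  return text.slice(0, maxChars);
};

console.log(generateDescription().length); // 3000
```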
Line range hint 1-685: Overall: Improved test data generation aligns with performance goals

The changes made to this file significantly improve the generation of test data, which indirectly supports the PR's objectives of enhancing performance and user experience on the boards:
- Reduced issue description size will lead to more efficient client-side storage and faster data transfer.
- Improved distribution of labels and modules across all issues provides more realistic test scenarios.
- Capping the number of labels and modules per issue prevents outliers that could skew performance metrics.
These improvements will allow for more accurate performance testing, helping to ensure that the client-side storage implementation meets its performance goals. The changes are well-thought-out and contribute positively to the overall project objectives.
Tools
GitHub Check: Codacy Static Code Analysis
[warning] 577-577: apiserver/plane/bgtasks/dummy_data_task.py#L577
Depending on the context, generating weak random numbers may expose cryptographic functions, which rely on these numbers, to be exploitable.
[warning] 577-577: apiserver/plane/bgtasks/dummy_data_task.py#L577
Standard pseudo-random generators are not suitable for security/cryptographic purposes.

web/app/profile/page.tsx (2)
72-72: LGTM: New hook for user settings

The addition of the `useUserSettings` hook to get `canUseLocalDB` and `toggleLocalDB` is consistent with the new local caching feature. This change prepares the component to manage local database caching settings.
Line range hint 1-494: Overall assessment: Well-implemented local database caching feature

The changes in this file consistently implement a new local database caching feature, which aligns with the PR objectives of improving performance and user experience. The feature is properly gated for enterprise editions, and the implementation follows React best practices. The new UI elements are well-integrated into the existing profile settings page.

A few minor suggestions have been made to further improve the code:
A few minor suggestions have been made to further improve the code:
- Memoizing the timezone label calculation for optimization.
- Adding an aria-label to the ToggleSwitch for better accessibility.
These changes contribute positively to the goal of storing issues on the client for better performance and UX on the boards.
web/core/store/module.store.ts (2)
42-42: LGTM: New method signature aligns with performance objectives

The addition of `fetchModulesSlim` to the `IModuleStore` interface is well-defined and consistent with other methods. The "Slim" naming convention suggests a performance-optimized version, which aligns with the PR's goal of enhancing performance on the boards.
291-309: Implementation looks good, with room for optimization

The `fetchModulesSlim` method is well-implemented and aligns with the PR's performance objectives. It efficiently fetches all workspace modules and filters them locally, potentially reducing server load. However, consider the following suggestions for improvement:
- Add more specific error logging in the catch block to aid in debugging.
- Consider implementing pagination for workspaces with a large number of modules to prevent potential performance issues.
Here's a suggested improvement for error handling:
```diff
  } catch (error) {
    this.loader = false;
+   console.error("Error fetching modules slim:", error);
    return undefined;
  }
```
To ensure this method doesn't impact performance negatively for large workspaces, we should verify the typical number of modules in a workspace. Run the following script:
This script will help us understand if pagination is necessary for the `fetchModulesSlim` method.

Verification successful

Verification Complete: Pagination Not Necessary

The `fetchModulesSlim` method operates efficiently with an average of 1.5 modules and a maximum of 2 modules per workspace. Given these low module counts, implementing pagination is not required at this time. However, adding more specific error logging in the catch block remains a valuable improvement for debugging purposes.

Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Check the average and maximum number of modules per workspace

# Get all TypeScript files that might contain module-related code
module_files=$(fd -e ts -e tsx | xargs grep -l "module" || true)

# Extract and analyze module counts
if [ -n "$module_files" ]; then
  echo "Analyzing module counts in the following files:"
  echo "$module_files"
  module_counts=$(cat $module_files | grep -oP '(?<=getModules\().*?(?=\))' | awk -F',' '{print NF}')
  if [ -n "$module_counts" ]; then
    echo "Module counts found:"
    echo "$module_counts"
    echo "Average module count: $(echo "$module_counts" | awk '{ total += $1; count++ } END { print total/count }')"
    echo "Maximum module count: $(echo "$module_counts" | sort -rn | head -1)"
  else
    echo "No module counts found in the analyzed files."
  fi
else
  echo "No files containing 'module' found."
fi
```

Length of output: 64482
web/core/store/user/settings.store.ts (6)
4-6: Imported necessary modules for local storage and persistence

The addition of `getValueFromLocalStorage`, `setValueIntoLocalStorage`, and `persistence` imports is appropriate for managing local storage interactions and handling the persistence layer.
15-16: Defined constant for local storage key

Defining `LOCAL_DB_ENABLED` as a constant enhances maintainability and avoids hard-coding strings elsewhere in the code.
22-22: Updated `IUserSettingsStore` interface with new properties

Adding `canUseLocalDB` and `toggleLocalDB` to the `IUserSettingsStore` interface ensures that the new functionality is consistently exposed and can be utilized throughout the application.

Also applies to: 25-25
43-43: Initialized `canUseLocalDB` with value from local storage

Initializing `canUseLocalDB` with `getValueFromLocalStorage` provides a default setting based on previous user preferences, improving user experience by maintaining their choices across sessions.
53-53: Configured observables and actions for MobX

Properly marking `canUseLocalDB` as an observable reference and `toggleLocalDB` as an action ensures that MobX can track state changes and react accordingly, maintaining the reactivity of the store.

Also applies to: 56-56
62-82: Verify that toggling local DB usage updates all dependent components

Changing `canUseLocalDB` may affect other parts of the application that rely on this setting. Ensure that all components react appropriately to this change, and consider triggering any necessary refresh or re-initialization processes.

Run the following script to identify components that depend on `canUseLocalDB`:

This will help you ensure that all dependent components handle the state change correctly.
Verification successful

All dependent components correctly handle changes to `canUseLocalDB`.
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Description: Find all usages of `canUseLocalDB` in the codebase to verify dependencies.

# Search for `canUseLocalDB` across all TypeScript files.
rg --type ts 'canUseLocalDB'
```

Length of output: 853
web/core/local-db/utils/load-workspace.ts (3)

39-39: Await asynchronous database execution

If `persistence.db.exec` is an asynchronous operation, it should be awaited to ensure that the database operations complete before proceeding. This prevents potential issues with uncommitted transactions or incomplete data insertion.

Check if `persistence.db.exec` returns a promise, and if so, update the calls to include `await`:

```diff
- persistence.db.exec(query);
+ await persistence.db.exec(query);
```
105-107: Potential issue with empty `objects` array

After fixing the accumulation of estimate points, verify that the `objects` array is populated correctly before proceeding to process it in batches. If `objects` remains empty, the subsequent for-loop will not execute, and no data will be inserted.
65-65: ⚠️ Potential issue — Incorrect schema used in `loadModules`

In the `loadModules` function, the `labelSchema` is being used instead of a `moduleSchema`. This could lead to data misalignment when inserting modules into the database.

Apply this diff to use the correct schema:

```diff
- stageInserts("modules", labelSchema, label);
+ stageInserts("modules", moduleSchema, label);
```

Ensure that you have a `moduleSchema` defined and imported.

Likely invalid or redundant comment.
web/core/store/issue/project/issue.store.ts (1)

116-116: Verify the correctness of parameter negation in `onfetchIssues` call

The method `this.onfetchIssues` is called with `!isExistingPaginationOptions` as the last argument. Ensure that the method definition expects this negated value and that the logic inside `onfetchIssues` handles it appropriately.

web/core/store/user/index.ts (3)
280-282: `localDBEnabled` computed property is correctly implemented

The `localDBEnabled` getter accurately determines the state of local database caching by combining `ENABLE_LOCAL_DB_CACHE` and `this.userSettings.canUseLocalDB`. This ensures that the caching mechanism respects both the application configuration and user preferences.
48-48: Addition of `localDBEnabled` to `IUserStore` interface

Including `localDBEnabled: boolean;` in the `IUserStore` interface effectively extends the interface to support the new computed property, ensuring type consistency across the application.
99-99: `localDBEnabled` added to `makeObservable`

Adding `localDBEnabled: computed,` to the `makeObservable` configuration ensures that MobX properly tracks the computed property, allowing reactive updates when its dependencies change.

web/core/store/issue/issue-details/issue.store.ts (1)
97-99: Ensure `isLocalDBIssueDescription` is properly reset

The `isLocalDBIssueDescription` flag is set to `false` after fetching the issue from the server. Verify that this behavior aligns with the intended use of this flag, especially if the server data is not different from the local data. This ensures the flag accurately represents the source of the issue data.

web/core/store/issue/module/issue.store.ts (2)
139-140: Verify the logical negation of `isExistingPaginationOptions` in `clear` method calls

In lines 139-140, the `clear` method is called with `!isExistingPaginationOptions` as the first argument. Please ensure that using the logical negation of `isExistingPaginationOptions` matches the intended behavior and that the `clear` method handles it appropriately.
151-151: Ensure `onfetchIssues` method signature accommodates the new parameter

In line 151, `onfetchIssues` is called with the additional parameter `!isExistingPaginationOptions`. Please confirm that the `onfetchIssues` method signature and implementation have been updated to accept this new parameter and that it operates as expected.

web/core/services/issue/issue.service.ts (1)
330-333: Verify DELETE request with body

In the `bulkDeleteIssues` method, you're passing `data` as the second parameter in a DELETE request. Some HTTP clients and servers do not support sending a body with DELETE requests. Verify that your HTTP client (Axios) and server API support this, and ensure that the data is being sent as intended.

Consider adjusting the request if needed:

```diff
- return this.delete(`/api/workspaces/${workspaceSlug}/projects/${projectId}/bulk-delete-issues/`, data)
+ return this.delete(`/api/workspaces/${workspaceSlug}/projects/${projectId}/bulk-delete-issues/`, {
+   data: data,
+ })
```

This way, Axios knows to include the `data` in the DELETE request.

web/core/store/issue/cycle/issue.store.ts (3)
182-183: LGTM!

The updated logic for clearing the store based on the `isExistingPaginationOptions` flag and the `groupBy` property is appropriate. This ensures that the store is cleared correctly when fetching from the server or using local data.
194-194: LGTM!

Passing `!isExistingPaginationOptions` to the `onfetchIssues` method aligns with the updated method signature and ensures correct handling of pagination options.
237-237: LGTM!

The `getIssues` method is correctly called with updated parameters, with `cycleId` now included within `params`. This change streamlines the method call and is consistent with the updated API usage.

apiserver/plane/app/views/issue/base.py (5)
237-242: Efficient incorporation of `updated_at__gt` filter

The added code correctly includes the optional `updated_at__gt` parameter into `extra_filters`, enhancing the querying capability of the `list` method.
247-247: Appropriate merging of filters in issue queryset

Combining `filters` with `extra_filters` ensures that the issue queryset applies all relevant filtering parameters.
722-739: Well-implemented endpoint for retrieving deleted issues

The `DeletedIssuesListViewSet` class introduces a new GET method to retrieve IDs of deleted or archived issues, correctly applying filters for `updated_at__gt` and appropriate permissions.
807-807: Including `state__group` enhances response utility

Adding `"state__group"` to the required fields ensures that the group information of the issue state is available in responses, aiding in client-side rendering and grouping of issues.
846-847: Optimized data retrieval with `updated_at__gt` filter

Applying the `updated_at__gt` filter to both `base_queryset` and `queryset` effectively limits the results to issues updated after the specified timestamp, improving performance.

web/core/store/issue/helpers/base-issues.store.ts (1)
68-68: Ensure all implementations of `IBaseIssuesStore` are updated

The addition of the optional `clearForLocal` parameter in the `clear` method signature requires that all classes implementing `IBaseIssuesStore` update their `clear` method accordingly. Please verify that all implementations have been updated to match the new signature to prevent any compile-time errors.
```ts
export const runQuery = async (sql: string) => {
  const data = await persistence.db.exec({
    sql,
    rowMode: "object",
    returnValue: "resultRows",
  });

  return data.result.resultRows;
};
```
Enhance error handling and input validation.
The `runQuery` function lacks error handling and input validation, which could lead to unexpected behavior or security vulnerabilities. Consider the following improvements:
- Implement error handling using try-catch to manage potential database errors.
- Add input validation for the SQL string to prevent SQL injection attacks.
- Consider using parameterized queries instead of raw SQL strings for better security.
- Add type annotations for better type safety.
Here's a suggested implementation with these improvements:
```ts
import { persistence } from "../storage.sqlite";

export const runQuery = async (sql: string, params: any[] = []): Promise<any[]> => {
  if (typeof sql !== 'string' || sql.trim() === '') {
    throw new Error('Invalid SQL query');
  }
  try {
    const data = await persistence.db.exec({
      sql,
      bind: params,
      rowMode: "object",
      returnValue: "resultRows",
    });
    return data.result.resultRows;
  } catch (error) {
    console.error('Error executing query:', error);
    throw new Error('Failed to execute query');
  }
};
```
This implementation includes error handling, basic input validation, support for parameterized queries, and type annotations.
🛠️ Refactor suggestion
Consider enhancing result handling and function flexibility.
The current implementation of runQuery
might benefit from the following improvements:
- Add an option to transform or sanitize the result before returning, to avoid exposing internal database structure.
- Consider making
rowMode
andreturnValue
configurable parameters, allowing for different query result formats. - Add documentation comments to explain the function's purpose, parameters, and return value.
Here's a suggested implementation incorporating these improvements:
```ts
import { persistence } from "../storage.sqlite";

interface QueryOptions {
  rowMode?: "object" | "array";
  returnValue?: "resultRows" | "resultSet";
}

/**
 * Executes a SQL query against the local database.
 * @param sql - The SQL query string to execute.
 * @param params - An array of parameters for the SQL query.
 * @param options - Additional options for query execution.
 * @returns A promise that resolves to the query results.
 */
export const runQuery = async (
  sql: string,
  params: any[] = [],
  options: QueryOptions = {}
): Promise<any> => {
  if (typeof sql !== 'string' || sql.trim() === '') {
    throw new Error('Invalid SQL query');
  }
  try {
    const data = await persistence.db.exec({
      sql,
      bind: params,
      rowMode: options.rowMode || "object",
      returnValue: options.returnValue || "resultRows",
    });
    // Add any necessary result transformation here
    return data.result.resultRows;
  } catch (error) {
    console.error('Error executing query:', error);
    throw new Error('Failed to execute query');
  }
};
```
This implementation adds flexibility through options, includes documentation, and leaves room for result transformation if needed.
```ts
import { runQuery } from "./query-executor";

export const getProjectIds = async () => {
  const q = `select project_id from states where project_id is not null group by project_id`;
  return await runQuery(q);
};

export const getSubIssues = async (issueId: string) => {
  const q = `select * from issues where parent_id = '${issueId}'`;
  return await runQuery(q);
};

export const getSubIssueDistribution = async (issueId: string) => {
  const q = `select s.'group', group_concat(i.id) as issues from issues i left join states s on s.id = i.state_id where i.parent_id = '${issueId}' group by s.'group'`;

  const result = await runQuery(q);
  if (!result.length) {
    return {};
  }
  return result.reduce((acc: Record<string, string[]>, item: { group: string; issues: string }) => {
    acc[item.group] = item.issues.split(",");
    return acc;
  }, {});
};

export const getSubIssuesWithDistribution = async (issueId: string) => {
  const promises = [getSubIssues(issueId), getSubIssueDistribution(issueId)];
  const [sub_issues, state_distribution] = await Promise.all(promises);
  return { sub_issues, state_distribution };
};
```
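The reduce in `getSubIssueDistribution` can be exercised in isolation; a minimal sketch with invented sample rows:

```typescript
type DistributionRow = { group: string; issues: string };

// Same shape of transformation as getSubIssueDistribution:
// group_concat rows -> map of state group to issue-id arrays
const toDistribution = (rows: DistributionRow[]): Record<string, string[]> =>
  rows.reduce((acc: Record<string, string[]>, item) => {
    acc[item.group] = item.issues.split(",");
    return acc;
  }, {});

const sample: DistributionRow[] = [
  { group: "started", issues: "a,b" },
  { group: "completed", issues: "c" },
];
console.log(toDistribution(sample)); // { started: [ 'a', 'b' ], completed: [ 'c' ] }
```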
Overall Review: Good progress towards PR objectives, but with security concerns

This new file introduces utility functions that align well with the PR objectives of improving performance and user experience on the boards by facilitating local storage of issues. The functions are generally well-structured and efficient, particularly the use of concurrent execution in `getSubIssuesWithDistribution`.

However, there are critical security vulnerabilities (SQL injection risks) in `getSubIssues` and `getSubIssueDistribution` that need to be addressed before this can be merged. These vulnerabilities could potentially compromise the integrity of the local database and the user's data.

Once the security issues are resolved, these utilities should contribute significantly to the goals of:

- Improving performance on the boards
- Enabling instant peek overview of issues
- Reducing load on the server by facilitating local data management

Please address the SQL injection vulnerabilities, and consider the suggestion for improving database compatibility. After these changes, the file will be in a good state to support the PR's objectives.
```ts
export const getSubIssues = async (issueId: string) => {
  const q = `select * from issues where parent_id = '${issueId}'`;
  return await runQuery(q);
};
```
Critical: SQL Injection Vulnerability
The current implementation is vulnerable to SQL injection attacks. The `issueId` parameter is directly interpolated into the SQL query string, which can allow malicious users to manipulate the query.
To fix this, use parameterized queries. Here's a suggested fix:
```diff
 export const getSubIssues = async (issueId: string) => {
-  const q = `select * from issues where parent_id = '${issueId}'`;
+  const q = `select * from issues where parent_id = ?`;
-  return await runQuery(q);
+  return await runQuery(q, [issueId]);
 };
```
This assumes that the `runQuery` function supports parameterized queries. If it doesn't, consider modifying it to support this crucial security feature.
Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```typescript
export const getSubIssues = async (issueId: string) => {
  const q = `select * from issues where parent_id = ?`;
  return await runQuery(q, [issueId]);
};
```
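The suggested fix presupposes that `runQuery` accepts a parameters array, which the snippets here don't show. As an illustration only — `bindParams` and `escapeLiteral` are hypothetical names, and a real SQLite driver (sql.js, wa-sqlite, better-sqlite3) binds values natively and should be preferred over any string-level escaping — positional binding amounts to roughly this:

```typescript
// Illustrative only: shows what binding a positional "?" placeholder does.
// Real drivers bind natively; never splice escaped strings in production.
const escapeLiteral = (value: string): string =>
  `'${value.replace(/'/g, "''")}'`; // SQL escapes a quote by doubling it

const bindParams = (sql: string, params: string[]): string => {
  const parts = sql.split("?");
  if (parts.length - 1 !== params.length) {
    throw new Error("placeholder/parameter count mismatch");
  }
  // Re-join the SQL fragments with each parameter safely quoted.
  return parts.reduce((acc, part, i) => acc + escapeLiteral(params[i - 1]) + part);
};

// A malicious issueId can no longer terminate the string literal:
const q = bindParams("select * from issues where parent_id = ?", ["x' or '1'='1"]);
```

With binding, the injection payload is treated as an inert literal rather than as SQL syntax.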
```typescript
export const getSubIssueDistribution = async (issueId: string) => {
  const q = `select s.'group', group_concat(i.id) as issues from issues i left join states s on s.id = i.state_id where i.parent_id = '${issueId}' group by s.'group'`;

  const result = await runQuery(q);
  if (!result.length) {
    return {};
  }
  return result.reduce((acc: Record<string, string[]>, item: { group: string; issues: string }) => {
    acc[item.group] = item.issues.split(",");
    return acc;
  }, {});
};
```
Security vulnerability and potential compatibility issue
- **SQL Injection Vulnerability**: Similar to the `getSubIssues` function, this function is vulnerable to SQL injection attacks.
- **Database Compatibility**: The use of single quotes around 'group' in the SQL query might cause issues with some database systems.
- **Result Handling**: The function correctly handles the case where no results are returned and efficiently processes the results using reduce.
To address these issues:
- Use parameterized queries to prevent SQL injection:
```diff
-  const q = `select s.'group', group_concat(i.id) as issues from issues i left join states s on s.id = i.state_id where i.parent_id = '${issueId}' group by s.'group'`;
+  const q = `select s."group", group_concat(i.id) as issues from issues i left join states s on s.id = i.state_id where i.parent_id = ? group by s."group"`;
-  const result = await runQuery(q);
+  const result = await runQuery(q, [issueId]);
```
- Use double quotes instead of single quotes for the 'group' column name to ensure compatibility across different database systems.
- The result handling looks good and doesn't need changes.
Committable suggestion:
```typescript
export const getSubIssueDistribution = async (issueId: string) => {
  const q = `select s."group", group_concat(i.id) as issues from issues i left join states s on s.id = i.state_id where i.parent_id = ? group by s."group"`;
  const result = await runQuery(q, [issueId]);
  if (!result.length) {
    return {};
  }
  return result.reduce((acc: Record<string, string[]>, item: { group: string; issues: string }) => {
    acc[item.group] = item.issues.split(",");
    return acc;
  }, {});
};
```
```typescript
import { persistence } from "../storage.sqlite";

const log = console.log;
```
🛠️ Refactor suggestion
Consider using a more robust logging solution.
While using `console.log` directly or through an alias is common, for production code it's recommended to use a more robust logging solution. This would provide better control over log levels, formatting, and potential integration with monitoring tools.
Consider using a logging library like `winston` or `pino` for more flexible and powerful logging capabilities. This would allow for easier management of log levels and formats across the application.
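Short of adopting `winston` or `pino` outright, even a thin leveled wrapper improves on aliasing `console.log`. A minimal sketch — the level names and the `"info"` default here are illustrative, not taken from the codebase:

```typescript
// Minimal leveled logger sketch; libraries like pino add structured JSON
// output, transports, and redaction on top of the same basic idea.
type Level = "debug" | "info" | "warn" | "error";
const LEVELS: Record<Level, number> = { debug: 0, info: 1, warn: 2, error: 3 };

const createLogger = (minLevel: Level = "info") => {
  const emit = (level: Level, ...args: unknown[]) => {
    // Drop messages below the configured threshold.
    if (LEVELS[level] >= LEVELS[minLevel]) {
      console[level === "debug" ? "log" : level](`[${level}]`, ...args);
    }
  };
  return {
    debug: (...a: unknown[]) => emit("debug", ...a),
    info: (...a: unknown[]) => emit("info", ...a),
    warn: (...a: unknown[]) => emit("warn", ...a),
    error: (...a: unknown[]) => emit("error", ...a),
  };
};

const log = createLogger("warn"); // only warn/error reach the console
```

Swapping the factory for a real library later only changes `createLogger`, not the call sites.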
```typescript
try {
  const storageManager = window.navigator.storage;
  const fileSystemDirectoryHandle = await storageManager.getDirectory();
  //@ts-expect-error
```
Include a description for the `@ts-expect-error` directive.
The static analysis tool indicates that the `@ts-expect-error` directive on line 58 lacks a description. Providing a description helps explain why the TypeScript error is being suppressed and aids in code maintainability.
Apply this diff to add a descriptive message:
```diff
-//@ts-expect-error
+//@ts-expect-error The 'remove' method is currently not defined in TypeScript definitions
 await fileSystemDirectoryHandle.remove({ recursive: true });
```
Committable suggestion:
```typescript
//@ts-expect-error The 'remove' method is currently not defined in TypeScript definitions
```
Tools
GitHub Check: lint-web
[failure] 58-58: Include a description after the "@ts-expect-error" directive to explain why the @ts-expect-error is necessary. The description must be 3 characters or longer
```typescript
  currIssue[field] = currIssue[field] ? JSON.parse(currIssue[field]) : [];
});
return currIssue as TIssue;
```
Add error handling for JSON parsing in `formatLocalIssue`.
Parsing JSON strings without error handling can lead to uncaught exceptions if the data is malformed. To enhance robustness, wrap the `JSON.parse` call in a try-catch block to handle potential parsing errors gracefully.
Apply this diff to include error handling:
```diff
-ARRAY_FIELDS.forEach((field: string) => {
-  currIssue[field] = currIssue[field] ? JSON.parse(currIssue[field]) : [];
-});
+ARRAY_FIELDS.forEach((field: string) => {
+  if (currIssue[field]) {
+    try {
+      currIssue[field] = JSON.parse(currIssue[field]);
+    } catch (e) {
+      console.error(`Failed to parse JSON for field ${field}:`, e);
+      currIssue[field] = [];
+    }
+  } else {
+    currIssue[field] = [];
+  }
+});
```
Committable suggestion:
```typescript
ARRAY_FIELDS.forEach((field: string) => {
  if (currIssue[field]) {
    try {
      currIssue[field] = JSON.parse(currIssue[field]);
    } catch (e) {
      console.error(`Failed to parse JSON for field ${field}:`, e);
      currIssue[field] = [];
    }
  } else {
    currIssue[field] = [];
  }
});
return currIssue as TIssue;
```
```python
# getting the project_id from the request params
project_id = self.request.GET.get("project_id", None)

issue_queryset = Issue.issue_objects.filter(
    workspace__slug=workspace_slug
)

if project_id:
    issue_queryset = issue_queryset.filter(project_id=project_id)
```
Security concern: 'project_id' obtained from GET parameters without validation
In the `IssuePaginatedViewSet`, `project_id` is fetched from GET parameters in both the `get_queryset` (lines 745-753) and `list` methods (lines 796-800). Similarly, it's used in filtering the `base_queryset` (lines 838-840). Relying on `project_id` from GET parameters without proper validation and permission checks may allow unauthorized access to issues from other projects.
Recommendation:
Ensure that `project_id` is obtained from URL parameters or validate it thoroughly when sourced from GET parameters. Implement necessary permission checks to prevent potential security risks associated with unauthorized data access.
Also applies to: 796-800, 838-840
```typescript
if (
  (this.rootIssueStore.rootStore.user?.localDBEnabled && clearForLocal) ||
  (!this.rootIssueStore.rootStore.user?.localDBEnabled && !clearForLocal)
) {
```
🛠️ Refactor suggestion
Simplify the conditional logic in the `clear` method
The condition in the `if` statement can be simplified by recognizing that:
```typescript
(localDBEnabled && clearForLocal) || (!localDBEnabled && !clearForLocal)
```
is equivalent to:
```typescript
localDBEnabled === clearForLocal
```
Simplifying the expression enhances readability and reduces potential confusion.
You can apply the following diff to simplify the condition:
```diff
 if (
-  (this.rootIssueStore.rootStore.user?.localDBEnabled && clearForLocal) ||
-  (!this.rootIssueStore.rootStore.user?.localDBEnabled && !clearForLocal)
-) {
+  this.rootIssueStore.rootStore.user?.localDBEnabled === clearForLocal
+) {
```
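The claimed equivalence holds when both operands are genuine booleans, and is easy to confirm exhaustively. One caveat worth checking before applying the diff: `localDBEnabled` comes through optional chaining, so it may be `undefined`, and `undefined === false` diverges from the original truthiness-based test. A quick check of both points:

```typescript
// Exhaustive truth-table check of the claimed boolean equivalence.
const equivalent = [true, false].every((a) =>
  [true, false].every((b) => ((a && b) || (!a && !b)) === (a === b))
);

// The caveat: when the flag is undefined, strict equality diverges from
// the original truthiness-based condition (coerce with !! to restore it).
const flag: boolean | undefined = undefined;
const original = (flag && false) || (!flag && !false); // truthiness form -> true
const strict = flag === false; // strict form -> false
```

So the simplification is safe only if the flag is coerced to a boolean first, e.g. `!!this.rootIssueStore.rootStore.user?.localDBEnabled === clearForLocal`.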
```diff
@@ -1694,7 +1702,7 @@
     }
   }

-  return isDataIdsArray ? (order ? orderBy(dataValues, undefined, [order])[0] : dataValues) : dataValues[0];
+  return isDataIdsArray ? (order ? orderBy(dataValues, undefined, [order]) : dataValues) : dataValues;
```
🛠️ Refactor suggestion
Consider simplifying the return statement for better readability
The nested ternary operators in the return statement can reduce readability. Consider refactoring it to improve clarity.
You can refactor the return statement as follows:
```diff
-return isDataIdsArray ? (order ? orderBy(dataValues, undefined, [order]) : dataValues) : dataValues;
+if (isDataIdsArray && order) {
+  return orderBy(dataValues, undefined, [order]);
+}
+return dataValues;
```
This change uses standard conditional statements to make the flow of logic clearer.
Committable suggestion:
```typescript
if (isDataIdsArray && order) {
  return orderBy(dataValues, undefined, [order]);
}
return dataValues;
```
Actionable comments posted: 0
Outside diff range and nitpick comments (2)
web/app/profile/page.tsx (2)
`38-38`: **Reorder imports for consistency**
The import of `ENABLE_LOCAL_DB_CACHE` should be moved before the import of `FileService` to maintain a consistent import order.
Apply this change:
```diff
-import { FileService } from "@/services/file.service";
-import { ENABLE_LOCAL_DB_CACHE } from "@/plane-web/constants/issues";
+import { ENABLE_LOCAL_DB_CACHE } from "@/plane-web/constants/issues";
+import { FileService } from "@/services/file.service";
```
Tools
GitHub Check: lint-web
[failure] 38-38:
@/plane-web/constants/issues
import should occur before import of@/services/file.service
421-451
: LGTM: Well-implemented Local Cache settingsThe new Disclosure component for Local Cache settings is well-implemented and provides good user control over the feature. It's correctly gated behind the
ENABLE_LOCAL_DB_CACHE
flag and includes clear explanatory text.Consider adding an aria-label to the ToggleSwitch for better accessibility:
- <ToggleSwitch value={canUseLocalDB} onChange={() => toggleLocalDB()} /> + <ToggleSwitch + value={canUseLocalDB} + onChange={() => toggleLocalDB()} + aria-label="Toggle local database cache" + />
Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Files selected for processing (6)
- web/app/profile/page.tsx (5 hunks)
- web/core/components/issues/issue-layouts/kanban/default.tsx (1 hunks)
- web/core/store/issue/cycle/filter.store.ts (2 hunks)
- web/core/store/issue/module/filter.store.ts (2 hunks)
- web/core/store/issue/project-views/filter.store.ts (2 hunks)
- web/core/store/issue/project/filter.store.ts (2 hunks)
Files skipped from review as they are similar to previous changes (5)
- web/core/components/issues/issue-layouts/kanban/default.tsx
- web/core/store/issue/cycle/filter.store.ts
- web/core/store/issue/module/filter.store.ts
- web/core/store/issue/project-views/filter.store.ts
- web/core/store/issue/project/filter.store.ts
Additional context used
GitHub Check: lint-web
web/app/profile/page.tsx
[failure] 38-38: `@/plane-web/constants/issues` import should occur before import of `@/services/file.service`
Additional comments not posted (2)
web/app/profile/page.tsx (2)
`401-401`: **LGTM: Improved timezone label display**
The update to use the timezone label in the CustomSearchSelect component enhances the user experience by displaying a more readable timezone option. The logic correctly handles cases where the label might not be found.
`38-38`: **Verify enterprise feature usage**
The `ENABLE_LOCAL_DB_CACHE` constant is imported from an enterprise-specific path. Ensure that this feature is properly gated for enterprise editions to avoid unintended behavior in community editions.
Run the following script to verify the usage of `ENABLE_LOCAL_DB_CACHE`:
Verification successful
Enterprise feature usage for `ENABLE_LOCAL_DB_CACHE` is properly gated.
All usages of `ENABLE_LOCAL_DB_CACHE` are confined within conditional statements, and the constant is set to `false` for community editions, ensuring no unintended behavior.
Scripts executed
The following scripts were executed for the analysis:
Script:
```shell
#!/bin/bash
# Check if ENABLE_LOCAL_DB_CACHE is properly gated in TypeScript files
rg -i "ENABLE_LOCAL_DB_CACHE" --glob "*.ts" --glob "*.tsx"
```
Length of output: 482
Script:
```shell
#!/bin/bash
# Find all conditional statements that involve ENABLE_LOCAL_DB_CACHE
ast-grep --lang typescript --pattern $'if ($_ && ENABLE_LOCAL_DB_CACHE) { $$$ }'
ast-grep --lang typescript --pattern $'ENABLE_LOCAL_DB_CACHE && $$$'
```
Length of output: 253
Tools
GitHub Check: lint-web
[failure] 38-38: `@/plane-web/constants/issues` import should occur before import of `@/services/file.service`
Store issues on the client along with a syncing mechanism. This improves performance on the boards, enables an instant peek overview of issues, and reduces load on the server.
WEB-2001
Summary by CodeRabbit
Release Notes
New Features
Bug Fixes
Documentation
Chores