diff --git a/README.md b/README.md index 7023bafb..f6681c6f 100644 --- a/README.md +++ b/README.md @@ -6,6 +6,24 @@ Worklenz +
Task Management | Time Tracking | @@ -27,6 +45,24 @@ Worklenz is a project management tool designed to help organizations improve their efficiency. It provides a comprehensive solution for managing projects, tasks, and collaboration within teams. +## Table of Contents + +- [Features](#features) +- [Tech Stack](#tech-stack) +- [Getting Started](#getting-started) + - [Quick Start (Docker)](#-quick-start-docker---recommended) + - [Manual Installation](#️-manual-installation-for-development) +- [Deployment](#deployment) + - [Local Development](#local-development-with-docker) + - [Remote Server Deployment](#remote-server-deployment) +- [Configuration](#configuration) +- [MinIO Integration](#minio-integration) +- [Security](#security) +- [Analytics](#analytics) +- [Screenshots](#screenshots) +- [Contributing](#contributing) +- [License](#license) + ## Features - **Project Planning**: Create and organize projects, assign tasks to team members. @@ -50,41 +86,80 @@ This repository contains the frontend and backend code for Worklenz. ## Getting Started -These instructions will help you set up and run the Worklenz project on your local machine for development and testing purposes. +Choose your preferred setup method below. Docker is recommended for quick setup and testing. -### Prerequisites +### 🚀 Quick Start (Docker - Recommended) -- Node.js (version 18 or higher) -- PostgreSQL database -- An S3-compatible storage service (like MinIO) or Azure Blob Storage +The fastest way to get Worklenz running locally with all dependencies included. -### Option 1: Manual Installation +**Prerequisites:** +- Docker and Docker Compose installed on your system +- Git -1. Clone the repository +**Steps:** + +1. Clone the repository: ```bash git clone https://github.com/Worklenz/worklenz.git cd worklenz ``` -2. Set up environment variables - - Copy the example environment files - ```bash - cp worklenz-backend/.env.template worklenz-backend/.env - ``` - - Update the environment variables with your configuration - -3. Install dependencies +2. Start the Docker containers: ```bash -# Install backend dependencies +docker-compose up -d +``` + +3. Access the application: + - **Frontend**: http://localhost:5000 + - **Backend API**: http://localhost:3000 + - **MinIO Console**: http://localhost:9001 (login: minioadmin/minioadmin) + +4. To stop the services: +```bash +docker-compose down +``` + +**Alternative startup methods:** +- **Windows**: Run `start.bat` +- **Linux/macOS**: Run `./start.sh` + +**Video Guide**: For a visual walkthrough of the local Docker deployment process, check out our [step-by-step video guide](https://www.youtube.com/watch?v=AfwAKxJbqLg). + +### 🛠️ Manual Installation (For Development) + +For developers who want to run the services individually or customize the setup. + +**Prerequisites:** +- Node.js (version 18 or higher) +- PostgreSQL (version 15 or higher) +- An S3-compatible storage service (like MinIO) or Azure Blob Storage + +**Steps:** + +1. Clone the repository: +```bash +git clone https://github.com/Worklenz/worklenz.git +cd worklenz +``` + +2. Set up environment variables: +```bash +cp worklenz-backend/.env.template worklenz-backend/.env +# Update the environment variables with your configuration +``` + +3. Install dependencies: +```bash +# Backend dependencies cd worklenz-backend npm install -# Install frontend dependencies +# Frontend dependencies cd ../worklenz-frontend npm install ``` -4. Set up the database +4. 
Set up the database: ```bash # Create a PostgreSQL database named worklenz_db cd worklenz-backend @@ -100,49 +175,47 @@ psql -U your_username -d worklenz_db -f database/sql/2_dml.sql psql -U your_username -d worklenz_db -f database/sql/5_database_user.sql ``` -5. Start the development servers +5. Start the development servers: ```bash -# In one terminal, start the backend +# Terminal 1: Start the backend cd worklenz-backend npm run dev -# In another terminal, start the frontend +# Terminal 2: Start the frontend cd worklenz-frontend npm run dev ``` 6. Access the application at http://localhost:5000 -### Option 2: Docker Setup +## Deployment -The project includes a fully configured Docker setup with: -- Frontend React application -- Backend server -- PostgreSQL database -- MinIO for S3-compatible storage +For local development, follow the [Quick Start (Docker)](#-quick-start-docker---recommended) section above. -1. Clone the repository: -```bash -git clone https://github.com/Worklenz/worklenz.git -cd worklenz -``` +### Remote Server Deployment -2. Start the Docker containers (choose one option): +When deploying to a remote server: -**Using Docker Compose directly** -```bash -docker-compose up -d -``` +1. Set up the environment files with your server's hostname: + ```bash + # For HTTP/WS + ./update-docker-env.sh your-server-hostname + + # For HTTPS/WSS + ./update-docker-env.sh your-server-hostname true + ``` -3. The application will be available at: - - Frontend: http://localhost:5000 - - Backend API: http://localhost:3000 - - MinIO Console: http://localhost:9001 (login with minioadmin/minioadmin) +2. Pull and run the latest Docker images: + ```bash + docker-compose pull + docker-compose up -d + ``` -4. To stop the services: -```bash -docker-compose down -``` +3. Access the application through your server's hostname: + - Frontend: http://your-server-hostname:5000 + - Backend API: http://your-server-hostname:3000 + +4. **Video Guide**: For a complete walkthrough of deploying Worklenz to a remote server, check out our [deployment video guide](https://www.youtube.com/watch?v=CAZGu2iOXQs&t=10s). ## Configuration @@ -157,16 +230,46 @@ Worklenz requires several environment variables to be configured for proper oper Please refer to the `.env.example` files for a full list of required variables. -### MinIO Integration +The Docker setup uses environment variables to configure the services: + +- **Frontend:** + - `VITE_API_URL`: URL of the backend API (default: http://backend:3000 for container networking) + - `VITE_SOCKET_URL`: WebSocket URL for real-time communication (default: ws://backend:3000) + +- **Backend:** + - Database connection parameters + - Storage configuration + - Other backend settings + +For custom configuration, edit the `.env` file or the `update-docker-env.sh` script. + +## MinIO Integration The project uses MinIO as an S3-compatible object storage service, which provides an open-source alternative to AWS S3 for development and production. +### Working with MinIO + +MinIO provides an S3-compatible API, so any code that works with S3 will work with MinIO by simply changing the endpoint URL. The backend has been configured to use MinIO by default, with no additional configuration required. 
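As a minimal sketch of what "simply changing the endpoint URL" looks like in practice — mirroring the `S3Client` initialization that appears in the section removed further down in this diff; the import line and the literal endpoint value are assumptions here, not part of the patch:

```javascript
// Point the AWS SDK v3 S3 client at MinIO instead of AWS.
import { S3Client } from "@aws-sdk/client-s3";

const s3Client = new S3Client({
  region: process.env.AWS_REGION || "us-east-1",
  credentials: {
    accessKeyId: process.env.AWS_ACCESS_KEY_ID || "minioadmin",
    secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY || "minioadmin",
  },
  // Assumed endpoint; the backend derives it from S3_URL (see below).
  endpoint: "http://minio:9000",
  forcePathStyle: true, // required for MinIO's path-style bucket URLs
});
```

Any standard S3 command (`PutObject`, `GetObject`, and so on) can then be issued against this client unchanged.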
+ - **MinIO Console**: http://localhost:9001 - Username: minioadmin - Password: minioadmin - **Default Bucket**: worklenz-bucket (created automatically when the containers start) +### Backend Storage Configuration + +The backend is pre-configured to use MinIO with the following settings: + +```javascript +// S3 credentials with MinIO defaults +export const REGION = process.env.AWS_REGION || "us-east-1"; +export const BUCKET = process.env.AWS_BUCKET || "worklenz-bucket"; +export const S3_URL = process.env.S3_URL || "http://minio:9000/worklenz-bucket"; +export const S3_ACCESS_KEY_ID = process.env.AWS_ACCESS_KEY_ID || "minioadmin"; +export const S3_SECRET_ACCESS_KEY = process.env.AWS_SECRET_ACCESS_KEY || "minioadmin"; +``` + ### Security Considerations For production deployments: @@ -177,20 +280,12 @@ For production deployments: 4. Enable HTTPS for all public endpoints 5. Review and update dependencies regularly -## Contributing - -We welcome contributions from the community! If you'd like to contribute, please follow our [contributing guidelines](CONTRIBUTING.md). - ## Security If you believe you have found a security vulnerability in Worklenz, we encourage you to responsibly disclose this and not open a public issue. We will investigate all legitimate reports. Email [info@worklenz.com](mailto:info@worklenz.com) to disclose any security vulnerabilities. -## License - -This project is licensed under the [MIT License](LICENSE). - ## Analytics Worklenz uses Google Analytics to understand how the application is being used. This helps us improve the application and make better decisions about future development. @@ -260,215 +355,13 @@ If you've previously opted in and want to opt-out:
-### Contributing +## Contributing -We welcome contributions from the community! If you'd like to contribute, please follow -our [contributing guidelines](CONTRIBUTING.md). +We welcome contributions from the community! If you'd like to contribute, please follow our [contributing guidelines](CONTRIBUTING.md). -### License +## License Worklenz is open source and released under the [GNU Affero General Public License Version 3 (AGPLv3)](LICENSE). By contributing to Worklenz, you agree that your contributions will be licensed under its AGPL. -# Worklenz React - -This repository contains the React version of Worklenz with a Docker setup for easy development and deployment. - -## Getting Started with Docker - -The project includes a fully configured Docker setup with: -- Frontend React application -- Backend server -- PostgreSQL database -- MinIO for S3-compatible storage - -### Prerequisites - -- Docker and Docker Compose installed on your system -- Git - -### Quick Start - -1. Clone the repository: -```bash -git clone https://github.com/Worklenz/worklenz.git -cd worklenz -``` - -2. Start the Docker containers (choose one option): - -**Option 1: Using the provided scripts (easiest)** -- On Windows: - ``` - start.bat - ``` -- On Linux/macOS: - ```bash - ./start.sh - ``` - -**Option 2: Using Docker Compose directly** -```bash -docker-compose up -d -``` - -3. The application will be available at: - - Frontend: http://localhost:5000 - - Backend API: http://localhost:3000 - - MinIO Console: http://localhost:9001 (login with minioadmin/minioadmin) - -4. To stop the services (choose one option): - -**Option 1: Using the provided scripts** -- On Windows: - ``` - stop.bat - ``` -- On Linux/macOS: - ```bash - ./stop.sh - ``` - -**Option 2: Using Docker Compose directly** -```bash -docker-compose down -``` - - -## MinIO Integration - -The project uses MinIO as an S3-compatible object storage service, which provides an open-source alternative to AWS S3 for development and production. - -### Working with MinIO - -MinIO provides an S3-compatible API, so any code that works with S3 will work with MinIO by simply changing the endpoint URL. The backend has been configured to use MinIO by default, with no additional configuration required. 
- -- **MinIO Console**: http://localhost:9001 - - Username: minioadmin - - Password: minioadmin - -- **Default Bucket**: worklenz-bucket (created automatically when the containers start) - -### Backend Storage Configuration - -The backend is pre-configured to use MinIO with the following settings: - -```javascript -// S3 credentials with MinIO defaults -export const REGION = process.env.AWS_REGION || "us-east-1"; -export const BUCKET = process.env.AWS_BUCKET || "worklenz-bucket"; -export const S3_URL = process.env.S3_URL || "http://minio:9000/worklenz-bucket"; -export const S3_ACCESS_KEY_ID = process.env.AWS_ACCESS_KEY_ID || "minioadmin"; -export const S3_SECRET_ACCESS_KEY = process.env.AWS_SECRET_ACCESS_KEY || "minioadmin"; -``` - -The S3 client is initialized with special MinIO configuration: - -```javascript -const s3Client = new S3Client({ - region: REGION, - credentials: { - accessKeyId: S3_ACCESS_KEY_ID || "", - secretAccessKey: S3_SECRET_ACCESS_KEY || "", - }, - endpoint: getEndpointFromUrl(), // Extracts endpoint from S3_URL - forcePathStyle: true, // Required for MinIO -}); -``` - -### Environment Configuration - -The project uses the following environment file structure: - -- **Frontend**: - - `worklenz-frontend/.env.development` - Development environment variables - - `worklenz-frontend/.env.production` - Production build variables - -- **Backend**: - - `worklenz-backend/.env` - Backend environment variables - -### Setting Up Environment Files - -The Docker environment script will create or overwrite all environment files: - -```bash -# For HTTP/WS -./update-docker-env.sh your-hostname - -# For HTTPS/WSS -./update-docker-env.sh your-hostname true -``` - -This script generates properly configured environment files for both development and production environments. - -## Docker Deployment - -### Local Development with Docker - -1. Set up the environment files: - ```bash - # For HTTP/WS - ./update-docker-env.sh - - # For HTTPS/WSS - ./update-docker-env.sh localhost true - ``` - -2. Run the application using Docker Compose: - ```bash - docker-compose up -d - ``` - -3. Access the application: - - Frontend: http://localhost:5000 - - Backend API: http://localhost:3000 (or https://localhost:3000 with SSL) - -4. Video Guide - - For a visual walkthrough of the local Docker deployment process, check out our [step-by-step video guide](https://www.youtube.com/watch?v=AfwAKxJbqLg). - -### Remote Server Deployment - -When deploying to a remote server: - -1. Set up the environment files with your server's hostname: - ```bash - # For HTTP/WS - ./update-docker-env.sh your-server-hostname - - # For HTTPS/WSS - ./update-docker-env.sh your-server-hostname true - ``` - - This ensures that the frontend correctly connects to the backend API. - -2. Pull and run the latest Docker images: - ```bash - docker-compose pull - docker-compose up -d - ``` - -3. Access the application through your server's hostname: - - Frontend: http://your-server-hostname:5000 - - Backend API: http://your-server-hostname:3000 - -4. Video Guide - - For a complete walkthrough of deploying Worklenz to a remote server, check out our [deployment video guide](https://www.youtube.com/watch?v=CAZGu2iOXQs&t=10s). 
- -### Environment Configuration - -The Docker setup uses environment variables to configure the services: - -- Frontend: - - `VITE_API_URL`: URL of the backend API (default: http://backend:3000 for container networking) - - `VITE_SOCKET_URL`: WebSocket URL for real-time communication (default: ws://backend:3000) - -- Backend: - - Database connection parameters - - Storage configuration - - Other backend settings - -For custom configuration, edit the `.env` file or the `update-docker-env.sh` script. - diff --git a/SETUP_THE_PROJECT.md b/SETUP_THE_PROJECT.md index c8917ac1..9a3568cd 100644 --- a/SETUP_THE_PROJECT.md +++ b/SETUP_THE_PROJECT.md @@ -4,7 +4,7 @@ Getting started with development is a breeze! Follow these steps and you'll be c ## Requirements -- Node.js version v16 or newer - [Node.js](https://nodejs.org/en/download/) +- Node.js version v20 or newer - [Node.js](https://nodejs.org/en/download/) - PostgreSQL version v15 or newer - [PostgreSQL](https://www.postgresql.org/download/) - S3-compatible storage (like MinIO) for file storage @@ -38,7 +38,7 @@ Getting started with development is a breeze! Follow these steps and you'll be c npm start ``` -4. Navigate to [http://localhost:5173](http://localhost:5173) +4. Navigate to [http://localhost:5173](http://localhost:5173) (development server) ### Backend installation @@ -126,7 +126,7 @@ For an easier setup, you can use Docker and Docker Compose: ``` 3. Access the application: - - Frontend: http://localhost:5000 + - Frontend: http://localhost:5000 (Docker production build) - Backend API: http://localhost:3000 - MinIO Console: http://localhost:9001 (login with minioadmin/minioadmin) diff --git a/worklenz-backend/.gitignore b/worklenz-backend/.gitignore index d9d5a80a..cb13f868 100644 --- a/worklenz-backend/.gitignore +++ b/worklenz-backend/.gitignore @@ -20,9 +20,6 @@ coverage # nyc test coverage .nyc_output -# Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files) -.grunt - # Bower dependency directory (https://bower.io/) bower_components diff --git a/worklenz-backend/database/migrations/20250128000000-fix-window-function-error.sql b/worklenz-backend/database/migrations/20250128000000-fix-window-function-error.sql new file mode 100644 index 00000000..9a20e173 --- /dev/null +++ b/worklenz-backend/database/migrations/20250128000000-fix-window-function-error.sql @@ -0,0 +1,143 @@ +-- Fix window function error in task sort optimized functions +-- Error: window functions are not allowed in UPDATE + +-- Replace the optimized sort functions to avoid CTE usage in UPDATE statements +CREATE OR REPLACE FUNCTION handle_task_list_sort_between_groups_optimized(_from_index integer, _to_index integer, _task_id uuid, _project_id uuid, _batch_size integer DEFAULT 100) RETURNS void + LANGUAGE plpgsql +AS +$$ +DECLARE + _offset INT := 0; + _affected_rows INT; +BEGIN + -- PERFORMANCE OPTIMIZATION: Use direct updates without CTE in UPDATE + IF (_to_index = -1) + THEN + _to_index = COALESCE((SELECT MAX(sort_order) + 1 FROM tasks WHERE project_id = _project_id), 0); + END IF; + + -- PERFORMANCE OPTIMIZATION: Batch updates for large datasets + IF _to_index > _from_index + THEN + LOOP + UPDATE tasks + SET sort_order = sort_order - 1 + WHERE project_id = _project_id + AND sort_order > _from_index + AND sort_order < _to_index + AND sort_order > _offset + AND sort_order <= _offset + _batch_size; + + GET DIAGNOSTICS _affected_rows = ROW_COUNT; + EXIT WHEN _affected_rows = 0; + _offset := _offset + _batch_size; + END LOOP; + + UPDATE 
tasks SET sort_order = _to_index - 1 WHERE id = _task_id AND project_id = _project_id; + END IF; + + IF _to_index < _from_index + THEN + _offset := 0; + LOOP + UPDATE tasks + SET sort_order = sort_order + 1 + WHERE project_id = _project_id + AND sort_order > _to_index + AND sort_order < _from_index + AND sort_order > _offset + AND sort_order <= _offset + _batch_size; + + GET DIAGNOSTICS _affected_rows = ROW_COUNT; + EXIT WHEN _affected_rows = 0; + _offset := _offset + _batch_size; + END LOOP; + + UPDATE tasks SET sort_order = _to_index + 1 WHERE id = _task_id AND project_id = _project_id; + END IF; +END +$$; + +-- Replace the second optimized sort function +CREATE OR REPLACE FUNCTION handle_task_list_sort_inside_group_optimized(_from_index integer, _to_index integer, _task_id uuid, _project_id uuid, _batch_size integer DEFAULT 100) RETURNS void + LANGUAGE plpgsql +AS +$$ +DECLARE + _offset INT := 0; + _affected_rows INT; +BEGIN + -- PERFORMANCE OPTIMIZATION: Batch updates for large datasets without CTE in UPDATE + IF _to_index > _from_index + THEN + LOOP + UPDATE tasks + SET sort_order = sort_order - 1 + WHERE project_id = _project_id + AND sort_order > _from_index + AND sort_order <= _to_index + AND sort_order > _offset + AND sort_order <= _offset + _batch_size; + + GET DIAGNOSTICS _affected_rows = ROW_COUNT; + EXIT WHEN _affected_rows = 0; + _offset := _offset + _batch_size; + END LOOP; + END IF; + + IF _to_index < _from_index + THEN + _offset := 0; + LOOP + UPDATE tasks + SET sort_order = sort_order + 1 + WHERE project_id = _project_id + AND sort_order >= _to_index + AND sort_order < _from_index + AND sort_order > _offset + AND sort_order <= _offset + _batch_size; + + GET DIAGNOSTICS _affected_rows = ROW_COUNT; + EXIT WHEN _affected_rows = 0; + _offset := _offset + _batch_size; + END LOOP; + END IF; + + UPDATE tasks SET sort_order = _to_index WHERE id = _task_id AND project_id = _project_id; +END +$$; + +-- Add simple bulk update function as alternative +CREATE OR REPLACE FUNCTION update_task_sort_orders_bulk(_updates json) RETURNS void + LANGUAGE plpgsql +AS +$$ +DECLARE + _update_record RECORD; +BEGIN + -- Simple approach: update each task's sort_order from the provided array + FOR _update_record IN + SELECT + (item->>'task_id')::uuid as task_id, + (item->>'sort_order')::int as sort_order, + (item->>'status_id')::uuid as status_id, + (item->>'priority_id')::uuid as priority_id, + (item->>'phase_id')::uuid as phase_id + FROM json_array_elements(_updates) as item + LOOP + UPDATE tasks + SET + sort_order = _update_record.sort_order, + status_id = COALESCE(_update_record.status_id, status_id), + priority_id = COALESCE(_update_record.priority_id, priority_id) + WHERE id = _update_record.task_id; + + -- Handle phase updates separately since it's in a different table + IF _update_record.phase_id IS NOT NULL THEN + INSERT INTO task_phase (task_id, phase_id) + VALUES (_update_record.task_id, _update_record.phase_id) + ON CONFLICT (task_id) DO UPDATE SET phase_id = _update_record.phase_id; + END IF; + END LOOP; +END +$$; \ No newline at end of file diff --git a/worklenz-backend/database/sql/4_functions.sql b/worklenz-backend/database/sql/4_functions.sql index 441b08e8..2c57d3c4 100644 --- a/worklenz-backend/database/sql/4_functions.sql +++ b/worklenz-backend/database/sql/4_functions.sql @@ -5498,6 +5498,7 @@ DECLARE _iterator NUMERIC := 0; _status_id TEXT; _project_id UUID; + _base_sort_order NUMERIC; BEGIN -- Get the project_id from the first status to ensure we update all statuses in the same 
project SELECT project_id INTO _project_id @@ -5513,17 +5514,28 @@ BEGIN _iterator := _iterator + 1; END LOOP; - -- Ensure any remaining statuses in the project (not in the provided list) get sequential sort_order - -- This handles edge cases where not all statuses are provided - UPDATE task_statuses - SET sort_order = ( - SELECT COUNT(*) - FROM task_statuses ts2 - WHERE ts2.project_id = _project_id - AND ts2.id = ANY(SELECT (TRIM(BOTH '"' FROM JSON_ARRAY_ELEMENTS(_status_ids)::TEXT))::UUID) - ) + ROW_NUMBER() OVER (ORDER BY sort_order) - 1 - WHERE project_id = _project_id - AND id NOT IN (SELECT (TRIM(BOTH '"' FROM JSON_ARRAY_ELEMENTS(_status_ids)::TEXT))::UUID); + -- Get the base sort order for remaining statuses (simple count approach) + SELECT COUNT(*) INTO _base_sort_order + FROM task_statuses ts2 + WHERE ts2.project_id = _project_id + AND ts2.id = ANY(SELECT (TRIM(BOTH '"' FROM JSON_ARRAY_ELEMENTS(_status_ids)::TEXT))::UUID); + + -- Update remaining statuses with simple sequential numbering + -- Reset iterator to start from base_sort_order + _iterator := _base_sort_order; + + -- Use a cursor approach to avoid window functions + FOR _status_id IN + SELECT id::TEXT FROM task_statuses + WHERE project_id = _project_id + AND id NOT IN (SELECT (TRIM(BOTH '"' FROM JSON_ARRAY_ELEMENTS(_status_ids)::TEXT))::UUID) + ORDER BY sort_order + LOOP + UPDATE task_statuses + SET sort_order = _iterator + WHERE id = _status_id::UUID; + _iterator := _iterator + 1; + END LOOP; RETURN; END @@ -6412,7 +6424,7 @@ DECLARE _offset INT := 0; _affected_rows INT; BEGIN - -- PERFORMANCE OPTIMIZATION: Use CTE for better query planning + -- PERFORMANCE OPTIMIZATION: Use direct updates without CTE in UPDATE IF (_to_index = -1) THEN _to_index = COALESCE((SELECT MAX(sort_order) + 1 FROM tasks WHERE project_id = _project_id), 0); @@ -6422,18 +6434,15 @@ BEGIN IF _to_index > _from_index THEN LOOP - WITH batch_update AS ( - UPDATE tasks - SET sort_order = sort_order - 1 - WHERE project_id = _project_id - AND sort_order > _from_index - AND sort_order < _to_index - AND sort_order > _offset - AND sort_order <= _offset + _batch_size - RETURNING 1 - ) - SELECT COUNT(*) INTO _affected_rows FROM batch_update; + UPDATE tasks + SET sort_order = sort_order - 1 + WHERE project_id = _project_id + AND sort_order > _from_index + AND sort_order < _to_index + AND sort_order > _offset + AND sort_order <= _offset + _batch_size; + GET DIAGNOSTICS _affected_rows = ROW_COUNT; EXIT WHEN _affected_rows = 0; _offset := _offset + _batch_size; END LOOP; @@ -6445,18 +6454,15 @@ BEGIN THEN _offset := 0; LOOP - WITH batch_update AS ( - UPDATE tasks - SET sort_order = sort_order + 1 - WHERE project_id = _project_id - AND sort_order > _to_index - AND sort_order < _from_index - AND sort_order > _offset - AND sort_order <= _offset + _batch_size - RETURNING 1 - ) - SELECT COUNT(*) INTO _affected_rows FROM batch_update; + UPDATE tasks + SET sort_order = sort_order + 1 + WHERE project_id = _project_id + AND sort_order > _to_index + AND sort_order < _from_index + AND sort_order > _offset + AND sort_order <= _offset + _batch_size; + GET DIAGNOSTICS _affected_rows = ROW_COUNT; EXIT WHEN _affected_rows = 0; _offset := _offset + _batch_size; END LOOP; @@ -6475,22 +6481,19 @@ DECLARE _offset INT := 0; _affected_rows INT; BEGIN - -- PERFORMANCE OPTIMIZATION: Batch updates for large datasets + -- PERFORMANCE OPTIMIZATION: Batch updates for large datasets without CTE in UPDATE IF _to_index > _from_index THEN LOOP - WITH batch_update AS ( - UPDATE tasks - SET 
sort_order = sort_order - 1 - WHERE project_id = _project_id - AND sort_order > _from_index - AND sort_order <= _to_index - AND sort_order > _offset - AND sort_order <= _offset + _batch_size - RETURNING 1 - ) - SELECT COUNT(*) INTO _affected_rows FROM batch_update; + UPDATE tasks + SET sort_order = sort_order - 1 + WHERE project_id = _project_id + AND sort_order > _from_index + AND sort_order <= _to_index + AND sort_order > _offset + AND sort_order <= _offset + _batch_size; + GET DIAGNOSTICS _affected_rows = ROW_COUNT; EXIT WHEN _affected_rows = 0; _offset := _offset + _batch_size; END LOOP; @@ -6500,18 +6503,15 @@ BEGIN THEN _offset := 0; LOOP - WITH batch_update AS ( - UPDATE tasks - SET sort_order = sort_order + 1 - WHERE project_id = _project_id - AND sort_order >= _to_index - AND sort_order < _from_index - AND sort_order > _offset - AND sort_order <= _offset + _batch_size - RETURNING 1 - ) - SELECT COUNT(*) INTO _affected_rows FROM batch_update; + UPDATE tasks + SET sort_order = sort_order + 1 + WHERE project_id = _project_id + AND sort_order >= _to_index + AND sort_order < _from_index + AND sort_order > _offset + AND sort_order <= _offset + _batch_size; + GET DIAGNOSTICS _affected_rows = ROW_COUNT; EXIT WHEN _affected_rows = 0; _offset := _offset + _batch_size; END LOOP; @@ -6520,3 +6520,38 @@ BEGIN UPDATE tasks SET sort_order = _to_index WHERE id = _task_id AND project_id = _project_id; END $$; + +-- Simple function to update task sort orders in bulk +CREATE OR REPLACE FUNCTION update_task_sort_orders_bulk(_updates json) RETURNS void + LANGUAGE plpgsql +AS +$$ +DECLARE + _update_record RECORD; +BEGIN + -- Simple approach: update each task's sort_order from the provided array + FOR _update_record IN + SELECT + (item->>'task_id')::uuid as task_id, + (item->>'sort_order')::int as sort_order, + (item->>'status_id')::uuid as status_id, + (item->>'priority_id')::uuid as priority_id, + (item->>'phase_id')::uuid as phase_id + FROM json_array_elements(_updates) as item + LOOP + UPDATE tasks + SET + sort_order = _update_record.sort_order, + status_id = COALESCE(_update_record.status_id, status_id), + priority_id = COALESCE(_update_record.priority_id, priority_id) + WHERE id = _update_record.task_id; + + -- Handle phase updates separately since it's in a different table + IF _update_record.phase_id IS NOT NULL THEN + INSERT INTO task_phase (task_id, phase_id) + VALUES (_update_record.task_id, _update_record.phase_id) + ON CONFLICT (task_id) DO UPDATE SET phase_id = _update_record.phase_id; + END IF; + END LOOP; +END +$$; diff --git a/worklenz-backend/grunt/grunt-compress.js b/worklenz-backend/grunt/grunt-compress.js deleted file mode 100644 index b903edf6..00000000 --- a/worklenz-backend/grunt/grunt-compress.js +++ /dev/null @@ -1,28 +0,0 @@ -module.exports = { - brotli_js: { - options: { - mode: "brotli", - brotli: { - mode: 1 - } - }, - expand: true, - cwd: "build/public", - src: ["**/*.js"], - dest: "build/public", - extDot: "last", - ext: ".js.br" - }, - gzip_js: { - options: { - mode: "gzip" - }, - files: [{ - expand: true, - cwd: "build/public", - src: ["**/*.js"], - dest: "build/public", - ext: ".js.gz" - }] - } -}; diff --git a/worklenz-backend/package.json b/worklenz-backend/package.json index 5413ddf2..d4e07de2 100644 --- a/worklenz-backend/package.json +++ b/worklenz-backend/package.json @@ -4,7 +4,7 @@ "private": true, "engines": { "npm": ">=8.11.0", - "node": ">=16.13.0", + "node": ">=20.0.0", "yarn": "WARNING: Please use npm package manager instead of yarn" }, "main": 
"build/bin/www", @@ -68,7 +68,6 @@ "express-rate-limit": "^6.8.0", "express-session": "^1.17.3", "express-validator": "^6.15.0", - "grunt-cli": "^1.5.0", "helmet": "^6.2.0", "hpp": "^0.2.3", "http-errors": "^2.0.0", diff --git a/worklenz-backend/src/controllers/home-page-controller.ts b/worklenz-backend/src/controllers/home-page-controller.ts index be290eb9..5a0d87f4 100644 --- a/worklenz-backend/src/controllers/home-page-controller.ts +++ b/worklenz-backend/src/controllers/home-page-controller.ts @@ -137,6 +137,10 @@ export default class HomePageController extends WorklenzControllerBase { WHERE category_id NOT IN (SELECT id FROM sys_task_status_categories WHERE is_done IS FALSE)) + AND NOT EXISTS(SELECT project_id + FROM archived_projects + WHERE project_id = p.id + AND user_id = $2) ${groupByClosure} ORDER BY t.end_date ASC`; @@ -158,9 +162,13 @@ export default class HomePageController extends WorklenzControllerBase { WHERE category_id NOT IN (SELECT id FROM sys_task_status_categories WHERE is_done IS FALSE)) + AND NOT EXISTS(SELECT project_id + FROM archived_projects + WHERE project_id = p.id + AND user_id = $3) ${groupByClosure}`; - const result = await db.query(q, [teamId, userId]); + const result = await db.query(q, [teamId, userId, userId]); const [row] = result.rows; return row; } diff --git a/worklenz-backend/src/controllers/task-phases-controller.ts b/worklenz-backend/src/controllers/task-phases-controller.ts index e72fbbab..163ff250 100644 --- a/worklenz-backend/src/controllers/task-phases-controller.ts +++ b/worklenz-backend/src/controllers/task-phases-controller.ts @@ -16,19 +16,23 @@ export default class TaskPhasesController extends WorklenzControllerBase { if (!req.query.id) return res.status(400).send(new ServerResponse(false, null, "Invalid request")); + // Use custom name if provided, otherwise use default naming pattern + const phaseName = req.body.name?.trim() || + `Untitled Phase (${(await db.query("SELECT COUNT(*) FROM project_phases WHERE project_id = $1", [req.query.id])).rows[0].count + 1})`; + const q = ` INSERT INTO project_phases (name, color_code, project_id, sort_index) VALUES ( - CONCAT('Untitled Phase (', (SELECT COUNT(*) FROM project_phases WHERE project_id = $2) + 1, ')'), $1, $2, - (SELECT COUNT(*) FROM project_phases WHERE project_id = $2) + 1) + $3, + (SELECT COUNT(*) FROM project_phases WHERE project_id = $3) + 1) RETURNING id, name, color_code, sort_index; `; req.body.color_code = this.DEFAULT_PHASE_COLOR; - const result = await db.query(q, [req.body.color_code, req.query.id]); + const result = await db.query(q, [phaseName, req.body.color_code, req.query.id]); const [data] = result.rows; data.color_code = getColor(data.name) + TASK_STATUS_COLOR_ALPHA; diff --git a/worklenz-backend/src/controllers/tasks-controller-v2.ts b/worklenz-backend/src/controllers/tasks-controller-v2.ts index 27df13e7..d941f824 100644 --- a/worklenz-backend/src/controllers/tasks-controller-v2.ts +++ b/worklenz-backend/src/controllers/tasks-controller-v2.ts @@ -1174,9 +1174,39 @@ export default class TasksControllerV2 extends TasksControllerBase { } }); + // Calculate progress stats for priority and phase grouping + if (groupBy === GroupBy.PRIORITY || groupBy === GroupBy.PHASE) { + Object.values(groupedResponse).forEach((group: any) => { + if (group.tasks && group.tasks.length > 0) { + const todoCount = group.tasks.filter((task: any) => { + // For tasks, we need to check their original status category + const originalTask = tasks.find(t => t.id === task.id); + return 
originalTask?.status_category?.is_todo; + }).length; + + const doingCount = group.tasks.filter((task: any) => { + const originalTask = tasks.find(t => t.id === task.id); + return originalTask?.status_category?.is_doing; + }).length; + + const doneCount = group.tasks.filter((task: any) => { + const originalTask = tasks.find(t => t.id === task.id); + return originalTask?.status_category?.is_done; + }).length; + + const total = group.tasks.length; + + // Calculate progress percentages + group.todo_progress = total > 0 ? +((todoCount / total) * 100).toFixed(0) : 0; + group.doing_progress = total > 0 ? +((doingCount / total) * 100).toFixed(0) : 0; + group.done_progress = total > 0 ? +((doneCount / total) * 100).toFixed(0) : 0; + } + }); + } + // Create unmapped group if there are tasks without proper phase assignment if (unmappedTasks.length > 0 && groupBy === GroupBy.PHASE) { - groupedResponse[UNMAPPED.toLowerCase()] = { + const unmappedGroup = { id: UNMAPPED, title: UNMAPPED, groupType: groupBy, @@ -1189,7 +1219,36 @@ export default class TasksControllerV2 extends TasksControllerBase { start_date: null, end_date: null, sort_index: 999, // Put unmapped group at the end + todo_progress: 0, + doing_progress: 0, + done_progress: 0, }; + + // Calculate progress stats for unmapped group + if (unmappedTasks.length > 0) { + const todoCount = unmappedTasks.filter((task: any) => { + const originalTask = tasks.find(t => t.id === task.id); + return originalTask?.status_category?.is_todo; + }).length; + + const doingCount = unmappedTasks.filter((task: any) => { + const originalTask = tasks.find(t => t.id === task.id); + return originalTask?.status_category?.is_doing; + }).length; + + const doneCount = unmappedTasks.filter((task: any) => { + const originalTask = tasks.find(t => t.id === task.id); + return originalTask?.status_category?.is_done; + }).length; + + const total = unmappedTasks.length; + + unmappedGroup.todo_progress = total > 0 ? +((todoCount / total) * 100).toFixed(0) : 0; + unmappedGroup.doing_progress = total > 0 ? +((doingCount / total) * 100).toFixed(0) : 0; + unmappedGroup.done_progress = total > 0 ? 
+((doneCount / total) * 100).toFixed(0) : 0; + } + + groupedResponse[UNMAPPED.toLowerCase()] = unmappedGroup; } // Sort tasks within each group by order diff --git a/worklenz-backend/src/socket.io/commands/on-task-sort-order-change.ts b/worklenz-backend/src/socket.io/commands/on-task-sort-order-change.ts index 79abae7a..11ec09cd 100644 --- a/worklenz-backend/src/socket.io/commands/on-task-sort-order-change.ts +++ b/worklenz-backend/src/socket.io/commands/on-task-sort-order-change.ts @@ -24,6 +24,14 @@ interface ChangeRequest { priority: string; }; team_id: string; + // New simplified approach + task_updates?: Array<{ + task_id: string; + sort_order: number; + status_id?: string; + priority_id?: string; + phase_id?: string; + }>; } interface Config { @@ -64,38 +72,72 @@ function updateUnmappedStatus(config: Config) { export async function on_task_sort_order_change(_io: Server, socket: Socket, data: ChangeRequest) { try { - const q = `SELECT handle_task_list_sort_order_change($1);`; - - const config: Config = { - from_index: data.from_index, - to_index: data.to_index, - task_id: data.task.id, - from_group: data.from_group, - to_group: data.to_group, - project_id: data.project_id, - group_by: data.group_by, - to_last_index: Boolean(data.to_last_index) - }; - - if ((config.group_by === GroupBy.STATUS) && config.to_group) { - const canContinue = await TasksControllerV2.checkForCompletedDependencies(config.task_id, config?.to_group); - if (!canContinue) { - return socket.emit(SocketEvents.TASK_SORT_ORDER_CHANGE.toString(), { - completed_deps: canContinue - }); + // New simplified approach - use bulk updates if provided + if (data.task_updates && data.task_updates.length > 0) { + // Check dependencies for status changes + if (data.group_by === GroupBy.STATUS && data.to_group) { + const canContinue = await TasksControllerV2.checkForCompletedDependencies(data.task.id, data.to_group); + if (!canContinue) { + return socket.emit(SocketEvents.TASK_SORT_ORDER_CHANGE.toString(), { + completed_deps: canContinue + }); + } } - notifyStatusChange(socket, config); + // Use the simple bulk update function + const q = `SELECT update_task_sort_orders_bulk($1);`; + await db.query(q, [JSON.stringify(data.task_updates)]); + await emitSortOrderChange(data, socket); + + // Handle notifications and logging + if (data.group_by === GroupBy.STATUS && data.to_group) { + notifyStatusChange(socket, { + task_id: data.task.id, + to_group: data.to_group, + from_group: data.from_group, + from_index: data.from_index, + to_index: data.to_index, + project_id: data.project_id, + group_by: data.group_by, + to_last_index: data.to_last_index + }); + } + } else { + // Fallback to old complex method + const q = `SELECT handle_task_list_sort_order_change($1);`; + + const config: Config = { + from_index: data.from_index, + to_index: data.to_index, + task_id: data.task.id, + from_group: data.from_group, + to_group: data.to_group, + project_id: data.project_id, + group_by: data.group_by, + to_last_index: Boolean(data.to_last_index) + }; + + if ((config.group_by === GroupBy.STATUS) && config.to_group) { + const canContinue = await TasksControllerV2.checkForCompletedDependencies(config.task_id, config?.to_group); + if (!canContinue) { + return socket.emit(SocketEvents.TASK_SORT_ORDER_CHANGE.toString(), { + completed_deps: canContinue + }); + } + + notifyStatusChange(socket, config); + } + + if (config.group_by === GroupBy.PHASE) { + updateUnmappedStatus(config); + } + + await db.query(q, [JSON.stringify(config)]); + await 
emitSortOrderChange(data, socket); } - if (config.group_by === GroupBy.PHASE) { - updateUnmappedStatus(config); - } - - await db.query(q, [JSON.stringify(config)]); - await emitSortOrderChange(data, socket); - - if (config.group_by === GroupBy.STATUS) { + // Common post-processing logic for both approaches + if (data.group_by === GroupBy.STATUS) { const userId = getLoggedInUserIdFromSocket(socket); const isAlreadyAssigned = await TasksControllerV2.checkUserAssignedToTask(data.task.id, userId as string, data.team_id); @@ -104,7 +146,7 @@ export async function on_task_sort_order_change(_io: Server, socket: Socket, dat } } - if (config.group_by === GroupBy.PHASE) { + if (data.group_by === GroupBy.PHASE) { void logPhaseChange({ task_id: data.task.id, socket, @@ -113,7 +155,7 @@ export async function on_task_sort_order_change(_io: Server, socket: Socket, dat }); } - if (config.group_by === GroupBy.STATUS) { + if (data.group_by === GroupBy.STATUS) { void logStatusChange({ task_id: data.task.id, socket, @@ -122,7 +164,7 @@ export async function on_task_sort_order_change(_io: Server, socket: Socket, dat }); } - if (config.group_by === GroupBy.PRIORITY) { + if (data.group_by === GroupBy.PRIORITY) { void logPriorityChange({ task_id: data.task.id, socket, @@ -131,7 +173,7 @@ export async function on_task_sort_order_change(_io: Server, socket: Socket, dat }); } - void notifyProjectUpdates(socket, config.task_id); + void notifyProjectUpdates(socket, data.task.id); return; } catch (error) { log_error(error); diff --git a/worklenz-backend/worklenz-email-templates/release-note-template.html b/worklenz-backend/worklenz-email-templates/release-note-template.html index 4f0e2a45..f445e122 100644 --- a/worklenz-backend/worklenz-email-templates/release-note-template.html +++ b/worklenz-backend/worklenz-email-templates/release-note-template.html @@ -2,8 +2,8 @@ -
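Closing note on the simplified sort path introduced in `on-task-sort-order-change.ts` above: a hedged sketch of the `task_updates` payload it consumes. The shape mirrors the `ChangeRequest.task_updates` declaration in the diff; the UUIDs are placeholders, not real data.

```typescript
// Payload shape from ChangeRequest.task_updates; placeholder UUIDs only.
const task_updates: Array<{
  task_id: string;
  sort_order: number;
  status_id?: string;
  priority_id?: string;
  phase_id?: string;
}> = [
  { task_id: "11111111-1111-1111-1111-111111111111", sort_order: 0 },
  {
    task_id: "22222222-2222-2222-2222-222222222222",
    sort_order: 1,
    status_id: "33333333-3333-3333-3333-333333333333", // optional group move
  },
];

// Server side (see the handler above): the whole batch is applied in a single
// database call to the new SQL helper.
// await db.query("SELECT update_task_sort_orders_bulk($1);", [JSON.stringify(task_updates)]);
```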