Compare commits

..

10 Commits

Author SHA1 Message Date
chamiakJ
c1067d87fe refactor(reporting): update total working hours calculation in allocation controller
- Replaced project-specific hours per day with organization-wide working hours for total working hours calculation.
- Streamlined the SQL query to fetch organization working hours, ensuring accurate reporting based on organizational settings.
2025-05-20 16:49:07 +05:30
chamiakJ
97feef5982 refactor(reporting): improve utilization calculations in allocation controller
- Updated utilization percentage and utilized hours calculations to handle cases where total working hours are zero, providing 'N/A' for utilization percent when applicable.
- Adjusted logic for over/under utilized hours to ensure accurate reporting based on logged time and total working hours.
2025-05-20 16:18:16 +05:30
chamiakJ
76c92b1cc6 refactor(reporting): optimize date handling and organization working days logic
- Simplified date parsing by removing unnecessary start and end of day adjustments.
- Streamlined the fetching of organization working days from the database, consolidating queries for improved performance.
- Updated the calculation of total working hours to utilize project-specific hours per day, enhancing accuracy in reporting.
2025-05-20 15:51:33 +05:30
chamiakJ
67c62fc69b refactor(schedule): streamline organization working days update query
- Simplified the SQL update query for organization working days by removing unnecessary line breaks and improving readability.
- Adjusted the subquery to directly select organization IDs, enhancing clarity and maintainability.
2025-05-20 11:55:29 +05:30
chamiakJ
14d8f43001 refactor(reporting): clarify date parsing in allocation controller and frontend
- Updated comments to specify date parsing format as 'YYYY-MM-DD'.
- Modified date range handling in the frontend to format dates using date-fns for consistency.
2025-05-20 09:23:22 +05:30
chamiakJ
3b59a8560b refactor(reporting): simplify date parsing and improve logging format
- Updated date parsing to remove UTC conversion, maintaining local date context.
- Enhanced console logging to display dates in 'YYYY-MM-DD' format for clarity.
- Adjusted date range clause to directly use formatted dates for improved query accuracy.
2025-05-20 08:08:34 +05:30
chamiakJ
819252cedd refactor(reporting): update date handling and logging in allocation controller
- Removed UTC conversion for start and end dates to maintain local date context.
- Enhanced console logging to reflect local date values for better debugging.
2025-05-20 08:06:05 +05:30
chamiakJ
1dade05f54 feat(reporting): enhance date range handling in reporting allocation
- Added support for 'LAST_7_DAYS' and 'LAST_30_DAYS' date ranges in the reporting allocation logic.
- Updated date parsing to convert input dates to UTC while preserving the intended local date.
- Included console logs for debugging date values during processing.
2025-05-20 07:59:49 +05:30
chamiakJ
34613e5e0c Merge branch 'fix/performance-improvements' of https://github.com/Worklenz/worklenz into feature/member-time-progress-and-utilization 2025-05-20 07:07:28 +05:30
chamikaJ
a8b20680e5 feat: implement organization working days and hours settings
- Added functionality to fetch and update organization working days and hours in the admin center.
- Introduced a form for saving working days and hours, with validation and error handling.
- Updated the reporting allocation logic to utilize organization-specific working hours for accurate calculations.
- Enhanced localization files to support new settings in English, Spanish, and Portuguese.
2025-05-19 16:07:35 +05:30
1255 changed files with 15198 additions and 80713 deletions

View File

@@ -1,11 +0,0 @@
{
"permissions": {
"allow": [
"Bash(find:*)",
"Bash(npm run build:*)",
"Bash(npm run type-check:*)",
"Bash(npm run:*)"
],
"deny": []
}
}

416
README.md
View File

@@ -1,29 +1,11 @@
<h1 align="center">
<a href="https://worklenz.com" target="_blank" rel="noopener noreferrer">
<img src="https://s3.us-west-2.amazonaws.com/worklenz.com/assets/icon-144x144.png" alt="Worklenz Logo" width="75">
<img src="https://app.worklenz.com/assets/icons/icon-144x144.png" alt="Worklenz Logo" width="75">
</a>
<br>
Worklenz
</h1>
<p align="center">
<a href="https://github.com/Worklenz/worklenz/blob/main/LICENSE">
<img src="https://img.shields.io/badge/license-AGPL--3.0-blue.svg" alt="License">
</a>
<a href="https://github.com/Worklenz/worklenz/releases">
<img src="https://img.shields.io/github/v/release/Worklenz/worklenz" alt="Release">
</a>
<a href="https://github.com/Worklenz/worklenz/stargazers">
<img src="https://img.shields.io/github/stars/Worklenz/worklenz" alt="Stars">
</a>
<a href="https://github.com/Worklenz/worklenz/network/members">
<img src="https://img.shields.io/github/forks/Worklenz/worklenz" alt="Forks">
</a>
<a href="https://github.com/Worklenz/worklenz/issues">
<img src="https://img.shields.io/github/issues/Worklenz/worklenz" alt="Issues">
</a>
</p>
<p align="center">
<a href="https://worklenz.com/task-management/">Task Management</a> |
<a href="https://worklenz.com/time-tracking/">Time Tracking</a> |
@@ -45,24 +27,6 @@
Worklenz is a project management tool designed to help organizations improve their efficiency. It provides a
comprehensive solution for managing projects, tasks, and collaboration within teams.
## Table of Contents
- [Features](#features)
- [Tech Stack](#tech-stack)
- [Getting Started](#getting-started)
- [Quick Start (Docker)](#-quick-start-docker---recommended)
- [Manual Installation](#-manual-installation-for-development)
- [Deployment](#deployment)
- [Local Development](#local-development-with-docker)
- [Remote Server Deployment](#remote-server-deployment)
- [Configuration](#configuration)
- [MinIO Integration](#minio-integration)
- [Security](#security)
- [Analytics](#analytics)
- [Screenshots](#screenshots)
- [Contributing](#contributing)
- [License](#license)
## Features
- **Project Planning**: Create and organize projects, assign tasks to team members.
@@ -86,80 +50,42 @@ This repository contains the frontend and backend code for Worklenz.
## Getting Started
Choose your preferred setup method below. Docker is recommended for quick setup and testing.
These instructions will help you set up and run the Worklenz project on your local machine for development and testing purposes.
### 🚀 Quick Start (Docker - Recommended)
### Prerequisites
The fastest way to get Worklenz running locally with all dependencies included.
**Prerequisites:**
- Docker and Docker Compose installed on your system
- Git
**Steps:**
1. Clone the repository:
```bash
git clone https://github.com/Worklenz/worklenz.git
cd worklenz
```
2. Start the Docker containers:
```bash
docker-compose up -d
```
3. Access the application:
- **Frontend**: http://localhost:5000
- **Backend API**: http://localhost:3000
- **MinIO Console**: http://localhost:9001 (login: minioadmin/minioadmin)
4. To stop the services:
```bash
docker-compose down
```
**Alternative startup methods:**
- **Windows**: Run `start.bat`
- **Linux/macOS**: Run `./start.sh`
**Video Guide**: For a visual walkthrough of the local Docker deployment process, check out our [step-by-step video guide](https://www.youtube.com/watch?v=AfwAKxJbqLg).
### 🛠️ Manual Installation (For Development)
For developers who want to run the services individually or customize the setup.
**Prerequisites:**
- Node.js (version 18 or higher)
- PostgreSQL (version 15 or higher)
- PostgreSQL database
- An S3-compatible storage service (like MinIO) or Azure Blob Storage
**Steps:**
### Option 1: Manual Installation
1. Clone the repository:
1. Clone the repository
```bash
git clone https://github.com/Worklenz/worklenz.git
cd worklenz
```
2. Set up environment variables:
```bash
cp worklenz-backend/.env.template worklenz-backend/.env
# Update the environment variables with your configuration
```
2. Set up environment variables
- Copy the example environment files
```bash
cp .env.example .env
cp worklenz-backend/.env.example worklenz-backend/.env
```
- Update the environment variables with your configuration
3. Install dependencies:
3. Install dependencies
```bash
# Backend dependencies
# Install backend dependencies
cd worklenz-backend
npm install
# Frontend dependencies
# Install frontend dependencies
cd ../worklenz-frontend
npm install
```
4. Set up the database:
4. Set up the database
```bash
# Create a PostgreSQL database named worklenz_db
cd worklenz-backend
@@ -175,47 +101,49 @@ psql -U your_username -d worklenz_db -f database/sql/2_dml.sql
psql -U your_username -d worklenz_db -f database/sql/5_database_user.sql
```
5. Start the development servers:
5. Start the development servers
```bash
# Terminal 1: Start the backend
# In one terminal, start the backend
cd worklenz-backend
npm run dev
# Terminal 2: Start the frontend
# In another terminal, start the frontend
cd worklenz-frontend
npm run dev
```
6. Access the application at http://localhost:5000
## Deployment
### Option 2: Docker Setup
For local development, follow the [Quick Start (Docker)](#-quick-start-docker---recommended) section above.
The project includes a fully configured Docker setup with:
- Frontend React application
- Backend server
- PostgreSQL database
- MinIO for S3-compatible storage
### Remote Server Deployment
1. Clone the repository:
```bash
git clone https://github.com/Worklenz/worklenz.git
cd worklenz
```
When deploying to a remote server:
2. Start the Docker containers (choose one option):
1. Set up the environment files with your server's hostname:
```bash
# For HTTP/WS
./update-docker-env.sh your-server-hostname
# For HTTPS/WSS
./update-docker-env.sh your-server-hostname true
```
**Using Docker Compose directly**
```bash
docker-compose up -d
```
2. Pull and run the latest Docker images:
```bash
docker-compose pull
docker-compose up -d
```
3. The application will be available at:
- Frontend: http://localhost:5000
- Backend API: http://localhost:3000
- MinIO Console: http://localhost:9001 (login with minioadmin/minioadmin)
3. Access the application through your server's hostname:
- Frontend: http://your-server-hostname:5000
- Backend API: http://your-server-hostname:3000
4. **Video Guide**: For a complete walkthrough of deploying Worklenz to a remote server, check out our [deployment video guide](https://www.youtube.com/watch?v=CAZGu2iOXQs&t=10s).
4. To stop the services:
```bash
docker-compose down
```
## Configuration
@@ -230,46 +158,16 @@ Worklenz requires several environment variables to be configured for proper oper
Please refer to the `.env.example` files for a full list of required variables.
The Docker setup uses environment variables to configure the services:
- **Frontend:**
- `VITE_API_URL`: URL of the backend API (default: http://backend:3000 for container networking)
- `VITE_SOCKET_URL`: WebSocket URL for real-time communication (default: ws://backend:3000)
- **Backend:**
- Database connection parameters
- Storage configuration
- Other backend settings
For custom configuration, edit the `.env` file or the `update-docker-env.sh` script.
## MinIO Integration
### MinIO Integration
The project uses MinIO as an S3-compatible object storage service, which provides an open-source alternative to AWS S3 for development and production.
### Working with MinIO
MinIO provides an S3-compatible API, so any code that works with S3 will work with MinIO by simply changing the endpoint URL. The backend has been configured to use MinIO by default, with no additional configuration required.
- **MinIO Console**: http://localhost:9001
- Username: minioadmin
- Password: minioadmin
- **Default Bucket**: worklenz-bucket (created automatically when the containers start)
### Backend Storage Configuration
The backend is pre-configured to use MinIO with the following settings:
```javascript
// S3 credentials with MinIO defaults
export const REGION = process.env.AWS_REGION || "us-east-1";
export const BUCKET = process.env.AWS_BUCKET || "worklenz-bucket";
export const S3_URL = process.env.S3_URL || "http://minio:9000/worklenz-bucket";
export const S3_ACCESS_KEY_ID = process.env.AWS_ACCESS_KEY_ID || "minioadmin";
export const S3_SECRET_ACCESS_KEY = process.env.AWS_SECRET_ACCESS_KEY || "minioadmin";
```
### Security Considerations
For production deployments:
@@ -280,32 +178,19 @@ For production deployments:
4. Enable HTTPS for all public endpoints
5. Review and update dependencies regularly
## Contributing
We welcome contributions from the community! If you'd like to contribute, please follow our [contributing guidelines](CONTRIBUTING.md).
## Security
If you believe you have found a security vulnerability in Worklenz, we encourage you to responsibly disclose this and not open a public issue. We will investigate all legitimate reports.
Email [info@worklenz.com](mailto:info@worklenz.com) to disclose any security vulnerabilities.
## Analytics
## License
Worklenz uses Google Analytics to understand how the application is being used. This helps us improve the application and make better decisions about future development.
### What We Track
- Anonymous usage statistics
- Page views and navigation patterns
- Feature usage
- Browser and device information
### Privacy
- Analytics is opt-in only
- No personal information is collected
- Users can opt-out at any time
- Data is stored according to Google's privacy policy
### How to Opt-Out
If you've previously opted in and want to opt-out:
1. Clear your browser's local storage for the Worklenz domain
2. Or click the "Decline" button in the analytics notice if it appears
This project is licensed under the [MIT License](LICENSE).
## Screenshots
@@ -355,13 +240,206 @@ If you've previously opted in and want to opt-out:
</a>
</p>
## Contributing
### Contributing
We welcome contributions from the community! If you'd like to contribute, please follow our [contributing guidelines](CONTRIBUTING.md).
We welcome contributions from the community! If you'd like to contribute, please follow
our [contributing guidelines](CONTRIBUTING.md).
## License
### License
Worklenz is open source and released under the [GNU Affero General Public License Version 3 (AGPLv3)](LICENSE).
By contributing to Worklenz, you agree that your contributions will be licensed under its AGPL.
# Worklenz React
This repository contains the React version of Worklenz with a Docker setup for easy development and deployment.
## Getting Started with Docker
The project includes a fully configured Docker setup with:
- Frontend React application
- Backend server
- PostgreSQL database
- MinIO for S3-compatible storage
### Prerequisites
- Docker and Docker Compose installed on your system
- Git
### Quick Start
1. Clone the repository:
```bash
git clone https://github.com/Worklenz/worklenz.git
cd worklenz
```
2. Start the Docker containers (choose one option):
**Option 1: Using the provided scripts (easiest)**
- On Windows:
```
start.bat
```
- On Linux/macOS:
```bash
./start.sh
```
**Option 2: Using Docker Compose directly**
```bash
docker-compose up -d
```
3. The application will be available at:
- Frontend: http://localhost:5000
- Backend API: http://localhost:3000
- MinIO Console: http://localhost:9001 (login with minioadmin/minioadmin)
4. To stop the services (choose one option):
**Option 1: Using the provided scripts**
- On Windows:
```
stop.bat
```
- On Linux/macOS:
```bash
./stop.sh
```
**Option 2: Using Docker Compose directly**
```bash
docker-compose down
```
## MinIO Integration
The project uses MinIO as an S3-compatible object storage service, which provides an open-source alternative to AWS S3 for development and production.
### Working with MinIO
MinIO provides an S3-compatible API, so any code that works with S3 will work with MinIO by simply changing the endpoint URL. The backend has been configured to use MinIO by default, with no additional configuration required.
- **MinIO Console**: http://localhost:9001
- Username: minioadmin
- Password: minioadmin
- **Default Bucket**: worklenz-bucket (created automatically when the containers start)
### Backend Storage Configuration
The backend is pre-configured to use MinIO with the following settings:
```javascript
// S3 credentials with MinIO defaults
export const REGION = process.env.AWS_REGION || "us-east-1";
export const BUCKET = process.env.AWS_BUCKET || "worklenz-bucket";
export const S3_URL = process.env.S3_URL || "http://minio:9000/worklenz-bucket";
export const S3_ACCESS_KEY_ID = process.env.AWS_ACCESS_KEY_ID || "minioadmin";
export const S3_SECRET_ACCESS_KEY = process.env.AWS_SECRET_ACCESS_KEY || "minioadmin";
```
The S3 client is initialized with special MinIO configuration:
```javascript
const s3Client = new S3Client({
region: REGION,
credentials: {
accessKeyId: S3_ACCESS_KEY_ID || "",
secretAccessKey: S3_SECRET_ACCESS_KEY || "",
},
endpoint: getEndpointFromUrl(), // Extracts endpoint from S3_URL
forcePathStyle: true, // Required for MinIO
});
```
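As a rough usage sketch (the `uploadAvatar` helper, key, and content type here are illustrative, not part of the Worklenz backend API), uploading an object through such a client might look like:

```typescript
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// Mirrors the MinIO defaults shown above; in practice these come from the env vars.
const client = new S3Client({
  region: "us-east-1",
  credentials: { accessKeyId: "minioadmin", secretAccessKey: "minioadmin" },
  endpoint: "http://minio:9000", // MinIO endpoint rather than a public AWS one
  forcePathStyle: true,          // required for MinIO
});

// Hypothetical helper: store a file in the default bucket created at container start.
export async function uploadAvatar(key: string, body: Buffer): Promise<void> {
  await client.send(
    new PutObjectCommand({
      Bucket: "worklenz-bucket",
      Key: key,
      Body: body,
      ContentType: "image/png",
    })
  );
}
```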
### Environment Configuration
The project uses the following environment file structure:
- **Frontend**:
- `worklenz-frontend/.env.development` - Development environment variables
- `worklenz-frontend/.env.production` - Production build variables
- **Backend**:
- `worklenz-backend/.env` - Backend environment variables
### Setting Up Environment Files
The Docker environment script will create or overwrite all environment files:
```bash
# For HTTP/WS
./update-docker-env.sh your-hostname
# For HTTPS/WSS
./update-docker-env.sh your-hostname true
```
This script generates properly configured environment files for both development and production environments.
## Docker Deployment
### Local Development with Docker
1. Set up the environment files:
```bash
# For HTTP/WS
./update-docker-env.sh
# For HTTPS/WSS
./update-docker-env.sh localhost true
```
2. Run the application using Docker Compose:
```bash
docker-compose up -d
```
3. Access the application:
- Frontend: http://localhost:5000
- Backend API: http://localhost:3000 (or https://localhost:3000 with SSL)
### Remote Server Deployment
When deploying to a remote server:
1. Set up the environment files with your server's hostname:
```bash
# For HTTP/WS
./update-docker-env.sh your-server-hostname
# For HTTPS/WSS
./update-docker-env.sh your-server-hostname true
```
This ensures that the frontend correctly connects to the backend API.
2. Pull and run the latest Docker images:
```bash
docker-compose pull
docker-compose up -d
```
3. Access the application through your server's hostname:
- Frontend: http://your-server-hostname:5000
- Backend API: http://your-server-hostname:3000
### Environment Configuration
The Docker setup uses environment variables to configure the services:
- Frontend:
- `VITE_API_URL`: URL of the backend API (default: http://backend:3000 for container networking)
- `VITE_SOCKET_URL`: WebSocket URL for real-time communication (default: ws://backend:3000)
- Backend:
- Database connection parameters
- Storage configuration
- Other backend settings
For custom configuration, edit the `.env` file or the `update-docker-env.sh` script.

View File

@@ -4,7 +4,7 @@ Getting started with development is a breeze! Follow these steps and you'll be c
## Requirements
- Node.js version v20 or newer - [Node.js](https://nodejs.org/en/download/)
- Node.js version v16 or newer - [Node.js](https://nodejs.org/en/download/)
- PostgreSQL version v15 or newer - [PostgreSQL](https://www.postgresql.org/download/)
- S3-compatible storage (like MinIO) for file storage
@@ -38,7 +38,7 @@ Getting started with development is a breeze! Follow these steps and you'll be c
npm start
```
4. Navigate to [http://localhost:5173](http://localhost:5173) (development server)
4. Navigate to [http://localhost:5173](http://localhost:5173)
### Backend installation
@@ -126,7 +126,7 @@ For an easier setup, you can use Docker and Docker Compose:
```
3. Access the application:
- Frontend: http://localhost:5000 (Docker production build)
- Frontend: http://localhost:5000
- Backend API: http://localhost:3000
- MinIO Console: http://localhost:9001 (login with minioadmin/minioadmin)

View File

@@ -1,16 +0,0 @@
#!/bin/bash
set -eu
# Adjust these as needed:
CONTAINER=worklenz_db
DB_NAME=worklenz_db
DB_USER=postgres
BACKUP_DIR=./pg_backups
mkdir -p "$BACKUP_DIR"
timestamp=$(date +%Y-%m-%d_%H-%M-%S)
outfile="${BACKUP_DIR}/${DB_NAME}_${timestamp}.sql"
echo "Creating backup $outfile ..."
docker exec -t "$CONTAINER" pg_dump -U "$DB_USER" -d "$DB_NAME" > "$outfile"
echo "Backup saved to $outfile"

View File

@@ -7,8 +7,8 @@ services:
ports:
- "5000:5000"
depends_on:
- backend
restart: unless-stopped
backend:
condition: service_started
env_file:
- ./worklenz-frontend/.env.production
networks:
@@ -26,7 +26,6 @@ services:
condition: service_healthy
minio:
condition: service_started
restart: unless-stopped
env_file:
- ./worklenz-backend/.env
networks:
@@ -38,7 +37,6 @@ services:
ports:
- "9000:9000"
- "9001:9001"
restart: unless-stopped
environment:
MINIO_ROOT_USER: ${S3_ACCESS_KEY_ID:-minioadmin}
MINIO_ROOT_PASSWORD: ${S3_SECRET_ACCESS_KEY:-minioadmin}
@@ -54,14 +52,13 @@ services:
container_name: worklenz_createbuckets
depends_on:
- minio
restart: on-failure
entrypoint: >
/bin/sh -c '
echo "Waiting for MinIO to start...";
sleep 15;
for i in 1 2 3 4 5; do
echo "Attempt $i to connect to MinIO...";
if /usr/bin/mc alias set myminio http://minio:9000 minioadmin minioadmin; then
if /usr/bin/mc config host add myminio http://minio:9000 minioadmin minioadmin; then
echo "Successfully connected to MinIO!";
/usr/bin/mc mb --ignore-existing myminio/worklenz-bucket;
/usr/bin/mc policy set public myminio/worklenz-bucket;
@@ -83,79 +80,32 @@ services:
POSTGRES_DB: ${DB_NAME:-worklenz_db}
POSTGRES_PASSWORD: ${DB_PASSWORD:-password}
healthcheck:
test:
[
"CMD-SHELL",
"pg_isready -d ${DB_NAME:-worklenz_db} -U ${DB_USER:-postgres}",
]
test: [ "CMD-SHELL", "pg_isready -d ${DB_NAME:-worklenz_db} -U ${DB_USER:-postgres}" ]
interval: 10s
timeout: 5s
retries: 5
restart: unless-stopped
networks:
- worklenz
volumes:
- worklenz_postgres_data:/var/lib/postgresql/data
- type: bind
source: ./worklenz-backend/database/sql
target: /docker-entrypoint-initdb.d/sql
source: ./worklenz-backend/database
target: /docker-entrypoint-initdb.d
consistency: cached
- type: bind
source: ./worklenz-backend/database/migrations
target: /docker-entrypoint-initdb.d/migrations
consistency: cached
- type: bind
source: ./worklenz-backend/database/00_init.sh
target: /docker-entrypoint-initdb.d/00_init.sh
consistency: cached
- type: bind
source: ./pg_backups
target: /docker-entrypoint-initdb.d/pg_backups
command: >
bash -c '
if command -v apt-get >/dev/null 2>&1; then
apt-get update && apt-get install -y dos2unix
elif command -v apk >/dev/null 2>&1; then
apk add --no-cache dos2unix
fi
find /docker-entrypoint-initdb.d -type f -name "*.sh" -exec sh -c '"'"'
for f; do
dos2unix "$f" 2>/dev/null || true
chmod +x "$f"
done
'"'"' sh {} +
exec docker-entrypoint.sh postgres
'
db-backup:
image: postgres:15
container_name: worklenz_db_backup
environment:
POSTGRES_USER: ${DB_USER:-postgres}
POSTGRES_DB: ${DB_NAME:-worklenz_db}
POSTGRES_PASSWORD: ${DB_PASSWORD:-password}
depends_on:
db:
condition: service_healthy
volumes:
- ./pg_backups:/pg_backups # host dir for backup files
# set up bash loop to back up data every 24h
command: >
bash -c 'while true; do
sleep 86400;
PGPASSWORD=$$POSTGRES_PASSWORD pg_dump -h worklenz_db -U $$POSTGRES_USER -d $$POSTGRES_DB \
> /pg_backups/worklenz_db_$$(date +%Y-%m-%d_%H-%M-%S).sql;
find /pg_backups -type f -name "*.sql" -mtime +30 -delete;
done'
restart: unless-stopped
networks:
- worklenz
bash -c ' if command -v apt-get >/dev/null 2>&1; then
apt-get update && apt-get install -y dos2unix
elif command -v apk >/dev/null 2>&1; then
apk add --no-cache dos2unix
fi && find /docker-entrypoint-initdb.d -type f -name "*.sh" -exec sh -c '\''
dos2unix "{}" 2>/dev/null || true
chmod +x "{}"
'\'' \; && exec docker-entrypoint.sh postgres '
volumes:
worklenz_postgres_data:
worklenz_minio_data:
pgdata:
networks:
worklenz:

View File

@@ -1,429 +0,0 @@
# Enhanced Task Management: Technical Guide
## Overview
The Enhanced Task Management system is a comprehensive React-based interface built on top of WorkLenz's existing task infrastructure. It provides a modern, grouped view with drag-and-drop functionality, bulk operations, and responsive design.
## Architecture
### Component Structure
```
src/components/task-management/
├── TaskListBoard.tsx # Main container with DnD context
├── TaskGroup.tsx # Individual group with collapse/expand
├── TaskRow.tsx # Task display with rich metadata
├── GroupingSelector.tsx # Grouping method switcher
└── BulkActionBar.tsx # Bulk operations toolbar
```
### Integration Points
The system integrates with existing WorkLenz infrastructure:
- **Redux Store:** Uses `tasks.slice.ts` for state management
- **Types:** Leverages existing TypeScript interfaces
- **API Services:** Works with existing task API endpoints
- **WebSocket:** Supports real-time updates via existing socket system
## Core Components
### TaskListBoard.tsx
Main orchestrator component that provides:
- **DnD Context:** @dnd-kit drag-and-drop functionality
- **State Management:** Redux integration for task data
- **Event Handling:** Drag events and bulk operations
- **Layout Structure:** Header controls and group container
#### Key Props
```typescript
interface TaskListBoardProps {
projectId: string; // Required: Project identifier
className?: string; // Optional: Additional CSS classes
}
```
#### Redux Selectors Used
```typescript
const {
taskGroups, // ITaskListGroup[] - Grouped task data
loadingGroups, // boolean - Loading state
error, // string | null - Error state
groupBy, // IGroupBy - Current grouping method
search, // string | null - Search filter
archived, // boolean - Show archived tasks
} = useSelector((state: RootState) => state.taskReducer);
```
### TaskGroup.tsx
Renders individual task groups with:
- **Collapsible Headers:** Expand/collapse functionality
- **Progress Indicators:** Visual completion progress
- **Drop Zones:** Accept dropped tasks from other groups
- **Group Statistics:** Task counts and completion rates
#### Key Props
```typescript
interface TaskGroupProps {
group: ITaskListGroup; // Group data with tasks
projectId: string; // Project context
currentGrouping: IGroupBy; // Current grouping mode
selectedTaskIds: string[]; // Selected task IDs
onAddTask?: (groupId: string) => void;
onToggleCollapse?: (groupId: string) => void;
}
```
### TaskRow.tsx
Individual task display featuring:
- **Rich Metadata:** Progress, assignees, labels, due dates
- **Drag Handles:** Sortable within and between groups
- **Selection:** Multi-select with checkboxes
- **Subtask Support:** Expandable hierarchy display
#### Key Props
```typescript
interface TaskRowProps {
task: IProjectTask; // Task data
projectId: string; // Project context
groupId: string; // Parent group ID
currentGrouping: IGroupBy; // Current grouping mode
isSelected: boolean; // Selection state
isDragOverlay?: boolean; // Drag overlay rendering
index?: number; // Position in group
onSelect?: (taskId: string, selected: boolean) => void;
onToggleSubtasks?: (taskId: string) => void;
}
```
## State Management
### Redux Integration
The system uses existing WorkLenz Redux patterns:
```typescript
// Primary slice used
import {
fetchTaskGroups, // Async thunk for loading data
reorderTasks, // Update task order/group
setGroup, // Change grouping method
updateTaskStatus, // Update individual task status
updateTaskPriority, // Update individual task priority
// ... other existing actions
} from '@/features/tasks/tasks.slice';
```
### Data Flow
1. **Component Mount:** `TaskListBoard` dispatches `fetchTaskGroups(projectId)`
2. **Group Changes:** `setGroup(newGroupBy)` triggers data reorganization
3. **Drag Operations:** `reorderTasks()` updates task positions and properties
4. **Real-time Updates:** WebSocket events update Redux state automatically
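As a minimal sketch of steps 1 and 2 above (the hook name and store path are assumptions; the real component wiring may differ):
```typescript
import { useEffect } from 'react';
import { useDispatch, useSelector } from 'react-redux';
import { fetchTaskGroups, setGroup } from '@/features/tasks/tasks.slice';
import type { RootState } from '@/app/store'; // store path assumed

// Illustrative hook: load grouped tasks on mount and reload whenever the grouping changes.
export const useTaskGroups = (projectId: string) => {
  const dispatch = useDispatch<any>(); // a typed app dispatch would normally be used here
  const groupBy = useSelector((state: RootState) => state.taskReducer.groupBy);

  useEffect(() => {
    dispatch(fetchTaskGroups(projectId)); // step 1: fetch grouped task data
  }, [dispatch, projectId, groupBy]);     // step 2: setGroup() changes groupBy and re-triggers

  return { changeGrouping: (next: typeof groupBy) => dispatch(setGroup(next)) };
};
```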
## Drag and Drop Implementation
### DnD Kit Integration
Uses @dnd-kit for modern, accessible drag-and-drop:
```typescript
// Sensors for different input methods
const sensors = useSensors(
useSensor(PointerSensor, {
activationConstraint: { distance: 8 }
}),
useSensor(KeyboardSensor, {
coordinateGetter: sortableKeyboardCoordinates
})
);
```
### Drag Event Handling
```typescript
const handleDragEnd = (event: DragEndEvent) => {
const { active, over } = event;
// Determine source and target
const sourceGroup = findTaskGroup(active.id);
const targetGroup = findTargetGroup(over?.id);
// Update task arrays and dispatch changes
dispatch(reorderTasks({
activeGroupId: sourceGroup.id,
overGroupId: targetGroup.id,
fromIndex: sourceIndex,
toIndex: targetIndex,
task: movedTask,
updatedSourceTasks,
updatedTargetTasks,
}));
};
```
### Smart Property Updates
When tasks are moved between groups, properties update automatically:
- **Status Grouping:** Moving to "Done" group sets task status to "done"
- **Priority Grouping:** Moving to "High" group sets task priority to "high"
- **Phase Grouping:** Moving to "Testing" group sets task phase to "testing"
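A sketch of how that property change could be derived before dispatching `reorderTasks` (the `value` field on the group is an assumed simplification of `ITaskListGroup`):
```typescript
// Simplified stand-ins for the real ITaskListGroup / IProjectTask shapes.
type GroupBy = 'status' | 'priority' | 'phase';

interface GroupLike {
  id: string;
  value: string; // the status/priority/phase this group represents (assumed field name)
}

interface TaskUpdate {
  status?: string;
  priority?: string;
  phase?: string;
}

// Returns the property change implied by dropping a task into `targetGroup`.
function propertyUpdateForMove(grouping: GroupBy, targetGroup: GroupLike): TaskUpdate {
  switch (grouping) {
    case 'status':
      return { status: targetGroup.value };   // e.g. dropping into "Done" marks the task done
    case 'priority':
      return { priority: targetGroup.value }; // e.g. dropping into "High" raises the priority
    case 'phase':
      return { phase: targetGroup.value };    // e.g. dropping into "Testing" moves the phase
  }
}
```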
## Bulk Operations
### Selection State Management
```typescript
// Local state for task selection
const [selectedTaskIds, setSelectedTaskIds] = useState<string[]>([]);
// Selection handlers
const handleTaskSelect = (taskId: string, selected: boolean) => {
if (selected) {
setSelectedTaskIds(prev => [...prev, taskId]);
} else {
setSelectedTaskIds(prev => prev.filter(id => id !== taskId));
}
};
```
### Context-Aware Actions
Bulk actions adapt to current grouping:
```typescript
// Only show status changes when not grouped by status
{currentGrouping !== 'status' && (
<Dropdown overlay={statusMenu}>
<Button>Change Status</Button>
</Dropdown>
)}
```
## Performance Optimizations
### Memoized Selectors
```typescript
// Expensive group calculations are memoized
const taskGroups = useMemo(() => {
return createGroupsFromTasks(tasks, currentGrouping);
}, [tasks, currentGrouping]);
```
### Virtual Scrolling Ready
For large datasets, the system is prepared for react-window integration:
```typescript
// Large group detection
const shouldVirtualize = group.tasks.length > 100;
return shouldVirtualize ? (
<VirtualizedTaskList tasks={group.tasks} />
) : (
<StandardTaskList tasks={group.tasks} />
);
```
### Optimistic Updates
UI updates immediately while API calls process in the background:
```typescript
// Immediate UI update
dispatch(updateTaskStatusOptimistically(taskId, newStatus));
// API call with rollback on error
try {
await updateTaskStatus(taskId, newStatus);
} catch (error) {
dispatch(rollbackTaskStatusUpdate(taskId));
}
```
## Responsive Design
### Breakpoint Strategy
```css
/* Mobile-first responsive design */
.task-row {
padding: 12px;
}
@media (min-width: 768px) {
.task-row {
padding: 16px;
}
}
@media (min-width: 1024px) {
.task-row {
padding: 20px;
}
}
```
### Progressive Enhancement
- **Mobile:** Essential information only
- **Tablet:** Additional metadata visible
- **Desktop:** Full feature set with optimal layout
## Accessibility
### ARIA Implementation
```typescript
// Proper ARIA labels for screen readers
<div
role="button"
aria-label={`Move task ${task.name}`}
tabIndex={0}
{...dragHandleProps}
>
<DragOutlined />
</div>
```
### Keyboard Navigation
- **Tab:** Navigate between elements
- **Space:** Select/deselect tasks
- **Enter:** Activate buttons
- **Arrows:** Navigate sortable lists with keyboard sensor
### Focus Management
```typescript
// Maintain focus during dynamic updates
useEffect(() => {
if (shouldFocusTask) {
taskRef.current?.focus();
}
}, [taskGroups]);
```
## WebSocket Integration
### Real-time Updates
The system subscribes to existing WorkLenz WebSocket events:
```typescript
// Socket event handlers (existing WorkLenz patterns)
socket.on('TASK_STATUS_CHANGED', (data) => {
dispatch(updateTaskStatus(data));
});
socket.on('TASK_PROGRESS_UPDATED', (data) => {
dispatch(updateTaskProgress(data));
});
```
### Live Collaboration
- Multiple users can work simultaneously
- Changes appear in real-time
- Conflict resolution through server-side validation
## API Integration
### Existing Endpoints Used
```typescript
// Uses existing WorkLenz API services
import { tasksApiService } from '@/api/tasks/tasks.api.service';
// Task data fetching
tasksApiService.getTaskList(config);
// Task updates
tasksApiService.updateTask(taskId, changes);
// Bulk operations
tasksApiService.bulkUpdateTasks(taskIds, changes);
```
### Error Handling
```typescript
try {
await dispatch(fetchTaskGroups(projectId));
} catch (error) {
// Display user-friendly error message
message.error('Failed to load tasks. Please try again.');
logger.error('Task loading error:', error);
}
```
## Testing Strategy
### Component Testing
```typescript
// Example test structure
describe('TaskListBoard', () => {
it('should render task groups correctly', () => {
const mockTasks = generateMockTasks(10);
render(<TaskListBoard projectId="test-project" />);
expect(screen.getByText('Tasks (10)')).toBeInTheDocument();
});
it('should handle drag and drop operations', async () => {
// Test drag and drop functionality
});
});
```
### Integration Testing
- Redux state management
- API service integration
- WebSocket event handling
- Drag and drop operations
## Development Guidelines
### Code Organization
- Follow existing WorkLenz patterns
- Use TypeScript strictly
- Implement proper error boundaries
- Maintain accessibility standards
### Performance Considerations
- Memoize expensive calculations
- Implement virtual scrolling for large datasets
- Debounce user input operations
- Optimize re-render cycles
### Styling Standards
- Use existing Ant Design components
- Follow WorkLenz design system
- Implement responsive breakpoints
- Maintain dark mode compatibility
## Future Enhancements
### Planned Features
- Custom column integration
- Advanced filtering capabilities
- Kanban board view
- Enhanced time tracking
- Task templates
### Extension Points
The system is designed for easy extension:
```typescript
// Plugin architecture ready
interface TaskViewPlugin {
name: string;
component: React.ComponentType;
supportedGroupings: IGroupBy[];
}
const plugins: TaskViewPlugin[] = [
{ name: 'kanban', component: KanbanView, supportedGroupings: ['status'] },
{ name: 'timeline', component: TimelineView, supportedGroupings: ['phase'] },
];
```
## Deployment Considerations
### Bundle Size
- Tree-shake unused dependencies
- Code-split large components
- Optimize asset loading
### Browser Compatibility
- Modern browsers (ES2020+)
- Graceful degradation for older browsers
- Progressive enhancement approach
### Performance Monitoring
- Track component render times
- Monitor API response times
- Measure user interaction latency

View File

@@ -1,275 +0,0 @@
# Enhanced Task Management: User Guide
## What Is Enhanced Task Management?
The Enhanced Task Management system provides a modern, grouped view of your tasks with advanced features like drag-and-drop, bulk operations, and dynamic grouping. This system builds on WorkLenz's existing task infrastructure while offering improved productivity and organization tools.
## Why Use Enhanced Task Management?
- **Better Organization:** Group tasks by Status, Priority, or Phase for clearer project overview
- **Increased Productivity:** Bulk operations let you update multiple tasks at once
- **Intuitive Interface:** Drag-and-drop functionality makes task management feel natural
- **Rich Task Display:** See progress, assignees, labels, and due dates at a glance
- **Responsive Design:** Works seamlessly on desktop, tablet, and mobile devices
## Getting Started
### Accessing Enhanced Task Management
1. Navigate to your project workspace
2. Look for the enhanced task view option in your project interface
3. The system will display your tasks grouped by the current grouping method (default: Status)
### Understanding the Interface
The enhanced task management interface consists of several key areas:
- **Header Controls:** Task count, grouping selector, and action buttons
- **Task Groups:** Collapsible sections containing related tasks
- **Individual Tasks:** Rich task cards with metadata and actions
- **Bulk Action Bar:** Appears when multiple tasks are selected (blue bar)
## Task Grouping
### Available Grouping Options
You can organize your tasks using three different grouping methods:
#### 1. Status Grouping (Default)
Groups tasks by their current status:
- **To Do:** Tasks not yet started
- **Doing:** Tasks currently in progress
- **Done:** Completed tasks
#### 2. Priority Grouping
Groups tasks by their priority level:
- **Critical:** Highest priority, urgent tasks
- **High:** Important tasks requiring attention
- **Medium:** Standard priority tasks
- **Low:** Tasks that can be addressed later
#### 3. Phase Grouping
Groups tasks by project phases:
- **Planning:** Tasks in the planning stage
- **Development:** Implementation and development tasks
- **Testing:** Quality assurance and testing tasks
- **Deployment:** Release and deployment tasks
### Switching Between Groupings
1. Locate the "Group by" dropdown in the header controls
2. Select your preferred grouping method (Status, Priority, or Phase)
3. Tasks will automatically reorganize into the new groups
4. Your grouping preference is saved for future sessions
### Group Features
Each task group includes:
- **Color-coded headers** with visual indicators
- **Task count badges** showing the number of tasks in each group
- **Progress indicators** showing completion percentage
- **Collapse/expand functionality** to hide or show group contents
- **Add task buttons** to quickly create tasks in specific groups
## Drag and Drop
### Moving Tasks Within Groups
1. Hover over a task to reveal the drag handle (⋮⋮ icon)
2. Click and hold the drag handle
3. Drag the task to your desired position within the same group
4. Release to drop the task in its new position
### Moving Tasks Between Groups
1. Click and hold the drag handle on any task
2. Drag the task over a different group
3. The target group will highlight to show it can accept the task
4. Release to drop the task into the new group
5. The task's properties (status, priority, or phase) will automatically update
### Drag and Drop Benefits
- **Instant Updates:** Task properties change automatically when moved between groups
- **Visual Feedback:** Clear indicators show where tasks can be dropped
- **Keyboard Accessible:** Alternative keyboard controls for accessibility
- **Mobile Friendly:** Touch-friendly drag operations on mobile devices
## Multi-Select and Bulk Operations
### Selecting Tasks
You can select multiple tasks using several methods:
#### Individual Selection
- Click the checkbox next to any task to select it
- Click again to deselect
#### Range Selection
- Select the first task in your desired range
- Hold Shift and click the last task in the range
- All tasks between the first and last will be selected
#### Multiple Selection
- Hold Ctrl (or Cmd on Mac) while clicking tasks
- This allows you to select non-consecutive tasks
### Bulk Actions
When you have tasks selected, a blue bulk action bar appears with these options:
#### Change Status (when not grouped by Status)
- Update the status of all selected tasks at once
- Choose from available status options in your project
#### Set Priority (when not grouped by Priority)
- Assign the same priority level to all selected tasks
- Options include Critical, High, Medium, and Low
#### More Actions
Additional bulk operations include:
- **Assign to Member:** Add team members to multiple tasks
- **Add Labels:** Apply labels to selected tasks
- **Archive Tasks:** Move multiple tasks to archive
#### Delete Tasks
- Permanently remove multiple tasks at once
- Confirmation dialog prevents accidental deletions
### Bulk Action Tips
- The bulk action bar only shows relevant options based on your current grouping
- You can clear your selection at any time using the "Clear" button
- Bulk operations provide immediate feedback and can be undone if needed
## Task Display Features
### Rich Task Information
Each task displays comprehensive information:
#### Basic Information
- **Task Key:** Unique identifier (e.g., PROJ-123)
- **Task Name:** Clear, descriptive title
- **Description:** Additional details when available
#### Visual Indicators
- **Progress Bar:** Shows completion percentage (0-100%)
- **Priority Indicator:** Color-coded dot showing task importance
- **Status Color:** Left border color indicates current status
#### Team and Collaboration
- **Assignee Avatars:** Profile pictures of assigned team members (up to 3 visible)
- **Labels:** Color-coded tags for categorization
- **Comment Count:** Number of comments and discussions
- **Attachment Count:** Number of files attached to the task
#### Timing Information
- **Due Dates:** When tasks are scheduled to complete
- Red text: Overdue tasks
- Orange text: Due today or within 3 days
- Gray text: Future due dates
- **Time Tracking:** Estimated vs. logged time when available
### Subtask Support
Tasks with subtasks include additional features:
#### Expanding Subtasks
- Click the "+X" button next to task names to expand subtasks
- Subtasks appear indented below the parent task
- Click "X" to collapse subtasks
#### Subtask Progress
- Parent task progress reflects completion of all subtasks
- Individual subtask progress is visible when expanded
- Subtask counts show total number of child tasks
## Advanced Features
### Real-time Updates
- Changes made by team members appear instantly
- Live collaboration with multiple users
- WebSocket connections ensure data synchronization
### Search and Filtering
- Use existing project search and filter capabilities
- Enhanced task management respects current filter settings
- Search results maintain grouping organization
### Responsive Design
The interface adapts to different screen sizes:
#### Desktop (Large Screens)
- Full feature set with all metadata visible
- Optimal drag-and-drop experience
- Multi-column layouts where appropriate
#### Tablet (Medium Screens)
- Condensed but functional interface
- Touch-friendly interactions
- Simplified metadata display
#### Mobile (Small Screens)
- Stacked layout for easy navigation
- Large touch targets for selections
- Essential information prioritized
## Best Practices
### Organizing Your Tasks
1. **Choose the Right Grouping:** Select the grouping method that best fits your workflow
2. **Use Labels Consistently:** Apply meaningful labels for better categorization
3. **Keep Groups Balanced:** Avoid having too many tasks in a single group
4. **Regular Maintenance:** Review and update task organization periodically
### Collaboration Tips
1. **Clear Task Names:** Use descriptive titles that everyone understands
2. **Proper Assignment:** Assign tasks to appropriate team members
3. **Progress Updates:** Keep progress percentages current for accurate project tracking
4. **Use Comments:** Communicate about tasks using the comment system
### Productivity Techniques
1. **Batch Similar Operations:** Use bulk actions for efficiency
2. **Prioritize Effectively:** Use priority grouping during planning phases
3. **Track Progress:** Monitor completion rates using group progress indicators
4. **Plan Ahead:** Use due dates and time estimates for better scheduling
## Keyboard Shortcuts
### Navigation
- **Tab:** Move focus between elements
- **Enter:** Activate focused button or link
- **Esc:** Close open dialogs or clear selections
### Selection
- **Space:** Select/deselect focused task
- **Shift + Click:** Range selection
- **Ctrl + Click:** Multi-selection (Cmd + Click on Mac)
### Actions
- **Delete:** Remove selected tasks (with confirmation)
- **Ctrl + A:** Select all visible tasks (Cmd + A on Mac)
## Troubleshooting
### Common Issues
#### Tasks Not Moving Between Groups
- Ensure you have edit permissions for the tasks
- Check that you're dragging from the drag handle (⋮⋮ icon)
- Verify the target group allows the task type
#### Bulk Actions Not Working
- Confirm tasks are actually selected (checkboxes checked)
- Ensure you have appropriate permissions
- Check that the action is available for your current grouping
#### Missing Task Information
- Some metadata may be hidden on smaller screens
- Try expanding to full screen or using desktop view
- Check that task has the required information (assignees, labels, etc.)
### Performance Tips
- For projects with hundreds of tasks, consider using filters to reduce visible tasks
- Collapse groups you're not actively working with
- Clear selections when not performing bulk operations
## Getting Help
- Contact your workspace administrator for permission-related issues
- Check the main WorkLenz documentation for general task management help
- Report bugs or feature requests through your organization's support channels
## What's New
This enhanced task management system builds on WorkLenz's solid foundation while adding:
- Modern drag-and-drop interfaces
- Flexible grouping options
- Powerful bulk operation capabilities
- Rich visual task displays
- Mobile-responsive design
- Improved accessibility features

View File

@@ -16,45 +16,24 @@ Recurring tasks are tasks that repeat automatically on a schedule you choose. Th
5. Save the task. It will now be created automatically based on your chosen schedule.
## Schedule Options
You can choose how often your task repeats. Here are the available options:
You can choose how often your task repeats. Here are the most common options:
- **Daily:** The task is created every day.
- **Weekly:** The task is created once a week. You can pick one or more days (e.g., every Monday and Thursday).
- **Monthly:** The task is created once a month. You have two options:
- **On a specific date:** Choose a date from 1 to 28 (limited to 28 to ensure consistency across all months)
- **On a specific day:** Choose a week (first, second, third, fourth, or last) and a day of the week
- **Every X Days:** The task is created every specified number of days (e.g., every 3 days)
- **Every X Weeks:** The task is created every specified number of weeks (e.g., every 2 weeks)
- **Every X Months:** The task is created every specified number of months (e.g., every 3 months)
- **Weekly:** The task is created once a week. You can pick the day (e.g., every Monday).
- **Monthly:** The task is created once a month. You can pick the date (e.g., the 1st of every month).
- **Weekdays:** The task is created every Monday to Friday.
- **Custom:** Set your own schedule, such as every 2 days, every 3 weeks, or only on certain days.
### Examples
- "Send team update" every Friday (weekly)
- "Submit expense report" on the 15th of each month (monthly, specific date)
- "Monthly team meeting" on the first Monday of each month (monthly, specific day)
- "Submit expense report" on the 1st of each month (monthly)
- "Check backups" every day (daily)
- "Review project status" every Monday and Thursday (weekly, multiple days)
- "Quarterly report" every 3 months (every X months)
## Future Task Creation
The system automatically creates tasks up to a certain point in the future to ensure timely scheduling:
- **Daily Tasks:** Created up to 7 days in advance
- **Weekly Tasks:** Created up to 2 weeks in advance
- **Monthly Tasks:** Created up to 2 months in advance
- **Every X Days/Weeks/Months:** Created up to 2 intervals in advance
This ensures that:
- You always have upcoming tasks visible in your schedule
- Tasks are created at appropriate intervals
- The system maintains a reasonable number of future tasks
- "Review project status" every Monday and Thursday (custom)
## Tips
- You can edit or stop a recurring task at any time.
- Assign team members and labels to recurring tasks for better organization.
- Check your task list regularly to see newly created recurring tasks.
- For monthly tasks, dates are limited to 1-28 to ensure the task occurs on the same date every month.
- Tasks are created automatically within the future limit window; you don't need to create them manually.
- If you need to see tasks further in the future, they will be created automatically as the current tasks are completed.
## Need Help?
If you have questions or need help setting up recurring tasks, contact your workspace admin or support team.

View File

@@ -17,51 +17,6 @@ The recurring tasks cron job automates the creation of tasks based on predefined
3. Checks if a task for the next occurrence already exists.
4. Creates a new task if it does not exist and the next occurrence is within the allowed future window.
## Future Limit Logic
The system implements different future limits based on the schedule type to maintain an appropriate number of future tasks:
```typescript
const FUTURE_LIMITS = {
daily: moment.duration(7, 'days'),
weekly: moment.duration(2, 'weeks'),
monthly: moment.duration(2, 'months'),
every_x_days: (interval: number) => moment.duration(interval * 2, 'days'),
every_x_weeks: (interval: number) => moment.duration(interval * 2, 'weeks'),
every_x_months: (interval: number) => moment.duration(interval * 2, 'months')
};
```
### Implementation Details
- **Base Calculation:**
```typescript
const futureLimit = moment(template.last_checked_at || template.created_at)
.add(getFutureLimit(schedule.schedule_type, schedule.interval), 'days');
```
- **Task Creation Rules:**
1. Only create tasks if the next occurrence is before the future limit
2. Skip creation if a task already exists for that date
3. Update `last_checked_at` after processing
- **Benefits:**
- Prevents excessive task creation
- Maintains system performance
- Ensures timely task visibility
- Allows for schedule modifications
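Taken together, the rules above reduce to a gate roughly like the following (a simplified sketch; the actual job in `src/cron_jobs/recurring-tasks.ts` also handles assignments, labels, and error logging):
```typescript
import moment from 'moment';

// Simplified version of the future-limit gate described above.
function shouldCreateTask(
  nextOccurrence: moment.Moment,
  lastCheckedAt: Date,
  limit: moment.Duration,
  taskAlreadyExists: boolean
): boolean {
  const futureLimit = moment(lastCheckedAt).add(limit); // rule 1: only create within the window
  if (taskAlreadyExists) return false;                  // rule 2: skip if a task exists for that date
  return nextOccurrence.isBefore(futureLimit);
}

// After processing a template, the schedule's `last_checked_at` is advanced (rule 3).
```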
## Date Handling
- **Monthly Tasks:**
- Dates are limited to 1-28 to ensure consistency across all months
- This prevents issues with months having different numbers of days
- No special handling needed for February or months with 30/31 days
- **Weekly Tasks:**
- Supports multiple days of the week (0-6, where 0 is Sunday)
- Tasks are created for each selected day
- **Interval-based Tasks:**
- Every X days/weeks/months from the last task's end date
- Minimum interval is 1 day/week/month
- No maximum limit, but tasks are only created up to the future limit
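For the monthly case, the 1-28 clamp described above could be applied like this (illustrative only, not the production code):
```typescript
import moment from 'moment';

// Next occurrence of a monthly schedule on `dayOfMonth`, where dayOfMonth is 1-28.
function nextMonthlyOccurrence(after: moment.Moment, dayOfMonth: number): moment.Moment {
  const clamped = Math.min(Math.max(dayOfMonth, 1), 28); // 1-28 keeps every month valid
  const candidate = after.clone().date(clamped);
  // If that date has already passed this month, roll over to the next month.
  return candidate.isAfter(after) ? candidate : candidate.add(1, 'month');
}
```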
## Database Interactions
- **Templates and Schedules:**
- Templates are stored in `task_recurring_templates`.
@@ -72,7 +27,6 @@ const FUTURE_LIMITS = {
- Assigns team members and labels by calling appropriate functions/controllers.
- **State Tracking:**
- Updates `last_checked_at` and `last_created_task_end_date` in the schedule after processing.
- Maintains future limits based on schedule type.
## Task Creation Process
1. **Fetch Templates:** Retrieve all templates and their associated schedules.
@@ -87,12 +41,10 @@ const FUTURE_LIMITS = {
- **Cron Expression:** Modify the `TIME` constant in the code to change the schedule.
- **Task Template Structure:** Extend the template or schedule interfaces to support additional fields.
- **Task Creation Logic:** Customize the task creation process or add new assignment/labeling logic as needed.
- **Future Window:** Adjust the future limits by modifying the `FUTURE_LIMITS` configuration.
## Error Handling
- Errors are logged using the `log_error` utility.
- The job continues processing other templates even if one fails.
- Failed task creations are not retried automatically.
## References
- Source: `src/cron_jobs/recurring-tasks.ts`

6
package-lock.json generated
View File

@@ -1,6 +0,0 @@
{
"name": "worklenz",
"lockfileVersion": 3,
"requires": true,
"packages": {}
}

View File

@@ -1,41 +0,0 @@
-- Test script to verify the sort order constraint fix
-- Test the helper function
SELECT get_sort_column_name('status'); -- Should return 'status_sort_order'
SELECT get_sort_column_name('priority'); -- Should return 'priority_sort_order'
SELECT get_sort_column_name('phase'); -- Should return 'phase_sort_order'
SELECT get_sort_column_name('members'); -- Should return 'member_sort_order'
SELECT get_sort_column_name('unknown'); -- Should return 'status_sort_order' (default)
-- Test bulk update function (example - would need real project_id and task_ids)
/*
SELECT update_task_sort_orders_bulk(
'[
{"task_id": "example-uuid", "sort_order": 1, "status_id": "status-uuid"},
{"task_id": "example-uuid-2", "sort_order": 2, "status_id": "status-uuid"}
]'::json,
'status'
);
*/
-- Verify that sort_order constraint still exists and works
SELECT
tc.constraint_name,
tc.table_name,
kcu.column_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
ON tc.constraint_name = kcu.constraint_name
WHERE tc.constraint_name = 'tasks_sort_order_unique';
-- Check that new sort order columns don't have unique constraints (which is correct)
SELECT
tc.constraint_name,
tc.table_name,
kcu.column_name
FROM information_schema.table_constraints tc
JOIN information_schema.key_column_usage kcu
ON tc.constraint_name = kcu.constraint_name
WHERE kcu.table_name = 'tasks'
AND kcu.column_name IN ('status_sort_order', 'priority_sort_order', 'phase_sort_order', 'member_sort_order')
AND tc.constraint_type = 'UNIQUE';

View File

@@ -1,30 +0,0 @@
-- Test script to validate the separate sort order implementation
-- Check if new columns exist
SELECT column_name, data_type, is_nullable, column_default
FROM information_schema.columns
WHERE table_name = 'tasks'
AND column_name IN ('status_sort_order', 'priority_sort_order', 'phase_sort_order', 'member_sort_order')
ORDER BY column_name;
-- Check if helper function exists
SELECT routine_name, routine_type
FROM information_schema.routines
WHERE routine_name IN ('get_sort_column_name', 'update_task_sort_orders_bulk', 'handle_task_list_sort_order_change');
-- Sample test data to verify different sort orders work
-- (This would be run after the migrations)
/*
-- Test: Tasks should have different orders for different groupings
SELECT
id,
name,
sort_order,
status_sort_order,
priority_sort_order,
phase_sort_order,
member_sort_order
FROM tasks
WHERE project_id = '<test-project-id>'
ORDER BY status_sort_order;
*/

View File

@@ -73,8 +73,8 @@ cat > worklenz-backend/.env << EOL
NODE_ENV=production
PORT=3000
SESSION_NAME=worklenz.sid
SESSION_SECRET=$(openssl rand -base64 48)
COOKIE_SECRET=$(openssl rand -base64 48)
SESSION_SECRET=change_me_in_production
COOKIE_SECRET=change_me_in_production
# CORS
SOCKET_IO_CORS=${FRONTEND_URL}
@@ -123,7 +123,7 @@ SLACK_WEBHOOK=
COMMIT_BUILD_IMMEDIATELY=true
# JWT Secret
JWT_SECRET=$(openssl rand -base64 48)
JWT_SECRET=change_me_in_production
EOL
echo "Environment configuration updated for ${HOSTNAME} with" $([ "$USE_SSL" = "true" ] && echo "HTTPS/WSS" || echo "HTTP/WS")
@@ -138,4 +138,4 @@ echo "Frontend URL: ${FRONTEND_URL}"
echo "API URL: ${HTTP_PREFIX}${HOSTNAME}:3000"
echo "Socket URL: ${WS_PREFIX}${HOSTNAME}:3000"
echo "MinIO Dashboard URL: ${MINIO_DASHBOARD_URL}"
echo "CORS is configured to allow requests from: ${FRONTEND_URL}"
echo "CORS is configured to allow requests from: ${FRONTEND_URL}"

View File

@@ -1,9 +1,5 @@
node_modules
npm-debug.log
build
.scannerwork
coverage
.dockerignore
.git
*.md
tests

View File

@@ -78,8 +78,4 @@ GOOGLE_CAPTCHA_SECRET_KEY=your_captcha_secret_key
GOOGLE_CAPTCHA_PASS_SCORE=0.8
# Email Cronjobs
ENABLE_EMAIL_CRONJOBS=true
# RECURRING_JOBS
ENABLE_RECURRING_JOBS=true
RECURRING_JOBS_INTERVAL="0 11 */1 * 1-5"
ENABLE_EMAIL_CRONJOBS=true

View File

@@ -20,6 +20,9 @@ coverage
# nyc test coverage
.nyc_output
# Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files)
.grunt
# Bower dependency directory (https://bower.io/)
bower_components

View File

@@ -1,39 +1,26 @@
# --- Stage 1: Build ---
FROM node:20-slim AS builder
ARG RELEASE_VERSION
RUN apt-get update && apt-get install -y \
python3 \
make \
g++ \
curl \
postgresql-server-dev-all \
&& rm -rf /var/lib/apt/lists/*
# Use the official Node.js 20 image as a base
FROM node:20
# Create and set the working directory
WORKDIR /usr/src/app
# Install global dependencies
RUN npm install -g ts-node typescript grunt grunt-cli
# Copy package.json and package-lock.json (if available)
COPY package*.json ./
# Install app dependencies
RUN npm ci
# Copy the rest of the application code
COPY . .
# Run the build script to compile TypeScript to JavaScript
RUN npm run build
RUN echo "$RELEASE_VERSION" > release
# --- Stage 2: Production Image ---
FROM node:20-slim
RUN apt-get update && apt-get install -y libpq5 && rm -rf /var/lib/apt/lists/*
WORKDIR /app
COPY --from=builder /usr/src/app/package*.json ./
COPY --from=builder /usr/src/app/build ./build
COPY --from=builder /usr/src/app/node_modules ./node_modules
COPY --from=builder /usr/src/app/release ./release
COPY --from=builder /usr/src/app/worklenz-email-templates ./worklenz-email-templates
# Expose the port the app runs on
EXPOSE 3000
CMD ["node", "build/bin/www"]
# Start the application
CMD ["npm", "start"]

View File

@@ -0,0 +1,131 @@
module.exports = function (grunt) {
// Project configuration.
grunt.initConfig({
pkg: grunt.file.readJSON("package.json"),
clean: {
dist: "build"
},
compress: require("./grunt/grunt-compress"),
copy: {
main: {
files: [
{expand: true, cwd: "src", src: ["public/**"], dest: "build"},
{expand: true, cwd: "src", src: ["views/**"], dest: "build"},
{expand: true, cwd: "landing-page-assets", src: ["**"], dest: "build/public/assets"},
{expand: true, cwd: "src", src: ["shared/sample-data.json"], dest: "build", filter: "isFile"},
{expand: true, cwd: "src", src: ["shared/templates/**"], dest: "build", filter: "isFile"},
{expand: true, cwd: "src", src: ["shared/postgresql-error-codes.json"], dest: "build", filter: "isFile"},
]
},
packages: {
files: [
{expand: true, cwd: "", src: [".env"], dest: "build", filter: "isFile"},
{expand: true, cwd: "", src: [".gitignore"], dest: "build", filter: "isFile"},
{expand: true, cwd: "", src: ["release"], dest: "build", filter: "isFile"},
{expand: true, cwd: "", src: ["jest.config.js"], dest: "build", filter: "isFile"},
{expand: true, cwd: "", src: ["package.json"], dest: "build", filter: "isFile"},
{expand: true, cwd: "", src: ["package-lock.json"], dest: "build", filter: "isFile"},
{expand: true, cwd: "", src: ["common_modules/**"], dest: "build"}
]
}
},
sync: {
main: {
files: [
{cwd: "src", src: ["views/**", "public/**"], dest: "build/"}, // makes all src relative to cwd
],
verbose: true,
failOnError: true,
compareUsing: "md5"
}
},
uglify: {
all: {
files: [{
expand: true,
cwd: "build",
src: "**/*.js",
dest: "build"
}]
},
controllers: {
files: [{
expand: true,
cwd: "build",
src: "controllers/*.js",
dest: "build"
}]
},
routes: {
files: [{
expand: true,
cwd: "build",
src: "routes/**/*.js",
dest: "build"
}]
},
assets: {
files: [{
expand: true,
cwd: "build",
src: "public/assets/**/*.js",
dest: "build"
}]
}
},
shell: {
tsc: {
command: "tsc --build tsconfig.prod.json"
},
esbuild: {
// command: "esbuild `find src -type f -name '*.ts'` --platform=node --minify=false --target=esnext --format=cjs --tsconfig=tsconfig.prod.json --outdir=build"
command: "node esbuild && node cli/esbuild-patch"
},
tsc_dev: {
command: "tsc --build tsconfig.json"
},
swagger: {
command: "node ./cli/swagger"
},
inline_queries: {
command: "node ./cli/inline-queries"
}
},
watch: {
scripts: {
files: ["src/**/*.ts"],
tasks: ["shell:tsc_dev"],
options: {
debounceDelay: 250,
spawn: false,
}
},
other: {
files: ["src/**/*.pug", "landing-page-assets/**"],
tasks: ["sync"]
}
}
});
grunt.registerTask("clean", ["clean"]);
grunt.registerTask("copy", ["copy:main"]);
grunt.registerTask("swagger", ["shell:swagger"]);
grunt.registerTask("build:tsc", ["shell:tsc"]);
grunt.registerTask("build", ["clean", "shell:tsc", "copy:main", "compress"]);
grunt.registerTask("build:es", ["clean", "shell:esbuild", "copy:main", "uglify:assets", "compress"]);
grunt.registerTask("build:strict", ["clean", "shell:tsc", "copy:packages", "uglify:all", "copy:main", "compress"]);
grunt.registerTask("dev", ["clean", "copy:main", "shell:tsc_dev", "shell:inline_queries", "watch"]);
// Load the plugin that provides the "uglify" task.
grunt.loadNpmTasks("grunt-contrib-watch");
grunt.loadNpmTasks("grunt-contrib-clean");
grunt.loadNpmTasks("grunt-contrib-copy");
grunt.loadNpmTasks("grunt-contrib-uglify");
grunt.loadNpmTasks("grunt-contrib-compress");
grunt.loadNpmTasks("grunt-shell");
grunt.loadNpmTasks("grunt-sync");
// Default task(s).
grunt.registerTask("default", []);
};

View File

@@ -0,0 +1,55 @@
#!/bin/bash
set -e
# This script controls the order of SQL file execution during database initialization
echo "Starting database initialization..."
# Check if we have SQL files in expected locations
if [ -f "/docker-entrypoint-initdb.d/sql/0_extensions.sql" ]; then
SQL_DIR="/docker-entrypoint-initdb.d/sql"
echo "Using SQL files from sql/ subdirectory"
elif [ -f "/docker-entrypoint-initdb.d/0_extensions.sql" ]; then
# First time setup - move files to subdirectory
echo "Moving SQL files to sql/ subdirectory..."
mkdir -p /docker-entrypoint-initdb.d/sql
# Move all SQL files (except this script) to the subdirectory
for f in /docker-entrypoint-initdb.d/*.sql; do
if [ -f "$f" ]; then
cp "$f" /docker-entrypoint-initdb.d/sql/
echo "Copied $f to sql/ subdirectory"
fi
done
SQL_DIR="/docker-entrypoint-initdb.d/sql"
else
echo "SQL files not found in expected locations!"
exit 1
fi
# Execute SQL files in the correct order
echo "Executing 0_extensions.sql..."
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/0_extensions.sql"
echo "Executing 1_tables.sql..."
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/1_tables.sql"
echo "Executing indexes.sql..."
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/indexes.sql"
echo "Executing 4_functions.sql..."
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/4_functions.sql"
echo "Executing triggers.sql..."
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/triggers.sql"
echo "Executing 3_views.sql..."
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/3_views.sql"
echo "Executing 2_dml.sql..."
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/2_dml.sql"
echo "Executing 5_database_user.sql..."
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/5_database_user.sql"
echo "Database initialization completed successfully"

View File

@@ -1,88 +0,0 @@
#!/bin/bash
set -e
echo "Starting database initialization..."
SQL_DIR="/docker-entrypoint-initdb.d/sql"
MIGRATIONS_DIR="/docker-entrypoint-initdb.d/migrations"
BACKUP_DIR="/docker-entrypoint-initdb.d/pg_backups"
# --------------------------------------------
# 🗄️ STEP 1: Attempt to restore latest backup
# --------------------------------------------
if [ -d "$BACKUP_DIR" ]; then
LATEST_BACKUP=$(ls -t "$BACKUP_DIR"/*.sql 2>/dev/null | head -n 1)
else
LATEST_BACKUP=""
fi
if [ -f "$LATEST_BACKUP" ]; then
echo "🗄️ Found latest backup: $LATEST_BACKUP"
echo "⏳ Restoring from backup..."
psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" < "$LATEST_BACKUP"
echo "✅ Backup restoration complete. Skipping schema and migrations."
exit 0
else
echo " No valid backup found. Proceeding with base schema and migrations."
fi
# --------------------------------------------
# 🏗️ STEP 2: Continue with base schema setup
# --------------------------------------------
# Create migrations table if it doesn't exist
psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "
CREATE TABLE IF NOT EXISTS schema_migrations (
version TEXT PRIMARY KEY,
applied_at TIMESTAMP DEFAULT now()
);
"
# List of base schema files to execute in order
BASE_SQL_FILES=(
"0_extensions.sql"
"1_tables.sql"
"indexes.sql"
"4_functions.sql"
"triggers.sql"
"3_views.sql"
"2_dml.sql"
"5_database_user.sql"
)
echo "Running base schema SQL files in order..."
for file in "${BASE_SQL_FILES[@]}"; do
full_path="$SQL_DIR/$file"
if [ -f "$full_path" ]; then
echo "Executing $file..."
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d "$POSTGRES_DB" -f "$full_path"
else
echo "WARNING: $file not found, skipping."
fi
done
echo "✅ Base schema SQL execution complete."
# --------------------------------------------
# 🚀 STEP 3: Apply SQL migrations
# --------------------------------------------
if [ -d "$MIGRATIONS_DIR" ] && compgen -G "$MIGRATIONS_DIR/*.sql" > /dev/null; then
echo "Applying migrations..."
for f in "$MIGRATIONS_DIR"/*.sql; do
version=$(basename "$f")
if ! psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -tAc "SELECT 1 FROM schema_migrations WHERE version = '$version'" | grep -q 1; then
echo "Applying migration: $version"
psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -f "$f"
psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "INSERT INTO schema_migrations (version) VALUES ('$version');"
else
echo "Skipping already applied migration: $version"
fi
done
else
echo "No migration files found or directory is empty, skipping migrations."
fi
echo "🎉 Database initialization completed successfully."

View File

@@ -1,135 +0,0 @@
-- Performance indexes for optimized tasks queries
-- Migration: 20250115000000-performance-indexes.sql
-- Composite index for main task filtering
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_project_archived_parent
ON tasks(project_id, archived, parent_task_id)
WHERE archived = FALSE;
-- Index for status joins
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_status_project
ON tasks(status_id, project_id)
WHERE archived = FALSE;
-- Index for assignees lookup
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_assignees_task_member
ON tasks_assignees(task_id, team_member_id);
-- Index for phase lookup
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_phase_task_phase
ON task_phase(task_id, phase_id);
-- Index for subtask counting
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_parent_archived
ON tasks(parent_task_id, archived)
WHERE parent_task_id IS NOT NULL AND archived = FALSE;
-- Index for labels
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_labels_task_label
ON task_labels(task_id, label_id);
-- Index for comments count
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_comments_task
ON task_comments(task_id);
-- Index for attachments count
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_attachments_task
ON task_attachments(task_id);
-- Index for work log aggregation
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_work_log_task
ON task_work_log(task_id);
-- Index for subscribers check
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_subscribers_task
ON task_subscribers(task_id);
-- Index for dependencies check
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_dependencies_task
ON task_dependencies(task_id);
-- Index for timers lookup
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_timers_task_user
ON task_timers(task_id, user_id);
-- Index for custom columns
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_cc_column_values_task
ON cc_column_values(task_id);
-- Index for team member info view optimization
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_team_members_team_user
ON team_members(team_id, user_id)
WHERE active = TRUE;
-- Index for notification settings
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_notification_settings_user_team
ON notification_settings(user_id, team_id);
-- Index for task status categories
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_statuses_category
ON task_statuses(category_id, project_id);
-- Index for project phases
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_project_phases_project_sort
ON project_phases(project_id, sort_index);
-- Index for task priorities
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_priorities_value
ON task_priorities(value);
-- Index for team labels
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_team_labels_team
ON team_labels(team_id);
-- NEW INDEXES FOR PERFORMANCE OPTIMIZATION --
-- Composite index for task main query optimization (covers most WHERE conditions)
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_performance_main
ON tasks(project_id, archived, parent_task_id, status_id, priority_id)
WHERE archived = FALSE;
-- Index for sorting by sort_order with project filter
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_project_sort_order
ON tasks(project_id, sort_order)
WHERE archived = FALSE;
-- Index for email_invitations to optimize team_member_info_view
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_email_invitations_team_member
ON email_invitations(team_member_id);
-- Covering index for task status with category information
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_statuses_covering
ON task_statuses(id, category_id, project_id);
-- Index for task aggregation queries (parent task progress calculation)
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_parent_status_archived
ON tasks(parent_task_id, status_id, archived)
WHERE archived = FALSE;
-- Index for project team member filtering
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_team_members_project_lookup
ON team_members(team_id, active, user_id)
WHERE active = TRUE;
-- Covering index for tasks with frequently accessed columns
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_covering_main
ON tasks(id, project_id, archived, parent_task_id, status_id, priority_id, sort_order, name)
WHERE archived = FALSE;
-- Index for task search functionality
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_name_search
ON tasks USING gin(to_tsvector('english', name))
WHERE archived = FALSE;
-- Index for date-based filtering (if used)
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_dates
ON tasks(project_id, start_date, end_date)
WHERE archived = FALSE;
-- Index for task timers with user filtering
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_timers_user_task
ON task_timers(user_id, task_id);
-- Index for sys_task_status_categories lookups
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_sys_task_status_categories_covering
ON sys_task_status_categories(id, color_code, color_code_dark, is_done, is_doing, is_todo);
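-- Hedged verification sketch: where these indexes are installed, they can be
-- listed from pg_indexes to confirm the migration state (names as defined above).
SELECT tablename, indexname
FROM pg_indexes
WHERE schemaname = 'public'
  AND indexname LIKE 'idx_%'
ORDER BY tablename, indexname;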

View File

@@ -1,143 +0,0 @@
-- Fix window function error in task sort optimized functions
-- Error: window functions are not allowed in UPDATE
-- Replace the optimized sort functions to avoid CTE usage in UPDATE statements
CREATE OR REPLACE FUNCTION handle_task_list_sort_between_groups_optimized(_from_index integer, _to_index integer, _task_id uuid, _project_id uuid, _batch_size integer DEFAULT 100) RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_offset INT := 0;
_affected_rows INT;
BEGIN
-- PERFORMANCE OPTIMIZATION: Use direct updates without CTE in UPDATE
IF (_to_index = -1)
THEN
_to_index = COALESCE((SELECT MAX(sort_order) + 1 FROM tasks WHERE project_id = _project_id), 0);
END IF;
-- PERFORMANCE OPTIMIZATION: Batch updates for large datasets
IF _to_index > _from_index
THEN
LOOP
UPDATE tasks
SET sort_order = sort_order - 1
WHERE project_id = _project_id
AND sort_order > _from_index
AND sort_order < _to_index
AND sort_order > _offset
AND sort_order <= _offset + _batch_size;
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
EXIT WHEN _affected_rows = 0;
_offset := _offset + _batch_size;
END LOOP;
UPDATE tasks SET sort_order = _to_index - 1 WHERE id = _task_id AND project_id = _project_id;
END IF;
IF _to_index < _from_index
THEN
_offset := 0;
LOOP
UPDATE tasks
SET sort_order = sort_order + 1
WHERE project_id = _project_id
AND sort_order > _to_index
AND sort_order < _from_index
AND sort_order > _offset
AND sort_order <= _offset + _batch_size;
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
EXIT WHEN _affected_rows = 0;
_offset := _offset + _batch_size;
END LOOP;
UPDATE tasks SET sort_order = _to_index + 1 WHERE id = _task_id AND project_id = _project_id;
END IF;
END
$$;
-- Replace the second optimized sort function
CREATE OR REPLACE FUNCTION handle_task_list_sort_inside_group_optimized(_from_index integer, _to_index integer, _task_id uuid, _project_id uuid, _batch_size integer DEFAULT 100) RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_offset INT := 0;
_affected_rows INT;
BEGIN
-- PERFORMANCE OPTIMIZATION: Batch updates for large datasets without CTE in UPDATE
IF _to_index > _from_index
THEN
LOOP
UPDATE tasks
SET sort_order = sort_order - 1
WHERE project_id = _project_id
AND sort_order > _from_index
AND sort_order <= _to_index
AND sort_order > _offset
AND sort_order <= _offset + _batch_size;
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
EXIT WHEN _affected_rows = 0;
_offset := _offset + _batch_size;
END LOOP;
END IF;
IF _to_index < _from_index
THEN
_offset := 0;
LOOP
UPDATE tasks
SET sort_order = sort_order + 1
WHERE project_id = _project_id
AND sort_order >= _to_index
AND sort_order < _from_index
AND sort_order > _offset
AND sort_order <= _offset + _batch_size;
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
EXIT WHEN _affected_rows = 0;
_offset := _offset + _batch_size;
END LOOP;
END IF;
UPDATE tasks SET sort_order = _to_index WHERE id = _task_id AND project_id = _project_id;
END
$$;
-- Add simple bulk update function as alternative
CREATE OR REPLACE FUNCTION update_task_sort_orders_bulk(_updates json) RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_update_record RECORD;
BEGIN
-- Simple approach: update each task's sort_order from the provided array
FOR _update_record IN
SELECT
(item->>'task_id')::uuid as task_id,
(item->>'sort_order')::int as sort_order,
(item->>'status_id')::uuid as status_id,
(item->>'priority_id')::uuid as priority_id,
(item->>'phase_id')::uuid as phase_id
FROM json_array_elements(_updates) as item
LOOP
UPDATE tasks
SET
sort_order = _update_record.sort_order,
status_id = COALESCE(_update_record.status_id, status_id),
priority_id = COALESCE(_update_record.priority_id, priority_id)
WHERE id = _update_record.task_id;
-- Handle phase updates separately since it's in a different table
IF _update_record.phase_id IS NOT NULL THEN
INSERT INTO task_phase (task_id, phase_id)
VALUES (_update_record.task_id, _update_record.phase_id)
ON CONFLICT (task_id) DO UPDATE SET phase_id = _update_record.phase_id;
END IF;
END LOOP;
END
$$;
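-- Example invocation (a sketch, assuming the function above is installed; the
-- UUIDs below are placeholders). Each array element carries the task id, its new
-- sort_order, and optionally the status/priority/phase ids it was dropped onto;
-- omitted keys resolve to NULL and the COALESCE calls keep the existing values.
SELECT update_task_sort_orders_bulk('[
  {"task_id": "11111111-1111-1111-1111-111111111111", "sort_order": 0,
   "status_id": "22222222-2222-2222-2222-222222222222"},
  {"task_id": "33333333-3333-3333-3333-333333333333", "sort_order": 1}
]'::json);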

View File

@@ -145,7 +145,7 @@ BEGIN
SET progress_value = NULL,
progress_mode = NULL
WHERE project_id = _project_id
AND progress_mode::text::progress_mode_type = _old_mode;
AND progress_mode = _old_mode;
END IF;
RETURN NEW;

View File

@@ -118,7 +118,7 @@ BEGIN
SELECT SUM(time_spent)
FROM task_work_log
WHERE task_id = t.id
), 0) as logged_minutes
), 0) / 60.0 as logged_minutes
FROM tasks t
WHERE t.id = _task_id
)

View File

@@ -1,300 +0,0 @@
-- Fix Duplicate Sort Orders Script
-- This script detects and fixes duplicate sort order values that break task ordering
-- 1. DETECTION QUERIES - Run these first to see the scope of the problem
-- Check for duplicates in main sort_order column
SELECT
project_id,
sort_order,
COUNT(*) as duplicate_count,
STRING_AGG(id::text, ', ') as task_ids
FROM tasks
WHERE project_id IS NOT NULL
GROUP BY project_id, sort_order
HAVING COUNT(*) > 1
ORDER BY project_id, sort_order;
-- Check for duplicates in status_sort_order
SELECT
project_id,
status_sort_order,
COUNT(*) as duplicate_count,
STRING_AGG(id::text, ', ') as task_ids
FROM tasks
WHERE project_id IS NOT NULL
GROUP BY project_id, status_sort_order
HAVING COUNT(*) > 1
ORDER BY project_id, status_sort_order;
-- Check for duplicates in priority_sort_order
SELECT
project_id,
priority_sort_order,
COUNT(*) as duplicate_count,
STRING_AGG(id::text, ', ') as task_ids
FROM tasks
WHERE project_id IS NOT NULL
GROUP BY project_id, priority_sort_order
HAVING COUNT(*) > 1
ORDER BY project_id, priority_sort_order;
-- Check for duplicates in phase_sort_order
SELECT
project_id,
phase_sort_order,
COUNT(*) as duplicate_count,
STRING_AGG(id::text, ', ') as task_ids
FROM tasks
WHERE project_id IS NOT NULL
GROUP BY project_id, phase_sort_order
HAVING COUNT(*) > 1
ORDER BY project_id, phase_sort_order;
-- Note: member_sort_order removed - no longer used
-- 2. CLEANUP FUNCTIONS
-- Fix duplicates in main sort_order column
CREATE OR REPLACE FUNCTION fix_sort_order_duplicates() RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_project RECORD;
_task RECORD;
_counter INTEGER;
BEGIN
-- For each project, reassign sort_order values to ensure uniqueness
FOR _project IN
SELECT DISTINCT project_id
FROM tasks
WHERE project_id IS NOT NULL
LOOP
_counter := 0;
-- Reassign sort_order values sequentially for this project
FOR _task IN
SELECT id
FROM tasks
WHERE project_id = _project.project_id
ORDER BY sort_order, created_at
LOOP
UPDATE tasks
SET sort_order = _counter
WHERE id = _task.id;
_counter := _counter + 1;
END LOOP;
END LOOP;
RAISE NOTICE 'Fixed sort_order duplicates for all projects';
END
$$;
-- Fix duplicates in status_sort_order column
CREATE OR REPLACE FUNCTION fix_status_sort_order_duplicates() RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_project RECORD;
_task RECORD;
_counter INTEGER;
BEGIN
FOR _project IN
SELECT DISTINCT project_id
FROM tasks
WHERE project_id IS NOT NULL
LOOP
_counter := 0;
FOR _task IN
SELECT id
FROM tasks
WHERE project_id = _project.project_id
ORDER BY status_sort_order, created_at
LOOP
UPDATE tasks
SET status_sort_order = _counter
WHERE id = _task.id;
_counter := _counter + 1;
END LOOP;
END LOOP;
RAISE NOTICE 'Fixed status_sort_order duplicates for all projects';
END
$$;
-- Fix duplicates in priority_sort_order column
CREATE OR REPLACE FUNCTION fix_priority_sort_order_duplicates() RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_project RECORD;
_task RECORD;
_counter INTEGER;
BEGIN
FOR _project IN
SELECT DISTINCT project_id
FROM tasks
WHERE project_id IS NOT NULL
LOOP
_counter := 0;
FOR _task IN
SELECT id
FROM tasks
WHERE project_id = _project.project_id
ORDER BY priority_sort_order, created_at
LOOP
UPDATE tasks
SET priority_sort_order = _counter
WHERE id = _task.id;
_counter := _counter + 1;
END LOOP;
END LOOP;
RAISE NOTICE 'Fixed priority_sort_order duplicates for all projects';
END
$$;
-- Fix duplicates in phase_sort_order column
CREATE OR REPLACE FUNCTION fix_phase_sort_order_duplicates() RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_project RECORD;
_task RECORD;
_counter INTEGER;
BEGIN
FOR _project IN
SELECT DISTINCT project_id
FROM tasks
WHERE project_id IS NOT NULL
LOOP
_counter := 0;
FOR _task IN
SELECT id
FROM tasks
WHERE project_id = _project.project_id
ORDER BY phase_sort_order, created_at
LOOP
UPDATE tasks
SET phase_sort_order = _counter
WHERE id = _task.id;
_counter := _counter + 1;
END LOOP;
END LOOP;
RAISE NOTICE 'Fixed phase_sort_order duplicates for all projects';
END
$$;
-- Note: fix_member_sort_order_duplicates() removed - no longer needed
-- Master function to fix all sort order duplicates
CREATE OR REPLACE FUNCTION fix_all_duplicate_sort_orders() RETURNS void
LANGUAGE plpgsql
AS
$$
BEGIN
RAISE NOTICE 'Starting sort order cleanup for all columns...';
PERFORM fix_sort_order_duplicates();
PERFORM fix_status_sort_order_duplicates();
PERFORM fix_priority_sort_order_duplicates();
PERFORM fix_phase_sort_order_duplicates();
RAISE NOTICE 'Completed sort order cleanup for all columns';
END
$$;
-- 3. VERIFICATION FUNCTION
-- Verify that duplicates have been fixed
CREATE OR REPLACE FUNCTION verify_sort_order_integrity() RETURNS TABLE(
column_name text,
project_id uuid,
duplicate_count bigint,
status text
)
LANGUAGE plpgsql
AS
$$
BEGIN
-- Check sort_order duplicates
RETURN QUERY
SELECT
'sort_order'::text as column_name,
t.project_id,
COUNT(*) as duplicate_count,
CASE WHEN COUNT(*) > 1 THEN 'DUPLICATES FOUND' ELSE 'OK' END as status
FROM tasks t
WHERE t.project_id IS NOT NULL
GROUP BY t.project_id, t.sort_order
HAVING COUNT(*) > 1;
-- Check status_sort_order duplicates
RETURN QUERY
SELECT
'status_sort_order'::text as column_name,
t.project_id,
COUNT(*) as duplicate_count,
CASE WHEN COUNT(*) > 1 THEN 'DUPLICATES FOUND' ELSE 'OK' END as status
FROM tasks t
WHERE t.project_id IS NOT NULL
GROUP BY t.project_id, t.status_sort_order
HAVING COUNT(*) > 1;
-- Check priority_sort_order duplicates
RETURN QUERY
SELECT
'priority_sort_order'::text as column_name,
t.project_id,
COUNT(*) as duplicate_count,
CASE WHEN COUNT(*) > 1 THEN 'DUPLICATES FOUND' ELSE 'OK' END as status
FROM tasks t
WHERE t.project_id IS NOT NULL
GROUP BY t.project_id, t.priority_sort_order
HAVING COUNT(*) > 1;
-- Check phase_sort_order duplicates
RETURN QUERY
SELECT
'phase_sort_order'::text as column_name,
t.project_id,
COUNT(*) as duplicate_count,
CASE WHEN COUNT(*) > 1 THEN 'DUPLICATES FOUND' ELSE 'OK' END as status
FROM tasks t
WHERE t.project_id IS NOT NULL
GROUP BY t.project_id, t.phase_sort_order
HAVING COUNT(*) > 1;
-- Note: member_sort_order verification removed - column no longer used
END
$$;
-- 4. USAGE INSTRUCTIONS
/*
USAGE:
1. First, run the detection queries to see which projects have duplicates
2. Then run this to fix all duplicates:
SELECT fix_all_duplicate_sort_orders();
3. Finally, verify the fix worked:
SELECT * FROM verify_sort_order_integrity();
If verification returns no rows, all duplicates have been fixed successfully.
WARNING: This will reassign sort order values based on current order + creation time.
Make sure to backup your database before running these functions.
*/

View File

@@ -1,37 +0,0 @@
-- Migration: Add separate sort order columns for different grouping types
-- This allows users to maintain different task orders when switching between grouping views
-- Add new sort order columns
ALTER TABLE tasks ADD COLUMN IF NOT EXISTS status_sort_order INTEGER DEFAULT 0;
ALTER TABLE tasks ADD COLUMN IF NOT EXISTS priority_sort_order INTEGER DEFAULT 0;
ALTER TABLE tasks ADD COLUMN IF NOT EXISTS phase_sort_order INTEGER DEFAULT 0;
ALTER TABLE tasks ADD COLUMN IF NOT EXISTS member_sort_order INTEGER DEFAULT 0;
-- Initialize new columns with current sort_order values
UPDATE tasks SET
status_sort_order = sort_order,
priority_sort_order = sort_order,
phase_sort_order = sort_order,
member_sort_order = sort_order
WHERE status_sort_order = 0
OR priority_sort_order = 0
OR phase_sort_order = 0
OR member_sort_order = 0;
-- Add constraints to ensure non-negative values
ALTER TABLE tasks ADD CONSTRAINT tasks_status_sort_order_check CHECK (status_sort_order >= 0);
ALTER TABLE tasks ADD CONSTRAINT tasks_priority_sort_order_check CHECK (priority_sort_order >= 0);
ALTER TABLE tasks ADD CONSTRAINT tasks_phase_sort_order_check CHECK (phase_sort_order >= 0);
ALTER TABLE tasks ADD CONSTRAINT tasks_member_sort_order_check CHECK (member_sort_order >= 0);
-- Add indexes for performance (since these will be used for ordering)
CREATE INDEX IF NOT EXISTS idx_tasks_status_sort_order ON tasks(project_id, status_sort_order);
CREATE INDEX IF NOT EXISTS idx_tasks_priority_sort_order ON tasks(project_id, priority_sort_order);
CREATE INDEX IF NOT EXISTS idx_tasks_phase_sort_order ON tasks(project_id, phase_sort_order);
CREATE INDEX IF NOT EXISTS idx_tasks_member_sort_order ON tasks(project_id, member_sort_order);
-- Update comments for documentation
COMMENT ON COLUMN tasks.status_sort_order IS 'Sort order when grouped by status';
COMMENT ON COLUMN tasks.priority_sort_order IS 'Sort order when grouped by priority';
COMMENT ON COLUMN tasks.phase_sort_order IS 'Sort order when grouped by phase';
COMMENT ON COLUMN tasks.member_sort_order IS 'Sort order when grouped by members/assignees';

View File

@@ -1,172 +0,0 @@
-- Migration: Update database functions to handle grouping-specific sort orders
-- Function to get the appropriate sort column name based on grouping type
CREATE OR REPLACE FUNCTION get_sort_column_name(_group_by TEXT) RETURNS TEXT
LANGUAGE plpgsql
AS
$$
BEGIN
CASE _group_by
WHEN 'status' THEN RETURN 'status_sort_order';
WHEN 'priority' THEN RETURN 'priority_sort_order';
WHEN 'phase' THEN RETURN 'phase_sort_order';
WHEN 'members' THEN RETURN 'member_sort_order';
ELSE RETURN 'sort_order'; -- fallback to general sort_order
END CASE;
END;
$$;
-- Updated bulk sort order function to handle different sort columns
CREATE OR REPLACE FUNCTION update_task_sort_orders_bulk(_updates json, _group_by text DEFAULT 'status') RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_update_record RECORD;
_sort_column TEXT;
_sql TEXT;
BEGIN
-- Get the appropriate sort column based on grouping
_sort_column := get_sort_column_name(_group_by);
-- Simple approach: update each task's sort_order from the provided array
FOR _update_record IN
SELECT
(item->>'task_id')::uuid as task_id,
(item->>'sort_order')::int as sort_order,
(item->>'status_id')::uuid as status_id,
(item->>'priority_id')::uuid as priority_id,
(item->>'phase_id')::uuid as phase_id
FROM json_array_elements(_updates) as item
LOOP
-- Update the appropriate sort column and other fields using dynamic SQL
-- Only update sort_order if we're using the default sorting
IF _sort_column = 'sort_order' THEN
UPDATE tasks SET
sort_order = _update_record.sort_order,
status_id = COALESCE(_update_record.status_id, status_id),
priority_id = COALESCE(_update_record.priority_id, priority_id)
WHERE id = _update_record.task_id;
ELSE
-- Update only the grouping-specific sort column, not the main sort_order
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1, ' ||
'status_id = COALESCE($2, status_id), ' ||
'priority_id = COALESCE($3, priority_id) ' ||
'WHERE id = $4';
EXECUTE _sql USING
_update_record.sort_order,
_update_record.status_id,
_update_record.priority_id,
_update_record.task_id;
END IF;
-- Handle phase updates separately since it's in a different table
IF _update_record.phase_id IS NOT NULL THEN
INSERT INTO task_phase (task_id, phase_id)
VALUES (_update_record.task_id, _update_record.phase_id)
ON CONFLICT (task_id) DO UPDATE SET phase_id = _update_record.phase_id;
END IF;
END LOOP;
END;
$$;
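-- Hedged example for the grouping-aware variant (placeholder UUIDs): the same
-- payload shape, but passing 'priority' so only priority_sort_order is rewritten
-- while the main sort_order column is left untouched.
SELECT update_task_sort_orders_bulk('[
  {"task_id": "11111111-1111-1111-1111-111111111111", "sort_order": 3,
   "priority_id": "44444444-4444-4444-4444-444444444444"}
]'::json, 'priority');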
-- Updated main sort order change handler
CREATE OR REPLACE FUNCTION handle_task_list_sort_order_change(_body json) RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_from_index INT;
_to_index INT;
_task_id UUID;
_project_id UUID;
_from_group UUID;
_to_group UUID;
_group_by TEXT;
_batch_size INT := 100;
_sort_column TEXT;
_sql TEXT;
BEGIN
_project_id = (_body ->> 'project_id')::UUID;
_task_id = (_body ->> 'task_id')::UUID;
_from_index = (_body ->> 'from_index')::INT;
_to_index = (_body ->> 'to_index')::INT;
_from_group = (_body ->> 'from_group')::UUID;
_to_group = (_body ->> 'to_group')::UUID;
_group_by = (_body ->> 'group_by')::TEXT;
-- Get the appropriate sort column
_sort_column := get_sort_column_name(_group_by);
-- Handle group changes
IF (_from_group <> _to_group OR (_from_group <> _to_group) IS NULL) THEN
IF (_group_by = 'status') THEN
UPDATE tasks
SET status_id = _to_group
WHERE id = _task_id
AND status_id = _from_group
AND project_id = _project_id;
END IF;
IF (_group_by = 'priority') THEN
UPDATE tasks
SET priority_id = _to_group
WHERE id = _task_id
AND priority_id = _from_group
AND project_id = _project_id;
END IF;
IF (_group_by = 'phase') THEN
IF (is_null_or_empty(_to_group) IS FALSE) THEN
INSERT INTO task_phase (task_id, phase_id)
VALUES (_task_id, _to_group)
ON CONFLICT (task_id) DO UPDATE SET phase_id = _to_group;
ELSE
DELETE FROM task_phase WHERE task_id = _task_id;
END IF;
END IF;
END IF;
-- Handle sort order changes using dynamic SQL
IF (_from_index <> _to_index) THEN
-- For the main sort_order column, we need to be careful about unique constraints
IF _sort_column = 'sort_order' THEN
-- Use a transaction-safe approach for the main sort_order column
IF (_to_index > _from_index) THEN
-- Moving down: decrease sort_order for items between old and new position
UPDATE tasks SET sort_order = sort_order - 1
WHERE project_id = _project_id
AND sort_order > _from_index
AND sort_order <= _to_index;
ELSE
-- Moving up: increase sort_order for items between new and old position
UPDATE tasks SET sort_order = sort_order + 1
WHERE project_id = _project_id
AND sort_order >= _to_index
AND sort_order < _from_index;
END IF;
-- Set the new sort_order for the moved task
UPDATE tasks SET sort_order = _to_index WHERE id = _task_id;
ELSE
-- For grouping-specific columns, use dynamic SQL since there's no unique constraint
IF (_to_index > _from_index) THEN
-- Moving down: decrease sort_order for items between old and new position
_sql := 'UPDATE tasks SET ' || _sort_column || ' = ' || _sort_column || ' - 1 ' ||
'WHERE project_id = $1 AND ' || _sort_column || ' > $2 AND ' || _sort_column || ' <= $3';
EXECUTE _sql USING _project_id, _from_index, _to_index;
ELSE
-- Moving up: increase sort_order for items between new and old position
_sql := 'UPDATE tasks SET ' || _sort_column || ' = ' || _sort_column || ' + 1 ' ||
'WHERE project_id = $1 AND ' || _sort_column || ' >= $2 AND ' || _sort_column || ' < $3';
EXECUTE _sql USING _project_id, _to_index, _from_index;
END IF;
-- Set the new sort_order for the moved task
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1 WHERE id = $2';
EXECUTE _sql USING _to_index, _task_id;
END IF;
END IF;
END;
$$;
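-- Hedged example of the expected JSON body (placeholder UUIDs): moving a task
-- from index 5 to index 2 within the status grouping, across two status groups.
SELECT handle_task_list_sort_order_change('{
  "project_id": "aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa",
  "task_id":    "bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb",
  "from_index": 5,
  "to_index":   2,
  "from_group": "cccccccc-cccc-cccc-cccc-cccccccccccc",
  "to_group":   "dddddddd-dddd-dddd-dddd-dddddddddddd",
  "group_by":   "status"
}'::json);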

View File

@@ -1,179 +0,0 @@
-- Migration: Fix sort order constraint violations
-- First, let's ensure all existing tasks have unique sort_order values within each project
-- This is a one-time fix to ensure data consistency
DO $$
DECLARE
_project RECORD;
_task RECORD;
_counter INTEGER;
BEGIN
-- For each project, reassign sort_order values to ensure uniqueness
FOR _project IN
SELECT DISTINCT project_id
FROM tasks
WHERE project_id IS NOT NULL
LOOP
_counter := 0;
-- Reassign sort_order values sequentially for this project
FOR _task IN
SELECT id
FROM tasks
WHERE project_id = _project.project_id
ORDER BY sort_order, created_at
LOOP
UPDATE tasks
SET sort_order = _counter
WHERE id = _task.id;
_counter := _counter + 1;
END LOOP;
END LOOP;
END
$$;
-- Now create a better version of our functions that properly handles the constraints
-- Updated bulk sort order function that avoids sort_order conflicts
CREATE OR REPLACE FUNCTION update_task_sort_orders_bulk(_updates json, _group_by text DEFAULT 'status') RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_update_record RECORD;
_sort_column TEXT;
_sql TEXT;
BEGIN
-- Get the appropriate sort column based on grouping
_sort_column := get_sort_column_name(_group_by);
-- Process each update record
FOR _update_record IN
SELECT
(item->>'task_id')::uuid as task_id,
(item->>'sort_order')::int as sort_order,
(item->>'status_id')::uuid as status_id,
(item->>'priority_id')::uuid as priority_id,
(item->>'phase_id')::uuid as phase_id
FROM json_array_elements(_updates) as item
LOOP
-- Update the grouping-specific sort column and other fields
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1, ' ||
'status_id = COALESCE($2, status_id), ' ||
'priority_id = COALESCE($3, priority_id), ' ||
'updated_at = CURRENT_TIMESTAMP ' ||
'WHERE id = $4';
EXECUTE _sql USING
_update_record.sort_order,
_update_record.status_id,
_update_record.priority_id,
_update_record.task_id;
-- Handle phase updates separately since it's in a different table
IF _update_record.phase_id IS NOT NULL THEN
INSERT INTO task_phase (task_id, phase_id)
VALUES (_update_record.task_id, _update_record.phase_id)
ON CONFLICT (task_id) DO UPDATE SET phase_id = _update_record.phase_id;
END IF;
END LOOP;
END;
$$;
-- Also update the helper function to be more explicit
CREATE OR REPLACE FUNCTION get_sort_column_name(_group_by TEXT) RETURNS TEXT
LANGUAGE plpgsql
AS
$$
BEGIN
CASE _group_by
WHEN 'status' THEN RETURN 'status_sort_order';
WHEN 'priority' THEN RETURN 'priority_sort_order';
WHEN 'phase' THEN RETURN 'phase_sort_order';
WHEN 'members' THEN RETURN 'member_sort_order';
-- For backward compatibility, still support general sort_order but be explicit
WHEN 'general' THEN RETURN 'sort_order';
ELSE RETURN 'status_sort_order'; -- Default to status sorting
END CASE;
END;
$$;
-- Updated main sort order change handler that avoids conflicts
CREATE OR REPLACE FUNCTION handle_task_list_sort_order_change(_body json) RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_from_index INT;
_to_index INT;
_task_id UUID;
_project_id UUID;
_from_group UUID;
_to_group UUID;
_group_by TEXT;
_sort_column TEXT;
_sql TEXT;
BEGIN
_project_id = (_body ->> 'project_id')::UUID;
_task_id = (_body ->> 'task_id')::UUID;
_from_index = (_body ->> 'from_index')::INT;
_to_index = (_body ->> 'to_index')::INT;
_from_group = (_body ->> 'from_group')::UUID;
_to_group = (_body ->> 'to_group')::UUID;
_group_by = (_body ->> 'group_by')::TEXT;
-- Get the appropriate sort column
_sort_column := get_sort_column_name(_group_by);
-- Handle group changes first
IF (_from_group <> _to_group OR (_from_group <> _to_group) IS NULL) THEN
IF (_group_by = 'status') THEN
UPDATE tasks
SET status_id = _to_group, updated_at = CURRENT_TIMESTAMP
WHERE id = _task_id
AND project_id = _project_id;
END IF;
IF (_group_by = 'priority') THEN
UPDATE tasks
SET priority_id = _to_group, updated_at = CURRENT_TIMESTAMP
WHERE id = _task_id
AND project_id = _project_id;
END IF;
IF (_group_by = 'phase') THEN
IF (is_null_or_empty(_to_group) IS FALSE) THEN
INSERT INTO task_phase (task_id, phase_id)
VALUES (_task_id, _to_group)
ON CONFLICT (task_id) DO UPDATE SET phase_id = _to_group;
ELSE
DELETE FROM task_phase WHERE task_id = _task_id;
END IF;
END IF;
END IF;
-- Handle sort order changes for the grouping-specific column only
IF (_from_index <> _to_index) THEN
-- Update the grouping-specific sort order (no unique constraint issues)
IF (_to_index > _from_index) THEN
-- Moving down: decrease sort order for items between old and new position
_sql := 'UPDATE tasks SET ' || _sort_column || ' = ' || _sort_column || ' - 1, ' ||
'updated_at = CURRENT_TIMESTAMP ' ||
'WHERE project_id = $1 AND ' || _sort_column || ' > $2 AND ' || _sort_column || ' <= $3';
EXECUTE _sql USING _project_id, _from_index, _to_index;
ELSE
-- Moving up: increase sort order for items between new and old position
_sql := 'UPDATE tasks SET ' || _sort_column || ' = ' || _sort_column || ' + 1, ' ||
'updated_at = CURRENT_TIMESTAMP ' ||
'WHERE project_id = $1 AND ' || _sort_column || ' >= $2 AND ' || _sort_column || ' < $3';
EXECUTE _sql USING _project_id, _to_index, _from_index;
END IF;
-- Set the new sort order for the moved task
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1, updated_at = CURRENT_TIMESTAMP WHERE id = $2';
EXECUTE _sql USING _to_index, _task_id;
END IF;
END;
$$;

View File

@@ -12,7 +12,7 @@ CREATE TYPE DEPENDENCY_TYPE AS ENUM ('blocked_by');
CREATE TYPE SCHEDULE_TYPE AS ENUM ('daily', 'weekly', 'yearly', 'monthly', 'every_x_days', 'every_x_weeks', 'every_x_months');
CREATE TYPE LANGUAGE_TYPE AS ENUM ('en', 'es', 'pt', 'alb', 'de', 'zh_cn');
CREATE TYPE LANGUAGE_TYPE AS ENUM ('en', 'es', 'pt');
-- START: Users
CREATE SEQUENCE IF NOT EXISTS users_user_no_seq START 1;
@@ -1391,30 +1391,27 @@ ALTER TABLE task_work_log
CHECK (time_spent >= (0)::NUMERIC);
CREATE TABLE IF NOT EXISTS tasks (
id UUID DEFAULT uuid_generate_v4() NOT NULL,
name TEXT NOT NULL,
description TEXT,
done BOOLEAN DEFAULT FALSE NOT NULL,
total_minutes NUMERIC DEFAULT 0 NOT NULL,
archived BOOLEAN DEFAULT FALSE NOT NULL,
task_no BIGINT NOT NULL,
start_date TIMESTAMP WITH TIME ZONE,
end_date TIMESTAMP WITH TIME ZONE,
priority_id UUID NOT NULL,
project_id UUID NOT NULL,
reporter_id UUID NOT NULL,
parent_task_id UUID,
status_id UUID NOT NULL,
completed_at TIMESTAMP WITH TIME ZONE,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL,
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL,
sort_order INTEGER DEFAULT 0 NOT NULL,
roadmap_sort_order INTEGER DEFAULT 0 NOT NULL,
status_sort_order INTEGER DEFAULT 0 NOT NULL,
priority_sort_order INTEGER DEFAULT 0 NOT NULL,
phase_sort_order INTEGER DEFAULT 0 NOT NULL,
billable BOOLEAN DEFAULT TRUE,
schedule_id UUID
id UUID DEFAULT uuid_generate_v4() NOT NULL,
name TEXT NOT NULL,
description TEXT,
done BOOLEAN DEFAULT FALSE NOT NULL,
total_minutes NUMERIC DEFAULT 0 NOT NULL,
archived BOOLEAN DEFAULT FALSE NOT NULL,
task_no BIGINT NOT NULL,
start_date TIMESTAMP WITH TIME ZONE,
end_date TIMESTAMP WITH TIME ZONE,
priority_id UUID NOT NULL,
project_id UUID NOT NULL,
reporter_id UUID NOT NULL,
parent_task_id UUID,
status_id UUID NOT NULL,
completed_at TIMESTAMP WITH TIME ZONE,
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL,
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL,
sort_order INTEGER DEFAULT 0 NOT NULL,
roadmap_sort_order INTEGER DEFAULT 0 NOT NULL,
billable BOOLEAN DEFAULT TRUE,
schedule_id UUID
);
ALTER TABLE tasks
@@ -1502,21 +1499,6 @@ ALTER TABLE tasks
ADD CONSTRAINT tasks_total_minutes_check
CHECK ((total_minutes >= (0)::NUMERIC) AND (total_minutes <= (999999)::NUMERIC));
-- Add constraints for new sort order columns
ALTER TABLE tasks ADD CONSTRAINT tasks_status_sort_order_check CHECK (status_sort_order >= 0);
ALTER TABLE tasks ADD CONSTRAINT tasks_priority_sort_order_check CHECK (priority_sort_order >= 0);
ALTER TABLE tasks ADD CONSTRAINT tasks_phase_sort_order_check CHECK (phase_sort_order >= 0);
-- Add indexes for performance on new sort order columns
CREATE INDEX IF NOT EXISTS idx_tasks_status_sort_order ON tasks(project_id, status_sort_order);
CREATE INDEX IF NOT EXISTS idx_tasks_priority_sort_order ON tasks(project_id, priority_sort_order);
CREATE INDEX IF NOT EXISTS idx_tasks_phase_sort_order ON tasks(project_id, phase_sort_order);
-- Add comments for documentation
COMMENT ON COLUMN tasks.status_sort_order IS 'Sort order when grouped by status';
COMMENT ON COLUMN tasks.priority_sort_order IS 'Sort order when grouped by priority';
COMMENT ON COLUMN tasks.phase_sort_order IS 'Sort order when grouped by phase';
CREATE TABLE IF NOT EXISTS tasks_assignees (
task_id UUID NOT NULL,
project_member_id UUID NOT NULL,

View File

@@ -32,37 +32,3 @@ SELECT u.avatar_url,
FROM team_members
LEFT JOIN users u ON team_members.user_id = u.id;
-- PERFORMANCE OPTIMIZATION: Create materialized view for team member info
-- This pre-calculates the expensive joins and subqueries from team_member_info_view
CREATE MATERIALIZED VIEW IF NOT EXISTS team_member_info_mv AS
SELECT
u.avatar_url,
COALESCE(u.email, ei.email) AS email,
COALESCE(u.name, ei.name) AS name,
u.id AS user_id,
tm.id AS team_member_id,
tm.team_id,
tm.active,
u.socket_id
FROM team_members tm
LEFT JOIN users u ON tm.user_id = u.id
LEFT JOIN email_invitations ei ON ei.team_member_id = tm.id
WHERE tm.active = TRUE;
-- Create unique index on the materialized view for fast lookups
CREATE UNIQUE INDEX IF NOT EXISTS idx_team_member_info_mv_team_member_id
ON team_member_info_mv(team_member_id);
CREATE INDEX IF NOT EXISTS idx_team_member_info_mv_team_user
ON team_member_info_mv(team_id, user_id);
-- Function to refresh the materialized view
CREATE OR REPLACE FUNCTION refresh_team_member_info_mv()
RETURNS void
LANGUAGE plpgsql
AS $$
BEGIN
REFRESH MATERIALIZED VIEW CONCURRENTLY team_member_info_mv;
END;
$$;
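-- Hedged usage note: where the materialized view exists, it must be refreshed
-- after team membership changes for the cached rows to stay current, e.g.:
SELECT refresh_team_member_info_mv();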

View File

@@ -3351,15 +3351,15 @@ BEGIN
SELECT COALESCE(ARRAY_TO_JSON(ARRAY_AGG(ROW_TO_JSON(rec))), '[]'::JSON)
FROM (SELECT team_member_id,
project_member_id,
COALESCE((SELECT name FROM team_member_info_view WHERE team_member_info_view.team_member_id = tm.id), '') as name,
COALESCE((SELECT email_notifications_enabled
(SELECT name FROM team_member_info_view WHERE team_member_info_view.team_member_id = tm.id),
(SELECT email_notifications_enabled
FROM notification_settings
WHERE team_id = tm.team_id
AND notification_settings.user_id = u.id), false) AS email_notifications_enabled,
COALESCE(u.avatar_url, '') as avatar_url,
AND notification_settings.user_id = u.id) AS email_notifications_enabled,
u.avatar_url,
u.id AS user_id,
COALESCE(u.email, '') as email,
COALESCE(u.socket_id, '') as socket_id,
u.email,
u.socket_id AS socket_id,
tm.team_id AS team_id
FROM tasks_assignees
INNER JOIN team_members tm ON tm.id = tasks_assignees.team_member_id
@@ -4066,14 +4066,14 @@ DECLARE
_schedule_id JSON;
_task_completed_at TIMESTAMPTZ;
BEGIN
SELECT COALESCE(name, '') FROM tasks WHERE id = _task_id INTO _task_name;
SELECT name FROM tasks WHERE id = _task_id INTO _task_name;
SELECT COALESCE(name, '')
SELECT name
FROM task_statuses
WHERE id = (SELECT status_id FROM tasks WHERE id = _task_id)
INTO _previous_status_name;
SELECT COALESCE(name, '') FROM task_statuses WHERE id = _status_id INTO _new_status_name;
SELECT name FROM task_statuses WHERE id = _status_id INTO _new_status_name;
IF (_previous_status_name != _new_status_name)
THEN
@@ -4081,22 +4081,14 @@ BEGIN
SELECT get_task_complete_info(_task_id, _status_id) INTO _task_info;
SELECT COALESCE(name, '') FROM users WHERE id = _user_id INTO _updater_name;
SELECT name FROM users WHERE id = _user_id INTO _updater_name;
_message = CONCAT(_updater_name, ' transitioned "', _task_name, '" from ', _previous_status_name, '',
_new_status_name);
END IF;
SELECT completed_at FROM tasks WHERE id = _task_id INTO _task_completed_at;
-- Handle schedule_id properly for recurring tasks
SELECT CASE
WHEN schedule_id IS NULL THEN 'null'::json
ELSE json_build_object('id', schedule_id)
END
FROM tasks
WHERE id = _task_id
INTO _schedule_id;
SELECT schedule_id FROM tasks WHERE id = _task_id INTO _schedule_id;
SELECT COALESCE(ROW_TO_JSON(r), '{}'::JSON)
FROM (SELECT is_done, is_doing, is_todo
@@ -4105,7 +4097,7 @@ BEGIN
INTO _status_category;
RETURN JSON_BUILD_OBJECT(
'message', COALESCE(_message, ''),
'message', _message,
'project_id', (SELECT project_id FROM tasks WHERE id = _task_id),
'parent_done', (CASE
WHEN EXISTS(SELECT 1
@@ -4113,14 +4105,14 @@ BEGIN
WHERE tasks_with_status_view.task_id = _task_id
AND is_done IS TRUE) THEN 1
ELSE 0 END),
'color_code', COALESCE((_task_info ->> 'color_code')::TEXT, ''),
'color_code_dark', COALESCE((_task_info ->> 'color_code_dark')::TEXT, ''),
'total_tasks', COALESCE((_task_info ->> 'total_tasks')::INT, 0),
'total_completed', COALESCE((_task_info ->> 'total_completed')::INT, 0),
'members', COALESCE((_task_info ->> 'members')::JSON, '[]'::JSON),
'color_code', (_task_info ->> 'color_code')::TEXT,
'color_code_dark', (_task_info ->> 'color_code_dark')::TEXT,
'total_tasks', (_task_info ->> 'total_tasks')::INT,
'total_completed', (_task_info ->> 'total_completed')::INT,
'members', (_task_info ->> 'members')::JSON,
'completed_at', _task_completed_at,
'status_category', COALESCE(_status_category, '{}'::JSON),
'schedule_id', COALESCE(_schedule_id, 'null'::JSON)
'status_category', _status_category,
'schedule_id', _schedule_id
);
END
$$;
@@ -4313,24 +4305,6 @@ BEGIN
END
$$;
-- Helper function to get the appropriate sort column name based on grouping type
CREATE OR REPLACE FUNCTION get_sort_column_name(_group_by TEXT) RETURNS TEXT
LANGUAGE plpgsql
AS
$$
BEGIN
CASE _group_by
WHEN 'status' THEN RETURN 'status_sort_order';
WHEN 'priority' THEN RETURN 'priority_sort_order';
WHEN 'phase' THEN RETURN 'phase_sort_order';
WHEN 'members' THEN RETURN 'member_sort_order';
-- For backward compatibility, still support general sort_order but be explicit
WHEN 'general' THEN RETURN 'sort_order';
ELSE RETURN 'status_sort_order'; -- Default to status sorting
END CASE;
END;
$$;
CREATE OR REPLACE FUNCTION handle_task_list_sort_order_change(_body json) RETURNS void
LANGUAGE plpgsql
AS
@@ -4343,67 +4317,54 @@ DECLARE
_from_group UUID;
_to_group UUID;
_group_by TEXT;
_sort_column TEXT;
_sql TEXT;
BEGIN
_project_id = (_body ->> 'project_id')::UUID;
_task_id = (_body ->> 'task_id')::UUID;
_from_index = (_body ->> 'from_index')::INT;
_to_index = (_body ->> 'to_index')::INT;
_from_index = (_body ->> 'from_index')::INT; -- from sort_order
_to_index = (_body ->> 'to_index')::INT; -- to sort_order
_from_group = (_body ->> 'from_group')::UUID;
_to_group = (_body ->> 'to_group')::UUID;
_group_by = (_body ->> 'group_by')::TEXT;
-- Get the appropriate sort column
_sort_column := get_sort_column_name(_group_by);
-- Handle group changes first
IF (_from_group <> _to_group OR (_from_group <> _to_group) IS NULL) THEN
IF (_group_by = 'status') THEN
UPDATE tasks
SET status_id = _to_group, updated_at = CURRENT_TIMESTAMP
WHERE id = _task_id
AND project_id = _project_id;
IF (_from_group <> _to_group OR (_from_group <> _to_group) IS NULL)
THEN
IF (_group_by = 'status')
THEN
UPDATE tasks SET status_id = _to_group WHERE id = _task_id AND status_id = _from_group;
END IF;
IF (_group_by = 'priority') THEN
UPDATE tasks
SET priority_id = _to_group, updated_at = CURRENT_TIMESTAMP
WHERE id = _task_id
AND project_id = _project_id;
IF (_group_by = 'priority')
THEN
UPDATE tasks SET priority_id = _to_group WHERE id = _task_id AND priority_id = _from_group;
END IF;
IF (_group_by = 'phase') THEN
IF (is_null_or_empty(_to_group) IS FALSE) THEN
IF (_group_by = 'phase')
THEN
IF (is_null_or_empty(_to_group) IS FALSE)
THEN
INSERT INTO task_phase (task_id, phase_id)
VALUES (_task_id, _to_group)
ON CONFLICT (task_id) DO UPDATE SET phase_id = _to_group;
ELSE
DELETE FROM task_phase WHERE task_id = _task_id;
END IF;
IF (is_null_or_empty(_to_group) IS TRUE)
THEN
DELETE
FROM task_phase
WHERE task_id = _task_id;
END IF;
END IF;
END IF;
-- Handle sort order changes for the grouping-specific column only
IF (_from_index <> _to_index) THEN
-- Update the grouping-specific sort order (no unique constraint issues)
IF (_to_index > _from_index) THEN
-- Moving down: decrease sort order for items between old and new position
_sql := 'UPDATE tasks SET ' || _sort_column || ' = ' || _sort_column || ' - 1, ' ||
'updated_at = CURRENT_TIMESTAMP ' ||
'WHERE project_id = $1 AND ' || _sort_column || ' > $2 AND ' || _sort_column || ' <= $3';
EXECUTE _sql USING _project_id, _from_index, _to_index;
IF ((_body ->> 'to_last_index')::BOOLEAN IS TRUE AND _from_index < _to_index)
THEN
PERFORM handle_task_list_sort_inside_group(_from_index, _to_index, _task_id, _project_id);
ELSE
-- Moving up: increase sort order for items between new and old position
_sql := 'UPDATE tasks SET ' || _sort_column || ' = ' || _sort_column || ' + 1, ' ||
'updated_at = CURRENT_TIMESTAMP ' ||
'WHERE project_id = $1 AND ' || _sort_column || ' >= $2 AND ' || _sort_column || ' < $3';
EXECUTE _sql USING _project_id, _to_index, _from_index;
PERFORM handle_task_list_sort_between_groups(_from_index, _to_index, _task_id, _project_id);
END IF;
-- Set the new sort order for the moved task
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1, updated_at = CURRENT_TIMESTAMP WHERE id = $2';
EXECUTE _sql USING _to_index, _task_id;
ELSE
PERFORM handle_task_list_sort_inside_group(_from_index, _to_index, _task_id, _project_id);
END IF;
END
$$;
@@ -4608,31 +4569,31 @@ BEGIN
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Progress', 'PROGRESS', 3, TRUE);
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Status', 'STATUS', 4, TRUE);
VALUES (_project_id, 'Members', 'ASSIGNEES', 4, TRUE);
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Members', 'ASSIGNEES', 5, TRUE);
VALUES (_project_id, 'Labels', 'LABELS', 5, TRUE);
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Labels', 'LABELS', 6, TRUE);
VALUES (_project_id, 'Status', 'STATUS', 6, TRUE);
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Phase', 'PHASE', 7, TRUE);
VALUES (_project_id, 'Priority', 'PRIORITY', 7, TRUE);
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Priority', 'PRIORITY', 8, TRUE);
VALUES (_project_id, 'Time Tracking', 'TIME_TRACKING', 8, TRUE);
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Time Tracking', 'TIME_TRACKING', 9, TRUE);
VALUES (_project_id, 'Estimation', 'ESTIMATION', 9, FALSE);
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Estimation', 'ESTIMATION', 10, FALSE);
VALUES (_project_id, 'Start Date', 'START_DATE', 10, FALSE);
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Start Date', 'START_DATE', 11, FALSE);
VALUES (_project_id, 'Due Date', 'DUE_DATE', 11, TRUE);
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Due Date', 'DUE_DATE', 12, TRUE);
VALUES (_project_id, 'Completed Date', 'COMPLETED_DATE', 12, FALSE);
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Completed Date', 'COMPLETED_DATE', 13, FALSE);
VALUES (_project_id, 'Created Date', 'CREATED_DATE', 13, FALSE);
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Created Date', 'CREATED_DATE', 14, FALSE);
VALUES (_project_id, 'Last Updated', 'LAST_UPDATED', 14, FALSE);
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Last Updated', 'LAST_UPDATED', 15, FALSE);
VALUES (_project_id, 'Reporter', 'REPORTER', 15, FALSE);
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
VALUES (_project_id, 'Reporter', 'REPORTER', 16, FALSE);
VALUES (_project_id, 'Phase', 'PHASE', 16, FALSE);
END
$$;
@@ -5516,15 +5477,8 @@ $$
DECLARE
_iterator NUMERIC := 0;
_status_id TEXT;
_project_id UUID;
_base_sort_order NUMERIC;
BEGIN
-- Get the project_id from the first status to ensure we update all statuses in the same project
SELECT project_id INTO _project_id
FROM task_statuses
WHERE id = (SELECT TRIM(BOTH '"' FROM JSON_ARRAY_ELEMENTS(_status_ids)::TEXT) LIMIT 1)::UUID;
-- Update the sort_order for statuses in the provided order
FOR _status_id IN SELECT * FROM JSON_ARRAY_ELEMENTS((_status_ids)::JSON)
LOOP
UPDATE task_statuses
@@ -5533,29 +5487,6 @@ BEGIN
_iterator := _iterator + 1;
END LOOP;
-- Get the base sort order for remaining statuses (simple count approach)
SELECT COUNT(*) INTO _base_sort_order
FROM task_statuses ts2
WHERE ts2.project_id = _project_id
AND ts2.id = ANY(SELECT (TRIM(BOTH '"' FROM JSON_ARRAY_ELEMENTS(_status_ids)::TEXT))::UUID);
-- Update remaining statuses with simple sequential numbering
-- Reset iterator to start from base_sort_order
_iterator := _base_sort_order;
-- Use a cursor approach to avoid window functions
FOR _status_id IN
SELECT id::TEXT FROM task_statuses
WHERE project_id = _project_id
AND id NOT IN (SELECT (TRIM(BOTH '"' FROM JSON_ARRAY_ELEMENTS(_status_ids)::TEXT))::UUID)
ORDER BY sort_order
LOOP
UPDATE task_statuses
SET sort_order = _iterator
WHERE id = _status_id::UUID;
_iterator := _iterator + 1;
END LOOP;
RETURN;
END
$$;
@@ -6217,434 +6148,3 @@ BEGIN
RETURN v_new_id;
END;
$$;
CREATE OR REPLACE FUNCTION transfer_team_ownership(_team_id UUID, _new_owner_id UUID) RETURNS json
LANGUAGE plpgsql
AS
$$
DECLARE
_old_owner_id UUID;
_owner_role_id UUID;
_admin_role_id UUID;
_old_org_id UUID;
_new_org_id UUID;
_has_license BOOLEAN;
_old_owner_role_id UUID;
_new_owner_role_id UUID;
_has_active_coupon BOOLEAN;
_other_teams_count INTEGER;
_new_owner_org_id UUID;
_license_type_id UUID;
_has_valid_license BOOLEAN;
BEGIN
-- Get the current owner's ID and organization
SELECT t.user_id, t.organization_id
INTO _old_owner_id, _old_org_id
FROM teams t
WHERE t.id = _team_id;
IF _old_owner_id IS NULL THEN
RAISE EXCEPTION 'Team not found';
END IF;
-- Get the new owner's organization
SELECT organization_id INTO _new_owner_org_id
FROM organizations
WHERE user_id = _new_owner_id;
-- Get the old organization
SELECT id INTO _old_org_id
FROM organizations
WHERE id = _old_org_id;
IF _old_org_id IS NULL THEN
RAISE EXCEPTION 'Organization not found';
END IF;
-- Check if new owner has any valid license type
SELECT EXISTS (
SELECT 1
FROM (
-- Check regular subscriptions
SELECT lus.user_id, lus.status, lus.active
FROM licensing_user_subscriptions lus
WHERE lus.user_id = _new_owner_id
AND lus.active = TRUE
AND lus.status IN ('active', 'trialing')
UNION ALL
-- Check custom subscriptions
SELECT lcs.user_id, lcs.subscription_status as status, TRUE as active
FROM licensing_custom_subs lcs
WHERE lcs.user_id = _new_owner_id
AND lcs.end_date > CURRENT_DATE
UNION ALL
-- Check trial status in organizations
SELECT o.user_id, o.subscription_status as status, TRUE as active
FROM organizations o
WHERE o.user_id = _new_owner_id
AND o.trial_in_progress = TRUE
AND o.trial_expire_date > CURRENT_DATE
) valid_licenses
) INTO _has_valid_license;
IF NOT _has_valid_license THEN
RAISE EXCEPTION 'New owner does not have a valid license (subscription, custom subscription, or trial)';
END IF;
-- Check if new owner has any active coupon codes
SELECT EXISTS (
SELECT 1
FROM licensing_coupon_codes lcc
WHERE lcc.redeemed_by = _new_owner_id
AND lcc.is_redeemed = TRUE
AND lcc.is_refunded = FALSE
) INTO _has_active_coupon;
IF _has_active_coupon THEN
RAISE EXCEPTION 'New owner has active coupon codes that need to be handled before transfer';
END IF;
-- Count other teams in the organization for information purposes
SELECT COUNT(*) INTO _other_teams_count
FROM teams
WHERE organization_id = _old_org_id
AND id != _team_id;
-- If new owner has their own organization, move the team to their organization
IF _new_owner_org_id IS NOT NULL THEN
-- Update the team to use the new owner's organization
UPDATE teams
SET user_id = _new_owner_id,
organization_id = _new_owner_org_id
WHERE id = _team_id;
-- Create notification about organization change
PERFORM create_notification(
_old_owner_id,
_team_id,
NULL,
NULL,
CONCAT('Team <b>', (SELECT name FROM teams WHERE id = _team_id), '</b> has been moved to a different organization')
);
PERFORM create_notification(
_new_owner_id,
_team_id,
NULL,
NULL,
CONCAT('Team <b>', (SELECT name FROM teams WHERE id = _team_id), '</b> has been moved to your organization')
);
ELSE
-- If new owner doesn't have an organization, transfer the old organization to them
UPDATE organizations
SET user_id = _new_owner_id
WHERE id = _old_org_id;
-- Update the team to use the same organization
UPDATE teams
SET user_id = _new_owner_id,
organization_id = _old_org_id
WHERE id = _team_id;
-- Notify both users about organization ownership transfer
PERFORM create_notification(
_old_owner_id,
NULL,
NULL,
NULL,
CONCAT('You are no longer the owner of organization <b>', (SELECT organization_name FROM organizations WHERE id = _old_org_id), '</b>')
);
PERFORM create_notification(
_new_owner_id,
NULL,
NULL,
NULL,
CONCAT('You are now the owner of organization <b>', (SELECT organization_name FROM organizations WHERE id = _old_org_id), '</b>')
);
END IF;
-- Get the owner and admin role IDs
SELECT id INTO _owner_role_id FROM roles WHERE team_id = _team_id AND owner = TRUE;
SELECT id INTO _admin_role_id FROM roles WHERE team_id = _team_id AND admin_role = TRUE;
-- Get current role IDs for both users
SELECT role_id INTO _old_owner_role_id
FROM team_members
WHERE team_id = _team_id AND user_id = _old_owner_id;
SELECT role_id INTO _new_owner_role_id
FROM team_members
WHERE team_id = _team_id AND user_id = _new_owner_id;
-- Update the old owner's role to admin if they want to stay in the team
IF _old_owner_role_id IS NOT NULL THEN
UPDATE team_members
SET role_id = _admin_role_id
WHERE team_id = _team_id AND user_id = _old_owner_id;
END IF;
-- Update the new owner's role to owner
IF _new_owner_role_id IS NOT NULL THEN
UPDATE team_members
SET role_id = _owner_role_id
WHERE team_id = _team_id AND user_id = _new_owner_id;
ELSE
-- If new owner is not a team member yet, add them
INSERT INTO team_members (user_id, team_id, role_id)
VALUES (_new_owner_id, _team_id, _owner_role_id);
END IF;
-- Create notification for both users about team ownership
PERFORM create_notification(
_old_owner_id,
_team_id,
NULL,
NULL,
CONCAT('You are no longer the owner of team <b>', (SELECT name FROM teams WHERE id = _team_id), '</b>')
);
PERFORM create_notification(
_new_owner_id,
_team_id,
NULL,
NULL,
CONCAT('You are now the owner of team <b>', (SELECT name FROM teams WHERE id = _team_id), '</b>')
);
RETURN json_build_object(
'success', TRUE,
'old_owner_id', _old_owner_id,
'new_owner_id', _new_owner_id,
'team_id', _team_id,
'old_org_id', _old_org_id,
'new_org_id', COALESCE(_new_owner_org_id, _old_org_id),
'old_role_id', _old_owner_role_id,
'new_role_id', _new_owner_role_id,
'has_valid_license', _has_valid_license,
'has_active_coupon', _has_active_coupon,
'other_teams_count', _other_teams_count,
'org_ownership_transferred', _new_owner_org_id IS NULL,
'team_moved_to_new_org', _new_owner_org_id IS NOT NULL
);
END;
$$;
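For reference, a minimal sketch (assuming the shared db helper used elsewhere in this backend; variable names are placeholders) of how a controller might invoke the transfer and read the JSON it returns:
// Hypothetical usage sketch: invoke the ownership transfer via the shared db helper.
const result = await db.query(
  "SELECT transfer_team_ownership($1, $2) AS summary;",
  [teamId, newOwnerId] // placeholder UUIDs for the team and the new owner
);
const [row] = result.rows;
// row.summary carries success, the old/new owner and org ids, and the
// org_ownership_transferred / team_moved_to_new_org flags built above.
const { org_ownership_transferred, team_moved_to_new_org } = row.summary;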
-- PERFORMANCE OPTIMIZATION: Optimized version with batching for large datasets
CREATE OR REPLACE FUNCTION handle_task_list_sort_between_groups_optimized(_from_index integer, _to_index integer, _task_id uuid, _project_id uuid, _batch_size integer DEFAULT 100) RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_offset INT := 0;
_affected_rows INT;
BEGIN
-- PERFORMANCE OPTIMIZATION: Use direct updates without CTE in UPDATE
IF (_to_index = -1)
THEN
_to_index = COALESCE((SELECT MAX(sort_order) + 1 FROM tasks WHERE project_id = _project_id), 0);
END IF;
-- PERFORMANCE OPTIMIZATION: Batch updates for large datasets
IF _to_index > _from_index
THEN
LOOP
UPDATE tasks
SET sort_order = sort_order - 1
WHERE project_id = _project_id
AND sort_order > _from_index
AND sort_order < _to_index
AND sort_order > _offset
AND sort_order <= _offset + _batch_size;
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
EXIT WHEN _affected_rows = 0;
_offset := _offset + _batch_size;
END LOOP;
UPDATE tasks SET sort_order = _to_index - 1 WHERE id = _task_id AND project_id = _project_id;
END IF;
IF _to_index < _from_index
THEN
_offset := 0;
LOOP
UPDATE tasks
SET sort_order = sort_order + 1
WHERE project_id = _project_id
AND sort_order > _to_index
AND sort_order < _from_index
AND sort_order > _offset
AND sort_order <= _offset + _batch_size;
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
EXIT WHEN _affected_rows = 0;
_offset := _offset + _batch_size;
END LOOP;
UPDATE tasks SET sort_order = _to_index + 1 WHERE id = _task_id AND project_id = _project_id;
END IF;
END
$$;
-- PERFORMANCE OPTIMIZATION: Optimized version with batching for large datasets
CREATE OR REPLACE FUNCTION handle_task_list_sort_inside_group_optimized(_from_index integer, _to_index integer, _task_id uuid, _project_id uuid, _batch_size integer DEFAULT 100) RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_offset INT := 0;
_affected_rows INT;
BEGIN
-- PERFORMANCE OPTIMIZATION: Batch updates for large datasets without CTE in UPDATE
IF _to_index > _from_index
THEN
LOOP
UPDATE tasks
SET sort_order = sort_order - 1
WHERE project_id = _project_id
AND sort_order > _from_index
AND sort_order <= _to_index
AND sort_order > _offset
AND sort_order <= _offset + _batch_size;
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
EXIT WHEN _affected_rows = 0;
_offset := _offset + _batch_size;
END LOOP;
END IF;
IF _to_index < _from_index
THEN
_offset := 0;
LOOP
UPDATE tasks
SET sort_order = sort_order + 1
WHERE project_id = _project_id
AND sort_order >= _to_index
AND sort_order < _from_index
AND sort_order > _offset
AND sort_order <= _offset + _batch_size;
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
EXIT WHEN _affected_rows = 0;
_offset := _offset + _batch_size;
END LOOP;
END IF;
UPDATE tasks SET sort_order = _to_index WHERE id = _task_id AND project_id = _project_id;
END
$$;
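Both batched sort helpers share the same signature, so a single hedged sketch covers them; the caller variables are assumptions, while the default batch size of 100 comes from the definitions above.
// Hypothetical call sites for the batched sort helpers (indexes and ids are placeholders).
await db.query(
  "SELECT handle_task_list_sort_between_groups_optimized($1, $2, $3, $4);",
  [fromIndex, toIndex, taskId, projectId] // _batch_size omitted, so the DEFAULT of 100 applies
);
await db.query(
  "SELECT handle_task_list_sort_inside_group_optimized($1, $2, $3, $4, $5);",
  [fromIndex, toIndex, taskId, projectId, 250] // explicit batch size for a larger project
);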
-- Updated bulk sort order function that avoids sort_order conflicts
CREATE OR REPLACE FUNCTION update_task_sort_orders_bulk(_updates json, _group_by text DEFAULT 'status') RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_update_record RECORD;
_sort_column TEXT;
_sql TEXT;
BEGIN
-- Get the appropriate sort column based on grouping
_sort_column := get_sort_column_name(_group_by);
-- Process each update record
FOR _update_record IN
SELECT
(item->>'task_id')::uuid as task_id,
(item->>'sort_order')::int as sort_order,
(item->>'status_id')::uuid as status_id,
(item->>'priority_id')::uuid as priority_id,
(item->>'phase_id')::uuid as phase_id
FROM json_array_elements(_updates) as item
LOOP
-- Update the grouping-specific sort column and other fields
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1, ' ||
'status_id = COALESCE($2, status_id), ' ||
'priority_id = COALESCE($3, priority_id), ' ||
'updated_at = CURRENT_TIMESTAMP ' ||
'WHERE id = $4';
EXECUTE _sql USING
_update_record.sort_order,
_update_record.status_id,
_update_record.priority_id,
_update_record.task_id;
-- Handle phase updates separately since it's in a different table
IF _update_record.phase_id IS NOT NULL THEN
INSERT INTO task_phase (task_id, phase_id)
VALUES (_update_record.task_id, _update_record.phase_id)
ON CONFLICT (task_id) DO UPDATE SET phase_id = _update_record.phase_id;
END IF;
END LOOP;
END
$$;
-- Function to get the appropriate sort column name based on grouping type
CREATE OR REPLACE FUNCTION get_sort_column_name(_group_by TEXT) RETURNS TEXT
LANGUAGE plpgsql
AS
$$
BEGIN
CASE _group_by
WHEN 'status' THEN RETURN 'status_sort_order';
WHEN 'priority' THEN RETURN 'priority_sort_order';
WHEN 'phase' THEN RETURN 'phase_sort_order';
-- For backward compatibility, still support general sort_order but be explicit
WHEN 'general' THEN RETURN 'sort_order';
ELSE RETURN 'status_sort_order'; -- Default to status sorting
END CASE;
END;
$$;
-- Updated bulk sort order function to handle different sort columns
CREATE OR REPLACE FUNCTION update_task_sort_orders_bulk(_updates json, _group_by text DEFAULT 'status') RETURNS void
LANGUAGE plpgsql
AS
$$
DECLARE
_update_record RECORD;
_sort_column TEXT;
_sql TEXT;
BEGIN
-- Get the appropriate sort column based on grouping
_sort_column := get_sort_column_name(_group_by);
-- Process each update record
FOR _update_record IN
SELECT
(item->>'task_id')::uuid as task_id,
(item->>'sort_order')::int as sort_order,
(item->>'status_id')::uuid as status_id,
(item->>'priority_id')::uuid as priority_id,
(item->>'phase_id')::uuid as phase_id
FROM json_array_elements(_updates) as item
LOOP
-- Update the grouping-specific sort column and other fields
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1, ' ||
'status_id = COALESCE($2, status_id), ' ||
'priority_id = COALESCE($3, priority_id), ' ||
'updated_at = CURRENT_TIMESTAMP ' ||
'WHERE id = $4';
EXECUTE _sql USING
_update_record.sort_order,
_update_record.status_id,
_update_record.priority_id,
_update_record.task_id;
-- Handle phase updates separately since it's in a different table
IF _update_record.phase_id IS NOT NULL THEN
INSERT INTO task_phase (task_id, phase_id)
VALUES (_update_record.task_id, _update_record.phase_id)
ON CONFLICT (task_id) DO UPDATE SET phase_id = _update_record.phase_id;
END IF;
END LOOP;
END;
$$;
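A sketch of the _updates payload the bulk function expects, based on the keys parsed via json_array_elements above; the UUIDs and the calling code are illustrative assumptions.
// Hypothetical payload: one entry per task, matching the keys read from each JSON item above.
const updates = [
  { task_id: "11111111-1111-1111-1111-111111111111", sort_order: 0, status_id: "22222222-2222-2222-2222-222222222222", priority_id: null, phase_id: null },
  { task_id: "33333333-3333-3333-3333-333333333333", sort_order: 1, status_id: null, priority_id: null, phase_id: "44444444-4444-4444-4444-444444444444" }
];
// _group_by selects the sort column via get_sort_column_name ('status' -> status_sort_order, and so on).
await db.query("SELECT update_task_sort_orders_bulk($1, $2);", [JSON.stringify(updates), "status"]);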

View File

@@ -0,0 +1,28 @@
module.exports = {
brotli_js: {
options: {
mode: "brotli",
brotli: {
mode: 1
}
},
expand: true,
cwd: "build/public",
src: ["**/*.js"],
dest: "build/public",
extDot: "last",
ext: ".js.br"
},
gzip_js: {
options: {
mode: "gzip"
},
files: [{
expand: true,
cwd: "build/public",
src: ["**/*.js"],
dest: "build/public",
ext: ".js.gz"
}]
}
};

File diff suppressed because it is too large

View File

@@ -4,37 +4,23 @@
"private": true,
"engines": {
"npm": ">=8.11.0",
"node": ">=20.0.0",
"node": ">=16.13.0",
"yarn": "WARNING: Please use npm package manager instead of yarn"
},
"main": "build/bin/www",
"repository": "GITHUB_REPO_HERE",
"author": "worklenz.com",
"scripts": {
"test": "jest",
"start": "node build/bin/www.js",
"dev": "npm run build:dev && npm run watch",
"build": "npm run clean && npm run compile && npm run copy && npm run compress",
"build:dev": "npm run clean && npm run compile:dev && npm run copy",
"build:prod": "npm run clean && npm run compile:prod && npm run copy && npm run minify && npm run compress",
"clean": "rimraf build",
"compile": "tsc --build tsconfig.prod.json",
"compile:dev": "tsc --build tsconfig.json",
"compile:prod": "tsc --build tsconfig.prod.json",
"copy": "npm run copy:assets && npm run copy:views && npm run copy:config && npm run copy:shared",
"copy:assets": "npx cpx2 \"src/public/**\" build/public",
"copy:views": "npx cpx2 \"src/views/**\" build/views",
"copy:config": "npx cpx2 \".env\" build && npx cpx2 \"package.json\" build",
"copy:shared": "npx cpx2 \"src/shared/postgresql-error-codes.json\" build/shared && npx cpx2 \"src/shared/sample-data.json\" build/shared && npx cpx2 \"src/shared/templates/**\" build/shared/templates",
"watch": "concurrently \"npm run watch:ts\" \"npm run watch:assets\"",
"watch:ts": "tsc --build tsconfig.json --watch",
"watch:assets": "npx cpx2 \"src/{public,views}/**\" build --watch",
"minify": "terser build/**/*.js --compress --mangle --output-dir build",
"compress": "node scripts/compress.js",
"swagger": "node ./cli/swagger",
"inline-queries": "node ./cli/inline-queries",
"start": "node ./build/bin/www",
"tcs": "grunt build:tsc",
"build": "grunt build",
"watch": "grunt watch",
"dev": "grunt dev",
"es": "esbuild `find src -type f -name '*.ts'` --platform=node --minify=true --watch=true --target=esnext --format=cjs --tsconfig=tsconfig.prod.json --outdir=dist",
"copy": "grunt copy",
"sonar": "sonar-scanner -Dproject.settings=sonar-project-dev.properties",
"tsc": "tsc",
"test": "jest --setupFiles dotenv/config",
"test:watch": "jest --watch --setupFiles dotenv/config"
},
"jestSonar": {
@@ -59,7 +45,6 @@
"cors": "^2.8.5",
"cron": "^2.4.0",
"crypto-js": "^4.1.1",
"csrf-sync": "^4.2.1",
"csurf": "^1.11.0",
"debug": "^4.3.4",
"dotenv": "^16.3.1",
@@ -85,6 +70,7 @@
"passport-local": "^1.0.0",
"path": "^0.12.7",
"pg": "^8.14.1",
"pg-native": "^3.3.0",
"pug": "^3.0.2",
"redis": "^4.6.7",
"sanitize-html": "^2.11.0",
@@ -92,10 +78,8 @@
"sharp": "^0.32.6",
"slugify": "^1.6.6",
"socket.io": "^4.7.1",
"tinymce": "^7.8.0",
"uglify-js": "^3.17.4",
"winston": "^3.10.0",
"worklenz-backend": "file:",
"xss-filters": "^1.2.7"
},
"devDependencies": {
@@ -103,17 +87,15 @@
"@babel/preset-typescript": "^7.22.5",
"@types/bcrypt": "^5.0.0",
"@types/bluebird": "^3.5.38",
"@types/body-parser": "^1.19.2",
"@types/compression": "^1.7.2",
"@types/connect-flash": "^0.0.37",
"@types/cookie-parser": "^1.4.3",
"@types/cron": "^2.0.1",
"@types/crypto-js": "^4.2.2",
"@types/csurf": "^1.11.2",
"@types/express": "^4.17.21",
"@types/express": "^4.17.17",
"@types/express-brute": "^1.0.2",
"@types/express-brute-redis": "^0.0.4",
"@types/express-serve-static-core": "^4.17.34",
"@types/express-session": "^1.17.7",
"@types/fs-extra": "^9.0.13",
"@types/hpp": "^0.2.2",
@@ -138,22 +120,26 @@
"@typescript-eslint/eslint-plugin": "^5.62.0",
"@typescript-eslint/parser": "^5.62.0",
"chokidar": "^3.5.3",
"concurrently": "^9.1.2",
"cpx2": "^8.0.0",
"esbuild": "^0.17.19",
"esbuild-envfile-plugin": "^1.0.5",
"esbuild-node-externals": "^1.8.0",
"eslint": "^8.45.0",
"eslint-plugin-security": "^1.7.1",
"fs-extra": "^10.1.0",
"grunt": "^1.6.1",
"grunt-contrib-clean": "^2.0.1",
"grunt-contrib-compress": "^2.0.0",
"grunt-contrib-copy": "^1.0.0",
"grunt-contrib-uglify": "^5.2.2",
"grunt-contrib-watch": "^1.1.0",
"grunt-shell": "^4.0.0",
"grunt-sync": "^0.8.2",
"highcharts": "^11.1.0",
"jest": "^28.1.3",
"jest-sonar-reporter": "^2.0.0",
"ncp": "^2.0.0",
"nodeman": "^1.1.2",
"rimraf": "^6.0.1",
"swagger-jsdoc": "^6.2.8",
"terser": "^5.40.0",
"ts-jest": "^28.0.8",
"ts-node": "^10.9.1",
"tslint": "^6.1.3",

View File

@@ -1,53 +0,0 @@
const fs = require('fs');
const path = require('path');
const { createGzip } = require('zlib');
const { pipeline } = require('stream');
async function compressFile(inputPath, outputPath) {
return new Promise((resolve, reject) => {
const gzip = createGzip();
const source = fs.createReadStream(inputPath);
const destination = fs.createWriteStream(outputPath);
pipeline(source, gzip, destination, (err) => {
if (err) {
reject(err);
} else {
resolve();
}
});
});
}
async function compressDirectory(dir) {
const files = fs.readdirSync(dir, { withFileTypes: true });
for (const file of files) {
const fullPath = path.join(dir, file.name);
if (file.isDirectory()) {
await compressDirectory(fullPath);
} else if (file.name.endsWith('.js') || file.name.endsWith('.css')) {
const gzPath = fullPath + '.gz';
await compressFile(fullPath, gzPath);
console.log(`Compressed: ${fullPath} -> ${gzPath}`);
}
}
}
async function main() {
try {
const buildDir = path.join(__dirname, '../build');
if (fs.existsSync(buildDir)) {
await compressDirectory(buildDir);
console.log('Compression complete!');
} else {
console.log('Build directory not found. Run build first.');
}
} catch (error) {
console.error('Compression failed:', error);
process.exit(1);
}
}
main();

View File

@@ -6,7 +6,7 @@ import logger from "morgan";
import helmet from "helmet";
import compression from "compression";
import passport from "passport";
import { csrfSync } from "csrf-sync";
import csurf from "csurf";
import rateLimit from "express-rate-limit";
import cors from "cors";
import flash from "connect-flash";
@@ -112,13 +112,17 @@ function isLoggedIn(req: Request, _res: Response, next: NextFunction) {
return req.user ? next() : next(createError(401));
}
// CSRF configuration using csrf-sync for session-based authentication
const {
invalidCsrfTokenError,
generateToken,
csrfSynchronisedProtection,
} = csrfSync({
getTokenFromRequest: (req: Request) => req.headers["x-csrf-token"] as string || (req.body && req.body["_csrf"])
// CSRF configuration
const csrfProtection = csurf({
cookie: {
key: "XSRF-TOKEN",
path: "/",
httpOnly: false,
secure: isProduction(), // Only secure in production
sameSite: isProduction() ? "none" : "lax", // Different settings for dev vs prod
domain: isProduction() ? ".worklenz.com" : undefined // Only set domain in production
},
ignoreMethods: ["HEAD", "OPTIONS"]
});
// Apply CSRF selectively (exclude webhooks and public routes)
@@ -131,25 +135,38 @@ app.use((req, res, next) => {
) {
next();
} else {
csrfSynchronisedProtection(req, res, next);
csrfProtection(req, res, next);
}
});
// Set CSRF token method on request object for compatibility
// Set CSRF token cookie
app.use((req: Request, res: Response, next: NextFunction) => {
// Add csrfToken method to request object for compatibility
if (!req.csrfToken && generateToken) {
req.csrfToken = (overwrite?: boolean) => generateToken(req, overwrite);
if (req.csrfToken) {
const token = req.csrfToken();
res.cookie("XSRF-TOKEN", token, {
httpOnly: false,
secure: isProduction(),
sameSite: isProduction() ? "none" : "lax",
domain: isProduction() ? ".worklenz.com" : undefined,
path: "/"
});
}
next();
});
// CSRF token refresh endpoint
app.get("/csrf-token", (req: Request, res: Response) => {
try {
const token = generateToken(req);
res.status(200).json({ done: true, message: "CSRF token refreshed", token });
} catch (error) {
if (req.csrfToken) {
const token = req.csrfToken();
res.cookie("XSRF-TOKEN", token, {
httpOnly: false,
secure: isProduction(),
sameSite: isProduction() ? "none" : "lax",
domain: isProduction() ? ".worklenz.com" : undefined,
path: "/"
});
res.status(200).json({ done: true, message: "CSRF token refreshed" });
} else {
res.status(500).json({ done: false, message: "Failed to generate CSRF token" });
}
});
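For context, a hedged client-side sketch of how a browser could consume this setup: the XSRF-TOKEN cookie name and the x-csrf-token header come from the configuration above, while the fetch wrapper itself is an assumption rather than code from this repository.
// Hypothetical client helper: read the XSRF-TOKEN cookie and echo it back as x-csrf-token.
function getCsrfToken(): string {
  const match = document.cookie.match(/(?:^|;\s*)XSRF-TOKEN=([^;]+)/);
  return match ? decodeURIComponent(match[1]) : "";
}
async function postWithCsrf(url: string, body: unknown) {
  return fetch(url, {
    method: "POST",
    credentials: "include", // send the session cookie and XSRF-TOKEN along with the request
    headers: {
      "Content-Type": "application/json",
      "x-csrf-token": getCsrfToken()
    },
    body: JSON.stringify(body)
  });
}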
@@ -202,7 +219,7 @@ if (isInternalServer()) {
// CSRF error handler
app.use((err: any, req: Request, res: Response, next: NextFunction) => {
if (err === invalidCsrfTokenError) {
if (err.code === "EBADCSRFTOKEN") {
return res.status(403).json({
done: false,
message: "Invalid CSRF token",

View File

@@ -35,18 +35,8 @@ export default class AuthController extends WorklenzControllerBase {
const auth_error = errors.length > 0 ? errors[0] : null;
const message = messages.length > 0 ? messages[0] : null;
// Determine title based on authentication status and strategy
let title = null;
if (req.query.strategy) {
if (auth_error) {
// Show failure title only when there's an actual error
title = req.query.strategy === "login" ? "Login Failed!" : "Signup Failed!";
} else if (req.isAuthenticated() && message) {
// Show success title when authenticated and there's a success message
title = req.query.strategy === "login" ? "Login Successful!" : "Signup Successful!";
}
// If no error and not authenticated, don't show any title (this might be a redirect without completion)
}
const midTitle = req.query.strategy === "login" ? "Login Failed!" : "Signup Failed!";
const title = req.query.strategy ? midTitle : null;
if (req.user)
req.user.build_v = FileConstants.getRelease();

View File

@@ -137,10 +137,6 @@ export default class HomePageController extends WorklenzControllerBase {
WHERE category_id NOT IN (SELECT id
FROM sys_task_status_categories
WHERE is_done IS FALSE))
AND NOT EXISTS(SELECT project_id
FROM archived_projects
WHERE project_id = p.id
AND user_id = $2)
${groupByClosure}
ORDER BY t.end_date ASC`;
@@ -162,13 +158,9 @@ export default class HomePageController extends WorklenzControllerBase {
WHERE category_id NOT IN (SELECT id
FROM sys_task_status_categories
WHERE is_done IS FALSE))
AND NOT EXISTS(SELECT project_id
FROM archived_projects
WHERE project_id = p.id
AND user_id = $3)
${groupByClosure}`;
const result = await db.query(q, [teamId, userId, userId]);
const result = await db.query(q, [teamId, userId]);
const [row] = result.rows;
return row;
}

View File

@@ -756,186 +756,4 @@ export default class ProjectsController extends WorklenzControllerBase {
}
@HandleExceptions()
public static async getGrouped(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
// Use qualified field name for projects to avoid ambiguity
const {searchQuery, sortField, sortOrder, size, offset} = this.toPaginationOptions(req.query, ["projects.name"]);
const groupBy = req.query.groupBy as string || "category";
const filterByMember = !req.user?.owner && !req.user?.is_admin ?
` AND is_member_of_project(projects.id, '${req.user?.id}', $1) ` : "";
const isFavorites = req.query.filter === "1" ? ` AND EXISTS(SELECT user_id FROM favorite_projects WHERE user_id = '${req.user?.id}' AND project_id = projects.id)` : "";
const isArchived = req.query.filter === "2"
? ` AND EXISTS(SELECT user_id FROM archived_projects WHERE user_id = '${req.user?.id}' AND project_id = projects.id)`
: ` AND NOT EXISTS(SELECT user_id FROM archived_projects WHERE user_id = '${req.user?.id}' AND project_id = projects.id)`;
const categories = this.getFilterByCategoryWhereClosure(req.query.categories as string);
const statuses = this.getFilterByStatusWhereClosure(req.query.statuses as string);
// Determine grouping field and join based on groupBy parameter
let groupField = "";
let groupName = "";
let groupColor = "";
let groupJoin = "";
let groupByFields = "";
let groupOrderBy = "";
switch (groupBy) {
case "client":
groupField = "COALESCE(projects.client_id::text, 'no-client')";
groupName = "COALESCE(clients.name, 'No Client')";
groupColor = "'#688'";
groupJoin = "LEFT JOIN clients ON projects.client_id = clients.id";
groupByFields = "projects.client_id, clients.name";
groupOrderBy = "COALESCE(clients.name, 'No Client')";
break;
case "status":
groupField = "COALESCE(projects.status_id::text, 'no-status')";
groupName = "COALESCE(sys_project_statuses.name, 'No Status')";
groupColor = "COALESCE(sys_project_statuses.color_code, '#888')";
groupJoin = "LEFT JOIN sys_project_statuses ON projects.status_id = sys_project_statuses.id";
groupByFields = "projects.status_id, sys_project_statuses.name, sys_project_statuses.color_code";
groupOrderBy = "COALESCE(sys_project_statuses.name, 'No Status')";
break;
case "category":
default:
groupField = "COALESCE(projects.category_id::text, 'uncategorized')";
groupName = "COALESCE(project_categories.name, 'Uncategorized')";
groupColor = "COALESCE(project_categories.color_code, '#888')";
groupJoin = "LEFT JOIN project_categories ON projects.category_id = project_categories.id";
groupByFields = "projects.category_id, project_categories.name, project_categories.color_code";
groupOrderBy = "COALESCE(project_categories.name, 'Uncategorized')";
}
// Ensure sortField is properly qualified for the inner project query
let qualifiedSortField = sortField;
if (Array.isArray(sortField)) {
qualifiedSortField = sortField[0]; // Take the first field if it's an array
}
// Replace "projects." with "p2." for the inner query
const innerSortField = qualifiedSortField.replace("projects.", "p2.");
const q = `
SELECT ROW_TO_JSON(rec) AS groups
FROM (
SELECT COUNT(DISTINCT ${groupField}) AS total_groups,
(SELECT COALESCE(ARRAY_TO_JSON(ARRAY_AGG(ROW_TO_JSON(group_data))), '[]'::JSON)
FROM (
SELECT ${groupField} AS group_key,
${groupName} AS group_name,
${groupColor} AS group_color,
COUNT(*) AS project_count,
(SELECT COALESCE(ARRAY_TO_JSON(ARRAY_AGG(ROW_TO_JSON(project_data))), '[]'::JSON)
FROM (
SELECT p2.id,
p2.name,
(SELECT sys_project_statuses.name FROM sys_project_statuses WHERE sys_project_statuses.id = p2.status_id) AS status,
(SELECT sys_project_statuses.color_code FROM sys_project_statuses WHERE sys_project_statuses.id = p2.status_id) AS status_color,
(SELECT sys_project_statuses.icon FROM sys_project_statuses WHERE sys_project_statuses.id = p2.status_id) AS status_icon,
EXISTS(SELECT user_id
FROM favorite_projects
WHERE user_id = '${req.user?.id}'
AND project_id = p2.id) AS favorite,
EXISTS(SELECT user_id
FROM archived_projects
WHERE user_id = '${req.user?.id}'
AND project_id = p2.id) AS archived,
p2.color_code,
p2.start_date,
p2.end_date,
p2.category_id,
(SELECT COUNT(*)
FROM tasks
WHERE archived IS FALSE
AND project_id = p2.id) AS all_tasks_count,
(SELECT COUNT(*)
FROM tasks
WHERE archived IS FALSE
AND project_id = p2.id
AND status_id IN (SELECT task_statuses.id
FROM task_statuses
WHERE task_statuses.project_id = p2.id
AND task_statuses.category_id IN
(SELECT sys_task_status_categories.id FROM sys_task_status_categories WHERE sys_task_status_categories.is_done IS TRUE))) AS completed_tasks_count,
(SELECT COUNT(*)
FROM project_members
WHERE project_members.project_id = p2.id) AS members_count,
(SELECT get_project_members(p2.id)) AS names,
(SELECT clients.name FROM clients WHERE clients.id = p2.client_id) AS client_name,
(SELECT users.name FROM users WHERE users.id = p2.owner_id) AS project_owner,
(SELECT project_categories.name FROM project_categories WHERE project_categories.id = p2.category_id) AS category_name,
(SELECT project_categories.color_code
FROM project_categories
WHERE project_categories.id = p2.category_id) AS category_color,
((SELECT project_members.team_member_id as team_member_id
FROM project_members
WHERE project_members.project_id = p2.id
AND project_members.project_access_level_id = (SELECT project_access_levels.id FROM project_access_levels WHERE project_access_levels.key = 'PROJECT_MANAGER'))) AS project_manager_team_member_id,
(SELECT project_members.default_view
FROM project_members
WHERE project_members.project_id = p2.id
AND project_members.team_member_id = '${req.user?.team_member_id}') AS team_member_default_view,
(SELECT CASE
WHEN ((SELECT MAX(tasks.updated_at)
FROM tasks
WHERE tasks.archived IS FALSE
AND tasks.project_id = p2.id) >
p2.updated_at)
THEN (SELECT MAX(tasks.updated_at)
FROM tasks
WHERE tasks.archived IS FALSE
AND tasks.project_id = p2.id)
ELSE p2.updated_at END) AS updated_at
FROM projects p2
${groupJoin.replace("projects.", "p2.")}
WHERE p2.team_id = $1
AND ${groupField.replace("projects.", "p2.")} = ${groupField}
${categories.replace("projects.", "p2.")}
${statuses.replace("projects.", "p2.")}
${isArchived.replace("projects.", "p2.")}
${isFavorites.replace("projects.", "p2.")}
${filterByMember.replace("projects.", "p2.")}
${searchQuery.replace("projects.", "p2.")}
ORDER BY ${innerSortField} ${sortOrder}
) project_data
) AS projects
FROM projects
${groupJoin}
WHERE projects.team_id = $1 ${categories} ${statuses} ${isArchived} ${isFavorites} ${filterByMember} ${searchQuery}
GROUP BY ${groupByFields}
ORDER BY ${groupOrderBy}
LIMIT $2 OFFSET $3
) group_data
) AS data
FROM projects
${groupJoin}
WHERE projects.team_id = $1 ${categories} ${statuses} ${isArchived} ${isFavorites} ${filterByMember} ${searchQuery}
) rec;
`;
const result = await db.query(q, [req.user?.team_id || null, size, offset]);
const [data] = result.rows;
// Process the grouped data
for (const group of data?.groups.data || []) {
for (const project of group.projects || []) {
project.progress = project.all_tasks_count > 0
? ((project.completed_tasks_count / project.all_tasks_count) * 100).toFixed(0) : 0;
project.updated_at_string = moment(project.updated_at).fromNow();
project.names = this.createTagList(project?.names);
project.names.map((a: any) => a.color_code = getColor(a.name));
if (project.project_manager_team_member_id) {
project.project_manager = {
id: project.project_manager_team_member_id
};
}
}
}
return res.status(200).send(new ServerResponse(true, data?.groups || { total_groups: 0, data: [] }));
}
}
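A rough sketch of the response shape this grouped query yields, inferred from the aliases in the SELECT above; all values are illustrative.
// Illustrative shape only; keys follow the aliases used in the query above.
const exampleGroupedResponse = {
  total_groups: 1,
  data: [
    {
      group_key: "uncategorized",
      group_name: "Uncategorized",
      group_color: "#888",
      project_count: 2,
      projects: [
        { id: "aaaa-bbbb", name: "Website Revamp", status: "In Progress", progress: "50", members_count: 4, project_manager: { id: "cccc-dddd" } }
      ]
    }
  ]
};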

View File

@@ -415,15 +415,20 @@ export default class ReportingController extends WorklenzControllerBase {
@HandleExceptions()
public static async getMyTeams(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
const selectedTeamId = req.user?.team_id;
if (!selectedTeamId) {
return res.status(400).send(new ServerResponse(false, "No selected team"));
}
const q = `SELECT team_id AS id, name
FROM team_members tm
LEFT JOIN teams ON teams.id = tm.team_id
WHERE tm.user_id = $1
AND tm.team_id = $2
AND role_id IN (SELECT id
FROM roles
WHERE (admin_role IS TRUE OR owner IS TRUE))
ORDER BY name;`;
const result = await db.query(q, [req.user?.id]);
const result = await db.query(q, [req.user?.id, selectedTeamId]);
result.rows.forEach((team: any) => team.selected = true);
return res.status(200).send(new ServerResponse(true, result.rows));
}

View File

@@ -445,27 +445,52 @@ export default class ReportingAllocationController extends ReportingControllerBa
}
}
// Count only weekdays (Mon-Fri) in the period
// Get organization working days
const orgWorkingDaysQuery = `
SELECT monday, tuesday, wednesday, thursday, friday, saturday, sunday
FROM organization_working_days
WHERE organization_id IN (
SELECT t.organization_id
FROM teams t
WHERE t.id IN (${teamIds})
LIMIT 1
);
`;
const orgWorkingDaysResult = await db.query(orgWorkingDaysQuery, []);
const workingDaysConfig = orgWorkingDaysResult.rows[0] || {
monday: true,
tuesday: true,
wednesday: true,
thursday: true,
friday: true,
saturday: false,
sunday: false
};
// Count working days based on organization settings
let workingDays = 0;
let current = startDate.clone();
while (current.isSameOrBefore(endDate, 'day')) {
const day = current.isoWeekday();
if (day >= 1 && day <= 5) workingDays++;
if (
(day === 1 && workingDaysConfig.monday) ||
(day === 2 && workingDaysConfig.tuesday) ||
(day === 3 && workingDaysConfig.wednesday) ||
(day === 4 && workingDaysConfig.thursday) ||
(day === 5 && workingDaysConfig.friday) ||
(day === 6 && workingDaysConfig.saturday) ||
(day === 7 && workingDaysConfig.sunday)
) {
workingDays++;
}
current.add(1, 'day');
}
// Get hours_per_day for all selected projects
const projectHoursQuery = `SELECT id, hours_per_day FROM projects WHERE id IN (${projectIds})`;
const projectHoursResult = await db.query(projectHoursQuery, []);
const projectHoursMap: Record<string, number> = {};
for (const row of projectHoursResult.rows) {
projectHoursMap[row.id] = row.hours_per_day || 8;
}
// Sum total working hours for all selected projects
let totalWorkingHours = 0;
for (const pid of Object.keys(projectHoursMap)) {
totalWorkingHours += workingDays * projectHoursMap[pid];
}
// Get organization working hours
const orgWorkingHoursQuery = `SELECT working_hours FROM organizations WHERE id = (SELECT t.organization_id FROM teams t WHERE t.id IN (${teamIds}) LIMIT 1)`;
const orgWorkingHoursResult = await db.query(orgWorkingHoursQuery, []);
const orgWorkingHours = orgWorkingHoursResult.rows[0]?.working_hours || 8;
let totalWorkingHours = workingDays * orgWorkingHours;
const durationClause = this.getDateRangeClause(duration || DATE_RANGES.LAST_WEEK, date_range);
const archivedClause = archived
@@ -490,11 +515,18 @@ export default class ReportingAllocationController extends ReportingControllerBa
member.value = member.logged_time ? parseFloat(moment.duration(member.logged_time, "seconds").asHours().toFixed(2)) : 0;
member.color_code = getColor(member.name);
member.total_working_hours = totalWorkingHours;
member.utilization_percent = (totalWorkingHours > 0 && member.logged_time) ? ((parseFloat(member.logged_time) / (totalWorkingHours * 3600)) * 100).toFixed(2) : '0.00';
member.utilized_hours = member.logged_time ? (parseFloat(member.logged_time) / 3600).toFixed(2) : '0.00';
// Over/under utilized hours: utilized_hours - total_working_hours
const overUnder = member.utilized_hours && member.total_working_hours ? (parseFloat(member.utilized_hours) - member.total_working_hours) : 0;
member.over_under_utilized_hours = overUnder.toFixed(2);
if (totalWorkingHours === 0) {
member.utilization_percent = member.logged_time && parseFloat(member.logged_time) > 0 ? 'N/A' : '0.00';
member.utilized_hours = member.logged_time ? (parseFloat(member.logged_time) / 3600).toFixed(2) : '0.00';
// Over/under utilized hours: all logged time is over-utilized
member.over_under_utilized_hours = member.utilized_hours;
} else {
member.utilization_percent = (member.logged_time && totalWorkingHours > 0) ? ((parseFloat(member.logged_time) / (totalWorkingHours * 3600)) * 100).toFixed(2) : '0.00';
member.utilized_hours = member.logged_time ? (parseFloat(member.logged_time) / 3600).toFixed(2) : '0.00';
// Over/under utilized hours: utilized_hours - total_working_hours
const overUnder = member.utilized_hours && member.total_working_hours ? (parseFloat(member.utilized_hours) - member.total_working_hours) : 0;
member.over_under_utilized_hours = overUnder.toFixed(2);
}
}
return res.status(200).send(new ServerResponse(true, result.rows));
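A short worked example of the calculation above, assuming an 8 hour organization working day and 10 working days in the selected range:
// Worked example mirroring the formulas above (all numbers are illustrative).
const workingDays = 10;
const orgWorkingHours = 8;
const totalWorkingHours = workingDays * orgWorkingHours;                        // 80
const loggedSeconds = 144000;                                                   // 40 hours of logged time
const utilizedHours = loggedSeconds / 3600;                                     // 40.00
const utilizationPercent = (loggedSeconds / (totalWorkingHours * 3600)) * 100;  // 50.00
const overUnderUtilizedHours = utilizedHours - totalWorkingHours;               // -40.00, i.e. under-utilized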

View File

@@ -74,14 +74,9 @@ export default class ScheduleControllerV2 extends WorklenzControllerBase {
.map(day => `${day.toLowerCase()} = ${workingDays.includes(day)}`)
.join(", ");
const updateQuery = `
UPDATE public.organization_working_days
const updateQuery = `UPDATE public.organization_working_days
SET ${setClause}, updated_at = CURRENT_TIMESTAMP
WHERE organization_id IN (
SELECT organization_id FROM organizations
WHERE user_id = $1
);
`;
WHERE organization_id IN (SELECT id FROM organizations WHERE user_id = $1);`;
await db.query(updateQuery, [req.user?.owner_id]);
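To illustrate the clause built by the .map/.join above (the day list and the incoming workingDays payload are assumptions):
// Illustrative only: reproduce the generated SET clause for a Monday-to-Friday schedule.
const allDays = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"];
const workingDays = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]; // assumed request body
const setClause = allDays.map(day => `${day.toLowerCase()} = ${workingDays.includes(day)}`).join(", ");
// => "monday = true, tuesday = true, wednesday = true, thursday = true, friday = true, saturday = false, sunday = false"
// A single UPDATE then flips every flag for the caller's organization in one statement.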

View File

@@ -16,23 +16,19 @@ export default class TaskPhasesController extends WorklenzControllerBase {
if (!req.query.id)
return res.status(400).send(new ServerResponse(false, null, "Invalid request"));
// Use custom name if provided, otherwise use default naming pattern
const phaseName = req.body.name?.trim() ||
`Untitled Phase (${(await db.query("SELECT COUNT(*) FROM project_phases WHERE project_id = $1", [req.query.id])).rows[0].count + 1})`;
const q = `
INSERT INTO project_phases (name, color_code, project_id, sort_index)
VALUES (
CONCAT('Untitled Phase (', (SELECT COUNT(*) FROM project_phases WHERE project_id = $2) + 1, ')'),
$1,
$2,
$3,
(SELECT COUNT(*) FROM project_phases WHERE project_id = $3) + 1)
(SELECT COUNT(*) FROM project_phases WHERE project_id = $2) + 1)
RETURNING id, name, color_code, sort_index;
`;
req.body.color_code = this.DEFAULT_PHASE_COLOR;
const result = await db.query(q, [phaseName, req.body.color_code, req.query.id]);
const result = await db.query(q, [req.body.color_code, req.query.id]);
const [data] = result.rows;
data.color_code = getColor(data.name) + TASK_STATUS_COLOR_ALPHA;

View File

@@ -134,25 +134,6 @@ export default class TaskStatusesController extends WorklenzControllerBase {
return res.status(200).send(new ServerResponse(true, data));
}
@HandleExceptions()
public static async updateCategory(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
const hasMoreCategories = await TaskStatusesController.hasMoreCategories(req.params.id, req.query.current_project_id as string);
if (!hasMoreCategories)
return res.status(200).send(new ServerResponse(false, null, existsErrorMessage).withTitle("Status category update failed!"));
const q = `
UPDATE task_statuses
SET category_id = $2
WHERE id = $1
AND project_id = $3
RETURNING (SELECT color_code FROM sys_task_status_categories WHERE id = task_statuses.category_id), (SELECT color_code_dark FROM sys_task_status_categories WHERE id = task_statuses.category_id);
`;
const result = await db.query(q, [req.params.id, req.body.category_id, req.query.current_project_id]);
const [data] = result.rows;
return res.status(200).send(new ServerResponse(true, data));
}
@HandleExceptions()
public static async updateStatusOrder(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
const q = `SELECT update_status_order($1);`;

View File

@@ -1,6 +1,6 @@
import WorklenzControllerBase from "./worklenz-controller-base";
import { getColor } from "../shared/utils";
import { PriorityColorCodes, TASK_PRIORITY_COLOR_ALPHA, TASK_STATUS_COLOR_ALPHA } from "../shared/constants";
import {getColor} from "../shared/utils";
import {PriorityColorCodes, TASK_PRIORITY_COLOR_ALPHA, TASK_STATUS_COLOR_ALPHA} from "../shared/constants";
import moment from "moment/moment";
export const GroupBy = {
@@ -32,14 +32,23 @@ export default class TasksControllerBase extends WorklenzControllerBase {
}
public static updateTaskViewModel(task: any) {
console.log(`Processing task ${task.id} (${task.name})`);
console.log(` manual_progress: ${task.manual_progress}, progress_value: ${task.progress_value}`);
console.log(` project_use_manual_progress: ${task.project_use_manual_progress}, project_use_weighted_progress: ${task.project_use_weighted_progress}`);
console.log(` has subtasks: ${task.sub_tasks_count > 0}`);
// For parent tasks (with subtasks), always use calculated progress from subtasks
if (task.sub_tasks_count > 0) {
// For parent tasks without manual progress, calculate from subtasks (already done via db function)
console.log(` Parent task with subtasks: complete_ratio=${task.complete_ratio}`);
// Ensure progress matches complete_ratio for consistency
task.progress = task.complete_ratio || 0;
// Important: Parent tasks should not have manual progress
// If they somehow do, reset it
if (task.manual_progress) {
console.log(` WARNING: Parent task ${task.id} had manual_progress set to true, resetting`);
task.manual_progress = false;
task.progress_value = null;
}
@@ -49,29 +58,28 @@ export default class TasksControllerBase extends WorklenzControllerBase {
// For manually set progress, use that value directly
task.progress = parseInt(task.progress_value);
task.complete_ratio = parseInt(task.progress_value);
}
// For tasks with no subtasks and no manual progress
console.log(` Using manual progress: progress=${task.progress}, complete_ratio=${task.complete_ratio}`);
}
// For tasks with no subtasks and no manual progress, calculate based on time
else {
// Only calculate progress based on time if time-based progress is enabled for the project
if (task.project_use_time_progress && task.total_minutes_spent && task.total_minutes) {
// Cap the progress at 100% to prevent showing more than 100% progress
task.progress = Math.min(~~(task.total_minutes_spent / task.total_minutes * 100), 100);
} else {
// Default to 0% progress when time-based calculation is not enabled
task.progress = 0;
}
task.progress = task.total_minutes_spent && task.total_minutes
? ~~(task.total_minutes_spent / task.total_minutes * 100)
: 0;
// Set complete_ratio to match progress
task.complete_ratio = task.progress;
console.log(` Calculated time-based progress: progress=${task.progress}, complete_ratio=${task.complete_ratio}`);
}
// Ensure numeric values
task.progress = parseInt(task.progress) || 0;
task.complete_ratio = parseInt(task.complete_ratio) || 0;
task.overdue = task.total_minutes < task.total_minutes_spent;
task.time_spent = { hours: ~~(task.total_minutes_spent / 60), minutes: task.total_minutes_spent % 60 };
task.time_spent = {hours: ~~(task.total_minutes_spent / 60), minutes: task.total_minutes_spent % 60};
task.comments_count = Number(task.comments_count) ? +task.comments_count : 0;
task.attachments_count = Number(task.attachments_count) ? +task.attachments_count : 0;
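As a quick sanity check of the branches above (illustrative numbers): a leaf task with total_minutes = 120 and total_minutes_spent = 90 gets progress = ~~(90 / 120 * 100) = 75, with complete_ratio set to match; in the variant that gates on project_use_time_progress, this time-based value is only used when that flag is enabled, otherwise progress defaults to 0. A task with manual progress_value = 40 reports 40 regardless of logged time, and a parent task always takes the complete_ratio calculated from its subtasks.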

View File

@@ -97,6 +97,7 @@ export default class TasksControllerV2 extends TasksControllerBase {
try {
const result = await db.query("SELECT get_task_complete_ratio($1) AS info;", [taskId]);
const [data] = result.rows;
console.log("data", data);
if (data && data.info && data.info.ratio !== undefined) {
data.info.ratio = +((data.info.ratio || 0).toFixed());
return data.info;
@@ -109,29 +110,12 @@ export default class TasksControllerV2 extends TasksControllerBase {
}
private static getQuery(userId: string, options: ParsedQs) {
// Determine which sort column to use based on grouping
const groupBy = options.group || 'status';
let defaultSortColumn = 'sort_order';
switch (groupBy) {
case 'status':
defaultSortColumn = 'status_sort_order';
break;
case 'priority':
defaultSortColumn = 'priority_sort_order';
break;
case 'phase':
defaultSortColumn = 'phase_sort_order';
break;
default:
defaultSortColumn = 'sort_order';
}
const searchField = options.search ? ["t.name", "CONCAT((SELECT key FROM projects WHERE id = t.project_id), '-', task_no)"] : defaultSortColumn;
const searchField = options.search ? "t.name" : "sort_order";
const { searchQuery, sortField } = TasksControllerV2.toPaginationOptions(options, searchField);
const isSubTasks = !!options.parent_task;
const sortFields = sortField.replace(/ascend/g, "ASC").replace(/descend/g, "DESC") || defaultSortColumn;
const sortFields = sortField.replace(/ascend/g, "ASC").replace(/descend/g, "DESC") || "sort_order";
// Filter tasks by statuses
const statusesFilter = TasksControllerV2.getFilterByStatusWhereClosure(options.statuses as string);
@@ -147,20 +131,20 @@ export default class TasksControllerV2 extends TasksControllerBase {
const filterByAssignee = TasksControllerV2.getFilterByAssignee(options.filterBy as string);
// Returns statuses of each task as a json array if filterBy === "member"
const statusesQuery = TasksControllerV2.getStatusesQuery(options.filterBy as string);
// Custom columns data query
const customColumnsQuery = options.customColumns
const customColumnsQuery = options.customColumns
? `, (SELECT COALESCE(
jsonb_object_agg(
custom_cols.key,
custom_cols.key,
custom_cols.value
),
),
'{}'::JSONB
)
FROM (
SELECT
SELECT
cc.key,
CASE
CASE
WHEN ccv.text_value IS NOT NULL THEN to_jsonb(ccv.text_value)
WHEN ccv.number_value IS NOT NULL THEN to_jsonb(ccv.number_value)
WHEN ccv.boolean_value IS NOT NULL THEN to_jsonb(ccv.boolean_value)
@@ -213,9 +197,6 @@ export default class TasksControllerV2 extends TasksControllerBase {
t.archived,
t.description,
t.sort_order,
t.status_sort_order,
t.priority_sort_order,
t.phase_sort_order,
t.progress_value,
t.manual_progress,
t.weight,
@@ -346,23 +327,14 @@ export default class TasksControllerV2 extends TasksControllerBase {
@HandleExceptions()
public static async getList(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
const startTime = performance.now();
console.log(`[PERFORMANCE] getList method called for project ${req.params.id} - THIS METHOD IS DEPRECATED, USE getTasksV3 INSTEAD`);
// PERFORMANCE OPTIMIZATION: Skip expensive progress calculation by default
// Progress values are already calculated and stored in the database
// Only refresh if explicitly requested via refresh_progress=true query parameter
if (req.query.refresh_progress === "true" && req.params.id) {
console.log(`[PERFORMANCE] Starting progress refresh for project ${req.params.id} (getList)`);
const progressStartTime = performance.now();
// Before doing anything else, refresh task progress values for this project
if (req.params.id) {
await this.refreshProjectTaskProgressValues(req.params.id);
const progressEndTime = performance.now();
console.log(`[PERFORMANCE] Progress refresh completed in ${(progressEndTime - progressStartTime).toFixed(2)}ms`);
}
const isSubTasks = !!req.query.parent_task;
const groupBy = (req.query.group || GroupBy.STATUS) as string;
// Add customColumns flag to query params
req.query.customColumns = "true";
@@ -395,31 +367,31 @@ export default class TasksControllerV2 extends TasksControllerBase {
};
});
const endTime = performance.now();
const totalTime = endTime - startTime;
console.log(`[PERFORMANCE] getList method completed in ${totalTime.toFixed(2)}ms for project ${req.params.id} with ${tasks.length} tasks`);
// Log warning if this deprecated method is taking too long
if (totalTime > 1000) {
console.warn(`[PERFORMANCE WARNING] DEPRECATED getList method taking ${totalTime.toFixed(2)}ms - Frontend should use getTasksV3 instead!`);
}
return res.status(200).send(new ServerResponse(true, updatedGroups));
}
public static async updateMapByGroup(tasks: any[], groupBy: string, map: { [p: string]: ITaskGroup }) {
let index = 0;
const unmapped = [];
// PERFORMANCE OPTIMIZATION: Remove expensive individual DB calls for each task
// Progress values are already calculated and included in the main query
// No need to make additional database calls here
// Process tasks with their already-calculated progress values
// First, ensure we have the latest progress values for all tasks
for (const task of tasks) {
// For any task with subtasks, ensure we have the latest progress values
if (task.sub_tasks_count > 0) {
const info = await this.getTaskCompleteRatio(task.id);
if (info) {
task.complete_ratio = info.ratio;
task.progress_value = info.ratio; // Ensure progress_value reflects the calculated ratio
console.log(`Updated task ${task.name} (${task.id}): complete_ratio=${task.complete_ratio}`);
}
}
}
// Now group the tasks with their updated progress values
for (const task of tasks) {
task.index = index++;
TasksControllerV2.updateTaskViewModel(task);
if (groupBy === GroupBy.STATUS) {
map[task.status]?.tasks.push(task);
} else if (groupBy === GroupBy.PRIORITY) {
@@ -455,25 +427,16 @@ export default class TasksControllerV2 extends TasksControllerBase {
@HandleExceptions()
public static async getTasksOnly(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
const startTime = performance.now();
console.log(`[PERFORMANCE] getTasksOnly method called for project ${req.params.id} - Consider using getTasksV3 for better performance`);
// PERFORMANCE OPTIMIZATION: Skip expensive progress calculation by default
// Progress values are already calculated and stored in the database
// Only refresh if explicitly requested via refresh_progress=true query parameter
if (req.query.refresh_progress === "true" && req.params.id) {
console.log(`[PERFORMANCE] Starting progress refresh for project ${req.params.id} (getTasksOnly)`);
const progressStartTime = performance.now();
// Before doing anything else, refresh task progress values for this project
if (req.params.id) {
await this.refreshProjectTaskProgressValues(req.params.id);
const progressEndTime = performance.now();
console.log(`[PERFORMANCE] Progress refresh completed in ${(progressEndTime - progressStartTime).toFixed(2)}ms`);
}
const isSubTasks = !!req.query.parent_task;
// Add customColumns flag to query params
req.query.customColumns = "true";
const q = TasksControllerV2.getQuery(req.user?.id as string, req.query);
const params = isSubTasks ? [req.params.id || null, req.query.parent_task] : [req.params.id || null];
const result = await db.query(q, params);
@@ -485,25 +448,28 @@ export default class TasksControllerV2 extends TasksControllerBase {
[data] = result.rows;
} else { // else we return a flat list of tasks
data = [...result.rows];
// PERFORMANCE OPTIMIZATION: Remove expensive individual DB calls for each task
// Progress values are already calculated and included in the main query via get_task_complete_ratio
// The database query already includes complete_ratio, so no need for additional calls
for (const task of data) {
// For tasks with subtasks, get the complete ratio from the database function
if (task.sub_tasks_count > 0) {
try {
const result = await db.query("SELECT get_task_complete_ratio($1) AS info;", [task.id]);
const [ratioData] = result.rows;
if (ratioData && ratioData.info) {
task.complete_ratio = +(ratioData.info.ratio || 0).toFixed();
task.completed_count = ratioData.info.total_completed;
task.total_tasks_count = ratioData.info.total_tasks;
console.log(`Updated task ${task.id} (${task.name}) from DB: complete_ratio=${task.complete_ratio}`);
}
} catch (error) {
// Proceed with default calculation if database call fails
}
}
TasksControllerV2.updateTaskViewModel(task);
}
}
const endTime = performance.now();
const totalTime = endTime - startTime;
console.log(`[PERFORMANCE] getTasksOnly method completed in ${totalTime.toFixed(2)}ms for project ${req.params.id} with ${data.length} tasks`);
// Log warning if this method is taking too long
if (totalTime > 1000) {
console.warn(`[PERFORMANCE WARNING] getTasksOnly method taking ${totalTime.toFixed(2)}ms - Consider using getTasksV3 for better performance!`);
}
return res.status(200).send(new ServerResponse(true, data));
}
@@ -540,9 +506,9 @@ export default class TasksControllerV2 extends TasksControllerBase {
"SELECT COUNT(*) as subtask_count FROM tasks WHERE parent_task_id = $1 AND archived IS FALSE",
[parentTaskId]
);
const subtaskCount = parseInt(subTasksResult.rows[0]?.subtask_count || "0");
// If it has subtasks, reset the manual_progress flag to false
if (subtaskCount > 0) {
await db.query(
@@ -550,24 +516,24 @@ export default class TasksControllerV2 extends TasksControllerBase {
[parentTaskId]
);
console.log(`Reset manual progress for parent task ${parentTaskId} with ${subtaskCount} subtasks`);
// Get the project settings to determine which calculation method to use
const projectResult = await db.query(
"SELECT project_id FROM tasks WHERE id = $1",
[parentTaskId]
);
const projectId = projectResult.rows[0]?.project_id;
if (projectId) {
// Recalculate the parent task's progress based on its subtasks
const progressResult = await db.query(
"SELECT get_task_complete_ratio($1) AS ratio",
[parentTaskId]
);
const progressRatio = progressResult.rows[0]?.ratio?.ratio || 0;
// Emit the updated progress value to all clients
// Note: We don't have socket context here, so we can't directly emit
// This will be picked up on the next client refresh
@@ -618,7 +584,7 @@ export default class TasksControllerV2 extends TasksControllerBase {
? [req.body.id, req.body.to_group_id]
: [req.body.id, req.body.project_id, req.body.parent_task_id, req.body.to_group_id];
await db.query(q, params);
// Reset the parent task's manual progress when converting a task to a subtask
if (req.body.parent_task_id) {
await this.resetParentTaskManualProgress(req.body.parent_task_id);
@@ -645,21 +611,6 @@ export default class TasksControllerV2 extends TasksControllerBase {
return this.createTagList(result.rows);
}
public static async getProjectSubscribers(projectId: string) {
const q = `
SELECT u.name, u.avatar_url, ps.user_id, ps.team_member_id, ps.project_id
FROM project_subscribers ps
LEFT JOIN users u ON ps.user_id = u.id
WHERE ps.project_id = $1;
`;
const result = await db.query(q, [projectId]);
for (const member of result.rows)
member.color_code = getColor(member.name);
return this.createTagList(result.rows);
}
public static async checkUserAssignedToTask(taskId: string, userId: string, teamId: string) {
const q = `
SELECT EXISTS(
@@ -789,27 +740,27 @@ export default class TasksControllerV2 extends TasksControllerBase {
// Get column information
const columnQuery = `
SELECT id, field_type
FROM cc_custom_columns
SELECT id, field_type
FROM cc_custom_columns
WHERE project_id = $1 AND key = $2
`;
const columnResult = await db.query(columnQuery, [project_id, column_key]);
if (columnResult.rowCount === 0) {
return res.status(404).send(new ServerResponse(false, "Custom column not found"));
}
const column = columnResult.rows[0];
const columnId = column.id;
const fieldType = column.field_type;
// Determine which value field to use based on the field_type
let textValue = null;
let numberValue = null;
let dateValue = null;
let booleanValue = null;
let jsonValue = null;
switch (fieldType) {
case "number":
numberValue = parseFloat(String(value));
@@ -826,55 +777,55 @@ export default class TasksControllerV2 extends TasksControllerBase {
default:
textValue = String(value);
}
// Check if a value already exists
const existingValueQuery = `
SELECT id
FROM cc_column_values
SELECT id
FROM cc_column_values
WHERE task_id = $1 AND column_id = $2
`;
const existingValueResult = await db.query(existingValueQuery, [taskId, columnId]);
if (existingValueResult.rowCount && existingValueResult.rowCount > 0) {
// Update existing value
const updateQuery = `
UPDATE cc_column_values
SET text_value = $1,
number_value = $2,
date_value = $3,
boolean_value = $4,
json_value = $5,
updated_at = NOW()
UPDATE cc_column_values
SET text_value = $1,
number_value = $2,
date_value = $3,
boolean_value = $4,
json_value = $5,
updated_at = NOW()
WHERE task_id = $6 AND column_id = $7
`;
await db.query(updateQuery, [
textValue,
numberValue,
dateValue,
booleanValue,
jsonValue,
taskId,
textValue,
numberValue,
dateValue,
booleanValue,
jsonValue,
taskId,
columnId
]);
} else {
// Insert new value
const insertQuery = `
INSERT INTO cc_column_values
(task_id, column_id, text_value, number_value, date_value, boolean_value, json_value, created_at, updated_at)
INSERT INTO cc_column_values
(task_id, column_id, text_value, number_value, date_value, boolean_value, json_value, created_at, updated_at)
VALUES ($1, $2, $3, $4, $5, $6, $7, NOW(), NOW())
`;
await db.query(insertQuery, [
taskId,
columnId,
textValue,
numberValue,
dateValue,
booleanValue,
taskId,
columnId,
textValue,
numberValue,
dateValue,
booleanValue,
jsonValue
]);
}
return res.status(200).send(new ServerResponse(true, {
return res.status(200).send(new ServerResponse(true, {
task_id: taskId,
column_key,
value
@@ -882,7 +833,7 @@ export default class TasksControllerV2 extends TasksControllerBase {
}
public static async refreshProjectTaskProgressValues(projectId: string): Promise<void> {
try {
try {
// Run the recalculate_all_task_progress function only for tasks in this project
const query = `
DO $$
@@ -897,12 +848,12 @@ export default class TasksControllerV2 extends TasksControllerBase {
WHERE parent_task_id = t.id
AND archived IS FALSE
);
-- Start recalculation from leaf tasks (no subtasks) and propagate upward
-- This ensures calculations are done in the right order
WITH RECURSIVE task_hierarchy AS (
-- Base case: Start with all leaf tasks (no subtasks) in this project
SELECT
SELECT
id,
parent_task_id,
0 AS level
@@ -914,11 +865,11 @@ export default class TasksControllerV2 extends TasksControllerBase {
AND sub.archived IS FALSE
)
AND archived IS FALSE
UNION ALL
-- Recursive case: Move up to parent tasks, but only after processing all their children
SELECT
SELECT
t.id,
t.parent_task_id,
th.level + 1
@@ -939,7 +890,7 @@ export default class TasksControllerV2 extends TasksControllerBase {
AND (manual_progress IS FALSE OR manual_progress IS NULL);
END $$;
`;
await db.query(query);
console.log(`Finished refreshing progress values for project ${projectId}`);
} catch (error) {
@@ -952,24 +903,24 @@ export default class TasksControllerV2 extends TasksControllerBase {
// Calculate the task's progress using get_task_complete_ratio
const result = await db.query("SELECT get_task_complete_ratio($1) AS info;", [taskId]);
const [data] = result.rows;
if (data && data.info && data.info.ratio !== undefined) {
const progressValue = +((data.info.ratio || 0).toFixed());
// Update the task's progress_value in the database
await db.query(
"UPDATE tasks SET progress_value = $1 WHERE id = $2",
[progressValue, taskId]
);
console.log(`Updated progress for task ${taskId} to ${progressValue}%`);
// If this task has a parent, update the parent's progress as well
const parentResult = await db.query(
"SELECT parent_task_id FROM tasks WHERE id = $1",
[taskId]
);
if (parentResult.rows.length > 0 && parentResult.rows[0].parent_task_id) {
await this.updateTaskProgress(parentResult.rows[0].parent_task_id);
}
@@ -987,13 +938,13 @@ export default class TasksControllerV2 extends TasksControllerBase {
"UPDATE tasks SET weight = $1 WHERE id = $2",
[weight, taskId]
);
// Get the parent task ID
const parentResult = await db.query(
"SELECT parent_task_id FROM tasks WHERE id = $1",
[taskId]
);
// If this task has a parent, update the parent's progress
if (parentResult.rows.length > 0 && parentResult.rows[0].parent_task_id) {
await this.updateTaskProgress(parentResult.rows[0].parent_task_id);
@@ -1002,422 +953,4 @@ export default class TasksControllerV2 extends TasksControllerBase {
log_error(`Error updating task weight: ${error}`);
}
}
@HandleExceptions()
public static async getTasksV3(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
const startTime = performance.now();
const isSubTasks = !!req.query.parent_task;
const groupBy = (req.query.group || GroupBy.STATUS) as string;
const archived = req.query.archived === "true";
// PERFORMANCE OPTIMIZATION: Skip expensive progress calculation by default
// Progress values are already calculated and stored in the database
// Only refresh if explicitly requested via refresh_progress=true query parameter
// This dramatically improves initial load performance (from ~2-5s to ~200-500ms)
const shouldRefreshProgress = req.query.refresh_progress === "true";
if (shouldRefreshProgress && req.params.id) {
const progressStartTime = performance.now();
await this.refreshProjectTaskProgressValues(req.params.id);
const progressEndTime = performance.now();
}
const queryStartTime = performance.now();
const q = TasksControllerV2.getQuery(req.user?.id as string, req.query);
const params = isSubTasks ? [req.params.id || null, req.query.parent_task] : [req.params.id || null];
const result = await db.query(q, params);
const tasks = [...result.rows];
const queryEndTime = performance.now();
// Get groups metadata dynamically from database
const groupsStartTime = performance.now();
const groups = await this.getGroups(groupBy, req.params.id);
const groupsEndTime = performance.now();
// Create priority value to name mapping
const priorityMap: Record<string, string> = {
"0": "low",
"1": "medium",
"2": "high"
};
// Create status category mapping based on actual status names from database
const statusCategoryMap: Record<string, string> = {};
for (const group of groups) {
if (groupBy === GroupBy.STATUS && group.id) {
// Use the actual status name from database, convert to lowercase for consistency
statusCategoryMap[group.id] = group.name.toLowerCase().replace(/\s+/g, "_");
}
}
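// For illustration (assumed data): a status group { id: "abc-123", name: "In Progress" }
// produces statusCategoryMap["abc-123"] = "in_progress", the same key later used when
// transformed tasks are distributed into groupedResponse below.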
// Transform tasks with all necessary data preprocessing
const transformStartTime = performance.now();
const transformedTasks = tasks.map((task, index) => {
// Update task with calculated values (lightweight version)
TasksControllerV2.updateTaskViewModel(task);
task.index = index;
// Convert time values
const convertTimeValue = (value: any): number => {
if (typeof value === "number") return value;
if (typeof value === "string") {
const parsed = parseFloat(value);
return isNaN(parsed) ? 0 : parsed;
}
if (value && typeof value === "object") {
if ("hours" in value || "minutes" in value) {
const hours = Number(value.hours || 0);
const minutes = Number(value.minutes || 0);
return hours + (minutes / 60);
}
}
return 0;
};
return {
id: task.id,
task_key: task.task_key || "",
title: task.name || "",
description: task.description || "",
// Use dynamic status mapping from database
status: statusCategoryMap[task.status] || task.status,
// Pre-processed priority using mapping
priority: priorityMap[task.priority_value?.toString()] || "medium",
// Use actual phase name from database
phase: task.phase_name || "Development",
progress: typeof task.complete_ratio === "number" ? task.complete_ratio : 0,
assignees: task.assignees?.map((a: any) => a.team_member_id) || [],
assignee_names: task.assignee_names || task.names || [],
labels: task.labels?.map((l: any) => ({
id: l.id || l.label_id,
name: l.name,
color: l.color_code || "#1890ff",
end: l.end,
names: l.names
})) || [],
dueDate: task.end_date || task.END_DATE,
startDate: task.start_date,
timeTracking: {
estimated: convertTimeValue(task.total_time),
logged: convertTimeValue(task.time_spent),
},
customFields: {},
custom_column_values: task.custom_column_values || {}, // Include custom column values
createdAt: task.created_at || new Date().toISOString(),
updatedAt: task.updated_at || new Date().toISOString(),
order: TasksControllerV2.getTaskSortOrder(task, groupBy),
// Additional metadata for frontend
originalStatusId: task.status,
originalPriorityId: task.priority,
statusColor: task.status_color,
priorityColor: task.priority_color,
// Add subtask count
sub_tasks_count: task.sub_tasks_count || 0,
// Add indicator fields for frontend icons
comments_count: task.comments_count || 0,
has_subscribers: !!task.has_subscribers,
attachments_count: task.attachments_count || 0,
has_dependencies: !!task.has_dependencies,
schedule_id: task.schedule_id || null,
reporter: task.reporter || null,
};
});
const transformEndTime = performance.now();
// Create groups based on dynamic data from database
const groupingStartTime = performance.now();
const groupedResponse: Record<string, any> = {};
// Initialize groups from database data
groups.forEach(group => {
const groupKey = groupBy === GroupBy.STATUS
? group.name.toLowerCase().replace(/\s+/g, "_")
: groupBy === GroupBy.PRIORITY
? priorityMap[(group as any).value?.toString()] || group.name.toLowerCase()
: group.name.toLowerCase().replace(/\s+/g, "_");
groupedResponse[groupKey] = {
id: group.id,
title: group.name,
groupType: groupBy,
groupValue: groupKey,
collapsed: false,
tasks: [],
taskIds: [],
color: group.color_code || this.getDefaultGroupColor(groupBy, groupKey),
// Include additional metadata from database
category_id: group.category_id,
start_date: group.start_date,
end_date: group.end_date,
sort_index: (group as any).sort_index,
};
});
// Distribute tasks into groups
const unmappedTasks: any[] = [];
transformedTasks.forEach(task => {
let groupKey: string;
let taskAssigned = false;
if (groupBy === GroupBy.STATUS) {
groupKey = task.status;
if (groupedResponse[groupKey]) {
groupedResponse[groupKey].tasks.push(task);
groupedResponse[groupKey].taskIds.push(task.id);
taskAssigned = true;
}
} else if (groupBy === GroupBy.PRIORITY) {
groupKey = task.priority;
if (groupedResponse[groupKey]) {
groupedResponse[groupKey].tasks.push(task);
groupedResponse[groupKey].taskIds.push(task.id);
taskAssigned = true;
}
} else if (groupBy === GroupBy.PHASE) {
// For phase grouping, check if task has a valid phase
if (task.phase && task.phase.trim() !== "") {
groupKey = task.phase.toLowerCase().replace(/\s+/g, "_");
if (groupedResponse[groupKey]) {
groupedResponse[groupKey].tasks.push(task);
groupedResponse[groupKey].taskIds.push(task.id);
taskAssigned = true;
}
}
// If task doesn't have a valid phase, add to unmapped
if (!taskAssigned) {
unmappedTasks.push(task);
}
}
});
// Calculate progress stats for priority and phase grouping
if (groupBy === GroupBy.PRIORITY || groupBy === GroupBy.PHASE) {
Object.values(groupedResponse).forEach((group: any) => {
if (group.tasks && group.tasks.length > 0) {
const todoCount = group.tasks.filter((task: any) => {
// For tasks, we need to check their original status category
const originalTask = tasks.find(t => t.id === task.id);
return originalTask?.status_category?.is_todo;
}).length;
const doingCount = group.tasks.filter((task: any) => {
const originalTask = tasks.find(t => t.id === task.id);
return originalTask?.status_category?.is_doing;
}).length;
const doneCount = group.tasks.filter((task: any) => {
const originalTask = tasks.find(t => t.id === task.id);
return originalTask?.status_category?.is_done;
}).length;
const total = group.tasks.length;
// Calculate progress percentages
group.todo_progress = total > 0 ? +((todoCount / total) * 100).toFixed(0) : 0;
group.doing_progress = total > 0 ? +((doingCount / total) * 100).toFixed(0) : 0;
group.done_progress = total > 0 ? +((doneCount / total) * 100).toFixed(0) : 0;
}
});
}
// Create unmapped group if there are tasks without proper phase assignment
if (unmappedTasks.length > 0 && groupBy === GroupBy.PHASE) {
const unmappedGroup = {
id: UNMAPPED,
title: UNMAPPED,
groupType: groupBy,
groupValue: UNMAPPED.toLowerCase(),
collapsed: false,
tasks: unmappedTasks,
taskIds: unmappedTasks.map(task => task.id),
color: "#fbc84c69", // Orange color with transparency
category_id: null,
start_date: null,
end_date: null,
sort_index: 999, // Put unmapped group at the end
todo_progress: 0,
doing_progress: 0,
done_progress: 0,
};
// Calculate progress stats for unmapped group
if (unmappedTasks.length > 0) {
const todoCount = unmappedTasks.filter((task: any) => {
const originalTask = tasks.find(t => t.id === task.id);
return originalTask?.status_category?.is_todo;
}).length;
const doingCount = unmappedTasks.filter((task: any) => {
const originalTask = tasks.find(t => t.id === task.id);
return originalTask?.status_category?.is_doing;
}).length;
const doneCount = unmappedTasks.filter((task: any) => {
const originalTask = tasks.find(t => t.id === task.id);
return originalTask?.status_category?.is_done;
}).length;
const total = unmappedTasks.length;
unmappedGroup.todo_progress = total > 0 ? +((todoCount / total) * 100).toFixed(0) : 0;
unmappedGroup.doing_progress = total > 0 ? +((doingCount / total) * 100).toFixed(0) : 0;
unmappedGroup.done_progress = total > 0 ? +((doneCount / total) * 100).toFixed(0) : 0;
}
groupedResponse[UNMAPPED.toLowerCase()] = unmappedGroup;
}
// Sort tasks within each group by order
Object.values(groupedResponse).forEach((group: any) => {
group.tasks.sort((a: any, b: any) => a.order - b.order);
});
// Convert to array format expected by frontend, maintaining database order
const responseGroups = groups
.map(group => {
const groupKey = groupBy === GroupBy.STATUS
? group.name.toLowerCase().replace(/\s+/g, "_")
: groupBy === GroupBy.PRIORITY
? priorityMap[(group as any).value?.toString()] || group.name.toLowerCase()
: group.name.toLowerCase().replace(/\s+/g, "_");
return groupedResponse[groupKey];
})
.filter(group => group && (group.tasks.length > 0 || req.query.include_empty === "true"));
// Add unmapped group to the end if it exists
if (groupedResponse[UNMAPPED.toLowerCase()]) {
responseGroups.push(groupedResponse[UNMAPPED.toLowerCase()]);
}
const groupingEndTime = performance.now();
const endTime = performance.now();
const totalTime = endTime - startTime;
// Log warning if request is taking too long
if (totalTime > 1000) {
console.warn(`[PERFORMANCE WARNING] Slow request detected: ${totalTime.toFixed(2)}ms for project ${req.params.id} with ${transformedTasks.length} tasks`);
}
return res.status(200).send(new ServerResponse(true, {
groups: responseGroups,
allTasks: transformedTasks,
grouping: groupBy,
totalTasks: transformedTasks.length
}));
}
private static getTaskSortOrder(task: any, groupBy: string): number {
switch (groupBy) {
case GroupBy.STATUS:
return typeof task.status_sort_order === "number" ? task.status_sort_order : 0;
case GroupBy.PRIORITY:
return typeof task.priority_sort_order === "number" ? task.priority_sort_order : 0;
case GroupBy.PHASE:
return typeof task.phase_sort_order === "number" ? task.phase_sort_order : 0;
default:
return typeof task.sort_order === "number" ? task.sort_order : 0;
}
}
private static getDefaultGroupColor(groupBy: string, groupValue: string): string {
const colorMaps: Record<string, Record<string, string>> = {
[GroupBy.STATUS]: {
todo: "#f0f0f0",
doing: "#1890ff",
done: "#52c41a",
},
[GroupBy.PRIORITY]: {
critical: "#ff4d4f",
high: "#ff7a45",
medium: "#faad14",
low: "#52c41a",
},
[GroupBy.PHASE]: {
planning: "#722ed1",
development: "#1890ff",
testing: "#faad14",
deployment: "#52c41a",
unmapped: "#fbc84c69",
},
};
return colorMaps[groupBy]?.[groupValue] || "#d9d9d9";
}
@HandleExceptions()
public static async refreshTaskProgress(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
try {
const startTime = performance.now();
if (req.params.id) {
console.log(`[PERFORMANCE] Starting background progress refresh for project ${req.params.id}`);
await this.refreshProjectTaskProgressValues(req.params.id);
const endTime = performance.now();
const totalTime = endTime - startTime;
console.log(`[PERFORMANCE] Background progress refresh completed in ${totalTime.toFixed(2)}ms for project ${req.params.id}`);
return res.status(200).send(new ServerResponse(true, {
message: "Task progress values refreshed successfully",
performanceMetrics: {
refreshTime: Math.round(totalTime),
projectId: req.params.id
}
}));
}
return res.status(400).send(new ServerResponse(false, null, "Project ID is required"));
} catch (error) {
console.error("Error refreshing task progress:", error);
return res.status(500).send(new ServerResponse(false, null, "Failed to refresh task progress"));
}
}
// Optimized method for getting task progress without blocking main UI
@HandleExceptions()
public static async getTaskProgressStatus(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
try {
if (!req.params.id) {
return res.status(400).send(new ServerResponse(false, null, "Project ID is required"));
}
// Get basic progress stats without expensive calculations
const result = await db.query(`
SELECT
COUNT(*) as total_tasks,
COUNT(CASE WHEN EXISTS(
SELECT 1 FROM tasks_with_status_view
WHERE tasks_with_status_view.task_id = tasks.id
AND is_done IS TRUE
) THEN 1 END) as completed_tasks,
AVG(CASE
WHEN progress_value IS NOT NULL THEN progress_value
ELSE 0
END) as avg_progress,
MAX(updated_at) as last_updated
FROM tasks
WHERE project_id = $1 AND archived IS FALSE
`, [req.params.id]);
const [stats] = result.rows;
return res.status(200).send(new ServerResponse(true, {
projectId: req.params.id,
totalTasks: parseInt(stats.total_tasks) || 0,
completedTasks: parseInt(stats.completed_tasks) || 0,
avgProgress: parseFloat(stats.avg_progress) || 0,
lastUpdated: stats.last_updated,
completionPercentage: stats.total_tasks > 0 ?
Math.round((parseInt(stats.completed_tasks) / parseInt(stats.total_tasks)) * 100) : 0
}));
} catch (error) {
console.error("Error getting task progress status:", error);
return res.status(500).send(new ServerResponse(false, null, "Failed to get task progress status"));
}
}
}
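
Note: the controller above pairs a fast initial load (progress read from the stored progress_value column) with an explicit, opt-in recalculation. A minimal client-side sketch of that pattern follows; the route paths and the JSON shape of ServerResponse are assumptions for illustration, not taken from the repository.

// Hypothetical routes; adjust to wherever the v3 endpoints are actually mounted.
async function loadProjectTasks(projectId: string): Promise<void> {
  // Fast path: grouped tasks with stored progress values (~200-500ms per the comments above).
  const listRes = await fetch(`/api/v3/tasks/${projectId}?group=status`);
  const list = await listRes.json(); // assumed shape: { done, body: { groups, allTasks, totalTasks } }
  console.log(`Loaded ${list.body.totalTasks} tasks in ${list.body.groups.length} groups`);

  // Optional, non-blocking recalculation; a later request with refresh_progress=true
  // (or another plain load) then sees the refreshed values.
  await fetch(`/api/v3/tasks/${projectId}/refresh-progress`, { method: "POST" });
}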

View File

@@ -34,24 +34,29 @@ export default abstract class WorklenzControllerBase {
const offset = queryParams.search ? 0 : (index - 1) * size;
const paging = queryParams.paging || "true";
// let s = "";
// if (typeof searchField === "string") {
// s = `${searchField} || ' ' || id::TEXT`;
// } else if (Array.isArray(searchField)) {
// s = searchField.join(" || ' ' || ");
// }
// const search = (queryParams.search as string || "").trim();
// const searchQuery = search ? `AND TO_TSVECTOR(${s}) @@ TO_TSQUERY('${toTsQuery(search)}')` : "";
const search = (queryParams.search as string || "").trim();
let s = "";
if (typeof searchField === "string") {
s = ` ${searchField} ILIKE '%${search}%'`;
} else if (Array.isArray(searchField)) {
s = searchField.map(index => ` ${index} ILIKE '%${search}%'`).join(" OR ");
}
let searchQuery = "";
if (search) {
// Properly escape single quotes to prevent SQL syntax errors
const escapedSearch = search.replace(/'/g, "''");
let s = "";
if (typeof searchField === "string") {
s = ` ${searchField} ILIKE '%${escapedSearch}%'`;
} else if (Array.isArray(searchField)) {
s = searchField.map(field => ` ${field} ILIKE '%${escapedSearch}%'`).join(" OR ");
}
if (s) {
searchQuery = isMemberFilter ? ` (${s}) AND ` : ` AND (${s}) `;
}
searchQuery = isMemberFilter ? ` (${s}) AND ` : ` AND (${s}) `;
}
// Sort
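
Note: escaping single quotes fixes the syntax-error case, but the search term is still interpolated directly into the SQL text. A hedged alternative sketch (not what this diff does) is to bind the term as a query parameter and interpolate only the column names, which come from code rather than user input:

// Sketch only: user input is bound; field names stay in the SQL text because they are code-controlled.
function buildSearchClause(searchField: string | string[], paramIndex: number): string {
  const fields = Array.isArray(searchField) ? searchField : [searchField];
  return fields.map(field => `${field} ILIKE $${paramIndex}`).join(" OR ");
}
// Usage with the pg-style db.query(text, values) seen elsewhere in these diffs:
//   const clause = buildSearchClause(["name", "email"], 1);
//   await db.query(`SELECT id FROM users WHERE (${clause})`, [`%${search}%`]);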

View File

@@ -7,5 +7,5 @@ export function startCronJobs() {
startNotificationsJob();
startDailyDigestJob();
startProjectDigestJob();
if (process.env.ENABLE_RECURRING_JOBS === "true") startRecurringTasksJob();
// startRecurringTasksJob();
}
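
Note: the recurring-tasks job is now opt-in via an environment flag rather than being commented out. A minimal sketch of wiring that flag together with the RECURRING_JOBS_INTERVAL override shown in the next file, assuming a standard dotenv setup (variable names taken from the diffs, everything else illustrative):

import dotenv from "dotenv";
dotenv.config();
// .env example:
//   ENABLE_RECURRING_JOBS=true
//   RECURRING_JOBS_INTERVAL=0 11 */1 * 1-5
const recurringEnabled = process.env.ENABLE_RECURRING_JOBS === "true";
const schedule = process.env.RECURRING_JOBS_INTERVAL || "0 11 */1 * 1-5";
console.log(`recurring jobs ${recurringEnabled ? "enabled" : "disabled"}, schedule: ${schedule}`);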

View File

@@ -7,90 +7,12 @@ import TasksController from "../controllers/tasks-controller";
// Runs at 11:00 UTC (4:30 PM UTC+5:30), Monday through Friday.
// const TIME = "0 11 */1 * 1-5";
const TIME = process.env.RECURRING_JOBS_INTERVAL || "0 11 */1 * 1-5";
const TIME = "*/2 * * * *"; // runs every 2 minutes - for testing purposes
const TIME_FORMAT = "YYYY-MM-DD";
// const TIME = "0 0 * * *"; // Runs at midnight every day
const log = (value: any) => console.log("recurring-task-cron-job:", value);
// Define future limits for different schedule types
// More conservative limits to prevent task list clutter
const FUTURE_LIMITS = {
daily: moment.duration(3, "days"),
weekly: moment.duration(1, "week"),
monthly: moment.duration(1, "month"),
every_x_days: (interval: number) => moment.duration(interval, "days"),
every_x_weeks: (interval: number) => moment.duration(interval, "weeks"),
every_x_months: (interval: number) => moment.duration(interval, "months")
};
// Helper function to get the future limit based on schedule type
function getFutureLimit(scheduleType: string, interval?: number): moment.Duration {
switch (scheduleType) {
case "daily":
return FUTURE_LIMITS.daily;
case "weekly":
return FUTURE_LIMITS.weekly;
case "monthly":
return FUTURE_LIMITS.monthly;
case "every_x_days":
return FUTURE_LIMITS.every_x_days(interval || 1);
case "every_x_weeks":
return FUTURE_LIMITS.every_x_weeks(interval || 1);
case "every_x_months":
return FUTURE_LIMITS.every_x_months(interval || 1);
default:
return moment.duration(3, "days"); // Default to 3 days
}
}
// Helper function to batch create tasks
async function createBatchTasks(template: ITaskTemplate & IRecurringSchedule, endDates: moment.Moment[]) {
const createdTasks = [];
for (const nextEndDate of endDates) {
const existingTaskQuery = `
SELECT id FROM tasks
WHERE schedule_id = $1 AND end_date::DATE = $2::DATE;
`;
const existingTaskResult = await db.query(existingTaskQuery, [template.schedule_id, nextEndDate.format(TIME_FORMAT)]);
if (existingTaskResult.rows.length === 0) {
const createTaskQuery = `SELECT create_quick_task($1::json) as task;`;
const taskData = {
name: template.name,
priority_id: template.priority_id,
project_id: template.project_id,
reporter_id: template.reporter_id,
status_id: template.status_id || null,
end_date: nextEndDate.format(TIME_FORMAT),
schedule_id: template.schedule_id
};
const createTaskResult = await db.query(createTaskQuery, [JSON.stringify(taskData)]);
const createdTask = createTaskResult.rows[0].task;
if (createdTask) {
createdTasks.push(createdTask);
for (const assignee of template.assignees) {
await TasksController.createTaskBulkAssignees(assignee.team_member_id, template.project_id, createdTask.id, assignee.assigned_by);
}
for (const label of template.labels) {
const q = `SELECT add_or_remove_task_label($1, $2) AS labels;`;
await db.query(q, [createdTask.id, label.label_id]);
}
console.log(`Created task for template ${template.name} with end date ${nextEndDate.format(TIME_FORMAT)}`);
}
} else {
console.log(`Skipped creating task for template ${template.name} with end date ${nextEndDate.format(TIME_FORMAT)} - task already exists`);
}
}
return createdTasks;
}
async function onRecurringTaskJobTick() {
try {
log("(cron) Recurring tasks job started.");
@@ -111,44 +33,65 @@ async function onRecurringTaskJobTick() {
? moment(template.last_task_end_date)
: moment(template.created_at);
// Calculate future limit based on schedule type
const futureLimit = moment(template.last_checked_at || template.created_at)
.add(getFutureLimit(
template.schedule_type,
template.interval_days || template.interval_weeks || template.interval_months || 1
));
const futureLimit = moment(template.last_checked_at || template.created_at).add(1, "week");
let nextEndDate = calculateNextEndDate(template, lastTaskEndDate);
const endDatesToCreate: moment.Moment[] = [];
// Find all future occurrences within the limit
while (nextEndDate.isSameOrBefore(futureLimit)) {
if (nextEndDate.isAfter(now)) {
endDatesToCreate.push(moment(nextEndDate));
}
// Find the next future occurrence
while (nextEndDate.isSameOrBefore(now)) {
nextEndDate = calculateNextEndDate(template, nextEndDate);
}
// Batch create tasks for all future dates
if (endDatesToCreate.length > 0) {
const createdTasks = await createBatchTasks(template, endDatesToCreate);
createdTaskCount += createdTasks.length;
// Update the last_checked_at in the schedule
const updateScheduleQuery = `
UPDATE task_recurring_schedules
SET last_checked_at = $1::DATE,
last_created_task_end_date = $2
WHERE id = $3;
// Only create a task if it's within the future limit
if (nextEndDate.isSameOrBefore(futureLimit)) {
const existingTaskQuery = `
SELECT id FROM tasks
WHERE schedule_id = $1 AND end_date::DATE = $2::DATE;
`;
await db.query(updateScheduleQuery, [
moment().format(TIME_FORMAT),
endDatesToCreate[endDatesToCreate.length - 1].format(TIME_FORMAT),
template.schedule_id
]);
const existingTaskResult = await db.query(existingTaskQuery, [template.schedule_id, nextEndDate.format(TIME_FORMAT)]);
if (existingTaskResult.rows.length === 0) {
const createTaskQuery = `SELECT create_quick_task($1::json) as task;`;
const taskData = {
name: template.name,
priority_id: template.priority_id,
project_id: template.project_id,
reporter_id: template.reporter_id,
status_id: template.status_id || null,
end_date: nextEndDate.format(TIME_FORMAT),
schedule_id: template.schedule_id
};
const createTaskResult = await db.query(createTaskQuery, [JSON.stringify(taskData)]);
const createdTask = createTaskResult.rows[0].task;
if (createdTask) {
createdTaskCount++;
for (const assignee of template.assignees) {
await TasksController.createTaskBulkAssignees(assignee.team_member_id, template.project_id, createdTask.id, assignee.assigned_by);
}
for (const label of template.labels) {
const q = `SELECT add_or_remove_task_label($1, $2) AS labels;`;
await db.query(q, [createdTask.id, label.label_id]);
}
console.log(`Created task for template ${template.name} with end date ${nextEndDate.format(TIME_FORMAT)}`);
}
} else {
console.log(`Skipped creating task for template ${template.name} with end date ${nextEndDate.format(TIME_FORMAT)} - task already exists`);
}
} else {
console.log(`No tasks created for template ${template.name} - next occurrence is beyond the future limit`);
console.log(`No task created for template ${template.name} - next occurrence is beyond the future limit`);
}
// Update the last_checked_at in the schedule
const updateScheduleQuery = `
UPDATE task_recurring_schedules
SET last_checked_at = $1::DATE, last_created_task_end_date = $2
WHERE id = $3;
`;
await db.query(updateScheduleQuery, [moment(template.last_checked_at || template.created_at).add(1, "day").format(TIME_FORMAT), nextEndDate.format(TIME_FORMAT), template.schedule_id]);
}
log(`(cron) Recurring tasks job ended with ${createdTaskCount} new tasks created.`);
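
Note: the removed batch helper created every occurrence up to a schedule-type-specific limit, while the new hunk creates at most one task per tick and advances last_checked_at by a single day. A short worked sketch of the new flow for a daily template, with assumed dates:

import moment from "moment";

const now = moment("2025-05-20");
const lastTaskEndDate = moment("2025-05-17");             // end date of the last task created for this schedule
const futureLimit = moment("2025-05-19").add(1, "week");  // last_checked_at + 1 week

// calculateNextEndDate for a daily schedule effectively adds one day.
let nextEndDate = lastTaskEndDate.clone().add(1, "day");
while (nextEndDate.isSameOrBefore(now)) {
  nextEndDate = nextEndDate.clone().add(1, "day");
}

// 2025-05-21 is after "now" and within the one-week limit, so exactly one task is created this tick.
console.log(nextEndDate.format("YYYY-MM-DD"), nextEndDate.isSameOrBefore(futureLimit)); // 2025-05-21 true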

View File

@@ -3,16 +3,13 @@ import { Strategy as LocalStrategy } from "passport-local";
import { log_error } from "../../shared/utils";
import db from "../../config/db";
import { Request } from "express";
import { ERROR_KEY, SUCCESS_KEY } from "./passport-constants";
async function handleLogin(req: Request, email: string, password: string, done: any) {
// Clear any existing flash messages
(req.session as any).flash = {};
console.log("Login attempt for:", email);
if (!email || !password) {
const errorMsg = "Please enter both email and password";
req.flash(ERROR_KEY, errorMsg);
return done(null, false);
console.log("Missing credentials");
return done(null, false, { message: "Please enter both email and password" });
}
try {
@@ -22,27 +19,23 @@ async function handleLogin(req: Request, email: string, password: string, done:
AND google_id IS NULL
AND is_deleted IS FALSE;`;
const result = await db.query(q, [email]);
console.log("User query result count:", result.rowCount);
const [data] = result.rows;
if (!data?.password) {
const errorMsg = "No account found with this email";
req.flash(ERROR_KEY, errorMsg);
return done(null, false);
console.log("No account found");
return done(null, false, { message: "No account found with this email" });
}
const passwordMatch = bcrypt.compareSync(password, data.password);
console.log("Password match:", passwordMatch);
if (passwordMatch && email === data.email) {
delete data.password;
const successMsg = "User successfully logged in";
req.flash(SUCCESS_KEY, successMsg);
return done(null, data);
return done(null, data, {message: "User successfully logged in"});
}
const errorMsg = "Incorrect email or password";
req.flash(ERROR_KEY, errorMsg);
return done(null, false);
return done(null, false, { message: "Incorrect email or password" });
} catch (error) {
console.error("Login error:", error);
log_error(error, req.body);
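
Note: the strategy now reports outcomes through the info argument of done() instead of connect-flash messages. A minimal sketch (assumed route handler, not the repository's) of surfacing that message with Passport's custom-callback form of authenticate:

import passport from "passport";
import { Request, Response, NextFunction } from "express";

export function login(req: Request, res: Response, next: NextFunction) {
  passport.authenticate("local", (err: any, user: any, info?: { message?: string }) => {
    if (err) return next(err);
    if (!user) return res.status(401).send({ done: false, message: info?.message });
    req.logIn(user, loginErr => {
      if (loginErr) return next(loginErr);
      return res.status(200).send({ done: true, message: info?.message });
    });
  })(req, res, next);
}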

View File

@@ -1,4 +0,0 @@
{
"doesNotExistText": "Na vjen keq, faqja që kërkoni nuk ekziston.",
"backHomeButton": "Kthehu në Faqen Kryesore"
}

View File

@@ -1,31 +0,0 @@
{
"continue": "Vazhdo",
"setupYourAccount": "Konfiguro Llogarinë Tënde në Worklenz.",
"organizationStepTitle": "Emërtoni Organizatën Tuaj",
"organizationStepLabel": "Zgjidhni një emër për llogarinë tuaj në Worklenz.",
"projectStepTitle": "Krijoni projektin tuaj të parë",
"projectStepLabel": "Në cilin projekt po punoni aktualisht?",
"projectStepPlaceholder": "p.sh. Plani i Marketingut",
"tasksStepTitle": "Krijoni detyrat tuaja të para",
"tasksStepLabel": "Shkruani disa detyra që do të kryeni në",
"tasksStepAddAnother": "Shto një tjetër",
"emailPlaceholder": "Adresa email",
"invalidEmail": "Ju lutemi vendosni një adresë email të vlefshme",
"or": "ose",
"templateButton": "Importo nga shablloni",
"goBack": "Kthehu Mbrapa",
"cancel": "Anulo",
"create": "Krijo",
"templateDrawerTitle": "Zgjidh nga shabllonet",
"step3InputLabel": "Fto me email",
"addAnother": "Shto një tjetër",
"skipForNow": "Kalo tani për tani",
"formTitle": "Krijoni detyrën tuaj të parë.",
"step3Title": "Fto ekipin tënd të punojë me",
"maxMembers": " (Mund të ftoni deri në 5 anëtarë)",
"maxTasks": " (Mund të krijoni deri në 5 detyra)"
}

View File

@@ -1,113 +0,0 @@
{
"title": "Faturimet",
"currentBill": "Fatura Aktuale",
"configuration": "Konfigurimi",
"currentPlanDetails": "Detajet e Planit Aktual",
"upgradePlan": "Përmirëso Planin",
"cardBodyText01": "Provë falas",
"cardBodyText02": "(Plani juaj i provës skadon në 1 muaj 19 ditë)",
"redeemCode": "Kodi i Zbritjes",
"accountStorage": "Depozita e Llogarisë",
"used": "Përdorur:",
"remaining": "E mbetur:",
"charges": "Tarifat",
"tooltip": "Tarifat për ciklin aktual të faturimit",
"description": "Përshkrimi",
"billingPeriod": "Periudha e Faturimit",
"billStatus": "Statusi i Faturës",
"perUserValue": "Vlera për Përdorues",
"users": "Përdoruesit",
"amount": "Shuma",
"invoices": "Faturat",
"transactionId": "ID e Transaksionit",
"transactionDate": "Data e Transaksionit",
"paymentMethod": "Metoda e Pagesës",
"status": "Statusi",
"ltdUsers": "Mund të shtoni deri në {{ltd_users}} përdorues.",
"totalSeats": "Vende totale",
"availableSeats": "Vende të disponueshme",
"addMoreSeats": "Shto më shumë vende",
"drawerTitle": "Kodi i Zbritjes",
"label": "Kodi i Zbritjes",
"drawerPlaceholder": "Vendosni kodin tuaj të zbritjes",
"redeemSubmit": "Paraqit",
"modalTitle": "Zgjidhni planin më të mirë për ekipin tuaj",
"seatLabel": "Numri i vendeve",
"freePlan": "Plan Falas",
"startup": "Startup",
"business": "Biznes",
"tag": "Më i Popullarizuar",
"enterprise": "Ndërmarrje",
"freeSubtitle": "falas përgjithmonë",
"freeUsers": "Më e mira për përdorim personal",
"freeText01": "100MB depozitë",
"freeText02": "3 projekte",
"freeText03": "5 anëtarë të ekipit",
"startupSubtitle": "ÇMIM I RASTËSISHËM / muaj",
"startupUsers": "Deri në 15 përdorues",
"startupText01": "25GB depozitë",
"startupText02": "Projekte të pakufizuara aktive",
"startupText03": "Orar",
"startupText04": "Raportim",
"startupText05": "Abonohu në projekte",
"businessSubtitle": "përdorues / muaj",
"businessUsers": "16 - 200 përdorues",
"enterpriseUsers": "200 - 500+ përdorues",
"footerTitle": "Ju lutemi na jepni një numër kontakti që mund të përdorim për t'ju kontaktuar.",
"footerLabel": "Numri i Kontaktit",
"footerButton": "Na kontaktoni",
"redeemCodePlaceHolder": "Vendosni kodin tuaj të zbritjes",
"submit": "Paraqit",
"trialPlan": "Provë Falas",
"trialExpireDate": "E vlefshme deri më {{trial_expire_date}}",
"trialExpired": "Provat tuaja falas skaduan {{trial_expire_string}}",
"trialInProgress": "Provat tuaja falas skadojnë {{trial_expire_string}}",
"required": "Kjo fushë është e detyrueshme",
"invalidCode": "Kod i pavlefshëm",
"selectPlan": "Zgjidhni planin më të mirë për ekipin tuaj",
"changeSubscriptionPlan": "Ndryshoni planin tuaj të abonimit",
"noOfSeats": "Numri i vendeve",
"annualPlan": "Pro - Vjetor",
"monthlyPlan": "Pro - Mujor",
"freeForever": "Falas Përgjithmonë",
"bestForPersonalUse": "Më e mira për përdorim personal",
"storage": "Depozitë",
"projects": "Projekte",
"teamMembers": "Anëtarët e Ekipit",
"unlimitedTeamMembers": "Anëtarë të pakufizuar të ekipit",
"unlimitedActiveProjects": "Projekte të pakufizuara aktive",
"schedule": "Orar",
"reporting": "Raportim",
"subscribeToProjects": "Abonohu në projekte",
"billedAnnually": "Faturuar çdo vit",
"billedMonthly": "Faturuar çdo muaj",
"pausePlan": "Pauzë Planin",
"resumePlan": "Rifillo Planin",
"changePlan": "Ndrysho Planin",
"cancelPlan": "Anulo Planin",
"perMonthPerUser": "për përdorues/muaj",
"viewInvoice": "Shiko Faturën",
"switchToFreePlan": "Kalo në Planin Falas",
"expirestoday": "sot",
"expirestomorrow": "nesër",
"expiredDaysAgo": "{{days}} ditë më parë",
"continueWith": "Vazhdo me {{plan}}",
"changeToPlan": "Ndrysho në {{plan}}"
}

View File

@@ -1,8 +0,0 @@
{
"overview": "Përmbledhje",
"name": "Emri i Organizatës",
"owner": "Pronari i Organizatës",
"admins": "Administruesit e Organizatës",
"contactNumber": "Shto Numrin e Kontaktit",
"edit": "Redakto"
}

View File

@@ -1,12 +0,0 @@
{
"membersCount": "Numri i Anëtarëve",
"createdAt": "Krijuar më",
"projectName": "Emri i Projektit",
"teamName": "Emri i Ekipit",
"refreshProjects": "Rifresko Projektet",
"searchPlaceholder": "Kërkoni sipas emrit të projektit",
"deleteProject": "Jeni i sigurt që dëshironi të fshini këtë projekt?",
"confirm": "Konfirmo",
"cancel": "Anulo",
"delete": "Fshi Projektin"
}

View File

@@ -1,8 +0,0 @@
{
"overview": "Përmbledhje",
"users": "Përdoruesit",
"teams": "Ekipet",
"billing": "Faturimi",
"projects": "Projektet",
"adminCenter": "Qendra Administrative"
}

View File

@@ -1,33 +0,0 @@
{
"title": "Ekipet",
"subtitle": "ekipet",
"tooltip": "Rifresko ekipet",
"placeholder": "Kërko sipas emrit",
"addTeam": "Shto Ekip",
"team": "Ekipi",
"membersCount": "Numri i Anëtarëve",
"members": "Anëtarët",
"drawerTitle": "Krijo Ekip të Ri",
"label": "Emri i Ekipit",
"drawerPlaceholder": "Emri",
"create": "Krijo",
"delete": "Fshi",
"settings": "Cilësimet",
"popTitle": "Jeni i sigurt?",
"message": "Ju lutemi shkruani një Emër",
"teamSettings": "Cilësimet e Ekipit",
"teamName": "Emri i Ekipit",
"teamDescription": "Përshkrimi i Ekipit",
"teamMembers": "Anëtarët e Ekipit",
"teamMembersCount": "Numri i Anëtarëve të Ekipit",
"teamMembersPlaceholder": "Kërko sipas emrit",
"addMember": "Shto Anëtar",
"add": "Shto",
"update": "Përditëso",
"teamNamePlaceholder": "Emri i ekipit",
"user": "Përdoruesi",
"role": "Roli",
"owner": "Pronari",
"admin": "Administruesi",
"member": "Anëtari"
}

View File

@@ -1,9 +0,0 @@
{
"title": "Përdoruesit",
"subTitle": "përdoruesit",
"placeholder": "Kërko sipas emrit",
"user": "Përdoruesi",
"email": "Email",
"lastActivity": "Aktiviteti i Fundit",
"refresh": "Rifresko përdoruesit"
}

View File

@@ -1,34 +0,0 @@
{
"name": "Emri",
"client": "Klienti",
"category": "Kategoria",
"status": "Statusi",
"tasksProgress": "Përparimi i Detyrave",
"updated_at": "E Përditësuar së Fundi",
"members": "Anëtarët",
"setting": "Cilësimet",
"projects": "Projektet",
"refreshProjects": "Rifresko projektet",
"all": "Të gjitha",
"favorites": "Të preferuarit",
"archived": "E arkivuar",
"placeholder": "Kërko sipas emrit",
"archive": "Arkivo",
"unarchive": "Çarkivo",
"archiveConfirm": "Jeni i sigurt që dëshironi të arkivoni këtë projekt?",
"unarchiveConfirm": "Jeni i sigurt që dëshironi të çarkivoni këtë projekt?",
"yes": "Po",
"no": "Jo",
"clickToFilter": "Kliko për të filtruar sipas",
"noProjects": "Nuk u gjetën projekte",
"addToFavourites": "Shto te të preferuarit",
"list": "Lista",
"group": "Grupi",
"listView": "Pamja e Listës",
"groupView": "Pamja e Grupit",
"groupBy": {
"category": "Kategoria",
"client": "Klienti"
},
"noPermission": "Nuk keni leje për të kryer këtë veprim"
}

View File

@@ -1,5 +0,0 @@
{
"loggingOut": "Po dilni...",
"authenticating": "Po autentikoheni...",
"gettingThingsReady": "Po përgatiten gjërat për ju..."
}

View File

@@ -1,12 +0,0 @@
{
"headerDescription": "Rivendosni fjalëkalimin tuaj",
"emailLabel": "Email",
"emailPlaceholder": "Vendosni email-in tuaj",
"emailRequired": "Ju lutemi vendosni Email-in tuaj!",
"resetPasswordButton": "Rivendos Fjalëkalimin",
"returnToLoginButton": "Kthehu te Hyrja",
"passwordResetSuccessMessage": "Një lidhje për rivendosjen e fjalëkalimit është dërguar në email-in tuaj.",
"orText": "OSE",
"successTitle": "U dërguan udhëzimet për rivendosje!",
"successMessage": "Informacioni për rivendosje është dërguar në email-in tuaj. Ju lutemi kontrolloni email-in."
}

View File

@@ -1,27 +0,0 @@
{
"headerDescription": "Hyni në llogarinë tuaj",
"emailLabel": "Email",
"emailPlaceholder": "Vendosni email-in tuaj",
"emailRequired": "Ju lutemi vendosni Email-in tuaj!",
"passwordLabel": "Fjalëkalimi",
"passwordPlaceholder": "Vendosni fjalëkalimin",
"passwordRequired": "Ju lutemi vendosni Fjalëkalimin!",
"rememberMe": "Më mbaj mend",
"loginButton": "Hyr",
"signupButton": "Regjistrohu",
"forgotPasswordButton": "Keni harruar fjalëkalimin?",
"signInWithGoogleButton": "Hyr me Google",
"dontHaveAccountText": "Nuk keni llogari?",
"orText": "OSE",
"successMessage": "Jeni futur me sukses!",
"loginError": "Hyrja dështoi",
"googleLoginError": "Hyrja përmes Google dështoi",
"validationMessages": {
"email": "Ju lutemi vendosni një adresë email të vlefshme",
"password": "Fjalëkalimi duhet të jetë së paku 8 karaktere"
},
"errorMessages": {
"loginErrorTitle": "Hyrja dështoi",
"loginErrorMessage": "Ju lutemi kontrolloni email-in dhe fjalëkalimin dhe provoni përsëri"
}
}

View File

@@ -1,29 +0,0 @@
{
"headerDescription": "Regjistrohuni për të filluar",
"nameLabel": "Emri i Plotë",
"namePlaceholder": "Shkruani emrin tuaj të plotë",
"nameRequired": "Ju lutemi shkruani emrin tuaj të plotë!",
"nameMinCharacterRequired": "Emri duhet të jetë së paku 4 karaktere!",
"emailLabel": "Email",
"emailPlaceholder": "Shkruani email-in tuaj",
"emailRequired": "Ju lutemi shkruani Email-in tuaj!",
"passwordLabel": "Fjalëkalimi",
"passwordPlaceholder": "Krijoni një fjalëkalim",
"passwordRequired": "Ju lutemi krijoni një Fjalëkalim!",
"passwordMinCharacterRequired": "Fjalëkalimi duhet të jetë së paku 8 karaktere!",
"passwordPatternRequired": "Fjalëkalimi nuk plotëson kërkesat!",
"strongPasswordPlaceholder": "Vendosni një fjalëkalim më të fortë",
"passwordValidationAltText": "Fjalëkalimi duhet të përmbajë së paku 8 karaktere me shkronja të mëdha dhe të vogla, një numër dhe një simbol.",
"signupSuccessMessage": "Jeni regjistruar me sukses!",
"privacyPolicyLink": "Politika e Privatësisë",
"termsOfUseLink": "Kushtet e Përdorimit",
"bySigningUpText": "Duke u regjistruar, ju pranoni",
"andText": "dhe",
"signupButton": "Regjistrohu",
"signInWithGoogleButton": "Hyr me Google",
"alreadyHaveAccountText": "Keni tashmë një llogari?",
"loginButton": "Hyr",
"orText": "OSE",
"reCAPTCHAVerificationError": "Gabim në Verifikimin e reCAPTCHA",
"reCAPTCHAVerificationErrorMessage": "Nuk mundëm të verifikojmë reCAPTCHA-n tuaj. Ju lutemi provoni përsëri."
}

View File

@@ -1,14 +0,0 @@
{
"title": "Verifikoni Email-in për Rivendosje",
"description": "Vendosni fjalëkalimin tuaj të ri",
"placeholder": "Vendosni fjalëkalimin tuaj të ri",
"confirmPasswordPlaceholder": "Konfirmoni fjalëkalimin e ri",
"passwordHint": "Të paktën 8 karaktere, me shkronja të mëdha dhe të vogla, një numër dhe një simbol.",
"resetPasswordButton": "Rivendos fjalëkalimin",
"orText": "Ose",
"resendResetEmail": "Dërgo përsëri email-in e rivendosjes",
"passwordRequired": "Ju lutemi vendosni fjalëkalimin e ri",
"returnToLoginButton": "Kthehu te Hyrja",
"confirmPasswordRequired": "Ju lutemi konfirmoni fjalëkalimin e ri",
"passwordMismatch": "Fjalëkalimet nuk përputhen"
}

View File

@@ -1,9 +0,0 @@
{
"login-success": "Hyrja u krye me sukses!",
"login-failed": "Hyrja dështoi. Ju lutemi kontrolloni kredencialet dhe provoni përsëri.",
"signup-success": "Regjistrimi u krye me sukses! Mirë se erdhët.",
"signup-failed": "Regjistrimi dështoi. Ju lutemi sigurohuni që të gjitha fushat e nevojshme janë plotësuar dhe provoni përsëri.",
"reconnecting": "Jeni shkëputur nga serveri.",
"connection-lost": "Lidhja me serverin dështoi. Ju lutemi kontrolloni lidhjen tuaj me internet.",
"connection-restored": "U lidhët me serverin me sukses"
}

View File

@@ -1,13 +0,0 @@
{
"formTitle": "Krijoni projektin tuaj të parë",
"inputLabel": "Në cilin projekt po punoni aktualisht?",
"or": "ose",
"templateButton": "Importo nga shablloni",
"createFromTemplate": "Krijo nga shablloni",
"goBack": "Kthehu Mbrapa",
"continue": "Vazhdo",
"cancel": "Anulo",
"create": "Krijo",
"templateDrawerTitle": "Zgjidh nga shabllonet",
"createProject": "Krijo Projekt"
}

View File

@@ -1,7 +0,0 @@
{
"formTitle": "Krijo detyrën tënde të parë.",
"inputLabel": "Shkruaj disa detyra që do të kryesh në",
"addAnother": "Shto një tjetër",
"goBack": "Kthehu mbrapa",
"continue": "Vazhdo"
}

View File

@@ -1,46 +0,0 @@
{
"todoList": {
"title": "Lista e Detyrave",
"refreshTasks": "Rifresko detyrat",
"addTask": "+ Shto Detyrë",
"noTasks": "Asnjë detyrë",
"pressEnter": "Shtyp",
"toCreate": "për të krijuar.",
"markAsDone": "Shëno si të përfunduar"
},
"projects": {
"title": "Projektet",
"refreshProjects": "Rifresko projektet",
"noRecentProjects": "Aktualisht nuk jeni caktuar në asnjë projekt.",
"noFavouriteProjects": "Asnjë projekt i shënuar si i preferuar.",
"recent": "Të Fundit",
"favourites": "Të Preferuarat"
},
"tasks": {
"assignedToMe": "Më janë caktuar",
"assignedByMe": "I kam caktuar",
"all": "Të Gjitha",
"today": "Sot",
"upcoming": "Ardhj",
"overdue": "Të vonuara",
"noDueDate": "Pa afat",
"noTasks": "Asnjë detyrë për të shfaqur.",
"addTask": "+ Shto detyrë",
"name": "Emri",
"project": "Projekti",
"status": "Statusi",
"dueDate": "Afati",
"dueDatePlaceholder": "Cakto Afatin",
"tomorrow": "Nesër",
"nextWeek": "Javën e Ardhshme",
"nextMonth": "Muajin e Ardhshëm",
"projectRequired": "Ju lutemi zgjidhni një projekt",
"pressTabToSelectDueDateAndProject": "Shtyp Tab për të zgjedhur afatin dhe projektin",
"dueOn": "Detyrat me afat më",
"taskRequired": "Ju lutemi shtoni një detyrë",
"list": "Listë",
"calendar": "Kalendar",
"tasks": "Detyrat",
"refresh": "Rifresko"
}
}

View File

@@ -1,8 +0,0 @@
{
"formTitle": "Fto ekipin tënd të punojë me",
"inputLabel": "Fto me email",
"addAnother": "Shto një tjetër",
"goBack": "Kthehu mbrapa",
"continue": "Vazhdo",
"skipForNow": "Anashkalo tani për tani"
}

View File

@@ -1,30 +0,0 @@
{
"rename": "Riemërto",
"delete": "Fshi",
"addTask": "Shto Detyrë",
"addSectionButton": "Shto Seksion",
"changeCategory": "Ndrysho kategorinë",
"deleteTooltip": "Fshi",
"deleteConfirmationTitle": "Jeni i sigurt?",
"deleteConfirmationOk": "Po",
"deleteConfirmationCancel": "Anulo",
"dueDate": "Data e përfundimit",
"cancel": "Anulo",
"today": "Sot",
"tomorrow": "Nesër",
"assignToMe": "Cakto mua",
"archive": "Arkivo",
"newTaskNamePlaceholder": "Shkruaj emrin e detyrës",
"newSubtaskNamePlaceholder": "Shkruaj emrin e nëndetyrës",
"untitledSection": "Seksion pa titull",
"unmapped": "Pa hartë",
"clickToChangeDate": "Klikoni për të ndryshuar datën",
"noDueDate": "Pa datë përfundimi",
"save": "Ruaj",
"clear": "Pastro",
"nextWeek": "Javën e ardhshme"
}

View File

@@ -1,6 +0,0 @@
{
"title": "Prova juaj e Worklenz ka skaduar!",
"subtitle": "Ju lutemi përmirësoni tani.",
"button": "Përmirëso tani",
"checking": "Po kontrollohet statusi i abonimit..."
}

View File

@@ -1,31 +0,0 @@
{
"logoAlt": "Logoja e Worklenz",
"home": "Kryefaqja",
"projects": "Projektet",
"schedule": "Orari",
"reporting": "Raportimi",
"clients": "Klientët",
"teams": "Ekipet",
"labels": "Etiketa",
"jobTitles": "Tituj Pune",
"upgradePlan": "Përmirëso Abonimin",
"upgradePlanTooltip": "Përmirëso abonimin",
"invite": "Fto",
"inviteTooltip": "Fto anëtarë të ekipit të bashkohen",
"switchTeamTooltip": "Ndrysho ekipin",
"help": "Ndihmë",
"notificationTooltip": "Shiko njoftimet",
"profileTooltip": "Shiko profilin",
"adminCenter": "Qendra Administrative",
"settings": "Cilësimet",
"logOut": "Dil",
"notificationsDrawer": {
"read": "Lexuara e njoftimet ",
"unread": "Njoftimet e palexuara",
"markAsRead": "Shëno si të lexuara",
"readAndJoin": "Lexo & Bashkohu",
"accept": "Prano",
"acceptAndJoin": "Prano & Bashkohu",
"noNotifications": "Asnjë njoftim"
}
}

View File

@@ -1,5 +0,0 @@
{
"nameYourOrganization": "Emërtoni organizatën tuaj.",
"worklenzAccountTitle": "Zgjidhni një emër për llogarinë tuaj në Worklenz.",
"continue": "Vazhdo"
}

View File

@@ -1,19 +0,0 @@
{
"configurePhases": "Konfiguro Fazat",
"phaseLabel": "Etiketa e Fazës",
"enterPhaseName": "Vendosni një emër për etiketën e fazës",
"addOption": "Shto Opsion",
"phaseOptions": "Opsionet e Fazës:",
"dragToReorderPhases": "Zvarrit fazat për t'i rirenditur. Çdo fazë mund të ketë një ngjyrë të ndryshme.",
"enterNewPhaseName": "Shkruani emrin e fazës së re...",
"addPhase": "Shto Fazë",
"noPhasesFound": "Nuk u gjetën faza. Krijoni fazën tuaj të parë më sipër.",
"deletePhase": "Fshi Fazën",
"deletePhaseConfirm": "Jeni të sigurt që doni të fshini këtë fazë? Ky veprim nuk mund të zhbëhet.",
"rename": "Riemëro",
"delete": "Fshi",
"enterPhaseName": "Shkruani emrin e fazës",
"selectColor": "Zgjidh ngjyrën",
"managePhases": "Menaxho Fazat",
"close": "Mbyll"
}

View File

@@ -1,42 +0,0 @@
{
"createProject": "Krijo Projekt",
"editProject": "Modifiko Projektin",
"enterCategoryName": "Vendosni emër për kategorinë",
"hitEnterToCreate": "Shtyp Enter për të krijuar!",
"enterNotes": "Shënime",
"youCanManageClientsUnderSettings": "Mund të menaxhoni klientët nën Cilësimet",
"addCategory": "Shto kategori projektit",
"newCategory": "Kategori e Re",
"notes": "Shënime",
"startDate": "Data e Fillimit",
"endDate": "Data e Përfundimit",
"estimateWorkingDays": "Vlerëso ditët e punës",
"estimateManDays": "Vlerëso ditët e punëtorëve",
"hoursPerDay": "Orë në ditë",
"create": "Krijo",
"update": "Përditëso",
"delete": "Fshi",
"typeToSearchClients": "Shkruani për të kërkuar klientë",
"projectColor": "Ngjyra e Projektit",
"pleaseEnterAName": "Ju lutemi vendosni një emër",
"enterProjectName": "Vendosni emrin e projektit",
"name": "Emri",
"status": "Statusi",
"health": "Gjendja",
"category": "Kategoria",
"projectManager": "Menaxheri i Projektit",
"client": "Klienti",
"deleteConfirmation": "Jeni i sigurt që doni të fshini?",
"deleteConfirmationDescription": "Kjo do të fshijë të gjitha të dhënat e lidhura dhe nuk mund të zhbëhet.",
"yes": "Po",
"no": "Jo",
"createdAt": "Krijuar më",
"updatedAt": "Përditësuar më",
"by": "nga",
"add": "Shto",
"asClient": "si klient",
"createClient": "Krijo klient",
"searchInputPlaceholder": "Kërko sipas emrit ose emailit",
"hoursPerDayValidationMessage": "Orët në ditë duhet të jenë një numër midis 1 dhe 24",
"noPermission": "Nuk ka leje"
}

View File

@@ -1,14 +0,0 @@
{
"nameColumn": "Emri",
"attachedTaskColumn": "Detyra e Bashkangjitur",
"sizeColumn": "Madhësia",
"uploadedByColumn": "Ngarkuar Nga",
"uploadedAtColumn": "Ngarkuar Më",
"fileIconAlt": "Ikona e skedarit",
"titleDescriptionText": "Të gjitha bashkëngjitjet e detyrave në këtë projekt do të shfahen këtu.",
"deleteConfirmationTitle": "Jeni i sigurt?",
"deleteConfirmationOk": "Po",
"deleteConfirmationCancel": "Anulo",
"segmentedTooltip": "Së shpejti! Kaloni midis pamjes listë dhe pamjes miniaturash.",
"emptyText": "Nuk ka bashkëngjitje në projekt."
}

View File

@@ -1,41 +0,0 @@
{
"overview": {
"title": "Përmbledhje",
"statusOverview": "Përmbledhje Statusi",
"priorityOverview": "Përmbledhje Prioriteti",
"lastUpdatedTasks": "Detyrat e Përditësuara Së Fundi"
},
"members": {
"title": "Anëtarët",
"tooltip": "Anëtarët",
"tasksByMembers": "Detyrat sipas anëtarëve",
"tasksByMembersTooltip": "Detyrat sipas anëtarëve",
"name": "Emri",
"taskCount": "Numri i Detyrave",
"contribution": "Kontributi",
"completed": "Të Përfunduara",
"incomplete": "Të Papërfunduara",
"overdue": "Të Vonuara",
"progress": "Progresi"
},
"tasks": {
"overdueTasks": "Detyrat e Vonuara",
"overLoggedTasks": "Detyrat me regjistrim të tepërt",
"tasksCompletedEarly": "Detyrat e përfunduara para afatit",
"tasksCompletedLate": "Detyrat e përfunduara pas afatit",
"overLoggedTasksTooltip": "Detyrat me kohë të regjistruar mbi kohën e vlerësuar",
"overdueTasksTooltip": "Detyrat që kanë kaluar afatin e tyre"
},
"common": {
"seeAll": "Shiko të gjitha",
"totalLoggedHours": "Orët totale të regjistruara",
"totalEstimation": "Vlerësimi total",
"completedTasks": "Detyrat e përfunduara",
"incompleteTasks": "Detyrat e papërfunduara",
"overdueTasks": "Detyrat e vonuara",
"overdueTasksTooltip": "Detyrat që kanë kaluar afatin e tyre",
"totalLoggedHoursTooltip": "Vlerësimi dhe koha e regjistruar për detyrat.",
"includeArchivedTasks": "Përfshi Detyrat e Arkivuara",
"export": "Eksporto"
}
}

View File

@@ -1,17 +0,0 @@
{
"nameColumn": "Emri",
"jobTitleColumn": "Titulli i Punës",
"emailColumn": "Email",
"tasksColumn": "Detyrat",
"taskProgressColumn": "Progresi i Detyrave",
"accessColumn": "Qasja",
"fileIconAlt": "Ikona e skedarit",
"deleteConfirmationTitle": "Jeni i sigurt?",
"deleteConfirmationOk": "Po",
"deleteConfirmationCancel": "Anulo",
"refreshButtonTooltip": "Rifresko anëtarët",
"deleteButtonTooltip": "Hiq nga projekti",
"memberCount": "Anëtar",
"membersCountPlural": "Anëtarë",
"emptyText": "Nuk ka bashkëngjitje në projekt."
}

View File

@@ -1,6 +0,0 @@
{
"inputPlaceholder": "Shto një koment..",
"addButton": "Shto",
"cancelButton": "Anulo",
"deleteButton": "Fshi"
}

View File

@@ -1,14 +0,0 @@
{
"taskList": "Lista e Detyrave",
"board": "Tabela Kanban",
"insights": "Analiza",
"files": "Skedarë",
"members": "Anëtarë",
"updates": "Përditësime",
"projectView": "Pamja e Projektit",
"loading": "Duke ngarkuar projektin...",
"error": "Gabim në ngarkimin e projektit",
"pinnedTab": "E fiksuar si tab i parazgjedhur",
"pinTab": "Fikso si tab i parazgjedhur",
"unpinTab": "Hiqe fiksimin e tab-it të parazgjedhur"
}

View File

@@ -1,11 +0,0 @@
{
"importTaskTemplate": "Importo Shabllon Detyrash",
"templateName": "Emri i Shabllonit",
"templateDescription": "Përshkrimi i Shabllonit",
"selectedTasks": "Detyrat e Përzgjedhura",
"tasks": "Detyrat",
"templates": "Shabllonet",
"remove": "Hiq",
"cancel": "Anulo",
"import": "Importo"
}

View File

@@ -1,7 +0,0 @@
{
"title": "Anëtarët e Projektit",
"searchLabel": "Shtoni anëtarë duke shkruar emrin ose email-in e tyre",
"searchPlaceholder": "Shkruani emrin ose email-in",
"inviteAsAMember": "Fto si anëtar",
"inviteNewMemberByEmail": "Fto anëtar të ri me email"
}

View File

@@ -1,30 +0,0 @@
{
"importTasks": "Importo detyra",
"importTask": "Importo detyrë",
"createTask": "Krijo detyrë",
"settings": "Cilësimet",
"subscribe": "Abonohu",
"unsubscribe": "Çabonohu",
"deleteProject": "Fshi projektin",
"startDate": "Data e fillimit",
"endDate": "Data e mbarimit",
"projectSettings": "Cilësimet e projektit",
"projectSummary": "Përmbledhja e projektit",
"receiveProjectSummary": "Merrni një përmbledhje të projektit çdo mbrëmje.",
"refreshProject": "Rifresko projektin",
"saveAsTemplate": "Ruaj si model",
"invite": "Fto",
"share": "Ndaj",
"subscribeTooltip": "Abonohu tek njoftimet e projektit",
"unsubscribeTooltip": "Çabonohu nga njoftimet e projektit",
"refreshTooltip": "Rifresko të dhënat e projektit",
"settingsTooltip": "Hap cilësimet e projektit",
"saveAsTemplateTooltip": "Ruaj këtë projekt si model",
"inviteTooltip": "Fto anëtarë të ekipit në këtë projekt",
"createTaskTooltip": "Krijo një detyrë të re",
"importTaskTooltip": "Importo detyrë nga modeli",
"navigateBackTooltip": "Kthehu tek lista e projekteve",
"projectStatusTooltip": "Statusi i projektit",
"projectDatesInfo": "Informacion për kohëzgjatjen e projektit",
"projectCategoryTooltip": "Kategoria e projektit"
}

View File

@@ -1,27 +0,0 @@
{
"title": "Ruaj si Shabllon",
"templateName": "Emri i Shabllonit",
"includes": "Çfarë duhet të përfshihet në shabllon nga projekti?",
"includesOptions": {
"statuses": "Statuset",
"phases": "Fazat",
"labels": "Etiketat"
},
"taskIncludes": "Çfarë duhet të përfshihet në shabllon nga detyrat?",
"taskIncludesOptions": {
"statuses": "Statuset",
"phases": "Fazat",
"labels": "Etiketat",
"name": "Emri",
"priority": "Prioriteti",
"status": "Statusi",
"phase": "Faza",
"label": "Etiketa",
"timeEstimate": "Vlerësimi i Kohës",
"description": "Përshkrimi",
"subTasks": "Nëndetyrat"
},
"cancel": "Anulo",
"save": "Ruaj",
"templateNamePlaceholder": "Shkruani emrin e shabllonit"
}

View File

@@ -1,90 +0,0 @@
{
"exportButton": "Eksporto",
"timeLogsButton": "Regjistrimet e Kohës",
"activityLogsButton": "Regjistrimet e Aktivitetit",
"tasksButton": "Detyrat",
"searchByNameInputPlaceholder": "Kërko sipas emrit",
"overviewTab": "Përmbledhje",
"timeLogsTab": "Regjistrimet e Kohës",
"activityLogsTab": "Regjistrimet e Aktivitetit",
"tasksTab": "Detyrat",
"projectsText": "Projektet",
"totalTasksText": "Detyrat Gjithsej",
"assignedTasksText": "Detyrat e Caktuara",
"completedTasksText": "Detyrat e Përfunduara",
"ongoingTasksText": "Detyrat në Vazhdim",
"overdueTasksText": "Detyrat e Vonuara",
"loggedHoursText": "Orët e Regjistruara",
"tasksText": "Detyrat",
"allText": "Të Gjitha",
"tasksByProjectsText": "Detyrat Sipas Projekteve",
"tasksByStatusText": "Detyrat Sipas Statusit",
"tasksByPriorityText": "Detyrat Sipas Prioritetit",
"todoText": "Për Të Bërë",
"doingText": "Duke bërë",
"doneText": "E Përfunduar",
"lowText": "I Ulët",
"mediumText": "I Mesëm",
"highText": "I Lartë",
"billableButton": "Fakturueshme",
"billableText": "Fakturueshme",
"nonBillableText": "Jo Fakturueshme",
"timeLogsEmptyPlaceholder": "Asnjë regjistrim kohe për të shfaqur",
"loggedText": "Regjistruar",
"forText": "për",
"inText": "në",
"updatedText": "Përditësuar",
"fromText": "Nga",
"toText": "në",
"withinText": "brenda",
"activityLogsEmptyPlaceholder": "Asnjë regjistrim aktiviteti për të shfaqur",
"filterByText": "Filtro sipas:",
"selectProjectPlaceholder": "Zgjidh Projektin",
"taskColumn": "Detyra",
"nameColumn": "Emri",
"projectColumn": "Projekti",
"statusColumn": "Statusi",
"priorityColumn": "Prioriteti",
"dueDateColumn": "Afati",
"completedDateColumn": "Data e Përfundimit",
"estimatedTimeColumn": "Koha e Vlerësuar",
"loggedTimeColumn": "Koha e Regjistruar",
"overloggedTimeColumn": "Koha e Tepërt",
"daysLeftColumn": "Ditë të Mbetura/Vonuar",
"startDateColumn": "Data e Fillimit",
"endDateColumn": "Data e Përfundimit",
"actualTimeColumn": "Koha Aktuale",
"projectHealthColumn": "Gjendja e Projektit",
"categoryColumn": "Kategoria",
"projectManagerColumn": "Menaxheri i Projektit",
"tasksStatsOverviewDrawerTitle": "Detyrat e ",
"projectsStatsOverviewDrawerTitle": "Projektet e ",
"cancelledText": "Anuluar",
"blockedText": "E Bllokuar",
"onHoldText": "Në Pritje",
"proposedText": "E Propozuar",
"inPlanningText": "Në Planifikim",
"inProgressText": "Në Progres",
"completedText": "E Përfunduar",
"continuousText": "E Vazhdueshme",
"daysLeftText": "ditë të mbetura",
"daysOverdueText": "ditë vonuar",
"notSetText": "Pa Caktuar",
"needsAttentionText": "Kërkon Vëmendje",
"atRiskText": "Në Rrezik",
"goodText": "Në Rregull"
}

View File

@@ -1,35 +0,0 @@
{
"yesterdayText": "Dje",
"lastSevenDaysText": "7 Ditët e Fundit",
"lastWeekText": "Javën e Kaluar",
"lastThirtyDaysText": "30 Ditët e Fundit",
"lastMonthText": "Muajin e Kaluar",
"lastThreeMonthsText": "3 Muajt e Fundit",
"allTimeText": "Të Gjitha",
"customRangeText": "Interval i Përshtatur",
"startDateInputPlaceholder": "Data e fillimit",
"EndDateInputPlaceholder": "Data e përfundimit",
"filterButton": "Filtro",
"membersTitle": "Anëtarët",
"includeArchivedButton": "Përfshij Projektet e Arkivuara",
"exportButton": "Eksporto",
"excelButton": "Excel",
"searchByNameInputPlaceholder": "Kërko sipas emrit",
"memberColumn": "Anëtari",
"tasksProgressColumn": "Progresi i Detyrave",
"tasksAssignedColumn": "Detyrat e Caktuara",
"completedTasksColumn": "Detyrat e Përfunduara",
"overdueTasksColumn": "Detyrat e Vonuara",
"ongoingTasksColumn": "Detyrat në Vazhdim",
"tasksAssignedColumnTooltip": "Detyrat e caktuara në intervalin e zgjedhur",
"overdueTasksColumnTooltip": "Detyrat e vonuara deri në fund të intervalit të zgjedhur",
"completedTasksColumnTooltip": "Detyrat e përfunduara në intervalin e zgjedhur",
"ongoingTasksColumnTooltip": "Detyrat e filluara por jo të përfunduara ende",
"todoText": "Për Të Bërë",
"doingText": "Duke bërë",
"doneText": "E Përfunduar"
}

View File

@@ -1,39 +0,0 @@
{
"exportButton": "Eksporto",
"projectsButton": "Projektet",
"membersButton": "Anëtarët",
"searchByNameInputPlaceholder": "Kërko sipas emrit",
"overviewTab": "Përmbledhje",
"projectsTab": "Projektet",
"membersTab": "Anëtarët",
"projectsByStatusText": "Projektet Sipas Statusit",
"projectsByCategoryText": "Projektet Sipas Kategorisë",
"projectsByHealthText": "Projektet Sipas Gjendjes",
"projectsText": "Projektet",
"allText": "Të Gjitha",
"cancelledText": "Anuluar",
"blockedText": "E Bllokuar",
"onHoldText": "Në Pritje",
"proposedText": "E Propozuar",
"inPlanningText": "Në Planifikim",
"inProgressText": "Në Progres",
"completedText": "E Përfunduar",
"continuousText": "E Vazhdueshme",
"notSetText": "Pa Caktuar",
"needsAttentionText": "Kërkon Vëmendje",
"atRiskText": "Në Rrezik",
"goodText": "Në Rregull",
"nameColumn": "Emri",
"emailColumn": "Email",
"projectsColumn": "Projektet",
"tasksColumn": "Detyrat",
"overdueTasksColumn": "Detyrat e Vonuara",
"completedTasksColumn": "Detyrat e Përfunduara",
"ongoingTasksColumn": "Detyrat në Vazhdim"
}

View File

@@ -1,25 +0,0 @@
{
"overviewTitle": "Përmbledhje",
"includeArchivedButton": "Përfshij Projektet e Arkivuara",
"teamCount": "Ekip",
"teamCountPlural": "Ekipe",
"projectCount": "Projekt",
"projectCountPlural": "Projekte",
"memberCount": "Anëtar",
"memberCountPlural": "Anëtarë",
"activeProjectCount": "Projekt Aktiv",
"activeProjectCountPlural": "Projekte Aktive",
"overdueProjectCount": "Projekt i Vonuar",
"overdueProjectCountPlural": "Projekte të Vonuara",
"unassignedMemberCount": "Anëtar i Pacaktuar",
"unassignedMemberCountPlural": "Anëtarë të Pacaktuar",
"memberWithOverdueTaskCount": "Anëtar me Detyrë të Vonuar",
"memberWithOverdueTaskCountPlural": "Anëtarë me Detyra të Vonuara",
"teamsText": "Ekipet",
"nameColumn": "Emri",
"projectsColumn": "Projektet",
"membersColumn": "Anëtarët"
}

View File

@@ -1,59 +0,0 @@
{
"exportButton": "Eksporto",
"membersButton": "Anëtarët",
"tasksButton": "Detyrat",
"searchByNameInputPlaceholder": "Kërko sipas emrit",
"overviewTab": "Përmbledhje",
"membersTab": "Anëtarët",
"tasksTab": "Detyrat",
"completedTasksText": "Detyrat e Përfunduara",
"incompleteTasksText": "Detyrat e Papërfunduara",
"overdueTasksText": "Detyrat e Vonuara",
"allocatedHoursText": "Orët e Alokuara",
"loggedHoursText": "Orët e Regjistruara",
"tasksText": "Detyrat",
"allText": "Të Gjitha",
"tasksByStatusText": "Detyrat Sipas Statusit",
"tasksByPriorityText": "Detyrat Sipas Prioritetit",
"tasksByDueDateText": "Detyrat Sipas Afatit",
"todoText": "Për Të Bërë",
"doingText": "Duke bërë",
"doneText": "E Përfunduar",
"lowText": "I Ulët",
"mediumText": "I Mesëm",
"highText": "I Lartë",
"completedText": "E Përfunduar",
"upcomingText": "Në Ardhje",
"overdueText": "E Vonuar",
"noDueDateText": "Pa Afat",
"nameColumn": "Emri",
"tasksCountColumn": "Numri i Detyrave",
"completedTasksColumn": "Detyrat e Përfunduara",
"incompleteTasksColumn": "Detyrat e Papërfunduara",
"overdueTasksColumn": "Detyrat e Vonuara",
"contributionColumn": "Kontributi",
"progressColumn": "Progresi",
"loggedTimeColumn": "Koha e Regjistruar",
"taskColumn": "Detyra",
"projectColumn": "Projekti",
"statusColumn": "Statusi",
"priorityColumn": "Prioriteti",
"phaseColumn": "Faza",
"dueDateColumn": "Afati",
"completedDateColumn": "Data e Përfundimit",
"estimatedTimeColumn": "Koha e Vlerësuar",
"overloggedTimeColumn": "Koha e Tepërt",
"completedOnColumn": "Përfunduar Më",
"daysOverdueColumn": "Ditë vonim",
"groupByText": "Grupo Sipas:",
"statusText": "Statusi",
"priorityText": "Prioriteti",
"phaseText": "Faza"
}

View File

@@ -1,35 +0,0 @@
{
"searchByNamePlaceholder": "Kërko sipas emrit",
"searchByCategoryPlaceholder": "Kërko sipas kategorisë",
"statusText": "Statusi",
"healthText": "Gjendja",
"categoryText": "Kategoria",
"projectManagerText": "Menaxheri i Projektit",
"showFieldsText": "Shfaq fushat",
"cancelledText": "Anuluar",
"blockedText": "E bllokuar",
"onHoldText": "Në pritje",
"proposedText": "E propozuar",
"inPlanningText": "Në planifikim",
"inProgressText": "Në progres",
"completedText": "E përfunduar",
"continuousText": "E vazhdueshme",
"notSetText": "Pa caktuar",
"needsAttentionText": "Kërkon vëmendje",
"atRiskText": "Në rrezik",
"goodText": "Në rregull",
"nameText": "Projekti",
"estimatedVsActualText": "Vlerësuar vs Aktual",
"tasksProgressText": "Progresi i detyrave",
"lastActivityText": "Aktiviteti i fundit",
"datesText": "Datat e Fillimit/Përfundimit",
"daysLeftText": "Ditë të mbetura/vonuar",
"projectHealthText": "Gjendja e projektit",
"projectUpdateText": "Përditësimi i projektit",
"clientText": "Klienti",
"teamText": "Ekipi"
}

View File

@@ -1,52 +0,0 @@
{
"projectCount": "Projekt",
"projectCountPlural": "Projekte",
"includeArchivedButton": "Përfshij Projektet e Arkivuara",
"exportButton": "Eksporto",
"excelButton": "Excel",
"projectColumn": "Projekti",
"estimatedVsActualColumn": "Vlerësuar vs Aktual",
"tasksProgressColumn": "Progresi i Detyrave",
"lastActivityColumn": "Aktiviteti i Fundit",
"statusColumn": "Statusi",
"datesColumn": "Data e Fillimit/Përfundimit",
"daysLeftColumn": "Ditë të Mbetura/Vonuar",
"projectHealthColumn": "Gjendja e Projektit",
"categoryColumn": "Kategoria",
"projectUpdateColumn": "Përditësimi i Projektit",
"clientColumn": "Klienti",
"teamColumn": "Ekipi",
"projectManagerColumn": "Menaxheri i Projektit",
"openButton": "Hap",
"estimatedText": "Vlerësuar",
"actualText": "Aktual",
"todoText": "Për të Bërë",
"doingText": "duke bërë",
"doneText": "E Përfunduar",
"cancelledText": "Anuluar",
"blockedText": "E Bllokuar",
"onHoldText": "Në Pritje",
"proposedText": "E Propozuar",
"inPlanningText": "Në Planifikim",
"inProgressText": "Në Progres",
"completedText": "E Përfunduar",
"continuousText": "E Vazhdueshme",
"daysLeftText": "ditë të mbetura",
"dayLeftText": "ditë e mbetur",
"daysOverdueText": "ditë vonuar",
"notSetText": "Pa Caktuar",
"needsAttentionText": "Kërkon Vëmendje",
"atRiskText": "Në Rrezik",
"goodText": "Në Rregull",
"setCategoryText": "Cakto Kategorinë",
"searchByNameInputPlaceholder": "Kërko sipas emrit",
"todayText": "Sot"
}

View File

@@ -1,8 +0,0 @@
{
"overview": "Përmbledhje",
"projects": "Projektet",
"members": "Anëtarët",
"timeReports": "Raportet e Kohës",
"estimateVsActual": "Vlerësimi vs Aktual",
"currentOrganizationTooltip": "Organizata aktuale"
}

View File

@@ -1,39 +0,0 @@
{
"today": "Sot",
"week": "Javë",
"month": "Muaj",
"settings": "Cilësimet",
"workingDays": "Ditët e punës",
"monday": "E hënë",
"tuesday": "E martë",
"wednesday": "E mërkurë",
"thursday": "E enjte",
"friday": "E premte",
"saturday": "E shtunë",
"sunday": "E diel",
"workingHours": "Orët e punës",
"hours": "Orë",
"saveButton": "Ruaj",
"totalAllocation": "Alokimi Total",
"timeLogged": "Koha e Regjistruar",
"remainingTime": "Koha e Mbetur",
"total": "Total",
"perDay": "Në Ditë",
"tasks": "detyra",
"startDate": "Data e Fillimit",
"endDate": "Data e Përfundimit",
"hoursPerDay": "Orë Në Ditë",
"totalHours": "Orë Totale",
"deleteButton": "Fshi",
"cancelButton": "Anulo",
"tabTitle": "Detyra pa Data Fillimi & Përfundimi",
"allocatedTime": "Koha e alokuar",
"totalLogged": "Total i Regjistruar",
"loggedBillable": "Regjistruar Fakturueshme",
"loggedNonBillable": "Regjistruar Jo Fakturueshme"
}

@@ -1,10 +0,0 @@
{
"categoryColumn": "Kategoria",
"deleteConfirmationTitle": "Jeni të sigurt?",
"deleteConfirmationOk": "Po",
"deleteConfirmationCancel": "Anulo",
"associatedTaskColumn": "Projektet e Lidhura",
"searchPlaceholder": "Kërko sipas emrit",
"emptyText": "Kategoritë mund të krijohen gjatë përditësimit ose krijimit të projekteve.",
"colorChangeTooltip": "Klikoni për të ndryshuar ngjyrën"
}

@@ -1,15 +0,0 @@
{
"title": "Ndrysho Fjalëkalimin",
"currentPassword": "Fjalëkalimi Aktual",
"newPassword": "Fjalëkalimi i Ri",
"confirmPassword": "Konfirmo Fjalëkalimin",
"currentPasswordPlaceholder": "Vendosni fjalëkalimin aktual",
"newPasswordPlaceholder": "Fjalëkalimi i Ri",
"confirmPasswordPlaceholder": "Konfirmo Fjalëkalimin",
"currentPasswordRequired": "Ju lutemi vendosni fjalëkalimin aktual!",
"newPasswordRequired": "Ju lutemi vendosni fjalëkalimin e ri!",
"passwordValidationError": "Fjalëkalimi duhet të përmbajë të paktën 8 karaktere, me një shkronjë të madhe, një numër dhe një simbol.",
"passwordMismatch": "Fjalëkalimet nuk përputhen!",
"passwordRequirements": "Fjalëkalimi i ri duhet të jetë së paku 8 karaktere, me një shkronjë të madhe, një numër dhe një simbol.",
"updateButton": "Përditëso Fjalëkalimin"
}

@@ -1,22 +0,0 @@
{
"nameColumn": "Emri",
"projectColumn": "Projekti",
"noProjectsAvailable": "Nuk ka projekte të disponueshme",
"deleteConfirmationTitle": "Jeni i sigurt?",
"deleteConfirmationOk": "Po",
"deleteConfirmationCancel": "Anulo",
"searchPlaceholder": "Kërko sipas emrit",
"createClient": "Krijo Klient",
"pinTooltip": "Klikoni për ta fiksuar në menynë kryesore",
"createClientDrawerTitle": "Krijo Klient",
"updateClientDrawerTitle": "Përditëso Klientin",
"nameLabel": "Emri",
"namePlaceholder": "Emri",
"nameRequiredError": "Ju lutemi shkruani një Emër",
"createButton": "Krijo",
"updateButton": "Përditëso",
"createClientSuccessMessage": "Klienti u krijua me sukses!",
"createClientErrorMessage": "Krijimi i klientit dështoi!",
"updateClientSuccessMessage": "Klienti u përditësua me sukses!",
"updateClientErrorMessage": "Përditësimi i klientit dështoi!"
}

@@ -1,20 +0,0 @@
{
"nameColumn": "Emri",
"deleteConfirmationTitle": "Jeni i sigurt?",
"deleteConfirmationOk": "Po",
"deleteConfirmationCancel": "Anulo",
"searchPlaceholder": "Kërko sipas emrit",
"createJobTitleButton": "Krijo Titull Pune",
"pinTooltip": "Klikoni për ta fiksuar në menynë kryesore",
"createJobTitleDrawerTitle": "Krijo Titull Pune",
"updateJobTitleDrawerTitle": "Përditëso Titullin e Punës",
"nameLabel": "Emri",
"namePlaceholder": "Emri",
"nameRequiredError": "Ju lutemi shkruani një Emër",
"createButton": "Krijo",
"updateButton": "Përditëso",
"createJobTitleSuccessMessage": "Titulli i punës u krijua me sukses!",
"createJobTitleErrorMessage": "Krijimi i titullit të punës dështoi!",
"updateJobTitleSuccessMessage": "Titulli i punës u përditësua me sukses!",
"updateJobTitleErrorMessage": "Përditësimi i titullit të punës dështoi!"
}

@@ -1,11 +0,0 @@
{
"labelColumn": "Etiketa",
"deleteConfirmationTitle": "Jeni i sigurt?",
"deleteConfirmationOk": "Po",
"deleteConfirmationCancel": "Anulo",
"associatedTaskColumn": "Numri i Detyrave të Lidhura",
"searchPlaceholder": "Kërko sipas emrit",
"emptyText": "Etiketat mund të krijohen gjatë përditësimit ose krijimit të detyrave.",
"pinTooltip": "Klikoni për ta fiksuar në menynë kryesore",
"colorChangeTooltip": "Klikoni për të ndryshuar ngjyrën"
}

@@ -1,7 +0,0 @@
{
"language": "Gjuha",
"language_required": "Gjuha është e detyrueshme",
"time_zone": "Zona kohore",
"time_zone_required": "Zona kohore është e detyrueshme",
"save_changes": "Ruaj Ndryshimet"
}

@@ -1,11 +0,0 @@
{
"title": "Cilësimet e Njoftimeve",
"emailTitle": "Më dërgo njoftime me email",
"emailDescription": "Kjo përfshin caktimet e reja të detyrave",
"dailyDigestTitle": "Më dërgo një përmbledhje ditore",
"dailyDigestDescription": "Çdo mbrëmje, do të merrni një përmbledhje të aktivitetit të fundit në detyra.",
"popupTitle": "Shfaq njoftimet në kompjuterin tim kur Worklenz është i hapur",
"popupDescription": "Njoftimet e shfaqura mund të çaktivizohen nga shfletuesi juaj. Ndryshoni cilësimet e shfletuesit për t'i lejuar ato.",
"unreadItemsTitle": "Shfaq numrin e artikujve të palexuar",
"unreadItemsDescription": "Do të shihni numërimin për çdo njoftim."
}

Some files were not shown because too many files have changed in this diff.