Compare commits
10 Commits
chore/adde...feature/me

| Author | SHA1 | Date |
|---|---|---|
|  | c1067d87fe |  |
|  | 97feef5982 |  |
|  | 76c92b1cc6 |  |
|  | 67c62fc69b |  |
|  | 14d8f43001 |  |
|  | 3b59a8560b |  |
|  | 819252cedd |  |
|  | 1dade05f54 |  |
|  | 34613e5e0c |  |
|  | a8b20680e5 |  |
@@ -1,15 +0,0 @@
{
  "permissions": {
    "allow": [
      "Bash(find:*)",
      "Bash(npm run build:*)",
      "Bash(npm run type-check:*)",
      "Bash(npm run:*)",
      "Bash(move:*)",
      "Bash(mv:*)",
      "Bash(grep:*)",
      "Bash(rm:*)"
    ],
    "deny": []
  }
}
@@ -1,237 +0,0 @@
|
||||
---
|
||||
alwaysApply: true
|
||||
---
|
||||
# Ant Design Import Rules for Worklenz
|
||||
|
||||
## 🚨 CRITICAL: Always Use Centralized Imports
|
||||
|
||||
**NEVER import Ant Design components directly from 'antd' or '@ant-design/icons'**
|
||||
|
||||
### ✅ Correct Import Pattern
|
||||
```typescript
|
||||
import { Button, Input, Select, EditOutlined, PlusOutlined } from '@antd-imports';
|
||||
// or
|
||||
import { Button, Input, Select, EditOutlined, PlusOutlined } from '@/shared/antd-imports';
|
||||
```
|
||||
|
||||
### ❌ Forbidden Import Patterns
|
||||
```typescript
|
||||
// NEVER do this:
|
||||
import { Button, Input, Select } from 'antd';
|
||||
import { EditOutlined, PlusOutlined } from '@ant-design/icons';
|
||||
```
|
||||
|
||||
## Why This Rule Exists
|
||||
|
||||
### Benefits of Centralized Imports:
|
||||
- **Better Tree-Shaking**: Optimized bundle size through centralized management
|
||||
- **Consistent React Context**: Proper context sharing across components
|
||||
- **Type Safety**: Centralized TypeScript definitions
|
||||
- **Maintainability**: Single source of truth for all Ant Design imports
|
||||
- **Performance**: Reduced bundle size and improved loading times
|
||||
|
||||
## What's Available in `@antd-imports`
|
||||
|
||||
### Core Components
|
||||
- **Layout**: Layout, Row, Col, Flex, Divider, Space
|
||||
- **Navigation**: Menu, Tabs, Breadcrumb, Pagination
|
||||
- **Data Entry**: Input, Select, DatePicker, TimePicker, Form, Checkbox, InputNumber
|
||||
- **Data Display**: Table, List, Card, Tag, Avatar, Badge, Progress, Statistic
|
||||
- **Feedback**: Modal, Drawer, Alert, Message, Notification, Spin, Skeleton, Result
|
||||
- **Other**: Button, Typography, Tooltip, Popconfirm, Dropdown, ConfigProvider
|
||||
|
||||
### Icons
|
||||
Common icons including: EditOutlined, DeleteOutlined, PlusOutlined, MoreOutlined, CheckOutlined, CloseOutlined, CalendarOutlined, UserOutlined, TeamOutlined, and many more.
|
||||
|
||||
### Utilities
|
||||
- **appMessage**: Centralized message utility
|
||||
- **appNotification**: Centralized notification utility
|
||||
- **antdConfig**: Default Ant Design configuration
|
||||
- **taskManagementAntdConfig**: Task-specific configuration
|
||||
|
||||
## Implementation Guidelines
|
||||
|
||||
### When Creating New Components:
|
||||
1. **Always** import from `@/shared/antd-imports`
|
||||
2. Use `appMessage` and `appNotification` for user feedback
|
||||
3. Apply `antdConfig` for consistent styling
|
||||
4. Use `taskManagementAntdConfig` for task-related components
|
||||
|
||||
### When Refactoring Existing Code:
|
||||
1. Replace direct 'antd' imports with `@/shared/antd-imports`
|
||||
2. Replace direct '@ant-design/icons' imports with `@/shared/antd-imports`
|
||||
3. Update any custom message/notification calls to use the utilities
|
||||
|
||||
### File Location
|
||||
The centralized import file is located at: `worklenz-frontend/src/shared/antd-imports.ts`
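The contents of that file are not shown here; a minimal sketch of what such a re-export module might contain, assuming it simply re-exports from 'antd' and '@ant-design/icons' and wraps the feedback APIs (names beyond those listed in this document are assumptions):

```typescript
// worklenz-frontend/src/shared/antd-imports.ts — hypothetical sketch, not the actual file.
// Re-export the components and icons the app is allowed to use.
export { Button, Input, Select, Form, Modal, DatePicker, Table, Tooltip } from 'antd';
export { EditOutlined, DeleteOutlined, PlusOutlined, UserOutlined } from '@ant-design/icons';

// Centralized feedback utilities wrapping the antd static APIs.
import { message, notification } from 'antd';
export const appMessage = message;
export const appNotification = notification;

// Shared ConfigProvider defaults (shape assumed).
export const antdConfig = {
  componentSize: 'middle' as const,
};
```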
|
||||
|
||||
## Examples
|
||||
|
||||
### Component Creation
|
||||
```typescript
|
||||
import React from 'react';
|
||||
import { Button, Input, Modal, EditOutlined, appMessage } from '@antd-imports';
|
||||
|
||||
const MyComponent = () => {
|
||||
const handleClick = () => {
|
||||
appMessage.success('Operation completed!');
|
||||
};
|
||||
|
||||
return (
|
||||
<Button icon={<EditOutlined />} onClick={handleClick}>
|
||||
Edit Item
|
||||
</Button>
|
||||
);
|
||||
};
|
||||
```
|
||||
|
||||
### Form Implementation
|
||||
```typescript
|
||||
import { Form, Input, Select, Button, DatePicker } from '@antd-imports';
|
||||
|
||||
const MyForm = () => {
|
||||
return (
|
||||
<Form layout="vertical">
|
||||
<Form.Item label="Name" name="name">
|
||||
<Input />
|
||||
</Form.Item>
|
||||
<Form.Item label="Type" name="type">
|
||||
<Select options={options} />
|
||||
</Form.Item>
|
||||
<Form.Item label="Date" name="date">
|
||||
<DatePicker />
|
||||
</Form.Item>
|
||||
</Form>
|
||||
);
|
||||
};
|
||||
```
|
||||
|
||||
## Enforcement
|
||||
|
||||
This rule is **MANDATORY** and applies to:
|
||||
- All new component development
|
||||
- All code refactoring
|
||||
- All bug fixes
|
||||
- All feature implementations
|
||||
|
||||
**Violations will result in code review rejection.**
|
||||
|
||||
### File Path:
|
||||
The centralized file is located at: `worklenz-frontend/src/shared/antd-imports.ts`
|
||||
# Ant Design Import Rules for Worklenz
|
||||
|
||||
## 🚨 CRITICAL: Always Use Centralized Imports
|
||||
|
||||
**NEVER import Ant Design components directly from 'antd' or '@ant-design/icons'**
|
||||
|
||||
### ✅ Correct Import Pattern
|
||||
```typescript
|
||||
import { Button, Input, Select, EditOutlined, PlusOutlined } from '@antd-imports';
|
||||
// or
|
||||
import { Button, Input, Select, EditOutlined, PlusOutlined } from '@/shared/antd-imports';
|
||||
```
|
||||
|
||||
### ❌ Forbidden Import Patterns
|
||||
```typescript
|
||||
// NEVER do this:
|
||||
import { Button, Input, Select } from 'antd';
|
||||
import { EditOutlined, PlusOutlined } from '@ant-design/icons';
|
||||
```
|
||||
|
||||
## Why This Rule Exists
|
||||
|
||||
### Benefits of Centralized Imports:
|
||||
- **Better Tree-Shaking**: Optimized bundle size through centralized management
|
||||
- **Consistent React Context**: Proper context sharing across components
|
||||
- **Type Safety**: Centralized TypeScript definitions
|
||||
- **Maintainability**: Single source of truth for all Ant Design imports
|
||||
- **Performance**: Reduced bundle size and improved loading times
|
||||
|
||||
## What's Available in `@antd-imports`
|
||||
|
||||
### Core Components
|
||||
- **Layout**: Layout, Row, Col, Flex, Divider, Space
|
||||
- **Navigation**: Menu, Tabs, Breadcrumb, Pagination
|
||||
- **Data Entry**: Input, Select, DatePicker, TimePicker, Form, Checkbox, InputNumber
|
||||
- **Data Display**: Table, List, Card, Tag, Avatar, Badge, Progress, Statistic
|
||||
- **Feedback**: Modal, Drawer, Alert, Message, Notification, Spin, Skeleton, Result
|
||||
- **Other**: Button, Typography, Tooltip, Popconfirm, Dropdown, ConfigProvider
|
||||
|
||||
### Icons
|
||||
Common icons including: EditOutlined, DeleteOutlined, PlusOutlined, MoreOutlined, CheckOutlined, CloseOutlined, CalendarOutlined, UserOutlined, TeamOutlined, and many more.
|
||||
|
||||
### Utilities
|
||||
- **appMessage**: Centralized message utility
|
||||
- **appNotification**: Centralized notification utility
|
||||
- **antdConfig**: Default Ant Design configuration
|
||||
- **taskManagementAntdConfig**: Task-specific configuration
|
||||
|
||||
## Implementation Guidelines
|
||||
|
||||
### When Creating New Components:
|
||||
1. **Always** import from `@antd-imports` or `@/shared/antd-imports`
|
||||
2. Use `appMessage` and `appNotification` for user feedback
|
||||
3. Apply `antdConfig` for consistent styling
|
||||
4. Use `taskManagementAntdConfig` for task-related components
|
||||
|
||||
### When Refactoring Existing Code:
|
||||
1. Replace direct 'antd' imports with `@antd-imports`
|
||||
2. Replace direct '@ant-design/icons' imports with `@antd-imports`
|
||||
3. Update any custom message/notification calls to use the utilities
|
||||
|
||||
### File Location
|
||||
The centralized import file is located at: `worklenz-frontend/src/shared/antd-imports.ts`
|
||||
|
||||
## Examples
|
||||
|
||||
### Component Creation
|
||||
```typescript
|
||||
import React from 'react';
|
||||
import { Button, Input, Modal, EditOutlined, appMessage } from '@antd-imports';
|
||||
|
||||
const MyComponent = () => {
|
||||
const handleClick = () => {
|
||||
appMessage.success('Operation completed!');
|
||||
};
|
||||
|
||||
return (
|
||||
<Button icon={<EditOutlined />} onClick={handleClick}>
|
||||
Edit Item
|
||||
</Button>
|
||||
);
|
||||
};
|
||||
```
|
||||
|
||||
### Form Implementation
|
||||
```typescript
|
||||
import { Form, Input, Select, Button, DatePicker } from '@antd-imports';
|
||||
|
||||
const MyForm = () => {
|
||||
return (
|
||||
<Form layout="vertical">
|
||||
<Form.Item label="Name" name="name">
|
||||
<Input />
|
||||
</Form.Item>
|
||||
<Form.Item label="Type" name="type">
|
||||
<Select options={options} />
|
||||
</Form.Item>
|
||||
<Form.Item label="Date" name="date">
|
||||
<DatePicker />
|
||||
</Form.Item>
|
||||
</Form>
|
||||
);
|
||||
};
|
||||
```
|
||||
|
||||
## Enforcement
|
||||
|
||||
This rule is **MANDATORY** and applies to:
|
||||
- All new component development
|
||||
- All code refactoring
|
||||
- All bug fixes
|
||||
- All feature implementations
|
||||
|
||||
**Violations will result in code review rejection.**
|
||||
|
||||
### File Path:
|
||||
The centralized file is located at: `worklenz-frontend/src/shared/antd-imports.ts`
|
||||
416  README.md
@@ -1,29 +1,11 @@
|
||||
<h1 align="center">
|
||||
<a href="https://worklenz.com" target="_blank" rel="noopener noreferrer">
|
||||
<img src="https://s3.us-west-2.amazonaws.com/worklenz.com/assets/icon-144x144.png" alt="Worklenz Logo" width="75">
|
||||
<img src="https://app.worklenz.com/assets/icons/icon-144x144.png" alt="Worklenz Logo" width="75">
|
||||
</a>
|
||||
<br>
|
||||
Worklenz
|
||||
</h1>
|
||||
|
||||
<p align="center">
|
||||
<a href="https://github.com/Worklenz/worklenz/blob/main/LICENSE">
|
||||
<img src="https://img.shields.io/badge/license-AGPL--3.0-blue.svg" alt="License">
|
||||
</a>
|
||||
<a href="https://github.com/Worklenz/worklenz/releases">
|
||||
<img src="https://img.shields.io/github/v/release/Worklenz/worklenz" alt="Release">
|
||||
</a>
|
||||
<a href="https://github.com/Worklenz/worklenz/stargazers">
|
||||
<img src="https://img.shields.io/github/stars/Worklenz/worklenz" alt="Stars">
|
||||
</a>
|
||||
<a href="https://github.com/Worklenz/worklenz/network/members">
|
||||
<img src="https://img.shields.io/github/forks/Worklenz/worklenz" alt="Forks">
|
||||
</a>
|
||||
<a href="https://github.com/Worklenz/worklenz/issues">
|
||||
<img src="https://img.shields.io/github/issues/Worklenz/worklenz" alt="Issues">
|
||||
</a>
|
||||
</p>
|
||||
|
||||
<p align="center">
|
||||
<a href="https://worklenz.com/task-management/">Task Management</a> |
|
||||
<a href="https://worklenz.com/time-tracking/">Time Tracking</a> |
|
||||
@@ -45,24 +27,6 @@
|
||||
Worklenz is a project management tool designed to help organizations improve their efficiency. It provides a
|
||||
comprehensive solution for managing projects, tasks, and collaboration within teams.
|
||||
|
||||
## Table of Contents
|
||||
|
||||
- [Features](#features)
|
||||
- [Tech Stack](#tech-stack)
|
||||
- [Getting Started](#getting-started)
|
||||
- [Quick Start (Docker)](#-quick-start-docker---recommended)
|
||||
- [Manual Installation](#️-manual-installation-for-development)
|
||||
- [Deployment](#deployment)
|
||||
- [Local Development](#local-development-with-docker)
|
||||
- [Remote Server Deployment](#remote-server-deployment)
|
||||
- [Configuration](#configuration)
|
||||
- [MinIO Integration](#minio-integration)
|
||||
- [Security](#security)
|
||||
- [Analytics](#analytics)
|
||||
- [Screenshots](#screenshots)
|
||||
- [Contributing](#contributing)
|
||||
- [License](#license)
|
||||
|
||||
## Features
|
||||
|
||||
- **Project Planning**: Create and organize projects, assign tasks to team members.
|
||||
@@ -86,80 +50,42 @@ This repository contains the frontend and backend code for Worklenz.
|
||||
|
||||
## Getting Started
|
||||
|
||||
Choose your preferred setup method below. Docker is recommended for quick setup and testing.
|
||||
These instructions will help you set up and run the Worklenz project on your local machine for development and testing purposes.
|
||||
|
||||
### 🚀 Quick Start (Docker - Recommended)
|
||||
### Prerequisites
|
||||
|
||||
The fastest way to get Worklenz running locally with all dependencies included.
|
||||
|
||||
**Prerequisites:**
|
||||
- Docker and Docker Compose installed on your system
|
||||
- Git
|
||||
|
||||
**Steps:**
|
||||
|
||||
1. Clone the repository:
|
||||
```bash
|
||||
git clone https://github.com/Worklenz/worklenz.git
|
||||
cd worklenz
|
||||
```
|
||||
|
||||
2. Start the Docker containers:
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
3. Access the application:
|
||||
- **Frontend**: http://localhost:5000
|
||||
- **Backend API**: http://localhost:3000
|
||||
- **MinIO Console**: http://localhost:9001 (login: minioadmin/minioadmin)
|
||||
|
||||
4. To stop the services:
|
||||
```bash
|
||||
docker-compose down
|
||||
```
|
||||
|
||||
**Alternative startup methods:**
|
||||
- **Windows**: Run `start.bat`
|
||||
- **Linux/macOS**: Run `./start.sh`
|
||||
|
||||
**Video Guide**: For a visual walkthrough of the local Docker deployment process, check out our [step-by-step video guide](https://www.youtube.com/watch?v=AfwAKxJbqLg).
|
||||
|
||||
### 🛠️ Manual Installation (For Development)
|
||||
|
||||
For developers who want to run the services individually or customize the setup.
|
||||
|
||||
**Prerequisites:**
|
||||
- Node.js (version 18 or higher)
|
||||
- PostgreSQL (version 15 or higher)
|
||||
- PostgreSQL database
|
||||
- An S3-compatible storage service (like MinIO) or Azure Blob Storage
|
||||
|
||||
**Steps:**
|
||||
### Option 1: Manual Installation
|
||||
|
||||
1. Clone the repository:
|
||||
1. Clone the repository
|
||||
```bash
|
||||
git clone https://github.com/Worklenz/worklenz.git
|
||||
cd worklenz
|
||||
```
|
||||
|
||||
2. Set up environment variables:
|
||||
```bash
|
||||
cp worklenz-backend/.env.template worklenz-backend/.env
|
||||
# Update the environment variables with your configuration
|
||||
```
|
||||
2. Set up environment variables
|
||||
- Copy the example environment files
|
||||
```bash
|
||||
cp .env.example .env
|
||||
cp worklenz-backend/.env.example worklenz-backend/.env
|
||||
```
|
||||
- Update the environment variables with your configuration
|
||||
|
||||
3. Install dependencies:
|
||||
3. Install dependencies
|
||||
```bash
|
||||
# Backend dependencies
|
||||
# Install backend dependencies
|
||||
cd worklenz-backend
|
||||
npm install
|
||||
|
||||
# Frontend dependencies
|
||||
# Install frontend dependencies
|
||||
cd ../worklenz-frontend
|
||||
npm install
|
||||
```
|
||||
|
||||
4. Set up the database:
|
||||
4. Set up the database
|
||||
```bash
|
||||
# Create a PostgreSQL database named worklenz_db
|
||||
cd worklenz-backend
|
||||
@@ -175,47 +101,49 @@ psql -U your_username -d worklenz_db -f database/sql/2_dml.sql
|
||||
psql -U your_username -d worklenz_db -f database/sql/5_database_user.sql
|
||||
```
|
||||
|
||||
5. Start the development servers:
|
||||
5. Start the development servers
|
||||
```bash
|
||||
# Terminal 1: Start the backend
|
||||
# In one terminal, start the backend
|
||||
cd worklenz-backend
|
||||
npm run dev
|
||||
|
||||
# Terminal 2: Start the frontend
|
||||
# In another terminal, start the frontend
|
||||
cd worklenz-frontend
|
||||
npm run dev
|
||||
```
|
||||
|
||||
6. Access the application at http://localhost:5000
|
||||
|
||||
## Deployment
|
||||
### Option 2: Docker Setup
|
||||
|
||||
For local development, follow the [Quick Start (Docker)](#-quick-start-docker---recommended) section above.
|
||||
The project includes a fully configured Docker setup with:
|
||||
- Frontend React application
|
||||
- Backend server
|
||||
- PostgreSQL database
|
||||
- MinIO for S3-compatible storage
|
||||
|
||||
### Remote Server Deployment
|
||||
1. Clone the repository:
|
||||
```bash
|
||||
git clone https://github.com/Worklenz/worklenz.git
|
||||
cd worklenz
|
||||
```
|
||||
|
||||
When deploying to a remote server:
|
||||
2. Start the Docker containers (choose one option):
|
||||
|
||||
1. Set up the environment files with your server's hostname:
|
||||
```bash
|
||||
# For HTTP/WS
|
||||
./update-docker-env.sh your-server-hostname
|
||||
**Using Docker Compose directly**
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
# For HTTPS/WSS
|
||||
./update-docker-env.sh your-server-hostname true
|
||||
```
|
||||
3. The application will be available at:
|
||||
- Frontend: http://localhost:5000
|
||||
- Backend API: http://localhost:3000
|
||||
- MinIO Console: http://localhost:9001 (login with minioadmin/minioadmin)
|
||||
|
||||
2. Pull and run the latest Docker images:
|
||||
```bash
|
||||
docker-compose pull
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
3. Access the application through your server's hostname:
|
||||
- Frontend: http://your-server-hostname:5000
|
||||
- Backend API: http://your-server-hostname:3000
|
||||
|
||||
4. **Video Guide**: For a complete walkthrough of deploying Worklenz to a remote server, check out our [deployment video guide](https://www.youtube.com/watch?v=CAZGu2iOXQs&t=10s).
|
||||
4. To stop the services:
|
||||
```bash
|
||||
docker-compose down
|
||||
```
|
||||
|
||||
## Configuration
|
||||
|
||||
@@ -230,46 +158,16 @@ Worklenz requires several environment variables to be configured for proper oper
|
||||
|
||||
Please refer to the `.env.example` files for a full list of required variables.
|
||||
|
||||
The Docker setup uses environment variables to configure the services:
|
||||
|
||||
- **Frontend:**
|
||||
- `VITE_API_URL`: URL of the backend API (default: http://backend:3000 for container networking)
|
||||
- `VITE_SOCKET_URL`: WebSocket URL for real-time communication (default: ws://backend:3000)
|
||||
|
||||
- **Backend:**
|
||||
- Database connection parameters
|
||||
- Storage configuration
|
||||
- Other backend settings
|
||||
|
||||
For custom configuration, edit the `.env` file or the `update-docker-env.sh` script.
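On the frontend side, a hedged sketch of how these variables might be consumed, assuming Vite's standard `import.meta.env` mechanism for `VITE_`-prefixed variables (file name and fallbacks are illustrative only):

```typescript
// Hypothetical config module: read the Docker-provided variables at build time.
const apiUrl = import.meta.env.VITE_API_URL ?? 'http://localhost:3000';
const socketUrl = import.meta.env.VITE_SOCKET_URL ?? 'ws://localhost:3000';

export const config = { apiUrl, socketUrl };
```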
|
||||
|
||||
## MinIO Integration
|
||||
### MinIO Integration
|
||||
|
||||
The project uses MinIO as an S3-compatible object storage service, which provides an open-source alternative to AWS S3 for development and production.
|
||||
|
||||
### Working with MinIO
|
||||
|
||||
MinIO provides an S3-compatible API, so any code that works with S3 will work with MinIO by simply changing the endpoint URL. The backend has been configured to use MinIO by default, with no additional configuration required.
|
||||
|
||||
- **MinIO Console**: http://localhost:9001
|
||||
- Username: minioadmin
|
||||
- Password: minioadmin
|
||||
|
||||
- **Default Bucket**: worklenz-bucket (created automatically when the containers start)
|
||||
|
||||
### Backend Storage Configuration
|
||||
|
||||
The backend is pre-configured to use MinIO with the following settings:
|
||||
|
||||
```javascript
|
||||
// S3 credentials with MinIO defaults
|
||||
export const REGION = process.env.AWS_REGION || "us-east-1";
|
||||
export const BUCKET = process.env.AWS_BUCKET || "worklenz-bucket";
|
||||
export const S3_URL = process.env.S3_URL || "http://minio:9000/worklenz-bucket";
|
||||
export const S3_ACCESS_KEY_ID = process.env.AWS_ACCESS_KEY_ID || "minioadmin";
|
||||
export const S3_SECRET_ACCESS_KEY = process.env.AWS_SECRET_ACCESS_KEY || "minioadmin";
|
||||
```
|
||||
|
||||
### Security Considerations
|
||||
|
||||
For production deployments:
|
||||
@@ -280,32 +178,19 @@ For production deployments:
|
||||
4. Enable HTTPS for all public endpoints
|
||||
5. Review and update dependencies regularly
|
||||
|
||||
## Contributing
|
||||
|
||||
We welcome contributions from the community! If you'd like to contribute, please follow our [contributing guidelines](CONTRIBUTING.md).
|
||||
|
||||
## Security
|
||||
|
||||
If you believe you have found a security vulnerability in Worklenz, we encourage you to responsibly disclose this and not open a public issue. We will investigate all legitimate reports.
|
||||
|
||||
Email [info@worklenz.com](mailto:info@worklenz.com) to disclose any security vulnerabilities.
|
||||
|
||||
## Analytics
|
||||
## License
|
||||
|
||||
Worklenz uses Google Analytics to understand how the application is being used. This helps us improve the application and make better decisions about future development.
|
||||
|
||||
### What We Track
|
||||
- Anonymous usage statistics
|
||||
- Page views and navigation patterns
|
||||
- Feature usage
|
||||
- Browser and device information
|
||||
|
||||
### Privacy
|
||||
- Analytics is opt-in only
|
||||
- No personal information is collected
|
||||
- Users can opt-out at any time
|
||||
- Data is stored according to Google's privacy policy
|
||||
|
||||
### How to Opt-Out
|
||||
If you've previously opted in and want to opt-out:
|
||||
1. Clear your browser's local storage for the Worklenz domain
|
||||
2. Or click the "Decline" button in the analytics notice if it appears
|
||||
This project is licensed under the [MIT License](LICENSE).
|
||||
|
||||
## Screenshots
|
||||
|
||||
@@ -355,13 +240,206 @@ If you've previously opted in and want to opt-out:
|
||||
</a>
|
||||
</p>
|
||||
|
||||
## Contributing
|
||||
### Contributing
|
||||
|
||||
We welcome contributions from the community! If you'd like to contribute, please follow our [contributing guidelines](CONTRIBUTING.md).
|
||||
We welcome contributions from the community! If you'd like to contribute, please follow
|
||||
our [contributing guidelines](CONTRIBUTING.md).
|
||||
|
||||
## License
|
||||
### License
|
||||
|
||||
Worklenz is open source and released under the [GNU Affero General Public License Version 3 (AGPLv3)](LICENSE).
|
||||
|
||||
By contributing to Worklenz, you agree that your contributions will be licensed under its AGPL.
|
||||
|
||||
# Worklenz React
|
||||
|
||||
This repository contains the React version of Worklenz with a Docker setup for easy development and deployment.
|
||||
|
||||
## Getting Started with Docker
|
||||
|
||||
The project includes a fully configured Docker setup with:
|
||||
- Frontend React application
|
||||
- Backend server
|
||||
- PostgreSQL database
|
||||
- MinIO for S3-compatible storage
|
||||
|
||||
### Prerequisites
|
||||
|
||||
- Docker and Docker Compose installed on your system
|
||||
- Git
|
||||
|
||||
### Quick Start
|
||||
|
||||
1. Clone the repository:
|
||||
```bash
|
||||
git clone https://github.com/Worklenz/worklenz.git
|
||||
cd worklenz
|
||||
```
|
||||
|
||||
2. Start the Docker containers (choose one option):
|
||||
|
||||
**Option 1: Using the provided scripts (easiest)**
|
||||
- On Windows:
|
||||
```
|
||||
start.bat
|
||||
```
|
||||
- On Linux/macOS:
|
||||
```bash
|
||||
./start.sh
|
||||
```
|
||||
|
||||
**Option 2: Using Docker Compose directly**
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
3. The application will be available at:
|
||||
- Frontend: http://localhost:5000
|
||||
- Backend API: http://localhost:3000
|
||||
- MinIO Console: http://localhost:9001 (login with minioadmin/minioadmin)
|
||||
|
||||
4. To stop the services (choose one option):
|
||||
|
||||
**Option 1: Using the provided scripts**
|
||||
- On Windows:
|
||||
```
|
||||
stop.bat
|
||||
```
|
||||
- On Linux/macOS:
|
||||
```bash
|
||||
./stop.sh
|
||||
```
|
||||
|
||||
**Option 2: Using Docker Compose directly**
|
||||
```bash
|
||||
docker-compose down
|
||||
```
|
||||
|
||||
## MinIO Integration
|
||||
|
||||
The project uses MinIO as an S3-compatible object storage service, which provides an open-source alternative to AWS S3 for development and production.
|
||||
|
||||
### Working with MinIO
|
||||
|
||||
MinIO provides an S3-compatible API, so any code that works with S3 will work with MinIO by simply changing the endpoint URL. The backend has been configured to use MinIO by default, with no additional configuration required.
|
||||
|
||||
- **MinIO Console**: http://localhost:9001
|
||||
- Username: minioadmin
|
||||
- Password: minioadmin
|
||||
|
||||
- **Default Bucket**: worklenz-bucket (created automatically when the containers start)
|
||||
|
||||
### Backend Storage Configuration
|
||||
|
||||
The backend is pre-configured to use MinIO with the following settings:
|
||||
|
||||
```javascript
|
||||
// S3 credentials with MinIO defaults
|
||||
export const REGION = process.env.AWS_REGION || "us-east-1";
|
||||
export const BUCKET = process.env.AWS_BUCKET || "worklenz-bucket";
|
||||
export const S3_URL = process.env.S3_URL || "http://minio:9000/worklenz-bucket";
|
||||
export const S3_ACCESS_KEY_ID = process.env.AWS_ACCESS_KEY_ID || "minioadmin";
|
||||
export const S3_SECRET_ACCESS_KEY = process.env.AWS_SECRET_ACCESS_KEY || "minioadmin";
|
||||
```
|
||||
|
||||
The S3 client is initialized with special MinIO configuration:
|
||||
|
||||
```javascript
|
||||
const s3Client = new S3Client({
|
||||
region: REGION,
|
||||
credentials: {
|
||||
accessKeyId: S3_ACCESS_KEY_ID || "",
|
||||
secretAccessKey: S3_SECRET_ACCESS_KEY || "",
|
||||
},
|
||||
endpoint: getEndpointFromUrl(), // Extracts endpoint from S3_URL
|
||||
forcePathStyle: true, // Required for MinIO
|
||||
});
|
||||
```
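The `getEndpointFromUrl()` helper referenced above is not shown; a minimal sketch, assuming it simply strips the bucket path from `S3_URL`:

```typescript
// Hypothetical helper: derive the MinIO/S3 endpoint from S3_URL.
// e.g. "http://minio:9000/worklenz-bucket" -> "http://minio:9000"
const getEndpointFromUrl = (): string => {
  const url = new URL(process.env.S3_URL || "http://minio:9000/worklenz-bucket");
  return `${url.protocol}//${url.host}`;
};
```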
|
||||
|
||||
### Environment Configuration
|
||||
|
||||
The project uses the following environment file structure:
|
||||
|
||||
- **Frontend**:
|
||||
- `worklenz-frontend/.env.development` - Development environment variables
|
||||
- `worklenz-frontend/.env.production` - Production build variables
|
||||
|
||||
- **Backend**:
|
||||
- `worklenz-backend/.env` - Backend environment variables
|
||||
|
||||
### Setting Up Environment Files
|
||||
|
||||
The Docker environment script will create or overwrite all environment files:
|
||||
|
||||
```bash
|
||||
# For HTTP/WS
|
||||
./update-docker-env.sh your-hostname
|
||||
|
||||
# For HTTPS/WSS
|
||||
./update-docker-env.sh your-hostname true
|
||||
```
|
||||
|
||||
This script generates properly configured environment files for both development and production environments.
|
||||
|
||||
## Docker Deployment
|
||||
|
||||
### Local Development with Docker
|
||||
|
||||
1. Set up the environment files:
|
||||
```bash
|
||||
# For HTTP/WS
|
||||
./update-docker-env.sh
|
||||
|
||||
# For HTTPS/WSS
|
||||
./update-docker-env.sh localhost true
|
||||
```
|
||||
|
||||
2. Run the application using Docker Compose:
|
||||
```bash
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
3. Access the application:
|
||||
- Frontend: http://localhost:5000
|
||||
- Backend API: http://localhost:3000 (or https://localhost:3000 with SSL)
|
||||
|
||||
### Remote Server Deployment
|
||||
|
||||
When deploying to a remote server:
|
||||
|
||||
1. Set up the environment files with your server's hostname:
|
||||
```bash
|
||||
# For HTTP/WS
|
||||
./update-docker-env.sh your-server-hostname
|
||||
|
||||
# For HTTPS/WSS
|
||||
./update-docker-env.sh your-server-hostname true
|
||||
```
|
||||
|
||||
This ensures that the frontend correctly connects to the backend API.
|
||||
|
||||
2. Pull and run the latest Docker images:
|
||||
```bash
|
||||
docker-compose pull
|
||||
docker-compose up -d
|
||||
```
|
||||
|
||||
3. Access the application through your server's hostname:
|
||||
- Frontend: http://your-server-hostname:5000
|
||||
- Backend API: http://your-server-hostname:3000
|
||||
|
||||
### Environment Configuration
|
||||
|
||||
The Docker setup uses environment variables to configure the services:
|
||||
|
||||
- Frontend:
|
||||
- `VITE_API_URL`: URL of the backend API (default: http://backend:3000 for container networking)
|
||||
- `VITE_SOCKET_URL`: WebSocket URL for real-time communication (default: ws://backend:3000)
|
||||
|
||||
- Backend:
|
||||
- Database connection parameters
|
||||
- Storage configuration
|
||||
- Other backend settings
|
||||
|
||||
For custom configuration, edit the `.env` file or the `update-docker-env.sh` script.
|
||||
|
||||
|
||||
@@ -4,7 +4,7 @@ Getting started with development is a breeze! Follow these steps and you'll be c
|
||||
|
||||
## Requirements
|
||||
|
||||
- Node.js version v20 or newer - [Node.js](https://nodejs.org/en/download/)
|
||||
- Node.js version v16 or newer - [Node.js](https://nodejs.org/en/download/)
|
||||
- PostgreSQL version v15 or newer - [PostgreSQL](https://www.postgresql.org/download/)
|
||||
- S3-compatible storage (like MinIO) for file storage
|
||||
|
||||
@@ -38,7 +38,7 @@ Getting started with development is a breeze! Follow these steps and you'll be c
|
||||
npm start
|
||||
```
|
||||
|
||||
4. Navigate to [http://localhost:5173](http://localhost:5173) (development server)
|
||||
4. Navigate to [http://localhost:5173](http://localhost:5173)
|
||||
|
||||
### Backend installation
|
||||
|
||||
@@ -126,7 +126,7 @@ For an easier setup, you can use Docker and Docker Compose:
|
||||
```
|
||||
|
||||
3. Access the application:
|
||||
- Frontend: http://localhost:5000 (Docker production build)
|
||||
- Frontend: http://localhost:5000
|
||||
- Backend API: http://localhost:3000
|
||||
- MinIO Console: http://localhost:9001 (login with minioadmin/minioadmin)
|
||||
|
||||
|
||||
16  backup.sh
@@ -1,16 +0,0 @@
#!/bin/bash
set -eu

# Adjust these as needed:
CONTAINER=worklenz_db
DB_NAME=worklenz_db
DB_USER=postgres
BACKUP_DIR=./pg_backups
mkdir -p "$BACKUP_DIR"

timestamp=$(date +%Y-%m-%d_%H-%M-%S)
outfile="${BACKUP_DIR}/${DB_NAME}_${timestamp}.sql"
echo "Creating backup $outfile ..."

docker exec -t "$CONTAINER" pg_dump -U "$DB_USER" -d "$DB_NAME" > "$outfile"
echo "Backup saved to $outfile"
@@ -7,8 +7,8 @@ services:
|
||||
ports:
|
||||
- "5000:5000"
|
||||
depends_on:
|
||||
- backend
|
||||
restart: unless-stopped
|
||||
backend:
|
||||
condition: service_started
|
||||
env_file:
|
||||
- ./worklenz-frontend/.env.production
|
||||
networks:
|
||||
@@ -26,7 +26,6 @@ services:
|
||||
condition: service_healthy
|
||||
minio:
|
||||
condition: service_started
|
||||
restart: unless-stopped
|
||||
env_file:
|
||||
- ./worklenz-backend/.env
|
||||
networks:
|
||||
@@ -38,7 +37,6 @@ services:
|
||||
ports:
|
||||
- "9000:9000"
|
||||
- "9001:9001"
|
||||
restart: unless-stopped
|
||||
environment:
|
||||
MINIO_ROOT_USER: ${S3_ACCESS_KEY_ID:-minioadmin}
|
||||
MINIO_ROOT_PASSWORD: ${S3_SECRET_ACCESS_KEY:-minioadmin}
|
||||
@@ -54,14 +52,13 @@ services:
|
||||
container_name: worklenz_createbuckets
|
||||
depends_on:
|
||||
- minio
|
||||
restart: on-failure
|
||||
entrypoint: >
|
||||
/bin/sh -c '
|
||||
echo "Waiting for MinIO to start...";
|
||||
sleep 15;
|
||||
for i in 1 2 3 4 5; do
|
||||
echo "Attempt $i to connect to MinIO...";
|
||||
if /usr/bin/mc alias set myminio http://minio:9000 minioadmin minioadmin; then
|
||||
if /usr/bin/mc config host add myminio http://minio:9000 minioadmin minioadmin; then
|
||||
echo "Successfully connected to MinIO!";
|
||||
/usr/bin/mc mb --ignore-existing myminio/worklenz-bucket;
|
||||
/usr/bin/mc policy set public myminio/worklenz-bucket;
|
||||
@@ -83,79 +80,32 @@ services:
|
||||
POSTGRES_DB: ${DB_NAME:-worklenz_db}
|
||||
POSTGRES_PASSWORD: ${DB_PASSWORD:-password}
|
||||
healthcheck:
|
||||
test:
|
||||
[
|
||||
"CMD-SHELL",
|
||||
"pg_isready -d ${DB_NAME:-worklenz_db} -U ${DB_USER:-postgres}",
|
||||
]
|
||||
test: [ "CMD-SHELL", "pg_isready -d ${DB_NAME:-worklenz_db} -U ${DB_USER:-postgres}" ]
|
||||
interval: 10s
|
||||
timeout: 5s
|
||||
retries: 5
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
- worklenz
|
||||
volumes:
|
||||
- worklenz_postgres_data:/var/lib/postgresql/data
|
||||
- type: bind
|
||||
source: ./worklenz-backend/database/sql
|
||||
target: /docker-entrypoint-initdb.d/sql
|
||||
source: ./worklenz-backend/database
|
||||
target: /docker-entrypoint-initdb.d
|
||||
consistency: cached
|
||||
- type: bind
|
||||
source: ./worklenz-backend/database/migrations
|
||||
target: /docker-entrypoint-initdb.d/migrations
|
||||
consistency: cached
|
||||
- type: bind
|
||||
source: ./worklenz-backend/database/00_init.sh
|
||||
target: /docker-entrypoint-initdb.d/00_init.sh
|
||||
consistency: cached
|
||||
- type: bind
|
||||
source: ./pg_backups
|
||||
target: /docker-entrypoint-initdb.d/pg_backups
|
||||
command: >
|
||||
bash -c '
|
||||
if command -v apt-get >/dev/null 2>&1; then
|
||||
apt-get update && apt-get install -y dos2unix
|
||||
elif command -v apk >/dev/null 2>&1; then
|
||||
apk add --no-cache dos2unix
|
||||
fi
|
||||
|
||||
find /docker-entrypoint-initdb.d -type f -name "*.sh" -exec sh -c '"'"'
|
||||
for f; do
|
||||
dos2unix "$f" 2>/dev/null || true
|
||||
chmod +x "$f"
|
||||
done
|
||||
'"'"' sh {} +
|
||||
|
||||
exec docker-entrypoint.sh postgres
|
||||
'
|
||||
db-backup:
|
||||
image: postgres:15
|
||||
container_name: worklenz_db_backup
|
||||
environment:
|
||||
POSTGRES_USER: ${DB_USER:-postgres}
|
||||
POSTGRES_DB: ${DB_NAME:-worklenz_db}
|
||||
POSTGRES_PASSWORD: ${DB_PASSWORD:-password}
|
||||
depends_on:
|
||||
db:
|
||||
condition: service_healthy
|
||||
volumes:
|
||||
- ./pg_backups:/pg_backups # host directory for backup files
|
||||
# set up a bash loop to back up the data every 24h
|
||||
command: >
|
||||
bash -c 'while true; do
|
||||
sleep 86400;
|
||||
PGPASSWORD=$$POSTGRES_PASSWORD pg_dump -h worklenz_db -U $$POSTGRES_USER -d $$POSTGRES_DB \
|
||||
> /pg_backups/worklenz_db_$$(date +%Y-%m-%d_%H-%M-%S).sql;
|
||||
find /pg_backups -type f -name "*.sql" -mtime +30 -delete;
|
||||
done'
|
||||
restart: unless-stopped
|
||||
networks:
|
||||
- worklenz
|
||||
bash -c ' if command -v apt-get >/dev/null 2>&1; then
|
||||
apt-get update && apt-get install -y dos2unix
|
||||
elif command -v apk >/dev/null 2>&1; then
|
||||
apk add --no-cache dos2unix
|
||||
fi && find /docker-entrypoint-initdb.d -type f -name "*.sh" -exec sh -c '\''
|
||||
dos2unix "{}" 2>/dev/null || true
|
||||
chmod +x "{}"
|
||||
'\'' \; && exec docker-entrypoint.sh postgres '
|
||||
|
||||
volumes:
|
||||
worklenz_postgres_data:
|
||||
worklenz_minio_data:
|
||||
pgdata:
|
||||
|
||||
|
||||
networks:
|
||||
worklenz:
|
||||
|
||||
@@ -1,429 +0,0 @@
|
||||
# Enhanced Task Management: Technical Guide
|
||||
|
||||
## Overview
|
||||
The Enhanced Task Management system is a comprehensive React-based interface built on top of WorkLenz's existing task infrastructure. It provides a modern, grouped view with drag-and-drop functionality, bulk operations, and responsive design.
|
||||
|
||||
## Architecture
|
||||
|
||||
### Component Structure
|
||||
```
|
||||
src/components/task-management/
|
||||
├── TaskListBoard.tsx # Main container with DnD context
|
||||
├── TaskGroup.tsx # Individual group with collapse/expand
|
||||
├── TaskRow.tsx # Task display with rich metadata
|
||||
├── GroupingSelector.tsx # Grouping method switcher
|
||||
└── BulkActionBar.tsx # Bulk operations toolbar
|
||||
```
|
||||
|
||||
### Integration Points
|
||||
The system integrates with existing WorkLenz infrastructure:
|
||||
|
||||
- **Redux Store:** Uses `tasks.slice.ts` for state management
|
||||
- **Types:** Leverages existing TypeScript interfaces
|
||||
- **API Services:** Works with existing task API endpoints
|
||||
- **WebSocket:** Supports real-time updates via existing socket system
|
||||
|
||||
## Core Components
|
||||
|
||||
### TaskListBoard.tsx
|
||||
Main orchestrator component that provides:
|
||||
|
||||
- **DnD Context:** @dnd-kit drag-and-drop functionality
|
||||
- **State Management:** Redux integration for task data
|
||||
- **Event Handling:** Drag events and bulk operations
|
||||
- **Layout Structure:** Header controls and group container
|
||||
|
||||
#### Key Props
|
||||
```typescript
|
||||
interface TaskListBoardProps {
|
||||
projectId: string; // Required: Project identifier
|
||||
className?: string; // Optional: Additional CSS classes
|
||||
}
|
||||
```
|
||||
|
||||
#### Redux Selectors Used
|
||||
```typescript
|
||||
const {
|
||||
taskGroups, // ITaskListGroup[] - Grouped task data
|
||||
loadingGroups, // boolean - Loading state
|
||||
error, // string | null - Error state
|
||||
groupBy, // IGroupBy - Current grouping method
|
||||
search, // string | null - Search filter
|
||||
archived, // boolean - Show archived tasks
|
||||
} = useSelector((state: RootState) => state.taskReducer);
|
||||
```
|
||||
|
||||
### TaskGroup.tsx
|
||||
Renders individual task groups with:
|
||||
|
||||
- **Collapsible Headers:** Expand/collapse functionality
|
||||
- **Progress Indicators:** Visual completion progress
|
||||
- **Drop Zones:** Accept dropped tasks from other groups
|
||||
- **Group Statistics:** Task counts and completion rates
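The progress indicators and completion rates mentioned above could be derived directly from the group's tasks; a hedged sketch (helper name and task field are assumptions, not taken from the WorkLenz source):

```typescript
// Hypothetical helper: average completion percentage for a group's progress indicator.
const getGroupProgress = (tasks: { complete_ratio?: number }[]): number => {
  if (tasks.length === 0) return 0;
  const total = tasks.reduce((sum, t) => sum + (t.complete_ratio ?? 0), 0);
  return Math.round(total / tasks.length);
};
```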
|
||||
|
||||
#### Key Props
|
||||
```typescript
|
||||
interface TaskGroupProps {
|
||||
group: ITaskListGroup; // Group data with tasks
|
||||
projectId: string; // Project context
|
||||
currentGrouping: IGroupBy; // Current grouping mode
|
||||
selectedTaskIds: string[]; // Selected task IDs
|
||||
onAddTask?: (groupId: string) => void;
|
||||
onToggleCollapse?: (groupId: string) => void;
|
||||
}
|
||||
```
|
||||
|
||||
### TaskRow.tsx
|
||||
Individual task display featuring:
|
||||
|
||||
- **Rich Metadata:** Progress, assignees, labels, due dates
|
||||
- **Drag Handles:** Sortable within and between groups
|
||||
- **Selection:** Multi-select with checkboxes
|
||||
- **Subtask Support:** Expandable hierarchy display
|
||||
|
||||
#### Key Props
|
||||
```typescript
|
||||
interface TaskRowProps {
|
||||
task: IProjectTask; // Task data
|
||||
projectId: string; // Project context
|
||||
groupId: string; // Parent group ID
|
||||
currentGrouping: IGroupBy; // Current grouping mode
|
||||
isSelected: boolean; // Selection state
|
||||
isDragOverlay?: boolean; // Drag overlay rendering
|
||||
index?: number; // Position in group
|
||||
onSelect?: (taskId: string, selected: boolean) => void;
|
||||
onToggleSubtasks?: (taskId: string) => void;
|
||||
}
|
||||
```
|
||||
|
||||
## State Management
|
||||
|
||||
### Redux Integration
|
||||
The system uses existing WorkLenz Redux patterns:
|
||||
|
||||
```typescript
|
||||
// Primary slice used
|
||||
import {
|
||||
fetchTaskGroups, // Async thunk for loading data
|
||||
reorderTasks, // Update task order/group
|
||||
setGroup, // Change grouping method
|
||||
updateTaskStatus, // Update individual task status
|
||||
updateTaskPriority, // Update individual task priority
|
||||
// ... other existing actions
|
||||
} from '@/features/tasks/tasks.slice';
|
||||
```
|
||||
|
||||
### Data Flow
|
||||
1. **Component Mount:** `TaskListBoard` dispatches `fetchTaskGroups(projectId)`
|
||||
2. **Group Changes:** `setGroup(newGroupBy)` triggers data reorganization
|
||||
3. **Drag Operations:** `reorderTasks()` updates task positions and properties
|
||||
4. **Real-time Updates:** WebSocket events update Redux state automatically
|
||||
|
||||
## Drag and Drop Implementation
|
||||
|
||||
### DnD Kit Integration
|
||||
Uses @dnd-kit for modern, accessible drag-and-drop:
|
||||
|
||||
```typescript
|
||||
// Sensors for different input methods
|
||||
const sensors = useSensors(
|
||||
useSensor(PointerSensor, {
|
||||
activationConstraint: { distance: 8 }
|
||||
}),
|
||||
useSensor(KeyboardSensor, {
|
||||
coordinateGetter: sortableKeyboardCoordinates
|
||||
})
|
||||
);
|
||||
```
|
||||
|
||||
### Drag Event Handling
|
||||
```typescript
|
||||
const handleDragEnd = (event: DragEndEvent) => {
|
||||
const { active, over } = event;
|
||||
|
||||
// Determine source and target
|
||||
const sourceGroup = findTaskGroup(active.id);
|
||||
const targetGroup = findTargetGroup(over?.id);
|
||||
|
||||
// Update task arrays and dispatch changes
|
||||
dispatch(reorderTasks({
|
||||
activeGroupId: sourceGroup.id,
|
||||
overGroupId: targetGroup.id,
|
||||
fromIndex: sourceIndex,
|
||||
toIndex: targetIndex,
|
||||
task: movedTask,
|
||||
updatedSourceTasks,
|
||||
updatedTargetTasks,
|
||||
}));
|
||||
};
|
||||
```
|
||||
|
||||
### Smart Property Updates
|
||||
When tasks are moved between groups, properties update automatically:
|
||||
|
||||
- **Status Grouping:** Moving to "Done" group sets task status to "done"
|
||||
- **Priority Grouping:** Moving to "High" group sets task priority to "high"
|
||||
- **Phase Grouping:** Moving to "Testing" group sets task phase to "testing"
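A hedged sketch of how that group-to-property mapping might be applied when a drop completes (the grouping values follow `IGroupBy` as used in this document; the group identifier and returned field names are assumptions):

```typescript
// Hypothetical: derive the property update implied by the target group.
const getDropUpdate = (grouping: 'status' | 'priority' | 'phase', targetGroupId: string) => {
  switch (grouping) {
    case 'status':
      return { status: targetGroupId };   // e.g. dropped into the "Done" group
    case 'priority':
      return { priority: targetGroupId }; // e.g. dropped into the "High" group
    case 'phase':
      return { phase: targetGroupId };    // e.g. dropped into the "Testing" group
  }
};
```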
|
||||
|
||||
## Bulk Operations
|
||||
|
||||
### Selection State Management
|
||||
```typescript
|
||||
// Local state for task selection
|
||||
const [selectedTaskIds, setSelectedTaskIds] = useState<string[]>([]);
|
||||
|
||||
// Selection handlers
|
||||
const handleTaskSelect = (taskId: string, selected: boolean) => {
|
||||
if (selected) {
|
||||
setSelectedTaskIds(prev => [...prev, taskId]);
|
||||
} else {
|
||||
setSelectedTaskIds(prev => prev.filter(id => id !== taskId));
|
||||
}
|
||||
};
|
||||
```
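Range selection (Shift+click, described in the user guide) is not shown above; a hedged sketch of how it might extend the handler, assuming a flat ordered list of visible task IDs is available:

```typescript
import React from 'react';

// Hypothetical: extend selection from the last clicked task to the current one.
const handleRangeSelect = (
  visibleTaskIds: string[],
  lastClickedId: string,
  clickedId: string,
  setSelectedTaskIds: React.Dispatch<React.SetStateAction<string[]>>
) => {
  const start = visibleTaskIds.indexOf(lastClickedId);
  const end = visibleTaskIds.indexOf(clickedId);
  if (start === -1 || end === -1) return;
  const [from, to] = start < end ? [start, end] : [end, start];
  const range = visibleTaskIds.slice(from, to + 1);
  setSelectedTaskIds(prev => Array.from(new Set([...prev, ...range])));
};
```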
|
||||
|
||||
### Context-Aware Actions
|
||||
Bulk actions adapt to current grouping:
|
||||
|
||||
```typescript
|
||||
// Only show status changes when not grouped by status
|
||||
{currentGrouping !== 'status' && (
|
||||
<Dropdown overlay={statusMenu}>
|
||||
<Button>Change Status</Button>
|
||||
</Dropdown>
|
||||
)}
|
||||
```
|
||||
|
||||
## Performance Optimizations
|
||||
|
||||
### Memoized Selectors
|
||||
```typescript
|
||||
// Expensive group calculations are memoized
|
||||
const taskGroups = useMemo(() => {
|
||||
return createGroupsFromTasks(tasks, currentGrouping);
|
||||
}, [tasks, currentGrouping]);
|
||||
```
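`createGroupsFromTasks` is referenced above without its implementation; a minimal sketch, assuming tasks are bucketed by whichever field matches the current grouping (field and return shapes are assumptions):

```typescript
// Hypothetical: bucket tasks into groups keyed by the current grouping field.
const createGroupsFromTasks = (
  tasks: { id: string; status?: string; priority?: string; phase?: string }[],
  grouping: 'status' | 'priority' | 'phase'
) => {
  const groups = new Map<string, typeof tasks>();
  for (const task of tasks) {
    const key = task[grouping] ?? 'Unmapped';
    if (!groups.has(key)) groups.set(key, []);
    groups.get(key)!.push(task);
  }
  return Array.from(groups, ([name, groupTasks]) => ({ name, tasks: groupTasks }));
};
```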
|
||||
|
||||
### Virtual Scrolling Ready
|
||||
For large datasets, the system is prepared for react-window integration:
|
||||
|
||||
```typescript
|
||||
// Large group detection
|
||||
const shouldVirtualize = group.tasks.length > 100;
|
||||
|
||||
return shouldVirtualize ? (
|
||||
<VirtualizedTaskList tasks={group.tasks} />
|
||||
) : (
|
||||
<StandardTaskList tasks={group.tasks} />
|
||||
);
|
||||
```
|
||||
|
||||
### Optimistic Updates
|
||||
UI updates immediately while API calls process in background:
|
||||
|
||||
```typescript
|
||||
// Immediate UI update
|
||||
dispatch(updateTaskStatusOptimistically(taskId, newStatus));
|
||||
|
||||
// API call with rollback on error
|
||||
try {
|
||||
await updateTaskStatus(taskId, newStatus);
|
||||
} catch (error) {
|
||||
dispatch(rollbackTaskStatusUpdate(taskId));
|
||||
}
|
||||
```
|
||||
|
||||
## Responsive Design
|
||||
|
||||
### Breakpoint Strategy
|
||||
```css
|
||||
/* Mobile-first responsive design */
|
||||
.task-row {
|
||||
padding: 12px;
|
||||
}
|
||||
|
||||
@media (min-width: 768px) {
|
||||
.task-row {
|
||||
padding: 16px;
|
||||
}
|
||||
}
|
||||
|
||||
@media (min-width: 1024px) {
|
||||
.task-row {
|
||||
padding: 20px;
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Progressive Enhancement
|
||||
- **Mobile:** Essential information only
|
||||
- **Tablet:** Additional metadata visible
|
||||
- **Desktop:** Full feature set with optimal layout
|
||||
|
||||
## Accessibility
|
||||
|
||||
### ARIA Implementation
|
||||
```typescript
|
||||
// Proper ARIA labels for screen readers
|
||||
<div
|
||||
role="button"
|
||||
aria-label={`Move task ${task.name}`}
|
||||
tabIndex={0}
|
||||
{...dragHandleProps}
|
||||
>
|
||||
<DragOutlined />
|
||||
</div>
|
||||
```
|
||||
|
||||
### Keyboard Navigation
|
||||
- **Tab:** Navigate between elements
|
||||
- **Space:** Select/deselect tasks
|
||||
- **Enter:** Activate buttons
|
||||
- **Arrows:** Navigate sortable lists with keyboard sensor
|
||||
|
||||
### Focus Management
|
||||
```typescript
|
||||
// Maintain focus during dynamic updates
|
||||
useEffect(() => {
|
||||
if (shouldFocusTask) {
|
||||
taskRef.current?.focus();
|
||||
}
|
||||
}, [taskGroups]);
|
||||
```
|
||||
|
||||
## WebSocket Integration
|
||||
|
||||
### Real-time Updates
|
||||
The system subscribes to existing WorkLenz WebSocket events:
|
||||
|
||||
```typescript
|
||||
// Socket event handlers (existing WorkLenz patterns)
|
||||
socket.on('TASK_STATUS_CHANGED', (data) => {
|
||||
dispatch(updateTaskStatus(data));
|
||||
});
|
||||
|
||||
socket.on('TASK_PROGRESS_UPDATED', (data) => {
|
||||
dispatch(updateTaskProgress(data));
|
||||
});
|
||||
```
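Outgoing updates would travel the same channel; a hedged sketch of emitting a change so other clients receive it (the event name mirrors the handler above, but the payload shape is an assumption, not the actual WorkLenz socket contract):

```typescript
import type { Socket } from 'socket.io-client';

// Hypothetical: notify other clients after a local update succeeds.
const emitStatusChange = (socket: Socket, taskId: string, statusId: string) => {
  socket.emit('TASK_STATUS_CHANGED', { task_id: taskId, status_id: statusId });
};
```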
|
||||
|
||||
### Live Collaboration
|
||||
- Multiple users can work simultaneously
|
||||
- Changes appear in real-time
|
||||
- Conflict resolution through server-side validation
|
||||
|
||||
## API Integration
|
||||
|
||||
### Existing Endpoints Used
|
||||
```typescript
|
||||
// Uses existing WorkLenz API services
|
||||
import { tasksApiService } from '@/api/tasks/tasks.api.service';
|
||||
|
||||
// Task data fetching
|
||||
tasksApiService.getTaskList(config);
|
||||
|
||||
// Task updates
|
||||
tasksApiService.updateTask(taskId, changes);
|
||||
|
||||
// Bulk operations
|
||||
tasksApiService.bulkUpdateTasks(taskIds, changes);
|
||||
```
|
||||
|
||||
### Error Handling
|
||||
```typescript
|
||||
try {
|
||||
await dispatch(fetchTaskGroups(projectId));
|
||||
} catch (error) {
|
||||
// Display user-friendly error message
|
||||
message.error('Failed to load tasks. Please try again.');
|
||||
logger.error('Task loading error:', error);
|
||||
}
|
||||
```
|
||||
|
||||
## Testing Strategy
|
||||
|
||||
### Component Testing
|
||||
```typescript
|
||||
// Example test structure
|
||||
describe('TaskListBoard', () => {
|
||||
it('should render task groups correctly', () => {
|
||||
const mockTasks = generateMockTasks(10);
|
||||
render(<TaskListBoard projectId="test-project" />);
|
||||
|
||||
expect(screen.getByText('Tasks (10)')).toBeInTheDocument();
|
||||
});
|
||||
|
||||
it('should handle drag and drop operations', async () => {
|
||||
// Test drag and drop functionality
|
||||
});
|
||||
});
|
||||
```
|
||||
|
||||
### Integration Testing
|
||||
- Redux state management
|
||||
- API service integration
|
||||
- WebSocket event handling
|
||||
- Drag and drop operations
|
||||
|
||||
## Development Guidelines
|
||||
|
||||
### Code Organization
|
||||
- Follow existing WorkLenz patterns
|
||||
- Use TypeScript strictly
|
||||
- Implement proper error boundaries
|
||||
- Maintain accessibility standards
|
||||
|
||||
### Performance Considerations
|
||||
- Memoize expensive calculations
|
||||
- Implement virtual scrolling for large datasets
|
||||
- Debounce user input operations
|
||||
- Optimize re-render cycles
|
||||
|
||||
### Styling Standards
|
||||
- Use existing Ant Design components
|
||||
- Follow WorkLenz design system
|
||||
- Implement responsive breakpoints
|
||||
- Maintain dark mode compatibility
|
||||
|
||||
## Future Enhancements
|
||||
|
||||
### Planned Features
|
||||
- Custom column integration
|
||||
- Advanced filtering capabilities
|
||||
- Kanban board view
|
||||
- Enhanced time tracking
|
||||
- Task templates
|
||||
|
||||
### Extension Points
|
||||
The system is designed for easy extension:
|
||||
|
||||
```typescript
|
||||
// Plugin architecture ready
|
||||
interface TaskViewPlugin {
|
||||
name: string;
|
||||
component: React.ComponentType;
|
||||
supportedGroupings: IGroupBy[];
|
||||
}
|
||||
|
||||
const plugins: TaskViewPlugin[] = [
|
||||
{ name: 'kanban', component: KanbanView, supportedGroupings: ['status'] },
|
||||
{ name: 'timeline', component: TimelineView, supportedGroupings: ['phase'] },
|
||||
];
|
||||
```
|
||||
|
||||
## Deployment Considerations
|
||||
|
||||
### Bundle Size
|
||||
- Tree-shake unused dependencies
|
||||
- Code-split large components
|
||||
- Optimize asset loading
|
||||
|
||||
### Browser Compatibility
|
||||
- Modern browsers (ES2020+)
|
||||
- Graceful degradation for older browsers
|
||||
- Progressive enhancement approach
|
||||
|
||||
### Performance Monitoring
|
||||
- Track component render times
|
||||
- Monitor API response times
|
||||
- Measure user interaction latency
|
||||
@@ -1,275 +0,0 @@
|
||||
# Enhanced Task Management: User Guide
|
||||
|
||||
## What Is Enhanced Task Management?
|
||||
The Enhanced Task Management system provides a modern, grouped view of your tasks with advanced features like drag-and-drop, bulk operations, and dynamic grouping. This system builds on WorkLenz's existing task infrastructure while offering improved productivity and organization tools.
|
||||
|
||||
## Why Use Enhanced Task Management?
|
||||
- **Better Organization:** Group tasks by Status, Priority, or Phase for clearer project overview
|
||||
- **Increased Productivity:** Bulk operations let you update multiple tasks at once
|
||||
- **Intuitive Interface:** Drag-and-drop functionality makes task management feel natural
|
||||
- **Rich Task Display:** See progress, assignees, labels, and due dates at a glance
|
||||
- **Responsive Design:** Works seamlessly on desktop, tablet, and mobile devices
|
||||
|
||||
## Getting Started
|
||||
|
||||
### Accessing Enhanced Task Management
|
||||
1. Navigate to your project workspace
|
||||
2. Look for the enhanced task view option in your project interface
|
||||
3. The system will display your tasks grouped by the current grouping method (default: Status)
|
||||
|
||||
### Understanding the Interface
|
||||
The enhanced task management interface consists of several key areas:
|
||||
|
||||
- **Header Controls:** Task count, grouping selector, and action buttons
|
||||
- **Task Groups:** Collapsible sections containing related tasks
|
||||
- **Individual Tasks:** Rich task cards with metadata and actions
|
||||
- **Bulk Action Bar:** Appears when multiple tasks are selected (blue bar)
|
||||
|
||||
## Task Grouping
|
||||
|
||||
### Available Grouping Options
|
||||
You can organize your tasks using three different grouping methods:
|
||||
|
||||
#### 1. Status Grouping (Default)
|
||||
Groups tasks by their current status:
|
||||
- **To Do:** Tasks not yet started
|
||||
- **Doing:** Tasks currently in progress
|
||||
- **Done:** Completed tasks
|
||||
|
||||
#### 2. Priority Grouping
|
||||
Groups tasks by their priority level:
|
||||
- **Critical:** Highest priority, urgent tasks
|
||||
- **High:** Important tasks requiring attention
|
||||
- **Medium:** Standard priority tasks
|
||||
- **Low:** Tasks that can be addressed later
|
||||
|
||||
#### 3. Phase Grouping
|
||||
Groups tasks by project phases:
|
||||
- **Planning:** Tasks in the planning stage
|
||||
- **Development:** Implementation and development tasks
|
||||
- **Testing:** Quality assurance and testing tasks
|
||||
- **Deployment:** Release and deployment tasks
|
||||
|
||||
### Switching Between Groupings
|
||||
1. Locate the "Group by" dropdown in the header controls
|
||||
2. Select your preferred grouping method (Status, Priority, or Phase)
|
||||
3. Tasks will automatically reorganize into the new groups
|
||||
4. Your grouping preference is saved for future sessions
|
||||
|
||||
### Group Features
|
||||
Each task group includes:
|
||||
- **Color-coded headers** with visual indicators
|
||||
- **Task count badges** showing the number of tasks in each group
|
||||
- **Progress indicators** showing completion percentage
|
||||
- **Collapse/expand functionality** to hide or show group contents
|
||||
- **Add task buttons** to quickly create tasks in specific groups
|
||||
|
||||
## Drag and Drop
|
||||
|
||||
### Moving Tasks Within Groups
|
||||
1. Hover over a task to reveal the drag handle (⋮⋮ icon)
|
||||
2. Click and hold the drag handle
|
||||
3. Drag the task to your desired position within the same group
|
||||
4. Release to drop the task in its new position
|
||||
|
||||
### Moving Tasks Between Groups
|
||||
1. Click and hold the drag handle on any task
|
||||
2. Drag the task over a different group
|
||||
3. The target group will highlight to show it can accept the task
|
||||
4. Release to drop the task into the new group
|
||||
5. The task's properties (status, priority, or phase) will automatically update
|
||||
|
||||
### Drag and Drop Benefits
|
||||
- **Instant Updates:** Task properties change automatically when moved between groups
|
||||
- **Visual Feedback:** Clear indicators show where tasks can be dropped
|
||||
- **Keyboard Accessible:** Alternative keyboard controls for accessibility
|
||||
- **Mobile Friendly:** Touch-friendly drag operations on mobile devices
|
||||
|
||||
## Multi-Select and Bulk Operations
|
||||
|
||||
### Selecting Tasks
|
||||
You can select multiple tasks using several methods:
|
||||
|
||||
#### Individual Selection
|
||||
- Click the checkbox next to any task to select it
|
||||
- Click again to deselect
|
||||
|
||||
#### Range Selection
|
||||
- Select the first task in your desired range
|
||||
- Hold Shift and click the last task in the range
|
||||
- All tasks between the first and last will be selected
|
||||
|
||||
#### Multiple Selection
|
||||
- Hold Ctrl (or Cmd on Mac) while clicking tasks
|
||||
- This allows you to select non-consecutive tasks
|
||||
|
||||
### Bulk Actions
|
||||
When you have tasks selected, a blue bulk action bar appears with these options:
|
||||
|
||||
#### Change Status (when not grouped by Status)
|
||||
- Update the status of all selected tasks at once
|
||||
- Choose from available status options in your project
|
||||
|
||||
#### Set Priority (when not grouped by Priority)
|
||||
- Assign the same priority level to all selected tasks
|
||||
- Options include Critical, High, Medium, and Low
|
||||
|
||||
#### More Actions
|
||||
Additional bulk operations include:
|
||||
- **Assign to Member:** Add team members to multiple tasks
|
||||
- **Add Labels:** Apply labels to selected tasks
|
||||
- **Archive Tasks:** Move multiple tasks to archive
|
||||
|
||||
#### Delete Tasks
|
||||
- Permanently remove multiple tasks at once
|
||||
- Confirmation dialog prevents accidental deletions
|
||||
|
||||
### Bulk Action Tips
|
||||
- The bulk action bar only shows relevant options based on your current grouping
|
||||
- You can clear your selection at any time using the "Clear" button
|
||||
- Bulk operations provide immediate feedback and can be undone if needed
|
||||
|
||||
## Task Display Features
|
||||
|
||||
### Rich Task Information
|
||||
Each task displays comprehensive information:
|
||||
|
||||
#### Basic Information
|
||||
- **Task Key:** Unique identifier (e.g., PROJ-123)
|
||||
- **Task Name:** Clear, descriptive title
|
||||
- **Description:** Additional details when available
|
||||
|
||||
#### Visual Indicators
|
||||
- **Progress Bar:** Shows completion percentage (0-100%)
|
||||
- **Priority Indicator:** Color-coded dot showing task importance
|
||||
- **Status Color:** Left border color indicates current status
|
||||
|
||||
#### Team and Collaboration
|
||||
- **Assignee Avatars:** Profile pictures of assigned team members (up to 3 visible)
|
||||
- **Labels:** Color-coded tags for categorization
|
||||
- **Comment Count:** Number of comments and discussions
|
||||
- **Attachment Count:** Number of files attached to the task
|
||||
|
||||
#### Timing Information
|
||||
- **Due Dates:** When tasks are scheduled to complete
|
||||
- Red text: Overdue tasks
|
||||
- Orange text: Due today or within 3 days
|
||||
- Gray text: Future due dates
|
||||
- **Time Tracking:** Estimated vs. logged time when available
|
||||
|
||||
### Subtask Support
|
||||
Tasks with subtasks include additional features:
|
||||
|
||||
#### Expanding Subtasks
|
||||
- Click the "+X" button next to task names to expand subtasks
|
||||
- Subtasks appear indented below the parent task
|
||||
- Click "−X" to collapse subtasks
|
||||
|
||||
#### Subtask Progress
|
||||
- Parent task progress reflects completion of all subtasks
|
||||
- Individual subtask progress is visible when expanded
|
||||
- Subtask counts show total number of child tasks
|
||||
|
||||
## Advanced Features
|
||||
|
||||
### Real-time Updates
|
||||
- Changes made by team members appear instantly
|
||||
- Live collaboration with multiple users
|
||||
- WebSocket connections ensure data synchronization
|
||||
|

### Search and Filtering

- Use the existing project search and filter capabilities
- Enhanced task management respects your current filter settings
- Search results maintain the grouping organization

### Responsive Design

The interface adapts to different screen sizes:

#### Desktop (Large Screens)

- Full feature set with all metadata visible
- Optimal drag-and-drop experience
- Multi-column layouts where appropriate

#### Tablet (Medium Screens)

- Condensed but functional interface
- Touch-friendly interactions
- Simplified metadata display

#### Mobile (Small Screens)

- Stacked layout for easy navigation
- Large touch targets for selections
- Essential information prioritized

## Best Practices

### Organizing Your Tasks

1. **Choose the Right Grouping:** Select the grouping method that best fits your workflow
2. **Use Labels Consistently:** Apply meaningful labels for better categorization
3. **Keep Groups Balanced:** Avoid having too many tasks in a single group
4. **Regular Maintenance:** Review and update task organization periodically

### Collaboration Tips

1. **Clear Task Names:** Use descriptive titles that everyone understands
2. **Proper Assignment:** Assign tasks to appropriate team members
3. **Progress Updates:** Keep progress percentages current for accurate project tracking
4. **Use Comments:** Communicate about tasks using the comment system

### Productivity Techniques

1. **Batch Similar Operations:** Use bulk actions for efficiency
2. **Prioritize Effectively:** Use priority grouping during planning phases
3. **Track Progress:** Monitor completion rates using group progress indicators
4. **Plan Ahead:** Use due dates and time estimates for better scheduling

## Keyboard Shortcuts

### Navigation

- **Tab:** Move focus between elements
- **Enter:** Activate the focused button or link
- **Esc:** Close open dialogs or clear selections

### Selection

- **Space:** Select/deselect the focused task
- **Shift + Click:** Range selection
- **Ctrl + Click:** Multi-selection (Cmd + Click on Mac)

### Actions

- **Delete:** Remove selected tasks (with confirmation)
- **Ctrl + A:** Select all visible tasks (Cmd + A on Mac)

## Troubleshooting

### Common Issues

#### Tasks Not Moving Between Groups

- Ensure you have edit permissions for the tasks
- Check that you're dragging from the drag handle (⋮⋮ icon)
- Verify the target group allows the task type

#### Bulk Actions Not Working

- Confirm tasks are actually selected (checkboxes checked)
- Ensure you have the appropriate permissions
- Check that the action is available for your current grouping

#### Missing Task Information

- Some metadata may be hidden on smaller screens
- Try expanding to full screen or switching to the desktop view
- Check that the task has the required information (assignees, labels, etc.)

### Performance Tips

- For projects with hundreds of tasks, consider using filters to reduce the number of visible tasks
- Collapse groups you're not actively working with
- Clear selections when not performing bulk operations

## Getting Help

- Contact your workspace administrator for permission-related issues
- Check the main WorkLenz documentation for general task management help
- Report bugs or feature requests through your organization's support channels

## What's New

This enhanced task management system builds on WorkLenz's solid foundation while adding:

- Modern drag-and-drop interfaces
- Flexible grouping options
- Powerful bulk operation capabilities
- Rich visual task displays
- Mobile-responsive design
- Improved accessibility features

@@ -16,45 +16,24 @@ Recurring tasks are tasks that repeat automatically on a schedule you choose. Th

5. Save the task. It will now be created automatically based on your chosen schedule.

## Schedule Options

You can choose how often your task repeats. Here are the available options:

You can choose how often your task repeats. Here are the most common options:

- **Daily:** The task is created every day.
- **Weekly:** The task is created once a week. You can pick one or more days (e.g., every Monday and Thursday).
- **Monthly:** The task is created once a month. You have two options:
  - **On a specific date:** Choose a date from 1 to 28 (limited to 28 to ensure consistency across all months)
  - **On a specific day:** Choose a week (first, second, third, fourth, or last) and a day of the week
- **Every X Days:** The task is created every specified number of days (e.g., every 3 days)
- **Every X Weeks:** The task is created every specified number of weeks (e.g., every 2 weeks)
- **Every X Months:** The task is created every specified number of months (e.g., every 3 months)
- **Weekly:** The task is created once a week. You can pick the day (e.g., every Monday).
- **Monthly:** The task is created once a month. You can pick the date (e.g., the 1st of every month).
- **Weekdays:** The task is created every Monday to Friday.
- **Custom:** Set your own schedule, such as every 2 days, every 3 weeks, or only on certain days.

### Examples

- "Send team update" every Friday (weekly)
- "Submit expense report" on the 15th of each month (monthly, specific date)
- "Monthly team meeting" on the first Monday of each month (monthly, specific day)
- "Submit expense report" on the 1st of each month (monthly)
- "Check backups" every day (daily)
- "Review project status" every Monday and Thursday (weekly, multiple days)
- "Quarterly report" every 3 months (every X months)

## Future Task Creation

The system automatically creates tasks up to a certain point in the future to ensure timely scheduling:

- **Daily Tasks:** Created up to 7 days in advance
- **Weekly Tasks:** Created up to 2 weeks in advance
- **Monthly Tasks:** Created up to 2 months in advance
- **Every X Days/Weeks/Months:** Created up to 2 intervals in advance

This ensures that:

- You always have upcoming tasks visible in your schedule
- Tasks are created at appropriate intervals
- The system maintains a reasonable number of future tasks

- "Review project status" every Monday and Thursday (custom)

## Tips

- You can edit or stop a recurring task at any time.
- Assign team members and labels to recurring tasks for better organization.
- Check your task list regularly to see newly created recurring tasks.
- For monthly tasks, dates are limited to 1-28 to ensure the task occurs on the same date every month.
- Tasks are created automatically within the future limit window - you don't need to manually create them.
- If you need to see tasks further in the future, they will be created automatically as the current tasks are completed.

## Need Help?

If you have questions or need help setting up recurring tasks, contact your workspace admin or support team.

@@ -17,51 +17,6 @@ The recurring tasks cron job automates the creation of tasks based on predefined

3. Checks if a task for the next occurrence already exists.
4. Creates a new task if it does not exist and the next occurrence is within the allowed future window.

## Future Limit Logic

The system implements different future limits based on the schedule type to maintain an appropriate number of future tasks:

```typescript
const FUTURE_LIMITS = {
  daily: moment.duration(7, 'days'),
  weekly: moment.duration(2, 'weeks'),
  monthly: moment.duration(2, 'months'),
  every_x_days: (interval: number) => moment.duration(interval * 2, 'days'),
  every_x_weeks: (interval: number) => moment.duration(interval * 2, 'weeks'),
  every_x_months: (interval: number) => moment.duration(interval * 2, 'months')
};
```

### Implementation Details

- **Base Calculation** (a sketch of the `getFutureLimit` helper follows this list):

  ```typescript
  const futureLimit = moment(template.last_checked_at || template.created_at)
    .add(getFutureLimit(schedule.schedule_type, schedule.interval), 'days');
  ```

- **Task Creation Rules:**
  1. Only create tasks if the next occurrence is before the future limit
  2. Skip creation if a task already exists for that date
  3. Update `last_checked_at` after processing

- **Benefits:**
  - Prevents excessive task creation
  - Maintains system performance
  - Ensures timely task visibility
  - Allows for schedule modifications
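
For orientation, here is a minimal sketch of what the `getFutureLimit` helper referenced above could look like, assuming the `FUTURE_LIMITS` map shown earlier; the real implementation in `src/cron_jobs/recurring-tasks.ts` may differ in its details:

```typescript
import moment from 'moment';

// Assumed shape of the FUTURE_LIMITS map shown above: fixed durations for
// daily/weekly/monthly, interval-based factories for the every_x_* types.
type IntervalFactory = (interval: number) => moment.Duration;

const FUTURE_LIMITS: Record<string, moment.Duration | IntervalFactory> = {
  daily: moment.duration(7, 'days'),
  weekly: moment.duration(2, 'weeks'),
  monthly: moment.duration(2, 'months'),
  every_x_days: (interval: number) => moment.duration(interval * 2, 'days'),
  every_x_weeks: (interval: number) => moment.duration(interval * 2, 'weeks'),
  every_x_months: (interval: number) => moment.duration(interval * 2, 'months'),
};

// Resolve the future window for a schedule type. The fallback to the daily
// window for unknown types is an assumption, not taken from the source.
export function getFutureLimit(scheduleType: string, interval = 1): moment.Duration {
  const limit = FUTURE_LIMITS[scheduleType];
  if (typeof limit === 'function') return limit(interval);
  return limit ?? moment.duration(7, 'days');
}
```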

## Date Handling

- **Monthly Tasks:**
  - Dates are limited to 1-28 to ensure consistency across all months
  - This prevents issues with months having different numbers of days
  - No special handling is needed for February or months with 30/31 days
- **Weekly Tasks:**
  - Supports multiple days of the week (0-6, where 0 is Sunday)
  - Tasks are created for each selected day
- **Interval-based Tasks:**
  - Every X days/weeks/months from the last task's end date
  - Minimum interval is 1 day/week/month
  - No maximum limit, but tasks are only created up to the future limit

A schedule object that captures these rules is sketched below.
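
To make the rules above concrete, here is a hedged sketch of a schedule shape that would satisfy them; the field names are assumptions chosen for illustration and may not match the actual schedule columns:

```typescript
// Illustrative only - field names are assumed, not taken from the real schema.
type ScheduleType =
  | 'daily'
  | 'weekly'
  | 'monthly'
  | 'every_x_days'
  | 'every_x_weeks'
  | 'every_x_months';

interface RecurringScheduleSketch {
  schedule_type: ScheduleType;
  /** Weekly: selected days of week, 0-6 where 0 is Sunday. */
  days_of_week?: number[];
  /** Monthly (specific date): restricted to 1-28 for cross-month consistency. */
  date_of_month?: number;
  /** Monthly (specific day): which week of the month, used with weekday_of_month. */
  week_of_month?: 'first' | 'second' | 'third' | 'fourth' | 'last';
  /** Monthly (specific day): weekday 0-6, e.g. 1 for the first Monday. */
  weekday_of_month?: number;
  /** every_x_* types: interval in days/weeks/months, minimum 1. */
  interval?: number;
  last_checked_at?: string;
  last_created_task_end_date?: string;
}
```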

## Database Interactions

- **Templates and Schedules:**
  - Templates are stored in `task_recurring_templates`.

@@ -72,7 +27,6 @@ const FUTURE_LIMITS = {

- Assigns team members and labels by calling the appropriate functions/controllers.
- **State Tracking:**
  - Updates `last_checked_at` and `last_created_task_end_date` in the schedule after processing.
  - Maintains future limits based on the schedule type.

## Task Creation Process

1. **Fetch Templates:** Retrieve all templates and their associated schedules.

@@ -87,12 +41,10 @@ const FUTURE_LIMITS = {

- **Cron Expression:** Modify the `TIME` constant in the code to change the schedule (see the sketch below).
- **Task Template Structure:** Extend the template or schedule interfaces to support additional fields.
- **Task Creation Logic:** Customize the task creation process or add new assignment/labeling logic as needed.
- **Future Window:** Adjust the future limits by modifying the `FUTURE_LIMITS` configuration.
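
As a hedged illustration of how the `TIME` constant and the `RECURRING_JOBS_INTERVAL` value shown later in this changeset fit together, a scheduler wired with `node-cron` might look like this; the actual wiring in the backend may differ:

```typescript
import cron from 'node-cron';

// Assumed constant; mirrors the RECURRING_JOBS_INTERVAL default "0 11 */1 * 1-5"
// (11:00 on weekdays, server time). Overridable via the environment.
const TIME = process.env.RECURRING_JOBS_INTERVAL || '0 11 */1 * 1-5';

async function processRecurringTaskTemplates(): Promise<void> {
  // 1. Fetch templates and their schedules
  // 2. Compute the next occurrence and the future limit
  // 3. Skip existing tasks, create missing ones, update last_checked_at
}

// ENABLE_RECURRING_JOBS is the flag shown in the environment file below.
if (process.env.ENABLE_RECURRING_JOBS === 'true') {
  cron.schedule(TIME, () => {
    processRecurringTaskTemplates().catch(err => console.error(err));
  });
}
```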
## Error Handling

- Errors are logged using the `log_error` utility.
- The job continues processing other templates even if one fails (see the sketch below).
- Failed task creations are not retried automatically.
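
A minimal sketch of that per-template error isolation, assuming a `log_error(err)` helper as described above; the actual loop in `src/cron_jobs/recurring-tasks.ts` may differ:

```typescript
// Each template is processed independently, so one failing template cannot
// stop the whole cron run. There is no automatic retry for failures.
async function runRecurringTasksJob(templates: Array<{ id: string }>): Promise<void> {
  for (const template of templates) {
    try {
      await createTasksForTemplate(template); // assumed per-template worker
    } catch (err) {
      log_error(err); // logged and skipped; the loop moves on
    }
  }
}

// Assumed helpers, declared only so the sketch is self-contained.
declare function createTasksForTemplate(template: { id: string }): Promise<void>;
declare function log_error(error: unknown): void;
```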

## References

- Source: `src/cron_jobs/recurring-tasks.ts`

6 package-lock.json (generated)
@@ -1,6 +0,0 @@
|
||||
{
|
||||
"name": "worklenz",
|
||||
"lockfileVersion": 3,
|
||||
"requires": true,
|
||||
"packages": {}
|
||||
}
|
||||
@@ -1,41 +0,0 @@
|
||||
-- Test script to verify the sort order constraint fix
|
||||
|
||||
-- Test the helper function
|
||||
SELECT get_sort_column_name('status'); -- Should return 'status_sort_order'
|
||||
SELECT get_sort_column_name('priority'); -- Should return 'priority_sort_order'
|
||||
SELECT get_sort_column_name('phase'); -- Should return 'phase_sort_order'
|
||||
SELECT get_sort_column_name('members'); -- Should return 'member_sort_order'
|
||||
SELECT get_sort_column_name('unknown'); -- Should return 'status_sort_order' (default)
|
||||
|
||||
-- Test bulk update function (example - would need real project_id and task_ids)
|
||||
/*
|
||||
SELECT update_task_sort_orders_bulk(
|
||||
'[
|
||||
{"task_id": "example-uuid", "sort_order": 1, "status_id": "status-uuid"},
|
||||
{"task_id": "example-uuid-2", "sort_order": 2, "status_id": "status-uuid"}
|
||||
]'::json,
|
||||
'status'
|
||||
);
|
||||
*/
|
||||
|
||||
-- Verify that sort_order constraint still exists and works
|
||||
SELECT
|
||||
tc.constraint_name,
|
||||
tc.table_name,
|
||||
kcu.column_name
|
||||
FROM information_schema.table_constraints tc
|
||||
JOIN information_schema.key_column_usage kcu
|
||||
ON tc.constraint_name = kcu.constraint_name
|
||||
WHERE tc.constraint_name = 'tasks_sort_order_unique';
|
||||
|
||||
-- Check that new sort order columns don't have unique constraints (which is correct)
|
||||
SELECT
|
||||
tc.constraint_name,
|
||||
tc.table_name,
|
||||
kcu.column_name
|
||||
FROM information_schema.table_constraints tc
|
||||
JOIN information_schema.key_column_usage kcu
|
||||
ON tc.constraint_name = kcu.constraint_name
|
||||
WHERE kcu.table_name = 'tasks'
|
||||
AND kcu.column_name IN ('status_sort_order', 'priority_sort_order', 'phase_sort_order', 'member_sort_order')
|
||||
AND tc.constraint_type = 'UNIQUE';
|
||||
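
For context, a backend caller could exercise `update_task_sort_orders_bulk` the same way this test script does, but from Node. This is a hedged sketch using `node-postgres`; the real controller wiring in worklenz-backend may differ:

```typescript
import { Pool } from 'pg';

const pool = new Pool(); // connection settings come from the PG* environment variables

interface SortOrderUpdate {
  task_id: string;
  sort_order: number;
  status_id?: string;
  priority_id?: string;
  phase_id?: string;
}

// Pass the updates as a JSON array plus the active grouping
// ('status' | 'priority' | 'phase' | 'members'), matching the SQL signature above.
export async function updateTaskSortOrders(updates: SortOrderUpdate[], groupBy: string): Promise<void> {
  await pool.query('SELECT update_task_sort_orders_bulk($1::json, $2)', [
    JSON.stringify(updates),
    groupBy,
  ]);
}
```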
@@ -1,30 +0,0 @@
|
||||
-- Test script to validate the separate sort order implementation
|
||||
|
||||
-- Check if new columns exist
|
||||
SELECT column_name, data_type, is_nullable, column_default
|
||||
FROM information_schema.columns
|
||||
WHERE table_name = 'tasks'
|
||||
AND column_name IN ('status_sort_order', 'priority_sort_order', 'phase_sort_order', 'member_sort_order')
|
||||
ORDER BY column_name;
|
||||
|
||||
-- Check if helper function exists
|
||||
SELECT routine_name, routine_type
|
||||
FROM information_schema.routines
|
||||
WHERE routine_name IN ('get_sort_column_name', 'update_task_sort_orders_bulk', 'handle_task_list_sort_order_change');
|
||||
|
||||
-- Sample test data to verify different sort orders work
|
||||
-- (This would be run after the migrations)
|
||||
/*
|
||||
-- Test: Tasks should have different orders for different groupings
|
||||
SELECT
|
||||
id,
|
||||
name,
|
||||
sort_order,
|
||||
status_sort_order,
|
||||
priority_sort_order,
|
||||
phase_sort_order,
|
||||
member_sort_order
|
||||
FROM tasks
|
||||
WHERE project_id = '<test-project-id>'
|
||||
ORDER BY status_sort_order;
|
||||
*/
|
||||
@@ -73,8 +73,8 @@ cat > worklenz-backend/.env << EOL
|
||||
NODE_ENV=production
|
||||
PORT=3000
|
||||
SESSION_NAME=worklenz.sid
|
||||
SESSION_SECRET=$(openssl rand -base64 48)
|
||||
COOKIE_SECRET=$(openssl rand -base64 48)
|
||||
SESSION_SECRET=change_me_in_production
|
||||
COOKIE_SECRET=change_me_in_production
|
||||
|
||||
# CORS
|
||||
SOCKET_IO_CORS=${FRONTEND_URL}
|
||||
@@ -123,7 +123,7 @@ SLACK_WEBHOOK=
|
||||
COMMIT_BUILD_IMMEDIATELY=true
|
||||
|
||||
# JWT Secret
|
||||
JWT_SECRET=$(openssl rand -base64 48)
|
||||
JWT_SECRET=change_me_in_production
|
||||
EOL
|
||||
|
||||
echo "Environment configuration updated for ${HOSTNAME} with" $([ "$USE_SSL" = "true" ] && echo "HTTPS/WSS" || echo "HTTP/WS")
|
||||
|
||||
@@ -1,9 +1,5 @@
|
||||
node_modules
|
||||
npm-debug.log
|
||||
build
|
||||
.scannerwork
|
||||
coverage
|
||||
.dockerignore
|
||||
.git
|
||||
*.md
|
||||
tests
|
||||
|
||||
|
||||
@@ -79,7 +79,3 @@ GOOGLE_CAPTCHA_PASS_SCORE=0.8
|
||||
|
||||
# Email Cronjobs
|
||||
ENABLE_EMAIL_CRONJOBS=true
|
||||
|
||||
# RECURRING_JOBS
|
||||
ENABLE_RECURRING_JOBS=true
|
||||
RECURRING_JOBS_INTERVAL="0 11 */1 * 1-5"
|
||||
3 worklenz-backend/.gitignore (vendored)
@@ -20,6 +20,9 @@ coverage
|
||||
# nyc test coverage
|
||||
.nyc_output
|
||||
|
||||
# Grunt intermediate storage (http://gruntjs.com/creating-plugins#storing-task-files)
|
||||
.grunt
|
||||
|
||||
# Bower dependency directory (https://bower.io/)
|
||||
bower_components
|
||||
|
||||
|
||||
@@ -1,39 +1,26 @@
|
||||
# --- Stage 1: Build ---
|
||||
FROM node:20-slim AS builder
|
||||
|
||||
ARG RELEASE_VERSION
|
||||
RUN apt-get update && apt-get install -y \
|
||||
python3 \
|
||||
make \
|
||||
g++ \
|
||||
curl \
|
||||
postgresql-server-dev-all \
|
||||
&& rm -rf /var/lib/apt/lists/*
|
||||
# Use the official Node.js 20 image as a base
|
||||
FROM node:20
|
||||
|
||||
# Create and set the working directory
|
||||
WORKDIR /usr/src/app
|
||||
|
||||
# Install global dependencies
|
||||
RUN npm install -g ts-node typescript grunt grunt-cli
|
||||
|
||||
# Copy package.json and package-lock.json (if available)
|
||||
COPY package*.json ./
|
||||
|
||||
# Install app dependencies
|
||||
RUN npm ci
|
||||
|
||||
# Copy the rest of the application code
|
||||
COPY . .
|
||||
|
||||
# Run the build script to compile TypeScript to JavaScript
|
||||
RUN npm run build
|
||||
|
||||
RUN echo "$RELEASE_VERSION" > release
|
||||
|
||||
# --- Stage 2: Production Image ---
|
||||
FROM node:20-slim
|
||||
|
||||
RUN apt-get update && apt-get install -y libpq5 && rm -rf /var/lib/apt/lists/*
|
||||
|
||||
WORKDIR /app
|
||||
|
||||
COPY --from=builder /usr/src/app/package*.json ./
|
||||
COPY --from=builder /usr/src/app/build ./build
|
||||
COPY --from=builder /usr/src/app/node_modules ./node_modules
|
||||
COPY --from=builder /usr/src/app/release ./release
|
||||
COPY --from=builder /usr/src/app/worklenz-email-templates ./worklenz-email-templates
|
||||
|
||||
|
||||
# Expose the port the app runs on
|
||||
EXPOSE 3000
|
||||
CMD ["node", "build/bin/www"]
|
||||
|
||||
# Start the application
|
||||
CMD ["npm", "start"]
|
||||
131 worklenz-backend/Gruntfile.js (Normal file)
@@ -0,0 +1,131 @@
|
||||
module.exports = function (grunt) {
|
||||
|
||||
// Project configuration.
|
||||
grunt.initConfig({
|
||||
pkg: grunt.file.readJSON("package.json"),
|
||||
clean: {
|
||||
dist: "build"
|
||||
},
|
||||
compress: require("./grunt/grunt-compress"),
|
||||
copy: {
|
||||
main: {
|
||||
files: [
|
||||
{expand: true, cwd: "src", src: ["public/**"], dest: "build"},
|
||||
{expand: true, cwd: "src", src: ["views/**"], dest: "build"},
|
||||
{expand: true, cwd: "landing-page-assets", src: ["**"], dest: "build/public/assets"},
|
||||
{expand: true, cwd: "src", src: ["shared/sample-data.json"], dest: "build", filter: "isFile"},
|
||||
{expand: true, cwd: "src", src: ["shared/templates/**"], dest: "build", filter: "isFile"},
|
||||
{expand: true, cwd: "src", src: ["shared/postgresql-error-codes.json"], dest: "build", filter: "isFile"},
|
||||
]
|
||||
},
|
||||
packages: {
|
||||
files: [
|
||||
{expand: true, cwd: "", src: [".env"], dest: "build", filter: "isFile"},
|
||||
{expand: true, cwd: "", src: [".gitignore"], dest: "build", filter: "isFile"},
|
||||
{expand: true, cwd: "", src: ["release"], dest: "build", filter: "isFile"},
|
||||
{expand: true, cwd: "", src: ["jest.config.js"], dest: "build", filter: "isFile"},
|
||||
{expand: true, cwd: "", src: ["package.json"], dest: "build", filter: "isFile"},
|
||||
{expand: true, cwd: "", src: ["package-lock.json"], dest: "build", filter: "isFile"},
|
||||
{expand: true, cwd: "", src: ["common_modules/**"], dest: "build"}
|
||||
]
|
||||
}
|
||||
},
|
||||
sync: {
|
||||
main: {
|
||||
files: [
|
||||
{cwd: "src", src: ["views/**", "public/**"], dest: "build/"}, // makes all src relative to cwd
|
||||
],
|
||||
verbose: true,
|
||||
failOnError: true,
|
||||
compareUsing: "md5"
|
||||
}
|
||||
},
|
||||
uglify: {
|
||||
all: {
|
||||
files: [{
|
||||
expand: true,
|
||||
cwd: "build",
|
||||
src: "**/*.js",
|
||||
dest: "build"
|
||||
}]
|
||||
},
|
||||
controllers: {
|
||||
files: [{
|
||||
expand: true,
|
||||
cwd: "build",
|
||||
src: "controllers/*.js",
|
||||
dest: "build"
|
||||
}]
|
||||
},
|
||||
routes: {
|
||||
files: [{
|
||||
expand: true,
|
||||
cwd: "build",
|
||||
src: "routes/**/*.js",
|
||||
dest: "build"
|
||||
}]
|
||||
},
|
||||
assets: {
|
||||
files: [{
|
||||
expand: true,
|
||||
cwd: "build",
|
||||
src: "public/assets/**/*.js",
|
||||
dest: "build"
|
||||
}]
|
||||
}
|
||||
},
|
||||
shell: {
|
||||
tsc: {
|
||||
command: "tsc --build tsconfig.prod.json"
|
||||
},
|
||||
esbuild: {
|
||||
// command: "esbuild `find src -type f -name '*.ts'` --platform=node --minify=false --target=esnext --format=cjs --tsconfig=tsconfig.prod.json --outdir=build"
|
||||
command: "node esbuild && node cli/esbuild-patch"
|
||||
},
|
||||
tsc_dev: {
|
||||
command: "tsc --build tsconfig.json"
|
||||
},
|
||||
swagger: {
|
||||
command: "node ./cli/swagger"
|
||||
},
|
||||
inline_queries: {
|
||||
command: "node ./cli/inline-queries"
|
||||
}
|
||||
},
|
||||
watch: {
|
||||
scripts: {
|
||||
files: ["src/**/*.ts"],
|
||||
tasks: ["shell:tsc_dev"],
|
||||
options: {
|
||||
debounceDelay: 250,
|
||||
spawn: false,
|
||||
}
|
||||
},
|
||||
other: {
|
||||
files: ["src/**/*.pug", "landing-page-assets/**"],
|
||||
tasks: ["sync"]
|
||||
}
|
||||
}
|
||||
});
|
||||
|
||||
grunt.registerTask("clean", ["clean"]);
|
||||
grunt.registerTask("copy", ["copy:main"]);
|
||||
grunt.registerTask("swagger", ["shell:swagger"]);
|
||||
grunt.registerTask("build:tsc", ["shell:tsc"]);
|
||||
grunt.registerTask("build", ["clean", "shell:tsc", "copy:main", "compress"]);
|
||||
grunt.registerTask("build:es", ["clean", "shell:esbuild", "copy:main", "uglify:assets", "compress"]);
|
||||
grunt.registerTask("build:strict", ["clean", "shell:tsc", "copy:packages", "uglify:all", "copy:main", "compress"]);
|
||||
grunt.registerTask("dev", ["clean", "copy:main", "shell:tsc_dev", "shell:inline_queries", "watch"]);
|
||||
|
||||
// Load the plugin that provides the "uglify" task.
|
||||
grunt.loadNpmTasks("grunt-contrib-watch");
|
||||
grunt.loadNpmTasks("grunt-contrib-clean");
|
||||
grunt.loadNpmTasks("grunt-contrib-copy");
|
||||
grunt.loadNpmTasks("grunt-contrib-uglify");
|
||||
grunt.loadNpmTasks("grunt-contrib-compress");
|
||||
grunt.loadNpmTasks("grunt-shell");
|
||||
grunt.loadNpmTasks("grunt-sync");
|
||||
|
||||
// Default task(s).
|
||||
grunt.registerTask("default", []);
|
||||
};
|
||||
55 worklenz-backend/database/00-init-db.sh (Normal file)
@@ -0,0 +1,55 @@
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
# This script controls the order of SQL file execution during database initialization
|
||||
echo "Starting database initialization..."
|
||||
|
||||
# Check if we have SQL files in expected locations
|
||||
if [ -f "/docker-entrypoint-initdb.d/sql/0_extensions.sql" ]; then
|
||||
SQL_DIR="/docker-entrypoint-initdb.d/sql"
|
||||
echo "Using SQL files from sql/ subdirectory"
|
||||
elif [ -f "/docker-entrypoint-initdb.d/0_extensions.sql" ]; then
|
||||
# First time setup - move files to subdirectory
|
||||
echo "Moving SQL files to sql/ subdirectory..."
|
||||
mkdir -p /docker-entrypoint-initdb.d/sql
|
||||
|
||||
# Move all SQL files (except this script) to the subdirectory
|
||||
for f in /docker-entrypoint-initdb.d/*.sql; do
|
||||
if [ -f "$f" ]; then
|
||||
cp "$f" /docker-entrypoint-initdb.d/sql/
|
||||
echo "Copied $f to sql/ subdirectory"
|
||||
fi
|
||||
done
|
||||
|
||||
SQL_DIR="/docker-entrypoint-initdb.d/sql"
|
||||
else
|
||||
echo "SQL files not found in expected locations!"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Execute SQL files in the correct order
|
||||
echo "Executing 0_extensions.sql..."
|
||||
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/0_extensions.sql"
|
||||
|
||||
echo "Executing 1_tables.sql..."
|
||||
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/1_tables.sql"
|
||||
|
||||
echo "Executing indexes.sql..."
|
||||
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/indexes.sql"
|
||||
|
||||
echo "Executing 4_functions.sql..."
|
||||
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/4_functions.sql"
|
||||
|
||||
echo "Executing triggers.sql..."
|
||||
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/triggers.sql"
|
||||
|
||||
echo "Executing 3_views.sql..."
|
||||
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/3_views.sql"
|
||||
|
||||
echo "Executing 2_dml.sql..."
|
||||
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/2_dml.sql"
|
||||
|
||||
echo "Executing 5_database_user.sql..."
|
||||
psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" -f "$SQL_DIR/5_database_user.sql"
|
||||
|
||||
echo "Database initialization completed successfully"
|
||||
@@ -1,88 +0,0 @@
|
||||
#!/bin/bash
|
||||
set -e
|
||||
|
||||
echo "Starting database initialization..."
|
||||
|
||||
SQL_DIR="/docker-entrypoint-initdb.d/sql"
|
||||
MIGRATIONS_DIR="/docker-entrypoint-initdb.d/migrations"
|
||||
BACKUP_DIR="/docker-entrypoint-initdb.d/pg_backups"
|
||||
|
||||
# --------------------------------------------
|
||||
# 🗄️ STEP 1: Attempt to restore latest backup
|
||||
# --------------------------------------------
|
||||
|
||||
if [ -d "$BACKUP_DIR" ]; then
|
||||
LATEST_BACKUP=$(ls -t "$BACKUP_DIR"/*.sql 2>/dev/null | head -n 1)
|
||||
else
|
||||
LATEST_BACKUP=""
|
||||
fi
|
||||
|
||||
if [ -f "$LATEST_BACKUP" ]; then
|
||||
echo "🗄️ Found latest backup: $LATEST_BACKUP"
|
||||
echo "⏳ Restoring from backup..."
|
||||
psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" < "$LATEST_BACKUP"
|
||||
echo "✅ Backup restoration complete. Skipping schema and migrations."
|
||||
exit 0
|
||||
else
|
||||
echo "ℹ️ No valid backup found. Proceeding with base schema and migrations."
|
||||
fi
|
||||
|
||||
# --------------------------------------------
|
||||
# 🏗️ STEP 2: Continue with base schema setup
|
||||
# --------------------------------------------
|
||||
|
||||
# Create migrations table if it doesn't exist
|
||||
psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "
|
||||
CREATE TABLE IF NOT EXISTS schema_migrations (
|
||||
version TEXT PRIMARY KEY,
|
||||
applied_at TIMESTAMP DEFAULT now()
|
||||
);
|
||||
"
|
||||
|
||||
# List of base schema files to execute in order
|
||||
BASE_SQL_FILES=(
|
||||
"0_extensions.sql"
|
||||
"1_tables.sql"
|
||||
"indexes.sql"
|
||||
"4_functions.sql"
|
||||
"triggers.sql"
|
||||
"3_views.sql"
|
||||
"2_dml.sql"
|
||||
"5_database_user.sql"
|
||||
)
|
||||
|
||||
echo "Running base schema SQL files in order..."
|
||||
|
||||
for file in "${BASE_SQL_FILES[@]}"; do
|
||||
full_path="$SQL_DIR/$file"
|
||||
if [ -f "$full_path" ]; then
|
||||
echo "Executing $file..."
|
||||
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d "$POSTGRES_DB" -f "$full_path"
|
||||
else
|
||||
echo "WARNING: $file not found, skipping."
|
||||
fi
|
||||
done
|
||||
|
||||
echo "✅ Base schema SQL execution complete."
|
||||
|
||||
# --------------------------------------------
|
||||
# 🚀 STEP 3: Apply SQL migrations
|
||||
# --------------------------------------------
|
||||
|
||||
if [ -d "$MIGRATIONS_DIR" ] && compgen -G "$MIGRATIONS_DIR/*.sql" > /dev/null; then
|
||||
echo "Applying migrations..."
|
||||
for f in "$MIGRATIONS_DIR"/*.sql; do
|
||||
version=$(basename "$f")
|
||||
if ! psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -tAc "SELECT 1 FROM schema_migrations WHERE version = '$version'" | grep -q 1; then
|
||||
echo "Applying migration: $version"
|
||||
psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -f "$f"
|
||||
psql -U "$POSTGRES_USER" -d "$POSTGRES_DB" -c "INSERT INTO schema_migrations (version) VALUES ('$version');"
|
||||
else
|
||||
echo "Skipping already applied migration: $version"
|
||||
fi
|
||||
done
|
||||
else
|
||||
echo "No migration files found or directory is empty, skipping migrations."
|
||||
fi
|
||||
|
||||
echo "🎉 Database initialization completed successfully."
|
||||
@@ -1,135 +0,0 @@
|
||||
-- Performance indexes for optimized tasks queries
|
||||
-- Migration: 20250115000000-performance-indexes.sql
|
||||
|
||||
-- Composite index for main task filtering
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_project_archived_parent
|
||||
ON tasks(project_id, archived, parent_task_id)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for status joins
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_status_project
|
||||
ON tasks(status_id, project_id)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for assignees lookup
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_assignees_task_member
|
||||
ON tasks_assignees(task_id, team_member_id);
|
||||
|
||||
-- Index for phase lookup
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_phase_task_phase
|
||||
ON task_phase(task_id, phase_id);
|
||||
|
||||
-- Index for subtask counting
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_parent_archived
|
||||
ON tasks(parent_task_id, archived)
|
||||
WHERE parent_task_id IS NOT NULL AND archived = FALSE;
|
||||
|
||||
-- Index for labels
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_labels_task_label
|
||||
ON task_labels(task_id, label_id);
|
||||
|
||||
-- Index for comments count
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_comments_task
|
||||
ON task_comments(task_id);
|
||||
|
||||
-- Index for attachments count
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_attachments_task
|
||||
ON task_attachments(task_id);
|
||||
|
||||
-- Index for work log aggregation
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_work_log_task
|
||||
ON task_work_log(task_id);
|
||||
|
||||
-- Index for subscribers check
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_subscribers_task
|
||||
ON task_subscribers(task_id);
|
||||
|
||||
-- Index for dependencies check
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_dependencies_task
|
||||
ON task_dependencies(task_id);
|
||||
|
||||
-- Index for timers lookup
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_timers_task_user
|
||||
ON task_timers(task_id, user_id);
|
||||
|
||||
-- Index for custom columns
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_cc_column_values_task
|
||||
ON cc_column_values(task_id);
|
||||
|
||||
-- Index for team member info view optimization
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_team_members_team_user
|
||||
ON team_members(team_id, user_id)
|
||||
WHERE active = TRUE;
|
||||
|
||||
-- Index for notification settings
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_notification_settings_user_team
|
||||
ON notification_settings(user_id, team_id);
|
||||
|
||||
-- Index for task status categories
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_statuses_category
|
||||
ON task_statuses(category_id, project_id);
|
||||
|
||||
-- Index for project phases
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_project_phases_project_sort
|
||||
ON project_phases(project_id, sort_index);
|
||||
|
||||
-- Index for task priorities
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_priorities_value
|
||||
ON task_priorities(value);
|
||||
|
||||
-- Index for team labels
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_team_labels_team
|
||||
ON team_labels(team_id);
|
||||
|
||||
-- NEW INDEXES FOR PERFORMANCE OPTIMIZATION --
|
||||
|
||||
-- Composite index for task main query optimization (covers most WHERE conditions)
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_performance_main
|
||||
ON tasks(project_id, archived, parent_task_id, status_id, priority_id)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for sorting by sort_order with project filter
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_project_sort_order
|
||||
ON tasks(project_id, sort_order)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for email_invitations to optimize team_member_info_view
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_email_invitations_team_member
|
||||
ON email_invitations(team_member_id);
|
||||
|
||||
-- Covering index for task status with category information
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_statuses_covering
|
||||
ON task_statuses(id, category_id, project_id);
|
||||
|
||||
-- Index for task aggregation queries (parent task progress calculation)
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_parent_status_archived
|
||||
ON tasks(parent_task_id, status_id, archived)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for project team member filtering
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_team_members_project_lookup
|
||||
ON team_members(team_id, active, user_id)
|
||||
WHERE active = TRUE;
|
||||
|
||||
-- Covering index for tasks with frequently accessed columns
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_covering_main
|
||||
ON tasks(id, project_id, archived, parent_task_id, status_id, priority_id, sort_order, name)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for task search functionality
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_name_search
|
||||
ON tasks USING gin(to_tsvector('english', name))
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for date-based filtering (if used)
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_dates
|
||||
ON tasks(project_id, start_date, end_date)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for task timers with user filtering
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_timers_user_task
|
||||
ON task_timers(user_id, task_id);
|
||||
|
||||
-- Index for sys_task_status_categories lookups
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_sys_task_status_categories_covering
|
||||
ON sys_task_status_categories(id, color_code, color_code_dark, is_done, is_doing, is_todo);
|
||||
@@ -1,143 +0,0 @@
|
||||
-- Fix window function error in task sort optimized functions
|
||||
-- Error: window functions are not allowed in UPDATE
|
||||
|
||||
-- Replace the optimized sort functions to avoid CTE usage in UPDATE statements
|
||||
CREATE OR REPLACE FUNCTION handle_task_list_sort_between_groups_optimized(_from_index integer, _to_index integer, _task_id uuid, _project_id uuid, _batch_size integer DEFAULT 100) RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_offset INT := 0;
|
||||
_affected_rows INT;
|
||||
BEGIN
|
||||
-- PERFORMANCE OPTIMIZATION: Use direct updates without CTE in UPDATE
|
||||
IF (_to_index = -1)
|
||||
THEN
|
||||
_to_index = COALESCE((SELECT MAX(sort_order) + 1 FROM tasks WHERE project_id = _project_id), 0);
|
||||
END IF;
|
||||
|
||||
-- PERFORMANCE OPTIMIZATION: Batch updates for large datasets
|
||||
IF _to_index > _from_index
|
||||
THEN
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET sort_order = sort_order - 1
|
||||
WHERE project_id = _project_id
|
||||
AND sort_order > _from_index
|
||||
AND sort_order < _to_index
|
||||
AND sort_order > _offset
|
||||
AND sort_order <= _offset + _batch_size;
|
||||
|
||||
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
|
||||
EXIT WHEN _affected_rows = 0;
|
||||
_offset := _offset + _batch_size;
|
||||
END LOOP;
|
||||
|
||||
UPDATE tasks SET sort_order = _to_index - 1 WHERE id = _task_id AND project_id = _project_id;
|
||||
END IF;
|
||||
|
||||
IF _to_index < _from_index
|
||||
THEN
|
||||
_offset := 0;
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET sort_order = sort_order + 1
|
||||
WHERE project_id = _project_id
|
||||
AND sort_order > _to_index
|
||||
AND sort_order < _from_index
|
||||
AND sort_order > _offset
|
||||
AND sort_order <= _offset + _batch_size;
|
||||
|
||||
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
|
||||
EXIT WHEN _affected_rows = 0;
|
||||
_offset := _offset + _batch_size;
|
||||
END LOOP;
|
||||
|
||||
UPDATE tasks SET sort_order = _to_index + 1 WHERE id = _task_id AND project_id = _project_id;
|
||||
END IF;
|
||||
END
|
||||
$$;
|
||||
|
||||
-- Replace the second optimized sort function
|
||||
CREATE OR REPLACE FUNCTION handle_task_list_sort_inside_group_optimized(_from_index integer, _to_index integer, _task_id uuid, _project_id uuid, _batch_size integer DEFAULT 100) RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_offset INT := 0;
|
||||
_affected_rows INT;
|
||||
BEGIN
|
||||
-- PERFORMANCE OPTIMIZATION: Batch updates for large datasets without CTE in UPDATE
|
||||
IF _to_index > _from_index
|
||||
THEN
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET sort_order = sort_order - 1
|
||||
WHERE project_id = _project_id
|
||||
AND sort_order > _from_index
|
||||
AND sort_order <= _to_index
|
||||
AND sort_order > _offset
|
||||
AND sort_order <= _offset + _batch_size;
|
||||
|
||||
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
|
||||
EXIT WHEN _affected_rows = 0;
|
||||
_offset := _offset + _batch_size;
|
||||
END LOOP;
|
||||
END IF;
|
||||
|
||||
IF _to_index < _from_index
|
||||
THEN
|
||||
_offset := 0;
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET sort_order = sort_order + 1
|
||||
WHERE project_id = _project_id
|
||||
AND sort_order >= _to_index
|
||||
AND sort_order < _from_index
|
||||
AND sort_order > _offset
|
||||
AND sort_order <= _offset + _batch_size;
|
||||
|
||||
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
|
||||
EXIT WHEN _affected_rows = 0;
|
||||
_offset := _offset + _batch_size;
|
||||
END LOOP;
|
||||
END IF;
|
||||
|
||||
UPDATE tasks SET sort_order = _to_index WHERE id = _task_id AND project_id = _project_id;
|
||||
END
|
||||
$$;
|
||||
|
||||
-- Add simple bulk update function as alternative
|
||||
CREATE OR REPLACE FUNCTION update_task_sort_orders_bulk(_updates json) RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_update_record RECORD;
|
||||
BEGIN
|
||||
-- Simple approach: update each task's sort_order from the provided array
|
||||
FOR _update_record IN
|
||||
SELECT
|
||||
(item->>'task_id')::uuid as task_id,
|
||||
(item->>'sort_order')::int as sort_order,
|
||||
(item->>'status_id')::uuid as status_id,
|
||||
(item->>'priority_id')::uuid as priority_id,
|
||||
(item->>'phase_id')::uuid as phase_id
|
||||
FROM json_array_elements(_updates) as item
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET
|
||||
sort_order = _update_record.sort_order,
|
||||
status_id = COALESCE(_update_record.status_id, status_id),
|
||||
priority_id = COALESCE(_update_record.priority_id, priority_id)
|
||||
WHERE id = _update_record.task_id;
|
||||
|
||||
-- Handle phase updates separately since it's in a different table
|
||||
IF _update_record.phase_id IS NOT NULL THEN
|
||||
INSERT INTO task_phase (task_id, phase_id)
|
||||
VALUES (_update_record.task_id, _update_record.phase_id)
|
||||
ON CONFLICT (task_id) DO UPDATE SET phase_id = _update_record.phase_id;
|
||||
END IF;
|
||||
END LOOP;
|
||||
END
|
||||
$$;
|
||||
@@ -145,7 +145,7 @@ BEGIN
|
||||
SET progress_value = NULL,
|
||||
progress_mode = NULL
|
||||
WHERE project_id = _project_id
|
||||
AND progress_mode::text::progress_mode_type = _old_mode;
|
||||
AND progress_mode = _old_mode;
|
||||
END IF;
|
||||
|
||||
RETURN NEW;
|
||||
|
||||
@@ -118,7 +118,7 @@ BEGIN
|
||||
SELECT SUM(time_spent)
|
||||
FROM task_work_log
|
||||
WHERE task_id = t.id
|
||||
), 0) as logged_minutes
|
||||
), 0) / 60.0 as logged_minutes
|
||||
FROM tasks t
|
||||
WHERE t.id = _task_id
|
||||
)
|
||||
|
||||
@@ -1,300 +0,0 @@
|
||||
-- Fix Duplicate Sort Orders Script
|
||||
-- This script detects and fixes duplicate sort order values that break task ordering
|
||||
|
||||
-- 1. DETECTION QUERIES - Run these first to see the scope of the problem
|
||||
|
||||
-- Check for duplicates in main sort_order column
|
||||
SELECT
|
||||
project_id,
|
||||
sort_order,
|
||||
COUNT(*) as duplicate_count,
|
||||
STRING_AGG(id::text, ', ') as task_ids
|
||||
FROM tasks
|
||||
WHERE project_id IS NOT NULL
|
||||
GROUP BY project_id, sort_order
|
||||
HAVING COUNT(*) > 1
|
||||
ORDER BY project_id, sort_order;
|
||||
|
||||
-- Check for duplicates in status_sort_order
|
||||
SELECT
|
||||
project_id,
|
||||
status_sort_order,
|
||||
COUNT(*) as duplicate_count,
|
||||
STRING_AGG(id::text, ', ') as task_ids
|
||||
FROM tasks
|
||||
WHERE project_id IS NOT NULL
|
||||
GROUP BY project_id, status_sort_order
|
||||
HAVING COUNT(*) > 1
|
||||
ORDER BY project_id, status_sort_order;
|
||||
|
||||
-- Check for duplicates in priority_sort_order
|
||||
SELECT
|
||||
project_id,
|
||||
priority_sort_order,
|
||||
COUNT(*) as duplicate_count,
|
||||
STRING_AGG(id::text, ', ') as task_ids
|
||||
FROM tasks
|
||||
WHERE project_id IS NOT NULL
|
||||
GROUP BY project_id, priority_sort_order
|
||||
HAVING COUNT(*) > 1
|
||||
ORDER BY project_id, priority_sort_order;
|
||||
|
||||
-- Check for duplicates in phase_sort_order
|
||||
SELECT
|
||||
project_id,
|
||||
phase_sort_order,
|
||||
COUNT(*) as duplicate_count,
|
||||
STRING_AGG(id::text, ', ') as task_ids
|
||||
FROM tasks
|
||||
WHERE project_id IS NOT NULL
|
||||
GROUP BY project_id, phase_sort_order
|
||||
HAVING COUNT(*) > 1
|
||||
ORDER BY project_id, phase_sort_order;
|
||||
|
||||
-- Note: member_sort_order removed - no longer used
|
||||
|
||||
-- 2. CLEANUP FUNCTIONS
|
||||
|
||||
-- Fix duplicates in main sort_order column
|
||||
CREATE OR REPLACE FUNCTION fix_sort_order_duplicates() RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_project RECORD;
|
||||
_task RECORD;
|
||||
_counter INTEGER;
|
||||
BEGIN
|
||||
-- For each project, reassign sort_order values to ensure uniqueness
|
||||
FOR _project IN
|
||||
SELECT DISTINCT project_id
|
||||
FROM tasks
|
||||
WHERE project_id IS NOT NULL
|
||||
LOOP
|
||||
_counter := 0;
|
||||
|
||||
-- Reassign sort_order values sequentially for this project
|
||||
FOR _task IN
|
||||
SELECT id
|
||||
FROM tasks
|
||||
WHERE project_id = _project.project_id
|
||||
ORDER BY sort_order, created_at
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET sort_order = _counter
|
||||
WHERE id = _task.id;
|
||||
|
||||
_counter := _counter + 1;
|
||||
END LOOP;
|
||||
END LOOP;
|
||||
|
||||
RAISE NOTICE 'Fixed sort_order duplicates for all projects';
|
||||
END
|
||||
$$;
|
||||
|
||||
-- Fix duplicates in status_sort_order column
|
||||
CREATE OR REPLACE FUNCTION fix_status_sort_order_duplicates() RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_project RECORD;
|
||||
_task RECORD;
|
||||
_counter INTEGER;
|
||||
BEGIN
|
||||
FOR _project IN
|
||||
SELECT DISTINCT project_id
|
||||
FROM tasks
|
||||
WHERE project_id IS NOT NULL
|
||||
LOOP
|
||||
_counter := 0;
|
||||
|
||||
FOR _task IN
|
||||
SELECT id
|
||||
FROM tasks
|
||||
WHERE project_id = _project.project_id
|
||||
ORDER BY status_sort_order, created_at
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET status_sort_order = _counter
|
||||
WHERE id = _task.id;
|
||||
|
||||
_counter := _counter + 1;
|
||||
END LOOP;
|
||||
END LOOP;
|
||||
|
||||
RAISE NOTICE 'Fixed status_sort_order duplicates for all projects';
|
||||
END
|
||||
$$;
|
||||
|
||||
-- Fix duplicates in priority_sort_order column
|
||||
CREATE OR REPLACE FUNCTION fix_priority_sort_order_duplicates() RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_project RECORD;
|
||||
_task RECORD;
|
||||
_counter INTEGER;
|
||||
BEGIN
|
||||
FOR _project IN
|
||||
SELECT DISTINCT project_id
|
||||
FROM tasks
|
||||
WHERE project_id IS NOT NULL
|
||||
LOOP
|
||||
_counter := 0;
|
||||
|
||||
FOR _task IN
|
||||
SELECT id
|
||||
FROM tasks
|
||||
WHERE project_id = _project.project_id
|
||||
ORDER BY priority_sort_order, created_at
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET priority_sort_order = _counter
|
||||
WHERE id = _task.id;
|
||||
|
||||
_counter := _counter + 1;
|
||||
END LOOP;
|
||||
END LOOP;
|
||||
|
||||
RAISE NOTICE 'Fixed priority_sort_order duplicates for all projects';
|
||||
END
|
||||
$$;
|
||||
|
||||
-- Fix duplicates in phase_sort_order column
|
||||
CREATE OR REPLACE FUNCTION fix_phase_sort_order_duplicates() RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_project RECORD;
|
||||
_task RECORD;
|
||||
_counter INTEGER;
|
||||
BEGIN
|
||||
FOR _project IN
|
||||
SELECT DISTINCT project_id
|
||||
FROM tasks
|
||||
WHERE project_id IS NOT NULL
|
||||
LOOP
|
||||
_counter := 0;
|
||||
|
||||
FOR _task IN
|
||||
SELECT id
|
||||
FROM tasks
|
||||
WHERE project_id = _project.project_id
|
||||
ORDER BY phase_sort_order, created_at
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET phase_sort_order = _counter
|
||||
WHERE id = _task.id;
|
||||
|
||||
_counter := _counter + 1;
|
||||
END LOOP;
|
||||
END LOOP;
|
||||
|
||||
RAISE NOTICE 'Fixed phase_sort_order duplicates for all projects';
|
||||
END
|
||||
$$;
|
||||
|
||||
-- Note: fix_member_sort_order_duplicates() removed - no longer needed
|
||||
|
||||
-- Master function to fix all sort order duplicates
|
||||
CREATE OR REPLACE FUNCTION fix_all_duplicate_sort_orders() RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
BEGIN
|
||||
RAISE NOTICE 'Starting sort order cleanup for all columns...';
|
||||
|
||||
PERFORM fix_sort_order_duplicates();
|
||||
PERFORM fix_status_sort_order_duplicates();
|
||||
PERFORM fix_priority_sort_order_duplicates();
|
||||
PERFORM fix_phase_sort_order_duplicates();
|
||||
|
||||
RAISE NOTICE 'Completed sort order cleanup for all columns';
|
||||
END
|
||||
$$;
|
||||
|
||||
-- 3. VERIFICATION FUNCTION
|
||||
|
||||
-- Verify that duplicates have been fixed
|
||||
CREATE OR REPLACE FUNCTION verify_sort_order_integrity() RETURNS TABLE(
|
||||
column_name text,
|
||||
project_id uuid,
|
||||
duplicate_count bigint,
|
||||
status text
|
||||
)
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
BEGIN
|
||||
-- Check sort_order duplicates
|
||||
RETURN QUERY
|
||||
SELECT
|
||||
'sort_order'::text as column_name,
|
||||
t.project_id,
|
||||
COUNT(*) as duplicate_count,
|
||||
CASE WHEN COUNT(*) > 1 THEN 'DUPLICATES FOUND' ELSE 'OK' END as status
|
||||
FROM tasks t
|
||||
WHERE t.project_id IS NOT NULL
|
||||
GROUP BY t.project_id, t.sort_order
|
||||
HAVING COUNT(*) > 1;
|
||||
|
||||
-- Check status_sort_order duplicates
|
||||
RETURN QUERY
|
||||
SELECT
|
||||
'status_sort_order'::text as column_name,
|
||||
t.project_id,
|
||||
COUNT(*) as duplicate_count,
|
||||
CASE WHEN COUNT(*) > 1 THEN 'DUPLICATES FOUND' ELSE 'OK' END as status
|
||||
FROM tasks t
|
||||
WHERE t.project_id IS NOT NULL
|
||||
GROUP BY t.project_id, t.status_sort_order
|
||||
HAVING COUNT(*) > 1;
|
||||
|
||||
-- Check priority_sort_order duplicates
|
||||
RETURN QUERY
|
||||
SELECT
|
||||
'priority_sort_order'::text as column_name,
|
||||
t.project_id,
|
||||
COUNT(*) as duplicate_count,
|
||||
CASE WHEN COUNT(*) > 1 THEN 'DUPLICATES FOUND' ELSE 'OK' END as status
|
||||
FROM tasks t
|
||||
WHERE t.project_id IS NOT NULL
|
||||
GROUP BY t.project_id, t.priority_sort_order
|
||||
HAVING COUNT(*) > 1;
|
||||
|
||||
-- Check phase_sort_order duplicates
|
||||
RETURN QUERY
|
||||
SELECT
|
||||
'phase_sort_order'::text as column_name,
|
||||
t.project_id,
|
||||
COUNT(*) as duplicate_count,
|
||||
CASE WHEN COUNT(*) > 1 THEN 'DUPLICATES FOUND' ELSE 'OK' END as status
|
||||
FROM tasks t
|
||||
WHERE t.project_id IS NOT NULL
|
||||
GROUP BY t.project_id, t.phase_sort_order
|
||||
HAVING COUNT(*) > 1;
|
||||
|
||||
-- Note: member_sort_order verification removed - column no longer used
|
||||
|
||||
END
|
||||
$$;
|
||||
|
||||
-- 4. USAGE INSTRUCTIONS
|
||||
|
||||
/*
|
||||
USAGE:
|
||||
|
||||
1. First, run the detection queries to see which projects have duplicates
|
||||
2. Then run this to fix all duplicates:
|
||||
SELECT fix_all_duplicate_sort_orders();
|
||||
3. Finally, verify the fix worked:
|
||||
SELECT * FROM verify_sort_order_integrity();
|
||||
|
||||
If verification returns no rows, all duplicates have been fixed successfully.
|
||||
|
||||
WARNING: This will reassign sort order values based on current order + creation time.
|
||||
Make sure to backup your database before running these functions.
|
||||
*/
|
||||
@@ -1,37 +0,0 @@
|
||||
-- Migration: Add separate sort order columns for different grouping types
|
||||
-- This allows users to maintain different task orders when switching between grouping views
|
||||
|
||||
-- Add new sort order columns
|
||||
ALTER TABLE tasks ADD COLUMN IF NOT EXISTS status_sort_order INTEGER DEFAULT 0;
|
||||
ALTER TABLE tasks ADD COLUMN IF NOT EXISTS priority_sort_order INTEGER DEFAULT 0;
|
||||
ALTER TABLE tasks ADD COLUMN IF NOT EXISTS phase_sort_order INTEGER DEFAULT 0;
|
||||
ALTER TABLE tasks ADD COLUMN IF NOT EXISTS member_sort_order INTEGER DEFAULT 0;
|
||||
|
||||
-- Initialize new columns with current sort_order values
|
||||
UPDATE tasks SET
|
||||
status_sort_order = sort_order,
|
||||
priority_sort_order = sort_order,
|
||||
phase_sort_order = sort_order,
|
||||
member_sort_order = sort_order
|
||||
WHERE status_sort_order = 0
|
||||
OR priority_sort_order = 0
|
||||
OR phase_sort_order = 0
|
||||
OR member_sort_order = 0;
|
||||
|
||||
-- Add constraints to ensure non-negative values
|
||||
ALTER TABLE tasks ADD CONSTRAINT tasks_status_sort_order_check CHECK (status_sort_order >= 0);
|
||||
ALTER TABLE tasks ADD CONSTRAINT tasks_priority_sort_order_check CHECK (priority_sort_order >= 0);
|
||||
ALTER TABLE tasks ADD CONSTRAINT tasks_phase_sort_order_check CHECK (phase_sort_order >= 0);
|
||||
ALTER TABLE tasks ADD CONSTRAINT tasks_member_sort_order_check CHECK (member_sort_order >= 0);
|
||||
|
||||
-- Add indexes for performance (since these will be used for ordering)
|
||||
CREATE INDEX IF NOT EXISTS idx_tasks_status_sort_order ON tasks(project_id, status_sort_order);
|
||||
CREATE INDEX IF NOT EXISTS idx_tasks_priority_sort_order ON tasks(project_id, priority_sort_order);
|
||||
CREATE INDEX IF NOT EXISTS idx_tasks_phase_sort_order ON tasks(project_id, phase_sort_order);
|
||||
CREATE INDEX IF NOT EXISTS idx_tasks_member_sort_order ON tasks(project_id, member_sort_order);
|
||||
|
||||
-- Update comments for documentation
|
||||
COMMENT ON COLUMN tasks.status_sort_order IS 'Sort order when grouped by status';
|
||||
COMMENT ON COLUMN tasks.priority_sort_order IS 'Sort order when grouped by priority';
|
||||
COMMENT ON COLUMN tasks.phase_sort_order IS 'Sort order when grouped by phase';
|
||||
COMMENT ON COLUMN tasks.member_sort_order IS 'Sort order when grouped by members/assignees';
|
||||
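
The column comments above define a grouping-to-column mapping that the application layer has to mirror. A hedged TypeScript sketch of that mapping (names assumed for illustration; the frontend's actual helper may differ):

```typescript
// Mirrors get_sort_column_name() in the migration below: each grouping view
// orders tasks by its own column, so switching views keeps separate orders.
type GroupBy = 'status' | 'priority' | 'phase' | 'members';

const SORT_FIELD_BY_GROUPING: Record<GroupBy, string> = {
  status: 'status_sort_order',
  priority: 'priority_sort_order',
  phase: 'phase_sort_order',
  members: 'member_sort_order',
};

export function getSortField(groupBy: GroupBy | string): string {
  return SORT_FIELD_BY_GROUPING[groupBy as GroupBy] ?? 'sort_order';
}
```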
@@ -1,172 +0,0 @@
|
||||
-- Migration: Update database functions to handle grouping-specific sort orders
|
||||
|
||||
-- Function to get the appropriate sort column name based on grouping type
|
||||
CREATE OR REPLACE FUNCTION get_sort_column_name(_group_by TEXT) RETURNS TEXT
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
BEGIN
|
||||
CASE _group_by
|
||||
WHEN 'status' THEN RETURN 'status_sort_order';
|
||||
WHEN 'priority' THEN RETURN 'priority_sort_order';
|
||||
WHEN 'phase' THEN RETURN 'phase_sort_order';
|
||||
WHEN 'members' THEN RETURN 'member_sort_order';
|
||||
ELSE RETURN 'sort_order'; -- fallback to general sort_order
|
||||
END CASE;
|
||||
END;
|
||||
$$;
|
||||
|
||||
-- Updated bulk sort order function to handle different sort columns
|
||||
CREATE OR REPLACE FUNCTION update_task_sort_orders_bulk(_updates json, _group_by text DEFAULT 'status') RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_update_record RECORD;
|
||||
_sort_column TEXT;
|
||||
_sql TEXT;
|
||||
BEGIN
|
||||
-- Get the appropriate sort column based on grouping
|
||||
_sort_column := get_sort_column_name(_group_by);
|
||||
|
||||
-- Simple approach: update each task's sort_order from the provided array
|
||||
FOR _update_record IN
|
||||
SELECT
|
||||
(item->>'task_id')::uuid as task_id,
|
||||
(item->>'sort_order')::int as sort_order,
|
||||
(item->>'status_id')::uuid as status_id,
|
||||
(item->>'priority_id')::uuid as priority_id,
|
||||
(item->>'phase_id')::uuid as phase_id
|
||||
FROM json_array_elements(_updates) as item
|
||||
LOOP
|
||||
-- Update the appropriate sort column and other fields using dynamic SQL
|
||||
-- Only update sort_order if we're using the default sorting
|
||||
IF _sort_column = 'sort_order' THEN
|
||||
UPDATE tasks SET
|
||||
sort_order = _update_record.sort_order,
|
||||
status_id = COALESCE(_update_record.status_id, status_id),
|
||||
priority_id = COALESCE(_update_record.priority_id, priority_id)
|
||||
WHERE id = _update_record.task_id;
|
||||
ELSE
|
||||
-- Update only the grouping-specific sort column, not the main sort_order
|
||||
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1, ' ||
|
||||
'status_id = COALESCE($2, status_id), ' ||
|
||||
'priority_id = COALESCE($3, priority_id) ' ||
|
||||
'WHERE id = $4';
|
||||
|
||||
EXECUTE _sql USING
|
||||
_update_record.sort_order,
|
||||
_update_record.status_id,
|
||||
_update_record.priority_id,
|
||||
_update_record.task_id;
|
||||
END IF;
|
||||
|
||||
-- Handle phase updates separately since it's in a different table
|
||||
IF _update_record.phase_id IS NOT NULL THEN
|
||||
INSERT INTO task_phase (task_id, phase_id)
|
||||
VALUES (_update_record.task_id, _update_record.phase_id)
|
||||
ON CONFLICT (task_id) DO UPDATE SET phase_id = _update_record.phase_id;
|
||||
END IF;
|
||||
END LOOP;
|
||||
END;
|
||||
$$;
|
||||
|
||||
-- Updated main sort order change handler
|
||||
CREATE OR REPLACE FUNCTION handle_task_list_sort_order_change(_body json) RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_from_index INT;
|
||||
_to_index INT;
|
||||
_task_id UUID;
|
||||
_project_id UUID;
|
||||
_from_group UUID;
|
||||
_to_group UUID;
|
||||
_group_by TEXT;
|
||||
_batch_size INT := 100;
|
||||
_sort_column TEXT;
|
||||
_sql TEXT;
|
||||
BEGIN
|
||||
_project_id = (_body ->> 'project_id')::UUID;
|
||||
_task_id = (_body ->> 'task_id')::UUID;
|
||||
_from_index = (_body ->> 'from_index')::INT;
|
||||
_to_index = (_body ->> 'to_index')::INT;
|
||||
_from_group = (_body ->> 'from_group')::UUID;
|
||||
_to_group = (_body ->> 'to_group')::UUID;
|
||||
_group_by = (_body ->> 'group_by')::TEXT;
|
||||
|
||||
-- Get the appropriate sort column
|
||||
_sort_column := get_sort_column_name(_group_by);
|
||||
|
||||
-- Handle group changes
|
||||
IF (_from_group <> _to_group OR (_from_group <> _to_group) IS NULL) THEN
|
||||
IF (_group_by = 'status') THEN
|
||||
UPDATE tasks
|
||||
SET status_id = _to_group
|
||||
WHERE id = _task_id
|
||||
AND status_id = _from_group
|
||||
AND project_id = _project_id;
|
||||
END IF;
|
||||
|
||||
IF (_group_by = 'priority') THEN
|
||||
UPDATE tasks
|
||||
SET priority_id = _to_group
|
||||
WHERE id = _task_id
|
||||
AND priority_id = _from_group
|
||||
AND project_id = _project_id;
|
||||
END IF;
|
||||
|
||||
IF (_group_by = 'phase') THEN
|
||||
IF (is_null_or_empty(_to_group) IS FALSE) THEN
|
||||
INSERT INTO task_phase (task_id, phase_id)
|
||||
VALUES (_task_id, _to_group)
|
||||
ON CONFLICT (task_id) DO UPDATE SET phase_id = _to_group;
|
||||
ELSE
|
||||
DELETE FROM task_phase WHERE task_id = _task_id;
|
||||
END IF;
|
||||
END IF;
|
||||
END IF;
|
||||
|
||||
-- Handle sort order changes using dynamic SQL
|
||||
IF (_from_index <> _to_index) THEN
|
||||
-- For the main sort_order column, we need to be careful about unique constraints
|
||||
IF _sort_column = 'sort_order' THEN
|
||||
-- Use a transaction-safe approach for the main sort_order column
|
||||
IF (_to_index > _from_index) THEN
|
||||
-- Moving down: decrease sort_order for items between old and new position
|
||||
UPDATE tasks SET sort_order = sort_order - 1
|
||||
WHERE project_id = _project_id
|
||||
AND sort_order > _from_index
|
||||
AND sort_order <= _to_index;
|
||||
ELSE
|
||||
-- Moving up: increase sort_order for items between new and old position
|
||||
UPDATE tasks SET sort_order = sort_order + 1
|
||||
WHERE project_id = _project_id
|
||||
AND sort_order >= _to_index
|
||||
AND sort_order < _from_index;
|
||||
END IF;
|
||||
|
||||
-- Set the new sort_order for the moved task
|
||||
UPDATE tasks SET sort_order = _to_index WHERE id = _task_id;
|
||||
ELSE
|
||||
-- For grouping-specific columns, use dynamic SQL since there's no unique constraint
|
||||
IF (_to_index > _from_index) THEN
|
||||
-- Moving down: decrease sort_order for items between old and new position
|
||||
_sql := 'UPDATE tasks SET ' || _sort_column || ' = ' || _sort_column || ' - 1 ' ||
|
||||
'WHERE project_id = $1 AND ' || _sort_column || ' > $2 AND ' || _sort_column || ' <= $3';
|
||||
EXECUTE _sql USING _project_id, _from_index, _to_index;
|
||||
ELSE
|
||||
-- Moving up: increase sort_order for items between new and old position
|
||||
_sql := 'UPDATE tasks SET ' || _sort_column || ' = ' || _sort_column || ' + 1 ' ||
|
||||
'WHERE project_id = $1 AND ' || _sort_column || ' >= $2 AND ' || _sort_column || ' < $3';
|
||||
EXECUTE _sql USING _project_id, _to_index, _from_index;
|
||||
END IF;
|
||||
|
||||
-- Set the new sort_order for the moved task
|
||||
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1 WHERE id = $2';
|
||||
EXECUTE _sql USING _to_index, _task_id;
|
||||
END IF;
|
||||
END IF;
|
||||
END;
|
||||
$$;
|
||||
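
For reference, a caller hands `handle_task_list_sort_order_change` a single JSON body with the fields it reads above. A hedged sketch of assembling and sending that body from Node, assuming `node-postgres`; the actual socket/controller wiring may differ:

```typescript
import { Pool } from 'pg';

const pool = new Pool();

// Field names follow what the function extracts from _body above.
interface SortOrderChangeBody {
  project_id: string;
  task_id: string;
  from_index: number;
  to_index: number;
  from_group: string | null;
  to_group: string | null;
  group_by: 'status' | 'priority' | 'phase' | 'members';
}

export async function moveTask(body: SortOrderChangeBody): Promise<void> {
  await pool.query('SELECT handle_task_list_sort_order_change($1::json)', [JSON.stringify(body)]);
}
```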
@@ -1,179 +0,0 @@
|
||||
-- Migration: Fix sort order constraint violations
|
||||
|
||||
-- First, let's ensure all existing tasks have unique sort_order values within each project
|
||||
-- This is a one-time fix to ensure data consistency
|
||||
|
||||
DO $$
|
||||
DECLARE
|
||||
_project RECORD;
|
||||
_task RECORD;
|
||||
_counter INTEGER;
|
||||
BEGIN
|
||||
-- For each project, reassign sort_order values to ensure uniqueness
|
||||
FOR _project IN
|
||||
SELECT DISTINCT project_id
|
||||
FROM tasks
|
||||
WHERE project_id IS NOT NULL
|
||||
LOOP
|
||||
_counter := 0;
|
||||
|
||||
-- Reassign sort_order values sequentially for this project
|
||||
FOR _task IN
|
||||
SELECT id
|
||||
FROM tasks
|
||||
WHERE project_id = _project.project_id
|
||||
ORDER BY sort_order, created_at
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET sort_order = _counter
|
||||
WHERE id = _task.id;
|
||||
|
||||
_counter := _counter + 1;
|
||||
END LOOP;
|
||||
END LOOP;
|
||||
END
|
||||
$$;
|
||||
|
||||
-- Now create a better version of our functions that properly handles the constraints
|
||||
|
||||
-- Updated bulk sort order function that avoids sort_order conflicts
|
||||
CREATE OR REPLACE FUNCTION update_task_sort_orders_bulk(_updates json, _group_by text DEFAULT 'status') RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_update_record RECORD;
|
||||
_sort_column TEXT;
|
||||
_sql TEXT;
|
||||
BEGIN
|
||||
-- Get the appropriate sort column based on grouping
|
||||
_sort_column := get_sort_column_name(_group_by);
|
||||
|
||||
-- Process each update record
|
||||
FOR _update_record IN
|
||||
SELECT
|
||||
(item->>'task_id')::uuid as task_id,
|
||||
(item->>'sort_order')::int as sort_order,
|
||||
(item->>'status_id')::uuid as status_id,
|
||||
(item->>'priority_id')::uuid as priority_id,
|
||||
(item->>'phase_id')::uuid as phase_id
|
||||
FROM json_array_elements(_updates) as item
|
||||
LOOP
|
||||
-- Update the grouping-specific sort column and other fields
|
||||
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1, ' ||
|
||||
'status_id = COALESCE($2, status_id), ' ||
|
||||
'priority_id = COALESCE($3, priority_id), ' ||
|
||||
'updated_at = CURRENT_TIMESTAMP ' ||
|
||||
'WHERE id = $4';
|
||||
|
||||
EXECUTE _sql USING
|
||||
_update_record.sort_order,
|
||||
_update_record.status_id,
|
||||
_update_record.priority_id,
|
||||
_update_record.task_id;
|
||||
|
||||
-- Handle phase updates separately since it's in a different table
|
||||
IF _update_record.phase_id IS NOT NULL THEN
|
||||
INSERT INTO task_phase (task_id, phase_id)
|
||||
VALUES (_update_record.task_id, _update_record.phase_id)
|
||||
ON CONFLICT (task_id) DO UPDATE SET phase_id = _update_record.phase_id;
|
||||
END IF;
|
||||
END LOOP;
|
||||
END;
|
||||
$$;
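A usage sketch for the bulk updater: the payload is a JSON array of the fields parsed in the loop above, with placeholder UUIDs. Keys that are omitted come through as NULL, so the COALESCE calls keep the existing status/priority and the phase upsert is skipped:

```sql
-- Hypothetical payload: reorder two tasks inside the same status group.
SELECT update_task_sort_orders_bulk('[
  {"task_id": "11111111-1111-1111-1111-111111111111", "sort_order": 0,
   "status_id": "22222222-2222-2222-2222-222222222222"},
  {"task_id": "33333333-3333-3333-3333-333333333333", "sort_order": 1,
   "status_id": "22222222-2222-2222-2222-222222222222"}
]'::json, 'status');
```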
|
||||
|
||||
-- Also update the helper function to be more explicit
|
||||
CREATE OR REPLACE FUNCTION get_sort_column_name(_group_by TEXT) RETURNS TEXT
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
BEGIN
|
||||
CASE _group_by
|
||||
WHEN 'status' THEN RETURN 'status_sort_order';
|
||||
WHEN 'priority' THEN RETURN 'priority_sort_order';
|
||||
WHEN 'phase' THEN RETURN 'phase_sort_order';
|
||||
WHEN 'members' THEN RETURN 'member_sort_order';
|
||||
-- For backward compatibility, still support general sort_order but be explicit
|
||||
WHEN 'general' THEN RETURN 'sort_order';
|
||||
ELSE RETURN 'status_sort_order'; -- Default to status sorting
|
||||
END CASE;
|
||||
END;
|
||||
$$;
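The mapping is easiest to see from a few direct calls; the results follow from the CASE branches above:

```sql
SELECT get_sort_column_name('status');    -- 'status_sort_order'
SELECT get_sort_column_name('priority');  -- 'priority_sort_order'
SELECT get_sort_column_name('general');   -- 'sort_order' (backward compatibility)
SELECT get_sort_column_name('anything');  -- 'status_sort_order' (default branch)
```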
|
||||
|
||||
-- Updated main sort order change handler that avoids conflicts
|
||||
CREATE OR REPLACE FUNCTION handle_task_list_sort_order_change(_body json) RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_from_index INT;
|
||||
_to_index INT;
|
||||
_task_id UUID;
|
||||
_project_id UUID;
|
||||
_from_group UUID;
|
||||
_to_group UUID;
|
||||
_group_by TEXT;
|
||||
_sort_column TEXT;
|
||||
_sql TEXT;
|
||||
BEGIN
|
||||
_project_id = (_body ->> 'project_id')::UUID;
|
||||
_task_id = (_body ->> 'task_id')::UUID;
|
||||
_from_index = (_body ->> 'from_index')::INT;
|
||||
_to_index = (_body ->> 'to_index')::INT;
|
||||
_from_group = (_body ->> 'from_group')::UUID;
|
||||
_to_group = (_body ->> 'to_group')::UUID;
|
||||
_group_by = (_body ->> 'group_by')::TEXT;
|
||||
|
||||
-- Get the appropriate sort column
|
||||
_sort_column := get_sort_column_name(_group_by);
|
||||
|
||||
-- Handle group changes first
|
||||
IF (_from_group <> _to_group OR (_from_group <> _to_group) IS NULL) THEN
|
||||
IF (_group_by = 'status') THEN
|
||||
UPDATE tasks
|
||||
SET status_id = _to_group, updated_at = CURRENT_TIMESTAMP
|
||||
WHERE id = _task_id
|
||||
AND project_id = _project_id;
|
||||
END IF;
|
||||
|
||||
IF (_group_by = 'priority') THEN
|
||||
UPDATE tasks
|
||||
SET priority_id = _to_group, updated_at = CURRENT_TIMESTAMP
|
||||
WHERE id = _task_id
|
||||
AND project_id = _project_id;
|
||||
END IF;
|
||||
|
||||
IF (_group_by = 'phase') THEN
|
||||
IF (is_null_or_empty(_to_group) IS FALSE) THEN
|
||||
INSERT INTO task_phase (task_id, phase_id)
|
||||
VALUES (_task_id, _to_group)
|
||||
ON CONFLICT (task_id) DO UPDATE SET phase_id = _to_group;
|
||||
ELSE
|
||||
DELETE FROM task_phase WHERE task_id = _task_id;
|
||||
END IF;
|
||||
END IF;
|
||||
END IF;
|
||||
|
||||
-- Handle sort order changes for the grouping-specific column only
|
||||
IF (_from_index <> _to_index) THEN
|
||||
-- Update the grouping-specific sort order (no unique constraint issues)
|
||||
IF (_to_index > _from_index) THEN
|
||||
-- Moving down: decrease sort order for items between old and new position
|
||||
_sql := 'UPDATE tasks SET ' || _sort_column || ' = ' || _sort_column || ' - 1, ' ||
|
||||
'updated_at = CURRENT_TIMESTAMP ' ||
|
||||
'WHERE project_id = $1 AND ' || _sort_column || ' > $2 AND ' || _sort_column || ' <= $3';
|
||||
EXECUTE _sql USING _project_id, _from_index, _to_index;
|
||||
ELSE
|
||||
-- Moving up: increase sort order for items between new and old position
|
||||
_sql := 'UPDATE tasks SET ' || _sort_column || ' = ' || _sort_column || ' + 1, ' ||
|
||||
'updated_at = CURRENT_TIMESTAMP ' ||
|
||||
'WHERE project_id = $1 AND ' || _sort_column || ' >= $2 AND ' || _sort_column || ' < $3';
|
||||
EXECUTE _sql USING _project_id, _to_index, _from_index;
|
||||
END IF;
|
||||
|
||||
-- Set the new sort order for the moved task
|
||||
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1, updated_at = CURRENT_TIMESTAMP WHERE id = $2';
|
||||
EXECUTE _sql USING _to_index, _task_id;
|
||||
END IF;
|
||||
END;
|
||||
$$;
|
||||
@@ -1,93 +0,0 @@
|
||||
-- Migration: Add survey tables for account setup questionnaire
|
||||
-- Date: 2025-07-24
|
||||
-- Description: Creates tables to store survey questions and user responses for account setup flow
|
||||
|
||||
BEGIN;
|
||||
|
||||
-- Create surveys table to define different types of surveys
|
||||
CREATE TABLE IF NOT EXISTS surveys (
|
||||
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||
name VARCHAR(255) NOT NULL,
|
||||
description TEXT,
|
||||
survey_type VARCHAR(50) DEFAULT 'account_setup' NOT NULL, -- 'account_setup', 'onboarding', 'feedback'
|
||||
is_active BOOLEAN DEFAULT TRUE NOT NULL,
|
||||
created_at TIMESTAMP DEFAULT now() NOT NULL,
|
||||
updated_at TIMESTAMP DEFAULT now() NOT NULL
|
||||
);
|
||||
|
||||
-- Create survey_questions table to store individual questions
|
||||
CREATE TABLE IF NOT EXISTS survey_questions (
|
||||
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||
survey_id UUID REFERENCES surveys(id) ON DELETE CASCADE NOT NULL,
|
||||
question_key VARCHAR(100) NOT NULL, -- Used for localization keys
|
||||
question_type VARCHAR(50) NOT NULL, -- 'single_choice', 'multiple_choice', 'text'
|
||||
is_required BOOLEAN DEFAULT FALSE NOT NULL,
|
||||
sort_order INTEGER DEFAULT 0 NOT NULL,
|
||||
options JSONB, -- For choice questions, store options as JSON array
|
||||
created_at TIMESTAMP DEFAULT now() NOT NULL,
|
||||
updated_at TIMESTAMP DEFAULT now() NOT NULL
|
||||
);
|
||||
|
||||
-- Create survey_responses table to track user responses to surveys
|
||||
CREATE TABLE IF NOT EXISTS survey_responses (
|
||||
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||
survey_id UUID REFERENCES surveys(id) ON DELETE CASCADE NOT NULL,
|
||||
user_id UUID REFERENCES users(id) ON DELETE CASCADE NOT NULL,
|
||||
is_completed BOOLEAN DEFAULT FALSE NOT NULL,
|
||||
started_at TIMESTAMP DEFAULT now() NOT NULL,
|
||||
completed_at TIMESTAMP,
|
||||
created_at TIMESTAMP DEFAULT now() NOT NULL,
|
||||
updated_at TIMESTAMP DEFAULT now() NOT NULL
|
||||
);
|
||||
|
||||
-- Create survey_answers table to store individual question answers
|
||||
CREATE TABLE IF NOT EXISTS survey_answers (
|
||||
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
|
||||
response_id UUID REFERENCES survey_responses(id) ON DELETE CASCADE NOT NULL,
|
||||
question_id UUID REFERENCES survey_questions(id) ON DELETE CASCADE NOT NULL,
|
||||
answer_text TEXT,
|
||||
answer_json JSONB, -- For multiple choice answers stored as array
|
||||
created_at TIMESTAMP DEFAULT now() NOT NULL,
|
||||
updated_at TIMESTAMP DEFAULT now() NOT NULL
|
||||
);
|
||||
|
||||
-- Add performance indexes
|
||||
CREATE INDEX IF NOT EXISTS idx_surveys_type_active ON surveys(survey_type, is_active);
|
||||
CREATE INDEX IF NOT EXISTS idx_survey_questions_survey_order ON survey_questions(survey_id, sort_order);
|
||||
CREATE INDEX IF NOT EXISTS idx_survey_responses_user_survey ON survey_responses(user_id, survey_id);
|
||||
CREATE INDEX IF NOT EXISTS idx_survey_responses_completed ON survey_responses(survey_id, is_completed);
|
||||
CREATE INDEX IF NOT EXISTS idx_survey_answers_response ON survey_answers(response_id);
|
||||
|
||||
-- Add constraints
|
||||
ALTER TABLE survey_questions ADD CONSTRAINT survey_questions_sort_order_check CHECK (sort_order >= 0);
|
||||
ALTER TABLE survey_questions ADD CONSTRAINT survey_questions_type_check CHECK (question_type IN ('single_choice', 'multiple_choice', 'text'));
|
||||
|
||||
-- Add unique constraint to prevent duplicate responses per user per survey
|
||||
ALTER TABLE survey_responses ADD CONSTRAINT unique_user_survey_response UNIQUE (user_id, survey_id);
|
||||
|
||||
-- Add unique constraint to prevent duplicate answers per question per response
|
||||
ALTER TABLE survey_answers ADD CONSTRAINT unique_response_question_answer UNIQUE (response_id, question_id);
|
||||
|
||||
-- Insert the default account setup survey
|
||||
INSERT INTO surveys (name, description, survey_type, is_active) VALUES
|
||||
('Account Setup Survey', 'Initial questionnaire during account setup to understand user needs', 'account_setup', true)
|
||||
ON CONFLICT DO NOTHING;
|
||||
|
||||
-- Get the survey ID for inserting questions
|
||||
DO $$
|
||||
DECLARE
|
||||
survey_uuid UUID;
|
||||
BEGIN
|
||||
SELECT id INTO survey_uuid FROM surveys WHERE survey_type = 'account_setup' AND name = 'Account Setup Survey' LIMIT 1;
|
||||
|
||||
-- Insert survey questions
|
||||
INSERT INTO survey_questions (survey_id, question_key, question_type, is_required, sort_order, options) VALUES
|
||||
(survey_uuid, 'organization_type', 'single_choice', true, 1, '["freelancer", "startup", "small_medium_business", "agency", "enterprise", "other"]'),
|
||||
(survey_uuid, 'user_role', 'single_choice', true, 2, '["founder_ceo", "project_manager", "software_developer", "designer", "operations", "other"]'),
|
||||
(survey_uuid, 'main_use_cases', 'multiple_choice', true, 3, '["task_management", "team_collaboration", "resource_planning", "client_communication", "time_tracking", "other"]'),
|
||||
(survey_uuid, 'previous_tools', 'text', false, 4, null),
|
||||
(survey_uuid, 'how_heard_about', 'single_choice', false, 5, '["google_search", "twitter", "linkedin", "friend_colleague", "blog_article", "other"]')
|
||||
ON CONFLICT DO NOTHING;
|
||||
END $$;
|
||||
|
||||
COMMIT;
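Two example queries against this schema, assuming the seed data above has been applied; the user UUID is a placeholder:

```sql
-- Fetch the active account-setup questions in display order
SELECT q.id, q.question_key, q.question_type, q.is_required, q.options
FROM survey_questions q
JOIN surveys s ON s.id = q.survey_id
WHERE s.survey_type = 'account_setup'
  AND s.is_active = TRUE
ORDER BY q.sort_order;

-- Start (or resume) a user's response; duplicates are blocked by
-- the unique_user_survey_response constraint
INSERT INTO survey_responses (survey_id, user_id)
SELECT s.id, '44444444-4444-4444-4444-444444444444'
FROM surveys s
WHERE s.survey_type = 'account_setup' AND s.is_active = TRUE
ON CONFLICT (user_id, survey_id) DO NOTHING;
```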
|
||||
@@ -1,72 +0,0 @@
|
# Node-pg-migrate Migrations

This directory contains database migrations managed by node-pg-migrate.

## Migration Commands

- `npm run migrate:create -- migration-name` - Create a new migration file
- `npm run migrate:up` - Run all pending migrations
- `npm run migrate:down` - Rollback the last migration
- `npm run migrate:redo` - Rollback and re-run the last migration

## Migration File Format

Migrations are JavaScript files with timestamp prefixes (e.g., `20250115000000_performance-indexes.js`).

Each migration file exports two functions:
- `exports.up` - Contains the forward migration logic
- `exports.down` - Contains the rollback logic

## Best Practices

1. **Always use IF EXISTS/IF NOT EXISTS checks** to make migrations idempotent
2. **Test migrations locally** before deploying to production
3. **Include rollback logic** in the `down` function for all changes
4. **Use descriptive names** for migration files
5. **Keep migrations focused** - one logical change per migration

## Example Migration

```javascript
exports.up = pgm => {
  // Create table with IF NOT EXISTS
  pgm.createTable('users', {
    id: 'id',
    name: { type: 'varchar(100)', notNull: true },
    created_at: {
      type: 'timestamp',
      notNull: true,
      default: pgm.func('current_timestamp')
    }
  }, { ifNotExists: true });

  // Add index with IF NOT EXISTS
  pgm.createIndex('users', 'name', {
    name: 'idx_users_name',
    ifNotExists: true
  });
};

exports.down = pgm => {
  // Drop in reverse order
  pgm.dropIndex('users', 'name', {
    name: 'idx_users_name',
    ifExists: true
  });

  pgm.dropTable('users', { ifExists: true });
};
```

## Migration History

The `pgmigrations` table tracks which migrations have been run. Do not modify this table manually.

## Converting from SQL Migrations

When converting SQL migrations to node-pg-migrate format:

1. Wrap SQL statements in `pgm.sql()` calls
2. Use node-pg-migrate helper methods where possible (createTable, addColumns, etc.)
3. Always include `IF EXISTS/IF NOT EXISTS` checks
4. Ensure proper rollback logic in the `down` function
@@ -12,7 +12,7 @@ CREATE TYPE DEPENDENCY_TYPE AS ENUM ('blocked_by');
|
||||
|
||||
CREATE TYPE SCHEDULE_TYPE AS ENUM ('daily', 'weekly', 'yearly', 'monthly', 'every_x_days', 'every_x_weeks', 'every_x_months');
|
||||
|
||||
CREATE TYPE LANGUAGE_TYPE AS ENUM ('en', 'es', 'pt', 'alb', 'de', 'zh_cn');
|
||||
CREATE TYPE LANGUAGE_TYPE AS ENUM ('en', 'es', 'pt');
|
||||
|
||||
-- START: Users
|
||||
CREATE SEQUENCE IF NOT EXISTS users_user_no_seq START 1;
|
||||
@@ -1391,30 +1391,27 @@ ALTER TABLE task_work_log
|
||||
CHECK (time_spent >= (0)::NUMERIC);
|
||||
|
||||
CREATE TABLE IF NOT EXISTS tasks (
|
||||
id UUID DEFAULT uuid_generate_v4() NOT NULL,
|
||||
name TEXT NOT NULL,
|
||||
description TEXT,
|
||||
done BOOLEAN DEFAULT FALSE NOT NULL,
|
||||
total_minutes NUMERIC DEFAULT 0 NOT NULL,
|
||||
archived BOOLEAN DEFAULT FALSE NOT NULL,
|
||||
task_no BIGINT NOT NULL,
|
||||
start_date TIMESTAMP WITH TIME ZONE,
|
||||
end_date TIMESTAMP WITH TIME ZONE,
|
||||
priority_id UUID NOT NULL,
|
||||
project_id UUID NOT NULL,
|
||||
reporter_id UUID NOT NULL,
|
||||
parent_task_id UUID,
|
||||
status_id UUID NOT NULL,
|
||||
completed_at TIMESTAMP WITH TIME ZONE,
|
||||
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL,
|
||||
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL,
|
||||
sort_order INTEGER DEFAULT 0 NOT NULL,
|
||||
roadmap_sort_order INTEGER DEFAULT 0 NOT NULL,
|
||||
status_sort_order INTEGER DEFAULT 0 NOT NULL,
|
||||
priority_sort_order INTEGER DEFAULT 0 NOT NULL,
|
||||
phase_sort_order INTEGER DEFAULT 0 NOT NULL,
|
||||
billable BOOLEAN DEFAULT TRUE,
|
||||
schedule_id UUID
|
||||
id UUID DEFAULT uuid_generate_v4() NOT NULL,
|
||||
name TEXT NOT NULL,
|
||||
description TEXT,
|
||||
done BOOLEAN DEFAULT FALSE NOT NULL,
|
||||
total_minutes NUMERIC DEFAULT 0 NOT NULL,
|
||||
archived BOOLEAN DEFAULT FALSE NOT NULL,
|
||||
task_no BIGINT NOT NULL,
|
||||
start_date TIMESTAMP WITH TIME ZONE,
|
||||
end_date TIMESTAMP WITH TIME ZONE,
|
||||
priority_id UUID NOT NULL,
|
||||
project_id UUID NOT NULL,
|
||||
reporter_id UUID NOT NULL,
|
||||
parent_task_id UUID,
|
||||
status_id UUID NOT NULL,
|
||||
completed_at TIMESTAMP WITH TIME ZONE,
|
||||
created_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL,
|
||||
updated_at TIMESTAMP WITH TIME ZONE DEFAULT CURRENT_TIMESTAMP NOT NULL,
|
||||
sort_order INTEGER DEFAULT 0 NOT NULL,
|
||||
roadmap_sort_order INTEGER DEFAULT 0 NOT NULL,
|
||||
billable BOOLEAN DEFAULT TRUE,
|
||||
schedule_id UUID
|
||||
);
|
||||
|
||||
ALTER TABLE tasks
|
||||
@@ -1502,21 +1499,6 @@ ALTER TABLE tasks
|
||||
ADD CONSTRAINT tasks_total_minutes_check
|
||||
CHECK ((total_minutes >= (0)::NUMERIC) AND (total_minutes <= (999999)::NUMERIC));
|
||||
|
||||
-- Add constraints for new sort order columns
|
||||
ALTER TABLE tasks ADD CONSTRAINT tasks_status_sort_order_check CHECK (status_sort_order >= 0);
|
||||
ALTER TABLE tasks ADD CONSTRAINT tasks_priority_sort_order_check CHECK (priority_sort_order >= 0);
|
||||
ALTER TABLE tasks ADD CONSTRAINT tasks_phase_sort_order_check CHECK (phase_sort_order >= 0);
|
||||
|
||||
-- Add indexes for performance on new sort order columns
|
||||
CREATE INDEX IF NOT EXISTS idx_tasks_status_sort_order ON tasks(project_id, status_sort_order);
|
||||
CREATE INDEX IF NOT EXISTS idx_tasks_priority_sort_order ON tasks(project_id, priority_sort_order);
|
||||
CREATE INDEX IF NOT EXISTS idx_tasks_phase_sort_order ON tasks(project_id, phase_sort_order);
|
||||
|
||||
-- Add comments for documentation
|
||||
COMMENT ON COLUMN tasks.status_sort_order IS 'Sort order when grouped by status';
|
||||
COMMENT ON COLUMN tasks.priority_sort_order IS 'Sort order when grouped by priority';
|
||||
COMMENT ON COLUMN tasks.phase_sort_order IS 'Sort order when grouped by phase';
|
||||
|
||||
CREATE TABLE IF NOT EXISTS tasks_assignees (
|
||||
task_id UUID NOT NULL,
|
||||
project_member_id UUID NOT NULL,
|
||||
|
||||
@@ -32,37 +32,3 @@ SELECT u.avatar_url,
|
||||
FROM team_members
|
||||
LEFT JOIN users u ON team_members.user_id = u.id;
|
||||
|
||||
-- PERFORMANCE OPTIMIZATION: Create materialized view for team member info
|
||||
-- This pre-calculates the expensive joins and subqueries from team_member_info_view
|
||||
CREATE MATERIALIZED VIEW IF NOT EXISTS team_member_info_mv AS
|
||||
SELECT
|
||||
u.avatar_url,
|
||||
COALESCE(u.email, ei.email) AS email,
|
||||
COALESCE(u.name, ei.name) AS name,
|
||||
u.id AS user_id,
|
||||
tm.id AS team_member_id,
|
||||
tm.team_id,
|
||||
tm.active,
|
||||
u.socket_id
|
||||
FROM team_members tm
|
||||
LEFT JOIN users u ON tm.user_id = u.id
|
||||
LEFT JOIN email_invitations ei ON ei.team_member_id = tm.id
|
||||
WHERE tm.active = TRUE;
|
||||
|
||||
-- Create unique index on the materialized view for fast lookups
|
||||
CREATE UNIQUE INDEX IF NOT EXISTS idx_team_member_info_mv_team_member_id
|
||||
ON team_member_info_mv(team_member_id);
|
||||
|
||||
CREATE INDEX IF NOT EXISTS idx_team_member_info_mv_team_user
|
||||
ON team_member_info_mv(team_id, user_id);
|
||||
|
||||
-- Function to refresh the materialized view
|
||||
CREATE OR REPLACE FUNCTION refresh_team_member_info_mv()
|
||||
RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS $$
|
||||
BEGIN
|
||||
REFRESH MATERIALIZED VIEW CONCURRENTLY team_member_info_mv;
|
||||
END;
|
||||
$$;
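A usage sketch: refresh after team-membership changes, then read through the materialized view instead of the regular view. REFRESH ... CONCURRENTLY works because of the unique index on team_member_id created above; the UUID below is a placeholder:

```sql
-- Refresh the pre-computed membership data
SELECT refresh_team_member_info_mv();

-- Fast lookup by team member id
SELECT name, email, avatar_url
FROM team_member_info_mv
WHERE team_member_id = '55555555-5555-5555-5555-555555555555';
```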
|
||||
|
||||
|
||||
@@ -3351,15 +3351,15 @@ BEGIN
|
||||
SELECT COALESCE(ARRAY_TO_JSON(ARRAY_AGG(ROW_TO_JSON(rec))), '[]'::JSON)
|
||||
FROM (SELECT team_member_id,
|
||||
project_member_id,
|
||||
COALESCE((SELECT name FROM team_member_info_view WHERE team_member_info_view.team_member_id = tm.id), '') as name,
|
||||
COALESCE((SELECT email_notifications_enabled
|
||||
(SELECT name FROM team_member_info_view WHERE team_member_info_view.team_member_id = tm.id),
|
||||
(SELECT email_notifications_enabled
|
||||
FROM notification_settings
|
||||
WHERE team_id = tm.team_id
|
||||
AND notification_settings.user_id = u.id), false) AS email_notifications_enabled,
|
||||
COALESCE(u.avatar_url, '') as avatar_url,
|
||||
AND notification_settings.user_id = u.id) AS email_notifications_enabled,
|
||||
u.avatar_url,
|
||||
u.id AS user_id,
|
||||
COALESCE(u.email, '') as email,
|
||||
COALESCE(u.socket_id, '') as socket_id,
|
||||
u.email,
|
||||
u.socket_id AS socket_id,
|
||||
tm.team_id AS team_id
|
||||
FROM tasks_assignees
|
||||
INNER JOIN team_members tm ON tm.id = tasks_assignees.team_member_id
|
||||
@@ -4066,14 +4066,14 @@ DECLARE
|
||||
_schedule_id JSON;
|
||||
_task_completed_at TIMESTAMPTZ;
|
||||
BEGIN
|
||||
SELECT COALESCE(name, '') FROM tasks WHERE id = _task_id INTO _task_name;
|
||||
SELECT name FROM tasks WHERE id = _task_id INTO _task_name;
|
||||
|
||||
SELECT COALESCE(name, '')
|
||||
SELECT name
|
||||
FROM task_statuses
|
||||
WHERE id = (SELECT status_id FROM tasks WHERE id = _task_id)
|
||||
INTO _previous_status_name;
|
||||
|
||||
SELECT COALESCE(name, '') FROM task_statuses WHERE id = _status_id INTO _new_status_name;
|
||||
SELECT name FROM task_statuses WHERE id = _status_id INTO _new_status_name;
|
||||
|
||||
IF (_previous_status_name != _new_status_name)
|
||||
THEN
|
||||
@@ -4081,22 +4081,14 @@ BEGIN
|
||||
|
||||
SELECT get_task_complete_info(_task_id, _status_id) INTO _task_info;
|
||||
|
||||
SELECT COALESCE(name, '') FROM users WHERE id = _user_id INTO _updater_name;
|
||||
SELECT name FROM users WHERE id = _user_id INTO _updater_name;
|
||||
|
||||
_message = CONCAT(_updater_name, ' transitioned "', _task_name, '" from ', _previous_status_name, ' ⟶ ',
|
||||
_new_status_name);
|
||||
END IF;
|
||||
|
||||
SELECT completed_at FROM tasks WHERE id = _task_id INTO _task_completed_at;
|
||||
|
||||
-- Handle schedule_id properly for recurring tasks
|
||||
SELECT CASE
|
||||
WHEN schedule_id IS NULL THEN 'null'::json
|
||||
ELSE json_build_object('id', schedule_id)
|
||||
END
|
||||
FROM tasks
|
||||
WHERE id = _task_id
|
||||
INTO _schedule_id;
|
||||
SELECT schedule_id FROM tasks WHERE id = _task_id INTO _schedule_id;
|
||||
|
||||
SELECT COALESCE(ROW_TO_JSON(r), '{}'::JSON)
|
||||
FROM (SELECT is_done, is_doing, is_todo
|
||||
@@ -4105,7 +4097,7 @@ BEGIN
|
||||
INTO _status_category;
|
||||
|
||||
RETURN JSON_BUILD_OBJECT(
|
||||
'message', COALESCE(_message, ''),
|
||||
'message', _message,
|
||||
'project_id', (SELECT project_id FROM tasks WHERE id = _task_id),
|
||||
'parent_done', (CASE
|
||||
WHEN EXISTS(SELECT 1
|
||||
@@ -4113,14 +4105,14 @@ BEGIN
|
||||
WHERE tasks_with_status_view.task_id = _task_id
|
||||
AND is_done IS TRUE) THEN 1
|
||||
ELSE 0 END),
|
||||
'color_code', COALESCE((_task_info ->> 'color_code')::TEXT, ''),
|
||||
'color_code_dark', COALESCE((_task_info ->> 'color_code_dark')::TEXT, ''),
|
||||
'total_tasks', COALESCE((_task_info ->> 'total_tasks')::INT, 0),
|
||||
'total_completed', COALESCE((_task_info ->> 'total_completed')::INT, 0),
|
||||
'members', COALESCE((_task_info ->> 'members')::JSON, '[]'::JSON),
|
||||
'color_code', (_task_info ->> 'color_code')::TEXT,
|
||||
'color_code_dark', (_task_info ->> 'color_code_dark')::TEXT,
|
||||
'total_tasks', (_task_info ->> 'total_tasks')::INT,
|
||||
'total_completed', (_task_info ->> 'total_completed')::INT,
|
||||
'members', (_task_info ->> 'members')::JSON,
|
||||
'completed_at', _task_completed_at,
|
||||
'status_category', COALESCE(_status_category, '{}'::JSON),
|
||||
'schedule_id', COALESCE(_schedule_id, 'null'::JSON)
|
||||
'status_category', _status_category,
|
||||
'schedule_id', _schedule_id
|
||||
);
|
||||
END
|
||||
$$;
|
||||
@@ -4313,24 +4305,6 @@ BEGIN
|
||||
END
|
||||
$$;
|
||||
|
||||
-- Helper function to get the appropriate sort column name based on grouping type
|
||||
CREATE OR REPLACE FUNCTION get_sort_column_name(_group_by TEXT) RETURNS TEXT
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
BEGIN
|
||||
CASE _group_by
|
||||
WHEN 'status' THEN RETURN 'status_sort_order';
|
||||
WHEN 'priority' THEN RETURN 'priority_sort_order';
|
||||
WHEN 'phase' THEN RETURN 'phase_sort_order';
|
||||
WHEN 'members' THEN RETURN 'member_sort_order';
|
||||
-- For backward compatibility, still support general sort_order but be explicit
|
||||
WHEN 'general' THEN RETURN 'sort_order';
|
||||
ELSE RETURN 'status_sort_order'; -- Default to status sorting
|
||||
END CASE;
|
||||
END;
|
||||
$$;
|
||||
|
||||
CREATE OR REPLACE FUNCTION handle_task_list_sort_order_change(_body json) RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
@@ -4343,67 +4317,54 @@ DECLARE
|
||||
_from_group UUID;
|
||||
_to_group UUID;
|
||||
_group_by TEXT;
|
||||
_sort_column TEXT;
|
||||
_sql TEXT;
|
||||
BEGIN
|
||||
_project_id = (_body ->> 'project_id')::UUID;
|
||||
_task_id = (_body ->> 'task_id')::UUID;
|
||||
_from_index = (_body ->> 'from_index')::INT;
|
||||
_to_index = (_body ->> 'to_index')::INT;
|
||||
|
||||
_from_index = (_body ->> 'from_index')::INT; -- from sort_order
|
||||
_to_index = (_body ->> 'to_index')::INT; -- to sort_order
|
||||
|
||||
_from_group = (_body ->> 'from_group')::UUID;
|
||||
_to_group = (_body ->> 'to_group')::UUID;
|
||||
|
||||
_group_by = (_body ->> 'group_by')::TEXT;
|
||||
|
||||
-- Get the appropriate sort column
|
||||
_sort_column := get_sort_column_name(_group_by);
|
||||
|
||||
-- Handle group changes first
|
||||
IF (_from_group <> _to_group OR (_from_group <> _to_group) IS NULL) THEN
|
||||
IF (_group_by = 'status') THEN
|
||||
UPDATE tasks
|
||||
SET status_id = _to_group, updated_at = CURRENT_TIMESTAMP
|
||||
WHERE id = _task_id
|
||||
AND project_id = _project_id;
|
||||
IF (_from_group <> _to_group OR (_from_group <> _to_group) IS NULL)
|
||||
THEN
|
||||
IF (_group_by = 'status')
|
||||
THEN
|
||||
UPDATE tasks SET status_id = _to_group WHERE id = _task_id AND status_id = _from_group;
|
||||
END IF;
|
||||
|
||||
IF (_group_by = 'priority') THEN
|
||||
UPDATE tasks
|
||||
SET priority_id = _to_group, updated_at = CURRENT_TIMESTAMP
|
||||
WHERE id = _task_id
|
||||
AND project_id = _project_id;
|
||||
IF (_group_by = 'priority')
|
||||
THEN
|
||||
UPDATE tasks SET priority_id = _to_group WHERE id = _task_id AND priority_id = _from_group;
|
||||
END IF;
|
||||
|
||||
IF (_group_by = 'phase') THEN
|
||||
IF (is_null_or_empty(_to_group) IS FALSE) THEN
|
||||
IF (_group_by = 'phase')
|
||||
THEN
|
||||
IF (is_null_or_empty(_to_group) IS FALSE)
|
||||
THEN
|
||||
INSERT INTO task_phase (task_id, phase_id)
|
||||
VALUES (_task_id, _to_group)
|
||||
ON CONFLICT (task_id) DO UPDATE SET phase_id = _to_group;
|
||||
ELSE
|
||||
DELETE FROM task_phase WHERE task_id = _task_id;
|
||||
END IF;
|
||||
IF (is_null_or_empty(_to_group) IS TRUE)
|
||||
THEN
|
||||
DELETE
|
||||
FROM task_phase
|
||||
WHERE task_id = _task_id;
|
||||
END IF;
|
||||
END IF;
|
||||
END IF;
|
||||
|
||||
-- Handle sort order changes for the grouping-specific column only
|
||||
IF (_from_index <> _to_index) THEN
|
||||
-- Update the grouping-specific sort order (no unique constraint issues)
|
||||
IF (_to_index > _from_index) THEN
|
||||
-- Moving down: decrease sort order for items between old and new position
|
||||
_sql := 'UPDATE tasks SET ' || _sort_column || ' = ' || _sort_column || ' - 1, ' ||
|
||||
'updated_at = CURRENT_TIMESTAMP ' ||
|
||||
'WHERE project_id = $1 AND ' || _sort_column || ' > $2 AND ' || _sort_column || ' <= $3';
|
||||
EXECUTE _sql USING _project_id, _from_index, _to_index;
|
||||
IF ((_body ->> 'to_last_index')::BOOLEAN IS TRUE AND _from_index < _to_index)
|
||||
THEN
|
||||
PERFORM handle_task_list_sort_inside_group(_from_index, _to_index, _task_id, _project_id);
|
||||
ELSE
|
||||
-- Moving up: increase sort order for items between new and old position
|
||||
_sql := 'UPDATE tasks SET ' || _sort_column || ' = ' || _sort_column || ' + 1, ' ||
|
||||
'updated_at = CURRENT_TIMESTAMP ' ||
|
||||
'WHERE project_id = $1 AND ' || _sort_column || ' >= $2 AND ' || _sort_column || ' < $3';
|
||||
EXECUTE _sql USING _project_id, _to_index, _from_index;
|
||||
PERFORM handle_task_list_sort_between_groups(_from_index, _to_index, _task_id, _project_id);
|
||||
END IF;
|
||||
|
||||
-- Set the new sort order for the moved task
|
||||
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1, updated_at = CURRENT_TIMESTAMP WHERE id = $2';
|
||||
EXECUTE _sql USING _to_index, _task_id;
|
||||
ELSE
|
||||
PERFORM handle_task_list_sort_inside_group(_from_index, _to_index, _task_id, _project_id);
|
||||
END IF;
|
||||
END
|
||||
$$;
|
||||
@@ -4608,31 +4569,31 @@ BEGIN
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Progress', 'PROGRESS', 3, TRUE);
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Status', 'STATUS', 4, TRUE);
|
||||
VALUES (_project_id, 'Members', 'ASSIGNEES', 4, TRUE);
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Members', 'ASSIGNEES', 5, TRUE);
|
||||
VALUES (_project_id, 'Labels', 'LABELS', 5, TRUE);
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Labels', 'LABELS', 6, TRUE);
|
||||
VALUES (_project_id, 'Status', 'STATUS', 6, TRUE);
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Phase', 'PHASE', 7, TRUE);
|
||||
VALUES (_project_id, 'Priority', 'PRIORITY', 7, TRUE);
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Priority', 'PRIORITY', 8, TRUE);
|
||||
VALUES (_project_id, 'Time Tracking', 'TIME_TRACKING', 8, TRUE);
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Time Tracking', 'TIME_TRACKING', 9, TRUE);
|
||||
VALUES (_project_id, 'Estimation', 'ESTIMATION', 9, FALSE);
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Estimation', 'ESTIMATION', 10, FALSE);
|
||||
VALUES (_project_id, 'Start Date', 'START_DATE', 10, FALSE);
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Start Date', 'START_DATE', 11, FALSE);
|
||||
VALUES (_project_id, 'Due Date', 'DUE_DATE', 11, TRUE);
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Due Date', 'DUE_DATE', 12, TRUE);
|
||||
VALUES (_project_id, 'Completed Date', 'COMPLETED_DATE', 12, FALSE);
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Completed Date', 'COMPLETED_DATE', 13, FALSE);
|
||||
VALUES (_project_id, 'Created Date', 'CREATED_DATE', 13, FALSE);
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Created Date', 'CREATED_DATE', 14, FALSE);
|
||||
VALUES (_project_id, 'Last Updated', 'LAST_UPDATED', 14, FALSE);
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Last Updated', 'LAST_UPDATED', 15, FALSE);
|
||||
VALUES (_project_id, 'Reporter', 'REPORTER', 15, FALSE);
|
||||
INSERT INTO project_task_list_cols (project_id, name, key, index, pinned)
|
||||
VALUES (_project_id, 'Reporter', 'REPORTER', 16, FALSE);
|
||||
VALUES (_project_id, 'Phase', 'PHASE', 16, FALSE);
|
||||
END
|
||||
$$;
|
||||
|
||||
@@ -5516,15 +5477,8 @@ $$
|
||||
DECLARE
|
||||
_iterator NUMERIC := 0;
|
||||
_status_id TEXT;
|
||||
_project_id UUID;
|
||||
_base_sort_order NUMERIC;
|
||||
BEGIN
|
||||
-- Get the project_id from the first status to ensure we update all statuses in the same project
|
||||
SELECT project_id INTO _project_id
|
||||
FROM task_statuses
|
||||
WHERE id = (SELECT TRIM(BOTH '"' FROM JSON_ARRAY_ELEMENTS(_status_ids)::TEXT) LIMIT 1)::UUID;
|
||||
|
||||
-- Update the sort_order for statuses in the provided order
|
||||
FOR _status_id IN SELECT * FROM JSON_ARRAY_ELEMENTS((_status_ids)::JSON)
|
||||
LOOP
|
||||
UPDATE task_statuses
|
||||
@@ -5533,29 +5487,6 @@ BEGIN
|
||||
_iterator := _iterator + 1;
|
||||
END LOOP;
|
||||
|
||||
-- Get the base sort order for remaining statuses (simple count approach)
|
||||
SELECT COUNT(*) INTO _base_sort_order
|
||||
FROM task_statuses ts2
|
||||
WHERE ts2.project_id = _project_id
|
||||
AND ts2.id = ANY(SELECT (TRIM(BOTH '"' FROM JSON_ARRAY_ELEMENTS(_status_ids)::TEXT))::UUID);
|
||||
|
||||
-- Update remaining statuses with simple sequential numbering
|
||||
-- Reset iterator to start from base_sort_order
|
||||
_iterator := _base_sort_order;
|
||||
|
||||
-- Use a cursor approach to avoid window functions
|
||||
FOR _status_id IN
|
||||
SELECT id::TEXT FROM task_statuses
|
||||
WHERE project_id = _project_id
|
||||
AND id NOT IN (SELECT (TRIM(BOTH '"' FROM JSON_ARRAY_ELEMENTS(_status_ids)::TEXT))::UUID)
|
||||
ORDER BY sort_order
|
||||
LOOP
|
||||
UPDATE task_statuses
|
||||
SET sort_order = _iterator
|
||||
WHERE id = _status_id::UUID;
|
||||
_iterator := _iterator + 1;
|
||||
END LOOP;
|
||||
|
||||
RETURN;
|
||||
END
|
||||
$$;
|
||||
@@ -6217,434 +6148,3 @@ BEGIN
|
||||
RETURN v_new_id;
|
||||
END;
|
||||
$$;
|
||||
|
||||
CREATE OR REPLACE FUNCTION transfer_team_ownership(_team_id UUID, _new_owner_id UUID) RETURNS json
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_old_owner_id UUID;
|
||||
_owner_role_id UUID;
|
||||
_admin_role_id UUID;
|
||||
_old_org_id UUID;
|
||||
_new_org_id UUID;
|
||||
_has_license BOOLEAN;
|
||||
_old_owner_role_id UUID;
|
||||
_new_owner_role_id UUID;
|
||||
_has_active_coupon BOOLEAN;
|
||||
_other_teams_count INTEGER;
|
||||
_new_owner_org_id UUID;
|
||||
_license_type_id UUID;
|
||||
_has_valid_license BOOLEAN;
|
||||
BEGIN
|
||||
-- Get the current owner's ID and organization
|
||||
SELECT t.user_id, t.organization_id
|
||||
INTO _old_owner_id, _old_org_id
|
||||
FROM teams t
|
||||
WHERE t.id = _team_id;
|
||||
|
||||
IF _old_owner_id IS NULL THEN
|
||||
RAISE EXCEPTION 'Team not found';
|
||||
END IF;
|
||||
|
||||
-- Get the new owner's organization
|
||||
SELECT organization_id INTO _new_owner_org_id
|
||||
FROM organizations
|
||||
WHERE user_id = _new_owner_id;
|
||||
|
||||
-- Get the old organization
|
||||
SELECT id INTO _old_org_id
|
||||
FROM organizations
|
||||
WHERE id = _old_org_id;
|
||||
|
||||
IF _old_org_id IS NULL THEN
|
||||
RAISE EXCEPTION 'Organization not found';
|
||||
END IF;
|
||||
|
||||
-- Check if new owner has any valid license type
|
||||
SELECT EXISTS (
|
||||
SELECT 1
|
||||
FROM (
|
||||
-- Check regular subscriptions
|
||||
SELECT lus.user_id, lus.status, lus.active
|
||||
FROM licensing_user_subscriptions lus
|
||||
WHERE lus.user_id = _new_owner_id
|
||||
AND lus.active = TRUE
|
||||
AND lus.status IN ('active', 'trialing')
|
||||
|
||||
UNION ALL
|
||||
|
||||
-- Check custom subscriptions
|
||||
SELECT lcs.user_id, lcs.subscription_status as status, TRUE as active
|
||||
FROM licensing_custom_subs lcs
|
||||
WHERE lcs.user_id = _new_owner_id
|
||||
AND lcs.end_date > CURRENT_DATE
|
||||
|
||||
UNION ALL
|
||||
|
||||
-- Check trial status in organizations
|
||||
SELECT o.user_id, o.subscription_status as status, TRUE as active
|
||||
FROM organizations o
|
||||
WHERE o.user_id = _new_owner_id
|
||||
AND o.trial_in_progress = TRUE
|
||||
AND o.trial_expire_date > CURRENT_DATE
|
||||
) valid_licenses
|
||||
) INTO _has_valid_license;
|
||||
|
||||
IF NOT _has_valid_license THEN
|
||||
RAISE EXCEPTION 'New owner does not have a valid license (subscription, custom subscription, or trial)';
|
||||
END IF;
|
||||
|
||||
-- Check if new owner has any active coupon codes
|
||||
SELECT EXISTS (
|
||||
SELECT 1
|
||||
FROM licensing_coupon_codes lcc
|
||||
WHERE lcc.redeemed_by = _new_owner_id
|
||||
AND lcc.is_redeemed = TRUE
|
||||
AND lcc.is_refunded = FALSE
|
||||
) INTO _has_active_coupon;
|
||||
|
||||
IF _has_active_coupon THEN
|
||||
RAISE EXCEPTION 'New owner has active coupon codes that need to be handled before transfer';
|
||||
END IF;
|
||||
|
||||
-- Count other teams in the organization for information purposes
|
||||
SELECT COUNT(*) INTO _other_teams_count
|
||||
FROM teams
|
||||
WHERE organization_id = _old_org_id
|
||||
AND id != _team_id;
|
||||
|
||||
-- If new owner has their own organization, move the team to their organization
|
||||
IF _new_owner_org_id IS NOT NULL THEN
|
||||
-- Update the team to use the new owner's organization
|
||||
UPDATE teams
|
||||
SET user_id = _new_owner_id,
|
||||
organization_id = _new_owner_org_id
|
||||
WHERE id = _team_id;
|
||||
|
||||
-- Create notification about organization change
|
||||
PERFORM create_notification(
|
||||
_old_owner_id,
|
||||
_team_id,
|
||||
NULL,
|
||||
NULL,
|
||||
CONCAT('Team <b>', (SELECT name FROM teams WHERE id = _team_id), '</b> has been moved to a different organization')
|
||||
);
|
||||
|
||||
PERFORM create_notification(
|
||||
_new_owner_id,
|
||||
_team_id,
|
||||
NULL,
|
||||
NULL,
|
||||
CONCAT('Team <b>', (SELECT name FROM teams WHERE id = _team_id), '</b> has been moved to your organization')
|
||||
);
|
||||
ELSE
|
||||
-- If new owner doesn't have an organization, transfer the old organization to them
|
||||
UPDATE organizations
|
||||
SET user_id = _new_owner_id
|
||||
WHERE id = _old_org_id;
|
||||
|
||||
-- Update the team to use the same organization
|
||||
UPDATE teams
|
||||
SET user_id = _new_owner_id,
|
||||
organization_id = _old_org_id
|
||||
WHERE id = _team_id;
|
||||
|
||||
-- Notify both users about organization ownership transfer
|
||||
PERFORM create_notification(
|
||||
_old_owner_id,
|
||||
NULL,
|
||||
NULL,
|
||||
NULL,
|
||||
CONCAT('You are no longer the owner of organization <b>', (SELECT organization_name FROM organizations WHERE id = _old_org_id), '</b>')
|
||||
);
|
||||
|
||||
PERFORM create_notification(
|
||||
_new_owner_id,
|
||||
NULL,
|
||||
NULL,
|
||||
NULL,
|
||||
CONCAT('You are now the owner of organization <b>', (SELECT organization_name FROM organizations WHERE id = _old_org_id), '</b>')
|
||||
);
|
||||
END IF;
|
||||
|
||||
-- Get the owner and admin role IDs
|
||||
SELECT id INTO _owner_role_id FROM roles WHERE team_id = _team_id AND owner = TRUE;
|
||||
SELECT id INTO _admin_role_id FROM roles WHERE team_id = _team_id AND admin_role = TRUE;
|
||||
|
||||
-- Get current role IDs for both users
|
||||
SELECT role_id INTO _old_owner_role_id
|
||||
FROM team_members
|
||||
WHERE team_id = _team_id AND user_id = _old_owner_id;
|
||||
|
||||
SELECT role_id INTO _new_owner_role_id
|
||||
FROM team_members
|
||||
WHERE team_id = _team_id AND user_id = _new_owner_id;
|
||||
|
||||
-- Update the old owner's role to admin if they want to stay in the team
|
||||
IF _old_owner_role_id IS NOT NULL THEN
|
||||
UPDATE team_members
|
||||
SET role_id = _admin_role_id
|
||||
WHERE team_id = _team_id AND user_id = _old_owner_id;
|
||||
END IF;
|
||||
|
||||
-- Update the new owner's role to owner
|
||||
IF _new_owner_role_id IS NOT NULL THEN
|
||||
UPDATE team_members
|
||||
SET role_id = _owner_role_id
|
||||
WHERE team_id = _team_id AND user_id = _new_owner_id;
|
||||
ELSE
|
||||
-- If new owner is not a team member yet, add them
|
||||
INSERT INTO team_members (user_id, team_id, role_id)
|
||||
VALUES (_new_owner_id, _team_id, _owner_role_id);
|
||||
END IF;
|
||||
|
||||
-- Create notification for both users about team ownership
|
||||
PERFORM create_notification(
|
||||
_old_owner_id,
|
||||
_team_id,
|
||||
NULL,
|
||||
NULL,
|
||||
CONCAT('You are no longer the owner of team <b>', (SELECT name FROM teams WHERE id = _team_id), '</b>')
|
||||
);
|
||||
|
||||
PERFORM create_notification(
|
||||
_new_owner_id,
|
||||
_team_id,
|
||||
NULL,
|
||||
NULL,
|
||||
CONCAT('You are now the owner of team <b>', (SELECT name FROM teams WHERE id = _team_id), '</b>')
|
||||
);
|
||||
|
||||
RETURN json_build_object(
|
||||
'success', TRUE,
|
||||
'old_owner_id', _old_owner_id,
|
||||
'new_owner_id', _new_owner_id,
|
||||
'team_id', _team_id,
|
||||
'old_org_id', _old_org_id,
|
||||
'new_org_id', COALESCE(_new_owner_org_id, _old_org_id),
|
||||
'old_role_id', _old_owner_role_id,
|
||||
'new_role_id', _new_owner_role_id,
|
||||
'has_valid_license', _has_valid_license,
|
||||
'has_active_coupon', _has_active_coupon,
|
||||
'other_teams_count', _other_teams_count,
|
||||
'org_ownership_transferred', _new_owner_org_id IS NULL,
|
||||
'team_moved_to_new_org', _new_owner_org_id IS NOT NULL
|
||||
);
|
||||
END;
|
||||
$$;
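A minimal invocation sketch with placeholder UUIDs; per the checks above, the call raises an exception when the team does not exist, the new owner lacks a valid license, or they hold an unhandled coupon, and otherwise returns a JSON summary of the transfer:

```sql
SELECT transfer_team_ownership(
  '66666666-6666-6666-6666-666666666666',  -- _team_id (placeholder)
  '77777777-7777-7777-7777-777777777777'   -- _new_owner_id (placeholder)
);
```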
|
||||
|
||||
-- PERFORMANCE OPTIMIZATION: Optimized version with batching for large datasets
|
||||
CREATE OR REPLACE FUNCTION handle_task_list_sort_between_groups_optimized(_from_index integer, _to_index integer, _task_id uuid, _project_id uuid, _batch_size integer DEFAULT 100) RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_offset INT := 0;
|
||||
_affected_rows INT;
|
||||
BEGIN
|
||||
-- PERFORMANCE OPTIMIZATION: Use direct updates without CTE in UPDATE
|
||||
IF (_to_index = -1)
|
||||
THEN
|
||||
_to_index = COALESCE((SELECT MAX(sort_order) + 1 FROM tasks WHERE project_id = _project_id), 0);
|
||||
END IF;
|
||||
|
||||
-- PERFORMANCE OPTIMIZATION: Batch updates for large datasets
|
||||
IF _to_index > _from_index
|
||||
THEN
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET sort_order = sort_order - 1
|
||||
WHERE project_id = _project_id
|
||||
AND sort_order > _from_index
|
||||
AND sort_order < _to_index
|
||||
AND sort_order > _offset
|
||||
AND sort_order <= _offset + _batch_size;
|
||||
|
||||
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
|
||||
EXIT WHEN _affected_rows = 0;
|
||||
_offset := _offset + _batch_size;
|
||||
END LOOP;
|
||||
|
||||
UPDATE tasks SET sort_order = _to_index - 1 WHERE id = _task_id AND project_id = _project_id;
|
||||
END IF;
|
||||
|
||||
IF _to_index < _from_index
|
||||
THEN
|
||||
_offset := 0;
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET sort_order = sort_order + 1
|
||||
WHERE project_id = _project_id
|
||||
AND sort_order > _to_index
|
||||
AND sort_order < _from_index
|
||||
AND sort_order > _offset
|
||||
AND sort_order <= _offset + _batch_size;
|
||||
|
||||
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
|
||||
EXIT WHEN _affected_rows = 0;
|
||||
_offset := _offset + _batch_size;
|
||||
END LOOP;
|
||||
|
||||
UPDATE tasks SET sort_order = _to_index + 1 WHERE id = _task_id AND project_id = _project_id;
|
||||
END IF;
|
||||
END
|
||||
$$;
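A usage sketch with placeholder UUIDs: passing -1 as _to_index appends the task to the end of the ordering (MAX(sort_order) + 1), and the batch size bounds the sort_order window each UPDATE processes per loop iteration:

```sql
-- Move a task from position 2 to the end of the list in batches of 50 rows
SELECT handle_task_list_sort_between_groups_optimized(
  2, -1,
  '88888888-8888-8888-8888-888888888888',  -- _task_id (placeholder)
  '99999999-9999-9999-9999-999999999999',  -- _project_id (placeholder)
  50                                       -- _batch_size
);
```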
|
||||
|
||||
-- PERFORMANCE OPTIMIZATION: Optimized version with batching for large datasets
|
||||
CREATE OR REPLACE FUNCTION handle_task_list_sort_inside_group_optimized(_from_index integer, _to_index integer, _task_id uuid, _project_id uuid, _batch_size integer DEFAULT 100) RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_offset INT := 0;
|
||||
_affected_rows INT;
|
||||
BEGIN
|
||||
-- PERFORMANCE OPTIMIZATION: Batch updates for large datasets without CTE in UPDATE
|
||||
IF _to_index > _from_index
|
||||
THEN
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET sort_order = sort_order - 1
|
||||
WHERE project_id = _project_id
|
||||
AND sort_order > _from_index
|
||||
AND sort_order <= _to_index
|
||||
AND sort_order > _offset
|
||||
AND sort_order <= _offset + _batch_size;
|
||||
|
||||
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
|
||||
EXIT WHEN _affected_rows = 0;
|
||||
_offset := _offset + _batch_size;
|
||||
END LOOP;
|
||||
END IF;
|
||||
|
||||
IF _to_index < _from_index
|
||||
THEN
|
||||
_offset := 0;
|
||||
LOOP
|
||||
UPDATE tasks
|
||||
SET sort_order = sort_order + 1
|
||||
WHERE project_id = _project_id
|
||||
AND sort_order >= _to_index
|
||||
AND sort_order < _from_index
|
||||
AND sort_order > _offset
|
||||
AND sort_order <= _offset + _batch_size;
|
||||
|
||||
GET DIAGNOSTICS _affected_rows = ROW_COUNT;
|
||||
EXIT WHEN _affected_rows = 0;
|
||||
_offset := _offset + _batch_size;
|
||||
END LOOP;
|
||||
END IF;
|
||||
|
||||
UPDATE tasks SET sort_order = _to_index WHERE id = _task_id AND project_id = _project_id;
|
||||
END
|
||||
$$;
|
||||
|
||||
-- Updated bulk sort order function that avoids sort_order conflicts
|
||||
CREATE OR REPLACE FUNCTION update_task_sort_orders_bulk(_updates json, _group_by text DEFAULT 'status') RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_update_record RECORD;
|
||||
_sort_column TEXT;
|
||||
_sql TEXT;
|
||||
BEGIN
|
||||
-- Get the appropriate sort column based on grouping
|
||||
_sort_column := get_sort_column_name(_group_by);
|
||||
|
||||
-- Process each update record
|
||||
FOR _update_record IN
|
||||
SELECT
|
||||
(item->>'task_id')::uuid as task_id,
|
||||
(item->>'sort_order')::int as sort_order,
|
||||
(item->>'status_id')::uuid as status_id,
|
||||
(item->>'priority_id')::uuid as priority_id,
|
||||
(item->>'phase_id')::uuid as phase_id
|
||||
FROM json_array_elements(_updates) as item
|
||||
LOOP
|
||||
-- Update the grouping-specific sort column and other fields
|
||||
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1, ' ||
|
||||
'status_id = COALESCE($2, status_id), ' ||
|
||||
'priority_id = COALESCE($3, priority_id), ' ||
|
||||
'updated_at = CURRENT_TIMESTAMP ' ||
|
||||
'WHERE id = $4';
|
||||
|
||||
EXECUTE _sql USING
|
||||
_update_record.sort_order,
|
||||
_update_record.status_id,
|
||||
_update_record.priority_id,
|
||||
_update_record.task_id;
|
||||
|
||||
-- Handle phase updates separately since it's in a different table
|
||||
IF _update_record.phase_id IS NOT NULL THEN
|
||||
INSERT INTO task_phase (task_id, phase_id)
|
||||
VALUES (_update_record.task_id, _update_record.phase_id)
|
||||
ON CONFLICT (task_id) DO UPDATE SET phase_id = _update_record.phase_id;
|
||||
END IF;
|
||||
END LOOP;
|
||||
END
|
||||
$$;
|
||||
|
||||
-- Function to get the appropriate sort column name based on grouping type
|
||||
CREATE OR REPLACE FUNCTION get_sort_column_name(_group_by TEXT) RETURNS TEXT
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
BEGIN
|
||||
CASE _group_by
|
||||
WHEN 'status' THEN RETURN 'status_sort_order';
|
||||
WHEN 'priority' THEN RETURN 'priority_sort_order';
|
||||
WHEN 'phase' THEN RETURN 'phase_sort_order';
|
||||
-- For backward compatibility, still support general sort_order but be explicit
|
||||
WHEN 'general' THEN RETURN 'sort_order';
|
||||
ELSE RETURN 'status_sort_order'; -- Default to status sorting
|
||||
END CASE;
|
||||
END;
|
||||
$$;
|
||||
|
||||
-- Updated bulk sort order function to handle different sort columns
|
||||
CREATE OR REPLACE FUNCTION update_task_sort_orders_bulk(_updates json, _group_by text DEFAULT 'status') RETURNS void
|
||||
LANGUAGE plpgsql
|
||||
AS
|
||||
$$
|
||||
DECLARE
|
||||
_update_record RECORD;
|
||||
_sort_column TEXT;
|
||||
_sql TEXT;
|
||||
BEGIN
|
||||
-- Get the appropriate sort column based on grouping
|
||||
_sort_column := get_sort_column_name(_group_by);
|
||||
|
||||
-- Process each update record
|
||||
FOR _update_record IN
|
||||
SELECT
|
||||
(item->>'task_id')::uuid as task_id,
|
||||
(item->>'sort_order')::int as sort_order,
|
||||
(item->>'status_id')::uuid as status_id,
|
||||
(item->>'priority_id')::uuid as priority_id,
|
||||
(item->>'phase_id')::uuid as phase_id
|
||||
FROM json_array_elements(_updates) as item
|
||||
LOOP
|
||||
-- Update the grouping-specific sort column and other fields
|
||||
_sql := 'UPDATE tasks SET ' || _sort_column || ' = $1, ' ||
|
||||
'status_id = COALESCE($2, status_id), ' ||
|
||||
'priority_id = COALESCE($3, priority_id), ' ||
|
||||
'updated_at = CURRENT_TIMESTAMP ' ||
|
||||
'WHERE id = $4';
|
||||
|
||||
EXECUTE _sql USING
|
||||
_update_record.sort_order,
|
||||
_update_record.status_id,
|
||||
_update_record.priority_id,
|
||||
_update_record.task_id;
|
||||
|
||||
-- Handle phase updates separately since it's in a different table
|
||||
IF _update_record.phase_id IS NOT NULL THEN
|
||||
INSERT INTO task_phase (task_id, phase_id)
|
||||
VALUES (_update_record.task_id, _update_record.phase_id)
|
||||
ON CONFLICT (task_id) DO UPDATE SET phase_id = _update_record.phase_id;
|
||||
END IF;
|
||||
END LOOP;
|
||||
END;
|
||||
$$;
|
||||
|
||||
@@ -132,139 +132,3 @@ CREATE INDEX IF NOT EXISTS projects_team_id_index
|
||||
CREATE INDEX IF NOT EXISTS projects_team_id_name_index
|
||||
ON projects (team_id, name);
|
||||
|
||||
-- Performance indexes for optimized tasks queries
|
||||
-- From migration: 20250115000000-performance-indexes.sql
|
||||
|
||||
-- Composite index for main task filtering
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_project_archived_parent
|
||||
ON tasks(project_id, archived, parent_task_id)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for status joins
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_status_project
|
||||
ON tasks(status_id, project_id)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for assignees lookup
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_assignees_task_member
|
||||
ON tasks_assignees(task_id, team_member_id);
|
||||
|
||||
-- Index for phase lookup
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_phase_task_phase
|
||||
ON task_phase(task_id, phase_id);
|
||||
|
||||
-- Index for subtask counting
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_parent_archived
|
||||
ON tasks(parent_task_id, archived)
|
||||
WHERE parent_task_id IS NOT NULL AND archived = FALSE;
|
||||
|
||||
-- Index for labels
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_labels_task_label
|
||||
ON task_labels(task_id, label_id);
|
||||
|
||||
-- Index for comments count
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_comments_task
|
||||
ON task_comments(task_id);
|
||||
|
||||
-- Index for attachments count
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_attachments_task
|
||||
ON task_attachments(task_id);
|
||||
|
||||
-- Index for work log aggregation
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_work_log_task
|
||||
ON task_work_log(task_id);
|
||||
|
||||
-- Index for subscribers check
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_subscribers_task
|
||||
ON task_subscribers(task_id);
|
||||
|
||||
-- Index for dependencies check
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_dependencies_task
|
||||
ON task_dependencies(task_id);
|
||||
|
||||
-- Index for timers lookup
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_timers_task_user
|
||||
ON task_timers(task_id, user_id);
|
||||
|
||||
-- Index for custom columns
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_cc_column_values_task
|
||||
ON cc_column_values(task_id);
|
||||
|
||||
-- Index for team member info view optimization
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_team_members_team_user
|
||||
ON team_members(team_id, user_id)
|
||||
WHERE active = TRUE;
|
||||
|
||||
-- Index for notification settings
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_notification_settings_user_team
|
||||
ON notification_settings(user_id, team_id);
|
||||
|
||||
-- Index for task status categories
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_statuses_category
|
||||
ON task_statuses(category_id, project_id);
|
||||
|
||||
-- Index for project phases
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_project_phases_project_sort
|
||||
ON project_phases(project_id, sort_index);
|
||||
|
||||
-- Index for task priorities
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_priorities_value
|
||||
ON task_priorities(value);
|
||||
|
||||
-- Index for team labels
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_team_labels_team
|
||||
ON team_labels(team_id);
|
||||
|
||||
-- Advanced performance indexes for task optimization
|
||||
|
||||
-- Composite index for task main query optimization (covers most WHERE conditions)
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_performance_main
|
||||
ON tasks(project_id, archived, parent_task_id, status_id, priority_id)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for sorting by sort_order with project filter
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_project_sort_order
|
||||
ON tasks(project_id, sort_order)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for email_invitations to optimize team_member_info_view
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_email_invitations_team_member
|
||||
ON email_invitations(team_member_id);
|
||||
|
||||
-- Covering index for task status with category information
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_statuses_covering
|
||||
ON task_statuses(id, category_id, project_id);
|
||||
|
||||
-- Index for task aggregation queries (parent task progress calculation)
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_parent_status_archived
|
||||
ON tasks(parent_task_id, status_id, archived)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for project team member filtering
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_team_members_project_lookup
|
||||
ON team_members(team_id, active, user_id)
|
||||
WHERE active = TRUE;
|
||||
|
||||
-- Covering index for tasks with frequently accessed columns
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_covering_main
|
||||
ON tasks(id, project_id, archived, parent_task_id, status_id, priority_id, sort_order, name)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for task search functionality
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_name_search
|
||||
ON tasks USING gin(to_tsvector('english', name))
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for date-based filtering (if used)
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_tasks_dates
|
||||
ON tasks(project_id, start_date, end_date)
|
||||
WHERE archived = FALSE;
|
||||
|
||||
-- Index for task timers with user filtering
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_task_timers_user_task
|
||||
ON task_timers(user_id, task_id);
|
||||
|
||||
-- Index for sys_task_status_categories lookups
|
||||
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_sys_task_status_categories_covering
|
||||
ON sys_task_status_categories(id, color_code, color_code_dark, is_done, is_doing, is_todo);
|
||||
|
||||
|
||||
28
worklenz-backend/grunt/grunt-compress.js
Normal file
@@ -0,0 +1,28 @@
module.exports = {
  brotli_js: {
    options: {
      mode: "brotli",
      brotli: {
        mode: 1
      }
    },
    expand: true,
    cwd: "build/public",
    src: ["**/*.js"],
    dest: "build/public",
    extDot: "last",
    ext: ".js.br"
  },
  gzip_js: {
    options: {
      mode: "gzip"
    },
    files: [{
      expand: true,
      cwd: "build/public",
      src: ["**/*.js"],
      dest: "build/public",
      ext: ".js.gz"
    }]
  }
};
11364
worklenz-backend/package-lock.json
generated
File diff suppressed because it is too large
@@ -4,37 +4,23 @@
  "private": true,
  "engines": {
    "npm": ">=8.11.0",
    "node": ">=20.0.0",
    "node": ">=16.13.0",
    "yarn": "WARNING: Please use npm package manager instead of yarn"
  },
  "main": "build/bin/www",
  "repository": "GITHUB_REPO_HERE",
  "author": "worklenz.com",
  "scripts": {
    "test": "jest",
    "start": "node build/bin/www.js",
    "dev": "npm run build:dev && npm run watch",
    "build": "npm run clean && npm run compile && npm run copy && npm run compress",
    "build:dev": "npm run clean && npm run compile:dev && npm run copy",
    "build:prod": "npm run clean && npm run compile:prod && npm run copy && npm run minify && npm run compress",
    "clean": "rimraf build",
    "compile": "tsc --build tsconfig.prod.json",
    "compile:dev": "tsc --build tsconfig.json",
    "compile:prod": "tsc --build tsconfig.prod.json",
    "copy": "npm run copy:assets && npm run copy:views && npm run copy:config && npm run copy:shared",
    "copy:assets": "npx cpx2 \"src/public/**\" build/public",
    "copy:views": "npx cpx2 \"src/views/**\" build/views",
    "copy:config": "npx cpx2 \".env\" build && npx cpx2 \"package.json\" build",
    "copy:shared": "npx cpx2 \"src/shared/postgresql-error-codes.json\" build/shared && npx cpx2 \"src/shared/sample-data.json\" build/shared && npx cpx2 \"src/shared/templates/**\" build/shared/templates",
    "watch": "concurrently \"npm run watch:ts\" \"npm run watch:assets\"",
    "watch:ts": "tsc --build tsconfig.json --watch",
    "watch:assets": "npx cpx2 \"src/{public,views}/**\" build --watch",
    "minify": "terser build/**/*.js --compress --mangle --output-dir build",
    "compress": "node scripts/compress.js",
    "swagger": "node ./cli/swagger",
    "inline-queries": "node ./cli/inline-queries",
    "start": "node ./build/bin/www",
    "tcs": "grunt build:tsc",
    "build": "grunt build",
    "watch": "grunt watch",
    "dev": "grunt dev",
    "es": "esbuild `find src -type f -name '*.ts'` --platform=node --minify=true --watch=true --target=esnext --format=cjs --tsconfig=tsconfig.prod.json --outdir=dist",
    "copy": "grunt copy",
    "sonar": "sonar-scanner -Dproject.settings=sonar-project-dev.properties",
    "tsc": "tsc",
    "test": "jest --setupFiles dotenv/config",
    "test:watch": "jest --watch --setupFiles dotenv/config"
  },
  "jestSonar": {
@@ -59,7 +45,6 @@
    "cors": "^2.8.5",
    "cron": "^2.4.0",
    "crypto-js": "^4.1.1",
    "csrf-sync": "^4.2.1",
    "csurf": "^1.11.0",
    "debug": "^4.3.4",
    "dotenv": "^16.3.1",
@@ -85,6 +70,7 @@
    "passport-local": "^1.0.0",
    "path": "^0.12.7",
    "pg": "^8.14.1",
    "pg-native": "^3.3.0",
    "pug": "^3.0.2",
    "redis": "^4.6.7",
    "sanitize-html": "^2.11.0",
@@ -92,10 +78,8 @@
    "sharp": "^0.32.6",
    "slugify": "^1.6.6",
    "socket.io": "^4.7.1",
    "tinymce": "^7.8.0",
    "uglify-js": "^3.17.4",
    "winston": "^3.10.0",
    "worklenz-backend": "file:",
    "xss-filters": "^1.2.7"
  },
  "devDependencies": {
@@ -103,17 +87,15 @@
    "@babel/preset-typescript": "^7.22.5",
    "@types/bcrypt": "^5.0.0",
    "@types/bluebird": "^3.5.38",
    "@types/body-parser": "^1.19.2",
    "@types/compression": "^1.7.2",
    "@types/connect-flash": "^0.0.37",
    "@types/cookie-parser": "^1.4.3",
    "@types/cron": "^2.0.1",
    "@types/crypto-js": "^4.2.2",
    "@types/csurf": "^1.11.2",
    "@types/express": "^4.17.21",
    "@types/express": "^4.17.17",
    "@types/express-brute": "^1.0.2",
    "@types/express-brute-redis": "^0.0.4",
    "@types/express-serve-static-core": "^4.17.34",
    "@types/express-session": "^1.17.7",
    "@types/fs-extra": "^9.0.13",
    "@types/hpp": "^0.2.2",
@@ -138,22 +120,26 @@
    "@typescript-eslint/eslint-plugin": "^5.62.0",
    "@typescript-eslint/parser": "^5.62.0",
    "chokidar": "^3.5.3",
    "concurrently": "^9.1.2",
    "cpx2": "^8.0.0",
    "esbuild": "^0.17.19",
    "esbuild-envfile-plugin": "^1.0.5",
    "esbuild-node-externals": "^1.8.0",
    "eslint": "^8.45.0",
    "eslint-plugin-security": "^1.7.1",
    "fs-extra": "^10.1.0",
    "grunt": "^1.6.1",
    "grunt-contrib-clean": "^2.0.1",
    "grunt-contrib-compress": "^2.0.0",
    "grunt-contrib-copy": "^1.0.0",
    "grunt-contrib-uglify": "^5.2.2",
    "grunt-contrib-watch": "^1.1.0",
    "grunt-shell": "^4.0.0",
    "grunt-sync": "^0.8.2",
    "highcharts": "^11.1.0",
    "jest": "^28.1.3",
    "jest-sonar-reporter": "^2.0.0",
    "ncp": "^2.0.0",
    "nodeman": "^1.1.2",
    "rimraf": "^6.0.1",
    "swagger-jsdoc": "^6.2.8",
    "terser": "^5.40.0",
    "ts-jest": "^28.0.8",
    "ts-node": "^10.9.1",
    "tslint": "^6.1.3",
@@ -1,53 +0,0 @@
const fs = require('fs');
const path = require('path');
const { createGzip } = require('zlib');
const { pipeline } = require('stream');

async function compressFile(inputPath, outputPath) {
  return new Promise((resolve, reject) => {
    const gzip = createGzip();
    const source = fs.createReadStream(inputPath);
    const destination = fs.createWriteStream(outputPath);

    pipeline(source, gzip, destination, (err) => {
      if (err) {
        reject(err);
      } else {
        resolve();
      }
    });
  });
}

async function compressDirectory(dir) {
  const files = fs.readdirSync(dir, { withFileTypes: true });

  for (const file of files) {
    const fullPath = path.join(dir, file.name);

    if (file.isDirectory()) {
      await compressDirectory(fullPath);
    } else if (file.name.endsWith('.js') || file.name.endsWith('.css')) {
      const gzPath = fullPath + '.gz';
      await compressFile(fullPath, gzPath);
      console.log(`Compressed: ${fullPath} -> ${gzPath}`);
    }
  }
}

async function main() {
  try {
    const buildDir = path.join(__dirname, '../build');
    if (fs.existsSync(buildDir)) {
      await compressDirectory(buildDir);
      console.log('Compression complete!');
    } else {
      console.log('Build directory not found. Run build first.');
    }
  } catch (error) {
    console.error('Compression failed:', error);
    process.exit(1);
  }
}

main();
@@ -6,7 +6,7 @@ import logger from "morgan";
import helmet from "helmet";
import compression from "compression";
import passport from "passport";
import { csrfSync } from "csrf-sync";
import csurf from "csurf";
import rateLimit from "express-rate-limit";
import cors from "cors";
import flash from "connect-flash";
@@ -112,13 +112,17 @@ function isLoggedIn(req: Request, _res: Response, next: NextFunction) {
  return req.user ? next() : next(createError(401));
}

// CSRF configuration using csrf-sync for session-based authentication
const {
  invalidCsrfTokenError,
  generateToken,
  csrfSynchronisedProtection,
} = csrfSync({
  getTokenFromRequest: (req: Request) => req.headers["x-csrf-token"] as string || (req.body && req.body["_csrf"])
// CSRF configuration
const csrfProtection = csurf({
  cookie: {
    key: "XSRF-TOKEN",
    path: "/",
    httpOnly: false,
    secure: isProduction(), // Only secure in production
    sameSite: isProduction() ? "none" : "lax", // Different settings for dev vs prod
    domain: isProduction() ? ".worklenz.com" : undefined // Only set domain in production
  },
  ignoreMethods: ["HEAD", "OPTIONS"]
});

// Apply CSRF selectively (exclude webhooks and public routes)
@@ -131,25 +135,38 @@ app.use((req, res, next) => {
  ) {
    next();
  } else {
    csrfSynchronisedProtection(req, res, next);
    csrfProtection(req, res, next);
  }
});

// Set CSRF token method on request object for compatibility
// Set CSRF token cookie
app.use((req: Request, res: Response, next: NextFunction) => {
  // Add csrfToken method to request object for compatibility
  if (!req.csrfToken && generateToken) {
    req.csrfToken = (overwrite?: boolean) => generateToken(req, overwrite);
  if (req.csrfToken) {
    const token = req.csrfToken();
    res.cookie("XSRF-TOKEN", token, {
      httpOnly: false,
      secure: isProduction(),
      sameSite: isProduction() ? "none" : "lax",
      domain: isProduction() ? ".worklenz.com" : undefined,
      path: "/"
    });
  }
  next();
});

// CSRF token refresh endpoint
app.get("/csrf-token", (req: Request, res: Response) => {
  try {
    const token = generateToken(req);
    res.status(200).json({ done: true, message: "CSRF token refreshed", token });
  } catch (error) {
  if (req.csrfToken) {
    const token = req.csrfToken();
    res.cookie("XSRF-TOKEN", token, {
      httpOnly: false,
      secure: isProduction(),
      sameSite: isProduction() ? "none" : "lax",
      domain: isProduction() ? ".worklenz.com" : undefined,
      path: "/"
    });
    res.status(200).json({ done: true, message: "CSRF token refreshed" });
  } else {
    res.status(500).json({ done: false, message: "Failed to generate CSRF token" });
  }
});
@@ -202,7 +219,7 @@ if (isInternalServer()) {

// CSRF error handler
app.use((err: any, req: Request, res: Response, next: NextFunction) => {
  if (err === invalidCsrfTokenError) {
  if (err.code === "EBADCSRFTOKEN") {
    return res.status(403).json({
      done: false,
      message: "Invalid CSRF token",
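For reference, a hedged client-side sketch of the round trip this csurf setup expects (the frontend code is not part of this diff): the token is exposed in the readable XSRF-TOKEN cookie set above and echoed back in the x-csrf-token header, one of the header names csurf checks by default. Function names here are illustrative.

// Illustrative only; cookie and header names match the middleware above.
function readCookie(name: string): string | undefined {
  const match = document.cookie.match(new RegExp(`(?:^|; )${name}=([^;]*)`));
  return match ? decodeURIComponent(match[1]) : undefined;
}

async function postWithCsrf(url: string, body: unknown) {
  const token = readCookie("XSRF-TOKEN");
  return fetch(url, {
    method: "POST",
    credentials: "include",
    headers: {
      "Content-Type": "application/json",
      ...(token ? { "x-csrf-token": token } : {})
    },
    body: JSON.stringify(body)
  });
}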
@@ -35,18 +35,8 @@ export default class AuthController extends WorklenzControllerBase {
    const auth_error = errors.length > 0 ? errors[0] : null;
    const message = messages.length > 0 ? messages[0] : null;

    // Determine title based on authentication status and strategy
    let title = null;
    if (req.query.strategy) {
      if (auth_error) {
        // Show failure title only when there's an actual error
        title = req.query.strategy === "login" ? "Login Failed!" : "Signup Failed!";
      } else if (req.isAuthenticated() && message) {
        // Show success title when authenticated and there's a success message
        title = req.query.strategy === "login" ? "Login Successful!" : "Signup Successful!";
      }
      // If no error and not authenticated, don't show any title (this might be a redirect without completion)
    }
    const midTitle = req.query.strategy === "login" ? "Login Failed!" : "Signup Failed!";
    const title = req.query.strategy ? midTitle : null;

    if (req.user)
      req.user.build_v = FileConstants.getRelease();
@@ -137,10 +137,6 @@ export default class HomePageController extends WorklenzControllerBase {
                   WHERE category_id NOT IN (SELECT id
                                             FROM sys_task_status_categories
                                             WHERE is_done IS FALSE))
        AND NOT EXISTS(SELECT project_id
                       FROM archived_projects
                       WHERE project_id = p.id
                         AND user_id = $2)
        ${groupByClosure}
        ORDER BY t.end_date ASC`;

@@ -162,13 +158,9 @@ export default class HomePageController extends WorklenzControllerBase {
                   WHERE category_id NOT IN (SELECT id
                                             FROM sys_task_status_categories
                                             WHERE is_done IS FALSE))
        AND NOT EXISTS(SELECT project_id
                       FROM archived_projects
                       WHERE project_id = p.id
                         AND user_id = $3)
        ${groupByClosure}`;

    const result = await db.query(q, [teamId, userId, userId]);
    const result = await db.query(q, [teamId, userId]);
    const [row] = result.rows;
    return row;
  }
@@ -317,58 +317,65 @@ export default class ProjectsController extends WorklenzControllerBase {
|
||||
@HandleExceptions()
|
||||
public static async getMembersByProjectId(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
|
||||
const {sortField, sortOrder, size, offset} = this.toPaginationOptions(req.query, "name");
|
||||
const search = (req.query.search || "").toString().trim();
|
||||
|
||||
let searchFilter = "";
|
||||
const params = [req.params.id, req.user?.team_id ?? null, size, offset];
|
||||
if (search) {
|
||||
searchFilter = `
|
||||
AND (
|
||||
(SELECT name FROM team_member_info_view WHERE team_member_info_view.team_member_id = tm.id) ILIKE '%' || $5 || '%'
|
||||
OR (SELECT email FROM team_member_info_view WHERE team_member_info_view.team_member_id = tm.id) ILIKE '%' || $5 || '%'
|
||||
)
|
||||
`;
|
||||
params.push(search);
|
||||
}
|
||||
|
||||
const q = `
|
||||
WITH filtered_members AS (
|
||||
SELECT project_members.id,
|
||||
team_member_id,
|
||||
(SELECT name FROM team_member_info_view WHERE team_member_info_view.team_member_id = tm.id) AS name,
|
||||
(SELECT email FROM team_member_info_view WHERE team_member_info_view.team_member_id = tm.id) AS email,
|
||||
u.avatar_url,
|
||||
(SELECT COUNT(*) FROM tasks WHERE archived IS FALSE AND project_id = project_members.project_id AND id IN (SELECT task_id FROM tasks_assignees WHERE tasks_assignees.project_member_id = project_members.id)) AS all_tasks_count,
|
||||
(SELECT COUNT(*) FROM tasks WHERE archived IS FALSE AND project_id = project_members.project_id AND id IN (SELECT task_id FROM tasks_assignees WHERE tasks_assignees.project_member_id = project_members.id) AND status_id IN (SELECT id FROM task_statuses WHERE category_id = (SELECT id FROM sys_task_status_categories WHERE is_done IS TRUE))) AS completed_tasks_count,
|
||||
EXISTS(SELECT email FROM email_invitations WHERE team_member_id = project_members.team_member_id AND email_invitations.team_id = $2) AS pending_invitation,
|
||||
(SELECT project_access_levels.name FROM project_access_levels WHERE project_access_levels.id = project_members.project_access_level_id) AS access,
|
||||
(SELECT name FROM job_titles WHERE id = tm.job_title_id) AS job_title
|
||||
FROM project_members
|
||||
INNER JOIN team_members tm ON project_members.team_member_id = tm.id
|
||||
LEFT JOIN users u ON tm.user_id = u.id
|
||||
WHERE project_id = $1
|
||||
${search ? searchFilter : ""}
|
||||
)
|
||||
SELECT
|
||||
(SELECT COUNT(*) FROM filtered_members) AS total,
|
||||
(SELECT COALESCE(ARRAY_TO_JSON(ARRAY_AGG(ROW_TO_JSON(t))), '[]'::JSON)
|
||||
FROM (
|
||||
SELECT * FROM filtered_members
|
||||
ORDER BY ${sortField} ${sortOrder}
|
||||
LIMIT $3 OFFSET $4
|
||||
) t
|
||||
) AS data
|
||||
SELECT ROW_TO_JSON(rec) AS members
|
||||
FROM (SELECT COUNT(*) AS total,
|
||||
(SELECT COALESCE(ARRAY_TO_JSON(ARRAY_AGG(ROW_TO_JSON(t))), '[]'::JSON)
|
||||
FROM (SELECT project_members.id,
|
||||
team_member_id,
|
||||
(SELECT name
|
||||
FROM team_member_info_view
|
||||
WHERE team_member_info_view.team_member_id = tm.id),
|
||||
(SELECT email
|
||||
FROM team_member_info_view
|
||||
WHERE team_member_info_view.team_member_id = tm.id) AS email,
|
||||
u.avatar_url,
|
||||
(SELECT COUNT(*)
|
||||
FROM tasks
|
||||
WHERE archived IS FALSE
|
||||
AND project_id = project_members.project_id
|
||||
AND id IN (SELECT task_id
|
||||
FROM tasks_assignees
|
||||
WHERE tasks_assignees.project_member_id = project_members.id)) AS all_tasks_count,
|
||||
(SELECT COUNT(*)
|
||||
FROM tasks
|
||||
WHERE archived IS FALSE
|
||||
AND project_id = project_members.project_id
|
||||
AND id IN (SELECT task_id
|
||||
FROM tasks_assignees
|
||||
WHERE tasks_assignees.project_member_id = project_members.id)
|
||||
AND status_id IN (SELECT id
|
||||
FROM task_statuses
|
||||
WHERE category_id = (SELECT id
|
||||
FROM sys_task_status_categories
|
||||
WHERE is_done IS TRUE))) AS completed_tasks_count,
|
||||
EXISTS(SELECT email
|
||||
FROM email_invitations
|
||||
WHERE team_member_id = project_members.team_member_id
|
||||
AND email_invitations.team_id = $2) AS pending_invitation,
|
||||
(SELECT project_access_levels.name
|
||||
FROM project_access_levels
|
||||
WHERE project_access_levels.id = project_members.project_access_level_id) AS access,
|
||||
(SELECT name FROM job_titles WHERE id = tm.job_title_id) AS job_title
|
||||
FROM project_members
|
||||
INNER JOIN team_members tm ON project_members.team_member_id = tm.id
|
||||
LEFT JOIN users u ON tm.user_id = u.id
|
||||
WHERE project_id = $1
|
||||
ORDER BY ${sortField} ${sortOrder}
|
||||
LIMIT $3 OFFSET $4) t) AS data
|
||||
FROM project_members
|
||||
WHERE project_id = $1) rec;
|
||||
`;
|
||||
|
||||
const result = await db.query(q, params);
|
||||
const result = await db.query(q, [req.params.id, req.user?.team_id ?? null, size, offset]);
|
||||
const [data] = result.rows;
|
||||
|
||||
for (const member of data?.data || []) {
|
||||
for (const member of data?.members.data || []) {
|
||||
member.progress = member.all_tasks_count > 0
|
||||
? ((member.completed_tasks_count / member.all_tasks_count) * 100).toFixed(0) : 0;
|
||||
}
|
||||
|
||||
return res.status(200).send(new ServerResponse(true, data || this.paginatedDatasetDefaultStruct));
|
||||
return res.status(200).send(new ServerResponse(true, data?.members || this.paginatedDatasetDefaultStruct));
|
||||
}
|
||||
|
||||
@HandleExceptions()
|
||||
@@ -749,186 +756,4 @@ export default class ProjectsController extends WorklenzControllerBase {
|
||||
|
||||
}
|
||||
|
||||
@HandleExceptions()
|
||||
public static async getGrouped(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
|
||||
// Use qualified field name for projects to avoid ambiguity
|
||||
const {searchQuery, sortField, sortOrder, size, offset} = this.toPaginationOptions(req.query, ["projects.name"]);
|
||||
const groupBy = req.query.groupBy as string || "category";
|
||||
|
||||
const filterByMember = !req.user?.owner && !req.user?.is_admin ?
|
||||
` AND is_member_of_project(projects.id, '${req.user?.id}', $1) ` : "";
|
||||
|
||||
const isFavorites = req.query.filter === "1" ? ` AND EXISTS(SELECT user_id FROM favorite_projects WHERE user_id = '${req.user?.id}' AND project_id = projects.id)` : "";
|
||||
const isArchived = req.query.filter === "2"
|
||||
? ` AND EXISTS(SELECT user_id FROM archived_projects WHERE user_id = '${req.user?.id}' AND project_id = projects.id)`
|
||||
: ` AND NOT EXISTS(SELECT user_id FROM archived_projects WHERE user_id = '${req.user?.id}' AND project_id = projects.id)`;
|
||||
const categories = this.getFilterByCategoryWhereClosure(req.query.categories as string);
|
||||
const statuses = this.getFilterByStatusWhereClosure(req.query.statuses as string);
|
||||
|
||||
// Determine grouping field and join based on groupBy parameter
|
||||
let groupField = "";
|
||||
let groupName = "";
|
||||
let groupColor = "";
|
||||
let groupJoin = "";
|
||||
let groupByFields = "";
|
||||
let groupOrderBy = "";
|
||||
|
||||
switch (groupBy) {
|
||||
case "client":
|
||||
groupField = "COALESCE(projects.client_id::text, 'no-client')";
|
||||
groupName = "COALESCE(clients.name, 'No Client')";
|
||||
groupColor = "'#688'";
|
||||
groupJoin = "LEFT JOIN clients ON projects.client_id = clients.id";
|
||||
groupByFields = "projects.client_id, clients.name";
|
||||
groupOrderBy = "COALESCE(clients.name, 'No Client')";
|
||||
break;
|
||||
case "status":
|
||||
groupField = "COALESCE(projects.status_id::text, 'no-status')";
|
||||
groupName = "COALESCE(sys_project_statuses.name, 'No Status')";
|
||||
groupColor = "COALESCE(sys_project_statuses.color_code, '#888')";
|
||||
groupJoin = "LEFT JOIN sys_project_statuses ON projects.status_id = sys_project_statuses.id";
|
||||
groupByFields = "projects.status_id, sys_project_statuses.name, sys_project_statuses.color_code";
|
||||
groupOrderBy = "COALESCE(sys_project_statuses.name, 'No Status')";
|
||||
break;
|
||||
case "category":
|
||||
default:
|
||||
groupField = "COALESCE(projects.category_id::text, 'uncategorized')";
|
||||
groupName = "COALESCE(project_categories.name, 'Uncategorized')";
|
||||
groupColor = "COALESCE(project_categories.color_code, '#888')";
|
||||
groupJoin = "LEFT JOIN project_categories ON projects.category_id = project_categories.id";
|
||||
groupByFields = "projects.category_id, project_categories.name, project_categories.color_code";
|
||||
groupOrderBy = "COALESCE(project_categories.name, 'Uncategorized')";
|
||||
}
|
||||
|
||||
// Ensure sortField is properly qualified for the inner project query
|
||||
let qualifiedSortField = sortField;
|
||||
if (Array.isArray(sortField)) {
|
||||
qualifiedSortField = sortField[0]; // Take the first field if it's an array
|
||||
}
|
||||
// Replace "projects." with "p2." for the inner query
|
||||
const innerSortField = qualifiedSortField.replace("projects.", "p2.");
|
||||
|
||||
const q = `
|
||||
SELECT ROW_TO_JSON(rec) AS groups
|
||||
FROM (
|
||||
SELECT COUNT(DISTINCT ${groupField}) AS total_groups,
|
||||
(SELECT COALESCE(ARRAY_TO_JSON(ARRAY_AGG(ROW_TO_JSON(group_data))), '[]'::JSON)
|
||||
FROM (
|
||||
SELECT ${groupField} AS group_key,
|
||||
${groupName} AS group_name,
|
||||
${groupColor} AS group_color,
|
||||
COUNT(*) AS project_count,
|
||||
(SELECT COALESCE(ARRAY_TO_JSON(ARRAY_AGG(ROW_TO_JSON(project_data))), '[]'::JSON)
|
||||
FROM (
|
||||
SELECT p2.id,
|
||||
p2.name,
|
||||
(SELECT sys_project_statuses.name FROM sys_project_statuses WHERE sys_project_statuses.id = p2.status_id) AS status,
|
||||
(SELECT sys_project_statuses.color_code FROM sys_project_statuses WHERE sys_project_statuses.id = p2.status_id) AS status_color,
|
||||
(SELECT sys_project_statuses.icon FROM sys_project_statuses WHERE sys_project_statuses.id = p2.status_id) AS status_icon,
|
||||
EXISTS(SELECT user_id
|
||||
FROM favorite_projects
|
||||
WHERE user_id = '${req.user?.id}'
|
||||
AND project_id = p2.id) AS favorite,
|
||||
EXISTS(SELECT user_id
|
||||
FROM archived_projects
|
||||
WHERE user_id = '${req.user?.id}'
|
||||
AND project_id = p2.id) AS archived,
|
||||
p2.color_code,
|
||||
p2.start_date,
|
||||
p2.end_date,
|
||||
p2.category_id,
|
||||
(SELECT COUNT(*)
|
||||
FROM tasks
|
||||
WHERE archived IS FALSE
|
||||
AND project_id = p2.id) AS all_tasks_count,
|
||||
(SELECT COUNT(*)
|
||||
FROM tasks
|
||||
WHERE archived IS FALSE
|
||||
AND project_id = p2.id
|
||||
AND status_id IN (SELECT task_statuses.id
|
||||
FROM task_statuses
|
||||
WHERE task_statuses.project_id = p2.id
|
||||
AND task_statuses.category_id IN
|
||||
(SELECT sys_task_status_categories.id FROM sys_task_status_categories WHERE sys_task_status_categories.is_done IS TRUE))) AS completed_tasks_count,
|
||||
(SELECT COUNT(*)
|
||||
FROM project_members
|
||||
WHERE project_members.project_id = p2.id) AS members_count,
|
||||
(SELECT get_project_members(p2.id)) AS names,
|
||||
(SELECT clients.name FROM clients WHERE clients.id = p2.client_id) AS client_name,
|
||||
(SELECT users.name FROM users WHERE users.id = p2.owner_id) AS project_owner,
|
||||
(SELECT project_categories.name FROM project_categories WHERE project_categories.id = p2.category_id) AS category_name,
|
||||
(SELECT project_categories.color_code
|
||||
FROM project_categories
|
||||
WHERE project_categories.id = p2.category_id) AS category_color,
|
||||
((SELECT project_members.team_member_id as team_member_id
|
||||
FROM project_members
|
||||
WHERE project_members.project_id = p2.id
|
||||
AND project_members.project_access_level_id = (SELECT project_access_levels.id FROM project_access_levels WHERE project_access_levels.key = 'PROJECT_MANAGER'))) AS project_manager_team_member_id,
|
||||
(SELECT project_members.default_view
|
||||
FROM project_members
|
||||
WHERE project_members.project_id = p2.id
|
||||
AND project_members.team_member_id = '${req.user?.team_member_id}') AS team_member_default_view,
|
||||
(SELECT CASE
|
||||
WHEN ((SELECT MAX(tasks.updated_at)
|
||||
FROM tasks
|
||||
WHERE tasks.archived IS FALSE
|
||||
AND tasks.project_id = p2.id) >
|
||||
p2.updated_at)
|
||||
THEN (SELECT MAX(tasks.updated_at)
|
||||
FROM tasks
|
||||
WHERE tasks.archived IS FALSE
|
||||
AND tasks.project_id = p2.id)
|
||||
ELSE p2.updated_at END) AS updated_at
|
||||
FROM projects p2
|
||||
${groupJoin.replace("projects.", "p2.")}
|
||||
WHERE p2.team_id = $1
|
||||
AND ${groupField.replace("projects.", "p2.")} = ${groupField}
|
||||
${categories.replace("projects.", "p2.")}
|
||||
${statuses.replace("projects.", "p2.")}
|
||||
${isArchived.replace("projects.", "p2.")}
|
||||
${isFavorites.replace("projects.", "p2.")}
|
||||
${filterByMember.replace("projects.", "p2.")}
|
||||
${searchQuery.replace("projects.", "p2.")}
|
||||
ORDER BY ${innerSortField} ${sortOrder}
|
||||
) project_data
|
||||
) AS projects
|
||||
FROM projects
|
||||
${groupJoin}
|
||||
WHERE projects.team_id = $1 ${categories} ${statuses} ${isArchived} ${isFavorites} ${filterByMember} ${searchQuery}
|
||||
GROUP BY ${groupByFields}
|
||||
ORDER BY ${groupOrderBy}
|
||||
LIMIT $2 OFFSET $3
|
||||
) group_data
|
||||
) AS data
|
||||
FROM projects
|
||||
${groupJoin}
|
||||
WHERE projects.team_id = $1 ${categories} ${statuses} ${isArchived} ${isFavorites} ${filterByMember} ${searchQuery}
|
||||
) rec;
|
||||
`;
|
||||
|
||||
const result = await db.query(q, [req.user?.team_id || null, size, offset]);
|
||||
const [data] = result.rows;
|
||||
|
||||
// Process the grouped data
|
||||
for (const group of data?.groups.data || []) {
|
||||
for (const project of group.projects || []) {
|
||||
project.progress = project.all_tasks_count > 0
|
||||
? ((project.completed_tasks_count / project.all_tasks_count) * 100).toFixed(0) : 0;
|
||||
|
||||
project.updated_at_string = moment(project.updated_at).fromNow();
|
||||
|
||||
project.names = this.createTagList(project?.names);
|
||||
project.names.map((a: any) => a.color_code = getColor(a.name));
|
||||
|
||||
if (project.project_manager_team_member_id) {
|
||||
project.project_manager = {
|
||||
id: project.project_manager_team_member_id
|
||||
};
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return res.status(200).send(new ServerResponse(true, data?.groups || { total_groups: 0, data: [] }));
|
||||
}
|
||||
|
||||
}
|
||||
|
||||
@@ -415,15 +415,20 @@ export default class ReportingController extends WorklenzControllerBase {

  @HandleExceptions()
  public static async getMyTeams(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
    const selectedTeamId = req.user?.team_id;
    if (!selectedTeamId) {
      return res.status(400).send(new ServerResponse(false, "No selected team"));
    }
    const q = `SELECT team_id AS id, name
               FROM team_members tm
               LEFT JOIN teams ON teams.id = tm.team_id
               WHERE tm.user_id = $1
                 AND tm.team_id = $2
                 AND role_id IN (SELECT id
                                 FROM roles
                                 WHERE (admin_role IS TRUE OR owner IS TRUE))
               ORDER BY name;`;
    const result = await db.query(q, [req.user?.id]);
    const result = await db.query(q, [req.user?.id, selectedTeamId]);
    result.rows.forEach((team: any) => team.selected = true);
    return res.status(200).send(new ServerResponse(true, result.rows));
  }
@@ -1,179 +0,0 @@
|
||||
// Example of updated getMemberTimeSheets method with timezone support
|
||||
// This shows the key changes needed to handle timezones properly
|
||||
|
||||
import moment from "moment-timezone";
|
||||
import db from "../../config/db";
|
||||
import { IWorkLenzRequest } from "../../interfaces/worklenz-request";
|
||||
import { IWorkLenzResponse } from "../../interfaces/worklenz-response";
|
||||
import { ServerResponse } from "../../models/server-response";
|
||||
import { DATE_RANGES } from "../../shared/constants";
|
||||
|
||||
export async function getMemberTimeSheets(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
|
||||
const archived = req.query.archived === "true";
|
||||
const teams = (req.body.teams || []) as string[];
|
||||
const teamIds = teams.map(id => `'${id}'`).join(",");
|
||||
const projects = (req.body.projects || []) as string[];
|
||||
const projectIds = projects.map(p => `'${p}'`).join(",");
|
||||
const {billable} = req.body;
|
||||
|
||||
// Get user timezone from request or database
|
||||
const userTimezone = req.body.timezone || await getUserTimezone(req.user?.id || "");
|
||||
|
||||
if (!teamIds || !projectIds.length)
|
||||
return res.status(200).send(new ServerResponse(true, { users: [], projects: [] }));
|
||||
|
||||
const { duration, date_range } = req.body;
|
||||
|
||||
// Calculate date range with timezone support
|
||||
let startDate: moment.Moment;
|
||||
let endDate: moment.Moment;
|
||||
|
||||
if (date_range && date_range.length === 2) {
|
||||
// Convert user's local dates to their timezone's start/end of day
|
||||
startDate = moment.tz(date_range[0], userTimezone).startOf("day");
|
||||
endDate = moment.tz(date_range[1], userTimezone).endOf("day");
|
||||
} else if (duration === DATE_RANGES.ALL_TIME) {
|
||||
const minDateQuery = `SELECT MIN(COALESCE(start_date, created_at)) as min_date FROM projects WHERE id IN (${projectIds})`;
|
||||
const minDateResult = await db.query(minDateQuery, []);
|
||||
const minDate = minDateResult.rows[0]?.min_date;
|
||||
startDate = minDate ? moment.tz(minDate, userTimezone) : moment.tz("2000-01-01", userTimezone);
|
||||
endDate = moment.tz(userTimezone);
|
||||
} else {
|
||||
// Calculate ranges based on user's timezone
|
||||
const now = moment.tz(userTimezone);
|
||||
|
||||
switch (duration) {
|
||||
case DATE_RANGES.YESTERDAY:
|
||||
startDate = now.clone().subtract(1, "day").startOf("day");
|
||||
endDate = now.clone().subtract(1, "day").endOf("day");
|
||||
break;
|
||||
case DATE_RANGES.LAST_WEEK:
|
||||
startDate = now.clone().subtract(1, "week").startOf("isoWeek");
|
||||
endDate = now.clone().subtract(1, "week").endOf("isoWeek");
|
||||
break;
|
||||
case DATE_RANGES.LAST_MONTH:
|
||||
startDate = now.clone().subtract(1, "month").startOf("month");
|
||||
endDate = now.clone().subtract(1, "month").endOf("month");
|
||||
break;
|
||||
case DATE_RANGES.LAST_QUARTER:
|
||||
startDate = now.clone().subtract(3, "months").startOf("day");
|
||||
endDate = now.clone().endOf("day");
|
||||
break;
|
||||
default:
|
||||
startDate = now.clone().startOf("day");
|
||||
endDate = now.clone().endOf("day");
|
||||
}
|
||||
}
|
||||
|
||||
// Convert to UTC for database queries
|
||||
const startUtc = startDate.utc().format("YYYY-MM-DD HH:mm:ss");
|
||||
const endUtc = endDate.utc().format("YYYY-MM-DD HH:mm:ss");
|
||||
|
||||
// Calculate working days in user's timezone
|
||||
const totalDays = endDate.diff(startDate, "days") + 1;
|
||||
let workingDays = 0;
|
||||
|
||||
const current = startDate.clone();
|
||||
while (current.isSameOrBefore(endDate, "day")) {
|
||||
if (current.isoWeekday() >= 1 && current.isoWeekday() <= 5) {
|
||||
workingDays++;
|
||||
}
|
||||
current.add(1, "day");
|
||||
}
|
||||
|
||||
// Updated SQL query with proper timezone handling
|
||||
const billableQuery = buildBillableQuery(billable);
|
||||
const archivedClause = archived ? "" : `AND projects.id NOT IN (SELECT project_id FROM archived_projects WHERE project_id = projects.id AND user_id = '${req.user?.id}')`;
|
||||
|
||||
const q = `
|
||||
WITH project_hours AS (
|
||||
SELECT
|
||||
id,
|
||||
COALESCE(hours_per_day, 8) as hours_per_day
|
||||
FROM projects
|
||||
WHERE id IN (${projectIds})
|
||||
),
|
||||
total_working_hours AS (
|
||||
SELECT
|
||||
SUM(hours_per_day) * ${workingDays} as total_hours
|
||||
FROM project_hours
|
||||
)
|
||||
SELECT
|
||||
u.id,
|
||||
u.email,
|
||||
tm.name,
|
||||
tm.color_code,
|
||||
COALESCE(SUM(twl.time_spent), 0) as logged_time,
|
||||
COALESCE(SUM(twl.time_spent), 0) / 3600.0 as value,
|
||||
(SELECT total_hours FROM total_working_hours) as total_working_hours,
|
||||
CASE
|
||||
WHEN (SELECT total_hours FROM total_working_hours) > 0
|
||||
THEN ROUND((COALESCE(SUM(twl.time_spent), 0) / 3600.0) / (SELECT total_hours FROM total_working_hours) * 100, 2)
|
||||
ELSE 0
|
||||
END as utilization_percent,
|
||||
ROUND(COALESCE(SUM(twl.time_spent), 0) / 3600.0, 2) as utilized_hours,
|
||||
ROUND(COALESCE(SUM(twl.time_spent), 0) / 3600.0 - (SELECT total_hours FROM total_working_hours), 2) as over_under_utilized_hours,
|
||||
'${userTimezone}' as user_timezone,
|
||||
'${startDate.format("YYYY-MM-DD")}' as report_start_date,
|
||||
'${endDate.format("YYYY-MM-DD")}' as report_end_date
|
||||
FROM team_members tm
|
||||
LEFT JOIN users u ON tm.user_id = u.id
|
||||
LEFT JOIN task_work_log twl ON twl.user_id = u.id
|
||||
LEFT JOIN tasks t ON twl.task_id = t.id ${billableQuery}
|
||||
LEFT JOIN projects p ON t.project_id = p.id
|
||||
WHERE tm.team_id IN (${teamIds})
|
||||
AND (
|
||||
twl.id IS NULL
|
||||
OR (
|
||||
p.id IN (${projectIds})
|
||||
AND twl.created_at >= '${startUtc}'::TIMESTAMP
|
||||
AND twl.created_at <= '${endUtc}'::TIMESTAMP
|
||||
${archivedClause}
|
||||
)
|
||||
)
|
||||
GROUP BY u.id, u.email, tm.name, tm.color_code
|
||||
ORDER BY logged_time DESC`;
|
||||
|
||||
const result = await db.query(q, []);
|
||||
|
||||
// Add timezone context to response
|
||||
const response = {
|
||||
data: result.rows,
|
||||
timezone_info: {
|
||||
user_timezone: userTimezone,
|
||||
report_period: {
|
||||
start: startDate.format("YYYY-MM-DD HH:mm:ss z"),
|
||||
end: endDate.format("YYYY-MM-DD HH:mm:ss z"),
|
||||
working_days: workingDays,
|
||||
total_days: totalDays
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
return res.status(200).send(new ServerResponse(true, response));
|
||||
}
|
||||
|
||||
async function getUserTimezone(userId: string): Promise<string> {
|
||||
const q = `SELECT tz.name as timezone
|
||||
FROM users u
|
||||
JOIN timezones tz ON u.timezone_id = tz.id
|
||||
WHERE u.id = $1`;
|
||||
const result = await db.query(q, [userId]);
|
||||
return result.rows[0]?.timezone || "UTC";
|
||||
}
|
||||
|
||||
function buildBillableQuery(billable: { billable: boolean; nonBillable: boolean }): string {
|
||||
if (!billable) return "";
|
||||
|
||||
const { billable: isBillable, nonBillable } = billable;
|
||||
|
||||
if (isBillable && nonBillable) {
|
||||
return "";
|
||||
} else if (isBillable) {
|
||||
return " AND tasks.billable IS TRUE";
|
||||
} else if (nonBillable) {
|
||||
return " AND tasks.billable IS FALSE";
|
||||
}
|
||||
|
||||
return "";
|
||||
}
|
||||
@@ -445,27 +445,52 @@ export default class ReportingAllocationController extends ReportingControllerBa
      }
    }

    // Count only weekdays (Mon-Fri) in the period
    // Get organization working days
    const orgWorkingDaysQuery = `
      SELECT monday, tuesday, wednesday, thursday, friday, saturday, sunday
      FROM organization_working_days
      WHERE organization_id IN (
        SELECT t.organization_id
        FROM teams t
        WHERE t.id IN (${teamIds})
        LIMIT 1
      );
    `;
    const orgWorkingDaysResult = await db.query(orgWorkingDaysQuery, []);
    const workingDaysConfig = orgWorkingDaysResult.rows[0] || {
      monday: true,
      tuesday: true,
      wednesday: true,
      thursday: true,
      friday: true,
      saturday: false,
      sunday: false
    };

    // Count working days based on organization settings
    let workingDays = 0;
    let current = startDate.clone();
    while (current.isSameOrBefore(endDate, 'day')) {
      const day = current.isoWeekday();
      if (day >= 1 && day <= 5) workingDays++;
      if (
        (day === 1 && workingDaysConfig.monday) ||
        (day === 2 && workingDaysConfig.tuesday) ||
        (day === 3 && workingDaysConfig.wednesday) ||
        (day === 4 && workingDaysConfig.thursday) ||
        (day === 5 && workingDaysConfig.friday) ||
        (day === 6 && workingDaysConfig.saturday) ||
        (day === 7 && workingDaysConfig.sunday)
      ) {
        workingDays++;
      }
      current.add(1, 'day');
    }

    // Get hours_per_day for all selected projects
    const projectHoursQuery = `SELECT id, hours_per_day FROM projects WHERE id IN (${projectIds})`;
    const projectHoursResult = await db.query(projectHoursQuery, []);
    const projectHoursMap: Record<string, number> = {};
    for (const row of projectHoursResult.rows) {
      projectHoursMap[row.id] = row.hours_per_day || 8;
    }
    // Sum total working hours for all selected projects
    let totalWorkingHours = 0;
    for (const pid of Object.keys(projectHoursMap)) {
      totalWorkingHours += workingDays * projectHoursMap[pid];
    }
    // Get organization working hours
    const orgWorkingHoursQuery = `SELECT working_hours FROM organizations WHERE id = (SELECT t.organization_id FROM teams t WHERE t.id IN (${teamIds}) LIMIT 1)`;
    const orgWorkingHoursResult = await db.query(orgWorkingHoursQuery, []);
    const orgWorkingHours = orgWorkingHoursResult.rows[0]?.working_hours || 8;
    let totalWorkingHours = workingDays * orgWorkingHours;

    const durationClause = this.getDateRangeClause(duration || DATE_RANGES.LAST_WEEK, date_range);
    const archivedClause = archived
@@ -490,11 +515,18 @@ export default class ReportingAllocationController extends ReportingControllerBa
      member.value = member.logged_time ? parseFloat(moment.duration(member.logged_time, "seconds").asHours().toFixed(2)) : 0;
      member.color_code = getColor(member.name);
      member.total_working_hours = totalWorkingHours;
      member.utilization_percent = (totalWorkingHours > 0 && member.logged_time) ? ((parseFloat(member.logged_time) / (totalWorkingHours * 3600)) * 100).toFixed(2) : '0.00';
      member.utilized_hours = member.logged_time ? (parseFloat(member.logged_time) / 3600).toFixed(2) : '0.00';
      // Over/under utilized hours: utilized_hours - total_working_hours
      const overUnder = member.utilized_hours && member.total_working_hours ? (parseFloat(member.utilized_hours) - member.total_working_hours) : 0;
      member.over_under_utilized_hours = overUnder.toFixed(2);
      if (totalWorkingHours === 0) {
        member.utilization_percent = member.logged_time && parseFloat(member.logged_time) > 0 ? 'N/A' : '0.00';
        member.utilized_hours = member.logged_time ? (parseFloat(member.logged_time) / 3600).toFixed(2) : '0.00';
        // Over/under utilized hours: all logged time is over-utilized
        member.over_under_utilized_hours = member.utilized_hours;
      } else {
        member.utilization_percent = (member.logged_time && totalWorkingHours > 0) ? ((parseFloat(member.logged_time) / (totalWorkingHours * 3600)) * 100).toFixed(2) : '0.00';
        member.utilized_hours = member.logged_time ? (parseFloat(member.logged_time) / 3600).toFixed(2) : '0.00';
        // Over/under utilized hours: utilized_hours - total_working_hours
        const overUnder = member.utilized_hours && member.total_working_hours ? (parseFloat(member.utilized_hours) - member.total_working_hours) : 0;
        member.over_under_utilized_hours = overUnder.toFixed(2);
      }
    }

    return res.status(200).send(new ServerResponse(true, result.rows));
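A small worked example of the utilization arithmetic used above, following the per-project variant (workingDays multiplied by hours_per_day, summed across projects); the numbers are illustrative, not taken from the diff:

// Assumed inputs: 2 projects x 8 hours_per_day x 10 working days = 160 total working hours.
const totalWorkingHours = 2 * 8 * 10;                                           // 160
const loggedSeconds = 432000;                                                   // 120 hours logged
const utilizedHours = loggedSeconds / 3600;                                     // 120
const utilizationPercent = (loggedSeconds / (totalWorkingHours * 3600)) * 100;  // 75
const overUnderUtilizedHours = utilizedHours - totalWorkingHours;               // -40, i.e. under-utilized by 40h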
@@ -1,117 +0,0 @@
|
||||
import WorklenzControllerBase from "../worklenz-controller-base";
|
||||
import { IWorkLenzRequest } from "../../interfaces/worklenz-request";
|
||||
import db from "../../config/db";
|
||||
import moment from "moment-timezone";
|
||||
import { DATE_RANGES } from "../../shared/constants";
|
||||
|
||||
export default abstract class ReportingControllerBaseWithTimezone extends WorklenzControllerBase {
|
||||
|
||||
/**
|
||||
* Get the user's timezone from the database or request
|
||||
* @param userId - The user ID
|
||||
* @returns The user's timezone or 'UTC' as default
|
||||
*/
|
||||
protected static async getUserTimezone(userId: string): Promise<string> {
|
||||
const q = `SELECT tz.name as timezone
|
||||
FROM users u
|
||||
JOIN timezones tz ON u.timezone_id = tz.id
|
||||
WHERE u.id = $1`;
|
||||
const result = await db.query(q, [userId]);
|
||||
return result.rows[0]?.timezone || 'UTC';
|
||||
}
|
||||
|
||||
/**
|
||||
* Generate date range clause with timezone support
|
||||
* @param key - Date range key (e.g., YESTERDAY, LAST_WEEK)
|
||||
* @param dateRange - Array of date strings
|
||||
* @param userTimezone - User's timezone (e.g., 'America/New_York')
|
||||
* @returns SQL clause for date filtering
|
||||
*/
|
||||
protected static getDateRangeClauseWithTimezone(key: string, dateRange: string[], userTimezone: string) {
|
||||
// For custom date ranges
|
||||
if (dateRange.length === 2) {
|
||||
// Convert dates to user's timezone start/end of day
|
||||
const start = moment.tz(dateRange[0], userTimezone).startOf('day');
|
||||
const end = moment.tz(dateRange[1], userTimezone).endOf('day');
|
||||
|
||||
// Convert to UTC for database comparison
|
||||
const startUtc = start.utc().format("YYYY-MM-DD HH:mm:ss");
|
||||
const endUtc = end.utc().format("YYYY-MM-DD HH:mm:ss");
|
||||
|
||||
if (start.isSame(end, 'day')) {
|
||||
// Single day selection
|
||||
return `AND task_work_log.created_at >= '${startUtc}'::TIMESTAMP AND task_work_log.created_at <= '${endUtc}'::TIMESTAMP`;
|
||||
}
|
||||
|
||||
return `AND task_work_log.created_at >= '${startUtc}'::TIMESTAMP AND task_work_log.created_at <= '${endUtc}'::TIMESTAMP`;
|
||||
}
|
||||
|
||||
// For predefined ranges, calculate based on user's timezone
|
||||
const now = moment.tz(userTimezone);
|
||||
let startDate, endDate;
|
||||
|
||||
switch (key) {
|
||||
case DATE_RANGES.YESTERDAY:
|
||||
startDate = now.clone().subtract(1, 'day').startOf('day');
|
||||
endDate = now.clone().subtract(1, 'day').endOf('day');
|
||||
break;
|
||||
case DATE_RANGES.LAST_WEEK:
|
||||
startDate = now.clone().subtract(1, 'week').startOf('week');
|
||||
endDate = now.clone().subtract(1, 'week').endOf('week');
|
||||
break;
|
||||
case DATE_RANGES.LAST_MONTH:
|
||||
startDate = now.clone().subtract(1, 'month').startOf('month');
|
||||
endDate = now.clone().subtract(1, 'month').endOf('month');
|
||||
break;
|
||||
case DATE_RANGES.LAST_QUARTER:
|
||||
startDate = now.clone().subtract(3, 'months').startOf('day');
|
||||
endDate = now.clone().endOf('day');
|
||||
break;
|
||||
default:
|
||||
return "";
|
||||
}
|
||||
|
||||
if (startDate && endDate) {
|
||||
const startUtc = startDate.utc().format("YYYY-MM-DD HH:mm:ss");
|
||||
const endUtc = endDate.utc().format("YYYY-MM-DD HH:mm:ss");
|
||||
return `AND task_work_log.created_at >= '${startUtc}'::TIMESTAMP AND task_work_log.created_at <= '${endUtc}'::TIMESTAMP`;
|
||||
}
|
||||
|
||||
return "";
|
||||
}
|
||||
|
||||
/**
|
||||
* Format dates for display in user's timezone
|
||||
* @param date - Date to format
|
||||
* @param userTimezone - User's timezone
|
||||
* @param format - Moment format string
|
||||
* @returns Formatted date string
|
||||
*/
|
||||
protected static formatDateInTimezone(date: string | Date, userTimezone: string, format: string = "YYYY-MM-DD HH:mm:ss") {
|
||||
return moment.tz(date, userTimezone).format(format);
|
||||
}
|
||||
|
||||
/**
|
||||
* Get working days count between two dates in user's timezone
|
||||
* @param startDate - Start date
|
||||
* @param endDate - End date
|
||||
* @param userTimezone - User's timezone
|
||||
* @returns Number of working days
|
||||
*/
|
||||
protected static getWorkingDaysInTimezone(startDate: string, endDate: string, userTimezone: string): number {
|
||||
const start = moment.tz(startDate, userTimezone);
|
||||
const end = moment.tz(endDate, userTimezone);
|
||||
let workingDays = 0;
|
||||
|
||||
const current = start.clone();
|
||||
while (current.isSameOrBefore(end, 'day')) {
|
||||
// Monday = 1, Friday = 5
|
||||
if (current.isoWeekday() >= 1 && current.isoWeekday() <= 5) {
|
||||
workingDays++;
|
||||
}
|
||||
current.add(1, 'day');
|
||||
}
|
||||
|
||||
return workingDays;
|
||||
}
|
||||
}
|
||||
@@ -6,69 +6,10 @@ import { IWorkLenzResponse } from "../../interfaces/worklenz-response";
|
||||
import { ServerResponse } from "../../models/server-response";
|
||||
import { DATE_RANGES, TASK_PRIORITY_COLOR_ALPHA } from "../../shared/constants";
|
||||
import { formatDuration, getColor, int } from "../../shared/utils";
|
||||
import ReportingControllerBaseWithTimezone from "./reporting-controller-base-with-timezone";
|
||||
import ReportingControllerBase from "./reporting-controller-base";
|
||||
import Excel from "exceljs";
|
||||
|
||||
export default class ReportingMembersController extends ReportingControllerBaseWithTimezone {
|
||||
|
||||
protected static getPercentage(n: number, total: number) {
|
||||
return +(n ? (n / total) * 100 : 0).toFixed();
|
||||
}
|
||||
|
||||
protected static getCurrentTeamId(req: IWorkLenzRequest): string | null {
|
||||
return req.user?.team_id ?? null;
|
||||
}
|
||||
|
||||
public static convertMinutesToHoursAndMinutes(totalMinutes: number) {
|
||||
const hours = Math.floor(totalMinutes / 60);
|
||||
const minutes = totalMinutes % 60;
|
||||
return `${hours}h ${minutes}m`;
|
||||
}
|
||||
|
||||
public static convertSecondsToHoursAndMinutes(seconds: number) {
|
||||
const hours = Math.floor(seconds / 3600);
|
||||
const minutes = Math.floor((seconds % 3600) / 60);
|
||||
return `${hours}h ${minutes}m`;
|
||||
}
|
||||
|
||||
protected static formatEndDate(endDate: string) {
|
||||
const end = moment(endDate).format("YYYY-MM-DD");
|
||||
const fEndDate = moment(end);
|
||||
return fEndDate;
|
||||
}
|
||||
|
||||
protected static formatCurrentDate() {
|
||||
const current = moment().format("YYYY-MM-DD");
|
||||
const fCurrentDate = moment(current);
|
||||
return fCurrentDate;
|
||||
}
|
||||
|
||||
protected static getDaysLeft(endDate: string): number | null {
|
||||
if (!endDate) return null;
|
||||
|
||||
const fCurrentDate = this.formatCurrentDate();
|
||||
const fEndDate = this.formatEndDate(endDate);
|
||||
|
||||
return fEndDate.diff(fCurrentDate, "days");
|
||||
}
|
||||
|
||||
protected static isOverdue(endDate: string): boolean {
|
||||
if (!endDate) return false;
|
||||
|
||||
const fCurrentDate = this.formatCurrentDate();
|
||||
const fEndDate = this.formatEndDate(endDate);
|
||||
|
||||
return fEndDate.isBefore(fCurrentDate);
|
||||
}
|
||||
|
||||
protected static isToday(endDate: string): boolean {
|
||||
if (!endDate) return false;
|
||||
|
||||
const fCurrentDate = this.formatCurrentDate();
|
||||
const fEndDate = this.formatEndDate(endDate);
|
||||
|
||||
return fEndDate.isSame(fCurrentDate);
|
||||
}
|
||||
export default class ReportingMembersController extends ReportingControllerBase {
|
||||
|
||||
private static async getMembers(
|
||||
teamId: string, searchQuery = "",
|
||||
@@ -546,9 +487,7 @@ export default class ReportingMembersController extends ReportingControllerBaseW
|
||||
dateRange = date_range.split(",");
|
||||
}
|
||||
|
||||
// Get user timezone for proper date filtering
|
||||
const userTimezone = await this.getUserTimezone(req.user?.id as string);
|
||||
const durationClause = this.getDateRangeClauseWithTimezone(duration as string || DATE_RANGES.LAST_WEEK, dateRange, userTimezone);
|
||||
const durationClause = ReportingMembersController.getDateRangeClauseMembers(duration as string || DATE_RANGES.LAST_WEEK, dateRange, "twl");
|
||||
const minMaxDateClause = this.getMinMaxDates(duration as string || DATE_RANGES.LAST_WEEK, dateRange, "task_work_log");
|
||||
const memberName = (req.query.member_name as string)?.trim() || null;
|
||||
|
||||
@@ -1099,9 +1038,7 @@ export default class ReportingMembersController extends ReportingControllerBaseW
|
||||
public static async getMemberTimelogs(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
|
||||
const { team_member_id, team_id, duration, date_range, archived, billable } = req.body;
|
||||
|
||||
// Get user timezone for proper date filtering
|
||||
const userTimezone = await this.getUserTimezone(req.user?.id as string);
|
||||
const durationClause = this.getDateRangeClauseWithTimezone(duration || DATE_RANGES.LAST_WEEK, date_range, userTimezone);
|
||||
const durationClause = ReportingMembersController.getDateRangeClauseMembers(duration || DATE_RANGES.LAST_WEEK, date_range, "twl");
|
||||
const minMaxDateClause = this.getMinMaxDates(duration || DATE_RANGES.LAST_WEEK, date_range, "task_work_log");
|
||||
|
||||
const billableQuery = this.buildBillableQuery(billable);
|
||||
@@ -1293,8 +1230,8 @@ public static async getSingleMemberProjects(req: IWorkLenzRequest, res: IWorkLen
|
||||
row.actual_time = int(row.actual_time);
|
||||
row.estimated_time_string = this.convertMinutesToHoursAndMinutes(int(row.estimated_time));
|
||||
row.actual_time_string = this.convertSecondsToHoursAndMinutes(int(row.actual_time));
|
||||
row.days_left = this.getDaysLeft(row.end_date);
|
||||
row.is_overdue = this.isOverdue(row.end_date);
|
||||
row.days_left = ReportingControllerBase.getDaysLeft(row.end_date);
|
||||
row.is_overdue = ReportingControllerBase.isOverdue(row.end_date);
|
||||
if (row.days_left && row.is_overdue) {
|
||||
row.days_left = row.days_left.toString().replace(/-/g, "");
|
||||
}
|
||||
|
||||
@@ -74,14 +74,9 @@ export default class ScheduleControllerV2 extends WorklenzControllerBase {
      .map(day => `${day.toLowerCase()} = ${workingDays.includes(day)}`)
      .join(", ");

    const updateQuery = `
      UPDATE public.organization_working_days
    const updateQuery = `UPDATE public.organization_working_days
      SET ${setClause}, updated_at = CURRENT_TIMESTAMP
      WHERE organization_id IN (
        SELECT organization_id FROM organizations
        WHERE user_id = $1
      );
    `;
      WHERE organization_id IN (SELECT id FROM organizations WHERE user_id = $1);`;

    await db.query(updateQuery, [req.user?.owner_id]);

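A minimal sketch of the setClause construction this update relies on; the day list below is an assumption about what workingDays contains, shown only to make the generated SQL fragment concrete:

// Assumed input shape: capitalized day names coming from the request body.
const workingDays = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"];
const setClause = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
  .map(day => `${day.toLowerCase()} = ${workingDays.includes(day)}`)
  .join(", ");
// => "monday = true, tuesday = true, ..., friday = true, saturday = false, sunday = false"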
@@ -1,167 +0,0 @@
|
||||
import { IWorkLenzRequest } from "../interfaces/worklenz-request";
|
||||
import { IWorkLenzResponse } from "../interfaces/worklenz-response";
|
||||
import { ServerResponse } from "../models/server-response";
|
||||
import WorklenzControllerBase from "./worklenz-controller-base";
|
||||
import HandleExceptions from "../decorators/handle-exceptions";
|
||||
import { ISurveySubmissionRequest } from "../interfaces/survey";
|
||||
import db from "../config/db";
|
||||
|
||||
export default class SurveyController extends WorklenzControllerBase {
|
||||
@HandleExceptions()
|
||||
public static async getAccountSetupSurvey(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
|
||||
const q = `
|
||||
SELECT
|
||||
s.id,
|
||||
s.name,
|
||||
s.description,
|
||||
s.survey_type,
|
||||
s.is_active,
|
||||
COALESCE(
|
||||
json_agg(
|
||||
json_build_object(
|
||||
'id', sq.id,
|
||||
'survey_id', sq.survey_id,
|
||||
'question_key', sq.question_key,
|
||||
'question_type', sq.question_type,
|
||||
'is_required', sq.is_required,
|
||||
'sort_order', sq.sort_order,
|
||||
'options', sq.options
|
||||
) ORDER BY sq.sort_order
|
||||
) FILTER (WHERE sq.id IS NOT NULL),
|
||||
'[]'
|
||||
) AS questions
|
||||
FROM surveys s
|
||||
LEFT JOIN survey_questions sq ON s.id = sq.survey_id
|
||||
WHERE s.survey_type = 'account_setup' AND s.is_active = true
|
||||
GROUP BY s.id, s.name, s.description, s.survey_type, s.is_active
|
||||
LIMIT 1;
|
||||
`;
|
||||
|
||||
const result = await db.query(q);
|
||||
const [survey] = result.rows;
|
||||
|
||||
if (!survey) {
|
||||
return res.status(200).send(new ServerResponse(false, null, "Account setup survey not found"));
|
||||
}
|
||||
|
||||
return res.status(200).send(new ServerResponse(true, survey));
|
||||
}
|
||||
|
||||
@HandleExceptions()
|
||||
public static async submitSurveyResponse(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
|
||||
const userId = req.user?.id;
|
||||
const body = req.body as ISurveySubmissionRequest;
|
||||
|
||||
if (!userId) {
|
||||
return res.status(200).send(new ServerResponse(false, null, "User not authenticated"));
|
||||
}
|
||||
|
||||
if (!body.survey_id || !body.answers || !Array.isArray(body.answers)) {
|
||||
return res.status(200).send(new ServerResponse(false, null, "Invalid survey submission data"));
|
||||
}
|
||||
|
||||
// Check if user has already submitted a response for this survey
|
||||
const existingResponseQuery = `
|
||||
SELECT id FROM survey_responses
|
||||
WHERE user_id = $1 AND survey_id = $2;
|
||||
`;
|
||||
const existingResult = await db.query(existingResponseQuery, [userId, body.survey_id]);
|
||||
|
||||
let responseId: string;
|
||||
|
||||
if (existingResult.rows.length > 0) {
|
||||
// Update existing response
|
||||
responseId = existingResult.rows[0].id;
|
||||
|
||||
const updateResponseQuery = `
|
||||
UPDATE survey_responses
|
||||
SET is_completed = true, completed_at = NOW(), updated_at = NOW()
|
||||
WHERE id = $1;
|
||||
`;
|
||||
await db.query(updateResponseQuery, [responseId]);
|
||||
|
||||
// Delete existing answers
|
||||
const deleteAnswersQuery = `DELETE FROM survey_answers WHERE response_id = $1;`;
|
||||
await db.query(deleteAnswersQuery, [responseId]);
|
||||
} else {
|
||||
// Create new response
|
||||
const createResponseQuery = `
|
||||
INSERT INTO survey_responses (survey_id, user_id, is_completed, completed_at)
|
||||
VALUES ($1, $2, true, NOW())
|
||||
RETURNING id;
|
||||
`;
|
||||
const responseResult = await db.query(createResponseQuery, [body.survey_id, userId]);
|
||||
responseId = responseResult.rows[0].id;
|
||||
}
|
||||
|
||||
// Insert new answers
|
||||
if (body.answers.length > 0) {
|
||||
const answerValues: string[] = [];
|
||||
const params: any[] = [];
|
||||
|
||||
body.answers.forEach((answer, index) => {
|
||||
const baseIndex = index * 4;
|
||||
answerValues.push(`($${baseIndex + 1}, $${baseIndex + 2}, $${baseIndex + 3}, $${baseIndex + 4})`);
|
||||
|
||||
params.push(
|
||||
responseId,
|
||||
answer.question_id,
|
||||
answer.answer_text || null,
|
||||
answer.answer_json ? JSON.stringify(answer.answer_json) : null
|
||||
);
|
||||
});
|
||||
|
||||
const insertAnswersQuery = `
|
||||
INSERT INTO survey_answers (response_id, question_id, answer_text, answer_json)
|
||||
VALUES ${answerValues.join(', ')};
|
||||
`;
|
||||
|
||||
await db.query(insertAnswersQuery, params);
|
||||
}
|
||||
|
||||
return res.status(200).send(new ServerResponse(true, { response_id: responseId }));
|
||||
}
|
||||
|
||||
@HandleExceptions()
|
||||
public static async getUserSurveyResponse(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
|
||||
const userId = req.user?.id;
|
||||
const surveyId = req.params.survey_id;
|
||||
|
||||
if (!userId) {
|
||||
return res.status(200).send(new ServerResponse(false, null, "User not authenticated"));
|
||||
}
|
||||
|
||||
const q = `
|
||||
SELECT
|
||||
sr.id,
|
||||
sr.survey_id,
|
||||
sr.user_id,
|
||||
sr.is_completed,
|
||||
sr.started_at,
|
||||
sr.completed_at,
|
||||
COALESCE(
|
||||
json_agg(
|
||||
json_build_object(
|
||||
'question_id', sa.question_id,
|
||||
'answer_text', sa.answer_text,
|
||||
'answer_json', sa.answer_json
|
||||
)
|
||||
) FILTER (WHERE sa.id IS NOT NULL),
|
||||
'[]'
|
||||
) AS answers
|
||||
FROM survey_responses sr
|
||||
LEFT JOIN survey_answers sa ON sr.id = sa.response_id
|
||||
WHERE sr.user_id = $1 AND sr.survey_id = $2
|
||||
GROUP BY sr.id, sr.survey_id, sr.user_id, sr.is_completed, sr.started_at, sr.completed_at;
|
||||
`;
|
||||
|
||||
const result = await db.query(q, [userId, surveyId]);
|
||||
const [response] = result.rows;
|
||||
|
||||
if (!response) {
|
||||
return res.status(200).send(new ServerResponse(false, null, "Survey response not found"));
|
||||
}
|
||||
|
||||
return res.status(200).send(new ServerResponse(true, response));
|
||||
}
|
||||
}
|
||||
@@ -16,23 +16,19 @@ export default class TaskPhasesController extends WorklenzControllerBase {
    if (!req.query.id)
      return res.status(400).send(new ServerResponse(false, null, "Invalid request"));

    // Use custom name if provided, otherwise use default naming pattern
    const phaseName = req.body.name?.trim() ||
      `Untitled Phase (${(await db.query("SELECT COUNT(*) FROM project_phases WHERE project_id = $1", [req.query.id])).rows[0].count + 1})`;

    const q = `
      INSERT INTO project_phases (name, color_code, project_id, sort_index)
      VALUES (
             CONCAT('Untitled Phase (', (SELECT COUNT(*) FROM project_phases WHERE project_id = $2) + 1, ')'),
             $1,
             $2,
             $3,
             (SELECT COUNT(*) FROM project_phases WHERE project_id = $3) + 1)
             (SELECT COUNT(*) FROM project_phases WHERE project_id = $2) + 1)
      RETURNING id, name, color_code, sort_index;
    `;

    req.body.color_code = this.DEFAULT_PHASE_COLOR;

    const result = await db.query(q, [phaseName, req.body.color_code, req.query.id]);
    const result = await db.query(q, [req.body.color_code, req.query.id]);
    const [data] = result.rows;

    data.color_code = getColor(data.name) + TASK_STATUS_COLOR_ALPHA;

@@ -134,25 +134,6 @@ export default class TaskStatusesController extends WorklenzControllerBase {
    return res.status(200).send(new ServerResponse(true, data));
  }

  @HandleExceptions()
  public static async updateCategory(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
    const hasMoreCategories = await TaskStatusesController.hasMoreCategories(req.params.id, req.query.current_project_id as string);

    if (!hasMoreCategories)
      return res.status(200).send(new ServerResponse(false, null, existsErrorMessage).withTitle("Status category update failed!"));

    const q = `
      UPDATE task_statuses
      SET category_id = $2
      WHERE id = $1
        AND project_id = $3
      RETURNING (SELECT color_code FROM sys_task_status_categories WHERE id = task_statuses.category_id), (SELECT color_code_dark FROM sys_task_status_categories WHERE id = task_statuses.category_id);
    `;
    const result = await db.query(q, [req.params.id, req.body.category_id, req.query.current_project_id]);
    const [data] = result.rows;
    return res.status(200).send(new ServerResponse(true, data));
  }

  @HandleExceptions()
  public static async updateStatusOrder(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
    const q = `SELECT update_status_order($1);`;

@@ -1,6 +1,6 @@
import WorklenzControllerBase from "./worklenz-controller-base";
import { getColor } from "../shared/utils";
import { PriorityColorCodes, TASK_PRIORITY_COLOR_ALPHA, TASK_STATUS_COLOR_ALPHA } from "../shared/constants";
import {getColor} from "../shared/utils";
import {PriorityColorCodes, TASK_PRIORITY_COLOR_ALPHA, TASK_STATUS_COLOR_ALPHA} from "../shared/constants";
import moment from "moment/moment";

export const GroupBy = {
@@ -16,7 +16,6 @@ export interface ITaskGroup {
  start_date?: string;
  end_date?: string;
  color_code: string;
  color_code_dark: string;
  category_id: string | null;
  old_category_id?: string;
  todo_progress?: number;
@@ -33,14 +32,23 @@ export default class TasksControllerBase extends WorklenzControllerBase {
  }

  public static updateTaskViewModel(task: any) {
    console.log(`Processing task ${task.id} (${task.name})`);
    console.log(` manual_progress: ${task.manual_progress}, progress_value: ${task.progress_value}`);
    console.log(` project_use_manual_progress: ${task.project_use_manual_progress}, project_use_weighted_progress: ${task.project_use_weighted_progress}`);
    console.log(` has subtasks: ${task.sub_tasks_count > 0}`);

    // For parent tasks (with subtasks), always use calculated progress from subtasks
    if (task.sub_tasks_count > 0) {
      // For parent tasks without manual progress, calculate from subtasks (already done via db function)
      console.log(` Parent task with subtasks: complete_ratio=${task.complete_ratio}`);

      // Ensure progress matches complete_ratio for consistency
      task.progress = task.complete_ratio || 0;

      // Important: Parent tasks should not have manual progress
      // If they somehow do, reset it
      if (task.manual_progress) {
        console.log(` WARNING: Parent task ${task.id} had manual_progress set to true, resetting`);
        task.manual_progress = false;
        task.progress_value = null;
      }
@@ -50,20 +58,19 @@ export default class TasksControllerBase extends WorklenzControllerBase {
      // For manually set progress, use that value directly
      task.progress = parseInt(task.progress_value);
      task.complete_ratio = parseInt(task.progress_value);

      console.log(` Using manual progress: progress=${task.progress}, complete_ratio=${task.complete_ratio}`);
    }
    // For tasks with no subtasks and no manual progress
    // For tasks with no subtasks and no manual progress, calculate based on time
    else {
      // Only calculate progress based on time if time-based progress is enabled for the project
      if (task.project_use_time_progress && task.total_minutes_spent && task.total_minutes) {
        // Cap the progress at 100% to prevent showing more than 100% progress
        task.progress = Math.min(~~(task.total_minutes_spent / task.total_minutes * 100), 100);
      } else {
        // Default to 0% progress when time-based calculation is not enabled
        task.progress = 0;
      }
      task.progress = task.total_minutes_spent && task.total_minutes
        ? ~~(task.total_minutes_spent / task.total_minutes * 100)
        : 0;

      // Set complete_ratio to match progress
      task.complete_ratio = task.progress;

      console.log(` Calculated time-based progress: progress=${task.progress}, complete_ratio=${task.complete_ratio}`);
    }

    // Ensure numeric values
@@ -72,7 +79,7 @@ export default class TasksControllerBase extends WorklenzControllerBase {

    task.overdue = task.total_minutes < task.total_minutes_spent;

    task.time_spent = { hours: ~~(task.total_minutes_spent / 60), minutes: task.total_minutes_spent % 60 };
    task.time_spent = {hours: ~~(task.total_minutes_spent / 60), minutes: task.total_minutes_spent % 60};

    task.comments_count = Number(task.comments_count) ? +task.comments_count : 0;
    task.attachments_count = Number(task.attachments_count) ? +task.attachments_count : 0;
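The time-based branch above combines the spent/estimated ratio with a 100% cap. As a reading aid only, here is that rule extracted into a small pure helper; the function name is illustrative and does not exist in the codebase.

```typescript
// Illustrative helper mirroring the time-based rule above:
// floor(minutes spent / estimated minutes * 100), capped at 100, defaulting to 0.
function timeBasedProgress(totalMinutesSpent?: number, totalMinutes?: number): number {
  if (!totalMinutesSpent || !totalMinutes) return 0;
  return Math.min(Math.trunc((totalMinutesSpent / totalMinutes) * 100), 100);
}

// Example: 90 of 120 estimated minutes logged -> timeBasedProgress(90, 120) === 75
```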
File diff suppressed because it is too large
@@ -34,24 +34,29 @@ export default abstract class WorklenzControllerBase {
    const offset = queryParams.search ? 0 : (index - 1) * size;
    const paging = queryParams.paging || "true";

    // let s = "";
    // if (typeof searchField === "string") {
    //   s = `${searchField} || ' ' || id::TEXT`;
    // } else if (Array.isArray(searchField)) {
    //   s = searchField.join(" || ' ' || ");
    // }

    // const search = (queryParams.search as string || "").trim();
    // const searchQuery = search ? `AND TO_TSVECTOR(${s}) @@ TO_TSQUERY('${toTsQuery(search)}')` : "";

    const search = (queryParams.search as string || "").trim();

    let s = "";
    if (typeof searchField === "string") {
      s = ` ${searchField} ILIKE '%${search}%'`;
    } else if (Array.isArray(searchField)) {
      s = searchField.map(index => ` ${index} ILIKE '%${search}%'`).join(" OR ");
    }

    let searchQuery = "";

    if (search) {
      // Properly escape single quotes to prevent SQL syntax errors
      const escapedSearch = search.replace(/'/g, "''");

      let s = "";
      if (typeof searchField === "string") {
        s = ` ${searchField} ILIKE '%${escapedSearch}%'`;
      } else if (Array.isArray(searchField)) {
        s = searchField.map(field => ` ${field} ILIKE '%${escapedSearch}%'`).join(" OR ");
      }

      if (s) {
        searchQuery = isMemberFilter ? ` (${s}) AND ` : ` AND (${s}) `;
      }
      searchQuery = isMemberFilter ? ` (${s}) AND ` : ` AND (${s}) `;
    }

    // Sort
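The change above escapes single quotes before interpolating the search term into an `ILIKE` clause. Shown purely as a hedged alternative sketch (not what the project does), the term can instead travel as a bound query parameter while only trusted column names are interpolated:

```typescript
// Illustrative alternative: bind the search term, interpolate only trusted column names.
function buildSearchClause(fields: string[], paramIndex: number): string {
  // e.g. buildSearchClause(["name", "email"], 2) -> "(name ILIKE $2 OR email ILIKE $2)"
  return `(${fields.map(f => `${f} ILIKE $${paramIndex}`).join(" OR ")})`;
}

// Usage sketch:
// const clause = buildSearchClause(["name", "email"], 2);
// await db.query(`SELECT id FROM users WHERE team_id = $1 AND ${clause};`, [teamId, `%${search}%`]);
```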
@@ -7,5 +7,5 @@ export function startCronJobs() {
  startNotificationsJob();
  startDailyDigestJob();
  startProjectDigestJob();
  if (process.env.ENABLE_RECURRING_JOBS === "true") startRecurringTasksJob();
  // startRecurringTasksJob();
}
@@ -7,90 +7,12 @@ import TasksController from "../controllers/tasks-controller";

// At 11:00+00 (4.30pm+530) on every day-of-month if it's on every day-of-week from Monday through Friday.
// const TIME = "0 11 */1 * 1-5";
const TIME = process.env.RECURRING_JOBS_INTERVAL || "0 11 */1 * 1-5";
const TIME = "*/2 * * * *"; // runs every 2 minutes - for testing purposes
const TIME_FORMAT = "YYYY-MM-DD";
// const TIME = "0 0 * * *"; // Runs at midnight every day

const log = (value: any) => console.log("recurring-task-cron-job:", value);

// Define future limits for different schedule types
// More conservative limits to prevent task list clutter
const FUTURE_LIMITS = {
  daily: moment.duration(3, "days"),
  weekly: moment.duration(1, "week"),
  monthly: moment.duration(1, "month"),
  every_x_days: (interval: number) => moment.duration(interval, "days"),
  every_x_weeks: (interval: number) => moment.duration(interval, "weeks"),
  every_x_months: (interval: number) => moment.duration(interval, "months")
};

// Helper function to get the future limit based on schedule type
function getFutureLimit(scheduleType: string, interval?: number): moment.Duration {
  switch (scheduleType) {
    case "daily":
      return FUTURE_LIMITS.daily;
    case "weekly":
      return FUTURE_LIMITS.weekly;
    case "monthly":
      return FUTURE_LIMITS.monthly;
    case "every_x_days":
      return FUTURE_LIMITS.every_x_days(interval || 1);
    case "every_x_weeks":
      return FUTURE_LIMITS.every_x_weeks(interval || 1);
    case "every_x_months":
      return FUTURE_LIMITS.every_x_months(interval || 1);
    default:
      return moment.duration(3, "days"); // Default to 3 days
  }
}

// Helper function to batch create tasks
async function createBatchTasks(template: ITaskTemplate & IRecurringSchedule, endDates: moment.Moment[]) {
  const createdTasks = [];

  for (const nextEndDate of endDates) {
    const existingTaskQuery = `
      SELECT id FROM tasks
      WHERE schedule_id = $1 AND end_date::DATE = $2::DATE;
    `;
    const existingTaskResult = await db.query(existingTaskQuery, [template.schedule_id, nextEndDate.format(TIME_FORMAT)]);

    if (existingTaskResult.rows.length === 0) {
      const createTaskQuery = `SELECT create_quick_task($1::json) as task;`;
      const taskData = {
        name: template.name,
        priority_id: template.priority_id,
        project_id: template.project_id,
        reporter_id: template.reporter_id,
        status_id: template.status_id || null,
        end_date: nextEndDate.format(TIME_FORMAT),
        schedule_id: template.schedule_id
      };
      const createTaskResult = await db.query(createTaskQuery, [JSON.stringify(taskData)]);
      const createdTask = createTaskResult.rows[0].task;

      if (createdTask) {
        createdTasks.push(createdTask);

        for (const assignee of template.assignees) {
          await TasksController.createTaskBulkAssignees(assignee.team_member_id, template.project_id, createdTask.id, assignee.assigned_by);
        }

        for (const label of template.labels) {
          const q = `SELECT add_or_remove_task_label($1, $2) AS labels;`;
          await db.query(q, [createdTask.id, label.label_id]);
        }

        console.log(`Created task for template ${template.name} with end date ${nextEndDate.format(TIME_FORMAT)}`);
      }
    } else {
      console.log(`Skipped creating task for template ${template.name} with end date ${nextEndDate.format(TIME_FORMAT)} - task already exists`);
    }
  }

  return createdTasks;
}

async function onRecurringTaskJobTick() {
  try {
    log("(cron) Recurring tasks job started.");
@@ -111,44 +33,65 @@ async function onRecurringTaskJobTick() {
        ? moment(template.last_task_end_date)
        : moment(template.created_at);

      // Calculate future limit based on schedule type
      const futureLimit = moment(template.last_checked_at || template.created_at)
        .add(getFutureLimit(
          template.schedule_type,
          template.interval_days || template.interval_weeks || template.interval_months || 1
        ));
      const futureLimit = moment(template.last_checked_at || template.created_at).add(1, "week");

      let nextEndDate = calculateNextEndDate(template, lastTaskEndDate);
      const endDatesToCreate: moment.Moment[] = [];

      // Find all future occurrences within the limit
      while (nextEndDate.isSameOrBefore(futureLimit)) {
        if (nextEndDate.isAfter(now)) {
          endDatesToCreate.push(moment(nextEndDate));
        }
      // Find the next future occurrence
      while (nextEndDate.isSameOrBefore(now)) {
        nextEndDate = calculateNextEndDate(template, nextEndDate);
      }

      // Batch create tasks for all future dates
      if (endDatesToCreate.length > 0) {
        const createdTasks = await createBatchTasks(template, endDatesToCreate);
        createdTaskCount += createdTasks.length;

        // Update the last_checked_at in the schedule
        const updateScheduleQuery = `
          UPDATE task_recurring_schedules
          SET last_checked_at = $1::DATE,
              last_created_task_end_date = $2
          WHERE id = $3;
      // Only create a task if it's within the future limit
      if (nextEndDate.isSameOrBefore(futureLimit)) {
        const existingTaskQuery = `
          SELECT id FROM tasks
          WHERE schedule_id = $1 AND end_date::DATE = $2::DATE;
        `;
        await db.query(updateScheduleQuery, [
          moment().format(TIME_FORMAT),
          endDatesToCreate[endDatesToCreate.length - 1].format(TIME_FORMAT),
          template.schedule_id
        ]);
        const existingTaskResult = await db.query(existingTaskQuery, [template.schedule_id, nextEndDate.format(TIME_FORMAT)]);

        if (existingTaskResult.rows.length === 0) {
          const createTaskQuery = `SELECT create_quick_task($1::json) as task;`;
          const taskData = {
            name: template.name,
            priority_id: template.priority_id,
            project_id: template.project_id,
            reporter_id: template.reporter_id,
            status_id: template.status_id || null,
            end_date: nextEndDate.format(TIME_FORMAT),
            schedule_id: template.schedule_id
          };
          const createTaskResult = await db.query(createTaskQuery, [JSON.stringify(taskData)]);
          const createdTask = createTaskResult.rows[0].task;

          if (createdTask) {
            createdTaskCount++;

            for (const assignee of template.assignees) {
              await TasksController.createTaskBulkAssignees(assignee.team_member_id, template.project_id, createdTask.id, assignee.assigned_by);
            }

            for (const label of template.labels) {
              const q = `SELECT add_or_remove_task_label($1, $2) AS labels;`;
              await db.query(q, [createdTask.id, label.label_id]);
            }

            console.log(`Created task for template ${template.name} with end date ${nextEndDate.format(TIME_FORMAT)}`);
          }
        } else {
          console.log(`Skipped creating task for template ${template.name} with end date ${nextEndDate.format(TIME_FORMAT)} - task already exists`);
        }
      } else {
        console.log(`No tasks created for template ${template.name} - next occurrence is beyond the future limit`);
        console.log(`No task created for template ${template.name} - next occurrence is beyond the future limit`);
      }

      // Update the last_checked_at in the schedule
      const updateScheduleQuery = `
        UPDATE task_recurring_schedules
        SET last_checked_at = $1::DATE, last_created_task_end_date = $2
        WHERE id = $3;
      `;
      await db.query(updateScheduleQuery, [moment(template.last_checked_at || template.created_at).add(1, "day").format(TIME_FORMAT), nextEndDate.format(TIME_FORMAT), template.schedule_id]);
    }

    log(`(cron) Recurring tasks job ended with ${createdTaskCount} new tasks created.`);
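`calculateNextEndDate` is called throughout this job but its body is not part of the diff. Purely as a hedged sketch of what a compatible implementation could look like for the schedule types named above (the real function may differ):

```typescript
import moment from "moment";

// Hypothetical shape of a schedule row, inferred from how the job uses it above.
interface IRecurringScheduleLike {
  schedule_type: "daily" | "weekly" | "monthly" | "every_x_days" | "every_x_weeks" | "every_x_months";
  interval_days?: number;
  interval_weeks?: number;
  interval_months?: number;
}

// Illustrative only: advance the last end date by one schedule step.
function calculateNextEndDateSketch(template: IRecurringScheduleLike, last: moment.Moment): moment.Moment {
  const next = moment(last);
  switch (template.schedule_type) {
    case "weekly": return next.add(1, "week");
    case "monthly": return next.add(1, "month");
    case "every_x_days": return next.add(template.interval_days || 1, "days");
    case "every_x_weeks": return next.add(template.interval_weeks || 1, "weeks");
    case "every_x_months": return next.add(template.interval_months || 1, "months");
    case "daily":
    default: return next.add(1, "day");
  }
}
```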
@@ -1,37 +0,0 @@
export interface ISurveyQuestion {
  id: string;
  survey_id: string;
  question_key: string;
  question_type: 'single_choice' | 'multiple_choice' | 'text';
  is_required: boolean;
  sort_order: number;
  options?: string[];
}

export interface ISurvey {
  id: string;
  name: string;
  description?: string;
  survey_type: 'account_setup' | 'onboarding' | 'feedback';
  is_active: boolean;
  questions?: ISurveyQuestion[];
}

export interface ISurveyAnswer {
  question_id: string;
  answer_text?: string;
  answer_json?: string[];
}

export interface ISurveyResponse {
  id?: string;
  survey_id: string;
  user_id?: string;
  is_completed: boolean;
  answers: ISurveyAnswer[];
}

export interface ISurveySubmissionRequest {
  survey_id: string;
  answers: ISurveyAnswer[];
}
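For orientation, a submission payload that satisfies the (now removed) `ISurveySubmissionRequest` shape above would look like the following; the IDs and answer values are placeholders, not real data.

```typescript
import { ISurveySubmissionRequest } from "../interfaces/survey";

// Placeholder payload: answer_json carries choice selections, answer_text carries free text.
const submission: ISurveySubmissionRequest = {
  survey_id: "00000000-0000-0000-0000-000000000001",
  answers: [
    { question_id: "00000000-0000-0000-0000-0000000000aa", answer_json: ["engineering"] },
    { question_id: "00000000-0000-0000-0000-0000000000ab", answer_text: "Found Worklenz through a colleague." },
  ],
};
```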
@@ -1,56 +0,0 @@
import { NextFunction } from "express";
import { IWorkLenzRequest } from "../../interfaces/worklenz-request";
import { IWorkLenzResponse } from "../../interfaces/worklenz-response";
import { ServerResponse } from "../../models/server-response";
import { ISurveySubmissionRequest } from "../../interfaces/survey";

export default function surveySubmissionValidator(req: IWorkLenzRequest, res: IWorkLenzResponse, next: NextFunction): IWorkLenzResponse | void {
  const body = req.body as ISurveySubmissionRequest;

  if (!body) {
    return res.status(200).send(new ServerResponse(false, null, "Request body is required"));
  }

  if (!body.survey_id || typeof body.survey_id !== 'string') {
    return res.status(200).send(new ServerResponse(false, null, "Survey ID is required and must be a string"));
  }

  if (!body.answers || !Array.isArray(body.answers)) {
    return res.status(200).send(new ServerResponse(false, null, "Answers are required and must be an array"));
  }

  // Validate each answer
  for (let i = 0; i < body.answers.length; i++) {
    const answer = body.answers[i];

    if (!answer.question_id || typeof answer.question_id !== 'string') {
      return res.status(200).send(new ServerResponse(false, null, `Answer ${i + 1}: Question ID is required and must be a string`));
    }

    // At least one of answer_text or answer_json should be provided
    if (!answer.answer_text && !answer.answer_json) {
      return res.status(200).send(new ServerResponse(false, null, `Answer ${i + 1}: Either answer_text or answer_json is required`));
    }

    // Validate answer_text if provided
    if (answer.answer_text && typeof answer.answer_text !== 'string') {
      return res.status(200).send(new ServerResponse(false, null, `Answer ${i + 1}: answer_text must be a string`));
    }

    // Validate answer_json if provided
    if (answer.answer_json && !Array.isArray(answer.answer_json)) {
      return res.status(200).send(new ServerResponse(false, null, `Answer ${i + 1}: answer_json must be an array`));
    }

    // Validate answer_json items are strings
    if (answer.answer_json) {
      for (let j = 0; j < answer.answer_json.length; j++) {
        if (typeof answer.answer_json[j] !== 'string') {
          return res.status(200).send(new ServerResponse(false, null, `Answer ${i + 1}: answer_json items must be strings`));
        }
      }
    }
  }

  return next();
}
@@ -3,16 +3,13 @@ import { Strategy as LocalStrategy } from "passport-local";
import { log_error } from "../../shared/utils";
import db from "../../config/db";
import { Request } from "express";
import { ERROR_KEY, SUCCESS_KEY } from "./passport-constants";

async function handleLogin(req: Request, email: string, password: string, done: any) {
  // Clear any existing flash messages
  (req.session as any).flash = {};
  console.log("Login attempt for:", email);

  if (!email || !password) {
    const errorMsg = "Please enter both email and password";
    req.flash(ERROR_KEY, errorMsg);
    return done(null, false);
    console.log("Missing credentials");
    return done(null, false, { message: "Please enter both email and password" });
  }

  try {
@@ -22,27 +19,23 @@ async function handleLogin(req: Request, email: string, password: string, done:
      AND google_id IS NULL
      AND is_deleted IS FALSE;`;
    const result = await db.query(q, [email]);
    console.log("User query result count:", result.rowCount);

    const [data] = result.rows;

    if (!data?.password) {
      const errorMsg = "No account found with this email";
      req.flash(ERROR_KEY, errorMsg);
      return done(null, false);
      console.log("No account found");
      return done(null, false, { message: "No account found with this email" });
    }

    const passwordMatch = bcrypt.compareSync(password, data.password);
    console.log("Password match:", passwordMatch);

    if (passwordMatch && email === data.email) {
      delete data.password;
      const successMsg = "User successfully logged in";
      req.flash(SUCCESS_KEY, successMsg);
      return done(null, data);
      return done(null, data, {message: "User successfully logged in"});
    }

    const errorMsg = "Incorrect email or password";
    req.flash(ERROR_KEY, errorMsg);
    return done(null, false);
    return done(null, false, { message: "Incorrect email or password" });
  } catch (error) {
    console.error("Login error:", error);
    log_error(error, req.body);
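The refactor above moves login feedback from `req.flash` into the third argument of `done`. For context, this is roughly how such a verify function is registered with passport-local; the option names follow passport-local's documented API, and `handleLogin` is assumed to be in scope or exported from the file shown above.

```typescript
import passport from "passport";
import { Strategy as LocalStrategy } from "passport-local";

// passReqToCallback matches handleLogin's (req, email, password, done) signature.
passport.use(
  new LocalStrategy(
    { usernameField: "email", passwordField: "password", passReqToCallback: true },
    handleLogin
  )
);
```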
@@ -1,4 +0,0 @@
{
  "doesNotExistText": "Na vjen keq, faqja që kërkoni nuk ekziston.",
  "backHomeButton": "Kthehu në Faqen Kryesore"
}
@@ -1,31 +0,0 @@
{
  "continue": "Vazhdo",

  "setupYourAccount": "Konfiguro Llogarinë Tënde në Worklenz.",
  "organizationStepTitle": "Emërtoni Organizatën Tuaj",
  "organizationStepLabel": "Zgjidhni një emër për llogarinë tuaj në Worklenz.",

  "projectStepTitle": "Krijoni projektin tuaj të parë",
  "projectStepLabel": "Në cilin projekt po punoni aktualisht?",
  "projectStepPlaceholder": "p.sh. Plani i Marketingut",

  "tasksStepTitle": "Krijoni detyrat tuaja të para",
  "tasksStepLabel": "Shkruani disa detyra që do të kryeni në",
  "tasksStepAddAnother": "Shto një tjetër",

  "emailPlaceholder": "Adresa email",
  "invalidEmail": "Ju lutemi vendosni një adresë email të vlefshme",
  "or": "ose",
  "templateButton": "Importo nga shablloni",
  "goBack": "Kthehu Mbrapa",
  "cancel": "Anulo",
  "create": "Krijo",
  "templateDrawerTitle": "Zgjidh nga shabllonet",
  "step3InputLabel": "Fto me email",
  "addAnother": "Shto një tjetër",
  "skipForNow": "Kalo tani për tani",
  "formTitle": "Krijoni detyrën tuaj të parë.",
  "step3Title": "Fto ekipin tënd të punojë me",
  "maxMembers": " (Mund të ftoni deri në 5 anëtarë)",
  "maxTasks": " (Mund të krijoni deri në 5 detyra)"
}
@@ -1,113 +0,0 @@
|
||||
{
|
||||
"title": "Faturimet",
|
||||
"currentBill": "Fatura Aktuale",
|
||||
"configuration": "Konfigurimi",
|
||||
"currentPlanDetails": "Detajet e Planit Aktual",
|
||||
"upgradePlan": "Përmirëso Planin",
|
||||
"cardBodyText01": "Provë falas",
|
||||
"cardBodyText02": "(Plani juaj i provës skadon në 1 muaj 19 ditë)",
|
||||
"redeemCode": "Kodi i Zbritjes",
|
||||
"accountStorage": "Depozita e Llogarisë",
|
||||
"used": "Përdorur:",
|
||||
"remaining": "E mbetur:",
|
||||
"charges": "Tarifat",
|
||||
"tooltip": "Tarifat për ciklin aktual të faturimit",
|
||||
"description": "Përshkrimi",
|
||||
"billingPeriod": "Periudha e Faturimit",
|
||||
"billStatus": "Statusi i Faturës",
|
||||
"perUserValue": "Vlera për Përdorues",
|
||||
"users": "Përdoruesit",
|
||||
|
||||
"amount": "Shuma",
|
||||
"invoices": "Faturat",
|
||||
"transactionId": "ID e Transaksionit",
|
||||
"transactionDate": "Data e Transaksionit",
|
||||
"paymentMethod": "Metoda e Pagesës",
|
||||
"status": "Statusi",
|
||||
"ltdUsers": "Mund të shtoni deri në {{ltd_users}} përdorues.",
|
||||
|
||||
"totalSeats": "Vende totale",
|
||||
"availableSeats": "Vende të disponueshme",
|
||||
"addMoreSeats": "Shto më shumë vende",
|
||||
|
||||
"drawerTitle": "Kodi i Zbritjes",
|
||||
"label": "Kodi i Zbritjes",
|
||||
"drawerPlaceholder": "Vendosni kodin tuaj të zbritjes",
|
||||
"redeemSubmit": "Paraqit",
|
||||
|
||||
"modalTitle": "Zgjidhni planin më të mirë për ekipin tuaj",
|
||||
"seatLabel": "Numri i vendeve",
|
||||
"freePlan": "Plan Falas",
|
||||
"startup": "Startup",
|
||||
"business": "Biznes",
|
||||
"tag": "Më i Popullarizuar",
|
||||
"enterprise": "Ndërmarrje",
|
||||
|
||||
"freeSubtitle": "falas përgjithmonë",
|
||||
"freeUsers": "Më e mira për përdorim personal",
|
||||
"freeText01": "100MB depozitë",
|
||||
"freeText02": "3 projekte",
|
||||
"freeText03": "5 anëtarë të ekipit",
|
||||
|
||||
"startupSubtitle": "ÇMIM I RASTËSISHËM / muaj",
|
||||
"startupUsers": "Deri në 15 përdorues",
|
||||
"startupText01": "25GB depozitë",
|
||||
"startupText02": "Projekte të pakufizuara aktive",
|
||||
"startupText03": "Orar",
|
||||
"startupText04": "Raportim",
|
||||
"startupText05": "Abonohu në projekte",
|
||||
|
||||
"businessSubtitle": "përdorues / muaj",
|
||||
"businessUsers": "16 - 200 përdorues",
|
||||
|
||||
"enterpriseUsers": "200 - 500+ përdorues",
|
||||
|
||||
"footerTitle": "Ju lutemi na jepni një numër kontakti që mund të përdorim për t'ju kontaktuar.",
|
||||
"footerLabel": "Numri i Kontaktit",
|
||||
"footerButton": "Na kontaktoni",
|
||||
|
||||
"redeemCodePlaceHolder": "Vendosni kodin tuaj të zbritjes",
|
||||
"submit": "Paraqit",
|
||||
|
||||
"trialPlan": "Provë Falas",
|
||||
"trialExpireDate": "E vlefshme deri më {{trial_expire_date}}",
|
||||
"trialExpired": "Provat tuaja falas skaduan {{trial_expire_string}}",
|
||||
"trialInProgress": "Provat tuaja falas skadojnë {{trial_expire_string}}",
|
||||
|
||||
"required": "Kjo fushë është e detyrueshme",
|
||||
"invalidCode": "Kod i pavlefshëm",
|
||||
|
||||
"selectPlan": "Zgjidhni planin më të mirë për ekipin tuaj",
|
||||
"changeSubscriptionPlan": "Ndryshoni planin tuaj të abonimit",
|
||||
"noOfSeats": "Numri i vendeve",
|
||||
"annualPlan": "Pro - Vjetor",
|
||||
"monthlyPlan": "Pro - Mujor",
|
||||
"freeForever": "Falas Përgjithmonë",
|
||||
"bestForPersonalUse": "Më e mira për përdorim personal",
|
||||
"storage": "Depozitë",
|
||||
"projects": "Projekte",
|
||||
"teamMembers": "Anëtarët e Ekipit",
|
||||
"unlimitedTeamMembers": "Anëtarë të pakufizuar të ekipit",
|
||||
"unlimitedActiveProjects": "Projekte të pakufizuara aktive",
|
||||
"schedule": "Orar",
|
||||
"reporting": "Raportim",
|
||||
"subscribeToProjects": "Abonohu në projekte",
|
||||
"billedAnnually": "Faturuar çdo vit",
|
||||
"billedMonthly": "Faturuar çdo muaj",
|
||||
|
||||
"pausePlan": "Pauzë Planin",
|
||||
"resumePlan": "Rifillo Planin",
|
||||
"changePlan": "Ndrysho Planin",
|
||||
"cancelPlan": "Anulo Planin",
|
||||
|
||||
"perMonthPerUser": "për përdorues/muaj",
|
||||
"viewInvoice": "Shiko Faturën",
|
||||
"switchToFreePlan": "Kalo në Planin Falas",
|
||||
|
||||
"expirestoday": "sot",
|
||||
"expirestomorrow": "nesër",
|
||||
"expiredDaysAgo": "{{days}} ditë më parë",
|
||||
|
||||
"continueWith": "Vazhdo me {{plan}}",
|
||||
"changeToPlan": "Ndrysho në {{plan}}"
|
||||
}
|
||||
@@ -1,8 +0,0 @@
|
||||
{
|
||||
"overview": "Përmbledhje",
|
||||
"name": "Emri i Organizatës",
|
||||
"owner": "Pronari i Organizatës",
|
||||
"admins": "Administruesit e Organizatës",
|
||||
"contactNumber": "Shto Numrin e Kontaktit",
|
||||
"edit": "Redakto"
|
||||
}
|
||||
@@ -1,12 +0,0 @@
|
||||
{
|
||||
"membersCount": "Numri i Anëtarëve",
|
||||
"createdAt": "Krijuar më",
|
||||
"projectName": "Emri i Projektit",
|
||||
"teamName": "Emri i Ekipit",
|
||||
"refreshProjects": "Rifresko Projektet",
|
||||
"searchPlaceholder": "Kërkoni sipas emrit të projektit",
|
||||
"deleteProject": "Jeni i sigurt që dëshironi të fshini këtë projekt?",
|
||||
"confirm": "Konfirmo",
|
||||
"cancel": "Anulo",
|
||||
"delete": "Fshi Projektin"
|
||||
}
|
||||
@@ -1,8 +0,0 @@
|
||||
{
|
||||
"overview": "Përmbledhje",
|
||||
"users": "Përdoruesit",
|
||||
"teams": "Ekipet",
|
||||
"billing": "Faturimi",
|
||||
"projects": "Projektet",
|
||||
"adminCenter": "Qendra Administrative"
|
||||
}
|
||||
@@ -1,33 +0,0 @@
|
||||
{
|
||||
"title": "Ekipet",
|
||||
"subtitle": "ekipet",
|
||||
"tooltip": "Rifresko ekipet",
|
||||
"placeholder": "Kërko sipas emrit",
|
||||
"addTeam": "Shto Ekip",
|
||||
"team": "Ekipi",
|
||||
"membersCount": "Numri i Anëtarëve",
|
||||
"members": "Anëtarët",
|
||||
"drawerTitle": "Krijo Ekip të Ri",
|
||||
"label": "Emri i Ekipit",
|
||||
"drawerPlaceholder": "Emri",
|
||||
"create": "Krijo",
|
||||
"delete": "Fshi",
|
||||
"settings": "Cilësimet",
|
||||
"popTitle": "Jeni i sigurt?",
|
||||
"message": "Ju lutemi shkruani një Emër",
|
||||
"teamSettings": "Cilësimet e Ekipit",
|
||||
"teamName": "Emri i Ekipit",
|
||||
"teamDescription": "Përshkrimi i Ekipit",
|
||||
"teamMembers": "Anëtarët e Ekipit",
|
||||
"teamMembersCount": "Numri i Anëtarëve të Ekipit",
|
||||
"teamMembersPlaceholder": "Kërko sipas emrit",
|
||||
"addMember": "Shto Anëtar",
|
||||
"add": "Shto",
|
||||
"update": "Përditëso",
|
||||
"teamNamePlaceholder": "Emri i ekipit",
|
||||
"user": "Përdoruesi",
|
||||
"role": "Roli",
|
||||
"owner": "Pronari",
|
||||
"admin": "Administruesi",
|
||||
"member": "Anëtari"
|
||||
}
|
||||
@@ -1,9 +0,0 @@
|
||||
{
|
||||
"title": "Përdoruesit",
|
||||
"subTitle": "përdoruesit",
|
||||
"placeholder": "Kërko sipas emrit",
|
||||
"user": "Përdoruesi",
|
||||
"email": "Email",
|
||||
"lastActivity": "Aktiviteti i Fundit",
|
||||
"refresh": "Rifresko përdoruesit"
|
||||
}
|
||||
@@ -1,34 +0,0 @@
|
||||
{
|
||||
"name": "Emri",
|
||||
"client": "Klienti",
|
||||
"category": "Kategoria",
|
||||
"status": "Statusi",
|
||||
"tasksProgress": "Përparimi i Detyrave",
|
||||
"updated_at": "E Përditësuar së Fundi",
|
||||
"members": "Anëtarët",
|
||||
"setting": "Cilësimet",
|
||||
"projects": "Projektet",
|
||||
"refreshProjects": "Rifresko projektet",
|
||||
"all": "Të gjitha",
|
||||
"favorites": "Të preferuarit",
|
||||
"archived": "E arkivuar",
|
||||
"placeholder": "Kërko sipas emrit",
|
||||
"archive": "Arkivo",
|
||||
"unarchive": "Çarkivo",
|
||||
"archiveConfirm": "Jeni i sigurt që dëshironi të arkivoni këtë projekt?",
|
||||
"unarchiveConfirm": "Jeni i sigurt që dëshironi të çarkivoni këtë projekt?",
|
||||
"yes": "Po",
|
||||
"no": "Jo",
|
||||
"clickToFilter": "Kliko për të filtruar sipas",
|
||||
"noProjects": "Nuk u gjetën projekte",
|
||||
"addToFavourites": "Shto te të preferuarit",
|
||||
"list": "Lista",
|
||||
"group": "Grupi",
|
||||
"listView": "Pamja e Listës",
|
||||
"groupView": "Pamja e Grupit",
|
||||
"groupBy": {
|
||||
"category": "Kategoria",
|
||||
"client": "Klienti"
|
||||
},
|
||||
"noPermission": "Nuk keni leje për të kryer këtë veprim"
|
||||
}
|
||||
@@ -1,5 +0,0 @@
|
||||
{
|
||||
"loggingOut": "Po dilni...",
|
||||
"authenticating": "Po autentikoheni...",
|
||||
"gettingThingsReady": "Po përgatiten gjërat për ju..."
|
||||
}
|
||||
@@ -1,12 +0,0 @@
|
||||
{
|
||||
"headerDescription": "Rivendosni fjalëkalimin tuaj",
|
||||
"emailLabel": "Email",
|
||||
"emailPlaceholder": "Vendosni email-in tuaj",
|
||||
"emailRequired": "Ju lutemi vendosni Email-in tuaj!",
|
||||
"resetPasswordButton": "Rivendos Fjalëkalimin",
|
||||
"returnToLoginButton": "Kthehu te Hyrja",
|
||||
"passwordResetSuccessMessage": "Një lidhje për rivendosjen e fjalëkalimit është dërguar në email-in tuaj.",
|
||||
"orText": "OSE",
|
||||
"successTitle": "U dërguan udhëzimet për rivendosje!",
|
||||
"successMessage": "Informacioni për rivendosje është dërguar në email-in tuaj. Ju lutemi kontrolloni email-in."
|
||||
}
|
||||
@@ -1,27 +0,0 @@
|
||||
{
|
||||
"headerDescription": "Hyni në llogarinë tuaj",
|
||||
"emailLabel": "Email",
|
||||
"emailPlaceholder": "Vendosni email-in tuaj",
|
||||
"emailRequired": "Ju lutemi vendosni Email-in tuaj!",
|
||||
"passwordLabel": "Fjalëkalimi",
|
||||
"passwordPlaceholder": "Vendosni fjalëkalimin",
|
||||
"passwordRequired": "Ju lutemi vendosni Fjalëkalimin!",
|
||||
"rememberMe": "Më mbaj mend",
|
||||
"loginButton": "Hyr",
|
||||
"signupButton": "Regjistrohu",
|
||||
"forgotPasswordButton": "Keni harruar fjalëkalimin?",
|
||||
"signInWithGoogleButton": "Hyr me Google",
|
||||
"dontHaveAccountText": "Nuk keni llogari?",
|
||||
"orText": "OSE",
|
||||
"successMessage": "Jeni futur me sukses!",
|
||||
"loginError": "Hyrja dështoi",
|
||||
"googleLoginError": "Hyrja përmes Google dështoi",
|
||||
"validationMessages": {
|
||||
"email": "Ju lutemi vendosni një adresë email të vlefshme",
|
||||
"password": "Fjalëkalimi duhet të jetë së paku 8 karaktere"
|
||||
},
|
||||
"errorMessages": {
|
||||
"loginErrorTitle": "Hyrja dështoi",
|
||||
"loginErrorMessage": "Ju lutemi kontrolloni email-in dhe fjalëkalimin dhe provoni përsëri"
|
||||
}
|
||||
}
|
||||
@@ -1,29 +0,0 @@
|
||||
{
|
||||
"headerDescription": "Regjistrohuni për të filluar",
|
||||
"nameLabel": "Emri i Plotë",
|
||||
"namePlaceholder": "Shkruani emrin tuaj të plotë",
|
||||
"nameRequired": "Ju lutemi shkruani emrin tuaj të plotë!",
|
||||
"nameMinCharacterRequired": "Emri duhet të jetë së paku 4 karaktere!",
|
||||
"emailLabel": "Email",
|
||||
"emailPlaceholder": "Shkruani email-in tuaj",
|
||||
"emailRequired": "Ju lutemi shkruani Email-in tuaj!",
|
||||
"passwordLabel": "Fjalëkalimi",
|
||||
"passwordPlaceholder": "Krijoni një fjalëkalim",
|
||||
"passwordRequired": "Ju lutemi krijoni një Fjalëkalim!",
|
||||
"passwordMinCharacterRequired": "Fjalëkalimi duhet të jetë së paku 8 karaktere!",
|
||||
"passwordPatternRequired": "Fjalëkalimi nuk plotëson kërkesat!",
|
||||
"strongPasswordPlaceholder": "Vendosni një fjalëkalim më të fortë",
|
||||
"passwordValidationAltText": "Fjalëkalimi duhet të përmbajë së paku 8 karaktere me shkronja të mëdha dhe të vogla, një numër dhe një simbol.",
|
||||
"signupSuccessMessage": "Jeni regjistruar me sukses!",
|
||||
"privacyPolicyLink": "Politika e Privatësisë",
|
||||
"termsOfUseLink": "Kushtet e Përdorimit",
|
||||
"bySigningUpText": "Duke u regjistruar, ju pranoni",
|
||||
"andText": "dhe",
|
||||
"signupButton": "Regjistrohu",
|
||||
"signInWithGoogleButton": "Hyr me Google",
|
||||
"alreadyHaveAccountText": "Keni tashmë një llogari?",
|
||||
"loginButton": "Hyr",
|
||||
"orText": "OSE",
|
||||
"reCAPTCHAVerificationError": "Gabim në Verifikimin e reCAPTCHA",
|
||||
"reCAPTCHAVerificationErrorMessage": "Nuk mundëm të verifikojmë reCAPTCHA-n tuaj. Ju lutemi provoni përsëri."
|
||||
}
|
||||
@@ -1,14 +0,0 @@
|
||||
{
|
||||
"title": "Verifikoni Email-in për Rivendosje",
|
||||
"description": "Vendosni fjalëkalimin tuaj të ri",
|
||||
"placeholder": "Vendosni fjalëkalimin tuaj të ri",
|
||||
"confirmPasswordPlaceholder": "Konfirmoni fjalëkalimin e ri",
|
||||
"passwordHint": "Të paktën 8 karaktere, me shkronja të mëdha dhe të vogla, një numër dhe një simbol.",
|
||||
"resetPasswordButton": "Rivendos fjalëkalimin",
|
||||
"orText": "Ose",
|
||||
"resendResetEmail": "Dërgo përsëri email-in e rivendosjes",
|
||||
"passwordRequired": "Ju lutemi vendosni fjalëkalimin e ri",
|
||||
"returnToLoginButton": "Kthehu te Hyrja",
|
||||
"confirmPasswordRequired": "Ju lutemi konfirmoni fjalëkalimin e ri",
|
||||
"passwordMismatch": "Fjalëkalimet nuk përputhen"
|
||||
}
|
||||
@@ -1,9 +0,0 @@
|
||||
{
|
||||
"login-success": "Hyrja u krye me sukses!",
|
||||
"login-failed": "Hyrja dështoi. Ju lutemi kontrolloni kredencialet dhe provoni përsëri.",
|
||||
"signup-success": "Regjistrimi u krye me sukses! Mirë se erdhët.",
|
||||
"signup-failed": "Regjistrimi dështoi. Ju lutemi sigurohuni që të gjitha fushat e nevojshme janë plotësuar dhe provoni përsëri.",
|
||||
"reconnecting": "Jeni shkëputur nga serveri.",
|
||||
"connection-lost": "Lidhja me serverin dështoi. Ju lutemi kontrolloni lidhjen tuaj me internet.",
|
||||
"connection-restored": "U lidhët me serverin me sukses"
|
||||
}
|
||||
@@ -1,13 +0,0 @@
|
||||
{
|
||||
"formTitle": "Krijoni projektin tuaj të parë",
|
||||
"inputLabel": "Në cilin projekt po punoni aktualisht?",
|
||||
"or": "ose",
|
||||
"templateButton": "Importo nga shablloni",
|
||||
"createFromTemplate": "Krijo nga shablloni",
|
||||
"goBack": "Kthehu Mbrapa",
|
||||
"continue": "Vazhdo",
|
||||
"cancel": "Anulo",
|
||||
"create": "Krijo",
|
||||
"templateDrawerTitle": "Zgjidh nga shabllonet",
|
||||
"createProject": "Krijo Projekt"
|
||||
}
|
||||
@@ -1,7 +0,0 @@
|
||||
{
|
||||
"formTitle": "Krijo detyrën tënde të parë.",
|
||||
"inputLabel": "Shkruaj disa detyra që do të kryesh në",
|
||||
"addAnother": "Shto një tjetër",
|
||||
"goBack": "Kthehu mbrapa",
|
||||
"continue": "Vazhdo"
|
||||
}
|
||||
@@ -1,46 +0,0 @@
|
||||
{
|
||||
"todoList": {
|
||||
"title": "Lista e Detyrave",
|
||||
"refreshTasks": "Rifresko detyrat",
|
||||
"addTask": "+ Shto Detyrë",
|
||||
"noTasks": "Asnjë detyrë",
|
||||
"pressEnter": "Shtyp",
|
||||
"toCreate": "për të krijuar.",
|
||||
"markAsDone": "Shëno si të përfunduar"
|
||||
},
|
||||
"projects": {
|
||||
"title": "Projektet",
|
||||
"refreshProjects": "Rifresko projektet",
|
||||
"noRecentProjects": "Aktualisht nuk jeni caktuar në asnjë projekt.",
|
||||
"noFavouriteProjects": "Asnjë projekt i shënuar si i preferuar.",
|
||||
"recent": "Të Fundit",
|
||||
"favourites": "Të Preferuarat"
|
||||
},
|
||||
"tasks": {
|
||||
"assignedToMe": "Më janë caktuar",
|
||||
"assignedByMe": "I kam caktuar",
|
||||
"all": "Të Gjitha",
|
||||
"today": "Sot",
|
||||
"upcoming": "Ardhj",
|
||||
"overdue": "Të vonuara",
|
||||
"noDueDate": "Pa afat",
|
||||
"noTasks": "Asnjë detyrë për të shfaqur.",
|
||||
"addTask": "+ Shto detyrë",
|
||||
"name": "Emri",
|
||||
"project": "Projekti",
|
||||
"status": "Statusi",
|
||||
"dueDate": "Afati",
|
||||
"dueDatePlaceholder": "Cakto Afatin",
|
||||
"tomorrow": "Nesër",
|
||||
"nextWeek": "Javën e Ardhshme",
|
||||
"nextMonth": "Muajin e Ardhshëm",
|
||||
"projectRequired": "Ju lutemi zgjidhni një projekt",
|
||||
"pressTabToSelectDueDateAndProject": "Shtyp Tab për të zgjedhur afatin dhe projektin",
|
||||
"dueOn": "Detyrat me afat më",
|
||||
"taskRequired": "Ju lutemi shtoni një detyrë",
|
||||
"list": "Listë",
|
||||
"calendar": "Kalendar",
|
||||
"tasks": "Detyrat",
|
||||
"refresh": "Rifresko"
|
||||
}
|
||||
}
|
||||
@@ -1,8 +0,0 @@
|
||||
{
|
||||
"formTitle": "Fto ekipin tënd të punojë me",
|
||||
"inputLabel": "Fto me email",
|
||||
"addAnother": "Shto një tjetër",
|
||||
"goBack": "Kthehu mbrapa",
|
||||
"continue": "Vazhdo",
|
||||
"skipForNow": "Anashkalo tani për tani"
|
||||
}
|
||||
@@ -1,30 +0,0 @@
|
||||
{
|
||||
"rename": "Riemërto",
|
||||
"delete": "Fshi",
|
||||
"addTask": "Shto Detyrë",
|
||||
"addSectionButton": "Shto Seksion",
|
||||
"changeCategory": "Ndrysho kategorinë",
|
||||
|
||||
"deleteTooltip": "Fshi",
|
||||
"deleteConfirmationTitle": "Jeni i sigurt?",
|
||||
"deleteConfirmationOk": "Po",
|
||||
"deleteConfirmationCancel": "Anulo",
|
||||
|
||||
"dueDate": "Data e përfundimit",
|
||||
"cancel": "Anulo",
|
||||
|
||||
"today": "Sot",
|
||||
"tomorrow": "Nesër",
|
||||
"assignToMe": "Cakto mua",
|
||||
"archive": "Arkivo",
|
||||
|
||||
"newTaskNamePlaceholder": "Shkruaj emrin e detyrës",
|
||||
"newSubtaskNamePlaceholder": "Shkruaj emrin e nëndetyrës",
|
||||
"untitledSection": "Seksion pa titull",
|
||||
"unmapped": "Pa hartë",
|
||||
"clickToChangeDate": "Klikoni për të ndryshuar datën",
|
||||
"noDueDate": "Pa datë përfundimi",
|
||||
"save": "Ruaj",
|
||||
"clear": "Pastro",
|
||||
"nextWeek": "Javën e ardhshme"
|
||||
}
|
||||
@@ -1,6 +0,0 @@
|
||||
{
|
||||
"title": "Prova juaj e Worklenz ka skaduar!",
|
||||
"subtitle": "Ju lutemi përmirësoni tani.",
|
||||
"button": "Përmirëso tani",
|
||||
"checking": "Po kontrollohet statusi i abonimit..."
|
||||
}
|
||||
@@ -1,31 +0,0 @@
|
||||
{
|
||||
"logoAlt": "Logoja e Worklenz",
|
||||
"home": "Kryefaqja",
|
||||
"projects": "Projektet",
|
||||
"schedule": "Orari",
|
||||
"reporting": "Raportimi",
|
||||
"clients": "Klientët",
|
||||
"teams": "Ekipet",
|
||||
"labels": "Etiketa",
|
||||
"jobTitles": "Tituj Pune",
|
||||
"upgradePlan": "Përmirëso Abonimin",
|
||||
"upgradePlanTooltip": "Përmirëso abonimin",
|
||||
"invite": "Fto",
|
||||
"inviteTooltip": "Fto anëtarë të ekipit të bashkohen",
|
||||
"switchTeamTooltip": "Ndrysho ekipin",
|
||||
"help": "Ndihmë",
|
||||
"notificationTooltip": "Shiko njoftimet",
|
||||
"profileTooltip": "Shiko profilin",
|
||||
"adminCenter": "Qendra Administrative",
|
||||
"settings": "Cilësimet",
|
||||
"logOut": "Dil",
|
||||
"notificationsDrawer": {
|
||||
"read": "Lexuara e njoftimet ",
|
||||
"unread": "Njoftimet e palexuara",
|
||||
"markAsRead": "Shëno si të lexuara",
|
||||
"readAndJoin": "Lexo & Bashkohu",
|
||||
"accept": "Prano",
|
||||
"acceptAndJoin": "Prano & Bashkohu",
|
||||
"noNotifications": "Asnjë njoftim"
|
||||
}
|
||||
}
|
||||
@@ -1,5 +0,0 @@
|
||||
{
|
||||
"nameYourOrganization": "Emërtoni organizatën tuaj.",
|
||||
"worklenzAccountTitle": "Zgjidhni një emër për llogarinë tuaj në Worklenz.",
|
||||
"continue": "Vazhdo"
|
||||
}
|
||||
@@ -1,19 +0,0 @@
|
||||
{
|
||||
"configurePhases": "Konfiguro Fazat",
|
||||
"phaseLabel": "Etiketa e Fazës",
|
||||
"enterPhaseName": "Vendosni një emër për etiketën e fazës",
|
||||
"addOption": "Shto Opsion",
|
||||
"phaseOptions": "Opsionet e Fazës:",
|
||||
"dragToReorderPhases": "Zvarrit fazat për t'i rirenditur. Çdo fazë mund të ketë një ngjyrë të ndryshme.",
|
||||
"enterNewPhaseName": "Shkruani emrin e fazës së re...",
|
||||
"addPhase": "Shto Fazë",
|
||||
"noPhasesFound": "Nuk u gjetën faza. Krijoni fazën tuaj të parë më sipër.",
|
||||
"deletePhase": "Fshi Fazën",
|
||||
"deletePhaseConfirm": "Jeni të sigurt që doni të fshini këtë fazë? Ky veprim nuk mund të zhbëhet.",
|
||||
"rename": "Riemëro",
|
||||
"delete": "Fshi",
|
||||
"enterPhaseName": "Shkruani emrin e fazës",
|
||||
"selectColor": "Zgjidh ngjyrën",
|
||||
"managePhases": "Menaxho Fazat",
|
||||
"close": "Mbyll"
|
||||
}
|
||||
@@ -1,42 +0,0 @@
|
||||
{
|
||||
"createProject": "Krijo Projekt",
|
||||
"editProject": "Modifiko Projektin",
|
||||
"enterCategoryName": "Vendosni emër për kategorinë",
|
||||
"hitEnterToCreate": "Shtyp Enter për të krijuar!",
|
||||
"enterNotes": "Shënime",
|
||||
"youCanManageClientsUnderSettings": "Mund të menaxhoni klientët nën Cilësimet",
|
||||
"addCategory": "Shto kategori projektit",
|
||||
"newCategory": "Kategori e Re",
|
||||
"notes": "Shënime",
|
||||
"startDate": "Data e Fillimit",
|
||||
"endDate": "Data e Përfundimit",
|
||||
"estimateWorkingDays": "Vlerëso ditët e punës",
|
||||
"estimateManDays": "Vlerëso ditët e punëtorëve",
|
||||
"hoursPerDay": "Orë në ditë",
|
||||
"create": "Krijo",
|
||||
"update": "Përditëso",
|
||||
"delete": "Fshi",
|
||||
"typeToSearchClients": "Shkruani për të kërkuar klientë",
|
||||
"projectColor": "Ngjyra e Projektit",
|
||||
"pleaseEnterAName": "Ju lutemi vendosni një emër",
|
||||
"enterProjectName": "Vendosni emrin e projektit",
|
||||
"name": "Emri",
|
||||
"status": "Statusi",
|
||||
"health": "Gjendja",
|
||||
"category": "Kategoria",
|
||||
"projectManager": "Menaxheri i Projektit",
|
||||
"client": "Klienti",
|
||||
"deleteConfirmation": "Jeni i sigurt që doni të fshini?",
|
||||
"deleteConfirmationDescription": "Kjo do të fshijë të gjitha të dhënat e lidhura dhe nuk mund të zhbëhet.",
|
||||
"yes": "Po",
|
||||
"no": "Jo",
|
||||
"createdAt": "Krijuar më",
|
||||
"updatedAt": "Përditësuar më",
|
||||
"by": "nga",
|
||||
"add": "Shto",
|
||||
"asClient": "si klient",
|
||||
"createClient": "Krijo klient",
|
||||
"searchInputPlaceholder": "Kërko sipas emrit ose emailit",
|
||||
"hoursPerDayValidationMessage": "Orët në ditë duhet të jenë një numër midis 1 dhe 24",
|
||||
"noPermission": "Nuk ka leje"
|
||||
}
|
||||
@@ -1,14 +0,0 @@
|
||||
{
|
||||
"nameColumn": "Emri",
|
||||
"attachedTaskColumn": "Detyra e Bashkangjitur",
|
||||
"sizeColumn": "Madhësia",
|
||||
"uploadedByColumn": "Ngarkuar Nga",
|
||||
"uploadedAtColumn": "Ngarkuar Më",
|
||||
"fileIconAlt": "Ikona e skedarit",
|
||||
"titleDescriptionText": "Të gjitha bashkëngjitjet e detyrave në këtë projekt do të shfahen këtu.",
|
||||
"deleteConfirmationTitle": "Jeni i sigurt?",
|
||||
"deleteConfirmationOk": "Po",
|
||||
"deleteConfirmationCancel": "Anulo",
|
||||
"segmentedTooltip": "Së shpejti! Kaloni midis pamjes listë dhe pamjes miniaturash.",
|
||||
"emptyText": "Nuk ka bashkëngjitje në projekt."
|
||||
}
|
||||
@@ -1,41 +0,0 @@
|
||||
{
|
||||
"overview": {
|
||||
"title": "Përmbledhje",
|
||||
"statusOverview": "Përmbledhje Statusi",
|
||||
"priorityOverview": "Përmbledhje Prioriteti",
|
||||
"lastUpdatedTasks": "Detyrat e Përditësuara Së Fundi"
|
||||
},
|
||||
"members": {
|
||||
"title": "Anëtarët",
|
||||
"tooltip": "Anëtarët",
|
||||
"tasksByMembers": "Detyrat sipas anëtarëve",
|
||||
"tasksByMembersTooltip": "Detyrat sipas anëtarëve",
|
||||
"name": "Emri",
|
||||
"taskCount": "Numri i Detyrave",
|
||||
"contribution": "Kontributi",
|
||||
"completed": "Të Përfunduara",
|
||||
"incomplete": "Të Papërfunduara",
|
||||
"overdue": "Të Vonuara",
|
||||
"progress": "Progresi"
|
||||
},
|
||||
"tasks": {
|
||||
"overdueTasks": "Detyrat e Vonuara",
|
||||
"overLoggedTasks": "Detyrat me regjistrim të tepërt",
|
||||
"tasksCompletedEarly": "Detyrat e përfunduara para afatit",
|
||||
"tasksCompletedLate": "Detyrat e përfunduara pas afatit",
|
||||
"overLoggedTasksTooltip": "Detyrat me kohë të regjistruar mbi kohën e vlerësuar",
|
||||
"overdueTasksTooltip": "Detyrat që kanë kaluar afatin e tyre"
|
||||
},
|
||||
"common": {
|
||||
"seeAll": "Shiko të gjitha",
|
||||
"totalLoggedHours": "Orët totale të regjistruara",
|
||||
"totalEstimation": "Vlerësimi total",
|
||||
"completedTasks": "Detyrat e përfunduara",
|
||||
"incompleteTasks": "Detyrat e papërfunduara",
|
||||
"overdueTasks": "Detyrat e vonuara",
|
||||
"overdueTasksTooltip": "Detyrat që kanë kaluar afatin e tyre",
|
||||
"totalLoggedHoursTooltip": "Vlerësimi dhe koha e regjistruar për detyrat.",
|
||||
"includeArchivedTasks": "Përfshi Detyrat e Arkivuara",
|
||||
"export": "Eksporto"
|
||||
}
|
||||
}
|
||||
@@ -1,17 +0,0 @@
|
||||
{
|
||||
"nameColumn": "Emri",
|
||||
"jobTitleColumn": "Titulli i Punës",
|
||||
"emailColumn": "Email",
|
||||
"tasksColumn": "Detyrat",
|
||||
"taskProgressColumn": "Progresi i Detyrave",
|
||||
"accessColumn": "Qasja",
|
||||
"fileIconAlt": "Ikona e skedarit",
|
||||
"deleteConfirmationTitle": "Jeni i sigurt?",
|
||||
"deleteConfirmationOk": "Po",
|
||||
"deleteConfirmationCancel": "Anulo",
|
||||
"refreshButtonTooltip": "Rifresko anëtarët",
|
||||
"deleteButtonTooltip": "Hiq nga projekti",
|
||||
"memberCount": "Anëtar",
|
||||
"membersCountPlural": "Anëtarë",
|
||||
"emptyText": "Nuk ka bashkëngjitje në projekt."
|
||||
}
|
||||
@@ -1,6 +0,0 @@
|
||||
{
|
||||
"inputPlaceholder": "Shto një koment..",
|
||||
"addButton": "Shto",
|
||||
"cancelButton": "Anulo",
|
||||
"deleteButton": "Fshi"
|
||||
}
|
||||
@@ -1,14 +0,0 @@
|
||||
{
|
||||
"taskList": "Lista e Detyrave",
|
||||
"board": "Tabela Kanban",
|
||||
"insights": "Analiza",
|
||||
"files": "Skedarë",
|
||||
"members": "Anëtarë",
|
||||
"updates": "Përditësime",
|
||||
"projectView": "Pamja e Projektit",
|
||||
"loading": "Duke ngarkuar projektin...",
|
||||
"error": "Gabim në ngarkimin e projektit",
|
||||
"pinnedTab": "E fiksuar si tab i parazgjedhur",
|
||||
"pinTab": "Fikso si tab i parazgjedhur",
|
||||
"unpinTab": "Hiqe fiksimin e tab-it të parazgjedhur"
|
||||
}
|
||||
@@ -1,11 +0,0 @@
|
||||
{
|
||||
"importTaskTemplate": "Importo Shabllon Detyrash",
|
||||
"templateName": "Emri i Shabllonit",
|
||||
"templateDescription": "Përshkrimi i Shabllonit",
|
||||
"selectedTasks": "Detyrat e Përzgjedhura",
|
||||
"tasks": "Detyrat",
|
||||
"templates": "Shabllonet",
|
||||
"remove": "Hiq",
|
||||
"cancel": "Anulo",
|
||||
"import": "Importo"
|
||||
}
|
||||
@@ -1,7 +0,0 @@
|
||||
{
|
||||
"title": "Anëtarët e Projektit",
|
||||
"searchLabel": "Shtoni anëtarë duke shkruar emrin ose email-in e tyre",
|
||||
"searchPlaceholder": "Shkruani emrin ose email-in",
|
||||
"inviteAsAMember": "Fto si anëtar",
|
||||
"inviteNewMemberByEmail": "Fto anëtar të ri me email"
|
||||
}
|
||||
@@ -1,30 +0,0 @@
|
||||
{
|
||||
"importTasks": "Importo detyra",
|
||||
"importTask": "Importo detyrë",
|
||||
"createTask": "Krijo detyrë",
|
||||
"settings": "Cilësimet",
|
||||
"subscribe": "Abonohu",
|
||||
"unsubscribe": "Çabonohu",
|
||||
"deleteProject": "Fshi projektin",
|
||||
"startDate": "Data e fillimit",
|
||||
"endDate": "Data e mbarimit",
|
||||
"projectSettings": "Cilësimet e projektit",
|
||||
"projectSummary": "Përmbledhja e projektit",
|
||||
"receiveProjectSummary": "Merrni një përmbledhje të projektit çdo mbrëmje.",
|
||||
"refreshProject": "Rifresko projektin",
|
||||
"saveAsTemplate": "Ruaj si model",
|
||||
"invite": "Fto",
|
||||
"share": "Ndaj",
|
||||
"subscribeTooltip": "Abonohu tek njoftimet e projektit",
|
||||
"unsubscribeTooltip": "Çabonohu nga njoftimet e projektit",
|
||||
"refreshTooltip": "Rifresko të dhënat e projektit",
|
||||
"settingsTooltip": "Hap cilësimet e projektit",
|
||||
"saveAsTemplateTooltip": "Ruaj këtë projekt si model",
|
||||
"inviteTooltip": "Fto anëtarë të ekipit në këtë projekt",
|
||||
"createTaskTooltip": "Krijo një detyrë të re",
|
||||
"importTaskTooltip": "Importo detyrë nga modeli",
|
||||
"navigateBackTooltip": "Kthehu tek lista e projekteve",
|
||||
"projectStatusTooltip": "Statusi i projektit",
|
||||
"projectDatesInfo": "Informacion për kohëzgjatjen e projektit",
|
||||
"projectCategoryTooltip": "Kategoria e projektit"
|
||||
}
|
||||
@@ -1,27 +0,0 @@
|
||||
{
|
||||
"title": "Ruaj si Shabllon",
|
||||
"templateName": "Emri i Shabllonit",
|
||||
"includes": "Çfarë duhet të përfshihet në shabllon nga projekti?",
|
||||
"includesOptions": {
|
||||
"statuses": "Statuset",
|
||||
"phases": "Fazat",
|
||||
"labels": "Etiketat"
|
||||
},
|
||||
"taskIncludes": "Çfarë duhet të përfshihet në shabllon nga detyrat?",
|
||||
"taskIncludesOptions": {
|
||||
"statuses": "Statuset",
|
||||
"phases": "Fazat",
|
||||
"labels": "Etiketat",
|
||||
"name": "Emri",
|
||||
"priority": "Prioriteti",
|
||||
"status": "Statusi",
|
||||
"phase": "Faza",
|
||||
"label": "Etiketa",
|
||||
"timeEstimate": "Vlerësimi i Kohës",
|
||||
"description": "Përshkrimi",
|
||||
"subTasks": "Nëndetyrat"
|
||||
},
|
||||
"cancel": "Anulo",
|
||||
"save": "Ruaj",
|
||||
"templateNamePlaceholder": "Shkruani emrin e shabllonit"
|
||||
}
|
||||
@@ -1,90 +0,0 @@
|
||||
{
|
||||
"exportButton": "Eksporto",
|
||||
"timeLogsButton": "Regjistrimet e Kohës",
|
||||
"activityLogsButton": "Regjistrimet e Aktivitetit",
|
||||
"tasksButton": "Detyrat",
|
||||
"searchByNameInputPlaceholder": "Kërko sipas emrit",
|
||||
|
||||
"overviewTab": "Përmbledhje",
|
||||
"timeLogsTab": "Regjistrimet e Kohës",
|
||||
"activityLogsTab": "Regjistrimet e Aktivitetit",
|
||||
"tasksTab": "Detyrat",
|
||||
|
||||
"projectsText": "Projektet",
|
||||
"totalTasksText": "Detyrat Gjithsej",
|
||||
"assignedTasksText": "Detyrat e Caktuara",
|
||||
"completedTasksText": "Detyrat e Përfunduara",
|
||||
"ongoingTasksText": "Detyrat në Vazhdim",
|
||||
"overdueTasksText": "Detyrat e Vonuara",
|
||||
"loggedHoursText": "Orët e Regjistruara",
|
||||
|
||||
"tasksText": "Detyrat",
|
||||
"allText": "Të Gjitha",
|
||||
|
||||
"tasksByProjectsText": "Detyrat Sipas Projekteve",
|
||||
"tasksByStatusText": "Detyrat Sipas Statusit",
|
||||
"tasksByPriorityText": "Detyrat Sipas Prioritetit",
|
||||
|
||||
"todoText": "Për Të Bërë",
|
||||
"doingText": "Duke bërë",
|
||||
"doneText": "E Përfunduar",
|
||||
"lowText": "I Ulët",
|
||||
"mediumText": "I Mesëm",
|
||||
"highText": "I Lartë",
|
||||
|
||||
"billableButton": "Fakturueshme",
|
||||
"billableText": "Fakturueshme",
|
||||
"nonBillableText": "Jo Fakturueshme",
|
||||
|
||||
"timeLogsEmptyPlaceholder": "Asnjë regjistrim kohe për të shfaqur",
|
||||
"loggedText": "Regjistruar",
|
||||
"forText": "për",
|
||||
"inText": "në",
|
||||
"updatedText": "Përditësuar",
|
||||
"fromText": "Nga",
|
||||
"toText": "në",
|
||||
"withinText": "brenda",
|
||||
|
||||
"activityLogsEmptyPlaceholder": "Asnjë regjistrim aktiviteti për të shfaqur",
|
||||
|
||||
"filterByText": "Filtro sipas:",
|
||||
"selectProjectPlaceholder": "Zgjidh Projektin",
|
||||
|
||||
"taskColumn": "Detyra",
|
||||
"nameColumn": "Emri",
|
||||
"projectColumn": "Projekti",
|
||||
"statusColumn": "Statusi",
|
||||
"priorityColumn": "Prioriteti",
|
||||
"dueDateColumn": "Afati",
|
||||
"completedDateColumn": "Data e Përfundimit",
|
||||
"estimatedTimeColumn": "Koha e Vlerësuar",
|
||||
"loggedTimeColumn": "Koha e Regjistruar",
|
||||
"overloggedTimeColumn": "Koha e Tepërt",
|
||||
"daysLeftColumn": "Ditë të Mbetura/Vonuar",
|
||||
"startDateColumn": "Data e Fillimit",
|
||||
"endDateColumn": "Data e Përfundimit",
|
||||
"actualTimeColumn": "Koha Aktuale",
|
||||
"projectHealthColumn": "Gjendja e Projektit",
|
||||
"categoryColumn": "Kategoria",
|
||||
"projectManagerColumn": "Menaxheri i Projektit",
|
||||
|
||||
"tasksStatsOverviewDrawerTitle": "Detyrat e ",
|
||||
"projectsStatsOverviewDrawerTitle": "Projektet e ",
|
||||
|
||||
"cancelledText": "Anuluar",
|
||||
"blockedText": "E Bllokuar",
|
||||
"onHoldText": "Në Pritje",
|
||||
"proposedText": "E Propozuar",
|
||||
"inPlanningText": "Në Planifikim",
|
||||
"inProgressText": "Në Progres",
|
||||
"completedText": "E Përfunduar",
|
||||
"continuousText": "E Vazhdueshme",
|
||||
|
||||
"daysLeftText": "ditë të mbetura",
|
||||
"daysOverdueText": "ditë vonuar",
|
||||
|
||||
"notSetText": "Pa Caktuar",
|
||||
"needsAttentionText": "Kërkon Vëmendje",
|
||||
"atRiskText": "Në Rrezik",
|
||||
"goodText": "Në Rregull"
|
||||
}
|
||||
@@ -1,35 +0,0 @@
|
||||
{
|
||||
"yesterdayText": "Dje",
|
||||
"lastSevenDaysText": "7 Ditët e Fundit",
|
||||
"lastWeekText": "Javën e Kaluar",
|
||||
"lastThirtyDaysText": "30 Ditët e Fundit",
|
||||
"lastMonthText": "Muajin e Kaluar",
|
||||
"lastThreeMonthsText": "3 Muajt e Fundit",
|
||||
"allTimeText": "Të Gjitha",
|
||||
"customRangeText": "Interval i Përshtatur",
|
||||
"startDateInputPlaceholder": "Data e fillimit",
|
||||
"EndDateInputPlaceholder": "Data e përfundimit",
|
||||
"filterButton": "Filtro",
|
||||
|
||||
"membersTitle": "Anëtarët",
|
||||
"includeArchivedButton": "Përfshij Projektet e Arkivuara",
|
||||
"exportButton": "Eksporto",
|
||||
"excelButton": "Excel",
|
||||
"searchByNameInputPlaceholder": "Kërko sipas emrit",
|
||||
|
||||
"memberColumn": "Anëtari",
|
||||
"tasksProgressColumn": "Progresi i Detyrave",
|
||||
"tasksAssignedColumn": "Detyrat e Caktuara",
|
||||
"completedTasksColumn": "Detyrat e Përfunduara",
|
||||
"overdueTasksColumn": "Detyrat e Vonuara",
|
||||
"ongoingTasksColumn": "Detyrat në Vazhdim",
|
||||
|
||||
"tasksAssignedColumnTooltip": "Detyrat e caktuara në intervalin e zgjedhur",
|
||||
"overdueTasksColumnTooltip": "Detyrat e vonuara deri në fund të intervalit të zgjedhur",
|
||||
"completedTasksColumnTooltip": "Detyrat e përfunduara në intervalin e zgjedhur",
|
||||
"ongoingTasksColumnTooltip": "Detyrat e filluara por jo të përfunduara ende",
|
||||
|
||||
"todoText": "Për Të Bërë",
|
||||
"doingText": "Duke bërë",
|
||||
"doneText": "E Përfunduar"
|
||||
}
|
||||
@@ -1,39 +0,0 @@
|
||||
{
|
||||
"exportButton": "Eksporto",
|
||||
"projectsButton": "Projektet",
|
||||
"membersButton": "Anëtarët",
|
||||
"searchByNameInputPlaceholder": "Kërko sipas emrit",
|
||||
|
||||
"overviewTab": "Përmbledhje",
|
||||
"projectsTab": "Projektet",
|
||||
"membersTab": "Anëtarët",
|
||||
|
||||
"projectsByStatusText": "Projektet Sipas Statusit",
|
||||
"projectsByCategoryText": "Projektet Sipas Kategorisë",
|
||||
"projectsByHealthText": "Projektet Sipas Gjendjes",
|
||||
|
||||
"projectsText": "Projektet",
|
||||
"allText": "Të Gjitha",
|
||||
|
||||
"cancelledText": "Anuluar",
|
||||
"blockedText": "E Bllokuar",
|
||||
"onHoldText": "Në Pritje",
|
||||
"proposedText": "E Propozuar",
|
||||
"inPlanningText": "Në Planifikim",
|
||||
"inProgressText": "Në Progres",
|
||||
"completedText": "E Përfunduar",
|
||||
"continuousText": "E Vazhdueshme",
|
||||
|
||||
"notSetText": "Pa Caktuar",
|
||||
"needsAttentionText": "Kërkon Vëmendje",
|
||||
"atRiskText": "Në Rrezik",
|
||||
"goodText": "Në Rregull",
|
||||
|
||||
"nameColumn": "Emri",
|
||||
"emailColumn": "Email",
|
||||
"projectsColumn": "Projektet",
|
||||
"tasksColumn": "Detyrat",
|
||||
"overdueTasksColumn": "Detyrat e Vonuara",
|
||||
"completedTasksColumn": "Detyrat e Përfunduara",
|
||||
"ongoingTasksColumn": "Detyrat në Vazhdim"
|
||||
}
|
||||
@@ -1,25 +0,0 @@
{
  "overviewTitle": "Përmbledhje",
  "includeArchivedButton": "Përfshij Projektet e Arkivuara",

  "teamCount": "Ekip",
  "teamCountPlural": "Ekipe",
  "projectCount": "Projekt",
  "projectCountPlural": "Projekte",
  "memberCount": "Anëtar",
  "memberCountPlural": "Anëtarë",
  "activeProjectCount": "Projekt Aktiv",
  "activeProjectCountPlural": "Projekte Aktive",
  "overdueProjectCount": "Projekt i Vonuar",
  "overdueProjectCountPlural": "Projekte të Vonuara",
  "unassignedMemberCount": "Anëtar i Pacaktuar",
  "unassignedMemberCountPlural": "Anëtarë të Pacaktuar",
  "memberWithOverdueTaskCount": "Anëtar me Detyrë të Vonuar",
  "memberWithOverdueTaskCountPlural": "Anëtarë me Detyra të Vonuara",

  "teamsText": "Ekipet",

  "nameColumn": "Emri",
  "projectsColumn": "Projektet",
  "membersColumn": "Anëtarët"
}
@@ -1,59 +0,0 @@
{
  "exportButton": "Eksporto",
  "membersButton": "Anëtarët",
  "tasksButton": "Detyrat",
  "searchByNameInputPlaceholder": "Kërko sipas emrit",

  "overviewTab": "Përmbledhje",
  "membersTab": "Anëtarët",
  "tasksTab": "Detyrat",

  "completedTasksText": "Detyrat e Përfunduara",
  "incompleteTasksText": "Detyrat e Papërfunduara",
  "overdueTasksText": "Detyrat e Vonuara",
  "allocatedHoursText": "Orët e Alokuara",
  "loggedHoursText": "Orët e Regjistruara",

  "tasksText": "Detyrat",
  "allText": "Të Gjitha",

  "tasksByStatusText": "Detyrat Sipas Statusit",
  "tasksByPriorityText": "Detyrat Sipas Prioritetit",
  "tasksByDueDateText": "Detyrat Sipas Afatit",

  "todoText": "Për Të Bërë",
  "doingText": "Duke bërë",
  "doneText": "E Përfunduar",
  "lowText": "I Ulët",
  "mediumText": "I Mesëm",
  "highText": "I Lartë",
  "completedText": "E Përfunduar",
  "upcomingText": "Në Ardhje",
  "overdueText": "E Vonuar",
  "noDueDateText": "Pa Afat",

  "nameColumn": "Emri",
  "tasksCountColumn": "Numri i Detyrave",
  "completedTasksColumn": "Detyrat e Përfunduara",
  "incompleteTasksColumn": "Detyrat e Papërfunduara",
  "overdueTasksColumn": "Detyrat e Vonuara",
  "contributionColumn": "Kontributi",
  "progressColumn": "Progresi",
  "loggedTimeColumn": "Koha e Regjistruar",
  "taskColumn": "Detyra",
  "projectColumn": "Projekti",
  "statusColumn": "Statusi",
  "priorityColumn": "Prioriteti",
  "phaseColumn": "Faza",
  "dueDateColumn": "Afati",
  "completedDateColumn": "Data e Përfundimit",
  "estimatedTimeColumn": "Koha e Vlerësuar",
  "overloggedTimeColumn": "Koha e Tepërt",
  "completedOnColumn": "Përfunduar Më",
  "daysOverdueColumn": "Ditë vonim",

  "groupByText": "Grupo Sipas:",
  "statusText": "Statusi",
  "priorityText": "Prioriteti",
  "phaseText": "Faza"
}
@@ -1,35 +0,0 @@
{
  "searchByNamePlaceholder": "Kërko sipas emrit",
  "searchByCategoryPlaceholder": "Kërko sipas kategorisë",

  "statusText": "Statusi",
  "healthText": "Gjendja",
  "categoryText": "Kategoria",
  "projectManagerText": "Menaxheri i Projektit",
  "showFieldsText": "Shfaq fushat",

  "cancelledText": "Anuluar",
  "blockedText": "E bllokuar",
  "onHoldText": "Në pritje",
  "proposedText": "E propozuar",
  "inPlanningText": "Në planifikim",
  "inProgressText": "Në progres",
  "completedText": "E përfunduar",
  "continuousText": "E vazhdueshme",

  "notSetText": "Pa caktuar",
  "needsAttentionText": "Kërkon vëmendje",
  "atRiskText": "Në rrezik",
  "goodText": "Në rregull",

  "nameText": "Projekti",
  "estimatedVsActualText": "Vlerësuar vs Aktual",
  "tasksProgressText": "Progresi i detyrave",
  "lastActivityText": "Aktiviteti i fundit",
  "datesText": "Datat e Fillimit/Përfundimit",
  "daysLeftText": "Ditë të mbetura/vonuar",
  "projectHealthText": "Gjendja e projektit",
  "projectUpdateText": "Përditësimi i projektit",
  "clientText": "Klienti",
  "teamText": "Ekipi"
}
Some files were not shown because too many files have changed in this diff.