Compare commits
1 commit: feature/te...chore/add-

| Author | SHA1 | Date |
|---|---|---|
|  | e5e56e48f8 |  |
docs/SPAM_PROTECTION_GUIDE.md (Normal file, 220 lines)
@@ -0,0 +1,220 @@
# Worklenz Spam Protection System Guide

## Overview

This guide documents the spam protection system implemented in Worklenz to prevent abuse of user invitations and registrations.

## System Components

### 1. Spam Detection (`/worklenz-backend/src/utils/spam-detector.ts`)

The core spam detection engine analyzes text for suspicious patterns (a usage sketch follows this list):

- **Flag-First Policy**: Suspicious content is flagged for review, not blocked
- **Selective Blocking**: Only extremely obvious spam (score > 80) gets blocked
- **URL Detection**: Identifies links, shortened URLs, and suspicious domains
- **Spam Phrases**: Detects common spam tactics (urgent, click here, win prizes)
- **Cryptocurrency Spam**: Identifies blockchain/crypto compensation scams
- **Formatting Issues**: Excessive capitals, special characters, emojis
- **Fake Name Detection**: Generic names (test, demo, fake, spam)
- **Whitelist Support**: Legitimate business names bypass all checks
- **Context-Aware**: Smart detection reduces false positives
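
A minimal sketch of the detector's static API as added in this change (`detectSpam`, `shouldFlagContent`, `shouldBlockContent`). The import path mirrors the backend controllers later in this diff, and the sample strings are the test values from this guide:

```typescript
import { SpamDetector } from "../utils/spam-detector";

// detectSpam returns { isSpam, score, reasons } for any free-text name.
const check = SpamDetector.detectSpam("URGENT! Click here NOW!!!");
console.log(check.score, check.reasons);

// Flag-first: any non-zero score is logged for admin review but allowed through.
if (SpamDetector.shouldFlagContent("Company12345")) {
  // record in spam_logs / send a Slack warning, then continue
}

// Blocking is reserved for extreme content (score > 80 or a known spam domain).
if (SpamDetector.shouldBlockContent("CLICK LINK: gclnk.com/spam")) {
  // reject the signup or invitation outright
}
```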

### 2. Rate Limiting (`/worklenz-backend/src/middleware/rate-limiter.ts`)

Prevents volume-based attacks (a router sketch follows this list):

- **Invite Limits**: 5 invitations per 15 minutes per user
- **Organization Creation**: 3 attempts per hour
- **In-Memory Store**: Fast rate limit checking without database queries
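
Both limiters return ordinary Express middleware. A sketch of how they attach to a router, mirroring the `team-members-api-router.ts` change later in this diff; the route paths and handler bodies here are placeholders:

```typescript
import express from "express";
import { RateLimiter } from "../../middleware/rate-limiter";

const router = express.Router();

// 5 invitation requests per 15-minute window, keyed by user id (or IP as a fallback).
router.post(
  "/invite",
  RateLimiter.inviteRateLimit(5, 15 * 60 * 1000),
  (req, res) => res.status(200).send("invitation accepted")
);

// 3 organization-creation attempts per hour.
router.post(
  "/organizations",
  RateLimiter.organizationCreationRateLimit(3, 60 * 60 * 1000),
  (req, res) => res.status(200).send("organization created")
);
```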

### 3. Frontend Validation

Real-time feedback as users type (a handler sketch follows this list):

- `/worklenz-frontend/src/components/account-setup/organization-step.tsx`
- `/worklenz-frontend/src/components/admin-center/overview/organization-name/organization-name.tsx`
- `/worklenz-frontend/src/components/settings/edit-team-name-modal.tsx`
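
Each component follows the same shape. A simplified sketch of the change-handler logic from `organization-step.tsx` (the helper name is illustrative; the full component change appears later in this diff):

```typescript
import { SpamDetector } from '@/utils/spamDetector';
import { sanitizeInput } from '@/utils/sanitizeInput';

// Checks the raw input, surfaces a warning string, and returns the sanitized
// value that is actually stored in state.
function checkOrganizationName(rawValue: string, setSpamWarning: (msg: string) => void): string {
  const spamCheck = SpamDetector.detectSpam(rawValue);

  if (spamCheck.isSpam) {
    setSpamWarning(`Warning: ${spamCheck.reasons.join(', ')}`);
  } else if (SpamDetector.isHighRiskContent(rawValue)) {
    setSpamWarning('Warning: Content appears to contain suspicious links or patterns');
  } else {
    setSpamWarning('');
  }

  return sanitizeInput(rawValue);
}
```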

### 4. Backend Enforcement

Blocks spam at the API level (a decision sketch follows this list):

- **Team Members Controller**: Validates organization/owner names before invites
- **Signup Process**: Blocks spam during registration
- **Logging**: All blocked attempts are sent to Slack via the winston logger
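
The invitation gate added to the Team Members Controller reduces to the check sketched below (the helper name is illustrative; invitations use a blocking threshold of 70, slightly below the general score > 80 rule):

```typescript
import { SpamDetector } from "../utils/spam-detector";

// Block an invitation only when either name is extremely suspicious;
// lower scores are logged to Slack and allowed through.
function shouldBlockInvitation(teamName: string, ownerName: string): boolean {
  const orgCheck = SpamDetector.detectSpam(teamName);
  const ownerCheck = SpamDetector.detectSpam(ownerName);

  return (
    orgCheck.score > 70 ||
    ownerCheck.score > 70 ||
    SpamDetector.isHighRiskContent(teamName) ||
    SpamDetector.isHighRiskContent(ownerName)
  );
}
```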

### 5. Database Schema

```sql
-- Teams table: Simple status field
ALTER TABLE teams ADD COLUMN status VARCHAR(20) DEFAULT 'active';

-- Moderation history tracking
CREATE TABLE team_moderation (
    id UUID PRIMARY KEY,
    team_id UUID REFERENCES teams(id),
    status VARCHAR(20), -- 'flagged', 'suspended', 'restored'
    reason TEXT,
    moderator_id UUID,
    created_at TIMESTAMP,
    expires_at TIMESTAMP -- For temporary suspensions
);

-- Spam detection logs
CREATE TABLE spam_logs (
    id UUID PRIMARY KEY,
    team_id UUID,
    content_type VARCHAR(50),
    original_content TEXT,
    spam_score INTEGER,
    spam_reasons JSONB,
    action_taken VARCHAR(50)
);
```

## Admin Tools

### API Endpoints

```
GET  /api/moderation/flagged-organizations  - View flagged teams
POST /api/moderation/flag-organization      - Manually flag a team
POST /api/moderation/suspend-organization   - Suspend a team
POST /api/moderation/unsuspend-organization - Restore a team
GET  /api/moderation/scan-spam              - Scan for spam in existing data
GET  /api/moderation/stats                  - View moderation statistics
POST /api/moderation/bulk-scan              - Bulk scan and auto-flag
```
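
All endpoints require an admin user (`is_admin`). A hedged example of calling the flag endpoint from a TypeScript client; the body fields match what the moderation controller reads (`teamId`, optional `reason`), and cookie-based session auth is assumed from the existing setup:

```typescript
// Flag an organization for review from an admin tool or script.
async function flagOrganization(teamId: string, reason?: string): Promise<void> {
  const response = await fetch("/api/moderation/flag-organization", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    credentials: "include", // send the admin session cookie
    body: JSON.stringify({ teamId, reason }),
  });

  // The controller answers 403 without admin rights, 404 if the team does not
  // exist, and 200 with the flagged team otherwise.
  console.log(response.status, await response.json());
}
```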

## Slack Notifications

The system sends structured alerts to Slack for:

- 🚨 **Spam Detected** (score > 30)
- 🔥 **High Risk Content** (known spam domains)
- 🛑 **Blocked Attempts** (invitations/signups)
- ⚠️ **Rate Limit Exceeded**

Example Slack notification:
```json
{
  "alert_type": "high_risk_content",
  "team_name": "CLICK LINK: gclnk.com/spam",
  "user_email": "spammer@example.com",
  "spam_score": 95,
  "reasons": ["Contains suspicious URLs", "Contains monetary references"],
  "timestamp": "2024-01-15T10:30:00Z"
}
```

## Testing the System

### Test Spam Patterns

These will be **FLAGGED** for review (flag-first approach):

1. **Suspicious Words**: "Free Software Solutions" (flagged but allowed)
2. **URLs**: "Visit our site: bit.ly/win-prize" (flagged but allowed)
3. **Cryptocurrency**: "🔔 $50,000 BLOCKCHAIN COMPENSATION" (flagged but allowed)
4. **Urgency**: "URGENT! Click here NOW!!!" (flagged but allowed)
5. **Generic Names**: "Test Company", "Demo Organization" (flagged but allowed)
6. **Excessive Numbers**: "Company12345" (flagged but allowed)
7. **Single Emoji**: "Great Company 💰" (flagged but allowed)

### BLOCKED Patterns (zero-tolerance - score > 80):

1. **Known Spam Domains**: "CLICK LINK: gclnk.com/spam"
2. **Extreme Scam Patterns**: "🔔CHECK $213,953 BLOCKCHAIN COMPENSATION URGENT🔔"
3. **Obvious Spam URLs**: Content with bit.ly/scam patterns

### Whitelisted (Will NOT be flagged):

1. **Legitimate Business**: "Microsoft Corporation", "Free Software Company"
2. **Standard Suffixes**: "ABC Solutions Inc", "XYZ Consulting LLC"
3. **Tech Companies**: "DataTech Services", "The Design Studio"
4. **Context-Aware**: "Free Range Marketing", "Check Point Systems"
5. **Legitimate "Test"**: "TestDrive Automotive" (not generic)

### Expected Behavior

1. **Suspicious Signup**: Flagged in logs, user allowed to proceed
2. **Obvious Spam Signup**: Blocked with user-friendly message
3. **Suspicious Invitations**: Flagged in logs, invitation sent
4. **Obvious Spam Invitations**: Blocked with support contact suggestion
5. **Frontend**: Shows warning message for suspicious content
6. **Logger**: Sends Slack notification for all suspicious activity
7. **Database**: Records all activity in spam_logs table

## Database Migration

Run these SQL scripts in order:

1. `spam_protection_tables.sql` - Creates new schema
2. `fix_spam_protection_constraints.sql` - Fixes notification_settings constraints

## Configuration

### Environment Variables

No additional environment variables required. The system uses existing:
- `COOKIE_SECRET` - For session management
- Database connection settings

### Adjusting Thresholds

In `spam-detector.ts`:
```typescript
const isSpam = score >= 50; // Adjust threshold here
```

In `rate-limiter.ts`:
```typescript
inviteRateLimit(5, 15 * 60 * 1000) // 5 requests per 15 minutes
```

## Monitoring

### Check Spam Statistics
```sql
SELECT * FROM moderation_dashboard;
SELECT COUNT(*) FROM spam_logs WHERE created_at > NOW() - INTERVAL '24 hours';
```

### View Rate Limit Events
```sql
SELECT * FROM rate_limit_log WHERE blocked = true ORDER BY created_at DESC;
```
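
The same numbers are reachable from backend code through the `get_team_spam_stats()` function created in `spam_protection_tables.sql`. A small sketch, assuming the backend's existing `db` helper:

```typescript
import db from "../config/db";

// Returns { total_detections, high_risk_detections, blocked_actions, latest_detection }
// for a single team, as defined by get_team_spam_stats().
async function getTeamSpamStats(teamId: string) {
  const result = await db.query("SELECT * FROM get_team_spam_stats($1);", [teamId]);
  return result.rows[0];
}
```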

## Troubleshooting

### Issue: Legitimate users blocked

1. Check spam_logs for their content
2. Adjust spam patterns or scoring threshold
3. Whitelist specific domains if needed

### Issue: Notification settings error during signup

Run the fix script: `fix_spam_protection_constraints.sql`

### Issue: Slack notifications not received

1. Check winston logger configuration
2. Verify log levels in `logger.ts`
3. Ensure Slack webhook is configured

## Future Enhancements

1. **Machine Learning**: Train on spam_logs data
2. **IP Blocking**: Geographic or reputation-based blocking
3. **CAPTCHA Integration**: For suspicious signups
4. **Email Verification**: Stronger email validation
5. **Allowlist Management**: Pre-approved domains

## Security Considerations

- Logs contain sensitive data - ensure proper access controls
- Rate limit data stored in memory - consider Redis for scaling
- Spam patterns should be regularly updated
- Monitor for false positives and adjust accordingly
fix_spam_protection_constraints.sql (Normal file, 43 lines)
@@ -0,0 +1,43 @@
-- Fix for notification_settings constraint issue during signup
-- This makes the team_id nullable temporarily during user creation

-- First, drop the existing NOT NULL constraint
ALTER TABLE notification_settings
    ALTER COLUMN team_id DROP NOT NULL;

-- Add a constraint that ensures team_id is not null when there's no ongoing signup
ALTER TABLE notification_settings
    ADD CONSTRAINT notification_settings_team_id_check
    CHECK (team_id IS NOT NULL OR user_id IS NOT NULL);

-- Update the notification_settings trigger to handle null team_id gracefully
CREATE OR REPLACE FUNCTION notification_settings_insert_trigger_fn() RETURNS TRIGGER AS
$$
BEGIN
    -- Only insert if team_id is not null
    IF NEW.team_id IS NOT NULL AND
       (NOT EXISTS(SELECT 1 FROM notification_settings WHERE team_id = NEW.team_id AND user_id = NEW.user_id)) AND
       (NEW.active = TRUE)
    THEN
        INSERT INTO notification_settings (popup_notifications_enabled, show_unread_items_count, user_id,
                                           email_notifications_enabled, team_id, daily_digest_enabled)
        VALUES (TRUE, TRUE, NEW.user_id, TRUE, NEW.team_id, FALSE);
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Also update the teams table to ensure the status column doesn't interfere with signup
ALTER TABLE teams
    DROP CONSTRAINT IF EXISTS teams_status_check;

ALTER TABLE teams
    ADD CONSTRAINT teams_status_check
    CHECK (status IS NULL OR status IN ('active', 'flagged', 'suspended'));

-- Set default value for status
ALTER TABLE teams
    ALTER COLUMN status SET DEFAULT 'active';

-- Update existing null values
UPDATE teams SET status = 'active' WHERE status IS NULL;
worklenz-backend/database/sql/spam_protection_tables.sql (Normal file, 220 lines)
@@ -0,0 +1,220 @@
-- Add minimal status column to teams table for performance
ALTER TABLE teams
    ADD COLUMN IF NOT EXISTS status VARCHAR(20) DEFAULT 'active' CHECK (status IN ('active', 'flagged', 'suspended'));

-- Create separate moderation table for detailed tracking
CREATE TABLE IF NOT EXISTS team_moderation (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    team_id UUID NOT NULL REFERENCES teams(id) ON DELETE CASCADE,
    status VARCHAR(20) NOT NULL CHECK (status IN ('flagged', 'suspended', 'restored')),
    reason TEXT,
    moderator_id UUID REFERENCES users(id),
    created_at TIMESTAMP DEFAULT NOW(),
    expires_at TIMESTAMP, -- For temporary suspensions
    metadata JSONB -- For additional context
);

-- Create indexes for efficient querying
CREATE INDEX IF NOT EXISTS idx_teams_status ON teams(status, created_at);
CREATE INDEX IF NOT EXISTS idx_team_moderation_team_id ON team_moderation(team_id);
CREATE INDEX IF NOT EXISTS idx_team_moderation_status ON team_moderation(status, created_at);

-- Create spam_logs table to track spam detection events
CREATE TABLE IF NOT EXISTS spam_logs (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    team_id UUID REFERENCES teams(id) ON DELETE CASCADE,
    user_id UUID REFERENCES users(id) ON DELETE SET NULL,
    content_type VARCHAR(50) NOT NULL, -- 'organization_name', 'owner_name', 'invitation'
    original_content TEXT NOT NULL,
    sanitized_content TEXT,
    spam_score INTEGER NOT NULL DEFAULT 0,
    spam_reasons JSONB,
    is_high_risk BOOLEAN DEFAULT FALSE,
    action_taken VARCHAR(50), -- 'blocked', 'flagged', 'allowed'
    created_at TIMESTAMP DEFAULT NOW(),
    ip_address INET
);

-- Create index for spam logs
CREATE INDEX IF NOT EXISTS idx_spam_logs_team_id ON spam_logs(team_id);
CREATE INDEX IF NOT EXISTS idx_spam_logs_created_at ON spam_logs(created_at);
CREATE INDEX IF NOT EXISTS idx_spam_logs_content_type ON spam_logs(content_type);

-- Create rate_limit_log table to track rate limiting events
CREATE TABLE IF NOT EXISTS rate_limit_log (
    id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    user_id UUID REFERENCES users(id) ON DELETE CASCADE,
    ip_address INET NOT NULL,
    action_type VARCHAR(50) NOT NULL, -- 'invite_attempt', 'org_creation'
    blocked BOOLEAN DEFAULT FALSE,
    created_at TIMESTAMP DEFAULT NOW()
);

-- Create index for rate limit logs
CREATE INDEX IF NOT EXISTS idx_rate_limit_log_user_id ON rate_limit_log(user_id);
CREATE INDEX IF NOT EXISTS idx_rate_limit_log_created_at ON rate_limit_log(created_at);

-- Add admin flag to users table if it doesn't exist
ALTER TABLE users
    ADD COLUMN IF NOT EXISTS is_admin BOOLEAN DEFAULT FALSE;

-- Function to log spam detection
CREATE OR REPLACE FUNCTION log_spam_detection(
    p_team_id UUID,
    p_user_id UUID,
    p_content_type VARCHAR(50),
    p_original_content TEXT,
    p_sanitized_content TEXT,
    p_spam_score INTEGER,
    p_spam_reasons JSONB,
    p_is_high_risk BOOLEAN,
    p_action_taken VARCHAR(50),
    p_ip_address INET
) RETURNS VOID AS $$
BEGIN
    INSERT INTO spam_logs (
        team_id, user_id, content_type, original_content, sanitized_content,
        spam_score, spam_reasons, is_high_risk, action_taken, ip_address
    ) VALUES (
        p_team_id, p_user_id, p_content_type, p_original_content, p_sanitized_content,
        p_spam_score, p_spam_reasons, p_is_high_risk, p_action_taken, p_ip_address
    );
END;
$$ LANGUAGE plpgsql;

-- Function to log rate limiting events
CREATE OR REPLACE FUNCTION log_rate_limit_event(
    p_user_id UUID,
    p_ip_address INET,
    p_action_type VARCHAR(50),
    p_blocked BOOLEAN
) RETURNS VOID AS $$
BEGIN
    INSERT INTO rate_limit_log (user_id, ip_address, action_type, blocked)
    VALUES (p_user_id, p_ip_address, p_action_type, p_blocked);
END;
$$ LANGUAGE plpgsql;

-- Function to get spam statistics for a team
CREATE OR REPLACE FUNCTION get_team_spam_stats(p_team_id UUID)
RETURNS TABLE (
    total_detections BIGINT,
    high_risk_detections BIGINT,
    blocked_actions BIGINT,
    latest_detection TIMESTAMP
) AS $$
BEGIN
    RETURN QUERY
    SELECT
        COUNT(*) as total_detections,
        COUNT(*) FILTER (WHERE is_high_risk = TRUE) as high_risk_detections,
        COUNT(*) FILTER (WHERE action_taken = 'blocked') as blocked_actions,
        MAX(created_at) as latest_detection
    FROM spam_logs
    WHERE team_id = p_team_id;
END;
$$ LANGUAGE plpgsql;

-- View for easy moderation dashboard
CREATE OR REPLACE VIEW moderation_dashboard AS
SELECT
    t.id as team_id,
    t.name as organization_name,
    u.name as owner_name,
    u.email as owner_email,
    t.created_at as team_created_at,
    t.status as current_status,
    tm.status as last_moderation_action,
    tm.reason as last_moderation_reason,
    tm.created_at as last_moderation_date,
    tm.expires_at as suspension_expires_at,
    moderator.name as moderator_name,
    (SELECT COUNT(*) FROM team_members WHERE team_id = t.id) as member_count,
    (SELECT COUNT(*) FROM spam_logs WHERE team_id = t.id) as spam_detection_count,
    (SELECT COUNT(*) FROM spam_logs WHERE team_id = t.id AND is_high_risk = TRUE) as high_risk_count
FROM teams t
INNER JOIN users u ON t.user_id = u.id
LEFT JOIN team_moderation tm ON t.id = tm.team_id
    AND tm.created_at = (SELECT MAX(created_at) FROM team_moderation WHERE team_id = t.id)
LEFT JOIN users moderator ON tm.moderator_id = moderator.id
WHERE t.status != 'active' OR EXISTS(
    SELECT 1 FROM spam_logs WHERE team_id = t.id AND created_at > NOW() - INTERVAL '7 days'
);

-- Function to update team status and create moderation records
CREATE OR REPLACE FUNCTION update_team_status(
    p_team_id UUID,
    p_new_status VARCHAR(20),
    p_reason TEXT,
    p_moderator_id UUID DEFAULT NULL,
    p_expires_at TIMESTAMP DEFAULT NULL
) RETURNS VOID AS $$
BEGIN
    -- Update team status
    UPDATE teams SET status = p_new_status WHERE id = p_team_id;

    -- Insert moderation record
    INSERT INTO team_moderation (team_id, status, reason, moderator_id, expires_at)
    VALUES (p_team_id, p_new_status, p_reason, p_moderator_id, p_expires_at);
END;
$$ LANGUAGE plpgsql;

-- Trigger to automatically flag teams with high spam scores
CREATE OR REPLACE FUNCTION auto_flag_spam_teams()
RETURNS TRIGGER AS $$
BEGIN
    -- Auto-flag teams if they have high spam scores or multiple violations
    IF NEW.spam_score > 80 OR NEW.is_high_risk = TRUE THEN
        PERFORM update_team_status(
            NEW.team_id,
            'flagged',
            'Auto-flagged: High spam score or high-risk content detected',
            NULL
        );
    END IF;

    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- Function to check and restore expired suspensions
CREATE OR REPLACE FUNCTION restore_expired_suspensions() RETURNS VOID AS $$
BEGIN
    -- Find teams with expired suspensions
    UPDATE teams
    SET status = 'active'
    WHERE id IN (
        SELECT DISTINCT tm.team_id
        FROM team_moderation tm
        WHERE tm.status = 'suspended'
          AND tm.expires_at IS NOT NULL
          AND tm.expires_at < NOW()
          AND NOT EXISTS (
              SELECT 1 FROM team_moderation tm2
              WHERE tm2.team_id = tm.team_id
                AND tm2.created_at > tm.created_at
          )
    );

    -- Log restoration records
    INSERT INTO team_moderation (team_id, status, reason, moderator_id)
    SELECT DISTINCT tm.team_id, 'restored', 'Auto-restored: suspension expired', NULL
    FROM team_moderation tm
    WHERE tm.status = 'suspended'
      AND tm.expires_at IS NOT NULL
      AND tm.expires_at < NOW()
      AND NOT EXISTS (
          SELECT 1 FROM team_moderation tm2
          WHERE tm2.team_id = tm.team_id
            AND tm2.created_at > tm.created_at
            AND tm2.status = 'restored'
      );
END;
$$ LANGUAGE plpgsql;

-- Create trigger for auto-flagging
DROP TRIGGER IF EXISTS trigger_auto_flag_spam ON spam_logs;
CREATE TRIGGER trigger_auto_flag_spam
    AFTER INSERT ON spam_logs
    FOR EACH ROW
    EXECUTE FUNCTION auto_flag_spam_teams();
worklenz-backend/src/controllers/moderation-controller.ts (Normal file, 253 lines)
@@ -0,0 +1,253 @@
import { IWorkLenzRequest } from "../interfaces/worklenz-request";
import { IWorkLenzResponse } from "../interfaces/worklenz-response";
import { ServerResponse } from "../models/server-response";
import WorklenzControllerBase from "./worklenz-controller-base";
import HandleExceptions from "../decorators/handle-exceptions";
import db from "../config/db";
import { SpamDetector } from "../utils/spam-detector";
import { RateLimiter } from "../middleware/rate-limiter";

export default class ModerationController extends WorklenzControllerBase {

  @HandleExceptions()
  public static async getFlaggedOrganizations(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
    if (!req.user?.is_admin) {
      return res.status(403).send(new ServerResponse(false, null, "Admin access required"));
    }

    const q = `
      SELECT * FROM moderation_dashboard
      ORDER BY last_moderation_date DESC
      LIMIT 100;
    `;

    const result = await db.query(q);

    // Add spam analysis to each result
    const flaggedTeams = result.rows.map(team => {
      const orgSpamCheck = SpamDetector.detectSpam(team.organization_name);
      const ownerSpamCheck = SpamDetector.detectSpam(team.owner_name);

      return {
        ...team,
        org_spam_score: orgSpamCheck.score,
        org_spam_reasons: orgSpamCheck.reasons,
        owner_spam_score: ownerSpamCheck.score,
        owner_spam_reasons: ownerSpamCheck.reasons,
        is_high_risk: SpamDetector.isHighRiskContent(team.organization_name) ||
                      SpamDetector.isHighRiskContent(team.owner_name)
      };
    });

    return res.status(200).send(new ServerResponse(true, flaggedTeams));
  }

  @HandleExceptions()
  public static async flagOrganization(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
    if (!req.user?.is_admin) {
      return res.status(403).send(new ServerResponse(false, null, "Admin access required"));
    }

    const { teamId, reason } = req.body;
    if (!teamId) {
      return res.status(400).send(new ServerResponse(false, null, "Team ID is required"));
    }

    const q = `SELECT update_team_status($1, 'flagged', $2, $3) as result`;
    const result = await db.query(q, [teamId, reason || 'Spam/Abuse', req.user.id]);

    const teamQuery = `SELECT id, name FROM teams WHERE id = $1`;
    const teamResult = await db.query(teamQuery, [teamId]);

    if (teamResult.rows.length === 0) {
      return res.status(404).send(new ServerResponse(false, null, "Organization not found"));
    }

    return res.status(200).send(new ServerResponse(true, teamResult.rows[0], "Organization flagged successfully"));
  }

  @HandleExceptions()
  public static async suspendOrganization(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
    if (!req.user?.is_admin) {
      return res.status(403).send(new ServerResponse(false, null, "Admin access required"));
    }

    const { teamId, reason, expiresAt } = req.body;
    if (!teamId) {
      return res.status(400).send(new ServerResponse(false, null, "Team ID is required"));
    }

    const q = `SELECT update_team_status($1, 'suspended', $2, $3, $4) as result`;
    const result = await db.query(q, [teamId, reason || 'Terms of Service Violation', req.user.id, expiresAt || null]);

    const teamQuery = `SELECT id, name FROM teams WHERE id = $1`;
    const teamResult = await db.query(teamQuery, [teamId]);

    if (teamResult.rows.length === 0) {
      return res.status(404).send(new ServerResponse(false, null, "Organization not found"));
    }

    return res.status(200).send(new ServerResponse(true, teamResult.rows[0], "Organization suspended successfully"));
  }

  @HandleExceptions()
  public static async unsuspendOrganization(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
    if (!req.user?.is_admin) {
      return res.status(403).send(new ServerResponse(false, null, "Admin access required"));
    }

    const { teamId } = req.body;
    if (!teamId) {
      return res.status(400).send(new ServerResponse(false, null, "Team ID is required"));
    }

    const q = `SELECT update_team_status($1, 'active', 'Manually restored by admin', $2) as result`;
    const result = await db.query(q, [teamId, req.user.id]);

    const teamQuery = `SELECT id, name FROM teams WHERE id = $1`;
    const teamResult = await db.query(teamQuery, [teamId]);

    if (teamResult.rows.length === 0) {
      return res.status(404).send(new ServerResponse(false, null, "Organization not found"));
    }

    return res.status(200).send(new ServerResponse(true, teamResult.rows[0], "Organization restored successfully"));
  }

  @HandleExceptions()
  public static async scanForSpam(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
    if (!req.user?.is_admin) {
      return res.status(403).send(new ServerResponse(false, null, "Admin access required"));
    }

    const q = `
      SELECT t.id, t.name as organization_name, u.name as owner_name, u.email as owner_email,
             t.created_at
      FROM teams t
      INNER JOIN users u ON t.user_id = u.id
      WHERE t.status = 'active'
        AND t.created_at > NOW() - INTERVAL '7 days'
      ORDER BY t.created_at DESC;
    `;

    const result = await db.query(q);
    const suspiciousTeams = [];

    for (const team of result.rows) {
      const orgSpamCheck = SpamDetector.detectSpam(team.organization_name);
      const ownerSpamCheck = SpamDetector.detectSpam(team.owner_name);

      if (orgSpamCheck.isSpam || ownerSpamCheck.isSpam ||
          SpamDetector.isHighRiskContent(team.organization_name) ||
          SpamDetector.isHighRiskContent(team.owner_name)) {

        suspiciousTeams.push({
          ...team,
          org_spam_score: orgSpamCheck.score,
          org_spam_reasons: orgSpamCheck.reasons,
          owner_spam_score: ownerSpamCheck.score,
          owner_spam_reasons: ownerSpamCheck.reasons,
          is_high_risk: SpamDetector.isHighRiskContent(team.organization_name) ||
                        SpamDetector.isHighRiskContent(team.owner_name)
        });
      }
    }

    return res.status(200).send(new ServerResponse(true, {
      total_scanned: result.rows.length,
      suspicious_count: suspiciousTeams.length,
      suspicious_teams: suspiciousTeams
    }));
  }

  @HandleExceptions()
  public static async getModerationStats(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
    if (!req.user?.is_admin) {
      return res.status(403).send(new ServerResponse(false, null, "Admin access required"));
    }

    const statsQuery = `
      SELECT
        (SELECT COUNT(*) FROM teams WHERE status = 'flagged') as flagged_count,
        (SELECT COUNT(*) FROM teams WHERE status = 'suspended') as suspended_count,
        (SELECT COUNT(*) FROM teams WHERE created_at > NOW() - INTERVAL '24 hours') as new_teams_24h,
        (SELECT COUNT(*) FROM teams WHERE created_at > NOW() - INTERVAL '7 days') as new_teams_7d
    `;

    const result = await db.query(statsQuery);
    const stats = result.rows[0];

    // Get rate limiting stats for recent activity
    const recentInviteActivity = RateLimiter.getStats(req.user?.id || '');

    return res.status(200).send(new ServerResponse(true, {
      ...stats,
      rate_limit_stats: recentInviteActivity
    }));
  }

  @HandleExceptions()
  public static async bulkScanAndFlag(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
    if (!req.user?.is_admin) {
      return res.status(403).send(new ServerResponse(false, null, "Admin access required"));
    }

    const { autoFlag = false } = req.body;

    const q = `
      SELECT t.id, t.name as organization_name, u.name as owner_name
      FROM teams t
      INNER JOIN users u ON t.user_id = u.id
      WHERE t.status = 'active'
        AND t.created_at > NOW() - INTERVAL '30 days'
      LIMIT 1000;
    `;

    const result = await db.query(q);
    const flaggedTeams = [];

    for (const team of result.rows) {
      const orgSpamCheck = SpamDetector.detectSpam(team.organization_name);
      const ownerSpamCheck = SpamDetector.detectSpam(team.owner_name);
      const isHighRisk = SpamDetector.isHighRiskContent(team.organization_name) ||
                         SpamDetector.isHighRiskContent(team.owner_name);

      if ((orgSpamCheck.score > 70 || ownerSpamCheck.score > 70 || isHighRisk) && autoFlag) {
        // Auto-flag high-confidence spam
        const reasons = [
          ...orgSpamCheck.reasons,
          ...ownerSpamCheck.reasons,
          ...(isHighRisk ? ['High-risk content detected'] : [])
        ];

        const flagQuery = `SELECT update_team_status($1, 'flagged', $2, $3) as result`;
        await db.query(flagQuery, [
          team.id,
          `Auto-flagged: ${reasons.join(', ')}`,
          req.user.id
        ]);

        flaggedTeams.push({
          ...team,
          action: 'flagged',
          reasons: reasons
        });
      } else if (orgSpamCheck.isSpam || ownerSpamCheck.isSpam || isHighRisk) {
        flaggedTeams.push({
          ...team,
          action: 'review_needed',
          org_spam_score: orgSpamCheck.score,
          owner_spam_score: ownerSpamCheck.score,
          reasons: [...orgSpamCheck.reasons, ...ownerSpamCheck.reasons, ...(isHighRisk ? ['High-risk content'] : [])]
        });
      }
    }

    return res.status(200).send(new ServerResponse(true, {
      total_scanned: result.rows.length,
      auto_flagged: flaggedTeams.filter(t => t.action === 'flagged').length,
      needs_review: flaggedTeams.filter(t => t.action === 'review_needed').length,
      teams: flaggedTeams
    }));
  }
}
@@ -17,6 +17,10 @@ import { statusExclude, TEAM_MEMBER_TREE_MAP_COLOR_ALPHA, TRIAL_MEMBER_LIMIT } f
import { checkTeamSubscriptionStatus } from "../shared/paddle-utils";
import { updateUsers } from "../shared/paddle-requests";
import { NotificationsService } from "../services/notifications/notifications.service";
import { SpamDetector } from "../utils/spam-detector";
import loggerModule from "../utils/logger";

const { logger } = loggerModule;

export default class TeamMembersController extends WorklenzControllerBase {

@@ -72,7 +76,8 @@ export default class TeamMembersController extends WorklenzControllerBase {

  @HandleExceptions({
    raisedExceptions: {
      "ERROR_EMAIL_INVITATION_EXISTS": `Team member with email "{0}" already exists.`
      "ERROR_EMAIL_INVITATION_EXISTS": `Team member with email "{0}" already exists.`,
      "ERROR_SPAM_DETECTED": `Invitation blocked: {0}`
    }
  })
  public static async create(req: IWorkLenzRequest, res: IWorkLenzResponse): Promise<IWorkLenzResponse> {
@@ -82,6 +87,54 @@
      return res.status(200).send(new ServerResponse(false, "Required fields are missing."));
    }

    // Validate organization name for spam - Flag suspicious, block only obvious spam
    const orgSpamCheck = SpamDetector.detectSpam(req.user?.team_name || '');
    const ownerSpamCheck = SpamDetector.detectSpam(req.user?.name || '');

    // Only block extremely suspicious content for invitations (higher threshold)
    const isObviousSpam = orgSpamCheck.score > 70 || ownerSpamCheck.score > 70 ||
                          SpamDetector.isHighRiskContent(req.user?.team_name || '') ||
                          SpamDetector.isHighRiskContent(req.user?.name || '');

    if (isObviousSpam) {
      logger.error('🛑 INVITATION BLOCKED - OBVIOUS SPAM', {
        user_id: req.user?.id,
        user_email: req.user?.email,
        team_id: req.user?.team_id,
        team_name: req.user?.team_name,
        owner_name: req.user?.name,
        org_spam_score: orgSpamCheck.score,
        owner_spam_score: ownerSpamCheck.score,
        org_reasons: orgSpamCheck.reasons,
        owner_reasons: ownerSpamCheck.reasons,
        ip_address: req.ip,
        timestamp: new Date().toISOString(),
        alert_type: 'obvious_spam_invitation_blocked'
      });
      return res.status(200).send(new ServerResponse(false, null, `Invitations temporarily disabled. Please contact support for assistance.`));
    }

    // Log suspicious but allow invitations
    if (orgSpamCheck.score > 0 || ownerSpamCheck.score > 0) {
      logger.warn('⚠️ SUSPICIOUS INVITATION ATTEMPT', {
        user_id: req.user?.id,
        user_email: req.user?.email,
        team_id: req.user?.team_id,
        team_name: req.user?.team_name,
        owner_name: req.user?.name,
        org_spam_score: orgSpamCheck.score,
        owner_spam_score: ownerSpamCheck.score,
        org_reasons: orgSpamCheck.reasons,
        owner_reasons: ownerSpamCheck.reasons,
        ip_address: req.ip,
        timestamp: new Date().toISOString(),
        alert_type: 'suspicious_invitation_flagged'
      });
      // Continue with invitation but flag for review
    }

    // High-risk content already checked above in isObviousSpam condition

    /**
     * Checks the subscription status of the team.
     * @type {Object} subscriptionData - Object containing subscription information
worklenz-backend/src/middleware/rate-limiter.ts (Normal file, 141 lines)
@@ -0,0 +1,141 @@
import { NextFunction } from "express";
import { IWorkLenzRequest } from "../interfaces/worklenz-request";
import { IWorkLenzResponse } from "../interfaces/worklenz-response";
import { ServerResponse } from "../models/server-response";
import loggerModule from "../utils/logger";

const { logger } = loggerModule;

interface RateLimitStore {
  [key: string]: {
    count: number;
    resetTime: number;
  };
}

export class RateLimiter {
  private static store: RateLimitStore = {};
  private static cleanupInterval: NodeJS.Timeout;

  static {
    // Clean up expired entries every 5 minutes
    this.cleanupInterval = setInterval(() => {
      const now = Date.now();
      Object.keys(this.store).forEach(key => {
        if (this.store[key].resetTime < now) {
          delete this.store[key];
        }
      });
    }, 5 * 60 * 1000);
  }

  public static inviteRateLimit(
    maxRequests = 5,
    windowMs: number = 15 * 60 * 1000 // 15 minutes
  ) {
    return (req: IWorkLenzRequest, res: IWorkLenzResponse, next: NextFunction) => {
      const identifier = req.user?.id || req.ip;
      const key = `invite_${identifier}`;
      const now = Date.now();

      if (!this.store[key] || this.store[key].resetTime < now) {
        this.store[key] = {
          count: 1,
          resetTime: now + windowMs
        };
        return next();
      }

      if (this.store[key].count >= maxRequests) {
        const remainingTime = Math.ceil((this.store[key].resetTime - now) / 1000);

        // Log rate limit exceeded for Slack notifications
        logger.warn("⚠️ RATE LIMIT EXCEEDED - INVITE ATTEMPTS", {
          user_id: req.user?.id,
          user_email: req.user?.email,
          ip_address: req.ip,
          attempts: this.store[key].count,
          max_attempts: maxRequests,
          remaining_time: remainingTime,
          timestamp: new Date().toISOString(),
          alert_type: "rate_limit_exceeded"
        });

        return res.status(429).send(
          new ServerResponse(
            false,
            null,
            `Too many invitation attempts. Please try again in ${remainingTime} seconds.`
          )
        );
      }

      this.store[key].count++;
      next();
    };
  }

  public static organizationCreationRateLimit(
    maxRequests = 3,
    windowMs: number = 60 * 60 * 1000 // 1 hour
  ) {
    return (req: IWorkLenzRequest, res: IWorkLenzResponse, next: NextFunction) => {
      const identifier = req.user?.id || req.ip;
      const key = `org_creation_${identifier}`;
      const now = Date.now();

      if (!this.store[key] || this.store[key].resetTime < now) {
        this.store[key] = {
          count: 1,
          resetTime: now + windowMs
        };
        return next();
      }

      if (this.store[key].count >= maxRequests) {
        const remainingTime = Math.ceil((this.store[key].resetTime - now) / (1000 * 60));

        // Log organization creation rate limit exceeded
        logger.warn("⚠️ RATE LIMIT EXCEEDED - ORG CREATION", {
          user_id: req.user?.id,
          user_email: req.user?.email,
          ip_address: req.ip,
          attempts: this.store[key].count,
          max_attempts: maxRequests,
          remaining_time_minutes: remainingTime,
          timestamp: new Date().toISOString(),
          alert_type: "org_creation_rate_limit"
        });

        return res.status(429).send(
          new ServerResponse(
            false,
            null,
            `Too many organization creation attempts. Please try again in ${remainingTime} minutes.`
          )
        );
      }

      this.store[key].count++;
      next();
    };
  }

  public static getStats(identifier: string): { invites: number; orgCreations: number } {
    const inviteKey = `invite_${identifier}`;
    const orgKey = `org_creation_${identifier}`;

    return {
      invites: this.store[inviteKey]?.count || 0,
      orgCreations: this.store[orgKey]?.count || 0
    };
  }

  public static clearStats(identifier: string): void {
    const inviteKey = `invite_${identifier}`;
    const orgKey = `org_creation_${identifier}`;

    delete this.store[inviteKey];
    delete this.store[orgKey];
  }
}
@@ -8,6 +8,10 @@ import {log_error} from "../../shared/utils";
import db from "../../config/db";
import {Request} from "express";
import {ERROR_KEY, SUCCESS_KEY} from "./passport-constants";
import { SpamDetector } from "../../utils/spam-detector";
import loggerModule from "../../utils/logger";

const { logger } = loggerModule;

async function isGoogleAccountFound(email: string) {
  const q = `
@@ -49,12 +53,111 @@ async function handleSignUp(req: Request, email: string, password: string, done:

  if (!team_name) return done(null, null, req.flash(ERROR_KEY, "Team name is required"));

  // Check for spam in team name - Flag suspicious but allow signup
  const teamNameSpamCheck = SpamDetector.detectSpam(team_name);
  if (teamNameSpamCheck.score > 0 || teamNameSpamCheck.reasons.length > 0) {
    logger.warn('⚠️ SUSPICIOUS SIGNUP - TEAM NAME', {
      email,
      team_name,
      user_name: name,
      spam_score: teamNameSpamCheck.score,
      reasons: teamNameSpamCheck.reasons,
      ip_address: req.ip,
      timestamp: new Date().toISOString(),
      alert_type: 'suspicious_signup_flagged'
    });
    // Continue with signup but flag for review
  }

  // Check for spam in user name - Flag suspicious but allow signup
  const userNameSpamCheck = SpamDetector.detectSpam(name);
  if (userNameSpamCheck.score > 0 || userNameSpamCheck.reasons.length > 0) {
    logger.warn('⚠️ SUSPICIOUS SIGNUP - USER NAME', {
      email,
      team_name,
      user_name: name,
      spam_score: userNameSpamCheck.score,
      reasons: userNameSpamCheck.reasons,
      ip_address: req.ip,
      timestamp: new Date().toISOString(),
      alert_type: 'suspicious_signup_flagged'
    });
    // Continue with signup but flag for review
  }

  // Only block EXTREMELY high-risk content (known spam domains, obvious scams)
  if (SpamDetector.isHighRiskContent(team_name) || SpamDetector.isHighRiskContent(name)) {
    // Check if it's REALLY obvious spam (very high scores)
    const isObviousSpam = teamNameSpamCheck.score > 80 || userNameSpamCheck.score > 80 ||
                          /gclnk\.com|bit\.ly\/scam|win.*\$\d+.*crypto/i.test(team_name + ' ' + name);

    if (isObviousSpam) {
      logger.error('🛑 SIGNUP BLOCKED - OBVIOUS SPAM', {
        email,
        team_name,
        user_name: name,
        team_spam_score: teamNameSpamCheck.score,
        user_spam_score: userNameSpamCheck.score,
        ip_address: req.ip,
        timestamp: new Date().toISOString(),
        alert_type: 'obvious_spam_blocked'
      });
      return done(null, null, req.flash(ERROR_KEY, "Registration temporarily unavailable. Please contact support if you need immediate access."));
    } else {
      // High-risk but not obviously spam - flag and allow
      logger.error('🔥 HIGH RISK SIGNUP - FLAGGED', {
        email,
        team_name,
        user_name: name,
        team_spam_score: teamNameSpamCheck.score,
        user_spam_score: userNameSpamCheck.score,
        ip_address: req.ip,
        timestamp: new Date().toISOString(),
        alert_type: 'high_risk_signup_flagged'
      });
      // Continue with signup but flag for immediate review
    }
  }

  const googleAccountFound = await isGoogleAccountFound(email);
  if (googleAccountFound)
    return done(null, null, req.flash(ERROR_KEY, `${req.body.email} is already linked with a Google account.`));

  try {
    const user = await registerUser(password, team_id, name, team_name, email, timezone, team_member_id);

    // If signup was suspicious, flag the team for review after creation
    const totalSuspicionScore = (teamNameSpamCheck.score || 0) + (userNameSpamCheck.score || 0);
    if (totalSuspicionScore > 0) {
      // Flag team for admin review (but don't block user)
      const flagQuery = `
        INSERT INTO spam_logs (team_id, user_id, content_type, original_content, spam_score, spam_reasons, action_taken, ip_address)
        VALUES (
          (SELECT team_id FROM users WHERE id = $1),
          $1,
          'signup_review',
          $2,
          $3,
          $4,
          'flagged_for_review',
          $5
        )
      `;

      try {
        await db.query(flagQuery, [
          user.id,
          `Team: ${team_name} | User: ${name}`,
          totalSuspicionScore,
          JSON.stringify([...teamNameSpamCheck.reasons, ...userNameSpamCheck.reasons]),
          req.ip
        ]);
      } catch (flagError) {
        // Don't fail signup if flagging fails
        logger.warn('Failed to flag suspicious signup for review', { error: flagError, user_id: user.id });
      }
    }

    sendWelcomeEmail(email, name);
    return done(null, user, req.flash(SUCCESS_KEY, "Registration successful. Please check your email for verification."));
  } catch (error: any) {
@@ -60,6 +60,7 @@ import taskRecurringApiRouter from "./task-recurring-api-router";

import customColumnsApiRouter from "./custom-columns-api-router";
import userActivityLogsApiRouter from "./user-activity-logs-api-router";
import moderationApiRouter from "./moderation-api-router";

const api = express.Router();

@@ -122,4 +123,5 @@ api.use("/task-recurring", taskRecurringApiRouter);
api.use("/custom-columns", customColumnsApiRouter);

api.use("/logs", userActivityLogsApiRouter);
api.use("/moderation", moderationApiRouter);
export default api;
worklenz-backend/src/routes/apis/moderation-api-router.ts (Normal file, 16 lines)
@@ -0,0 +1,16 @@
import express from "express";
import ModerationController from "../../controllers/moderation-controller";
import safeControllerFunction from "../../shared/safe-controller-function";

const moderationApiRouter = express.Router();

// Admin-only routes for spam/abuse moderation
moderationApiRouter.get("/flagged-organizations", safeControllerFunction(ModerationController.getFlaggedOrganizations));
moderationApiRouter.post("/flag-organization", safeControllerFunction(ModerationController.flagOrganization));
moderationApiRouter.post("/suspend-organization", safeControllerFunction(ModerationController.suspendOrganization));
moderationApiRouter.post("/unsuspend-organization", safeControllerFunction(ModerationController.unsuspendOrganization));
moderationApiRouter.get("/scan-spam", safeControllerFunction(ModerationController.scanForSpam));
moderationApiRouter.get("/stats", safeControllerFunction(ModerationController.getModerationStats));
moderationApiRouter.post("/bulk-scan", safeControllerFunction(ModerationController.bulkScanAndFlag));

export default moderationApiRouter;
@@ -6,6 +6,7 @@ import idParamValidator from "../../middlewares/validators/id-param-validator";
import teamMembersBodyValidator from "../../middlewares/validators/team-members-body-validator";
import teamOwnerOrAdminValidator from "../../middlewares/validators/team-owner-or-admin-validator";
import safeControllerFunction from "../../shared/safe-controller-function";
import { RateLimiter } from "../../middleware/rate-limiter";

const teamMembersApiRouter = express.Router();

@@ -13,7 +14,7 @@ const teamMembersApiRouter = express.Router();
teamMembersApiRouter.get("/export-all", safeControllerFunction(TeamMembersController.exportAllMembers));
teamMembersApiRouter.get("/export/:id", idParamValidator, safeControllerFunction(TeamMembersController.exportByMember));

teamMembersApiRouter.post("/", teamOwnerOrAdminValidator, teamMembersBodyValidator, safeControllerFunction(TeamMembersController.create));
teamMembersApiRouter.post("/", teamOwnerOrAdminValidator, RateLimiter.inviteRateLimit(5, 15 * 60 * 1000), teamMembersBodyValidator, safeControllerFunction(TeamMembersController.create));
teamMembersApiRouter.get("/", safeControllerFunction(TeamMembersController.get));
teamMembersApiRouter.get("/list", safeControllerFunction(TeamMembersController.getTeamMemberList));
teamMembersApiRouter.get("/tree-map", safeControllerFunction(TeamMembersController.getTeamMembersTreeMap));
@@ -30,6 +31,6 @@ teamMembersApiRouter.put("/:id", teamOwnerOrAdminValidator, idParamValidator, sa
teamMembersApiRouter.delete("/:id", teamOwnerOrAdminValidator, idParamValidator, safeControllerFunction(TeamMembersController.deleteById));
teamMembersApiRouter.get("/deactivate/:id", teamOwnerOrAdminValidator, idParamValidator, safeControllerFunction(TeamMembersController.toggleMemberActiveStatus));

teamMembersApiRouter.put("/add-member/:id", teamOwnerOrAdminValidator, teamMembersBodyValidator, safeControllerFunction(TeamMembersController.addTeamMember));
teamMembersApiRouter.put("/add-member/:id", teamOwnerOrAdminValidator, RateLimiter.inviteRateLimit(3, 10 * 60 * 1000), teamMembersBodyValidator, safeControllerFunction(TeamMembersController.addTeamMember));

export default teamMembersApiRouter;
worklenz-backend/src/utils/spam-detector.ts (Normal file, 244 lines)
@@ -0,0 +1,244 @@
|
||||
import loggerModule from "./logger";
|
||||
|
||||
const { logger } = loggerModule;
|
||||
|
||||
export interface SpamDetectionResult {
|
||||
isSpam: boolean;
|
||||
score: number;
|
||||
reasons: string[];
|
||||
}
|
||||
|
||||
export class SpamDetector {
|
||||
// Whitelist for legitimate organizations that might trigger false positives
|
||||
private static readonly WHITELIST_PATTERNS = [
|
||||
/^(microsoft|google|apple|amazon|facebook|meta|twitter|linkedin|github|stackoverflow)$/i,
|
||||
/^.*(inc|llc|ltd|corp|corporation|company|co|group|enterprises|solutions|services|consulting|tech|technologies|agency|studio|lab|labs|systems|software|development|designs?)$/i,
|
||||
// Allow "free" when it's clearly about software/business
|
||||
/free.*(software|source|lance|consulting|solutions|services|tech|development|range|market|trade)/i,
|
||||
/(open|free).*(software|source)/i,
|
||||
// Common legitimate business patterns
|
||||
/^[a-z]+\s+(software|solutions|services|consulting|tech|technologies|systems|development|designs?|agency|studio|labs?|group|company)$/i,
|
||||
/^(the\s+)?[a-z]+\s+(company|group|studio|agency|lab|labs)$/i
|
||||
];
|
||||
|
||||
private static readonly SPAM_PATTERNS = [
|
||||
// URLs and links
|
||||
/https?:\/\//i,
|
||||
/www\./i,
|
||||
/\b\w+\.(com|net|org|io|co|me|ly|tk|ml|ga|cf|cc|to|us|biz|info|xyz)\b/i,
|
||||
|
||||
// Common spam phrases
|
||||
/click\s*(here|link|now)/i,
|
||||
/urgent|emergency|immediate|limited.time/i,
|
||||
/win|won|winner|prize|reward|congratulations/i,
|
||||
/free|bonus|gift|offer|special.offer/i,
|
||||
/check\s*(out|this|pay)|verify|claim/i,
|
||||
/blockchain|crypto|bitcoin|compensation|investment/i,
|
||||
/cash|money|dollars?|\$\d+|earn.*money/i,
|
||||
|
||||
// Excessive special characters
|
||||
/[!]{2,}/,
|
||||
/[🔔⬅👆💰$💎🎁🎉⚡]{1,}/,
|
||||
/\b[A-Z]{4,}\b/,
|
||||
|
||||
// Suspicious formatting
|
||||
/\s{3,}/,
|
||||
/[.]{3,}/,
|
||||
|
||||
// Additional suspicious patterns
|
||||
/act.now|don.t.miss|guaranteed|limited.spots/i,
|
||||
/download|install|app|software/i,
|
||||
/survey|questionnaire|feedback/i,
|
||||
/\d+%.*off|save.*\$|discount/i
|
||||
];
|
||||
|
||||
private static readonly SUSPICIOUS_WORDS = [
|
||||
"urgent", "emergency", "click", "link", "win", "winner", "prize",
|
||||
"free", "bonus", "cash", "money", "blockchain", "crypto", "compensation",
|
||||
"check", "pay", "reward", "offer", "gift", "congratulations", "claim",
|
||||
"verify", "earn", "investment", "guaranteed", "limited", "exclusive",
|
||||
"download", "install", "survey", "feedback", "discount", "save"
|
||||
];
|
||||
|
||||
public static detectSpam(text: string): SpamDetectionResult {
|
||||
if (!text || typeof text !== "string") {
|
||||
return { isSpam: false, score: 0, reasons: [] };
|
||||
}
|
||||
|
||||
const normalizedText = text.toLowerCase().trim();
|
||||
const reasons: string[] = [];
|
||||
let score = 0;
|
||||
|
||||
// Check for obviously fake organization names FIRST (before whitelist)
|
||||
if (/^(test|example|demo|fake|spam|abuse|temp)\s*(company|org|corp|inc|llc)?$/i.test(text.trim()) ||
|
||||
/(test|demo|fake|spam|abuse|temp)\s*(123|abc|xyz|\d+)/i.test(text)) {
|
||||
score += 30;
|
||||
reasons.push("Contains generic/test name patterns");
|
||||
}
|
||||
|
||||
// Check whitelist - bypass remaining checks for whitelisted organizations
|
||||
if (score === 0) { // Only check whitelist if no generic patterns found
|
||||
for (const pattern of this.WHITELIST_PATTERNS) {
|
||||
if (pattern.test(normalizedText)) {
|
||||
return { isSpam: false, score: 0, reasons: [] };
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Check for URL patterns
|
||||
for (const pattern of this.SPAM_PATTERNS) {
|
||||
if (pattern.test(text)) {
|
||||
score += 25; // Lowered from 30 to catch more suspicious content
|
||||
if (pattern.toString().includes("https?") || pattern.toString().includes("www")) {
|
||||
reasons.push("Contains suspicious URLs or links");
|
||||
} else if (pattern.toString().includes("urgent|emergency")) {
|
||||
reasons.push("Contains urgent/emergency language");
|
||||
} else if (pattern.toString().includes("win|won|winner")) {
|
||||
reasons.push("Contains prize/winning language");
|
||||
} else if (pattern.toString().includes("cash|money")) {
|
||||
reasons.push("Contains monetary references");
|
||||
} else if (pattern.toString().includes("blockchain|crypto")) {
|
||||
reasons.push("Contains cryptocurrency references");
|
||||
} else if (pattern.toString().includes("[!]{3,}")) {
|
||||
reasons.push("Excessive use of exclamation marks");
|
||||
} else if (pattern.toString().includes("[🔔⬅👆💰$]")) {
|
||||
reasons.push("Contains suspicious emojis or symbols");
|
||||
} else if (pattern.toString().includes("[A-Z]{5,}")) {
|
||||
reasons.push("Contains excessive capital letters");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Check for excessive suspicious words - Now with context awareness
|
||||
const suspiciousWords = this.SUSPICIOUS_WORDS.filter(word => {
|
||||
if (!normalizedText.includes(word)) return false;
|
||||
|
||||
// Context-aware filtering for common false positives
|
||||
if (word === 'free') {
|
||||
// Allow "free" in legitimate software/business contexts
|
||||
return !/free.*(software|source|lance|consulting|solutions|services|tech|development|range|market|trade)/i.test(text);
|
||||
}
|
||||
|
||||
if (word === 'check') {
|
||||
// Allow "check" in legitimate business contexts
|
||||
        return !/check.*(list|mark|point|out|up|in|book|ing|ed)/i.test(text);
      }

      if (word === 'save') {
        // Allow "save" in legitimate business contexts
        return !/save.*(data|file|document|time|energy|environment|earth)/i.test(text);
      }

      return true; // Other words are still suspicious
    });

    if (suspiciousWords.length >= 1) {
      score += suspiciousWords.length * 20;
      reasons.push(`Contains ${suspiciousWords.length} suspicious word${suspiciousWords.length > 1 ? 's' : ''}: ${suspiciousWords.join(', ')}`);
    }

    // Check text length - very short or very long names are suspicious
    if (text.length < 2) {
      score += 20;
      reasons.push("Text too short");
    } else if (text.length > 100) {
      score += 25;
      reasons.push("Text unusually long");
    }

    // Check for repeated characters
    if (/(.)\1{4,}/.test(text)) {
      score += 20;
      reasons.push("Contains repeated characters");
    }

    // Check for mixed scripts (potential homograph attack)
    const hasLatin = /[a-zA-Z]/.test(text);
    const hasCyrillic = /[\u0400-\u04FF]/.test(text);
    const hasGreek = /[\u0370-\u03FF]/.test(text);

    if ((hasLatin && hasCyrillic) || (hasLatin && hasGreek)) {
      score += 40;
      reasons.push("Contains mixed character scripts");
    }

    // Generic name check already done above - skip duplicate check

    // Check for excessive numbers in organization names (often spam)
    if (/\d{4,}/.test(text)) {
      score += 25;
      reasons.push("Contains excessive numbers");
    }

    const isSpam = score >= 50;

    // Log suspicious activity for Slack notifications
    if (isSpam || score > 30) {
      logger.warn("🚨 SPAM DETECTED", {
        text: text.substring(0, 100),
        score,
        reasons: [...new Set(reasons)],
        isSpam,
        timestamp: new Date().toISOString(),
        alert_type: "spam_detection"
      });
    }

    return {
      isSpam,
      score,
      reasons: [...new Set(reasons)] // Remove duplicates
    };
  }

  public static isHighRiskContent(text: string): boolean {
    const patterns = [
      /gclnk\.com/i,
      /bit\.ly\/scam/i, // More specific bit.ly patterns
      /tinyurl\.com\/scam/i,
      /\$\d{3,}.*crypto/i, // Money + crypto combination
      /blockchain.*compensation.*urgent/i,
      /win.*\$\d+.*urgent/i, // Win money urgent pattern
      /click.*here.*\$\d+/i // Click here money pattern
    ];

    const isHighRisk = patterns.some(pattern => pattern.test(text));

    // Log high-risk content immediately
    if (isHighRisk) {
      logger.error("🔥 HIGH RISK CONTENT DETECTED", {
        text: text.substring(0, 100),
        matched_patterns: patterns.filter(pattern => pattern.test(text)).map(p => p.toString()),
        timestamp: new Date().toISOString(),
        alert_type: "high_risk_content"
      });
    }

    return isHighRisk;
  }

  public static shouldBlockContent(text: string): boolean {
    const result = this.detectSpam(text);
    // Only block if extremely high score or high-risk patterns
    return result.score > 80 || this.isHighRiskContent(text);
  }

  public static shouldFlagContent(text: string): boolean {
    const result = this.detectSpam(text);
    // Flag anything suspicious (score > 0) but not necessarily blocked
    return result.score > 0 || result.reasons.length > 0;
  }

  public static sanitizeText(text: string): string {
    if (!text || typeof text !== "string") return "";

    return text
      .trim()
      .replace(/https?:\/\/[^\s]+/gi, "[URL_REMOVED]")
      .replace(/www\.[^\s]+/gi, "[URL_REMOVED]")
      .replace(/[🔔⬅👆💰$]{2,}/g, "")
      .replace(/[!]{3,}/g, "!")
      .replace(/\s{3,}/g, " ")
      .substring(0, 100);
  }
}
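Taken together, `shouldBlockContent` and `shouldFlagContent` implement the flag-first policy: only content scoring above 80 (or matching a high-risk pattern) is rejected outright, while anything merely suspicious passes through and is flagged. A minimal sketch of how a route could consume this, assuming an Express handler — the route shape, import path, and `flagged_for_review` field are illustrative, not the actual controller code:

```typescript
// Hypothetical Express middleware (illustrative names and paths, not the Worklenz controller).
import { NextFunction, Request, Response } from "express";
import { SpamDetector } from "../utils/spam-detector";

export function validateOrganizationName(req: Request, res: Response, next: NextFunction) {
  const name = SpamDetector.sanitizeText(req.body.organization_name ?? "");

  // Block only extremely obvious spam (score > 80) or known high-risk patterns.
  if (SpamDetector.shouldBlockContent(name)) {
    return res.status(400).send({ done: false, message: "Organization name was rejected" });
  }

  // Anything merely suspicious proceeds, but is marked for moderation review.
  if (SpamDetector.shouldFlagContent(name)) {
    req.body.flagged_for_review = true; // assumed field, for illustration only
  }

  req.body.organization_name = name;
  next();
}
```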
1  worklenz-frontend/.gitignore  vendored
@@ -11,6 +11,7 @@
# production
/build
/public/tinymce
/docs

# misc
.DS_Store
worklenz-frontend/src/components/account-setup/organization-step.tsx
@@ -1,10 +1,11 @@
import React, { useEffect, useRef, useState } from 'react';
import { Form, Input, InputRef, Typography, Card, Tooltip } from '@/shared/antd-imports';
import { Form, Input, InputRef, Typography, Card, Tooltip, Alert } from '@/shared/antd-imports';
import { useDispatch, useSelector } from 'react-redux';
import { useTranslation } from 'react-i18next';
import { setOrganizationName } from '@/features/account-setup/account-setup.slice';
import { RootState } from '@/app/store';
import { sanitizeInput } from '@/utils/sanitizeInput';
import { SpamDetector } from '@/utils/spamDetector';

const { Title, Paragraph, Text } = Typography;

@@ -29,6 +30,7 @@ export const OrganizationStep: React.FC<Props> = ({
  const dispatch = useDispatch();
  const { organizationName } = useSelector((state: RootState) => state.accountSetupReducer);
  const inputRef = useRef<InputRef>(null);
  const [spamWarning, setSpamWarning] = useState<string>('');

  // Autofill organization name if not already set
  useEffect(() => {

@@ -44,7 +46,19 @@ export const OrganizationStep: React.FC<Props> = ({
  };

  const handleOrgNameChange = (e: React.ChangeEvent<HTMLInputElement>) => {
    const sanitizedValue = sanitizeInput(e.target.value);
    const rawValue = e.target.value;
    const sanitizedValue = sanitizeInput(rawValue);

    // Check for spam patterns
    const spamCheck = SpamDetector.detectSpam(rawValue);
    if (spamCheck.isSpam) {
      setSpamWarning(`Warning: ${spamCheck.reasons.join(', ')}`);
    } else if (SpamDetector.isHighRiskContent(rawValue)) {
      setSpamWarning('Warning: Content appears to contain suspicious links or patterns');
    } else {
      setSpamWarning('');
    }

    dispatch(setOrganizationName(sanitizedValue));
  };

@@ -60,12 +74,25 @@
  </Paragraph>
</div>

{/* Spam Warning */}
{spamWarning && (
  <div className="mb-4">
    <Alert
      message={spamWarning}
      type="warning"
      showIcon
      closable
      onClose={() => setSpamWarning('')}
    />
  </div>
)}

{/* Main Form Card */}
<div className="mb-6">
  <Card
    className="border-2 hover:shadow-md transition-all duration-200"
    style={{
      borderColor: token?.colorPrimary,
      borderColor: spamWarning ? token?.colorWarning : token?.colorPrimary,
      backgroundColor: token?.colorBgContainer
    }}
  >
worklenz-frontend/src/components/admin-center/overview/organization-name/organization-name.tsx
@@ -1,10 +1,11 @@
import { adminCenterApiService } from '@/api/admin-center/admin-center.api.service';
import logger from '@/utils/errorLogger';
import { EnterOutlined, EditOutlined } from '@/shared/antd-imports';
import { Card, Button, Tooltip, Typography } from '@/shared/antd-imports';
import { Card, Button, Tooltip, Typography, Alert } from '@/shared/antd-imports';
import TextArea from 'antd/es/input/TextArea';
import { TFunction } from 'i18next';
import { useState, useEffect } from 'react';
import { SpamDetector } from '@/utils/spamDetector';

interface OrganizationNameProps {
  themeMode: string;

@@ -16,6 +17,7 @@ interface OrganizationNameProps {
const OrganizationName = ({ themeMode, name, t, refetch }: OrganizationNameProps) => {
  const [isEditable, setIsEditable] = useState(false);
  const [newName, setNewName] = useState(name);
  const [spamWarning, setSpamWarning] = useState<string>('');

  useEffect(() => {
    setNewName(name);

@@ -34,7 +36,18 @@ const OrganizationName = ({ themeMode, name, t, refetch }: OrganizationNameProps
  };

  const handleNameChange = (e: React.ChangeEvent<HTMLTextAreaElement>) => {
    setNewName(e.target.value);
    const value = e.target.value;
    setNewName(value);

    // Check for spam patterns
    const spamCheck = SpamDetector.detectSpam(value);
    if (spamCheck.isSpam) {
      setSpamWarning(`Warning: ${spamCheck.reasons.join(', ')}`);
    } else if (SpamDetector.isHighRiskContent(value)) {
      setSpamWarning('Warning: Content appears to contain suspicious links or patterns');
    } else {
      setSpamWarning('');
    }
  };

  const updateOrganizationName = async () => {

@@ -62,6 +75,16 @@ const OrganizationName = ({ themeMode, name, t, refetch }: OrganizationNameProps
<Typography.Title level={5} style={{ margin: 0, marginBottom: '0.5rem' }}>
  {t('name')}
</Typography.Title>
{spamWarning && (
  <Alert
    message={spamWarning}
    type="warning"
    showIcon
    closable
    onClose={() => setSpamWarning('')}
    style={{ marginBottom: '8px' }}
  />
)}
<div style={{ paddingTop: '8px' }}>
  <div style={{ marginBottom: '8px' }}>
    {isEditable ? (
worklenz-frontend/src/components/settings/edit-team-name-modal.tsx
@@ -1,9 +1,10 @@
import { Divider, Form, Input, message, Modal, Typography } from '@/shared/antd-imports';
import { Divider, Form, Input, message, Modal, Typography, Alert } from '@/shared/antd-imports';
import { useEffect, useState } from 'react';
import { useTranslation } from 'react-i18next';
import { useAppDispatch } from '@/hooks/useAppDispatch';
import { editTeamName, fetchTeams } from '@/features/teams/teamSlice';
import { ITeamGetResponse } from '@/types/teams/team.type';
import { SpamDetector } from '@/utils/spamDetector';

interface EditTeamNameModalProps {
  team: ITeamGetResponse | null;

@@ -16,6 +17,7 @@ const EditTeamNameModal = ({ team, isModalOpen, onCancel }: EditTeamNameModalPro
  const dispatch = useAppDispatch();
  const [form] = Form.useForm();
  const [updating, setUpdating] = useState(false);
  const [spamWarning, setSpamWarning] = useState<string>('');

  useEffect(() => {
    if (team) {

@@ -67,6 +69,16 @@ const EditTeamNameModal = ({ team, isModalOpen, onCancel }: EditTeamNameModalPro
  destroyOnClose={true}
>
  <Form form={form} layout="vertical" onFinish={handleFormSubmit}>
    {spamWarning && (
      <Alert
        message={spamWarning}
        type="warning"
        showIcon
        closable
        onClose={() => setSpamWarning('')}
        style={{ marginBottom: '16px' }}
      />
    )}
    <Form.Item
      name="name"
      label={t('name')}

@@ -77,7 +89,20 @@ const EditTeamNameModal = ({ team, isModalOpen, onCancel }: EditTeamNameModalPro
      },
    ]}
  >
    <Input placeholder={t('namePlaceholder')} />
    <Input
      placeholder={t('namePlaceholder')}
      onChange={(e) => {
        const value = e.target.value;
        const spamCheck = SpamDetector.detectSpam(value);
        if (spamCheck.isSpam) {
          setSpamWarning(`Warning: ${spamCheck.reasons.join(', ')}`);
        } else if (SpamDetector.isHighRiskContent(value)) {
          setSpamWarning('Warning: Content appears to contain suspicious links or patterns');
        } else {
          setSpamWarning('');
        }
      }}
    />
  </Form.Item>
</Form>
</Modal>
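The three components above wire the same check into their change handlers. A rough consolidation of that shared logic, assuming a hypothetical helper module that this commit does not add:

```typescript
// Hypothetical shared helper (illustration only) for the warning logic repeated in
// organization-step, organization-name, and edit-team-name-modal.
import { SpamDetector } from '@/utils/spamDetector';

export const getSpamWarning = (value: string): string => {
  const spamCheck = SpamDetector.detectSpam(value);
  if (spamCheck.isSpam) {
    return `Warning: ${spamCheck.reasons.join(', ')}`;
  }
  if (SpamDetector.isHighRiskContent(value)) {
    return 'Warning: Content appears to contain suspicious links or patterns';
  }
  return '';
};

// e.g. inside an onChange handler: setSpamWarning(getSpamWarning(e.target.value));
```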
141  worklenz-frontend/src/utils/spamDetector.ts  Normal file
@@ -0,0 +1,141 @@
export interface SpamDetectionResult {
  isSpam: boolean;
  score: number;
  reasons: string[];
}

export class SpamDetector {
  private static readonly SPAM_PATTERNS = [
    // URLs and links
    /https?:\/\//i,
    /www\./i,
    /\b\w+\.(com|net|org|io|co|me|ly|tk|ml|ga|cf)\b/i,

    // Common spam phrases
    /click\s*(here|link|now)/i,
    /urgent|emergency|immediate/i,
    /win|won|winner|prize|reward/i,
    /free|bonus|gift|offer/i,
    /check\s*(out|this|pay)/i,
    /blockchain|crypto|bitcoin|compensation/i,
    /cash|money|dollars?|\$\d+/i,

    // Excessive special characters
    /[!]{3,}/,
    /[🔔⬅👆💰$]{2,}/,
    /\b[A-Z]{5,}\b/,

    // Suspicious formatting
    /\s{3,}/,
    /[.]{3,}/
  ];

  private static readonly SUSPICIOUS_WORDS = [
    'urgent', 'emergency', 'click', 'link', 'win', 'winner', 'prize',
    'free', 'bonus', 'cash', 'money', 'blockchain', 'crypto', 'compensation',
    'check', 'pay', 'reward', 'offer', 'gift'
  ];

  public static detectSpam(text: string): SpamDetectionResult {
    if (!text || typeof text !== 'string') {
      return { isSpam: false, score: 0, reasons: [] };
    }

    const normalizedText = text.toLowerCase().trim();
    const reasons: string[] = [];
    let score = 0;

    // Check for URL patterns
    for (const pattern of this.SPAM_PATTERNS) {
      if (pattern.test(text)) {
        score += 30;
        if (pattern.toString().includes('https?') || pattern.toString().includes('www')) {
          reasons.push('Contains suspicious URLs or links');
        } else if (pattern.toString().includes('urgent|emergency')) {
          reasons.push('Contains urgent/emergency language');
        } else if (pattern.toString().includes('win|won|winner')) {
          reasons.push('Contains prize/winning language');
        } else if (pattern.toString().includes('cash|money')) {
          reasons.push('Contains monetary references');
        } else if (pattern.toString().includes('blockchain|crypto')) {
          reasons.push('Contains cryptocurrency references');
        } else if (pattern.toString().includes('[!]{3,}')) {
          reasons.push('Excessive use of exclamation marks');
        } else if (pattern.toString().includes('[🔔⬅👆💰$]')) {
          reasons.push('Contains suspicious emojis or symbols');
        } else if (pattern.toString().includes('[A-Z]{5,}')) {
          reasons.push('Contains excessive capital letters');
        }
      }
    }

    // Check for excessive suspicious words
    const suspiciousWordCount = this.SUSPICIOUS_WORDS.filter(word =>
      normalizedText.includes(word)
    ).length;

    if (suspiciousWordCount >= 2) {
      score += suspiciousWordCount * 15;
      reasons.push(`Contains ${suspiciousWordCount} suspicious words`);
    }

    // Check text length - very short or very long names are suspicious
    if (text.length < 2) {
      score += 20;
      reasons.push('Text too short');
    } else if (text.length > 100) {
      score += 25;
      reasons.push('Text unusually long');
    }

    // Check for repeated characters
    if (/(.)\1{4,}/.test(text)) {
      score += 20;
      reasons.push('Contains repeated characters');
    }

    // Check for mixed scripts (potential homograph attack)
    const hasLatin = /[a-zA-Z]/.test(text);
    const hasCyrillic = /[\u0400-\u04FF]/.test(text);
    const hasGreek = /[\u0370-\u03FF]/.test(text);

    if ((hasLatin && hasCyrillic) || (hasLatin && hasGreek)) {
      score += 40;
      reasons.push('Contains mixed character scripts');
    }

    const isSpam = score >= 50;

    return {
      isSpam,
      score,
      reasons: [...new Set(reasons)] // Remove duplicates
    };
  }

  public static isHighRiskContent(text: string): boolean {
    const patterns = [
      /gclnk\.com/i,
      /bit\.ly/i,
      /tinyurl/i,
      /\$\d{3,}/,
      /blockchain.*compensation/i,
      /urgent.*check/i
    ];

    return patterns.some(pattern => pattern.test(text));
  }

  public static sanitizeText(text: string): string {
    if (!text || typeof text !== 'string') return '';

    return text
      .trim()
      .replace(/https?:\/\/[^\s]+/gi, '[URL_REMOVED]')
      .replace(/www\.[^\s]+/gi, '[URL_REMOVED]')
      .replace(/[🔔⬅👆💰$]{2,}/g, '')
      .replace(/[!]{3,}/g, '!')
      .replace(/\s{3,}/g, ' ')
      .substring(0, 100);
  }
}
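For reference, a minimal sketch of how the frontend detector behaves on clean versus spammy input; the example strings are made up, and the comments restate what the rules above should produce:

```typescript
import { SpamDetector } from '@/utils/spamDetector';

const clean = SpamDetector.detectSpam('Acme Corporation');
// clean.isSpam === false, clean.score === 0 - no pattern or suspicious word matches

const spammy = SpamDetector.detectSpam('URGENT!!! Win $500 - click bit.ly/abc');
// spammy.isSpam === true - the urgency, prize, money, domain (.ly), exclamation-run
// and all-caps patterns each add 30 points, pushing the score well past the 50 threshold

SpamDetector.isHighRiskContent('URGENT!!! Win $500 - click bit.ly/abc');
// true - both /bit\.ly/i and /\$\d{3,}/ match

SpamDetector.sanitizeText('Visit https://spam.example now!!!');
// => 'Visit [URL_REMOVED] now!' - URLs stripped, exclamation runs collapsed
```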