first commit

.vscode/settings.json (vendored, new file)
@@ -0,0 +1,2 @@
{
}

data/kpi_analysis.db (binary, new file)

kpi_analysis/.env (new file, 55 lines)
@@ -0,0 +1,55 @@
# Environment Configuration Template for KPI Analysis Application
# Copy this file to .env and update the values with your actual configuration

# Application Settings
DEBUG=True
SECRET_KEY=your-super-secret-key-change-this-in-production

# Nextcloud Configuration
NEXTCLOUD_BASE_URL=https://nc.mapan.co.id
NEXTCLOUD_OAUTH_CLIENT_ID=Ope2FdMfzGco0AyONAvWd0n9Bi1WUzC2ya2ALDMbQbnSktr82WjWjfDP7eAv7Ke5
NEXTCLOUD_OAUTH_CLIENT_SECRET=pq6SM72B6ekpTE1nSGX3A7vNcQ50AScC4cSM7q9VzhFThqI5MFHSIQxWebWm0GEY
NEXTCLOUD_REDIRECT_URI=http://localhost:8000/auth/nextcloud/callback
NEXTCLOUD_KPI_FOLDER=/Minutes of Meeting Mapan Group/2025/KPI/MANAGER
NEXTCLOUD_USERNAME=suherdy.yacob

# OpenAI Configuration
OPENAI_API_KEY=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
OPENAI_MODEL=gpt-4
OPENAI_MAX_TOKENS=2000
OPENAI_TEMPERATURE=0.7

# LDAP/Active Directory Configuration
LDAP_SERVER=192.168.10.10
LDAP_PORT=389
LDAP_USE_SSL=False
LDAP_BASE_DN=DC=your-company,DC=com
LDAP_BIND_DN=CN=service-account,OU=Service Accounts,DC=your-company,DC=com
LDAP_BIND_PASSWORD=your-ldap-password

# LDAP Group Configuration (Required for access control)
LDAP_GROUP_BASE_DN=DC=your-company,DC=com
LDAP_KPI_GROUP_DN=CN=KPI_Users,OU=Groups,DC=your-company,DC=com
LDAP_KPI_GROUP_NAME=KPI_Users
LDAP_GROUP_MEMBER_ATTRIBUTE=member
LDAP_USER_MEMBER_ATTRIBUTE=memberOf

# Fallback Authentication (for testing/development when LDAP not available)
ENABLE_FALLBACK_AUTH=True
FALLBACK_ADMIN_USERNAME=admin
FALLBACK_ADMIN_PASSWORD=super
FALLBACK_ADMIN_ROLE=admin
FALLBACK_ADMIN_EMAIL=admin@kpi-system.local

# Database Settings
DATABASE_URL=sqlite:///./data/kpi_analysis.db

# Email Configuration (Optional)
SMTP_SERVER=smtp.your-company.com
SMTP_PORT=587
SMTP_USERNAME=your-email@your-company.com
SMTP_PASSWORD=your-email-password
EMAIL_FROM=KPI Analysis System <kpi@your-company.com>

# Logging
LOG_LEVEL=INFO

kpi_analysis/AUTHENTICATION_FLOW.md (new file, 366 lines)
@@ -0,0 +1,366 @@
# Authentication Flow

## Overview

The KPI Analysis Dashboard uses JWT (JSON Web Token) based authentication with support for both LDAP and fallback authentication methods.

## Authentication Flow Diagram

```
┌─────────────────────────────────────────────────────────────────┐
│                        User Access Flow                         │
└─────────────────────────────────────────────────────────────────┘

1. User visits http://localhost:8000/
   │
   ├─→ Server: GET /
   │   └─→ Response: 302 Redirect to /login
   │
2. Browser loads /login page
   │
   ├─→ JavaScript checks localStorage for 'kpi_token'
   │   │
   │   ├─→ Token exists?
   │   │   ├─→ YES: Validate with /api/auth/me
   │   │   │   ├─→ Valid: Redirect to /dashboard
   │   │   │   └─→ Invalid: Clear token, stay on login
   │   │   │
   │   │   └─→ NO: Show login form
   │
3. User enters credentials (admin/super)
   │
   ├─→ POST /api/auth/login
   │   └─→ Request: {"username": "admin", "password": "super"}
   │
4. Server validates credentials
   │
   ├─→ Try LDAP authentication (if configured)
   │   ├─→ LDAP available?
   │   │   ├─→ YES: Authenticate with LDAP
   │   │   │   ├─→ Success: Check group membership
   │   │   │   │   ├─→ In KPI group: Create token
   │   │   │   │   └─→ Not in group: Reject
   │   │   │   └─→ Failed: Try fallback
   │   │   │
   │   │   └─→ NO: Try fallback
   │   │
   ├─→ Try Fallback authentication (if enabled)
   │   ├─→ Credentials match fallback?
   │   │   ├─→ YES: Create token
   │   │   └─→ NO: Reject (401)
   │
5. Server creates JWT token
   │
   ├─→ Token contains:
   │   ├─→ user_id
   │   ├─→ username
   │   ├─→ email
   │   ├─→ role
   │   ├─→ exp (expiration)
   │   └─→ iat (issued at)
   │
6. Server responds with token
   │
   └─→ Response: {
         "success": true,
         "access_token": "eyJhbGc...",
         "token_type": "bearer",
         "user": {...}
       }
   │
7. Client stores token
   │
   ├─→ localStorage.setItem('kpi_token', token)
   ├─→ localStorage.setItem('kpi_user', user)
   │
8. Client redirects to /dashboard
   │
9. Dashboard page loads
   │
   ├─→ JavaScript checks for token
   │   ├─→ No token: Redirect to /login
   │   └─→ Has token: Continue
   │
10. Validate token with backend
    │
    ├─→ GET /api/auth/me
    │   └─→ Headers: Authorization: Bearer <token>
    │
11. Server validates token
    │
    ├─→ Decode JWT
    │   ├─→ Valid signature?
    │   ├─→ Not expired?
    │   └─→ User exists?
    │
12. Server responds
    │
    ├─→ Valid: Return user info
    │   └─→ Dashboard shows content
    │
    └─→ Invalid: Return 401
        └─→ Client redirects to /login

┌─────────────────────────────────────────────────────────────────┐
│                      Protected API Access                       │
└─────────────────────────────────────────────────────────────────┘

User makes API request
   │
   ├─→ GET /api/files/list
   │   └─→ Headers: Authorization: Bearer <token>
   │
Server validates token
   │
   ├─→ Token valid?
   │   ├─→ YES: Process request
   │   │   └─→ Return data
   │   │
   │   └─→ NO: Return 401
   │       └─→ Client redirects to /login

┌─────────────────────────────────────────────────────────────────┐
│                           Logout Flow                           │
└─────────────────────────────────────────────────────────────────┘

User clicks logout
   │
   ├─→ POST /api/auth/logout (optional)
   │   └─→ Headers: Authorization: Bearer <token>
   │
   ├─→ Clear localStorage
   │   ├─→ localStorage.removeItem('kpi_token')
   │   └─→ localStorage.removeItem('kpi_user')
   │
   └─→ Redirect to /login
```

## Authentication Methods

### 1. LDAP Authentication (Primary)

```
┌─────────────────────────────────────────────────────────────────┐
│                       LDAP Authentication                       │
└─────────────────────────────────────────────────────────────────┘

1. User provides credentials
   │
2. Connect to LDAP server
   │
   ├─→ Server: ldap://192.168.10.10:389
   │
3. Search for user
   │
   ├─→ Base DN: DC=your-company,DC=com
   ├─→ Filter: (uid={username})
   │
4. User found?
   │
   ├─→ YES: Bind as user with password
   │   │
   │   ├─→ Bind successful?
   │   │   │
   │   │   ├─→ YES: Check group membership
   │   │   │   │
   │   │   │   ├─→ Get user's memberOf attribute
   │   │   │   ├─→ Check if KPI_Users group DN is in list
   │   │   │   │
   │   │   │   ├─→ In group: AUTHENTICATED ✅
   │   │   │   └─→ Not in group: REJECTED ❌
   │   │   │
   │   │   └─→ NO: REJECTED ❌
   │   │
   └─→ NO: REJECTED ❌
```
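
The group-membership step above boils down to comparing the configured group DN against the user's `memberOf` values. A minimal, standalone sketch (the application's actual logic lives in `app/services/ldap_auth_service.py`; this assumes a simple case-insensitive DN comparison):

```python
def normalize_dn(dn: str) -> str:
    # Lowercase and strip whitespace around RDN separators; full RFC 4514
    # normalization is more involved, this is enough for illustration.
    return ",".join(part.strip() for part in dn.lower().split(","))

def is_authorized(member_of: list, kpi_group_dn: str) -> bool:
    """The 'Check if KPI_Users group DN is in list' step from the diagram."""
    target = normalize_dn(kpi_group_dn)
    return any(normalize_dn(dn) == target for dn in member_of)

groups = [
    "CN=KPI_Users,OU=Groups,DC=your-company,DC=com",
    "CN=Staff,OU=Groups,DC=your-company,DC=com",
]
print(is_authorized(groups, "cn=kpi_users, ou=groups, dc=your-company, dc=com"))  # True
```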

### 2. Fallback Authentication (Development)

```
┌─────────────────────────────────────────────────────────────────┐
│                     Fallback Authentication                     │
└─────────────────────────────────────────────────────────────────┘

1. User provides credentials
   │
2. Check if fallback enabled
   │
   ├─→ ENABLE_FALLBACK_AUTH=true?
   │   │
   │   ├─→ YES: Continue
   │   └─→ NO: REJECTED ❌
   │
3. Compare credentials
   │
   ├─→ username == FALLBACK_ADMIN_USERNAME?
   ├─→ password == FALLBACK_ADMIN_PASSWORD?
   │
   ├─→ Both match: AUTHENTICATED ✅
   └─→ No match: REJECTED ❌
```
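
The fallback comparison can be sketched as a small pure function. The `FALLBACK_ADMIN_*` values are hardcoded here for illustration (in the application they come from `.env`), and `hmac.compare_digest` keeps the string comparison constant-time:

```python
import hmac

# Hardcoded for illustration; the application reads these from settings.
ENABLE_FALLBACK_AUTH = True
FALLBACK_ADMIN_USERNAME = "admin"
FALLBACK_ADMIN_PASSWORD = "super"

def fallback_authenticate(username: str, password: str) -> bool:
    if not ENABLE_FALLBACK_AUTH:        # step 2: fallback enabled?
        return False
    # step 3: compare both values; compare_digest avoids timing leaks
    return (hmac.compare_digest(username, FALLBACK_ADMIN_USERNAME)
            and hmac.compare_digest(password, FALLBACK_ADMIN_PASSWORD))

print(fallback_authenticate("admin", "super"))  # True
print(fallback_authenticate("admin", "nope"))   # False
```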

## Token Structure

### JWT Token Payload
```json
{
  "user_id": "admin",
  "username": "admin",
  "email": "admin@kpi-system.local",
  "role": "admin",
  "exp": 1764057689,  // Expiration timestamp
  "iat": 1764054089,  // Issued at timestamp
  "sub": "admin"      // Subject (username)
}
```

### Token Validation
```
1. Extract token from Authorization header
   │
2. Decode JWT
   │
   ├─→ Verify signature with SECRET_KEY
   │   ├─→ Valid: Continue
   │   └─→ Invalid: REJECT ❌
   │
3. Check expiration
   │
   ├─→ exp > current_time?
   │   ├─→ YES: Continue
   │   └─→ NO: REJECT ❌ (Token expired)
   │
4. Extract user info
   │
   └─→ Return user data ✅
```
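
The validation steps above can be sketched with only the standard library. A real deployment would use a maintained library such as PyJWT; this hand-rolled HS256 encoder/verifier is illustrative only:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET_KEY = "your-super-secret-key-change-this-in-production"  # from .env

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def create_token(claims: dict, expires_in: int = 3600) -> str:
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {**claims, "iat": now, "exp": now + expires_in}
    signing_input = ".".join(
        _b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(SECRET_KEY.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"

def validate_token(token: str) -> dict:
    signing_input, _, sig_b64 = token.rpartition(".")
    expected = hmac.new(SECRET_KEY.encode(), signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(_b64url(expected), sig_b64):  # step 2: bad signature
        raise ValueError("invalid signature")
    payload_b64 = signing_input.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(padded))
    if payload["exp"] <= time.time():                        # step 3: expired
        raise ValueError("token expired")
    return payload                                           # step 4: user data

token = create_token({"sub": "admin", "role": "admin"})
print(validate_token(token)["sub"])  # admin
```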

## Security Features

### 1. Token Expiration
- Tokens expire after 1 hour (configurable)
- Expired tokens are automatically rejected
- Users must log in again after expiration

### 2. Secure Storage
- Tokens are stored in browser localStorage
- Cleared on logout
- Scoped to the page's origin, so not accessible to other domains

### 3. HTTPS (Production)
- All communication encrypted
- Tokens protected in transit
- Prevents man-in-the-middle attacks

### 4. Group-Based Access (LDAP)
- Only users in the KPI_Users group can access the system
- Group membership verified on each login
- Centralized access control

### 5. Role-Based Authorization
- User roles stored in the token
- Can be used for feature access control
- Admin vs. regular user permissions

## Configuration

### Environment Variables

```bash
# Fallback Authentication
ENABLE_FALLBACK_AUTH=true
FALLBACK_ADMIN_USERNAME=admin
FALLBACK_ADMIN_PASSWORD=super
FALLBACK_ADMIN_ROLE=admin
FALLBACK_ADMIN_EMAIL=admin@kpi-system.local

# LDAP Configuration
LDAP_SERVER=192.168.10.10
LDAP_PORT=389
LDAP_USE_SSL=false
LDAP_BASE_DN=DC=your-company,DC=com
LDAP_BIND_DN=CN=service-account,OU=Service Accounts,DC=your-company,DC=com
LDAP_BIND_PASSWORD=your-ldap-password

# LDAP Group Configuration
LDAP_KPI_GROUP_DN=CN=KPI_Users,OU=Groups,DC=your-company,DC=com
LDAP_GROUP_MEMBER_ATTRIBUTE=member
LDAP_USER_MEMBER_ATTRIBUTE=memberOf

# Security
SECRET_KEY=your-secret-key-here-change-in-production
SESSION_TIMEOUT_MINUTES=60
```

## Testing

### Test Authentication
```bash
python test_auth.py
```

### Test API Endpoints
```bash
# Login
curl -X POST http://localhost:8000/api/auth/login \
  -H "Content-Type: application/json" \
  -d '{"username":"admin","password":"super"}'

# Get user info
curl http://localhost:8000/api/auth/me \
  -H "Authorization: Bearer <token>"

# Logout
curl -X POST http://localhost:8000/api/auth/logout \
  -H "Authorization: Bearer <token>"
```

## Troubleshooting

### Issue: Can't log in
**Solution:**
1. Check that fallback auth is enabled: `ENABLE_FALLBACK_AUTH=true`
2. Verify credentials: `admin` / `super`
3. Check the browser console for errors
4. Clear localStorage and try again

### Issue: Token expired
**Solution:**
1. Log in again to get a new token
2. Adjust `SESSION_TIMEOUT_MINUTES` if needed

### Issue: LDAP not working
**Solution:**
1. Install ldap3: `pip install ldap3`
2. Check that the LDAP server is accessible
3. Verify LDAP credentials
4. Test with: `GET /api/auth/test`

### Issue: Dashboard redirects to login
**Solution:**
This is correct behavior if:
- There is no token in localStorage
- The token is expired
- The token is invalid

Log in again to get a new token.

## Best Practices

### Development
- ✅ Use fallback authentication
- ✅ Enable debug mode
- ✅ Use HTTP for local testing
- ✅ Short token expiration for testing

### Production
- ✅ Disable fallback authentication
- ✅ Use LDAP/Active Directory
- ✅ Enable HTTPS
- ✅ Change SECRET_KEY to a random value
- ✅ Set an appropriate token expiration
- ✅ Enable rate limiting
- ✅ Add CSRF protection
- ✅ Use secure session storage
- ✅ Enable audit logging
- ✅ Apply regular security updates

kpi_analysis/LDAP_AUTHENTICATION.md (new file, 292 lines)
@@ -0,0 +1,292 @@
# LDAP Group-Based Authentication

The KPI Analysis Dashboard now includes comprehensive LDAP authentication with group-based access control. Only users who are members of the specified LDAP group can access the system.

## 🔐 **Security Features**

### **Dual Authentication System**

#### **Primary: LDAP Group-Based Authentication**
- **LDAP Authentication**: Users must authenticate against your company's LDAP/Active Directory
- **Group Membership Verification**: Only users in the authorized group can access the system
- **JWT Token Management**: Secure session management with automatic token expiration
- **Role-Based Access**: Support for different user roles within the authorized group
- **Enterprise Integration**: Seamless integration with existing directory infrastructure

#### **Fallback: Default Admin Authentication**
- **Development & Testing**: Provides access when LDAP is not available
- **Default Credentials**: Username `admin`, Password `super`
- **Configurable**: Enable/disable fallback authentication in settings
- **Security Note**: Should be disabled in production environments

### **Authentication Priority**
1. **LDAP Authentication** (if configured and available)
2. **Fallback Authentication** (if enabled and LDAP fails)
3. **Access Denied** (if no valid authentication method)

### **Authentication Flow**
1. **User Login**: Username and password are sent to the authentication endpoint
2. **LDAP Verification**: System validates credentials against the LDAP server
3. **Group Check**: System verifies the user is a member of the authorized KPI group
4. **Token Generation**: JWT token created for the authenticated user
5. **Session Management**: Token used for all subsequent API calls
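
The priority chain can be sketched as follows. The helper names here are assumptions for illustration, and `ldap_authenticate` is stubbed out (the real implementation binds to the directory and checks KPI group membership in `app/services/ldap_auth_service.py`):

```python
def ldap_authenticate(username: str, password: str):
    """Stub: would bind to LDAP and verify KPI group membership."""
    return None  # None = LDAP unavailable, credentials rejected, or not in group

def fallback_authenticate(username: str, password: str):
    if username == "admin" and password == "super":  # FALLBACK_ADMIN_* settings
        return {"username": "admin", "role": "admin"}
    return None

def authenticate(username: str, password: str) -> dict:
    user = ldap_authenticate(username, password)          # 1. LDAP (primary)
    if user is None:
        user = fallback_authenticate(username, password)  # 2. fallback
    if user is None:
        raise PermissionError("access denied")            # 3. no valid method
    return user

print(authenticate("admin", "super")["role"])  # admin
```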

## ⚙️ **Configuration**

### **Fallback Authentication Settings**

For testing and development when LDAP is not available:

```env
# Fallback Authentication Configuration
ENABLE_FALLBACK_AUTH=true
FALLBACK_ADMIN_USERNAME=admin
FALLBACK_ADMIN_PASSWORD=super
FALLBACK_ADMIN_ROLE=admin
FALLBACK_ADMIN_EMAIL=admin@kpi-system.local
```

**⚠️ Security Warning**: Disable fallback authentication in production by setting `ENABLE_FALLBACK_AUTH=false`.

**Default Login Credentials** (when fallback is enabled):
- Username: `admin`
- Password: `super`
- Role: `admin`

### **Required LDAP Settings**

Add these settings to your `.env` file:

```env
# LDAP Server Configuration
LDAP_SERVER=ldap.your-company.com
LDAP_PORT=389
LDAP_USE_SSL=true
LDAP_BASE_DN=DC=your-company,DC=com

# Authentication Binding (Service Account)
LDAP_BIND_DN=CN=service-account,OU=Service Accounts,DC=your-company,DC=com
LDAP_BIND_PASSWORD=your-service-account-password

# Group-Based Access Control
LDAP_GROUP_BASE_DN=DC=your-company,DC=com
LDAP_KPI_GROUP_DN=CN=KPI_Users,OU=Groups,DC=your-company,DC=com
LDAP_KPI_GROUP_NAME=KPI_Users
LDAP_GROUP_MEMBER_ATTRIBUTE=member
LDAP_USER_MEMBER_ATTRIBUTE=memberOf
```

### **Group Configuration Examples**

#### **For Active Directory:**
```env
LDAP_SERVER=dc1.your-company.com
LDAP_BASE_DN=DC=your-company,DC=com
LDAP_KPI_GROUP_DN=CN=KPI_Users,OU=Security Groups,DC=your-company,DC=com
LDAP_USER_MEMBER_ATTRIBUTE=memberOf
```

#### **For OpenLDAP:**
```env
LDAP_SERVER=ldap.your-company.com
LDAP_BASE_DN=ou=people,dc=your-company,dc=com
LDAP_KPI_GROUP_DN=cn=kpi-users,ou=groups,dc=your-company,dc=com
LDAP_GROUP_MEMBER_ATTRIBUTE=member
```

## 🏢 **Group Setup Instructions**

### **Active Directory Setup**

1. **Create Security Group:**
   - Open Active Directory Users and Computers
   - Navigate to the desired Organizational Unit
   - Right-click → New → Group
   - Group name: `KPI_Users`
   - Group scope: Global
   - Group type: Security

2. **Add Users to Group:**
   - Right-click the `KPI_Users` group
   - Properties → Members → Add
   - Add users who need access to the KPI system

### **OpenLDAP Setup**

1. **Create Group Entry:**
   ```ldif
   dn: cn=kpi-users,ou=groups,dc=your-company,dc=com
   objectClass: groupOfNames
   cn: kpi-users
   description: Users authorized to access KPI Analysis system
   member: uid=user1,ou=people,dc=your-company,dc=com
   member: uid=user2,ou=people,dc=your-company,dc=com
   ```

2. **Update User Entries:**
   Ensure user entries include the group in their `memberOf` attribute or use `member` references.

## 🔧 **Testing Configuration**

### **API Test Endpoint**
Use the built-in test endpoint to verify your configuration:

```
GET /api/auth/test
```

This endpoint returns:
- Connection test results
- Group access verification
- Configuration status
- Troubleshooting information

### **Manual Testing**

1. **Test LDAP Connection:**
   ```bash
   curl http://localhost:8000/api/auth/test
   ```

2. **Test Authentication:**
   ```bash
   # Test with LDAP user (if configured)
   curl -X POST http://localhost:8000/api/auth/login \
     -H "Content-Type: application/x-www-form-urlencoded" \
     -d "username=your-username&password=your-password"

   # Test with fallback admin (if enabled)
   curl -X POST http://localhost:8000/api/auth/login \
     -H "Content-Type: application/x-www-form-urlencoded" \
     -d "username=admin&password=super"
   ```

## 🛠️ **Implementation Details**

### **Authentication Service**
- **File**: `app/services/ldap_auth_service.py`
- **Features**:
  - LDAP connection management
  - Multiple group membership checking methods
  - User attribute retrieval
  - Connection pooling and error handling

### **Authorization Middleware**
- **File**: `app/core/auth.py`
- **Features**:
  - JWT token generation and verification
  - User session management
  - FastAPI dependency injection
  - Role-based access control

### **Frontend Integration**
- **File**: `static/js/dashboard.js`
- **Features**:
  - Login form handling
  - Token storage and management
  - Automatic authentication header injection
  - Session validation and logout

### **API Protection**
All protected endpoints now require authentication:
- `/api/files/*`
- `/api/analysis/*`
- `/api/nextcloud/*`
- `/api/reports/*`
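
Any client calling these protected endpoints must attach the bearer token. A minimal Python sketch using the standard library (the endpoint URL and token value are placeholders):

```python
from urllib.request import Request

def authorized_request(url: str, token: str) -> Request:
    """Attach the JWT as a bearer token, as static/js/dashboard.js does for fetch()."""
    return Request(url, headers={"Authorization": f"Bearer {token}"})

req = authorized_request("http://localhost:8000/api/files/list", "eyJhbGc...")
print(req.get_header("Authorization"))  # Bearer eyJhbGc...
# urllib.request.urlopen(req) would then return the data, or raise an
# HTTPError with status 401 if the token is missing, invalid, or expired.
```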

## 🚫 **Security Best Practices**

### **Environment Security**
- Never commit the `.env` file to version control
- Use strong passwords for service accounts
- Limit LDAP service account permissions
- Regularly rotate credentials

### **Network Security**
- Use LDAPS (LDAP over SSL) in production
- Configure firewall rules for LDAP ports
- Consider LDAP connection timeout settings

### **Application Security**
- JWT tokens expire automatically
- Tokens stored in localStorage (client-side)
- All API calls validated server-side
- Audit logging for authentication events

## 🔍 **Troubleshooting**

### **Common Issues**

#### **"Authentication failed" Errors**
- Verify LDAP server connectivity
- Check username/password accuracy
- Ensure the service account has sufficient permissions

#### **"User not authorized" Errors**
- Verify the user is a member of the KPI group
- Check the group DN configuration
- Validate member attribute mappings

#### **"Connection failed" Errors**
- Test LDAP server availability
- Check firewall settings
- Verify SSL/TLS configuration

#### **"Group not found" Errors**
- Verify the group DN format
- Check group search permissions
- Confirm the group exists in the directory

### **Debug Information**

The `/api/auth/test` endpoint provides detailed diagnostic information:
- LDAP connection status
- Group access verification
- Configuration validation
- Error messages and recommendations

### **Log Analysis**

Check application logs for:
- Authentication attempts
- LDAP connection issues
- Group membership verification
- Token validation errors

## 📋 **Configuration Checklist**

- [ ] LDAP server details configured
- [ ] Service account created and tested
- [ ] KPI Users group created
- [ ] Authorized users added to group
- [ ] Group DN configuration verified
- [ ] SSL/TLS settings configured
- [ ] Test endpoint returns success
- [ ] Login functionality tested
- [ ] Role-based access configured
- [ ] Security settings reviewed

## 🎯 **Production Deployment**

### **Security Hardening**
- Use a dedicated LDAP service account
- Implement certificate pinning
- Configure session timeout limits
- Enable audit logging
- Apply regular security updates

### **Performance Optimization**
- Connection pooling for LDAP
- Token caching strategies
- Database optimization
- Load balancing considerations

### **Monitoring and Alerts**
- Authentication failure monitoring
- LDAP server health checks
- Performance metrics tracking
- Security incident alerting

The LDAP group-based authentication system provides enterprise-grade security while maintaining ease of use for authorized users. Only users in your designated KPI group will have access to the system, ensuring proper access control and compliance with organizational security policies.

kpi_analysis/PRODUCTION_CONFIG.md (new file, 148 lines)
@@ -0,0 +1,148 @@
# Production Configuration Guide

## CORS Origins Configuration

The KPI Analysis application now supports configurable CORS origins for production deployments.

### Quick Setup

#### Option 1: Custom Domains (Recommended)
Add to your `.env` file:
```bash
CORS_ALLOW_ORIGINS=https://kpi.yourcompany.com https://app.yourcompany.com
```

#### Option 2: Auto-Detect from Server Host
Set your server host:
```bash
HOST=kpi.yourcompany.com
PORT=443
```

### Environment Examples

#### Single Domain Production
```bash
# .env - Single domain production
HOST=kpi.mapan.co.id
PORT=443
DEBUG=false
SECRET_KEY=your-production-secret-key

# The system will automatically add https://kpi.mapan.co.id to CORS origins
# Plus development origins for testing
```

#### Multiple Domains Production
```bash
# .env - Multiple domains
HOST=0.0.0.0
PORT=8000
DEBUG=false

# Explicitly define all allowed origins
CORS_ALLOW_ORIGINS=https://kpi.mapan.co.id https://analytics.mapan.co.id https://internal.mapan.co.id
```

#### Development with Production Testing
```bash
# .env - Development with production domains
HOST=127.0.0.1
PORT=8000
DEBUG=true

# Allow both local development and production domains
CORS_ALLOW_ORIGINS=http://localhost:8000 http://127.0.0.1:8000 https://kpi.mapan.co.id
```

### Configuration Variables

| Variable | Description | Example |
|----------|-------------|---------|
| `HOST` | Server host address | `kpi.mapan.co.id` |
| `PORT` | Server port | `443` (HTTPS) or `80` (HTTP) |
| `CORS_ALLOW_ORIGINS` | Custom CORS origins (space-separated) | `https://domain1.com https://domain2.com` |

### Default Behavior

**When `CORS_ALLOW_ORIGINS` is not set:**
- Uses default localhost development origins
- Automatically adds `http://localhost:{PORT}` and `http://127.0.0.1:{PORT}`
- If `HOST` is not localhost, adds `https://{HOST}` or `http://{HOST}` based on port

**When `CORS_ALLOW_ORIGINS` is set:**
- Uses only the specified origins
- No automatic additions are made
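
The default behavior described above can be sketched as a pure function. The names here are assumptions for illustration; the real logic lives behind `settings.effective_cors_origins` in `config/settings.py`:

```python
def effective_cors_origins(host: str, port: int, cors_allow_origins: str = "") -> list:
    if cors_allow_origins:
        # Explicit setting wins: space-separated list, no automatic additions
        return cors_allow_origins.split()
    origins = [f"http://localhost:{port}", f"http://127.0.0.1:{port}"]
    if host not in ("localhost", "127.0.0.1", "0.0.0.0"):
        scheme = "https" if port == 443 else "http"  # scheme chosen from port
        origins.append(f"{scheme}://{host}")
    return origins

print(effective_cors_origins("kpi.mapan.co.id", 443))
# ['http://localhost:443', 'http://127.0.0.1:443', 'https://kpi.mapan.co.id']
print(effective_cors_origins("0.0.0.0", 8000, "https://a.example https://b.example"))
# ['https://a.example', 'https://b.example']
```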

### Security Best Practices

1. **Always specify exact domains** in production
2. **Use HTTPS** for production deployments
3. **Avoid wildcard origins** (`*`) in production
4. **Test CORS** after configuration changes

### Testing CORS Configuration

#### Check Current Configuration
```python
from config.settings import settings
print("CORS Origins:", settings.effective_cors_origins)
```

#### Test CORS with curl
```bash
# Test preflight request
curl -X OPTIONS \
  -H "Origin: https://your-domain.com" \
  -H "Access-Control-Request-Method: POST" \
  -H "Access-Control-Request-Headers: Content-Type" \
  http://localhost:8000/api/auth/login
```

#### Expected Response Headers
```
Access-Control-Allow-Origin: https://your-domain.com
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS
Access-Control-Allow-Headers: *
```

### Troubleshooting

#### CORS Error: "Access to fetch at '...' from origin '...' has been blocked"
1. Check that your domain is in `CORS_ALLOW_ORIGINS`
2. Verify the exact URL format (http vs https)
3. Ensure no extra spaces in the origin list

#### Cookie Issues in Production
1. Use `https://` URLs for production
2. Ensure `secure=True` is set for cookies
3. Check browser developer tools for detailed error messages

#### Development vs Production
```python
# Development - Auto-detected origins
if settings.debug:
    origins = settings.default_cors_origins
else:
    origins = settings.effective_cors_origins
```

### Migration from Hardcoded Origins

**Before (hardcoded):**
```python
allow_origins=[
    "http://localhost:3000",
    "http://localhost:8000",
    "http://127.0.0.1:8000",
    "http://localhost:8080"
]
```

**After (configurable):**
```python
allow_origins=settings.effective_cors_origins
```

This allows for seamless migration from development to production without code changes.

kpi_analysis/QUICK_START_GUIDE.md (new file, 120 lines)
@@ -0,0 +1,120 @@
# KPI Analysis - Quick Start Guide

## 🚀 Getting Started in 3 Steps

### Step 1: Start the Application
```bash
cd kpi_analysis
python run.py
```

You should see:
```
✅ All required packages are installed
✅ .env file found
✅ Required directories created
🗄️ Setting up database...
✅ Database initialized successfully
🚀 Starting KPI Analysis Dashboard in development mode...
📊 Dashboard: http://localhost:8000
📚 API Docs: http://localhost:8000/docs
```

### Step 2: Login
1. Open your browser: http://localhost:8000
2. You'll be redirected to the login page
3. Use the default credentials:
   - Username: `admin`
   - Password: `super`

### Step 3: Upload and Analyze
1. Click the **"Upload File"** button in the Files tab
2. Select your KPI Excel file (e.g., "KPI Manager Information Technology.xlsx")
3. Click **"Upload"**
4. Wait for processing (status will change from "Processing..." to "Processed")
5. Click **"View Analysis"** to see results
6. Click **"Report"** to download the PDF

## 📊 What You'll See

### Dashboard Overview
- **Total Files**: Number of uploaded files
- **Avg Score**: Average performance score (shows 0% until files are analyzed)
- **Achievement Rate**: KPI achievement percentage (shows 0% until files are analyzed)
- **Reports Generated**: Number of PDF reports (shows 0 until files are analyzed)

### Empty State (No Data)
When no files are uploaded, you'll see:
- Empty charts with placeholder data
- "No activity yet. Upload a KPI file to get started."
- "No files uploaded yet. Click 'Upload File' to get started."

### After Upload
Once a file is uploaded and processed:
- The file appears in the Files table with "Processed" status
- Click "View Analysis" to see:
  - Overall Score
  - Achievement Rate
  - Perspective Scores (Financial, Customer, Internal Process, Learning & Growth)
  - AI-generated Recommendations
- Click "Report" to download a professional PDF report

## 🎯 Sample File

Use the included sample file to test:
```
KPI Manager Information Technology.xlsx
```

This file contains:
- Employee KPI data
- Multiple perspectives (Financial, Customer, Internal Process, Learning & Growth)
- Monthly performance data
- Achievement status
## 🔧 Troubleshooting
|
||||
|
||||
### Issue: "Authentication failed"
|
||||
**Solution**: Check credentials in `.env` file:
|
||||
```env
|
||||
ENABLE_FALLBACK_AUTH=true
|
||||
FALLBACK_ADMIN_USERNAME=admin
|
||||
FALLBACK_ADMIN_PASSWORD=super
|
||||
```
|
||||
|
||||
### Issue: "File upload failed"
|
||||
**Solution**:
|
||||
- Check file format (must be .xlsx or .xls)
|
||||
- Check file size (max 50MB)
|
||||
- Ensure `uploads/` directory exists
|
||||
|
||||
### Issue: "Analysis not available yet"
|
||||
**Solution**:
|
||||
- Wait a few seconds for background processing
|
||||
- Refresh the page
|
||||
- Check server logs for errors
|
||||
|
||||
### Issue: Charts not showing
|
||||
**Solution**:
|
||||
- Charts show empty state when no data
|
||||
- Upload a file first
|
||||
- Wait for processing to complete
|
||||
|
||||
## 📝 Tips
|
||||
|
||||
1. **First Time Setup**: The database is created automatically on first run
|
||||
2. **No Dummy Data**: All data comes from uploaded files - dashboard starts empty
|
||||
3. **Background Processing**: Analysis runs in background, so you can continue using the app
|
||||
4. **PDF Reports**: Generated automatically after analysis completes
|
||||
5. **AI Insights**: If OpenAI API key is configured, you'll get AI-powered insights
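Tip 3 means a client has to wait for results. A minimal polling helper (a sketch, not the app's actual client code) can wrap the retry loop; the `fetch` callable here is an assumption — it should return the parsed JSON from `GET /api/analysis/{file_id}`, or `None` while the server still answers 404:

```python
import time

def poll_until_ready(fetch, timeout=60.0, interval=2.0):
    """Call fetch() until it returns a non-None result or the timeout expires.

    fetch is expected to return the parsed analysis JSON once background
    processing is done, and None while the file is still being analyzed
    (e.g. when GET /api/analysis/{file_id} responds with 404).
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch()
        if result is not None:
            return result
        time.sleep(interval)
    raise TimeoutError("analysis results were not ready in time")

# Example with a stand-in fetch that succeeds on the third attempt:
attempts = []
def fake_fetch():
    attempts.append(1)
    return {"success": True} if len(attempts) >= 3 else None

print(poll_until_ready(fake_fetch, timeout=1.0, interval=0))  # → {'success': True}
```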
## 🎉 You're Ready!

The system is now ready to:
- ✅ Accept Excel file uploads
- ✅ Parse KPI data automatically
- ✅ Generate AI-powered insights
- ✅ Create interactive charts
- ✅ Produce professional PDF reports

Start by uploading your first KPI file!

kpi_analysis/README.md (new file, 336 lines)
# KPI Analysis Dashboard

A comprehensive web application for analyzing Key Performance Indicators (KPIs) with AI-powered insights, Nextcloud integration, and professional PDF reporting.

## ✅ Status: FULLY FUNCTIONAL & COMPLETE

**Implementation Complete - All Requirements Met!**
- ✅ **No dummy data** - The database starts empty; all data comes from uploads
- ✅ **Empty state handling** - The dashboard shows appropriate messages when there is no data
- ✅ **Core function working** - Upload Excel files and run automatic analysis
- ✅ Login and dashboard are separate pages
- ✅ The dashboard is protected and requires authentication
- ✅ Fallback authentication works correctly
- ✅ Proper redirects are in place

**Quick Start:** See [QUICK_START_GUIDE.md](QUICK_START_GUIDE.md) for setup instructions.
**Implementation Details:** See [IMPLEMENTATION_SUMMARY.md](IMPLEMENTATION_SUMMARY.md) for complete details.

## Features

### 📊 **Core Functionality**
- **Excel File Processing**: Upload and parse KPI Excel files with multiple sheets
- **Multi-perspective Analysis**: Financial, Customer, Internal Process, and Learning & Growth perspectives
- **AI-Powered Insights**: OpenAI integration for intelligent recommendations
- **Interactive Dashboard**: Real-time charts and data visualization
- **PDF Report Generation**: Professional reports with embedded charts
- **Nextcloud Integration**: OAuth authentication and file synchronization
- **🔐 LDAP Group-Based Authentication**: Enterprise-grade authentication with group membership verification

### 🎯 **Key Capabilities**
- **Comparative Analysis**: Track performance trends across time periods
- **Pattern Recognition**: Identify performance patterns and anomalies
- **Actionable Recommendations**: AI-generated improvement suggestions
- **Professional Visualization**: Interactive charts using Plotly
- **Enterprise Ready**: Secure authentication and role-based access

## Technology Stack

- **Backend**: FastAPI (Python)
- **Frontend**: HTML5, CSS3, JavaScript, Bootstrap 5
- **Database**: SQLite (development) / PostgreSQL (production)
- **Authentication**: LDAP/Active Directory + Nextcloud OAuth
- **AI Integration**: OpenAI API
- **Data Processing**: Pandas, OpenPyXL
- **Visualization**: Plotly, Chart.js, Matplotlib
- **PDF Generation**: ReportLab

## Installation

### Prerequisites
- Python 3.8+
- Nextcloud server (optional, for cloud integration)
- OpenAI API key (optional, for AI features)

### Quick Start

1. **Clone the repository**
   ```bash
   git clone <repository-url>
   cd kpi_analysis
   ```

2. **Install dependencies**
   ```bash
   pip install -r requirements.txt
   ```

3. **Configure environment**
   ```bash
   cp config/.env.template .env
   # Edit .env with your configuration
   ```

4. **Run the application**
   ```bash
   python main.py
   ```

5. **Access the dashboard**
   Open your browser to `http://localhost:8000`

## Configuration

### Environment Variables

Create a `.env` file based on `.env.template`:

```env
# Application Settings
DEBUG=true
SECRET_KEY=your-super-secret-key

# Nextcloud Configuration
NEXTCLOUD_BASE_URL=https://nc.mapan.co.id
NEXTCLOUD_OAUTH_CLIENT_ID=your-oauth-client-id
NEXTCLOUD_OAUTH_CLIENT_SECRET=your-oauth-client-secret
NEXTCLOUD_KPI_FOLDER=/KPI_Files

# OpenAI Configuration
OPENAI_API_KEY=your-openai-api-key
OPENAI_MODEL=gpt-4

# LDAP Configuration
LDAP_SERVER=ldap.your-company.com
LDAP_BASE_DN=DC=your-company,DC=com
LDAP_BIND_DN=CN=service-account,OU=Service Accounts,DC=your-company,DC=com
LDAP_BIND_PASSWORD=your-password
```

### Nextcloud Setup

1. **Create an OAuth App in Nextcloud**
   - Go to Settings > Security > OAuth 2.0 clients
   - Add a new client
   - Set the redirect URI: `http://localhost:8000/auth/nextcloud/callback`
   - Copy the client ID and secret into `.env`

2. **Configure the KPI Folder**
   - Create a `/KPI_Files` folder in Nextcloud
   - Upload Excel files to this folder

### OpenAI Setup

1. **Get an API Key**
   - Visit https://platform.openai.com/api-keys
   - Create a new API key
   - Add it to the `.env` file

### LDAP Authentication Setup

1. **Create a KPI Users Group**
   - In Active Directory: create a security group named "KPI_Users"
   - In OpenLDAP: create a group entry for authorized users
   - Add the users who need access to the system

2. **Configure LDAP Settings**
   - Update the `.env` file with your LDAP server details
   - Set the group DN for authorized access
   - Configure the service account credentials

3. **Test the Configuration**
   - Call `GET /api/auth/test` to verify the settings
   - Test login with an authorized user's credentials

See [LDAP_AUTHENTICATION.md](LDAP_AUTHENTICATION.md) for detailed setup instructions.

## Usage

### 1. Upload KPI Files

- **Local Upload**: Drag and drop Excel files or use the upload button
- **Nextcloud Sync**: Connect to Nextcloud and browse files in the cloud
- **Supported Formats**: `.xlsx`, `.xls` files

### 2. Excel File Structure

The application expects Excel files with the following structure:

- **KPI Sheet**: Summary with overall scores and weights
- **Achievement Sheet**: Achievement status for each KPI
- **Detail Sheets**: Individual KPI data (F2a, F2b, B1a, etc.)
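As a sketch of how those detail sheets can be discovered programmatically — the prefix-to-perspective mapping below is an assumption inferred from the sheet codes mentioned above (F2a, F2b, B1a), not the app's actual parser logic:

```python
# Hypothetical mapping from a detail sheet's code prefix to its perspective;
# adjust to match your workbook's actual naming scheme.
PERSPECTIVE_PREFIXES = {
    "F": "Financial",
    "C": "Customer",
    "B": "Internal Process",
    "L": "Learning & Growth",
}

def group_detail_sheets(sheet_names):
    """Group detail-sheet names by perspective; skip summary sheets."""
    grouped = {}
    for name in sheet_names:
        perspective = PERSPECTIVE_PREFIXES.get(name[:1].upper())
        if perspective:  # sheets like "KPI" or "Achievement" fall through
            grouped.setdefault(perspective, []).append(name)
    return grouped

print(group_detail_sheets(["KPI", "F2a", "B1a"]))
# → {'Financial': ['F2a'], 'Internal Process': ['B1a']}

# With openpyxl (already in the stack for Excel parsing), the sheet names of
# an uploaded workbook could be grouped the same way:
# from openpyxl import load_workbook
# wb = load_workbook("KPI Manager Information Technology.xlsx", read_only=True)
# print(group_detail_sheets(wb.sheetnames))
```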
### 3. Analysis Features

- **Performance Overview**: Total scores and achievement rates
- **Perspective Analysis**: Breakdown by Financial, Customer, Internal Process, and Learning & Growth
- **Trend Analysis**: Performance over time
- **AI Insights**: Automated analysis and recommendations

### 4. Reports

- **PDF Generation**: Professional reports with charts and analysis
- **Interactive Charts**: Browser-based visualizations
- **Export Options**: Download reports and charts

## API Documentation

The application provides a REST API:

- **Health Check**: `GET /health`
- **File Management**: `POST /api/files/upload`, `GET /api/files/list`
- **Analysis**: `POST /api/analysis/run/{file_id}`, `GET /api/analysis/{file_id}`
- **Nextcloud**: `GET /api/nextcloud/files`, `POST /api/nextcloud/download/{file_id}`

Access the interactive API docs at `http://localhost:8000/docs`
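These endpoints can be exercised from a small script. This sketch uses only the standard library and assumes the JSON response shapes returned by the login route (`access_token`, bearer auth); the base URL and credentials are example values from the fallback-auth section:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # example value

def login(username, password):
    """POST /api/auth/login and return the bearer token from the response."""
    body = json.dumps({"username": username, "password": password}).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/api/auth/login",
        data=body,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]

def bearer_headers(token):
    """Authorization header expected by the protected endpoints."""
    return {"Authorization": f"Bearer {token}"}

# Usage against a running server (fallback-auth example credentials):
# token = login("admin", "super")
# req = urllib.request.Request(f"{BASE_URL}/api/files/list",
#                              headers=bearer_headers(token))
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```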
## Deployment

### Local Development

```bash
# Install dependencies
pip install -r requirements.txt

# Run with auto-reload
python main.py
```

### Production Deployment

1. **Using Gunicorn**
   ```bash
   pip install gunicorn
   gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app --bind 0.0.0.0:8000
   ```

2. **Using Docker**
   ```bash
   docker build -t kpi-analysis .
   docker run -p 8000:8000 --env-file .env kpi-analysis
   ```

3. **Server Deployment**
   - Configure a reverse proxy (nginx/Apache)
   - Set up SSL certificates
   - Configure environment variables
   - Set up the database (PostgreSQL recommended)
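For the reverse-proxy and SSL steps, a minimal nginx server block might look like the following sketch — the hostname and certificate paths are placeholders, and the upstream port matches the Gunicorn bind above:

```nginx
server {
    listen 443 ssl;
    server_name kpi.example.com;  # placeholder hostname

    ssl_certificate     /etc/ssl/certs/kpi.example.com.pem;    # placeholder
    ssl_certificate_key /etc/ssl/private/kpi.example.com.key;  # placeholder

    location / {
        proxy_pass http://127.0.0.1:8000;  # Gunicorn bind from step 1
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```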
### Environment-Specific Configurations

#### Development
- SQLite database
- Debug mode enabled
- Local file storage

#### Production
- PostgreSQL database
- Debug mode disabled
- Secure authentication
- Cloud/network file storage
- SSL/HTTPS required

## Project Structure

```
kpi_analysis/
├── main.py                  # FastAPI application entry point
├── requirements.txt         # Python dependencies
├── config/                  # Configuration files
│   ├── settings.py          # Application settings
│   ├── .env.template        # Environment template
│   └── __init__.py
├── app/                     # Application modules
│   ├── api/                 # API routes
│   │   ├── routes.py        # FastAPI routes
│   │   └── __init__.py
│   ├── core/                # Core functionality
│   │   ├── database.py      # Database operations
│   │   └── __init__.py
│   ├── models/              # Data models
│   │   ├── kpi_models.py    # KPI data structures
│   │   └── __init__.py
│   ├── services/            # Business logic
│   │   ├── nextcloud_service.py
│   │   ├── excel_parser.py
│   │   ├── analysis_engine.py
│   │   ├── pdf_generator.py
│   │   └── __init__.py
│   └── __init__.py
├── templates/               # HTML templates
│   └── dashboard.html
├── static/                  # Static files
│   ├── css/
│   │   └── dashboard.css
│   ├── js/
│   │   └── dashboard.js
│   └── images/
├── data/                    # Application data
├── uploads/                 # Uploaded files
└── reports/                 # Generated reports
```

## Troubleshooting

### Common Issues

1. **File Upload Fails**
   - Check the file format (.xlsx, .xls only)
   - Verify the file size (< 50MB)
   - Check permissions on the upload directory

2. **Nextcloud Connection Issues**
   - Verify the Nextcloud server URL
   - Check the OAuth app configuration
   - Ensure the user has access to the KPI folder

3. **OpenAI Integration Issues**
   - Verify the API key is valid
   - Check account limits
   - Monitor API usage

4. **Database Issues**
   - Check database permissions
   - Verify the database URL configuration
   - Run database migrations if needed

### Logs

- Application logs: check the console output
- Database logs: SQLite file in the `data/` directory
- Web server logs: check the server configuration

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a pull request

## License

This project is licensed under the MIT License; see the LICENSE file for details.

## Support

For support and questions:
- Create an issue in the repository
- Check the documentation
- Review the troubleshooting section

## Roadmap

### Upcoming Features
- [ ] Multi-language support
- [ ] Advanced analytics dashboard
- [ ] Email notifications
- [ ] Mobile-responsive design
- [ ] Real-time collaboration
- [ ] Advanced reporting templates
- [ ] Integration with other systems (SAP, Salesforce, etc.)

### Version History
- **v1.0.0**: Initial release with core functionality
  - Core features: Excel processing, AI analysis, Nextcloud integration
  - Future versions will include additional features and improvements
kpi_analysis/__pycache__/main.cpython-312.pyc (binary, new file)

kpi_analysis/app/__init__.py (new file, 7 lines)

"""
KPI Analysis Application Package
"""

from .api import router

__all__ = ["router"]

kpi_analysis/app/__pycache__/__init__.cpython-312.pyc (binary, new file)

kpi_analysis/app/api/__init__.py (new file, 7 lines)

"""
API module for KPI Analysis Application
"""

from .routes import router

__all__ = ["router"]

kpi_analysis/app/api/__pycache__/__init__.cpython-312.pyc (binary, new file)
kpi_analysis/app/api/__pycache__/routes.cpython-312.pyc (binary, new file)

kpi_analysis/app/api/routes.py (new file, 705 lines)
"""
|
||||
API routes for KPI Analysis Application
|
||||
"""
|
||||
|
||||
from fastapi import APIRouter, HTTPException, Depends, UploadFile, File, BackgroundTasks, status, Form, Request
|
||||
from fastapi.responses import FileResponse, JSONResponse
|
||||
from fastapi.security import OAuth2PasswordBearer
|
||||
from pydantic import BaseModel
|
||||
from typing import List, Optional
|
||||
import aiofiles
|
||||
import os
|
||||
import shutil
|
||||
from datetime import datetime
|
||||
import json
|
||||
from pathlib import Path
|
||||
import logging
|
||||
|
||||
# Import application modules
|
||||
from app.core.database import log_action
|
||||
from app.core.auth import get_current_active_user, auth_manager, UserSession
|
||||
from app.services.nextcloud_service import nextcloud_service
|
||||
from app.services.excel_parser import excel_parser
|
||||
from app.services.analysis_engine import analysis_engine
|
||||
from app.services.pdf_generator import pdf_generator
|
||||
from app.models import kpi_models
|
||||
from config.settings import settings
|
||||
|
||||
# Login request model
|
||||
class LoginRequest(BaseModel):
|
||||
username: str
|
||||
password: str
|
||||
|
||||
logger = logging.getLogger(__name__)
|
||||
|
||||
router = APIRouter()
|
||||
|
||||
# Authentication endpoints
|
||||
@router.post("/auth/login")
|
||||
async def login(
|
||||
login_request: LoginRequest,
|
||||
request: Request = None
|
||||
):
|
||||
"""Authenticate user with LDAP/Active Directory and check group membership"""
|
||||
username = None
|
||||
try:
|
||||
username = login_request.username
|
||||
password = login_request.password
|
||||
|
||||
logger.info(f"Login attempt for user: {username}")
|
||||
|
||||
if not username or not password:
|
||||
logger.warning(f"Login attempt with missing credentials")
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
|
||||
detail="Username and password are required"
|
||||
)
|
||||
|
||||
# Log incoming cookies for debugging cookie conflicts
|
||||
if request and hasattr(request, 'cookies'):
|
||||
conflicting_cookies = []
|
||||
odoo_cookie_names = ['session_id', 'frontend_lang', 'cids', 'tz']
|
||||
for cookie_name in request.cookies.keys():
|
||||
if cookie_name in odoo_cookie_names:
|
||||
conflicting_cookies.append(cookie_name)
|
||||
|
||||
if conflicting_cookies:
|
||||
logger.warning(f"Conflicting cookies detected: {conflicting_cookies}")
|
||||
logger.info(f"All incoming cookies: {list(request.cookies.keys())}")
|
||||
|
||||
from ..core.auth import auth_manager
|
||||
from config.settings import settings
|
||||
|
||||
# Log authentication configuration
|
||||
logger.info(f"Authentication config - LDAP: {bool(settings.ldap_server)}, Fallback: {settings.enable_fallback_auth}")
|
||||
|
||||
# Authenticate with LDAP/fallback
|
||||
user_data = auth_manager.authenticate_user(username, password)
|
||||
|
||||
if not user_data:
|
||||
logger.warning(f"Authentication failed for user {username}")
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_401_UNAUTHORIZED,
|
||||
detail="Invalid credentials or user not authorized",
|
||||
headers={"WWW-Authenticate": "Bearer"},
|
||||
)
|
||||
|
||||
logger.info(f"User {username} authenticated via {user_data.get('authentication_method', 'unknown')}")
|
||||
|
||||
# Create session token
|
||||
token = auth_manager.create_access_token(user_data)
|
||||
logger.info(f"Access token created for user {username}")
|
||||
|
||||
# Create response with proper headers to clear conflicting cookies
|
||||
response_data = {
|
||||
"success": True,
|
||||
"message": "Authentication successful",
|
||||
"access_token": token,
|
||||
"token_type": "bearer",
|
||||
"user": {
|
||||
"username": user_data["username"],
|
||||
"email": user_data["email"],
|
||||
"full_name": user_data["full_name"],
|
||||
"role": user_data["role"]
|
||||
}
|
||||
}
|
||||
|
||||
# Import Response for setting headers
|
||||
from fastapi.responses import JSONResponse
|
||||
|
||||
# Create response to clear conflicting cookies
|
||||
response = JSONResponse(content=response_data)
|
||||
|
||||
# Clear common conflicting cookies that might come from other applications
|
||||
conflicting_cookie_names = [
|
||||
'session_id', 'frontend_lang', '_ga', '_ga_NMT50XL57M', 'cids', 'tz'
|
||||
]
|
||||
|
||||
for cookie_name in conflicting_cookie_names:
|
||||
response.set_cookie(
|
||||
key=cookie_name,
|
||||
value="",
|
||||
max_age=0,
|
||||
expires=0,
|
||||
path="/",
|
||||
domain=None,
|
||||
secure=False,
|
||||
httponly=False,
|
||||
samesite="lax"
|
||||
)
|
||||
|
||||
logger.info(f"Login successful for user {username} - response sent")
|
||||
return response
|
||||
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Login error for user {username if username else 'unknown'}: {e}", exc_info=True)
|
||||
raise HTTPException(
|
||||
status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
|
||||
detail=f"Authentication service error: {str(e)}"
|
||||
)
|
||||
|
||||
@router.post("/auth/login-form")
async def login_form(
    username: str = Form(..., description="Username"),
    password: str = Form(..., description="Password"),
    request: Request = None
):
    """Form-based login endpoint for web forms"""
    try:
        if not username or not password:
            raise HTTPException(
                status_code=status.HTTP_422_UNPROCESSABLE_ENTITY,
                detail="Username and password are required"
            )

        # Log incoming cookies for debugging cookie conflicts
        if request and hasattr(request, 'cookies'):
            conflicting_cookies = []
            odoo_cookie_names = ['session_id', 'frontend_lang', 'cids', 'tz']
            for cookie_name in request.cookies.keys():
                if cookie_name in odoo_cookie_names:
                    conflicting_cookies.append(cookie_name)

            if conflicting_cookies:
                logger.warning(f"Form login - conflicting cookies detected: {conflicting_cookies}")

        # Authenticate with LDAP/fallback (auth_manager is imported at module level)
        user_data = auth_manager.authenticate_user(username, password)

        if not user_data:
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail="Invalid credentials or user not authorized"
            )

        # Create session token
        token = auth_manager.create_access_token(user_data)

        response_data = {
            "success": True,
            "message": "Authentication successful",
            "access_token": token,
            "token_type": "bearer",
            "user": {
                "username": user_data["username"],
                "email": user_data["email"],
                "full_name": user_data["full_name"],
                "role": user_data["role"]
            }
        }

        # Create response to clear conflicting cookies
        response = JSONResponse(content=response_data)

        # Clear common conflicting cookies that might come from other applications
        conflicting_cookie_names = [
            'session_id', 'frontend_lang', '_ga', '_ga_NMT50XL57M', 'cids', 'tz'
        ]

        for cookie_name in conflicting_cookie_names:
            response.set_cookie(
                key=cookie_name,
                value="",
                max_age=0,
                expires=0,
                path="/",
                domain=None,
                secure=False,
                httponly=False,
                samesite="lax"
            )

        logger.info(f"Form login - User {username} authenticated successfully - cookies cleared")
        return response

    except HTTPException:
        raise
    except Exception as e:
        logger.error(f"Form login error for user {username}: {e}")
        raise HTTPException(
            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
            detail="Authentication service error"
        )
@router.post("/auth/logout")
async def logout(current_user: dict = Depends(get_current_active_user)):
    """Logout user and invalidate session"""
    try:
        await UserSession.end_session(current_user["user_id"])
        return {
            "success": True,
            "message": "Logout successful"
        }
    except Exception as e:
        logger.error(f"Logout error: {e}")
        return {
            "success": True,
            "message": "Logout completed"
        }


@router.get("/auth/me")
async def get_current_user_info(current_user: dict = Depends(get_current_active_user)):
    """Get current user information"""
    return {
        "success": True,
        "user": {
            "username": current_user["username"],
            "email": current_user["email"],
            "full_name": current_user.get("full_name"),
            "role": current_user["role"],
            "authenticated_at": current_user.get("authenticated_at")
        }
    }
@router.get("/auth/test")
async def test_ldap_connection():
    """Test LDAP configuration and connection"""
    try:
        from ..services.ldap_auth_service import ldap_auth_service

        # Test connection
        connection_ok, connection_msg = ldap_auth_service.test_connection()

        # Test group access
        group_ok, group_msg = ldap_auth_service.test_group_access()

        return {
            "success": True,
            "ldap_configured": bool(settings.ldap_server),
            "fallback_auth_enabled": settings.enable_fallback_auth,
            "connection_test": {
                "success": connection_ok,
                "message": connection_msg
            },
            "group_test": {
                "success": group_ok,
                "message": group_msg,
                "group_dn": settings.ldap_kpi_group_dn
            },
            "authentication_methods": {
                "ldap": {
                    "available": bool(settings.ldap_server),
                    "configured": connection_ok and group_ok,
                    "server": settings.ldap_server,
                    "group_dn": settings.ldap_kpi_group_dn
                },
                "fallback": {
                    "enabled": settings.enable_fallback_auth,
                    "username": settings.fallback_admin_username if settings.enable_fallback_auth else None,
                    "role": settings.fallback_admin_role if settings.enable_fallback_auth else None
                }
            }
        }

    except Exception as e:
        logger.error(f"Authentication test error: {e}")
        return {
            "success": False,
            "error": str(e),
            "ldap_configured": bool(settings.ldap_server),
            "fallback_auth_enabled": settings.enable_fallback_auth,
            "authentication_methods": {
                "ldap": {"available": False, "configured": False},
                "fallback": {"enabled": settings.enable_fallback_auth}
            }
        }


@router.get("/auth/nextcloud")
async def nextcloud_auth():
    """Initiate Nextcloud OAuth authentication"""
    auth_url = nextcloud_service.get_oauth_url()
    return {"auth_url": auth_url}


@router.get("/auth/nextcloud/callback")
async def nextcloud_callback(code: str):
    """Handle Nextcloud OAuth callback"""
    # TODO: Implement Nextcloud OAuth callback
    return {"success": True, "message": "Nextcloud OAuth callback received"}
# File management endpoints
@router.post("/files/upload")
async def upload_file(
    background_tasks: BackgroundTasks,
    file: UploadFile = File(...),
    current_user: dict = Depends(get_current_active_user)
):
    """Upload KPI Excel file and trigger analysis"""

    # Validate file
    if not file.filename.endswith(tuple(settings.allowed_file_extensions)):
        raise HTTPException(status_code=400, detail="Invalid file type. Only .xlsx and .xls files are allowed.")

    # Create upload directory if it doesn't exist
    upload_dir = Path(settings.upload_directory)
    upload_dir.mkdir(parents=True, exist_ok=True)

    # Save file with timestamp
    timestamp = datetime.now().strftime('%Y%m%d_%H%M%S')
    safe_filename = f"{timestamp}_{file.filename}"
    file_path = upload_dir / safe_filename

    try:
        # Save uploaded file
        async with aiofiles.open(file_path, 'wb') as buffer:
            content = await file.read()
            await buffer.write(content)

        # Get file size
        file_size = file_path.stat().st_size

        # Save file record to database
        from app.core.database import save_uploaded_file
        file_id = await save_uploaded_file(
            filename=file.filename,
            file_path=str(file_path),
            uploaded_by=current_user["user_id"],
            file_size=file_size
        )

        # Log upload action
        await log_action(current_user["user_id"], "FILE_UPLOAD", f"Uploaded {file.filename}")

        # Trigger background analysis
        background_tasks.add_task(process_excel_file, str(file_path), current_user["user_id"], file_id)

        return {
            "success": True,
            "message": "File uploaded successfully. Analysis started in background.",
            "file_id": file_id,
            "file_path": str(file_path),
            "filename": file.filename
        }

    except Exception as e:
        logger.error(f"Error uploading file: {str(e)}")
        # Clean up file if it was created
        if file_path.exists():
            file_path.unlink()
        raise HTTPException(status_code=500, detail=f"Failed to upload file: {str(e)}")
@router.get("/files/list")
async def list_files(current_user: dict = Depends(get_current_active_user)):
    """List uploaded KPI files"""
    try:
        from app.core.database import get_uploaded_files
        files = await get_uploaded_files()

        return {
            "success": True,
            "files": files
        }
    except Exception as e:
        logger.error(f"Error listing files: {str(e)}")
        return {
            "success": True,
            "files": []
        }


@router.delete("/files/delete/{file_id}")
async def delete_file(file_id: int, current_user: dict = Depends(get_current_active_user)):
    """Delete uploaded file and its analysis results"""
    try:
        from app.core.database import delete_uploaded_file
        success = await delete_uploaded_file(file_id)

        if success:
            await log_action(current_user["user_id"], "FILE_DELETE", f"Deleted file ID {file_id}")
            return {
                "success": True,
                "message": "File deleted successfully"
            }
        else:
            raise HTTPException(status_code=404, detail="File not found")

    except HTTPException:
        raise
    except Exception as e:
        logger.error(f"Error deleting file: {str(e)}")
        raise HTTPException(status_code=500, detail=f"Failed to delete file: {str(e)}")
# Nextcloud integration endpoints
@router.get("/nextcloud/files")
async def list_nextcloud_files(
    folder_path: str = settings.nextcloud_kpi_folder,
    current_user: dict = Depends(get_current_active_user)
):
    """List files from Nextcloud KPI folder"""
    try:
        files = await nextcloud_service.list_files(folder_path)
        return {"files": files}
    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to list Nextcloud files: {str(e)}")


@router.post("/nextcloud/download/{file_id}")
async def download_nextcloud_file(
    file_id: str,
    background_tasks: BackgroundTasks,
    current_user: dict = Depends(get_current_active_user)
):
    """Download file from Nextcloud"""
    try:
        # Download file (assumed async, like list_files above)
        file_content = await nextcloud_service.download_file(file_id)

        # Save locally
        filename = f"downloaded_{file_id}_{datetime.now().strftime('%Y%m%d_%H%M%S')}.xlsx"
        file_path = Path(settings.upload_directory) / filename

        async with aiofiles.open(file_path, 'wb') as f:
            await f.write(file_content)

        # Schedule background processing
        background_tasks.add_task(process_excel_file, str(file_path), current_user["user_id"], file_id)

        await log_action(current_user["user_id"], "NEXTCLOUD_DOWNLOAD", f"Downloaded file {file_id}")

        return {
            "success": True,
            "message": "File downloaded and queued for processing",
            "file_path": str(file_path),
            "filename": filename
        }

    except Exception as e:
        raise HTTPException(status_code=500, detail=f"Failed to download file: {str(e)}")
# Analysis endpoints
@router.post("/analysis/run/{file_id}")
async def run_analysis(
    file_id: int,
    background_tasks: BackgroundTasks,
    current_user: dict = Depends(get_current_active_user)
):
    """Run KPI analysis on uploaded file"""
    try:
        # Get file path from database (settings and Path are imported at
        # module level)
        import aiosqlite

        db_path = Path(settings.database_url.replace("sqlite:///", ""))
        async with aiosqlite.connect(db_path) as db:
            db.row_factory = aiosqlite.Row
            cursor = await db.execute("SELECT file_path FROM kpi_files WHERE id = ?", (file_id,))
            row = await cursor.fetchone()

        if not row:
            raise HTTPException(status_code=404, detail="File not found")

        file_path = row['file_path']

        background_tasks.add_task(process_excel_file, file_path, current_user["user_id"], file_id)
        return {"success": True, "message": "Analysis started in background"}

    except HTTPException:
        raise
    except Exception as e:
        logger.error(f"Error starting analysis: {str(e)}")
        raise HTTPException(status_code=500, detail=f"Failed to start analysis: {str(e)}")
@router.get("/analysis/{file_id}")
async def get_analysis_results_endpoint(file_id: int, current_user: dict = Depends(get_current_active_user)):
    """Get analysis results for a file"""
    try:
        from app.core.database import get_analysis_results
        results = await get_analysis_results(file_id)

        if not results:
            raise HTTPException(status_code=404, detail="Analysis results not found. File may not be processed yet.")

        return {
            "success": True,
            "file_id": file_id,
            "total_score": results['total_score'],
            "perspective_scores": results['perspective_scores'],
            "achievements": results['achievements'],
            "recommendations": results['recommendations'],
            "charts_generated": True,
            "report_generated": bool(results['report_path'])
        }

    except HTTPException:
        raise
    except Exception as e:
        logger.error(f"Error retrieving analysis results: {str(e)}")
        raise HTTPException(status_code=500, detail=f"Failed to retrieve analysis results: {str(e)}")


@router.get("/analysis/{file_id}/charts")
async def get_analysis_charts(file_id: int, current_user: dict = Depends(get_current_active_user)):
    """Get interactive charts data"""
    try:
        from app.core.database import get_analysis_results
        results = await get_analysis_results(file_id)

        if not results:
            return {"charts": []}

        # Get perspective scores
        perspective_scores = results.get('perspective_scores', {})

        if not perspective_scores:
            return {"charts": []}

        # Create chart data from actual analysis results
        charts = []

        # Chart 1: Perspective scores
        if perspective_scores:
            charts.append({
                "type": "score_breakdown",
                "title": "KPI Score Breakdown by Perspective",
                "data": perspective_scores
            })

        # Chart 2: Achievement status
        achievements = results.get('achievements', {})
        if achievements:
            charts.append({
                "type": "achievement_status",
                "title": "KPI Achievement Status",
                "data": achievements
            })

        return {"charts": charts}

    except Exception as e:
        logger.error(f"Error getting charts: {str(e)}")
        return {"charts": []}


@router.get("/analysis/{file_id}/report")
async def download_report(file_id: int, current_user: dict = Depends(get_current_active_user)):
|
||||
"""Download PDF report"""
|
||||
try:
|
||||
from app.core.database import get_analysis_results
|
||||
results = await get_analysis_results(file_id)
|
||||
|
||||
if not results:
|
||||
raise HTTPException(status_code=404, detail="Analysis results not found")
|
||||
|
||||
report_path = results.get('report_path')
|
||||
|
||||
if not report_path or not Path(report_path).exists():
|
||||
raise HTTPException(status_code=404, detail="PDF report not found. It may not have been generated yet.")
|
||||
|
||||
# Get filename from path
|
||||
filename = Path(report_path).name
|
||||
|
||||
return FileResponse(
|
||||
path=str(report_path),
|
||||
filename=filename,
|
||||
media_type="application/pdf"
|
||||
)
|
||||
|
||||
except HTTPException:
|
||||
raise
|
||||
except Exception as e:
|
||||
logger.error(f"Error downloading report: {str(e)}")
|
||||
raise HTTPException(status_code=500, detail=f"Failed to download report: {str(e)}")
|
||||
|
||||
# Background task functions
async def process_excel_file(file_path: str, user_id: int, file_id: int):
    """Background task to process Excel file"""
    try:
        logger.info(f"Starting analysis for file: {file_path}")

        # Verify file exists
        if not os.path.exists(file_path):
            raise FileNotFoundError(f"File not found: {file_path}")

        # Parse Excel file
        kpi_data = await excel_parser.parse_excel_file(file_path)
        logger.info(f"Excel file parsed successfully: {len(kpi_data.kpi_sheets)} KPI sheets found")

        # Validate parsed data
        if not kpi_data.kpi_sheets:
            logger.warning(f"No KPI sheets found in file {file_path}")

        # Run analysis
        analysis_results = await analysis_engine.analyze_kpi_data(kpi_data)
        logger.info(f"Analysis completed with score: {analysis_results.total_score}")

        # Generate charts
        charts = await analysis_engine.generate_charts(kpi_data, analysis_results)
        logger.info(f"Generated {len(charts)} charts")

        # Generate PDF report
        try:
            report_path = await pdf_generator.generate_report(kpi_data, analysis_results)
            if report_path:
                logger.info(f"PDF report generated: {report_path}")
            else:
                logger.warning("PDF report generation returned empty path")
                report_path = ""
        except Exception as pdf_error:
            logger.error(f"PDF generation failed: {str(pdf_error)}", exc_info=True)
            report_path = ""

        # Save analysis results to database
        from app.core.database import save_analysis_results
        await save_analysis_results(
            file_id=file_id,
            total_score=analysis_results.total_score,
            perspective_scores=analysis_results.perspective_scores,
            achievements=analysis_results.achievements,
            recommendations=analysis_results.recommendations,
            report_path=report_path
        )
        logger.info(f"Analysis results saved to database for file ID {file_id}")

        # Mark file as processed
        from app.core.database import mark_file_processed
        await mark_file_processed(file_id)
        logger.info(f"File ID {file_id} marked as processed")

        await log_action(user_id, "ANALYSIS_COMPLETE", f"Analysis completed for file ID {file_id}")
        logger.info(f"Analysis workflow completed successfully for file ID {file_id}")

    except Exception as e:
        logger.error(f"Analysis failed for {file_path}: {str(e)}", exc_info=True)
        await log_action(user_id, "ANALYSIS_ERROR", f"Analysis failed: {str(e)}")

        # Try to mark file as processed even if analysis failed
        try:
            from app.core.database import mark_file_processed
            await mark_file_processed(file_id)
        except Exception:  # avoid bare except: don't swallow SystemExit/KeyboardInterrupt
            pass

# Utility endpoints
@router.get("/health")
async def health_check():
    """Health check endpoint"""
    return {
        "status": "healthy",
        "service": "KPI Analysis API",
        "timestamp": datetime.now().isoformat(),
        "nextcloud_configured": bool(settings.nextcloud_oauth_client_id),
        "openai_configured": bool(settings.openai_api_key)
    }

@router.get("/config/status")
async def config_status():
    """Check configuration status"""
    return {
        "nextcloud": {
            "configured": bool(settings.nextcloud_oauth_client_id),
            "base_url": settings.nextcloud_base_url,
            "folder": settings.nextcloud_kpi_folder
        },
        "openai": {
            "configured": bool(settings.openai_api_key),
            "model": settings.openai_model
        },
        "ldap": {
            "configured": bool(settings.ldap_server),
            "server": settings.ldap_server
        }
    }
14
kpi_analysis/app/core/__init__.py
Normal file
@@ -0,0 +1,14 @@
"""
Core application modules
"""

from .database import init_db, log_action, create_user, get_user_by_username, create_session, validate_session

__all__ = [
    "init_db",
    "log_action",
    "create_user",
    "get_user_by_username",
    "create_session",
    "validate_session"
]
BIN
kpi_analysis/app/core/__pycache__/__init__.cpython-312.pyc
Normal file
BIN
kpi_analysis/app/core/__pycache__/auth.cpython-312.pyc
Normal file
BIN
kpi_analysis/app/core/__pycache__/database.cpython-312.pyc
Normal file
223
kpi_analysis/app/core/auth.py
Normal file
@@ -0,0 +1,223 @@
"""
Authentication middleware and utilities
Handles LDAP authentication and JWT token management
"""

import jwt
from datetime import datetime, timedelta
from typing import Optional, Dict, Any
from fastapi import HTTPException, status, Depends
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
import logging

from config.settings import settings
from ..services.ldap_auth_service import ldap_auth_service

logger = logging.getLogger(__name__)

# Security scheme for JWT tokens
security = HTTPBearer()

class AuthenticationError(Exception):
    """Authentication exception"""
    pass

class AuthorizationError(Exception):
    """Authorization exception"""
    pass

class AuthManager:
    """Authentication and authorization manager"""

    def __init__(self):
        self.secret_key = settings.secret_key
        self.algorithm = "HS256"
        self.token_expire_hours = settings.session_timeout_minutes / 60

    def create_access_token(self, user_data: Dict[str, Any]) -> str:
        """Create JWT access token"""
        try:
            expire = datetime.utcnow() + timedelta(hours=self.token_expire_hours)

            to_encode = {
                "user_id": user_data.get("user_id"),
                "username": user_data.get("username"),
                "email": user_data.get("email"),
                "role": user_data.get("role", "user"),
                "exp": expire,
                "iat": datetime.utcnow(),
                "sub": user_data.get("username")
            }

            encoded_jwt = jwt.encode(to_encode, self.secret_key, algorithm=self.algorithm)
            return encoded_jwt

        except Exception as e:
            logger.error(f"Token creation failed: {e}")
            raise AuthenticationError("Failed to create access token")

    def verify_token(self, token: str) -> Dict[str, Any]:
        """Verify JWT token"""
        try:
            payload = jwt.decode(token, self.secret_key, algorithms=[self.algorithm])
            return payload

        except jwt.ExpiredSignatureError:
            raise AuthenticationError("Token has expired")
        except jwt.InvalidTokenError:  # PyJWT has no jwt.JWTError; InvalidTokenError is its base exception
            raise AuthenticationError("Invalid token")

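The create/verify pair above delegates to PyJWT. As a minimal stdlib-only sketch of what HS256 signing and verification do under the hood (illustrative secret and claims, not the application's real settings or error classes):

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"example-secret"  # stand-in for settings.secret_key

def b64url(data: bytes) -> str:
    # JWT uses unpadded URL-safe base64
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def encode(payload: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def decode(token: str) -> dict:
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("Invalid token")      # cf. jwt.InvalidTokenError
    padded = body + "=" * (-len(body) % 4)
    payload = json.loads(base64.urlsafe_b64decode(padded))
    if payload.get("exp", float("inf")) < time.time():
        raise ValueError("Token has expired")  # cf. jwt.ExpiredSignatureError
    return payload

claims = decode(encode({"sub": "alice", "exp": time.time() + 3600}))
```

Note the constant-time `hmac.compare_digest` for the signature check; PyJWT does the same internally before it ever looks at the claims.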
    def authenticate_user(self, username: str, password: str) -> Optional[Dict[str, Any]]:
        """Authenticate user with LDAP and verify group membership"""
        try:
            # Check fallback authentication first if enabled (for development/testing)
            if settings.enable_fallback_auth and self._check_fallback_auth(username, password):
                user_data = {
                    "id": settings.fallback_admin_username,
                    "user_id": settings.fallback_admin_username,
                    "username": settings.fallback_admin_username,
                    "email": settings.fallback_admin_email,
                    "full_name": "System Administrator",
                    "role": settings.fallback_admin_role,
                    "authentication_method": "fallback",
                    "authenticated_at": datetime.utcnow().isoformat()
                }

                logger.info(f"User {username} authenticated successfully via fallback method")
                return user_data

            # Try LDAP authentication if configured
            if settings.ldap_server:
                try:
                    # Authenticate with LDAP and check group membership
                    success, user_dn, user_info = ldap_auth_service.authenticate_user(username, password)

                    if success and user_info:
                        # Create user data for token
                        user_identifier = user_info.get("username") or user_info.get("email") or user_dn
                        user_data = {
                            "id": user_identifier,
                            "user_id": user_identifier,
                            "username": user_info["username"],
                            "email": user_info["email"],
                            "full_name": user_info["full_name"],
                            "ldap_dn": user_dn,
                            "role": self._determine_user_role(user_info),
                            "authentication_method": "ldap",
                            "authenticated_at": datetime.utcnow().isoformat()
                        }

                        logger.info(f"User {username} authenticated successfully via LDAP")
                        return user_data
                except Exception as ldap_error:
                    logger.warning(f"LDAP authentication failed for {username}: {ldap_error}")
                    # Continue to fallback if enabled

            logger.warning(f"Authentication failed for user {username} - no valid authentication method")
            return None

        except Exception as e:
            logger.error(f"Authentication error for user {username}: {e}")
            return None

    def _check_fallback_auth(self, username: str, password: str) -> bool:
        """Check fallback authentication credentials"""
        return (username == settings.fallback_admin_username and
                password == settings.fallback_admin_password)

    def _determine_user_role(self, user_info: Dict[str, Any]) -> str:
        """Determine user role based on LDAP groups or other criteria"""
        # Default role for all authenticated users
        return "user"

    def get_current_user(self, credentials: HTTPAuthorizationCredentials = Depends(security)) -> Dict[str, Any]:
        """Get current authenticated user from JWT token"""
        try:
            payload = self.verify_token(credentials.credentials)
            return payload

        except AuthenticationError as e:
            logger.warning(f"Token verification failed: {e}")
            raise HTTPException(
                status_code=status.HTTP_401_UNAUTHORIZED,
                detail="Invalid authentication credentials",
                headers={"WWW-Authenticate": "Bearer"},
            )

    def get_current_active_user(self, current_user: Dict[str, Any] = Depends(get_current_user)) -> Dict[str, Any]:
        """Get current active user (with additional checks if needed)"""
        # Add any additional user validation here
        # For now, just return the current user
        return current_user

# Global auth manager instance
auth_manager = AuthManager()

# Dependency for authentication
async def get_current_user(credentials: HTTPAuthorizationCredentials = Depends(security)) -> Dict[str, Any]:
    """Dependency to get current authenticated user"""
    return auth_manager.get_current_user(credentials)

async def get_current_active_user(current_user: Dict[str, Any] = Depends(get_current_user)) -> Dict[str, Any]:
    """Dependency to get current active user"""
    return auth_manager.get_current_active_user(current_user)

class UserSession:
    """User session management"""

    @staticmethod
    async def create_session(user_data: Dict[str, Any]) -> str:
        """Create user session and return token"""
        try:
            token = auth_manager.create_access_token(user_data)
            return token

        except Exception as e:
            logger.error(f"Session creation failed: {e}")
            raise AuthenticationError("Failed to create user session")

    @staticmethod
    async def validate_session(token: str) -> Dict[str, Any]:
        """Validate user session"""
        try:
            payload = auth_manager.verify_token(token)
            return payload

        except Exception as e:
            logger.warning(f"Session validation failed: {e}")
            raise AuthorizationError("Invalid session")

    @staticmethod
    async def end_session(user_id: int):
        """End user session (implement session storage cleanup if needed)"""
        # In a more complex setup, you might store sessions in Redis or database
        # For now, JWT tokens handle expiration automatically
        logger.info(f"Session ended for user {user_id}")

# Utility functions
def require_authentication():
    """Decorator to require authentication for endpoints"""
    def decorator(func):
        async def wrapper(*args, **kwargs):
            # The FastAPI dependency system will handle authentication
            # This is just a placeholder for documentation purposes
            return await func(*args, **kwargs)
        return wrapper
    return decorator

def check_user_permissions(current_user: Dict[str, Any], required_role: str = None) -> bool:
    """Check if user has required permissions"""
    if required_role and current_user.get("role") != required_role:
        logger.warning(f"Access denied for user {current_user.get('username')} - insufficient permissions")
        return False

    return True

def get_user_display_name(user_data: Dict[str, Any]) -> str:
    """Get display name for user"""
    if user_data.get("full_name"):
        return user_data["full_name"]
    elif user_data.get("email"):
        return user_data["email"]
    else:
        return user_data.get("username", "Unknown User")
286
kpi_analysis/app/core/database.py
Normal file
@@ -0,0 +1,286 @@
"""
Database initialization and management
"""

import sqlite3
import aiosqlite
from pathlib import Path
from typing import Optional
from datetime import datetime
import json
import os

from config.settings import settings

async def init_db():
    """Initialize the database with required tables"""
    db_path = Path(settings.database_url.replace("sqlite:///", ""))
    db_path.parent.mkdir(parents=True, exist_ok=True)

    async with aiosqlite.connect(db_path) as db:
        await db.execute("""
            CREATE TABLE IF NOT EXISTS users (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                username TEXT UNIQUE NOT NULL,
                email TEXT UNIQUE NOT NULL,
                role TEXT DEFAULT 'user',
                ldap_dn TEXT,
                is_active BOOLEAN DEFAULT 1,
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                last_login TIMESTAMP
            )
        """)

        await db.execute("""
            CREATE TABLE IF NOT EXISTS kpi_files (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                filename TEXT NOT NULL,
                file_path TEXT NOT NULL,
                uploaded_by INTEGER,
                upload_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                file_size INTEGER,
                file_hash TEXT,
                processed BOOLEAN DEFAULT 0,
                nextcloud_file_id TEXT,
                FOREIGN KEY (uploaded_by) REFERENCES users (id)
            )
        """)

        await db.execute("""
            CREATE TABLE IF NOT EXISTS kpi_analysis_results (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                file_id INTEGER,
                analysis_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                total_score REAL,
                perspective_scores TEXT,  -- JSON: {financial: score, customer: score, ...}
                achievement_status TEXT,  -- JSON: achieve/not achieve for each KPI
                recommendations TEXT,     -- JSON array of recommendations
                report_path TEXT,
                FOREIGN KEY (file_id) REFERENCES kpi_files (id)
            )
        """)

        await db.execute("""
            CREATE TABLE IF NOT EXISTS kpi_data_cache (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                file_id INTEGER,
                sheet_name TEXT,
                data_cache TEXT,  -- JSON serialized pandas DataFrame
                cache_date TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                FOREIGN KEY (file_id) REFERENCES kpi_files (id)
            )
        """)

        await db.execute("""
            CREATE TABLE IF NOT EXISTS user_sessions (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                user_id INTEGER,
                session_token TEXT UNIQUE NOT NULL,
                expires_at TIMESTAMP,
                created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                FOREIGN KEY (user_id) REFERENCES users (id)
            )
        """)

        await db.execute("""
            CREATE TABLE IF NOT EXISTS application_logs (
                id INTEGER PRIMARY KEY AUTOINCREMENT,
                user_id INTEGER,
                action TEXT NOT NULL,
                details TEXT,
                timestamp TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
                ip_address TEXT,
                FOREIGN KEY (user_id) REFERENCES users (id)
            )
        """)

        await db.commit()
        print("✅ Database initialized successfully")

async def log_action(user_id: Optional[int], action: str, details: str = "", ip_address: str = ""):
    """Log user actions for audit trail"""
    async with aiosqlite.connect(Path(settings.database_url.replace("sqlite:///", ""))) as db:
        await db.execute("""
            INSERT INTO application_logs (user_id, action, details, ip_address)
            VALUES (?, ?, ?, ?)
        """, (user_id, action, details, ip_address))
        await db.commit()

async def create_user(username: str, email: str, role: str = "user", ldap_dn: str = None) -> int:
    """Create a new user"""
    async with aiosqlite.connect(Path(settings.database_url.replace("sqlite:///", ""))) as db:
        cursor = await db.execute("""
            INSERT INTO users (username, email, role, ldap_dn)
            VALUES (?, ?, ?, ?)
        """, (username, email, role, ldap_dn))
        await db.commit()
        return cursor.lastrowid

async def get_user_by_username(username: str):
    """Get user by username"""
    async with aiosqlite.connect(Path(settings.database_url.replace("sqlite:///", ""))) as db:
        db.row_factory = aiosqlite.Row
        cursor = await db.execute("""
            SELECT * FROM users WHERE username = ? AND is_active = 1
        """, (username,))
        return await cursor.fetchone()

async def create_session(user_id: int, expires_hours: int = 24) -> str:
    """Create a new user session"""
    import secrets

    session_token = secrets.token_urlsafe(32)
    expires_at = datetime.now().timestamp() + (expires_hours * 3600)

    async with aiosqlite.connect(Path(settings.database_url.replace("sqlite:///", ""))) as db:
        await db.execute("""
            INSERT INTO user_sessions (user_id, session_token, expires_at)
            VALUES (?, ?, ?)
        """, (user_id, session_token, expires_at))
        await db.commit()

    return session_token

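The token/expiry scheme used by `create_session` and checked by `validate_session` can be sketched in isolation with the stdlib (helper names here are illustrative, not part of the module):

```python
import secrets
import time

def new_session(expires_hours: int = 24) -> tuple:
    # secrets.token_urlsafe(32) yields ~43 chars of URL-safe randomness,
    # matching the UNIQUE session_token column above
    token = secrets.token_urlsafe(32)
    expires_at = time.time() + expires_hours * 3600
    return token, expires_at

def is_valid(expires_at: float) -> bool:
    # mirrors the `s.expires_at > ?` comparison in validate_session
    return expires_at > time.time()

token, expires_at = new_session(1)
```

Storing the expiry as a Unix timestamp (a float) keeps the SQL comparison a plain numeric `>`, which is why `validate_session` binds `datetime.now().timestamp()` rather than a datetime string.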
async def validate_session(session_token: str):
    """Validate user session"""
    async with aiosqlite.connect(Path(settings.database_url.replace("sqlite:///", ""))) as db:
        db.row_factory = aiosqlite.Row
        cursor = await db.execute("""
            SELECT s.*, u.username, u.email, u.role
            FROM user_sessions s
            JOIN users u ON s.user_id = u.id
            WHERE s.session_token = ? AND s.expires_at > ? AND u.is_active = 1
        """, (session_token, datetime.now().timestamp()))
        return await cursor.fetchone()

async def save_uploaded_file(filename: str, file_path: str, uploaded_by: int, file_size: int) -> int:
    """Save uploaded file record to database"""
    async with aiosqlite.connect(Path(settings.database_url.replace("sqlite:///", ""))) as db:
        cursor = await db.execute("""
            INSERT INTO kpi_files (filename, file_path, uploaded_by, file_size, processed)
            VALUES (?, ?, ?, ?, 0)
        """, (filename, file_path, uploaded_by, file_size))
        await db.commit()
        return cursor.lastrowid

async def get_uploaded_files():
    """Get list of uploaded files"""
    async with aiosqlite.connect(Path(settings.database_url.replace("sqlite:///", ""))) as db:
        db.row_factory = aiosqlite.Row
        cursor = await db.execute("""
            SELECT
                id,
                filename,
                upload_date,
                file_size,
                processed
            FROM kpi_files
            ORDER BY upload_date DESC
        """)
        rows = await cursor.fetchall()

        files = []
        for row in rows:
            # Format file size
            size_mb = row['file_size'] / (1024 * 1024) if row['file_size'] else 0

            files.append({
                "id": row['id'],
                "filename": row['filename'],
                "upload_date": row['upload_date'],
                "size": f"{size_mb:.2f} MB",
                "processed": bool(row['processed'])
            })

        return files

async def mark_file_processed(file_id: int):
    """Mark file as processed"""
    async with aiosqlite.connect(Path(settings.database_url.replace("sqlite:///", ""))) as db:
        await db.execute("""
            UPDATE kpi_files
            SET processed = 1
            WHERE id = ?
        """, (file_id,))
        await db.commit()

async def save_analysis_results(file_id: int, total_score: float, perspective_scores: dict,
                                achievements: dict, recommendations: list, report_path: str = None):
    """Save analysis results to database"""
    async with aiosqlite.connect(Path(settings.database_url.replace("sqlite:///", ""))) as db:
        # Convert dict/list to JSON strings
        perspective_scores_json = json.dumps({k.value if hasattr(k, 'value') else k: v for k, v in perspective_scores.items()})
        achievements_json = json.dumps(achievements)
        recommendations_json = json.dumps(recommendations)

        await db.execute("""
            INSERT INTO kpi_analysis_results
            (file_id, total_score, perspective_scores, achievement_status, recommendations, report_path)
            VALUES (?, ?, ?, ?, ?, ?)
        """, (file_id, total_score, perspective_scores_json, achievements_json, recommendations_json, report_path))
        await db.commit()

async def get_analysis_results(file_id: int):
    """Get analysis results for a file"""
    async with aiosqlite.connect(Path(settings.database_url.replace("sqlite:///", ""))) as db:
        db.row_factory = aiosqlite.Row
        cursor = await db.execute("""
            SELECT * FROM kpi_analysis_results
            WHERE file_id = ?
            ORDER BY analysis_date DESC
            LIMIT 1
        """, (file_id,))
        row = await cursor.fetchone()

        if row:
            return {
                "id": row['id'],
                "file_id": row['file_id'],
                "analysis_date": row['analysis_date'],
                "total_score": row['total_score'],
                "perspective_scores": json.loads(row['perspective_scores']) if row['perspective_scores'] else {},
                "achievements": json.loads(row['achievement_status']) if row['achievement_status'] else {},
                "recommendations": json.loads(row['recommendations']) if row['recommendations'] else [],
                "report_path": row['report_path']
            }

        return None

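`save_analysis_results` and `get_analysis_results` round-trip dicts through JSON TEXT columns. A minimal synchronous sketch of that pattern with stdlib `sqlite3` (aiosqlite mirrors this API with `await`; table and values here are illustrative):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # same dict-like access as aiosqlite.Row
conn.execute("""
    CREATE TABLE kpi_analysis_results (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        file_id INTEGER,
        total_score REAL,
        perspective_scores TEXT  -- JSON stored as text
    )
""")

# Serialize on write, as save_analysis_results does
scores = {"Financial": 82.5, "Customer": 74.0}
conn.execute(
    "INSERT INTO kpi_analysis_results (file_id, total_score, perspective_scores) "
    "VALUES (?, ?, ?)",
    (1, 78.3, json.dumps(scores)),
)

# Parse on read, guarding against NULL, as get_analysis_results does
row = conn.execute(
    "SELECT * FROM kpi_analysis_results WHERE file_id = ? ORDER BY id DESC LIMIT 1",
    (1,),
).fetchone()
restored = json.loads(row["perspective_scores"]) if row["perspective_scores"] else {}
```

Because the column is plain TEXT, the `json.loads(...) if ... else {}` guards are what keep a NULL or empty column from crashing the read path.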
async def delete_uploaded_file(file_id: int) -> bool:
    """Delete uploaded file and its related data"""
    try:
        async with aiosqlite.connect(Path(settings.database_url.replace("sqlite:///", ""))) as db:
            db.row_factory = aiosqlite.Row

            # Get file path before deleting
            cursor = await db.execute("SELECT file_path FROM kpi_files WHERE id = ?", (file_id,))
            row = await cursor.fetchone()

            if not row:
                return False

            file_path = row['file_path']

            # Delete analysis results
            await db.execute("DELETE FROM kpi_analysis_results WHERE file_id = ?", (file_id,))

            # Delete data cache
            await db.execute("DELETE FROM kpi_data_cache WHERE file_id = ?", (file_id,))

            # Delete file record
            await db.execute("DELETE FROM kpi_files WHERE id = ?", (file_id,))

            await db.commit()

        # Delete physical file if it exists
        if file_path and os.path.exists(file_path):
            try:
                os.remove(file_path)
            except Exception as e:
                print(f"Warning: Could not delete physical file {file_path}: {e}")

        return True

    except Exception as e:
        print(f"Error deleting file: {e}")
        return False
26
kpi_analysis/app/models/__init__.py
Normal file
@@ -0,0 +1,26 @@
"""
Data models for KPI Analysis Application
"""

from .kpi_models import *

__all__ = [
    "PolarizationType",
    "KpiCategory",
    "KpiStatus",
    "KPIData",
    "KPIPeriodData",
    "KPISheet",
    "KPISummary",
    "AchievementItem",
    "AchievementSheet",
    "KPIFile",
    "AnalysisResult",
    "ChartData",
    "AnalysisRequest",
    "AnalysisResponse",
    "UserSession",
    "NextcloudFile",
    "ConfigStatus",
    "ApiResponse"
]
BIN
kpi_analysis/app/models/__pycache__/__init__.cpython-312.pyc
Normal file
BIN
kpi_analysis/app/models/__pycache__/kpi_models.cpython-312.pyc
Normal file
172
kpi_analysis/app/models/kpi_models.py
Normal file
@@ -0,0 +1,172 @@
"""
Data models for KPI Analysis
"""

from pydantic import BaseModel, Field
from typing import List, Dict, Optional, Any
from datetime import datetime
from enum import Enum

class PolarizationType(str, Enum):
    MINIMAL = "Minimal"
    MAKSIMAL = "Maksimal"

class KpiCategory(str, Enum):
    FINANCIAL = "Financial"
    CUSTOMER = "Customer"
    INTERNAL_BUSINESS_PROCESS = "Internal Business Process"
    LEARNING_GROWTH = "Learning & Growth"

class KpiStatus(str, Enum):
    ACHIEVE = "Achieve"
    NOT_ACHIEVE = "Not Achieve"
    IN_PROGRESS = "In Progress"
    NO_DATA = "No Data"

class KPIData(BaseModel):
    """Individual KPI data point"""
    code: str
    name: str
    formula: Optional[str]
    period: str
    verification: Optional[str]
    data_source: Optional[str]
    polarization: PolarizationType
    uom: Optional[str] = ""  # Unit of Measurement
    target_value: Optional[float]
    threshold_min: Optional[float]
    threshold_max: Optional[float]
    target_score: Optional[float]
    weight: Optional[float]
    actual_value: Optional[float]
    actual_score: Optional[float]
    total_score: Optional[float]

class KPIPeriodData(BaseModel):
|
||||
"""Data for a specific time period"""
|
||||
period: str
|
||||
realization: Optional[float]
|
||||
target: Optional[float]
|
||||
threshold_min: Optional[float]
|
||||
threshold_max: Optional[float]
|
||||
score: Optional[float]
|
||||
status: str
|
||||
notes: Optional[str] = ""
|
||||
|
||||
class KPISheet(BaseModel):
|
||||
"""Complete KPI sheet data"""
|
||||
name: str
|
||||
category: KpiCategory
|
||||
code: str
|
||||
polarization: PolarizationType
|
||||
period: str
|
||||
unit: Optional[str]
|
||||
target_value: Optional[float]
|
||||
threshold_min: Optional[float]
|
||||
threshold_max: Optional[float]
|
||||
period_data: List[KPIPeriodData]
|
||||
|
||||
class KPISummary(BaseModel):
|
||||
"""KPI summary from main sheet"""
|
||||
job_title: str
|
||||
name: str
|
||||
position: str
|
||||
supervisor_name: str
|
||||
supervisor_position: str
|
||||
join_date: Optional[str]
|
||||
performance_period: str
|
||||
total_score: float
|
||||
total_weight: float
|
||||
final_score: float
|
||||
|
||||
class AchievementItem(BaseModel):
|
||||
"""Achievement status for individual KPI"""
|
||||
code: str
|
||||
indicator: str
|
||||
status: KpiStatus
|
||||
description: Optional[str] = ""
|
||||
|
||||
class AchievementSheet(BaseModel):
|
||||
"""Achievement summary sheet"""
|
||||
items: List[AchievementItem]
|
||||
|
||||
class KPIFile(BaseModel):
|
||||
"""Complete KPI file data"""
|
||||
filename: str
|
||||
upload_date: datetime
|
||||
file_path: str
|
||||
summary: KPISummary
|
||||
achievements: AchievementSheet
|
||||
kpi_sheets: List[KPISheet]
|
||||
perspective_scores: Dict[KpiCategory, float]
|
||||
achievement_rate: float
|
||||
|
||||
class AnalysisResult(BaseModel):
|
||||
"""Analysis results"""
|
||||
file_id: int
|
||||
total_score: float
|
||||
perspective_scores: Dict[KpiCategory, float]
|
||||
achievements: Dict[str, Any]
|
||||
recommendations: List[str]
|
||||
insights: List[str]
|
||||
trends: Dict[str, Any]
|
||||
report_path: Optional[str] = None
|
||||
charts_generated: bool = False
|
||||
|
||||
class ChartData(BaseModel):
|
||||
"""Chart data structure"""
|
||||
chart_type: str
|
||||
title: str
|
||||
data: Dict[str, Any]
|
||||
config: Dict[str, Any] = {}
|
||||
|
||||
class AnalysisRequest(BaseModel):
|
||||
"""Request model for analysis"""
|
||||
file_id: int
|
||||
analysis_type: str = "full" # full, quick, comparative
|
||||
include_recommendations: bool = True
|
||||
include_charts: bool = True
|
||||
include_pdf: bool = True
|
||||
|
||||
class AnalysisResponse(BaseModel):
|
||||
"""Response model for analysis"""
|
||||
success: bool
|
||||
analysis_id: int
|
||||
message: str
|
||||
results: Optional[AnalysisResult] = None
|
||||
charts: Optional[List[ChartData]] = None
|
||||
report_url: Optional[str] = None
|
||||
|
||||
class UserSession(BaseModel):
|
||||
"""User session data"""
|
||||
user_id: int
|
||||
username: str
|
||||
email: str
|
||||
role: str
|
||||
session_token: str
|
||||
expires_at: datetime
|
||||
last_login: datetime
|
||||
|
||||
class NextcloudFile(BaseModel):
|
||||
"""Nextcloud file information"""
|
||||
file_id: str
|
||||
name: str
|
||||
size: int
|
||||
modified: datetime
|
||||
path: str
|
||||
download_url: str
|
||||
|
||||
class ConfigStatus(BaseModel):
|
||||
"""Configuration status"""
|
||||
nextcloud: Dict[str, Any]
|
||||
openai: Dict[str, Any]
|
||||
ldap: Dict[str, Any]
|
||||
database: Dict[str, Any]
|
||||
|
||||
class ApiResponse(BaseModel):
|
||||
"""Standard API response"""
|
||||
success: bool
|
||||
message: str
|
||||
data: Optional[Any] = None
|
||||
error: Optional[str] = None
|
||||
timestamp: datetime = Field(default_factory=datetime.now)
|
||||
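The `ApiResponse` envelope pattern (`success`/`message`/`data`/`error` plus a defaulted timestamp) can be sketched with stdlib `dataclasses` alone; this is an illustrative stand-in, not the repo's Pydantic model, and the class name here is invented:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime
from typing import Any, Optional

@dataclass
class ApiResponseSketch:
    """Stdlib stand-in for the Pydantic ApiResponse envelope."""
    success: bool
    message: str
    data: Optional[Any] = None
    error: Optional[str] = None
    # default_factory gives each instance its own creation timestamp
    timestamp: datetime = field(default_factory=datetime.now)

resp = asdict(ApiResponseSketch(success=True, message="ok", data={"rows": 3}))
```

`asdict` flattens the envelope to a plain dict, which is roughly what Pydantic's serialization produces for JSON responses.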
15
kpi_analysis/app/services/__init__.py
Normal file
@ -0,0 +1,15 @@
"""
Service modules for KPI Analysis Application
"""

from .nextcloud_service import NextcloudService
from .excel_parser import ExcelParser
from .analysis_engine import AnalysisEngine
from .pdf_generator import PDFGenerator

__all__ = [
    "NextcloudService",
    "ExcelParser",
    "AnalysisEngine",
    "PDFGenerator",
]

BIN
kpi_analysis/app/services/__pycache__/__init__.cpython-312.pyc
Normal file
658
kpi_analysis/app/services/analysis_engine.py
Normal file
@ -0,0 +1,658 @@
"""
Analysis engine for KPI data
Provides AI-powered insights, recommendations, and pattern analysis using OpenAI
"""

import json
import asyncio
from typing import Dict, List, Any, Optional
import pandas as pd
import plotly.graph_objects as go
import plotly.express as px
from plotly.subplots import make_subplots
import numpy as np
import logging
from datetime import datetime

from openai import AsyncOpenAI
from config.settings import settings
from ..models.kpi_models import KPIFile, AnalysisResult, ChartData, KpiCategory

logger = logging.getLogger(__name__)


class AnalysisEngine:
    """AI-powered KPI analysis engine"""

    def __init__(self):
        self.openai_client = None
        if settings.openai_api_key:
            self.openai_client = AsyncOpenAI(api_key=settings.openai_api_key)

    async def analyze_kpi_data(self, kpi_file: KPIFile) -> AnalysisResult:
        """Perform comprehensive KPI analysis"""
        try:
            # Calculate analysis components
            achievements = self._analyze_achievements(kpi_file)
            insights = await self._generate_insights(kpi_file)
            recommendations = await self._generate_recommendations(kpi_file)
            trends = self._analyze_trends(kpi_file)

            # Create analysis result
            result = AnalysisResult(
                file_id=1,  # This would come from database
                total_score=kpi_file.summary.final_score,
                perspective_scores=kpi_file.perspective_scores,
                achievements=achievements,
                recommendations=recommendations,
                insights=insights,
                trends=trends,
                charts_generated=False
            )

            return result

        except Exception as e:
            logger.error(f"Error analyzing KPI data: {str(e)}")
            raise

    async def generate_charts(self, kpi_file: KPIFile, analysis_result: AnalysisResult) -> List[ChartData]:
        """Generate interactive charts for KPI data"""
        charts = []

        try:
            # 1. Score Breakdown Chart
            score_chart = self._create_score_breakdown_chart(kpi_file)
            if score_chart:
                charts.append(score_chart)

            # 2. Trend Analysis Chart
            trend_chart = self._create_trend_chart(kpi_file)
            if trend_chart:
                charts.append(trend_chart)

            # 3. Achievement Status Chart
            achievement_chart = self._create_achievement_chart(kpi_file)
            if achievement_chart:
                charts.append(achievement_chart)

            # 4. Perspective Comparison Chart
            perspective_chart = self._create_perspective_comparison_chart(kpi_file)
            if perspective_chart:
                charts.append(perspective_chart)

            # 5. Monthly Performance Chart
            monthly_chart = self._create_monthly_performance_chart(kpi_file)
            if monthly_chart:
                charts.append(monthly_chart)

            return charts

        except Exception as e:
            logger.error(f"Error generating charts: {str(e)}")
            return charts

    def _analyze_achievements(self, kpi_file: KPIFile) -> Dict[str, Any]:
        """Analyze achievement data"""
        total_kpis = len(kpi_file.achievements.items)
        achieved = sum(1 for item in kpi_file.achievements.items if item.status.value == "Achieve")
        not_achieved = sum(1 for item in kpi_file.achievements.items if item.status.value == "Not Achieve")
        no_data = sum(1 for item in kpi_file.achievements.items if item.status.value == "No Data")

        return {
            "total_kpis": total_kpis,
            "achieved": achieved,
            "not_achieved": not_achieved,
            "no_data": no_data,
            "achievement_rate": kpi_file.achievement_rate,
            "achievement_percentage": (achieved / total_kpis * 100) if total_kpis > 0 else 0
        }

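The counting in `_analyze_achievements` reduces to a tally over status strings. A stdlib sketch of the same aggregation, with made-up input data (the function name and sample statuses are illustrative):

```python
from collections import Counter

def summarize_achievements(statuses):
    """Tally raw status strings the way _analyze_achievements does."""
    counts = Counter(statuses)
    total = len(statuses)
    achieved = counts.get("Achieve", 0)
    return {
        "total_kpis": total,
        "achieved": achieved,
        "not_achieved": counts.get("Not Achieve", 0),
        "no_data": counts.get("No Data", 0),
        # Guard against division by zero for an empty sheet
        "achievement_percentage": (achieved / total * 100) if total > 0 else 0,
    }

summary = summarize_achievements(["Achieve", "Achieve", "Not Achieve", "No Data"])
```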
    async def _generate_insights(self, kpi_file: KPIFile) -> List[str]:
        """Generate AI-powered insights using OpenAI"""
        if not self.openai_client:
            return self._generate_basic_insights(kpi_file)

        try:
            # Prepare data for AI analysis
            analysis_data = {
                "summary": {
                    "name": kpi_file.summary.name,
                    "position": kpi_file.summary.position,
                    "total_score": kpi_file.summary.final_score,
                    "period": kpi_file.summary.performance_period
                },
                "perspective_scores": {k.value: v for k, v in kpi_file.perspective_scores.items()},
                "achievements": self._analyze_achievements(kpi_file),
                "kpi_details": []
            }

            # Add KPI details
            for kpi_sheet in kpi_file.kpi_sheets:
                kpi_info = {
                    "name": kpi_sheet.name,
                    "category": kpi_sheet.category.value,
                    "code": kpi_sheet.code,
                    "polarization": kpi_sheet.polarization.value,
                    "period_data": []
                }

                for period_data in kpi_sheet.period_data:
                    if period_data.realization is not None:
                        kpi_info["period_data"].append({
                            "period": period_data.period,
                            "realization": period_data.realization,
                            "target": period_data.target,
                            "score": period_data.score,
                            "status": period_data.status
                        })

                analysis_data["kpi_details"].append(kpi_info)

            # Generate prompt for OpenAI
            prompt = self._create_insights_prompt(analysis_data)

            response = await self.openai_client.chat.completions.create(
                model=settings.openai_model,
                messages=[
                    {"role": "system", "content": "You are an expert business analyst specializing in KPI analysis and performance improvement."},
                    {"role": "user", "content": prompt}
                ],
                max_tokens=settings.openai_max_tokens,
                temperature=settings.openai_temperature
            )

            insights_text = response.choices[0].message.content

            # Parse insights into a list
            insights = [line.strip() for line in insights_text.split('\n') if line.strip()]

            return insights

        except Exception as e:
            logger.error(f"Error generating AI insights: {str(e)}")
            return self._generate_basic_insights(kpi_file)

    async def _generate_recommendations(self, kpi_file: KPIFile) -> List[str]:
        """Generate AI-powered recommendations using OpenAI"""
        if not self.openai_client:
            return self._generate_basic_recommendations(kpi_file)

        try:
            # Identify areas needing improvement
            problem_areas = []
            for category, score in kpi_file.perspective_scores.items():
                if score < 80:  # Below satisfactory level
                    problem_areas.append(f"{category.value}: {score:.1f}%")

            # Identify specific KPI issues
            kpi_issues = []
            for item in kpi_file.achievements.items:
                if item.status.value == "Not Achieve":
                    kpi_issues.append(f"{item.code}: {item.indicator}")

            prompt = f"""
Based on the following KPI analysis for {kpi_file.summary.name} ({kpi_file.summary.position}):

PERFORMANCE SUMMARY:
- Total Score: {kpi_file.summary.final_score:.2f} Points
- Achievement Rate: {kpi_file.achievement_rate:.1f}%

PERSPECTIVE SCORES:
{json.dumps({k.value: f"{v:.1f}%" for k, v in kpi_file.perspective_scores.items()}, indent=2)}

PROBLEM AREAS:
{json.dumps(problem_areas, indent=2)}

UNDERPERFORMING KPIs:
{json.dumps(kpi_issues[:10], indent=2)}

Please provide 5-7 specific, actionable recommendations to improve performance. Focus on:
1. Priority areas for immediate attention
2. Specific actions for underperforming KPIs
3. Strategic improvements for each perspective
4. Timeline recommendations where applicable

Format each recommendation as a clear, actionable bullet point.
"""

            response = await self.openai_client.chat.completions.create(
                model=settings.openai_model,
                messages=[
                    {"role": "system", "content": "You are an expert management consultant specializing in KPI improvement and organizational performance."},
                    {"role": "user", "content": prompt}
                ],
                max_tokens=settings.openai_max_tokens,
                temperature=settings.openai_temperature
            )

            recommendations_text = response.choices[0].message.content

            # Parse recommendations into a list; the prompt asks for up to 7 items,
            # so numbered prefixes 1.-7. are accepted alongside bullet markers
            bullet_prefixes = ('•', '-', '1.', '2.', '3.', '4.', '5.', '6.', '7.')
            recommendations = [
                line.strip() for line in recommendations_text.split('\n')
                if line.strip() and line.strip().startswith(bullet_prefixes)
            ]

            return recommendations if recommendations else self._generate_basic_recommendations(kpi_file)

        except Exception as e:
            logger.error(f"Error generating AI recommendations: {str(e)}")
            return self._generate_basic_recommendations(kpi_file)

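The recommendation parser keeps only bullet-style lines from the model's free-text reply. A regex variant of that filter (function name and sample text are illustrative) accepts any numbered prefix without enumerating digits:

```python
import re

# Matches a leading bullet marker or "N." numbering, plus trailing whitespace
_BULLET = re.compile(r"^(?:[-•*]|\d+\.)\s*")

def parse_bullets(text):
    """Return bullet/numbered lines from an LLM reply, with markers stripped."""
    items = []
    for line in text.splitlines():
        line = line.strip()
        if _BULLET.match(line):
            items.append(_BULLET.sub("", line))
    return items

recs = parse_bullets("Here are my suggestions:\n- Fix X\n2. Review Y\nnot a bullet")
```

Stripping the marker keeps the stored recommendations uniform regardless of how the model formats its list.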
    def _analyze_trends(self, kpi_file: KPIFile) -> Dict[str, Any]:
        """Analyze trends in KPI data"""
        trends = {}

        # Analyze trends for each KPI sheet
        for kpi_sheet in kpi_file.kpi_sheets:
            if len(kpi_sheet.period_data) >= 2:
                # Calculate trend for this KPI ("p" avoids shadowing the pandas alias "pd")
                recent_scores = [p.score for p in kpi_sheet.period_data[-3:] if p.score is not None]
                earlier_scores = [p.score for p in kpi_sheet.period_data[:3] if p.score is not None]

                if recent_scores and earlier_scores:
                    recent_avg = sum(recent_scores) / len(recent_scores)
                    earlier_avg = sum(earlier_scores) / len(earlier_scores)

                    trend_direction = "improving" if recent_avg > earlier_avg else "declining"
                    trend_magnitude = abs(recent_avg - earlier_avg)

                    trends[kpi_sheet.code] = {
                        "direction": trend_direction,
                        "magnitude": trend_magnitude,
                        "recent_average": recent_avg,
                        "earlier_average": earlier_avg
                    }

        return trends

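The windowed-average comparison in `_analyze_trends` can be exercised in isolation; this sketch mirrors its direction/magnitude logic on made-up score windows:

```python
def score_trend(recent_scores, earlier_scores):
    """Compare the averages of two score windows, mirroring _analyze_trends."""
    recent_avg = sum(recent_scores) / len(recent_scores)
    earlier_avg = sum(earlier_scores) / len(earlier_scores)
    direction = "improving" if recent_avg > earlier_avg else "declining"
    return direction, abs(recent_avg - earlier_avg)

# Last three periods averaged 95 vs. 75 for the first three
direction, magnitude = score_trend([90, 95, 100], [70, 75, 80])
```

Note that, as in the method itself, equal averages are reported as "declining"; a caller that needs a "stable" bucket would have to add a third branch.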
    def _create_score_breakdown_chart(self, kpi_file: KPIFile) -> Optional[ChartData]:
        """Create score breakdown pie chart"""
        try:
            categories = [cat.value for cat in KpiCategory]
            scores = [kpi_file.perspective_scores.get(cat, 0) for cat in KpiCategory]

            # Validate data before creating chart
            if not scores or all(score == 0 for score in scores):
                logger.warning("No valid scores found for score breakdown chart - returning empty chart data")
                # Return empty chart structure instead of None
                return ChartData(
                    chart_type="pie",
                    title="KPI Score Breakdown by Perspective (No Data)",
                    data={"message": "No data available"}
                )

            # Filter out zero scores to avoid visual clutter
            valid_data = [(cat, score) for cat, score in zip(categories, scores) if score > 0]
            if not valid_data:
                logger.warning("All perspective scores are zero")
                return ChartData(
                    chart_type="pie",
                    title="KPI Score Breakdown by Perspective (No Data)",
                    data={"message": "No data available"}
                )

            filtered_categories, filtered_scores = zip(*valid_data)

            fig = go.Figure(data=[go.Pie(
                labels=filtered_categories,
                values=filtered_scores,
                hole=.3,
                textinfo='label+percent',
                textposition='outside',
                marker=dict(colors=['#FF6B6B', '#4ECDC4', '#45B7D1', '#96CEB4'][:len(filtered_categories)])
            )])

            fig.update_layout(
                title="KPI Score Breakdown by Perspective",
                font=dict(size=14),
                height=400
            )

            chart_data = fig.to_dict()

            return ChartData(
                chart_type="pie",
                title="KPI Score Breakdown by Perspective",
                data=chart_data
            )

        except Exception as e:
            logger.error(f"Error creating score breakdown chart: {str(e)}", exc_info=True)
            return ChartData(
                chart_type="pie",
                title="KPI Score Breakdown by Perspective (Error)",
                data={"error": str(e)}
            )

    def _create_trend_chart(self, kpi_file: KPIFile) -> Optional[ChartData]:
        """Create trend analysis chart"""
        try:
            # Collect time series data
            all_periods = set()
            for kpi_sheet in kpi_file.kpi_sheets:
                for period_data in kpi_sheet.period_data:
                    all_periods.add(period_data.period)

            sorted_periods = sorted(all_periods)

            # Create subplots
            fig = make_subplots(
                rows=2, cols=2,
                subplot_titles=[cat.value for cat in KpiCategory],
                specs=[[{"secondary_y": False}, {"secondary_y": False}],
                       [{"secondary_y": False}, {"secondary_y": False}]]
            )

            colors = ['#FF6B6B', '#4ECDC4', '#45B7D1', '#96CEB4']

            for idx, category in enumerate(KpiCategory):
                category_sheets = [sheet for sheet in kpi_file.kpi_sheets if sheet.category == category]

                if not category_sheets:
                    continue

                # Aggregate data for this category
                period_scores = {}
                for period in sorted_periods:
                    scores = []
                    for sheet in category_sheets:
                        # "pdata" avoids shadowing the pandas alias "pd"
                        for pdata in sheet.period_data:
                            if pdata.period == period and pdata.score is not None:
                                scores.append(pdata.score)

                    if scores:
                        period_scores[period] = sum(scores) / len(scores)

                # Add trace
                periods = list(period_scores.keys())
                scores = list(period_scores.values())

                if periods and scores:
                    row = (idx // 2) + 1
                    col = (idx % 2) + 1

                    fig.add_trace(
                        go.Scatter(
                            x=periods,
                            y=scores,
                            mode='lines+markers',
                            name=category.value,
                            line=dict(color=colors[idx]),
                            showlegend=False
                        ),
                        row=row, col=col
                    )

            fig.update_layout(
                title="KPI Trends by Perspective",
                height=600,
                font=dict(size=12)
            )

            chart_data = fig.to_dict()

            return ChartData(
                chart_type="line",
                title="KPI Performance Trends",
                data=chart_data
            )

        except Exception as e:
            logger.error(f"Error creating trend chart: {str(e)}")
            return None

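The `row`/`col` arithmetic above maps a flat enumeration index onto 1-based cells of the 2x2 subplot grid; the same integer division/modulo pattern works for any grid width:

```python
# Map a flat index 0..3 onto 1-based (row, col) cells of a 2x2 subplot grid,
# as done when placing one trace per KPI perspective
positions = [((idx // 2) + 1, (idx % 2) + 1) for idx in range(4)]
```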
    def _create_achievement_chart(self, kpi_file: KPIFile) -> Optional[ChartData]:
        """Create achievement status chart"""
        try:
            achievements = self._analyze_achievements(kpi_file)

            # Validate achievement data to prevent division by zero
            total_kpis = achievements['total_kpis']
            if total_kpis == 0:
                logger.warning("No KPIs found for achievement chart")
                return None

            achieved = achievements['achieved']
            not_achieved = achievements['not_achieved']
            no_data = achievements['no_data']

            # Only create chart if there's actual data
            if achieved + not_achieved + no_data == 0:
                logger.warning("No achievement data available")
                return None

            labels = ['Achieved', 'Not Achieved', 'No Data']
            values = [achieved, not_achieved, no_data]
            colors = ['#28a745', '#dc3545', '#ffc107']

            # Filter out zero values to avoid visual clutter
            non_zero_data = [(label, value, color) for label, value, color in zip(labels, values, colors) if value > 0]
            if not non_zero_data:
                logger.warning("All achievement values are zero")
                return None

            filtered_labels, filtered_values, filtered_colors = zip(*non_zero_data)

            fig = go.Figure(data=[go.Bar(
                x=filtered_labels,
                y=filtered_values,
                marker_color=filtered_colors,
                text=filtered_values,
                textposition='auto'
            )])

            fig.update_layout(
                title="KPI Achievement Status",
                xaxis_title="Status",
                yaxis_title="Number of KPIs",
                height=400
            )

            chart_data = fig.to_dict()

            return ChartData(
                chart_type="bar",
                title="KPI Achievement Status",
                data=chart_data
            )

        except Exception as e:
            logger.error(f"Error creating achievement chart: {str(e)}")
            return None

    def _create_perspective_comparison_chart(self, kpi_file: KPIFile) -> Optional[ChartData]:
        """Create perspective comparison radar chart"""
        try:
            categories = [cat.value for cat in KpiCategory]
            scores = [kpi_file.perspective_scores.get(cat, 0) for cat in KpiCategory]

            fig = go.Figure()

            fig.add_trace(go.Scatterpolar(
                r=scores,
                theta=categories,
                fill='toself',
                name='Current Performance',
                line_color='#4ECDC4'
            ))

            # Add benchmark line at 80%
            benchmark_scores = [80] * len(categories)
            fig.add_trace(go.Scatterpolar(
                r=benchmark_scores,
                theta=categories,
                fill='toself',
                name='Target (80%)',
                line_color='#FF6B6B',
                line_dash='dash'
            ))

            fig.update_layout(
                polar=dict(
                    radialaxis=dict(
                        visible=True,
                        range=[0, 100]
                    )),
                title="KPI Performance by Perspective",
                height=500
            )

            chart_data = fig.to_dict()

            return ChartData(
                chart_type="polar",
                title="KPI Perspective Comparison",
                data=chart_data
            )

        except Exception as e:
            logger.error(f"Error creating perspective comparison chart: {str(e)}")
            return None

    def _create_monthly_performance_chart(self, kpi_file: KPIFile) -> Optional[ChartData]:
        """Create monthly performance chart"""
        try:
            # Collect all monthly data
            monthly_data = {}

            for kpi_sheet in kpi_file.kpi_sheets:
                for period_data in kpi_sheet.period_data:
                    month = period_data.period
                    if month not in monthly_data:
                        monthly_data[month] = []

                    if period_data.score is not None:
                        monthly_data[month].append(period_data.score)

            # Validate that we have data
            if not monthly_data:
                logger.warning("No monthly data found for performance chart")
                return None

            # Calculate monthly averages
            months = sorted(monthly_data.keys())
            monthly_averages = []

            for month in months:
                scores = monthly_data[month]
                if scores:
                    monthly_averages.append(sum(scores) / len(scores))
                else:
                    monthly_averages.append(0)

            # Check if we have any valid data points
            if not monthly_averages or all(avg == 0 for avg in monthly_averages):
                logger.warning("All monthly averages are zero")
                return None

            fig = go.Figure()

            fig.add_trace(go.Scatter(
                x=months,
                y=monthly_averages,
                mode='lines+markers',
                name='Average KPI Score',
                line=dict(color='#45B7D1', width=3),
                marker=dict(size=8)
            ))

            # Add target line
            fig.add_hline(y=80, line_dash="dash", line_color="red",
                          annotation_text="Target (80%)")

            fig.update_layout(
                title="Monthly KPI Performance Trend",
                xaxis_title="Month",
                yaxis_title="Average Score (%)",
                height=400
            )

            chart_data = fig.to_dict()

            return ChartData(
                chart_type="line",
                title="Monthly KPI Performance",
                data=chart_data
            )

        except Exception as e:
            logger.error(f"Error creating monthly performance chart: {str(e)}")
            return None

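The month-sorting and per-month averaging used by the monthly performance chart can be sketched standalone; the sample period keys are made up:

```python
def monthly_average_series(monthly_data):
    """Sort months and average each month's scores; empty months average to 0."""
    months = sorted(monthly_data)
    averages = [
        sum(monthly_data[m]) / len(monthly_data[m]) if monthly_data[m] else 0
        for m in months
    ]
    return months, averages

months, averages = monthly_average_series(
    {"2025-02": [80, 90], "2025-01": [70], "2025-03": []}
)
```

Sorting the dict keys first keeps the x-axis chronological even though the months were collected in sheet order.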
    def _create_insights_prompt(self, data: Dict[str, Any]) -> str:
        """Create prompt for insights generation"""
        return f"""
Analyze the following KPI performance data and provide key insights:

EMPLOYEE: {data['summary']['name']} - {data['summary']['position']}
PERIOD: {data['summary']['period']}
TOTAL SCORE: {data['summary']['total_score']:.2f} Points

PERSPECTIVE SCORES:
{json.dumps(data['perspective_scores'], indent=2)}

ACHIEVEMENT SUMMARY:
{json.dumps(data['achievements'], indent=2)}

Please analyze this data and provide 5-7 key insights about:
1. Overall performance strengths and weaknesses
2. Patterns across different perspectives
3. Areas of concern or improvement
4. Positive trends or achievements
5. Risk areas that need attention

Present each insight as a clear, concise statement.
"""

    def _generate_basic_insights(self, kpi_file: KPIFile) -> List[str]:
        """Generate basic insights without AI"""
        insights = []

        # Overall performance insight
        if kpi_file.summary.final_score >= 90:
            insights.append(f"Excellent overall performance with total score of {kpi_file.summary.final_score:.2f} points (above 90)")
        elif kpi_file.summary.final_score >= 80:
            insights.append(f"Good overall performance with total score of {kpi_file.summary.final_score:.2f} points (meeting target standards)")
        else:
            insights.append(f"Performance below target with total score of {kpi_file.summary.final_score:.2f} points, requiring immediate attention")

        # Achievement rate insight
        if kpi_file.achievement_rate >= 80:
            insights.append(f"Strong achievement rate of {kpi_file.achievement_rate:.1f}%")
        else:
            insights.append(f"Achievement rate of {kpi_file.achievement_rate:.1f}% needs improvement")

        # Perspective analysis (guard against an empty score map, where max/min would raise)
        if kpi_file.perspective_scores:
            best_perspective = max(kpi_file.perspective_scores.items(), key=lambda x: x[1])
            worst_perspective = min(kpi_file.perspective_scores.items(), key=lambda x: x[1])

            insights.append(f"Strongest performance in {best_perspective[0].value} ({best_perspective[1]:.1f}%)")
            insights.append(f"Improvement needed in {worst_perspective[0].value} ({worst_perspective[1]:.1f}%)")

        return insights

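The fallback insights pick the strongest and weakest perspective with `max`/`min` over dict items keyed on the score. A sketch with plain string keys (the category names here are invented; the real keys are `KpiCategory` enum members):

```python
# Hypothetical perspective scores for illustration
scores = {
    "Financial": 85.0,
    "Customer": 72.5,
    "Internal Business Process": 91.0,
    "Learning & Growth": 68.0,
}

# key=lambda kv: kv[1] ranks the (name, score) pairs by score
best = max(scores.items(), key=lambda kv: kv[1])
worst = min(scores.items(), key=lambda kv: kv[1])
```

Note that `max`/`min` raise `ValueError` on an empty dict, so callers should check the map is non-empty first.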
    def _generate_basic_recommendations(self, kpi_file: KPIFile) -> List[str]:
        """Generate basic recommendations without AI"""
        recommendations = []

        # Low scoring areas
        low_scores = [(cat, score) for cat, score in kpi_file.perspective_scores.items() if score < 70]
        for cat, score in low_scores:
            recommendations.append(f"Focus on improving {cat.value} performance (current: {score:.1f}%)")

        # Achievement issues
        not_achieved = [item for item in kpi_file.achievements.items if item.status.value == "Not Achieve"]
        if not_achieved:
            recommendations.append(f"Address {len(not_achieved)} KPIs that are not meeting targets")

        # General recommendations
        recommendations.extend([
            "Develop action plans for underperforming areas",
            "Regular monitoring and feedback sessions",
            "Enhance training and skill development programs",
            "Implement continuous improvement processes"
        ])

        return recommendations


# Global analysis engine instance
analysis_engine = AnalysisEngine()

526
kpi_analysis/app/services/excel_parser.py
Normal file
@ -0,0 +1,526 @@
"""
Excel file parser for KPI Analysis
Parses KPI Excel files with summary sheet, achievement sheet, and detail sheets
"""

import pandas as pd
import numpy as np
from typing import Dict, List, Any, Optional, Tuple
from pathlib import Path
import re
from datetime import datetime
import logging

from ..models.kpi_models import (
    KPIFile, KPISummary, AchievementSheet, AchievementItem,
    KPISheet, KPIPeriodData, KpiCategory, PolarizationType, KpiStatus
)
from config.settings import settings

logger = logging.getLogger(__name__)


class ExcelParser:
    """Excel file parser for KPI data"""

    def __init__(self):
        self.supported_sheets = [
            'KPI', 'Achievement', 'F2a', 'F2b', 'C2a', 'C2b',
            'B1a', 'B1b', 'B1c', 'B1d', 'B1e', 'B1f', 'B2a', 'B3a',
            'L1a', 'L1b', 'L2a', 'Kumul max', 'Kumul min', 'Fix max', 'Fix min'
        ]

    async def parse_excel_file(self, file_path: str) -> KPIFile:
        """Parse complete KPI Excel file"""
        try:
            logger.info(f"Starting to parse Excel file: {file_path}")
            excel_file = pd.ExcelFile(file_path)
            logger.info(f"Excel file loaded. Available sheets: {excel_file.sheet_names}")

            # Parse main components
            summary = self._parse_summary_sheet(excel_file)
            logger.info(f"Summary parsed: {summary.name}, Score: {summary.final_score}")

            achievements = self._parse_achievement_sheet(excel_file)
            logger.info(f"Achievements parsed: {len(achievements.items)} items")

            kpi_sheets = self._parse_kpi_detail_sheets(excel_file)
            logger.info(f"KPI sheets parsed: {len(kpi_sheets)} sheets")

            # Calculate perspective scores and achievement rate
            perspective_scores = self._calculate_perspective_scores(kpi_sheets)
            logger.info(f"Perspective scores calculated: {perspective_scores}")

            achievement_rate = self._calculate_achievement_rate(achievements)
            logger.info(f"Achievement rate calculated: {achievement_rate}%")

            kpi_file = KPIFile(
                filename=Path(file_path).name,
                upload_date=datetime.now(),
                file_path=file_path,
                summary=summary,
                achievements=achievements,
                kpi_sheets=kpi_sheets,
                perspective_scores=perspective_scores,
                achievement_rate=achievement_rate
            )

            logger.info("Excel file parsing completed successfully")
            return kpi_file

        except Exception as e:
            logger.error(f"Error parsing Excel file {file_path}: {str(e)}", exc_info=True)
            raise ValueError(f"Failed to parse Excel file: {str(e)}")

def _parse_summary_sheet(self, excel_file: pd.ExcelFile) -> KPISummary:
|
||||
"""Parse the main KPI summary sheet"""
|
||||
try:
|
||||
df = pd.read_excel(excel_file, sheet_name='KPI', header=None)
|
||||
|
||||
# Extract basic information based on actual Excel structure
|
||||
# Row 1 (index 1): "Jabatan : Information Technology Manager" - this is the POSITION
|
||||
job_title_cell = self._extract_cell_value(df, 1, 1) # B2
|
||||
position = "Manager" # Default
|
||||
if job_title_cell and ":" in job_title_cell:
|
||||
position = job_title_cell.split(":", 1)[1].strip() # This is the position (e.g., "Information Technology Manager")
|
||||
|
||||
# Row 4 (index 4): Name is in column 3 (D column)
|
||||
name = self._extract_cell_value(df, 4, 3) # D5: "Suherdy Yacob"
|
||||
|
||||
# Row 3 (index 3): Department is in column 12 (M column)
|
||||
department = self._extract_cell_value(df, 3, 12) # M4: "Information Technology"
|
||||
|
||||
# Row 4 (index 4): Supervisor is in column 12 (M column)
|
||||
supervisor_info = self._extract_cell_value(df, 4, 12) # M5: "Robertus Haryo / GM Business Support"
|
||||
|
||||
# Row 5 (index 5): Performance period is in column 12 (M column)
|
||||
performance_period = self._extract_cell_value(df, 5, 12) # M6: "2025"
|
||||
|
||||
# Parse name (already clean)
|
||||
person_name = name.strip() if name else "Unknown"
|
||||
|
||||
# Use department as job_title for the summary
|
||||
job_title = department if department else position
|
||||
|
||||
# Parse supervisor name and position
|
||||
supervisor_name = ""
|
||||
supervisor_position = ""
|
||||
if supervisor_info and "/" in supervisor_info:
|
||||
sup_parts = supervisor_info.split("/", 1)
|
||||
supervisor_name = sup_parts[0].strip()
|
||||
supervisor_position = sup_parts[1].strip() if len(sup_parts) > 1 else ""
|
||||
elif supervisor_info:
|
||||
supervisor_name = supervisor_info.strip()
|
||||
supervisor_position = ""
|
||||
|
||||
# Extract scores - look for "Total %" row or the last row with scores
|
||||
total_score = 0.0
|
||||
total_weight = 100.0
|
||||
|
||||
# Method 1: Find the row with "Total %" in column 1 (B column)
|
||||
found_total = False
|
||||
for idx, row in df.iterrows():
|
||||
                cell_value = self._safe_str(row.iloc[1]) if len(row) > 1 else ""  # Column B (index 1)
                if "Total" in cell_value and "%" in cell_value:
                    # The total weight (Bobot) is in column 17 (column R)
                    if len(row) > 17:
                        weight_value = self._extract_numeric_value(row, 17)
                        if weight_value is not None and weight_value > 0:
                            total_weight = weight_value * 100  # Stored as a fraction; convert to a percentage
                            logger.info(f"Found total weight: {total_weight} at row {idx}, column 17")

                    # The total score is in column 18 (column S) - "Total (Skor * Bobot)"
                    if len(row) > 18:
                        score_value = self._extract_numeric_value(row, 18)
                        if score_value is not None:
                            total_score = score_value
                            logger.info(f"Found total score: {total_score} at row {idx}, column 18")
                            found_total = True
                            break

            # Method 2: if not found, look for the last row with a value in column 18 that
            # looks like a total (after all KPI rows, column 17 holds ~1.0 and column 18 the total score)
            if not found_total:
                for idx in range(len(df) - 1, -1, -1):
                    row = df.iloc[idx]
                    if len(row) > 18:
                        score_value = self._extract_numeric_value(row, 18)
                        weight_value = self._extract_numeric_value(row, 17) if len(row) > 17 else None

                        # A total row has weight ~1.0 and a positive score
                        if (score_value is not None and score_value > 0 and
                                weight_value is not None and 0.9 <= weight_value <= 1.1):
                            total_score = score_value
                            total_weight = weight_value * 100  # Convert to percentage
                            logger.info(f"Found total score (method 2): {total_score} at row {idx}, column 18")
                            logger.info(f"Found total weight (method 2): {total_weight} at row {idx}, column 17")
                            break

            # The final score equals the total score (in points, not percentage)
            final_score = total_score

            return KPISummary(
                job_title=job_title or "Department",  # Department (e.g., "Information Technology")
                name=person_name,
                position=position,  # Position (e.g., "Information Technology Manager")
                supervisor_name=supervisor_name,
                supervisor_position=supervisor_position,
                join_date=None,
                performance_period=performance_period or "2025",
                total_score=total_score,
                total_weight=total_weight,
                final_score=final_score
            )

        except Exception as e:
            logger.error(f"Error parsing summary sheet: {str(e)}", exc_info=True)
            # Return a default summary if parsing fails
            return KPISummary(
                job_title="Manager",
                name="Unknown",
                position="Manager",
                supervisor_name="",
                supervisor_position="",
                performance_period="2025",
                total_score=0.0,
                total_weight=100.0,
                final_score=0.0
            )
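The method-2 fallback above reduces to a bottom-up scan for a row whose weight column sums to ~1.0. A minimal sketch on a toy frame (the 19-column layout here is an assumption mirroring the parser's column indices, not a real KPI workbook):

```python
import pandas as pd

# Toy frame mimicking the summary layout: KPI rows, then a total row
# where column 17 holds the weight sum (~1.0) and column 18 the total score.
df = pd.DataFrame(0.0, index=range(4), columns=range(19))
df.iloc[1, 17], df.iloc[1, 18] = 0.25, 20.0   # KPI row
df.iloc[2, 17], df.iloc[2, 18] = 0.75, 65.0   # KPI row
df.iloc[3, 17], df.iloc[3, 18] = 1.00, 85.0   # total row

total_score = None
for idx in range(len(df) - 1, -1, -1):        # scan bottom-up
    weight, score = df.iloc[idx, 17], df.iloc[idx, 18]
    if score > 0 and 0.9 <= weight <= 1.1:    # weight sum ~1.0 marks the total row
        total_score = score
        break

print(total_score)  # 85.0
```

The ±0.1 tolerance absorbs rounding in sheets where the weights do not sum to exactly 1.0.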
    def _parse_achievement_sheet(self, excel_file: pd.ExcelFile) -> AchievementSheet:
        """Parse the achievement status sheet"""
        try:
            df = pd.read_excel(excel_file, sheet_name='Achievement', header=None)

            items = []

            # Find the header row
            header_row_idx = None
            for idx, row in df.iterrows():
                for cell in row:
                    if pd.notna(cell) and 'Kode BSC' in str(cell):
                        header_row_idx = idx
                        break
                if header_row_idx is not None:
                    break

            if header_row_idx is None:
                logger.warning("Could not find header row in Achievement sheet")
                return AchievementSheet(items=[])

            # Parse data rows
            for idx in range(header_row_idx + 1, len(df)):
                row = df.iloc[idx]

                # Extract values from columns (adjust indices to the actual structure)
                code = self._safe_str(row.iloc[1]) if len(row) > 1 else ""  # Column B
                indicator = self._safe_str(row.iloc[2]) if len(row) > 2 else ""  # Column C
                status_str = self._safe_str(row.iloc[3]) if len(row) > 3 else ""  # Column D
                description = self._safe_str(row.iloc[4]) if len(row) > 4 else ""  # Column E

                if code and indicator and code != 'Kode BSC':
                    # Map the status string to the enum; default is NOT_ACHIEVE
                    status = KpiStatus.NOT_ACHIEVE
                    if 'achieve' in status_str.lower() and 'not' not in status_str.lower():
                        status = KpiStatus.ACHIEVE
                    elif 'not achieve' in status_str.lower():
                        status = KpiStatus.NOT_ACHIEVE  # Explicit, so "Not Achieve" never falls into NO_DATA below
                    elif 'data' in description.lower() or 'isi' in description.lower() or 'data' in status_str.lower():
                        status = KpiStatus.NO_DATA

                    items.append(AchievementItem(
                        code=code,
                        indicator=indicator,
                        status=status,
                        description=description
                    ))

            logger.info(f"Parsed {len(items)} achievement items")
            return AchievementSheet(items=items)

        except Exception as e:
            logger.error(f"Error parsing achievement sheet: {str(e)}", exc_info=True)
            return AchievementSheet(items=[])
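The status-string mapping above reduces to a small pure function. A standalone sketch, with plain strings standing in for the `KpiStatus` enum members:

```python
def map_status(status_str: str, description: str) -> str:
    """Mirror of the achievement-status mapping, order-sensitive as in the parser."""
    s, d = status_str.lower(), description.lower()
    if 'achieve' in s and 'not' not in s:
        return 'ACHIEVE'
    if 'not achieve' in s:
        return 'NOT_ACHIEVE'          # checked before the data heuristics
    if 'data' in d or 'isi' in d or 'data' in s:
        return 'NO_DATA'
    return 'NOT_ACHIEVE'              # default

print(map_status('Achieve', ''))         # ACHIEVE
print(map_status('Not Achieve', ''))     # NOT_ACHIEVE
print(map_status('', 'mohon isi data'))  # NO_DATA
```

The branch order matters: an explicit "Not Achieve" must win over a description that happens to mention "data".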
    def _parse_kpi_detail_sheets(self, excel_file: pd.ExcelFile) -> List[KPISheet]:
        """Parse the KPI detail sheets"""
        kpi_sheets = []

        for sheet_name in excel_file.sheet_names:
            if sheet_name in ['KPI', 'Achievement']:
                continue

            try:
                kpi_sheet = self._parse_single_kpi_sheet(excel_file, sheet_name)
                if kpi_sheet:
                    kpi_sheets.append(kpi_sheet)
            except Exception as e:
                logger.warning(f"Error parsing sheet {sheet_name}: {str(e)}")
                continue

        return kpi_sheets

    def _parse_single_kpi_sheet(self, excel_file: pd.ExcelFile, sheet_name: str) -> Optional[KPISheet]:
        """Parse a single KPI detail sheet"""
        try:
            df = pd.read_excel(excel_file, sheet_name=sheet_name, header=None)
            logger.debug(f"Parsing sheet: {sheet_name}, shape: {df.shape}")

            # Extract metadata from the top of the sheet.
            # Row 0: "Nama KPI" in column A, unit in column B, actual name in column C.
            name = self._extract_cell_value(df, 0, 2) or sheet_name  # Column C (index 2) holds the actual name

            # Derive metadata from the sheet name and content
            category = self._determine_category(sheet_name)
            polarization = self._determine_polarization(df)
            period = self._determine_period(df)
            unit = self._extract_unit(df)

            # Extract target values
            target_value, threshold_min, threshold_max = self._extract_target_values(df)

            # Parse period data
            period_data = self._parse_period_data(df)

            if not period_data:
                logger.warning(f"No period data found for sheet {sheet_name}")
                return None

            logger.debug(f"Sheet {sheet_name} parsed successfully with {len(period_data)} periods")

            return KPISheet(
                name=name,
                category=category,
                code=sheet_name,
                polarization=polarization,
                period=period,
                unit=unit,
                target_value=target_value,
                threshold_min=threshold_min,
                threshold_max=threshold_max,
                period_data=period_data
            )

        except Exception as e:
            logger.error(f"Error parsing KPI sheet {sheet_name}: {str(e)}", exc_info=True)
            return None
    def _parse_period_data(self, df: pd.DataFrame) -> List[KPIPeriodData]:
        """Parse period data from a KPI sheet"""
        period_data = []

        # Find the header row containing "Periode", "Realisasi", "Target", etc.
        header_row_idx = None
        for idx, row in df.iterrows():
            for cell in row:
                if pd.notna(cell) and 'Periode' in str(cell) and 'Realisasi' in str(df.iloc[idx].values):
                    header_row_idx = idx
                    break
            if header_row_idx is not None:
                break

        if header_row_idx is None:
            logger.warning("Could not find data header row")
            return []

        # Parse data rows after the header
        for idx in range(header_row_idx + 1, len(df)):
            row = df.iloc[idx]

            # Check whether this is a valid data row
            period_name = self._safe_str(row.iloc[0]) if len(row) > 0 else ""

            # Skip empty rows and total rows
            if not period_name or period_name.lower() in ['total', 'jumlah', '', 'nan']:
                continue

            # Skip rows whose name is not a month, quarter, or semester
            # (Indonesian and English month names; months spelled identically are listed once)
            months = ['januari', 'februari', 'maret', 'april', 'mei', 'juni',
                      'juli', 'agustus', 'september', 'oktober', 'november', 'desember',
                      'january', 'february', 'march', 'may', 'june', 'july',
                      'august', 'october', 'december', 'q1', 'q2', 'q3', 'q4', 'semester']
            if not any(month in period_name.lower() for month in months):
                continue

            # Extract values by column position
            realization = self._extract_numeric_value(row, 1)
            target = self._extract_numeric_value(row, 2)
            threshold_min = self._extract_numeric_value(row, 3)
            score = self._extract_numeric_value(row, 4)
            status = self._safe_str(row.iloc[5]) if len(row) > 5 else ""

            period_data.append(KPIPeriodData(
                period=period_name,
                realization=realization,
                target=target,
                threshold_min=threshold_min,
                threshold_max=None,
                score=score,
                status=status,
                notes=""
            ))

        return period_data
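The row filter above can be isolated into a small predicate; this sketch uses the same keyword list as the parser:

```python
MONTHS = ['januari', 'februari', 'maret', 'april', 'mei', 'juni',
          'juli', 'agustus', 'september', 'oktober', 'november', 'desember',
          'january', 'february', 'march', 'may', 'june', 'july',
          'august', 'october', 'december', 'q1', 'q2', 'q3', 'q4', 'semester']

def is_period_row(period_name: str) -> bool:
    """Keep only rows whose first cell names a month, quarter, or semester."""
    name = period_name.lower()
    if not name or name in ('total', 'jumlah', 'nan'):
        return False
    return any(m in name for m in MONTHS)

print(is_period_row('Januari'))   # True
print(is_period_row('Q3 2025'))   # True
print(is_period_row('Total'))     # False
```

Because the check is a substring match, a label such as "Q3 2025" passes without any date parsing.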
    def _determine_category(self, sheet_name: str) -> KpiCategory:
        """Determine the KPI category from the sheet name prefix"""
        if sheet_name.startswith('F'):
            return KpiCategory.FINANCIAL
        elif sheet_name.startswith('C'):
            return KpiCategory.CUSTOMER
        elif sheet_name.startswith('B'):
            return KpiCategory.INTERNAL_BUSINESS_PROCESS
        elif sheet_name.startswith('L'):
            return KpiCategory.LEARNING_GROWTH
        else:
            return KpiCategory.FINANCIAL  # Default

    def _determine_polarization(self, df: pd.DataFrame) -> PolarizationType:
        """Determine the polarization type from sheet content"""
        for _, row in df.iterrows():
            for cell in row:
                if pd.notna(cell) and isinstance(cell, str):
                    if 'minimal' in cell.lower():
                        return PolarizationType.MINIMAL
                    elif 'maksimal' in cell.lower() or 'maximal' in cell.lower():
                        return PolarizationType.MAKSIMAL
        return PolarizationType.MINIMAL  # Default

    def _determine_period(self, df: pd.DataFrame) -> str:
        """Determine the period type from sheet content"""
        for _, row in df.iterrows():
            for cell in row:
                if pd.notna(cell) and isinstance(cell, str):
                    cell_lower = cell.lower()
                    if 'bulanan' in cell_lower or 'monthly' in cell_lower:
                        return 'Monthly'
                    elif 'quarter' in cell_lower or 'kuartal' in cell_lower or 'kuarter' in cell_lower:
                        return 'Quarterly'
                    elif 'semester' in cell_lower:
                        return 'Semi-Annual'
                    elif 'tahunan' in cell_lower or 'annual' in cell_lower:
                        return 'Annual'
        return 'Monthly'  # Default

    def _extract_unit(self, df: pd.DataFrame) -> Optional[str]:
        """Extract the unit of measurement from column B"""
        # The unit is typically in row 0, column 1 (cell B1)
        if len(df) > 0 and len(df.columns) > 1:
            unit_cell = self._safe_str(df.iloc[0, 1])
            if unit_cell:
                # Return the unit as-is (%, #, etc.)
                return unit_cell

        # Fallback: search the sheet for a known unit
        for _, row in df.iterrows():
            for cell in row:
                if pd.notna(cell) and isinstance(cell, str):
                    cell_str = cell.strip()
                    if cell_str in ['%', '#', 'IDR', 'Jam', 'Hari']:
                        return cell_str
        return None
    def _extract_target_values(self, df: pd.DataFrame) -> Tuple[Optional[float], Optional[float], Optional[float]]:
        """Extract target and threshold values"""
        target_value = None
        threshold_min = None
        threshold_max = None  # Not present in the current sheet layout

        for _, row in df.iterrows():
            for col_idx, cell in enumerate(row):
                if pd.notna(cell) and isinstance(cell, str):
                    cell_str = cell.lower()
                    # The value sits in the cell to the right of its label
                    if 'target' in cell_str:
                        target_value = self._extract_numeric_value(row, col_idx + 1)
                    elif 'threshold' in cell_str or 'treshold' in cell_str:
                        threshold_min = self._extract_numeric_value(row, col_idx + 1)

        return target_value, threshold_min, threshold_max
    def _calculate_perspective_scores(self, kpi_sheets: List[KPISheet]) -> Dict[KpiCategory, float]:
        """Calculate the score for each perspective"""
        perspective_scores = {}

        for category in KpiCategory:
            category_sheets = [sheet for sheet in kpi_sheets if sheet.category == category]

            if not category_sheets:
                perspective_scores[category] = 0.0
                continue

            total_score = 0.0
            total_weight = 0.0

            for sheet in category_sheets:
                # Use the average score across the sheet's period data
                if sheet.period_data:
                    scores = [p.score for p in sheet.period_data if p.score is not None]
                    if scores:
                        avg_score = sum(scores) / len(scores)
                        total_score += avg_score
                        total_weight += 1.0

            perspective_scores[category] = total_score / total_weight if total_weight > 0 else 0.0

        return perspective_scores

    def _calculate_achievement_rate(self, achievements: AchievementSheet) -> float:
        """Calculate the overall achievement rate as a percentage"""
        if not achievements.items:
            return 0.0

        achieved = sum(1 for item in achievements.items if item.status == KpiStatus.ACHIEVE)
        total = len(achievements.items)

        return (achieved / total) * 100 if total > 0 else 0.0
    # Helper methods
    def _extract_cell_value(self, df: pd.DataFrame, row: int, col: int) -> Optional[str]:
        """Safely extract a cell value"""
        try:
            if row < len(df) and col < len(df.columns):
                cell_value = df.iloc[row, col]
                return self._safe_str(cell_value)
        except Exception:
            pass
        return None

    def _safe_str(self, value) -> str:
        """Safely convert a value to a stripped string"""
        if pd.isna(value) or value is None:
            return ""
        return str(value).strip()

    def _extract_numeric_value(self, row: pd.Series, col_idx: int) -> Optional[float]:
        """Extract a numeric value from a cell"""
        try:
            if col_idx < len(row):
                value = row.iloc[col_idx]
                if pd.notna(value) and isinstance(value, (int, float)):
                    return float(value)
                elif pd.notna(value) and isinstance(value, str):
                    # Try to extract the first (possibly negative) number from the string
                    import re
                    numbers = re.findall(r'-?\d+\.?\d*', value)
                    if numbers:
                        return float(numbers[0])
        except Exception:
            pass
        return None

    def _find_numeric_value(self, df: pd.DataFrame, row: int, col: int) -> Optional[float]:
        """Find a numeric value in the DataFrame at (row, col)"""
        return self._extract_numeric_value(df.iloc[row], col)

    def _find_data_start_row(self, df: pd.DataFrame) -> int:
        """Find the row where the data table starts"""
        # Look for headers such as "Periode" and "Realisasi"
        for idx, row in df.iterrows():
            for col_idx, cell in enumerate(row):
                if pd.notna(cell) and isinstance(cell, str):
                    cell_lower = cell.lower()
                    if 'periode' in cell_lower and 'realisasi' in str(row.values).lower():
                        return idx + 1
        return 6  # Default start row (after the typical header rows)


# Global Excel parser instance
excel_parser = ExcelParser()
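The string branch of `_extract_numeric_value` relies on a regex over the cell text; a standalone sketch of that behavior, including a known limitation with comma decimals:

```python
import re
from typing import Optional

def extract_numeric(value) -> Optional[float]:
    """Floats pass through; strings yield their first embedded number."""
    if isinstance(value, (int, float)):
        return float(value)
    if isinstance(value, str):
        numbers = re.findall(r'-?\d+\.?\d*', value)
        if numbers:
            return float(numbers[0])
    return None

print(extract_numeric(0.85))      # 0.85
print(extract_numeric('85,5 %'))  # 85.0  (comma decimals split into two matches; only "85" is taken)
print(extract_numeric('n/a'))     # None
```

If the workbooks use Indonesian comma decimals, normalizing `value.replace(',', '.')` before matching would be a straightforward extension.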
374
kpi_analysis/app/services/ldap_auth_service.py
Normal file
@ -0,0 +1,374 @@
"""
LDAP Authentication Service with Group-Based Access Control

Handles LDAP authentication and group membership verification.
"""

from typing import Optional, Dict, List, Tuple
import logging

# Optional LDAP imports: the service degrades gracefully when ldap3 is missing
try:
    import ldap3
    from ldap3 import Server, Connection, ALL, SUBTREE
    from ldap3.core.exceptions import LDAPException, LDAPBindError
    LDAP3_AVAILABLE = True
except ImportError:
    LDAP3_AVAILABLE = False

from config.settings import settings

logger = logging.getLogger(__name__)


class LDAPAUTHService:
    """LDAP authentication service with group membership verification"""

    def __init__(self):
        self.server = None
        self.connection = None
        self._initialized = False

    def _initialize_server(self):
        """Initialize the LDAP server connection"""
        if self._initialized:
            return

        try:
            # Build the server URL
            if settings.ldap_use_ssl:
                protocol = "ldaps"
                port = 636  # Default LDAPS port
            else:
                protocol = "ldap"
                port = settings.ldap_port

            server_url = f"{protocol}://{settings.ldap_server}:{port}"
            self.server = Server(server_url, get_info=ALL)

            # Bind with the service account if configured
            if settings.ldap_bind_dn and settings.ldap_bind_password:
                self.connection = Connection(
                    self.server,
                    user=settings.ldap_bind_dn,
                    password=settings.ldap_bind_password,
                    auto_bind=True
                )
                logger.info("LDAP connection established with bind DN")
            else:
                # Anonymous bind for testing
                self.connection = Connection(self.server, auto_bind=True)
                logger.info("LDAP connection established with anonymous bind")

            self._initialized = True
            logger.info("LDAP server initialized successfully")

        except LDAPException as e:
            logger.error(f"LDAP initialization failed: {e}")
            raise
    def authenticate_user(self, username: str, password: str) -> Tuple[bool, Optional[str], Optional[Dict]]:
        """
        Authenticate a user against LDAP and check group membership.

        Returns:
            Tuple of (success, user_dn, user_info)
        """
        if not LDAP3_AVAILABLE:
            logger.warning("ldap3 not available - LDAP authentication disabled")
            return False, None, None

        if not settings.ldap_server:
            logger.warning("LDAP not configured")
            return False, None, None

        try:
            self._initialize_server()

            # Search for the user
            user_dn = self._find_user_dn(username)
            if not user_dn:
                logger.warning(f"User {username} not found in LDAP")
                return False, None, None

            # Bind as the user to verify the password
            user_conn = Connection(self.server, user=user_dn, password=password, auto_bind=True)

            if not user_conn.bound:
                logger.warning(f"Authentication failed for user {username}")
                return False, None, None

            # Get user information
            user_info = self._get_user_info(username, user_dn)

            # Check group membership
            if not self._check_group_membership(username, user_dn):
                logger.warning(f"User {username} is not authorized - not in KPI group")
                user_conn.unbind()
                return False, None, None

            logger.info(f"User {username} authenticated and authorized")
            user_conn.unbind()
            return True, user_dn, user_info

        except LDAPBindError as e:
            logger.warning(f"Authentication failed for user {username}: {e}")
            return False, None, None
        except Exception as e:
            logger.error(f"LDAP authentication error: {e}")
            return False, None, None
    def _find_user_dn(self, username: str) -> Optional[str]:
        """Find the user's DN in the LDAP directory"""
        try:
            search_base = settings.ldap_base_dn or settings.ldap_group_base_dn
            search_filter = settings.ldap_user_search_filter.format(username=username)

            self.connection.search(
                search_base=search_base,
                search_filter=search_filter,
                search_scope=SUBTREE,
                attributes=['cn', 'mail', 'uid']  # entry_dn is always available; 'dn' is not a requestable attribute
            )

            if self.connection.entries:
                return str(self.connection.entries[0].entry_dn)

            return None

        except LDAPException as e:
            logger.error(f"User search failed: {e}")
            return None

    def _get_user_info(self, username: str, user_dn: str) -> Optional[Dict]:
        """Get user information from LDAP"""
        try:
            self.connection.search(
                search_base=user_dn,
                search_filter='(objectClass=person)',
                search_scope=ldap3.BASE,
                attributes=['cn', 'mail', 'uid', 'givenName', 'sn', 'telephoneNumber']
            )

            if self.connection.entries:
                entry = self.connection.entries[0]
                return {
                    'username': str(getattr(entry, 'uid', [username])[0]) if hasattr(entry, 'uid') else username,
                    'full_name': str(getattr(entry, 'cn', [''])[0]) if hasattr(entry, 'cn') else '',
                    'email': str(getattr(entry, 'mail', [''])[0]) if hasattr(entry, 'mail') else '',
                    'first_name': str(getattr(entry, 'givenName', [''])[0]) if hasattr(entry, 'givenName') else '',
                    'last_name': str(getattr(entry, 'sn', [''])[0]) if hasattr(entry, 'sn') else '',
                    'phone': str(getattr(entry, 'telephoneNumber', [''])[0]) if hasattr(entry, 'telephoneNumber') else '',
                    'ldap_dn': user_dn
                }

            return None

        except LDAPException as e:
            logger.error(f"User info retrieval failed: {e}")
            return None
    def _check_group_membership(self, username: str, user_dn: str) -> bool:
        """Check whether the user is a member of the authorized KPI group"""
        if not settings.ldap_kpi_group_dn:
            logger.warning("No KPI group configured - access denied")
            return False

        try:
            # Method 1: check whether the user DN appears in the group's member list
            if self._check_user_in_group_by_member(user_dn):
                return True

            # Method 2: check whether the group DN appears in the user's memberOf attribute
            if self._check_user_member_of_group(username, user_dn):
                return True

            logger.warning(f"User {username} not found in KPI group {settings.ldap_kpi_group_dn}")
            return False

        except Exception as e:
            logger.error(f"Group membership check failed: {e}")
            return False

    def _check_user_in_group_by_member(self, user_dn: str) -> bool:
        """Check whether the user DN is listed in the group's members"""
        try:
            self.connection.search(
                search_base=settings.ldap_kpi_group_dn,
                search_filter='(objectClass=group)',
                search_scope=ldap3.BASE,
                attributes=[settings.ldap_group_member_attribute]
            )

            if self.connection.entries:
                entry = self.connection.entries[0]
                members = getattr(entry, settings.ldap_group_member_attribute, [])

                for member in members:
                    if str(member) == user_dn:
                        return True

            return False

        except LDAPException as e:
            logger.error(f"Group member check failed: {e}")
            return False
    def _check_user_member_of_group(self, username: str, user_dn: str) -> bool:
        """Check whether the group DN appears in the user's memberOf attribute"""
        try:
            self.connection.search(
                search_base=user_dn,
                search_filter='(objectClass=person)',
                search_scope=ldap3.BASE,
                attributes=[settings.ldap_user_member_attribute]
            )

            if self.connection.entries:
                entry = self.connection.entries[0]
                member_of = getattr(entry, settings.ldap_user_member_attribute, [])

                for group_dn in member_of:
                    if str(group_dn) == settings.ldap_kpi_group_dn:
                        return True

            return False

        except LDAPException as e:
            logger.error(f"User memberOf check failed: {e}")
            return False
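Both membership checks compare DNs with exact string equality, while directory servers treat DNs case-insensitively and allow spacing variations. A hedged normalization sketch (not part of the original code; real DN matching per the LDAP spec handles escaping and attribute-name aliases as well):

```python
def normalize_dn(dn: str) -> str:
    """Naive DN normalization for comparison: lowercase each RDN and
    strip whitespace around the comma separators."""
    return ','.join(part.strip().lower() for part in dn.split(','))

a = 'CN=KPI_Users, OU=Groups, DC=example, DC=com'
b = 'cn=kpi_users,ou=groups,dc=example,dc=com'
print(normalize_dn(a) == normalize_dn(b))  # True
```

Applying such a normalization to both sides of the `str(member) == user_dn` comparison would make the group checks robust to server-side formatting differences.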
    def get_authorized_users(self) -> List[Dict]:
        """Get all users in the authorized KPI group"""
        if not settings.ldap_kpi_group_dn:
            return []

        try:
            self._initialize_server()

            # Fetch the group's member list
            self.connection.search(
                search_base=settings.ldap_kpi_group_dn,
                search_filter='(objectClass=group)',
                search_scope=ldap3.BASE,
                attributes=[settings.ldap_group_member_attribute]
            )

            if not self.connection.entries:
                return []

            entry = self.connection.entries[0]
            members = getattr(entry, settings.ldap_group_member_attribute, [])

            authorized_users = []

            for member_dn in members:
                # Resolve user info for each member
                try:
                    member_info = self._get_user_info_by_dn(str(member_dn))
                    if member_info:
                        authorized_users.append(member_info)
                except Exception as e:
                    logger.warning(f"Failed to get info for member {member_dn}: {e}")
                    continue

            return authorized_users

        except Exception as e:
            logger.error(f"Failed to get authorized users: {e}")
            return []

    def _get_user_info_by_dn(self, user_dn: str) -> Optional[Dict]:
        """Get user information by DN"""
        try:
            self.connection.search(
                search_base=user_dn,
                search_filter='(objectClass=person)',
                search_scope=ldap3.BASE,
                attributes=['cn', 'mail', 'uid', 'givenName', 'sn']
            )

            if self.connection.entries:
                entry = self.connection.entries[0]
                return {
                    'username': str(getattr(entry, 'uid', [''])[0]) if hasattr(entry, 'uid') else '',
                    'full_name': str(getattr(entry, 'cn', [''])[0]) if hasattr(entry, 'cn') else '',
                    'email': str(getattr(entry, 'mail', [''])[0]) if hasattr(entry, 'mail') else '',
                    'first_name': str(getattr(entry, 'givenName', [''])[0]) if hasattr(entry, 'givenName') else '',
                    'last_name': str(getattr(entry, 'sn', [''])[0]) if hasattr(entry, 'sn') else '',
                    'ldap_dn': user_dn
                }

            return None

        except Exception as e:
            logger.error(f"Failed to get user info by DN {user_dn}: {e}")
            return None
    def test_connection(self) -> Tuple[bool, str]:
        """Test the LDAP connection and configuration"""
        if not LDAP3_AVAILABLE:
            return False, "ldap3 not available - cannot test LDAP connection"

        try:
            self._initialize_server()

            # Run a simple base-level search
            if settings.ldap_base_dn:
                search_base = settings.ldap_base_dn
            elif settings.ldap_group_base_dn:
                search_base = settings.ldap_group_base_dn
            else:
                return False, "No search base DN configured"

            self.connection.search(
                search_base=search_base,
                search_filter='(objectClass=*)',
                search_scope=ldap3.BASE,
                attributes=[]
            )

            return True, "LDAP connection successful"

        except Exception as e:
            return False, f"LDAP connection failed: {str(e)}"

    def test_group_access(self) -> Tuple[bool, str]:
        """Test the group access configuration"""
        if not LDAP3_AVAILABLE:
            return False, "ldap3 not available - cannot test group access"

        if not settings.ldap_kpi_group_dn:
            return False, "No KPI group DN configured"

        try:
            self._initialize_server()

            # Test access to the group entry
            self.connection.search(
                search_base=settings.ldap_kpi_group_dn,
                search_filter='(objectClass=group)',
                search_scope=ldap3.BASE,
                attributes=[settings.ldap_group_member_attribute]
            )

            if not self.connection.entries:
                return False, f"Group {settings.ldap_kpi_group_dn} not found"

            entry = self.connection.entries[0]
            members = getattr(entry, settings.ldap_group_member_attribute, [])

            return True, f"Group access successful. Found {len(members)} members"

        except Exception as e:
            return False, f"Group access test failed: {str(e)}"

    def __del__(self):
        """Clean up the LDAP connection"""
        if self.connection and self.connection.bound:
            try:
                self.connection.unbind()
            except Exception:
                pass


# Global LDAP auth service instance
ldap_auth_service = LDAPAUTHService()
258
kpi_analysis/app/services/nextcloud_service.py
Normal file
@ -0,0 +1,258 @@
"""
Nextcloud integration service

Handles OAuth authentication and file operations with the Nextcloud server.
"""

import requests
import json
import base64
from typing import List, Dict, Any, Optional
from datetime import datetime, timedelta
from urllib.parse import urljoin, urlencode
import hashlib

from config.settings import settings


class NextcloudService:
    """Nextcloud integration service"""

    def __init__(self):
        self.base_url = settings.nextcloud_base_url.rstrip('/')
        self.oauth_configured = bool(settings.nextcloud_oauth_client_id and settings.nextcloud_oauth_client_secret)
        self.access_token: Optional[str] = None
        self.refresh_token: Optional[str] = None
        self.token_expires: Optional[datetime] = None

    def get_oauth_url(self) -> str:
        """Generate the OAuth authorization URL for Nextcloud"""
        if not self.oauth_configured:
            raise ValueError("Nextcloud OAuth not configured")

        params = {
            'client_id': settings.nextcloud_oauth_client_id,
            'redirect_uri': settings.nextcloud_redirect_uri,
            'response_type': 'code',
            'scope': 'files',
            'state': 'kpi_analysis_state'
        }

        auth_url = f"{self.base_url}/index.php/apps/oauth2/authorize"
        return f"{auth_url}?{urlencode(params)}"  # urlencode percent-encodes the query values
    async def exchange_code_for_token(self, authorization_code: str) -> Dict[str, Any]:
        """Exchange an authorization code for an access token"""
        if not self.oauth_configured:
            raise ValueError("Nextcloud OAuth not configured")

        token_url = f"{self.base_url}/index.php/apps/oauth2/api/v1/token"

        data = {
            'grant_type': 'authorization_code',
            'client_id': settings.nextcloud_oauth_client_id,
            'client_secret': settings.nextcloud_oauth_client_secret,
            'redirect_uri': settings.nextcloud_redirect_uri,
            'code': authorization_code
        }

        response = requests.post(token_url, data=data)
        response.raise_for_status()

        token_data = response.json()

        # Store the tokens
        self.access_token = token_data['access_token']
        self.refresh_token = token_data['refresh_token']
        self.token_expires = datetime.now() + timedelta(seconds=token_data['expires_in'])

        return token_data

    async def refresh_access_token(self) -> Dict[str, Any]:
        """Refresh the access token using the refresh token"""
        if not self.refresh_token:
            raise ValueError("No refresh token available")

        token_url = f"{self.base_url}/index.php/apps/oauth2/api/v1/token"

        data = {
            'grant_type': 'refresh_token',
            'client_id': settings.nextcloud_oauth_client_id,
            'client_secret': settings.nextcloud_oauth_client_secret,
            'refresh_token': self.refresh_token
        }

        response = requests.post(token_url, data=data)
        response.raise_for_status()

        token_data = response.json()

        # Update the tokens (the server may or may not rotate the refresh token)
        self.access_token = token_data['access_token']
        self.refresh_token = token_data.get('refresh_token', self.refresh_token)
        self.token_expires = datetime.now() + timedelta(seconds=token_data['expires_in'])

        return token_data
    async def ensure_valid_token(self):
        """Ensure we hold a valid access token, refreshing it shortly before expiry"""
        if not self.access_token or not self.token_expires:
            raise ValueError("No access token available")

        if datetime.now() >= self.token_expires - timedelta(minutes=5):
            await self.refresh_access_token()

    def _get_headers(self) -> Dict[str, str]:
        """Get request headers with authentication"""
        if not self.access_token:
            raise ValueError("Not authenticated with Nextcloud")

        return {
            'Authorization': f'Bearer {self.access_token}',
            'Content-Type': 'application/json'
        }
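The expiry check in `ensure_valid_token` reduces to one comparison with a safety margin; a standalone sketch (the five-minute margin matches the service code above):

```python
from datetime import datetime, timedelta

def needs_refresh(now: datetime, expires: datetime, margin_minutes: int = 5) -> bool:
    """Refresh once we are within `margin_minutes` of token expiry."""
    return now >= expires - timedelta(minutes=margin_minutes)

now = datetime(2025, 1, 1, 12, 0)
print(needs_refresh(now, now + timedelta(minutes=10)))  # False
print(needs_refresh(now, now + timedelta(minutes=4)))   # True
```

The margin avoids racing the server: a token that is technically valid when the request is built could expire in transit.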
    async def list_files(self, folder_path: str = None) -> List[Dict[str, Any]]:
        """List files in Nextcloud"""
        await self.ensure_valid_token()

        if not folder_path:
            folder_path = settings.nextcloud_kpi_folder

        # Use the WebDAV API to list files
        dav_url = f"{self.base_url}/remote.php/dav/files/{settings.nextcloud_username}/"
        folder_url = urljoin(dav_url, folder_path.lstrip('/'))

        headers = {
            'Authorization': f'Bearer {self.access_token}',
            'Depth': '1'
        }

        response = requests.request('PROPFIND', folder_url, headers=headers)

        if response.status_code == 404:
            return []

        response.raise_for_status()

        # Parse the WebDAV multistatus response (simplified).
        # In production, consider using a proper WebDAV client library.
        files = []
        content = response.text

        # Simple XML parsing (for demonstration)
        import xml.etree.ElementTree as ET
        try:
            root = ET.fromstring(content)
            for response_elem in root.findall('.//{DAV:}response'):
                href = response_elem.find('{DAV:}href')
                if href is not None and href.text:
                    filename = href.text.split('/')[-1]
                    if filename and not filename.startswith('.'):
                        # Get file properties
                        propstat = response_elem.find('.//{DAV:}propstat')
                        if propstat is not None:
                            props = propstat.find('.//{DAV:}prop')
                            if props is not None:
                                getcontentlength = props.find('{DAV:}getcontentlength')
                                getlastmodified = props.find('{DAV:}getlastmodified')

                                files.append({
                                    'id': hashlib.md5(href.text.encode()).hexdigest(),
                                    'name': filename,
                                    'size': int(getcontentlength.text) if getcontentlength is not None else 0,
                                    'modified': getlastmodified.text if getlastmodified is not None else '',
                                    'path': href.text,
                                    'download_url': f"{self.base_url}/remote.php/dav{href.text}"
                                })
        except ET.ParseError:
            # Fallback - return an empty list if XML parsing fails
            pass

        return files

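The DAV-namespace lookups that `list_files` performs can be exercised in isolation. A minimal sketch, assuming a hypothetical PROPFIND multistatus payload (the href and size below are made up) and trimmed to the two properties the method actually reads:

```python
import xml.etree.ElementTree as ET

# Hypothetical multistatus response, as a WebDAV server would return it.
MULTISTATUS = """<?xml version="1.0"?>
<d:multistatus xmlns:d="DAV:">
  <d:response>
    <d:href>/remote.php/dav/files/demo/KPI/report.xlsx</d:href>
    <d:propstat>
      <d:prop>
        <d:getcontentlength>2048</d:getcontentlength>
        <d:getlastmodified>Mon, 06 Jan 2025 09:00:00 GMT</d:getlastmodified>
      </d:prop>
    </d:propstat>
  </d:response>
</d:multistatus>"""

def parse_multistatus(xml_text: str) -> list:
    """Extract (name, size) pairs using Clark-notation {DAV:} lookups."""
    files = []
    root = ET.fromstring(xml_text)
    for resp in root.findall('.//{DAV:}response'):
        href = resp.find('{DAV:}href')
        if href is None or not href.text:
            continue
        name = href.text.split('/')[-1]
        length = resp.find('.//{DAV:}getcontentlength')
        files.append((name, int(length.text) if length is not None else 0))
    return files
```

Note that the WebDAV namespace URI is literally the string `DAV:`, which is why the Clark-notation prefix looks unusual.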
    async def download_file(self, file_id: str) -> bytes:
        """Download a file from Nextcloud by file ID"""
        await self.ensure_valid_token()

        # First, list the files to resolve the ID to a path
        files = await self.list_files()
        target_file = None

        for file_info in files:
            if file_info['id'] == file_id:
                target_file = file_info
                break

        if not target_file:
            raise FileNotFoundError(f"File with ID {file_id} not found")

        # Download the file
        download_url = target_file['download_url']
        headers = {'Authorization': f'Bearer {self.access_token}'}

        response = requests.get(download_url, headers=headers)
        response.raise_for_status()

        return response.content

    async def upload_file(self, file_path: str, upload_path: str) -> Dict[str, Any]:
        """Upload a file to Nextcloud"""
        await self.ensure_valid_token()

        dav_url = f"{self.base_url}/remote.php/dav/files/{settings.nextcloud_username}/"
        full_upload_path = urljoin(dav_url, upload_path.lstrip('/'))

        # Read file content
        with open(file_path, 'rb') as f:
            content = f.read()

        headers = {
            'Authorization': f'Bearer {self.access_token}',
            'Content-Type': 'application/octet-stream'
        }

        response = requests.put(full_upload_path, data=content, headers=headers)
        response.raise_for_status()

        return {
            'success': True,
            'path': upload_path,
            'size': len(content)
        }

    async def delete_file(self, file_path: str) -> bool:
        """Delete a file from Nextcloud"""
        await self.ensure_valid_token()

        dav_url = f"{self.base_url}/remote.php/dav/files/{settings.nextcloud_username}/"
        full_file_path = urljoin(dav_url, file_path.lstrip('/'))

        headers = {'Authorization': f'Bearer {self.access_token}'}

        response = requests.delete(full_file_path, headers=headers)
        response.raise_for_status()

        return True

    async def create_folder(self, folder_path: str) -> bool:
        """Create a folder in Nextcloud"""
        await self.ensure_valid_token()

        dav_url = f"{self.base_url}/remote.php/dav/files/{settings.nextcloud_username}/"
        full_folder_path = urljoin(dav_url, folder_path.lstrip('/'))

        headers = {
            'Authorization': f'Bearer {self.access_token}',
            'Content-Type': 'application/xml'
        }

        # Extended MKCOL request body to create the folder
        # (plain string - there are no placeholders, so no f-prefix)
        xml_body = '<d:mkcol xmlns:d="DAV:"><d:set><d:prop><d:resourcetype><d:collection/></d:resourcetype></d:prop></d:set></d:mkcol>'

        response = requests.request('MKCOL', full_folder_path, data=xml_body, headers=headers)

        # 201 Created, or 405 Method Not Allowed if the folder already exists
        return response.status_code in [201, 405]


# Global Nextcloud service instance
nextcloud_service = NextcloudService()
665
kpi_analysis/app/services/pdf_generator.py
Normal file
@ -0,0 +1,665 @@
"""
PDF report generator for KPI Analysis
Creates professional PDF reports with charts and analysis
"""

import os
from typing import List, Dict, Any, Optional
from datetime import datetime
import base64
from io import BytesIO
import pandas as pd

# Optional imports for PDF generation
try:
    import matplotlib.pyplot as plt
    import matplotlib.patches as mpatches
    import seaborn as sns
    import warnings
    # Suppress matplotlib warnings
    warnings.filterwarnings('ignore', category=UserWarning, module='matplotlib')
    MATPLOTLIB_AVAILABLE = True
except ImportError:
    MATPLOTLIB_AVAILABLE = False

try:
    from reportlab.lib.pagesizes import letter, A4
    from reportlab.lib import colors
    from reportlab.lib.styles import getSampleStyleSheet, ParagraphStyle
    from reportlab.lib.units import inch
    from reportlab.platypus import SimpleDocTemplate, Paragraph, Spacer, Table, TableStyle, Image, PageBreak
    from reportlab.lib.enums import TA_CENTER, TA_LEFT, TA_RIGHT
    from reportlab.pdfgen import canvas
    from reportlab.platypus.frames import Frame
    from reportlab.platypus.doctemplate import PageTemplate, BaseDocTemplate
    REPORTLAB_AVAILABLE = True
except ImportError:
    REPORTLAB_AVAILABLE = False
    # Create dummy placeholders for when ReportLab is not available
    # (note: 'darkgreen' was previously listed twice; duplicate removed)
    colors = type('colors', (), {
        'darkblue': None, 'whitesmoke': None, 'darkgreen': None, 'red': None,
        'beige': None, 'black': None, 'blue': None, 'lightgrey': None,
        'lightgreen': None, 'grey': None, 'lightblue': None
    })()

    TA_CENTER = 1
    TA_LEFT = 0
    TA_RIGHT = 2

    # Dummy classes and constants
    letter = A4 = (595, 842)
    inch = 72

    class SimpleDocTemplate:
        def __init__(self, *args, **kwargs): pass
        def build(self, *args): pass

    class ParagraphStyle:
        def __init__(self, *args, **kwargs): pass

    class Paragraph:
        def __init__(self, *args, **kwargs): pass

    class Spacer:
        def __init__(self, *args, **kwargs): pass

    class Table:
        def __init__(self, *args, **kwargs): pass
        def setStyle(self, *args): pass

    class TableStyle:
        pass

    class Image:
        def __init__(self, *args, **kwargs): pass

    class PageBreak:
        pass

    def getSampleStyleSheet():
        return type('styles', (), {'Normal': None, 'Heading1': None, 'Heading2': None, 'Heading3': None})()

    # Create a dummy styles instance
    styles = getSampleStyleSheet()

import logging

from ..models.kpi_models import KPIFile, AnalysisResult
from config.settings import settings

logger = logging.getLogger(__name__)


class PDFGenerator:
    """Professional PDF report generator for KPI analysis"""

    def __init__(self):
        self.setup_styles()

    def setup_styles(self):
        """Set up document styles"""
        if REPORTLAB_AVAILABLE:
            self.styles = getSampleStyleSheet()

            # Custom styles
            self.styles.add(ParagraphStyle(
                name='CustomTitle',
                parent=self.styles['Heading1'],
                fontSize=20,
                spaceAfter=20,
                alignment=TA_CENTER,
                textColor=colors.darkblue
            ))

            self.styles.add(ParagraphStyle(
                name='CustomHeading2',
                parent=self.styles['Heading2'],
                fontSize=14,
                spaceAfter=12,
                textColor=colors.darkblue
            ))

            self.styles.add(ParagraphStyle(
                name='CustomHeading3',
                parent=self.styles['Heading3'],
                fontSize=12,
                spaceAfter=10,
                textColor=colors.darkblue
            ))

            self.styles.add(ParagraphStyle(
                name='CustomNormal',
                parent=self.styles['Normal'],
                fontSize=10,
                spaceAfter=6
            ))

            self.styles.add(ParagraphStyle(
                name='Highlight',
                parent=self.styles['Normal'],
                fontSize=11,
                textColor=colors.darkgreen,
                spaceAfter=8
            ))
        else:
            self.styles = None

    async def generate_report(self, kpi_file: KPIFile, analysis_result: AnalysisResult) -> str:
        """Generate a comprehensive PDF report"""
        if not REPORTLAB_AVAILABLE:
            logger.warning("ReportLab not available - skipping PDF generation")
            return ""

        try:
            # Create output directory
            os.makedirs(settings.reports_directory, exist_ok=True)

            # Generate filename - sanitize the name to remove invalid characters
            timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
            # Remove or replace invalid filename characters
            safe_name = kpi_file.summary.name.replace(' ', '_').replace('/', '_').replace('\\', '_').replace(':', '_')
            # Limit filename length (slicing is a no-op for short names)
            safe_name = safe_name[:50]
            filename = f"kpi_report_{safe_name}_{timestamp}.pdf"
            report_path = os.path.join(settings.reports_directory, filename)

            # Create PDF document
            doc = SimpleDocTemplate(
                report_path,
                pagesize=A4,
                rightMargin=72,
                leftMargin=72,
                topMargin=72,
                bottomMargin=18
            )

            # Build PDF content
            story = []

            # Title page
            self._add_title_page(story, kpi_file)

            # Executive summary
            self._add_executive_summary(story, kpi_file, analysis_result)

            # Performance overview
            self._add_performance_overview(story, kpi_file, analysis_result)

            # Detailed analysis
            self._add_detailed_analysis(story, kpi_file, analysis_result)

            # Charts and visualizations
            self._add_charts_section(story, kpi_file, analysis_result)

            # Recommendations
            self._add_recommendations_section(story, analysis_result)

            # Appendix
            self._add_appendix(story, kpi_file)

            # Build PDF
            doc.build(story)

            logger.info(f"PDF report generated: {report_path}")
            return report_path

        except Exception as e:
            logger.error(f"Error generating PDF report: {str(e)}")
            return ""

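The replace-and-truncate filename sanitization used in `generate_report` is easy to pin down as a standalone helper. A small sketch of the same logic (the employee name below is a made-up example):

```python
def safe_report_filename(name: str, timestamp: str, max_len: int = 50) -> str:
    """Replace filesystem-hostile characters and cap the length,
    mirroring the sanitization generate_report performs."""
    safe = name.replace(' ', '_').replace('/', '_').replace('\\', '_').replace(':', '_')
    return f"kpi_report_{safe[:max_len]}_{timestamp}.pdf"
```

Because `str.replace` only handles the four characters listed, names containing other reserved characters (e.g. `?` or `*` on Windows) would still pass through; a stricter allowlist regex would be the more defensive choice.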
    def _add_title_page(self, story: List, kpi_file: KPIFile):
        """Add title page to PDF"""
        # Title
        title = Paragraph("KPI Performance Analysis Report", self.styles['CustomTitle'])
        story.append(title)
        story.append(Spacer(1, 0.5*inch))

        # Subtitle
        subtitle = Paragraph(f"Performance Period: {kpi_file.summary.performance_period}",
                             self.styles['CustomHeading2'])
        story.append(subtitle)
        story.append(Spacer(1, 0.3*inch))

        # Employee information
        info_data = [
            ['Name:', kpi_file.summary.name],
            ['Position:', kpi_file.summary.position],
            ['Department:', kpi_file.summary.job_title],
            ['Supervisor:', f"{kpi_file.summary.supervisor_name} - {kpi_file.summary.supervisor_position}"],
            ['Report Date:', datetime.now().strftime("%B %d, %Y")]
        ]

        info_table = Table(info_data, colWidths=[1.5*inch, 4*inch])
        info_table.setStyle(TableStyle([
            ('ALIGN', (0, 0), (-1, -1), 'LEFT'),
            ('FONTNAME', (0, 0), (0, -1), 'Helvetica-Bold'),
            ('FONTNAME', (1, 0), (1, -1), 'Helvetica'),
            ('FONTSIZE', (0, 0), (-1, -1), 11),
            ('BOTTOMPADDING', (0, 0), (-1, -1), 6),
        ]))

        story.append(info_table)
        story.append(Spacer(1, 0.5*inch))

        # Overall score highlight (in points, not percentage)
        score_color = colors.darkgreen if kpi_file.summary.final_score >= 80 else colors.red
        score_text = f"Overall Performance Score: {kpi_file.summary.final_score:.2f} Points"

        score_para = Paragraph(score_text, self.styles['CustomTitle'])
        score_para.textColor = score_color
        story.append(score_para)

        story.append(PageBreak())

    def _add_executive_summary(self, story: List, kpi_file: KPIFile, analysis_result: AnalysisResult):
        """Add executive summary section"""
        # Section header
        story.append(Paragraph("Executive Summary", self.styles['CustomHeading2']))
        story.append(Spacer(1, 0.2*inch))

        # Key metrics
        metrics_data = [
            ['Metric', 'Value', 'Status'],
            ['Overall Score', f"{kpi_file.summary.final_score:.2f} Points",
             'Excellent' if kpi_file.summary.final_score >= 90 else 'Good' if kpi_file.summary.final_score >= 80 else 'Needs Improvement'],
            ['Achievement Rate', f"{kpi_file.achievement_rate:.1f}%",
             'Strong' if kpi_file.achievement_rate >= 80 else 'Moderate'],
            ['Total KPIs Evaluated', str(len(kpi_file.achievements.items)), ''],
            ['KPIs Achieved', str(sum(1 for item in kpi_file.achievements.items if item.status.value == "Achieve")), ''],
            ['KPIs Not Achieved', str(sum(1 for item in kpi_file.achievements.items if item.status.value == "Not Achieve")), '']
        ]

        metrics_table = Table(metrics_data, colWidths=[2*inch, 1.5*inch, 1.5*inch])
        metrics_table.setStyle(TableStyle([
            ('BACKGROUND', (0, 0), (-1, 0), colors.darkblue),
            ('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke),
            ('ALIGN', (0, 0), (-1, -1), 'CENTER'),
            ('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'),
            ('FONTSIZE', (0, 0), (-1, 0), 10),
            ('BOTTOMPADDING', (0, 0), (-1, 0), 12),
            ('BACKGROUND', (0, 1), (-1, -1), colors.beige),
            ('GRID', (0, 0), (-1, -1), 1, colors.black)
        ]))

        story.append(metrics_table)
        story.append(Spacer(1, 0.3*inch))

        # Key insights
        story.append(Paragraph("Key Insights:", self.styles['CustomHeading3']))

        if analysis_result.insights:
            for insight in analysis_result.insights[:5]:  # Top 5 insights
                story.append(Paragraph(f"• {insight}", self.styles['CustomNormal']))
        else:
            story.append(Paragraph("• Analysis shows performance trends across all KPI perspectives",
                                   self.styles['CustomNormal']))

        story.append(PageBreak())

    def _add_performance_overview(self, story: List, kpi_file: KPIFile, analysis_result: AnalysisResult):
        """Add performance overview section"""
        story.append(Paragraph("Performance Overview by Perspective", self.styles['CustomHeading2']))
        story.append(Spacer(1, 0.2*inch))

        # Perspective scores table
        perspective_data = [['Perspective', 'Score (%)', 'Rating', 'Target Gap']]

        ratings = {90: 'Excellent', 80: 'Good', 70: 'Satisfactory', 0: 'Needs Improvement'}

        for category, score in kpi_file.perspective_scores.items():
            rating = next((r for threshold, r in sorted(ratings.items(), reverse=True) if score >= threshold), 'Needs Improvement')
            target_gap = 80 - score  # Assuming an 80% target

            perspective_data.append([
                category.value,
                f"{score:.1f}%",
                rating,
                f"{target_gap:+.1f}%" if target_gap != 0 else "On Target"
            ])

        perspective_table = Table(perspective_data, colWidths=[2.5*inch, 1.2*inch, 1.5*inch, 1.3*inch])
        perspective_table.setStyle(TableStyle([
            ('BACKGROUND', (0, 0), (-1, 0), colors.darkblue),
            ('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke),
            ('ALIGN', (0, 0), (-1, -1), 'CENTER'),
            ('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'),
            ('FONTSIZE', (0, 0), (-1, 0), 10),
            ('BOTTOMPADDING', (0, 0), (-1, 0), 12),
            ('BACKGROUND', (0, 1), (-1, -1), colors.lightgrey),
            ('GRID', (0, 0), (-1, -1), 1, colors.black)
        ]))

        story.append(perspective_table)
        story.append(Spacer(1, 0.3*inch))

        # Achievement status
        story.append(Paragraph("KPI Achievement Status", self.styles['CustomHeading3']))

        achievement_data = [['KPI Status', 'Count', 'Percentage']]

        total_kpis = len(kpi_file.achievements.items)
        for status in ['Achieve', 'Not Achieve', 'No Data']:
            count = sum(1 for item in kpi_file.achievements.items if item.status.value == status)
            percentage = (count / total_kpis * 100) if total_kpis > 0 else 0
            achievement_data.append([status, str(count), f"{percentage:.1f}%"])

        achievement_table = Table(achievement_data, colWidths=[2*inch, 1*inch, 1.5*inch])
        achievement_table.setStyle(TableStyle([
            ('BACKGROUND', (0, 0), (-1, 0), colors.darkgreen),
            ('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke),
            ('ALIGN', (0, 0), (-1, -1), 'CENTER'),
            ('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'),
            ('FONTSIZE', (0, 0), (-1, 0), 10),
            ('BOTTOMPADDING', (0, 0), (-1, 0), 12),
            ('BACKGROUND', (0, 1), (-1, -1), colors.lightgreen),
            ('GRID', (0, 0), (-1, -1), 1, colors.black)
        ]))

        story.append(achievement_table)
        story.append(PageBreak())

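The threshold-to-rating lookup in `_add_performance_overview` depends on scanning the thresholds in descending order, so the first match is the highest band the score clears. Sketched on its own, using the same ratings table as the method:

```python
RATINGS = {90: 'Excellent', 80: 'Good', 70: 'Satisfactory', 0: 'Needs Improvement'}

def rate(score: float) -> str:
    """Return the label of the highest threshold the score meets."""
    # sorted(..., reverse=True) walks thresholds 90, 80, 70, 0;
    # next() stops at the first threshold the score clears.
    return next((label for threshold, label in sorted(RATINGS.items(), reverse=True)
                 if score >= threshold), 'Needs Improvement')
```

Without `reverse=True` the scan would start at the 0 threshold and every score would be rated 'Needs Improvement', which is why the ordering matters here.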
    def _add_detailed_analysis(self, story: List, kpi_file: KPIFile, analysis_result: AnalysisResult):
        """Add detailed analysis section"""
        story.append(Paragraph("Detailed Performance Analysis", self.styles['CustomHeading2']))
        story.append(Spacer(1, 0.2*inch))

        # Analyze each perspective
        for category, score in kpi_file.perspective_scores.items():
            story.append(Paragraph(f"{category.value} Perspective", self.styles['CustomHeading3']))

            # Get KPIs for this category
            category_kpis = [sheet for sheet in kpi_file.kpi_sheets if sheet.category == category]

            if category_kpis:
                # Create KPI performance table
                kpi_data = [['KPI Code', 'Name', 'Method', 'Average Score', 'Performance']]

                for kpi_sheet in category_kpis:
                    if kpi_sheet.period_data:
                        # Loop variable renamed from 'pd' to avoid shadowing pandas
                        scores = [period.score for period in kpi_sheet.period_data if period.score is not None]
                        if scores:
                            avg_score = sum(scores) / len(scores)
                            performance = 'Excellent' if avg_score >= 90 else 'Good' if avg_score >= 80 else 'Needs Improvement'

                            # Get calculation method from unit
                            method = kpi_sheet.unit if kpi_sheet.unit else 'N/A'
                            if method == 'Percentage':
                                method = '%'
                            elif method == 'Count':
                                method = '#'

                            kpi_data.append([
                                kpi_sheet.code,
                                kpi_sheet.name[:25] + '...' if len(kpi_sheet.name) > 25 else kpi_sheet.name,
                                method,
                                f"{avg_score:.1f}%",
                                performance
                            ])

                if len(kpi_data) > 1:  # More than just the header row
                    kpi_table = Table(kpi_data, colWidths=[0.8*inch, 2.5*inch, 0.6*inch, 1*inch, 1.3*inch])
                    kpi_table.setStyle(TableStyle([
                        ('BACKGROUND', (0, 0), (-1, 0), colors.blue),
                        ('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke),
                        ('ALIGN', (0, 0), (-1, -1), 'CENTER'),
                        ('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'),
                        ('FONTSIZE', (0, 0), (-1, 0), 9),
                        ('BOTTOMPADDING', (0, 0), (-1, 0), 8),
                        ('BACKGROUND', (0, 1), (-1, -1), colors.lightblue),
                        ('GRID', (0, 0), (-1, -1), 1, colors.black),
                        ('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
                    ]))

                    story.append(kpi_table)
                    story.append(Spacer(1, 0.2*inch))
            else:
                story.append(Paragraph("No KPIs found for this perspective", self.styles['CustomNormal']))
                story.append(Spacer(1, 0.1*inch))

        story.append(PageBreak())

    def _add_charts_section(self, story: List, kpi_file: KPIFile, analysis_result: AnalysisResult):
        """Add charts section"""
        story.append(Paragraph("Performance Charts and Visualizations", self.styles['CustomHeading2']))
        story.append(Spacer(1, 0.2*inch))

        charts_added = False

        # Create charts using matplotlib
        try:
            # Chart 1: Perspective scores bar chart
            chart1_path = self._create_perspective_chart(kpi_file)
            if chart1_path and os.path.exists(chart1_path):
                story.append(Paragraph("Performance by Perspective", self.styles['CustomHeading3']))
                img1 = Image(chart1_path, width=6*inch, height=4*inch)
                story.append(img1)
                story.append(Spacer(1, 0.3*inch))
                charts_added = True
            else:
                logger.info("Perspective chart not generated - likely no data available")

            # Chart 2: Achievement status pie chart
            chart2_path = self._create_achievement_chart(kpi_file)
            if chart2_path and os.path.exists(chart2_path):
                story.append(Paragraph("KPI Achievement Distribution", self.styles['CustomHeading3']))
                img2 = Image(chart2_path, width=6*inch, height=4*inch)
                story.append(img2)
                story.append(Spacer(1, 0.3*inch))
                charts_added = True
            else:
                logger.info("Achievement chart not generated - likely no data available")

            if not charts_added:
                story.append(Paragraph("Charts could not be generated due to insufficient data. Please ensure KPI data is properly populated in the Excel file.", self.styles['CustomNormal']))

        except Exception as e:
            logger.error(f"Error adding charts: {str(e)}", exc_info=True)
            story.append(Paragraph("Charts could not be generated due to an error. Please check the logs for details.", self.styles['CustomNormal']))

        story.append(PageBreak())

    def _create_perspective_chart(self, kpi_file: KPIFile) -> Optional[str]:
        """Create perspective scores chart"""
        if not MATPLOTLIB_AVAILABLE:
            logger.warning("Matplotlib not available - skipping chart generation")
            return None

        try:
            categories = [cat.value.replace(' & ', '\n& ') for cat in kpi_file.perspective_scores.keys()]
            scores = list(kpi_file.perspective_scores.values())

            # Check if we have any valid scores
            if not scores or all(score == 0 for score in scores):
                logger.warning("No valid perspective scores for chart generation")
                return None

            # Set style and create figure with fixed size
            plt.style.use('seaborn-v0_8')
            fig, ax = plt.subplots(figsize=(10, 5), dpi=100)

            colors_list = ['#FF6B6B', '#4ECDC4', '#45B7D1', '#96CEB4']

            # Shorten category labels and wrap text
            short_categories = []
            for cat in categories:
                if len(cat) > 20:
                    # Split long labels into multiple lines
                    words = cat.split()
                    if len(words) > 2:
                        mid = len(words) // 2
                        cat = ' '.join(words[:mid]) + '\n' + ' '.join(words[mid:])
                short_categories.append(cat)

            bars = ax.bar(short_categories, scores, color=colors_list[:len(categories)], alpha=0.85, edgecolor='white', linewidth=2)

            # Add value labels on bars
            for bar, score in zip(bars, scores):
                height = bar.get_height()
                if height > 0:  # Only add a label if there's a value
                    ax.text(bar.get_x() + bar.get_width()/2., height + 2,
                            f'{score:.1f}%', ha='center', va='bottom', fontweight='bold', fontsize=11)

            # Add target line
            ax.axhline(y=80, color='#DC3545', linestyle='--', linewidth=2, alpha=0.8, label='Target (80%)')

            ax.set_ylabel('Score (%)', fontsize=13, fontweight='bold')
            ax.set_title('KPI Performance by Perspective', fontsize=15, fontweight='bold', pad=15)
            ax.set_ylim(0, max(scores) + 15 if scores else 100)
            ax.legend(loc='upper right', fontsize=10)
            ax.grid(True, alpha=0.2, axis='y')
            ax.set_axisbelow(True)

            # Rotate x-axis labels if needed
            plt.xticks(rotation=0, ha='center', fontsize=10)

            # Use tight_layout, ignoring layout warnings
            try:
                plt.tight_layout()
            except Exception:
                pass

            # Save chart with reasonable DPI and a white background
            chart_path = os.path.join(settings.reports_directory, f"perspective_chart_{datetime.now().strftime('%Y%m%d_%H%M%S')}.png")
            plt.savefig(chart_path, dpi=150, bbox_inches='tight', pad_inches=0.3, facecolor='white', edgecolor='none')
            plt.close()

            logger.info(f"Perspective chart saved: {chart_path}")
            return chart_path

        except Exception as e:
            logger.error(f"Error creating perspective chart: {str(e)}", exc_info=True)
            plt.close()  # Make sure the figure is closed on error
            return None

    def _create_achievement_chart(self, kpi_file: KPIFile) -> Optional[str]:
        """Create achievement status pie chart"""
        if not MATPLOTLIB_AVAILABLE:
            logger.warning("Matplotlib not available - skipping chart generation")
            return None

        try:
            # Get achievement counts
            achieved = sum(1 for item in kpi_file.achievements.items if item.status.value == "Achieve")
            not_achieved = sum(1 for item in kpi_file.achievements.items if item.status.value == "Not Achieve")
            no_data = sum(1 for item in kpi_file.achievements.items if item.status.value == "No Data")

            total_kpis = achieved + not_achieved + no_data

            # Check if we have any KPIs
            if total_kpis == 0:
                logger.warning("No KPIs found for achievement chart")
                return None

            # Set style and create figure with fixed size
            plt.style.use('seaborn-v0_8')
            fig, ax = plt.subplots(figsize=(8, 8), dpi=100)

            labels = ['Achieved', 'Not Achieved', 'No Data']
            sizes = [achieved, not_achieved, no_data]
            colors_list = ['#28a745', '#dc3545', '#ffc107']

            # Filter out zero values
            filtered_data = [(label, size, color) for label, size, color in zip(labels, sizes, colors_list) if size > 0]

            if not filtered_data:
                logger.warning("All achievement values are zero")
                plt.close()
                return None

            filtered_labels, filtered_sizes, filtered_colors = zip(*filtered_data)

            # Create explode tuple - pull out the first slice (usually 'Achieved')
            explode = tuple(0.1 if i == 0 else 0 for i in range(len(filtered_sizes)))

            # Create pie chart
            wedges, texts, autotexts = ax.pie(filtered_sizes, explode=explode, labels=filtered_labels,
                                              colors=filtered_colors, autopct='%1.1f%%', shadow=True,
                                              startangle=90)

            # Enhance percentage text
            for autotext in autotexts:
                autotext.set_color('white')
                autotext.set_fontweight('bold')
                autotext.set_fontsize(12)

            ax.set_title('KPI Achievement Status Distribution', fontsize=14, fontweight='bold', pad=20)

            # Use tight_layout, ignoring layout warnings
            try:
                plt.tight_layout()
            except Exception:
                pass

            # Save chart with reasonable DPI and a white background
            chart_path = os.path.join(settings.reports_directory, f"achievement_chart_{datetime.now().strftime('%Y%m%d_%H%M%S')}.png")
            plt.savefig(chart_path, dpi=150, bbox_inches='tight', pad_inches=0.3, facecolor='white', edgecolor='none')
            plt.close()

            logger.info(f"Achievement chart saved: {chart_path}")
            return chart_path

        except Exception as e:
            logger.error(f"Error creating achievement chart: {str(e)}", exc_info=True)
            plt.close()  # Make sure the figure is closed on error
            return None

    def _add_recommendations_section(self, story: List, analysis_result: AnalysisResult):
        """Add recommendations section"""
        story.append(Paragraph("Recommendations for Improvement", self.styles['CustomHeading2']))
        story.append(Spacer(1, 0.2*inch))

        if analysis_result.recommendations:
            for i, recommendation in enumerate(analysis_result.recommendations, 1):
                story.append(Paragraph(f"{i}. {recommendation}", self.styles['CustomNormal']))
                story.append(Spacer(1, 0.1*inch))
        else:
            story.append(Paragraph("• Continue monitoring current performance levels", self.styles['CustomNormal']))
            story.append(Paragraph("• Focus on areas with scores below 80%", self.styles['CustomNormal']))
            story.append(Paragraph("• Develop action plans for underperforming KPIs", self.styles['CustomNormal']))

        story.append(PageBreak())

    def _add_appendix(self, story: List, kpi_file: KPIFile):
        """Add appendix with detailed data"""
        story.append(Paragraph("Appendix: Detailed KPI Data", self.styles['CustomHeading2']))
        story.append(Spacer(1, 0.2*inch))

        # Add all KPI details
        for kpi_sheet in kpi_file.kpi_sheets:
            story.append(Paragraph(f"{kpi_sheet.code}: {kpi_sheet.name}", self.styles['CustomHeading3']))

            # Create detailed table for this KPI
            if kpi_sheet.period_data:
                period_data = [['Period', 'Realization', 'Target', 'Score', 'Status']]

                # Loop variable renamed from 'pd' to avoid shadowing pandas
                for period in kpi_sheet.period_data:
                    period_data.append([
                        period.period,
                        f"{period.realization}" if period.realization is not None else "N/A",
                        f"{period.target}" if period.target is not None else "N/A",
                        f"{period.score:.1f}%" if period.score is not None else "N/A",
                        period.status
                    ])

                period_table = Table(period_data, colWidths=[1.5*inch, 1.2*inch, 1.2*inch, 1*inch, 1.6*inch])
                period_table.setStyle(TableStyle([
                    ('BACKGROUND', (0, 0), (-1, 0), colors.grey),
                    ('TEXTCOLOR', (0, 0), (-1, 0), colors.whitesmoke),
                    ('ALIGN', (0, 0), (-1, -1), 'CENTER'),
                    ('FONTNAME', (0, 0), (-1, 0), 'Helvetica-Bold'),
                    ('FONTSIZE', (0, 0), (-1, 0), 9),
                    ('BOTTOMPADDING', (0, 0), (-1, 0), 8),
                    ('BACKGROUND', (0, 1), (-1, -1), colors.lightgrey),
                    ('GRID', (0, 0), (-1, -1), 1, colors.black),
                    ('VALIGN', (0, 0), (-1, -1), 'MIDDLE'),
                ]))

                story.append(period_table)
                story.append(Spacer(1, 0.2*inch))


# Global PDF generator instance
pdf_generator = PDFGenerator()
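The label-wrapping rule in `_create_perspective_chart` (split labels longer than 20 characters at the midpoint word) can be checked independently of matplotlib. A sketch with a made-up category name:

```python
def wrap_label(cat: str, limit: int = 20) -> str:
    """Split a long axis label into two lines at its midpoint word,
    mirroring the wrapping rule in _create_perspective_chart."""
    if len(cat) > limit:
        words = cat.split()
        # Only labels with more than two words get split
        if len(words) > 2:
            mid = len(words) // 2
            cat = ' '.join(words[:mid]) + '\n' + ' '.join(words[mid:])
    return cat
```

Note the edge case this inherits from the method: a long one- or two-word label (over 20 characters) is left unwrapped, since splitting mid-word is worse than a wide tick label.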
145
kpi_analysis/check_config.py
Normal file
@ -0,0 +1,145 @@
#!/usr/bin/env python3
"""
Check KPI Analysis configuration
"""

import os
import sys
from pathlib import Path

# Add current directory to path
sys.path.insert(0, str(Path(__file__).parent))

from config.settings import settings

print("=" * 70)
print("KPI Analysis Dashboard - Configuration Check")
print("=" * 70)

print("\n📋 Application Settings")
print("-" * 70)
print(f" App Name: {settings.app_name}")
print(f" Version: {settings.app_version}")
print(f" Debug Mode: {settings.debug}")
print(f" Host: {settings.host}")
print(f" Port: {settings.port}")
print(f" Secret Key: {'*' * 20} (configured: {bool(settings.secret_key)})")

print("\n🔐 Authentication Settings")
print("-" * 70)
print(f" LDAP Server: {settings.ldap_server or 'Not configured'}")
print(f" LDAP Port: {settings.ldap_port}")
print(f" LDAP Use SSL: {settings.ldap_use_ssl}")
print(f" LDAP Base DN: {settings.ldap_base_dn or 'Not configured'}")
print(f" LDAP Group DN: {settings.ldap_kpi_group_dn or 'Not configured'}")

print("\n🔓 Fallback Authentication")
print("-" * 70)
print(f" Enabled: {settings.enable_fallback_auth}")
if settings.enable_fallback_auth:
    print(f" Username: {settings.fallback_admin_username}")
    print(f" Password: {'*' * len(settings.fallback_admin_password)}")
    print(f" Email: {settings.fallback_admin_email}")
    print(f" Role: {settings.fallback_admin_role}")
else:
    print(" ⚠️ Fallback authentication is disabled")

print("\n☁️ Nextcloud Settings")
print("-" * 70)
print(f" Base URL: {settings.nextcloud_base_url}")
print(f" OAuth Client ID: {settings.nextcloud_oauth_client_id[:20] + '...' if settings.nextcloud_oauth_client_id else 'Not configured'}")
print(f" OAuth Client Secret: {'*' * 20 if settings.nextcloud_oauth_client_secret else 'Not configured'}")
print(f" KPI Folder: {settings.nextcloud_kpi_folder}")
print(f" Username: {settings.nextcloud_username or 'Not configured'}")

print("\n🤖 OpenAI Settings")
print("-" * 70)
print(f" API Key: {'*' * 20 if settings.openai_api_key else 'Not configured'}")
print(f" Model: {settings.openai_model}")
print(f" Max Tokens: {settings.openai_max_tokens}")
print(f" Temperature: {settings.openai_temperature}")

print("\n💾 Database Settings")
print("-" * 70)
print(f" Database URL: {settings.database_url}")

print("\n📁 Directory Settings")
print("-" * 70)
print(f" Data Directory: {settings.data_directory}")
print(f" Upload Directory: {settings.upload_directory}")
print(f" Reports Directory: {settings.reports_directory}")

# Check if directories exist
data_exists = os.path.exists(settings.data_directory)
upload_exists = os.path.exists(settings.upload_directory)
reports_exists = os.path.exists(settings.reports_directory)

print("\n Directory Status:")
print(f" Data: {'✅ Exists' if data_exists else '❌ Missing'}")
print(f" Upload: {'✅ Exists' if upload_exists else '❌ Missing'}")
print(f" Reports: {'✅ Exists' if reports_exists else '❌ Missing'}")

print("\n🔒 Security Settings")
print("-" * 70)
print(f" Session Timeout: {settings.session_timeout_minutes} minutes")
print(f" Max File Size: {settings.max_file_size_mb} MB")
print(f" Allowed Extensions: {', '.join(settings.allowed_file_extensions)}")

print("\n🌐 CORS Settings")
print("-" * 70)
print(" Allowed Origins:")
for origin in settings.effective_cors_origins:
    print(f" - {origin}")

print("\n📧 Email Settings")
print("-" * 70)
print(f" SMTP Server: {settings.smtp_server or 'Not configured'}")
print(f" SMTP Port: {settings.smtp_port}")
print(f" SMTP Username: {settings.smtp_username or 'Not configured'}")
print(f" Email From: {settings.email_from or 'Not configured'}")

print("\n📝 Logging Settings")
print("-" * 70)
print(f" Log Level: {settings.log_level}")
print(f" Log File: {settings.log_file}")

# Check if log directory exists
log_dir = os.path.dirname(settings.log_file)
log_dir_exists = os.path.exists(log_dir)
print(f" Log Directory: {'✅ Exists' if log_dir_exists else '❌ Missing'}")

print("\n" + "=" * 70)
print("Configuration Summary")
print("=" * 70)

# Determine authentication status
auth_status = []
if settings.ldap_server:
    auth_status.append("LDAP configured")
if settings.enable_fallback_auth:
    auth_status.append("Fallback enabled")

if not auth_status:
    print("❌ No authentication method configured!")
    print(" Please configure LDAP or enable fallback authentication")
else:
    print(f"✅ Authentication: {', '.join(auth_status)}")

# Check critical settings
critical_ok = True

if not settings.secret_key or settings.secret_key == "your-secret-key-here-change-in-production":
    print("⚠️ WARNING: Using default SECRET_KEY - change this in production!")
    critical_ok = False

if not data_exists or not upload_exists or not reports_exists:
    print("⚠️ WARNING: Some required directories are missing")
    critical_ok = False

if critical_ok:
    print("✅ All critical settings are configured")

print("\n" + "=" * 70)
print("Ready to start the application!")
print("Run: python run.py")
print("=" * 70)
30
kpi_analysis/check_database.py
Normal file
@ -0,0 +1,30 @@
#!/usr/bin/env python3
"""Check database status"""
import sqlite3
from pathlib import Path

db_path = Path("data/kpi_analysis.db")
conn = sqlite3.connect(db_path)
cursor = conn.cursor()

print("Database Status:")
print("=" * 50)

tables = ['users', 'kpi_files', 'kpi_analysis_results', 'kpi_data_cache', 'user_sessions', 'application_logs']

for table in tables:
    cursor.execute(f"SELECT COUNT(*) FROM {table}")
    count = cursor.fetchone()[0]
    status = "✅ Empty" if count == 0 else f"⚠️ {count} rows"
    print(f"{table:25} {status}")

conn.close()

print("=" * 50)
print("\nUploads directory:")
uploads = list(Path("uploads").glob("*"))
print(f" Files: {len([f for f in uploads if f.is_file()])}")

print("\nReports directory:")
reports = list(Path("reports").glob("*"))
print(f" Files: {len([f for f in reports if f.is_file()])}")
146
kpi_analysis/clean_database.py
Normal file
@ -0,0 +1,146 @@
#!/usr/bin/env python3
"""
Clean database script - removes all data from KPI Analysis database
Use this to start fresh after testing
"""

import sqlite3
from pathlib import Path


def clean_database():
    """Clean all data from the database"""
    db_path = Path("data/kpi_analysis.db")

    if not db_path.exists():
        print(f"❌ Database not found at {db_path}")
        return False

    print(f"🗑️ Cleaning database: {db_path}")

    try:
        conn = sqlite3.connect(db_path)
        cursor = conn.cursor()

        # Get table names
        cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
        tables = cursor.fetchall()

        print(f"\n📊 Found {len(tables)} tables")

        # Delete data from each table
        for table in tables:
            table_name = table[0]
            if table_name != 'sqlite_sequence':  # Skip internal table
                cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
                count = cursor.fetchone()[0]

                if count > 0:
                    print(f" Deleting {count} rows from {table_name}...")
                    cursor.execute(f"DELETE FROM {table_name}")
                else:
                    print(f" {table_name} is already empty")

        # Reset auto-increment counters
        cursor.execute("DELETE FROM sqlite_sequence")

        conn.commit()
        conn.close()

        print("\n✅ Database cleaned successfully!")
        return True

    except Exception as e:
        print(f"\n❌ Error cleaning database: {e}")
        return False


def clean_uploads():
    """Clean uploaded files"""
    uploads_dir = Path("uploads")

    if not uploads_dir.exists():
        print(f"\n📁 Uploads directory not found at {uploads_dir}")
        return

    files = list(uploads_dir.glob("*"))
    if not files:
        print("\n📁 Uploads directory is already empty")
        return

    print(f"\n🗑️ Cleaning uploads directory: {uploads_dir}")
    print(f" Found {len(files)} files")

    for file in files:
        if file.is_file():
            try:
                file.unlink()
                print(f" Deleted: {file.name}")
            except Exception as e:
                print(f" ❌ Failed to delete {file.name}: {e}")


def clean_reports():
    """Clean generated reports"""
    reports_dir = Path("reports")

    if not reports_dir.exists():
        print(f"\n📄 Reports directory not found at {reports_dir}")
        return

    files = list(reports_dir.glob("*"))
    if not files:
        print("\n📄 Reports directory is already empty")
        return

    print(f"\n🗑️ Cleaning reports directory: {reports_dir}")
    print(f" Found {len(files)} files")

    for file in files:
        if file.is_file():
            try:
                file.unlink()
                print(f" Deleted: {file.name}")
            except Exception as e:
                print(f" ❌ Failed to delete {file.name}: {e}")


def main():
    print("=" * 70)
    print("KPI Analysis - Database Cleanup")
    print("=" * 70)
    print("\n⚠️ WARNING: This will delete ALL data from the database!")
    print(" - All uploaded files")
    print(" - All analysis results")
    print(" - All user sessions")
    print(" - All logs")
    print(" - All generated reports")

    response = input("\n❓ Are you sure you want to continue? (yes/no): ")

    if response.lower() not in ['yes', 'y']:
        print("\n❌ Cleanup cancelled")
        return

    print("\n🚀 Starting cleanup...\n")

    # Clean database
    db_success = clean_database()

    # Clean uploaded files
    clean_uploads()

    # Clean reports
    clean_reports()

    print("\n" + "=" * 70)
    if db_success:
        print("✅ Cleanup completed successfully!")
        print("\nYou can now:")
        print("1. Start the server: python run.py")
        print("2. Login with your credentials")
        print("3. Upload fresh KPI files")
    else:
        print("⚠️ Cleanup completed with some errors")
    print("=" * 70)


if __name__ == "__main__":
    main()
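The table-enumeration-and-delete pattern used by `clean_database()` can be factored into a reusable helper. This is a sketch, not code from the repository; `wipe_all_tables` and the demo schema are illustrative names, and it demonstrates the same `sqlite_master` query and `sqlite_sequence` skip against a throwaway in-memory database:

```python
import sqlite3

def wipe_all_tables(conn: sqlite3.Connection) -> dict:
    """Delete every row from every user table, returning per-table row counts."""
    cursor = conn.cursor()
    # sqlite_master lists all tables; sqlite_sequence is SQLite's internal
    # AUTOINCREMENT bookkeeping table and should be skipped (or reset separately).
    cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
    deleted = {}
    for (table_name,) in cursor.fetchall():
        if table_name == 'sqlite_sequence':
            continue
        cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
        deleted[table_name] = cursor.fetchone()[0]
        cursor.execute(f"DELETE FROM {table_name}")
    conn.commit()
    return deleted

# Demonstration against a throwaway in-memory database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY AUTOINCREMENT, name TEXT)")
conn.executemany("INSERT INTO users (name) VALUES (?)", [("a",), ("b",)])
conn.commit()
print(wipe_all_tables(conn))  # {'users': 2}
```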
55
kpi_analysis/config/.env.template
Normal file
@ -0,0 +1,55 @@
# Environment Configuration Template for KPI Analysis Application
# Copy this file to .env and update the values with your actual configuration

# Application Settings
DEBUG=true
SECRET_KEY=your-super-secret-key-change-this-in-production

# Nextcloud Configuration
NEXTCLOUD_BASE_URL=https://nc.mapan.co.id
NEXTCLOUD_OAUTH_CLIENT_ID=your-nextcloud-oauth-client-id
NEXTCLOUD_OAUTH_CLIENT_SECRET=your-nextcloud-oauth-client-secret
NEXTCLOUD_REDIRECT_URI=http://localhost:8000/auth/nextcloud/callback
NEXTCLOUD_KPI_FOLDER=/KPI_Files
NEXTCLOUD_USERNAME=your-nextcloud-username

# OpenAI Configuration
OPENAI_API_KEY=your-openai-api-key
OPENAI_MODEL=gpt-4
OPENAI_MAX_TOKENS=2000
OPENAI_TEMPERATURE=0.7

# LDAP/Active Directory Configuration
LDAP_SERVER=ldap.your-company.com
LDAP_PORT=389
LDAP_USE_SSL=true
LDAP_BASE_DN=DC=your-company,DC=com
LDAP_BIND_DN=CN=service-account,OU=Service Accounts,DC=your-company,DC=com
LDAP_BIND_PASSWORD=your-ldap-password

# LDAP Group Configuration (Required for access control)
LDAP_GROUP_BASE_DN=DC=your-company,DC=com
LDAP_KPI_GROUP_DN=CN=KPI_Users,OU=Groups,DC=your-company,DC=com
LDAP_KPI_GROUP_NAME=KPI_Users
LDAP_GROUP_MEMBER_ATTRIBUTE=member
LDAP_USER_MEMBER_ATTRIBUTE=memberOf

# Fallback Authentication (for testing/development when LDAP not available)
ENABLE_FALLBACK_AUTH=false
FALLBACK_ADMIN_USERNAME=admin
FALLBACK_ADMIN_PASSWORD=super
FALLBACK_ADMIN_ROLE=admin
FALLBACK_ADMIN_EMAIL=admin@kpi-system.local

# Database Settings
DATABASE_URL=sqlite:///./data/kpi_analysis.db

# Email Configuration (Optional)
SMTP_SERVER=smtp.your-company.com
SMTP_PORT=587
SMTP_USERNAME=your-email@your-company.com
SMTP_PASSWORD=your-email-password
EMAIL_FROM=KPI Analysis System <kpi@your-company.com>

# Logging
LOG_LEVEL=INFO
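The application itself loads this template's format via pydantic-settings (see `config/settings.py`), which handles quoting, type coercion, and more. Purely to illustrate the `KEY=VALUE` format the template uses, here is a minimal stdlib parser; `parse_env` is a hypothetical helper, not part of the project:

```python
def parse_env(text: str) -> dict:
    """Minimal KEY=VALUE parser; blank lines and # comments are skipped.
    Only the first '=' splits, so values like DC=your-company,DC=com survive intact."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        values[key.strip()] = value.strip()
    return values

sample = """
# Application Settings
DEBUG=true
LDAP_PORT=389
"""
print(parse_env(sample))  # {'DEBUG': 'true', 'LDAP_PORT': '389'}
```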
7
kpi_analysis/config/__init__.py
Normal file
@ -0,0 +1,7 @@
"""
Core configuration module
"""

from .settings import settings

__all__ = ["settings"]
BIN
kpi_analysis/config/__pycache__/__init__.cpython-312.pyc
Normal file
BIN
kpi_analysis/config/__pycache__/settings.cpython-312.pyc
Normal file
148
kpi_analysis/config/settings.py
Normal file
@ -0,0 +1,148 @@
"""
Configuration management for KPI Analysis Application
Handles environment variables and settings for Nextcloud, OpenAI, LDAP, etc.
"""

import logging
import os
from pathlib import Path
from typing import Optional, List

from pydantic_settings import BaseSettings


class Settings(BaseSettings):
    """Application settings with environment variable support"""

    # Application Settings
    app_name: str = "KPI Analysis Dashboard"
    app_version: str = "1.0.0"
    debug: bool = False
    secret_key: str = "your-secret-key-here-change-in-production"

    # Server Settings
    host: str = "0.0.0.0"
    port: int = 8000

    # Nextcloud Configuration
    nextcloud_base_url: str = "https://nc.mapan.co.id"
    nextcloud_oauth_client_id: Optional[str] = None
    nextcloud_oauth_client_secret: Optional[str] = None
    nextcloud_redirect_uri: str = "http://localhost:8000/auth/nextcloud/callback"
    nextcloud_kpi_folder: str = "/KPI_Files"
    nextcloud_username: Optional[str] = None

    # OpenAI Configuration
    openai_api_key: Optional[str] = None
    openai_model: str = "gpt-4"
    openai_max_tokens: int = 2000
    openai_temperature: float = 0.7

    # LDAP/Active Directory Configuration
    ldap_server: Optional[str] = None
    ldap_port: int = 389
    ldap_use_ssl: bool = True
    ldap_base_dn: Optional[str] = None
    ldap_bind_dn: Optional[str] = None
    ldap_bind_password: Optional[str] = None
    ldap_user_search_filter: str = "(uid={username})"
    ldap_group_base_dn: Optional[str] = None
    ldap_kpi_group_dn: Optional[str] = None
    ldap_kpi_group_name: str = "KPI_Users"
    ldap_group_member_attribute: str = "member"
    ldap_user_member_attribute: str = "memberOf"

    # Fallback Authentication (for testing/development when LDAP not available)
    enable_fallback_auth: bool = False
    fallback_admin_username: str = "admin"
    fallback_admin_password: str = "super"
    fallback_admin_role: str = "admin"
    fallback_admin_email: str = "admin@kpi-system.local"

    # Database Settings
    database_url: str = "sqlite:///./data/kpi_analysis.db"

    # File Storage
    data_directory: str = "./data"
    upload_directory: str = "./uploads"
    reports_directory: str = "./reports"

    # Email Configuration (for notifications)
    smtp_server: Optional[str] = None
    smtp_port: int = 587
    smtp_username: Optional[str] = None
    smtp_password: Optional[str] = None
    email_from: Optional[str] = None

    # Security Settings
    session_timeout_minutes: int = 60
    max_file_size_mb: int = 50
    allowed_file_extensions: List[str] = [".xlsx", ".xls"]

    # CORS Settings
    cors_allow_origins: Optional[List[str]] = None  # If None, will use default domains

    # Logging
    log_level: str = "INFO"
    log_file: str = "./logs/kpi_analysis.log"

    @property
    def default_cors_origins(self) -> List[str]:
        """Default CORS origins based on server configuration"""
        # Start with common development ports
        origins = [
            "http://localhost:3000",
            "http://localhost:8000",
            "http://127.0.0.1:8000",
            "http://localhost:8080"
        ]

        # Add production domain based on server host if not localhost
        if self.host not in ["0.0.0.0", "127.0.0.1", "localhost"]:
            scheme = "https" if self.port == 443 else "http"
            origins.append(f"{scheme}://{self.host}")

        # Add localhost with current port for development (avoid duplicates)
        localhost_origin = f"http://localhost:{self.port}"
        if localhost_origin not in origins:
            origins.append(localhost_origin)

        localnet_origin = f"http://127.0.0.1:{self.port}"
        if localnet_origin not in origins:
            origins.append(localnet_origin)

        return list(set(origins))  # Remove any duplicates

    @property
    def effective_cors_origins(self) -> List[str]:
        """Get effective CORS origins (custom or default)"""
        if self.cors_allow_origins:
            return self.cors_allow_origins
        return self.default_cors_origins

    class Config:
        # Use absolute path to .env file relative to this settings.py file
        _settings_dir = Path(__file__).parent.parent  # Go up to kpi_analysis directory
        env_file = str(_settings_dir / ".env")
        env_file_encoding = "utf-8"
        case_sensitive = False


# Global settings instance
settings = Settings()

# Debug: Log which .env file is being used
logger = logging.getLogger(__name__)
env_file_path = Path(__file__).parent.parent / ".env"
if env_file_path.exists():
    logger.info(f"✅ .env file found at: {env_file_path}")
    logger.info(f" Fallback auth enabled: {settings.enable_fallback_auth}")
else:
    logger.warning(f"⚠️ .env file not found at: {env_file_path}")

# Ensure directories exist
os.makedirs(settings.data_directory, exist_ok=True)
os.makedirs(settings.upload_directory, exist_ok=True)
os.makedirs(settings.reports_directory, exist_ok=True)

# Create logs directory
os.makedirs(os.path.dirname(settings.log_file), exist_ok=True)
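The `default_cors_origins` property combines fixed development origins with host- and port-derived ones. A standalone sketch of that logic makes it easy to see which origins a given deployment would allow; note this version swaps `list(set(...))` for `sorted(set(...))` to get a deterministic order (a design tweak for the sketch, not what the file does), and `kpi.example.com` is a hypothetical host:

```python
def default_cors_origins(host: str, port: int) -> list[str]:
    """Mirror of Settings.default_cors_origins for a given host/port."""
    # Fixed development origins
    origins = [
        "http://localhost:3000",
        "http://localhost:8000",
        "http://127.0.0.1:8000",
        "http://localhost:8080",
    ]
    # A real (non-wildcard, non-loopback) host gains its own origin,
    # with https assumed only on port 443
    if host not in ["0.0.0.0", "127.0.0.1", "localhost"]:
        scheme = "https" if port == 443 else "http"
        origins.append(f"{scheme}://{host}")
    # Loopback origins for whatever port the server actually runs on
    for candidate in (f"http://localhost:{port}", f"http://127.0.0.1:{port}"):
        if candidate not in origins:
            origins.append(candidate)
    return sorted(set(origins))  # deterministic, deduplicated

# A public host on port 443 gains an https origin:
print(default_cors_origins("kpi.example.com", 443))
```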
BIN
kpi_analysis/data/kpi_analysis.db
Normal file
55
kpi_analysis/inspect_excel.py
Normal file
@ -0,0 +1,55 @@
#!/usr/bin/env python3
"""Inspect Excel file structure"""
import pandas as pd

file_path = '../KPI Manager Information Technology.xlsx'

print("=" * 70)
print(f"Inspecting: {file_path}")
print("=" * 70)

try:
    xls = pd.ExcelFile(file_path)
    print(f"\nTotal sheets: {len(xls.sheet_names)}")
    print(f"Sheet names: {xls.sheet_names}")

    # Inspect KPI sheet
    print("\n" + "=" * 70)
    print("KPI Sheet (first 20 rows):")
    print("=" * 70)
    df_kpi = pd.read_excel(xls, sheet_name='KPI', header=None)
    print(f"Shape: {df_kpi.shape}")
    print("\nFirst 20 rows:")
    pd.set_option('display.max_columns', None)
    pd.set_option('display.width', None)
    pd.set_option('display.max_colwidth', 50)
    print(df_kpi.head(20))

    # Inspect Achievement sheet
    if 'Achievement' in xls.sheet_names:
        print("\n" + "=" * 70)
        print("Achievement Sheet:")
        print("=" * 70)
        df_ach = pd.read_excel(xls, sheet_name='Achievement')
        print(f"Shape: {df_ach.shape}")
        print(f"Columns: {df_ach.columns.tolist()}")
        print("\nFirst 10 rows:")
        print(df_ach.head(10))

    # Inspect one detail sheet
    detail_sheets = [s for s in xls.sheet_names if s not in ['KPI', 'Achievement']]
    if detail_sheets:
        sample_sheet = detail_sheets[0]
        print("\n" + "=" * 70)
        print(f"Sample Detail Sheet: {sample_sheet}")
        print("=" * 70)
        df_detail = pd.read_excel(xls, sheet_name=sample_sheet, header=None)
        print(f"Shape: {df_detail.shape}")
        print("\nFirst 15 rows:")
        print(df_detail.head(15))

except Exception as e:
    print(f"Error: {e}")
    import traceback
    traceback.print_exc()
160
kpi_analysis/main.py
Normal file
@ -0,0 +1,160 @@
"""
KPI Analysis Application
Main FastAPI application entry point
"""

import logging
from pathlib import Path

import uvicorn
from fastapi import FastAPI, Request, HTTPException, Depends
from fastapi.staticfiles import StaticFiles
from fastapi.templating import Jinja2Templates
from fastapi.responses import HTMLResponse, RedirectResponse, JSONResponse
from fastapi.security import HTTPBearer, HTTPAuthorizationCredentials
from fastapi.middleware.cors import CORSMiddleware
from starlette.middleware.base import BaseHTTPMiddleware

# Import application modules
from app.api import routes
from config.settings import settings
from app.core.database import init_db

# Initialize FastAPI app
app = FastAPI(
    title="KPI Analysis Dashboard",
    description="Comprehensive KPI Analysis and Reporting System",
    version="1.0.0",
    docs_url="/docs",
    redoc_url="/redoc"
)

# Add CORS middleware with configurable origins
app.add_middleware(
    CORSMiddleware,
    allow_origins=settings.effective_cors_origins,
    allow_credentials=True,
    allow_methods=["GET", "POST", "PUT", "DELETE", "OPTIONS"],
    allow_headers=["*"],
)

# Mount static files
static_dir = Path(__file__).parent / "static"
app.mount("/static", StaticFiles(directory=str(static_dir)), name="static")

# Initialize templates
templates_dir = Path(__file__).parent / "templates"
templates = Jinja2Templates(directory=str(templates_dir))

# Include API routes
app.include_router(routes.router, prefix="/api")

# Authentication dependency
security = HTTPBearer()


class CookieCleanupMiddleware(BaseHTTPMiddleware):
    """Middleware to clear conflicting cookies from other applications"""

    async def dispatch(self, request: Request, call_next):
        # List of cookies that might conflict with Odoo or other applications
        conflicting_cookies = [
            'session_id', 'frontend_lang', '_ga', '_ga_NMT50XL57M',
            'cids', 'tz', 'irispid', 'session', 'csrftoken'
        ]

        # Check if any conflicting cookies are present
        has_conflicts = any(cookie in request.cookies for cookie in conflicting_cookies)

        if has_conflicts and request.url.path.startswith('/api/auth/'):
            logger = logging.getLogger(__name__)
            logger.info(f"Cleaning conflicting cookies for request to {request.url.path}")
            logger.info(f"Found conflicting cookies: {[c for c in conflicting_cookies if c in request.cookies]}")

        response = await call_next(request)

        # For authentication endpoints, set headers to clear conflicting cookies
        if has_conflicts and request.url.path.startswith('/api/auth/'):
            if not isinstance(response, JSONResponse):
                response = JSONResponse(content={"status": "processing"})

            # Clear conflicting cookies
            for cookie_name in conflicting_cookies:
                response.set_cookie(
                    key=cookie_name,
                    value="",
                    max_age=0,
                    expires=0,
                    path="/",
                    domain=None,
                    secure=False,
                    httponly=False,
                    samesite="lax"
                )

        return response


# Add cookie cleanup middleware
app.add_middleware(CookieCleanupMiddleware)


async def require_authentication(credentials: HTTPAuthorizationCredentials = Depends(security)):
    """Require valid authentication token"""
    import jwt

    try:
        payload = jwt.decode(
            credentials.credentials,
            settings.secret_key,
            algorithms=["HS256"]
        )
        return payload
    except jwt.ExpiredSignatureError:
        raise HTTPException(status_code=401, detail="Token has expired")
    except jwt.InvalidTokenError:
        raise HTTPException(status_code=401, detail="Invalid token")


async def get_current_user(payload: dict = Depends(require_authentication)):
    """Get current authenticated user"""
    return payload


@app.on_event("startup")
async def startup_event():
    """Initialize application on startup"""
    await init_db()
    print("🚀 KPI Analysis Application Started")
    print("📊 Dashboard available at: http://localhost:8000")
    print("📚 API Documentation at: http://localhost:8000/docs")


@app.get("/", response_class=HTMLResponse)
async def root(request: Request):
    """Redirect to login page"""
    return RedirectResponse(url="/login", status_code=302)


@app.get("/login", response_class=HTMLResponse)
async def login_page(request: Request):
    """Public login page"""
    return templates.TemplateResponse("login.html", {"request": request})


@app.get("/dashboard", response_class=HTMLResponse)
async def dashboard(request: Request):
    """Main dashboard page - authentication handled via frontend token"""
    return templates.TemplateResponse("dashboard.html", {"request": request})


@app.get("/health")
async def health_check():
    """Health check endpoint"""
    return {
        "status": "healthy",
        "service": "KPI Analysis Dashboard",
        "version": "1.0.0"
    }


if __name__ == "__main__":
    uvicorn.run(
        "main:app",
        host="0.0.0.0",
        port=8000,
        reload=True,
        log_level="info"
    )
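`require_authentication` relies on `jwt.decode(..., algorithms=["HS256"])` to verify the bearer token's signature against `settings.secret_key` (PyJWT also checks registered claims such as `exp`). To make the verification step concrete, here is a stdlib-only sketch of HS256 signing and checking; `make_hs256_token`, `verify_hs256_token`, and the `"dev-secret"` key are illustrative, and real code should keep using PyJWT:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWT uses base64url without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_hs256_token(payload: dict, secret: str) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}".encode()
    sig = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    return f"{header}.{body}.{sig}"

def verify_hs256_token(token: str, secret: str) -> bool:
    header, body, sig = token.split(".")
    signing_input = f"{header}.{body}".encode()
    expected = b64url(hmac.new(secret.encode(), signing_input, hashlib.sha256).digest())
    # Constant-time comparison, as PyJWT does internally
    return hmac.compare_digest(sig, expected)

token = make_hs256_token({"sub": "suherdy.yacob", "role": "admin"}, "dev-secret")
print(verify_hs256_token(token, "dev-secret"))    # True
print(verify_hs256_token(token, "wrong-secret"))  # False
```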
93
kpi_analysis/requirements.txt
Normal file
@ -0,0 +1,93 @@
aiofiles==25.1.0
aiosqlite==0.21.0
annotated-doc==0.0.4
annotated-types==0.7.0
anyio==4.11.0
bcrypt==5.0.0
black==25.11.0
brotli==1.2.0
certifi==2025.11.12
cffi==2.0.0
charset-normalizer==3.4.4
click==8.3.1
contourpy==1.3.3
cryptography==46.0.3
cssselect2==0.8.0
cycler==0.12.1
Cython==3.2.1
distro==1.9.0
ecdsa==0.19.1
et_xmlfile==2.0.0
fastapi==0.122.0
fonttools==4.60.1
h11==0.16.0
httpcore==1.0.9
httptools==0.7.1
httpx==0.28.1
idna==3.11
iniconfig==2.3.0
isort==7.0.0
Jinja2==3.1.2
jiter==0.12.0
kiwisolver==1.4.9
ldap3==2.9.1
MarkupSafe==3.0.3
matplotlib==3.8.2
mypy_extensions==1.1.0
narwhals==2.12.0
numpy==1.26.4
openai==2.8.1
openpyxl==3.1.2
packaging==25.0
pandas==2.3.3
passlib==1.7.4
pathspec==0.12.1
pillow==12.0.0
platformdirs==4.5.0
plotly==6.5.0
pluggy==1.6.0
pyasn1==0.6.1
pyasn1_modules==0.4.2
pycparser==2.23
pydantic==2.5.0
pydantic-settings==2.1.0
pydantic_core==2.14.1
pydyf==0.11.0
Pygments==2.19.2
PyJWT==2.10.1
pyparsing==3.2.5
pyphen==0.17.2
pytest==9.0.1
pytest-asyncio==1.3.0
python-dateutil==2.9.0.post0
python-dotenv==1.0.0
python-jose==3.5.0
python-ldap==3.4.4
python-multipart==0.0.20
pytokens==0.3.0
pytz==2025.2
PyYAML==6.0.3
reportlab==4.0.7
requests==2.31.0
rsa==4.9.1
seaborn==0.13.0
setuptools==80.9.0
six==1.17.0
sniffio==1.3.1
starlette==0.50.0
tinycss2==1.5.1
tinyhtml5==2.0.0
tqdm==4.67.1
typing-inspection==0.4.2
typing_extensions==4.15.0
tzdata==2025.2
urllib3==2.5.0
uvicorn==0.38.0
uvloop==0.22.1
watchfiles==1.1.1
weasyprint==66.0
webencodings==0.5.1
websockets==15.0.1
wheel==0.45.1
xlrd==2.0.1
zopfli==0.4.0
192
kpi_analysis/run.py
Normal file
@ -0,0 +1,192 @@
#!/usr/bin/env python3
"""
Startup script for KPI Analysis Dashboard
Provides easy way to run the application with proper configuration
"""

import argparse
import sys
from pathlib import Path

# Add current directory to Python path
current_dir = Path(__file__).parent
sys.path.insert(0, str(current_dir))


def check_dependencies():
    """Check if required dependencies are installed"""
    required_packages = [
        'fastapi', 'uvicorn', 'pandas', 'openpyxl',
        'plotly', 'matplotlib', 'reportlab', 'openai'
    ]

    missing_packages = []

    for package in required_packages:
        try:
            __import__(package.replace('-', '_'))
        except ImportError:
            missing_packages.append(package)

    if missing_packages:
        print("❌ Missing required packages:")
        for package in missing_packages:
            print(f" - {package}")
        print("\n💡 Install missing packages with:")
        print(f" pip install {' '.join(missing_packages)}")
        return False

    print("✅ All required packages are installed")
    return True


def check_environment():
    """Check environment configuration"""
    env_file = current_dir / ".env"

    if not env_file.exists():
        print("⚠️ No .env file found")
        print("📝 Creating .env file from template...")

        template_file = current_dir / "config" / ".env.template"
        if template_file.exists():
            import shutil
            shutil.copy(template_file, env_file)
            print("✅ .env file created from template")
            print("📝 Please edit .env file with your configuration")
        else:
            print("❌ .env.template file not found")
            return False
    else:
        print("✅ .env file found")

    # Check critical directories
    directories = ['data', 'uploads', 'reports', 'logs']
    for directory in directories:
        dir_path = current_dir / directory
        dir_path.mkdir(exist_ok=True)

    print("✅ Required directories created")
    return True


def setup_database():
    """Setup database tables"""
    try:
        from app.core.database import init_db

        print("🗄️ Setting up database...")
        import asyncio

        async def setup():
            await init_db()
            print("✅ Database initialized successfully")

        asyncio.run(setup())
        return True

    except Exception as e:
        print(f"❌ Database setup failed: {e}")
        return False


def run_development():
    """Run application in development mode"""
    print("🚀 Starting KPI Analysis Dashboard in development mode...")
    print("📊 Dashboard: http://localhost:8000")
    print("📚 API Docs: http://localhost:8000/docs")
    print("⚡ Auto-reload enabled")
    print("-" * 50)

    try:
        import uvicorn
        uvicorn.run(
            "main:app",
            host="0.0.0.0",
            port=8000,
            reload=True,
            log_level="info",
            access_log=True
        )
    except KeyboardInterrupt:
        print("\n👋 Application stopped by user")
    except Exception as e:
        print(f"❌ Error starting application: {e}")


def run_production():
    """Run application in production mode"""
    print("🚀 Starting KPI Analysis Dashboard in production mode...")
    print("📊 Dashboard: http://localhost:8000")
    print("-" * 50)

    try:
        import uvicorn
        uvicorn.run(
            "main:app",
            host="0.0.0.0",
            port=8000,
            workers=4,
            log_level="info",
            access_log=False
        )
    except KeyboardInterrupt:
        print("\n👋 Application stopped by user")
    except Exception as e:
        print(f"❌ Error starting application: {e}")


def main():
    """Main entry point"""
    parser = argparse.ArgumentParser(description="KPI Analysis Dashboard")
    parser.add_argument(
        '--mode',
        choices=['dev', 'production'],
        default='dev',
        help='Run mode (default: dev)'
    )
    parser.add_argument(
        '--check',
        action='store_true',
        help='Check system requirements only'
    )
    parser.add_argument(
        '--setup',
        action='store_true',
        help='Setup database and environment only'
    )

    args = parser.parse_args()

    print("🎯 KPI Analysis Dashboard Setup")
    print("=" * 40)

    # Check dependencies
    if not check_dependencies():
        return 1

    # Check environment
    if not check_environment():
        return 1

    # Setup database if requested
    if args.setup:
        setup_database()
        return 0

    # Check mode only
    if args.check:
        print("✅ System check completed successfully")
        return 0

    # Setup database
    if not setup_database():
        return 1

    print("\n🎉 Setup completed! Starting application...")

    # Run application
    if args.mode == 'production':
        run_production()
    else:
        run_development()

    return 0


if __name__ == "__main__":
    sys.exit(main())
393
kpi_analysis/static/css/dashboard.css
Normal file
@ -0,0 +1,393 @@
/* Custom CSS for KPI Analysis Dashboard */

:root {
    --primary-color: #4e73df;
    --secondary-color: #858796;
    --success-color: #1cc88a;
    --info-color: #36b9cc;
    --warning-color: #f6c23e;
    --danger-color: #e74a3b;
    --light-color: #f8f9fc;
    --dark-color: #5a5c69;
}

/* Sidebar */
.sidebar {
    min-height: calc(100vh - 48px);
    background-color: #4e73df;
    background: linear-gradient(180deg, #4e73df 10%, #224abe 100%);
    box-shadow: 0 0.15rem 1.75rem 0 rgba(58, 59, 69, 0.15);
}

.sidebar .nav-link {
    color: rgba(255, 255, 255, 0.8);
    font-weight: 500;
    padding: 0.75rem 1rem;
    margin-bottom: 0.25rem;
    border-radius: 0.35rem;
    transition: all 0.15s ease-in-out;
}

.sidebar .nav-link:hover {
    color: #fff;
    background-color: rgba(255, 255, 255, 0.1);
}

.sidebar .nav-link.active {
    color: #fff;
    font-weight: 700;
    background-color: rgba(255, 255, 255, 0.2);
}

.sidebar .nav-link i {
    margin-right: 0.5rem;
    color: rgba(255, 255, 255, 0.3);
}

.sidebar .nav-link:hover i,
.sidebar .nav-link.active i {
    color: rgba(255, 255, 255, 0.8);
}

/* Cards */
.card {
    border: none;
    border-radius: 0.5rem;
    box-shadow: 0 0.15rem 1.75rem 0 rgba(58, 59, 69, 0.15);
    transition: all 0.3s ease-in-out;
}

.card:hover {
    transform: translateY(-2px);
    box-shadow: 0 0.25rem 2rem 0 rgba(58, 59, 69, 0.2);
}

.card-header {
    background-color: #f8f9fc;
    border-bottom: 1px solid #e3e6f0;
    padding: 1rem 1.25rem;
}

.card-title {
    color: #5a5c69;
    font-size: 1.1rem;
    font-weight: 600;
    margin-bottom: 0.5rem;
}

/* Metrics Cards */
.border-left-primary {
    border-left: 0.25rem solid #4e73df !important;
}

.border-left-success {
    border-left: 0.25rem solid #1cc88a !important;
}

.border-left-info {
    border-left: 0.25rem solid #36b9cc !important;
}

.border-left-warning {
    border-left: 0.25rem solid #f6c23e !important;
}

.text-xs {
    font-size: 0.7rem !important;
}

.text-primary {
    color: #4e73df !important;
}

.text-success {
    color: #1cc88a !important;
}

.text-info {
    color: #36b9cc !important;
}

.text-warning {
    color: #f6c23e !important;
}

.font-weight-bold {
    font-weight: 700 !important;
}

/* Charts */
.chart-area {
    position: relative;
    height: 20rem;
    width: 100%;
}

.chart-pie {
    position: relative;
    height: 15rem;
    width: 100%;
}

/* Tables */
.table th {
    border-top: none;
    font-weight: 600;
    color: #5a5c69;
    background-color: #f8f9fc;
}

.table td {
    vertical-align: middle;
}

/* Badges */
.badge {
    font-size: 0.75em;
    font-weight: 600;
}

/* Progress bars */
.progress {
    height: 0.5rem;
    border-radius: 0.35rem;
}

/* Modals */
.modal-content {
    border: none;
    border-radius: 0.5rem;
    box-shadow: 0 0.5rem 1rem rgba(0, 0, 0, 0.15);
}

.modal-header {
    background-color: #f8f9fc;
    border-bottom: 1px solid #e3e6f0;
}

.modal-title {
    color: #5a5c69;
    font-weight: 600;
}

/* Buttons */
.btn {
    border-radius: 0.35rem;
    font-weight: 500;
    transition: all 0.15s ease-in-out;
}

.btn:hover {
    transform: translateY(-1px);
    box-shadow: 0 0.25rem 0.5rem rgba(0, 0, 0, 0.1);
}

.btn-outline-secondary {
    color: #6c757d;
    border-color: #6c757d;
}

.btn-outline-secondary:hover {
    background-color: #6c757d;
    border-color: #6c757d;
}

/* Loading states */
.loading {
    display: inline-block;
    width: 1rem;
    height: 1rem;
    border: 2px solid transparent;
    border-top: 2px solid #ffffff;
    border-radius: 50%;
    animation: spin 1s linear infinite;
}

@keyframes spin {
    0% { transform: rotate(0deg); }
    100% { transform: rotate(360deg); }
}

/* Alert notifications */
.alert {
    border: none;
    border-radius: 0.5rem;
    box-shadow: 0 0.15rem 1.75rem 0 rgba(58, 59, 69, 0.15);
}

.alert-dismissible .btn-close {
    padding: 1rem 1rem;
}

/* DataTables customizations */
.dataTables_wrapper .dataTables_paginate .paginate_button {
    border-radius: 0.35rem !important;
    margin: 0 2px;
}

.dataTables_wrapper .dataTables_paginate .paginate_button.current {
    background: #4e73df !important;
    color: white !important;
    border-color: #4e73df !important;
}

/* Responsive adjustments */
@media (max-width: 768px) {
    .sidebar {
        min-height: auto;
        padding-bottom: 1rem;
    }

    .chart-area,
    .chart-pie {
        height: 15rem;
    }

    .card-body {
        padding: 1rem;
    }
}

/* Custom scrollbar */
::-webkit-scrollbar {
    width: 8px;
    height: 8px;
}

::-webkit-scrollbar-track {
    background: #f1f1f1;
    border-radius: 10px;
}

::-webkit-scrollbar-thumb {
    background: #888;
    border-radius: 10px;
}

::-webkit-scrollbar-thumb:hover {
    background: #555;
}

/* Animation classes */
.fade-in {
    animation: fadeIn 0.5s ease-in-out;
}

@keyframes fadeIn {
    from { opacity: 0; transform: translateY(10px); }
    to { opacity: 1; transform: translateY(0); }
}

.slide-in {
    animation: slideIn 0.3s ease-out;
}

@keyframes slideIn {
    from { transform: translateX(-20px); opacity: 0; }
    to { transform: translateX(0); opacity: 1; }
}

/* Status indicators */
.status-indicator {
    display: inline-block;
    width: 0.5rem;
    height: 0.5rem;
    border-radius: 50%;
    margin-right: 0.5rem;
}

.status-success {
    background-color: #1cc88a;
}

.status-warning {
    background-color: #f6c23e;
}

.status-danger {
    background-color: #e74a3b;
}

.status-info {
    background-color: #36b9cc;
}

/* File upload area */
.upload-area {
    border: 2px dashed #e3e6f0;
    border-radius: 0.5rem;
    padding: 2rem;
    text-align: center;
    transition: all 0.3s ease;
}

.upload-area:hover {
    border-color: #4e73df;
    background-color: #f8f9fc;
}

.upload-area.dragover {
    border-color: #4e73df;
    background-color: rgba(78, 115, 223, 0.1);
}

/* Analysis results styling */
.analysis-card {
    background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
    color: white;
    border-radius: 0.5rem;
    padding: 1.5rem;
    margin-bottom: 1rem;
}

.insight-item {
    background-color: rgba(255, 255, 255, 0.1);
    border-radius: 0.35rem;
    padding: 0.75rem;
    margin-bottom: 0.5rem;
    border-left: 4px solid rgba(255, 255, 255, 0.3);
}

.recommendation-item {
    background-color: #ffffff;
    border-left: 4px solid #4e73df;
    border-radius: 0.35rem;
    padding: 1rem;
    margin-bottom: 0.75rem;
    box-shadow: 0 2px 4px rgba(0, 0, 0, 0.1);
}

.priority-high {
    border-left-color: #e74a3b;
}

.priority-medium {
    border-left-color: #f6c23e;
}

.priority-low {
    border-left-color: #1cc88a;
}

/* Nextcloud integration */
.nextcloud-connected {
    color: #1cc88a;
}

.nextcloud-disconnected {
    color: #e74a3b;
}

/* Print styles */
@media print {
    .sidebar,
    .btn,
    .modal {
        display: none !important;
    }

    .col-md-9 {
        width: 100% !important;
        max-width: 100% !important;
    }
}
1085
kpi_analysis/static/js/dashboard.js
Normal file
813
kpi_analysis/templates/dashboard.html
Normal file
@ -0,0 +1,813 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>KPI Analysis Dashboard</title>
    <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/css/bootstrap.min.css" rel="stylesheet">
    <link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/css/all.min.css" rel="stylesheet">
    <link href="https://cdn.datatables.net/1.13.6/css/dataTables.bootstrap5.min.css" rel="stylesheet">
    <link href="/static/css/dashboard.css" rel="stylesheet">
</head>
<body>
    <nav class="navbar navbar-expand-lg navbar-dark bg-primary">
        <div class="container-fluid">
            <a class="navbar-brand" href="#">
                <i class="fas fa-chart-line me-2"></i>
                KPI Analysis Dashboard
            </a>
            <div class="navbar-nav ms-auto" id="userSection" style="display: none;">
                <div class="nav-item dropdown">
                    <a class="nav-link dropdown-toggle" href="#" id="navbarDropdown" role="button" data-bs-toggle="dropdown">
                        <i class="fas fa-user me-1"></i>
                        <span id="usernameDisplay">User</span>
                    </a>
                    <ul class="dropdown-menu">
                        <li><a class="dropdown-item" href="#"><i class="fas fa-cog me-2"></i>Settings</a></li>
                        <li><hr class="dropdown-divider"></li>
                        <li><a class="dropdown-item" href="#" onclick="dashboard.logout()"><i class="fas fa-sign-out-alt me-2"></i>Logout</a></li>
                    </ul>
                </div>
            </div>
        </div>
    </nav>

    <div class="container-fluid">
        <div class="row">
            <!-- Sidebar -->
            <div class="col-md-3 col-lg-2 sidebar">
                <div class="position-sticky pt-3">
                    <ul class="nav flex-column">
                        <li class="nav-item">
                            <a class="nav-link active" href="#overview" data-bs-toggle="tab">
                                <i class="fas fa-tachometer-alt me-2"></i>
                                Overview
                            </a>
                        </li>
                        <li class="nav-item">
                            <a class="nav-link" href="#files" data-bs-toggle="tab">
                                <i class="fas fa-file-excel me-2"></i>
                                Files
                            </a>
                        </li>
                        <li class="nav-item">
                            <a class="nav-link" href="#analysis" data-bs-toggle="tab">
                                <i class="fas fa-chart-bar me-2"></i>
                                Analysis
                            </a>
                        </li>
                        <li class="nav-item">
                            <a class="nav-link" href="#nextcloud" data-bs-toggle="tab">
                                <i class="fas fa-cloud me-2"></i>
                                Nextcloud
                            </a>
                        </li>
                    </ul>
                </div>
            </div>

            <!-- Main Content -->
            <div class="col-md-9 ms-sm-auto col-lg-10 px-md-4">
                <div class="tab-content pt-3">

                    <!-- Overview Tab -->
                    <div class="tab-pane fade show active" id="overview">
                        <div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
                            <h1 class="h2">Dashboard Overview</h1>
                            <div class="btn-toolbar mb-2 mb-md-0">
                                <button type="button" class="btn btn-sm btn-outline-secondary" onclick="refreshData()">
                                    <i class="fas fa-sync-alt me-1"></i>
                                    Refresh
                                </button>
                            </div>
                        </div>

                        <!-- Key Metrics Cards -->
                        <div class="row mb-4">
                            <div class="col-xl-3 col-md-6 mb-4">
                                <div class="card border-left-primary shadow h-100 py-2">
                                    <div class="card-body">
                                        <div class="row no-gutters align-items-center">
                                            <div class="col mr-2">
                                                <div class="text-xs font-weight-bold text-primary text-uppercase mb-1">
                                                    Total Files
                                                </div>
                                                <div class="h5 mb-0 font-weight-bold text-gray-800" id="total-files">0</div>
                                            </div>
                                            <div class="col-auto">
                                                <i class="fas fa-file-excel fa-2x text-gray-300"></i>
                                            </div>
                                        </div>
                                    </div>
                                </div>
                            </div>

                            <div class="col-xl-3 col-md-6 mb-4">
                                <div class="card border-left-success shadow h-100 py-2">
                                    <div class="card-body">
                                        <div class="row no-gutters align-items-center">
                                            <div class="col mr-2">
                                                <div class="text-xs font-weight-bold text-success text-uppercase mb-1">
                                                    Avg Score
                                                </div>
                                                <div class="h5 mb-0 font-weight-bold text-gray-800" id="avg-score">0%</div>
                                            </div>
                                            <div class="col-auto">
                                                <i class="fas fa-chart-line fa-2x text-gray-300"></i>
                                            </div>
                                        </div>
                                    </div>
                                </div>
                            </div>

                            <div class="col-xl-3 col-md-6 mb-4">
                                <div class="card border-left-info shadow h-100 py-2">
                                    <div class="card-body">
                                        <div class="row no-gutters align-items-center">
                                            <div class="col mr-2">
                                                <div class="text-xs font-weight-bold text-info text-uppercase mb-1">
                                                    Achievement Rate
                                                </div>
                                                <div class="h5 mb-0 font-weight-bold text-gray-800" id="achievement-rate">0%</div>
                                            </div>
                                            <div class="col-auto">
                                                <i class="fas fa-trophy fa-2x text-gray-300"></i>
                                            </div>
                                        </div>
                                    </div>
                                </div>
                            </div>

                            <div class="col-xl-3 col-md-6 mb-4">
                                <div class="card border-left-warning shadow h-100 py-2">
                                    <div class="card-body">
                                        <div class="row no-gutters align-items-center">
                                            <div class="col mr-2">
                                                <div class="text-xs font-weight-bold text-warning text-uppercase mb-1">
                                                    Reports Generated
                                                </div>
                                                <div class="h5 mb-0 font-weight-bold text-gray-800" id="reports-count">0</div>
                                            </div>
                                            <div class="col-auto">
                                                <i class="fas fa-file-pdf fa-2x text-gray-300"></i>
                                            </div>
                                        </div>
                                    </div>
                                </div>
                            </div>
                        </div>

                        <!-- Charts Row -->
                        <div class="row">
                            <div class="col-xl-8 col-lg-7">
                                <div class="card shadow mb-4">
                                    <div class="card-header py-3 d-flex flex-row align-items-center justify-content-between">
                                        <h6 class="m-0 font-weight-bold text-primary">Performance Overview</h6>
                                    </div>
                                    <div class="card-body">
                                        <div class="chart-area">
                                            <canvas id="performanceChart"></canvas>
                                        </div>
                                    </div>
                                </div>
                            </div>

                            <div class="col-xl-4 col-lg-5">
                                <div class="card shadow mb-4">
                                    <div class="card-header py-3">
                                        <h6 class="m-0 font-weight-bold text-primary">Achievement Status</h6>
                                    </div>
                                    <div class="card-body">
                                        <div class="chart-pie">
                                            <canvas id="achievementChart"></canvas>
                                        </div>
                                    </div>
                                </div>
                            </div>
                        </div>

                        <!-- Recent Activity -->
                        <div class="row">
                            <div class="col-12">
                                <div class="card shadow mb-4">
                                    <div class="card-header py-3">
                                        <h6 class="m-0 font-weight-bold text-primary">Recent Activity</h6>
                                    </div>
                                    <div class="card-body">
                                        <div class="table-responsive">
                                            <table class="table table-bordered" id="activityTable" width="100%" cellspacing="0">
                                                <thead>
                                                    <tr>
                                                        <th>Date</th>
                                                        <th>Action</th>
                                                        <th>File</th>
                                                        <th>Status</th>
                                                    </tr>
                                                </thead>
                                                <tbody id="activity-tbody">
                                                    <!-- Activity data will be loaded here -->
                                                </tbody>
                                            </table>
                                        </div>
                                    </div>
                                </div>
                            </div>
                        </div>
                    </div>

                    <!-- Files Tab -->
                    <div class="tab-pane fade" id="files">
                        <div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
                            <h1 class="h2">File Management</h1>
                            <div class="btn-toolbar mb-2 mb-md-0">
                                <div class="btn-group me-2">
                                    <button type="button" class="btn btn-sm btn-outline-secondary" data-bs-toggle="modal" data-bs-target="#uploadModal">
                                        <i class="fas fa-upload me-1"></i>
                                        Upload File
                                    </button>
                                </div>
                            </div>
                        </div>

                        <div class="card shadow mb-4">
                            <div class="card-header py-3">
                                <h6 class="m-0 font-weight-bold text-primary">Uploaded KPI Files</h6>
                            </div>
                            <div class="card-body">
                                <div class="table-responsive">
                                    <table class="table table-bordered" id="filesTable" width="100%" cellspacing="0">
                                        <thead>
                                            <tr>
                                                <th>Filename</th>
                                                <th>Upload Date</th>
                                                <th>Size</th>
                                                <th>Status</th>
                                                <th>Actions</th>
                                            </tr>
                                        </thead>
                                        <tbody id="files-tbody">
                                            <!-- Files data will be loaded here -->
                                        </tbody>
                                    </table>
                                </div>
                            </div>
                        </div>
                    </div>

                    <!-- Analysis Tab -->
                    <div class="tab-pane fade" id="analysis">
                        <div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
                            <h1 class="h2">KPI Analysis</h1>
                        </div>

                        <div class="row">
                            <div class="col-12">
                                <div class="card shadow mb-4">
                                    <div class="card-header py-3">
                                        <h6 class="m-0 font-weight-bold text-primary">Analysis Results</h6>
                                    </div>
                                    <div class="card-body">
                                        <div id="analysis-content">
                                            <p class="text-muted">Select a file to view analysis results.</p>
                                        </div>
                                    </div>
                                </div>
                            </div>
                        </div>

                        <div class="row">
                            <div class="col-12">
                                <div class="card shadow mb-4">
                                    <div class="card-header py-3">
                                        <h6 class="m-0 font-weight-bold text-primary">Interactive Charts</h6>
                                    </div>
                                    <div class="card-body">
                                        <div id="charts-content">
                                            <p class="text-muted">Charts will appear here after analysis.</p>
                                        </div>
                                    </div>
                                </div>
                            </div>
                        </div>
                    </div>

                    <!-- Nextcloud Tab -->
                    <div class="tab-pane fade" id="nextcloud">
                        <div class="d-flex justify-content-between flex-wrap flex-md-nowrap align-items-center pb-2 mb-3 border-bottom">
                            <h1 class="h2">Nextcloud Integration</h1>
                            <div class="btn-toolbar mb-2 mb-md-0">
                                <button type="button" class="btn btn-sm btn-outline-secondary" onclick="connectNextcloud()">
                                    <i class="fas fa-cloud me-1"></i>
                                    Connect
                                </button>
                            </div>
                        </div>

                        <div class="row">
                            <div class="col-12">
                                <div class="card shadow mb-4">
                                    <div class="card-header py-3">
                                        <h6 class="m-0 font-weight-bold text-primary">Nextcloud Files</h6>
                                    </div>
                                    <div class="card-body">
                                        <div id="nextcloud-files">
                                            <p class="text-muted">Connect to Nextcloud to view KPI files.</p>
                                        </div>
                                    </div>
                                </div>
                            </div>
                        </div>
                    </div>
                </div>
            </div>
        </div>
    </div>

    <!-- Upload Modal -->
    <div class="modal fade" id="uploadModal" tabindex="-1">
        <div class="modal-dialog">
            <div class="modal-content">
                <div class="modal-header">
                    <h5 class="modal-title">Upload KPI File</h5>
                    <button type="button" class="btn-close" data-bs-dismiss="modal"></button>
                </div>
                <div class="modal-body">
                    <form id="uploadForm">
                        <div class="mb-3">
                            <label for="fileInput" class="form-label">Select Excel File</label>
                            <input class="form-control" type="file" id="fileInput" accept=".xlsx,.xls">
                        </div>
                        <div class="progress" style="display: none;" id="uploadProgress">
                            <div class="progress-bar" role="progressbar" style="width: 0%"></div>
                        </div>
                    </form>
                </div>
                <div class="modal-footer">
                    <button type="button" class="btn btn-secondary" data-bs-dismiss="modal">Cancel</button>
                    <button type="button" class="btn btn-primary" onclick="uploadFile()">Upload</button>
                </div>
            </div>
        </div>
    </div>

    <!-- Scripts -->
    <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/js/bootstrap.bundle.min.js"></script>
    <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
    <script src="https://cdn.datatables.net/1.13.6/js/jquery.dataTables.min.js"></script>
    <script src="https://cdn.datatables.net/1.13.6/js/dataTables.bootstrap5.min.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/chart.js"></script>
    <script src="/static/js/dashboard.js"></script>

    <script>
        // Dashboard object to manage functionality
        const dashboard = {
            isAuthenticated: false,
            currentUser: null,
            token: null,

            // Initialize dashboard when page loads
            initialize: function() {
                console.log('Initializing dashboard...');
                this.checkAuthentication();
            },

            // Check if user is authenticated
            checkAuthentication: function() {
                this.token = localStorage.getItem('kpi_token');
                console.log('Token found:', !!this.token);

                if (this.token) {
                    this.validateToken();
                } else {
                    // No token, redirect to login
                    window.location.href = '/login';
                }
            },

            // Validate the stored token
            async validateToken() {
                try {
                    const response = await fetch('/api/auth/me', {
                        headers: {
                            'Authorization': `Bearer ${this.token}`
                        }
                    });

                    if (response.ok) {
                        const result = await response.json();
                        this.currentUser = result.user;
                        this.isAuthenticated = true;
                        this.updateUI();
                        this.loadDashboardData();
                        this.initializeCharts([]);
                    } else {
                        // Token invalid, redirect to login
                        localStorage.removeItem('kpi_token');
                        localStorage.removeItem('kpi_user');
                        window.location.href = '/login';
                    }
                } catch (error) {
                    console.error('Token validation error:', error);
                    localStorage.removeItem('kpi_token');
                    localStorage.removeItem('kpi_user');
                    window.location.href = '/login';
                }
            },

            // Update UI based on authentication status
            updateUI: function() {
                const userSection = document.getElementById('userSection');
                const usernameDisplay = document.getElementById('usernameDisplay');

                if (this.isAuthenticated && this.currentUser) {
                    userSection.style.display = 'block';
                    usernameDisplay.textContent = this.currentUser.username;
                }
            },

            // Authenticated fetch
            authFetch: async function(url, options = {}) {
                const headers = {
                    ...options.headers,
                    'Authorization': `Bearer ${this.token}`
                };

                const response = await fetch(url, {
                    ...options,
                    headers
                });

                if (response.status === 401) {
                    // Token expired, redirect to login
                    this.logout();
                    throw new Error('Authentication required');
                }

                return response;
            },

            // Logout function
            logout: function() {
                console.log('Logging out...');

                // Call logout API if token exists
                if (this.token) {
                    fetch('/api/auth/logout', {
                        method: 'POST',
                        headers: {
                            'Authorization': `Bearer ${this.token}`
                        }
                    }).catch(err => console.log('Logout API error:', err));
                }

                // Clear local storage
                this.token = null;
                this.currentUser = null;
                this.isAuthenticated = false;
                localStorage.removeItem('kpi_token');
                localStorage.removeItem('kpi_user');

                // Redirect to login
                window.location.href = '/login';
            },

            // Load dashboard data
            loadDashboardData: async function() {
                try {
                    // Load dashboard metrics from API
                    const response = await this.authFetch('/api/files/list');
                    const data = await response.json();

                    // Update dashboard metrics with real data
                    const totalFiles = data.files ? data.files.length : 0;
                    document.getElementById('total-files').textContent = totalFiles;

                    // Calculate real metrics if data exists
                    if (totalFiles > 0 && data.files) {
                        // Calculate average score from processed files
                        const processedFiles = data.files.filter(file => file.processed);
                        if (processedFiles.length > 0) {
                            // For now, show 0% until we implement real score calculation
                            document.getElementById('avg-score').textContent = '0%';
                            document.getElementById('achievement-rate').textContent = '0%';
                        } else {
                            document.getElementById('avg-score').textContent = 'No Data';
                            document.getElementById('achievement-rate').textContent = 'No Data';
                        }
                        document.getElementById('reports-count').textContent = '0'; // Will be updated when report generation is implemented
                    } else {
                        // No files - show appropriate empty state
                        document.getElementById('total-files').textContent = '0';
                        document.getElementById('avg-score').textContent = 'No Data';
                        document.getElementById('achievement-rate').textContent = 'No Data';
                        document.getElementById('reports-count').textContent = '0';
                    }

                    // Load files table
                    this.loadFilesTable(data.files || []);

                    // Load activity data
                    this.loadRecentActivity();

                    // Initialize or update charts
                    this.initializeCharts(data.files || []);

                } catch (error) {
                    console.error('Error loading dashboard data:', error);
                    // Show empty state if error
                    document.getElementById('total-files').textContent = '0';
                    document.getElementById('avg-score').textContent = 'Error';
                    document.getElementById('achievement-rate').textContent = 'Error';
                    document.getElementById('reports-count').textContent = '0';
                    this.loadFilesTable([]);
                    this.loadRecentActivity();
                }
            },

            // Initialize charts
            initializeCharts: function(files = []) {
                // Performance Chart
                const perfCtx = document.getElementById('performanceChart').getContext('2d');

                // Destroy existing chart if any
                if (this.performanceChart) {
                    this.performanceChart.destroy();
                }

                const hasData = files.length > 0;

                // Only show empty state message, no fake data
                this.performanceChart = new Chart(perfCtx, {
                    type: 'line',
                    data: {
                        labels: [],
                        datasets: [{
                            label: 'Performance Score',
                            data: [],
                            borderColor: 'rgb(75, 192, 192)',
                            backgroundColor: 'rgba(75, 192, 192, 0.2)',
                            tension: 0.1
                        }]
                    },
                    options: {
                        responsive: true,
                        plugins: {
                            title: {
                                display: true,
                                text: hasData ? 'Performance Trend Over Time' : 'No Data Available - Upload KPI Files'
                            }
                        },
                        scales: {
                            y: {
                                beginAtZero: true,
                                min: 0,
                                max: 100
                            }
                        }
                    }
                });

                // Achievement Chart
                const achieveCtx = document.getElementById('achievementChart').getContext('2d');

                // Destroy existing chart if any
                if (this.achievementChart) {
                    this.achievementChart.destroy();
                }

                // Only show empty state message, no fake data
                this.achievementChart = new Chart(achieveCtx, {
                    type: 'doughnut',
                    data: {
                        labels: [],
                        datasets: [{
                            data: [],
                            backgroundColor: []
                        }]
                    },
                    options: {
                        responsive: true,
                        plugins: {
                            title: {
                                display: true,
                                text: hasData ? 'Achievement Status' : 'No Data Available'
                            },
                            legend: {
                                position: 'bottom'
                            }
                        }
                    }
                });
            },

            // Load recent activity
            loadRecentActivity: function() {
                const tbody = document.getElementById('activity-tbody');
                tbody.innerHTML = '';

                // Show "no data" message
                const row = tbody.insertRow();
                row.innerHTML = `
                    <td colspan="4" class="text-center text-muted py-4">
                        <i class="fas fa-info-circle me-2"></i>
                        No activity yet. Upload a KPI file to get started.
                    </td>
                `;
            },

            // Load files table
            loadFilesTable: function(files) {
                const tbody = document.getElementById('files-tbody');
                tbody.innerHTML = '';

                if (files.length === 0) {
                    const row = tbody.insertRow();
                    row.innerHTML = `
                        <td colspan="5" class="text-center text-muted py-4">
                            <i class="fas fa-folder-open me-2"></i>
                            No files uploaded yet. Click "Upload File" to get started.
                        </td>
                    `;
                    return;
                }

                files.forEach(file => {
                    const row = tbody.insertRow();
                    row.innerHTML = `
                        <td>${file.filename}</td>
                        <td>${file.upload_date}</td>
                        <td>${file.size}</td>
                        <td>
                            ${file.processed
                                ? '<span class="badge bg-success">Processed</span>'
                                : '<span class="badge bg-warning">Processing...</span>'}
                        </td>
                        <td>
                            ${file.processed
                                ? `<button class="btn btn-sm btn-primary" onclick="viewAnalysis(${file.id})">
                                        <i class="fas fa-chart-bar me-1"></i>View Analysis
                                    </button>
                                    <button class="btn btn-sm btn-secondary" onclick="downloadReport(${file.id})">
                                        <i class="fas fa-download me-1"></i>Report
                                    </button>
                                    <button class="btn btn-sm btn-danger" onclick="deleteFile(${file.id})">
                                        <i class="fas fa-trash me-1"></i>Delete
                                    </button>`
                                : `<span class="text-muted">Processing...</span>
                                    <button class="btn btn-sm btn-danger" onclick="deleteFile(${file.id})">
                                        <i class="fas fa-trash me-1"></i>Delete
                                    </button>`}
                        </td>
                    `;
                });
            },

            // Refresh data
            refreshData: function() {
                this.loadDashboardData();
            }
        };

        // Standalone functions
        async function uploadFile() {
            const fileInput = document.getElementById('fileInput');
            const progressBar = document.querySelector('#uploadProgress .progress-bar');
            const progressContainer = document.getElementById('uploadProgress');

            if (!fileInput.files[0]) {
                alert('Please select a file to upload');
                return;
            }

            const formData = new FormData();
            formData.append('file', fileInput.files[0]);

            progressContainer.style.display = 'block';
            progressBar.style.width = '50%';

            try {
                const response = await dashboard.authFetch('/api/files/upload', {
                    method: 'POST',
                    body: formData
                });

                const result = await response.json();

                if (result.success) {
                    progressBar.style.width = '100%';
                    alert('File uploaded successfully! Analysis is running in the background.');
                    $('#uploadModal').modal('hide');
                    fileInput.value = '';
                    dashboard.loadDashboardData();
                } else {
                    alert('Upload failed: ' + (result.message || 'Unknown error'));
                }
            } catch (error) {
                console.error('Upload error:', error);
                alert('Upload failed: ' + error.message);
            } finally {
                progressContainer.style.display = 'none';
                progressBar.style.width = '0%';
            }
        }

        async function viewAnalysis(fileId) {
            try {
                const response = await dashboard.authFetch(`/api/analysis/${fileId}`);
                const result = await response.json();

                if (result.success) {
                    // Switch to analysis tab
                    const analysisTab = document.querySelector('a[href="#analysis"]');
                    analysisTab.click();

                    // Display analysis results
                    const analysisContent = document.getElementById('analysis-content');
|
||||
analysisContent.innerHTML = `
|
||||
<div class="row">
|
||||
<div class="col-md-6">
|
||||
<h5>Overall Score</h5>
|
||||
<h2 class="text-primary">${result.total_score.toFixed(1)}%</h2>
|
||||
</div>
|
||||
<div class="col-md-6">
|
||||
<h5>Achievement Rate</h5>
|
||||
<h2 class="text-success">${result.achievements.achievement_rate.toFixed(1)}%</h2>
|
||||
</div>
|
||||
</div>
|
||||
<hr>
|
||||
<h5>Perspective Scores</h5>
|
||||
<div class="row">
|
||||
${Object.entries(result.perspective_scores).map(([key, value]) => `
|
||||
<div class="col-md-3 mb-3">
|
||||
<div class="card">
|
||||
<div class="card-body text-center">
|
||||
<h6>${key}</h6>
|
||||
<h4 class="text-info">${value.toFixed(1)}%</h4>
|
||||
</div>
|
||||
</div>
|
||||
</div>
|
||||
`).join('')}
|
||||
</div>
|
||||
<hr>
|
||||
<h5>Recommendations</h5>
|
||||
<ul>
|
||||
${result.recommendations.map(rec => `<li>${rec}</li>`).join('')}
|
||||
</ul>
|
||||
`;
|
||||
} else {
|
||||
alert('Analysis not available yet. Please wait for processing to complete.');
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Error loading analysis:', error);
|
||||
alert('Failed to load analysis results.');
|
||||
}
|
||||
}
|
||||
|
||||
async function downloadReport(fileId) {
|
||||
window.location.href = `/api/analysis/${fileId}/report`;
|
||||
}
|
||||
|
||||
async function deleteFile(fileId) {
|
||||
if (!confirm('Are you sure you want to delete this file? This action cannot be undone.')) {
|
||||
return;
|
||||
}
|
||||
|
||||
try {
|
||||
const response = await dashboard.authFetch(`/api/files/delete/${fileId}`, {
|
||||
method: 'DELETE'
|
||||
});
|
||||
|
||||
const result = await response.json();
|
||||
|
||||
if (result.success) {
|
||||
alert('File deleted successfully');
|
||||
dashboard.loadDashboardData();
|
||||
} else {
|
||||
alert('Failed to delete file: ' + (result.message || 'Unknown error'));
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Delete error:', error);
|
||||
alert('Failed to delete file: ' + error.message);
|
||||
}
|
||||
}
|
||||
|
||||
function connectNextcloud() {
|
||||
window.location.href = '/api/auth/nextcloud';
|
||||
}
|
||||
|
||||
function refreshData() {
|
||||
dashboard.refreshData();
|
||||
}
|
||||
|
||||
// Initialize dashboard when page loads
|
||||
$(document).ready(function() {
|
||||
dashboard.initialize();
|
||||
});
|
||||
</script>
|
||||
</body>
|
||||
</html>
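The file table above interpolates server-provided values such as `file.filename` straight into `row.innerHTML`, so a filename like `<img src=x onerror=...>.xlsx` would execute as markup. A minimal sketch of an escaping helper (the `escapeHtml` name is hypothetical, not part of this commit) that could wrap those interpolations:

```javascript
// Hypothetical helper: escape user-controlled strings before they are
// interpolated into innerHTML template literals in loadFilesTable.
function escapeHtml(value) {
    return String(value)
        .replace(/&/g, '&amp;')   // must run first, or later entities double-escape
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;')
        .replace(/"/g, '&quot;')
        .replace(/'/g, '&#39;');
}

// Usage inside the row template:
//   `<td>${escapeHtml(file.filename)}</td>`
```

Numeric fields such as `file.id` would still need separate validation, since they are injected into inline `onclick` attributes rather than element content.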
229
kpi_analysis/templates/login.html
Normal file
@ -0,0 +1,229 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>KPI Analysis Dashboard - Login</title>
    <link href="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/css/bootstrap.min.css" rel="stylesheet">
    <link href="https://cdnjs.cloudflare.com/ajax/libs/font-awesome/6.0.0/css/all.min.css" rel="stylesheet">
    <style>
        body {
            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
            min-height: 100vh;
            display: flex;
            align-items: center;
            justify-content: center;
        }
        .login-container {
            background: white;
            border-radius: 15px;
            box-shadow: 0 15px 35px rgba(0, 0, 0, 0.1);
            padding: 40px;
            width: 100%;
            max-width: 400px;
        }
        .login-header {
            text-align: center;
            margin-bottom: 30px;
        }
        .login-header h1 {
            color: #333;
            font-size: 24px;
            font-weight: 600;
            margin-bottom: 10px;
        }
        .login-header .subtitle {
            color: #666;
            font-size: 14px;
        }
        .form-label {
            font-weight: 600;
            color: #333;
        }
        .form-control {
            border: 2px solid #e1e5e9;
            border-radius: 8px;
            padding: 12px 15px;
            font-size: 14px;
        }
        .form-control:focus {
            border-color: #667eea;
            box-shadow: 0 0 0 0.2rem rgba(102, 126, 234, 0.25);
        }
        .btn-login {
            background: linear-gradient(135deg, #667eea 0%, #764ba2 100%);
            border: none;
            border-radius: 8px;
            padding: 12px;
            font-weight: 600;
            text-transform: uppercase;
            letter-spacing: 0.5px;
            transition: all 0.3s ease;
        }
        .btn-login:hover {
            transform: translateY(-2px);
            box-shadow: 0 8px 25px rgba(102, 126, 234, 0.3);
        }
        .demo-credentials {
            background: #f8f9fa;
            border: 1px solid #dee2e6;
            border-radius: 8px;
            padding: 15px;
            margin-top: 20px;
        }
        .demo-credentials h6 {
            color: #495057;
            margin-bottom: 10px;
        }
        .alert {
            border-radius: 8px;
            margin-top: 15px;
        }
        .loading {
            display: none;
        }
    </style>
</head>
<body>
    <div class="login-container">
        <div class="login-header">
            <i class="fas fa-chart-line fa-2x text-primary mb-3"></i>
            <h1>KPI Analysis Dashboard</h1>
            <p class="subtitle">Please sign in to continue</p>
        </div>

        <form id="loginForm">
            <div class="mb-3">
                <label for="username" class="form-label">Username</label>
                <input type="text" class="form-control" id="username" value="admin" required>
            </div>

            <div class="mb-3">
                <label for="password" class="form-label">Password</label>
                <input type="password" class="form-control" id="password" value="super" required>
            </div>

            <button type="submit" class="btn btn-primary btn-login w-100" id="loginBtn">
                <span class="login-text">
                    <i class="fas fa-sign-in-alt me-2"></i>
                    Sign In
                </span>
                <span class="loading">
                    <i class="fas fa-spinner fa-spin me-2"></i>
                    Signing In...
                </span>
            </button>

            <div id="alertMessage"></div>
        </form>

        <div class="demo-credentials">
            <h6><i class="fas fa-info-circle me-2"></i>Demo Credentials</h6>
            <p class="mb-1"><strong>Username:</strong> admin</p>
            <p class="mb-0"><strong>Password:</strong> super</p>
        </div>
    </div>

    <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/js/bootstrap.bundle.min.js"></script>
    <script>
        document.getElementById('loginForm').addEventListener('submit', async function(e) {
            e.preventDefault();

            const username = document.getElementById('username').value;
            const password = document.getElementById('password').value;
            const loginBtn = document.getElementById('loginBtn');
            const loginText = loginBtn.querySelector('.login-text');
            const loading = loginBtn.querySelector('.loading');
            const alertDiv = document.getElementById('alertMessage');

            // Show loading state
            loginText.style.display = 'none';
            loading.style.display = 'inline';
            loginBtn.disabled = true;
            alertDiv.innerHTML = '';

            try {
                console.log('Attempting login...');
                const response = await fetch('/api/auth/login', {
                    method: 'POST',
                    headers: {
                        'Content-Type': 'application/json'
                    },
                    body: JSON.stringify({
                        username: username,
                        password: password
                    })
                });

                const result = await response.json();
                console.log('Login response:', result);

                if (response.ok && result.success) {
                    // Store token and user data
                    localStorage.setItem('kpi_token', result.access_token);
                    localStorage.setItem('kpi_user', JSON.stringify(result.user));

                    // Show success and redirect
                    alertDiv.innerHTML = `
                        <div class="alert alert-success">
                            <i class="fas fa-check-circle me-1"></i>
                            Login successful! Redirecting...
                        </div>
                    `;

                    setTimeout(() => {
                        window.location.href = '/dashboard';
                    }, 1000);

                } else {
                    // Show error
                    alertDiv.innerHTML = `
                        <div class="alert alert-danger">
                            <i class="fas fa-exclamation-circle me-1"></i>
                            ${result.detail || 'Invalid credentials'}
                        </div>
                    `;
                }

            } catch (error) {
                console.error('Login error:', error);
                alertDiv.innerHTML = `
                    <div class="alert alert-danger">
                        <i class="fas fa-exclamation-circle me-1"></i>
                        Network error. Please try again.
                    </div>
                `;
            } finally {
                // Reset button state
                loginText.style.display = 'inline';
                loading.style.display = 'none';
                loginBtn.disabled = false;
            }
        });

        // Check if already logged in
        const token = localStorage.getItem('kpi_token');
        if (token) {
            // Verify token is still valid
            fetch('/api/auth/me', {
                headers: {
                    'Authorization': `Bearer ${token}`
                }
            }).then(response => {
                if (response.ok) {
                    // Token is valid, redirect to dashboard
                    window.location.href = '/dashboard';
                } else {
                    // Token is invalid, clear it
                    localStorage.removeItem('kpi_token');
                    localStorage.removeItem('kpi_user');
                }
            }).catch(() => {
                // Network error, clear token
                localStorage.removeItem('kpi_token');
                localStorage.removeItem('kpi_user');
            });
        }
    </script>
</body>
</html>
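The login page validates a stored token with a round-trip to `/api/auth/me`. As a hedged sketch only — this commit does not confirm the token format, but if `kpi_token` is a standard JWT with an `exp` claim, a cheap client-side expiry pre-check (the `isTokenExpired` helper is hypothetical) could skip the network call for clearly stale tokens:

```javascript
// Hypothetical helper: decode a JWT payload locally and report whether
// its exp claim (seconds since epoch, per RFC 7519) is already past.
// This is only a pre-check; the server remains the authority.
function isTokenExpired(token) {
    try {
        const payloadB64 = token.split('.')[1]
            .replace(/-/g, '+')   // base64url -> base64
            .replace(/_/g, '/');
        const payload = JSON.parse(atob(payloadB64));
        return typeof payload.exp === 'number' && payload.exp * 1000 < Date.now();
    } catch (e) {
        return true; // malformed or non-JWT token: treat as expired
    }
}

// Possible usage before the /api/auth/me fetch:
//   if (token && !isTokenExpired(token)) { /* verify with server */ }
```

Treating unparseable tokens as expired fails safe: the page falls back to showing the login form rather than redirecting on garbage data.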
BIN
reports/achievement_chart_20251125_163629.png
Normal file
After Width: | Height: | Size: 152 KiB
BIN
reports/achievement_chart_20251125_163738.png
Normal file
After Width: | Height: | Size: 152 KiB
BIN
reports/achievement_chart_20251125_163946.png
Normal file
After Width: | Height: | Size: 152 KiB
BIN
reports/achievement_chart_20251125_164220.png
Normal file
After Width: | Height: | Size: 152 KiB
BIN
reports/achievement_chart_20251125_164250.png
Normal file
After Width: | Height: | Size: 152 KiB
BIN
reports/achievement_chart_20251125_165119.png
Normal file
After Width: | Height: | Size: 152 KiB
BIN
reports/achievement_chart_20251125_165209.png
Normal file
After Width: | Height: | Size: 152 KiB
BIN
reports/achievement_chart_20251125_165433.png
Normal file
After Width: | Height: | Size: 152 KiB
BIN
reports/achievement_chart_20251125_170033.png
Normal file
After Width: | Height: | Size: 154 KiB
BIN
reports/achievement_chart_20251125_170135.png
Normal file
After Width: | Height: | Size: 152 KiB
BIN
reports/achievement_chart_20251125_170206.png
Normal file
After Width: | Height: | Size: 74 KiB
BIN
reports/achievement_chart_20251125_170625.png
Normal file
After Width: | Height: | Size: 74 KiB
BIN
reports/achievement_chart_20251125_170755.png
Normal file
After Width: | Height: | Size: 74 KiB
BIN
reports/achievement_chart_20251125_171136.png
Normal file
After Width: | Height: | Size: 72 KiB
BIN
reports/achievement_chart_20251125_171412.png
Normal file
After Width: | Height: | Size: 74 KiB
BIN
reports/achievement_chart_20251125_171540.png
Normal file
After Width: | Height: | Size: 72 KiB
BIN
reports/achievement_chart_20251125_172032.png
Normal file
After Width: | Height: | Size: 72 KiB
BIN
reports/achievement_chart_20251125_172327.png
Normal file
After Width: | Height: | Size: 71 KiB
BIN
reports/achievement_chart_20251125_182158.png
Normal file
After Width: | Height: | Size: 71 KiB
BIN
reports/achievement_chart_20251125_182540.png
Normal file
After Width: | Height: | Size: 71 KiB
BIN
reports/achievement_chart_20251125_183035.png
Normal file
After Width: | Height: | Size: 72 KiB
BIN
reports/achievement_chart_20251125_183338.png
Normal file
After Width: | Height: | Size: 72 KiB
BIN
reports/achievement_chart_20251125_183707.png
Normal file
After Width: | Height: | Size: 71 KiB
BIN
reports/achievement_chart_20251125_183802.png
Normal file
After Width: | Height: | Size: 53 KiB
BIN
reports/achievement_chart_20251125_183920.png
Normal file
After Width: | Height: | Size: 72 KiB
BIN
reports/achievement_chart_20251125_184037.png
Normal file
After Width: | Height: | Size: 72 KiB
BIN
reports/achievement_chart_20251125_184235.png
Normal file
After Width: | Height: | Size: 72 KiB
BIN
reports/achievement_chart_20251125_184505.png
Normal file
After Width: | Height: | Size: 72 KiB
BIN
reports/achievement_chart_20251125_184624.png
Normal file
After Width: | Height: | Size: 71 KiB
BIN
reports/achievement_chart_20251125_184809.png
Normal file
After Width: | Height: | Size: 72 KiB
BIN
reports/achievement_chart_20251125_184950.png
Normal file
After Width: | Height: | Size: 72 KiB
BIN
reports/achievement_chart_20251125_185152.png
Normal file
After Width: | Height: | Size: 67 KiB
BIN
reports/achievement_chart_20251125_185343.png
Normal file
After Width: | Height: | Size: 68 KiB