first commit

This commit is contained in:
admin.suherdy 2025-12-02 23:14:27 +07:00
commit f574cc297b
22 changed files with 1580 additions and 0 deletions

152
.gitignore vendored Normal file
View File

@@ -0,0 +1,152 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
pip-wheel-metadata/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
.python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# IDE
.vscode/
.idea/
*.swp
*.swo
*~
# OS
.DS_Store
.DS_Store?
._*
.Spotlight-V100
.Trashes
ehthumbs.db
Thumbs.db
# Node modules (if using any JS tools)
node_modules/
# Temporary files
*.tmp
*.temp

322
README.md Normal file
View File

@@ -0,0 +1,322 @@
# POS Face Recognition
## Overview
This Odoo module extends the Point of Sale (POS) system with AI-powered face recognition capabilities. It enables automatic customer identification through facial recognition, providing staff with instant access to customer information, purchase history, and personalized recommendations.
## Features
### 🎯 Core Functionality
- **Real-time Face Recognition**: Automatically identifies customers using webcam in the POS interface
- **Customer Training**: Capture and store up to 3 face images per customer for accurate recognition
- **Automatic Sync**: Face images are automatically synchronized with the AI server
- **Confidence Scoring**: Displays match probability for each recognized customer
### 📊 Customer Insights
When a customer is recognized, the sidebar displays:
- **Customer Information**: Name and contact details
- **Last 2 Orders**: Complete order history including:
- Order number and date
- Total amount
- Order status (with color-coded badges)
- Complete product list with quantities and prices
- **Top 3 Products**: Most frequently purchased items by the customer
### 🎨 User Interface
- **Sidebar Integration**: Non-intrusive sidebar in the POS product screen
- **Live Camera Feed**: Real-time video preview for face recognition
- **Match List**: Shows all potential customer matches with confidence scores
- **Detailed Order Cards**: Beautifully styled order history with hover effects
- **Responsive Design**: Optimized for POS touchscreen interfaces
## Requirements
### Odoo Dependencies
- `point_of_sale`
- `contacts`
### External Dependencies
- **AI Face Recognition Server**: A separate Python Flask server that handles face recognition
- Repository: `ai_face_recognition_server` (included in workspace)
- Required Python packages: `flask`, `flask-cors`, `face_recognition`, `numpy`, `opencv-python`
### Hardware Requirements
- Webcam or camera device
- Sufficient lighting for face recognition
## Installation
### 1. Install the Odoo Module
```bash
# Copy the module to your Odoo addons directory
cp -r pos_face_recognition /path/to/odoo/addons/
# Update the addons list in Odoo
# Navigate to Apps > Update Apps List
# Search for "POS Face Recognition" and install
```
### 2. Setup the AI Face Recognition Server
```bash
# Navigate to the AI server directory
cd ai_face_recognition_server
# Install Python dependencies
pip install -r requirements.txt
# Start the server
python app.py
```
The server will run on `http://localhost:5000` by default.
### 3. Configure the Module
1. Go to **Point of Sale > Configuration > Point of Sale**
2. Select your POS configuration
3. In the **Face Recognition** section, set the **Face Recognition Server URL**:
```
http://localhost:5000
```
4. Save the configuration
## Usage
### Training Customer Faces
1. Navigate to **Contacts**
2. Open a customer record
3. In the **Face Recognition** tab, you'll find three image fields:
- **Face Image 1**
- **Face Image 2**
- **Face Image 3**
4. Upload clear, front-facing photos of the customer
5. Save the record - images are automatically synced to the AI server
**Best Practices for Training Images:**
- Use well-lit, clear photos
- Capture different angles and expressions
- Ensure the face is clearly visible
- Avoid sunglasses or face coverings
- Use recent photos
### Using Face Recognition in POS
1. Open the POS session
2. The face recognition sidebar appears on the right side
3. The camera automatically starts and begins scanning for faces
4. When a customer is recognized:
- Their name appears in the match list with a confidence score
- Click on the match to select the customer
- View their order history and top products
- The customer is automatically set for the current order
### Understanding the Display
**Match List:**
- Shows all potential matches with confidence percentage
- Higher percentage = more confident match
- Click any match to select that customer
**Order History:**
- Displays the last 2 orders with complete details
- Color-coded status badges:
- 🟢 Green: Paid/Done
- 🔵 Blue: Invoiced
- Shows all products ordered with quantities and prices
**Top Products:**
- Highlights the 3 most frequently purchased items
- Helps staff make personalized recommendations
## Configuration
### POS Configuration
**Settings > Point of Sale > Configuration > Point of Sale**
- `pos_face_rec_server_url`: URL of the AI Face Recognition Server
### Partner Configuration
**Contacts > Customer**
- `image_face_1`: First training image
- `image_face_2`: Second training image
- `image_face_3`: Third training image
## Technical Details
### Module Structure
```
pos_face_recognition/
├── __init__.py
├── __manifest__.py
├── README.md
├── models/
│ ├── __init__.py
│ ├── pos_config.py # POS configuration extension
│ ├── res_partner.py # Customer model with face images
│ └── res_config_settings.py # Settings configuration
├── views/
│ ├── pos_config_views.xml
│ ├── res_partner_views.xml
│ └── res_config_settings_views.xml
└── static/
└── src/
├── css/
│ └── face_recognition.css
├── js/
│ ├── face_recognition_sidebar.js
│ ├── models.js
│ └── partner_details_edit.js
└── xml/
├── face_recognition_screens.xml
└── partner_details_edit.xml
```
### API Endpoints
The module communicates with the AI server using these endpoints:
**POST /train**
- Trains the AI model with customer face images
- Payload: `{ partner_id, name, images: [base64_image1, base64_image2, base64_image3] }`
**POST /recognize**
- Recognizes faces in the provided image
- Payload: `{ image: base64_image }`
- Response: `{ matches: [{ id, name, probability }] }`
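The `/train` payload above can be assembled outside Odoo as well, e.g. for a bulk import script. A minimal sketch (the partner data and image bytes are hypothetical placeholders; sending the request requires the `requests` package and a running server):

```python
import base64
import json

def build_train_payload(partner_id, name, image_bytes_list):
    """Build the JSON body expected by POST /train."""
    return {
        "partner_id": partner_id,
        "name": name,
        # The server expects bare base64 strings, without a "data:image/..." prefix.
        "images": [base64.b64encode(b).decode("ascii") for b in image_bytes_list],
    }

payload = build_train_payload(42, "Jane Doe", [b"<jpeg bytes here>"])
# requests.post("http://localhost:5000/train", json=payload, timeout=5)
print(json.dumps(payload)[:60])
```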
### Data Flow
1. **Training Phase:**
- Customer face images uploaded in Odoo
- Images automatically synced to AI server via `/train` endpoint
- AI server trains face recognition model
2. **Recognition Phase:**
- POS camera captures frames every 5 seconds
- Frame sent to AI server via `/recognize` endpoint
- Server returns matching customers with confidence scores
- POS displays matches and fetches order history
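The recognition response can be post-processed client-side before trusting a match. A sketch of picking the best candidate above a confidence threshold (the response shape follows the API section above; the 0.5 cutoff is an illustrative choice, not part of the module):

```python
def best_match(response, threshold=0.5):
    """Return the highest-probability match above threshold, or None."""
    candidates = [m for m in response.get("matches", []) if m["probability"] >= threshold]
    return max(candidates, key=lambda m: m["probability"]) if candidates else None

resp = {"matches": [
    {"id": 0, "name": "Unknown", "probability": 0.0},
    {"id": 42, "name": "Jane Doe", "probability": 0.72},
]}
match = best_match(resp)  # → {"id": 42, "name": "Jane Doe", "probability": 0.72}
```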
### Database Schema
**res.partner (extended)**
```python
image_face_1 = fields.Binary("Face Image 1", attachment=True)
image_face_2 = fields.Binary("Face Image 2", attachment=True)
image_face_3 = fields.Binary("Face Image 3", attachment=True)
```
**pos.config (extended)**
```python
pos_face_rec_server_url = fields.Char("Face Recognition Server URL")
```
## Troubleshooting
### Camera Not Working
- **Check browser permissions**: Ensure the browser has camera access
- **HTTPS requirement**: Some browsers require HTTPS for camera access
- **Check console**: Open browser developer tools for error messages
### No Matches Found
- **Verify server connection**: Check if the AI server is running
- **Check server URL**: Ensure the URL in POS config is correct
- **Training data**: Verify customer has face images uploaded
- **Lighting**: Ensure adequate lighting for face detection
### Images Not Syncing
- **Check server URL**: Verify the Face Recognition Server URL is configured
- **Server logs**: Check AI server logs for errors
- **Network connectivity**: Ensure Odoo can reach the AI server
- **Image format**: Ensure images are in supported formats (JPEG, PNG)
### Performance Issues
- **Recognition interval**: Default is 5 seconds, can be adjusted in code
- **Image quality**: Lower resolution images process faster
- **Server resources**: Ensure AI server has adequate CPU/RAM
## Development
### Extending the Module
**Add more customer insights:**
```python
# In res_partner.py
def get_customer_stats(self):
self.ensure_one()
# Add custom logic
return {...}
```
**Customize recognition interval:**
```javascript
// In face_recognition_sidebar.js
this.recognitionInterval = setInterval(() => {
this.captureAndSendFrame();
}, 3000); // Change from 5000 to 3000 for 3 seconds
```
**Add more order history:**
```javascript
// In face_recognition_sidebar.js
const orders = await this.orm.searchRead("pos.order",
[['partner_id', '=', partnerId]],
['name', 'date_order', 'amount_total', 'state', 'lines'],
{ limit: 5, order: "date_order desc" } // Change from 2 to 5
);
```
## Security & Privacy
- Face images are stored securely in Odoo's attachment system
- Communication with AI server should use HTTPS in production
- Comply with local privacy laws (GDPR, CCPA, etc.)
- Obtain customer consent before capturing face images
- Implement data retention policies
- Provide customers with opt-out options
## License
LGPL-3
## Support
For issues, questions, or contributions:
- Check the troubleshooting section
- Review server logs for detailed error messages
- Ensure all dependencies are properly installed
## Changelog
### Version 17.0.1.0.0
- Initial release
- Real-time face recognition in POS
- Customer training with 3 images
- Last 2 orders display with detailed information
- Top 3 products recommendation
- Automatic image synchronization
- Confidence scoring for matches
- Responsive sidebar UI with enhanced styling
## Credits
Developed for Odoo 17.0

1
__init__.py Normal file
View File

@@ -0,0 +1 @@
from . import models

30
__manifest__.py Normal file
View File

@@ -0,0 +1,30 @@
{
'name': 'POS Face Recognition',
'version': '17.0.1.0.0',
'category': 'Point of Sale',
'summary': 'Face Recognition for POS',
'description': """
This module extends the Point of Sale to include Face Recognition capabilities.
- Capture customer images for training.
- Recognize customers in POS using AI.
""",
'depends': ['point_of_sale', 'contacts'],
'data': [
'views/res_partner_views.xml',
'views/pos_config_views.xml',
'views/res_config_settings_views.xml',
],
'assets': {
'point_of_sale.assets_prod': [
'pos_face_recognition/static/src/xml/face_recognition_screens.xml',
'pos_face_recognition/static/src/xml/partner_details_edit.xml',
'pos_face_recognition/static/src/js/models.js',
'pos_face_recognition/static/src/js/face_recognition_sidebar.js',
'pos_face_recognition/static/src/js/partner_details_edit.js',
'pos_face_recognition/static/src/css/face_recognition.css',
],
},
'installable': True,
'application': False,
'license': 'LGPL-3',
}

View File

@@ -0,0 +1,170 @@
from flask import Flask, request, jsonify
from flask_cors import CORS
import face_recognition
import numpy as np
import base64
import cv2
import os
app = Flask(__name__)
CORS(app) # Enable CORS for all routes
# In-memory storage for known face encodings and names
known_face_encodings = []
known_face_names = []
known_face_ids = []
def load_known_faces():
"""
Load known faces from a directory or database.
For this MVP, we'll assume images are stored in a 'faces' directory
with filenames like 'partner_id_name.jpg'.
In a real production scenario, you'd fetch these from Odoo or a shared storage.
"""
global known_face_encodings, known_face_names, known_face_ids
faces_dir = "faces"
if not os.path.exists(faces_dir):
os.makedirs(faces_dir)
print(f"Created {faces_dir} directory. Please add images there.")
return
for filename in os.listdir(faces_dir):
if filename.endswith((".jpg", ".png", ".jpeg")):
try:
# Filename format: ID_Name.jpg
name_part = os.path.splitext(filename)[0]
parts = name_part.split('_', 1)
if len(parts) == 2:
p_id = int(parts[0])
name = parts[1]
else:
p_id = 0
name = name_part
image_path = os.path.join(faces_dir, filename)
image = face_recognition.load_image_file(image_path)
encodings = face_recognition.face_encodings(image)
if encodings:
known_face_encodings.append(encodings[0])
known_face_names.append(name)
known_face_ids.append(p_id)
print(f"Loaded face for: {name} (ID: {p_id})")
except Exception as e:
print(f"Error loading {filename}: {e}")
@app.route('/recognize', methods=['POST'])
def recognize_face():
data = request.json
if 'image' not in data:
return jsonify({'error': 'No image provided'}), 400
try:
# Decode base64 image
image_data = base64.b64decode(data['image'])
nparr = np.frombuffer(image_data, np.uint8)
frame = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
if frame is None:
return jsonify({'error': 'Invalid image data'}), 400
# Convert BGR (OpenCV) to RGB (face_recognition)
rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
# Ensure the image is in the correct format
rgb_frame = np.ascontiguousarray(rgb_frame, dtype=np.uint8)
# Find all faces in the current frame
face_locations = face_recognition.face_locations(rgb_frame)
face_encodings = face_recognition.face_encodings(rgb_frame, face_locations)
matches_list = []
for face_encoding in face_encodings:
# See if the face is a match for the known face(s)
matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
name = "Unknown"
p_id = 0
probability = 0.0
# Or instead, use the known face with the smallest distance to the new face
face_distances = face_recognition.face_distance(known_face_encodings, face_encoding)
if len(face_distances) > 0:
best_match_index = np.argmin(face_distances)
if matches[best_match_index]:
name = known_face_names[best_match_index]
p_id = known_face_ids[best_match_index]
# Simple linear probability estimate (lower distance = higher probability):
# probability = 1 - distance, so distance 0.0 -> 100% and the default 0.6 match threshold -> 40%
distance = face_distances[best_match_index]
probability = max(0, 1.0 - distance)
matches_list.append({
'id': p_id,
'name': name,
'probability': probability
})
# Sort by probability
matches_list.sort(key=lambda x: x['probability'], reverse=True)
return jsonify({'matches': matches_list})
except Exception as e:
print(f"Error processing image: {e}")
return jsonify({'error': str(e)}), 500
@app.route('/train', methods=['POST'])
def train_face():
data = request.json
if 'partner_id' not in data or 'images' not in data:
return jsonify({'error': 'Missing partner_id or images'}), 400
partner_id = data['partner_id']
name = data.get('name', 'Unknown')
images = data['images'] # List of base64 strings
faces_dir = "faces"
if not os.path.exists(faces_dir):
os.makedirs(faces_dir)
processed_count = 0
# Re-training overwrites files per slot (same partner/name/index filename), but
# the in-memory encodings below are appended, so duplicates accumulate until the
# server restarts. A production version should prune old encodings first.
for idx, img_data in enumerate(images):
if not img_data:
continue
try:
# Decode base64
image_bytes = base64.b64decode(img_data)
filename = f"{partner_id}_{name.replace(' ', '_')}_{idx}.jpg"
filepath = os.path.join(faces_dir, filename)
with open(filepath, "wb") as f:
f.write(image_bytes)
# Update in-memory knowledge
image = face_recognition.load_image_file(filepath)
encodings = face_recognition.face_encodings(image)
if encodings:
known_face_encodings.append(encodings[0])
known_face_names.append(name)
known_face_ids.append(partner_id)
processed_count += 1
print(f"Trained face for: {name} (ID: {partner_id}) - Image {idx}")
except Exception as e:
print(f"Error saving/training image {idx} for {name}: {e}")
return jsonify({'message': f'Processed {processed_count} images for {name}'})
if __name__ == '__main__':
load_known_faces()
app.run(host='0.0.0.0', port=5000)

Binary file not shown (size: 50 KiB).

Binary file not shown (size: 36 KiB).

Binary file not shown (size: 35 KiB).

View File

@@ -0,0 +1,5 @@
flask
flask-cors
face_recognition
numpy
opencv-python

4
models/__init__.py Normal file
View File

@@ -0,0 +1,4 @@
from . import res_partner
from . import res_config_settings
from . import pos_config

10
models/pos_config.py Normal file
View File

@@ -0,0 +1,10 @@
from odoo import models, fields
class PosConfig(models.Model):
_inherit = 'pos.config'
pos_face_rec_server_url = fields.Char(
string="Face Recognition Server URL",
help="URL of the AI Face Recognition Server (e.g., http://localhost:5000)"
)

View File

@@ -0,0 +1,12 @@
from odoo import models, fields
class ResConfigSettings(models.TransientModel):
_inherit = 'res.config.settings'
pos_face_rec_server_url = fields.Char(
string="AI Face Recognition Server URL",
related='pos_config_id.pos_face_rec_server_url',
readonly=False,
help="URL of the AI Face Recognition Server (e.g., http://localhost:5000)"
)

74
models/res_partner.py Normal file
View File

@@ -0,0 +1,74 @@
from odoo import models, fields, api
import requests
import logging
_logger = logging.getLogger(__name__)
class ResPartner(models.Model):
_inherit = 'res.partner'
image_face_1 = fields.Binary("Face Image 1", attachment=True)
image_face_2 = fields.Binary("Face Image 2", attachment=True)
image_face_3 = fields.Binary("Face Image 3", attachment=True)
def _sync_face_images(self):
# Get the server URL from any POS config that has it configured
pos_config = self.env['pos.config'].search([('pos_face_rec_server_url', '!=', False)], limit=1)
if not pos_config or not pos_config.pos_face_rec_server_url:
_logger.warning("Face Recognition Server URL is not configured in any POS configuration")
return
server_url = pos_config.pos_face_rec_server_url
for partner in self:
# Only sync if at least one image is present
if not any([partner.image_face_1, partner.image_face_2, partner.image_face_3]):
continue
images = [
partner.image_face_1.decode('utf-8') if isinstance(partner.image_face_1, bytes) else partner.image_face_1,
partner.image_face_2.decode('utf-8') if isinstance(partner.image_face_2, bytes) else partner.image_face_2,
partner.image_face_3.decode('utf-8') if isinstance(partner.image_face_3, bytes) else partner.image_face_3
]
payload = {
'partner_id': partner.id,
'name': partner.name,
'images': images
}
try:
# Remove trailing slash if present
base_url = server_url.rstrip('/')
requests.post(f"{base_url}/train", json=payload, timeout=5)
except Exception as e:
_logger.error(f"Failed to sync face images for partner {partner.id}: {e}")
@api.model_create_multi
def create(self, vals_list):
partners = super().create(vals_list)
partners._sync_face_images()
return partners
def write(self, vals):
res = super().write(vals)
if any(f in vals for f in ['image_face_1', 'image_face_2', 'image_face_3', 'name']):
self._sync_face_images()
return res
def get_top_products(self):
self.ensure_one()
# Simple implementation: get top 3 products by quantity from past orders.
# Note: from Odoo 16 on, product_template.name is a translated JSONB column,
# so pt.name returns the translation map rather than a plain string.
query = """
SELECT pt.name, sum(pol.qty) as qty
FROM pos_order_line pol
JOIN pos_order po ON pol.order_id = po.id
JOIN product_product pp ON pol.product_id = pp.id
JOIN product_template pt ON pp.product_tmpl_id = pt.id
WHERE po.partner_id = %s
GROUP BY pt.name
ORDER BY qty DESC
LIMIT 3
"""
self.env.cr.execute(query, (self.id,))
return [{'name': row[0], 'qty': row[1]} for row in self.env.cr.fetchall()]

View File

@@ -0,0 +1,255 @@
.face-recognition-sidebar {
position: absolute;
top: 65px;
right: 0;
bottom: 0;
width: 300px;
background: white;
border-left: 1px solid #ccc;
z-index: 100;
display: flex;
flex-direction: column;
box-shadow: -2px 0 5px rgba(0, 0, 0, 0.1);
}
.sidebar-header {
padding: 10px;
background: #f0f0f0;
display: flex;
justify-content: space-between;
align-items: center;
border-bottom: 1px solid #ccc;
}
.sidebar-content {
flex: 1;
overflow-y: auto;
padding: 10px;
}
.camera-container video {
background: #000;
border-radius: 4px;
}
.match-list {
list-style: none;
padding: 0;
}
.match-item {
padding: 8px;
border-bottom: 1px solid #eee;
cursor: pointer;
}
.match-item:hover {
background: #f9f9f9;
}
.match-info {
display: flex;
justify-content: space-between;
}
.customer-details {
margin-top: 20px;
border-top: 2px solid #eee;
padding-top: 10px;
}
.close-btn {
background: none;
border: none;
font-size: 1.2em;
cursor: pointer;
}
.face-recognition-sidebar .control-button {
cursor: pointer;
}
.face-recognition-sidebar.hidden {
display: none;
}
/* Add padding to ProductScreen to avoid sidebar overlap */
.pos .product-screen {
padding-right: 305px;
}
/* Order Details Styling */
.last-orders {
margin-top: 15px;
}
.last-orders h5 {
margin-bottom: 10px;
color: #333;
font-size: 14px;
font-weight: 600;
}
.orders-list {
display: flex;
flex-direction: column;
gap: 12px;
}
.order-card {
background: #f8f9fa;
border: 1px solid #e0e0e0;
border-radius: 6px;
padding: 10px;
transition: box-shadow 0.2s;
}
.order-card:hover {
box-shadow: 0 2px 8px rgba(0, 0, 0, 0.1);
}
.order-header {
display: flex;
justify-content: space-between;
align-items: center;
margin-bottom: 8px;
padding-bottom: 6px;
border-bottom: 1px solid #d0d0d0;
}
.order-number {
font-weight: 600;
color: #2c3e50;
font-size: 13px;
}
.order-date {
font-size: 11px;
color: #7f8c8d;
}
.order-details {
margin-bottom: 8px;
}
.order-row {
display: flex;
justify-content: space-between;
padding: 3px 0;
font-size: 12px;
}
.order-row .label {
color: #666;
font-weight: 500;
}
.order-row .value {
color: #333;
font-weight: 600;
}
.order-row .value.status {
text-transform: capitalize;
padding: 2px 8px;
border-radius: 3px;
font-size: 11px;
}
.order-row .value.status.paid {
background: #d4edda;
color: #155724;
}
.order-row .value.status.done {
background: #d4edda;
color: #155724;
}
.order-row .value.status.invoiced {
background: #cce5ff;
color: #004085;
}
.order-products {
margin-top: 8px;
padding-top: 8px;
border-top: 1px dashed #d0d0d0;
}
.products-label {
font-size: 11px;
color: #666;
font-weight: 600;
margin-bottom: 5px;
}
.product-list {
list-style: none;
padding: 0;
margin: 0;
}
.product-item {
display: flex;
align-items: center;
padding: 4px 0;
font-size: 11px;
gap: 6px;
}
.product-qty {
background: #007bff;
color: white;
padding: 2px 6px;
border-radius: 3px;
font-weight: 600;
min-width: 30px;
text-align: center;
}
.product-name {
flex: 1;
color: #333;
}
.product-price {
color: #28a745;
font-weight: 600;
}
.no-orders {
color: #999;
font-style: italic;
font-size: 12px;
text-align: center;
padding: 20px 0;
}
/* Top Products Styling */
.top-products {
margin-top: 15px;
padding-top: 15px;
border-top: 2px solid #e0e0e0;
}
.top-products h5 {
margin-bottom: 10px;
color: #333;
font-size: 14px;
font-weight: 600;
}
.top-products ul {
list-style: none;
padding: 0;
}
.top-products li {
padding: 6px 8px;
background: #fff3cd;
border-left: 3px solid #ffc107;
margin-bottom: 6px;
font-size: 12px;
border-radius: 3px;
}

View File

@@ -0,0 +1,184 @@
/** @odoo-module */
import { Component, useState, onMounted, onWillUnmount } from "@odoo/owl";
import { ProductScreen } from "@point_of_sale/app/screens/product_screen/product_screen";
import { useService } from "@web/core/utils/hooks";
import { registry } from "@web/core/registry";
export class FaceRecognitionSidebar extends Component {
static template = "pos_face_recognition.FaceRecognitionSidebar";
setup() {
this.pos = useService("pos");
this.orm = useService("orm");
console.log("FaceRecognitionSidebar setup");
this.state = useState({
matches: [],
selectedCustomer: null,
lastOrders: [],
topProducts: [],
});
this.recognitionInterval = null;
this.stream = null;
onMounted(() => {
this.startCamera();
});
// Cleanup interval and camera when component is destroyed
onWillUnmount(() => {
if (this.recognitionInterval) {
clearInterval(this.recognitionInterval);
}
if (this.stream) {
this.stream.getTracks().forEach(track => track.stop());
}
});
}
async startCamera() {
console.log("startCamera called");
try {
const video = document.getElementById('face-rec-video');
console.log("Video element found:", video);
if (video) {
this.stream = await navigator.mediaDevices.getUserMedia({ video: true });
console.log("Camera stream obtained:", this.stream);
video.srcObject = this.stream;
// Start real-time recognition loop once camera is ready
this.startRealTimeRecognition();
} else {
console.error("Video element not found!");
}
} catch (err) {
console.error("Error accessing camera:", err);
}
}
startRealTimeRecognition() {
console.log("startRealTimeRecognition called");
console.log("Server URL:", this.pos.config.pos_face_rec_server_url);
if (!this.pos.config.pos_face_rec_server_url) {
console.error("Face Recognition Server URL is not configured. Please configure it in POS Settings.");
return;
}
// Capture frame every 5 seconds
this.recognitionInterval = setInterval(() => {
console.log("Recognition interval tick");
this.captureAndSendFrame();
}, 5000);
}
async captureAndSendFrame() {
const video = document.getElementById('face-rec-video');
if (!video) {
console.error("Video element not found");
return;
}
if (!this.pos.config.pos_face_rec_server_url) {
console.error("Face Recognition Server URL is not configured");
return;
}
const canvas = document.createElement('canvas');
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
canvas.getContext('2d').drawImage(video, 0, 0);
// Convert to base64, remove header
const imageData = canvas.toDataURL('image/jpeg').split(',')[1];
try {
const response = await fetch(this.pos.config.pos_face_rec_server_url + '/recognize', {
method: 'POST',
headers: {
'Content-Type': 'application/json',
},
body: JSON.stringify({ image: imageData }),
});
if (response.ok) {
const result = await response.json();
if (result.matches && result.matches.length > 0) {
// Enrich matches with partner phone numbers
this.state.matches = result.matches.map(match => {
const partner = this.pos.db.get_partner_by_id(match.id);
return {
...match,
phone: partner ? partner.phone || partner.mobile : null
};
});
}
}
} catch (err) {
console.error("Error sending frame to AI server:", err);
}
}
async selectCustomer(match) {
const partner = this.pos.db.get_partner_by_id(match.id);
if (partner) {
this.state.selectedCustomer = partner;
this.pos.get_order().set_partner(partner);
// Fetch details
await this.fetchCustomerDetails(partner.id);
}
}
async fetchCustomerDetails(partnerId) {
// Fetch last 2 orders with detailed information
const orders = await this.orm.searchRead("pos.order",
[['partner_id', '=', partnerId]],
['name', 'date_order', 'amount_total', 'state', 'lines'],
{ limit: 2, order: "date_order desc" }
);
// Fetch order lines for each order
for (let order of orders) {
if (order.lines && order.lines.length > 0) {
const lines = await this.orm.searchRead("pos.order.line",
[['id', 'in', order.lines]],
['product_id', 'qty', 'price_subtotal_incl', 'full_product_name'],
{}
);
// Add product names to lines
order.lines = lines.map(line => ({
...line,
product_name: line.full_product_name || (line.product_id && line.product_id[1]) || 'Unknown Product'
}));
} else {
order.lines = [];
}
}
this.state.lastOrders = orders;
// Fetch top 3 products
try {
const topProducts = await this.orm.call("res.partner", "get_top_products", [partnerId]);
console.log("Top products received:", topProducts);
console.log("Type:", typeof topProducts);
console.log("First product:", topProducts && topProducts[0]);
// Ensure it's an array
if (Array.isArray(topProducts)) {
this.state.topProducts = topProducts;
} else {
console.error("topProducts is not an array:", topProducts);
this.state.topProducts = [];
}
} catch (error) {
console.error("Error fetching top products:", error);
this.state.topProducts = [];
}
}
}
ProductScreen.components = {
...ProductScreen.components,
FaceRecognitionSidebar,
};

20
static/src/js/models.js Normal file
View File

@ -0,0 +1,20 @@
/** @odoo-module */
import { PosStore } from "@point_of_sale/app/store/pos_store";
import { patch } from "@web/core/utils/patch";
patch(PosStore.prototype, {
async _processData(loadedData) {
await super._processData(...arguments);
// Full-resolution face images are deliberately NOT loaded for every partner at
// POS startup: standard Odoo only loads small avatars to save bandwidth, and the
// face images are only needed server-side (they are synced on save). If the edit
// screen must display existing images, fetch them on demand instead.
}
});

View File

@@ -0,0 +1,107 @@
/** @odoo-module */
import { PartnerDetailsEdit } from "@point_of_sale/app/screens/partner_list/partner_editor/partner_editor";
import { patch } from "@web/core/utils/patch";
import { getDataURLFromFile } from "@web/core/utils/urls";
import { ErrorPopup } from "@point_of_sale/app/errors/popups/error_popup";
import { _t } from "@web/core/l10n/translation";
import { loadImage } from "@point_of_sale/utils";
import { useRef, useState, onWillUnmount } from "@odoo/owl";
patch(PartnerDetailsEdit.prototype, {
setup() {
super.setup(...arguments);
this.changes.image_face_1 = this.changes.image_face_1 || false;
this.changes.image_face_2 = this.changes.image_face_2 || false;
this.changes.image_face_3 = this.changes.image_face_3 || false;
this.camera = useState({
activeField: null,
});
this.videoRef = useRef("video");
onWillUnmount(() => {
this.stopCamera();
});
},
async startCamera(fieldName) {
if (this.camera.activeField) {
this.stopCamera();
}
try {
const stream = await navigator.mediaDevices.getUserMedia({ video: true });
this.camera.activeField = fieldName;
this.stream = stream;
// Setting camera.activeField above triggers a re-render that mounts the video
// element. In Owl, refs are only attached after the patch, so wait a tick
// before assigning srcObject.
await Promise.resolve();
setTimeout(() => {
if (this.videoRef.el) {
this.videoRef.el.srcObject = stream;
}
}, 100);
} catch (err) {
console.error("Camera error", err);
this.popup.add(ErrorPopup, { title: _t("Camera Error"), body: _t("Could not access camera.") });
}
},
stopCamera() {
if (this.stream) {
this.stream.getTracks().forEach(track => track.stop());
this.stream = null;
}
this.camera.activeField = null;
},
capturePhoto() {
if (!this.videoRef.el) return;
const video = this.videoRef.el;
const canvas = document.createElement("canvas");
canvas.width = video.videoWidth;
canvas.height = video.videoHeight;
const ctx = canvas.getContext("2d");
ctx.drawImage(video, 0, 0);
const dataUrl = canvas.toDataURL('image/jpeg');
// Remove data:image/jpeg;base64, prefix
this.changes[this.camera.activeField] = dataUrl.split(',')[1];
this.stopCamera();
},
async uploadFaceImage(event, fieldName) {
const file = event.target.files[0];
if (!file) return;
if (!file.type.match(/image.*/)) {
await this.popup.add(ErrorPopup, {
title: _t("Unsupported File Format"),
body: _t("Only web-compatible Image formats such as .png or .jpeg are supported."),
});
return;
}
const imageUrl = await getDataURLFromFile(file);
const loadedImage = await loadImage(imageUrl, {
onError: () => {
this.popup.add(ErrorPopup, {
title: _t("Loading Image Error"),
body: _t("Encountered error when loading image. Please try again."),
});
}
});
if (loadedImage) {
// Resize to reasonable size for face recognition (e.g. 800x600 is usually enough)
const resizedImage = await this._resizeImage(loadedImage, 800, 600);
const dataUrl = resizedImage.toDataURL();
// Remove data:image/png;base64, prefix
this.changes[fieldName] = dataUrl.split(',')[1];
}
}
});


@ -0,0 +1,104 @@
<?xml version="1.0" encoding="UTF-8"?>
<templates id="template" xml:space="preserve">
<t t-name="pos_face_recognition.FaceRecognitionButton" owl="1">
<div class="control-button" t-on-click="onClick">
<i class="fa fa-camera" role="img" aria-label="Face Recognition" title="Face Recognition"/>
<span>Face Rec.</span>
</div>
</t>
<t t-name="pos_face_recognition.FaceRecognitionSidebar" owl="1">
<div class="face-recognition-sidebar">
<div class="sidebar-header">
<h3>Face Recognition</h3>
</div>
<div class="sidebar-content">
<div class="camera-container">
<video id="face-rec-video" autoplay="true" playsinline="true" width="100%"></video>
</div>
<div class="results-container" t-if="state.matches.length > 0">
<h4>Matches:</h4>
<ul class="match-list">
<t t-foreach="state.matches" t-as="match" t-key="match.id">
<li t-on-click="() => this.selectCustomer(match)" class="match-item">
<div class="match-info">
<div>
<span class="match-name"><t t-esc="match.name"/></span>
<span class="match-phone" t-if="match.phone"><br/><t t-esc="match.phone"/></span>
</div>
<span class="match-prob"><t t-esc="(match.probability * 100).toFixed(1)"/>%</span>
</div>
</li>
</t>
</ul>
</div>
<div class="customer-details" t-if="state.selectedCustomer">
<h4>Selected: <t t-esc="state.selectedCustomer.name"/></h4>
<div class="last-orders">
<h5>Last 2 Orders:</h5>
<div t-if="state.lastOrders and state.lastOrders.length > 0" class="orders-list">
<t t-foreach="state.lastOrders" t-as="order" t-key="order.id">
<div class="order-card">
<div class="order-header">
<span class="order-number">Order #<t t-esc="order.name || order.id"/></span>
<span class="order-date"><t t-esc="order.date_order"/></span>
</div>
<div class="order-details">
<div class="order-row">
<span class="label">Total:</span>
<span class="value"><t t-esc="env.utils.formatCurrency(order.amount_total)"/></span>
</div>
<div class="order-row" t-if="order.lines and order.lines.length > 0">
<span class="label">Items:</span>
<span class="value"><t t-esc="order.lines.length"/> product(s)</span>
</div>
<div class="order-row" t-if="order.state">
<span class="label">Status:</span>
<span class="value status" t-att-class="order.state"><t t-esc="order.state"/></span>
</div>
</div>
<div class="order-products" t-if="order.lines and order.lines.length > 0">
<div class="products-label">Products:</div>
<ul class="product-list">
<t t-foreach="order.lines" t-as="line" t-key="line.id">
<li class="product-item">
<span class="product-qty"><t t-esc="line.qty"/>x</span>
<span class="product-name"><t t-esc="line.product_name"/></span>
<span class="product-price"><t t-esc="env.utils.formatCurrency(line.price_subtotal_incl)"/></span>
</li>
</t>
</ul>
</div>
</div>
</t>
</div>
<p t-else="" class="no-orders">No previous orders found</p>
</div>
<br/>
<div class="top-products">
<h5>Top 3 Products:</h5>
<ul t-if="state.topProducts and state.topProducts.length > 0">
<t t-foreach="state.topProducts" t-as="prod" t-key="prod_index">
<li>
<t t-out="prod['name']['en_US']"/> (Qty: <t t-out="Math.round(prod['qty'])"/>)
</li>
</t>
</ul>
<p t-else="">No products found</p>
</div>
</div>
</div>
</div>
</t>
<t t-name="pos_face_recognition.ProductScreen" t-inherit="point_of_sale.ProductScreen" t-inherit-mode="extension">
<xpath expr="//div[hasclass('product-screen')]" position="inside">
<FaceRecognitionSidebar />
</xpath>
</t>
</templates>


@ -0,0 +1,79 @@
<?xml version="1.0" encoding="UTF-8"?>
<templates id="template" xml:space="preserve">
<t t-name="pos_face_recognition.PartnerDetailsEdit" t-inherit="point_of_sale.PartnerDetailsEdit" t-inherit-mode="extension">
<xpath expr="//div[hasclass('partner-details-box')]" position="inside">
<div class="partner-detail col">
<label class="form-label label">Face Image 1</label>
<div class="d-flex flex-column gap-2">
<t t-if="camera.activeField === 'image_face_1'">
<div class="position-relative text-center">
<video t-ref="video" autoplay="autoplay" playsinline="playsinline" style="width: 100%; max-width: 300px; background: #000;" />
<div class="d-flex gap-2 mt-1 justify-content-center">
<button class="btn btn-primary" t-on-click="capturePhoto">Capture</button>
<button class="btn btn-secondary" t-on-click="stopCamera">Cancel</button>
</div>
</div>
</t>
<t t-else="">
<div class="d-flex align-items-center gap-2">
<t t-if="changes.image_face_1">
<img t-att-src="'data:image/jpeg;base64,' + changes.image_face_1" style="width: 64px; height: 64px; object-fit: cover;" class="rounded"/>
</t>
<button class="btn btn-secondary" t-on-click="() => this.startCamera('image_face_1')">
<i class="fa fa-camera"/> Take Photo
</button>
</div>
</t>
</div>
</div>
<div class="partner-detail col">
<label class="form-label label">Face Image 2</label>
<div class="d-flex flex-column gap-2">
<t t-if="camera.activeField === 'image_face_2'">
<div class="position-relative text-center">
<video t-ref="video" autoplay="autoplay" playsinline="playsinline" style="width: 100%; max-width: 300px; background: #000;" />
<div class="d-flex gap-2 mt-1 justify-content-center">
<button class="btn btn-primary" t-on-click="capturePhoto">Capture</button>
<button class="btn btn-secondary" t-on-click="stopCamera">Cancel</button>
</div>
</div>
</t>
<t t-else="">
<div class="d-flex align-items-center gap-2">
<t t-if="changes.image_face_2">
<img t-att-src="'data:image/jpeg;base64,' + changes.image_face_2" style="width: 64px; height: 64px; object-fit: cover;" class="rounded"/>
</t>
<button class="btn btn-secondary" t-on-click="() => this.startCamera('image_face_2')">
<i class="fa fa-camera"/> Take Photo
</button>
</div>
</t>
</div>
</div>
<div class="partner-detail col">
<label class="form-label label">Face Image 3</label>
<div class="d-flex flex-column gap-2">
<t t-if="camera.activeField === 'image_face_3'">
<div class="position-relative text-center">
<video t-ref="video" autoplay="autoplay" playsinline="playsinline" style="width: 100%; max-width: 300px; background: #000;" />
<div class="d-flex gap-2 mt-1 justify-content-center">
<button class="btn btn-primary" t-on-click="capturePhoto">Capture</button>
<button class="btn btn-secondary" t-on-click="stopCamera">Cancel</button>
</div>
</div>
</t>
<t t-else="">
<div class="d-flex align-items-center gap-2">
<t t-if="changes.image_face_3">
<img t-att-src="'data:image/jpeg;base64,' + changes.image_face_3" style="width: 64px; height: 64px; object-fit: cover;" class="rounded"/>
</t>
<button class="btn btn-secondary" t-on-click="() => this.startCamera('image_face_3')">
<i class="fa fa-camera"/> Take Photo
</button>
</div>
</t>
</div>
</div>
</xpath>
</t>
</templates>


@ -0,0 +1,15 @@
<?xml version="1.0" encoding="utf-8"?>
<odoo>
<record id="pos_config_view_form_inherit_face_recognition" model="ir.ui.view">
<field name="name">pos.config.view.form.inherit.face.recognition</field>
<field name="model">pos.config</field>
<field name="inherit_id" ref="point_of_sale.pos_config_view_form"/>
<field name="arch" type="xml">
<xpath expr="//div[@class='row mt16 o_settings_container']/div[@groups='base.group_system']" position="before">
<setting string="Face Recognition" help="Configure the AI server for face recognition.">
<field name="pos_face_rec_server_url" placeholder="http://localhost:5000"/>
</setting>
</xpath>
</field>
</record>
</odoo>


@ -0,0 +1,15 @@
<?xml version="1.0" encoding="utf-8"?>
<odoo>
<record id="res_config_settings_view_form" model="ir.ui.view">
<field name="name">res.config.settings.view.form.inherit.pos.face.recognition</field>
<field name="model">res.config.settings</field>
<field name="inherit_id" ref="point_of_sale.res_config_settings_view_form"/>
<field name="arch" type="xml">
<xpath expr="//block[@id='pos_interface_section']" position="inside">
<setting string="Face Recognition" help="Configure the AI server for face recognition.">
<field name="pos_face_rec_server_url" placeholder="http://localhost:5000"/>
</setting>
</xpath>
</field>
</record>
</odoo>


@ -0,0 +1,21 @@
<?xml version="1.0" encoding="utf-8"?>
<odoo>
<record id="view_partner_form_face_recognition" model="ir.ui.view">
<field name="name">res.partner.form.face.recognition</field>
<field name="model">res.partner</field>
<field name="inherit_id" ref="base.view_partner_form"/>
<field name="arch" type="xml">
<xpath expr="//notebook" position="inside">
<page string="Face Recognition" name="face_recognition">
<group>
<group>
<field name="image_face_1" widget="image" class="oe_avatar"/>
<field name="image_face_2" widget="image" class="oe_avatar"/>
<field name="image_face_3" widget="image" class="oe_avatar"/>
</group>
</group>
</page>
</xpath>
</field>
</record>
</odoo>