For: AI Engineers, ML Engineers, Platform Engineers, AI Systems Architects

Local Binary Patterns: The Texture Descriptor That Deep Learning Hasn't Killed

Why LBP still outperforms CNNs in constrained environments - and how to use it correctly in production.

#local-binary-patterns #lbp #texture-analysis #face-recognition #feature-extraction #edge-ai #opencv #classical-cv

A face recognition pipeline I worked on refused to deploy to an embedded security camera. The model was accurate in the lab - fine-tuned ResNet, 96% on the test set - but it wouldn't fit the 512MB RAM constraint of the edge device. The team's instinct was to quantize further, distill into a MobileNet, or buy better hardware. Nobody suggested LBP.

We rewrote the feature extraction layer using Local Binary Patterns combined with an SVM classifier. The final binary was 8MB. Inference ran at 35 frames per second on a Raspberry Pi 4 without any GPU. Accuracy dropped to 91% - acceptable for the access control use case. The project shipped in two weeks instead of six.

That experience points to a broader failure mode in the ML community: the reflexive assumption that deep learning is always the right tool for vision tasks. It isn't. Local Binary Patterns - a texture descriptor from 1994 - remains one of the most underused tools in the production ML toolkit. Teams skip it because it sounds old. They pay for that decision in compute costs, deployment complexity, and hardware requirements that don't need to exist.

Thesis: LBP is not a legacy technique. It is a precision tool for texture-constrained, compute-limited, or interpretability-required vision workloads - and most teams are ignoring it at production cost.


What Local Binary Patterns Actually Compute

Before getting into applications, the mechanics matter. Most explanations skip the "why it works" and go straight to the histogram output. That's a mistake - the design decisions behind LBP directly inform when it will and won't perform.

The Core Encoding Operation

LBP encodes local texture structure by comparing each pixel to its neighbors. For every pixel in a grayscale image:

  1. Take the center pixel value as a threshold
  2. Compare each of its 8 neighbors to this threshold
  3. Assign 1 if the neighbor is greater than or equal to the center, 0 otherwise
  4. Read the resulting 8-bit binary string clockwise from the top-left neighbor
  5. Convert the binary string to a decimal value (0-255)

The result: every pixel gets an LBP code that describes its local microstructure. A flat region produces an all-zero or all-one code (0 or 255). An edge produces a code whose run of set bits reflects the direction of the intensity change. A spot or corner produces a code with a short, isolated run of set bits.
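To make the five steps concrete, here is a minimal pure-NumPy sketch for a single 3x3 patch; the helper name lbp_code_3x3 and the sample pixel values are illustrative:

```python
import numpy as np

def lbp_code_3x3(patch):
    """LBP code for the center pixel of a 3x3 patch, read clockwise from top-left."""
    center = patch[1, 1]
    # Clockwise neighbor order starting at the top-left corner
    neighbors = [patch[0, 0], patch[0, 1], patch[0, 2], patch[1, 2],
                 patch[2, 2], patch[2, 1], patch[2, 0], patch[1, 0]]
    bits = [1 if n >= center else 0 for n in neighbors]
    # Most significant bit first: top-left neighbor occupies bit 7
    return sum(bit << (7 - i) for i, bit in enumerate(bits))

patch = np.array([[90, 120, 40],
                  [70, 100, 150],
                  [60, 110, 130]], dtype=np.uint8)
print(lbp_code_3x3(patch))  # 92, i.e. binary 01011100
```

Note how a perfectly flat patch yields 255 (every neighbor is >= the center), matching the "flat region" case above.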

code
import numpy as np
from skimage.feature import local_binary_pattern

# Basic LBP computation
# radius=1 means 8 neighbors at distance 1
# n_points = 8 * radius is the standard
# method='nri_uniform' gives the 59-bin uniform-pattern histogram described below
# (scikit-image's 'uniform' is the rotation-invariant variant with n_points + 2 bins)
def compute_lbp_features(image_gray, radius=1, n_points=8, method='nri_uniform'):
    """
    Compute LBP feature vector for a grayscale image.

    Args:
        image_gray: numpy array, single-channel uint8 image
        radius: neighborhood radius (1 = 3x3, 2 = larger circular neighborhood)
        n_points: number of sampling points on the circle
        method: 'nri_uniform' (recommended), 'uniform' (rotation-invariant
                uniform), 'default', 'ror' (rotation invariant)

    Returns:
        feature_vector: normalized histogram of LBP codes
    """
    lbp = local_binary_pattern(image_gray, n_points, radius, method=method)

    # Number of bins depends on method:
    # 'nri_uniform': n_points * (n_points - 1) + 3 bins (59 for n_points=8)
    # 'uniform': n_points + 2 bins
    # 'default': 2**n_points bins (256 for n_points=8)
    if method == 'nri_uniform':
        n_bins = n_points * (n_points - 1) + 3
    elif method == 'uniform':
        n_bins = n_points + 2
    else:
        n_bins = 2 ** n_points

    hist, _ = np.histogram(lbp.ravel(), bins=n_bins, range=(0, n_bins), density=True)
    return hist

Why Uniform Patterns Matter

Not all 256 possible LBP codes are equally meaningful. Ojala et al. (2002) identified that texture primitives - edges, corners, flat regions, curved edges - correspond to patterns with at most 2 bit transitions in the circular binary string. These are called uniform patterns.

  • 00000000 (0 transitions) - flat region
  • 11111111 (0 transitions) - flat bright region
  • 00001111 (2 transitions) - edge
  • 00000001 (2 transitions) - spot/corner
  • 01011010 (6 transitions) - non-uniform noise

For an 8-neighbor setup there are exactly 58 uniform patterns, plus 1 bin that collects all non-uniform patterns, for 59 bins total. This is the standard uniform mode. Using it reduces the feature vector length from 256 to 59 while retaining the discriminative patterns that correspond to real visual structures. (In scikit-image, this 59-bin histogram corresponds to method='nri_uniform'; the library's method='uniform' is the rotation-invariant uniform variant with only n_points + 2 bins.) Use 'default' (256 bins) only when you have strong reasons - it adds noise.
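You can verify the 58-pattern count directly; circular_transitions is an illustrative helper, not a library function:

```python
def circular_transitions(code, bits=8):
    """Count 0/1 transitions when the binary representation is read circularly."""
    b = [(code >> i) & 1 for i in range(bits)]
    return sum(b[i] != b[(i + 1) % bits] for i in range(bits))

# Uniform patterns: at most 2 transitions in the circular string
uniform_codes = [c for c in range(256) if circular_transitions(c) <= 2]
print(len(uniform_codes))  # 58
```

The count decomposes neatly: 2 flat patterns (all zeros, all ones) plus 8 rotations each of the 7 contiguous runs of ones of length 1 through 7.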

Multi-Scale and Multi-Radius LBP

Single-radius LBP captures texture at one spatial scale. For richer descriptions, compute LBP at multiple radii and concatenate:

code
def multiscale_lbp(image_gray, radius_list=(1, 2, 3)):
    """
    Compute LBP at multiple scales and concatenate the feature vectors.

    A radius of 1 captures fine texture (3x3 neighborhood).
    A radius of 2 captures medium texture (larger circular neighborhood).
    A radius of 3 captures coarse texture.

    Standard points per radius: n_points = 8 * radius
    """
    features = []
    for r in radius_list:
        n_points = 8 * r
        hist = compute_lbp_features(image_gray, radius=r, n_points=n_points)
        features.append(hist)
    return np.concatenate(features)  # concatenated multi-scale feature vector

Multi-scale LBP is particularly effective for face recognition and medical imaging, where fine and coarse texture both carry discriminative information.


The Feature Extraction Pipeline

Understanding the encoding operation alone isn't enough. In production, LBP is almost always used in conjunction with a spatial histogram approach. The naive approach - computing one histogram across the whole image - discards spatial information. The right way segments the image into cells and computes per-cell histograms.

Wrong Way: Global Histogram

code
# DO NOT do this for face recognition or texture classification
def lbp_wrong(image_gray):
    lbp = local_binary_pattern(image_gray, 8, 1, method='nri_uniform')
    hist, _ = np.histogram(lbp.ravel(), bins=59, range=(0, 59), density=True)
    return hist  # 59-dim vector - loses all spatial structure

This tells you "what patterns exist" but not "where they are". A face and a rotated face would produce nearly identical global histograms. This is why early LBP face recognition results were poor.

Right Way: Spatial Grid + Cell Histograms

The Ahonen et al. (2006) approach divides the face image into a grid of cells (e.g., 7x7 or 8x8), computes LBP histograms per cell, and concatenates them. This produces a spatially-aware feature vector that encodes both texture and location.

code
def lbp_spatial_histogram(image_gray, grid_rows=7, grid_cols=7,
                          radius=1, n_points=8, method='nri_uniform'):
    """
    Compute a spatially-aware LBP feature vector using grid-based cell histograms.

    This is the production-correct approach for face recognition and
    texture classification requiring spatial awareness.

    Args:
        image_gray: grayscale image, normalized to a consistent size first
        grid_rows: number of cell rows in the spatial grid
        grid_cols: number of cell columns in the spatial grid

    Returns:
        feature_vector: concatenated histograms, shape = (grid_rows * grid_cols * n_bins,)
    """
    h, w = image_gray.shape
    cell_h = h // grid_rows
    cell_w = w // grid_cols
    n_bins = n_points * (n_points - 1) + 3  # 59 uniform-pattern bins for n_points=8

    lbp = local_binary_pattern(image_gray, n_points, radius, method=method)

    features = []
    for row in range(grid_rows):
        for col in range(grid_cols):
            # Extract cell region
            r_start = row * cell_h
            r_end = r_start + cell_h
            c_start = col * cell_w
            c_end = c_start + cell_w

            cell = lbp[r_start:r_end, c_start:c_end]
            hist, _ = np.histogram(cell.ravel(), bins=n_bins,
                                   range=(0, n_bins), density=True)
            features.append(hist)

    return np.concatenate(features)  # shape: (grid_rows * grid_cols * n_bins,)

# For a 7x7 grid with 59 bins per cell: 7 * 7 * 59 = 2,891-dim feature vector
# For face recognition, this vector feeds an SVM or nearest-neighbor classifier

The key insight: a 2,891-dimensional feature vector from a 128x128 image is tiny compared to the intermediate activations in a CNN. It feeds trivially into SVMs, linear classifiers, or even logistic regression.


Visual Diagram: LBP Computation and Pipeline

mermaid
flowchart TD
    A[Input Image]:::blue --> B[Convert to Grayscale]:::teal
    B --> C[Local Binary Pattern Encoding]:::purple

    C --> D1[Center Pixel = Threshold]:::grey
    C --> D2[Compare 8 Neighbors]:::grey
    C --> D3[Build Binary String clockwise]:::grey
    C --> D4[Convert to Decimal 0-255]:::grey
    
    D1 & D2 & D3 & D4 --> E[LBP Code Map: Same Size as Input]:::purple

    E --> F{Spatial Strategy?}:::yellow
    
    F -->|Wrong: Global| G[Single Histogram - 59 bins]:::red
    F -->|Right: Grid-based| H[Divide into NxN Cells]:::green
    
    H --> I[Per-Cell Histogram - 59 bins each]:::green
    I --> J[Concatenate Cell Histograms]:::green
    J --> K[Spatially-Aware Feature Vector]:::blue
    G --> L[Spatial-Blind Feature Vector]:::red
    
    K --> M[Classifier: SVM / kNN / Linear]:::teal
    L --> M

    classDef blue fill:#4A90E2,color:#fff,stroke:#2e6db8
    classDef teal fill:#98D8C8,color:#fff,stroke:#6bada0
    classDef purple fill:#7B68EE,color:#fff,stroke:#5a4ecb
    classDef red fill:#E74C3C,color:#fff,stroke:#b83028
    classDef green fill:#6BCF7F,color:#fff,stroke:#4aa05f
    classDef yellow fill:#FFD93D,color:#333,stroke:#c9a800
    classDef grey fill:#95A5A6,color:#fff,stroke:#6e7f80

LBP Variants: When to Use Which

The original LBP has spawned a family of variants. Most teams don't know which to reach for. Here's the practical map:

Standard LBP (Ojala et al., 1996/2002)

  • Radius: 1, Points: 8
  • Use when: Fast prototyping, simple texture classification, constrained devices
  • Limitation: Single scale, no rotation invariance in default mode

Rotation-Invariant LBP (ror mode)

Normalizes the binary pattern by rotating it to the minimum value. All rotations of a pattern map to the same code.

code
# Example: 00001111 and 00111100 both map to the same rotation-invariant code
lbp_ri = local_binary_pattern(image_gray, 8, 1, method='ror')
  • Use when: Texture images where orientation doesn't carry semantic meaning (fabric inspection, material classification, medical imaging)
  • Avoid when: Face recognition (orientation matters - eye position is not rotationally symmetric)

Uniform LBP (uniform patterns)

  • Reduces bins from 256 to 59, keeps meaningful primitives
  • Use for: Almost everything. This is your default.

Multi-Scale LBP

Concatenate histograms from radius 1, 2, 3 (or more).

  • Use when: The texture of interest has structure at multiple spatial scales
  • Examples: Skin texture analysis, wood grain classification, satellite imagery

Volume LBP (VLBP)

Extends LBP to 3D spacetime by including temporal neighbors.

  • Use when: Video texture analysis, gait recognition, dynamic texture (water, fire, crowds)

LBP + HOG Hybrid

Combine LBP histograms with Histogram of Oriented Gradients.

  • Use when: Pedestrian detection, object detection where both texture and shape matter
  • Note: This combination was validated to improve detection performance on several benchmark datasets
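Assuming scikit-image's hog and local_binary_pattern APIs, a minimal sketch of the hybrid descriptor could look like this; the function name lbp_hog_features and the HOG parameters are illustrative choices, not values from any cited benchmark:

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern

def lbp_hog_features(image_gray, n_points=8, radius=1):
    """Concatenate a global uniform-LBP histogram with a HOG descriptor."""
    # Texture component: 59-bin uniform LBP histogram
    lbp = local_binary_pattern(image_gray, n_points, radius, method='nri_uniform')
    n_bins = n_points * (n_points - 1) + 3  # 59 for n_points=8
    lbp_hist, _ = np.histogram(lbp.ravel(), bins=n_bins,
                               range=(0, n_bins), density=True)
    # Shape component: standard HOG descriptor
    hog_vec = hog(image_gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    return np.concatenate([lbp_hist, hog_vec])
```

For detection, this combined vector would typically feed a linear SVM evaluated per sliding window, as in classic HOG pipelines.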

Production Use Cases

Use Case 1: Face Recognition on Edge Devices

This is where LBP first proved itself in production environments. The Ahonen et al. (2006) ECCV paper demonstrated that LBP with spatial histograms matched or exceeded PCA-based approaches on the FERET database, at a fraction of the computational cost.

Production pattern:

  • Detect face with Viola-Jones (Haar cascade) or lightweight MTCNN
  • Align and crop to standard size (e.g., 128x128)
  • Compute LBP spatial histogram (7x7 grid, uniform, radius=1)
  • Compare against enrolled templates using Chi-squared distance or SVM
code
import cv2
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import normalize

class LBPFaceRecognizer:
    def __init__(self, grid_rows=7, grid_cols=7, radius=1, n_points=8):
        self.grid_rows = grid_rows
        self.grid_cols = grid_cols
        self.radius = radius
        self.n_points = n_points
        self.classifier = SVC(kernel='rbf', probability=True, C=10.0)
        self.face_cascade = cv2.CascadeClassifier(
            cv2.data.haarcascades + 'haarcascade_frontalface_default.xml'
        )

    def extract_features(self, face_image):
        """Expects a cropped, aligned grayscale face image."""
        face_resized = cv2.resize(face_image, (128, 128))
        return lbp_spatial_histogram(
            face_resized,
            grid_rows=self.grid_rows,
            grid_cols=self.grid_cols,
            radius=self.radius,
            n_points=self.n_points
        )

    def fit(self, face_images, labels):
        """Train on a list of grayscale face images and their identity labels."""
        features = np.array([self.extract_features(img) for img in face_images])
        features_normalized = normalize(features, norm='l2')
        self.classifier.fit(features_normalized, labels)

    def predict(self, face_image):
        features = self.extract_features(face_image)
        features_normalized = normalize(features.reshape(1, -1), norm='l2')
        label = self.classifier.predict(features_normalized)[0]
        confidence = self.classifier.predict_proba(features_normalized).max()
        return label, confidence

    def detect_faces(self, frame_gray):
        """Returns a list of (x, y, w, h) bounding boxes."""
        return self.face_cascade.detectMultiScale(
            frame_gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30)
        )
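The Chi-squared comparison mentioned in the production pattern can be sketched as follows; this is a common distance for LBP histograms, and the helper names here are illustrative:

```python
import numpy as np

def chi_squared_distance(hist_a, hist_b, eps=1e-10):
    """Chi-squared distance between normalized histograms; lower means more similar."""
    return 0.5 * np.sum((hist_a - hist_b) ** 2 / (hist_a + hist_b + eps))

def match_template(probe_hist, enrolled):
    """enrolled: dict of identity -> histogram. Returns (best identity, distance)."""
    best_id = min(enrolled, key=lambda k: chi_squared_distance(probe_hist, enrolled[k]))
    return best_id, chi_squared_distance(probe_hist, enrolled[best_id])
```

Template matching avoids retraining the SVM when a new identity is enrolled - a practical advantage for access-control systems with a changing user roster.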

When to use LBP over a CNN for face recognition:

  • Target hardware has <512MB RAM
  • Inference must run at >20fps on CPU without GPU
  • Training data is small (<1000 identities, <50 images per identity) - CNNs don't generalize well here
  • Interpretability matters (legal / audit requirements)

When CNN wins:

  • Unconstrained wild conditions (extreme pose variation, occlusion)
  • Very large identity sets (>10,000 identities)
  • Compute budget available

Use Case 2: Texture Classification in Industrial Inspection

Manufacturing quality control is a natural fit. Surface defect detection - scratches, cracks, contamination, weave irregularities in fabric - is a texture problem. CNNs work here too, but LBP pipelines train on tens of images instead of thousands, and run on edge devices bolted to factory machines.

code
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

class TextureDefectDetector:
    """
    LBP-based defect detector for industrial inspection.
    Designed for patch-level classification (normal vs defect).
    """
    def __init__(self, patch_size=64):
        self.patch_size = patch_size
        self.classifier = RandomForestClassifier(n_estimators=100, random_state=42)

    def extract_patch_features(self, patch_gray):
        """Extract multi-scale LBP features from a patch."""
        patch_resized = cv2.resize(patch_gray, (self.patch_size, self.patch_size))

        # Multi-scale: capture both fine and coarse texture
        features_r1 = compute_lbp_features(patch_resized, radius=1, n_points=8)
        features_r2 = compute_lbp_features(patch_resized, radius=2, n_points=16)
        features_r3 = compute_lbp_features(patch_resized, radius=3, n_points=24)

        return np.concatenate([features_r1, features_r2, features_r3])

    def fit(self, patches, labels):
        features = np.array([self.extract_patch_features(p) for p in patches])
        self.classifier.fit(features, labels)
        return cross_val_score(self.classifier, features, labels, cv=5).mean()

    def predict_defect_map(self, image_gray, stride=32):
        """
        Slide a window across the image and classify each patch.
        Returns a heatmap of defect probability.
        """
        h, w = image_gray.shape
        defect_map = np.zeros((h, w), dtype=np.float32)
        count_map = np.zeros((h, w), dtype=np.float32)

        for y in range(0, h - self.patch_size + 1, stride):
            for x in range(0, w - self.patch_size + 1, stride):
                patch = image_gray[y:y+self.patch_size, x:x+self.patch_size]
                features = self.extract_patch_features(patch).reshape(1, -1)
                # Assumes binary classifier: 0=normal, 1=defect
                prob = self.classifier.predict_proba(features)[0][1]
                defect_map[y:y+self.patch_size, x:x+self.patch_size] += prob
                count_map[y:y+self.patch_size, x:x+self.patch_size] += 1

        count_map = np.maximum(count_map, 1)  # avoid division by zero
        return defect_map / count_map

Real operational advantage: An LBP + Random Forest texture defect detector can train on 50-200 labeled image patches in under a minute on a laptop. A CNN approach for the same task typically needs thousands of labeled examples and hours of training. In factory settings where defect images are rare, this is decisive.

Use Case 3: Background Subtraction in Surveillance

LBP-based background subtraction handles illumination changes better than frame differencing or Gaussian mixture models in many outdoor surveillance scenarios. Texture-based background models - including sample-based approaches in the ViBe style adapted to LBP codes - adapt to gradual illumination changes (day/night transitions, cloud shadows) without triggering false positives.

code
import numpy as np
from skimage.feature import local_binary_pattern

class LBPBackgroundSubtractor:
    """
    Simplified LBP-based background subtractor.
    Maintains a per-pixel LBP code history as the background model.
    """
    def __init__(self, history_size=50, lbp_threshold=3):
        self.history_size = history_size
        # Hamming distance threshold; LBP codes are 8 bits, so distances range 0-8
        self.lbp_threshold = lbp_threshold
        self.background_lbp_history = None

    def _compute_lbp_map(self, frame_gray):
        return local_binary_pattern(frame_gray, 8, 1, method='default').astype(np.uint8)

    def _hamming_distance(self, a, b):
        """Bit-level Hamming distance between two uint8 values (scalar reference)."""
        return bin(int(a) ^ int(b)).count('1')

    def apply(self, frame_gray):
        """Returns a foreground mask (255=foreground, 0=background)."""
        current_lbp = self._compute_lbp_map(frame_gray)

        if self.background_lbp_history is None:
            self.background_lbp_history = np.stack(
                [current_lbp] * self.history_size, axis=0
            )
            return np.zeros_like(frame_gray, dtype=np.uint8)

        # Vectorized Hamming distance:
        # XOR current frame with history, count set bits per pixel
        xor_maps = np.bitwise_xor(
            self.background_lbp_history,
            current_lbp[np.newaxis, :, :]
        )
        hamming_maps = np.unpackbits(xor_maps.reshape(-1, 1), axis=1).reshape(
            self.history_size, *frame_gray.shape, 8
        ).sum(axis=-1)

        # A pixel is background if it matches at least one history sample
        min_hamming = hamming_maps.min(axis=0)
        foreground_mask = (min_hamming > self.lbp_threshold).astype(np.uint8) * 255

        # Update history (random replacement, ViBe-style)
        idx = np.random.randint(0, self.history_size)
        self.background_lbp_history[idx] = current_lbp

        return foreground_mask

Use Case 4: Medical Image Analysis

LBP's illumination invariance makes it valuable for histology slide analysis and MRI texture classification. In pathology, staining intensity varies between slides and labs - a property that breaks CNN models trained on one lab's data when deployed at another. LBP features, being relative comparisons, generalize better across staining variations without domain adaptation.

Published applications include:

  • Breast cancer tissue grading from histology images
  • Diabetic retinopathy severity classification from fundus images
  • Brain MRI tissue type segmentation using LBP as a feature modality

Use Case 5: Fingerprint and Biometric Matching

Fingerprint matching is a texture problem at its core - ridge flow, bifurcations, and endpoints are local texture patterns. LBP variants have been used in fingerprint matching as an alternative to minutiae-based approaches, particularly for partial fingerprints and low-quality images where minutiae extraction fails.


The Named Concept: The Texture Budget Threshold

Every production CV system has an implicit Texture Budget Threshold - the accuracy requirement above which a classical texture descriptor stops being competitive with a deep model, given the deployment constraints.

The threshold is not fixed. It shifts based on three axes:

  1. Compute constraint: The tighter the memory/FLOP budget, the lower the accuracy a deployable CNN can actually reach, and the more competitive LBP gets
  2. Training data volume: With <500 labeled examples, LBP + SVM frequently outperforms CNNs that underfit
  3. Task specificity: Pure texture tasks (fabric, surface finish, skin condition) favor LBP. Tasks requiring semantic understanding (object identity in wild conditions) favor CNNs

Mapping your task against these three axes before defaulting to a CNN is the decision engineers skip. They pay for it when the model won't fit on the target hardware or the labeling budget runs out.
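To make the three-axis check concrete, here is a deliberately toy sketch; the thresholds simply restate this article's rules of thumb and are illustrative assumptions, not benchmarks:

```python
def texture_budget_check(ram_mb, has_gpu, n_labeled, pure_texture_task):
    """
    Toy encoding of the three-axis check. Thresholds mirror the rules of
    thumb in this article and are illustrative, not measured cutoffs.
    """
    if ram_mb < 1024 and not has_gpu:
        return "LBP"                  # compute constraint dominates
    if n_labeled < 500:
        return "LBP"                  # label budget dominates
    if pure_texture_task:
        return "LBP competitive"      # benchmark both before committing
    return "CNN"

# A 512MB edge box with no GPU: the constraint decides before accuracy does
print(texture_budget_check(512, False, 2000, False))  # LBP
```

The point is not the function itself but that the inputs are knowable before the architecture meeting - RAM, GPU, label count, task type are all facts, not experiments.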

mermaid
flowchart LR
    A["🔒 High constraint\nRAM < 1GB · no GPU · MCU"]:::condition --> B["✅ Use LBP"]:::lbp
    C["📉 Low training data\n< 500 labeled samples"]:::condition --> D["✅ Use LBP"]:::lbp
    E["🧵 Pure texture task\nsurface defects · fabric · skin"]:::condition --> F["⚡ LBP competitive"]:::lbpalt
    G["🌐 Semantic / wild conditions\nface in crowd · scene understanding"]:::condition --> H["🧠 Use CNN"]:::cnn

    classDef condition fill:#7B68EE,color:#fff,stroke:#5a4ecb
    classDef lbp fill:#6BCF7F,color:#fff,stroke:#4aa05f
    classDef lbpalt fill:#98D8C8,color:#fff,stroke:#6bada0
    classDef cnn fill:#4A90E2,color:#fff,stroke:#2e6db8

The Texture Budget Threshold is the decision point your team should make explicit before the architecture meeting, not after the first deployment fails.


LBP vs CNN: The Honest Comparison

| Dimension | LBP + SVM/RF | CNN (e.g., MobileNetV3) |
| --- | --- | --- |
| Model size | 1-50MB (feature extractor is code) | 10-100MB |
| RAM at inference | <50MB | 200MB-1GB+ |
| Training data needed | 50-500 samples | 1,000-100,000+ samples |
| Training time | Seconds to minutes | Hours |
| Inference on CPU (RPi 4) | 10-60ms | 200-800ms |
| Illumination robustness | High (inherent) | Moderate (requires data augmentation) |
| Accuracy on unconstrained datasets | Moderate (85-93%) | High (93-99%) |
| Interpretability | High (histogram bins map to patterns) | Low (black box) |
| Domain generalization | High (relative encoding) | Low (requires fine-tuning) |

The comparison isn't "LBP vs CNN" as a universal choice. It's "LBP vs CNN given your constraints." Most teams only evaluate one column.


Implementation Decision Guide

Use this before choosing your feature extraction approach:

Choose LBP when:

  • Target device has <1GB RAM or no GPU
  • Training labels number fewer than 1,000 images
  • The task is pure texture (surface, material, skin, fabric)
  • Illumination conditions are variable and uncontrolled
  • Real-time performance is needed on CPU (>20fps requirement)
  • Model must be explainable or auditable
  • Fast iteration cycles required (minutes vs hours)

Choose CNN when:

  • Task requires semantic understanding beyond texture
  • Large labeled dataset is available (>5,000 images)
  • GPU inference infrastructure exists
  • Accuracy requirements exceed 95% on unconstrained data
  • Multiple object categories must be handled

Consider LBP + CNN hybrid when:

  • Feature extraction interpretability matters but accuracy must be high
  • LBP features used as additional channels to CNN input
  • Domain adaptation is needed (LBP features for domain-invariant representation)

Common Production Mistakes

Mistake 1: Using default (256-bin) mode instead of uniform patterns
Result: The feature vector includes noise patterns that reduce discriminative power. Use uniform patterns (in scikit-image, method='nri_uniform' for the 59-bin histogram) by default.

Mistake 2: Computing a global histogram without a spatial grid
Result: Spatially shuffled images produce identical features. Always segment into cells.

Mistake 3: Not normalizing face images before LBP
Result: LBP is invariant to monotonic illumination changes but not to arbitrary intensity distributions. Apply histogram equalization or CLAHE before LBP on face images.

code
# Apply CLAHE before LBP on face images
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
face_normalized = clahe.apply(face_gray)
features = compute_lbp_features(face_normalized)

Mistake 4: Using LBP on color images without a channel strategy
LBP operates on grayscale. For color images, you have three options:

  • Convert to grayscale (loses color info)
  • Compute LBP per channel and concatenate (RGB-LBP)
  • Convert to a color space that separates luminance from chrominance (e.g., YCbCr) and compute LBP on the luminance channel only
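As a sketch of the second option (RGB-LBP), here is a dependency-free version built on a small vectorized 3x3 LBP; lbp_map and rgb_lbp_hist are illustrative helpers, not library functions:

```python
import numpy as np

def lbp_map(gray):
    """Vectorized 3x3 LBP codes for interior pixels, read clockwise from top-left."""
    c = gray[1:-1, 1:-1].astype(np.int16)
    h, w = gray.shape
    # Clockwise neighbor offsets starting at the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(np.int16)
        codes |= (neigh >= c).astype(np.uint8) << (7 - bit)
    return codes

def rgb_lbp_hist(image_rgb):
    """Per-channel 256-bin LBP histograms, concatenated (RGB-LBP)."""
    feats = []
    for ch in range(3):
        hist, _ = np.histogram(lbp_map(image_rgb[:, :, ch]),
                               bins=256, range=(0, 256), density=True)
        feats.append(hist)
    return np.concatenate(feats)

img = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
print(rgb_lbp_hist(img).shape)  # (768,)
```

The cost is a 3x longer feature vector; whether the extra chrominance texture pays for itself depends on the task.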

Mistake 5: Choosing grid cell size without considering image content
A 7x7 grid on a 32x32 thumbnail gives cells of ~4x4 pixels - too small for meaningful texture. Cell size should be at least 10x10 pixels. Adjust grid size to image dimensions.
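One way to make that rule mechanical - max_grid is an illustrative helper, and the 10-pixel floor is the rule of thumb above:

```python
def max_grid(image_h, image_w, min_cell=10):
    """Largest grid whose cells remain at least min_cell pixels on each side."""
    return max(1, image_h // min_cell), max(1, image_w // min_cell)

print(max_grid(128, 128))  # (12, 12)
print(max_grid(32, 32))    # (3, 3) - a 7x7 grid would be too fine here
```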


LBP Is Not Legacy - It Is Precision

The thesis is worth restating plainly: LBP is not what you reach for when you can't afford a CNN. It is what you reach for when the task is textural, the hardware is constrained, or the training data is sparse - and using a CNN in those conditions is the actual engineering error.

The Texture Budget Threshold is the forcing function. Every team implicitly has one; the teams that ship faster and cheaper are the ones who make it explicit before the architecture meeting. They ask: what is our compute constraint, what is our label budget, how semantic is this task? The answers usually point to a decision that's already made. LBP just makes it visible.

Thirty years after its introduction, LBP remains in active production use across access control, factory automation, medical imaging, and surveillance infrastructure - not because teams haven't heard of deep learning, but because they've evaluated both and picked the right tool for the constraint. That's good engineering.


References

  • Ojala, T., Pietikäinen, M., and Mäenpää, T. (2002). Multiresolution gray-scale and rotation invariant texture classification with local binary patterns. IEEE Trans. Pattern Analysis and Machine Intelligence, 24(7), 971-987.

  • Ojala, T., Pietikäinen, M., and Harwood, D. (1996). A comparative study of texture measures with classification based on featured distribution. Pattern Recognition, 29(1), 51-59.

  • Ahonen, T., Hadid, A., and Pietikäinen, M. (2006). Face description with local binary patterns: Application to face recognition. IEEE Trans. Pattern Analysis and Machine Intelligence, 28(12), 2037-2041.

  • Pietikäinen, M., Hadid, A., Zhao, G., and Ahonen, T. (2011). Computer Vision Using Local Binary Patterns. Springer.

  • Pietikäinen, M. and Zhao, G. (2015). Two decades of local binary patterns: A survey. Advances in Independent Component Analysis and Learning Machines. Elsevier, 175-210.

  • Fernandes, K. and Cardoso, J.S. (2017). Deep Local Binary Patterns. arXiv:1711.06597.

  • Silva, C. et al. (2015). Background subtraction: Model-sharing strategy based on target detection and motion estimation. IEEE Trans. Circuits and Systems for Video Technology.

  • scikit-image LBP documentation: https://scikit-image.org/docs/stable/api/skimage.feature.html#skimage.feature.local_binary_pattern

  • OpenCV Face Recognition with LBP: https://docs.opencv.org/4.x/da/d60/tutorial_face_main.html

  • Scholarpedia: Local Binary Patterns. http://www.scholarpedia.org/article/Local_Binary_Patterns

