# Vision App
Luma is a browser-based image processing tool that brings the OpenCV concepts from the previous exercises into an interactive web application. Upload any image, adjust brightness, contrast, and thresholding in real time, then download the result - all without a backend server.
- Live app: luma.enddesk.com
- Source code: github.com/oduenas-enddesk/luma
## How It Works
All image processing runs client-side using the HTML5 Canvas `ImageData` API. The app mirrors the same OpenCV operations covered in the intro exercises - brightness via uniform pixel shifts, contrast via scalar multiplication with clamping, and both global and adaptive thresholding.
### Processing Pipeline
- Load - Drag-and-drop or select an image. The file is drawn onto an off-screen canvas and its raw pixel data is captured as an `ImageData` buffer.
- Adjust - Use the controls panel to change brightness, contrast, or thresholding mode. Every change triggers a re-processing pass over the source pixels.
- Process - The core `processImage()` function applies operations in order: brightness, contrast, then optional thresholding.
- Display - The processed `ImageData` is rendered to the visible canvas using `requestAnimationFrame`.
- Download - The canvas is exported as a PNG.
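Under illustrative names (the `PixelBuffer` type and `runPipeline` function below are not part of the app's actual API), the load/process/display loop above can be sketched as:

```typescript
// Hedged sketch of the Load -> Process -> Display flow, using a plain
// buffer type so it also runs outside the browser. The real app works on
// Canvas ImageData; PixelBuffer and runPipeline are illustrative names.
interface PixelBuffer {
  width: number;
  height: number;
  data: Uint8ClampedArray; // RGBA, 4 bytes per pixel
}

function runPipeline(
  src: PixelBuffer,
  process: (s: PixelBuffer, d: PixelBuffer) => void
): PixelBuffer {
  // "Load": src stands in for getImageData() from the off-screen canvas.
  const dst: PixelBuffer = {
    width: src.width,
    height: src.height,
    data: new Uint8ClampedArray(src.data.length),
  };
  // "Adjust"/"Process": every control change re-runs this pass.
  process(src, dst);
  // "Display"/"Download" would putImageData() to the visible canvas and
  // export a PNG; here we just return the processed buffer.
  return dst;
}
```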
## Code Explanation
The processing logic lives in `src/lib/imageProcessing.ts`. It defines a `ProcessingParams` interface and a single `processImage()` function that transforms source pixels into an output buffer.
### Brightness and Contrast
When no thresholding is active, the function iterates over every pixel and applies a straightforward formula - the same one used by `cv2.add` and `cv2.multiply` in the Python exercises:

```ts
dst.data[off] = clamp(src.data[off] * contrast + brightness);
```
The `clamp` helper keeps values in the [0, 255] range - the TypeScript equivalent of the `np.clip(result, 0, 255)` call used to prevent overflow in the exercises.
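A minimal sketch of that per-pixel loop over an RGBA buffer. The hand-rolled `clamp` and the pass-through alpha channel are assumptions modeled on the formula above, not the app's exact code:

```typescript
// Clamp a value into the displayable [0, 255] range, like np.clip.
function clamp(v: number): number {
  return Math.min(255, Math.max(0, v));
}

// Brightness/contrast pass over RGBA pixel data:
//   dst = clamp(src * contrast + brightness)
// applied to R, G, B; alpha is copied through untouched.
function applyBrightnessContrast(
  src: Uint8ClampedArray,
  dst: Uint8ClampedArray,
  brightness: number,
  contrast: number
): void {
  for (let off = 0; off < src.length; off += 4) {
    for (let c = 0; c < 3; c++) { // R, G, B channels
      dst[off + c] = clamp(src[off + c] * contrast + brightness);
    }
    dst[off + 3] = src[off + 3]; // alpha passes through
  }
}
```

Worth noting: `Uint8ClampedArray` already clamps and rounds on assignment, so the explicit `clamp` is belt-and-braces - but it keeps the code a faithful mirror of the `np.clip` step from the exercises.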
### Global Thresholding
This mode mirrors `cv2.threshold`. Each pixel is converted to grayscale, then compared against a single threshold value:

```ts
const pass = invertThreshold
  ? gray[i] < globalThreshold
  : gray[i] >= globalThreshold;
const v = pass ? 255 : 0;
```
Pixels that pass become white (255); the rest become black (0). The invert option flips the logic, matching `cv2.THRESH_BINARY_INV`.
### Adaptive Thresholding
This mode mirrors `cv2.adaptiveThreshold` with `ADAPTIVE_THRESH_MEAN_C`. Instead of one global value, each pixel is compared against the mean intensity of a local neighborhood block, minus a constant C:

```ts
const localMean = integralSum(integral, width, x1, y1, x2, y2) / area;
const threshold = localMean - adaptiveC;
```
An integral image is precomputed so that each block-sum lookup runs in O(1) time, keeping the algorithm fast enough for real-time interaction even on large images.
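One way to build and query such an integral image is sketched below; the construction is standard, but the function names and exact signature of the app's `integralSum` are assumptions:

```typescript
// Summed-area table with a one-pixel zero border:
// integral[(y+1)*(w+1) + (x+1)] holds the sum of gray[0..x, 0..y].
// Built in a single pass using a running row sum.
function buildIntegral(
  gray: ArrayLike<number>,
  width: number,
  height: number
): Float64Array {
  const w1 = width + 1;
  const integral = new Float64Array(w1 * (height + 1)); // row 0/col 0 stay 0
  for (let y = 0; y < height; y++) {
    let rowSum = 0;
    for (let x = 0; x < width; x++) {
      rowSum += gray[y * width + x];
      integral[(y + 1) * w1 + (x + 1)] = integral[y * w1 + (x + 1)] + rowSum;
    }
  }
  return integral;
}

// O(1) sum over the inclusive block [x1..x2] x [y1..y2]:
// four lookups regardless of block size.
function integralSum(
  integral: Float64Array,
  width: number,
  x1: number,
  y1: number,
  x2: number,
  y2: number
): number {
  const w1 = width + 1;
  return (
    integral[(y2 + 1) * w1 + (x2 + 1)] -
    integral[y1 * w1 + (x2 + 1)] -
    integral[(y2 + 1) * w1 + x1] +
    integral[y1 * w1 + x1]
  );
}
```

Because every block sum costs four array reads, moving the block-size slider stays cheap: the per-frame work is one integral build plus a constant amount of arithmetic per pixel.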
## Project Structure
```
src/
├── components/
│   ├── Canvas.tsx          # Displays the processed image
│   ├── Controls.tsx        # Sidebar with sliders and mode selectors
│   ├── ImageDropzone.tsx   # Drag-and-drop file upload area
│   └── Slider.tsx          # Reusable range slider
├── lib/
│   ├── imageProcessing.ts  # Core pixel processing algorithms
│   └── useImageProcessor.ts # React hook managing state and render loop
├── App.tsx                 # Root layout component
├── main.tsx                # Entry point
└── index.css               # Tailwind CSS imports
```
## Tech Stack
| Layer | Technology |
|---|---|
| UI framework | React 19 |
| Language | TypeScript |
| Build tool | Vite |
| Styling | Tailwind CSS |
| Production server | Nginx (Alpine) |
| Containerization | Docker |