I work on camera pipelines and video streaming on Linux. This site documents how I got here — the questions I asked, the kernel internals I had to understand, and the drivers I built to answer them.
I wanted to understand what actually happens when a camera frame moves through a Linux system. That question led me to the kernel. So I started building drivers — one concept at a time — each one adding a layer of understanding to the last.
The first question: how does data get from the kernel to userspace safely? Built a 16-slot ring buffer exposed as a character device. Writers push, readers block.
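The core structure can be sketched in plain C. This is a userspace illustration, not the driver itself — names like `ring_push` are hypothetical, and the kernel version pairs this with a spinlock and a wait queue:

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define RING_SLOTS 16
#define SLOT_SIZE  64

struct ring {
    char   slots[RING_SLOTS][SLOT_SIZE];
    size_t head;   /* next slot to write */
    size_t tail;   /* next slot to read  */
    size_t count;  /* slots in use       */
};

bool ring_push(struct ring *r, const char *msg)
{
    if (r->count == RING_SLOTS)
        return false;                 /* full: writer drops or blocks */
    strncpy(r->slots[r->head], msg, SLOT_SIZE - 1);
    r->slots[r->head][SLOT_SIZE - 1] = '\0';
    r->head = (r->head + 1) % RING_SLOTS;
    r->count++;
    return true;
}

bool ring_pop(struct ring *r, char *out)
{
    if (r->count == 0)
        return false;                 /* empty: this is where a reader blocks */
    memcpy(out, r->slots[r->tail], SLOT_SIZE);
    r->tail = (r->tail + 1) % RING_SLOTS;
    r->count--;
    return true;
}
```

The `false` returns mark exactly the two points where the kernel version diverges: a full ring back-pressures the writer, an empty ring puts the reader to sleep.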
Blocking reads aren't always what you want. Added O_NONBLOCK and poll()/select() support so userspace can choose its own access pattern on the same device node.
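From the userspace side, the two access patterns look like this — demonstrated on a pipe as a stand-in for the device node, since the mechanics of `O_NONBLOCK` and `poll()` are the same for any file descriptor:

```c
#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <unistd.h>

int demo_nonblock_and_poll(void)
{
    int fds[2];
    if (pipe(fds) < 0)
        return -1;

    /* Non-blocking: read() on an empty fd fails with EAGAIN
     * instead of putting the caller to sleep. */
    fcntl(fds[0], F_SETFL, fcntl(fds[0], F_GETFL) | O_NONBLOCK);
    char c;
    if (read(fds[0], &c, 1) != -1 || errno != EAGAIN)
        return -2;

    /* poll(): sleep (with a timeout) until data is ready. */
    write(fds[1], "x", 1);
    struct pollfd pfd = { .fd = fds[0], .events = POLLIN };
    if (poll(&pfd, 1, 1000) != 1 || !(pfd.revents & POLLIN))
        return -3;

    read(fds[0], &c, 1);
    close(fds[0]);
    close(fds[1]);
    return 0;
}
```

On the kernel side, supporting this means one `.poll` handler and one `O_NONBLOCK` check in `.read` — the same wait queue serves both.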
Read/write moves data. But how does userspace send control commands to a driver? The same way V4L2, ALSA, and USB drivers do it — ioctl(). Built the control plane.
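The userspace calling convention is a request code plus an optional argument pointer. `FIONREAD` ("how many bytes are queued?") stands in here for a driver-specific command — a real driver defines its own codes with the `_IOR()`/`_IOW()` macros and dispatches them in its `unlocked_ioctl` handler:

```c
#include <sys/ioctl.h>
#include <unistd.h>

/* Ask the kernel how many bytes are waiting on fd.
 * Returns the count, or -1 on error. */
int queued_bytes(int fd)
{
    int n = -1;
    if (ioctl(fd, FIONREAD, &n) < 0)
        return -1;
    return n;
}
```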
Real camera pipelines generate frames on a timer. This driver does the same — fires every 1000ms and writes an event. But timer callbacks run in softirq context, where sleeping is forbidden, and that changes everything about synchronization.
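The periodic tick looks roughly like this (a kernel-side sketch with hypothetical names; it only builds against kernel headers). Timers are one-shot, so the callback re-arms itself:

```c
#include <linux/timer.h>
#include <linux/jiffies.h>

static struct timer_list frame_timer;

/* Runs in softirq context: no sleeping, no mutexes, no
 * GFP_KERNEL allocations — spinlocks and atomics only. */
static void frame_tick(struct timer_list *t)
{
    /* ...record the event under a spinlock... */

    /* Re-arm for one second from now. */
    mod_timer(&frame_timer, jiffies + msecs_to_jiffies(1000));
}

/* In init:
 *     timer_setup(&frame_timer, frame_tick, 0);
 *     mod_timer(&frame_timer, jiffies + msecs_to_jiffies(1000));
 */
```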
The timer callback should do as little as possible. Moved the real work into a workqueue — deferred execution in process context, where sleeping is allowed again.
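The split can be sketched like this (kernel-side, hypothetical names): the atomic-context callback only schedules; the work function does the heavy lifting later, in process context:

```c
#include <linux/workqueue.h>

static struct work_struct frame_work;

/* Runs on a kernel worker thread, in process context:
 * sleeping, mutexes, and copy_to_user() are allowed again. */
static void frame_work_fn(struct work_struct *work)
{
    /* ...the real work... */
}

/* In the softirq-context timer callback — safe to call
 * from atomic context, it just queues and returns:
 *     schedule_work(&frame_work);
 *
 * In init:
 *     INIT_WORK(&frame_work, frame_work_fn);
 */
```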
Everything comes together: timer fires → workqueue writes to ring buffer → wakes up readers. Supports blocking, non-blocking, and poll(). This is the camera frame pipeline in miniature.
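The handoff between the producer and the sleeping readers is the piece that ties it together. A kernel-side sketch, with hypothetical names and the locking elided:

```c
#include <linux/fs.h>
#include <linux/wait.h>

static DECLARE_WAIT_QUEUE_HEAD(read_wq);
static int ring_count;  /* guarded by a spinlock in the real driver */

/* Workqueue side: publish an event, then wake any sleepers. */
static void publish_event(void)
{
    ring_count++;
    wake_up_interruptible(&read_wq);
}

/* .read side: honor O_NONBLOCK, otherwise sleep until data arrives. */
static ssize_t wait_for_event(struct file *filp)
{
    if (ring_count == 0) {
        if (filp->f_flags & O_NONBLOCK)
            return -EAGAIN;
        if (wait_event_interruptible(read_wq, ring_count > 0))
            return -ERESTARTSYS;
    }
    return 0;
}
```

The same wait queue also backs `.poll`, which is why all three access patterns coexist on one device node.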
First driver that touches actual hardware. Controls the USR0 LED on a BeagleBone Black via GPIO1_21. Moved from simulated events to memory-mapped GPIO on AM335x.
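The memory-mapped path to USR0 looks roughly like this (a sketch; register offsets are from the AM335x TRM's GPIO chapter — verify against your silicon revision before trusting them):

```c
#include <linux/io.h>

#define GPIO1_BASE        0x4804C000
#define GPIO_OE           0x134     /* output-enable, active low */
#define GPIO_CLEARDATAOUT 0x190
#define GPIO_SETDATAOUT   0x194
#define USR0_BIT          BIT(21)   /* USR0 LED is GPIO1_21 */

static void __iomem *gpio1;

static int usr0_init(void)
{
    gpio1 = ioremap(GPIO1_BASE, SZ_4K);
    if (!gpio1)
        return -ENOMEM;
    /* Clear bit 21 of OE to make the pin an output. */
    writel(readl(gpio1 + GPIO_OE) & ~USR0_BIT, gpio1 + GPIO_OE);
    return 0;
}

static void usr0_set(bool on)
{
    /* Set/clear registers let us flip one pin without
     * a read-modify-write on the shared data register. */
    writel(USR0_BIT, gpio1 + (on ? GPIO_SETDATAOUT : GPIO_CLEARDATAOUT));
}
```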
Arc I answered the question of how the kernel works. Arc II asks how a camera frame actually gets into it. Real silicon, real build system — platform drivers, Device Tree overlays, V4L2, and the full camera pipeline on RPi3 with Yocto.
First driver on real silicon with a real build system. A platform driver probed via Device Tree overlay on RPi3 — GPIO23 interrupt handled in kernel, built and deployed entirely through a custom Yocto meta layer.
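The probe path can be sketched like this (kernel-side; the compatible string and all names here are hypothetical — the real ones live in the overlay and the meta layer). The overlay adds a node, the kernel matches its `compatible` against the driver's table, and `probe()` claims the interrupt:

```c
#include <linux/interrupt.h>
#include <linux/mod_devicetable.h>
#include <linux/module.h>
#include <linux/platform_device.h>

static irqreturn_t btn_irq(int irq, void *dev)
{
    /* GPIO23 edge arrived; handled in kernel. */
    return IRQ_HANDLED;
}

static int btn_probe(struct platform_device *pdev)
{
    /* The IRQ number comes from the Device Tree node, not from code. */
    int irq = platform_get_irq(pdev, 0);
    if (irq < 0)
        return irq;
    return devm_request_irq(&pdev->dev, irq, btn_irq, 0,
                            dev_name(&pdev->dev), pdev);
}

static const struct of_device_id btn_of_match[] = {
    { .compatible = "demo,gpio23-btn" },  /* hypothetical */
    { }
};
MODULE_DEVICE_TABLE(of, btn_of_match);

static struct platform_driver btn_driver = {
    .probe  = btn_probe,
    .driver = {
        .name           = "gpio23-btn",
        .of_match_table = btn_of_match,
    },
};
module_platform_driver(btn_driver);
MODULE_LICENSE("GPL");
```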
I'm Sanath Kumar P Sapre — an embedded systems engineer with a background in Linux-based camera pipelines, video streaming, and system-level integration. GStreamer, V4L2, RTSP, and WebRTC on ARM targets — that's where I come from.
The past few years have taken me away from kernel work. This site is me finding my way back — rebuilding from the ground up, in public, one driver at a time. I think there's more value in showing the process than in presenting a polished portfolio.