learning in public

Learning Linux
from the inside out.

I work on camera pipelines and video streaming on Linux. This site documents how I got here — the questions I asked, the kernel internals I had to understand, and the drivers I built to answer them.

↓   scroll to read the story
Arc I · Complete

The Linux Driver Lab

I wanted to understand what actually happens when a camera frame moves through a Linux system. That question led me to the kernel. So I started building drivers — one concept at a time — each one adding a layer of understanding to the last.

01 Message Queue Driver

The first question: how does data get from the kernel to userspace safely? Built a 16-slot ring buffer exposed as a character device. Writers push, readers block.

ring buffer · wait queue · mutex · character driver
Key insight: the drop-oldest policy under buffer-full is exactly how camera pipelines handle a slow consumer — freshness wins over completeness.
02 Non-Blocking + Poll Driver

Blocking reads aren't always what you want. Added O_NONBLOCK and poll()/select() support so userspace can choose its own access pattern on the same device node.

O_NONBLOCK · poll/select · EAGAIN
Key insight: poll_wait() doesn't sleep — it just registers the wait queue. The actual sleeping happens in the kernel's poll loop, not in the driver.
03 IOCTL Interface

Read/write moves data. But how does userspace send control commands to a driver? The same way V4L2, ALSA, and USB drivers do it — ioctl(). Built the control plane.

unlocked_ioctl · _IOR / _IOW · magic number
04 Kernel Timer Driver

Real camera pipelines generate frames on a timer. This driver does the same — fires every 1000ms and writes an event. But timer callbacks run in softirq context, which changes everything about synchronization.

timer_list · softirq context · spin_lock_irqsave · del_timer_sync
Key insight: mutex can sleep — that makes it illegal in softirq context. Spinlock is mandatory. And copy_to_user() must never run under a spinlock.
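In kernel terms the constraint looks roughly like this. An illustrative module fragment with invented names, not runnable on its own and not the actual driver:

```c
#include <linux/timer.h>
#include <linux/spinlock.h>
#include <linux/jiffies.h>

static DEFINE_SPINLOCK(ev_lock);
static struct timer_list ev_timer;
static int ev_count;

static void ev_timer_fn(struct timer_list *t)
{
    unsigned long flags;

    /* Softirq context: sleeping is forbidden, so no mutex_lock(),
     * no kmalloc(GFP_KERNEL), and definitely no copy_to_user(). */
    spin_lock_irqsave(&ev_lock, flags);
    ev_count++;  /* touch shared state briefly, then get out */
    spin_unlock_irqrestore(&ev_lock, flags);

    mod_timer(&ev_timer, jiffies + msecs_to_jiffies(1000));  /* re-arm */
}

/* In init:  timer_setup(&ev_timer, ev_timer_fn, 0);
 *           mod_timer(&ev_timer, jiffies + msecs_to_jiffies(1000));
 * In exit:  del_timer_sync(&ev_timer);  -- waits out a running callback */
```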
05 Workqueue Driver

The timer callback should do as little as possible. Moved the real work into a workqueue — deferred execution in process context, where sleeping is allowed again.

workqueue · schedule_work · process context · DECLARE_WORK
Key insight: this two-stage pattern — interrupt schedules work, work does the job — is how real drivers stay responsive without blocking the interrupt handler.
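The two stages fit in a few lines of kernel C. An illustrative fragment with invented names, not the actual driver:

```c
#include <linux/workqueue.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

/* Stage two: runs later in process context, where sleeping is legal
 * again, so it may take a mutex, allocate with GFP_KERNEL, and do the
 * slow part of the job. */
static void ev_work_fn(struct work_struct *work)
{
    /* ... the real work ... */
}
static DECLARE_WORK(ev_work, ev_work_fn);

/* Stage one: the timer callback in softirq context does the bare
 * minimum, it only queues the work and re-arms itself. */
static void ev_timer_fn(struct timer_list *t)
{
    schedule_work(&ev_work);  /* cheap and non-sleeping: safe here */
    mod_timer(t, jiffies + msecs_to_jiffies(1000));
}
```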
06 Workqueue + Poll — The Full Pipeline

Everything comes together: timer fires → workqueue writes to ring buffer → wakes up readers. Supports blocking, non-blocking, and poll(). This is the camera frame pipeline in miniature.

full pipeline · workqueue · poll · spinlock
Key insight: spinlock must be consistent across all contexts that touch shared state. If the workqueue holds it, poll() must too.
07 BBB GPIO Driver — Real Hardware

First driver that touches actual hardware. Controls the USR0 LED on a BeagleBone Black via GPIO1_21. Moved from simulated events to memory-mapped GPIO on AM335x.

BeagleBone Black · AM335x · gpio_request · real hardware
Key insight: global GPIO = bank × 32 + pin. And the onboard LEDs are owned by leds-gpio at boot — you have to release them before your driver can claim them.
Arc II · In progress

The Camera Driver Lab

Arc I answered how the kernel works. Arc II asks how a camera frame actually gets into it. Real silicon, real build system — platform drivers, Device Tree overlays, V4L2, and the full camera pipeline on RPi3 with Yocto.

08 RPi3 Platform Driver + GPIO Interrupt — Yocto

First driver on real silicon with a real build system. A platform driver probed via Device Tree overlay on RPi3 — GPIO23 interrupt handled in kernel, built and deployed entirely through a custom Yocto meta layer.

platform_driver · DT overlay · request_irq · devm_ · Yocto/Kirkstone · RPi3
Key insight: the platform bus matches drivers to hardware via the compatible string in the Device Tree — not hardcoded addresses. And a floating wire near a GPIO pin is enough to generate spurious interrupts; physics shows up in dmesg.
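The matching side of that lives in the overlay. A trimmed, illustrative sketch (node name, compatible string, and interrupt cells are made up for the example, not the real ones):

```dts
/dts-v1/;
/plugin/;

/ {
    compatible = "brcm,bcm2837";

    fragment@0 {
        target-path = "/";
        __overlay__ {
            mybtn: my-button {
                /* This string, not a hardcoded address, is what the
                 * platform bus compares against the driver's
                 * of_device_id table to decide whether to probe. */
                compatible = "example,gpio-event";
                interrupt-parent = <&gpio>;
                interrupts = <23 1>;  /* GPIO23, rising edge */
            };
        };
    };
};
```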
More coming
Going deeper into the camera subsystem.

Arc I built the foundations — interrupts, platform drivers, Device Tree, the kernel primitives that every real driver relies on. Arc II goes into V4L2 directly: sensor drivers, videobuf2, and the full camera pipeline on RPi3 with Yocto. This is where the original question gets answered.

Who's building this?

I'm Sanath Kumar P Sapre — an embedded systems engineer with a background in Linux-based camera pipelines, video streaming, and system-level integration. GStreamer, V4L2, RTSP, and WebRTC on ARM targets are where I come from.

The past few years have taken me away from kernel work. This site is me finding my way back — rebuilding from the ground up, in public, one driver at a time. I think there's more value in showing the process than in presenting a polished portfolio.