Dev Log: Raytracing in One Week in Two Evenings
My journey learning ray tracing from the book 'Ray Tracing in One Weekend' in just two evenings
Could I really learn ray tracing in two evenings? The book said "one weekend", so I thought—why not try to beat that?
Spoiler: I failed (took 2.5 evenings), but what I learned about how light, reflections, and materials work was absolutely worth it. Here's my journey from outputting colored pixels to rendering realistic glass spheres with bokeh effects.
Ray Tracing in One Weekend is available to download for free.
You can also check out my source code on github.
If you have any questions, feel free to ask me at meep@supawat.dev
Setup
First, we need to set up our environment. I'll use Ubuntu on WSL so I can use g++ directly.
Install G++
$ sudo apt-get install g++
Image viewer
I decided to use eog to view .ppm files. I simply installed it with:
$ sudo apt install eog
I launched VcXsrv to display the GUI programs from my Ubuntu WSL.
You can view the image by calling:
$ eog <image-file>
Chapter 1: Output an image
Let's output an image by simply iterating through pixel positions and printing their colours. To read these pixel values, we can use software that's able to read .ppm files.
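In sketch form, that program is just two nested loops printing a PPM header and one RGB triple per pixel. Here's a minimal version (the gradient colours are the book's classic example; the exact values in my main.cpp differ slightly):

```cpp
#include <iostream>

// Map a colour component in [0, 1] to a 0-255 integer for PPM output.
int to_byte(float c) { return int(255.99f * c); }

// Write an nx-by-ny gradient image in plain-text PPM (P3) format.
void write_ppm(std::ostream &out, int nx, int ny) {
    out << "P3\n" << nx << " " << ny << "\n255\n";
    for (int j = ny - 1; j >= 0; j--) {          // top row first
        for (int i = 0; i < nx; i++) {
            float r = float(i) / float(nx);      // red grows left to right
            float g = float(j) / float(ny);      // green grows bottom to top
            float b = 0.2f;                      // constant blue
            out << to_byte(r) << " " << to_byte(g) << " " << to_byte(b) << "\n";
        }
    }
}
```

Calling `write_ppm(std::cout, 200, 100)` from main and redirecting stdout to a file gives the first image.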
To see the result, we can run this executable file:
$ g++ main.cpp -o c1.out
$ ./c1.out > first-image.ppm
Then, we can read the first-image.ppm with an image viewer.
Tada! The first image is baked!
Chapter 2: 3D Vector
In 3D graphics, vector operations are fundamental—you'll use them everywhere for positions, colors, and directions. While we could import a library, building a vec3 class from scratch helps understand what's actually happening under the hood.
The class needs to handle basic operations: addition, subtraction, dot product, cross product, and normalization. Here's the complete implementation (feel free to skip to Chapter 3 if you're familiar with vector math):
#include <math.h>
#include <stdlib.h>
#include <iostream>
class vec3
{
public:
vec3() {}
vec3(float e0, float e1, float e2)
{
e[0] = e0;
e[1] = e1;
e[2] = e2;
}
inline float x() const { return e[0]; }
inline float y() const { return e[1]; }
inline float z() const { return e[2]; }
inline float r() const { return e[0]; }
inline float g() const { return e[1]; }
inline float b() const { return e[2]; }
//ref
inline const vec3 &operator+() const { return *this; }
inline vec3 operator-() const { return vec3(-e[0], -e[1], -e[2]); }
inline float operator[](int i) const { return e[i]; }
inline float &operator[](int i) { return e[i]; }
// ops
inline vec3 &operator+=(const vec3 &v2);
inline vec3 &operator-=(const vec3 &v2);
inline vec3 &operator*=(const vec3 &v2);
inline vec3 &operator/=(const vec3 &v2);
inline vec3 &operator*=(const float t);
inline vec3 &operator/=(const float t);
inline float length() const
{
return sqrt(e[0]*e[0] + e[1]*e[1] + e[2]*e[2]);
}
inline float squared_length() const
{
return e[0]*e[0] + e[1]*e[1] + e[2]*e[2];
}
inline void make_unit_vector();
float e[3];
};
inline std::istream &operator>>(std::istream &is, vec3 &t)
{
is >> t.e[0] >> t.e[1] >> t.e[2];
return is;
}
inline std::ostream &operator<<(std::ostream &os, const vec3 &t)
{
os << t.e[0] << " " << t.e[1] << " " << t.e[2];
return os;
}
inline void vec3::make_unit_vector()
{
float k = 1.0 / sqrt(e[0]*e[0] + e[1]*e[1] + e[2]*e[2]);
e[0] *= k;
e[1] *= k;
e[2] *= k;
}
inline vec3 operator+(const vec3 &v1, const vec3 &v2)
{
return vec3(v1.e[0] + v2.e[0], v1.e[1] + v2.e[1], v1.e[2] + v2.e[2]);
}
inline vec3 operator-(const vec3 &v1, const vec3 &v2)
{
return vec3(v1.e[0] - v2.e[0], v1.e[1] - v2.e[1], v1.e[2] - v2.e[2]);
}
inline vec3 operator*(const vec3 &v1, const vec3 &v2)
{
return vec3(v1.e[0] * v2.e[0], v1.e[1] * v2.e[1], v1.e[2] * v2.e[2]);
}
inline vec3 operator/(const vec3 &v1, const vec3 &v2)
{
return vec3(v1.e[0] / v2.e[0], v1.e[1] / v2.e[1], v1.e[2] / v2.e[2]);
}
inline vec3 operator*(float t, const vec3 &v)
{
return vec3(t * v.e[0], t * v.e[1], t * v.e[2]);
}
inline vec3 operator*(const vec3 &v, float t)
{
return vec3(t * v.e[0], t * v.e[1], t * v.e[2]);
}
inline vec3 operator/(const vec3 &v, float t)
{
return vec3(v.e[0] / t, v.e[1] / t, v.e[2] / t);
}
inline float dot(const vec3 &v1, const vec3 &v2)
{
return v1.e[0] * v2.e[0] + v1.e[1] * v2.e[1] + v1.e[2] * v2.e[2];
}
inline vec3 cross(const vec3 &v1, const vec3 &v2)
{
return vec3(
v1.e[1] * v2.e[2] - v1.e[2] * v2.e[1],
v1.e[2] * v2.e[0] - v1.e[0] * v2.e[2],
v1.e[0] * v2.e[1] - v1.e[1] * v2.e[0]);
}
inline vec3 &vec3::operator+=(const vec3 &v)
{
e[0] += v.e[0];
e[1] += v.e[1];
e[2] += v.e[2];
return *this;
}
inline vec3 &vec3::operator-=(const vec3 &v)
{
e[0] -= v.e[0];
e[1] -= v.e[1];
e[2] -= v.e[2];
return *this;
}
inline vec3 &vec3::operator*=(const vec3 &v)
{
e[0] *= v.e[0];
e[1] *= v.e[1];
e[2] *= v.e[2];
return *this;
}
inline vec3 &vec3::operator/=(const vec3 &v)
{
e[0] /= v.e[0];
e[1] /= v.e[1];
e[2] /= v.e[2];
return *this;
}
inline vec3 &vec3::operator*=(const float t)
{
e[0] *= t;
e[1] *= t;
e[2] *= t;
return *this;
}
inline vec3 &vec3::operator/=(const float t)
{
float k = 1.0 / t;
e[0] *= k;
e[1] *= k;
e[2] *= k;
return *this;
}
inline vec3 unit_vector(vec3 v)
{
return v / v.length();
}
Let's see if everything is okay:
#include "vec3.h"
#include <iostream>
int main(){
vec3 a = vec3(1.1f,2.1f,3.1f);
vec3 b = vec3(1.1f, 9.0f, 1.1f);
std::cout << "a+b: " << a+b << std::endl;
return 0;
}
And it outputs:
a+b: 2.2 11.1 4.2
Ok, I guess it works for now.
Chapter 3: Rays, a simple camera, and background
As stated in the book, every ray tracer needs a ray class. We can simply create a ray class that stores its origin and direction as 3D vectors.
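A sketch of that class, following the book's p(t) = A + t·B parameterisation (the compact vec3 here is a stand-in for the full Chapter 2 class, just to keep the snippet self-contained):

```cpp
// Compact stand-in for the Chapter 2 vec3 class.
struct vec3 {
    float e[3];
    vec3() {}
    vec3(float e0, float e1, float e2) { e[0] = e0; e[1] = e1; e[2] = e2; }
    float x() const { return e[0]; }
    float y() const { return e[1]; }
    float z() const { return e[2]; }
};
inline vec3 operator+(const vec3 &a, const vec3 &b) { return vec3(a.e[0] + b.e[0], a.e[1] + b.e[1], a.e[2] + b.e[2]); }
inline vec3 operator*(float t, const vec3 &v) { return vec3(t * v.e[0], t * v.e[1], t * v.e[2]); }

// A ray is the half-line p(t) = A + t*B: origin A, direction B.
class ray {
public:
    ray() {}
    ray(const vec3 &a, const vec3 &b) : A(a), B(b) {}
    vec3 origin() const { return A; }
    vec3 direction() const { return B; }
    vec3 point_at_parameter(float t) const { return A + t * B; }
    vec3 A;
    vec3 B;
};
```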
Next, we can start creating our first tracing!
Then, we can run this by:
$ g++ main.cpp -o c3.out
$ ./c3.out > test.ppm
$ eog test.ppm
Chapter 4: Adding a sphere
The equation for a sphere at the origin with radius R is simple: x² + y² + z² = R²
If the sphere is not at the origin but centred at C = (cx, cy, cz), we get: (x − cx)² + (y − cy)² + (z − cz)² = R²
In vector form: dot(p − C, p − C) = R²
Or in ray form, substituting p(t) = A + t·B: dot(A + t·B − C, A + t·B − C) = R²
It looks complicated? Try writing it on paper.
The Math Behind Ray-Sphere Intersection
A ray can be represented as p(t) = A + t·B, where:
- A is the ray origin
- B is the ray direction
- t is the distance along the ray
When we expand the sphere equation with the ray equation, we get a quadratic equation of the form a·t² + b·t + c = 0, with a = dot(B, B), b = 2·dot(B, A − C), and c = dot(A − C, A − C) − R². The discriminant b² − 4ac tells us:
- If the discriminant < 0: no intersection
- If the discriminant = 0: one intersection (ray is tangent to sphere)
- If the discriminant > 0: two intersections (ray passes through sphere)
We can create a sphere hit checker: if the ray intersects the sphere, we colour that pixel with a flat colour. Otherwise, it shows the background gradient we made in the previous chapter. We detect the intersection with the sphere by calculating the discriminant value.
Then, we can implement this in the colour function. If it hits, we return the pink colour.
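Put together, the hit test and the updated colour function look roughly like this (compact vec3/ray stand-ins are inlined so the snippet compiles on its own; the pink value is my placeholder):

```cpp
#include <cmath>

// Compact stand-ins for the vec3 and ray classes from earlier chapters.
struct vec3 {
    float e[3];
    vec3() {}
    vec3(float a, float b, float c) { e[0] = a; e[1] = b; e[2] = c; }
    float y() const { return e[1]; }
};
inline vec3 operator+(const vec3 &a, const vec3 &b) { return vec3(a.e[0]+b.e[0], a.e[1]+b.e[1], a.e[2]+b.e[2]); }
inline vec3 operator-(const vec3 &a, const vec3 &b) { return vec3(a.e[0]-b.e[0], a.e[1]-b.e[1], a.e[2]-b.e[2]); }
inline vec3 operator*(float t, const vec3 &v) { return vec3(t*v.e[0], t*v.e[1], t*v.e[2]); }
inline float dot(const vec3 &a, const vec3 &b) { return a.e[0]*b.e[0] + a.e[1]*b.e[1] + a.e[2]*b.e[2]; }
inline vec3 unit_vector(const vec3 &v) { return (1.0f / sqrtf(dot(v, v))) * v; }
struct ray {
    vec3 A, B;
    ray(const vec3 &a, const vec3 &b) : A(a), B(b) {}
    vec3 origin() const { return A; }
    vec3 direction() const { return B; }
};

// Expanding dot(p(t)-C, p(t)-C) = R^2 gives a*t^2 + b*t + c = 0;
// a positive discriminant means the ray pierces the sphere.
bool hit_sphere(const vec3 &center, float radius, const ray &r) {
    vec3 oc = r.origin() - center;
    float a = dot(r.direction(), r.direction());
    float b = 2.0f * dot(oc, r.direction());
    float c = dot(oc, oc) - radius * radius;
    return b * b - 4.0f * a * c > 0;
}

vec3 color(const ray &r) {
    if (hit_sphere(vec3(0, 0, -1), 0.5f, r))
        return vec3(1.0f, 0.4f, 0.7f);            // flat pink for hits (placeholder)
    vec3 unit_direction = unit_vector(r.direction());
    float t = 0.5f * (unit_direction.y() + 1.0f); // Chapter 3 background gradient
    return (1.0f - t) * vec3(1, 1, 1) + t * vec3(0.5f, 0.7f, 1.0f);
}
```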
And tada!
Chapter 5: Surface normals and multiple objects
After we can detect the sphere and give it a colour, we need to shade its surface. We look at the surface normal at each hit point to determine what colour should be there.
The normal of the sphere points in the direction of the hit point minus the centre of the sphere. Once we have the normal, we can use it to shade the sphere.
We changed the hit_sphere function to return the parameter t of the nearest hit instead of a boolean. If the discriminant is less than 0, the ray misses and we print the background colour. But if the ray hits a sphere, the colour function receives t, computes the hit point and its normal, and shades the sphere. In my code, I wanted to shade the sphere horizontally, so the colour varies along the x-axis.
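In sketch form, hit_sphere now returns the smaller quadratic root (the nearest hit) and the colour function maps the normal's components into colour (compact vec3/ray stand-ins again; my own x-based shading would replace the standard mapping shown here):

```cpp
#include <cmath>

// Compact stand-ins for the earlier vec3 and ray classes.
struct vec3 {
    float e[3];
    vec3() {}
    vec3(float a, float b, float c) { e[0] = a; e[1] = b; e[2] = c; }
    float x() const { return e[0]; }
    float y() const { return e[1]; }
    float z() const { return e[2]; }
};
inline vec3 operator+(const vec3 &a, const vec3 &b) { return vec3(a.e[0]+b.e[0], a.e[1]+b.e[1], a.e[2]+b.e[2]); }
inline vec3 operator-(const vec3 &a, const vec3 &b) { return vec3(a.e[0]-b.e[0], a.e[1]-b.e[1], a.e[2]-b.e[2]); }
inline vec3 operator*(float t, const vec3 &v) { return vec3(t*v.e[0], t*v.e[1], t*v.e[2]); }
inline float dot(const vec3 &a, const vec3 &b) { return a.e[0]*b.e[0] + a.e[1]*b.e[1] + a.e[2]*b.e[2]; }
inline vec3 unit_vector(const vec3 &v) { return (1.0f / sqrtf(dot(v, v))) * v; }
struct ray {
    vec3 A, B;
    ray(const vec3 &a, const vec3 &b) : A(a), B(b) {}
    vec3 origin() const { return A; }
    vec3 direction() const { return B; }
    vec3 point_at_parameter(float t) const { return A + t * B; }
};

// Return the nearest t at which the ray hits the sphere, or -1.0 on a miss.
float hit_sphere(const vec3 &center, float radius, const ray &r) {
    vec3 oc = r.origin() - center;
    float a = dot(r.direction(), r.direction());
    float b = 2.0f * dot(oc, r.direction());
    float c = dot(oc, oc) - radius * radius;
    float discriminant = b * b - 4.0f * a * c;
    if (discriminant < 0) return -1.0f;
    return (-b - sqrtf(discriminant)) / (2.0f * a);  // smaller root = nearest hit
}

vec3 color(const ray &r) {
    float t = hit_sphere(vec3(0, 0, -1), 0.5f, r);
    if (t > 0.0f) {
        // Normal = hit point minus centre; map each component from [-1,1] to [0,1].
        vec3 N = unit_vector(r.point_at_parameter(t) - vec3(0, 0, -1));
        return 0.5f * vec3(N.x() + 1, N.y() + 1, N.z() + 1);
    }
    vec3 unit_direction = unit_vector(r.direction());
    float s = 0.5f * (unit_direction.y() + 1.0f);
    return (1.0f - s) * vec3(1, 1, 1) + s * vec3(0.5f, 0.7f, 1.0f);
}
```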
Next, we can implement the hit function so objects can be hitable by rays. It can be implemented as an abstract class.
Then, we can use this abstract class in the sphere as an object that is hitable. We can apply the intersection calculation into the sphere function.
Since objects are hitable, we need to make a list of objects so we can check them as the ray traces.
After implementing hitable_list, we can create multiple objects in the main function.
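The interesting part of hitable_list is its hit method: it loops over all objects while shrinking the search window, so only strictly nearer hits are accepted. A sketch (the vec3/ray stand-ins are trivial here, and the fixed_hit stub exists only to illustrate the loop; real spheres implement hit via the discriminant test):

```cpp
// Trivial stand-ins; the real vec3 and ray come from earlier chapters.
struct vec3 { float e[3]; };
struct ray {};
struct hit_record { float t; vec3 p; vec3 normal; };

class hitable {
public:
    virtual bool hit(const ray &r, float t_min, float t_max, hit_record &rec) const = 0;
    virtual ~hitable() {}
};

class hitable_list : public hitable {
public:
    hitable_list(hitable **l, int n) : list(l), list_size(n) {}
    virtual bool hit(const ray &r, float t_min, float t_max, hit_record &rec) const {
        hit_record temp_rec;
        bool hit_anything = false;
        float closest_so_far = t_max;           // current upper bound of the search window
        for (int i = 0; i < list_size; i++) {
            if (list[i]->hit(r, t_min, closest_so_far, temp_rec)) {
                hit_anything = true;
                closest_so_far = temp_rec.t;    // accept only nearer hits from now on
                rec = temp_rec;
            }
        }
        return hit_anything;
    }
    hitable **list;
    int list_size;
};

// Illustration-only object that always "hits" at a fixed distance.
class fixed_hit : public hitable {
public:
    fixed_hit(float t) : t_(t) {}
    virtual bool hit(const ray &, float t_min, float t_max, hit_record &rec) const {
        if (t_ < t_min || t_ > t_max) return false;
        rec.t = t_;
        return true;
    }
    float t_;
};
```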
Chapter 6: Antialiasing
We can see jaggies along the edges. What we can do is antialiasing. We send rays to the pixel randomly and average the colour of the rays.
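The jitter-and-average pattern in miniature (shade here is a toy 1D stand-in for the real colour function, just to show the idea; the real loop builds a ray per sample via the camera's get_ray and jitters both u and v with drand48):

```cpp
#include <cstdlib>  // drand48

// Toy stand-in for the colour function: a hard edge at u = 0.5,
// so the effect of averaging shows up at the boundary pixel.
float shade(float u) { return u < 0.5f ? 0.0f : 1.0f; }

// Average ns jittered samples inside pixel i of an nx-pixel row.
float sample_pixel(int i, int nx, int ns) {
    float col = 0.0f;
    for (int s = 0; s < ns; s++) {
        float u = (float(i) + drand48()) / float(nx);  // random point inside the pixel
        col += shade(u);
    }
    return col / float(ns);  // the average smooths the jagged edge
}
```

In the renderer itself, each sample traces a full ray and the per-pixel colour is the average of ns traced colours.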
As a result, we can see the jaggies are blurred from the random and averaging process.
The diminishing returns problem: I got excited and cranked up the samples to 1000, thinking the image would look much better. Spoiler: it didn't make much visual difference, but the render time went through the roof. This was my first lesson in optimization—sometimes "good enough" really is good enough.
Chapter 7: Diffuse Materials
This is where things started to click for me. Diffuse materials (like matte surfaces) don't emit light—they just absorb and scatter light from their surroundings. The key insights:
- Diffuse objects take on the colour of their surroundings
- They modulate that with their intrinsic colour
- Light that reflects off a diffuse surface has its direction randomized
- Light might be absorbed rather than reflected
Debugging detour: I initially tried to use my own random generator, which produced weird artifacts. After an hour of debugging, I switched to drand48() from stdlib.h and everything worked. Lesson learned: use battle-tested libraries for random number generation.
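The rejection sampler that finally worked for me, using drand48 (a compact vec3 stand-in is inlined so the snippet is self-contained):

```cpp
#include <cstdlib>  // drand48

// Compact stand-in for the Chapter 2 vec3 class.
struct vec3 {
    float e[3];
    vec3() {}
    vec3(float a, float b, float c) { e[0] = a; e[1] = b; e[2] = c; }
    float squared_length() const { return e[0]*e[0] + e[1]*e[1] + e[2]*e[2]; }
};
inline vec3 operator-(const vec3 &a, const vec3 &b) { return vec3(a.e[0]-b.e[0], a.e[1]-b.e[1], a.e[2]-b.e[2]); }
inline vec3 operator*(float t, const vec3 &v) { return vec3(t*v.e[0], t*v.e[1], t*v.e[2]); }

// Keep drawing points in the cube [-1,1]^3 until one lands inside the
// unit sphere; the result is a random bounce direction for diffuse surfaces.
vec3 random_in_unit_sphere() {
    vec3 p;
    do {
        p = 2.0f * vec3(drand48(), drand48(), drand48()) - vec3(1, 1, 1);
    } while (p.squared_length() >= 1.0f);
    return p;
}
```

The diffuse bounce then aims at rec.p + rec.normal + random_in_unit_sphere(), and the returned colour is halved per bounce to model 50% absorption.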
Chapter 8: Metal
In our program, the material needs to do two things:
- Produce a scattered ray (or say it absorbed the incident ray)
- If scattered, say how much the ray should be attenuated
We start by creating an abstract class and assigning it to the hitable object. Hence, when the object is hit by the ray, we can call the material class and observe it. Then, I implemented the class according to the book. For metal material, we do scattering on the ray.
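The one formula metal needs is mirror reflection, v − 2·dot(v, n)·n (compact vec3 stand-in again; metal::scatter feeds this the incoming ray direction and the surface normal, and sets the attenuation to the material's albedo):

```cpp
// Compact stand-in for the Chapter 2 vec3 class.
struct vec3 {
    float e[3];
    vec3() {}
    vec3(float a, float b, float c) { e[0] = a; e[1] = b; e[2] = c; }
    float x() const { return e[0]; }
    float y() const { return e[1]; }
    float z() const { return e[2]; }
};
inline vec3 operator-(const vec3 &a, const vec3 &b) { return vec3(a.e[0]-b.e[0], a.e[1]-b.e[1], a.e[2]-b.e[2]); }
inline vec3 operator*(float t, const vec3 &v) { return vec3(t*v.e[0], t*v.e[1], t*v.e[2]); }
inline float dot(const vec3 &a, const vec3 &b) { return a.e[0]*b.e[0] + a.e[1]*b.e[1] + a.e[2]*b.e[2]; }

// Mirror v about the unit surface normal n.
inline vec3 reflect(const vec3 &v, const vec3 &n) {
    return v - 2.0f * dot(v, n) * n;
}
```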
This blew my mind: Seeing the first reflective sphere appear on my screen was the moment I realized I was actually simulating light physics. The result is just fantastic!
Chapter 9: Dielectrics
Dielectrics are clear materials like water and glass. After the ray hits the surface, it splits into a reflected ray and a refracted ray.
We can implement the refraction into the function:
bool refract(const vec3 &v, const vec3 &n, float ni_over_nt, vec3 &refracted){
vec3 uv = unit_vector(v);
float dt = dot(uv, n);
float discriminant = 1.0 - ni_over_nt*ni_over_nt*(1-dt*dt);
if (discriminant>0){
refracted = ni_over_nt*(uv - n*dt) - n*sqrt(discriminant);
return true;
}
return false;
}
Next, we want to have glass with reflectivity, where we can use an approximation by Christophe Schlick.
Schlick's Approximation
The Schlick approximation gives us a reflectance that varies with angle: R(θ) ≈ R0 + (1 − R0)(1 − cos θ)⁵
Where R0 is the reflectance at normal incidence, for refractive index n: R0 = ((1 − n) / (1 + n))²
float schlick(float cosine, float ref_idx)
{
float r0 = (1 - ref_idx) / (1 + ref_idx);
r0 = r0 * r0;
return r0 + (1 - r0) * pow(1 - cosine, 5);
}
The image seen through the glass still appears upside down, since refraction inverts it. We can place a second glass sphere with a slightly smaller, negative radius inside the first: the geometry is unchanged, but the surface normals point inward, giving a hollow glass shell through which light bounces back to normal.
Chapter 10: Positionable Camera
We can implement our camera class to be able to specify field of view and aspect ratio. Then, we consider the viewpoint by defining lookfrom and lookat in the class constructor.
Field of View Mathematics
The field of view (FOV) determines how much of the scene we can see. The relationship between the vertical FOV angle θ and the viewport dimensions is: h = tan(θ / 2)
Where:
- θ is the vertical field of view in radians
- h is the half-height of the viewport
class camera
{
public:
camera(vec3 lookfrom, vec3 lookat, vec3 vup, float vfov, float aspect){
vec3 u, v, w;
float theta = vfov*M_PI/180;
float half_height = tan(theta/2);
float half_width = aspect*half_height;
origin = lookfrom;
w = unit_vector(lookfrom - lookat);
u = unit_vector(cross(vup, w));
v = cross(w, u);
left_corner = origin - half_width*u - half_height*v - w;
horizontal = 2*half_width*u;
vertical = 2*half_height*v;
}
ray get_ray(float u, float v){return ray(origin, left_corner + horizontal*u + vertical*v - origin );}
vec3 left_corner;
vec3 horizontal;
vec3 vertical;
vec3 origin;
};
What we do is relocate the camera by defining the look-from and look-at points, but we also pass a view-up (vup) vector, which lets us roll the camera around its viewing axis.
Chapter 11: Defocus Blur
Depth of field, or defocus blur, is a feature where we implement the camera having a lens.
#ifndef CAMERA_H
#define CAMERA_H
#include "ray.h"
vec3 random_in_unit_disk(){
vec3 p;
do{
p = 2.0*vec3(drand48(), drand48(), 0) - vec3(1,1,0);
} while (dot(p, p)>= 1.0 );
return p;
}
class camera
{
public:
camera(vec3 lookfrom, vec3 lookat, vec3 vup, float vfov, float aspect, float aperture, float focus_dist){
lens_radius = aperture/2.0;
float theta = vfov*M_PI/180.0;
float half_height = tan(theta/2.0);
float half_width = aspect*half_height;
origin = lookfrom;
w = unit_vector(lookfrom - lookat);
u = unit_vector(cross(vup, w));
v = cross(w, u);
left_corner = origin - half_width*focus_dist*u - half_height*focus_dist*v - focus_dist*w;
horizontal = 2*half_width*focus_dist*u;
vertical = 2*half_height*focus_dist*v;
}
ray get_ray(float s, float t){
vec3 rd = lens_radius*random_in_unit_disk();
vec3 offset = u*rd.x() + v*rd.y();
return ray(origin + offset, left_corner + horizontal*s + vertical*t - origin - offset );
}
vec3 left_corner;
vec3 horizontal;
vec3 vertical;
vec3 origin;
vec3 u,v,w;
float lens_radius;
};
#endif
What we do is pick a random point on a disk of radius aperture/2 (the lens) and shoot the ray from that point toward the focal plane. Points on the focal plane stay sharp, while points outside the focus range are smeared across samples, which gives us the bokeh effect.
What I Learned
Technical Takeaways
- Ray tracing is beautifully simple conceptually - shoot rays, check intersections, calculate colors. The complexity comes from optimizing and adding features.
- Mathematics is unavoidable - You need vector operations, quadratic equations, and trigonometry. But once you understand them, they're just tools.
- Incremental progress beats perfectionism - Each chapter added one small feature. Seeing gradual improvement kept me motivated.
What Surprised Me Most
The physical accuracy surprised me. I wasn't just making pretty pictures—I was simulating actual light behavior. The glass refraction, metal reflections, and depth of field all emerge naturally from the physics equations.
What I'd Do Differently
- Start with a faster language - C++ compilation got tedious. Rust or a JIT-compiled language might be better for learning.
- Add performance metrics earlier - I wasted time on optimizations that didn't matter.
- Take notes on the math - I found myself re-deriving equations I'd already figured out.
Time Breakdown
- Evening 1 (3 hours): Chapters 1-6 (Basic rendering, shapes, antialiasing)
- Evening 2 (3 hours): Chapters 7-9 (Materials - diffuse, metal, glass)
- Evening 2.5 (1.5 hours): Chapters 10-11 (Camera controls, depth of field)
Total: 7.5 hours from zero to photorealistic renders. Not bad!
Conclusion
From colored pixels to photorealistic glass spheres with bokeh—all in 7.5 hours. That's the beauty of ray tracing: the fundamentals are surprisingly accessible.
Did I beat the "one weekend" challenge? Technically no (2.5 evenings ≈ half a weekend). But I proved something more important: you don't need weeks or months to understand how rendering works. You just need curiosity, some math, and a willingness to see your code slowly transform light into images.
If you're curious about graphics programming, Ray Tracing in One Weekend is the perfect starting point. It's free, well-written, and you'll have something impressive to show by the end.
Now go make some pretty pictures. ✨
