Phasmophobia and Bézier Curves

I’ve been playing a lot of Phasmophobia recently. It’s a ghost-hunting game where you and a team of up to three other investigators explore haunted properties looking for evidence to determine what type of ghost is present. At the same time, you have to try not to get killed at the hands of said ghost.

So what does this game have to do with Bézier curves??

We’ll get there. Trust me.

In the most recent release, the developers added some riddles and clues to tease something coming in the next major revision. The problem I ran into was that if I had a question about a riddle I couldn’t solve, googling would only bring up spoiler-filled results. I didn’t want answers, just little nudges in the correct direction.

I decided to take it upon myself to create a repository of information about the clues so one can decide which information to be exposed to, thereby minimizing the risk of unwanted spoilers. I give you, Phasmophobia Rune Hints.

This still has nothing to do with Bézier curves! Get on with it!

Dear reader, I am nearly there. Believe me!

One of the evidences of ghost activity in the game is what they call “Ghost Orbs”. They are floating balls of light that are only visible on video camera. (In reality, these are specks of dust reflecting light close to a lens…) In any case, I wanted to spruce up my rune-hinting website with some Phasmophobia-appropriate ambiance. I wanted to add some ghost orbs.

My first implementation was to take a PNG of a ghost orb and use JavaScript to move it in a specific direction while fading it in and out at the ends of the animation. This yielded a result that was maybe passable, but it wasn’t great. My brother suggested using Bézier curves so the orb would follow a more natural curved path.

In this example, the ghost orbs both start and end at the same location; however, the one on the left follows a linear path while the one on the right follows a Bézier curve. (The points are chosen at random, so you may have to watch a few cycles to get an interesting comparison.) Below is a simplified example of the code to demonstrate the minimal logic required to compute the curved path.

function find_point_on_line(p1, p2, percent){
    // Linear interpolation: the point `percent` of the way from p1 to p2
    let p3 = {
        "x" : ((p2.x - p1.x) * percent) + p1.x,
        "y" : ((p2.y - p1.y) * percent) + p1.y,
    };
    return p3;
}

function follow_bezier_curve(b1, b2, b3, percent){
    if (percent >= 1) return; // We finished
    let p1 = find_point_on_line(b1, b2, percent);
    let p2 = find_point_on_line(b2, b3, percent);
    let p3 = find_point_on_line(p1, p2, percent); // p3 is the orb's position this frame
    // (apply p3 to the orb element here)
    timer = setTimeout(() => follow_bezier_curve(b1, b2, b3, percent + .001), 1000/60); // ~60 fps
}

Bézier curves are pretty common. I’ve been using them for decades in software like Adobe Illustrator or Affinity Designer. I remember learning about them in college. But I’ve actually never had a need to program my own.

As it turns out, they are really simple. A quadratic Bézier curve uses 3 points: a, b, and c. First you draw a path from a to b, and one from b to c. Then you draw paths between points at equivalent subdivisions along these two paths, and the curve emerges from those lines. It’s really hard to explain with words, so try out this interactive animation instead:

This is the type of Bézier curve I used on the Phasmophobia site.
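The construction above can also be written down directly. A sketch of both views, with hypothetical helper names of my own: the repeated-interpolation (de Casteljau) form and the equivalent closed-form polynomial B(t) = (1−t)²a + 2(1−t)t·b + t²c give the same point.

```javascript
// Quadratic Bézier via repeated interpolation (de Casteljau),
// compared against the closed-form polynomial.
function lerp(p1, p2, t){
    return { x: p1.x + (p2.x - p1.x) * t,
             y: p1.y + (p2.y - p1.y) * t };
}

function quadratic_bezier(a, b, c, t){
    // Interpolate along ab and bc, then between those two points
    return lerp(lerp(a, b, t), lerp(b, c, t), t);
}

// The same point as the standard polynomial:
// B(t) = (1-t)^2 * a + 2(1-t)t * b + t^2 * c
function quadratic_bezier_poly(a, b, c, t){
    let u = 1 - t;
    return { x: u*u*a.x + 2*u*t*b.x + t*t*c.x,
             y: u*u*a.y + 2*u*t*b.y + t*t*c.y };
}
```

Either form can drive the orb animation; the lerp version just mirrors the geometric construction more closely.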

There are also cubic Bézier curves. Cubic curves are basically two quadratic curves combined. Let’s say you have 4 points: a, b, c, and d. You create two quadratic curves, abc and bcd; your final curve is then created by the same interpolation process between the two resulting curves.

These 4-point cubic curves are how vector drawing programs like Illustrator build paths. The outer two points act as nodes, and the inner two act as the handles.
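The cubic construction described above can be sketched in a few lines, reusing the quadratic step (function names here are illustrative, not from the site’s actual code):

```javascript
function lerp(p1, p2, t){
    return { x: p1.x + (p2.x - p1.x) * t,
             y: p1.y + (p2.y - p1.y) * t };
}

function quadratic_bezier(a, b, c, t){
    return lerp(lerp(a, b, t), lerp(b, c, t), t);
}

// A cubic curve through control points a, b, c, d:
// interpolate between the quadratic curves abc and bcd.
function cubic_bezier(a, b, c, d, t){
    return lerp(quadratic_bezier(a, b, c, t),
                quadratic_bezier(b, c, d, t), t);
}
```

At t = 0 this lands exactly on a, and at t = 1 on d, with b and c acting as the handles in between.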

Although not very complicated, I thought it was a fun diversion, and wanted to share.

Reflected Primary Colors

Eyes are interesting things. We have color receptors in our eyes tuned for three different wavelengths: short (blue), medium (green), and long (red). Our brains combine this information and allow us to perceive millions of colors, which is really just amazing.

These three colors are so important, that we specifically target them when we produce images on our monitors and TVs. All the images are made up of these three colors. Just as importantly, when we capture images with cameras, we actually filter the light going into the cameras into the three primary colors.

Camera image sensors aren’t inherently color sensitive. Each pixel we get out of a camera is actually made up of information gathered from 4 sub-pixels on the digital camera sensor. Each sub-pixel has a filter in front of it for light to pass through: 1 red, 1 blue, and 2 green. This arrangement is called the Bayer filter. After the camera takes a picture or records a frame, the sub-pixels get interpolated together to give a single pixel of data.

(It’s probably more nuanced than that, but that is the general idea)
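As a toy illustration of the idea (real demosaicing interpolates across neighboring blocks, so this is a deliberate simplification): combine one RGGB block into a single RGB pixel by keeping the red and blue values and averaging the two greens.

```javascript
// Toy sketch of collapsing one RGGB Bayer block into a single RGB pixel.
// Real demosaicing is smarter; this just illustrates the
// "1 red, 2 green, 1 blue" layout described above.
function bayer_block_to_pixel(block){
    // block = [[R, G],
    //          [G, B]]   raw sensor values, 0-255
    return {
        r: block[0][0],
        g: (block[0][1] + block[1][0]) / 2, // average the two green sub-pixels
        b: block[1][1],
    };
}
```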

I recently purchased an RGB flashlight, and wondered how well I could reconstruct a color image by taking 3 photos illuminated with the different lights and combining them.

Ideally, I would like to try this using a black and white film camera, or black and white digital sensor, however I have access to neither, so I decided to use my iPhone.

The first method I used was to take color photos under each illumination and layer them on top of each other using the add blend mode, which sums the RGB values of the layers. For example:

This method works well. It’s somewhat surprising considering that nothing I used was color calibrated.
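Per pixel, the add blend mode amounts to summing each channel across the three exposures and clamping to the maximum value. A minimal sketch (assuming 8-bit channels; the function name is my own):

```javascript
// Additive blend of three exposures: each output channel is the
// sum of the corresponding input channels, clamped to 255.
function add_blend(a, b, c){
    let clamp = (v) => Math.min(v, 255);
    return {
        r: clamp(a.r + b.r + c.r),
        g: clamp(a.g + b.g + c.g),
        b: clamp(a.b + b.b + c.b),
    };
}
```

In the red-lit exposure most of the signal lands in the red channel (and likewise for green and blue), so summing the three approximately reassembles the full-color pixel.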

The second method, which I was more excited about, was taking black and white photos of each and using those as the raw channel data to reconstruct the image.

I took the photos on my iPhone while using the black and white filter mode. I then used Affinity Photo to import the photos and assign them to channels. The end result was abysmal.

I was able to improve it a bit by boosting the blue channel and reducing the intensity of the green channel.

Still not great.

I believe some of the problems stem from the fact that the iPhone’s BW filter is not at all true black and white. I was surprised initially to find that the image wasn’t actually grayscale; it was RGB. I also don’t know how the image is being converted to black and white. It’s plausible that in the process of converting to grayscale, more red and blue data is thrown away in favor of green, because that produces a better result to our eyes.

Although this was an interesting exercise, I think in the end it didn’t achieve great results because of the lack of a true black and white sensor. The color results would have been more meaningful if they had been achieved without any color-aware equipment.

Additionally, I’d be interested in comparing a composited photo using 3 exposures with red, green, and blue lights to a composited photo using 3 filters and white light. Someday I can revisit this experiment once I procure a proper camera.

Clarus the Dogcow

For long-time Apple fans, hardly anything is more iconic than Clarus the Dogcow. Not quite a dog, not quite a cow, Clarus made her debut as part of the Cairo font set drawn by Susan Kare in 1983. Later on in the dogcow’s life, she found herself depicting the orientation of printer paper in the page setup dialog window for Mac OS. I don’t recall exactly in which version she made her debut, but sometime before System 7. And if you are the right age, you may even remember the brown incarnation of the dogcow appearing as a stamp in Kid Pix!

The dogcow even had official technical notes on Apple’s webpage back in the day! (though those documents were sadly removed in the mid 2000’s)

Naturally, I thought that the best way to commemorate Clarus’ impact on my childhood would be to replicate her likeness in wood. I made a template to follow and cut out strips of light and dark wood that were as uniform as I could make them. These strips of wood served as pixels on my arbor canvas. My dad and I took the little wooden pixels and laid them up against the paper template, slowly building up Clarus’ familiar frame. The dark wood is Gaboon ebony, and the light wood is maple.

After all the pieces were placed and we corrected any visible mistakes, glue was poured on top. The sides were clamped just enough to keep the pieces from moving. Bursts of air from an air compressor pushed the glue down between the wooden pixels, then finally the clamps were tightened.

Once it was dry, we were able to take multiple slices from the resulting block of wood to make the inlays for some miniature cutting boards. Four in all were made. Using a CNC router, a rectangular inset was milled out of each maple cutting board, the corners were squared up with a chisel, and the inlay was glued into place.

All that was left was some sanding and finishing with a food-grade wax to complete these little cutting boards. Kind of a random thing to make, and I honestly can’t remember what made me want to do this in the first place. But I like my little dogcow cutting board and think to myself “moof!” every time that I use it.

© 2007-2015 Michael Caldwell