There are, however, many reasons why NPM is the perfect target for
this kind of attack. JavaScript is a language with a small standard
library, an enormous user base, and a general social acceptance of
insane node_modules folders. We all remember the left-pad
incident, a mere
17 lines of code that sent the JavaScript ecosystem into a spiral.
Anyway, this TIL isn't about all of that. It's about one of the nice
things I stumbled upon while reading the aforementioned Hacker News
thread: Deno's standard library is widely available even for non-Deno
users. You can find it on Deno's independent module registry,
JSR.
When I work in JavaScript or Rust I'm always envious of Go. I may hate
the syntax and find the code lacking expressivity, but it undeniably
has a rich and simple standard library. I've often seen an ethos in Go
to stick to "pure Go" and avoid pulling in third-party implementations
for items already covered by the stdlib. It's a stacked comparison
because JavaScript was never meant to be a language running on the
server (that's really a Node problem, after all), but times have
changed and JavaScript is no longer just a browser runtime.
The ability to use Deno's standard library for non-Deno projects
offers a nice default for JavaScript developers. We might just be
trading one dependency for another, but I have a high degree of trust
in the Deno folks to make good on their architectural goals and offer
a widely useful, yet dependency-free set of functions.
On a quick scan of Deno's @std modules, there are quite a few
goodies:
@std/collections which covers utility functions a la lodash
@std/uuid for UUID generation/validation
@std/async for quite a few commonly-implemented async handlers
(debounce, retry, exponential back-off)
@std/csv and @std/yaml for common file formats
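To give a flavor of those lodash-style helpers, here's a quick sketch using a couple of functions from @std/collections (this assumes the package has been added from JSR; chunk and groupBy are the exports I'd reach for first, but check the docs for the full list):

import { chunk, groupBy } from "@std/collections";

const scores = [
  { player: "ada", score: 3 },
  { player: "ada", score: 7 },
  { player: "grace", score: 5 },
];

// Group records by a derived key: { ada: [...], grace: [...] }
const byPlayer = groupBy(scores, (s) => s.player);

// Split an array into fixed-size chunks: [[1, 2], [3, 4], [5]]
const pages = chunk([1, 2, 3, 4, 5], 2);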
The retry function from @std/async looks especially appealing:
import { retry } from "@std/async/retry";

const req = async () => {
  // some function that throws sometimes
};

// Below resolves to the first non-error result of `req`
const retryPromise = await retry(req, {
  multiplier: 2,
  maxTimeout: 60000,
  maxAttempts: 5,
  minTimeout: 100,
  jitter: 1,
});
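For a more concrete sense of what req might be, here's a hedged sketch (the endpoint is made up) of a fetch wrapper that throws on non-2xx responses, which is exactly the kind of flaky call retry is built for:

import { retry } from "@std/async/retry";

// Hypothetical endpoint; any async function that sometimes throws will do.
const req = async () => {
  const res = await fetch("https://example.com/api/flaky");
  if (!res.ok) throw new Error(`request failed: ${res.status}`);
  return res.json();
};

const data = await retry(req, { maxAttempts: 5 });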
Something that's definitely worth exploring for new projects. The
source code for the entirety of Deno's standard library is available
here: denoland/std.
I can't stop thinking about two puzzle games that came out last
month. They're both the kind of game that makes me jealous for not
having designed it myself, then jealous again for knowing that even
with the idea in my hands I couldn't execute it as well. Those two
games are Strange
Jigsaws
and Öoo.
Strange Jigsaws is a meta-puzzle exploration built on the humble
jigsaw puzzle. If you're not into jigsaws (I don't blame you, they're not particularly puzzling), don't be dissuaded. The puzzles in Strange Jigsaws span a wide variety of logic and themed puzzles that are jigsaws only in the narrowest sense.
Thanks to those jigsaws, the game is incredibly tactile. FLEB clearly
knows what he's doing when he designs puzzles that emphasize juicy
interactions. Moving, rotating, and slotting puzzle pieces is dopamine
delivered straight to the brain.
The real triumph of Strange Jigsaws is the breadth of ideas and
overall quality of their execution. There are a ton of different
puzzle ideas in the game, almost none of which are a plain ol'
jigsaw. Each puzzle is a fresh challenge with a consistent difficulty, a bit on the easy side but only enough to avoid any sense of frustration.
I played through Strange Jigsaws in three one-hour sessions. It's a
game that's very amenable to short sessions since each puzzle is
neatly self-contained.
Öoo is likewise a short game. I finished in just under two hours, but
that two hours was a single, non-stop playthrough. Öoo is not an easy
game to put down.
Design-wise, Öoo is kind of the antithesis of Strange Jigsaws. Where
Strange Jigsaws delights by introducing new mechanics with every
puzzle, Öoo is a study of the possibility space of a single idea. That single idea grows deeper as the player accumulates knowledge of how to apply it to the surrounding world.
That means mechanics in Öoo aren't so much introduced as
revealed. Every puzzle advances the player's intuition of the
mechanical language underlying Öoo, unveiling how that language
interacts with the surrounding world and how the player can use that
language to solve puzzles. The player character doesn't gain new
abilities. Instead, the player learns the world and the world reveals
itself as one great puzzle.
It's hard to overstate how much design excellence Öoo squeezes out of
its mechanics. Several times a solution left me bursting out in laughter, amazed by how Öoo bent the rules of the world and
re-contextualized my expectations. It's a uniquely joyful experience.
It's hard to believe that two of the best puzzle games that I've
played in the last two years came out in the same month. Do yourself a
favor and check them out.
I'm building a new Emacs package: Helix
Mode. Helix Mode implements
the Helix modal
keybindings in
Emacs. It's been my daily driver for about a month, and while it still
has some bugs, I'm reasonably confident it's in a usable state.
About six months ago I attempted to set up Emacs on a Windows machine
and found it to be an immensely frustrating experience. The default
Windows Emacs build works well enough if you don't use any third-party
packages, but who is using Emacs who isn't also using packages? Diving
into the complexity of setting up my entire Emacs config exposed a
reliance on Linux CLI tools that I hadn't installed, and attempting to
configure my Windows environment to properly export paths with
cygwin/w64devkit/whatever was not going well.
Eventually I gave up and swapped over to WSL, the Linux compatibility layer for Windows. For the most part WSL is great, provided you use
the terminal. Attempting to use GUI Emacs from WSL results in a
Frankenstein-like windowing experience. It kinda works but is far from
ideal.
With these frustrations top of mind, I decided to drop Emacs
altogether and experiment with a terminal-first workflow. I had already
been itching for an excuse to try out Helix
and this felt like the perfect opportunity.
As it turns out, Helix is an incredibly capable text editor, if a bit
light on the tooling. The vim-ish keybinding scheme is magical once
you understand the basics, and the automatic configuration settings
for tree-sitter and LSP work amazingly well. That said, Helix is not
very featureful and expects a lot of supplemental work done in the
terminal. It really needs to be paired with a terminal multiplexer
like Zellij or
tmux to work effectively.
I settled on tmux and set up a light config that emulates Emacs:
# remap prefix from C-b to C-x
unbind C-b
set-option -g prefix C-x
bind-key C-x send-prefix
# window management a la Emacs C-x 0/1/2/3
bind 0 kill-pane
bind 1 kill-pane -a
bind 2 split-window -v
bind 3 split-window -h
unbind '"'
unbind %
# zellij-style pane swapping
bind h select-pane -L
bind j select-pane -D
bind k select-pane -U
bind l select-pane -R
If you're willing to settle for a minimal text editor that's
supplemented with tmux and small scripting languages, I think Helix is
incredibly compelling. It remixes the Vim keybindings[1] in a way
that makes them far more intuitive.
The principal change is flipping around Vim's verb-object model. In Vim, if you want to delete the next word, you first press d (delete) and follow up with w (word). The idea is that you declare your action before your intended target, queuing up what you intend to perform before telling Vim what to perform it on.
Helix is the opposite. First you select your intended target: w
(word). Helix automatically selects the word as the cursor navigates
to it, clarifying the selection visually. Then you perform your
action: d (delete).
It's kind of like Helix is operating in a permanent, automatic visual
mode. In Vim, I often found myself resorting to visual mode because I
didn't inherently trust my muscle memory to land on the right selection before performing a deletion. This is problematic because
Vim's visual mode makes everything less efficient. Here's how you'd
delete with visual mode:
Press v to enter visual mode.
Press w to navigate to the next word.
Press d to delete.
The funny thing is that visual mode makes Vim function like Helix, but
requires an extra keypress for every action. In Helix, the selection
is automatic so you don't lose any street
cred.
Back to Emacs
Despite enjoying the Helix + tmux workflow, over the last couple of months I've come to miss some of the niceties of Emacs:
Built-in diffing tools like vc-diff are really nice, even if I
prefer the git CLI for most everything else.
project.el is unbeatable. Helix doesn't have a concept of
workspaces, nor does it allow global search & replace like
project-query-replace-regexp.
Helix only recently got a proper file navigator but it hasn't yet
been released. I doubt that it will be as useful as dired.
Emacs remains the king of Lisp editing (shoutout to the 2025 Spring
Lisp Game Jam where
I'm using Fennel & Love2d).
And so the idea for Helix
Mode developed. It's easily
my most ambitious Emacs package, both in lines of code and
functionality. But it brings all of my favorite pieces of Helix into
the Emacs editing experience.
Helix Mode isn't designed to re-implement all of Helix, nor provide
the extensibility of the venerable Evil Mode. Instead it's aimed at
the subset of Helix keybindings responsible for editing and
navigation, leaving everything else to Emacs. That means you'll still be using stuff like M-x replace-string or consult.
What it does offer is the same object-verb navigation as Helix,
complete with automatic selections. It also includes some of the Helix
sub-modes, like the Goto mode that provides go-to definition (g d)
or the Space mode that allows navigation across a project (SPC f), both of which integrate with project.el and xref.
If I haven't bored you with the details of my text-editor dabblings
over the past six months, I encourage you to check out Helix
Mode. I have a long list of
features and improvements that I'd like to make before the 1.0.0
release, but I think it's currently in a very usable state.
Worth noting that the object-verb idea isn't Helix's innovation, but Kakoune's. ↩︎
Using Action Cable in React is surprisingly difficult. Every third-party package
that I've come across that offers a custom hook makes some fundamental
assumptions that only hold for simple applications. The vast majority of
tutorials gloss over anything beyond the "log when event is received"
boilerplate.
In principle, integrating Action Cable is easy, since it follows a well-known subscribe/unsubscribe pattern with a useEffect. It looks something like this[1]:
import { createConsumer } from '@rails/actioncable'
import { useEffect, useMemo } from 'react'

const MyComponent = () => {
  const consumer = useMemo(
    () => createConsumer('ws://localhost:3000/cable'),
    [],
  )

  useEffect(() => {
    const sub = consumer.subscriptions.create('AlertChannel', {
      received(data) {
        console.log(data)
      },
    })

    return () => {
      sub.unsubscribe()
    }
  }, [consumer])

  return <div />
}
When all the React application is doing is logging some data, sure, easy-peasy.
But when that component needs to access component state? Now we have a problem.
Let me demonstrate by introducing a stateful counter. It's a contrived example,
but it gets the point across that accessing component state is probably useful
for Websocket subscribers.
const MyComponent = () => {
  const [count, setCount] = useState(0)

  const consumer = useMemo(
    () => createConsumer('ws://localhost:3000/cable'),
    [],
  )

  useEffect(() => {
    const sub = consumer.subscriptions.create('AlertChannel', {
      received() {
        console.log(count)
      },
    })

    return () => {
      sub.unsubscribe()
    }
  }, [consumer])

  return (
    <div>
      <button onClick={() => setCount((c) => c + 1)}>increment</button>
    </div>
  )
}
This example demonstrates the most obvious flaw: count is missing in the
useEffect dependencies, so no matter how many times the increment button is
clicked, the value logged will always be 0. We have to make the subscription
event handler aware that the count has changed by adding it as a dependency to
the effect.
useEffect(() => {
  const sub = consumer.subscriptions.create('AlertChannel', {
    received() {
      console.log(count)
    },
  })

  return () => {
    sub.unsubscribe()
  }
}, [consumer, count])
Now, theoretically this resolves our counter issue. And, for the most part, it
does. When we receive an Action Cable event from our server, the received
handler logs with the correct value of count. However, in practice this code
has another bug: subscriptions are not properly cleaned up, so the client
responds to the Action Cable message many more times than expected. In my
testing, if I clicked the button 12 times quickly, I would see 6 console logs
when the Action Cable event is broadcast.
It seems that Action Cable is not particularly good about cleaning up
subscriptions that have the same channel key. That is, when the increment button
is clicked multiple times in succession (representing many state updates in our
component), Action Cable does not do a good job ensuring that every connection
is appropriately unsubscribed between renders. You will actually observe errors
in the Rails console, indicating that it’s struggling to keep up:
Could not execute command from ({"command" => "unsubscribe", "identifier" =>
"{\"channel\":\"AlertChannel\"}"}) [RuntimeError - Unable to find subscription
with identifier: {"channel":"AlertChannel"}]
Digging into the Action Cable source code, it becomes apparent that the library uses a JSON-stringified representation of the channel parameters as the identifier when storing the subscriber.
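As a rough sketch (paraphrased from memory rather than quoted from the library source, so treat the details as an assumption), the identifier logic looks something like this:

// Each Subscription keys itself by the JSON-stringified channel params,
// e.g. '{"channel":"AlertChannel"}'. Two subscriptions created with
// identical params therefore share the same identifier.
class Subscription {
  constructor(consumer, params = {}, mixin) {
    this.consumer = consumer
    this.identifier = JSON.stringify(params)
    // ...mixin callbacks like received() get copied onto the instance
  }
}

That shared identifier is what makes the unsubscribe bookkeeping ambiguous when several subscriptions to the same channel exist at once.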
I can only guess that there's a race condition somewhere in Action Cable
involving identical subscription identifiers. I managed to locate a GitHub issue
that tracks a similar problem and
lends a little extra support to the theory.
One way to resolve this race condition is to simply include the count in the
channel identifier, even if it's unused by the channel on the server. That way a
unique identifier is created for every re-render caused by count:
useEffect(() => {
  const sub = consumer.subscriptions.create(
    { channel: 'AlertChannel', count },
    {
      received() {
        console.log(count)
      },
    },
  )

  return () => {
    sub.unsubscribe()
  }
}, [consumer, count])
This seems to get the job done. Each Websocket subscriber is given a unique key
that can be easily located by the Action Cable library for removal. Note that
this only works for serializable data.
I'll note that I also tried passing a reference (via useRef) for the Action
Cable callbacks, hoping that a stable object reference might avoid the need
for the extra useEffect dependency. However, when the Action Cable JS
library creates new subscriptions, it creates an entirely new object,
rendering the stable reference moot.
Anyway, all this to say: be careful when creating Action Cable subscriptions
that rely on external state. Subscriptions created with the same key will likely
not be cleaned up correctly.
Most of the time in React applications this doesn't matter that much, since we
can get by with a stable, memoized reference. useQueryClient from
tanstack-query is a great example, since it allows us to invalidate our client
requests when an event is broadcast from the server:
// e.g. from react-query or tanstack-query
const queryClient = useQueryClient()

const consumer = useMemo(
  () => createConsumer('ws://localhost:3000/cable'),
  [],
)

useEffect(() => {
  if (!consumer || !queryClient) {
    return
  }

  const sub = consumer.subscriptions.create('AlertChannel', {
    received() {
      queryClient.invalidateQueries({
        queryKey: ['alerts'],
      })
    },
  })

  return () => {
    sub.unsubscribe()
  }
}, [consumer, queryClient])
For other purposes, it's likely a good idea to pass serializable data to the
Websocket channel parameters.
Note that I'm being careful to only create one consumer for a given
component to avoid re-establishing a connection to the Websocket on every
re-render. It's also likely that your consumer will need to live in a
separate hook for authorization purposes. ↩︎
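If you do pull the consumer into its own hook, here's a minimal sketch of what that might look like; the useConsumer name, token parameter, and cable URL are illustrative assumptions, not something from the original setup:

import { useMemo } from 'react'
import { createConsumer } from '@rails/actioncable'

// Hypothetical hook: memoize a single consumer per token so re-renders
// don't re-establish the Websocket connection.
export const useConsumer = (token) =>
  useMemo(
    () => createConsumer(`ws://localhost:3000/cable?token=${token}`),
    [token],
  )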
The following is an email I sent a friend regarding Kafka on the Shore.
I've had a week or two now to digest Kafka on the Shore and put some thoughts
together. It's definitely my favorite Murakami novel thus far. By a long shot.
The symbolism feels attainable, yet abstract enough that there's still room for
reader interpretation. The plot is interesting enough to give weight to the
characters, aided by the dual narrative between Kafka and Nakata/Hoshino. It's
great.
A couple of ideas stand out to me:
The Oedipus prophecy set upon Kafka isn't necessarily that he literally needs to
fulfill the Oedipus contract, but that he needs to carry on the spirit of his
father's art. The subtext that I'm picking up is that his father (the
cat-murdering, flute-blowing madman sculptor) sacrificed everything for his art,
including his relationship with his son. The prophecy that he laid upon Kafka is
his own desire for immortality, extending his name and art with Kafka as the
vehicle. Thus Kafka feels overwhelming pressure and the impossibility of his own individuality, and so he runs away.
In Miss Saeki, Kafka finds a companion in grief. The two struggle with existing
in the real world, caught instead on the threshold between life and death, where her 15-year-old spirit inhabits memories of the past. To her the past and present are inseparable; the youth that once drove her to compose Kafka on the Shore has long since vanished.
When Kafka ventures into the forest behind the cabin, he grapples with the idea
of suicide. He's literally on the precipice of death, peering into the world
beyond and the purgatory in-between. Here there's comfort in routine, at the
cost of the literal music of life. Back home there's grief and sadness, but also
the ability to form new memories shaped from the past.
I'll leave you with one of my favorite quotes, from near the end of the book:
“Every one of us is losing something precious to us,” he says after the phone
stops ringing. “Lost opportunities, lost possibilities, feelings we can never
get back again. That’s part of what it means to be alive. But inside our
heads—at least that’s where I imagine it—there’s a little room where we store
those memories. A room like the stacks in this library. And to understand the
workings of our own heart we have to keep on making new reference cards. We
have to dust things off every once in a while, let in fresh air, change the
water in the flower vases. In other words, you’ll live forever in your own
private library.”