The following is an email I sent a friend regarding Kafka on the Shore.
I've had a week or two now to digest Kafka on the Shore and put some thoughts
together. It's definitely my favorite Murakami novel thus far. By a long shot.
The symbolism feels attainable, yet abstract enough that there's still room for
reader interpretation. The plot is interesting enough to give weight to the
characters, aided by the dual narrative between Kafka and Nakata/Hoshino. It's
great.
A couple of ideas stand out to me:
The Oedipus prophecy set upon Kafka isn't necessarily that he literally needs to
fulfill the Oedipus contract, but that he needs to carry on the spirit of his
father's art. The subtext that I'm picking up is that his father (the
cat-murdering, flute-blowing madman sculptor) sacrificed everything for his art,
including his relationship with his son. The prophecy that he laid upon Kafka is
his own desire for immortality, extending his name and art with Kafka as the
vehicle. Kafka thus feels overwhelming pressure and the impossibility of his own
individuality, so he runs away.
In Miss Saeki, Kafka finds a companion in grief. The two struggle with existing
in the real world, caught instead on the threshold between life and death, where
her 15-year-old spirit inhabits memories of the past. To her the past and
present are inseparable; the youth that once drove her to compose Kafka on the
Shore has long since vanished.
When Kafka ventures into the forest behind the cabin, he grapples with the idea
of suicide. He's literally on the precipice of death, peering into the world
beyond and the purgatory in-between. Here there's comfort in routine, at the
cost of the literal music of life. Back home there's grief and sadness, but also
the ability to form new memories shaped from the past.
I'll leave you with one of my favorite quotes near the end of the book:
“Every one of us is losing something precious to us,” he says after the phone
stops ringing. “Lost opportunities, lost possibilities, feelings we can never
get back again. That’s part of what it means to be alive. But inside our
heads—at least that’s where I imagine it—there’s a little room where we store
those memories. A room like the stacks in this library. And to understand the
workings of our own heart we have to keep on making new reference cards. We
have to dust things off every once in a while, let in fresh air, change the
water in the flower vases. In other words, you’ll live forever in your own
private library.”
Six months after swapping back over to HEY for email feels like the appropriate
time to check in on how it's going. Here are the ways I use HEY to organize my
email: what works and what doesn't.
My workflow
I read every email that finds its way into my inbox. I hate unread emails, and I
especially hate the # unread counter that most other email platforms surface
within their tab titles. It unnerves me to an unhealthy degree.
That doesn't mean that I categorize every email into a special folder or label
to get it out of my inbox. HEY doesn't even support this workflow; it lacks the
notion of folders. Instead, read emails that I don't immediately delete simply
pile up in the inbox and are
covered with an image.
HEY claims that their email client is "countless"[1], in that there are no
numbers telling you how many emails are in your inbox or how far you're behind
in your organizational duties. And for the most part, that's true, except for
one glaring counter that tells you how many unscreened emails are awaiting your
approval:
Not exactly "countless" but at least the screener is only relevant for emails
from unrecognized senders.
Back on the topic of emails flowing into my inbox, most transactional emails
find their way into the Paper Trail automatically. Receipts of this kind are
bundled up and kept out of sight, out
of mind.
Other emails that I want to draw temporary importance to reside in one of the
two inbox drawers, Set Aside or Reply Later. I use Set Aside for shipping
notifications, reservations, and other emails that are only relevant for a short
period of time. Reply Later is self-evident. The system is very simple and works
the way HEY intends.
My favorite HEY feature is easily
The Feed, which aggregates newsletters
into a single page. In a world where Substack has convinced every blogger that
newsletters are the correct way to distribute their thoughts, The Feed is a
great platform for aggregation. Shout-out to
JavaScript Weekly and
Ruby Weekly.
The Feed, Paper Trail, Set Aside, and Reply Later make up the bulk of my daily
workflow in HEY. I'm very happy with these tools and while they are largely
achievable via application of filters, labels, and rules in other inbox systems,
I find the experience in HEY to be an improvement thanks to its email client and
UI.
A few other HEY tools fit into more niche use-cases.
Collections are essentially threads of threads. They're similar to labels, but
have the added benefit of aggregating attachments to the top of the page. I tend
to use them for travel plans because they provide easy access to boarding passes
or receipts.
On the topic of travel, Clips
are amazing for Airbnb door codes, addresses, or other key information that
often finds itself buried in email marketing fluff. Instead of keeping the email
in the Set Aside drawer and digging into it every time you need to retrieve a
bit of information, simply highlight the relevant text and save it to a Clip.
HEY for Domains, while severely limited in its lack of support for multiple
custom domains, at least allows for email extensions. I use
[email protected] to automatically tag incoming email with the
"reimburse" label so I can later retrieve it for my company's reimbursement
systems.
Important missing features
HEY is missing a couple of crucial features that I replace with free
alternatives.
The first is allowing multiple custom domains, a feature of Fastmail that I
dearly miss. I have a few side projects that live on separate domains and I
would prefer those projects to have email contacts matching said domain. If I
wanted to achieve this with HEY, I'd have to pay an additional $12/mo per domain,
which is prohibitively expensive[2].
Instead of creating multiple HEY accounts for multiple domains, I use email
forwarding to point my other custom domains towards my single HEY account.
Forward Email is one such service, offering free email forwarding at the cost of
keeping your forwarding configuration in plain-text DNS records (you pay extra
for encryption). Another option I haven't investigated is
Cloudflare Email Routing,
which may be more convenient if Cloudflare doubles as your domain registrar.
It's a bummer that I can't configure email forwarding for custom domains within
HEY itself, as I can with Fastmail.
The other big missing feature of HEY is
masked email.
Fastmail partners with 1Password to offer randomly-generated email addresses
that point to a generic @fastmail domain instead of your personal domain. This
is such a useful (and critical) feature for keeping a clean inbox, since many
newsletter sign-ups or point-of-sale devices (looking at you, Toast) that
collect your email have a tendency to spam without consent. With masked email,
you have the guarantee that if your masked email address gets out in the wild it
can be trivially destroyed with no link back to your other email addresses.
Luckily, DuckDuckGo has their own masked email service and it’s totally free:
DuckDuckGo Email Protection. The trade-off is a
one-time download of the DuckDuckGo browser extension that you can remove
afterwards.
Both of these features make me wish that HEY was more invested in privacy and
security. They have a couple of great features that already veer in that
direction, like tracking-pixel elimination
and the entire concept of the Screener, but they haven't added any new privacy
features since the platform launched.
Problem areas
Generally speaking, the Screener
is one of the killer features of HEY. Preventing unknown senders from dropping
email directly into your inbox is really nice. It does come with a couple of
trade-offs, however.
For one, joining a mailing list means constant triage of Screener requests.
Every personal email of every participant on that mailing list must be manually
screened. HEY created the
Speakeasy code as a pseudo
workaround, but it doesn't solve the mailing list issue because it requires a
special code in the subject line of an email.
The second problem with the Screener is pollution of your contact list. When you
screen an email into your inbox, you add that email address to your contacts.
That means your contact list export (which you may create if you migrate email
platforms) is cluttered with truckloads of no-reply email addresses, since many
services use no-reply senders for OTP or transactional emails.
When I originally migrated off of HEY to Fastmail a few years ago (before coming
back) I wrote a script that ran through my contacts archive and removed no-reply
domains with regular expressions. Instead, I wish that allowed senders were
simply stored in a location separate from my email contacts.
The other pain point is around the HEY pricing structure. HEY is divided into
two products: HEY for You, which provides an @hey.com email address, and HEY
for Domains, which allows a single custom domain and some extra features. The
problem is that these two products are mutually exclusive.
By using HEY for Domains, I do not have access to an @hey.com email address, a
HEY World blog, or the ability to take personal
notes on email threads. If I wanted these features in addition to a custom
domain, I'd need to pay for both HEY products and manage two separate accounts
in my email inbox (and I want to do neither).
The split in pricing is made even worse because the extra features offered by
HEY for Domains all revolve around team accounts, e.g. multi-user companies. For
a single HEY user, the HEY for You features are more appealing.
This creates an awkward pricing dynamic for a single-user HEY experience. The
product that I actually want is HEY for You with a single custom domain that
maps both emails to a single account. The @hey.com email address should be a
freebie for HEY for Domains users, as it is with alternative email providers.
I still like it though
Since the last two sections have been dwelling a bit on the negatives, I'll end
by saying that I still think HEY is a good product. Not every feature is going
to resonate with every individual (there's a good amount of fluff), but the
features that do resonate make HEY feel like personally-crafted software.
It's worth noting that the HEY for Domains pricing scheme is intended for
multiple users. HEY for Domains used to be branded as "HEY for Work", if
that's any indication of where the pricing awkwardness comes from. ↩︎
Lately I've been addicted to a new daily word puzzle game called
Bracket City. It's unique
among competitors because the game isn't about rearranging letters or uncovering
hidden information, but rather solving hand-written, crossword-style clues.
I recommend giving the daily puzzle a shot before reading the rest of this
article since it will help with visualizing the puzzle format. But as a quick
rules summary:
A Bracket City solution is a short phrase
Certain words are substituted with clues, indicated via a pair of square
brackets
Clues can nest other clues
You must solve the inner-most clues before you can solve the outer-most
Since Bracket City is basically a recursive crossword, the structure of a puzzle
is easily mapped to a tree. And so, in classic programmer-brain fashion, I built
a little app that turns a Bracket City puzzle into an interactive tree. Check it
out: Bracket Viz.
How it works
I had a couple of realizations while working on this little project.
The first was recognizing how brilliant the Bracket City puzzle structure is.
Not only does it spin the age-old crossword in a compelling way that feels
fresh, but the actual mechanics for constructing a Bracket City puzzle are super
simple. It's a win in all categories, excellence in design.[1]
The second realization was how easy it is to parse Bracket City puzzles into
trees and render them via Svelte components. I haven't done much work with
Svelte, but the ability to recursively render a component by simply
self-referencing that component is incredibly expressive.
If you're unfamiliar with Svelte, don't worry! There's really not that much
special Svelte stuff going on in my solution. Most of it is plain old
JavaScript.
First things first: a class for nodes in our tree:
class Node {
  constructor(text = '', children = []) {
    this.text = text
    this.children = children
  }
}
Next, the parsing algorithm.
The basic strategy has a function read through the input string one character at
a time. When a "[" is encountered, a new node is created. A couple variables
track our position in the resulting tree:
currentNode points to the most recent node
stack holds a list of nodes in order
With currentNode, we can easily append new child nodes to our position in the
tree. With stack, we can exit the currentNode and navigate upwards in the
tree to the node's parent.
Here's the algorithm in full:
const parsePuzzle = (raw) => {
  // Initial output takes the form of a single node.
  const root = new Node()
  let currentNode = root
  let stack = [root]

  for (let i = 0; i < raw.length; i++) {
    const char = raw[i]

    if (char === '[') {
      // Substitutions are marked with ??.
      currentNode.text += '??'

      const node = new Node()
      currentNode.children.push(node)
      stack.push(node)

      // Update our currentNode context so that future nodes
      // are appended to the most recent one.
      currentNode = node
    } else if (char === ']') {
      if (stack.length > 1) {
        // Closing brace encountered, so we can bump the
        // currentNode context up the tree by a single node.
        stack.pop()
        currentNode = stack[stack.length - 1]
      }
    } else {
      currentNode.text += char
    }
  }

  // If we have any elements left over, there's a missing closing
  // brace in the input.
  if (stack.length > 1) {
    return [false, root]
  }

  return [true, root]
}
The return result of the function denotes whether or not it was successful
followed by the resulting tree, a simple form of error handling.
In Svelte, we can tie this algorithm together with an HTML textarea in a
component like so:
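<script>
  // A sketch of the first version of the component: parse the textarea
  // contents and log the result (no tree rendering yet).
  import parsePuzzle from '$lib/parsePuzzle.js'

  let puzzle = $state('')
  let [success, tree] = $derived(parsePuzzle(puzzle))

  $inspect(tree)
</script>

<textarea bind:value="{puzzle}"></textarea>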
# raw input:
[where [opposite of clean] dishes pile up] or [exercise in a [game played with a cue ball]]
# tree:
Node(
"?? or ??",
[
Node(
"where ?? dishes pile up",
[
Node("opposite of clean", [])
]
),
Node(
"exercise in a ??",
[
Node("game played with a cue ball", [])
]
)
]
)
As the textarea is updated, $inspect logs the resulting tree. We haven't yet
rendered the tree in the actual UI. Let's change that.
First, update the original component to include a new component named Tree:
<script>
  import parsePuzzle from '$lib/parsePuzzle.js'
  import Tree from '$lib/components/Tree.svelte'

  let puzzle = $state('')
  let [success, tree] = $derived(parsePuzzle(puzzle))
</script>

<textarea bind:value="{puzzle}"></textarea>

{#if success}
  <Tree nodes="{[tree]}" />
{:else}
  <p>Error: brackets are unbalanced</p>
{/if}
Creating a new component to handle rendering the puzzle tree is not just to tidy
up the code, it's to enable a bit of fancy self-referential Svelte behavior.
Intro CS courses have taught us that tree structures map nicely to recursive
algorithms and it's no different when we think about UI components in Svelte.
Svelte allows components to import themselves as a form of recursive rendering.
How about that? A Svelte component can render itself by simply importing itself
as a regular old Svelte file. In the template content of the component, we
simply map over our list of nodes and render their text content. If a given node
has children, we use a Self reference to repeat the same process from the
viewpoint of the children.
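Here's a sketch of what that Tree component looks like (the class names are
Tailwind utilities, and the markup in Bracket Viz proper may differ slightly):

<script>
  // Importing the component from its own file enables recursion.
  import Self from './Tree.svelte'

  let { nodes } = $props()
</script>

{#each nodes as node}
  <div class="ml-4">
    <p>{node.text}</p>
    {#if node.children.length > 0}
      <!-- Repeat the same process from the viewpoint of the children. -->
      <Self nodes="{node.children}" />
    {/if}
  </div>
{/each}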
ml-4 applies left-margin to each of the child nodes, enabling stair-like
nesting throughout the tree. We never need to increment the margin in subsequent
child nodes because the document box model handles the hard work for us. Each
margin is relative to its container, which itself uses the same margin
indentation.
That about wraps it up! I added a couple extra features to the final version,
namely the ability to show/hide individual nodes in the tree. I'll leave that as
an exercise for the reader.
Well, there is one thing that is maybe questionable about the design of
Bracket City. The layout of the puzzle makes you really want to solve the
outer-most clue before the inner-most, if you know the answer. However the
puzzle forces you to solve the inner-most clues first. This is a
surprisingly controversial design choice! ↩︎
I've been digging into Ruby's stdlib RSS parser for a side project and am very
impressed by the overall experience. Here's how easy it is to get started:
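require "rss"
require "open-uri"

# A minimal sketch: fetch a feed and hand it to the stdlib parser.
URI.open("https://jvns.ca/atom.xml") do |raw|
  feed = RSS::Parser.parse(raw)
  # feed is an RSS::Atom::Feed or RSS::Rss object, depending on the source
end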
That said, doing something interesting with the resulting feed is not quite so
simple.
For one, you can't just support RSS. Atom is a more recent standard used by many
blogs (although I think irrelevant in the world of podcasts). There's about a
50% split in the use of RSS and Atom in the tiny list of feeds that I follow, so
a feed reader must handle both formats.
Adding Atom support introduces an extra branch to our snippet:
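URI.open("https://jvns.ca/atom.xml") do |raw|
  feed = RSS::Parser.parse(raw)

  # The two formats parse into different classes with different
  # accessors, so each gets its own branch.
  case feed
  when RSS::Rss
    feed.items.each { |item| puts item.title }
  when RSS::Atom::Feed
    feed.entries.each { |entry| puts entry.title.content }
  end
end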
The need to handle both standards independently is kind of frustrating.
That said, it does make sense from a library perspective. The RSS gem is
principally concerned with parsing XML per the RSS and Atom standards, returning
objects that correspond one-to-one. Any conveniences for general feed reading
are left to the application.
Wrapping the RSS gem in another class helps encapsulate differences in
standards:
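One possible shape for such a wrapper, with illustrative names:

require "rss"

class Feed
  Entry = Struct.new(:title, :url, :published_at)

  def initialize(raw)
    @feed = RSS::Parser.parse(raw)
  end

  # Normalize both formats into plain Entry structs.
  def entries
    case @feed
    when RSS::Rss
      @feed.items.map { |i| Entry.new(i.title, i.link, i.pubDate) }
    when RSS::Atom::Feed
      @feed.entries.map { |e| Entry.new(e.title.content, e.link.href, e.updated.content) }
    end
  end
end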
Worse than dealing with competing standards is the fact that not everyone
publishes the content of an article as part of their feed. Many bloggers only
use RSS as a link aggregator that points subscribers to their webpage, omitting
the content entirely:
<rssversion="2.0"><channel><title>Redacted Blog</title><link>https://www.redacted.io</link><description>This is my blog</description><item><title>Article title goes here</title><link>https://www.redacted.io/this-is-my-blog</link><pubDate>Thu, 25 Jul 2024 00:00:00 GMT</pubDate><!-- No content! --></item></channel></rss>
How do RSS readers handle this situation? The solution varies based on the app.
The two I've tested, NetNewsWire and Readwise Reader, manage to include the
entire article content in the app, despite the RSS feed omitting it (assuming no
paywalls). My guess is these services make an HTTP request to the source,
scraping the resulting HTML for the article content and ignoring everything
else.
Firefox users are likely familiar with a feature called
Reader View
that transforms a webpage into its bare-minimum content. All of the layout
elements are removed in favor of highlighting the text of the page. The JS
library that Firefox uses is open source on their GitHub:
mozilla/readability.
On the Ruby side of things there's a handy port called
ruby-readability that we can use
to extract omitted article content directly from the associated website:
require"ruby-readability"URI.open("https://jvns.ca/atom.xml")do|raw|
feed =RSS::Parser.parse(raw)
url =case feed
whenRSS::Rss
feed.items.first.link
whenRSS::Atom::Feed
feed.entries.first.link.href
end# Raw HTML content
source =URI.parse(url).read
# Just the article HTML content
article_content = Readability::Document.new(source).content
end
So far the results are good, but I haven't tested it on many blogs.
My first experience with a production web application was a React ecommerce site
built with Create React App. I came into the team with zero React experience,
hot off of some Angular 2 work and eager to dive into a less-opinionated
framework. The year was 2018 and the team (on the frontend, just two of us) was
handed the keys to a brand new project that we could scaffold using whatever
tools we thought best fit the job.
We knew we wanted to build something with React, but debated two alternative
starting templates:
Create React App (then, newly released) with Flow
One of the many community-maintained templates with TypeScript
You might be surprised that Create React App didn't originally come bundled with
TypeScript[1], but the ecosystem was at a very different place back in 2018.
Instead, the default type-checker for React applications was Flow, Facebook's
own type-checking framework.
After a couple prototypes, we chose Flow. It felt like a safer bet, since it was
built by the same company as the JavaScript framework that powered our app. Flow
also handled some React-isms more gracefully than TypeScript, particularly
higher-order components where integrations with third-party libraries (e.g.
React Router, Redux) led to very complicated scenarios with generics.
Of all of our stack choices at the start of this project in 2018, choosing Flow
is the one that aged the worst. Today, TypeScript is so ubiquitous that removing
it from your open source project
incites community outrage[2].
Why is TypeScript widely accepted as the de facto way to write JavaScript apps,
whereas Flow never took off?
I chalk it up to a few different reasons:
TypeScript being a superset of JavaScript allowed early adopters to take
advantage of JavaScript class features (and advanced proposals, like
decorators). In a pre-hooks era, both Angular and React required class syntax
for components and the community seemed to widely support using TypeScript as
a language superset as opposed to just a type-checker.
Full adoption by Angular 2 led to lots of community-driven support for
TypeScript types accompanying major libraries via DefinitelyTyped. Meanwhile
nobody really used Flow outside of React.
Flow alienated users by shipping sweeping breaking changes on a
regular cadence. Maintaining a Flow application felt like being subject to
Facebook's whims. Whatever large refactor project was going on at Facebook at
the time felt like it directly impacted your app.
VSCode has become the standard text editor for new developers and it ships
with built-in support for TypeScript.
TypeScript as a language superset
Philosophically, in 2018 the goals of Flow and TypeScript were quite different.
TypeScript wasn't afraid to impose a runtime cost on your application to achieve
certain features, like enums and decorators. These features required that your
build pipeline either used the TypeScript compiler (which was, and is,
incredibly slow) or cobbled together a heaping handful of Babel plugins.
On the other hand, Flow promised to be just JavaScript with types, never
making its way into your actual production JavaScript bundle. Since Flow wasn't
a superset of JavaScript, it was simple to set up with existing build pipelines.
Just strip the types from the code and you're good to go.
Back when JavaScript frameworks were class-based (riding on the hype from
ES2015), I think developers were more receptive towards bundling in additional
language features as part of the normal build pipeline. It was not uncommon to
have a handful of polyfills and experimental language features in every large
JavaScript project. TypeScript embraced this methodology, simplifying the
bundling process by offering support in the TypeScript compiler proper.
Nowadays the stance between the two tools has reversed. The adoption of
alternative bundlers that cannot use the TypeScript compiler (esbuild, SWC, and
so on) has meant that JavaScript developers are much less likely to make use of
TypeScript-specific features. People generally seem less receptive towards
TypeScript-specific features (e.g. enums) if they're easily replaced by a
zero-cost alternative (union types). Meanwhile, recent Flow releases added
support for enums and
React-specific component syntax[3].
What a reversal!
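To make the enum example concrete: a string union gives roughly the same
ergonomics as an enum without emitting any JavaScript at runtime.

// An enum compiles down to a real JavaScript object...
enum DirectionEnum {
  Up = "up",
  Down = "down",
}

// ...while a union type is erased entirely at build time.
type Direction = "up" | "down";

const move = (direction: Direction): void => {
  console.log(`moving ${direction}`);
};

move("up");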
Community library support
As TypeScript gathered mindshare among JavaScript developers,
DefinitelyTyped crushed
FlowTyped in terms of open source
contribution. By the tail end of 2021, our small team had to maintain quite a
few of our own forks of FlowTyped files for many common React libraries
(including React Router and Redux)[4]. Flow definitely felt like an
afterthought for open source library developers.
As TypeScript standardized with npm under the @types namespace, FlowTyped
still required a separate CLI. It's not easy to compete when the alternative
makes installing types as easy as npm install @types/my-package.
Breaking things
I remember distinctly that upgrading Flow to new releases was such a drag. Not
only that, but it was a regular occurrence. New Flow releases brought
wide-sweeping changes, often with new syntax and many deprecations. This problem
was so well-known in the community that Flow actually released a blog post on
the subject in 2019:
Upgrading Flow Codebases.
For the most part, I don't mind if improvements to Flow mean new violations in
my existing codebase pointing to legitimate issues. What I do mind is that many
of these problematic Flow releases felt more like Flow rearchitecting itself
around fundamental issues that propagated down to users as new syntax
requirements. It did not often feel like the cost to upgrade matched the benefit
to my codebase.
A couple examples that I still remember nearly 6 years later:
In the early days, the Flow language server was on par with TypeScript's. Both
tools were newly emerging and often ran into issues that required restarting the
language server to re-index your codebase.
VSCode was not as ubiquitous in those days as it is today, though it was
definitely an emerging star. Facebook was actually working on its own IDE at the
time, built on top of Atom. Nuclide promised deep
integration with Flow and React, and gathered a ton of excitement from our team.
Too bad it was
retired in December of 2018.
As time went on and adoption of VSCode skyrocketed, Flow support lagged behind.
The TypeScript language server made huge improvements in consistency and
stability and was pre-installed in every VSCode installation. Meanwhile Flow
crashed with any dependency change, and
installing the Flow extension
involves digging into your built-in VSCode settings and disabling
JavaScript/TypeScript language support.
Towards TypeScript
As our Flow application grew from 3-month unicorn to 3-year grizzled veteran,
Flow really started to wear developers on our team down. It was a constant
onboarding pain as developers struggled to set up VSCode and cope with some of
the Flow language server idiosyncrasies. Refactoring to TypeScript was an
inevitable conversation repeated with every new hire.
The point of this blog post is not to bag on Flow. I still have a ton of respect
for the project and its original goal of simplicity: "JavaScript with types".
Although that goal lives on via JSDoc, Flow
is an important milestone to remember as type annotations are
formally discussed by TC39.
Before leaving the company, I remember tasking out a large project detailing the
entire process of converting our Flow codebase to TypeScript. I wonder if it was
ever finished.
TypeScript support was added in 2019 with the v2 release. ↩︎
Today I found myself at the bottom of a rabbit hole, exploring how
Zod's refine method interacts with form validations. As with
most things in programming, reality is never as clear-cut as the types make it
out to be.
Today's issue concerns
zod/issues/479, where refine
validations aren't executed until all fields in the associated object are
present. Here's a reframing of the problem:
The setup:
I have a form with fields A and B. Both are required fields, say required_a
and required_b.
I have a validation that depends on the values of both A and B, say
complex_a_b.
The problem:
If one of A or B is not filled out, the form parses with errors: [required_a],
not [required_a, complex_a_b]. In other words, complex_a_b only pops up as
an error when both A and B are filled out.
Here's an example schema that demonstrates the problem:
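import { z } from "zod";

// The cross-field check below is a stand-in; any refine that reads
// both fields behaves the same way.
const schema = z
  .object({
    a: z.string({ required_error: "required_a" }),
    b: z.string({ required_error: "required_b" }),
  })
  .refine((data) => data.a.length + data.b.length <= 10, {
    message: "complex_a_b",
  });

// With A missing, only required_a is reported; complex_a_b never runs.
schema.safeParse({ b: "filled in" });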
This creates an experience where a user fills in A, submits, sees a validation
error pointing at B, fills in B, and sees another validation error pointing at
complex_a_b. The user has to play whack-a-mole with the form inputs to make
sure all of the fields pass validation.
As programmers, we're well-acquainted with error messages that work like this.
And we hate them! Imagine a compiler that suppresses certain errors before
prerequisite ones are fixed.
If you dig deep into the aforementioned issue thread, you'll come across the
following solution (credit to
jedwards1211):
From a type perspective, I understand why Zod doesn't endeavor to fix this
particular issue. How can we assert the types of A or B when running the
complex_a_b validation, if types A or B are implicitly optional? To evaluate
them optionally in complex_a_b would defeat the type, z.string(), that
asserts that the field is required.
How did I fix it for my app? I didn't. I instead turned to the form library,
applying my special validation via the form API instead of the Zod API. I
concede defeat.
Some folks don't want their entire Emacs configuration to live in a single,
thousand-line file. Instead, they break their config into separate modules that
each describe a small slice of functionality. Here's how you can achieve this
with Start Emacs.
Step one: load your custom lisp directory
Emacs searches for Emacs Lisp code in the
Emacs load path.
By default, Emacs only looks in two places:
/path/to/emacs/<version>/lisp/, which contains the standard modules that
ship with Emacs
~/.emacs.d/elpa/, which contains packages installed via package-install
Neither of these places is suitable for your custom lisp code.
I prefer to have my custom lisp code live within ~/.emacs.d/, since I version
control my entire Emacs configuration as a single repository. Start Emacs adds
~/.emacs.d/lisp/ to the load path with this line in init.el (the
Init File):
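;; Add ~/.emacs.d/lisp/ to the load path (the gist of the Start Emacs
;; line; the exact form there may differ slightly).
(add-to-list 'load-path (expand-file-name "lisp" user-emacs-directory))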
Where user-emacs-directory points to ~/.emacs.d/, or wherever it may live on
your machine.
The rest of this guide assumes your load path accepts ~/.emacs.d/lisp/, but
feel free to swap out this path for your preferred location.
Step two: write your module
Next we'll create a module file that adds
evil-mode with a few configurations and
extensions.
Create the file evil-module.el in your ~/.emacs.d/lisp/ directory. Open it
up in Emacs and use M-x auto-insert to fill a bunch of boilerplate Emacs Lisp
content. You can either quickly RET through the prompts or fill them out.
Note: to end the "Keywords" prompt you need to use M-RET instead to signal the
end of a multiple-selection.
Your evil-module.el file should now look something like this:
;;; evil-module.el ---  -*- lexical-binding: t; -*-

;; Copyright (C) 2025

;; Author:
;; Keywords:

;; This program is free software; you can redistribute it and/or modify
;; it under the terms of the GNU General Public License as published by
;; the Free Software Foundation, either version 3 of the License, or
;; (at your option) any later version.

;; This program is distributed in the hope that it will be useful,
;; but WITHOUT ANY WARRANTY; without even the implied warranty of
;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
;; GNU General Public License for more details.

;; You should have received a copy of the GNU General Public License
;; along with this program.  If not, see <https://www.gnu.org/licenses/>.

;;; Commentary:

;;

;;; Code:

(provide 'evil-module)
;;; evil-module.el ends here
Most of these comments aren't relevant for your custom lisp module but they're
good to have in case you ever want to share your code as an Emacs Lisp package.
The single line of Emacs Lisp code, (provide 'evil-module), is the most
important part of the template. It denotes 'evil-module as a
named feature,
allowing us to import it into our Init File.
Since we're building an evil-mode module, I'll add my preferred Evil defaults to
the file:
;;; Commentary:

;; Extensions for evil-mode

;;; Code:

(use-package evil
  :ensure t
  :init
  (setq evil-want-integration t)
  (setq evil-want-keybinding nil)
  :config
  (evil-mode))

(use-package evil-collection
  :ensure t
  :after evil
  :config
  (evil-collection-init))

(use-package evil-escape
  :ensure t
  :after evil
  :config
  (setq evil-escape-key-sequence "jj")
  (setq evil-escape-delay 0.2)
  ;; Prevent "jj" from escaping any mode other than insert-mode.
  (setq evil-escape-inhibit-functions
        (list (lambda () (not (evil-insert-state-p)))))
  (evil-escape-mode))

(provide 'evil-module)
;;; evil-module.el ends here
Step three: require your module
Back in our Init File, we need to signal for Emacs to load our new module
automatically. After the spot where we amended the Emacs load path, go ahead and
require 'evil-module:
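;; Somewhere after the load-path setup in init.el:
(require 'evil-module)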
Stumbled on the emacs-aio library today
and its introduction post. What a
great exploration into how async/await works under the hood! I'm not sure I
totally grok the details, but I'm excited to dive more into Emacs generators and
different concurrent programming techniques.
The article brings to mind Wiegley's
async library, which is probably the
more canonical library for handling async in Emacs. From a brief look at the
README, async looks like it actually spawns independent processes, whereas
emacs-aio is really just a construct for handling non-blocking I/O more
conveniently.
I've written small-medium sized packages -- 400 to 2400 lines of elisp -- that
use generators and emacs-aio (async/await library built on generator.el) for
their async capabilities. I've regretted it each time: generators in their
current form in elisp are obfuscated, opaque and not introspectable -- you
can't debug/edebug generator calls. Backtraces are impossible to read because
of the continuation-passing macro code. Their memory overhead is large
compared to using simple callbacks. I'm not sure about the CPU overhead.
That said, the simplicity of emacs-aio promises is very appealing:
(defun aio-promise ()
  "Create a new promise object."
  (record 'aio-promise nil ()))

(defsubst aio-promise-p (object)
  (and (eq 'aio-promise (type-of object))
       (= 3 (length object))))

(defsubst aio-result (promise)
  (aref promise 1))
Lichess is an awesome website, made even more awesome by
the fact that it is free and open source. Perhaps lesser known is that the
entire Lichess puzzle database is available for free download under the Creative
Commons CC0 license. Every puzzle that you normally find under
lichess.org/training is available for your
perusal.
This is a quick guide for pulling that CSV and seeding a SQLite database so you
can do something cool with it. You will need
zstd.
First, wget the file from
Lichess.org open database and save it
into a temporary directory. Run zstd to uncompress it into a CSV that we can
read via Ruby.
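# Grab the compressed puzzle dump (check the open database page for
# the exact filename).
wget https://database.lichess.org/lichess_db_puzzle.csv.zst -P /tmp

# Decompress it into a plain CSV.
zstd -d /tmp/lichess_db_puzzle.csv.zst -o /tmp/lichess_db_puzzle.csv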
A separate seed script pulls items from the CSV and bulk-inserts them into
SQLite. I have the following in my db/seeds.rb, with a few omitted additions
that check whether or not the puzzles have already been migrated.
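It looks roughly like this (the Puzzle model and the columns I keep are
simplified; column names follow the Lichess CSV headers):

require "csv"

CSV_PATH = "/tmp/lichess_db_puzzle.csv"

# Read the CSV in slices and bulk-insert each batch.
CSV.foreach(CSV_PATH, headers: true).each_slice(5_000) do |rows|
  Puzzle.insert_all(
    rows.map do |row|
      {
        lichess_id: row["PuzzleId"],
        fen: row["FEN"],
        moves: row["Moves"],
        rating: row["Rating"].to_i,
        themes: row["Themes"]
      }
    end
  )
end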