Sacha Chua: YE20: Emacs Carnival: Newbies/starter kits

This was a rough braindump on what I might want to write or do for the Emacs Carnival theme this month.

View Org source for this post

You can e-mail me at sacha@sachachua.com.

Published 2026-04-22T19:06:56Z

Sacha Chua: May 7: Emacs Chat with Shae Erisson

On May 7, I'll chat with Shae Erisson about Emacs and life.

(America/Toronto UTC-4) = Thu May 7 1030H EDT / 0930H CDT / 0830H MDT / 0730H PDT / 1430H UTC / 1630H CEST / 1730H EEST / 2000H IST / 2230H +08 / 2330H JST

This session will be recorded, and I'll update this blog post with notes. https://sachachua.com/blog/2026/05/may-7-emacs-chat-with-shae-erisson/

Find more Emacs Chats or join the fun: https://sachachua.com/emacs-chat


Published 2026-04-22T18:55:38Z

Sacha Chua: May 21: Emacs Chat with Raymond Zeitler

On May 21, I'll chat with Raymond Zeitler about Emacs and life.

America/Toronto = Thu May 21 1030H EDT / 0930H CDT / 0830H MDT / 0730H PDT / 1430H UTC / 1630H CEST / 1730H EEST / 2000H IST / 2230H +08 / 2330H JST

This session will be recorded, and I'll update this blog post with notes. https://sachachua.com/blog/2026/05/emacs-chat-with-raymond-zeitler/

Find more Emacs Chats or join the fun: https://sachachua.com/emacs-chat


Published 2026-04-22T18:32:32Z

Sacha Chua: June 18: Emacs Chat with Ross A. Baker

On June 18, I'll chat with Ross Baker about Emacs and life.

America/Toronto = Thu Jun 18 1030H EDT / 0930H CDT / 0830H MDT / 0730H PDT / 1430H UTC / 1630H CEST / 1730H EEST / 2000H IST / 2230H +08 / 2330H JST

This session will be recorded, and I'll update this blog post with notes. https://sachachua.com/blog/2026/04/june-18-emacs-chat-with-ross-a-baker/

Find more Emacs Chats or join the fun: https://sachachua.com/emacs-chat


Published 2026-04-22T18:28:45Z

Sacha Chua: May 4: Emacs Chat with Amin Bandali

On May 4, I'll chat with Amin Bandali about Emacs and life.

(America/Toronto UTC-4) = Mon May 4 1400H EDT / 1300H CDT / 1200H MDT / 1100H PDT / 1800H UTC / 2000H CEST / 2100H EEST / 2330H IST / Tue May 5 0200H +08 / 0300H JST

This session will be recorded, and I'll update this blog post with notes. https://sachachua.com/blog/2026/05/emacs-chat-with-amin-bandali/

Find more Emacs Chats or join the fun: https://sachachua.com/emacs-chat


Published 2026-04-22T18:28:11Z

Dave Pearson: expando.el v1.6

Recently I've had an odd problem with Emacs: occasionally, and somewhat randomly, as I wrote Emacs Lisp code in emacs-lisp-mode, I'd find that the buffer I was working in would disappear. Not fully disappear, but more as if I'd used quit-window. Worse still, once this started happening, it wouldn't go away unless I turned Emacs off and on again.

Very un-Emacs!

Normally this would happen when I'm in full flow on something, so I'd just restart Emacs and crack on with the thing I was writing; because of this I wasn't diagnosing what was actually going on.

Then, today, as I was writing require in some code, and kept seeing the buffer go away when I hit q, it dawned on me.

Recently, when I cleaned up expando.el, I added the ability to close the window with q.

--- a/expando.el
+++ b/expando.el
@@ -58,7 +58,8 @@ Pass LEVEL as 2 (or prefix a call with \\[universal-argument] and
   (let ((form (preceding-sexp)))
     (with-current-buffer-window "*Expando Macro*" nil nil
       (emacs-lisp-mode)
-      (pp (funcall (expando--expander level) form)))))
+      (local-set-key (kbd "q") #'quit-window)
+      (pp (funcall (expando--expander level) form)))))

 (provide 'expando)

So, after opening a window for the purposes of displaying the expanded macro, switch to emacs-lisp-mode, locally set the binding so q will call quit-window, and I'm all good.

Except... not, as it turns out.

To quote from the documentation for local-set-key:

The binding goes in the current buffer’s local map, which in most cases is shared with all other buffers in the same major mode.

D'oh!

Point being, any time I used expando-macro, I was changing the meaning of q in the keyboard map for emacs-lisp-mode. :-/

And so v1.6 of expando.el is now a thing, in which I introduce a derived mode of emacs-lisp-mode and set q in its keyboard map. In fact, I keep the keyboard map nice and simple.

(defvar expando-view-mode-map
  (let ((map (make-sparse-keymap)))
    (define-key map (kbd "q") #'quit-window)
    map)
  "Mode map for `expando-view-mode'.")

(define-derived-mode expando-view-mode emacs-lisp-mode "expando"
  "Major mode for viewing expanded macros.

The key bindings for `expando-view-mode' are:

\\{expando-view-mode-map}")
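
The call site then presumably enables the derived mode instead of switching to emacs-lisp-mode and binding the key locally. A sketch of what that would look like (my reconstruction, not the actual v1.6 diff):

```elisp
;; Reconstruction only: the derived mode carries the q binding in its own
;; keymap, so no per-buffer key fiddling is needed here.
(with-current-buffer-window "*Expando Macro*" nil nil
  (expando-view-mode)  ; replaces (emacs-lisp-mode) + (local-set-key ...)
  (pp (funcall (expando--expander level) form)))
```

Because expando-view-mode derives from emacs-lisp-mode, the buffer keeps its Lisp font-locking while the q binding stays confined to the mode's own map.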

From now on I should be able to code in full flow state without the worry that my window will disappear at any given moment...

Published 2026-04-22T18:21:40Z

Emacs APAC: Announcing Emacs Asia-Pacific (APAC) virtual meetup, Saturday, April 25, 2026

This month’s Emacs Asia-Pacific (APAC) virtual meetup is scheduled for Saturday, April 25, 2026 with BigBlueButton and #emacs on Libera Chat IRC. The timing will be 1400 to 1500 IST.

The meetup might get extended by 30 minutes if there is a talk; this page will be updated accordingly.

If you would like to give a demo or talk (maximum 20 minutes) on GNU Emacs or any variant, please contact bhavin192 on Libera Chat with your talk details.

Published 2026-04-22T15:00:36Z

Irreal: Orgy

Irreal, in both its incarnations, has always used a dynamic Web site: first on Blogger and now on WordPress. I like them both. They’re easy to use and, really, perfect for non-technical people who want to blog. At this point, Irreal will probably stay on WordPress throughout its lifetime.

Still, I occasionally think that it would be nice to change to a static web site. The problem with dynamic Websites is that they're black boxes driven by a database: it's hard to understand how things work, how to customize them, and how to do fundamental things like backing up your site.

Of course, static sites come with their own problems and difficulties. Recently, Bastien Guerry, one of the Org mode heroes, introduced his own static site generator, Orgy. He has a nice post that steps you through setting up an Orgy site from scratch. Orgy seems extremely easy to use. You write your blog posts in Org mode, call Orgy, and everything but moving it to your hosting provider is taken care of. You get an index, RSS, tag support, search and more. Take a look at Guerry’s post for the details.

The thing I really like about it is that there's no database. All your post sources stay safely on your own machine and you can back them up with whatever method(s) you prefer. Even if you have to regenerate your entire site, it's only an Orgy call away. There's no PHP to wade through. The output of Orgy is simply your HTML and supporting files. It's simplicity itself. If you don't need a bunch of fancy plugins, Orgy may be just what you're looking for.

Published 2026-04-22T13:32:56Z

Protesilaos Stavrou: Emacs live stream with Sacha Chua on 2026-04-30 17:30 Europe/Athens

Raw link: https://www.youtube.com/watch?v=z7pcLdwuyxE

Mark your calendar for next Thursday. I will do another live stream with Sacha Chua. We will talk about Emacs and I will check on her progress since our last meeting. I am looking forward to it!

Note that the event will be recorded.

Published 2026-04-22T00:00:00Z

Dave Pearson: blogmore.el v4.2

Another wee update to blogmore.el, with a bump to v4.2.

After adding the webp helper command the other day, something about it has been bothering me. While the command is there as a simple helper if I want to change an individual image to webp -- so it's not intended to be a general-purpose tool -- it felt "wrong" that it did this one specific thing.

So I've changed it up and now, rather than being a command that changes an image's filename so that it has a webp extension, it now cycles through a small range of different image formats. Specifically it goes jpeg to png to gif to webp.

With this change in place I can position point on an image in the Markdown of a post and keep running the command to cycle the extension through the different options. I suppose at some point it might make sense to turn this into something that actually converts the image itself, but this is about going back and editing key posts when I change their image formats.
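
The cycling logic is simple enough to sketch. The names below are hypothetical, not the actual blogmore.el identifiers:

```elisp
;; Hypothetical sketch of the extension-cycling idea; names are mine,
;; not those used in blogmore.el.
(defvar my-image-extension-cycle '("jpeg" "png" "gif" "webp")
  "Image extensions to cycle through, in order.")

(defun my-cycle-image-extension (filename)
  "Return FILENAME with its extension advanced to the next in the cycle.
Unknown extensions (and the last one) wrap around to the first."
  (let* ((ext (file-name-extension filename))
         (rest (cdr (member ext my-image-extension-cycle))))
    (concat (file-name-sans-extension filename) "."
            (or (car rest) (car my-image-extension-cycle)))))

;; (my-cycle-image-extension "photo.jpeg") ; => "photo.png"
;; (my-cycle-image-extension "photo.webp") ; => "photo.jpeg"
```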

Another change is to the code that slugs the title of a post to make the Markdown file name. I ran into the motivating issue yesterday when posting some images on my photoblog. I had a title with an apostrophe in it, which meant that it went from something like Dave's Test (as the title) to dave-s-test (as the slug). While the slug doesn't really matter, this felt sort of messy; I would prefer that it came out as daves-test.

Given that wish, I modified blogmore-slug so that it strips ' and " before doing the conversion of non-alphanumeric characters to -. While doing this, for the sake of completeness, I did a simple attempt at removing accents from some characters too. So now the slugs come out a little tidier still.

(blogmore-slug "That's Café Ëmacs")
;; => "thats-cafe-emacs"

The slug function has been the perfect use for an Emacs Lisp function I've never used before: thread-last. It's not that I've been avoiding it; I'd just never quite felt it was worthwhile until now. Thanks to it, the body of blogmore-slug looks like this:

(thread-last
  title
  downcase
  ucs-normalize-NFKD-string
  (seq-filter (lambda (char) (or (< char #x300) (> char #x36F))))
  concat
  (replace-regexp-in-string (rx (+ (any "'\""))) "")
  (replace-regexp-in-string (rx (+ (not (any "0-9a-z")))) "-")
  (replace-regexp-in-string (rx (or (seq bol "-") (seq "-" eol))) ""))

rather than something like this:

(replace-regexp-in-string
 (rx (or (seq bol "-") (seq "-" eol))) ""
 (replace-regexp-in-string
  (rx (+ (not (any "0-9a-z")))) "-"
  (replace-regexp-in-string
   (rx (+ (any "'\""))) ""
   (concat
    (seq-filter
     (lambda (char)
       (or (< char #x300) (> char #x36F)))
     (ucs-normalize-NFKD-string
      (downcase title)))))))

Given that making the slug is very much a "pipeline" of functions, the former looks far more readable and feels more maintainable than the latter.

Published 2026-04-21T18:27:26Z

Sacha Chua: OBS: A dump button for dropping the last ~10 seconds before it hits the stream

I want to make it easier to livestream without worrying about leaking private information. Tradeoff: slower conversations with the chat, but more peace of mind.

I think I've sorted out a setup involving two instances of OBS, with the source instance sending the stream with a delay to the restreaming instance that will then send it on to YouTube. This allows me to cut the feed from the source instance to the restreaming instance in case something happens.

The first OBS is the one that has my screen capture, webcam, audio, etc. Here's what I needed to do to change it.

  1. Create a new profile or rename the profile to "Source".
  2. Name the collection of streams "Source" as well.
  3. In Settings - Hotkeys, define a keyboard shortcut for Stop streaming (discard delay). I use Super + F12.
  4. In Settings - Stream:
    1. Service: Custom
    2. Destination - Server: srt://127.0.0.1:9000?mode=caller
  5. In Settings - Advanced:
    1. Check Stream Delay - Enable.
    2. Set the duration. Let's try 10 seconds.
    3. Uncheck Preserve cutoff point (increase delay) when reconnecting.

Then I can launch that one with:

obs --profile "Source" --collection "Source" --launch-filter --multi

The second OBS will restream the output of the first OBS to YouTube.

obs --profile "Restream" --collection "Restream" --launch-filter --multi

I used the Profile menu to create a new profile called "Restream" and the Scene Collection menu to create a new collection called "Restream." I set up the scene as follows:

  1. Create a text source with the backup message.
  2. Create a media source.
    1. Uncheck Local File.
    2. Uncheck Restart playback when source becomes active.
    3. Input: srt://127.0.0.1:9000?mode=listener

In the first OBS (the source), click on Start streaming. After some delay, the stream will appear, and I can move or resize it.

I was a little thrown off by the fact that my audio bars didn't initially show up in the mixer in the restreamer, but both recording and streaming seem to include the audio.

To stop the stream, I can switch to OBS, click on Stop streaming, and (important!) choose Stop streaming (discard delay). The OBS window might be buried under other things on my second screen, though, and that's too many clicks and mouse movements. The keyboard shortcut Super + F12 we just set up should be handy, but I might not remember that, so let's add some scripts. The OBS websocket protocol doesn't support discarding the delay buffer yet, but I'm on Linux and X11, so I can use xdotool to simulate a keypress. Here I select the window matching the profile name I set up previously.

WID=$(xdotool search --name "OBS .* - Profile: Source")
xdotool key --window "$WID" super+F12

I can org-capture the timestamp of the panic so that I can double-check the recording.

;;;###autoload
(defun sacha-obs-panic ()
  "Stop streaming and discard the delay buffer.
This uses a hotkey I defined in OBS."
  (interactive)
  (shell-command "~/bin/panic")
  (org-capture-string "Panicked" "l")
  (org-capture-finalize))

I always have Emacs around, and if it's not my main app, I have an autokey shortcut that maps super + 1 to focus on Emacs. Then I can M-x panic and Emacs completion will take care of finding the right function.

Let's add a menu item for even more panic assistance:

(easy-menu-define sacha-stream-menu global-map
  "Menu for streaming-related commands."
  '("Stream"
    ["🛑 PANIC" sacha-obs-panic]
    ["Start streaming" obs-websocket-start-streaming]
    ["Start recording" obs-websocket-start-recording]
    ["Stop streaming" obs-websocket-stop-streaming]
    ["Stop recording" obs-websocket-stop-recording]))

Let's see if I remember to use it!

This is part of my Emacs configuration.

Published 2026-04-21T14:27:01Z

Jean-Christophe Helary: Blogging with Emacs, a new take

So, you’ve tried Hugo, you’ve tried org-publish, but you’re still not satisfied with what you have. Hugo is way too complex and org-publish has a bare-bones "je ne sais quoi" that kind of requires you to code some elisp to get things done.

For people who like Hugo's auto building & serving but who don't want to spend hours fiddling with config files to get a fine-looking site, Bastien Guerry has published orgy.

The code is on codeberg and the tutorial is on Bastien’s site.

The whole thing depends on bbin, which is an installer for babashka things; babashka is a native Clojure interpreter for scripting, implemented using the Small Clojure Interpreter (SCI).

So you have (SCI >) babashka > bbin > orgy.

orgy takes a directory of Org files and transforms it into a nice-looking website with navigation, tags, an RSS feed and plenty of other goodies.

orgy server serves the thing on localhost:1888 and automatically rebuilds the site after each modification.

orgy was announced on the 14th of April on the French emacs list.

Published 2026-04-21T10:04:50Z

Charlie Holland: A VOMPECCC Case Study: Spotify as Pure ICR in Emacs

1. About   emacs completion

[Image: vompeccc-spot-banner.jpeg]

Figure 1: JPEG produced with DALL-E 3

This is the third post in a series on Emacs completion. The first post argued that Incremental Completing Read (ICR) is not merely a UI convenience but a structural property of an interface, and that Emacs is one of the few environments where completion is exposed as a programmable substrate rather than a sealed UI. The second post broke the substrate into eight packages (collectively VOMPECCC), each solving one of the six orthogonal concerns of a complete completion system.

In this post, I show, concretely, what it looks like when you build with VOMPECCC, by walking through the code of spot, a Spotify client I implemented as a pure ICR application in Emacs.

A word I'll use throughout this post to refer to the use of VOMPECCC in spot is shim, and it is worth qualifying that. The whole package is about 1,100 non-blank, non-comment lines of Lisp. Roughly 635 of those are infrastructure any Spotify client would need regardless of its UI choices: OAuth with refresh, HTTP transport with error surfacing, a cached search layer, a currently-playing mode-line, a config surface, player-control commands, blah blah blah. The shim is the rest: 493 lines across exactly three files (spot-consult.el, spot-marginalia.el, spot-embark.el) whose entire job is to feed candidates into Consult (source), annotate them with Marginalia (source), and attach actions to them through Embark (source). When I say spot is a shim, I mean those three files, and I'm emphasizing the fact that there is relatively little code. The rest of spot is plumbing that has nothing to do with the completion substrate.

spot implements no custom UI. It has no tabulated-list buffer, no custom keymap for navigation, no rendering code. Every interaction surface (the search prompt, the candidate display, the annotations, and the action menu) is rented from the completion substrate by the 493-line shim.

This post is about the code. Instead of cataloging spot's features (I'll do that when I publish the package to Melpa), I want to show how the code actually hangs together on top of VOMPECCC, with verbatim snippets mapped onto the interaction they produce. If the previous two posts were the why and the what, this one is the how, with a working application to ground the pattern.

2. The Demonstration   consult marginalia embark

Before any code, here is the concrete task the video is solving: I am trying to find a J Dilla song whose title I can't remember; all I recall is that the word don't is somewhere in the track name. The entire post revolves around this one video, so it is worth watching before reading on. Everything that follows is a line-by-line breakdown of the code that produces what you are about to see. In the upper right hand side of my emacs (in the tab-bar), you'll see the key-bindings and, more importantly, the commands I am invoking to drive spot. (To make this clip easier to digest, you can play, pause, scrub, view in full screen, or view as "Picture in Picture" using the video controls.)

Here is what happens in the clip:

  1. I invoke spot-consult-search and type j dilla. Each keystroke fires an async query against the Spotify Web API, and the result set is streamed into the minibuffer. That is Consult. In my emacs config, Vertico renders the candidate set vertically so the per-row metadata is legible.
  2. I use Spotify's query parameters to widen the result set per type. Spotify's search endpoint caps results per content type, so I append parameter flags (--type=track --limit=50, etc.) to ask for a fatter haul across tracks, albums, and artists. The candidates are streamed back through Consult exactly as before, just more of them.
  3. I type ,, the consult-async-split-style character, to switch from remote search to local ICR. Everything before the comma continues to be the API query; everything after is a local narrowing pattern that matches against the candidate set already in hand. No further Spotify requests are issued, and each incremental keystroke only filters the rows Consult is already holding.
  4. I type dont (no apostrophe) looking for the song. The default matching is literal, so "dont" doesn't match "Don't". Zero candidates. The corpus contains the song; my pattern just doesn't. (You thought I did this by mistake, didn't you 😜? It actually highlights why fuzzy matching is so important.)
  5. I backspace and prefix the query with ~, the Orderless dispatcher for fuzzy matching. ~dont now matches "Don't Cry" (and others) because fuzzy matching tolerates the missing apostrophe. The search set is unchanged; I swapped matching styles without re-querying Spotify. This may sound like a small feature, but consider how much a little fuzz widens the match space of your input strings. This is especially important in an application like Spotify where entity names can be long and difficult to remember.
  6. I append @donuts, the Orderless dispatcher for matching against the Marginalia annotation column rather than the candidate name. That narrows the surviving candidates to tracks whose annotation mentions "donuts" (i.e., tracks on Dilla's Donuts album, my personal favourite), even though the word "donuts" never appears in any track title. The song I was looking for is right there. (note my orderless-component-separator is also ",")
  7. With the track selected, I invoke Embark (embark-act) and press P to play. The P binding dispatches to spot-action--generic-play-uri, which pulls the track's URI off the candidate's multi-data property and sends a PUT to the Spotify player. The song starts playing; no further navigation required.
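
The comma behaviour in steps 3 and 6 presumably comes from configuration along these lines. This is my guess at the relevant settings, not code from spot; `comma` is a built-in entry in Consult's consult-async-split-styles-alist, while the ~ and @ dispatchers are custom Orderless style dispatchers whose definitions aren't shown here:

```elisp
;; Assumed user configuration, not part of spot itself.
(setq consult-async-split-style 'comma)   ; "," splits the API query from
                                          ; the local filter (step 3)
(setq orderless-component-separator ",")  ; "," also separates Orderless
                                          ; components (step 6)
```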

Three VOMPECCC packages are doing the work: Consult (the async streaming + the split-character handoff to local ICR), Marginalia (the metadata column the @ dispatcher just narrowed against), and Embark (the action menu that allows you to play the track, list the album's other tracks, or add it to a playlist). The whole rest of this post is an argument that the code required to make this happen is pleasantly concise, because none of those capabilities (asynchronous search with narrowing, metadata annotation, annotation-aware fuzzy filtering, or contextual actions) needed to be built. They already exist in the VOMPECCC framework, and spot's only job is to feed them data.

3. Anatomy of spot   structure modularity

spot is organized so that each file corresponds to one concern. This is deliberate: the architecture mirrors the modularity of VOMPECCC itself, not because I was trying to be cute (I'm cute enough 👺), but because when your substrate is modular, consuming it modularly is the lowest-friction path.

File                    Responsibility                                          Substrate package  LoC
----------------------  ------------------------------------------------------  -----------------  ----
spot-auth.el            OAuth2 authorization + automatic token refresh timer    (none)               65
spot-generic-query.el   HTTP request plumbing (sync + async, error surfacing)   (none)               88
spot-search.el          Cached search against the Spotify API                   (none)              100
spot-generic-action.el  Player control commands (play/pause/next/previous)      (none)               51
spot-mode-line.el       Currently-playing display                               (none)              115
spot-var.el             Configuration variables (endpoints, credentials, etc.)  (none)              127
spot-util.el            Alist/hash-table conversions, candidate propertize      (the glue)           52
spot-consult.el         Seven async Consult sources + consult--multi entry      Consult             194
spot-marginalia.el      Annotation functions per content type                   Marginalia          159
spot-embark.el          Keymaps and actions per content type                    Embark              140
spot.el                 spot-mode: wires registries + timers in and out         (integration)        37
Total                                                                                              1128


The breakdown is the whole point of the shim framing. The three substrate-facing files (194 + 159 + 140 = 493 lines) are the part that actually integrates with VOMPECCC. None of that is UI code; there is no UI code in spot. Every pixel the user sees comes from Consult, Marginalia, Embark, or whatever the user has slotted in below them.

One caveat on the 194-line figure for spot-consult.el: roughly 105 of those lines are a 7-way parallel triplet (one source definition, one history variable, and one completion function per Spotify content type), varying only in the narrow key and the :category symbol. A small macro (spot-define-consult-source) would collapse the 105 lines into 7 invocations plus a ~25-line definition, for 30-35 lines total. The honest Consult-facing line count, with redundancy factored out, is closer to 115 than 194, and the whole shim closer to 420 than 493.

The reason I didn't write this macro is because it would muddy the concrete depiction of the VOMPECCC APIs here, and honestly, I tend to avoid over-macroizing as it creates new and confusing APIs over well-established and intuitive APIs.
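
For the curious, here is a rough, untested sketch of what such a macro could look like, inferred from the naming patterns visible in spot's source definitions; spot itself deliberately does not include it:

```elisp
;; Untested sketch; names follow the patterns visible elsewhere in spot.
(defmacro spot-define-consult-source (type narrow-key)
  "Define the history variable, completion function, and Consult
source for the Spotify content TYPE, narrowed with NARROW-KEY."
  (let ((source     (intern (format "spot--consult-source-%s" type)))
        (history    (intern (format "spot--history-source-%s" type)))
        (fn         (intern (format "spot--consult-completion-function-consult-%s" type)))
        (candidates (intern (format "spot--candidates-%s" type))))
    `(progn
       (defvar ,history nil)
       (defun ,fn (query)
         ,(format "Return %s candidates for QUERY." type)
         (spot--search-cached-and-locked query spot--mutex spot--cache)
         ,candidates)
       (defvar ,source
         (list :async (consult--dynamic-collection #',fn :min-input 1)
               :name ,(capitalize (symbol-name type))
               :narrow ,narrow-key
               :category ',type
               :history ',history)))))

;; (spot-define-consult-source track ?t)
```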

4. Candidates as Shared Currency   candidates

Before looking at any of the three VOMPECCC layers individually, there is one piece of code that makes the entire integration possible. It is a short function, and if you understand it, you understand how Consult, Marginalia, and Embark cooperate without knowing anything about each other.

(defun spot--propertize-items (tables)
  "Propertize a list of hash TABLES for display in completion.
Each table is expected to have `name' and `type' keys.  Names are
truncated for display per `spot-candidate-max-width'; the full
name remains accessible via `multi-data'."
  (-map
   (lambda (table)
     (propertize
      (spot--truncate-name (ht-get table 'name))
      'category (intern (ht-get table 'type))
      'multi-data table))
   tables))

Every candidate that spot hands to Consult is a string (the Spotify item's name) carrying two text properties:

  • category is one of album, artist, track, playlist, show, episode, or audiobook. Emacs's completion metadata protocol uses this property to route candidates to the right annotator and the right action keymap. Marginalia reads it to pick an annotator; Embark reads it to pick a keymap. The two packages never talk to each other, and yet they agree on every candidate's type, because both are reading the same Emacs-standard property.
  • multi-data is the raw hash table the Spotify API returned for this item: the full JSON response with every field the API exposes. Marginalia's annotator reads from it to format the margin; Embark's actions read from it to execute playback, to navigate to an album's tracks, to add to a playlist. The candidate is the full record; the name is just the visible handle. The name multi-data is spot's own designation, not a Consult or Marginalia convention (the multi- prefix is unrelated to consult--multi); any symbol would have worked. What is conventional is attaching the domain record to the candidate via propertize in the first place.

Marginalia and Embark never talk to each other. They both read the same text property on the same candidate, and that is enough.
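
A standalone illustration of the pattern (no spot required): propertize on the way in, get-text-property on the way out.

```elisp
;; The candidate is an ordinary string; the record rides along as
;; text properties that any consumer can read back.
(let ((cand (propertize "Don't Cry"
                        'category 'track
                        'multi-data '((name . "Don't Cry") (type . "track")))))
  (list (get-text-property 0 'category cand)      ; => track
        (get-text-property 0 'multi-data cand)))  ; => the full record
```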

That is the entire integration surface: one string (display name) and two props (category and metadata). Everything else (the async fetching, the narrowing, the annotation columns, the action menu) is handled by VOMPECCC, keyed on those two properties. This is a key takeaway for those looking to build with VOMPECCC: build your candidates like this and you will have a good time on the mountain.

This is what I meant in the first post when I called completion a substrate rather than a UI. A UI would be "here is a widget, bind data to it." A substrate is "here is a common currency (candidates with standard properties); tools that speak the currency can be mixed freely."

5. Consult: Defining the Search Surface   consult async narrowing

Consult is spot's frontdoor. It gives me three things I would otherwise have had to build from scratch: async candidate streaming, multi-source unification with narrowing keys, history, and probably other things I'm forgetting. Here is one of the seven source definitions spot uses:

(defvar spot--consult-source-track
  `(:async ,(consult--dynamic-collection
             #'spot--consult-completion-function-consult-track
             :min-input 1)
    :name "Track"
    :narrow ?t
    :category track
    :history spot--history-source-track)
  "Consult source for Spotify tracks.")

A Consult source is just a plist. The interesting keys are:

  • :async is the candidate stream. consult--dynamic-collection is the de-facto extension point third-party packages have settled on for async sources, despite the double-dash that conventionally marks it internal. It wraps a function that takes the current minibuffer input and returns a list of candidates. Consult handles the debouncing and the "only recompute when the input changes" logic on its side; my code just has to produce candidates for a given query. :min-input 1 prevents a search on an empty query. This is the two-level async filtering that Consult is designed around: the external tool (Spotify's API, in this case) handles the expensive filtering against its own corpus, and my completion style (Orderless, if I have it) narrows the returned set locally.
  • :narrow ?t binds the narrowing key. In the video, I could have pressed t SPC when running spot-consult-search, and the session would have been scoped to tracks only, and would have avoided querying the other sources. I didn't implement narrowing; Consult did. I just declared which character maps to which source!
  • :category track is the property that will propagate onto every candidate from this source. This is the same category property that spot--propertize-items stamps on individual candidates, and it is the hinge that Marginalia and Embark both key off.
  • :history gives me free persistent search history for this source, isolated from the other sources.

The completion function itself is trivial because all the work happens in spot-search.el:

(defun spot--consult-completion-function-consult-track (query)
  "Return track candidates for QUERY."
  (spot--search-cached-and-locked query spot--mutex spot--cache)
  spot--candidates-track)

Seven of these functions exist, one per content type, all identical except for which global they return. The heavy lifting (the HTTP call, the cache, the propertization) is shared. Each source is effectively a view onto a single search result split by type.

Putting all seven sources together into one interface is also trivial:

(defvar spot--search-sources
  '(spot--consult-source-album spot--consult-source-artist
    spot--consult-source-playlist spot--consult-source-track
    spot--consult-source-show spot--consult-source-episode
    spot--consult-source-audiobook)
  "List of consult sources for Spotify search.")

;;;###autoload
(defun spot-consult-search (&optional initial)
  "Search Spotify with consult multi-source completion.
Optional INITIAL provides initial input."
  (interactive)
  (consult--multi
   spot--search-sources
   :history '(:input spot--consult-search-search-history)
   :initial initial))

This is the command you saw in the video. consult--multi takes the list of sources, unifies their candidates into a single list, and wires the narrowing keys. Seven heterogeneous content types, one prompt, one keystroke to filter to any subset, async throughout, with per-source history.

Compare this to the counterfactual. Without Consult I would need: a separate candidate display, an async debouncer, a narrowing mechanism, per-source history buffers, and some way to visually distinguish content types in a single list. And because Consult uses the standard completing-read contract, every minibuffer feature my Emacs already has (Vertico's display, Orderless's matching, Prescient's sorting) applies to spot with zero integration code.

6. Why the Cache?   async ratelimits

I have been brushing past a detail of spot-consult.el that deserves its own section, because it is the honest cost of building on an async-on-every-keystroke substrate. consult--dynamic-collection wires the completion function to the minibuffer such that it is invoked on (a debounced version of) every keystroke the user types. For spot, each invocation issues an HTTP request to Spotify's Web API, receives a mixed-type result set, splits it across the seven global candidate lists, and returns the slice relevant to the calling source. That is the hot path. And the hot path is a rate-limited network call.

Spotify's Web API is rate-limited 🙃. Exact limits are dynamic and not publicly documented in detail, but the envelope is small enough that a rapid-typing ICR session can hit it quickly. Consider the baseline: typing radiohead fires a completion call for each prefix the user's typing pauses on (Consult's consult-async-input-debounce and consult-async-input-throttle collapse runs of keystrokes into a smaller set of actually-issued calls, but realistically that still leaves several distinct prefixes per word). Now add the common real-world pattern of typing too far, backspacing a few characters, and retyping: the same query string is re-issued within the same search session. Without a cache, each repetition burns a request, but with a cache keyed on the raw query string, repeats are actually free (or at least as cheap as a cache hit):

(defun spot--search-cached (query cache)
  "Search for QUERY, using CACHE to avoid duplicate requests."
  (when (not (ht-get cache query))
    (let ((results (spot--propertize-items
                    (spot--union-search-items
                     (spot--search-items query)))))
      (ht-set cache query results)))
  (let ((results (ht-get cache query)))
    (spot--set-search-candidates results)))

The cache is a hash table from query strings to propertized candidate lists. It lives for the life of the Emacs session, so not only backspace-and-retype within one search but also the next search session that hits the same prefix is instant. The memory cost is negligible (a few hundred candidates per query, small hash tables for each) and the request-budget win is real. And if you find yourself listening to the same music over and over, then you'll have snappier results when you go down familiar paths.

Async-on-every-keystroke against a remote corpus is the feature. A query-string cache is the bill.
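Consult also exposes the client side of this budget as user options. An illustrative configuration (these are real Consult variables, but the values here are examples, not spot's defaults):

;; Consult's async tuning knobs; values are illustrative.
(setq consult-async-input-debounce 0.3  ; idle seconds before a call fires
      consult-async-input-throttle 0.5  ; minimum spacing between calls
      consult-async-min-input 3)        ; characters required before searching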

This is the honest consumer tax of the substrate. The first post sold you on ICR by promising that the interaction scales constantly regardless of how big the underlying corpus gets. That claim depends on async sources that fire on every keystroke against a remote corpus, and that in turn means you as package author inherit rate-limit pressure your users never see. Consult gives you the debouncer, the display, the narrowing keys, and the stale-response discarding on its side of the protocol. The cache is what you owe back on your side when your candidate source is a rate-limited network API rather than a local list, and it is exactly the kind of infrastructure that does not belong in Consult itself (because Consult has no way to know your backend is rate-limited, or which queries are equivalent enough to cache together).

7. Marginalia: Promoting Candidates into Informed Choices   marginalia

If you watch the video carefully, each track in the candidate list is followed by a horizontally aligned column of fields: #<track-number>, artist, a M:SS duration, album name, album type, release date. Each field is rendered to a fixed width in its own face, so numbers, dates, and names land as visually distinct columns rather than getting mashed together with a delimiter. Small glyph prefixes (# for counts, and similar icons for popularity and follower figures) disambiguate otherwise bare numbers. That column is provided by Marginalia, and it comes from one function:

(defun spot--annotate-track (cand)
  "Annotate track CAND with number, artist, duration, album, type, and date.
The track number is prefixed with `#' and duration rendered as M:SS."
  (let ((data (get-text-property 0 'multi-data cand)))
    (marginalia--fields
     ((spot--format-count (ht-get data 'track_number))
      :format "#%s" :truncate 5 :face 'spot-marginalia-number)
     ((spot--annotation-field (spot--first-name (ht-get data 'artists)))
      :truncate 25 :face 'spot-marginalia-artist)
     ((spot--format-duration (ht-get data 'duration_ms))
      :truncate 7 :face 'spot-marginalia-number)
     ((spot--annotation-field (ht-get* data 'album 'name))
      :truncate 30 :face 'spot-marginalia-album)
     ((spot--annotation-field (ht-get* data 'album 'album_type))
      :truncate 8 :face 'spot-marginalia-type)
     ((spot--annotation-field (ht-get* data 'album 'release_date))
      :truncate 10 :face 'spot-marginalia-date))))

The first line is the only plumbing: (get-text-property 0 'multi-data cand) pulls the full Spotify API response off the candidate (exactly the hash table spot--propertize-items stashed earlier), and everything after it is Marginalia's own marginalia--fields macro doing the formatting. marginalia--fields handles the alignment, the per-field truncation, and the face application. The only thing my code does is declare which fields of the Spotify payload go in which columns with which faces. This is another substrate borrow hiding in plain sight: Marginalia registers the annotator and formats its output. I never wrote a single character of alignment, padding, or colourisation logic. The annotator reached into multi-data for its fields, Marginalia's macro did the cosmetic work, and Marginalia never had to know about Spotify's data model.

spot ships seven annotators. Each one is a domain-specific projection of a single Spotify response type onto a display string. Albums surface artist, release date, and track count; artists surface popularity and follower count; shows surface publisher, media type, and episode count. All this context is really important, especially if you are 'browsing'. The annotators are independent of the search code, independent of the actions code, and independent of each other.

Registering them with Marginalia is a few lines of bookkeeping:

(defvar spot--marginalia-annotator-entries
  '((album spot--annotate-album none)
    (artist spot--annotate-artist none)
    (playlist spot--annotate-playlist none)
    (track spot--annotate-track none)
    (show spot--annotate-show none)
    (episode spot--annotate-episode none)
    (audiobook spot--annotate-audiobook none))
  "List of marginalia annotator entries registered by spot.")

(defun spot--setup-marginalia ()
  "Register spot annotators with marginalia."
  (dolist (entry spot--marginalia-annotator-entries)
    (add-to-list 'marginalia-annotators entry)))

The spot--marginalia-annotator-entries list keys on the category symbol (album, artist, and so on), the very same symbols the Consult sources stamp onto their candidates. Marginalia looks up the category of the current candidate in marginalia-annotators, finds the entry, and runs the annotator. No spot code is in that path. I only had to declare the mapping.

This is where one of the most interesting benefits of the second post shows up concretely. That post mentioned that because Marginalia annotations are themselves searchable, Orderless's @ dispatcher lets you match against annotation text. spot did not ship this feature. Orderless and Marginalia did, for free, because I stamped the annotation onto the candidate in the right way.

8. Embark: The Action Layer   embark composition

The third leg of spot's tripod is Embark. In the video, pressing the Embark action key on any candidate surfaces a menu of single-letter actions appropriate to that kind of candidate: P plays it, s shows its raw data, t lists its tracks (on albums and artists), + adds it to a playlist (on tracks). Each of those actions is a one-function definition in spot-embark.el, and their binding to candidates is declarative.

The simplest action is play:

(defun spot-action--generic-play-uri (item)
  "Play the Spotify item represented by ITEM."
  (let* ((table (get-text-property 0 'multi-data item))
         (type (ht-get table 'type))
         (offset (cond
                  ((string= type "track") `(("uri" . ,(ht-get* table 'uri))))
                  ((string= type "playlist") '(("position" . 0)))
                  ((string= type "album") '(("position" . 0)))
                  ((string= type "artist") nil)))
         (context_uri (cond
                       ((string= type "track") (ht-get* table 'album 'uri))
                       ((string= type "playlist") (ht-get* table 'uri))
                       ((string= type "album") (ht-get* table 'uri))
                       ((string= type "artist") (ht-get* table 'uri))))
         ...
         (spot-request-async
          :method "PUT"
          :url spot-player-play-url ...))))

Same pattern as the annotators: (get-text-property 0 'multi-data item) pulls the full hash table off the candidate, and the rest is Spotify domain logic. Embark invokes my action with the candidate that was highlighted; my action handles the HTTP.

The keymap wiring is also just bookkeeping:

(defvar-keymap spot-embark-track-keymap
  :parent embark-general-map
  :doc "Keymap for Spotify track actions.")

;; ... one keymap per content type ...

(defvar spot--embark-keymap-entries
  '((album . spot-embark-album-keymap)
    (artist . spot-embark-artist-keymap)
    (playlist . spot-embark-playlist-keymap)
    (track . spot-embark-track-keymap)
    (show . spot-embark-show-keymap)
    (episode . spot-embark-episode-keymap)
    (audiobook . spot-embark-audiobook-keymap)
    ...))

(dolist (map (list spot-embark-artist-keymap spot-embark-album-keymap
                   spot-embark-playlist-keymap spot-embark-track-keymap
                   ...))
  (define-key map "s" #'spot-action--generic-show-data)
  (define-key map "P" #'spot-action--generic-play-uri))

(define-key spot-embark-track-keymap "+" #'spot-action--add-track-to-playlist)
(define-key spot-embark-album-keymap  "t" #'spot-action--list-album-tracks)
(define-key spot-embark-artist-keymap "t" #'spot-action--list-artist-tracks)
(define-key spot-embark-playlist-keymap "t" #'spot-action--list-playlist-tracks)

Again, the lookup keys off category. Embark looks up the current candidate's category in embark-keymap-alist, finds the matching keymap, and opens it. Every layer of this integration is the same trick: a candidate carries a category property, and the substrate routes based on it. All three VOMPECCC packages work on the same candidates, share the same category convention, and never import each other.
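The stamping half of that trick is tiny. A minimal sketch with an assumed function name (the category itself comes from each Consult source's :category slot; the payload rides on a text property):

;; Sketch (assumed name): attach the raw API payload to the candidate.
;; Marginalia annotators and Embark actions later read it back with
;; (get-text-property 0 'multi-data cand).
(defun spot--make-candidate (display-name data)
  "Return DISPLAY-NAME propertized with DATA as its `multi-data' payload."
  (propertize display-name 'multi-data data))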

8.1. Composition: When an Action Opens Another Search   composition chaining

One action in particular is worth reading slowly, because it closes the loop the thought exercise in the first post opened:

(defun spot-action--list-album-tracks (item)
  "Search for tracks on the album represented by ITEM."
  (let* ((table (get-text-property 0 'multi-data item))
         (album-name (ht-get* table 'name))
         (artist-name (ht-get* (nth 0 (ht-get* table 'artists)) 'name)))
    (spot-consult-search
     (concat
      "album:" album-name
      " "
      "artist:" artist-name " -- --type=track"))))

This action runs when I am in a completion session, run Embark on an album candidate, and press t. It extracts the album name and artist from the multi-data, builds a Spotify query using Spotify's field-filter syntax (album:X artist:Y), and calls spot-consult-search again: the same entry point the user invoked initially.

What just happened? An Embark action on a candidate produced by a Consult source launched a new Consult session, scoped to the selected candidate, in the same substrate, with the same annotators and the same available actions. The chaining pattern from the first post ("ICR to pick a thing, which scopes the candidate set for the next ICR") is literally three lines of spot code, because the substrate composes so cleanly with itself. The whole "chain ICRs to compose workflows" argument from Post 1, made concrete.

The first post described this as the shell's git branch | fzf | xargs git checkout pattern in miniature. In spot, the pipe is embark-act, and the downstream command is another consult--multi. It is the same compositional shape; the surface it runs on is different.

9. The Integration Point: spot-mode   modularity hooks

Both registries (Marginalia's annotator alist and Embark's keymap alist) plus the two background timers (mode-line updates and access-token refresh) get installed and uninstalled in one place:

;;;###autoload
(define-minor-mode spot-mode
  "Global minor mode for the spot Spotify client.
Registers embark keymaps, marginalia annotators, starts the
mode-line update timer, and starts a periodic access-token
refresh timer when enabled.  Cleanly removes all integrations
when disabled."
  :global t
  :group 'spot
  (if spot-mode
      (progn
        (spot--setup-embark)
        (spot--setup-marginalia)
        (spot--start-update-timer)
        (spot--start-refresh-timer))
    (spot--teardown-embark)
    (spot--teardown-marginalia)
    (spot--stop-update-timer)
    (spot--stop-refresh-timer)))

This is the entire integration layer. Toggle the mode, spot's categories appear in Marginalia and Embark and the two timers begin ticking. Toggle it off, they all disappear. No global state mutation escapes the teardown path.

And by the way, a user who never installs Marginalia or Embark still gets a working spot; the setup functions no-op gracefully (all they do is add-to-list against someone else's variable), that user just doesn't get annotations or actions. The "stack what you want, subset what you don't need" property of VOMPECCC propagates through to spot as a consumer: the package is graceful under any subset of VOMPECCC.
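One way to make that graceful no-op explicit, sketched rather than quoted from spot, is to guard the registration on the registry variable existing:

;; Sketch: a guarded variant of the setup function.  If Marginalia is
;; not loaded, `marginalia-annotators' is unbound and we silently skip.
(defun spot--setup-marginalia ()
  "Register spot annotators with marginalia, when marginalia is loaded."
  (when (boundp 'marginalia-annotators)
    (dolist (entry spot--marginalia-annotator-entries)
      (add-to-list 'marginalia-annotators entry))))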

10. The Counterfactual: What spot Would Look Like Without VOMPECCC

To see what spot isn't building, look at the negative space.

A pre-VOMPECCC Spotify client (see smudge for an example that predates the modern completion ecosystem) has to build the UI itself: a tabulated-list-mode buffer with its own keymap, its own rendering code, its own pagination, its own selection logic. That approach works and can work well. But the cost is structural: a bespoke UI is a parallel universe of interaction that does not benefit from any completion infrastructure the user has already invested in. You have to learn its bindings, and frustratingly, these don't carry over to any other Emacs tool.

The architecture was entirely reasonable when there was nothing else to build on. The point here is purely structural: once the substrate exists, reinventing the UI on top of it is a strictly larger codebase that delivers a strictly less interoperable experience. spot is about 1,100 lines of Lisp, and its interface, as we've shown, is just under 500 lines of Lisp. A pre-substrate equivalent is many times that, and much of the delta is code implementing things (display, filtering, selection, action menus) that Consult, Marginalia, and Embark implement once, centrally, for every completion-driven command in the user's Emacs.

This is the gap the first post was pointing at when it distinguished using completion from building on completion. A package that uses completion is a consumer of completing-read. A package that builds on completion assumes the existence of a richer substrate (async sources, categorized candidates, annotator hooks, action keymaps) and contributes into that substrate rather than rebuilding around it.

11. What This Says About the Substrate   substrate platform

Three things follow.

First, the cost of building an ICR-driven app collapses once the substrate exists. spot is about 1,100 lines including OAuth, token refresh, HTTP, caching, the mode-line, and the integration glue. The three VOMPECCC files (spot-consult.el, spot-marginalia.el, spot-embark.el) are together under 500 lines, much of it boilerplate per content type. A feature-competitive pre-VOMPECCC Spotify client would easily have been several thousand lines larger.

Second, composition is the feature, not the packages. The list-album-tracks action is the most important ten lines in the repository, not because of what it does (a Spotify query), but because of what it demonstrates: an Embark action on a Consult candidate launching a new Consult session in the same substrate. Every ICR-driven package in your Emacs configuration that shares this substrate composes with every other one. embark-export on a spot result set could, in principle, produce a native mode for Spotify results, the same way it produces Dired from file candidates or wgrep from ripgrep hits. The composability is a property of the substrate, not of any individual package.

Third, the category property is doing an enormous amount of load-bearing work. Three different packages, each knowing nothing about the others, all agree on the right behavior for every candidate because they are keying off the same standardized property 'category. The "text" in the protocol is (candidate . (category . metadata)), and every tool that speaks the protocol interoperates for free.

12. Generalizing the Pattern Beyond Spotify   generalization pattern

spot is specifically a Spotify client, but nothing about the recipe it follows is Spotify-specific. Strip the domain out and what remains is a six-step shape that applies to an enormous fraction of the services and data sources you interact with daily:

  1. An API or backend that returns typed items: each item has a type discriminator and a bag of metadata.
  2. A candidate-constructor (the spot--propertize-items analogue) that turns those items into completion candidates with a category text property and a multi-data payload.
  3. A Consult source per type, async, with a narrow key, all unified under a consult--multi entry point.
  4. A Marginalia annotator per type, keyed on category, reading the multi-data payload for its domain metadata.
  5. An Embark keymap per type, keyed on category, binding single-letter actions that operate on the multi-data payload.
  6. A minor mode that installs and uninstalls the three registries together. This step is optional, but I recommend it.
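The six steps above compress into a small skeleton. Every name below is hypothetical, for an imaginary foo service:

;; Step 2: candidate constructor (category comes from the source below).
(defun foo--make-candidate (name data)
  (propertize name 'multi-data data))

;; Step 3: one Consult source per type, unified under consult--multi.
(defvar foo--source-widget
  (list :name "Widget" :narrow ?w :category 'widget
        :items #'foo--widget-completion-function))

;; Steps 4 and 5: registries keyed on the same category symbol.
(add-to-list 'marginalia-annotators '(widget foo--annotate-widget none))
(add-to-list 'embark-keymap-alist '(widget . foo-embark-widget-keymap))

;; Step 6: a global minor mode that installs and removes all of the above.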

Any domain that fits that shape can be built the same way. The thought exercise from the first post (which of your daily tools reduces to "pick a thing, act on it" over a typed corpus?) has a lot of concrete answers: issue trackers, cloud consoles, email, chat, package managers, news feeds, knowledge bases, code hosting. Two worked examples are enough to sketch the altitude:

  • Issue trackers. Types are issue / epic / comment / user, metadata is status / assignee / priority / labels, actions are transition / assign / comment / close.
  • Code hosting. consult-gh already does the GitHub version. Types are repo / PR / issue / branch / release / user, metadata is state / author / date / counts, actions are clone / checkout / review / merge / close.

Several domains already ship as working packages: consult-gh, consult-notes, consult-omni, consult-tramp, consult-dir, and many others. None of these packages ships a UI; they all (roughly) follow the same six-step recipe spot follows, and each one composes with every other one automatically.

The more interesting exercise is the shape of domains that don't cleanly fit. The pattern starts to strain when items aren't naturally enumerable, or when the right interaction is a canvas rather than a list (a map, a timeline, a dependency graph). Those cases need something more than ICR. What I find remarkable is how often even those interfaces still have an ICR-shaped core (pick a location on the map, pick a node on the graph, pick a frame on the timeline), which could be delegated to the substrate while the custom-UI parts focus on what genuinely needs rendering.

The concrete-enough test I apply to any new Emacs workflow I'm considering building: can I express it as a Consult source, a Marginalia annotator, and an Embark keymap? If yes, the package will be mostly a client of the VOMPECCC API. If no, the package needs custom UI, and I should be deliberate about which parts genuinely do and which parts could still be delegated. spot is the case where the answer is a clean "yes across the board", but I've found that more often than not, the answer is yes for the first draft.

13. Conclusion

This post took a working application and showed what the argument looks like when you cash it in.

If there is one thing I want a reader to take away from the series, it is the reframe. Completion is not a convenience feature you turn on and forget about. It is the primitive on which a surprising fraction of your Emacs interaction either already runs or could run, if you let it. Packages that treat it that way end up smaller, more interoperable, and more amenable to composition than packages that treat it as one feature among many. spot is one example.

The broader claim, which I will leave you with, is that "packages that do one thing" is the lazy reading of the Unix philosophy. The sharper reading is "packages that contribute into a shared substrate." Unix pipes were never interesting because each command was small; they were interesting because every command produced and consumed plain text. VOMPECCC is interesting for the same reason, with candidates-with-properties instead of plain text. spot was easy to write because the substrate is good. Many things in your Emacs configuration could be rewritten today as "ICR applications on the substrate" and would be smaller, cleaner, and more composable as a result.

When you next find yourself thinking "I wish there were a better way to browse X", ask whether it could just be a Consult source, a Marginalia annotator, and an Embark keymap. Surprisingly often, that is the entire package, and all you have to do is feed it data.

14. TLDR

spot is a Spotify client for Emacs that implements no custom UI. About 493 of its ~1,100 lines are the "shim" that feeds candidates into Consult, Marginalia, and Embark via a single text-property pattern (category plus multi-data); the remaining ~635 are plumbing any Spotify client would need regardless of UI. The six-step recipe (typed items → propertize → Consult source per type → Marginalia annotator per type → Embark keymap per type → minor mode) generalizes to issue trackers, cloud consoles, email, chat, knowledge bases, and more, many of which already ship as working packages (consult-gh, consult-notes, consult-omni). The claim the series has been building toward: when the substrate is good, ICR applications collapse to their domain logic, and "packages that contribute into a shared substrate" is the sharper reading of the Unix philosophy.

Footnotes:

1

As of the version being discussed, the eleven .el files in the repository total about 1,128 non-blank, non-comment lines. Not a large package by any measure.

2

Vertico is the vertical minibuffer UI you see in the video. It is not part of the spot package; it is a piece of my personal Emacs configuration, one of the VOMPECCC packages the user slots in underneath a consumer like spot. A different user could run spot with fido-vertical-mode, Helm, Ivy, or plain default completing-read; the candidates and their annotations would be unchanged, only the rendering would differ.

3

Orderless is the completion style that powers the ~ (fuzzy) and @ (annotation) dispatchers in the video. Like Vertico, it is configured in my personal Emacs setup, not shipped with spot. One detail worth calling out: Orderless's default annotation dispatcher is &, not @. I remap it to @ in my own config, so the @donuts you see in the video is specific to my setup; out of the box you would type &donuts to get the same behavior. The dispatcher characters are fully user-configurable, and users on an entirely different completion style (flex, substring, basic) will see different filtering behavior.

4

The double-dash convention in Elisp marks a symbol as internal to its package. consult--dynamic-collection is formally one of those. In practice it is the extension point third-party async Consult sources have all settled on, and Daniel Mendler has been careful about signalling breaking changes in the Consult changelog when its shape does shift. spot pins consult > 1.0 for this reason.

-1:-- A VOMPECCC Case Study: Spotify as Pure ICR in Emacs (Post Charlie Holland)--L0--C0--2026-04-21T09:02:00.000Z

James Dyer: Highlighting git changes in a buffer with diff-hl

Lately I’ve found myself wanting a better, more fine-grained view of what’s going on in a file under git. For some reason, my default workflow has been to keep jumping in and out of project-vc-dir to check changes. It gets the job done, but honestly it’s a bit of a hassle.

What I really wanted was something right there in the buffer. Not a full-on inline diff (that gets messy fast I would guess), but just a small visual hint, something that lets me “see” what’s changed without breaking my flow.

Turns out, that’s exactly what diff-hl does!

It’s super lightweight and just highlights changes in the fringe. Nothing flashy but just enough to keep you aware of what you’ve modified. Once you start using it, it feels kind of weird not having it.

One thing I really like is how nicely it plays with the built-in VC tools: move to a buffer position that aligns with a highlighted change, hit C-x v =, and it jumps straight to the relevant hunk in the diff. No friction, no extra thinking, it just works.

Here’s the setup I’m using:

(use-package diff-hl
  :ensure t
  :hook (dired-mode . diff-hl-dired-mode)
  :config
  (global-diff-hl-mode 1)
  (diff-hl-flydiff-mode 1)
  (unless (display-graphic-p)
    (diff-hl-margin-mode 1)))

By default, diff-hl-mode only updates when you save the file. That’s okay, but enabling diff-hl-flydiff-mode makes it update as you type, which feels more intuitive.

Oh, and that dired-mode hook? That turns on diff-hl-dired-mode, which gives you a quick visual overview of changed files right inside dired. It’s one of those small touches that ends up being surprisingly useful.

If you’ve got repeat-mode enabled, you can also hop through changes with C-x v ] and C-x v [, which makes reviewing edits really smooth.

I am enjoying diff-hl; it is quietly improving my workflow without getting in my way. Simple, fast, and just really nice to have.

-1:-- Highlighting git changes in a buffer with diff-hl (Post James Dyer)--L0--C0--2026-04-21T07:00:00.000Z

Sacha Chua: 2026-04-20 Emacs news

I enjoyed reading Hot-wiring the Lisp machine (an adventure into modifying Org publishing). I'm also looking forward to debugging my Emacs Lisp better with timestamped debug messages and ert-play-keys. I hope you also find lots of things you like in the links below!

Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

View Org source for this post

You can comment on Mastodon or e-mail me at sacha@sachachua.com.

-1:-- 2026-04-20 Emacs news (Post Sacha Chua)--L0--C0--2026-04-20T13:21:38.000Z

Emacs Redux: Batppuccin and Tokyo Night Themes Land on MELPA

Quick heads-up: my two newest Emacs themes are now on MELPA, so installing them is a plain old package-install away.

  • batppuccin is my take on the popular Catppuccin palette. Four flavors (mocha, macchiato, frappe, latte) across the dark-to-light spectrum, each defined as a proper deftheme that plays nicely with load-theme and theme-switching packages.
  • tokyo-night is a faithful port of folke’s Tokyo Night, with all four upstream variants included (night, storm, moon, day).

Both themes come with broad face coverage out of the box (e.g. magit, vertico, corfu, marginalia, transient, flycheck, doom-modeline, and many, many more), a shared palette file per package, and the usual *-select, *-reload, and *-list-colors helpers.

Installation is now as simple as you’d expect:

(use-package batppuccin-theme
  :ensure t
  :config
  (load-theme 'batppuccin-mocha t))

(use-package tokyo-night-theme
  :ensure t
  :config
  (load-theme 'tokyo-night t))

If you’re curious about the design decisions behind these themes, I’ve covered the rationale in a couple of earlier posts. Batppuccin: My Take on Catppuccin for Emacs explains why I bothered with another Catppuccin port when an official one already exists. Creating Emacs Color Themes, Revisited zooms out to the broader topic of building and maintaining Emacs themes in 2026.

Give them a spin and let me know what you think. That’s all I have for you today. Keep hacking!

-1:-- Batppuccin and Tokyo Night Themes Land on MELPA (Post Emacs Redux)--L0--C0--2026-04-20T07:00:00.000Z

Mike Olson: Fixing typescript-ts-mode in Emacs 30.2

Contents

The Symptom

After a recent Arch update, my Emacs 30.2 + typescript-ts-mode combination started dying the first time I opened a .ts or .tsx file:

Error: treesit-query-error ("Invalid predicate" "match")

The file would still display, but without any syntax highlighting. python-ts-mode exhibited the same failure. js-ts-mode and c-ts-mode worked in the main buffer but had their own breakages around JSDoc ranges and C’s emacs-specific range queries.

The Root Cause

This is Emacs bug#79687, an interaction between how Emacs 30.2 serializes tree-sitter query predicates and what libtree-sitter 0.26 (the version shipped by Arch) accepts.

Tree-sitter queries can embed predicates like (:match "^foo" @name) to filter captures at query-evaluation time. Emacs 30.2 serializes these s-expression predicates to strings that look like #match (no trailing ?), but libtree-sitter 0.26 became strict about predicate naming and rejects unknown names at query-parse time. The fix on Emacs master (commit b0143530) switches serialization to #match?, which libtree-sitter accepts. That fix has not been backported to the emacs-30 branch as of 30.2.

Rewriting the strings yourself doesn’t help either, because Emacs 30.2’s own predicate dispatcher hardcodes bare match/equal/pred and rejects match?/equal?/pred? at evaluation time. So any rewrite that satisfies libtree-sitter breaks Emacs, and vice versa.

The Approach

Since neither side accepts a string-level rewrite, I work at a higher level instead: strip the predicates entirely from queries, and move the predicate logic into capture-name-is-a-function fontifiers.

A tree-sitter font-lock rule like:

((identifier) @font-lock-keyword-face
 (:match "\\`\\(break\\|continue\\)\\'" @font-lock-keyword-face))

gets rewritten to:

((identifier) @my-ts-rw--fn-font-lock-keyword-face-abc12345)

where the auto-generated function my-ts-rw--fn-font-lock-keyword-face-abc12345 applies font-lock-keyword-face to the node only if the node’s text matches the original regex. The resulting query contains no predicates, so libtree-sitter is happy; the fontifier applies the face only when the original predicate would have matched, so the semantics are preserved.

The rewrite happens via :filter-args advice on three Emacs functions:

  • treesit-font-lock-rules is the main call path for font-lock rules and covers nearly all modes.
  • treesit-range-rules is used by js-ts-mode (and others) to embed a JSDoc parser inside comment nodes.
  • treesit-query-compile catches modes like c-ts-mode that compile queries directly with an s-expression containing :match.
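The advice shape itself is ordinary :filter-args plumbing. A minimal sketch (the real rewriter in treesit-predicate-rewrite.el does considerably more than this placeholder):

;; Sketch: intercept font-lock rules before Emacs compiles them.
;; `my-ts-rw--rewrite-rules' stands in for the real predicate stripper.
(define-advice treesit-font-lock-rules
    (:filter-args (args) my-ts-rw-strip-predicates)
  "Strip :match/:equal predicates from ARGS before query compilation."
  (my-ts-rw--rewrite-rules args))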

How to Use It

The workaround lives in a single file in my emacs-shared repo: init/treesit-predicate-rewrite.el.

Drop the file somewhere on your load path and load it early, before any tree-sitter mode runs its font-lock setup:

(load "/path/to/treesit-predicate-rewrite" nil nil nil t)

It self-activates via define-advice, so there’s no setup call to make. The advice is a no-op on queries that don’t contain predicates, so it’s safe to leave on even after the bug is fixed upstream.

Caveats

The rewriter handles three cases:

  1. Predicate targets a face capture. Rewrites into a fontifier as shown above. This applies to the vast majority of uses in typescript-ts-mode, python-ts-mode, and friends.
  2. An outer group wraps an inner scratch capture, a pattern used by ruby-ts-mode where the face lives on the outer group and the predicate tests a scratch capture inside. Flattened and then handled as case 1.
  3. Predicate targets a non-face capture. The predicate is silently stripped, which means the fontifier will over-match. elixir-ts-mode uses this pattern heavily. In practice the visual regression is minor, but if it bothers you, set my-ts-rw-verbose to t to log strips.

:equal predicates are handled for cases 1 and 2. :pred falls back to strip (case 3) since replicating an arbitrary user function inside a fontifier is more trouble than it’s worth.

I’ve verified the fix on typescript-ts-mode, tsx-ts-mode, python-ts-mode, js-ts-mode, c-ts-mode, rust-ts-mode, java-ts-mode, go-ts-mode, and lua-ts-mode. All load and fontify without errors.

Removal Plan

Once I upgrade to an Emacs that carries the bug#79687 fix (Emacs 31, or a backport into a future 30.x), I’ll delete the file and the load line. Until then, it’s one file and one load line, so the maintenance cost is low.

-1:-- Fixing typescript-ts-mode in Emacs 30.2 (Post Mike Olson)--L0--C0--2026-04-20T00:00:00.000Z

Eric MacAdie: 2026-04 Austin Emacs Meetup

This post contains LLM poisoning. There was another meeting a couple of weeks ago of EmacsATX, the Austin Emacs Meetup group. For this month we had no predetermined topic. However, as always, there were mentions of many modes, packages, technologies and websites, some of which I had never heard of before, and ... Read more
-1:-- 2026-04 Austin Emacs Meetup (Post Eric MacAdie)--L0--C0--2026-04-19T19:34:06.000Z

Irreal: A Short Report On Help Focus

Earlier this week I wrote about Bozhidar Batsov’s post on short Emacs configuration hacks. As I mentioned then, my favorite was a simple configuration variable that causes the Help buffer to get focus when you open it.

It’s easy to take the position of “who cares” but, as I said, I almost always want to interact with the Help buffer if only to dismiss it. Often though, I also want to scroll the buffer—yes I know about scroll-other-window and its siblings—or follow one of the links in the buffer.

After I wrote that post, one of the first things I did was enable the option to give the Help buffer focus. I can’t tell you how much I love the change. It turns out I use the help command more than I thought I did, and every time, I wanted the focus to be in that buffer. Not once since I made the change have I wished the focus remained in the original buffer.

It’s pretty easy to imagine a case where it would be more convenient to have the original buffer retain focus, but in those cases one can simply change windows back to it. One thing is for sure: I’ll be doing that a lot less than staying in the Help buffer and dismissing it when I’m done.

You really should try it out. You’ll be pleasantly surprised. As I said, it’s simply a matter of setting help-window-select to t so you can try it out in your current session without involving your init.el.
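For the impatient, that is the whole change:

```emacs-lisp
;; Evaluate in your current session; add to init.el if you like it.
(setq help-window-select t)
```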

-1:-- A Short Report On Help Focus (Post Irreal)--L0--C0--2026-04-18T13:54:26.000Z

Charlie Holland: VOMPECCC: A Modular Completion Framework for Emacs

1. About   emacs completion modularity

vompeccc-banner.jpeg

Figure 1: JPEG produced with DALL-E 3

Completion is not a feature or a UI; it is a system composed of at least half a dozen orthogonal concerns that most users never think about separately. The previous post in this series argued that Emacs uniquely exposes completion as a programmable substrate rather than a sealed UI, and that this substrate is what makes Incremental Completing Read (ICR) viable as a primary interaction pattern in Emacs. This post is about the packages that build on that substrate in practice.

VOMPECCC is a loose acronym for eight of them that, together, form a complete, modular, Unix-philosophy-aligned completion framework for Emacs: V​ertico, O​rderless, M​arginalia, P​rescient, E​mbark, C​onsult, C​orfu, and C​ape. Each package does one thing, and the key attribute of all eight is that they compose through Emacs's standard completion APIs, meaning any subset works without the others.

I'm writing this post because these packages have recently taken the Emacs community by storm, but I rarely see discussion of how they relate or how they compose to provide a feature-complete ICR system in Emacs. These packages implement concretely what the antecedent post argues in the abstract: completion is a substrate, or set of primitives, on top of which users can build rich interfaces for effortlessly interacting with their machines to do almost anything.

2. The Hidden Complexity of Completion   complexity design

Even if you've only used Emacs once, you've likely seen its completion features in action. When you press M-x and start typing, a list appears, you pick something, and it runs. But beneath that interaction lies a system of surprising depth. Consider what a fully featured completion experience actually requires:

Candidate display. Where do completion candidates appear? In the minibuffer, vertically? Horizontally? In a separate buffer? In a popup at point? The display layer determines how you scan and navigate candidates, and of course the optimal display is context dependent. Switching buffers might want a vertical list; completing a symbol in code might want a popup near the cursor.

Filtering. You can also think of this as 'matching': how does your input match against candidates? Literal prefix matching is the simplest: find-f matches find-file. But what if we want some flexibility (or 'fuzzy matching'), where, for example, ff matches find-file? What about splitting your input into multiple components and matching all of them in any order? What about mixing strategies, for example, where one component matches as a regexp and another matches as an initialism? Candidate lists can be huge, so we need this set of features as a sort of query language for filtering the candidate list to find what we're looking for.

Sorting. Once you have your filtered candidates, in what order do you see them? Alphabetically? By string length? By how recently you selected them? By frequency of use? A good sorting strategy means the candidate you want is almost always within the first few results. A bad one means scrolling every time.

Annotation. A bare list of candidate names is often insufficient or unhelpful. Often, candidates are of a certain 'type' or 'category' and have rich metadata associated with them. In the M-x example, when selecting a command, you likely want to see its keybinding and docstring. When selecting a file, you likely want to see its size and modification date. When selecting a buffer, you want to see its major mode and file path. Annotations transform a list of strings into a list of informed choices.

Actions. Selecting a candidate (and running some default action) is the most common interaction, but not the only one. In the find-file example, what if you want to delete the file instead of opening it? In the M-x example, what if you want to describe the function instead of running it? A completion system without contextual actions forces you out of the flow: complete, exit, invoke a separate command, and so on.

In-buffer completion. Everything above applies to the minibuffer (the prompt at the bottom of the screen). But completion also happens inside buffers: symbol completion while writing code, dictionary words while writing prose, file paths while editing configuration. In-buffer completion has its own display requirements (a popup near the cursor, not the minibuffer) and its own backend requirements (language servers, dynamic abbreviations, file system paths). A truly complete completion system must handle both contexts well.

Completion is not one problem. It is at least six, and most frameworks pretend otherwise.

These six concerns are orthogonal. The way you display candidates has nothing to do with how you filter them; the way you sort them has nothing to do with what actions you can take; and so on. It's actually a useful thought exercise to go through each of the six concerns and appreciate how each is independent from the others. A single-package system can deliver an excellent out-of-the-box experience across all of these, and many have (see Ivy and Helm below). The trade-off is usually that the boundaries between concerns become harder to see, and it becomes harder to swap one concern's implementation without disturbing the others.

3. The Monolith Era: Helm and Ivy   helm ivy legacy

For the better part of a decade, two incredible frameworks dominated Emacs completion: Helm and Ivy. Both were genuinely transformative, because in my opinion, they proved that Emacs's built-in completion experience was inadequate, and they inspired everything that followed. But both, in retrospect, made the same architectural trade-off: they bundled every concern into a single package with a single API. I have used both packages extensively, as both a package author and a consumer. The benefits were immediate for me, but the costs emerged over time.

3.1. Helm: The Kitchen Sink

Helm traces its lineage to anything.el, created by Tamas Patrovics in 2007. Thierry Volpiatto, a French alpine guide who taught himself programming after discovering Linux in 2006, forked it as Helm in 2011 and contributed nearly 7,000 commits over the following decade. Helm became the most downloaded package on MELPA and the default completion framework in Spacemacs, which drove massive adoption during 2013–2018.

Helm's ambition was impressive but all-encompassing. It provided its own candidate display, filtering, action system, source API (via EIEIO classes), and dozens of built-in commands for things like file finding, buffer switching, and grep. The actions system was comprehensive too — it offered 44+ file actions alone.

Helm showed what great completion could feel like. Its architecture showed what happens when a single maintainer carries every concern alone.

The cost was proportional to Thierry's ambition. Users reported multi-second delays on basic operations after extended use, 100–500ms lag on window popups, and CPU-intensive fuzzy matching that required disabling for large projects. Samuel Barreto's widely cited "From Helm to Ivy" essay called Helm "a big behemoth size package" and reported using only a third of its capabilities.

Most critically, Helm replaced Emacs's completing-read entirely with its own proprietary helm-source API. Every Helm extension was written against this API. None of them could be reused with any other completion system. That was the Helm killer for me: if Helm's development stalled — and it did, twice, in 2018 and 2020 — every downstream package would be stranded.

3.2. Ivy: The Lighter Monolith

Ivy emerged in 2015 as Oleh Krehel's direct reaction to Helm's complexity. Where Helm tried to do everything, Oleh aimed for something more minimalist, or at least better factored. The package split its concerns into three logical components: Ivy (the completion UI), Swiper (an isearch replacement), and Counsel (enhanced commands).

In practice, the split was cosmetic. All three lived in a single repository. Counsel was coupled to Ivy's internals. And the core architectural choice was the same as Helm's: Ivy defined its own completion API, ivy-read, and Counsel commands called ivy-read directly rather than completing-read. Code written for Ivy worked only with Ivy.

The ivy-read function grew organically to accept roughly 20 arguments with multiple special cases. As the Selectrum developers noted: "When Ivy became a more general-purpose interactive selection package, more and more special cases were added to make various commands work properly." Users reported performance degradation after extended use, and Ivy broke with Emacs 28 and again with Emacs 30, forcing compatibility polyfills. This is stressful not only for Ivy's consumers but also for its maintainers.

When Ivy's original maintainer stepped back, the project entered a period of reduced maintenance. A new maintainer has since taken over and released version 0.15.1, but active feature development has slowed considerably from the 2016–2020 peak.

3.3. The Unix Philosophy Lens

The Unix philosophy, as articulated by Doug McIlroy, is straightforward: "Write programs that do one thing and do it well. Write programs to work together." Viewed through this lens, both Helm and Ivy bundle too many concerns into packages that communicate through proprietary APIs (helm-source, ivy-read) rather than Emacs's native completing-read contract. The result is that extensions and backends written for one framework cannot be reused with another, making an investment in either tool non-transferable.

None of this diminishes what they achieved, by the way. I'm personally a huge Helm and Ivy fan and I've built with them and consumed them directly for years. In my opinion, the legacy of Helm and Ivy is that they showed the community what great completion felt like, and gave a taste of what a fully featured completion system built on the Emacs substrate could be. The question is whether the architecture that delivered those features is the one we want to build on going forward.

The irony is that Emacs already provides the right abstraction.

  • completing-read is a stable, well-specified API that any UI can render.
  • completion-styles is a pluggable system for controlling how input matches candidates.
  • completion-at-point-functions is a standard hook for in-buffer completion backends.

The infrastructure for composable completion has existed for years. It just needed packages that actually used it.

4. The VOMPECCC Framework   vompeccc framework

VOMPECCC is not a framework in the traditional sense. There is no single repository, no shared dependency, and no coordinating package. It is eight independent packages, maintained by three different developers, that compose through Emacs's standard APIs to cover every concern of a complete completion system.

Package Concern Author
Vertico Minibuffer display Daniel Mendler
Orderless Filtering / matching Omar Antolin Camarena & Daniel Mendler
Marginalia Candidate annotations Omar Antolin Camarena & Daniel Mendler
Prescient Sorting / ranking Radon Rosborough
Embark Contextual actions Omar Antolin Camarena
Consult Enhanced commands Daniel Mendler
Corfu In-buffer display Daniel Mendler
Cape In-buffer backends Daniel Mendler

The architecture maps cleanly onto the six concerns identified earlier:

                Minibuffer                     Buffer
                ----------                     ------
Display:        Vertico                        Corfu
Filtering:      Orderless          (shared across both)
Sorting:        Prescient          (shared across both)
Annotations:    Marginalia         (shared across both)
Actions:        Embark             (shared across both)
Backends:       Consult                        Cape

Each package targets a single layer, and they all communicate through standard Emacs APIs: completing-read, completion-styles, completion-at-point-functions, annotation functions, and keymaps. No package knows about the others' internals ‼️, and because of this any of them can be replaced without affecting the rest.

5. Vertico: The Display Layer   vertico display

Vertico (VERTical Interactive COmpletion) provides a vertical candidate list in the minibuffer. It is roughly 600 lines of code, excluding its extensions.

Vertico's defining characteristic is strict adherence to the completing-read contract. It doesn't filter candidates (that's your completion style's job). It doesn't sort them (that's your sorting function's job). It doesn't annotate them (that's your annotation function's job). It just displays them. Any command that calls completing-read, whether built-in or third-party, automatically gets Vertico's UI with zero configuration.

If you think a dedicated package just for display is overkill, like I originally did before migrating to VOMPECCC, keep reading.

Vertico ships with a dozen built-in extensions that modify its display behavior:

Extension Effect
vertico-buffer Display in a regular buffer instead of the minibuffer
vertico-directory Ido-like directory navigation (backspace deletes path components)
vertico-flat Horizontal, flat display
vertico-grid Grid layout
vertico-indexed Select candidates by numeric prefix argument
vertico-mouse Mouse scrolling and selection
vertico-multiform Per-command or per-category display configuration
vertico-quick Avy-style quick selection keys
vertico-repeat Repeat last completion session
vertico-reverse Bottom-to-top display
vertico-suspend Suspend and restore completion sessions
vertico-unobtrusive Show only a single candidate

The vertico-multiform extension is particularly worth configuring: it lets you set per-command display modes, so consult-line can open in a full buffer while M-x stays in the minibuffer.
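A minimal sketch of that configuration; the command list here is just an example, not a recommendation:

```emacs-lisp
;; Example per-command display settings; adjust to taste.
(vertico-multiform-mode 1)
(setq vertico-multiform-commands
      '((consult-line buffer)     ;; show consult-line in a full buffer
        (consult-imenu buffer)))  ;; ditto for consult-imenu
```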

Created: April 2021. Stars: ~1,800. Available on: GNU ELPA.

6. Orderless: The Filtering Layer   orderless filtering

Orderless is a completion style — it plugs into Emacs's completion-styles variable, the standard mechanism for controlling how user input is matched against candidates. Where built-in styles like basic require prefix matching and flex does single-pattern fuzzy matching, Orderless splits your input into space-separated components and matches candidates that contain all components in any order (hence the name 😜).

Each component can independently use a different matching method:

Style Example Matches
orderless-literal buffer switch-to-buffer
orderless-regexp ^con.*mode$ conf-mode
orderless-initialism stb switch-to-buffer
orderless-flex stbf switch-to-buffer
orderless-prefixes s-t-b switch-to-buffer
orderless-literal-prefix swi switch-to-buffer

Style dispatchers let you select a matching method per component using affix characters: = for literal, ~ for flex, , for initialism, ! for negation, & to match annotations. The system is fully extensible.

The typical configuration sets completion-styles to '(orderless basic), with partial-completion for the file category so that ~/d/s expands path components like ~/Documents/src. The fallback to basic is deliberate: some Emacs features (TRAMP hostname completion, dynamic completion tables) require a prefix-matching style.
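In Emacs Lisp, that typical configuration is commonly expressed as:

```emacs-lisp
;; Orderless first, with basic as a deliberate fallback;
;; files keep partial-completion so path components expand.
(setq completion-styles '(orderless basic)
      completion-category-overrides
      '((file (styles partial-completion))))
```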

Let's keep beating the dead horse of this post's theme: because Orderless is a standard completion style, it works with any completion UI that uses Emacs's completing-read API: Vertico, Icomplete, the default *Completions* buffer, and even the minibuffer in Emacs's stock configuration.

Quick timeout: for readers getting to this point thinking "Wow, Vertico plus Orderless is a power stack, let's keep stacking", you certainly can see things that way, but instead I encourage you to consider what it would be like to use each package without the others. That will give you a better understanding of how the constituent stars in the VOMPECCC constellation behave independently. And that's the long-term ROI you'll get from VOMPECCC. The independence is what makes stacking safe and rewarding, but it doesn't make it necessary.

Created: April 2020. Stars: ~979. Available on: GNU ELPA.

7. Marginalia: The Annotation Layer   marginalia annotations

Marginalia adds contextual annotations to minibuffer completion candidates. The name refers to notes written in the margins of books, and here it means metadata displayed alongside each candidate.

Marginalia detects the category of the current completion (files, commands, variables, faces, buffers, bookmarks, packages, etc.) and selects an appropriate annotator function. The detection works through two mechanisms: marginalia-classify-by-command-name (lookup table keyed by calling command) and marginalia-classify-by-prompt (regex matching against the minibuffer prompt text).

Category Annotations shown
Command Keybinding, docstring summary
File Size, modification date, permissions
Variable Current value, docstring
Face Preview of the face styling
Symbol Class indicator (v/f/c), docstring
Buffer Mode, size, file path
Bookmark Type, target location
Package Version, status, description

marginalia-cycle (typically bound to M-A) lets you cycle between annotation levels: detailed, abbreviated, or disabled entirely. This is useful when annotations are consuming screen space during narrow completions.
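A minimal setup, assuming Marginalia is installed (keymap-set needs Emacs 29+; use define-key on older versions):

```emacs-lisp
(marginalia-mode 1)
;; Cycle annotation levels from inside the minibuffer:
(keymap-set minibuffer-local-map "M-A" #'marginalia-cycle)
```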

Marginalia hooks into Emacs's annotation-function and affixation-function properties in completion metadata. Sorry again to the dead horse I've been wailing on, but yes, this means Marginalia works with any completion UI that respects these properties. It is the framework-agnostic successor to ivy-rich, which provided similar annotations but was Ivy-specific. It's cool to see Oleh and Thierry's visions carry on in these packages!

This was a mind blower to me when I discovered it: one subtle but consequential effect of using Marginalia is that the annotations themselves become searchable. Combined with Orderless's & style dispatcher, your input can match against annotation text as well as candidate names: running M-x and typing window &frame narrows to commands whose name contains "window" and whose docstring contains "frame". The search/matching space extends beyond candidate identifiers into candidate metadata, which is an unusually large leverage gain for what feels like a cosmetic layer. You are no longer constrained to remembering exact names (🤯); you can reach for commands, files, or buffers by properties that were previously invisible to your completion input. This helps in cases where you have an ICR UI but don't know exactly what you're looking for. It can also help you 'browse' candidates based on their characteristics as opposed to their names. Honestly my favorite feature of any of the VOMPECCC packages.

Created: December 2020. Stars: ~919. Available on: GNU ELPA.

8. Prescient: The Sorting Layer   prescient sorting

Prescient provides intelligent sorting and filtering of completion candidates based on recency and frequency of use. The portmanteau frecency (frequency + recency) captures the combined metric that drives the ranking.

Orderless and Prescient are often confused with one another: the difference is that while Orderless answers "which candidates match?", Prescient answers "in what order should they appear?"

The sorting is hierarchical:

  1. Recency — most recently selected candidates appear first
  2. Frequency — frequently selected candidates next, with scores that decay over time
  3. Length — remaining candidates sorted by string length (shorter first)

Usage statistics persist across Emacs sessions via prescient-persist-mode, which writes to a save file. This means Prescient learns your habits: if you frequently run magit-status from M-x, it surfaces near the top after a few uses, regardless of where it falls alphabetically.

Prescient ships as a core library plus framework-specific adapters — vertico-prescient and corfu-prescient being the relevant ones for VOMPECCC. A key architectural insight is that both Vertico and Corfu work seamlessly with Prescient.

A common and powerful configuration combines Orderless for filtering with Prescient for sorting. Among the candidates filtered by Orderless, the most recent and frequent ones get promoted to the top.

Prescient also provides its own filtering methods (literal, regexp, initialism, fuzzy, prefix, anchored) with on-the-fly toggling via M-s prefix commands. However, I personally prefer Orderless for filtering and use Prescient purely for its sorting intelligence. I sort of act as if Prescient were cohesive in this way, rather than giving it responsibility for two orthogonal features.
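A sketch of that division of labor, using the defcustoms exposed by the vertico-prescient and corfu-prescient adapters:

```emacs-lisp
;; Let Orderless do the filtering; let Prescient only sort.
(setq vertico-prescient-enable-filtering nil
      corfu-prescient-enable-filtering nil)
(vertico-prescient-mode 1)
(corfu-prescient-mode 1)
(prescient-persist-mode 1)  ;; remember frecency across sessions
```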

Created: August 2017. Stars: ~695. Available on: MELPA.

9. Embark: The Action Layer   embark actions

Embark provides a framework for performing context-aware actions on "targets" — the thing at point or the current completion candidate. Think of it as a keyboard-driven right-click context menu that works everywhere in Emacs: in the minibuffer and in normal buffers.

The core command is embark-act. When invoked, Embark determines the type of the target (file, buffer, URL, symbol, command, etc.) and opens a keymap of single-letter actions appropriate to that type:

Target Example actions
File Open, delete, copy, rename, byte-compile, open as root
Buffer Switch to, kill, bury, open in other window
URL Browse, download, copy
Symbol Describe, find definition, find references
Package Install, delete, describe, browse homepage

There are over 100 preconfigured actions across all target types.
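Embark needs only a couple of bindings to become useful; these are the ones its README suggests (pick keys that suit you):

```emacs-lisp
(keymap-global-set "C-." #'embark-act)    ;; the context "menu"
(keymap-global-set "C-;" #'embark-dwim)   ;; default action, no menu
(keymap-global-set "C-h B" #'embark-bindings)
```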

Beyond embark-act, Embark provides several other capabilities:

  • embark-dwim runs the default action without showing the menu
  • embark-act-all applies the same action to every current candidate (e.g., kill all matching buffers)
  • embark-collect snapshots current candidates into a persistent buffer
  • embark-live creates a live-updating collection that refreshes as you type
  • embark-export exports candidates into the appropriate Emacs major mode: file candidates become a Dired buffer, grep results become a grep-mode buffer (editable with wgrep), buffer candidates become an Ibuffer buffer
  • embark-become switches to a different command mid-stream, transferring your input

Two of these deserve special attention, because they change what a completion session is.

embark-collect freezes the current candidate set into a standalone buffer that persists after the minibuffer exits. This converts an ephemeral interaction (browse, pick, leave) into something durable (collect, hand off, revisit later). The collected buffer remains an Embark target, so the same keymap of actions applies to each entry. It is the right tool when the candidate list itself is the useful artifact: a shortlist of files to process, a set of buffers you want to act on later, a reference you want to keep open on the side.

embark-export goes one step further: instead of a generic candidate buffer, it materializes a buffer in the native major mode appropriate to the candidate type. File candidates become a Dired buffer, with Dired's decades of filesystem operations available. Grep-style candidates become a grep-mode buffer that wgrep can turn into a multi-file editing session; buffer candidates become Ibuffer; package candidates become the package menu; and so on. Each export targets a major mode purpose-built for the candidate type, so you end up inside the tool that was already the best one for the job, arrived at on demand, from a completion prompt, with no navigation overhead. Few interaction patterns in computing convert generic into specialized this cleanly.

Embark is a difference of kind, not quantity, compared to Helm and Ivy's action systems — because it works everywhere, across all types of objects.

Using Embark and Consult together, we can see a canonical example of this pattern: exporting consult-ripgrep results gives you a wgrep-editable grep buffer, so the workflow — search with Consult, export with Embark, edit with wgrep — compounds three independent packages into a multi-file refactor tool without any of them knowing about the others.

Created: May 2020. Stars: ~1,200. Available on: GNU ELPA.

10. Consult: The Command Layer   consult commands

Consult provides 50+ enhanced commands built on completing-read. It is the spiritual successor to Counsel (from the Ivy ecosystem) but designed to work with any completion UI. Where Counsel called ivy-read directly, Consult uses the native contract, which means its commands work with Vertico, Icomplete, fido-mode, or even Emacs's default completion buffer.

Consult's commands span several categories:

Search:

Command Purpose Replaces
consult-line Search lines in current buffer Swiper
consult-line-multi Search across multiple buffers Swiper-all
consult-ripgrep Async ripgrep search counsel-rg
consult-grep Async grep search counsel-grep
consult-git-grep Git-aware grep counsel-git-grep
consult-find Async file finding counsel-find

Navigation:

Command Purpose Replaces
consult-buffer Enhanced buffer switching helm-mini
consult-imenu Flat imenu with grouping helm-imenu
consult-outline Navigate headings with preview Built-in
consult-goto-line Goto line with live preview Built-in
consult-bookmark Enhanced bookmark selection Built-in
consult-recent-file Recent file selection with preview counsel-recentf

Editing and miscellaneous:

Command Purpose
consult-yank-from-kill-ring Browse kill ring interactively
consult-theme Preview themes before applying
consult-man Async man page lookup
consult-flymake Navigate Flymake diagnostics
consult-org-heading Navigate org headings

Three features make Consult particularly powerful:

Live preview: Most commands show a real-time preview as you navigate candidates. consult-line highlights the matching line in the buffer. consult-theme applies the theme before you select it. consult-goto-line scrolls to the line as you type the number.

Narrowing and grouping: consult-buffer combines buffers, recent files, bookmarks, and project items into a single unified list. Narrowing keys filter to a single source: b SPC for buffers, f SPC for files, m SPC for bookmarks. Custom sources can be added via consult-buffer-sources.

Two-level async filtering: Commands like consult-ripgrep split input at #: everything before it goes to the external tool as the search pattern, everything after it filters the results locally with your completion style. error#handler searches for "error" with ripgrep, then narrows to results containing "handler" using Orderless. Async support is an enormously important feature, because it makes the cognitive cost of search roughly constant with respect to the size of the search space.
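A few illustrative bindings plus the narrowing key; the key choices here are examples, not Consult defaults:

```emacs-lisp
(keymap-global-set "C-x b" #'consult-buffer)
(keymap-global-set "M-s r" #'consult-ripgrep)
;; Press < followed by a source key (b, f, m, ...) to narrow:
(setq consult-narrow-key "<")
```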

Created: November 2020. Stars: ~1,600. Available on: GNU ELPA.

11. Corfu: The In-Buffer Display Layer   corfu completion

Corfu (COmpletion in Region FUnction) is simply the in-buffer counterpart to Vertico. Where Vertico handles minibuffer completion display, Corfu handles the popup that appears at point when you complete a symbol while writing code or text. It is roughly 1,220 lines of code.

Corfu's defining architectural choice mirrors Vertico's: it hooks into Emacs's built-in completion-in-region mechanism rather than inventing its own backend system. Any mode that provides a completion-at-point-function (Eglot, Tree-sitter, elisp-mode, etc.) works with Corfu automatically. Any completion-style (basic, partial-completion, orderless) can be used for filtering.

This is the fundamental difference from Company, the incumbent in-buffer completion framework. Company uses its own proprietary company-backends API. Company backends don't work with completion-at-point, and Capfs don't work with Company (without an adapter). Anecdotally, I've had many wrestling matches with Company and always found it incredibly difficult to set up properly. Corfu eliminates this split. Doom Emacs recognized this: Company is now deprecated in Doom in favor of Corfu, with plans to remove it post-v3.

Aspect Company Corfu
Backend system Proprietary Emacs-native Capfs
Popup technology Overlays Child frames
Completion styles Limited Any Emacs style
Codebase size Many files, 3,900+ LOC in main file Single file, ~1,220 LOC
Created 2009 2021
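Because Corfu builds on the native completion-in-region mechanism, a working setup is short. Here is a minimal sketch using Corfu's documented customization options (your preferred thresholds may differ):

```elisp
;; Minimal Corfu setup sketch. All variables shown are Corfu's own
;; customization options; the specific values are just one choice.
(use-package corfu
  :custom
  (corfu-auto t)          ;; pop up automatically while typing
  (corfu-auto-prefix 2)   ;; ...after at least two characters
  (corfu-cycle t)         ;; wrap around at the ends of the candidate list
  :init
  (global-corfu-mode))
```

Any completion-at-point-function supplied by the active major mode (or by Eglot, Cape, etc.) is picked up without further configuration.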

Corfu ships with several built-in extensions, including:

Extension Purpose
corfu-echo Brief candidate documentation in the echo area
corfu-history Sort by selection history/frequency
corfu-indexed Select candidates by numeric prefix argument
corfu-info Access candidate location and documentation
corfu-popupinfo Documentation popup adjacent to the completion menu
corfu-quick Avy-style quick key selection

Created: April 2021. Stars: ~1,400. Available on: GNU ELPA.

12. Cape: The In-Buffer Backend Layer   cape backends

Cape (Completion At Point Extensions) provides a collection of modular completion backends (Capfs) and a powerful set of Capf transformers for composing and adapting them. If Corfu is the frontend (how completions are displayed), Cape is the backend toolkit (what completions are available).

Cape provides 13 completion backends; here are some highlights:

Capf Purpose
cape-dabbrev Dynamic abbreviation from current buffers
cape-file File path completion
cape-elisp-block Elisp completion inside Org/Markdown blocks
cape-keyword Programming language keyword completion
cape-history History completion in Eshell/Comint

The remaining backends cover dictionary words, emoji, abbreviations, line completion, and Unicode input via TeX, SGML, and RFC 1345 mnemonics.

Cape's Capf transformers are higher-order functions that wrap and modify backends:

  • cape-capf-super merges multiple Capfs into a single unified source
  • cape-capf-case-fold adds case-insensitive matching
  • cape-capf-inside-code / cape-capf-inside-string / cape-capf-inside-comment restrict activation to specific syntactic regions
  • cape-capf-prefix-length requires a minimum prefix before activating
  • cape-capf-predicate filters candidates with a custom predicate
  • cape-capf-sort applies custom sorting
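Wiring this up is a matter of adding Capfs to the standard hook. A small sketch, combining plain backends with one cape-capf-super composition (which backends to merge is a matter of taste):

```elisp
;; Sketch: register Cape backends on the standard hook. cape-capf-super
;; merges dabbrev and keyword completion into one unified source, so
;; their candidates appear together in a single popup.
(use-package cape
  :init
  (add-hook 'completion-at-point-functions #'cape-file)
  (add-hook 'completion-at-point-functions
            (cape-capf-super #'cape-dabbrev #'cape-keyword)))
```

Because these are ordinary completion-at-point-functions, they work with Corfu, the default completion UI, or any other frontend that respects the native protocol.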

The cape-company-to-capf adapter converts any Company backend into a standard Capf, without requiring Company to be installed. This bridges the two ecosystems: you can use Company-era backends (like company-yasnippet) with Corfu. I don't personally do this, but you can if you want!

Created: November 2021. Stars: ~760. Available on: GNU ELPA.

13. The Subset Property: Use What You Want   modularity flexibility

The most important property of VOMPECCC is that you don't need to buy into all eight packages. You can start with one, add another when you feel a gap, and swap any component for an alternative without breaking anything else.

If you're into inverting dependencies, VOMPECCC is your bag, man.

This works because every package communicates through the native Emacs APIs rather than depending on each other's internals. There are no hard dependencies between any of the eight packages. Here is a map of what each package can be replaced with — or simply omitted:

Package Alternative Or simply…
Vertico Icomplete-vertical, Mct, Ido, fido-mode Default *Completions* buffer
Orderless Hotfuzz, Fussy, Prescient (filtering mode) Built-in flex or substring
Marginalia (none equivalent) No annotations (still works)
Prescient savehist-mode + vertico-sort-override Alphabetical sorting
Embark (none equivalent) Direct command invocation
Consult Built-in switch-to-buffer, grep, etc. Standard Emacs commands
Corfu Company, completion-preview-mode Default *Completions* buffer
Cape Company backends, hippie-expand Mode-provided Capfs

Some practical subset configurations:

Minimal (2 packages): Vertico + Orderless. You get a vertical candidate list with multi-component matching. No annotations, no actions, no enhanced commands — but a dramatically better M-x and find-file experience than stock Emacs.
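In use-package terms, the minimal pair is roughly the snippet both projects' READMEs suggest (a sketch, not a canonical config):

```elisp
;; Two-package minimal stack: vertical minibuffer UI + flexible matching.
(use-package vertico
  :init (vertico-mode))

(use-package orderless
  :custom
  (completion-styles '(orderless basic))       ;; try orderless first
  (completion-category-overrides
   '((file (styles partial-completion)))))     ;; keep /u/l/b-style paths
```

That is the entire configuration; every completing-read command in Emacs immediately uses it.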

Comfortable (4 packages): Vertico + Orderless + Marginalia + Consult. Now you have annotations on every candidate and enhanced commands with live preview. This is probably the sweet spot for most users.
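Going from the minimal pair to this tier is two more declarations. A sketch; the keybindings are illustrative choices, not mandated by either package:

```elisp
;; Annotations on every candidate.
(use-package marginalia
  :init (marginalia-mode))

;; Enhanced commands with live preview; bindings are one common choice.
(use-package consult
  :bind (("C-x b" . consult-buffer)      ;; buffers + recent files + bookmarks
         ("M-g g" . consult-goto-line)   ;; goto-line with preview
         ("M-s r" . consult-ripgrep)))   ;; async project-wide search
```

Marginalia needs no per-command setup, and Consult commands are drop-in replacements for the built-ins they shadow.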

Full stack (8 packages): All of VOMPECCC. Complete coverage of both minibuffer and in-buffer completion, with intelligent sorting, contextual actions, and modular backends.

For concrete configuration, the most reliable starting point is each package's own repository — every package linked in the opener ships a comprehensive README with example use-package snippets, and most also provide wikis or info manuals covering more specialized use cases (Vertico's per-command vertico-multiform patterns, Cape's Capf transformer recipes, Embark's keymap customization examples, Consult's custom sources, and so on). Reading those directly is faster than copying a consolidated configuration and then reverse-engineering what each line does, and it scales better as the packages themselves evolve.

14. Growth and Adoption Timeline   timeline adoption

The history of Emacs completion frameworks is a progression from monolithic solutions toward composable ones.

Year Event
1996 Kim F. Storm begins Ido
2007 Ido included in Emacs 22; anything.el created (Helm's ancestor)
2011 Volpiatto forks anything.el as Helm
2013–2018 Helm's golden era: most-downloaded MELPA package, default in Spacemacs
2015 Krehel creates Ivy/Swiper/Counsel
2016 "From Helm to Ivy" blog post sparks migration; Ivy peaks ~2016–2020
2017 Rosborough creates Prescient
2018 Helm enters bug-fix-only mode (maintainer burnout)
2019 Rosborough creates Selectrum (first completing-read-native UI)
2020 Apr Antolin Camarena creates Orderless
2020 May Antolin Camarena creates Embark
2020 Sep Helm development officially stopped
2020 Nov Mendler creates Consult
2020 Dec Antolin Camarena & Mendler create Marginalia
2021 Apr Mendler creates Vertico and Corfu
2021 May "Replacing Ivy and Counsel with Vertico and Consult" (System Crafters)
2021 Selectrum deprecated in favor of Vertico; Doom Emacs adds Vertico module
2021 Nov Mendler creates Cape
2022 Doom Emacs switches default completion from Ivy to Vertico
2024 Ivy breaks with Emacs 30; Company deprecated in Doom in favor of Corfu

Helm and Ivy accumulated stars over a longer period; the newer packages are growing faster relative to their age (counts as of early 2026):

Package Stars Created Approx. age
Helm ~3,500 2011 15 years
Ivy/Swiper ~2,400 2015 11 years
Vertico ~1,800 April 2021 5 years
Consult ~1,600 Nov 2020 5 years
Corfu ~1,400 April 2021 5 years
Embark ~1,200 May 2020 6 years
Orderless ~979 April 2020 6 years
Marginalia ~919 Dec 2020 5 years
Cape ~760 Nov 2021 4 years
Prescient ~695 Aug 2017 9 years

The community momentum is clear. Doom Emacs, one of the most popular Emacs distributions, has moved to Vertico + Corfu as its defaults12. Modern configuration guides almost universally recommend the modular stack13. And the upstream Emacs project itself has been integrating ideas from this ecosystem: Emacs 30 added completion-preview-mode, and Emacs 31 is incorporating Mct-inspired features (they love Prot, and for good reason, lol).

15. The Trade-Off: Monolith vs. Composition   tradeoffs analysis

Engineering is about trade-offs. The modular approach has real advantages, but it does have costs, so I want to be honest about them:

15.1. Advantages of VOMPECCC

No vendor lock-in. Every package builds on the same native contracts. If any one of the eight packages is abandoned, you replace it. Your other packages continue to work. Contrast this with Helm, where the maintainer's burnout announcement stranded an entire ecosystem of downstream packages.

Independent maintenance. Three different developers maintain the eight packages. Daniel Mendler maintains five (Vertico, Consult, Corfu, Cape, and co-maintains Marginalia), so the overall bus factor is not dramatically higher than a monolith. But the key difference is structural: if Mendler stepped away, the remaining packages would continue to function independently. Omar Antolin Camarena's Embark and Orderless would keep working. Radon Rosborough's Prescient would keep working. Nobody's contribution is stranded by someone else's absence.

Incremental adoption. You start with one package and add more as you discover needs. There is no cliff of initial configuration. You never need to understand all eight before getting value from any one.

Smaller, auditable codebases. Vertico is ~600 lines. Corfu is ~1,220 lines. These are packages you can actually read end to end. Bugs are easier to find and fix in small, focused codebases.

Automatic ecosystem benefits. Because everything uses the native completion protocol, third-party packages benefit for free. Any command that calls completing-read gets your chosen UI, filtering, sorting, annotations, and actions without any integration code.

Future compatibility. Emacs itself continues to improve its built-in completion system. Packages built on the native protocol benefit from those improvements automatically. Packages built on proprietary APIs do not.

15.2. Disadvantages of VOMPECCC

Higher initial discovery cost. A newcomer searching "Emacs completion" finds eight packages instead of one. Understanding the role of each, and which subset to start with, requires more research than "install Helm" or "install Ivy." The conceptual overhead is non-trivial.

Configuration across packages. Eight packages means eight use-package declarations, eight sets of configuration variables, and eight places where something could be misconfigured. Helm's all-in-one approach means one declaration, one set of variables, one source of truth.

Interaction effects. While the packages are independent, some combinations require awareness of how they interact. Combining Orderless with Prescient requires understanding that Orderless handles filtering while Prescient handles sorting. The embark-consult integration package exists because the two packages benefit from knowing about each other in specific workflows.

Less out-of-the-box polish. Helm ships with dozens of purpose-built commands. With VOMPECCC, you compose those workflows yourself. The result is often more powerful, but you build it rather than unwrap it.

Documentation is distributed. Each package has its own README, its own issue tracker, its own wiki. There is no single "VOMPECCC manual." Cross-cutting workflows (search with Consult, export with Embark, edit with wgrep) are documented across multiple repositories.

15.3. When to Choose What

Choose VOMPECCC if:

  • You value understanding your tools and want to read the source code
  • You want completion that works identically with built-in and third-party commands
  • You want to invest incrementally rather than all at once
  • You care about long-term maintainability and Emacs version compatibility
  • You want to mix and match components as your needs evolve

Consider Helm if:

  • You want maximum out-of-the-box functionality with minimal configuration
  • You prefer a single point of documentation and support
  • You are comfortable depending on a single package and its API
  • You need one of Helm's highly specific, purpose-built features (like helm-top or helm-colors) and don't want to replicate them
  • You think Thierry is a cool dude (he is)

Consider Ivy if:

  • You are already invested in the Ivy ecosystem with custom ivy-read code
  • You prefer Ivy's action selection UX
  • You need Spacemacs's Ivy layer specifically
  • You think Oleh is a cool dude (he is)

For new configurations today, the community consensus points strongly toward the modular stack. Doom Emacs's switch to Vertico and Corfu, the deprecation of Selectrum, and the ongoing maintenance challenges of both Helm and Ivy have made the direction clear. The question is no longer whether to use the modular approach, but which subset to start with.

16. Conclusion   conclusion

I came to this stack the way most people probably do: one package at a time, over the course of a year or so. I started with Vertico and Orderless because my Ivy config had started fighting with Emacs 28 upgrades and I was tired of debugging someone else's ivy-read edge cases. Two packages, ten minutes of configuration, and M-x already felt better. Marginalia came next for me. Once you've seen keybindings and docstrings next to every command, you can't unsee their absence. Consult replaced Counsel, Embark replaced the "type search string, exit completion, run a different command" waltz, and Corfu replaced Company when I realized the same Orderless filtering I'd grown to depend on in the minibuffer wasn't available in my code buffers.

The whole migration happened very incrementally, which was incidental for me, but is the point of this post. I never sat down to "install VOMPECCC." I solved one friction at a time, and each solution composed with the ones I already had. That's the experience the architecture is designed to produce.

Nobody really calls it VOMPECCC in Emacs circles; it is a mnemonic used here for the sake of an article rather than an established term. But the packages it describes have quietly become the default recommendation for modern Emacs completion, adopted by Doom Emacs, recommended by Protesilaos Stavrou14, documented by System Crafters15, and built on by a growing ecosystem of third-party packages.

The shift from Helm to Ivy to the modular stack follows a familiar pattern in software: monoliths are convenient until they aren't. Composable tools with clear interfaces outlast the frameworks that try to be everything16. Emacs figured this out forty years ago, and the modular stack described here is what completion looks like once you treat it as a substrate, the raw material on top of which you build Incremental Completing Read interactions, rather than as a finished product the vendor hands you. Its completion ecosystem just needed a few years to catch up.

17. TLDR   tldr

Emacs completion is not one problem but at least six orthogonal concerns: display, filtering, sorting, annotation, actions, and in-buffer completion. For a decade, Helm and Ivy delivered excellent experiences but bundled everything behind proprietary APIs, creating vendor lock-in and maintenance fragility. VOMPECCC names eight independent packages — Vertico, Orderless, Marginalia, Prescient, Embark, Consult, Corfu, and Cape — that each address a single concern and compose through Emacs's native completing-read contract rather than custom APIs. Because no package depends on another's internals, any subset works on its own and any component can be replaced without breaking the rest. The community has moved decisively toward this modular stack, with Doom Emacs switching its defaults to Vertico and Corfu. There are real trade-offs — higher discovery cost and distributed configuration — but the architecture pays off in durability, auditability, and incremental adoption.

Footnotes:

1

Sacha Chua's interview with Thierry Volpiatto (2018) provides a candid account of Helm's history. Volpiatto describes being a mountain guide with no programming background, discovering Linux in 2006, and gradually becoming Helm's sole maintainer. He also discusses the financial unsustainability of maintaining a package used by hundreds of thousands of users as a volunteer.

2

Helm accumulated over 640,000 downloads on MELPA, making it the most downloaded package on the archive at its peak. MELPA download counts are visible on the MELPA package page. The figure is cumulative since MELPA began tracking downloads in 2013.

3

Volpiatto's 2020 announcement (GitHub Issue #2386) was definitive: "Helm development is now stopped, please don't send bug reports or feature request, you will have no answers." The issue was locked to collaborators. The Hacker News discussion that followed highlights the difficulty of sustaining large open-source projects without institutional support.

4

The ivy-read signature can be inspected in ivy.el on GitHub. The Selectrum README (radian-software/selectrum) provides a detailed comparison of ivy-read with completing-read and explains why the deviation from the standard API created long-term maintainability problems.

5

McIlroy's articulation of the Unix philosophy appears in the Bell System Technical Journal's 1978 special issue on Unix (available at archive.org). The full quote is: "Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new 'features'." See also Eric S. Raymond's The Art of Unix Programming, Chapter 1, which elaborates on the philosophy's implications for software design.

6

The completing-read API is documented in the Emacs Lisp Reference Manual. The key design insight is that completing-read supports programmatic completion tables — functions that can compute candidates lazily based on the current input — which is essential for large or dynamic candidate sets like TRAMP hosts or LSP symbols.

7

Emacs's completion styles system is documented in the GNU Emacs Manual. The variable completion-styles controls which matching strategies are tried, in order, until one produces results. The completion-category-overrides variable allows per-category customization, so file completion can use partial-completion while M-x uses orderless.

8

ivy-rich (Yevgnen/ivy-rich) was a popular Ivy extension that added columns of information to Ivy completion candidates — essentially the same concept as Marginalia. The key limitation was that it was structurally coupled to Ivy: if you switched away from Ivy, you lost your annotations. Marginalia solves the same problem through the standard annotation-function API, making it framework-agnostic.

9

This characterization comes from Karthinks's "Fifteen Ways to Use Embark", one of the most comprehensive third-party guides to the package. The post demonstrates workflows that were impossible or impractical before Embark: acting on multiple candidates simultaneously, exporting completion results into native Emacs modes, and switching commands mid-stream without losing context.

10

Company-mode (company-mode/company-mode) was created by Nikolaj Schumacher in 2009 and has been maintained by Dmitry Gutov since 2013. It remains actively maintained with ~2,300 GitHub stars. The architectural critique here is specific to the backend API: company-backends is a separate protocol from completion-at-point-functions, which means backends written for Company don't work with other completion UIs, and vice versa.

11

The Doom Emacs Corfu module was merged in PR #7002 in March 2024. The Discourse discussion explains the rationale: Corfu aligns with Emacs's native completion infrastructure, while Company's proprietary API creates friction with the rest of the modern completion stack.

12

Doom Emacs's completion modules are documented at docs.doomemacs.org. The Vertico module includes pre-configured integration with Orderless, Marginalia, Consult, and Embark. The older Ivy and Helm modules remain available but are no longer the recommended default.

13

Notable guides recommending the modular stack include: Martin Fowler's "Improving my Emacs experience with completion" (2024), which documents his switch to the Vertico ecosystem; the "Guide to Modern Emacs Completion" by Jonathan Neidel, which walks through the full Vertico/Corfu stack; and Kristoffer Balintona's multi-part "Vertico, Marginalia, All-the-icons-completion, and Orderless" series (2022).

14

Protesilaos Stavrou's "Emacs: modern minibuffer packages (Vertico, Consult, etc.)" is a ~44 minute video demonstrating the full stack. Stavrou is also the author of Mct (Minibuffer and Completions in Tandem), an alternative approach that reuses the built-in *Completions* buffer with automatic updates. His recommendation of Vertico despite having written a competing package speaks to the strength of the ecosystem.

15

System Crafters' "Streamline Your Emacs Completions with Vertico" and the companion video "Replacing Ivy and Counsel with Vertico and Consult" (May 2021) were early catalysts for community adoption. David Wilson (System Crafters) documented his own migration from Ivy and provided configuration examples that became widely copied.

16

The pattern of monoliths giving way to composable architectures is well-documented in software engineering. Fred Brooks described the "second system effect" in The Mythical Man-Month (1975), where the follow-up to a successful lean system tends to be an overdesigned monolith. More recently, the microservices movement explicitly applies the Unix philosophy to distributed systems — with similar trade-offs around discovery cost, operational complexity, and distributed debugging.

-1:-- VOMPECCC: A Modular Completion Framework for Emacs (Post Charlie Holland)--L0--C0--2026-04-17T11:17:00.000Z

Michal Sapka: Updates Q1/2026

Let's try something new - a quarterly update. I found great joy in reading the ones from マリウス, so why not? I don't want it to be a week-note type of list, as prose is from humans and lists are from machines.

I want these updates (name subject to change) to be where I dump the things on my mind that never grew into full posts. So, instead of a 5-sentence post, they will be 5 sentences in a combined post.

Personal

Health

What have I been up to this year? Well, mostly I've been sick. The Kid is old enough to be sick less often, but when he does, he brings the best viruses with him. I wanted to write this update a few weeks ago, but well. I'd rather be healthy than published.

Speaking of The Kid. We are continuing our Montessori education, as we were accepted to such a school. Let's just hope he won't grow to be a Musk or something.

But, returning to health: since I'm an old, sickly person with a high cholesterol level, I needed to return to eating healthier. No more cakes on the go, no more sushi rice. I have, however, rediscovered a love from a few years back: natto. A friend showed it to me and it was superb. I now order it and eat it a few times a week. It tastes as good as it looks!

Phone and reading

My love for the Hibreak is only growing stronger. I find no downsides to being outside the Apple/Google duopoly. Yes, it's an Android, but I'm using only FOSS applications. My random usage of social media on the go went down to zero. No Mastodon, no Google YouTube. It's a purposeful device: I can use it as a communicator, and I can use it as an e-book reader. The latter is going extremely strong! It took a while, but now I pick up a book for a page or two just waiting in line. Reading became just a regular thing I do through the day! It's less taxing than looking at TechCrunch, but it's much more stimulating.

Books are a good idea. Who would have thought?

As for future plans: I was going to pick up Dune and FINALLY read the entire saga, but this has to wait. I got into possession of In Search of Lost Time, which I aim to dig into. I'll mix it up with Dumas, so I'm in my Emily in Paris phase. Just less dumb... I think. I haven't seen a single episode. Ergo, the plan is: Dumas -> something small -> Proust -> something small and back to Dumas.

I also take far fewer photos. Not having a good camera on me all the time is a nice thing. I picked up my old, trusty Fuji X-100S, as you may have noticed from how bad the photos look in recent posts. I need to finally learn to measure light...

Random other things

I rebuilt my pantsu-collection with a few Wrangler Frontiers. They are the best-fitting jeans I've ever used, and now I own 6 pairs. No random hole will be a problem, and one of them is now my house pantalons. Screw sweatpants.

I also returned the PS5. It was an eventful period where I became a disgusting gamer. The games were nice, but now I'm back on the PS4 and I fail to see it as a significant downgrade. We peaked a long time ago. I strongly prefer to use the PS4.

Computer stuff

Thinkcentre

I replaced my Lenovo Thinkpad with a Lenovo Thinkcentre. I don't leave the house, so I don't need a laptop. The mini PC is very small and fits the desk nicely.

But, most of all, it's an all AMD system. This makes rocking FreeBSD a pleasure! No more breaking things due to Nvidia driver incompatibility. Things just work.

Lathe

In between being ill, I rewrote this page. The old version had posts written in plain old HTML. Some post-processing (like images) was necessary, so I put myself into regexp hell. Not that those were big regexps, but they were big enough that I never want to update them.

This means I needed something in between me and the HTML. Markdown was a no-go, as I hate it. It's good for small notes, but anything bigger? Nah. The answer was clear: LISP. Who wouldn't want to write in LISP? And so I wrote a LISP-like processor into the old Python-based generator. It worked, it was fun to write in, but it was also terrible. I had no idea how to parse Lisp, so I made it something with parens. The POC was there, but the implementation needed to change entirely - it was a great example of how not to do LISP.

And so I am writing a small Lisp parser now. I'm not aiming to be fully Common Lisp compatible, but I still try to keep the API as correct as possible. Now, this will not be real LISP: I use arrays under the hood, not CONS. But all defuns, setqs, and so on are already working. This is my first project in Go and I have to say that I love it. It's a modern language and environment, so writing in it is a lot of fun... unlike some other languages, but more on them later on.

The project, which I call Lathe, will be open sourced in the coming weeks. This site will also be fully migrated over the coming months, but that will require some translators. I am able to write them in Lisp now, so it will be fun.

The biggest missing element of the puzzle is macro support, but that's not needed for the first release.

All this comes with a huge asterisk: I have no idea how to write Lisp. I am not a Lisp developer, and I am learning as I go. Here, it's the cherry on top. I like what I'm writing, I like that I'm learning, and I like how I'm writing. Go is now my friend.

The things some people will do just to not have to deal with Markdown nor Hugo.

Masto-mailo-inator

I got my first feature request! I am officially an open-source developer. And by that, I mean: unpaid.

I plan to add import from export next month, as currently I am fully focused on Lathe.

GPG

I also started using GPG again. You can find my key on keys.openpgp.org.

Work

Well, I'm still employed, which is great. It's over a decade in the same company! However, there are two things which changed in the first quarter.

GenAI

I am not hiding this, but let's make it official: I use LLMs at work. Not because they make me more productive, nor because they make me happy. There is one reason: I am expected to. It's the sign of a great technology when most people either reject it altogether or are forced to use it.

My team was moved to a different product, which is written in Java. I already miss Ruby... They say that in the age of AI you don't need to know what you are doing, but I disagree on all fronts. I see it in my own experience. Yes, I am able to generate hundreds of lines of code, but I find it to be terrible.

I always tried to understand what I'm doing, and I was even praised for it. Using Claude makes it extremely difficult. It's a new language, a new framework, and yet we are expected to ship features within the first couple of weeks. Some teams are proud of skipping the standard few-month-long ramp up. I think they are managed by dangerous morons. The code is still essential - it needs to work, it needs to do it in a reasonable fashion, and it needs to be readable. Whatever vibe coders say about prompting the next Google, they are lying to themselves. Opus 4.6 is the best coding model out there (as I've read on multiple occasions), and it still requires an anal level of hand-holding. While mostly everything it creates is more-or-less correct Java code, it's rarely good Java code, nor a properly designed system. It makes random changes, makes incorrect assumptions, and just plain lies.

To give an example: we are integrating this service with another service. I wrote code which worked on localhost, but not on the server. I try, debug, use curls - nothing. Finally, a few sessions later I learn that it never worked. I didn't double-check the local curl, and I trusted Claude when it said that everything was working. It will lie to make me happy, even if that means not doing its job. Lesson learned: never, ever trust a clanker.

And the debugging, oh my god, the debugging. It reads a million files, runs tests, does magical things - and boom, solution. So I ask a basic question (what about...?). Of course, you are right - the moron replies - let's burn yet another 20 USD (LLM is short for LLM Like Money). It can go on like this for a few dozen prompts, back and forth, and still it will sometimes return to an incorrect assumption from half an hour before. It will ignore requests and specs, and do random things. It's far from an intern...

The fact stands: it's a better Java developer than I am. But I am a terrible Java developer. I never wrote a line of Java before! I have no idea how this will play out, as I see that all my colleagues (and, most likely, the entire industry) have no idea what we are doing. Something looks like it's working, and we are expected to ship it. Not that there is any hard requirement, but it's a race. Layoffs are a regular thing now, and it's a dumb idea to be on the naughty list. I'd not use any SaaS in the coming years, as I trust them even less than I used to.

Now, I have a great manager who understands that understanding is essential. I am able to slow down and learn - little by little, and with the obvious expectation of still shipping stuff. But I am lucky to have him, and who knows for how long.

Java

The other thing: I am now a Java developer. Oh, what a terrible life it is. The language is... OK, at best. Nonetheless, it is extremely stagnant. The developer experience is abysmal!

I have a working theory that IntelliJ is the worst thing that happened to Java folks. They have zero incentive to fix things or to add modern things. There is a CLI, but it's a PITA to work with. There is an LSP, but it's barely working. Both are under-invested, as IntelliJ is there, keeping the entire ecosystem in its dark ages.

I try to use Emacs, and with the GenAI it's almost nice. More on this later. But I understand why people use this godforsaken IDE: people use IntelliJ because other people use IntelliJ. It fixes a million things which should not be fixed in an editor, but in the ecosystem. Toying with Go at the same time just shows how primitive Java is.

And there is Spring. If anyone comes to me and whines about how much magic is in Rails, I will point them here. It and Lombok are a much bigger obstacle to learning to read the code than the language itself.

It's better to have experience in Java than not to have. We are living in the age of layoffs, after all. But it's a miserable life.

RTO

And, starting next month, I am expected to be in the office twice a week. I don't have a long commute (15 mins?), and I'll be able to drop The Kid at school on the way. This changes nothing: the idea of the office should be left in the past.

Emacs

My beloved editor deserves a special mention. Since I'm again actively coding (after hours, mostly), I finally set up LSP, Consult and all that jazz. At work, I'm rocking Agent Shell. Ready Player One became my music player, and I moved to mu4e for my email needs.

I also finally managed to get X11 forwarding over SSH working. Therefore, I get my private Emacs (with emails, RSSes, Mastodons) on my work Macbook. There will be a short guide in the coming weeks, but for now it's only working over the local network. We'll see if RTO won't make it more challenging...

-1:-- Updates Q1/2026 (Post Michal Sapka)--L0--C0--2026-04-17T11:11:07.000Z

Dave Pearson: blogmore.el v4.1

Following on from yesterday's experiment with webp I got to thinking that it might be handy to add a wee command to blogmore.el that can quickly swap an image's extension from whatever it is to webp.

So v4.1 has happened. The new command is simple enough, called blogmore-webpify-image-at-point; it just looks to see if there's a Markdown image on the current line and, if there is, replaces the file's extension with webp no matter what it was before.
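From that description, such a command could be sketched as below. This is a hypothetical illustration of the described behavior, not the actual blogmore.el implementation, and the function name is made up to avoid clashing with the real one:

```elisp
;; Hypothetical sketch: find a Markdown image on the current line and
;; swap its file extension for webp, whatever it was before.
(defun my-webpify-image-at-point ()
  "Replace the extension of the Markdown image on the current line with webp."
  (interactive)
  (save-excursion
    (beginning-of-line)
    ;; Match ![alt](path.ext); group 1 is the path, group 2 the extension.
    (when (re-search-forward
           "!\\[[^]]*\\](\\([^)]+\\)\\.\\([A-Za-z0-9]+\\))"
           (line-end-position) t)
      (replace-match "webp" t t nil 2))))
```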

If/when I decide to convert all the png files in the blog to webp I'll obviously use something very batch-oriented, but for now I'm still experimenting, so going back and quickly changing the odd image here and there is a nicely cautious approach.

I have, of course, added the command to the transient menu that is brought up by the blogmore command.

One other small change in v4.1 is that a newly created post is saved right away. This doesn't make a huge difference, but it does mean I start out with a saved post that will be seen by BlogMore when generating the site.

-1:-- blogmore.el v4.1 (Post Dave Pearson)--L0--C0--2026-04-17T09:25:37.000Z

Irreal: LaTeX Preview In Emacs

Over at the Emacs subreddit, _DonK4rma shows an example of his mathematical note-taking in Emacs. It’s a nice example of how flexible Org mode is even for writing text with heavy mathematical content, but probably not too interesting to most Emacs users.

What should be interesting is this comment, which points to Dan Davison’s Xenops, which he describes as a “LaTeX editing environment for mathematical documents in Emacs.” The idea is that with Xenops when you leave a math mode block it is automatically rendered as the final mathematics, which replaces the original input. If you move the cursor onto the output text and type return, the original text is redisplayed.

It’s an excellent system that lets you catch any errors you make in entering mathematics as you’re entering them rather than at LaTeX compile time. So far it only works on .tex files but Davison says he will work on getting it to work with Org too.

He has a six minute video that shows the system in action. It gives a good idea of how it works but Xenops can do a lot more; see the repository’s detailed README at the above link for details.

-1:-- LaTeX Preview In Emacs (Post Irreal)--L0--C0--2026-04-16T15:03:07.000Z

Dave Pearson: boxquote.el v2.4

boxquote.el is another of my oldest Emacs Lisp packages. The original code itself was inspired by something I saw on Usenet, and writing my own version of it seemed like a great learning exercise; as noted in the thanks section in the commentary in the source:

Kai Grossjohann for inspiring the idea of boxquote. I wrote this code to mimic the "inclusion quoting" style in his Usenet posts. I could have hassled him for his code but it was far more fun to write it myself.

While I never used this package to quote text I was replying to in Usenet posts, I did use it a lot on Usenet, and in mailing lists, and similar places, to quote stuff.

The default use is to quote a body of text; often a paragraph, or a region, or perhaps even Emacs' idea of a defun.

,----
| `boxquote.el` provides a set of functions for using a text quoting style
| that partially boxes in the left hand side of an area of text, such a
| marking style might be used to show externally included text or example
| code.
`----

Where the package really turned into something fun and enduring, for me, was when I started to add the commands that grabbed information from elsewhere in Emacs and added a title to explain the content of the quote. For example, using boxquote-describe-function to quote the documentation for a function at someone, while also showing them how to get at that documentation:

,----[ C-h f boxquote-text RET ]
| boxquote-text is an autoloaded interactive native-comp-function in
| ‘boxquote.el’.
|
| (boxquote-text TEXT)
|
| Insert TEXT, boxquoted.
`----

Or perhaps getting help with a particular key combination:

,----[ C-h k C-c b ]
| C-c b runs the command boxquote (found in global-map), which is an
| interactive native-comp-function in ‘boxquote.el’.
|
| It is bound to C-c b.
|
| (boxquote)
|
| Show a transient for boxquote commands.
|
|   This function is for interactive use only.
|
| [back]
`----

Or figuring out where a particular command is and how to get at it:

,----[ C-h w fill-paragraph RET ]
| fill-paragraph is on fill-paragraph (M-q)
`----

While I seldom have use for this package these days (mainly because I don't write on Usenet or in mailing lists any more) I did keep carrying it around (always pulling it down from melpa) and had all the various commands bound to some key combination.

(use-package boxquote
  :ensure t
  :bind
  ("<f12> b i"   . boxquote-insert-file)
  ("<f12> b M-w" . boxquote-kill-ring-save)
  ("<f12> b y"   . boxquote-yank)
  ("<f12> b b"   . boxquote-region)
  ("<f12> b t"   . boxquote-title)
  ("<f12> b h f" . boxquote-describe-function)
  ("<f12> b h v" . boxquote-describe-variable)
  ("<f12> b h k" . boxquote-describe-key)
  ("<f12> b h w" . boxquote-where-is)
  ("<f12> b !"   . boxquote-shell-command))

Recently, with the creation of blogmore.el, I moved the boxquote commands off the b prefix (because I wanted that for blogging) and onto an x prefix. Even then... that's a lot of commands bound to a lot of keys that I almost never use but still can't let go of.

Then I got to thinking: I'd made good use of transient in blogmore.el, why not use it here too? So now boxquote.el has acquired a boxquote command which uses transient.
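The post doesn't show the transient definition itself. Roughly, such a menu might look like the following; this is a hypothetical sketch using transient-define-prefix, not the package's actual code, and the groupings and keys are my own guesses:

```elisp
(require 'transient)

;; Hypothetical sketch of a boxquote transient menu, not the
;; definition shipped in boxquote.el.
(transient-define-prefix my/boxquote-menu ()
  "Boxquote commands."
  ["Quote"
   ("r" "Region" boxquote-region)
   ("t" "Title" boxquote-title)
   ("y" "Yank" boxquote-yank)]
  ["Describe"
   ("f" "Function" boxquote-describe-function)
   ("v" "Variable" boxquote-describe-variable)
   ("k" "Key" boxquote-describe-key)])
```

Binding a single key to the prefix then exposes all the commands behind one mnemonic menu, which is exactly the consolidation described below.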

The boxquote transient in action

Now I can have:

(use-package boxquote
  :ensure t
  :bind
  ("C-c b" . boxquote))

and all the commands are still easy to get to and easy to (re)discover. I've also done my best to make them context-sensitive too, so only applicable commands should be usable at any given time.

-1:-- boxquote.el v2.4 (Post Dave Pearson)--L0--C0--2026-04-16T07:29:35.000Z

Bicycle for Your Mind: Outlining with OmniOutliner Pro 6

OmniOutliner Pro 6

Product: OmniOutliner Pro 6
Price: $99 for new users and $50 for upgrade price. They have a $49.99/year subscription price.

Rationale or the Lack of One

There was no good reason to buy OmniOutliner Pro 6.

I don’t need this program. I have the outlining abilities of Org-mode in Emacs. And dedicated outlining programs in Opal, Zavala and TaskPaper.

They had a good upgrade price and I hadn’t tried out any new software in a while. I know that is not a good reason to spend $50. It was my birthday, and I love outlining programs.

I had used the Pro version in version 3 and had bought the Essentials edition for OmniOutliner 5. A lot of what I see in version 6 is new to me.

Themes

Customizing Themes

OmniOutliner Pro 6 comes with themes. I wanted to make my own or customize the existing ones. It is easy to do. I didn't do much: changed the line spacing and the font. The themes it ships with are nice. I am using the blank one and Solarized.

Writing Environment

Writing in OOP

The best thing about OmniOutliner Pro 6 is the writing environment it provides. There are touches around the program which make it a pleasure to write in. Two of them which stick out to me are:

  1. Typewriter scrolling. I have no idea why more programs don’t give you this feature. I use it all the time. Looking at the bottom of the document is boring and it hurts my neck.
  2. Full screen focus. This is well implemented and another feature which helps me concentrate on the document I am in.

Linking Documents

Linking

You can link to a document or to a block in the document. Clicking on the space left of the Heading gives you a drop-down menu. Choose the Copy Omni Link and paste it to where you want the link to appear. Useful in linking documents or sections when you have a block of outlines which relate to each other in some way.

Keyboard Commands

Keyboard commands

Keyboard commands are what make an outlining program. OmniOutliner Pro 6 comes with the ability to customize and change every keyboard command that is in the program. It makes the learning curve smoother when you can use the commands you are used to for every task you perform in an outliner. I love this ability to make the outliner my own.

Using OmniOutliner Pro 6

This is the best outliner in the macOS space. OmniOutliner Pro 6 cements that position. It is a pleasure to use. It does everything you need from an outliner and does it with style. It does more than you need. Columns? I have never found the need for columns in an outliner. Other users love this feature. I am not interested. Maybe I am missing something, or I don’t use outlines which need columns. In spite of my lack of enthusiasm for columns, this is the best outlining program available on the macOS.

Comparison with Org-mode

I use Emacs and within it Org-mode. I write in outlines in Emacs all the time.

Org-mode is a strange mix of OmniOutliner and OmniFocus. It does outlines and does task management. All in one application. In plain text. The only problem? You have to deal with the complexity of Emacs. It is a steep learning curve which gives you benefits over the long term, but there is pain in the short term. Let’s be honest, there is a ton of pain in the short term. OmniOutliner, on the other hand, is easy to pick up and use. You are going to be competent in the program with little effort. The learning curve is minimal. The program is usable and useful. It doesn’t do most of the things Org-mode does, but it is not designed for that. For that, they have a product called OmniFocus to sell you.

Conclusion

If you are looking for an outlining program, you cannot go wrong with OmniOutliner Pro 6. It is fantastic to live in and work with. It gives you a great writing environment. I love writing in it.

There are two things which give me pause when it comes to OmniOutliner Pro 6. The first is the price. I think $99 for an outlining program is steep. That is a function of my retired-person price sensitivity. You might have a different view. The second is the incomplete documentation. They are working on it, slowly. If I am paying for the most expensive outlining program in the marketplace, I want the documentation to be complete and readily available on sale of the product. Not something I have been waiting a few months for. That is negligent.

If you are looking at outlining programs there are competitors in the marketplace. Zavala is a competitive product which is free. Opal is another product which is free and although it doesn’t have all the features of OmniOutliner, is a competent outliner. Or, you can always learn how to use Emacs and adopt Org-mode as the main driver of all your writing.

OmniOutliner Pro 6 is recommended with some reservations.

macosxguru at the gmail thingie.

-1:-- Outlining with OmniOutliner Pro 6 (Post Bicycle for Your Mind)--L0--C0--2026-04-16T07:00:00.000Z

James Endres Howell: Embedding a Mastodon thread as comments to a blog post

I wrote org-static-blog-emfed, a little Emacs package that extends org-static-blog with the ability to embed a Mastodon thread in a blog post to serve as comments. The root of the Mastodon thread also serves as an announcement of the blog post to your followers. It’s based on Adrian Sampson’s Emfed, and of course Bastian Bechtold’s org-static-blog.

I had shared it before, but alas, after changing Mastodon instances the comments from old posts were lost, so I disabled them on this blog. Just over the past few days I’ve found time to get it all working again.

It also seems, at least in #Emacs on Mastodon, that org-static-blog has gained in popularity recently.

Prompted as I was to make a few improvements, I thought I would update the README and share it again. Hope it’s useful for someone!

-1:-- Embedding a Mastodon thread as comments to a blog post (Post James Endres Howell)--L0--C0--2026-04-15T22:17:00.000Z

James Dyer: Emacs-DIYer: A Built-in dired-collapse Replacement

I have been slowly chipping away at my Emacs-DIYer project, which is basically my ongoing experiment in rebuilding popular Emacs packages using only what ships with Emacs itself, no external dependencies, no MELPA, just the built-in pieces bolted together in a literate README.org that tangles to init.el. The latest addition is a DIY version of dired-collapse from the dired-hacks family, which is one of those packages I did not realise I leaned on until I started browsing a deeply-nested Java project and felt the absence immediately.

If you have ever opened a dired buffer on something like a Maven project, or node_modules, or a freshly generated resource bundle, you will know the pain, src/ contains a single main/ which contains a single java/ which contains a single com/ which contains a single example/, and you are pressing RET four times just to get to anything interesting. The dired-collapse minor mode from dired-hacks solves this beautifully, it squashes that whole single-child chain into one dired line so src/main/java/com/example/ shows up as a single row and one RET drops you straight into the deepest directory.

So, as always with the Emacs-DIYer project, I wondered: can I implement this in a few elisp defuns?

Right, so what is the plan? dired already draws a nice listing with permissions, sizes, dates and filenames; all I really need to do is walk each line, look at the directory, figure out the deepest single-child descendant, and then rewrite the filename column in place with the collapsed path. The trick (and this is the bit that took me a minute to convince myself of) is that dired uses a dired-filename text property to know where the filename lives on the line, and dired-get-filename happily accepts relative paths containing slashes. So if I can rewrite the text and reapply the property, everything else, RET, marking, copying, should just work without me having to touch the rest of dired at all!

First function, my/dired-collapse--deepest, which just walks the directory chain as long as each directory contains exactly one accessible child directory. I added a 100-iteration guard so a pathological symlink cycle cannot wedge the whole thing, which, you know, future me might thank present me for:

(defun my/dired-collapse--deepest (dir)
  "Return the deepest single-child descendant directory of DIR.
Walks the directory chain as long as each directory contains exactly
one entry which is itself an accessible directory. Stops after 100
iterations to guard against symlink cycles."
  (let ((current dir)
        (depth 0))
    (catch 'done
      (while (< depth 100)
        (let ((entries (condition-case nil
                           (directory-files current t
                                            directory-files-no-dot-files-regexp
                                            t)
                         (error nil))))
          (if (and entries
                   (null (cdr entries))
                   (file-directory-p (car entries))
                   (file-accessible-directory-p (car entries)))
              (setq current (car entries)
                    depth (1+ depth))
            (throw 'done current)))))
    current))

directory-files-no-dot-files-regexp is one of those lovely little built-in constants I keep forgetting exists; it filters out . and .. but keeps dotfiles, which is exactly what you want if you are deciding whether a directory is truly single-child.
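For illustration (the paths here are made up), the point is that a directory whose only entry is a dotfile still counts as having an entry:

```elisp
;; Hypothetical paths. The third argument filters entries: with
;; directory-files-no-dot-files-regexp, "." and ".." are dropped but
;; dotfiles such as .gitignore are kept, so a directory containing
;; only .gitignore is NOT treated as empty.
(directory-files "/tmp/example" t directory-files-no-dot-files-regexp)
```

If `/tmp/example` held `.gitignore` and `src`, the call would return both absolute paths, which is what the single-child test in the walker relies on.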

Second function does the actual buffer surgery, my/dired-collapse iterates each dired line, grabs the filename with dired-get-filename, asks the walker how deep the chain goes, and if there is anything to collapse it replaces the displayed filename with the collapsed relative path:

(defun my/dired-collapse ()
  "Collapse single-child directory chains in the current dired buffer.
A DIY replacement for `dired-collapse-mode' from the dired-hacks
package. Rewrites the filename portion of each line in place and
reapplies the `dired-filename' text property so that standard dired
navigation still resolves to the deepest directory."
  (when (derived-mode-p 'dired-mode)
    (let ((inhibit-read-only t))
      (save-excursion
        (goto-char (point-min))
        (while (not (eobp))
          (condition-case nil
              (let ((file (dired-get-filename nil t)))
                (when (and file
                           (file-directory-p file)
                           (not (member (file-name-nondirectory
                                         (directory-file-name file))
                                        '("." "..")))
                           (file-accessible-directory-p file))
                  (let ((deepest (my/dired-collapse--deepest file)))
                    (unless (string= deepest file)
                      (when (dired-move-to-filename)
                        (let* ((start (point))
                               (end (dired-move-to-end-of-filename t))
                               (displayed (buffer-substring-no-properties
                                           start end))
                               (suffix (substring deepest
                                                  (1+ (length file))))
                               (new (concat displayed "/" suffix)))
                          (delete-region start end)
                          (goto-char start)
                          (insert (propertize new
                                              'face 'dired-directory
                                              'mouse-face 'highlight
                                              'dired-filename t))))))))
            (error nil))
          (forward-line))))))

The key bit is the propertize call at the end, the new filename text has to carry dired-filename t so that dired-get-filename picks it up, and dired-directory on face keeps the collapsed entry looking the same as a normal directory line. Because dired-get-filename will happily glue a relative path like main/java/com/example onto the dired buffer’s directory, pressing RET on a collapsed line takes you straight to src/main/java/com/example with no extra work from me.

A while back I added a little unicode icon overlay thing to dired (my/dired-add-icons, which puts a little symbol in front of each filename via a zero-length overlay), and I did not want the collapse to fight with it. The icons hook into dired-after-readin-hook as well, so I just gave collapse a negative depth when attaching its hook:

(add-hook 'dired-after-readin-hook #'my/dired-collapse -50)

Lower depth runs earlier, so collapse rewrites the line first, then the icon overlay attaches to the final collapsed filename position. Without this, the icons would happily sit in front of a stub directory that was about to be rewritten, which is, well, fine I suppose, but it felt tidier to have them anchor on the post-collapse text.

Before, a typical Maven project root might look something like this:

drwxr-xr-x 3 jdyer users 4096 Apr 9 08:12 ▶ src
drwxr-xr-x 2 jdyer users 4096 Apr 9 08:11 ▶ target
-rw-r--r-- 1 jdyer users 812 Apr 9 08:10 ◦ pom.xml

After collapse kicks in:

drwxr-xr-x 3 jdyer users 4096 Apr 9 08:12 ▶ src/main/java/com/example
drwxr-xr-x 2 jdyer users 4096 Apr 9 08:11 ▶ target
-rw-r--r-- 1 jdyer users 812 Apr 9 08:10 ◦ pom.xml

One RET and you are in com/example, which is where all the actual code lives anyway. Marking, copying, deleting, renaming, all of it still behaves because the dired-filename text property points at the real deepest path.

One thing that initially bit me is navigating out of a collapsed chain. If I hit RET on a collapsed src/main/java/com/example line I land in the deepest directory, which is great, but then pressing my usual M-e to go back up was doing the wrong thing. M-e in my config has always been bound to dired-jump, and dired-jump called from inside a dired buffer does a “pop up a level” thing that ended up spawning a fresh dired for com/, bypassing the collapsed view entirely and leaving me staring at a directory I never wanted to see.

My first attempt at fixing this was to put some around-advice on dired-jump so that if an existing dired buffer already had a collapsed line covering the jump target, it would switch to that buffer and land on the collapsed line instead of splicing in a duplicate subdir. It worked, sort of, but dired-jump in general felt a bit janky inside dired, it does a lot of “refresh the buffer and try again” under the hood and the in-dired pop-up-a-level path was always the weak link. So I stepped back and split the two cases apart with a tiny dispatch wrapper:

(defun my/dired-jump-or-up ()
  "If in Dired, go up a directory; otherwise dired-jump for current buffer."
  (interactive)
  (if (derived-mode-p 'dired-mode)
      (dired-up-directory)
    (dired-jump)))

(global-set-key (kbd "M-e") #'my/dired-jump-or-up)

From a file buffer, dired-jump is still exactly the right thing as you want the directory the file is in of course. From inside a dired buffer, dired-up-directory is just a much cleaner operation, it walks up one real level, no refresh, no splicing, nothing weird. But on its own that would lose the collapsed round-trip, so I gave dired-up-directory its own bit of advice that looks for a collapsed-ancestor buffer before falling through to the default behaviour.

(defun my/dired-collapse--find-hit (target-dir)
  "Return (BUFFER . POS) of a dired buffer with a collapsed line covering TARGET-DIR."
  (let ((target (file-name-as-directory (expand-file-name target-dir)))
        hit)
    (dolist (buf (buffer-list))
      (unless hit
        (with-current-buffer buf
          (when (and (derived-mode-p 'dired-mode)
                     (stringp default-directory))
            (let ((buf-dir (file-name-as-directory
                            (expand-file-name default-directory))))
              (when (and (string-prefix-p buf-dir target)
                         (not (string= buf-dir target)))
                (save-excursion
                  (goto-char (point-min))
                  (catch 'found
                    (while (not (eobp))
                      (let ((line-file (ignore-errors
                                         (dired-get-filename nil t))))
                        (when (and line-file
                                   (file-directory-p line-file))
                          (let ((line-dir (file-name-as-directory
                                           (expand-file-name line-file))))
                            (when (string-prefix-p target line-dir)
                              (setq hit (cons buf (point)))
                              (throw 'found nil)))))
                      (forward-line))))))))))
    hit))

The advice on dired-up-directory only fires when the literal parent is not already open as a dired buffer, which keeps normal upward navigation completely unchanged:

(defun my/dired-collapse--up-advice (orig-fn &optional other-window)
  "Around-advice for `dired-up-directory' restoring collapsed round-trip."
  (let* ((dir (and (derived-mode-p 'dired-mode)
                   (stringp default-directory)
                   (expand-file-name default-directory)))
         (up (and dir (file-name-directory (directory-file-name dir))))
         (parent-buf (and up (dired-find-buffer-nocreate up)))
         (hit (and dir (null parent-buf)
                   (my/dired-collapse--find-hit dir))))
    (if hit
        (let ((buf (car hit))
              (pos (cdr hit)))
          (if other-window
              (switch-to-buffer-other-window buf)
            (pop-to-buffer-same-window buf))
          (goto-char pos)
          (dired-move-to-filename))
      (funcall orig-fn other-window))))

(advice-add 'dired-up-directory :around #'my/dired-collapse--up-advice)

If /proj/src/main/java/com/ happens to already exist as a dired buffer, dired-up-directory does its usual thing and just goes there, the up-advice never fires. It is only when the literal parent is absent that the advice kicks in and hands you back to the collapsed ancestor, which I think is the right tradeoff, the advice never surprises you when you were going to get the standard behaviour anyway, it only steps in when the standard behaviour would throw away context you clearly still had in a buffer somewhere.

End result, RET into a collapsed chain drops me deep, M-e walks me back out to the original collapsed line, and none of it requires doing anything clever with dired-jump’s “pop up a level” path, which I am increasingly convinced I should not have been using in the first place.

Everything lives in the Emacs-DIYer project on GitHub, in the literate README.org. If you just want the snippet to drop into your own init file, the two functions and the add-hook line above are the whole thing: no require, no use-package, no MELPA, just built-in dired and a bit of buffer shenanigans. And that's it! Phew, and breathe!

-1:-- Emacs-DIYer: A Built-in dired-collapse Replacement (Post James Dyer)--L0--C0--2026-04-15T18:20:00.000Z

Dave Pearson: slstats.el v1.11

Yet another older Emacs Lisp package that has had a tidy up. This one is slstats.el, a wee package that can be used to look up various statistics about the Second Life grid. It's mainly a wrapper around the API provided by the Second Life grid survey.

When slstats is run, you get an overview of all of the information available.

An overview of the grid

There are also various commands for viewing individual details about the grid in the echo area:

  • slstats-signups - Display the Second Life sign-up count
  • slstats-exchange-rate - Display the L$ -> $ exchange rate
  • slstats-inworld - Display how many avatars are in-world in Second Life
  • slstats-concurrency - Display the latest-known concurrency stats for Second Life
  • slstats-grid-size - Display the grid size data for Second Life

There is also slstats-region-info which will show information and the object and terrain maps for a specific region.

Region information for Da Boom

As with a good few of my older packages: it's probably not that useful, but at the same time it was educational to write it to start with, and it can be an amusement from time to time.

-1:-- slstats.el v1.11 (Post Dave Pearson)--L0--C0--2026-04-15T14:52:55.000Z

Irreal: Switching Between Dired Windows With TAB

Just a quickie today. Marcin Borkowski (mbork) has a very nice little post on using Tab with Dired. By default, Tab isn’t defined in Dired but mbork suggests an excellent use for it and provides the code to implement his suggestion.

If there are two Dired windows open, the default destination for Dired commands is “the other window”. That’s a handy thing that not every Emacs user knows. Mbork’s idea is to use Tab to switch between Dired windows.

It’s a small thing, of course, but it’s a nice example of reducing friction in your Emacs workflow. As Mbork says, it’s yet another example of how easy it is to make small optimizations like this in Emacs.

Update [2026-04-16 Thu 11:06]: Added link to mbork’s post.

-1:-- Switching Between Dired Windows With TAB (Post Irreal)--L0--C0--2026-04-15T14:42:10.000Z

Gal Buki: Clipboard in terminal Emacs with WezTerm

Although TRAMP allows access to files on remote servers using the local Emacs instance I usually prefer to open Emacs using a running daemon session on the remote server.

The issue with Emacs in the terminal is that kill and yank (aka copy and paste) don't work the same way as with the GUI. Using WezTerm, I have found ways to work around this.

SSH clipboard support

My terminal emulator of choice is WezTerm which already supports bidirectional kill & yank out of the box.

But I can't bring my muscle memory to remember to use Ctrl+Shift+V to yank text in Emacs. I want Ctrl+y (C-y), like I'm used to.

Luckily .wezterm.lua lets us catch Ctrl+y and yank the clipboard contents into the terminal and with that into Emacs.

local wezterm = require 'wezterm'

local config = wezterm.config_builder()

config.keys = {
    -- Paste in Emacs using regular key bindings
    {
      key = "y",
      mods = "CTRL",
      action = wezterm.action.PasteFrom "Clipboard",
    },
}

return config

Local clipboard support

For those wanting to run Emacs in a local terminal, WezTerm provides yank out of the box but not kill. To kill text from Emacs into the local clipboard, we need to use xclip.

The xclip package has an auto-detect function but it has some issues.

  • if it finds xclip or xsel, it will use them even if we are on Wayland
  • it can't detect macOS (darwin)

So I decided to set xclip-method manually. In addition, I use the :if option of use-package to load the package only when we are in the terminal, an xclip method was found, and we aren't using SSH.

(defun tjkl/xclip-method ()
  (cond
   ((eq system-type 'darwin) 'pbpaste)
   ((getenv "WAYLAND_DISPLAY") 'wl-copy)
   ((getenv "DISPLAY") 'xsel)
   ((getenv "WSLENV") 'powershell)
   (t nil)))

(use-package xclip
  :if (and (not (display-graphic-p))
           (not (getenv "SSH_CONNECTION"))
           (tjkl/xclip-method))
  :custom
  (xclip-method (tjkl/xclip-method))
  :config
  (xclip-mode 1))

Local clipboard without xclip

It is possible to use OSC 52 (an Operating System Command escape sequence) in a local WezTerm terminal without the xclip package and CLI tool.
The problem with this approach is that we can't work with terminal and GUI Emacs using the same session. Since interprogram-cut-function is global, it will also try to use OSC 52 in the GUI Emacs and fail with the message progn: Device 1 is not a termcap terminal device.

I have not yet found a good way to restore GUI yank functionality once interprogram-cut-function is set. So the following should only be used if the GUI instance doesn't use the same session or if the GUI is never opened after terminal Emacs.

(unless (display-graphic-p)
  (defun tjkl/osc52-kill (text)
    (when (and text (stringp text))
      (send-string-to-terminal
       (format "\e]52;c;%s\a"
               (base64-encode-string text t)))))
  (setq interprogram-cut-function #'tjkl/osc52-kill))
-1:-- Clipboard in terminal Emacs with WezTerm (Post Gal Buki)--L0--C0--2026-04-15T10:50:00.000Z

Irreal: Alfred Snippets

Today while I was going through my feed, I saw this post from macosxguru over at Bicycle For Your Mind. It’s about his system for using snippets on his system. The TL;DR is that he has settled on Typinator and likes it a lot.

I use snippets a lot but use several systems—YASnippet, abbrev mode, and the macOS text expansion facility—but none of them work everywhere I need them to, so I have to negotiate three different systems. YASnippet is different from the other two in that its snippets can accept input instead of just making a text substitution like the others.

In his post, macosxguru mentions that his previous system for text substitutions was based on the Alfred snippet functions. I’ve been using Alfred for a long time and love it. A one time purchase of the power pack makes your Mac much more powerful. Still, even though I was vaguely aware of it, I’d never used Alfred’s snippet function.

After seeing it mentioned in macosxguru’s post, I decided to try it out. It’s easy to specify text substitutions. I couldn’t immediately figure out how to trigger the substitutions manually so I just set them to trigger automatically. I usually don’t like that but so far it’s working out well.

Up until now, I haven’t found anywhere that the substitutions don’t work. That can’t be said of any of the other systems I was using. It’s particularly hard to find one that works with both Emacs and other macOS applications.

If you’re using Emacs on macOS, you should definitely look into Alfred. It plays very nicely with Emacs and my newfound snippets ability makes the combination even better.

-1:-- Alfred Snippets (Post Irreal)--L0--C0--2026-04-14T14:59:57.000Z

Charlie Holland: Completion is a Substrate, not a UI

1. About   emacs completion ux


Figure 1: JPEG produced with DALL-E 3

ICR is not a convenience feature. It is a structural change in how the cost of an interaction scales with the size of the underlying data.

The argument I want to make is sharper than it sounds. Incremental completing read (ICR) is not a convenience feature. It is a structural change in how the cost of an interaction scales with the size of the underlying data; it is one of the few interface patterns that genuinely respects how human memory works; and it can fortuitously change how you organize your data, not just how you retrieve it.

A brief thought exercise reveals how a surprisingly large fraction of all software — email, calendars, file browsers, music players, issue trackers, package managers — is, at its core, just two primitives: 1) pick a thing, 2) act on it. That is the exact shape ICR was built for, and most of the visual chrome we drape around those primitives is decoration.

This matters concretely because very few environments expose completion as a programmable substrate[1] you can build ICR experiences with, rather than as a sealed UI you can only consume. In everything else you use, the candidate sources, the matcher, the sorter, the annotator, and the available actions are largely fixed by the vendor or aren't even available. On the other hand, in Emacs and the shell, every layer is independently replaceable. Taking your completion stack seriously is among the highest-leverage things an Emacs user can do, on the same scale as customizing your shell, and for the same reasons. Done right, ICR can dramatically reduce the cognitive overhead of using your computer to do almost anything.

This post opens a short series on ICR. The remaining two posts get concrete: a breakdown of the modular completion framework I use day to day, and a case study of an entire Spotify client that is just an ICR application. The goal of this opening piece is to convince you that ICR is worth your rigor, and to give you the conceptual vocabulary to recognize how much of your own software experience already runs on it.

2. What is Incremental Completing Read?   ICR HCI

"Incremental Completing Read" has three load-bearing words:

Read, in the elisp sense: a function that prompts the user and returns a value. The system asks a question, you answer, and then the answer is something other code can do something with.

Completing: the system maintains a candidate set and shows you which candidates currently match your input. You don't type the full answer. You type enough to disambiguate, and the system fills in the rest.

Incremental: the candidate set is recomputed on every keystroke2. You don't submit a query and wait for results. Filtering happens between characters, fast enough that the result list feels like an extension of what you're typing.

Combine the three words and you get an interaction that is qualitatively different from either browsing or searching. Browsing scales poorly to large sets — you can scan a list of ten things, not a list of ten thousand. Search-and-submit scales fine in the back end but introduces a feedback gap that breaks flow. ICR fuses the two.
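
To make the three words concrete, here is a minimal, dependency-free Python sketch of the loop. The function names and the subsequence matcher are illustrative inventions, not taken from any real completion framework:

```python
def matches(query: str, candidate: str) -> bool:
    """Subsequence ("fuzzy") match: every query character appears in order."""
    pos = 0
    for ch in query.lower():
        pos = candidate.lower().find(ch, pos)
        if pos == -1:
            return False
        pos += 1
    return True

def narrow(candidates: list[str], query: str) -> list[str]:
    """The "completing"/"incremental" step: recompute survivors for the current input."""
    return [c for c in candidates if matches(query, c)]

def read_sketch(candidates: list[str], keystrokes: str) -> str:
    """The "read" step: replay keystrokes, narrowing after each one,
    then return the top survivor as the value other code acts on."""
    survivors = candidates
    for i in range(1, len(keystrokes) + 1):
        # In a real frontend this recomputation happens between characters.
        survivors = narrow(candidates, keystrokes[:i])
    return survivors[0] if survivors else ""
```

The entire pattern fits in a few lines; everything else a real framework adds (ranking, annotations, actions, display) is layered on top of this loop.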

A clarification before going further. In Emacs, the standard-library function named completing-read is not, on its own, incremental. It TAB-completes at the minibuffer and shows a *Completions* buffer on demand. The incremental UX described above is layered on top by a separate generation of frontends like Icomplete, Ido, Ivy, Helm, and the modern Vertico. Throughout this series, "ICR" refers to the pattern (the API plus an incremental frontend), not to any single function. This separation matters because it makes the Emacs completion stack pluggable; that pluggability is the subject of the next post in the series.

3. The ubiquity of ICR   ux

Think about all the places you already use ICR. Here's a partial inventory:

  • The browser URL bar narrows history and bookmarks as you type.
  • Search engines suggest queries character by character.
  • Spotify, Apple Music, and YouTube surface tracks, artists, and videos as you fill in the search box.
  • Amazon's product search shows partial matches and category filters live.
  • IDEs offer symbol completion, file navigation, and command palettes. Think VS Code's Cmd-Shift-P, JetBrains' "Search Everywhere," GitHub's file finder, Sublime Text's "Goto Anything."
  • Shell users reach for fzf to fuzzy-find files, branches, processes, and command history.
  • Slack jumps to channels by typing fragments of the name.
  • Even mobile keyboards suggest the next word as you tap3!

These look like different tools, but when you think about it they are the same interface. Each one accepts a stream of keystrokes, runs an incremental query against a sometimes enormous candidate set, and surfaces the best matches in real time, as you type. Across all these apps, your interaction pattern is the same: you type fragments, watch a candidate list narrow, and then pick from what survives the narrowing.

ICR has become the lingua franca of navigation.

The pattern is so ubiquitous that its absence now feels strange to me. File pickers that only show a tree, settings panels with no search box, and configuration UIs where you have to remember the menu hierarchy all force me to slow down and then manually browse through candidate sets to find what I'm looking for. These feel like artifacts of an earlier era — the era before incremental completing read became a common default for how humans navigate sets of named things. Today, it feels like ICR has become the lingua franca of navigation.

4. A thought exercise: how much of computing fits inside ICR?   composition shell HCI

If you take anything away from this post, let it be what follows in this section. This realization is what makes Emacs legible to its power users:

We've seen where ICR shows up in the previous section, but where else can we use it? Run an inventory of the interfaces you use daily, and for each one, ask: at its core, is this just pick a thing from a set, then do something to it?

  • Email: pick a message; reply, archive, forward, delete.
  • Calendar: pick an event; accept, reschedule, open.
  • File browser: pick a file; open, rename, delete, move.
  • Issue tracker: pick an issue; assign, comment, close.
  • Music player: pick a track; play, queue, save.
  • Package manager: pick a package; install, remove, inspect.
  • Git client: pick a branch; checkout, merge, rebase, delete.
  • Cloud console: pick a resource; start, stop, configure, destroy.

The list grows uncomfortably long. It turns out that a surprising fraction of all interactions with your software reduces to the same two primitives: a source of candidates and a set of actions you can perform on the selected candidates. Most of the visual chrome we drape around these primitives is decoration.

A source of candidates and a set of actions: most of the visual chrome we drape around these primitives is decoration.

Now that you've seen the light, your next move is to ask whether you can chain these. Consider navigating files in a project: ICR to pick a project, which scopes the candidate set to its files; ICR to pick a file, which scopes the actions to its file type; ICR to pick an action, which produces a new candidate set, and so on. An interaction model built from selecting and acting can be composed into arbitrarily complex workflows, the same way any other small set of orthogonal primitives can.
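
Sketched in Python, with an entirely made-up project layout and action table, the chain looks like this. The `pick` function is a toy stand-in for an interactive ICR prompt:

```python
import os

# A toy sketch of chained ICR. The projects, files, and actions below are
# invented for illustration; nothing here is a real API.

PROJECTS = {
    "blog": ["posts/icr.org", "posts/emacs.org"],
    "spot": ["spot.el", "README.md"],
}

# Available actions are scoped by file extension.
ACTIONS = {".el": ["eval", "edit"], ".org": ["export", "edit"], ".md": ["preview", "edit"]}

def pick(candidates: list[str], query: str) -> str:
    """Stand-in for an ICR prompt: narrow by substring until one survivor remains."""
    survivors = [c for c in candidates if query in c]
    if len(survivors) != 1:
        raise ValueError(f"query {query!r} did not disambiguate: {survivors}")
    return survivors[0]

# Each selection scopes the next candidate set.
project = pick(list(PROJECTS), "sp")                         # pick a project
filename = pick(PROJECTS[project], ".el")                    # its files become candidates
action = pick(ACTIONS[os.path.splitext(filename)[1]], "ev")  # its type scopes the actions
```

Three prompts, three narrowings, and each answer determines the next question. That is the whole composition model.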

Shell users already know this composition story well. For most shell users, fzf drives ICR and produces selections. Pipes feed those selections into commands. Commands produce new selections which can be piped back into fzf for more ICR, and so on. git branch | fzf | xargs git checkout is the pattern in miniature: a candidate source, a selector, an action, all chained. fd | fzf | xargs $EDITOR is the same shape with a different source. Build a few dozen of these one-liners and you have a personal interface to your filesystem, your version control history, your processes, your network, without anyone shipping you that interface. That's powerful!

The interesting, and frustrating, observation is how rare this composability and feature-richness is among the ICR interfaces that do exist. Spotify will never let you redefine what "select a track" can do. Gmail's search cannot pipe its selected results into your own actions. Some environments come closer than others — Neovim's Telescope, Raycast's extension API, VS Code's QuickPick — but in each of them at least one of the layers (the matcher, the sorter, the annotator, the action set) is fixed by the vendor. Few environments expose every layer, and only Emacs and the shell expose them independently, so that you can swap one without disturbing the others.

This is the difference between using ICR and building ICR, and it is what makes Emacs and the shell uniquely powerful for anyone who works inside them all day. Personally, this is the main reason why I live in Emacs and the shell.

5. The cognitive cost argument   cognitiveStrain

Software engineers have a precise vocabulary for talking about how algorithms scale: time complexity, space complexity, big-O notation. The corresponding field for how interfaces scale is human-computer interaction (HCI), which has its own established vocabulary — Hick-Hyman's Law, Fitts's Law, working-memory load, recognition vs. recall — but engineers rarely reach for it. The argument that follows borrows from both sides, because ICR is best understood through both angles: an algorithmic property (constant-time filtering against an arbitrary corpus) producing an HCI property (constant-cost selection regardless of corpus size).

Consider the simple act of finding a file. In a tree-based file browser, the cognitive effort grows with the size of the file system. Five files in a folder is trivial, but five hundred files spread across a hierarchy is much more cognitively taxing. You have to remember where the file lives, click through directories, scan lists, scroll, and move your cursor to the selection. Add another order of magnitude — half a million files in a project — and the file browser has effectively ceased to function as a tool for finding things. Cognitively, this approach scales worse than linearly.

Now do the same task with ICR. You hit your file-finder binding, type a fragment of the name you remember, watch the list narrow to a handful of plausible matches, and pick one. The experience is the same whether your project contains fifty files or fifty thousand. The interface does not get harder to use as the candidate set grows.

ICR breaks the linkage between the size of the world and the difficulty of finding something in it.

It is tempting to call this O(1) cognitive complexity, by analogy to algorithmic complexity4. The point is straightforward: the cost of finding something via ICR is independent of the size of the candidate set, and that independence is what the big-O analogy is reaching for. ICR breaks the linkage between the size of the world and the difficulty of finding something in it.

There is also a literature analogue worth naming. Hick-Hyman's Law5 models the time required for a forced choice as roughly proportional to log₂(n+1), where n is the number of equally likely alternatives. A flat menu of ten thousand commands is a Hick-Hyman nightmare; the user pays a logarithmic-in-n decision cost on every selection. ICR sidesteps the law by collapsing n before the choice step happens. By the time the user is selecting from the visible candidate panel, n is already small, typically less than half a dozen in my experience, and the per-selection decision cost is bounded by panel size rather than corpus size. We can calmly let the corpus grow without bound and we can trust that the time-to-pick stays roughly constant.
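
As a back-of-the-envelope illustration of that claim (the constant k and the corpus numbers are arbitrary; only the scaling matters):

```python
import math

def hick_hyman_cost(n: int, k: float = 1.0) -> float:
    """Hick-Hyman choice reaction time for n equally likely
    alternatives: k * log2(n + 1)."""
    return k * math.log2(n + 1)

corpus = 10_000  # commands in a flat namespace
panel = 6        # candidates still visible after narrowing

flat_menu_cost = hick_hyman_cost(corpus)  # grows with the corpus: ~13.3 * k
icr_cost = hick_hyman_cost(panel)         # bounded by panel size: ~2.8 * k
```

Grow the corpus a hundredfold and flat_menu_cost keeps climbing, while icr_cost is untouched, because narrowing has already collapsed n before the choice step.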

This is why ICR is not just an ergonomic nicety. It bends the curve. Most interface improvements buy you a constant factor, like a faster animation, a clearer label, or a better-organized menu. ICR changes the curve itself, and anything that changes the curve dominates the things that only change the constant, given enough data.

The corollary is that ICR's value is asymmetric across users. If your projects are tiny and your address book is short, you may never feel the difference. However, if like me you are an Emacs user with a sprawling notes directory, two decades of email, half a dozen languages installed, and a thousand interactive commands, ICR is the difference between a usable system and an unusable one. The bigger your world, the more you'll want to bend the curve.

For me personally, a key benefit is that it dissolves any anxiety about the aforementioned search spaces growing. Regardless of the magnitude of my emails, news articles, code repositories, or music libraries, the ease of finding what I'm looking for in any given workflow stays roughly constant.

6. Recognition, recall, and the third option   psychology

Human-computer interaction research has long distinguished recognition (picking the right item from a presented list) from recall (producing the right item from memory)6. Recognition is famously easier, and this is why menus exist, why icon-based interfaces won, why "tip of my tongue" is a complaint about recall failure rather than recognition failure.

ICR sits in a strange and useful place between easy recognition and hard recall. You don't have to recall the full item; you only have to recall a fragment of it. And you don't have to recognize it in a large fixed list, because the list narrows (often to a single candidate) in response to whatever fragment you produced. The interface meets you halfway.

This matters because the cognitive load of pure recall and the visual load of pure recognition both grow with set size. Recalling one item out of ten thousand is harder than recalling one out of ten. Recognizing one item in a list of ten thousand is harder than recognizing one in a list of ten. The hybrid form ICR offers — partial recall, then narrowing recognition — degrades much more gracefully. It is one of the few interaction primitives that gets its leverage from how human memory actually works rather than fighting it.

Cognitive psychology has a name for this hybrid: cued recall7. The user-typed fragment is a retrieval cue: the system uses it to materialize a small candidate set and the remainder of the task is recognition over that set. ICR is the UI instantiation of cued recall, with the screen serving as an externalized cue-to-candidate index. This is a well-established cognitive primitive, but it is rare to see an interface deploy it as deliberately as a well-tuned completion stack does.

The hybrid form ICR offers — partial recall, then narrowing recognition — degrades much more gracefully.

The best completion frameworks lean into this further. They learn your patterns. Recently selected items rise. Frequently selected items rise. The fragment you produce maps to the candidate you usually pick, not the candidate that happens to alphabetize first. The interface adapts to you. Over months, this turns into something close to muscle memory: you type a few characters and the right answer is already at the top, because that's where it has been for the last hundred selections.
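
One way such adaptive ranking can work is exponential decay over selection history, often called "frecency". The scoring below is a hedged sketch, not the actual formula used by prescient.el or any other package:

```python
class FrecencyRanker:
    """Toy "frecency" ranking: each past selection contributes a weight
    that halves every half-life, so recent AND frequent picks rise."""

    def __init__(self, half_life_s: float = 3600.0):
        self.half_life_s = half_life_s
        self.history: dict[str, list[float]] = {}  # candidate -> selection timestamps

    def record(self, candidate: str, now: float) -> None:
        """Remember that the user picked this candidate at time `now`."""
        self.history.setdefault(candidate, []).append(now)

    def score(self, candidate: str, now: float) -> float:
        return sum(0.5 ** ((now - t) / self.half_life_s)
                   for t in self.history.get(candidate, []))

    def rank(self, candidates: list[str], now: float) -> list[str]:
        """Sort candidates so the highest-scoring ones surface first."""
        return sorted(candidates, key=lambda c: -self.score(c, now))
```

After a few recorded selections, the candidate you usually pick sorts first even when it alphabetizes last, which is where the muscle-memory effect comes from.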

7. Flat over nested: how ICR reshapes how you organize   organization knowledgeManagement

The downstream effect is not just on retrieval. ICR changes the math on how you should structure your data in the first place.

In a world without ICR, hierarchy is a pretty good coping strategy. Tree-structured folders, deeply nested categories, "taxonomies" — these exist because flat lists become unscannable past a certain size. If finding things requires browsing, then organizing into a navigable tree is necessary work, but that work has real and compounding costs. You have to invent the taxonomy up front, before you know what you'll eventually want to file. Then you have to remember it later. The biggest nightmare for me personally is that with hierarchies and taxonomies, I have to live with the fact that many items legitimately belong in two categories at once, yet the file system or knowledge-management system forces me to pick one. I know people who are good at breaking out of this choice paralysis, but I know from experience that I am not one of them. And you incur an operational cost on every save, because every new item is a small classification problem.

The argument for nesting was always "I cannot scan a flat list of ten thousand items." ICR replies: "you do not need to scan it."

With ICR, hierarchy becomes optional. The argument for nesting was always "I cannot scan a flat list of ten thousand items." ICR replies: "you do not need to scan it." A flat directory plus tags plus links is sufficient, because ICR makes any individual item findable in a few keystrokes regardless of how many neighbors it has.

It is worth being precise about what ICR replaces and what it doesn't. Hierarchy does at least two distinct jobs. One is retrieval: helping you find a thing. The other is explanation: encoding kind-of and part-of relationships, conferring landmark structure on a space, making the shape of a domain legible at a glance. Cognitive psychology has long identified the latter as load-bearing. Eleanor Rosch's work on basic-level categories8 showed that hierarchical taxonomies map onto how humans actually carve up the world, and Thomas Malone's classic study of how people organize their physical desks9 found that "filing" (hierarchical, classified) and "piling" (flat, recency-ordered) coexist for good reasons: piles support fast access to active material and files support reasoning about the shape of what you have. ICR substitutes cleanly for hierarchy's retrieval function. It does not substitute for the explanatory function. When the relationships between things are themselves the point — a code architecture, a course curriculum, a legal taxonomy — a tree is still doing real work that no completion stack will replace10.

The sleight of hand to avoid is treating "ICR makes hierarchy optional" as "hierarchy is bad." The honest, narrower claim is this: in domains where hierarchy was load-bearing only as a search affordance, ICR lets you drop it and reclaim its costs.

This is the architectural premise of denote, Protesilaos Stavrou's Emacs note-taking package. denote stores notes in a single mostly-flat directory, and although the package supports subdirectory "silos", Stavrou explicitly argues against using them as a primary organizing principle. Notes relate to each other through filename-encoded tags and explicit hyperlinks. The package leans entirely on completion to find things, and that works because finding things in a flat namespace via ICR is instantaneous. The same idea shows up in tools like Obsidian and in older personal-knowledge systems. These systems abandon hierarchy because they trust search interfaces to scale to large search spaces.

Emacs itself works this way at a much larger scale. One of my all-time favorite quirks is that every interactive command lives in a single flat namespace. A mature configuration easily exposes ten thousand of them (a quick smash of M-x on my Emacs produces over 13,000 interactive commands). Nothing about this is overwhelming to me though, because I never see the full list. I just type M-x and a fragment of what I want, and the relevant commands surface. A hierarchical menu system covering ten thousand commands would be unusable; a flat namespace plus M-x is unremarkable.

In Emacs you can get this flat-list style ICR even where there are rigid hierarchies. This is critical when physical hierarchies or taxonomies are necessary (like in code repositories), but the user still wants to navigate the content without engaging with the hierarchy or taxonomy. For example, when I'm trying to find a file via ICR, I find myself reaching for something like project-find-file (show me all files in a project in a flat list) over something like find-file (let me traverse the directories one level at a time until I find my leaf).

As we've already seen, the ICR pattern generalizes really well. Any structure you build purely to make scanning easier is a structure ICR makes redundant. Even where these structures need to exist, ICR can still help you get around the rigidity and opacity of that structure. Once you trust your completion stack, you can shed the hierarchies you built and maintain, and you can triumphantly reclaim the cognitive and operational overhead that those hierarchies were costing you.

8. Why ICR matters more in Emacs than anywhere else   emacs

The thought exercise above hands us the answer to a question this post has been circling: of the environments where ICR is genuinely programmable, why focus a series on Emacs rather than on the shell?

The shell case is well-trodden territory; Unix users have been chaining fzf and pipes for years, the design space is mostly explored, and shell users are typically introduced to the notion of ICR the second they start learning how to configure their prompt. The Emacs case is younger, deeper, and less well documented — and it is the focus of this series, so it is worth zooming in on the specific ways Emacs exposes completion as a substrate. Emacs is also less popular, so there is an air of proselytism to this post 😜.

In Emacs, every layer of the ICR interaction is pluggable. completing-read is a function in the standard library. The display is pluggable. The matching strategy is pluggable. The sorting is pluggable. The annotations are pluggable. The actions you can take on a selected candidate? Pluggable! This is all discussed in my subsequent post on the VOMPECCC composite framework.

In Emacs, every layer of the interaction is a place where you can substitute behavior, and every layer has a small ecosystem of competing implementations to choose from.

Most editors give you a completion UI. Emacs gives you a completion substrate. The difference is what you can build on top.

This is what separates Emacs from the editors that come closest. Most give you a completion UI; Emacs gives you a completion substrate. From an HCI standpoint, what is unusual about Emacs is not the completion interaction itself — the visible behavior is broadly similar to Telescope, QuickPick, or Raycast — but that the layers HCI usually treats as monolithic (matcher, sorter, annotator, action set, display surface) are exposed as independent surfaces. Those other tools let you produce candidates and bind actions, but the matcher, the sorter, the annotator, and the display they hand you are largely fixed. Recently, the Emacs community has done a lot of work towards making all of these pieces independently swappable, and the resulting compositional space is qualitatively bigger. This is the reason the Emacs completion ecosystem is one of the most interesting parts of the software. Every well-designed Emacs package eventually becomes, in part, a completing-read application: a thoughtful choice of candidate source, plus annotations, plus actions, plus a UI that is already familiar because it is the same UI you use for everything else. The cost of adding a new "thing the user can pick from a list" is close to zero, and the resulting interaction inherits all of the user's existing muscle memory.

Don't treat completion as a built-in convenience you don't have to think about. Emacs ships with a working completing-read out of the box, and many users never look further. This is a tragic error on the same scale as never customizing your shell. A serious Emacs user should treat the completion stack the way a serious shell user treats prompt and history setup: as a thing worth investing in, arguably the driving HCI paradigm of the Emacs platform. Every other piece of the system gets better when this one is good.

ICR is a simple concept, but it has really profound effects on how I use Emacs. Better completion makes file finding faster. Faster file finding changes how I organize my data. Better symbol completion changes how aggressively I refactor. Better command completion changes which commands I remember exist11. Better candidate annotations change which choices I can make confidently. In addition to saving me cognition, keystrokes, and time, ICR raises the upper bound on how much of Emacs I can fluently use.

9. Where this series goes

This was all very woo-woo and hand-wavy, but the next two posts get concrete.

The middle post is on VOMPECCC, a name for a loose constellation of eight Emacs packages — Vertico, Orderless, Marginalia, Prescient, Embark, Consult, Corfu, and Cape — that together compose a complete, modular completion framework along Unix-philosophy lines. Each package does one thing, and, boy, does it do it well. Most importantly, each communicates through Emacs's standard completion APIs, making it possible for any subset of these packages to work with or without the others. That post is a technical breakdown for developers who want to either adopt the whole stack or pick the pieces that solve their specific problems.

The final post is on spot, a Spotify client built as a pure ICR application: search Spotify's catalog through consult, view catalog metadata inline with marginalia, and act on the results with embark. It builds nothing of its own at the UI layer because it doesn't need to. Every UI primitive it requires is already there, courtesy of the framework the previous post describes. spot is a useful case study in what becomes possible when you stop treating completion as a default and start treating it as a programmable substrate.

Three posts, one argument: incremental completing read is one of the highest-leverage interaction patterns in computing, Emacs gives you uniquely deep control over it, and that control is worth using. The rest of the series is about the practical 'how'.

10. tldr

This post argues that Incremental Completing Read (ICR) — the pattern where a candidate list narrows in real time as you type — is not a convenience feature but a structural change in how interface cost scales with data size. ICR is composed of three ideas: read (prompt the user and return a value), completing (maintain and display a candidate set), and incremental (recompute matches on every keystroke). Together they produce an interaction qualitatively different from both browsing and searching.

The pattern is already ubiquitous across software you use daily — browser URL bars, search engines, music players, IDE command palettes, and shell tools like fzf all implement it. A surprising fraction of all computing boils down to two primitives: pick a thing from a set, then act on it, and these primitives compose into arbitrarily complex workflows through chaining, the way shell pipes do.

From a cognitive-science perspective, ICR breaks the linkage between corpus size and the difficulty of finding something in it. While tree-based browsing degrades with scale and Hick-Hyman's Law penalizes large choice sets, ICR collapses the visible candidate count before the choice step, keeping per-selection cost roughly constant regardless of how large the underlying data grows. ICR also occupies a unique position between recognition and recall — you supply a partial cue, the system materializes a small candidate set, and the rest is easy recognition. Cognitive psychology calls this cued recall, and well-tuned completion stacks lean into it further by learning your selection history.

Beyond retrieval, ICR reshapes how you organize data in the first place. Hierarchy was always a coping strategy for unscannable flat lists; ICR makes scanning unnecessary, so hierarchies built purely as search affordances become redundant. This is the design premise behind tools like denote and Emacs's own flat M-x command namespace.

Finally, the post explains why Emacs is the focus of this series: unlike every other environment, Emacs exposes the matcher, sorter, annotator, display, and action set as independently replaceable layers, making completion a programmable substrate rather than a sealed UI. The next two posts get concrete — one on the modular VOMPECCC completion framework, and one on a Spotify client built as a pure ICR application.

Footnotes:

1

I use "substrate" in the sense borrowed from biology and platform engineering: a foundational layer that other things are built on, acted upon, or composed out of. In biology, an enzyme acts on a substrate; in hardware, transistors are fabricated on a silicon substrate; in platform engineering, applications run on a compute substrate like Kubernetes. In all three, the substrate is primitive, malleable, and compositional — the raw material from which or on which higher-level things are built. Applied here: Emacs hands you completion as raw pluggable parts (matcher, sorter, annotator, action, display) rather than as a finished dish. The implicit contrast is completion-as-UI: a product you consume, where the vendor has already picked every layer for you.

2

The obvious objection: what if the candidate set is too enormous to materialize up front? Think a grep over a large codebase, or a query against a remote API. Emacs handles this through async completion sources — consult-ripgrep is the canonical example. Each keystroke debounces and spawns a ripgrep process whose streaming output becomes the incremental candidate set; the user sees narrowing results without ever holding the full corpus in memory. The pattern generalizes: any candidate source that can be expressed as a streaming query (ripgrep, git log, a database cursor, a REST endpoint) slots into the same ICR interaction. Corpus size stops being a constraint on the interface.

3

This actually doesn't even require an initial search string. I have a bad joke about the iMessage word-prediction being the original ChatGPT — if you could use a chuckle, I highly suggest opening up iMessage and spamming the next predicted word and observing the sheer nonsense that comes out.

4

Strictly speaking, big-O describes the runtime of an algorithm, not the perceived effort of a human using a tool, and the user-facing cost of ICR is not literally constant — recalling a fragment, scanning the survivors, and choosing among them all consume real cognitive resources. The defensible claim, and the one big-O notation is reaching for, is independence from the size of the candidate set. Whether you call that "asymptotically constant cognitive cost," "sublinear effort," or just "the same work regardless of scale," the underlying observation is the same.

5

William E. Hick, "On the rate of gain of information," Quarterly Journal of Experimental Psychology 4(1), 1952, 11–26; and Ray Hyman, "Stimulus information as a determinant of reaction time," Journal of Experimental Psychology 45(3), 1953, 188–196. The law: choice reaction time scales as roughly k·log₂(n+1) for n equally likely alternatives. ICR's effect is to keep n (the size of the visible candidate panel) small and roughly constant even as the underlying corpus grows arbitrarily.

6

Jakob Nielsen, "10 Usability Heuristics for User Interface Design" (1994, periodically updated by the Nielsen Norman Group). Heuristic #6 is "Recognition rather than recall": interfaces should minimize the user's memory load by making elements, actions, and options visible, rather than requiring users to retrieve them from memory.

7

Endel Tulving and Zena Pearlstone, "Availability versus accessibility of information in memory for words," Journal of Verbal Learning and Verbal Behavior 5(4), 1966, 381–391. The original demonstration that retrieval cues dramatically improve recall over uncued conditions, even when the underlying item is equally "available" in memory. Tulving's framing of cued recall as a distinct mode — intermediate between free recall and recognition — is the one ICR most closely instantiates.

8

Eleanor Rosch, Carolyn B. Mervis, Wayne D. Gray, David M. Johnson, and Penny Boyes-Braem, "Basic objects in natural categories," Cognitive Psychology 8(3), 1976, 382–439. The basic-level finding: human categorization is not arbitrary across hierarchies but anchored at a particular middle level (chair, dog, car) that maximizes informativeness. Hierarchies are not just retrieval scaffolds; they reflect how humans naturally carve up the world.

9

Thomas W. Malone, "How do people organize their desks? Implications for the design of office information systems," ACM Transactions on Office Information Systems 1(1), 1983, 99–112. The classic study identifying "files" (hierarchical, classified) and "piles" (flat, recency-ordered) as coexisting strategies, each well-suited to different parts of the same workflow.

10

The counterargument here would be that tags and hyperlinks give you the same thing, but the point is that oftentimes a PHYSICAL hierarchy, like the organization of files in a directory, is needed and unavoidable.

11

I find it interesting that alleviating the burden of memory of a large search space actually improves my memory for the things that are actually important.

-1:-- Completion is a Substrate, not a UI (Post Charlie Holland)--L0--C0--2026-04-14T12:22:00.000Z

Dave Pearson: wordcloud.el v1.4

I think I'm mostly caught up with the collection of Emacs Lisp packages that need updating and tidying, which means yesterday evening's clean-up should be one of the last (although I would like to revisit a couple and actually improve and extend them at some point).

As for what I cleaned up yesterday: wordcloud.el. This is a package that, when run in a buffer, will count the frequency of words in that buffer and show the results in a fresh window, complete with the "word cloud" differing-font-size effect.

Word cloud in action

This package is about 10 years old at this point, and I'm struggling to remember why I wrote it now. I know I was doing something -- either writing something or reviewing it -- and the frequency of some words was important. I also remember this doing the job just fine and solving the problem I needed to solve.

Since then it's just sat around in my personal library of stuff I've written in Emacs Lisp, not really used. I imagine that's where it's going back to, but at least it's cleaned up and should be functional for a long time to come.

-1:-- wordcloud.el v1.4 (Post Dave Pearson)--L0--C0--2026-04-14T07:47:39.000Z

Dave's blog: Posframe for everything

An Emacser recently posted about popterm, which can use posframe to toggle a terminal visible and invisible in Emacs. I tried it out, ran into problems with it, and so abandoned it for now.

However, this got me thinking about other things that can use posframe, which pops up a frame at point. I’ve seen other Emacsers use posframe when they show off their configurations in meetups. I thought about what I use often that might benefit from a posframe.

  • magit
  • vertico
  • which-key
  • company
  • flymake

Which of these has something I can use to enable posframes?

Of course, there are plenty of other packages that have add-on packages to enable posframes.

Magit

magit doesn’t have anything directly, but it makes heavy use of transient. And there’s a package transient-posframe that can enable posframes for transients. When I use magit’s transients, the transient pops up as a frame in the middle of my Emacs frame.

vertico

Install vertico-posframe to use posframes with vertico.

which-key

Yep, there’s which-key-posframe.

company

See company-posframe.

flymake

I needed a bit of web searching to find this. flymake-popon can use a posframe in the GUI and popon in a terminal.
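Pulled together, enabling all of these takes only a few lines of configuration. A minimal sketch, assuming each add-on package is already installed (mode names as commonly published on MELPA; check each package's README):

```elisp
;; Show completions, transients, key hints and popups in posframes.
(vertico-posframe-mode 1)    ; from vertico-posframe
(transient-posframe-mode 1)  ; from transient-posframe, covers magit
(which-key-posframe-mode 1)  ; from which-key-posframe
(company-posframe-mode 1)    ; from company-posframe
```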

-1:-- Posframe for everything (Post Dave's blog)--L0--C0--2026-04-14T00:00:00.000Z

Marcin Borkowski: Binding TAB in Dired to something useful

I’m old enough to remember Norton Commander for DOS. Despite that, I never used Midnight Commander nor Sunrise Commander – Dired is still my go-to file manager these days. In fact, Dired has a feature which seems to be inspired by NC: when there are two Dired windows, the default destination for copying, moving and symlinking is “the other” window. Surprisingly, another feature which would be natural in an orthodox file manager is absent from Dired.
-1:-- Binding TAB in Dired to something useful (Post Marcin Borkowski)--L0--C0--2026-04-13T18:56:07.000Z

Irreal: Some Config Hacks

Bozhidar Batsov has an excellent post that collects several configuration hacks from a variety of people and distributions. It’s a long list, and rather than cover them all, I’m going to mention just a few that appeal to me. Some of them I’m already using. Others I didn’t know about but will probably adopt.

  • Save the clipboard before killing: I’ve been using this for years. It makes sure that the contents of the system clipboard aren’t lost if you do a kill in Emacs. This is much more useful than it sounds, especially if, like me, you do a lot of cutting and pasting from other applications.
  • Save the kill ring across sessions: I’m not sure I’ll adopt this but it’s easy to see how it could be useful.
  • Auto-chmod scripts: Every time I see this one I resolve to add it to my config but always forget. It automatically makes scripts (files beginning with #!) executable when they’re saved.
  • Proportional window resizing: When a window is split, this causes all the windows in the frame to resize proportionally.
  • Faster mark popping: It’s sort of like repeat mode for popping the mark ring. After the first Ctrl+u Ctrl+Space you can continue popping the ring with a simple Ctrl+Space.
  • Auto-select Help window: This is my favorite. When I invoke help, I almost always want to interact with the Help buffer, if only to quit and delete it with a q. Unfortunately, the Help buffer doesn’t get focus, so I have to change windows to reach it. This simple configuration gives the Help buffer focus when you open it.
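Several of these hacks correspond to built-in Emacs settings. A minimal sketch of the ones above, written from memory rather than copied from Batsov's post, so double-check against his exact snippets:

```elisp
;; Save the system clipboard onto the kill ring before a kill
;; would overwrite it.
(setq save-interprogram-paste-before-kill t)

;; Make files that start with "#!" executable when saved.
(add-hook 'after-save-hook
          #'executable-make-buffer-file-executable-if-script-p)

;; After C-u C-SPC, keep popping the mark ring with plain C-SPC.
(setq set-mark-command-repeat-pop t)

;; Give the *Help* window focus when it opens.
(setq help-window-select t)
```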

Everybody’s needs and preferences are different, of course, so be sure to take a look at Batsov’s post to see which ones might be helpful to you.

-1:-- Some Config Hacks (Post Irreal)--L0--C0--2026-04-13T14:56:38.000Z

Protesilaos Stavrou: Emacs: new modus-themes-exporter package live today @ 15:00 Europe/Athens

Raw link: https://www.youtube.com/watch?v=IVTqn9IgBN4

UPDATE 2026-04-13 18:00 +0300: I wrote the package during the stream: https://github.com/protesilaos/modus-themes-exporter.


[ The stream will be recorded. You can watch it later. ]

Today, the 13th of April 2026, at 15:00 Europe/Athens I will do a live stream in which I will develop the new modus-themes-exporter package for Emacs.

The idea for this package is based on an old experiment of mine: to get the palette of a Modus theme and “export” it to another file format for use in supported terminal emulators or, potentially, other applications.

My focus today will be on writing the core functionality and testing it with at least one target application.

Prior work of mine from my pre-Emacs days is the tempus-themes-generator, which was written in Bash: https://gitlab.com/protesilaos/tempus-themes-generator.

-1:-- Emacs: new modus-themes-exporter package live today @ 15:00 Europe/Athens (Post Protesilaos Stavrou)--L0--C0--2026-04-13T00:00:00.000Z

Bastien Guerry: Get ready for Orgy in 15 minutes

Orgy is a static website generator for Org files.

It turns a directory of .org files into a website with navigation, section indexes, tag pages, RSS feeds, multilingual layouts and themes, without requiring any configuration or templates. You write Org files, run a single orgy command, and get a public/ directory ready to deploy.

This tutorial will guide you through creating a decent static website from an empty directory in a few steps.

We assume that Orgy is already installed and available as the orgy command.

Step 1 - Your first page

Create a directory and a single index.org file:

mkdir website
cd website

Then create index.org with this content:

#+title: Hello

Welcome to my site.

Serve it:

orgy serve

You're done! You can see the website at http://localhost:1888.

No config, no templates, no theme.

See the new public/ directory:

website/
├── index.org
└── public/
    └── index.html
Step 1 - a minimal site with zero configuration

Step 2 - Add a blog post

Drop a second .org file right next to index.org:

#+title: Hello World
#+date: 2026-04-10

This is my first *post* with some /Org markup/ and a [[https://orgmode.org][link]].

Save it as hello-world.org.

orgy serve will notice the modification and rebuild the site for you.

website/
├── hello-world.org
├── index.org
└── public/
    ├── hello-world/
    │   └── index.html
    ├── index.html
    └── ...

The URL slug comes from the filename. The title and date come from the headers. The page automatically appears in the top navigation.

Step 3 - Configure your site

So far orgy used your directory name as the site title. Create a config.edn file at the root of website/:

{:title     "My Notebook"
 :base-url  "https://example.com"
 :copyright "© 2026 Me - CC BY-SA 4.0"
 :menu      ["hello-world"]}

The header now shows your custom title, the footer shows your copyright, and the navigation is limited to what you listed in :menu. Every key in config.edn is optional - add only what you need.

Step 4 - Organize with sections

Any subdirectory becomes a section with its own index. Let's group posts under notes/:

mkdir notes
mv hello-world.org notes/

Add a second post notes/second-post.org:

#+title: Second Post
#+date: 2026-04-11

Another entry.

Update the menu in config.edn:

:menu ["notes"]

After orgy serve has rebuilt the website, you have this:

website/
├── config.edn
├── index.org
├── notes/
│   ├── hello-world.org
│   └── second-post.org
└── public/
    ├── index.html
    └── notes/
        ├── index.html          <= auto-generated section index
        ├── hello-world/
        │   └── index.html
        └── second-post/
            └── index.html

You never wrote a listing page; orgy generated notes/index.html for you!

Step 4 - the section index, generated automatically from the directory contents

Step 5 - Use tags

Add a #+tags: line to any post:

#+title: Hello World
#+date: 2026-04-10
#+tags: emacs org-mode

Orgy creates:

public/tags/
├── index.html          ← all tags with post counts
├── emacs/index.html    ← posts tagged "emacs"
└── org-mode/index.html

A "Tags" link is automatically appended to the navigation.

Step 5 - a tag page listing every post tagged =emacs=

Step 6 - Go multilingual

Want a French version of a post? Just rename the file with a language suffix:

mv notes/hello-world.org notes/hello-world.en.org

And write the translation in notes/hello-world.fr.org:

#+title: Bonjour le monde
#+date: 2026-04-10

Mon premier /billet/.

Orgy detects multilingual mode and switches the output layout:

public/
├── index.html          ← redirects to first language
├── en/
│   ├── index.html
│   ├── feed.xml
│   └── notes/...
└── fr/
    ├── index.html
    ├── feed.xml
    └── notes/...

Each language gets its own homepage, section indexes, tag pages, and RSS feed. A language switcher appears in the nav. The only thing you changed is a filename.

Step 7 - Images and captions

Drop an image anywhere in your content tree, for instance next to the post that uses it:

notes/
├── hello-world.en.org
└── photo.jpg

Orgy copies every non-org file to the output, preserving the path: notes/photo.jpg ends up at public/notes/photo.jpg. No static/ folder, no manual copying, no asset pipeline. Reference it from the post with a plain relative link:

[[./photo.jpg]]

To turn it into a proper <figure> with a caption, add #+caption: above the image:

#+caption: A nice view from the office window
[[./photo.jpg]]
Step 7 - an image rendered as a =<figure>= with its caption

And if you want alignment, add #+attr_html: too:

#+caption: A nice view from the office window
#+attr_html: :align right
[[./photo.jpg]]

For site-wide assets (favicon, custom CSS, shared images), use a static/ directory at the root - its contents are copied verbatim to public/.

Step 8 - Math formulas

Orgy renders LaTeX math out of the box. Write inline math between dollar signs and display equations between \[ and \]. See this example, followed by how it is rendered:

Euler's identity $e^{i\pi} + 1 = 0$ is often called the most
beautiful equation in mathematics.

The Gaussian integral:

\[
\int_{-\infty}^{+\infty} e^{-x^2}\,dx = \sqrt{\pi}
\]

Euler's identity \(e^{i\pi} + 1 = 0\) is often called the most beautiful equation in mathematics.

The Gaussian integral:

\[ \int_{-\infty}^{+\infty} e^{-x^2}\,dx = \sqrt{\pi} \]

No extra configuration is needed. Orgy loads MathJax on any page that contains math, and skips it on pages that don't.

Step 9 - Add a theme

The finishing touch. Add a :theme key to config.edn:

{:title     "My Notebook"
 :base-url  "https://example.com"
 :copyright "© 2026 Me - CC BY-SA 4.0"
 :menu      ["notes"]
 :theme     "teletype"}

Reload - your site now has a full theme loaded from the pico-themes CDN. Try other names like swh, org, lincolk, ashes or doric. You can also point :theme to an https:// URL or a local .css file.

Step 9 - the same site with the =teletype= theme applied

Going further

You now have a real multilingual blog with tags, images, RSS feeds, a sitemap, and a theme - built from plain Org files and a few lines of config. A few things to explore next:

  • orgy init - bootstrap config.edn and the full set of templates/ for customization
  • #+draft: true - exclude a file from the build
  • :quick-search true - enable client-side search
  • :theme-toggle true - add a light/dark switch in the nav
  • orgy help - list all CLI options

Orgy's philosophy: simple things should be simple, complex things should be possible. You just saw the simple half 😀

Enjoy!

👉 More code contributions.

-1:-- Get ready for Orgy in 15 minutes (Post Bastien Guerry)--L0--C0--2026-04-13T00:00:00.000Z

Irreal: Days Until

Charles Choi recently saw a Mastodon post showing the days until the next election and started wondering how one would compute that with Emacs. He looked into it and, of course, the answer turned out to be simple. Org mode has a function, org-time-stamp-to-now, that does exactly that. It takes a date string and calculates the number of days until that date.
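Calling it directly is enough to see the result; a quick sketch (the date string here is just an example):

```elisp
(require 'org)
;; Number of days from today to the given date;
;; negative if the date is in the past.
(org-time-stamp-to-now "2026-12-31")
```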

Choi wrote an internal function that takes a date string and outputs a string specifying the number of days until that date. The default is x days until <date string> but you can specify a different output string if you like. That function, cc/--days-until, serves as a base for other functions.

Choi shows two such functions. One allows you to specify a date from a date picker and computes the number of days until that date. The other—following the original question—computes the number of days until the next midterm and general elections in the U.S. for 2026. It’s a simple matter to change it for other election years. Nobody but the terminally politically obsessed would care about that, but it’s a nice example of how easy it is to use cc/--days-until to find the number of days until some event.

Finally, in the comments to Choi’s reddit announcement ggxx-sdf notes that you can also use calc-eval for these sorts of calculations.

As Choi says, it’s a human characteristic to want to know how long something is going to take. If you have some event that you want a countdown clock for, take a look at Choi’s post.

-1:-- Days Until (Post Irreal)--L0--C0--2026-04-12T14:50:16.000Z

Bicycle for Your Mind: Expanding with Typinator 10

Typinator

Product: Typinator
Price: $49.99 (one time for macOS only) or $29.99/yearly (for macOS and iOS version)

I was a TextExpander user and switched from it to aText when TextExpander went to a subscription model. I've been using Alfred for snippet expansions for well over… Actually I have no idea how long. Ever since Alfred added that feature, I suppose. Expansions which require input are handled by Keyboard Maestro. I wanted to see what was available in this space. There was no good reason for the change; I was perfectly happy with the setup. But I saw that Typinator 10 had been released and I got curious. I approached the developer and they were kind enough to provide me with a license. So, this is the review.

What Does a Text Expansion Program Do?

A text expansion program makes it easy to type content you use regularly. For instance, I have an expansion where I type ,bfym and [Bicycle For Your Mind](http://bicycleforyourmind.com) is pasted into the text. It lessens your typing load, stops you from making mistakes and makes typing easy. Expansions include corrections of common mistakes that you or other people make while typing. It includes emojis and symbols. It can be simple or complex depending on your needs.

macOS has a built-in mode for text expansions, but it is limited and, like a lot of things macOS does, it was included without much attention or developer love. It is lacking in features and finesse. If you are serious about making your writing comfortable and easy, you need to consider third-party solutions. The macOS marketplace has a fair number of programs which tackle this task. The two main products are TextExpander and Typinator. Both Alfred and Keyboard Maestro have this feature built into the program.

Typinator 1

iOS

The main feature in this version of Typinator is the iOS integration. I am not interested in that, so I am not going to talk about it. As far as I know, TextExpander was the only other product which had that integration. Typinator is now matching them. For some people, this is a crucial feature. Going by my experience with this developer, I am sure Typinator works just as well on iOS.

Surprises

Typinator lets me use regex to define expansions. One of the ones which gets used all the time lets me type a period and then the first letter of the next sentence gets capitalized automatically. You have no idea how much I like that. Apple has that as a setting but it is temperamental. Not Typinator. Works like a charm. Thanks to its regex support it does interesting things with dates. I love that feature although I haven’t used it enough to make it super useful. I see the potential there.

Observations

Converting my Alfred snippets to Typinator was easy. Save the snippets in Alfred as a CSV file and then import those into Typinator.

Typinator keeps a record of the number of times you use a particular expansion and the last time you used it. That gives me the ability to monitor the usage of my expansions. Alfred doesn’t do that. I use abbrev-mode in Emacs, and that keeps a running count too. I love that feature.
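For readers curious about the abbrev-mode comparison, defining an Emacs abbrev is a one-liner; a minimal sketch, where the abbreviation and expansion are just examples:

```elisp
;; Typing "bfym" followed by a space or punctuation expands it.
(define-abbrev global-abbrev-table "bfym" "Bicycle For Your Mind")
;; Enable expansion in text buffers.
(add-hook 'text-mode-hook #'abbrev-mode)
```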

Typinator 2

Typinator is easy to interact with. It has a menu-bar icon which you can click on to get the main window or you can assign a system wide keyboard command to bring the window up. You have the ability to highlight something in any editor you are using and press a keyboard command to bring up a dialog box to set up an expansion based on the content you have highlighted. Easy. I find myself using this to increase the number of expansions I have available.

Typinator gives you minute control over the expansions. You have the ability to trigger the expansions immediately upon matching the expansion trigger or after a word break. In other words, you can expand as soon as you match or expand after you type a space or any punctuation after your match. This setting is available on every individual snippet. Every individual snippet can be set for ignoring case or expand on exact match. Another level of fine control which is useful.

This is a mature program. It has been available for a long while now. It is a full-featured expansion program. They have been at it for a while and they are good at it.

Conclusion

If you are looking for a text expansion program, you cannot go wrong with Typinator. It is great at what it does and is full of features which will make you smile. I love it.

I recommend Typinator with enthusiasm.

macosxguru at the gmail thingie.

-1:-- Expanding with Typinator 10 (Post Bicycle for Your Mind)--L0--C0--2026-04-12T07:00:00.000Z

Tim Heaney: Computing Days Until with Perl and Rust

The other day Charles Choi wrote about Computing Days Until with Emacs. I decided to try it in Perl and Rust.

Perl

In Perl, we could do it with just the standard library like so.

 #!/usr/bin/env perl
 use v5.42;
 use POSIX qw(ceil);
 use Time::Piece;
 use Time::Seconds;

 my $target_date = shift // die "\nUsage: $0 YYYY-MM-DD\n";
 my $target = Time::Piece->strptime($target_date, "%Y-%m-%d");
 my $today  = localtime;
 my $delta  = $target - $today;
 say ceil $delta->days;

Subtracting two Time::Piece objects gives a Time::Seconds object, which has a days method.
-1:-- Computing Days Until with Perl and Rust (Post Tim Heaney)--L0--C0--2026-04-12T00:00:00.000Z

Irreal: Magit Support

Just about everyone agrees that the two Emacs packages considered “killer apps” by those considering adopting the editor are Org mode and Magit. I’ve seen several people say they use Emacs mainly for one or the other.

Their development models are completely different. Org has a development team with a lead developer in much the same way that Emacs does. Magit is basically a one man show, although there are plenty of contributors offering pull requests and even fixing bugs. That one man is Jonas Bernoulli (tarsius) who develops Magit full time and earns his living from doing so.

Like most nerds, he hates marketing and would rather be writing code than seeking funding. Still, that thing about earning a living from Magit means that he must occasionally worry about raising money. Now is one such time. Some of his funding pledges have expired and the weakening U.S. dollar is also contributing to his dwindling income.

Virtually every Emacs user is also a Magit user and many of us depend on it so now would be a propitious moment to chip in some money to keep the good times rolling. The best thing, of course, is to get your employer to make a more robust contribution than would be feasible for an individual developer but even if every developer chips in a few dollars (or whatever) we can support tarsius and allow him to continue working on Magit and its associated packages.

His support page is here. Please consider contributing a few dollars. Tarsius certainly deserves it and we’ll be getting our money’s worth.

-1:-- Magit Support (Post Irreal)--L0--C0--2026-04-11T14:24:14.000Z

Listful Andrew: Phones-to-Words Challenge IV: Clojure as an alternative to Java

There's an old programming challenge where the digits in a list of phone numbers are converted to letters according to rules and a given dictionary file. The results of the original challenge suggested that Lisp would be a potentially superior alternative to Java, since Lisper participants were able to produce solutions in, on average, fewer lines of code and less time than Java programmers. Some years ago I tackled it in Emacs Lisp and Bash. I've now done it in Clojure.
-1:-- Phones-to-Words Challenge IV: Clojure as an alternative to Java (Post Listful Andrew)--L0--C0--2026-04-10T11:24:00.000Z

Erik L. Arneson: Emacs as the Freelancer's Command Center

Freelancing for small businesses and organizations leads to a position where you are juggling a number of projects for multiple clients. You need to keep track of a number of tasks ranging from software development to sending emails to project management. This is a lot easier when you have a system that can do a bunch of the work for you, which is why I use Emacs as my freelancer command center.

I would like to share some of the tools and workflows I use in Emacs to help me keep on top of multiple clients’ needs and expectations.

Organization with org-mode

It should be no surprise that at the center of my Emacs command center is org-mode. I have already written about it a lot. Every org-mode user seems to have their own way of keeping track of things, so please don’t take my organizational scheme as some kind of gospel. A couple of years ago, I wrote about how I handle to-do lists in org-mode, and I am still using that method for to-do keywords. However, file structure is also important. I have a number of core files.

Freelance.org

This top-level file contains all of my ongoing business tasks, such as tracking potential new clients and recurring tasks like website maintenance and checking my MainWP dashboard. I also have recurring tasks for invoicing, tracking expenses, and other important business things.

This file is also where I have my primary time tracking and reporting. Org-mode already supports this pretty nicely; I just use the built-in clocktable feature.
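The built-in clocktable is a dynamic Org block; a minimal sketch (the scope and block parameters are examples, not a prescription):

```org
#+BEGIN: clocktable :scope agenda :maxlevel 2 :block thisweek
#+END: clocktable
```

Pressing C-c C-c on the BEGIN line regenerates the report in place.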

Clients/*.org

Clients that have large projects or ongoing business get their own file. This makes organization a lot easier. All tasks associated with a client and their various projects end up in these individual files. The important part is making sure that these files are included in the time-tracking clock table and your org-mode agenda, so you can see what is going on every week.

References and Linking

I have C-c l bound to org-store-link and use it all the time to link to various files, directories, URLs, and even emails. I can then use those links in my client notes, various tasks in my to-do list, and so on. This helps me keep my agenda organized even when my filesystem and browser bookmarks are a bit of a mess.
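The binding itself is a single line; org-insert-link is already on C-c C-l inside Org buffers:

```elisp
;; Store a link to the file, URL, or message at point.
(global-set-key (kbd "C-c l") #'org-store-link)
```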

Email with mu4e

I have been reading and managing my email in Emacs for over 25 years. There have been a few breaks here and there where I have tried out other software or even web mail clients, but it has always been a headache. I return to Emacs! Long ago, I used VM (which seems to have taken on new life!), but currently I use mu4e.

This gives me a ton of power and flexibility when dealing with email. I have custom functions to help me compose and organize my email, and I can use org-store-link to keep track of individual emails from clients as they relate to agenda items. I even have a function to convert emails that I have written in Markdown into HTML email, and one that searches for questions in a client email to make sure I haven’t missed anything.

The ability to write custom code to both process and create email is extremely powerful and a great time saver.

Writing Code

I don’t know what else to say about this: I use Emacs for all of my software development. I make sure to use Eglot whenever there is a language server available, and I try to leverage all the fancy features offered by Emacs whenever possible. The vast majority of projects for clients are PHP (thanks WordPress), Go, JavaScript, and TypeScript.

Writing Words

Previously, I have shared quite a bit about writing in Emacs. I like to start everything in org-mode, but I also write quite a bit in Markdown. Emacs has become a powerful tool for writing. I use the Harper language server along with Eglot to check grammar and spelling.

Track All Changes with Magit

Version control is essential, a lesson I have learned over 30+ years of software development. While Git is not part of Emacs, the software I use to interface with Git is. Magit is a Git user interface that runs entirely in Emacs. I use it to track my writing, my source code, and even all of my org-mode files. Using version control is so essential that I have a weekly repeating agenda task reminding me to check all of my everyday files to make sure I have checked in my changes for the week.

Thinking Music with EMMS

I like to have some soothing background music when I am programming, writing, or otherwise working on my computer. However, if that background music has lyrics, it can be really distracting. It is easy to make a playlist for various suitable SomaFM channels to load into EMMS (the Emacs Multimedia System) using the command M-x emms-play-playlist.

Try saving the following into playlist.el somewhere, and using it the next time you are writing:

 ;;; This is an EMMS playlist file. Play it with M-x emms-play-playlist.
 ((*track* (type . url) (name . "https://somafm.com/synphaera.pls"))
  (*track* (type . url) (name . "https://somafm.com/gsclassic.pls"))
  (*track* (type . url) (name . "https://somafm.com/sonicuniverse.pls"))
  (*track* (type . url) (name . "https://somafm.com/groovesalad.pls")))

And make sure to check out SomaFM’s selection to find some good background music that suits your tastes!

And the tools I have missed

There are undoubtedly Emacs tools that I have missed in this brief overview. I have been wracking my brain as I write, trying to see what I have forgotten or overlooked. Frankly, Emacs has become such a central part of the organization for my freelancing that there are probably many tools, packages, and processes that I use every day without thinking about it too much.

Emacs makes it possible for me to freelance for multiple clients and small businesses without losing my mind with organization and task management. The tools it provides allow me to stay on top of multiple projects, handle client relationships, and keep track of years worth of tasks, communications, and projects. Without it, I’d be sunk!

What Emacs tools are you using to manage your freelance business? I am always looking for ways to improve or streamline my process.

The featured image for this post comes from Agostino Ramelli’s Le diverse et artificiose machine (1588). Read more about it on the Public Domain Review.

-1:-- Emacs as the Freelancer's Command Center (Post Erik L. Arneson)--L0--C0--2026-04-10T00:00:00.000Z

Protesilaos Stavrou: Emacs modus-themes live stream today @ 14:00 Europe/Athens

Raw link: https://www.youtube.com/watch?v=xFQDYTCS1os

[ The stream will be recorded. You can watch it later. ]

At 14:00 Europe/Athens I will hold a live stream about Emacs. Specifically, I will work on my modus-themes package.

The idea is to write more tests and refine the relevant functions along the way.

I am announcing this about 45 minutes before I go live. I will keep the chat open in case there are any questions.

-1:-- Emacs modus-themes live stream today @ 14:00 Europe/Athens (Post Protesilaos Stavrou)--L0--C0--2026-04-10T00:00:00.000Z

James Dyer: Wiring Flymake Diagnostics into a Follow Mode

Flymake has been quietly sitting in my config for years doing exactly what it says on the tin, squiggly lines under things that are wrong, and I mostly left it alone. But recently I noticed I was doing the same little dance over and over: spot a warning, squint at the modeline counter, run `M-x flymake-show-buffer-diagnostics`, scroll through the list to find the thing I was actually looking at, then flip back. Two windows, zero connection between them.

So I wired it up properly, and while I was in there I gave it a set of keybindings that feel right to my muscle memory.

The obvious bindings for stepping through errors are `M-n` and `M-p`, and most people using flymake bind exactly those. The problem is that in my config `M-n` and `M-p` are already taken, they step through simply-annotate annotations (which is itself a very handy thing and I am not giving it up!). So I shifted a key up and went with the shifted variants: `M-N` for next, `M-P` for previous, and `M-M` to toggle the diagnostics buffer.

 (setq flymake-show-diagnostics-at-end-of-line nil)

 (with-eval-after-load 'flymake
   (define-key flymake-mode-map (kbd "M-N") #'flymake-goto-next-error)
   (define-key flymake-mode-map (kbd "M-P") #'flymake-goto-prev-error))

With M-M I wanted it to be a bit smarter than just “open the buffer”. If it is already visible I want it gone, if it is not I want it up. The standard toggle pattern:

 (defun my/flymake--diag-buffer ()
   "Return the visible flymake diagnostics buffer, or nil."
   (seq-some (lambda (b)
               (and (with-current-buffer b
                      (derived-mode-p 'flymake-diagnostics-buffer-mode))
                    (get-buffer-window b)
                    b))
             (buffer-list)))

 (defun my/flymake-toggle-diagnostics ()
   "Toggle the flymake diagnostics buffer."
   (interactive)
   (let ((buf (my/flymake--diag-buffer)))
     (if buf
         (quit-window nil (get-buffer-window buf))
       (flymake-show-buffer-diagnostics)
       (my/flymake-sync-diagnostics))))

Now the interesting bit. What I really wanted was a follow mode, something like how the compilation buffer tracks position or how Occur highlights the current hit. When my point lands on an error in the source buffer, the corresponding row in the diagnostics buffer should light up. That way the diagnostics window becomes a live index of where I am rather than a static dump, and I think in general this is how a lot of other IDEs work.

I tried the lazy route first, turning on hl-line-mode in the diagnostics buffer and calling hl-line-highlight from a post-command-hook in the source buffer. The line lit up once and then refused to move. Nothing I did would shift it. This is because hl-line-highlight is really only designed to be driven from the window whose line is being highlighted, and I was firing it from afar.

Ok, so why not just manage my own overlay:

 (defvar my/flymake--sync-overlay nil
   "Overlay used to highlight the current entry in the diagnostics buffer.")

 (defun my/flymake-sync-diagnostics ()
   "Highlight the diagnostics buffer entry matching the error at point."
   (when-let* ((buf (my/flymake--diag-buffer))
               (win (get-buffer-window buf))
               (diag (or (car (flymake-diagnostics (point)))
                         (car (flymake-diagnostics (line-beginning-position)
                                                   (line-end-position))))))
     (with-current-buffer buf
       (save-excursion
         (goto-char (point-min))
         (let ((found nil))
           (while (and (not found) (not (eobp)))
             (let ((id (tabulated-list-get-id)))
               (if (and (listp id) (eq (plist-get id :diagnostic) diag))
                   (setq found (point))
                 (forward-line 1))))
           (when found
             (unless (overlayp my/flymake--sync-overlay)
               (setq my/flymake--sync-overlay (make-overlay 1 1))
               (overlay-put my/flymake--sync-overlay 'face 'highlight)
               (overlay-put my/flymake--sync-overlay 'priority 100))
             (move-overlay my/flymake--sync-overlay
                           found
                           (min (point-max) (1+ (line-end-position)))
                           buf)
             (set-window-point win found)))))))

My first pass at the walk through the tabulated list did not work. I was comparing (tabulated-list-get-id) directly against the diagnostic returned by flymake-diagnostics using eq, and it was always false, which meant found stayed nil forever and the overlay never moved. A dive into flymake.el revealed why. Each row in the diagnostics buffer stores its ID as a plist, not as the diagnostic itself:

 (list :diagnostic diag
       :line line
       :severity ...)

So I needed to pluck out :diagnostic before comparing. Obvious in hindsight, as these things always are. With plist-get in place, the comparison lines up and the overlay moves exactly where I want it, tracking every navigation command.
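The eq failure is easy to reproduce with a toy row ID in the same shape (a minimal sketch; fake-diagnostic is just a stand-in object, not a real flymake diagnostic):

```elisp
;; A hypothetical row ID shaped like flymake's, and why plist-get is
;; needed before the eq comparison can succeed.
(let* ((diag (list 'fake-diagnostic))           ; stand-in diagnostic object
       (id   (list :diagnostic diag :line 42))) ; ID plist wrapping it
  (list (eq id diag)                            ; whole plist vs. diag: nil
        (eq (plist-get id :diagnostic) diag)))  ; plucked out: t
;; => (nil t)
```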

The fallback lookup using line-beginning-position and line-end-position is there because (flymake-diagnostics (point)) only returns something if point is strictly inside the diagnostic's span. When I land between errors, or on the same line as an error but a few columns off, I still want the diagnostics buffer to track, so I widen the search to the whole line.

Finally, wrap the hook in a minor mode so I can toggle it per buffer and enable it automatically whenever flymake comes up:

 (define-minor-mode my/flymake-follow-mode
   "Sync the diagnostics buffer to the error at point."
   :lighter nil
   (if my/flymake-follow-mode
       (add-hook 'post-command-hook #'my/flymake-sync-diagnostics nil t)
     (remove-hook 'post-command-hook #'my/flymake-sync-diagnostics t)))

 (add-hook 'flymake-mode-hook #'my/flymake-follow-mode)
 (define-key flymake-mode-map (kbd "M-M") #'my/flymake-toggle-diagnostics)

The end result is nice. M-M pops the diagnostics buffer, M-N and M-P walk through the errors, and as I navigate the source the matching row in the diagnostics buffer highlights in step with me. If I close the buffer with another M-M everything goes quiet, and I can still step through with M-N/M-P on their own.

Three little keybindings and twenty lines of elisp, but they turn flymake from a static reporter into something that actually feels connected to where I am in the buffer.

-1:-- Wiring Flymake Diagnostics into a Follow Mode (Post James Dyer)--L0--C0--2026-04-09T05:13:00.000Z

Please note that planet.emacslife.com aggregates blogs, and blog authors might mention or link to nonfree things. To add a feed to this page, please e-mail the RSS or ATOM feed URL to sacha@sachachua.com . Thank you!