Dave Pearson: blogmore.el v4.2

Another wee update to blogmore.el, with a bump to v4.2.

After adding the webp helper command the other day, something about it has been bothering me. While the command is there as a simple helper if I want to change an individual image to webp -- so it's not intended to be a general-purpose tool -- it felt "wrong" that it did this one specific thing.

So I've changed it: rather than being a command that changes an image's filename so that it has a webp extension, it now cycles through a small range of different image formats. Specifically, it goes jpeg to png to gif to webp.

With this change in place I can position point on an image in the Markdown of a post and keep running the command to cycle the extension through the different options. I suppose at some point it might make sense to turn this into something that actually converts the image itself, but this is about going back and editing key posts when I change their image formats.
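A minimal sketch of the cycling idea (this is illustrative, not blogmore's actual code; the names are made up):

```elisp
(defvar my-image-extension-cycle '("jpeg" "png" "gif" "webp")
  "Image extensions to cycle through, in order.")

(defun my-cycle-image-extension (filename)
  "Return FILENAME with its extension advanced to the next in the cycle.
Unknown extensions wrap around to the first entry."
  (let* ((ext (file-name-extension filename))
         ;; Find the tail of the cycle starting at the current extension.
         (rest (member ext my-image-extension-cycle))
         (next (if (and rest (cdr rest))
                   (cadr rest)                      ; next format in the cycle
                 (car my-image-extension-cycle))))  ; wrap back to the start
    (concat (file-name-sans-extension filename) "." next)))

;; (my-cycle-image-extension "cat.jpeg") => "cat.png"
;; (my-cycle-image-extension "cat.webp") => "cat.jpeg"
```

Repeatedly calling this on the filename at point gives exactly the "keep running the command to cycle the extension" behaviour described above.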

Another change is to the code that slugs the title of a post to make the Markdown file name. I ran into the motivating issue yesterday when posting some images on my photoblog. I had a title with an apostrophe in it, which meant that it went from something like Dave's Test (as the title) to dave-s-test (as the slug). While the slug doesn't really matter, this felt sort of messy; I would prefer that it came out as daves-test.

Given that wish, I modified blogmore-slug so that it strips ' and " before doing the conversion of non-alphanumeric characters to -. While doing this, for the sake of completeness, I did a simple attempt at removing accents from some characters too. So now the slugs come out a little tidier still.

(blogmore-slug "That's Café Ëmacs")
"thats-cafe-emacs"

The slug function has been the perfect use for an Emacs Lisp function I've never used before: thread-last. It's not that I've been avoiding it; it's more that I'd never quite felt it was worthwhile until now. Thanks to it the body of blogmore-slug looks like this:

(thread-last
  title
  downcase
  ucs-normalize-NFKD-string
  (seq-filter (lambda (char) (or (< char #x300) (> char #x36F))))
  concat
  (replace-regexp-in-string (rx (+ (any "'\""))) "")
  (replace-regexp-in-string (rx (+ (not (any "0-9a-z")))) "-")
  (replace-regexp-in-string (rx (or (seq bol "-") (seq "-" eol))) ""))

rather than something like this:

(replace-regexp-in-string
 (rx (or (seq bol "-") (seq "-" eol))) ""
 (replace-regexp-in-string
  (rx (+ (not (any "0-9a-z")))) "-"
  (replace-regexp-in-string
   (rx (+ (any "'\""))) ""
   (concat
    (seq-filter
     (lambda (char)
       (or (< char #x300) (> char #x36F)))
     (ucs-normalize-NFKD-string
      (downcase title)))))))

Given that making the slug is very much a "pipeline" of functions, the former looks far more readable and feels more maintainable than the latter.
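For anyone new to thread-last, a toy example (not from blogmore) showing how the threading expands:

```elisp
(require 'subr-x)  ; for string-trim on older Emacs versions

;; thread-last inserts each prior result as the LAST argument of the
;; next form; bare symbols are treated as one-argument calls.
(thread-last "  Hello World  "
             string-trim
             downcase)
;; => "hello world"
;; i.e. this expands to (downcase (string-trim "  Hello World  "))
```

Because each result feeds in as the last argument, it fits functions like replace-regexp-in-string, whose string argument comes last, which is exactly why blogmore-slug reads so cleanly.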

-1:-- blogmore.el v4.2 (Post Dave Pearson)--L0--C0--2026-04-21T18:27:26.000Z

Jean-Christophe Helary: Blogging with Emacs, a new take

So, you’ve tried Hugo, you’ve tried org-publish, but you’re still not satisfied with what you have. Hugo is way too complex and org-publish has a bare-bones "je ne sais quoi" that kind of requires you to code some elisp to get things done.

For people who like Hugo’s auto building & serving but who don’t want to spend hours fiddling with config files to get a fine-looking site, Bastien Guerry has published orgy.

The code is on codeberg and the tutorial is on Bastien’s site.

The whole thing depends on bbin, an installer for babashka scripts. Babashka is a native Clojure interpreter for scripting, implemented using the Small Clojure Interpreter (SCI).

So the stack is SCI > babashka > bbin > orgy.

orgy takes a directory of org files and transforms it into a nice-looking website with navigation, tags, an RSS feed, and plenty of other goodies.

orgy server serves the site on localhost:1888 and automatically rebuilds it after each modification.

orgy was announced on the 14th of April on the French emacs list.

-1:-- Blogging with Emacs, a new take (Post Jean-Christophe Helary)--L0--C0--2026-04-21T10:04:50.187Z

Charlie Holland: A VOMPECCC Case Study: Spotify as Pure ICR in Emacs

1. About   emacs completion

[Image: vompeccc-spot-banner.jpeg]

Figure 1: JPEG produced with DALL-E 3

This is the third post in a series on Emacs completion. The first post argued that Incremental Completing Read (ICR) is not merely a UI convenience but a structural property of an interface, and that Emacs is one of the few environments where completion is exposed as a programmable substrate rather than a sealed UI. The second post broke the substrate into eight packages (collectively VOMPECCC), each solving one of the six orthogonal concerns of a complete completion system.

In this post, I show, concretely, what it looks like when you build with VOMPECCC, by walking through the code of spot, a Spotify client I implemented as a pure ICR application in Emacs.

A word I'll use throughout this post to refer to the use of VOMPECCC in spot is shim, and it is worth qualifying that. The whole package is about 1,100 non-blank, non-comment lines of Lisp1. Roughly 635 of those are infrastructure any Spotify client would need regardless of its UI choices: OAuth with refresh, HTTP transport with error surfacing, a cached search layer, a currently-playing mode-line, a config surface, player-control commands, blah blah blah. The shim is the rest: 493 lines across exactly three files (spot-consult.el, spot-marginalia.el, spot-embark.el) whose entire job is to feed candidates into Consult (source), annotate them with Marginalia (source), and attach actions to them through Embark (source). When I say spot is a shim, I mean those three files, and I'm emphasizing the fact that there is relatively little code. The rest of spot is plumbing that has nothing to do with the completion substrate.

spot implements no custom UI. It has no tabulated-list buffer, no custom keymap for navigation, no rendering code. Every interaction surface (the search prompt, the candidate display, the annotations, and the action menu) is rented from the completion substrate by the 493-line shim.

This post is about the code. Instead of cataloging spot's features (I'll do that when I publish the package to Melpa), I want to show how the code actually hangs together on top of VOMPECCC, with verbatim snippets mapped onto the interaction they produce. If the previous two posts were the why and the what, this one is the how, with a working application to ground the pattern.

2. The Demonstration   consult marginalia embark

Before any code, here is the concrete task the video is solving: I am trying to find a J Dilla song whose title I can't remember; all I recall is that the word don't is somewhere in the track name. The entire post revolves around this one video, so it is worth watching before reading on. Everything that follows is a line-by-line breakdown of the code that produces what you are about to see. In the upper right-hand side of my emacs (in the tab-bar), you'll see the key-bindings and, more importantly, the commands I am invoking to drive spot. (To make this clip easier to digest, you can play, pause, scrub, view in full screen, or view as "Picture in Picture" using the video controls.)

Here is what happens in the clip:

  1. I invoke spot-consult-search and type j dilla. Each keystroke fires an async query against the Spotify Web API, and the result set is streamed into the minibuffer. That is Consult. In my emacs config, Vertico2 renders the candidate set vertically so the per-row metadata is legible.
  2. I use Spotify's query parameters to widen the result set per type. Spotify's search endpoint caps results per content type, so I append parameter flags (--type=track --limit=50, etc.) to ask for a fatter haul across tracks, albums, and artists. The candidates are streamed back through Consult exactly as before, just more of them.
  3. I type ,, the consult-async-split-style character, to switch from remote search to local ICR. Everything before the comma continues to be the API query; everything after is a local narrowing pattern that matches against the candidate set already in hand. No further Spotify requests are issued, and each incremental keystroke only filters the rows Consult is already holding.
  4. I type dont (no apostrophe) looking for the song. The default matching is literal, so "dont" doesn't match "Don't". Zero candidates. The corpus contains the song; my pattern just doesn't. (You thought I did this by mistake didn't you 😜? It actually highlights why fuzzy matching is so important.)
  5. I backspace and prefix the query with ~, the Orderless3 dispatcher for fuzzy matching. ~dont now matches "Don't Cry" (and others) because fuzzy matching tolerates the missing apostrophe. The search set is unchanged; I swapped matching styles without re-querying Spotify. This may sound like a small feature, but consider how much a little fuzz widens the match space of your input strings. This is especially important in an application like Spotify where entity names can be long and difficult to remember.
  6. I append @donuts, the Orderless dispatcher for matching against the Marginalia annotation column rather than the candidate name. That narrows the surviving candidates to tracks whose annotation mentions "donuts" (i.e., tracks on Dilla's Donuts album, my personal favourite), even though the word "donuts" never appears in any track title. The song I was looking for is right there. (note my orderless-component-separator is also ",")
  7. With the track selected, I invoke Embark (embark-act) and press P to play. The P binding dispatches to spot-action--generic-play-uri, which pulls the track's URI off the candidate's multi-data property and sends a PUT to the Spotify player. The song starts playing; no further navigation required.
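For reference, the minibuffer behaviour in steps 3 through 6 hinges on two user options; a plausible config reproducing it (my guess at the setup, not verbatim from spot) is:

```elisp
;; Assumed settings behind the video: "," splits the remote API query
;; from the local narrowing pattern, and also separates Orderless
;; components (as noted in step 6).
(setq consult-async-split-style 'comma)     ; step 3: "," hands off to local ICR
(setq orderless-component-separator ",")    ; step 6: components split on ","
```

Both variables are real Consult/Orderless user options; only the specific values here are inferred from the walkthrough.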

Three VOMPECCC packages are doing the work: Consult (the async streaming + the split-character handoff to local ICR), Marginalia (the metadata column the @ dispatcher just narrowed against), and Embark (the action menu that allows you to play the track, list the album's other tracks, or add it to a playlist). The whole rest of this post is an argument that the code required to make this happen is pleasantly concise, because none of those capabilities (asynchronous search with narrowing, metadata annotation, annotation-aware fuzzy filtering, or contextual actions) needed to be built. They already exist in the VOMPECCC framework, and spot's only job is to feed them data.

3. Anatomy of spot   structure modularity

spot is organized so that each file corresponds to one concern. This is deliberate: the architecture mirrors the modularity of VOMPECCC itself, not because I was trying to be cute (I'm cute enough 👺), but because when your substrate is modular, consuming it modularly is the lowest-friction path.

File                   | Responsibility                                         | Substrate package | LoC
spot-auth.el           | OAuth2 authorization + automatic token refresh timer   | (none)            | 65
spot-generic-query.el  | HTTP request plumbing (sync + async, error surfacing)  | (none)            | 88
spot-search.el         | Cached search against the Spotify API                  | (none)            | 100
spot-generic-action.el | Player control commands (play/pause/next/previous)     | (none)            | 51
spot-mode-line.el      | Currently-playing display                              | (none)            | 115
spot-var.el            | Configuration variables (endpoints, credentials, etc.) | (none)            | 127
spot-util.el           | Alist/hash-table conversions, candidate propertize     | (the glue)        | 52
spot-consult.el        | Seven async Consult sources + consult--multi entry     | Consult           | 194
spot-marginalia.el     | Annotation functions per content type                  | Marginalia        | 159
spot-embark.el         | Keymaps and actions per content type                   | Embark            | 140
spot.el                | spot-mode: wires registries + timers in and out        | (integration)     | 37
Total                  |                                                        |                   | 1128


The breakdown is the whole point of the shim framing. The three substrate-facing files (194 + 159 + 140 = 493 lines) are the part that actually integrates with VOMPECCC. None of that is UI code; there is no UI code in spot. Every pixel the user sees comes from Consult, Marginalia, Embark, or whatever the user has slotted in below them.

One caveat on the 194-line figure for spot-consult.el: roughly 105 of those lines are seven parallel triplets (one source definition, one history variable, and one completion function per Spotify content type), varying only in the narrow key and the :category symbol. A small macro (spot-define-consult-source) would collapse the 105 lines into 7 invocations plus a ~25-line definition, for 30-35 lines total. The honest Consult-facing line count, with redundancy factored out, is closer to 115 than 194, and the whole shim closer to 420 than 493.

The reason I didn't write this macro is that it would muddy the concrete depiction of the VOMPECCC APIs here; honestly, I also tend to avoid over-macroizing, as it layers new and confusing APIs over well-established, intuitive ones.
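For the curious, the macro described above might look something like this (a sketch; spot deliberately doesn't ship it, and the names are taken from the definitions discussed in this post):

```elisp
(defmacro spot-define-consult-source (type narrow-key)
  "Define the history variable, completion function, and Consult source
for the Spotify content TYPE, narrowed with NARROW-KEY."
  (let ((source  (intern (format "spot--consult-source-%s" type)))
        (history (intern (format "spot--history-source-%s" type)))
        (fn      (intern (format "spot--consult-completion-function-consult-%s" type)))
        (cands   (intern (format "spot--candidates-%s" type))))
    `(progn
       (defvar ,history nil)
       (defun ,fn (query)
         (spot--search-cached-and-locked query spot--mutex spot--cache)
         ,cands)
       (defvar ,source
         (list :async (consult--dynamic-collection #',fn :min-input 1)
               :name ,(capitalize (symbol-name type))
               :narrow ,narrow-key
               :category ',type
               :history ',history)))))

;; (spot-define-consult-source track ?t) would expand to the triplet
;; for tracks; six more invocations cover the remaining content types.
```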

4. Candidates as Shared Currency   candidates

Before looking at any of the three VOMPECCC layers individually, there is one piece of code that makes the entire integration possible. It is a short function, and if you understand it, you understand how Consult, Marginalia, and Embark cooperate without knowing anything about each other.

(defun spot--propertize-items (tables)
  "Propertize a list of hash TABLES for display in completion.
Each table is expected to have `name' and `type' keys.  Names are
truncated for display per `spot-candidate-max-width'; the full
name remains accessible via `multi-data'."
  (-map
   (lambda (table)
     (propertize
      (spot--truncate-name (ht-get table 'name))
      'category (intern (ht-get table 'type))
      'multi-data table))
   tables))

Every candidate that spot hands to Consult is a string (the Spotify item's name) carrying two text properties:

  • category is one of album, artist, track, playlist, show, episode, or audiobook. Emacs's completion metadata protocol uses this property to route candidates to the right annotator and the right action keymap. Marginalia reads it to pick an annotator; Embark reads it to pick a keymap. The two packages never talk to each other, and yet they agree on every candidate's type, because both are reading the same Emacs-standard property.
  • multi-data is the raw hash table the Spotify API returned for this item: the full JSON response with every field the API exposes. Marginalia's annotator reads from it to format the margin; Embark's actions read from it to execute playback, to navigate to an album's tracks, to add to a playlist. The candidate is the full record; the name is just the visible handle. The name multi-data is spot's own designation, not a Consult or Marginalia convention (the multi- prefix is unrelated to consult--multi); any symbol would have worked. What is conventional is attaching the domain record to the candidate via propertize in the first place.

Marginalia and Embark never talk to each other. They both read the same text property on the same candidate, and that is enough.

That is the entire integration surface: one string (display name) and two props (category and metadata). Everything else (the async fetching, the narrowing, the annotation columns, the action menu) is handled by VOMPECCC, keyed on those two properties. This is a key takeaway for those looking to build with VOMPECCC: build your candidates like this and you will have a good time on the mountain.

This is what I meant in the first post when I called completion a substrate rather than a UI. A UI would be "here is a widget, bind data to it." A substrate is "here is a common currency (candidates with standard properties); tools that speak the currency can be mixed freely."
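To make the shared currency concrete, here is a toy round trip over the convention, using only built-in text-property functions (an alist stands in for the API response; spot itself uses ht.el hash tables):

```elisp
(let* ((record '((name . "Donuts") (type . "album")))  ; stand-in domain record
       (cand (propertize "Donuts"
                         'category 'album      ; routes annotator + action keymap
                         'multi-data record))) ; the full record rides along
  ;; Any consumer can recover both properties from the candidate string:
  (list (get-text-property 0 'category cand)
        (alist-get 'name (get-text-property 0 'multi-data cand))))
;; => (album "Donuts")
```

This is all Marginalia and Embark ever do with a spot candidate: read those two properties back off the string.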

5. Consult: Defining the Search Surface   consult async narrowing

Consult is spot's front door. It gives me things I would otherwise have had to build from scratch: async candidate streaming, multi-source unification with narrowing keys, history, and probably other things I'm forgetting. Here is one of the seven source definitions spot uses:

(defvar spot--consult-source-track
  `(:async ,(consult--dynamic-collection
             #'spot--consult-completion-function-consult-track
             :min-input 1)
    :name "Track"
    :narrow ?t
    :category track
    :history spot--history-source-track)
  "Consult source for Spotify tracks.")

A Consult source is just a plist. The interesting keys are:

  • :async is the candidate stream. consult--dynamic-collection is the de-facto extension point third-party packages have settled on for async sources, despite the double-dash that conventionally marks it internal4. It wraps a function that takes the current minibuffer input and returns a list of candidates. Consult handles the debouncing and the "only recompute when the input changes" logic on its side; my code just has to produce candidates for a given query. :min-input 1 prevents a search on an empty query. This is the two-level async filtering that Consult is designed around: the external tool (Spotify's API, in this case) handles the expensive filtering against its own corpus, and my completion style (Orderless, if I have it) narrows the returned set locally.
  • :narrow ?t binds the narrowing key. In the video, I could have pressed t SPC when running spot-consult-search, and the session would have been scoped to tracks only, and would have avoided querying the other sources. I didn't implement narrowing; Consult did. I just declared which character maps to which source!
  • :category track is the property that will propagate onto every candidate from this source. This is the same category property that spot--propertize-items stamps on individual candidates, and it is the hinge that Marginalia and Embark both key off.
  • :history gives me free persistent search history for this source, isolated from the other sources.

The completion function itself is trivial because all the work happens in spot-search.el:

(defun spot--consult-completion-function-consult-track (query)
  "Return track candidates for QUERY."
  (spot--search-cached-and-locked query spot--mutex spot--cache)
  spot--candidates-track)

Seven of these functions exist, one per content type, all identical except for which global they return. The heavy lifting (the HTTP call, the cache, the propertization) is shared. Each source is effectively a view onto a single search result split by type.

Putting all seven sources together into one interface is also trivial:

(defvar spot--search-sources
  '(spot--consult-source-album spot--consult-source-artist
    spot--consult-source-playlist spot--consult-source-track
    spot--consult-source-show spot--consult-source-episode
    spot--consult-source-audiobook)
  "List of consult sources for Spotify search.")

;;;###autoload
(defun spot-consult-search (&optional initial)
  "Search Spotify with consult multi-source completion.
Optional INITIAL provides initial input."
  (interactive)
  (consult--multi
   spot--search-sources
   :history '(:input spot--consult-search-search-history)
   :initial initial))

This is the command you saw in the video. consult--multi takes the list of sources, unifies their candidates into a single list, and wires the narrowing keys. Seven heterogeneous content types, one prompt, one keystroke to filter to any subset, async throughout, with per-source history.

Compare this to the counterfactual. Without Consult I would need: a separate candidate display, an async debouncer, a narrowing mechanism, per-source history buffers, and some way to visually distinguish content types in a single list. And because Consult uses the standard completing-read contract, every minibuffer feature my Emacs already has (Vertico's display, Orderless's matching, Prescient's sorting) applies to spot with zero integration code.

6. Why the Cache?   async ratelimits

I have been brushing past a detail of spot-consult.el that deserves its own section, because it is the honest cost of building on an async-on-every-keystroke substrate. consult--dynamic-collection wires the completion function to the minibuffer such that it is invoked on (a debounced version of) every keystroke the user types. For spot, each invocation issues an HTTP request to Spotify's Web API, receives a mixed-type result set, splits it across the seven global candidate lists, and returns the slice relevant to the calling source. That is the hot path. And the hot path is a rate-limited network call.

Spotify's Web API is rate-limited 🙃. Exact limits are dynamic and not publicly documented in detail, but the envelope is small enough that a rapid-typing ICR session can hit it quickly. Consider the baseline: typing radiohead fires a completion call for each prefix the user's typing pauses on (Consult's consult-async-input-debounce and consult-async-input-throttle collapse runs of keystrokes into a smaller set of actually-issued calls, but realistically that still leaves several distinct prefixes per word). Now add the common real-world pattern of typing too far, backspacing a few characters, and retyping: the same query string is re-issued within the same search session. Without a cache, each repetition burns a request, but with a cache keyed on the raw query string, repeats are actually free (or at least as cheap as a cache hit):

(defun spot--search-cached (query cache)
  "Search for QUERY, using CACHE to avoid duplicate requests."
  (when (not (ht-get cache query))
    (let ((results (spot--propertize-items
                    (spot--union-search-items
                     (spot--search-items query)))))
      (ht-set cache query results)))
  (let ((results (ht-get cache query)))
    (spot--set-search-candidates results)))

The cache is a hash table from query strings to propertized candidate lists. It lives for the life of the Emacs session, so not only backspace-and-retype within one search but also the next search session that hits the same prefix is instant. The memory cost is negligible (a few hundred candidates per query, small hash tables for each) and the request-budget win is real. And if you find yourself listening to the same music over and over, then you'll have snappier results when you go down familiar paths.

Async-on-every-keystroke against a remote corpus is the feature. A query-string cache is the bill.

This is the honest consumer tax of the substrate. The first post sold you on ICR by promising that the interaction scales constantly regardless of how big the underlying corpus gets. That claim depends on async sources that fire on every keystroke against a remote corpus, and that in turn means you as package author inherit rate-limit pressure your users never see. Consult gives you the debouncer, the display, the narrowing keys, and the stale-response discarding on its side of the protocol. The cache is what you owe back on your side when your candidate source is a rate-limited network API rather than a local list, and it is exactly the kind of infrastructure that does not belong in Consult itself (because Consult has no way to know your backend is rate-limited, or which queries are equivalent enough to cache together).
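Consult's side of the bargain is tunable. These are real Consult user options; the values here are illustrative, not spot's actual settings:

```elisp
;; Knobs Consult exposes for async sources; a backend cache like
;; spot's sits behind whatever survives these filters.
(setq consult-async-min-input 1         ; don't query on an empty prompt
      consult-async-input-debounce 0.4  ; idle seconds before a query fires
      consult-async-input-throttle 0.8) ; minimum spacing between queries
```

Raising the debounce and throttle trims the number of distinct prefixes that ever reach the API, which compounds with the query-string cache.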

7. Marginalia: Promoting Candidates into Informed Choices   marginalia

If you watch the video carefully, each track in the candidate list is followed by a horizontally aligned column of fields: #<track-number>, artist, an M:SS duration, album name, album type, release date. Each field is rendered to a fixed width in its own face, so numbers and dates and names land as visually distinct columns rather than getting mashed together with a delimiter. Small glyph prefixes disambiguate otherwise bare numbers (# marks counts; dedicated glyphs mark popularity and followers). That column is provided by Marginalia, and it comes from one function:

(defun spot--annotate-track (cand)
  "Annotate track CAND with number, artist, duration, album, type, and date.
The track number is prefixed with `#' and duration rendered as M:SS."
  (let ((data (get-text-property 0 'multi-data cand)))
    (marginalia--fields
     ((spot--format-count (ht-get data 'track_number))
      :format "#%s" :truncate 5 :face 'spot-marginalia-number)
     ((spot--annotation-field (spot--first-name (ht-get data 'artists)))
      :truncate 25 :face 'spot-marginalia-artist)
     ((spot--format-duration (ht-get data 'duration_ms))
      :truncate 7 :face 'spot-marginalia-number)
     ((spot--annotation-field (ht-get* data 'album 'name))
      :truncate 30 :face 'spot-marginalia-album)
     ((spot--annotation-field (ht-get* data 'album 'album_type))
      :truncate 8 :face 'spot-marginalia-type)
     ((spot--annotation-field (ht-get* data 'album 'release_date))
      :truncate 10 :face 'spot-marginalia-date))))

The first line is the only plumbing: (get-text-property 0 'multi-data cand) pulls the full Spotify API response off the candidate (exactly the hash table spot--propertize-items stashed earlier), and everything after it is Marginalia's own marginalia--fields macro doing the formatting. marginalia--fields handles the alignment, the per-field truncation, and the face application. The only thing my code does is declare which fields of the Spotify payload go in which columns with which faces. This is another substrate borrow hiding in plain sight: Marginalia registers the annotator and formats its output. I never wrote a single character of alignment, padding, or colourisation logic. The annotator reached into multi-data for its fields, Marginalia's macro did the cosmetic work, and Marginalia never had to know about Spotify's data model.

spot ships seven annotators. Each one is a domain-specific projection of a single Spotify response type onto a display string. Albums surface artist, release date, and track count; artists surface popularity and follower count; shows surface publisher, media type, and episode count. All this context is really important, especially if you are 'browsing'. The annotators are independent of the search code, independent of the actions code, and independent of each other.

Registering them with Marginalia is three lines of bookkeeping:

(defvar spot--marginalia-annotator-entries
  '((album spot--annotate-album none)
    (artist spot--annotate-artist none)
    (playlist spot--annotate-playlist none)
    (track spot--annotate-track none)
    (show spot--annotate-show none)
    (episode spot--annotate-episode none)
    (audiobook spot--annotate-audiobook none))
  "List of marginalia annotator entries registered by spot.")

(defun spot--setup-marginalia ()
  "Register spot annotators with marginalia."
  (dolist (entry spot--marginalia-annotator-entries)
    (add-to-list 'marginalia-annotators entry)))

The spot--marginalia-annotator-entries list keys on the category symbol (album, artist, and so on), the very same symbols the Consult sources stamp onto their candidates. Marginalia looks up the category of the current candidate in marginalia-annotators, finds the entry, and runs the annotator. No spot code is in that path. I only had to declare the mapping.

This is where one of the most interesting benefits described in the second post shows up concretely. That post mentioned that because Marginalia annotations are themselves searchable, Orderless's @ dispatcher lets you match against annotation text. spot did not ship this feature. Orderless and Marginalia did, for free, because I stamped the annotation onto the candidate in the right way.

8. Embark: The Action Layer   embark composition

The third leg of spot's tripod is Embark. In the video, pressing the Embark action key on any candidate surfaces a menu of single-letter actions appropriate to that kind of candidate: P plays it, s shows its raw data, t lists its tracks (on albums and artists), + adds it to a playlist (on tracks). Each of those actions is a one-function definition in spot-embark.el, and their binding to candidates is declarative.

The simplest action is play:

(defun spot-action--generic-play-uri (item)
  "Play the Spotify item represented by ITEM."
  (let* ((table (get-text-property 0 'multi-data item))
         (type (ht-get table 'type))
         (offset (cond
                  ((string= type "track") `(("uri" . ,(ht-get* table 'uri))))
                  ((string= type "playlist") '(("position" . 0)))
                  ((string= type "album") '(("position" . 0)))
                  ((string= type "artist") nil)))
         (context_uri (cond
                       ((string= type "track") (ht-get* table 'album 'uri))
                       ((string= type "playlist") (ht-get* table 'uri))
                       ((string= type "album") (ht-get* table 'uri))
                       ((string= type "artist") (ht-get* table 'uri))))
         ...
         (spot-request-async
          :method "PUT"
          :url spot-player-play-url ...))))

Same pattern as the annotators: (get-text-property 0 'multi-data item) pulls the full hash table off the candidate, and the rest is Spotify domain logic. Embark invokes my action with the candidate that was highlighted; my action handles the HTTP.

The keymap wiring is also just bookkeeping:

(defvar-keymap spot-embark-track-keymap
  :parent embark-general-map
  :doc "Keymap for Spotify track actions.")

;; ... one keymap per content type ...

(defvar spot--embark-keymap-entries
  '((album . spot-embark-album-keymap)
    (artist . spot-embark-artist-keymap)
    (playlist . spot-embark-playlist-keymap)
    (track . spot-embark-track-keymap)
    (show . spot-embark-show-keymap)
    (episode . spot-embark-episode-keymap)
    (audiobook . spot-embark-audiobook-keymap)
    ...))

(dolist (map (list spot-embark-artist-keymap spot-embark-album-keymap
                   spot-embark-playlist-keymap spot-embark-track-keymap
                   ...))
  (define-key map "s" #'spot-action--generic-show-data)
  (define-key map "P" #'spot-action--generic-play-uri))

(define-key spot-embark-track-keymap "+" #'spot-action--add-track-to-playlist)
(define-key spot-embark-album-keymap  "t" #'spot-action--list-album-tracks)
(define-key spot-embark-artist-keymap "t" #'spot-action--list-artist-tracks)
(define-key spot-embark-playlist-keymap "t" #'spot-action--list-playlist-tracks)

Again, the dispatch keys off category. Embark looks up the current candidate's category in embark-keymap-alist, finds the matching keymap, and opens it. Every layer of this integration is the same trick: a candidate carries a category property, and the substrate routes based on it. All three VOMPECCC packages work on the same candidates, sharing the same category convention, never importing each other.
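For completeness, the registration half that installs those keymaps is presumably the same three-line shape as the Marginalia one shown earlier (a sketch, assuming the spot--embark-keymap-entries variable above):

```elisp
(defun spot--setup-embark ()
  "Register spot's per-category keymaps with Embark."
  (dolist (entry spot--embark-keymap-entries)
    (add-to-list 'embark-keymap-alist entry)))
```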

8.1. Composition: When an Action Opens Another Search   composition chaining

One action in particular is worth reading slowly, because it closes the loop the thought exercise in the first post opened:

(defun spot-action--list-album-tracks (item)
  "Search for tracks on the album represented by ITEM."
  (let* ((table (get-text-property 0 'multi-data item))
         (album-name (ht-get* table 'name))
         (artist-name (ht-get* (nth 0 (ht-get* table 'artists)) 'name)))
    (spot-consult-search
     (concat
      "album:" album-name
      " "
      "artist:" artist-name " -- --type=track"))))

This action runs when I am in a completion session, invoke Embark on an album candidate, and press t. It extracts the album name and artist from the multi-data, builds a Spotify query using Spotify's field-filter syntax (album:X artist:Y), and calls spot-consult-search again: the same entry point the user invoked initially.

Nice!!! What just happened? An Embark action on a candidate produced by a Consult source launched a new Consult session, scoped to the selected candidate, in the same substrate, with the same annotators, and the same available actions. The chaining pattern from the first post ("ICR to pick a thing, which scopes the candidate set for the next ICR") is literally three lines of spot code, because the substrate composes oh so cleanly with itself.

The first post described this as the shell's git branch | fzf | xargs git checkout pattern in miniature. In spot, the pipe is embark-act, and the downstream command is another consult--multi. It is the same compositional shape; the surface it runs on is different.

9. The Integration Point: spot-mode

Both registries (Marginalia's annotator alist and Embark's keymap alist) plus the two background timers (mode-line updates and access-token refresh) get installed and uninstalled in one place:

;;;###autoload
(define-minor-mode spot-mode
  "Global minor mode for the spot Spotify client.
Registers embark keymaps, marginalia annotators, starts the
mode-line update timer, and starts a periodic access-token
refresh timer when enabled.  Cleanly removes all integrations
when disabled."
  :global t
  :group 'spot
  (if spot-mode
      (progn
        (spot--setup-embark)
        (spot--setup-marginalia)
        (spot--start-update-timer)
        (spot--start-refresh-timer))
    (spot--teardown-embark)
    (spot--teardown-marginalia)
    (spot--stop-update-timer)
    (spot--stop-refresh-timer)))

This is the entire integration layer. Toggle the mode, spot's categories appear in Marginalia and Embark and the two timers begin ticking. Toggle it off, they all disappear. No global state mutation escapes the teardown path.

And by the way, a user who never installs Marginalia or Embark still gets a working spot; the setup functions no-op gracefully (all they do is add-to-list against someone else's variable), so that user just doesn't get annotations or actions. The "stack what you want, subset what you don't need" property of VOMPECCC propagates through to spot as a consumer: the package is graceful under any subset of VOMPECCC.

10. The Counterfactual: What spot Would Look Like Without VOMPECCC

To see what spot isn't building, look at the negative space.

A pre-VOMPECCC Spotify client (see smudge for an example that predates the modern completion ecosystem) has to build the UI itself: a tabulated-list-mode buffer with its own keymap, its own rendering code, its own pagination, its own selection logic. That approach works and can work well. But the cost is structural: a bespoke UI is a parallel universe of interaction that does not benefit from any completion infrastructure the user has already invested in. You have to learn its bindings, and frustratingly, these don't carry over to any other Emacs tool.

The architecture was entirely reasonable when there was nothing else to build on. The point here is purely structural: once the substrate exists, reinventing the UI on top of it is a strictly larger codebase that delivers a strictly less interoperable experience. spot is about 1,100 lines of Lisp, and its interface, as we've shown, is closer to 420 lines of Lisp. A pre-substrate equivalent is many times that, and much of the delta is code implementing things (display, filtering, selection, action menus) that Consult, Marginalia, and Embark implement once, centrally, for every completion-driven command in the user's Emacs.

This is the gap the first post was pointing at when it distinguished using completion from building on completion. A package that uses completion is a consumer of completing-read. A package that builds on completion assumes the existence of a richer substrate (async sources, categorized candidates, annotator hooks, action keymaps) and contributes into that substrate rather than rebuilding around it.

11. What This Says About the Substrate

Three things follow.

First, the cost of building an ICR-driven app collapses once the substrate exists. spot is about 1,100 lines including OAuth, token refresh, HTTP, caching, the mode-line, and the integration glue. The three VOMPECCC files (spot-consult.el, spot-marginalia.el, spot-embark.el) are together under 500 lines, much of it boilerplate per content type. A feature-competitive pre-VOMPECCC Spotify client would easily have been several thousand lines larger.

Second, composition is the feature, not the packages. The list-album-tracks action is the most important ten lines in the repository, not because of what it does (a Spotify query), but because of what it demonstrates: an Embark action on a Consult candidate launching a new Consult session in the same substrate. Every ICR-driven package in your Emacs configuration that shares this substrate composes with every other one. embark-export on a spot result set could, in principle, produce a native mode for Spotify results, the same way it produces Dired from file candidates or wgrep from ripgrep hits. The composability is a property of the substrate, not of any individual package.

Third, the category property is doing an enormous amount of load-bearing work. Three different packages, each knowing nothing about the others, all agree on the right behavior for every candidate because they are keying off the same standardized property 'category. The "text" in the protocol is (candidate . (category . metadata)), and every tool that speaks the protocol interoperates for free.
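A hedged sketch of that protocol (the property names category and multi-data come from the post; the constructor name and signature are assumptions for illustration):

```elisp
;; Sketch only: turn a typed API item into a completion candidate
;; that carries its type and metadata as text properties.
(defun my-propertize-item (name category table)
  "Return NAME propertized with CATEGORY and its metadata TABLE."
  (propertize name
              'category category    ; what Marginalia/Embark key off
              'multi-data table))   ; the domain metadata payload
```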

12. Generalizing the Pattern Beyond Spotify

spot is specifically a Spotify client, but nothing about the recipe it follows is Spotify-specific. Strip the domain out and what remains is a six-step shape that applies to an enormous fraction of the services and data sources you interact with daily:

  1. An API or backend that returns typed items: each item has a type discriminator and a bag of metadata.
  2. A candidate-constructor (the spot--propertize-items analogue) that turns those items into completion candidates with a category text property and a multi-data payload.
  3. A Consult source per type, async, with a narrow key, all unified under a consult--multi entry point.
  4. A Marginalia annotator per type, keyed on category, reading the multi-data payload for its domain metadata.
  5. An Embark keymap per type, keyed on category, binding single-letter actions that operate on the multi-data payload.
  6. A minor mode that installs and uninstalls the three registries together. This one can even be optional, but I recommend doing it.
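The six steps above can be sketched as a skeleton. Every name here is hypothetical (a made-up domain `foo`); only the registry variables (marginalia-annotator-registry, embark-keymap-alist) and the consult--multi source shape are real APIs:

```elisp
;; Hypothetical skeleton of the recipe for some domain `foo'.
(defun foo--propertize (item)                      ; step 2
  (propertize (foo-item-name item)
              'category 'foo-thing
              'multi-data item))

(defvar foo--source-things                         ; step 3
  `(:name "Things" :narrow ?t :category foo-thing
    :items ,(lambda () (mapcar #'foo--propertize (foo-api-list)))))

(defun foo-search ()                               ; step 3 entry point
  (interactive)
  (consult--multi '(foo--source-things)))

(add-to-list 'marginalia-annotator-registry        ; step 4
             '(foo-thing foo--annotate))

(add-to-list 'embark-keymap-alist                  ; step 5
             '(foo-thing . foo-embark-thing-map))
```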

Any domain that fits that shape can be built the same way. The thought exercise from the first post (which of your daily tools reduces to "pick a thing, act on it" over a typed corpus?) has a lot of concrete answers: issue trackers, cloud consoles, email, chat, package managers, news feeds, knowledge bases, code hosting. Two worked examples are enough to sketch the altitude:

  • Issue trackers. Types are issue / epic / comment / user, metadata is status / assignee / priority / labels, actions are transition / assign / comment / close.
  • Code hosting. consult-gh already does the GitHub version. Types are repo / PR / issue / branch / release / user, metadata is state / author / date / counts, actions are clone / checkout / review / merge / close.

Several domains already ship as working packages: consult-gh, consult-notes, consult-omni, consult-tramp, consult-dir, and many others. None of these packages ships a UI; they all (roughly) follow the same six-step recipe spot follows, and each one composes with every other one automatically.

The more interesting exercise is the shape of domains that don't cleanly fit. The pattern starts to strain when items aren't naturally enumerable, or when the right interaction is a canvas rather than a list (a map, a timeline, a dependency graph). Those cases need something more than ICR. What I find remarkable is how often even those interfaces still have an ICR-shaped core (pick a location on the map, pick a node on the graph, pick a frame on the timeline), which could be delegated to the substrate while the custom-UI parts focus on what genuinely needs rendering.

The concrete-enough test I apply to any new Emacs workflow I'm considering building: can I express it as a Consult source, a Marginalia annotator, and an Embark keymap? If yes, the package will be mostly a client of the VOMPECCC API. If no, the package needs custom UI, and I should be deliberate about which parts genuinely do and which parts could still be delegated. spot is the case where the answer is a clean "yes across the board", but I've found that more often than not, the answer is yes for the first draft.

13. Conclusion

This post took a working application and showed what the argument looks like when you cash it in.

If there is one thing I want a reader to take away from the series, it is the reframe. Completion is not a convenience feature you turn on and forget about. It is the primitive on which a surprising fraction of your Emacs interaction either already runs or could run, if you let it. Packages that treat it that way end up smaller, more interoperable, and more amenable to composition than packages that treat it as one feature among many. spot is one example.

The broader claim, which I will leave you with, is that "packages that do one thing" is the lazy reading of the Unix philosophy. The sharper reading is "packages that contribute into a shared substrate." Unix pipes were never interesting because each command was small; they were interesting because every command produced and consumed plain text. VOMPECCC is interesting for the same reason, with candidates-with-properties instead of plain text. spot was easy to write because the substrate is good. Many things in your Emacs configuration could be rewritten today as "ICR applications on the substrate" and would be smaller, cleaner, and more composable as a result.

When you next find yourself thinking "I wish there were a better way to browse X", ask whether it could just be a Consult source, a Marginalia annotator, and an Embark keymap. Surprisingly often, that is the entire package, and all you have to do is feed it data.

14. TLDR

spot is a Spotify client for Emacs that implements no custom UI. About 493 of its ~1,100 lines are the "shim" that feeds candidates into Consult, Marginalia, and Embark via a single text-property pattern (category plus multi-data); the remaining ~635 are plumbing any Spotify client would need regardless of UI. The six-step recipe (typed items → propertize → Consult source per type → Marginalia annotator per type → Embark keymap per type → minor mode) generalizes to issue trackers, cloud consoles, email, chat, knowledge bases, and more, many of which already ship as working packages (consult-gh, consult-notes, consult-omni). The claim the series has been building toward: when the substrate is good, ICR applications collapse to their domain logic, and "packages that contribute into a shared substrate" is the sharper reading of the Unix philosophy.

Footnotes:

1

As of the version being discussed, the eleven .el files in the repository total about 1,128 non-blank, non-comment lines. Not a large package by any measure.

2

Vertico is the vertical minibuffer UI you see in the video. It is not part of the spot package; it is a piece of my personal Emacs configuration, one of the VOMPECCC packages the user slots in underneath a consumer like spot. A different user could run spot with fido-vertical-mode, Helm, Ivy, or plain default completing-read; the candidates and their annotations would be unchanged, only the rendering would differ.

3

Orderless is the completion style that powers the ~ (fuzzy) and @ (annotation) dispatchers in the video. Like Vertico, it is configured in my personal Emacs setup, not shipped with spot. One detail worth calling out: Orderless's default annotation dispatcher is &, not @. I remap it to @ in my own config, so the @donuts you see in the video is specific to my setup; out of the box you would type &donuts to get the same behavior. The dispatcher characters are fully user-configurable, and users on an entirely different completion style (flex, substring, basic) will see different filtering behavior.

4

The double-dash convention in Elisp marks a symbol as internal to its package. consult--dynamic-collection is formally one of those. In practice it is the extension point third-party async Consult sources have all settled on, and Daniel Mendler has been careful about signalling breaking changes in the Consult changelog when its shape does shift. spot pins consult > 1.0 for this reason.

[Post: A VOMPECCC Case Study: Spotify as Pure ICR in Emacs, Charlie Holland, 2026-04-21]

James Dyer: Highlighting git changes in a buffer with diff-hl

Lately I’ve found myself wanting a better, more fine-grained view of what’s going on in a file under git. For some reason, my default workflow has been to keep jumping in and out of project-vc-dir to check changes. It gets the job done, but honestly it’s a bit of a hassle.

What I really wanted was something right there in the buffer. Not a full-on inline diff (that gets messy fast I would guess), but just a small visual hint, something that lets me “see” what’s changed without breaking my flow.

Turns out, that’s exactly what diff-hl does!

It’s super lightweight and just highlights changes in the fringe. Nothing flashy but just enough to keep you aware of what you’ve modified. Once you start using it, it feels kind of weird not having it.

One thing I really like is how nicely it plays with the built-in VC tools: move to a buffer position that aligns with a highlighted change, hit C-x v =, and it jumps straight to the relevant hunk in the diff. No friction, no extra thinking; it just works.

Here’s the setup I’m using:

(use-package diff-hl
  :ensure t
  :hook (dired-mode . diff-hl-dired-mode)
  :config
  (global-diff-hl-mode 1)
  (diff-hl-flydiff-mode 1)
  (unless (display-graphic-p)
    (diff-hl-margin-mode 1)))

By default, diff-hl-mode only updates when you save the file. That’s okay, but enabling diff-hl-flydiff-mode makes it update as you type, which feels more intuitive.

Oh, and that dired-mode hook? That turns on diff-hl-dired-mode, which gives you a quick visual overview of changed files right inside dired. It’s one of those small touches that ends up being surprisingly useful.

If you’ve got repeat-mode enabled, you can also hop through changes with C-x v ] and C-x v [, which makes reviewing edits really smooth.

I am enjoying diff-hl; it is quietly improving my workflow without getting in my way. Simple, fast, and just really nice to have.

[Post: Highlighting git changes in a buffer with diff-hl, James Dyer, 2026-04-21]

Sacha Chua: 2026-04-20 Emacs news

I enjoyed reading Hot-wiring the Lisp machine (an adventure into modifying Org publishing). I'm also looking forward to debugging my Emacs Lisp better with timestamped debug messages and ert-play-keys. I hope you also find lots of things you like in the links below!

Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

View Org source for this post

You can comment on Mastodon or e-mail me at sacha@sachachua.com.

[Post: 2026-04-20 Emacs news, Sacha Chua, 2026-04-20]

Emacs Redux: Batppuccin and Tokyo Night Themes Land on MELPA

Quick heads-up: my two newest Emacs themes are now on MELPA, so installing them is a plain old package-install away.

  • batppuccin is my take on the popular Catppuccin palette. Four flavors (mocha, macchiato, frappe, latte) across the dark-to-light spectrum, each defined as a proper deftheme that plays nicely with load-theme and theme-switching packages.
  • tokyo-night is a faithful port of folke’s Tokyo Night, with all four upstream variants included (night, storm, moon, day).

Both themes come with broad face coverage out of the box (e.g. magit, vertico, corfu, marginalia, transient, flycheck, doom-modeline, and many, many more), a shared palette file per package, and the usual *-select, *-reload, and *-list-colors helpers.

Installation is now as simple as you’d expect:

(use-package batppuccin-theme
  :ensure t
  :config
  (load-theme 'batppuccin-mocha t))

(use-package tokyo-night-theme
  :ensure t
  :config
  (load-theme 'tokyo-night t))

If you’re curious about the design decisions behind these themes, I’ve covered the rationale in a couple of earlier posts. Batppuccin: My Take on Catppuccin for Emacs explains why I bothered with another Catppuccin port when an official one already exists. Creating Emacs Color Themes, Revisited zooms out to the broader topic of building and maintaining Emacs themes in 2026.

Give them a spin and let me know what you think. That’s all I have for you today. Keep hacking!

[Post: Batppuccin and Tokyo Night Themes Land on MELPA, Emacs Redux, 2026-04-20]

Mike Olson: Fixing typescript-ts-mode in Emacs 30.2

Contents

The Symptom

After a recent Arch update, my Emacs 30.2 + typescript-ts-mode combination started dying the first time I opened a .ts or .tsx file:

Error: treesit-query-error ("Invalid predicate" "match")

The file would still display, but without any syntax highlighting. python-ts-mode exhibited the same failure. js-ts-mode and c-ts-mode worked in the main buffer but had their own breakages around JSDoc ranges and C's Emacs-specific range queries.

The Root Cause

This is Emacs bug#79687, an interaction between how Emacs 30.2 serializes tree-sitter query predicates and what libtree-sitter 0.26 (the version shipped by Arch) accepts.

Tree-sitter queries can embed predicates like (:match "^foo" @name) to filter captures at query-evaluation time. Emacs 30.2 serializes these s-expression predicates to strings that look like #match (no trailing ?), but libtree-sitter 0.26 became strict about predicate naming and rejects unknown names at query-parse time. The fix on Emacs master (commit b0143530) switches serialization to #match?, which libtree-sitter accepts. That fix has not been backported to the emacs-30 branch as of 30.2.

Rewriting the strings yourself doesn’t help either, because Emacs 30.2’s own predicate dispatcher hardcodes bare match/equal/pred and rejects match?/equal?/pred? at evaluation time. So any rewrite that satisfies libtree-sitter breaks Emacs, and vice versa.

The Approach

Since neither side accepts a string-level rewrite, I work at a higher level instead: strip the predicates entirely from queries, and move the predicate logic into capture-name-is-a-function fontifiers.

A tree-sitter font-lock rule like:

((identifier) @font-lock-keyword-face
 (:match "\\`\\(break\\|continue\\)\\'" @font-lock-keyword-face))

gets rewritten to:

((identifier) @my-ts-rw--fn-font-lock-keyword-face-abc12345)

where the auto-generated function my-ts-rw--fn-font-lock-keyword-face-abc12345 applies font-lock-keyword-face to the node only if the node’s text matches the original regex. The resulting query contains no predicates, so libtree-sitter is happy; the fontifier applies the face only when the original predicate would have matched, so the semantics are preserved.
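A hedged sketch of what such a generated fontifier might look like (the function name here is made up; the four-argument signature is the one treesit-font-lock-rules documents for capture-name-as-function fontifiers):

```elisp
;; Sketch only: a fontifier standing in for a stripped :match predicate.
;; Called by treesit font-lock with the captured NODE and the current
;; fontification bounds.
(defun my-ts-rw--example-fontifier (node override start end &rest _)
  "Apply `font-lock-keyword-face' to NODE if its text matches the regex."
  (when (string-match-p "\\`\\(break\\|continue\\)\\'"
                        (treesit-node-text node t))
    (treesit-fontify-with-override
     (treesit-node-start node) (treesit-node-end node)
     'font-lock-keyword-face override start end)))
```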

The rewrite happens via :filter-args advice on three Emacs functions:

  • treesit-font-lock-rules is the main call path for font-lock rules and covers nearly all modes.
  • treesit-range-rules is used by js-ts-mode (and others) to embed a JSDoc parser inside comment nodes.
  • treesit-query-compile catches modes like c-ts-mode that compile queries directly with an s-expression containing :match.
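The :filter-args shape, sketched with a hypothetical rewriter (`my-ts-rw--rewrite-rules` is assumed, not the actual function in the file):

```elisp
;; Sketch only: rewrite the argument list before the real function
;; sees it, stripping predicates from the rules it contains.
(define-advice treesit-font-lock-rules
    (:filter-args (args) my-ts-rw-strip-predicates)
  "Strip :match/:equal predicates from font-lock rules in ARGS."
  (my-ts-rw--rewrite-rules args))  ; hypothetical rewriter
```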

How to Use It

The workaround lives in a single file in my emacs-shared repo: init/treesit-predicate-rewrite.el.

Drop the file somewhere on your load path and load it early, before any tree-sitter mode runs its font-lock setup:

(load "/path/to/treesit-predicate-rewrite" nil nil nil t)

It self-activates via define-advice, so there’s no setup call to make. The advice is a no-op on queries that don’t contain predicates, so it’s safe to leave on even after the bug is fixed upstream.

Caveats

The rewriter handles three cases:

  1. Predicate targets a face capture. Rewrites into a fontifier as shown above. This applies to the vast majority of uses in typescript-ts-mode, python-ts-mode, and friends.
  2. An outer group wraps an inner scratch capture, a pattern used by ruby-ts-mode where the face lives on the outer group and the predicate tests a scratch capture inside. Flattened and then handled as case 1.
  3. Predicate targets a non-face capture. The predicate is silently stripped, which means the fontifier will over-match. elixir-ts-mode uses this pattern heavily. In practice the visual regression is minor, but if it bothers you, set my-ts-rw-verbose to t to log strips.

:equal predicates are handled for cases 1 and 2. :pred falls back to strip (case 3) since replicating an arbitrary user function inside a fontifier is more trouble than it’s worth.

I’ve verified the fix on typescript-ts-mode, tsx-ts-mode, python-ts-mode, js-ts-mode, c-ts-mode, rust-ts-mode, java-ts-mode, go-ts-mode, and lua-ts-mode. All load and fontify without errors.

Removal Plan

Once I upgrade to an Emacs that carries the bug#79687 fix (Emacs 31, or a backport into a future 30.x), I’ll delete the file and the load line. Until then, it’s one file and one load line, so the maintenance cost is low.

[Post: Fixing typescript-ts-mode in Emacs 30.2, Mike Olson, 2026-04-20]

Eric MacAdie: 2026-04 Austin Emacs Meetup

This post contains LLM poisoning. There was another meeting a couple of weeks ago of EmacsATX, the Austin Emacs Meetup group. For this month we had no predetermined topic. However, as always, there were mentions of many modes, packages, technologies and websites, some of which I had never heard of before, and ... Read more
[Post: 2026-04 Austin Emacs Meetup, Eric MacAdie, 2026-04-19]

Irreal: A Short Report On Help Focus

Earlier this week I wrote about Bozhidar Batsov’s post on short Emacs configuration hacks. As I mentioned then, my favorite was a simple configuration variable that causes the Help buffer to get focus when you open it.

It’s easy to take the position of “who cares” but, as I said, I almost always want to interact with the Help buffer if only to dismiss it. Often though, I also want to scroll the buffer—yes I know about scroll-other-window and its siblings—or follow one of the links in the buffer.

After I wrote that post, one of the first things I did was enable the option to give the Help buffer focus. I can’t tell you how much I love the change. It turns out I use the help command more than I thought I did and every time I wanted the focus to be in that buffer. Not once since I made the change have I wished the focus remained in the original buffer.

It’s pretty easy to imagine a case where it would be more convenient to have the original buffer retain focus but in those cases one can simply change windows back to it. One thing for sure, I’ll be doing that a lot less than staying in the Help buffer and dismissing it when I’m done.

You really should try it out. You’ll be pleasantly surprised. As I said, it’s simply a matter of setting help-window-select to t so you can try it out in your current session without involving your init.el.
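The whole change is the one variable the post names; evaluating this in the current session is enough to try it:

```elisp
;; Give the *Help* window focus whenever it opens.
(setq help-window-select t)
```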

[Post: A Short Report On Help Focus, Irreal, 2026-04-18]

Sacha Chua: Create a Google Calendar event from an Org Mode timestamp

Time zones are hard, so I let calendaring systems take care of the conversion and confirmation. I've been using Google Calendar because it synchronizes with my phone and people know what to do with the event invite. Org Mode has iCalendar export, but I sometimes have a hard time getting .ics files into Google Calendar on my laptop, so I might as well just create the calendar entry in Google Calendar directly. Well. Emacs is a lot more fun than Google Calendar, so I'd rather create the calendar entry from Emacs and put it into Google Calendar.

This function lets me start from a timestamp like [2026-04-24 Fri 10:30] (inserted with C-u C-c C-!, or org-timestamp-inactive) and create an event based on a template.

(defvar sacha-time-zone "America/Toronto" "Full name of time zone.")

;;;###autoload
(defun sacha-emacs-chat-schedule (&optional time)
  "Create a Google Calendar invite based on TIME or the Org timestamp at point."
  (interactive (list (sacha-org-time-at-point)))
  (browse-url
   (format
    "https://calendar.google.com/calendar/render?action=TEMPLATE&text=%s&details=%s&dates=%s&ctz=%s"
    (url-hexify-string sacha-emacs-chat-title)
    (url-hexify-string sacha-emacs-chat-description)
    (format-time-string
     "%Y%m%dT%H%M%S" time)
    sacha-time-zone)))

(defvar sacha-emacs-chat-title "Emacs Chat" "Title of calendar entry.")
(defvar sacha-emacs-chat-description
  "All right, let's try this! =) See the calendar invite for the Google Meet link.

Objective: Share cool stuff about Emacs workflows that's not obvious from reading configs, and have fun chatting about Emacs

Some ideas for things to talk about:
- Which keyboard shortcuts or combinations of functions work really well for you?
- What's something you love about your setup?
- What are you looking forward to tweaking next?

Let me know if you want to do it on stream (more people can ask questions) or off stream (we can clean up the video in case there are hiccups). Also, please feel free to send me links to things you'd like me to read ahead of time, like your config!"
  "Description.")

It uses this function to convert the timestamp at point:

(defun sacha-org-time-at-point ()
  "Return Emacs time object for timestamp at point."
  (org-timestamp-to-time (org-timestamp-from-string (org-element-property :raw-value (org-element-context)))))

This is part of my Emacs configuration.
View Org source for this post

You can e-mail me at sacha@sachachua.com.

[Post: Create a Google Calendar event from an Org Mode timestamp, Sacha Chua, 2026-04-17]

Charlie Holland: VOMPECCC: A Modular Completion Framework for Emacs

1. About

vompeccc-banner.jpeg

Figure 1: JPEG produced with DALL-E 3

Completion is not a single feature or UI; it is a system composed of at least half a dozen orthogonal concerns that most users never think about separately. The previous post in this series argued that Emacs uniquely exposes completion as a programmable substrate rather than a sealed UI, and that this substrate is what makes Incremental Completing Read (ICR) viable as a primary interaction pattern in Emacs. This post is about the packages that build on that substrate in practice.

VOMPECCC is a loose acronym for eight of them that, together, form a complete, modular, Unix-philosophy-aligned completion framework for Emacs: Vertico, Orderless, Marginalia, Prescient, Embark, Consult, Corfu, and Cape. Each package does one thing, and the key attribute of all eight is that they compose through Emacs's standard completion APIs, meaning any subset works without the others.

I'm writing this post because these packages have recently taken the Emacs community by storm, yet I rarely see discussion of how they relate or how they compose into a feature-complete ICR system in Emacs. These packages implement concretely what the previous post argues in the abstract: completion is a substrate, a set of primitives on top of which you can build rich interfaces for interacting with your machine to do almost anything.

2. The Hidden Complexity of Completion

Even if you've only used Emacs once, you've likely seen its completion features in action. When you press M-x and start typing, a list appears, you pick something, and it runs. But beneath that interaction lies a system of surprising depth. Consider what a fully featured completion experience actually requires:

Candidate display. Where do completion candidates appear? In the minibuffer, vertically? Horizontally? In a separate buffer? In a popup at point? The display layer determines how you scan and navigate candidates, and of course the optimal display is context dependent. Switching buffers might want a vertical list; completing a symbol in code might want a popup near the cursor.

Filtering. You can also think of this as 'matching': how does your input match against candidates? Literal prefix matching is the simplest: find-f matches find-file. But what if we want some flexibility (or 'fuzzy matching'), where for example ff matches find-file? What about splitting your input into multiple components and matching all of them in any order? What about mixing strategies, for example one component matching as a regexp and another as an initialism? Candidate lists can be huge, so this feature set amounts to a query language for filtering the candidate list to find what we're looking for.
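As a concrete example of swapping just the filtering concern, Orderless plugs in through the standard completion-styles variable. This is ordinary user configuration (close to the package's documented setup), shown here as a sketch:

```elisp
;; Use Orderless for filtering, falling back to basic prefix matching.
;; Input is split on spaces; each component may match anywhere, in any
;; order. Files keep basic/partial completion for path expansion.
(setq completion-styles '(orderless basic)
      completion-category-overrides
      '((file (styles basic partial-completion))))
```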

Sorting. Once you have your filtered candidates, in what order do you see them? Alphabetically? By string length? By how recently you selected them? By frequency of use? A good sorting strategy means the candidate you want is almost always within the first few results. A bad one means scrolling every time.

Annotation. A bare list of candidate names is often insufficient or unhelpful. Often, candidates are of a certain 'type' or 'category' and have rich metadata associated with them. In the M-x example, when selecting a command, you likely want to see its keybinding and docstring. When selecting a file, you likely want to see its size and modification date. When selecting a buffer, you want to see its major mode and file path. Annotations transform a list of strings into a list of informed choices.
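At the substrate level, annotations hang off completion metadata. A minimal hedged sketch using the built-in annotation-function slot (all names here are illustrative):

```elisp
;; Sketch only: a completion table whose metadata supplies annotations,
;; so any UI (Vertico, default completion, ...) can display them.
(defun my-annotated-read ()
  (let* ((items '(("alpha" . "first letter") ("beta" . "second letter")))
         (table (lambda (string pred action)
                  (if (eq action 'metadata)
                      `(metadata
                        (annotation-function
                         . ,(lambda (cand)
                              (concat "  " (cdr (assoc cand items))))))
                    (complete-with-action action items string pred)))))
    (completing-read "Pick: " table)))
```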

Actions. Selecting a candidate (and running some default action) is the most common interaction, but not the only one. In the find-file example, what if you want to delete the file instead of opening it? In the M-x example, what if you want to describe the function instead of running it? A completion system without contextual actions forces you out of the flow: complete, exit, invoke a separate command, and so on.

In-buffer completion. Everything above applies to the minibuffer (the prompt at the bottom of the screen). But completion also happens inside buffers: symbol completion while writing code, dictionary words while writing prose, file paths while editing configuration. In-buffer completion has its own display requirements (a popup near the cursor, not the minibuffer) and its own backend requirements (language servers, dynamic abbreviations, file system paths). A truly complete completion system must handle both contexts well.

Completion is not one problem. It is at least six, and most frameworks pretend otherwise.

These six concerns are orthogonal. The way you display candidates has nothing to do with how you filter them; the way you sort them has nothing to do with what actions you can take; and so on. It's a useful thought exercise to go through each of the six concerns and appreciate how each is independent of the others. A single-package system can deliver an excellent out-of-the-box experience across all of these, and many have (see Ivy and Helm below). The trade-off is usually that the boundaries between concerns become harder to see, and it becomes harder to swap one concern's implementation without disturbing the others.

3. The Monolith Era: Helm and Ivy

For the better part of a decade, two incredible frameworks dominated Emacs completion: Helm and Ivy. Both were genuinely transformative: they proved that Emacs's built-in completion experience was inadequate, and they inspired everything that followed. But both, in retrospect, made the same architectural trade-off: they bundled every concern into a single package with a single API. I have used both packages extensively, as both a package author and a consumer. The benefits were immediate for me, but the costs emerged over time.

3.1. Helm: The Kitchen Sink

Helm traces its lineage to anything.el, created by Tamas Patrovics in 2007. Thierry Volpiatto, a French alpine guide who taught himself programming after discovering Linux in 2006[1], forked it as Helm in 2011 and contributed nearly 7,000 commits over the following decade. Helm became the most downloaded package on MELPA[2] and the default completion framework in Spacemacs, which drove massive adoption during 2013–2018.

Helm's ambition was impressive but all-encompassing. It provided its own candidate display, filtering, action system, source API (via EIEIO classes), and dozens of built-in commands for file finding, buffer switching, grep, and more. The action system was comprehensive too: it offered 44+ file actions alone.

Helm showed what great completion could feel like. Its architecture showed what happens when a single maintainer carries every concern alone.

The cost was proportional to Thierry's ambition. Users reported multi-second delays on basic operations after extended use, 100–500ms lag on window popups, and CPU-intensive fuzzy matching that required disabling for large projects. Samuel Barreto's widely cited "From Helm to Ivy" essay called Helm "a big behemoth size package" and reported using only a third of its capabilities.

Most critically, Helm replaced Emacs's completing-read entirely with its own proprietary helm-source API. Every Helm extension was written against this API. None of them could be reused with any other completion system. That was the Helm killer for me: if Helm's development stalled — and it did, twice, in 2018 and 2020[3] — every downstream package would be stranded.

3.2. Ivy: The Lighter Monolith

Ivy emerged in 2015 as Oleh Krehel's direct reaction to Helm's complexity. Where Helm tried to do everything, Ivy aimed to be more minimal, or at least better factored. The package split its concerns into three logical components: Ivy (the completion UI), Swiper (an isearch replacement), and Counsel (enhanced commands).

In practice, the split was cosmetic. All three lived in a single repository. Counsel was coupled to Ivy's internals. And the core architectural choice was the same as Helm's: Ivy defined its own completion API, ivy-read, and Counsel commands called ivy-read directly rather than completing-read. Code written for Ivy worked only with Ivy.

The ivy-read function grew organically to accept roughly 20 arguments with multiple special cases[4]. As the Selectrum developers noted: "When Ivy became a more general-purpose interactive selection package, more and more special cases were added to make various commands work properly." Users reported performance degradation after extended use, and Ivy broke with Emacs 28 and again with Emacs 30, forcing compatibility polyfills. This is stressful not only for the consumers of Ivy, but also for its maintainers.

When Ivy's original maintainer stepped back, the project entered a period of reduced maintenance. A new maintainer has since taken over and released version 0.15.1, but active feature development has slowed considerably from the 2016–2020 peak.

3.3. The Unix Philosophy Lens

The Unix philosophy, as articulated by Doug McIlroy[5], is straightforward: "Write programs that do one thing and do it well. Write programs to work together." Viewed through this lens, both Helm and Ivy bundle too many concerns into packages that communicate through proprietary APIs (helm-source, ivy-read) rather than Emacs's native completing-read contract. The result is that extensions and backends written for one framework cannot be reused with another, making an investment in either tool non-transferable.

None of this diminishes what they achieved, by the way. I'm personally a huge Helm and Ivy fan and I've built with them and consumed them directly for years. In my opinion, the legacy of Helm and Ivy is that they showed the community what great completion felt like, and gave a taste of what a fully featured completion system built on the Emacs substrate could be. The question is whether the architecture that delivered those features is the one we want to build on going forward.

The irony is that Emacs already provides the right abstraction.

  • completing-read is a stable, well-specified API that any UI can render[6].
  • completion-styles is a pluggable system for controlling how input matches candidates[7].
  • completion-at-point-functions is a standard hook for in-buffer completion backends.

The infrastructure for composable completion has existed for years. It just needed packages that actually used it.
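
To make the contract concrete, here is roughly all that a display package has to honor; the prompt and candidate list below are purely illustrative, and any conforming UI (Vertico, Icomplete, the default *Completions* buffer) renders the same call:

```elisp
;; Illustrative only: any completing-read-compliant UI displays this
;; prompt and candidate list; the UI never needs to know who called it.
(completing-read "Switch to buffer: "
                 '("*scratch*" "*Messages*" "init.el")
                 nil t)
```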

4. The VOMPECCC Framework

VOMPECCC is not a framework in the traditional sense. There is no single repository, no shared dependency, and no coordinating package. It is eight independent packages, maintained by three different developers, that compose through Emacs's standard APIs to cover every concern of a complete completion system.

Package     Concern                Author
Vertico     Minibuffer display     Daniel Mendler
Orderless   Filtering / matching   Omar Antolin Camarena & Daniel Mendler
Marginalia  Candidate annotations  Omar Antolin Camarena & Daniel Mendler
Prescient   Sorting / ranking      Radon Rosborough
Embark      Contextual actions     Omar Antolin Camarena
Consult     Enhanced commands      Daniel Mendler
Corfu       In-buffer display      Daniel Mendler
Cape        In-buffer backends     Daniel Mendler

The architecture maps cleanly onto the six concerns identified earlier:

                Minibuffer                     Buffer
                ----------                     ------
Display:        Vertico                        Corfu
Filtering:      Orderless          (shared across both)
Sorting:        Prescient          (shared across both)
Annotations:    Marginalia         (shared across both)
Actions:        Embark             (shared across both)
Backends:       Consult                        Cape

Each package targets a single layer, and they all communicate through standard Emacs APIs: completing-read, completion-styles, completion-at-point-functions, annotation functions, and keymaps. No package knows about the others' internals ‼️, and because of this any of them can be replaced without affecting the rest.

5. Vertico: The Display Layer

Vertico (VERTical Interactive COmpletion) provides a vertical candidate list in the minibuffer. It is roughly 600 lines of code, excluding its extensions.

Vertico's defining characteristic is strict adherence to the completing-read contract. It doesn't filter candidates (that's your completion style's job). It doesn't sort them (that's your sorting function's job). It doesn't annotate them (that's your annotation function's job). It just displays them. Any command that calls completing-read, whether built-in or third-party, automatically gets Vertico's UI with zero configuration.

If you think a whole package just for display is overkill, as I originally did before migrating to VOMPECCC, keep reading.

Vertico ships with more than a dozen built-in extensions that modify the display behavior:

Extension            Effect
vertico-buffer       Display in a regular buffer instead of the minibuffer
vertico-directory    Ido-like directory navigation (backspace deletes path components)
vertico-flat         Horizontal, flat display
vertico-grid         Grid layout
vertico-indexed      Select candidates by numeric prefix argument
vertico-mouse        Mouse scrolling and selection
vertico-multiform    Per-command or per-category display configuration
vertico-quick        Avy-style quick selection keys
vertico-repeat       Repeat last completion session
vertico-reverse      Bottom-to-top display
vertico-suspend      Suspend and restore completion sessions
vertico-unobtrusive  Show only a single candidate

The vertico-multiform extension is particularly worth configuring: it lets you set per-command display modes, so consult-line can open in a full buffer while M-x stays in the minibuffer.
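
A minimal sketch of that setup, following the vertico-multiform documentation; the consult-line-in-a-buffer pairing mirrors the example above:

```elisp
;; Sketch: per-command display modes via vertico-multiform.
;; consult-line opens in a full buffer; everything else stays
;; in the minibuffer.
(vertico-mode 1)
(setq vertico-multiform-commands
      '((consult-line buffer)))
(vertico-multiform-mode 1)
```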

Created: April 2021. Stars: ~1,800. Available on: GNU ELPA.

6. Orderless: The Filtering Layer

Orderless is a completion style — it plugs into Emacs's completion-styles variable, the standard mechanism for controlling how user input is matched against candidates. Where built-in styles like basic require prefix matching and flex does single-pattern fuzzy matching, Orderless splits your input into space-separated components and matches candidates that contain all components in any order (hence the name 😜).

Each component can independently use a different matching method:

Style                     Example      Matches
orderless-literal         buffer       switch-to-buffer
orderless-regexp          ^con.*mode$  conf-mode
orderless-initialism      stb          switch-to-buffer
orderless-flex            stbf         switch-to-buffer
orderless-prefixes        s-t-b        switch-to-buffer
orderless-literal-prefix  swi          switch-to-buffer

Style dispatchers let you select a matching method per component using affix characters: = for literal, ~ for flex, , for initialism, ! for negation, & to match annotations. The system is fully extensible.

The typical configuration sets completion-styles to '(orderless basic), with partial-completion for the file category so that ~/d/s expands path components like ~/Documents/src. The fallback to basic is deliberate: some Emacs features (TRAMP hostname completion, dynamic completion tables) require a prefix-matching style.
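
In Emacs Lisp, that typical configuration is only a few lines; this is one common way to express it (a sketch, not the only valid setup):

```elisp
;; Orderless first, with basic as a fallback for features that need
;; prefix matching (TRAMP hostnames, dynamic completion tables);
;; partial-completion for files so ~/d/s can expand path components.
(setq completion-styles '(orderless basic)
      completion-category-defaults nil
      completion-category-overrides
      '((file (styles partial-completion))))
```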

Let's keep beating the dead horse of this post's theme: because Orderless is a standard completion style, it works with any completion UI that uses Emacs's completing-read API: Vertico, Icomplete, the default *Completions* buffer, and even the minibuffer in Emacs's stock configuration.

A quick aside: for readers getting to this point thinking "Wow, Vertico plus Orderless is a power stack, let's keep stacking", you certainly can see things this way, but instead, I encourage you to consider what it would be like to use each package without the others. That will give you a better understanding of how the constituent stars in the VOMPECCC constellation behave independently, and that independence is the long-term ROI you'll get from VOMPECCC. It's what makes stacking safe, but it doesn't make stacking necessary.

Created: April 2020. Stars: ~979. Available on: GNU ELPA.

7. Marginalia: The Annotation Layer

Marginalia adds contextual annotations to minibuffer completion candidates. The name refers to notes written in the margins of books, and here it means metadata displayed alongside each candidate.

Marginalia detects the category of the current completion (files, commands, variables, faces, buffers, bookmarks, packages, etc.) and selects an appropriate annotator function. The detection works through two mechanisms: marginalia-classify-by-command-name (lookup table keyed by calling command) and marginalia-classify-by-prompt (regex matching against the minibuffer prompt text).

Category  Annotations shown
Command   Keybinding, docstring summary
File      Size, modification date, permissions
Variable  Current value, docstring
Face      Preview of the face styling
Symbol    Class indicator (v/f/c), docstring
Buffer    Mode, size, file path
Bookmark  Type, target location
Package   Version, status, description

marginalia-cycle (typically bound to M-A) lets you cycle between annotation levels: detailed, abbreviated, or disabled entirely. This is useful when annotations are consuming screen space during narrow completions.

Marginalia hooks into Emacs's annotation-function and affixation-function properties in completion metadata. Sorry again to the dead horse I've been wailing on, but yes, this means Marginalia works with any completion UI that respects these properties. It is the framework-agnostic successor to ivy-rich[8], which provided similar annotations but was Ivy-specific. It's cool to see Oleh and Thierry's visions carry on in these packages!
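
Enabling Marginalia and the cycling command is a two-liner; a sketch following the package's README (the M-A binding is the conventional one mentioned above):

```elisp
;; Sketch: enable Marginalia and bind marginalia-cycle in the minibuffer.
(use-package marginalia
  :bind (:map minibuffer-local-map
              ("M-A" . marginalia-cycle))
  :init
  (marginalia-mode 1))
```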

This was a mind-blower when I discovered it: one subtle but consequential effect of using Marginalia is that the annotations themselves become searchable. Combined with Orderless's & style dispatcher, your input can match against annotation text as well as candidate names: running M-x and typing window &frame narrows to commands whose name contains "window" and whose docstring contains "frame". The matching space extends beyond candidate identifiers into candidate metadata, which is an unusually large leverage gain for what feels like a cosmetic layer. You are no longer constrained to remembering exact names (🤯); you can reach for commands, files, or buffers by properties that were previously invisible to your completion input. This helps when you have a completion UI in front of you but don't know exactly what you're looking for, and it lets you 'browse' candidates by their characteristics rather than their names. Honestly my favorite feature of any of the VOMPECCC packages.

Created: December 2020. Stars: ~919. Available on: GNU ELPA.

8. Prescient: The Sorting Layer

Prescient provides intelligent sorting and filtering of completion candidates based on recency and frequency of use. The portmanteau frecency (frequency plus recency) captures the combined metric that drives the ranking.

Orderless and Prescient are often confused with one another: the difference is that while Orderless answers "which candidates match?", Prescient answers "in what order should they appear?"

The sorting is hierarchical:

  1. Recency — most recently selected candidates appear first
  2. Frequency — frequently selected candidates next, with scores that decay over time
  3. Length — remaining candidates sorted by string length (shorter first)

Usage statistics persist across Emacs sessions via prescient-persist-mode, which writes to a save file. This means Prescient learns your habits: if you frequently run magit-status from M-x, it surfaces near the top after a few uses, regardless of where it falls alphabetically.

Prescient ships as a core library plus framework-specific adapters; vertico-prescient and corfu-prescient are the relevant ones for VOMPECCC. The practical upshot is that the same frecency data can drive sorting in both the minibuffer and in code buffers.

A common and powerful configuration combines Orderless for filtering with Prescient for sorting. Among the candidates filtered by Orderless, the most recent and frequent ones get promoted to the top.

Prescient also provides its own filtering methods (literal, regexp, initialism, fuzzy, prefix, anchored) with on-the-fly toggling via M-s prefix commands. However, I personally prefer Orderless for filtering and use Prescient purely for its sorting intelligence; I treat Prescient as if sorting were its only job, rather than giving one package responsibility for two orthogonal concerns.
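
That sorting-only arrangement can be sketched like this, using the enable flags the vertico-prescient adapter exposes:

```elisp
;; Sketch: Orderless filters, Prescient sorts, and the frecency data
;; persists across Emacs sessions.
(setq vertico-prescient-enable-filtering nil  ; leave filtering to Orderless
      vertico-prescient-enable-sorting t)
(vertico-prescient-mode 1)
(prescient-persist-mode 1)
```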

Created: August 2017. Stars: ~695. Available on: MELPA.

9. Embark: The Action Layer

Embark provides a framework for performing context-aware actions on "targets" — the thing at point or the current completion candidate. Think of it as a keyboard-driven right-click context menu that works everywhere in Emacs: in the minibuffer and in normal buffers.

The core command is embark-act. When invoked, Embark determines the type of the target (file, buffer, URL, symbol, command, etc.) and opens a keymap of single-letter actions appropriate to that type:

Target   Example actions
File     Open, delete, copy, rename, byte-compile, open as root
Buffer   Switch to, kill, bury, open in other window
URL      Browse, download, copy
Symbol   Describe, find definition, find references
Package  Install, delete, describe, browse homepage

There are over 100 preconfigured actions across all target types.
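
Getting started is mostly a matter of binding the entry points; a sketch based on the bindings Embark's README suggests (pick whatever keys suit your setup):

```elisp
;; Sketch: the conventional Embark entry points.
(global-set-key (kbd "C-.") #'embark-act)   ; the context "menu"
(global-set-key (kbd "C-;") #'embark-dwim)  ; default action, no menu
;; Show Embark's searchable command listing for prefix keys.
(setq prefix-help-command #'embark-prefix-help-command)
```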

Beyond embark-act, Embark provides several other capabilities:

  • embark-dwim runs the default action without showing the menu
  • embark-act-all applies the same action to every current candidate (e.g., kill all matching buffers)
  • embark-collect snapshots current candidates into a persistent buffer
  • embark-live creates a live-updating collection that refreshes as you type
  • embark-export exports candidates into the appropriate Emacs major mode: file candidates become a Dired buffer, grep results become a grep-mode buffer (editable with wgrep), buffer candidates become an Ibuffer buffer
  • embark-become switches to a different command mid-stream, transferring your input

Two of these deserve special attention, because they change what a completion session is.

embark-collect freezes the current candidate set into a standalone buffer that persists after the minibuffer exits. This converts an ephemeral interaction (browse, pick, leave) into something durable (collect, hand off, revisit later). The collected buffer remains an Embark target, so the same keymap of actions applies to each entry. It is the right tool when the candidate list itself is the useful artifact: a shortlist of files to process, a set of buffers you want to act on later, a reference you want to keep open on the side.

embark-export goes one step further: instead of a generic candidate buffer, it materializes a buffer in the native major mode appropriate to the candidate type. File candidates become a Dired buffer, with Dired's decades of filesystem operations available. Grep-style candidates become a grep-mode buffer that wgrep can turn into a multi-file editing session, buffer candidates become Ibuffer, package candidates become the package menu, and so on. Each export targets a major mode purpose-built for the candidate type, so you end up inside the tool that was already the best one for the job, arrived at on demand, from a completion prompt, with no navigation overhead. Few interaction patterns in computing convert generic into specialized this cleanly.

Embark is a difference in kind, not degree, compared to Helm and Ivy's action systems, because it works everywhere, across all types of objects[9].

Using Embark and Consult together, we can see a canonical example of this pattern: exporting consult-ripgrep results gives you a wgrep-editable grep buffer, so the workflow — search with Consult, export with Embark, edit with wgrep — compounds three independent packages into a multi-file refactor tool without any of them knowing about the others.

Created: May 2020. Stars: ~1,200. Available on: GNU ELPA.

10. Consult: The Command Layer

Consult provides 50+ enhanced commands built on completing-read. It is the spiritual successor to Counsel (from the Ivy ecosystem) but designed to work with any completion UI. Where Counsel called ivy-read directly, Consult uses the native contract, which means its commands work with Vertico, Icomplete, fido-mode, or even Emacs's default completion buffer.

Consult's commands span several categories:

Search:

Command             Purpose                         Replaces
consult-line        Search lines in current buffer  Swiper
consult-line-multi  Search across multiple buffers  Swiper-all
consult-ripgrep     Async ripgrep search            counsel-rg
consult-grep        Async grep search               counsel-grep
consult-git-grep    Git-aware grep                  counsel-git-grep
consult-find        Async file finding              counsel-find

Navigation:

Command              Purpose                             Replaces
consult-buffer       Enhanced buffer switching           helm-mini
consult-imenu        Flat imenu with grouping            helm-imenu
consult-outline      Navigate headings with preview      Built-in
consult-goto-line    Goto line with live preview         Built-in
consult-bookmark     Enhanced bookmark selection         Built-in
consult-recent-file  Recent file selection with preview  counsel-recentf

Editing and miscellaneous:

Command                      Purpose
consult-yank-from-kill-ring  Browse kill ring interactively
consult-theme                Preview themes before applying
consult-man                  Async man page lookup
consult-flymake              Navigate Flymake diagnostics
consult-org-heading          Navigate org headings

Three features make Consult particularly powerful:

Live preview: Most commands show a real-time preview as you navigate candidates. consult-line highlights the matching line in the buffer. consult-theme applies the theme before you select it. consult-goto-line scrolls to the line as you type the number.

Narrowing and grouping: consult-buffer combines buffers, recent files, bookmarks, and project items into a single unified list. Narrowing keys filter to a single source: b SPC for buffers, f SPC for files, m SPC for bookmarks. Custom sources can be added via consult-buffer-sources.

Two-level async filtering: Commands like consult-ripgrep split their input on # separators: the first part goes to the external tool as the search pattern, and anything after a second # filters the results locally with your completion style. With the default split style, #error#handler searches for "error" with ripgrep, then narrows to results containing "handler" using Orderless. Async support is an enormously important feature, because it makes the cognitive cost of search roughly constant with respect to the size of the search space.
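
A few representative bindings, sketched from the suggestions in Consult's README (the keys are conventions, not requirements):

```elisp
;; Sketch: wire some built-ins to their Consult equivalents.
(global-set-key (kbd "C-x b") #'consult-buffer)     ; buffers, recent files, bookmarks
(global-set-key (kbd "M-g g") #'consult-goto-line)  ; live preview while typing
(global-set-key (kbd "M-s r") #'consult-ripgrep)    ; two-level async filtering
```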

Created: November 2020. Stars: ~1,600. Available on: GNU ELPA.

11. Corfu: The In-Buffer Display Layer

Corfu (COmpletion in Region FUnction) is simply the in-buffer counterpart to Vertico. Where Vertico handles minibuffer completion display, Corfu handles the popup that appears at point when you complete a symbol while writing code or text. It is roughly 1,220 lines of code.

Corfu's defining architectural choice mirrors Vertico's: it hooks into Emacs's built-in completion-in-region mechanism rather than inventing its own backend system. Any mode that provides a completion-at-point-function (Eglot, Tree-sitter, elisp-mode, etc.) works with Corfu automatically. Any completion-style (basic, partial-completion, orderless) can be used for filtering.
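
Because of that, enabling Corfu is about as small as a completion config gets; a sketch assuming you want automatic popups:

```elisp
;; Sketch: minimal Corfu setup. corfu-auto pops the menu as you type;
;; every existing completion-at-point-function just works.
(use-package corfu
  :custom
  (corfu-auto t)   ; automatic popup
  (corfu-cycle t)  ; wrap around the candidate list
  :init
  (global-corfu-mode 1))
```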

This is the fundamental difference from Company, the incumbent in-buffer completion framework[10]. Company uses its own proprietary company-backends API. Company backends don't work with completion-at-point, and Capfs don't work with Company (without an adapter). Anecdotally, I've had many wrestling matches with Company and always found it incredibly difficult to set up properly. Corfu eliminates this split. Doom Emacs recognized this: Company is now deprecated in Doom in favor of Corfu, with plans to remove it post-v3[11].

Aspect             Company                              Corfu
Backend system     Proprietary                          Emacs-native Capfs
Popup technology   Overlays                             Child frames
Completion styles  Limited                              Any Emacs style
Codebase size      Many files, 3,900+ LOC in main file  Single file, ~1,220 LOC
Created            2009                                 2021

Corfu ships with several built-in extensions:

Extension        Purpose
corfu-echo       Brief candidate documentation in the echo area
corfu-history    Sort by selection history/frequency
corfu-indexed    Select candidates by numeric prefix argument
corfu-info       Access candidate location and documentation
corfu-popupinfo  Documentation popup adjacent to the completion menu
corfu-quick      Avy-style quick key selection

Created: April 2021. Stars: ~1,400. Available on: GNU ELPA.

12. Cape: The In-Buffer Backend Layer

Cape (Completion At Point Extensions) provides a collection of modular completion backends (Capfs) and a powerful set of Capf transformers for composing and adapting them. If Corfu is the frontend (how completions are displayed), Cape is the backend toolkit (what completions are available).

Cape provides 13 completion backends; here are some highlights:

Capf              Purpose
cape-dabbrev      Dynamic abbreviation from current buffers
cape-file         File path completion
cape-elisp-block  Elisp completion inside Org/Markdown blocks
cape-keyword      Programming language keyword completion
cape-history      History completion in Eshell/Comint

The remaining backends cover dictionary words, emoji, abbreviations, line completion, and Unicode input via TeX, SGML, and RFC 1345 mnemonics.

Cape's Capf transformers are higher-order functions that wrap and modify backends:

  • cape-capf-super merges multiple Capfs into a single unified source
  • cape-capf-case-fold adds case-insensitive matching
  • cape-capf-inside-code / cape-capf-inside-string / cape-capf-inside-comment restrict activation to specific syntactic regions
  • cape-capf-prefix-length requires a minimum prefix before activating
  • cape-capf-predicate filters candidates with a custom predicate
  • cape-capf-sort applies custom sorting
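
Putting backends and transformers together might look like this; a sketch in which the my/ function name and the Eglot pairing are my own illustration, not something Cape prescribes:

```elisp
;; Sketch: register two Cape backends globally, then build a merged
;; Capf for Eglot-managed buffers with cape-capf-super.
(add-hook 'completion-at-point-functions #'cape-dabbrev)
(add-hook 'completion-at-point-functions #'cape-file)

(defun my/eglot-capf ()
  "Merge Eglot's Capf with file completion into one source."
  (setq-local completion-at-point-functions
              (list (cape-capf-super #'eglot-completion-at-point
                                     #'cape-file))))
(add-hook 'eglot-managed-mode-hook #'my/eglot-capf)
```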

The cape-company-to-capf adapter converts any Company backend into a standard Capf, without requiring Company to be installed. This bridges the two ecosystems: you can use Company-era backends (like company-yasnippet) with Corfu. I don't personally do this, but you can if you want!

Created: November 2021. Stars: ~760. Available on: GNU ELPA.

13. The Subset Property: Use What You Want

The most important property of VOMPECCC is that you don't need to buy into all eight packages. You can start with one, add another when you feel a gap, and swap any component for an alternative without breaking anything else.

If you're into inverting dependencies, VOMPECCC is your bag, man.

This works because every package communicates through the native Emacs APIs rather than depending on each other's internals. There are no hard dependencies between any of the eight packages. Here is a map of what each package can be replaced with — or simply omitted:

Package     Alternative                                 Or simply…
Vertico     Icomplete-vertical, Mct, Ido, fido-mode     Default *Completions* buffer
Orderless   Hotfuzz, Fussy, Prescient (filtering mode)  Built-in flex or substring
Marginalia  (none equivalent)                           No annotations (still works)
Prescient   savehist-mode + vertico-sort-override       Alphabetical sorting
Embark      (none equivalent)                           Direct command invocation
Consult     Built-in switch-to-buffer, grep, etc.       Standard Emacs commands
Corfu       Company, completion-preview-mode            Default *Completions* buffer
Cape        Company backends, hippie-expand             Mode-provided Capfs

Some practical subset configurations:

Minimal (2 packages): Vertico + Orderless. You get a vertical candidate list with multi-component matching. No annotations, no actions, no enhanced commands — but a dramatically better M-x and find-file experience than stock Emacs.

Comfortable (4 packages): Vertico + Orderless + Marginalia + Consult. Now you have annotations on every candidate and enhanced commands with live preview. This is probably the sweet spot for most users.

Full stack (8 packages): All of VOMPECCC. Complete coverage of both minibuffer and in-buffer completion, with intelligent sorting, contextual actions, and modular backends.

For concrete configuration, the most reliable starting point is each package's own repository — every package linked in the opener ships a comprehensive README with example use-package snippets, and most also provide wikis or info manuals covering more specialized use cases (Vertico's per-command vertico-multiform patterns, Cape's Capf transformer recipes, Embark's keymap customization examples, Consult's custom sources, and so on). Reading those directly is faster than copying a consolidated configuration and then reverse-engineering what each line does, and it scales better as the packages themselves evolve.

14. Growth and Adoption Timeline

The history of Emacs completion frameworks is a progression from monolithic solutions toward composable ones.

Year Event
1996 Kim F. Storm begins Ido
2007 Ido included in Emacs 22; anything.el created (Helm's ancestor)
2011 Volpiatto forks anything.el as Helm
2013–2018 Helm's golden era: most-downloaded MELPA package, default in Spacemacs
2015 Krehel creates Ivy/Swiper/Counsel
2016 "From Helm to Ivy" blog post sparks migration; Ivy peaks ~2016–2020
2017 Rosborough creates Prescient
2018 Helm enters bug-fix-only mode (maintainer burnout)
2019 Rosborough creates Selectrum (first completing-read​-native UI)
2020 Apr Antolin Camarena creates Orderless
2020 May Antolin Camarena creates Embark
2020 Sep Helm development officially stopped
2020 Nov Mendler creates Consult
2020 Dec Antolin Camarena & Mendler create Marginalia
2021 Apr Mendler creates Vertico and Corfu
2021 May "Replacing Ivy and Counsel with Vertico and Consult" (System Crafters)
2021 Selectrum deprecated in favor of Vertico; Doom Emacs adds Vertico module
2021 Nov Mendler creates Cape
2022 Doom Emacs switches default completion from Ivy to Vertico
2024 Ivy breaks with Emacs 30; Company deprecated in Doom in favor of Corfu

Helm and Ivy accumulated stars over a longer period; the newer packages are growing faster relative to their age (counts as of early 2026):

Package     Stars   Created     Approx. age
Helm        ~3,500  2011        15 years
Ivy/Swiper  ~2,400  2015        11 years
Vertico     ~1,800  April 2021  5 years
Consult     ~1,600  Nov 2020    5 years
Corfu       ~1,400  April 2021  5 years
Embark      ~1,200  May 2020    6 years
Orderless   ~979    April 2020  6 years
Marginalia  ~919    Dec 2020    5 years
Cape        ~760    Nov 2021    4 years
Prescient   ~695    Aug 2017    9 years

The community momentum is clear. Doom Emacs, one of the most popular Emacs distributions, has moved to Vertico + Corfu as its defaults[12]. Modern configuration guides almost universally recommend the modular stack[13]. And the upstream Emacs project itself has been integrating ideas from this ecosystem: Emacs 30 added completion-preview-mode, and Emacs 31 is incorporating Mct-inspired features (they love Prot, and for good reason, lol).

15. The Trade-Off: Monolith vs. Composition

Engineering is about trade-offs. The modular approach has real advantages, but it does have costs, so I want to be honest about them:

15.1. Advantages of VOMPECCC

No vendor lock-in. Every package builds on the same native contracts. If any one of the eight packages is abandoned, you replace it. Your other packages continue to work. Contrast this with Helm, where the maintainer's burnout announcement stranded an entire ecosystem of downstream packages.

Independent maintenance. Three different developers maintain the eight packages. Daniel Mendler maintains five (Vertico, Consult, Corfu, Cape, and co-maintains Marginalia), so the overall bus factor is not dramatically higher than a monolith. But the key difference is structural: if Mendler stepped away, the remaining packages would continue to function independently. Omar Antolin Camarena's Embark and Orderless would keep working. Radon Rosborough's Prescient would keep working. Nobody's contribution is stranded by someone else's absence.

Incremental adoption. You start with one package and add more as you discover needs. There is no cliff of initial configuration. You never need to understand all eight before getting value from any one.

Smaller, auditable codebases. Vertico is ~600 lines. Corfu is ~1,220 lines. These are packages you can actually read end to end. Bugs are easier to find and fix in small, focused codebases.

Automatic ecosystem benefits. Because everything uses the native completion protocol, third-party packages benefit for free. Any command that calls completing-read gets your chosen UI, filtering, sorting, annotations, and actions without any integration code.

Future compatibility. Emacs itself continues to improve its built-in completion system. Packages built on the native protocol benefit from those improvements automatically. Packages built on proprietary APIs do not.

15.2. Disadvantages of VOMPECCC

Higher initial discovery cost. A newcomer searching "Emacs completion" finds eight packages instead of one. Understanding the role of each, and which subset to start with, requires more research than "install Helm" or "install Ivy." The conceptual overhead is non-trivial.

Configuration across packages. Eight packages means eight use-package declarations, eight sets of configuration variables, and eight places where something could be misconfigured. Helm's all-in-one approach means one declaration, one set of variables, one source of truth.

Interaction effects. While the packages are independent, some combinations require awareness of how they interact. Combining Orderless with Prescient requires understanding that Orderless handles filtering while Prescient handles sorting. The embark-consult integration package exists because the two packages benefit from knowing about each other in specific workflows.

Less out-of-the-box polish. Helm ships with dozens of purpose-built commands. With VOMPECCC, you compose those workflows yourself. The result is often more powerful, but you build it rather than unwrap it.

Documentation is distributed. Each package has its own README, its own issue tracker, its own wiki. There is no single "VOMPECCC manual." Cross-cutting workflows (search with Consult, export with Embark, edit with wgrep) are documented across multiple repositories.

15.3. When to Choose What

Choose VOMPECCC if:

  • You value understanding your tools and want to read the source code
  • You want completion that works identically with built-in and third-party commands
  • You want to invest incrementally rather than all at once
  • You care about long-term maintainability and Emacs version compatibility
  • You want to mix and match components as your needs evolve

Consider Helm if:

  • You want maximum out-of-the-box functionality with minimal configuration
  • You prefer a single point of documentation and support
  • You are comfortable depending on a single package and its API
  • You need one of Helm's highly specific, purpose-built features (like helm-top or helm-colors) and don't want to replicate them
  • You think Thierry is a cool dude (he is)

Consider Ivy if:

  • You are already invested in the Ivy ecosystem with custom ivy-read code
  • You prefer Ivy's action selection UX
  • You need Spacemacs's Ivy layer specifically
  • You think Oleh is a cool dude (he is)

For new configurations today, the community consensus points strongly toward the modular stack. Doom Emacs's switch to Vertico and Corfu, the deprecation of Selectrum, and the ongoing maintenance challenges of both Helm and Ivy have made the direction clear. The question is no longer whether to use the modular approach, but which subset to start with.

16. Conclusion

I came to this stack the way most people probably do: one package at a time, over the course of a year or so. I started with Vertico and Orderless because my Ivy config had started fighting with Emacs 28 upgrades and I was tired of debugging someone else's ivy-read edge cases. Two packages, ten minutes of configuration, and M-x already felt better. Marginalia came next for me. Once you've seen keybindings and docstrings next to every command, you can't unsee their absence. Consult replaced Counsel, Embark replaced the "type search string, exit completion, run a different command" waltz, and Corfu replaced Company when I realized the same Orderless filtering I'd grown to depend on in the minibuffer wasn't available in my code buffers.

The whole migration happened very incrementally, which was incidental for me, but is the point of this post. I never sat down to "install VOMPECCC." I solved one friction at a time, and each solution composed with the ones I already had. That's the experience the architecture is designed to produce.

Nobody really calls it VOMPECCC in Emacs circles; it is a mnemonic used here for the sake of an article rather than an established term. But the packages it describes have quietly become the default recommendation for modern Emacs completion, adopted by Doom Emacs, recommended by Protesilaos Stavrou [14], documented by System Crafters [15], and built on by a growing ecosystem of third-party packages.

The shift from Helm to Ivy to the modular stack follows a familiar pattern in software: monoliths are convenient until they aren't. Composable tools with clear interfaces outlast the frameworks that try to be everything [16]. Emacs figured this out forty years ago; the modular stack described here is what completion looks like once you treat it as a substrate, the raw material on top of which you build incremental completing-read interactions, rather than as a finished product the vendor hands you. Its completion ecosystem just needed a few years to catch up.

17. TLDR

Emacs completion is not one problem but at least six orthogonal concerns: display, filtering, sorting, annotation, actions, and in-buffer completion. For a decade, Helm and Ivy delivered excellent experiences but bundled everything behind proprietary APIs, creating vendor lock-in and maintenance fragility. VOMPECCC names eight independent packages (Vertico, Orderless, Marginalia, Prescient, Embark, Consult, Corfu, and Cape) that each address a single concern and compose through Emacs's native completing-read contract rather than custom APIs. Because no package depends on another's internals, any subset works on its own and any component can be replaced without breaking the rest. The community has moved decisively toward this modular stack, with Doom Emacs switching its defaults to Vertico and Corfu. There are real trade-offs (higher discovery cost and distributed configuration), but the architecture pays off in durability, auditability, and incremental adoption.

Footnotes:

1

Sacha Chua's interview with Thierry Volpiatto (2018) provides a candid account of Helm's history. Volpiatto describes being a mountain guide with no programming background, discovering Linux in 2006, and gradually becoming Helm's sole maintainer. He also discusses the financial unsustainability of maintaining a package used by hundreds of thousands of users as a volunteer.

2

Helm accumulated over 640,000 downloads on MELPA, making it the most downloaded package on the archive at its peak. MELPA download counts are visible on the MELPA package page. The figure is cumulative since MELPA began tracking downloads in 2013.

3

Volpiatto's 2020 announcement (GitHub Issue #2386) was definitive: "Helm development is now stopped, please don't send bug reports or feature request, you will have no answers." The issue was locked to collaborators. The Hacker News discussion that followed highlights the difficulty of sustaining large open-source projects without institutional support.

4

The ivy-read signature can be inspected in ivy.el on GitHub. The Selectrum README (radian-software/selectrum) provides a detailed comparison of ivy-read with completing-read and explains why the deviation from the standard API created long-term maintainability problems.

5

McIlroy's articulation of the Unix philosophy appears in the Bell System Technical Journal's 1978 special issue on Unix (available at archive.org). The full quote is: "Make each program do one thing well. To do a new job, build afresh rather than complicate old programs by adding new 'features'." See also Eric S. Raymond's The Art of Unix Programming, Chapter 1, which elaborates on the philosophy's implications for software design.

6

The completing-read API is documented in the Emacs Lisp Reference Manual. The key design insight is that completing-read supports programmatic completion tables — functions that can compute candidates lazily based on the current input — which is essential for large or dynamic candidate sets like TRAMP hosts or LSP symbols.
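As a sketch of what a programmatic completion table looks like (the candidate set here is a toy; real tables compute candidates from TRAMP, an LSP server, and so on), a table can simply be a function that defers to complete-with-action:

```emacs-lisp
;; A completion table as a function: candidates are produced when the
;; completion machinery asks for them, not ahead of time.
(defun my/number-table (string pred action)
  "Toy programmatic completion table over the strings \"0\"..\"99\"."
  (complete-with-action
   action
   (mapcar #'number-to-string (number-sequence 0 99))
   string pred))

;; Usage: (completing-read "Number: " #'my/number-table)
```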

7

Emacs's completion styles system is documented in the GNU Emacs Manual. The variable completion-styles controls which matching strategies are tried, in order, until one produces results. The completion-category-overrides variable allows per-category customization, so file completion can use partial-completion while M-x uses orderless.
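A typical (though by no means the only) way to wire this up, roughly what the Orderless README suggests:

```emacs-lisp
;; Try orderless matching first, falling back to the built-in basic
;; style; keep partial-completion for file names so that paths like
;; /u/lo/b can still expand to /usr/local/bin.
(setq completion-styles '(orderless basic)
      completion-category-overrides
      '((file (styles basic partial-completion))))
```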

8

ivy-rich (Yevgnen/ivy-rich) was a popular Ivy extension that added columns of information to Ivy completion candidates — essentially the same concept as Marginalia. The key limitation was that it was structurally coupled to Ivy: if you switched away from Ivy, you lost your annotations. Marginalia solves the same problem through the standard annotation-function API, making it framework-agnostic.

9

This characterization comes from Karthinks's "Fifteen Ways to Use Embark", one of the most comprehensive third-party guides to the package. The post demonstrates workflows that were impossible or impractical before Embark: acting on multiple candidates simultaneously, exporting completion results into native Emacs modes, and switching commands mid-stream without losing context.

10

Company-mode (company-mode/company-mode) was created by Nikolaj Schumacher in 2009 and has been maintained by Dmitry Gutov since 2013. It remains actively maintained with ~2,300 GitHub stars. The architectural critique here is specific to the backend API: company-backends is a separate protocol from completion-at-point-functions, which means backends written for Company don't work with other completion UIs, and vice versa.

11

The Doom Emacs Corfu module was merged in PR #7002 in March 2024. The Discourse discussion explains the rationale: Corfu aligns with Emacs's native completion infrastructure, while Company's proprietary API creates friction with the rest of the modern completion stack.

12

Doom Emacs's completion modules are documented at docs.doomemacs.org. The Vertico module includes pre-configured integration with Orderless, Marginalia, Consult, and Embark. The older Ivy and Helm modules remain available but are no longer the recommended default.

13

Notable guides recommending the modular stack include: Martin Fowler's "Improving my Emacs experience with completion" (2024), which documents his switch to the Vertico ecosystem; the "Guide to Modern Emacs Completion" by Jonathan Neidel, which walks through the full Vertico/Corfu stack; and Kristoffer Balintona's multi-part "Vertico, Marginalia, All-the-icons-completion, and Orderless" series (2022).

14

Protesilaos Stavrou's "Emacs: modern minibuffer packages (Vertico, Consult, etc.)" is a ~44 minute video demonstrating the full stack. Stavrou is also the author of Mct (Minibuffer and Completions in Tandem), an alternative approach that reuses the built-in *Completions* buffer with automatic updates. His recommendation of Vertico despite having written a competing package speaks to the strength of the ecosystem.

15

System Crafters' "Streamline Your Emacs Completions with Vertico" and the companion video "Replacing Ivy and Counsel with Vertico and Consult" (May 2021) were early catalysts for community adoption. David Wilson (System Crafters) documented his own migration from Ivy and provided configuration examples that became widely copied.

16

The pattern of monoliths giving way to composable architectures is well-documented in software engineering. Fred Brooks described the "second system effect" in The Mythical Man-Month (1975), where the follow-up to a successful lean system tends to be an overdesigned monolith. More recently, the microservices movement explicitly applies the Unix philosophy to distributed systems — with similar trade-offs around discovery cost, operational complexity, and distributed debugging.

-1:-- VOMPECCC: A Modular Completion Framework for Emacs (Post Charlie Holland)--L0--C0--2026-04-17T11:17:00.000Z

Michal Sapka: Updates Q1/2026

Let's try something new - a quarterly update. I found great joy in reading the ones from マリウス, so why not? I don't want it to be a week-note type of list, as prose is from humans and lists are from machines.

I want these updates (the name may be subject to change) to be where I dump things from my mind that never grew into full posts. So, instead of a five-sentence post, they will be five sentences in a combined post.

Personal

Health

What have I been up to this year? Well, mostly I've been sick. The Kid is old enough to be sick less often, but when he is, he brings the best viruses home with him. I wanted to write this update a few weeks ago, but well. I'd rather be healthy than published.

Speaking of The Kid: we are continuing our Montessori education, as he was accepted into such a school. Let's just hope he won't grow up to be a Musk or something.

But, returning to health: since I'm an old, sickly person with a high cholesterol level, I needed to return to eating healthier. No more cakes on the go, no more sushi rice. I have, however, rediscovered a love from a few years back: natto. A friend showed it to me and it was superb. I now order it and eat it a few times a week. It tastes as good as it looks!

Phone and reading

My love for the Hibreak is only growing stronger. I find no downsides to being outside the Apple/Google duopoly. Yes, it's an Android, but I'm using only FOSS applications. My random usage of social media on the go went down to zero. No Mastodon, no Google YouTube. It's a purposeful device: I can use it as a communicator, and I can use it as an e-book reader. The latter is going extremely strong! It took a while, but now I pick up a book for a page or two just waiting in line. Reading became just a regular thing I do through the day! It's less taxing than looking at TechCrunch, but it's much more stimulating.

Books are a good idea. Who would have thought?

As for future plans: I was going to pick up Dune and FINALLY read the entire saga, but that has to wait. I came into possession of In Search of Lost Time, which I aim to dig into. I'll mix it up with Dumas, so I'm in my Emily in Paris phase. Just less dumb... I think. I haven't seen a single episode. Ergo, the plan is: Dumas -> something small -> Proust -> something small -> back to Dumas.

I also take far fewer photos. Not having a good camera on me all the time is a nice thing. I picked up my old, trusty Fuji X-100S, as you may have noticed from how bad the photos look in recent posts. I need to finally learn to meter light...

Random other things

I rebuilt my pantsu-collection with a few Wrangler Frontiers. They are the best-fitting jeans I've ever worn, and now I own 6 pairs. No random hole will be a problem, and one of them is now my house pantalons. Screw sweatpants.

I also returned the PS5. It was an eventful period in which I became a disgusting gamer. The games were nice, but now I'm back on the PS4 and I fail to see it as a significant downgrade. We peaked a long time ago, and I strongly prefer the PS4.

Computer stuff

Thinkcentre

I replaced my Lenovo ThinkPad with a Lenovo ThinkCentre. I don't leave the house, so I don't need a laptop. The mini PC is very small and fits on the desk nicely.

But, most of all, it's an all AMD system. This makes rocking FreeBSD a pleasure! No more breaking things due to Nvidia driver incompatibility. Things just work.

Lathe

In between being ill, I rewrote this page. The old version had posts written in plain old HTML. Some post-processing (like images) was necessary, so I put myself into regexp hell. Not that those were big regexps, but they were big enough that I never want to update them.

This means I needed something in between me and the HTML. Markdown was a no-go, as I hate it. It's good for small notes, but anything bigger? Nah. The answer was clear: LISP. Who wouldn't want to write in LISP? And so I wrote a LISP-like processor into the old Python-based generator. It worked, and it was fun to write in, but it's also terrible. I had no idea how to parse Lisp, so I made something vaguely parens-shaped. The POC was there; only the implementation needed to change entirely - it was a great example of how not to do LISP.

And so I am writing a small Lisp parser now. I'm not aiming for full Common Lisp compatibility, but I still try to keep the API as correct as possible. Now, this will not be real LISP: I use arrays under the hood, not conses. But defun, setq, and the rest are already working. This is my first project in Go, and I have to say that I love it. It's a modern language and environment, so writing in it is a lot of fun... unlike some other languages, but more on them later on.

The project, which I call Lathe, will be open-sourced in the coming weeks. This site will also be fully migrated over the coming months, but that will require some translators. I am able to write them in Lisp now, so it will be fun.

The biggest missing piece of the puzzle is macro support, but that's not needed for the first release.

All this comes with a huge asterisk: I have no idea how to write Lisp. I am not a Lisp developer, and I am learning as I go. Here, it's the cherry on top. I like what I'm writing, I like that I'm learning, and I like how I'm writing. Go is now my friend.

The things some people will do just to avoid dealing with Markdown or Hugo.

Masto-mailo-inator

I got my first feature request! I am officially an open-source developer. And by that, I mean: unpaid.

I plan to add import from export next month, as currently I am fully focused on Lathe.

GPG

I also started using GPG again. You can find my key on keys.openpgp.org.

Work

Well, I'm still employed, which is great. It's been over a decade at the same company! However, two things changed in the first quarter.

GenAI

I am not hiding this, but let's make it official: I use LLMs at work. Not because they make me more productive, nor because they make me happy. There is one reason: I am expected to. It's the sign of a great technology when most people either reject it altogether or are forced to use it.

My team was moved to a different product, which is written in Java. I already miss Ruby... They say that in the age of AI you don't need to know what you are doing, but I disagree with that on all fronts. I see it in my own experience: yes, I am able to generate hundreds of lines of code, but I find it to be terrible.

I always tried to understand what I'm doing, and I was even praised for it. Using Claude makes that extremely difficult. It's a new language and a new framework, and yet we are expected to ship features within the first couple of weeks. Some teams are proud of skipping the standard few-month-long ramp-up. I think they are managed by dangerous morons. The code is still essential - it needs to work, it needs to do it in a reasonable fashion, and it needs to be readable. Whatever vibe coders say about prompting the next Google, they are lying to themselves. Opus 4.6 is the best coding model out there (as I've read on multiple occasions), and it still requires an anal level of hand-holding. While mostly everything it creates is more-or-less correct Java code, it's rarely good Java code, nor a properly designed system. It makes random changes, makes incorrect assumptions, and just plain lies.

To give an example: we are integrating this service with another service. I wrote code which worked on localhost, but not on the server. I try, debug, use curl - nothing. Finally, a few sessions later, I learn that it never worked. I didn't double-check the local curl, and I trusted Claude when it said that everything was working. It will lie to make me happy, even if it means not doing its job. Lesson learned: never, ever trust a clanker.

And the debugging, oh my god, the debugging. It reads a million files, runs tests, does magical things - and boom, a solution. So I ask a basic question (what about...?). "Of course, you are right," the moron replies - let's burn yet another 20 USD (LLM is short for LLM Like Money). It can go on like this for a few dozen prompts, back and forth, and still it will sometimes return to an incorrect assumption from half an hour before. It will ignore requests and specs, and do random things. It's far from an intern...

The fact stands: it's a better Java developer than I am. But I am a terrible Java developer - I had never written a single line of Java before! I have no idea how this will play out, as I see that my colleagues (and, most likely, the entire industry) have no idea what we are doing either. Something looks like it's working, and we are expected to ship it. Not that there is any hard requirement, but it's a race. Layoffs are a regular thing now, and it's a dumb idea to be on the naughty list. I'd not use any SaaS in the coming years, as I trust them even less than I used to.

Now, I have a great manager who understands that understanding is essential. I am able to slow down and learn - little by little, and with the obvious expectation of still shipping stuff. But I am lucky to have him, and who knows for how long.

Java

The other thing: I am now a Java developer. Oh, what a terrible life it is. The language is... OK, at best. Nonetheless, it is extremely stagnant. The developer experience is abysmal!

I have a working theory that IntelliJ is the worst thing that ever happened to Java folks. There is zero incentive to fix things, or to work on adding modern tooling. There is a CLI, but it's a pain to work with. There is an LSP, but it's barely working. Both are under-invested, as IntelliJ is there, keeping the entire ecosystem in its dark ages.

I try to use Emacs, and with the GenAI it's almost nice. More on this later. But I understand why people use this godforsaken IDE: people use IntelliJ because other people use IntelliJ. It fixes a million things which should not be fixed in an editor, but in the ecosystem. Toying with Go at the same time just shows how primitive Java is.

And there is Spring. If anyone comes to me and whines about how much magic is in Rails, I will point them here. Spring and Lombok are much bigger obstacles to learning to read the code than the language itself.

It's better to have experience in Java than not to have. We are living in the age of layoffs, after all. But it's a miserable life.

RTO

And, starting next month, I am expected to be in the office twice a week. I don't have a long commute (15 mins?), and I'll be able to drop The Kid at school on the way. Still, it changes nothing: the idea of the office should be left in the past.

Emacs

My beloved editor deserves a special mention. Since I'm actively coding again (after hours, mostly), I finally set up LSP, Consult, and all that jazz. At work, I'm rocking Agent Shell. Ready Player One became my music player, and I moved to mu4e for my email needs.

I also finally managed to get X11 forwarding over SSH working. Therefore, I get my private Emacs (with emails, RSSes, Mastodons) on my work MacBook. A short guide will follow in the coming weeks, but so far it's only working over the local network. We'll see if RTO won't make it more challenging...

-1:-- Updates Q1/2026 (Post Michal Sapka)--L0--C0--2026-04-17T11:11:07.000Z

Dave Pearson: blogmore.el v4.1

Following on from yesterday's experiment with webp I got to thinking that it might be handy to add a wee command to blogmore.el that can quickly swap an image's extension from whatever it is to webp.

So v4.1 has happened. The new command is simple enough, called blogmore-webpify-image-at-point; it just looks to see if there's a Markdown image on the current line and, if there is, replaces the file's extension with webp no matter what it was before.
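I haven't read blogmore.el's actual implementation, but a command like that can be sketched in a few lines. Everything below (the function name, the regexp, the line-local search) is my guess at one possible approach, not the package's code:

```emacs-lisp
;; Hypothetical sketch: find a Markdown image on the current line and
;; rewrite its extension to webp.  Not blogmore.el's real code.
(defun my/webpify-image-at-point ()
  "Replace the extension of the Markdown image on this line with webp."
  (interactive)
  (save-excursion
    (beginning-of-line)
    ;; Match ![alt](path.ext), capturing the extension as group 2.
    (when (re-search-forward "!\\[[^]]*\\](\\([^)]+\\)\\.\\([a-zA-Z]+\\))"
                             (line-end-position) t)
      (replace-match "webp" t t nil 2))))
```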

If/when I decide to convert all the png files in the blog to webp I'll obviously use something very batch-oriented, but for now I'm still experimenting, so going back and quickly changing the odd image here and there is a nicely cautious approach.

I have, of course, added the command to the transient menu that is brought up by the blogmore command.

One other small change in v4.1 is that a newly created post is saved right away. This doesn't make a huge difference, but it does mean I start out with a saved post that will be seen by BlogMore when generating the site.

-1:-- blogmore.el v4.1 (Post Dave Pearson)--L0--C0--2026-04-17T09:25:37.000Z

Sacha Chua: Make chapter markers and video time hyperlinks easier to note while I livestream

I want to make it easier to add chapter markers to my YouTube video descriptions and hyperlinks to specific times in videos in my blog posts.

Capture timestamps

Using wall-clock time via inactive Org Mode timestamps makes more sense to me than using video offsets because wall-clock times are independent of any editing I might do.

C-u C-c C-! (org-timestamp-inactive) creates a timestamp with a time. I probably do this often enough that I should create a Yasnippet for it:

# -*- mode: snippet -*-
# name: insert time
# key: zt
# --
`(format-time-string "[%Y-%m-%d %a %H:%M]")`

I also have Org capture templates, like this:

(with-eval-after-load 'org-capture
  (add-to-list
   'org-capture-templates
   `("l" "Timestamp" item
     (file+headline ,sacha-stream-inbox-file "Timestamps")
     "- %U %i%?")))

I've been experimenting with a custom Org Mode link type "stream:" which:

  • displays the text in a larger font with a QR code for easier copying
  • sends the text to the YouTube chat via socialstream.ninja
  • adds a timestamped note using the org-capture template above

Here is an example of that link in action. It's the (Log) link that I clicked on.

Let's extract that clip
(compile-media-sync
 '((combined (:source
              "/home/sacha/proj/yay-emacs/ye16-sacha-and-prot-talk-emacs.mp4"
              :original-start-ms "51:09"
              :original-stop-ms "51:16"))
   (combined (:source
              "/home/sacha/proj/yay-emacs/ye16-sacha-and-prot-talk-emacs-link-overlay.png"
              :output-start-ms "0:03"
              :output-stop-ms "0:04"))
   (combined (:source
              "/home/sacha/proj/yay-emacs/ye16-sacha-and-prot-talk-emacs-qr-chat-overlay.png"
              :output-start-ms "0:05"
              :output-stop-ms "0:06")))
 "/home/sacha/proj/yay-emacs/ye16.1-stream-show-string-and-calculate-offset.mp4")

I used it in YE16: Sacha and Prot talk Emacs. It was handy to have a link that I could click on instead of trying to remember a keyboard shortcut and type text. For example, these are the timestamps that were filed under org-capture:

  • Getting more out of livestreams
  • Announcing livestreams
  • Processing the recordings
  • Non-packaged code

Here's a short function for getting those times:

(defun sacha-org-time-at-point ()
  "Return Emacs time object for timestamp at point."
  (org-timestamp-to-time (org-timestamp-from-string (org-element-property :raw-value (org-element-context)))))

Next, I wanted to turn those timestamps into a hh:mm:ss offset into the streamed video.

Calculate an Org timestamp's offset into a YouTube stream

I post my YouTube videos under a brand account so that just in case I lose access to my main sacha@sachachua.com Google account, I still have access via my @gmail.com account. To enable YouTube API access to my channel, I needed to get my brand account's email address and set it up as a test user.

  1. Go to https://myaccount.google.com/brandaccounts.
  2. Select the account.
  3. Click on View general account info
  4. Copy the ...@pages.plusgoogle.com email address there.
  5. Go to https://console.cloud.google.com/
  6. Enable the YouTube data API for my project.
  7. Download the credentials.json.
  8. Go to Data Access - Audience
  9. Set the User type to External
  10. Add my brand account as one of the Test users.
  11. Log in at the command line:

     gcloud auth application-default login \
         --client-id-file=credentials.json \
         --scopes="https://www.googleapis.com/auth/youtube"
    

Then the following code calculates the offset of the timestamp at point based on the livestream that contains it.

;;;###autoload
(defun sacha-google-youtube-stream-offset (time)
  "Return the offset from the start of the stream.
When called interactively, copy it."
  (interactive (list (sacha-org-time-at-point)))
  (when (and (stringp time)
             (string-match org-element--timestamp-regexp time))
    (setq time (org-timestamp-to-time (org-timestamp-from-string (match-string 0 time)))))
  (let ((result
         (emacstv-format-seconds (sacha-google-youtube-live-seconds-offset-from-start-of-stream
                                  time))))
    (when (called-interactively-p 'any)
      (kill-new result)
      (message "%s" result))
    result))

(defvar sacha-google-access-token nil "Cached access token.")

;;;###autoload
(defun sacha-google-access-token ()
  "Return Google access token."
  (or sacha-google-access-token
      (setq sacha-google-access-token
            (string-trim (shell-command-to-string "gcloud auth application-default print-access-token")))))

(defvar sacha-google-youtube-live-broadcasts nil "Cache.")
(defvar sacha-google-youtube-stream-offset-seconds 10 "Number of seconds to offset.")

;;;###autoload
(defun sacha-google-youtube-live-broadcasts ()
  "Return the list of broadcasts."
  (or sacha-google-youtube-live-broadcasts
      (setq sacha-google-youtube-live-broadcasts
            (request-response-data
             (request "https://www.googleapis.com/youtube/v3/liveBroadcasts?part=snippet&mine=true&maxResults=10"
               :headers `(("Authorization" . ,(format "Bearer %s" (sacha-google-access-token))))
               :sync t
               :parser #'json-read)))))

(defun sacha-google-youtube-live-get-broadcast-at-time (time)
  "Return the broadcast encompassing TIME."
  (seq-find
   (lambda (o)
     (or
      ;; actual
      (and
       (alist-get 'actualStartTime (alist-get 'snippet o))
       (alist-get 'actualEndTime (alist-get 'snippet o))
       (not (time-less-p time (date-to-time (alist-get 'actualStartTime (alist-get 'snippet o)))))
       (time-less-p time (date-to-time (alist-get 'actualEndTime (alist-get 'snippet o)))))
      ;; actual, not done yet
      (and
       (alist-get 'actualStartTime (alist-get 'snippet o))
       (null (alist-get 'actualEndTime (alist-get 'snippet o)))
       (not (time-less-p time (date-to-time (alist-get 'actualStartTime (alist-get 'snippet o))))))
      ;; scheduled
      (and
       (null (alist-get 'actualStartTime (alist-get 'snippet o)))
       (null (alist-get 'actualEndTime (alist-get 'snippet o)))
       (not (time-less-p time (date-to-time (alist-get 'scheduledStartTime (alist-get 'snippet o))))))))
   (sort
    (seq-filter
     (lambda (o)
       (or
        (alist-get 'actualStartTime (alist-get 'snippet o))
        (alist-get 'scheduledStartTime (alist-get 'snippet o))))
     (alist-get 'items
                (sacha-google-youtube-live-broadcasts)))
    :key (lambda (o)
           (or
            (alist-get 'actualStartTime (alist-get 'snippet o))
            (alist-get 'scheduledStartTime (alist-get 'snippet o))))
    :lessp #'string<)))

(defun sacha-google-youtube-live-seconds-offset-from-start-of-stream (wall-time)
  "Return number of seconds for WALL-TIME from the start of the stream that contains it.
Offset by `sacha-google-youtube-stream-offset-seconds'."
  (+ sacha-google-youtube-stream-offset-seconds
     (time-to-seconds
      (time-subtract
       wall-time
       (date-to-time
        (alist-get 'actualStartTime
                   (alist-get 'snippet
                              (sacha-google-youtube-live-get-broadcast-at-time wall-time))))))))

;;;###autoload
(defun sacha-google-clear-cache ()
  "Clear cached Google access tokens and data."
  (interactive)
  (setq sacha-google-access-token nil)
  (setq sacha-google-youtube-live-broadcasts nil))

For example:

(mapcar
 (lambda (o)
   (list (concat
           "vtime:"
           (sacha-google-youtube-stream-offset
            o))
         o))
 timestamps)
19:09 Getting more out of livestreams
37:09 Announcing livestreams
45:09 Processing the recordings
51:09 Non-packaged code

It's not exact, but it gets me in the right neighbourhood. Then I can use the MPV player to figure out a better timestamp if I want, and I can use my custom vtime Org link type to make those clickable when people have JavaScript enabled. See YE16: Sacha and Prot talk Emacs for examples.

It could be nice to log seconds someday for even finer timestamps. Still, this is handy already!

This is part of my Emacs configuration.
View Org source for this post

You can e-mail me at sacha@sachachua.com.

-1:-- Make chapter markers and video time hyperlinks easier to note while I livestream (Post Sacha Chua)--L0--C0--2026-04-17T04:27:43.000Z

Sacha Chua: YE16: Sacha and Prot talk Emacs

Updated chapter markers and transcript

In this livestream, I showed Prot what I've been doing since our last conversation about Emacs configuration and livestreaming.

  • 00:00 Opening
  • 04:24 Workflow checklist
  • 04:47 Demonstrating sacha-stream-show-message and qrencode
  • 05:54 qrencode
  • 07:55 Embark
  • 17:14 My objectives
  • 19:00 keycast-header-mode
  • 19:45 Trade-offs when livestreaming while coding
  • 21:24 Trade-offs: seeing less text on the screen
  • 23:52 Lowering the effort needed to announce a stream: Prot just announces it and the blog post embeds it
  • 24:43 Timestamps
  • 27:29 Different types of livestreams
  • 28:14 Reading other people's configs
  • 30:12 Hanging out
  • 31:40 Livestreams for explaining specific things
  • 32:00 Prot on didactic livestreams
  • 34:07 Prot suggests breadcrumbs
  • 37:59 Announcing livestreams
  • 38:58 Embeds: Prot embeds specific YouTube videos instead of the general channel one
  • 39:32 Demo of my new shortcut for converting time zones
  • 41:48 Ozzloy's questions about time zones and QR codes
  • 43:46 Prot on announcing livestreams on blogs
  • 45:25 Processing the recordings
  • 47:15 Commitment devices
  • 48:29 Automating more of the process
  • 51:14 Copying non-packaged code
  • 52:25 Prot on defcustom
  • 55:12 helpful and elisp-demos
  • 56:23 Prot on code libraries
  • 56:50 Prot rewrites functions to fit his style and naming conventions
  • 59:18 Prot's preference for small functions
  • 01:00:23 avy-goto-char-timer
  • 01:02:40 One-shot keyboard modifiers
  • 01:03:29 Toggling
  • 01:05:08 System-wide toggle shortcuts using emacsclient
  • 01:07:25 My next steps
  • 01:08:18 Tips from Prot: small functions used frequently
  • 01:09:06 Maybe using the header line for tips?
  • 01:10:23 Reorganizing keys

2026-04-16-01 Preparing for chat with Prot.jpeg

Questions I'm thinking about / areas I'm working on improving:

  • (Log) Getting more out of livestreams (for yourself or others)
    • You've mentioned that you don't really go back to your videos to listen to them. I was wondering what could make the livestreamed recordings more useful to the person who made them, people who watched them live, or people who come across them later.
    • Tradeoffs for livestreaming:
      • Plus: debugging help, capturing your thinking out loud, conversation, sharing more practices/tips
      • Minus: fitting less on the screen, distractibility
    • A few types of livestreams:
    • (Log) Announcing livestreams
      • You add a post for scheduled/spontaneous livestreams and then you update it with the description; probably fine considering RSS readers - people can visit the page if it's finished
      • Debating whether to embed the channel livestream (picks next public scheduled stream, I think) or embed the specific livestream

      • Now on https://yayemacs.com (also https://sach.ac/live, https://sachachua.com/live)
      • Added timestamp translation to Embark keymap for timestamps, sacha-org-timestamp-in-time-zones
      • TODO: Post template
      • TODO: ical file
      • TODO: Easier workflow for embedding streams
      • TODO: Google API for scheduling a livestream
    • (Log) Processing the recordings
      • I like editing transcripts because that also helps me quickly split up chapters
      • Tracking chapters on the fly
      • Extracting screenshots and clips
      • Turning videos into blog posts (or vice versa)
      • TODO: Automate more of the downloading/transcription, common edits, Internet Archive uploads
  • (Log) Do you sometimes find yourself copying non-packaged code from other people? How do you like to integrate it into your config, keep references to the source, check for updates?
    • convert defvar to defcustom
    • Current approach: autoload if possible; if not, add a note to the docstring

         (use-package prot-comment ; TODO 2026-04-16:
           :load-path "~/vendor/prot-dotfiles/emacs/.emacs.d/prot-lisp"
           :commands (prot-comment-timestamp-keyword)
           :bind
           (:map prog-mode-map
                 ("C-x M-;" . prot-comment-timestamp-keyword)))
      
         ;;;###autoload
         (defun sacha-org-capture-region-contents-with-metadata (start end parg)
           "Write selected text between START and END to currently clocked `org-mode' entry.

         With PARG, kill the content instead.
         If there is no clocked task, create it as a new note in my inbox instead.

         From https://takeonrules.com/2022/10/16/adding-another-function-to-sacha-workflow/, modified slightly so that it creates a new entry if we are not currently clocked in."
           (interactive "r\nP")
           (let ((text (sacha-org-region-contents-get-with-metadata start end)))
             (if (car parg)
                 (kill-new text)
               (org-capture-string (concat "-----\n" text)
                                   (if (org-clocking-p) "c"
                                     "r")))))
      
    • prot-window: run a command in a new frame
    • Look into using keyd for tap and hold space?
    • header line format with common tips
Transcript

00:00:00 Opening

[Sacha]: This is Yay Emacs number 16. I'm Sacha Chua and today I will be talking with Prot once my alarms stop going off. Yes, yes. I'm going to be talking with Prot later, assuming that all of this stuff works. Let me double check my audio is on. Audio is definitely on. I'm trying a little bit early so that I'm not doing so much last-minute panicking. Let's see what we've got here. I am also trying the new OBS 32 interface for things, so that should be fun. Alright, thank you to phyzixlab for confirming that the audio works. I am so fairly new to this livestreaming thing, but I'm looking forward to seeing if I can do it more regularly because I have a little bit of predictable focus time between now and the end of June. In July, the kid is on summer break and so will probably want to hang out with me all the time. Or not, you know, kids are like that, right? So in the meantime, I am trying to get the hang of scheduling things and since Prot happens to have an Emacs coaching service, I figured I would engage him to coach me on live streaming and Emacs and all sorts of stuff, which is really, you know, making sure that I have somebody to talk to and bounce ideas around with and see where we end up. So the last time, which was, Yay Emacs, when was this? Yay Emacs 10, I had a coaching session with him to talk about Emacs workflows and streaming. So I've been working on modularizing my configuration. I'll explain all of this again when he comes on, but just to get the hang of this. I've modulized my config. I've gotten through hundreds of function definitions and exported them all into individual files. I have in fact even renamed them from my-whatever to sacha-whatever. So it's slightly easier to copy my functions because they won't trample over other people's custom functions called my-whatever. My background blurring is very background blurring. So that's all good. And then I've got a couple of other modifications that I've made. 
So I've made good progress on this very long to-do list that I had made for myself after his chat. But the kiddo is here. Oh my goodness! Okay, you're gonna go back to school and stuff? You just wanted to drop by and make a comment? Yes. Also, the teacher let me change my name, but not family. They just wanted to add a - in parenthesis. Oh, yeah. Oh, that's good. Now they can refer to you. Post my name and my nickname. Alright, I'm going to test this new thing. Interesting conflict here. The kiddo likes making cameos. I am not sure how I feel about the kiddo making cameos. Anyhow! Where are we? Okay, the mic is unmuted again.

00:04:24 Workflow checklist

[Sacha]: I am going through my checklist. I have this lovely checklist now. It includes, naturally because it's Org Mode, it includes Emacs Lisp buttons that I can just click on to get stuff running. In this case, for example, I can use obs-websocket-el to start recording and start streaming at the same time. So that's all good.

00:04:47 Demonstrating sacha-stream-show-message and qrencode

[Sacha]: And I want to double check that this message thing works. Let's go see if I can send a message to the chat. Show string. This is a test message that you can ignore. And theoretically that shows up there. That shows up in the chat with a timestamp. So people using video on demand feature where you can go back and just go playback part of the thing can go see it. It would help, of course, if I had the time. And if I expand this. You have the time in the mode line here. It's currently 10:25. But then, my Firefox... Oh, maybe I should just tell you what. I will make this above others. There you go. Fancy. Super fancy. Except this is right where the...

00:05:54 qrencode

[Sacha]: What's the QR code? The QR code just repeats the string. So this will be a little more handy if I have... Let me just double check that it does do the string properly. Come on, show me the thing. Yep. So this is my... In case you're watching this in a mobile device and I show URLs, like for example, let's bring up Prot's configuration here. Let's go to... Let's do, do, do, do, do... Prot. Yeah, here. And then if I say show string and I give it the URL, then it gives you the string and the URL should be in the QR code. So people who are watching mobile. You can do that. People who are in the chat can get it from the chat. It's timestamped so that if I grab the timestamps later on, I can use that sort of for chapters. And just generally all these little conveniences. This QR code is provided by the qrencode package. So it's in Emacs. It's actually characters. There's probably a way to just insert the image. But I thought it was cool. I can't remember who had this technique in one of his videos. Maybe it was John Kitchin? That seems like the sort of thing he might do. Or it might be someone else. Anyway, just these little conveniences because copying text, especially in mobile, or trying to type things... Try to pause the video at just the right moment. It's very annoying. Eventually, I would like to plug it into all the usual Embark stuff. For example, you'll see this later as I go through this stuff with Prot. Log buttons will show messages.
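The character-based QR display described here comes from the qrencode package; a minimal sketch of using it (the URL is just an example):

```elisp
;; The qrencode package renders a QR code as text characters in a
;; dedicated buffer; `qrencode-string' is its interactive entry point.
(require 'qrencode)
(qrencode-string "https://sachachua.com/live")
```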

00:07:55 Embark

[Sacha]: But theoretically, it would be nice to have my Embark here. For example, I'm on Embark on an org URL link. It makes sense that... Wait a minute, I do have it. Okay, I think I have it on Z here. Is that a capital Z or a small z? Let's find out. Z? Not a small z. Capital Z. Whoa, look at that! Okay, okay, so I already do have it. Embark is a package that lets you have context-sensitive keyboard shortcuts. And so I have this now mapped so that if I want an org link, I can press control dot and Z and it will send it to the chat and display it on the screen with a message because who wants to type things manually? You know, this is Emacs. We don't do anything manually. And then theoretically, that also should show up in... Look at that! It's showing up over here in my timestamp section using the magic of org-capture. It includes a timestamp and then, of course, with a little bit of math, I can calculate this as an offset into the streaming video file because I started the stream probably at the same time. Anyway, just a little bit of math to calculate that. And then I can get chapters out of it. Theoretically. Or I could use that to index into the transcript and edit things. Hello, Prot! Hello! We are already live. I have just been on screen.
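The "capital Z on an Org link" binding described here can be sketched like this, assuming Embark's Org support is loaded; the keymap name follows embark-org's conventions, and sacha-stream-show-message stands in for the actual send-to-chat command:

```elisp
;; Hedged sketch: add a "send link to stream" action to Embark's
;; Org-link actions.  `embark-org-link-map' is defined by the
;; embark-org library that ships with Embark.
(with-eval-after-load 'embark-org
  (define-key embark-org-link-map (kbd "Z")
              #'sacha-stream-show-message))
```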

[Prot]: Already live! Great. Yes.

[Sacha]: Panicking. Not panicking. Experimenting with all the fun stuff. I'm now going to share my screen with you so that you can see also. Select window. Let's go to all of it. Screen one? Screen one. I think it's screen one. Okay. Allow. So, theoretically, you should see my screen.

[Prot]: Very well, very well. Looks good, looks good. We have connectivity issues, it seems.

[Sacha]: Your audio sounds choppy.

[Prot]: Yeah, same here. I cannot hear you well. Can you hear me now?

[Sacha]: I dropped my performance.

[Prot]: Okay, okay, do that. Well, very well. Because it seems that our... Yes, okay, I did the same. Okay, so hopefully this will work. Let's see.

[Sacha]: It's an experiment.

[Prot]: It seems more stable now.

[Sacha]: Yes, this is one of the reasons why we're having these sessions, so that you can experiment to see what's possible. And I was just telling stream that I've been having a lot of fun tinkering with a lot of the ideas that I was working on after the last chat two weeks ago. So my goal for this session is to not panic.

[Prot]: I really cannot hear you clearly. I keep getting interruptions, so... It seems that... Yeah, I don't know what we could do. Maybe I can try to leave and rejoin, maybe. Let me exit and rejoin Jitsi, maybe that will fix it.

[Sacha]: Okay, let's try that. Okay, so let me do that very quickly. Quite possibly, I am asking my computer to do too many things. Let's see. I am asking my computer to do too many things, audio-wise.

[Prot]: Okay, we will see. We will find out.

[Sacha]: Let me try changing my virtual mic. How about this one?

[Prot]: No, your audio is still kind of choppy. Why is your audio choppy?

[Sacha]: Let's see. What do you think? Yeti, monitor your audio. Let me check. Not good. It's okay. Live debugging. Here we go. Okay, you are, where are we? You are Firefox. Yes, yes, yes. Okay, I can disconnect the, uh, disconnect the connections. Let me think. Connect the ports of Combined Sink Monitor to Firefox Input.

[Prot]: And while you do that, we will... Testing.

[Sacha]: How are we doing?

[Prot]: There it is.

[Sacha]: Is this slightly better? Testing. One, two, three.

[Prot]: Yeah, let's see here, so...

[Sacha]: Okay, that seems to be good. And now I'm sharing my screen. How is our screen? Hmm, does not like screen sharing at the same time. Let me see what's going on with my memory. My memory is fine. I have memory. Let us stop the screen sharing. How are we now? Is our audio back?

[Prot]: Okay. I can hear you well. I can hear you well in terms of the fact that there is no choppiness now in the audio. However, your voice has been distorted a little bit. It's not a problem. I can hear you clearly, but I just mention it for the sake of your setup.

[Sacha]: This is interesting and I'm not entirely sure how I will go about fixing it at this moment.

[Prot]: No problem. It's not really a problem because I hear you well, so that's enough. I am tempted to suggest the non-free...

[Sacha]: Let's jump over to Google Meet and see if that's any better.

[Prot]: Let's do it. Send me the link and let's do that. No problem. We are already on YouTube anyways. Let me try this.

[Sacha]: I will send it to you in the Jitsi chat and then things will be crazy. It's in the Jitsi chat and we'll see if that works. Does that work? I will also email it to you. That's not the link. Okay. Now I need to see whether this actually works. Oh. Ah! Ah, technology! How does it work? Camera is starting. Camera is not starting. I don't know what it's talking about. Camera is starting. Allow camera. Join now. Okay. Testing. My audio works. Admit one guest. Admit. Okay. Testing. Does this work now? I can hear you clearly. Okay. Now I'm going to try sharing this.

[Prot]: Yes. Very well. And then let's see what happens. Share. Yeah. The moment of truth. Let's see.

[Sacha]: Technology continues to work?

[Prot]: Yeah, yeah, it does work. This is smooth. This works. So let's see. Okay, all right.

[Sacha]: So it probably means that in the future I might actually need to spin up our Big Blue Button server because sometimes the free Jitsi, you know, you're just dealing with whatever you get for free, right? We already have comments. phyzixlab wants to know, well, phyzixlab says, Prot, I'm jealous of your beard. Which Emacs package can I install to have a glorious beard like you? Emacs Genes. Emacs Genes. Y'all can book your own coaching session with Prot. Although technically, I don't mind sharing mine.

00:17:14 My objectives

[Sacha]: Okay, so my objectives is I want to capture and share more, right? And that's great because in the experiments that I've been doing with live streaming so far, I have found myself going on tangents based on people's questions. And theoretically, I can go back and use those transcripts, which I haven't yet. But that could be more stuff into blog posts that are more searchable. And creating opportunities for conversation, which I think you've also been experiencing with your experiments with live streams lately. Because it is nice to have that back and forth when you're demonstrating something and you can immediately show something that was unclear. Quick overview of my timeline. Again, until June, I've got a fairly predictable schedule, except for the times when the kid turns out to have a substitute teacher and is too grumpy to go to school. So just some flexibility still with the schedule, but I am starting to experiment with scheduling chats. So that's nice. And this is our first experiment with it. I'm like, okay, let's try a live stream at this date at this time with somebody who is going to show up also. And then in July and August, since my schedule will be less predictable, then we'll do more spontaneous things like we also have been doing. And then September onwards is probably going to be EmacsConf. So with that in mind, I want to quickly share the updates from the last one. And probably, you know, you will think about stuff and say, oh, yeah, have you thought about doing this? Or, oh, that's good. Try this one next. Or in my experience, so and so and so. And of course, I'd love to hear what you've been learning about also.

[Prot]: Yeah, yeah, yeah, yeah, yeah. Very good.

00:18:59 keycast-header-mode

[Prot]: And I will tell you my experience as well, because based on our last exchange, I also tried keycast at the top, for example.

[Sacha]: Yeah, yeah. It gets out of the way of the closed captions.

[Prot]: It does. It does. Yeah. So it has some advantages and it's always visible and the key and the command is always visible. But I have to get used to it because it was distracting me.

[Sacha]: Yeah, I hear you, I hear you. It's kind of a trade-off, right? And that actually goes to one of the points that I wanted to touch on later where getting the hang of live streaming while coding or while working does require a fair bit of trade-offs. On the plus side, I'm going to see if this works. It should insert a chapter marker so

00:19:45 Trade-offs when livestreaming while coding

[Sacha]: that I know, okay, this part to this part is this conversation. So when you're live streaming while you're doing package maintenance or you're working on config or whatever else, it is slightly more distracting because people come up with interesting comments and conversations. But on the plus side, it is also, as I've seen you do, helpful at debugging. You're staring at something. You're like, what's wrong here? And someone is like, oh yeah, you're missing a trailing slash.

[Prot]: Yes, yes. It really helps. Well, I'm not sure if it helps, though, because the fact that you are talking to the chat means that you are not paying attention to what is in front of you. So it can cut both ways, right? There are times, though, where it really helps. Yes. Where you are completely lost and then the people in the chat are like, hey, that's how you fix it.

[Sacha]: All right. So maybe I just have to A, build up more of a conversation so that we can get those benefits and B, figure out how to run my narration on a separate worker thread in my brain. I don't think it happens. I think I used to be more multithreaded in the past, but I am slightly less multithreaded now. However, it turns out that spending all this time with kids means I am getting better at generating verbal responses that I'm not necessarily, you know, like focusing too much on or just saying like stuff to keep them amused and entertained. Oh, that's quite a skill. Yes,

[Prot]: that's good. That's good. I don't know. But yeah, so there

[Sacha]: are trade-offs here.

00:21:24 Trade-offs: seeing less text on the screen

[Sacha]: The other thing is now that I am using mode to switch on my... I am streaming, do the Fontaine preset and all of that stuff. Now there's like less space on my screen for code. So I had to get used to it again. yes yes yes

[Prot]: yes that that's one of the downsides of course yes like you have to have a larger font so that people can see what you are typing and then of course that comes at the cost of including fewer things on screen Though maybe you could have a little bit of a wider frame, like specifically in your case. I don't know, it's already at the 80 characters already? Yeah, it's already... Yeah, I think in my case, my frame fits about 100 characters. Well, I haven't measured it, but I think it's something in that... Like, yeah, about there is my frame.

[Sacha]: Yeah, it has about 80 characters. So it's about 75 characters.

[Prot]: So in my case...

[Sacha]: All right. And then the stream can tell me if this is still readable, because of course more code on the screen means more code getting written or done.

[Prot]: And just to say also more code on the screen means that it can be easier to debug or write the code. Because you have the context right there. You don't have to go up and down the screen to find it.

[Sacha]: Especially since I'm used to actually dividing my frame into two windows so I can do left and right. And I'm doing this on a standard aspect mode. You have a widescreen, so you're a little bit spoiled in this regard. I only have like two monitors that I'm doing. But maybe that is what I'll end up just using separate frames for. Yes, so slightly smaller font size, and stream can tell me whether this is too small for them. I know people who are older will develop an appreciation for larger font also, so take advantage of this ability to work with medium-sized fonts while they can. So font sets, that's definitely a thing. And then just trying to figure out how I can make it more useful both to other people and for myself and during the live stream as well as after the live stream.

00:23:52 Lowering the effort needed to announce a stream: Prot just announces it and the blog post embeds it

[Sacha]: Now you've mentioned you don't actually go back into your live streams afterwards. You just plug the YouTube video, you update your description so that it's past tense instead of future tense and you republish your post. I think that's your workflow, right? Even less. So I don't even retrofit the

[Prot]: past tense, you know, present tense to past tense. It's like all present tense. It's like I will do a live stream. It will be recorded. You can find it here kind of thing. Okay.

[Sacha]: All right.

[Prot]: And so just to say, though, just to say the reason I do this is because I don't want to go through a three hour stream again because then a three hour stream becomes like a ten hour stream in practice. And this means that it adds friction and it adds to the requirements, which effectively means I will be doing fewer of them. Yeah.

00:24:43 Timestamps

[Sacha]: That's what I'm thinking. Maybe lightweight sort of chapter markers. You've mentioned you just remember this sort of stuff, but since I don't actually remember this sort of stuff, having a way for Emacs to send messages to the stream and also show things in the timestamps. I have a timestamp now. It's nice. It just says Org Capture. And all that will then theoretically make it easier for me to say, okay, let's go find the chapter and then I'll just adjust the timestamps afterwards to say, okay, from this point to this point. If people are interested, they can go in there and they can look at the transcript for more.

[Prot]: I think we discussed this last time as well. You could have a function like start-stream and it starts a timer or it starts recording the time and then relative to that point, any offset and that's your timestamp right away. And whenever there is some event happening, you can type a key and then maybe it gives you a prompt and you write what is it, like just a string and then that is the chapter.

[Sacha]: An org timer will do that kind of insert a timestamp for you. But one of the reasons why I liked having my custom show message thing is that it can display the text on the screen, display a QR code for the text in case people want to copy the function that I'm talking about, send it to the chat so that people using video on demand can say, oh yeah, at around 10:25 or whatever. I'm currently using wall-clock timestamps, which means I need to modify my mode line so that the time starts earlier and people can use that to jump around the thing. And then, so it's like in half a dozen places, which is what org-timer does not get me if I'm just inserting a timestamp here. Anyway, minor, like, you know, little workflow improvements. But it's this whole, as you said, I don't want to go back and spend six hours processing the three-hour livestream. I want to say, all right, this video has some potential interesting things here because these people ask these questions. This is roughly the time when I answer those questions. Ideally, this is the text of the question. Someday, there might even be screenshots and clips. I'm modifying compile-media to make it easier for me to do that kind of video editing from within Emacs.
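The built-in org-timer workflow mentioned here, the relative-timestamp approach Prot describes, looks roughly like this:

```elisp
;; Built-in `org-timer': start a relative timer at the top of the
;; stream, then insert offsets as events happen.
(org-timer-start)
;; Later, M-x org-timer inserts an offset like "0:19:09" at point,
;; and M-x org-timer-item starts a timestamped plain-list item.
```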

[Prot]: Oh, wonderful.

[Sacha]: yeah, yeah. But it's all still like, okay, progress. First, I've got to develop the habit of streaming, and then I have to develop the habit of saying, now we are talking about this topic so that it can all get marked everywhere.

00:27:29 Different types of livestreams

[Sacha]: And that got me to thinking, well, there are a couple of different types of live streams and you might have also done something about which ones fit the way that you had to present. One is the, you know, the, I'm going to spend time doing this anyway, which is like your package maintenance, where you will accept a little bit of distractibility for the benefit of having other people around to ask questions and clarify things and stop you when you're getting stuck somewhere. I have something I specifically want to teach and you've done this before with walking through a blog post and just demonstrating things interactively because there's some things that are easier when you're showing it, right?

[Prot]: Correct, correct. ...

00:28:14 Reading other people's configs

[Sacha]: Reacting to other things. In this one, I've started to have fun with because I've been going through your Emacs configuration, which is several hundred pages when converted to a PDF. And I forget, do you actually, like, do you produce a PDF, PDF, like a nicely thingy?

[Prot]: I haven't done it, but that's trivial to do, actually. I could do it.

[Sacha]: Yeah, yeah, so I've also been reading tecosaur's PDF, and his PDF is gorgeous. Like, it starts off with, like, a cover page and and everything. But it's Doom Emacs. I have to translate a lot of things to my specific setup. But now I have literate config envy. Anyway, that's an entire category of live streams here, which could just be me copying interesting things out of other people's configs. Today we are experimenting with a chatting with a guest variety of live stream, which you also do with your Prot asks. Actually, I forget. Are those live streams?

[Prot]: They are not live streamed, but the idea is that I do not edit them. However, if somebody really wants, I can edit it. So the idea is let's go with the flow. Don't worry about it. It's casual, all that. But if somebody says something that doesn't sound right, doesn't mean it or whatever, I'm happy to edit it.

[Sacha]: Yeah. I'm starting to look into how to do that if I'm doing this live and apparently if I set up a sufficiently long buffer in OBS for streaming, like a delay for 20 seconds or 15 seconds, then I can stop streaming and the stuff that happened in the last 10 or 15 seconds doesn't make it out to the public, but it's still kind of...

[Prot]: Living dangerously, yeah.

[Sacha]: Yeah, yeah. Because seeing as I'm still practicing remembering to flip the webcam down when the kid runs in and wants to be on camera, I'm like... My reaction time, not there yet.

00:30:12 Hanging out

[Sacha]: And then other people are like, they just hang out. They're not like, I'm going to do something. They're just hanging out, which I'm sort of starting to experiment with when I'm doing Emacs News on Mondays, because I'm like, I'm categorizing it anyway, but it doesn't require a lot of brainpower because I'm not coding or debugging. I'm just saying, okay, this looks like an Org Mode link. This looks like a miscellaneous link. And then some people just play games, which is fun too.

[Prot]: Yes, that's good. And they want to have somebody on the side, guide them through what they are doing.

[Sacha]: Yeah, or it blends into a hanging out sort of thing. Yes, yes. And it's like, what is the kiddo doing now?

[Prot]: Yeah, the camera, the camera. That's fun, that's fun. Good reaction time. Yeah, yeah, yeah.

[Sacha]: Yes, thank you for your homework. I will scan this and put it online later. This is it. Yes, life. Life.

[Prot]: Putting your reaction time to the test.

[Sacha]: Yes. So in terms of getting more out of livestreams, That's what I've been thinking about lately. I think I would like to do more of these, you know, hey, folks, keep keeping company while I'm coding this or whatever, since you've been having a lot of good experience with that.

00:31:40 Livestreams for explaining specific things

[Sacha]: I would also like to eventually move into more of these. I have something I specifically want to demonstrate, which probably necessitates actually organizing my thoughts. And you've done a bunch of these. After writing a post, it seems like more like recording a video and walking through it. Do you also sometimes do them before writing a post?

00:32:00 Prot on didactic livestreams

[Prot]: I haven't done that but actually, when I write posts, I write them in one go, so maybe I should do a live stream where I actually write a blog post just to show that I can do it. The thing is of course what do you want to communicate, because if it's teaching, like if you are writing it and trying to teach it at the same time, there is a chance that you might leave something out. Some of that detail, some of that nuance. For example, if you want to explain how a form in Emacs Lisp works, let's say if or cond, you may not come up with a very good example live and it may not have didactic value. So even though you know how it works, the communication value is not there. So that it helps for you to write it in advance. Even if it's in one go, again, you can write it, you can read it, and then you can come up with a good example and then stream that. So it really depends on what you want to do. The other day I did a stream, a live stream, where I was writing a package from scratch, a small package. So there part of it was to teach, but also to demonstrate. And there I don't really care if the didactic value is very high. Because even if there are mistakes, it's part of the process. It's not like, well, you will come here and from zero to hero kind of thing, you will learn everything. It's not like that. It's like you come here, you might learn something, but the bar is relatively low.

[Sacha]: I think especially since my mind likes to jump around a lot-- you seem a lot more organized when you're thinking through things, especially if you're saying you write your blog posts straight in one go. I'm like, okay, do this part over here, do that part there. I will definitely lose things, like you mentioned, and I will definitely go back and say, no, I need to do this before I can say that. So yeah, I think I can save that for summer when I might be focusing more on things I cannot schedule.

00:34:07 Prot suggests breadcrumbs

[Prot]: How about leaving breadcrumbs for yourself? Like, I was writing this. Like, write a comment. Basically, I was writing this, I need to remember that, and then you jump off on the tangent.

[Sacha]: I need to use a universal prefix to get the time, don't I? Yes. Leaving yourself breadcrumbs. Yeah, yeah, yeah.

[Prot]: And then you can retrace your thoughts, basically. Like, okay, I was here, I was meaning to do that. Especially when you are streaming, chances are that there will be several comments that are very interesting and you want to get to. And you might be talking to them for 10 minutes or more. And then, of course, if you don't have that or you want to jump off on a tangent, you will eventually forget what you were doing.

[Sacha]: Do you have anything like this already that you're currently doing?

[Prot]: No, but this is the sort of thing that would be a fun exercise to actually demonstrate for yourself as well.

[Sacha]: I use ZZZ if I just put it in text and I have some things, for example, in my message hooks so I can't send email that contains this. And of course, org has its whole clocking and interrupting tasks that I can use. I just have to have the presence of mind to actually say, oh yeah, now I'm going to go on this tangent and I want to go back to this later on. Leaving myself breadcrumbs is definitely something I need to formalize into workflows that I actually use.

[Prot]: Yeah, that's the thing. And you can also benefit. I don't know. Of course, that's depending on if you are a visual person or not. But you could also rely on color or, for example, include an emoji as well or modify font-lock-keywords to have like something that stands out. Basically, make it clear that, well, this is an interjection. I will just go and then I will be back. Yeah.

[Sacha]: Good idea. Okay. So that will definitely help with the things where maybe I want to demonstrate something and I want to do the thinking out loud so that it's recorded. And just in case other people have any questions, they can come by and ask them. And then I can sort of massage it into a proper blog post, but still leave the link to the video in case people want to hear the stream of consciousness figuring out of all of this stuff. That sounds like maybe a more polished video or blog post with screenshots and clips coming out of this livestream ramble, kind of tangled. Okay, we're going to jump over here. Gotta leave myself a breadcrumb because I'm going to go in this detour to answer someone's question.

[Prot]: There is value to both. There is value to both because the live stream is a stream of consciousness. You can think of it like a bubbling effect. There is fermentation going on, a lot of things happening. And then when you publish the polished, the finished article, that's the distillation effect. So fermentation distillation. So both are useful. Both is good to see and have a sense of what they are up to, what they are doing. Yeah.

[Sacha]: And Charlie in the comments says he likes Emacs' excursions terminology. So you can think of it as a save-excursion: I'm going to go do something and then come back. I am not very good at popping the stack, but I will work on it. Yes. A couple of other things that I want to pick your brain about. So, about announcing live streams... and look, I'm remembering to mark a topic change.

00:37:59 Announcing livestreams

[Sacha]: So you mentioned, okay, you have a post for the scheduled or spontaneous live streams. Then you don't even update it with the description. You write the description beforehand and you leave it alone. Probably when people get it in their RSS reader, I guess the YouTube embed always just points to either the currently playing live stream or the archived recording of it. And that's that. The link is the same?

[Prot]: The same, yes. Yeah, on this live page.

[Sacha]: So now I have yayemacs.com and SachaChua.com/live pointing to this page. And there's a YouTube way to embed just the upcoming live stream, but it's fiddly when you have more than one scheduled live stream or whatever. Do you use any of this stuff at all, like a page that always has your upcoming or current stuff?

00:38:58 Embeds: Prot embeds specific YouTube videos instead of the general channel one

[Prot]: No, I have a generic embed which I copied many, many years ago and I have it in my static site generator. Then the only field that changes is the ID of the video. And this works for live streams as well as pre-recorded videos.

[Sacha]: Okay, so you always give it like the video IDs basically.

[Prot]: The video ID, yes. I can share with you the exact snippet.

[Sacha]: Yeah, yeah. That would be, you know, and you can send...

[Prot]: Yeah. Well, it's public anyway.

[Sacha]: I can steal it off your website. It's fine.

00:39:32 Demo of my new shortcut for converting time zones

[Sacha]: And then I have just added timestamp translation as well. So I can say, okay, let me show it to you. So this is my webpage, right? So here, this is your standard org timestamp. And if I open up https://sachachua.com/live, it's the same in Emacs. Okay, let me find the browser window. Theoretically, if I say, okay, down here, you click on this, it translates it to your language.

[Prot]: Ah, nice. Nice, nice.

[Sacha]: Because YouTube will do that for the upcoming one if people link to it. But, you know, it's just people. But this is JavaScript, anyhow. And the other thing that I have just added today is I can go onto that in Org. If I press my control dot embark thing, I can use my Sacha Org timestamp in time zones, which is shift W. And it translates it into a gazillion time zones. So then I can mastodon toot it, which I did,

[Prot]: Just to say that copies it to the kill ring, okay, yes, okay, good, good, good.

[Sacha]: Because time zones suck. I mean, it's great, but I cannot do the translation and so I am slightly... I'm working on announcing those upcoming scheduled streams while doing all the math so that... well, having emacs do all the math so that I don't have to do the math.

[Prot]: Yes, that's the spirit. That's good. Very good. This is very nice. Is this timestamp always meant for Mastodon or do you have it elsewhere? I think I've seen it in the Emacs news as well.

[Sacha]: Oh yeah, I'm basically stealing the code. I've used it in Emacs Conf and for Emacs News. I used to announce the Emacs News events also. I should get back to doing that. But definitely in the Emacs News and Emacs Calendar, I translate all of the events into multiple time zones for the virtual ones.
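The time-zone math that Sacha wants Emacs to do can be sketched like this; this is not her actual code, just a minimal illustration using the ZONE argument that `format-time-string` accepts in Emacs 27 and later (a TZ-style string):

```
;; Translate one local time into several zones.  The date and zone
;; names are illustrative.
(let ((time (encode-time 0 0 16 16 4 2026)))  ; 16:00 local on 2026-04-16
  (mapcar (lambda (zone)
            (format-time-string "%Y-%m-%d %H:%M %Z" time zone))
          '("America/Toronto" "Europe/Berlin" "Asia/Tokyo")))
```

Mapping over a list of zone names makes it easy to build the "gazillion time zones" string for a Mastodon post.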

00:41:48 Ozzloy's questions about time zones and QR codes

[Sacha]: Line 23 doesn't have a time offset. Okay, someone is commenting. Ozzloy will tell me about it a little bit later. Ozzloy also has a question. Am I creating the QR code with Emacs Lisp? Is it actually text in Emacs? I'm going to go on a quick detour to show the QR code.

[Prot]: Yes, do it, do it, do it. By the way, I will like the stream. I didn't have the chance to do that.

[Sacha]: As a string. Yes. So here, this is my... Look, I'm using line numbers, but they're really long.

[Prot]: Yeah, these are massive. Of course. What can we do? But it's still better, because I can say, okay, go to 97, right? And you kind of know where I mean. Yeah.

[Sacha]: Yeah, so this is qrencode, qrencode format, and all of that stuff. It is in Emacs. I think this one actually inserts text. There's probably a way to get it to produce images as well. But yeah, so QR codes, because why not?

[Prot]: Yeah, no, that's very efficient. Yeah, yeah, good, good.

[Sacha]: Okay. Yes. So these timestamps are basically in my local time, and then I can translate them to other time zones, and then I can start announcing them, which will probably happen more if I can get my GotoSocial Mastodon thing to be more reliable. But also, following your example, I should try putting it in my blog. I just feel a little weird suddenly going from posting on my blog two or three times a week to "Hey, OK, every day. All right. In ten minutes, you're going to have a live stream of me talking about random stuff."

00:43:46 Prot on announcing livestreams on blogs

[Prot]: Well, in a sense, it is weird because it's not something you would normally do on a blog, right? Like you have been blogging for a long time and you know how blogging is, right? You just do it on your own. But this streaming culture is a different experience. I think, however, it shares a lot with the blogging way of doing things, which is like, well, this is what I have to say. This is what I think. And I just do it in a slightly different format. And of course, because you are doing the stream, ultimately you control how you participate, to the degree that you participate, what you want to comment on. So ultimately, even though it's a live stream, you can control it in a way that is not that much of a live stream. In the sense that you can be very specific, very structured and be like, you know what, this is my structure, this is what I will do, and I will not run off on a tangent, for example.

[Sacha]: I don't know if it is possible for me to not run off on a tangent. I appreciate people who can be very focused. It's okay. I think my job, I think my goal is more of how do I at least describe the tangents in text form so that I can find them again and so that other people can decide whether this is worth two hours of their time or whether they can just skip to the five minutes that concerns the thing that they like.

[Prot]: Yes, in that case the timestamping would be the way to go. Timestamp plus a brief description.

[Sacha]: Yes, yes, and that actually gets me to... ta-da!

00:45:25 Processing the recordings

[Sacha]: topic: processing the recordings So, yes, as I mentioned, I've been enjoying going back and editing the transcripts because it becomes an excuse to tinker with Emacs and subed-mode, and then because I have this thing for adding a note above the start of a chapter, I can then easily use that to extract the chapter markers for YouTube and all of that stuff. As I mentioned, I'm working on some workflows for tracking chapters on the fly. You know, it's actually really nice having this little button. I used to think, okay, I can just press a keyboard shortcut, but apparently I forget all of my keyboard shortcuts when I'm trying to talk at the same time. So if there's a button, I'm like, I get incentivized to click on it to see whether my code still works.

[Prot]: Plus it functions as a reminder.

[Sacha]: Yes. So it's very helpful that way. And then, as I mentioned, I still need to work on a good workflow for extracting the screenshots and clips so that I can turn them into blog posts later on and so forth. Right now, I have a pretty manual process: okay, after the video is posted, I'm going to download it. I have some shell scripts now, and the next step after this one was going to be writing an Emacs function... and I just finished this part: I have an Emacs function that calls the shell scripts to download the thing using yt-dlp and then start the transcription process. But I still manually do the upload to the Internet Archive, which I know has a CLI tool, so that's next on my list. And fix subtitles and all that stuff. So that's kind of... if I want to get more out of the recordings, that's the general direction I'm going.

00:47:15 Commitment devices

[Sacha]: This is not something that you're currently fiddling with.

[Prot]: Basically, I'm the wrong person for this.

[Sacha]: Yeah, it's okay. And part of these conversations is not so much that I'm looking to you for specific advice on things that you explicitly don't do, because it would be against the alla prima approach: just get it done and lower the barrier going in. But it's also useful as a commitment device for me to say, all right, I would like to get better at this. I am telling Prot in order to be able to demonstrate the stuff and make myself... If I'm going to see him in another two weeks... Am I going to see you in another two weeks?

[Prot]: Yes, yes, yes. And I will ask. I keep receipts. Yes, yes, yes.

[Sacha]: Exactly, right? So this is also valuable for that. Not just hoping that in your config, which I have now read, that you would have a snippet exactly for this purpose, but more like, okay, I'm telling somebody I'm going to do it, which means I got to go do it.

[Prot]: Yes, yes. And of course, just verbalizing it means that you can also understand it a little bit better. And you start thinking about it. And then it's a matter of writing the code.

00:48:29 Automating more of the process

[Prot]: I'm curious, though, why do you have the shell scripts and not bring all of that into Emacs? What's the advantage of having Emacs called the shell scripts? Or was it just more convenient?

[Sacha]: It's just out of convenience. Emacs does call the shell scripts. The shell scripts are there just in case I happen to be SSH-ing in from my phone, because I'm downstairs or whatever, and then I can just run it from the shell. Also because I use it not just for my... So I have some shell scripts for downloading the video as an MP3 or as an MP4, or as the subtitles. And these are generally useful things that I might not necessarily remember to go into Emacs for. So that's definitely, you know... I needed to define this whole process that eventually ends up in a blog post that has all my lovely stuff, where this chat that I have with you is kind of my high-water mark of "this is really fun, I would like to do more things like this": it ends up with transcripts, resources, kind of like the show notes, chapter marker indexes that are automatically extracted from the transcript, the rough notes that we were working on there. The transcript has speaker diarization. In the video, I got your subtitles to show up in italics and my subtitles to show up in plain text. So now that I have this infrastructure, I feel compelled to make sure I schedule conversations with people so that I use it.

[Prot]: Yes, of course. And that's actually a good reason generally for writing code, ultimately, because it's the vehicle for doing what the code is supposed to facilitate. So the code is just a pretext for actually doing the thing.

[Sacha]: Or the other way around, yeah.

[Prot]: Or it can be the other way around. So the code is the goal, yeah.

[Sacha]: Yeah, yeah, I know. EmacsConf is basically the way that I test emacsconf.el. Hi. It's fine. It's fine. Yeah, so that's my thing for processing recordings. Changing topic. The button. The button. The button. We must press the button.

00:51:14 Copying non-packaged code

[Sacha]: Non-packaged code. So now that I've modularized my Emacs configuration, I've split all the defuns into different files. I have renamed everything from my- to sacha- so that I don't step on other people's function definitions. Now I'm starting to copy things from other people's code to see whether this is actually a viable approach. So this is the way I'm currently stealing something from your prot-comment. It seems to work: if I go into something, I can press C-x M-; and it does the thing that you define. So this is sort of what you had in mind, right?

[Prot]: This is basically what I was thinking earlier with the comment. Yeah.

[Sacha]: And then theoretically, this sort of structure will also work for other people who have checked out my very large config: they can autoload specific commands out of it and bind key bindings without necessarily importing all of my other setqs and add-hooks, because that's in a separate file now. The only thing in this file is defuns.

00:52:25 Prot on defcustom

[Prot]: And just to add: if you also have configurations for your packages, right? You can also have defcustoms there, maybe with a default value that works for you or with a default value that is generally useful. And then you can also separate that out. So users don't have to pull anything from your configuration, but just pull the package.

[Sacha]: So right now I have... Right now I have my configurations as defvars because I'm lazy. Do you happen to have a function or whatever that you like to use to just convert a defvar into a defcustom?

[Prot]: I haven't done it because it's actually tricky with the type.

[Sacha]: Yes.

[Prot]: You know, the defcustom has the :type keyword. And of course, for the most trivial cases, this is easy: okay, it's a boolean or it's a string or whatever. But usually it's not that simple. If you have an alist, you have to describe the key and value pairs, the elements of the alist. So I haven't done that, because it's always on a case-by-case basis. And many of the defcustoms I have will have a bespoke type because the data structure is really specific, you know, the value they expect. For example, if you are doing something with the action alist of display-buffer, that has a really specific type.
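To make the contrast concrete, here is a hypothetical pair of defcustoms (the names and values are illustrative, not from either config): the first has a trivial `:type`, the second needs the richer alist description Prot is talking about.

```
;; A trivial case: the type is just a string.
(defcustom my-stream-notes-file "~/org/stream-notes.org"
  "File visited by the stream-notes toggle."
  :type 'string
  :group 'my-stream)

;; An alist needs :key-type and :value-type described explicitly.
(defcustom my-stream-accounts
  '((mastodon . "example-handle")
    (youtube  . "example-channel"))
  "Alist mapping a service symbol to an account handle."
  :type '(alist :key-type symbol :value-type string)
  :group 'my-stream)
```

Converting a defvar mechanically is easy for the first kind; it is the second kind where a human has to decide what the type really is.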

[Sacha]: Yeah, yeah, I hear you. So I think because I have a lot of strings, I probably can get away with something that just reads the form, smooshes it into a string, adds a string :type... or possibly what this will end up looking like is a completing-read on the type of the function... sorry, the type of the thing. And then I can just select from several types.

[Prot]: Well, you can make it a guess. Like, of course, if this thing is quoted and it's a symbol, it's not a list; maybe I can have a choice or a repeat of symbol or something. You can, but it won't be accurate. That would be for you to fill in later.

[Sacha]: Yeah. No, I was thinking just more along the lines of Like a completion so that you can select from maybe some of your common types. The actual guessing of what type it is would be an exercise left for future me. But even just not having to remember exactly what the syntax is for repeat would be nice.

[Prot]: Actually, that's good.

00:55:12 helpful and elisp-demos

[Sacha]: Yes. I mean, one of the things that I always find helpful is... like, I think I've got some examples now. I'm using helpful, right? And I'm also using elisp-demos. So I can add more notes here and say, okay, this is what a defcustom with a repeat of string, or a const, looks like. 'Cause the manual doesn't have a lot of examples sometimes. Sometimes it's annoying to dig through it looking for examples. Usually it has no examples. I think that that's...

[Prot]: if there was one area of improvement, it's that. Keep it as is, because it's high quality, but complement it with examples.

[Sacha]: I mean, technically, all of Emacs is an example, and you can just find something, but...

[Prot]: Yeah, that's why you have the manual, because if I have to dig through thousands of lines of Emacs Lisp, that will take a toll on my patience.

[Sacha]: Yeah, so for anyone who's watching, helpful and elisp-demos is how to add these helpful little notes to your describe-function, because who remembers these things?
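For anyone who wants to try this combination, the setup is small; a sketch along the lines of what the elisp-demos README documents (check the README for the current advice names):

```
;; Show community-contributed examples in *Help* buffers...
(require 'elisp-demos)
(advice-add 'describe-function-1 :after
            #'elisp-demos-advice-describe-function-1)

;; ...and, when using the helpful package, in helpful buffers too.
(advice-add 'helpful-update :after
            #'elisp-demos-advice-helpful-update)
```

After that, `M-x elisp-demos-add-demo` lets you record your own examples, which is how the "I can add more notes here" workflow above works.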

[Prot]: Yeah, yeah, yeah. That's very good. That's very good. Yes.

00:56:23 Prot on code libraries

[Prot]: Just to say, on that point: if you have packages, this is something I actually do. I just go and reference one of my packages, which I know I have done the research for. So I'm like, okay, how do you do the display-buffer action alist type? I will just go to, for example, denote and copy it.

[Sacha]: I will eventually build up a list of examples that I can refer to.

00:56:50 Prot rewrites functions to fit his style and naming conventions

[Sacha]: The other question I had, though, was: do you ever find yourself copying code from people who do not have, you know, their functions in nice little things that you can just import and autoload? And what do you do about it? Like, let's say they named it without the prefix, so it might be possible to confuse it with the standard stuff, or it's mixed in with the rest of their config so you can't just load the file. What do you like doing when you are copying that kind of code?

[Prot]: I will basically check if I can make edits to it. The first thing I would make is probably change the style to be like my style. So I would anyway change it so there is no scenario where I would just copy it verbatim and paste it.

[Sacha]: Okay, so you like to rewrite things and then you fit it into your naming convention because it is now yours.

[Prot]: But also the style. For example, this function you have over there... like the one we are seeing now on screen. For example, I would change the name of pargs. Not because it's wrong, but because stylistically it's not what I would write. Then I would change the indentation. For org-capture-string, I would put the concat on the line below. I would basically do small tweaks, not because what you have is wrong, but because stylistically I have a different way of expressing it.

[Sacha]: Yeah, yeah, yeah. Absolutely. I've started to add where I got it from in the docstring instead of... I used to put it in the comment. But as you mentioned, the doc strings are a little bit more visible. So then I usually don't end up looking for updates. But at least theoretically, if I do want to, I could find out who was... Or if I want to credit somebody or see what else they've come up with lately, then at least it's there.

[Prot]: Yes, it's good enough. Plus, when we are talking about these smaller functions, having the link there, I think, is enough. Like, you wouldn't need to go search for updates or whatever. Like, if they have made some changes, chances are it's there.

[Sacha]: Yeah. Okay, so rewrite things, make it fit your style, and add stuff to the docstring because you like to have thorough docstrings.

00:59:18 Prot's preference for small functions

[Prot]: Yeah, yeah, yeah. There are many functions I have where the docstring is longer than the code. I would say, yeah, many of them are like that. But also, just to say, it's because of how I will write the code, where there are many small functions building up to a big one. And so then the docstring explains basically what all these small functions contribute to.

[Sacha]: I like small functions too because I got used to coding on even smaller screens, right? And so anything that could just actually fit in the screen was much better than things that I had to page through. And it gives you many more avenues to modify the behavior because you have more places that you could def-advice, sorry, advice-add :around or whatever.

[Prot]: Actually, this is why I started doing it as well, because it's easier. I had this reason myself. I think it was an org function, which is like 200 lines, and I wanted to really change one thing and I had to copy the whole function. And I'm like, well, if this was a helper function, I would be done by just overriding the helper and I would be good.

01:00:23 avy-goto-char-timer

[Sacha]: I am slowly getting the hang of using avy-goto-char-timer so that I can copy symbols from elsewhere. Because even if I'm using nameless to insert the prefixes, and dabbrev-expand or hippie-expand, whose config I still need to fiddle with to make it absolutely perfect, it's still a lot of typing sometimes, since we like long function names.

[Prot]: And which timer variant do you use? Because it has, with two characters, it has the 0 one, which is type as much as you can within a certain time window.

[Sacha]: That's a good question. Where is this?

[Prot]: Char timer. I think this is based on... I think this is the zero. Yeah, I'm not sure. I remember it's called zero.

[Sacha]: So, like, I can type li and then lj to jump to that one, and now I have it so that I can M-j li and then press y to yank, like insert from there. When I was stealing stuff from your config... oh, let me show you... where is this... So this is your config, right? Well, this is... hang on a second. Org link preview. There you go. So now the highlights of your config. I can steal stuff from your config and say, okay, M-j, open parenthesis... oops... M-j, open parenthesis. I can copy the entire line of lk from avy, which is very nice. Very nice. Yes, yes. So, pretty fast detour there into avy. I have to slow down and actually focus on doing the keyboard shortcuts, because it's a new habit that I want to build, especially since...
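The avy setup being demonstrated boils down to very little config; a sketch (the M-j binding is Sacha's choice from the transcript, the timeout value is illustrative):

```
;; Bind avy-goto-char-timer: type a few characters, pause, then pick
;; a candidate label to jump there.
(global-set-key (kbd "M-j") #'avy-goto-char-timer)

;; How long avy waits after the last keystroke before showing labels.
(setq avy-timeout-seconds 0.5)
```

Once at a candidate, avy's dispatch actions (such as y to yank the text at the target without moving point) are what make the "copy the entire line from over there" trick work.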

01:02:40 One-shot keyboard modifiers

[Sacha]: Also related to one of your recent videos, I'm experimenting with one-shot keyboard modifiers.

[Prot]: Oh, well done.

[Sacha]: Yes. It's a little tricky. I have to get my brain to get used to it. I'm using keyd to do this on Linux. And it's just getting the hang of pressing control and then moving to the thing. It's messing with my brain a little.

[Prot]: But consider that it's a good opportunity to also use two-handed mode, basically. So, for example, C-x, right? Not like C-x. You see what I'm saying? So basically one hand for the modifier. Yeah, exactly. Because that's a good practice in general, even if you use the standard modifiers. Yeah.

01:03:29 Toggling

[Sacha]: And one of the other things that I started doing after our previous conversation, having looked at some of the toggling sort of things in your config, is this idea of using the C-z and C-S-z shortcuts. Who likes to suspend Emacs anyway, right? So now C-S-z toggles my now.org, which is the stuff that I'm going to be working on. This is my "all right, I need to scope it down so that I don't get overwhelmed" list: these are the things that I'm trying to get the hang of using. C-z gets me to my stream notes, because then I can add things while I'm live, and C-S-z is what I have as my "now", which also gets posted to my web page, sort of like what I'm focusing on. Which, actually, I can reorganize anyway. So I'm liking this toggling because, for example, if I'm in the middle of my scratch buffer, I can press C-S-z, pop it up, and then pop it back down. And I was watching Joshua Blais's video about how he gets to do this sort of toggling in and out from anywhere in his system. So now I'm jealous and I need to figure out how to get that working too.
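The in-Emacs half of this toggling can be sketched in a few lines; this is a hypothetical minimal version, not Sacha's actual code, with an illustrative file path:

```
;; Toggle a window showing now.org: pop it up if hidden, bury it if
;; it is already visible.
(defun my-toggle-now-org ()
  "Toggle a window displaying now.org."
  (interactive)
  (let* ((buffer (find-file-noselect "~/org/now.org"))
         (window (get-buffer-window buffer)))
    (if window
        (quit-window nil window)
      (pop-to-buffer buffer))))

(global-set-key (kbd "C-S-z") #'my-toggle-now-org)
```

The same pattern with a different file gives the C-z stream-notes toggle.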

[Prot]: Yeah, yeah, yeah. That's the kind of thing that is really helpful. Like pop it out and then when you don't need it, it disappears.

01:05:08 System-wide toggle shortcuts using emacsclient

[Sacha]: Do you have any of that kind of system level of toggling even when you don't have Emacs as your main application sort of thing?

[Prot]: Via emacsclient. So you can have a key binding to an emacsclient call, and it will bring up an Emacs window from anywhere. I have that, yes. I have it for a few things. TMR mostly, the timer package. So if I am, for example, here, I can bring it up and start the timer without actually switching to Emacs.

[Sacha]: Okay, so that sounds like something I need to look into.

[Prot]: It's in the prot-window file, prot-window.el. I have a macro there, and it's a macro that defines a command to run in a new frame and, once you do something, such as complete or cancel, to close that frame, basically. And it's using a condition-case.

[Sacha]: It's using a condition-case.

[Prot]: I think it's the simplest you can do.

[Sacha]: And then that's a global keybinding on your window manager that runs that and then brings that so that you can pop it up and put it back.

[Prot]: Yeah. It's just emacsclient -e and then the command.
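The shape of that window-manager binding is just a shell command; the elisp form here is a placeholder, not one of Prot's actual commands:

```
# Bound to a global key in the window manager (sxhkd, sway, etc.).
# Requires a running Emacs daemon; -c opens a new frame, -e evaluates
# the form in it.  Substitute the command you want to pop up.
emacsclient -c -e '(your-popup-command)'
```

Prot's prot-window macro then takes care of closing the frame again when the command finishes or is cancelled.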

[Sacha]: Oh, that's interesting. Rickard says using space as control has revolutionized their Emacsing. I'm not sure I'm ready to take that step yet. Also, I can probably figure out how to use keyd to use it as a modifier. We'll see. It's a nice big key, you know? You're just tempted to do all sorts of things with it.

[Prot]: Of course, at the keyboard level, you can have different behavior for tap and hold. So when you tap the space, it's an ordinary space. When you hold it, it's control. Maybe that's what they are.

[Sacha]: Yeah, I think that's what's happening there. Look into using keyd for tap and hold.

[Prot]: Yeah, and this is the principle behind the home row mods, the standard home row mods. It's like when you tap, for example, H, it just does H. When you hold it, it's some modifier key.
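In keyd, the tap/hold behavior Prot describes is the `overload` action; a sketch of a config fragment (check the keyd man page for the exact key and layer names):

```
# /etc/keyd/default.conf
[ids]
*

[main]
# Tap space for an ordinary space; hold it to get control.
space = overload(control, space)
```

The same `overload(<layer>, <tap-action>)` form is how home-row mods are typically built in keyd.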

01:07:25 My next steps

[Sacha]: I have three minutes before the kiddo runs out and goes, mom, it's lunchtime. So do you have any, like, okay, my next steps, I've got stuff that I need to work on in terms of improving the processing of things and automating things. I found this session very helpful for saying, okay, you know, like, in the weeks leading up to it, two weeks leading up to it, it's like, okay, I got to write this code because I want to be able to say I did it, which is good. And as a result, I have all sorts of fancy things now in my Emacs for streaming and also for my config. In two weeks, I would love to have this kind of conversation with you again, if that's all right with you. Do you have any tips before the kiddo comes out?

01:08:18 Tips from Prot: small functions used frequently

[Prot]: Yeah, yeah, yeah. So for the functions you want to write: keep them small so you can test them all, and make them part of your habit. Start using them even before the streams. So try to use them every day, so that you basically have almost a knee-jerk reaction where it's like, oh, I'm doing this, and you call the function right away. And I don't know if you use the F keys, the function keys, for your shortcuts. Maybe those would be good.

[Sacha]: Yeah, I have some of them. But again, it's hard for me to remember sometimes which one I have matched there. So again, it's trying to build it into muscle memory. Probably what I just need is some kind of drill thing.

01:09:06 Maybe using the header line for tips?

[Prot]: How about a minor mode that sets header-line-format? You have seen in many buffers where it says "type C-c C-c to finish", right? So set the header-line-format to be like, you know, "type C-z to bring up the pop-up", whatever, right?

[Sacha]: Yeah, I mean, quick help sort of is that idea...

[Prot]: Yes, quick help would help you do that as well, yeah.

[Sacha]: It's a screen space thing. But maybe I can find something that I can smoosh together with keycast so that it reminds me of my key tip in this context.

[Prot]: Ah, with keycast. Interesting. That's why I was thinking of header-line-format. So it would be something that would appear there. And of course, the header line works exactly like the mode line, meaning that it can update its content. It's not static, just like your mode line will update information.
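The minor mode Prot is suggesting could look something like this; a sketch, not from either config, with illustrative names and tip text:

```
;; The text shown by the tip mode; swap in whatever you are drilling.
(defvar my-key-tip "C-S-z toggles now.org"
  "Reminder text shown by `my-key-tip-mode'.")

;; Buffer-local minor mode that puts the reminder in the header line.
;; header-line-format becomes buffer-local when set, and the :eval
;; form means the text updates whenever `my-key-tip' changes.
(define-minor-mode my-key-tip-mode
  "Show a keybinding reminder in the header line."
  :global nil
  (setq header-line-format
        (when my-key-tip-mode
          '(:eval (format "Tip: %s" my-key-tip)))))
```

Changing `my-key-tip` from a timer or hook would give the "keyboard shortcut of the day" drill mentioned below.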

[Sacha]: Yeah. Okay. All right. So let me think about which tips might be... you know, like a "keyboard shortcut of the day" focus could be interesting.

01:10:23 Reorganizing keys

[Prot]: But it also brings up the point that, with the keys you have, maybe it's also a good opportunity to organize them differently. Like, the header there could prompt you with one prefix key, for example. You know, C-t, let's say, and that's for transcribing or whatever, right? And it will just have that one there. And then with the help of which-key, for example, you see what you have behind that prefix.

[Sacha]: I have a hard time figuring out keybindings, which is one of the reasons why I like looking at configs like yours and other people. Because I'm like, yeah, I can totally use that as a starting point for keybindings. But then what else do I assign to it? So for example, I've got this. I apparently don't have this. I have this sacha-stream-transient C-c v. That's where I put it now. Okay. Which now has things like OBS and all that stuff.

[Prot]: What's the mnemonic for v?

[Sacha]: Oh, v would have been video sort of thing.

[Prot]: Okay, I see.

[Sacha]: But I have to fiddle with it and the kiddo is going to come out any moment now. So thanks just in case she comes out.

[Prot]: You're welcome.

[Sacha]: Well, it's lunchtime. Thank you for this. I will schedule something else in two weeks. I'm going to try to practice more scheduled live streams and keep fiddling with this workflow. This has all been very helpful. And thank you to the people who also have dropped by and said hello. You can check the chat later. It's fine. Yes, yes. Thanks, everybody. All right. Okay. I'm going to say bye here just in case. Take care. Take care. Take care, Sacha.

[Prot]: Take care, everybody. Bye-bye. Bye-bye. Thank you.

[Sacha]: Thank you everyone for hanging out. That was my chat with Prot. And I will see y'all again maybe Thurs... Well, probably before then. But I will try to schedule something on Thursday for around that time. Who knows what it's going to be about. But yeah, thank you for coming and experimenting with me. Let us end the stream there. Because it's lunchtime.

View Org source for this post

You can e-mail me at sacha@sachachua.com.

-1:-- YE16: Sacha and Prot talk Emacs (Post Sacha Chua)--L0--C0--2026-04-16T16:44:19.000Z

Irreal: LaTeX Preview In Emacs

Over at the Emacs subreddit, _DonK4rma shows an example of his mathematical note taking in Emacs. It’s a nice example of how flexible Org mode is even for writing text with heavy mathematical content but probably not too interesting to most Emacs users.

What should be interesting is this comment, which points to Dan Davison’s Xenops, which he describes as a “LaTeX editing environment for mathematical documents in Emacs.” The idea is that with Xenops when you leave a math mode block it is automatically rendered as the final mathematics, which replaces the original input. If you move the cursor onto the output text and type return, the original text is redisplayed.

It’s an excellent system that lets you catch any errors you make in entering mathematics as you’re entering them rather than at LaTeX compile time. So far it only works on .tex files but Davison says he will work on getting it to work with Org too.

He has a six minute video that shows the system in action. It gives a good idea of how it works but Xenops can do a lot more; see the repository’s detailed README at the above link for details.

-1:-- LaTeX Preview In Emacs (Post Irreal)--L0--C0--2026-04-16T15:03:07.000Z

Dave Pearson: boxquote.el v2.4

boxquote.el is another of my oldest Emacs Lisp packages. The original code itself was inspired by something I saw on Usenet, and writing my own version of it seemed like a great learning exercise; as noted in the thanks section in the commentary in the source:

Kai Grossjohann for inspiring the idea of boxquote. I wrote this code to mimic the "inclusion quoting" style in his Usenet posts. I could have hassled him for his code but it was far more fun to write it myself.

While I never used this package to quote text I was replying to in Usenet posts, I did use it a lot on Usenet, and in mailing lists, and similar places, to quote stuff.

The default use is to quote a body of text; often a paragraph, or a region, or perhaps even Emacs' idea of a defun.

,----
| `boxquote.el` provides a set of functions for using a text quoting style
| that partially boxes in the left hand side of an area of text, such a
| marking style might be used to show externally included text or example
| code.
`----

Where the package really turned into something fun and enduring, for me, was when I started to add the commands that grabbed information from elsewhere in Emacs and added a title to explain the content of the quote. For example, using boxquote-describe-function to quote the documentation for a function at someone, while also showing them how to get at that documentation:

,----[ C-h f boxquote-text RET ]
| boxquote-text is an autoloaded interactive native-comp-function in
| ‘boxquote.el’.
|
| (boxquote-text TEXT)
|
| Insert TEXT, boxquoted.
`----

Or perhaps getting help with a particular key combination:

,----[ C-h k C-c b ]
| C-c b runs the command boxquote (found in global-map), which is an
| interactive native-comp-function in ‘boxquote.el’.
|
| It is bound to C-c b.
|
| (boxquote)
|
| Show a transient for boxquote commands.
|
|   This function is for interactive use only.
|
| [back]
`----

Or figuring out where a particular command is and how to get at it:

,----[ C-h w fill-paragraph RET ]
| fill-paragraph is on fill-paragraph (M-q)
`----

While I seldom have use for this package these days (mainly because I don't write on Usenet or in mailing lists any more) I did keep carrying it around (always pulling it down from melpa) and had all the various commands bound to some key combination.

(use-package boxquote
  :ensure t
  :bind
  ("<f12> b i"   . boxquote-insert-file)
  ("<f12> b M-w" . boxquote-kill-ring-save)
  ("<f12> b y"   . boxquote-yank)
  ("<f12> b b"   . boxquote-region)
  ("<f12> b t"   . boxquote-title)
  ("<f12> b h f" . boxquote-describe-function)
  ("<f12> b h v" . boxquote-describe-variable)
  ("<f12> b h k" . boxquote-describe-key)
  ("<f12> b h w" . boxquote-where-is)
  ("<f12> b !"   . boxquote-shell-command))

Recently, with the creation of blogmore.el, I moved the boxquote commands off the b prefix (because I wanted that for blogging) and onto an x prefix. Even then... that's a lot of commands bound to a lot of keys that I almost never use but still can't let go of.

Then I got to thinking: I'd made good use of transient in blogmore.el, why not use it here too? So now boxquote.el has acquired a boxquote command which uses transient.

The boxquote transient in action
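The post doesn't show the transient's definition, but for illustration a command like that can be sketched with transient-define-prefix. The keys and groupings below are hypothetical, not the actual boxquote.el code:

```elisp
;; Hypothetical sketch of a boxquote dispatch transient.
;; Keys and layout are invented for illustration; see boxquote.el
;; for the real definition.
(require 'transient)

(transient-define-prefix my-boxquote ()
  "Dispatch boxquote commands from a single binding."
  ["Quote"
   ("b" "Region" boxquote-region)
   ("y" "Yank" boxquote-yank)
   ("t" "Title" boxquote-title)]
  ["Describe"
   ("f" "Function" boxquote-describe-function)
   ("v" "Variable" boxquote-describe-variable)
   ("k" "Key" boxquote-describe-key)])
```

Binding a sketch like this to a single key gives access to everything behind one prefix, which is the shape the real boxquote command takes.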

Now I can have:

(use-package boxquote
  :ensure t
  :bind
  ("C-c b" . boxquote))

and all the commands are still easy to get to and easy to (re)discover. I've also done my best to make them context-sensitive too, so only applicable commands should be usable at any given time.

-1:-- boxquote.el v2.4 (Post Dave Pearson)--L0--C0--2026-04-16T07:29:35.000Z

Bicycle for Your Mind: Outlining with OmniOutliner Pro 6

OmniOutliner Pro 6

Product: OmniOutliner Pro 6
Price: $99 for new users, $50 to upgrade. There is also a $49.99/year subscription.

Rationale or the Lack of One

There was no good reason to buy OmniOutliner Pro 6.

I don’t need this program. I have the outlining abilities of Org-mode in Emacs. And dedicated outlining programs in Opal, Zavala and TaskPaper.

They had a good upgrade price and I hadn’t tried out any new software in a while. I know that is not a good reason to spend $50. It was my birthday, and I love outlining programs.

I had used the Pro version in version 3 and had bought the Essentials edition for OmniOutliner 5. A lot of what I see in version 6 is new to me.

Themes

Customizing Themes

OmniOutliner Pro 6 comes with themes. I wanted to make my own or customize the existing ones. It is easy to do. Didn’t do much. Changed the line spacing and the font. The themes it ships with are nice. I am using the blank one and Solarized.

Writing Environment

Writing in OOP

The best thing about OmniOutliner Pro 6 is the writing environment it provides. There are touches around the program which make it a pleasure to write in. Two of them which stick out to me are:

  1. Typewriter scrolling. I have no idea why more programs don’t give you this feature. I use it all the time. Looking at the bottom of the document is boring and it hurts my neck.
  2. Full screen focus. This is well implemented and another feature which helps me concentrate on the document I am in.

Linking Documents

Linking

You can link to a document or to a block in the document. Clicking on the space to the left of the heading gives you a drop-down menu. Choose Copy Omni Link and paste it where you want the link to appear. Useful for linking documents or sections when you have a block of outlines which relate to each other in some way.

Keyboard Commands

keyboard commands

Keyboard commands are what make an outlining program. OmniOutliner Pro 6 comes with the ability to customize and change every keyboard command that is in the program. It makes the learning curve smoother when you can use the commands you are used to for every task you perform in an outliner. I love this ability to make the outliner my own.

Using OmniOutliner Pro 6

This is the best outliner in the macOS space. OmniOutliner Pro 6 cements that position. It is a pleasure to use. It does everything you need from an outliner and does it with style. It does more than you need. Columns? I have never found the need for columns in an outliner. Other users love this feature. I am not interested. Maybe I am missing something, or I don’t use outlines which need columns. In spite of my lack of enthusiasm for columns, this is the best outlining program available on the macOS.

Comparison with Org-mode

I use Emacs and within it Org-mode. I write in outlines in Emacs all the time.

Org-mode is a strange mix of OmniOutliner and OmniFocus. It does outlines and does task management. All in one application. In plain text. The only problem? You have to deal with the complexity of Emacs. It is a steep learning curve which gives you benefits over the long term but there is pain in the short term. Let’s be honest, there is a ton of pain in the short term. OmniOutliner, on the other hand, is easy to pick up and use. You are going to be competent in the program with little effort. The learning curve is minimal. The program is usable and useful. Doesn’t do most of the things Org-mode does, but it is not designed for that. They have a product called OmniFocus to sell you for that.

Conclusion

If you are looking for an outlining program, you cannot go wrong with OmniOutliner Pro 6. It is fantastic to live in and work with. It gives you a great writing environment. I love writing in it.

There are two things which give me pause when it comes to OmniOutliner Pro 6. The first is the price. I think $99 for an outlining program is steep. That is a function of my retired-person price sensitivity. You might have a different view. The second is the incomplete documentation. They are working on it, slowly. If I am paying for the most expensive outlining program in the marketplace, I want the documentation to be complete and readily available when the product goes on sale. Not something I have been waiting a few months for. That is negligent.

If you are looking at outlining programs there are competitors in the marketplace. Zavala is a competitive product which is free. Opal is another product which is free and although it doesn’t have all the features of OmniOutliner, is a competent outliner. Or, you can always learn how to use Emacs and adopt Org-mode as the main driver of all your writing.

OmniOutliner Pro 6 is recommended with some reservations.

macosxguru at the gmail thingie.

-1:-- Outlining with OmniOutliner Pro 6 (Post Bicycle for Your Mind)--L0--C0--2026-04-16T07:00:00.000Z

James Endres Howell: Embedding a Mastodon thread as comments to a blog post

I wrote org-static-blog-emfed, a little Emacs package that extends org-static-blog with the ability to embed a Mastodon thread in a blog post to serve as comments. The root of the Mastodon thread also serves as an announcement of the blog post to your followers. It’s based on Adrian Sampson’s Emfed, and of course Bastian Bechtold’s org-static-blog.

I had shared it before, but alas, after changing Mastodon instances the comments from old posts were lost, so I disabled them on this blog. Just over the past few days I’ve found time to get it all working again.

It also seems, at least in #Emacs on Mastodon, that org-static-blog has gained in popularity recently.

Prompted as I was to make a few improvements, I thought I would update the README and share it again. Hope it’s useful for someone!

-1:-- Embedding a Mastodon thread as comments to a blog post (Post James Endres Howell)--L0--C0--2026-04-15T22:17:00.000Z

James Dyer: Emacs-DIYer: A Built-in dired-collapse Replacement

I have been slowly chipping away at my Emacs-DIYer project, which is basically my ongoing experiment in rebuilding popular Emacs packages using only what ships with Emacs itself, no external dependencies, no MELPA, just the built-in pieces bolted together in a literate README.org that tangles to init.el. The latest addition is a DIY version of dired-collapse from the dired-hacks family, which is one of those packages I did not realise I leaned on until I started browsing a deeply-nested Java project and felt the absence immediately.

If you have ever opened a dired buffer on something like a Maven project, or node_modules, or a freshly generated resource bundle, you will know the pain: src/ contains a single main/ which contains a single java/ which contains a single com/ which contains a single example/, and you are pressing RET four times just to get to anything interesting. The dired-collapse minor mode from dired-hacks solves this beautifully: it squashes that whole single-child chain into one dired line so src/main/java/com/example/ shows up as a single row and one RET drops you straight into the deepest directory.

So, as always with the Emacs-DIYer project, I wondered, can I implement this in a few elisp defuns?

Right, so what is the plan? Dired already draws a nice listing with permissions, sizes, dates and filenames; all I really need to do is walk each line, look at the directory, figure out the deepest single-child descendant, and then rewrite the filename column in place with the collapsed path. The trick, and this is the bit that took me a minute to convince myself of, is that dired uses a dired-filename text property to know where the filename lives on the line, and dired-get-filename happily accepts relative paths containing slashes. So if I can rewrite the text and reapply the property, everything else, RET, marking, copying, should just work without me having to touch the rest of dired at all!

First function, my/dired-collapse--deepest, which just walks the directory chain as long as each directory contains exactly one accessible child directory. I added a 100-iteration guard so a pathological symlink cycle cannot wedge the whole thing, which, you know, future me might thank present me for:

(defun my/dired-collapse--deepest (dir)
  "Return the deepest single-child descendant directory of DIR.
Walks the directory chain as long as each directory contains exactly
one entry which is itself an accessible directory. Stops after 100
iterations to guard against symlink cycles."
  (let ((current dir)
        (depth 0))
    (catch 'done
      (while (< depth 100)
        (let ((entries (condition-case nil
                           (directory-files current t
                                            directory-files-no-dot-files-regexp
                                            t)
                         (error nil))))
          (if (and entries
                   (null (cdr entries))
                   (file-directory-p (car entries))
                   (file-accessible-directory-p (car entries)))
              (setq current (car entries)
                    depth (1+ depth))
            (throw 'done current)))))
    current))

directory-files-no-dot-files-regexp is one of those lovely little built-in constants I keep forgetting exists, it filters out . and .. but keeps dotfiles, which is exactly what you want if you are deciding whether a directory is truly single-child.
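A quick illustration of why that matters (the path and listing are hypothetical):

```elisp
;; Deciding whether a directory is truly single-child: the default
;; listing includes "." and "..", which would make every directory
;; look like it has at least three entries.
(directory-files "/tmp/proj/src" t)
;; e.g. ("/tmp/proj/src/." "/tmp/proj/src/.." "/tmp/proj/src/main")
(directory-files "/tmp/proj/src" t directory-files-no-dot-files-regexp)
;; e.g. ("/tmp/proj/src/main") -- dotfiles, if any, would still appear
```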

Second function does the actual buffer surgery, my/dired-collapse iterates each dired line, grabs the filename with dired-get-filename, asks the walker how deep the chain goes, and if there is anything to collapse it replaces the displayed filename with the collapsed relative path:

(defun my/dired-collapse ()
  "Collapse single-child directory chains in the current dired buffer.
A DIY replacement for `dired-collapse-mode' from the dired-hacks
package. Rewrites the filename portion of each line in place and
reapplies the `dired-filename' text property so that standard dired
navigation still resolves to the deepest directory."
  (when (derived-mode-p 'dired-mode)
    (let ((inhibit-read-only t))
      (save-excursion
        (goto-char (point-min))
        (while (not (eobp))
          (condition-case nil
              (let ((file (dired-get-filename nil t)))
                (when (and file
                           (file-directory-p file)
                           (not (member (file-name-nondirectory
                                         (directory-file-name file))
                                        '("." "..")))
                           (file-accessible-directory-p file))
                  (let ((deepest (my/dired-collapse--deepest file)))
                    (unless (string= deepest file)
                      (when (dired-move-to-filename)
                        (let* ((start (point))
                               (end (dired-move-to-end-of-filename t))
                               (displayed (buffer-substring-no-properties
                                           start end))
                               (suffix (substring deepest
                                                  (1+ (length file))))
                               (new (concat displayed "/" suffix)))
                          (delete-region start end)
                          (goto-char start)
                          (insert (propertize new
                                              'face 'dired-directory
                                              'mouse-face 'highlight
                                              'dired-filename t))))))))
            (error nil))
          (forward-line))))))

The key bit is the propertize call at the end: the new filename text has to carry dired-filename t so that dired-get-filename picks it up, and the dired-directory face keeps the collapsed entry looking the same as a normal directory line. Because dired-get-filename will happily glue a relative path like main/java/com/example onto the dired buffer’s directory, pressing RET on a collapsed line takes you straight to src/main/java/com/example with no extra work from me.

A while back I added a little unicode icon overlay thing to dired (my/dired-add-icons, which puts a little symbol in front of each filename via a zero-length overlay), and I did not want the collapse to fight with it. The icons hook into dired-after-readin-hook as well, so I just gave collapse a negative depth when attaching its hook:

(add-hook 'dired-after-readin-hook #'my/dired-collapse -50)

Lower depth runs earlier, so collapse rewrites the line first, then the icon overlay attaches to the final collapsed filename position. Without this, the icons would happily sit in front of a stub directory that was about to be rewritten, which is, well, fine I suppose, but it felt tidier to have them anchor on the post-collapse text.
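The ordering can be sketched like this (my/dired-add-icons is from my own config, shown here only to illustrate the depth argument):

```elisp
;; add-hook's optional DEPTH argument: hooks run in ascending depth
;; order, so -50 runs before the default depth of 0.
(add-hook 'dired-after-readin-hook #'my/dired-collapse -50) ; rewrites lines first
(add-hook 'dired-after-readin-hook #'my/dired-add-icons)    ; icons attach afterwards
```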

Before, a typical Maven project root might look something like this:

drwxr-xr-x 3 jdyer users 4096 Apr 9 08:12 ▶ src
drwxr-xr-x 2 jdyer users 4096 Apr 9 08:11 ▶ target
-rw-r--r-- 1 jdyer users 812 Apr 9 08:10 ◦ pom.xml

After collapse kicks in:

drwxr-xr-x 3 jdyer users 4096 Apr 9 08:12 ▶ src/main/java/com/example
drwxr-xr-x 2 jdyer users 4096 Apr 9 08:11 ▶ target
-rw-r--r-- 1 jdyer users 812 Apr 9 08:10 ◦ pom.xml

One RET and you are in com/example, which is where all the actual code lives anyway. Marking, copying, deleting, renaming, all of it still behaves because the dired-filename text property points at the real deepest path.

One thing that initially bit me is navigating out of a collapsed chain. If I hit RET on a collapsed src/main/java/com/example line I land in the deepest directory, which is great, but then pressing my usual M-e to go back up was doing the wrong thing. M-e in my config has always been bound to dired-jump, and dired-jump called from inside a dired buffer does a “pop up a level” thing that ended up spawning a fresh dired for com/, bypassing the collapsed view entirely and leaving me staring at a directory I never wanted to see.

My first attempt at fixing this was to put some around-advice on dired-jump so that if an existing dired buffer already had a collapsed line covering the jump target, it would switch to that buffer and land on the collapsed line instead of splicing in a duplicate subdir. It worked, sort of, but dired-jump in general felt a bit janky inside dired, it does a lot of “refresh the buffer and try again” under the hood and the in-dired pop-up-a-level path was always the weak link. So I stepped back and split the two cases apart with a tiny dispatch wrapper:

(defun my/dired-jump-or-up ()
  "If in Dired, go up a directory; otherwise dired-jump for current buffer."
  (interactive)
  (if (derived-mode-p 'dired-mode)
      (dired-up-directory)
    (dired-jump)))

(global-set-key (kbd "M-e") #'my/dired-jump-or-up)

From a file buffer, dired-jump is still exactly the right thing as you want the directory the file is in of course. From inside a dired buffer, dired-up-directory is just a much cleaner operation, it walks up one real level, no refresh, no splicing, nothing weird. But on its own that would lose the collapsed round-trip, so I gave dired-up-directory its own bit of advice that looks for a collapsed-ancestor buffer before falling through to the default behaviour.

(defun my/dired-collapse--find-hit (target-dir)
  "Return (BUFFER . POS) of a dired buffer with a collapsed line covering TARGET-DIR."
  (let ((target (file-name-as-directory (expand-file-name target-dir)))
        hit)
    (dolist (buf (buffer-list))
      (unless hit
        (with-current-buffer buf
          (when (and (derived-mode-p 'dired-mode)
                     (stringp default-directory))
            (let ((buf-dir (file-name-as-directory
                            (expand-file-name default-directory))))
              (when (and (string-prefix-p buf-dir target)
                         (not (string= buf-dir target)))
                (save-excursion
                  (goto-char (point-min))
                  (catch 'found
                    (while (not (eobp))
                      (let ((line-file (ignore-errors
                                         (dired-get-filename nil t))))
                        (when (and line-file
                                   (file-directory-p line-file))
                          (let ((line-dir (file-name-as-directory
                                           (expand-file-name line-file))))
                            (when (string-prefix-p target line-dir)
                              (setq hit (cons buf (point)))
                              (throw 'found nil)))))
                      (forward-line))))))))))
    hit))

The advice only fires when the literal parent is not already open as a dired buffer, which keeps normal upward navigation completely unchanged:

(defun my/dired-collapse--up-advice (orig-fn &optional other-window)
  "Around-advice for `dired-up-directory' restoring collapsed round-trip."
  (let* ((dir (and (derived-mode-p 'dired-mode)
                   (stringp default-directory)
                   (expand-file-name default-directory)))
         (up (and dir (file-name-directory (directory-file-name dir))))
         (parent-buf (and up (dired-find-buffer-nocreate up)))
         (hit (and dir (null parent-buf)
                   (my/dired-collapse--find-hit dir))))
    (if hit
        (let ((buf (car hit))
              (pos (cdr hit)))
          (if other-window
              (switch-to-buffer-other-window buf)
            (pop-to-buffer-same-window buf))
          (goto-char pos)
          (dired-move-to-filename))
      (funcall orig-fn other-window))))

(advice-add 'dired-up-directory :around #'my/dired-collapse--up-advice)

If /proj/src/main/java/com/ happens to already exist as a dired buffer, dired-up-directory does its usual thing and just goes there; the up-advice never fires. It is only when the literal parent is absent that the advice kicks in and hands you back to the collapsed ancestor. I think that is the right tradeoff: the advice never surprises you when you were going to get the standard behaviour anyway, and it only steps in when the standard behaviour would throw away context you clearly still had in a buffer somewhere.

End result, RET into a collapsed chain drops me deep, M-e walks me back out to the original collapsed line, and none of it requires doing anything clever with dired-jump’s “pop up a level” path, which I am increasingly convinced I should not have been using in the first place.

Everything lives in the Emacs-DIYer project on GitHub, in the literate README.org. If you just want the snippet to drop into your own init file, the two functions and the add-hook line above are the whole thing: no require, no use-package, no MELPA, just built-in dired and a bit of buffer shenanigans. And that's it! Phew, and breathe!

-1:-- Emacs-DIYer: A Built-in dired-collapse Replacement (Post James Dyer)--L0--C0--2026-04-15T18:20:00.000Z

Dave Pearson: slstats.el v1.11

Yet another older Emacs Lisp package that has had a tidy up. This one is slstats.el, a wee package that can be used to look up various statistics about the Second Life grid. It's mainly a wrapper around the API provided by the Second Life grid survey.

When slstats is run, you get an overview of all of the information available.

An overview of the grid

There are also various commands for viewing individual details about the grid in the echo area:

  • slstats-signups - Display the Second Life sign-up count
  • slstats-exchange-rate - Display the L$ -> $ exchange rate
  • slstats-inworld - Display how many avatars are in-world in Second Life
  • slstats-concurrency - Display the latest-known concurrency stats for Second Life
  • slstats-grid-size - Display the grid size data for Second Life

There is also slstats-region-info, which will show information, plus the object and terrain maps, for a specific region.

Region information for Da Boom

As with a good few of my older packages: it's probably not that useful, but at the same time it was educational to write it to start with, and it can be an amusement from time to time.

-1:-- slstats.el v1.11 (Post Dave Pearson)--L0--C0--2026-04-15T14:52:55.000Z

Irreal: Switching Between Dired Windows With TAB

Just a quickie today. Marcin Borkowski (mbork) has a very nice little post on using Tab with Dired. By default, Tab isn’t defined in Dired but mbork suggests an excellent use for it and provides the code to implement his suggestion.

If there are two Dired windows open, the default destination for Dired commands is “the other window”. That’s a handy thing that not every Emacs user knows. Mbork’s idea is to use Tab to switch between Dired windows.

It’s a small thing, of course, but it’s a nice example of reducing friction in your Emacs workflow. As Mbork says, it’s yet another example of how easy it is to make small optimizations like this in Emacs.

Update [2026-04-16 Thu 11:06]: Added link to mbork’s post.

-1:-- Switching Between Dired Windows With TAB (Post Irreal)--L0--C0--2026-04-15T14:42:10.000Z

Gal Buki: Clipboard in terminal Emacs with WezTerm

Although TRAMP allows access to files on remote servers using the local Emacs instance I usually prefer to open Emacs using a running daemon session on the remote server.

The issue with Emacs in the terminal is that kill and yank (aka copy and paste) don't work the same way as with the GUI. Using WezTerm I have found that it is possible to fix this.

SSH clipboard support

My terminal emulator of choice is WezTerm which already supports bidirectional kill & yank out of the box.

But I can't get my muscle memory to use Ctrl+Shift+V to yank text in Emacs. I want Ctrl+y / C-y, like I'm used to.

Luckily .wezterm.lua lets us catch Ctrl+y and yank the clipboard contents into the terminal and with that into Emacs.

local wezterm = require 'wezterm'

local config = wezterm.config_builder()

config.keys = {
    -- Paste in Emacs using regular key bindings
    {
      key = "y",
      mods = "CTRL",
      action = wezterm.action.PasteFrom "Clipboard",
    },
}

return config

Local clipboard support

For those wanting to run Emacs in a local terminal WezTerm provides yank out of the box but not kill. To kill text from Emacs into the local clipboard we need to use xclip.

The xclip package has an auto-detect function but it has some issues.

  • if it finds xclip or xsel it will use them even if we are on Wayland
  • it can't detect macOS (darwin)

So I decided to set the xclip-method manually. In addition I use the :if option of use-package to load the package only when we are in the terminal, an xclip method was found, and we aren't using SSH.

(defun tjkl/xclip-method ()
  (cond
   ((eq system-type 'darwin) 'pbpaste)
   ((getenv "WAYLAND_DISPLAY") 'wl-copy)
   ((getenv "DISPLAY") 'xsel)
   ((getenv "WSLENV") 'powershell)
   (t nil)))

(use-package xclip
  :if (and (not (display-graphic-p))
           (not (getenv "SSH_CONNECTION"))
           (tjkl/xclip-method))
  :custom
  (xclip-method (tjkl/xclip-method))
  :config
  (xclip-mode 1))

Local clipboard without xclip

It is possible to use OSC 52 (an Operating System Command escape sequence) in a local WezTerm terminal without the xclip package and cli tool.
The problem with this approach is that we can't work with terminal and GUI Emacs using the same session. Since interprogram-cut-function is global it will also try to use OSC52 in the GUI Emacs and fail with the message progn: Device 1 is not a termcap terminal device.

I have not yet found a good way to restore GUI yank functionality once interprogram-cut-function is set. So the following should only be used if the GUI instance doesn't use the same session or if the GUI is never opened after terminal Emacs.

(unless (display-graphic-p)
  (defun tjkl/osc52-kill (text)
    (when (and text (stringp text))
      (send-string-to-terminal
       (format "\e]52;c;%s\a"
               (base64-encode-string text t)))))
  (setq interprogram-cut-function #'tjkl/osc52-kill))
-1:-- Clipboard in terminal Emacs with WezTerm (Post Gal Buki)--L0--C0--2026-04-15T10:50:00.000Z

Sacha Chua: Org Mode: JS for translating times to people's local timezones

I want to get back into the swing of doing Emacs Chats again, which means scheduling, which means timezones. Let's see first if anyone happens to match up with the Thursday timeslots (10:30 or 12:45) that I'd like to use for Emacs-y video things, but I might be able to shuffle things around if needed.

I want something that can translate times into people's local timezones. I use Org Mode timestamps a lot because they're so easy to insert with C-u C-c ! (org-timestamp-inactive), which inserts an inactive timestamp like [2026-04-14 Tue].

By default, the Org HTML export for it does not include the timezone offset. That's easily fixed by adding %z to the time specifier, like this:

(setq org-html-datetime-formats '("%F" . "%FT%T%z"))
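To see what %z adds, evaluating the time format with and without it shows the difference (the outputs are illustrative and depend on your timezone):

```elisp
;; %F is the ISO date, %T the time, and %z the numeric UTC offset.
(format-time-string "%FT%T")    ; e.g. "2026-04-14T18:44:16"
(format-time-string "%FT%T%z")  ; e.g. "2026-04-14T18:44:16-0400"
```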

Now a little bit of JavaScript code makes it clickable and lets us toggle a translated time. I put the time afterwards so that people can verify it visually. I never quite trust myself when it comes to timezone translations.

function translateTime(event) {
  if (event.target.getAttribute('datetime')?.match(/[0-9][0-9][0-9][0-9]$/)) {
    if (event.target.querySelector('.translated')) {
      event.target.querySelectorAll('.translated').forEach((o) => o.remove());
    } else {
      const span = document.createElement('span');
      span.classList.add('translated');
      span.textContent = ' → ' + (new Date(event.target.getAttribute('datetime'))).toLocaleString(undefined, {
        month: 'short',  
        day: 'numeric',  
        hour: 'numeric', 
        minute: '2-digit',
        timeZoneName: 'short'
      });
      event.target.appendChild(span);
    }
  }
}
function clickForLocalTime() {
  document.querySelectorAll('time').forEach((o) => {
    if (o.getAttribute('datetime')?.match(/[0-9][0-9][0-9][0-9]$/)) {
      o.addEventListener('click', translateTime);
      o.classList.add('clickable');
    }
  });
}

And some CSS to make it more obvious that it's now clickable:

.clickable {
    cursor: pointer;
    text-decoration: underline dotted;
}

Let's see if this is useful.

Someday, it would probably be handy to have a button that translates all the timestamps in a table, but this is a good starting point.

View Org source for this post

You can e-mail me at sacha@sachachua.com.

-1:-- Org Mode: JS for translating times to people's local timezones (Post Sacha Chua)--L0--C0--2026-04-14T18:44:16.000Z

Irreal: Alfred Snippets

Today while I was going through my feed, I saw this post from macosxguru over at Bicycle For Your Mind. It’s about the system he uses for snippets. The TL;DR is that he has settled on Typinator and likes it a lot.

I use snippets a lot but spread across several systems—YASnippet, abbrev mode, and the macOS text expansion facility—and none of them work everywhere I need them to, so I have to negotiate three different systems. YASnippet is different from the other two in that its snippets can accept input instead of just making a text substitution like the others.

In his post, macosxguru mentions that his previous system for text substitutions was based on the Alfred snippet functions. I’ve been using Alfred for a long time and love it. A one time purchase of the power pack makes your Mac much more powerful. Still, even though I was vaguely aware of it, I’d never used Alfred’s snippet function.

After seeing it mentioned on macosxguru’s post I decided to try it out. It’s easy to specify text substitutions. I couldn’t immediately figure out how to trigger the substitutions manually so I just set them to trigger automatically. I usually don’t like that but so far it’s working out well.

Up til now, I haven’t found anywhere that the substitutions don’t work. That can’t be said of any of the other systems I was using. It’s particularly hard to find one that works with both Emacs and other macOS applications.

If you’re using Emacs on macOS, you should definitely look into Alfred. It plays very nicely with Emacs and my newfound snippets ability makes the combination even better.

Alfred Snippets (Post Irreal, 2026-04-14T14:59:57Z)

Charlie Holland: Completion is a Substrate, not a UI

1. About   emacs completion ux

Figure 1: icr-primer-banner.jpeg, produced with DALL-E 3

ICR is not a convenience feature. It is a structural change in how the cost of an interaction scales with the size of the underlying data.

The argument I want to make is sharper than it sounds. Incremental completing read (ICR) is not a convenience feature. It is a structural change in how the cost of an interaction scales with the size of the underlying data; it is one of the few interface patterns that genuinely respects how human memory works; and it can fortuitously change how you organize your data, not just how you retrieve it.

A brief thought exercise reveals how a surprisingly large fraction of all software — email, calendars, file browsers, music players, issue trackers, package managers — is, at its core, just two primitives: 1) pick a thing, 2) act on it. That is the exact shape ICR was built for, and most of the visual chrome we drape around those primitives is decoration.

This matters concretely because very few environments expose completion as a programmable substrate1 you can build ICR experiences with, rather than as a sealed UI you can only consume. In everything else you use, the candidate sources, the matcher, the sorter, the annotator, and the available actions are largely fixed by the vendor or aren't even available. On the other hand, in Emacs and the shell, every layer is independently replaceable. Taking your completion stack seriously is among the highest-leverage things an Emacs user can do, on the same scale as customizing your shell, and for the same reasons. Done right, ICR can dramatically reduce the cognitive overhead of using your computer to do almost anything.

This post opens a short series on ICR. The remaining two posts get concrete: a breakdown of the modular completion framework I use day to day, and a case study of an entire Spotify client that is just an ICR application. The goal of this opening piece is to convince you that ICR is worth your rigor, and to give you the conceptual vocabulary to recognize how much of your own software experience already runs on it.

2. What is Incremental Completing Read?   ICR HCI

"Incremental Completing Read" has three load-bearing words:

Read, in the elisp sense: a function that prompts the user and returns a value. The system asks a question, you answer, and then the answer is something other code can do something with.

Completing: the system maintains a candidate set and shows you which candidates currently match your input. You don't type the full answer. You type enough to disambiguate, and the system fills in the rest.

Incremental: the candidate set is recomputed on every keystroke2. You don't submit a query and wait for results. Filtering happens between characters, fast enough that the result list feels like an extension of what you're typing.

Combine the three words and you get an interaction that is qualitatively different from either browsing or searching. Browsing scales poorly to large sets — you can scan a list of ten things, not a list of ten thousand. Search-and-submit scales fine in the back end but introduces a feedback gap that breaks flow. ICR fuses the two.
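The narrowing mechanic is small enough to sketch. Here is a minimal, hypothetical model in Python (not any particular frontend's matcher) that recomputes the candidate set from scratch on each simulated keystroke:

```python
def narrow(candidates, query):
    """Keep candidates containing every space-separated fragment of the query."""
    fragments = query.lower().split()
    return [c for c in candidates if all(f in c.lower() for f in fragments)]

# A toy candidate set; real frontends filter thousands of items the same way.
commands = ["find-file", "project-find-file", "consult-ripgrep", "magit-status"]

# Simulate typing one character at a time: the set shrinks between keystrokes.
for i in range(1, len("find p") + 1):
    query = "find p"[:i]
    print(repr(query), "->", narrow(commands, query))
```

Splitting the query into unordered fragments loosely mirrors what packages like Orderless do; the point is only that filtering is a pure function of (candidates, query), cheap enough to rerun after every character.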

A clarification before going further. In Emacs, the standard-library function named completing-read is not, on its own, incremental. It TAB-completes at the minibuffer and shows a *Completions* buffer on demand. The incremental UX described above is layered on top by a separate generation of frontends like Icomplete, Ido, Ivy, Helm, and the modern Vertico. Throughout this series, "ICR" refers to the pattern (the API plus an incremental frontend), not to any single function. This separation matters because it makes the Emacs completion stack pluggable, and this separation is the subject of the next post in the series.

3. The ubiquity of ICR   ux

Think about all the places you already use ICR. Here's a partial inventory:

  • The browser URL bar narrows history and bookmarks as you type.
  • Search engines suggest queries character by character.
  • Spotify, Apple Music, and YouTube surface tracks, artists, and videos as you fill in the search box.
  • Amazon's product search shows partial matches and category filters live.
  • IDEs offer symbol completion, file navigation, and command palettes. Think VS Code's Cmd-Shift-P, JetBrains' "Search Everywhere," GitHub's file finder, Sublime Text's "Goto Anything."
  • Shell users reach for fzf to fuzzy-find files, branches, processes, and command history.
  • Slack jumps to channels by typing fragments of the name.
  • Even mobile keyboards suggest the next word as you tap3!

These look like different tools, but when you think about it they are the same interface. Each one accepts a stream of keystrokes, runs an incremental query against a sometimes enormous candidate set, and surfaces the best matches in real time, as you type. Across all these apps, your interaction pattern is the same: you type fragments, watch a candidate list narrow, and then pick from what survives the narrowing.

ICR has become the lingua franca of navigation.

The pattern is so ubiquitous that its absence now feels strange to me. File pickers that only show a tree, settings panels with no search box, and configuration UIs where you have to remember the menu hierarchy all force me to slow down and then manually browse through candidate sets to find what I'm looking for. These feel like artifacts of an earlier era — the era before incremental completing read became a common default for how humans navigate sets of named things. Today, it feels like ICR has become the lingua franca of navigation.

4. A thought exercise: how much of computing fits inside ICR?   composition shell HCI

If you take anything away from this post, let it be what follows in this section. This realization is what makes Emacs legible to its power users:

We've seen where ICR shows up in the previous section, but where else can we use it? Run an inventory of the interfaces you use daily, and for each one, ask: at its core, is this just pick a thing from a set, then do something to it?

  • Email: pick a message; reply, archive, forward, delete.
  • Calendar: pick an event; accept, reschedule, open.
  • File browser: pick a file; open, rename, delete, move.
  • Issue tracker: pick an issue; assign, comment, close.
  • Music player: pick a track; play, queue, save.
  • Package manager: pick a package; install, remove, inspect.
  • Git client: pick a branch; checkout, merge, rebase, delete.
  • Cloud console: pick a resource; start, stop, configure, destroy.

The list grows uncomfortably long. It turns out that a surprising fraction of all interactions with your software uses the same two primitives: a source of candidates and a set of actions you can perform on a selected candidate. Most of the visual chrome we drape around these primitives is decoration.

It turns out that a surprising fraction of all interactions with your software uses the same two primitives: a source of candidates and a set of actions you can perform on a selected candidate. Most of the visual chrome we drape around these primitives is decoration.

Now that you've seen the light, your next move is to ask whether you can chain these. Consider navigating files in a project: ICR to pick a project, which scopes the candidate set to its files; ICR to pick a file, which scopes the actions to its file type; ICR to pick an action, which produces a new candidate set, and so on. An interaction model built from selecting and acting can be composed into arbitrarily complex workflows, the same way any other small set of orthogonal primitives can.
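The chaining idea reduces to each selection scoping the next candidate set. A sketch under stated assumptions (the data and the pick helper below are invented for illustration, standing in for interactive selection):

```python
# Hypothetical project-to-file data; a real source would query the filesystem.
projects = {
    "blog":  ["index.md", "post-icr.md"],
    "tools": ["fzf.sh", "deploy.py"],
}

def pick(candidates, query):
    """Stand-in for an interactive selector: narrow by substring, take the top hit."""
    matches = [c for c in candidates if query in c]
    return matches[0] if matches else None

project = pick(projects, "blo")                     # first ICR: choose a project
filename = pick(projects[project], "icr")           # second ICR: scoped to that project
action = pick(["open", "rename", "delete"], "op")   # third ICR: choose an action
print(action, filename)
```

Each pick both consumes a selection and produces the context for the next one, which is exactly the composition the shell pipelines below exploit.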

Shell users already know this composition story well. For most shell users, fzf drives ICR and produces selections. Pipes feed those selections into commands. Commands produce new selections, which can be piped back into fzf for more ICR, and so on. git branch | fzf | xargs git checkout is the pattern in miniature: a candidate source, a selector, an action, all chained. fd | fzf | xargs $EDITOR is the same shape with a different source. Build a few dozen of these one-liners and you have a personal interface to your filesystem, your version control history, your processes, your network, without anyone shipping you that interface. That's powerful!

The interesting, and frustrating, observation is how rare this composability and feature-richness is where ICR interfaces exist. Spotify will never let you redefine what "select a track" can do. Gmail's search cannot pipe its selected results into your own actions. Some environments come closer than others — Neovim's Telescope, Raycast's extension API, VS Code's QuickPick — but in each of them at least one of the layers (the matcher, the sorter, the annotator, the action set) is fixed by the vendor. Few environments expose every layer, and only Emacs and the shell expose them independently, so that you can swap one without disturbing the others.

This is the difference between using ICR and building ICR, and it is what makes Emacs and the shell uniquely powerful for anyone who works inside them all day. Personally, this is the main reason why I live in Emacs and the shell.

5. The cognitive cost argument   cognitiveStrain

Software engineers have a precise vocabulary for talking about how algorithms scale: time complexity, space complexity, big-O notation. The corresponding field for how interfaces scale is human-computer interaction (HCI), which has its own established vocabulary — Hick-Hyman's Law, Fitts's Law, working-memory load, recognition vs. recall — but engineers rarely reach for it. The argument that follows borrows from both sides, because ICR is best understood through both angles: an algorithmic property (constant-time filtering against an arbitrary corpus) producing an HCI property (constant-cost selection regardless of corpus size).

Consider the simple act of finding a file. In a tree-based file browser, the cognitive effort grows with the size of the file system. Five files in a folder is trivial, but five hundred files spread across a hierarchy is much more cognitively taxing. You have to remember where the file lives, click through directories, scan lists, scroll, and move your cursor to the selection. Add another order of magnitude — half a million files in a project — and the file browser has effectively ceased to function as a tool for finding things. Cognitively, this approach scales worse than linearly.

Now do the same task with ICR. You hit your file-finder binding, type a fragment of the name you remember, watch the list narrow to a handful of plausible matches, and pick one. The experience is the same whether your project contains fifty files or fifty thousand. The interface does not get harder to use as the candidate set grows.

ICR breaks the linkage between the size of the world and the difficulty of finding something in it.

It is tempting to call this O(1) cognitive complexity, by analogy to algorithmic complexity4. The point is straightforward: the cost of finding something via ICR is independent of the size of the candidate set, and that independence is what the big-O analogy is reaching for. ICR breaks the linkage between the size of the world and the difficulty of finding something in it.

There is also a literature analogue worth naming. Hick-Hyman's Law5 models the time required for a forced choice as roughly proportional to log₂(n+1), where n is the number of equally likely alternatives. A flat menu of ten thousand commands is a Hick-Hyman nightmare; the user pays a logarithmic-in-n decision cost on every selection. ICR sidesteps the law by collapsing n before the choice step happens. By the time the user is selecting from the visible candidate panel, n is already small, typically less than half a dozen in my experience, and the per-selection decision cost is bounded by panel size rather than corpus size. We can calmly let the corpus grow without bound and we can trust that the time-to-pick stays roughly constant.
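The arithmetic is easy to check. A quick sketch (the constant k below is illustrative, not an empirical fit) compares the Hick-Hyman cost of choosing over the full corpus with choosing over a narrowed panel:

```python
import math

def hick_hyman(n, k=0.2):
    """Approximate choice reaction time, k * log2(n + 1), for n alternatives."""
    return k * math.log2(n + 1)

# Deciding over the raw corpus grows with its size...
for corpus in (10, 10_000, 1_000_000):
    print(f"corpus {corpus:>9}: {hick_hyman(corpus):.2f}s")

# ...while ICR bounds the decision by the visible panel, whatever the corpus size.
print(f"panel of 6:        {hick_hyman(6):.2f}s")
```

The logarithm grows slowly, but it grows; holding n at panel size is what flattens the curve.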

This is why ICR is not just an ergonomic nicety. It bends the curve. Most interface improvements buy you a constant factor, like a faster animation, a clearer label, or a better-organized menu. ICR changes the curve itself, and anything that changes the curve dominates the things that only change the constant, given enough data.

The corollary is that ICR's value is asymmetric across users. If your projects are tiny and your address book is short, you may never feel the difference. However, if like me you are an Emacs user with a sprawling notes directory, two decades of email, half a dozen languages installed, and a thousand interactive commands, ICR is the difference between a usable system and an unusable one. The bigger your world, the more you'll want to bend the curve.

A really key thing for me personally is the alleviation of any anxiety about the aforementioned search spaces growing. Regardless of the underlying magnitude of my emails, news articles, code repositories, music libraries, etc…, the ease of finding what I'm looking for in any given workflow is roughly constant.

6. Recognition, recall, and the third option   psychology

Human-computer interaction research has long distinguished recognition (picking the right item from a presented list) from recall (producing the right item from memory)6. Recognition is famously easier, and this is why menus exist, why icon-based interfaces won, why "tip of my tongue" is a complaint about recall failure rather than recognition failure.

ICR sits in a strange and useful place between easy recognition and hard recall. You don't have to recall the full item, only a fragment of it. And you don't have to recognize it from a large fixed presented list, because the list narrows (often to a single candidate) in response to whatever fragment you produced. The interface meets you halfway.

This matters because the cognitive load of pure recall and the visual load of pure recognition both grow with set size. Recalling one item out of ten thousand is harder than recalling one out of ten. Recognizing one item in a list of ten thousand is harder than recognizing one in a list of ten. The hybrid form ICR offers — partial recall, then narrowing recognition — degrades much more gracefully. It is one of the few interaction primitives that gets its leverage from how human memory actually works rather than fighting it.

Cognitive psychology has a name for this hybrid: cued recall7. The user-typed fragment is a retrieval cue: the system uses it to materialize a small candidate set and the remainder of the task is recognition over that set. ICR is the UI instantiation of cued recall, with the screen serving as an externalized cue-to-candidate index. This is a well-established cognitive primitive, but it is rare to see an interface deploy it as deliberately as a well-tuned completion stack does.

The hybrid form ICR offers — partial recall, then narrowing recognition — degrades much more gracefully.

The best completion frameworks lean into this further. They learn your patterns. Recently selected items rise. Frequently selected items rise. The fragment you produce maps to the candidate you usually pick, not the candidate that happens to alphabetize first. The interface adapts to you. Over months, this turns into something close to muscle memory: you type a few characters and the right answer is already at the top, because that's where it has been for the last hundred selections.
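As a sketch of that adaptive sorting (the decay formula here is invented for illustration; prescient.el and friends use their own weighting), combining frequency and recency of past selections into one score might look like:

```python
import time

def frecency(timestamps, now, half_life=3600.0):
    """Score a candidate by its selection history: each past pick contributes
    1.0 when fresh, decaying by half every half_life seconds."""
    return sum(0.5 ** ((now - t) / half_life) for t in timestamps)

now = time.time()
history = {
    "magit-status":  [now - 60, now - 600, now - 7200],  # picked often and recently
    "makefile-mode": [now - 86400 * 30],                 # picked once, a month ago
}

# Sort candidates so the habitual pick surfaces first.
ranked = sorted(history, key=lambda c: frecency(history[c], now), reverse=True)
print(ranked)
```

Frequent picks accumulate terms and recent picks keep those terms large, so the candidate you usually choose floats to the top of the panel.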

7. Flat over nested: how ICR reshapes how you organize   organization knowledgeManagement

The downstream effect is not just on retrieval. ICR changes the math on how you should structure your data in the first place.

In a world without ICR, hierarchy is a pretty good coping strategy. Tree-structured folders, deeply nested categories, "taxonomies" — these exist because flat lists become unscannable past a certain size. If finding things requires browsing, then organizing into a navigable tree is necessary work, but that work has real and compounding costs. You have to invent the taxonomy up front, before you know what you'll eventually want to file. Then you have to remember it later. The biggest nightmare for me personally is that with hierarchies and taxonomies, I have to live with the fact that many items legitimately belong in two categories at once, yet the file system or knowledge management system forces me to pick one. I know people who are good at breaking out of this choice paralysis, but I know from experience that I am not one of them. And you incur an operational cost on every save, because every new item is a small classification problem.

The argument for nesting was always "I cannot scan a flat list of ten thousand items." ICR replies: "you do not need to scan it."

With ICR, hierarchy becomes optional. The argument for nesting was always "I cannot scan a flat list of ten thousand items." ICR replies: "you do not need to scan it." A flat directory plus tags plus links is sufficient, because ICR makes any individual item findable in a few keystrokes regardless of how many neighbors it has.

It is worth being precise about what ICR replaces and what it doesn't. Hierarchy does at least two distinct jobs. One is retrieval: helping you find a thing. The other is explanation: encoding kind-of and part-of relationships, conferring landmark structure on a space, making the shape of a domain legible at a glance. Cognitive psychology has long identified the latter as load-bearing. Eleanor Rosch's work on basic-level categories8 showed that hierarchical taxonomies map onto how humans actually carve up the world, and Thomas Malone's classic study of how people organize their physical desks9 found that "filing" (hierarchical, classified) and "piling" (flat, recency-ordered) coexist for good reasons: piles support fast access to active material and files support reasoning about the shape of what you have. ICR substitutes cleanly for hierarchy's retrieval function. It does not substitute for the explanatory function. When the relationships between things are themselves the point — a code architecture, a course curriculum, a legal taxonomy — a tree is still doing real work that no completion stack will replace10.

The sleight of hand to avoid is treating "ICR makes hierarchy optional" as "hierarchy is bad." The honest, narrower claim is this: in domains where hierarchy was load-bearing only as a search affordance, ICR lets you drop it and reclaim its costs.

This is the architectural premise of denote, Protesilaos Stavrou's Emacs note-taking package. denote stores notes in a single mostly-flat directory, and although the package supports subdirectory "silos", Stavrou explicitly argues against using them as a primary organizing principle. Notes relate to each other through filename-encoded tags and explicit hyperlinks. The package leans entirely on completion to find things, and that works because finding things in a flat namespace via ICR is instantaneous. The same idea shows up in tools like Obsidian and in older personal-knowledge systems. These systems abandon hierarchy because they trust search interfaces to scale to large search spaces.
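To make the flat-namespace idea concrete, here is a simplified sketch of filtering denote-style filenames by their encoded tags. The filenames are invented, and the parsing deliberately ignores the details of denote's real naming scheme; it only shows that a flat directory plus encoded tags is queryable without any tree:

```python
# Invented examples loosely following a "title__tag1_tag2" filename shape.
files = [
    "20240101T090000--icr-primer__emacs_completion.org",
    "20240315T110000--garden-notes__home.org",
    "20240402T160000--shell-tricks__shell_completion.org",
]

def tags(filename):
    """Extract the tag list encoded after '__' in a flat-directory filename."""
    stem = filename.rsplit(".", 1)[0]
    return stem.split("__", 1)[1].split("_") if "__" in stem else []

# No hierarchy needed: any slice of the corpus is one predicate away.
completion_notes = [f for f in files if "completion" in tags(f)]
print(completion_notes)
```

Because the metadata lives in the name itself, any completion frontend that can filter strings can filter the whole corpus.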

Emacs itself works this way at a much larger scale. One of my all-time favorite quirks is that every interactive command lives in a single flat namespace. A mature configuration easily exposes ten thousand of them (a quick smash of M-x on my Emacs produces over 13,000 interactive commands). Nothing about this is overwhelming to me though, because I never see the full list. I just type M-x and a fragment of what I want, and the relevant commands surface. A hierarchical menu system covering ten thousand commands would be unusable; a flat namespace plus M-x is unremarkable.

In Emacs you can get this flat-list style ICR even where there are rigid hierarchies. This is critical when physical hierarchies or taxonomies are necessary (like in code repositories), but the user still wants to navigate the content without engaging with the hierarchy or taxonomy. For example, when I'm trying to find a file via ICR, I find myself reaching for something like project-find-file (show me all files in a project in a flat list) over something like find-file (let me traverse the directories one level at a time until I find my leaf).

As we've already seen, the ICR pattern generalizes really well. Any structure you build to make scanning easier is a structure ICR makes redundant. Even where these structures need to exist, ICR can still help you get around the rigidity and opacity of that structure. Once you trust your completion stack, you can shed the hierarchies you built and maintain, and you can triumphantly reclaim the cognitive and operational overhead that those hierarchies were costing you.

8. Why ICR matters more in Emacs than anywhere else   emacs

The thought exercise above hands us the answer to a question this post has been circling: of the environments where ICR is genuinely programmable, why focus a series on Emacs rather than on the shell?

The shell case is well-trodden territory; Unix users have been chaining fzf and pipes for years, the design space is mostly explored, and shell users are typically introduced to the notion of ICR the second they start learning how to configure their prompt. The Emacs case is younger, deeper, and less well documented — and it is the focus of this series, so it is worth zooming in on the specific ways Emacs exposes completion as a substrate. Emacs is also less popular, so there is an air of proselytism to this post 😜.

In Emacs, every layer of the ICR interaction is pluggable. completing-read is a function in the standard library. The display is pluggable. The matching strategy is pluggable. The sorting is pluggable. The annotations are pluggable. The actions you can take on a selected candidate? Pluggable! This is all discussed in my subsequent post on the VOMPECCC composite framework.

In Emacs, every layer of the interaction is a place where you can substitute behavior, and every layer has a small ecosystem of competing implementations to choose from.

Most editors give you a completion UI. Emacs gives you a completion substrate. The difference is what you can build on top.

This is what separates Emacs from the editors that come closest. Most give you a completion UI; Emacs gives you a completion substrate. From an HCI standpoint, what is unusual about Emacs is not the completion interaction itself — the visible behavior is broadly similar to Telescope, QuickPick, or Raycast — but that the layers HCI usually treats as monolithic (matcher, sorter, annotator, action set, display surface) are exposed as independent surfaces. Those other tools let you produce candidates and bind actions, but the matcher, the sorter, the annotator, and the display they hand you are largely fixed. Recently, the Emacs community has done a lot of work towards making all of these pieces independently swappable, and the resulting compositional space is qualitatively bigger. This is the reason the Emacs completion ecosystem is one of the most interesting parts of the software. Every well-designed Emacs package eventually becomes, in part, a completing-read application: a thoughtful choice of candidate source, plus annotations, plus actions, plus a UI that is already familiar because it is the same UI you use for everything else. The cost of adding a new "thing the user can pick from a list" is close to zero, and the resulting interaction inherits all of the user's existing muscle memory.

Don't treat completion as a built-in convenience you don't have to think about. Emacs ships with a working completing-read out of the box, and many users never look further. This is a tragic error on the same scale as never customizing your shell. A serious Emacs user should treat the completion stack the way a serious shell user treats prompt and history setup: as a thing worth investing in, arguably the driving HCI paradigm of the Emacs platform. Every other piece of the system gets better when this one is good:

ICR is a simple concept, but it has really profound effects on how I use Emacs. Better completion makes file finding faster. Faster file finding changes how I organize my data. Better symbol completion changes how aggressively I refactor. Better command completion changes which commands I remember exist11. Better candidate annotations change which choices I can make confidently. In addition to saving me cognition, keystrokes, and time, ICR raises the upper bound on how much of Emacs I can fluently use.

9. Where this series goes

This was all very woo-woo and hand-wavy, but the next two posts get concrete.

The middle post is on VOMPECCC, a name for a loose constellation of eight Emacs packages — Vertico, Orderless, Marginalia, Prescient, Embark, Consult, Corfu, and Cape — that together compose a complete, modular completion framework along Unix-philosophy lines. Each package does one thing, and, boy, does it do it well. Most importantly, each communicates through Emacs's standard completion APIs, making it possible for any subset of these packages to work with or without the others. That post is a technical breakdown for developers who want to either adopt the whole stack or pick the pieces that solve their specific problems.

The final post is on spot, a Spotify client built as a pure ICR application: search Spotify's catalog through consult, view catalog metadata inline with marginalia, and act on the results with embark. It builds nothing of its own at the UI layer because it doesn't need to. Every UI primitive it requires is already there, courtesy of the framework the previous post describes. spot is a useful case study in what becomes possible when you stop treating completion as a default and start treating it as a programmable substrate.

Three posts, one argument: incremental completing read is one of the highest-leverage interaction patterns in computing, Emacs gives you uniquely deep control over it, and that control is worth using. The rest of the series is about the practical 'how'.

10. TL;DR

This post argues that Incremental Completing Read (ICR) — the pattern where a candidate list narrows in real time as you type — is not a convenience feature but a structural change in how interface cost scales with data size. ICR is composed of three ideas: read (prompt the user and return a value), completing (maintain and display a candidate set), and incremental (recompute matches on every keystroke). Together they produce an interaction qualitatively different from both browsing and searching.

The pattern is already ubiquitous across software you use daily — browser URL bars, search engines, music players, IDE command palettes, and shell tools like fzf all implement it. A surprising fraction of all computing boils down to two primitives: pick a thing from a set, then act on it, and these primitives compose into arbitrarily complex workflows through chaining, the way shell pipes do.

From a cognitive-science perspective, ICR breaks the linkage between corpus size and the difficulty of finding something in it. While tree-based browsing degrades with scale and Hick-Hyman's Law penalizes large choice sets, ICR collapses the visible candidate count before the choice step, keeping per-selection cost roughly constant regardless of how large the underlying data grows. ICR also occupies a unique position between recognition and recall — you supply a partial cue, the system materializes a small candidate set, and the rest is easy recognition. Cognitive psychology calls this cued recall, and well-tuned completion stacks lean into it further by learning your selection history.

Beyond retrieval, ICR reshapes how you organize data in the first place. Hierarchy was always a coping strategy for unscannable flat lists; ICR makes flat lists scannable, so hierarchies built purely as search affordances become redundant. This is the design premise behind tools like denote and Emacs's own flat M-x command namespace.

Finally, the post explains why Emacs is the focus of this series: unlike every other environment, Emacs exposes the matcher, sorter, annotator, display, and action set as independently replaceable layers, making completion a programmable substrate rather than a sealed UI. The next two posts get concrete — one on the modular VOMPECCC completion framework, and one on a Spotify client built as a pure ICR application.

Footnotes:

1

I use "substrate" in the sense borrowed from biology and platform engineering: a foundational layer that other things are built on, acted upon, or composed out of. In biology, an enzyme acts on a substrate; in hardware, transistors are fabricated on a silicon substrate; in platform engineering, applications run on a compute substrate like Kubernetes. In all three, the substrate is primitive, malleable, and compositional — the raw material from which or on which higher-level things are built. Applied here: Emacs hands you completion as raw pluggable parts (matcher, sorter, annotator, action, display) rather than as a finished dish. The implicit contrast is completion-as-UI: a product you consume, where the vendor has already picked every layer for you.

2

The obvious objection: what if the candidate set is too enormous to materialize up front? Think a grep over a large codebase, or a query against a remote API. Emacs handles this through async completion sources — consult-ripgrep is the canonical example. Each keystroke debounces and spawns a ripgrep process whose streaming output becomes the incremental candidate set; the user sees narrowing results without ever holding the full corpus in memory. The pattern generalizes: any candidate source that can be expressed as a streaming query (ripgrep, git log, a database cursor, a REST endpoint) slots into the same ICR interaction. Corpus size stops being a constraint on the interface.

3

This actually doesn't even require an initial search string. I have a bad joke about the iMessage word-prediction being the original ChatGPT — if you could use a chuckle, I highly suggest opening up iMessage and spamming the next predicted word and observing the sheer nonsense that comes out.

4

Strictly speaking, big-O describes the runtime of an algorithm, not the perceived effort of a human using a tool, and the user-facing cost of ICR is not literally constant — recalling a fragment, scanning the survivors, and choosing among them all consume real cognitive resources. The defensible claim, and the one big-O notation is reaching for, is independence from the size of the candidate set. Whether you call that "asymptotically constant cognitive cost," "sublinear effort," or just "the same work regardless of scale," the underlying observation is the same.

5

William E. Hick, "On the rate of gain of information," Quarterly Journal of Experimental Psychology 4(1), 1952, 11–26; and Ray Hyman, "Stimulus information as a determinant of reaction time," Journal of Experimental Psychology 45(3), 1953, 188–196. The law: choice reaction time scales as roughly k·log₂(n+1) for n equally likely alternatives. ICR's effect is to keep n (the size of the visible candidate panel) small and roughly constant even as the underlying corpus grows arbitrarily.

6. Jakob Nielsen, "10 Usability Heuristics for User Interface Design" (1994, periodically updated by the Nielsen Norman Group). Heuristic #6 is "Recognition rather than recall": interfaces should minimize the user's memory load by making elements, actions, and options visible, rather than requiring users to retrieve them from memory.

7. Endel Tulving and Zena Pearlstone, "Availability versus accessibility of information in memory for words," Journal of Verbal Learning and Verbal Behavior 5(4), 1966, 381–391. The original demonstration that retrieval cues dramatically improve recall over uncued conditions, even when the underlying item is equally "available" in memory. Tulving's framing of cued recall as a distinct mode — intermediate between free recall and recognition — is the one ICR most closely instantiates.

8. Eleanor Rosch, Carolyn B. Mervis, Wayne D. Gray, David M. Johnson, and Penny Boyes-Braem, "Basic objects in natural categories," Cognitive Psychology 8(3), 1976, 382–439. The basic-level finding: human categorization is not arbitrary across hierarchies but anchored at a particular middle level (chair, dog, car) that maximizes informativeness. Hierarchies are not just retrieval scaffolds; they reflect how humans naturally carve up the world.

9. Thomas W. Malone, "How do people organize their desks? Implications for the design of office information systems," ACM Transactions on Office Information Systems 1(1), 1983, 99–112. The classic study identifying "files" (hierarchical, classified) and "piles" (flat, recency-ordered) as coexisting strategies, each well-suited to different parts of the same workflow.

10. The counterargument would be that tags and hyperlinks give you the same thing, but the point is that oftentimes a PHYSICAL hierarchy, like the organization of files in a directory, is needed and will be unavoidable.

11. I find it interesting that alleviating the burden of remembering a large search space actually improves my memory for the things that are actually important.

-1:-- Completion is a Substrate, not a UI (Post Charlie Holland)--L0--C0--2026-04-14T12:22:00.000Z

Dave Pearson: wordcloud.el v1.4

I think I'm mostly caught up with the collection of Emacs Lisp packages that need updating and tidying, which means yesterday evening's clean-up should be one of the last (although I would like to revisit a couple and actually improve and extend them at some point).

As for what I cleaned up yesterday: wordcloud.el. This is a package that, when run in a buffer, will count the frequency of words in that buffer and show the results in a fresh window, complete with the "word cloud" differing-font-size effect.

Word cloud in action

This package is about 10 years old at this point, and I'm struggling to remember why I wrote it now. I know I was doing something -- either writing something or reviewing it -- and the frequency of some words was important. I also remember this doing the job just fine and solving the problem I needed to solve.

Since then it's just sat around in my personal library of stuff I've written in Emacs Lisp, not really used. I imagine that's where it's going back to, but at least it's cleaned up and should be functional for a long time to come.

-1:-- wordcloud.el v1.4 (Post Dave Pearson)--L0--C0--2026-04-14T07:47:39.000Z

Dave's blog: Posframe for everything

An Emacser recently posted about popterm, which can use posframe to toggle a terminal visible and invisible in Emacs. I tried it out, and ran into problems with it, so abandoned it for now.

However, this got me thinking about other things that can use posframe, which pops up a frame at point. I’ve seen other Emacsers use posframe when they show off their configurations in meetups. I thought about what I use often that might benefit from a posframe.

  • magit
  • vertico
  • which-key
  • company
  • flymake

Which of these has something I can use to enable posframes?

Of course, there are plenty of other packages that have add-on packages to enable posframes.

Magit

magit doesn’t have anything directly, but it makes heavy use of transient. And there’s a package transient-posframe that can enable posframes for transients. When I use magit’s transients, the transient pops up as a frame in the middle of my Emacs frame.

vertico

Install vertico-posframe to use posframes with vertico.

which-key

Yep, there’s which-key-posframe.

company

See company-posframe.

flymake

I needed a bit of web searching to find this. flymake-popon can use a posframe in the GUI and popon in a terminal.

-1:-- Posframe for everything (Post Dave's blog)--L0--C0--2026-04-14T00:00:00.000Z

Marcin Borkowski: Binding TAB in Dired to something useful

I’m old enough to remember Norton Commander for DOS. Despite that, I never used Midnight Commander nor Sunrise Commander – Dired is still my go-to file manager these days. In fact, Dired has a feature which seems to be inspired by NC: when there are two Dired windows, the default destination for copying, moving and symlinking is “the other” window. Surprisingly, another feature which would be natural in an orthodox file manager is absent from Dired
-1:-- Binding TAB in Dired to something useful (Post Marcin Borkowski)--L0--C0--2026-04-13T18:56:07.000Z

Irreal: Some Config Hacks

Bozhidar Batsov has an excellent post that collects several configuration hacks from a variety of people and distributions. It’s a long list and rather than list them all, I’m going to mention just a few that appeal to me. Some of them I’m already using. Others I didn’t know about but will probably adopt.

  • Save the clipboard before killing: I’ve been using this for years. What it does is make sure that the contents of the system clipboard aren’t lost if you do a kill in Emacs. This is much more useful than it sounds, especially if, like me, you do a lot of cutting and pasting from other applications.
  • Save the kill ring across sessions: I’m not sure I’ll adopt this but it’s easy to see how it could be useful.
  • Auto-chmod scripts: Every time I see this one I resolve to add it to my config but always forget. What it does is automatically make scripts (files beginning with #!) executable when they’re saved.
  • Proportional window resizing: When a window is split, this causes all the windows in the frame to resize proportionally.
  • Faster mark popping: It’s sort of like repeat mode for popping the mark ring. After the first Ctrl+u Ctrl+Space you can continue popping the ring with a simple Ctrl+Space.
  • Auto-select Help window: This is my favorite. When I invoke help, I almost always want to interact with the Help buffer, if only to quit and delete it with a q. Unfortunately, the Help buffer doesn’t get focus so I have to change window to it. This simple configuration gives the Help buffer focus when you open it.

Everybody’s needs and preferences are different, of course, so be sure to take a look at Batsov’s post to see which ones might be helpful to you.

-1:-- Some Config Hacks (Post Irreal)--L0--C0--2026-04-13T14:56:38.000Z

Sacha Chua: 2026-04-13 Emacs news

Lots of little improvements in this one! I'm looking forward to borrowing the config tweaks that bbatsov highlighted and also trying out popterm for quick-access shells. Also, the Emacs Carnival for April has a temporary home at Newbies/starter kits - feel free to write and share your thoughts!

Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

View Org source for this post

You can comment on Mastodon or e-mail me at sacha@sachachua.com.

-1:-- 2026-04-13 Emacs news (Post Sacha Chua)--L0--C0--2026-04-13T13:43:00.000Z

Protesilaos Stavrou: Emacs: new modus-themes-exporter package live today @ 15:00 Europe/Athens

Raw link: https://www.youtube.com/watch?v=IVTqn9IgBN4

UPDATE 2026-04-13 18:00 +0300: I wrote the package during the stream: https://github.com/protesilaos/modus-themes-exporter.


[ The stream will be recorded. You can watch it later. ]

Today, the 13th of April 2026, at 15:00 Europe/Athens I will do a live stream in which I will develop the new modus-themes-exporter package for Emacs.

The idea for this package is based on an old experiment of mine: to get the palette of a Modus theme and “export” it to another file format for use in supported terminal emulators or, potentially, other applications.

My focus today will be on writing the core functionality and testing it with at least one target application.

Prior work of mine from my pre-Emacs days is the tempus-themes-generator, which was written in Bash: https://gitlab.com/protesilaos/tempus-themes-generator.

-1:-- Emacs: new modus-themes-exporter package live today @ 15:00 Europe/Athens (Post Protesilaos Stavrou)--L0--C0--2026-04-13T00:00:00.000Z

Bastien Guerry: Get ready for Orgy in 15 minutes

Orgy is a static website generator for Org files.

It turns a directory of .org files into a website with navigation, section indexes, tag pages, RSS feeds, multilingual layouts and themes, without requiring any configuration or templates. You write Org files, run a single orgy command, and get a public/ directory ready to deploy.

This tutorial will guide you through creating a decent static website from an empty directory in a few steps.

We assume that Orgy is already installed and available as the orgy command.

Step 1 - Your first page

Create a directory and a single index.org file:

mkdir website
cd website

Then put this in index.org:

#+title: Hello

Welcome to my site.

Serve it:

orgy serve

You're done! You can see the website at http://localhost:1888.

No config, no templates, no theme.

See the new public/ directory:

website/
├── index.org
└── public/
    └── index.html
Step 1 - a minimal site with zero configuration

Step 2 - Add a blog post

Drop a second .org file right next to index.org:

#+title: Hello World
#+date: 2026-04-10

This is my first *post* with some /Org markup/ and a [[https://orgmode.org][link]].

Save it as hello-world.org.

orgy serve will notice the modification and rebuild the site for you.

website/
├── hello-world.org
├── index.org
└── public/
    ├── hello-world/
    │   └── index.html
    ├── index.html
    └── ...

The URL slug comes from the filename. The title and date come from the headers. The page automatically appears in the top navigation.

Step 3 - Configure your site

So far orgy used your directory name as the site title. Create a config.edn file at the root of website/:

{:title     "My Notebook"
 :base-url  "https://example.com"
 :copyright "© 2026 Me - CC BY-SA 4.0"
 :menu      ["hello-world"]}

The header now shows your custom title, the footer shows your copyright, and the navigation is limited to what you listed in :menu. Every key in config.edn is optional - add only what you need.

Step 4 - Organize with sections

Any subdirectory becomes a section with its own index. Let's group posts under notes/:

mkdir notes
mv hello-world.org notes/

Add a second post notes/second-post.org:

#+title: Second Post
#+date: 2026-04-11

Another entry.

Update the menu in config.edn:

:menu ["notes"]

After orgy serve has rebuilt the website, you have this:

website/
├── config.edn
├── index.org
├── notes/
│   ├── hello-world.org
│   └── second-post.org
└── public/
    ├── index.html
    └── notes/
        ├── index.html          <= auto-generated section index
        ├── hello-world/
        │   └── index.html
        └── second-post/
            └── index.html

You never wrote a listing page; Orgy generated notes/index.html for you!

Step 4 - the section index, generated automatically from the directory contents

Step 5 - Use tags

Add a #+tags: line to any post:

#+title: Hello World
#+date: 2026-04-10
#+tags: emacs org-mode

Orgy creates:

public/tags/
├── index.html          ← all tags with post counts
├── emacs/index.html    ← posts tagged "emacs"
└── org-mode/index.html

A "Tags" link is automatically appended to the navigation.

Step 5 - a tag page listing every post tagged =emacs=

Step 6 - Go multilingual

Want a French version of a post? Just rename the file with a language suffix:

mv notes/hello-world.org notes/hello-world.en.org

And write the translation in notes/hello-world.fr.org:

#+title: Bonjour le monde
#+date: 2026-04-10

Mon premier /billet/.

Orgy detects multilingual mode and switches the output layout:

public/
├── index.html          ← redirects to first language
├── en/
│   ├── index.html
│   ├── feed.xml
│   └── notes/...
└── fr/
    ├── index.html
    ├── feed.xml
    └── notes/...

Each language gets its own homepage, section indexes, tag pages, and RSS feed. A language switcher appears in the nav. The only thing you changed is a filename.

Step 7 - Images and captions

Drop an image anywhere in your content tree, for instance next to the post that uses it:

notes/
├── hello-world.en.org
└── photo.jpg

Orgy copies every non-org file to the output, preserving the path: notes/photo.jpg ends up at public/notes/photo.jpg. No static/ folder, no manual copying, no asset pipeline. Reference it from the post with a plain relative link:

[[./photo.jpg]]

To turn it into a proper <figure> with a caption, add #+caption: above the image:

#+caption: A nice view from the office window
[[./photo.jpg]]
Step 7 - an image rendered as a =<figure>= with its caption

And if you want alignment, add #+attr_html: too:

#+caption: A nice view from the office window
#+attr_html: :align right
[[./photo.jpg]]

For site-wide assets (favicon, custom CSS, shared images), use a static/ directory at the root - its contents are copied verbatim to public/.

Step 8 - Math formulas

Orgy renders LaTeX math out of the box. Write inline math between dollar signs and display equations between \[ and \]. See this example, followed by how it is rendered:

Euler's identity $e^{i\pi} + 1 = 0$ is often called the most
beautiful equation in mathematics.

The Gaussian integral:

\[
\int_{-\infty}^{+\infty} e^{-x^2}\,dx = \sqrt{\pi}
\]

Euler's identity \(e^{i\pi} + 1 = 0\) is often called the most beautiful equation in mathematics.


The Gaussian integral:

\[ \int_{-\infty}^{+\infty} e^{-x^2}\,dx = \sqrt{\pi} \]

No extra configuration is needed. Orgy loads MathJax on any page that contains math, and skips it on pages that don't.

Step 9 - Add a theme

The finishing touch. Add a :theme key to config.edn:

{:title     "My Notebook"
 :base-url  "https://example.com"
 :copyright "© 2026 Me - CC BY-SA 4.0"
 :menu      ["notes"]
 :theme     "teletype"}

Reload - your site now has a full theme loaded from the pico-themes CDN. Try other names like swh, org, lincolk, ashes or doric. You can also point :theme to an https:// URL or a local .css file.

Step 9 - the same site with the =teletype= theme applied

Going further

You now have a real multilingual blog with tags, images, RSS feeds, a sitemap, and a theme - built from plain Org files and a few lines of config. A few things to explore next:

  • orgy init - bootstrap config.edn and the full set of templates/ for customization
  • #+draft: true - exclude a file from the build
  • :quick-search true - enable client-side search
  • :theme-toggle true - add a light/dark switch in the nav
  • orgy help - list all CLI options

Orgy's philosophy: simple things should be simple, complex things should be possible. You just saw the simple half 😀

Enjoy!

👉 More code contributions.

-1:-- Get ready for Orgy in 15 minutes (Post Bastien Guerry)--L0--C0--2026-04-13T00:00:00.000Z

Irreal: Days Until

Charles Choi recently saw a Mastodon post showing the days until the next election and started wondering how one would compute that with Emacs. He looked into it and, of course, the answer turned out to be simple. Org mode has a function, org-time-stamp-to-now that does exactly that. It takes a date string and calculates the number of days until that date.

Choi wrote an internal function that takes a date string and outputs a string specifying the number of days until that date. The default is x days until <date string> but you can specify a different output string if you like. That function, cc/--days-until, serves as a base for other functions.

Choi shows two such functions. One allows you to specify a date from a date picker and computes the number of days until that date. The other, following the original question, computes the number of days until the next midterm and general elections in the U.S. for 2026. It’s a simple matter to change it for other election years. Nobody but the terminally politically obsessed would care about that, but it’s a nice example of how easy it is to use cc/--days-until to find the number of days until some event.
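The underlying computation is small enough to sketch outside Emacs too. The Python below is a hypothetical analogue of the days-until idea; the function names are illustrative inventions, not Choi's cc/--days-until or org-time-stamp-to-now.

```python
from datetime import date

# Hypothetical Python analogue of the days-until computation; these
# helper names are illustrative, not Choi's actual code.

def days_until(target, today=None):
    """Days from today until an ISO date string (negative if past)."""
    today = today or date.today()
    return (date.fromisoformat(target) - today).days

def days_until_message(target, fmt="{days} days until {date}", today=None):
    """Format the count with a customizable output string."""
    return fmt.format(days=days_until(target, today), date=target)

# Days until the 2026 U.S. general election, as seen from 12 April 2026:
print(days_until_message("2026-11-03", today=date(2026, 4, 12)))
```

As with Choi's version, the formatting function takes an optional output template, so the same base computation serves any countdown you care about.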

Finally, in the comments to Choi’s reddit announcement ggxx-sdf notes that you can also use calc-eval for these sorts of calculations.

As Choi says, it’s a human characteristic to want to know how long something is going to take. If you have some event that you want a countdown clock for, take a look at Choi’s post.

-1:-- Days Until (Post Irreal)--L0--C0--2026-04-12T14:50:16.000Z

Bicycle for Your Mind: Expanding with Typinator 10

Typinator

Product: Typinator
Price: $49.99 (one time for macOS only) or $29.99/yearly (for macOS and iOS version)

I was a TextExpander user and switched from it to aText when TextExpander went to a subscription model. I’ve been using Alfred for snippet expansions for well over… actually, I have no idea how long. Ever since Alfred added that feature, I suppose. Expansions which require input are handled by Keyboard Maestro. I wanted to see what was available in this space. There was no good reason for the change; I was perfectly happy with the setup. But I saw that Typinator 10 had been released and I got curious. I approached the developer and they were kind enough to provide me with a license. So, this is the review.

What Does a Text Expansion Program Do?

A text expansion program makes it easy to type content you use regularly. For instance, I have an expansion where I type ,bfym and [Bicycle For Your Mind](http://bicycleforyourmind.com) is pasted into the text. It lessens your typing load, stops you from making mistakes and makes typing easy. Expansions include corrections of common mistakes that you or other people make while typing. It includes emojis and symbols. It can be simple or complex depending on your needs.

macOS has a built-in mode for text expansions, but it is limited and, like a lot of things macOS does, included without much attention or developer love. It is lacking in features and finesse. If you are serious about making your writing comfortable and easy, you need to consider third-party solutions. The macOS marketplace has a fair number of programs which tackle this task. The two main products are TextExpander and Typinator. Both Alfred and Keyboard Maestro have this feature built into the program.

Typinator 1

iOS

The main feature in this version of Typinator is the iOS integration. I am not interested in that, I am not going to talk about that. As far as I know, TextExpander was the only other product which had that integration. Typinator is now matching them. For some people, this is a crucial feature. Going by my experience with this developer, I am sure Typinator works as well on iOS.

Surprises

Typinator lets me use regex to define expansions. One of the ones which gets used all the time lets me type a period and then the first letter of the next sentence gets capitalized automatically. You have no idea how much I like that. Apple has that as a setting but it is temperamental. Not Typinator. Works like a charm. Thanks to its regex support it does interesting things with dates. I love that feature although I haven’t used it enough to make it super useful. I see the potential there.
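The capitalize-after-period behavior described here is easy to sketch as a plain regex substitution. The Python below illustrates the rule only; it is not Typinator's actual expansion syntax.

```python
import re

def capitalize_after_period(text):
    """Capitalize the first letter after sentence-ending punctuation,
    mimicking the regex-driven expansion described above."""
    return re.sub(r'([.!?]\s+)([a-z])',
                  lambda m: m.group(1) + m.group(2).upper(),
                  text)

print(capitalize_after_period("this works. really well. trust me."))
```

Note that only letters following punctuation are touched; the very first word of the text is left alone.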

Observations

Converting my Alfred snippets to Typinator was easy. Save the snippets in Alfred as a CSV file and then import those into Typinator.

Typinator keeps a record of the number of times you use a particular expansion and the last time you used it. That gives me the ability to monitor the usage of the expansions. Alfred doesn’t do that. I use abbrev-mode in Emacs, and that keeps a running count too. I love that feature.

Typinator 2

Typinator is easy to interact with. It has a menu-bar icon which you can click on to get the main window or you can assign a system wide keyboard command to bring the window up. You have the ability to highlight something in any editor you are using and press a keyboard command to bring up a dialog box to set up an expansion based on the content you have highlighted. Easy. I find myself using this to increase the number of expansions I have available.

Typinator gives you minute control over the expansions. You have the ability to trigger the expansions immediately upon matching the expansion trigger or after a word break. In other words, you can expand as soon as you match or expand after you type a space or any punctuation after your match. This setting is available on every individual snippet. Every individual snippet can be set for ignoring case or expand on exact match. Another level of fine control which is useful.

This is a mature program. It has been available for a long while now. It is a full-featured expansion program. They have been at it for a while and they are good at it.

Conclusion

If you are looking for a text expansion program, you cannot go wrong with Typinator. It is great at what it does and is full of features which will make you smile. I love it.

I recommend Typinator with enthusiasm.

macosxguru at the gmail thingie.

-1:-- Expanding with Typinator 10 (Post Bicycle for Your Mind)--L0--C0--2026-04-12T07:00:00.000Z

Tim Heaney: Computing Days Until with Perl and Rust

The other day Charles Choi wrote about Computing Days Until with Emacs. I decided to try it in Perl and Rust.

Perl

In Perl, we could do it with just the standard library like so.

#!/usr/bin/env perl
use v5.42;
use POSIX qw(ceil);
use Time::Piece;
use Time::Seconds;

my $target_date = shift // die "\nUsage: $0 YYYY-MM-DD\n";
my $target = Time::Piece->strptime($target_date, "%Y-%m-%d");
my $today = localtime;
my $delta = $target - $today;
say ceil $delta->days;

Subtracting two Time::Piece objects gives a Time::Seconds object, which has a days method.
-1:-- Computing Days Until with Perl and Rust (Post Tim Heaney)--L0--C0--2026-04-12T00:00:00.000Z

Irreal: Magit Support

Just about everyone agrees that the two Emacs packages considered “killer apps” by those considering adopting the editor are Org mode and Magit. I’ve seen several people say they use Emacs mainly for one or the other.

Their development models are completely different. Org has a development team with a lead developer in much the same way that Emacs does. Magit is basically a one man show, although there are plenty of contributors offering pull requests and even fixing bugs. That one man is Jonas Bernoulli (tarsius) who develops Magit full time and earns his living from doing so.

Like most nerds, he hates marketing and would rather be writing code than seeking funding. Still, that thing about earning a living from Magit means that he must occasionally worry about raising money. Now is one such time. Some of his funding pledges have expired and the weakening U.S. dollar is also contributing to his dwindling income.

Virtually every Emacs user is also a Magit user and many of us depend on it, so now would be a propitious moment to chip in some money to keep the good times rolling. The best thing, of course, is to get your employer to make a more robust contribution than would be feasible for an individual developer, but even if every developer chips in a few dollars (or whatever) we can support tarsius and allow him to continue working on Magit and its associated packages.

His support page is here. Please consider contributing a few dollars. Tarsius certainly deserves it and we’ll be getting our money’s worth.

-1:-- Magit Support (Post Irreal)--L0--C0--2026-04-11T14:24:14.000Z

Sacha Chua: Org Mode: Tangle Emacs config snippets to different files and add boilerplate

I want to organize the functions in my Emacs configuration so that they are easier for me to test and so that other people can load them from my repository. Instead of copying multiple code blogs from my blog posts or my exported Emacs configuration, it would be great if people could just include a file from the repository. I don't think people copy that much from my config, but it might still be worth making it easier for people to borrow interesting functions. It would be great to have libraries of functions that people can evaluate without worrying about side effects, and then they can copy or write a shorter piece of code to use those functions.

In Prot's configuration (The custom libraries of my configuration), he includes each library in full, in a single code block, with the boilerplate description, keywords, and (provide '...) that make them more like other libraries in Emacs.

I'm not quite sure my little functions are at that point yet. For now, I like the way that the functions are embedded in the blog posts and notes that explain them, and the org-babel :comments argument can insert links back to the sections of my configuration that I can open with org-open-at-point-global or org-babel-tangle-jump-to-org.

Thinking through the options...

Org tangles blocks in order, so if I want boilerplate or if I want to add require statements, I need to have a section near the beginning of my config that sets those up for each file. Noweb references might help me with common text like the license. Likewise, if I want a (provide ...) line at the end of each file, I need a section near the end of the file.

If I want to specify things out of sequence, I could use Noweb. By setting :noweb-ref some-id :tangle no on the blocks I want to collect later, I can then tangle them in the middle of the boilerplate. Here's a brief demo:

#+begin_src emacs-lisp :noweb yes :tangle lisp/sacha-eshell.el :comments no
;; -*- lexical-binding: t; -*-
<<sacha-eshell>>
(provide 'sacha-eshell)
#+end_src

However, I'll lose the comment links that let me jump back to the part of the Org file with the original source block. This means that if I use find-function to jump to the definition of a function and then I want to find the outline section related to it, I have to use a function that checks if this might be my custom code and then looks in my config for "defun …". It's a little less generic.

I wonder if I can combine multiple targets with some code that knows what it's being tangled to, so it can write slightly different text. org-babel-tangle-single-block currently calculates the result once and then adds it to the list for each filename, so that doesn't seem likely.

Alternatively, maybe I can use noweb or my own tangling function and add the link comments from org-babel-tangle-comments.

Aha, I can fiddle with org-babel-post-tangle-hook to insert the boilerplate after the blocks have been written. Then I can add the lexical-binding: t cookie and the structure that makes it look more like the other libraries people define and use. It's always nice when I can get away with a small change that uses an existing hook. For good measure, let's even include a list of links to the sections of my config that affect that file.

(defvar sacha-dotemacs-url "https://sachachua.com/dotemacs/")

;;;###autoload
(defun sacha-dotemacs-link-for-section-at-point (&optional combined)
  "Return the link for the current section."
  (let* ((custom-id (org-entry-get-with-inheritance "CUSTOM_ID"))
         (title (org-entry-get (point) "ITEM"))
         (url (if custom-id
                  (concat "dotemacs:" custom-id)
                (concat sacha-dotemacs-url ":-:text=" (url-hexify-string title)))))
    (if combined
        (org-link-make-string
         url
         title)
      (cons url title))))

(eval-and-compile
  (require 'org-core nil t)
  (require 'org-macs nil t)
  (require 'org-src nil t))
(declare-function org-babel-tangle--compute-targets "ob-tangle")
(defun sacha-org-collect-links-for-tangled-files ()
  "Return a list of ((filename (link link link link)) ...)."
  (let* ((file (buffer-file-name))
         results)
    (org-babel-map-src-blocks (buffer-file-name)
      (let* ((info (org-babel-get-src-block-info))
             (link (sacha-dotemacs-link-for-section-at-point)))
        (mapc
         (lambda (target)
           (let ((list (assoc target results #'string=)))
             (if list
                 (cl-pushnew link (cdr list) :test 'equal)
               (push (list target link) results))))
         (org-babel-tangle--compute-targets file info))))
    ;; Put it back in source order
    (nreverse
     (mapcar (lambda (o)
               (cons (car o)
                     (nreverse (cdr o))))
             results))))
(defvar sacha-emacs-config-module-links nil "Cache for links from tangled files.")

;;;###autoload
(defun sacha-emacs-config-update-module-info ()
  "Update the list of links."
  (interactive)
  (setq sacha-emacs-config-module-links
        (seq-filter
         (lambda (o)
           (string-match "sacha-" (car o)))
         (sacha-org-collect-links-for-tangled-files)))
  (setq sacha-emacs-config-modules-info
        (mapcar (lambda (group)
                  `(,(file-name-base (car group))
                    (commentary
                     .
                     ,(replace-regexp-in-string
                       "^"
                       ";; "
                       (concat
                        "Related Emacs config sections:\n\n"
                        (org-export-string-as
                         (mapconcat
                          (lambda (link)
                            (concat "- " (cdr link) "\\\\\n  " (org-link-make-string (car link)) "\n"))
                          (cdr group)
                          "\n")
                         'ascii
                         t))))))
                sacha-emacs-config-module-links)))

;;;###autoload
(defun sacha-emacs-config-prepare-to-tangle ()
  "Update module info if tangling my config."
  (when (string-match "Sacha.org" (buffer-file-name))
    (sacha-emacs-config-update-module-info)))

Let's set up the functions for tangling the boilerplate.

(defvar sacha-emacs-config-modules-dir "~/sync/emacs/lisp/")
(defvar sacha-emacs-config-modules-info nil "Alist of module info.")
(defvar sacha-emacs-config-url "https://sachachua.com/dotemacs")

;;;###autoload
(defun sacha-org-babel-post-tangle-insert-boilerplate-for-sacha-lisp ()
  (when (file-in-directory-p (buffer-file-name) sacha-emacs-config-modules-dir)
    (goto-char (point-min))
    (let ((base (file-name-base (buffer-file-name))))
      (insert (format ";;; %s.el --- %s -*- lexical-binding: t -*-

;; Author: %s <%s>
;; URL: %s

;;; License:
;;
;; This file is not part of GNU Emacs.
;;
;; This is free software; you can redistribute it and/or modify
;; it under the terms of the GNU General Public License as published by
;; the Free Software Foundation; either version 3, or (at your option)
;; any later version.
;;
;; This is distributed in the hope that it will be useful,
;; but WITHOUT ANY WARRANTY; without even the implied warranty of
;; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
;; GNU General Public License for more details.
;;
;; You should have received a copy of the GNU General Public License
;; along with GNU Emacs; see the file COPYING.  If not, write to the
;; Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor,
;; Boston, MA 02110-1301, USA.

;;; Commentary:
;;
%s
;;; Code:

\n\n"
                      base
                      (or
                       (assoc-default 'description
                                      (assoc-default base sacha-emacs-config-modules-info #'string=))
                       "")
                      user-full-name
                      user-mail-address
                      sacha-emacs-config-url
                      (or
                       (assoc-default 'commentary
                                      (assoc-default base sacha-emacs-config-modules-info #'string=))
                       "")))
      (goto-char (point-max))
      (insert (format "\n(provide '%s)\n;;; %s.el ends here\n"
                      base
                      base))
      (save-buffer))))
(setq sacha-emacs-config-url "https://sachachua.com/dotemacs")
(with-eval-after-load 'org
  (add-hook 'org-babel-pre-tangle-hook #'sacha-emacs-config-prepare-to-tangle)
  (add-hook 'org-babel-post-tangle-hook #'sacha-org-babel-post-tangle-insert-boilerplate-for-sacha-lisp))

You can see the results at .emacs.d/lisp. For example, the function definitions in this post are at lisp/sacha-emacs.el.

This is part of my Emacs configuration.
View Org source for this post

You can view 2 comments or e-mail me at sacha@sachachua.com.

-1:-- Org Mode: Tangle Emacs config snippets to different files and add boilerplate (Post Sacha Chua)--L0--C0--2026-04-11T14:13:19.000Z

Listful Andrew: Phones-to-Words Challenge IV: Clojure as an alternative to Java

There's an old programming challenge where the digits in a list of phone numbers are converted to letters according to rules and a given dictionary file. The results of the original challenge suggested that Lisp would be a potentially superior alternative to Java, since Lisper participants were able to produce solutions in, on average, fewer lines of code and less time than Java programmers. Some years ago I tackled it in Emacs Lisp and Bash. I've now done it in Clojure.
-1:-- Phones-to-Words Challenge IV: Clojure as an alternative to Java (Post Listful Andrew)--L0--C0--2026-04-10T11:24:00.000Z

Erik L. Arneson: Emacs as the Freelancer's Command Center

Freelancing for small businesses and organizations leads to a position where you are juggling a number of projects for multiple clients. You need to keep track of a number of tasks ranging from software development to sending emails to project management. This is a lot easier when you have a system that can do a bunch of the work for you, which is why I use Emacs as my freelancer command center.

I would like to share some of the tools and workflows I use in Emacs to help me keep on top of multiple clients’ needs and expectations.

Organization with org-mode

It should be no surprise that at the center of my Emacs command center is org-mode. I have already written about it a lot. Every org-mode user seems to have their own way of keeping track of things, so please don’t take my organizational scheme as some kind of gospel. A couple of years ago, I wrote about how I handle to-do lists in org-mode, and I am still using that method for to-do keywords. However, file structure is also important. I have a number of core files.

Freelance.org

This top-level file contains all of my ongoing business tasks, such as tracking potential new clients and recurring tasks like website maintenance and checking my MainWP dashboard. I also have recurring tasks for invoicing, tracking expenses, and other important business things.

This file is also where I have my primary time tracking and reporting. Org-mode already supports this pretty nicely; I just use the built-in clocktable feature.

Clients/*.org

Clients that have large projects or ongoing business get their own file. This makes organization a lot easier. All tasks associated with a client and their various projects end up in these individual files. The important part is making sure that these files are included in the time-tracking clock table and your org-mode agenda, so you can see what is going on every week.
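A minimal sketch of that wiring, assuming the files live under ~/org/ (the paths here are hypothetical; adjust to your own layout):

```elisp
;; Pull Freelance.org and every per-client file into the agenda.
;; Paths are illustrative, not the author's actual layout.
(setq org-agenda-files
      (cons "~/org/Freelance.org"
            (directory-files "~/org/Clients" t "\\.org\\'")))
```

A clocktable dynamic block with `:scope agenda` will then report clocked time across all of those files in one table.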

References and Linking

I have C-c l bound to org-store-link and use it all the time to link to various files, directories, URLs, and even emails. I can then use those links in my client notes, various tasks in my to-do list, and so on. This helps me keep my agenda organized even when my filesystem and browser bookmarks are a bit of a mess.
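For reference, this is the binding the Org manual itself suggests:

```elisp
;; Store a link to the thing at point (file, URL, email, ...)
(global-set-key (kbd "C-c l") #'org-store-link)
```

Stored links can then be inserted into any Org buffer with C-c C-l (org-insert-link).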

Email with mu4e

I have been reading and managing my email in Emacs for over 25 years. There have been a few breaks here and there where I have tried out other software or even web mail clients, but it has always been a headache. I return to Emacs! Long ago, I used VM (which seems to have taken on new life!), but currently I use mu4e.

This gives me a ton of power and flexibility when dealing with email. I have custom functions to help me compose and organize my email, and I can use org-store-link to keep track of individual emails from clients as they relate to agenda items. I even have a function to convert emails that I have written in Markdown into HTML email, and one that searches for questions in a client email to make sure I haven’t missed anything.
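As a rough illustration of the question-finding idea, here is a minimal sketch (not the author's actual function) that collects lines ending in a question mark from the message at point:

```elisp
;; Sketch only: list question-like lines in the current buffer so
;; nothing in a client email goes unanswered.
(defun my/message-find-questions ()
  "List lines ending in a question mark in the current buffer."
  (interactive)
  (let (questions)
    (save-excursion
      (goto-char (point-min))
      (while (re-search-forward "^.*\\?\\s-*$" nil t)
        (push (match-string 0) questions)))
    (if questions
        (message "Questions found:\n%s"
                 (mapconcat #'identity (nreverse questions) "\n"))
      (message "No questions found."))))
```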

The ability to write custom code to both process and create email is extremely powerful and a great time saver.

Writing Code

I don’t know what else to say about this; I use Emacs for all of my software development. I make sure to use Eglot whenever there is a language server available, and I try to leverage all the fancy features offered by Emacs whenever possible. The vast majority of my client projects are in PHP (thanks, WordPress), Go, JavaScript, and TypeScript.

Writing Words

Previously, I have shared quite a bit about writing in Emacs. I like to start everything in org-mode, but I also write quite a bit in Markdown. Emacs has become a powerful tool for writing. I use the Harper language server along with Eglot to check grammar and spelling.
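One way to wire a grammar checker like Harper into Eglot is to register it for your writing modes. This is a sketch; check the Harper documentation for the exact server invocation, as the `"harper-ls" "--stdio"` command line here is an assumption:

```elisp
;; Assumed invocation: harper-ls speaking LSP over stdio.
(with-eval-after-load 'eglot
  (add-to-list 'eglot-server-programs
               '((markdown-mode org-mode) . ("harper-ls" "--stdio"))))
```

With that in place, M-x eglot in a Markdown or Org buffer starts the checker.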

Track All Changes with Magit

Version control is essential, a lesson I have learned over 30+ years of software development. While Git is not part of Emacs, the software I use to interface with Git is. Magit is a Git user interface that runs entirely in Emacs. I use it to track my writing, my source code, and even all of my org-mode files. Using version control is so essential that I have a weekly repeating agenda task reminding me to check all of my everyday files to make sure I have checked in my changes for the week.

Thinking Music with EMMS

I like to have some soothing background music when I am programming, writing, or otherwise working on my computer. However, if that background music has lyrics, it can be really distracting. It is easy to make a playlist for various suitable SomaFM channels to load into EMMS (the Emacs Multimedia System) using the command M-x emms-play-playlist.

Try saving the following into playlist.el somewhere, and using it the next time you are writing:

;;; This is an EMMS playlist file. Play it with M-x emms-play-playlist
((*track* (type . url) (name . "https://somafm.com/synphaera.pls"))
 (*track* (type . url) (name . "https://somafm.com/gsclassic.pls"))
 (*track* (type . url) (name . "https://somafm.com/sonicuniverse.pls"))
 (*track* (type . url) (name . "https://somafm.com/groovesalad.pls")))

And make sure to check out SomaFM’s selection to find some good background music that suits your tastes!

And the tools I have missed

There are undoubtedly Emacs tools that I have missed in this brief overview. I have been wracking my brain as I write, trying to see what I have forgotten or overlooked. Frankly, Emacs has become such a central part of the organization for my freelancing that there are probably many tools, packages, and processes that I use every day without thinking about it too much.

Emacs makes it possible for me to freelance for multiple clients and small businesses without losing my mind with organization and task management. The tools it provides allow me to stay on top of multiple projects, handle client relationships, and keep track of years worth of tasks, communications, and projects. Without it, I’d be sunk!

What Emacs tools are you using to manage your freelance business? I am always looking for ways to improve or streamline my process.

The featured image for this post comes from Agostino Ramelli’s Le diverse et artificiose machine (1588). Read more about it on the Public Domain Review.

-1:-- Emacs as the Freelancer's Command Center (Post Erik L. Arneson)--L0--C0--2026-04-10T00:00:00.000Z

Protesilaos Stavrou: Emacs modus-themes live stream today @ 14:00 Europe/Athens

Raw link: https://www.youtube.com/watch?v=xFQDYTCS1os

[ The stream will be recorded. You can watch it later. ]

At 14:00 Europe/Athens I will hold a live stream about Emacs. Specifically, I will work on my modus-themes package.

The idea is to write more tests and refine the relevant functions along the way.

I am announcing this about 45 minutes before I go live. I will keep the chat open in case there are any questions.

-1:-- Emacs modus-themes live stream today @ 14:00 Europe/Athens (Post Protesilaos Stavrou)--L0--C0--2026-04-10T00:00:00.000Z

James Dyer: Wiring Flymake Diagnostics into a Follow Mode

Flymake has been quietly sitting in my config for years doing exactly what it says on the tin, squiggly lines under things that are wrong, and I mostly left it alone. But recently I noticed I was doing the same little dance over and over: spot a warning, squint at the modeline counter, run `M-x flymake-show-buffer-diagnostics`, scroll through the list to find the thing I was actually looking at, then flip back. Two windows, zero connection between them.

So I wired it up properly, and while I was in there I gave it a set of keybindings that feel right to my muscle memory.

The obvious bindings for stepping through errors are `M-n` and `M-p`, and most people using flymake bind exactly those. The problem is that in my config `M-n` and `M-p` are already taken, they step through simply-annotate annotations (which is itself a very handy thing and I am not giving it up!). So I shifted a key up and went with the shifted variants: `M-N` for next, `M-P` for previous, and `M-M` to toggle the diagnostics buffer.

(setq flymake-show-diagnostics-at-end-of-line nil)
(with-eval-after-load 'flymake
  (define-key flymake-mode-map (kbd "M-N") #'flymake-goto-next-error)
  (define-key flymake-mode-map (kbd "M-P") #'flymake-goto-prev-error))

With M-M I wanted it to be a bit smarter than just “open the buffer”. If it is already visible I want it gone; if it is not, I want it up. The standard toggle pattern:

(defun my/flymake--diag-buffer ()
  "Return the visible flymake diagnostics buffer, or nil."
  (seq-some (lambda (b)
              (and (with-current-buffer b
                     (derived-mode-p 'flymake-diagnostics-buffer-mode))
                   (get-buffer-window b)
                   b))
            (buffer-list)))

(defun my/flymake-toggle-diagnostics ()
  "Toggle the flymake diagnostics buffer."
  (interactive)
  (let ((buf (my/flymake--diag-buffer)))
    (if buf
        (quit-window nil (get-buffer-window buf))
      (flymake-show-buffer-diagnostics)
      (my/flymake-sync-diagnostics))))

Now the interesting bit. What I really wanted was a follow mode, something like how the compilation buffer tracks position or how Occur highlights the current hit. When my point lands on an error in the source buffer, the corresponding row in the diagnostics buffer should light up. That way the diagnostics window becomes a live index of where I am rather than a static dump, and I think in general this is how a lot of other IDEs work.

I tried the lazy route first, turning on hl-line-mode in the diagnostics buffer and calling hl-line-highlight from a post-command-hook in the source buffer. The line lit up once and then refused to move. Nothing I did would shift it. This is because hl-line-highlight is really only designed to be driven from the window whose line is being highlighted, and I was firing it from afar.

Ok, so why not just manage my own overlay:

(defvar my/flymake--sync-overlay nil
  "Overlay used to highlight the current entry in the diagnostics buffer.")

(defun my/flymake-sync-diagnostics ()
  "Highlight the diagnostics buffer entry matching the error at point."
  (when-let* ((buf (my/flymake--diag-buffer))
              (win (get-buffer-window buf))
              (diag (or (car (flymake-diagnostics (point)))
                        (car (flymake-diagnostics (line-beginning-position)
                                                  (line-end-position))))))
    (with-current-buffer buf
      (save-excursion
        (goto-char (point-min))
        (let ((found nil))
          (while (and (not found) (not (eobp)))
            (let ((id (tabulated-list-get-id)))
              (if (and (listp id) (eq (plist-get id :diagnostic) diag))
                  (setq found (point))
                (forward-line 1))))
          (when found
            (unless (overlayp my/flymake--sync-overlay)
              (setq my/flymake--sync-overlay (make-overlay 1 1))
              (overlay-put my/flymake--sync-overlay 'face 'highlight)
              (overlay-put my/flymake--sync-overlay 'priority 100))
            (move-overlay my/flymake--sync-overlay
                          found
                          (min (point-max) (1+ (line-end-position)))
                          buf)
            (set-window-point win found)))))))

My first pass at the walk through the tabulated list did not work. I was comparing (tabulated-list-get-id) directly against the diagnostic returned by flymake-diagnostics using eq, and it was always false, which meant found stayed nil forever and the overlay never moved. A dive into flymake.el revealed why. Each row in the diagnostics buffer stores its ID as a plist, not as the diagnostic itself:

(list :diagnostic diag
      :line line
      :severity ...)

So I need to pluck out :diagnostic before comparing. Obvious in hindsight, as these things always are. With plist-get in place the comparison lines up and the overlay moves exactly where I want it, tracking every navigation command.

The fallback lookup using line-beginning-position and line-end-position is there because (flymake-diagnostics (point)) only returns something if point is strictly inside the diagnostic span. When I land between errors or on the same line as an error but a few columns off, I still want the diagnostics buffer to track, so I widen the search to the whole line.

Finally, wrap the hook in a minor mode so I can toggle it per buffer and enable it automatically whenever flymake comes up:

(define-minor-mode my/flymake-follow-mode
  "Sync the diagnostics buffer to the error at point."
  :lighter nil
  (if my/flymake-follow-mode
      (add-hook 'post-command-hook #'my/flymake-sync-diagnostics nil t)
    (remove-hook 'post-command-hook #'my/flymake-sync-diagnostics t)))

(add-hook 'flymake-mode-hook #'my/flymake-follow-mode)
(define-key flymake-mode-map (kbd "M-M") #'my/flymake-toggle-diagnostics)

The end result is nice. M-M pops the diagnostics buffer, M-N and M-P walk through the errors, and as I navigate the source the matching row in the diagnostics buffer highlights in step with me. If I close the buffer with another M-M everything goes quiet, and I can still step through with M-N/M-P on their own.

Three little keybindings and twenty lines of elisp, but they turn flymake from a static reporter into something that actually feels connected to where I am in the buffer.

-1:-- Wiring Flymake Diagnostics into a Follow Mode (Post James Dyer)--L0--C0--2026-04-09T05:13:00.000Z

Charles Choi: Computing Days Until with Emacs

A recent Mastodon post showing the days until the next U.S. election got me to wonder, “how can I compute that in Emacs?” Turns out, this is trivial with the Org mode function org-time-stamp-to-now doing the timestamp computation for you.

We can wrap org-time-stamp-to-now in an internal function cc/--days-until that generates a formatted string of the days until a target date.

(defun cc/--days-until (target &optional template)
  "Formatted string of days until TARGET.

- TARGET: date string that conforms to `parse-time-string'.
- TEMPLATE: format string that includes a ‘%d’ specifier.

If TEMPLATE is nil, then a predefined format string will be
used."
  (let* ((template (if template
                       template
                     (concat "%d days until " target)))
         (days (org-time-stamp-to-now target))
         (msg (format template days)))
    msg))

From there we can then start defining commands that use cc/--days-until. The command cc/days-until shown below will prompt you with a date picker to enter a date. Note that you can enter a date value (e.g. “Dec 25, 2026”) in the mini-buffer prompt for org-read-date.

(defun cc/days-until (arg)
  "Prompt user for date and show days until in the mini-buffer.

Use `org-read-date' to read the target date, then display the days
until it in the mini-buffer.

If prefix ARG is non-nil, then the computed result is stored in the
 `kill-ring'."
  (interactive "P")
  (let* ((target (org-read-date))
         (msg (cc/--days-until target)))
    (if arg
        (kill-new msg))
    (message msg)))

Going back to the original motivator for this post, here’s an implementation of days until the next two major U.S. election dates with the command cc/days-until-voting.

(defun cc/days-until-voting (arg)
  "Days until U.S. elections in 2026 and 2028.

If prefix ARG is non-nil, then the computed result is stored in the
 `kill-ring'."
  (interactive "P")
  (let* ((midterms (cc/--days-until "2026-11-03" "%d days until 2026 midterms"))
         (election (cc/--days-until "2028-11-07" "%d days until 2028 presidential election"))
         (msg (format "%s, %s" midterms election)))
    (if arg
        (kill-new msg))
    (message msg)))

The result of M-x cc/days-until-voting as of 8 April 2026 is:

209 days until 2026 midterms, 944 days until 2028 presidential election

It’s so human to want to know how long it’s going to take. Feel free to build your own countdown clocks using the code above. May your journey to whatever you plan be a happy one!
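For instance, reusing cc/--days-until for a countdown of your own is a one-liner (the date and name here are just an illustration):

```elisp
;; Hypothetical example command built on cc/--days-until.
(defun my/days-until-new-year ()
  "Show days until 1 January 2027 in the mini-buffer."
  (interactive)
  (message (cc/--days-until "2027-01-01" "%d days until 2027")))
```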

-1:-- Computing Days Until with Emacs (Post Charles Choi)--L0--C0--2026-04-08T23:00:00.000Z

Dave Pearson: quiz.el v1.7

I wondered yesterday:

...those question headers are displaying differently, with the background colour no longer spanning the width of the window. I'd like to understand why.

Turns out it was pretty straightforward:

diff --git a/quiz.el b/quiz.el
index 2dbe45d..c1ba255 100644
--- a/quiz.el
+++ b/quiz.el
@@ -40,7 +40,8 @@
 (defface quiz-question-number-face
   '((t :height 1.3
        :background "black"
-       :foreground "white"))
+       :foreground "white"
+       :extend t))
   "Face for the question number."
   :group 'quiz)

and so v1.7 has happened.

Quiz with reinstated header look

It looks like, perhaps, at some point in the past, :extend was t by default, but it no longer is? Either way, explicitly setting it to t has done the trick.
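If you hit the same problem with a face from a package you don't maintain, you can apply the equivalent fix from your own init file without patching the package; a sketch using the face from this post:

```elisp
;; Force the face background to extend past end-of-line to the
;; window edge, once quiz.el has loaded.
(with-eval-after-load 'quiz
  (set-face-attribute 'quiz-question-number-face nil :extend t))
```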

-1:-- quiz.el v1.7 (Post Dave Pearson)--L0--C0--2026-04-08T15:54:38.000Z

Irreal: Tolerance For Repetition

Floofcode over at the Emacs subreddit asks a question that resonates with me. He notes that he often has a repetitive task and wonders whether it would be worthwhile writing some Elisp to automate it. Usually, he has to repeat the task several times before he gets fed up and fixes it for good. He wonders how other people deal with this. Do they have to repeat the task a certain number of times before automating it, or is the criterion more subjective?

I can relate. This happens to me all the time. I keep doing the same task over and over until one day I realize that I’m being stupid and spend a few minutes dashing off a bit of Elisp that solves the problem once and for all. Every time, I tell myself, “Well, I won’t make that mistake again. Next time I’m going to get this type of task automated right away.” Of course, the next time the same thing happens.

As to floofcode’s question, I would guess that it depends on the person. For me, it’s a subjective matter. The amount of time I’ll spend repeating the same boring task over and over varies but it always ends in a fit of anger when I ask myself why I’m still doing things manually. The thing is, when I’m repeatedly doing the task manually, I’m not wondering whether I should automate it. That happens at the end when I realize I’ve been stupid.

I guess the answer is something like: after you’ve repeated the task twice, just automate it. Sure, sometimes you’ll lose and waste time, but in my experience it will most often be a win. I wish I could learn this.

-1:-- Tolerance For Repetition (Post Irreal)--L0--C0--2026-04-08T14:33:12.000Z

Dave Pearson: fasta.el v1.1

Today's Emacs Lisp package tidy-up is of a package I first wrote a couple of employers ago. While working on code I often found myself viewing FASTA files in an Emacs buffer and so I thought it would be fun to use this as a reason to knock up a simple mode for highlighting them.

fasta.el was the result.

An example FASTA file

While I doubt it was or is of much use to others, it helped me better understand simple font-locking in Emacs Lisp, and also made some buffers look a little less boring when I was messing with test data.

As for this update: it's the usual stuff of cleaning up deprecated uses of setf, mostly.

If bioinformatics-related Emacs Lisp code written by a non-bioinformatician is your thing, you might also find 2bit.el of interest too. Much like fasta.el it too probably doesn't have a practical use, but it sure was fun to write and taught me a few things along the way; it also sort of goes hand-in-hand with fasta.el too.

-1:-- fasta.el v1.1 (Post Dave Pearson)--L0--C0--2026-04-08T10:25:43.000Z

Please note that planet.emacslife.com aggregates blogs, and blog authors might mention or link to nonfree things. To add a feed to this page, please e-mail the RSS or ATOM feed URL to sacha@sachachua.com . Thank you!