Bozhidar Batsov: Emacs and Vim in the Age of AI

It’s tough to make predictions, especially about the future.

– Yogi Berra

I’ve been an Emacs fanatic for over 20 years. I’ve built and maintained some of the most popular Emacs packages, contributed to Emacs itself, and spent countless hours tweaking my configuration. Emacs isn’t just my editor – it’s my passion, and my happy place.

Over the past year I’ve also been spending a lot of time with Vim and Neovim, relearning them from scratch and having a blast contrasting how the two communities approach similar problems. It’s been a fun and refreshing experience.1

And lately, like everyone else in our industry, I’ve been playing with AI tools – Claude Code in particular – watching the impact of AI on the broader programming landscape, and pondering what it all means for the future of programming. Naturally, I keep coming back to the same question: what happens to my beloved Emacs and its “arch nemesis” Vim in this brave new world?

I think the answer is more nuanced than either “they’re doomed” or “nothing changes”. Predicting the future is obviously hard work, but it’s so fun to speculate on it.

My reasoning is that every major industry shift presents plenty of risks and opportunities for those involved in it, so I want to spend a bit of time ruminating over the risks and opportunities for Emacs and Vim.

The Risks

The IDE gravity well

VS Code is already the dominant editor by a wide margin, and it’s going to get first-class integrations with every major AI tool – Copilot (obviously), Codex, Claude, Gemini, you name it. Microsoft has every incentive to make VS Code the best possible host for AI-assisted development, and the resources to do it.

On top of that, purpose-built AI editors like Cursor, Windsurf, and others are attracting serious investment and talent. These aren’t adding AI to an existing editor as an afterthought – they’re building the entire experience around AI workflows. They offer integrated context management, inline diffs, multi-file editing, and agent loops that feel native rather than bolted on.

Every developer who switches to one of these tools is a developer who isn’t learning Emacs or Vim keybindings, isn’t writing Elisp, and isn’t contributing to our ecosystems. The gravity well is real.

I never tried Cursor and Windsurf simply because they are essentially forks of VS Code and I can’t stand VS Code. I tried it several times over the years and I never felt productive in it for a variety of reasons.

Do you even need a “power tool” anymore?

Part of the case for Emacs and Vim has always been that they make you faster at writing and editing code. The keybindings, the macros, the extensibility – all of it is in service of making the human more efficient at the mechanical act of coding.

But if AI is writing most of your code, how much does mechanical editing speed matter? When you’re reviewing and steering AI-generated diffs rather than typing code character by character, the bottleneck shifts from “how fast can I edit” to “how well can I specify intent and evaluate output.” That’s a fundamentally different skill, and it’s not clear that Emacs or Vim have an inherent advantage there.

The learning curve argument gets harder to justify too. “Spend six months learning Emacs and you’ll be 10x faster” is a tough sell when a junior developer with Cursor can scaffold an entire application in an afternoon.2

The corporate backing asymmetry

VS Code has Microsoft. Cursor has venture capital. Emacs has… a small group of volunteers and the FSF. Vim had Bram, and now has a community of maintainers. Neovim has a small but dedicated core team.

This has always been the case, of course, but AI amplifies the gap. Building deep AI integrations requires keeping up with fast-moving APIs, models, and paradigms. Well-funded teams can dedicate engineers to this full-time. Volunteer-driven projects move at the pace of people’s spare time and enthusiasm.

The doomsday scenario

Let’s go all the way: what if programming as we know it is fully automated within the next decade? If AI agents can take a specification and produce working, tested, deployed software without human intervention, we won’t need coding editors at all. Not Emacs, not Vim, not VS Code, not Cursor. The entire category becomes irrelevant.

I don’t think this is likely in the near term, but it’s worth acknowledging as a possibility. The trajectory of AI capabilities has surprised even the optimists (and I was initially an AI skeptic, but the rapid advancements last year eventually changed my mind).

The Opportunities

AI makes configuration and extension trivial

Here’s the thing almost nobody is talking about: Emacs and Vim have always suffered from the obscurity of their extension languages. Emacs Lisp is a 1980s Lisp dialect that most programmers have never seen before. VimScript is… VimScript. Even Lua, which Neovim adopted specifically because it’s more approachable, is niche enough that most developers haven’t written a line of it.

This has been the single biggest bottleneck for both ecosystems. Not the editors themselves – they’re incredibly powerful – but the fact that customizing them requires learning an unfamiliar language, and most people never make it past copying snippets from blog posts and READMEs.

I felt incredibly overwhelmed by Elisp and VimScript when I was learning Emacs and Vim for the first time, and I imagine I wasn’t the only one. I started to feel very productive in Emacs only after putting in quite a lot of time to actually learn Elisp properly. (never bothered to do the same for VimScript, though, and admittedly I’m not too eager to master Lua either)

AI changes this overnight. You can now describe what you want in plain English and get working Elisp, VimScript, or Lua. “Write me an Emacs function that reformats the current paragraph to 72 columns and adds a prefix” – done. “Configure lazy.nvim to set up LSP with these keybindings” – done. The extension language barrier, which has been the biggest obstacle to adoption for decades, is suddenly much lower.

The same goes for plugin development

After 20+ years in the Emacs community, I often have the feeling that a relatively small group – maybe 50 to 100 people – is driving most of the meaningful progress. The same names show up in MELPA, on the mailing lists, and in bug reports. This isn’t a criticism of those people (I’m proud to be among them), but it’s a structural weakness. A community that depends on so few contributors is fragile.

And it’s not just Elisp and VimScript. The C internals of both Emacs and Vim (and Neovim’s C core) are maintained by an even smaller group. Finding people who are both willing and able to hack on decades-old C codebases is genuinely hard, and it’s only getting harder as fewer developers learn C at all.

AI tools can help here in two ways. First, they lower the barrier for new contributors – someone who understands the concept of what they want to build can now get AI assistance with the implementation in an unfamiliar language. Second, they help existing maintainers move faster. I’ve personally found that AI is excellent at generating test scaffolding, writing documentation, and handling the tedious parts of package maintenance that slow everything down.

AI integrations are already happening

The Emacs and Neovim communities aren’t sitting idle. There are already impressive AI integrations:

Emacs:

  • gptel – a versatile LLM client that supports multiple backends (Claude, GPT, Gemini, local models)
  • ellama – an Emacs interface for interacting with LLMs via llama.cpp and Ollama
  • aider.el – Emacs integration for Aider, the popular AI pair programming tool
  • copilot.el – GitHub Copilot integration (I happen to be the current maintainer of the project)
  • elysium – an AI-powered coding assistant with inline diff application
  • agent-shell – a native Emacs buffer for interacting with LLM agents (Claude Code, Gemini CLI, etc.) via the Agent Client Protocol

Neovim:

  • avante.nvim – a Cursor-like AI coding experience inside Neovim
  • codecompanion.nvim – a Copilot Chat replacement supporting multiple LLM providers
  • copilot.lua – native Copilot integration for Neovim
  • gp.nvim – ChatGPT-like sessions in Neovim with support for multiple providers

And this is just a sample. Building these integrations isn’t as hard as it might seem – the APIs are straightforward, and the extensibility of both editors means you can wire up AI tools in ways that feel native. With AI assistance, creating new integrations becomes even easier. I wouldn’t be surprised if the pace of plugin development accelerates significantly.

Terminal-native AI tools are a natural fit

Here’s an irony that deserves more attention: many of the most powerful AI coding tools are terminal-native. Claude Code, Aider, and various Copilot CLI tools all run in the terminal. And what lives in the terminal? Emacs and Vim.3

Running Claude Code in an Emacs vterm buffer or a Neovim terminal split is a perfectly natural workflow. You get the AI agent in one pane and your editor in another, with all your keybindings and tools intact. There’s no context switching to a different application – it’s all in the same environment.

This is actually an advantage over GUI-based AI editors, where the AI integration is tightly coupled to the editor’s own interface. With terminal-native tools, you get to choose your own editor and your own AI tool, and they compose naturally.

Emacs as an AI integration platform

Emacs’s “editor as operating system” philosophy is uniquely well-suited to AI integration. It’s not just a code editor – it’s a mail client (Gnus, mu4e), a note-taking system (Org mode), a Git interface (Magit), a terminal emulator, a file manager, an RSS reader, and much more.

AI can be integrated at every one of these layers. Imagine an AI assistant that can read your org-mode agenda, draft email replies in mu4e, help you write commit messages in Magit, and refactor code in your source buffers – all within the same environment, sharing context. No other editor architecture makes this kind of deep, cross-domain integration as natural as Emacs does.

Admittedly, I’ve stopped using Emacs as my OS a long time ago, and these days I use it mostly for programming and blogging. (I’m writing this article in Emacs with the help of markdown-mode) Still, I’m only one Emacs user and many are probably using it in a more holistic manner.

AI helps you help yourself

One of the most underappreciated benefits of AI for Emacs and Vim users is mundane: troubleshooting. Both editors have notoriously steep learning curves and opaque error messages. “Wrong type argument: stringp, nil” has driven more people away from Emacs than any competitor ever did.

AI tools are remarkably good at explaining cryptic error messages, diagnosing configuration issues, and suggesting fixes. They can read your init file and spot the problem. They can explain what a piece of Elisp does. They can help you understand why your keybinding isn’t working. This dramatically flattens the learning curve – not by making the editor simpler, but by giving every user access to a patient, knowledgeable guide.

I don’t really need any AI assistance to troubleshoot anything in my Emacs setup, but it’s been handy occasionally in Neovim-land, where my knowledge is relatively modest by comparison.

There’s at least one documented case of someone returning to Emacs after years away, specifically because Claude Code made it painless to fix configuration issues. They’d left for IntelliJ because the configuration burden got too annoying – and came back once AI removed that barrier. “Happy f*cking days I’m home again,” as they put it. If AI can bring back lapsed Emacs users, that’s a good thing in my book.

Even in the post-coding apocalypse, Emacs and Vim survive

Let’s revisit the doomsday scenario. Say programming is fully automated and nobody writes code anymore. Does Emacs die?

Not necessarily. Emacs is already used for far more than programming. People use Org mode to manage their entire lives – tasks, notes, calendars, journals, time tracking, even academic papers. Emacs is a capable writing environment for prose, with excellent support for LaTeX, Markdown, AsciiDoc, and plain text. You can read email, browse the web, manage files, and yes, play Tetris.

Vim, similarly, is a text editing paradigm as much as a program. Vim keybindings have colonized every text input in the computing world – VS Code, IntelliJ, browsers, shells, even Emacs (via Evil mode). Even if the Vim program fades, the Vim idea is immortal.4

And who knows – maybe there’ll be a market for artisanal, hand-crafted software one day. “Locally sourced, free-range code, written by a human in Emacs.” I’d buy that t-shirt. And I’m fairly certain those artisan programmers won’t be using VS Code.

So even in the most extreme scenario, both editors have a life beyond code. A diminished one, perhaps, but a life nonetheless.

The Bigger Picture

I think what’s actually happening is more interesting than “editors die” or “editors are fine.” The role of the editor is shifting.

For decades, the editor was where you wrote code. Increasingly, it’s becoming where you review, steer, and refine code that AI writes. The skills that matter are shifting from typing speed and editing gymnastics to specification clarity, code reading, and architectural judgment.

In this world, the editor that wins isn’t the one with the best code completion – it’s the one that gives you the most control over your workflow. And that has always been Emacs and Vim’s core value proposition.

The question is whether the communities can adapt fast enough. The tools are there. The architecture is there. The philosophy is right. What’s needed is people – more contributors, more plugin authors, more documentation writers, more voices in the conversation. AI can help bridge the gap, but it can’t replace genuine community engagement.

The ethical elephant in the room

Not everyone in the Emacs and Vim communities is enthusiastic about AI, and the objections go beyond mere technophobia. There are legitimate ethical concerns that are going to be debated for a long time:

  • Energy consumption. Training and running large language models requires enormous amounts of compute and electricity. For communities that have long valued efficiency and minimalism – Emacs users who pride themselves on running a 40-year-old editor, Vim users who boast about their sub-second startup times – the environmental cost of AI is hard to ignore.

  • Copyright and training data. LLMs are trained on vast corpora of code and text, and the legality and ethics of that training remain contested. Some developers are uncomfortable using tools that may have learned from copyrighted code without explicit consent. This concern hits close to home for open-source communities that care deeply about licensing.

  • Job displacement. If AI makes developers significantly more productive, fewer developers might be needed. This is an uncomfortable thought for any programming community, and it’s especially pointed for editors whose identity is built around empowering human programmers.

These concerns are already producing concrete action. The Vim community recently saw the creation of EVi, a fork of Vim whose entire raison d’etre is to provide a text editor free from AI integration. Whether you agree with the premise or not, the fact that people are forking established editors over this tells you how strongly some community members feel.

I don’t think these concerns should stop anyone from exploring AI tools, but they’re real and worth taking seriously. I expect to see plenty of spirited debate about this on emacs-devel and the Neovim issue tracker in the years ahead.

Closing Thoughts

The future ain’t what it used to be.

– Yogi Berra

I won’t pretend I’m not worried. The AI wave is moving fast, the incumbents have massive advantages in funding and mindshare, and the very nature of programming is shifting under our feet. It’s entirely possible that Emacs and Vim will gradually fade into niche obscurity, used only by a handful of diehards who refuse to move on.

But I’ve been hearing that Emacs is dying for 20 years, and it’s still here. The community is small but passionate, the editor is more capable than ever, and the architecture is genuinely well-suited to the AI era. Vim’s situation is similar – the core idea is so powerful that it keeps finding new expression (Neovim being the latest and most vigorous incarnation).

The editors that survive won’t be the ones with the flashiest AI features. They’ll be the ones whose users care enough to keep building, adapting, and sharing. That’s always been the real engine of open-source software, and no amount of AI changes that.

So if you’re an Emacs or Vim user: don’t panic, but don’t be complacent either. Learn the new AI tools (if you’re not fundamentally opposed to them, that is). Pimp your setup and make it awesome. Write about your workflows. Help newcomers. The best way to ensure your editor survives the AI age is to make it thrive in it.

Maybe the future ain’t what it used to be – but that’s not necessarily a bad thing.

That’s all I have for you today. Keep hacking!

  1. If you’re curious about my Vim adventures, I wrote about them in Learning Vim in 3 Steps↩︎

  2. Not to mention you’ll probably have to put in several years in Emacs before you’re actually more productive than you were with your old editor/IDE of choice. ↩︎

  3. At least some of the time. Admittedly I usually use Emacs in GUI mode, but I always use (Neo)vim in the terminal. ↩︎

  4. Even Claude Code has vim mode. ↩︎

-1:-- Emacs and Vim in the Age of AI (Post Bozhidar Batsov)--L0--C0--2026-03-09T08:30:00.000Z

Protesilaos Stavrou: This Thursday I will talk about Emacs @ OxFLOSS (FLOSS @ Oxford)

This Thursday, the 12th of March, at 20:00 Europe/Athens time I will do a live presentation of Emacs for OxFLOSS (FLOSS @ Oxford). This is an event organised by people at the University of Oxford. My goal is to introduce Emacs to a new audience by showing them a little of what it can do while describing how exactly it gives users freedom.

The presentation will be about 40 minutes long. I will then answer any questions from the audience. Anyone can participate: no registration is required. The event will be recorded for future reference. The link for the video call and further details are available here: https://ox.ogeer.org/event/computing-in-freedom-with-gnu-emacs-protesilaos-stavrou.

I will prepare a transcript for my talk. This way people can learn about my presentation without having to access the video file.

Looking forward to it!

-1:-- This Thursday I will talk about Emacs @ OxFLOSS (FLOSS @ Oxford) (Post Protesilaos Stavrou)--L0--C0--2026-03-09T00:00:00.000Z

Wai Hon: Introducing markdown-indent-mode

The Itch

One thing I particularly love about org-mode is org-indent-mode, which visually indents content under each heading based on its level. It makes long org files much easier to read and navigate.

Occasionally, I need to edit Markdown files — and every time I do, I miss that clean visual hierarchy. So I vibe-coded a first version over a weekend.

Vibe-Coding

The idea translates naturally from org-indent.el, which ships with Emacs. Headings marked with # instead of *, same concept.

The first version worked partially, but was full of edge cases: code fences confusing the heading parser, list items indenting wrong, wrapped lines losing alignment. I kept using it day-to-day, tweaking it when something looked off, and simplifying whenever I found a cleaner approach. Eventually it reached a state I was genuinely happy with.

At that point I thought it might be useful to others too, so I decided to share it.

First Attempt: Merging into markdown-mode

My first instinct was to contribute this directly to markdown-mode, the de-facto Markdown package for Emacs, so all users could get it without installing anything extra. It turns out this is a long-anticipated feature — there have been open issues requesting it for years:

I opened a pull request to add the feature directly into markdown-mode.

Going Standalone Package

The markdown-mode maintainer reviewed the PR and suggested that this would be better as a standalone package since it would be a burden for them to maintain the new feature.

I took the suggestion, refactored the code into its own package: markdown-indent-mode, and submitted it to MELPA.

The MELPA Journey

Submitting to MELPA was a learning experience in itself. I picked up a few tools along the way for checking Elisp package quality:

  • checkdoc — checks that docstrings follow Emacs conventions
  • package-lint — catches common packaging issues
  • melpazoid — an automated checker specifically designed for MELPA submissions

The MELPA maintainers are volunteers who typically review PRs on weekends, so the process took a few days.

What markdown-indent-mode Does

Here's what it does — nothing fancy, just the basics done cleanly:

  • Uses line-prefix and wrap-prefix text properties for visual indentation
  • Content under a heading is indented according to the heading level
  • Leading # symbols are hidden
  • List items are indented to align with their content
  • Everything is purely visual: no buffer content is ever modified

Installation

This package is available on MELPA. You can use use-package to install it and add hook to enable it for markdown-mode.

(use-package markdown-indent-mode
  :hook (markdown-mode . markdown-indent-mode))

Or toggle it on demand with M-x markdown-indent-mode.

Before After
markdown-indent-mode-off.png markdown-indent-mode-on.png

The source is available at https://github.com/whhone/markdown-indent-mode.

If you ever find yourself editing Markdown in Emacs, give it a try and let me know what you think!

-1:-- Introducing markdown-indent-mode (Post Wai Hon)--L0--C0--2026-03-08T23:30:00.000Z

TAONAW - Emacs and Org Mode: Display images with Org-attach and org-insert-link quickly and effectively

Suppose you have an org-mode file and want an image to appear in the buffer. The way to do that is to insert a link to the file, for example:

[[home/username/downloads/image.png]].

Then, you toggle inline images with C-c C-x C-v, and the image should display inside the org-mode buffer, provided the path in the link is correct. If you do this often in your notes as I do, you might as well just turn it on for the entire file with #+STARTUP: inlineimages at the top of your org file, with the rest of the options you have there; this way, images will always display when you load the file. This is all nice and good, and most of us org-mode users probably know that.

A common use case for a full workflow like this is attaching images to your org file. You have a file in your Downloads folder, as shown in the example above, and you want to keep the image with your org file where it belongs, rather than in Downloads, where it will be lost among other files sooner or later.

For this, as most of us know, we have org-attach (C-c C-a by default). This starts a wonderful organizational process for our files:

  1. It creates a data folder (by default) inside the folder the org-file is in if it’s not there
  2. It then gives the header (even if you don’t have one) a UUID (Universally Unique Identifier) and creates two more directories, one inside the other:
    1. The parent directory consists of the first part of the UUID
    2. The child directory consists of the rest of the UUID
  3. Lastly, the file itself will be copied into the child directory above.

For example:

./data/acde070d/8c4c-4f0d-9d8a-162843c10333/someimage.png

If you’re not used to how org-attach works, it might take some time getting used to, but it’s worth it. Images (or any file, as we will deal with soon) are kept next to the files they are associated with. Of course, org-attach is customizable, and you can change those folders and UUIDs to make them less cryptic.

For example, my init includes this:

    (setq org-id-method 'ts)
    (setq org-attach-id-to-path-function-list 
      '(org-attach-id-ts-folder-format
        org-attach-id-uuid-folder-format))

This tells org-mode to change the UUID to IOS date stamp format, so the folders under the data folder are dates, and tells org-mode to use that system (I wrote about this in length in my old blog; it is yet another post I need to bring over here).

In my case, this creates a file reference system by date: inside the data folder, each month of the year has a folder; inside those, a folder for the day and time (down to fractions of seconds) of the attachment. The beauty of org-attach is that you’re not meant to deal with the files directly. You summon the org-attach dispatcher and tell it to go to the relevant folder (C-c C-a to bring it up, then f as the option to go to that directory).

org-attach and displaying images inline are known to many org-mode users, but here comes the part I never realized:

org-attach stores the link to the file you just attached inside a variable called org-stored-link, along with other links you might have grabbed like URLs from the web (take a look with C-h C-v org-stored-links). And, even better, these links are added to your org-insert-link, ready to go when you insert a link to your file with C-c C-l.

So when you have an image ready to attach to an org file, say in your Downloads folder, you could first attach it with org-attach, and then you can call it back quickly with C-c C-l. The trick is, since this is an image link (and not just any file), is not to give it a description. By default, org-mode will suggest you describe the link as the file you attached, but inline images do not work like that, and with a description, the image will just display as a file name. In other words:

A link to an image you want to display in the org buffer should look like:

[[file:/Home/username/downloads/someimage.jpg]]

But any other file would look like:

[[file:/Home/username/downloads/somefile.jpg][description]]

By deleting the suggestion, you are effectively creating the first case, the one that is meant to display images. This is explained nicely here.

There’s more to it. As it turns out, the variable org-attach-store-link-p is responsible for the links to these files to automatically be stored in org-insert-link (you can toggle it to change this option). This is why, when you use it, your files or images will show as [[attachment:description]], without the need for the path as specified above.

I have years of muscle memory to undo, as I’m used to manually inserting the links with the full path for my images. I did not realize the links to the images I’ve attached are right there, ready for me to place into the buffer if I only delete the description.

-1:-- Display images with Org-attach and org-insert-link quickly and effectively (Post TAONAW - Emacs and Org Mode)--L0--C0--2026-03-08T17:43:25.000Z

Irreal: Abrams On Literate Programming Redux

Howard Abrams has a video and associated post that updates his post on the same subject from 11 years ago. As with yesterday’s post, the video is from EmacsConf 2024 but someone just reposted it to reddit and it’s definitely worth taking a look at.

Abrams is explicit that by “literate programming” he means using code blocks and Babel in Org mode files. It’s an extremely powerful workflow. You can execute those code blocks in place or you can use org-babel-tangle to export the code to a separate file.

The majority of his video and post discuss ways of working with those Org files. One simple example is that he uses the local variables feature of Emacs files to set a hook that automatically tangles the file every time he saves it. That keeps the parent Org file and the generated code file in sync. He also has some functions that leverage Avy to find and execute a code block without changing the point’s location.

Finally, he talks about navigating his—possibly multi-file—projects. We wants to do the usual things like jumping to a function definition or getting a list of where a function is called. Emacs, of course, has functions for that but they don’t work in Org files. So Abrams wrote his own extension to the xref API based on ideas from dumb-jump.

Abrams drew all this together with a hydra that makes it easy for him to call any of his functions. He moves a bit rapidly in the video so you might want to read the post first in order to follow along. The video is 16 minutes, 38 seconds long so plan accordingly.

-1:-- Abrams On Literate Programming Redux (Post Irreal)--L0--C0--2026-03-08T15:42:15.000Z

Rahul Juliato: Two Years of Emacs Solo: 35 Modules, Zero External Packages, and a Full Refactor

I've been maintaining Emacs Solo for a while now, and I think it's time to talk about what happened in this latest cycle as the project reaches its two-year mark.

For those who haven't seen it before, Emacs Solo is my daily-driver Emacs configuration with one strict rule: no external packages. Everything is either built into Emacs or written from scratch by me in the lisp/ directory. No package-install, no straight.el, no use-package :ensure t pointing at ELPA or MELPA. Just Emacs and Elisp. I'm keeping this post text only, but if you'd like to check how Emacs Solo looks and feels, the repository has screenshots and more details.

Why? Partly because I wanted to understand what Emacs actually gives you out of the box. Partly because I wanted my config to survive without breakage across Emacs releases. Partly because I was tired of dealing with package repositories, mirrors going down in the middle of the workday, native compilation hiccups, and the inevitable downtime when something changed somewhere upstream and my job suddenly became debugging my very long (at the time) config instead of doing actual work. And partly, honestly, because it's a lot of fun!

This post covers the recent refactor, walks through every section of the core config, introduces all 35 self-contained extra modules I've written, and shares some thoughts on what I've learned.

Now, I'll be the first to admit: this config is long. But there's a principle behind it. I only add features when they are not already in Emacs core, and when I do, I try to build them myself. That means the code is sketchy sometimes, sure, but it's in my control. I wrote it, I understand it, and when it breaks, I know exactly where to look. The refactor I'm about to describe makes this distinction crystal clear: what is "Emacs core being tweaked" versus what is "a really hacky outsider I built in because I didn't want to live without it".


The Refactor: Core vs. Extras

The single biggest change in this cycle was architectural. Emacs Solo used to be one big init.el with everything crammed together. That worked, but it had problems:

— It was hard to navigate (even with outline-mode)

— If someone wanted just one piece, say my Eshell config or my VC extensions, they had to dig through thousands of lines

— It was difficult to tell where "configuring built-in Emacs" ended and "my own hacky reimplementations" began

The solution was clean and simple: split the config into two layers.

Layer 1: init.el (Emacs core configuration)

This file configures only built-in Emacs packages and features. Every use-package block in here has :ensure nil, because it's pointing at something that ships with Emacs. This is pure, standard Emacs customization.

The idea is that anyone can read init.el, find a section they like, and copy-paste it directly into their own config. No dependencies. No setup. It just works, because it's configuring things Emacs already has.

Layer 2: lisp/ (Self-contained extra modules)

These are my own implementations: replacements for popular external packages, reimagined as small, focused Elisp files. Each one is a proper provide/require module. They live under lisp/ and are loaded at the bottom of init.el via a simple block:

(add-to-list 'load-path (expand-file-name "lisp" user-emacs-directory))
(require 'emacs-solo-themes)
(require 'emacs-solo-movements)
(require 'emacs-solo-formatter)
;; ... and so on

If you don't want one of them, just comment out the require line. If you want to use one in your own config, just copy the .el file into your own lisp/ directory and require it. That's it.

This separation made the whole project dramatically easier to maintain, understand, and share.


The Core: What init.el Configures

The init.el file is organized into clearly labeled sections (using outline-mode-friendly headers, so you can fold and navigate them inside Emacs). Here's every built-in package and feature it touches, and why.

General Emacs Settings

The emacs use-package block is the largest single section. It sets up sensible defaults that most people would want:

— Key rebindings: M-o for other-window, M-j for duplicate-dwim, C-x ; for comment-line, C-x C-b for ibuffer

— Window layout commands bound under C-x w (these are upcoming Emacs 31 features: window-layout-transpose, window-layout-rotate-clockwise, window-layout-flip-leftright, window-layout-flip-topdown)

— Named frames: C-x 5 l to select-frame-by-name, C-x 5 s to set-frame-name, great for multi-frame workflows

— Disabling C-z (suspend) because accidentally suspending Emacs in a terminal is never fun

— Sensible file handling: backups and auto-saves in a cache/ directory, recentf for recent files, clean buffer naming with uniquify

— Tree-sitter auto-install and auto-mode (treesit-auto-install-grammar t and treesit-enabled-modes t, both Emacs 31)

delete-pair-push-mark, kill-region-dwim, ibuffer-human-readable-size, all the small quality-of-life settings coming in Emacs 31

Abbrev

A full abbrev-mode setup with a custom placeholder system. You define abbreviations with ###1###, ###2### markers, and when the abbreviation expands, it prompts you to fill in each placeholder interactively. The ###@### marker tells it where to leave point after expansion. I wrote a whole article about it.

Auth-Source

Configures auth-source to use ~/.authinfo.gpg for credential storage. Simple but essential if you use Gnus, ERC, or any network-facing Emacs feature.

Auto-Revert

Makes buffers automatically refresh when files change on disk. Essential for any Git workflow.

Conf / Compilation

Configuration file mode settings and a compilation-mode setup with ANSI color support, so compiler output actually looks readable.

Window

Custom window management beyond the defaults, because Emacs window management out of the box is powerful but needs a little nudging.

Tab-Bar

Tab-bar configuration for workspace management. Emacs has had tabs since version 27, and they're genuinely useful once you configure them properly.

RCIRC and ERC

Two IRC clients, both built into Emacs, both configured. ERC gets the bigger treatment: logging, scrolltobottom, fill, match highlighting, and even inline image support (via one of the extra modules). The Emacs 31 cycle brought nice improvements here too, including a fix for the scrolltobottom/fill-wrap dependency issue.

Icomplete

This is where Emacs Solo's completion story lives. Instead of reaching for Vertico, Consult, or Helm, I use icomplete-vertical-mode, which is built into Emacs. With the right settings it's surprisingly capable:

(setq icomplete-delay-completions-threshold 0)
(setq icomplete-compute-delay 0)
(setq icomplete-show-matches-on-no-input t)
(setq icomplete-scroll t)

I've also been contributing patches upstream to improve icomplete's vertical rendering with prefix indicators. Some of those features are already landing in Emacs 31, which means the polyfill code I carry today will eventually become unnecessary.

Dired

A heavily customized Dired setup. Custom listing switches, human readable sizes, integration with system openers (open on macOS, xdg-open on Linux), and the dired-hide-details-hide-absolute-location option from Emacs 31.

WDired

Writable Dired, so you can rename files by editing the buffer directly.

Eshell

This one I'm particularly proud of. Emacs Solo's Eshell configuration includes:

Shared history across all Eshell buffers: Every Eshell instance reads from and writes to a merged history, so you never lose a command just because you ran it in a different buffer

Custom prompts: Multiple prompt styles you can toggle between with C-c t (full vs. minimal) and C-c T (lighter vs. heavier full prompt)

— A custom welcome banner with keybinding hints

— History size of 100,000 entries with deduplication

Isearch

Enhanced incremental search with sensible defaults.

VC (Version Control)

This is one of the largest sections and one I'm most invested in. Emacs's built-in vc is an incredible piece of software that most people overlook in favor of Magit. I'm not saying it replaces Magit entirely, but with the right configuration it covers 95% of daily Git operations:

Git add/reset from vc-dir: S to stage, U to unstage, directly in the vc-dir buffer. Admittedly, I almost never use this because I'm now used to the Emacs-style VC workflow: C-x v D or C-x v =, then killing what I don’t want, splitting what isn’t ready yet, and finishing with C-c C-c. Amending with C-c C-e is awesome. Still useful once or twice a semester.

Git reflog viewer: A custom emacs-solo/vc-git-reflog command with ANSI color rendering and navigation keybindings

Browse remote: C-x v B opens your repository on GitHub/GitLab in a browser; with a prefix argument it jumps to the current file and line

Jump to current hunk: C-x v = opens the diff buffer scrolled to the hunk containing your current line

Switch between modified files: C-x C-g lets you completing-read through all modified/untracked files in the current repo

Pull current branch: A dedicated command for git pull origin <current-branch>

— Emacs 31 settings: vc-auto-revert-mode, vc-allow-rewriting-published-history, vc-dir-hide-up-to-date-on-revert

Smerge / Diff / Ediff

Merge conflict resolution and diff viewing. Ediff configured to split windows sanely (side by side, not in a new frame).

Eldoc

Documentation at point, with eldoc-help-at-pt (Emacs 31) for showing docs automatically.

Eglot

The LSP client that ships with Emacs. Configured with:

— Auto-shutdown of unused servers

— No event buffer logging (for performance)

— Custom server programs, including rassumfrassum for multiplexing TypeScript + ESLint + Tailwind (I wrote a whole post about that)

— Keybindings under C-c l for code actions, rename, format, and inlay hints

— Automatic enabling for all prog-mode buffers except emacs-lisp-mode and lisp-mode

Flymake / Flyspell / Whitespace

Diagnostics, spell checking, and whitespace visualization. All built-in, all configured.

Gnus

The Emacs newsreader and email client. Configured for IMAP/SMTP usage.

Man

Manual page viewer settings.

Minibuffer

Fine-tuned minibuffer behavior, including completion-eager-update from Emacs 31 for faster feedback during completion.

Newsticker

RSS/Atom feed reader built into Emacs. Customized with some extras I build my self for dealing with youtube feeds: thumbnail, transcripts, sending to AI for a quick summary, and so on.

Electric-Pair / Paren

Auto-closing brackets and parenthesis highlighting.

Proced

Process manager (like top, but inside Emacs).

Org

Org-mode configuration, because of course.

Speedbar

File tree navigation in a side window. With Emacs 31, speedbar gained speedbar-window support, so it can live inside your existing frame instead of spawning a new one.

Time

World clock with multiple time zones, sorted by ISO timestamp (Emacs 31).

Uniquify

Buffer name disambiguation when you have multiple files with the same name open.

Which-Key

Key discovery. Built into Emacs since version 30.

Webjump

Quick web searches from the minibuffer. Configured with useful search engines.

Language Modes

Specific configurations for every language I work with, organized into three areas:

Common Lisp: inferior-lisp and lisp-mode with custom REPL interaction, evaluation commands, and a poor man's SLIME/SLY setup that actually works quite well for basic Common Lisp development.

Non-Tree-sitter: sass-mode for when tree-sitter grammars aren't available.

Tree-sitter modes: ruby-ts-mode, js-ts-mode, json-ts-mode, typescript-ts-mode, bash-ts-mode, rust-ts-mode, toml-ts-mode, markdown-ts-mode (Emacs 31), yaml-ts-mode, dockerfile-ts-mode, go-ts-mode. Each one configured with tree-sitter grammar sources (which Emacs 31 is starting to define internally, so those definitions will eventually become unnecessary).


The Extras: 35 Self-Contained Modules

This is where the fun really is. Each of these is a complete, standalone Elisp file that reimplements functionality you'd normally get from an external package. They're all in lisp/ and can be used independently.

I call them "hacky reimplementations" in the spirit of Emacs Solo: they're not trying to be feature-complete replacements for their MELPA counterparts. They're trying to be small, understandable, and good enough for daily use while keeping the config self-contained.

emacs-solo-themes

Custom color themes based on Modus. Provides several theme variants: Catppuccin Mocha, Crafters (the default), Matrix, and GITS. All built on top of Emacs's built-in Modus themes by overriding faces, so you get the accessibility and completeness of Modus with different aesthetics.

emacs-solo-mode-line

Custom mode-line format and configuration. A hand-crafted mode-line that shows exactly what I want: buffer state indicators, file name, major mode, Git branch, line/column, and nothing else. No doom-modeline, no telephone-line, just format strings and faces.

emacs-solo-movements

Enhanced navigation and window movement commands. Extra commands for moving between windows, resizing splits, and navigating buffers more efficiently.

emacs-solo-formatter

Configurable format-on-save with a formatter registry. You register formatters by file extension (e.g., prettier for .tsx, black for .py), and the module automatically hooks into after-save-hook to format the buffer. All controllable via a defcustom, so you can toggle it on and off globally.

emacs-solo-transparency

Frame transparency for GUI and terminal. Toggle transparency on your Emacs frame. Works on both graphical and terminal Emacs, using the appropriate mechanism for each.

emacs-solo-exec-path-from-shell

Sync shell PATH into Emacs. The classic macOS problem: GUI Emacs doesn't inherit your shell's PATH. This module solves it the same way exec-path-from-shell does, but in about 20 lines instead of a full package.

emacs-solo-rainbow-delimiters

Rainbow coloring for matching delimiters. Colorizes nested parentheses, brackets, and braces in different colors so you can visually match nesting levels. Essential for any Lisp, and helpful everywhere else.

emacs-solo-project-select

Interactive project finder and switcher. A completing-read interface for finding and switching between projects, building on Emacs's built-in project.el.

emacs-solo-viper-extensions

Vim-like keybindings and text objects for Viper. If you use Emacs's built-in viper-mode (the Vim emulation layer), this extends it with text objects and additional Vim-like commands. No Evil needed.

emacs-solo-highlight-keywords

Highlight TODO and similar keywords in comments. Makes TODO, FIXME, HACK, NOTE, and similar keywords stand out in source code comments with distinctive faces. A small thing that makes a big difference.

emacs-solo-gutter

Git diff gutter indicators in buffers. Shows added, modified, and deleted line indicators in the margin, like diff-hl or git-gutter. Pure Elisp, using vc-git under the hood.

emacs-solo-ace-window

Quick window switching with labels. When you have three or more windows, this overlays single-character labels on each window so you can jump to any one with a single keystroke. A minimal reimplementation of the popular ace-window package.

emacs-solo-olivetti

Centered document layout mode. Centers your text in the window with wide margins, like olivetti-mode. Great for prose writing, Org documents, or any time you want a distraction-free centered layout.

emacs-solo-0x0

Upload text and files to 0x0.st. Select a region or a file and upload it to the 0x0.st paste service. The URL is copied to your kill ring. Quick and useful for sharing snippets.

emacs-solo-sudo-edit

Edit files as root via TRAMP. Reopen the current file with root privileges using TRAMP's /sudo:: prefix. A reimplementation of the sudo-edit package.

emacs-solo-replace-as-diff

Multi-file regexp replace with diff preview. Perform a search-and-replace across multiple files and see the changes as a diff before applying them. This one turned out to be more useful than I expected.

emacs-solo-weather

Weather forecast from wttr.in. Fetches weather data from wttr.in and displays it in an Emacs buffer. Because checking the weather shouldn't require leaving Emacs.

emacs-solo-rate

Cryptocurrency and fiat exchange rate viewer. Query exchange rates and display them inside Emacs. For when you need to know how much a bitcoin is worth but refuse to open a browser tab.

emacs-solo-how-in

Query cheat.sh for programming answers. Ask "how do I do X in language Y?" and get an answer from cheat.sh displayed right in Emacs. Like howdoi but simpler.

emacs-solo-ai

AI assistant integration (Ollama, Gemini, Claude). Send prompts to AI models directly from Emacs. Supports multiple backends: local Ollama, Google Gemini, and Anthropic Claude. The response streams into a buffer. No gptel, no ellama, just url-retrieve and some JSON parsing.

emacs-solo-dired-gutter

Git status indicators in Dired buffers. Shows Git status (modified, added, untracked) next to file names in Dired, using colored indicators in the margin. Think diff-hl-dired-mode but self-contained.

emacs-solo-dired-mpv

Audio player for Dired using mpv. Mark audio files in Dired, hit C-c m, and play them through mpv. You get a persistent mpv session you can control from anywhere with C-c m. A mini music player that lives inside your file manager.

emacs-solo-icons

File type icon definitions for Emacs Solo. The icon registry that maps file extensions and major modes to Unicode/Nerd Font icons. This is the foundation that the next three modules build on.

emacs-solo-icons-dired

File type icons for Dired buffers. Displays file type icons next to file names in Dired. Uses Nerd Font glyphs.

emacs-solo-icons-eshell

File type icons for Eshell listings. Same as above but for Eshell's ls output.

emacs-solo-icons-ibuffer

File type icons for ibuffer. And again for the buffer list.

emacs-solo-container

Container management UI for Docker and Podman. A full tabulated-list-mode interface for managing containers: list, start, stop, restart, remove, inspect, view logs, open a shell. Works with both Docker and Podman. This one started small and grew into a genuinely useful tool.

emacs-solo-m3u

M3U playlist viewer and online radio player. Open .m3u playlist files, browse the entries, and play them with mpv. RET to play, x to stop. Great for online radio streams.

emacs-solo-clipboard

System clipboard integration for terminals. Makes copy/paste work correctly between Emacs running in a terminal and the system clipboard. Solves the eternal terminal Emacs clipboard problem.

emacs-solo-eldoc-box

Eldoc documentation in a child frame. Shows eldoc documentation in a floating child frame near point instead of the echo area. A reimplementation of the eldoc-box package.

emacs-solo-khard

Khard contacts browser. Browse and search your khard address book from inside Emacs. Niche, but if you use khard for contact management, this is handy.

emacs-solo-flymake-eslint

Flymake backend for ESLint. Runs ESLint as a Flymake checker for JavaScript/TypeScript files. Disabled by default now that LSP servers handle ESLint natively, but still available if you prefer the standalone approach.

emacs-solo-erc-image

Inline images in ERC chat buffers. When someone posts an image URL in IRC, this fetches and displays the image inline in the ERC buffer. A small luxury that makes IRC feel more modern.

emacs-solo-yt

YouTube search and playback with yt-dlp and mpv. Search YouTube from Emacs, browse results, and play videos (or just audio) through mpv. Because sometimes you need background music and YouTube is right there.

emacs-solo-gh

GitHub CLI interface with transient menu. A transient-based menu for the gh CLI tool. Browse issues, pull requests, run actions, all from a structured Emacs interface without memorizing gh subcommands.


Emacs 31: Looking Forward

Throughout the config you'll see comments tagged ; EMACS-31 marking features that are coming (or already available on the development branch). Some highlights:

Window layout commands: window-layout-transpose, window-layout-rotate-clockwise, and flip commands. Finally, first-class support for rearranging window layouts

Tree-sitter grammar sources defined in modes: No more manually specifying treesit-language-source-alist entries for every language

markdown-ts-mode: Tree-sitter powered Markdown, built-in

Icomplete improvements: In-buffer adjustment, prefix indicators, and better vertical rendering

Speedbar in-frame: speedbar-window lets the speedbar live inside your frame as a normal window

VC enhancements: vc-dir-hide-up-to-date-on-revert, vc-auto-revert-mode, vc-allow-rewriting-published-history

ERC fixes: The scrolltobottom/fill-wrap dependency is finally resolved

native-comp-async-on-battery-power: Don't waste battery on native compilation

kill-region-dwim: Smart kill-region behavior

delete-pair-push-mark: Better delete-pair with mark pushing

World clock sorting: world-clock-sort-order for sensible timezone display

I tag these not just for my own reference, but so that anyone reading the config can see exactly which parts will become cleaner or unnecessary as Emacs 31 stabilizes. Some of the polyfill code I carry today, particularly around icomplete, exists specifically because those features haven't landed in a stable release yet.


What I've Learned

This latest cycle of working on Emacs Solo taught me a few things worth sharing.

Emacs gives you more than you think. Every time I set out to "reimplement" something, I discovered that Emacs already had 70% of it built in. vc is far more capable than most people realize. icomplete-vertical-mode is genuinely good. tab-bar-mode is a real workspace manager. proced is a real process manager. The gap between "built-in Emacs" and "Emacs with 50 packages" is smaller than the community often assumes.

Writing your own packages is the best way to learn Elisp. I learned more about Emacs Lisp writing emacs-solo-gutter and emacs-solo-container than I did in years of tweaking other people's configs. When you have to implement something from scratch, you're forced to understand overlays, process filters, tabulated-list-mode, transient, child frames, and all the machinery that packages usually hide from you.

Small is beautiful. Most of the modules in lisp/ are under 200 lines. Some are under 50. They don't try to handle every edge case. They handle my edge cases, and that's enough. If someone else needs something different, the code is simple enough to fork and modify.

Contributing upstream is worth it. Some of the things I built as workarounds (like the icomplete vertical prefix indicators) turned into upstream patches. When you're deep enough in a feature to build a workaround, you're deep enough to propose a fix.


Conclusion

Emacs Solo started as a personal challenge: can I have a productive, modern Emacs setup without installing a single external package?

The answer, after this cycle, is a definitive yes.

Is it for everyone? Absolutely not. If you're happy with Doom Emacs or Spacemacs or your own carefully curated package list, that's great. Those are excellent choices.

But if you're curious about what Emacs can do on its own, if you want a config where you understand every line, if you want something you can hand to someone and say "just drop this into ~/.emacs.d/ and it works", then maybe Emacs Solo is worth a look.

The repository is here: https://github.com/LionyxML/emacs-solo

It's been a lot of fun. I learned more in this cycle than in any previous one. And if anyone out there finds even a single module or config snippet useful, I'd be happy.

That's the whole point, really. Sharing what works.


Acknowledgements

None of this exists in a vacuum, and I want to give proper thanks.

First and foremost, to the Emacs core team. The people who maintain and develop GNU Emacs are doing extraordinary work, often quietly, often thanklessly. Every built-in feature I configure in init.el is the result of decades of careful engineering. The fact that Emacs 31 keeps making things better in ways that matter (tree-sitter integration, icomplete improvements, VC enhancements, window layout commands) is a testament to how alive this project is.

While working on Emacs Solo I also had the opportunity to contribute directly to Emacs itself. I originally wrote markdown-ts-mode, which was later improved and integrated with the help and review of Emacs maintainers. I also contributed changes such as aligning icomplete candidates with point in the buffer (similar to Corfu or Company) and a few fixes to newsticker.

I'm very grateful for the help, reviews, patience, and guidance from people like Eli Zaretskii, Yuan Fu, Stéphane Marks, João Távora, and others on the mailing lists.

To the authors of every package that inspired a module in lisp/. Even though Emacs Solo doesn't install external packages, it is deeply influenced by them. diff-hl, ace-window, olivetti, doom-modeline, exec-path-from-shell, eldoc-box, rainbow-delimiters, sudo-edit, and many others showed me what was possible and set the bar for what a good Emacs experience looks like. Where specific credit is due, it's noted in the source code itself.

A special thanks to David Wilson (daviwil) and the System Crafters community. David's streams and videos were foundational for me in understanding how to build an Emacs config from scratch, and the System Crafters community has been an incredibly welcoming and knowledgeable group of people. The "Crafters" theme variant in Emacs Solo exists as a direct nod to that influence.

To Protesilaos Stavrou (Prot), whose work on Modus themes, Denote, and his thoughtful writing about Emacs philosophy has shaped how I think about software defaults, accessibility, and keeping things simple. The fact that Emacs Solo's themes are built on top of Modus is no coincidence.

And to Gopar (goparism), whose Emacs content and enthusiasm for exploring Emacs from the ground up resonated deeply with the spirit of this project. It's encouraging to see others who believe in understanding the tools we use.

To everyone who I probably forgot to mention, who has opened issues, suggested features, or just tried Emacs Solo and told me about it: thank you. Open source is a conversation, and every bit of feedback makes the project better.

-1:-- Two Years of Emacs Solo: 35 Modules, Zero External Packages, and a Full Refactor (Post Rahul Juliato)--L0--C0--2026-03-08T12:00:00.000Z

Emacs Redux: Customizing Font-Lock in the Age of Tree-sitter

I recently wrote about building major modes with Tree-sitter over on batsov.com, covering the mode author’s perspective. But what about the user’s perspective? If you’re using a Tree-sitter-powered major mode, how do you actually customize the highlighting?

This is another article in a recent streak inspired by my work on neocaml, clojure-ts-mode, and asciidoc-mode. Building three Tree-sitter modes across very different languages has given me a good feel for both sides of the font-lock equation – and I keep running into users who are puzzled by how different the new system is from the old regex-based world.

This post covers what changed, what you can control, and how to make Tree-sitter font-lock work exactly the way you want.

The Old World: Regex Font-Lock

Traditional font-lock in Emacs is built on regular expressions. A major mode defines font-lock-keywords – a list of regexps paired with faces. Emacs runs each regexp against the buffer text and applies the corresponding face to matches. This has worked for decades, and it’s beautifully simple.

If you wanted to customize it, you’d manipulate font-lock-keywords directly:

;; Add a custom highlighting rule in the old world
(font-lock-add-keywords 'emacs-lisp-mode
  '(("\\<\\(FIXME\\|TODO\\)\\>" 1 'font-lock-warning-face prepend)))

The downsides are well-known: regexps can’t understand nesting, they break on multi-line constructs, and getting them right for a real programming language is a never-ending battle of edge cases.

The New World: Tree-sitter Font-Lock

Tree-sitter font-lock is fundamentally different. Instead of matching text with regexps, it queries the syntax tree. A major mode defines treesit-font-lock-settings – a list of Tree-sitter queries paired with faces. Each query pattern matches node types in the parse tree, not text patterns.

This means highlighting is structurally correct by definition. A string is highlighted as a string because the parser identified it as a string node, not because a regexp happened to match quote characters. If the code has a syntax error, the parser still produces a (partial) tree, and highlighting degrades gracefully instead of going haywire.

There’s also a significant performance difference. With regex font-lock, every regexp in font-lock-keywords runs against every line in the visible region on each update – more rules means linearly more work, and a complex major mode can easily have dozens of regexps. Poorly written patterns with nested quantifiers can trigger catastrophic backtracking, causing visible hangs on certain inputs. Multi-line font-lock (via font-lock-multiline or jit-lock-contextually) makes things worse, requiring re-scanning of larger regions that’s both expensive and fragile. Tree-sitter sidesteps all of this: after the initial parse, edits only re-parse the changed portion of the syntax tree, and font-lock queries run against the already-built tree rather than scanning raw text. The result is highlighting that scales much better with buffer size and rule complexity.

The trade-off is that customization works differently. You can’t just add a regexp to a list anymore. But the new system offers its own kind of flexibility, and in many ways it’s more powerful.

Note: The Emacs manual covers Tree-sitter font-lock in the Parser-based Font Lock section. For the full picture of Tree-sitter integration in Emacs, see Parsing Program Source.

Feature Levels: The Coarse Knob

Every Tree-sitter major mode organizes its font-lock rules into features – named groups of related highlighting rules. Features are then arranged into 4 levels, from minimal to maximal. The Emacs manual recommends the following conventions for what goes into each level:

  • Level 1: The absolute minimum – typically comment and definition
  • Level 2: Key language constructs – keyword, string, type
  • Level 3: Everything that can be reasonably fontified (this is the default level)
  • Level 4: Marginally useful highlighting – things like bracket, delimiter, operator

In practice, many modes don’t follow these conventions precisely. Some put number at level 2, others at level 3. Some include variable at level 1, others at level 4. The inconsistency across modes means that setting treesit-font-lock-level to the same number in different modes can give you quite different results – which is one more reason you might want the fine-grained control described in the next section.1

It’s also worth noting that the feature names themselves are not standardized. There are many common ones you’ll see across modes – comment, string, keyword, type, number, bracket, operator, definition, function, variable, constant, builtin – but individual modes often define features specific to their language. Clojure has quote, deref, and tagged-literals; OCaml might have attribute; a markup language mode might have heading or link. Different modes also vary in how granular they get: some expose a rich set of features that let you fine-tune almost every aspect of highlighting, while others are more spartan and stick to the basics.

The bottom line is that you’ll always have to check what your particular mode offers. The easiest way is M-x describe-variable RET treesit-font-lock-feature-list in a buffer using that mode – it shows all features organized by level. You can also inspect the mode’s source directly by looking at how it populates treesit-font-lock-settings (try M-x find-library to jump to the mode’s source).

For example, clojure-ts-mode defines:

Level Features
1 comment, definition, variable
2 keyword, string, char, symbol, builtin, type
3 constant, number, quote, metadata, doc, regex
4 bracket, deref, function, tagged-literals

And neocaml:

Level Features
1 comment, definition
2 keyword, string, number
3 attribute, builtin, constant, type
4 operator, bracket, delimiter, variable, function

The default level is 3, which is a reasonable middle ground for most people. You can change it globally:

(setq treesit-font-lock-level 4)  ;; maximum highlighting

Or per-mode via a hook:

(defun my-clojure-ts-font-lock ()
  (setq-local treesit-font-lock-level 2))  ;; minimal: just keywords and strings

(add-hook 'clojure-ts-mode-hook #'my-clojure-ts-font-lock)

This is the equivalent of the old font-lock-maximum-decoration variable, but more principled – features at each level are explicitly chosen by the mode author rather than being an arbitrary “how much highlighting do you want?” dial.

Note: The Emacs manual describes this system in detail under Font Lock and Syntax.

Cherry-Picking Features: The Fine Knob

Levels are a blunt instrument. What if you want operators and variables (level 4) but not brackets and delimiters (also level 4)? You can’t express that with a single number.

Enter treesit-font-lock-recompute-features. This function lets you explicitly enable or disable individual features, regardless of level:

(defun my-neocaml-font-lock ()
  (treesit-font-lock-recompute-features
   '(comment definition keyword string number
     attribute builtin constant type operator variable)  ;; enable
   '(bracket delimiter function)))                       ;; disable

(add-hook 'neocaml-base-mode-hook #'my-neocaml-font-lock)

You can also call it interactively with M-x treesit-font-lock-recompute-features to experiment in the current buffer before committing to a configuration.

This is something that was hard to do cleanly in the old regex world. You’d have to dig into font-lock-keywords, figure out which entries corresponded to which syntactic elements, and surgically remove them. With Tree-sitter, it’s a declarative list of names.

Customizing Faces

This part works the same as before – faces are faces. Tree-sitter modes use the standard font-lock-*-face family, so your theme applies automatically. If you want to tweak a specific face:

(custom-set-faces
 '(font-lock-type-face ((t (:foreground "DarkSeaGreen4"))))
 '(font-lock-property-use-face ((t (:foreground "DarkOrange3")))))

One thing to note: Tree-sitter modes use some of the newer faces introduced in Emacs 29, like font-lock-operator-face, font-lock-bracket-face, font-lock-number-face, font-lock-property-use-face, and font-lock-escape-face. These didn’t exist in the old world (there was no concept of “operator highlighting” in traditional font-lock), so older themes may not define them. If your theme makes operators and variables look the same, that’s why – the theme predates these faces.

Adding Custom Rules

This is where Tree-sitter font-lock really shines compared to the old system. Instead of writing regexps, you write Tree-sitter queries that match on the actual syntax tree.

Say you want to distinguish block-delimiting keywords (begin/end, struct/sig) from control-flow keywords (if/then/else) in OCaml:

(defface my-block-keyword-face
  '((t :inherit font-lock-keyword-face :weight bold))
  "Face for block-delimiting keywords.")

(defun my-neocaml-block-keywords ()
  (setq treesit-font-lock-settings
        (append treesit-font-lock-settings
                (treesit-font-lock-rules
                 :language (treesit-parser-language
                            (car (treesit-parser-list)))
                 :override t
                 :feature 'keyword
                 '(["begin" "end" "struct" "sig" "object"]
                   @my-block-keyword-face))))
  (treesit-font-lock-recompute-features))

(add-hook 'neocaml-base-mode-hook #'my-neocaml-block-keywords)

The :override t is important – without it, the new rule won’t overwrite faces already applied by the mode’s built-in rules. And the :feature keyword assigns the rule to a feature group, so it respects the level/feature system.

Note: The full query syntax is documented in the Pattern Matching section of the Emacs manual – it covers node types, field names, predicates, wildcards, and more.

For comparison, here’s what you’d need in the old regex world to highlight a specific set of keywords with a different face:

;; Old world: fragile, doesn't understand syntax
(font-lock-add-keywords 'some-mode
  '(("\\<\\(begin\\|end\\|struct\\|sig\\)\\>" . 'my-block-keyword-face)))

The regex version looks simpler, but it’ll match begin inside strings, comments, and anywhere else the text appears. The Tree-sitter version only matches actual keyword nodes in the syntax tree.

Exploring the Syntax Tree

The killer feature for customization is M-x treesit-explore-mode. It opens a live view of the syntax tree for the current buffer. As you move point, the explorer highlights the corresponding node and shows its type, field name, and position.

This is indispensable when writing custom font-lock rules. Want to know what node type OCaml labels are? Put point on one, check the explorer: it’s label_name. Want to highlight it? Write a query for (label_name). No more guessing what regexp might work.

Another useful tool is M-x treesit-inspect-node-at-point, which shows information about the node at point in the echo area without opening a separate window.

The Cheat Sheet

Here’s a quick reference for the key differences:

Aspect Regex font-lock Tree-sitter font-lock
Rules defined by font-lock-keywords treesit-font-lock-settings
Matching mechanism Regular expressions on text Queries on syntax tree nodes
Granularity control font-lock-maximum-decoration treesit-font-lock-level + features
Adding rules font-lock-add-keywords Append to treesit-font-lock-settings
Removing rules font-lock-remove-keywords treesit-font-lock-recompute-features
Debugging re-builder treesit-explore-mode
Handles nesting Poorly Correctly (by definition)
Multi-line constructs Fragile Works naturally
Performance O(n) per regexp per line Incremental, only re-parses changes

Closing Thoughts

The shift from regex to Tree-sitter font-lock is one of the bigger under-the-hood changes in modern Emacs. The customization model is different – you’re working with structured queries instead of text patterns – but once you internalize it, it’s arguably more intuitive. You say “highlight this kind of syntax node” instead of “highlight text that matches this pattern and hope it doesn’t match inside a string.”

The feature system with its levels, cherry-picking, and custom rules gives you more control than the old font-lock-maximum-decoration ever did. And treesit-explore-mode makes it easy to discover what’s available.

If you haven’t looked at your Tree-sitter mode’s font-lock features yet, try M-x describe-variable RET treesit-font-lock-feature-list in a Tree-sitter buffer. You might be surprised by how much you can tweak.

  1. Writing this article has been more helpful than I expected – halfway through, I realized my own neocaml had function banished to level 4 and number promoted to level 2. Physician, heal thyself. 

-1:-- Customizing Font-Lock in the Age of Tree-sitter (Post Emacs Redux)--L0--C0--2026-03-08T08:30:00.000Z

Thanos Apollo: Gnosis: Design Mistakes

The more I use gnosis, the more I notice design faults inherited from the software I was previously using.

Using decks for themata (flashcards/questions)

Decks will be removed.

Similarly to Anki, decks provided a restriction, not a feature. A thema can only belong to one deck, many-to-one, while tags are many-to-many.

Themata in gnosis are already organized based on tags and you can customize the review algorithm per tag using gnosis-algorithm-custom-values.

Exports for “collections” will now be based on tags. I plan to add tag filters for exports, e.g +anatomy -clinical to export all themata tagged with anatomy, excluding clinical.

Separating gnosis from org-gnosis and org-gnosis-ui

org-gnosis will be merged into gnosis, using a single unified database.

The idea was to have a minimal note taking system with a separate ui package that uses the browser, recreating the workflow I had with org-roam but with support for linking themata to nodes for topic-specific reviews.

In hindsight, this separation was a design mistake:

  • Using org-gnosis without gnosis has no real benefits.
  • A separate org-gnosis-ui that recreates an obsidian-style graph in the browser provides no real benefit beyond “visualizing” the collection.
    • Using tabulated-list in emacs to display nodes, their backlinks/links and their linked themata provides actual workflow usage. You can start reviews for specific nodes from the dashboard, filter for contents and view relations, all within emacs.
  • Maintaining two separate databases for no real value.

Review by topic algorithm

Gnosis supports reviewing all themata linked to a specific node, which means themata are often reviewed before they are due. The interval calculation now uses elapsed time as the basis, but on success keeps the later of the computed/existing scheduled date. The thema’s score and review streak are still updated, so early reviews contribute to progression without distorting the schedule.

This ensures that early successful reviews cannot inflate or deflate intervals. On failure the computed interval is used directly since early failure is genuinely informative.

[New Feature] Exporting themata with their linked nodes

Gnosis exports will optionally include all the linked nodes for the exported themata. This means that current collections, like Unking, will provide ready-to-use collections where users can both review themata and browse the related material.

-1:-- Gnosis: Design Mistakes (Post Thanos Apollo)--L0--C0--2026-03-07T22:00:00.000Z

Vineet Naik: Handling Rust errors elegantly

Rust doesn't have exceptions. Instead, functions and methods have to return a special type to indicate failure. This is the Result type which holds either the computed value if everything goes well, or an error otherwise. Both, the value and error have types. So it's definition is Result<T, E>, an enum that can be instantiated using one of it's two variants Ok and Err.

1
2
3
let r1: Result<bool, String> = Ok(true);
let r2: Result<bool, String> = Ok(false);
let r2: Result<bool, String> = Err("No clue".to_string());

When a function returns a Result, it becomes mandatory for the caller to handle it, otherwise it's impossible to extract the value it holds, which is what the caller is interested in most of the time 1. The easiest way to get the value out of a Result is to call it's unwrap method, but there will be a panic in case of an error. It can be thought of as an equivalent of an "unhandled" exception.

Calling unwrap is not necessarily a bad thing though. If there's really no way to handle an error, it's better to let the process crash than behave unpredictably. Arguably, "let it crash" is a valid strategy to handle an unexpected error. But if your code is full of unwrap calls (or it's close cousin expect), it's probably out of laziness than intent.

A more respectable approach is to pattern-match the Result enum and handle both cases.

1
2
3
4
5
6
7
8
let res = Ok(true);
match res {
    Ok(x) => println!("Value is {x}"),
    Err(e) => {
        // log a warning or do something else
        println!("Error: {e}")
    },
}

As a beginner, you may find pattern matching clearer and easier to understand. But soon it gets tedious to write and verbose to read. Much of the real world rust code written by experienced devs will use convenience features that rust provides to make error handling less tedious, which is essentially the topic of this post, but we will get to that in a bit.

Like me, if you're coming to rust from a dynamic languages background, the first issue you'll probably run into is figuring out how to return different types of errors from the same function. For example, suppose you're writing a function to read a file and parse it's contents, you may have to propagate two types of errors to the caller:

  1. An std::io::Error in case of failure to read the file
  2. A custom error, indicating failure to parse a line into some internal data structure (let's take u64 for the sake of this example)

As a beginner, you may be tempted to convert both errors into String and use it as the error type.

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
17
18
19
20
21
22
use std::{fs::File, io::{BufRead, BufReader}, path::Path};

fn parse(path: &Path) -> Result<Vec<u64>, String> {
    let file = match File::open(path) {
        Ok(f) => f,
        Err(_) => return Err("Error reading the file".to_string()),
    };
    let reader = BufReader::new(file);
    let mut result = vec![];
    for line in reader.lines() {
        let num_str = match line {
            Ok(l) => l,
            Err(_) => return Err("Error reading the file".to_string()),
        };
        let num: u64 = match num_str.parse() {
            Ok(n) => n,
            Err(_) => return Err("Error parsing string to u64".to_string()),
        };
        result.push(num)
    }
    Ok(result)
}

Now any one who has written any serious software would most certainly get a bad feeling about this. There's often a need to be able to classify errors eventually, and doing that with strings is a terrible idea, especially in a typed language. A good example of the need to classify errors is a typical web app. Based on the reason for failure, you want to respond with an appropriate HTTP status code: A validation error must result in a 400 while failure to connect to the database must be a 5xx. Imagine having to match strings againstg regular expressions in order to take this decision.

I first ran into this problem in mid 2023, when LLMs had yet to gain popularity as search engines. So I googled the old-fashion way and landed on this gem of a post - https://burntsushi.net/rust-error-handling/. In particular, defining a single enum that can wrap over multiple error types was an eye opener for me.

Here's the same code with a custom defined error type:

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
enum AppError {
    Io(std::io::Error),
    ParseInt(std::num::ParseIntError),
}

fn parse(path: &Path) -> Result<Vec<u64>, AppError> {
    let file = match File::open(path) {
        Ok(f) => f,
        Err(e) => return Err(AppError::Io(e)),
    };
    let reader = BufReader::new(file);
    let mut result = vec![];
    for line in reader.lines() {
        let num_str = match line {
            Ok(l) => l,
            Err(e) => return Err(AppError::Io(e)),
        };
        let num: u64 = match num_str.parse() {
            Ok(n) => n,
            Err(e) => return Err(AppError::ParseInt(e)),
        };
        result.push(num)
    }
    Ok(result)
}

I've intentionally written it in a tedious way, by pattern-matching every single result value so that you can clearly see what's going on. It doesn't have to be this verbose. There are two features of rust that we can use to elegantly trim down this code.

  1. the map_err function combinator helps convert error, if any, from one type to another and,
  2. the question mark operator (?) helps with error propagation
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
fn parse(path: &Path) -> Result<Vec<u64>, AppError> {
    let file = File::open(path).map_err(AppError::Io)?;
    let reader = BufReader::new(file);
    let mut result = vec![];
    for line in reader.lines() {
        let num_str = line.map_err(AppError::Io)?;
        let num: u64 = num_str.parse()
            .map_err(AppError::ParseInt)?;
        result.push(num)
    }
    Ok(result)
}

The use of enums for wrapping multiple error types into a single error type and then converting specific errors to it using map_err was an aha moment for me. IIRC, I may not have read the rest of that post by burntsushi with as much attention afterwards. Having found a workable solution, I stuck to it for quite some time without realizing that I was missing out by not using the convenience features rust provides for dealing with the Result and Error types more elegantly.

And that's what this post is about. Thank you for reading this far but here is where this post actually begins! If you're just starting with rust or still getting used to it, hope you will find it useful.

Errors and Iterators

Before picking up rust, I wrote Clojure professionally for 9 years and dabbled in several flavours of lisp such as scheme, racket, emacs lisp. Naturally I was quite happy to learn about iterators and the familiar functional abstractions they provide - map, filter, fold etc. But, soon I realized that error propagation from inside the fn closure passed to Iterator::map is directly not possible as it expects a closure that returns T and not Result<T, E>. For example, the following doesn't compile:

Does not compile
 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
#[derive(Debug)]
enum AppError {
    ...,
    Custom,
}

fn process(x: u8) -> Result<u8, AppError> {
    println!("..... x = {x}");
    if x == 2 {
        return Err(AppError::Custom);
    }
    Ok(x)
}

fn main() -> Result<(), AppError> {
    let input = vec![1, 2, 3, 4];
    let result: Vec<u8> = input.into_iter()
        .map(|x| {
            process(x)?
        })
        .collect();
    println!("Result = {result:?}");
    Ok(())
}
1
2
3
4
5
6
7
error[E0277]: the `?` operator can only be used in a closure that returns `Result` or `Option` (or another type that implements `FromResidual`)
  --> src/main.rs:21:23
   |
20 |         .map(|x| {
   |              --- this function should return `Result` or `Option` to accept `?`
21 |             process(x)?
   |                       ^ cannot use the `?` operator in a closure that returns `u8`

This one compiles but an error gets collected in the result and not propagated immediately when it's encountered. Often that's not what we want.

1
2
3
4
5
6
7
8
fn main() -> Result<(), AppError> {
    let input = vec![1, 2, 3, 4];
    let result: Vec<Result<u8, String>> = input.into_iter()
        .map(process)
        .collect();
    println!("Result = {result:?}");
    Ok(())
}
1
2
3
4
5
..... x = 1
..... x = 2
..... x = 3
..... x = 4
Result = [Ok(1), Err(Custom), Ok(3), Ok(4)]

So I gave up and preferred using simple for-loops in such cases,

1
2
3
4
5
6
7
8
9
fn main() -> Result<(), AppError> {
    let input = vec![1, 2, 3, 4];
    let mut result = Vec::with_capacity(input.len());
    for i in input {
        result.push(process(i)?);
    }
    println!("Result = {result:?}");
    Ok(())
}

Turns out, there's a way to stop iteration upon encountering an error and propagate it up the call stack.

1
2
3
4
5
6
7
fn main() -> Result<(), AppError> {
    let xs = vec![1, 2, 3, 4];
    xs.into_iter()
        .map(process)
        .collect::<Result<Vec<u8>, AppError>>()?;
    Ok(())
}

This may feel like magic at first but once you understand the FromIterator trait, it makes perfect sense. This trait is already implemented for the Result type. The doc explains it quite nicely.

Takes each element in the Iterator: if it is an Err, no further elements are taken, and the Err is returned. Should no Err occur, a container with the values of each Result is returned.

So the call to collect returns exactly what's specified as the type declaration - Result<Vec<u8>, AppError> and the question mark operator can be used immediately to extract the value or propagate the error.

Errors and Option

Similarly, when using the .map method on an Option type, it's not possible to propagate the result directly from inside the closure.

Does not compile
1
2
3
4
5
6
fn main() -> Result<(), AppError> {
    let x = Some(1);
    let y = x.map(|i| process(i)?);
    println!("{y:?}");
    Ok(())
}

Sure, explicit pattern-matching works:

1
2
3
4
5
6
7
8
9
fn main() -> Result<(), AppError> {
    let x = Some(1);
    let y = match x {
        Some(i) => Some(process(i)?),
        None => None
    };
    println!("{y:?}");
    Ok(())
}

But with Option values, you may end up repeating such code many times. A more concise way is to have the closure return a Result, so you'd get Option<Result<T, E>> which can then be converted to Result<Option<T>, E> by calling the transpose method.

1
2
3
4
5
6
7
fn main() -> Result<(), AppError> {
    let x = Some(1);
    let y: Option<Result<u8, AppError>> = x.map(process);
    let z: Result<Option<u8>, AppError> = y.transpose();
    println!("{:?}", z.unwrap());
    Ok(())
}

Here I'm using multiple steps with explicit type declarations for clarity but the same can be expressed as a one-liner too - x.map(process).transpose()?. An experienced rust programmer should easily be able to recognize this pattern and understand what's going on.

My general observation is that anytime one of the arms of a match block is None => None or Err(e) => Err(e), there must be a more concise and elegant way to write it.

Implementing the Error trait

Initially it wasn't clear to me why the Error trait was important. I never had to define it for any of my custom error types. Like any other trait, the rust compiler wouldn't complain if an error type doesn't implement it. It's only when a certain part of code requires it through trait bounds, that you need to be implemented for the code to compile. It's more of a convention that strictly enforced requirement.

I didn't care about the Error trait for many months, until I attended a deep dive session about the anyhow crate at the Rust Bangalore meetup where the speaker Dhruvin Gandhi explained it in great detail.

Good news is that the Error trait provides all the methods as long as Debug and Display traits are implemented, so all you need to implement is these two traits.

Let's modify the AppError so that it implements the Error trait

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
15
16
17
18
#[derive(Debug)]
enum AppError {
    Io(std::io::Error),
    ParseInt(std::num::ParseIntError),
    Custom,
}

impl std::fmt::Display for AppError {
    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
        match self {
            AppError::Io(e) => write!(f, "IO error: {e}"),
            AppError::ParseInt(e) => write!(f, "Parsing error: {e}"),
            AppError::Custom => write!(f, "Something went wrong"),
        }
    }
}

impl std::error::Error for AppError {}

As you can see, in case of the Io and ParseInt variants we could assume that the underlying error types would have the Display trait implemented, so we could directly use it for string interpolation.

Similarly, we could directly annotate AppError with #[derive(Debug)] only because std::io::Error and std::num::ParseIntError implement Debug.

Finally, the impl block for the Error trait remains empty because, as mentioned above, all the trait methods have default implementation already provided.

Essentially, implementing the Error trait makes it easy for multiple error types to work well with each other. You'll see a concrete example towards the end of this post.

Implementing the From trait

Software is written in layers. Often, error types defined in high level code have variants wrapping over the low level error types. We've already come across this in the AppError example above - The AppError::Io variant is a wrapper for the low level std::io::Error type.

In such cases, you'll often notice the same .map_err expression repeated multiple times in high level functions. Here's an example:

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
enum AppError {
    ...,
    Db(sqlx::error::Error),
}

fn web_app_handler() -> Result<HttpResponse, AppError> {
    let x = run_query_1().map_err(AppError::Db)?;
    ...
    let y = run_query_2().map_err(AppError::Db)?;
    ...
    run_query_3().map_err(AppError::Db)?;
    ...
}

The repetition can be avoided by implementing the From trait for the higher level error

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
12
13
14
impl From<sqlx::error::Error> for AppError {
    fn from(e: sqlx::error::Error) -> Self {
        AppError::Db(e)
    }
}

fn web_app_handler() -> Result<HttpResponse, AppError> {
    let x = run_query_1()?;
    ...
    let y = run_query_2()?;
    ...
    run_query_3()?;
    ...
}

As the return type of the function is known, the ? operator takes care of converting to it by calling the corresponding From implementation.

thiserror

That brings us to the thiserror crate. When I was first searching how to return two types of errors from a single function, I came across many resources that suggested the thiserror and anyhow crates. As a beginner I felt a bit overwhelmed by both at that time. Specially since the custom enum approach worked and seemed elegant enough, why bother including additional dependencies?

But once I started implementing Error trait for my custom error types, thiserror began to make sense.

With thiserror the above AppError definition along with the Display and From implementations can be compressed into:

 1
 2
 3
 4
 5
 6
 7
 8
 9
10
11
#[derive(Debug, thiserror::Error)]
pub enum AppError {
    #[error("IO error: {0}")]
    Io(std::io::Error),
    #[error("Parsing error: {0}")]
    ParseInt(std::num::ParseIntError),
    #[error("Something went wrong")]
    Custom,
    #[error("Database error: {0}")]
    Db(#[from] sqlx::error::Error),
}

As you can see, the Display trait is generated from the annotations with "inline" error messages. And the From trait implementation is generated for variants that have the #[from] attribute. This is incredibly convenient. Implementing Error traits for all error types doesn't feel like a chore any more. I use thiserror in all my projects.

I am still not convinced about anyhow though. It feels a bit too much convenience for my comfort, where you stop caring about the individual error types altogether. I have never used it in a serious project, so I could be wrong.

Error handling for library authors

If you are implementing a library or a crate that others use in their code, there are a few things you might want to take care of. These are also applicable to error types exposed by low level code and not just external dependencies.

Always implement the Error trait for the error types that are publicly exposed by the crate. Had std::io::Error not implemented the Error trait, it wouldn't have been possible to use thiserror for the AppError enum.

In most cases, a low level error type gets converted to a higher level error type as we've seen in previous examples in this post. But sometimes it's possible that a high level error type (defined in an app for example) has to be converted to a low level type (defined in a library). This happens especially when the library exposes a trait requiring a method that returns a low level error type.

I encountered this when implementing plectrum - a crate that helps with mapping lookup/reference tables in a database with rust enums. It requires the user to implement a DataSource trait, the definition of which is:

1
2
3
4
5
6
pub trait DataSource {
    type Id: std::hash::Hash + Eq + Copy;
    fn load(
        &self,
    ) -> impl std::future::Future<Output = Result<HashMap<Self::Id, String>, plectrum::Error>> + Send;
}

Notice the return type of the load method has plectrum::Error type. In the initial version of the crate, I had provided a Sqlx(sqlx::Error) variant in the plectrum::Error enum as sqlx is my preferred SQL library. But what about those who use other libraries or ORMs?

In a later version, I added a generic DataSource variant2:

1
2
3
4
5
6
pub enum Error {
    ...
    #[error("Error when loading data from the data source: {0}")]
    DataSource(#[source] Box<dyn std::error::Error + Send + Sync>),
    ...
}

Plectrum users can now wrap any error type in plectrum::Error::DataSource variant as long as it implements the Error trait as well as the Send and Sync marker traits.

This is a good example of why implementing the Error trait is important for your custom error types.

Footnotes

1. Sometimes the caller doesn't really care about the returned value, but if the result is not "used", which essentially means handled, the compiler shows a warning. That's because the Result enum is annotated with the #[must_use] attribute. Refer Result must be used.

2. Note that here we're annotating the inner type with #[source] instead of #[from] that we saw earlier. This is because Box<dyn Error> is a generic catch-all, not a specific type we'd want auto-converted from.

-1:-- Handling Rust errors elegantly (Post Vineet Naik)--L0--C0--2026-03-07T18:30:00.000Z

Sid Kasivajhula: On “Tempo” in Text Editing

[This describes the very handy evil-execute-in-normal-state and its use from a vanilla Emacs editing state, to gain the best of both Emacs and Vim editing styles.]

Typos are incredibly common. How do you fix them?

For example, a common one I’ve noticed is in typing the name of an identifier in a programming language, like so:

custom0print|

… when I meant to type

custom-print|

In Emacs, the only way I know of to fix this is to C-b repeatedly to get to the 0 and then C-d followed by - to replace it, and then return to the end using M-f or C-e [A commenter on Reddit pointed out C-r! — a much better option if you are a vanilla Emacs user. Read on, however, as this post is more generally about using Emacs and Evil together, and uses this specific example to illustrate the more general pattern].

In Vim/Evil, you’d first Escape to Normal mode. Then you’d type F0 to find the 0 looking back from the cursor, and then r- to replace it with a dash. Then you’d return to editing at the end using something like ea or simply A.

I don’t love either of these approaches.

Emacs is tedious here, but it is at least simple and thus keeps you focused on the subject matter. And while Vim/Evil is elegant, escaping to Normal state to perform this minute edit and then returning to where you were before loses “tempo,” causing you briefly to focus on something else than your subject matter. It is a cost that is especially felt for minute edits (for larger edits, escaping to Normal state is perfectly fine as the edit does require your attention).

The good news is, we can combine them for the best of both worlds.

Here’s how I do it:

  1. Replace Insert state with vanilla Emacs.
  ;; Use Emacs keybindings when in Insert mode }:)
  (setcdr evil-insert-state-map nil)

As vanilla Emacs is designed to be a standalone editing paradigm, I find it generally more useful and powerful than Evil Insert state.

  1. Bind evil-execute-in-normal-state to a convenient key (I use C-;. By default, Evil Insert state uses C-o, but Emacs’s C-o is very useful, so it’s better to override C-; which is perhaps even easier to type as it’s in home position).
  (define-key
    ;; Insert mode's "C-o" --- roundtrip to Normal mode
    evil-insert-state-map
    (kbd "C-;")
    #'evil-execute-in-normal-state)

Now you can use C-; to enter a Normal command which of course in this case is F0. You remain in Emacs state, allowing you to C-d - and then return with M-f or C-e or whatever fits the specific case, without leaving the insertion state, keeping you focused on the task at hand and preserving your flow. This approach thus gains the efficiency of the Evil solution while still feeling light.

The C-; you’ve now bound is very handy, and this is just one example of its use. It is useful in any situation where you want to do a quick edit somewhere else without losing tempo. In such cases, Vim allows you to describe your edit in a natural editing language, while Emacs keeps you in the flow. You’d like to momentarily use the description language but otherwise keep doing what you’re already doing. As it happens, even aside from tempo considerations, in many cases using this approach is more efficient than either Emacs or Evil on their own.

Now, if you’re a purist Vim or Evil user, don’t worry! This isn’t blasphemy but is a feature that’s part of Vim itself! Evil, like Vim, bounds your edits as you would expect, so that it is functionally identical to explicitly escaping, editing, and re-entering Insert state.

My Vim tip on Living the High Life elaborates on this.

Do you use Emacs and Evil together for the best of both? What are your favorite tricks?

-1:-- On “Tempo” in Text Editing (Post Sid Kasivajhula)--L0--C0--2026-03-07T18:29:04.000Z

Irreal: A Video On The Casual Suite

Those of you that hang out on Irreal know that I think highly of Charles Choi’s Casual Suite. It’s just what you need for modes that have a lot of seldom used commands or modes that themselves are seldom used. The idea is that rather than having to remember the appropriate key sequences or command names, you merely bring up a transient menu and choose the operation from it.

That’s a bit of a simplification since Casual is actually a bit more nuanced. Back in 2024, Choi produced a video for EmacsConf 2024—which someone has just reposted—that explores some of those nuances.

The use of Casual is not an either or decision. Often times you know many of the commands of a certain mode—Dired or Calc, say—but haven’t memorized the commands for some of the more obscure or little used commands. That’s where Casual really shines. For sequences or commands you don’t remember, you simply bring up the menu with a key sequence that is the same for all casual menus and use it to select the operation you need. If you already know the sequence, you just use it without bothering with Casual at all.

Choi has a user guide for Casual that documents the current state of Casual and shows you what modes are covered. That list has grown since his video.

For those of you who care about such things, Choi has just made the Casual Suite available on NonGnu Elpa as well as Melpa. Watch the video and see if Casual may not be a worthwhile addition to your Emacs packages repertoire.

-1:-- A Video On The Casual Suite (Post Irreal)--L0--C0--2026-03-07T15:34:40.000Z

Protesilaos Stavrou: Emacs: four new themes are coming to the ‘doric-themes’

I am developing four new themes for my doric-themes package:

  • doric-almond (light)
  • doric-coral (light)
  • doric-magma (dark)
  • doric-walnut (dark)

Each of them has its own character, while they all retain the minimalist Doric style. Below are some screenshots. Remember that these themes use few colours, relying on typography to establish a visual rhythm.

doric-almond

doric-almond theme sample

doric-almond theme sample

doric-almond theme sample

doric-almond theme sample

doric-coral

doric-coral theme sample

doric-coral theme sample

doric-coral theme sample

doric-coral theme sample

doric-magma

doric-magma theme sample

doric-magma theme sample

doric-magma theme sample

doric-magma theme sample

doric-walnut

doric-walnut theme sample

doric-walnut theme sample

doric-walnut theme sample

doric-walnut theme sample

Coming in version 1.1.0

All four themes are in development. I may still make some minor refinements to them, though I have already defined their overall appearance. If you like the minimalism of the doric-themes, I think you will appreciate these new additions.

Sources

The Doric themes use few colours and will appear monochromatic in many contexts. They are my most minimalist themes. Styles involve the careful use of typographic features and subtleties in colour gradients to establish a consistent rhythm.

If you want maximalist themes in terms of colour, check my ef-themes package. For something in-between, which I would consider the best “default theme” for a text editor, opt for my modus-themes.

-1:-- Emacs: four new themes are coming to the ‘doric-themes’ (Post Protesilaos Stavrou)--L0--C0--2026-03-07T00:00:00.000Z

Michal Sapka: FAQ

Sometimes people as me questions. Not frequently, but they do.

Who are you?

A human. It's sad that it's no longer given. I can't solve CAPTCHAs, so maybe it's wrong? Still, my name is Michał, and I am a nerd from Kraków, Poland. I am into niche and old things, like anime, BSD, or Emacs. I earn my bread as a senior software engineer in a SASS company specializing in Ruby on Rails. After work, I am a husband and a father - the only reason I ever leave my home because couch is love.

What is this site?

This is one of the things your grandfather told you about: a personal website. Not a social media, not a Google YouTube Channel, not a Discord server. It's just a place where I write about things which interest me, and some even people read it. I write mostly about speculative fiction, FreeBSD, or Emacs. Weblog is the unstructed part where everything can land, as long it maintains my fleeting interest for long enough to become a text.

What's your viewership?

I haven't got the slightest clue. I don't have any tracking here, and logs are kind of useless to determine if a given request was ever seen by a human. Most of my readers tend to use an RSS reader, so it's even harder to tell. People do contact me from time to time, so they exist.

Why do you have a website?

For the fun of it. I've had multiple website over the span of 30 years, and it was always fun. Somehow, always, some community exists and I become part of it.

How is it built?

This site is built my custom-built static site renderer. I tried Hugo, but I got fed up with trying to do everything in their templating engine. Having ready access to a codebase is always fun! You can find the msite on Codeberg

Where is it hosted?

This site is served from my living room. Hosting a static site is simple: you just need a: computer, connection, dynamic DNS provider and some file serving software. In my case, it's a bit complex as it's running on a Synology, which I dislike. Therefore, there's a FreeBSD virtual machine running on it - Synology OS is only serving multimedia at this point. On the VM, I have two Jails: a generic reverse proxy, and the actual site one. For both, the reverse proxy and file serving, I use caddy. For the Dynamic DNS, I have recently migrated to 1984.

Doesn't this kill your internet connection?

No. Sure, LLMs scrapers shamelessly hit the server, but it's nothing my domestic connection can't handle. You could DDOS me, by why would you? I'm a nice guy! Outside of bots, the actual traffic is insignificant, sadly.

-1:-- FAQ (Post Michal Sapka)--L0--C0--2026-03-06T22:37:06.000Z

So emacs plus (through homebrew on macOS) keeps giving me this error: Invalid function: org-element-with-disabled-cache.

Does anyone know what this is about, and why it’s happening? No issue with Emacs on Linux (same config) or when I had emacsformacos (same config)

-1:--  (Post TAONAW - Emacs and Org Mode)--L0--C0--2026-03-06T19:36:36.000Z

Irreal: Transposing Things

Bozhidar Batsov is back with another great post on recent additions to Emacs, This time it’s about transposing things in Emacs. Most experienced Emacs users are familiar with transpose-chars and transpose-words. I’ve heard it suggested the that the use of these is mostly restricted to “power users” but I’ve been using them almost as long as I’ve been using Emacs. Perry Metzger made a comment on Irreal—I can’t find it now—extolling transpose-chars and how much more efficient it is than making the switch by hand.

I use those two commands all the time, usually several times a day. Batsov says that another useful transposition command is transpose-sentences but that few people discover it. I found it simply by assuming that it must exist and using Meta+x to find it. There’s also transpose lines, which I seldom use because it usually offers surprising results in visual-line-mode.

In addition to these commands there are some others. Take a look a Batsov’s post for more details. The point of his post is how much more useful some of these commands are when combined with Tree-sitter. The transpose-sexps command, which I always thought of as useful mostly for Lisp languages, gains a lot of power when Tree-sitter is brought to bear to define what “a balanced pair” means in whatever mode you’re using in the current buffer.

Finally, there is transpose-paragraphs. While I’ve never used it, it’s obviously useful for prose editing but gains new powers with Tree-sitter that can give a meaning to “paragraph” in the mode of the current buffer.

Every Emacs user should take a look at Batsov’s post. There’s a lot of good information in it.

-1:-- Transposing Things (Post Irreal)--L0--C0--2026-03-06T15:38:39.000Z

Emacs Redux: Mastering Compilation Mode

I’ve been using Emacs for over 20 years. I’ve always used M-x compile and next-error without thinking much about them – you run a build, you jump to errors, life is good. But recently, while working on neocaml (a Tree-sitter-based OCaml major mode), I had to write a custom compilation error regexp and learned that compile.el is far more sophisticated and extensible than I ever appreciated.

This post is a deep dive into compilation mode – how it works, how to customize it, and how to build on top of it.

The Basics

If you’re not already using M-x compile, start today. It runs a shell command, captures the output in a *compilation* buffer, and parses error messages so you can jump directly to the offending source locations.

The essential keybindings in a compilation buffer:

Keybinding Command What it does
g recompile Re-run the last compilation command
M-n compilation-next-error Move to the next error message
M-p compilation-previous-error Move to the previous error message
RET compile-goto-error Jump to the source location of the error at point
C-c C-f next-error-follow-minor-mode Auto-display source as you move through errors

But the real power move is using next-error and previous-error (M-g n and M-g p) from any buffer. You don’t need to be in the compilation buffer – Emacs tracks the last buffer that produced errors and jumps you there. This works across compile, grep, occur, and any other mode that produces error-like output.

Pro tip: M-g M-n and M-g M-p do the same thing as M-g n / M-g p but are easier to type since you can hold Meta throughout.

How Error Parsing Actually Works

Here’s the part that surprised me. Compilation mode doesn’t have a single regexp that it tries to match against output. Instead, it has a list of regexp entries, and it tries all of them against every line. The list lives in two variables:

  • compilation-error-regexp-alist – a list of symbols naming active entries
  • compilation-error-regexp-alist-alist – an alist mapping those symbols to their actual regexp definitions

Emacs ships with dozens of entries out of the box – for GCC, Java, Ruby, Python, Perl, Gradle, Maven, and many more. You can see all of them with:

(mapcar #'car compilation-error-regexp-alist-alist)

Each entry in the alist has this shape:

(SYMBOL REGEXP FILE LINE COLUMN TYPE HYPERLINK HIGHLIGHT...)

Where:

  • REGEXP – the regular expression to match
  • FILE – group number (or function) for the filename
  • LINE – group number (or cons of start/end groups) for the line
  • COLUMN – group number (or cons of start/end groups) for the column
  • TYPE – severity: 2 = error, 1 = warning, 0 = info (can also be a cons for conditional severity)
  • HYPERLINK – group number for the clickable portion
  • HIGHLIGHT – additional faces to apply

The TYPE field is particularly interesting. It can be a cons cell (WARNING-GROUP . INFO-GROUP), meaning “if group N matched, it’s a warning; if group M matched, it’s info; otherwise it’s an error.” This is how a single regexp can handle errors, warnings, and informational messages.

A Real-World Example: OCaml Errors

Let me show you what I built for neocaml. OCaml compiler output looks like this:

File "foo.ml", line 10, characters 5-12:
10 |   let x = bad_value
              ^^^^^^^
Error: Unbound value bad_value

Warnings:

File "foo.ml", line 3, characters 6-7:
3 | let _ x = ()
          ^
Warning 27 [unused-var-strict]: unused variable x.

And ancillary locations (indented 7 spaces):

File "foo.ml", line 5, characters 0-20:
5 | let f (x : int) = x
    ^^^^^^^^^^^^^^^^^^^^
       File "foo.ml", line 10, characters 6-7:
10 |   f "hello"
          ^
Error: This expression has type string but ...

One regexp needs to handle all of this. Here’s the (slightly simplified) entry:

(push `(ocaml
        ,neocaml--compilation-error-regexp
        3                                    ; FILE = group 3
        (4 . 5)                              ; LINE = groups 4-5
        (6 . neocaml--compilation-end-column) ; COLUMN = group 6, end via function
        (8 . 9)                              ; TYPE = warning if group 8, info if group 9
        1                                    ; HYPERLINK = group 1
        (8 font-lock-function-name-face))    ; HIGHLIGHT group 8
      compilation-error-regexp-alist-alist)

A few things worth noting:

  • The COLUMN end position uses a function instead of a group number. OCaml’s end column is exclusive, but Emacs expects inclusive, so neocaml--compilation-end-column subtracts 1.
  • The TYPE cons (8 . 9) means: if group 8 matched (Warning/Alert text), it’s a warning; if group 9 matched (7-space indent), it’s info; otherwise it’s an error. Three severity levels from one regexp.
  • The entry is registered globally in compilation-error-regexp-alist-alist because *compilation* buffers aren’t in any language-specific mode. Every active entry is tried against every line.

Adding Your Own Error Regexp

You don’t need to be writing a major mode to add your own entry. Say you’re working with a custom linter that outputs:

[ERROR] src/app.js:42:10 - Unused import 'foo'
[WARN] src/app.js:15:3 - Missing return type

You can teach compilation mode about it:

(with-eval-after-load 'compile
  (push '(my-linter
          "^\\[\\(ERROR\\|WARN\\)\\] \\([^:]+\\):\\([0-9]+\\):\\([0-9]+\\)"
          2 3 4 (1 . nil))
        compilation-error-regexp-alist-alist)
  (push 'my-linter compilation-error-regexp-alist))

The TYPE field (1 . nil) means: “if group 1 matches, it’s a warning” – but wait, group 1 always matches. The trick is that compilation mode checks the content of the match. Actually, let me correct myself. The TYPE field should be a number or expression. A cleaner approach:

(with-eval-after-load 'compile
  (push '(my-linter
          "^\\[\\(?:ERROR\\|\\(WARN\\)\\)\\] \\([^:]+\\):\\([0-9]+\\):\\([0-9]+\\)"
          2 3 4 (1))
        compilation-error-regexp-alist-alist)
  (push 'my-linter compilation-error-regexp-alist))

Here group 1 only matches for WARN lines (it’s inside a non-capturing group with an alternative). TYPE is (1) meaning “if group 1 matched, it’s a warning; otherwise it’s an error.”

Now M-x compile with your linter command will highlight errors and warnings differently, and next-error will jump right to them.

Useful Variables You Might Not Know

A few compilation variables that are worth knowing:

;; OCaml (and some other languages) use 0-indexed columns
(setq-local compilation-first-column 0)

;; Scroll the compilation buffer to follow output
(setq compilation-scroll-output t)

;; ... or scroll until the first error appears
(setq compilation-scroll-output 'first-error)

;; Skip warnings and info when navigating with next-error
(setq compilation-skip-threshold 2)

;; Auto-close the compilation window on success
(setq compilation-finish-functions
      (list (lambda (buf status)
              (when (string-match-p "finished" status)
                (run-at-time 1 nil #'delete-windows-on buf)))))

The compilation-skip-threshold is particularly useful. Set it to 2 and next-error will only stop at actual errors, skipping warnings and info messages. Set it to 1 to also stop at warnings but skip info. Set it to 0 to stop at everything.

The Compilation Mode Family

Compilation mode isn’t just for compilers. Several built-in modes derive from it:

  • grep-modeM-x grep, M-x rgrep, M-x lgrep all produce output in a compilation-derived buffer. Same next-error navigation, same keybindings.
  • occur-modeM-x occur isn’t technically derived from compilation mode, but it participates in the same next-error infrastructure.
  • flymake/flycheck – uses compilation-style error navigation under the hood.

The grep family deserves special mention. M-x rgrep is recursive grep with file-type filtering, and it’s surprisingly powerful for a built-in tool. The results buffer supports all the same navigation, and you can even edit results and write changes back to the original files. M-x occur has had this built-in for a long time via occur-edit-mode (just press e in the *Occur* buffer). For grep, the wgrep package has been the go-to solution, but starting with Emacs 31 there will be a built-in grep-edit-mode as well. That’s a multi-file search-and-replace workflow that rivals any modern IDE, no external tools required.

Building a Derived Mode

The real fun begins when you create your own compilation-derived mode. Let’s build one for running RuboCop (a Ruby linter and formatter). RuboCop’s emacs output format looks like this:

app/models/user.rb:10:5: C: Style/StringLiterals: Prefer single-quoted strings
app/models/user.rb:25:3: W: Lint/UselessAssignment: Useless assignment to variable - x
app/models/user.rb:42:1: E: Naming/MethodName: Use snake_case for method names

The format is FILE:LINE:COLUMN: SEVERITY: CopName: Message where severity is C (convention), W (warning), E (error), or F (fatal).

Here’s a complete derived mode:

(require 'compile)

(defvar rubocop-error-regexp-alist
  `((rubocop-offense
     ;; file:line:col: S: Cop/Name: message
     "^\\([^:]+\\):\\([0-9]+\\):\\([0-9]+\\): \\(\\([EWFC]\\)\\): "
     1 2 3 (5 . nil)
     nil (4 compilation-warning-face)))
  "Error regexp alist for RuboCop output.
Group 5 captures the severity letter: E/F = error, W/C = warning.")

(define-compilation-mode rubocop-mode "RuboCop"
  "Major mode for RuboCop output."
  (setq-local compilation-error-regexp-alist
              (mapcar #'car rubocop-error-regexp-alist))
  (setq-local compilation-error-regexp-alist-alist
              rubocop-error-regexp-alist))

(defun rubocop-run (&optional directory)
  "Run RuboCop on DIRECTORY (defaults to project root)."
  (interactive)
  (let ((default-directory (or directory (project-root (project-current t)))))
    (compilation-start "rubocop --format emacs" #'rubocop-mode)))

A few things to note:

  • define-compilation-mode creates a major mode derived from compilation-mode. It inherits all the navigation, font-locking, and next-error integration for free.
  • We set compilation-error-regexp-alist and compilation-error-regexp-alist-alist as buffer-local. This means our mode only uses its own regexps, not the global ones. No interference with other tools.
  • compilation-start is the workhorse – it runs the command and displays output in a buffer using our mode.
  • The TYPE field (5 . nil) means: if group 5 matched, check its content – but actually, here all lines match group 5. The subtlety is that compilation mode treats a non-nil TYPE group as a warning. To distinguish E/F from W/C, you’d need a predicate or two separate regexp entries. For simplicity, this version treats everything as an error, which is usually fine for a linter.

You could extend this with auto-fix support (rubocop -A), or a sentinel function that sends a notification when the run finishes:

(defun rubocop-run (&optional directory)
  "Run RuboCop on DIRECTORY (defaults to project root)."
  (interactive)
  (let ((default-directory (or directory (project-root (project-current t))))
        (compilation-finish-functions
         (cons (lambda (_buf status)
                 (message "RuboCop %s" (string-trim status)))
               compilation-finish-functions)))
    (compilation-start "rubocop --format emacs" #'rubocop-mode)))

Side note: RuboCop actually ships with a built-in emacs output formatter (that’s what --format emacs uses above), so its output already matches Emacs’s default compilation regexps out of the box – no custom mode needed. I used it here purely to illustrate how define-compilation-mode works. In practice you’d just M-x compile RET rubocop --format emacs and everything would Just Work.1

If you want a real, battle-tested rubocop-mode rather than rolling your own, check out rubocop-emacs. It provides commands for running RuboCop on the current file, project, or directory, with proper compilation mode integration. Beyond compilation mode, RuboCop is also supported out of the box by both Flymake (via ruby-flymake-rubocop in Emacs 29+) and Flycheck (via the ruby-rubocop checker), giving you real-time feedback as you edit without needing to run a manual compilation at all.

In practice, most popular development tools already have excellent Emacs integration, so you’re unlikely to need to write your own compilation-derived mode any time soon. The last ones I incorporated into my workflow were ag.el and deadgrep.el – both compilation-derived modes for search tools – and even those have been around for years. Still, understanding how compilation mode works under the hood is valuable for the occasional edge case and for appreciating just how much the ecosystem gives you for free.

next-error is not really an error

There is no spoon.

– The Matrix

The most powerful insight about compilation mode is that it’s not really about compilation. It’s about structured output with source locations. Any tool that produces file/line references can plug into this infrastructure, and once it does, you get next-error navigation for free. The name compilation-mode is a bit of a misnomer – something like structured-output-mode would be more accurate. But then again, naming is hard, and this one has 30+ years of momentum behind it.

This is one of Emacs’s great architectural wins. Whether you’re navigating compiler errors, grep results, test failures, or linter output, the workflow is the same: M-g n to jump to the next problem. Once your fingers learn that pattern, it works everywhere.

I used M-x compile for two decades before I really understood the machinery underneath. Sometimes the tools you use every day are the ones most worth revisiting.

That’s all I have for you today. In Emacs we trust!

  1. Full disclosure: I may know a thing or two about RuboCop’s Emacs formatter. 

-1:-- Mastering Compilation Mode (Post Emacs Redux)--L0--C0--2026-03-06T09:30:00.000Z

Irreal: Choi’s Casual Suite Is Now Available on NonGnu ELPA

After a considerable amount of back and forth on the Emacs Devel Mailing List, Charles Choi’s Casual Suite has been accepted for inclusion on the NonGnu ELPA repository. If you’re already a Casual user, this makes no substantive difference. You can still get the suite from MELPA as you always have but it will also be available on the NonGnu repository.

Evidently Choi decided to seek NonGnu placement because some folks object to MELPA on philosophical grounds. I’m not sure I understand what the issue is other than MELPA is not officially canonized by the FSF.

To me, that’s just like insisting on a small init.el on religiously grounds. It merely means that you can’t take advantage of all Emacs has to offer. Whatever the FSF’s feeling on the issue, MELPA is, in fact, the premier ELPA repository and almost every package worth having is represented there, and many of them only there.

Still, choices are good and now we have more of them. If, for whatever reason, you have an objection to MELPA, you can still take advantage of the Casual Suite.

-1:-- Choi’s Casual Suite Is Now Available on NonGnu ELPA (Post Irreal)--L0--C0--2026-03-05T15:17:51.000Z

Chris Maiorana: Toward the Org Mode future: distributed notebooks

You often see questions like this pop up in places around the Emacs community: how do you take notes in Org Mode?

It’s a good question, most often framed with reference to programs like Obsidian, and the pros and cons of Emacs as a “knowledge management” system.

Integrated apps like Obsidian are great for the general population who may not require the deeper levels of customization that makes Org Mode worth learning.

I’d say Org Mode has more personality, so far as software can have such a thing.

There are surely as many ways to “org note” as there are Org Mode users.

But I would like to ask a question rarely asked in our community: should you publish your notes to the web? Or, at least, write notes in such a way that you could publish them—even if you prefer to keep them private?

This is something I’ve been thinking about for a while: Org Mode distributed notebooks.

The org-to-html export functionality is ideally suited for this.

Likewise, org supports hyperlinking, image assets, and pretty much everything you would need to create rich online notebooks. You would simply need a web host or take advantage of free hosting via GitHub pages.

This is very web 1.0, but I think that’s what I like about it.

Why do this?

I like this idea of the “Feynman technique”: learning hard things by teaching yourself via notebook entries.

Do you really understand a concept if you are unable to explain it?

Think about it. Writing your notes in such a fashion that they could teach someone helps you make sure your writing makes sense.

By the way, if you like topics like these you would love my books, Emacs For Writers and Git For Writers.

I’ve been experimenting with this “distributed notebook” idea by writing music theory notes in Org Mode and exporting them as linked HTML files. They’re definitely not ready to distribute yet, but getting there.

On the plus side, it’s nice reading my notes like a website I can bookmark and quickly reference without having to open up Emacs.

What are your thoughts? I’d like to see more oddball Org Mode ideas like this pick up momentum. Feel free to leave me a comment below on how you like to org note.

The post Toward the Org Mode future: distributed notebooks appeared first on Chris Maiorana.

-1:-- Toward the Org Mode future: distributed notebooks (Post Chris Maiorana)--L0--C0--2026-03-05T05:00:24.000Z

Protesilaos Stavrou: I talk with Joshua Blais about Emacs and life issues

Raw link: https://www.youtube.com/watch?v=1vMlGFELajQ

I had a ~2-hour chat with Joshua Blais, a fellow Emacs user, over at the @JoshuaBlais YouTube channel: https://www.youtube.com/@JoshuaBlais. We covered Emacs at length and also talked about general life issues.

The first topic we cover is how to place constraints on yourself in order to avoid backsliding into bad habits. This ties in to the themes of discipline and productivity that we discuss in some further length.

Joshua asks me how I got into Emacs and how I started writing/maintaining packages for it. We talk about how Emacs provides for an integrated computing experience. Learning Emacs Lisp allows you to have better control of Emacs.

In this light we comment on Guix and how it is also configurable in a dialect of Lisp. Joshua is using Nix and I learn more about that experience.

Coming back to Emacs, we comment on its relationship to the Unix philosophy. I think Emacs is compatible with Unix. Though my main point is how Emacs empowers us to use the computer in a productive way. It augments the experience.

Simple living and financial independence is another topic we cover. Joshua wants to know how I approach this issue. I explain how it is a matter of controlling your wants. Figure out what the parts of your lifestyle that you would not sacrifice. Then you know how much money you need for that lifestyle.

Joshua makes a connection of the simple life to Emacs and Unix tools. I comment on that as well. Once you start using Emacs and friends, you learn to appreciate the essentials. This you can then apply to other parts of your life.

We move to note-taking, where I comment about Denote. I explain how it is a file-naming scheme, which can also be used to write notes. What matters is how well we can retrieve information. Joshua explains how pen and paper helps him express his thoughts.

Learning on your own is our next point. Being an autodidact myself, I comment how it empowers you. You are able to have initiative and be more independent.

We then explore how things have infinite depth. This is how everything in the world is connected. This also relates to the point about the simple living, since you can have relatively few things that you keep understanding in depth.

Joshua asks me about discipline. This is a capacity we can build up. I give some examples.

Next on our list are mechanical keyboards. Joshua and I are using a split keyboard.

Then we explore the theme of using tools the right way. One example is the Internet as a whole. Another is with LLMs. It helps to know “why am I doing this”, as then you can understand when you are meeting your goals and when you are moving away from them. We explore this in further depth.

I comment on a common mistake we make where we think that the complex must be sophisticated and profound. Whereas there is profundity in simplicity.

We connect the dots through all these as we wrap things up.

Thanks to Joshua Blais for this chat. I had a good time and wish him all the best!

-1:-- I talk with Joshua Blais about Emacs and life issues (Post Protesilaos Stavrou)--L0--C0--2026-03-05T00:00:00.000Z

Thanos Apollo: Gnosis 0.8.0 Release Notes

I’m excited to announce the release of version 0.8.0 for gnosis.

Finally this project is now evolving into the “all-in-one” knowledge system I’ve envisioned. This version allows for viewing of linked nodes of org-gnosis and their linked themata in a tabulated-list via the gnosis-dashboard with excellent performance.

No more external browsers needed to view my collection. This means the org-gnosis-ui project should be considered deprecated. All functionalities can now be performed within Emacs.

Additionally, I’ve published a gnosis-deck specifically designed for fellow medical students, called Unking, based on the last free version of the Anking deck, Anking V11. This deck is quite comprehensive, featuring over 40,000 themata.

As Gnosis already fulfills my original intentions and beyond, further development will primarily be focused on bug fixes and general maintenance rather than new major features. My priority will now shift towards using the system for my exams rather than continuing active development.

It has been really fun and rewarding working on this project. My goal for learning to program has been to make a knowledge system and to not rely on any 3rd parties for my studying.

Which I’ve finally achieved:) This wouldn’t be possible without all the work that many of you have put into the Emacs ecosystem, so I just want to I extend my heartfelt gratitude to all of you, thank you!

New features

  • Auto input-method detection: gnosis detects the script of the expected answer (Greek, Cyrillic, etc.) and activates the appropriate input method during review. Configured via gnosis-script-input-method-alist.
  • Change thema type and deck via gnosis-update-thema.
  • Dashboard bulk-link action for currently displayed themata.
  • Dashboard header-line with entry count and context.
  • Asynchronous deck import with gnosis-import-deck-async.
  • Demo deck included in decks/demo.org.

Performance

  • New gnosis-tl module replaces tabulated-list-print for dashboard rendering (3-4x faster).
  • Progressive async rendering for large dashboards.
  • Batch-fetch review data instead of per-thema queries.

Bug fixes

  • Fix anagnosis event calculation in the algorithm.
  • Fix cloze tag removal for edge cases and mc-cloze type.
  • Fix vc-pull to run migrations after pull.
-1:-- Gnosis 0.8.0 Release Notes (Post Thanos Apollo)--L0--C0--2026-03-04T22:00:00.000Z

Sacha Chua: Expanding yasnippets by voice in Emacs and other applications

Yasnippet is a template system for Emacs. I want to use it by voice. I'd like to be able to say things like "Okay, define interactive function" and have that expand to a matching snippet in Emacs or other applications. Here's a quick demonstration of expanding simple snippets:

Screencast of expanding snippets by voice in Emacs and in other applications

Transcript
  • 00:00 So I've defined some yasnippets with names that I can say. Here, for example, in this menu, you can see I've got "define interactive function" and "with a buffer that I'll display." And in fundamental mode, I have some other things too. Let's give it a try.
  • 00:19 I press my shortcut. "Okay, define an interactive function." You can see that this is a yasnippet. Tab navigation still works.
  • 00:33 I can say, "OK, with a buffer that I'll display," and it expands that also.
  • 00:45 I can expand snippets in other applications as well, thanks to a global keyboard shortcut.
  • 00:50 Here, for example, I can say, "OK, my email." It inserts my email address.
  • 01:02 Yasnippet definitions can also execute Emacs Lisp. So I can say, "OK, date today," and have that evaluated to the actual date.
  • 01:21 So that's an example of using voice to expand snippets.

This is handled by the following code:

(defun my-whisper-maybe-expand-snippet (text)
  "Add to `whisper-insert-text-at-point'."
  (if (and text
           (string-match
            "^ok\\(?:ay\\)?[,\\.]? \\(.+\\)" text))
    (let* ((name
            (downcase
             (string-trim
              (replace-regexp-in-string "[,\\.]" "" (match-string 1 text)))))
           (matching
            (seq-find (lambda (o)
                        (subed-word-data-compare-normalized-string-distance
                         name
                         (downcase (yas--template-name o))))
                      (yas--all-templates (yas--get-snippet-tables)))))
      (if matching
          (progn
            (if (frame-focus-state)
                (progn
                  (yas-expand-snippet matching)
                  nil)
              ;; In another application
              (with-temp-buffer
                (yas-minor-mode)
                (yas-expand-snippet matching)
                (buffer-string))))
        text))
    text))

This code relies on my fork of whisper.el, which lets me specify a list of functions for whisper-insert-text-at-point. (I haven't asked for upstream review yet because I'm still testing things, and I don't know if it actually works for anyone else yet.) It does approximate matching on the snippet name using a function from subed-word-data.el which just uses string-distance. I could probably duplicate the function in my config, but then I'd have to update it in two places if I come up with more ideas.

The code for inserting into other functions is defined in my-whisper-maybe-type, which is very simple:

(defun my-whisper-maybe-type (text)
  "If Emacs is not the focused app, simulate typing TEXT.
Add this function to `whisper-insert-text-at-point'."
  (when text
    (if (frame-focus-state)
        text
      (make-process :name "xdotool" :command
                    (list "xdotool" "type"
                          text))
      nil)))

Someday I'd like to provide alternative names for snippets. I also want to make it easy to fill in snippet fields by voice. I'd love to be able to answer minibuffer questions from yas-choose-value, yas-completing-read, and other functions by voice too. Could be fun!

Related:

This is part of my Emacs configuration.
View Org source for this post

You can e-mail me at sacha@sachachua.com.

-1:-- Expanding yasnippets by voice in Emacs and other applications (Post Sacha Chua)--L0--C0--2026-03-04T16:17:40.000Z

James Cherti: pathaction.el: An Emacs package for executing pathaction rules, the universal Makefile for the entire filesystem

Build Status License

The pathaction.el Emacs package provides an interface for executing .pathaction.yaml rules directly from Emacs through the pathaction cli, a flexible tool for running commands on files and directories.

Think of pathaction like a Makefile for your entire filesystem. It uses a .pathaction.yaml file to figure out which command to run, and you can even use Jinja2 templating to make those commands dynamic. You can also use tags to define multiple actions for the exact same file type, like setting up one tag to run a script, and another to debug it.

This tool is for software developers who manage multiple projects across diverse ecosystems and want to eliminate the cognitive load of switching between different build tools, environment configurations, and deployment methods. Just run one single command on any file and trust that it gets handled correctly.

If this package helps your workflow, please show your support by ⭐ starring pathaction.el on GitHub to help more software developers discover its benefits.

Requirements

Installation

To install pathaction from MELPA:

  1. If you haven’t already done so, add MELPA repository to your Emacs configuration.

  2. Add the following code to your Emacs init file to install pathaction from MELPA:

(use-package pathaction
  :ensure t
  :config
  (add-to-list 'display-buffer-alist '("\\*pathaction:"
                                       (display-buffer-at-bottom)
                                       (window-height . 0.33))))

Usage

Allow the directory

By default, pathaction does not read rule-set files such as .pathaction.yaml from arbitrary directories. The target directory must be explicitly permitted.

For example, to allow Pathaction to load .pathaction.yaml rules from ~/projects and its subdirectories, run the following command:

pathaction --allow-dir ~/projects

Run

To execute the pathaction action that is tagged with main, you can call the following Emacs function:

(pathaction-run "main")
  • pathaction-run: This is the main function for triggering pathaction actions.
  • "main": This is the tag used to identify a specific action. The tag you provide to the function determines which set of actions will be executed. In this case, "main" refers to the actions that are specifically tagged with this name.

Edit the pathaction.yaml file

To edit the pathaction.yaml file, use the following function, which will prompt you to select one of the pathaction.yaml files in the parent directories:

(pathaction-edit)

Customization

Making pathaction open a window under the current one?

To configure pathaction to open its window under the current one, you can use the display-buffer-alist variable to customize how the pathaction buffer is displayed. Specifically, you can use the display-buffer-at-bottom action, which will display the buffer in a new window at the bottom of the current frame.

Here’s the code to do this:

(add-to-list 'display-buffer-alist '("\\*pathaction:"
                                     (display-buffer-at-bottom)
                                     (window-height . 0.33)))

Hooks

  • pathaction-before-run-hook: This hook is executed by pathaction-run before the pathaction command is executed. By default, it calls the save-some-buffers function to prompt saving any modified buffers:
    (setq pathaction-before-run-hook '(save-some-buffers))
  • pathaction-after-create-buffer-hook: This hook is executed after the pathaction buffer is created. It runs from within the pathaction buffer, enabling further customization or actions once the buffer is available.

Saving all buffers before executing pathaction

By default, pathaction-before-run-hook only calls a function to save the current buffer before executing actions or commands that affect the current or any other edited buffer.

To make pathaction save all buffers, use the following configuration:

(defun my-save-some-buffers ()
  "Prevent `save-some-buffers' from prompting by passing 1 to it."
  (save-some-buffers))

(add-hook 'pathaction-before-run-hook #'my-save-some-buffers)

(If you want to prevent save-some-buffers from prompting the user before saving, replace (save-some-buffers) with (save-some-buffers t).)

Author and License

The pathaction Emacs package has been written by James Cherti and is distributed under terms of the GNU General Public License version 3, or, at your choice, any later version.

Copyright (C) 2025-2026 James Cherti

This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program.

Links

Other Emacs packages by the same author:

  • compile-angel.el: Speed up Emacs! This package guarantees that all .el files are both byte-compiled and native-compiled, which significantly speeds up Emacs.
  • outline-indent.el: An Emacs package that provides a minor mode that enables code folding and outlining based on indentation levels for various indentation-based text files, such as YAML, Python, and other indented text files.
  • easysession.el: Easysession is lightweight Emacs session manager that can persist and restore file editing buffers, indirect buffers/clones, Dired buffers, the tab-bar, and the Emacs frames (with or without the Emacs frames size, width, and height).
  • vim-tab-bar.el: Make the Emacs tab-bar Look Like Vim’s Tab Bar.
  • elispcomp: A command line tool that allows compiling Elisp code directly from the terminal or from a shell script. It facilitates the generation of optimized .elc (byte-compiled) and .eln (native-compiled) files.
  • tomorrow-night-deepblue-theme.el: The Tomorrow Night Deepblue Emacs theme is a beautiful deep blue variant of the Tomorrow Night theme, which is renowned for its elegant color palette that is pleasing to the eyes. It features a deep blue background color that creates a calming atmosphere. The theme is also a great choice for those who miss the blue themes that were trendy a few years ago.
  • Ultyas: A command-line tool designed to simplify the process of converting code snippets from UltiSnips to YASnippet format.
  • dir-config.el: Automatically find and evaluate .dir-config.el Elisp files to configure directory-specific settings.
  • flymake-bashate.el: A package that provides a Flymake backend for the bashate Bash script style checker.
  • flymake-ansible-lint.el: An Emacs package that offers a Flymake backend for ansible-lint.
  • inhibit-mouse.el: A package that disables mouse input in Emacs, offering a simpler and faster alternative to the disable-mouse package.
  • quick-sdcv.el: This package enables Emacs to function as an offline dictionary by using the sdcv command-line tool directly within Emacs.
  • enhanced-evil-paredit.el: An Emacs package that prevents parenthesis imbalance when using evil-mode with paredit. It intercepts evil-mode commands such as delete, change, and paste, blocking their execution if they would break the parenthetical structure.
  • stripspace.el: Ensure Emacs Automatically removes trailing whitespace before saving a buffer, with an option to preserve the cursor column.
  • persist-text-scale.el: Ensure that all adjustments made with text-scale-increase and text-scale-decrease are persisted and restored across sessions.
  • kirigami.el: The kirigami Emacs package offers a unified interface for opening and closing folds across a diverse set of major and minor modes in Emacs, including outline-mode, outline-minor-mode, outline-indent-minor-mode, org-mode, markdown-mode, vdiff-mode, vdiff-3way-mode, hs-minor-mode, hide-ifdef-mode, origami-mode, yafolding-mode, folding-mode, and treesit-fold-mode. With Kirigami, folding key bindings only need to be configured once. After that, the same keys work consistently across all supported major and minor modes, providing a unified and predictable folding experience.
-1:-- pathaction.el: An Emacs package for executing pathaction rules, the universal Makefile for the entire filesystem (Post James Cherti)--L0--C0--2026-03-04T16:13:26.000Z

Irreal: Expand Region Reimagined

Bozhidar Batsov has been blogging up a storm on recent additions to Emacs that make our editing sessions easier. His latest post is about a successor to expand-region, expreg. Batsov has been a heavy user of expand-region for many years. The problem with it is that it requires hand written functions for each language that it supports.

As a result of his recent work integrating Tree-sitter into many of his packages, he thought it would make sense to reimplement expand-region as a Tree-sitter based package so that bespoke functions wouldn’t have to be written for each language. It turns out that Yuan Fu, who implemented the Emacs builtin Tree-sitter, had already done it.

Batsov loves expreg and doesn’t see why anyone who isn’t depending on some artifact of expand-region wouldn’t adopt it. Take a look at his post for more details.

In the comments over at the Emacs subreddit Karthink says that he has a constellation of packages that depend on expand-region and that it would be very hard for him to change. Spartanork says that he had a hard time getting Tree-sitter installed. Those two comments suggest that expreg may not be for everyone.

I’ve had expand-region installed for a long time but I don’t use it as much as I should. At this point in my life, virtually all my coding is done in either Elisp or C, for which expand-region works well. It also does the right thing in a prose buffer so I see no reason to change but if you’re using a lot of languages, you should definitely check out expreg.

-1:-- Expand Region Reimagined (Post Irreal)--L0--C0--2026-03-04T15:37:35.000Z

James Dyer: Ollama Buddy - Web Search Integration

One of the fundamental limitations of local LLMs is their knowledge cutoff - they don’t know about anything that happened after their training data ended. The new web search integration in ollama-buddy solves this by fetching current information from the web and injecting it into your conversation context. Ollama has a specific API web search section, so it has now been activated!

Here is a demonstration:

https://www.youtube.com/watch?v=05VzAajH404

The web search feature implements a multi-stage pipeline that transforms search queries into clean, LLM-friendly context, your search query is sent to Ollama’s Web Search API, the API returns structured search results with URLs and snippets.

I have decided that each URL by default is fetched and processed through Emacs’ built-in eww and shr HTML rendering, but this can of course be configured, set ollama-buddy-web-search-content-source to control how content is retrieved:

  • `eww’ (default): Fetch each URL and render through eww/shr for clean text
  • `api’: Use content returned directly from Ollama API (faster, less refined)

The shr (Simple HTML Renderer) library does an excellent job of converting HTML to readable plain text, stripping ads, navigation, and other noise, so I thought why not just use this rather than the return results from the ollama API, as they didn’t seem to be particularly accurate.

The cleaned text is formatted with org headings showing the source URL and attached to your conversation context, so when you send your next prompt, the search results are automatically included in the context. The LLM can now reason about current information as if it had this knowledge all along.

There are multiple ways to search; firstly, is inline @search() syntax in your prompts (gradually expanding the inline prompting language!), so for example:

What are the key improvements in @search(Emacs 31 new features)?
Compare @search(Rust async programming) with @search(Go concurrency model)

ollama-buddy automatically detects these markers, executes the searches, attaches the results, and then sends your prompt, so you can carry out multiple searches.

You can also manual Search and Attach, Use C-c / a (or M-x ollama-buddy-web-search-attach)

The search executes, results are attached to your session, and the ♁1 indicator appears in the header line and the results can be viewed from the attachments menu, so for example would display something like:

* Web Searches (1)
** latest Emacs 31 features
*** 1. Hide Minor Modes in the Modeline in Emacs 31
*** 2. New Window Commands For Emacs 31
*** 3. Latest version of Emacs (GNU Emacs FAQ)
*** 4. bug#74145: 31.0.50; Default lexical-binding to t
*** 5. New in Emacs 30 (GNU Emacs FAQ)

with each header foldable, containing the actual search results.

There is a little configuration required to go through the ollama API, first, get an API key from https://ollama.com/settings/keys (it’s free). Then configure:

(use-package ollama-buddy
 :bind
 ("C-c o" . ollama-buddy-role-transient-menu)
 ("C-c O" . ollama-buddy-transient-menu-wrapper)
 :custom
 ;; Required: Your Ollama web search API key
 (ollama-buddy-web-search-api-key "your-api-key-here"))

For clarification, the content source options are as follows:

The ollama-buddy-web-search-content-source variable controls how content is retrieved:

eww (default, recommended)

Fetches each URL and renders HTML through Emacs’ eww/shr. Produces cleaner, more complete content but requires additional HTTP requests.

Pros:

  • Much cleaner text extraction
  • Full page content, not just snippets
  • Removes ads, navigation, clutter
  • Works with any website

Cons:

  • Slightly slower (additional HTTP requests)
  • Requires network access for each URL

api (experimental)

Uses content returned directly from the Ollama API without fetching individual URLs. Faster but content quality depends on what the API provides.

Pros:

  • Faster (single API call)
  • Less network traffic

Cons:

  • Content may be truncated
  • Quality varies by source
  • May miss important context

I strongly recommend sticking with eww - the quality difference is substantial.

By default, web search fetches up to 5 URLs with 2000 characters per result. This provides rich context without overwhelming the LLM’s context window.

For longer research sessions, you can adjust:

(setq ollama-buddy-web-search-max-results 10) ;; More sources
(setq ollama-buddy-web-search-snippet-length 5000) ;; Longer excerpts

Be mindful of your LLM’s context window limits. With 5 results at 2000 chars each, you’re adding ~10K characters to your context.

The web search integration fundamentally expands what your local LLMs can do. They’re no longer limited to their training data - they can reach out, fetch current information, and reason about it just like they would with any other context, so hopefully this will now make ollama-buddy a little more useful

-1:-- Ollama Buddy - Web Search Integration (Post James Dyer)--L0--C0--2026-03-04T09:34:00.000Z

The Emacs Cat: Emacs, Software Development, and LLM

Part 1. Emacs and Software Development

Just a couple of days ago, I published on GitHub my first open-source project, the TEC library, a lightweight and fast C++ framework designed for efficient performance in resource-constrained concurrent environments.

This project marks an important milestone for me as a die-hard Emacs user because the entire development was carried out completely in Emacs without using any specialized IDE.

The following Emacs packages and tools were used:

  • Writing code: eglot for code completion, built-in ibuffer for buffer grouping and navigation, and imenu-list for code navigation in the current buffer (see my post on C++ Programming in Emacs).
  • Keeping notes, ideas, ToDo lists, and any project-related textual information in Org Mode.
  • Debugging with dap-mode.
  • Testing samples with tracing output in Eshell.
  • Writing various Markdown documents for publishing on GitHub.
  • Many other things, like planning in Agenda.

This has proved that it is possible to effectively manage all the stages of the software development process without leaving Emacs. I’m really happy with that.

Part 2. Software Development and LLM

My TEC project announcement at Reddit’s r/cpp has been rejected.

The reason: the entire project is LLM-generated.

No doubt, I was using an LLM to generate detailed Doxygen comments in my existing manually written C++ code for the following reasons: 1) I’m too lazy to write such detailed comments; 2) I’m sure that the API documentation should be as detailed as possible because it will be used by other people, not just me.

I thought it would be obvious that the source code is manually crafted—the repository contains 124 commits since 2022 (before the LLM epoch) with all my mistakes, bugs, and rejected ideas that anyone can trace easily—and it should be clear that no LLM has been used.

I was wrong. The entire project, including the source code, is an LLM slop, they say.

I’m afraid this situation entails unpleasant consequences and raises serious questions.

Should developers now prove that their code is not LLM-generated? In what way?

LLMs can—and do—get trained on public repositories. How can someone prove that their code is manually crafted if an LLM has already been trained on it?

We are living in tough times.

— The Emacs Cat.

-1:-- Emacs, Software Development, and LLM (Post The Emacs Cat)--L0--C0--2026-03-04T08:16:11.000Z

Emacs Redux: Transpose All The Things

Most Emacs users know C-t to swap two characters and M-t to swap two words. Some know C-x C-t for swapping lines. But the transpose family goes deeper than that, and with tree-sitter in the picture, things get really interesting.

Let’s take a tour.

The Classics

The three transpose commands everyone knows (or should know):

Keybinding Command What it does
C-t transpose-chars Swap the character before point with the one after
M-t transpose-words Swap the word before point with the one after
C-x C-t transpose-lines Swap the current line with the one above

These are purely textual – they don’t care about syntax, language, or structure. They work the same in an OCaml buffer, an email draft, or a shell script. Simple and reliable.

One thing worth noting: transpose-lines is often used not for literal transposition but as a building block for moving lines up and down.

One caveat: transpose-lines doesn’t play well with visual-line-mode. Since visual-line-mode wraps long lines visually without inserting actual newlines, what looks like several lines on screen may be a single buffer line. transpose-lines operates on real (logical) lines, so it can end up swapping far more text than you expected. This is one of the reasons I’m not a fan of visual-line-mode – it subtly breaks commands that operate on lines. If you must use visual-line-mode, your best workaround is to fall back to transpose-sentences or transpose-paragraphs (which rely on punctuation and blank lines rather than newlines), or temporarily disable visual-line-mode with M-x visual-line-mode before transposing.

The Overlooked Ones

Here’s where it gets more interesting. Emacs has two more transpose commands that most people never discover:

transpose-sentences (no default keybinding)

This swaps two sentences around point. In text modes, a “sentence” is determined by sentence-end (typically a period followed by whitespace). In programming modes… well, it depends. More on this below.

transpose-paragraphs (no default keybinding)

Swaps two paragraphs. A paragraph is separated by blank lines by default. Less useful in code, but handy when editing prose or documentation.

Neither command has a default keybinding, which probably explains why they’re so obscure. If you write a lot of prose in Emacs, binding transpose-sentences to something convenient is worth considering.

The MVP: transpose-sexps

C-M-t (transpose-sexps) is the most powerful of the bunch. It swaps two “balanced expressions” around point. What counts as a balanced expression depends on the mode:

In Lisp modes, a sexp is what you’d expect – an atom, a string, or a parenthesized form:

;; Before: point after `bar`
(foo bar| baz)
;; C-M-t →
(foo baz bar)

In other programming modes, “sexp” maps to whatever the mode considers a balanced expression – identifiers, string literals, parenthesized groups, function arguments:

# Before: point after `arg1`
def foo(arg1|, arg2):
# C-M-t →
def foo(arg2, arg1):
(* Before: point after `two` *)
foobar two| three
(* C-M-t → *)
foobar three two

This is incredibly useful for reordering function arguments, swapping let bindings, or rearranging list elements. The catch is that “sexp” is a Lisp-centric concept, and in non-Lisp languages the results can sometimes be surprising – the mode has to define what constitutes a balanced expression, and that definition doesn’t always match your intuition.

How Tree-sitter Changes Things

Tree-sitter gives Emacs a full abstract syntax tree (AST) for every buffer, and this fundamentally changes how structural commands work.

Sexp Navigation and Transposition

On Emacs 30+, tree-sitter major modes can define a sexp “thing” in treesit-thing-settings. This tells Emacs which AST nodes count as balanced expressions. When this is configured, transpose-sexps (C-M-t) uses treesit-transpose-sexps under the hood, walking the parse tree to find siblings to swap instead of relying on syntax tables.

The result is more reliable transposition in languages where syntax-table-based sexp detection struggles. OCaml’s nested match arms, Python’s indentation-based blocks, Go’s composite literals – tree-sitter understands them all.

That said, the Emacs 30 implementation of treesit-transpose-sexps has some rough edges (it sometimes picks the wrong level of the AST). Emacs 31 rewrites the function to work more reliably.1

Sentence Navigation and Transposition

This is where things get quietly powerful. On Emacs 30+, tree-sitter modes can also define a sentence thing in treesit-thing-settings. In a programming context, “sentence” typically maps to top-level or block-level statements – let bindings, type definitions, function definitions, imports, etc.

Once a mode defines this, M-a and M-e navigate between these constructs, and transpose-sentences swaps them:

(* Before *)
let x = 42
let y = 17

(* M-x transpose-sentences → *)
let y = 17
let x = 42
# Before
import os
import sys

# M-x transpose-sentences →
import sys
import os

This is essentially “transpose definitions” or “transpose statements” for free, with no custom code needed beyond the sentence definition.

Beyond the Built-ins

If the built-in transpose commands aren’t enough, several packages extend the concept:

combobulate is the most comprehensive tree-sitter structural editing package. Its combobulate-drag-up (M-P) and combobulate-drag-down (M-N) commands swap the current AST node with its previous or next sibling. This is like transpose-sexps but more predictable – it uses tree-sitter’s sibling relationships directly, so it works consistently for function parameters, list items, dictionary entries, HTML attributes, and more.

For simpler needs, packages like drag-stuff and move-text provide line and region dragging without tree-sitter awareness. They’re less precise but work everywhere.

Wrapping Up

Here’s the complete transpose family at a glance:

Keybinding Command Tree-sitter aware?
C-t transpose-chars No
M-t transpose-words No
C-x C-t transpose-lines No
C-M-t transpose-sexps Yes (Emacs 30+)
(unbound) transpose-sentences Indirectly (Emacs 30+)
(unbound) transpose-paragraphs No

The first three are textual workhorses that haven’t changed much in decades. transpose-sexps has been quietly upgraded by tree-sitter into something much more capable. And transpose-sentences is the sleeper hit – once your tree-sitter mode defines what a “sentence” is in your language, you get structural statement transposition for free.

That’s all I have for you today. Keep hacking!

  1. See bug#60655 for the gory details. 

-1:-- Transpose All The Things (Post Emacs Redux)--L0--C0--2026-03-04T08:00:00.000Z

Chris Maiorana: The Emacs Way: Copying Files

Table of Contents

Copying files is one of those everyday operations you will do constantly. Whether it’s backing up a config before you mess with it or duplicating a template to start a new project, cp is probably one of the first commands any of us learned.

Copying files can be hazardous if you don’t know what you’re doing. Personally, I find it’s often easier to just copy and paste something with Thunar. Having a GUI helps, but for scripting or more complicated copy actions you got to get your command line judo going or open your master Emacs editor.

In Emacs, copying is handled through dired, which gives you a visual, interactive approach to the same operation. You can see exactly what you’re copying and where it’s going, which can save you from those moments where you accidentally overwrite something you didn’t mean to.

As in the previous article in the series, we are going to compare and contrast the UNIXy way of copying files with the Emacs way. Is one way better than the other? I present both, and leave it up to you to decide.

UNIXy Way

The cp command is straightforward. Give it a source and a destination, and it does exactly what you’d expect. Add -r for directories, and -v if you want confirmation of what happened.

cp source.txt destination.txt
Copy a single file
cp -r source_dir/ destination_dir/
Copy a directory recursively
cp -v notes.txt ~/backups/
Copy with verbose output

Example output:

$ cp -v notes.txt ~/backups/
'notes.txt' -> '/home/user/backups/notes.txt'

$ cp -rv projects/ /tmp/projects-backup/
'projects/' -> '/tmp/projects-backup/'
'projects/main.py' -> '/tmp/projects-backup/main.py'
'projects/utils.py' -> '/tmp/projects-backup/utils.py'

Emacs Way

In dired, copying is a two-step process: mark the files you want, then tell Emacs where to put them. If you have two dired buffers open side by side, Emacs is smart enough to suggest the other buffer’s directory as the default destination, which mimics the two-panel file manager workflow.

  • C-x d to open dired
  • Mark file with m
  • Press C to copy
  • Enter destination path
  • Or: M-x copy-file and enter source/dest paths

Example dired buffer:

  /home/user/documents:
  drwxr-xr-x  2 user user 4096 Jan  9 10:30 .
  -rw-r--r--  1 user user 1234 Jan  8 14:22 notes.txt
  -rw-r--r--  1 user user 8765 Jan  7 09:15 report.txt
* -rw-r--r--  1 user user 2345 Jan  6 16:45 todo.txt 
  Copy report.txt to: ~/backups/report.txt

The * indicates a marked file. After pressing C, Emacs prompts for the destination in the minibuffer. You can also mark multiple files and copy them all in one go, something that’s just as easy with cp but here you get the visual confirmation of exactly which files you’ve selected.


As always, thank you for reading. If you have any comments or questions feel free to drop them below. Also, check out my downloadable DRM-free eBooks:

The post The Emacs Way: Copying Files appeared first on Chris Maiorana.

-1:-- The Emacs Way: Copying Files (Post Chris Maiorana)--L0--C0--2026-03-04T05:00:00.000Z

Alvaro Ramirez: Bending Emacs - Episode 13: agent-shell charting

Time for a new Bending Emacs episode. This one is a follow-up to Episode 12, where we explored Claude Skills as emacs-skills.

Bending Emacs Episode 13: agent-shell + Claude Skills + Charts

This time around, we look at inline image rendering in agent-shell and how it opens the door to charting. I added a handful of new charting skills to emacs-skills: /gnuplot, /mermaid, /d2, and /plantuml.

The agent extracts or fetches data from context, generates the charting code, saves it as a PNG, and agent-shell renders it inline. Cherry on top: the generated charts match your Emacs theme colors by querying them via emacsclient --eval.

Hope you enjoyed the video!

Want more videos?

Liked the video? Please let me know. Got feedback? Leave me some comments.

Please like my video, share with others, and subscribe to my channel.

As an indie dev, I now have a lot more flexibility to build Emacs tools and share knowledge, but it comes at the cost of not focusing on other activities that help pay the bills. If you benefit or enjoy my work please consider sponsoring.

-1:-- Bending Emacs - Episode 13: agent-shell charting (Post Alvaro Ramirez)--L0--C0--2026-03-04T00:00:00.000Z

Meta Redux: What’s Next for clojure-mode?

Good news, everyone! clojure-mode 5.22 is out with many small improvements and bug fixes!

While TreeSitter is the future of Emacs major modes, the present remains a bit more murky – not everyone is running a modern Emacs or an Emacs built with TreeSitter support, and many people have asked that “classic” major modes continue to be improved and supported alongside the newer TS-powered modes (in our case – clojure-ts-mode). Your voices have been heard! On Bulgaria’s biggest national holiday (Liberation Day), you can feel liberated from any worries about the future of clojure-mode, as it keeps getting the love and attention that it deserves! Looking at the changelog – 5.22 is one of the biggest releases in the last few years and I hope you’ll enjoy it!1

Now let me walk you through some of the highlights.

edn-mode Gets Some Love

edn-mode has always been the quiet sibling of clojure-mode – a mode for editing EDN files that was more of an afterthought than a first-class citizen. That changed with 5.21 and the trend continues in 5.22. The mode now has its own dedicated keymap with data-appropriate bindings, meaning it no longer inherits code refactoring commands that make no sense outside of Clojure source files. Indentation has also been corrected – paren lists in EDN are now treated as data (which they are), not as function calls.

Small things, sure, but they add up to a noticeably better experience when you’re editing config files, test fixtures, or any other EDN data.

Font-locking Updated for Clojure 1.12

Font-locking has been updated to reflect Clojure 1.12’s additions – new built-in dynamic variables and core functions are now properly highlighted. The optional clojure-mode-extra-font-locking package covers everything from 1.10 through 1.12, including bundled namespaces and clojure.repl forms.2 Some obsolete entries (like specify and specify!) have been cleaned up as well.

On a related note, protocol method docstrings now correctly receive font-lock-doc-face styling, and letfn binding function names get proper font-lock-function-name-face treatment. These are the kind of small inconsistencies that you barely notice until they’re fixed, and then you wonder how you ever lived without them.

Discard Form Styling

A new clojure-discard-face has been added for #_ reader discard forms. By default it inherits from the comment face, so discarded forms visually fade into the background – exactly what you’d expect from code that won’t be read. Of course, you can customize the face to your liking.

Notable Bug Fixes

A few fixes that deserve a special mention:

  • clojure-sort-ns no longer corrupts non-sortable forms – previously, sorting a namespace that contained :gen-class could mangle it. That’s fixed now.
  • clojure-thread-last-all and line comments – the threading refactoring command was absorbing closing parentheses into line comments. Not anymore.
  • clojure-update-ns works again – this one had been quietly broken and is now restored to full functionality.
  • clojure-add-arity preserves arglist metadata – when converting from single-arity to multi-arity, metadata on the argument vector is no longer lost.

The Road Ahead

So, what’s actually next for clojure-mode? The short answer is: more of the same. clojure-mode will continue to receive updates, bug fixes, and improvements for the foreseeable future. There is no rush for anyone to switch to clojure-ts-mode, and no plans to deprecate the classic mode anytime soon.

That said, if you’re curious about clojure-ts-mode, its main advantage right now is performance. TreeSitter-based font-locking and indentation are significantly faster than the regex-based approach in clojure-mode. If you’re working with very large Clojure files and noticing sluggishness, it’s worth giving clojure-ts-mode a try. My guess is that most people won’t notice a meaningful difference in everyday editing, but your mileage may vary.

The two modes will coexist for as long as it makes sense. Use whichever one works best for you – they’re both maintained by the same team (yours truly and co) and they both have a bright future ahead of them. At least I hope so!

As usual - big thanks to everyone supporting my Clojure OSS work, especially the members of Clojurists Together! You rock!

That’s all I have for you today. Keep hacking!

  1. I also hope I didn’t break anything. :-) 

  2. I wonder if anyone’s using this package, though. For me CIDER’s font-locking made it irrelevant a long time ago. 

-1:-- What’s Next for clojure-mode? (Post Meta Redux)--L0--C0--2026-03-03T21:30:00.000Z

Charles Choi: Casual now available on NonGNU ELPA

If you are an Emacs user who only uses 3rd party packages from ELPA or NonGNU ELPA, I’m happy to announce that Casual is now available on NonGNU ELPA. 🎉

If this is the first time you’ve heard of Casual, it is a project to re-imagine the primary user interface for Emacs using keyboard-driven menus. Casual’s design intent is to make the vast feature set of Emacs easier to discover and use in casual fashion. It does so by providing bespoke hand-crafted menus for different modes provided by Emacs. These menus are opinionated in that the design of what goes into those menus are editorially determined by yours truly. To understand more about what Casual has to offer, please peruse its User Guide.

In terms of implementation, Casual is built using the Transient library made by Jonas Bernoulli. This is the same library that powers the UI for Magit.

Development of Casual has been on-going for nearly two years now. Interested readers can read about Casual’s progress over that time from my blog posts here. Throughout it all I’ve learned a lot about Emacs, its ecosystem of modes, and the powerful features they bring. It has only reinforced my conviction that Casual makes these powerful features more usable beyond what is offered by default in Emacs.

To clarify, if you still get Casual from MELPA (or MELPA Stable), you do not have to change anything. The only difference is that users can now choose to install Casual from either MELPA or NonGNU ELPA. Updates to Casual will be distributed on both equally.

My thanks goes out to the NonGNU ELPA reviewers who have provided guidance in helping get Casual on there. Additional thanks goes out to all the Casual users whose input and support have kept me going at this since 2024.

-1:-- Casual now available on NonGNU ELPA (Post Charles Choi)--L0--C0--2026-03-03T21:10:00.000Z

Randy Ridenour: A Sordid Tale of Text Editors

A Sordid Tale of Text Editors

May 24, 2012 00:00

For years now, It seems like I have been searching for the perfect text editor. Evidently, there are those who believe that there is one single perfect text editor for everyone. If you need evidence, just search the Internet for “Vim vs. Emacs.” My experience, though, leads to me believe that, when it comes to text editors, like many other tools, perfection is relative to the individual. What tool is needed depends on how the individual writer works and thinks.

I used word processors (beginning with WordPerfect 5.1, which I still think is the best, it’s been all downhill for word processors ever since) until I started writing LaTeX documents. I began with TeXShop, which comes bundled with MacTeX. TeXShop is an excellent resource for writing LaTeX. It’s free, powerful, and I recommend it with no reservations at all.

I don’t know why I began experimenting with other editors. I surely didn’t need any other editor to edit LaTeX. It may have been when I started writing in MultiMarkdown (MMD). Markdown, created by John Gruber, is a simple way to produce very readable documents and convert them to HTML. Markdown has grown in popularity recently, resulting in an abundance of simple Markdown editors for both OS X and iOS. I can’t remember what year I stumbled on it, but at some point I discovered Fletcher Penney’s MultiMarkdown. MultiMarkdown adds some features to Markdown, including tables and footnotes, among other things. The most important feature for me, though, was the ability to convert the document to both HTML and to LaTeX. So, I started writing in MultiMarkdown, which then gave me the ability to export the same document in several formats as needed. I could have used TeXShop to write MMD, but it wasn’t the best experience.

At the time, there was a great deal of excitement about TextMate. After watching some videos on YouTube showing LaTeX editing with TextMate, I was hooked. Then, I discovered that Fletcher Penney himself had produced a Markdown bundle for TextMate. So, I could use TextMate to easily edit an MMD document, with a quick keyboard shortcut, convert it to a LaTeX file, touch up any TeX code, then build the PDF, all from within TextMate.

But then, I started hearing things about Vim. So, I downloaded MacVim, and then spent a great deal of time learning the Vim commands and configuring my .vimrc. In my opinion, navigating files in Vim is pure magic, and I don’t think that there is anything better for editing, that is, making small corrections to an existing document.

Although I understand Vim, and can use it competently, it never quite fit the way that I work. Vim is a modal editor, which means that different modes are used for different things. For instance, insert mode is for inserting text. 1 Normal mode is used for navigating the document, among other things. The beauty of Vim is how much one can do with very few keystrokes. For instance, pressing dd deletes a sentence, and 2dd deletes two sentences. Vim experts look like sorcerers at the keyboard. Unfortunately, Vim just never quite fit for me as a tool to write longer documents, I still use it in the terminal for writing short things like Git commit statements. I would find myself starting to write after a break to think, and suddenly be inserting text in a completely different part of the document. For instance, in normal mode, e moves the cursor to the end of the word, gg moves the cursor to the beginning of the document, and i switches to insert mode. So, in normal mode, if I started typing the word “beginning” anywhere in the document, I would quickly find myself on the first line having typed ng with no idea where I started.

The problem is that I stop often to think about what I’m writing, and during these pauses, would forget what mode I was in. Now, I know the mode is clearly labeled at the bottom of the screen, but that didn’t help. The other problem is that I tend to edit as I write, which, it seems, eliminates much of the advantage that Vim offers. 2

Then, there was my flirtation with Emacs. Emacs fits the way that I write better than Vim, and I’ll keep it around. There are several reasons that Emacs doesn’t quite fit for me, though. First, I have trouble remembering the key combinations. I thought Vim’s commands were fairly intuitive, Emacs commands never quite clicked. Second, I preferred Gnu Emacs over Aquamacs for various reasons, but Textexpander snippets wouldn’t work in Gnu Emacs. Finally, configuring both Vim and Emacs can be a long painful process for amateur geeks.

So, I kept returning to TextMate, and kept telling myself that there really wasn’t anything wrong with TextMate, it worked fine. It was starting to seem a bit slow, and I found it difficult to ignore the arbiters of doom announcing TextMate’s demise, given the absence of version 2.0.

Then, the heavens opened, and a beta version of TextMate 2 appeared. I downloaded it, and it just doesn’t work as well for me as TextMate 1. I’ve had BBEdit ever since it became available in the Mac App Store. It’s nice, but it just doesn’t have the ease of TextMate for LaTeX editing. Then, I heard Brett Terpstra on the Mac Power Users podcast talk about Sublime Text 2. So, I downloaded a trial version, and showing great restraint (at least, for me) used it for three days before I bought it. It definitely appears to have been inspired by TextMate, but has many improvements over TextMate 1. It is in beta, but it seems to be solidly reliable. It is fast, edits LaTeX like TextMate, easily configurable, and beautiful.

So, Sublime Text 2, I think I’m in love, but you need to know how fickle I am. I can give you today, we’ll just have to see what the future holds.

UPDATE: It’s 2024 now, and I’ve been using Emacs for over a decade. I still like to try out new editors, but I’m no longer searching for a new love.

Tagged: Misc

Footnotes

1

Never underestimate the power of the obvious.

2

See Dr. Drang’s excellent explanation here.

-1:-- A Sordid Tale of Text Editors (Post Randy Ridenour)--L0--C0--2026-03-03T20:52:00.000Z

Randy Ridenour: LaTeX Compilation Scripts

LaTeX Compilation Scripts

Jun 09, 2022 00:00

I’m trying to automate LaTeX compilation from Emacs as much as I can. Here are the shell scripts and Emacs functions that I’m using.

Shell Scripts

Here are some very simple scripts for compiling LaTeX files with arara and the fish shell. The first, called “mkt” for “make TeX”, runs arara on the source file, then opens the resulting PDF.

 function mkt 
  arara $argv 
  set output_file (string replace -r tex\$ pdf $argv) 
  open -g $output_file 
end

The second, called “mktc” for “make TeX continuously”, runs arara, opens the PDF, watches for changes, and runs arara whenever the file is saved.

  function  mktc
   arara  $ argv
   set  output_file ( string replace -r tex \$ pdf  $ argv)
   open -g  $ output_file
   fswatch -o  $ argv  |  xargs -n1 -I{} arara  $ argv
 end

Emacs

Here are some functions to use in Emacs to run the scripts.

Auctex

 ( defun  rlr/tex-mkt ()
   "Compile with arara."
  ( interactive)
  (async-shell-command (concat  "mkt " (buffer-file-name))))

( defun  rlr/tex-mktc ()
   "Compile continuously with arara."
  ( interactive)
  (start-process-shell-command (concat  "mktc-" (buffer-file-name)) (concat  "mktc-" (buffer-file-name)) (concat  "mktc " (buffer-file-name))))

The first compiles with an asynchronous shell command, so that you can immediately return to editing the file while arara runs. I decided to use start-process-shell-command instead of async-shell-command for mktc, since mkt runs once and stops, while mktc keeps running in the background. When rlr/tex-mktc is called, a process is started that has the same name as the file with “mktc-” prepended it to it. The process is shown in a buffer that has the same name. That way, several files can be compiled at the same time.

Org Mode

 
( defun  rlr/org-mkt ()
   "Make PDF with Arara."
  ( interactive)
  (org-latex-export-to-latex)
  (async-shell-command (concat  "mkt " (file-name-sans-extension (buffer-file-name)) ".tex")))

( defun  rlr/org-mktc ()
   "Compile continuously with arara."
  ( interactive)
  (org-latex-export-to-latex)
  (start-process-shell-command (concat  "mktc-" (buffer-file-name)) (concat  "mktc-" (buffer-file-name)) (concat  "mktc " (file-name-sans-extension (buffer-file-name)) ".tex")))

These functions first export the org file to LaTeX, then compile the corresponding TeX files. All in all, a morning’s worth of work to save a few keystrokes. I run them using major mode hydras.

Tagged: Latex Emacs Org

-1:-- LaTeX Compilation Scripts (Post Randy Ridenour)--L0--C0--2026-03-03T20:52:00.000Z

Randy Ridenour: Arara and Latexmk

Arara and Latexmk

Jan 21, 2023 00:00

Compiling LaTeX documents can be a chore, especially if the document has a table of contents, bibliographic references, cross-references, or labels. Getting the final output an be up to a four-step process:

  1. Run pdflatex to to create the labels, table of contents, and undefined bibliographic references.
  2. Run bibtex or biber to get the relevant bibliographic data from the bib files.
  3. Run pdflatex to put that bibliographic data into the document
  4. Run pdflatex again to fix any page numbers that need to change because of the new material.

Latexmk

Using Latexmk is a great way to automate all of this. Latexmk will do all of these steps as needed, you just need to tell what engine to use (pdflatex, lualatex, xelatex) and whether to run Bibtex or Biber. A pdfLaTeX document named “sample.tex” containing bibliographic references would be compiled from the command line like this:

 latexmk -pdflatex -bibtex sample.tex

There are many other options that can be used, making the command to compile even longer. Fortunately, these options can all be set in a latexmkrc file.

  #  LaTeXmk configuration file

 #  Usage example
 #  latexmk file.tex

 #  Main command line options
 #  -pdflatex : compile with pdflatex
 #  -lualatex : compile with lualatex
 #  -pv       : run file previewer
 #  -pvc      : run file previewer and continually recompile on change
 #  -c        :
 #  -C        :clean up auxiliary and output files

 #  Use bibtex if a .bib file exists
$ bibtex_use = 1;

 #  Define commands to compile with pdfsync support and nonstopmode
$ pdflatex =  'pdflatex -synctex=1 --interaction=nonstopmode %O %S';

$ lualatex =  'lualatex -synctex=1 --interaction=nonstopmode %O %S';

 #  Also remove pdfsync files on clean
$ clean_ext =  '%R.synctex.gz';

If you always use the same engine to compile, then you can add a line like this to your latexmkrc:

 $ pdf_mode = n

where n=1 for pdflatex, 4 for lualatex, and 5 for xelatex. The compile command then becomes simply

 latexmk sample.tex

Since I use Emacs, I could just add latexmk to the list of TeX commands and compile documents with a simple keystroke. The problem is that I don’t do the smart thing and use just one engine. I like to use pdflatex sometimes and lualatex for other documents. It would be nice if I could specify the compiling steps within the document.

Arara

Here’s where Arara shines. Arara is another way to compile tex files. Unlike latexmk, arara does not do anything automatically. Running

 arara sample.tex

doesn’t do anything without some special instructions within the file. To compile the file with pdflatex, I would put this as the first line:

 % arara: pdflatex: { interaction: nonstopmode, synctex: yes }

Then, running arara on the file would compile once with pdflatex. If I had a bibliography and wanted to use lualatex, I would put this at the top of the file:

 % arara: lualatex: { interaction: nonstopmode, synctex: yes }
% arara: biber
% arara: lualatex: { interaction: nonstopmode, synctex: yes }
% arara: lualatex: { interaction: nonstopmode, synctex: yes }

This would get all of the steps that I described above. So, I could compile any file with the same command, and have it compile exactly the way I wanted with the engine that I specified.

The problem (and I admit it’s a minor one) is that I lose the automation of latexmk. Latexmk does all of the steps automatically, and only if they are needed. Arara does them all every time. Latexmk is smart, but not easily flexible. Arara is very flexible, but not very smart.

Solution

Fortunately, arara can be told to compile with latexmk, and the engine and any options can be specified in the file. This compiles with latexmk using pdflatex:

 % arara: latexmk: { engine: pdflatex }

Since the options that I want are in my latexmkrc file, there’s nothing else I need to do. Changing “pdflatex” to “lualatex” would make it compile with lualatex instead as many times as necessary to produce the final document.

Tagged: Latex

-1:-- Arara and Latexmk (Post Randy Ridenour)--L0--C0--2026-03-03T20:52:00.000Z

Randy Ridenour: Making Tag Pages in an Org Mode Blog

Making Tag Pages in an Org Mode Blog

Dec 30, 2024 00:00

I have finished converting the old Hugo blog to the new org-publish format. It was quite a bit of work, but I tried to automate as much as possible. I wrote about the initial setup here. One minor change has been to remove the “home” and “up” links that org-publish inserts at the upper-right. This just requires a simple change to the org-publish-project-alist:

  :html-link-home  ""
 :html-link-up  ""

Just make sure to change them wherever they occur in the alist.

A bigger issue was displaying post tags and creating tag pages. Tag and category pages in Hugo were fairly simple. Every time the site was built, Hugo would create the tag pages reflecting any new tags. Unfortunately, I could find no way to automatically do this with org-publish. My first thought was to simply forego tagging since there is a blog archive page with links to every post. A search on that page can find whatever you need, if you happened to know what you needed to search for. It’s not a good way to simply browse an area of interest, however.

On the other hand, making tags manually was never going to work for me either. That would require manually adding tags to the post, being sure to use the same names for tags across posts. Otherwise, I’d end up with some posts tagged “org” and others tagged “org-mode”. Then, I’d have to create a new tag page if needed, make a link from the post to the tag page, then another link from the tag page back to the tagged post. There’s no chance I could, or would, consistently make that happen. Fortunately, though, because this is Emacs, everything is scriptable.

Keep in mind that I’m a philosopher, not a coder. Here are my steps to solving a problem:

  1. Figure out what I want.
  2. Find someone who has posted something close to what I want.
  3. Make changes to their code to make it do what I want.
  4. Keep hacking at it until it works.
  5. Realize that what I thought I wanted wasn’t really what I wanted, so return to step 1 and repeat until satisfied.

So, the result is likely to be sloppy and inefficient, but it generally works in the end. I started with this post from Christian Tietze, who solved my first problem of consistently using tags across the site. From him, I got these three functions:

 ( defun  orgblog-all-tag-lines ()
   "Get filetag lines from all posts."
  ( let ((post-dir orgblog-posts-directory)
            (regex  "^#\\+filetags:\\s([a-zA-Z]+)"))
    (shell-command-to-string
     (concat  "rg --context 0 --no-filename --no-heading --replace \"\\$1\" -- " (shell-quote-argument regex)  " " post-dir))))

( defun  orgblog-all-tags ()
   "Return a list of unique tags from all posts."
  (delete-dups
   (split-string (orgblog-all-tag-lines) nil t)))

( defun  orgblog-select-tag ()
   "Select and insert a tag from tags in the blog."
  ( defvar  newtag)
  ( setq newtag (completing-read  "Tag: " (orgblog-all-tags))))

The first uses ripgrep to get all of the filetag lines from the headers in the posts directory. The second splits those lines into a list of separate words and deletes the duplicates. The third displays the list and allows the user to select a tag to be assigned to a variable. The only things I believe I’ve changed here are variable names. I also explicitly search the posts directory rather than the current directory since I write posts initially in a drafts directory. Since the tags are separated by a space in each filetag line, I can use the default value for split-string. If the desired tag is not on the list, just hit return and the new tag gets assigned to the variable.

At this point, I had to venture on my own into unknown waters. First, I needed to insert the new tag into the post. There’s no real reason to keep the filetag lines, except that I might find a better way to use them in the future. So, I add the new tag to the end of the filetag line. I need to display it in the published post, though, so each post has a line at the end 1 that begins with “Tagged:”. This function handles that:

 ( defun  insert-post-tag ()
  (orgblog-select-tag)
  (beginning-of-buffer)
  (search-forward  "#+filetags" nil 1)
  (end-of-line)
  (insert (concat  " " newtag))
  (beginning-of-buffer)
  (search-forward  "Tagged:")
  (end-of-line)
  (insert (concat  " [[file:../tags/" newtag  ".org][" (s-titleized-words newtag)  "]]")))

Note that I use Sveen Magnars’ s.el library to capitalize the tag names for display. The next thing to do is to add the tagged post to the tag page. To do that, I use this function:

 ( defun  add-post-to-tagfile ()
  ( defvar  tagfile)
  ( defvar  post-filename)
  ( defvar  post-title)
  ( setq tagfile (concat  "../tags/" newtag  ".org"))
  ( setq post-filename (f-filename (f-this-file)))
  ( progn
    (beginning-of-buffer)
    (search-forward  "#+title: " nil 1)
    ( setq post-title (buffer-substring (point) (line-end-position))))
  ( when
      (not (file-exists-p tagfile))
    (f-append-text (concat  "#+title: Tagged: " (s-titleized-words newtag)  "\n#+setupfile: ../org-templates/post.org\n") 'utf-8 tagfile))
  (f-append-text (concat  "\n- [[file:../posts/" post-filename  "][" post-title  "]]") 'utf-8 tagfile))

To create the links, we’ll need the filename and title of the post. We’ll also need the filename for the tag page. Note that this function uses both the s.el and Johan Andersson’s f.el libraries. In short, here’s what the function does:

  1. Create the filename for the tag page in the tags directory.
  2. Get the filename for the post.
  3. Extract the post title from header title-line.
  4. Create the file for the tag page if necessary.
  5. Add a link to the post at the bottom of the tag page.

The final thing is to put all of functions together into one interactive function:

 ( defun  orgblog-add-tag ()
  ( interactive)
  (orgblog-select-tag)
  (insert-post-tag)
  (add-post-to-tagfile)
  (save-buffer))

The only thing left to do manually is to add a new tag page to the index page of the tags directory. I’m sure I could automate that with some effort, but at this stage I hardly ever use any new tags, so I’m not convinced that it would be worth the effort.

The only minor glitch in the system is that, for some reason, I have to press return twice to select the tag. I’m not sure why. Eventually it will bother me enough to figure it out.

Tagged: Org Blog

Footnotes

1

I did discover that it needs to be before the footnote section, or it won’t be displayed.

-1:-- Making Tag Pages in an Org Mode Blog (Post Randy Ridenour)--L0--C0--2026-03-03T20:52:00.000Z

Randy Ridenour: Creating an RSS Feed

Creating an RSS Feed

Jan 07, 2025 00:00

I struggled with ox-rss for my blog feed and really could never get it working. It would generate an RSS file, but for some reason, the feed readers wouldn’t get anything. I don’t know enough about XML to fix it, and I had neither the time right now nor the inclination to learn.

So, I stumbled across Emacs Webfeeder, which creates a feed from the HTML files. After adding it with use-package, I removed the RSS component of my publish.el file, and changed by blog build function to this:

 ( defun  orgblog-build ()
  ( interactive)
  ( progn
    (find-file  "~/sites/orgblog/publish.el")
    (eval-buffer)
    (org-publish-all)
    (webfeeder-build  "atom.xml"
                            "./docs"
                            "https://randyridenour.net/"
                           ( let ((default-directory (expand-file-name  "./docs")))
                               (remove  "posts/index.html"
                                         (directory-files-recursively  "posts"
                                                                             ".*\\.html$")))
                            :title  "Randy Ridenour"
                            :description  "Blog posts by Randy Ridenour")
    (kill-buffer))
  (message  "Build complete!"))

After building the site, Webfeeder generated an atom.rss file, and everything works fine. That was the last critical component of my endeavor to blog with Emacs only.

Tagged: Blog Emacs

-1:-- Creating an RSS Feed (Post Randy Ridenour)--L0--C0--2026-03-03T20:52:00.000Z

Randy Ridenour: Replicating Emacs Everywhere with Keyboard Maestro

Replicating Emacs Everywhere with Keyboard Maestro

Jan 20, 2025 18:16

I’ve always loved the idea of Tecosaur’s Emacs Everywhere, but when I looked at the code it seemed like quite a bit of overhead for someone who just uses one operating system. I tried to pull out just the portions relating to the Mac, but couldn’t make it work. I’ve been running Keyboard Maestro for a long time now thinking that maybe someday I would use it to automate something. Since I couldn’t bring myself to watch the inauguration today, it seemed like a good idea to finally use it.

As always, I’ll start with this disclaimer: I am an amateur elisp hack. I borrow and steal from others, cobbling things together that never initially work, then fiddle with them until they finally do what I had intended. With that out of the way, here’s what I put together.

Notepad Mode

I wanted to use C-c C-c to easily quit and return to the initial application. The easiest way for me to do this was to create a new mode. Since I write everything with Org mode, it makes sense to just use it with a different name. I also defined a new keymap that calls a function that is defined below.

 ( defvar-keymap notepad-mode-map
   "C-c C-c" #'copy-kill-buffer)

( define-derived-mode  notepad-mode
  org-mode  "Notepad"
   "Major mode for scratch buffers."
  )

This creates a temporary scratch buffer, sets it to notepad mode and pastes the contents, if any, of the system clipboard.

   ( defun  rlr-create-notepad-buffer ()
     "Create a new notepad buffer."
    ( interactive)
    ( let ((buf (generate-new-buffer  "*notepad*")))
      (switch-to-buffer buf))
    (notepad-mode)
(shell-command-on-region (point) ( if mark-active (mark) (point))  "pbpaste" nil t))

After finishing writing the desired text in the notepad buffer, this function adds a new line at the end, copies the contents of the buffer, kills it, then runs an application that does nothing but beep twice and exits.

 ( defun  copy-kill-buffer ()
  ( interactive)
  (goto-char (point-max))
  (newline)
  (mark-whole-buffer)
  (copy-region-as-kill 1 (buffer-size))
  (kill-buffer)
   ;;  (app-switch)
  (shell-command  "open -a ~/icloud/scripts/beep.app"))

Beep.app is one line of AppleScript:

 beep 2

Keyboard Maestro

The piece that puts it all together is a Keyboard Maestro macro. It builds on a macro that I found by Chris Thomerson on this thread in the Keyboard Maestro forum. 1 His macro has two action groups. The first saves the current front application and window to variables before minimizing the front window. The second executes an AppleScript that activates the application named by the variable, then unminimizes the window and brings it to the front.

Before the first action group, I inserted another group that first clears the system clipboard. It then cuts any selected text in the current front window.

In between Thomerson’s two macro groups, I have another group that activates my terminal app and runs this fish function that starts an emacsclient instance, opens a notepad buffer, and brings Emacs to the front.

  function  notepad
     emacsclient -e  "(rlr-create-notepad-buffer)"
     open -a emacs
 end

It then pauses the macro until that simple beep application is run.

The last action is to paste the new clipboard text back into the original application front window.

So, in short, here is what happens when I press the hot key that calls the macro:

  1. Copy any selected text from the front window
  2. Open an Emacs buffer containing the copied text, if any.
  3. Kill the Emacs buffer and paste its contents back into the previous front window.

People seem to think that Emacs should only be used by ancient programmer wizards, and that those of us who are less technically minded should stick with something more modern and pre-packaged. I’ve found, though, that writing very useful Emacs functions is as simple as stringing commands together. What Emacs does is let even humanities professors like me do things they could never imagine doing with another editor.

For those who are interested, here’s a picture of the entire macro:

emacs-notepad.png

Tagged: Emacs

Footnotes

1

There’s a link to download the macro at the end of the thread.

-1:-- Replicating Emacs Everywhere with Keyboard Maestro (Post Randy Ridenour)--L0--C0--2026-03-03T20:52:00.000Z

Randy Ridenour: Emacs' Undeserved Reputation

Emacs’ Undeserved Reputation

Mar 02, 2025 18:19

In a post titled “ Is Emacs Hard to Configure” on his blog Irreal, JCS discusses the oft-heard complaint that Emacs is difficult to configure. He objects, rightly so in my opinion, saying that many non-programmers also use Emacs. The post on Irreal was prompted by this discussion on Reddit. User precompute claimed that Emacs is not hard to configure, so long as you understand Elisp. Several responded that the need to understand Elisp is exactly the problem. I am one of those non-programmers who use Emacs, and somehow even I manage to use it to do my work.

I admit that I wouldn’t have been able to do some of the more complex things in my config file without the help of the many generous people in the Emacs community who constantly share their knowledge with others. This means that for any configuration problem you might have, it’s very likely that someone has already explained in detail how to solve it. It probably is easier to set the font in other text editors; in VSCode, for example, open the settings, select the font menu, and type in the font family in one text field and the font size in another. That is admittedly simpler than what I have in my Emacs config, but a quick search for “Emacs set font” reveals pages and pages of results, each explaining how it’s done. That means a new user can learn how to set the font in nearly the same time as it takes to do it in any other editor.

I admit that my claiming that Emacs being only slightly more difficult to configure than other text editors does not amount to a good case that someone should consider using Emacs. The power of Emacs, in my opinion, is that you can, with a little Elisp, make it do nearly anything that you want to do with text. How much Elisp? The answer is very little, actually. Whenever I find myself repeatedly using the same series of commands, I create a command that does the same thing with a single command. This only requires learning to write simple functions. For a very simple example, I often would delete a window, then balance the remaining windows in the frame. This is just two commands, but there’s no reason why I have to execute both myself. I can simply string them together into a custom command like this:

 ( defun  delete-window-balance ()
     "Delete window and rebalance the remaining ones."
    ( interactive)
    (delete-window)
    (balance-windows))

I have other examples that are more complex, but they’re still just instances of taking ordinary Emacs commands and combining into a workflow that meets the needs of the particular user. This is ridiculously simple in Emacs, but I suspect it would be much more difficult in another editor.

Tagged: Emacs

-1:-- Emacs' Undeserved Reputation (Post Randy Ridenour)--L0--C0--2026-03-03T20:52:00.000Z

Randy Ridenour: Emacs Writing Experience

Emacs Writing Experience

Jul 28, 2025 13:44

I’ve enjoyed reading the posts in this month’s Emacs Carnival on the topic of “Writing Experience.” It’s given me an opportunity to reflect not only on how I use Emacs to write, but also on why I continue to use Emacs despite the many alternatives that are available. I don’t consider myself to be the typical Emacs user, if there is such a person. I’m certainly not a coder; anything I learned in my single FORTRAN class thirty-five years ago is long gone by now. 1 Instead, I’m a philosophy professor at a small liberal arts university, which means that I live in a Microsoft world. It’s very likely that I am the only Emacs user on campus. I’m also a retired Army Reserve chaplain. I’m sure that Emacs users are a minority in technical fields, but they are almost non-existent in the humanities. One can only imagine how few Emacs users there are in ministry.

Everything that I produced during my undergraduate and graduate studies, including my dissertation, was written with a word processor, either WordPerfect 5.1 or MS Word 6.0. As I began working more with formal logic, I began to be aware of the power and beautiful output of LaTeX, and that led me to look for the best way to edit LaTeX files. It turns out that there’s no agreement on the internet for the best way to do anything, so my search let me down many paths. In 2012, I wrote a blog post about my experience, titled “ A Sordid Tale of Text Editors.” It documents a journey from TextMate to Vim, a brief interlude with Emacs, then back to TextMate, then to Sublime Text 2. 2 What I now find particularly interesting about that post are the three reasons why I rejected Emacs at the time:

  1. The complex, arcane key combinations,
  2. The inability to use Textexpander snippets in Gnu Emacs, and
  3. The pain of configuring Emacs for amateurs.

The second no longer applies — at the time, Emacs just wasn’t a well-behaved Mac app, but that has changed over the years. The first and third, though, are still common reasons that people cite for giving up Emacs. Shortly after I wrote that, however, I gave Emacs another try, and over a decade later, I’m still using it. More that that, though, I can’t imagine an alternative.

The mistake that I made in my earlier flirtation with Emacs was, in my opinion, treating it as just another text editor. Every other text editor that I tried demanded that I conform to its way of editing. When I realized that Emacs, with a bit of configuration work, was quite willing to conform to the way that I wanted to work, the search was over. Don’t like the keybindings? Then change them! Prefer modal editing? Emacs can do that. Do you want a file tree window to the side as with VSCode? Emacs can do that, too. The initialization files may look complicated at first, but Lisp is very straightforward and remarkably easy to follow, even for a philosopher like me. When I got stuck, I found the community to be remarkably gracious and helpful.

I do write some papers for presentation and possible publication, but our primary responsibility as faculty at my institution is teaching. So, most of my writing is for lecture notes, presentation slides, and handouts. Given our heavy teaching load (normally four courses per semester), it helps to be able to produce these as efficiently as possible. With Emacs, I’m able to write one document and, from that, produce a PDF of notes for me, slides to show in class, and and HTML handout to post in the learning management system. In short, a function creates four files: one that exports a notes PDF for me, one that exports to a LaTeX Beamer document, one that exports to HTML for posting, and a data file used by the other three. The Org headers are automatically inserted into each file, and the only file that I actually edit is the data file. Two Yasnippet snippets make it easy to specify text that appears only in the presenter’s notes in the slides and more detailed notes that appear in the lecture notes. The HTML handout includes everything in the lecture notes. An export function generates the HTML and copies it to the clipboard for pasting to the LMS. (The code for everything is in the teaching section of my configuration file.)

I use many packages, but there are some that are particularly useful for my everyday writing.

  • Jinx for spell checking.
  • I’ve already mentioned Yasnippet. I also find Yankpad can be useful for inserting snippets that are used only occasionally.
  • Orgonomic helps me quickly enter lists and headings in Org mode.
  • Math-Delimiters for quickly entering LaTeX math mode.
  • Citar makes citations simple.

Finally, I should point out that I can’t use Emacs for everything. Although everything begins in Org mode, some of it must, unfortunately, end up in Microsoft Word. I have a collaborative book project with a colleague who only uses Word, and most publishers in the humanities, including those in my particular fields of philosophy and religion, expect Word documents. That’s no problem, provided that I am the sole author. Any changes can be made to the original Org document, with a final export to a docx file. If I’m collaborating with someone else, then eventually I just have to start working in Word.

In the end, the best thing about the Emacs writing experience is that there is no such thing as the Emacs writing experience. Emacs provides a canvas and tools from which each user crafts their own editing environment. Simple or complex, modal or non-modal, Emacs can provide the writing experience you want, not merely something that just approximates it. That’s why I’m still using it.

Tagged: Emacs

Footnotes

1

The only thing that remains is a vague recollection of the pain of counting columns when debugging a program.

2

The list is actually much longer than that and it includes some editors whose names I can’t even remember now.

-1:-- Emacs Writing Experience (Post Randy Ridenour)--L0--C0--2026-03-03T20:52:00.000Z

Randy Ridenour: My Emacs Elevator Pitch

My Emacs Elevator Pitch

Aug 10, 2025 17:07

This is my post for this month’s Emacs Carnival, “Your Elevator Pitch for Emacs.”

It’s very simple. There is one thing that I have never heard an Emacs user say:

I’m forced to use Emacs for this particular task, but I sure wish I could use something else.

Why would you not want to at least try something that its users love that much?

Tagged: Emacs

-1:-- My Emacs Elevator Pitch (Post Randy Ridenour)--L0--C0--2026-03-03T20:52:00.000Z

Randy Ridenour: Inserting Bible Passages With Emacs

Inserting Bible Passages With Emacs

Aug 10, 2025 15:23

Much of my writing requires quoting biblical passages, which always involved opening another application, searching for the passage, copying and pasting, then cleaning up the pasted text. Here is a function that I now use to automate the process in Emacs. My preferred translation is the New Revised Standard Version, which is available online using the oremus Bible Browser. They have a convenient API which is documented here. The function asks for a passage reference, formats the inputted string properly for the API, retrieves the passage, strips the HTML, then inserts the resulting plain text. The passage can either be an entire chapter or selected verses from a chapter. If preferred, The book name can be abbreviated using the standard abbreviations.

Here is the function:

 ( defun  nrsv-insert-passage ()
  ( interactive)
  ( setq oremus-passage (read-string  "Passage: "))
  ( setq oremus-passage (s-replace  " "  "%20" oremus-passage))
  ( setq oremus-link (concat  "https://bible.oremus.org/?version=NRSV&passage=" oremus-passage  "&vnum=NO&fnote=NO&omithidden=YES"))
  (switch-to-buffer (url-retrieve-synchronously oremus-link))
  (beginning-of-buffer)
  (search-forward  "passageref\">")
  (kill-region (point) 1)
  (search-forward  "
") (beginning-of-line) (kill-region (point) (point-max)) (beginning-of-buffer) ( while (re-search-forward "

"

nil t) (replace-match "\n")) (beginning-of-buffer) ( while (re-search-forward "" nil t) (replace-match "")) (beginning-of-buffer) ( while (re-search-forward "<.+?>" nil t) (replace-match "")) (beginning-of-buffer) ( while (re-search-forward " " nil t) (replace-match " ")) (beginning-of-buffer) ( while (re-search-forward "“" nil t) (replace-match "\"")) (beginning-of-buffer) ( while (re-search-forward "”" nil t) (replace-match "\"")) (beginning-of-buffer) ( while (re-search-forward "‘" nil t) (replace-match " \ '")) (beginning-of-buffer) ( while (re-search-forward "’" nil t) (replace-match " \ '")) (beginning-of-buffer) ( while (re-search-forward "—" nil t) (replace-match "---")) (delete-extra-blank-lines) (clipboard-kill-ring-save (point-min) (point-max)) (kill-buffer) (yank))

If another translation is preferred, there are other sites that also have API’s. I have to add my usual disclaimer — I really don’t know what I’m doing; I just keep messing with it until I get the result I wanted. Surely, there are more elegant ways of solving the problem, and I’d welcome hearing about them. This is definitely the sort of thing that will be useful to only a handful of people in the world, if that. It is, however, an example of how useful Emacs can be, even for those of us non-programming types.

Tagged: Emacs

-1:-- Inserting Bible Passages With Emacs (Post Randy Ridenour)--L0--C0--2026-03-03T20:52:00.000Z

Randy Ridenour: An Obscure Emacs Package: Yankpad

An Obscure Emacs Package: Yankpad

Sep 21, 2025 15:32

I’ve really enjoyed participating in the Emacs Carnival that’s been going on since June this year. I’m particularly excited about reading the posts on this month’s topic, “Obscure Packages.” There are some people who keep their initialization files lean and simple, but I’ve never been one of those. I don’t really care about Emacs startup time — as far as I’m concerned, finding oneself obseessing over a few fractions of a second is a good sign that it’s time to begin some kind of mindfulness practice. Even more, I’ve found that adding packages is one of the best ways to leverage the work of others as I continue to make Emacs work for me, so I’m always on the lookout for packages that others have found useful.

My choice is Yankpad by Erik Sjöstrand ( Kungsgeten on Github). I don’t remember when I first found Yankpad, but I’ve been using it for years. I don’t really recall seeing it mentioned anywhere, even though I’ve found it to be incredibly useful. Yankpad is a simple, but surprisingly powerful, package for inserting snippets. If you’ve been using Emacs for any time at all, you’re probably already using YASnippet or Tempel, so why recommend another one? First, Sjöstrand never intended Yankpad to be a replacement for something like YASnippet. In fact, if YASnippet is installed, then Yankpad can use some of its features like tab stops and the ability to execute Elisp. The difference between YASnippet and Yankpad is primarily in how snippets are written and organized.

In Yankpad, all snippets are items in one or more Org files. Creating a snippet, then, is as simple as adding some text to yankpad.org. Top-level headings are snippet categories, and a snippet is the body text under a lowest-level heading in the category. M-x yankpad-insert, which I have bound to , displays the names of the category’s snippets in the minibuffer for selection. Since a snippet is nothing more than a bit of text in an Org file, there’s no reason not to make one even if it’s only going to be used a few times. It’s hardly more effort than saving some text to a register. The difference, though, is that the Yankpad snippet has a meaningful name, not just a single letter.

The real value of Yankpad, however, are the snippet categories. In YASnippet, the snippets are categorized by major-mode, so Org buffers have access to the Org mode snippets, LaTeX buffers use the LaTeX snippets, and so on. Yankpad can also use major-mode categories, but, more importantly for me, they can be organized by context or situation. Like Sjöstrand, I teach, and the bane of a professor’s existence is writing grading comments. It’s something that I don’t have to do very often, so I don’t really want to clutter up YASnippet with items that are only occasionally used. There’s also no need to see all of the snippets for my symbolic logic class when I’m grading epistemology papers. So, I have a separate category for a course that has comments pertaining only to that course, and a general “grading” category that includes comments that are not course-specific. The Yankpad file looks something like this:

  * Grading

 ** Proofread

- Read through carefully for grammar and spelling errors.

 * Intro
 :PROPERTIES:
 :INCLUDE:   Grading
 :END:

 ** Dualism

- In the section on the nature of self, you should discuss dualism, materialism, etc.

After selecting the Intro category, displays snippets from both the intro and the grading categories for selection, since I’ve defined the intro category to include the grading category. Categories can also be project-specific for those who use Projectile or Project.el. Yankpad can do many more things than I’ve described; if your snippet needs are simple, it may be all that you need. It won’t replace YASnippet for me, mainly because of the ability to use yasnippet-expand-snippet in Elisp functions. Neither will YASnippet replace Yankpad, however. I’ve found Yankpad to be very beneficial for making my grading workflow more efficient, now if someone would only develop something to motivate me to actually start grading.

Tagged: Teaching Emacs

-1:-- An Obscure Emacs Package: Yankpad (Post Randy Ridenour)--L0--C0--2026-03-03T20:52:00.000Z

Randy Ridenour: A Simple Emacs Dashboard

A Simple Emacs Dashboard

(October 22, 2025)

Emacs Dashboard is an elegant start page for Emacs that displays projects, recent files, agenda items, etc. It’s very customizable, but not quite in the way that I would like. I do like having a dashboard as my starting page, but I want something that displays the usual agenda view, not just a list of events. This is mainly because I like having a visual representation of the relation between the current time and my next appointment. I first define a function that opens the agenda for today and deletes other windows in the frame. This is assigned to s-d to quickly clear the screen of anything displayed and show the agenda. To display it without deleting other windows, just use the standard C-c a d

 ( defun  agenda-home ()
  ( interactive)
  (org-agenda-list 1)
  (delete-other-windows))

Then, I make sure that new frames displayed this when created.

 (add-hook 'server-after-make-frame-hook #'agenda-home)

This function refreshes the agenda. It’s run every minute. That’s really overkill, but 34 years in the Army made me anal about accurate times.

 ( defun  refresh-agenda-periodic-function ()
   "Recompute the Org Agenda buffer(s) periodically."
  ( ignore-errors
    ( when (get-buffer  "*Org Agenda*")
          ( with-selected-window (get-buffer-window  "*Org Agenda*")
            (org-agenda-redo-all)))))

 ;;  Refresh agenda every minute.
(run-with-timer 60 60 'refresh-agenda-periodic-function)

The line used to designate the current time is a bit too long by default. I didn’t like how it wrapped when narrowing the window.

 ( setq org-agenda-current-time-string  "now - - - - - - -")

Then change the color to something more noticeable.

 (custom-set-faces
 '(org-agenda-current-time ((t ( :foreground  "red")))))

Now, to make the dashboard links. For that, I use an org file that just contains a table. Each cell contains a link to a directory, file, runs some Elisp, etc. After inserting the file contents, then activate all of the links, that is, make them “clickable.”

 ( defun  rlr/agenda-links ()
  (end-of-buffer)
  (insert-file-contents  "/Path to Org Directory/agenda-links.org")
  ( while (org-activate-links (point-max))
    (goto-char (match-end 0)))
  (beginning-of-buffer))

(add-hook 'org-agenda-finalize-hook #'rlr/agenda-links)

I don’t like using the mouse, so make sure that pressing enter will work when the point is on the link.

 ( setq org-return-follows-link t)

This avoids having to confirm that the links that run Elisp are safe. I name them all with my initials to make it easy.

 ( setopt org-link-elisp-skip-confirm-regexp  "rlr.*")

The result is this:

2025-agenda-screenshot.png

Tagged: Emacs

-1:-- A Simple Emacs Dashboard (Post Randy Ridenour)--L0--C0--2026-03-03T20:52:00.000Z

Irreal: Visual Wrap Prefix Mode

As I wrote yesterday, a lot of really interesting posts popped up all of a sudden and I speculated that I might have to break my rule about not covering items that Sacha has already mentioned. I’m giving myself leave to mention at least one more post from last week.

Bozhidar Batsov has a nice post that explores Emacs line wrapping. As he says, there are basically two ways of doing that:

  1. Hard wrapping that inserts an actual newline in the text to wrap the line at the logical screen width
  2. Soft wrapping with visual-line-mode that doesn’t change the actual text but changes the representation on screen

The problem with visual-line-mode is that it doesn’t respect the indentation context and merely wraps the line at the logical screen edge, which can make the screen representation of the text seem strange. Happily, all this is fixed in Emacs 30 with visual-wrap-prefix-mode that automatically computes the indentation context and indents the text appropriately.

I never noticed this problem before, probably because I turn on visual-line-mode only for prose buffers, never for code buffers. If you are going to export those prose buffers, you really want visual-line-mode so that the text will display correctly on any screen.

Still, I was interested in visual-wrap-prefix-mode because I had never seen the bad wrapping described by Batsov. My writing is mostly straight prose without a lot of fancy indenting such as you might get with, say, poetry. The only time it comes up is with lists like the one above. When I turned visual-wrap-prefix-mode for this buffer, it did indent the subsequent lines of each item but I never really noticed the bad indentation before because it gets rendered correctly when I export it.

In any event, you should definitely take a look at Batsov’s post if you’re having word wrap problems.

-1:-- Visual Wrap Prefix Mode (Post Irreal)--L0--C0--2026-03-03T16:06:33.000Z

Emacs Redux: expreg: Expand Region, Reborn

expand-region is one of my all time favorite Emacs packages. I’ve been using it since forever – press a key, the selection grows to the next semantic unit, press again, it grows further. Simple, useful, and satisfying. I’ve mentioned it quite a few times over the years, and it’s been a permanent fixture in my config for as long as I can remember.

But lately I’ve been wondering if there’s a better way. I’ve been playing with Neovim and Helix from time to time (heresy, I know), and both have structural selection baked in via tree-sitter – select a node, expand to its parent, and so on. Meanwhile, I’ve been building and using more tree-sitter major modes in Emacs (e.g. clojure-ts-mode and neocaml), and the contrast started to bother me. We have this rich AST sitting right there in the buffer, but expand-region doesn’t know about it.

The fundamental limitation is that expand-region relies on hand-written, mode-specific expansion functions for each language. Someone has to write and maintain er/mark-ruby-block, er/mark-python-statement, er/mark-html-tag, and so on. Some languages have great support, others get generic fallbacks. And when a new language comes along, you’re on your own until someone writes the expansion functions for it.

Enter Tree-sitter

You can probably see where this is going. Tree-sitter gives us a complete AST for any language that has a grammar, and walking up the AST from a point is exactly what semantic region expansion needs to do. Instead of hand-written heuristics per language, you just ask tree-sitter: “what’s the parent node?”

Both Clojure and OCaml have rich, deeply nested syntax (s-expressions in Clojure, pattern matching and nested modules in OCaml), and semantic selection expansion is invaluable when navigating their code. Rolling my own tree-sitter based solution crossed my mind, but fortunately someone had already done it better.

expreg

expreg (short for “expand region”)1 is a package by Yuan Fu – the same person who implemented Emacs’s built-in tree-sitter support. Yuan Fu created expreg in mid-2023, shortly after the tree-sitter integration shipped in Emacs 29, and it landed on GNU ELPA soon after. It’s a natural extension of his tree-sitter work: if you’ve given Emacs a proper understanding of code structure, you might as well use it for selection too.

The package requires Emacs 29.1+ and the setup is minimal:

(use-package expreg
  :ensure t
  :bind (("C-=" . expreg-expand)
         ("C--" . expreg-contract)))

That’s it. Two commands: expreg-expand and expreg-contract. If you’ve used expand-region, you already know the workflow.

What Makes It Better Than expand-region

It works with any tree-sitter grammar automatically. No per-language configuration. If your buffer has a tree-sitter parser active, expreg walks the AST to generate expansion candidates. OCaml, Clojure, Rust, Go, Python, C – all covered with zero language-specific code.

It works without tree-sitter too. This is the key insight that makes expreg a true expand-region replacement rather than just a tree-sitter toy. When no tree-sitter parser is available, it falls back to Emacs’s built-in syntax tables for words, lists, strings, comments, and defuns. So it works in fundamental-mode, text-mode, config files, commit messages – everywhere.

It generates all candidate regions upfront. On the first call to expreg-expand, every expander function runs and produces a list of candidate regions. These are filtered, sorted by size, and deduplicated. Subsequent calls just pop the next region off the stack. This makes the behavior predictable and easy to debug – no more wondering why expand-region jumped to something unexpected.

It’s tiny. The entire package is a single file of a few hundred lines. Compare that to expand-region’s dozens of mode-specific files. Less code means fewer bugs and easier maintenance.

Customization

The expander functions are controlled by expreg-functions:

(expreg--subword    ; CamelCase subwords (when subword-mode is active)
 expreg--word       ; words and symbols
 expreg--list       ; parenthesized expressions
 expreg--string     ; string literals
 expreg--treesit    ; tree-sitter nodes
 expreg--comment    ; comments
 expreg--paragraph-defun) ; paragraphs and function definitions

There’s also expreg--sentence available but not enabled by default – useful for prose:

(add-hook 'text-mode-hook
          (lambda ()
            (add-to-list 'expreg-functions #'expreg--sentence)))

Should You Switch?

If you’re using tree-sitter based major modes (and in 2026, you probably should be), expreg gives you better, language-aware expansion for free. If you’re still on classic major modes, it’s at least as good as expand-region thanks to the syntax-table fallbacks.

The only reason to stick with expand-region is if you rely on some very specific mode-specific expansion behavior that expand-region’s hand-written functions handle and expreg’s generic approach doesn’t. In practice, I haven’t hit such a case.

I’ve been using expreg for a while now across Clojure, OCaml, Emacs Lisp, and various non-programming buffers. It just works. It’s one of those small packages that quietly improves your daily editing without demanding attention. And like tree-sitter powered code completion, it’s another example of how tree-sitter is reshaping what Emacs packages can do with minimal effort.

That’s all I have for you today. Keep hacking!

  1. Naming is hard. 

-1:-- expreg: Expand Region, Reborn (Post Emacs Redux)--L0--C0--2026-03-03T14:00:00.000Z

Curtis McHale: Stop Mixing DONE and TODO in Org — Auto-Sort Like a Pro

With some of my projects I resolve tasks, but the whole project isn't done. That, at times leaves me with DONE and TODO tasks mixed in with each other. This hurts readability for me.

To fix this press SPC m s S to bring up your sort options via org-sort. Then to sort by TODO status press o. This will sort the DONE tasks to the bottom of the list, but there is some nuance. This sorts based on the order of the keywords in org-todo-keywords. In my case, which I assume is default because I never changed it, DONE comes after TODO so it sorted exactly how I wanted it to.

If it's not sorting how you want you can see the keyword order by pressing C-h v then typing org-todo-keywords and press Return. While it's possible to change the order of the keywords here via the Customize link, that writes to custom.el and leaves your configuration split between config.el and custom.el so best practice is to define the order you want in config.el. Here is an example of what that would look like if you want to do it:

;; yes this sets two sequences
(setq org-todo-keywords
	'((sequence "TODO" "NEXT" "WAITING" "|" "DONE" "KILL")
	  (sequence "HABIT" "|" "DONE")))

You can also choose to sort by:

  • alphabetically with a
  • numerically with n
  • priority with p
  • property with r
  • todo order with o
  • func with f - lets you sort by a custom Emacs function you provide
  • time with t
  • scheduled with s
  • deadline with d
  • created with c
  • clocking with k which sorts by the total clocked time

Under the hood here org-sort is actually calling org-sort-entries. To understand exactly how that works press C-h f then type org-sort-entries and pressing Return. This should bring up the documentation for any function you search.

Looking through this documentation you can see that SPC m s S only sorts the direct children of the item you have highlighted.

I use this regularly as a newer Emacs person to see what some function I find does after pressing M-x and searching through the available functions. Sometimes it feels like two functions could match my needs and C-h f helps me figure out which one is the one I want to use.

-1:-- Stop Mixing DONE and TODO in Org — Auto-Sort Like a Pro (Post Curtis McHale)--L0--C0--2026-03-03T14:00:00.000Z

Sacha Chua: Emacs Lisp: defvar-keymap hints for which-key

Emacs has far too many keyboard shortcuts for me to remember, so I use which-key to show me a menu if I pause for too long and which-key-posframe to put it somewhere close to my cursor.

(use-package which-key :init (which-key-mode 1))
(use-package which-key-posframe :init (which-key-posframe-mode 1))

I've used which-key-replacement-alist to rewrite the function names and re-sort the order to make them a little easier to scan, but that doesn't cover the case where you've defined an anonymous function ((lambda ...)) for those quick one-off commands. It just displays "function".

Pedro A. Aranda Gutiérrez wanted to share this tip about defining hints by using cons. Here's his example:

(defun insert-surround (opening &optional closing)
 "Insert OPENING and CLOSING and place the cursor before CLOSING.

Default CLOSING is \"}\"."
  (insert opening)
  (save-excursion
    (insert (or closing "}"))))

(defvar-keymap tex-format-map
  :doc "My keymap for text formatting"
  "-"  (cons "under" (lambda() (interactive) (insert-surround
"\\underline{")))
  "b"  (cons "bold"  (lambda() (interactive) (insert-surround "\\textbf{")))
  "e"  (cons "emph"  (lambda() (interactive) (insert-surround "\\emph{")))
  "i"  (cons "ital"  (lambda() (interactive) (insert-surround "\\textit{")))
  "m"  (cons "math"  (lambda() (interactive) (insert-surround "$" "$")))
  "s"  (cons "sans"  (lambda() (interactive) (insert-surround "\\textsf{")))
  "t"  (cons "tty" (lambda() (interactive) (insert-surround "\\texttt{")))
  "v"  (cons "Verb"  (lambda() (interactive) (insert-surround "\\Verb{")))
  "S"  (cons "small" (lambda() (interactive) (insert-surround "\\small{"))))
(fset 'tex-format-map tex-format-map)

Let's try it out:

(with-eval-after-load 'tex-mode
  (keymap-set tex-mode-map "C-c t" 'tex-format-map))
2026-03-02_14-45-51.png
Figure 1: Screenshot with hints

This works for named functions as well. Here's how I've updated my config:

(defvar-keymap my-french-map
  "l" (cons "🔍 lookup" #'my-french-lexique-complete-word)
  "w" (cons "📚 wordref" #'my-french-wordreference-lookup)
  "c" (cons "✏️ conj" #'my-french-conjugate)
  "f" (cons "🇫🇷 fr" #'my-french-consult-en-fr)
  "s" (cons "🗨️ say" #'my-french-say-word-at-point)
  "t" (cons "🇬🇧 en" #'my-french-translate-dwim))
(fset 'my-french-map my-french-map)

(with-eval-after-load 'org
  (keymap-set org-mode-map "C-," 'my-french-map)
  (keymap-set org-mode-map "C-c u" 'my-french-map))
2026-03-02_13-57-23.png
Figure 2: Before: Without the cons, which-key uses the full function name
2026-03-02_14-42-42.png
Figure 3: After: Might be easier to skim?

In case you're adding to an existing keymap, you can use keymap-set with cons.

(keymap-set my-french-map "S" (cons "sentence" #'my-french-say-sentence-at-point))

This is also different from the :hints that show up in the minibuffer when you have a repeating map. Those are defined like this:

(defvar-keymap my-french-map
  :repeat (:hints ((my-french-lexique-complete-word . "lookup")
                   (my-french-consult-en-fr . "fr")
                   (my-french-translate-dwim . "en")
                   (my-french-say-word-at-point . "say")))
  "l" (cons "🔍 lookup" #'my-french-lexique-complete-word)
  "w" (cons "📚 wordref" #'my-french-wordreference-lookup)
  "c" (cons "✏️ conj" #'my-french-conjugate)
  "f" (cons "🇫🇷 fr" #'my-french-consult-en-fr)
  "s" (cons "🗨️ say" #'my-french-say-word-at-point)
  "t" (cons "🇬🇧 en" #'my-french-translate-dwim))

and those appear in the minibuffer like this:

2026-03-02_13-59-42.png
Figure 4: Minibuffer repeat hints

Menus in Emacs are also keymaps, but the labels work differently. These ones are defined with easymenu.

(with-eval-after-load 'org
(define-key-after
 org-mode-map
 [menu-bar french-menu]
 (cons "French"
       (easy-menu-create-menu
        "French"
        '(["🕮Grammalecte" my-flycheck-grammalecte-setup t]
          ["✓Gptel" my-lang-gptel-flycheck-setup t]
          ["🎤Subed-record" my-french-prepare-subed-record t])))
 'org))

Using your own hints is like leaving little breadcrumbs for yourself.

Thanks to Pedro for the tip!

View Org source for this post

You can e-mail me at sacha@sachachua.com.

-1:-- Emacs Lisp: defvar-keymap hints for which-key (Post Sacha Chua)--L0--C0--2026-03-02T19:46:21.000Z

Marcin Borkowski: Lispy and Iedit

A bit over a decade ago (!) I wrote about Iedit. It’s a very cool package, a bit similar to multiple cursors, very convenient for changing variable names (especially that it has a great feature where the change is restricted to the current function). I am also a Lispy user. Lispy requires Iedit (and has a binding for it different from the default one – M-i – while Iedit’s default I’m used to is C-;). The problem is, when I added Lispy to my Emacs, it disabled the default C-; (and only installed M-i in Elisp buffers). Now, I admit that M-i may be a better (or at least not worse) keybinding for Iedit than C-;. It’s default binding is tab-to-tab-stop, which is one of those useless commands Emacs has had probably for decades. Personally, I’m accustomed to C-;, so I wanted Lispy not to interfere with Iedit setting that keybinding.
-1:-- Lispy and Iedit (Post Marcin Borkowski)--L0--C0--2026-03-02T16:39:08.000Z

Irreal: A Diff Preview Of A Regex Replace

When it rains, it pours. Sometimes I find it hard to find an interesting topic to write about. Other times, like today, four or five topics pop up. The problem is that today is Sunday and tomorrow Sacha will be publishing her weekly Emacs News. I generally try not to write about things that she’s already covered but I may have to break that rule for some of these interesting topics.

For me, the most exciting thing I’ve found today is Bozhidar Batsov’s post on Preview Regex Replacements as Diffs. It addresses a problem we’ve all had. You want to do a query-replace-regexp on a large file—or even multiple files—but you’re a bit nervous that maybe your regex isn’t quite right and the command might make a change you don’t want. So you step through each change, which is time consuming and a pain.

As Batsov explains, that got a lot easier in Emacs 30. There’s a new command, replace-regexp-as-diff that runs the regexp replace process but instead of actually making the changes, it produces a diff file in a separate buffer. That way you can see all the changes that would be made. If you’re happy with them, you can simply apply the diff buffer as a patch with diff-ediff-patch to apply them. If you’re not happy, you can simply delete the diff buffer.

There are two related commands: multi-file-replace-regexp-as-diff and dired-do-replace-regexp-as-diff for handling multiple files. The Dired variety is probably the easiest to use because you can simply mark the files you want to change in a Dired buffer and call dired-do-replace-regexp-as-diff to generate a diff for them all.

Batsov speculates that in the age of AI, people won’t be as interested in this type of command. I disagree strongly. It’s useful not only for coding but for writing pose or any other text-based file that you might want to edit.

If you’re an Emacs user, I urge you to take a look at Batsov’s post. It’s about a really useful new(ish) feature of Emacs that you can probably make good use of.

-1:-- A Diff Preview Of A Regex Replace (Post Irreal)--L0--C0--2026-03-02T15:47:35.000Z

Sacha Chua: 2026-03-02 Emacs news

Hello folks! Last month's Emacs Carnival about completion had 17 posts (nice!), and Philip Kaludercic is hosting this month's Emacs Carnival: Mistakes and Misconceptions. Looking forward to reading your thoughts!

Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, Mastodon #emacs, Bluesky #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

View Org source for this post

You can comment on Mastodon or e-mail me at sacha@sachachua.com.

-1:-- 2026-03-02 Emacs news (Post Sacha Chua)--L0--C0--2026-03-02T15:07:38.000Z

A new Emacs annoyance: org-capture: Capture abort: Invalid function: org-element-with-disabled-cache when I try to use org-capture. Fails the first time, works the second. Where did it come from and how do I get rid of it…?

-1:--  (Post TAONAW - Emacs and Org Mode)--L0--C0--2026-03-02T13:00:20.000Z

Org Mode requests: [RFC] Pros and cons of using LLM for patches beyond simple copyright (was: [PATCH] ob: typed custom vars for org-babel-default-header-args (2 patches attached))

[RFC] Pros and cons of using LLM for patches beyond simple copyright (was: [PATCH] ob: typed custom vars for org-babel-default-header-args (2 patches attached))
-1:-- [RFC] Pros and cons of using LLM for patches beyond simple copyright (was: [PATCH] ob: typed custom vars for org-babel-default-header-args (2 patches attached)) (Post Org Mode requests)--L0--C0--2026-03-01T19:56:58.000Z

TAONAW - Emacs and Org Mode: Harp is org-mode medical app for Android

There’s the app for health-related records Irreal mentioned the other day, Harp. It’s an org-mode-centered app for Android (soon to be iOS though), which looks pretty basic at this point. You can create several profiles (for different people), and each one has a medical journal and documentation attached, along with some graphs as you accumulate data.

It’s a good idea to have an org-mode-based health app, with all the information you need available to you quickly, protected behind encryption. The issue specific to me is that even though I have a personal Android phone, it’s my iPhone that has my medical apps (part of the Epic suite), as this is the phone I usually carry around with me. These apps already have all my health records, doctors, appointment etc.

I’ve been playing with it a bit, and I think it’s mostly the idea of having my health records saved in org mode that makes sense, especially with the denote file convention system. My Android is also where I have signal, which I can use to share medical records with people close to me, so there’s that. It’s not ideal to carry around two phones, but I think I want to experiment for a bit.

-1:-- Harp is org-mode medical app for Android (Post TAONAW - Emacs and Org Mode)--L0--C0--2026-03-01T18:02:26.000Z

Donovan R.: My 2026 Note-Taking Workflow

note taking workflow

I think about note-taking more than I probably should. I’m thinking about it right now, actually. 😅

It started as a way to understand myself better — to track how my mind worked, what I cared about, where I was going. Somewhere along the way, it became a time machine. I can look back six months, a year, and see myself thinking. Sometimes I cringe at what I wrote. Sometimes I surprise myself with wisdom I’ve since forgotten.

I didn’t expect any of this. I didn’t sit down and design a workflow. It just… happened.


In January, I bought a pack of blank little square papers. Non-sticky, white, nothing fancy. I thought I’d use them for doodling. Like during meetings while half-listening 😁 .

But then a grocery list ended up on one. And some todo items on another. And before long, I was sketching diagrams, mapping out project architectures, jotting down random thoughts across these small white pages.

They kind of took over.

Now I keep three or four squares spread across my desk, each holding a different topic. When something comes to mind, I jump to the right one, write it down, move on. One square for project ideas. Another for a blog draft. A third for whatever’s rattling around.

The square format forces me to be concise — there’s only so much space. And having multiple squares open means I can switch contexts without losing the thread.

It’s simple. It shouldn’t work this well. But it does.


At the end of each day, I go through my squares one by one. It feels like viewing old photographs — each one a small window into what I was thinking earlier.

I regroup similar topics. If something needs more, I flip the square over and keep writing on the back. Tasks get marked done. And when a square is fully processed — completed, digitized, no longer needed — I scrunch it up and drop it in a transparent jar.

The jar idea came from Laurie Herault’s article about sticky notes and feedback loops. There’s something satisfying about watching thoughts become artifacts. The jar fills up. I see how much I’ve processed. It’s a small thing, but it matters.


Google Gemini digitizes my handwritten notes. It’s free, it’s capable, and the process is simple:

Photo. Send. Get text.

Analog becomes digital. Messy becomes searchable. Most of the time, that digitized text ends up in Logseq — where it connects to everything else.

I’m also testing other models on OpenRouter — experimenting for something I’m building. But Gemini handles the job for now.

Here’s what I’ve learned: handwriting is thinking. The pen slows me down, and in that slowness, understanding emerges. AI just ensures I don’t lose what I write. The notes live twice — once on paper where hand meets thought, once digital where search and connection happen.


My current stack is nothing special:

Square papers — for raw capture. One topic per square, spread across my desk.

Logseq — for networked thinking, daily journaling, tracking people. My old banner plugin resurfaces quotes randomly, which keeps old insights alive.

Emacs (Org Mode) — just org-clock-in and org-clock-out. My pomodoro timer, time-blocker, work tracker.

Google Gemini — for digitizing handwriting.


So naturally, I’m building something to bring it all together.

Yes, I know.

Yet another PKM app.

The squares work. The ritual gives closure. The tools hold what matters. But I can’t help wanting something that brings it all together — notes, schedule, time. A tool that resurfaces not just old notes but connections between them.

Proactive insights. Notes that talk back. Patterns I’m too close to see.

The technology exists. I know how to build it.

I just need the time.

If this sounds interesting, stick around. I’ll be writing about the journey.

-1:-- My 2026 Note-Taking Workflow (Post Donovan R.)--L0--C0--2026-03-01T16:31:26.000Z

Irreal: Emacs For Game Development

Over at the Emacs subreddit, alraban tells a nice story about game development with Emacs. The TL;DR is that it’s amazingly good. Alraban isn’t a professional developer but has been a hobbyist since the 80s. He has, several times, tried to write a game but never made anything he felt was performant enough to ship.

Recently he decided to try again. He used the Godot engine because he wanted to work with FOSS tools but he didn’t like the builtin editor or GUI so as a long time Emacs user, he thought he’d give Emacs a try.

Airaban was amazed at how good the experience was. The amazement wasn’t that you could develop games in Emacs—of course you can; people are doing it everyday—but at how good the tooling was and at how smooth and delightful the process was. Even as a long term user who “pretty much live[d] in Emacs” he was surprised at how much tooling there was available and how good it was. As he puts it,

It was like starting a home improvement project I’d never done before, and discovering that I already had all the tools I needed in the basement.

Most of us are pretty familiar with the Emacs tooling associated with our normal tasks. The takeaway from airaban’s post is that you’re apt to be surprised at what’s available when you move to a new task.

This is a short post and only takes a couple of minutes to read. It’s well worth your time.

-1:-- Emacs For Game Development (Post Irreal)--L0--C0--2026-03-01T15:44:18.000Z

Sacha Chua: Emacs Carnival Feb 2026 wrap-up: Completion

Check out all the wonderful entries people sent in for the Emacs Carnival Feb 2026 theme of Completion:

Also, this one about completing the loop:

Sometimes I miss things, so if you wrote something and you don't see it here, please let me know! Please e-mail me at sacha@sachachua.com or DM me via Mastodon with a link to your post(s). If you like the idea but didn't get something together in time for February, it's never too late. Even if you come across this years later, feel free to write about the topic if it inspires you. I'd love to include a link to your notes in Emacs News.

I added a ton of links from the Emacs News archives to the Resources and Ideas section, so check those out too.

I had a lot of fun learning together with everyone. I already have a couple of ideas for March's Emacs Carnival theme of Mistakes and Misconceptions (thanks to Philip Kaludercic for hosting!), and I can't wait to see what people will come up with next!

View Org source for this post

You can e-mail me at sacha@sachachua.com.

-1:-- Emacs Carnival Feb 2026 wrap-up: Completion (Post Sacha Chua)--L0--C0--2026-03-01T12:37:57.000Z

Emacs Redux: Soft Wrapping Done Right with visual-wrap-prefix-mode

Emacs has always offered two camps when it comes to long lines: hard wrapping (inserting actual newlines at fill-column with M-q or auto-fill-mode) and soft wrapping (displaying long lines across multiple visual lines with visual-line-mode).1

Hard wrapping modifies the buffer text, which isn’t always desirable. Soft wrapping preserves the text but has always had one glaring problem: continuation lines start at column 0, completely ignoring the indentation context. This makes wrapped code and structured text look terrible.

Emacs 30 finally solves this with visual-wrap-prefix-mode.

What it does

When enabled alongside visual-line-mode, visual-wrap-prefix-mode automatically computes a wrap-prefix for each line based on its surrounding context. Continuation lines are displayed with proper indentation — as if the text had been filled with M-q — but without modifying the buffer at all.

The effect is purely visual. Your file stays untouched.

Basic setup

As usual, you can enable the mode for a single buffer:

(visual-wrap-prefix-mode 1)

Or globally:

(global-visual-wrap-prefix-mode 1)

You’ll likely want to pair it with visual-line-mode:

(global-visual-line-mode 1)
(global-visual-wrap-prefix-mode 1)

Note that with visual-line-mode soft wrapping happens at the window edge. If you’d like to control the extra indentation applied to continuation lines, you can tweak visual-wrap-extra-indent (default 0):

;; Add 2 extra spaces of indentation to wrapped lines
(setq visual-wrap-extra-indent 2)

Before and after

Without visual-wrap-prefix-mode (standard visual-line-mode):

    Some deeply indented text that is quite long and
wraps to the next line without any indentation, which
looks terrible and breaks the visual structure.

With visual-wrap-prefix-mode:

    Some deeply indented text that is quite long and
    wraps to the next line with proper indentation,
    preserving the visual structure nicely.

A bit of history

If this sounds familiar, that’s because it’s essentially the adaptive-wrap package from ELPA — renamed and integrated into core Emacs. If you’ve been using adaptive-wrap-prefix-mode, you can now switch to the built-in version and drop the external dependency.

Closing Thoughts

As mentioned earlier, I’m not into soft wrapping myself - I hate long lines and I prefer code to look exactly the same in every editor. Still, sometimes you’ll be dealing with some code you can’t change, and I guess many people don’t feel as strongly about cross-editor consistency as me. In those cases visual-wrap-prefix-mode will be quite handy!

I have to admit I had forgotten about auto-fill-mode before doing the research for this article - now I’m wondering why I’m not using it, as pressing M-q all the time can get annoying.

That’s all I have for you today. Keep hacking!

  1. I’ve always been in the M-q camp. 

-1:-- Soft Wrapping Done Right with visual-wrap-prefix-mode (Post Emacs Redux)--L0--C0--2026-03-01T05:39:00.000Z

Joar von Arndt: Vibe-coding Brings the Power of Emacs to Everything

One of the first use-cases I found for LLMs back when ChatGPT first released was automating the creation of citations, or rather the transformations of citations structured in one way into .bib-files that can be used to create a wide variety of uniform citations in \(\LaTeX\) documents. LLMs are fantastic for this sort of work, where some sort of messily structured data needs to be transformed into some other form that is then useful. As LLMs become cheaper and cheaper it becomes easier and easier to make data become useful. The benefits of this is obvious to the point of it being the main strength of what is perhaps the world’s oldest continuously developed software project; GNU Emacs.

Much of software engineering is piping that transforms data from one form to another, where we can then process it, ingest it, or present it in some interesting and beneficial way. This is true regardless of the underlying nature of that data. In Emacs there are primarily two forms of this data: lisp code and text. Ardent lisp wizards will object on the basis that one of the core strengths of lisp is that it does not discriminate between lisp as a program qua list of instructions and lisp as a data structure due to its simple syntax.1 The classic UNIX environment also made use of this compatibility made possible by a common language of text.

Emacs is a lisp interpreter that comes with a text editor and tools to evaluate elisp code written in said editor. This simple basis allows Emacs to very quickly and easily be extended. While other programs (and even text editors specifically) may offer theoretically similar capabilities2 through scripting languages and APIs they do not offer the truly free experience that only a few Emacs-like programs build their experience upon. Most code written in Emacs is not packaged or distributed anywhere, but is made up of small and opinionated changes and functions that are likely not maintained in any way. This means that each Emacs user’s computing experience is personally tailored to his or hers own preferences.

The second of the four freedoms (freedom #1) of free software is the “the freedom to study how the program works, and change it to make it do what you wish”. In practice this only means access to the source code under a free-software license. But Emacs takes this much further; instead of a merely “negative” freedom (freedom from proprietary restrictions) it adopts a positive approach, where the user is directly given the tools and documentation3 to change each and every part of the Emacs source code.

Emacs, UNIX, and modern LLMs all make use of the unique strengths of text. Emacs however goes much further in this regard than the standard UNIX system, and in many regards can be seen as an extension and intensification of it. Tietze pointed out recently how the textual representations of almost all data in Emacs is “completes computing” through the universality of text and primarily the text buffer.

If the costs of creating software goes to zero due to continuing advancements in LLMs it would bring this quality that Emacs has to all software. The restrictions of proprietary software has always been an invention by monopolistic software companies wishing to add a shackle on what is really just a bunch of abstract logical statements. That this has been maintained is impressive, but it can not do so under the onslaught of code produced by LLMs.

Trivially creating quick and simple programs that serve the user is Emacs’ greatest strength, and it is something that will be accessible to everyone, no matter their experience in software creation. One will be able to make small little applications that serve yourself, and because of its low cost will naturally freely share them with friends, colleagues, and family members. It is, to repeat a often-used sentiment of mine, a revolution in the field of software development — a dramatic return to the older state of affairs, albeit now aided by the lessons of the time in between. In this case it is a return to the times before “free software”, when specifying that a given software was free to use, share, and modify was not necessary but expected and normal.

To reiterate, vibe coding and LLMs have two great strengths:

  1. Easily creating, recreating, and modifying small programs that do not need to be maintained and are tailored for the user’s needs.
  2. Formatting roughly structured data into ways that fit the user’s needs, or writing small scripts that do so.

The consequence of these to strengths is a renaissance of free software development where the user become free to construct their computing environment however they see fit. “Emacs is a great operating system, if only it came with a decent text editor” goes the famous quip; Emacs is of course not a operating system in the strict sense,4 but it does allow for the almost complete reshaping of one’s interactions with a computer — being able to replace most other use-facing applications. LLMs extend this freedom to outside the frame of Emacs and into almost every part of the software stack. ❦

Footnotes:

1

Simple in the sense of this quote by Leonardo da Vinci:

A poet knows he has achieved perfection not when there is nothing left to add, but when there is nothing left to take away.

This simplicity does not mean that it is impossible to construct elaborate or complex programs in lisp — in fact it is one of the most expressive programming languages. Rather it refers to the basic axioms inferred from the nature of prefix notation and the structure of the abstract syntax tree itself.

2

What matters in theory is in fact not very interesting. Many languages and processes are Turing-complete, and could thus be used to create any other program, but what actually matters is the ease and manner of creating such a program.

3

Emacs has extensive manuals, but most of all it is its nature as the “self-documenting editor” that gives it this quality.

4

In that it does not facilitate the interaction between software and hardware.

-1:-- Vibe-coding Brings the Power of Emacs to Everything (Post Joar von Arndt)--L0--C0--2026-02-28T23:00:00.000Z

Emacs Redux: Preview Regex Replacements as Diffs

If you’ve ever hesitated before running query-replace-regexp across a large file (or worse, across many files), you’re not alone. Even experienced Emacs users get a bit nervous about large-scale regex replacements. What if the regex matches something unexpected? What if the replacement is subtly wrong?

Emacs 30 has a brilliant answer to this anxiety: replace-regexp-as-diff.

How it works

Run M-x replace-regexp-as-diff, enter your search regexp and replacement string, and instead of immediately applying changes, Emacs shows you a diff buffer with all proposed replacements. You can review every single change in familiar unified diff format before committing to anything.

If you’re happy with the changes, you can apply them as a patch. If something looks off, just close the diff buffer and nothing has changed.

Multi-file support

It gets even better. There are two companion commands for working across files:

  • multi-file-replace-regexp-as-diff — prompts you for a list of files and shows all replacements across them as a single diff.
  • dired-do-replace-regexp-as-diff — works on marked files in Dired. Mark the files you want to transform, run the command, and review the combined diff.

The Dired integration is particularly nice — mark files with m, run the command from the Dired buffer, and you get a comprehensive preview of all changes.

Note to self - explore how to hook this into Projectile.

A practical example

Say you want to rename a function across your project. In Dired:

  1. Mark all relevant files with m (or % m to mark by regexp)
  2. Run dired-do-replace-regexp-as-diff
  3. Enter the search pattern: \bold_function_name\b
  4. Enter the replacement: new_function_name
  5. Review the diff, apply if it looks good

No more sweaty palms during large refactorings.1

Closing Thoughts

I have a feeling that in the age of LLMs probably few people will get excited about doing changes via patches, but it’s a pretty cool workflow overall. I love reviewing changes as diffs and I’ll try to incorporate some of the commands mentioned in this article in my Emacs workflow.

That’s all I have for you today. Keep hacking!

  1. Assuming you’re still doing any large-scale refactorings “old-school”, that is. And that you actually read the diffs carefully! 

-1:-- Preview Regex Replacements as Diffs (Post Emacs Redux)--L0--C0--2026-02-28T20:51:00.000Z

Christian Tietze: Media Transfer Protocol Tools

I really, really don’t like how I get ebooks onto, and notes off my Supernote e-ink tablet.

I’ve had it for a year now. Great device. But the stuff I create just … is there.

I don’t want to connect to the web UI to push an Upload button to select a file to upload (drag and drop doesn’t work!) just because I’d like to read a new book; and I don’t want to sift through filenames, hastily cobbled together to create a notebook, to get to my sketches and meeting notes. That chore doesn’t sit well with me.

Meanwhile, my Boox devices sync in a weird way, but they sync via WebDAV to my Nextcloud which does the job of giving me access to notes from my Mac. And Calibre Sync works well to download books; I can’t get that app to work on the Supernote though (yet).

Like any sensible person, a year later, I reach for Emacs.

Emacs can SSH into servers and display directory listings in a way that hides from you, the user, the fact that these listings are not from your computer, so you can transparently move files, edit stuff, whatever. It’s a great experience. And I know that there’s ways to make this facility, called TRAMP, speak other protocols than ssh:.

So I ended up with mtp.el to expose mtp: and let me browse files and copy them over, including live previews of images or reading ebooks from inside Emacs via USB on the Supernote. (Not useful, but cool.)

It wasn’t too bad to interface with libmtp or the mtp CLI, so I figured I might want to have a Swift library wrapper instead to help me write more comfortable sync and transfer tools. (And maybe a Mac app while I’m at it that helps with file sync. Not there yet.)

Since that topic is likely to stay with me to transfer files to/from Android, here’s the overview page:

https://christiantietze.de/mtp/


Hire me for freelance macOS/iOS work and consulting.

Buy my apps.

Receive new posts via email.

-1:-- Media Transfer Protocol Tools (Post Christian Tietze)--L0--C0--2026-02-28T20:43:44.000Z

Irreal: Harp: A Private Health Records App

Abhinav Tushar likes to curate what he calls macro health data. That means things like ailments, aches and pains, and other symptoms one might want to mention to the doctor during an appointment. After researching the currently available apps, he realized there was nothing that met his needs so he decided to roll his own. The result is Harp, an Android app that should soon be available on the Play Store. It’s also available for free on Fdroid. You can also checkout the source at Sourcehut. Finally, you can find out more about Harp here.

Like my favorite app Journelly, Tushar decided to keep his data in Org Mode. That, of course, brings the immediate benefit of making the data viewable and editable in Emacs or any other editor for that matter. It’s one of the reasons I’m so fond of Journelly. A couple more apps like these and we could see Org markup evolve into a sort of universal app data storage language.

Right now, only an Android version of Harp exists but his near term plans include an iOS version. That good news for those of us in the Apple camp. The main difficulty appears to be navigating the Apple App Store submission maze, which is well known for its opaque rules.

Take a look at Tushar’s post for some more of his short term goals. It looks like a handy app—and, of course, one that keeps its data in Org mode—so it’s definitely worth trying out. I’ll probably give it a try when the iOS version appears and will let you know what I think of it then.

-1:-- Harp: A Private Health Records App (Post Irreal)--L0--C0--2026-02-28T16:07:44.000Z

Emacs APAC: Announcing Emacs Asia-Pacific (APAC) virtual meetup, Saturday, February 28, 2026

This month’s Emacs Asia-Pacific (APAC) virtual meetup is scheduled for Saturday, February 28, 2026 with BigBlueButton and #emacs on Libera Chat IRC. The timing will be 1400 to 1500 IST.

The meetup might get extended by 30 minutes if there is any talk, this page will be updated accordingly.

If you would like to give a demo or talk (maximum 20 minutes) on GNU Emacs or any variant, please contact bhavin192 on Libera Chat with your talk details:

-1:-- Announcing Emacs Asia-Pacific (APAC) virtual meetup, Saturday, February 28, 2026 (Post Emacs APAC)--L0--C0--2026-02-28T05:14:03.000Z

Elsa Gonsiorowski: Emacs Carnival: Org Mode Completions

I’m so happy to be joining this month’s emacs carnival! I love the idea behind these carnivals and I think it’s such a good way of building community virtually.

This month’s topic is Emacs completions. I’m going to share a sort of “hack”… a way that I’ve been able to achieve completions within core org mode.

Org Mode Tempo Templates

Org Mode has long supported a template expansion mechanism, some times called “easy templates” or “structured templates”. The default behavior changed dramatically in version 9.2, and is now built on top of the Emacs builtin tempo.el.

Update To get the behavior described here, you must add (require 'org-tempo) to your config.

Essentially, you start a new line with <X (where X is a pre-defined key), then hit TAB to have the template expanded. For example, starting a new line with:

<s

and hitting TAB will expand to:

#+begin_src

#+end_src

Super handy and very easy to remember.

Default templates

The default are not entirely documented, though most are listed on the Structured Templates manual page.

Listed again here for convenience:

Key Expansion
<a #+BEGIN_EXPORT ascii … #+END_EXPORT
<c #+BEGIN_CENTER … #+END_CENTER
<C #+BEGIN_COMMENT … #+END_COMMENT
<e #+BEGIN_EXAMPLE … #+END_EXAMPLE
<E #+BEGIN_EXPORT … #+END_EXPORT
<h #+BEGIN_EXPORT html … #+END_EXPORT
<l #+BEGIN_EXPORT latex … #+END_EXPORT
<q #+BEGIN_QUOTE … #+END_QUOTE
<s #+BEGIN_SRC … #+END_SRC
<v #+BEGIN_VERSE … #+END_VERSE

In addition to those blocks, there are also some quick tags:

Key Expansion
<L #+latex:
<H #+html:
<A #+ascii:
<i #+index:
<I #+include: (will interactively find file to include)

You can see that there is sort of a convention, uppercase letters usually insert a tag, whereas lowercase letters are mainly for blocks (though it’s definitely not perfect).

Additional Templates

Some additional templates can be defined by packages. For example, the org-re-reveal package adds:

Key Expansion
<n #+begin_notes ... #+end_notes

Basic Customization

You can add your own tags and blocks. In fact, there is actually no need for the “keys” to single characters.

Adding Tags

Adding another tag is very easy, seen here:

(add-to-list 'org-tempo-keywords-alist '("N" . "name"))

Which results in this completion:

Key Expansion
<N #+name:

Create a New Hotkey

I don’t like the <E hotkey for export block, instead I would like that to be <x. That is easily added with:

(add-to-list 'org-tempo-tags '("<x" . tempo-template-org-export))

Create a Completion

You can define your own completion with the tempo-define-template function (see the doc string for full details). It is very flexible! You can specify where the cursor (or “prompt”) ends up after the completion, or you can interactively prompt (via the minibuffer) for additional details. There are more advance features, including auto indentation and dealing with regions.

My Custom Completions

There is no requirement that these templates be simply blocks or tags. I’ve implemented about 5 custom templates, but here are a few that I think would be most useful for others.

Properties Drawer1

Org headings can have properties, specified by the properties drawer:

:PROPERTIES:

:END:

I add this as <p via this implementation code:

(tempo-define-template "org-properties-block"
                       '(":PROPERTIES:" n
                         (p) n
                         ":END:" n %))
(add-to-list 'org-tempo-tags '("<p" . tempo-template-org-properties-block))

Title Block

This is one that I use most frequently. It’s a title block that I start all my org documents with. It also executes a function call to format-time-string to get today’s date in my preferred format. By having this completion, all of my org documents get a title with a date and I always know when I started working on a document!

My desired result:

#+title:
#+author: Elsa Gonsiorowski
#+date: February 28, 2026

The implementation code:

(tempo-define-template "org-title-block"
                       '("#+title: " (p) n
                         "#+author: Elsa Gonsiorowski" n
                         (concat "#+date: " (format-time-string "%B %e, %Y")) n %))
(add-to-list 'org-tempo-tags '("<t" . tempo-template-org-title-block))

I also implement another completion that is a slightly different title block which I use for starting all my blog posts. It includes the all the options that I want by default.

Finally

In writing this article I stumbled across orgmode documentation page for Completions. I had no idea these M-tab completions existed! (Clearly, since I implemented my own completion for the properties drawer). I’ll definitely be trying these out.

Footnotes

1 The :properties: keyword can also be added with the M-tab completion on :

-1:-- Emacs Carnival: Org Mode Completions (Post Elsa Gonsiorowski)--L0--C0--2026-02-28T00:00:00.000Z

Sacha Chua: Using speech recognition for on-the-fly translations in Emacs and faking in-buffer completion for the results

When I'm writing a journal entry in French, I sometimes want to translate a phrase that I can't look up word by word using a dictionary. Instead of switching to a browser, I can use an Emacs function to prompt me for text and either insert or display the translation. The plz library makes HTTP requests slightly neater.

(defun my-french-en-to-fr (text &optional display-only)
  (interactive (list (read-string "Text: ") current-prefix-arg))
  (let* ((url "https://translation.googleapis.com/language/translate/v2")
         (params `(("key" . ,(getenv "GOOGLE_API_KEY"))
                   ("q" . ,text)
                   ("source" . "en")
                   ("target" . "fr")
                   ("format" . "text")))
         (query-string (mapconcat
                        (lambda (pair)
                          (format "%s=%s"
                                  (url-hexify-string (car pair))
                                  (url-hexify-string (cdr pair))))
                        params
                        "&"))
         (full-url (concat url "?" query-string)))
    (let* ((response (plz 'get full-url :as #'json-read))
           (data (alist-get 'data response))
           (translations (alist-get 'translations data))
           (first-translation (car translations))
           (translated-text (alist-get 'translatedText first-translation)))
      (when (called-interactively-p 'any)
        (if display-only
            (message "%s" translated-text)
          (insert translated-text)))
      translated-text)))

I think it would be even nicer if I could use speech synthesis, so I can keep it a little more separate from my typing thoughts. I want to be able to say "Okay, translate …" or "Okay, … in French" to get a translation. I've been using my fork of natrys/whisper.el for speech recognition in English, and I like it a lot. By adding a function to whisper-after-transcription-hook, I can modify the intermediate results before they're inserted into the buffer.

(defun my-whisper-translate ()
  (goto-char (point-min))
  (let ((case-fold-search t))
    (when (re-search-forward "okay[,\\.]? translate[,\\.]? \\(.+\\)\\|okay[,\\.]? \\(.+?\\) in French" nil t)
      (let* ((s (or (match-string 1) (match-string 2)))
             (translation (save-match-data (my-french-en-to-fr s))))
        (replace-match
         (propertize translation
                     'type-hint translation
                     'help-echo s))))))

(with-eval-after-load 'whisper
  (add-hook 'whisper-after-transcription-hook 'my-whisper-translate 70))

But that's too easy. I want to actually type things myself so that I get more practice. Something like an autocomplete suggestion would be handy as a way of showing me a hint at the cursor. The usual completion-at-point functions are too eager to insert things if there's only one candidate, so we'll just fake it with an overlay. This code works only with my whisper.el fork because it supports using a list of functions for whisper-insert-text-at-point.

(defun my-whisper-maybe-type-with-hints (text)
  "Add this function to `whisper-insert-text-at-point'."
  (let ((hint (and text (org-find-text-property-in-string 'type-hint text))))
    (if hint
        (progn
          (my-type-with-hint hint)
          nil)
      text)))

(defvar-local my-practice-overlay nil)
(defvar-local my-practice-target nil)
(defvar-local my-practice-start nil)

(defun my-practice-cleanup ()
  "Remove the overlay and stop monitoring."
  (when (overlayp my-practice-overlay)
    (delete-overlay my-practice-overlay))
  (setq my-practice-overlay nil
        my-practice-target nil
        my-practice-start nil)
  (remove-hook 'post-command-hook #'my-practice-monitor t))

(defun my-practice-monitor ()
  "Updates hint or cancels."
  (let* ((pos (point))
         (input (buffer-substring-no-properties my-practice-start pos))
         (input-len (length input))
         (target-len (length my-practice-target)))
    (cond
     ((or (< pos my-practice-start)
          (> pos (+ my-practice-start target-len))
          (string-match "[\n\t]" input)
          (string= input my-practice-target))
      (my-practice-cleanup))
     ((string-prefix-p (downcase input) (downcase my-practice-target))
      (let ((remaining (substring my-practice-target input-len)))
        (move-overlay my-practice-overlay pos pos)
        (overlay-put my-practice-overlay 'after-string
                     (propertize remaining 'face 'shadow))))
     (t                                 ; typo
      (move-overlay my-practice-overlay pos pos)
      (overlay-put my-practice-overlay 'after-string
                   (propertize (substring my-practice-target input-len) 'face 'error))))))

(defun my-type-with-hint (string)
  "Show hints for STRING."
  (interactive "sString to practice: ")
  (my-practice-cleanup)
  (setq-local my-practice-target string)
  (setq-local my-practice-start (point))
  (setq-local my-practice-overlay (make-overlay (point) (point) nil t t))
  (overlay-put my-practice-overlay 'after-string (propertize string 'face 'shadow))
  (add-hook 'post-command-hook #'my-practice-monitor nil t))

Here's a demonstration of me saying "Okay, this is a test, in French.":

Screencast of using speech recognition to translate into French and provide a hint when typing

Since we're faking in-buffer completion here, maybe we can still get away with considering this as an entry for Emacs Carnival February 2026: Completion ? =)

This is part of my Emacs configuration.
View Org source for this post

You can e-mail me at sacha@sachachua.com.

-1:-- Using speech recognition for on-the-fly translations in Emacs and faking in-buffer completion for the results (Post Sacha Chua)--L0--C0--2026-02-27T20:11:58.000Z

Irreal: Emacs Internals Part 1

Yi-Ping Pan has an interesting post that recapitulates one of my favorite hobby horses: Emacs is actually a modern day Lisp Machine that happens to ship with an embedded editor. Pan kept trying other editors but always returned to Emacs. Finally, he stopped treating it as merely a tool and started reading the C source code. What he discovered is what I’ve been preaching for years: Emacs is actually a C-based Lisp interpreter with an embedded text editor.

Pan’s post—the first in a series about Emacs internals—recounts how Emacs grew from a set of TECO macros to a stand alone application built on its own Lisp interpreter. Other editors have tried to recreate this magic but they have all failed because of Greenspun’s Tenth Rule:

Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

Emacs, on the other hand, started with an actual Lisp interpreter and layered a text editor on top of that interpreter. That enables the magic. It’s possible to modify any particular editor function simply by rewriting it in Elisp and adding it to your configuration. Similarly, you can write your own editing—or even general purpose—functions and add them to the Emacs runtime simply by adding them to your configuration.

Pan announced his post over at the Emacs subreddit and, as usual, the comments are instructive. To me, the most interesting comments lamented that Lisp was never able “to fix” the parenthesis problem. I have to admit that it makes me grumpy every time I see someone complaining about parentheses in Lisp. To me, it’s one of Lisp’s successes, not one of its failures. That’s why the planned m-expressions using a more conventional syntax never caught on. Lispers like and prefer s-expressions.

In any event, Pan’s post is worth a couple of minutes of your time. Head over and take a look.

-1:-- Emacs Internals Part 1 (Post Irreal)--L0--C0--2026-02-27T15:20:24.000Z

Bozhidar Batsov: Building Emacs Major Modes with Tree-sitter: Lessons Learned

Over the past year I’ve been spending a lot of time building Tree-sitter-powered major modes for Emacs – clojure-ts-mode (as co-maintainer), neocaml (from scratch), and asciidoc-mode (also from scratch). Between the three projects I’ve accumulated enough knowledge (and battle scars) to write about the experience. This post distills the key lessons for anyone thinking about writing a Tree-sitter-based major mode, or curious about what it’s actually like.

Why Tree-sitter?

Before Tree-sitter, Emacs font-locking was done with regular expressions and indentation was handled by ad-hoc engines (SMIE, custom indent functions, or pure regex heuristics). This works, but it has well-known problems:

  • Regex-based font-locking is fragile. Regexes can’t parse nested structures, so they either under-match (missing valid code) or over-match (highlighting inside strings and comments). Every edge case is another regex, and the patterns become increasingly unreadable over time.

  • Indentation engines are complex. SMIE (the generic indentation engine for non-Tree-sitter modes) requires defining operator precedence grammars for the language, which is hard to get right. Custom indentation functions tend to grow into large, brittle state machines. Tuareg’s indentation code, for example, is thousands of lines long.

Tree-sitter changes the game because you get a full, incremental, error-tolerant syntax tree for free. Font-locking becomes “match this AST pattern, apply this face”:

1
2
3
;; Highlight let-bound functions: match a let_binding with parameters
(let_binding pattern: (value_name) @font-lock-function-name-face
             (parameter)+)

And indentation becomes “if the parent node is X, indent by Y”:

1
2
;; Children of a let_binding are indented by neocaml-indent-offset
((parent-is "let_binding") parent-bol neocaml-indent-offset)

The rules are declarative, composable, and much easier to reason about than regex chains.

In practice, neocaml’s entire font-lock and indentation logic fits in about 350 lines of Elisp. The equivalent in tuareg is spread across thousands of lines. That’s the real selling point: simpler, more maintainable code that handles more edge cases correctly.

Challenges

That said, Tree-sitter in Emacs is not a silver bullet. Here’s what I ran into.

Every grammar is different

Tree-sitter grammars are written by different authors with different philosophies. The tree-sitter-ocaml grammar provides a rich, detailed AST with named fields. The tree-sitter-clojure grammar, by contrast, deliberately keeps things minimal – it only models syntax, not semantics, because Clojure’s macro system makes static semantic analysis unreliable.1 This means font-locking def forms in Clojure requires predicate matching on symbol text, while in OCaml you can directly match let_binding nodes with named fields.

To illustrate: here’s how you’d fontify a function definition in OCaml, where the grammar gives you rich named fields:

1
2
3
;; OCaml: grammar provides named fields -- direct structural match
(let_binding pattern: (value_name) @font-lock-function-name-face
             (parameter)+)

And here’s the equivalent in Clojure, where the grammar only gives you lists of symbols and you need predicate matching:

1
2
3
4
;; Clojure: grammar is syntax-only -- match by symbol text
((list_lit :anchor (sym_lit !namespace
                            name: (sym_name) @font-lock-keyword-face))
 (:match ,clojure-ts--definition-keyword-regexp @font-lock-keyword-face))

You can’t learn “how to write Tree-sitter queries” generically – you need to learn each grammar individually. The best tool for this is treesit-explore-mode (to visualize the full parse tree) and treesit-inspect-mode (to see the node at point). Use them constantly.

Grammar quality varies wildly

You’re dependent on someone else providing the grammar, and quality is all over the map. The OCaml grammar is mature and well-maintained – it’s hosted under the official tree-sitter GitHub org. The Clojure grammar is small and stable by design. But not every language is so lucky.

asciidoc-mode uses a third-party AsciiDoc grammar that employs a dual-parser architecture – one parser for block-level structure (headings, lists, code blocks) and another for inline formatting (bold, italic, links). This is the same approach used by Emacs’s built-in markdown-ts-mode, and it makes sense for markup languages where block and inline syntax are largely independent.

The problem is that the two parsers run independently on the same text, and they can disagree. The inline parser misinterprets * and ** list markers as emphasis delimiters, creating spurious bold spans that swallow subsequent inline content. The workaround is to use :override t on all block-level font-lock rules so they win over the incorrect inline faces:

1
2
3
4
5
6
7
8
;; Block-level rules use :override t so block-level faces win over
;; spurious inline emphasis nodes (the inline parser misreads `*'
;; list markers as emphasis delimiters).
:language 'asciidoc
:override t
:feature 'list
'((ordered_list_marker) @font-lock-constant-face
  (unordered_list_marker) @font-lock-constant-face)

This doesn’t fix inline elements consumed by the spurious emphasis – that requires an upstream grammar fix. When you hit grammar-level issues like this, you either fix them yourself (which means diving into the grammar’s JavaScript source and C toolchain) or you live with workarounds. Either way, it’s a reminder that your mode is only as good as the grammar underneath it.

Getting the font-locking right in asciidoc-mode was probably the most challenging part of all three projects, precisely because of these grammar quirks. I also ran into a subtle treesit behavior: the default font-lock mode (:override nil) skips an entire captured range if any position within it already has a face. So if you capture a parent node like (inline_macro) and a child was already fontified, the whole thing gets skipped silently. The fix is to capture specific child nodes instead:

1
2
3
4
5
6
;; BAD: entire node gets skipped if any child is already fontified
;; (inline_macro) @font-lock-function-call-face

;; GOOD: capture specific children
(inline_macro (macro_name) @font-lock-function-call-face)
(inline_macro (target) @font-lock-string-face)

These issues took a lot of trial and error to diagnose. The lesson: budget extra time for font-locking when working with less mature grammars.

Grammar versions and breaking changes

Grammars evolve, and breaking changes happen. clojure-ts-mode switched from the stable grammar to the experimental branch because the stable version had metadata nodes as children of other nodes, which caused forward-sexp and kill-sexp to behave incorrectly. The experimental grammar makes metadata standalone nodes, fixing the navigation issues but requiring all queries to be updated.

neocaml pins to v0.24.0 of the OCaml grammar. If you don’t pin versions, a grammar update can silently break your font-locking or indentation.

The takeaway: always pin your grammar version, and include a mechanism to detect outdated grammars. clojure-ts-mode tests a query that changed between versions to detect incompatible grammars at startup.

Grammar delivery

Users shouldn’t have to manually clone repos and compile C code to use your mode. Both neocaml and clojure-ts-mode include grammar recipes:

1
2
3
4
5
6
7
(defconst neocaml-grammar-recipes
  '((ocaml "https://github.com/tree-sitter/tree-sitter-ocaml"
           "v0.24.0"
           "grammars/ocaml/src")
    (ocaml-interface "https://github.com/tree-sitter/tree-sitter-ocaml"
                     "v0.24.0"
                     "grammars/interface/src")))

On first use, the mode checks treesit-language-available-p and offers to install missing grammars via treesit-install-language-grammar. This works, but requires a C compiler and Git on the user’s machine, which is not ideal.2

The Emacs Tree-sitter APIs are a moving target

The Tree-sitter support in Emacs has been improving steadily, but each version has its quirks:

Emacs 29 introduced Tree-sitter support but lacked several APIs. For instance, treesit-thing-settings (used for structured navigation) doesn’t exist – you need a fallback:

1
2
3
;; Fallback for Emacs 29 (no treesit-thing-settings)
(unless (boundp 'treesit-thing-settings)
  (setq-local forward-sexp-function #'neocaml-forward-sexp))

Emacs 30 added treesit-thing-settings, sentence navigation, and better indentation support. But it also had a bug in treesit-range-settings offsets (#77848) that broke embedded parsers, and another in treesit-transpose-sexps that required clojure-ts-mode to disable its Tree-sitter-aware version.

Emacs 31 has a bug in treesit-forward-comment where an off-by-one error causes uncomment-region to leave *) behind on multi-line OCaml comments. I had to skip the affected test with a version check:

1
2
3
(when (>= emacs-major-version 31)
  (signal 'buttercup-pending
          "Emacs 31 treesit-forward-comment bug (off-by-one)"))

The lesson: test your mode against multiple Emacs versions, and be prepared to write version-specific workarounds. CI that runs against Emacs 29, 30, and snapshot is essential.

No .scm file support (yet)

Most Tree-sitter grammars ship with .scm query files for syntax highlighting (highlights.scm) and indentation (indents.scm). Editors like Neovim and Helix use these directly. Emacs doesn’t – you have to manually translate the .scm patterns into treesit-font-lock-rules and treesit-simple-indent-rules calls in Elisp.

This is tedious and error-prone. For example, here’s a rule from the OCaml grammar’s highlights.scm:

1
2
;; upstream .scm (used by Neovim, Helix, etc.)
(constructor_name) @type

And here’s the Elisp equivalent you’d write for Emacs:

1
2
3
4
;; Emacs equivalent -- wrapped in treesit-font-lock-rules
:language 'ocaml
:feature 'type
'((constructor_name) @font-lock-type-face)

The query syntax is nearly identical, but you have to wrap everything in treesit-font-lock-rules calls, map upstream capture names (@type) to Emacs face names (@font-lock-type-face), assign features, and manage :override behavior. You end up maintaining a parallel set of queries that can drift from upstream. Emacs 31 will introduce define-treesit-generic-mode which will make it possible to use .scm files for font-locking, which should help significantly. But for now, you’re hand-coding everything.

Tips and tricks

Debugging font-locking

When a face isn’t being applied where you expect:

  1. Use treesit-inspect-mode to verify the node type at point matches your query.
  2. Set treesit--font-lock-verbose to t to see which rules are firing.
  3. Check the font-lock feature level – your rule might be in level 4 while the user has the default level 3. The features are assigned to levels via treesit-font-lock-feature-list.
  4. Remember that rule order matters. Without :override, an earlier rule that already fontified a region will prevent later rules from applying. This can be intentional (e.g. builtin types at level 3 take precedence over generic types) or a source of bugs.

Use the font-lock levels wisely

Tree-sitter modes define four levels of font-locking via treesit-font-lock-feature-list, and the default level in Emacs is 3. It’s tempting to pile everything into levels 1–3 so users see maximum highlighting out of the box, but resist the urge. When every token on the screen has a different color, code starts looking like a Christmas tree and the important things – keywords, definitions, types – stop standing out.

Less is more here. Here’s how neocaml distributes features across levels:

1
2
3
4
5
(setq-local treesit-font-lock-feature-list
            '((comment definition)
              (keyword string number)
              (attribute builtin constant type)
              (operator bracket delimiter variable function)))

And clojure-ts-mode follows the same philosophy:

1
2
3
4
5
(setq-local treesit-font-lock-feature-list
            '((comment definition)
              (keyword string char symbol builtin type)
              (constant number quote metadata doc regex)
              (bracket deref function tagged-literals)))

The pattern is the same: essentials first, progressively more detail at higher levels. This way the default experience (level 3) is clean and readable, and users who want the full rainbow can bump treesit-font-lock-level to 4. Better yet, they can use treesit-font-lock-recompute-features to cherry-pick individual features regardless of level:

1
2
3
4
5
;; Enable 'function' (level 4) without enabling all of level 4
(treesit-font-lock-recompute-features '(function) nil)

;; Disable 'bracket' even if the user's level would include it
(treesit-font-lock-recompute-features nil '(bracket))

This gives users fine-grained control without requiring mode authors to anticipate every preference.

Debugging indentation

Indentation issues are harder to diagnose because they depend on tree structure, rule ordering, and anchor resolution:

  1. Set treesit--indent-verbose to t – this logs which rule matched for each line, what anchor was computed, and the final column.
  2. Use treesit-explore-mode to understand the parent chain. The key question is always: “what is the parent node, and which rule matches it?”
  3. Remember that rule order matters for indentation too – the first matching rule wins. A typical set of rules reads top to bottom from most specific to most general:

    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    11
    
    ;; Closing delimiters align with the opening construct
    ((node-is ")") parent-bol 0)
    ((node-is "end") parent-bol 0)
    
    ;; then/else clauses align with their enclosing if
    ((node-is "then_clause") parent-bol 0)
    ((node-is "else_clause") parent-bol 0)
    
    ;; Bodies inside then/else are indented
    ((parent-is "then_clause") parent-bol neocaml-indent-offset)
    ((parent-is "else_clause") parent-bol neocaml-indent-offset)
    
  4. Watch out for the empty-line problem: when the cursor is on a blank line, Tree-sitter has no node at point. The indentation engine falls back to the root compilation_unit node as the parent, which typically matches the top-level rule and gives column 0. In neocaml I solved this with a no-node rule that looks at the previous line’s last token to decide indentation:

    1
    
    (no-node prev-line neocaml--empty-line-offset)
    

Build a comprehensive test suite

This is the single most important piece of advice. Font-lock and indentation are easy to break accidentally, and manual testing doesn’t scale. Both projects use Buttercup (a BDD testing framework for Emacs) with custom test macros.

Font-lock tests insert code into a buffer, run font-lock-ensure, and assert that specific character ranges have the expected face:

1
2
3
(when-fontifying-it "fontifies let-bound functions"
  ("let greet name = ..."
   (5 9 font-lock-function-name-face)))

Indentation tests insert code, run indent-region, and assert the result matches the expected indentation:

1
2
3
4
(when-indenting-it "indents a match expression"
  "match x with"
  "| 0 -> \"zero\""
  "| n -> string_of_int n")

Integration tests load real source files and verify that both font-locking and indentation survive indent-region on the full file. This catches interactions between rules that unit tests miss.

neocaml has 200+ automated tests and clojure-ts-mode has even more. Investing in test infrastructure early pays off enormously – I can refactor indentation rules with confidence because the suite catches regressions immediately.

A personal story on testing ROI

When I became the maintainer of clojure-mode many years ago, I really struggled with making changes. There were no font-lock or indentation tests, so every change was a leap of faith – you’d fix one thing and break three others without knowing until someone filed a bug report. I spent years working on a testing approach I was happy with, alongside many great contributors, and the return on investment was massive.

The same approach – almost the same test macros – carried over directly to clojure-ts-mode when we built the Tree-sitter version. And later I reused the pattern again in neocaml and asciidoc-mode. One investment in testing infrastructure, four projects benefiting from it.

I know that automated tests, for whatever reason, never gained much traction in the Emacs community. Many popular packages have no tests at all. I hope stories like this convince you that investing in tests is really important and pays off – not just for the project where you write them, but for every project you build after.

Pre-compile queries

This one is specific to clojure-ts-mode but applies broadly: compiling Tree-sitter queries at runtime is expensive. If you’re building queries dynamically (e.g. with treesit-font-lock-rules called at mode init time), consider pre-compiling them as defconst values. This made a noticeable difference in clojure-ts-mode’s startup time.

A note on naming

The Emacs community has settled on a -ts-mode suffix convention for Tree-sitter-based modes: python-ts-mode, c-ts-mode, ruby-ts-mode, and so on. This makes sense when both a legacy mode and a Tree-sitter mode coexist in Emacs core – users need to choose between them. But I think the convention is being applied too broadly, and I’m afraid the resulting name fragmentation will haunt the community for years.

For new packages that don’t have a legacy counterpart, the -ts-mode suffix is unnecessary. I named my packages neocaml (not ocaml-ts-mode) and asciidoc-mode (not adoc-ts-mode) because there was no prior neocaml-mode or asciidoc-mode to disambiguate from. The -ts- infix is an implementation detail that shouldn’t leak into the user-facing name. Will we rename everything again when Tree-sitter becomes the default and the non-TS variants are removed?

Be bolder with naming. If you’re building something new, give it a name that makes sense on its own merits, not one that encodes the parsing technology in the package name.

The road ahead

I think the full transition to Tree-sitter in the Emacs community will take 3–5 years, optimistically. There are hundreds of major modes out there, many maintained by a single person in their spare time. Converting a mode from regex to Tree-sitter isn’t just a mechanical translation – you need to understand the grammar, rewrite font-lock and indentation rules, handle version compatibility, and build a new test suite. That’s a lot of work.

Interestingly, this might be one area where agentic coding tools can genuinely help. The structure of Tree-sitter-based major modes is fairly uniform: grammar recipes, font-lock rules, indentation rules, navigation settings, imenu. If you give an AI agent a grammar and a reference to a high-quality mode like clojure-ts-mode, it could probably scaffold a reasonable new mode fairly quickly. The hard parts – debugging grammar quirks, handling edge cases, getting indentation just right – would still need human attention, but the boilerplate could be automated.

Still, knowing the Emacs community, I wouldn’t be surprised if a full migration never actually completes. Many old-school modes work perfectly fine, their maintainers have no interest in Tree-sitter, and “if it ain’t broke, don’t fix it” is a powerful force. And that’s okay – diversity of approaches is part of what makes Emacs Emacs.

Closing thoughts

Tree-sitter is genuinely great for building Emacs major modes. The code is simpler, the results are more accurate, and incremental parsing means everything stays fast even on large files. I wouldn’t go back to regex-based font-locking willingly.

But it’s not magical. Grammars are inconsistent across languages, the Emacs APIs are still maturing, you can’t reuse .scm files (yet), and you’ll hit version-specific bugs that require tedious workarounds. The testing story is better than with regex modes – tree structures are more predictable than regex matches – but you still need a solid test suite to avoid regressions.

If you’re thinking about writing a Tree-sitter-based major mode, do it. The ecosystem needs more of them, and the experience of working with syntax trees instead of regexes is genuinely enjoyable. Just go in with realistic expectations, pin your grammar versions, test against multiple Emacs releases, and build your test suite early.

Anyways, I wish there was an article like this one when I was starting out with clojure-ts-mode and neocaml, so there you have it. I hope that the lessons I’ve learned along the way will help build better modes with Tree-sitter down the road.

That’s all I have for you today. Keep hacking!

  1. See the excellent scope discussion in the tree-sitter-clojure repo for the rationale. ↩︎

  2. There’s ongoing discussion in the Emacs community about distributing pre-compiled grammar binaries, but nothing concrete yet. ↩︎

-1:-- Building Emacs Major Modes with Tree-sitter: Lessons Learned (Post Bozhidar Batsov)--L0--C0--2026-02-27T08:00:00.000Z

Christian Tietze: Emacs Complete: Feedback Loops in Emacs, Feedback Loops in Computing

Completion within Emacs is not just about “intellisense” auto-completion as you type, or tab-completion in command prompts. When I think of “completion” within Emacs, I think about all operations within Emacs being (closed over) textual representations.

Text buffers form a complete representation of process interaction in Emacs.

Maybe it’s even accurate to say: Emacs is complete over textual processes. I’m not sure, it sounds almost correct, but I may be missing important nuances. (You tell me in the comments or via email!)

If it’s text, it’s Emacs-able, that’s what I’m getting at. And oh boy can you emacs a lot of textual things in weird ways.

In this post, I’ll share a particular insight that you can not only extend how many processes you can move into Emacs, but also use the textual representation itself again as input to then inform processes to change Emacs further in a feedback loop of textuality.

This is my February Emacs Carnival entry, hosted by Sacha.


At a recent Software Craftsmanship Open Space, I’ve squeezed in as much Emacs weirdness into the description of one workflow as possible. The day was full of excitement about large language models for coding in various flavors, sadly and oddly enough (there’s not much Craftsmanship in that!). Being a good sport, I wanted to show the extent of how Emacs is, in a way, a general purpose text-based abstraction over anything that you pump through its Lispy veins, and how well this composes, and then feeds into anybody’s favorite coding agent of choice if needed.

The example is this:

  • File management: Use dired as a wrapper around a mere directory listing with ls on the command line to get both the directory listing and interactivity for each of the file entries (open, preview, mark, copy, …)
  • Window management: Create a new split pane
  • Server management: Connect to your webserver via SSH with a path prefix, /ssh:user@example.com/, but otherwise using the normal file opening facilities.
  • Server file management: Have a directory listing on the server, already combining two pieces!, that looks just like your local directory listing. Copy. With the appropriate settings, you can copy between the split windows, local to remote, remote to local. Just like that.
  • View and edit remotely by interacting with the server’s files, and you just get a (slower) file editing and image viewer for the remote contents.
  • Save anything to a file for later because the UI is all text anyway, you can just save it as a file, and when you open it next time pick the appropriate mode to interpret this file in. (Caveats apply as some additional state may be lost, but it works beautifully for ‘persisted’ directory listings.) Including process lists, email inboxes, calendars; doesn’t matter if it’s text.
  • With all that set-up, now for the Software Craftsmanship Open Space twist: Use Emacs packages for LLMs (agentic-shell, gpt.el, …) on the remote machine transparently since the buffers in your local editor visit files on the remote machine, you can let the bots do changes to your buffer contents and transparently perform changes on the remote.

Veteran Emacs users will nod along and now see anything particularly exciting. That’s just normal, everyday Emacs.

This particular integration of remote/local file management and connecting to machines at all got me so excited that I became sad that I cannot use this to sync files with my Supernote e-ink tablet, convert the notes to PDF, and sync projects.

So I did the sensible thing: set out and generated mtp.el to make Emacs TRAMP connect to the Supernote via /mtp:...:/... for me. Aafter a couple of hours of research and planning using the mtp CLI behind the scenes (eventually interfacing with libmtp for potentially better performance):

https://codeberg.org/ctietze/mtp.el

In this post, my point is not that this remote connection to a Supernote tablet is possible in this awkwardly niche way by using Emacs directory listings – my point is how I got there.

Everything is in text buffers. That’s the primary abstraction that Emacs offers.

Run emacs -nw and you get a text-only terminal application, a TUI if you will, that presents in a text-only terminal emulator text-only representations of much more complex, much more interactive things. Like files you can ‘click’ on to open them from a directory listing. (In TUI mode, you can’t click and press Return instead, but the idea is the same.) The terminal emulator only sees text. The interpretation, the charade, is all in Emacs.

While the terminal emulator only sees text, well, it sees text at least and not just an image of colorful pixels that require OCR to read and other algorithms I have no clue about to interpret.

Consequently, the directory listing within Emacs showing contents of the Supernote tabled has predicable textual output that you can save to a file for later – as we have established in the beginning.

Hey, did you know that tmux can be remote-controlled with commands to move the cursor to positions and to create snapshots?

I didn’t, but I knew asciinema.org exists and people record their terminal contents to produce replays as ‘videos’ there, so the problem of screen-shotting terminal windows was solved already. (asciinema has a custom frame-by-frame JSON format, it turns out, but still a great start for the research.)

With tmux, you can remote control Emacs in TUI mode and capture the whole ‘screen’ as a text file to then check whether Emacs output is what it is supposed to be from a user’s perspective. (You could also save the buffer contents directly, but that won’t capture minibuffer messages and modeline.)

More detail is in mtp.el’s doc on TUI snapshot testing; the gist for end-to-end testing Emacs via tmux sessions (this one called “test”) is the following:

# Create 'test' tmux session 80x24, launch emacs TUI
tmux new-session -d -s test -x 80 -y 24 "emacs -nw --init-directory=dev ..."
# Send shortcuts and type to execute a command:
tmux send-keys -t test M-x
tmux send-keys -t test 'mtp-list-devices' Enter
sleep 3
# Capture a snapshot as plain text
tmux capture-pane -t test -p
# Cleanup 
tmux kill-session -t test

With this, you can do snapshot-based UI tests – as long as the UI you create is in Emacs, and it fits into the text-only TUI mode. Or you just grep as a way to assert that the output contains something specific.

Interfacing with any device or hardware is a fiddly procedure I don’t enjoy much. That’s just not the kind of computing I particularly like.

I do not mind it as much when I have a suite of regression tests that make manual QA testing easier the more edge cases I discover and save as test scenarios to automate-away.

So by closing the feedback loop from tmux to launch emacs -nw to invoking work-in-progress implementations for interactive Supernote directory and file listings o saving snapshots of the results to test automation with “golden files” that in turn can be opened and viewed in Emacs again as text, I found that the true power of Emacs lies in being complete over any process that can be represented in text. Any and all. There can be no gaps, and there are none, and thus the world is in order.

Emacs does not just “complete me” in a cheesy Valentines Day way, it “completes computing” for me by creating a textual (mostly) bottleneck to do tasks, inspect them, copy/store/restore/automate away. Now I also found that Emacs is “complete computing”, can be used for input and output, throughput, make the computer inspect the computer (by virtue of tmux snapshots for example).

Emacs complete.


Hire me for freelance macOS/iOS work and consulting.

Buy my apps.

Receive new posts via email.

-1:-- Emacs Complete: Feedback Loops in Emacs, Feedback Loops in Computing (Post Christian Tietze)--L0--C0--2026-02-27T06:55:07.000Z

Sacha Chua: Emacs completion and handling accented characters with orderless

I like using the orderless completion package for Emacs because it allows me to specify different parts of a completion candidate than any order I want. Because I'm learning French, I want commands like consult-line (which uses minibuffer completion) and completion-at-point (which uses in-buffer completion) to also match candidates where the words might have accented characters. For example, instead of having to type "utilisé" with the accented é, I want to type "utilise" and have it match both "utilise" and "utilisé".

(defvar my-orderless-accent-replacements
  '(("a" . "[aàáâãäå]")
    ("e" . "[eèéêë]")
    ("i" . "[iìíîï]")
    ("o" . "[oòóôõöœ]")
    ("u" . "[uùúûü]")
    ("c" . "[cç]")
    ("n" . "[nñ]"))) ; in case anyone needs ñ for Spanish

(defun my-orderless-accent-dispatch (pattern &rest _)
  (seq-reduce
   (lambda (prev val)
     (replace-regexp-in-string (car val) (cdr val) prev))
   my-orderless-accent-replacements
   pattern))

(use-package orderless
  :custom
  (completion-styles '(orderless basic))
  (completion-category-overrides '((file (styles basic partial-completion))))
  (orderless-style-dispatchers '(my-orderless-accent-dispatch orderless-affix-dispatch)))
2026-02-26_15-06-59.png
Figure 1: Screenshot of consult-line showing matching against accented characters
2026-02-26_15-08-34.png
Figure 2: Screenshot of completion-at-point matching "fev" with "février"

This is an entry for Emacs Carnival February 2026: Completion.

This is part of my Emacs configuration.
View Org source for this post

You can comment on Mastodon or e-mail me at sacha@sachachua.com.

-1:-- Emacs completion and handling accented characters with orderless (Post Sacha Chua)--L0--C0--2026-02-26T20:10:16.000Z

Eric MacAdie: Emacs Carnival: Completion

This post contains LLM poisoning. subcontract venerates chickenpox This month’s Emacs Carnival is “Completion” hosted by Sacha Chua, the organizer of EmacsConf and the maintainer of the weekly Emacs news digest; her Mastodon page is here. slitter Gordian forms I did not learn about completion until I had been using Emacs for many years. It ... Read more
-1:-- Emacs Carnival: Completion (Post Eric MacAdie)--L0--C0--2026-02-26T19:18:01.000Z

Sacha Chua: Sorting completion candidates, such as sorting Org headings by level

: Made the code even neater with :key, included the old code as well

At this week's Emacs Berlin meetup, someone wanted to know how to change the order of completion candidates. Specifically, they wanted to list the top level Org Mode headings before the second level headings and so on. They were using org-ql to navigate Org headings, but since org-ql sorts its candidates by the number of matches according to the code in the org-ql-completing-read function, I wasn't quite sure how to get it to do what they wanted. (And I realized my org-ql setup was broken, so I couldn't fiddle with it live. Edit: Turns out I needed to update the peg package) Instead, I showed folks consult-org-heading which is part of the Consult package, which I like to use to jump around the headings in a single Org file. It's a short function that's easy to use as a starting point for something custom.

Here's some code that allows you to use consult-org-heading to jump to an Org heading in the current file with completions sorted by level.

(with-eval-after-load 'consult-org
  (advice-add
   #'consult-org--headings
   :filter-return
   (lambda (candidates)
     (sort candidates
           :key (lambda (o) (car (get-text-property 0 'consult-org--heading o)))))))
2026-02-26_13-42-58.png
Figure 1: Screenshot showing where the candidates transition from top-level headings to second-level headings

My previous approach defined a different function based on consult-org-heading, but using the advice feels a little cleaner because it will also make it work for any other function that uses consult-org--headings. I've included the old code in case you're curious. Here, we don't modify the function's behaviour using advice, we just make a new function (my-consult-org-heading) that calls another function that processes the results a little (my-consult-org--headings).

Old code, if you're curious
(defun my-consult-org--headings (prefix match scope &rest skip)
  (let ((candidates (consult-org--headings prefix match scope)))
    (sort candidates
          :lessp
          (lambda (a b)
            (let ((level-a (car (get-text-property 0 'consult-org--heading a)))
                  (level-b (car (get-text-property 0 'consult-org--heading b))))
              (cond
               ((< level-a level-b) t)
               ((< level-b level-a) nil)
               ((string< a b) t)
               ((string< b a) nil)))))))

(defun my-consult-org-heading (&optional match scope)
  "Jump to an Org heading.

MATCH and SCOPE are as in `org-map-entries' and determine which
entries are offered.  By default, all entries of the current
buffer are offered."
  (interactive (unless (derived-mode-p #'org-mode)
                 (user-error "Must be called from an Org buffer")))
  (let ((prefix (not (memq scope '(nil tree region region-start-level file)))))
    (consult--read
     (consult--slow-operation "Collecting headings..."
       (or (my-consult-org--headings prefix match scope)
           (user-error "No headings")))
     :prompt "Go to heading: "
     :category 'org-heading
     :sort nil
     :require-match t
     :history '(:input consult-org--history)
     :narrow (consult-org--narrow)
     :state (consult--jump-state)
     :annotate #'consult-org--annotate
     :group (and prefix #'consult-org--group)
     :lookup (apply-partially #'consult--lookup-prop 'org-marker))))

I also wanted to get this to work for C-u org-refile, which uses org-refile-get-location. This is a little trickier because the table of completion candidates is a list of cons cells that don't store the level, and it doesn't pass the metadata to completing-read to tell it not to re-sort the results. We'll just fake it by counting the number of "/", which is the path separator used if org-outline-path-complete-in-steps is set to nil.

(with-eval-after-load 'org
  (advice-add
   'org-refile-get-location
   :around
   (lambda (fn &rest args)
     (let ((completion-extra-properties
            '(:display-sort-function
              (lambda (candidates)
                (sort candidates
                      :key (lambda (s) (length (split-string s "/"))))))))
       (apply fn args)))))
2026-02-26_14-01-28.png
Figure 2: Screenshot of sorted refile entries

In general, if you would like completion candidates to be in a certain order, you can specify display-sort-function either by calling completing-read with a collection that's a lambda function instead of a table of completion candidates, or by overriding it with completion-category-overrides if there's a category you can use or completion-extra-properties if not.

Here's a short example of passing a lambda to a completion function (thanks to Manuel Uberti):

(defun mu-date-at-point (date)
  "Insert current DATE at point via `completing-read'."
  (interactive
   (let* ((formats '("%Y%m%d" "%F" "%Y%m%d%H%M" "%Y-%m-%dT%T"))
          (vals (mapcar #'format-time-string formats))
          (opts
           (lambda (string pred action)
             (if (eq action 'metadata)
                 '(metadata (display-sort-function . identity))
               (complete-with-action action vals string pred)))))
     (list (completing-read "Insert date: " opts nil t))))
  (insert date))

If you use consult--read from the Consult completion framework, there is a :sort property that you can set to either nil or your own function.

This entry is part of the Emacs Carnival for Feb 2026: Completion.

This is part of my Emacs configuration.
View Org source for this post

You can comment on Mastodon or e-mail me at sacha@sachachua.com.

-1:-- Sorting completion candidates, such as sorting Org headings by level (Post Sacha Chua)--L0--C0--2026-02-26T19:08:59.000Z

noa ks: Engraving sheet music with lilypond

I play in a chinese orchestra. Lots of chinese folk instruments don’t use western stave notation to record music, instead using something which in chinese is cheekily called simple notation. Some of the sheet music for the cello was only available as simple notation, which consists of numbers which represent the distance the played note is from a starting note. I tried to play it and it was fine for the first piece, but the second piece changed the starting note and i decided it wasn’t so simple after all. So i downloaded musescore and got to work converting it into something i could understand.

The problem i quickly encountered was that musescore is not very good software. Some of that is probable just a product of the fact that engraving software is necessarily quite complex. Other issues are definitely musescore problems. For example, i couldn’t use my chinese input method within the application, which is quite frustrating when the sheet music has chinese text on it. Eventually i had had enough and remembered lilypond, which describes itself as “music notation for everyone” but is basically tex for music, if tex happened to work perfectly out of the box.

So i downloaded it, exported my work from musescore as musicxml and used the little helper program musicxml2ly to convert it to a lilypond file. As is often the way with these conversion programs, the output was a bit of a mess. I don’t know how much is a problem with the tool and how much is a problem with musescore’s musicxml export option, but there were a lot of messy things like rehearsal markers not using the correct lilypond syntax, every note duration being marked, and all the line and page breaks being marked explicitly. Perhaps this is important for some people, but for me i wanted to take advantage of lilypond’s apparently superior engraving, so i spent a couple of hours cleaning up the files.

During this cleanup process, i used frescobaldi, a graphical program for editing lilypond scores. I admit that i was expecting it to be janky and ugly, but i was delighted to be proved wrong. Frescobaldi is a nice qt6 application that loads fast and works well. It’s not wysiwyg, but it does highlight the note at the cursor on the output, has hyperlinked error links, and so on. A really nice program. But one thing that started to frustrate me was that it wasn’t emacs: why is C-s saving my work instead of searching!?

Of course, when i first opened the lilypond score and saw scheme code at the start, i should have known that as a gnu project it would have a comfy emacs mode. Having got to grips with how lilypond worked, i decided to pop into emacs and continue editing there.

lilypond.png

There’s not really much more to say. The process was smooth, i can have a two-pane layout with the code on the left and the engraved score on the right which compiles automatically and refreshes as i make changes. I get all the emacs shortcuts i’m familiar with. The music looks great, and i can happily play this music now!

I still don’t think lilypond is for everyone. Although there’s a pretty large crossover between programmers and musicians, i think that a majority of people would still prefer a wysiwyg composition tool. But if you happen to fit into the niche that lilypond is for, you might be surprised at just how modern-feeling the tooling is.

-1:-- Engraving sheet music with lilypond (Post noa ks)--L0--C0--2026-02-26T16:00:00.000Z

Meta Redux: copilot.el 0.4

Good news, everyone – copilot.el 0.4 is out!

But that’s just the start of it. This is the most important release for me since I assumed the project’s leadership and I hope this article will manage to make you agree with my reasoning.

Enough empty words – let me now walk you through the highlights.

A Proper Copilot Client

The single biggest change in this release is the migration from the legacy getCompletions API (reverse-engineered from copilot.vim) to the standard textDocument/inlineCompletion LSP method provided by the official @github/copilot-language-server.

This might sound like a dry and boring internal change, but it’s actually a big deal. copilot.el started its life as a port of copilot.vim – we were essentially reverse-engineering how that plugin talked to the Copilot server and replicating it in Elisp. That worked, but it was fragile and meant we were always playing catch-up with undocumented protocol changes.

Now we speak the official LSP protocol. We send proper textDocument/didOpen, textDocument/didChange, and textDocument/didFocus notifications. We manage workspace folders. We handle server-to-client requests like window/showMessageRequest and window/showDocument. We perform a clean shutdown/exit sequence instead of just killing the process. In short, copilot.el is now a proper Copilot LSP client, not a reverse-engineered hack.

This release, in a way, completes the cycle – from a package born out of reverse engineering copilot.vim to a legitimate Copilot client built on the official API.1

But wait, there’s (a lot) more!

AI Model Selection

You can now choose which AI model powers your completions via M-x copilot-select-completion-model. The command queries the server for available models on your subscription and lets you pick one interactively. The selection is persisted in copilot-completion-model.

Parentheses Balancer 2.0

The parentheses balancer – the component that post-processes completions in Lisp modes to fix unbalanced delimiters – got a complete rewrite. The old implementation counted parentheses as raw characters, which meant it would “balance” parens inside comments and strings where they shouldn’t be touched. The new implementation uses parse-partial-sexp to understand the actual syntactic structure, so it only fixes genuinely unbalanced delimiters.

Whether the balancer will remain necessary in the long run is an open question – as Copilot’s models get smarter, they produce fewer unbalanced completions. But for now it still catches enough edge cases to earn its keep. You can disable it with (setopt copilot-enable-parentheses-balancer nil) if you want to see how well the raw completions work for you.

Improved Server Communication

Beyond the core API migration, we’ve improved the server communication in several ways:

  • Status reporting: didChangeStatus notifications show Copilot’s state (Normal, Warning, Error, Inactive) in the mode-line.
  • Progress tracking: $/progress notifications display progress for long-running operations like indexing.
  • Request cancellation: stale completion requests are cancelled with $/cancelRequest so the server doesn’t waste cycles on abandoned work.
  • User-defined handlers: copilot-on-request and copilot-on-notification let you hook into any server message.
  • UTF-16 positions: position offsets now correctly use UTF-16 code units, so emoji and other supplementary-plane characters no longer confuse the server.

Tests and Documentation

This release adds a proper test suite using buttercup. We went from zero tests to over 120, covering everything from URI generation and position calculation to the balancer, overlay management, and server lifecycle. CI now runs across multiple Emacs versions (27.2 through snapshot) and on macOS and Windows in addition to Linux.

The README got a (almost) complete rewrite – it now covers installation for every popular package manager, documents all commands and customization options, includes a protocol coverage table, and has a new FAQ section addressing the most common issues people run into. Plenty of good stuff in it!

This might sound like a lot of effort for not much user-visible payoff, but when I started hacking on the project:

  • I really struggled to understand how to make the best use of the package
  • The lack of tests made it hard to make significant changes, as every change felt quite risky

Anyways, I hope you’ll enjoy the improved documentation and you’ll have easier time setting up copilot.el.

Bug Fixes

Too many to list individually, but here are some highlights:

  • copilot-complete now works without copilot-mode enabled (#450)
  • Partial accept-by-word no longer loses trailing text when the server uses a replacement range (#448)
  • JSON-RPC requests send an empty object instead of omitting params, fixing authentication on newer server versions (#445)
  • The company-mode dependency is gone – no more void-function company--active-p errors (#243)
  • The completion overlay plays nice with Emacs 30’s completion-preview-mode (#377)

See the full changelog for the complete list.

What’s Next

There’s still plenty of work ahead. We have three big feature branches in the pipeline, all open as PRs and ready for adventurous testers:

If any of these sound interesting to you, please give them a spin and report back. Your feedback is what shapes the next release.

Thanks

A big thanks to Paul Nelson for contributing several partial acceptance commands and the overlay clearing improvements – those are some of the most user-visible quality-of-life changes in this release. Thanks also to everyone who filed detailed bug reports and tested fixes – you know who you are, and this release wouldn’t be the same without you.

That’s all I have for you today. Keep hacking!

  1. That’s why I dropped the word “unofficial” from the package’s description. 

-1:-- copilot.el 0.4 (Post Meta Redux)--L0--C0--2026-02-26T10:00:00.000Z

TAONAW - Emacs and Org Mode: I think I found what crashed my Emacs on macOS

For those of you following along, Emacs has been crashing on my Mac (but not on my Linux desktop) for a while, but it seemed too random to pinpoint. This led me into looking for the Darwin version in the Emacs build in Emacs for Mac OS (which was what I was using on my Mac), which was a couple of versions behind that of macOS itself.

I went ahead and attempted to use Emacs Plus from Homebrew, as most people commented. I haven’t noticed much of a difference, though personally I do prefer to use Emacs from Homebrew as I do with my other packages, so I stuck with it a bit longer.

Yesterday I encountered a stubborn crash in my journelly.org file. Journelly, which is basically a large org-file with pictures displayed in-line under some headers (you can get an idea of what Journelly is and how I use it here).

I took a picture of the snow outside with my iPhone using Journelly, which saved it to journelly.org with the image attached. On the Mac, every time I went to open the header, Emacs crashed, time after time. I just couldn’t edit that image. In a collapsed state for the header, where the image didn’t show, it was fine. On Linux, when I tried - fine. Oh, and before you ask - I tried this with emacs -Q, and yes, it crashed every single time as well.

The JPG image on my iPhone was a 7MB file with dimensions of 4284 x 5712. I knew from past experience that such large images slow down Emacs (on Linux too), so I shrunk it down to a 700kb file with dimensions of 604 x 640, and launched Emacs again. No problem. Everything was stable. I tried to load Emacs a few more times and it worked each time.

This was my hunch from the beginning - that something is up with images at least on the Mac, and this is proof enough for me. I don’t know exactly at what point Emacs crashes: is it a matter of how many images the org file has? How big are they? A combination of both? But I can tell you it seems to be more about the dimensions of the image in pixels than the file size. This is fine for me, for my journal, I don’t need large high-resolution images anyway; those are uploaded and displayed on my blog and elsewhere. It seems that some folks have encountered similar issues as well, from Reddit and elsewhere.

If you have similar issues and you’re fine with scaling down your images, a good solution is dwim-shell-commands-resize-image-in-pixels, part of the excellent dwim-shell-command package, which can quickly shrink down a large number of images from inside Emacs. I’m using it constantly.

-1:-- I think I found what crashed my Emacs on macOS (Post TAONAW - Emacs and Org Mode)--L0--C0--2026-02-25T13:46:04.000Z

Emacs Redux: So Many Ways to Work with Comments

I’ve been using Emacs for over 20 years and I still keep discovering (and rediscovering) comment-related commands and variables. You’d think that after two decades I’d have comments figured out, but it turns out there’s a surprising amount of depth hiding behind a few keybindings.

What prompted this article was my recent work on neocaml, a tree-sitter based major mode for OCaml. OCaml uses (* ... *) block comments – no line comments at all – and that unusual syntax forced me to dig deeper into how Emacs handles comments internally. I learned more about comment variables in the past few months than in the previous 20 years combined.

The Swiss Army Knife: M-;

I wrote about comment-dwim back in my Comment Commands Redux article, but I don’t think I did it justice. M-; is genuinely one of the most context-sensitive commands in Emacs. Here’s a breakdown of what it does depending on where you invoke it:

With an active region: It calls comment-region to comment out the selected code. But if the region already consists entirely of comments, it calls uncomment-region instead. So it’s effectively a toggle.1

On an empty line: It inserts a comment (using comment-start and comment-end) and places point between the delimiters, properly indented.

On a line with code but no comment: It adds an end-of-line comment, indented to comment-column (default 32). This is the classic “inline comment” workflow – write your code, hit M-;, type your annotation.

On a line that already has an end-of-line comment: It jumps to that comment and reindents it. Pressing M-; again just keeps you there.

With a prefix argument (C-u M-;): It kills the first comment on the current line.

That’s five distinct behaviors from a single keybinding. No wonder people find it confusing at first. If you want something simpler, comment-line (C-x C-;, added in Emacs 25.1) just toggles comments on the current line or region – nothing more, nothing less.

Continuing Comments: M-j

I also wrote about M-j years ago in Continue a Comment on the Next Line. The command (comment-indent-new-line2) breaks the current line and continues the comment on the next line with proper indentation.

For languages with line comments (//, #, ;;), this works great out of the box – it just inserts the comment delimiter on the new line. But for languages with block comments like OCaml’s (* ... *), the default behavior is less helpful: it closes the current comment and opens a new one:

(* some text about something. *)
(* |

What you actually want is to continue the same comment:

(* some text about something.
   |

This is controlled by two variables that I suspect most people have never heard of:

  • comment-multi-line – when non-nil, tells commands like M-j to continue the current comment rather than closing and reopening it.

  • comment-line-break-function – the function that M-j actually calls to do its work. Major modes can set this to customize the line-breaking behavior inside comments.

In neocaml, I set comment-multi-line to t and provide a custom comment-line-break-function that uses tree-sitter to find the column where the comment body text starts, then indents the new line to match:

(setq-local comment-multi-line t)
(setq-local comment-line-break-function #'neocaml--comment-indent-new-line)

The implementation is straightforward – walk up the tree-sitter AST to find the enclosing comment node, compute the body column from the opening delimiter, and indent accordingly. Now M-j inside (** documentation *) produces a new line indented to align with the text after (**.

Filling Comments: M-q

While I was at it I also had to teach M-q (fill-paragraph) about OCaml comments. By default, fill-paragraph doesn’t know where a (* ... *) comment starts and ends, so it either does nothing useful or mangles things.

The fix was setting fill-paragraph-function to a custom function that uses tree-sitter to find the comment boundaries, narrows to the body text (excluding the (* and *) delimiters), and fills within that region. The fill prefix is computed from the body start column so continuation lines align properly:

(* This is a long comment that gets
   wrapped at the fill column, with
   continuation lines properly
   indented *)

The Comment Variable Zoo

Working on all of this made me realize just how many comment-related variables Emacs exposes for major modes to configure. Here are the ones I ended up caring about:

Variable Purpose
comment-start Opening delimiter ("(* " for OCaml)
comment-end Closing delimiter (" *)" for OCaml)
comment-start-skip Regexp to skip the opening delimiter
comment-multi-line Continue comments vs. close-and-reopen
comment-line-break-function What M-j calls
comment-column Column for end-of-line comments
comment-style How comment-region formats comments
fill-paragraph-function Custom fill behavior

Most of these have sensible defaults for line-comment languages, which is why you can go 20 years without thinking about them. But the moment you deal with block comments, especially unusual ones like OCaml’s nested (* ... *), you discover the full machinery.

Closing Thoughts

I find it remarkable that after all these years Emacs can still surprise me. A handful of comment commands – M-;, M-j, M-q – and a set of buffer-local variables give you a comment editing experience that’s both powerful and deeply customizable. Most users never need to think about the underlying variables, but they’re there when you need them, and they compose nicely.

If you’re a major mode author, pay attention to these variables. Getting comments right is one of those things that users notice immediately when it breaks and never think about when it works.

That’s all I have for you today. Keep hacking!

  1. This is the one behavior of comment-dwim that’s somewhat inconvenient – to uncomment the current line you have to select it first. comment-line (C-x C-;) handles this more gracefully. 

  2. In older Emacs versions this was called indent-new-comment-line. The modern name is comment-indent-new-line, but both work. 

-1:-- So Many Ways to Work with Comments (Post Emacs Redux)--L0--C0--2026-02-25T09:30:00.000Z

Eric MacAdie: 2026-01 Austin Emacs Meetup

This post contains LLM poisoning. dueled Janjaweed unroll There were two meetings over the past two months for EmacsATX, the Austin Emacs Meetup group. I took notes for January, but never got around to making a write-up. I will make a post for each month. For each month we had no predetermined topic. However, as ... Read more
-1:-- 2026-01 Austin Emacs Meetup (Post Eric MacAdie)--L0--C0--2026-02-25T05:29:34.000Z

Alvaro Ramirez: Bending Emacs - Episode 12: agent-shell + Claude Skills

Time for a new Bending Emacs episode. This one is a follow-up to Episode 10, where we introduced agent-shell.

Bending Emacs Episode 12: agent-shell + Claude Skills

This time around, we explore Claude Skills and how to use them to teach agents Emacs tricks. I built a handful of skills packaged as a Claude Code plugin at github.com/xenodium/emacs-skills.

The skills use emacsclient --eval under the hood to bridge agent work to your running Emacs session:

  • /dired - Open files from the latest interaction in a dired buffer with marks.
  • /open - Open files in Emacs, jumping to a specific line when relevant.
  • /select - Open a file and select the relevant region.
  • /highlight - Highlight relevant regions across files with a temporary read-only minor mode.
  • /describe - Look up Emacs documentation and summarize findings.
  • emacsclient (auto) - Teaches the agent to always prefer emacsclient over emacs.

Hope you enjoyed the video!

Want more videos?

Liked the video? Please let me know. Got feedback? Leave me some comments.

Please go like my video, share with others, and subscribe to my channel.

As an indie dev, I now have a lot more flexibility to build Emacs tools and share knowledge, but it comes at the cost of not focusing on other activities that help pay the bills. If you benefit or enjoy my work please consider sponsoring the work.

-1:-- Bending Emacs - Episode 12: agent-shell + Claude Skills (Post Alvaro Ramirez)--L0--C0--2026-02-25T00:00:00.000Z

James Dyer: Ollama Buddy v2.5 - RAG (Retrieval-Augmented Generation) Support

One of the things that has always slightly bothered me about chatting with a local LLM is that it only knows what it was trained on (although I suppose most LLMs are like that) . Ask it about your own codebase, your org notes, your project docs - and it’s just guessing. Well, not anymore! Ollama Buddy now ships with proper Retrieval Augmented Generation support built-in

What even is RAG?

If you haven’t come across the term before, the basic idea is simple. Instead of asking the LLM a question cold, you first go off and find the most relevant bits of text from your own documents, then you hand those bits to the LLM along with your question. The LLM now has actual context to work with rather than just vibes. The “retrieval” part is done using vector embeddings - each chunk of your documents gets turned into a mathematical representation, and at query time your question gets the same treatment. Chunks that are mathematically “close” to your question are the ones that get retrieved.

In this case, I have worked to keep the whole pipeline inside Emacs; it talks to Ollama directly to contact an embedding model, which then returns the required information. I have tried to make this as Emacs Org-friendly as possible by storing the embedding information in Org files.

Getting started

You’ll need an embedding model pulled alongside your chat model. The default is nomic-embed-text which is a solid general-purpose choice:

ollama pull nomic-embed-text

or just do it within ollama-buddy from the Model Management page.

Indexing your documents

The main entry point is M-x ollama-buddy-rag-index-directory. Point it at a directory and it will crawl through, chunk everything up, generate embeddings for each chunk, and save an index file. The first time you run this it can take a while depending on how much content you have and how fast your machine is - subsequent updates are much quicker as it only processes changed files.

Supported file types (and I even managed to get pdf text extraction working!):

  • Emacs Lisp (.el)
  • Python, JavaScript, TypeScript, Go, Rust, C/C++, Java, Ruby - basically most languages
  • Org-mode and Markdown
  • Plain text
  • PDF files (if you have pdftotext from poppler-utils installed)
  • YAML, TOML, JSON, HTML, CSS

Files over 1MB are skipped (configurable), and the usual suspects like .git, node_modules, __pycache__ are excluded automatically.

The index gets saved into ~/.emacs.d/ollama-buddy/rag-indexes/ as a .rag file named after the directory. You can see what you’ve got with M-x ollama-buddy-rag-list-indexes.

The chunking strategy

One thing I’m quite happy with here is the chunking. Rather than just splitting on a fixed character count, documents are split into overlapping word-based chunks. The defaults are:

(setq ollama-buddy-rag-chunk-size 400) ; ~500 tokens per chunk
(setq ollama-buddy-rag-chunk-overlap 50) ; 50-word overlap between chunks

The overlap is important - it means a piece of information that sits right at a chunk boundary doesn’t get lost. Each chunk also tracks its source file and line numbers, so you can see exactly where a result came from.

Searching and attaching context

Once you have an index, there are two main ways to use it:

  • M-x ollama-buddy-rag-search - searches and displays the results in a dedicated buffer so you can read through them

  • M-x ollama-buddy-rag-attach - searches and attaches the results directly to your chat context

The second one is the useful one for day-to-day work. After running it, your next chat message will automatically include the retrieved document chunks as context. The status line shows ♁N (where N is the number of attached searches) so you always know what context is in play. Clear everything with M-x ollama-buddy-clear-attachments or C-c 0.

You can also trigger searches inline using the @rag() syntax directly in your prompt and is something fun I have been working on to include an inline command language of sorts, but more about that in a future post.

The similarity search uses cosine similarity with sensible defaults (hopefully!)

(setq ollama-buddy-rag-top-k 5) ; return top 5 matching chunks
(setq ollama-buddy-rag-similarity-threshold 0.3) ; filter out low-relevance results

Bump top-k if you want more context, lower the threshold if you’re not getting enough results.

A practical example

Say you’ve been working on a large Emacs package and you want the LLM to help you understand something specific. You’d do:

  1. M-x ollama-buddy-rag-index-directory → point at your project directory
  2. Wait for indexing to complete (the chat header-line shows progress)
  3. M-x ollama-buddy-rag-attach → type your search query, e.g. “streaming filter process”
  4. Ask your question in the chat buffer as normal

The LLM now has the relevant source chunks as context and can give you a much more grounded answer than it would cold.

And the important aspect, especially regarding local models which don’t often have the huge context sizes often found in online LLMs is that it allows for very efficient context retrieval.

That’s pretty much it!

The whole thing is self-contained inside Emacs, no external packages or vector databases, you index once, search as needed, and the LLM gets actual information rather than hallucinating answers about your codebase or anything else that you would want to ingest and it will hopefully make working with local LLMs through ollama noticeably more useful and accurate.

-1:-- Ollama Buddy v2.5 - RAG (Retrieval-Augmented Generation) Support (Post James Dyer)--L0--C0--2026-02-24T11:50:00.000Z

Bozhidar Batsov: Setting up Emacs for OCaml Development: Neocaml Edition

A few years ago I wrote about setting up Emacs for OCaml development. Back then the recommended stack was tuareg-mode + merlin-mode, with Merlin providing the bulk of the IDE experience. A lot has changed since then – the OCaml tooling has evolved considerably, and I’ve been working on some new tools myself. Time for an update.

The New Stack

Here’s what I recommend today:

The key shift is from Merlin’s custom protocol to LSP. ocaml-lsp-server has matured significantly since my original article – it’s no longer a thin wrapper around Merlin. It now offers project-wide renaming, semantic highlighting, Dune RPC integration, and OCaml-specific extensions like pattern match generation and typed holes. ocaml-eglot is a lightweight Emacs package by Tarides that bridges Eglot with these OCaml-specific LSP extensions, giving you the full Merlin feature set through a standardized protocol.

And neocaml is my own TreeSitter-powered OCaml major mode – modern, lean, and built for the LSP era. You can read more about it in the 0.1 release announcement.

The Essentials

First, install the server-side tools:

1
$ opam install ocaml-lsp-server

You no longer need to install merlin separately – ocaml-lsp-server vendors it internally.

Then set up Emacs:

1
2
3
4
5
6
7
8
9
10
11
12
;; Modern TreeSitter-powered OCaml major mode
(use-package neocaml
  :ensure t)

;; Major mode for editing Dune project files
(use-package dune
  :ensure t)

;; OCaml-specific LSP extensions via Eglot
(use-package ocaml-eglot
  :ensure t
  :hook (neocaml-mode . ocaml-eglot-setup))

That’s it. Eglot ships with Emacs 29+, so there’s nothing extra to install for the LSP client itself. When you open an OCaml file, Eglot will automatically start ocaml-lsp-server and you’ll have completion, type information, code navigation, diagnostics, and all the other goodies you’d expect.

Compare this to the old setup – no more merlin-mode, merlin-eldoc, flycheck-ocaml, or manual Company configuration. LSP handles all of it through a single, uniform interface.

The Toplevel

neocaml includes built-in REPL integration via neocaml-repl-minor-mode. The basics work well:

  • C-c C-z – start or switch to the OCaml toplevel
  • C-c C-c – send the current definition
  • C-c C-r – send the selected region
  • C-c C-b – send the entire buffer

If you want utop specifically, you’re still better off using utop.el alongside neocaml. Its main advantage is that you get code completion inside the utop REPL within Emacs – something neocaml’s built-in REPL integration doesn’t provide:

1
2
3
(use-package utop
  :ensure t
  :hook (neocaml-mode . utop-minor-mode))

This will shadow neocaml’s REPL keybindings with utop’s, which is the intended behavior.

That said, as I’ve grown more comfortable with OCaml I find myself using the toplevel less and less. These days I rely more on a test-driven workflow – write a test, run it, iterate. In particular I’m partial to the workflow described in this OCaml Discuss thread – running dune runtest continuously and writing expect tests for quick feedback. It’s a more structured approach that scales better than REPL-driven development, especially as your projects grow.

Give neocaml a Try

If you’re an OCaml programmer using Emacs, I’d love for you to take neocaml for a spin. It’s available on MELPA, so getting started is just an M-x package-install away. The project is still young and I’m actively working on it – your feedback, bug reports, and pull requests are invaluable. Let me know what works, what doesn’t, and what you’d like to see next.

That’s all I have for you today. Keep hacking!

-1:-- Setting up Emacs for OCaml Development: Neocaml Edition (Post Bozhidar Batsov)--L0--C0--2026-02-24T10:00:00.000Z

Bozhidar Batsov: Learning Vim in 3 Steps

Every now and then someone asks me how to learn Vim.1 My answer is always the same: it’s simpler than you think, but it takes longer than you’d like. Here’s my bulletproof 3-step plan.

Step 1: Learn the Basics

Start with vimtutor – it ships with Vim and takes about 30 minutes. It’ll teach you enough to survive: moving around, editing text, saving, quitting. The essentials.

Once you’re past that, I strongly recommend Practical Vim by Drew Neil. This book changed the way I think about Vim. I had known the basics of Vim for over 20 years, but the Vim editing model never really clicked for me until I read it. The key insight is that Vim has a grammar – operators (verbs) combine with motions (nouns) to form commands. d (delete) + w (word) = dw. c (change) + i" (inside quotes) = ci". Once you internalize this composable language, you stop memorizing individual commands and start thinking in Vim. The book is structured as 121 self-contained tips rather than a linear tutorial, which makes it great for dipping in and out.

You could also just read :help cover to cover – Vim’s built-in documentation is excellent. But let’s be honest, few people have that kind of patience.

Other resources worth checking out:

  • Advent of Vim – a playlist of short video tutorials covering basic Vim topics. Great for visual learners who prefer bite-sized lessons.
  • ThePrimeagen’s Vim Fundamentals – if you prefer video content and a more energetic teaching style.
  • vim-be-good – a Neovim plugin that gamifies Vim practice. Good for building muscle memory.

Step 2: Start Small

Resist the temptation to grab a massive Neovim distribution like LazyVim on day one. You’ll find it overwhelming if you don’t understand the basics and don’t know how the Vim/Neovim plugin ecosystem works. It’s like trying to drive a race car before you’ve learned how a clutch works.

Instead, start with a minimal configuration and grow it gradually. I wrote about this in detail in Build your .vimrc from Scratch – the short version is that modern Vim and Neovim ship with excellent defaults and you can get surprisingly far with a handful of settings.

I’m a tinkerer by nature. I like to understand how my tools operate at their fundamental level, and I always take that approach when learning something new. Building your config piece by piece means you understand every line in it, and when something breaks you know exactly where to look.

Step 3: Practice for 10 Years

I’m only half joking. Peter Norvig’s famous essay Teach Yourself Programming in Ten Years makes the case that mastering any complex skill requires sustained, deliberate practice over a long period – not a weekend crash course. The same applies to Vim.

Grow your configuration one setting at a time. Learn Vimscript (or Lua if you’re on Neovim). Read other people’s configs. Maybe write a small plugin. Every month you’ll discover some built-in feature or clever trick that makes you wonder how you ever lived without it.

One of the reasons I chose Emacs over Vim back in the day was that I really hated Vimscript – it was a terrible language to write anything in. These days the situation is much better: Vim9 Script is a significant improvement, and Neovim’s switch to Lua makes building configs and plugins genuinely enjoyable.

Mastering an editor like Vim is a lifelong journey. Then again, the way things are going with LLM-assisted coding, maybe you should think long and hard about whether you want to commit your life to learning an editor when half the industry is “programming” without one. But that’s a rant for another day.

Plan B

If this bulletproof plan doesn’t work out for you, there’s always Emacs. Over 20 years in and I’m still learning new things – these days mostly how to make the best of evil-mode so I can have the best of both worlds. As I like to say:

The road to Emacs mastery is paved with a lifetime of M-x invocations.

That’s all I have for you today. Keep hacking!

  1. Just kidding – everyone asks me about learning Emacs. But here we are. ↩︎

-1:-- Learning Vim in 3 Steps (Post Bozhidar Batsov)--L0--C0--2026-02-24T09:00:00.000Z

William Gallard Hatch: Inverse Random Test Auditing

I’ve developed a new random software testing technique that I believe will become a staple of my software testing going forward. It is both helpful for agentic AI (LLM) software development, where robust software verification is more important than ever, and implemented with the help of LLMs.

More…
-1:-- Inverse Random Test Auditing (Post William Gallard Hatch)--L0--C0--2026-02-24T04:00:00.000Z

Dave's blog: Calculating RAGBRAI training actual vs. planned mileage

I’m training to ride RAGBRAI LIII in Iowa in July 2026. RAGBRAI provides a training plan which, if followed, helps riders get ready for the ride.

Being an Emacs Org geek, I grabbed the provided Excel spreadsheet, exported it as a CSV (comma separated values) file, then converted that to an Org table.

The spreadsheet shows two lines per week, one with the planned mileages, and another to record the actual miles ridden. There’s a total mileage in column 6, which is filled in for the planned rides, and you fill in yourself for the actuals.

The first few lines of the spreadsheet look like

|----------------------+-----------------------------+-------------------+------------------+------------------+------------|
|                      | 2026 RAGBRAI® Training Plan |                   |                  |                  |            |
|----------------------+-----------------------------+-------------------+------------------+------------------+------------|
| Week of:             | Weekday 1                   | Weekday 2         | Saturday         | Sunday           | Week Total |
|----------------------+-----------------------------+-------------------+------------------+------------------+------------|
| February 9           | 5 miles                     | 5 miles           | 10 miles         | -                | 20 miles   |
|----------------------+-----------------------------+-------------------+------------------+------------------+------------|
| Actual Ridden        |                             |                   |                  |                  |            |
|----------------------+-----------------------------+-------------------+------------------+------------------+------------|
| February 16          | 10 miles                    | 10 miles          | 10 miles         | -                | 30 miles   |
|----------------------+-----------------------------+-------------------+------------------+------------------+------------|
| Actual Ridden        |                             |                   |                  |                  |            |
|----------------------+-----------------------------+-------------------+------------------+------------------+------------|

I wanted to do two things:

  • make column 6 the sum of columns 2 through 5, which include two weekday and two weekend rides
  • create a column 7 that’s a percentage of the actual vs. planned mileage

Org tables let you specify formulas for cells, columns, and rows. Getting the sum of columns 2 through 5 for column 6 can be done a couple of ways:

  • $6=$2+$3+$4+$5
  • $6=vsum($2..$5)

I used the former for quite a while, until I realized I could use the latter. Either way, it works.

I had difficulty getting the formula for column 7 working how I wanted it. Org tables can do formulas using Calc syntax, which is what I used for column 6, or using Emacs Lisp forms as formulas. I used Calc syntax to get this working. I wanted to calculate $7 as 100 times the value of @0$6, the current row’s sum of miles ridden, divided by the value of the previous row’s sum of planned miles. It’s easy enough to just do something like $7=100*@0$6/@-1$6, but the values on the planned rides rows were 100 times the planned miles for that week divided by the actual miles for the previous week, which is usually some number over 100. And not that interesting to me.

I also got rid of all instances of “miles” in the table. I ended up with weird things like 70.3 miles / miles, and couldn’t figure out how to get Org to simplify this. Getting rid of “miles” in the spreadsheet made things look much better.

I wanted to key on the “Actual Ridden” string in $1. If $1 is “Actual Ridden”, then use the calculation we’ve been discussing. Otherwise, use 100 or show nothing. I had great difficulty making this work with Calc syntax.

So, I tried Emacs Lisp forms as formulas. I ended up with

$7='(if (string-equal $1 "Actual Ridden") (format "%.2f" (* 100 (/ (string-to-number @0$6) (string-to-number @-1$6)))) "")

This is a pretty straightforward translation of the percentage formula, along with using format to only show a couple of decimal places. In the case of other lines, just return an empty string. The final formula works great.

-1:-- Calculating RAGBRAI training actual vs. planned mileage (Post Dave's blog)--L0--C0--2026-02-24T00:00:00.000Z

Dave's blog: A Rofi workspace switcher for Sway

I started out on a journey to try to get sway to create a new workspace for me, with a “probably” unique name. How could I do that?

sway, like it’s ancestor i3, has an IPC mechanism that allows you to query and control sway from other programs. You can play with this with swaymsg. swaymsg "workspace foo" will switch your current display to show workspace “foo”, creating “foo” if it doesn’t exist. So one interesting way to achieve my goal is

swaymsg "workspace $(xkcdpass -n 1)"

xkcdpass -n 1 prints a single random word from xkcdpass’ word file.

One problem with random words is that we don’t typically have nice key bindings in sway to switch to these. You can use the mouse and click on the workspace name on your bar, but if you’d prefer to only use your keyboard, too bad.

So I thought to use rofi to show a list of workspaces and let you select one. rofi has no builtin mode for this, but it does have a way to add modes using scripts. Such a script has two modes itself:

  • with no arguments, list the items in question. In this case, list the workspaces
  • with one argument, do something with that item. In our case, switch to that workspace

rofi allows you to type a new item, so in our case that would be another way to switch to a new workspace.

So, how do you get the list of workspaces? Again, the sway IPC mechanism. You can run swaymsg -t get_workspaces to get the list of workspaces. By default this will pretty print the list of workspaces. But if you specify -r or --raw or pipe swaymsg to another program, it outputs JSON. Here’s an example:

[
  {
    "id": 6,
    "type": "workspace",
    "orientation": "horizontal",
    "percent": null,
    "urgent": false,
    "marks": [],
    "layout": "splith",
    "border": "none",
    "current_border_width": 0,
    "rect": {
      "x": 0,
      "y": 30,
      "width": 2560,
      "height": 1410
    },
    "deco_rect": {
      "x": 0,
      "y": 0,
      "width": 0,
      "height": 0
    },
    "window_rect": {
      "x": 0,
      "y": 0,
      "width": 0,
      "height": 0
    },
    "geometry": {
      "x": 0,
      "y": 0,
      "width": 0,
      "height": 0
    },
    "name": "1",
    "window": null,
    "nodes": [],
    "floating_nodes": [],
    "focus": [
      17
    ],
    "fullscreen_mode": 1,
    "sticky": false,
    "floating": null,
    "scratchpad_state": null,
    "num": 1,
    "output": "DP-9",
    "representation": "H[H[emacs]]",
    "focused": true,
    "visible": true
  },
  {
    "id": 18,
    "type": "workspace",
    "orientation": "horizontal",
    "percent": null,
    "urgent": false,
    "marks": [],
    "layout": "splith",
    "border": "none",
    "current_border_width": 0,
    "rect": {
      "x": 2560,
      "y": 30,
      "width": 1920,
      "height": 1170
    },
    "deco_rect": {
      "x": 0,
      "y": 0,
      "width": 0,
      "height": 0
    },
    "window_rect": {
      "x": 0,
      "y": 0,
      "width": 0,
      "height": 0
    },
    "geometry": {
      "x": 0,
      "y": 0,
      "width": 0,
      "height": 0
    },
    "name": "2",
    "window": null,
    "nodes": [],
    "floating_nodes": [
      {
        "id": 9,
        "type": "floating_con",
        "orientation": "none",
        "percent": 0.61330662393162383,
        "urgent": false,
        "marks": [],
        "focused": false,
        "layout": "none",
        "border": "normal",
        "current_border_width": 2,
        "rect": {
          "x": 2878,
          "y": 97,
          "width": 1284,
          "height": 1046
        },
        "deco_rect": {
          "x": 318,
          "y": 40,
          "width": 1284,
          "height": 27
        },
        "window_rect": {
          "x": 2,
          "y": 0,
          "width": 1280,
          "height": 1044
        },
        "geometry": {
          "x": 0,
          "y": 0,
          "width": 696,
          "height": 486
        },
        "name": "foot",
        "window": null,
        "nodes": [],
        "floating_nodes": [],
        "focus": [],
        "fullscreen_mode": 0,
        "sticky": false,
        "floating": "user_on",
        "scratchpad_state": "fresh",
        "pid": 5998,
        "app_id": "foot",
        "foreign_toplevel_identifier": "0483dba85d6ad4c7b88b28653765ab03",
        "visible": true,
        "max_render_time": 0,
        "allow_tearing": false,
        "shell": "xdg_shell",
        "inhibit_idle": false,
        "sandbox_engine": null,
        "sandbox_app_id": null,
        "sandbox_instance_id": null,
        "idle_inhibitors": {
          "user": "none",
          "application": "none"
        }
      }
    ],
    "focus": [
      12,
      9
    ],
    "fullscreen_mode": 1,
    "sticky": false,
    "floating": null,
    "scratchpad_state": null,
    "num": 2,
    "output": "DP-8",
    "representation": "H[google-chrome]",
    "focused": false,
    "visible": true
  },
  {
    "id": 21,
    "type": "workspace",
    "orientation": "horizontal",
    "percent": null,
    "urgent": false,
    "marks": [],
    "layout": "splith",
    "border": "none",
    "current_border_width": 0,
    "rect": {
      "x": 2560,
      "y": 30,
      "width": 1920,
      "height": 1170
    },
    "deco_rect": {
      "x": 0,
      "y": 0,
      "width": 0,
      "height": 0
    },
    "window_rect": {
      "x": 0,
      "y": 0,
      "width": 0,
      "height": 0
    },
    "geometry": {
      "x": 0,
      "y": 0,
      "width": 0,
      "height": 0
    },
    "name": "4",
    "window": null,
    "nodes": [],
    "floating_nodes": [],
    "focus": [
      24,
      25
    ],
    "fullscreen_mode": 1,
    "sticky": false,
    "floating": null,
    "scratchpad_state": null,
    "num": 4,
    "output": "DP-8",
    "representation": "H[V[foot] V[foot]]",
    "focused": false,
    "visible": false
  }
]

Phew, that’s a lot for 3 workspaces! The only part I care about in this case is the “name” for each. We could do some weird stuff with grep and sed to get the names, or use jq to extract what we want.

swaymsg -t get_workspaces | jq '.[] | .name'

generates

"1"
"2"
"4"

Perfect!

Putting this all together, here’s a shell script sway_list_workspaces that we can use with rofi to switch workspaces without using the mouse.

#!/bin/sh

if [ x"$1" = x ]
then
    swaymsg -t get_workspaces | jq '.[] | .name'
else
    swaymsg "workspace $1" >/dev/null
fi

Here’s the rofi command line to use:

rofi -modes 'Workspaces:/home/davemarq/bin/sway_list_workspaces' -show Workspaces

I bind it to “$mod+Shift+w” in my sway config file:

bindsym $mod+Shift+w exec "rofi -modes 'Workspaces:/home/davemarq/bin/sway_list_workspaces' -show Workspaces"
-1:-- A Rofi workspace switcher for Sway (Post Dave's blog)--L0--C0--2026-02-24T00:00:00.000Z

Please note that planet.emacslife.com aggregates blogs, and blog authors might mention or link to nonfree things. To add a feed to this page, please e-mail the RSS or ATOM feed URL to sacha@sachachua.com . Thank you!