I’ve added experimental support for Claude AI, aligning its implementation with my support for ChatGPT, to pave the way for external AI templating, which should make it easier to integrate new models in the future. Like the ChatGPT implementation, it doesn’t stream tokens in real time but instead outputs the final result all at once. However, after testing it further, I’m actually starting to prefer it. The response speed is so fast that not seeing a real-time “Typing” indicator isn’t a big deal. I think streaming feels more relevant for local LLMs running through ollama, where token generation is slower, making real-time output more useful.
Secondly, the Texinfo manual for this package now magically installs itself when pulling from MELPA. I fluked this! I just thought it was sensible to create a docs directory and then plonked an info file there. After looking into it, I found that MELPA performs some extra processing when handling Emacs documentation: it automatically scans common documentation directories like docs and grabs the .info file. Pretty neat! This means you can now browse ollama-buddy’s functionality directly through info.
Next up, I realized I wasn’t efficiently handling ollama model operations like delete, pull, and copy. I was process-calling ollama and passing through arguments as if I were on the command line. This was functional, but it wasn’t ideal and wasn’t using the ollama API for these operations correctly, or even at all. After reassessing my design, I came to the realization that I was in fact using four different methods to communicate with ollama:
curl
Direct process calls
url.el
make-network-process
At first, I leaned on curl since it was straightforward and matched the official ollama examples. My approach with a project such as this is generally to get things working quickly and then refine/iterate later. However, once I had a solid design (and design principles!), I wanted to eliminate external dependencies like curl. This led me to explore url.el, but initially I couldn’t get my head round it, so I went for the nuclear option of make-network-process for network-level flexibility. Later, I revisited url.el for Claude and ChatGPT support, rewriting the implementation to use url-retrieve, but decided generally to keep make-network-process for ollama interactions: it is still built into Emacs, and I’m more familiar with the lower-level network concepts, having wrestled with them over many years at work.
Anyway, back to using the ollama API for model operations.
I considered leveraging my existing url.el-based ollama-buddy--make-request function, but quickly realized it used url-retrieve-synchronously, which blocks execution. This wasn’t an issue for quick requests like model info or tags (although it could be), but for long-running tasks like model pulls and general model operations it risked freezing Emacs!
Switching to url-retrieve solved this, as it runs asynchronously. However, url-retrieve only triggers its callback at the end of the request, making real-time progress tracking difficult. To address this, I used run-with-timer to ensure persistent status updates in the header line, which now allows for multiple operations, including pull, copy, and delete, even simultaneously.
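The pattern described above can be sketched roughly as follows. This is an illustrative reconstruction, not ollama-buddy’s actual code: the function and variable names prefixed with my/ are mine, and the request payload shape should be checked against your ollama version’s REST API docs.

```elisp
(require 'url)
(require 'json)

(defvar my/ollama-pull-status nil
  "Status string shown in the header line while a pull is running.")

(defvar my/ollama-status-timer nil
  "Timer that keeps the header line refreshed during a pull.")

(defun my/ollama-pull-model (model)
  "Asynchronously ask ollama to pull MODEL without blocking Emacs."
  (interactive "sModel: ")
  (let ((url-request-method "POST")
        (url-request-extra-headers '(("Content-Type" . "application/json")))
        (url-request-data
         (encode-coding-string (json-encode `((model . ,model))) 'utf-8)))
    (setq my/ollama-pull-status (format "Pulling %s..." model))
    ;; url-retrieve only calls back at the end, so refresh the header
    ;; line on a timer while the request is in flight.
    (setq my/ollama-status-timer
          (run-with-timer 0 1
                          (lambda ()
                            (setq header-line-format my/ollama-pull-status)
                            (force-mode-line-update t))))
    (url-retrieve "http://localhost:11434/api/pull"
                  (lambda (_status)
                    (cancel-timer my/ollama-status-timer)
                    (setq my/ollama-pull-status
                          (format "Pulled %s" model))
                    (force-mode-line-update t)))))
```

Because each url-retrieve call runs independently, several such operations can be in flight at once; a real implementation would track per-operation status rather than a single variable.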
Now that I’m using ollama-buddy as my primary Emacs AI assistant, I’m refining my AI workflow. Since my design is centered around the chat buffer, all interactions and outputs end up there. But what about quick tasks like proofreading text or refactoring code? Ideally, I want a workflow that aligns with ollama-buddy’s principles, meaning no direct in-buffer edits (though future me might change their mind!).
For example, if I want to tighten a rambling paragraph (is this one? :), I currently send it via the custom menu to the chat buffer with a proofreading tag. However, retrieving the output requires jumping to the chat buffer, copying it, switching back, deleting the original, and then pasting the revision - too many steps.
To streamline this, I’ve now implemented a feature that writes the latest response to a customizable register. This way, I can simply delete the original text and insert the improved version without extra navigation.
Note: I have remapped the insert-register default keybinding of C-x r i to M-a, as by default I am writing to register ?a and M-a a seems more comfortable.
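In config terms this amounts to something like the following. The option name ollama-buddy-default-register is mentioned later in these notes; the rest is just the stock insert-register rebind described above:

```elisp
;; The last response lands in register ?a (the package default I use).
(setq ollama-buddy-default-register ?a)

;; C-x r i is a long reach for something used this often, so rebind
;; insert-register to M-a (normally `backward-sentence').
(global-set-key (kbd "M-a") #'insert-register)
```

With this in place, M-a a drops the latest AI response at point.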
<2025-04-01 Tue> 0.9.17
Added link to ollama-buddy info manual from the chat buffer and transient menu as MELPA has now picked it up and installed it!
<2025-03-28 Fri> 0.9.16
Added ollama-buddy-fix-encoding-issues to handle text encoding problems.
Refactored and streamlined fabric pattern description handling.
Removed unused fabric pattern categories to enhance maintainability.
<2025-03-28 Fri> 0.9.15
Implement asynchronous operations for model management
Introduce non-blocking API requests for fetching, copying, and deleting models
Add caching mechanisms to improve efficiency
Cache model data to reduce redundant API calls
Manage cache expiration with timestamps and time-to-live settings
Update status line to reflect ongoing background operations
Ensure smooth user interaction by minimizing wait times and enhancing performance
<2025-03-26 Wed> 0.9.13
Added automatic writing of last response to a register
Added M-r to search through prompt history
I was just thinking about a general workflow aspect, and that is getting responses out of the ollama-buddy chat buffer. Of course, if you are already there then it will be easier, but even then the latest response, which is probably the one you are interested in, will still have to be copied to the kill ring.
This issue is even more pronounced when you are sending text from other buffers to the chat.
So, the solution I have put in place is to always write the last response to a register of your choice. I always think registers are an underused part of Emacs; I have already repurposed them for the multishot, so why not always make the last response available?
For example, say you want to proofread a sentence: you can mark the text, send it to the chat using the custom menu to proofread, and the response will be available in, maybe, register “a”. The chat buffer will be brought up if not already visible so you can validate the output; then pop back to your buffer, delete the paragraph, and insert register “a”. Maybe. I am going to put this in as I suspect no-one uses registers anyway, and if they do, they can push the response-writing register away using ollama-buddy-default-register. I don’t think this will do any harm, and actually it is something I may start using more often.
As a side note, I also need to think about popping into the chat buffer when pushing buffer text to the chat. Should I do it? Not sure yet; I’m still getting to grips with the whole workflow aspect, so will need a little more time to see what works.
Also, as a side note to this ramble, the general register prefix is annoyingly long, C-x r i <register>, so I have rebound it in my config to M-a, as I never want to go back a sentence, and if I just write to the default “a” register then it feels ergonomically fast.
<2025-03-25 Tue> 0.9.12
Added experimental Claude AI support!
Removed curl and replaced it with url.el for online AI integration
A very similar implementation as for ChatGPT.
To activate, set the following:
(require 'ollama-buddy-claude)
(setq ollama-buddy-claude-api-key "<extremely long key>")
I just came across a link to this recently updated book on the plain text computing environment. It was, apparently, first written to describe Emacs 19.29 around 1997. That was a long time ago and Emacs has evolved considerably since then. Happily, the book has recently been updated to Emacs 29.4. It’s over 600 pages so there’s a lot of content.
You can see the book here. There’s an interactive menu on the left that will give you an idea of what’s available. Besides HTML, it’s available in PDF and various EPUB formats. At more than 600 pages there’s obviously a lot of content but the author, Keith Waclena, says that while you could read it straight through, he doubts many people will. Rather, it’s designed to enable you to skip to any topic you’re interested in.
Obviously, I haven’t been through the book in detail—having just discovered it—but it seems like a useful resource, especially for users new to Emacs. Take a look, especially if you’re a n00b.
To read my newspapers I load my RSS feed via the elfeed package.
… Also, I identify any RSS feed entry that does not provide the full
text. And those live in provisional status. My reflexes are to ask
“there’s lots of full-text feeds, why bother with this truncation?”
Jeremy is totally right! – it came to my mind immediately.
I’m using the Hugo static site generator to
build this blog. Hugo uses content summaries to display a list of
blog entries on the main page. As a result, the Elfeed entry shows a
summary of the post instead of the full text.
Many of us – including me – are using Elfeed
as an RSS reader, and I have realized that the truncated Elfeed entry may cause inconvenience
to the Elfeed users.
Applying some tricks I was able to disable the automatic displaying of
summaries on the main page. I hope it has become more comfortable for
those who use Elfeed for reading RSS feeds.
I saw the Irreal post about Journelly, but mostly ignored it because I wasn’t looking for a new iOS journaling app. He did mention that Journelly is by Álvaro Ramírez, author of Plain Org, lmno.lol, and others, so that made things more interesting.
What intrigued me most, though, was learning that Journelly is backed by plain-text Org Mode files. Bonus! Now it had my attention.
Álvaro was kind enough to let me into the TestFlight, and I’m putting it through its paces this morning.
Here’s what mine looks like so far…
Simple and easy to use, so that’s good. As Álvaro described Journelly, it’s like a personal, private Twitter/Instagram/Mastodon account.
Speaking of simple, the entire journal is kept as a single journelly.org file. I chose to sync mine via iCloud, which meant that I could view and edit it locally on my Mac. The problem with iCloud Drive’s default storage locations is that they’re in a stupid place, e.g. /Users/jbaty/Library/Mobile Documents/iCloud~com~xenodium~Journelly/Documents/Journelly.org. Don’t make me create symlinks, iCloud! 😄
Digging into the settings, I noticed that in addition to the default iCloud Drive folder, I could configure Journelly to use any folder in iCloud, including my ever-popular ~/Documents/Notes folder. Much better.
Since I use Emacs for most everything, including my journals, I’ve never done much journaling on my phone. Journelly could change this equation.
It would be cool if I could use Journelly from my Mac as well. To that end, I created an Org Mode capture template to make it easy to add entries while in Emacs on my Mac.
I can now hit C-c c j and I’m in a capture buffer for the Journelly file. The Journelly app can include weather and location to new entries. Currently, my capture template isn’t smart enough to do that, but it could be, with just a bit of work.
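A capture template along these lines does the job. This is a sketch: the C-c c j key is from the post, but the file path is my guess based on the ~/Documents/Notes folder mentioned above.

```elisp
(with-eval-after-load 'org-capture
  (add-to-list 'org-capture-templates
               ;; "j" makes the full chord C-c c j, as in the post.
               '("j" "Journelly entry" entry
                 (file "~/Documents/Notes/Journelly.org")
                 ;; Each entry is a heading with an inactive timestamp,
                 ;; mirroring Journelly's own entry format.
                 "* %U\n%?"
                 :prepend t)))
```

Pulling in weather and location, as the iOS app does, would mean calling out to an external service from the template’s %-escapes, which is the “bit of work” the post alludes to.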
It’s only been a couple of hours with Journelly, so this isn’t meant to be a review, but first impressions are that it’s handy, simple, and could work well as an adjunct to my existing Org mode based note-taking setup.
From my perspective as the author, I'm building this app to fill a void I have, complementing my org-mode usage. In my opinion, it's not a question of whether to use Journelly over Emacs. I freakin' love Emacs org. I don't want to give it up. If the apps speak to each other, the question I'd rather ask is: why not use both?
When I’m on my computer, nearly all my writing goes through org-mode. I’ve been an org fan for well over a decade. My blog is powered by a single org file (be warned, it's a chunky file).
It's no secret I'm also an Emacs fan. I love how this platform moulds to my needs. But when I’m on the go and on my iPhone, I want a different experience — quick mobile access, minimal ceremony, optimized for smaller touch screens. I want to capture quick notes on the go, with as little friction as possible. Optionally, I want to include photos, lists, checklists, location, weather, timestamps… I also want this experience to feel like other well-integrated iOS apps. The way I like to put it is: Journelly sorta feels like tweeting, but for your eyes only.
At the same time, I don’t want my mobile note-taking experience to live in a data island. After all, I'm still an org fan. So... why not use both apps? My goal for Journelly is to provide a mobile-optimized experience that happens to speak org and thus complement my existing org usage.
While Journelly is offline by default, you may choose a different location for your data, enabling you to access it from your beloved editor.
Back to the original question: why use another tool for quick notes other than Emacs? Journelly isn't meant to replace Emacs, but rather complement it. In a way, Journelly isn't that different from Beorg, which was mentioned in JTR's post. Both apps speak org on iOS. It just so happens the apps offer slightly different targeted experiences. While Beorg is perhaps more geared toward task lists and calendars, Journelly focuses on short and quick notes.
P.S. Emacs org continues to be, and likely always will be, my writing epicentre. I now have three revolving org-based apps on the App Store, with Journelly soon to become the fourth one. If interested, check out my org bundle.
Here’s yet another post commenting on something that Bozhidar Batsov wrote on Emacs Redux. I like to write about his posts because they typically look at some Emacs functionality that we all thought we knew about and tell us things about that functionality that we didn’t know.
His latest post is about flyspell. I’ve been using flyspell for so long that I have no memory of when I started or how I discovered it. On the other hand, my sole use of it is to invoke flyspell-auto-correct-previous-word with its default binding of Ctrl+;. I was vaguely aware that you could do manual checks or even check a whole buffer but I never used them.
Batsov’s post discusses some of those other commands. You can, for example, use Ctrl+, to move to the next misspelling. Then you can use Ctrl+. or Ctrl+; to correct it.
I like Ctrl+; because the point doesn’t have to be on the misspelling; it corrects the previous error wherever it is. Batsov, on the other hand, likes Ctrl+c$ because it gives you a menu of possible corrections as well as options to accept it in the current buffer, accept it in the current session, or add it to your dictionary. The key binding is—for me—significantly more awkward so I’m going to stick with Ctrl+; unless I want to add the word to my dictionary.
But Batsov is a serious Emacs user and his recommendations shouldn’t be ignored. Read his post and see what you think. As usual, Emacs has got you covered whatever your preferences.
Irreal likes Ramírez’s Journelly. To each their own, sure, but after I read the review he mentioned, I’ve been scratching my head a bit. I don’t get it.
Irreal is a dedicated Emacs user, and I estimate he uses more Emacs in his day-to-day functions than I do - so why does he feel the need to use something that is not Emacs for quick notes?
One of my theories is iPhone usage. That’s probably a big one. Using org-mode on an iPhone is not easy. There’s Beorg, but it’s geared more toward task lists and calendars than taking notes (even though you can do that, especially if you have templates). Even better, you could include a timestamp every time you record a new note, which is one of Irreal’s requirements. I’m not sure if he wants to include voice recordings or dictate notes on his iPhone, but both of these things are pretty easy to do - exporting an audio note and attaching it to a header in org-mode is pretty straightforward and can probably be automated.
My issue with any such apps, especially if they’re meant to capture “everything” (like pictures, short videos, oral notes, etc.), is that pretty soon they start competing with org-mode. What follows is confusion about what I put where, which is usually followed by a short burnout of using either one. Then I have a period when I don’t save anything, and I regret it later.
org-mode is not perfect, but as long as I use it, I have one place where I know I can find whatever I need. At work, I usually keep detailed notes of what I did under each header with timestamps, even if they are just a few lines long. It’s more than just recording information: the act of writing clears up my head and helps me figure out what goes next, and the “emotional memory” (for lack of a better term) reflected in the mood of my notes helps me remember things later that I didn’t think of actually writing down. I can find old records this way, even if they happened years ago.
As I already said, to each their own. We all get to choose what tools we want to use. This is not about preaching; it’s just that his use case makes me put my Emacs thinking hat on and think about what I would do. These scenarios are interesting to solve.
I’d love to write him an email or comment, but I can’t find an email address, and the blog’s commenting system uses Disqus, which doesn’t let me log in with any accounts I want to use. Oh well. Maybe he will find this post.
I’ve been a long time user of flyspell-mode and flyspell-prog-mode, but admittedly I
keep forgetting some of its keybindings. And there aren’t that many of them to begin with!
This article is my n-th attempt to help me memorize anything besides C-c $. So, here we go:
M-t (flyspell-auto-correct-word) - press this while in a word with a typo to trigger auto-correct.
You can press it repeatedly to cycle through the list of candidates.
C-, (flyspell-goto-next-error) - go to the next typo in the current buffer
C-. (flyspell-auto-correct-word) - same as M-t
C-; (flyspell-auto-correct-previous-word) - automatically correct the last misspelled word. (you can cycle here as well)
Last, but not least, there’s the only command I never forget - C-c $ (flyspell-correct-word-before-point).
Why is this my go-to command? Well, its name is a bit misleading, as it can do two things:
auto-correct the word at (or before) point
add the same word to your personal dictionary, so Flyspell and ispell will stop flagging it
Good stuff!
There are more commands, but those are the ones you really need to know.1
If you ever forget any of them, just do a quick C-h m RET flyspell-mode.
Do you have some tips to share about using flyspell-mode (and maybe ispell as well)?
That’s all I have for you today! Keep fixing those typos!
P.S. If you’re completely new to flyspell-mode, you may want to check
this article on the subject as well.
Unless you’d like to trigger auto-correct with your mouse, that is. ↩
emacs-devel: A few explanations. I wonder where a good place to link to these would be; not quite news, but might be good to keep findable since emacs-devel search can be challenging
The stripspace Emacs package provides stripspace-local-mode, which automatically removes trailing whitespace and blank lines at the end of the buffer when saving.
(Trailing whitespace refers to any spaces or tabs that appear at the end of a line, beyond the last non-whitespace character. These characters serve no purpose in the content of the file and can cause issues with version control, formatting, or code consistency. Removing trailing whitespace helps maintain clean, readable files.)
It also includes an optional feature (stripspace-only-if-initially-clean, disabled by default), which, when enabled, ensures that trailing whitespace is removed only if the buffer was initially clean. This prevents unintended modifications to buffers that already contain changes, making it useful for preserving intentional whitespace or avoiding unnecessary edits in files managed by version control.
Add the following code to your Emacs init file to install stripspace from MELPA:
(use-package stripspace
  :ensure t
  :commands stripspace-local-mode

  ;; Enable for prog-mode-hook, text-mode-hook, and conf-mode-hook
  :hook ((prog-mode . stripspace-local-mode)
         (text-mode . stripspace-local-mode)
         (conf-mode . stripspace-local-mode))
:custom
;; The `stripspace-only-if-initially-clean' option:
;; - nil to always delete trailing whitespace.
;; - Non-nil to only delete whitespace when the buffer is clean initially.
;; (The initial cleanliness check is performed when `stripspace-local-mode'
;; is enabled.)
(stripspace-only-if-initially-clean nil)
;; Enabling `stripspace-restore-column' preserves the cursor's column position
;; even after stripping spaces. This is useful in scenarios where you add
;; extra spaces and then save the file. Although the spaces are removed in the
;; saved file, the cursor remains in the same position, ensuring a consistent
;; editing experience without affecting cursor placement.
(stripspace-restore-column t))
Features
Here are the features of stripspace-local-mode:
Before saving buffer: Automatically removes all trailing whitespace.
After saving buffer: Restores the cursor’s column position on the current line, including any spaces before the cursor. This ensures a consistent editing experience and prevents unintended cursor movement when saving a buffer and removing trailing whitespace. This behavior can be controlled by the stripspace-restore-column variable (default: t).
Even if the buffer is narrowed, stripspace removes trailing whitespace from the entire buffer. This behavior is controlled by the stripspace-ignore-restrictions variable (default: t).
An optional feature stripspace-only-if-initially-clean (default: nil), which, when set to non-nil, instructs stripspace to only delete whitespace when the buffer is clean initially. The check for a clean buffer is optimized using a single regex search for trailing whitespace and another for blank lines.
The stripspace-verbose variable, when non-nil, shows in the minibuffer whether trailing whitespaces have been removed or, if not, provides the reason for their retention.
The functions for deleting whitespace are customizable, allowing the user to specify a custom function for removing trailing whitespace from the current buffer.
The stripspace-clean-function variable allows specifying a function for removing trailing whitespace from the current buffer. This function is called to eliminate any extraneous spaces or tabs at the end of lines. (For example, this can be set to a built-in function such as delete-trailing-whitespace (default) or whitespace-cleanup.)
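For instance, to swap in the alternative built-in mentioned above:

```elisp
;; Use `whitespace-cleanup' instead of the default
;; `delete-trailing-whitespace' when stripping on save.
(setq stripspace-clean-function #'whitespace-cleanup)
```

whitespace-cleanup is stricter than delete-trailing-whitespace: it also normalizes things like leading tabs/spaces according to the whitespace-style settings.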
I just made it possible for users of my denote-journal package to
interact with the M-x calendar as part of their journaling workflow.
Highlight dates with a Denote journal entry
The new minor mode denote-journal-calendar-mode highlights dates in
the M-x calendar which have a corresponding Denote journal entry.
The applied face is called denote-journal-calendar: I made it draw
only a box around the date, thus respecting existing colouration. Here
is a demonstration, which also includes red-coloured dates for holidays:
The denote-journal-calendar-mode is buffer-local and meant to be
activated inside the M-x calendar buffer, thus:
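Presumably along these lines in one’s init file (my sketch, not a snippet taken from the package’s documentation):

```elisp
;; Enable the highlighting whenever a calendar buffer is created.
(add-hook 'calendar-mode-hook #'denote-journal-calendar-mode)
```

Being buffer-local, the mode then only affects the M-x calendar buffer itself.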
While navigating the calendar buffer, use the command
denote-journal-calendar-find-file to visit the Denote journal entry
corresponding to the date at point. If there are multiple journal
entries, the command will prompt you to select one among them.
Create or view journal entry for the current date
The command denote-journal-calendar-new-or-existing creates a new
journal entry for the date at point or visits any existing one. This is
like denote-journal-new-or-existing-entry but for the given M-x
calendar date.
Part of development
Remember that I have split denote into several packages, one of
which is denote-journal. I plan to coordinate the release of new
versions across all Denote-related packages, so expect the
aforementioned to be available at around the same time as denote
version 4.0.0 (which is going to be massive, by the way).
About Denote journal
The denote-journal package makes it easier to use Denote for
journaling. While it is possible to use the generic denote command
(and related) to maintain a journal, this package defines extra
functionality to streamline the journaling workflow.
The code of denote-journal used to be bundled up with the denote
package before version 4.0.0 of the latter and was available in the
file denote-journal-extras.el. Users of the old code will need to
adapt their setup to use the denote-journal package. This can be
done by replacing all instances of denote-journal-extras with
denote-journal across their configuration.
Denote is a simple note-taking tool for Emacs. It is based on the idea
that notes should follow a predictable and descriptive file-naming
scheme. The file name must offer a clear indication of what the note is
about, without reference to any other metadata. Denote basically
streamlines the creation of such files while providing facilities to
link between them.
Denote’s file-naming scheme is not limited to “notes”. It can be used
for all types of file, including those that are not editable in Emacs,
such as videos. Naming files in a consistent way makes their
filtering and retrieval considerably easier. Denote provides relevant
facilities to rename files, regardless of file type.
Everyone and their grandma has a Github Actions workflow these days, so it should come as no shock that even my Emacs config has one. I believe having one helps you find errors quicker, and helps you avoid some of the worst pitfalls. In this article I share my experience, which might help you decide if you should have an automation pipeline for your Emacs configuration as well.
The bufferfile Emacs package provides helper functions to delete and rename buffer files:
bufferfile-rename-file: Renames the file that the current buffer is visiting. This command renames the file on disk, adjusts the buffer name, and updates any indirect buffers or other buffers associated with the old file.
bufferfile-delete-file: Deletes the file associated with a buffer and kills all buffers visiting the file, including indirect buffers.
Customization: Making bufferfile use version control (VC), such as Git, when renaming or deleting files
To make bufferfile use version control (VC) when renaming or deleting files, you can set the variable bufferfile-use-vc to t. This ensures that file operations within bufferfile interact with the version control system, preserving history and tracking changes properly.
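That is, in your init file:

```elisp
;; Route renames/deletions through VC (e.g. `git mv' / `git rm')
;; so file history is preserved.
(setq bufferfile-use-vc t)
```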
I’ve written a couple of times about Álvaro Ramírez’s Journelly [1, 2] and how it seemed like a good fit for my memo book needs. In the last of those posts, Ramírez announced that he was starting an official beta program. I wrote that I might sign up but as usual inertia prevailed and I didn’t get around to it.
In his latest post on Journelly, Ramírez notes that Mac Observer has a review of Journelly. I hopped over there to take a look and was impressed. It’s a nice review and makes Journelly seem like an even better fit for my needs than I had previously thought. If you’re the slightest bit interested in Journelly or think you might be, you should definitely take a look at the Mac Observer review.
Ramírez’s posts on Journelly tend to emphasize its use as a sort of private Twitter/X, but the Mac Observer review makes clear that it can also be used as a kind of journaling app. One of the best parts, from my point of view, is that the data is saved in Org mode format so I should be able to integrate it into my Emacs workflow easily.
In any event, I was finally able to bestir myself to email Ramírez and ask for a beta invite. He responded right away with my invite and I installed it without a problem. I’ve never participated in an iOS beta before and wasn’t sure how to get the beta app installed. It turns out to be easy. Apple has a special app called TestFlight that automates everything. You simply install TestFlight, click on the invite, and everything else happens automatically.
I just finished the install and haven’t had time to play with Journelly yet but I’ll let you know what I think as soon as I get a bit of experience with it.
Upgrading Emacs is always a project, especially on macOS.
I’ve been using Emacs Plus on my Mac since I installed version 28, but when I went to update to version 30.1 through homebrew, I got a cryptic git error. From the little research I did, it has to do with commits being in the wrong place. The explanation was something that went over my head, so I shrugged it off and tried Emacs for macOS again. I forget why this Emacs flavor wasn’t for me in the past, but it seems to work fine now. Well, after I deleted and reinstalled marginalia, exactly as Irreal mentioned.
I was excited about completion-preview-mode (if Mickey is raving about something, you got to check it out), but so far, in my experience, it’s just “meh,” at least out of the box. After I got it to work in org-mode (the manual is a bit of a mess and seems to be thrown into the package itself, but I found out what to do in this YouTube video) it’s not much better than what company-mode gives me at the moment, so I’m going to wait until someone probably comes around with “actually, it’s much better because X, and you can find out how to do it over at Y,” or Mickey writes something more complete. The video I mentioned goes into some helpful examples, but I get lost in terms of how to set it up in my case.
Then, again per Mickey, there’s this: “The Org URI protocol should now register automatically, meaning you can send data from a browser bookmarklet straight into org capture in your running Emacs instance.” I remember I once got it (or something similar) to work with org-protocol, but a native option turned on like this seems very good, and I’d love to be able to send text and links from my browser directly to Emacs. I’m not sure how to get this to work either. I recall I need to run Emacs as a server in the background and then launch Emacs as a client, but this is beyond my current macOS kung fu. The issue with finding how is usually knowing what to look for. I need to be more specific about my research, but I’m not sure what it is I’m looking for.
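The server part, at least, is the standard Emacs setup; whether it suffices for the Org URI protocol on macOS is a separate question, but the init-file side is just:

```elisp
;; Start the Emacs server so external tools (emacsclient, org-protocol
;; bookmarklets) can talk to this running instance.
(require 'server)
(unless (server-running-p)
  (server-start))
```

After that, emacsclient <file> from a terminal, or an org-protocol:// URL handler, reaches the already-running Emacs instead of launching a new one.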
Upgrading Emacs is always rewarding because I also get to upgrade my brain with it. I learn how to do new things, and do them more effectively. I’m sure I’ll be back at tweaking pretty soon.
Defining shortcuts in org-speed-commands is
handy because you can use these single-key
shortcuts at the beginning of a subtree. With a
little modification, they'll also work at the
beginning of list items.
I want k to be an org speed command that cuts
the current subtree or list item. This is handy
when I'm cleaning up the Mastodon toots in my
weekly review or getting rid of outline items that
I no longer need. By default, k is mapped to
org-cut-subtree, but it's easy to override.
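A sketch of the override might look like this. The helper name my/org-cut-subtree-or-item is hypothetical; the idea is simply to dispatch on whether point is on a list item.

```elisp
(require 'org)

(defun my/org-cut-subtree-or-item ()
  "Cut the current subtree, or the list item at point."
  (interactive)
  (if (org-at-item-p)
      ;; On a plain-list item: kill from its beginning to its end.
      (let ((beg (progn (org-beginning-of-item) (point)))
            (end (progn (org-end-of-item) (point))))
        (kill-region beg end))
    ;; Otherwise fall back to the default subtree behavior.
    (org-cut-subtree)))

;; Replace the default binding of k in `org-speed-commands'.
(setf (alist-get "k" org-speed-commands nil nil #'string=)
      #'my/org-cut-subtree-or-item)
```

Speed commands only fire at the beginning of a headline (or, with org-use-speed-commands set to a predicate, wherever that predicate is true), so this stays out of the way during normal editing.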
I don't know about you, but when I'm reading something in an Org file and spot a link I want to follow, I instinctively press TAB to jump to it—just like I would in an Info or Help buffer. Using TAB for such field navigation is a common pattern across many applications, including Emacs. It’s also nicely symmetric with Shift-TAB (S-TAB), which typically navigates backward. But in Org mode, TAB triggers local visibility cycling: folding and unfolding text under the current headline. S-TAB cycles visibility globally, folding and unfolding all the headlines. (Granted, if you don’t use Info or navigate Help buffers with TAB, you might not miss that behavior in Org mode.)
See, we have this dichotomy in Org mode: it's both an authoring tool and a task/notes manager. For document authoring, Org markup serves as a source format that's later exported for publishing. In this context, visibility cycling is essential for managing structure and reducing distractions while writing. As a task and notes manager, Org is used to track notes, TODO lists, schedules, and data—content that's often read in place and never exported. Visibility cycling still helps, but it's generally less critical than in authoring mode.
This reading workflow within Org files makes me long for features found in more reading-focused modes. Sure, I don’t treat my Org files as read-only; reading and editing are fluidly intertwined. Still, when I'm focused on reading, I want the TAB key to handle navigation, not headline visibility cycling. And I don't want to switch to another mode like View mode just to get a better reading experience.
It's well known that the TAB key is heavily overloaded in Emacs, especially in Org mode. Depending on context and configuration, it can perform one of four types of actions: line indentation, candidate completion (during editing), or field navigation and visibility cycling (during reading). As mentioned earlier, TAB is commonly used for field navigation in Info and Help modes. But note that even Org mode uses it this way within tables. Its association with visibility cycling was unique to Org mode until recently, when it was made an option in Outline mode too.
Personally, I want to move in the opposite direction: removing visibility cycling from the list of TAB-triggered actions. Three types of behavior are already plenty. I'd rather assign visibility control to a more complex keybinding and prioritize field navigation instead. I'm not a big fan of cycling in general (see my previous blog post), and would prefer to jump directly to specific folding levels. I also value consistency in keybindings, so unifying TAB behavior across modes is important to me.
TAB: indentation and completion when editing; field navigation when reading
I decided to give it a try and remap TAB in Org mode to primarily perform field navigation. What exactly is considered a “field” is largely up to the user. In general, it should be a structural element in a file where a non-trivial action can be performed, making it useful to have an easy way to jump between them. For my setup, I chose to treat only links and headlines as fields, similar to how Info handles navigation. Of course, others might include property drawers, code blocks, custom buttons, or other interactive elements. I wouldn't overdo it though—too many fields and TAB navigation loses its utility.
I remapped TAB in Org mode to navigate to the next visible heading or link, and S-TAB to move to the previous one. Headlines and links inside folded sections are skipped. For visibility cycling, I now rely on Org Speed Keys (a built-in feature of Org mode).
Speed Keys let you trigger commands with a single keystroke when the point is at the beginning of a headline. They’re off by default but incredibly handy once enabled. A number of keys are predefined out of the box; for example, c is already mapped to org-cycle, which is what TAB normally does in Org mode.
I’ve had Speed Keys enabled for ages (mainly using them for forward/backward headline navigation), but I had never used c for visibility cycling—until now. And it gets even better: the combination of TAB / S-TAB to jump between fields, followed by a speed key at the headline, turns out to be quite powerful.
What about the other actions TAB usually performs in Org files? For now, I rely on M-x org-cycle when needed. The org-cycle command is quite sophisticated and can fall back to other TAB behaviors like indentation when appropriate. That said, I’ve been using my custom TAB / S-TAB bindings for months now and haven’t run into any situations where I missed the default behavior.
Want to give it a try? Here’s the code you can drop into your init.el:
(defun /org-next-visible-heading-or-link (&optional arg)
  "Move to the next visible heading or link, whichever comes first.
With prefix ARG and the point on a heading (link): jump over subsequent
headings (links) to the next link (heading), respectively.  This is useful
to skip over a long series of consecutive headings (links)."
  (interactive "P")
  (let ((next-heading (save-excursion
                        (org-next-visible-heading 1)
                        (when (org-at-heading-p) (point))))
        (next-link (save-excursion
                     (when (/org-next-visible-link) (point)))))
    (when arg
      (if (and (org-at-heading-p) next-link)
          (setq next-heading nil)
        (if (and (looking-at org-link-any-re) next-heading)
            (setq next-link nil))))
    (cond
     ((and next-heading next-link) (goto-char (min next-heading next-link)))
     (next-heading (goto-char next-heading))
     (next-link (goto-char next-link)))))
(defun /org-previous-visible-heading-or-link (&optional arg)
  "Move to the previous visible heading or link, whichever comes first.
With prefix ARG and the point on a heading (link): jump over subsequent
headings (links) to the previous link (heading), respectively.  This is
useful to skip over a long series of consecutive headings (links)."
  (interactive "P")
  (let ((prev-heading (save-excursion
                        (org-previous-visible-heading 1)
                        (when (org-at-heading-p) (point))))
        (prev-link (save-excursion
                     (when (/org-next-visible-link t) (point)))))
    (when arg
      (if (and (org-at-heading-p) prev-link)
          (setq prev-heading nil)
        (if (and (looking-at org-link-any-re) prev-heading)
            (setq prev-link nil))))
    (cond
     ((and prev-heading prev-link) (goto-char (max prev-heading prev-link)))
     (prev-heading (goto-char prev-heading))
     (prev-link (goto-char prev-link)))))
;; Adapted from org-next-link to only consider visible links
(defun /org-next-visible-link (&optional search-backward)
  "Move forward to the next visible link.
When SEARCH-BACKWARD is non-nil, move backward."
  (interactive)
  (let ((pos (point))
        (search-fun (if search-backward #'re-search-backward
                      #'re-search-forward)))
    ;; Tweak initial position: make sure we do not match current link.
    (cond
     ((and (not search-backward) (looking-at org-link-any-re))
      (goto-char (match-end 0)))
     (search-backward
      (pcase (org-in-regexp org-link-any-re nil t)
        (`(,beg . ,_) (goto-char beg)))))
    (catch :found
      (while (funcall search-fun org-link-any-re nil t)
        (let ((folded (org-invisible-p nil t)))
          (when (or (not folded) (eq folded 'org-link))
            (let ((context (save-excursion
                             (unless search-backward (forward-char -1))
                             (org-element-context))))
              (pcase (org-element-lineage context '(link) t)
                (link
                 (goto-char (org-element-property :begin link))
                 (throw :found t)))))))
      (goto-char pos)
      ;; No further link found
      nil)))
(defun /org-shifttab (&optional arg)
  "Move to the previous visible heading or link.
If already at a heading, move first to its beginning.  When inside a
table, move to the previous field."
  (interactive "P")
  (cond
   ((org-at-table-p) (call-interactively #'org-table-previous-field))
   ((and (not (bolp)) (org-at-heading-p)) (beginning-of-line))
   (t (call-interactively #'/org-previous-visible-heading-or-link))))
(defun /org-tab (&optional arg)
  "Move to the next visible heading or link.
When inside a table, re-align the table and move to the next field."
  (interactive)
  (cond
   ((org-at-table-p) (org-table-justify-field-maybe)
    (call-interactively #'org-table-next-field))
   (t (call-interactively #'/org-next-visible-heading-or-link))))
(use-package org
  :config
  ;; RET should follow link when possible (moves to next field in tables)
  (setq org-return-follows-link t)
  ;; must be at the beginning of a headline to use it; ? for help
  (setq org-use-speed-commands t)
  ;; Customize some bindings
  (define-key org-mode-map (kbd "<tab>") #'/org-tab)
  (define-key org-mode-map (kbd "<backtab>") #'/org-shifttab)
  ;; Customize speed keys: modifying operations must be upper case
  (custom-set-variables
   '(org-speed-commands
     '(("Outline Navigation and Visibility")
       ("n" . (org-speed-move-safe 'org-next-visible-heading))
       ("p" . (org-speed-move-safe 'org-previous-visible-heading))
       ("f" . (org-speed-move-safe 'org-forward-heading-same-level))
       ("b" . (org-speed-move-safe 'org-backward-heading-same-level))
       ("u" . (org-speed-move-safe 'outline-up-heading))
       ("j" . org-goto)
       ("c" . org-cycle)
       ("C" . org-shifttab)
       (" " . org-display-outline-path)
       ("s" . org-toggle-narrow-to-subtree)
       ("Editing")
       ("I" . (progn (forward-char 1)
                     (call-interactively 'org-insert-heading-respect-content)))
       ("^" . org-sort)
       ("W" . org-refile)
       ("@" . org-mark-subtree)
       ("T" . org-todo)
       (":" . org-set-tags-command)
       ("Misc")
       ("?" . org-speed-command-help)))))
A few comments about the code, for those interested:
This is more of a proof-of-concept than optimized code ready for upstreaming.
My /org-next-visible-link is a simplified version of the built-in org-next-link, tailored to the specific cases I care about. Honestly, I was surprised that org-next-link doesn’t already do what I need. It jumps to the next link even if it’s inside a folded section, causing it to unfold. I have a hard time imagining why anyone would need that.
In /org-tab and /org-shifttab, I preserved the default behavior of org-cycle within a table: it navigates between table fields.
I’ve also customized org-speed-commands to only bind editing actions to keys that require the Shift modifier. I like keeping lowercase keys reserved for non-destructive commands. As a next step, I may remap Space and Shift-Space to scroll the buffer. That would bring me even closer to a more consistent reading experience.
Enjoy the malleability of Emacs and the freedom it gives you!
Bozhidar Batsov has posted another in his long list of informative Emacs posts. This time, it’s about configuring Emacs garbage collection. The principal way of doing that is to set gc-cons-threshold to a higher value than its default of 800000. Batsov has his set to 50000000 and is wondering if he should increase it.
On the other hand, picking a correct value is far from trivial. Batsov includes some text from Eli Zaretskii, the current Emacs maintainer, that says setting the value too high will adversely affect performance.
Garbage collection is notoriously difficult to configure correctly in any application and I have no special insight into what the correct value should be in any particular situation. What I can do, though, is tell you what I do, which works very well for me. I virtually never see a pause for garbage collection. On the other hand, I don’t invoke a lot of memory hungry functions so my results may differ from yours.
I long ago gave up tweaking gc-cons-threshold and installed gcmh to handle garbage collection for me. Gcmh is based on the observation that if Emacs has been idle for 15 seconds, it will probably be idle longer, so that’s a good time to kick off garbage collection. It does this by setting the normal threshold very high, but when it sees that Emacs is idle, it sets it to a low value to trigger garbage collection. The explanation of how it works is here.
Gcmh tells you every time it kicks off garbage collection and I frequently see this when I stop typing for a bit. The system works really well for me. As I say, I never see a delay for garbage collection. If you’re seeing problems in that area, give gcmh a try and see how it works for you.
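If you want to try it, a typical setup is only a few lines; the numbers below are illustrative sketches, not recommendations (gcmh’s defaults are sensible on their own):

```elisp
;; Sketch of a gcmh setup (package available on MELPA).
(use-package gcmh
  :ensure t
  :config
  (setq gcmh-idle-delay 15                         ; idle seconds before GC runs
        gcmh-high-cons-threshold (* 64 1024 1024)) ; generous limit while busy
  (gcmh-mode 1))
```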
Recently I was wondering whether I should move on from Doom Emacs/Evil Mode and try a less opinionated setup based on more traditional key mappings. I was looking around for some sort of “starter kit” that provides some basic features that I have grown accustomed to while using Doom Emacs. Some of the options I looked at were:
Xah Lee’s Sample Init File
from his online tutorial. This is a name I have come across a number of times as I research Emacs, and I believe he also has a large collection of YouTube videos on the topic. The config is very basic, and doesn’t include any packages. It would take some time for me to work out how to implement features such as syntax highlighting, completion, etc. which, while very educational, is a bigger time commitment than I can currently afford.
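For reference, some of those features can be had from built-ins alone; a hypothetical starting point (not part of Xah Lee’s config) might be:

```elisp
;; Illustrative additions, not from Xah Lee's sample init: built-in
;; features covering highlighting and completion, no packages needed.
(global-font-lock-mode 1)  ; syntax highlighting (on by default in recent Emacs)
(fido-vertical-mode 1)     ; built-in vertical minibuffer completion (Emacs 28+)
(electric-pair-mode 1)     ; auto-insert matching brackets
(savehist-mode 1)          ; persist minibuffer history between sessions
```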
This was meant to be a shortish bit on a couple of points on a decent
IRC (Internet Relay Chat) set-up, including:
some sort of persistence (a “bouncer” or the always-connected server)
use with some sort of IRC client in Emacs
some sort of reasonable mobile phone client
It ended up longer than I intended.
tldr
Using some of the more IRCv3 tools, particularly the cluster of
things by Simon Ser and co (soju, goguma, gamja - see below) makes
for a much better IRC experience, and easily supports synchronised
access across multiple devices (e.g., desktop, laptops, mobile).
soju provides a very capable bouncer
goguma provides an excellent mobile client
gamja is a very user-friendly web-client front-end
and these three things can complement each other in a complete IRC
setup
we can make such a soju-based set-up work well with Emacs IRC clients
soju and gamja can be self-hosted; but there are also two
paid/hosted instances currently available:
we can leverage some of the IRCv3 functions nicely in Emacs
The most useful bits here may be notes on how to set up clients to
work with SourceHut’s chat.sr.ht, particularly Goguma and Emacs/ERC,
and some of the cool features of IRCv3 things [modernising IRC], like
the soju bouncer (which is used by chat.sr.ht), for which see the
section below “Smooth & Modern IRC Implementation & Practice
(mobile, Emacs)", which you might jump to if you don’t want to read a
bunch of preamble.
History, theory, background
Pre-history: wee chats in musl pies
Long ago now I spent a while setting up a RaspberryPi 3b (running musl
flavoured Void Linux) as a Weechat Server/Bouncer in order to make
using IRC less painful. This involved a lot of steps, including
setting up a NoIP script and SSL certs with Let’s Encrypt, and setting
up scripts to auto-fetch new certs. But once up, it worked well
(unless my home internet went out), and Weechat had a nice mobile app,
so I could connect both on desktop/laptop with an Emacs IRC client and
also had a pocket connection via my mobile
phone 1. This gave me a persistent connection and
log history and all that.
Then I started using Matrix (=a modern, but heavy, “IRC replacement”,
itself theoretically a Slack, Discord, &c. competitor) more, (in part) since some projects have
chosen it as a more modern/capable alternative to IRC, and, soon
after, it was the case that most of the IRC rooms I was in were
bridged to Matrix anyway. And then it seemed more convenient just to
access everything in one place; since some things were Matrix-only and
not on IRC, it made sense for that one place to be Matrix.
I toyed off and on with getting re-set up on IRC, but there have been
a lot of other things going on in life, and I didn’t feel like trying
to set up my rpi mini-server IRC bouncer again.
I tried some other hosted IRC bouncer services. But some of these
wouldn’t let me connect while on a VPN.
I found one good free (as in freedom and free as in no paisa) ZNC
bouncer service, which is FreeIRC.org 2
The ZNC bouncer worked fine on Emacs (with ERC; I’ve gone back and
forth between ERC and Circe, but this time I was trying ERC
again).3 And I eventually figured out how to connect to multiple
networks through the ZNC bouncer.
But the best thing I could find on mobile was the Revolution IRC
client, and, while it’s a nice enough front-end, it struggled with
maintaining a connection to the ZNC bouncer I was using. (I asked
about this in an IRC room, and the general consensus was that mobile’s
just not a good place to try to do IRC, but I remembered that via the
Weechat android app or through Glowing Bear I’d had a good mobile IRC
experience years ago.)
IRCv3, and other Korean tubers
I also came across another IRC mobile app, Goguma, but couldn’t get it
to work. But, frustrated with the situation, I wanted to see if,
assuming I could somehow get it working, Goguma might be a better
mobile solution.4
I ultimately ended up stumbling across an interesting lobste.rs
discussion of Goguma, which pointed to it being part of a set of
IRCv3-aware software, including the soju IRC bouncer, with a number of
the commenters on the lobste.rs thread comparing the experience
favourably against using ZNC.
Goguma (Korean 고구마 goguma “sweet potato”): IRC mobile client written in Flutter by Simon Ser
Gamja (Korean 감자 gamja “potato”): simple IRC web client written in Javascript by Simon Ser
Soju (Korean 소주 soju “a distilled alcoholic beverage”): IRC bouncer written in Go by Simon Ser
Senpai (Japanese 先輩 sèńpáí “senior in social standing or level of education/skill; an elder”): a modern terminal IRC client written in Go, started by ‘taiite', who handed development over to ‘delthas'.
Ergo (a play on ergonomic [and “ergonomadic"] and Go(lang), so I suppose ultimately from Ancient Greek ἔργον érgon “work”, but not sure that’s relevant): a modern IRC server written in Go [Jeremy Latt, Daniel Oaks, Shivaram Lingamneni] implementing bleeding-edge IRCv3 features/support
The first three things are the most relevant for us (well, me). Ergo
sounds great, but I’m not running an IRC server myself at the moment;
and I’ve got a whole wealth of choices of Emacs IRC clients if I’m on
desktop/laptop, so Senpai doesn’t seem relevant either for my use
case.
Trains, Wayland, and Things that Go
The first three of the IRCv3-looking things are all by Simon Ser, who
was lately at Drew DeVault‘s SourceHut, but seems to have left
sometime in 2024 6 and now works at Open Source Railway Designer
(something to do with making tools for railway infrastructure
simulation, which sounds quite cool: open source trains). Ser
obviously works a good bit on IRC-related software, and also a lot of
things in Go, and has been involved with Wayland-related projects
(taking over maintainership of Sway (a Wayland window manager) and
wlroots (a general-purpose Wayland compositor library underlying Sway
and other things) from DeVault some years back).
Here’s Thomas Flament & Simon Ser talking about IRCv3 and soju,
senpai, gamja, goguma at FOSDEM ‘25 in Brussels:
In any case, one of the paid-only features that SourceHut offers is
via chat.sr.ht, an IRC bouncer, which is running on a fork of soju,
with a web-client based on gamja.
As far as I can tell, Sourcehut’s servers are now (all? mainly?) in
Amsterdam.7
chat.sr.ht isn’t the only hosted IRC bouncer service which runs some
version of the soju bouncer; there is also IRC Today [discussed also
on lobste.rs], which is hosted in Paris on Scaleway
servers.8
So, there are at least two apparently EU-based hosted “IRCv3-forward”
bouncers.
Smooth & Modern IRC Implementation & Practice (mobile, Emacs)
How to manage to drink rice liquor in a hut: preliminaries
In any case, I signed up for the SourceHut offering at chat.sr.ht,
which uses their fork of soju.
Here’s what I did to get it set up for mobile and in Emacs (it seemed
worth documenting, in what was meant to be a short post….):
The main things one needs to know are in the chat.sr.ht ‘manpages’,
especially the ‘quickstart guide’. But here are a few more notes for
specific set-up on Goguma and Emacs.
Nb: on mobile, you have to be careful to actually manage to copy the
entire token with “Select all” or the like - I was stuck for a long
time not being able to make Goguma work because I’d not managed to
copy the entire token, and this wasn’t at all obvious in a mobile
browser. On desktop, it’s not an issue.
Goguma (mobile): One potato, two potato, sweet potato
For Goguma, you’re just going to put in:
Server: chat.sr.ht
Nickname: <your SourceHut username>
Password: <an OAuth2.0 token you generated as above>
And you’re in. And here’s what it looks like:
Figure 1: Goguma with normal “bubble” display mode in #sr.ht@libera.chat
And there’s a setting in Goguma (“Compact message list”) to make it
look less like a mobile messaging app and just have plain text if you
prefer that:
Figure 2: Goguma in “compact message list” display mode in #sr.ht@libera.chat
Emacs machines and authority tokens: what to put in your authinfo.gpg and init.el
In your ~/.authinfo.gpg file, add a line like this (with <>‘ed items
to be replaced fully; don’t actually include any <>'s here or above or
elsewhere; where "my-sourcehut-id" is your actual sourcehut login id
in quotes):
machine chat.sr.ht login "my-sourcehut-id" password "<one of your OAuth 2.0 tokens generated at SourceHut meta>"
And then in your ~/.emacs.d/init.el, assuming you’re using ERC and have
that configured otherwise to your liking, make a user-function for
each actual IRC server you want to connect with via chat.sr.ht/soju,
like this (assuming your user-id is "my-sourcehut-id": replace
appropriately— and you might have different :nick's (“nicknames” ≈
user-handles/user-names/identities) on different servers, of course,
so adjust those as well; but your :username is going to be
consistently your SourceHut username followed by
“/<the-particular-irc-server-address>"):
;; just so we don't have to type it each time, a wrapping `let',
;; and so after this our `server', `port', & `passwd' will be thus defined
;; (you could do `:nick' like this too if it's the same everywhere)
(let ((server "chat.sr.ht")
      (port 6697)
      (passwd (cadr (auth-source-user-and-password
                     "chat.sr.ht" "my-sourcehut-id"))))
  ;; `auth-source-user-and-password' returns a list of `login' and `password'
  ;; from your .authinfo.gpg; but we just need the second, so `cadr' (= return
  ;; the `car' of the `cdr'), because since the list is going to be something
  ;; like `(mylogin mypassword)', this will give us just the atomic
  ;; `mypassword', which is what we want [because the `cdr' of
  ;; `(mylogin mypassword)' will be `(mypassword)' and the `car' of
  ;; `(mypassword)' will be just `mypassword'].
  (defun erc-libera ()
    "Connect to Libera.Chat IRC server."
    (interactive)
    (erc-tls :nick "emacsuser007"
             :server server
             :port port
             :user "my-sourcehut-id/irc.libera.chat"
             :password passwd))
  (defun erc-oftc ()
    "Connect to OFTC IRC server."
    (interactive)
    (erc-tls :nick "emacsuser007"
             :server server
             :port port
             :user "my-sourcehut-id/irc.oftc.net"
             :password passwd))
  (defun erc-ircnow ()
    "Connect to IRCNow IRC server."
    (interactive)
    (erc-tls :nick "emacsuser007"
             :server server
             :port port
             :user "my-sourcehut-id/irc6.ircnow.org"
             :password passwd))
  (defun erc-ergo ()
    "Connect to IRCv3-forward Ergo IRC server."
    (interactive)
    (erc-tls :nick "emacsuser007"
             :server server
             :port port
             :user "my-sourcehut-id/irc.ergo.chat"
             :password passwd)))
Here we’re making connections for Libera, OFTC, IRCNow, and Ergo, the
IRCv3-forward server implemented in Go we mentioned above.9
But you can add lots of different servers to your bouncer, and both in
Goguma and Emacs/ERC (just make a new erc-<server> function as above),
you’ll just be able to get to all of the different rooms in one place
and don’t necessarily need to worry about which server a particular
thing is on after getting it set up.
And then you can make a “connect to all the things function” like this:
(defun erc-connect-all ()
  "Connect to all IRC servers above."
  (interactive)
  ;; we're just calling all of the functions we defined above here:
  (erc-libera)
  (erc-oftc)
  (erc-ircnow)
  (erc-ergo))
And what it looks like (well, it looks like however you’ve configured
ERC or whatever Emacs IRC client you’re using, but, in case you want
to see a possible way it could look):
Figure 3: in ERC on chat.sr.ht visiting #sr.ht@libera.chat
If you get knocked offline momentarily, ERC will try to reconnect to
the bouncer. If you are knocked off or close Emacs for a while, you
might want to catch up on what you missed recently. And there’s a nice
IRCv3 feature implemented in soju (and in chat.sr.ht) for this, as
discussed in the next section.
Leveraging IRCv3 features in Emacs
We can also do another convenient thing for our Emacs IRC
configuration now. IRCv3 defines a “chathistory” extension, as an easy
way of retrieving earlier IRC room content. This doesn’t work most
places, but it works on any bouncers based on soju (see notes below),
the syntax being /quote CHATHISTORY LATEST <channelname> * <number-of-lines-to-fetch>.
We can make this more user-friendly in Emacs with something like
this, an interactive command that fetches history for whatever IRC
room it’s called in (100 lines by default; prefix C-u to enter a
different number):
(defun erc-chatsrht-give-me-more-irc-history ()
  "Get more history for current IRC buffer (IRCv3 only).
Defaults to 100 lines of history; when C-u prefixed, asks user for
number of lines to fetch.
If using an IRCv3 capable server/bouncer (like chat.sr.ht), fetch the
chat history via the IRCv3 chathistory extension.  (Currently, only
soju-based servers implement this feature; see:
https://ircv3.net/software/clients)
For more on chathistory, see:
- https://man.sr.ht/chat.sr.ht/bouncer-usage.md#chat-history-logs
- https://ircv3.net/specs/extensions/chathistory
- https://soju.im/doc/soju.1.html"
  (interactive)
  (if (not (member major-mode '(erc-mode circe-mode rcirc-mode)))
      (message "not an IRC buffer; ignoring")
    (let ((lines 100)
          (channel (buffer-name)))
      (when current-prefix-arg
        (setq lines (read-number "How many lines to fetch: " lines)))
      (erc-send-input
       (concat "/quote CHATHISTORY LATEST " channel " * "
               (number-to-string lines))
       t))))
Then you can just call erc-chatsrht-give-me-more-irc-history in an IRC
buffer (you might find a convenient keybinding for it) to get prior
IRC chat in that buffer.
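For example (the key choice here is mine, purely illustrative), you could bind it in ERC buffers like so:

```elisp
;; Hypothetical binding: make the history fetcher available on
;; C-c h in ERC buffers.
(with-eval-after-load 'erc
  (define-key erc-mode-map (kbd "C-c h")
              #'erc-chatsrht-give-me-more-irc-history))
```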
This sort of thing makes Emacs a very pleasant environment to manage
IRC in.
Now, for actually adding servers to your chat.sr.ht bouncer in the
first place, you might try out the gamja web client (see below).
Inline images and files in Goguma and Emacs
Oh. Gamja (at least as implemented by chat.sr.ht) doesn’t seem to do this, but
Goguma fetches image links and displays a preview:
Figure 4: in the goguma mobile client, an image preview in #dwarffortress@libera.chat
[Note: chat.sr.ht’s soju fork doesn’t currently offer direct
file-upload and is perhaps unlikely to, but this seems to be
implemented in upstream soju and maybe on IRC Today.]
Additional note: Goguma doesn’t do this by default; you need to go
into the main “Settings” menu and enable “Display link previews”. (I
also have “Send & display typing indicators” turned on here.)
And you can make ERC do similarly, with respect to fetching posted
image links, with the erc-image package from MELPA. I have a
use-package definition like this:
(use-package erc-image
  :ensure t
  :after erc
  :config
  (setq erc-image-inline-rescale 300)  ; maybe set bigger
  (add-to-list 'erc-modules 'image))
And then you’ll see something like this if you post an image link in ERC:
Figure 5: in ERC, an image preview in #dwarffortress@libera.chat
I’d like to figure out how one might better manage it, but — in terms
of handling posting local files — there’s a (not-on-MELPA) package
imgbb.el which will upload images in Emacs (when passed a file-path or
interactively from a file-chooser) to imgbb, and then put the link in
the clipboard.10
[Note: For better or worse, erc-image will pull images from http as
well as https addresses; while Goguma seems only to do the latter.]
Web-client: The sweetness of non-sweet potatoes on the web
The web interface for chat.sr.ht (at https://chat.sr.ht) is, again,
based on gamja (“potato”) (discussed above) and gamja is a nice
interface indeed.
Figure 6: in the gamja web-client at chat.sr.ht visiting #sr.ht@libera.chat
And, for making your initial connections to different IRC servers, it
may be easier just to use this web interface, especially if you’ve
already got accounts set up on any of the servers, as for most of them
you’ll be able to enter your username and password and so on in the
“Add server” interface and not have to do all of the /msg NickServ .... stuff one usually does. (If you don’t have an account already,
you’ll have to register in some way, maybe by messaging NickServ).
Looks like this:
Figure 7: in the gamja web-client at chat.sr.ht in the server-add menu
In fact, even for signing in/up on a brand new server, at least on some
servers, gamja lets you click on “login” or “register” and pulls up a
pop-up box to fill in password, email, &c. So it’s quite convenient
(and you may not have to message NickServ manually).
Wrap-up & evaluation
There are interesting IRC things still going on, both in terms of
active IRC communities (though this varies; #emacs is quite active,
#dwarffortress isn’t; Discord has eaten into some spaces), and
innovations in IRC technologies and tools.
I’m not sure what the best choice is in terms of a soju/gamja
ecosystem. chat.sr.ht seems to be the cheapest hosted option; IRC
Today is more expensive but may currently be more
featureful. SourceHut has had trouble with being DDOS’ed/going
off-line (see, e.g., SourceHut network outage
post-mortem).
Self-hosted set-ups are another option, but are likely to be more
fiddly/effort to run/maintain than a “professional” service. But here
are a few links to descriptions of people’s self-hosted soju setups:
Currently, chat.sr.ht is working well for me, even if it may be behind
on some mainline soju features.
Appendix: etymologies
You probably don’t need to know any of this. It’s not going to help
you log into Goguma or make your Emacs config work.
But the naming of a lot of the IRC things above is strange and I’m a
linguist by trade and it’s hard for me to avoid paying a lot of
attention to words and meanings of words and connections of words.
(While on the topic of mostly irrelevant things: I have some
other-language interference (Hindi here) with the Korean names of some
of the software discussed here. Specifically, gamja makes me think
either गंजा ganjā “bald” or गाँजा gā̃jā “ganja, cannabis”; and soju keeps
making me think of an elided version of सो जाऊं so jāū̃ “shall I go to
sleep?” [“नहीं! आप आई॰आर॰सी॰ बाउंसर हैं, आपको सो नहीं जाना चाहिए! No!
You’re an IRC bouncer: you should not go to sleep!"])
At any rate, here are, unasked for, etymological notes on the four IRCv3
things with Korean or Japanese names, largely sourced from Wiktionary,
as indicated, with some connecting text:
Named apparently after Korean 고구마 goguma “sweet potato” (cp. the
web client by the same team, gamja “potato”). First attested in the
Mulmyeonggo (물명고 / 物名考), 1824, as Early Modern Korean 고금아
(Yale: kokuma), borrowed from Japanese 孝行芋 (kōkō imo), a term used
in the Tsushima dialect. Some earlier attestations are known, but they
are in the context of quoting the dialectal Japanese word, not in a
Korean context.
There’s also apparently a recent “Internet slang” sense: “a plot
development which frustrates the reader (e.g. the protagonist fails to
achieve their goal)” [from c. 2012].
Compare Proto-Hmong-Mien *wouH (“taro”), Burmese ဝ (wa., “elephant
foot yam”), Tibetan གྲོ་མ (gro ma, “ Argentina anserina (syn. Potentilla
anserina), a plant with small edible tubers”).
There are various theories on how all these words are related:
Schuessler (2007) considers it to be an areal word, comparing it to
the Hmong-Mien and Burmese words. Schuessler (2015) does not
consider the Tibetan word to be cognate.
Blench (2012) suggests that the Chinese word is borrowed from
Proto-Hmong-Mien and that the Burmese word may be a late loan from
Old Chinese.
STEDT reconstructs Proto-Sino-Tibetan *g/s-rwa (“taro; yam; tuber”),
whence the Tibetan word. This etymon is regarded as allofamically
related to this word and 薯 (OC *djas).
Gong Hwang-cherng (2002) and Baxter and Sagart (2017) also suggest
that this word is related to the Tibetan word.
Seemingly named from Korean 감자 gamja “potato”, which is a
nativisation of the Sino-Korean term 감저 (甘藷, gamjeo, “lesser yam
(Dioscorea esculenta)”). First attested 1766 in Korea, then referring
to the sweet potato (Ipomoea batatas). The word came to refer to both
“potato” and “sweet potato” in the nineteenth century, and later lost
its original meaning. (So gamja meant “sweet potato” first; but
partially supplanted by goguma.)
Seemingly named for Korean Soju (소주; 燒酒), which means “burned
liquor”: with the first syllable, so (소; 燒; “burn”), referring to
the heat of distillation and the second syllable, ju (주; 酒), meaning
“alcoholic drink”. Etymological dictionaries record that China’s
shaozhou (shāojiǔ, 烧酒), Japan’s shochu (shōchū, 焼酎), and Korea’s
soju (soju, 燒酒) have the same etymology. [A Wikipedian has here
added a note about unreliable sources(?).]
Another name for soju is noju (노주; 露酒; “dew liquor”), with its
first syllable, no/ro (노/로; 露; “dew”), likening the droplets of the
collected alcohol during the distilling process to dewdrops.
The origin of soju dates back to 13th-century Goryeo. The Yuan Mongols
[the imperial dynasty of China founded by Kublai Khan, grandson of
Genghis Khan] acquired the technique of distilling arak from the
Persians during their invasions of the Levant, Anatolia, and Persia,
and in turn introduced it to the Korean Peninsula during the Mongol
invasions of Korea (1231–1259). Distilleries were set up around the
city of Gaegyeong, the then-capital (current Kaesong). In the areas
surrounding Kaesong, soju is still called arak-ju (아락주). Andong
soju, the direct root of modern South Korean soju varieties, started
as the home-brewed liquor developed in the city of Andong, where the
Yuan Mongols’ logistics base was located during this era.
Obviously named for Japanese 先輩 senpai “senior/superior in social
standing or education or skill; an elder” (the homepage has a tagline
“Your everyday IRC student” and also “Welcome home, desune~” and also a smiling anime-ish cat-ish logo).
The Japanese word was borrowed from the Middle Chinese 先輩 (sen
pwojH). A doublet of ソンベ (sonbe) [With Japanese sonbe being itself
borrowed from Korean 선배(先輩) (seonbae), which means “(chiefly in
Korean media) sunbae (upperclassman or senior, in the context of
Korea)”].
Both ultimately from Chinese 先輩 [ xiānbèi ] “older generation;
senior; elder; ancestor; predecessor”.
I’m not entirely sure why Goguma is named after the
Korean word for “sweet potato”. Though Wiktionary does report there’s
also apparently a recent “Internet slang” sense of the word: “a plot
development which frustrates the reader (e.g. the protagonist fails to
achieve their goal)” [from c. 2012]; that may be a red herring.
It’s obviously connected to one of Ser’s other IRC applications,
gamja, and maybe it’s just wanting to name things after Korean words
for root vegetables for some reason. I would have said it was a play
on Go, but Goguma isn’t written in Go.
Though soju (which is written in Go), while also a Korean word,
doesn’t quite fit the root-vegetable pattern. (Nor does senpai, but
that’s by different authors [though Korean 先輩 (in hangeul 선배)
seonbae “upperclassman, senior” exists too, and is cognate with
senpai, both ultimately borrowed from Middle Chinese 先輩 (sen pwojH)
“elder, senior, ancestor”].) Well, see more of this in the appendix,
if it’s your sort of thing. ↩︎
I think this is what is referred to in the June 2024
SourceHut blog post, “…You may have heard that we also had to part
ways with one of our staff members recently. This reduces our
headcount to two. For the time being we will not be hiring a
replacement, but our near-future plans are achievable with our current
headcount. Though we usually aim for transparency to the maximum
extent possible, we will not be sharing further details about this
departure, as a matter of reasonable privacy.” ↩︎
See the 2024-06-04 blog post “The state of SourceHut and our
plans for the future”, where DeVault says “Also, as a happy
side-effect of our surprise move to Amsterdam, SourceHut’s datacenter
installation is now entirely powered by renewable energy sources. We
have also finally rolled out IPv6 support for most SourceHut services
as part of our migration!” ↩︎
In the “Who is behind IRC Today” FAQ, they say “We have
co-developed the open source piece of software used in our service,
soju.” I suspect this “we” includes, at least, delthas (the current
developer of senpai mentioned above, who also has a number of commits
to the soju repo, and who is in France) and Thomas Flament.
(For whatever reason, there seem to be at least two open source
developers who are both (a) French, and with interests in (b) IRCv3
and (c) Golang and (d) Korean or Japanese foodstuffs/culture.) ↩︎
If you’re on IRC at all, you probably want to be on
Libera (which is where the main body of Freenode went after Freenode
was bought by the founder of Private Internet Access VPN, Andrew Lee,
pretender to the defunct throne of Joseon and the Korean Empire; Korea
seems tied up in strange ways to IRC), and lots of free software
projects which aren’t on Libera are on OFTC [=Open and Free Technology
Community] (e.g., Alpine Linux). ↩︎
The persist-text-scale Emacs package provides persist-text-scale-mode, which ensures that all adjustments made with text-scale-increase and text-scale-decrease are persisted and restored across sessions. As a result, the text size in each buffer remains consistent, even after restarting Emacs.
(By default, persist-text-scale-mode saves the text scale individually for each file-visiting buffer and applies a custom text scale for each special buffer. This behavior can be further customized by assigning a function to the persist-text-scale-buffer-category-function variable. The function determines how buffers are categorized by returning a category identifier based on the buffer’s context. Buffers within the same category will share the same text scale.)
Features
Automatically persists and restores the text scale for all buffers.
Periodically autosaves at intervals defined by persist-text-scale-autosave-interval (can be set to nil to disable or specified in seconds to enable).
Supports unified text scaling across buffer categories.
Offers fully customizable logic for categorizing buffers based on text scale.
Lightweight and efficient, requiring minimal configuration.
Enables custom buffer categorization by specifying a function for the persist-text-scale-buffer-category-function variable, ensuring that groups of buffers share the same persisted and restored text scale.
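As a sketch of the categorization hook described above (the exact return contract of the category function is whatever persist-text-scale documents; the function body and the "org-buffers" identifier here are illustrative assumptions), you could make every Org buffer share one persisted text scale:

```elisp
;; Hypothetical example: group all Org buffers under one shared text
;; scale, letting other buffers fall back to the default behavior
;; (returning nil here is assumed to mean "use the default category").
(setq persist-text-scale-buffer-category-function
      (lambda ()
        (when (derived-mode-p 'org-mode)
          "org-buffers")))
```

Any buffers for which the function returns the same identifier would then restore to the same text scale.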
Installation: install with straight.el (Emacs version < 30).
A well-known piece of Emacs performance advice is to boost the garbage collector
threshold, so that GC collections happen less frequently. That’s something I’ve had in
my Emacs config for ages:
;; reduce the frequency of garbage collection by making it happen on
;; each 50MB of allocated data (the default is on every 0.76MB)
(setq gc-cons-threshold 50000000)
Probably I should increase it to 100MB+ these days, given the proliferation of more resource-hungry
tooling (e.g. LSP). On the other hand there are also some counter arguments to consider
when it comes to setting a high GC threshold:
The GC threshold setting after init is too high, IMNSHO, and its value seems arbitrary.
If the OP thinks that Emacs will GC as soon as it allocates 100 MiB, then
that’s a grave mistake! What really happens is the first time Emacs considers
doing GC, if at that time more than 100 MiB have been allocated for Lisp
objects, Emacs will GC. And since neither Lisp programs nor the user have any
control on how soon Emacs will decide to check whether GC is needed, the
actual amount of memory by the time Emacs checks could be many times the value
of the threshold.
My advice is to spend some time measuring the effect of increased GC threshold
on operations that you care about and that take a long enough time to annoy,
and use the lowest threshold value which produces a tangible
improvement. Start with the default value, then enlarge it by a factor of 2
until you see only insignificant speedups. I would not expect the value you
arrive at to be as high as 100 MiB.
– Eli Zaretskii, Emacs maintainer
One thing that’s less commonly known is that lifting the GC limits during Emacs startup
can speed it up quite a lot (the actual results will be highly dependent on your setup).
Here’s what you need to do - just add the following bit to your early-init.el:
;; Temporarily increase GC threshold during startup
(setq gc-cons-threshold most-positive-fixnum)

;; Restore to normal value after startup (e.g. 50MB)
(add-hook 'emacs-startup-hook
          (lambda ()
            (setq gc-cons-threshold (* 50 1024 1024))))
most-positive-fixnum is a neat constant that represents the biggest positive
integer that Emacs can handle. There’s also most-negative-fixnum that you might
find handy in some cases.
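If you want to follow Eli's advice and actually measure, Emacs keeps running GC statistics in the built-in variables gcs-done (number of collections) and gc-elapsed (total GC time in seconds), so a rough before/after comparison is easy. A minimal sketch; the dotimes body is just a placeholder workload:

```elisp
;; Count collections and GC time around an operation you care about.
;; `gcs-done' and `gc-elapsed' are built-in Emacs counters; the
;; allocation loop below is a stand-in for your real workload.
(let ((gcs gcs-done)
      (secs gc-elapsed))
  (dotimes (_ 1000)
    (make-string 10000 ?x))          ; placeholder allocation-heavy work
  (message "GCs: %d, GC time: %.3fs"
           (- gcs-done gcs)
           (- gc-elapsed secs)))
```

Repeating this with different gc-cons-threshold values lets you find the lowest threshold that still gives a tangible improvement, as suggested above.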
As for early-init.el: it was introduced in Emacs 27 and is executed before
init.el. Its primary purpose is to allow users to configure settings that need
to take effect early in the startup process, such as disabling GUI elements or
optimizing performance. This file is loaded before the package system and GUI
initialization, giving it a unique role in customizing Emacs startup behavior.
Here are some other settings that people like to tweak in early-init.el:
;; Disable toolbars, menus, and other visual elements for faster startup:
(menu-bar-mode -1)
(tool-bar-mode -1)
(scroll-bar-mode -1)
(setq inhibit-startup-screen t)

;; Load themes early to avoid flickering during startup (you need a built-in theme, though)
(load-theme 'modus-operandi t)

;; Tweak native compilation settings
(setq native-comp-speed 2)
I hope you get the idea! If you have any other tips on speeding up the Emacs
startup time, I’d love to hear them!
In this ~16-minute video, I demonstrate the new, in-development “query
links” functionality of Denote. These are links that trigger a search
when you interact with them. There are two types of query links: (i)
search in file contents, or (ii) search in file names. When there are
matches for a given query, those are displayed in a separate buffer,
which uses the appropriate major mode. Query links complement the
“direct links” Denote has always supported. Internally, they use the
same infrastructure that Denote backlinks rely on (and we have had
backlink support since the beginning).
Denote sources
Denote is a simple note-taking tool for Emacs. It is based on the idea
that notes should follow a predictable and descriptive file-naming
scheme. The file name must offer a clear indication of what the note is
about, without reference to any other metadata. Denote basically
streamlines the creation of such files while providing facilities to
link between them.
Denote’s file-naming scheme is not limited to “notes”. It can be used
for all types of file, including those that are not editable in Emacs,
such as videos. Naming files in a consistent way makes their
filtering and retrieval considerably easier. Denote provides relevant
facilities to rename files, regardless of file type.
In the last post I described how, for a particular project with certain
naming conventions, I wanted to use Emacs tree-sitter font-locking to
highlight variables that might be used in the wrong context:
I found a way to make it work for simple expressions but it was relying on
regular expression matching rather than tree-sitter parse trees. I set a
goal of handling these types of expressions:
elevation_t[t] // good
elevation_t[r] // bug
elevation_t[t_from_r[r]] // good
elevation_t[r_from_t[t]] // bug
elevation_t[obj.t] // good
elevation_t[obj.r] // bug
elevation_t[t_from_r(r)] // good
elevation_t[r_from_t(t)] // bug
elevation_t[x.t_fn(x)] // good
elevation_t[x.r_fn(x)] // bug
elevation_t[x.t_arr[i]] // good
elevation_t[x.r_arr[i]] // bug
elevation_t[(t)] // good
elevation_t[(r)] // bug
What are my options for using tree-sitter to handle more cases? I decided to try
the :pred feature of tree-sitter font lock. Using M-x treesit-explore-mode,
I learned that the subscript expression looks like:
Those two arguments are treesit parse nodes. What can I do with them? The
Emacs manual has a page about how to navigate the parse tree and another
page about querying the parse tree. In the docs I found a function
treesit-query-capture that can pattern match on a node and extract the
parts.
I used that function to look for the identifier that most likely tells me
what geometry type is being returned (r for region, s for side, t for
triangle). Examples:
in t_foo_r(r), the identifier t_foo_r tells me this is likely a t (triangle)
in t_obj[i].r_elevation, the identifier r_elevation tells me this is likely an r (region)
in map[i].s_inner_t(t_queue[0]), the identifier s_inner_t tells me this is likely an s (side)
I think this is where tree-sitter can do better than regular expression
based matching. I wrote a function to find the key identifier:
(defun amitp/treesit-top-typescript-node (node)
  "Try to find the identifier that represents a node's return value."
  (let ((results
         (treesit-query-capture
          node
          '(((identifier) @identifier)
            ((member_expression property: (property_identifier) @property))
            ((subscript_expression object: (_) @expr))
            ((call_expression function: (_) @expr))
            ((parenthesized_expression (_) @expr))))))
    (if (assoc 'expr results)
        (amitp/treesit-top-typescript-node (cdr (assoc 'expr results)))
      ;; prefer the property if tree-sitter matches both
      (or (cdr (assoc 'property results)) (cdr (assoc 'identifier results))))))
But there's something a little weird here. Why do r+1 and 1+r get flagged
even though I never actually handle binary operators in my
amitp/treesit-top-typescript-node function? Something to investigate later.
The next feature I wanted was to highlight only the "type" character
instead of the entire identifier. I couldn't figure out how, so I asked on
Reddit, where /u/eleven_cupfuls gave me a solution: the @face-name can be a
@function-name instead. Thank you!
Instead of using :pred to tell font-lock whether to highlight something, I
need to highlight it myself. I changed the treesit font lock rules to call
my function instead of applying a face:
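A sketch of what such a rule could look like (the :feature name and the exact query are my assumptions, not the post's actual code): in treesit-font-lock-rules, a capture name that is a function symbol rather than a face gets called with the node and is expected to do the highlighting itself:

```elisp
;; Hypothetical sketch: route subscript expressions to a custom
;; highlighter function instead of applying a face directly. Emacs
;; calls the capture function with (NODE OVERRIDE START END ...).
(treesit-font-lock-rules
 :language 'typescript
 :feature 'subscript-mismatch
 :override t
 '((subscript_expression) @amitp-highlight-subscript-mismatch))
```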
My function now receives the entire expression instead of receiving
the array and index parts separately like the :pred version. So I need to
find a way to extract the right information. I can use
treesit-query-capture like before, right?
It was at this point that I learned treesit-query-capture pattern matches
against the entire tree and not only the top node. That explains why it was
finding r+1 and 1+r. It was looking for any identifier anywhere in the
subtree. Oops.
I rewrote amitp/treesit-top-typescript-node to use treesit-node-type
instead of treesit-query-capture:
(defun amitp/treesit-top-typescript-node (node)
  "Try to find the identifier that represents a node's return value."
  (pcase (treesit-node-type node)
    ("identifier" node)
    ("member_expression" (treesit-node-child-by-field-name node "property"))
    ("subscript_expression" (amitp/treesit-top-typescript-node (treesit-node-child-by-field-name node "object")))
    ("call_expression" (amitp/treesit-top-typescript-node (treesit-node-child-by-field-name node "function")))
    ("parenthesized_expression" (amitp/treesit-top-typescript-node (treesit-node-child node 1)))
    (_ nil)))
It would've been cleaner if pcase could pattern match against these nodes,
but it's not too bad.
And then I wrote a new amitp-highlight-subscript-mismatch function to
analyze subscript nodes (and function calls too) to make sure the names
match:
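As a hedged sketch of that idea (the function body, the trailing-letter heuristic, and the field names are all my assumptions, not the post's actual implementation), such a highlighter could compare the type letter encoded in the array name against the type letter of the index expression and flag disagreements:

```elisp
(defun amitp-highlight-subscript-mismatch (node override start end &rest _)
  "Sketch: warn when NODE's object and index type letters disagree.
Assumes naming like `elevation_t' where a trailing `_t'/`_r'/`_s'
encodes the geometry type, as described in the post."
  (let* ((object (treesit-node-child-by-field-name node "object"))
         (index (treesit-node-child-by-field-name node "index"))
         (obj-id (and object (amitp/treesit-top-typescript-node object)))
         (idx-id (and index (amitp/treesit-top-typescript-node index)))
         (obj-name (and obj-id (treesit-node-text obj-id t)))
         (idx-name (and idx-id (treesit-node-text idx-id t))))
    ;; Heuristic (assumed): the array's trailing "_t"/"_r"/"_s" should
    ;; match the leading letter of the index identifier.
    (when (and obj-name idx-name
               (string-match "_\\([rst]\\)\\'" obj-name)
               (not (string-prefix-p (match-string 1 obj-name) idx-name)))
      (treesit-fontify-with-override
       (treesit-node-start idx-id) (treesit-node-end idx-id)
       'font-lock-warning-face override start end))))
```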
One of the things I like about blogging from Org
Mode in Emacs is that it's easy to add properties
to the section that I'm working on and then use
those property values elsewhere. For example, I've
modified Emacs to simplify tooting a link to my
blog post and saving the Mastodon status URL in
the EXPORT_MASTODON property. Then I can use
that in my 11ty static site generation process to
include a link to the Mastodon thread as a comment
option.
First, I need to export the property and include
it in the front matter. I use .11tydata.json files
to store the details for each blog post. I
modified ox-11ty.el so that I could specify
functions to change the front matter
(org-11ty-front-matter-functions,
org-11ty--front-matter):
(defvar org-11ty-front-matter-functions nil
  "Functions to call with the current front matter plist and info.")

(defun org-11ty--front-matter (info)
  "Return front matter for INFO."
  (let* ((date (plist-get info :date))
         (title (plist-get info :title))
         (modified (plist-get info :modified))
         (permalink (plist-get info :permalink))
         (categories (plist-get info :categories))
         (collections (plist-get info :collections))
         (extra (if (plist-get info :extra)
                    (json-parse-string
                     (plist-get info :extra)
                     :object-type 'plist))))
    (seq-reduce
     (lambda (prev val)
       (funcall val prev info))
     org-11ty-front-matter-functions
     (append
      extra
      (list :permalink permalink
            :date (if (listp date) (car date) date)
            :modified (if (listp modified) (car modified) modified)
            :title (if (listp title) (car title) title)
            :categories (if (stringp categories) (split-string categories) categories)
            :tags (if (stringp collections) (split-string collections) collections))))))
Then I added the EXPORT_MASTODON Org property as
part of the front matter. This took a little
figuring out because I needed to pass it as one of
org-export-backend-options, where the parameter
is defined as MASTODON but the actual property
needs to be called EXPORT_MASTODON.
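For reference, the shape of such an entry in an export backend's options list looks roughly like this (the exact ox-11ty definition is my assumption): the keyword is declared as MASTODON, and Org's subtree export automatically looks it up under the EXPORT_-prefixed property name:

```elisp
;; Hypothetical sketch of the options-alist entry. Entry format is
;; (PROPERTY KEYWORD OPTION DEFAULT BEHAVIOR); subtree export reads
;; the EXPORT_MASTODON property for the "MASTODON" keyword.
(org-export-define-derived-backend '11ty 'html
  :options-alist '((:mastodon "MASTODON" nil nil t)))
```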
Then I added the Mastodon field as an option to my
comments.cjs shortcode. This was a little tricky
because I'm not sure I'm passing the data
correctly to the shortcode (sometimes it ends up
as item.data, sometimes it's item.data.data,
…?), but with ?., I can just throw all the
possibilities in there and it'll eventually find
the right one.
const pluginRss = require('@11ty/eleventy-plugin-rss');
module.exports = function(eleventyConfig) {
  function getCommentChoices(data, ref) {
    const mastodonUrl = data.mastodon || data.page?.mastodon || data.data?.mastodon;
    const mastodon = mastodonUrl && `<a href="${mastodonUrl}" target="_blank" rel="noopener noreferrer">comment on Mastodon</a>`;
    const url = ref.absoluteUrl(data.url || data.permalink || data.data?.url || data.data?.permalink, data.metadata?.url || data.data?.metadata?.url);
    const subject = encodeURIComponent('Comment on ' + url);
    const body = encodeURIComponent("Name you want to be credited by (if any): \nMessage: \nCan I share your comment so other people can learn from it? Yes/No\n");
    const email = `<a href="mailto:sacha@sachachua.com?subject=${subject}&body=${body}">e-mail me at sacha@sachachua.com</a>`;
    const disqusLink = url + '#comment';
    const disqusForm = data.metadata?.disqusShortname && `<div id="disqus_thread"></div><script> var disqus_config = function () { this.page.url = "${url}"; this.page.identifier = "${data.id || ''} ${data.metadata?.url || ''}?p=${ data.id || data.permalink || this.page?.url}"; this.page.disqusTitle = "${ data.title }" this.page.postId = "${ data.id || data.permalink || this.page?.url }" }; (function() { // DON'T EDIT BELOW THIS LINE var d = document, s = d.createElement('script'); s.src = 'https://${ data.metadata?.disqusShortname }.disqus.com/embed.js'; s.setAttribute('data-timestamp', +new Date()); (d.head || d.body).appendChild(s); })();</script><noscript>Disqus requires Javascript, but you can still e-mail me if you want!</noscript>`;
    return { mastodon, disqusLink, disqusForm, email };
  }
  eleventyConfig.addShortcode('comments', function(data, linksOnly=false) {
    const { mastodon, disqusForm, disqusLink, email } = getCommentChoices(data, this);
    if (linksOnly) {
      return `You can ${mastodon ? mastodon + ', ' : ''}<a href="${disqusLink}">comment with Disqus (JS required)</a>${mastodon ? ',' : ''} or ${email}.`;
    } else {
      return `<div id="comment"></div>You can ${mastodon ? mastodon + ', ' : ''}comment with Disqus (JS required)${mastodon ? ', ' : ''} or you can ${email}.${disqusForm || ''}`;
    }
  });
}
The new workflow I'm trying out seems to be working:
Keep npx eleventy --serve running in the
background, using .eleventyignore to make
rebuilds reasonably fast.
Export the subtree with C-c e s 1 1, which uses org-export-dispatch to call my-org-11ty-export with the subtree.
After about 10 seconds, use my-org-11ty-copy-just-this-post and verify.
Use my-mastodon-11ty-toot-post to compose a toot. Edit the toot and post it.
Check that the EXPORT_MASTODON property has been set.
Export the subtree again, this time with the front matter.
Publish my whole blog.
Next, I'm thinking of modifying
my-mastodon-11ty-toot-post so that it includes a
list of links to blog posts I might be building on
or responding to, and possibly the handles of
people related to those blog posts or topics.
Hmm…
Over at The MKat, Marie K. Ekeberg has a post that brings me back to my earliest days of using Emacs. Before I was assimilated, I tried several times to warm up to Emacs but it wouldn’t take. Ekeberg’s post reminds me why. It was what I considered the schizophrenic default scrolling. You would move the cursor down the screen and suddenly, about three quarters of the way down, the screen would jump several lines. Every time I saw that, I would abandon my attempt to embrace Emacs and run screaming back to Vim.
In her post on smoother scrolling, Ekeberg offers an explanation for this bizarre behavior that I’m pretty sure is correct. The TL;DR is that in the old days, when Emacs was developed, scrolling was an expensive operation so rather than scroll by line, Emacs would scroll by (essentially) half pages.
I have no idea why the default hasn’t long since been changed but as far as I know, it hasn’t. I say, “as far as I know” because years and years ago I did what Ekeberg suggests and implemented smooth scrolling. I haven’t looked back.
It turns out that fixing this is far from trivial. I have no memory of how I figured it out but I do know that it involved pleading with the Internet for an answer. Take a look at Ekeberg’s post to see the answer. What she suggests is essentially what I did. As I said, not at all obvious.
It’s odd how a simple thing like scrolling can ruin your appreciation of Emacs but it sure did affect mine. Don’t let that happen to you. Take a look at Ekeberg’s post and find peace of mind.
Last Friday, I was genuinely surprised by a live demo of my Emacs
Solo configuration on the System Crafters Weekly Show. Watching
the live demo was an eye-opener, as I hadn't expected the project to
get such attention, especially in a live setting. Seeing David
Wilson take a deep dive into the setup, testing the configuration
live, and exploring how powerful Emacs can be with only its built-in
packages was both humbling and inspiring.
The Emacs Solo configuration is all about returning to the roots
of Emacs. It's a minimalist setup designed to challenge myself and
test the full potential of Emacs using only its built-in
functionality. The goal was to create an efficient, yet fully
functional environment, all while keeping things as light and fast as
possible. No external dependencies, no clutter. Just pure,
unadulterated Emacs.
The Project: Emacs Solo
Emacs Solo is a configuration that embraces the power of Emacs
without relying on external packages. It's a setup I go back to from
time to time to remind myself of how much can be accomplished with
just what Emacs offers out of the box.
This configuration is designed to be both powerful and lightweight,
allowing for a fast, efficient workflow with a focus on simplicity and
minimalism. The project includes several useful features for
day-to-day tasks like searching, editing, and navigating—everything
you need for an efficient Emacs experience.
Some of the highlights of the project include:
» A preview of icomplete-vertical enhancements I proposed to the
Emacs core team (custom prefixes, vertico style setup, and inline
completion closer to corfu/company that works on text buffers and
eshell).
» An experimental custom git-gutter-like feature.
» Supercharged eshell customization.
» Custom solutions for editing multiple search entries.
» Built-in news readers like Gnus and Newsticker.
» Advanced file diffing and version control.
» Extended viper mode for those who prefer vim-style editing.
» Tree-sitter modes.
» LSP configurations.
» A custom rainbow-mode-like feature.
» And many customizations of built-in packages.
The idea is that Emacs is already a powerful IDE, and with a bit of
clever customization, it can be made into something even more
streamlined, adaptable, and effective without the need for external
packages.
Watch the Demo
Here's the video of the live demo from the System Crafters Weekly
Show:
I’d like to take this opportunity to thank David Wilson for the
amazing show and to the System Crafters community for their
continued support and enthusiasm around Emacs. I also want to express
my gratitude to everyone who has contributed code that I’ve borrowed
and learned from over the years. Particularly Gopar and
Protesilaos. Without the shared knowledge and experience from
these fantastic people, the Emacs Solo project wouldn't have been
possible.
As always, the beauty of Emacs lies in its community, and I'm grateful
for all the inspiration, contributions, and shared wisdom that make
projects like Emacs Solo come to life. Thank you to everyone who
continues to inspire and teach me along the way.
Sometimes I want to copy a toot and include it in
my Org Mode notes, like when I post a thought and
then want to flesh it out into a blog post. This
code defines my-mastodon-org-copy-toot-content,
which converts the toot text to Org Mode format
using Pandoc and puts it in the kill ring so I can
yank it somewhere else.
(defun my-mastodon-toot-at-url (&optional url)
  "Return JSON toot object at URL.
If URL is nil, return JSON toot object at point."
  (if url
      (let* ((search (format "%s/api/v2/search" mastodon-instance-url))
             (params `(("q" . ,url)
                       ("resolve" . "t"))) ; webfinger
             (response (mastodon-http--get-json search params :silent)))
        (car (alist-get 'statuses response)))
    (mastodon-toot--base-toot-or-item-json)))

(defun my-mastodon-org-copy-toot-content (&optional url)
  "Copy the current toot's content as Org Mode.
Use pandoc to convert.  When called with \\[universal-argument],
prompt for a URL."
  (interactive (list
                (when current-prefix-arg
                  (read-string "URL: "))))
  (let ((toot (my-mastodon-toot-at-url url)))
    (with-temp-buffer
      (insert (alist-get 'content toot))
      (call-process-region nil nil "pandoc" t t nil "-f" "html" "-t" "org")
      (kill-new
       (concat
        (org-link-make-string
         (alist-get 'url toot)
         (concat "@" (alist-get 'acct (alist-get 'account toot))))
        ":\n\n#+begin_quote\n"
        (string-trim (buffer-string)) "\n#+end_quote\n"))
      (message "Copied."))))
Back in the bad old days when personal computers weren’t nearly as reliable as they are today, it was common to have your computer crash when you were right in the middle of writing a document. Because of that, a common mantra was, “save early and often”. That way, when the crash came you would have only a few lines to rewrite.
I first heard the particular expression “save early and often” from Jerry Pournelle but it may not have originated with him. It was common advice back then but the humorous twist does seem like Pournelle, so who knows. In any event, in those days it wasn’t just good advice, it was mandatory.
I was reminded of all this the other day when I was writing yesterday’s post on the latency of computer operations. I was switching back and forth between the post’s Emacs buffer and Ahmad’s post in Safari, when Safari suddenly froze and I couldn’t find a way to break out of it and get back to Emacs to save my buffer, which, sadly, I hadn’t bothered to backup as I went along. In the end, I had to reboot the machine to get things back to normal.
Of course my whole post was gone when my laptop came back up. Note that this was not Emacs’ fault. It was Safari that froze and it was me that didn’t bother saving my work. Actually, Emacs was the hero in all this because the autosave function rescued me from my folly. I hardly ever need to use it so I always have a hard time remembering what to do but in the end I got almost my whole post back.
The takeaway from all this is that even with reliable hardware and software, crashes and freezes can still happen so if you care about your work make sure to back it up often. Or, as Pournelle would have it, early and often. If you don’t, the day will come when even Emacs can’t save you.
I sometimes want to thank a bunch of people for
contributing to a Mastodon conversation. The
following code lets me collect handles in a single
kill ring entry by calling it with my point over a
handle or a toot, or with an active region.
(defvar my-mastodon-handle "@sacha@social.sachachua.com")

(defun my-mastodon-copy-handle (&optional start-new beg end)
  "Append Mastodon handles to the kill ring.
Use the handle at point or the author of the toot.  If called with a
region, collect all handles in the region.
Append to the current kill if it starts with @.  If not, start a new
kill.  Call with \\[universal-argument] to always start a new list.
Omit my own handle, as specified in `my-mastodon-handle'."
  (interactive (list current-prefix-arg
                     (when (region-active-p) (region-beginning))
                     (when (region-active-p) (region-end))))
  (let ((handle
         (if (and beg end)
             ;; collect handles in region
             (save-excursion
               (goto-char beg)
               (let (list)
                 ;; Collect all handles from the specified region
                 (while (< (point) end)
                   (let ((mastodon-handle (get-text-property (point) 'mastodon-handle))
                         (button (get-text-property (point) 'button)))
                     (cond
                      (mastodon-handle
                       (when (and (string-match "@" mastodon-handle)
                                  (or (null my-mastodon-handle)
                                      (not (string= my-mastodon-handle mastodon-handle))))
                         (cl-pushnew
                          (concat (if (string-match "^@" mastodon-handle) "" "@")
                                  mastodon-handle)
                          list
                          :test #'string=))
                       (goto-char (next-single-property-change (point) 'mastodon-handle nil end)))
                      ((and button (looking-at "@"))
                       (let ((text-start (point))
                             (text-end (or (next-single-property-change (point) 'button nil end) end)))
                         (dolist (h (split-string (buffer-substring-no-properties text-start text-end) ", \n\t"))
                           (unless (and my-mastodon-handle (string= my-mastodon-handle h))
                             (cl-pushnew h list :test #'string=)))
                         (goto-char text-end)))
                      (t
                       ;; collect authors of toots too
                       (when-let*
                           ((toot (mastodon-toot--base-toot-or-item-json))
                            (author (and toot
                                         (concat "@"
                                                 (alist-get
                                                  'acct
                                                  (alist-get 'account (mastodon-toot--base-toot-or-item-json)))))))
                         (unless (and my-mastodon-handle (string= my-mastodon-handle author))
                           (cl-pushnew
                            author
                            list
                            :test #'string=)))
                       (goto-char (next-property-change (point) nil end))))))
                 (setq handle (string-join (seq-uniq list #'string=) " "))))
           (concat "@"
                   (or
                    (get-text-property (point) 'mastodon-handle)
                    (alist-get
                     'acct
                     (alist-get 'account (mastodon-toot--base-toot-or-item-json))))))))
    (if (or start-new (null kill-ring) (not (string-match "^@" (car kill-ring))))
        (kill-new handle)
      (dolist (h (split-string handle " "))
        (unless (member h (split-string (car kill-ring) " "))
          (setf (car kill-ring) (concat (car kill-ring) " " h)))))
    (message "%s" (car kill-ring))))
Another perk of tooting from Emacs using mastodon.el. =)
I was feeling envious of the Obsidian Web Clipper, which is quite fancy, so I thought I’d try leveraging it for use with Denote.
My first run at this involves a couple of steps:
Tweak the web clipper to save files using Denote’s format and front matter
Save the file without adding it to an Obsidian vault
Move the saved file into my Denote folder
Here’s the Web Clipper template configuration I ended up with:
It was important to set the “Tags” property type to “Text” rather than the default “Multitext” so that Denote does the right thing with it when renaming the file later.
In the Web Clipper’s advanced settings, I set the behavior to “Save file…” rather than “Add to Obsidian”.
OK, so now after using the Web Clipper, I get a Markdown file1 with a (mostly) Denote-compatible file name and front matter in my ~/Downloads folder. Here’s what clipping this post looks like:
To get the file into my denote-directory, I use a rule in Hazel. Hazel watches my Downloads folder for any new file whose name contains the string “__clipping”, and automatically moves it into a “clippings” folder in my Denote folder.
The only manual step remaining is to finish renaming the files using Denote. I don’t yet know how to have the Web Clipper “slugify” the file name, so I have Denote do it. This can be done in batch using Dired, so it’s not a huge burden.
If there’s a simpler way to get a nicely-formatted Org mode file from a web page directly to my Denote folder, I’m all ears, but for now…
Take that, Obsidian! 😄
Denote handles Markdown files natively, so this is fine. ↩︎
Bemfica de Oliva does a wonderful rundown of Journelly's features and capabilities, much better than anything else I've posted before. They even mentioned Org markup and the Emacs text editor, for those who want to drop down to its plain-text storage. A nice treat, as these aren't typically showcased in the space.
If you're curious about what Journelly can do, check out Bemfica's post. Alternatively, if you just want to play with it, join the TestFlight beta group.
This week in ollama-buddy updates, I have mostly been experimenting with ChatGPT integration! Yes, it is not a local LLM, so not ollama, hence entirely subverting the whole notion and fundamental principles of this package! This I know, and I don’t care; I’m having fun. I use ChatGPT and would rather use it in Emacs through the now-familiar ollama-buddy framework, so why not? I’m also working on Claude integration too.
My original principles of a no-config Emacs ollama integration still hold true, as by default, you will only see ollama models available. But with a little tweak to the configuration, with a require here and an API key there, you can now enable communication with an online AI. At the moment, that means ChatGPT, but if I can get Claude working, I might think about just adding in a basic template framework to easily slot in others. At the moment, there is a little too much internal ollama-buddy faffing to incorporate these external AIs into the main package, but I’m sure I can figure out a way to accommodate separate elisp external AIs.
In other ollama-buddy news, I have now added support for the stream variable in the ollama API. By default, I had streaming on, and why wouldn’t you? It is a chat, and you would want to see “typing” as the tokens arrive. But to support more of the API, you can now toggle it on and off, so if you want, you can sit there and wait for the response to arrive in one go, which may be less distracting (and possibly more efficient?).
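For context, stream is just a boolean field in the JSON body sent to ollama’s /api/chat endpoint. Here is a rough sketch of such a payload in elisp (this is not ollama-buddy’s actual request code, and the model name is only an example):

```elisp
(require 'json)

;; Build the request body; setting "stream" to false asks ollama to
;; return the whole response as a single JSON object instead of chunks.
(json-encode
 `(("model" . "llama3.2")
   ("messages" . [(("role" . "user") ("content" . "hello"))])
   ("stream" . :json-false)))
```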
Just a note back on the topic of online AI offerings: to simplify those integrations, I just disabled streaming, so the response arrives in one shot. Mainly, I just couldn’t figure out the ChatGPT streaming, and for an external offering, I wasn’t quite willing to spend more time on it; besides, given the speed of these online behemoths, do you really need to see each token come in as it arrives?
Oh, there is something else too, something I have been itching to do for a while now, and that is to write a Texinfo document so a manual can be viewed in Emacs. Of course, this being an AI-based package, I fed in my ollama-buddy files and got Claude to generate one for me (I have a baby and haven’t the time!). Reading through it, I think it turned out pretty well :) It hasn’t been made automatically available on MELPA yet, as I need to tweak the recipe, but you can install it for yourself.
Anyways, see below for the changelog gubbins:
<2025-03-24 Mon> 0.9.11
Added the ability to toggle streaming on and off
Added customization option to enable/disable streaming mode
Implemented toggle function with keybindings (C-c x) and transient menu option
Added streaming status indicator in the modeline
The latest update introduces the ability to toggle between two response modes:
Streaming mode (default): Responses appear token by token in real-time, giving you immediate feedback as the AI generates content.
Non-streaming mode: Responses only appear after they’re fully generated, showing a “Loading response…” placeholder in the meantime.
While watching AI responses stream in real-time is often helpful, there are situations where you might prefer to see the complete response at once:
When working on large displays where the cursor jumping around during streaming is distracting
When you want to focus on your work without the distraction of incoming tokens until the full response is ready
The streaming toggle can be accessed in several ways:
Use the keyboard shortcut C-c x
Press x in the transient menu
Set the default behaviour through customization:
(setq ollama-buddy-streaming-enabled nil) ;; Disable streaming by default
The current streaming status is visible in the modeline indicator, where an “X” appears when streaming is disabled.
<2025-03-22 Sat> 0.9.10
Added experimental OpenAI support!
Yes, that’s right, I said I never would do it, and of course, this package is still very much ollama-centric, but I thought I would just sneak in some rudimentary ChatGPT support, just for fun!
It is a very simple implementation; I haven’t managed to get streaming working, so Emacs will just show “Loading Response…” as it waits for the response to arrive. It is asynchronous, however, so you can go off on your Emacs day while it loads (although being ChatGPT, you would think the response would be quite fast!)
By default, OpenAI/ChatGPT will not be enabled, so anyone wanting to use just a local LLM through ollama can continue as before. However, you can now sneak in some experimental ChatGPT support by adding the following to your Emacs config as part of the ollama-buddy set up.
(require 'ollama-buddy-openai nil t)
(setq ollama-buddy-openai-api-key "<big long key>")
and you can set the default model to ChatGPT too!
(setq ollama-buddy-default-model "GPT gpt-4o")
With this enabled, chat will present a list of ChatGPT models to choose from. The custom menu should also now work with chat, so from anywhere in Emacs, you can push predefined prompts to the ollama buddy chat buffer now supporting ChatGPT.
There is more integration required to fully incorporate ChatGPT into the ollama buddy system, like token rates and history, etc. But not bad for a first effort, methinks!
Here is my current config, now mixing ChatGPT with ollama models:
If you are new to Emacs, you might think that the scrolling in buffers is choppy. Why is that? You are probably used to smooth scrolling in all your other editors, so why not Emacs? That is exactly what we will look at today!
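For those who want to jump ahead: recent Emacs (29+) ships a built-in answer, which a minimal configuration can enable (one option among several the topic covers):

```elisp
;; Pixel-precise scrolling for mouse/trackpad, built in since Emacs 29.
(pixel-scroll-precision-mode 1)
```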
It’s Monday. Most of us are looking around bleary eyed wondering what happened to the weekend that just started. Here’s a little humor to get you warmed up for what’s to come. Over at the Emacs subreddit, bruchieOP posts the Ten Commandments of the Church of Emacs.
It’s a satirical look at the beliefs that most of us Emacs users hold near and dear. Note that satirical qualifier. For example, lots of people use and swear by “pre-configured distributions”, such as Doom and Spacemacs, as even bruchieOP acknowledges. Still, most of you will identify with and agree with the majority of the prescriptions.
Take a couple of minutes to enjoy the post and then back to work. Emacs is waiting.
I often want to copy the toot URL after posting a
new toot about a blog post so that I can update
the blog post with it. Since I post from Emacs
using mastodon.el, I can probably figure out how
to get the URL after tooting. A quick-and-dirty
way is to retrieve the latest status.
I considered overriding the keybinding in
mastodon-toot-mode-map, but I figured using
advice would mean I can copy things even after
automated toots.
A more elegant way to do this might be to modify
mastodon-toot-send to run-hook-with-args a
variable with the response as an argument, but
this will do for now.
I used a hook in my advice so that I can change
the behaviour from other functions. For example, I
have some code to compose a toot with a link to
the current post. After I send a toot, I want to
check if the toot contains the current entry's
permalink. If it has and I don't have a Mastodon
toot field yet, maybe I can automatically set that
property, assuming I end up back in the Org Mode
file I started it from.
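A skeleton of that advice-plus-hook arrangement might look
something like this (the hook and function names here are my
own illustration, not part of mastodon.el, and
my-handle-sent-toot stands in for whatever behaviour you add):

```elisp
(defvar my-mastodon-after-send-hook nil
  "Hook run after `mastodon-toot-send' finishes.")

(defun my-mastodon-toot-send--after (&rest _)
  "Run `my-mastodon-after-send-hook' after a toot is sent."
  (run-hooks 'my-mastodon-after-send-hook))

(advice-add 'mastodon-toot-send :after #'my-mastodon-toot-send--after)

;; Consumers can then add whatever they like, e.g. a function that
;; fetches the latest status and copies its URL:
;; (add-hook 'my-mastodon-after-send-hook #'my-handle-sent-toot)
```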
If I combine that with a development copy of my
blog that ignores most of my posts so it compiles
faster and a function that copies just the current
post's files over, I can quickly make a post
available at its permalink (which means the link
in the toot will work) before I recompile the rest
of the blog, which takes a number of minutes.
The next version of Denote is shaping up to be a huge one. One of the
newest features I am working on is the support for “query links”.
Those use the same denote: link type infrastructure but exhibit a
different behaviour than the direct links we have always had. Instead
of pointing to a file via its unique identifier, they initiate a
search through the contents of all files in the denote-directory.
This search uses the built-in Xref mechanism and is the same as what
we have already been doing with backlinks (basically, a grep).
In short:
Direct links: Those point to a file via its unique identifier.
For example, denote:20250324T074132 resolves to a file path.
Clicking on the link opens the corresponding file. Org export will
also take care to turn this into a file path.
Query links: Those do not point to any file per se. They are a
string of one or more words or regular expression which is matched
against the contents of files. For example, denote:this is a test
produces a buffer listing all matches for the given query. Clicking
on the matching line in that buffer opens the file at that point
(just how our backlinks work when they show context—I am
generalising this mechanism).
Direct links can point to any file, including PDFs, videos, and
pictures (assuming it is renamed to use the Denote file-naming
scheme). Whereas query links are limited to text files.
Development discussion and screenshots
This is a work-in-progress that lives on its own branch as of this
writing. I will not elaborate at length right now as the
implementation details may change. I have, nonetheless, created an
issue on the GitHub repository where interested parties can provide
their feedback. It also includes some screenshots I took:
https://github.com/protesilaos/denote/issues/561. The code includes
other changes which pertain to how we handle backlinks and constitutes
a simplification of the code base.
The idea is to add the functionality to the main branch in the
coming days or weeks. Then I will do a video about it and/or explain
more.
That granted, do not forget that the official manual is the most
up-to-date reference and the single source of truth.
Denote sources
Denote is a simple note-taking tool for Emacs. It is based on the idea
that notes should follow a predictable and descriptive file-naming
scheme. The file name must offer a clear indication of what the note is
about, without reference to any other metadata. Denote basically
streamlines the creation of such files while providing facilities to
link between them.
Denote’s file-naming scheme is not limited to “notes”. It can be used
for all types of file, including those that are not editable in Emacs,
such as videos. Naming files in a consistent way makes their
filtering and retrieval considerably easier. Denote provides relevant
facilities to rename files, regardless of file type.
[ Further down on this list I include more of my Denote-related packages. ]
Over at the Emacs subreddit, weevyl talks about how Emacs completion changed his life. Or at least his Emacs life. His story is about his repeated efforts to move to Emacs and always failing. He finally realized that the reason for his failures was the difficulty of learning and remembering command names. We’ve all been there. You load a new package and suddenly you have some new commands to remember.
Weevyl’s epiphany was that command completion can virtually eliminate this problem. He’s a Helm user but the same principles apply to Ivy and other such packages. When you start to type a command name, you get a list of completions to help you choose the right one. I’m not a Helm user but with Ivy, the search is fuzzy and it’s often enough to start with the package name to get a list of available commands.
I use this all the time and have long since stopped thinking about it but weevyl’s post made me realize what a powerful facility this completion is. You don’t have to try to remember a long list of commands or their bindings. As I said, it’s usually enough to remember the package name to get a list of commands that make it easy to narrow down to the desired target.
There’s lots of agreement in the comments. Several people have almost the same story. Others have written some Elisp to help with this and shared their code so if you’re also finding remembering Emacs command names daunting, take a look at the post. It’s easy to forget how much Emacs does—or can do—to make our workflow easier.
Over at the Emacs reddit, arni_ca asks about key bindings for cursor movement. It’s not quite clear what he’s asking but the theme is moving the cursor without bindings like Ctrl+f, Ctrl+b, Meta+f, Meta+b, Ctrl+n, and Ctrl+p.
As is often the case, all the juice is in the comments. It’s astounding how many different strategies people have for such movements. They range from using the arrow keys to custom Elisp snippets. Take a look at the post to see if any of the ideas will work for you.
By now, you all know my answer to questions like this. I started with this epiphany from Steve Yegge: large scale navigation—more than a few characters or lines—should be made with search. It’s perfectly obvious but it occurs to few of us until someone points it out. The wonderful avy package refines this strategy and offers finer grained control of where the cursor ends up. Avy is probably my most used set of commands. I use them constantly as I write and edit. Following Karthinks, I’ve mostly pared my use down to avy-goto-char-timer although I do sometimes still use avy-goto-word-1. Regardless, if you aren’t already using Avy, you should take a look. It’s the backbone of my navigation.
The other navigation tool I use is an import from Vim: jump-char that lets you jump to the next or previous instance of a character. It’s incredibly useful and, again, I use it several times a day. If you use Avy and Jump-char, you will have most of your navigation needs met.
I explain how to implement key layout ideas from the Space Cadet. I configure shift as parens, and change caps lock to a dual use key for backspace and control.
(...)
I have been following the master branch of the emacs.git repository
for many years now. It helps me test new features and make necessary
adjustments to all the packages I develop/maintain. Below I explain
how I make this happen on my computer, which is running Debian stable
(Debian 12 “Bookworm” as of this writing). If you are a regular user,
there is no reason to build from source: just use the latest stable
release and you should be fine.
Configure the apt development sources
To build Emacs from source on Debian, you first need to have the
deb-src package archive enabled. In your /etc/apt/sources.list
file you must have something like this:
deb http://deb.debian.org/debian/ bookworm main
deb-src http://deb.debian.org/debian/ bookworm main
After modifying the sources, run the following on the command line to
fetch the index with new package names+versions:
sudo apt update
Get the Emacs build dependencies
Now that you have enabled the deb-src archive, you can install the
build dependencies of the Debian emacs package with the following on
the command line:
sudo apt build-dep emacs
With this done, you are ready to build Emacs from source.
Get the Emacs source code
You need the git program to get the source code from the emacs.git
website. So install it with this command:
sudo apt install git
Now make a copy of the Emacs source code, using this on the command
line:
Replace ~/path/to/my/copy-of-emacs.git with the actual destination
of your preference. I have a ~/Builds directory where I store all
the projects I build from source. I thus do:
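The clone command itself, using the upstream Savannah URL (substitute your preferred destination directory), looks like this:

```shell
git clone https://git.savannah.gnu.org/git/emacs.git ~/Builds/emacs.git
```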
Assuming you have the copy of emacs.git stored at ~/Builds/emacs.git,
you switch to that directory with the following:
cd ~/Builds/emacs.git
Keep in mind that unless you explicitly switch to another branch, you
are on master, i.e. the latest development target.
NOTE: All subsequent commands are run from your equivalent of
~/Builds/emacs.git.
Run the autogen.sh the first time
This script will generate the configuration scaffold. You only really
need to do this once (and I always forget about it for this very
reason). Simply do this on the command line:
./autogen.sh
It checks that you have all you need to get started and prints output
like this:
Checking whether you have the necessary tools...
(Read INSTALL.REPO for more details on building Emacs)
Checking for autoconf (need at least version 2.65) ... ok
Your system has the required tools.
Building aclocal.m4 ...
Running 'autoreconf -fi -I m4' ...
Building 'aclocal.m4' in exec ...
Running 'autoreconf -fi' in exec ...
Configuring local git repository...
'.git/config' -> '.git/config.~1~'
git config transfer.fsckObjects 'true'
git config diff.cpp.xfuncname '!^[ ]*[A-Za-z_][A-Za-z_0-9]*:[[:space:]]*($|/[/*])
^((::[[:space:]]*)?[A-Za-z_][A-Za-z_0-9]*[[:space:]]*\(.*)$
^((#define[[:space:]]|DEFUN).*)$'
git config diff.elisp.xfuncname '^\([^[:space:]]*def[^[:space:]]+[[:space:]]+([^()[:space:]]+)'
git config diff.m4.xfuncname '^((m4_)?define|A._DEFUN(_ONCE)?)\([^),]*'
git config diff.make.xfuncname '^([$.[:alnum:]_].*:|[[:alnum:]_]+[[:space:]]*([*:+]?[:?]?|!?)=|define .*)'
git config diff.shell.xfuncname '^([[:space:]]*[[:alpha:]_][[:alnum:]_]*[[:space:]]*\(\)|[[:alpha:]_][[:alnum:]_]*=)'
git config diff.texinfo.xfuncname '^@node[[:space:]]+([^,[:space:]][^,]+)'
Installing git hooks...
'build-aux/git-hooks/commit-msg' -> '.git/hooks/commit-msg'
'build-aux/git-hooks/pre-commit' -> '.git/hooks/pre-commit'
'build-aux/git-hooks/prepare-commit-msg' -> '.git/hooks/prepare-commit-msg'
'build-aux/git-hooks/post-commit' -> '.git/hooks/post-commit'
'build-aux/git-hooks/pre-push' -> '.git/hooks/pre-push'
'build-aux/git-hooks/commit-msg-files.awk' -> '.git/hooks/commit-msg-files.awk'
'.git/hooks/applypatch-msg.sample' -> '.git/hooks/applypatch-msg'
'.git/hooks/pre-applypatch.sample' -> '.git/hooks/pre-applypatch'
You can now run './configure'.
Do not be intimidated by it. Focus on the final line instead, which
directs you to the configure directive.
Explore the build flags
How exactly you build Emacs depends on your preferences and
system-specific requirements. At the end of this post, I copy my
current configuration, though I advise against copying it without
understanding what it does.
If you have no specific preferences, just use the defaults by running
this on the command line:
./configure
It will set up the build environment for you. If, however, you wish
to explore your options and customise the emacs program you will
get, then issue the following command and carefully read its output:
./configure --help
The minimum I recommend is to specify where the build artefacts
are stored. I use this, which has not caused me any issues over the
years:
./configure --prefix=/usr/local
Configure the build environment with your preferred flags
Once you have understood the available options, go ahead and run
configure. For example:
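An illustrative invocation (these flags are examples of commonly used options, not a recommendation; read ./configure --help before choosing your own):

```shell
./configure --prefix=/usr/local --with-native-compilation --with-tree-sitter
```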
Whenever you need to rebuild Emacs with some new flags, run the
configure command again, passing it the relevant flags. If you wish
to keep the same options for a new build, then simply do not run
configure again.
Make the program
Once configure finishes its work, it is time to run the make
program. For new builds, this is as simple as:
make
Sometimes you have old build artefacts that conflict with changes
upstream. When that happens, the build process will fail. You may then
need to use:
make bootstrap
In general, make is enough. It will be slow the first time, but will
be faster on subsequent runs as it reuses what is already there. A
make bootstrap will always be slow though, as it generates
everything anew.
Install the program that was made
After make is done, you are ready to install Emacs:
sudo make install
You will not need escalated privileges (i.e. sudo) if you specified
a --prefix with a user directory during the configure step. How
you go about it is up to you.
Keeping Emacs up-to-date
Whenever you wish to update from source, go to where your copy of
emacs.git is (e.g. ~/Builds/emacs.git) and pull the latest changes
using the git program:
git pull
Then repeat make and make install. Remember that you do not need
to re-run configure unless you specifically want to modify your
build (and if you do that, you probably need to make bootstrap).
Learn about the latest NEWS
Emacs users can at all times learn about changes introduced in their
current version of Emacs with M-x view-emacs-news. It is bound to
the key C-h n by default. This command opens the current NEWS
file. With a numeric prefix argument, you get the NEWS of the given
Emacs version. For example, C-u 27 C-h n shows you what Emacs
version 27 introduced.
Compare your NEWS to those of emacs.git
With the help of the built-in Emacs ediff package, you can compare
your latest NEWS to those coming from emacs.git. I always do this
after pulling the latest changes from source (with git pull).
From the root directory of your copy of emacs.git (e.g.
~/Builds/emacs.git), and while using Emacs, you can do M-x
project-find-file (C-x p f) to search the Emacs “project” for a
file called etc/NEWS. This is where the latest user-facing changes
are recorded.
If you are not sure where you are on the filesystem while inside
Emacs, do M-x cd (or M-x dired or M-x find-file), select the
root directory of your emacs.git, hit RET, and then do M-x
project-find-file.
Now that you have emacs.git/etc/NEWS in a buffer, also load your
copy of NEWS with M-x view-emacs-news (C-h n).
Then do M-x ediff-buffers, which will prompt for two buffers to
compare. First select your version of NEWS and then that of emacs.git.
NOTE: I think the default Ediff interface is problematic. Put the
following in your configuration to make it work in a single frame:
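A common single-frame setup (my suggestion of the usual snippet; adjust to taste) is:

```elisp
;; Keep Ediff's control panel in the same frame instead of a separate
;; one, and show the compared buffers side by side.
(setq ediff-window-setup-function #'ediff-setup-windows-plain)
(setq ediff-split-window-function #'split-window-horizontally)
```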
I am not updating old publications, unless otherwise noted. The most
up-to-date record of my Emacs build is documented in my dotemacs:
https://protesilaos.com/emacs/dotemacs.
Inspect the value of the Emacs variable system-configuration-options
to find out how your Emacs is built.
I’ve written before about relative line numbers. The idea is to label the current line as 0 and the other lines as positive or negative offsets from it. It’s handy if you want to quickly move to another line. As I said before, there are ways—avy in my case—that you might find better for doing that kind of thing.
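For anyone who wants to try the idea, the built-in display-line-numbers mode (available since Emacs 26) supports it; a minimal setup:

```elisp
;; Show line numbers relative to point; with the second setting the
;; current line shows 0 instead of its absolute number.
(setq display-line-numbers-type 'relative
      display-line-numbers-current-absolute nil)
(global-display-line-numbers-mode 1)
```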
In another of his useful posts, Bozhidar Batsov gives his own take on relative line numbers. Batsov says that relative line numbers are particularly useful if you are an evil user but even users of conventional Emacs can find it useful. If this sort of thing interests you, take a look at Batsov’s post for some details.
As I said in my previous post on the matter, I’ve never been able to warm up to the idea. I seldom have line numbers enabled but when I do I find it just as easy to jump to an absolute line number. Actually, I don’t do that either, I just use avy to get where I want to go. It generally provides finer control over where the cursor lands and is just as easy to use.
Of course, as I always say, that’s the beauty of Emacs. You can adapt it to whatever your preferred workflow is. If relative line numbers seem like a good idea to you, take a look at Batsov’s post.
As much as anyone else, I love and use MAGIT. It’s an incredibly
powerful tool, and I’ve spent countless hours relying on it for my Git
workflow. However, from time to time, I find myself trying to stick to
Emacs internals and avoid relying on external packages. This is where
my emacs-solo config comes in (you can find it at
https://github.com/lionyxml/emacs-solo). The
functions I’m sharing today are part of this config, hence the
emacs-solo/ prefix in their names.
These functions are meant to be quick copy/paste snippets that anyone
can tweak to suit their needs. They’re not complete, beautifully
tailored packages but rather "quick tips and ideas" that I’ve found
useful in my daily workflow. If you know a bit of Elisp, you can
easily adapt them to your preferences.
Here’s a quick overview of the keybindings and what each function
does:
C-x v R - Show Git reflog with ANSI colors and custom keybindings.
C-x v B - Browse the repository’s remote URL in the browser.
C-u C-x v B - Browse the repository’s remote URL and point to the current branch, file, and line.
C-x v = - Show the diff for the current file and jump to the hunk containing the current line.
Let’s dive into each function in detail!
1. emacs-solo/vc-git-reflog (C-x v R)
This function allows you to view the Git reflog in a new buffer with
ANSI colors and custom keybindings. It’s a great way to quickly
inspect the history of your Git repository.
(defun emacs-solo/vc-git-reflog ()
  "Show git reflog in a new buffer with ANSI colors and custom keybindings."
  (interactive)
  (let* ((root (vc-root-dir))
         (buffer (get-buffer-create "*vc-git-reflog*")))
    (with-current-buffer buffer
      (setq-local vc-git-reflog-root root)
      (let ((inhibit-read-only t))
        (erase-buffer)
        (vc-git-command buffer nil nil
                        "reflog"
                        "--color=always"
                        "--pretty=format:%C(yellow)%h%Creset %C(auto)%d%Creset %Cgreen%gd%Creset %s %Cblue(%cr)%Creset")
        (goto-char (point-min))
        (ansi-color-apply-on-region (point-min) (point-max)))
      (let ((map (make-sparse-keymap)))
        (define-key map (kbd "/") #'isearch-forward)
        (define-key map (kbd "p") #'previous-line)
        (define-key map (kbd "n") #'next-line)
        (define-key map (kbd "q") #'kill-buffer-and-window)
        (use-local-map map))
      (setq buffer-read-only t)
      (setq mode-name "Git-Reflog")
      (setq major-mode 'special-mode))
    (pop-to-buffer buffer)))

(global-set-key (kbd "C-x v R") 'emacs-solo/vc-git-reflog)
2. emacs-solo/vc-browse-remote (C-x v B and C-u C-x v B)
This function opens the repository’s remote URL in the browser. If
CURRENT-LINE is non-nil, it points to the current branch, file, and
line. Otherwise, it opens the repository’s main page.
(defun emacs-solo/vc-browse-remote (&optional current-line)
  "Open the repository's remote URL in the browser.
If CURRENT-LINE is non-nil, point to the current branch, file, and line.
Otherwise, open the repository's main page."
  (interactive "P")
  (let* ((remote-url (string-trim (vc-git--run-command-string nil "config" "--get" "remote.origin.url")))
         (branch (string-trim (vc-git--run-command-string nil "rev-parse" "--abbrev-ref" "HEAD")))
         (file (string-trim (file-relative-name (buffer-file-name) (vc-root-dir))))
         (line (line-number-at-pos)))
    (message "Opening remote on browser: %s" remote-url)
    (if (and remote-url
             (string-match "\\(?:git@\\|https://\\)\\([^:/]+\\)[:/]\\(.+?\\)\\(?:\\.git\\)?$" remote-url))
        (let ((host (match-string 1 remote-url))
              (path (match-string 2 remote-url)))
          ;; Convert SSH URLs to HTTPS (e.g., git@github.com:user/repo.git -> https://github.com/user/repo)
          (when (string-prefix-p "git@" host)
            (setq host (replace-regexp-in-string "^git@" "" host)))
          ;; Construct the appropriate URL based on CURRENT-LINE
          (browse-url
           (if current-line
               (format "https://%s/%s/blob/%s/%s#L%d" host path branch file line)
             (format "https://%s/%s" host path))))
      (message "Could not determine repository URL"))))

(global-set-key (kbd "C-x v B") 'emacs-solo/vc-browse-remote)
If you use the universal argument C-u before C-x v B it browses the
current line.
3. emacs-solo/vc-diff-on-current-hunk (C-x v =)
This function extends the default functionality, meaning it shows the
diff for the current file, but also jumps to the hunk containing the current
line. It’s a handy tool for reviewing changes on large files.
(defun emacs-solo/vc-diff-on-current-hunk ()
  "Show the diff for the current file and jump to the hunk containing the current line."
  (interactive)
  (let ((current-line (line-number-at-pos)))
    (message "Current line in file: %d" current-line)
    (vc-diff) ; Generate the diff buffer
    (with-current-buffer "*vc-diff*"
      (goto-char (point-min))
      (let ((found-hunk nil))
        (while (and (not found-hunk)
                    (re-search-forward "^@@ -\\([0-9]+\\), *[0-9]+ \\+\\([0-9]+\\), *\\([0-9]+\\) @@" nil t))
          (let* ((start-line (string-to-number (match-string 2)))
                 (line-count (string-to-number (match-string 3)))
                 (end-line (+ start-line line-count)))
            (message "Found hunk: %d to %d" start-line end-line)
            (when (and (>= current-line start-line)
                       (<= current-line end-line))
              (message "Current line %d is within hunk range %d to %d" current-line start-line end-line)
              (setq found-hunk t)
              (goto-char (match-beginning 0)))))
        (unless found-hunk
          (message "Current line %d is not within any hunk range." current-line)
          (goto-char (point-min)))))))

(global-set-key (kbd "C-x v =") 'emacs-solo/vc-diff-on-current-hunk)
Wrapping Up
These custom VC-focused Emacs functions have become essential tools in
my Git workflow, and I hope they can be useful to others as
well. Whether you’re inspecting the reflog, browsing remote
repositories, or reviewing diffs, these functions are designed to make
your workflow smoother and more efficient. Happy coding! 🚀
Here’s a puzzle for Go developers. How can the following code possibly panic with a nil pointer dereference?
var someThing *SomeStruct
// various things happen, including possibly assigning to someThing
if someThing != nil && someThing.Body != nil {
	fmt.Printf("%s", someThing.Body)
}
Think about it for a minute. I’ll wait.
OK are you done? I don’t want to spoil it for you.
Now we can see the problem: we checked if someThing was nil, but we implicitly referenced a field on the embedded http.Response without checking if Response was nil. If Response is nil, the code will panic.
Think really hard before you embed a pointer
I came across this struct in an API client library that I won’t name. I think it was probably a bad design idea. The struct represented the response from that API. So it looked something like this:
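Reconstructed as a runnable sketch (the type name, fields, and the demo function are my own guesses, not the real library’s code), the shape and the resulting panic look like this:

```go
package main

import (
	"fmt"
	"net/http"
)

// APIResponse sketches the embedded-pointer design described above;
// the real library also hangs pagination helpers off this type.
type APIResponse struct {
	*http.Response // embedded pointer: the library may leave this nil
}

// bodyIsNonNil runs the puzzle's check and reports whether it panicked.
func bodyIsNonNil(r *APIResponse) (ok bool, panicked bool) {
	defer func() {
		if recover() != nil {
			panicked = true
		}
	}()
	// r != nil guards the wrapper, but r.Body is sugar for
	// r.Response.Body, which dereferences the nil embedded pointer.
	if r != nil && r.Body != nil {
		ok = true
	}
	return
}

func main() {
	_, panicked := bodyIsNonNil(&APIResponse{})
	fmt.Println("panicked:", panicked) // prints "panicked: true"
}
```

Checking r.Response != nil before touching r.Body avoids the panic, which is exactly the check the embedding makes easy to forget.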
So: it wraps the actual API response, plus some extra methods for dealing with pagination. I can see why the author thought embedding the response was a good idea: it lets the consumer treat it like a normal http response with some extra features.
However, it becomes a problem when you can get a non-nil APIResponse with a nil Response. If you’re going to embed a pointer in a Go struct, you should do your best to make sure you never leave that pointer nil.
All in all I think this API would have been better served by just making the http response a normal struct field, and possibly copying important fields like Status into the struct1:
type APIResponse struct {
	Response *http.Response
	// And if it's really important to you that people type APIResponse.Status
	// instead of APIResponse.Response.Status:
	Status int
	// non-exported fields
}
Of course, if the response is nil, I suppose you leave Status as the zero value and let consumers figure out what to do with http response code 0. ↩︎
As I’m getting back into the “groove” of things, I started this pattern:
Wake up. Switch to home clothes from PJs.
Go to the kitchen, get some water, listen to the birds and think for a bit (about 10 mins)
Exercise: stretches (mostly focusing on back and posture), with some push-ups and crunches.
Meditation (5 mins) follows exercise
Back to the kitchen to make coffee and breakfast
Eat, take vitamins, talk to Nat as his morning starts as well
Start working:
View the agenda for the day (meetings, major projects, TODOs)
Looking for Pinned emails from previous days and Reminders, combine them into TODOs for the day
Start tackling tasks in my agenda (emacs org-mode), recording what I’m doing in notes
Around 13:00 to 15:00 (depending on meetings and things), time to exercise, or if I’m in the office, go back home for this. No targeted goal specifically yet; it’s mostly about the routine, but I’m trying to include a jog here if I can, or weight lifting
Back to work: this is a good “quiet time” to work on projects without interruption, depending on meetings.
Around 18:00 or 19:00 finishing work. Nat’s back at that time, or I spend time with another partner, depending on the day.
Around 20:00, I enjoy a show (these days it’s Silo) or video games (Helldivers 2 mostly at this point, but there’s also the excellent Mind over Magic I need to review soon)
I usually sleep around 22:00 or 23:00. Hopefully I can keep up the 7 hours of sleep I get a night or so, which means I wake up around 6:00 the next day to start again.
There are many points that change in this workflow (for example, if I blog, it’s usually in the morning at some point after food, if work allows), but in general, this is the outline I try to get back to.
Once again I see the benefits of channeling all my tasks that come from emails, reminders, my calendar, phone calls, meeting etc - into my org-mode agenda, where I have one simple list without distractions of what I need to do. If it’s not there, I’m not doing it that day. And there’s always more than I can finish each day anyway, that’s just the nature of things.
The big benefit (I’ve said so many times in the past) here is that because I log what I do, I know exactly what was done and I have a good idea of what needs to be done next. This is also very useful when there’s a new project, and you can just use the template from last time. Good stuff.
There are three layered rate-limiters in here, applied only to certain URIs:
One does a per-IP limit excluding my Tailscale network and some ASNs I connect from. Each IP can make one costly request per minute, otherwise receive a 503.
One tries to map certain cloud providers in to a single rate-limit key and gives each of these providers 1 RPM on these endpoints. Each group of cloud IPs can make one request per minute, otherwise receive a 503.
One puts a limit to 1 RPS of all traffic on each "site feature" in Gitea.
So now if you try to browse my Gitea instance at http://code.rix.si or make a git clone over HTTP, that will work just fine, but a handful of expensive endpoints will be aggressively rate-limited. If you want to look at the git blame for every file in my personal checkout of nixpkgs, you can do that on your own time on your own machine now.
So far installing this on my "edge" server seems to work really well, cutting the load of the small SSL terminator instance in half. Let's see if this is Good Enough.
Hello friends. Despite my best efforts, I cannot do my part to add org-utf-to-xetex to MELPA—this is my fault. Unfortunately, I’m unsure why I can’t fork the repo, but I need your help now. If you are up for the mission then here are the details. If you are curious about becoming a co-maintainer—that would … Continue reading "I Need Your Help Adding org-utf-to-xetex to MELPA"
Please note that planet.emacslife.com aggregates blogs, and blog authors might mention or link to nonfree things. To add a feed to this page, please e-mail the RSS or ATOM feed URL to sacha@sachachua.com . Thank you!