It’s Monday. Most of us are looking around bleary eyed wondering what happened to the weekend that just started. Here’s a little humor to get you warmed up for what’s to come. Over at the Emacs subreddit, bruchieOP posts the Ten Commandments of the Church of Emacs.
It’s a satirical look at the beliefs that most of us Emacs users hold near and dear. Note that satirical qualifier. For example, lots of people use and swear by “pre-configured distributions”, such as Doom and Spacemacs, as even bruchieOP acknowledges. Still, most of you will identify with and agree with the majority of the prescriptions.
Take a couple of minutes to enjoy the post and then back to work. Emacs is waiting.
I often want to copy the toot URL after posting a
new toot about a blog post so that I can update
the blog post with it. Since I post from Emacs
using mastodon.el, I can probably figure out how
to get the URL after tooting. A quick-and-dirty
way is to retrieve the latest status.
I considered overriding the keybinding in
mastodon-toot-mode-map, but I figured using
advice would mean I can copy things even after
automated toots.
A more elegant way to do this might be to modify
mastodon-toot-send to run-hook-with-args a
variable with the response as an argument, but
this will do for now.
I used a hook in my advice so that I can change
the behaviour from other functions. For example, I
have some code to compose a toot with a link to
the current post. After I send a toot, I want to
check if the toot contains the current entry's
permalink. If it has and I don't have a Mastodon
toot field yet, maybe I can automatically set that
property, assuming I end up back in the Org Mode
file I started it from.
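As a sketch of that advice-plus-hook arrangement (assuming mastodon.el's mastodon-toot-send; the hook and helper names here are hypothetical):

```elisp
(defvar my-mastodon-toot-sent-hook nil
  "Hook run after `mastodon-toot-send' completes.")

(defun my-mastodon-run-toot-sent-hook (&rest _)
  "Run `my-mastodon-toot-sent-hook' so other functions can add behaviour."
  (run-hooks 'my-mastodon-toot-sent-hook))

;; Advice fires even for automated toots, unlike a keybinding override
;; in mastodon-toot-mode-map.
(advice-add 'mastodon-toot-send :after #'my-mastodon-run-toot-sent-hook)
```

Functions such as the permalink check can then be added to or removed from the hook without touching the advice itself.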
If I combine that with a development copy of my
blog that ignores most of my posts so it compiles
faster and a function that copies just the current
post's files over, I can quickly make a post
available at its permalink (which means the link
in the toot will work) before I recompile the rest
of the blog, which takes a number of minutes.
The next version of Denote is shaping up to be a huge one. One of the
newest features I am working on is the support for “query links”.
Those use the same denote: link type infrastructure but exhibit a
different behaviour than the direct links we have always had. Instead
of pointing to a file via its unique identifier, they initiate a
search through the contents of all files in the denote-directory.
This search uses the built-in Xref mechanism and is the same as what
we have already been doing with backlinks (basically, a grep).
In short:
Direct links: Those point to a file via its unique identifier.
For example, denote:20250324T074132 resolves to a file path.
Clicking on the link opens the corresponding file. Org export will
also take care to turn this into a file path.
Query links: Those do not point to any file per se. They are a
string of one or more words or a regular expression that is matched
against the contents of files. For example, denote:this is a test
produces a buffer listing all matches for the given query. Clicking
on the matching line in that buffer opens the file at that point
(just how our backlinks work when they show context—I am
generalising this mechanism).
Direct links can point to any file, including PDFs, videos, and
pictures (assuming they are renamed to use the Denote file-naming
scheme), whereas query links are limited to text files.
Development discussion and screenshots
This is a work-in-progress that lives on its own branch as of this
writing. I will not elaborate at length right now as the
implementation details may change. I have, nonetheless, created an
issue on the GitHub repository where interested parties can provide
their feedback. It also includes some screenshots I took:
https://github.com/protesilaos/denote/issues/561. The code includes
other changes which pertain to how we handle backlinks and constitutes
a simplification of the code base.
The idea is to add the functionality to the main branch in the
coming days or weeks. Then I will do a video about it and/or explain
more.
That granted, do not forget that the official manual is the most
up-to-date reference and the single source of truth.
Denote sources
Denote is a simple note-taking tool for Emacs. It is based on the idea
that notes should follow a predictable and descriptive file-naming
scheme. The file name must offer a clear indication of what the note is
about, without reference to any other metadata. Denote basically
streamlines the creation of such files while providing facilities to
link between them.
Denote’s file-naming scheme is not limited to “notes”. It can be used
for all types of file, including those that are not editable in Emacs,
such as videos. Naming files in a consistent way makes their
filtering and retrieval considerably easier. Denote provides relevant
facilities to rename files, regardless of file type.
[ Further down on this list I include more of my Denote-related packages. ]
Over at the Emacs subreddit, weevyl talks about how Emacs completion changed his life. Or at least his Emacs life. His story is about his repeated efforts to move to Emacs and always failing. He finally realized that the reason for his failures was the difficulty of learning and remembering command names. We’ve all been there. You load a new package and suddenly you have some new commands to remember.
Weevyl’s epiphany was that command completion can virtually eliminate this problem. He’s a Helm user but the same principles apply to Ivy and other such packages. When you start to type a command name, you get a list of completions to help you choose the right one. I’m not a Helm user but with Ivy, the search is fuzzy and it’s often enough to start with the package name to get a list of available commands.
I use this all the time and have long since stopped thinking about it but weevyl’s post made me realize what a powerful facility this completion is. You don’t have to try to remember a long list of commands or their bindings. As I said, it’s usually enough to remember the package name to get a list of commands that make it easy to narrow down to the desired target.
There’s lots of agreement in the comments. Several people have almost the same story. Others have written some Elisp to help with this and shared their code so if you’re also finding remembering Emacs command names daunting, take a look at the post. It’s easy to forget how much Emacs does—or can do—to make our workflow easier.
Over at the Emacs subreddit, arni_ca asks about key bindings for cursor movement. It’s not quite clear what he’s asking but the theme is moving the cursor without bindings like Ctrl+f, Ctrl+b, Meta+f, Meta+b, Ctrl+n, and Ctrl+p.
As is often the case, all the juice is in the comments. It’s astounding how many different strategies people have for such movements. They range from using the arrow keys to custom Elisp snippets. Take a look at the post to see if any of the ideas will work for you.
By now, you all know my answer to questions like this. I started with this epiphany from Steve Yegge: large scale navigation—more than a few characters or lines—should be made with search. It’s perfectly obvious but it occurs to few of us until someone points it out. The wonderful avy package refines this strategy and offers finer grained control of where the cursor ends up. Avy is probably my most used set of commands. I use them constantly as I write and edit. Following Karthinks, I’ve mostly pared my use down to avy-goto-char-timer although I do sometimes still use avy-goto-word-1. Regardless, if you aren’t already using Avy, you should take a look. It’s the backbone of my navigation.
The other navigation tool I use is an import from Vim: jump-char that lets you jump to the next or previous instance of a character. It’s incredibly useful and, again, I use it several times a day. If you use Avy and Jump-char, you will have most of your navigation needs met.
I explain how to implement key layout ideas from the Space Cadet. I configure shift as parens, and change caps lock to a dual use key for backspace and control.
(...)
I have been following the master branch of the emacs.git repository
for many years now. It helps me test new features and make necessary
adjustments to all the packages I develop/maintain. Below I explain
how I make this happen on my computer, which is running Debian stable
(Debian 12 “Bookworm” as of this writing). If you are a regular user,
there is no reason to build from source: just use the latest stable
release and you should be fine.
Configure the apt development sources
To build Emacs from source on Debian, you first need to have the
deb-src package archive enabled. In your /etc/apt/sources.list
file you must have something like this:
deb http://deb.debian.org/debian/ bookworm main
deb-src http://deb.debian.org/debian/ bookworm main
After modifying the sources, run the following on the command line to
fetch the index with new package names+versions:
sudo apt update
Get the Emacs build dependencies
Now that you have enabled the deb-src archive, you can install the
build dependencies of the Debian emacs package with the following on
the command line:
sudo apt build-dep emacs
With this done, you are ready to build Emacs from source.
Get the Emacs source code
You need the git program to get the source code from the emacs.git
website. So install it with this command:
sudo apt install git
Now make a copy of the Emacs source code, using this on the command
line:
Replace ~/path/to/my/copy-of-emacs.git with the actual destination
of your preference. I have a ~/Builds directory where I store all
the projects I build from source. I thus do:
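The clone command itself is elided above; Emacs is hosted on GNU Savannah, so it would look something like this (the destination path is just my own convention):

```shell
git clone https://git.savannah.gnu.org/git/emacs.git ~/Builds/emacs.git
```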
Assuming you have the copy of emacs.git stored at ~/Builds/emacs.git,
you switch to that directory with the following:
cd ~/Builds/emacs.git
Keep in mind that unless you explicitly switch to another branch, you
are on master, i.e. the latest development target.
NOTE: All subsequent commands are run from your equivalent of
~/Builds/emacs.git.
Run autogen.sh the first time
This script will generate the configuration scaffold. You only really
need to do this once (and I always forget about it for this very
reason). Simply do this on the command line:
./autogen.sh
It checks that you have all you need to get started and prints output
like this:
Checking whether you have the necessary tools...
(Read INSTALL.REPO for more details on building Emacs)
Checking for autoconf (need at least version 2.65) ... ok
Your system has the required tools.
Building aclocal.m4 ...
Running 'autoreconf -fi -I m4' ...
Building 'aclocal.m4' in exec ...
Running 'autoreconf -fi' in exec ...
Configuring local git repository...
'.git/config' -> '.git/config.~1~'
git config transfer.fsckObjects 'true'
git config diff.cpp.xfuncname '!^[ ]*[A-Za-z_][A-Za-z_0-9]*:[[:space:]]*($|/[/*])
^((::[[:space:]]*)?[A-Za-z_][A-Za-z_0-9]*[[:space:]]*\(.*)$
^((#define[[:space:]]|DEFUN).*)$'
git config diff.elisp.xfuncname '^\([^[:space:]]*def[^[:space:]]+[[:space:]]+([^()[:space:]]+)'
git config diff.m4.xfuncname '^((m4_)?define|A._DEFUN(_ONCE)?)\([^),]*'
git config diff.make.xfuncname '^([$.[:alnum:]_].*:|[[:alnum:]_]+[[:space:]]*([*:+]?[:?]?|!?)=|define .*)'
git config diff.shell.xfuncname '^([[:space:]]*[[:alpha:]_][[:alnum:]_]*[[:space:]]*\(\)|[[:alpha:]_][[:alnum:]_]*=)'
git config diff.texinfo.xfuncname '^@node[[:space:]]+([^,[:space:]][^,]+)'
Installing git hooks...
'build-aux/git-hooks/commit-msg' -> '.git/hooks/commit-msg'
'build-aux/git-hooks/pre-commit' -> '.git/hooks/pre-commit'
'build-aux/git-hooks/prepare-commit-msg' -> '.git/hooks/prepare-commit-msg'
'build-aux/git-hooks/post-commit' -> '.git/hooks/post-commit'
'build-aux/git-hooks/pre-push' -> '.git/hooks/pre-push'
'build-aux/git-hooks/commit-msg-files.awk' -> '.git/hooks/commit-msg-files.awk'
'.git/hooks/applypatch-msg.sample' -> '.git/hooks/applypatch-msg'
'.git/hooks/pre-applypatch.sample' -> '.git/hooks/pre-applypatch'
You can now run './configure'.
Do not be intimidated by it. Focus on the final line instead, which
points you to the configure step.
Explore the build flags
How exactly you build Emacs depends on your preferences and
system-specific requirements. At the end of this post, I copy my
current configuration, though I advise against copying it without
understanding what it does.
If you have no specific preferences, just use the defaults by running
this on the command line:
./configure
It will set up the build environment for you. If, however, you wish
to explore your options and customise the emacs program you will
get, then issue the following command and carefully read its output:
./configure --help
The minimum I recommend is to specify where the build artefacts
are stored. I use this, which has not caused me any issues over the
years:
./configure --prefix=/usr/local
Configure the build environment with your preferred flags
Once you have understood the available options, go ahead and run
configure. For example:
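The example invocation is elided here; one plausible combination is shown below. These flags are illustrative only, not the author's actual choices, so check ./configure --help on your system before copying anything:

```shell
./configure --prefix=/usr/local --with-native-compilation --with-tree-sitter
```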
Whenever you need to rebuild Emacs with some new flags, run the
configure command again, passing it the relevant flags. If you wish
to keep the same options for a new build, then simply do not run
configure again.
Make the program
Once configure finishes its work, it is time to run the make
program. For new builds, this is as simple as:
make
Sometimes you have old build artefacts that conflict with changes
upstream. When that happens, the build process will fail. You may then
need to use:
make bootstrap
In general, make is enough. It will be slow the first time, but will
be faster on subsequent runs as it reuses what is already there. A
make bootstrap will always be slow though, as it generates
everything anew.
Install the program that was made
After make is done, you are ready to install Emacs:
sudo make install
You will not need escalated privileges (i.e. sudo) if you specified
a --prefix with a user directory during the configure step. How
you go about it is up to you.
Keeping Emacs up-to-date
Whenever you wish to update from source, go to where your copy of
emacs.git is (e.g. ~/Builds/emacs.git) and pull the latest changes
using the git program:
git pull
Then repeat make and make install. Remember that you do not need
to re-run configure unless you specifically want to modify your
build (and if you do that, you probably need to make bootstrap).
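Putting the update cycle together, the routine looks like this (assuming the ~/Builds/emacs.git location used above):

```shell
cd ~/Builds/emacs.git
git pull            # fetch the latest changes on master
make                # incremental rebuild; fall back to `make bootstrap` on failure
sudo make install   # omit sudo if your --prefix points into your home directory
```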
Learn about the latest NEWS
Emacs users can at all times learn about changes introduced in their
current version of Emacs with M-x view-emacs-news. It is bound to
the key C-h n by default. This command opens the current NEWS
file. With a numeric prefix argument, you get the NEWS of the given
Emacs version. For example, C-u 27 C-h n shows you what Emacs
version 27 introduced.
Compare your NEWS to those of emacs.git
With the help of the built-in Emacs ediff package, you can compare
your latest NEWS to those coming from emacs.git. I always do this
after pulling the latest changes from source (with git pull).
From the root directory of your copy of emacs.git (e.g.
~/Builds/emacs.git), and while using Emacs, you can do M-x
project-find-file (C-x p f) to search the Emacs “project” for a
file called etc/NEWS. This is where the latest user-facing changes
are recorded.
If you are not sure where you are on the filesystem while inside
Emacs, do M-x cd (or M-x dired or M-x find-file), select the
root directory of your emacs.git, hit RET, and then do M-x
project-find-file.
Now that you have emacs.git/etc/NEWS in a buffer, also load your
copy of NEWS with M-x view-emacs-news (C-h n).
Then do M-x ediff-buffers, which will prompt for two buffers to
compare. First select your version of NEWS and then that of emacs.git.
NOTE: I think the default Ediff interface is problematic. Put the
following in your configuration to make it work in a single frame:
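The snippet itself is missing here; a common way to keep Ediff in a single frame, which may be what was intended, is:

```elisp
;; Show the Ediff control panel in the same frame instead of a
;; separate one, and split the compared buffers side by side.
(setq ediff-window-setup-function #'ediff-setup-windows-plain)
(setq ediff-split-window-function #'split-window-horizontally)
```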
I am not updating old publications, unless otherwise noted. The most
up-to-date record of my Emacs build is documented in my dotemacs:
https://protesilaos.com/emacs/dotemacs.
Inspect the value of the Emacs variable system-configuration-options
to find out how your Emacs is built.
I’ve written before about relative line numbers. The idea is to label the current line as 0 and the other lines as positive or negative offsets from it. It’s handy if you want to quickly move to another line. As I said before, there are ways—avy in my case—that you might find better for doing that kind of thing.
In another of his useful posts, Bozhidar Batsov gives his own take on relative line numbers. Batsov says that relative line numbers are particularly useful if you are an evil user but even users of conventional Emacs can find them useful. If this sort of thing interests you, take a look at Batsov’s post for some details.
As I said in my previous post on the matter, I’ve never been able to warm up to the idea. I seldom have line numbers enabled but when I do I find it just as easy to jump to an absolute line number. Actually, I don’t do that either, I just use avy to get where I want to go. It generally provides finer control over where the cursor lands and is just as easy to use.
Of course, as I always say, that’s the beauty of Emacs. You can adapt it to whatever your preferred workflow is. If relative line numbers seem like a good idea to you, take a look at Batsov’s post.
As much as anyone else, I love and use Magit. It’s an incredibly
powerful tool, and I’ve spent countless hours relying on it for my Git
workflow. However, from time to time, I find myself trying to stick to
Emacs internals and avoid relying on external packages. This is where
my emacs-solo config comes in (you can find it at
https://github.com/lionyxml/emacs-solo). The
functions I’m sharing today are part of this config, hence the
emacs-solo/ prefix in their names.
These functions are meant to be quick copy/paste snippets that anyone
can tweak to suit their needs. They’re not complete, beautifully
tailored packages but rather "quick tips and ideas" that I’ve found
useful in my daily workflow. If you know a bit of Elisp, you can
easily adapt them to your preferences.
Here’s a quick overview of the keybindings and what each function
does:
C-x v R - Show Git reflog with ANSI colors and custom keybindings.
C-x v B - Browse the repository’s remote URL in the browser.
C-u C-x v B - Browse the repository’s remote URL and point to the current branch, file, and line.
C-x v = - Show the diff for the current file and jump to the hunk containing the current line.
Let’s dive into each function in detail!
1. emacs-solo/vc-git-reflog (C-x v R)
This function allows you to view the Git reflog in a new buffer with
ANSI colors and custom keybindings. It’s a great way to quickly
inspect the history of your Git repository.
(defun emacs-solo/vc-git-reflog ()
  "Show git reflog in a new buffer with ANSI colors and custom keybindings."
  (interactive)
  (let* ((root (vc-root-dir))
         (buffer (get-buffer-create "*vc-git-reflog*")))
    (with-current-buffer buffer
      (setq-local vc-git-reflog-root root)
      (let ((inhibit-read-only t))
        (erase-buffer)
        (vc-git-command buffer nil nil
                        "reflog"
                        "--color=always"
                        "--pretty=format:%C(yellow)%h%Creset %C(auto)%d%Creset %Cgreen%gd%Creset %s %Cblue(%cr)%Creset")
        (goto-char (point-min))
        (ansi-color-apply-on-region (point-min) (point-max)))
      (let ((map (make-sparse-keymap)))
        (define-key map (kbd "/") #'isearch-forward)
        (define-key map (kbd "p") #'previous-line)
        (define-key map (kbd "n") #'next-line)
        (define-key map (kbd "q") #'kill-buffer-and-window)
        (use-local-map map))
      (setq buffer-read-only t)
      (setq mode-name "Git-Reflog")
      (setq major-mode 'special-mode))
    (pop-to-buffer buffer)))

(global-set-key (kbd "C-x v R") 'emacs-solo/vc-git-reflog)
2. emacs-solo/vc-browse-remote (C-x v B and C-u C-x v B)
This function opens the repository’s remote URL in the browser. If
CURRENT-LINE is non-nil, it points to the current branch, file, and
line. Otherwise, it opens the repository’s main page.
(defun emacs-solo/vc-browse-remote (&optional current-line)
  "Open the repository's remote URL in the browser.
If CURRENT-LINE is non-nil, point to the current branch, file, and line.
Otherwise, open the repository's main page."
  (interactive "P")
  (let* ((remote-url (string-trim (vc-git--run-command-string
                                   nil "config" "--get" "remote.origin.url")))
         (branch (string-trim (vc-git--run-command-string
                               nil "rev-parse" "--abbrev-ref" "HEAD")))
         (file (string-trim (file-relative-name (buffer-file-name) (vc-root-dir))))
         (line (line-number-at-pos)))
    (message "Opening remote on browser: %s" remote-url)
    (if (and remote-url
             (string-match "\\(?:git@\\|https://\\)\\([^:/]+\\)[:/]\\(.+?\\)\\(?:\\.git\\)?$"
                           remote-url))
        (let ((host (match-string 1 remote-url))
              (path (match-string 2 remote-url)))
          ;; Convert SSH URLs to HTTPS (e.g., git@github.com:user/repo.git -> https://github.com/user/repo)
          (when (string-prefix-p "git@" host)
            (setq host (replace-regexp-in-string "^git@" "" host)))
          ;; Construct the appropriate URL based on CURRENT-LINE
          (browse-url
           (if current-line
               (format "https://%s/%s/blob/%s/%s#L%d" host path branch file line)
             (format "https://%s/%s" host path))))
      (message "Could not determine repository URL"))))

(global-set-key (kbd "C-x v B") 'emacs-solo/vc-browse-remote)
If you use the universal argument C-u before C-x v B it browses the
current line.
3. emacs-solo/vc-diff-on-current-hunk (C-x v =)
This function extends the default functionality: it shows the diff
for the current file and also jumps to the hunk containing the current
line. It’s a handy tool for reviewing changes in large files.
(defun emacs-solo/vc-diff-on-current-hunk ()
  "Show the diff for the current file and jump to the hunk containing the current line."
  (interactive)
  (let ((current-line (line-number-at-pos)))
    (message "Current line in file: %d" current-line)
    (vc-diff)                           ; Generate the diff buffer
    (with-current-buffer "*vc-diff*"
      (goto-char (point-min))
      (let ((found-hunk nil))
        (while (and (not found-hunk)
                    (re-search-forward
                     "^@@ -\\([0-9]+\\), *[0-9]+ \\+\\([0-9]+\\), *\\([0-9]+\\) @@" nil t))
          (let* ((start-line (string-to-number (match-string 2)))
                 (line-count (string-to-number (match-string 3)))
                 (end-line (+ start-line line-count)))
            (message "Found hunk: %d to %d" start-line end-line)
            (when (and (>= current-line start-line)
                       (<= current-line end-line))
              (message "Current line %d is within hunk range %d to %d"
                       current-line start-line end-line)
              (setq found-hunk t)
              (goto-char (match-beginning 0)))))
        (unless found-hunk
          (message "Current line %d is not within any hunk range." current-line)
          (goto-char (point-min)))))))

(global-set-key (kbd "C-x v =") 'emacs-solo/vc-diff-on-current-hunk)
Wrapping Up
These custom VC-focused Emacs functions have become essential tools in
my Git workflow, and I hope they can be useful to others as
well. Whether you’re inspecting the reflog, browsing remote
repositories, or reviewing diffs, these functions are designed to make
your workflow smoother and more efficient. Happy coding! 🚀
Here’s a puzzle for Go developers. How can the following code possibly panic with a nil pointer dereference?
var someThing *SomeStruct

// various things happen, including possibly assigning to someThing

if someThing != nil && someThing.Body != nil {
	fmt.Printf("%s", someThing.Body)
}
Think about it for a minute. I’ll wait.
OK are you done? I don’t want to spoil it for you.
Now we can see the problem: we checked whether someThing was nil, but we implicitly referenced a field on the embedded http.Response without checking whether Response was nil. If Response is nil, the code will panic.
Think really hard before you embed a pointer
I came across this struct in an API client library that I won’t name. I think it was probably a bad design idea. The struct represented the response from that API. So it looked something like this:
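To make the shape concrete, here is a hedged sketch of such a struct; the embedded *http.Response is the point, while the other field name is hypothetical:

```go
package main

import (
	"fmt"
	"net/http"
)

// APIResponse embeds a *http.Response, so APIResponse.Body really means
// APIResponse.Response.Body. The pagination field is a made-up example.
type APIResponse struct {
	*http.Response

	NextPage int // pagination helpers and extra methods would live here
}

func main() {
	r := &APIResponse{} // a non-nil wrapper whose embedded pointer is nil
	fmt.Println(r.Response == nil) // prints "true"
	// Accessing r.Body here would dereference the nil embedded pointer
	// and panic, even though r itself is non-nil.
}
```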
So: it wraps the actual API response, plus some extra methods for dealing with pagination. I can see why the author thought embedding the response was a good idea: it lets the consumer treat it like a normal http response with some extra features.
However, it becomes a problem when you can get a non-nil APIResponse with a nil Response. If you’re going to embed a pointer in a Go struct, you should do your best to make sure you never leave that pointer nil.
All in all I think this API would have been better served by just making the http response a normal struct field, and possibly copying important fields like Status into the struct1:
type APIResponse struct {
	Response *http.Response

	// And if it's really important to you that people type APIResponse.Status
	// instead of APIResponse.Response.Status:
	Status int

	// non-exported fields
}
Of course, if the response is nil, I suppose you leave Status as the zero value and let consumers figure out what to do with http response code 0. ↩︎
The other day, in response to my The Power Of Elfeed post, René Trappel wrote to me offline about his Elfeed package that enables content search. He calls his package “Cuckoo Search” for reasons you can discover at his Github repository.
By default, Elfeed doesn’t search the content of a post, just the metadata such as title, date, and tags. With Trappel’s package you can do finer searches. If you often want to do fine grained searches for posts that previously appeared in your feed, take a look at Trappel’s package.
I’ve always found the default search adequate but it’s easy to imagine needing the additional capability offered by Cuckoo Search. The package is not yet available in ELPA but Trappel gives a recipe for installing it directly from his Github repository. I’ll probably wait until it appears on ELPA to install it unless I find myself needing its capability in the meantime.
As I’m getting back into the “groove” of things, I started this pattern:
Wake up. Switch to home clothes from PJs.
Go to the kitchen, get some water, listen to the birds and think for a bit (about 10 mins)
Exercise: stretches (mostly focusing on back and posture), with some push-ups and crunches.
Meditation (5 mins) follows exercise
Back to the kitchen to make coffee and breakfast
Eat, take vitamins, talk to Nat as his morning starts as well
Start working:
View the agenda for the day (meetings, major projects, TODOs)
Looking for Pinned emails from previous days and Reminders, combine them into TODOs for the day
Start tackling tasks in my agenda (emacs org-mode), recording what I’m doing in notes
Around 13:00 to 15:00 (depending on meetings and things), time to exercise, or if I’m in the office, go back home for this. No targeted goal specifically yet; it’s mostly about the routine, but I’m trying to include a jog here if I can, or weight lifting
Back to work: this is a good “quiet time” to work on projects without interruption, depending on meetings.
Around 18:00 or 19:00 finishing work. Nat’s back at that time, or I spend time with another partner, depending on the day.
Around 20:00, I enjoy a show (these days it’s Silo) or video games (Helldivers 2 mostly at this point, but there’s also the excellent Mind over Magic I need to review soon)
I usually sleep around 22:00 or 23:00. Hopefully I can keep up the roughly 7 hours of sleep I get a night, which means I wake up around 6:00 the next day to start again.
There are many points that change in this workflow (for example, if I blog, it’s usually in the morning at some point after food, if work allows), but in general, this is the outline I try to get back to.
Once again I see the benefits of channeling all my tasks that come from emails, reminders, my calendar, phone calls, meeting etc - into my org-mode agenda, where I have one simple list without distractions of what I need to do. If it’s not there, I’m not doing it that day. And there’s always more than I can finish each day anyway, that’s just the nature of things.
The big benefit (I’ve said so many times in the past) here is that because I log what I do, I know exactly what was done and I have a good idea of what needs to be done next. This is also very useful when there’s a new project, and you can just use the template from last time. Good stuff.
There are three layered rate-limiters in here that are applied only to certain URIs:
One does a per-IP limit excluding my Tailscale network and some ASNs I connect from. Each IP can make one costly request per minute, otherwise receive a 503.
One tries to map certain cloud providers into a single rate-limit key and gives each of these providers 1 RPM on these endpoints. Each group of cloud IPs can make one request per minute, otherwise receive a 503.
One puts a limit to 1 RPS of all traffic on each "site feature" in Gitea.
So now if you try to browse my Gitea instance http://code.rix.si or make a git clone over HTTP that will work just fine, but a handful of expensive endpoints will be aggressively rate-limited. If you want to look at the git blame for every file in my personal checkout of nixpkgs, you can do that on your own time on your own machine now.
So far installing this on my "edge" server seems to work really well, cutting the load of the small SSL terminator instance in half. Let's see if this is Good Enough.
Hello friends. Despite my best efforts, I cannot do my part to add org-utf-to-xetex to MELPA—this is my fault. Unfortunately, I’m unsure why I can’t fork the repo, but I need your help now. If you are up for the mission then here are the details. If you are curious about becoming a co-maintainer—that would …
Nudged by Dave Winer's post about old-school
bloggers and my now-nicely-synchronizing setup of
NetNewsWire (iOS) and FreshRSS (web), I gave
Claude AI this prompt to list bloggers (with the
addition of "Please include URLs and short bios.")
and had fun going through the list it produced. A
number of people were no longer blogging
(unreachable sites or inactive blogs), but I found
a few that I wanted to add to my feed reader.
Here is my people.opml at the moment (slightly
redacted, as I read my husband's blog as well).
This list has some non-old-school bloggers as well
and some sketchnoters, but that's fine. It's a
very tiny slice of the awesomeness of the Internet
out there, definitely not exhaustive, just a
start. I've been adding more by trawling through
indieblog.page and the occasional interesting post
on news.ycombinator.com.
It makes sense to make an HTML version to make it
easier for people to explore, like those
old-fashioned blog rolls. Ooh, maybe some kind of
table like indieblog.page, listing a recent item
from each blog. (I am totally not surprised about
my tendency to self-nerd-snipe with some kind of
Emacs thing.) This uses my-opml-table and
my-rss-get-entries, which I have just added to my
Emacs configuration.
my-opml-table
(defun my-opml-table (xml)
  (sort
   (mapcar
    (lambda (o)
      (let ((latest (car (condition-case nil
                             (my-rss-get-entries (dom-attr o 'xmlUrl))
                           (error nil)))))
        (list
         (if latest
             (format-time-string "%Y-%m-%d" (plist-get latest :date))
           "")
         (org-link-make-string
          (or (dom-attr o 'htmlUrl)
              (dom-attr o 'xmlUrl))
          (replace-regexp-in-string " *|" "" (dom-attr o 'text)))
         (if latest
             (org-link-make-string
              (plist-get latest :url)
              (or (plist-get latest :title) "(untitled)"))
           ""))))
    (dom-search
     xml
     (lambda (o)
       (and (eq (dom-tag o) 'outline)
            (dom-attr o 'xmlUrl)
            (dom-attr o 'text)))))
   :key #'car :reverse t))
my-rss-get-entries: Return a list of the form ((:title … :url … :date …) …).
I'm rebuilding my feed list from scratch. I want
to read more. I read the aggregated feeds at
planet.emacslife.com every week as part of
preparing Emacs News. Maybe I'll go over the list
of blogs I aggregate there, widen it to include
all posts instead of just Emacs-specific ones, and
see what resonates. Emacs people tend to be
interesting. Here is an incomplete list based on
people who've posted in the past two years or so,
based on this work-in-progress
planetemacslife-expanded.opml. (I haven't tweaked
all the URLs yet. I stopped at around 2023 and
made the rest of the elements xoutline instead
of outline so that my code would skip them.)
posts about adapting technology to personal
interests, more than posts about the industry or
generalizations
detailed posts about things I'm currently
interested in (Emacs, personal knowledge
management, some Javascript), more than detailed
tech posts about things I've decided not to get
into at the moment
"I" posts more than "You" posts: personal
reflections rather than didactic advice
For the last few months, I've occasionally needed to run emacs on Windows. In
a previous post I wrote about how I
got my emacs config to work on Windows, which initially seemed
sufficient. But every time I switch from Linux (or even MacOS) to
Windows, I notice new sluggishness in one component or another!
Most recently, it was projectile.
Have you ever typed a command on a remote server over a
slow SSH connection? Muscle memory doesn't just involve your fingers
recalling commands and key bindings; it covers the entire
sequence of actions you intend to perform. When I press the keybinding
to find a file in a project, my fingers have already moved on to the
next steps, i.e. quickly filtering the results and hitting enter. I
expect the file list to appear almost instantaneously, and a lag of
even a few seconds is frustrating, not a mere inconvenience.
To make it worse, the project I was working on had a large number of
files. The delay in displaying the file list was a bit too much to
deal with. I had to drop everything else and fix it as a priority!
After going through the projectile documentation and some tinkering, I
was able to solve it rather quickly (all thanks to the well-written
docs). The answer lies in projectile's default config for indexing
files on Windows vs. Unix-like platforms. Here, indexing means
compiling the list of files in the project that the user may want to
open. You can inspect the projectile-indexing-method variable to
check which method is currently in use.
The default indexing method is alien on all operating systems
except Windows, where it's native. Native indexing is
implemented in emacs lisp and hence is portable, i.e. it works on all
platforms. The alien indexing method, on the other hand, shells out to
external tools such as git or find to obtain the list of files
in the project. For example, if your project uses git for version
control, projectile asks git for the list of files. An advantage of
this method is that your .gitignore file is automatically
respected.
native indexing can be much slower on projects that contain many
files and deeply nested directories, and setting it to alien can
speed things up significantly in those cases. The problem is that
alien requires Unix tools that can't be expected to be installed on
Windows, which is why it's not the default there.
But that doesn't mean you can't use alien on Windows. You can
install Cygwin or something similar that provides the required
Unix and GNU tools on Windows.
Make sure that the binaries are accessible on PATH. In my case
Cygwin was already installed but wasn't on PATH. You can permanently
edit the PATH environment variable from Advanced System Settings,
but I prefer to set it inside emacs itself:
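The original snippet didn't survive the feed; a sketch of what setting PATH from inside Emacs might look like (the Cygwin directory here is a typical default install location, not something stated in the post):

```emacs-lisp
;; Prepend Cygwin's bin directory to the PATH Emacs passes to subprocesses.
;; "C:/cygwin64/bin" is an assumed install location; adjust for your system.
(setenv "PATH" (concat "C:/cygwin64/bin;" (getenv "PATH")))
;; exec-path is what Emacs itself searches when looking for external programs.
(add-to-list 'exec-path "C:/cygwin64/bin")
```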
Notice that the delimiter on Windows is a semicolon (;), not a colon
(:) as on Linux.
All that remains now is to set the indexing method to alien:
(setq projectile-indexing-method 'alien)
And that should fix the sluggishness. But there is scope for further
optimization through caching. When the indexing method is native,
projectile implicitly enables caching. This means by changing it to
alien we've also unintentionally disabled caching. So enable it
explicitly.
(setq projectile-enable-caching t)
If your project has an unusually large number of files, you may also set
it to persistent, which persists the cache on disk so that it can
be used across emacs sessions.
If you use the same config on multiple OSs, you can add all
this Windows-specific code conditionally as follows:
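The conditional block itself is missing from this feed; here is a sketch of how the settings above might be guarded, under the assumption that Cygwin lives at its default install location:

```emacs-lisp
;; Windows-specific projectile setup, guarded so the same init file
;; works across OSs. The Cygwin path is an assumption; adjust as needed.
(when (eq system-type 'windows-nt)
  ;; Make Cygwin's Unix tools (git, find, ...) visible to Emacs.
  (setenv "PATH" (concat "C:/cygwin64/bin;" (getenv "PATH")))
  (add-to-list 'exec-path "C:/cygwin64/bin")
  ;; Use external tools for indexing, and re-enable caching explicitly,
  ;; since switching away from 'native disables the implicit caching.
  (setq projectile-indexing-method 'alien
        projectile-enable-caching t))
```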
This week in ollama-buddy updates, I have been continuing on the busy bee side of things.
The headlines are :
Transient menu - yes, I know I said I would never do it, but, well, I did, and as it turns out I kinda like it; it works especially well when setting parameters.
Support for fabric prompt presets - mainly because I thought user-curated prompts were a pretty cool idea, and now that I have system prompts implemented it seemed like a perfect fit. All I needed to do was pull the patterns directory and then parse it accordingly; of course, Emacs is good at this.
GGUF import - I don’t always pull from ollama’s command line; sometimes I download a GGUF file. Importing it into ollama is a bit of a process (create a model file, run a command, etc.), but now you can import from within dired!
More support for the ollama API - includes model management, so pulling, stopping, deleting and more!
Conversation history editing - as I store the history in a hash table, I can easily display it as an alist; editing can leverage the usual sexp keybindings, and the result is then loaded back into the variable.
Parameter profiles - when implementing the transient menu I thought it might be fun to try parameter profiles, where a set of parameters can be applied as a block for each preset.
And now for the detail, version by version…
<2025-03-19 Wed> 0.9.8
Added model management interface to pull and delete models
Introduced `ollama-buddy-manage-models` to list and manage models.
Added actions for selecting, pulling, stopping, and deleting models.
You can now manage your Ollama models directly within Emacs with ollama-buddy.
With this update, you can now:
Browse Available Models – See all installed models at a glance.
Select Models Easily – Set your active AI model with a single click.
Pull Models from Ollama Hub – Download new models or update existing ones.
Stop Running Models – Halt background processes when necessary.
Delete Unused Models – Clean up your workspace with ease.
Open the Model Management Interface
Press C-c W to launch the new Model Management buffer, or open it through the transient menu.
Manage Your Models
Click on a model to select it.
Use “Pull” to fetch models from the Ollama Hub.
Click “Stop” to halt active models.
Use “Delete” to remove unwanted models.
Perform Quick Actions
g → Refresh the model list.
i → Import a GGUF model file.
p → Pull a new model from the Ollama Hub.
When you open the management interface, you get a structured list like this:
Ollama Models Management
=======================
Current Model: mistral:7b
Default Model: mistral:7b
Available Models:
[ ] llama3.2:1b Info Pull Delete
[ ] starcoder2:3b Info Pull Delete
[ ] codellama:7b Info Pull Delete
[ ] phi3:3.8b Info Pull Delete
[x] llama3.2:3b Info Pull Delete Stop
Actions:
[Import GGUF File] [Refresh List] [Pull Model from Hub]
Previously, managing Ollama models required manually running shell commands. With this update, you can now do it all from Emacs, keeping your workflow smooth and efficient!
<2025-03-19 Wed> 0.9.7
Added GGUF file import and Dired integration
Import GGUF Models into Ollama from dired with the new ollama-buddy-import-gguf-file function. In dired just navigate to your file and press C-c i or M-x ollama-buddy-import-gguf-file to start the import process. This eliminates the need to manually input file paths, making the workflow smoother and faster.
The model will then be immediately available in the ollama-buddy chat interface.
<2025-03-18 Tue> 0.9.6
Added a transient menu containing all commands currently presented in the chat buffer
Moved the presets to the top level so they will be present in the package folder
Ollama Buddy now includes a transient-based menu system to improve usability and streamline interactions. Yes, I originally stated that I would never do it, but I think it complements my hand-crafted simple textual menu, especially now that I have defaulted the main chat interface to a simple menu.
This gives the user more options for configuration: they can use the chat in advanced mode, where the keybindings are presented in situ, or a more minimal basic setup where the transient menu can be activated. For my use-package definition I currently have the following set up, with the two styles of menus sitting alongside each other:
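The use-package form itself isn't reproduced in this feed; a hedged sketch of such a setup, binding both menu styles side by side (the keybindings follow the ones mentioned elsewhere in the post, but the exact form is my assumption):

```emacs-lisp
(use-package ollama-buddy
  :bind (("C-c o" . ollama-buddy-menu)            ; classic textual menu
         ("C-c O" . ollama-buddy-transient-menu)) ; new transient menu
  :custom
  ;; Keep the chat buffer minimal; the transient menu covers the rest.
  (ollama-buddy-interface-level 'basic))
```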
The new menu provides an organized interface for accessing the assistant’s core functions, including chat, model management, roles, and Fabric patterns. This post provides an overview of the features available in the Ollama Buddy transient menus.
Simply put, I pull the patterns directory, which contains prompt guidance for a range of different topics, and then push the patterns through a completing-read to set the ollama-buddy system prompt, so a special set of curated prompts can now be applied right in the ollama-buddy chat!
Anyways, here is a description of the transient menu system.
What is the Transient Menu?
The transient menu in Ollama Buddy leverages Emacs’ transient.el package (the same technology behind Magit’s popular interface) to create a hierarchical, discoverable menu system. This approach transforms the user experience from memorizing numerous keybindings to navigating through logical groups of commands with clear descriptions.
Accessing the Menu
The main transient menu can be accessed with the keybinding C-c O when in an Ollama Buddy chat buffer. You can also call it via M-x ollama-buddy-transient-menu from anywhere in Emacs.
What the Menu Looks Like
When called, the main transient menu appears at the bottom of your Emacs frame, organized into logical sections with descriptive prefixes. Here’s what you’ll see:
|o(Y)o| Ollama Buddy
[Chat] [Prompts] [Model] [Roles & Patterns]
o Open Chat l Send Region m Switch Model R Switch Roles
O Commands s Set System Prompt v View Model Status E Create New Role
RET Send Prompt C-s Show System i Show Model Info D Open Roles Directory
h Help/Menu r Reset System M Multishot f Fabric Patterns
k Kill/Cancel b Ollama Buddy Menu
[Display Options] [History] [Sessions] [Parameters]
A Toggle Interface Level H Toggle History N New Session P Edit Parameter
B Toggle Debug Mode X Clear History L Load Session G Display Parameters
T Toggle Token Display V Display History S Save Session I Parameter Help
U Display Token Stats J Edit History Q List Sessions K Reset Parameters
C-o Toggle Markdown->Org Z Delete Session F Toggle Params in Header
c Toggle Model Colors p Parameter Profiles
g Token Usage Graph
This visual layout makes it easy to discover and access the full range of Ollama Buddy’s functionality. Let’s explore each section in detail.
Menu Sections Explained
Chat Section
This section contains the core interaction commands:
Open Chat (o): Opens the Ollama Buddy chat buffer
Commands (O): Opens a submenu with specialized commands
Send Prompt (RET): Sends the current prompt to the model
Help/Menu (h): Displays the help assistant with usage tips
Kill/Cancel Request (k): Cancels the current ongoing request
Prompts Section
These commands help you manage and send prompts:
Send Region (l): Sends the selected region as a prompt
Set System Prompt (s): Sets the current prompt as a system prompt
Show System Prompt (C-s): Displays the current system prompt
Reset System Prompt (r): Resets the system prompt to default
Ollama Buddy Menu (b): Opens the classic menu interface
Model Section
Commands for model management:
Switch Model (m): Changes the active LLM
View Model Status (v): Shows status of all available models
Show Model Info (i): Displays detailed information about the current model
Multishot (M): Sends the same prompt to multiple models
Roles & Patterns Section
These commands help manage roles and use fabric patterns:
Switch Roles (R): Switch to a different predefined role
Create New Role (E): Create a new role interactively
Open Roles Directory (D): Open the directory containing role definitions
Fabric Patterns (f): Opens the submenu for Fabric patterns
When you select the Fabric Patterns option, you’ll see a submenu like this:
Fabric Patterns (42 available, last synced: 2025-03-18 14:30)
[Actions] [Sync] [Categories] [Navigation]
s Send with Pattern S Sync Latest u Universal Patterns q Back to Main Menu
p Set as System P Populate Cache c Code Patterns
l List All Patterns I Initial Setup w Writing Patterns
v View Pattern Details a Analysis Patterns
Display Options Section
Commands to customize the display:
Toggle Interface Level (A): Switch between basic and advanced interfaces
Toggle Debug Mode (B): Enable/disable JSON debug information
Display Token Stats (U): Show detailed token usage information
Toggle Markdown->Org (C-o): Enable/disable conversion to Org format
Toggle Model Colors (c): Enable/disable model-specific colors
Token Usage Graph (g): Display a visual graph of token usage
History Section
Commands for managing conversation history:
Toggle History (H): Enable/disable conversation history
Clear History (X): Clear the current history
Display History (V): Show the conversation history
Edit History (J): Edit the history in a buffer
Sessions Section
Commands for session management:
New Session (N): Start a new session
Load Session (L): Load a saved session
Save Session (S): Save the current session
List Sessions (Q): List all available sessions
Delete Session (Z): Delete a saved session
Parameters Section
Commands for managing model parameters:
Edit Parameter (P): Opens a submenu to edit specific parameters
Display Parameters (G): Show current parameter settings
Parameter Help (I): Display help information about parameters
Reset Parameters (K): Reset parameters to defaults
Toggle Params in Header (F): Show/hide parameters in header
Parameter Profiles (p): Opens the parameter profiles submenu
When you select the Edit Parameter option, you’ll see a comprehensive submenu of all available parameters:
Parameters
[Generation] [More Generation] [Mirostat]
t Temperature f Frequency Penalty M Mirostat Mode
k Top K s Presence Penalty T Mirostat Tau
p Top P n Repeat Last N E Mirostat Eta
m Min P x Stop Sequences
y Typical P l Penalize Newline
r Repeat Penalty
[Resource] [More Resource] [Memory]
c Num Ctx P Num Predict m Use MMAP
b Num Batch S Seed L Use MLOCK
g Num GPU N NUMA C Num Thread
G Main GPU V Low VRAM
K Num Keep o Vocab Only
[Profiles] [Actions]
d Default Profile D Display All
a Creative Profile R Reset All
e Precise Profile H Help
A All Profiles F Toggle Display in Header
q Back to Main Menu
Parameter Profiles
Ollama Buddy includes predefined parameter profiles that can be applied with a single command. When you select “Parameter Profiles” from the main menu, you’ll see:
Parameter Profiles
Current modified parameters: temperature, top_k, top_p
[Available Profiles]
d Default
c Creative
p Precise
[Actions]
q Back to Main Menu
Commands Submenu
The Commands submenu provides quick access to specialized operations:
Ollama Buddy Commands
[Code Operations] [Language Operations] [Pattern-based] [Custom]
r Refactor Code l Dictionary Lookup f Fabric Patterns C Custom Prompt
d Describe Code s Synonym Lookup u Universal Patterns m Minibuffer Prompt
g Git Commit Message p Proofread Text c Code Patterns
[Actions]
q Back to Main Menu
Direct Keybindings
For experienced users who prefer direct keybindings, all transient menu functions can also be accessed through keybindings with the prefix of your choice (or C-c O when in the chat minibuffer) followed by the key shown in the menu. For example:
C-c O s - Set system prompt
C-c O m - Switch model
C-c O P - Open parameter menu
Customization
The transient menu can be customized by modifying the transient-define-prefix definitions in the package. You can add, remove, or rearrange commands to suit your workflow.
<2025-03-17 Mon> 0.9.5
Added conversation history editing
Added functions to edit conversation history (ollama-buddy-history-edit, ollama-buddy-history-save, etc.).
Updated ollama-buddy-display-history to support history editing.
Added keybinding C-c E for history editing.
Introducing conversation history editing!!
Key Features
Now, you can directly modify past interactions, making it easier to refine and manage your ollama-buddy chat history.
Previously, conversation history was static: you could view it but not change it. With this update, you can now:
Edit conversation history directly in a buffer.
Modify past interactions for accuracy or clarity.
Save or discard changes with intuitive keybindings (C-c C-c to save, C-c C-k to cancel).
Edit the history of all models or a specific one.
Simply use the new command C-c E to open the conversation history editor. This will display your past interactions in an editable format (alist). Once you’ve made your changes, press C-c C-c to save them back into Ollama Buddy’s memory.
And with a universal argument, you can use C-c E to edit the history of an individual model.
<2025-03-17 Mon> 0.9.1
New simple basic interface is available.
As this package becomes more advanced, I’ve been adding more to the intro message, making it increasingly cluttered. This could be off-putting for users who just want a simple interface to a local LLM via Ollama.
Therefore I have decided to add a customization option to simplify the menu.
Note: all functionality will still be available through keybindings, so just like Emacs then! :)
Note: some could see this initially as a breaking change, since the intro message will look different, but rest assured all the functionality is still there (just to re-emphasize). So if you have been using it before and want the original functionality/intro message, just set:
(setq ollama-buddy-interface-level 'advanced)
(defcustom ollama-buddy-interface-level 'basic
  "Level of interface complexity to display.
'basic shows minimal commands for new users.
'advanced shows all available commands and features."
  :type '(choice (const :tag "Basic (for beginners)" basic)
                 (const :tag "Advanced (full features)" advanced))
  :group 'ollama-buddy)
By default the menu will be set to Basic, unless explicitly set otherwise in an init file. Here is an example of the basic menu:
*** Welcome to OLLAMA BUDDY
#+begin_example
___ _ _ n _ n ___ _ _ _ _
| | | |__._|o(Y)o|__._| . |_ _ _| |_| | | |
| | | | | . | | . | . | | | . | . |__ |
|___|_|_|__/_|_|_|_|__/_|___|___|___|___|___|
#+end_example
**** Available Models
(a) another:latest (d) jamesio:latest
(b) funnyname2:latest (e) tinyllama:latest
(c) funnyname:latest (f) llama:latest
**** Quick Tips
- Ask me anything! C-c C-c
- Change model C-c m
- Cancel request C-c k
- Browse prompt history M-p/M-n
- Advanced interface (show all tips) C-c A
and here is the more advanced version:
*** Welcome to OLLAMA BUDDY
#+begin_example
___ _ _ n _ n ___ _ _ _ _
| | | |__._|o(Y)o|__._| . |_ _ _| |_| | | |
| | | | | . | | . | . | | | . | . |__ |
|___|_|_|__/_|_|_|_|__/_|___|___|___|___|___|
#+end_example
**** Available Models
(a) another:latest (d) jamesio:latest
(b) funnyname2:latest (e) tinyllama:latest
(c) funnyname:latest (f) llama:latest
**** Quick Tips
- Ask me anything! C-c C-c
- Show Help/Token-usage/System-prompt C-c h/U/C-s
- Model Change/Info/Cancel C-c m/i/k
- Prompt history M-p/M-n
- Session New/Load/Save/List/Delete C-c N/L/S/Y/W
- History Toggle/Clear/Show C-c H/X/V
- Prompt to multiple models C-c l
- Parameter Edit/Show/Help/Reset C-c P/G/I/K
- System Prompt/Clear C-u/+C-u +C-u C-c C-c
- Toggle JSON/Token/Params/Format C-c D/T/Z/C-o
- Basic interface (simpler display) C-c A
- In another buffer? M-x ollama-buddy-menu
<2025-03-17 Mon> 0.9.0
Added command-specific parameter customization
Added :parameters property to command definitions for granular control
Implemented functions to apply and restore parameter settings
Added example configuration to refactor-code command
With the latest update, you can now define specific parameter sets for each command in the menu, enabling you to optimize each AI interaction for its particular use case.
Different AI tasks benefit from different parameter settings. When refactoring code, you might want a more deterministic, precise response (lower temperature, higher repetition penalty), but when generating creative content, you might prefer more variation and randomness (higher temperature, lower repetition penalty). Previously, you had to manually adjust these parameters each time you switched between different types of tasks.
The new command-specific parameters feature lets you pre-configure the optimal settings for each use case. Here’s how it works:
Key Features
Per-Command Parameter Sets: Define custom parameter values for each command in your menu
Automatic Application: Parameters are applied when running a command and restored afterward
Non-Destructive: Your global parameter settings remain untouched
Easy Configuration: Simple interface for adding or updating parameters
Example Configuration
;; Define a command with specific parameters
(refactor-code
 :key ?r
 :description "Refactor code"
 :prompt "refactor the following code:"
 :system "You are an expert software engineer..."
 :parameters ((temperature . 0.2) (top_p . 0.7) (repeat_penalty . 1.3))
 :action (lambda () (ollama-buddy--send-with-command 'refactor-code)))

;; Add parameters to an existing command
(ollama-buddy-add-parameters-to-command 'git-commit
 :temperature 0.4 :top_p 0.9 :repeat_penalty 1.1)

;; Update properties and parameters at once
(ollama-buddy-update-command-with-params 'describe-code
 :model "codellama:latest"
 :parameters '((temperature . 0.3) (top_p . 0.8)))
This feature is particularly useful for:
Code-related tasks: Lower temperature for more deterministic code generation
Creative writing: Higher temperature for more varied and creative outputs
Technical explanations: Balanced settings for clear, accurate explanations
Summarization tasks: Custom parameters to control verbosity and focus
<2025-03-16 Sun> 0.8.5
Added system prompt support for commands
Introduced `:system` field to command definitions.
Added `ollama-buddy-show-system-prompt` to view active system prompt.
Updated UI elements to reflect system prompt status.
Previously, individual menu commands in ollama-buddy only included a user prompt. Now, each command can define a system prompt, which provides background context to guide the AI’s responses. This makes interactions more precise and tailored.
Key Features
System prompts per command: Specify background instructions for each AI-powered command using the new :system field.
View active system prompt: Use C-c C-s to display the current system prompt in a dedicated buffer.
Updated UI elements: The status line now indicates whether a system prompt is active.
A helper function has also been added to update the default menu. For example, you might want to tweak a couple of things:
(use-package ollama-buddy
:bind ("C-c o" . ollama-buddy-menu)
:custom
(ollama-buddy-default-model "llama3.2:3b")
:config
(ollama-buddy-update-menu-entry
'refactor-code :model "qwen2.5-coder:7b" :system "You are an expert software engineer who improves code and only mainly using the principles exhibited by Ada")
(ollama-buddy-update-menu-entry
'git-commit :model "qwen2.5-coder:3b" :system "You are a version control expert and mainly using subversion"))
For a very long time I’ve been using swiper-isearch in place of the default isearch because I like the way it lists the results in the minibuffer, lets you scroll through them, and lets you pick the result you want. On the downside, it’s not always clear which target you’re looking at when you have long lines, which I often do because I write with visual-line-mode enabled, but I’ve stuck with it out of habit.
Now Bozhidar Batsov has come along to blow up my comfortable status quo. It turns out that isearch is way more powerful than most of us knew. It’s not a secret. All this “hidden” power is described right there in the docstring. It’s incredible that it’s not better known.
There are a bunch of commands to add or delete characters or words to or from the search string, often from whatever happens to be at point. There are also several toggles such as case sensitivity, whether or not to search for invisible text, regular expression mode, and others.
Finally, you can edit the search string in the minibuffer, scroll through a history of the last 16 search strings, and, as mbork told us the other day, search for the symbol at point. There’s more, so take a look at Batsov’s post or the docstring. As Batsov reminds us, you can see all the options when you’re already in isearch by typing Ctrl+h b.
This is a really useful post and has me thinking that maybe I should return to isearch. Of course, I can have both; the real question is what should be bound to Ctrl+s.
Relative line numbers (relative to the current line) are super popular in the world of Vim,
because there it’s super easy to move n lines up or down with j and k.
In the world of Emacs most of us tend to just go to some line using M-g g with an absolute
line number, or using avy (avy-goto-line).
That being said, relative line numbers are easy to enable in Emacs and quite handy if you’re into
evil-mode:
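The snippet itself didn't survive the feed; with the built-in display-line-numbers (Emacs 26+), enabling relative numbers looks something like this (a minimal sketch, not necessarily the post's exact code):

```emacs-lisp
;; Show line numbers relative to the current line in every buffer.
(setq display-line-numbers-type 'relative)
(global-display-line-numbers-mode 1)
```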
Relative line numbers are useful with the Emacs core commands forward-line
(C-n) and previous-line (C-p) as well. Just trigger them with the universal prefix
C-u and you can move quickly around:
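An illustrative example (mine, not from the original post):

```emacs-lisp
;; Suppose the relative number in the margin shows the target line is 7 above:
;;   C-u 7 C-p   jumps straight to it
;;   C-u 12 C-n  moves 12 lines down
```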
isearch is probably one of the most widely known Emacs commands. Every Emacs user
knows that they can run it using C-s (to search forward) and C-r to search backwards.
Everyone also knows they can keep pressing C-s and C-r to go over the list of matches
in the current buffer. Even at this point that’s a very useful command. But that doesn’t
even scratch the surface of what isearch can do!
After you’ve started isearch you can actually do a lot more than pressing C-s and C-r:
Type DEL to cancel last input item from end of search string.
Type RET to exit, leaving point at location found.
Type LFD (C-j) to match end of line.
Type M-s M-< to go to the first match, M-s M-> to go to the last match. (super handy)
Type C-w to yank next word or character in buffer
onto the end of the search string, and search for it. (very handy)
Type C-M-d to delete character from end of search string.
Type C-M-y to yank char from buffer onto end of search string and search for it.
Type C-M-z to yank from point until the next instance of a
specified character onto end of search string and search for it.
Type M-s C-e to yank rest of line onto end of search string and search for it.
Type C-y to yank the last string of killed text.
Type M-y to replace string just yanked into search prompt
with string killed before it.
Type C-q to quote control character to search for it.
Type C-x 8 RET to add a character to search by Unicode name, with completion.
C-g while searching or when search has failed cancels input back to what has
been found successfully.
C-g when search is successful aborts and moves point to starting point.
You can also toggle some settings while isearch is active:
Type M-s c to toggle search case-sensitivity.
Type M-s i to toggle search in invisible text.
Type M-s r to toggle regular-expression mode.
Type M-s w to toggle word mode.
Type M-s _ to toggle symbol mode.
Type M-s ' to toggle character folding.
Type M-s SPC to toggle whitespace matching.
In incremental searches, a space or spaces normally matches any whitespace
defined by the variable search-whitespace-regexp; see also the variables
isearch-lax-whitespace and isearch-regexp-lax-whitespace.
Type M-s e to edit the search string in the minibuffer. That one is super useful!
Also supported is a search ring of the previous 16 search strings:
Type M-n to search for the next item in the search ring.
Type M-p to search for the previous item in the search ring.
Type C-M-i to complete the search string using the search ring.
Last, but not least - you can directly search for the symbol/thing at point:
Type M-s . to search for the symbol at point. (useful in the context of programming languages)
Type M-s M-. to search for the thing (e.g. word or symbol) at point.
One of the most useful parts of that is the fact that a region is a thing.
So you can mark a region (e.g. with expand-region or mark-*) and M-s M-. to
immediately search for other instances of that text. Powerful stuff!
Tip: You don’t really have to remember all those keybindings - just remember that you can
press C-h b (after you’ve started isearch) to show them.
Most of the above text comes straight from the docstring of isearch. It’s funny that I’ve
been using Emacs for almost 20 years and use isearch numerous times every day, yet I still
often forget about much of its functionality.
There’s more to isearch, though. Did you know it’s widely customizable as well? If you check
its options with M-x customize-group isearch you’ll see there are over 30 (!!!) options there!
Admittedly, I never used any of them, but you’ve got quite a lot of opportunities to tweak the
behavior of isearch if you want to. Here’s an example of a customization some of you might
find useful:
;; When isearching, enable M-<, M->, C-v and M-v to skip between matches
;; in an intuitive fashion. Note that the `cua-selection-mode' bindings
;; for C-v and M-v are not supported.
(setq isearch-allow-motion t
      isearch-motion-changes-direction t)
I hope you learned something useful today! Keep searching (the Emacs docs)!
As you all know, I’m fascinated by non-technical people who use Emacs. You probably also know that I have profound dislike of Google and especially their Docs suite, which brings together everything we hate about Word with surveillance and censorship.
Over at the Emacs subreddit, myprettygaythrowaway asks for advice. Their girlfriend is a non-technical writer who is using Google Docs but would like to leave that behind and move to Emacs. Unfortunately, myprettygaythrowaway is an Emacs n00b, doesn’t feel competent to advise their girlfriend, and therefore asked reddit for advice.
Some of the answers amount to, “Why would you want to leave Google Docs? It’s the perfect environment for your needs.” Happily, there are saner responses. The best, in my opinion, is to use the Emacs Writing Studio, which is made for precisely this type of user and doesn’t require any coding to set things up.
Some commenters suggest using LaTeX to produce nicely finished documents but, of course, that isn’t necessary. With Emacs—or without it, for that matter—you can write in Org mode’s simple markup language and easily export it to LaTeX and from there to PDF. More importantly, for those involved in publishing, you can also export it directly to Docx, which many publishers demand.
Contra the naysayers, Emacs can be the ideal environment for this type of user, as numerous professional writers can attest. Indeed, one of the commenters falls into that category and wonders why he spent so much time using lesser solutions.
If you, too, would like to escape from the Borg or its lesser demons, you can take heart from this post. It is possible and it’s not that hard.
I usually summarize Mastodon links, move them to
my Emacs News Org file, and then categorize them.
Today I accidentally categorized the links while
they were still in my Mastodon buffer, so I had
two lists with categories. I wanted to write some
Emacs Lisp to merge sublists based on the
top-level items. I could sort the list
alphabetically with C-c ^ (org-sort) and then
delete the redundant top-level item lines, but
it's fun to tinker with Emacs Lisp.
Example input:
Topic A:
Item 1
Item 2
Item 2.1
Topic B:
Item 3
Topic A:
Item 4
Item 4.1
Example output:
Topic B:
Item 3
Topic A:
Item 1
Item 2
Item 2.1
Item 4
Item 4.1
The sorting doesn't particularly matter to me, but I want the things under Topic A to be combined. Someday it might be nice to recursively merge other entries (ex: if there's another "Topic A: - Item 2" subitem like "Item 2.2"), but I don't need that yet.
Anyway, we can parse the list with
org-list-to-lisp (which can even delete the
original list) and recreate it with
org-list-to-org, so then it's a matter of
transforming the data structure.
(defun my-org-merge-list-entries-at-point ()
"Merge entries in a nested Org Mode list at point that have the same top-level item text."
(interactive)
(save-excursion
(let* ((list-indentation (save-excursion
(goto-char (caar (org-list-struct)))
(current-indentation)))
(list-struct (org-list-to-lisp t))
(merged-list (my-org-merge-list-entries list-struct)))
(insert (org-ascii--indent-string (org-list-to-org merged-list) list-indentation)
"\n"))))
(defun my-org-merge-list-entries (list-struct)
  "Merge an Org list based on its top-level headings."
  (cons (car list-struct)
        (mapcar
         (lambda (g)
           (list
            (car g)
            (let ((list-type (car (car (cdr (car (cdr g))))))
                  (entries (seq-mapcat #'cdar (mapcar #'cdr (cdr g)))))
              (apply #'append (list list-type) entries nil))))
         (seq-group-by #'car (cdr list-struct)))))
Because org-list-to-org uses the Org conversion
process, I need to make sure that my custom link
functions also export to Org as a format. For
example, in Emacs News, I use a package: link to
make it easy to link to packages in both Emacs and
in exported HTML. When I first ran my code, the
links got replaced with their URLs, which isn't
what I wanted. It turned out that I needed to add a
case that handles exporting to the org format, like
this:
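For illustration, here is a sketch of what such a link definition might look like. The package link type name comes from the post, but the :export body and the HTML URL scheme are assumptions:

```elisp
;; Sketch of a custom "package:" link type whose :export function
;; handles the `org' backend explicitly, so conversion back to Org
;; keeps the link instead of replacing it with a URL. The HTML URL
;; scheme below is an assumption for illustration.
(org-link-set-parameters
 "package"
 :export (lambda (path description backend &optional _info)
           (pcase backend
             ;; The case that needed adding: keep the link when
             ;; exporting to Org.
             (`org (format "[[package:%s]%s]"
                           path
                           (if description (format "[%s]" description) "")))
             (`html (format "<a href=\"https://elpa.gnu.org/packages/%s.html\">%s</a>"
                            path (or description path)))
             (_ (or description path)))))
```

With an `org` case in place, org-list-to-org round-trips the link unchanged instead of expanding it.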
Lars Ingebrigtsen has an interesting post on EWP, his package for publishing blog posts to WordPress. I’m a long-term Org2blog user and really like it but, sadly, my host provider, which I’ve been with for many years, no longer supports the necessary protocol.
I was, therefore, interested in Ingebrigtsen’s package. But as far as I can see, it doesn’t support Org mode, which is a deal breaker for me. I write everything in Org and have no interest in learning another system. Ingebrigtsen says that the problem with most WordPress publishing apps is that they don’t make it easy to do things like add links, embed videos, and handle other such matters.
But, of course, Org makes all that easy. I write my blog posts in normal Org mode, convert them to HTML, and paste them into my WordPress portal. It’s not quite as seamless as Org2blog but it’s not too bad. Still, I’d love a system where I could just write in Org and press a button and post to WordPress like I used to be able to do with Org2blog.
If you’re not an Org devotee and want to use WordPress, take a look at Ingebrigtsen’s post. It may be just what you’re looking for.
The latest version of gptel has some cool features and refinements! See Version v0.9.8 Latest. Watching gptel get developed is a MASTER CLASS in software development! Observe the constant consideration and addition of user-facing features, the refinement of existing code, and the addition of new code and concepts (the request FSM, for example). It is the work of true … (from “Interesting new gptel v0.9.8 features and commits since v0.9.7”)
When I look at popular tools for writing blog posts, they all seem to require an ungodly amount of work to get common things done. They mostly all get the basics right: You can just type away at a text, and then post it. Easy peasy. Sometimes it loses your work, but that’s modern computers for you.
But blogging is more than just typing: You have to link to other posts, you have to include images (both your own and from the web), and you may even have to add some videos.
And all those things are usually annoying chores, involving way too many steps. (Admittedly, I have not experienced professional systems for editing WordPress, and perhaps they’re really good? Or perhaps I’ve missed your favourite tool?)
OK, first the basics. The editing stuff comes later.
You say M-x ewp and you get this basic buffer:
Yes, it just lists your blogs. Hit RET on a line, and you get the posts:
Wow, I’ve really been reading lots of books lately… Anyway, there are commands like M to list media on the blog:
In this buffer you can do common stuff like uploading images and videos, rotate images, rescale videos and all that stuff.
Back to the main view, there’s the C command that lists comments:
Here you can do the usual stuff, like trashing spam comments and responding to comments.
OK, get the drift? The basic admin stuff is like any other WordPress interface, only more Emacsey.
So let’s look at editing. The basic premise is that ewp just edits WordPress posts just as they are: That is, as HTML. There’s no special format (or formatting) used, and this is important: This means that you can edit WordPress posts with any tool you like. So even if you’re mostly composing your blog posts with ewp, you can still edit them using the WordPress mobile app, for instance.
But at the same time, when using ewp, you’re using Emacs with the security that this entails: You’ll get undo and auto-saves locally, so you’re never in danger of losing your precious prose (which is a real problem with some other WordPress interfaces).
Anyway! A very common thing to do is to quote some text from a web page:
The point here is that ewp inserts not only the quoted text as a blockquote, but it also inserts the link to the page the text was on, and puts the cursor at a convenient point for further editing.
Or you may want a shorter quotation:
Again, both the quoted text and the URL in one fell swoop.
Most interfaces have a convenient way to just paste in images you’ve found on a web page, but then they make you go through a pretty tedious process to do extremely common things like cropping and rotating images. In ewp, it’s trivial:
And here’s that cropped and rotated image:
And this should be obvious, but I feel I have to mention it explicitly, because many tools fail here: If you put an image in an ewp edit buffer, it will be uploaded to your blog automatically: Absolutely no manual work involved.
Another common thing to do is to have a link to an image and you want to put that on your blog. However, hot-linking images to a site owned by somebody else is something you shouldn’t do, not only because the images will often disappear, but it’s rude, bandwidth wise. So:
ewp downloads the URL, and then uploads it to your own media library automatically.
From the screencasts included above, you may have guessed that ewp has a number of commands for dealing with video, too. Unfortunately the video support in Emacs (or at least the version of Emacs I’m using; I don’t know from modern versions) is somewhat lacking. ewp does the basics: It allows you to put <video> elements into your post, and it automatically creates and uploads “poster” images that are used by the web browser as placeholder images.
It also (optionally) resizes the videos, and converts (using ffmpeg, of course) the videos to an mp4 format that browsers understand. (This is a pretty common problem — many things that create video files, and even mp4 video files, create files that web browsers can’t play natively.)
But writing blog posts isn’t just all about stealing content from other web sites. I mean… creating meaningful links to other creators in the world. Widely.
Sometimes you have to steal content from movies, too.
ewp can watch directories for incoming images and inserts those images automatically into the post you’re writing. That can be used in a number of ways — when I’m watching movies, this is basically my workflow:
Whenever I see something interesting, I just tap a special key on that little keyboard that I have next to me, and that tells the mpv that’s running on a machine in my bookshelf to take a screenshot, and then that’s rsynced to my laptop there, and then the image appears as if by magic:
And also nicely cropped automatically — that’s actually a somewhat complicated thing to do in a sensible way, because you should have a reasonable number of screenshots to determine what the “real” format of the movie is. There may be very dark shots, and you don’t want those to be suddenly blown up ridiculously… Hm, looking at the image above, it doesn’t look like a perfect trim, actually — looks like a kinda janky transfer to DVD.
Sometimes you just have to take a picture yourself, and Emacs helps with that, too:
I’ve mentioned this before, but I’m using a camera that uploads things automatically to my laptop, and then ewp finds it and:
Since this is Emacs, there is of course a gazillion more commands to deal with editing and publishing blog posts, but I think those are the main things that I’ve been annoyed by and fixed over the years. Editing something that looks like this in Emacs:
Is way more fun than traipsing through some 95 click interface for doing anything.
Blogging can be fun! At least if you find Emacs fun.
I’ve written many times how I was a Vi/Vim user for many, many years. I think that I’m just now coming up to the point that I have more time with Emacs than I did with Vi/Vim. Most of my previous life was spent with Vim, although I did use the original Vi for some time.
Vi didn’t really have an extension language and Vim’s was so horrible that everybody hated it. The only configuration I did was to map function keys to do things like enter the date. It was very barebones.
But time moves on. As far as I can tell, Vim has—mostly—been replaced by Neovim, which has Lua as an extension language. I always assumed that Lua was a good extension language and was, essentially, another flavor of Lisp.
Apparently, that’s not true. Over at the Emacs subreddit, CaptainDrewBoy tells the story of his journey from Neovim to Emacs. That, and its reverse, are a familiar story, of course, but I found it worth writing about because of what CaptainDrewBoy says about Lua and the Neovim extension system in general. The thing that struck me was how hard it was to use Lua. You have to make your edit and bounce Neovim for it to take effect. If there’s a syntax error, rinse and repeat.
Contrast that with the Emacs workflow. You write your code and evaluate it. If it’s correct, it takes effect immediately. If there’s an error, Emacs informs you and you simply correct it and reevaluate the code. No need to restart your editor. It’s easy and painless.
CaptainDrewBoy says he’d even be willing to give up things like Magit and Org just for the ease of configuration. Happily, that’s not necessary. With Emacs, at least, you can have it all1. As usual, the comments are interesting too. Take a look if you’re interested in why someone might want to move from Neovim—a perfectly fine editor—to Emacs.
Well, now I am the “Quarto guy” in my workplace, thanks to the decision made by the previous Quarto guy, who has left. And therefore, I will need to deliver workshops to my colleagues on how to use Quarto. For the workshop, I thought it would be nice not to teach it like a regular course, in which one creates a document from scratch. Instead, it would be better to start from a document in Microsoft Word and then try to convert it to Quarto. And in order to do that, one has to use pandoc. I knew that Quarto uses pandoc somewhere, but I thought one had to install it separately as a dependency. Only after studying the CLI some more did I learn that there is the quarto pandoc sub-command to expose the built-in pandoc. And in fact, there is also quarto typst, meaning typst is also a built-in component. The weirdest is perhaps quarto run, which exposes deno for running TypeScript. Knowing these makes the workshop run smoother, as my colleagues can have a suite of useful tools just by installing Quarto.
Another consequence of becoming the “Quarto guy” is that I have to find a good way to do collaborative writing. This is like the constant theme of this blog; see this and this. The newest experiment is to use Overleaf by exploiting the markdown CTAN package. With this method, citations work, and image and table placement works. Code execution is still not working. The project can also be cloned offline and edited with Emacs.
Let’s also talk about Emacs and ESS. Earlier this year I updated Emacs to 30.1. And then, strangely, opening R files would produce a larger left margin. After some investigation, I found that the local variable left-margin-width became 2 for all R buffers. On the official mailing list, I found a person (the original poster) who had the same problem, but there was no answer. Then, I sent a reply. Finally, the original poster told me that the reason for this is that the ESS flymake integration might have a problem. The solution is to disable the ESS flymake integration: (setq ess-use-flymake nil). I don’t use flymake anymore, so it doesn’t matter. But yeah, it fixed the issue.
Let’s also talk about mailing lists. In the R community, mailing lists have a bad name. People would prefer, I don’t know, Bluesky now? Or smaller circles, such as, I don’t know, Discord? Slack? Just talking about the technology, I think mailing lists are the most open because one only needs an e-mail address to join. I think the people who are still active on R mailing lists now are not hostile but instead quite nice. The signal/noise ratio is extremely high. I don’t need to see other social media content, for example.
The last thing I would like to talk about is CRAN. I read a blog post by Adafruit, a hardware company. The tldr of that is: Adafruit maintains several Arduino (single-board microcontroller) open source software packages. In 2024, a new company forked their code and added that code to the Arduino Library Registry. The worst part: that new company removed the original open source license, changed the copyright holder to themselves, etc.
I did fork some code myself too. minty is a fork of some code in readr. Although it is fine to fork open source code, I believe in two things: 1) one still has to respect the original license, and 2) one has to communicate with the original authors to show that forking is done in good faith.
I can’t help but think about how the Arduino Library Registry and CRAN work. Again, CRAN has a bad name. I criticize it sometimes, e.g. this. It is not difficult to find harsher critiques of CRAN regularly on the blog aggregator rweekly. A few weeks back there was a highlight article entitled Is CRAN Holding R back? Well, everyone is entitled to an opinion and I don’t want to comment on whether CRAN is holding R back. But there is a comparison between the semi-automatic and test-free model of package submission (e.g. PyPI) and the manual and constant-testing model of CRAN, while the merits of the CRAN model are not discussed. In my opinion, it is not entirely a technological question, but a political one. Diplomatically speaking, all models have trade-offs. While it is true that it is difficult to get a package onto CRAN, and that keeping a package on CRAN means more work, you at least know that the package is tested on CRAN’s fleet of test servers and should be installable on the three major OSes. You can also expect the maintainer’s e-mail address to still be reachable by CRAN, and most likely by you as well. Supply-chain attacks should, in principle, be more difficult. This model is perhaps “outdated” because only some very conservative open source repositories still use it, e.g. Debian. PyPI is in fact similar, but without the constant testing part. Software repositories of newer languages just ask for a GitHub address. The Arduino Library Registry is an extreme version of this school of thinking, where the entire thing is just a text file1.
I don’t have a moral of the story for this one. But I believe I sleep slightly better, when CRAN doesn’t operate like Arduino Library Registry.
Finally, I am moving my private projects to Codeberg. You should too.
Julia asks for the SHA of a Git release, at least. ↩
A few days ago I wrote about my need for something better than the Apple Notes app for my memo book. I’ve been using Notes for 10 years and it works well but I’d like something that takes care of adding time stamps for me and is a little more seamless.
I wrote that I’d discovered Journelly and that it seemed like it might be a good fit for my needs but that it was still pre-beta software. Now Ramírez has announced that he has an official beta program running that you can sign up for. If it seems like something that would be useful for you, you should sign up for the beta and give it a try.
Ramírez says that the app doesn’t (yet) support entering items orally but notes that you can use the system dictation facility for this. My reaction when I saw that was, “Duh. Of course.” I’m pretty sure that’s what’s happening in Notes. I’ve never programmed in the iOS environment so I’m pretty much a luser where it’s concerned and don’t have a good mental picture of what’s happening under the covers.
I’m not ready to commit to adopting Journelly for my memo book because it’s an important part of my workflow but I do think I’ll sign up for the beta and at least play around with it. If it is as good as it looks, I’ll probably use it seriously as soon as there is an official release.
Pedro pointed out that I had some incomplete clock
entries in my Emacs configuration.
org-resolve-clocks prompts you for what to do
with each open clock entry in your Org agenda
files and whatever Org Mode files you have open.
If you don't feel like cancelling each clock with
C, I also wrote this function to delete all open
clocks in the current file.
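A minimal sketch of the idea, assuming open clocks are CLOCK lines that have a start timestamp but no end timestamp (the name and regexp here are my guesses, not necessarily the original code):

```elisp
;; Sketch: delete every open (unclosed) clock line in the current
;; buffer. Open clocks have a start timestamp but no "--[end]" part,
;; so the regexp anchors right after the closing bracket.
(require 'org)

(defun my-org-delete-open-clocks ()
  "Delete all open clock entries in the current file."
  (interactive)
  (save-excursion
    (goto-char (point-min))
    (while (re-search-forward
            (concat "^[ \t]*" org-clock-string " \\[[^]]+\\][ \t]*$")
            nil t)
      (delete-region (line-beginning-position)
                     (min (point-max) (1+ (line-end-position)))))))
```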
The main addition is that of system prompts, which allow setting the general tone and guidance of the overall chat. Currently the system prompt can be set at any time and turned on and off, but I think that, to enhance my model/command-per-menu-item concept, I could also add a :system property to the menu alist definition to allow even tighter control over each menu action’s prompt response.
Also, now that I have parameter functionality working for fine-grained control, I could add individual parameters for each menu command; for example, the temperature could be very useful here to play around with the randomness/casualness of the response.
The next improvement will likely involve adding support for interacting more directly with Ollama to create and pull models. However, I’m still unsure whether performing this within Emacs is the best approach; I could instead assume that all models are already set up in Ollama.
That said, importing a GGUF file might be a useful feature, possibly from within dired. Currently, this process requires multiple steps: creating a simple model file that points to the GGUF file on disk, then running the ollama create command to import it. Streamlining this workflow could enhance usability.
Then maybe on to embeddings, of which I currently have no idea, haven’t read up on it, nuffin, but that is something to look forward to! :)
Anyways, here is the latest set of updates to Ollama Buddy:
<2025-03-14 Fri> 0.8.0
Added system prompt support
Added ollama-buddy--current-system-prompt variable to track system prompts
Updated prompt area rendering to distinguish system prompts
Modified request payload to include system prompt when set
Enhanced status bar to display system prompt indicator
Improved help menu with system prompt keybindings
So this is system prompt support in Ollama Buddy! It allows you to set and manage system-level instructions for your AI interactions. This feature enables you to define a persistent system prompt that remains active across user queries, providing better control over conversation context.
Key Features
You can now designate any user prompt as a system prompt, ensuring that the AI considers it as a guiding instruction for future interactions. To set the system prompt, use:
C-u C-c C-c
Example:
Type:
Always respond in a formal tone.
Press C-u C-c C-c. This prompt is now set as the system prompt, and any further Ollama chat responses will adhere to the overarching guidelines defined in the prompt.
If you need to clear the system prompt and revert to normal interactions, use:
C-u C-u C-c C-c
How It Works
The active system prompt is stored and sent with each user prompt.
A “S” indicator appears in the status bar when a system prompt is active.
The request payload now includes the system role, allowing AI to recognize persistent instructions.
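As a rough sketch (the exact code in ollama-buddy may differ, and the model name is a placeholder), a payload with an active system prompt could be assembled like this, following the Ollama chat API's role-based message format:

```elisp
;; Rough sketch of assembling the chat request when a system prompt is
;; active. The field names follow the Ollama /api/chat message format;
;; the actual implementation in ollama-buddy may differ.
(require 'json)

(defun my/ollama-chat-payload (user-prompt &optional system-prompt)
  "Build a JSON payload for USER-PROMPT, prepending SYSTEM-PROMPT if set."
  (json-encode
   `((model . "llama3")  ; placeholder model name
     (messages . ,(vconcat
                   (when system-prompt
                     `(((role . "system") (content . ,system-prompt))))
                   `(((role . "user") (content . ,user-prompt))))))))
```

When no system prompt is set, the `messages` array simply contains the user message alone.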
Demo
Set the system message to:
You must always respond in a single sentence.
Now ask the following:
Tell me why Emacs is so great!
Tell me about black holes
Clear the system message and ask again; the responses should now be more verbose!
<2025-03-13 Thu> 0.7.4
Added model info command, update keybindings
Added `ollama-buddy-show-raw-model-info` to fetch and display raw JSON details
of the current model in the chat buffer.
Updated keybindings:
`C-c i` now triggers model info display.
`C-c h` mapped to help assistant.
Improved shortcut descriptions in quick tips section.
Removed unused help assistant entry from menu.
Changed minibuffer-prompt key from `?i` to `?b`.
<2025-03-12 Wed> 0.7.3
Added function to associate models with menu commands
Added ollama-buddy-add-model-to-menu-entry autoload function
Enabled dynamic modification of command-model associations
This is a helper function that allows you to associate specific models with individual menu commands.
Configuration to apply a model to a menu entry is now straightforward; in your Emacs init file, add something like:
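Something along these lines, assuming ollama-buddy-add-model-to-menu-entry takes a command symbol and a model name (the command symbols, argument order, and model tag here are illustrative guesses):

```elisp
;; Illustrative guesses: the command symbols (dictionary-lookup,
;; synonym) and the model tag depend on your own menu definition;
;; check the ollama-buddy README for the real entries.
(with-eval-after-load 'ollama-buddy
  (ollama-buddy-add-model-to-menu-entry 'dictionary-lookup "tinyllama:latest")
  (ollama-buddy-add-model-to-menu-entry 'synonym "tinyllama:latest"))
```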
This configures simpler tasks like dictionary lookups and synonym searches to use the more efficient TinyLlama model, while your default model will still be used for more complex operations.
<2025-03-12 Wed> 0.7.2
Added menu model colours back in and removed some redundant code
I wasn’t an early adopter of TreeSitter in Emacs, as usually such
big transitions are not smooth and the initial support for TreeSitter in
Emacs left much to be desired. Recently, however, Emacs 30 was released with many
improvements on that front, and I felt the time was right for me to (try to) embrace
TreeSitter.
I’m the type of person who likes to learn by deliberate practice, that’s why I
wanted to do some work on TreeSitter-powered major modes. I’ve already been a
co-maintainer of
clojure-ts-mode for a while
now, and I picked up the basics around it, but I didn’t spend much time hacking
on it until recently. After spending a bit more time studying the current
implementation of clojure-ts-mode and the various Emacs TreeSitter APIs, I decided to
start a new experimental project from scratch -
neocaml, a TreeSitter-powered package for
OCaml development.1
Why did I start a new OCaml package, when there are already a few existing out
there? Because caml-mode is ancient (and probably has to be deprecated), and
tuareg-mode is a beast (it’s very powerful, but also very complex). The time
seems ripe for a modern, leaner, TreeSitter-powered mode for OCaml.
There have been two other attempts to create TreeSitter-powered
major modes for Emacs, but they didn’t get very far:
Looking at the code of both modes, I inferred that the authors were probably knowledgeable in
OCaml, but not very familiar with Emacs Lisp and Emacs major modes in general.
For me it’s the other way around, and that’s what makes this a fun and interesting project for me:
I enjoy working on Emacs packages
As noted above, I want to do more work with TreeSitter
I really like OCaml and it’s one of my favorite “hobby” languages
One last thing - we really need more Emacs packages with fun names! :D Naming is hard, and I’m
notoriously “bad” at it!2
They say that third time’s the charm, and I hope that neocaml will get farther than
the other ocaml-ts-modes. Time will tell!
I’ve documented the code extensively inline, and in the README you’ll find my development notes detailing
some of my decisions, items that need further work and research, etc. If nothing else - I think
anyone can learn a bit about how TreeSitter works in Emacs and what common challenges
one might face when working with it. To summarize my experience so far:
font-locking (syntax highlighting) with TreeSitter is fairly easy
structured navigation seems reasonably straight-forward as well
the indentation queries are a bit more complicated, but they are definitely not black magic
Fundamentally, the main problem is that we still don’t have
easy ways to try out TreeSitter queries in Emacs, so there’s a lot of trial and error involved. (especially when it
comes to indentation logic.) My other big problem is that most TreeSitter grammars
have pretty much no documentation, so one has to learn about their AST format
via experimentation (e.g. treesit-explore-mode and treesit-inspect-mode) and by
reading their bundled queries for font-locking and indentation. As someone who’s
used to working with Ruby parsers, I really miss the docs and tools that come with
something like the Ruby parser library.
What’s the state of the project? Well, neocaml kind of works right now,
but the indentation logic needs a lot of polish, and I’ve yet to implement
properly structured navigation and some of the newer Emacs TreeSitter APIs
(e.g. things). We can always “cheat” a bit with the indentation, by
delegating it to another Emacs package like ocp-indent.el or even Tuareg, but
I’m hoping to come up with a self-contained TreeSitter implementation at the end
of the day.
If you’re feeling adventurous you can easily install the package like this:
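For example, on Emacs 29 or later (the repository URL here is an assumption based on the project name; check the README for the canonical instructions):

```elisp
;; Install neocaml straight from its Git repository (Emacs 29+).
;; The URL is an assumption based on the project name.
(package-vc-install "https://github.com/bbatsov/neocaml")
(require 'neocaml)
```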
Note: neocaml will auto-install the required TreeSitter grammars the
first time one of the provided major modes is activated.
Please refer to the README for usage information, like the various configuration
options and interactive commands.
I’m not sure how much time I’ll be able to spend working on neocaml and how far
I’ll be able to push it. Perhaps it will never amount to anything, perhaps it
will just be a research platform to bring TreeSitter support to Tuareg. And
perhaps it will become a viable simple, yet modern solution for OCaml
programming in Emacs. The dream is alive!
Contributions, suggestions and feedback are most welcome. Keep hacking!
I didn’t name it neocaml-mode intentionally - many Emacs packages contain more things
than just major modes, so I prefer a more generic naming. ↩
On a more serious note - there was never an ocaml-mode, so naming something ocaml-ts-mode is not
strictly needed. But I think an actual ocaml-mode should be blessed by the maintainers of OCaml,
hosted in the primary GitHub org, and endorsed as a recommended way to program in OCaml with Emacs.
Pretty tall order! ↩
In February I announced the plan to reorganise the Denote project into
“core” and “extensions”: https://protesilaos.com/codelog/2025-02-11-emacs-splitting-denote-many-packages/.
In essence, Denote is a file-naming scheme: you create new files
and/or rename existing ones (of any file type). Having that naming
scheme empowers you to retrieve stuff more easily without the need for
advanced tooling.
Though it turns out we can do a lot of useful things on top of this
simple-yet-powerful idea: custom Dired listings, Org dynamic blocks,
sequence notes, journaling, and many more. Having a clear separation
between core and extensions makes it easier for us to implement all
the features we want without worrying that the main package is
becoming bloated.
Concretely, much of the functionality that was part of the denote
package will now be provided by other packages. To this end, I just
made a change to the official elpa.git repository:
What remains to be done is for me to merge the reorganise-denote
branch of denote.git into main. I will do this as soon as the new
packages are indexed on GNU ELPA (maybe later today or tomorrow).
Users of the GNU ELPA package will not be affected immediately. Those
who build from source will, however, have to take action.
Expect breaking changes
Users will experience breaking changes when they update to the new
denote package: functionality will be missing. In principle, all
those should be easy to fix: install the appropriate new package and
rename the functions/variables in your configuration accordingly. In
detail:
The file denote-journal-extras.el becomes the package
denote-journal. In your configuration replace
denote-journal-extras with denote-journal.
The file denote-org-extras.el (Org dynamic blocks and related)
becomes the package denote-org. In your configuration replace
denote-org-extras with denote-org.
The file denote-silo-extras.el becomes the package denote-silo.
In your configuration replace denote-silo-extras with
denote-silo.
Additionally, the files that were in development but never made it as
part of a formal release, namely, denote-md-extras.el and
denote-sequence.el, will debut as their own packages:
denote-markdown and denote-sequence, respectively.
You are welcome to ask me any questions
I am reluctant to introduce breaking changes, though this has to be
done for the long-term wellness of the project. If you have any
questions, please contact me directly or open an issue on the GitHub
repository. I am happy to help you.
A concerted release cadence
All new packages are marked as version 0.0.0, which means that only
those who track GNU-devel ELPA (or build from source) will notice
them. The formal release will coincide with the new version of
denote (4.0.0), which I expect to publish some time in April or
May 2025.
Back in the 90’s, I did not hold in high regard the Make command with its cryptic syntax, insistence on making a tab space semantically significant, and platform/vendor-specific idiosyncrasies. Surely, I thought then, Make will get replaced by a better tool any day now.
Today it is March 13, 2025. Make is still here and isn't going anywhere. With every fiber of my being I know that Make will outlive all of us. It is now too deeply rooted in our computing infrastructure to go away.
Somewhere in the 2010’s I accepted the above observation and to paraphrase Kubrick, just stopped worrying and learned to love Make.
That said, for all of Make’s ubiquity, there aren't that many tools around to help you edit makefiles.
Emacs supports makefile editing with make-mode which has a mix of useful and half-baked (though thankfully obsoleted in 30.1) commands. It is from this substrate that I'm happy to announce the next Casual user interface: Casual Make.
Of particular note to Casual Make is its attention to authoring and identifying automatic variables, whose arcane syntax is un-memorizable. Want to know what $< means? Just select it in the makefile and use the . binding in the Casual Make menu to identify what it does in the minibuffer.
Casual Make is now available with the latest update of Casual v2.4.0 on MELPA. This is a big update that also includes Info documentation for Casual for the first time.
Closing Thoughts
If your vocation involves programming computers, learn Make. The rewards will pay off handsomely as you will find utility in using Make for tasks both big and small. This is especially so for Emacs users who can take advantage of the compile command which by default invokes Make. The combination of Emacs compilation mode, make mode, command completion & history makes for a compelling IDE for task running.
If you appreciate the Casual project, please support its development and maintenance by buying me a coffee.
The basic functionality, I think, is now there (and now literally zero configuration is required). If a default model isn’t set, I just pick the first one, so LLM chat can take place immediately.
Now that I’m getting more into this chat client malarkey, my original idea of a very minimal chat client to interface with ollama is starting to skew into supporting as much of the ollama RESTful API as possible. Hence, in this update a more advanced approach is creeping in, including setting up various subtle model parameters and providing a debugging window to monitor incoming raw JSON (pretty-printed, of course). Hopefully, these features will remain tucked away for advanced users; I’ve done my best to keep them unobtrusive (but not too hidden). The tool is still designed to be a helpful companion for interfacing with ollama through Emacs, just now with more powerful options under the hood.
Also a note about converting the chat buffer into org-mode. My original intention was to keep the chat buffer as a very simple almost “no mode” buffer, with just text and nothing else. However, with more consideration, I felt that converting this buffer into org-mode actually held quite a few benefits:
Each prompt could be a heading, hence outlining and folding can be activated!
Navigation between prompts now comes for free (especially if you are using org-use-speed-commands)
The org export (ox) backends now allow us to export to many different formats
I’m sure there are more as this list isn’t quite the “quite a few benefits” I was hoping for :(
I have a local keymap defined with some ollama-buddy specific keybindings, and as of yet I haven’t encountered any conflicts with commonly used org-mode bindings but we shall see how it goes. I think for this package it is important to have a quick chatting mechanism, and what is faster than a good keybind?
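Layering package bindings over org-mode can be sketched like this; the map and command names below are hypothetical stand-ins, not the actual ollama-buddy implementation:

```elisp
(require 'org)

;; A keymap whose parent is `org-mode-map', so org bindings still work
;; and only the package's own keys are added on top.
(defvar my-chat-mode-map
  (let ((map (make-sparse-keymap)))
    (set-keymap-parent map org-mode-map)
    (define-key map (kbd "C-c C-c") #'my-chat-send-prompt) ; hypothetical command
    map)
  "Keymap for a chat buffer derived from org-mode.")
```

Because key lookup consults the child map first, a clash with an org binding would silently shadow it, which is why watching for conflicts matters.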
Finally, just a note on the pain of implementing a good prompt mechanism. I had a few goes at it, and I think I now have an acceptably robust solution. I kept running into annoying little edge cases and ended up having to refactor quite a bit. My original idea for this package involved a simple “mark region and send”, as at the time I had a feeling that implementing a good prompt mechanism would be tough - how right I was! Things got even trickier with the move to org-mode, since each prompt heading should contain meaningful content for clean exports, and I had to implement a mechanism to replace prompts intelligently. For example, if the model is swapped and the previous prompt is blank, it gets replaced, though of course even this has its own edge cases - which gives a new meaning to prompt engineering! :)
Anyways, listed below are my latest changes, with a slightly deeper dive into the more “interesting” implementations. My next ideas are a little more advanced and are kanban’d into my GitHub README at https://github.com/captainflasmr/ollama-buddy for those who are interested.
<2025-03-11 Tue> 0.7.1
Added debug mode to display raw JSON messages in a debug buffer
Created new debug buffer to show raw JSON messages from Ollama API
Added toggle function to enable/disable debug mode (ollama-buddy-toggle-debug-mode)
Modified stream filter to log and pretty-print incoming JSON messages
Added keybinding C-c D to toggle debug mode
Updated documentation in welcome message
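The pretty-printing step can be sketched in a few lines of Emacs Lisp; the function and buffer names here are hypothetical, and the real implementation hooks into the stream filter:

```elisp
(require 'json)

(defun my-debug-log-json (json-string)
  "Append JSON-STRING to a debug buffer, pretty-printed."
  (with-current-buffer (get-buffer-create "*my-debug*")
    (goto-char (point-max))
    (let ((start (point)))
      (insert json-string "\n")
      ;; `json-pretty-print' reflows the region in place.
      (json-pretty-print start (point)))))
```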
<2025-03-11 Tue> 0.7.0
Added comprehensive Ollama parameter management
Added customization for all Ollama option API parameters with defaults
Only send modified parameters to preserve Ollama defaults
Display active parameters with visual indicators for modified values
Add keybindings and help system for parameter management
Remove redundant temperature controls in favor of unified parameters
Introduced parameter management capabilities that give you complete control over your Ollama model’s behavior through the options field of the Ollama API.
Ollama’s API supports a rich set of parameters for fine-tuning text generation, from controlling creativity with temperature to managing token selection with top_p and top_k. Until now, Ollama Buddy only exposed the temperature parameter, but this update unlocks the full potential of Ollama’s parameter system!
Key Features:
All Parameters: set all custom options for the Ollama LLM at runtime
Smart Parameter Management: Only modified parameters are sent to Ollama, preserving the model’s built-in defaults for optimal performance
Visual Parameter Interface: Clear display showing which parameters are active with highlighting for modified values
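The “only send modified parameters” idea can be sketched as a diff against the documented defaults; the variable and function names below are hypothetical, and only the surviving pairs would go into the request’s options object:

```elisp
(require 'seq)

;; Ollama's documented defaults (a small illustrative subset).
(defvar my-param-defaults '((temperature . 0.8) (top_p . 0.9) (top_k . 40)))

;; What the user has configured.
(defvar my-param-settings '((temperature . 0.3) (top_p . 0.9)))

(defun my-modified-params ()
  "Return only the parameters that differ from the defaults."
  (seq-filter (lambda (pair)
                (not (equal (cdr pair)
                            (alist-get (car pair) my-param-defaults))))
              my-param-settings))

;; (my-modified-params) keeps temperature (0.3 vs 0.8) and drops top_p,
;; so the model's built-in default for top_p is left untouched.
```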
Keyboard Shortcuts
Parameter management is accessible through simple keyboard shortcuts from the chat buffer:
C-c P - Edit a parameter
C-c G - Display current parameters
C-c I - Show parameter help
C-c K - Reset parameters to defaults
<2025-03-10 Mon> 0.6.1
Refactored prompt handling so each org header line now always has a prompt, for better export
Added functionality to properly handle prompt text when showing/replacing prompts
Extracted inline lambdas in menu actions into named functions
Added fallback for when no default model is set
<2025-03-08 Sat> 0.6.0
Chat buffer now in org-mode
Enabled org-mode in chat buffer for better text structure
Implemented ollama-buddy--md-to-org-convert-region for Markdown to Org conversion
Turn org conversion on and off
Updated keybindings C-c C-o to toggle Markdown to Org conversion
Key Features
The chat buffer is now in org-mode, which gives it enhanced readability and structure. Conversations now automatically format user prompts and AI responses with org-mode headings, making them easier to navigate.
Of course, with org-mode you now get additional benefits for free, such as:
outlining
org export
heading navigation
source code fontification
Previously, responses in Ollama Buddy were displayed with Markdown formatting, which wasn’t always ideal for org-mode users. Now, Markdown elements such as bold/italic text, code blocks, and lists are automatically converted into proper org-mode formatting. This gives you the flexibility to work with Markdown or org-mode as needed.
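A regexp-replacement pass over the response region is one plausible shape for such a conversion; this is a hypothetical sketch covering two constructs, not the actual ollama-buddy--md-to-org-convert-region:

```elisp
(defun my-md-to-org-region (beg end)
  "Convert a couple of common Markdown constructs to Org between BEG and END."
  (save-excursion
    (save-restriction
      (narrow-to-region beg end)
      ;; **bold** -> *bold*
      (goto-char (point-min))
      (while (re-search-forward "\\*\\*\\([^*]+\\)\\*\\*" nil t)
        (replace-match "*\\1*" t))
      ;; `inline code` -> =inline code=
      (goto-char (point-min))
      (while (re-search-forward "`\\([^`\n]+\\)`" nil t)
        (replace-match "=\\1=" t)))))
```

Code blocks, lists, and headings each need their own pass, which hints at why the real conversion function is more involved.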
There was another meeting a couple of weeks ago of EmacsATX, the Austin Emacs Meetup group. For this month we had no predetermined topic. However, as always, there were mentions of many modes, packages, technologies and websites, some of which I had never heard of before, and some of this may be of interest to ...
Please note that planet.emacslife.com aggregates blogs, and blog authors might mention or link to nonfree things. To add a feed to this page, please e-mail the RSS or ATOM feed URL to sacha@sachachua.com . Thank you!