Irreal: The Power of Vanilla Emacs

There’s an interesting blog post by bendersteed that discusses the power of vanilla Emacs. Emacs, he says, has a lot of useful packages, but we shouldn’t underestimate the power of built-in commands that have nearly the same function. He notes, for example, that Magnar Sveen’s expand-region is a wonderful package, but Emacs offers a set of commands (M-@, M-h, C-M-@, C-M-h, and C-x h) that do pretty much the same thing. Likewise, abbrev can perform many of the functions that yasnippet is often used for.
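As a quick illustration of the abbrev point, here is a minimal sketch (the abbreviation and its expansion are made up, not from bendersteed’s post) covering the plain-expansion case yasnippet is often used for:

```elisp
;; Hypothetical example: typing "zteh" followed by a space or
;; punctuation expands to "the", standing in for a simple snippet.
(define-abbrev-table 'global-abbrev-table
  '(("zteh" "the")))
;; Turn abbrev-mode on in all buffers.
(setq-default abbrev-mode t)
```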

You may or may not be convinced to give up expand-region and yasnippet, but bendersteed’s post is useful for two reasons. First, it serves as a reminder of that built-in functionality, which, in some circumstances, may be more appropriate than invoking a package. Second, he’s got a nice section at the end of the post that lists some of the built-in functionality he’s discovered. Two and a half years ago, I wrote about eww-search-words, which searches the Web for the highlighted region. I’d pretty much forgotten about it, and that’s a shame because it’s really useful for checking a word that doesn’t appear in whatever local dictionary you’re using. It was nice to be reminded of it. That alone makes the post worth reading.

-1:-- The Power of Vanilla Emacs (Post jcs)--L0--C0--April 22, 2019 06:45 PM

sachachua: 2019-04-22 Emacs news

Links from /r/orgmode, /r/spacemacs, /r/planetemacs, Hacker News, YouTube, the changes to the Emacs NEWS file, and emacs-devel.

-1:-- 2019-04-22 Emacs news (Post Sacha Chua)--L0--C0--April 22, 2019 03:40 PM

Irreal: The Dream Draws Nearer

If you’re a hardcore Emacser or Lisper, you’ve probably always drooled over the famous space-cadet keyboard. Just think: you’d have keys for all four bucky bits, not to mention a multitude of other keys with no apparent use if you aren’t on a Lisp machine. From time to time some manufacturer will offer keycaps from the space-cadet, but that’s always struck me as sort of pointless.

Now, Kono is considering making a modern version of the space-cadet keyboard. There are no details on the site, so it’s not clear how such a thing would work or what computers it would work with, but it’s nice to see someone’s thinking about it. Because I really want those Hyper and Super keys.[1]



[1] Yes, I can—and do—overload the fn and ⌘ Cmd keys on my Mac to provide the same functions, but it’s not the same thing.

-1:-- The Dream Draws Nearer (Post jcs)--L0--C0--April 21, 2019 04:25 PM
Some newer versions of watch now support color. For example watch --color ls -ahl --color.
-1:--  (Post sness ( 19, 2019 01:44 PM

sachachua: 2019-04-15 Emacs news

Update: Added Paris meetup, Apr 18

Links from /r/orgmode, /r/spacemacs, /r/planetemacs, Hacker News, YouTube, the changes to the Emacs NEWS file, and emacs-devel.

-1:-- 2019-04-15 Emacs news (Post Sacha Chua)--L0--C0--April 17, 2019 12:39 AM

Marcin Borkowski: How to make a menu in Emacs

As we all know, Emacs is so much more than just a text editor. There are quite a few serious applications written on top of it, like Org-mode or mu4e. And many applications – including those two – contain menus (the mu4e main menu or Org-mode’s exporting menu).
-1:-- How to make a menu in Emacs (Post)--L0--C0--April 15, 2019 08:39 PM

Timo Geusch: Wrapping up the Emacs on Mac OS X saga

In a previous post I mentioned that I upgraded my homebrew install of Emacs after Emacs 26.2 was released, and noticed that I had lost its GUI functionality. That’s a pretty serious restriction for me as I usually end up Read More

The post Wrapping up the Emacs on Mac OS X saga appeared first on The Lone C++ Coder's Blog.

-1:-- Wrapping up the Emacs on Mac OS X saga (Post Timo Geusch)--L0--C0--April 14, 2019 02:15 PM

Timo Geusch: Emacs 26.2 on WSL with working X-Windows UI

I’ve blogged about building Emacs 26 on WSL before. The text mode version of my WSL build always worked for me out of the box, but the last time I tried running an X-Windows version, I ran into rendering issues.  Read More

The post Emacs 26.2 on WSL with working X-Windows UI appeared first on The Lone C++ Coder's Blog.

-1:-- Emacs 26.2 on WSL with working X-Windows UI (Post Timo Geusch)--L0--C0--April 13, 2019 03:00 AM

Raimon Grau: emacs 26.2 as a birthday present

Emacs 26.2 was released on April 12th, matching my 36th birthday.

Apart from this coincidence, it's the first Emacs release that contains any code of mine, which makes me extremely happy. I only contributed 2 tiny bugfixes, but nonetheless I'm very happy about it :D

-1:-- emacs 26.2 as a birthday present (Post Raimon Grau ( 12, 2019 06:02 PM

Ben Simon: The Embarrassingly Simple Source for An Up To Date Windows Version of emacs

I recently replaced my no-name mini PC with a might-as-well-be-no-name Kingdel NC860 mini PC. These fanless desktop computers have a great form factor, dual monitor support, plenty of USB ports and a bare-bones feel that I love. Credit goes to Coding Horror for inspiring my first purchase of this type of device.

I've recently switched from Firefox to Chrome as my primary browser of choice, and 1password as my password manager. The result: installing Chrome and logging in using both my Work and Personal e-mail meant that my web-based life was essentially setup. Installing Cygwin, Gimp and AutoHotKey meant that I had a nearly complete dev environment. All that was left to do was to install emacs.

At this point, I usually Google around to find the latest version of Windows friendly emacs, often ending up on this sourceforge site. On a whim, however, I thought I'd try something different: I installed emacs via cygwin.

My expectation was that I'd get a console only emacs. And my assumption was totally wrong. I ended up with the same Windows friendly emacs I'm used to, except a whole slew of issues had been resolved. I'm used to emacs operating in terms of Windows drive paths, while cygwin works in terms of a unix'y path mapping. By using a cygwin based emacs, the two environments are now in sync.

A number of issues with eshell were magically fixed, too. #! detection and signal handling (hitting Control-c) in eshell wasn't reliable in my old Windows emacs setup, whereas it's working well under cygwin based emacs.

Finally, the cygwin version of emacs is as up to date as the GNU site offers: version 26.1.

Why didn't I try this years ago?

It blows my mind that I can go from new PC to working dev environment in 15 minutes and zero dollars spent on software.

-1:-- The Embarrassingly Simple Source for An Up To Date Windows Version of emacs (Post Ben Simon ( 12, 2019 03:58 PM

(or emacs: Change the current time in Org-mode


I'm constantly amazed by other people's Org workflows. Now that the weekly tips are a thing, I see more and more cool Org configs, and I'm inspired to get more organized myself.

My own Org usage is simplistic in some areas, and quite advanced in others. While I wrote a lot of code to manipulate Org files (worf, org-download, orca, org-fu, counsel), the number of Org files and TODO items that I have isn't huge:

(counsel-git "org$ !log")
;; 174 items

(counsel-rg "\\* DONE|CANCELLED|TODO")
;; 8103 items

Still, that's enough to get out-of-date files: just today I dug up a file with 20 outstanding TODO items that should have been canceled last November!

How to close 20 TODOs using a timestamp in the past

When I cancel an item, pressing tc (mnemonic for TODO-Cancel), Org mode inserts a time stamp with the current time. However, for this file, I wanted to use October 31st 2018 instead of the current time. Org mode already has options like org-use-last-clock-out-time-as-effective-time, org-use-effective-time, and org-extend-today-until that manipulate the current time for timestamps, but they didn't fit my use case.

So I've advised org-current-effective-time:

(defvar-local worf--current-effective-time nil)

(defun worf--current-effective-time (orig-fn)
  (or worf--current-effective-time
      (funcall orig-fn)))

(advice-add 'org-current-effective-time
            :around #'worf--current-effective-time)

(defun worf-change-time ()
  "Set `current-time' in the current buffer for `org-todo'.
Use `keyboard-quit' to unset it."
  (setq worf--current-effective-time
        (condition-case nil
            (org-read-date t 'totime)
          (quit nil))))

A few things of note here:

  • worf--current-effective-time is buffer-local, so that it modifies time only for the current buffer
  • I re-use the awesome org-read-date for a nice visual feedback when inputting the new time
  • Instead of having a separate function to undo the current-time override, I capture the quit signal that C-g sends.


The above code is already part of worf and is bound to cT. I even added it to the manual. I hope you find it useful. Happy organizing!

-1:-- Change the current time in Org-mode (Post)--L0--C0--April 10, 2019 10:00 PM

Manuel Uberti: Digital minimalism

I reached a point in my life where it’s necessary to rethink my digital habits. It happened before, of course, but in some way or another I’ve never done anything concrete to actively tackle my tech-addictions.

In my defence, a smartphone entered my life only by way of a birthday present in 2016. I had resisted until then for two reasons: I spend most of the day in front of a screen already, and sociability is not my strongest skill. Three years later, I am still far from being a smartphone enthusiast. Besides occasional phone calls, I seldom check it during the day. I keep messaging at a bare minimum, and there are only a couple of applications nudging me with notifications.

Therefore, the smartphone is not to blame for my compulsively click-driven life. The big giants like Facebook and Twitter don’t matter either, because I stopped using them a long while ago. The problem is everything else. Aggregating RSS feeds seemed like the right move at the beginning, but checking them every day has turned out to be a time-consuming experience much like browsing news websites to keep up to date. Reddit lures me in constantly, even though the small percentage of valuable content just reminds me of Twitter. GitHub usually offers more click-worthy updates, but the repositories I deem interesting are far fewer than the uninteresting ones. Letterboxd, a staple of my cinema obsession, may be the worst in this regard; its Activity tab is the drug I can’t resist, and yet I usually find no more than a couple of beautiful reviews a week.

All these considerations are the outcome of reading Digital Minimalism by Cal Newport. Newport’s writing style is at times too pedantic and close to spoon-feeding, but his book highlights potentially effective solutions to escape the miring web of my networking routines. The decluttering process Newport suggests is a difficult task. Depending on the depth of your online addiction, it requires several levels of confidence and willpower, because you know that fighting your own vices is a painful endeavour.

I experienced something similar when I left Twitter behind. I had always valued Twitter more for the network of like-minded people than the chance to share how I feel in a precise moment, and so Twitter had become my go-to reference for film and technology updates. After I deleted my account, I remember asking myself over and over again: am I missing something important? Will I be able to keep up with everyday trends? These questions were always in the back of my head, poking me every time I heard Twitter mentioned somewhere. It took a couple of weeks, maybe three, to realise how much time and peace of mind I gained from a life without Twitter. And I did find the answer to those questions eventually: who cares?

With this precious experience in mind, I treasured Newport’s tips and followed these key steps:

  • cut down my RSS feeds to twenty entries and check them only twice a week
  • remove Reddit from my computer and from my phone
  • remove GitHub from my phone and use it only for work and open source projects
  • access Letterboxd for my writings and check the rest of the community once a week

The trick, as Newport points out, is not just understanding what you value most and whether technology helps you get it or not, but occupying the spare time with something else, possibly unrelated to technology. And so, in my case, it means more time with my wife, longer walks with our dog, more writing, more cooking, and more books. Rewarding leisure, and it’s just the beginning.

Note that I am simplifying Newport’s pages for the sake of brevity, because there is more to his idea of decluttering. Bluntly put, if you are reading this on your phone start looking for Digital Minimalism now.

As daunting as it may look, minimizing our digital consumption is the best way to really and deeply connect with the world around us.

-1:-- Digital minimalism (Post)--L0--C0--April 10, 2019 12:00 AM

Chen Bin (redguardtoo): Enhance Emacs Evil global markers

Global evil markers are saved in evil-global-markers-history by session.el.

Insert the code below into ~/.emacs:

(require 'subr-x) ;; for `string-trim'

(defvar evil-global-markers-history nil)
(defun my-forward-line (lnum)
  "Forward LNUM lines."
  (setq lnum (string-to-number lnum))
  (when (and lnum (> lnum 0))
    (goto-char (point-min))
    (forward-line (1- lnum))))

(defadvice evil-set-marker (before evil-set-marker-before-hack activate)
  (let* ((args (ad-get-args 0))
         (c (nth 0 args))
         (pos (or (nth 1 args) (point))))
    ;; only remember global markers
    (when (and (>= c ?A) (<= c ?Z) buffer-file-name)
      ;; drop any previous position recorded for this marker
      (setq evil-global-markers-history
            (delq nil
                  (mapcar `(lambda (e)
                             (unless (string-match (format "^%s@" (char-to-string ,c)) e)
                               e))
                          evil-global-markers-history)))
      ;; store the marker as "CHAR@/path/to/file:LINENUM:line text"
      (add-to-list 'evil-global-markers-history
                   (format "%s@%s:%d:%s"
                           (char-to-string c)
                           (file-truename buffer-file-name)
                           (line-number-at-pos pos)
                           (string-trim
                            (buffer-substring-no-properties (line-beginning-position)
                                                            (line-end-position))))))))

(defadvice evil-goto-mark-line (around evil-goto-mark-line-hack activate)
  (let* ((args (ad-get-args 0))
         (c (nth 0 args))
         (orig-pos (point)))
    (condition-case nil
        ad-do-it
      (error
       ;; the marker is not in this session; look it up in the saved history
       (when (and (eq orig-pos (point)) evil-global-markers-history)
         (let* ((markers evil-global-markers-history)
                (i 0)
                m file found)
           (while (and (not found) (< i (length markers)))
             (setq m (nth i markers))
             (when (string-match (format "\\`%s@\\(.*?\\):\\([0-9]+\\):\\(.*\\)\\'"
                                         (char-to-string c))
                                 m)
               (setq file (match-string-no-properties 1 m))
               (setq found (match-string-no-properties 2 m)))
             (setq i (1+ i)))
           (when file
             (find-file file)
             (my-forward-line found))))))))

(defun counsel-evil-goto-global-marker ()
  "Goto global evil marker."
  (interactive)
  (unless (featurep 'ivy) (require 'ivy))
  (ivy-read "Goto global evil marker: "
            evil-global-markers-history
            :action (lambda (m)
                      (when (string-match "\\`[A-Z]@\\(.*?\\):\\([0-9]+\\):\\(.*\\)\\'" m)
                        (let* ((file (match-string-no-properties 1 m))
                               (linenum (match-string-no-properties 2 m)))
                          (find-file file)
                          (my-forward-line linenum))))))

evil-goto-mark-line will fall back to the markers in evil-global-markers-history when the built-in lookup fails.
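To make the ivy command handy, one could bind it to a spare key; the binding below is just an illustration, not part of the original setup:

```elisp
;; Hypothetical binding; pick any free key you like.
(global-set-key (kbd "C-c m") #'counsel-evil-goto-global-marker)
```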

-1:-- Enhance Emacs Evil global markers (Post Chen Bin)--L0--C0--April 08, 2019 01:00 PM

Alex Schroeder: Using magit and forge

Following @algernon’s suggestion, I’m trying forge for magit.

Install it via M-x package-list-packages.

My setup is this: my origin remote points to my remote git repo. There’s also a cgit running on that server, but what I really care about is the alternative github remote I have installed. I had to set the name of this remote in a git config option via the command line:

git config --add "forge.remote" "github"

I was told to M-x forge-pull when I tried to look at the issues using ' l i. When I did that, it told me that it was going to create a token in ~/.authinfo and if I didn’t want that, I’d have to abort and configure auth-sources. So I did that:

(setq auth-sources '("~/.authinfo.gpg"))

So I tried again. forge-pull said it was going to create a token and save it in ~/.authinfo.gpg and I agreed. Sadly, it didn’t work:

ghub--handle-response-headers: BUG: missing headers
  headers: nil
  status: nil
  buffer: #<buffer  *http*>
  --- end of buffer-string ---

I wonder why.

I restarted Emacs. Visited my repo. Ran magit-status. Ran forge-pull. And got the error: “transient--layout-member: magit-dispatch is not a transient command”.

I have no idea what this means. Fiddled some more, went through the process again, got the missing headers error again. I’m not happy.

Now I’m following the hints in the Ghub manual about manually creating the personal access token I need. Turns out that the token was created on the site, but the line was missing in my ~/.authinfo.gpg. So now I’m adding it manually.

machine api.github.com login kensanata^forge password *secret*

And I’m still getting the same error.

ghub--handle-response-headers: BUG: missing headers
  headers: nil
  status: nil
  buffer: #<buffer  *http*>
  --- end of buffer-string ---

Looking at #81 and reading through the entire thread it seems the answer is installing Emacs 27, or using (setq gnutls-log-level 1) but “only because it slows things down slightly,” perhaps.

By now I’m sick and tired of the entire thing. I remember why using special clients written to new APIs is a pain. For the moment, this doesn’t seem to be better than using either the websites directly, or an email based git workflow and keeping track of issues elsewhere.

I’m going to drop this for a while.


-1:-- Using magit and forge (Post)--L0--C0--April 07, 2019 09:33 AM

(or emacs: Swiper-isearch - a more isearch-like swiper


Since its introduction in 2015, swiper, while nice most of the time, had two problems:

  1. Slow startup for large buffers.
  2. Candidates were lines, so if you had two or more matches on the same line, the first one was selected.

Over time, workarounds were added to address these problems.

Problem 1: slow startup

Almost right away, calling font-lock-ensure was limited to sufficiently small buffers.

In 2016, counsel-grep-or-swiper was introduced. It uses an external process (grep) to search through large files.

In 2017, I found ripgrep, which does a better job than grep for searching one file:

(setq counsel-grep-base-command
      "rg -i -M 120 --no-heading --line-number --color never %s %s")

The advantage here is that the search can be performed on very large files. The trade-off is that we have to type in at least 3 characters before we send it to the external process. Otherwise, when the process returns a lot of results, Emacs will lag while receiving all that output.

Problem 2: candidates are lines

In 2015, swiper-avy was added, which could also be used as a workaround for many candidates on a single line. Press C-' to visually select any candidate on screen using avy.

Enter swiper-isearch

Finally, less than a week ago, I wrote swiper-isearch to fix #1931.

Differences from the previous commands:

  • Every candidate is a point position and not a line. The UX of going from one candidate to the next is finally isearch-like, I enjoy it a lot.

  • Unlike swiper, no line numbers are added to the candidates. This allows it to be as fast as anzu.

  • Unlike counsel-grep, no external process is used. So you get feedback even after inputting a single char.

I like it a lot so far, enough to make it my default search:

(global-set-key (kbd "C-s") 'swiper-isearch)


Try out swiper-isearch, see if it can replace swiper for you; counsel-grep-or-swiper still has its place, I think. Happy hacking!

PS. Thanks to everyone who supports me on Liberapay and Patreon!

PPS. Thanks to everyone who contributes issues and patches!

-1:-- Swiper-isearch - a more isearch-like swiper (Post)--L0--C0--April 06, 2019 10:00 PM

Marcin Borkowski: Using benchmark to measure speed of Elisp code

Some time ago I promised that I’d write something about measuring the efficiency of Elisp code. Now, my guess was that the string version would be faster for short templates (due to the overhead of creating buffers), but that the longer the template, the faster the buffer version. That was right, though for other reasons.
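The built-in benchmark library makes this kind of comparison easy. A minimal sketch of the measurement described (the “template” here is a made-up placeholder, not Marcin’s actual code):

```elisp
;; Compare building a small "template" via strings vs. a temp buffer.
;; `benchmark-run' returns (ELAPSED-SECONDS GC-RUNS GC-TIME).
(benchmark-run 100000
  (concat "Hello, " "world" "!"))

(benchmark-run 100000
  (with-temp-buffer
    (insert "Hello, " "world" "!")
    (buffer-string)))
```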
-1:-- Using benchmark to measure speed of Elisp code (Post)--L0--C0--March 25, 2019 08:03 PM

Grant Rettke: Choosing A Monospace Font: 2019-March

On 2014/07/03 I wrote How To Choose A Font. My font choice would get used mostly in a text editor, a web page, or a printed page. Based on notable information I ended up choosing DejaVu Sans Mono. Five years later I’m still in love with it. Right now though I’ve got important life-tasks that … Continue reading "Choosing A Monospace Font: 2019-March"
-1:-- Choosing A Monospace Font: 2019-March (Post grant)--L0--C0--March 24, 2019 05:20 AM

Grant Rettke: VIM Changes Acronym to “VIM Imitates eMacs”

VI is the second editor that I learned. The six commands that I use in it will always be dear to me. Twenty-five years have passed, I still use the same six commands. The landscape has changed a lot though: VIM has taken VI into the stratosphere. My buddy showed me how he uses VIM. … Continue reading "VIM Changes Acronym to “VIM Imitates eMacs”"
-1:-- VIM Changes Acronym to “VIM Imitates eMacs” (Post grant)--L0--C0--March 21, 2019 12:26 PM

Manuel Uberti: Switching buffers (Take 2)

Last month I wrote about the neat nswbuff, but there is another way to implement buffer switching without introducing a new package.

Since I already use counsel-projectile, why not leverage it for my needs?

(defun mu-switch-to-project-buffer-if-in-project (arg)
  "Custom switch to buffer.
With universal argument ARG or when not in project, rely on
`ivy-switch-buffer'.
Otherwise, use `counsel-projectile-switch-to-buffer'."
  (interactive "P")
  (if (or arg
          (not (projectile-project-p)))
      (ivy-switch-buffer)
    (counsel-projectile-switch-to-buffer)))

(bind-key* "C-x b" #'mu-switch-to-project-buffer-if-in-project)

Pretty self-explanatory. By default, when not in a project counsel-projectile-switch-to-buffer asks you for the project to switch to.

However, if I am not in a project chances are I want to switch to a buffer that doesn’t belong to a project, especially since I usually enter a project before switching to one of its buffers.

nswbuff has previews and back-and-forth navigation, so it still offers a nicer solution to buffer switching. This is Emacs, of course, so you know the deal: endless possibilities.

-1:-- Switching buffers (Take 2) (Post)--L0--C0--March 16, 2019 12:00 AM

Modern Emacs: Notating Programs - Introduction

Notate is xxx.
-1:-- Notating Programs - Introduction (Post)--L0--C0--March 11, 2019 12:00 AM

Chris Wellons: An Async / Await Library for Emacs Lisp

As part of building my Python proficiency, I’ve learned how to use asyncio. This new language feature first appeared in Python 3.5 (PEP 492, September 2015). JavaScript grew a nearly identical feature in ES2017 (June 2017). An async function can pause to await on an asynchronously computed result, much like a generator pausing when it yields a value.

In fact, both Python and JavaScript async functions are essentially just fancy generator functions with some specialized syntax and semantics. That is, they’re stackless coroutines. Both languages already had generators, so their generator-like async functions are a natural extension that — unlike stackful coroutines — do not require significant, new runtime plumbing.

Emacs officially got generators in 25.1 (September 2016), though, unlike Python and JavaScript, it didn’t require any additional support from the compiler or runtime. It’s implemented entirely using Lisp macros. In other words, it’s just another library, not a core language feature. In theory, the generator library could be easily backported to the first Emacs release to properly support lexical closures, Emacs 24.1 (June 2012).

For the same reason, stackless async/await coroutines can also be implemented as a library. So that’s what I did, letting Emacs’ generator library do most of the heavy lifting. The package is called aio:

It’s modeled more closely on JavaScript’s async functions than Python’s asyncio, with the core representation being promises rather than coroutine objects. I just have an easier time reasoning about promises than coroutines.

I’m definitely not the first person to realize this was possible, and was beaten to the punch by two years. Wanting to avoid fragmentation, I set aside all formality in my first iteration on the idea, not even bothering with namespacing my identifiers. It was to be only an educational exercise. However, I got quite attached to my little toy. Once I got my head wrapped around the problem, everything just sort of clicked into place so nicely.

In this article I will show step-by-step one way to build async/await on top of generators, laying out one concept at a time and then building upon each. But first, some examples to illustrate the desired final result.

aio example

Ignoring all its problems for a moment, suppose you want to use url-retrieve to fetch some content from a URL and return it. To keep this simple, I’m going to omit error handling. Also assume that lexical-binding is t for all examples. Besides, lexical scope is required by the generator library, and therefore also required by aio.

The most naive approach is to fetch the content synchronously:

(defun fetch-fortune-1 (url)
  (let ((buffer (url-retrieve-synchronously url)))
    (with-current-buffer buffer
      (prog1 (buffer-string)
        (kill-buffer)))))

The result is returned directly, and errors are communicated by an error signal (i.e., Emacs’ version of exceptions). This is convenient, but the function will block the main thread, locking up Emacs until the result has arrived. This is obviously very undesirable, so, in practice, everyone nearly always uses the asynchronous version:

(defun fetch-fortune-2 (url callback)
  (url-retrieve url (lambda (_status)
                      (funcall callback (buffer-string)))))

The main thread no longer blocks, but it’s a whole lot less convenient. The result isn’t returned to the caller, and instead the caller supplies a callback function. The result, whether success or failure, will be delivered via callback, so the caller must split itself into two pieces: the part before the callback and the callback itself. Errors cannot be delivered using an error signal because of the inverted flow control.

The situation gets worse if, say, you need to fetch results from two different URLs. You either fetch results one at a time (inefficient), or you manage two different callbacks that could be invoked in any order, and therefore have to coordinate.

Wouldn’t it be nice for the function to work like the first example, but be asynchronous like the second example? Enter async/await:

(aio-defun fetch-fortune-3 (url)
  (let ((buffer (aio-await (aio-url-retrieve url))))
    (with-current-buffer buffer
      (prog1 (buffer-string)
        (kill-buffer)))))

A function defined with aio-defun is just like defun except that it can use aio-await to pause and wait on any other function defined with aio-defun — or, more specifically, any function that returns a promise. Borrowing Python parlance: returning a promise makes a function awaitable. If there’s an error, it’s delivered as an error signal from aio-url-retrieve, just like the first example. When called, this function returns immediately with a promise object that represents a future result. The caller might look like this:

(defcustom fortune-url ...)

(aio-defun display-fortune ()
  (message "%s" (aio-await (fetch-fortune-3 fortune-url))))

How wonderfully clean that looks! And, yes, it even works with interactive like that. I can M-x display-fortune and a fortune is printed in the minibuffer as soon as the result arrives from the server. In the meantime Emacs doesn’t block and I can continue my work.

You can’t do anything you couldn’t already do before. It’s just a nicer way to organize the same callbacks: implicit rather than explicit.

Promises, simplified

The core object at play is the promise. Promises are already a rather simple concept, but aio promises have been distilled to their essence, as they’re only needed for this singular purpose. More on this later.

As I said, a promise represents a future result. In practical terms, a promise is just an object to which one can subscribe with a callback. When the result is ready, the callbacks are invoked. Another way to put it is that promises reify the concept of callbacks. A callback is no longer just the idea of an extra argument to a function. It’s a first-class thing that itself can be passed around as a value.

Promises have two slots: the final promise result and a list of subscribers. A nil result means the result hasn’t been computed yet. It’s so simple I’m not even bothering with cl-struct.

(defun aio-promise ()
  "Create a new promise object."
  (record 'aio-promise nil ()))

(defsubst aio-promise-p (object)
  (and (eq 'aio-promise (type-of object))
       (= 3 (length object))))

(defsubst aio-result (promise)
  (aref promise 1))

To subscribe to a promise, use aio-listen:

(defun aio-listen (promise callback)
  (let ((result (aio-result promise)))
    (if result
        (run-at-time 0 nil callback result)
      (push callback (aref promise 2)))))

If the result isn’t ready yet, add the callback to the list of subscribers. If the result is ready, call the callback in the next event loop turn using run-at-time. This is important because it keeps all the asynchronous components isolated from one another. They won’t see each other’s frames on the call stack, nor frames from aio. This is so important that the Promises/A+ specification is explicit about it.

The other half of the equation is resolving a promise, which is done with aio-resolve. Unlike other promise implementations, aio promises don’t care whether the promise is being fulfilled (success) or rejected (error). Instead a promise is resolved using a value function — or, usually, a value closure. Subscribers receive this value function and extract the value by invoking it with no arguments.

Why? This lets the promise’s resolver decide the semantics of the result. Instead of returning a value, this function can instead signal an error, propagating an error signal that terminated an async function. Because of this, the promise doesn’t need to know how it’s being resolved.

When a promise is resolved, subscribers are each scheduled in their own event loop turns in the same order that they subscribed. If a promise has already been resolved, nothing happens. (Thought: Perhaps this should be an error in order to catch API misuse?)

(defun aio-resolve (promise value-function)
  (unless (aio-result promise)
    (let ((callbacks (nreverse (aref promise 2))))
      (setf (aref promise 1) value-function
            (aref promise 2) ())
      (dolist (callback callbacks)
        (run-at-time 0 nil callback value-function)))))

If you’re not an async function, you might subscribe to a promise like so:

(aio-listen promise (lambda (v)
                      (message "%s" (funcall v))))

The simplest example of a non-async function that creates and delivers on a promise is a “sleep” function:

(defun aio-sleep (seconds &optional result)
  (let ((promise (aio-promise))
        (value-function (lambda () result)))
    (prog1 promise
      (run-at-time seconds nil
                   #'aio-resolve promise value-function))))

Similarly, here’s a “timeout” promise that delivers a special timeout error signal at a given time in the future.

(defun aio-timeout (seconds)
  (let ((promise (aio-promise))
        (value-function (lambda () (signal 'aio-timeout nil))))
    (prog1 promise
      (run-at-time seconds nil
                   #'aio-resolve promise value-function))))

That’s all there is to promises.

Evaluate in the context of a promise

Before we get into pausing functions, let’s deal with the slightly simpler matter of delivering their return values using a promise. What we need is a way to evaluate a “body” and capture its result in a promise. If the body exits due to a signal, we want to capture that as well.

Here’s a macro that does just this:

(defmacro aio-with-promise (promise &rest body)
  `(aio-resolve ,promise
                (condition-case error
                    (let ((result (progn ,@body)))
                      (lambda () result))
                  (error (lambda ()
                           (signal (car error) ; rethrow
                                   (cdr error)))))))

The body result is captured in a closure and delivered to the promise. If there’s an error signal, it’s “rethrown” into subscribers by the promise’s value function.

This is where Emacs Lisp has a serious weak spot. There’s not really a concept of rethrowing a signal. Unlike a language with explicit exception objects that can capture a snapshot of the backtrace, the original backtrace is completely lost where the signal is caught. There’s no way to “reattach” it to the signal when it’s rethrown. This is unfortunate because it would greatly help debugging if you got to see the full backtrace on the other side of the promise.

Async functions

So we have promises and we want to pause a function on a promise. Generators have iter-yield for pausing an iterator’s execution. To tackle this problem:

  1. Yield the promise to pause the iterator.
  2. Subscribe a callback on the promise that continues the generator (iter-next) with the promise’s result as the yield result.

All the hard work is done on either side of the yield, so aio-await is just a simple wrapper around iter-yield:

(defmacro aio-await (expr)
  `(funcall (iter-yield ,expr)))

Remember, that funcall is here to extract the promise value from the value function. If it signals an error, this propagates directly into the iterator just as if it had been a direct call — minus an accurate backtrace.

So aio-lambda / aio-defun needs to wrap the body in a generator (iter-lambda), invoke it to produce a generator, then drive the generator using callbacks. Here’s a simplified, unhygienic definition of aio-lambda:

(defmacro aio-lambda (arglist &rest body)
  `(lambda (&rest args)
     (let* ((promise (aio-promise))
            (iter (apply (iter-lambda ,arglist
                           (aio-with-promise promise
                             ,@body))
                         args)))
       (prog1 promise
         (aio--step iter promise nil)))))

The body is evaluated inside aio-with-promise with the result delivered to the promise returned directly by the async function.

Before returning, the iterator is handed to aio--step, which drives the iterator forward until it delivers its first promise. When the iterator yields a promise, aio--step attaches a callback back to itself on the promise as described above. Immediately driving the iterator up to the first yielded promise “primes” it, which is important for getting the ball rolling on any asynchronous operations.

If the iterator ever yields something other than a promise, it’s delivered right back into the iterator.

(defun aio--step (iter promise yield-result)
  (condition-case _
      (cl-loop for result = (iter-next iter yield-result)
               then (iter-next iter (lambda () result))
               until (aio-promise-p result)
               finally (aio-listen result
                                   (lambda (value)
                                     (aio--step iter promise value))))
    (iter-end-of-sequence)))

When the iterator is done, nothing more needs to happen since the iterator resolves its own return value promise.

The definition of aio-defun just uses aio-lambda with defalias. There’s nothing to it.
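
Sketched out, that is simply:

;; aio-defun in terms of aio-lambda and defalias
(defmacro aio-defun (name arglist &rest body)
  `(defalias ',name (aio-lambda ,arglist ,@body)))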

That’s everything you need! Everything else in the package is merely useful, awaitable functions like aio-sleep and aio-timeout.

Composing promises

Unfortunately url-retrieve doesn’t support timeouts. We can work around this by composing two promises: a url-retrieve promise and an aio-timeout promise. First define a promise-returning function, aio-select, which takes a list of promises and returns (as another promise) the first promise to resolve:

(defun aio-select (promises)
  (let ((result (aio-promise)))
    (prog1 result
      (dolist (promise promises)
        (aio-listen promise (lambda (_)
                              (aio-resolve result
                                           (lambda () promise))))))))

We give aio-select both our url-retrieve and timeout promises, and it tells us which resolved first:

(aio-defun fetch-fortune-4 (url timeout)
  (let* ((promises (list (aio-url-retrieve url)
                         (aio-timeout timeout)))
         (fastest (aio-await (aio-select promises)))
         (buffer (cdr (aio-await fastest))))
    (with-current-buffer buffer
      (prog1 (buffer-string)
        (kill-buffer)))))

Cool! Note: this will not actually cancel the URL request; it just lets the async function move on early and prevents it from ever seeing the result.


Despite aio being entirely about managing concurrent, asynchronous operations, it has nothing at all to do with threads — as in Emacs 26’s support for kernel threads. All async functions and promise callbacks are expected to run only on the main thread. That’s not to say an async function can’t await on a result from another thread. It just must be done very carefully.


The package also includes two functions for realizing promises on processes, whether they be subprocesses or network sockets.

  • aio-process-filter
  • aio-process-sentinel

For example, this function loops over each chunk of output (typically 4kB) from the process, as delivered to a filter function:

(aio-defun process-chunks (process)
  (cl-loop for chunk = (aio-await (aio-process-filter process))
           while chunk
           do (... process chunk ...)))

Exercise for the reader: Write an awaitable function that returns a line at a time rather than a chunk at a time. You can build it on top of aio-process-filter.

I considered wrapping functions like start-process so that their aio versions would return a promise representing some kind of result from the process. However there are so many different ways to create and configure processes that I would have ended up duplicating all the process functions. Focusing on the filter and sentinel, and letting the caller create and configure the process is much cleaner.

Unfortunately Emacs has no asynchronous API for writing output to a process. Both process-send-string and process-send-region will block if the pipe or socket is full. There is no callback, so you cannot await on writing output. Maybe there’s a way to do it with a dedicated thread?

Another issue is that the process-send-* functions are preemptible, made necessary because they block. The aio-process-* functions leave a gap (i.e. between filter awaits) where no filter or sentinel function is attached. It’s a consequence of promises being single-fire. The gap is harmless so long as the async function doesn’t await something else or get preempted. This needs some more thought.

Update: These process functions no longer exist and have been replaced by a small framework for building chains of promises. See aio-make-callback.

Testing aio

The test suite for aio is a bit unusual. Emacs’ built-in test suite, ERT, doesn’t support asynchronous tests. Furthermore, tests are generally run in batch mode, where Emacs invokes a single function and then exits rather than pump an event loop. Batch mode can only handle asynchronous process I/O, not the async functions of aio. So it’s not possible to run the tests in batch mode.

Instead I hacked together a really crude callback-based test suite. It runs in non-batch mode and writes the test results into a buffer (run with make check). Not ideal, but it works.

One of the tests is a sleep sort (with reasonable tolerances). It’s a pretty neat demonstration of what you can do with aio:

(aio-defun sleep-sort (values)
  (let ((promises (mapcar (lambda (v) (aio-sleep v v)) values)))
    (cl-loop while promises
             for next = (aio-await (aio-select promises))
             do (setf promises (delq next promises))
             collect (aio-await next))))

To see it in action (M-x sleep-sort-demo):

(aio-defun sleep-sort-demo ()
  (let ((values '(0.1 0.4 1.1 0.2 0.8 0.6)))
    (message "%S" (aio-await (sleep-sort values)))))

Async/await is pretty awesome

I’m quite happy with how this all came together. Once I had the concepts straight — particularly resolving to value functions — everything made sense and all the parts fit together well, and mostly by accident. That feels good.

-1:-- An Async / Await Library for Emacs Lisp (Post)--L0--C0--March 10, 2019 08:57 PM

Raimon Grau: emms ftw

I never got into the Emacs MultiMedia System, but just read that it has support for streaming radios so I gave it a try. And you know what?  It's awesome.  I'm adding all the radios I have in my radios repo.

But you already knew that.

Only a few concepts/commands:
- M-x emms-streams RET
- M-x emms RET
- C-+ + + +
- M-x emms-add-dired RET

Enough to get by and start using it.
-1:-- emms ftw (Post Raimon Grau ( 06, 2019 03:31 AM

Rubén Berenguel: 2019-7 Readings of the week

NOTE: The themes are varied, and some links below are affiliate links. Formal methods, Scala, productivity. Expect a similar wide range in the future as well. You can find all my weekly readings under the tag here. You can also get these as a weekly newsletter by subscribing here.

Solving Knights and Knaves with Alloy

Once you start your journey down the formal methods rabbit hole (which I started with TLA+) you can never stop. This is a very good introduction to Alloy, a modelling language which seems well suited for data structure descriptions (not procedural/step-time models).

Seeking the Productive Life: Some Details of My Personal Infrastructure

As much as I don't like Stephen Wolfram, his obsessive take on being productive echoes mine. And that worries me.

Encryption Key Hierarchies in Alloy

Next after the intro above, this is a short post about how you would set up a reasonable hierarchy of keys in an organisation. Something like "Infrastructure team owns infrastructure keys, developers own GitHub" but with more layers. Then you can automatically check somebody has access to stuff, etc. Neat.

Is your Scala object always a singleton?

The guys at SoftwareMill (excellent technical blog and people) stumbled upon this. The kind of bug that could defeat you, but they succeeded, and documented it for the rest of us.

Proving Games are Winnable with Alloy

And the final instalment in this week's formal method extravaganza, how to prove a randomised game (say, Zelda) can be winnable using Alloy.

Don't Let the Internet Dupe you, Event Sourcing is Hard

Yep, can totally agree, I've hit some of the roadblocks and fun moments the author shares. As usual, some HackerNews comments can be enlightening.

On Being A Senior Engineer

There are many definitions of what being senior is, but you can't go wrong trying to follow these suggestions.

Hold the front pages: meet the endpaper enthusiasts

I'm pretty sure you didn't know there are collectors of endpaper.

Beating hash tables with trees? The ART-ful radix trie

A synopsis of a data structure paper, about ART radix tries. They are kind-of-like tries, but try to use less memory.

Scala pattern matching: apply the unapply

In case you didn't know how pattern matching works (hint: unapply), this post will tell you.

buffer-expose emacs package

A package released late last week, it helps you navigate your open buffers in a visual way. Pretty neat, and even with my usual 20+ buffers seems to work seamlessly.


These weekly posts are also available as a newsletter. These days (since RSS went into limbo) most of my regular information comes from several newsletters I’m subscribed to, instead of me going directly to a blog. If this is also your case, subscribe by clicking here.
-1:-- 2019-7 Readings of the week (Post Ruben Berenguel ( 26, 2019 09:59 PM

emacsninja: Smooth Video Game Emulation in Emacs

I have a lengthy list of things I might eventually implement for Emacs, most of which are not exactly useful, are challenging to do, or both. A NES emulator fits all of these criteria neatly. I’ve kept hearing that they can run on poor hardware and learned that the graphics fit into a tiled model (meaning I wouldn’t have to draw each pixel separately, only each tile), so given good enough rendering speed it shouldn’t be an impossible task. Then the unexpected happened: someone else beat me to the punch with nes.el. It’s an impressive feat, but with one wrinkle: its overall speed is unacceptable. Mario runs with a slowdown of over 100x, rendering it essentially unplayable. For this reason I adjusted my goals a bit: emulate a simpler game platform smoothly in Emacs at full speed.

Enter the CHIP-8. It’s not a console as such, but a video game VM designed in the 1970s with the following properties:

  • CPU: 8-Bit, 16 general-purpose registers, 36 instructions, each two bytes large
  • RAM: 4KB
  • Stack: 16 return addresses
  • Resolution: 64 x 32 black/white pixels
  • Rendering: Sprites are drawn in XOR mode
  • Sound: Monotone buzzer
  • Input: Hexadecimal keypad

It’s perfect. Sound is the only real issue here as the native sound support in Emacs is blocking, but this can be worked around with sufficient effort. Once it’s implemented there’s a selection of almost a hundred games to play, with a few dozen more if you implement the Super CHIP-8 extensions. I’d not have to implement Space Invaders, Pacman or Breakout with gamegrid.el. What could possibly be hard about this? As it turns out, enough to keep me entertained for a few weeks. Here’s the repo.

General strategy

First of all, I located a reasonably complete-looking ROM pack. It’s not included with the code as I’m not 100% sure about the legal status; some claim the games are old enough to be public domain, but since there are plenty of new ones, I decided to go the safe route. Sorry about that.

Cowgod’s Chip-8 Technical Reference is the main document I relied upon. It’s clearly written and covers nearly everything I’d want to know about the architecture, with a few exceptions I’d have to find out on my own. Another helpful one is Mastering CHIP-8 to fill in some of the gaps.

To boot up a CHIP-8 game on real hardware you’d use a machine where the interpreter is loaded between the memory offsets #x000 and #x200, load the game starting at offset #x200, then start the interpreter. It would start with the program counter set to #x200, execute the instruction there, continue with the next instruction the program counter points to, and so on. To make things more complicated, there are two timers in the system running at 60Hz; these decrement a special register if non-zero, which is used to measure delays accurately and play a buzzing sound. However, there is no specification of how fast the CPU runs or how display updates are to be synchronized, so I had to come up with a strategy to accommodate potentially varying clock speeds.

The standard solution to this is a game loop where you aim for each cycle to take a fixed time, for example by executing a loop iteration, then sleeping for enough time to arrive at the desired cycle duration. This kind of thing doesn’t work too well in Emacs: if you use sit-for you get user-interruptible sleep, and if you use sleep-for you get uninterruptible sleep that doesn’t allow user input to be registered. The solution here is to invert the control flow by using a timer running at the frame rate, then being careful not to do too much work in the timer function. This way Emacs can handle user input while rendering as quickly as possible. The timer function would execute as many CPU cycles as needed, decrement the timer registers if necessary and finally, repaint the display.
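
A sketch of this inverted loop might look as follows (the function and variable names here are hypothetical, not the package’s actual ones):

(defconst chip8-frame-time (/ 1.0 60))   ; timer fires at 60Hz
(defconst chip8-cycles-per-frame 15)     ; assumed CPU-speed knob

(defun chip8-frame ()
  "Run one frame: a batch of CPU cycles, timer ticks, then a repaint."
  (dotimes (_ chip8-cycles-per-frame)
    (chip8-cycle))                       ; execute one CPU instruction
  (chip8-tick-timers)                    ; decrement delay/sound timers
  (chip8-repaint))                       ; redraw the display if needed

;; Emacs stays responsive to user input between timer firings:
(run-at-time 0 chip8-frame-time #'chip8-frame)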

Each component of the system is represented by a variable holding an appropriate data structure, most of which are vectors. RAM is a vector of bytes, the stack is a vector of addresses, the screen is a vector of bits, etc. I opted for using vectors over structs for simplicity’s sake. The registers are a special case because if they’re represented by a vector, I’d need to index into it using parts of the opcode. Therefore it would make sense to have constants representing each register, with their values being equal to the value used in the opcode. Initially I’ve defined the constants using copy-paste but later switched to a chip8-enum macro which defines them for me.
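
The chip8-enum macro isn’t shown in the post; a minimal version consistent with its description might look like this (requires cl-lib):

(defmacro chip8-enum (&rest names)
  "Define each symbol in NAMES as a constant, counting up from 0."
  `(progn
     ,@(cl-loop for name in names
                for value from 0
                collect `(defconst ,name ,value))))

;; (chip8-enum chip8-V0 chip8-V1 chip8-V2)
;; defines chip8-V0 as 0, chip8-V1 as 1, ... matching the opcode encoding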

The built-in sprites for the hex digits were shamelessly stolen from Cowgod’s Chip-8 technical reference. They are copied on initialization to the memory region reserved for the interpreter, this allows the LD F, Vx instruction to just return the respective address. When implementing extended built-in sprites for the Super CHIP-8 instructions there was no convenient resource to steal them from again, instead I created upscaled versions of them with a terrible Ruby one-liner.

Basic Emulation

For debugging reasons I didn’t implement the game loop at first, instead I went for a loop where I keep executing CPU instructions indefinitely, manually abort with C-g, then display the display state with a debug function that renders it as text. This allowed me to fully concentrate on getting basic emulation right before fighting with efficiency concerns and rendering speed.
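
Such a debug renderer can be as simple as this sketch (chip8-screen is a hypothetical name for the 64x32 screen vector):

(defun chip8-debug-render ()
  "Dump the screen state as text into a buffer, one row per line."
  (with-current-buffer (get-buffer-create "*chip8-debug*")
    (erase-buffer)
    (dotimes (y 32)
      (dotimes (x 64)
        (insert (if (zerop (aref chip8-screen (+ (* y 64) x))) ?\s ?#)))
      (insert ?\n))
    (display-buffer (current-buffer))))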

For each CPU cycle the CPU looks up the current value of the program counter, looks up the two-byte instruction in the RAM at that offset, then executes it, changing the program counter and possibly more in the process. One unspecified thing here is what one does if the program counter points to an invalid address and what actual ROMs do in practice when they’re done. Experimentation showed that instead of specifying an invalid address they fall into an infinite loop that always jumps to the same address.

Due to the design choice of constantly two-byte sized instructions, the type and operands of each instruction is encoded inline and needs to be extracted by using basic bit fiddling. Emacs Lisp offers logand and ash for this, corresponding to &, << and >> in C. First the bits to be extracted are masked by using logand with an argument where all bits to be kept are set to ones, then the result is shifted all the way to the right with ash using a negative argument. Take for example the JP nnn instruction which is encoded as #x1nnn, for this you’d extract the type by masking the opcode with #xF000, then shift it with ash by -12. Likewise, the argument can be extracted by masking it with #x0FFF, with no shift needed as the bits are already at the right side.
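
Concretely, for the JP nnn example:

;; JP nnn is encoded as #x1nnn; extract the type and the target:
(let ((opcode #x1234))
  (list (ash (logand opcode #xF000) -12)   ; => 1, the instruction type
        (logand opcode #x0FFF)))           ; => #x234, the jump target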

A common set of patterns comes up when dissecting the opcodes, therefore the chip8-exec function saves all interesting parts of the opcode in local variables using the abbreviations as seen in Cowgod’s Chip-8 technical reference, then a big cond is used to tell which type of opcode it is and each branch modifies the state of the virtual machine as needed.

Nearly all instructions end up incrementing the program counter by one instruction. I’ve borrowed a trick from other emulators here: before executing chip8-exec the program counter is unconditionally incremented by the opcode size. In case an instruction needs to do something different, like changing it to a jump location, it can still override the value manually.
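
The pre-increment trick looks roughly like this (variable names hypothetical):

;; Before dispatching, assume fall-through to the next instruction:
(cl-incf chip8-pc 2)
;; ...then in the dispatch, a jump simply overrides that default:
(cond
 ((= type #x1)                ; JP nnn
  (setf chip8-pc nnn))
 ;; ...all other instructions leave the incremented PC alone
 )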

To test my current progress I picked the simplest (read: smallest) ROM doing something interesting: Maze by David Winter. My debug function printed the screen by writing spaces or hashes to a buffer, separated by a newline for each screen line. After I got this one working, I repeated the process with several other ROMs that didn’t require any user input and displayed a (mostly) static screen. The most useful from the collection was “BC Test” by BestCoder as it covered nearly all opcodes and tested them in a systematic fashion. Here’s a list of other ROMs I found useful for testing other features, in case you, the reader, shall embark on a similar adventure:

  • Jumping X and O: Tests delay timer, collision detection, out of bounds drawing
  • CHIP-8 Logo: Tests CALL nnn / RET
  • Sierpinski triangle: Slow, tests emulation speed
  • Zero: Animation, tests rendering speed (look for the flicker)
  • Minimal Game: Tests SKP Vx
  • Keypad Test: Tests LD Vx, K, uncovered a bug in the main loop
  • Tetris: Tests SKP Vx, SKNP Vx, playability
  • SC Test: Tests nearly all opcodes and a few Super CHIP-8 ones
  • Font Test: Tests drawing of small and big built-in sprites
  • Robot: Tests drawing of extended sprites
  • Scroll Test: Tests scrolling to the left and right
  • Car Race Demo: Tests scrolling down
  • Car: Tests emulation speed in extended mode
  • Emutest: Tests half-pixel scroll, extended sprites in low-res

Debugging and Analysis

Surprisingly enough, errors and mistakes keep happening. Stepping through execution of each command with edebug gets tiring after a while, even when using breakpoints to skip to the interesting parts. I therefore implemented something I’ve seen in Circe, my preferred IRC client, a logging function which only logs if logging is enabled and writes the logging output to a dedicated buffer. For now it just logs the current value of the program counter and the decoded instruction about to be executed. I’ve added the same kind of logging to a different CHIP-8 emulator, chick-8 by Evan Hanson from the CHICKEN Scheme community. Comparing both of their logs allowed me to quickly spot where they start to diverge, giving me a hint what instruction is faulty.

Looking through the ROM as it is executed isn’t terribly enlightening, it feels like watching through a peephole, not giving you the full picture of what’s about to happen. I started writing a simple disassembler which decodes every two bytes and writes their offset and meaning to a buffer, but stopped working on it after realizing that I have a much more powerful tool at hand to do disassembly and analysis properly: radare2. As it didn’t recognize the format correctly, I only used its most basic featureset for analysis, the hex editor. By displaying the bytes at a width of two per row and searching for hex byte sequences with regex support I was able to find ROMs using specific opcodes easily.

Later, after I’d finished most of the emulator, I started developing a CHIP-8 disassembly and analysis plugin using radare2’s Python scripting support. I ran into a few inconsistencies with the documentation, but eventually figured everything out and got pretty disassembly with arrows visualizing the control flow for jumps and calls.


Later I discovered that radare2 actually does have CHIP-8 support in core; you need to enable it explicitly by adding -a chip8 to the command line arguments, as it cannot be auto-detected that a file is a CHIP-8 ROM. The disassembly support is decent, but the analysis part had a few omissions and mistakes leading to less nice graphs. Using my Python version as a basis, I managed to improve the C version of the analysis plugin to the same level and even surpass it, as the C API allows adding extra metadata to individual instructions, such as inline commentary. There is a pending PR for this functionality now; I expect it to be merged soon.


For maximum speed I set up firestarter to recompile the file on each save, added the directory of the project to load-path, then always launched a new Emacs instance from where I loaded up the package and emulated a ROM file. This is ideal if there isn’t much to test, but it’s hard to detect regressions this way. At some point I decided to give the great buttercup library another try and wrote a set of tests exercising every supported instruction with all edge cases I could think of. For each executed test the VM is initialized, some opcodes are loaded up and chip8-cycle is called as often as needed, while testing the state of the registers and other affected parts of the machinery. It was quite a bit of grunt work due to the repetitive nature of the code, but gave me greater confidence in just messing around with the code as retesting everything took less than a second.

Make no mistake here though, excessively testing the complicated parts of a package (I don’t believe it’s worth it testing the simple parts) is in no way a replacement for normal usage of it which can uncover completely different bugs. This is more of a safety net, to make sure code changes don’t break the most basic features.


Retrospectively, this was quite the ride. Normally you’d pick a suitable game or multimedia library and be done, but this is Emacs, no such luxuries here. Where we go we don’t need libraries.

My favorite way of drawing graphics in Emacs is by creating SVG on the fly using the esxml library. This turned out to be prohibitively expensive, not only did it fail meeting the performance goals, it also generated an excessive amount of garbage as trees were recursively walked and thrown away over and over again. A variation of this is having a template string resembling the target SVG, then replacing parts of it and generating an image from them. I attempted doing this, but quickly gave up as it was too bothersome coming up with suitable identifiers and replacing all of them correctly.

I still didn’t want to just drop the SVG idea. Considering this was basically tiled graphics (with each tile being an oversized pixel), I considered creating two SVG images for white and black tiles respectively, then inserting them as if they were characters on each line. The downside of this approach was Emacs’ handling of line height, I couldn’t figure out how to completely suppress it to not have any kind of gaps in the rendering. gamegrid.el somehow solves it, but has rather convoluted code.

At this point I was ready to go back to plain text. I remembered that faces are a thing and used them to paint the background of the text black and white. No more annoying gaps. With this I could finally work and started figuring out how to improve the rendering. While the simple solution of always erasing the buffer contents and reinserting them again did work, there were plenty of optimization possibilities. The most obvious one was using dirty frame tracking to tell if the screen even needed to be redrawn. In other words, the code could set a chip8-fb-dirty-p flag and if the main loop discovered it’s set, it would do a redraw and unset it. Next up was only redrawing the changed parts. For this I’d keep a copy of the current and previous state of the screen around, compare them, repaint the changed bits and transfer the current to the previous state. To change the pixels in the buffer I’d erase them, then insert the correct ones.
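
A sketch of that diffing repaint (names other than chip8-fb-dirty-p are hypothetical):

(defun chip8-repaint ()
  "Repaint only the pixels that changed since the last frame."
  (when chip8-fb-dirty-p
    (dotimes (i (length chip8-fb))
      (let ((new (aref chip8-fb i)))
        (unless (eq new (aref chip8-fb-old i))
          (chip8--draw-pixel i new)     ; e.g. adjust that cell's face
          (aset chip8-fb-old i new))))  ; sync the old state in place
    (setf chip8-fb-dirty-p nil)))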

The final optimization occurred to me much later, when implementing the Super CHIP-8 instructions. It was no longer possible to play games smoothly at quadrupled resolution, so I profiled and discovered that erasing text was the bottleneck. I considered the situation hopeless, fiddled around with XBM graphics backed by a bit-vector, and had little luck getting them to work nearly as well at low resolution. Only then did it occur to me that I could just change the text properties of existing text instead of replacing it. That fixed all remaining performance issues. Another thing I realized is that anything higher-resolution than this will require extra trickery, maybe even involving C modules.

Garbage Collection Woes

Your code may be fast, your rendering impeccable, but what if every now and then your bouncing letters animation stutters? Congratulations, you’ve run into garbage collection ruining your day. In a language like C it’s much more obvious if you’re about to allocate memory from the heap, in a dynamic language it’s much harder to pin down what’s safe and what’s not. Patterns such as creating new objects on the fly are strictly forbidden, so I tried fairly hard to avoid them, but didn’t completely succeed. After staring hard at the code for a while I found that my code transferring the current to the old screen state was using copy-tree which kept allocating vectors all the time. To avoid this I wrote a memcpy-style function that copied values from one array to another one.
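
That memcpy-style helper could look like this (hypothetical name):

(defun chip8--copy-into (dst src)
  "Copy SRC's elements into DST in place, allocating nothing.
Both vectors must have the same length."
  (dotimes (i (length src))
    (aset dst i (aref src i)))
  dst)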

Another sneaky example was the initialization of the emulator state which assigned zero-filled vectors to the variables. I noticed this one only due to the test runner printing running times of tests. Most took a fraction of a millisecond, but every six or so the test took over 10 milliseconds for no obvious reason. This turned out to be garbage collection again. I rediscovered the fillarray function which behaves much like memset in C, used it in initialization (with the vectors assigned at declaration time instead) and the pauses were gone. No guarantees that this was the last of it, but I haven’t been able to observe other pauses.


If your Emacs has been compiled with sound support there will be a play-sound function. Unfortunately it has a big flaw, as long as the sound is playing Emacs will block, so using it is a non-starter. I’ve initially tried using the visual bell (which inverts parts of the screen) as a replacement, then discovered that it does the equivalent of sit-for and calling it repeatedly in a row will in the worst case of no pending user input wait as long as the intervals combined. There was therefore no easy built-in solution to this. To allow users to plug in their own solution I defined two customizable functions defaulting to displaying and clearing a message: chip8-beep-start-function and chip8-beep-stop-function.

The idea here is that given a suitable, asynchronous function you could kick off a beep, then later stop it. Spawning processes is the one thing you can easily do asynchronously, so if you had a way to control a subprocess to start and stop playing a sound file, that would be a good enough solution. I then remembered that mplayer has a slave mode and that mpv improved it in a multitude of ways, so I looked into the easiest way of remote controlling it. It turns out that mpv did away with slave mode in favor of controlling it via FIFO or a socket. To my surprise I actually made it work via FIFO, the full proof of concept can be found in the README.

User input

The CHIP-8 supports two ways of checking user input: Checking whether a key is (not) pressed (non-blocking) and waiting for any key to be pressed (blocking). Doing this in a game library wouldn’t be worth writing about, but this is Emacs after all, there is only a distinction between key up and down for mouse events. After pondering about this issue for a while I decided to fake it by keeping track of when keys have been last pressed in a generic key handler function, then comparing that timestamp against the current time: If it’s below a reasonable timeout, the key is considered pressed, otherwise it isn’t.
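
A sketch of this timestamp-based approach (all names hypothetical):

(defconst chip8-key-timeout 0.15
  "Seconds after which a key no longer counts as held down.")

(defvar chip8-key-timestamps (make-vector 16 0.0)
  "Time each of the 16 hex keys was last pressed.")

(defun chip8-key-down (key)
  "Generic key handler: record that KEY (0-15) was just pressed."
  (aset chip8-key-timestamps key (float-time)))

(defun chip8-key-pressed-p (key)
  "Non-nil if KEY was pressed recently enough to count as held."
  (< (- (float-time) (aref chip8-key-timestamps key))
     chip8-key-timeout))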

Solving the other problem required far more effort. The emulator was at this point sort of a state machine as I’ve tracked whether it was running with a boolean variable to implement a pause command. I’ve reworked the variable and all code using it to be mindful of the current state: Playing, paused or waiting for user input. This way the command merely changed the current state to waiting for input, set a global variable to the register to be filled with the pressed key and set the stage for the generic key handler function to continue execution. If that function detected the waiting state and a valid key has been pressed, it would record it in the respective register and put the emulator into playing state again.

Actually testing this with a keypad demo ROM revealed a minor bug in the interaction between the main loop and the redrawing logic. Remember that a number of CPU cycles were executed, then a redraw was triggered if needed? Well, imagine that in the middle of those CPU cycles the state changed to waiting and the redraw never happened! This would produce an inconsistent screen state, so I changed it to repaint immediately. Furthermore, if the state changed to waiting, the loop would still execute more cycles than needed (despite it being a blocking wait), so I had to add an extra check to the main loop's fixed amount of cycling: if the state has changed, the rest of the loop iteration is skipped altogether.
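The fixed-up frame logic might look like this (a hedged Python sketch; the cycle budget and function names are assumptions):

```python
CYCLES_PER_FRAME = 10  # assumed fixed cycle budget per timer tick

def run_frame(emu, execute_cycle, repaint):
    # execute up to the cycle budget, but bail out as soon as the
    # state leaves "playing" (e.g. a wait-for-key instruction ran);
    # repainting afterwards ensures no pending screen change is lost
    for _ in range(CYCLES_PER_FRAME):
        if emu.state != "playing":
            break
        execute_cycle(emu)
    repaint(emu)
```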

Super CHIP-8

At this point I was pretty much done with implementing the full CHIP-8 feature set and started playing games like Tetris, Brix and Alien.

[Screenshots: Tetris, Brix and Alien running in the emulator]

Yet I wasn’t satisfied for some strange reason. I probably longed for more distraction and set out to implement the remaining Super CHIP-8 instructions. Unlike the main instruction set, these weren’t nearly as well documented; my main resource was a schip.txt file which briefly describes the extra instructions. The most problematic extension is the extended mode, which doubles the screen dimensions and requires a clever way to draw a bigger or smaller screen whenever it is toggled. There are two ways of implementing such a thing: drawing to one of two separate screen objects and painting the correct one, or alternatively always drawing to a big screen and rendering it downscaled if needed. For simplicity’s sake I went with the first option.
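The first option amounts to keeping one screen object per resolution and switching between them (a Python sketch; the standard CHIP-8 screen is 64x32 and the Super CHIP-8 extended mode is 128x64, the class is my own illustration):

```python
LOW_RES = (64, 32)    # standard CHIP-8 screen
HIGH_RES = (128, 64)  # Super CHIP-8 extended mode

def make_screen(size):
    width, height = size
    return [[0] * width for _ in range(height)]

class Display:
    # one screen object per mode, switched on toggle, rather than
    # always drawing big and downscaling when needed
    def __init__(self):
        self.screens = {LOW_RES: make_screen(LOW_RES),
                        HIGH_RES: make_screen(HIGH_RES)}
        self.mode = LOW_RES

    def current(self):
        return self.screens[self.mode]

    def toggle_extended(self, on):
        self.mode = HIGH_RES if on else LOW_RES
```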

The extra scroll instructions allow game programmers to efficiently change the viewport (though for some reason an instruction for scrolling up was omitted). My challenge here was to change the screen’s contents in place; to do this correctly, extra care was necessary not to accidentally overwrite contents that still had to be moved elsewhere. The trick is to iterate over the screen lines in reverse order when necessary.
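For example, an in-place scroll-down has to fill destination rows from the bottom up, so that a source row is never overwritten before it has been read (a Python sketch of the trick):

```python
def scroll_down(screen, n):
    # move every row n lines down, in place; iterating the
    # destinations in reverse order keeps sources intact, since
    # each source row (y - n) lies above its destination (y)
    height = len(screen)
    width = len(screen[0])
    for y in range(height - 1, -1, -1):
        src = y - n
        screen[y][:] = screen[src] if src >= 0 else [0] * width
```

Scrolling up or left would iterate in the natural order instead, which is why the reverse iteration is only needed "when necessary".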

A few more instructions and optimizations later and I was ready to play the probably silliest arcade game ever conceived, Joust. The sprites in the picture below are supposed to be knights on flying ostriches trying to push each other down with their lances, but they look more like flying rabbits to me.


Other Musings

Writing an emulator gives you great insight into how a machine actually works. Details like memory mapping that you previously glossed over feel far more intuitive once you have to implement them yourself. One of the downsides is that I no longer played games for my own enjoyment, but to further improve the emulator and understand the machine.

A few games and demo ROMs revealed bugs in the emulator, such as in how sprites that go beyond the screen boundaries are drawn. Cowgod’s Chip-8 Technical Reference tells you to wrap them around, but Blitz by David Winter seems to think otherwise: when rendered with wrap-around, the player sprite immediately collides with a pixel on the edge and the “GAME OVER” screen is displayed. In this case I decided to forego that recommendation and clip the rendering at the screen edges.
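The difference between the two behaviors comes down to one branch in the sprite-drawing routine (a Python sketch; the XOR drawing and the VF collision flag are standard CHIP-8, the function shape is my own):

```python
def draw_sprite(screen, x, y, sprite, wrap=False):
    # XOR each set sprite pixel onto the screen; return 1 if any
    # set pixel was erased (the VF collision flag). Each sprite
    # byte encodes one 8-pixel row, most significant bit leftmost.
    height, width = len(screen), len(screen[0])
    collision = 0
    for dy, row in enumerate(sprite):
        for dx in range(8):
            if not (row >> (7 - dx)) & 1:
                continue
            px, py = x + dx, y + dy
            if wrap:
                px, py = px % width, py % height
            elif px >= width or py >= height:
                continue  # clip: drop pixels outside the boundaries
            if screen[py][px]:
                collision = 1
            screen[py][px] ^= 1
    return collision
```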

It’s not always easy to make such decisions. Some quirks seem fairly reasonable, such as preferably setting the VF flag to indicate an overflow/underflow condition for arithmetic, although it’s not always specified. Some quirks seem fairly obscure, such as the interpretation of Super CHIP-8 extensions in low-resolution mode: a demo insists that instead of drawing a high-resolution 16 x 16 sprite, an 8 x 16 sprite should be drawn. As this doesn’t appear to affect any game and requires significant support code, I decided against implementing it. In one case I was conflicted enough between the different interpretations of the bit shifting operators that I introduced a customizable option to toggle between both, with the incorrect but popular behavior being the default.
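For the right-shift instruction 8XY6, such a toggle might look like this (a Python sketch; the flag and names are my assumptions): the original interpreter shifts VY into VX, while most later interpreters, and the ROMs written against them, shift VX in place.

```python
shift_ignores_vy = True  # the incorrect but popular behavior as default

def op_shr(v, x, y):
    # 8XY6: shift right by one; VF receives the shifted-out bit
    src = v[x] if shift_ignores_vy else v[y]
    v[0xF] = src & 1
    v[x] = src >> 1
```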

-1:-- Smooth Video Game Emulation in Emacs (Post Vasilij Schneidermann)--L0--C0--February 21, 2019 08:01 PM

Da Zhang: Set_FF_default_browser

How to set Firefox as the default browser under Windows 10 when it does not show up in the default program list under “Default apps”


1 Summary: after reinstalling Firefox (version 65.01, 64-bit), I could not set it as the default browser of Windows 10. It turned out that the registry entries for Firefox were messed up.

1.1 symptoms

  • under Windows 10, Firefox did not appear in the program list under “Default apps”
  • after the fix, Firefox reappeared in the program list

1.2 the two major Firefox registry entries are: FirefoxHTML, FirefoxURL

1.3 detailed settings

  • FirefoxHTML
    Windows Registry Editor Version 5.00
    @="Firefox Document"
    "FriendlyTypeName"="Firefox Document"
    @="C:\\Program Files\\Mozilla Firefox\\firefox.exe,1"
    @="\"C:\\Program Files\\Mozilla Firefox\\firefox.exe\" -osint -url \"%1\""
  • FirefoxURL
    Windows Registry Editor Version 5.00
    @="Firefox URL"
    "FriendlyTypeName"="Firefox URL"
    "URL Protocol"=""
    @="C:\\Program Files\\Mozilla Firefox\\firefox.exe,1"
    @="\"C:\\Program Files\\Mozilla Firefox\\firefox.exe\" -osint -url \"%1\""

1.4 how I figured out that this was the issue

  • I compared the settings of Chrome, which also has ChromeHTML and ChromeURL
  • I also found the FirefoxHTML and FirefoxURL entries, but the original settings were strange (the “kernel32::GetLongPathNameW” lines below):
    • FirefoxHTML
      Windows Registry Editor Version 5.00
      @="Firefox HTML Document"
      "FriendlyTypeName"="Firefox HTML Document"
      @="C:\\Program Files\\Mozilla Firefox\\firefox.exe,0"
      @="\"kernel32::GetLongPathNameW(w R8, w .R7, i 1024)i .R6\" -osint -url \"%1\""
    • FirefoxURL
      Windows Registry Editor Version 5.00
      @="Firefox URL"
      "FriendlyTypeName"="Firefox URL"
      "URL Protocol"=""
      @="kernel32::GetLongPathNameW(w R8, w .R7, i 1024)i .R6,1"
      @="\"kernel32::GetLongPathNameW(w R8, w .R7, i 1024)i .R6\" -osint -url \"%1\""
  • I also found in the registry another pair, FirefoxHTML-XXXXX and FirefoxURL-XXXXX (with XXXXX being a random alphanumeric string). So I thought: maybe the messy FirefoxHTML and FirefoxURL entries were left over from the old installation, and the -XXXXX versions were the correct ones.
    • I then backed up the messy registry entries, removed them, and renamed the -XXXXX versions to FirefoxHTML and FirefoxURL
    • and the problem was solved
-1:-- Set_FF_default_browser (Post zhangda)--L0--C0--February 18, 2019 05:23 PM

emacsair: Introducing Transient Commands

I am excited to announce the initial release of Transient, which is a replacement for Magit-Popup.
-1:-- Introducing Transient Commands (Post)--L0--C0--February 14, 2019 09:40 PM

Rubén Berenguel: 2019-1 Readings of the week

If you know me, you'll know I have a very extensive reading list. I keep it in Pocket, and it is part of my to-do list stored in Things3. It used to be very large (hovering around 230 items since August), but during Christmas it got out of control, reaching almost 300 items. That was too much, and I set myself a goal for 2019 to keep it trimmed and sweet. And indeed, since the beginning of the year I have read or canceled 171 articles (122 in the past week, 106 of which were read). That's a decently sized book!

To help me in this goal, I'll (hopefully) be writing a weekly post about what interesting stuff I have read the past week. Beware, this week may be a bit larger than usual, since I wanted to bring the numbers down as fast as possible.

NOTE: The themes are varied. Software/data engineering, drawing, writing. Expect a similar wide range in the future as well.

The Nature of Infinity, and Beyond – Cantor’s Paradise

A short tour through the life of Georg Cantor and his quest for proving the continuum hypothesis. In the end, he was vindicated.

Statistical rule of three

What is a decent estimate of something that hasn't happened yet? Find the answer here.

Apache Arrow: use of jemalloc

A short technical post detailing why Arrow moved to jemalloc for memory allocation.

Subpixel Text Encoding

This is... unexpected. A font that is 1 pixel wide.

Solving murder with Prolog

I have always been a fan of Prolog, and this is a fun and understandable example if you have never used it.

What Parkour Classes Teach Older People About Falling

Interesting. I'm still young, but I'll keep this in mind for the future.

Implementing VisiCalc

The detailed story about how VisiCalc (the first spreadsheet) was written.

The military secret to falling asleep in two minutes

I was actually doing something similar since I was about 12. It might be a stretch to say 2 minutes, but it works.

Index 1,600,000,000 Keys with Automata and Rust

Super interesting (and long) post about how FSA and FST are used for fast search in Rust (I'm a bit into Rust lately). Also, BurntSushi's (Andrew Gallant, the author) cat is called Cauchy, something I appreciate as my cat is named Fatou.

How to Draw from Imagination: Beyond References

An excellent piece on gesture drawing and improving your technique.

Anatomy of a Scala quirk

All languages have their WAT, it's harder to find them in Scala though.

Chaotic attractor reconstruction

An easy example in Python of Takens' embedding theorem

Hello, declarative world

An exploration between imperative and functional, and how declarative fits the landscape

Python with Context Managers

Although I have written tons of Python, I never took the time to either write or understand how context managers work. This one was good.

Raymond Chandler's Ten Commandments For the Detective Novel

You never know when you may write a detective novel. Ruben and the case of the dead executor

Seven steps to 100x faster

An optimisation tour of a piece of code written in Go, from data structures to allocation pressure.

Writing a Faster Jsonnet Compiler

A semi-technical post by Databricks about Jsonnet and why they wrote their own compiler. Serves as an introduction to Jsonnet ("compilable JSON") as well.


Monoid font and Poet Emacs theme

Today I switched from Solarized Dark and Fira Code Pro to the above. It looks interesting.


I’m considering converting this into a weekly newsletter in addition to a blog post. These days (since RSS went into limbo) most of my regular information comes from several newsletters I’m subscribed to, instead of me going directly to a blog. If this is also your case, subscribe by clicking here and if enough people join I’ll send these every Sunday night or so.
-1:-- 2019-1 Readings of the week (Post Ruben Berenguel ( 20, 2019 06:16 PM

Emacs Redux: Emacs Prelude Gets a User Manual

This is going to be a really short post.

For ages I’ve planned to create a user manual for Emacs Prelude, but I never got around to doing so. Writing a good manual is a huge amount of work, and I was wary of (and too lazy for) committing to it. Today I realized that having some manual (even if it’s not good) probably beats having no manual, so I quickly sliced and diced the project’s old huge README and compiled it into this proto-manual.

I’ll try to find some time to improve it, but I can make no promises. You can certainly help me out by working on the manual yourselves, though. Prelude is pretty simple, so I assume that everyone, who has spent a bit of time using it, is more than qualified to work on improving its documentation. By the way, the manual features a dedicated section for working on the manual! Coincidence? I don’t think so.

Keep hacking!

-1:-- Emacs Prelude Gets a User Manual (Post Bozhidar Batsov)--L0--C0--January 16, 2019 06:06 PM

Emacs Redux: The Emacs Year in Review

This post is a brief summary of the past year for Emacs from my perspective.

Emacs 26.1

Probably the biggest news of the year was the release of Emacs 26.1. The highlights of the release were:

  • Limited form of concurrency with Lisp threads
  • Support for optional display of line numbers in the buffer
  • Emacs now uses double buffering to reduce flicker on the X Window System
  • Flymake has been completely redesigned
  • TRAMP has a new connection method for Google Drive
  • New single-line horizontal scrolling mode
  • A systemd user unit file is provided
  • Support for 24-bit colors on capable text terminals

Frankly, for me that release was a bit underwhelming as I won’t see much improvement in my day-to-day use of Emacs. I’m on macOS, I don’t use Flymake, I don’t use TRAMP, I don’t like seeing line numbers and I don’t use Emacs in terminal mode. What a bummer, right?

I’m excited that we finally got some limited form of concurrency, though. Probably this is going to become important in a few years, as Emacs packages start adopting it. There are also plenty of other small and very useful improvements in this release. Mickey (from “Mastering Emacs”) goes over the release notes here in greater detail. Maybe I’ll do something similar in the future if I ever find the time for it.

My Emacs Packages

It was a super busy year at work for me, but still I got to release new versions of most of my Emacs Packages. I’m really proud of releasing several big CIDER updates, and of Projectile making it to version 1.0 (and recently 2.0)! By the way - this was the first of my bigger projects that made it to 1.0!

Things were quieter on the Prelude front, but I think it’s pretty good, useful and stable in its current form. With everyone these days trying to pile every possible feature and package in an Emacs distribution, one has to appreciate the tenets of Prelude’s philosophy:

  • simple
  • easy to understand and extend
  • a foundation for you to build upon, as opposed to some end-user product

Probably I should expand on this down the road as well…

Overall, I’ve gotten to a point where I don’t have time to properly maintain all of my projects and it seems that I’ll have to focus on fewer of them in the future and solicit help from other people for the packages I can’t find enough time for.

Emacs Redux

I didn’t write much here last year, but at least I managed to overhaul the blog’s visuals and simplify its setup. Moving from Octopress to Jekyll really simplified things and I hope this will result in more articles down the road.

I’ve also started a new personal blog - Meta Redux. You’re more than welcome to check it out!

Emacs Packages

I don’t recall many new Emacs packages that made the news in 2018. I think I was most excited about ELSA - a brand new Emacs Lisp Static Analyzer. I’ve also noticed that many people were excited about LSP in Emacs, and the older lsp-mode got some competition in the form of eglot. As all of the programming I do these days is in Emacs Lisp or Clojure, I don’t really need LSP (generally LSP makes little sense for REPL-driven programming), but it’s great that things are making headway there.

Many of the great Emacs packages became even greater this year - e.g. Magit, company-mode, avy, ivy, etc.

I didn’t pick up many new (for me) packages this year. I can think only of:

  • After many years of using ido I migrated to ivy and I’m super happy with it
  • I’d dropped my custom comment annotations highlighting code in favour of hl-todo
  • I’ve rediscovered easy-kill
  • I’ve discovered how awesome AsciiDoc is (so much better than Markdown for writing technical documentation!!!) and I’ve started using adoc-mode. Unfortunately it’s somewhat buggy and incomplete, and it hasn’t seen a commit in 3 years. It’d be great if we had a better AsciiDoc mode for Emacs!

I’ll also add a shoutout here for Buttercup - the best testing library Emacs has to offer today! It’s so much better and easier to use than ERT, that I’m shocked so few people have discovered it yet. It definitely deserves a blog post or two!

As usual the packages I relied on the most this year were my own Projectile and crux.

My color theme is forever Zenburn. I’ve used it for over a decade and I still can’t find an alternative so appealing that it would make me switch!


MELPA really crushed it this year and solidified its position as the only package.el repo that really matters. At this point it’s like Homebrew for macOS - it has alternatives, but relatively few people are using them. I’m happy about the consolidation of the package repo scene, but I’m a bit worried that most people are still installing snapshots instead of stable package releases.

Of course, that’s not on MELPA - it’s on package maintainers, who have to adopt a more disciplined approach to releases.


I really loved reading “The Evolution of Emacs Lisp” paper by Emacs’s former head maintainer Stefan Monnier and Michael Sperber. I’ve been using Emacs for 15 years now and I still learned a lot of new things about the history of the language and the rationale behind certain design decisions. It was also a bit depressing to see how long certain features have been in development without ever making it into a stable Emacs release, but it is what it is…

As usual, throughout the year my best source of Emacs news and updates was Emacs News by Sacha Chua. I can’t recommend it highly enough!

Looking Forward

Frankly, there’s nothing big I’m looking forward to in 2019. I’d love for Emacs to finally start better supporting operating systems like Windows and macOS, but I know that’s a pipe dream at this point. Still, the need for something like an Emacs Mac Port to exist makes me sad. Users of all operating systems should get a great user experience, regardless of whether their OS is free or proprietary.

I do hope that more of the packages bundled with Emacs today would be moved to GNU ELPA, so they can be released separately and more frequently. Emacs’s core should become smaller IMO and the big focus of the Core Team should be improving the basic editing experience, UI and the core API libraries. And, of course, Emacs Lisp itself. I really don’t think that Emacs will ever replace Emacs Lisp with Common Lisp or Scheme (Guile), so we’d better develop a better version of Emacs Lisp instead.

By the way, I think it might be nice if the Emacs Team started running an annual “State of Emacs” survey, similar in spirit to, say, the “State of Clojure” survey. The results of such a survey could help the Emacs Team decide where to focus their efforts. They’d also be a great gauge of the progress Emacs is making towards solving the problems its users are facing and providing the functionality they need.

On a personal note I hope that I’ll write a few more articles here in 2019 and that I’ll manage to get CIDER to the coveted 1.0 release and Projectile to the next level. I’m also planning to work a bit on project.el, so it’d play nicer with Projectile and provide a better foundation for tools like Projectile.

I also have some very vague plans to work on improving erlang-mode, take a stab at creating a new asciidoc-mode, and maybe play a bit with Haskell-related packages (if I finally manage to learn Haskell, that is). Time will tell whether any of this is going to happen.

I’ll try to be more active here, but I’m not making any promises. The last couple of years really drained me, and as much as I’d love to work on many things, it would probably be best for me not to work on anything at all.

Closing Thoughts

So, that was Emacs’s 2018 from my perspective. What about yours?

I’d really love to hear what the Emacs highlights of the year were from your perspective. Please share in the comments what you were most excited about in 2018 and what you are looking forward to in the future.

Emacs had another great year and it’s all because of you - the great community around it! In M-x we trust!

-1:-- The Emacs Year in Review (Post Bozhidar Batsov)--L0--C0--January 10, 2019 08:37 AM