Alvaro Ramirez: chatgpt-shell goes multi-model

19 November 2024

Over the last few months, I've been chipping away at implementing chatgpt-shell's most requested and biggest feature: multi-model support. Today, I get to unveil the first two implementations: Anthropic's Claude and Google's Gemini.

Changing course

In the past, I envisioned a different path for multi-model support. By isolating shell logic into a new package (shell-maker), folks could use it as a building block to create new shells (adding support for their favourite LLM).

While each shell-maker-based shell currently shares a basic common experience, I did not foresee how the minor differences between them would affect the overall Emacs user experience. Learning the quirks of each new shell felt like unnecessary friction in developing muscle memory. I had also become dependent on chatgpt-shell features, which I often missed when using other shells.

Along with slightly different shell experiences, we currently require multiple package installations (and setups). Depending on which camp you're in (batteries included vs. fine-grained control), this may or may not be a downside.

With every new chatgpt-shell feature I showcased, I was often asked if it could be used with other LLM providers. I typically answered with "I've been meaning to work on this…" or "I heard you can do multi-model chatgpt-shell using a bridge like liteLLM". Neither of these were great answers, resulting in me just postponing the chunky work.

Eventually, I bit the bullet, changed course, and got to work on multi-model support. With my initial plan to spin up multiple shells via shell-maker, chatgpt-shell's implementation didn't exactly lend itself to supporting multiple clients. Long story short, chatgpt-shell multi-model support required quite a bit of work. This is where I divert to ask you to help make this project sustainable by sponsoring the work.

Make this project sustainable

Maintaining, experimenting, implementing feature requests, and supporting open-source packages takes work. Today, chatgpt-shell has over 20.5K downloads on MELPA and many untracked others elsewhere. If you're one of the happy users, consider sponsoring the project. If you see potential, help fuel development by sponsoring too.

Perhaps you enjoy some of the content I write about? Find my Emacs posts/tips useful?

Alternatively, you want a blogging platform that skips the yucky side effects of the modern web?

Maybe you enjoy one of my other projects?

So, umm… I'll just leave my GitHub sponsor page here.

chatgpt-shell, more than a shell

With chatgpt-shell being a comint shell, you can bring your favourite Emacs flows along.

As I used chatgpt-shell myself, I kept experimenting with different integrations and improvements. Read on for some of my favourites…

A shell hybrid

chatgpt-shell includes a compose buffer experience. This is my favourite and most frequently used mechanism to interact with LLMs.

For example, select a region and invoke M-x chatgpt-shell-prompt-compose (C-c C-e is my preferred binding), and an editable buffer automatically copies the region and enables crafting a more thorough query. When ready, submit with the familiar C-c C-c binding. The buffer automatically becomes read-only and enables single-character bindings.
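The post doesn't show the binding itself, so here is a minimal sketch (my own, assuming chatgpt-shell is installed and loaded) of wiring up the C-c C-e binding mentioned above:

```elisp
;; A minimal sketch, assuming chatgpt-shell is installed and loaded.
;; Binds the compose buffer command to C-c C-e, as mentioned above.
(global-set-key (kbd "C-c C-e") #'chatgpt-shell-prompt-compose)
```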

Navigation: n/p (or TAB/shift-TAB)

Navigate through source blocks (including previous submissions in history). Source blocks are automatically selected.

Reply: r

Reply with follow-up requests using the r binding.

Give me more: m

Want to ask for more of the same data? Press m to request more of it. This is handy for following up on any kind of list (suggestions, candidates, results, etc.).

Request entire snippets: e

LLM being lazy and returning partial code? Press e to request the entire snippet.

Quick quit: q

I'm a big fan of quickly disposing of Emacs buffers with the q binding. chatgpt-shell compose buffers are no exception.

Confirm inline mods (via diffs)

Request inline modifications, with explicit confirmation before accepting.

Execute snippets (a la org babel)

Both the shell and the compose buffers enable users to execute source blocks via C-c C-c, leveraging org babel.

Vision experiments

I've been experimenting with image queries (currently ChatGPT only, please sponsor to help bring support for others).

Below is a handy integration to extract Japanese vocabulary. There's also a generic image descriptor available via M-x chatgpt-shell-describe-image that works on any Emacs image (via dired, image buffer, point on image, or selecting a desktop region).

Supporting new models

Your favourite model not yet supported? File a feature request. You also know how to fuel the project. Want to contribute new models? Reach out.

Local models

While the two new implementations rely on cloud APIs, local services are now possible. I've yet to use a local LLM, but please reach out, so we can make these happen too. Want to contribute?

Should chatgpt-shell rename?

With chatgpt-shell going multi-model, it's not unreasonable to ask if this package should be renamed. Maybe it should. But that's additional work we can likely postpone for the time being (and avoid pushing users to migrate). For now, I'd prefer focusing on polishing the multi-model experience and work on ironing out any issues. For that, I'll need your help.

Take Gemini and Claude for a spin

Multi-model support required chunky structural changes. While I've been using it myself, I'll need wider usage to uncover issues. Please take it for a spin and file bugs or give feedback. Or if you just want to ping me, I'd love to hear about your experience ( Mastodon / Twitter / Reddit / Email).

  • Be sure to update to chatgpt-shell v2.0.1 and shell-maker v0.68.1 as a minimum.
  • Set chatgpt-shell-anthropic-key or chatgpt-shell-google-key.
  • Swap models with M-x chatgpt-shell-swap-model-version, or set a default with (setq chatgpt-shell-model-version "claude-3-5-sonnet-20240620") or (setq chatgpt-shell-model-version "gemini-1.5-pro-latest").
  • Everything else should just work 🤞😅
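Putting the steps above together, a minimal init-file sketch might look like this (the key values below are placeholders; the variable and command names come from the post itself):

```elisp
;; Minimal setup sketch based on the steps above. Key values are
;; placeholders; use your own API keys (or an auth-source lookup).
(setq chatgpt-shell-anthropic-key "sk-ant-...")
(setq chatgpt-shell-google-key "AIza...")

;; Pick a default model, or swap interactively with
;; M-x chatgpt-shell-swap-model-version.
(setq chatgpt-shell-model-version "claude-3-5-sonnet-20240620")
```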

Happy Emacsing!

Posted November 20, 2024 03:59 PM

TAONAW - Emacs and Org Mode: Irreal on my writing habits

Irreal commented on my recent posts about writing (analog vs digital). Irreal doesn’t understand why I’m hesitant:

To be honest, I don’t understand his ambivalence about the matter. He lays out the case for both and shows that, except for a vague feeling of attraction to writing with pen and paper, the digital method is more efficient and satisfying. The digital product is so much more useful and flexible that it seems there should be no question as to which to use.

Spoken like a true sysadmin. But he’s mostly right.

Pen and paper convey an intimate feeling and a connection to what I write that I can’t get out of typing on the keyboard. It’s not about how fast or clear it is. But that’s the thing, it’s a feeling. At the end of the day, if I need to capture information and have it available to me whenever I need it, digital wins by a large margin.

Over the last two weeks, I’ve started to reap the benefits of returning to digital and fully utilizing org-mode.

Meeting notes full of details, organized by date and time; projects I'm working on broken down into smaller, manageable tasks; floating emails and quick reminders tied into a workflow I can find later, so they don't get forgotten or lost. I can slowly breathe again, and I'm starting to find the fun in work again.

Meanwhile, I’m also able to write more on personal events. I don’t have to fully reflect on every event, as I would do in the hand-written journal. Instead, I now have an option of including a list of places I visited with a friend last night or perhaps a picture showing a fun activity. Sure, I could do that in my written journal, but it feels too special: I don’t want to “waste” the page on a simple list of locations. Digital just makes more sense for that, since my agenda with its events listed with details is not the same as my journal.

I don’t know, I guess we’ll see. I do miss the idea of the written journal just enough to pick it up again sooner rather than later.

Posted November 20, 2024 01:59 PM

The Emacs Cat: Some Excerpts From My Emacs Config

I’m happy to be back after one year away and it feels great.

Below are some chaotic mini/micro – or even nano – excerpts from my ~/.emacs file, which I have been tuning for 12 years. These days, I'm running Emacs 29.4 on Ubuntu (Pop!_OS) 22.04 and, rarely, on macOS.

Posted November 20, 2024 10:52 AM

Sacha Chua: Updating my audio braindump workflow to take advantage of WhisperX

I get word timestamps for free when I transcribe with WhisperX, so I can skip the Aeneas alignment step. That means I can update my previous code for handling audio braindumps.

Breaking the transcript up into sections

Also, I recently updated subed-word-data to colour words based on their transcription score, which draws my attention to things that might be uncertain.

Here's what it looks like when I have the post, the transcript, and the annotated PDF.

Figure 1: Screenshot of draft, transcript, and PDF

Here's what I needed to implement my-audio-braindump-from-whisperx-json (plus some code from my previous audio braindump workflow):

(defun my-whisperx-word-list (file)
  (let* ((json-object-type 'alist)
         (json-array-type 'list))
    (seq-mapcat (lambda (seg)
                  (alist-get 'words seg))
                (alist-get 'segments (json-read-file file)))))

;; (seq-take (my-whisperx-word-list (my-latest-file "~/sync/recordings" "\\.json")) 10)
(defun my-whisperx-insert-word-list (words)
  "Insert WORDS with text properties."
  (require 'subed-word-data)
  (mapc (lambda (word)
          (let ((start (point)))
            (insert (alist-get 'word word))
            (subed-word-data--add-word-properties start (point) word)
            (insert " ")))
        words))

(defun my-audio-braindump-turn-sections-into-headings ()
  (interactive)
  (goto-char (point-min))
  (while (re-search-forward "START SECTION \\(.+?\\) STOP SECTION" nil t)
    (replace-match
     (save-match-data
       (format
        "\n*** %s\n"
        (save-match-data (string-trim (replace-regexp-in-string "^[,\\.]\\|[,\\.]$" "" (match-string 1))))))
     nil t)
    (let ((prop-match (save-excursion (text-property-search-forward 'subed-word-data-start))))
      (when prop-match
        (org-entry-put (point) "START" (format-seconds "%02h:%02m:%02s" (prop-match-value prop-match)))))))

(defun my-audio-braindump-split-sentences ()
  (interactive)
  (goto-char (point-min))
  (while (re-search-forward "[a-z]\\. " nil t)
    (replace-match (concat (string-trim (match-string 0)) "\n"))))

(defun my-audio-braindump-restructure ()
  (interactive)
  (goto-char (point-min))
  (my-subed-fix-common-errors)
  (org-mode)
  (my-audio-braindump-prepare-alignment-breaks)
  (my-audio-braindump-turn-sections-into-headings)
  (my-audio-braindump-split-sentences)
  (goto-char (point-min))
  (my-remove-filler-words-at-start))

(defun my-audio-braindump-from-whisperx-json (file)
  (interactive (list (read-file-name "JSON: " "~/sync/recordings/" nil nil nil (lambda (f) (string-match "\\.json\\'" f)))))
  ;; put them all into a buffer
  (with-current-buffer (get-buffer-create "*Words*")
    (erase-buffer)
    (fundamental-mode)
    (my-whisperx-insert-word-list (my-whisperx-word-list file))
    (my-audio-braindump-restructure)
    (goto-char (point-min))
    (switch-to-buffer (current-buffer))))

(defun my-audio-braindump-process-text (file)
  (interactive (list (read-file-name "Text: " "~/sync/recordings/" nil nil nil (lambda (f) (string-match "\\.txt\\'" f)))))
  (with-current-buffer (find-file-noselect file)
    (my-audio-braindump-restructure)
    (save-buffer)))
;; (my-audio-braindump-from-whisperx-json (my-latest-file "~/sync/recordings" "\\.json"))

Ideas for next steps:

  • I can change my processing script to split up the Whisper TXT into sections and automatically make the PDF with nice sections.
  • I can add reminders and other callouts. I can style them, and I can copy reminders into a different section for easier processing.
  • I can look into extracting PDF annotations so that I can jump to the next highlight or copy highlighted text.
This is part of my Emacs configuration.
View org source for this post
Posted by Sacha Chua, November 19, 2024 01:33 PM

Irreal: Digital Vs. Analog Notes

Over at The Art Of Not Asking Why, JTR has a couple of posts that explore his struggle with deciding between digital and analog note taking. None of you will be surprised where I come down on the issue—I’m all in on living a digital life and eschew using pen and paper as much as I can—but it’s informative to read about JTR’s thought process about how he decides which method to use.

To be honest, I don’t understand his ambivalence about the matter. He lays out the case for both and shows that, except for a vague feeling of attraction to writing with pen and paper, the digital method is more efficient and satisfying. The digital product is so much more useful and flexible that it seems there should be no question as to which to use.

One thing he says that really resonated with me is that if he writes a lot with a pen, his hand cramps. That definitely happens to me too. Related to that is speed.

When I was still young and hunting and pecking on an actual typewriter, an adult told me that it was really hard to type faster than you can write by hand. That seems laughable to me now. I can type much faster than I can write. A lot of that is probably because my handwriting is so bad that I print everything, but it's still a fact.

I, too, like the idea of sitting down with a beautiful paper journal and good pen but the results aren’t that useful. I can’t back them up. I can’t carry them on my iPhone. I can’t easily link them to and from other notes.

I write virtually everything in Org mode. The main exception is the memo book that resides on my iPhone and I simply import those notes directly into an Org mode table so that they, too, end up living in Org. All of this is automatically backed up, searchable, portable to my iPhone, and easy to link to.

If you really need the feel of pen on paper, take up calligraphy. It will satisfy your need for handwriting without sacrificing the usefulness of your notes.

Posted by jcs, November 18, 2024 05:07 PM

Sacha Chua: 2024-11-18 Emacs news

Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, r/planetemacs, Mastodon #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, communick.news, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

Posted by Sacha Chua, November 18, 2024 02:29 PM

James Dyer: Reducing Friction when Copying Whole Buffer To Kill Ring

Just a quick one today.

In keeping with the ongoing effort to reduce friction in the venerable Emacs text editor, I realized that a common action I often perform is copying the entire contents of the buffer, usually for pasting elsewhere.

To perform this, I have chained together some Emacs commands, namely (mark-whole-buffer) and then (kill-ring-save).

The problem with pushing the buffer to the kill ring in this manner is that I lose the current cursor position/point and end up using isearch to navigate my way back. Strangely, it is only recently that I have found this annoying!

There are a few options to address this:

  • Use Emacs marks
  • Create a macro
  • Create a defun

Initially I tried the mark-setting option: C-<SPC> C-<SPC> to set the mark at the current position, and then C-u C-<SPC> to pop back to my previously set mark. The only issue is that (mark-whole-buffer) creates a mark at the end of the selected region, so my first mark pop lands at that position and I have to pop again.

The benefit of this approach is that I will start becoming more familiar with setting marks and navigating more efficiently within Emacs, which I really think I should learn. However, it all feels a little clunky, and you know what? I’m just going to write a simple elisp defun and bind it.

save-excursion, in this case, can be extremely useful!

(defun my/copy-buffer-to-kill-ring ()
  "Copy the entire buffer to the kill ring without changing the point."
  (interactive)
  (save-excursion
    (kill-ring-save (point-min) (point-max))))

(bind-key* (kbd "M-s z") #'my/copy-buffer-to-kill-ring)
Posted by James Dyer, November 18, 2024 10:24 AM

Marcin Borkowski: Discovering functions and variables in Elisp files

Sometimes I have an Elisp file which I suspect contains some useful functions. Even if the file is well-documented (for example, it belongs to Emacs itself), that does not mean that every function in it is described in the manual. What I need in such a case is a list of functions and variables (possibly also macros) defined in this file.
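The post stops short of showing code, but one way to sketch such a listing (my own sketch, not from the post) is to read the file form by form and collect the names of top-level definitions:

```elisp
;; A sketch (not from the post): collect (TYPE . NAME) pairs for
;; top-level defun/defmacro/defvar/defconst/defcustom forms in FILE
;; by reading the buffer form by form until end of file.
(defun my-elisp-list-definitions (file)
  "Return a list of (TYPE . NAME) for top-level definitions in FILE."
  (with-temp-buffer
    (insert-file-contents file)
    (let (defs form)
      (condition-case nil
          (while t
            (setq form (read (current-buffer)))
            (when (and (consp form)
                       (memq (car form)
                             '(defun defmacro defvar defconst defcustom))
                       (symbolp (cadr form)))
              (push (cons (car form) (cadr form)) defs)))
        (end-of-file nil))
      (nreverse defs))))
```

Note that this only sees literal top-level forms; definitions generated by macros would need a different approach.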
Posted November 18, 2024 05:41 AM

Sacha Chua: Changing Org Mode underlines to the HTML mark element

Apparently, HTML has a mark element that is useful for highlighting. ox-html.el in Org Mode doesn't seem to export that yet. I don't use _ to underline things because I don't want that confused with links. Maybe I can override org-html-text-markup-alist to use it for my own purposes…

(with-eval-after-load 'org
  (setf (alist-get 'underline org-html-text-markup-alist)
        "<mark>%s</mark>"))

Okay, let's try it with:

Let's see _how that works._

Let's see how that works. Oooh, that's promising.

Now, what if I want something fancier, like the way it can be nice to use different-coloured highlighters when marking up notes in order to make certain things jump out easily? A custom link might come in handy.

(defun my-org-highlight-export (link desc format _)
  (pcase format
    ((or '11ty 'html)
     (format "<mark%s>%s</mark>"
             (if link
                 (format " class=\"%s\"" link)
               "")
             desc))))
(with-eval-after-load 'org
  (org-link-set-parameters "hl" :export 'my-org-highlight-export))

A green highlight might be good for ideas, while red might be good for warnings. (Idea: I wonder how to font-lock them differently in Emacs…)

I shouldn't rely only on the colours, since people reading through RSS won't get them and also since some people are colour-blind. Still, the highlights could make my blog posts easier to skim on my website.

Of course, now I want to port Prot's excellent colours from the Modus themes over to CSS variables so that I can have colours that make sense in both light mode and dark mode. Here's a snippet that exports the colours from one of the themes:

(format ":root {\n%s\n}\n"
        (mapconcat
         (lambda (entry)
           (format "  --modus-%s: %s;"
                   (symbol-name (car entry))
                   (if (stringp (cadr entry))
                       (cadr entry)
                     (format "var(--modus-%s)" (symbol-name (cadr entry))))))
         modus-operandi-palette
         "\n"))

So now my style.css has:

/* Based on Modus Operandi by Protesilaos Stavrou */
:root {
   // ...
   --modus-bg-red-subtle: #ffcfbf;
   --modus-bg-green-subtle: #b3fabf;
   --modus-bg-yellow-subtle: #fff576;
   // ...
}
@media (prefers-color-scheme: dark) {
   /* Based on Modus Vivendi by Protesilaos Stavrou */
   :root {
      // ...
      --modus-bg-red-subtle: #620f2a;
      --modus-bg-green-subtle: #00422a;
      --modus-bg-yellow-subtle: #4a4000;
      // ...
   }
}
mark { background-color: var(--modus-bg-yellow-subtle) }
mark.green { background-color: var(--modus-bg-green-subtle) }
mark.red { background-color: var(--modus-bg-red-subtle) }

Interesting, interesting…

Posted by Sacha Chua, November 17, 2024 09:44 PM

J.e.r.e.m.y B.r.y.a.n.t: Basics of Emacs manual documentation

I introduce the basics of Emacs manual documentation, how to open the manuals, reference the manuals, and print the manuals. (...)
Posted November 17, 2024 04:30 PM

Sacha Chua: Checking caption timing by skimming with Emacs Lisp or JS

Sometimes automatic subtitle timing tools like Aeneas can get confused by silences, extraneous sounds, filler words, mis-starts, and text that I've edited out of the raw captions for easier readability. It's good to quickly check each caption. I used to listen to captions at 1.5x speed, watching carefully as each caption displayed. This took a fair bit of time and focus, so… it usually didn't happen. Sampling the first second of each caption is faster and requires a little less attention.

Skimming with subed.el

Here's a function that I wrote to play the first second of each subtitle.

(defvar my-subed-skim-msecs 1000 "Number of milliseconds to play when skimming.")
(defun my-subed-skim-starts ()
  (interactive)
  (subed-mpv-unpause)
  (subed-disable-loop-over-current-subtitle)
  (catch 'done
    (while (not (eobp))
      (subed-mpv-jump-to-current-subtitle)
      (let ((ch
             (read-char "(q)uit? " nil (/ my-subed-skim-msecs 1000.0))))
        (when ch
          (throw 'done t)))
      (subed-forward-subtitle-time-start)
      (when (and subed-waveform-minor-mode
                 (not subed-waveform-show-all))
        (subed-waveform-refresh))
      (recenter)))
  (subed-mpv-pause))

Now I can read the lines as the subtitles play, and I can press any key to stop so that I can fix timestamps.

Skimming with JavaScript

I also want to check the times on the Web in case there have been caching issues. Here's some JavaScript to skim the first second of each cue in the first text track for a video, with some code to make it easy to process the first video in the visible area.

function getVisibleVideo() {
  const videos = document.querySelectorAll('video');
  for (const video of videos) {
    const rect = video.getBoundingClientRect();
    if (
      rect.top >= 0 &&
      rect.left >= 0 &&
      rect.bottom <= (window.innerHeight || document.documentElement.clientHeight) &&
      rect.right <= (window.innerWidth || document.documentElement.clientWidth)
    ) {
      return video;
    }
  }
  return null;
}

async function skimVideo(video=getVisibleVideo(), msecs=1000) {
  // Get the first text track (assumed to be captions/subtitles)
  const textTrack = video.textTracks[0];
  if (!textTrack) return;
  const remaining = [...textTrack.cues].filter((cue) => cue.endTime >= video.currentTime);
  video.play();
  // Play the first 1 second of each visible subtitle
  for (let i = 0; i < remaining.length && !video.paused; i++) {
    video.currentTime = remaining[i].startTime;
    await new Promise((resolve) => setTimeout(resolve, msecs));
  }
}

Then I can call it with skimVideo();. Actually, in our backstage area, it might be useful to add a Skim button so that I can skim things from my phone.

function handleSkimButton(event) {
  const vid = event.target.closest('.vid').querySelector('video');
  skimVideo(vid);
}

document.querySelectorAll('video').forEach((vid) => {
  const div = document.createElement('div');
  const skim = document.createElement('button');
  skim.textContent = 'Skim';
  div.appendChild(skim);
  vid.parentNode.insertBefore(div, vid.nextSibling);
  skim.addEventListener('click', handleSkimButton);
});

Results

How much faster is it this way?

Some code to help figure out the speedup
(-let* ((files (directory-files "~/proj/emacsconf/2024/cache" t "--main\\.vtt"))
        ((count-subs sum-seconds)
         (-unzip (mapcar
                  (lambda (file)
                    (list
                     (length (subed-parse-file file))
                     (/ (compile-media-get-file-duration-ms
                         (concat (file-name-sans-extension file) ".webm")) 1000.0)))
                  files)))
        (total-seconds (-reduce #'+ sum-seconds))
        (total-subs (-reduce #'+ count-subs)))
  (format "%d files, %.1f hours, %d total captions, speed up of %.1f"
          (length files)
          (/ total-seconds 3600.0)
          total-subs
          (/ total-seconds total-subs)))

It looks like for EmacsConf talks where we typically format captions to be one long line each (< 60 characters), this can be a speed-up of about 4x compared to listening to the video at normal speed. More usefully, it's different enough to get my brain to do it instead of putting it off.

Most of the automatically-generated timestamps are fine. It's just a few that might need tweaking. It's nice to be able to skim them with fewer keystrokes.

Posted by Sacha Chua, November 17, 2024 12:29 PM

Protesilaos Stavrou: Emacs: the Modus themes palette previews are tabulated

I just pushed a massive change to the Modus themes Git repository which makes the “preview palette” commands use tabulated-list-mode. This means that the output consists of actual rows and columns and, more importantly, the user can sort by the given column (click on the column name or do M-x describe-mode to learn about the relevant key bindings).

As part of the redesign, I also included an indicator about which entries in the palette constitute “semantic colour mappings”, as opposed to “named colours”. Named colours are those which correspond to a hexadecimal RGB value, like (blue-warmer "#3548cf"), while the mappings will point to such named colours like (fg-link blue-warmer).

But enough of this! Here is a picture showing two buffers. In the left window we have the output of M-x modus-themes-list-colors. In the right window it is the same command with a C-u prefix argument to show only the semantic color mappings. Notice that the buffers are named after the theme they are previewing and the scope of the preview.

Modus themes palette preview in a tabulated list

The command modus-themes-list-colors prompts for a Modus theme to preview. Whereas modus-themes-list-colors-current acts directly using the current Modus theme.

Use these to design your own palette overrides (check the manual for details) or simply to copy the colour values you are interested in.

Sources

Posted November 17, 2024 12:00 AM

Irreal: Writing A Book

Over at Parenthetically Speaking, there’s an interesting post on why you should write a book. It’s mainly aimed at academics but its lessons apply to anyone who has something to share. That’s actually a lower bar than you might think. You don’t have to have a PhD to have something worth saying. Most practicing engineers with a bit of experience have things to say that would be useful to other, especially younger, engineers.

The post is divided into three parts:

  1. You can write a book
  2. You should write a book
  3. Mechanics

The first two parts are written specifically for academics but, as I say, can apply to anyone with something to say. The interesting part, to me, is the mechanics. "Mechanics" in this context means not so much the tools you use as the actual means of publishing. The idea is to eschew "professional" publishers and make your material available for free.

I published both my books through a publisher and although it can be a bit more work, you do have the cachet of having an actual publisher putting out your book. On the other hand, making your text freely available gets the word out to more people more efficiently.

Currently, it’s easier than ever to write a book. These days, I prefer to write everything in Org mode. With Org, it’s easy to rearrange material and edit your text. When you’re happy with what you’ve written, you can export to HTML, PDF, or even Docx with a simple key press. The process could hardly be easier. The writing part is still hard, of course, but the mechanics are easy, especially if you leverage Emacs and Org.

Update [2024-11-18 Mon 12:16]: Added link to post.

Posted by jcs, November 16, 2024 04:46 PM

Sacha Chua: Yay Emacs 7: Using word timing in caption editing with subed-word-data

When I work with video captions, I often want to split long captions using subed-split-subtitle. If my player is somewhere in the current subtitle, it'll use that timestamp. If not, it'll make a reasonable guess based on character position.

I can use subed-word-data.el to load word-level times from WhisperX JSON or from Youtube SRV2 files. This allows me to split a subtitle using the timestamp for that word.

Because subed-word-data colours words based on transcription confidence, I can see where something might need to be closely examined, like when there's no timing information for the words at the start or end.

If I combine that with subed-waveform, I can see silences. Then I can tweak start times by shift-left-clicking on the waveform. This automatically adjusts the end time of the previous subtitle too.

I like how Emacs makes it easy to use word timing data when editing captions. Yay Emacs!

You can watch this on YouTube, download the video, or download the audio.

Note: Sometimes WhisperX gives me overlapping timestamps for captions, so I use M-x subed-align to get the aeneas forced alignment tool to give me subtitle-level timestamps. Then I use the word-level data from WhisperX for further splitting.

Links:

Aside: I was trying to find some kind of value-to-color translator for Emacs Lisp for easier visualization, like the way the d3 Javascript library makes it easy to translate a range of numbers (say, linear 0.0 to 1.0) to colors (ex: red-yellow-green). I found color-hsl-to-rgb and also the range of colours defined by the faces calendar-scale-1 to calendar-scale-10. There's also prism, which colours code by depth and allows people to specify the colour transformations (saturation, lightness, etc.). I wonder if someone's already written a general-purpose data-to-fg/bg-color Elisp library that supports numerical and categorical data…
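As a rough sketch of the kind of translator being described (my own, built on the color-hsl-to-rgb and color-rgb-to-hex functions from Emacs's bundled color.el), one could map a 0.0–1.0 value onto a red–yellow–green ramp by interpolating the hue:

```elisp
(require 'color)

;; Sketch (my own, not a published library): map VALUE in [0.0, 1.0]
;; onto a red→yellow→green ramp by interpolating the HSL hue
;; (0.0 = red, ~0.33 = green) at full saturation and medium lightness.
(defun my-value-to-color (value)
  "Return a hex colour string for VALUE between 0.0 (red) and 1.0 (green)."
  (apply #'color-rgb-to-hex
         (append (color-hsl-to-rgb (* value 0.33) 1.0 0.5)
                 '(2))))
```

For example, (my-value-to-color 0.0) yields pure red and (my-value-to-color 1.0) a green, with yellow in between.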

Posted by Sacha Chua, November 16, 2024 02:02 PM

Protesilaos Stavrou: Emacs: ef-themes version 1.9.0

The ef-themes are a collection of light and dark themes for GNU Emacs that provide colourful (“pretty”) yet legible options for users who want something with a bit more flair than the modus-themes (also designed by me).

Below are the release notes.


Version 1.9.0 on 2024-11-16

This version introduces several small refinements to an already comprehensive basis.

No interference with org-modern

The org-modern package is not meant to be touched by a theme. This is the approach I take with the modus-themes, but I had forgotten to remove the changes made by the ef-themes.

Thanks to Daniel Mendler, author of org-modern, for bringing this matter to my attention. This was done in issue 48: https://github.com/protesilaos/ef-themes/issues/48.

Colour refinements for several themes

I document those, though most of them will not be noticeable, unless on a side-by-side comparison.

  • The ef-day palette value for green-warmer has a marginally greater contribution from the red channel of light, making it a tiny bit “warmer”. The green-faint is made less warm. In context, these tweaks make certain elements easier to tell apart, while retaining the character of the theme.

  • The ef-reverie value for blue-faint is less saturated, so its blue impression is diminished. It still performs its role in all the relevant contexts, only now it does it better by not competing with other shades of blue.

  • The ef-light value for fg-dim is much less intense, though still within the desired contrast range. This way, it works better in context. The “added” background colours (used in diff-mode, Ediff, Magit, etc.) are a little bit more intense to be more harmonious with other elements in a diff output. The blue-faint has lower contribution from the blue channel of light in the interest of not interfering with other blue hues, while still looking alright itself. The semantic palette mapping for links now uses the blue-warmer colour instead of blue, as the former is less ambiguous in context. The fg-alt is recalibrated to be closer to a grey value, improving its use in several places. The red-cooler value is redone to not be conflated with magenta: it now delivers a rosy red impression. Lastly, the rainbow-2 mapping uses magenta instead of magenta-warmer for consistency in all relevant situations.

  • The ef-night semantic colour mapping of preprocessor is toned down in intensity to remove what was a stylistic exaggeration. The variable mapping is tweaked to use cyan-warmer instead of the cyan colour, as the former is slightly more suited to the role due to how it combines with other colours. The type semantic mapping is bound to a less intense shade of magenta, making it not overpower other constructs in a competition for attention. Finally, the value of the magenta-faint colour has a greater contribution from the blue channel of light to shift its hue slightly closer to purple.

  • The ef-deuteranopia-light palette entry for red-faint is more yellow to be discernible where needed. Similarly, the cyan-cooler has a reduced contribution from the red channel of light.

    [ Note that the “deuteranopia” and “tritanopia” themes define all colours in the palette to be consistent with the overall project, but only use hues that are appropriate for red-green and blue-yellow colour deficiency, respectively. ]

  • The “subtle” backgrounds of all themes (e.g. bg-red-subtle) are redone to feel more natural in the context of their respective theme. Before, some values were a bit exaggerated and/or not aligned with the overall aesthetic. Still, the changes are small: do not expect your preferred theme to be refashioned.

More accurate faces for Org agenda dates

The faces used by Org agenda to style events with a scheduled date or deadline are redesigned to better complement the semantics of what is on display. Pressing tasks stand out more, while those that do not require immediate attention are rendered in a more subtle style.

Thanks to Adam Porter (aka GitHub alphapapa) for suggesting this revision and discussing the technicalities with me. This was done in issue 102 of the Modus themes repository (but the principles apply to the Ef themes as well): https://github.com/protesilaos/modus-themes/issues/102.

The forge package is fully supported

All of its faces will now look consistent in context as they get the appropriate colours of the active Ef theme.

Thanks to Len Trigg for reporting that some attributes were not suitable for the intended purpose of certain Forge faces. I fixed those accordingly. This was done in issue 47: https://github.com/protesilaos/ef-themes/issues/47.

Support for the tldr package

This makes it look consistent with the rest of the theme.

Support for the built-in window-tool-bar-mode

This is a mode available in Emacs 30. Its faces will look right at all times.

Support for the built-in hexl-mode

Instead of using shades of grey backgrounds, the themes use carefully chosen foreground values that are easier to spot.

The embark faces are brought up-to-date

Old symbols are removed and the current ones are added in their stead.

Miscellaneous

  • The :background-mode property of the ef-melissa-dark theme is set to the correct symbol. Thanks to Pedro Cunha for making the change in pull request 46: https://github.com/protesilaos/ef-themes/pull/46. The change is small, so Pedro does not need to assign copyright to the Free Software Foundation.

  • Graphical buttons inherit the ef-themes-button face, which makes it easier to ensure theme-wide consistency for all relevant faces.

  • The all-the-icons faces for Ibuffer use different colours that refine how everything looks in context.

  • The popup produced by the corfu and company packages will use a monospaced font (inherit from fixed-pitch) if the user option ef-themes-mixed-fonts is set to a non-nil value.

  • The annotation function used by the command ef-themes-select and related commands now uses the completions-annotations face, as it should.
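As a quick illustration, the two user-facing names mentioned in these notes can be combined in an init file like so (a sketch, assuming the ef-themes package is already installed):

```elisp
;; Make corfu/company popups inherit fixed-pitch, then load a theme.
(setq ef-themes-mixed-fonts t)
(ef-themes-select 'ef-light)  ; or run M-x ef-themes-select interactively
```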

Emacs: ef-themes version 1.9.0 · November 16, 2024 12:00 AM

Jonathan Lamothe: Organizing My Life with org-mode

Organizing My Life with org-mode
Organizing My Life with org-mode (Jonathan Lamothe) · November 15, 2024 09:28 PM

Hristos N. Triantafillou: Void Linux On A Framework Laptop: Two Years Later

In April 2023, I wrote about getting a Framework Laptop in December of 2022, putting Void Linux on it, and using it as my daily driver. It's been roughly two years since that purchase and I'd like to share my experiences with the hardware and software.
Void Linux On A Framework Laptop: Two Years Later · November 15, 2024 12:00 AM

Irreal: M-x Occur

Kristoffer Balintona has a nice post on Emacs occur. If you have moderate familiarity with Emacs, you have probably used occur on occasion. I use it all the time but still wasn’t aware of everything it could do.

Happily, Balintona is here to fill in the blanks. The information is all there in the documentation, of course, but it’s easy to miss it when skimming through the doc looking for the information that you need.

The first significant thing is that you can restrict the action of Occur to a specific region of the target buffer. You probably won’t want to do this very often but it’s easy to see how it could be useful. The second significant thing is the number of lines of context.

You can specify how many lines of context to include before and after each matching line, but you can also specify a string, in which case that string acts as a replacement for what is displayed. You certainly won’t need that capability a lot but when you do, it’s perfect.
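A sketch of the two features as I understand them from the post (check C-h f occur for the authoritative signature in your Emacs version):

```elisp
;; NLINES as a number: show 2 lines of context around each match.
(occur "defun" 2)

;; NLINES as a string: display a replacement for each match instead of
;; the full matching line, e.g. list only the captured function names.
(occur "(defun \\([^ (]+\\)" "\\1")
```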

Take a look at Balintona’s post or the documentation for the details. This is another example of how learning Emacs is a lifelong endeavor. You learn bits and pieces of a certain functionality but later discover that it’s much richer than you believed.

M-x Occur (jcs) · November 14, 2024 08:43 PM

Charles Choi: Styling Text via Keyboard in Org and Markdown

A recent Mastodon post by Christian Tietze asked how one could style text in Org with the keyboard. There is an existing command org-emphasize that can do this, but it has a user interface design flaw: it requires the user to know ahead of time the markup character used to style the text. For many, recalling this Org-specific character is a chore. A friendlier command design would let the user specify the style (e.g. bold, italic, code, etc.) to markup the text.

A while back, I wrote a post describing how to style text in Org and Markdown using mouse-driven menus. Thinking about Tietze’s ask, I realized I had done most of the work towards building a keyboard-driven command to style text, so I set about building that command. This post details that effort.

In thinking about keyboard-driven text styling, I wanted to achieve two things:

  1. Style text using logical names (e.g. bold, italic, code, …)
  2. Minimize the work required to select the text to be styled.
    • Infer from the point what the text region to be styled should be.

A nice feature of the Markdown styling commands is that they will try to infer from the point (cursor position) what region of text to apply the style to. No such facility that I know of exists in Org. But it’s not all gravy for Markdown: the same styling commands will only work on what Emacs defines to be a word. This makes it a hassle to deal with continuous text (that is, no spaces or linebreaks) that embeds dashes (-) or underscores (_), because Emacs navigation by word considers the dash and underscore characters to be word separators.

No worries though as this is Emacs: There’s always a better way.

In building Casual EditKit, I became familiar with commands that operated on balanced expressions (aka sexp). Using them, I observed that they treated continuous alphanumeric text with dashes and underscores as a single entity. With this knowledge I set out to build the command cc/emphasize-dwim.
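To make the idea concrete, here is a hypothetical sketch in the spirit of what is described: infer the region from the sexp at point and let the user name the style. This is not Charles Choi's actual cc/emphasize-dwim; my/emphasize-dwim and its style-to-character table are my own guesses at an implementation.

```elisp
(require 'org)
(require 'thingatpt)

(defun my/emphasize-dwim (style)
  "Apply Org STYLE markup to the region, or to the sexp at point."
  (interactive
   (list (completing-read "Style: "
                          '("bold" "italic" "code" "verbatim"
                            "underline" "strike-through"))))
  (let* ((char (pcase style
                 ("bold" ?*) ("italic" ?/) ("code" ?~)
                 ("verbatim" ?=) ("underline" ?_) ("strike-through" ?+)))
         ;; Sexp bounds treat foo-bar_baz as one entity, unlike 'word.
         (bounds (if (use-region-p)
                     (cons (region-beginning) (region-end))
                   (bounds-of-thing-at-point 'sexp))))
    (when bounds
      (goto-char (cdr bounds))
      (push-mark (car bounds) t t)   ; activate the region
      (org-emphasize char))))
```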

Here’s a video of it in action.

Closing Thoughts

NGL, I’ve been pleasantly surprised at how well this has worked out, so much so that it’s motivated me to write this post. Interested readers are encouraged to try this code out.

As far as what keybinding to use for cc/emphasize-dwim, I’ve landed on using C-/ for now, which by default is bound to undo. Since my daily driver is macOS, I assign M-z for that job. As always, it’s left to the reader to choose a keybinding of their preference.

Styling Text via Keyboard in Org and Markdown (Charles Choi) · November 14, 2024 01:00 AM

Irreal: Is Emacs Practical For Real World Use

In another in a seemingly unending sequence of such questions, sav-tech, over at the Emacs subreddit asks if Emacs is useful in practical life. I’m not sure what he thinks all we Emacs users are doing if it’s not practical but let’s take his question as being in good faith.

As usual, all the action is in the comments. The main thing that I noticed is that people using VS Code and the like always say that it does what they need right now. That’s because those editors are configured to provide the conventional services. It’s great. There’s no setup required, you just start using it to do what you need to do.

The problem is what comes next. Sooner or later you’re going to need to do something that the VS Code developers didn’t consider and you’re going to be out of luck. With Emacs, you’re also going to find yourself wanting to do something the developers hadn’t anticipated. The difference is that with Emacs you can simply add the capability. Often this doesn’t require anything more than a keyboard macro or some shortcut configurations. If you learn a little Elisp, you can make Emacs do anything you want.

Sav-tech says that someone on Discord told him that Emacs is a monolithic editor, takes only 20–40 minutes to learn, and, anyway, everyone is using VS Code now. That’s a fact-free statement by someone who has no idea what he’s talking about. Read the comments if you want your faith in the judgment of software engineers restored.

You can use Emacs, VS Code, or whatever works best for you but let’s at least try to keep the discussion informed.

Is Emacs Practical For Real World Use (jcs) · November 13, 2024 05:26 PM

Alvaro Ramirez: chatgpt-shell splits up

13 November 2024 chatgpt-shell splits up

The chatgpt-shell package started as an experiment glueing the ChatGPT API to an Emacs comint buffer. Over time, it grew into several packages within the same repository: shell-maker, ob-chatgpt-shell, dall-e-shell, ob-dall-e-shell, and of course chatgpt-shell itself.

I'm splitting the repository as a first step in reworking chatgpt-shell to enable multi-model support (i.e. Gemini, Claude, and others), a popular feature request.

Want multi-model support?

Go 👍 the feature request and ✨ sponsor✨ the work.

If you're keen on having a multi-model chatgpt-shell at your fingertips, please consider sponsoring to make the project sustainable. Improvements like this, integrations, and keeping up with the AI space take quite a bit of work and effort.

New package repositories

chatgpt-shell

No repo location changes. Remains at https://github.com/xenodium/chatgpt-shell

chatgpt-shell carries the ChatGPT shell itself, but also convenience integrations.

My hope is to make this a multi-model package.

ob-chatgpt-shell

Moves to https://github.com/xenodium/ob-chatgpt-shell

An extension of chatgpt-shell to execute org babel blocks as ChatGPT prompts.

dall-e-shell

Moves to https://github.com/xenodium/dall-e-shell

A dedicated shell for DALL-E image generation.

ob-dall-e-shell

Moves to https://github.com/xenodium/ob-dall-e-shell

An extension of dall-e-shell to execute org babel blocks as DALL-E prompts.

shell-maker

Moves to https://github.com/xenodium/shell-maker

shell-maker is a convenience wrapper around comint mode to build shells. Both chatgpt-shell and dall-e-shell are built on top of shell-maker.

Enjoying this content? Using one of my Emacs packages?

Help make the work sustainable. Consider sponsoring. I'm also building lmno.lol, a platform to drag and drop your blog to the web.

chatgpt-shell splits up · November 13, 2024 01:20 PM

JD Codes: Transient Menus in Emacs pt. 1

Magit is an innovative package that provides an amazing interface over Git. The complexity of its UI is completely hidden away thanks to another package born out of Magit called Transient. Transient is so innovative that it was added to Emacs core in 2021. Understanding at least the basics of Transient can provide a lot of value in building tools to enhance various workflows.

From the official manual

Transient is the library used to implement the keyboard-driven “menus” in Magit. It is distributed as a separate package, so that it can be used to implement similar menus in other packages.

From Transient Showcase

Transient means temporary. Transient gets its name from the temporary keymap and the popup UI for displaying that keymap.

Foundation

A Transient menu is made up of 3 parts: prefix, suffix, and infix.

  • Prefix: represents a command to “open” a transient menu. For example magit-status is a prefix which will initialize and open the magit-status buffer.

  • Suffix: represents the “output” command. This is what’s invoked inside of a transient menu to perform some kind of operation. For example, in Magit, calling magit-switch-branch is a suffix which has a (completing-read) in front of it.

  • Infix: represents the “arguments” or the intermediary state of a transient. For example, adding -f, --force-with-lease means you’re using an infix for the magit-push suffix.

There are 2 additional things to understand about transients:

  • Suffixes can call prefixes, allowing for “nesting” of “menus.” In Magit, when a commit is at point and you call magit-diff, that suffix is really just a prefix with its own set of infixes and suffixes. See Example 3 below for a more elaborate example of this.
    • Think of it this way: Prefix -> Suffix -> Prefix -> ...
  • State can be persisted between Suffixes and Prefixes to build very robust UIs that engage in very complex behavior while exposing a simple view to the user.

Note: I don’t go over state persisting through prefixes in the post. I do plan on doing a follow up for more complex situations as I continue to learn.

Define

While the actual model is much more complex than I’ve let on and has many more domain concepts than I’m going to lay out, defining simple transients can enhance your workflow in meaningful ways once you understand the basics. This is by no means a comprehensive guide to Transients but merely a (hopefully) educational and useful overview. For an incredible guide, check out positron-solutions’ Transient Showcase, which is one of the most thorough guides I’ve ever seen. If any information I share here differs from Positron’s guide, trust Positron.

Note: Each of the Examples work and can be evaluated inside of Emacs and I encourage you to do so!

1 Prefix ➡️ 1 Suffix

Let’s define a simple transient to just output a message.

(transient-define-prefix my/transient ()
  "My Transient"
  ["Commands" ("m" "message" my/message-from-transient)])

(defun my/message-from-transient ()
  "Just a quick testing function."
  (interactive)
  (message "Hello Transient!"))

Once evaluated, M-x my/transient can be invoked and a transient opens with one suffix command m which maps to my/message-from-transient and outputs a message to the minibuffer.

Explain

transient-define-prefix is a macro used to define a simple prefix and create everything Transient needs to operate. The body is where we define our Transient keymap, which in this case is called "Commands". The body can define multiple sets of keymaps and each one should be defined as a vector where the first element is the “name” or “title display” of the current set of commands, and the subsequent N number of lists make up the whole map. The lists are in the format of (but not limited to) (KEY DESCRIPTION FUNCTION). The FUNCTION arg must be interactive in order to work.

There are a handful of other ways to define the Transient elements, but we’ll stick with this simple version. If you’re interested in more complex methods, refer back to Positron’s guide.

Let’s expand our example a bit by adding arguments and switches.

1 Prefix ➕ 2 Infix ➡️ 1 Suffix

Here we will add 2 types of arguments: switches and arguments with a readable value.

(transient-define-prefix my/transient ()
  "My Transient"

  ["Arguments & Switches"
    ("-s" "Switch" "--switch")
    ("-n" "Name Argument" "--name=")]

  ["Commands"
    ("m" "message" my/message-from-transient)])

(defun my/message-from-transient (&optional args)
  "Just a quick testing function."
  (interactive (list (transient-args transient-current-command)))
  (if (transient-arg-value "--switch" args)
    (message
      (concat "Hello: " (transient-arg-value "--name=" args)))))

Now we have a transient that gives us 2 infixes or “arguments”.

  • -s is the keymapped function to toggle the --switch argument. A good example of this is a terminal command like ls -a where -a is a boolean type value that toggles all on for ls.
  • -n is the keymapped function to prompt for a minibuffer input to enter in what’s appended to the --name= argument.

Once evaluated, we can run the transient with M-x my/transient and then press - followed by s to toggle the --switch argument. Pressing - followed by n engages the --name= argument, which generates a minibuffer prompt to read user input. Once a name is typed in and Enter is pressed, the minibuffer prompt finishes and the value entered is displayed in the Transient menu itself. Pressing m runs the suffix. With --switch toggled on, a message should appear in the minibuffer: “Hello: " followed by the input to --name=. Performing the flow with --switch toggled off results in nothing being displayed.

Explain

The suffix changes on my/message-from-transient are minimal but very important. We need to make sure that it can interactively take args which are passed in by our Transient when the suffix is executed. This is a list of the values of our infixes from our prefix. We can then use the helper function transient-arg-value which has the following docstring:

For a switch return a boolean. For an option return the value as a string, using the empty string for the empty value, or nil if the option does not appear in ARGS.

So when we do (if (transient-arg-value "--switch" args) ...), the value comes back as a boolean for us to use; we could also pass it directly into something else without converting it ourselves. It likewise gives us the value of --name= as a string, so we can pass it straight into (message). There’s some more flexibility with argument passing we’ll get into in a further example.

The shorthand we’re using to define infixes makes it easy to define these two types, a switch and arguments.

1 Prefix ➕ 2 Infix ➡️ 1 Suffix ➡️ 1 Prefix

Let’s expand our example by demonstrating the composability of transient menus. We’ll perform essentially the same example as before, but instead of just triggering a (message ...) function, our suffix will point to a prefix, based on the infix arguments.

(transient-define-prefix my/transient ()
  "My Transient"

  ["Arguments & Switches"
    ("-s" "Switch" "--switch")
    ("-n" "Name Argument" "--name=")]

  ["Commands"
    ("m" "message" my/message-from-transient)
    ("c" "go to composed" my/composed-transient)])

(defun my/message-from-transient (&optional args)
  "Just a quick testing function."
  (interactive (list (transient-args transient-current-command)))
  (if (transient-arg-value "--switch" args)
    (message
      (concat "Hello: " (transient-arg-value "--name=" args)))))

(transient-define-prefix my/composed-transient ()
  "My Composed Transient"

  ["Arguments & Switches"
    ("-l" "Loop" "--loop")]

  ["Commands"
    ("x" "Execute" my/composed-suffix)])

(defun my/composed-suffix (&optional args)
  "Suffix that loops back to `my/transient' when --loop is set."
  (interactive (list (transient-args transient-current-command)))
  (if (transient-arg-value "--loop" args)
      (my/transient)))

Now we have a transient that provides 2 infixes as before, but also has another suffix that is in fact a prefix: a “sub-menu”! It then uses an infix to determine the subsequent action when the suffix is called. If the --loop argument is set to true, we loop back to our original prefix as this command’s suffix.

Explain

Here we expand on everything we’ve learned up to this point and call a prefix as a suffix. This demonstrates the composability of transients: we created a “sub-menu” for our main transient. The example isn’t truly relying on the infixes to determine the second suffix/prefix behavior, but that’s for a subsequent post; refer to the resources listed below for more information. The concept here is important to grasp, as it’s the foundation for building complex structured menus with Transient.

Real World

The usefulness of creating your own transients goes far beyond just developing packages. At my day job I use a transient menu to run our test suite. While I’m not a fan of how our test suite is setup, I wanted to make it as painless to interact with as possible.

Overview

I work on a Ruby on Rails application that utilizes Minitest. In the command line you can normally run bin/rails test path/to/test.rb and the suite will run. You can also optionally provide a line number to run a specific test instead of a whole file, like bin/rails test path/to/test.rb:50. While there is a litany of ways to improve this experience with tools like FZF, I don’t want to break my flow by switching windows.

Unfortunately, we also use environment variables that dictate additional behavior for our test suite, such as providing specific database seeds, or running Selenium in a live browser so you can debug end-to-end tests. While there are better ways to manage complex test suites, I’ll make do with it and let Emacs handle the annoying stuff.

At the end of it all, I end up with a test command that looks like: SKIP_SEEDS=true MAGIC_TEST=0 PRECOMPILE_ASSETS=false rails test path/to/test.rb. Typing that sucks, and setting them by default in my shell doesn’t do much because they change so often in my normal work. So I wrote a transient menu to make things easy for me.

Commander.el

I named it commander.el even though it’s not a package I’m providing publicly. It’s just for me and I wanted a cool name to keep it separate from my normal configuration files.

(transient-define-prefix jd/commander ()
       "Transient for running Rails tests in CF2."
       ["Testing Arguments"
        ("s" "Skip Seeds" "SKIP_SEEDS=" :always-read t :allow-empty nil :choices ("true" "false")
         :init-value (lambda (obj) (oset obj value "true")))

        ("a" "Precompile Assets" "PRECOMPILE_ASSETS="
         :always-read t
         :allow-empty nil
         :choices ("true" "false")
         :init-value (lambda (obj) (oset obj value "false")))

        ("c" "Retry Count" "RETRY_COUNT=" :always-read t :allow-empty nil
         :init-value (lambda (obj) (oset obj value "0")))

        ("-m" "Magic Test" "MAGIC_TEST=1")]

       ["Testing"
        ("t" "Run Test" commander--run-current-file)
        ("p" "Run Test at Point" commander--run-command-at-point)
        ("f" "Find test and run" commander--find-test-and-run)]

       ["Commands"
        ("d" "Make dev-sync" commander--dev-sync)

        ("r" "Rails" jd/rails-commander)])

;; ...

(defun commander--run-current-file (&optional args)
  "Suffix for using current buffer-file-name as relevant test file."
  (interactive (list (transient-args 'jd/commander)))
  (commander--run-command (concat (mapconcat #'identity args " ") (commander--test-cmd (commander--current-file)))))

(defun commander--find-test-and-run (&optional args)
  "Suffix for using completing-read to locate relevant test file."
  (interactive (list (transient-args 'jd/commander)))
  (commander--run-command (concat (mapconcat #'identity args " ") (commander--test-cmd (commander--find-file)))))

(defun commander--run-command-at-point (&optional args)
  "Suffix for using current buffer-file-name and line-at-pos as relevant test."
  (interactive (list (transient-args 'jd/commander)))
  (commander--run-command (concat (mapconcat #'identity args " ") (commander--test-cmd (commander--current-file-at-point)))))

;; ...

(defun commander--run-command (cmd)
  "Runs CMD in project root in compilation mode buffer."
  (interactive)
  (when (get-buffer "*commander test*")
    (kill-buffer "*commander test*"))
  (with-current-buffer (get-buffer-create "*commander test*")
    (setq compilation-scroll-output t)
    (setq default-directory (projectile-project-root))
    (compilation-start cmd 'minitest-compilation-mode)))

I have this bound to <leader> r, which for me is SPC r. This allows me to toggle on any environment variables and essentially build the testing command I need. I then use (compilation-start COMMAND) to run my test in a controlled popup buffer so I can easily see the results while I continue to code. I’ve also set up commander--run-current-file and commander--run-command-at-point. commander--run-current-file will just run the generated command for the file that is open in the current buffer, so ...env vars rails test path/to/test.rb, while commander--run-command-at-point will run the command and include the line number at the current cursor point, so I can run a single test without any issue.

This has sped up my workflow tremendously and made testing way faster for me as I don’t have to bother with building a command from scratch, but I can instead just build it with a transient.

Conclusion

Hopefully this post has provided some inspiration for you to get into building transient menus. I’m still pretty new to Elisp and learning about transient.el, so there may be some inaccuracies here and there. I also elected to use the transient-define-prefix macro instead of the more formal methods for creating a transient, but the macro is probably sufficient for most use cases like mine.

Below are links to resources that helped to expand my own knowledge and even inspired this post. A big shout out goes to Jonas for creating such an incredible package, as well as to positron-solutions for such a thorough guide through it all.

Resources

Transient Menus in Emacs pt. 1 · November 13, 2024 12:00 AM

James Dyer: Org Table From Org Headings using a Babel Block

In this post, I’ll walk you through how I use an Org Babel block to generate a dynamic Org table based on Org headings.

This approach is handy for anyone who wishes to programmatically extract information from an Org file (such as TODO states, tags, and content) and automatically format it as a neatly structured Org table. You can then export this table to various formats — like HTML, Markdown, or LaTeX — with built-in Org mode support.

At work, I’m currently using Emacs and Confluence, so my idea was to figure out how to get an org file into a Confluence post in a structured manner. For the particular task I’m working on, I’ve decided that I would like to convert an org file into a table. The table format essentially flattens the information presented by org structured text, and putting it as a table in Confluence also has the advantage of column sorting and filtering.

For the task in hand, this seems the most efficient way of representing this particular set of data. I will have the advantage of always working within a familiar org document at the basic text level, leveraging all those years of Emacs muscle memory, with the markup as always forming the base for a myriad of export options.

However, I’m going to devise another method of export, lets see if I can find an effectual way to convert an org file into a table that can be efficiently used in Confluence.

I did some research and thought that maybe org column mode could be somewhat useful. It’s tabular, right? Can’t I just grab that data and put it somewhere? Well, from what I can gather, it’s mainly for more easily representing properties, and it uses overlays, which are not conducive to easy export.

A dynamic block looks quite interesting and seems to be more for representing some form of dynamic data that is consistently updated and tied to an org document. I think a pattern of tabular representation could be generated using this form, and I might take a look at it in the future, but for now it doesn’t quite seem like the incremental learning opportunity I’m looking for.

Another option would be to play around with an ox org export back-end, for example generating an HTML table tailored to my own needs. That might be a bit too advanced for me at this stage, so I will stick with the approach I decided on below: create an org babel block and generate a table through the :results table output header parameter.

Of course I am familiar with org babel blocks; they are how I generate my Emacs init file through the tangle mechanism, but I didn’t quite realise just how powerful they could be. My idea here was to parse the current org buffer with a babel block and output an org table simply as a string, which would be interpreted as an org table in org-mode. That would work, right? Well yes, yes it did. I essentially just kept appending row strings, as you would see in a typical org table, separated by the pipe character. The code, however, was not particularly efficient: I would (push) all the relevant org items onto a list and then loop over them to construct the org table string. As far as I was aware at the time, the only babel mechanism for generation was pushing textual data to stdout, which would then appear in the current buffer to be interpreted in whichever way you want according to the mode.

I stumbled onto a better solution, however. I had already done the hard work of constructing a list of lists, with each sublist representing a row. It turns out that I can just return this list from the babel block, and if I have the babel output header parameter set as :results table, the list will be interpreted as a table!

Well, let’s try this out…

Note: the following examples will all be a babel block with the following header parameters defined:

#+begin_src emacs-lisp :results table :exports both
(let ((rows '()))
       (push (list "1: first row" "2: first row") rows)
       rows)
| 1: first row | 2: first row |

That is a table with a single row!

Let’s expand…

(let ((rows '()))
       (push (list "1: first row" "2: first row") rows)
       (push (list "1: second row" "2: second row") rows)
       rows)
| 1: second row | 2: second row |
| 1: first row  | 2: first row  |

So now I have a simple mechanism for adding multiple rows.

Hang on! The rows are not in the order I expected; let's reverse them.

(let ((rows '()))
       (push (list "1: first row" "2: first row") rows)
       (push (list "1: second row" "2: second row") rows)
       (reverse rows))
| 1: first row  | 2: first row  |
| 1: second row | 2: second row |

What about the table header? This took a while to figure out, but I think I have it.

(let ((rows '()))
       (push (list "1: first row" "2: first row") rows)
       (push (list "1: second row" "2: second row") rows)
       (setq rows (reverse rows))
       (push 'hline rows)
       rows)
|---------------+---------------|
| 1: first row  | 2: first row  |
| 1: second row | 2: second row |

Well that is the header line, but there is no header!!

(let ((rows '())
      (header (list "col1" "col2")))
       (push (list "1: first row" "2: first row") rows)
       (push (list "1: second row" "2: second row") rows)
       (setq rows (reverse rows))
       (push 'hline rows)
       (cons header rows))
| col1          | col2          |
|---------------+---------------|
| 1: first row  | 2: first row  |
| 1: second row | 2: second row |

I think I have figured it out now, so the next aspect I need to consider is how to pick up the Org elements. It seems a common approach is to use org-map-entries, which steps through each headline, seemingly within the buffer itself, and some helper functions are available to extract the data: for example org-element-at-point, org-outline-level, org-get-tags, etc. I am aware that there is a more formal API where a syntax tree can be made available through org-element-parse-buffer, but that is maybe for another time. Let's move ahead with the implementation: an Org Babel block that constructs a list via org-map-entries and outputs it as a table.

The Task - Collecting Headings and Outputting in a Table

The goal is simple: outline tasks with various headings, nesting levels, TODO states, tags, and content in an Org file, and use that information to generate an Org table using an Emacs Lisp Org Babel block. This allows us to extract metadata from headings into a well-structured table with a corresponding header row.

Each Org heading will be converted into a table row in the following format:

  • Title: The full heading, including indentation to represent the level hierarchy.
  • TODO State: The TODO keyword (like TODO, DONE, or other custom states) associated with the heading.
  • Tags: Any Org tags associated with the heading.
  • Contents: The content that immediately follows the heading.

The Code

The following Emacs Lisp code, in an Org Babel block, scans the Org file, collects the required information from all headings, and formats it into a table. Let’s break it down step-by-step:

Org Babel Block: Automatic Table Generation

(let ((rows '())
      (header (list "Title" "TODO" "Tags" "Contents")))
  (org-map-entries
   (lambda ()
     (let* ((heading (org-get-heading t t t t))
            (level (org-outline-level))
            (tags (org-get-tags))
            (todo (org-get-todo-state))
            (contents ""))
       ;; Move past the heading's property/planning lines to its body.
       (org-end-of-meta-data nil)
       (when (eq (org-element-type (org-element-at-point)) 'paragraph)
         (let ((start (point)))
           (org-next-visible-heading 1)
           (setq contents (buffer-substring-no-properties start (point)))
           ;; Strip code-block markers and collapse newlines to one line.
           (dolist (pattern '("^#\\+begin.*" "^#\\+end.*" "\n+"))
             (setq contents (replace-regexp-in-string pattern
                                                      (if (string= pattern "\n+") " " "")
                                                      (string-trim contents))))))
       ;; Indent the title with "-" characters to reflect its level.
       (push (list (concat (make-string (* (1- level) 2) ?-) " " heading)
                   (or todo "") (string-join tags ":") (or contents "")) rows))))
  (setq rows (reverse rows))
  (push 'hline rows)
  (cons header rows))

How It Works

  • org-map-entries: This function iterates through all headings in the Org document. Each heading that it encounters produces metadata, such as the heading title, level in the hierarchy, TODO state, tags, and the content following the heading.

  • Content Sanitization: For headings with associated content, newline characters and code block markers (like #+begin_src and #+end_src) are removed. This ensures that the content is condensed into a single line for insertion into the Org table.

  • heading: This variable stores the full heading, formatted with indentations based on its nesting level (level) to visually represent its position in the Org structure.

  • Output Format: The information is collected into a list (rows). The list contains individual lists representing rows of the table. A “horizontal line” ('hline) is inserted to act as a row separator.

  • Finally, a header row (header), which includes the column titles “Title”, “TODO”, “Tags”, and “Contents”, is added to the front of the list.

When evaluated with C-c C-c within the Org source for this blog post, the code block outputs a table like the one below. Note that it will also include the example Org structure defined in the next section.

Example Org Structure

Here’s an example Org structure that would produce part of the table above:

** TODO one :tag1:tag2:
*** DOING sub

content 1

#+begin_src
>>
code start
more code >>
#+end_src
after code

**** further into the tree! :subtag:

first line, no space before!

***** even further down the hole! :hole_tag:

No escape!!

** TODO two :tag_two:

content 2

** three
SCHEDULED: <2024-11-10 Sun>

Exporting the Table

Once this table is generated, it will correctly render within Org mode, respecting the Org table formatting rules. You can also export it to other formats (like HTML or Markdown) using standard Org export commands (such as C-c C-e h or C-c C-e m).

For example, exporting to HTML will produce a structured HTML table with the column headers formatted as table header (<th>) elements, and the data rows and horizontal lines correctly converted into formatting tags.
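For the small two-column example earlier, the exported HTML would look roughly like this (a simplified sketch; the Org HTML exporter also emits colgroup markup and various attributes):

```html
<table>
  <thead>
    <tr><th>col1</th><th>col2</th></tr>
  </thead>
  <tbody>
    <tr><td>1: first row</td><td>2: first row</td></tr>
    <tr><td>1: second row</td><td>2: second row</td></tr>
  </tbody>
</table>
```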

Conclusion

By leveraging Emacs Lisp and Org Babel, we’ve created a highly flexible way to extract information from Org headings and output it as a well-structured table. This method not only saves time when working with large or hierarchical documents, but it also provides a powerful way to export Org-based data to various formats based on your needs.

This approach can be extended further to include other metadata (like scheduled dates, deadlines, or custom properties) or more advanced formatting options for both Org and export formats.

Next Steps: Experiment with Org Babel to explore even more advanced use cases. For example, I would like to format the table in a more flexible manner, such as specifying the width and maybe the cell/row colour based on content.

-- Org Table From Org Headings using a Babel Block (James Dyer), November 12, 2024 08:20 PM

punchagan: Responsive Auto Export for Org Hugo

I use ox-hugo to write blog posts in org-mode and publish them using Hugo. I enjoy using org-mode for any writing that I do, including blog posts (when I am able to get myself to write them). After a long hiatus, I’ve been trying to get back to blogging, as this post might have given away. But, ox-hugo’s auto export seemed much slower than I remember it being.

Each time I hit save on my blog-posts.org file, Emacs gets busy for about 10ish seconds exporting the post to Hugo markdown. And this is annoying because I tend to hit save-buffer multiple times while writing in Emacs – thanks, muscle memory!

Enter Emacs profiler

I was on a flight and had some time to dig into this. If I was online, I probably would have looked through the README and/or the issue tracker, but I jumped in with the handy Emacs profiler.

The profiler-report showed that a big chunk of time was being spent in org-id-update-id-locations even before the actual export started. And then, during the export, a bulk of the time was spent in org-hugo--get-pre-processed-buffer. Once I knew the problem areas, I started poking around in the ox-hugo code to “fix” these issues.

Turn off Org ID location update

org-id-update-id-locations scans a bunch of org-mode files and stores a mapping of all the IDs of subtrees to their filenames. If you have a lot of org subtrees this can take a while, even if none of them actually have IDs. It turns out that I didn’t have any ID properties set, and this caused the update function to run before every export!

I simply added a new ID property on one of the subtrees in my blog-posts file to prevent the ID update from running on every export! I’m not sure how a stale org-id-locations value affects cross links, but at this stage of writing I don’t care about the cross links. (Spoiler: the next hack actually nullifies any impact a stale value might have had!)
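For reference, the standard way to add such an ID property is with point on the chosen subtree (org-id-get-create is the built-in helper; it creates an :ID: property only if the entry doesn't already have one):

```emacs-lisp
;; With point on a heading in blog-posts.org:
;; M-x org-id-get-create
(org-id-get-create)  ; adds an :ID: property to the entry at point
```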

Cutting down auto export time by 4-5 seconds is great! But, I’m still not happy to wait for 5-6 seconds in the middle of writing my posts. Let’s look at the other hotspot – the buffer preprocessing!

Turn off buffer preprocessing, maybe?

Currently, I don’t have any cross links between posts in my org-mode source. So, I can turn off this feature completely by setting org-hugo--preprocess-buffer to nil. Voila! Hitting save-buffer doesn’t freeze my Emacs any more. I can compose “100s of blog posts”™ in a flurry! ;)

But, if I’m going to have these “100s of blog posts”, wouldn’t it be better to have cross links? But, with preprocessing turned off when there are cross-links, the “auto-export and build” workflow breaks. The variable org-hugo--preprocess-buffer MUST be non-nil to produce posts with valid cross-links. If not, the exported markdown file processed by hugo ends up having broken cross-links, which crashes hugo serve and/or hugo build.

Unsetting and setting the org-hugo--preprocess-buffer variable for the writing vs publishing phase, respectively, isn’t an ergonomic workflow. It’s not an improvement over disabling and enabling the auto-export mode as needed. I want to enjoy auto export with Hugo’s live reload feature.

Moar workaround!

Looking through the code some more, I learnt that org-hugo-link first uses a custom export handler, if one exists for a link’s protocol. I decided to piggyback on this functionality and made up a hugo: protocol for cross-links.

The hugo link simply contains the EXPORT_FILE_NAME of the linked blog post, i.e., the name of the exported markdown file (without the .md extension), as the ‘path’ of the link. The custom protocol export handler can then generate a relref shortcode for Hugo to process in the exported markdown file.

This nicely works around the need to preprocess my entire blog-posts.org buffer to generate valid cross-links!

;; Register an export handler for the hugo: link protocol.
(org-link-set-parameters "hugo" :export #'pc/org-hugo-link-export-to-md)

(defun pc/org-hugo-link-export-to-md (path desc backend &optional _info)
  "Export a hugo: blog cross-link to markdown format."
  (message "path: %s, desc: %s, backend: %s" path desc backend)
  (cond
   ((eq backend 'md)
    (if (equal org-export-current-backend 'hugo)
        (format "[%s]({{< relref \"%s\" >}})" desc path)
      (error "Cannot export Hugo link to non-Hugo backend")))
   (t (error "Cannot export Hugo link to non-Hugo backend"))))
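With the handler registered, a cross-link in the Org source might look like this (a made-up example; it assumes the target post's EXPORT_FILE_NAME is my-other-post):

```org
[[hugo:my-other-post][my other post]]
```

which the handler exports to markdown as [my other post]({{< relref "my-other-post" >}}).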

Outro

I made a simple helper to make it easier to insert these cross-links with the hugo: protocol. It simply looks through all the headlines with an EXPORT_FILE_NAME property, and lets one of them be inserted as a link with the hugo: protocol.
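Such a helper might look something like this (a hypothetical sketch, not punchagan's actual code; the function name and prompt text are made up):

```emacs-lisp
(defun my/insert-hugo-link ()
  "Insert a hugo: cross-link to a post chosen by headline title."
  (interactive)
  (let* ((candidates
          ;; Collect (title . export-file-name) for every headline
          ;; that has a non-empty EXPORT_FILE_NAME property.
          (org-map-entries
           (lambda ()
             (cons (org-get-heading t t t t)
                   (org-entry-get (point) "EXPORT_FILE_NAME")))
           "EXPORT_FILE_NAME<>\"\""))
         (title (completing-read "Link to post: " candidates))
         (fname (cdr (assoc title candidates))))
    (insert (format "[[hugo:%s][%s]]" fname title))))
```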

Now, cross-linking posts and publishing posts with cross-links are both a breeze. Stay tuned for the “100s of blog posts” ™ I’m going to write!

Thanks to Shantanu and Kamal for reading drafts of this post.

-- Responsive Auto Export for Org Hugo, November 12, 2024 06:47 PM

Irreal: Compile Angel

James Cherti has announced a new package, Compile Angel. The idea is to keep all your Elisp files compiled with both the byte compiler and the native compiler. It has modes to compile any outdated “binaries” when a modified Elisp file is saved or when one is loaded. Both compilations are important: byte compilation helps Emacs load faster, while native compilation, of course, helps Emacs run faster by generating native hardware code.

In his announcement, Cherti explains how Compile Angel differs from auto-compile. The TL;DR is that Compile Angel is lighter weight and compiles more files than auto-compile.

This seems like a nice package. Strictly speaking, it’s not necessary, of course, but it’s another way of reducing the friction of maintaining your Emacs installation. Unless you’re the type of person who enjoys a completely hands-on approach to system maintenance, Compile Angel is probably worth looking into.

The project's GitHub repository is here but, except for the code, it is essentially the same as the announcement.

-- Compile Angel (jcs), November 12, 2024 03:42 PM

Irreal: Profoundly Ignorant And Proud Of It

It’s easy to be snarky about management—especially the suits—and Irreal has certainly indulged itself often. In case you think such snark is unwarranted, I offer this sad example for your consideration.

It demonstrates everything that can go wrong when people with no technical skills nevertheless feel inclined—and entitled—to make decisions that aren’t really any of their business and for which they are eminently unqualified. Take a look at their description of Emacs:

“An old fashioned and slow text editor created by Canonical for use with the Ubuntu operating system”.

There’s not a single part of that description that’s correct. You Vim guys can stop laughing now. Here’s the description of Vim:

“Developed by CentOS, an editor with a steep learning curve”.

I guess that’s a slightly better description than the one for Emacs in that “with a steep learning curve” might be said to be accurate. On the other hand, these same suits are happy to embrace Neovim so, again, they have no idea what they’re talking about.

I can’t imagine working for a company that would presume to tell developers what editor they should use, let alone one that justifies its decisions with such a complete lack of knowledge of what they’re talking about.

All I can say is that if you are working for this company or one like it you should start looking for other employment forthwith. Even if your preferred editor is on the approved list, management’s attitude and presumption will eventually reach out to bite you.

-- Profoundly Ignorant And Proud Of It (jcs), November 11, 2024 03:46 PM

Sacha Chua: 2024-11-11 Emacs news

Links from reddit.com/r/emacs, r/orgmode, r/spacemacs, r/planetemacs, Mastodon #emacs, Hacker News, lobste.rs, programming.dev, lemmy.world, lemmy.ml, communick.news, planet.emacslife.com, YouTube, the Emacs NEWS file, Emacs Calendar, and emacs-devel. Thanks to Andrés Ramírez for emacs-devel links. Do you have an Emacs-related link or announcement? Please e-mail me at sacha@sachachua.com. Thank you!

-- 2024-11-11 Emacs news (Sacha Chua), November 11, 2024 01:08 PM

New emacs org-mode category in place…! Actually, it was there for a while, but now it’s in the navigation bar, complete with my own AI illustration of what the Emacs GNU looks like. Check it out: taonaw.com/categorie…

-- November 09, 2024 09:43 PM

Thanos Apollo: RSS Mastery with RSS-Bridge & Elfeed [Video]

I’ve just published a short video covering the basics of RSS, RSS-Bridge & Elfeed, it’s currently available on YouTube.

Video notes

What is RSS?

  • XML-based web feed that allows users to access updates of websites, without “visiting” the website.
  • Hacktivists including Aaron Swartz contributed to the development of RSS

Why use it?

  • Having total control over information you consume
    • Filter/Prioritize content from various sources
    • Bypass algorithms
    • Ad-free reading
  • Offline Access
  • Time saving
  • Allows creation of a personalized & decentralized information hub

(+ Emacs RSS) ;; => 'elfeed

No matter what RSS reader you choose, they all get the same job done

  • Elfeed is a simple & highly customizable RSS client for Emacs
    • Developed by Christopher Wellons (skeeto)
  • It has a well designed database & tagging system

use-package installation example:

(use-package elfeed
  :vc (:url "https://github.com/skeeto/elfeed") ;; vc option is available on Emacs 30
  :config
  (setf elfeed-search-filter "@1-week-ago +unread"
        browse-url-browser-function #'browse-url-default-browser
        elfeed-db-directory (locate-user-emacs-file "elfeed-db"))
  ;; Feeds Example
  (setf elfeed-feeds
        '(("https://hackaday.com/blog/feed/"
           hackaday linux)))

  ;; Play videos from elfeed
  (defun elfeed-mpv (&optional use-generic-p)
    "Play video link with mpv."
    (interactive "P")
    (let ((entries (elfeed-search-selected)))
      (cl-loop for entry in entries
               do (elfeed-untag entry 'unread)
               when (elfeed-entry-link entry)
               ;; Play the entry's link with mpv.
               do (start-process-shell-command
                   "elfeed-video" nil
                   (format "mpv \"%s\"" (elfeed-entry-link entry))))
      (mapc #'elfeed-search-update-entry entries)
      (unless (use-region-p) (forward-line))))

  :bind (("C-x f" . elfeed)
         :map elfeed-search-mode-map
         ("v" . 'elfeed-mpv)
         ("U" . 'elfeed-update)))

What to do with websites that do not provide an RSS feed?

  • Utilize rss-bridge to generate one.
  • RSS Bridge is easy to self host using docker

Example guix service:

(service oci-container-service-type
         (list
          (oci-container-configuration
           (image "rssbridge/rss-bridge")
           (network "host")
           (ports
            '(("3000" . "80"))))))
-- RSS Mastery with RSS-Bridge & Elfeed [Video], November 09, 2024 12:00 AM

Aimé Bertrand: Mu4e - save attachments faster - an update

My Issue

In an older post I wrote down my solution for saving multiple attachments at once, without a completion prompt, from Mu4e. The reason being that the bulk of my use cases are to save all attachments at once.

In the rare cases where I want to select one attachment out of many, I can still use mu4e-view-mime-part-action.

Now I have encountered a few breaking updates of Mu4e, and for a while I have been dealing with another one. Granted, you are not supposed to use private functions from an Emacs package/library in your own solution. So in this case I am the one to blame.

My solution

The default command mu4e-view-save-attachments provides a way of selecting multiple files in the completion list, but even this kind of grinds my gears. I do not need the completion at all. See above.

This is the reason why I went ahead and modified the command. Simply put, this removes the completion for files. It then always asks for the directory to save to, starting with mu4e-attachment-dir.

(defun timu-mu4e-view-just-save-all-attachments ()
  "Save files from the current Mu4e view buffer.
This applies to all MIME-parts that are \"attachment-like\" (have a filename),
regardless of their disposition.

This is a modified version of `mu4e-view-save-attachments'.
It does not use `mu4e--completing-read' to select files, but just selects all.

Also it always prompts for the directory to save to."
  (interactive)
  (let* ((parts (mu4e-view-mime-parts))
         (candidates  (seq-map
                         (lambda (fpart)
                           (cons ;; (filename . annotation)
                            (plist-get fpart :filename)
                            fpart))
                         (seq-filter
                          (lambda (part) (plist-get part :attachment-like))
                          parts)))
         (candidates (or candidates
                         (mu4e-warn "No attachments for this message")))
         (files (mapcar #'car candidates))
         (default-directory mu4e-attachment-dir)
         (custom-dir (read-directory-name "Save to directory: ")))
    (seq-do (lambda (fname)
              (let* ((part (cdr (assoc fname candidates)))
                     (path (mu4e--uniqify-file-name
                            (mu4e-join-paths
                             (or custom-dir (plist-get part :target-dir))
                             (plist-get part :filename)))))
                (mm-save-part-to-file (plist-get part :handle) path)))
            files)))

Conclusion

This might be a quick and dirty way of solving my issue, however it works really well for me. You are always free to hit me up with a better solution, though. I am not an Emacs Lisp magician, so I am thinking a proper wiz might use an advice or hook or something.

-- Mu4e - save attachments faster - an update, November 08, 2024 11:00 PM

James Cherti: The compile-angel Emacs package: Byte-compile and Native-compile Emacs Lisp libraries Automatically

The compile-angel Emacs package automatically byte-compiles and native-compiles Emacs Lisp libraries. It offers two global minor modes:

  • (compile-angel-on-save-mode): Compiles when an .el file is modified and saved.
  • (compile-angel-on-load-mode): Compiles an .el file before it is loaded.

These modes speed up Emacs by ensuring all libraries are byte-compiled and native-compiled. Byte-compilation reduces the overhead of loading Emacs Lisp code at runtime, while native compilation optimizes performance by generating machine code specific to your system.

NOTE: It is recommended to set load-prefer-newer to t, ensuring that Emacs loads the most recent version of byte-compiled or source files. Additionally, ensure that native compilation is enabled; this should return t: (native-comp-available-p).
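Those recommendations translate to something like the following in an init file (load-prefer-newer is a built-in variable; the final form just reports whether native compilation is available):

```emacs-lisp
;; Load the newest of a .el/.elc pair rather than blindly preferring .elc.
(setq load-prefer-newer t)

;; Should return t if this Emacs was built with native compilation.
(native-comp-available-p)
```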

What is the difference with auto-compile?

This package is an alternative to the auto-compile Emacs package. Here are the main differences:

  • Compile-angel is lightweight: The compile-angel package is lightweight, with one-third the lines of code of auto-compile.
  • Compile-angel ensures more .el files are compiled: The compile-angel package, in addition to compiling the elisp files that are loaded using load and require, also handles .el files that auto-compile misses, such as those that are deferred (e.g., with :defer t and use-package) or autoload.

(Special thanks to Jonas Bernoulli, the creator of the auto-compile package, whose work inspired the development of compile-angel. This package was created to offer a lightweight alternative to auto-compile that also compiles deferred/autoloaded .el files.)

Installation using Straight

To install compile-angel using straight.el:

  1. If it hasn’t already been done, add the straight.el bootstrap code to your init file.
  2. Add the following code at the very beginning of your Emacs init file, before anything else:
(use-package compile-angel
  :ensure t
  :straight (compile-angel
             :type git
             :host github
             :repo "jamescherti/compile-angel.el")
  :config
  (compile-angel-on-save-mode)
  (compile-angel-on-load-mode))

Links

-- The compile-angel Emacs package: Byte-compile and Native-compile Emacs Lisp libraries Automatically (James Cherti), November 08, 2024 04:27 PM

Irreal: 🥩 Red Meat Friday: Emacs For Word

In what has to be the silliest question of the week, Kiiwyy, over at the Emacs subreddit, asks do you use Emacs as a substitute for Word. In particular, he’s wondering if people use Emacs for class or project notes. He says he uses Word for these and wonders what other people are doing.

In the first place, the concept of using Emacs as a replacement for Word seems backwards to me. Emacs can do so much more than Word that the question makes more sense the other way around.

The only real strength of Word is that all the normals use it so it’s sometimes necessary to produce a Word document for collaboration. The canonical example, and virtually the only one mentioned in the comments, is resumes. The other required Word use is in some publishing domains. A number of Journals, particularly in the humanities, insist that papers be submitted as Word documents. Similarly some book publishers really want Word documents—although they tend to be a bit more flexible—because it facilitates their production flow.

The commenters are, almost to a person, unsympathetic. They all note that, other than resumes, there really is no reason to use Word. Even if the final recipient requires a Word doc, you can always export an Org document to docx; so even for resumes you can maintain the document in Org and export it to Word, PDF, or HTML as circumstances demand.

Sure, if you’re a secretary writing business letters for your boss, use Word. In most other situations, it’s hard to see why you would. Emacs has so much more to offer and, really, is just as easy to learn, so why use Word and worry about losing your document when Microsoft has a hiccup?

-- 🥩 Red Meat Friday: Emacs For Word (jcs), November 08, 2024 04:23 PM

Thanos Apollo: Why I Prefer VC Over Magit [Video]

I’ve just uploaded a short video on why I prefer VC over Magit, alongside a quick demo workflow with VC.

The video is available on youtube here.

-- Why I Prefer VC Over Magit [Video], November 08, 2024 12:00 AM

Irreal: Why Isn’t There An Emacs 2?

Over at the Emacs subreddit, Available-Inside1426 asks why there isn’t an Emacs 2. By that he means a rewrite of Emacs to address what he sees as problems with Emacs. Those problems include the usual silliness like a better GUI implementation, better mouse support, a client-server architecture, and so on. The only improvement he suggests that makes sense to me is implementing threads.

Of course, everyone would like Emacs to have a robust thread implementation but the problems are legion. Here’s an account of one brave soul’s attempt to implement them. The TL;DR is that in the end he gave up because there’s just too much shared state built into Emacs.

One thing you hear all the time and that Available-Inside1426 repeats is that “Emacs was made for a different world”. I don’t know what that means. Sure, Emacs was made in a different world but I don’t think it’s true that it was made for a different world. After all, it was made to edit text as efficiently as possible and in that it still performs better than any other editor. My cynical suspicion is that what “made for another world” really means is that it doesn’t have enough bling, and is not centered on point and click.

Emacs development is, in fact, proceeding apace and everyone with even a bit of software development experience knows that rewriting a mature system always ends in tears. Emacs doesn’t need to be rewritten. Sure, some things would benefit from improvement but that exact process is always underway. Your pet wish may not be as high on the list as you’d like but if it’s worthwhile, it will certainly be implemented eventually. Who knows? Maybe we’ll even get a good thread model.

-- Why Isn’t There An Emacs 2? (jcs), November 07, 2024 04:44 PM

Ryan Rix: Two Updates: Org+Nix dev streams, and my new DNS resolver


I've started to stream on Thursdays where I'll explore salt dunes and arcologies

The last few weeks I have started to work in earnest on Rebuild of The Complete Computer, my effort to provide a distribution of my org-mode site publishing environment in a documented, configurable Concept Operating System. My "complete computing environment" will be delivered in three parts:

  • a set of online documents linked above that explain how I manage a small network of private services and a knowledge management environment using my custom web publishing platform, The Arcology Project.

  • a set of videos where I work through the documents, eventually edited down into a set of video lectures where you are guided from a completely fresh Fedora VM to installing Nix and a bare-bones org-roam Emacs, bootstrapping a NixOS systems management environment, and then using Org files to dynamically add new features to those NixOS systems.

  • a handful of repositories which I'll finally have to treat like "an open source project" instead of Personal Software:

    • The arcology codebase which you'll have a copy of on disk to configure and compile yourself

    • the core configuration documents that are currently indexed on the CCE page, a subset which will be required to run the editing environment, and a number of other bundles of them like "ryan's bad UX opinions", "ryan's bad org-mode opinions", "ryan's bad window manager", etc...

I hope that by reading and following along with the documents while utilizing the video resources, one can tangle source code out of the documents, then write and download more, and an indexing process will extract metadata from the files that can later be queried to say "give me all the home-manager files that go on the laptops", for example, and produce systems that use that.

https://notes.whatthefuck.computer/media/c71d50750dff59e4abdfdb788f87c1fa26190d6f0b2dc1b1fd4edf3679d58d35.png

Two weeks ago I produced a three-hour video where I played Caves of Qud and then spent two hours going over some of the conceptual overviews and design decisions while setting up Nix in a Fedora VM, ending with the Arcology running in a terminal and being used to kind-of-sort-of cobble together a home-manager configuration from a half-dozen org-mode files on disk. It was a good time! This is cataloged on the project page, 0x02: devstream 1.

This week I came back to it after taking a break last week to contribute an entry to the autumn lisp game jam, and it was a bit more of a chaotic stream with only two hours to get up to speed on the project; there are many implicit dependencies in the design and implementation of the system because it's slowly accreted on top of itself for a decade now. That was 0x02: devstream 2

This week I'll work on cleaning up things to smoothly bootstrap and next week we'll come back with a better way to go from "well home-manager is installed" to "home-manager is managing Emacs and Arcology, and Arcology is managing home-manager" and then from there we build a NixOS machine network...

I have probably a three or six month "curriculum" to work through here while we polish the Rebuild documents. I will be streaming this work and talking about how to build communal publishing networks and document group chats and why anyone should care.

With the news from the US this week, it feels imperative to teach people how to build private networks, if only because the corporatist monopolist AI algorithm gang are going to run rough-shod on what's left of the open web the second Lina Khan and Jonathan Kanter are fired if they haven't already begun today. We can host Fediverse nodes and contact lists and calendars for our friends for cheap and show each other how to use end-to-end chat and ad-blocking and encrypted DNS; we oughta.

I'll stream on twitch.com/rrix on Thursdays at 9am PT and upload VODs to a slow PeerTube server I signed up for. Come through if this sounds interesting to you.

I re-did my DNS infrastructure

Years ago I moved my DNS infrastructure to a pi-hole that was running on my Seattle-based edge host. It worked really nicely without my thinking about it when I lived in Seattle, but I put off fixing it for the years since I moved half a hundred milliseconds away. The latency got annoying enough that I finally got around to it this week.

On my devices, I've been using Tailscale's "MagicDNS" because DNS is a thing that I think should just have magic rubbed on it; as it is, I've already thought way more about DNS in my life than I'd like. If you enable MagicDNS and instruct it to use your pi-hole's address as the global nameserver, any device on your Tailnet will use the pi-hole for DNS. Neat.

Pi-hole isn't packaged in nixpkgs, and I was loath to configure Unbound etc. and a UI myself, so I put it off and fnord-ed the latency for months. I finally got around to it this week by deploying Blocky on my LAN server, which has the feature-set I need; rather than shipping a UI, it ships a minimal API and a Grafana dashboard:

https://notes.whatthefuck.computer/media/a4fad0c5daa7cf3773eaacce68a988286554f61c7c00abbba088393ffcddfccf.png

It's a neat little thing, and I hope it'll work out. I've started documenting this at Simple DNS Infrastructure with Blocky, of course.

With the querying back on my LAN and managed by my Nix systems instead of a web GUI on an unmanaged host, I can list my blocked domains and block lists in a human-legible format, I can have different DNS results to route all my server's traffic directly over the LAN to my homelab instead of round-tripping to the SSL terminator, and I can have custom DNS entries for local IPs. All this is managed in that one document, which you'll soon be able to download from my git instance; that's the Concept Operating System promise.

If you're a content Pi-hole user who never uses the web UI and needs to move, consider taking this thing for a spin.

Two Updates: Org+Nix dev streams, and my new DNS resolver (Post) -- November 07, 2024 05:45 AM

Sacha Chua: Excerpts from a conversation with John Wiegley (johnw) and Adam Porter (alphapapa) about personal information management

Adam Porter (alphapapa) reached out to John Wiegley (johnw) to ask about his current Org Mode workflow. John figured he'd experiment with a braindumping/brainstorming conversation about Org Mode in the hopes of getting more thoughts out of his head and into articles or blog posts. Instead of waiting until someone finally gets the time to polish it into something beautifully concise and insightful, they decided to let me share snippets of the transcript in case that sparks other ideas. Enjoy!

John on meetings as a CTO and using org-review

Today I was playing a lot with org-review. I'm just trying to really incorporate a strong review process, because one of the things I started doing recently is using this [Fireflies AI]​ note taker that runs in the background. Now, it produces terrible transcripts, but it produces great summaries. And at the bottom of every summary, there's a list of all the action items that everyone talked about, associated with their names.

So I now have some automation: all I have to do is download the Word document, and then a whole process in the background uses Pandoc to convert it to Org Mode. Then I have Elisp code that will automatically suck it into the file that I dedicate to that particular meeting. It will auto-convert all of the action items into Org Mode tasks: either a TODO if it's for me, or, if it's a task for somebody else, a TODO tagged with their name.
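The conversion step John describes could be sketched in Python like this. The bullet format, the `ME` name, and the function are hypothetical illustrations; his real pipeline uses Pandoc plus custom Elisp.

```python
# Hypothetical sketch: turn meeting action-item lines into Org tasks.
# Assumes items arrive as lines like "- Alice: follow up on X".
import re

ME = "John"  # assumed name for "tasks that are mine"

def action_items_to_org(lines):
    """Convert '- Name: task' lines to Org headings.

    Tasks for ME become plain TODOs; tasks for others become
    TODOs tagged with the assignee's (downcased) name."""
    out = []
    for line in lines:
        m = re.match(r"-\s*(\w+):\s*(.+)", line.strip())
        if not m:
            continue
        name, task = m.groups()
        if name == ME:
            out.append(f"** TODO {task}")
        else:
            out.append(f"** TODO {task} :{name.lower()}:")
    return out
```

Tagging with the assignee's name is what later lets a per-person dynamic block collect all open items for one one-on-one.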

Then, when I have a one-on-one with a person in the future, I now have a one-on-one template that populates that file, and part of the template is under the agenda heading. It uses a dynamic block that I've written: a new type of dynamic block that can pull from any agenda file. And what it does is it [takes] from all of those meetings all of the action items that are still open and tagged with their name.

This has been actually really, really effective. Now, I don't jump into a one-on-one being like, "Well, I didn't prepare so I don't know what to talk about." I've usually got like 10 to 30 items to go through with them to just see. Did you follow up? Did you complete this? Do we need to talk about this more?

I want to incorporate org-review. Scheduling is not sufficient for me to see my tasks. What I need is something that is like scheduling, but isn't scheduling. That's where org-review comes in. I have a report that says: show me everything that has never been reviewed or everything that is up for review.

Then I have a whole Org key space within agenda for pushing the next review date to a selected date or a fixed quantity of time. So if I hit r r, it'll prompt for the date that I want to see that again. But if I hit r w, it'll just push it out a week.

Every day I try to spend 15 minutes looking at the review list of all the tasks that are subject for review. I don't force myself to get through the whole list. I count it as success if I get through 20 of the tasks. Because earlier I had 730 of them, right? I was just chewing on them day by day.

But now I'm building this into the Org agenda population, because in the dynamic block match query, I can actually say: only populate this agenda with the tasks that are tagged for them and that are up for review. That way, if we're in the one-on-one and they say, "Oh, I'm working on that but I won't get to it for a month," I'll say, "Let's review that in a month." Then next week's one-on-one won't show that task. I don't have to do that mental filtering each time.

This is something I've been now using for a few weeks. I have to say I'm still streamlining, I'm still getting all the inertia out of the system by automation as much as possible, but it's helping me stay on top of a lot of tasks.

I'm surprised by how many action items every single meeting generates. It's like between 5 and 12 per meeting. And I have 3 to 7 meetings a day, so you can imagine that we're generating up to a hundred action items a week.

In the past, I think a lot of it was just subject to the whims of people's memory. They'll say, "I'm going to do that," and then… Did they remember to do that? Nobody's following up. Three months later, somewhere, they'll go like, "Oh yeah we talked about that, didn't we?"

So I'm trying to stem the tide of lost ideas. [My current approach] combines dynamic blocks with org-roam templates to make new files for every meeting, combines org-review to narrow down the candidate agendas appropriately each time, and combines a custom command to show me a list of all tasks that currently need review.

Reviewing isn't just about, "Is the thing done?" It's also, "Did I tag it with the right names? Did I delegate? Did I associate an effort quantity to it?" (I'm using efforts now as a way to quickly flag whether a day has become unrealistically over-full.)

I only started using column view very, very recently. I've never used it before. But now that I'm using effort strings, it does have some nice features to it: the ability to see your properties laid out in a table.

Back to table of contents

John on making meaningful distinctions (semantic or operational)

Today's agenda has 133 items on it. I need ways to narrow that agenda down.

I've used a lot of different task management philosophies. We're always looking for more efficiency, and we're looking for more personal adaptation to what works for us. I've gone from system to system. What I'm starting to realize is that the real value in all of these systems is that they're different enough from whatever you're using today that they force you to think about the system you're making for yourself; that is their value.

That's why I think there should always be a huge variety of such systems, and people should always be exploring them. I don't believe any one system can work for everybody, but we all need to be reflecting on the systems that we use. Somebody else showing you, "Hey, I do it this way" is a really nice way to juxtapose whatever system you're using.

I discovered through reading Karl Voit's articles that there are three principal information activities: searching, filtering, and browsing.

  • Hierarchies assist with browsing.
  • Tags assist with filtering.
  • Keywords and metadata assist with searching.

Those are the three general ways that we approach our data.
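As a toy illustration of those three access modes, here is a hedged Python sketch over an invented note structure (the fields and example notes are not from John's actual data):

```python
# Illustrative only: three ways of narrowing down the same note store.
notes = [
    {"path": "work/infra", "title": "Set up Zoom account",
     "tags": {"work"}, "meta": {"created": "2024-10-01"}},
    {"path": "home/chores", "title": "Buy light bulbs",
     "tags": {"errand", "walmart"}, "meta": {"created": "2024-11-01"}},
]

def browse(prefix):
    """Hierarchies: walk down a subtree of the outline."""
    return [n for n in notes if n["path"].startswith(prefix)]

def filter_by(tag):
    """Tags: narrow the set by a label."""
    return [n for n in notes if tag in n["tags"]]

def search(key, value):
    """Metadata: query a specific field."""
    return [n for n in notes if n["meta"].get(key) == value]
```

Each function answers a different question about the same data, which is the point of keeping all three mechanisms around.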

We have to do work to draw distinctions between that data. The whole reason that we're drawing distinctions between that data is to narrow our focus to what is important.

I have over 30,000 tasks in my Org Mode overall. 23,000 of them are TODOs. Several thousand of them are still currently open. I'm never gonna see them all. Even if I wanted to, I'm never gonna see them all. I don't know what to search for. I don't know what the query should be. I have to use tagging and scheduling and categorization and everything. I believe that the work of a knowledge worker is to introduce these distinctions. That takes time and it takes effort.

What's really important is to draw meaningful distinctions. Make distinctions that matter.

I could tag things for the next time I go to Walmart, so that I could do a filtered query to show me all the things I might want to do at Walmart. But is that worth the effort, or is just tagging it as an errand enough? Because that list will get within the size range where I can eyeball them all and mentally filter out the ones I need for Walmart.

What makes a meaningful distinction? I believe there are two things that make a distinction meaningful. One is semantic, and one is operational.

A semantic distinction is a distinction that changes the meaning of the task. If I have a task that says "Set up Zoom account", if that's in my personal Org Mode, that has one level of priority and one level of focused demand. If it's in my work list, that has a totally different importance and a totally different focused demand. It changes the nature of the task from one that is low urgency (maybe a curiosity) to high urgency that might impact many people or affect how I can get my work done. That distinction is meaningful or semantic. It changes the meaning of the task.

An operational distinction changes how I interact with the task. [For example, if I tag a phone call, I can] group all of my phone calls during a certain time of the day. That changes the nature of my interaction with the task: I'm doing it at a different time of day or doing it in conjunction with other tasks. That helps narrow my focus during that section of time that I have available for making calls. It's an operational distinction if it changes how you interact with the task.

You're succeeding at all of this if on any given day and any given time, what's in front of your eyes is what should be in front of your eyes. That's what all of this is about. If an operational distinction is not aiding you in that effort, it's not worth doing. It's not meaningful enough to go above the bar.

Back to table of contents

John on examples of distinctions that weren't personally worth it

I'm trying to narrow and optimize down to the minimum distinctions necessary to remain effective. If I can ever get rid of a distinction, I'm happy to do it.

I used to have projects and have categories, or what the PARA method calls areas. Projects are different from areas in that they have a definition of completion and a deadline, but that's the only distinction. I realized that distinction doesn't do me any good, because if it has a deadline, that's the distinction, right?

Calling it an area or calling it a project… I can just have projects without deadlines, and that's good enough. I have a query that shows me all projects whose deadlines are coming up within the next month, and then I'm aware of what I need to be aware of. I don't need to make the distinction between the ones that have and don't have deadlines. I just need to assign a deadline; the deadline was sufficient discrimination. I didn't need the classification difference between area and project.

And then [there's PARA's] distinction between projects, areas, and archives. I realized that there's only one operational benefit of an archive, and it's to speed things up by excluding archives from the Org ID database or from org-roam-db-sync. That's it. That's the only reason I would ever exclude archives, because otherwise I want to search in archives. org-agenda-custom-commands is already only looking at open tasks. In a way, it's by implication archiving anything that's done, in terms of its meaning.

This is all just an example of me looking at the PARA method and realizing that none of its distinctions really meant something for me.

What was meaningful was:

  • Does it have a deadline?
  • Is it bounded or not bounded?
  • Do I want it included in the processing of items?
  • [Is it a habit?]
Back to table of contents

John on habits

I did decide to draw the distinction of habits. I want them to look and feel different because I'm trying to become more habit-heavy.

I read this really brilliant book called Atomic Habits that I think has changed my life more than any other. I've read a lot of really good time management books, but this book far and away has made the biggest impact on my life. One of the philosophical points it makes that is so profound is that goal-oriented thinking is less successful in the long run than behavior-oriented thinking, or habit- or system-oriented thinking. Instead of having a goal to clean your office, have a habit to remove some piece of clutter from your office, like each time you stand up to go get a snack. You seek habits that in the aggregate will achieve the goals you seek.

I'm trying now to shift a lot of things in my to-do lists that were goals. I'm trying to identify the habits that will create systems of behavior that will naturally lead to those goals. I want habits to be first class citizens, and I want to be aware of the habits I'm creating.

I think the other thing that Atomic Habits did is it changed my conception of what a habit is. Before, I thought of a habit as "using the exercise bike" or something like that, which always made it a big enough task that I would keep pushing it off. Then I would realize I'd pushed it off for six months, and then I would unschedule it and give up on it, because it would just be glaring at me with a look of doom from my agenda list.

What's important is the consistency, not the impact of any one particular performance of that habit. It's a habit. If I do it daily, it doesn't matter how much of it I do. So even if it just means I get on the bike and I spin the pedals for three minutes, literally, that's successful completion.

Any time you have a new habit, one of the activities in mastering that habit is to keep contracting the difficulty of the habit down, down. You've got to make it so stupidly small and simple to do, that you do it just for the fun of marking it done in the agenda, right?

I have a habit to review my vocabulary lists for languages that I'm learning. I'm okay with one word. As long as I ran the app and I studied one word, that's success.

What you find happening is that you'll do the one word, and now because you're there, because you're in the flow of it, you're like, "I'll do two. You know, I'm already here. What's the big difficulty in doing two?"

So you make the success bar super low. You're trying to almost trick yourself into getting into the flow of whatever that activity is.

[org-habit org-ql list] So I have all of these habits here, and every single habit on this list is super easy to do. Five minutes is all that it would take, or even one minute for most of them. I use different little icons to group them. It also keeps the title of the habit really small. I found that when the titles were really long, I didn't like reading them all the time. It was just a wall of text. When it's one word plus an icon, it just kind of jumps out.

Back to table of contents

Adam on the Hammy timer and momentum

I took that to a bit of an extreme with my package called Hammy, for hamster. It's for timers, and the idea is kind of like being a hamster on a hamster wheel.

Anyway, one of the timers is called flywheel mode. The idea is: just do a little bit. Like, if I'm having a mental block and I can't stand working on that test today, I'm going to do five minutes. I can spend five minutes doing whatever. Next time, we do 10 minutes, then 15. Pretty soon, I'm doing 45 minutes at a stretch. Maybe when I sit down to do 5, I'll actually do 15. I'm just slowly building up that mental momentum. I'll allow myself to quit after 5 minutes, but I end up doing 20.
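The flywheel progression Adam describes can be sketched as a simple generator. The start, step, and cap values here are illustrative; in Hammy they are configurable per timer.

```python
# Sketch of a "flywheel" session timer: each session earns a slightly
# longer next session, up to a cap. Numbers are illustrative, not
# Hammy's actual defaults.
def flywheel(start=5, step=5, cap=45):
    """Yield session lengths in minutes, growing by STEP up to CAP."""
    minutes = start
    while True:
        yield minutes
        minutes = min(minutes + step, cap)

timer = flywheel()
sessions = [next(timer) for _ in range(10)]
```

The point is the low entry cost: the first session is always short enough that starting is easy, and momentum does the rest.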

Back to table of contents

John on momentum and consistency

Momentum is key. There's a flip side to this whole concept of the value of iterative improvement. The opposite remains also true.

Consistent good is your best ally, and inconsistent bad is also your ally. It's when the reverse is true that you have inconsistent good and consistent bad, that's what leads you into roads of doom.

That never occurred to me before. I would always be one of those people who would set myself up with a goal, like, I want to lose 20 pounds. I would struggle to achieve it. I would be dismayed by how hard it was to get there, and then you'd have a day when you get off the wagon and you're like, "The game is lost." And then you can't get back on again. Whereas now, that wagon is not so easy to get off of. I have to really make a concerted effort to be consistently bad in order to make things horrible again.

I almost want to change org-habit to have a different kind of visualization, because streaks are not motivators for me. Streaks punish you for losing one day out of 200, right? I don't want a graph that shows me streaks. I want a graph that shows me consistency. If I have 200 days and I've missed five of them, I'm super consistent. Maybe I could do this with colors. Just show a bar with that color, and don't show individual asterisks to show when I did it or when I didn't do it, because I find streaks anti-motivating.
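The difference between the two measures John contrasts can be made concrete. This is a hedged sketch, not org-habit code: the same history scores very differently as a consistency ratio than as a streak length.

```python
# Sketch: consistency ratio vs. longest streak over a habit history.
# A few missed days barely dent consistency but reset any streak.
def consistency(done_days):
    """Fraction of days completed, given a list of booleans."""
    return sum(done_days) / len(done_days)

def longest_streak(done_days):
    """Length of the longest unbroken run of completed days."""
    best = run = 0
    for done in done_days:
        run = run + 1 if done else 0
        best = max(best, run)
    return best

# 200 days with five misses: 97.5% consistent, yet no streak
# ever reaches even a quarter of the history.
days = [True] * 200
for miss in (30, 80, 110, 150, 190):
    days[miss] = False
```

A visualization driven by `consistency` would stay green here; a streak counter would have reset five times.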

[Discussion about other ways to display habits]

Back to table of contents

John on Life Balance by Llamagraphics

The whole principle around Life Balance [by Llamagraphics]​ was: you take all of your tasks, you categorize them, you associate difficulty to them and priority and everything else. Then it tries to use heuristics to determine if your life is being balanced, [and it percolates certain tasks to the top of your list].

If the system's doing a good job, then your agenda list should always be A-Z pretty much the best order in which you ought to do things. It didn't just do category-based balance, it also did difficulty-based balance. You should only be doing super hard stuff once in a while. You do a hard thing, then you do lots of easy things, then you do a hard thing.

Now, I'm wondering… This idea of momentum is very similar to the idea of balance. "Have established momentum with a system of behavior" is similar to "Have an established balance with all of the tasks that I do related to different activities." Is there a data architecture that would allow me to do both of these things?

The whole idea of making the habits be colors and then sorting them according to the spectrum is literally just to achieve balance among how much attention I'm paying to different habits.

[Discussion about dynamic prioritization]

Back to table of contents

Adam on the structure of his TODO view

My fundamental system right now is two org-ql views. There's the view of tasks that are scheduled for today or have a deadline of today, and then there's a view of tasks that I've decided need to be done, but I haven't decided when to do them yet.

[Top list]: I just pick the next task off the list or reschedule if it's not important enough now. But then when that's empty, if it ever gets that way, it's the second view. I decide, okay, there's something I need to do. I can do that on Tuesday. Then it disappears until I need to think about it again.

This separates deciding what to do from deciding when to do it. Then I can just switch into my own manager mode for a moment, and then switch into "just put your head down and do the work" mode.
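The two-view split could be sketched like this. Adam's real views are org-ql queries over Org files; the task dictionaries and field names here are invented for illustration.

```python
# Toy sketch of the two-view split: "do now" vs. "not yet planned".
import datetime

def today_view(tasks, today):
    """Tasks scheduled or deadlined for today or earlier."""
    return [t for t in tasks
            if any(t.get(key) is not None and t[key] <= today
                   for key in ("scheduled", "deadline"))]

def unplanned_view(tasks):
    """Decided-to-do tasks with no planning date yet."""
    return [t for t in tasks
            if t.get("scheduled") is None and t.get("deadline") is None]
```

When the first view empties out, the second is where the "manager mode" decision happens: pick a task, give it a date, and it leaves the unplanned list.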

[More details]

The top view is basically tasks that have a deadline, that are relevant to now (either deadline today or in the past), or it's an item that I've scheduled to work on today or in the past.

The view below that is items that have no planning date. I need to give them one, or maybe they can just sit in that list of projects that have no next task. I use a project heading to [note] something that needs to be subdivided. If I don't have a next task for it, it'll show up there to remind me to give it one. Once it has a next task, [that] task will appear instead of the project heading until I schedule it. Anything I've forgotten to schedule will show up in that list.

Below that, I just have a small window that shows me things I've completed or clocked in the past week.

And then, another small window shows me anything that's a project status so I can get an overview.

In the work file itself, I have a number of links to org-ql views, like "Show me all my top level projects," "Show me tasks I need to talk to my boss about" or somebody else.

Back to table of contents

John on Org and data consistency

Org Mode is really a database, right? It's a database of highly structured data that has a lot of associated metadata.

The value of that data requires a certain level of consistency, which is work that we have to do. In the same way we do work drawing distinctions, we need to do work to keep that data consistent. Am I using this [property]? Am I using this tag to mean the right thing? Karl Voit says that one of the most valuable things, if you're going to use tagging to organize your data, is a constrained tag vocabulary. Make a fixed list. Then it's an error if you tag something and it's not in that list, because you either need to expand the list or you need to choose a better tag. That's really valuable.
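A constrained tag vocabulary is easy to sketch as a check. The vocabulary below is just an example; the idea is only that an unknown tag is an error, not a silent new category.

```python
# Sketch of Karl Voit's constrained tag vocabulary: tagging with
# anything outside the fixed list is flagged. Example vocabulary only.
VOCABULARY = {"errand", "phone", "work", "habit", "review"}

def check_tags(tags, vocabulary=VOCABULARY):
    """Return the tags not in the controlled vocabulary, sorted."""
    return sorted(set(tags) - vocabulary)
```

On a violation you either extend the vocabulary deliberately or pick a better existing tag, which is exactly the decision the constraint is meant to force.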

Even though I use org-lint on all my Org files, I found serious data errors. [The newline before an initial star had been lost], and then Org wouldn't see the entry. I never knew that it wasn't even participating in any of my queries. I just didn't know stuff like that.

I created a whole bunch of Haskell libraries that allow me to parse Org Mode data. It's a very opinionated parser. It's a very strict parser. It will not parse data files that do not have the exact shape and text and taxonomy that I want.

I wrote a linting module that basically encodes every single rule that I have ever wanted to apply to my data. Like, in the title of an Org Mode heading, I don't want two spaces. I don't want excess white space. That should be a rule, right?

[Multiple examples, including when a file had TODO entries but didn't have a TODO filetag.]

My linter makes sure that this rule is consistently maintained. Being able to have an aggressive, thorough, universal consistency throughout all of my org data has really put my mind at ease. I can't break my data because I just won't be able to commit the broken data into git. I find myself adding new linting rules on a weekly basis. The more that I add, the more value my data has, because the more regular it is, the more normal, the more searchable.
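Two of the rules mentioned above could be sketched as follows. John's actual linter is a strict Haskell parser; these simplified checks (and the `:todo:` filetag spelling) are illustrative assumptions.

```python
# Sketch of two lint rules: no extra whitespace in heading titles,
# and files with TODO entries must carry a TODO filetag. Simplified;
# the real linter is a strict Haskell parser with many more rules.
import re

def lint_org(text):
    """Return a list of rule violations for an Org buffer's text."""
    problems = []
    lines = text.splitlines()
    for i, line in enumerate(lines, 1):
        # Rule 1: headings must not contain doubled spaces.
        if re.match(r"\*+ ", line) and "  " in line.strip():
            problems.append(f"line {i}: extra whitespace in heading")
    # Rule 2: TODO entries require a (hypothetical) :todo: filetag.
    has_todo = any(re.match(r"\*+ TODO\b", ln) for ln in lines)
    has_filetag = any(re.match(r"#\+filetags:.*:todo:", ln, re.I)
                      for ln in lines)
    if has_todo and not has_filetag:
        problems.append("file has TODO entries but no :todo: filetag")
    return problems
```

Run over every file before commit, a check like this is what makes "I can't commit broken data into git" possible.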

Back to table of contents

My takeaways


Comments

TIL about column view in #orgmode thanks to this great post from @sacha

@donaldh@hachyderm.io

What a pleasure it is to read an article by @sacha (in this case, link) and to discover that John Wiegley uses org-review (https://github.com/brabalan/org-review), a little thing I wrote 10 years ago…

@brab@framapiaf.org

Very interesting to see Adam and John's workflows. Org is so flexible and powerful. I always learn something new watching other people do org stuff.

Nice article, Sacha!

mickeyp on Reddit

Excerpts from a conversation with John Wiegley (johnw) and Adam Porter (alphapapa) about personal information management (Post, Sacha Chua) -- November 06, 2024 07:29 PM

Sacha Chua: Interactively recolor a sketch

I wanted to be able to change the colours used in a sketch, all from Emacs. For this, I can reuse my Python script for analyzing colours and changing them and just add some Emacs Lisp to pick colours from Emacs.

2024-11-05-14-15-55.svg
Figure 1: Selecting the colour to replace
2024-11-05-14-16-04.svg
Figure 2: Selecting the new colour
(defvar my-recolor-command "/home/sacha/bin/recolor.py")

(defun my-image-colors-by-frequency (file)
  "Return the colors in FILE."
  (with-temp-buffer
    (call-process my-recolor-command nil t nil (expand-file-name file))
    (goto-char (point-min))
    (delete-region (point-min) (1+ (line-end-position)))
    (mapcar (lambda (o) (concat "#" (car (split-string o "[ \t]"))))
            (split-string (string-trim (buffer-string)) "\n"))))

(defun my-completing-read-color (prompt list)
  "Display PROMPT and select a color from LIST."
  (completing-read
   (or prompt "Color: ")
   (mapcar (lambda (o)
             (faces--string-with-color o o))
           list)))

(defun my-image-recolor-interactively (file)
  (interactive (list (read-file-name "File: " (concat my-sketches-directory "/") nil t
                                     nil
                                     (lambda (file) (string-match "\\.png\\'" file)))))
  (save-window-excursion
    (find-file file)

    ;; Identify the colors by frequency
    (let (choice done)
      (while (not done)
        (let* ((by-freq (my-image-colors-by-frequency file))
               (old-color (my-completing-read-color "Old color: " by-freq))
               (new-color (read-color "New color: " t))
               (temp-file (make-temp-file "recolor" nil (concat "." (file-name-extension file))))
               color-map)
          ;; `read-color' with a non-nil second argument returns a
          ;; 12-digit #RRRRGGGGBBBB string; keep the high byte of each
          ;; component to get a 6-digit RRGGBB value.
          (when (string-match "#\\(..\\)..\\(..\\)..\\(..\\).." new-color)
            (setq new-color (concat (match-string 1 new-color)
                                    (match-string 2 new-color)
                                    (match-string 3 new-color))))
          (setq color-map (replace-regexp-in-string "#" "" (concat old-color "," new-color)))
          (call-process my-recolor-command nil nil nil
                        (expand-file-name file)
                        "--colors"
                        color-map
                        "--output" temp-file)
          (find-file temp-file)
          (pcase (read-char-choice "(y)es, (m)ore, (r)edo, (c)ancel: " "ymrc")
            (?y
             (kill-buffer)
             (rename-file temp-file file t)
             (setq done t))
            (?m
             (kill-buffer)
             (rename-file temp-file file t))
            (?r
             (kill-buffer)
             (delete-file temp-file))
            (?c
             (kill-buffer)
             (delete-file temp-file)
             (setq done t))))))))

It would be nice to update the preview as I selected things in the completion, which I might be able to do by making a consult--read command for it. It would be extra cool if I could use this WebKit-based color picker. Maybe someday!

This is part of my Emacs configuration.
Interactively recolor a sketch (Post, Sacha Chua) -- November 05, 2024 07:23 PM

Jack Baty: A Doom Emacs status update after several days

Some quick notes on my move back to Doom Emacs after a few days.

After once again hitching my wagon to Doom Emacs, I have been both elated and frustrated. I’m elated because Doom adds so many nice little quality-of-life improvements that make using Emacs downright pleasant right out of the gate. On the other hand, it ruins a few things and sometimes breaks for no reason that I can understand. It’s nice having someone maintain the basics of my config, but it’s also frustrating when those someones do it “wrong”. And sometimes running ./bin/doom upgrade breaks things. I can usually recover, but don’t love that I have to think about it. This is a side effect of having others do things for me, and it’s a fair trade.

So far, I’ve resolved most of the little issues. One of those was with Elfeed. Elfeed is unusable for me when using Doom’s default configuration. Doom includes the elfeed-extras package, and I don’t like it. The split window is annoying. What’s worse is that there’s no date column in the list of articles and there’s no simple way to include it. That’s just dumb, imo. So I disable that package and modify a few little things and it’s much better.

The remaining problem is that I sync my entire .config/emacs and .config/doom directories, and this somehow breaks because Doom adds $TMPDIR to the .config/emacs/.local/env file. Apparently, my tmp directory is not the same on both the Mini and the MBP, so I get permissions errors every time I switch machines. The workaround is to run ./bin/doom env before starting emacs when switching machines. That’s not sustainable. I’ll figure it out, but it’s one thing that still bugs me.

And oh, the key bindings! Around six months ago, I moved back to my vanilla config and decided to stop using Evil mode. It was a painful transition, but I got used to it and now the stock key bindings feel normal. The problem was that I also use several tools that only offer Vim bindings. Switching between Emacs and Vim bindings has been chaotic, to say the least. I keep tripping over myself and there’s been a lot of swearing. Going back to Doom and Evil mode has been tricky, but the muscle memory is returning, and I like the consistency in the apps I use most.

Something I dislike is using Doom’s abstractions like after! and map!. It just makes things even less “normal”. Handy, but it will make moving out of Doom harder. Not that I’d ever do that, though, right? 😆.

Right now, I’m happy with the setup. I love when Doom does something and it makes me say, “ooh, nice!”. As long as that happens more often than me saying, “WTF?! That’s dumb.”, I should be fine in Doom.

A Doom Emacs status update after several days (Post) -- November 05, 2024 11:32 AM

Sacha Chua: Emacs: Extract part of an image to another file

It turns out that image-mode allows you to open an image and then crop it with i c (image-crop), all within Emacs. I want to select a region and then write it to a different file. I think the ability to select a portion of an image by drawing/moving a rectangle is generally useful, so let's start by defining a function for that. The heavy lifting is done by image-crop--crop-image-1, which tracks the mouse and listens for events.

;; Based on image-crop.
(defun my-image-select-rect (op)
  "Select a region of the current buffer's image.

`q':   Exit without changing anything.
`RET': Select this region.
`m':   Make mouse movements move the rectangle instead of altering the
       rectangle shape.
`s':   Same as `m', but make the rectangle into a square first."
  (unless (image-type-available-p 'svg)
    (error "SVG support is needed to crop and cut images"))
  (let ((image (image--get-image)))
    (unless (imagep image)
      (user-error "No image under point"))
    (when (overlays-at (point))
      (user-error "Can't edit images that have overlays"))
    ;; We replace the image under point with an SVG image that looks
    ;; just like that image.  That allows us to draw lines over it.
    ;; At the end, we replace that SVG with a cropped version of the
    ;; original image.
    (let* ((data (cl-getf (cdr image) :data))
           (type (cond
                  ((cl-getf (cdr image) :format)
                   (format "%s" (cl-getf (cdr image) :format)))
                  (data
                   (image-crop--content-type data))))
           (image-scaling-factor 1)
           (orig-point (point))
           (size (image-size image t))
           (svg (svg-create (car size) (cdr size)
                            :xmlns:xlink "http://www.w3.org/1999/xlink"
                            :stroke-width 5))
           ;; We want to get the original text that's covered by the
           ;; image so that we can restore it.
           (image-start
            (save-excursion
              (let ((match (text-property-search-backward 'display image)))
                (if match
                    (prop-match-end match)
                  (point-min)))))
           (image-end
            (save-excursion
              (let ((match (text-property-search-forward 'display image)))
                (if match
                    (prop-match-beginning match)
                  (point-max)))))
           (text (buffer-substring image-start image-end))
           (inhibit-read-only t)
           orig-data svg-end)
      (with-temp-buffer
        (set-buffer-multibyte nil)
        (if (null data)
            (insert-file-contents-literally (cl-getf (cdr image) :file))
          (insert data))
        (let ((image-crop-exif-rotate nil))
          (image-crop--possibly-rotate-buffer image))
        (setq orig-data (buffer-string))
        (setq type (image-crop--content-type orig-data))
        (image-crop--process image-crop-resize-command
                             `((?w . 600)
                               (?f . ,(cadr (split-string type "/")))))
        (setq data (buffer-string)))
      (svg-embed svg data type t
                 :width (car size)
                 :height (cdr size))
      (with-temp-buffer
        (svg-insert-image svg)
        (switch-to-buffer (current-buffer))
        (setq svg-end (point))
        ;; Area
        (let ((area
               (condition-case _
                   (save-excursion
                     (forward-line 1)
                     (image-crop--crop-image-1
                      svg op))
                 (quit nil))))
          (when area
            ;;  scale to original
            (let* ((image-scaling-factor 1)
                   (osize (image-size (create-image orig-data nil t) t))
                   (factor (/ (float (car osize)) (car size)))
                   ;; width x height + left + top
                   (width (abs (truncate (* factor (- (cl-getf area :right)
                                                      (cl-getf area :left))))))
                   (height (abs (truncate (* factor (- (cl-getf area :bottom)
                                                       (cl-getf area :top))))))
                   (left (truncate (* factor (min (cl-getf area :left)
                                                  (cl-getf area :right)))))
                   (top (truncate (* factor (min (cl-getf area :top)
                                                 (cl-getf area :bottom))))))
              (list :left left :top top
                    :width width :height height
                    :right (+ left width)
                    :bottom (+ top height)))))))))

We can then use it to select part of an image and have ImageMagick extract that part to a new file:

(defun my-image-write-region ()
  "Copy a section of the image under point to a different file.
This command presents the image with a rectangular area superimposed
on it, and allows moving and resizing the area to define which
part of it to crop.

While moving/resizing the cropping area, the following key bindings
are available:

`q':   Exit without changing anything.
`RET': Save the image.
`m':   Make mouse movements move the rectangle instead of altering the
       rectangle shape.
`s':   Same as `m', but make the rectangle into a square first."
  (interactive)
  (when-let* ((orig-data (buffer-string))
              (area (my-image-select-rect "write"))
              (inhibit-read-only t)
              (type (image-crop--content-type orig-data))
              (left (plist-get area :left))
              (top (plist-get area :top))
              (width (plist-get area :width))
              (height (plist-get area :height)))
    (with-temp-file (read-file-name "File: ")
      (set-buffer-multibyte nil)
      (insert orig-data)
      (image-crop--process image-crop-crop-command
                           `((?l . ,left)
                             (?t . ,top)
                             (?w . ,width)
                             (?h . ,height)
                             (?f . ,(cadr (split-string type "/"))))))))
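Under the hood, image-crop--process fills the %-style placeholders in image-crop-crop-command (an ImageMagick-based shell command by default) using format-spec. Here is a hedged sketch of the substitution, assuming a 200x100 region whose top-left corner sits at (50, 30) in a PNG; the exact default command may differ between Emacs versions:

(require 'format-spec)
(require 'image-crop)

;; Illustrative only: shows how the spec alist from
;; `my-image-write-region' would be spliced into the crop command.
(format-spec image-crop-crop-command
             '((?l . 50) (?t . 30) (?w . 200) (?h . 100) (?f . "png")))
;; yields a shell command along the lines of
;; "convert -crop 200x100+50+30 - png:-"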

i w seems like a sensible shortcut for writing a region of an image.

(with-eval-after-load 'image
  (keymap-set image-mode-map "i w" #'my-image-write-region))

This is part of my Emacs configuration.
View org source for this post
Emacs: Extract part of an image to another file (Sacha Chua), November 04, 2024

Irreal: Adding A Year Tag From A Capture Template

James Dyer has a good idea for organizing his org-based notes. It's simple: add a year tag to each note, which keeps the notes organized and makes it easy to filter them by year.

When he first implemented the system, he simply hard-coded the year into the capture template but then, of course, he had to remember to update it every year. We all know what a fragile process that can be. Dyer decided he needed a better way, so he looked into generating the year tag programmatically. He found a couple of ways.

Both involve format-time-string specifiers. You can look at his post for the details, but it turns out to be really easy, so if you're interested in doing something similar you should definitely spend a couple of minutes reading it. It's another great example of the flexibility of Emacs and Org mode.
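Dyer's exact template isn't quoted here, but a minimal sketch of the idea (the "n" key and the notes file path are placeholders) uses the %(...) escape in org-capture-templates, which runs Emacs Lisp at capture time:

;; Hypothetical template: %(format-time-string "%Y") expands to the
;; current year each time a capture is made, so the tag never goes
;; stale the way a hard-coded year would.
(setq org-capture-templates
      '(("n" "Note" entry (file "~/org/notes.org")
         "* %? :%(format-time-string \"%Y\"):\n%U\n")))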

Adding A Year Tag From A Capture Template (Irreal, jcs), November 04, 2024

Please note that planet.emacslife.com aggregates blogs, and blog authors might mention or link to nonfree things. To add a feed to this page, please e-mail the RSS or ATOM feed URL to sacha@sachachua.com . Thank you!