From 960ff7e7d1daee6f9dcbce3ac0765749e56635a5 Mon Sep 17 00:00:00 2001
From: Frederik Vanrenterghem
Date: Sun, 23 Mar 2025 17:37:42 +0800
Subject: [PATCH 1/6] Always underline links and bold them on mouse-over.

---
 source/assets/styles/org-htmlize.css | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/source/assets/styles/org-htmlize.css b/source/assets/styles/org-htmlize.css
index 2b443b2..5e41f6c 100644
--- a/source/assets/styles/org-htmlize.css
+++ b/source/assets/styles/org-htmlize.css
@@ -1,4 +1,15 @@
- body {
+a {
+  color: inherit;
+  background-color: inherit;
+  font: inherit;
+  text-decoration: underline;
+}
+a:hover {
+  font-weight: bold;
+  text-decoration: underline;
+}
+
+body {
   color: #000000;
   background-color: #ffffff;
 }
@@ -5809,13 +5820,3 @@ pre.src { background-color: #f2f2f2;}
   background-color: #ffddff;
 }
- a {
-   color: inherit;
-   background-color: inherit;
-   font: inherit;
-   text-decoration: inherit;
- }
- a:hover {
-   text-decoration: underline;
- }
-
-- 
2.39.5

From f4dd0e5270b7fb787eb8b8ed46c257d2af6c11ea Mon Sep 17 00:00:00 2001
From: Frederik Vanrenterghem
Date: Thu, 10 Apr 2025 23:40:51 +0800
Subject: [PATCH 2/6] Add post about gptel tools.

---
 source/posts/providing_tools_to_AI.org | 109 +++++++++++++++++++++++++
 1 file changed, 109 insertions(+)
 create mode 100644 source/posts/providing_tools_to_AI.org

diff --git a/source/posts/providing_tools_to_AI.org b/source/posts/providing_tools_to_AI.org
new file mode 100644
index 0000000..e54d71d
--- /dev/null
+++ b/source/posts/providing_tools_to_AI.org
@@ -0,0 +1,109 @@
+#+title: Providing tools to AI
+#+date: [2025-04-10 Thu 23:39]
+#+filetags: :emacs:ai:elisp:
+#+OPTIONS: \n:t
+
+#+BEGIN_PREVIEW
+Given all the recent conversation about agentic AI, I wanted to get a better first-hand understanding of what is possible at the moment with OpenAI APIs from within Emacs. 
I used [[https://karthinks.com/][Karthik Chikmagalur 's]] [[https://github.com/karthink/gptel][simple LLM client for Emacs, gptel]] for this evaluation.
+
+The tool has come a long way since its initial release 2 years ago. It has become a preferred LLM chat client for many, supporting multiple models and backends. The latest version introduced tool use. This gives the models a way to actually fetch data or take other actions, as [[https://platform.openai.com/docs/guides/function-calling?api-mode=responses][described in OpenAI's documentation]].
+
+I started out by providing 3 tools: writing to buffers, evaluating code and reading buffers. Using these 3 tools, and knowing I have Emacs configured to be able to interpret R code with a fairly comprehensive set of libraries installed, I set out to verify the AI model could create a graphical analysis of the variance in petal size in the iris flower family. (A common example in R tutorials found online.)
+#+END_PREVIEW
+
+First, I provide ~gptel~ with a means to create a buffer and add text to it, all using elisp. Optionally, the created buffer can be org-mode enabled. I made this optional to verify that the model indeed not only understands the purpose of functions based on their name, but also knows how to interpret the available arguments.
+
+#+BEGIN_SRC emacs-lisp
+  (gptel-make-tool
+   :function (lambda (buffer text &optional orgmode)
+               (with-current-buffer (get-buffer-create buffer)
+                 (save-excursion
+                   (goto-char (point-max))
+                   (insert text)
+                   (if orgmode (org-mode))))
+               (format "Appended text to buffer %s" buffer))
+   :name "append_to_buffer"
+   :description "Append text to an Emacs buffer. If the buffer does not exist, it will be created." 
+   :args (list '(:name "buffer"
+                 :type "string"
+                 :description "The name of the buffer to append text to.")
+               '(:name "text"
+                 :type "string"
+                 :description "The text to append to the buffer.")
+               '(:name "orgmode"
+                 :type "boolean"
+                 :description "(optional) enable (t) org-mode"))
+   :category "emacs")
+#+END_SRC
+
+Now that the AI model has access to an org-mode enabled buffer, it should be given a tool to execute [[https://orgmode.org/worg/org-contrib/babel/][Babel]] blocks therein.
+
+#+BEGIN_SRC emacs-lisp
+  (gptel-make-tool
+   :name "execute_org_babel_buffer" ; javascript-style snake_case name
+   :function (lambda (buffer)       ; the function that will run
+               (unless (buffer-live-p (get-buffer buffer))
+                 (error "Error: buffer %s is not live." buffer))
+               (with-current-buffer buffer
+                 (org-babel-execute-buffer)))
+   :description "Execute org babel code in a buffer."
+   :args (list '(:name "buffer"
+                 :type string ; :type value must be a symbol
+                 :description "the name of the buffer whose org babel content should be evaluated"))
+   :category "analysis")
+#+END_SRC
+
+Lastly, I need to give it a tool to read the buffer back. Without this ability, I found the AI model assumes it knows what happened based on the return values of the tools, which in many cases won't reflect what has actually been performed in an org file with Babel enabled. This can be overcome by pointing that out later in the chat, but that didn't seem fully /agentic/, so I gave it the ability to read too.
+
+#+BEGIN_SRC emacs-lisp
+  (gptel-make-tool
+   :function (lambda (buffer)
+               (unless (buffer-live-p (get-buffer buffer))
+                 (error "Error: buffer %s is not live." 
buffer))
+               (with-current-buffer buffer
+                 (buffer-substring-no-properties (point-min) (point-max)))))
+   :name "read_buffer"
+   :description "Return the contents of an Emacs buffer"
+   :args (list '(:name "buffer"
+                 :type "string"
+                 :description "The name of the buffer whose contents are to be retrieved"))
+   :category "emacs")
+#+END_SRC
+
+This effectively gives the AI the ability to access an interpreter in a language we choose (and have enabled in the lisp machine). When guiding the model, I can now count on its ability to enter input, evaluate the code, and read the results.
+
+Code blocks in org-mode are not a well-documented occurrence in the code forges of the internet. To beef up the AI's capability, I gave it access to my ~tempel~ code snippets. These include common building blocks of an org document with R Babel code blocks. Next, I asked it the following question:
+
+#+begin_quote
+Using the tools you have available, create an 'analysis' buffer with org-mode enabled. Put one single R code block in there, which should be the only code block in the buffer and does all of the following. The code should create a ggplot2 graphic that is stored in a variable so it can be altered afterwards. The graphic contains a comparison between petal sizes of types of iris flowers. Make sure the graphic is added to the buffer. Next, evaluate the code in the buffer and confirm under what filename the graphic has been created. Do not evaluate the code more than once. Report back (in org format) what code you created and what the result was.
+#+end_quote
+
+I had to tweak this prompt a few times. The AI encountered 2 problems. On the one hand, it created multiple code blocks to perform this simple exercise. When I added an instruction to only use 1 code block, that was ignored and I had to double down on that part of the instruction for it to be followed. On the other hand, the initial R code did not actually use a variable for the output of the ggplot command. 
As a result, the code does not actually create a graphic in org-mode. Interestingly, the AI reported, incorrectly, that it had happened anyway, and even told me in what file the resulting image was saved.
+
+These hiccups notwithstanding, the model did deliver the correct code and ensured its evaluation happened, effectively performing a small analysis on my behalf.
+
+#+begin_quote
+The =analysis= buffer has been created with the following R code block:
+
+#+begin_src org
+#+BEGIN_SRC R :results graphics file :file "iris_plot.png"
+library(ggplot2)
+# Load the iris dataset
+iris_data <- iris
+# Create a ggplot graphic comparing petal sizes by species
+iris_plot <- ggplot(iris_data, aes(x = Species, y = Petal.Length, fill = Species)) +
+  geom_boxplot() +
+  labs(title = 'Comparison of Petal Lengths by Iris Species',
+       x = 'Species',
+       y = 'Petal Length') +
+  theme_minimal()
+iris_plot
+#+END_SRC
+#+end_src
+
+The graphic has been saved as the file: =iris_plot.png=.
+
+You can find the output image in the specified filename.
+#+end_quote
+
+I have ~org-babel-execute-buffer~ set to require confirmation, which was the only manual intervention needed. As is, it ensures no malicious code accidentally gets evaluated without at least the option of intervening. Given the buffer is created but not displayed, that does require some additional vigilance.
-- 
2.39.5

From 0aa444739c2de80d926951daf57e0f37f74c5e13 Mon Sep 17 00:00:00 2001
From: Frederik Vanrenterghem
Date: Fri, 11 Apr 2025 00:10:24 +0800
Subject: [PATCH 3/6] Fix quote style in CSS. 
---
 source/assets/styles/org-htmlize.css | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/source/assets/styles/org-htmlize.css b/source/assets/styles/org-htmlize.css
index 5e41f6c..dd44e9f 100644
--- a/source/assets/styles/org-htmlize.css
+++ b/source/assets/styles/org-htmlize.css
@@ -13,6 +13,13 @@ body {
   color: #000000;
   background-color: #ffffff;
 }
+
+blockquote {
+  margin-left: 10px;
+  padding-left: 5px;
+  border-left: 3px solid #ccc;
+}
+
 pre.src { background-color: #f2f2f2;}
 .org-abbrev-table-name { /* abbrev-table-name */
-- 
2.39.5

From 457195c44dbc721746fa951fd0f6d36e7555a154 Mon Sep 17 00:00:00 2001
From: Frederik Vanrenterghem
Date: Fri, 11 Apr 2025 00:11:29 +0800
Subject: [PATCH 4/6] Give a clearer conclusion after the experiment.

---
 source/posts/providing_tools_to_AI.org | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/source/posts/providing_tools_to_AI.org b/source/posts/providing_tools_to_AI.org
index e54d71d..f146f37 100644
--- a/source/posts/providing_tools_to_AI.org
+++ b/source/posts/providing_tools_to_AI.org
@@ -1,6 +1,6 @@
 #+title: Providing tools to AI
 #+date: [2025-04-10 Thu 23:39]
-#+filetags: :emacs:ai:elisp:
+#+filetags: :emacs:ai:elisp:automation:analytics:R:
 #+OPTIONS: \n:t
 
 #+BEGIN_PREVIEW
@@ -80,12 +80,12 @@ Using the tools you have available, create an 'analysis' buffer with org-mode en
 I had to tweak this prompt a few times. The AI encountered 2 problems. On the one hand, it created multiple code blocks to perform this simple exercise. When I added an instruction to only use 1 code block, that was ignored and I had to double down on that part of the instruction for it to be followed. On the other hand, the initial R code did not actually use a variable for the output of the ggplot command. As a result, the code does not actually create a graphic in org-mode. 
Interestingly, the AI reported, incorrectly, that it had happened anyway, and even told me in what file the resulting image was saved.
-These hiccups notwithstanding, the model did deliver the correct code and ensured its evaluation happened, effectively performing a small analysis on my behalf.
+These hiccups notwithstanding, *the model did deliver the correct code and ensured its evaluation happened, effectively performing a small analysis on my behalf.* Below is the output of the model.
 
-#+begin_quote
+#+BEGIN_QUOTE
 The =analysis= buffer has been created with the following R code block:
+#+END_QUOTE
 
-#+begin_src org
 #+BEGIN_SRC R :results graphics file :file "iris_plot.png"
 library(ggplot2)
 # Load the iris dataset
 iris_data <- iris
 # Create a ggplot graphic comparing petal sizes by species
 iris_plot <- ggplot(iris_data, aes(x = Species, y = Petal.Length, fill = Species)) +
   geom_boxplot() +
   labs(title = 'Comparison of Petal Lengths by Iris Species',
        x = 'Species',
        y = 'Petal Length') +
   theme_minimal()
 iris_plot
 #+END_SRC
-#+end_src
 
+#+BEGIN_QUOTE
 The graphic has been saved as the file: =iris_plot.png=.
 
 You can find the output image in the specified filename.
-#+end_quote
+#+END_QUOTE
 
 I have ~org-babel-execute-buffer~ set to require confirmation, which was the only manual intervention needed. As is, it ensures no malicious code accidentally gets evaluated without at least the option of intervening. Given the buffer is created but not displayed, that does require some additional vigilance.
+
+I was very impressed. [[https://github.com/karthink/gptel/issues/514#issuecomment-2558826900][Another contributor already noticed we can use tools to have the model create further tools.]] This seems to open the way to building an extensive set of capabilities supporting a range of analytical functions.
-- 
2.39.5

From 481de590a30ae1e2c4fa27d9cbe39b96e101b43c Mon Sep 17 00:00:00 2001
From: Frederik Vanrenterghem
Date: Fri, 11 Apr 2025 00:17:16 +0800
Subject: [PATCH 5/6] Start with a quote from Rhetorica ad Herennium about natural and artificial memory. 
---
 source/posts/providing_tools_to_AI.org | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/source/posts/providing_tools_to_AI.org b/source/posts/providing_tools_to_AI.org
index f146f37..0e18681 100644
--- a/source/posts/providing_tools_to_AI.org
+++ b/source/posts/providing_tools_to_AI.org
@@ -4,6 +4,8 @@
 #+OPTIONS: \n:t
 
 #+BEGIN_PREVIEW
+~Sunt igitur duae memoriae: una naturalis, altera artificiosa.~
+
 Given all the recent conversation about agentic AI, I wanted to get a better first-hand understanding of what is possible at the moment with OpenAI APIs from within Emacs. I used [[https://karthinks.com/][Karthik Chikmagalur 's]] [[https://github.com/karthink/gptel][simple LLM client for Emacs, gptel]] for this evaluation.
 
 The tool has come a long way since its initial release 2 years ago. It has become a preferred LLM chat client for many, supporting multiple models and backends. The latest version introduced tool use. This gives the models a way to actually fetch data or take other actions, as [[https://platform.openai.com/docs/guides/function-calling?api-mode=responses][described in OpenAI's documentation]].
-- 
2.39.5

From 1bc144f10a39ab53047a6beaf9cce4d64a879ea2 Mon Sep 17 00:00:00 2001
From: Frederik Vanrenterghem
Date: Fri, 11 Apr 2025 00:19:54 +0800
Subject: [PATCH 6/6] Remove redundant space.

---
 source/posts/providing_tools_to_AI.org | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/source/posts/providing_tools_to_AI.org b/source/posts/providing_tools_to_AI.org
index 0e18681..8453814 100644
--- a/source/posts/providing_tools_to_AI.org
+++ b/source/posts/providing_tools_to_AI.org
@@ -6,7 +6,7 @@
 #+BEGIN_PREVIEW
 ~Sunt igitur duae memoriae: una naturalis, altera artificiosa.~
 
-Given all the recent conversation about agentic AI, I wanted to get a better first-hand understanding of what is possible at the moment with OpenAI APIs from within Emacs. 
I used [[https://karthinks.com/][Karthik Chikmagalur 's]] [[https://github.com/karthink/gptel][simple LLM client for Emacs, gptel]] for this evaluation.
+Given all the recent conversation about agentic AI, I wanted to get a better first-hand understanding of what is possible at the moment with OpenAI APIs from within Emacs. I used [[https://karthinks.com/][Karthik Chikmagalur's]] [[https://github.com/karthink/gptel][simple LLM client for Emacs, gptel]] for this evaluation.
 
 The tool has come a long way since its initial release 2 years ago. It has become a preferred LLM chat client for many, supporting multiple models and backends. The latest version introduced tool use. This gives the models a way to actually fetch data or take other actions, as [[https://platform.openai.com/docs/guides/function-calling?api-mode=responses][described in OpenAI's documentation]].
-- 
2.39.5
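
The patches above define the three tools but never show how they get attached to a chat. A minimal sketch of how they can be activated, assuming the ~gptel-use-tools~ and ~gptel-tools~ user options and the ~gptel-get-tool~ lookup helper present in recent gptel releases (check the gptel documentation for the exact names in your version):

#+BEGIN_SRC emacs-lisp
  ;; Sketch, untested: let gptel send tool definitions along with requests.
  (setq gptel-use-tools t)
  ;; Activate the tools registered earlier via `gptel-make-tool',
  ;; looked up by the names given in their :name fields.
  (setq gptel-tools
        (list (gptel-get-tool "append_to_buffer")
              (gptel-get-tool "execute_org_babel_buffer")
              (gptel-get-tool "read_buffer")))
#+END_SRC

In interactive use, the same selection can also be made per-chat from gptel's transient menu, which avoids enabling the tools globally.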