{"id":7589,"date":"2024-09-20T01:12:00","date_gmt":"2024-09-20T05:12:00","guid":{"rendered":"https:\/\/www.both.org\/?p=7589"},"modified":"2024-09-12T14:13:22","modified_gmt":"2024-09-12T18:13:22","slug":"using-python-and-ollama-on-a-linux-desktop","status":"publish","type":"post","link":"https:\/\/www.both.org\/?p=7589","title":{"rendered":"Using Python and Ollama on a Linux desktop"},"content":{"rendered":"<div class=\"pld-like-dislike-wrap pld-template-1\">\r\n    <div class=\"pld-like-wrap  pld-common-wrap\">\r\n    <a href=\"javascript:void(0)\" class=\"pld-like-trigger pld-like-dislike-trigger  \" title=\"\" data-post-id=\"7589\" data-trigger-type=\"like\" data-restriction=\"cookie\" data-already-liked=\"0\">\r\n                        <i class=\"fas fa-thumbs-up\"><\/i>\r\n                <\/a>\r\n    <span class=\"pld-like-count-wrap pld-count-wrap\">    <\/span>\r\n<\/div><\/div>\n<p>Continuing my exploration of using a locally hosted Ollama on my Linux desktop computer, I have been doing a lot of reading and research. Today, while having lunch with a university professor, he asked me some questions I didn\u2019t have an immediate answer to. So, I went back to my research to find the answers.<\/p>\n\n\n\n<p>My computer is a Linux desktop with an 11th-generation Intel Core i7-1165G7 processor and 64 gigabytes of RAM. Until today, I have been interacting with Ollama and several models, including Gemma, Codegemma, Phi-3, and Llama3.1, from the command line. Running the Ollama command-line client and interacting with LLMs locally at the Ollama REPL is a good start, but I wanted to learn how to use Ollama in applications. Today I began that journey.<\/p>\n\n\n\n<p>Python is my preferred language, and I use&nbsp;<a href=\"https:\/\/vscodium.com\/\">VS Codium<\/a>&nbsp;as my editor. First, I needed to set up a virtual Python environment. 
I have a \u2018Coding\u2019 directory on my computer, but I wanted to set up a separate one for this project.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>$ python3 -m venv ollama<\/code><\/pre>\n\n\n\n<p>Next, I activated the virtual environment:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>$ source ollama\/bin\/activate<\/code><\/pre>\n\n\n\n<p>Then, I needed to install the \u2018ollama\u2019 module for Python:<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>$ pip install ollama<\/code><\/pre>\n\n\n\n<p>Once the module was installed, I ran the \u2018ollama list\u2019 command to make sure that \u2018codegemma\u2019 was installed. Then, in VSCodium, I took a code snippet I found online and tailored it to generate some Python code to draw a circle spiral.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import ollama\n\nresponse = ollama.generate(model='codegemma', prompt='Write a Python program to draw a circle spiral in three colors')\nprint(response&#91;'response'])<\/code><\/pre>\n\n\n\n<p>The model query took some time to complete. Despite having a powerful computer, the lack of a GPU significantly impacted performance, even on such a minor task. 
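<\/p>\n\n\n\n<p>One way to make a long wait feel shorter is to stream the response as it is generated instead of waiting for the whole thing. This is a minimal sketch, assuming the \u2018ollama\u2019 module\u2019s stream=True option; the \u2018demo\u2019 chunks at the bottom are hypothetical stand-ins for the dictionaries a live server would yield.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>def collect_stream(chunks):\n    # Print each streamed piece as it arrives and return the full text.\n    parts = &#91;]\n    for chunk in chunks:\n        piece = chunk.get('response', '')\n        print(piece, end='', flush=True)\n        parts.append(piece)\n    return ''.join(parts)\n\n# With a running Ollama server this would be:\n#   import ollama\n#   stream = ollama.generate(model='codegemma',\n#                            prompt='Write a Python program to draw a circle spiral in three colors',\n#                            stream=True)\n#   full_text = collect_stream(stream)\n\n# Hypothetical stub chunks standing in for the streamed dictionaries:\ndemo = &#91;{'response': 'import '}, {'response': 'turtle'}]\nfull_text = collect_stream(demo)\nprint()  # newline after the streamed text<\/code><\/pre>\n\n\n\n<p>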
The resulting code looked good.<\/p>\n\n\n\n<pre class=\"wp-block-code\"><code>import turtle\n\n# Set up the turtle\nt = turtle.Turtle()\nt.speed(0)\n\n# Set up the colors\ncolors = &#91;'red', 'green', 'blue']\n\n# Set up the circle spiral parameters\nradius = 10\nangle = 90\niterations = 100\n\n# Draw the circle spiral\nfor i in range(iterations):\n    t.pencolor(colors&#91;i % 3])\n    t.circle(radius)\n    t.right(angle)\n    radius += 1\n\n# Hide the turtle\nt.hideturtle()\n\n# Keep the window open\nturtle.done()<\/code><\/pre>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"517\" height=\"506\" src=\"https:\/\/www.both.org\/wp-content\/uploads\/2024\/09\/Circle_Spiral.png\" alt=\"\" class=\"wp-image-7591\" style=\"width:545px;height:auto\"\/><figcaption class=\"wp-element-caption\">Screenshot by Don Watkins, CC BY-SA 4.0<\/figcaption><\/figure>\n","protected":false},"excerpt":{"rendered":"<p>Continuing my exploration of using a locally hosted Ollama on my Linux desktop computer, I have been doing<\/p>\n","protected":false},"author":32,"featured_media":4358,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_lmt_disableupdate":"","_lmt_disable":"","footnotes":""},"categories":[5],"tags":[],"class_list":["post-7589","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-linux"],"modified_by":"David 
Both","_links":{"self":[{"href":"https:\/\/www.both.org\/index.php?rest_route=\/wp\/v2\/posts\/7589","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.both.org\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.both.org\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.both.org\/index.php?rest_route=\/wp\/v2\/users\/32"}],"replies":[{"embeddable":true,"href":"https:\/\/www.both.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7589"}],"version-history":[{"count":4,"href":"https:\/\/www.both.org\/index.php?rest_route=\/wp\/v2\/posts\/7589\/revisions"}],"predecessor-version":[{"id":7595,"href":"https:\/\/www.both.org\/index.php?rest_route=\/wp\/v2\/posts\/7589\/revisions\/7595"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.both.org\/index.php?rest_route=\/wp\/v2\/media\/4358"}],"wp:attachment":[{"href":"https:\/\/www.both.org\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7589"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.both.org\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7589"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.both.org\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7589"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}