Install and use Ollama on your local machine

ai Mar 25, 2024

Ollama

Ollama is a tool that lets you run LLMs locally through an easily manageable interface. It is self-hosted, not a fenced-in SaaS. This tutorial will only focus on using Ollama under Linux.

Installation

Those of you familiar with Docker will feel right at home with Ollama. In this tutorial, I will focus on installing and using Ollama via Docker.

Systemd Service

First, let's create a systemd service for Docker Compose. With this service, we will be able to start Ollama on boot. Note: the following commands should be prefixed with 'sudo', or you should log in as root.

Create the compose/ollama directories:

mkdir -p /etc/docker/compose/ollama

Move to the directory and create a docker-compose.yml file:

cd /etc/docker/compose/ollama
vim docker-compose.yml

Add the following content:

version: "3"

services:                                                                                                               
  ollama:                                                                                                              
    image: ollama/ollama:latest                                                                                       
    container_name: 'ollama'                                                                   
    volumes:
      - ollama:/root/.ollama
    ports:
      - 127.0.0.1:11434:11434
volumes:
  ollama:

Create the docker-compose systemd service file:

cd /etc/systemd/system/
vim docker-compose@.service

Add the following content:

[Unit]
Description=%i service with docker compose
PartOf=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/etc/docker/compose/%i
ExecStart=/usr/bin/docker compose up -d --remove-orphans
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target

Reload the systemd daemon:

systemctl daemon-reload

If you want, you can enable the service so that it starts on system boot:

systemctl enable docker-compose@ollama.service

Start the ollama service and be patient. It can take a minute or two while the ollama image is being pulled.

systemctl start docker-compose@ollama.service
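You can also ask systemd itself whether the unit started cleanly:

```shell
# shows the unit's state and its most recent log lines
systemctl status docker-compose@ollama.service
```

Since docker-compose@.service is a template unit, the "ollama" after the @ is substituted for %i, which is how the unit finds /etc/docker/compose/ollama.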

Check to see if the ollama service is running:

docker ps

You should see something like this:

CONTAINER ID   IMAGE                  COMMAND               CREATED          STATUS          PORTS                        NAMES
ec5468f50462   ollama/ollama:latest   "/bin/ollama serve"   57 minutes ago   Up 57 minutes   127.0.0.1:11434->11434/tcp   ollama

To access the ollama command more easily, add the following alias to your ~/.bashrc file:

alias ollama="docker exec -it ollama ollama"

Using Ollama

At this point, you should be ready to go. Go to the official Ollama website and browse the available models. After choosing one, open a terminal and execute the pull command. In this example, I will pull the llama3 model to my local machine:

ollama pull llama3

You can view a list of locally installed models with the following command:

ollama list

After pulling the model, run the following command to start it:

ollama run llama3

You should see the following prompt:

>>> Send a message (/? for help)

Now you can ask as many questions as you want. When you are done, type /bye to exit the session.
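Besides the interactive prompt, Ollama also exposes an HTTP API on the port we published in the compose file. As a quick sketch (assuming the llama3 model pulled above), you can send a one-off generation request with curl:

```shell
# JSON request body for Ollama's /api/generate endpoint;
# "stream": false asks for a single JSON response instead of a token stream
payload='{"model": "llama3", "prompt": "Why is the sky blue?", "stream": false}'

# send the request to the port published in docker-compose.yml
curl -s http://127.0.0.1:11434/api/generate -d "$payload" \
  || echo "could not reach Ollama on 127.0.0.1:11434"
```

This is handy for scripting, or for hooking Ollama up to editors and other front ends.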

My humble thoughts about AI

People have been throwing around terms like data science, deep learning, and large language models a lot lately. Artificial intelligence - we hear about it all the time. I am usually an early adopter of new technologies, but with AI I have been cautious.

The Stifling of the Creative Mind

"Why bother reading something no one was bothered enough to write"

To me, this statement strikes a chord. I have always been driven by a strong desire to create - whether it be art, music, writing, coding or other creative endeavors. Now a machine can produce results in most of these fields within a matter of seconds. In the last two years, that thought has sometimes been water to my flame of creativity. Why create when a machine can do the same in a few seconds? No bueno.

The Future

Suffice it to say, for the time being we are in the narrow/weak phase of AI development, which according to many futurists, such as Ray Kurzweil, will lead to what has been coined "The Singularity". The singularity is as creepy as it sounds. At this point, you had better brush up on the future of humanity by watching a few dystopian sci-fi movies.

Upon reaching "The Singularity", it seems the futures presented in those films could become a reality. Do we get a benevolent or tyrannical superintelligence? Will this be the birth of Skynet? Do we quickly evolve into bloated, greasy couch stains driven only by base pleasures? Will we be reduced to child-like dependents? Will the lines of reality become so blurred that most of us choose to live in the Matrix?

Maybe we should be more optimistic and hope for a future similar to the one presented in Star Trek: humans and machines working together in harmony, with humanity no longer driven by greed, hatred or the desire to forcibly exert power over others... Don't misunderstand me, I am a serious Star Trek fan, but I tend to have a more pessimistic view of the singularity should it come to fruition. Okay, with all that said, let's use AI as a tool.

AI as a Tool

As long as AI does not stifle or suffocate human creativity, and can instead serve as a tool or assistant in the creative process, I am not against using it. And as long as the predicted singularity has not arrived to wipe out humanity, let's be realistic: weak AI is a tool - let's use it!

Final Thoughts

Ollama is an interesting tool with many use cases. I like the fact that it is self-hosted and will even run on your laptop. I will be playing with Ollama a lot more in the future and have also integrated it into my workflow:

  • NeoVim integration for code completion, refactoring and code snippet generation. An article is on the way.
  • GUI interface for browser access

I have found it slow, but this depends on the host machine. You can speed up the process with a suitable GPU (or GPUs). Let us know what you think about Ollama as a tool. What do you think about AI or the singularity in general?
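On that note, if the host has an NVIDIA GPU, Docker Compose can pass it through to the container. The following is only a sketch, assuming the NVIDIA Container Toolkit is already installed on the host; the rest of the file is unchanged from the compose file above:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    volumes:
      - ollama:/root/.ollama
    ports:
      - "127.0.0.1:11434:11434"
    # assumes the NVIDIA Container Toolkit is installed on the host
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
volumes:
  ollama:
```

After editing the file, restart the service with systemctl restart docker-compose@ollama.service.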
