Initial Release
You can view the ollama-ai-iris online demo here. Top secret: the password is ‘SYS’.
To automate the processing of medical PDF documents, I want to employ AI to identify information such as Patient, Provider, and Chief Complaint. I have developed a prompt that tells the AI what I am looking for.
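The exact prompt lives in the repository; purely as an illustration, a prompt along these lines asks the model for the fields I care about (the wording and helper below are a hypothetical sketch, not the prompt used in the code):

```python
# Hypothetical sketch of an extraction prompt; the actual prompt in the repo may differ.
EXTRACTION_PROMPT = """You are given the text of a medical visit summary.
Identify and return the following fields:
- Patient
- Provider
- Chief Complaint
Answer with one line per field, in the form "Field: value".

Document text:
{document_text}
"""

def build_prompt(document_text: str) -> str:
    """Fill the extracted document text into the prompt template."""
    return EXTRACTION_PROMPT.format(document_text=document_text)
```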
I looked at the ollama API and created an Interoperability production with a Generic REST interface to capture requests and responses:
Below are examples of a request and the corresponding response from ollama:
It was not what I expected.
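For reference, a basic non-streaming call to ollama's /api/generate endpoint and the general shape of its JSON reply look roughly like this (illustrative values only, not the trace captured by my production):

```python
import requests

# Illustrative only: a minimal non-streaming request to a local ollama server.
# The production in this repo routes traffic through the Generic REST interface instead.
payload = {
    "model": "llama3.2",
    "prompt": "Explain the basics of machine learning.",
    "stream": False,
}
reply = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
data = reply.json()

# The JSON reply contains the generated text plus timing metadata,
# e.g. data["response"], data["total_duration"], data["eval_count"].
print(data["response"])
```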
I have downloaded PDF Medical Visit Summaries from my Doctor’s patient portal.
I already mentioned that I created an Interoperability Production. First, I created a generic REST interface so that I could trace requests and responses going to ollama. Later, I added a File Service to pick up PDF files and a BPL to extract text from them. I send the extracted text as a StreamContainer to a File Passthrough operation.
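As a rough sketch of that extraction step, assuming a library such as pypdf were used (the production's BPL may extract text through a different mechanism), the Python side could look like this:

```python
from pypdf import PdfReader

def pdf_to_text(pdf_path: str) -> str:
    """Extract plain text from every page of a PDF file.

    Minimal sketch using pypdf; the production's BPL may use a different extractor.
    """
    reader = PdfReader(pdf_path)
    return "\n".join(page.extract_text() or "" for page in reader.pages)
```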
Then I created another File Service to pick up the text files and another BPL that calls an IRIS ObjectScript classmethod, SendChat(), to query ollama. Here I employ a persistent class to measure response times and keep track of the responses coming back from ollama. To make each response visible in message traces, I send it as a StreamContainer to the File Passthrough operation.
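SendChat() itself is defined in the repository; as a rough sketch of the idea, a call to ollama through llama-index with simple response-time measurement might look like the following (names and parameters here are illustrative, not the actual classmethod or its persistent class):

```python
import time
from llama_index.llms.ollama import Ollama

def send_chat(text: str, model: str = "llama3.2") -> tuple[str, float]:
    """Send extracted document text to ollama and measure the response time.

    Illustrative sketch: the repo's SendChat() classmethod and the persistent
    class it uses for tracking responses are not reproduced here.
    """
    llm = Ollama(model=model, request_timeout=300.0)
    started = time.perf_counter()
    response = llm.complete(text)
    elapsed = time.perf_counter() - started
    return str(response), elapsed
```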
When the Production is running, it picks up any *.pdf file in the /irisdev/app/data_example/ directory. This directory is mapped to the data_example directory of the cloned git repository, so any PDF file you copy into data_example on the host, or into /irisdev/app/data_example/ inside the iris container, will be processed by the production.
To open IRIS Terminal, run:
$ docker-compose exec iris iris session iris -U IRISAPP
IRISAPP>
To exit the terminal, do any of the following:
Enter halt or h (not case-sensitive)
0. I implemented ollama-ai-iris as a containerized app. If you have git and Docker Desktop installed, see Installation: Docker below.
1. Ollama installed and running on your computer (you can download it from https://ollama.com/download). You can test that it works by running the following command at a prompt: ollama run llama3.2 "Explain the basics of machine learning."
2. Python 3.12 or above
3. Install the following Python packages using the pip install command (a quick import check is sketched after this list):
llama-index
llama-index-embeddings-huggingface
llama-index-llms-ollama
sqlalchemy-iris
4. InterSystems IRIS 2024.1 or above
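If the packages from step 3 installed correctly, a quick import check like the one below should run without errors. This is just a sanity test, not part of the application, and it assumes a recent llama-index (0.10+) where the integrations are separate packages:

```python
# Quick sanity check that the required Python packages are importable.
from llama_index.core import Document                                  # llama-index
from llama_index.embeddings.huggingface import HuggingFaceEmbedding   # llama-index-embeddings-huggingface
from llama_index.llms.ollama import Ollama                            # llama-index-llms-ollama
import sqlalchemy_iris                                                 # sqlalchemy-iris

print("All required packages imported successfully.")
```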
Make sure you have git and Docker Desktop installed.
Clone/git pull the repo into any local directory
$ git clone https://github.com/oliverwilms/ollama-ai-iris.git
Open the terminal in this directory and run:
$ docker-compose up -d
This repo got started when I forked https://github.com/RodolfoPscheidtJr/ollama-ai-iris. I had been trying to implement an OpenAI use case, but I really liked that this repo uses an ollama deployment in place of calling OpenAI.
Ollama is an open source tool that runs large language models (LLMs) directly on a computer. The advantage of Ollama is that it runs locally, which brings more security and does not require a paid subscription, as OpenAI does.
Thanks to the llama-iris library by @Dmitry Maslennikov and to iris-vector-search by @Alvin Ryanputra.
Guillaume Rongier’s Open Exchange app iris-rag-demo showed me how to deploy an ollama container via docker-compose.yml.