7 Application(s) results for #machine learning
Application Name | Developer | Made with | Rating | Last updated | Views | Installs

iris-python-machinelearn
Machine learning application Python IRIS
André Dienes Friedrich | Docker, Python, ML
4.5 (1) | 22 Sep, 2023 | 351

iris-fine-tuned-ml
Train and tune a machine learning model using IRIS and Python
Lucas Enard | Docker, Python, ML
4.0 (1) | 24 Aug, 2022 | 326

InterSystems Ideas Waiting to be Implemented

AI extensibility: a Prompt keyword for Class and Method implementation, plus a Prompt macro generator.

To accelerate the growth of code-generation capability, this proposal suggests new extensibility facilities and hooks that can be opened up to the community and/or fulfilled by commercial partners. To add training metadata for refining a large language model for code, a "Prompt" input is associated with an expected code output as part of a class definition. This provides structured keywords to describe:

* The expected output
* And/or the chain of thought needed to generate the correct output

/// The following Prompt describes the full implementation of the class
Class alwo.Calculator [ Abstract, Prompt = "Provides methods to Add, Subtract, Multiply and Divide given numbers." ]
{

/// The following Prompt describes the full implementation of the method
ClassMethod Add(arg1 As %Float, arg2 As %Float) As %Float [ Prompt = "Add numeric arguments and return result." ]
{
    return arg1 + arg2
}

ClassMethod Subtract(arg1 As %Float, arg2 As %Float)
{
    &Prompt("Subtract numeric arguments and return result")
}

}

The Prompt macro generates code based on the context of the method it is within. Once resolved, it automatically comments out the processed macro:

ClassMethod Subtract(arg1 As %Float, arg2 As %Float)
{
    //&Prompt("Subtract arguments and return the result")
    return arg1 - arg2
    //&Prompt("Model alwogen-objectscript-7.1.3")
}

The generator leveraged at compilation time could be configured in a similar way to how source control is configured for a namespace. Configuration could lock or exclude packages from being processed in this way. A "\prompt" compilation flag could control the default environment behavior and editor compilation behavior; for example, to force reprocessing of previously resolved prompts when a newer, more capable code language model becomes available, "\prompt=2" could be applied. Different models or third-party services could be applied depending on the language of the given method.
When redacting source code for deployment, the existing "deploy" facility could be extended to also ensure removal of "Prompt" metadata from the code.
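The resolution workflow the idea describes — find a &Prompt macro line, inject generated code, and comment out the processed macro with a record of the model used — can be sketched in Python. This is only an illustration of the proposed mechanics: the generate_code stub below is a hypothetical stand-in for whatever configured language model the compiler would actually invoke, and the function names are invented for this sketch.

```python
import re

def generate_code(prompt: str) -> str:
    # Hypothetical stub standing in for a configured LLM code generator;
    # the proposal leaves the real generator to namespace configuration.
    canned = {
        "Subtract numeric arguments and return result": "    return arg1 - arg2",
    }
    return canned.get(prompt, "    // TODO: unresolved prompt")

# Matches a line consisting solely of an &Prompt("...") macro.
PROMPT_MACRO = re.compile(r'^(\s*)&Prompt\("(?P<prompt>[^"]*)"\)\s*$')

def resolve_prompts(source: str, model: str = "alwogen-objectscript-7.1.3") -> str:
    """Replace each &Prompt(...) line with generated code, keeping the
    processed macro as a comment, as the idea describes."""
    out = []
    for line in source.splitlines():
        m = PROMPT_MACRO.match(line)
        if m:
            indent, prompt = m.group(1), m.group("prompt")
            out.append(f'{indent}//&Prompt("{prompt}")')       # comment out the macro
            out.append(generate_code(prompt))                  # inject generated body
            out.append(f'{indent}//&Prompt("Model {model}")')  # record the model used
        else:
            out.append(line)
    return "\n".join(out)

method = '''ClassMethod Subtract(arg1 As %Float, arg2 As %Float)
{
    &Prompt("Subtract numeric arguments and return result")
}'''
print(resolve_prompts(method))
```

Running this prints the Subtract method with the macro commented out, the generated `return arg1 - arg2` body inserted, and a trailing model-version comment, mirroring the before/after shown in the idea.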

by Alex Woodhead

3 Votes | 1 Comment

iris-local-ml
How to use Python and IRIS to run machine learning algorithms
Lucas Enard | Docker, Python, AI, ML
4.0 (1) | 02 Aug, 2022 | 430

Blinx AI - Turn Data into Intelligence in a blinx
The App Platform for AI Lifecycle
Suresh Vallabhaneni | AI, ML
0.0 (0) | 15 Feb, 2022 | 378

Beez-Woman-Menstrual-Tracker
A one-stop shop for tracking women's reproductive health
Katie Le | ML
0.0 (0) | 21 Sep, 2021 | 189

covid-ai-demo-deployment
"Covid-19 AI demo in Docker" deployment, including dockerised Flask, FastAPI, TensorFlow Serving, HAProxy, and more
Zhong Li | Docker, Python, ML
0.0 (0) | 07 Sep, 2020 | 423

Reducing Readmission Risks with Realtime ML
Patient readmissions are said to be the "Hello World" of machine learning in healthcare. We use this problem to show how IRIS can be used to safely build and operationalize ML models for real-time predictions, and how this can be integrated into an arbitrary application.
Amir Samary | Docker, ML
3.5 (1) | 29 Jan, 2020 | 841