Application Name | Developer | Made with | Rating | Last updated | Views | Installs |
---|---|---|---|---|---|---|
iris-python-machinelearn: Machine learning application Python IRIS | | Docker Python ML | 4.5 (1) | 22 Sep, 2023 | | |
iris-fine-tuned-ml: Train and tune a machine learning model using IRIS and Python | L | Docker Python ML | 4.0 (1) | 24 Aug, 2022 | | |
**InterSystems Ideas Waiting to be Implemented**

**AI extensibility: a Prompt keyword for Class and Method implementation, plus a Prompt macro generator**

To accelerate the growing capability of code generation, this proposal suggests new extensibility facilities and hooks that could be democratized to the community and/or fulfilled by commercial partners.

To add training metadata for refining a code Large Language Model, a "Prompt" input is associated with an expected code output as part of a class definition. This provides structured keywords to describe:

* The expected output
* And/or a chain of thought to generate the correct output

```objectscript
/// The following Prompt describes the full implementation of the class
Class alwo.Calculator [ Abstract, Prompt = "Provides methods to Add, Subtract, Multiply and divide given numbers." ]
{

/// The following Prompt describes the full implementation of the method
ClassMethod Add(arg1 As %Float, arg2 As %Float) As %Float [ Prompt = "Add numeric arguments and return result." ]
{
    return arg1 + arg2
}

ClassMethod Subtract(arg1 As %Float, arg2 As %Float)
{
    &Prompt("Subtract numeric arguments and return result")
}

}
```

The &Prompt macro generates code based on the context of the method it appears in. Once resolved, it automatically comments out the processed macro:

```objectscript
ClassMethod Subtract(arg1 As %Float, arg2 As %Float)
{
    //&Prompt("Subtract arguments and return the result")
    return arg1 - arg2
    //&Prompt("Model alwogen-objectscript-7.1.3")
}
```

The generator, invoked at compilation time, could be configured in much the same way source control is configured for a namespace, and that configuration could lock or exclude packages from being processed this way. A "\prompt" compilation flag could control the default environment and editor compilation behavior; for example, to force reprocessing of previously resolved prompts after a newer, more capable code Large Language Model becomes available, "\prompt=2" could be applied.
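The "\prompt" qualifier is only proposed; nothing like it exists in ObjectScript today. As a rough sketch of how it might be invoked if adopted alongside the existing compile flags of `$System.OBJ.Compile` (a real entry point; the qualifier and class name here are illustrative):

```objectscript
// Hypothetical usage sketch: compile a class with the proposed \prompt
// qualifier set to 2, forcing previously resolved &Prompt macros to be
// reprocessed by a newer code model. $System.OBJ.Compile and the "ck"
// flags exist today; the "\prompt=2" qualifier does not.
Do $System.OBJ.Compile("alwo.Calculator", "ck \prompt=2")
```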
Different models or third-party services could be applied depending on the language of the given method. When redacting source code for deployment, the existing "deploy" facility could be extended to also remove "Prompt" metadata from the code.

Application Name | Developer | Made with | Rating | Last updated | Views | Installs |
---|---|---|---|---|---|---|
iris-local-ml: How to use Python and IRIS to run machine learning algorithms | L | Docker Python AI ML | 4.0 (1) | 02 Aug, 2022 | | |
Blinx AI - Turn Data into Intelligence in a blinx: The App Platform for AI Lifecycle | S | AI ML | 0.0 (0) | 15 Feb, 2022 | | |
Beez-Woman-Menstrual-Tracker: A one-stop shop for tracking women's reproductive health | K | ML | 0.0 (0) | 21 Sep, 2021 | | |
covid-ai-demo-deployment: "Covid-19 AI demo in Docker" deployment, including dockerised Flask, FastAPI, Tensorflow Serving, HAProxy, and more | Z | Docker Python ML | 0.0 (0) | 07 Sep, 2020 | | |
Reducing Readmission Risks with Realtime ML: Patient readmissions are said to be the "Hello World" of machine learning in healthcare. We use this problem to show how IRIS can be used to safely build and operationalize ML models for real-time predictions, and how this can be integrated into an arbitrary application. | A | Docker ML | 3.5 (1) | 29 Jan, 2020 | | |