Avicena - Hello World!

What is Avicena?

Avicena is an AI assistant designed to help doctors focus more on patient care by automating administrative tasks. It records visits live, transcribes conversations automatically, and produces structured documentation.

Key Features

Avicena is not just about automating tasks. It's about enhancing healthcare delivery. Here are some of the key features we're developing:

  • Automated Documentation: Records visits, transcribes conversations, and produces structured notes, saving doctors hours of paperwork.

  • AI Guidance: Doctors get 24/7 access to medical search, personalized patient education resources, and support from our AI agent.

  • Intelligent Triage: Applies NLP to intake calls to classify cases by urgency and route patients to the right level of care, ensuring critical cases are handled quickly.

  • Clinical Analytics: Analyzes patient data and histories to surface trends, risk factors, and gaps in care, enabling early identification of potential health risks and personalized care plans.

  • AI Agents: 24/7 chatbots that provide instant access to up-to-date medical guidance, personalized patient education resources, and help with billing and coding, reducing doctor workload.
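
To make the triage idea concrete, here is a minimal sketch of urgency-based routing. The keyword lists and care tiers are illustrative assumptions, not Avicena's actual model, which would use trained NLP rather than keyword matching:

```python
# Toy urgency classifier illustrating keyword-based triage routing.
# A real system would use a trained NLP model; these keywords are illustrative.
URGENT_TERMS = {"chest pain", "shortness of breath", "bleeding", "unconscious"}
MODERATE_TERMS = {"fever", "vomiting", "persistent cough", "rash"}

def triage(transcript: str) -> str:
    """Route an intake-call transcript to a care tier by keyword match."""
    text = transcript.lower()
    if any(term in text for term in URGENT_TERMS):
        return "emergency"
    if any(term in text for term in MODERATE_TERMS):
        return "same-day visit"
    return "routine appointment"

print(triage("Patient reports chest pain radiating to the left arm"))  # emergency
print(triage("Mild fever since yesterday"))                            # same-day visit
```

Even this crude version shows the routing contract: transcript in, care tier out, with urgent cases short-circuiting first.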

Key Benefits

With Avicena, doctors can:

  • Spend more time delivering patient care
  • Make faster, more informed decisions with data insights
  • Reduce burnout and improve work-life balance
  • Expand access and quality of care across patient populations

Patients, in turn, get more focused visits, rapid access to trusted guidance, and personalized education for better health outcomes.

Traction and Roadmap

We currently have four healthcare professionals actively testing Avicena, with positive feedback so far. We plan to launch publicly in Q3 2023, starting in North America.

Our roadmap includes EHR integrations, automated coding/billing, diagnostic decision support, and expanding to more use cases like chronic care management.

Avicena's Demo

We've developed a demo to showcase Avicena's capabilities. It demonstrates how Avicena can record visits, automate transcription, produce structured documentation, perform semantic search, and interact with users through a context-aware AI chat. You can check out the demo here.
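
As a rough illustration of the structured-documentation step the demo performs, the sketch below groups a "Speaker: text" transcript into note sections. The transcript format and section names are assumptions for illustration, not the demo's actual output schema:

```python
# Sketch: turning a transcribed visit into a structured note.
# The "Speaker: text" line format and section labels are assumptions.
def structure_transcript(lines: list[str]) -> dict:
    """Group transcript lines into a minimal structured note."""
    note = {"subjective": [], "plan": []}
    for line in lines:
        speaker, _, text = line.partition(": ")
        if speaker == "Patient":
            note["subjective"].append(text)          # patient-reported symptoms
        elif speaker == "Doctor" and "recommend" in text.lower():
            note["plan"].append(text)                # doctor's recommendations
    return note

visit = [
    "Patient: I've had a headache for three days.",
    "Doctor: Any sensitivity to light?",
    "Patient: Yes, a little.",
    "Doctor: I recommend hydration and a follow-up in one week.",
]
note = structure_transcript(visit)
print(note["plan"])
```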

Avicena's Team

Our team includes lead engineers, founders, and researchers in machine learning and distributed systems. We're also collaborating with advisors in the medical field and researchers at Georgia Tech and Simon Fraser University.

How are we different?

Avicena differentiates itself with our Domain-Specific Language Models (DsLLMs) and a comprehensive dashboard. Unlike traditional systems that focus on immediate data, Avicena extends a doctor's memory and throughput by providing an interconnected healthcare platform. Our DsLLMs are fine-tuned on medical corpora to understand clinical context, powering nuanced capabilities that generic models cannot match. Our integrated dashboard also gives doctors a single unified workspace.

Business Model

We employ a SaaS model with usage-based pricing. Healthcare organizations pay based on the number of patient visits and services through the platform.
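
A minimal sketch of how usage-based billing could be computed under this model; all rates and service names below are placeholders, not our actual pricing:

```python
# Minimal sketch of usage-based SaaS billing: a per-visit rate plus
# per-service fees. All prices here are hypothetical placeholders.
PER_VISIT = 2.50          # hypothetical charge per documented visit
SERVICE_FEES = {          # hypothetical add-on services
    "transcription": 0.75,
    "triage": 0.40,
}

def monthly_invoice(visits: int, services: dict[str, int]) -> float:
    """Total charge for one billing period."""
    total = visits * PER_VISIT
    for name, count in services.items():
        total += SERVICE_FEES[name] * count
    return round(total, 2)

print(monthly_invoice(100, {"transcription": 80, "triage": 30}))  # 322.0
```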

The Future of Avicena

We're planning for a soft launch by the end of this week and a full Beta within two weeks. We're also working on integrating with FHIR-based EMRs and ensuring data privacy and security. We envision leveraging this technique to develop DsLLMs for other industries in the near future.

We're just getting started. Our vision for Avicena extends beyond these features. We're exploring how we can use AI to improve diagnostic accuracy, reduce multiple visits, and extend medical services to underserved regions.

Get in Touch

Excited about how Avicena can transform healthcare delivery? We'd love to demo our product and discuss partnership opportunities. Email us at hello@avicena.ai

Appendices

How Does Avicena Work?

Avicena uses a combination of tools, including vector databases, AI agents, and Domain-Specific Language Models (DsLLMs), to streamline doctors' day-to-day work. We also added natural language queries so doctors can interact with Avicena conversationally.
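
To illustrate how a natural-language query against a vector store works in principle, here is a toy example using word-count vectors and cosine similarity; a real deployment would use learned embeddings and a dedicated vector database rather than this bag-of-words sketch:

```python
# Toy semantic search over a tiny "vector store": documents and the query
# are embedded as word-count vectors and ranked by cosine similarity.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Bag-of-words embedding: word -> count."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

docs = [
    "patient reports chronic lower back pain",
    "follow-up visit for diabetes management",
    "annual physical exam with bloodwork",
]
query = "back pain treatment"
best = max(docs, key=lambda d: cosine(embed(query), embed(d)))
print(best)  # the back-pain document ranks highest
```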

graph LR
    subgraph LLM Architecture
        LLM -->|Data Preparation| DataPreparation
        LLM -->|Prompts| Prompts
        LLM -->|Speech To Text| SpeechToText
        LLM -->|Text To Speech| TextToSpeech

        DataPreparation -->|Character Data| CharacterData
        DataPreparation -->|Character Catalog| CharacterCatalog

        Prompts -->|Llamaindex| Llamaindex
        Prompts -->|Vector Decomposition| VectorDecomposition

        SpeechToText -->|whisper| whisper
        TextToSpeech -->|Do| Do

        CharacterCatalog -->|Chroma| Chroma

        LLMOrchestration -->|LlmChain| LlmChain
        LLMOrchestration -->|OpenAI| OpenAI
        LLMOrchestration -->|Claude| Claude

        InteractionsDB -->|SQLite| SQLite
        InteractionsDB -->|Google Cloud| GoogleCloud
    end

    subgraph Data Engineering
        DataSources -->|Snowflake| Snowflake
        DataSources -->|BigQuery| BigQuery
        DataSources -->|RedShift| RedShift
        DataSources -->|Data Lakes| DataLakes
        DataSources -->|Databases| Databases

        Avicena -->|Feature Groups| FeatureGroups
        Avicena -->|Document Retrievers| DocumentRetrievers
        Avicena -->|LLMs| LLMs

        DataSources -->|Avicena| Avicena

        DocumentRetrievers -->|Vector Stores| VectorStores
        DocumentRetrievers -->|Doc-Stores| DocStores
        DocumentRetrievers -->|Search Indices| SearchIndices

        LLMs -->|GPT-4| GPT-4
        LLMs -->|PaLM| PaLM
        LLMs -->|Claude-2| Claude-2
        LLMs -->|Open-Source| OpenSource
        LLMs -->|Fine-Tuned| FineTuned
        LLMs -->|Avicena| Avicena

        Avicena -->|Evaluation| Evaluation
    end

    subgraph ML Pipeline
        DataSources -->|APIs| APIs
        DataSources -->|Web Scraping| WebScraping
        DataSources -->|Data Pipelines| DataPipelines
        DataSources -->|Data Storage| DataStorage

        DataCleaning -->|Text Cleaning| TextCleaning
        DataCleaning -->|Numerical Cleaning| NumericalCleaning
        DataCleaning -->|Data Validation| DataValidation

        FeatureEngineering -->|Feature Extraction| FeatureExtraction
        FeatureEngineering -->|Feature Selection| FeatureSelection
        FeatureEngineering -->|Feature Scaling| FeatureScaling

        ModelTraining -->|Model Selection| ModelSelection
        ModelTraining -->|Model Tuning| ModelTuning
        ModelTraining -->|Model Evaluation| ModelEvaluation

        ModelDeployment -->|Model Serving| ModelServing
        ModelDeployment -->|Web App| WebApp
        ModelDeployment -->|API| API
    end

LLM Architecture

The LLM architecture category includes the components that are responsible for processing natural language input and generating text output. The main components in this category are:

  • LLM: The LLM (Large Language Model) is the central component of the LLM app stack. It is responsible for processing natural language input and generating text output. The LLM is typically a large neural network that has been trained on a massive dataset of text and code.
  • Prompts: The Prompts component is responsible for generating prompts that are used to interact with the LLM. The prompts are typically short pieces of text that give the LLM instructions on what to do. For example, a prompt might be "Write a poem about a cat" or "Generate a code snippet that reverses a string."
  • Speech To Text: The Speech To Text component is responsible for converting speech to text. This is useful for applications that allow users to interact with the LLM by speaking.
  • Text To Speech: The Text To Speech component is responsible for converting text to speech. This is useful for applications that allow the LLM to generate text that is spoken aloud.
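
A minimal sketch of the Prompts component: assembling a task and optional context into the instruction string handed to the LLM. The template wording is illustrative, not our production prompt:

```python
# Sketch of a prompt template: combine a role, optional context, and a
# task into the final instruction string sent to the LLM.
def build_prompt(task: str, context: str = "") -> str:
    parts = ["You are a clinical documentation assistant."]
    if context:
        parts.append(f"Context:\n{context}")
    parts.append(f"Task: {task}")
    return "\n\n".join(parts)

prompt = build_prompt(
    "Summarize the visit as a structured note.",
    context="Patient reports mild fever for two days.",
)
print(prompt)
```

Keeping the template in one place like this makes the instructions given to the model easy to audit and revise.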

Data Engineering

The data engineering category includes the components that are responsible for preparing the data that is used to train the LLM. The main components in this category are:

  • Data Preparation: The Data Preparation component is responsible for cleaning and formatting the data that is used to train the LLM. This includes tasks such as removing noise from the data, normalizing the data, and splitting the data into training and test sets.
  • Avicena: The Avicena component is a system that is used to index and search large datasets of text. This is useful for applications that need to quickly retrieve information from the data that is used to train the LLM.
  • Document Retrievers: The Document Retrievers component is responsible for retrieving documents from the data that is used to train the LLM. This is useful for applications that need to use the data in a variety of ways, such as for generating text, translating languages, or answering questions.
  • LLMs: The LLMs component is responsible for storing the LLMs that are used in the application. This is useful for applications that need to deploy multiple LLMs, such as for different languages or different tasks.
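
To show what a Document Retriever backed by a search index might look like, here is a toy inverted index mapping terms to document ids; production retrievers combine this with vector stores and doc-stores, as the diagram above suggests:

```python
# Toy document retriever: an inverted index from terms to document ids,
# returning documents that contain every query term.
from collections import defaultdict

class SearchIndex:
    def __init__(self):
        self.index = defaultdict(set)   # term -> set of doc ids
        self.docs = {}                  # doc id -> original text

    def add(self, doc_id: int, text: str) -> None:
        self.docs[doc_id] = text
        for term in set(text.lower().split()):
            self.index[term].add(doc_id)

    def retrieve(self, query: str) -> list[str]:
        """Return documents containing every query term."""
        terms = query.lower().split()
        hits = set.intersection(*(self.index[t] for t in terms)) if terms else set()
        return [self.docs[i] for i in sorted(hits)]

idx = SearchIndex()
idx.add(1, "Hypertension follow-up and medication review")
idx.add(2, "Medication list updated after allergy review")
print(idx.retrieve("medication review"))  # both documents match
```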

ML Pipeline

The ML pipeline category includes the components that are responsible for training, deploying, and evaluating the LLM. The main components in this category are:

  • Data Sources: The Data Sources component is responsible for providing access to the data that is used to train the LLM. This can include data from a variety of sources, such as text files, databases, and web APIs.
  • Data Cleaning: The Data Cleaning component is responsible for cleaning the data that is used to train the LLM. This includes tasks such as removing noise from the data, normalizing the data, and splitting the data into training and test sets.
  • Feature Engineering: The Feature Engineering component is responsible for creating features from the data that is used to train the LLM. This includes tasks such as extracting text features, creating numerical features, and scaling the features.
  • Model Training: The Model Training component is responsible for training the LLM on the data that is used to train the LLM. This includes tasks such as selecting a model, optimizing the model parameters, and evaluating the model performance.
  • Model Deployment: The Model Deployment component is responsible for deploying the LLM so that it can be used by users. This includes tasks such as creating a REST API, hosting the API, and securing the API.
  • Model Evaluation: The Model Evaluation component is responsible for evaluating the performance of the LLM. This includes tasks such as generating metrics, comparing the metrics to other LLMs, and identifying areas for improvement.
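
The pipeline stages above can be sketched end to end with toy stand-ins for each component; the data, the word-count feature, and the threshold "model" are illustrative assumptions, not real pipeline pieces:

```python
# End-to-end sketch of the pipeline stages: clean text, extract a numeric
# feature, "train" a trivial threshold model, then evaluate it.
def clean(text: str) -> str:
    return " ".join(text.lower().split())           # data cleaning

def featurize(text: str) -> int:
    return len(text.split())                        # feature engineering

def train(samples: list[tuple[str, int]]) -> float:
    """Pick a word-count threshold separating the two classes (training)."""
    counts = [(featurize(clean(t)), y) for t, y in samples]
    lo = max(c for c, y in counts if y == 0)        # longest class-0 sample
    hi = min(c for c, y in counts if y == 1)        # shortest class-1 sample
    return (lo + hi) / 2

def predict(threshold: float, text: str) -> int:
    return int(featurize(clean(text)) > threshold)  # model serving

data = [("short note", 0), ("ok", 0),
        ("a much longer clinical narrative with many words", 1)]
threshold = train(data)
accuracy = sum(predict(threshold, t) == y for t, y in data) / len(data)  # evaluation
print(accuracy)  # 1.0
```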
