Leverage AI services


Overview

Did you know that CAST Imaging can interface with various third-party Artificial Intelligence services, allowing you to obtain additional AI-driven insight into your application source code? You can obtain information about specific objects, objects within the scope of your application and about specific transactions.

How does it work?

Set up is simple: obtain an API key or other configuration settings from a supported AI provider, configure it in CAST Imaging via the Admin option and enable the AI-driven CAST Imaging services you require. A user with the Administrator role is required to configure the settings.

You can find out more in AI Settings:

What can I do with AI?

All AI-driven services in CAST Imaging provide additional information about the code within your application. For example, you may want to find out exactly what a specific object is designed to do, based on AI’s interpretation of its code, or you may want to get more insight into the full chain of a transaction in your application.

There are five main AI-driven features available:

Each available service is explained in more detail below.

Generate AI Summary

This feature is available for the Application, Transaction, Data Call Graph and Module scopes. Note that a connection to a data source (to retrieve code) is required for all of these scopes except the Application scope:

How does it work?

  1. User triggers summary generation: the user initiates summary generation from the user interface.
  2. System filters relevant objects: objects with a cyclomatic complexity of less than 1 will be excluded.
  3. System shares the object code snippets and metadata with the LLM: the system shares:
    • Object metadata (Object name, count)
    • Source code snippet for objects/sub-objects
    • Link type between objects
  4. LLM reads and generates an object-level purpose for each object:
    • Identify the purpose and functionality of individual objects
    • Process multiple objects in parallel for efficiency
  5. LLM generates summary based on object-level purposes:
    • LLM uses a composite prompt built from individual code piece insights to generate a summary
    • Focus on identifying and listing object-level purposes in the order of decreasing complexity, starting with those most likely to contain meaningful logic
  6. Send the raw summary to the LLM for formatting: prompt the LLM with the raw summary to generate and return a formatted response:
    • Technical Explanation
    • Functional Explanation
    • Objects of Interest
  7. Store the response in Neo4j: the response received from the LLM is stored in the Neo4j database.
  8. Display summary for the user
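The filtering and prompt-composition steps above can be sketched in code. This is an illustrative approximation only: the class and function names (`CodeObject`, `call_llm`, the prompt wording) are assumptions, not CAST Imaging's actual implementation, and the LLM call is stubbed out.

```python
# Hedged sketch of the AI Summary pipeline (steps 2-5 above).
from dataclasses import dataclass

@dataclass
class CodeObject:
    name: str
    complexity: int   # cyclomatic complexity
    snippet: str      # source code snippet

def filter_objects(objects):
    """Step 2: exclude objects with a cyclomatic complexity below 1."""
    return [o for o in objects if o.complexity >= 1]

def object_purpose_prompt(obj):
    """Steps 3-4: prompt asking the LLM for one object's purpose."""
    return f"Describe the purpose of `{obj.name}`:\n{obj.snippet}"

def composite_summary_prompt(purposes, objects):
    """Step 5: composite prompt; objects ordered by decreasing complexity,
    so those most likely to contain meaningful logic come first."""
    ranked = sorted(objects, key=lambda o: o.complexity, reverse=True)
    lines = [f"- {o.name}: {purposes[o.name]}" for o in ranked]
    return "Summarize the scope from these object purposes:\n" + "\n".join(lines)

def call_llm(prompt):
    """Stand-in for the real provider call (steps 4-6 would hit the LLM here)."""
    return f"[LLM response to {len(prompt)} chars of prompt]"

objs = filter_objects([
    CodeObject("parseOrder", 7, "def parseOrder(...): ..."),
    CodeObject("trivialGetter", 0, "def trivialGetter(self): return self.x"),  # excluded
])
purposes = {o.name: call_llm(object_purpose_prompt(o)) for o in objs}
summary = call_llm(composite_summary_prompt(purposes, objs))
```

In the real system, steps 4 (per-object purposes) run in parallel against the provider, and the final formatted response is persisted to Neo4j (step 7) rather than kept in memory.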

Key points:

  • The source code for the objects involved is shared with the LLM.
  • AI Summary can be generated for different scopes (Individual/Bulk: Transactions / Data Call Graphs / Modules / Projects / Services / Custom Views / Saved Views)
  • The summary can be saved as a Post-It for documentation reference.

What does it do?

It will generate a summary of the purpose of the selected scope, display it and then give you the option to save the summary to a Post-It:

Ensure you save the Post-It if you want to access it in a future session from the Post-It section in the right panel:

Bulk AI summary

Creates an AI-driven explanation of all the objects within a given module, transaction, data call graph, service or project structure, displayed in a Post-It associated with the object itself. To access this feature, use the Customize the Results option available in the Landing page:

Then choose the Bulk AI Summary tab:

To generate the AI-driven explanation:

  • select the scope you are interested in:
    • module
    • transaction
    • data call graph
    • service
    • project structure

  • choose the method of generating the explanation:
    • Graph Representation Method: summarizes transactions by providing the entire transaction graph, offering a comprehensive overview of all connected objects and their relationships for a holistic understanding
    • Exclusive Object Method: summarizes transactions by focusing on unique objects within the transaction graph, highlighting key points of divergence or importance
  • choose the item(s) you want to generate the summary for
  • click the Generate Summary button to begin the process

A green tick indicates that the process has completed. A status icon also shows the progress of any ongoing and queued jobs:

To view the result:

  • consult your application results
  • switch to the scope and select the item you have generated an AI-driven explanation for
  • drill down to the object level
  • the items will display in a new tab
  • click the Post-It to view the AI explanation for each object:

CAST Imaging Assistant Chatbot (Ask me anything)

The CAST Imaging Assistant is a simple interface for asking direct questions about the application you are working on. It is available at any level and in all features within CAST Imaging when consulting results:

The responses returned by the chatbot are tailored to your application, for example you could ask about the technology stack that exists in your application, what frameworks are used, any dependencies that may exist etc.:

How does it work?

  1. User asks a question
  2. System prepares the prompt and shares it with LLM: the system shares the:
    • Question
    • Application Context (Types of objects and interactions, LOC details, technologies)
    • Function definitions
  3. LLM reads the prompt: the LLM picks the function definition based on the question.
  4. LLM tries to answer the question: the LLM provides inputs to the function definition based on the question asked and application context.
  5. Process response received from LLM: the system takes the function inputs returned by the LLM and queries Neo4j to retrieve the corresponding data.
  6. Prompt LLM to fetch a natural language response: send the information back to the LLM to generate a natural language response in the expected format.
  7. Display answer to the user
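The chatbot flow above is essentially an LLM function-calling loop. The sketch below is a hedged approximation: the function definition, the stand-in data and the keyword matching are all illustrative assumptions, and a real LLM and Cypher query against Neo4j would replace the stubs.

```python
# Hedged sketch of the chatbot's function-calling loop (steps 2-6 above).

# Step 2: function definitions shared with the LLM alongside the question.
FUNCTIONS = {
    "count_objects_by_type": {
        "description": "Count application objects of a given type",
        "parameters": {"object_type": "string"},
    },
}

def pick_function(question):
    """Steps 3-4: a real LLM would choose the function and fill in its
    inputs; a trivial keyword match stands in for that choice here."""
    if "table" in question.lower():
        return "count_objects_by_type", {"object_type": "Table"}
    return "count_objects_by_type", {"object_type": "Object"}

def run_query(name, args):
    """Step 5: in the real system this would be a Cypher query against
    Neo4j, e.g. MATCH (o {type: $object_type}) RETURN count(o)."""
    fake_graph = {"Table": 42, "Object": 1280}   # stand-in data
    return fake_graph.get(args["object_type"], 0)

def answer(question):
    name, args = pick_function(question)
    data = run_query(name, args)
    # Step 6: the raw data would be sent back to the LLM for phrasing.
    return f"Your application contains {data} {args['object_type']} objects."
```

Note that, consistent with the key points below, only metadata and graph query results flow through this loop; no source code is sent to the LLM.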

Key points:

  • The application source code is not shared with the LLM for the chatbot.
  • Chatbot interactions can be both at application level and specific view level.
  • Predefined views are suggested based on the question asked.
  • Multi-Language support

What does it do - Ask me anything - quick prompts

Three main “frequently asked questions” are provided as prompts - use these to get started:

What does it do - Ask me anything - Predefined view recommendation

The assistant will recommend a predefined view if one is available and relevant to the query, for example after asking about database objects in the application, the assistant will recommend the predefined view Database Access:

Explain Object Context with AI

This option provides an AI-driven explanation of the code in the object within its context.

How does it work?

  1. User triggers the object context explanation: the user initiates explanation generation from the user interface.
  2. System shares the object and caller/callee code snippets with the LLM: the system shares:
    • Source code snippet for the object and its first-level caller/callee objects
    • Link type between the object and its first-level caller/callee objects
  3. LLM reads the code and generates an object-level explanation: the LLM produces an explanation of the shared source code in simple natural language
  4. Display explanation for the user
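The prompt assembly implied by step 2 above can be sketched as follows. The data structures and prompt wording are assumptions for illustration, not CAST Imaging's actual format.

```python
# Hedged sketch of the explain-object-in-context prompt (step 2 above).
def explain_in_context_prompt(obj, callers, callees, links):
    """Bundle the object, its first-level callers/callees and the link
    types between them into a single prompt for the LLM."""
    parts = [f"Explain this object in plain language:\n{obj['snippet']}"]
    for c in callers:
        parts.append(f"Called by {c['name']}:\n{c['snippet']}")
    for c in callees:
        parts.append(f"Calls {c['name']}:\n{c['snippet']}")
    parts.append("Link types: " +
                 ", ".join(f"{a} -{t}-> {b}" for a, t, b in links))
    return "\n\n".join(parts)

# Hypothetical object with one caller and one callee.
prompt = explain_in_context_prompt(
    {"name": "saveOrder", "snippet": "void saveOrder() { ... }"},
    callers=[{"name": "checkout", "snippet": "void checkout() { ... }"}],
    callees=[{"name": "ORDERS", "snippet": "CREATE TABLE ORDERS (...)"}],
    links=[("checkout", "call", "saveOrder"),
           ("saveOrder", "useInsert", "ORDERS")],
)
```

Only the first level of callers and callees is included, which keeps the prompt bounded while still giving the LLM enough context to explain the object's role.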

Key points:

  • The source code for the object and its first-level callers and callees is shared with the LLM.
  • The explanation is generated as a Post-It that can be saved along with the view for reference.

What does it do?

Enables right-click contextual menu options when consulting results at Object level:

The “Explain object in application context” option will do two things:

  1. Open the object in a new tab and display all its immediate caller and callee objects
  2. Provide an explanation of the object based on its context within the application and store that explanation in a Post-It:

Explain Object with AI

This option provides an AI-driven explanation of the code in the object.

How does it work?

  1. User triggers the object explanation: the user initiates explanation generation from the user interface.
  2. System shares the object’s code snippet with the LLM: the system shares:
    • Source code snippet for the object.
  3. LLM reads the code and generates an object-level explanation: the LLM produces an explanation of the shared source code in simple natural language
  4. Display explanation for the user

Key points:

  • The source code for the object is shared with the LLM.
  • The explanation can be generated either as a summary or as a detailed explanation.
  • The explanation can be saved as a Post-It for documentation reference.
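The key points above mention that the explanation can be requested either as a summary or in detail. A minimal sketch of that choice, with the prompt wording being an illustrative assumption:

```python
# Hedged sketch of the summary-vs-detailed prompt choice for Explain Object.
def explain_object_prompt(snippet, detailed=False):
    style = ("a detailed, line-by-line explanation, including any "
             "potential issues you notice") if detailed else "a short summary"
    return f"Provide {style} of the following source code:\n{snippet}"
```

The detailed variant corresponds to the re-load option described below, where the AI provides additional details and flags possible issues in the code.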

What does it do?

Enables right-click contextual menu options when consulting results at Object level:

The “Explain the object” option provides an AI-driven explanation of the code in the object, directly in the source code viewer:

AI will also point out issues that it believes may exist in your source code:

Finally, two options are made available when the AI explanation has been provided:

  • Save the explanation as a Post-It assigned to the object in question
  • Re-load the explanation to provide additional details

AI Functional Report

Generates a summary of the purpose of the selected scope.

How does it work?

  1. User triggers report generation: the user initiates AI report generation from the user interface. Precondition: AI configuration details must be set up to enable report generation.
  2. Generate Transaction Summaries: transaction summaries are generated in batches for processing. Information shared with the LLM (on-premise):
  • Reduced Call Graph Details:
    • Object Metadata (Object name, count)
    • Source code snippet for objects/sub-objects
    • Link type between objects
  3. Partial Summary Generation: partial summaries are generated by sharing the transaction summaries in batches. A partial summary is a simplified explanation that focuses only on key points, which is ideal for the final summary.
  4. Comprehensive Summary Generation: the partial summaries are merged and sent again to the LLM, which generates a comprehensive summary for the entire document.
  5. Validation & Formatting: the LLM’s response is validated against the defined prompt format, which matches the document format. This ensures the output is structured and consistent.
  6. Report Display: the final validated AI Report is displayed to the user in the application UI.
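The batch-then-merge flow above is a map-reduce style summarization. The sketch below illustrates that shape under stated assumptions: the batch size, prompt wording and `call_llm` stub are hypothetical, not CAST Imaging's actual values.

```python
# Hedged sketch of the batched map-reduce summarization in the report flow.
def batched(items, size):
    """Split a list into consecutive batches of at most `size` items."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

def call_llm(prompt):
    """Stand-in for the real provider call."""
    return f"summary({prompt[:30]}...)"

def functional_report(transaction_summaries, batch_size=5):
    # Partial summaries: one condensed summary per batch of transactions.
    partials = [call_llm("Condense to key points:\n" + "\n".join(b))
                for b in batched(transaction_summaries, batch_size)]
    # Comprehensive summary: merge the partials and send them back to the LLM.
    report = call_llm("Write a comprehensive report from:\n" + "\n".join(partials))
    # Validation: a real implementation would check the response against
    # the expected document format before displaying it.
    assert report, "empty response fails validation"
    return report
```

Batching keeps each prompt within the model's context window; the two-stage reduce means the final prompt only ever contains condensed key points rather than every transaction summary.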

Key points:

  • The source code for the objects involved is shared with the LLM.
  • Provides a comprehensive reference to understand your application’s functional overview
  • Easily shareable offline with team members.

What does it do - Summary report

This feature is available for the Application, Transaction, Data Call Graph and Module scopes. Note that a connection to a data source (to retrieve code) is required for all of these scopes except the Application scope:

It will generate a summary of the purpose of the selected scope, display it and then give you the option to save the summary to a Post-It:

Ensure you save the Post-It if you want to access it in a future session from the Post-It section in the right panel:

What does it do - Bulk AI summary

Creates an AI-driven explanation of all the objects within a given module, transaction, data call graph, service or project structure, displayed in a Post-It associated with the object itself. To access this feature, use the Customize the Results option available in the Landing page:

Then choose the Bulk AI Summary tab:

To generate the AI-driven explanation:

  • select the scope you are interested in:
    • module
    • transaction
    • data call graph
    • service
    • project structure

  • choose the method of generating the explanation:
    • Graph Representation Method: summarizes transactions by providing the entire transaction graph, offering a comprehensive overview of all connected objects and their relationships for a holistic understanding
    • Exclusive Object Method: summarizes transactions by focusing on unique objects within the transaction graph, highlighting key points of divergence or importance
  • choose the item(s) you want to generate the summary for
  • click the Generate Summary button to begin the process

A green tick indicates that the process has completed. A status icon also shows the progress of any ongoing and queued jobs:

To view the result:

  • consult your application results
  • switch to the scope and select the item you have generated an AI-driven explanation for
  • drill down to the object level
  • the items will display in a new tab
  • click the Post-It to view the AI explanation for each object: