Getting Started#

The Imagine SDK is a Python package developed by Qualcomm to simplify building GenAI applications. It combines the Imagine APIs with popular Python AI frameworks such as LangChain, enabling applications such as chatbots, Generative Question-Answering (GQA), summarization, and workflow automation.

How to use it

The most basic example of using a Large Language Model (LLM) to generate text.

Installation

How to get the Imagine SDK working on your system.

Tutorials

Examples showcasing many typical use cases of the Imagine SDK. The code snippets are designed so that you can copy-paste and run them in your environment.

API documentation

Documentation for every class, method, and function that the Imagine SDK offers. Great to have next to your IDE when developing code with the SDK.


How to use it#

Start by installing the Imagine SDK. Installation is a straightforward process, similar to installing any Python package from a wheel file. Please refer to the Installation section for the complete steps.

Before running any example from this documentation, two parameters have to be configured.

  1. You must set the environment variable IMAGINE_API_KEY to your personal Imagine API key. Alternatively, you can pass your API key directly to the client with ImagineClient(api_key="my-api-key").

  2. You must set the environment variable IMAGINE_ENDPOINT_URL pointing to the endpoint you are using. Alternatively, you can pass your endpoint directly to the client with ImagineClient(endpoint="https://my-endpoint/api/v2").
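If you prefer to configure these from within Python, both environment variables can be set before the client is created. A minimal sketch (the key and endpoint values below are placeholders, not real credentials):

```python
import os

# Configure the two required parameters via environment variables.
# Replace both values with your own API key and endpoint URL.
os.environ["IMAGINE_API_KEY"] = "my-api-key"
os.environ["IMAGINE_ENDPOINT_URL"] = "https://my-endpoint/api/v2"
```

Setting these in your shell profile instead keeps credentials out of your source files, which also makes it harder to commit them by accident.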

Danger

You should never share your personal Imagine API keys with anyone!

Likewise, you should never commit your personal Imagine API keys to any git repository!

How to get an API key

If you don’t yet have an Imagine API key, you can get it here.

How to get the endpoint URL

If you don’t know your endpoint URL, you can get it here.

The following is the most basic example of using a Large Language Model (LLM) to generate text. It instantiates the client ImagineClient and starts a new conversation by asking a question.

from imagine import ChatMessage, ImagineClient


client = ImagineClient()

chat_response = client.chat(
    messages=[ChatMessage(role="user", content="What is the best Spanish cheese?")],
    model="Llama-3.1-8B",
)

print(chat_response.first_content)

This will print something similar to:

Spain is renowned for its rich variety of cheeses, each with its unique flavor profile
and texture. The "best" Spanish cheese is subjective and often depends on personal
taste preferences. However, here are some of the most popular and highly-regarded
Spanish cheeses:

1. Manchego: A firm, crumbly cheese made from sheep's milk, Manchego is a classic
   Spanish cheese with a nutty, slightly sweet flavor.
2. Mahon: A semi-soft cheese from the island of Minorca, Mahon has a mild,
   creamy flavor and a smooth texture.
3. Idiazabal: A smoked cheese from the Basque region, Idiazabal has a strong, savory
   flavor and a firm texture.
4. Garrotxa: A soft, creamy cheese from Catalonia, Garrotxa has a mild, buttery flavor
   and a delicate aroma.
...

The Imagine SDK exposes two clients, each with a different programming paradigm: synchronous and asynchronous.

ImagineClient is the synchronous Imagine client. If you don’t need asynchronous programming in your Python code, or you are simply not familiar with it, this is the client to use.

Otherwise, if you are already using asyncio in your codebase, ImagineAsyncClient might be a better choice.
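As a sketch, the earlier chat example could look like the following with the asynchronous client, assuming ImagineAsyncClient exposes an awaitable counterpart of the synchronous chat method (the exact method name and signature are an assumption here; consult the API documentation for the authoritative interface):

```python
import asyncio

from imagine import ChatMessage, ImagineAsyncClient


async def main() -> None:
    client = ImagineAsyncClient()
    # Assumption: the async client mirrors the synchronous chat() API
    # but returns an awaitable instead of blocking.
    chat_response = await client.chat(
        messages=[ChatMessage(role="user", content="What is the best Spanish cheese?")],
        model="Llama-3.1-8B",
    )
    print(chat_response.first_content)


asyncio.run(main())
```

The asynchronous client is useful when your application issues several requests concurrently, since other coroutines can run while a response is in flight.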

Support#

Please reach out to your Imagine service provider for support.
