LangChain
These are the classes for using the Imagine client with LangChain.
- class imagine.langchain.ImagineChat(*args, client, async_client, api_key=None, endpoint=None, max_retries=5, timeout=120, verify=False, name=None, cache=None, verbose=None, callbacks=None, tags=None, metadata=None, custom_get_token_ids=None, callback_manager=None, rate_limiter=None, disable_streaming=False, model='Llama-3.1-8B', temperature=0.0, max_tokens=None, top_k=None, top_p=None, streaming=False, frequency_penalty=None, presence_penalty=None, repetition_penalty=None, stop=None, max_seconds=None, ignore_eos=None, skip_special_tokens=None, stop_token_ids=None)
A chat model that uses the Imagine Inference API.
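A minimal usage sketch. The api_key and endpoint values are placeholders, and it is assumed here that the client/async_client arguments are constructed internally when api_key and endpoint are supplied:

```python
from langchain_core.messages import HumanMessage, SystemMessage

from imagine.langchain import ImagineChat

# Parameters mirror the signature above; credentials are placeholders.
chat = ImagineChat(
    api_key="YOUR_API_KEY",
    endpoint="https://your-imagine-endpoint",
    model="Llama-3.1-8B",
    temperature=0.0,
    max_tokens=256,
)

# Standard LangChain chat invocation: a list of messages in, a message out.
response = chat.invoke(
    [
        SystemMessage(content="You are a concise assistant."),
        HumanMessage(content="What is LangChain?"),
    ]
)
print(response.content)
```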
- class imagine.langchain.ImagineLLM(*args, client, async_client, api_key=None, endpoint=None, max_retries=5, timeout=120, verify=False, name=None, cache=None, verbose=None, callbacks=None, tags=None, metadata=None, custom_get_token_ids=None, callback_manager=None, model='Llama-3.1-8B', temperature=0.0, max_tokens=None, top_k=None, top_p=None, streaming=False, frequency_penalty=None, presence_penalty=None, repetition_penalty=None, stop=None, max_seconds=None, ignore_eos=None, skip_special_tokens=None, stop_token_ids=None)
An LLM that uses the Imagine Inference API.
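A minimal completion-style sketch, under the same assumptions as above (placeholder credentials, clients built internally):

```python
from imagine.langchain import ImagineLLM

# Text-completion model; credentials are placeholders.
llm = ImagineLLM(
    api_key="YOUR_API_KEY",
    endpoint="https://your-imagine-endpoint",
    model="Llama-3.1-8B",
    max_tokens=128,
)

# LangChain LLMs take a prompt string and return the completion text.
completion = llm.invoke("Write a haiku about inference APIs.")
print(completion)
```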
- class imagine.langchain.ImagineEmbeddings(*, client, async_client, api_key=None, endpoint=None, max_retries=5, timeout=120, verify=False, max_concurrent_requests=64, model='BAAI/bge-large-en-v1.5')
Embedding models served through the Imagine Inference API.
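Example construction (a sketch; the credentials are placeholders, and the model default matches the signature above):

```python
from imagine.langchain import ImagineEmbeddings

# Placeholders for a real deployment's credentials.
embeddings = ImagineEmbeddings(
    api_key="YOUR_API_KEY",
    endpoint="https://your-imagine-endpoint",
    model="BAAI/bge-large-en-v1.5",  # default shown in the signature
)
```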
- async aembed_documents(texts)
Embed a list of document texts.
- Args:
texts: The list of texts to embed.
- Returns:
List of embeddings, one for each text.
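A short async sketch of batch embedding (placeholder credentials; one vector of floats is returned per input text):

```python
import asyncio

from imagine.langchain import ImagineEmbeddings


async def main() -> None:
    embeddings = ImagineEmbeddings(
        api_key="YOUR_API_KEY",
        endpoint="https://your-imagine-endpoint",
    )
    # One embedding vector per document, in input order.
    vectors = await embeddings.aembed_documents(
        ["First document.", "Second document."]
    )
    print(len(vectors), len(vectors[0]))  # 2 vectors of the model's dimension


asyncio.run(main())
```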
- async aembed_query(text)
Embed a single query text.
- Args:
text: The text to embed.
- Returns:
Embedding for the text.
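And the single-query counterpart, under the same assumptions:

```python
import asyncio

from imagine.langchain import ImagineEmbeddings


async def main() -> None:
    embeddings = ImagineEmbeddings(
        api_key="YOUR_API_KEY",
        endpoint="https://your-imagine-endpoint",
    )
    # A query is embedded individually; a single vector is returned.
    vector = await embeddings.aembed_query("What is LangChain?")
    print(len(vector))


asyncio.run(main())
```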