Model Query#
This module provides functionality to interact with language models, process responses, and handle batch operations asynchronously for AI-powered knowledge extraction.
Model Query Processing Pipeline#
The query processing pipeline consists of the following stages:
1. **Preparation**
   - Format prompts with system and user contexts
   - Configure batch-processing parameters
2. **Asynchronous Execution**
   - Send requests to language models with rate limiting
   - Track execution time and token-usage metrics
3. **Response Processing**
   - Extract structured information from model responses
   - Apply custom processing functions to raw outputs
   - Handle errors gracefully with default return values
4. **Result Management**
   - Save intermediate and final results
   - Provide comprehensive metadata for analysis
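The four stages above can be sketched as a single async routine. This is an illustrative outline only: names such as `client.complete` and the result dictionary shape are assumptions, not this module's actual API.

```python
import asyncio
import time

# Hypothetical sketch of the pipeline stages; `client.complete` is an
# illustrative stand-in for the real model client's request method.
async def run_pipeline(prompts, client, system_context, concurrency=5):
    # Preparation: format each prompt with system and user contexts
    messages = [
        [{"role": "system", "content": system_context},
         {"role": "user", "content": p}]
        for p in prompts
    ]
    # Batch-processing parameter: cap in-flight requests
    semaphore = asyncio.Semaphore(concurrency)

    async def query(msg):
        # Asynchronous execution with rate limiting and timing metrics
        async with semaphore:
            start = time.monotonic()
            raw = await client.complete(msg)
            return {"output": raw, "seconds": time.monotonic() - start}

    # Response processing and result management are left to the caller,
    # which receives one result record per prompt.
    return await asyncio.gather(*(query(m) for m in messages))
```

Bounding concurrency with a semaphore, rather than firing all requests at once, is what keeps large batches within API rate limits while still maximizing throughput.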
Note
This module is designed for efficient batch processing of multiple queries.
All API interactions are performed asynchronously to maximize throughput.
Important
Ensure proper API credentials are configured before using this module.
Consider token usage and rate limits when processing large batches.
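When a batch does hit a rate limit, a common mitigation is exponential backoff with jitter. The sketch below is a generic pattern, not part of this module; `RateLimitError` is a placeholder for whatever exception your client library raises.

```python
import asyncio
import random

class RateLimitError(Exception):
    """Placeholder for the client library's rate-limit exception."""

async def with_backoff(coro_factory, retries=5, base_delay=1.0):
    # Retry a coroutine-producing callable, sleeping progressively
    # longer (with random jitter) after each rate-limit error.
    for attempt in range(retries):
        try:
            return await coro_factory()
        except RateLimitError:
            await asyncio.sleep(base_delay * (2 ** attempt) + random.random())
    raise RuntimeError("rate limit retries exhausted")
```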
Functions#

- Decorator to add a `default_return` attribute to a function.
- Extracts and parses all JSON objects or arrays from code blocks in the input text.
- Asynchronously process a batch of prompts using a language model client, saving intermediate and final results, and handling concurrent API requests with robust error handling and metadata tracking.
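The first two helpers can be sketched as follows. These are minimal illustrations in the spirit of the descriptions above; the module's actual function names and exact behavior may differ.

```python
import json
import re

def with_default_return(value):
    """Hypothetical decorator attaching a `default_return` attribute,
    used as the fallback value when a query fails."""
    def decorate(fn):
        fn.default_return = value
        return fn
    return decorate

# Match ``` fenced code blocks, optionally tagged as json
CODE_BLOCK = re.compile(r"```(?:json)?\s*(.*?)```", re.DOTALL)

def extract_json_blocks(text):
    """Return every JSON object or array found inside code fences,
    silently skipping blocks that are not valid JSON."""
    parsed = []
    for block in CODE_BLOCK.findall(text):
        try:
            parsed.append(json.loads(block))
        except json.JSONDecodeError:
            continue
    return parsed
```

Parsing only fenced blocks, rather than the whole response, tolerates the explanatory prose that models typically wrap around their structured output.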
Standalone Execution#
This module is not intended to be run as a standalone script.