Model Query#

This module provides functionality to interact with language models, process responses, and handle batch operations asynchronously for AI-powered knowledge extraction.

Model Query Processing Pipeline#

The query processing pipeline consists of the following stages:

  1. Preparation:

    • Format prompts with system and user contexts

    • Configure batch processing parameters

  2. Asynchronous Execution:

    • Send requests to language models with rate limiting

    • Track execution time and token usage metrics

  3. Response Processing:

    • Extract structured information from model responses

    • Apply custom processing functions to raw outputs

    • Handle errors gracefully with default return values

  4. Result Management:

    • Save intermediate and final results

    • Provide comprehensive metadata for analysis
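The preparation stage above can be sketched as a small prompt-formatting helper. This is a hypothetical illustration, not the module's actual API: the function name `format_prompt` and the chat-message dictionary shape are assumptions based on the common system/user message convention.

```python
# Hypothetical stage-1 helper: combine system and user contexts
# into the chat-message structure most LLM clients accept.
def format_prompt(system_context: str, user_query: str) -> list[dict]:
    return [
        {"role": "system", "content": system_context},
        {"role": "user", "content": user_query},
    ]

messages = format_prompt(
    "You extract structured knowledge from text.",
    "List the entities in: 'Marie Curie worked in Paris.'",
)
```

The remaining stages (execution, response processing, result management) are handled by the functions documented below.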

Note

  • This module is designed for efficient batch processing of multiple queries.

  • All API interactions are performed asynchronously to maximize throughput.

Important

  • Ensure proper API credentials are configured before using this module.

  • Consider token usage and rate limits when processing large batches.

Functions#

with_default_return

Decorator to add a default_return attribute to a function.
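A minimal sketch of what such a decorator might look like. The fallback-on-exception behavior and the decorator's signature here are assumptions; the actual implementation may only attach the attribute without wrapping the call.

```python
import functools

def with_default_return(default):
    """Attach a `default_return` attribute to the decorated function
    and fall back to it when the function raises. (Hypothetical sketch.)"""
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                return func(*args, **kwargs)
            except Exception:
                # Graceful degradation: return the configured default
                return wrapper.default_return
        wrapper.default_return = default
        return wrapper
    return decorator

@with_default_return(default=[])
def parse_numbers(text):
    return [int(tok) for tok in text.split(",")]
```

With this sketch, `parse_numbers("1,2,3")` returns `[1, 2, 3]`, while malformed input yields the default `[]` instead of raising.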

extract_json_items

Extracts and parses all JSON objects or arrays from code blocks in the input text.
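A sketch of the extraction logic, assuming Markdown-style fenced code blocks (with an optional `json` language tag) and silently skipping blocks that fail to parse. The real function's handling of unparseable blocks and nested structures may differ.

```python
import json
import re

def extract_json_items(text: str) -> list:
    """Parse every JSON object or array found inside fenced code
    blocks in `text`. (Hypothetical sketch.)"""
    items = []
    # Capture the body of each ``` block, whether or not it is tagged ```json
    blocks = re.findall(r"```(?:json)?\s*(.*?)```", text, re.DOTALL)
    for block in blocks:
        try:
            items.append(json.loads(block))
        except json.JSONDecodeError:
            # Skip blocks that are not valid JSON
            continue
    return items

sample = 'Result:\n```json\n{"entity": "Paris"}\n```\nAlso:\n```\n[1, 2]\n```'
parsed = extract_json_items(sample)
```

Here `parsed` holds one dictionary and one list, in document order.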

batch_model_query

Asynchronously processes a batch of prompts with a language model client, saving intermediate and final results, and handling concurrent API requests with robust error handling and metadata tracking.
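The concurrency pattern can be sketched with `asyncio.gather` and a semaphore to cap in-flight requests. The function signature, the `max_concurrency` parameter, and the per-result dictionary shape are illustrative assumptions; the real function also persists intermediate results and richer metadata.

```python
import asyncio

async def batch_model_query(client, prompts, max_concurrency=4):
    """Query `client` for each prompt concurrently, limiting
    simultaneous requests. (Hypothetical sketch.)"""
    semaphore = asyncio.Semaphore(max_concurrency)

    async def query_one(idx, prompt):
        async with semaphore:  # simple rate limiting
            try:
                response = await client(prompt)
                return {"index": idx, "response": response, "error": None}
            except Exception as exc:
                # Robust error handling: record the failure, keep going
                return {"index": idx, "response": None, "error": str(exc)}

    return await asyncio.gather(
        *(query_one(i, p) for i, p in enumerate(prompts))
    )

# Demo with a stand-in client (no real API calls)
async def fake_client(prompt):
    await asyncio.sleep(0)
    if prompt == "bad":
        raise ValueError("simulated failure")
    return prompt.upper()

results = asyncio.run(batch_model_query(fake_client, ["hi", "bad", "ok"]))
```

The semaphore keeps at most `max_concurrency` requests outstanding, which is one common way to stay under provider rate limits while maximizing throughput.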

Standalone Execution#

This module is not intended to be run as a standalone script.