
INFO

Below, pne is the nickname for Promptulate: p and e are the first and last letters of Promptulate, and n stands for the nine letters between them.


Overview

Promptulate is an AI Agent application development framework crafted by Cogit Lab, which offers developers an extremely concise and efficient way to build Agent applications through a Pythonic development paradigm. The core philosophy of Promptulate is to borrow from and integrate the wisdom of the open-source community, incorporating the strengths of various development frameworks to lower the barrier to entry and build consensus among developers. With Promptulate, you can manipulate components like LLM, Agent, Tool, and RAG with the most succinct code, as most tasks can be completed with just a few lines of code. πŸš€

πŸ’‘ Features

  • 🐍 Pythonic Code Style: Embraces the habits of Python developers, providing a Pythonic SDK calling approach; a single pne.chat function puts all essential functionality within your grasp.
  • 🧠 Model Compatibility: Supports nearly all types of large models on the market and allows for easy customization to meet specific needs.
  • πŸ•΅οΈβ€β™‚οΈ Diverse Agents: Offers various types of Agents, such as WebAgent, ToolAgent, and CodeAgent, capable of planning, reasoning, and acting to handle complex problems. The Planner and other components are atomized to simplify the development process.
  • πŸ”— Low-Cost Integration: Effortlessly integrates tools from different frameworks like LangChain, significantly reducing integration costs.
  • πŸ”¨ Functions as Tools: Converts any Python function directly into a tool usable by Agents, simplifying the tool creation and usage process (see the sketch after this list).
  • πŸͺ Lifecycle and Hooks: Provides a wealth of Hooks and comprehensive lifecycle management, allowing the insertion of custom code at various stages of Agents, Tools, and LLMs.
  • πŸ’» Terminal Integration: Easily integrates application terminals, with built-in client support, offering rapid debugging capabilities for prompts.
  • ⏱️ Prompt Caching: Offers a caching mechanism for LLM Prompts to reduce repetitive work and enhance development efficiency.
  • πŸ€– Powerful OpenAI Wrapper: With pne, you no longer need the openai SDK; pne.chat replaces its core functions and provides enhanced features that reduce development effort.
  • 🧰 Streamlit Component Integration: Quickly build prototypes with many out-of-the-box examples and reusable Streamlit components.
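
To make the "Functions as Tools" feature above concrete, here is a minimal sketch. It assumes that ToolAgent accepts plain Python functions through its tools parameter and that a default LLM is configured via environment variables (for example OPENAI_API_KEY); see the official documentation for the exact constructor signature.

python
import promptulate as pne

def add_numbers(a: float, b: float) -> float:
    """Add two numbers and return the result."""
    return a + b

# Sketch only: the agent is assumed to wrap the plain function as a tool
# and to call a default LLM configured through environment variables.
agent = pne.ToolAgent(tools=[add_numbers])
answer = agent.run("What is 15.5 plus 26.7?")
print(answer)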

The following diagram shows the core architecture of promptulate:

promptulate-architecture

The core idea of Promptulate is to provide a simple, Pythonic, and efficient way to build AI applications, which means you don't need to spend a lot of time learning the framework. We hope pne.chat() can do most of the work, so you can build almost any AI application with just a few lines of code.
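
For example, assuming an OpenAI API key is available in the environment, a complete chat call looks roughly like this (the model name is only illustrative; any model supported by pne can be substituted):

python
import promptulate as pne

# Assumes OPENAI_API_KEY is set in the environment.
# pne.chat() takes a model name and an OpenAI-style message list
# and returns the model's reply as a string.
response: str = pne.chat(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response)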

πŸ€– Supported Base Models

Promptulate integrates the capabilities of litellm, supporting nearly all types of large models on the market, including but not limited to the following models:

| Provider | Completion | Streaming | Async Completion | Async Streaming | Async Embedding | Async Image Generation |
|---|---|---|---|---|---|---|
| openai | βœ… | βœ… | βœ… | βœ… | βœ… | βœ… |
| azure | βœ… | βœ… | βœ… | βœ… | βœ… | βœ… |
| aws - sagemaker | βœ… | βœ… | βœ… | βœ… | βœ… | |
| aws - bedrock | βœ… | βœ… | βœ… | βœ… | βœ… | |
| google - vertex_ai [Gemini] | βœ… | βœ… | βœ… | βœ… | | |
| google - palm | βœ… | βœ… | βœ… | βœ… | | |
| google AI Studio - gemini | βœ… | βœ… | | | | |
| mistral ai api | βœ… | βœ… | βœ… | βœ… | βœ… | |
| cloudflare AI Workers | βœ… | βœ… | βœ… | βœ… | | |
| cohere | βœ… | βœ… | βœ… | βœ… | βœ… | |
| anthropic | βœ… | βœ… | βœ… | βœ… | | |
| huggingface | βœ… | βœ… | βœ… | βœ… | βœ… | |
| replicate | βœ… | βœ… | βœ… | βœ… | | |
| together_ai | βœ… | βœ… | βœ… | βœ… | | |
| openrouter | βœ… | βœ… | βœ… | βœ… | | |
| ai21 | βœ… | βœ… | βœ… | βœ… | | |
| baseten | βœ… | βœ… | βœ… | βœ… | | |
| vllm | βœ… | βœ… | βœ… | βœ… | | |
| nlp_cloud | βœ… | βœ… | βœ… | βœ… | | |
| aleph alpha | βœ… | βœ… | βœ… | βœ… | | |
| petals | βœ… | βœ… | βœ… | βœ… | | |
| ollama | βœ… | βœ… | βœ… | βœ… | | |
| deepinfra | βœ… | βœ… | βœ… | βœ… | | |
| perplexity-ai | βœ… | βœ… | βœ… | βœ… | | |
| Groq AI | βœ… | βœ… | βœ… | βœ… | | |
| anyscale | βœ… | βœ… | βœ… | βœ… | | |
| voyage ai | | | | | βœ… | |
| xinference [Xorbits Inference] | | | | | βœ… | |

pne's broad model support lets you easily call almost any third-party model.

Now let's see how to run a local llama model served by ollama with pne.

python
import promptulate as pne

resp: str = pne.chat(model="ollama/llama2", messages=[{"content": "Hello, how are you?", "role": "user"}])

Use the provider/model_name format to specify the model; in this way you can easily call any third-party model.
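
For instance, the same call can target a hosted provider instead of a local model. The following sketch assumes an Anthropic API key is exported as ANTHROPIC_API_KEY and uses an illustrative model name:

python
import promptulate as pne

# Assumes ANTHROPIC_API_KEY is set in the environment; the model string
# follows the provider/model_name format described above.
resp: str = pne.chat(
    model="anthropic/claude-3-haiku-20240307",
    messages=[{"content": "Explain what an AI Agent is in one sentence.", "role": "user"}],
)
print(resp)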

For more models, please visit the litellm documentation.

You can also see how to use pne.chat() in the Getting Started/Official Documentation.

πŸ“ Examples ​

For more detailed information, please check the Quick Start.

πŸ“š Design Principles

The design principles of the pne framework include modularity, extensibility, interoperability, robustness, maintainability, security, efficiency, and usability.

  • Modularity refers to using modules as the basic unit, allowing for easy integration of new components, models, and tools.
  • Extensibility refers to the framework's ability to handle large amounts of data, complex tasks, and high concurrency.
  • Interoperability means the framework is compatible with various external systems, tools, and services and can achieve seamless integration and communication.
  • Robustness indicates the framework has strong error handling, fault tolerance, and recovery mechanisms to ensure reliable operation under various conditions.
  • Security implies the framework has implemented strict measures to protect against unauthorized access and malicious behavior.
  • Efficiency is about optimizing the framework's performance, resource usage, and response times to ensure a smooth and responsive user experience.
  • Usability means the framework provides user-friendly interfaces and clear documentation, making it easy to use and understand.

Following these principles and applying the latest artificial intelligence technologies, pne aims to provide a powerful and flexible framework for creating automated agents.

πŸ’Œ Contact

For more information, please contact: zeeland4work@gmail.com

⭐ Contribution

We appreciate your interest in contributing to our open-source initiative. We have provided a Developer's Guide outlining the steps to contribute to Promptulate. Please refer to this guide to ensure smooth collaboration and successful contributions. Additionally, you can view the Current Development Plan to see the latest development progress πŸ€πŸš€

Released under the Apache 2.0 License.