Releases: NVIDIA/NeMo-Guardrails

Release v0.9.1.1

26 Jul 11:09
dec482d

This patch release fixes bug #651 introduced in 0.9.1.

Fixed

  • #650 Fix gpt-3.5-turbo-instruct prompts #651.

Full Changelog: v0.9.1...v0.9.1.1

Release v0.9.1

25 Jul 12:02
1d6fd86

This release introduces three new integrations (Got It AI, AutoAlign and Patronus Lynx), streamlined NIM and NVIDIA API Catalog integration, support for registering custom embedding models, improvements and fixes to Colang 2.0, and many other bug fixes. This release also includes better out-of-the-box support for Llama-3 and Llama-3.1 models.
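
The streamlined NIM and NVIDIA API Catalog integration is driven by the models section of a guardrails configuration. Below is a minimal, hedged sketch of selecting an API Catalog model; the engine and model identifiers (nvidia_ai_endpoints, meta/llama3-8b-instruct) are assumptions and may differ from the exact values the integration expects.

```python
# Minimal sketch: point the main model of a guardrails config at an
# NVIDIA API Catalog / NIM endpoint. The engine and model names are assumptions.
from nemoguardrails import LLMRails, RailsConfig

YAML_CONFIG = """
models:
  - type: main
    engine: nvidia_ai_endpoints   # assumed engine name for the API Catalog integration
    model: meta/llama3-8b-instruct
"""

config = RailsConfig.from_content(yaml_content=YAML_CONFIG)
rails = LLMRails(config)

response = rails.generate(messages=[{"role": "user", "content": "Hello!"}])
print(response["content"])
```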

What's Changed

Added

  • Colang version 2.0-beta.2
  • #370 Add Got It AI's Truthchecking service for RAG applications by @mlmonk.
  • #543 Integrating AutoAlign's guardrail library with NeMo Guardrails by @abhijitpal1247.
  • #566 Autoalign factcheck examples by @abhijitpal1247.
  • #518 Docs: add example config for using models with ollama by @vedantnaik19.
  • #538 Support for --default-config-id in the server.
  • #539 Support for LLMCallException.
  • #548 Support for custom embedding models (see the sketch after this list).
  • #617 NVIDIA AI Endpoints embeddings.
  • #462 Support for calling embedding models from langchain-nvidia-ai-endpoints.
  • #622 Patronus Lynx Integration.
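
For the custom embedding model support (#548), the intended workflow is to implement a small provider class and register it so it can be referenced from config.yml. The sketch below is an assumption-laden outline: the module paths, the EmbeddingModel base class, and the register_embedding_provider helper are plausible names and may not match the actual interface, so check the embedding provider documentation before relying on them.

```python
# Hedged sketch of registering a custom embedding model (per #548).
# Import paths, base class, and registration helper are assumptions.
from typing import List

from nemoguardrails.embeddings.providers import register_embedding_provider
from nemoguardrails.embeddings.providers.base import EmbeddingModel


class MyEmbeddingModel(EmbeddingModel):
    """Toy provider returning fixed-size zero vectors (for illustration only)."""

    engine_name = "my_embeddings"  # name you would reference from config.yml

    def encode(self, documents: List[str]) -> List[List[float]]:
        return [[0.0] * 384 for _ in documents]

    async def encode_async(self, documents: List[str]) -> List[List[float]]:
        return self.encode(documents)


register_embedding_provider(MyEmbeddingModel)
```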

Changed

  • #597 Make UUID generation predictable in debug-mode.
  • #603 Improve chat cli logging.
  • #551 Upgrade to Langchain 0.2.x by @nicoloboschi.
  • #611 Change default templates.
  • #545 NVIDIA API Catalog and NIM documentation update.
  • #463 Do not store pip cache during docker build by @don-attilio.
  • #629 Move community docs to separate folder.
  • #647 Documentation updates.
  • #648 Prompt improvements for Llama-3 models.

Fixed

  • #482 Update README.md by @curefatih.
  • #530 Improve the serialization test to make it more robust.
  • #570 Add support for FacialGestureBotAction by @elisam0.
  • #550 Fix issue #335 - make import errors visible.
  • #547 Fix LLMParams bug and add unit tests (fixes #158).
  • #537 Fix directory traversal bug.
  • #536 Fix issue #304 NeMo Guardrails packaging.
  • #539 Fix bug related to the flow abort logic in Colang 1.0 runtime.
  • #612 Follow-up fixes for the default prompt change.
  • #585 Fix Colang 2.0 state serialization issue.
  • #486 Fix select model type and custom prompts task.py by @cyun9601.
  • #487 Fix custom prompts configuration manual.md.
  • #479 Fix static method and classmethod action decorators by @piotrm0.
  • #544 Fix issue #216 bot utterance.
  • #616 Various fixes.
  • #623 Fix path traversal check.

Full Changelog: v0.9.0...v0.9.1

Release v0.9.0

10 May 07:51
b3c6bb8

This release introduces Colang 2.0, the next version of Colang, and a revamped NeMo Guardrails Documentation.

Colang 2.0 brings a more solid foundation for building complex guardrail configurations (with better parallelism support), advanced RAG orchestration (e.g., with multi-query, contextual relevance check), agents (e.g., driving business process logic), and multi-modal LLM-driven interaction (e.g., interactive avatars). Colang 2.0 is a complete overhaul of the Colang language and runtime, and key enhancements include:

  • A more powerful flow engine supporting multiple parallel flows and advanced pattern matching over the stream of events.
  • Adoption of terminology and syntax akin to Python to reduce the learning curve for new developers.
  • A standard library and an import mechanism to streamline development.
  • Explicit entry point through the main flow and explicit activation of flows.
  • Smaller set of core abstractions: flows, events, and actions.
  • The new generation operator (...).
  • Asynchronous action execution.

NOTE: The version of Colang included in v0.8.* is referred to as Colang 2.0-alpha. In v0.9.0, Colang 2.0 moved to Beta, which we refer to as Colang 2.0-beta. We expect Colang 2.0 to go out of Beta and replace Colang 1.0 as the default option in NeMo Guardrails v0.11.0.

Current limitations include not being able to use the Guardrails Library from within Colang 2.0 and no support for generation options (e.g., logs, activated rails). These limitations will be addressed in v0.10.0 and v0.11.0, along with additional features and example guardrail configurations.

To get started with Colang 2.0, if you’ve used Colang 1.0 before, you should check out the What’s Changed page. If not, you can get started with the Hello World example.
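
For orientation, a Colang 2.0 hello-world configuration looks roughly like the sketch below, here wrapped in Python via RailsConfig.from_content. The flow body and the import core line are assumptions based on the 2.0-beta syntax described above; the Hello World example in the documentation is the authoritative version.

```python
# Rough sketch of a Colang 2.0 "Hello World" configuration (2.0-beta syntax).
# The Colang snippet and model settings are assumptions for illustration.
from nemoguardrails import LLMRails, RailsConfig

COLANG_CONTENT = """
import core

flow main
  user said "hi"
  bot say "Hello World!"
"""

YAML_CONFIG = """
colang_version: "2.x"
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct
"""

config = RailsConfig.from_content(colang_content=COLANG_CONTENT, yaml_content=YAML_CONFIG)
rails = LLMRails(config)
print(rails.generate(messages=[{"role": "user", "content": "hi"}]))
```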

Full Changelog: v0.8.3...v0.9.0

Release v0.8.3

18 Apr 15:03
63ec36d

This minor release updates the NVIDIA API Catalog integration documentation and fixes two bugs.

What's Changed

Changed

  • #453 Update documentation for NVIDIA API Catalog example.

Fixed

  • #382 Fix issue with lowest_temperature in self-check and hallucination rails.
  • #454 Redo fix for #385.
  • #442 Fix README typo by @dileepbapat.

Full Changelog: v0.8.2...v0.8.3

Release v0.8.2

01 Apr 20:52
88da745

This minor release adds support for integrating NeMo Guardrails with NVIDIA AI Endpoints and Vertex AI. It also introduces the research overview page, which guides the development of future guardrails. Last but not least, it adds another round of improvements for Colang 2.0 and multiple getting-started examples.

Colang 2.0 is the next version of Colang and will replace Colang 1.0 in a future release. It adds a more powerful flow engine, improved syntax, multi-modal support, parallelism for actions and flows, a standard library of flows, and more. This release still targets alpha testers and does not include the new documentation, which will be added in 0.9.0. Colang 2.0 and 1.0 will be supported side-by-side until Colang 1.0 is deprecated and removed.

What's Changed

Changed

  • #389 Expose the verbose parameter through RunnableRails by @d-mariano.
  • #415 Enable print(...) and log(...).
  • #414 Feature/colang march release.
  • #416 Refactor and improve the verbose/debug mode.
  • #418 Feature/colang flow context sharing.
  • #425 Feature/colang meta decorator.
  • #427 Feature/colang single flow activation.
  • #426 Feature/colang 2.0 tutorial.
  • #428 Feature/Standard library and examples.
  • #431 Feature/colang various improvements.
  • #433 Feature/Colang 2.0 improvements: generate_async support, stateful API.

Fixed

  • #412 Fix #411 - explain rails not working for chat models.
  • #413 Typo fix: Comment in llm_flows.co by @habanoz.
  • #420 Fix typo for hallucination message.

Full Changelog: v0.8.1...v0.8.2

Release v0.8.1

15 Mar 10:32
4bc1d52

This minor release mainly focuses on fixing Colang 2.0 parser and runtime issues. It fixes a bug related to logging the prompt for chat models in verbose mode and a small issue in the installation guide. It also adds an example of using streaming with a custom action.

What's Changed

Added

  • #377 Add example for streaming from custom action.

Changed

  • #380 Update installation guide for OpenAI usage.
  • #401 Replace YAML import with new import statement in multi-modal example.

Fixed

  • #398 Colang parser fixes and improvements.
  • #394 Fixes and improvements for Colang 2.0 runtime.
  • #381 Fix typo by @serhatgktp.
  • #379 Fix missing prompt in verbose mode for chat models.
  • #400 Fix Authorization header showing up in logs for NeMo LLM.

Full Changelog: v0.8.0...v0.8.1

Release v0.8.0

28 Feb 15:19
8bb50af

This release adds three main new features:

  1. A new type of input rail that uses a set of jailbreak heuristics. More heuristics will be added in the future.
  2. Support for generation options, allowing fine-grained control over which types of rails are triggered, what data is returned, and what logging information is included in the response (see the sketch after this list).
  3. Support for making API calls to the guardrails server using multiple configuration ids.
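
As referenced in the list above, generation options are passed at call time alongside the messages. The sketch below assumes the dict-based options shape (a rails selection plus log fields) and a structured response object; treat the exact keys and attributes as assumptions and check the generation options documentation.

```python
# Hedged sketch of generation options: run only the input rails and request
# the activated-rails log. The option keys and response attributes are assumptions.
from nemoguardrails import LLMRails, RailsConfig

config = RailsConfig.from_path("./config")  # an existing guardrails configuration
rails = LLMRails(config)

result = rails.generate(
    messages=[{"role": "user", "content": "What can you do?"}],
    options={
        "rails": ["input"],                 # trigger only the input rails
        "log": {"activated_rails": True},   # include which rails ran in the response
    },
)

print(result.response)              # the bot response
print(result.log.activated_rails)   # details about the rails that were applied
```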

This release also improves the support for working with embeddings (better async support, batching and caching), adds support for stop tokens per task template, and adds streaming support for HuggingFace pipelines. Last but not least, this release includes the core implementation for Colang 2.0 as a preview for early testing (version 0.9.0 will include documentation and examples).

What's Changed

Changed

  • #309 Change the paper citation from ArXiV to EMNLP 2023 by @manuelciosici
  • #319 Enable embeddings model caching.
  • #267 Make embeddings computing async and add support for batching.
  • #281 Follow symlinks when building knowledge base by @piotrm0.
  • #280 Add more information to results of retrieve_relevant_chunks by @piotrm0.
  • #332 Update docs for batch embedding computations.
  • #244 Docs/edit getting started by @DougAtNvidia.
  • #333 Follow-up to PR 244.
  • #341 Updated 'fastembed' version to 0.2.2 by @NirantK.

Fixed

  • #286 Fixed #285 - using the same evaluation set given a random seed for topical rails by @trebedea.
  • #336 Fix #320. Reuse the asyncio loop between sync calls.
  • #337 Fix stats gathering in a parallel async setup.
  • #342 Fixes OpenAI embeddings support.
  • #346 Fix issues with KB embeddings cache, bot intent detection and config ids validator logic.
  • #349 Fix multi-config bug, asyncio loop issue and cache folder for embeddings.
  • #350 Fix the incorrect logging of an extra dialog rail.
  • #358 Fix Openai embeddings async support.
  • #362 Fix the issue with the server being pointed to a folder with a single config.
  • #352 Fix a few issues related to jailbreak detection heuristics.
  • #356 Redo followlinks PR in new code by @piotrm0.

Full Changelog: v0.7.1...v0.8.0

Release v0.7.1

01 Feb 14:24
2a3a5ce

What's Changed

  • Replace SentenceTransformers with FastEmbed by @drazvan in #288

Full Changelog: v0.7.0...v0.7.1

Release v0.7.0

31 Jan 14:54
7cb05d4

This release adds three new features: support for Llama Guard, improved LangChain integration, and support for server-side threads. It also adds support for Python 3.11 and solves the issue with pinned dependencies (e.g., langchain>=0.1.0,<2.0, typer>=0.7.0). Last but not least, it includes multiple feature and security-related fixes.
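
The improved LangChain integration centers on wrapping a chain (or just the LLM) with guardrails. The sketch below uses the RunnableRails wrapper; the import path, the input_key parameter, and the piping pattern are assumptions based on the integration documentation, so verify them against the current API.

```python
# Hedged sketch of the LangChain (LCEL) integration via RunnableRails.
# Import path, input_key, and wrapping pattern are assumptions.
from langchain_core.output_parsers import StrOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

from nemoguardrails import RailsConfig
from nemoguardrails.integrations.langchain.runnable_rails import RunnableRails

config = RailsConfig.from_path("./config")
# input_key tells the wrapper which field of the input dict holds the user text
# (assumed parameter name).
guardrails = RunnableRails(config, input_key="question")

prompt = ChatPromptTemplate.from_messages([("user", "{question}")])
chain = prompt | ChatOpenAI() | StrOutputParser()

# Wrap the whole chain so input/output rails run around it.
chain_with_guardrails = guardrails | chain

print(chain_with_guardrails.invoke({"question": "What is NeMo Guardrails?"}))
```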

What's Changed

Changed

  • #240 Switch to pyproject.
  • #276 Upgraded Typer to 0.9.

Fixed

  • #239 Fixed logging issue where verbose=true flag did not trigger expected log output.
  • #228 Fix docstrings for various functions.
  • #242 Fix Azure LLM support.
  • #225 Fix the annoy import so the package can be used without annoy installed.
  • #209 Fix user messages missing from prompt.
  • #261 Fix small bug in print_llm_calls_summary.
  • #252 Fixed duplicate loading for the default config.
  • Fixed the dependency pinning, allowing a wider range of dependency versions.
  • Fixed several security issues related to uncontrolled data used in a path expression and information exposure through an exception.

Full Changelog: v0.6.1...v0.7.0

Release v0.6.1

20 Dec 22:13
3273ca7

This patch release upgrades two dependencies (langchain and httpx) and replaces the deprecated text-davinci-003 model with gpt-3.5-turbo-instruct in all configurations and examples.
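
Concretely, the model replacement amounts to swapping the model name in the models section of config.yml. A minimal sketch, assuming the standard OpenAI engine entry:

```python
# Minimal sketch of the updated model entry: text-davinci-003 is replaced by
# gpt-3.5-turbo-instruct in the main model configuration.
from nemoguardrails import LLMRails, RailsConfig

YAML_CONFIG = """
models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo-instruct   # previously: text-davinci-003
"""

rails = LLMRails(RailsConfig.from_content(yaml_content=YAML_CONFIG))
```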


Added

  • Support for --version flag in the CLI.

Changed

  • Upgraded langchain to 0.0.352.
  • Upgraded httpx to 0.24.1.
  • Replaced deprecated text-davinci-003 model with gpt-3.5-turbo-instruct.

Fixed

  • #191: Fix chat generation chunk issue.