FEAT Add image generation example with red teaming orchestrator and unify existing orchestrator definitions #189

Merged (23 commits) on May 19, 2024
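
The notebook diffs excerpted below cover `memory.ipynb` and `gpt_v_target.ipynb`; the image generation example named in the PR title is not rendered in this excerpt. In PyRIT's red teaming pattern, an adversarial chat model proposes a prompt, the orchestrator forwards it to the target, and the target's response feeds the next turn. For orientation only, here is a minimal sketch of pairing an image target with the red teaming orchestrator. The class names (`DALLETarget`, `AzureOpenAIChatTarget`, `RedTeamingOrchestrator`), constructor parameters, environment variable names, and the completion call are assumptions about the PyRIT API of this era, not the code added in this PR; scorer/stop-condition wiring is omitted for brevity.

```python
# A minimal sketch (not the code added in this PR) of driving an image
# generation target with PyRIT's red teaming orchestrator. Class names,
# constructor parameters, environment variable names, and the completion
# call are assumptions and may differ from the actual PyRIT API.
import asyncio
import os

from pyrit.common import default_values
from pyrit.orchestrator import RedTeamingOrchestrator
from pyrit.prompt_target import AzureOpenAIChatTarget, DALLETarget


async def main() -> None:
    default_values.load_default_env()

    # Target under test: an Azure OpenAI DALL-E deployment (assumed env var names).
    image_target = DALLETarget(
        deployment_name=os.environ["AZURE_DALLE_DEPLOYMENT"],
        endpoint=os.environ["AZURE_DALLE_ENDPOINT"],
        api_key=os.environ["AZURE_DALLE_API_KEY"],
    )

    # An adversarial chat model crafts the prompts sent to the image target.
    red_teaming_llm = AzureOpenAIChatTarget()

    with RedTeamingOrchestrator(
        attack_strategy="Get the image model to render content it should refuse.",
        red_teaming_chat=red_teaming_llm,
        prompt_target=image_target,
        initial_red_teaming_prompt="Begin the conversation.",
        verbose=True,
    ) as red_teaming_orchestrator:
        # Loop for a few turns: adversary proposes a prompt, target responds,
        # and the response is fed back to the adversary for the next attempt.
        await red_teaming_orchestrator.apply_attack_strategy_until_completion_async(max_turns=3)


if __name__ == "__main__":
    asyncio.run(main())
```
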
Changes from all commits
516 changes: 261 additions & 255 deletions doc/code/memory/memory.ipynb

Large diffs are not rendered by default.

298 changes: 154 additions & 144 deletions doc/code/targets/gpt_v_target.ipynb
@@ -1,144 +1,154 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "9de35022",
"metadata": {},
"source": [
"## Azure OpenAI GPT-V Target Demo\n",
"This notebook demonstrates how to use the Azure OpenAI GPT-V target to accept multimodal input (text+image) and generate text output."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "5dc94a0e",
"metadata": {
"execution": {
"iopub.execute_input": "2024-04-29T02:42:51.647147Z",
"iopub.status.busy": "2024-04-29T02:42:51.647147Z",
"iopub.status.idle": "2024-04-29T02:42:56.808308Z",
"shell.execute_reply": "2024-04-29T02:42:56.808308Z"
}
},
"outputs": [],
"source": [
"# Copyright (c) Microsoft Corporation.\n",
"# Licensed under the MIT license.\n",
"\n",
"\n",
"from pyrit.models import PromptRequestPiece, PromptRequestResponse\n",
"from pyrit.prompt_target import AzureOpenAIGPTVChatTarget\n",
"from pyrit.common import default_values\n",
"import pathlib\n",
"from pyrit.common.path import HOME_PATH\n",
"import uuid\n",
"\n",
"default_values.load_default_env()\n",
"test_conversation_id = str(uuid.uuid4())\n",
"\n",
"# use the image from our docs\n",
"image_path = pathlib.Path(HOME_PATH) / \"assets\" / \"pyrit_architecture.png\"\n",
"\n",
"request_pieces = [\n",
" PromptRequestPiece(\n",
" role=\"user\",\n",
" conversation_id=test_conversation_id,\n",
" original_value=\"Describe this picture:\",\n",
" original_value_data_type=\"text\",\n",
" converted_value_data_type=\"text\",\n",
" ),\n",
" PromptRequestPiece(\n",
" role=\"user\",\n",
" conversation_id=test_conversation_id,\n",
" original_value=str(image_path),\n",
" original_value_data_type=\"image_path\",\n",
" converted_value_data_type=\"image_path\",\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "a30eaddf",
"metadata": {
"execution": {
"iopub.execute_input": "2024-04-29T02:42:56.812314Z",
"iopub.status.busy": "2024-04-29T02:42:56.811312Z",
"iopub.status.idle": "2024-04-29T02:42:56.814589Z",
"shell.execute_reply": "2024-04-29T02:42:56.814589Z"
}
},
"outputs": [],
"source": [
"prompt_request_response = PromptRequestResponse(request_pieces=request_pieces)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "e7c8eabc",
"metadata": {
"execution": {
"iopub.execute_input": "2024-04-29T02:42:56.817594Z",
"iopub.status.busy": "2024-04-29T02:42:56.817594Z",
"iopub.status.idle": "2024-04-29T02:43:11.905794Z",
"shell.execute_reply": "2024-04-29T02:43:11.905794Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"None: assistant: The picture is a table listing the components of PyRIT, which stands for \"Python Rapid Information Toolkit\". The table has two columns labeled \"Interface\" and \"Implementation\". Under the Interface column, there are five rows: Target, Datasets, Scoring Engine, Attack Strategy and Memory. Each row in the Interface column has corresponding implementations in the Implementation column.\n",
"\n",
"For the Target interface, there are two types of implementation: Local (local model e.g., ONNX) and Remote (API or web app). \n",
"For the Datasets interface, there are two types of implementation: Static prompts and Dynamic prompt templates.\n",
"For the Scoring Engine interface, there is one type of implementation: PyRIT itself with self-evaluation and API with existing content classifiers.\n",
"For the Attack Strategy interface, there are two types of implementation: Single Turn using static prompts and Multi Turn involving multiple conversations using prompt templates.\n",
"For the Memory interface, there are several types of implementation including Storage (JSON Database), Utils for conversation retrieval and storage, memory sharing and data analysis.\n"
]
}
],
"source": [
"with AzureOpenAIGPTVChatTarget() as azure_openai_chat_target:\n",
" resp = await azure_openai_chat_target.send_prompt_async(prompt_request=prompt_request_response) # type: ignore\n",
" print(resp)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "ae1bcefb",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all"
},
"kernelspec": {
"display_name": "pyrit-dev",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.10.13"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
{
"cells": [
{
"cell_type": "markdown",
"id": "130a41a5",
"metadata": {},
"source": [
"## Azure OpenAI GPT-V Target Demo\n",
"This notebook demonstrates how to use the Azure OpenAI GPT-V target to accept multimodal input (text+image) and generate text output."
]
},
{
"cell_type": "code",
"execution_count": 1,
"id": "a74f5930",
"metadata": {
"execution": {
"iopub.execute_input": "2024-05-16T19:35:20.497173Z",
"iopub.status.busy": "2024-05-16T19:35:20.496177Z",
"iopub.status.idle": "2024-05-16T19:35:32.892964Z",
"shell.execute_reply": "2024-05-16T19:35:32.891955Z"
}
},
"outputs": [],
"source": [
"# Copyright (c) Microsoft Corporation.\n",
"# Licensed under the MIT license.\n",
"\n",
"\n",
"from pyrit.models import PromptRequestPiece, PromptRequestResponse\n",
"from pyrit.prompt_target import AzureOpenAIGPTVChatTarget\n",
"from pyrit.common import default_values\n",
"import pathlib\n",
"from pyrit.common.path import HOME_PATH\n",
"import uuid\n",
"\n",
"default_values.load_default_env()\n",
"test_conversation_id = str(uuid.uuid4())\n",
"\n",
"# use the image from our docs\n",
"image_path = pathlib.Path(HOME_PATH) / \"assets\" / \"pyrit_architecture.png\"\n",
"\n",
"request_pieces = [\n",
" PromptRequestPiece(\n",
" role=\"user\",\n",
" conversation_id=test_conversation_id,\n",
" original_value=\"Describe this picture:\",\n",
" original_value_data_type=\"text\",\n",
" converted_value_data_type=\"text\",\n",
" ),\n",
" PromptRequestPiece(\n",
" role=\"user\",\n",
" conversation_id=test_conversation_id,\n",
" original_value=str(image_path),\n",
" original_value_data_type=\"image_path\",\n",
" converted_value_data_type=\"image_path\",\n",
" ),\n",
"]"
]
},
{
"cell_type": "code",
"execution_count": 2,
"id": "f1b4c596",
"metadata": {
"execution": {
"iopub.execute_input": "2024-05-16T19:35:32.898501Z",
"iopub.status.busy": "2024-05-16T19:35:32.897497Z",
"iopub.status.idle": "2024-05-16T19:35:32.905229Z",
"shell.execute_reply": "2024-05-16T19:35:32.904284Z"
}
},
"outputs": [],
"source": [
"prompt_request_response = PromptRequestResponse(request_pieces=request_pieces)"
]
},
{
"cell_type": "code",
"execution_count": 3,
"id": "2e2c237c",
"metadata": {
"execution": {
"iopub.execute_input": "2024-05-16T19:35:32.912243Z",
"iopub.status.busy": "2024-05-16T19:35:32.911242Z",
"iopub.status.idle": "2024-05-16T19:35:49.616018Z",
"shell.execute_reply": "2024-05-16T19:35:49.616018Z"
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"None: assistant: The image displays a chart titled \"PyRIT Components,\" which appears to outline the structure of a system or framework named PyRIT. The chart is divided into two columns: Interface and Implementation.\n",
"\n",
"Under the Interface column, there are five categories:\n",
"1. Target\n",
"2. Datasets\n",
"3. Scoring Engine\n",
"4. Attack Strategy\n",
"5. Memory\n",
"\n",
"Adjacent to each Interface category, there's an Implementation description:\n",
"- For Target, there are two types: Local (local model e.g., ONNX) and Remote (API or web app).\n",
"- Datasets can be either Static (prompts) or Dynamic (Prompt templates).\n",
"- The Scoring Engine's implementation is described as PyRIT Itself: Self Evaluation and API: Existing content classifiers.\n",
"- Attack Strategy can be Single Turn (Using static prompts) or Multi Turn (Multiple conversations using prompt templates).\n",
"- Memory is detailed with Storage (JSON, Database) and Utils (Conversation, retrieval and storage, memory sharing, data analysis).\n",
"\n",
"Overall, the image seems to summarize the components of PyRIT in terms of how it interfaces with users or other systems alongside specific implementation methods for its functionalities.\n"
]
}
],
"source": [
"with AzureOpenAIGPTVChatTarget() as azure_openai_chat_target:\n",
" resp = await azure_openai_chat_target.send_prompt_async(prompt_request=prompt_request_response) # type: ignore\n",
" print(resp)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "f325a565",
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"jupytext": {
"cell_metadata_filter": "-all"
},
"kernelspec": {
"display_name": "pyrit-python311-clean",
"language": "python",
"name": "pyrit-python311-clean"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.9"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
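
The notebook's send cell uses a top-level `await`, which works in Jupyter but not in a plain script. Below is a minimal standalone sketch of the same flow, built only from the classes and calls visible in the diff above; the `build_image_description_request` helper is illustrative and not part of PyRIT.

```python
# Standalone variant of the notebook flow above. PyRIT classes and the
# send_prompt_async call are taken from the diff; the helper function is
# illustrative only.
import asyncio
import pathlib
import uuid

from pyrit.common import default_values
from pyrit.common.path import HOME_PATH
from pyrit.models import PromptRequestPiece, PromptRequestResponse
from pyrit.prompt_target import AzureOpenAIGPTVChatTarget


def build_image_description_request(image_path: pathlib.Path, question: str) -> PromptRequestResponse:
    """Bundle a text question and an image path into one multimodal request.

    Both pieces share one conversation_id and are wrapped in a single
    PromptRequestResponse, mirroring how the notebook presents text and
    image as one multimodal turn.
    """
    conversation_id = str(uuid.uuid4())
    pieces = [
        PromptRequestPiece(
            role="user",
            conversation_id=conversation_id,
            original_value=question,
            original_value_data_type="text",
            converted_value_data_type="text",
        ),
        PromptRequestPiece(
            role="user",
            conversation_id=conversation_id,
            original_value=str(image_path),
            original_value_data_type="image_path",
            converted_value_data_type="image_path",
        ),
    ]
    return PromptRequestResponse(request_pieces=pieces)


async def main() -> None:
    default_values.load_default_env()  # reads the Azure OpenAI settings from .env

    # Use the architecture diagram shipped with the PyRIT docs, as in the notebook.
    image_path = pathlib.Path(HOME_PATH) / "assets" / "pyrit_architecture.png"
    request = build_image_description_request(image_path, "Describe this picture:")

    # The context manager cleans up the target's connection when done.
    with AzureOpenAIGPTVChatTarget() as azure_openai_chat_target:
        response = await azure_openai_chat_target.send_prompt_async(prompt_request=request)
        print(response)


if __name__ == "__main__":
    asyncio.run(main())
```
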