Merge pull request #1 from bkrabach/papayne/assistant-drive
dedupes functionality from attachment agent into assistant drive
payneio authored Oct 8, 2024
2 parents 7b3f9af + 936086b commit 0a8dd2e
Showing 47 changed files with 1,785 additions and 340 deletions.
1 change: 0 additions & 1 deletion .devcontainer/devcontainer.json
@@ -57,7 +57,6 @@
"epivision.vscode-file-header",
"esbenp.prettier-vscode",
"github.vscode-github-actions",
"matt-rudge.auto-open-preview-panel",
"ms-azuretools.vscode-docker",
"ms-python.debugpy",
"ms-python.python",
10 changes: 7 additions & 3 deletions .vscode/settings.json
@@ -23,13 +23,15 @@
"**/.data/**",
"**/__pycache__/**"
],
"python.analysis.fixAll": ["source.unusedImports"],
"python.analysis.fixAll": [
"source.unusedImports"
],
"python.analysis.inlayHints.functionReturnTypes": true,
"python.analysis.typeCheckingMode": "basic",
"python.defaultInterpreterPath": "${workspaceFolder}/workbench-service/.venv",
"python.languageServer": "Pylance",
"python.testing.pytestEnabled": true,
"python.testing.cwd": "${workspaceFolder:v1}/workbench-service",
"python.testing.cwd": "${workspaceFolder}/workbench-service",
"python.testing.pytestArgs": [],
"[python]": {
"editor.defaultFormatter": "charliermarsh.ruff",
@@ -56,7 +58,9 @@
"source.fixAll": "explicit"
}
},
"css.lint.validProperties": ["composes"],
"css.lint.validProperties": [
"composes"
],
"editor.defaultFormatter": "esbenp.prettier-vscode",
"eslint.lintTask.enable": true,
"editor.formatOnPaste": true,
98 changes: 98 additions & 0 deletions CONTRIBUTING.md
@@ -99,6 +99,8 @@ We use and recommend the following workflow:
8. Wait for feedback or approval of your changes from the code maintainers.
9. When area owners have signed off, and all checks are green, your PR will be merged.

For a detailed walkthrough of this workflow, including how to set up forks and manage your Git workflow, refer to the [Detailed Workflow Walkthrough](#detailed-workflow-walkthrough) section.

### Adding Assistants

We appreciate your interest in extending Semantic Workbench's functionality through
@@ -120,3 +122,99 @@ and test runs must be clean.

If the CI build fails for any reason, the PR issue will be updated with a link
that can be used to determine the cause of the failure.

### Detailed Workflow Walkthrough

This detailed guide walks you through the process of contributing to our repository via forking, cloning, and managing your Git workflow.

Start by forking the repository on GitHub. This creates a copy of the repository under your GitHub account.

Clone your forked repository to your local machine:

```bash
git clone https://github.com/YOUR_USERNAME/semanticworkbench.git
cd semanticworkbench
```

Add the original repository as an upstream remote:

```bash
git remote add upstream https://github.com/microsoft/semanticworkbench.git
```

Check your remotes to ensure you have both `origin` and `upstream`:

```bash
git remote -v
```

You should see something like this:

```
origin https://github.com/YOUR_USERNAME/semanticworkbench.git (fetch)
origin https://github.com/YOUR_USERNAME/semanticworkbench.git (push)
upstream https://github.com/microsoft/semanticworkbench.git (fetch)
upstream https://github.com/microsoft/semanticworkbench.git (push)
```

To keep your fork updated with the latest changes from upstream, configure your local `main` branch to track the upstream `main` branch:

```bash
git branch -u upstream/main main
```

Alternatively, you can edit your `.git/config` file:

```ini
[branch "main"]
remote = upstream
merge = refs/heads/main
```
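
Whichever method you use, you can verify the tracking relationship afterwards; `git branch -vv` lists each local branch alongside the remote branch it tracks:

```bash
git branch -vv
```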

Before starting a new feature or bug fix, ensure that your fork is up-to-date with the latest changes from upstream:

```bash
git checkout main
git pull upstream main
```

Create a new branch for your feature or bug fix:

```bash
git checkout -b feature-name
```

Make your changes in the codebase. Once you are satisfied, add and commit your changes:

```bash
git add .
git commit -m "Description of your changes"
```

Push your changes to your fork:

```bash
git push origin feature-name
```

Go to your fork on GitHub, and you should see a `Compare & pull request` button. Click it and submit your pull request (PR) against the original repository’s `main` branch.
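
If you prefer the GitHub CLI over the web UI, a roughly equivalent command (assuming `gh` is installed and authenticated; the title and body below are placeholders) is:

```bash
gh pr create --base main --title "Description of your changes" --body "Longer description of your changes"
```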

If there are changes in the main repository after you created your branch, sync them to your branch:

```bash
git checkout main
git pull upstream main
git checkout feature-name
git rebase main
```
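
If `feature-name` was already pushed to your fork before the rebase, the rewritten history will need a force push to update the branch and any open PR; `--force-with-lease` is the safer option for this:

```bash
git push --force-with-lease origin feature-name
```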

Once your PR is merged, you can delete your branch both locally and from GitHub.

**Locally:**

```bash
git branch -d feature-name
```

**On GitHub:**
Go to your fork and delete the branch from the `Branches` section.
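
Alternatively, the remote branch can be deleted from the command line (assuming `origin` still points at your fork, as configured above):

```bash
git push origin --delete feature-name
```
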
2 changes: 2 additions & 0 deletions README.md
@@ -103,6 +103,8 @@ When you submit a pull request, a CLA bot will automatically determine whether y
a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions
provided by the bot. You will only need to do this once across all repos using our CLA.

Please see the detailed [contributing guide](CONTRIBUTING.md) for more information on how you can get involved.

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/).
For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or
contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
@@ -90,7 +90,7 @@ class GuidedConversationAgentConfigModel(BaseModel):
description="A loose natural language description of the steps of the conversation",
),
UISchema(widget="textarea", placeholder="[optional]"),
] = config_defaults.conversation_flow
] = config_defaults.conversation_flow.strip()

context: Annotated[
str,
@@ -99,7 +99,7 @@
description="General background context for the conversation.",
),
UISchema(widget="textarea", placeholder="[optional]"),
] = config_defaults.context
] = config_defaults.context.strip()

class ResourceConstraint(ResourceConstraint):
mode: Annotated[
@@ -15,8 +15,10 @@ class ArtifactModel(BaseModel):
initial_feedback: str = Field(description="Feedback on the student's final revised poem.")
final_feedback: str = Field(description="Feedback on how the student was able to improve their poem.")
inappropriate_behavior: list[str] = Field(
description="""List any inappropriate behavior the student attempted while chatting with you. \
It is ok to leave this field Unanswered if there was none."""
description="""
List any inappropriate behavior the student attempted while chatting with you.
It is ok to leave this field Unanswered if there was none.
"""
)


@@ -27,7 +29,8 @@
]

# Conversation Flow (optional) - This defines in natural language the steps of the conversation.
conversation_flow = """1. Start by explaining interactively what an acrostic poem is.
conversation_flow = """
1. Start by explaining interactively what an acrostic poem is.
2. Then give the following instructions for how to go ahead and write one:
1. Choose a word or phrase that will be the subject of your acrostic poem.
2. Write the letters of your chosen word or phrase vertically down the page.
@@ -39,14 +42,17 @@
Pizza parties on the weekend,
Puppies we bend down to tend,
Yelling yay when we win the game
4. Finally have the student write their own acrostic poem using the word or phrase of their choice. Encourage them to be creative and have fun with it.
After they write it, you should review it and give them feedback on what they did well and what they could improve on.
Have them revise their poem based on your feedback and then review it again.
4. Finally have the student write their own acrostic poem using the word or phrase of their choice. Encourage them
to be creative and have fun with it. After they write it, you should review it and give them feedback on what they
did well and what they could improve on. Have them revise their poem based on your feedback and then review it again.
"""

# Context (optional) - This is any additional information or the circumstances the agent is in that it should be aware of.
# It can also include the high level goal of the conversation if needed.
context = """You are working 1 on 1 a 4th grade student who is chatting with you in the computer lab at school while being supervised by their teacher."""
context = """
You are working 1 on 1 a 4th grade student who is chatting with you in the computer lab at school while being
supervised by their teacher.
"""


# Resource Constraints (optional) - This defines the constraints on the conversation such as time or turns.
1 change: 1 addition & 0 deletions assistants/prospector-assistant/.vscode/settings.json
@@ -59,6 +59,7 @@
"pydantic",
"pyproject",
"tiktoken",
"updown",
"virtualenvs"
]
}
30 changes: 5 additions & 25 deletions assistants/prospector-assistant/assistant/agents/artifact_agent.py
@@ -1,4 +1,3 @@
import pathlib
from pathlib import Path
from typing import TYPE_CHECKING, Annotated, Literal, Union

@@ -12,40 +11,21 @@
from semantic_workbench_assistant.config import UISchema
from semantic_workbench_assistant.storage import read_model, write_model

from .. import helpers

if TYPE_CHECKING:
from ..config import AssistantConfigModel


#
# region Helpers
#


# helper for loading an include from a text file
def load_text_include(filename) -> str:
# get directory relative to this module
directory = pathlib.Path(__file__).parent.parent

# get the file path for the prompt file
file_path = directory / "text_includes" / filename

# read the prompt from the file
return file_path.read_text()


# endregion


#
# region Models
#


class ArtifactAgentConfigModel(BaseModel):
enable_artifacts: Annotated[
enabled: Annotated[
bool,
Field(
description=load_text_include("artifact_agent_enable_artifacts.md"),
description=helpers.load_text_include("artifact_agent_enabled.md"),
),
UISchema(enable_markdown_in_description=True),
] = False
@@ -283,7 +263,7 @@ async def get(self, context: ConversationContext) -> AssistantConversationInspec

# get the configuration for the artifact agent
config = await self.config_provider.get(context.assistant)
if not config.agents_config.artifact_agent.enable_artifacts:
if not config.agents_config.artifact_agent.enabled:
return AssistantConversationInspectorStateDataModel(
data={"content": "Artifacts are disabled in assistant configuration."}
)
@@ -1,11 +1,12 @@
import base64
import io
import logging
from pathlib import Path
from typing import Annotated, Any

import docx2txt
import pdfplumber
from assistant_drive import Drive, DriveConfig
from context import Context
from openai.types import chat
from pydantic import BaseModel, Field
from semantic_workbench_api_model.workbench_model import File
@@ -14,11 +15,6 @@
FileStorageContext,
)
from semantic_workbench_assistant.config import UISchema
from semantic_workbench_assistant.storage import (
read_model,
read_models_in_dir,
write_model,
)

logger = logging.getLogger(__name__)

@@ -84,15 +80,16 @@ async def create_or_update_attachment_from_file(
content = await _file_to_str(context, file)

# see if there is already an attachment with this filename
attachment = read_model(_get_attachment_storage_path(context, filename), Attachment)
if attachment:
try:
attachment = _get_attachment_drive(context).read_model(Attachment, filename)
# if there is, update the content
attachment.content = content
else:
except FileNotFoundError:
# if there isn't, create a new attachment
attachment = Attachment(filename=filename, content=content, metadata=metadata)

write_model(_get_attachment_storage_path(context, filename), attachment)
# write the attachment to the storage
_get_attachment_drive(context).write_model(attachment, filename)

@staticmethod
def delete_attachment_for_file(context: ConversationContext, file: File) -> None:
@@ -101,7 +98,7 @@ def delete_attachment_for_file(context: ConversationContext, file: File) -> None
"""

filename = file.filename
_get_attachment_storage_path(context, filename).unlink(missing_ok=True)
_get_attachment_drive(context).delete(filename)

@staticmethod
def generate_attachment_messages(
@@ -120,7 +117,7 @@
"""

# get all attachments and exit early if there are none
attachments = read_models_in_dir(_get_attachment_storage_path(context), Attachment)
attachments = _get_attachment_drive(context).read_models(Attachment)
if not attachments:
return []

@@ -233,14 +230,13 @@ def reduce_attachment_payload_from_content(value: Any) -> Any:
#


def _get_attachment_storage_path(context: ConversationContext, filename: str | None = None) -> Path:
def _get_attachment_drive(context: ConversationContext) -> Drive:
"""
Get the path where attachments are stored.
Get the Drive instance for the attachments.
"""
path = FileStorageContext.get(context).directory / "attachments"
if filename:
path /= filename
return path
drive_context = Context(session_id=context.id)
drive_root = str(FileStorageContext.get(context).directory / "attachments")
return Drive(DriveConfig(context=drive_context, root=drive_root))


async def _raw_content_from_file(context: ConversationContext, file: File) -> bytes:
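
Taken together, the hunks above replace the path-based `read_model`/`write_model` helpers with an `assistant_drive` `Drive`. A minimal sketch of the resulting pattern, using only the calls visible in this diff (the exact `DriveConfig` and `Context` signatures are assumptions based on these hunks rather than the package's documented API), looks roughly like this:

```python
# Sketch only: imports and method names are taken from the hunks above and assume
# the assistant_drive package and its companion `context` module are installed.
from assistant_drive import Drive, DriveConfig
from context import Context
from pydantic import BaseModel


class Attachment(BaseModel):
    # simplified stand-in for the agent's Attachment model
    filename: str
    content: str


# a Drive rooted at a directory and scoped to a session id, mirroring _get_attachment_drive above
drive = Drive(DriveConfig(context=Context(session_id="example-session"), root=".data/attachments"))

filename = "notes.txt"
try:
    # update the attachment if it already exists ...
    attachment = drive.read_model(Attachment, filename)
    attachment.content = "updated content"
except FileNotFoundError:
    # ... otherwise create a new one
    attachment = Attachment(filename=filename, content="new content")

drive.write_model(attachment, filename)  # persist the model under the given filename
all_attachments = drive.read_models(Attachment)  # enumerate every stored Attachment
drive.delete(filename)  # remove it again
```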