justfile #3

Open
wants to merge 7 commits into base: main
5 changes: 4 additions & 1 deletion .gitignore
@@ -1,8 +1,10 @@
# poetry
__pypackages__/
**/__pycache__/
*.egg-info/
*.egg
.venv/
flagged/

# nix
.nix*
@@ -11,8 +13,9 @@ result-*
*.drv
*.gc

# shite
# misc
.DS_Store
.vscode/

# builds
dist/
46 changes: 46 additions & 0 deletions README.md
@@ -0,0 +1,46 @@
# Agent
The Agent is a conversational assistant that helps team members in an organization establish a shared information context and work together more efficiently. It is built with Python and the LangChain library.
## Using `just`
`just shell`
## Manually
0. Clone this repo
1. Install [nix](https://nixos.org/download.html#nix-install-macos)
To test installation:
1.1 run `echo $PATH` and ensure `{...}/.nix-profile/bin` is the first element; if not, try `export PATH=$HOME/.nix-profile/bin:$PATH`
1.2 create `shell.nix`:
```
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {

buildInputs = with pkgs; [
sl
wget
];
}
```
1.3 if you run `nix-shell` from the directory where `shell.nix` exists, you should be dropped into a new shell with the `sl` and `wget` binaries available, which you can test by running them
2. Run `poetry update`
3. Run `poetry shell` inside `agent/agent`
4. ??? (e.g. `python <...>`)
5. PROFIT!11!!!


## Running it
Set the `OPENAI_API_KEY` environment variable when using OpenAI models.
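For example, in a POSIX shell (`sk-...` below is a placeholder, not a real key):

```shell
# Set the key for the current shell session (placeholder value)
export OPENAI_API_KEY="sk-..."

# Confirm it is exported and visible to child processes such as the agent
sh -c 'test -n "$OPENAI_API_KEY" && echo "key is set"'
```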

## Organizational Loops

In its current form, the Agent facilitates a daily "gm --> gn" loop for team members. When a team member says "gm" to the agent, it prompts each team member to share their intentions for the day, starting with the person who initiated the conversation. The agent records each person's response. When a team member says "gn" to the agent, it prompts each team member to share what they accomplished that day, starting with the person who initiated the conversation. The agent records each person's response and can output a work dependency graph based on the data collected.
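The loop above can be sketched in a few lines of Python. This is an illustration only, not the Agent's actual API: the class and method names are made up, and the "dependency graph" heuristic here simply links members whose morning intentions mention each other by name.

```python
class DailyLoop:
    """Illustrative sketch of the daily "gm --> gn" loop."""

    def __init__(self):
        self.intentions = {}        # member -> morning intention
        self.accomplishments = {}   # member -> evening summary

    def gm(self, member, intention):
        """Record a member's intention for the day."""
        self.intentions[member] = intention

    def gn(self, member, accomplished):
        """Record what a member accomplished that day."""
        self.accomplishments[member] = accomplished

    def dependency_edges(self):
        """Naive work-dependency graph: edge (a, b) when a's intention mentions b."""
        return [
            (a, b)
            for a, text in self.intentions.items()
            for b in self.intentions
            if b != a and b in text
        ]


loop = DailyLoop()
loop.gm("alice", "pair with bob on the indexer")
loop.gm("bob", "finish the Zulip message handler")
loop.gn("alice", "indexer done, handler still pending")
print(loop.dependency_edges())  # [('alice', 'bob')]
```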

## Microworlds

The Agent will be used in the future to generate microworlds, which represent particular simulation contexts for various domains involving a set of coordination strategies, agents, and an environment. Human experts (or automated assurance systems) will validate the outputs of the agents and select which subset of microworlds they have generated are actually valid or safe. The output of this validation process will be used to re-train and fine-tune the agents, making them more effective at what they do.

## Conclusion

The Agent is a powerful tool for improving communication and collaboration among team members, generating valuable insights and data about organizations and their operations, and supporting the creation of safe and effective microworlds. By using prompts to initiate and record conversations with team members, the Agent helps establish a shared information context for organizations. By collecting and managing data, the Agent can output a work dependency graph to help team members better synthesize information about what the organization is, what others within the organization are working on, and how all of these elements compose together.

## Appendix
### Reproducibility
`nix` and `poetry` enable [packaging for serverless execution](https://github.com/bananaml/serverless-template), with system dependencies, Python packages, and other application requirements (such as secrets) reproducibly derived for each `agent` runtime environment.
20 changes: 20 additions & 0 deletions agent/.gitignore
@@ -0,0 +1,20 @@
# poetry
__pypackages__/
**/__pycache__/
*.egg-info/
*.egg
.venv/

# nix
.nix*
result
result-*
*.drv
*.gc

# misc
.DS_Store

# builds
dist/
build/
Empty file removed agent/__init__.py
Empty file.
18 changes: 9 additions & 9 deletions agent/agent.py
@@ -19,13 +19,13 @@



documents = PagedPDFSplitter("/Users/barton/Lab/poet/agent/vdf.pdf").load_data()
# documents = PagedPDFSplitter("/Users/barton/Lab/poet/agent/vdf.pdf").load_data()

index = GPTSimpleVectorIndex(documents)
index.save_to_disk('index.json')
# index = GPTSimpleVectorIndex(documents)
# index.save_to_disk('index.json') # TODO: use Zulip UUID here!


pages = loader.load_and_split()
# pages = loader.load_and_split()



@@ -42,11 +42,11 @@
llm = OpenAI(temperature=0.42, model="text-davinci-003")
memory = ConversationSummaryMemory(memory_key="chat_history", llm=llm)

# notagent = initialize_agent(tools,
# llm=llm,
# agent="conversational-react-description",
# verbose=True,
# memory=memory)
notagent = initialize_agent(tools,
llm=llm,
agent="conversational-react-description",
verbose=True,
memory=memory)



Binary file removed agent/agents/__pycache__/base_agent.cpython-310.pyc
Binary file not shown.
36 changes: 11 additions & 25 deletions agent/agents/base_agent.py
@@ -1,30 +1,16 @@
from abc import ABC, abstractmethod
from zulip import Client

from utils import zulip

class BaseAgent(ABC):
def __init__(self):
self.client = Client(config_file="zuliprc", client="MyApp/1.0")

def send_message(self, message):
self.client.send_message(message)
def __init__(self, config):
self.client = zulip.ZulipClient(config)

def handle_message(self):
print("waiting for messages..")
self.client.client.call_on_each_message(
lambda msg: self.respond_to_message(msg)
)

@abstractmethod
def handle_message(self, message):
pass

def run(self):
last_event_id = -1
while True:
# Get the most recent events from the Zulip server
events = self.client.get_events(
queue_id=None,
last_event_id=last_event_id,
dont_block=True,
)

# Handle each event
for event in events["events"]:
last_event_id = max(last_event_id, int(event["id"]))
if event["type"] == "message":
self.handle_message(event["message"])
def respond_to_message(self, msg):
pass
Empty file removed agent/agents/coalition_agent.py
Empty file.
58 changes: 58 additions & 0 deletions agent/agents/digital_twin.py
@@ -0,0 +1,58 @@
import sys
from langchain import LLMChain, PromptTemplate
from base_agent import BaseAgent
from langchain.agents import load_tools
from langchain.agents import initialize_agent
from langchain.llms import OpenAI
from langchain.agents import Tool
from agent.models import aesthetic_model
from config import config
from langchain.agents import ConversationalAgent
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.agents import AgentExecutor


class DigitalTwin(BaseAgent):
def __init__(self, config):
super().__init__(config)
prefix = """
You are the user's "digital twin". Your role is to help them record their daily intentions in the morning, and summarize their day in the evening.
When a user says "gm", prompt them to share their intentions for the day.
When a user says "gn", prompt them to share what they accomplished that day, as well as any other thoughts or reflections they might have. Then,
summarize both what their intention was at the beginning of the day, and what they ended up accomplishing.
Good luck!"""

suffix = """Begin!"
{chat_history}
{input}
Answer:
{agent_scratchpad}"""
tools = []
self.values = {}
prompt = ConversationalAgent.create_prompt(
tools,
prefix=prefix,
suffix=suffix,
input_variables=["input", "chat_history", "agent_scratchpad"],
)
memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(llm=OpenAI(temperature=0), prompt=prompt)
agent = ConversationalAgent(llm_chain=llm_chain, tools=tools, verbose=True)
self.agent_chain = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True, memory=memory
)

def repl(self):
while True:
user_input = input(">>> ")
print(self.agent_chain.run(user_input))

def respond_to_message(self, msg):
stream = msg["stream_id"]
topic = msg["subject"]
output = self.agent_chain.run(msg["content"])
result = self.client.send_message("stream", stream, topic, output)

if __name__ == "__main__":
agent = DigitalTwin(config.Config())
agent.repl()
118 changes: 104 additions & 14 deletions agent/agents/index_agent.py
@@ -1,20 +1,110 @@
import sys
from base_agent import BaseAgent
from llama_index import SimpleDirectoryReader, GPTChromaIndex, GPTSimpleVectorIndex
from langchain.chains.conversation.memory import ConversationBufferMemory, ConversationSummaryMemory
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain import OpenAI, VectorDBQA, ConversationChain
from langchain.chains import ChatVectorDBChain
from langchain.chains.chat_vector_db.prompts import CONDENSE_QUESTION_PROMPT
from langchain.agents import initialize_agent

# from models import data_reader
import config
import data_reader
from llama_index import GPTSimpleVectorIndex
import json


class IndexAgent(BaseAgent):
def __init__(self):
self.data_reader = data_reader.DataReader()
self.data_reader.load(config.DATA_DIR, config.INDEX_PATH)
from langchain.document_loaders import PagedPDFSplitter
from langchain.docstore.document import Document

def handle_message(self, message):
res = self.data_reader.index.query(message["content"])
return res.response
import gradio as gr


if __name__ == "__main__":
agent = IndexAgent()

loader = PagedPDFSplitter("/Users/barton/Lab/poet/agent/vdf.pdf")
documents = loader.load_data()

index = GPTSimpleVectorIndex(documents)
index.save_to_disk('index.json')


pages = loader.load_and_split()



# import gradio as gr

# # import wandb # Catch me if you can

# documents = SimpleDirectoryReader('/Users/barton/Lab/poet/')
# #index = GPTChromaIndex(documents, chroma_collection="iea")

embeddings = OpenAIEmbeddings()

tools = []
llm = OpenAI(temperature=0.42, model="text-davinci-003")
memory = ConversationSummaryMemory(memory_key="chat_history", llm=llm)

# notagent = initialize_agent(tools,
# llm=llm,
# agent="conversational-react-description",
# verbose=True,
# memory=memory)




conversation_with_summary = ConversationChain(
llm=llm,
memory=ConversationSummaryMemory(llm=OpenAI()),
verbose=True
)

def langchain_chat(input_text):
#response = notagent.run(input = input_text)
response = conversation_with_summary.predict(input=input_text)
return response


for page in pages[:10]:
langchain_chat(json.loads(page.json())['page_content'])

# qa = ChatVectorDBChain.from_llm(llm, vectorstore)


iface = gr.Interface(fn=langchain_chat,
inputs=gr.inputs.Textbox(label="gm gm ☀️"),
outputs="text",
title="Lucas Personal Agent",
description="Talk to this thing!")
iface.launch(share = True)


# #text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=16)
# #texts = text_splitter.split_documents(documents)



# # qa = VectorDBQA.from_chain_type(llm=OpenAI(), chain_type="map_reduce", vectorstore=docsearch)




# # print(index.query("What is the future of Ukraine?"))


# # llm = OpenAI(temperature=0)
# # conversation = ConversationChain(
# # llm=llm,
# # verbose=True,
# # memory=ConversationBufferMemory()
# # )





# # print(conversation.predict(input="What does the second law of thermodynamics state?"))

# # # llm_predictor = LLMPredictor(llm=OpenAI(temperature=0, model_name="text-davinci-003"))
# # # index = GPTKeywordTableIndex(documents, llm_predictor=llm_predictor)

# # # response = index.query("How would you use diagrams to illustrate Seeing Like a State?")
# # # print(response)
Empty file removed agent/agents/persuasion_agent.py
Empty file.