Adding trunk check to run on PRs (#9)
* Adding paper links

* Run trunk check on PRs

Signed-off-by: John Matthews <jwmatthews@gmail.com>

---------

Signed-off-by: John Matthews <jwmatthews@gmail.com>
jwmatthews committed Jan 31, 2024
1 parent bd73883 commit dc56b57
Showing 3 changed files with 53 additions and 3 deletions.
24 changes: 24 additions & 0 deletions .github/workflows/trunk-check-annotations.yml
@@ -0,0 +1,24 @@
name: Annotate PR with trunk issues

on:
  workflow_run:
    workflows: [Trunk Check]
    types: [completed]

permissions: read-all

jobs:
  trunk_check_annotate_pr:
    name: Trunk Check PR Annotation
    runs-on: ubuntu-latest
    permissions:
      checks: write

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Trunk Check
        uses: trunk-io/trunk-action@v1
        with:
          post-annotations: true
22 changes: 22 additions & 0 deletions .github/workflows/trunk-check.yml
@@ -0,0 +1,22 @@
name: Trunk Check
on: [pull_request]
concurrency:
  group: ${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

permissions: read-all

jobs:
  trunk_check:
    name: Trunk Check Runner
    runs-on: ubuntu-latest
    permissions:
      checks: write # For trunk to post annotations
      contents: read # For repo checkout

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Trunk Check
        uses: trunk-io/trunk-action@v1
10 changes: 7 additions & 3 deletions Notes.md
@@ -11,14 +11,18 @@
Some recent approaches use static analysis (Shrivastava et al., 2022; Ding et al., 2022; Pei et al., 2023)
or retrieval (Zhang et al., 2023) to extract relevant code fragments from the global context. These approaches expand the prompt (Shrivastava et al., 2022; Pei et al., 2023; Zhang et al., 2023) or require architecture modifications (Ding et al., 2022) and additional training (Ding et al., 2022; Pei et al.,
2023). In comparison, we provide token-level guidance to a frozen LM by invoking static analysis on demand. Our method is complementary to these approaches as they condition the generation by modifying the input to the LM, whereas we apply output-side constraints by reshaping the logits.
- [RepoFusion: Training Code Models to Understand
Your Repository](https://arxiv.org/pdf/2306.10998.pdf)
- [RepoFusion: Training Code Models to Understand Your Repository](https://arxiv.org/pdf/2306.10998.pdf)
- [RAG vs Fine-tuning: Pipelines, Tradeoffs, and a Case Study on Agriculture](https://arxiv.org/abs/2401.08406)
- [QUANTIFYING LANGUAGE MODELS’ SENSITIVITY TO SPURIOUS FEATURES IN PROMPT DESIGN or: How I learned to start worrying about prompt formatting](https://arxiv.org/pdf/2310.11324.pdf)
- [LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning](https://arxiv.org/pdf/2308.11148v2.pdf)

## Interestig Blog Posts/Examples/Tutorials
## Interesting Blog Posts/Examples/Tutorials

- [ReAct: Synergizing Reasoning and Acting in Language Models](https://react-lm.github.io/)
- [Fine-Tune LLaMA 2 with QLoRA](https://colab.research.google.com/drive/1Zmaceu65d7w4Tcd-cfnZRb6k_Tcv2b8g?usp=sharing)
- From: https://github.com/smol-ai/llama-fine-tuning-hackameetup/tree/main#getting-started
- [Codebase Analysis: Langchain Agents](https://carbonated-yacht-2c5.notion.site/Codebase-Analysis-Langchain-Agents-0b0587acd50647ca88aaae7cff5df1f2)

## Interesting Projects

- [LLMLingua: Enhancing Large Language Model Inference via Prompt Compression](https://github.com/microsoft/LLMLingua)
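
The excerpt above (on token-level guidance for a frozen LM) describes output-side constraints applied by reshaping logits with results from on-demand static analysis. As a rough, minimal sketch of that idea (not code from this commit or the cited papers), the snippet below masks the next-token logits to an externally supplied allowed set; `allowed_token_ids` is a hypothetical stand-in for whatever the analysis returns.

```python
# Minimal sketch: constrain a frozen LM's next-token choice by masking logits
# to a set of token ids permitted by an external checker (e.g., a static
# analyzer queried on demand). `allowed_token_ids` is a hypothetical input,
# not an API from any specific library.
import torch

def constrained_next_token(logits: torch.Tensor, allowed_token_ids: list[int]) -> int:
    """Greedily pick the next token after suppressing disallowed ids."""
    mask = torch.full_like(logits, float("-inf"))
    mask[allowed_token_ids] = 0.0          # keep only tokens the analysis permits
    constrained = logits + mask            # reshape logits; the model itself stays frozen
    return int(torch.argmax(constrained))  # greedy decode over the allowed set
```

Here the logits are assumed to be a 1-D tensor over the vocabulary for a single decoding step; sampling instead of argmax would work the same way over the masked distribution.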
