Merge branch 'main' into improve_ci_speed
* main:
  Add new baselines test github workflow (#2009)
  Add simulation to E2E tests (#2074)
  Add E2E test for Pandas (#2070)
  Add E2E test for scikit-learn (#2073)
  Add format and test scripts (#1987)
  Add missing ruff dependency to baselines (#2075)
  Add E2E test for MXNet (#2069)
  Add E2E test for Jax (#2067)
  Update bare E2E test client (#2068)
  Update PyTorch E2E test (#2072)
  Update Tensorflow E2E test (#2071)
  Fix flake8 error E266 in baseline template (#2065)
  Fix baseline creation on linux-based systems (#2063)
  Updates to Baseline Template Readmes (#2059)
  Refresh FedProx MNIST baseline (#1918)
  Extend test checking tools config (#1986)
  Improved documentation (#2006)
  updated material for 30min FL tutorial (#2005)
  Update codeowners list (#2004)
  Create and delete nodes via Fleet API (#1901)
tanertopal committed Jul 11, 2023
2 parents 575c7b0 + a7ea94d commit b5aaaef
Showing 93 changed files with 4,209 additions and 585 deletions.
2 changes: 1 addition & 1 deletion .github/CODEOWNERS
@@ -4,4 +4,4 @@
* @danieljanes @tanertopal

# Flower Baselines
/baselines @pedropgusmao @jafermarq @tanertopal @danieljanes
/baselines @jafermarq @tanertopal @danieljanes
103 changes: 99 additions & 4 deletions .github/workflows/baselines.yml
@@ -1,5 +1,14 @@
name: Baselines

# The aim of this workflow is to test only the changed (or added) baseline.
# Here is the rough idea of how it works (more details are given in the comments below):
# 1. Checks for changes between the current branch and main (in the case of a PR),
# or between HEAD and HEAD~1, i.e. main's last commit and the one before it
# (in the case of a push to main).
# 2. Fails the test if more than one baseline was changed. Passes the test
# (and skips the remaining steps) if no baseline was changed. Proceeds with the
# test if exactly one baseline was added or modified.
# 3. Sets up the environment specified for that baseline.
# 4. Runs the tests.
on:
push:
branches:
@@ -13,19 +22,90 @@ env:

defaults:
run:
working-directory: baselines/flwr_baselines
working-directory: baselines

jobs:
test_baselines:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
# A checkout depth of two is needed in the case of a merge to main because
# we compare HEAD (the current version) with HEAD~1 (the version before the
# PR was merged)
with:
fetch-depth: 2
- name: Fetch main branch
run: |
# The main branch is needed for the comparison in the case of a PR (by
# default the checkout fetches as little information as possible and does
# not include the history of main)
if [ ${{ github.event_name }} == "pull_request" ]
then
git fetch origin main:main
fi
- name: Find changed/new baselines
id: find_changed_baselines_dirs
run: |
if [ ${{ github.event_name }} == "push" ]
then
# Push event triggered when merging to main
change_references="HEAD..HEAD~1"
else
# Pull request event triggered for any commit to a pull request
change_references="main..HEAD"
fi
dirs=$(git diff --dirstat=files,0 ${change_references} . | awk '{print $2}' | grep -E '^baselines/[^/]*/$' | grep -v -e '^baselines/dev' -e '^baselines/baseline_template' -e '^baselines/flwr_baselines' | sed 's/^baselines\///')
# git diff --dirstat=files,0 ${change_references} . - lists the changed directories;
# a file counts as changed if more than 0 of its lines changed, and the results
# are returned in the format x.y% path/to/dir/
# awk '{print $2}' - keeps only the directory paths (drops the percentages)
# grep -E '^baselines/[^/]*/$' - keeps only direct subdirectories of baselines/
# grep -v -e ... - excludes `baseline_template`, `dev`, and `flwr_baselines`
# sed 's/^baselines\///' - strips the leading 'baselines/' so only the
# baseline's directory name remains
echo "Detected changed directories: ${dirs}"
# Save changed dirs to output of this step
# Use the randomly generated delimiter for the multiline output
EOF=$(dd if=/dev/urandom bs=15 count=1 status=none | base64)
echo "dirs<<$EOF" >> "$GITHUB_OUTPUT"
for dir in $dirs
do
echo "$dir" >> "$GITHUB_OUTPUT"
done
echo "$EOF" >> "$GITHUB_OUTPUT"
- name: Validate changed/new baselines
id: validate_changed_baselines_dirs
run: |
dirs="${{ steps.find_changed_baselines_dirs.outputs.dirs }}"
dirs_array=()
if [[ -n $dirs ]]; then
while IFS= read -r line; do
dirs_array+=("$line")
done <<< "$dirs"
fi
length=${#dirs_array[@]}
echo "The number of changed baselines is $length"
if [ $length -gt 1 ]; then
echo "The changes should only apply to a single baseline"
exit 1
fi
if [ $length -eq 0 ]; then
echo "The baselines were not changed - skipping the remaining steps."
echo "baseline_changed=false" >> "$GITHUB_OUTPUT"
exit 0
fi
echo "changed_dir=${dirs[0]}" >> "$GITHUB_OUTPUT"
echo "baseline_changed=true" >> "$GITHUB_OUTPUT"
- name: Set up Python
id: setup-python
if: steps.validate_changed_baselines_dirs.outputs.baseline_changed == 'true'
uses: actions/setup-python@v4
with:
python-version: 3.8.15
- name: Cache Python dependencies
if: steps.validate_changed_baselines_dirs.outputs.baseline_changed == 'true'
uses: actions/cache@v3
with:
path: |
@@ -34,13 +114,28 @@ jobs:
~/.cache/pypoetry
key: ${{ runner.os }}-python-cache-${{ steps.setup-python.outputs.python-version }}-${{ hashFiles('**/pyproject.toml') }}
- name: Install build tools
if: steps.validate_changed_baselines_dirs.outputs.baseline_changed == 'true'
run: |
python -m pip install -U pip==23.1.2
python -m pip install -U setuptools==68.0.0
python -m pip install -U poetry==1.5.1
poetry config virtualenvs.create false
- name: Install dependencies
if: steps.validate_changed_baselines_dirs.outputs.baseline_changed == 'true'
run: |
changed_dir="${{ steps.validate_changed_baselines_dirs.outputs.changed_dir }}"
cd "${changed_dir}"
python -m poetry install
- name: Lint + Test (isort/black/mypy/pylint/pytest)
run: ./dev/test.sh
- name: Test
if: steps.validate_changed_baselines_dirs.outputs.baseline_changed == 'true'
run: |
dir="${{ steps.validate_changed_baselines_dirs.outputs.changed_dir }}"
echo "Testing ${dir}"
./dev/test-baseline.sh $dir
- name: Test Structure
if: steps.validate_changed_baselines_dirs.outputs.baseline_changed == 'true'
run: |
dir="${{ steps.validate_changed_baselines_dirs.outputs.changed_dir }}"
echo "Testing ${dir}"
./dev/test-baseline-structure.sh $dir
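
The detection step above can be reproduced locally. A minimal sketch, assuming it is run from the repository root of a PR branch with main already fetched (the grep/sed chain is copied from the workflow step; the surrounding script is illustrative only):

#!/usr/bin/env bash
# Illustrative local version of the "Find changed/new baselines" step.
# Assumptions: run from the repo root, `main` is fetched, the current branch holds the changes.
set -euo pipefail

change_references="main..HEAD"
dirs=$(git diff --dirstat=files,0 "${change_references}" . \
  | awk '{print $2}' \
  | grep -E '^baselines/[^/]*/$' \
  | grep -v -e '^baselines/dev' -e '^baselines/baseline_template' -e '^baselines/flwr_baselines' \
  | sed 's/^baselines\///' || true)

count=$(echo "${dirs}" | grep -c . || true)
echo "Changed baseline directories (${count}):"
echo "${dirs}"
if [ "${count}" -gt 1 ]; then
  echo "Changes should only apply to a single baseline" >&2
  exit 1
fi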
37 changes: 37 additions & 0 deletions .github/workflows/deprecated_baselines.yml
@@ -0,0 +1,37 @@
name: Deprecated-Baselines

on:
push:
branches:
- main
pull_request:
branches:
- main

env:
FLWR_TELEMETRY_ENABLED: 0

defaults:
run:
working-directory: baselines/flwr_baselines

jobs:
test_deprecated_baselines:
runs-on: ubuntu-22.04
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.8.15
- name: Install build tools
run: |
python -m pip install -U pip==23.1.2
python -m pip install -U setuptools==68.0.0
python -m pip install -U poetry==1.5.1
poetry config virtualenvs.create false
- name: Install dependencies
run: |
python -m poetry install
- name: Lint + Test (isort/black/mypy/pylint/pytest)
run: ./dev/test.sh
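
The dev/test.sh script invoked here is not part of this diff. A minimal sketch of what such a script might run, assuming it simply chains the tools named in the step (the exact targets and arguments are assumptions):

#!/usr/bin/env bash
# Hypothetical sketch of baselines/flwr_baselines/dev/test.sh; the real script may differ.
set -euo pipefail

python -m isort --check-only .   # import ordering
python -m black --check .        # formatting
python -m mypy .                 # static typing
python -m pylint flwr_baselines  # linting (assumed package name)
python -m pytest .               # unit tests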
160 changes: 157 additions & 3 deletions .github/workflows/e2e.yml
@@ -14,6 +14,7 @@ env:
jobs:
pytorch:
runs-on: ubuntu-22.04
timeout-minutes: 10
steps:
- uses: actions/checkout@v3
- name: Set up Python
@@ -48,13 +49,18 @@ jobs:
run: |
cd e2e/pytorch
python -c "from torchvision.datasets import CIFAR10; CIFAR10('./data', download=True)"
- name: Run tests
- name: Run edge client test
run: |
cd e2e/pytorch
./test.sh
- name: Run virtual client test
run: |
cd e2e/pytorch
python simulation.py
tensorflow:
runs-on: ubuntu-22.04
timeout-minutes: 10
steps:
- uses: actions/checkout@v3
- name: Set up Python
@@ -88,14 +94,20 @@ jobs:
- name: Download Datasets
run: |
python -c "import tensorflow as tf; tf.keras.datasets.cifar10.load_data()"
- name: Run tests
- name: Run edge client test
run: |
cd e2e/tensorflow
./test.sh
- name: Run virtual client test
run: |
cd e2e/tensorflow
python simulation.py
bare:
runs-on: ubuntu-22.04
timeout-minutes: 10
steps:
- uses: actions/checkout@v3
- name: Set up Python
@@ -121,8 +133,150 @@ jobs:
run: |
cd e2e/bare
python -m poetry install
- name: Run tests
- name: Run edge client test
run: |
cd e2e/bare
./test.sh
- name: Run virtual client test
run: |
cd e2e/bare
python simulation.py
pandas:
runs-on: ubuntu-22.04
timeout-minutes: 10
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.8.16
- name: Install
run: |
python -m pip install -U pip==22.3.1
python -m pip install -U setuptools==65.6.3
python -m pip install poetry==1.3.2
poetry config virtualenvs.create false
- name: Install dependencies
run: |
cd e2e/pandas
python -m poetry install
- name: Cache Datasets
uses: actions/cache@v2
with:
path: "./e2e/pandas/data"
key: pandas-datasets
- name: Download Datasets
run: |
cd e2e/pandas
mkdir -p data
python -c "from sklearn.datasets import load_iris; load_iris(as_frame=True)['data'].to_csv('./data/client.csv')"
- name: Run edge client test
run: |
cd e2e/pandas
./test.sh
- name: Run virtual client test
run: |
cd e2e/pandas
python simulation.py
jax:
runs-on: ubuntu-22.04
timeout-minutes: 10
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.8.16
- name: Install
run: |
python -m pip install -U pip==22.3.1
python -m pip install -U setuptools==65.6.3
python -m pip install poetry==1.3.2
poetry config virtualenvs.create false
- name: Install dependencies
run: |
cd e2e/jax
python -m poetry install
- name: Run edge client test
run: |
cd e2e/jax
./test.sh
- name: Run virtual client test
run: |
cd e2e/jax
python simulation.py
mxnet:
runs-on: ubuntu-22.04
timeout-minutes: 10
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.8.16
- name: Install
run: |
python -m pip install -U pip==22.3.1
python -m pip install -U setuptools==65.6.3
python -m pip install poetry==1.3.2
poetry config virtualenvs.create false
- name: Install dependencies
run: |
cd e2e/mxnet
python -m poetry install
- name: Download Datasets
run: |
cd e2e/mxnet
python -c "import mxnet as mx; mx.test_utils.get_mnist()"
- name: Run edge client test
run: |
cd e2e/mxnet
./test.sh
- name: Run virtual client test
run: |
cd e2e/mxnet
python simulation.py
scikit:
runs-on: ubuntu-22.04
timeout-minutes: 10
steps:
- uses: actions/checkout@v3
- name: Set up Python
uses: actions/setup-python@v4
with:
python-version: 3.8.16
- name: Install
run: |
python -m pip install -U pip==22.3.1
python -m pip install -U setuptools==65.6.3
python -m pip install poetry==1.3.2
poetry config virtualenvs.create false
- name: Install dependencies
run: |
cd e2e/scikit-learn
python -m poetry install
- name: Download Datasets
run: |
cd e2e/scikit-learn
python -c "import openml; openml.datasets.get_dataset(554)"
- name: Run edge client test
run: |
cd e2e/scikit-learn
./test.sh
- name: Run virtual client test
run: |
cd e2e/scikit-learn
python simulation.py
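
Each framework job now runs two checks: an "edge client test" via ./test.sh and a "virtual client test" via simulation.py. Any of these jobs can be replayed locally; a sketch for the pandas job, with the version pins and commands taken from the workflow above (only the wrapper script itself is illustrative):

#!/usr/bin/env bash
# Illustrative local replay of the e2e/pandas job defined above.
set -euo pipefail

# Build tools (same pins as the workflow)
python -m pip install -U pip==22.3.1
python -m pip install -U setuptools==65.6.3
python -m pip install poetry==1.3.2
poetry config virtualenvs.create false  # install into the current environment, as the job does

# Project dependencies
cd e2e/pandas
python -m poetry install

# Dataset used by the client (same command as the "Download Datasets" step)
mkdir -p data
python -c "from sklearn.datasets import load_iris; load_iris(as_frame=True)['data'].to_csv('./data/client.csv')"

./test.sh             # edge client test (script contents not shown in this diff)
python simulation.py  # virtual client test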
5 changes: 5 additions & 0 deletions README.md
@@ -69,6 +69,11 @@ Flower's goal is to make federated learning accessible to everyone. This series

Stay tuned, more tutorials are coming soon. Topics include **Privacy and Security in Federated Learning**, and **Scaling Federated Learning**.

## 30 Minute Federated Learning Tutorial

[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/examples/simulation_pytorch_colab/tutorial.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/examples/simulation_pytorch_colab/tutorial.ipynb))


## Documentation

[Flower Docs](https://flower.dev/docs):