
[Question] Easier media rating #284

Closed
lyz-code opened this issue Aug 22, 2023 · 5 comments · Fixed by #289
@lyz-code

I'm thinking of migrating from mediatracker to ryot, but there are some UI interactions that I still feel more comfortable with in mediatracker. That's probably because I'm not experienced enough with Ryot, so I'd like to share my current workflow with you and ask how you'd do it in Ryot.

Whenever I finish watching a movie:

  • Mediatracker's jellyfin plugin registers the scrobble
  • I go into mediatracker's main web page, where I see the latest elements that have not been rated
  • I click on the star icon
  • I enter the rating and optionally a review.

From what I've seen, on Ryot the process would be:

  • Ryot's jellyfin plugin registers the scrobble
  • I go into Ryot's movies page (you can't see the unrated elements on the home page, right?)
  • I click on the movie
  • I click on post a review
  • I click on the %, then type the number (I guess it would be more uncomfortable to type on the mobile keyboard unless it's defined as a numeric field; I haven't tried yet), and optionally enter a review

I feel that rating movies is still more user friendly in mediatracker, as it requires many fewer clicks and less manual typing thanks to the quick 5-star workflow available on the home page of the site.

Whenever I finish watching a tv show episode:

  • Mediatracker's jellyfin plugin registers the scrobble
  • I go into mediatracker's main web page, where I see the latest elements that have not been rated
  • I click on the tv show
  • Then I scroll and search for Episodes
  • Then I scroll and search for the last episode that is not rated but is marked as seen
  • I click on the star icon of the episode
  • I enter the rating and optionally a review.

This workflow is a bit more cumbersome in mediatracker than rating movies, because you can't do everything from the home page.

From what I've seen, on Ryot the process would be:

  • Ryot's jellyfin plugin registers the scrobble
  • I go into Ryot's series page (you can't see the unrated elements on the home page, right?)
  • I search for the tv show in the list (I've seen that even if you manually update the progress of a tv show, it doesn't show up first in the Last seen list, which is the default page. It does show first in Last updated, but only after you click the tabler-icon-refresh button. So if I have the page preloaded, I'd have to click that button to see the latest state of the tv shows; it won't auto-update when I get in)
  • I click on the tv show
  • Click on History to see which was the latest episode seen
  • Click on Reviews to see which was the latest episode rated
  • Click on Actions
  • Click on Post a review
  • Click on the Episode field and select the correct one (by default it selects the first unseen episode, so if the plugin scrobbles S01E03, S01E04 will appear here even though it has not yet been viewed)
  • I click on the %, then type the number (same caveat about the mobile keyboard as above), and optionally enter a review

Thus I also feel that rating tv shows is still more user friendly in mediatracker, due to the quick 5-star workflow and the fact that, even though it's cumbersome to reach the episode you want to rate, it's still faster than Ryot's process.

In an ideal scenario, I'd see the unrated episodes sorted by last seen on the home page, click an element of the poster (like the star in mediatracker), enter the rating and optionally a review, and click submit.

I don't want to sound negative: I think Ryot's web design is much prettier than mediatracker's, and I also love the detail that a tv show's rating is automatically calculated from the ratings of its episodes <3. If the rating process becomes easier I'd definitely make the switch.

Thanks for maintaining and developing Ryot :)

@IgnisDa (Owner) commented Aug 22, 2023

Thanks for the extensive report. I agree that Ryot needs to be more functional, but I am not a UI designer, so I find that work boring. I just rolled with whatever made sense to me at the time.

I like your advice. Maybe I can have a star icon on the top right of the list page that directly takes you to the review page. Does that sound good?

@lyz-code (Author)

> I am not a UI designer, so I find that work boring. I just rolled with whatever made sense to me at the time.

I feel you; I do the same with my programs. It's more than enough that you're sharing the code itself!

> Maybe I can have a star icon on the top right of the list page that directly takes you to the review page. Does that sound good?

I'm not sure that I understand you correctly, but it does sound good. A star icon on the top right (or wherever) of each item's poster (similar to how mediatracker does it) that either brings you to the review page or opens a pop-up with the review page content would be wonderful.

What do you think of the rest of the suggestions?

  • A view to quickly see which items haven't been rated
  • A more user friendly way to set the rating: maybe a slider with stops every 10% beside the prompt?

@IgnisDa (Owner) commented Aug 22, 2023

> A view to quickly see which items haven't been rated

This is possible by applying the "Unrated" filter on the list page.

> A more user friendly way to set the rating: maybe a slider with stops every 10% beside the prompt?

Sounds reasonable. Maybe I will create a setting for this.

@IgnisDa (Owner) commented Aug 22, 2023

For some reason the "Unrated" filter does not work on the demo instance. But it does work on mine. IDK why 🤷🏽‍♂️.

@IgnisDa (Owner) commented Aug 24, 2023

Some other things I would like to address:

> you can't see the unrated elements on the home page, right?

Yes. I do not want the homepage cluttered. Somewhere down the line, I would like the homepage to be configurable, but right now, I think it is fine.

> I guess it would be more uncomfortable to type on the mobile keyboard unless it's defined as a numeric field

It is defined as a number :)

> So if I have the page preloaded, I'd have to click that button to see the latest state of the tv shows; it won't auto-update when I get in

The query made to the database is pretty heavy, so I had disabled auto refresh. But since there are only 20 elements, it does not matter. Will enable it.

lyz-code added a commit to lyz-code/blue-book that referenced this issue Sep 7, 2023
Gain early map control with scouts, then switch into steppe lancers and front siege, and finally castle in their face once you've clicked up to Imperial.

- [Example Hera vs Mr.Yo in TCI](https://yewtu.be/watch?v=20bktCBldcw)

feat(aleph#Ingest gets stuck): Ingest gets stuck

It looks like Aleph doesn't yet offer an easy way to debug it. This can be seen in the following issues:

- [Improve the UX for bulk uploading and processing of large number of files](alephdata/aleph#2124)
- [Document ingestion gets stuck effectively at 100%](alephdata/aleph#1839)
- [Display detailed ingestion status to see if everything is alright and when the collection is ready](alephdata/aleph#1525)

Some interesting ideas I've extracted while diving into these issues are:

- You can also upload files using the [`alephclient` python command line tool](https://github.com/alephdata/alephclient)
- Some of the files might fail to be processed without leaving any hint to the uploader or the viewer.
  - This results in an incomplete dataset and the users don't get to know that the dataset is incomplete. This is problematic if the completeness of the dataset is crucial for an investigation.
  - There is no way to upload only the files that failed to be processed without re-uploading the entire set of documents or manually making a list of the failed documents and re-uploading them
  - There is no way for uploaders or Aleph admins to see an overview of processing errors to figure out why some files are failing to be processed without going through docker logs (which is not very user-friendly)
- There was an attempt to [improve the way ingest-files manages the pending tasks](alephdata/aleph#2127); it's merged into the [release/4.0.0](https://github.com/alephdata/ingest-file/tree/release/4.0.0) branch, but it has [not yet reached `main`](alephdata/ingest-file#423).

There are some tickets that attempt to address these issues on the command line:

- [Allow users to upload/crawl new files only](alephdata/alephclient#34)
- [Check if alephclient crawldir was 100% successful or not](alephdata/alephclient#35)

I think it would be interesting either to contribute to `alephclient` to solve those issues or, if that's too complicated, to create a small python script that detects which files were not uploaded and tries to reindex them, and/or to open issues for whatever is making the ingests fail.
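
For reference, a bulk upload with `alephclient` looks roughly like this (a sketch based on its README; the host, API key and foreign-id are placeholders):

```bash
# Placeholders: point these at your Aleph instance
export ALEPHCLIENT_HOST=https://aleph.example.org
export ALEPHCLIENT_API_KEY=your-api-key

# Recursively upload a directory into the collection identified by --foreign-id
alephclient crawldir --foreign-id my-investigation /path/to/documents
```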

feat(ansible_snippets#Ansible condition that uses a regexp): Ansible condition that uses a regexp

```yaml
- name: Check if an instance name or hostname matches a regex pattern
  when: inventory_hostname is not match('molecule-.*')
  fail:
    msg: "not a molecule instance"
```

feat(ansible_snippets#Ansible-lint doesn't find requirements): Ansible-lint doesn't find requirements

It may be because you're using `requirements.yaml` instead of `requirements.yml`. Create a temporary link from one file to the other, run the command, and then remove the link.

It will work from then on even if you remove the link. `¯\(°_o)/¯`
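
In case it helps, the dance looks like this (run from the role root; your `ansible-lint` invocation may differ):

```bash
ln -s requirements.yaml requirements.yml  # temporary link
ansible-lint
rm requirements.yml                       # remove the link afterwards
```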

feat(ansible_snippets#Run task only once): Run task only once

Add `run_once: true` on the task definition:

```yaml
- name: Do a thing on the first host in a group.
  debug:
    msg: "Yay only prints once"
  run_once: true
```

feat(aws_snippets#Invalidate a cloudfront distribution): Invalidate a cloudfront distribution

```bash
aws cloudfront create-invalidation --paths "/pages/about" --distribution-id my-distribution-id
```
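
To invalidate every path at once you can pass a wildcard, which is standard `aws` CLI usage:

```bash
aws cloudfront create-invalidation --paths "/*" --distribution-id my-distribution-id
```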

feat(bash_snippets#Remove the lock screen in ubuntu): Remove the lock screen in ubuntu

Create the `/usr/share/glib-2.0/schemas/90_ubuntu-settings.gschema.override` file with the following content:

```ini
[org.gnome.desktop.screensaver]
lock-enabled = false
[org.gnome.settings-daemon.plugins.power]
idle-dim = false
```

Then reload the schemas with:

```bash
sudo glib-compile-schemas /usr/share/glib-2.0/schemas/
```

feat(bash_snippets#How to deal with HostContextSwitching alertmanager alert): How to deal with HostContextSwitching alertmanager alert

A context switch is described as the kernel suspending execution of one process on the CPU and resuming execution of some other process that had previously been suspended. A context switch is required for every interrupt and every task that the scheduler picks.

Context switching can be due to multitasking, interrupt handling, and switching between user and kernel mode. The interrupt rate will naturally go high if there is heavy network or disk traffic, and it also depends on how often the application invokes system calls.

If the cores/CPUs are not sufficient to handle the load of the threads created by the application, that will also result in context switching.

It is not a cause for concern until performance breaks down; it is expected that the CPU does context switching. Don't start the analysis here: there is a lot of other statistical data that should be looked at before diving into kernel activity. Verify the CPU, memory and network usage during this time.
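
For a quick system-wide view of the rate before drilling into per-process data, `vmstat` reports context switches per second in its `cs` column:

```bash
# Sample every 3 seconds, 5 times; check the "cs" column
vmstat 3 5
```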

You can see which process is causing the issue with `pidstat`:

```bash
# Per-process context-switch listing, presumably produced by something like:
#   pidstat -w 3 10
10:15:24 AM     UID     PID     cswch/s         nvcswch/s       Command
10:15:27 AM     0       1       162656.7        16656.7         systemd
10:15:27 AM     0       9       165451.04       15451.04        ksoftirqd/0
10:15:27 AM     0       10      158628.87       15828.87        rcu_sched
10:15:27 AM     0       11      156147.47       15647.47        migration/0
10:15:27 AM     0       17      150135.71       15035.71        ksoftirqd/1
10:15:27 AM     0       23      129769.61       12979.61        ksoftirqd/2
10:15:27 AM     0       29      2238.38         238.38          ksoftirqd/3
10:15:27 AM     0       43      1753            753             khugepaged
10:15:27 AM     0       443     1659            165             usb-storage
10:15:27 AM     0       456     1956.12         156.12          i915/signal:0
10:15:27 AM     0       465     29550           29550           kworker/3:1H-xfs-log/dm-3
10:15:27 AM     0       490     164700          14700           kworker/0:1H-kblockd
10:15:27 AM     0       506     163741.24       16741.24        kworker/1:1H-xfs-log/dm-3
10:15:27 AM     0       594     154742          154742          dmcrypt_write/2
10:15:27 AM     0       629     162021.65       16021.65        kworker/2:1H-kblockd
10:15:27 AM     0       715     147852.48       14852.48        xfsaild/dm-1
10:15:27 AM     0       886     150706.86       15706.86        irq/131-iwlwifi
10:15:27 AM     0       966     135597.92       13597.92        xfsaild/dm-3
10:15:27 AM     81      1037    2325.25         225.25          dbus-daemon
10:15:27 AM     998     1052    118755.1        11755.1         polkitd
10:15:27 AM     70      1056    158248.51       15848.51        avahi-daemon
10:15:27 AM     0       1061    133512.12       455.12          rngd
10:15:27 AM     0       1110    156230          16230           cupsd
10:15:27 AM     0       1192    152298.02       1598.02         sssd_nss
10:15:27 AM     0       1247    166132.99       16632.99        systemd-logind
10:15:27 AM     0       1265    165311.34       16511.34        cups-browsed
10:15:27 AM     0       1408    10556.57        1556.57         wpa_supplicant
10:15:27 AM     0       1687    3835            3835            splunkd
10:15:27 AM     42      1773    3728            3728            Xorg
10:15:27 AM     42      1996    3266.67         266.67          gsd-color
10:15:27 AM     0       3166    32036.36        3036.36         sssd_kcm
10:15:27 AM     119349  3194    151763.64       11763.64        dbus-daemon
10:15:27 AM     119349  3199    158306          18306           Xorg
10:15:27 AM     119349  3242    15.28           5.8             gnome-shell

pidstat -wt 3 10  > /tmp/pidstat-t.out

Linux 4.18.0-80.11.2.el8_0.x86_64 (hostname)    09/08/2020  _x86_64_    (4 CPU)

10:15:15 AM   UID      TGID       TID   cswch/s   nvcswch/s  Command
10:15:19 AM     0         1         -   152656.7   16656.7   systemd
10:15:19 AM     0         -         1   152656.7   16656.7   |__systemd
10:15:19 AM     0         9         -   165451.04  15451.04  ksoftirqd/0
10:15:19 AM     0         -         9   165451.04  15451.04  |__ksoftirqd/0
10:15:19 AM     0        10         -   158628.87  15828.87  rcu_sched
10:15:19 AM     0         -        10   158628.87  15828.87  |__rcu_sched
10:15:19 AM     0        23         -   129769.61  12979.61  ksoftirqd/2
10:15:19 AM     0         -        23   129769.61  12979.33  |__ksoftirqd/2
10:15:19 AM     0        29         -   32424.5    2445      ksoftirqd/3
10:15:19 AM     0         -        29   32424.5    2445      |__ksoftirqd/3
10:15:19 AM     0        43         -   334        34        khugepaged
10:15:19 AM     0         -        43   334        34        |__khugepaged
10:15:19 AM     0       443         -   11465      566       usb-storage
10:15:19 AM     0         -       443   6433       93        |__usb-storage
10:15:19 AM     0       456         -   15.41      0.00      i915/signal:0
10:15:19 AM     0         -       456   15.41      0.00      |__i915/signal:0
10:15:19 AM     0       715         -   19.34      0.00      xfsaild/dm-1
10:15:19 AM     0         -       715   19.34      0.00      |__xfsaild/dm-1
10:15:19 AM     0       886         -   23.28      0.00      irq/131-iwlwifi
10:15:19 AM     0         -       886   23.28      0.00      |__irq/131-iwlwifi
10:15:19 AM     0       966         -   19.67      0.00      xfsaild/dm-3
10:15:19 AM     0         -       966   19.67      0.00      |__xfsaild/dm-3
10:15:19 AM    81      1037         -   6.89       0.33      dbus-daemon
10:15:19 AM    81         -      1037   6.89       0.33      |__dbus-daemon
10:15:19 AM     0      1038         -   11567.31   4436      NetworkManager
10:15:19 AM     0         -      1038   1.31       0.00      |__NetworkManager
10:15:19 AM     0         -      1088   0.33       0.00      |__gmain
10:15:19 AM     0         -      1094   1340.66    0.00      |__gdbus
10:15:19 AM   998      1052         -   118755.1   11755.1   polkitd
10:15:19 AM   998         -      1052   32420.66   25545     |__polkitd
10:15:19 AM   998         -      1132   0.66       0.00      |__gdbus
```

Then, with the help of the PID that is causing the issue, one can get the details of all the system calls it makes, for example with `strace` (a sketch; tune the options to your case):

```bash
# Count the system calls of the offending PID: -c prints a summary table on
# exit, -f follows forked children, -o writes the result to a file.
strace -c -f -p <pid> -o /tmp/strace.out
```

Let this command run for a few minutes while the load/context-switch rates are high. It is safe to run on a production system, so you could also run it on a healthy system to get a comparative baseline. Through strace one can debug and troubleshoot the issue by looking at the system calls the process has made.

feat(bash_snippets#Redirect stderr of all subsequent commands of a script to a file): Redirect stderr of all subsequent commands of a script to a file

```bash
{
    somecommand
    somecommand2
    somecommand3
} 2>&1 | tee -a $DEBUGLOG
```
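
Note that `2>&1 |` sends both stdout and stderr through `tee`. If you want to log only stderr and leave stdout untouched, a process-substitution sketch (bash-specific):

```bash
{
    somecommand
    somecommand2
} 2> >(tee -a "$DEBUGLOG" >&2)  # only stderr is appended to the log, and it is still printed
```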

feat(diffview#Use the same binding to open and close the diffview windows): Use the same binding to open and close the diffview windows

```lua
vim.keymap.set('n', 'dv', function()
  if next(require('diffview.lib').views) == nil then
    vim.cmd('DiffviewOpen')
  else
    vim.cmd('DiffviewClose')
  end
end)
```

fix(gitea#Using `paths-filter` custom action): Using `paths-filter` custom action to skip job actions

```yaml
jobs:
  test:
    if: "!startsWith(github.event.head_commit.message, 'bump:')"
    name: Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout the codebase
        uses: https://github.com/actions/checkout@v3

      - name: Check if we need to run the molecule tests
        uses: https://github.com/dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            molecule:
              - 'defaults/**'
              - 'tasks/**'
              - 'handlers/**'
              - 'templates/**'
              - 'molecule/**'
              - 'requirements.yaml'
              - '.github/workflows/tests.yaml'

      - name: Run Molecule tests
        if: steps.filter.outputs.molecule == 'true'
        run: make molecule
```

You can find more examples of how to use `paths-filter` [here](https://github.com/dorny/paths-filter#examples).

feat(gitsigns): Introduce gitsigns

[Gitsigns](https://github.com/lewis6991/gitsigns.nvim) is a neovim plugin to create git decorations similar to the vim plugin [gitgutter](https://github.com/airblade/vim-gitgutter) but written purely in Lua.

Installation:

Add to your `plugins.lua` file:

```lua
  use {'lewis6991/gitsigns.nvim'}
```

Install it with `:PackerInstall`.

Configure it in your `init.lua` with:

```lua
-- Configure gitsigns
require('gitsigns').setup({
  on_attach = function(bufnr)
    local gs = package.loaded.gitsigns

    local function map(mode, l, r, opts)
      opts = opts or {}
      opts.buffer = bufnr
      vim.keymap.set(mode, l, r, opts)
    end

    -- Navigation
    map('n', ']c', function()
      if vim.wo.diff then return ']c' end
      vim.schedule(function() gs.next_hunk() end)
      return '<Ignore>'
    end, {expr=true})

    map('n', '[c', function()
      if vim.wo.diff then return '[c' end
      vim.schedule(function() gs.prev_hunk() end)
      return '<Ignore>'
    end, {expr=true})

    -- Actions
    map('n', '<leader>gs', gs.stage_hunk)
    map('n', '<leader>gr', gs.reset_hunk)
    map('v', '<leader>gs', function() gs.stage_hunk {vim.fn.line('.'), vim.fn.line('v')} end)
    map('v', '<leader>gr', function() gs.reset_hunk {vim.fn.line('.'), vim.fn.line('v')} end)
    map('n', '<leader>gS', gs.stage_buffer)
    map('n', '<leader>gu', gs.undo_stage_hunk)
    map('n', '<leader>gR', gs.reset_buffer)
    map('n', '<leader>gp', gs.preview_hunk)
    map('n', '<leader>gb', function() gs.blame_line{full=true} end)
    map('n', '<leader>gB', gs.toggle_current_line_blame) -- gB so it doesn't clobber the blame_line mapping above
    map('n', '<leader>gd', gs.diffthis)
    map('n', '<leader>gD', function() gs.diffthis('~') end)
    map('n', '<leader>ge', gs.toggle_deleted)

    -- Text object
    map({'o', 'x'}, 'ih', ':<C-U>Gitsigns select_hunk<CR>')
  end
})
```

Usage:

Some interesting bindings:

- `]c`: Go to next diff chunk
- `[c`: Go to previous diff chunk
- `<leader>gs`: Stage chunk, it works both in normal and visual mode
- `<leader>gr`: Restore chunk from index, it works both in normal and visual mode
- `<leader>gp`: Preview diff, you can use it with `]c` and `[c` to see all the chunk diffs
- `<leader>gb`: Show the git blame of the line as a shadowed comment
- `<leader>gB`: Toggle the blame of the current line

fix(grafana): Install grafana

```yaml
---
version: "3.8"
services:
  grafana:
    image: grafana/grafana-oss:${GRAFANA_VERSION:-latest}
    container_name: grafana
    restart: unless-stopped
    volumes:
      - data:/var/lib/grafana
    networks:
      - grafana
      - monitorization
      - swag
    env_file:
      - .env
    depends_on:
      - db
  db:
    image: postgres:${DATABASE_VERSION:-15}
    restart: unless-stopped
    container_name: grafana-db
    environment:
      - POSTGRES_DB=${GF_DATABASE_NAME:-grafana}
      - POSTGRES_USER=${GF_DATABASE_USER:-grafana}
      - POSTGRES_PASSWORD=${GF_DATABASE_PASSWORD:?database password required}
    networks:
      - grafana
    volumes:
      - db-data:/var/lib/postgresql/data
    env_file:
      - .env

networks:
  grafana:
    external:
      name: grafana
  monitorization:
    external:
      name: monitorization
  swag:
    external:
      name: swag

volumes:
  data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/grafana/app
  db-data:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /data/grafana/database
```

Where the `monitorization` network is the one where prometheus and the rest of the monitoring stack listen, and `swag` is the network to the gateway proxy.

It uses the `.env` file to store the required [configuration](#configure-grafana). To connect grafana with authentik you need to add the following variables:

```bash

GF_AUTH_GENERIC_OAUTH_ENABLED="true"
GF_AUTH_GENERIC_OAUTH_NAME="authentik"
GF_AUTH_GENERIC_OAUTH_CLIENT_ID="<Client ID from above>"
GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET="<Client Secret from above>"
GF_AUTH_GENERIC_OAUTH_SCOPES="openid profile email"
GF_AUTH_GENERIC_OAUTH_AUTH_URL="https://authentik.company/application/o/authorize/"
GF_AUTH_GENERIC_OAUTH_TOKEN_URL="https://authentik.company/application/o/token/"
GF_AUTH_GENERIC_OAUTH_API_URL="https://authentik.company/application/o/userinfo/"
GF_AUTH_SIGNOUT_REDIRECT_URL="https://authentik.company/application/o/<Slug of the application from above>/end-session/"
GF_AUTH_OAUTH_AUTO_LOGIN="true"
GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH="contains(groups[*], 'Grafana Admins') && 'Admin' || contains(groups[*], 'Grafana Editors') && 'Editor' || 'Viewer'"
```

The configuration above includes an example of role mapping. Upon login, this configuration looks at the groups the current user is a member of. If any of the specified group names are found, the user is granted the corresponding role in Grafana.

In the example shown above, one of the specified group names is "Grafana Admins". If the user is a member of this group, they are granted the "Admin" role in Grafana. If not, it moves on to check whether the user is a member of the "Grafana Editors" group. If they are, they are granted the "Editor" role. Finally, if the user is a member of neither group, it falls back to granting the "Viewer" role.

Also make sure that `root_url` is set correctly in your configuration, otherwise your redirect url might get processed incorrectly. For example, if your grafana instance runs with the default configuration and is accessible behind a reverse proxy at https://grafana.company, your redirect url will end up looking like https://grafana.company/. If you get a `user does not belong to org` error when trying to log into grafana for the first time via OAuth, check whether you have an organization with the ID of 1; if not, add the following to your grafana config:

```ini
[users]
auto_assign_org = true
auto_assign_org_id = <id-of-your-default-organization>
```

Once you've made sure that the OAuth login works, go to `/admin/users` and remove the `admin` user.

feat(grafana#Configure grafana): Configure grafana

Grafana has default and custom configuration files. You can customize your Grafana instance by modifying the custom configuration file or by using environment variables. To see the list of settings for a Grafana instance, refer to [View server settings](https://grafana.com/docs/grafana/latest/administration/stats-and-license/#view-server-settings).

To override an option, use `GF_<SectionName>_<KeyName>`, where the section name is the text within the brackets. Everything should be uppercase, and `.` and `-` should be replaced by `_`. For example, if you have these configuration settings:

```ini
instance_name = ${HOSTNAME}

[security]
admin_user = admin

[auth.google]
client_secret = 0ldS3cretKey

[plugin.grafana-image-renderer]
rendering_ignore_https_errors = true

[feature_toggles]
enable = newNavigation
```

You can override variables on Linux machines with:

```bash
export GF_DEFAULT_INSTANCE_NAME=my-instance
export GF_SECURITY_ADMIN_USER=owner
export GF_AUTH_GOOGLE_CLIENT_SECRET=newS3cretKey
export GF_PLUGIN_GRAFANA_IMAGE_RENDERER_RENDERING_IGNORE_HTTPS_ERRORS=true
export GF_FEATURE_TOGGLES_ENABLE=newNavigation
```

And in the docker compose you can edit the `.env` file. Mine looks similar to:

```bash
GRAFANA_VERSION=latest
GF_DEFAULT_INSTANCE_NAME="production"
GF_SERVER_ROOT_URL="https://your.domain.org"

GF_DATABASE_TYPE=postgres
DATABASE_VERSION=15
GF_DATABASE_HOST=grafana-db:5432
GF_DATABASE_NAME=grafana
GF_DATABASE_USER=grafana
GF_DATABASE_PASSWORD="change-for-a-long-password"
GF_DATABASE_SSL_MODE=disable

GF_AUTH_GENERIC_OAUTH_ENABLED="true"
GF_AUTH_GENERIC_OAUTH_NAME="authentik"
GF_AUTH_GENERIC_OAUTH_CLIENT_ID="<Client ID from above>"
GF_AUTH_GENERIC_OAUTH_CLIENT_SECRET="<Client Secret from above>"
GF_AUTH_GENERIC_OAUTH_SCOPES="openid profile email"
GF_AUTH_GENERIC_OAUTH_AUTH_URL="https://authentik.company/application/o/authorize/"
GF_AUTH_GENERIC_OAUTH_TOKEN_URL="https://authentik.company/application/o/token/"
GF_AUTH_GENERIC_OAUTH_API_URL="https://authentik.company/application/o/userinfo/"
GF_AUTH_SIGNOUT_REDIRECT_URL="https://authentik.company/application/o/<Slug of the application from above>/end-session/"
GF_AUTH_OAUTH_AUTO_LOGIN="true"
GF_AUTH_GENERIC_OAUTH_ROLE_ATTRIBUTE_PATH="contains(groups[*], 'Grafana Admins') && 'Admin' || contains(groups[*], 'Grafana Editors') && 'Editor' || 'Viewer'"
```

feat(grafana#Configure datasources): Configure datasources

You can manage data sources in Grafana by adding YAML configuration files in the `provisioning/datasources` directory. Each config file can contain a list of datasources to add or update during startup. If the data source already exists, Grafana reconfigures it to match the provisioned configuration file.

The configuration file can also list data sources to delete automatically, specified under `deleteDatasources`. Grafana deletes these data sources before adding or updating the ones in the `datasources` list; see the sketch after the example below.

For example, to [configure a Prometheus datasource](https://grafana.com/docs/grafana/latest/datasources/prometheus/) use:

```yaml
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    # Access mode - proxy (server in the UI) or direct (browser in the UI).
    url: http://prometheus:9090
    jsonData:
      httpMethod: POST
      manageAlerts: true
      prometheusType: Prometheus
      prometheusVersion: 2.44.0
      cacheLevel: 'High'
      disableRecordingRules: false
      incrementalQueryOverlapWindow: 10m
      exemplarTraceIdDestinations: []
```

feat(grafana#Configure dashboards): Configure dashboards

You can manage dashboards in Grafana by adding one or more YAML config files in the `provisioning/dashboards` directory. Each config file can contain a list of dashboards providers that load dashboards into Grafana from the local filesystem.

Create a file called `dashboards.yaml` with the following contents:

```yaml
---
apiVersion: 1
providers:
  - name: default # A uniquely identifiable name for the provider
    type: file
    options:
      path: /etc/grafana/provisioning/dashboards/definitions
```

Then, inside the config directory of your docker compose, create the directory `provisioning/dashboards/definitions` and add the JSON files of the dashboards themselves. You can download them from the dashboard pages. For example:

- [Node Exporter](https://grafana.com/grafana/dashboards/1860-node-exporter-full/)
- [Blackbox Exporter](https://grafana.com/grafana/dashboards/13659-blackbox-exporter-http-prober/)
- [Alertmanager](https://grafana.com/grafana/dashboards/9578-alertmanager/)
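
If you prefer the command line, the JSON can also be fetched from grafana.com's API. The URL pattern below is an assumption based on the dashboard id shown in each page's URL, so verify it before scripting around it:

```bash
# 1860 is the id of the Node Exporter Full dashboard
curl -sL https://grafana.com/api/dashboards/1860/revisions/latest/download \
  -o provisioning/dashboards/definitions/node-exporter.json
```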

feat(grafana#Configure the plugins): Configure the plugins

To install plugins in the Docker container, complete the following steps:

- Pass the plugins you want to be installed to Docker with the `GF_INSTALL_PLUGINS` environment variable as a comma-separated list.
- This sends each plugin name to `grafana-cli plugins install ${plugin}` and installs them when Grafana starts.

For example:

```bash
docker run -d -p 3000:3000 --name=grafana \
  -e "GF_INSTALL_PLUGINS=grafana-clock-panel, grafana-simple-json-datasource" \
  grafana/grafana-oss
```

To specify the version of a plugin, add the version number to the `GF_INSTALL_PLUGINS` environment variable. For example: `GF_INSTALL_PLUGINS=grafana-clock-panel 1.0.1`.

To install a plugin from a custom URL, use the following convention to specify the URL: `<url to plugin zip>;<plugin install folder name>`. For example: `GF_INSTALL_PLUGINS=https://github.com/VolkovLabs/custom-plugin.zip;custom-plugin`.

feat(jellyfin#Forgot Password. Please try again within your home network to initiate the password reset process.): Forgot Password. Please try again within your home network to initiate the password reset process.

If you're an external jellyfin user you can't reset your password unless you're inside the LAN. This is because the password reset process is simple and insecure.

If you don't care about that and still think the internet is a happy and safe place, [here](https://wiki.jfa-go.com/docs/password-resets/) and [here](hrfee/jellyfin-accounts#12) are some instructions on how to bypass this security measure.

For more information also read [1](jellyfin/jellyfin#2282) and [2](jellyfin/jellyfin#2869).

feat(lindy): New Charleston, lindy and solo jazz videos

Charleston:

- The DecaVita Sisters:
   - [Freestyle Lindy Hop & Charleston](https://www.youtube.com/watch?v=OV6ZDuczkag)
   - [Moby "Honey"](https://www.youtube.com/watch?v=ciMFQnwfp50)

Solo Jazz:

- [Pedro Vieira at Little Big Swing Camp 2022](https://yewtu.be/watch?v=pmxn2uIVuUY)

Lindy Hop:

- The DecaVita Sisters:
   - [Compromise - agreement in the moment](https://youtu.be/3DhD2u5Eyv8?si=2WKisSvEB3Z8TVMy)
   - [Lindy hop improv](https://www.youtube.com/watch?v=qkdxcdeicLE)

feat(matrix): How to install matrix

```bash
# Install the Element desktop client from element.io's apt repository
sudo apt install -y wget apt-transport-https
sudo wget -O /usr/share/keyrings/element-io-archive-keyring.gpg https://packages.element.io/debian/element-io-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/element-io-archive-keyring.gpg] https://packages.element.io/debian/ default main" | sudo tee /etc/apt/sources.list.d/element-io.list
sudo apt update
sudo apt install element-desktop
```

fix(mediatracker#alternatives): Update ryot comparison with mediatracker

[Ryot](https://github.com/IgnisDa/ryot) has a better web design, and it also has a [jellyfin scrobbler](IgnisDa/ryot#195), although it's not [yet stable](IgnisDa/ryot#187). There are other UI tweaks that are preventing me from migrating to ryot, such as [easier media rating](IgnisDa/ryot#284) and [the percentage over five stars rating system](IgnisDa/ryot#283).

feat(molecule#Get variables from the environment): Get variables from the environment

You can configure your `molecule.yaml` file to read variables from the environment with:

```yaml
provisioner:
  name: ansible
  inventory:
    group_vars:
      all:
        my_secret: ${MY_SECRET}
```

It's useful to have a task that checks if this secret exists:

```yaml
- name: Verify that the secret is set
  fail:
    msg: 'Please export my_secret: export MY_SECRET=$(pass show my_secret)'
  run_once: true
  when: my_secret == None
```

In the CI you can set it as a secret in the repository.

feat(retroarch): Install retroarch instructions

To add the stable PPA to your system, type:

```bash
sudo add-apt-repository ppa:libretro/stable
sudo apt-get update
sudo apt-get install retroarch
```

Go to Main Menu/Online Updater and then update everything you can:

- Update Core Info Files
- Update Assets
- Update Controller Profiles
- Update Databases
- Update Overlays
- Update GLSL Shaders

feat(vim): Update treesitter language definitions

To do so you need to run:

```vim
:TSInstall <language>
```

To update the parsers, run:

```vim
:TSUpdate
```

feat(vim#Telescope changes working directory when opening a file): Telescope changes working directory when opening a file

In my case it was caused by a snippet I use to remember the folds:

```lua
vim.cmd[[
  augroup remember_folds
    autocmd!
    autocmd BufWinLeave * silent! mkview
    autocmd BufWinEnter * silent! loadview
  augroup END
]]
```

It looks like it had saved a view with the other working directory, so when a file was loaded the `cwd` changed. To solve it I ran `mkview` again from the correct directory.