Antelope Token API


Tokens information from the Antelope blockchains, powered by Substreams

REST API

Usage

| Method | Path | Query parameters (\* = required) | Description |
|--------|------|----------------------------------|-------------|
| `GET` `text/html` | `/` | - | Swagger API playground |
| `GET` `application/json` | `/chains` | `limit`, `page` | Information about the chains and latest head block in the database |
| `GET` `application/json` | `/{chain}/balance` | `block_num`, `contract`, `symcode`, `account`\*, `limit`, `page` | Balances of an account |
| `GET` `application/json` | `/{chain}/holders` | `contract`\*, `symcode`\*, `limit`, `page` | List of holders of a token |
| `GET` `application/json` | `/{chain}/supply` | `block_num`, `issuer`, `contract`\*, `symcode`\*, `limit`, `page` | Total supply for a token |
| `GET` `application/json` | `/{chain}/tokens` | `limit`, `page` | List of available tokens |
| `GET` `application/json` | `/{chain}/transfers` | `block_range`, `from`, `to`, `contract`, `symcode`, `limit`, `page` | All transfers related to a token |
| `GET` `application/json` | `/{chain}/transfers/{trx_id}` | `limit`, `page` | A specific transfer related to a token |

Docs

| Method | Path | Description |
|--------|------|-------------|
| `GET` `application/json` | `/openapi` | OpenAPI specification |
| `GET` `application/json` | `/version` | API version and Git short commit hash |

Monitoring

| Method | Path | Description |
|--------|------|-------------|
| `GET` `text/plain` | `/health` | Checks database connection |
| `GET` `text/plain` | `/metrics` | Prometheus metrics |
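As a sketch, a liveness probe against `/health` could look like this (the base URL is a placeholder, and treating any 2xx response as healthy is an assumption):

```typescript
// Sketch: probe the /health monitoring endpoint.
// `base` is a placeholder URL; point it at wherever the API is running.
async function checkHealth(base: string): Promise<boolean> {
  try {
    const res = await fetch(`${base}/health`);
    return res.ok; // true for any 2xx status
  } catch {
    return false; // connection refused, DNS failure, etc.
  }
}
```

For example, `await checkHealth("http://localhost:8080")` once the API is up.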

GraphQL

Go to `/graphql` for a GraphiQL interface.

Additional notes

  • For the `block_range` parameter on transfers, you can pass a single integer value (lower bound) or an array of two values (inclusive range).
  • If you pass the same account in both the `from` and `to` fields for transfers, you'll get all inbound and outbound transfers for that account.
  • The more parameters you add (i.e. the more precise your query is), the faster the back-end should be able to fetch the results.
  • Don't forget to request the `meta` fields in the response to get access to pagination and statistics!
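To illustrate the notes above, here is a sketch of building a transfers query URL. The base URL, chain, and account names are placeholder values, and sending an array `block_range` as repeated query parameters is an assumption — check the Swagger playground for the exact encoding:

```typescript
// Sketch: build a query URL for the /{chain}/transfers endpoint.
// All concrete values below are placeholders, not canonical examples.
function transfersUrl(
  base: string,
  chain: string,
  params: Record<string, string | number | (string | number)[]>
): string {
  const url = new URL(`${base}/${chain}/transfers`);
  for (const [key, value] of Object.entries(params)) {
    if (Array.isArray(value)) {
      // e.g. an inclusive block_range, assumed here to be repeated parameters
      for (const v of value) url.searchParams.append(key, String(v));
    } else {
      url.searchParams.set(key, String(value));
    }
  }
  return url.toString();
}

// Same account in `from` and `to`: all inbound and outbound transfers
const url = transfersUrl("http://localhost:8080", "eos", {
  from: "myaccount",
  to: "myaccount",
  contract: "eosio.token",
  symcode: "EOS",
  limit: 10,
  page: 1,
});
```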

Requirements

API stack architecture

Token API architecture diagram

Setting up the database backend (ClickHouse)

Without a cluster

Example of how to set up the ClickHouse backend for sinking EOS data.

  1. Start the ClickHouse server
```shell
clickhouse server
```
  2. Create the token database
```shell
echo "CREATE DATABASE eos_tokens_v1" | clickhouse client -h <host> --port 9000 -d <database> -u <user> --password <password>
```
  3. Run the `create_schema.sh` script
```shell
./create_schema.sh -o /tmp/schema.sql
```
  4. Execute the schema
```shell
cat /tmp/schema.sql | clickhouse client -h <host> --port 9000 -d <database> -u <user> --password <password>
```
  5. Run the sink
```shell
substreams-sink-sql run clickhouse://<username>:<password>@<host>:9000/eos_tokens_v1 \
https://github.com/pinax-network/substreams-antelope-tokens/releases/download/v0.4.0/antelope-tokens-v0.4.0.spkg `#Substreams package` \
-e eos.substreams.pinax.network:443 `#Substreams endpoint` \
1: `#Block range <start>:<end>` \
--final-blocks-only --undo-buffer-size 1 --on-module-hash-mistmatch=warn --batch-block-flush-interval 100 --development-mode `#Additional flags`
```
  6. Start the API
```shell
# Will be available on localhost:8080 by default
antelope-token-api --host <host> --database eos_tokens_v1 --username <username> --password <password> --verbose
```

With a cluster

If you run ClickHouse in a cluster, change steps 2 and 3:

  2. Create the token database
```shell
echo "CREATE DATABASE eos_tokens_v1 ON CLUSTER <cluster>" | clickhouse client -h <host> --port 9000 -d <database> -u <user> --password <password>
```
  3. Run the `create_schema.sh` script
```shell
./create_schema.sh -o /tmp/schema.sql -c <cluster>
```

Warning

Linux x86 only

```console
$ wget https://github.com/pinax-network/antelope-token-api/releases/download/v4.0.0/antelope-token-api
$ chmod +x ./antelope-token-api
$ ./antelope-token-api --help
Usage: antelope-token-api [options]

Token balances, supply and transfers from the Antelope blockchains

Options:
  -V, --version            output the version number
  -p, --port <number>      HTTP port on which to attach the API (default: "8080", env: PORT)
  --hostname <string>      Server listen on HTTP hostname (default: "localhost", env: HOSTNAME)
  --host <string>          Database HTTP hostname (default: "http://localhost:8123", env: HOST)
  --database <string>      The database to use inside ClickHouse (default: "default", env: DATABASE)
  --username <string>      Database user (default: "default", env: USERNAME)
  --password <string>      Password associated with the specified username (default: "", env: PASSWORD)
  --max-limit <number>     Maximum LIMIT queries (default: 10000, env: MAX_LIMIT)
  -v, --verbose <boolean>  Enable verbose logging (choices: "true", "false", default: false, env: VERBOSE)
  -h, --help               display help for command
```

.env Environment variables

```env
# API Server
PORT=8080
HOSTNAME=localhost

# ClickHouse Database
HOST=http://127.0.0.1:8123
DATABASE=default
USERNAME=default
PASSWORD=
TABLE=
MAX_LIMIT=500

# Logging
VERBOSE=true
```
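As a sketch of how these variables map onto the defaults from the options table above (illustrative only, not the API's actual implementation):

```typescript
// Sketch: resolve configuration from environment variables, falling back
// to the defaults documented in the options table. Illustrative only.
const config = {
  port: Number(process.env.PORT ?? 8080),
  hostname: process.env.HOSTNAME ?? "localhost",
  host: process.env.HOST ?? "http://localhost:8123",
  database: process.env.DATABASE ?? "default",
  username: process.env.USERNAME ?? "default",
  password: process.env.PASSWORD ?? "",
  maxLimit: Number(process.env.MAX_LIMIT ?? 10000),
  verbose: (process.env.VERBOSE ?? "false") === "true",
};
```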

Docker environment

  • Pull from GitHub Container registry

For latest tagged release
```shell
docker pull ghcr.io/pinax-network/antelope-token-api:latest
```

For head of `main` branch
```shell
docker pull ghcr.io/pinax-network/antelope-token-api:develop
```

  • Build from source
```shell
docker build -t antelope-token-api .
```

  • Run with `.env` file
```shell
docker run -it --rm --env-file .env ghcr.io/pinax-network/antelope-token-api
```
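To run the API alongside ClickHouse, a minimal `docker-compose.yml` sketch could look like this (the image tags, ports, and service wiring are assumptions to adapt to your setup):

```yaml
# Sketch only: adapt image tags, credentials, and ports to your deployment
services:
  clickhouse:
    image: clickhouse/clickhouse-server:latest
    ports:
      - "8123:8123"
  api:
    image: ghcr.io/pinax-network/antelope-token-api:latest
    env_file: .env
    environment:
      HOST: http://clickhouse:8123   # reach ClickHouse via the service name
    ports:
      - "8080:8080"
    depends_on:
      - clickhouse
```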

Contributing

See CONTRIBUTING.md.

Quick start

Install Bun

```console
$ bun install
$ bun dev
```

Tests

```console
$ bun lint
$ bun test
```