This repository has been archived by the owner on Nov 6, 2020. It is now read-only.

Parity on Kubernetes #9218

Closed
onpaws opened this issue Jul 25, 2018 · 13 comments
Labels
M2-config 📂 Chain specifications and node configurations. Z1-question 🙋‍♀️ Issue is a question. Closer should answer.
Comments

@onpaws

onpaws commented Jul 25, 2018

I'm running:

  • Which Parity version?: 2.0.0
  • Which operating system?: Linux as provided by the Docker image
  • How installed?: Docker image installed via Kubernetes
  • Are you fully synchronized?: yes
  • Which network are you connected to?: kovan
  • Did you try to restart the node?: yes

I have a "Parity on Kubernetes" security question and would appreciate any insight.

I've deployed the Parity k8s manifests @ddorgan shared here -- they've been working great, thanks!
The logs on the Parity pod show it syncing the blockchain - excellent.

I also see the Kubernetes docs say a Service of type ClusterIP "makes the service only reachable from within the cluster."
Now, because I see the blockchain syncing in the logs, I think they mean something like a traditional NAT setup - traffic originating inside my Pod can get out, but traffic originating outside the cluster won't route to the Service/Pod.

OK, so far so good.

Enter my app, which is Dockerized and depends on the Parity JSON-RPC. I know I shouldn't open JSON-RPC to the outside world. At home, NAT on my router should prevent direct connections to the machines behind it.

Thus I believe, though I'm not 100% sure, that it should be safe to run Parity behind a Service like the following.
What do you think?

apiVersion: v1
kind: Service
metadata:
  name: parity-service
  namespace: default
spec:
  # spec.type is omitted, so this defaults to ClusterIP (reachable only from inside the cluster)
  selector:
    app: parity
  ports:
    - name: eth-net        # devp2p peer-to-peer traffic
      port: 30303
      protocol: TCP
    - name: json-rpc-http  # HTTP JSON-RPC
      port: 8545
      protocol: TCP
    - name: json-rpc-ws    # WebSockets JSON-RPC
      port: 8546
      protocol: TCP
@Tbaut Tbaut added Z1-question 🙋‍♀️ Issue is a question. Closer should answer. M2-config 📂 Chain specifications and node configurations. labels Jul 26, 2018
@Tbaut Tbaut added this to the 2.1 milestone Jul 26, 2018
@JohnnySheffield
Contributor

I think the traffic on 30303 should be UDP, not TCP, so something like:

 ports:
    - name: eth-net
      port: 30303
      protocol: UDP
    ...

But it would be best if somebody can confirm this.
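
For reference, Ethereum's devp2p stack generally uses TCP 30303 for peer connections and UDP 30303 for node discovery, so a Service snippet covering both (assuming Parity's default ports) might look like:

 ports:
    - name: eth-net-tcp
      port: 30303
      protocol: TCP    # devp2p peer connections
    - name: eth-net-udp
      port: 30303
      protocol: UDP    # devp2p node discovery
    ...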

Whether it's TCP or UDP, it's good practice for your Dockerfile to EXPOSE the port (Kubernetes doesn't strictly need EXPOSE for a Service to target a pod port, but it documents what the container listens on), e.g.:

EXPOSE 8080 8545 8180 30303/udp
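
On the Kubernetes side, the rough equivalent would be containerPort entries in the pod spec. A minimal sketch, assuming the standard Parity ports (names and image tag are just illustrative):

containers:
  - name: parity
    image: parity/parity:v2.0.0   # illustrative; use whatever tag you run
    ports:
      - containerPort: 8545       # HTTP JSON-RPC
        protocol: TCP
      - containerPort: 8546       # WebSockets JSON-RPC
        protocol: TCP
      - containerPort: 30303      # devp2p peer connections
        protocol: TCP
      - containerPort: 30303      # devp2p node discovery
        protocol: UDP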

I'm also not sure what would be the best practice - generally what ports should be exposed on Parity docker images?

Related to #9231

@onpaws
Author

onpaws commented Jul 27, 2018

Ah, interesting to hear about UDP.

The reason my example has TCP is simply because @ddorgan's example, which I started with, had it here.

@onpaws
Author

onpaws commented Jul 27, 2018

generally what ports should be exposed on Parity docker images?

Facing precisely the same question and keen to learn the answer also!

@onpaws
Author

onpaws commented Jul 27, 2018

I just checked the "official" Parity release available on the public Docker registry, i.e. parity/parity:v2.0.1.
The image appears to be Ubuntu-based.
@JohnnySheffield I'm curious what you make of this port configuration?

@onpaws onpaws closed this as completed Jul 27, 2018
@onpaws onpaws reopened this Jul 27, 2018
@onpaws
Author

onpaws commented Jul 27, 2018

(whoops sorry for accidental close)

@JohnnySheffield
Contributor

JohnnySheffield commented Jul 27, 2018

EXPOSE 8080 8545 8180 seems reasonable.

I was brave enough to add 30301/udp for an image that we're running on OpenShift, as I reasoned it would be easier to expose this port with a service if needed.

@ddorgan
Collaborator

ddorgan commented Jul 27, 2018

@onpaws so you have a custom container that connects to the Parity service, right? And makes JSON-RPC calls to it?

You should be able to run the service internal to the cluster. You can also use the DNS-based service discovery built into Kubernetes; e.g. in your example, the hostname parity-service.default.svc.cluster.local should resolve to the host behind your service. You can then connect from your application on port 8545 / 30303, etc.
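
For example, a sketch of wiring your app to that address via environment variables (the container, image, and variable names here are just illustrative):

containers:
  - name: my-app
    image: my-app:latest
    env:
      - name: PARITY_RPC_URL
        value: "http://parity-service.default.svc.cluster.local:8545"
      - name: PARITY_WS_URL
        value: "ws://parity-service.default.svc.cluster.local:8546"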

Then the idea would be to only expose your custom container via https and do some user authentication (if needed).

@onpaws
Author

onpaws commented Jul 28, 2018

Yes, that's right @ddorgan. What you're describing is prescient and precisely what we're aiming to build.

If my app doesn't need to use 30303, is there any harm in removing it from the Service manifest?
I'm a bit fuzzy on whether that port is actually needed for e.g. blockchain syncing or some other critical functionality.

@ddorgan
Collaborator

ddorgan commented Jul 28, 2018

There's no harm in removing 30303 if you are syncing the ethereum main net.

Just point your custom app at the name [service].[namespace].svc.cluster.local on port 8545 (http jsonrpc) or 8546 (websocket rpc).

@onpaws
Author

onpaws commented Jul 31, 2018

Thanks @ddorgan!

There's no harm in removing 30303 if you are syncing the ethereum main net.

Although I'm using Kovan rather than mainnet, as far as I can see the advice should still apply.
I tried to verify this today by dropping 30303 from the manifest, kubectl apply-ing, and restarting the pod just in case.
And I do still see the blockchain messages in the pod logs.

2018-07-31 13:32:47 UTC Syncing #8205720 0x2cff…b421     2.01 blk/s    6.6 tx/s    2 Mgas/s    444+    0 Qed  #8206165    5/25 peers    176 KiB chain  262 MiB db    5 MiB queue  313 KiB sync  RPC:  0 conn,    2 req/s,  156 µs
2018-07-31 13:32:58 UTC Syncing #8205737 0x2486…d6ea     1.70 blk/s    5.5 tx/s    2 Mgas/s    429+    0 Qed  #8206167    5/25 peers    178 KiB chain  262 MiB db    5 MiB queue  313 KiB sync  RPC:  0 conn,    0 req/s,  154 µs

So I think we're in a good place here.
Thanks again @ddorgan!

@ddorgan
Collaborator

ddorgan commented Jul 31, 2018

@onpaws great to hear. Are you ok with this issue being closed then?

@onpaws
Author

onpaws commented Jul 31, 2018

I did have another question that pertains to Parity on Kubernetes -- if this should be posted elsewhere, all good, happy to move/close this as appropriate.

Now that I have my app running inside a cluster thanks to you, @ddorgan, I'm interested in going a bit deeper into the Parity-on-Kubernetes world. I realize it's a big topic and I'm still pretty new to Parity -- but here's my goal: what does it take to scale a Parity Deployment?

Scenario
Let's say my app grows and I wake up today with 1000 simultaneous users. Let's say my app uses Parity's JSON-RPC to handle certain long-running transactions, stuff that takes 60-80s+.
I'm imagining that Parity may eventually cap multiple long-running JSON-RPC transactions/commands in some kind of queue, and that the queue's length may be governed by rules out of my control. I'd like to set up Kubernetes to be prepared for that 1001st user, or whatever marginal new user would put the Parity instance at risk.

If Parity already happens to expose endpoint(s) I could use for Horizontal Autoscale metrics that would seem helpful, for example. [1]
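
To make that concrete, here's roughly the shape of thing I'm imagining -- a minimal sketch with CPU as a stand-in metric, since I don't know what Parity itself exposes (all names are illustrative, and it assumes Parity runs as a Deployment named "parity"):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: parity-hpa
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: parity
  minReplicas: 1
  maxReplicas: 5
  targetCPUUtilizationPercentage: 80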

Is a LoadBalancer an appropriate thing to put in front of these Parity instances? Pretty sure we'd have to enable WebSocket support for port 8546.

If you happen to know any resources/wikis/blog posts/docs on such an arrangement I'd be much obliged! I know it's probably a bigger scope than a single GH issue - just feeling motivated to experiment with this, and Kubernetes seems to have the perfect abstractions.

[1] I also saw ReadinessProbes but not sure that's quite the right abstraction.

@ddorgan
Collaborator

ddorgan commented Jul 31, 2018

@onpaws yes, could you open another issue for this? It's much more useful when people are searching for answers later, so they don't get confused by multiple topics in one ticket.

@ddorgan ddorgan closed this as completed Jul 31, 2018