[CE-296] Documentation typos
Various documentation typo fixes.

Change-Id: I02129335f209a293d487a07444d35b3859d50862
Signed-off-by: Chris Lim <yopep@yahoo.com>
chrislimtyler committed Mar 12, 2018
1 parent 7799713 commit 810dfbc
Showing 10 changed files with 17 additions and 17 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -7,7 +7,7 @@ Hyperledger Cello is a blockchain provision and operation system, which helps ma
## Introduction
Using Cello, everyone can easily:

-* Build up a Blockchain as a Service (BaaS) platform quickly from the scratch.
+* Build up a Blockchain as a Service (BaaS) platform quickly from scratch.
* Provision customizable Blockchains instantly, e.g., a Hyperledger fabric network v1.0.
* Maintain a pool of running blockchain networks on top of baremetals, Virtual Clouds (e.g., virtual machines, vsphere Clouds), Container clusters (e.g., Docker, Swarm, Kubernetes).
* Check the system status, adjust the chain numbers, scale resources... through dashboards.
4 changes: 2 additions & 2 deletions docs/index.md
@@ -15,8 +15,8 @@ Hyperledger Cello provides the following features:

Using Cello, application developers can:

-* Build up a Blockchain as a Service (BaaS) platform quickly from the scratch.
-* Provision customizable Blockchains instantly, e.g., a Hyperledger fabric v1.0.x network.
+* Build up a Blockchain as a Service (BaaS) platform quickly from scratch.
+* Provision customizable Blockchains instantly, e.g., a Hyperledger fabric network v1.0.x.
* Maintain a pool of running blockchain networks on top of bare-metals, virtual clouds (e.g., virtual machines, vsphere Clouds), container clusters (e.g., Docker, Swarm, Kubernetes).
* Check the system status, adjust the chain numbers, scale resources... through dashboards.

4 changes: 2 additions & 2 deletions docs/setup_worker_ansible.md
@@ -1,14 +1,14 @@
# Cello Ansible Worker Node

-Cello supports to deploy hybperledger fabric onto multiple physical or virtual servers using [ansible](https://ansible.com), and achieve:
+Cello supports to deploy hyperledger fabric onto multiple physical or virtual servers using [ansible](https://ansible.com), and achieve:

- Provision virtual servers to participate in fabric network
- Install necessary hyperledger dependent libraries and packages
- Setup kubernetes 1.7.0 or overlay network so that containers can communicate cross multiple docker hosts
- Install registrator and dns services so that containers can be referenced by name
- Build hyperledger fabric artifacts (optional)
- Run hyperledger fabric tests (optional)
-- Generate fabric network certificats, genesis block, transaction blocks
+- Generate fabric network certificates, genesis block, transaction blocks
- Push new or tagged fabric images onto all docker hosts
- Deploy fabric network
- Join peers to channels, instantiate chaincode
4 changes: 2 additions & 2 deletions docs/setup_worker_ansible_allinone.md
@@ -14,7 +14,7 @@ Please follow the below steps to stand up an all-in-one fabric system
## Install dependencies and clone cello

Use a clean Ubuntu system, login as a user who can do `sudo su` without
-prompting password, and run the following comamnds to install necessary
+prompting password, and run the following commands to install necessary
dependencies, grant current user docker permissions and clone the cello
project into the current user home directory::

@@ -66,7 +66,7 @@ Create file ~/cello/src/agent/ansible/run/runhosts.tpl with the following conten
$ip

Change your working directory to ~/cello/src/agent/ansible and run the
-followng commands to create runhosts file for your environment.
+following commands to create runhosts file for your environment.

ipaddr=$(ip -4 addr show | awk -F '/' '/inet / {print $1}' | grep -v '127.0.0.1' | awk -F ' ' '{print $2;exit}')
sed "s/\$ip/$ipaddr/g" run/runhosts.tpl > run/runhosts
2 changes: 1 addition & 1 deletion docs/setup_worker_docker.md
@@ -55,7 +55,7 @@ $ sudo systemctl stop docker.service
$ sudo dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --api-cors-header='*' --default-ulimit=nofile=8192:16384 --default-ulimit=nproc=8192:16384 -D &
```

-At last, run the follow test at Master node and get OK response, to make sure it can access Worker node successfully.
+At last, run the follow test at Master node and get OK response, to make sure it can access the Worker node successfully.

```sh
[Master] $ docker -H Worker_Node_IP:2375 info
2 changes: 1 addition & 1 deletion docs/terminology.md
@@ -7,7 +7,7 @@ The Cello system is suggested to be deployed on multiple servers, at least 1 Mas
* `Master` Node: Running Cello services, which will manage the worker nodes.
* `Worker` Node: The servers to have blockchains running inside. The worker nodes will be managed by the master node.
* `Host`: Host is a resource pool managed by a unique control point, which consists of several compute nodes. Typically it can be a naive Docker host, a Swarm cluster or other bare-metal/virtual/container clusters.
-* `Chain` (`Cluster`): A blockchain network including numbers of peer nodes. E.g., a Hyperledger Fabric network, a SawthoothLake or Iroha chain.
+* `Chain` (`Cluster`): A blockchain network including numbers of peer nodes. E.g., a Hyperledger Fabric network, a Sawthooth Lake or Iroha chain.


## Master
6 changes: 3 additions & 3 deletions docs/worker_ansible_howto.md
@@ -29,7 +29,7 @@ Ansible agent:
# <a name="setup-ansible-controller"></a>Set up the Ansible controller

You need an Ansible controller to run Ansible playbooks. An Ansible controller
-can be any machine (Virtualbox VM, your laptop, AWS instance, etc) that has
+can be any machine (VirtualBox VM, your laptop, AWS instance, etc) that has
Ansible 2.3.0.0 or greater installed on it.

1. [Install Ansible](#install-ansible)
@@ -219,7 +219,7 @@ previous step:
The parameter `env` is same as in previous step. The parameter `env_type`
indicates what communication environment you would like to setup. The possible
values for this parameter are `flanneld` and `k8s`. Value `flanneld` is used to
-setup a docker swarm like environment. Value `k8s` is to set up a Kuberenetes
+setup a docker swarm like environment. Value `k8s` is to set up a Kubernetes
environment.

To remove everything this step created, run the following command:
@@ -467,7 +467,7 @@ reject the ssh connection from Ansible controller.
## <a name="ccac"></a>Convenient configurations and commands

At the root directory of the Ansible agent there are set of preconfigured
-playbooks, they were developed as convienent sample playbooks for you if you
+playbooks, they were developed as a convenient sample playbooks for you if you
mainly work with a particular cloud. Here's a list of these playbooks.

```
4 changes: 2 additions & 2 deletions src/agent/ansible/roles/cloud_vb/README.md
@@ -9,7 +9,7 @@ can be used to clone new machines. The image should have both python 2.x and doc
already installed. The configuration file is in vars/vb.yml. You should create your
own configuration file by copy and change that file according to your own environment.

-To use this role to provision VirtualBxo vms once you create a configuration file like
+To use this role to provision VirtualBox vms once you create a configuration file like
vars/vb.yml, and named it myvb.yml, you can run the following command in the root
directory of the project, not in the directory where this file is located::

@@ -27,7 +27,7 @@ Few words about the base image::
The base image should have the ssh user be a sudoer, for example if your ssh user
is called ubuntu, do the following::

echo "ubuntu ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/ununtu
echo "ubuntu ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/ubuntu

You may also want to disable the daily update which by default is enabled. When
you start a VB instance using the base image, if it starts auto update, your ansible
2 changes: 1 addition & 1 deletion src/agent/ansible/roles/deploy_compose/README.md
@@ -4,5 +4,5 @@ run this command::

ansible-playbook -i run/runhosts -e "mode=apply env=vb1st" setupfabric.yml

-Make sure that these host machines already up running, run the above
+Make sure that these host machines are already up running, run the above
command to setup fabric network defined in vars/vb1st.yml file
4 changes: 2 additions & 2 deletions src/agent/k8s/README.md
@@ -6,7 +6,7 @@

##This part of Cello deploys **Fabric over Kubernetes**
---
-Note: This guide assumes that you already have running Kubernetes Cluster with a master and n-minions.
+Note: This guide assumes that you already have running Kubernetes Cluster with a master and n-nodes.

###--Steps to Deploy--

@@ -25,7 +25,7 @@ Note: This guide assumes that you already have running Kubernetes Cluster with a
```$ bash prepare-files.sh```

4. Now, Copy "driving-files" directory to all the nodes, i.e.
-Master along with all the Minions.
+Master along with all the nodes.

5. On Master, run "run.sh"
```$ bash run.sh```
