Akka.NET v1.3.3 Production Release #3281

Merged
merged 94 commits into master on Jan 19, 2018
Conversation

Aaronontheweb
Member

1.3.3 January 19 2018

Maintenance Release for Akka.NET 1.3

The largest changes featured in Akka.NET v1.3.3 are the introduction of split brain resolvers and WeaklyUp members in Akka.Cluster.

Akka.Cluster Split Brain Resolvers
Split brain resolvers are specialized IDowningProvider implementations that give Akka.Cluster users the ability to automatically down Unreachable cluster nodes in accordance with well-defined partition resolution strategies, namely:

  • Static quorum;
  • Keep majority;
  • Keep oldest; and
  • Keep referee.

You can learn more about why you may want to use these strategies and which one is right for your scenario by reading our split brain resolver documentation.
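
As a minimal sketch, enabling a split brain resolver amounts to swapping in the split brain resolver downing provider and picking an active strategy in HOCON. The provider class name and the stable-after value below follow the split brain resolver documentation but should be treated as an illustration, not copied blindly:

akka.cluster {
  downing-provider-class = "Akka.Cluster.SplitBrainResolver, Akka.Cluster"
  split-brain-resolver {
    # which partition resolution strategy to run
    # (static-quorum, keep-majority, keep-oldest, keep-referee)
    active-strategy = keep-majority
    # assumed example value: how long a member must remain unreachable
    # before the strategy acts on it
    stable-after = 20s
  }
}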

Akka.Cluster WeaklyUp Members
One common problem in Akka.Cluster is that once a current member of the cluster becomes Unreachable, the cluster leader can't allow any new members to join until that Unreachable member becomes Reachable again or is removed from the cluster via a Cluster.Down command.

Beginning in Akka.NET 1.3.3, you can allow nodes to join and participate in the cluster even while other member nodes are unreachable by opting into the WeaklyUp status for members. You can do this by setting the following in your HOCON configuration:

akka.cluster.allow-weakly-up-members = on

This will allow nodes that joined the cluster while at least one other member was unreachable to become functioning cluster members with a status of WeaklyUp. If the unreachable members of the cluster are downed or become reachable again, all WeaklyUp nodes will be upgraded to the usual Up status for available cluster members.
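
To observe these transitions, here is a minimal C# sketch of an actor that subscribes to cluster membership events. It assumes a ClusterEvent.MemberWeaklyUp event is published for members admitted in the WeaklyUp state, mirroring how the other member events are surfaced:

using System;
using Akka.Actor;
using Akka.Cluster;

public class MembershipListener : ReceiveActor
{
    private readonly Cluster _cluster;

    public MembershipListener()
    {
        _cluster = Cluster.Get(Context.System);

        // Fired for members admitted while part of the cluster was unreachable
        Receive<ClusterEvent.MemberWeaklyUp>(m =>
            Console.WriteLine($"{m.Member.Address} joined as WeaklyUp"));

        // WeaklyUp members are promoted to Up once unreachable members
        // are downed or become reachable again
        Receive<ClusterEvent.MemberUp>(m =>
            Console.WriteLine($"{m.Member.Address} is now Up"));
    }

    protected override void PreStart() =>
        _cluster.Subscribe(Self, ClusterEvent.InitialStateAsEvents,
            typeof(ClusterEvent.IMemberEvent));

    protected override void PostStop() => _cluster.Unsubscribe(Self);
}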

Akka.Cluster.Sharding and Akka.Cluster.DistributedData Integration
A new experimental feature we've added in Akka.NET v1.3.3 is the ability to fully decouple Akka.Cluster.Sharding from Akka.Persistence and instead run it on top of Akka.Cluster.DistributedData, our library for creating eventually consistent replicated data structures on top of Akka.Cluster.

Beginning in Akka.NET 1.3.3, you can set the following HOCON configuration option to have the ShardingCoordinator replicate its shard placement state using DData instead of persisting it to storage via Akka.Persistence:

akka.cluster.sharding.state-store-mode = ddata

This setting only affects how Akka.Cluster.Sharding's internal state is managed. If you're using Akka.Persistence with your own entity actors inside Akka.Cluster.Sharding, this change will have no impact on them.
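
To make that scope concrete, here is a hedged C# sketch of starting a shard region. The entity actor, envelope shape, and 10-shard scheme are hypothetical; the point is that none of this entity-level code changes when the coordinator's state-store-mode switches from persistence to ddata:

using System;
using Akka.Actor;
using Akka.Cluster.Sharding;

// Hypothetical entity actor hosted by the shard region
public class CustomerActor : ReceiveActor
{
    public CustomerActor()
    {
        ReceiveAny(msg => Console.WriteLine($"{Self.Path.Name} received {msg}"));
    }
}

public static class ShardingExample
{
    public static void Run(ActorSystem system)
    {
        // Only the coordinator's internal bookkeeping moves to DData under
        // state-store-mode = ddata; starting entities works exactly the same.
        var region = ClusterSharding.Get(system).Start(
            "customer",                            // shard region type name
            Props.Create<CustomerActor>(),         // entity props
            ClusterShardingSettings.Create(system),
            // hypothetical envelope: Tuple of (entityId, payload)
            message => message is Tuple<string, object> env
                ? Tuple.Create(env.Item1, env.Item2)
                : null,
            message => message is Tuple<string, object> env
                ? (Math.Abs(env.Item1.GetHashCode()) % 10).ToString() // assumed 10 shards
                : null);

        region.Tell(Tuple.Create("customer-42", (object)"hello"));
    }
}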

Updates and bugfixes:

You can see the full changeset for Akka.NET 1.3.3 here.

alexvaluyskiy and others added 30 commits October 11, 2017 20:18
* close #3173 - fixed issue with XUnit project packing

* restored nuget command
* ported Chat sample to MSBUILD 15

* added back copyright header for ChatClient

* set chat sample to run on .NET Core 1.1
* converted Akka.Cluster samples to MSBuild15

* rebased on ChatClient merge and fixed SLN file merge conflicts
Updated to include .NET Foundation, modified to follow Apache guidance which links to license terms rather than listing inline (https://www.apache.org/licenses/LICENSE-2.0).
* Split Brain Resolver: initial commit

* initial implementation of split brain resolver

* composite approach to split brain strategy

* split brain strategy specs

* configuration spec

* MNTK spec for KeepMajority

* fixed problem with MNTK false negatives

* split brain resolver docs
Having maxSimultaneousRebalance > rebalanceThreshold in LeastShardAllocationStrategy caused shard "flapping": deallocation of excess shards followed by their immediate reallocation on the same node.
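
For reference, a sketch of the related HOCON knobs, with values chosen only for illustration; the safe relationship implied by the fix above is max-simultaneous-rebalance <= rebalance-threshold:

akka.cluster.sharding.least-shard-allocation-strategy {
  # rebalance when a node holds this many more shards than the least-loaded node
  rebalance-threshold = 10
  # keep this at or below the threshold to avoid flapping
  max-simultaneous-rebalance = 3
}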
Horusiath and others added 29 commits January 4, 2018 14:06
* DDataShard + reorganized shard coordinators

* DDataShardCoordinator

* joined ddata cluster sharding

* refactor of shard logic to trait-based

* HOCON config for cluster-sharding ddata

* working cluster sharding for both DData and Persistence

* cluster-sharding DData replicator per role

* fixed mntk tests

* fixed GetState tests

* replicated cluster-sharding mntk specs between persistent and ddata variants

* fixed some of the mntk specs

* fixed replicator for non-role settings

* removed remember-entities from cluster sharding DData specs
Ask deadlock - proof of concept
- Address serilog colored console requirement.
- Remove requirement for "SerilogLogMessageFormatter" as code examples
appear to run without specifying the formatter.
- Add an example of how an output template is configured for the
extensions demo.
- Remove use of serilog application configuration in code from the hocon
example, but mention it as a feature of serilog.
* Cluster.JoinAsync / Cluster.JoinSeedNodesAsync

* Cluster.JoinAsync: updated API approvals

* added Cluster.JoinAsync exceptions

* removed API approvals received.txt

* added test cases for failure scenarios

* added double checks for success/failure scenarios
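
The Cluster.JoinAsync / Cluster.JoinSeedNodesAsync commits above add awaitable variants of the join operations; a minimal sketch of how the API might be used (the system name and address are made up):

using System;
using System.Threading.Tasks;
using Akka.Actor;
using Akka.Cluster;

public static class JoinExample
{
    public static async Task RunAsync(ActorSystem system)
    {
        var cluster = Cluster.Get(system);

        // Completes once this node has joined the cluster, rather than
        // returning immediately the way Cluster.Join does
        await cluster.JoinAsync(Address.Parse("akka.tcp://my-cluster@127.0.0.1:8081"));

        Console.WriteLine($"Joined cluster as {cluster.SelfAddress}");
    }
}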
* Fix exception that can occur when remaining and unreachable variables both have zero elements
Improve type safety for custom graph stages
If the remaining and unreachable lists are both empty, an exception is thrown as soon as .First() executes to find the oldest node.
Aaronontheweb merged commit c464237 into master on Jan 19, 2018