Using a singleton should not be the first design choice, but for some use cases it is convenient and sometimes also mandatory to ensure that you have exactly one actor of a certain type running somewhere in the cluster.
Therefore, this specific use case is made easily accessible by the Cluster Singleton Pattern in the contrib module. It manages one singleton actor instance among all cluster nodes, or among a group of nodes tagged with a specific role. The singleton actor is always running on the oldest member. The cluster failure detector will notice when the oldest node becomes unreachable, due to things like a JVM crash, hard shutdown, or network failure. Then a new oldest node will take over and a new singleton actor is created.
Read more in the documentation.
The cluster knows the order members were added to the cluster.
akka.cluster.Member has a method isOlderThan to compare two members by their age. This is useful when you need a stable order of the nodes.
For example, this is how you can keep track of the members, sort them by age, and send each message to the first 3 nodes:
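The snippet itself is not preserved in this excerpt. As a rough, self-contained sketch of the idea (plain Scala, with a simplified stand-in for akka.cluster.Member whose upNumber grows as members join), sorting by age and picking the three oldest could look like:

```scala
// Simplified stand-in for akka.cluster.Member: a lower upNumber means the
// member joined earlier and is therefore "older".
case class Member(address: String, upNumber: Int)

// Mirrors the idea behind Member.isOlderThan: a stable ordering by age.
val ageOrdering: Ordering[Member] = Ordering.by((m: Member) => m.upNumber)

// Keep the members sorted by age and use the three oldest as message targets.
def oldestThree(members: Set[Member]): List[Member] =
  members.toList.sorted(ageOrdering).take(3)
```

With the real API you would subscribe to cluster membership events and maintain an immutable sorted set of members instead of sorting on every message.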
Not all nodes of a cluster need to perform the same function: there might be one sub-set which runs the web front-end, one which runs the data access layer and one for the number-crunching. Deployment of actors—for example by cluster-aware routers—can take node roles into account to achieve this distribution of responsibilities.
The roles of a node are defined in the configuration property named akka.cluster.roles, which is typically set in the start script as a system property.
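For example, a front-end node could declare its role in application.conf like this (the role name is only a placeholder):

```hocon
akka {
  cluster {
    # Tag this node with the "frontend" role.
    roles = ["frontend"]
  }
}
```

The same value can equally be supplied from the start script as a system property instead of being baked into the configuration file.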
The most important change is that an actor system can only join a cluster once. Additional attempts will be ignored. When it has successfully joined it must be restarted to be able to join another cluster or to join the same cluster again. It can use the same host name and port after the restart, but it must have been removed from the cluster before the join request is accepted. Unsuccessful join attempts are automatically retried.
These strict rules mean that you can safely restart a process without any confusion between the state of the old and new member with the same host name and port.
Startup of seed nodes has also been improved. Seed nodes and other nodes can be started in any order. The node configured as the first element in the seed-nodes configuration list must be started when initially starting a cluster, but thereafter it can be stopped and started again just like any other node. All nodes can have the same configuration and start script.
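As an illustration (system name, hosts, and ports are placeholders), a seed-nodes list could look like this; only the first element has the special role of having to be up when the cluster is formed for the very first time:

```hocon
akka {
  cluster {
    seed-nodes = [
      "akka.tcp://ClusterSystem@host1:2552",
      "akka.tcp://ClusterSystem@host2:2552"
    ]
  }
}
```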
Your feedback pushed us to an improved solution for how nodes can join the cluster. Thanks for reporting!
We—the Akka committers—proudly announce the FIRST RELEASE CANDIDATE for Akka 2.2.0 “Coltrane”. Half a year has passed since the release of Akka 2.1.0 “Mingus” and much has happened in our code base. User-visible API changes have been kept to a minimum, though, as most work happened under the hood; more on that later. First let us take a look at what Coltrane brings you:
Cluster support (with a big thank-you for the feedback throughout the development cycle); we have continued to improve on the preview offered with Mingus, e.g. by adding node roles, distributed pub-sub, and the cluster client.
An experimental preview of Typed Channels.
This is just a very high-level overview of the biggest pieces; there have also been improvements in other areas, such as the test coverage of our OSGi bundles, how actor failures are logged, the semantics of ActorContext.unwatch(), a more performant rewrite of Agents, and how dispatchers and mailboxes can be configured outside of your code.
This is what we intend to ship as 2.2.0 final unless issues are found, so please test it thoroughly and report back. Failures are important to hear about, but praise also does not hurt :-)
We have continued on our path to unify the semantics of local and remote actor references. The most obvious difference was that remote references were bound to a name whereas local ones were bound to a specific actor lifecycle: if the local actor stops, the reference stops working, which was not the case for a remote reference when a new actor was created at the same path. We changed this so that the local rules apply to remote references as well, making the “self” references of actors created subsequently with the same name distinct from each other. As a consequence, ActorRef equality was changed to take into account the actor’s UID (which you can see in ActorRef.toString now).
The other most visible change concerns the creation of Props. Using anonymous inner classes as factories for your actors means that the required “$outer reference”—the reference to the enclosing scope—will have an influence on the serializability of your Props as well as on the thread-safety. Therefore we have rebased the inner workings of Props on the desired actor’s Class<?> and its constructor arguments. The benefit is that no closures are formed which would invite you e.g. to call methods on the enclosing actor, and also that serialization bindings are used to serialize the constructor arguments. Previously all Props were serialized only with Java serialization. In order to make full use of the potential of this approach we have deprecated some of the existing ways to obtain Props and introduced new ones.
Besides these changes there have been several smaller modifications, please read the migration guide to see if you may need to adapt your application while upgrading.
The artifacts have been published to Maven Central as usual, but this time for two different Scala versions:
using Scala 2.10.1
"com.typesafe.akka" % "akka-actor_2.10" % "2.2.0-RC1"
using Scala 2.11.0-M3
"com.typesafe.akka" % "akka-actor_2.11.0-M3" % "2.2.0-RC1"
1044 files changed, 85693 insertions, 35814 deletions, 23 committers
commits added removed
127 20030 9910 Patrik Nordwall
99 20073 8543 Roland
79 17228 9401 Endre Sándor Varga
69 7482 6618 Viktor Klang
49 4820 2677 Björn Antonsson
20 1610 422 Johannes Rudolph
18 1586 942 Mathias
10 135 175 Dario Rexin
9 1764 440 Rich Dougherty
5 311 139 RickLatrine
5 1463 306 Christophe Pache
3 1212 490 Raman Gupta
2 83 20 Kaspar Fischer (hbf)
2 12 12 Ricky Elrod
2 95 53 Kevin Wright
2 163 66 Raymond Roestenburg
2 48 29 Jonas Boner
1 10 3 Michael Pilquist
1 548 77 Helena Edelson
1 38 24 Matthew Neeley
1 8 10 Peter Vlugter
1 3 3 Thomas Lockney
1 36 35 Derek Mahar
Thanks for all the external contributions; 23 committers is quite an outstanding number for a toolkit like Akka.
In total we closed 508 tickets on these four milestones.
Akka is released under the Apache V2 license.
We—the Akka committers—are pleased to be able to announce the availability of Akka 2.1.4. This is the fourth maintenance release of the 2.1 branch, containing documentation improvements and fixing several issues including:
… and several smaller fixes
Release 2.1.3 was broken due to a bug in the release scripts and build process; please use this release instead. There were no code changes between 2.1.3 and 2.1.4.
This release is backwards binary compatible with versions 2.1.0, 2.1.1, and 2.1.2, which means that the new JARs are a drop-in replacement for the old ones (but not the other way around). Always make sure to use at least the latest version required by any of your project’s dependencies.
Due to the restriction imposed by binary compatibility—which is kept for the patch releases within a minor release such as 2.1.x—not all known issues can be fixed. The currently known issues in the 2.1.x series are:
When migrating an existing project from Akka 2.0.x please have a look at our migration guide:
When migrating from the Akka 1.3.x series please follow first the migration guide towards version 2.0.5:
The “akka-cluster” module is published under the name “akka-cluster-experimental” to emphasize that its status is not yet final. This designation is not due to a sub-par standard of the module; the cluster support has been tested thoroughly and it works as documented. The reason for the “experimental” tag is that this rather important module is now presented to the general public for the first time, and although we have received valuable feedback from early adopters we anticipate possible API changes in order to meet all of your requirements. Work is continuing on Akka’s cluster support, and we will formally declare it officially supported and stable with the next major release—Akka 2.2 “Coltrane”. Please help us make it the best possible solution by continuing to give feedback on the mailing list and telling us what can be improved.
The artifacts comprising this release have been published to https://oss.sonatype.org/content/repositories/releases/ and also to Maven Central. In addition, we adopted the SBT standard of encoding the Scala binary version in the artifact name, i.e. the core actor package’s artifactId is “akka-actor_2.10”.
commits added removed
7 131 49 Endre Sándor Varga
7 110 14 Roland
2 96 88 Björn Antonsson
2 30 8 Viktor Klang
1 15 14 Patrik Nordwall
Akka is released under the Apache V2 license.
Release 2.1.3 is broken due to a bug in the release script. Please use 2.1.4 instead.
The original announcement for 2.1.3 is available below.
From 2.2-M3 onwards you will probably notice several new deprecation warnings when trying out the Akka update on an existing project since all factories taking UntypedActorFactory, Creator<Actor> or Scala by-name arguments will produce warnings. We know that some of you will—at first sight—not like this, but please bear with us because we do this for a very good reason: we want to make your code safer!
The background for this change is that we have seen on many occasions code like the following:
… or in Scala:
It is an easy mistake to make and sometimes harmless, until you happen to call a method which is not thread-safe (as shown above) or you want to turn this into a remote deployment. In the first case you break actor encapsulation by implicitly passing “this” into the closure—deviously hidden by closure syntax—introducing race conditions and unleashing a whole host of concurrency demons. In the second case the deployment will fail mysteriously because the enclosing class—often an Actor—is not serializable.
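The original snippets are not reproduced in this excerpt, but the underlying capture problem can be shown without Akka at all. In this plain-Scala sketch (all names are made up), a function created inside another object silently holds the hidden $outer reference, so Java serialization fails when the enclosing object is not serializable, whereas a value built from a Class plus arguments serializes fine:

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

// Stand-in for an enclosing Actor: deliberately NOT Serializable.
class Enclosing {
  // The closure captures `this` (the hidden $outer reference),
  // because `toString` is really `this.toString`.
  val factory: () => String = () => toString
}

// Stand-in for class-plus-arguments Props: no reference to any enclosing scope.
case class ClassBasedProps(clazz: Class[_], args: List[String])

// Returns true if the object survives Java serialization.
def javaSerializable(obj: AnyRef): Boolean =
  try {
    new ObjectOutputStream(new ByteArrayOutputStream).writeObject(obj)
    true
  } catch {
    case _: NotSerializableException => false
  }
```

Serializing `new Enclosing().factory` drags the Enclosing instance along and fails, while `ClassBasedProps(classOf[String], List("arg"))` goes through without trouble.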
Akka’s mission is to make your life less frustrating, so we chose to replace those hidden failures which are hard to debug by early warnings from your dearest friend, the compiler.
So What Shall I Do?
The foremost use of the deprecated methods was to construct actors which take arguments. We have made this even simpler than before:
This syntax actually improves the LoC balance in the Java case considerably. For Scala it looks like this:
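The examples themselves are not preserved in this excerpt. Assuming the Props API as documented for 2.2, and with Worker and its constructor arguments invented for illustration, the class-plus-arguments style looks roughly like this:

```scala
import akka.actor.{Actor, Props}

class Worker(name: String, retries: Int) extends Actor {
  def receive = {
    case msg => sender ! s"$name handled $msg"
  }
}

// No closure and no $outer reference: Props carries the actor class and
// its constructor arguments, which go through the serialization bindings.
val props = Props(classOf[Worker], "worker-1", 3)
// The Java counterpart is Props.create(Worker.class, "worker-1", 3).
```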
And What About Dependency Injection?
Another use-case for UntypedActorFactory was to let another framework perform the actual actor creation, e.g. OSGi/CDI/Spring/you-name-it. For those cases allow me to refer to our documentation for Java and Scala.
What the Future May Bring
For Scala there are two more use cases to consider: the inline construction of ad-hoc actors and the cake pattern—its close relative. The former is dangerous when performed inside another actor for the reasons outlined above, while the latter is usually safe. Therefore we plan on introducing macro-based alternatives which allow your favorite syntax to be used while not suffering from the pitfalls; stay tuned but please do not literally hold your breath, it may take several weeks until more pressing matters are resolved. Until then we are truly sorry for littering your builds with some false warnings.
We—the Akka committers—are pleased to announce the THIRD PRE-RELEASE MILESTONE of Akka 2.2 “Coltrane”. As discussed earlier this is to be the final and feature-complete milestone, so please take a good look and give feedback. We will gladly fix all rough edges you report while we thoroughly test, benchmark, and if necessary improve the current state of the toolkit. We do this so that we can then confidently proclaim the first release candidate in a few weeks.
Reliable delivery of system messages, meaning that remote DeathWatch and remote deployment now work properly even if the network fails (DeathWatch worked already for the cluster case)
Failure and DeathWatch communication is reliable (technically the corresponding messages became system messages to profit from reliable delivery); failure signaling semantics changed such that the supervisor strategy may be invoked earlier than in previous versions (the Failed message “jumps the queue”)
The generation of the Terminated message has been made more intuitive: you will no longer receive it after having unwatched the actor in question; also DeathPactException now leads to Stop instead of Restart in the defaultStrategy
Mailboxes can now be configured separately from dispatchers, either from Props or from the deployment configuration section; this means, for example, that it is easier to use the stashing mailbox
Cluster nodes cannot rejoin a cluster without an actor system restart; UIDs have been introduced to prevent communication with systems which were removed from the cluster
ActorRefs now refer to a specific actor incarnation at a path; if you create a new actor at the same path then old ActorRefs will not send to it (this used to be the case only locally, hence we removed the difference in semantics observed with remoting); this change means that actorFor() needed to be deprecated in favor of actorSelection(), which has been brought up to speed with respect to remote look-ups
The IO layer learned SSL and WriteFile for TCP, names have been cleaned up for UDP, spray-io’s pipelines have been incorporated, complete samples including ACK- and NACK-based back-pressure have been added and more documentation updates will be coming
Props have been restructured to not rely on closures internally but only on Class[_] and arrays of (serializable) arguments; this has been done to avoid closing over actor internal state or other non-serializable data; consequently the closure-taking Props(new Actor …) and the UntypedActorFactory have been deprecated (in the Java case this usually even saves a few lines of code on your end); there will be a more detailed blog post on this soon
Lots of little fixes and hardening, especially of the cluster code
Cluster-aware distributed Pub-Sub module and external cluster client
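To make the mailbox item above concrete, here is a sketch of a deployment-side mailbox configuration, with made-up actor and mailbox names (the exact keys are in the 2.2 mailbox documentation):

```hocon
akka.actor.deployment {
  # Path of the actor that should get the special mailbox.
  /my-stashing-actor {
    mailbox = my-deque-mailbox
  }
}

# Mailbox definition referenced above; a deque-based mailbox supports Stash.
my-deque-mailbox {
  mailbox-type = "akka.dispatch.UnboundedDequeBasedMailbox"
}
```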
This is a lot, and it should probably justify the time it took since the last milestone, which was released exactly one month ago.
Besides testing and benchmarking there are some things we need to clean up internally before calling it final, and we also want to comb through especially the JavaDoc, since that is just a machine translation of the ScalaDoc and does not look as complete everywhere as it could.
When migrating an existing project from Akka 2.1.2 please have a look at our migration guide: http://doc.akka.io/docs/akka/2.2-M3/project/migration-guide-2.1.x-2.2.x.html
v2.2-M2 compared to Akka v2.2-M3:
* 84 tickets closed, see the assembla milestone
* 553 files changed, 32738 insertions(+), 15684 deletions(-)
* 637 pages of docs vs 489 in v2.2-M2
* … and a total of 10 committers!
Special thanks go to Christophe and Raman for their contribution of an OSGi sample application, and to Mathias and Johannes of spray.io for their continued work on the IO layer.
Akka is released under the Apache V2 license.
commits added removed
20 6439 2396 Patrik Nordwall
13 4035 2077 Viktor Klang
10 7806 2061 Roland
8 1781 341 Björn Antonsson
8 5586 3101 Endre Sándor Varga
8 265 93 Johannes Rudolph
4 40 22 Mathias
3 1212 490 Raman Gupta
3 1336 240 Christophe Pache
1 129 23 Rich Dougherty
Take it for a spin! Happy hAkking!
commits added removed
19 2119 1348 Patrik Nordwall
10 528 371 Björn Antonsson
6 67 41 Viktor Klang
5 289 296 Roland
4 106 81 Endre Sándor Varga
2 527 128 Rich Dougherty
2 95 53 Kevin Wright
1 21 7 Johannes Rudolph
1 31 19 Mathias
Take it for a spin! Happy hAkking!