
Contents

  • Introduction

    • What is Akka?

    • Why Akka?

    • Getting Started

    • The Obligatory Hello World

    • Use-case and Deployment Scenarios

    • Examples of use-cases for Akka

  • General

    • Terminology, Concepts

    • Actor Systems

    • What is an Actor?

    • Supervision and Monitoring

    • Actor References, Paths and Addresses

    • Location Transparency

    • Akka and the Java Memory Model

    • Message Delivery Reliability

    • Configuration

  • Actors

    • Actors

    • Typed Actors

    • Fault Tolerance

    • Dispatchers

    • Mailboxes

    • Routing

    • Building Finite State Machine Actors

    • Persistence

    • Persistence - Schema Evolution

    • Persistence Query

    • Persistence Query for LevelDB

    • Testing Actor Systems

  • Actors (Java with Lambda Support)

    • Actors (Java with Lambda Support)

    • Fault Tolerance (Java with Lambda Support)

    • FSM (Java with Lambda Support)

    • Persistence (Java with Lambda Support)

  • Futures and Agents

    • Futures

    • Agents

  • Networking

    • Cluster Specification

    • Cluster Usage

    • Cluster Singleton

    • Distributed Publish Subscribe in Cluster

    • Cluster Client

    • Cluster Sharding

    • Cluster Metrics Extension

    • Distributed Data

    • Remoting

    • Serialization

    • I/O

    • Using TCP

    • Using UDP

    • Camel

  • Utilities

    • Event Bus

    • Logging

    • Scheduler

    • Duration

    • Circuit Breaker

    • Akka Extensions

  • Streams

    • Introduction

    • Quick Start Guide

    • Reactive Tweets

    • Design Principles behind Akka Streams

    • Basics and working with Flows

    • Working with Graphs

    • Modularity, Composition and Hierarchy

    • Buffers and working with rate

    • Dynamic stream handling

    • Custom stream processing

    • Integration

    • Error Handling

    • Working with streaming IO

    • Pipelining and Parallelism

    • Testing streams

    • Overview of built-in stages and their semantics

    • Streams Cookbook

    • Configuration

    • Migration Guide 1.0 to 2.x

    • Migration Guide 2.0.x to 2.4.x

  • Akka HTTP

    • HTTP Model

    • Low-Level Server-Side API

    • Server-Side WebSocket Support

    • High-level Server-Side API

    • Consuming HTTP-based Services (Client-Side)

    • Common Abstractions (Client- and Server-Side)

    • Implications of the streaming nature of Request/Response Entities

    • Configuration

    • Server-Side HTTPS Support

    • Migration Guide between experimental builds of Akka HTTP (2.4.x)

  • HowTo: Common Patterns

    • Scheduling Periodic Messages

    • Single-Use Actor Trees with High-Level Error Reporting

    • Template Pattern

  • Experimental Modules

    • Multi Node Testing

    • External Contributions

  • Information for Akka Developers

    • Building Akka

    • Multi JVM Testing

    • I/O Layer Design

    • Developer Guidelines

    • Documentation Guidelines

  • Project Information

    • Migration Guides

    • Issue Tracking

    • Licenses

    • Sponsors

    • Project

  • Additional Information

    • Binary Compatibility Rules

    • Frequently Asked Questions

    • Books

    • Videos

    • Other Language Bindings

    • Akka in OSGi


Akka Java Documentation
Release 2.4.10
Lightbend Inc
Sep 07, 2016

CHAPTER ONE

INTRODUCTION

1.1 What is Akka?
«resilient elastic distributed real-time transaction processing»

We believe that writing correct distributed, concurrent, fault-tolerant and scalable applications is too hard. Most of the time it is because we are using the wrong tools and the wrong level of abstraction. Akka is here to change that. Using the Actor Model we raise the abstraction level and provide a better platform to build scalable, resilient and responsive applications (see the Reactive Manifesto for more details). For fault-tolerance we adopt the "let it crash" model, which the telecom industry has used with great success to build applications that self-heal and systems that never stop. Actors also provide the abstraction for transparent distribution and the basis for truly scalable and fault-tolerant applications.

Akka is Open Source and available under the Apache License. Download from http://akka.io/downloads.

Please note that all code samples compile, so if you want direct access to the sources, have a look at the Akka Docs subproject on GitHub: for Java and Scala.

1.1.1 Akka implements a unique hybrid

Actors

Actors give you:

  • Simple and high-level abstractions for distribution, concurrency and parallelism
  • An asynchronous, non-blocking and highly performant message-driven programming model
  • Very lightweight event-driven processes (several million actors per GB of heap memory)

See the chapter for Scala or Java.

Fault Tolerance

  • Supervisor hierarchies with "let-it-crash" semantics
  • Actor systems can span multiple JVMs to provide truly fault-tolerant systems
  • Excellent for writing highly fault-tolerant systems that self-heal and never stop

See Fault Tolerance (Scala) and Fault Tolerance (Java).

Location Transparency

Everything in Akka is designed to work in a distributed environment: all interactions of actors use pure message passing and everything is asynchronous. For an overview of the cluster support see the Java and Scala documentation chapters.

Persistence

State changes experienced by an actor can optionally be persisted and replayed when the actor is started or restarted. This allows actors to recover their state, even after JVM crashes or when being migrated to another node. You can find more details in the respective chapter for Java or Scala.

1.1.2 Scala and Java APIs

Akka has both a Scala API and a Java API; this documentation describes the Java API.

1.1.3 Akka can be used in different ways

Akka is a toolkit, not a framework: you integrate it into your build like any other library without having to follow a particular source code layout. When expressing your systems as collaborating Actors you may feel pushed more towards proper encapsulation of internal state, and you may find that there is a natural separation between business logic and inter-component communication.

Akka applications are typically deployed as follows:

  • as a library: used as a regular JAR on the classpath or in a web app
  • packaged with sbt-native-packager
  • packaged and deployed using Lightbend ConductR

1.1.4 Commercial Support

Akka is available from Lightbend Inc. under a commercial license which includes development or production support; read more here.
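To make the message-driven model described in this chapter concrete, here is a minimal, self-contained sketch using the plain Java actor API of Akka 2.4. The class and message names (GreeterApp, Greeter, Greet) are illustrative only and do not come from the documentation itself:

```java
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.actor.UntypedActor;

public class GreeterApp {

  // a simple immutable message type (illustrative name)
  public static class Greet {
    public final String name;
    public Greet(String name) { this.name = name; }
  }

  // an actor reacting to Greet messages; its state stays inside the actor
  public static class Greeter extends UntypedActor {
    @Override
    public void onReceive(Object message) {
      if (message instanceof Greet) {
        System.out.println("Hello, " + ((Greet) message).name);
      } else {
        unhandled(message);
      }
    }
  }

  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("example");
    ActorRef greeter = system.actorOf(Props.create(Greeter.class), "greeter");

    // asynchronous, non-blocking send: tell returns immediately
    greeter.tell(new Greet("Akka"), ActorRef.noSender());

    // in a real application you would keep the ActorSystem around
    // and call system.terminate() when shutting down
  }
}
```

Nothing here blocks: the Greeter processes the message on a dispatcher thread while the caller continues.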
1.2 Why Akka?

1.2.1 What features can the Akka platform offer, over the competition?

Akka provides scalable real-time transaction processing.

Akka is a unified runtime and programming model for:

  • Scale up (Concurrency)
  • Scale out (Remoting)
  • Fault tolerance

One thing to learn and admin, with high cohesion and coherent semantics.

Akka is a very scalable piece of software, not only in the context of performance but also in the size of applications it is useful for. The core of Akka, akka-actor, is very small and easily dropped into an existing project where you need asynchronicity and lockless concurrency without hassle.

You can choose to include only the parts of Akka you need in your application. With CPUs growing more and more cores every cycle, Akka is the alternative that provides outstanding performance even if you're only running it on one machine. Akka also supplies a wide array of concurrency paradigms, allowing users to choose the right tool for the job.

1.2.2 What's a good use-case for Akka?

We see Akka being adopted by many large organizations in a big range of industries:

  • Investment and Merchant Banking
  • Retail
  • Social Media
  • Simulation
  • Gaming and Betting
  • Automobile and Traffic Systems
  • Health Care
  • Data Analytics

and much more. Any system with the need for high throughput and low latency is a good candidate for using Akka.

Actors let you manage service failures (Supervisors), load management (back-off strategies, timeouts and processing-isolation), as well as both horizontal and vertical scalability (add more cores and/or add more machines).

Here is what some of the Akka users have to say about how they are using Akka: http://stackoverflow.com/questions/4493001/good-use-case-for-akka

All this in the ApacheV2-licensed open source project.

1.3 Getting Started

1.3.1 Prerequisites

Akka requires that you have Java 8 or later installed on your machine.

Lightbend Inc. provides a commercial build of Akka and related projects such as Scala or Play as part of the Lightbend Reactive Platform, which is made available for older Java versions in case your project cannot upgrade to Java 8 just yet. It also includes additional commercial features or libraries.

1.3.2 Getting Started Guides and Template Projects

The best way to start learning Akka is to download Lightbend Activator and try out one of the Akka Template Projects.

1.3.3 Download

There are several ways to download Akka. You can download it as part of the Lightbend Platform (as described above). You can download the full distribution, which includes all modules. Or you can use a build tool like Maven or SBT to download dependencies from the Akka Maven repository.

1.3.4 Modules

Akka is very modular and consists of several JARs containing different features:

  • akka-actor – Classic Actors, Typed Actors, IO Actor etc.
  • akka-agent – Agents, integrated with Scala STM
  • akka-camel – Apache Camel integration
  • akka-cluster – Cluster membership management, elastic routers
  • akka-osgi – Utilities for using Akka in OSGi containers
  • akka-osgi-aries – Aries blueprint for provisioning actor systems
  • akka-remote – Remote Actors
  • akka-slf4j – SLF4J Logger (event bus listener)
  • akka-stream – Reactive stream processing
  • akka-testkit – Toolkit for testing Actor systems

In addition to these stable modules there are several which are on their way into the stable core but are still marked "experimental" at this point. This does not mean that they do not function as intended; it primarily means that their API has not yet solidified enough to be considered frozen.
You can help accelerate this process by giving feedback on these modules on our mailing list.

  • akka-contrib – an assortment of contributions which may or may not be moved into core modules, see External Contributions for more details

The filename of the actual JAR is for example akka-actor_2.11-2.4.10.jar (and analogously for the other modules).

How to see the JAR dependencies of each Akka module is described in the Dependencies section.

1.3.5 Using a release distribution

Download the release you need from http://akka.io/downloads and unzip it.

1.3.6 Using a snapshot version

The Akka nightly snapshots are published to http://repo.akka.io/snapshots/ and are versioned with both SNAPSHOT and timestamps. You can choose a timestamped version to work with and can decide when to update to a newer version.

Warning: The use of Akka SNAPSHOTs, nightlies and milestone releases is discouraged unless you know what you are doing.

1.3.7 Using a build tool

Akka can be used with build tools that support Maven repositories.

1.3.8 Maven repositories

For Akka version 2.1-M2 and onwards: Maven Central

For previous Akka versions: Akka Repo

1.3.9 Using Akka with Maven

The simplest way to get started with Akka and Maven is to check out the Lightbend Activator tutorial named Akka Main in Java.

Since Akka is published to Maven Central (for versions since 2.1-M2), it is enough to add the Akka dependencies to the POM. For example, here is the dependency for akka-actor:

```xml
<dependency>
  <groupId>com.typesafe.akka</groupId>
  <artifactId>akka-actor_2.11</artifactId>
  <version>2.4.10</version>
</dependency>
```

For snapshot versions, the snapshot repository needs to be added as well:

```xml
<repositories>
  <repository>
    <id>akka-snapshots</id>
    <snapshots>
      <enabled>true</enabled>
    </snapshots>
    <url>http://repo.akka.io/snapshots/</url>
  </repository>
</repositories>
```

Note: for snapshot versions both SNAPSHOT and timestamped versions are published.

1.3.10 Using Akka with SBT

The simplest way to get started with Akka and SBT is to use Lightbend Activator with one of the SBT templates.

Summary of the essential parts for using Akka with SBT:

SBT installation instructions on http://www.scala-sbt.org/release/tutorial/Setup.html

build.sbt file:

```
name := "My Project"

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies += "com.typesafe.akka" %% "akka-actor" % "2.4.10"
```

Note: the libraryDependencies setting above is specific to SBT v0.12.x and higher. If you are using an older version of SBT, the libraryDependencies should look like this:

```
libraryDependencies += "com.typesafe.akka" % "akka-actor_2.11" % "2.4.10"
```

For snapshot versions, the snapshot repository needs to be added as well:

```
resolvers += "Akka Snapshot Repository" at "http://repo.akka.io/snapshots/"
```

1.3.11 Using Akka with Gradle

Requires at least Gradle 1.4. Uses the Scala plugin.

```groovy
apply plugin: 'scala'

repositories {
  mavenCentral()
}

dependencies {
  compile 'org.scala-lang:scala-library:2.11.8'
}

tasks.withType(ScalaCompile) {
  scalaCompileOptions.useAnt = false
}

dependencies {
  compile group: 'com.typesafe.akka', name: 'akka-actor_2.11', version: '2.4.10'
  compile group: 'org.scala-lang', name: 'scala-library', version: '2.11.8'
}
```

For snapshot versions, the snapshot repository needs to be added as well:

```groovy
repositories {
  mavenCentral()
  maven {
    url "http://repo.akka.io/snapshots/"
  }
}
```

1.3.12 Using Akka with Eclipse

Set up an SBT project and then use sbteclipse to generate an Eclipse project.

1.3.13 Using Akka with IntelliJ IDEA

Set up an SBT project and then use sbt-idea to generate an IntelliJ IDEA project.

1.3.14 Using Akka with NetBeans

Set up an SBT project and then use nbsbt to generate a NetBeans project. You should also use nbscala for general Scala support in the IDE.
1.3.15 Do not use the -optimize Scala compiler flag

Warning: Akka has not been compiled or tested with the -optimize Scala compiler flag. Strange behavior has been reported by users that have tried it.

1.3.16 Build from sources

Akka uses Git and is hosted at GitHub.

  • Akka: clone the Akka repository from https://github.com/akka/akka

Continue reading the page on Building Akka.

13.1 Migration Guides

However, this feature was not used by many plugins, and expanding the API to accommodate all callbacks would have grown the API a lot. Instead, Akka Persistence 2.4.x introduces an additional (optionally overrideable) receivePluginInternal: Actor.Receive method in the plugin API, which can be used for handling those as well as any custom messages that are sent to the plugin Actor (imagine use cases like "wake up and continue reading" or custom protocols which your specialised journal can implement). Implementations using the previous feature should adjust their code as follows:

```scala
// previously
class MySnapshots extends SnapshotStore {
  // old API:
  // def saved(meta: SnapshotMetadata): Unit = doThings()

  // new API:
  def saveAsync(metadata: SnapshotMetadata, snapshot: Any): Future[Unit] = {
    // completion or failure of the returned future triggers internal messages
    // in receivePluginInternal
    val f: Future[Unit] = ???
    // custom messages can be piped to self in order to be received in receivePluginInternal
    f.map(MyCustomMessage(_)) pipeTo self
    f
  }

  def receivePluginInternal = {
    case SaveSnapshotSuccess(metadata) => doThings()
    case MyCustomMessage(data)         => doOtherThings()
  }
  // ...
}
```

Java Optional used in Java plugin APIs

In places where previously akka.japi.Option was used in Java APIs, including the return type of doLoadAsync, the Java-provided Optional type is used now. Please remember that when creating a java.util.Optional instance from a (possibly) null value you will want to use the non-throwing Optional.ofNullable method, which converts a null into an empty Optional; this is slightly different from its Scala counterpart (where Option.apply(null) returns None).
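As a small illustration of the Optional point above (this is plain JDK behavior, not an Akka API; the variable names are made up):

```java
import java.util.Optional;

public class OptionalExample {
  public static void main(String[] args) {
    String maybeNull = null;

    // ofNullable tolerates null and yields an empty Optional ...
    Optional<String> empty = Optional.ofNullable(maybeNull);
    System.out.println(empty.isPresent()); // false

    // ... whereas Optional.of(null) would throw a NullPointerException
    Optional<String> present = Optional.ofNullable("snapshot");
    System.out.println(present.get());     // snapshot
  }
}
```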
Atomic writes

asyncWriteMessages takes an immutable.Seq[AtomicWrite] parameter instead of immutable.Seq[PersistentRepr].

Each AtomicWrite message contains the single PersistentRepr that corresponds to the event that was passed to the persist method of the PersistentActor, or it contains several PersistentRepr that correspond to the events that were passed to the persistAll method of the PersistentActor. All PersistentRepr of the AtomicWrite must be written to the data store atomically, i.e. all or none must be stored. If the journal (data store) cannot support atomic writes of multiple events it should reject such writes with a Try Failure wrapping an UnsupportedOperationException describing the issue. This limitation should also be documented by the journal plugin.

Rejecting writes

asyncWriteMessages returns a Future[immutable.Seq[Try[Unit]]]. The journal can signal that it rejects individual messages (AtomicWrite) by the returned immutable.Seq[Try[Unit]]. The returned Seq must have as many elements as the input messages Seq. Each Try element signals whether the corresponding AtomicWrite is rejected or not, with an exception describing the problem. Rejecting a message means it was not stored, i.e. it must not be included in a later replay. Rejecting a message is typically done before attempting to store it, e.g. because of a serialization error. Read the API documentation of this method for more information about the semantics of rejections and failures.

asyncReplayMessages Java API

The signature of asyncReplayMessages in the Java API changed from akka.japi.Procedure to java.util.function.Consumer.

asyncDeleteMessagesTo

The permanent deletion flag was removed. Support for non-permanent deletions was removed. Events that were deleted with permanent=false with an older version will still not be replayed in this version.

References to "replay" in names

Previously a number of classes and methods used the word "replay" interchangeably with the word "recover". This led to slight inconsistencies in APIs, where a method would be called recovery, yet the signal for a completed recovery was named ReplayMessagesSuccess. This is now fixed, and all methods use the same "recovery" wording consistently across the entire API. The old ReplayMessagesSuccess is now called RecoverySuccess, and an additional method called onRecoveryFailure has been introduced.

AtLeastOnceDelivery deliver signature

The signature of deliver changed slightly in order to allow both ActorSelection and ActorPath to be used with it.

Previously:

```scala
def deliver(destination: ActorPath, deliveryIdToMessage: Long ⇒ Any): Unit
```

Now:

```scala
def deliver(destination: ActorSelection)(deliveryIdToMessage: Long ⇒ Any): Unit
def deliver(destination: ActorPath)(deliveryIdToMessage: Long ⇒ Any): Unit
```

The Java API remains unchanged and has simply gained the second overload, which allows an ActorSelection to be passed in directly (without converting to ActorPath).

Actor system shutdown

ActorSystem.shutdown, ActorSystem.awaitTermination and ActorSystem.isTerminated have been deprecated in favor of ActorSystem.terminate and ActorSystem.whenTerminated. Both return a Future[Terminated] value that will complete when the actor system has terminated.

To get the same behavior as ActorSystem.awaitTermination, block on and wait for the Future[Terminated] value with Await.result from the Scala standard library. To trigger a termination and wait for it to complete:

```scala
import scala.concurrent.duration._
Await.result(system.terminate(), 10.seconds)
```

Be careful not to do any operations on the Future[Terminated] using the system.dispatcher as ExecutionContext, as it will be shut down with the ActorSystem; instead use, for example, the Scala standard library context from scala.concurrent.ExecutionContext.global:

```scala
import scala.concurrent.ExecutionContext.Implicits.global
// not: import system.dispatcher (that context is shut down together with the system)
system.whenTerminated.foreach { _ =>
  println("Actor system was shut down")
}
```
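For Java users the same shutdown sequence can be expressed as in the sketch below; it mirrors the Scala example above using the scala.concurrent.Await API that ships with Akka 2.4 (the class name Shutdown is illustrative):

```java
import akka.actor.ActorSystem;
import scala.concurrent.Await;
import scala.concurrent.duration.Duration;

public class Shutdown {
  public static void main(String[] args) throws Exception {
    ActorSystem system = ActorSystem.create("example");

    // trigger termination and block until it has completed (here: at most 10 seconds)
    Await.result(system.terminate(), Duration.create(10, "seconds"));

    System.out.println("Actor system was shut down");
  }
}
```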
13.1.8 Upcoming Migration Guide 2.4.x to 2.5.x

Akka Persistence

Persistence Plugin Proxy

A new persistence plugin proxy was added that allows sharing of an otherwise non-sharable journal or snapshot store. The proxy is available by setting akka.persistence.journal.plugin or akka.persistence.snapshot-store.plugin to akka.persistence.journal.proxy or akka.persistence.snapshot-store.proxy, respectively. The proxy supplants the Shared LevelDB journal.

13.2 Issue Tracking

Akka is using GitHub Issues as its issue tracking system.

13.2.1 Browsing Tickets

Before filing a ticket, please check the existing Akka tickets for earlier reports of the same problem. You are very welcome to comment on existing tickets, especially if you have reproducible test cases that you can share.

Roadmaps

Short- and long-term plans are published in the akka/akka-meta repository.

13.2.2 Creating tickets

Please include the versions of Scala and Akka and relevant configuration files. You can create a new ticket if you have registered a GitHub user account.

Thanks a lot for reporting bugs and suggesting features!

13.2.3 Submitting Pull Requests

Note: A pull request is worth a thousand +1's. – Old Klangian Proverb

Pull Requests fixing issues or adding functionality are very welcome. Please read CONTRIBUTING.md for more information about contributing to Akka.

13.3 Licenses

13.3.1 Akka License

This software is licensed under the Apache license, quoted below.

```
Copyright 2009-2015 Lightbend Inc

Licensed under the Apache License, Version 2.0 (the "License"); you may not
use this file except in compliance with the License. You may obtain a copy of
the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations under
the License.
```

13.3.2 Akka Committer License Agreement

All committers have signed this CLA. It can be signed online.

13.3.3 Licenses for Dependency Libraries

Each dependency and its license can be seen in the project build file (the comment on the side of each dependency): AkkaBuild.scala

13.4 Sponsors

13.4.1 Lightbend

Lightbend is the company behind the Akka Project, the Scala Programming Language, the Play Web Framework, Scala IDE, sbt and many other open source projects. It also provides the Lightbend Stack, a full-featured development stack consisting of Akka, Play and Scala. Learn more at lightbend.com.

13.4.2 YourKit

YourKit is kindly supporting open source projects with its full-featured Java Profiler. YourKit, LLC is the creator of innovative and intelligent tools for profiling Java and .NET applications. Take a look at YourKit's leading software products: YourKit Java Profiler and YourKit .NET Profiler.

13.5 Project

13.5.1 Commercial Support

Commercial support is provided by Lightbend. Akka is part of the Lightbend Reactive Platform.

13.5.2 Mailing List

Akka User Google Group

Akka Developer Google Group

13.5.3 Downloads

http://akka.io/downloads

13.5.4 Source Code

Akka uses Git and is hosted at GitHub.

  • Akka: clone the Akka repository from http://github.com/akka/akka

13.5.5 Releases Repository

All Akka releases are published via Sonatype to Maven Central, see search.maven.org or search.maven.org (Akka versions before 2.4.3).

13.5.6 Snapshots Repository

Nightly builds are available in http://repo.akka.io/snapshots/ as both SNAPSHOT and timestamped versions.

For timestamped versions, pick a timestamp from http://repo.akka.io/snapshots/com/lightbend/akka/akka-actor_2.11/. All Akka modules that belong to the same build have the same timestamp.

sbt definition of snapshot repository

Make sure that you add the repository to the sbt resolvers:

```
resolvers += "Lightbend Snapshots" at "http://repo.akka.io/snapshots/"
```

Define the library dependencies with the timestamp as version. For example:

```
libraryDependencies += "com.typesafe.akka" % "akka-remote_2.11" % "2.1-20121016-001042"
```

maven definition of snapshot repository

Make sure that you add the repository to the maven repositories in pom.xml:

```xml
<repositories>
  <repository>
    <id>akka-snapshots</id>
    <name>Akka Snapshots</name>
    <url>http://repo.akka.io/snapshots/</url>
    <layout>default</layout>
  </repository>
</repositories>
```

Define the library dependencies with the timestamp as version. For example:

```xml
<dependency>
  <groupId>com.typesafe.akka</groupId>
  <artifactId>akka-remote_2.11</artifactId>
  <version>2.1-20121016-001042</version>
</dependency>
```
CHAPTER FOURTEEN

ADDITIONAL INFORMATION

14.1 Binary Compatibility Rules

Akka maintains and verifies backwards binary compatibility across versions of modules. In the rest of this document, whenever binary compatibility is mentioned, "backwards binary compatibility" is meant (as opposed to forward compatibility). This means that the new JARs are a drop-in replacement for the old ones (but not the other way around), as long as your build does not enable the inliner (Scala-only restriction).

14.1.1 Binary compatibility rules explained

Binary compatibility is maintained between:

  • minor and patch versions – please note that the meaning of "minor" has shifted to be more restrictive with Akka 2.4.0; read Change in versioning scheme, stronger compatibility since 2.4 for details

Binary compatibility is NOT maintained between:

  • major versions
  • any versions of experimental modules – read The meaning of "experimental" for details
  • a few notable exclusions explained below

Specific examples (please read Change in versioning scheme, stronger compatibility since 2.4 to understand the difference between the "before 2.4 era" and the "after 2.4 era"):

```
# [epoch.major.minor] era
OK:  2.2.0 --> 2.2.1 --> ... --> 2.2.x
NO:  2.2.y --x--> 2.3.y
OK:  2.3.0 --> 2.3.1 --> ... --> 2.3.x
OK:  2.3.x --> 2.4.x (special case, migration to new versioning scheme)

# [major.minor.patch] era
OK:  2.4.0 --> 2.5.x
OK:  2.5.0 --> 2.6.x
NO:  2.x.y --x--> 3.x.y
OK:  3.0.0 --> 3.0.1 --> ... --> 3.0.n
OK:  3.0.n --> 3.1.0 --> ... --> 3.1.n
OK:  3.1.n --> 3.2.0
```

Cases where binary compatibility is not retained

Some modules are excluded from the binary compatibility guarantees, such as:

  • *-testkit modules – since these are to be used only in tests, which usually are re-compiled and run on demand
  • *-tck modules – since they may want to add new tests (or force configuring something), in order to discover possible failures in an existing implementation that the TCK is supposed to be testing. Compatibility here is not guaranteed; however, it is attempted to make the upgrade process as smooth as possible.
  • all experimental modules – which by definition are subject to rapid iteration and change. Read more about them in The meaning of "experimental".

14.1.2 Change in versioning scheme, stronger compatibility since 2.4

Since the release of Akka 2.4.0 a new versioning scheme is in effect. Historically, Akka had been following the Java or Scala style of versioning, where the first number means "epoch", the second means major, and the third means minor, thus: epoch.major.minor (the versioning scheme followed until and during 2.3.x).

Currently, since Akka 2.4.0, the new versioning applies, which is closer to the semantic versioning many have come to expect, in which the version number is deciphered as major.minor.patch.

In addition to that, Akka 2.4.x has been made binary compatible with the 2.3.x series, so there is no reason to remain on Akka 2.3.x, since upgrading is completely compatible (and many issues have been fixed since).

14.1.3 Mixed versioning is not allowed

Modules that are released together under the Akka project are intended to be upgraded together. For example, it is not legal to mix Akka Actor 2.4.2 with Akka Cluster 2.4.5 even though "Akka 2.4.2" and "Akka 2.4.5" are binary compatible. This is because modules may assume internals changes across module boundaries; for example, some feature in Clustering may have required an internals change in Actor, which however is not public API, thus such a change is considered safe.

Note: We recommend keeping an akkaVersion variable in your build file and re-using it for all included modules, so when you upgrade you can simply change it in this one place.
14.1.4 The meaning of "experimental"

Experimental is a keyword used in module descriptions as well as their artifact names, in order to signify that the API they contain is subject to change without any prior warning.

Experimental modules are not covered by Lightbend's Commercial Support, unless specifically stated otherwise. The purpose of releasing them early, as experimental, is to make them easily available and improve based on feedback, or even to discover that the module wasn't useful.

An experimental module doesn't have to obey the rule of staying binary compatible between micro releases. Breaking API changes may be introduced in minor releases without notice as we refine and simplify based on your feedback. An experimental module may be dropped in minor releases without prior deprecation. Best-effort migration guides may be provided, but this is decided on a case-by-case basis for experimental modules.

14.1.5 The meaning of INTERNAL API

When browsing the source code and/or looking for methods available to be called, especially from Java, which does not have as rich an access protection system as Scala has, you may sometimes find methods or classes annotated with the /** INTERNAL API */ comment.

No compatibility guarantees are given about these classes; they may change or even disappear in minor versions, and user code is not supposed to be calling (or even touching) them.

Side-note on the JVM representation details of the Scala private[akka] pattern that Akka is using extensively in its internals: such methods or classes, which act as "accessible only from the given package" in Scala, are compiled down to public (!) in raw Java bytecode, and the access restriction that Scala understands is carried along as metadata stored in the classfile. Thus, such methods are safely guarded from being accessed from Scala; however, Java users will not be warned about this fact by the javac compiler. Please be aware of this and do not call into Internal APIs, as they are subject to change without any warning.

14.1.6 Binary Compatibility Checking Toolchain

Akka uses the Lightbend-maintained Migration Manager, called MiMa for short, for enforcing that binary compatibility is kept where it was promised. All Pull Requests must pass MiMa validation (which happens automatically), and if failures are detected, manual exception overrides may be put in place, for example if the change happened to be in an Internal API.

14.2 Frequently Asked Questions

14.2.1 Akka Project

Where does the name Akka come from?
It is the name of a beautiful Swedish mountain up in the northern part of Sweden called Laponia. The mountain is also sometimes called 'The Queen of Laponia'.

Akka is also the name of a goddess in the Sámi (the native Swedish population) mythology. She is the goddess that stands for all the beauty and good in the world. The mountain can be seen as the symbol of this goddess.

Also, the name AKKA is a palindrome of the letters A and K, as in Actor Kernel.

Akka is also:

  • the name of the goose that Nils traveled across Sweden on in The Wonderful Adventures of Nils by the Swedish writer Selma Lagerlöf
  • the Finnish word for 'nasty elderly woman' and the word for 'elder sister' in the Indian languages Tamil, Telugu, Kannada and Marathi
  • a font
  • a town in Morocco
  • a near-earth asteroid

14.2.2 Resources with Explicit Lifecycle

Actors, ActorSystems, ActorMaterializers (for streams): all these types of objects bind resources that must be released explicitly. The reason is that Actors are meant to have a life of their own, existing independently of whether messages are currently en route to them. Therefore you should always make sure that for every creation of such an object you have a matching stop, terminate, or shutdown call implemented. In particular you typically want to bind such values to immutable references, i.e. final ActorSystem system in Java or val system: ActorSystem in Scala.

JVM application or Scala REPL "hanging"

Due to an ActorSystem's explicit lifecycle the JVM will not exit until it is stopped. Therefore it is necessary to shut down all ActorSystems within a running application or Scala REPL session in order to allow these processes to terminate. Shutting down an ActorSystem will properly terminate all Actors and ActorMaterializers that were created within it.

14.2.3 Actors in General

sender()/getSender() disappears when I use Future in my Actor, why?

When using future callbacks inside actors you need to carefully avoid closing over the containing actor's reference, i.e. do not call methods or access mutable state on the enclosing actor from within the callback. This breaks the actor encapsulation and may introduce synchronization bugs and race conditions, because the callback will be scheduled concurrently to the enclosing actor. Unfortunately there is not yet a way to detect these illegal accesses at compile time. Read more about it in the docs for Actors and shared mutable state.
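A common way to stay safe in Java is to capture the values you need into final local variables before registering the callback, or to forward the future's result with the pipe pattern instead of touching actor state from another thread. A sketch is below; the lookup method stands in for any asynchronous call and is made up for illustration:

```java
import akka.actor.ActorRef;
import akka.actor.UntypedActor;
import akka.dispatch.Futures;
import akka.pattern.Patterns;
import scala.concurrent.Future;

public class QueryActor extends UntypedActor {

  @Override
  public void onReceive(Object message) {
    if (message instanceof String) {
      // capture what the callback needs NOW; getSender() may point elsewhere later
      final ActorRef replyTo = getSender();

      Future<String> result = lookup((String) message); // some asynchronous call (assumed)

      // let Akka complete the future and deliver its value to the captured recipient;
      // no actor state is touched from another thread
      Patterns.pipe(result, getContext().dispatcher()).to(replyTo);
    } else {
      unhandled(message);
    }
  }

  private Future<String> lookup(String key) {
    // placeholder for a real asynchronous lookup
    return Futures.successful("value-for-" + key);
  }
}
```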
Why OutOfMemoryError?

There can be many reasons for an OutOfMemoryError. For example, in a pure push-based system with message consumers that are potentially slower than the corresponding message producers, you must add some kind of message flow control. Otherwise messages will be queued in the consumers' mailboxes and thereby fill up the heap memory.

Some articles for inspiration:

  • Balancing Workload across Nodes with Akka
  • Work Pulling Pattern to prevent mailbox overflow, throttle and distribute work

14.2.4 Actors Scala API

How can I get compile time errors for missing messages in receive?

One solution to help you get a compile time warning for not handling a message that you should be handling is to define your actor's input and output messages as implementations of base traits, and then do a match that will be checked for exhaustiveness. Here is an example where the compiler will warn you that the match in receive isn't exhaustive:

```scala
object MyActor {
  // these are the messages we accept
  sealed abstract trait Message
  final case class FooMessage(foo: String) extends Message
  final case class BarMessage(bar: Int) extends Message

  // these are the replies we send
  sealed abstract trait Reply
  final case class BazMessage(baz: String) extends Reply
}

class MyActor extends Actor {
  import MyActor._
  def receive = {
    case message: Message => message match {
      case BarMessage(bar) => sender() ! BazMessage("Got " + bar)
      // warning here:
      // "match may not be exhaustive. It would fail on the following input: FooMessage(_)"
    }
  }
}
```

14.2.5 Remoting

I want to send to a remote system but it does not do anything

Make sure that you have remoting enabled on both ends: client and server. Both need hostname and port configured, and you will need to know the port of the server; the client can use an automatic port in most cases (i.e. configure port zero). If both systems are running on the same network host, their ports must be different.

If you still do not see anything, look at what the logging of remote life-cycle events tells you (normally logged at INFO level) or switch on Auxiliary remote logging options to see all sent and received messages (logged at DEBUG level).

Which options shall I enable when debugging remoting issues?

Have a look at the Remote Configuration; the typical candidates are:

  • akka.remote.log-sent-messages
  • akka.remote.log-received-messages
  • akka.remote.log-remote-lifecycle-events (this also includes deserialization errors)

What is the name of a remote actor?

When you want to send messages to an actor on a remote host, you need to know its full path, which is of the form:

```
akka.protocol://system@host:1234/user/my/actor/hierarchy/path
```

Observe all the parts you need here:

  • protocol is the protocol to be used to communicate with the remote system. In most cases this is tcp.
  • system is the remote system's name (must match exactly, case-sensitive!)
  • host is the remote system's IP address or DNS name, and it must match that system's configuration (i.e. akka.remote.netty.tcp.hostname)
  • 1234 is the port number on which the remote system is listening for connections and receiving messages
  • /user/my/actor/hierarchy/path is the absolute path of the remote actor in the remote system's supervision hierarchy, including the system's guardian (i.e. /user; there are others, e.g. /system which hosts loggers, /temp which keeps temporary actor refs used with ask, /remote which enables remote deployment, etc.); this matches how the actor prints its own self reference on the remote host, e.g. in log output
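Putting the path format above into code, a remote actor can be looked up from Java with actorSelection. This is only a sketch: it assumes remoting is enabled in the configuration of both systems, and the system name, host, port and actor path below are placeholders:

```java
import akka.actor.ActorRef;
import akka.actor.ActorSelection;
import akka.actor.ActorSystem;

public class RemoteLookup {
  public static void main(String[] args) {
    ActorSystem system = ActorSystem.create("local");

    // protocol://system@host:port/absolute/path, all parts illustrative here
    ActorSelection selection =
        system.actorSelection("akka.tcp://remoteSystem@10.0.0.1:1234/user/my/actor/hierarchy/path");

    // fire-and-forget send to whatever actor resolves at that path
    selection.tell("hello", ActorRef.noSender());
  }
}
```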
Why are replies not received from a remote actor?

The most common reason is that the local system's name (i.e. the system@host:1234 part in the answer above) is not reachable from the remote system's network location, e.g. because host was configured to be 0.0.0.0, localhost or a NAT'ed IP address.

If you are running an ActorSystem under a NAT or inside a docker container, make sure to set akka.remote.netty.tcp.hostname and akka.remote.netty.tcp.port to the address it is reachable at from other ActorSystems. If you need to bind your network interface to a different address, use the akka.remote.netty.tcp.bind-hostname and akka.remote.netty.tcp.bind-port settings. Also make sure your network is configured to translate from the address your ActorSystem is reachable at to the address your ActorSystem network interface is bound to.

How reliable is the message delivery?

The general rule is at-most-once delivery, i.e. no guaranteed delivery. Stronger reliability can be built on top, and Akka provides tools to do so. Read more in Message Delivery Reliability.

14.2.6 Debugging

How do I turn on debug logging?

To turn on debug logging in your actor system add the following to your configuration:

```
akka.loglevel = DEBUG
```

To enable different types of debug logging add the following to your configuration:

  • akka.actor.debug.receive will log all messages sent to an actor if that actor's receive method is a LoggingReceive
  • akka.actor.debug.autoreceive will log all special messages like Kill, PoisonPill etc. sent to all actors
  • akka.actor.debug.lifecycle will log all actor lifecycle events of all actors

Read more about it in the docs for Logging and Actor logging.
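If you prefer to keep such settings out of application.conf while experimenting, the same flags can be supplied programmatically through the Typesafe Config API; a sketch (the system name is made up):

```java
import akka.actor.ActorSystem;
import com.typesafe.config.Config;
import com.typesafe.config.ConfigFactory;

public class DebugLoggingSystem {
  public static void main(String[] args) {
    // debug settings layered on top of whatever application.conf/reference.conf provide
    Config debug = ConfigFactory.parseString(
        "akka.loglevel = DEBUG\n" +
        "akka.actor.debug.receive = on\n" +
        "akka.actor.debug.autoreceive = on\n" +
        "akka.actor.debug.lifecycle = on\n");

    ActorSystem system = ActorSystem.create("debuggable", debug.withFallback(ConfigFactory.load()));

    // ... run your actors, then shut down
    system.terminate();
  }
}
```

As noted above, akka.actor.debug.receive only takes effect for actors whose receive is wrapped in a LoggingReceive.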
14.3 Books

  • Learning Akka, by Jason Goodwin, PACKT Publishing, ISBN: 9781784393007, December 2015
  • Akka in Action, by Raymond Roestenburg and Rob Bakker, Manning Publications Co., ISBN: 9781617291012, estimated in 2016
  • Reactive Messaging Patterns with the Actor Model, by Vaughn Vernon, Addison-Wesley Professional, ISBN: 0133846830, August 2015
  • Developing an Akka Edge, by Thomas Lockney and Raymond Tay, Bleeding Edge Press, ISBN: 9781939902054, April 2014
  • Effective Akka, by Jamie Allen, O'Reilly Media, ISBN: 1449360076, August 2013
  • Akka Concurrency, by Derek Wyatt, artima developer, ISBN: 0981531660, May 2013
  • Akka Essentials, by Munish K. Gupta, PACKT Publishing, ISBN: 1849518289, October 2012

14.4 Videos

  • Learning Akka Videos, by Salma Khater, PACKT Publishing, ISBN: 9781784391836, January 2016

14.5 Other Language Bindings

14.5.1 JRuby

Read more here: https://github.com/iconara/mikka

14.5.2 Groovy/Groovy++

Read more here: https://gist.github.com/620439

14.5.3 Clojure

Read more here: http://blog.darevay.com/2011/06/clojure-and-akka-a-match-made-in/

14.6 Akka in OSGi

14.6.1 Background

OSGi is a mature packaging and deployment standard for component-based systems. It has similar capabilities to Project Jigsaw (originally scheduled for JDK 1.8), but has far stronger facilities to support legacy Java code. This is to say that while Jigsaw-ready modules require significant changes to most source files and on occasion to the structure of the overall application, OSGi can be used to modularize almost any Java code as far back as JDK 1.2, usually with no changes at all to the binaries.

These legacy capabilities are OSGi's major strength and its major weakness. The creators of OSGi realized early on that implementors would be unlikely to rush to support OSGi metadata in existing JARs. There were already a handful of new concepts to learn in the JRE, and the added value to teams that were managing well with straight J2EE was not obvious. Facilities emerged to "wrap" binary JARs so they could be used as bundles, but this functionality was only used in limited situations. An application of the "80/20 Rule" here would have it that "80% of the complexity is with 20% of the configuration", but it was enough to give OSGi a reputation that has stuck with it to this day.

This document aims to cover the productivity basics folks need to use it with Akka, the 20% that users need to get 80% of what they want. For more information than is provided here, OSGi In Action is worth exploring.

14.6.2 Core Components and Structure of OSGi Applications

The fundamental unit of deployment in OSGi is the Bundle. A bundle is a Java JAR with additional entries in MANIFEST.MF that minimally expose the name and version of the bundle and packages for import and export. Since these manifest entries are ignored outside OSGi deployments, a bundle can interchangeably be used as a JAR in the JRE.

When a bundle is loaded, a specialized implementation of the Java ClassLoader is instantiated for each bundle. Each classloader reads the manifest entries and publishes both capabilities (in the form of the Bundle-Exports) and requirements (as Bundle-Imports) in a container singleton for discovery by other bundles. The process of matching imports to exports across bundles through these classloaders is the process of resolution, one of six discrete steps in the lifecycle FSM of a bundle in an OSGi container:

INSTALLED: A bundle that is installed has been loaded from disk and a classloader instantiated with its capabilities. Bundles are iteratively installed manually or through container-specific descriptors. For those familiar with legacy packaging such as EJB, the modular nature of OSGi means that bundles may be used by multiple applications with overlapping dependencies. By resolving them individually from repositories, these overlaps can be de-duplicated across multiple deployments to the same container.

RESOLVED: A bundle that has been resolved is one that has had its requirements (imports) satisfied. Resolution does mean that a bundle can be started.

STARTING: A bundle that is started can be used by other bundles. For an otherwise complete application closure of resolved bundles, the implication here is that they must be started in the order directed by a depth-first search for all to be started. When a bundle is starting, any exposed lifecycle interfaces in the bundle are called, giving the bundle the opportunity to start its own service endpoints and threads.

ACTIVE: Once a bundle's lifecycle interfaces return without error, a bundle is marked as active.

STOPPING: A bundle that is stopping is in the process of calling the bundle's stop lifecycle, and transitions back to the RESOLVED state when complete. Any long-running services or threads that were created while STARTING should be shut down when the bundle's stop lifecycle is called.

UNINSTALLED: A bundle can only transition to this state from the INSTALLED state, meaning it cannot be uninstalled before it is stopped.

Note the dependency in this FSM on lifecycle interfaces. While there is no requirement that a bundle publishes these interfaces or accepts such callbacks, the lifecycle interfaces provide the semantics of a main() method and allow the bundle to start and stop long-running services such as REST web services, ActorSystems, Clusters, etc.
Secondly, note when considering requirements and capabilities that it is a common misconception to equate these with the repository dependencies that might be found in Maven or Ivy. While they provide similar practical functionality, OSGi has several parallel types of dependency (such as Blueprint Services) that cannot be easily mapped to repository capabilities. In fact, the core specification leaves these facilities up to the container in use. In turn, some containers have tooling to generate application load descriptors from repository metadata.

14.6.3 Notable Behavior Changes

Combined with understanding the bundle lifecycle, the OSGi developer must pay attention to sometimes unexpected behaviors that are introduced. These are generally within the JVM specification, but are unexpected and can lead to frustration.

  • Bundles should not export overlapping package spaces. It is not uncommon for legacy JVM frameworks to expect plugins in an application composed of multiple JARs to reside under a single package name. For example, a frontend application might scan all classes from com.example.plugins for specific service implementations, with that package existing in several contributed JARs. While it is possible to support overlapping packages with complex manifest headers, it's much better to use non-overlapping package spaces and facilities such as Akka Cluster for service discovery. Stylistically, many organizations opt to use the root package path as the name of the bundle distribution file.

  • Resources are not shared across bundles unless they are explicitly exported, as with classes. The common case of this is expecting that getClass().getClassLoader().getResources("foo") will return all files on the classpath named foo. The getResources() method only returns resources from the current classloader, and since there are separate classloaders for every bundle, resource files such as configurations are no longer searchable in this manner.

14.6.4 Configuring the OSGi Framework

To use Akka in an OSGi environment, the container must be configured such that the org.osgi.framework.bootdelegation property delegates the sun.misc package to the boot classloader instead of resolving it through the normal OSGi class space.

14.6.5 Intended Use

Akka only supports the usage of an ActorSystem strictly confined to a single OSGi bundle, where that bundle contains or imports all of the actor system's requirements. This means that the approach of offering an ActorSystem as a service to which Actors can be deployed dynamically via other bundles is not recommended; an ActorSystem and its contained actors are not meant to be dynamic in this way. ActorRefs may safely be exposed to other bundles.

14.6.6 Activator

To bootstrap Akka inside an OSGi environment, you can use the akka.osgi.ActorSystemActivator class to conveniently set up the ActorSystem.

```scala
import akka.actor.{ Props, ActorSystem }
import org.osgi.framework.BundleContext
import akka.osgi.ActorSystemActivator

class Activator extends ActorSystemActivator {

  def configure(context: BundleContext, system: ActorSystem) {
    // optionally register the ActorSystem in the OSGi Service Registry
    registerService(context, system)
    val someActor = system.actorOf(Props[SomeActor], name = "someName")
    someActor ! SomeMessage
  }

}
```
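Since this guide targets the Java API, a rough Java equivalent of the activator above might look like the following sketch. SomeActor and SomeMessage are assumed to be classes defined elsewhere in your bundle, just as in the Scala example:

```java
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;
import akka.osgi.ActorSystemActivator;
import org.osgi.framework.BundleContext;

public class Activator extends ActorSystemActivator {

  @Override
  public void configure(BundleContext context, ActorSystem system) {
    // optionally register the ActorSystem in the OSGi Service Registry
    registerService(context, system);

    ActorRef someActor = system.actorOf(Props.create(SomeActor.class), "someName");
    someActor.tell(new SomeMessage(), ActorRef.noSender());
  }
}
```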
The goal here is to map the OSGi lifecycle more directly to the Akka lifecycle. The ActorSystemActivator creates the actor system with a class loader that finds resources (application.conf and reference.conf files) and classes from the application bundle and all transitive dependencies.

The ActorSystemActivator class is included in the akka-osgi artifact:

```xml
<dependency>
  <groupId>com.typesafe.akka</groupId>
  <artifactId>akka-osgi_2.11</artifactId>
  <version>2.4.10</version>
</dependency>
```

14.6.7 Sample

A complete sample project is provided in akka-sample-osgi-dining-hakkers.
