Note: this post is part of a series about Microsoft Orleans and the Actor model
- Microsoft Orleans – Introduction to the Actor model
- Microsoft Orleans – Implementing grains
- Microsoft Orleans – Advanced functionality – part 1
- Microsoft Orleans – Advanced functionality – part 2
- Microsoft Orleans – Direct client functionality (with working example)
- Microsoft Orleans – Powerpoint Presentation – part 1 – Theory
Storage emulator
With the free tools “Microsoft Azure Emulator” and “Microsoft Azure Storage Explorer” it is really easy to use all the functionality Orleans provides, and to understand and see what Orleans does behind the scenes. In the picture you can see how persistence works:
- Grain state in Blob storage and Table storage
- Reminders in Table storage
- Queues in Azure Queues; you can see all the queues created to distribute work across many agents. In this specific case AzureQueueDataAdapterV2 is used (in Azure within “Storage account”, not “Service Bus”).

Streams
Streaming extensions provide a set of abstractions and APIs that make thinking about and working with streams simpler and more robust. They allow developers to write reactive applications that operate on a sequence of events in a structured way.
- The extensibility model of stream providers makes the programming model compatible with and portable across a wide range of existing queuing technologies, such as Event Hubs, ServiceBus, Azure Queues, and Apache Kafka.
- There is no need to write special code or run dedicated processes to interact with such queues.
- Streams are virtual (a stream always exists) and are identified by stream IDs (a logical name formed by a GUID plus a string)
- Orleans Streams allow you to decouple the generation of data from its processing, both in time and space. That means that the stream producer and the stream consumer may be on different servers, at different times, and both will withstand failures.
- TCP-based Simple Message Stream provider (known as SMS), in memory (available with fire-and-forget semantics)
- It uses the Async Reactive Extensions (vNext) from Microsoft Research
- Azure Service Bus/Azure Queues for reliable delivery, with a few lines of code
- It supports implicit or explicit subscriptions, rewindable streams, etc.
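The producer/consumer decoupling above can be sketched with two grains. This is a minimal, hypothetical sketch: the grain interfaces (`IProducerGrain`, `IConsumerGrain`), the stream namespace, and the provider name `"SMS"` are assumptions; the provider name must match a stream provider configured on the silo.

```csharp
using System;
using System.Threading.Tasks;
using Orleans;
using Orleans.Streams;

// Hypothetical producer grain: publishes events to a virtual stream.
public class ProducerGrain : Grain, IProducerGrain
{
    private IAsyncStream<int> _stream;

    public override Task OnActivateAsync()
    {
        // "SMS" must match a stream provider configured on the silo;
        // the GUID + namespace pair identifies the virtual stream.
        var provider = GetStreamProvider("SMS");
        _stream = provider.GetStream<int>(this.GetPrimaryKey(), "MyStreamNamespace");
        return base.OnActivateAsync();
    }

    public Task Produce(int item) => _stream.OnNextAsync(item);
}

// Hypothetical consumer grain: explicit subscription to the same stream.
public class ConsumerGrain : Grain, IConsumerGrain
{
    public async Task Subscribe(Guid streamId)
    {
        var stream = GetStreamProvider("SMS").GetStream<int>(streamId, "MyStreamNamespace");

        // The callback runs under the grain's single-threaded execution model.
        await stream.SubscribeAsync((item, token) =>
        {
            Console.WriteLine($"Received {item}");
            return Task.CompletedTask;
        });
    }
}
```

Because streams are virtual, neither side needs to “create” the stream first: both simply resolve it by ID and namespace.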
Timer and Reminders
Orleans provides two mechanisms to specify periodic behavior for grains:
- Using a Timer
- essentially identical to the standard .NET System.Threading.Timer class.
- single-threaded execution is guaranteed within the grain activation in which it operates.
- a timer callback does not change the activation’s state from idle to in use: this means that a timer cannot be used to postpone the deactivation of otherwise idle activations.
- not persisted
- fine resolution (period in seconds or minutes); callback delivery is not guaranteed. Used for aggregation or grain logic.
- Using a Reminder
- Persisted to storage; it will continue to trigger in almost all situations unless explicitly cancelled.
- The grain will be activated when the reminder ticks.
- Period in minutes, hours, or days.
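The two mechanisms can be sketched side by side in one grain. This is a hypothetical example (the grain interface `IPeriodicGrain`, the reminder name, and the intervals are illustrative); reminders additionally require the grain to implement `IRemindable` and a reminder service to be configured on the silo.

```csharp
using System;
using System.Threading.Tasks;
using Orleans;
using Orleans.Runtime;

public class PeriodicGrain : Grain, IRemindable, IPeriodicGrain
{
    private IDisposable _timer;

    public override async Task OnActivateAsync()
    {
        // Timer: in-memory, not persisted; it dies with the activation
        // and does not keep the activation alive.
        _timer = RegisterTimer(
            _ => DoWorkAsync(),          // callback
            null,                        // state
            TimeSpan.FromSeconds(10),    // due time
            TimeSpan.FromSeconds(30));   // period

        // Reminder: persisted; it will (re-)activate the grain when it ticks.
        // The minimum reminder period is on the order of a minute.
        await RegisterOrUpdateReminder(
            "daily-cleanup",
            TimeSpan.FromMinutes(1),
            TimeSpan.FromHours(24));

        await base.OnActivateAsync();
    }

    // Called by the runtime when a persisted reminder ticks.
    public Task ReceiveReminder(string reminderName, TickStatus status)
        => reminderName == "daily-cleanup" ? DoWorkAsync() : Task.CompletedTask;

    private Task DoWorkAsync() => Task.CompletedTask; // placeholder work
}
```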
Running a complex query
What’s the best approach to run a “complex query”? By complex query I mean a query over grains that could have hundreds of different attributes, where we need to search and filter them according to their attributes, sort by some of them, and have pagination.
Attributes like: when the grain was created in storage, last comment datetime, last activity datetime, evaluation score, keywords, etc.
This question is independent of Orleans. The real question is: which database best fulfils your indexing requirements? If your attributes vary a lot, you need a database with flexible indexes, like Elasticsearch or Cosmos DB, or a relational database like SQL Server.
So, transitioning to Orleans, we don’t need to think differently.
Let’s say you have an e-commerce platform, where you typically have a lot of attributes depending on the products. Then there is no good reason to use Orleans to show and filter the products, but a lot of good reasons to use Orleans to manage products, pricing, and so on.
So you would end up with something like a CQRS architecture, where your solution for writing is very different from your solution for reading.
And what about using an AODB approach? (Orleans Indexing)
More investigation into the capabilities of Orleans Indexing is required, but it looks like there are a lot of cases that are not covered yet, like full-text indices, range queries, query parsing, and so on. And very large indexes will always be problematic with this approach.
Indexing
Indexing allows lookups over facetable grains (which are costly to keep in memory).
The IndexableActor actor type is the super-type of all indexable actors and is defined to be generic, parameterized by an ordinary class that contains all the properties of the actor including the ones being indexed.
- For example, for the indexable actor type Player, an ordinary class PlayerProperties is defined.
- Each instance X of Player includes an instance of PlayerProperties, which contains all of X’s properties.
- The indexed properties in PlayerProperties are annotated with an attribute “[indexed]”, which the indexing system can find using reflection.
- To pick up indexing functionality, Player inherits from IndexableActor<PlayerProperties>.
Normal code to get a grain reference:
Player p = ActorFactory.GetActor<Player>(k);
With Indexing:
IQueryable<Player> result = from p in ActorFactory.GetActors<Player, PlayerProperties>()
                            where p.Location == "Redmond"
                            select p;
More info is available on the Orleans Indexing fork. A research paper describing how it works can be found here. I think it’s a must-read to understand how indexing in Orleans works.
Distributed transactions
In-memory distributed projections, inter-user interaction, and forward and backward compute propagation graphs are unnecessarily complex with a typical n-tier approach in a distributed setting. Orleans supports distributed transactions, which means that you can manage multiple actors in a single transaction. Take a look at Sergey Bykov’s YouTube video for more details.
Transactional state is not compatible with non-reentrant grains. This is a known limitation and it is missing from the documentation. All grains that use a transactional state should be marked as Reentrant, or at least have their methods marked with AlwaysInterleave.
See “When does OrleansBrokenTransactionLockException occur?” #5644
Allowing reentrancy on grains with transactional state is harmless because the transaction lock and transaction commit mechanism already prevent any “reentrancy effects” from being visible to the application: transactions are serializable, i.e. they will not interleave on the same grain.
Then, for .UseTransactions() to work, you need to add a transactional state provider – one that implements ITransactionalStateStorage<TState>.
Orleans currently has a provider for Azure Table storage that you can add with AddAzureTableTransactionalStateStorage(). Alternatively, you can create and add your own provider.
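Putting the two pieces together, the silo configuration might look like the following. This is a minimal sketch assuming the Orleans 2.x `SiloHostBuilder` API; the provider name `"TransactionStore"` and the development-storage connection string are example values.

```csharp
using Orleans.Hosting;

var builder = new SiloHostBuilder()
    .UseLocalhostClustering()
    // Enable the distributed transaction manager on the silo.
    .UseTransactions()
    // Register an ITransactionalStateStorage<TState> implementation backed
    // by Azure Table storage; "TransactionStore" is the name grains can
    // reference via the TransactionalState attribute's storageName parameter.
    .AddAzureTableTransactionalStateStorage(
        "TransactionStore",
        options => options.ConnectionString = "UseDevelopmentStorage=true");
```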
There is also a wrapper that will use whatever grain storage you have configured. In many ways it isn’t as efficient, but it does work.
The TransactionalStateAttribute has an optional storageName parameter, but you can also use the default grain storage.
A transactional grain call is complete when its task resolves, unless the call throws a timeout or an OrleansTransactionInDoubtException.
In those cases, the transaction may still be in progress and retries may lead to cascading aborts.
The “Transaction” attribute indicates how a grain call behaves in a transactional environment, via the transaction options below:
- TransactionOption.Create – Call is transactional and will always create a new transaction context (i.e., it will start a new transaction), even if called within an existing transaction context.
- TransactionOption.Join – Call is transactional but can only be called within the context of an existing transaction.
- TransactionOption.CreateOrJoin – Call is transactional. If called within the context of a transaction, it will use that context, else it will create a new context.
- TransactionOption.Suppress – Call is not transactional but can be called from within a transaction. If called within the context of a transaction, the context will not be passed to the call.
- TransactionOption.Supported – Call is not transactional but supports transactions. If called within the context of a transaction, the context will be passed to the call.
- TransactionOption.NotAllowed – Call is not transactional and cannot be called from within a transaction. If called within the context of a transaction, it will throw a NotSupportedException.
Example
public interface IATMGrain : IGrainWithIntegerKey
{
    [Transaction(TransactionOption.Create)]
    Task Transfer(Guid fromAccount, Guid toAccount, uint amountToTransfer);
}
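A matching implementation could look like the following sketch. `IAccountGrain`, its `Withdraw`/`Deposit` methods (assumed to be marked with `TransactionOption.Join`), and the `Balance` state class are assumptions for illustration; a transactional state provider is assumed to be configured on the silo.

```csharp
using System;
using System.Threading.Tasks;
using Orleans;
using Orleans.Concurrency;
using Orleans.Transactions.Abstractions;

[Serializable]
public class Balance
{
    public uint Value { get; set; }
}

// Reentrant, because transactional state is not compatible with
// non-reentrant grains (see #5644 above).
[Reentrant]
public class ATMGrain : Grain, IATMGrain
{
    public Task Transfer(Guid fromAccount, Guid toAccount, uint amountToTransfer)
        // Both account calls join the transaction created by this method:
        // either both balance changes commit, or neither does.
        => Task.WhenAll(
            GrainFactory.GetGrain<IAccountGrain>(fromAccount).Withdraw(amountToTransfer),
            GrainFactory.GetGrain<IAccountGrain>(toAccount).Deposit(amountToTransfer));
}

[Reentrant]
public class AccountGrain : Grain, IAccountGrain
{
    private readonly ITransactionalState<Balance> _balance;

    // The facet injects the transactional state; "balance" names the state.
    public AccountGrain(
        [TransactionalState("balance")] ITransactionalState<Balance> balance)
        => _balance = balance;

    public Task Deposit(uint amount)
        => _balance.PerformUpdate(b => b.Value += amount);

    public Task Withdraw(uint amount)
        => _balance.PerformUpdate(b => b.Value -= amount);
}
```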
Reactive caching
Using the “reactive caching” and “reactive replication” patterns removes the need for external cache replication and, most importantly, removes cache-invalidation problems – they’re no longer something you have to spend hours discussing in meetings.
Reactive Caching is a general way to bring frequently updated data items or snapshots – not events – closer to the consumers that need them, and not to those that don’t, to maximize lookup speed.
It is also useful when the incoming events from the server to the client are so many that individual clients will lag in relative terms (slow/fast consumer problem) yet individual consumers don’t care about the events at all – they only care about what those events build up to, as in, to paint some data grid or chart.
The client just wants to always get the latest snapshot of some data view as fast as their own network allows.



So what’s the benefit of using Reactive Caching compared to something like SignalR?
The benefits will depend on the use case; there may be some or none. SignalR works fine for common event-based workloads. Reactive caching, as described above, shines when consumers only care about the latest snapshot of some data view rather than the individual events that produced it. Relative lag is a common problem in some industries. Reactive caching is by no means a silver bullet and requires some specific conditions to work (such as an actor-like system), but when you have them in place, it works fine.
Also, since reactive caching is just a pattern, you can even do it over SignalR if the use case (and the lag) justifies it.
What about using long-poll HTTP requests?
Browsers impose a connection limit per domain – for Chrome this is 6 connections per domain – and each active long poll counts towards this limit. If the application issues a lot of long polls at the same time, you will start seeing things not updating when they should, and that’s not something you want your users to notice. Why do it then? Once you’re serious, implement high-level SignalR or low-level WebSockets, or some other multiplexing-capable communication technology. Persistent connections are one of the basic building blocks of real-time applications.
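The “latest snapshot” idea can be sketched as a grain-side polling wait, roughly in the spirit of the latency-adaptive article linked below. All names here (`ICacheGrain`, the value type, the version token) are illustrative assumptions, not Orleans APIs.

```csharp
using System.Threading.Tasks;
using Orleans;
using Orleans.Concurrency;

// Reentrant, so an update call can complete while poll calls are pending.
[Reentrant]
public class CacheGrain : Grain, ICacheGrain
{
    private int _version;
    private string _value = string.Empty;
    private TaskCompletionSource<(int Version, string Value)> _wait =
        new TaskCompletionSource<(int, string)>(
            TaskCreationOptions.RunContinuationsAsynchronously);

    public Task SetAsync(string value)
    {
        _value = value;
        _version++;

        // Release every consumer currently waiting for a newer snapshot,
        // then arm a fresh wait for the next update.
        _wait.TrySetResult((_version, _value));
        _wait = new TaskCompletionSource<(int, string)>(
            TaskCreationOptions.RunContinuationsAsynchronously);
        return Task.CompletedTask;
    }

    // Consumers pass the last version they saw. If the grain already holds a
    // newer snapshot, it answers immediately; otherwise the call stays open
    // until the next update – the consumer always gets the latest value,
    // never a backlog of intermediate events.
    public Task<(int Version, string Value)> PollAsync(int knownVersion)
        => knownVersion < _version
            ? Task.FromResult((_version, _value))
            : _wait.Task;
}
```

A slow consumer that misses ten updates simply receives the eleventh snapshot on its next poll, which is exactly the slow/fast-consumer behavior described above.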
More in-depth explanations can be found in these (great) resources:
- Latency-Adaptive Real-Time with Reactive Caching on Microsoft Orleans: https://jorgecandeias.github.io/2019/05/26/latency-adaptive-real-time-with-reactive-caching-on-microsoft-orleans/
- Reactive Caching for Composed Services (Microsoft research): https://www.microsoft.com/en-us/research/publication/reactive-caching-for-composed-services/
- Ad-hoc Reactive Caching Pattern (Orleans code example): https://github.com/dotnet/orleans/tree/master/Samples/2.3/AdHocReactiveCaching
Co-hosting cluster & HTTP client
This can be achieved using the hosted client, enabled by default since Orleans 2.3. The documentation is coming; take a look at the GitHub issue here. In the meantime, take a look at this Stack Overflow thread.
So if you use the hosted client (introduced in 2.1.0, enabled by default in 2.3.0), the hops would be:
http client ⇒ load balancer ⇒ http api ⇒ target silo
(the target silo and the HTTP API may be the same process, so no serialization is done in that case)
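In practice, with the hosted client the silo registers `IClusterClient`/`IGrainFactory` in its own DI container, so a co-hosted HTTP API can talk to grains in-process. A hypothetical ASP.NET Core controller sketch (the grain interface `IPlayerGrain` and its method are assumptions):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Orleans;

[ApiController]
[Route("api/[controller]")]
public class PlayersController : ControllerBase
{
    private readonly IGrainFactory _grains;

    // With the hosted client, IGrainFactory comes from the silo's
    // DI container: no socket, no serialization between API and silo.
    public PlayersController(IGrainFactory grains) => _grains = grains;

    [HttpGet("{id}")]
    public Task<string> Get(long id)
        => _grains.GetGrain<IPlayerGrain>(id).GetNameAsync();
}
```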
Calling an external API
Calling an external API from an Orleans grain is fine. If the response can be cached, a stateless worker or a purpose-built response cache can be used as well. Make sure that the call is not blocking and that it uses HttpClient correctly: keep a single shared HttpClient instance (this is not an Orleans-specific concern). We can view external tasks/threads, external services, etc. as external resources with side effects outside of the actor. Actors can call them and they can call actors, but that doesn’t make these external resources part of the actor.
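These guidelines together might look like the following sketch: a stateless worker grain acting as a response cache over a single shared HttpClient. The grain interface `IExternalApiGrain`, the endpoint URL, and the 5-minute cache window are illustrative assumptions.

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using Orleans;
using Orleans.Concurrency;

[StatelessWorker]
public class ExternalApiGrain : Grain, IExternalApiGrain
{
    // One shared HttpClient per process avoids socket exhaustion;
    // this is general .NET guidance, not an Orleans-specific rule.
    private static readonly HttpClient Http = new HttpClient();

    private string _cached;
    private DateTime _fetchedAt;

    public async Task<string> GetQuoteAsync()
    {
        if (_cached == null || DateTime.UtcNow - _fetchedAt > TimeSpan.FromMinutes(5))
        {
            // Async all the way down: never block with .Result or .Wait()
            // inside a grain, as that stalls the activation's turn.
            _cached = await Http.GetStringAsync("https://example.org/api/quote");
            _fetchedAt = DateTime.UtcNow;
        }

        return _cached;
    }
}
```

Since stateless workers can have multiple activations, each activation keeps its own small cache here; for a single authoritative cache, a regular grain keyed by resource would be used instead.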
Orleans Dashboard
The Orleans Dashboard is an independent project (in the OrleansContrib organization) that allows you to monitor your Orleans silos. It’s very easy to install and configure; it supports authentication and REST APIs, and it is fully customizable (the frontend is written in React).