Thursday, June 14, 2012

Lessons Learned from Implementing Paxos

My group at Microsoft uses Azure for many of its projects. We have a shared data store that we run as a service in Azure, and some of the goals for this data store included

  1. replication to scale out horizontally for distributing read query load
  2. fault tolerance for machine failures, network failures, Azure upgrades, etc.

The distributed state machine approach is one option for fault tolerance across replicas (vector clocks are another, for example).

It's a state machine because all replicas start from the same state (e.g. an empty data store) and perform the same sequence of state transitions (e.g. updates to the data store). All replicas will (eventually) have the same state. The idea is that if all replicas have exactly the same state, it doesn't matter on which replica read queries are performed. Machines can come and go and network messages can be dropped, but a read query will return the same result no matter which replica serves it.

In this article, I'd like to cover some of the lessons learned from implementing Paxos. Some of the points may seem obvious (I think High-Performance Server Architecture is another article where being explicit about the basics is valuable), but I think they're worth stating as I haven't seen them written down before (Paxos Made Live is a notable exception).

I'd recommend the wikipedia link above for an introduction to Paxos, but here's a capsule summary. A common application of Paxos is to build a distributed transaction log of operations to be applied in sequence to every state machine replica. Paxos reaches consensus on each operation across all replicas - this means no replica will disagree with any other replica as to the value selected, even in the presence of network or machine failures at any point in the algorithm.

Because Paxos is usually explained in terms of reaching consensus on a single value, each position in the log is called an instance of Paxos. In reality, people will use a variant of Multi-Paxos to run many instances of Paxos, one for each specific position in the log.

A client of Paxos submits operations to the log, and Paxos goes through two phases to reach consensus across replicas regarding the single indexed position that each operation occupies in the log. A leader is elected for a round in phase 1, and that leader runs a value for a particular instance of Paxos through an election in phase 2. One of the details that allows Paxos to reach consensus is that the leader can be constrained to complete phase 2 with a specific value in a specific instance position in the log.
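
Concretely, the messages exchanged in the two phases might look something like this sketch (the type and member names are my own illustration, not from any particular paper or implementation):

public sealed class Ballot : IComparable<Ballot>
{
    // A round identifier: ordered by round number, with the proposer id as a tiebreaker
    // so that no two proposers ever share the same ballot.
    public long Round;
    public int ProposerId;

    public int CompareTo(Ballot other)
    {
        int byRound = Round.CompareTo(other.Round);
        return byRound != 0 ? byRound : ProposerId.CompareTo(other.ProposerId);
    }
}

// Phase 1: a prospective leader asks the acceptors to promise not to accept any smaller ballot;
// the promises report what each acceptor has already accepted, which constrains the new leader.
public sealed class Phase1Request { public Ballot Ballot; }
public sealed class Phase1Promise { public Ballot Ballot; public AcceptedValue[] PreviouslyAccepted; }

// Phase 2: the leader proposes a value for a specific instance (position in the log),
// and consensus is reached once a quorum of acceptors has accepted it.
public sealed class Phase2Propose { public Ballot Ballot; public int Instance; public byte[] Value; }
public sealed class Phase2Accepted { public Ballot Ballot; public int Instance; }

public sealed class AcceptedValue { public int Instance; public Ballot Ballot; public byte[] Value; }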

Paxos tolerates multiple leaders at the cost of possibly never reaching consensus (this is the territory of the FLP Impossibility Result); in practice it's more of a theoretical problem, avoided with the persistent leader elections described below.

The state machines can execute every operation from the start of the log up to the first instance that is still awaiting consensus.
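
In code, that execution rule is just "apply until the first gap"; a minimal sketch (the _chosen dictionary, _nextToExecute counter, and Apply call are hypothetical names):

// Apply every operation that has reached consensus, stopping at the first gap in the log.
private void ExecuteContiguousPrefix()
{
    byte[] operation;
    while (_chosen.TryGetValue(_nextToExecute, out operation))   // _chosen: instance -> chosen operation
    {
        Apply(_nextToExecute, operation);                        // the state machine transition
        _nextToExecute++;
    }
    // Instances past the first missing one must wait until consensus is reached for the gap.
}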

Some of the lessons follow.

  1. Computer science papers can be funny. See the original paper on Paxos.
  2. The description of Paxos in Paxos Made Simple is in terms of three agents: the proposer, the acceptor, and the learner. This expression of the Paxos algorithm means it's natural to implement the algorithm using the Actor Model of concurrency.

    I chose the Reactive Extensions. F#'s MailboxProcessor and Erlang's processes may have been good options too, but there were reasons the reactive framework felt like the right tool for the task.

    Paxos has a reputation for being difficult to implement. I think the difficulty lies less in the algorithm and more in getting the supporting details correct (many examples follow). But simplicity in implementation is important, and composability allowed me to effectively combine simpler primitives to build a more complicated system.

    So what are some of the layers I'd like to compose?

    1. a communication channel between replicas. This could be UDP, a custom TCP mesh, or WCF, to leverage its service hosting and activation functionality.
    2. a multiplexer for multiple logs onto a single communication channel.
    3. a multiplexer for all three Paxos agents to share the same communication channel for a particular log.
    4. a method for the leader proposer agent to run an election for a round with all of the acceptor agents voting.
    5. a method for the Paxos proposer agent to run elections with the specific messages for phase 1 and 2.

    I needed a mechanism for composable filters over a shared IP port, so that I could execute multiple copies of Paxos on the same port. The reactive extensions met that need, but then it was very natural to express the Paxos agents (e.g. proposer, acceptor, learner) in terms of an actor model using Observable.Create, and then the Paxos agents were also composable out of smaller parts as well (e.g. WhenMajorityHasQuorum, Phase1Lead, Phase2Propose).
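
    For example, the multiplexing layers above can be little more than Where filters composed over the single stream of incoming messages (PaxosMessage, ReceiveFromSharedPort, AgentKind, and logId are hypothetical names):

    // All messages arriving on the shared port for this process.
    IObservable<PaxosMessage> incoming = ReceiveFromSharedPort();

    // Layer 2: select the traffic for one particular replicated log.
    IObservable<PaxosMessage> forThisLog = incoming.Where(m => m.LogId == logId);

    // Layer 3: split that log's traffic among the three Paxos agents.
    IObservable<PaxosMessage> forProposer = forThisLog.Where(m => m.Agent == AgentKind.Proposer);
    IObservable<PaxosMessage> forAcceptor = forThisLog.Where(m => m.Agent == AgentKind.Acceptor);
    IObservable<PaxosMessage> forLearner  = forThisLog.Where(m => m.Agent == AgentKind.Learner);
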
  3. Servers designed to use more than one cpu core have more than one operating system thread (even if hidden under a concurrent programming abstraction). This can lead to nondeterministic program executions, where the exact output of a program depends on the particular thread schedule selected by the operating system.

    You may even choose to introduce nondeterminism: for example, I use randomized exponential backoff as a response to timeouts to allow a set of Paxos replicas to settle on a persistent leader allowing for faster consensus.
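
    A typical shape for that backoff (the base delay and cap here are arbitrary choices for illustration):

    // Randomized exponential backoff: wait longer after each failed attempt, with jitter so
    // competing replicas don't keep colliding in lock step.
    private static TimeSpan Backoff(int attempt, Random random)
    {
        int baseMilliseconds = 100 << Math.Min(attempt, 6);   // 100ms, 200ms, ..., capped at 6.4 seconds
        return TimeSpan.FromMilliseconds(random.Next(baseMilliseconds, baseMilliseconds * 2));
    }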

    And the network itself introduces randomness; for example, with UDP packets being dropped.

    To properly test your distributed system, you need to be able to introduce these timeouts and network failures in a controlled and repeatable manner - and nondeterminism is a non-starter for debugging and reproducing problems.

    To make code deterministic for repeatable tests, it must have the same inputs. And the same inputs mean the same network i/o, thread scheduling, and random number generation - effectively all interaction with the outside world.

    The reactive framework has made these ideas explicit through the virtual scheduler. The test virtual scheduler was very useful for running deterministic unit tests, and through the composable nature of the message handling, it was trivial to add a “noisy observable” to simulate packet loss and duplication to stress test the implementation. Weaving a pseudo-random number generator with a specific seed through the code allowed for randomized exponential backoff behavior to be deterministic as well.
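
    The noisy observable can be as small as this sketch (the drop and duplicate probabilities, and the injected seeded Random, are choices I'm illustrating rather than prescribing):

    public static IObservable<T> Noisy<T>(this IObservable<T> source, Random random, double dropProbability, double duplicateProbability)
    {
        // Randomly drop or duplicate each message; with a seeded Random the "noise" is still deterministic.
        return source.SelectMany(message =>
        {
            double roll = random.NextDouble();
            if (roll < dropProbability)
            {
                return Observable.Empty<T>();                       // simulate a lost packet
            }
            if (roll < dropProbability + duplicateProbability)
            {
                return new[] { message, message }.ToObservable();   // simulate a duplicated packet
            }
            return Observable.Return(message);
        });
    }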

  4. Components like an implementation of Paxos can find reuse in many situations if they're designed for reuse. I've heard that a framework should not be deemed reusable until three separate clients have reused it. Designing for testability (e.g. policy versus mechanism and dependency injection) can help provide a second client, in addition to your main application, to enhance the reusability of your code.

  5. The Paxos algorithm is described in terms of three agents, passing messages around. But message passing using event-driven programming isn't the most friendly concurrency pattern, mostly due to stack ripping.

    So it's important to abstract Paxos for clients. The method I chose is to implement a base class StateMachine that was designed for inheritance - clients maintain the state they wish to replicate in their StateMachine-derived class, and keep all mutations of that state in a single method called only by the Paxos algorithm when consensus has been reached on the next position in the transaction log.

    public override Task ExecuteAsync(int instance, Proposal command)
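
    For example, a replicated in-memory key/value store might look something like this sketch (the KeyValueCommand type and the Payload property are assumptions for illustration; the real Proposal type isn't shown here):

    public sealed class KeyValueStateMachine : StateMachine
    {
        // The replicated state: identical on every replica, because every replica
        // executes the same commands in the same log order.
        private readonly Dictionary<string, string> _store = new Dictionary<string, string>();

        public override Task ExecuteAsync(int instance, Proposal command)
        {
            // The only place the state is mutated, and only once Paxos has reached
            // consensus on this command at this instance (position in the log).
            var operation = (KeyValueCommand)command.Payload;
            _store[operation.Key] = operation.Value;

            // Nothing asynchronous to do for an in-memory store.
            var completed = new TaskCompletionSource<object>();
            completed.SetResult(null);
            return completed.Task;
        }
    }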
    

  6. Many theoretical algorithms from distributed systems use the Asynchronous Model of time, where there is no shared notion of time across different nodes. One technique used to reduce these methods to practice is to introduce timeouts on certain operations. There would be a timeout for the replica trying to become leader, a timeout for the non-leader replicas watching for a leader replica that has gone offline, and a timeout for the phase 2 rounds of voting. There is an important detail in the last timeout though - the better implementations of Paxos allow multiple instances to reach consensus in parallel (without picking a fixed factor of concurrency, as Stoppable Paxos describes). But it simply takes longer, even if purely by network latency, to drive more instances to consensus. So in the end, you either vary your timeout in proportion to the number of requests or you limit the number of concurrent requests so that the timeout is sufficient.
  7. If you're writing a server that spends most of its time waiting for i/o, it's worth considering asynchronous code. Even having a thread wait for a timeout can be a waste of that thread's resources. It's easier to write asynchronous code using a coroutine style of control flow; for example, async and await in the Async CTP.

    But if you don't have special support in the debugger for stacks ripped apart across asynchronous i/o, it can be perplexing to break in the debugger and find no user code running and all threads waiting on i/o completion ports. The call stacks are useless for explaining the current state of the program.
    Operations that take long periods of wall clock time (even if the cpu is efficiently waiting) need to be logged. The most obvious examples are timeouts and exponential backoffs - code that does nothing and has no visible effect other than the passage of time.
  8. When I was writing my implementation of Paxos, printf-style logging of the message exchanges among the agents in my unit tests was my most valuable tool for tracking down violations of consensus. But logging every message was an overload of information, and it made for a tedious experience. It helped tremendously to have perfectly repeatable tests, allowing me to set conditional breakpoints and debug a specific request's pass through the system. Having a request id passed end-to-end through the system made it even easier to follow the messages exchanged across different replicas.
  9. Some sections of code can take far longer to execute than you would ever expect. Of course, you should run a profiler before optimizing the code (though I'll bet serialization and deserialization is at the top of your profile), but running a profiler in production is not practical. Production isn't usually set up with development tools, and collecting massive amounts of data affects performance too.

    We can easily time small sections of code. But reporting the mean and standard deviation of a normal distribution forced onto the samples collected may mislead, mostly because the distribution of execution times may not fit a normal distribution.

    I'm going to suggest something even cheaper that I've used with success: time sections of your code that you expect to take only tens of milliseconds, and log a message if they take longer than one second. I think you would be surprised at the variability of the execution time of a single section of code.

    The rationale and justification for this approach is covered in Amazon's paper on Dynamo. You care much more about the 99th percentile response time for a web service than the mean response time. Hard real-time systems even impose deadlines on task completion, disallowing the kind of variability in execution time that most profilers simply ignore.
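
    A minimal version of that measure-and-log-if-slow helper, written as a disposable so it wraps a section of code cleanly (the label, the threshold, and the Trace-based logging are placeholders for whatever you already use):

    public sealed class ExpectedDuration : IDisposable
    {
        private readonly string _label;
        private readonly TimeSpan _threshold;
        private readonly Stopwatch _stopwatch = Stopwatch.StartNew();

        public ExpectedDuration(string label, TimeSpan threshold)
        {
            _label = label;
            _threshold = threshold;
        }

        public void Dispose()
        {
            _stopwatch.Stop();
            // Only the surprising cases get logged, so the log stays quiet in the common case.
            if (_stopwatch.Elapsed > _threshold)
            {
                Trace.WriteLine(string.Format("{0} took {1} (expected well under {2})", _label, _stopwatch.Elapsed, _threshold));
            }
        }
    }

    // Usage:
    using (new ExpectedDuration("serialize phase 2 proposal", TimeSpan.FromSeconds(1)))
    {
        // a section of code expected to take tens of milliseconds
    }
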
  10. HTTP has proven to be a very useful transport protocol on top of TCP, and most HTTP client libraries (even through RPC mechanisms like WCF) provide http method invocation timeouts. But that's just the client-side of the operation - the server-side of the operation isn't always aware that the client has disconnected, and so you need to manually cancel the server-side code after some timeout. .NET provides a convenient structure for cooperative cancellation through the CancellationToken.
    All server-side operations need their own timeout logic - this is not taken care of automatically by the typical server framework. In some situations you may be querying an underlying SQL database, which will have its own timeout that throws an exception, unwinds that server call stack, and releases resources. But when you're implementing something like Paxos, there are times when the server's logical thread of control is waiting for an event that may never occur (for example, a state machine's request to replicate a command in the log). When the client has already given up, you must ensure the server isn't continuing for no reason.
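
    As a sketch, the "wait for an event that may never occur" case can look like this (the _pendingWaiters dictionary and the 30 second deadline are assumptions for illustration):

    // Wait until the learner reports consensus for this instance, but give up when the
    // token is canceled so the server-side operation never waits forever.
    public Task<Proposal> WaitForConsensusAsync(int instance, CancellationToken token)
    {
        var completion = new TaskCompletionSource<Proposal>();
        _pendingWaiters[instance] = completion;              // completed later by the learner agent
        token.Register(() => completion.TrySetCanceled());   // cooperative cancellation
        return completion.Task;
    }

    // On the server side of the request, pair the wait with its own deadline:
    var cancellation = new CancellationTokenSource();
    var deadline = new Timer(_ => cancellation.Cancel(), null, TimeSpan.FromSeconds(30), TimeSpan.FromMilliseconds(-1));
    Task<Proposal> pending = WaitForConsensusAsync(instance, cancellation.Token);
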
  11. In Paxos Made Simple, Lamport mentions executing phase 1 for "infinitely many instances" (the 0-1-infinity rule comes to mind).

    In practice, that means a proposer that wishes to lead values to consensus in phase 2 needs to ask each acceptor the list of previous values they have accepted for all [infinitely many] instances (positions in the transaction log). "Infinitely many" is too large of a network message, which brings me to the lesson: don't allow completely variable resource usage.

    You can see examples of throttles in most systems: for example, IIS has configurable http request size limits and WCF has configurable message size quotas. But Paxos brings up some issues in protocol design that I think are worth mentioning.

    1. A leader replica may have a very large number of instances it would like to submit to the log. So the leader may try to execute phase 2 with so many instances that timeouts are hit.

      To avoid these variable size messages, we simply cap the number of phase 2 submissions that may execute concurrently within a timeout period.

    2. If a new replica joins a set of replicas that share a very long transaction log and tries to become leader without any knowledge of the log's state, then the acceptors that intend to promise their allegiance to that leader must send it all of the previous instances for which they have accepted values.

      But if a replica is added to the cluster, it's likely to be missing all of the log data from any snapshot up to the latest position in the log, and one of the first things it may choose to do is try to become leader. The protocol can handle that situation, at the cost of needing to update that replica to the latest position in the log - and that can be too much data to send at once.

      To avoid these large messages, we can reject leadership requests if the requesting replica is lagging by too many instances, and count on the gossiping protocol to catch that replica up.

    3. It's pretty inefficient to have a replica send all of its accepted instances to a prospective leader when it's likely that consensus has already been reached for many of those instances.

      To avoid these large messages, when a replica tries to become leader, it can efficiently encode the instance positions it already knows about into a run-length encoded string, and the responding replica need only send the instance positions of which the prospective leader is unaware. Here, we rely on a property of Paxos: the value chosen for an instance that reaches consensus will remain the same for all time across all replicas.
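
      The encoding itself can be as simple as a list of (start, count) runs; a sketch:

      // Encode a sorted list of instance numbers as (start, count) runs,
      // e.g. [1,2,3,7,8] becomes [(1,3), (7,2)].
      public static List<KeyValuePair<int, int>> RunLengthEncode(IEnumerable<int> sortedInstances)
      {
          var runs = new List<KeyValuePair<int, int>>();
          int start = 0, count = 0;
          foreach (int instance in sortedInstances)
          {
              if (count > 0 && instance == start + count)
              {
                  count++;                                          // extends the current run
              }
              else
              {
                  if (count > 0) runs.Add(new KeyValuePair<int, int>(start, count));
                  start = instance;
                  count = 1;                                        // begins a new run
              }
          }
          if (count > 0) runs.Add(new KeyValuePair<int, int>(start, count));
          return runs;
      }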


  12. Flow control is a complicated but necessary aspect of any server API design. "Fire and forget" APIs practically invite a client to overwhelm a server with requests, and bring to mind TCP congestion control and senders overwhelming receivers.
    1. Randomized exponential backoff is the classic method for flow and congestion control in networks.
    2. When there is cost to making a network request (always), batching is another classic method to help minimize the resources needed for the server to respond to client requests.
    3. It's pretty common to want to perform an action on a long list of items, and we know it's too costly to spin up a thread per item. So we use thread and task pools to queue up the work, and if the work is cpu-bound with no serialized portions (Amdahl's law talks about serial versus parallel phases of work), we expect approximately as many active threads as cpu cores - maximizing the throughput of the machine's cpu resources. But what if the work involves i/o? If we were writing a web crawler, would we really want to initiate as many http requests as our cpus allow, all at once? No, we want to limit the concurrency of the work, including the i/o operations. So instead of using a low-level concurrency-limited scheduler, I've written helper functions that allow me to run an entire operation involving asynchronous i/o while limiting the concurrency of these operations to a specific number, as in the sketch below.
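
    A sketch of one such helper, built only on Task continuations (the names are mine, and error handling is left out to keep it short):

    // Run an asynchronous operation over many items with at most maxConcurrency operations
    // in flight, by starting maxConcurrency worker "chains" that each pull from a shared queue.
    // (Error handling is omitted: exceptions from a faulted operation are not propagated here.)
    public static Task ForEachWithConcurrencyLimit<T>(IEnumerable<T> items, int maxConcurrency, Func<T, Task> operation)
    {
        var queue = new ConcurrentQueue<T>(items);
        var workers = new Task[maxConcurrency];
        for (int i = 0; i < maxConcurrency; i++)
        {
            workers[i] = ProcessQueue(queue, operation);
        }
        return Task.Factory.ContinueWhenAll(workers, _ => { });
    }

    private static Task ProcessQueue<T>(ConcurrentQueue<T> queue, Func<T, Task> operation)
    {
        T item;
        if (!queue.TryDequeue(out item))
        {
            var done = new TaskCompletionSource<object>();
            done.SetResult(null);
            return done.Task;
        }
        // Unwrap turns the Task<Task> from ContinueWith into a single Task for the whole chain.
        return operation(item).ContinueWith(_ => ProcessQueue(queue, operation)).Unwrap();
    }
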
  13. Paxos implementations tend to amortize the cost of phase 1 (i.e. leader election) across many phase 2 executions (i.e. proposal consensus). We can have the current leader refresh its mandate (by repeating phase 1) at some small interval, and the other replicas can watch for the current leader's refresh with a timeout of a larger interval - if the current leader is "stale," another replica can try to become leader. It's also the case that a replica would want to become leader if it has proposals it would like to drive to consensus. But part of the Paxos protocol is that when a replica wants to become leader, it must be made aware of all proposals accepted by other replicas in previous rounds so that the new leader can drive those proposals to consensus - and in a practical sense, that's an unbounded amount of data. So what can be done? Well, when a replica wants to become leader, it can also state which instances it already knows have been driven to consensus, so that responding replicas need not include those instances. And more importantly, replicas that are too far behind in their knowledge of the consensus of instances in the log need to have their bid for leadership rejected until the gossip protocol can bring them up to date. Instead of amortizing the cost of leader election across many instances, we could bundle the leader election into phase 2 for the previous instance; unfortunately, that serializes consensus for a sequence of instances.
  14. Paxos does not really fit in the family of distributed system approaches called "eventually consistent". Reads of the underlying state are scheduled through the log, and because they execute on a quorum of replicas, you know that you are getting the actual state of the replicated data at that point in the transaction log. But coming to consensus just for a read involves at minimum an execution of phase 2 and network communication. So there are some knobs you can turn to trade off consistency for performance - for example, you can perform dirty reads at whichever replica the load balancer happens to direct the client's request to. The nice thing about Paxos is that you can easily monitor both the lag in driving proposals to consensus and the lag in executing those proposals once they have reached consensus in the transaction log. And so we find dirty reads to be less of an issue than might be expected.
  15. Mike Acton asks a question that's worth considering in API design: for all of the parameters X of type T to a method, when will you ever have only one T? In many cases, you will have a collection of X that all need to be processed in the same way.

    Game developers, optimizing for cpu performance, are thinking about hitting the memory wall, where it doesn't matter how many instructions your cpu can retire because you can't interact with memory fast enough.

    But at a larger scale, calls to networked services are the wall against which single-machine performance suffers. Imagine the code for a dynamic HTML page that runs many queries against a database, and let's ignore caching layers like memcached for now. Connection pooling can remove the latency of connection setup, but the ideal design is that a single HTML page executes a single network request of a batch of queries against the database.

    But your system isn't designed that way - in fact, even though page construction leverages multiple cpus through concurrent code, most of the interfaces are constructed to execute a single operation at a time. Given the constraint of those single-item interfaces, one solution is to introduce a bit of latency to make batching operations possible.

    As each request comes in, it's added to a queue. If the queue was empty, a timer is started to send off a batch of requests together; if the queue fills up, the batch is sent immediately. It's a simple workaround for interfaces that aren't designed with batching in mind - a sketch follows.
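
    A sketch of that batching queue (the batch size, the delay, and the SendBatch placeholder are illustrative):

    public sealed class Batcher<T>
    {
        private readonly object _gate = new object();
        private readonly List<T> _pending = new List<T>();
        private readonly int _maxBatchSize = 64;
        private readonly TimeSpan _maxDelay = TimeSpan.FromMilliseconds(5);
        private Timer _timer;

        public void Add(T request)
        {
            List<T> batch = null;
            lock (_gate)
            {
                _pending.Add(request);
                if (_pending.Count == 1)
                {
                    // First item into an empty queue: start the clock on this batch.
                    _timer = new Timer(_ => _Flush(), null, _maxDelay, TimeSpan.FromMilliseconds(-1));
                }
                else if (_pending.Count >= _maxBatchSize)
                {
                    batch = _TakeBatch();
                }
            }
            if (batch != null) SendBatch(batch);   // send outside the lock
        }

        private void _Flush()
        {
            List<T> batch;
            lock (_gate) { batch = _TakeBatch(); }
            if (batch.Count > 0) SendBatch(batch);
        }

        private List<T> _TakeBatch()
        {
            var batch = new List<T>(_pending);
            _pending.Clear();
            if (_timer != null) { _timer.Dispose(); _timer = null; }
            return batch;
        }

        private void SendBatch(List<T> batch)
        {
            // Placeholder: issue one network request carrying the whole batch.
        }
    }
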
  16. I implemented Stoppable Paxos to manage group membership, hooking it up to the RoleEnvironment.Changed event to submit state machine configuration changes to the replica set. A few details are worth mentioning (other than the large subject of how to handle log truncation and snapshots in the face of configuration changes).
    1. First, you need an initial configuration. If a replica starts off with empty paxos storage, we can initialize instance 0 to some configuration "change" that represents the initial state of the cluster - something needs to start the cluster, and the cluster itself certainly cannot come to a quorum on membership changes before it exists.
    2. Second, the notion of a replica node's identity in the cluster is tied to the stable storage of the promises its acceptor made in phases 1 and 2. So if the paxos log is lost, that replica has left the cluster and needs to rejoin. This is a bit of a pain. One alternative that seems to work well: since stoppable paxos won't let a replica's acceptor vote until it's a member of the cluster, you know it can't go back on previous promises from phases 1 and 2 either, so you may be able to skip changing the node's identity when the paxos log is "lost".
  17. I read about the End to end principle in my networks graduate course, and I continually see applications of it in my work. For example, I had to generate a unique GUID for each paxos proposal so that a replica could determine whether one of its proposals had been driven to consensus by another replica temporarily becoming the leader (since the proposal commands may have been equal). But initially that proposal GUID was assigned within the paxos layers, and it would have been better to assign the GUID at the client constructing the proposal to submit to paxos. Why? Because the client may need to retry the whole initiation of the proposal to a remote paxos cluster, and having that GUID allows duplicate proposal submissions to be prevented in the face of retries.
  18. The papers on paxos don't really speak about how replicas can be "caught up" after restarting or after restoring a snapshot without the preservation of the paxos stable storage, but there are some important details. In a similar way to how a prospective leader can include a compact representation of the instances it knows have reached consensus in its phase 1 request message (with the benefit that replicas promising to follow that leader, and not accept proposals for smaller rounds, need only send a much smaller subset of their previously accepted proposals as constraints for the new leader), we can design a gossip anti-entropy protocol to fill in the gaps in a replica's view of the transaction log. Each replica simply sends a compact form of the instance positions that it knows have reached consensus to each other learner replica in the current configuration, and if a replica receives a gossip message including an instance number that it isn't aware of, it requests that data from the sender. The twist is that these requests can also be unbounded and need to have some reasonable limit of concurrency placed on them.
  19. Paxos is for building a replicated transaction log, usually composed of operations that are applied to a data store of some type. Generally you want to make periodic "snapshot" backups of this data store, because rebuilding the entire store from the entire transaction log is too expensive in terms of time and space. These backups can be used to quickly bring up new replicas as needed for load or reliability. Once the backup has been performed (and usually moved to some longer term durable storage), the paxos log can be truncated. There are tons of choices here: do you store the current configuration in the snapshot? How do you perform a backup that doesn't "stop the world" preventing new writes to your store? How do you deal with corruption of the underlying store? What if the binary format of the store changes?
    Paxos is just the start of a replicated fault tolerant system - I hope I've covered some interesting details here, but the logistics of building a full system will lead to many more challenges.

Tuesday, August 23, 2011

Paxos, Reactive Framework, Nondeterminism and Design for Testability

For a project at work, I implemented the Paxos distributed consensus algorithm.

When I first started the project, I needed a mechanism for composable filters over a shared IP port, so that I could execute multiple copies of Paxos on the same port. The Reactive Extensions met that need, but then it was very natural to express the Paxos agents (e.g. proposer, acceptor, learner) in terms of an actor model using Observable.CreateWithDisposable, and then the Paxos agents were also composable out of smaller parts as well (e.g. WhenMajorityHasQuorum, Phase1Lead, Phase2Propose).

Everything is exposed through a replicated StateMachine with a user-provided “Execute for the next position in the replicated transaction log” method:

public abstract Task ExecuteAsync(int instance, Command command);

and the reactive extensions helped here too by providing a bridge to Tasks.

The test scheduler (and a specific random seed for randomized exponential backoff) were very useful for running deterministic unit tests, and through the composable nature of the message handling, it was trivial to add a “noisy observable” to simulate packet loss and duplication to stress test the implementation. One little issue I found was that the conversion from Tasks to IObservable was not deterministic, and I needed a slightly modified ToObservable that passed TaskContinuationOptions.ExecuteSynchronously to Task.ContinueWith:

public static IObservable<T> ToObservable<T>(this Task<T> task)
{
    return Observable.CreateWithDisposable<T>(observer =>
        {
            var disposable = new CancellationDisposable();

            task.ContinueWith(antecedent =>
            {
                switch (antecedent.Status)
                {
                    case TaskStatus.RanToCompletion:
                        observer.OnNext(task.Result);
                        observer.OnCompleted();
                        break;
                    case TaskStatus.Canceled:
                        observer.OnCompleted();
                        break;
                    case TaskStatus.Faulted:
                        observer.OnError(antecedent.Exception);
                        break;
                }
            }, disposable.Token, TaskContinuationOptions.ExecuteSynchronously, TaskScheduler.Current);

            return disposable;
        });
}

Instant Karma Links

I enjoy reading reddit. I have seen some links many times in the past, and I expect to see them again in the future.

Unskilled and Unaware of It

Dunbar's number

Can You Say ... "Hero"?

Stanley Milgram was responsible for the Small World Experiment (i.e. six degrees of separation) and Milgram experiment on obedience to authority figures.

Unhappy Meals

Norman Borlaug

Simo Häyhä

Franz Stigler

North Korea is Dark

Shadow Caravan

snopes

Bobby Tables

Have you ever tried to sell a diamond?

The Difference between the United Kingdom, Great Britain, England, and a Whole Lot More

The Man Who Had HIV and Now Does Not

Mt. Everest has around 200 dead bodies (pics)

Monty Hall Problem

How Not to Sort By Average Rating

Birthday Paradox

Netflix Instant Watcher

Henrietta Lacks’ ‘Immortal’ Cells

Joe Ades

Drowning Doesn’t Look Like Drowning

Warning Signs in Experimental Design and Interpretation

Lies, Damned Lies, and Medical Science

C# Asynchronous Task Mutual Exclusion

If you're excited as I am about Visual Studio Asynchronous Programming, you may already be using an iterator method (such as Asynchronous methods, C# iterators, and Tasks) to simulate user-space threading. But once you have shared-state concurrency, you may want mutual exclusion.

Generally the performance recommendation for shared-state concurrency is to only take locks for a short period, especially not across an asynchronous operation (there is a correctness slant as well: minimizing lock time minimizes the risk of deadlock due to different lock acquisition orders). But sometimes you want to serialize asynchronous work - for example, lazy asynchronous initialization of a resource or serializing writes to a TCP network stream.

C# has the lock keyword allowing mutual exclusion using any reference object as the gate. In this example, I start 10 tasks that perform asynchronous "work" protected by mutual exclusion. This work is simulated by waiting 10 milliseconds with a timer, but it could represent any sort of asynchronous effort, such as i/o.

private object _gate = new object();

private IEnumerable<Task> _TestMutexFail()
{
    lock (_gate)
    {
        using (var task_wait = Task.Factory.StartNewDelayed(10))
        {
            yield return task_wait;
            task_wait.Wait();
        }
    }
}

private Task TestMutexFail()
{
    return Task.Factory.Iterate(_TestMutexFail());
}

[TestMethod]
public void TestAsyncMutexFail()
{
    var tasks = Enumerable.Range(0, 10).Select(i => TestMutexFail()).ToArray();
    Task.WaitAll(tasks);
}

But since the thread running the timer callback is likely not the same thread that initiated the timer (through StartNewDelayed), you'll see a System.Threading.SynchronizationLockException with the message "Object synchronization method was called from an unsynchronized block of code." This is the same result you would expect when an i/o completion port thread executes the continuation that represents the completion of an i/o task initiated by your asynchronous operation.

So what is a solution? I've had good success with a simple class such as the following.

public sealed class AsyncMutex
{
    public Task<IDisposable> AcquireAsync()
    {
        var task_completion_source = new TaskCompletionSource<IDisposable>();
        lock (_mutex)
        {
            bool can_obtain_lock = (_waiters.Count == 0);
            _waiters.Enqueue(task_completion_source);
            if (can_obtain_lock)
            {
                _PassTheLock();
            }
        }
        return task_completion_source.Task;
    }

    public void Release()
    {
        lock (_mutex)
        {
            _waiters.Dequeue();
            _PassTheLock();
        }
    }

    public void Cancel()
    {
        lock (_mutex)
        {
            foreach (var waiter in _waiters)
            {
                waiter.TrySetCanceled();
            }
            _waiters.Clear();
        }
    }

    public object SynchronousMutex
    {
        get
        {
            return _mutex;
        }
    }

    private void _PassTheLock()
    {
        lock (_mutex)
        {
            bool have_waiters = (_waiters.Count > 0);
            if (have_waiters)
            {
                var next_acquirer = _waiters.First();
                next_acquirer.TrySetResult(new AsyncLock(this));
            }
        }
    }

    private sealed class AsyncLock : IDisposable
    {
        private readonly AsyncMutex _async_mutex;

        public AsyncLock(AsyncMutex async_mutex)
        {
            _async_mutex = async_mutex;
        }

        public void Dispose()
        {
            _async_mutex.Release();
        }
    }

    private readonly object _mutex = new object();
    private readonly Queue<TaskCompletionSource<IDisposable>> _waiters = new Queue<TaskCompletionSource<IDisposable>>();
}

This AsyncMutex enables mutual exclusion across asynchronous tasks. It fits well with the iterator-based methods for composing asynchronous tasks.

private AsyncMutex _mutex = new AsyncMutex();
private int _concurrency_count = 0;

private IEnumerable<Task> _TestAsyncMutexPass()
{
    using (var task_acquire = _mutex.AcquireAsync())
    {
        yield return task_acquire;
        using (task_acquire.Result)
        {
            Assert.AreEqual(1, Interlocked.Increment(ref _concurrency_count));

            using (var task_wait = Task.Factory.StartNewDelayed(10))
            {
                yield return task_wait;
                task_wait.Wait();
            }

            Assert.AreEqual(0, Interlocked.Decrement(ref _concurrency_count));
        }
    }
}

private Task TestAsyncMutexPass()
{
    return Task.Factory.Iterate(_TestAsyncMutexPass());
}

[TestMethod]
public void TestAsyncMutex()
{
    var tasks = Enumerable.Range(0, 10).Select(i => TestAsyncMutexPass()).ToArray();
    Task.WaitAll(tasks);
}

By the way, if you're just using a Task to lazily initialize a resource, I can recommend the ParallelExtensionsExtras Tour - #12 - AsyncCache blog post as another method to accomplish the same goal.

Best Effort Cloning in C#
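
A small helper: if the type implements ICloneable, use its Clone; otherwise fall back to the protected Object.MemberwiseClone (a shallow copy) invoked through reflection.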

private static readonly MethodInfo _method_memberwise_clone = typeof(object).GetMethod("MemberwiseClone", BindingFlags.Instance | BindingFlags.NonPublic);

public static T Clone<T>(T item)
{
    var cloneable = item as ICloneable;
    if (cloneable != null)
    {
        return (T)cloneable.Clone();
    }
    else
    {
        return (T)_method_memberwise_clone.Invoke(item, null);
    }
}

BadImageFormatException with ASP.NET and native dlls

There are times when you would like to use a native DLL with ASP.NET, and the ASP.NET compilation process fails with a BadImageFormatException because the compiler and the native DLL are different architectures (e.g. one is x86, and the other amd64) and the compiler will try to open all DLLs in the bin directory. Here is the web.config magic to avoid this issue. In this example, native.dll is the DLL I'd like the ASP.NET compiler to ignore.

<?xml version="1.0"?>
<configuration>
  <system.web>
    <compilation debug="false">
      <assemblies>
        <remove assembly="native"/>
      </assemblies>
    </compilation>
  </system.web>
</configuration>