Updates on my progress

Since I started reading books on Clojure, I have also been reading other books. In addition, I have started learning the Blazor web framework for work. Overall I am still making progress.

I also realised that although I am making progress, it needs to align more with the direction I am interested in. This means I also need to practice writing more code outside of what I do at work.

On this front I am writing documentation for a Blazor project. In addition, I have started writing a quiz-conducting app in Blazor. I plan to learn more about feature folders and unit testing components.

Reading: Clojure for the Brave – ch9

Concurrency refers to managing more than one task at the same time.

Interleaving is switching between two tasks (without the first running to completion).

Parallelism refers to executing more than one task at the same time. It is a subclass of concurrency: before you execute multiple tasks simultaneously, you first have to manage multiple tasks.

It’s important to distinguish parallelism from distribution. Distributed computing is a special version of parallel computing where the processors are in different computers and tasks are distributed to computers over a network.

One of the major use cases for concurrent programming is handling blocking operations. If a task waits for a blocking operation to finish before moving on, it is executing synchronously; if it carries on without waiting, it is executing asynchronously.
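A quick illustration of the difference (my own sketch, with Thread/sleep standing in for a blocking call such as a network request):

```clojure
(defn slow-lookup []
  (Thread/sleep 2000)   ; stands in for a blocking operation
  :result)

(slow-lookup)           ; synchronous: this line doesn't return for ~2 seconds
(future (slow-lookup))  ; asynchronous: hands the work to another thread and returns a future immediately
```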

Concurrent/Parallel programming is a set of techniques for decomposing a task into subtasks that can execute in parallel and managing the risks that arise when your program executes more than one task at the same time.

When multiple tasks are being executed in parallel or interleaved, there are no guarantees for the overall execution order, so the program is nondeterministic.

Problems to handle: Reference Cells, Mutual Exclusion, and Deadlocks

The reference cell problem occurs when two threads can read and write to the same location, and the value at the location depends on the order of the reads and writes.
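A minimal sketch of the problem (my own illustration, not the book's code): two futures do a non-atomic read-then-write on the same cell, so the final value depends on how the threads interleave.

```clojure
(def cell (volatile! 0))

(defn unsafe-inc! []
  (let [v @cell]              ; read
    (Thread/sleep 1)          ; widen the window for interleaving
    (vreset! cell (inc v))))  ; write based on a possibly stale read

(let [a (future (dotimes [_ 50] (unsafe-inc!)))
      b (future (dotimes [_ 50] (unsafe-inc!)))]
  @a @b
  @cell)                      ; frequently less than 100
```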

Without a way to claim exclusive write access to a shared resource, output correctness cannot be guaranteed because instructions from different tasks will be interleaved. So whenever a shared resource is read or written, mutual exclusion should be handled.
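A sketch of what that looks like (again my own example): both futures print a line in pieces, so without exclusive access to *out* the pieces interleave. Clojure's locking macro is one way to claim exclusive access.

```clojure
(defn noisy-print [id]
  (dotimes [_ 3]
    (print (str "[" id "] "))
    (print "doing work ")
    (println "...")))

;; Without coordination the output from the two futures can interleave:
(future (noisy-print :a))
(future (noisy-print :b))

;; With a lock around the shared resource, each future finishes its printing
;; before the other starts:
(def out-lock (Object.))
(future (locking out-lock (noisy-print :a)))
(future (locking out-lock (noisy-print :b)))
```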


Futures, delays, and promises are easy, lightweight tools for concurrent programming.

When you write serial code, you bind together three events: task definition, task execution, and requiring the task’s result.

Part of learning concurrent programming is learning to identify when these chronological couplings aren’t necessary. Futures, delays, and promises allow you to separate task definition, task execution, and requiring the result.


Futures define a task and place it on another thread, without requiring the result immediately.
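For example:

```clojure
;; The body runs on another thread while the current thread carries on.
(future (Thread/sleep 4000)
        (println "I'll print after 4 seconds"))
(println "I'll print immediately")
```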

The future function returns a reference value that you can use to request the result, but if the future isn’t done computing it, you’ll have to wait.

Requesting a future’s result is called dereferencing the future, and you do it with either the deref function or the @ reader macro.
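Both forms in action:

```clojure
(let [result (future (+ 1 1))]
  (println "deref:" (deref result))
  (println "@:" @result))
;; deref: 2
;; @: 2
```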

A future’s result value is the value of the last expression evaluated in its body.

A future’s body executes only once, and its value gets cached.
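For example:

```clojure
(let [f (future (println "the body runs...") (+ 41 1))]
  [@f @f])
;; "the body runs..." is printed only once
;; => [42 42]
```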

When you dereference a future, you indicate that the result is required right now and that evaluation should stop until the result is obtained. Hence, dereferencing a future will block if the future hasn’t returned a value.
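deref also accepts a timeout in milliseconds and a value to return if the timeout elapses first, which is handy to avoid blocking forever:

```clojure
(deref (future (Thread/sleep 1000) :finished) 10 :timed-out)
;; => :timed-out
```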


Delays allow you to define a task without having to execute it or require the result immediately.

You can evaluate the delay and get its result by dereferencing it or by using force.

Like futures, a delay is run only once and its result is cached.
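Putting those three points together:

```clojure
(def greeting
  (delay (println "computing the greeting...") "hello"))

(force greeting)  ; runs the body: prints, then returns "hello"
@greeting         ; returns the cached "hello" without running the body again
```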


Promises allow you to express that you expect a result without having to define the task that should produce it or when that task should run.

You create promises using promise and deliver a result to them using deliver.

If you dereference a promise without first delivering a value, the program will block until a value is delivered, just like with futures and delays.

You can only deliver a result to a promise once.
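Putting promises together:

```clojure
(def answer (promise))

(future (Thread/sleep 100)
        (deliver answer 42))

@answer              ; blocks briefly until the future delivers, then returns 42
(deliver answer 99)  ; has no effect: a promise accepts only one delivery
@answer              ; still 42
```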

One use for promises is to find the first satisfactory element in a collection of data.
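A sketch of that pattern (my own simplified version of the idea): several futures search chunks of a collection in parallel, and whichever finds a satisfactory element first delivers it to a shared promise.

```clojure
(defn first-even [coll]
  (let [found (promise)]
    (doseq [chunk (partition-all 3 coll)]
      (future
        (when-let [hit (first (filter even? chunk))]
          (deliver found hit))))
    (deref found 1000 nil)))     ; wait up to a second, otherwise give up with nil

(first-even [1 3 5 7 8 9 10])    ; => 8 or 10, whichever future delivers first
```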


For a concurrent program to function correctly, the tasks need to be split into a serial portion and a parallel portion.

Instead of running everything serially, the concurrent parts can run in parallel while the serial parts (the ones that access a shared resource) are coordinated to execute in sequence.
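A small sketch of that split (my own example, not the book's code): the slow work runs concurrently in futures, while printing, which touches the shared *out*, is kept serial and in the original order by dereferencing the futures in sequence.

```clojure
(defn process-in-order [words]
  (let [results (mapv (fn [w]
                        (future (Thread/sleep (rand-int 200))  ; parallel part
                                (.toUpperCase ^String w)))
                      words)]
    (doseq [r results]                                         ; serial part
      (println @r))))

(process-in-order ["one" "two" "three"])
;; ONE
;; TWO
;; THREE  (always in this order, even though the futures finish in any order)
```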