- Java Concurrency and Multithreading Tutorial
- Classic Multi-threaded Architecture
- Single-threaded / Same-threaded Architecture
- Single-threaded Architecture Challenges
- Thread Loops
- Two Types of Tasks
- Singlethreaded Task Switching
- Combining Repeated Tasks and One-off Tasks
- Task Balancing
- Scaling Single-threaded Concurrency
Single-threaded concurrency means making progress on more than one task seemingly at the same time, from within a single thread. On the surface, single-threaded concurrency may sound like a bit of an oxymoron. In classic multi-threaded architectures, multiple tasks are divided among multiple threads for concurrent execution, so the switching between different tasks is accomplished by the OS and CPU switching between different threads. However, a single thread can in fact make progress on multiple tasks at what seems to be the same time. In this single-threaded concurrency tutorial I will explain how, and what benefits a single-threaded concurrency design gives.
Please note: This tutorial is still work in progress. More will be added in the near future!
Classic Multi-threaded Architecture
In a classic multi-threaded architecture you will typically assign each task to a separate thread for execution. Each thread only executes a single task at a time. In some designs a new thread is created for each task, and the thread dies once the task is completed. In other designs a pool of threads is kept alive; each thread takes one task at a time from a task queue, executes it, then takes another task, etc. See my tutorial about thread pools for more information.
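As a minimal sketch of the thread pool variant, the tasks can be handed to a `java.util.concurrent.ExecutorService`, which keeps a pool of worker threads alive and feeds them tasks from an internal queue (the class and method names below are my own, for illustration):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ThreadPoolExample {

    // Submits taskCount independent tasks to a pool of 4 worker threads and
    // waits for them all to complete. Returns the number of tasks executed.
    static int runTasks(int taskCount) {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        AtomicInteger executed = new AtomicInteger();

        for (int i = 0; i < taskCount; i++) {
            // Each submitted task is executed by whichever worker thread
            // takes it from the internal task queue.
            pool.submit(() -> { executed.incrementAndGet(); });
        }

        pool.shutdown();  // stop accepting new tasks
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);  // wait for queued tasks
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return executed.get();
    }

    public static void main(String[] args) {
        System.out.println("Tasks executed: " + runTasks(10));
    }
}
```

Note how the tasks above share no data, so no synchronization is needed; the moment they start sharing data structures, the problems discussed below appear.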
Multi-threaded architectures have the advantage that it is relatively easy to distribute the workload across multiple threads and multiple CPUs. Just hand the task to a thread and let the OS / CPU schedule that thread onto a CPU.
However, if the tasks being executed need to share data, a multi-threaded architecture can lead to many concurrency problems, such as race conditions, deadlock, starvation, slipped conditions, nested monitor lockout, etc. In general, the more threads share the same data and data structures, the higher the probability that a concurrency problem will occur - and the more you need to scrutinize your design.
A classic multi-threaded architecture can also sometimes lead to congestion when multiple threads try to access the same data structure at the same time. Depending on how well a given data structure is implemented some threads may be blocked waiting for access while other threads are accessing the data structure.
Single-threaded / Same-threaded Architecture
The alternative to a classic multi-threaded architecture is a single-threaded or same-threaded architecture. By having only a single thread execute all tasks in your application, you completely avoid all the concurrency problems listed in the previous section.
You can scale a single-threaded architecture up to use multiple threads, where each thread behaves as if it was a separate, isolated single-threaded system. In that case I refer to the architecture as same-threaded. All data needed to execute the task is still kept isolated within a single thread - within the same thread.
Single-threaded Architecture Challenges
If you only have a single thread executing all tasks of an application, that can result in a few problems:
- Blocking IO operations from within a task will block the thread and thus the whole application.
- Long-running tasks may unacceptably delay the execution of other tasks.
- A single thread can only utilize a single CPU.
It is possible to solve each of these problems without losing the simplicity of the single-threaded concurrency architecture, and without overly complicating the overall design.
Thread Loops
Most long-running applications execute in some kind of loop, where the main application thread waits for input from outside the application, processes that input, and goes back to waiting.
This kind of thread loop is used both in server applications (web services, services, etc.) and in GUI applications. Sometimes this thread loop is hidden from you. Sometimes not.
Pausing the Thread Loop
You might wonder if a thread executing in a tight loop over and over will waste a lot of CPU time. If the thread is running without having any real work to do, then yes, that may waste a lot of CPU time. However, the thread executing the loop is free to "sleep" if it estimates that sleeping a few milliseconds would be okay. That way the CPU time waste can be decreased.
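A rough sketch of such a thread loop could look as follows. The `ThreadLoop` class and its `BooleanSupplier`-based work callback are my own invention for illustration: the callback returns true when it found real work to do, and the loop only sleeps when the previous iteration was idle:

```java
import java.util.function.BooleanSupplier;

public class ThreadLoop implements Runnable {

    // The work to attempt each iteration; returns true if real work was done.
    private final BooleanSupplier work;
    private volatile boolean stopped = false;

    public ThreadLoop(BooleanSupplier work) {
        this.work = work;
    }

    public void stop() {
        this.stopped = true;
    }

    @Override
    public void run() {
        while (!stopped) {
            boolean didWork = work.getAsBoolean();
            if (!didWork) {
                try {
                    Thread.sleep(5);  // idle: sleep a few ms to save CPU time
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }
    }
}
```

A real application would pass in a callback that e.g. polls for inbound network data, and call `stop()` when the application shuts down.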
Two Types of Tasks
A thread loop typically executes two types of tasks during its lifetime:
- Repeated tasks
- One-off tasks
Both of these task types will be explained in more detail in the following sections.
A repeated task is a recurring task that is executed again and again during the lifetime of the thread that executes it. Typically, a repeated task is executed fully for each invocation of the task.
An example of a repeated task is checking for incoming data on a set of inbound network connections. If any incoming data is detected it will be processed, and after processing the repeated task is done - for this particular invocation. However, checking for inbound data needs to be repeated again and again for the application to be able to respond to incoming data continuously.
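As a sketch, a repeated task could be modeled like this. The `RepeatedTask` interface and the `InboundDataCheck` class are hypothetical names of my own; a plain in-memory queue stands in for the set of inbound network connections:

```java
import java.util.ArrayDeque;
import java.util.Queue;

// A repeated task: its execute() method is called again and again by the
// thread loop; each invocation runs the task fully, then returns control.
interface RepeatedTask {
    void execute();
}

// Example: process whatever messages have arrived since the last invocation.
public class InboundDataCheck implements RepeatedTask {
    private final Queue<String> inbound = new ArrayDeque<>();
    private int processedCount = 0;

    // Simulates data arriving on an inbound connection.
    public void receive(String message) {
        inbound.add(message);
    }

    @Override
    public void execute() {
        String message;
        while ((message = inbound.poll()) != null) {
            processedCount++;  // "process" the message, then check for more
        }
        // No pending input left - this invocation of the task is done.
    }

    public int getProcessedCount() {
        return processedCount;
    }
}
```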
A one-off task is a task that only needs to be executed once. A one-off task can either be short-running or long-running.
A short-running task is a task that is short enough to complete in a single execution phase without delaying the thread executing the task from the other duties that thread has (the other tasks it has to execute).
A one-off long-running task is a task which takes too long to complete in a single execution phase. By "takes too long" I mean that executing the total amount of work in the task would take up too much of the thread's time, so that other repeated tasks or one-off tasks would be delayed so much that the overall responsiveness of the application is harmed.
To avoid taking up too much of a thread's execution time with a single long-running task, the total work required to complete the task is broken down into smaller chunks which can be executed one chunk at a time. Each chunk must be small enough to not delay the other tasks the thread needs to execute too much.
Long-running tasks keep track of their execution chunks internally. The thread executing a long-running task will call its execution method multiple times until all the task chunks have been fully executed. In between calls to the execution method of a particular long-running task, the thread may call the execution methods of other long-running tasks, other repeated tasks, or whatever other duties the thread has.
A one-off long-running task could be processing N files in a directory. Rather than processing all N files in a single execution phase, the processing of the N files can be broken up into smaller chunks, each of which is processed in a single execution phase. For instance, 1 file could be processed per execution phase. To process all N files, the task's execution method must then be called N times.
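A chunked one-off task along those lines could be sketched like this (the `ProcessFilesTask` class is my own illustration; the actual file processing is left as a comment):

```java
import java.util.List;

// A one-off long-running task that processes a list of files one chunk
// (here: one file) per call to execute(), and reports when it is done.
public class ProcessFilesTask {
    private final List<String> files;
    private int nextFileIndex = 0;  // internal record of how far the task has come

    public ProcessFilesTask(List<String> files) {
        this.files = files;
    }

    // Executes a single chunk of the total work.
    public void execute() {
        if (!isDone()) {
            String file = files.get(nextFileIndex++);
            // Processing of the file (parse, transform, etc.) would go here.
        }
    }

    public boolean isDone() {
        return nextFileIndex >= files.size();
    }
}
```

The thread loop calls `execute()` repeatedly, interleaved with its other tasks, until `isDone()` returns true.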
In a thread loop, one-off tasks are often detected and executed by a repeated task, as illustrated below.
Single-threaded Task Switching
To be able to make progress on more than one task seemingly at the same time, the thread making progress on the tasks must be able to switch between the tasks. This is also referred to as task switching.
Exactly how task switching works depends on the kind of task - whether the thread is switching between repeated tasks or one-off tasks. The general principle remains the same, though. I will explain both in more detail in the following sections.
Task Switching Between Repeated Tasks
Repeated tasks usually have a single method that is called repeatedly by the same thread. A repeated task is a task that should be repeated throughout the entire lifetime of the application, so it is never really "complete". The repeated task does what it needs to do, then exits its execution method, relinquishing control back to the calling thread.
A single thread can switch between multiple repeated tasks by calling their execution methods in a round robin fashion. First repeated task A gets a chance to execute, then B, then C, then A again etc.
In case a repeated task does not fully finish whatever work it started, it can record how far it came internally, and continue from there the next time the repeated task is called.
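The round robin calling of repeated tasks could be sketched like this (using plain `Runnable`s as the task execution methods; the `RepeatedTaskLoop` class name is my own):

```java
import java.util.List;

// A single thread switching between repeated tasks by calling their
// execution methods in a round robin fashion: A, B, C, A, B, C ...
public class RepeatedTaskLoop {
    private final List<Runnable> repeatedTasks;

    public RepeatedTaskLoop(List<Runnable> repeatedTasks) {
        this.repeatedTasks = repeatedTasks;
    }

    // Runs the given number of full rounds. A real thread loop would keep
    // looping until the application is told to shut down.
    public void runRounds(int rounds) {
        for (int round = 0; round < rounds; round++) {
            for (Runnable task : repeatedTasks) {
                task.run();  // the task does a bit of work, then returns control
            }
        }
    }
}
```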
Task Switching Between One-off Tasks
A one-off task is different from a repeated task in that a one-off task is expected to complete at some point. That means that, at some point, completed one-off tasks need to be removed from the task pool.
Other than that, switching between one-off tasks is similar to switching between repeated tasks. The executing thread calls a given one-off task's execution method, the task makes progress for a short period of time, then records internally how far it came, and then exits its execution method, relinquishing control back to the calling thread. The calling thread can now call the next one-off task in the task pool in a round-robin fashion.
After each call to a one-off task's execution method the calling thread will check if the task has completed. If it has, the one-off task is removed from the task pool.
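The description above could be sketched as follows. The `OneOffTask` interface and `OneOffTaskExecutor` class are hypothetical names of my own for illustration:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// A one-off task makes a bit of progress per execute() call, records its
// progress internally, and reports completion via isDone().
interface OneOffTask {
    void execute();
    boolean isDone();
}

// Calls each one-off task in round robin fashion; after each call the
// executor checks for completion and removes finished tasks from the pool.
public class OneOffTaskExecutor {
    private final List<OneOffTask> taskPool = new ArrayList<>();

    public void addTask(OneOffTask task) {
        taskPool.add(task);
    }

    // One full round over the task pool.
    public void executeRound() {
        Iterator<OneOffTask> iterator = taskPool.iterator();
        while (iterator.hasNext()) {
            OneOffTask task = iterator.next();
            task.execute();
            if (task.isDone()) {
                iterator.remove();  // completed tasks leave the pool
            }
        }
    }

    public int pendingTasks() {
        return taskPool.size();
    }
}
```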
Combining Repeated Tasks and One-off Tasks
In practice, an application might consist of a thread loop that calls one or more repeated tasks, and the repeated tasks may execute one-off tasks as part of their repeated behaviour. Below is a diagram illustrating that. The diagram only depicts a single repeated task, but there could be more, depending on the concrete application.
Task Balancing
When a single thread is to switch between multiple tasks, whether repeated tasks or one-off tasks, it is imperative that these tasks do not take up too much of the thread's execution time during a single call to the task. In other words, it is the responsibility of each task to help ensure fair balancing of execution time between the tasks.
Exactly how long a task should allow itself to execute is up to the designer of the system. For one-off tasks this can be a bit more complicated. Some tasks naturally finish very quickly, whereas others naturally take longer to complete. For longer-running tasks it is up to the implementer of the task to decide how to break up the work into small enough partitions, so that each partition can be executed without delaying other tasks too much.
One interesting thing to notice is that if the thread calls each one-off task in a round robin fashion, then the more one-off tasks the task executor contains, the less execution time each task gets, because it takes longer before a given task's turn comes around again.
It is possible to implement a task executor that prioritizes some tasks over others. For instance, the task executor could keep the tasks in different lists internally, and e.g. execute the tasks in the high priority list twice for each single execution of the tasks in the low priority list.
Exactly how a prioritized task executor would be implemented depends on the concrete need - as does the number of priority levels, e.g. low / high, or low / medium / high, etc.
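A two-level version of such a prioritized executor could look like this (a sketch with hypothetical names, using plain `Runnable`s as task execution methods):

```java
import java.util.ArrayList;
import java.util.List;

// Keeps tasks in two internal lists and executes the high priority list
// twice for each single pass over the low priority list.
public class PrioritizedTaskExecutor {
    private final List<Runnable> highPriority = new ArrayList<>();
    private final List<Runnable> lowPriority  = new ArrayList<>();

    public void addHighPriority(Runnable task) {
        highPriority.add(task);
    }

    public void addLowPriority(Runnable task) {
        lowPriority.add(task);
    }

    // One scheduling round: high priority tasks twice, low priority tasks once.
    public void executeRound() {
        for (int pass = 0; pass < 2; pass++) {
            for (Runnable task : highPriority) task.run();
        }
        for (Runnable task : lowPriority) task.run();
    }
}
```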
If a one-off task is waiting for some asynchronous operation to finish, e.g. a reply from a remote server, the one-off task will not be able to make any more progress until the asynchronous operation it is waiting for has finished. In that case it may not make sense to call that task over and over again just for the task to realize that it cannot make any progress and return control immediately to the calling thread.
In such a situation it may make sense for a one-off task to be able to "park" itself inside the task executor, so it is no longer being called. When the async operation finishes, the one-off task can be unparked and reinserted among the active tasks, which are called continuously to make progress. Of course, to be able to unpark a task, some other part of the system must detect that the asynchronous operation has finished, and must know which task to unpark for that asynchronous operation.
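Parking and unparking could be sketched like this (a minimal illustration with hypothetical names; real code would key parked tasks by the async operation they wait for):

```java
import java.util.ArrayList;
import java.util.List;

// A task executor where a task waiting for an async operation can be parked;
// a parked task receives no calls until some other part of the system
// unparks it (e.g. when the async reply arrives).
public class ParkingTaskExecutor {
    private final List<Runnable> active = new ArrayList<>();
    private final List<Runnable> parked = new ArrayList<>();

    public void addTask(Runnable task) {
        active.add(task);
    }

    public void park(Runnable task) {
        if (active.remove(task)) parked.add(task);  // stop calling this task
    }

    public void unpark(Runnable task) {
        if (parked.remove(task)) active.add(task);  // resume calling it
    }

    // Calls only the active tasks, round robin.
    public void executeRound() {
        for (Runnable task : new ArrayList<>(active)) task.run();
    }
}
```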
Scaling Single-threaded Concurrency
Obviously, if you have only a single thread executing within your application, you cannot take advantage of more than one CPU. The solution is to start more than one thread - typically one thread per CPU, depending on what kind of tasks your threads need to execute. If you have tasks that need to do blocking IO work, such as reading from the file system or the network, then you might need more than one thread per CPU, because each thread is blocked doing nothing while it waits for its blocking IO operations to finish.
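Starting one thread per CPU could be sketched like this, using `Runtime.availableProcessors()` to detect the CPU count (the `startSubsystems` method name is my own; each `Runnable` stands in for an isolated thread loop):

```java
public class SameThreadedStartup {

    // Starts one thread per available CPU core, each running the given
    // thread loop, and waits for them to finish. Returns the thread count.
    static int startSubsystems(Runnable threadLoop) {
        int cpus = Runtime.getRuntime().availableProcessors();
        Thread[] threads = new Thread[cpus];

        for (int i = 0; i < cpus; i++) {
            // Each thread would run its own isolated, single-threaded loop,
            // touching only data owned by that subsystem.
            threads[i] = new Thread(threadLoop);
            threads[i].start();
        }
        for (Thread thread : threads) {
            try {
                thread.join();
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
        return cpus;
    }

    public static void main(String[] args) {
        int started = startSubsystems(() ->
            System.out.println("Subsystem running in " + Thread.currentThread().getName()));
        System.out.println("Started " + started + " single-threaded subsystems");
    }
}
```

In a long-running server, the threads would of course keep looping rather than terminate; the `join` calls here are only for the demonstration.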
When you scale up a single-threaded architecture to multiple single-threaded subsystems, it is no longer technically single-threaded. However, each single-threaded subsystem will typically be designed as, and behave as, a single-threaded system. I used to refer to such multi-threaded single-threaded systems as same-threaded systems, though I am not sure that is actually the most precise term to use. We might need to revisit these different designs and come up with more descriptive terms for them in the future.