Just like UIKit, SwiftUI is implemented on top of an event loop that dispatches messages to your UI code. The UI code in turn may trigger parts of the screen to be re-rendered. The handling of messages and the rendering of graphics on screen form the render loop of an application. All UI frameworks are based on a render loop; in SwiftUI, it is just hidden particularly well. Most of the time, it works under the hood without us needing to know anything about it. It is amazing how we can write UI code without even needing to understand what an event loop is, and without worrying about how often to render screen content. But in some cases, it is useful to know what is happening behind the scenes.
We will first look at a number of examples of such cases where it is useful to know how the SwiftUI render loop works. Then we will explore the render loop in more detail and ask questions such as: when exactly is the body of a SwiftUI view evaluated? Not "when" as in under what circumstances, but as in: at which point in time? Is a view always drawn on screen immediately after its body is evaluated? How closely are body evaluation and screen rendering related? We sometimes use the word "render" for evaluating the body of a view; does that even make sense?
In SwiftUI, we do not have all the view lifecycle notifications that we know from UIKit. If we want to perform an action when a view appears, there is only a single function we can use: onAppear. But when exactly is it called? Is it called before the view is rendered and made visible on screen, like viewWillAppear? And if so, can we rely on that?
Take the following view and view model:
import SwiftUI

class ViewModel: ObservableObject {
    @Published var statusText: String = "invalid"

    func fetch() {
        self.statusText = "loading"
        // ...
    }
}

struct ContentView: View {
    @StateObject var model = ViewModel()

    var body: some View {
        Text(model.statusText)
            .padding()
            .onAppear { model.fetch() }
    }
}
The view is only ready for display after the fetch function of the view model is called in onAppear. If we try it out, it looks fine: the label shows "loading" as soon as the app starts. We never see "invalid", not even for a split second. But how reliable is that? What happens on a slow iPhone, or on a really fast iPhone that is able to render more frames? Can we be unlucky with the display refresh rate, so that the label briefly flashes "invalid" before it changes? Also, could this cause issues if we add transitions? And how inefficient is this? We can see that the body will be evaluated twice. Will the contents also be rendered twice?
There are certain layouts that are impossible to create with basic layout tools such as stacks, alignment guides and frames. For layouts where a view needs to know the size of its child views, those tools may not be sufficient.
As an example, let's say we want a container view that behaves like an HStack until its children no longer fit the screen, at which point it should continue laying out the children on the next line, like this:
Here is the general solution to this kind of layout problem:
import SwiftUI

struct Flow<Content: View>: View {
    let content: [Content]
    @State private var sizes: [CGSize] = []

    var body: some View {
        ZStack(alignment: .topLeading) {
            ForEach(content.indices, id: \.self) { i in
                content[i]
                    .background(GeometryReader {
                        Color.clear
                            .preference(key: SizesPreferenceKey.self, value: [$0.size])
                    })
                    .offset(self.calculateOffset(i)) // uses self.sizes
            }
        }
        .onPreferenceChange(SizesPreferenceKey.self) {
            sizes = $0
        }
    }
    // ...
}
You first create a @State variable that will hold the child view sizes. Then you use a GeometryReader on the background of the child views to read their size, and transfer this information to the container view with a preference. (In this example, that is not even necessary because both the container view and child views are created in the same SwiftUI view, but it is more common to have separate views.) Back in the parent view, you can then use these preferences in the onPreferenceChange handler to update the state. With the sizes variable, the child views can now be laid out correctly.
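The example leaves out the definition of the preference key itself. A minimal sketch of what it could look like, assuming we simply concatenate the sizes reported by each child, is this:

import SwiftUI

struct SizesPreferenceKey: PreferenceKey {
    // Start with an empty list of sizes; each child contributes its own size.
    static var defaultValue: [CGSize] = []

    static func reduce(value: inout [CGSize], nextValue: () -> [CGSize]) {
        value.append(contentsOf: nextValue())
    }
}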
This trick works, but the body of the container view needs to be evaluated twice. When the body is first evaluated, the sizes variable is not yet populated, so it cannot lay out the children properly yet. When the children are evaluated, the size preference is updated. The body of the container view is then evaluated a second time, where it can properly lay out its content.
The first time the body is evaluated, it is not yet ready to be displayed, so we should ask ourselves the same questions as in the first example. Only now, the double body evaluation might happen more often than just the first time the view is displayed. Can we be sure that the container view is never rendered with the initial, invalid layout? How much unnecessary rendering are we performing?
The answer to the questions in both examples is: they will never glitch, and performance will be practically unaffected. If a view's body needs to be evaluated twice, the result of the first evaluation is never rendered on screen. And body evaluation is not the same as rendering: many times, evaluating a view's body will cause the view to be re-rendered, but not always, and not immediately. To see this, we will now look into how a SwiftUI app runs, and how it renders its contents.
How is a view displayed on screen in the first place? An iPhone has a screen with a particular refresh rate. For most iPhones, this is 60 Hz: the display is refreshed 60 times per second, and every frame lasts 1/60th of a second. The highest-end iPhones have a dynamic refresh rate with a maximum of 120 Hz. The GPU needs to be careful to only change the video frame in between two display refreshes. If it didn't, the screen could show parts of two frames at once, which can cause graphical artifacts such as tearing.
Apart from using the GPU, parts of an app may also use the CPU to render content. In that case, the image is first generated as a bitmap and then sent to the GPU. The GPU transforms and composites graphics together. If a particular view or piece of graphics is expensive to render, it can be cached in GPU memory. The rendering of a full-screen frame needs to complete within one refresh interval, that is, within 1 divided by the display refresh rate.
Showing data on screen is only half the story; we also need to receive user input. Touch input is generally sampled at a particular frequency, which may be higher than the display refresh rate. Even if the touches are sampled at the same frequency as the display is refreshed, the touch sampling and the display refresh might not be in perfect sync. On the latest iPhones, the touch sample rate is 120 Hz, which is twice the display refresh rate. Although we cannot update the screen as fast as we can register touches, we can use this extra touch data to show more detailed graphics on the screen. In a drawing app, for example, we can draw more detailed strokes based on the extra touches.
Unlike games, apps generally aren't based on an update loop that tries to generate as many frames as possible up to the display refresh rate. Instead, apps provide drawing code that can be executed by the system if the data has changed, and code that is called in response to events such as touches. The OS wakes up apps when they need to handle such events, which the app may use to render parts of the screen again using the UI framework.
Registering input events and using them to render images on screen needs to be orchestrated precisely. When writing an app, you generally do not need to worry about this: you can just use gestures or control events and change view contents. But the OS will have carefully delivered the events to your app in such a way that you get no more and no fewer events than you need to produce exactly one frame every time the display is refreshed, while also providing the lowest possible latency.
On Apple platforms, the event loop that lies at the heart of every app is an instance of CFRunLoop. This Core Foundation object was already part of the Carbon API that was released with Mac OS X 10.0, and has survived through many different UI frameworks and iterations until today. After being used by Carbon applications, it was used by UIKit, and is still used today in SwiftUI. The main dispatch queue is also implemented on top of CFRunLoop, and so is the MainActor from Swift Concurrency.
It is easiest to see how a CFRunLoop works if we create a run loop of our own. Let's say we are writing a simple command line application that waits for user input and then acts upon it.
while let input = readLine() {
    print(input)
}
We are reading user input in a loop, and then acting upon it if we receive something. This is a run loop. The program can be in two states. In the first state, it is idle and waiting for user input; the thread has been put to sleep, and the CPU time is used for other processes. When there is user input, the OS wakes up our thread, and the program enters its second state: handling the input.
What if we also want to act on incoming network events in the same thread? Now we can't use readLine anymore, because it blocks the thread until there is text input from the user. There are multiple ways to wait for any of a number of OS events at the same time; in any case, it requires some kernel support. For a command line program, you would typically use select or Dispatch sources. Internally, CFRunLoop uses mach ports.
Here is a diagram of CFRunLoop, comparing it to our command line application:
If you pause an iOS app in the Xcode debugger while it is idle, the stack trace of the main thread will start with these function calls:
* frame #0: libsystem_kernel.dylib`mach_msg_trap + 10
frame #1: libsystem_kernel.dylib`mach_msg + 59
frame #2: CoreFoundation`__CFRunLoopServiceMachPort + 319
frame #3: CoreFoundation`__CFRunLoopRun + 1249
mach_msg is the system call that CFRunLoop uses to wait for any of a set of multiple possible events. At this point, our app is not using the CPU, or at least the main thread is not.
A CFRunLoop is configured with a set of input sources that deliver events. When an app is launched, it starts a run loop on the main thread with an input source to deliver touch events. Other input sources can later be added to it. You can also start new run loops on secondary threads. We could implement a command line program that handles user input and network events using a CFRunLoop with two input sources.
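As a sketch of what such a program could look like, here is a version that uses Dispatch sources and dispatchMain() rather than a raw CFRunLoop with custom input sources, with a repeating timer standing in for the network event source:

import Dispatch

// Fires when there is data to read on standard input (file descriptor 0).
let stdinSource = DispatchSource.makeReadSource(fileDescriptor: 0, queue: .main)
stdinSource.setEventHandler {
    if let line = readLine() {
        print("input: \(line)")
    }
}
stdinSource.resume()

// A repeating timer, standing in for some other event source.
let timerSource = DispatchSource.makeTimerSource(queue: .main)
timerSource.schedule(deadline: .now() + 1, repeating: .seconds(1))
timerSource.setEventHandler {
    print("tick")
}
timerSource.resume()

// Park the main thread and process events submitted to the main queue.
dispatchMain()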
The events from the input sources associated with a CFRunLoop are processed in a specific order. There are four types of input sources, and then there are run loop observers.
- Type 0 input sources deliver events that originate within the process itself; code calls CFRunLoop functions to deliver events. Touch events in iOS apps are processed on a secondary thread, and then sent to the main thread's run loop through a type 0 input source.
- Type 1 input sources are backed by mach ports and are signaled by the kernel. CADisplayLink, which can be used to synchronize drawing code to the display refresh rate, uses a type 1 input source. Asynchronous networking code may also use type 1 input sources. (However, note that many networking libraries use blocking I/O calls on internal dispatch queues to perform networking calls instead, and then dispatch code to the main thread through the main dispatch queue.)
- Timers, such as a scheduled Timer, use their own special kind of input source.
- Blocks dispatched to the main dispatch queue are executed by CFRunLoop.

Apart from adding input sources, you can add observers to CFRunLoop to be notified when the run loop reaches specific parts of the run loop cycle. The different parts of the run loop cycle are defined by activities, and an observer can choose to be notified for one or several of them. Run loop observers are used extensively within Apple's own frameworks, as we will see shortly.
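To try this out ourselves, we can add an observer to the main run loop for the beforeWaiting activity, which will come up again below. A minimal sketch:

import Foundation

// Called every time the main run loop is about to go to sleep.
let observer = CFRunLoopObserverCreateWithHandler(
    kCFAllocatorDefault,
    CFRunLoopActivity.beforeWaiting.rawValue,
    true, // repeats
    0     // order
) { _, _ in
    print("main run loop is about to wait")
}
CFRunLoopAddObserver(CFRunLoopGetMain(), observer, .commonModes)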
When an app's run loop is not idle, it is handling events from its input sources or notifying observers. To make it easy to see what the run loop is doing when we debug an app, CFRunLoop calls all code through one of 5 marker functions:
__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__
__CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE1_PERFORM_FUNCTION__
__CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__
__CFRUNLOOP_IS_CALLING_OUT_TO_A_TIMER_CALLBACK_FUNCTION__
__CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__
These functions don't do anything; they are just there so that if we print the stack trace, we can see where in the run loop we are. Open an Xcode project and put a breakpoint anywhere in your code. The stack trace will contain one of the lines above.
By adding run loop observers, or adding breakpoints to these functions, we can get a lot of information on how the SwiftUI render loop works.
Have you ever had to debug an issue with an app that appeared stuck, but you thought it couldn't be stuck because there was still some animation happening? Activity indicators that keep spinning even though the main thread of an app is stuck have always puzzled me. Animations in iOS can continue even if the main thread is busy or paused. That is not because the animations happen on another thread, it is because they happen in another process.
Operating systems use a compositor to allow multiple processes to display graphics, and to draw them in separate windows on the same screen. iOS also has a compositor, but it is not just used to draw different windows at the same time in split screen or in the app switcher. It is also used to draw the different CALayers within apps, and to animate them. This process, the render server, performs most of the magic of the Core Animation framework.
Core Animation talks to the render server to tell it what to draw and animate. Generally, you make multiple changes that invalidate a view as part of a single user action. In UIKit, in response to a button tap, you might change both the size and the background color of a view, or you might call multiple methods that trigger setNeedsDisplay. It would be inefficient, and could cause glitches, if screen frames were drawn when we are only halfway through an action. To define which combination of changes to a layer constitutes a single change that we would like to render, the Core Animation framework exposes CATransactions.
You can start and end a CATransaction manually. If you don't do this and make changes to a layer, a CATransaction will be started implicitly. It is fun to play around with CATransactions and see what their effect is. Let us create a UIKit app with a UIButton that has the following action:
@IBAction func buttonPress() {
    self.view.backgroundColor = .red
    sleep(2)
    self.view.backgroundColor = .white
}
After pressing the button, the app is stuck for 2 seconds, but the background color of the view that the button is in stays white. The changes to the view's layer are not rendered before sleeping. This is because setting the background color started an implicit transaction, which was not committed before sleeping.
Now, let us add two lines before and after setting the view's background color for the first time:
@IBAction func buttonPress() {
    CATransaction.begin()
    self.view.backgroundColor = .red
    CATransaction.commit()
    sleep(2)
    self.view.backgroundColor = .white
}
If we now press the button, the background color of the view turns red, and then stays red for two seconds while the app is stuck, before turning white. We explicitly started a transaction, so changing the background color of the view did not start an implicit one.
When exactly is an implicitly started transaction committed? Whenever an implicit transaction is started, a commit is scheduled for the end of the current run loop cycle. This is done using a run loop observer that Core Animation adds to the main CFRunLoop for the beforeWaiting activity.
You can start one CATransaction inside another one, but only the outer transaction will be used to render and animate screen content. The outer transaction can be one that was implicitly started. Some controls animate themselves before calling their action handlers, which internally has the same effect as changing the background color of a view: it starts an implicit transaction. When you then use an explicit transaction to make changes to a layer, committing it won't immediately have any effect.
Although you don't use CATransactions directly in SwiftUI apps, the framework internally still uses Core Animation and CATransactions for drawing and animations. Together with the render server, Core Animation is very foundational to iOS.
Apps that need custom animations or use things like physics engines can use CADisplayLink to synchronize drawing code to the refresh rate of the display. Before this API was available, this was very hard to do, especially for game developers, who had to use an NSTimer and work around its limitations.
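As a reminder of what this looks like in practice, here is a minimal sketch of a display link driving a per-frame callback (the Ticker class and step method names are just for illustration):

import UIKit

final class Ticker: NSObject {
    private var displayLink: CADisplayLink?

    func start() {
        // Calls step(_:) once for every display refresh.
        let link = CADisplayLink(target: self, selector: #selector(step(_:)))
        link.add(to: .main, forMode: .common)
        displayLink = link
    }

    @objc private func step(_ link: CADisplayLink) {
        // timestamp is the time of the current frame; targetTimestamp is
        // when the next frame is expected to be displayed.
        let frameDuration = link.targetTimestamp - link.timestamp
        print("frame duration: \(frameDuration)")
    }
}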
Apps receive touch events from the OS at the same frequency as the display is refreshed. This makes sense, because we use touches to update views, and it would be a waste to do that more often than we can display the result. But if we compare when we receive these touches to when CADisplayLink fires, we can see that they are not exactly synchronized.
Multiple touch events can occur within one display refresh cycle on newer iPhones with a high touch sampling rate, but we do not receive them separately. In UIKit, we can still get those intermediate touches, the coalesced touches, from the UIEvent that delivered the touch.
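In code, that could look something like this sketch, where addPointToStroke is a hypothetical method of a drawing view:

import UIKit

class CanvasView: UIView {
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let touch = touches.first else { return }
        // The coalesced touches include the intermediate samples recorded
        // since the previous touch delivery.
        for sample in event?.coalescedTouches(for: touch) ?? [touch] {
            addPointToStroke(at: sample.location(in: self))
        }
    }

    private func addPointToStroke(at point: CGPoint) {
        // Hypothetical: append the point to the current stroke and redraw.
    }
}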
All run loop input sources, including the ones used to implement CADisplayLink and to receive touches, differ in how they handle the situation where the application is busy when the event occurs. If multiple touch events occur while an app is still busy responding to a previous touch, they will not be delivered separately, but the touches can still be recovered from the latest touch event. In contrast, if we are still busy when the next display refresh is about to occur, CADisplayLink will not notify us at all.
With this background knowledge of the low-level technologies used in iOS to process events such as touches and render content on screen, we can now look at the full SwiftUI render loop. I have drawn it graphically here:
When it is not doing anything, a SwiftUI app will have an idle CFRunLoop. It will wait for events from an input source such as touches, network events, timers or a display refresh. In response to a touch, SwiftUI may call a Button's action handler. If we put a breakpoint inside that action handler, we will see __CFRUNLOOP_IS_CALLING_OUT_TO_A_SOURCE0_PERFORM_FUNCTION__ somewhere in the stack trace. This is because touch events are delivered from a type 0 input source.
In response to an action that we perform for an event from an input source, we might update some @State variable in a view, or call a function on an @ObservedObject that in turn causes its objectWillChange publisher to fire. In this case, the SwiftUI view is invalidated. This means that its body needs to be re-evaluated, but it would be inefficient to do that immediately: maybe the same function that changed a @State variable will change another @State variable as well. Therefore, the body evaluation is scheduled to be executed later.
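We can make this visible with a small experiment. The print in the body below (a debugging trick, not something to ship) fires only once per tap, even though the action changes two pieces of state:

import SwiftUI

struct CounterView: View {
    @State private var count = 0
    @State private var lastTapped = Date()

    var body: some View {
        let _ = print("evaluating body")
        VStack {
            Text("Count: \(count)")
            Button("Increment") {
                // Two separate state changes in the same action still lead
                // to a single body evaluation at the end of the run loop cycle.
                count += 1
                lastTapped = Date()
            }
        }
    }
}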
If we put a breakpoint at any point in a view's body, we can see __CFRUNLOOP_IS_CALLING_OUT_TO_AN_OBSERVER_CALLBACK_FUNCTION__ in the stack trace. Just like the commit of an implicit CATransaction, the evaluation of an invalidated view's body is scheduled to be executed at the end of the current run loop cycle. This is again implemented using a run loop observer that watches for the run loop entering the CFRunLoopActivity.beforeWaiting stage. If a view is invalidated twice in the same run loop cycle, its body will not be evaluated twice.
After all invalidated views have been re-evaluated, SwiftUI does not immediately return control back to the run loop. Change handlers, such as onChange or onPreferenceChange, and onAppear are called first, and those handlers may invalidate the view a second time. For the scheduling of this second view re-evaluation, SwiftUI does not use a run loop observer.
If this second body evaluation calls the change handler again, and that causes yet another view invalidation, SwiftUI will temporarily disable view invalidation to prevent infinite loops. It will also print a warning like this:
onChange(of: _) action tried to update multiple times per frame
While re-evaluating views, built-in views (those views with a Body type of Never) can make changes to the app's CALayers. As we have seen, those changes are not drawn on screen immediately; instead, they start an implicit CATransaction. SwiftUI thus makes use of the same optimization that we used manually in UIKit apps.
Only when the implicit CATransaction is committed are the contents of the views rendered on screen. This is also the moment that rendering code that uses the CPU is called. If SwiftUI crashes in this part of the render loop, it can be hard to figure out how to fix it, because it is hard to see which part of which view caused the crash.
There is a common pattern in the render loop that is used to optimize code and make sure it is called only as many times as needed: when calling a function or changing a variable triggers an update, the update is not performed immediately. Instead, it is scheduled for later. This happens when views are invalidated because their state has changed, when handlers like onChange or onAppear are called, and when Core Animation needs to draw graphics. Some of these use CFRunLoop observers, and sometimes the scheduling is handled internally by a framework.
With this knowledge of the render loop, we can see why it is safe to use code like that in the examples at the beginning. None of the changes after the first body evaluation are rendered, because the implicit transaction that they are part of has not yet been committed. It is also useful to know what SwiftUI is doing when debugging, or when trying to improve performance.
The render loop in SwiftUI may be well hidden, but the technologies it uses are the same ones we used in UIKit apps, and they are well documented. If we have better insight into how it works, we can better understand the side effects of the code we write and make better decisions. Sometimes we may say we are "rendering" a view when we mean evaluating its body. But sometimes, understanding the distinction can be very helpful.