For a long time, my testing strategy was embarrassingly manual. Build the app, run it on the simulator, tap through the happy path, ship it. If something looked right on screen, it probably was right. Probably.
This works — until it doesn't. A refactor quietly breaks a login flow. A new feature regresses something three screens away. You don't find out until a user does.
At some point the tap-through ritual takes long enough, and the regression bugs get embarrassing enough, that you have to reckon with the fact that you've been doing this the hard way. The thing that finally pushed me over the edge was reading Paul Hudson's Swift Testing by Example. It's hard to read a book that methodically walks through how straightforward good testing can be and then go back to tapping through your simulator with a clear conscience. I closed it convicted, opened Xcode, and started writing tests.
Here's what that transition actually looked like.
Why Manual Testing Fails at Scale
Manual testing has a compounding cost. Early in a project, your app has five screens and a handful of flows. You can verify everything in two minutes. But as the app grows, those two minutes become ten, then thirty. And because it's tedious, you start cutting corners — skipping the edge cases, assuming the happy path represents the whole picture.
The deeper problem is that manual testing isn't repeatable in any meaningful way. You're not testing the same thing every time. Your attention drifts. You forget to check the empty state. You test on your preferred device size and miss the layout bug on smaller screens.
Automated tests don't get tired. They don't skip steps. And they run in CI before anything merges.
Two Tools, Two Jobs
Before getting into specifics, it's worth being clear about what each tool is for.
XCTest / XCUITest is Apple's long-standing testing framework. XCTest handles unit and integration tests; XCUITest drives the actual UI — launching the app, finding elements, tapping buttons, asserting on what appears on screen. XCUITest has been around since Xcode 7, released in 2015, and is battle-tested.
Swift Testing is Apple's newer framework, introduced at WWDC 2024 and bundled with Xcode 16. It brings a more expressive, macro-based API and better integration with Swift concurrency. It's purpose-built for unit and integration testing — not UI automation, which remains XCUITest's domain.
In practice: use Swift Testing for your unit and integration tests, and XCUITest for your end-to-end UI flows.
Getting Started with XCUITest
XCUITest works by launching a separate instance of your app and interacting with it through the Accessibility system. This is both its strength and its constraint — if an element isn't accessible, XCUITest can't find it.
A basic UI test
import XCTest
final class LoginUITests: XCTestCase {
    var app: XCUIApplication!

    override func setUpWithError() throws {
        continueAfterFailure = false
        app = XCUIApplication()
        app.launch()
    }

    func testSuccessfulLogin() throws {
        let emailField = app.textFields["login-email"]
        let passwordField = app.secureTextFields["login-password"]
        let loginButton = app.buttons["login-submit"]

        emailField.tap()
        emailField.typeText("user@example.com")
        passwordField.tap()
        passwordField.typeText("password123")
        loginButton.tap()

        XCTAssertTrue(app.staticTexts["Welcome back"].waitForExistence(timeout: 3))
    }
}
A few things to note here:
continueAfterFailure = false — stops the test at the first failure rather than plowing through subsequent assertions with a broken app state.
Accessibility identifiers — those string literals like "login-email" are accessibilityIdentifier values you set on your views. This is the right way to query elements in XCUITest. Don't rely on display labels or button titles if you can avoid it — they're fragile when copy changes.
Set them in SwiftUI like this:
TextField("Email", text: $email)
    .accessibilityIdentifier("login-email")
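In UIKit, the equivalent is setting the property directly on the view (the outlet names here are illustrative):

```swift
// UIKit counterpart: assign the identifier directly to each control.
emailTextField.accessibilityIdentifier = "login-email"
passwordTextField.accessibilityIdentifier = "login-password"
loginButton.accessibilityIdentifier = "login-submit"
```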
waitForExistence(timeout:) — UI tests are asynchronous by nature. Network calls, animations, transitions all take time. Always use waitForExistence instead of asserting immediately after an action.
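One small quality-of-life pattern — a sketch, not part of XCUITest itself — is wrapping the wait-then-assert dance in a helper so failures point at the calling line:

```swift
import XCTest

extension XCUIElement {
    /// Waits for the element and fails the test with a readable message if it never appears.
    func assertAppears(timeout: TimeInterval = 3,
                       file: StaticString = #filePath,
                       line: UInt = #line) {
        XCTAssertTrue(waitForExistence(timeout: timeout),
                      "Expected \(self) to appear within \(timeout)s",
                      file: file, line: line)
    }
}

// Usage inside a test:
// app.staticTexts["Welcome back"].assertAppears()
```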
Launch arguments for test environments
One of the most useful patterns in XCUITest is using launch arguments to put your app into a known state:
override func setUpWithError() throws {
    continueAfterFailure = false
    app = XCUIApplication()
    app.launchArguments = ["--uitesting", "--skip-onboarding"]
    app.launch()
}
Then in your app code:
if CommandLine.arguments.contains("--skip-onboarding") {
    hasCompletedOnboarding = true
}
This lets you bypass setup flows so each test starts in a clean, predictable state without tapping through onboarding every time.
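A related trick is launchEnvironment, which passes key-value pairs instead of flags. One common use — sketched here; the environment key and local mock server are assumptions — is pointing the app at a stubbed backend:

```swift
// In the UI test: hand the app a base URL via the environment.
app.launchEnvironment = ["API_BASE_URL": "http://localhost:8080"]
app.launch()

// In the app: read it back, falling back to production.
let baseURL = ProcessInfo.processInfo.environment["API_BASE_URL"]
    ?? "https://api.example.com"
```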
Unit Testing with Swift Testing
Swift Testing replaces the func testXxx() convention and XCTAssert family with something considerably more readable. Tests are marked with the @Test macro, and assertions use #expect.
A basic Swift Testing unit test
import Testing
@testable import MyApp
struct AuthViewModelTests {
    @Test func invalidEmailShowsError() {
        let viewModel = LoginViewModel()
        viewModel.email = "not-an-email"
        viewModel.password = "password123"
        viewModel.submitTapped()

        #expect(viewModel.errorMessage == "Please enter a valid email address")
        #expect(!viewModel.isLoading)
    }

    @Test func emptyFieldsDisableSubmitButton() {
        let viewModel = LoginViewModel()
        #expect(!viewModel.isSubmitEnabled)

        viewModel.email = "user@example.com"
        #expect(!viewModel.isSubmitEnabled)

        viewModel.password = "password123"
        #expect(viewModel.isSubmitEnabled)
    }
}
The #expect macro gives you much better failure messages than XCTAssertEqual — it shows you the actual and expected values inline in the test report without you having to format the message yourself.
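Swift Testing also has #require, which stops the test immediately when a precondition fails and can unwrap an optional in one step. A quick sketch (SessionStore and its API are hypothetical):

```swift
import Testing

@Test func savedSessionRestoresUser() throws {
    // loadSession() returns Session? in this hypothetical store.
    let session = SessionStore().loadSession()

    // If session is nil, the test fails here and stops —
    // no point asserting on a value that doesn't exist.
    let restored = try #require(session)

    #expect(restored.isValid)
}
```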
Parameterized tests
One of Swift Testing's standout features is parameterized tests — running the same test logic against multiple inputs with a single declaration:
@Test("Invalid email formats are rejected", arguments: [
    "plaintext",
    "@nodomain.com",
    "missing@",
    "spaces in@email.com"
])
func invalidEmailFormats(email: String) {
    let viewModel = LoginViewModel()
    viewModel.email = email
    #expect(!viewModel.isEmailValid)
}
Each argument runs as its own named test case in Xcode's test navigator. Previously this required either a loop inside a single test (which stops at the first failure) or duplicating the test function for each case.
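Parameterized tests can also pair inputs with expected outputs by zipping two collections, which keeps input and expectation side by side. A sketch reusing the LoginViewModel from the earlier examples:

```swift
import Testing

@Test("Email validity matches expectation", arguments: zip(
    ["user@example.com", "missing@", "plaintext"],
    [true, false, false]
))
func emailValidity(email: String, expectedValid: Bool) {
    let viewModel = LoginViewModel()
    viewModel.email = email
    #expect(viewModel.isEmailValid == expectedValid)
}
```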
Testing async code
Swift Testing integrates cleanly with async/await:
@Test func fetchUserProfileReturnsExpectedData() async throws {
    let mockService = MockUserService(returning: .success(User.fixture))
    let viewModel = ProfileViewModel(service: mockService)

    await viewModel.loadProfile()

    #expect(viewModel.displayName == "David")
    #expect(viewModel.isLoaded)
}
No XCTestExpectation, no fulfill(), no waitForExpectations. Just await and assert.
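For reference, the MockUserService in that example can be a small stub conforming to the same protocol the view model depends on. The protocol shape is an assumption based on the snippet above:

```swift
// The boundary the view model depends on (assumed shape).
protocol UserService {
    func fetchProfile() async throws -> User
}

// A canned-response stub for tests.
struct MockUserService: UserService {
    let result: Result<User, Error>

    init(returning result: Result<User, Error>) {
        self.result = result
    }

    func fetchProfile() async throws -> User {
        // Returns the canned user, or throws the canned error.
        try result.get()
    }
}
```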
Embracing TDD
Test-driven development inverts the usual workflow. Instead of writing code and then writing tests to verify it, you write the test first — watch it fail — then write the minimum code to make it pass.
The cycle is: Red → Green → Refactor.
This sounds counterintuitive at first. How do you write a test for code that doesn't exist? But that's exactly the point. Writing the test first forces you to think about the interface before the implementation. What should this function take as input? What should it return? What are the failure cases?
A TDD example
Say you're building a form validator. Start with the test:
@Test func passwordTooShortFailsValidation() {
    let error = PasswordValidator.validate("abc")
    #expect(error == .tooShort)
}
This won't compile yet — PasswordValidator doesn't exist. That's fine. Now write just enough to make it compile and pass. (Returning an optional error keeps the comparison simple: Result<Void, Error> isn't Equatable, so it can't be compared with ==.)
enum PasswordValidationError: Equatable {
    case tooShort
    case missingSpecialCharacter
}

struct PasswordValidator {
    /// Returns the first rule the password breaks, or nil if it's valid.
    static func validate(_ password: String) -> PasswordValidationError? {
        guard password.count >= 8 else {
            return .tooShort
        }
        return nil
    }
}
Test passes. Now add the next case:
@Test func passwordMissingSpecialCharacterFailsValidation() {
    let error = PasswordValidator.validate("abcdefgh")
    #expect(error == .missingSpecialCharacter)
}
Red. Extend the validator. Green. Refactor if needed. Repeat.
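One way the finished validator might look, sketched with an optional-error return; the exact special-character set is a placeholder, not a product rule:

```swift
import Foundation

enum PasswordValidationError: Equatable {
    case tooShort
    case missingSpecialCharacter
}

struct PasswordValidator {
    // Placeholder set — your actual requirements may differ.
    static let specialCharacters = CharacterSet(charactersIn: "!@#$%^&*")

    /// Returns the first rule the password breaks, or nil if it's valid.
    static func validate(_ password: String) -> PasswordValidationError? {
        guard password.count >= 8 else {
            return .tooShort
        }
        guard password.unicodeScalars.contains(where: specialCharacters.contains) else {
            return .missingSpecialCharacter
        }
        return nil
    }
}
```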
The discipline here pays off in a few ways. Your code ends up with a naturally testable architecture — loosely coupled, dependency-injected, focused. And you accumulate a regression suite as a byproduct of normal development, not as an afterthought.
What the Workflow Looks Like Now
The practical upshot of all this is a testing pyramid that actually works:
- Unit tests (Swift Testing) — fast, numerous, covering ViewModels, business logic, validators, formatters. These run in milliseconds and catch most bugs before the app ever launches.
- Integration tests (Swift Testing / XCTest) — testing how components work together, often with mock services injected at the boundaries.
- UI tests (XCUITest) — fewer in number, covering critical user flows end-to-end. Login, checkout, onboarding. The paths where a regression would be most visible.
UI tests are slower and more brittle than unit tests by nature — they exercise a full running app, complete with animation and network timing. Keep the suite focused on the flows that matter most and let the unit tests do the heavy lifting everywhere else.
The Honest Part
The transition isn't painless. Writing tests takes time upfront. TDD requires a mental shift that feels slow at first. XCUITest flakiness is a real thing — tests that pass locally and fail in CI for timing reasons you have to hunt down.
But the alternative is spending that time manually re-tapping through your app after every change, and eventually shipping a regression that a test would have caught in 200 milliseconds.
The tap-through approach doesn't scale. Automated tests do.