How to Turn Testing Mobile Apps into an (Almost) Effortless Task

Published in Coder Stories

September 10, 2019

12-minute read

By Vincent Pradeilles
iOS developer @ equensWorldline

To say that tests are the cornerstone of a project’s quality and maintainability is surely something most software engineers are going to agree with. Indeed, when changes to production code are made, being able to rely on a suite of tests that will report whether any of the changes broke something is of invaluable help.

However, when it comes to writing tests, the mood often changes. Insightful tests tend to require developers to write a lot of dull, boilerplate code, making it feel like a very unenjoyable endeavor. But it doesn’t have to be! In this article we’re going to show you some tools and practices that will help you write meaningful tests without things becoming too painful.

Painless mocking

One difficulty we face when writing tests is dealing with dependencies. Most complex components have dependencies, which encompass a wide array of features, such as network calls, access to the file system and device-embedded sensors.

In order to efficiently test such complex components we need to implement “mock” versions of those dependencies—that is, versions that provide the same API as the real dependencies but don’t perform any of the actual work. Mocked dependencies can be thought of as “pretending” to be the real deal.

For the developer, implementing these dependencies can be a real hassle. Think about it: For every dependency, a mocked implementation must be written by hand—the very definition of boilerplate code! However, this can be significantly simplified if we use the right tools to automate a large part of the work.
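To make the pain concrete, here is a sketch of what such a hand-written mock might look like in Swift (the ProfileService protocol is a hypothetical example):

// A hypothetical dependency we would like to mock...
protocol ProfileService {
    func getUserName(forId id: Int) -> String
}

// ...and the mock we have to write by hand for it: pure boilerplate,
// repeated for every single dependency in the codebase.
class HandWrittenMockProfileService: ProfileService {
    var getUserNameCallCounter = 0
    var getUserNameReturnValue: String?

    func getUserName(forId id: Int) -> String {
        getUserNameCallCounter += 1
        return getUserNameReturnValue!
    }
}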

If you write code in a language that offers a dynamic runtime environment, such as Java, mocks can actually be generated at runtime. In Java, Mockito is a popular framework for this task, and here's how it works.

First you need to add Mockito to your project. Using Gradle, this takes a single line:

dependencies {
    testCompile "org.mockito:mockito-core:2.+"
}

Now that Mockito is available, we need a dependency to mock. To keep things simple and to the point, we’ll be mocking this very simple service interface:

public interface MyService {
    String getUserNameForId(Long id);
}

Here’s how a mocked implementation can be dynamically generated:

// 1
MyService mockedService = mock(MyService.class);

// 2
when(mockedService.getUserNameForId(100L)).thenReturn("John");

// 3
System.out.println(mockedService.getUserNameForId(100L)); // prints "John"

So, let’s discuss the different steps:

  1. We’ve used the mock() function that Mockito provides to dynamically create a mock for our interface.
  2. Then we’ve used the when() function to indicate that when the function getUserNameForId() is called with the argument 100L, the mock must return the value “John”.
  3. Finally, when we called the function with the appropriate argument, the mock behaved as expected and returned the value we previously set.

As you can see, Mockito allows the developer to use fully fledged mocks without having to write any boilerplate. The only code required is the code that actually carries meaning, because it specifies how the mock should behave.

Now, approaches like the above can only work in languages that offer a dynamic runtime environment. So, what do you do if your code is written in, say, Swift, where a framework like Mockito cannot exist?

While you won’t be able to generate mocks at runtime, you can use the second-best option: Generating them at compile time. To do so, we can rely on code-generation tools such as Sourcery.

Here’s how it works: Sourcery is a command-line tool that you add to your build process. Then, when you build your project, Sourcery parses your source code and executes templates that will in turn generate new source code.
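For instance, a typical invocation added as a build phase might look like the following (a sketch; the paths are hypothetical and depend on your project layout):

sourcery --sources Sources --templates Templates --output Generated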

To give an example, let’s say that we want a mocked implementation of this Swift protocol:

protocol UserService {
    func getUserName() -> String
}

To do this, we first need a means of indicating to Sourcery that this protocol must be mocked. This can be achieved by implementing what is called a “phantom protocol”—that is, a protocol that has no requirements and only serves as a marker on types that implement it.

protocol MockedImplementation { }

protocol UserService: MockedImplementation {
    func getUserName() -> String
}

Then we have to write the Sourcery template that will actually generate the mocked implementation:

// 1
{% for protocol in types.implementing.MockedImplementation %}
// 2
{{ protocol.accessLevel }} class Mocked{{ protocol.name }}: {{ protocol.name }} {
// 3
{% for method in protocol.methods %}
    // 4
    var {{ method.callName }}CallCounter: Int = 0
    var {{ method.callName }}ReturnValue: {{ method.returnTypeName }}?
    // 5
    func {{ method.name }} -> {{ method.returnTypeName }} {
        {{ method.callName }}CallCounter += 1
        return {{ method.callName }}ReturnValue!
    }
{% endfor %}
}
{% endfor %}

There are a lot of things to understand here, so here’s a step-by-step breakdown of what should happen:

  1. First, iterate over all the types that implement the protocol MockedImplementation, using the types object—a collection exposed by Sourcery that contains all the types defined in our source code.
  2. Then, implement a class that will contain the mocked implementation of the protocol.
  3. Iterate over all the methods declared in the protocol.
  4. For each method, create two variables: One that will count how many times the function has been called, and another that will hold the value that the mocked function should return.
  5. Finally, write a mocked implementation of the function using the two variables just created.

By doing this, when we build our project, Sourcery will execute this template, and generate the following Swift code as a result:

internal class MockedUserService: UserService {

    var getUserNameCallCounter: Int = 0
    var getUserNameReturnValue: String?

    func getUserName() -> String {
        getUserNameCallCounter += 1
        return getUserNameReturnValue!
    }
}

We can now use this code to interact with our mock in much the same way as if it had been generated by a tool like Mockito:

let mockedService = MockedUserService()

mockedService.getUserNameReturnValue = "John"

print(mockedService.getUserName()) // prints "John"
print(mockedService.getUserNameCallCounter) // prints "1"
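To see how this pays off in an actual test, here is a minimal sketch (the UserGreeter class is a hypothetical component that depends on UserService) that uses the generated mock to check both the result and the number of interactions:

import XCTest

// A hypothetical component that depends on UserService
class UserGreeter {
    private let service: UserService

    init(service: UserService) {
        self.service = service
    }

    func greeting() -> String {
        return "Hello, \(service.getUserName())!"
    }
}

class UserGreeterTests: XCTestCase {
    func testGreetingCallsTheServiceExactlyOnce() {
        // Configure the generated mock instead of writing one by hand
        let mockedService = MockedUserService()
        mockedService.getUserNameReturnValue = "John"

        let greeter = UserGreeter(service: mockedService)

        XCTAssertEqual(greeter.greeting(), "Hello, John!")
        XCTAssertEqual(mockedService.getUserNameCallCounter, 1)
    }
}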

A codebase with no tests

So far, we’ve discussed tools that painlessly generate the mocks we need when writing tests alongside production code. Now let’s consider a completely different situation. Let’s imagine that you are asked to maintain a legacy codebase that does not include any tests. This is definitely far from ideal, but we all know that such codebases do exist.

Without tests, it’s very difficult to know whether a change to production code resulted in a regression. We also know that it’s very difficult to write tests after the fact: Tests are normally written from the original specification, and mapping existing production code back to that specification isn’t easy. However, there is an interesting technique that will help us—snapshot-based testing—and, as with many powerful tools, the underlying idea is simple to explain.

Let’s imagine you are given the codebase of a mobile app. For each view—meaning ViewController or Activity—of the app, we are going to write a test that will instantiate the view, provide it with mock data, render it, and then save the graphical rendering to a PNG file that will be kept under version control alongside the tests.

Then, the next time the test is run, the process will repeat and the new rendering will be compared to the one that has previously been saved. A pixel-by-pixel comparison will be performed, and if there is any difference, the test will be marked as failed.

Popular libraries that implement this approach include screenshot-tests-for-android on Android and iOSSnapshotTestCase on iOS. We’re going to take a look at how the latter can be used.

First, we’ll define a very simple screen that only displays a label at its center:

class MyViewController: UIViewController {

    override func loadView() {
        super.loadView()

        view.backgroundColor = .white

        let label = UILabel()
        label.translatesAutoresizingMaskIntoConstraints = false
        label.text = "I'm displaying a text!"

        view.addSubview(label)

        // Center the label
        NSLayoutConstraint.activate([
            label.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            label.centerYAnchor.constraint(equalTo: view.centerYAnchor)
        ])
    }
}

Then we’re going to use iOSSnapshotTestCase to implement a snapshot-based test case. Note that FBSnapshotTestCase and iOSSnapshotTestCase refer to the same framework; for compatibility reasons, the former is the name we have to use in code.

import XCTest
import FBSnapshotTestCase

@testable import MyProject

// 1
class MySnapshotTestCase: FBSnapshotTestCase {

    override func setUp() {
        super.setUp()
        // 2
        self.recordMode = true
    }

    func testMyViewController() {
        let viewController = MyViewController()
        // 3
        FBSnapshotVerifyView(viewController.view)
    }
}

Let’s discuss the important lines:

  1. In order to implement a snapshot-based test case, we need to subclass FBSnapshotTestCase.
  2. recordMode is a property of FBSnapshotTestCase. The first time we run our test, we set it to true, in order to save the snapshot to the disk. In subsequent executions, we’ll set it to false, in order to perform a comparison with the reference image.
  3. Inside our actual test, we’ve used the function FBSnapshotVerifyView to indicate that we want to perform a snapshot test of viewController.view.
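As a side note, flipping recordMode back and forth by hand is easy to forget. A common approach is to drive it from an environment variable instead; here is a minimal sketch, using a hypothetical RECORD_SNAPSHOTS variable:

import FBSnapshotTestCase
import Foundation

class MySnapshotTestCase: FBSnapshotTestCase {

    override func setUp() {
        super.setUp()
        // Record new reference images only when explicitly requested,
        // e.g. by setting RECORD_SNAPSHOTS=1 in the test scheme
        self.recordMode = ProcessInfo.processInfo.environment["RECORD_SNAPSHOTS"] == "1"
    }
}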

When we run this test with recordMode set to true, the following image will be written to the disk:

[Image: the reference snapshot of MyViewController, showing the label "I'm displaying a text!" centered on a white background]

Then, if we set recordMode to false and run the test once again, we’ll see that it succeeds, because the graphical rendering will be identical to the image saved to the disk.

Now, let’s change the text that our view displays and run the test again:

label.text = "I'm displaying another text!"

As expected, the test fails, pointing out that the screen now has a different display. To help us identify what has changed, the framework also generates a very helpful artifact: A “diff” between the reference image and the actual content. Here’s what it looks like (the image has been cropped to its interesting part):

[Image: the diff between the reference image and the new rendering, cropped to the label area]

Without context, this image looks a bit messy. To understand it, you need to know that the content in black comes from the reference image, while the content in white comes from the rendering produced by the test run. Looking at it this way, we can see that the difference lies in the text being displayed.

This example has shown how a snapshot-based test is a great tool for identifying a regression within a view. Because it operates directly on the graphical rendering, it doesn’t need to know anything about the internals of your app, which makes it very easy to set up. As you can imagine, this aspect is especially useful if you need to apply the technique to a large legacy codebase.

However, it’s important to understand the limitations of snapshot-based testing. As it operates on the graphical rendering, it will only be able to catch regressions that are reflected in this rendering. For instance, if a change breaks the code associated with the click event on a button, this regression won’t be caught, because it doesn’t trigger any difference in the graphical rendering.

To conclude, it should be pointed out that this technique is not limited to comparing screenshots. You can actually compare any type that makes sense in the context of your app: If you write back-end code that outputs JSON streams, you can definitely use snapshot-based tests to compare those streams; if you write shell scripts, you can redirect the standard and error outputs to a file and perform a comparison based on this file.
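As a sketch of that idea, here is what a JSON snapshot test could look like using nothing but Foundation and XCTest (the Invoice model and the snapshot path are hypothetical):

import XCTest
import Foundation

struct Invoice: Codable {
    let id: Int
    let amount: Double
}

class InvoiceSnapshotTests: XCTestCase {

    // Hypothetical location of the reference snapshot, kept under version control
    let referenceURL = URL(fileURLWithPath: "Snapshots/invoice.json")

    func testInvoiceEncoding() throws {
        let encoder = JSONEncoder()
        // Sort keys so that the output is deterministic across runs
        encoder.outputFormatting = [.prettyPrinted, .sortedKeys]

        let data = try encoder.encode(Invoice(id: 42, amount: 99.90))

        if !FileManager.default.fileExists(atPath: referenceURL.path) {
            // "Record mode": save the first rendering as the reference
            try data.write(to: referenceURL)
        } else {
            // Subsequent runs: compare byte for byte against the reference
            let reference = try Data(contentsOf: referenceURL)
            XCTAssertEqual(data, reference)
        }
    }
}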

To get technical, we can generalize this by saying that snapshot-based testing can be applied to any code whose output or side effects can be captured in a comparable form.

Tests that look for bugs

When we write a test, we tend to follow this structure:

Given some state
When I perform this action
Then I expect that result

This structure is great for finding regressions, because it associates a predefined state with an expected result. But the logical consequence is that it will restrict itself to this predefined state, which is a pretty strong limitation, because it’s impossible to write a test for each possible state.
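To illustrate, here is what such a test looks like in Swift for a sorting function; note that it exercises exactly one predefined state out of all the possible arrays:

import XCTest

class SortingTests: XCTestCase {
    func testSortingAKnownArray() {
        // Given some state
        let input = [3, 1, 2]

        // When I perform this action
        let result = input.sorted()

        // Then I expect that result
        XCTAssertEqual(result, [1, 2, 3])
    }
}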

Worse, if the state of our code is provided by an external source, such as user input or the network, we can definitely expect to see some weird values once our code ships. However, it is possible to overcome this limitation by using the lesser-known technique of property-based testing, which essentially consists of writing tests in the following form:

For any input values (x,y, z, …)
Such that precondition (x, y, z, …) is satisfied
Property (x, y, z, …) must be true

Here’s an example of how we can test a sorting function within this framework:

For any inputs (array, i)
Such that 0 ≤ i < (array.count - 1)
Property sorted(array)[i] ≤ sorted(array)[i+1] must be true

When you give such a test to a property-based testing library, it’s going to do two things with it. First, it’s going to generate random data that satisfies the precondition, and then it will test whether the property holds for this data. If it cannot find a counterexample after a predefined number of runs (usually about a hundred), the test will be considered successful. However, if it does find a counterexample, it will then try to reduce it as much as possible (a process known as “shrinking”), for instance by successively dropping elements from the array, in order to return a counterexample that is as small as possible.

Of course, since the framework needs to generate random data, every time the tests are run, it will use a new random seed, which will result in a new set of values being generated and tested. This is the great strength of property-based testing: Even though we cannot test our code on an infinite number of values, it allows us to approximate it by testing for lots of new values every time.

To give you an idea of how this can be implemented, we are going to use the library KotlinTest. As expected, the code looks a little less clean than the version we wrote in words, but going through it step by step will make it all clear:

import io.kotlintest.properties.Gen
import io.kotlintest.properties.forAll
import io.kotlintest.specs.StringSpec

class PropertyBasedTest: StringSpec() {
    init {
        "Sorted Array" {
            // 1
            val generator = Gen.list(Gen.int())
                // 2
                .filter { it.size > 1 } // Ignore arrays of fewer than 2 elements, as they are by definition sorted
                // 3
                .flatMap { Gen.pair(Gen.constant(it), Gen.choose(0, it.size - 1)) }

            // 4
            forAll(generator, { (array, i) ->
                // 5
                val sorted = array.sorted()
                // 6
                sorted[i] <= sorted[i + 1]
            })
        }
    }
}

Here’s the breakdown of what we did:

  1. We began by creating a generator that produces random arrays of integers.
  2. We filtered out arrays of zero or one element, since they are, by definition, already sorted.
  3. We created a new generator that produces a pair containing an array of integers and an index within the range of this array.
  4. We wrote the property that must hold for all values produced by the generator.
  5. We sorted the generated array.
  6. We tested whether the value at the index was less than or equal to the one immediately after it.

Since we used the function sorted() provided by Kotlin, it’s no surprise that our test was successful. If we want to see what happens when a bug is found, we only need to change one line of code:

val sorted = array.shuffled()

This time, as expected, our test is able to find a counterexample!

java.lang.AssertionError: Property failed for
Arg 0: ([-2147483648, 2147483647, 0], 0)
after 1 attempts

What’s really neat is that we are provided with the values on which the test failed, meaning we can easily plug them back into our code and debug the root cause of the issue.
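If you work in Swift, the same paradigm is available through libraries such as SwiftCheck. As a rough sketch (assuming SwiftCheck is added to the test target), the sorting property could be expressed like this:

import XCTest
import SwiftCheck

class SortingPropertyTests: XCTestCase {
    func testSortedArraysAreOrdered() {
        // For any randomly generated array of integers, every element of
        // the sorted result must be <= its immediate successor
        property("sorted arrays are ordered") <- forAll { (array: [Int]) in
            let sorted = array.sorted()
            return zip(sorted, sorted.dropFirst()).allSatisfy { $0 <= $1 }
        }
    }
}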

Conclusion

So, we’ve covered a lot of ground!

We began by looking at how automating the creation of mock objects makes writing tests easier. We saw that, depending on the language, this can be done through the runtime environment or via code generation.

Then we moved on to consider the task of maintaining a legacy codebase that doesn’t come with a test suite. We saw that the technique of snapshot-based testing makes it much easier to write tests that guard against regressions in the graphical rendering.

Finally, we experimented with a new paradigm for writing tests—property-based testing—where properties that must always hold true are defined, and data generators are then used to try to find values for which the properties fail. What’s really exciting about this technique is that it tries to find bugs the developers were unaware of, rather than just catching known regressions.

We hope that we’ve managed to convince you that writing tests doesn’t have to be a dull and mundane task, but rather that there are lots of tools at our disposal to make it easier and fun!

This article is part of Behind the Code, the media for developers, by developers. Discover more articles and videos by visiting Behind the Code!


Illustration by Blok
