Why Take a Systems Approach to Mobile App Testing

Q&A With Jean Ann Harrison, Mobile Systems Tester

Esther Shein
01/19/2019

Jean Ann Harrison, Systems/Software QA Engineer with Thales Group/Avionics Division, has been testing software applications and systems for 18 years and mobile systems for 12. She has also spoken at various testing conferences and contributed to books, articles and webinars. She spoke with Enterprise Digitalization about the importance of taking a systems approach to mobile app testing.

Enterprise Digitalization: What is a systems approach to mobile app testing and why do you feel that’s the appropriate strategy?

Jean Ann Harrison: A systems approach to testing mobile apps means testing not only the software's functionality but also its behavior under the conditions of the hardware, the operating system and the firmware. An example: the device's battery is charging, other apps are running in the background, and the app under test is trying to communicate out to the internet. Suddenly the charging generates so much heat that the outbound data burns out the cell modem, or at the very least is turned into garbage data. Or what if, while your app under test is engaged, notifications from a second app cannot be received? These are system-level conditions that may need testing.
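To make this concrete, here is a minimal sketch (not from the interview) of how a tester might enumerate system-level conditions and cross them with app actions to generate test cases. Every name in it is a hypothetical placeholder for whatever device-lab or emulator hooks a team actually has.

```python
"""Illustrative sketch only: cross hardware/OS conditions with app
actions to enumerate system-level test cases. The condition values
and action names are invented for the example."""
from itertools import product

BATTERY_STATES = ["discharging", "charging", "charging_hot"]
BACKGROUND_LOAD = ["idle", "several_apps_running"]
NETWORK = ["wifi", "cellular", "offline"]
APP_ACTIONS = ["sync_to_server", "receive_notification"]

def build_test_matrix():
    """Yield one test case per combination of system condition and action."""
    for battery, load, net, action in product(
        BATTERY_STATES, BACKGROUND_LOAD, NETWORK, APP_ACTIONS
    ):
        yield {"battery": battery, "load": load, "network": net, "action": action}

if __name__ == "__main__":
    # In a real lab each case would drive emulator or device hooks;
    # here we just list the combinations a tester would walk through.
    for case in build_test_matrix():
        print(case)
```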

ED: What prompted you to adopt this approach?

JAH: It started when I was testing a medical device and I saw firsthand how stressing the boundaries or limitations of the hardware actually affected software test data. One example was simulating someone having serious heart symptoms, which generated messages that were communicated asynchronously. This created a problem because the hardware could only handle a limited number of messages within a certain amount of time, and beyond that would shut the entire device down.

Now consider the gravity of the situation if it occurred with patients wearing the device while the data was not reaching their physicians for analysis. It could be life threatening. When I worked with my development team, we found that the high volume of messages hit a limit and created a problem: patients were shipping the device back to the company, prompting the company to ship out a new device, at considerable cost. Once we realized what was happening, the resolution was quite simple: the development team slowed down the messages coming into the device, lessening the burden so all the data could be transmitted. Slowing down the messages also alerted the company that there was a problem, and the patient was contacted to make sure they didn't need an ambulance. So it was about clearly understanding the system and how its interdependencies affect the software's behavior. Since then, I've seen plenty of interesting mobile app behavior when comparing smartphones and tablets with regard to interruptions and lack of storage -- all of it affecting the software.
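The fix Harrison describes, slowing inbound messages so the device never exceeds its per-window limit, resembles a simple throttle. A minimal sketch, assuming a fixed-window limit; the message cap and window size are invented for illustration:

```python
import time
from collections import deque

class MessageThrottle:
    """Illustrative fixed-window throttle: delay inbound messages so the
    device never sees more than max_messages within window_seconds."""

    def __init__(self, max_messages=10, window_seconds=60.0):
        self.max_messages = max_messages
        self.window_seconds = window_seconds
        self.sent_times = deque()

    def admit(self, message):
        now = time.monotonic()
        # Drop timestamps that have aged out of the window.
        while self.sent_times and now - self.sent_times[0] > self.window_seconds:
            self.sent_times.popleft()
        if len(self.sent_times) >= self.max_messages:
            # Wait until the oldest message leaves the window instead of
            # overwhelming the device and forcing a shutdown.
            wait = self.window_seconds - (now - self.sent_times[0])
            time.sleep(max(wait, 0.0))
            self.sent_times.popleft()  # the oldest has now aged out
        self.sent_times.append(time.monotonic())
        return message  # forward to the device
```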

ED: What is the so-called wrong way to test?

JAH: The so-called wrong way in mobile testing does not exist. Any testing on a mobile app is good. Problems occur when companies leave that testing to paying customers or users because the company's test strategy is limited by a lack of resources, a lack of skilled testers, and the expectation that all testing can be done with automation. These constraints severely limit the test coverage done "in-house." Why is "testing in production" not a good idea for mobile apps? Simply put, this approach can instantly give the app a poor reputation worldwide, and that reputation transfers to the company. Smaller companies have a very difficult time surviving poor reviews. No matter how good the app promises to be -- even if it promises to be THE solution to a problem -- if the bulk of test coverage is left to actual users and customers, the app may not survive. Companies need a smart test strategy, which is why a systems test approach is a more comprehensive one.

ED: A lot has been written about IT organizations moving from waterfall to agile for app development. How does agile fit into the systems approach?

JAH: The waterfall software development lifecycle (SDLC) is really helpful for regulated environments like the medical, financial, insurance, healthcare, pharmaceutical and avionics industries, and government-related entities. However, companies can incorporate agile techniques into their SDLC. In my experience, I see a hybrid of the two, which seems to work well for that company and/or industry. Depending on how the engineering team has developed its documented processes and how it follows them, agile can easily be incorporated into a regulated environment. Agile practices include creating smaller timeframes for deliveries and working in sprints, where a development team can change gears as the scope of the sprint needs a shift. Testers need to work closely with their development team to adjust to that shift. Open communication is key among project team members.

An example to describe what I mean: Your company is creating a healthcare application used to ensure patients are given the right medications. Your project team learns of a problem in production where a nurse recognized that the app did not match up the patient ID with the type of medication. The project leader communicates to the development team and testers that the scope of the current sprint has changed because an emergency fix needs to be applied. The development and test teams work together to learn what code needs to be changed and what dependencies exist. Throughout the rest of the sprint, the development team releases interim builds so the testers can test the new changes as development proceeds.
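As an illustration only (the interview shows no code), a regression test for the failure Harrison describes might pin down the patient-ID-to-medication match. The function name and data shapes here are hypothetical:

```python
import unittest

def medication_matches_patient(patient_id, dispensed, prescriptions):
    """Hypothetical fix under test: only confirm a medication that is
    actually prescribed to this patient ID."""
    return dispensed in prescriptions.get(patient_id, set())

class MedicationMatchRegressionTest(unittest.TestCase):
    PRESCRIPTIONS = {"patient-001": {"metformin"}, "patient-002": {"lisinopril"}}

    def test_correct_medication_is_accepted(self):
        self.assertTrue(medication_matches_patient(
            "patient-001", "metformin", self.PRESCRIPTIONS))

    def test_wrong_medication_is_rejected(self):
        # The production incident: the app confirmed a medication that
        # did not belong to the scanned patient ID.
        self.assertFalse(medication_matches_patient(
            "patient-001", "lisinopril", self.PRESCRIPTIONS))

if __name__ == "__main__":
    unittest.main()
```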

While this process continues, there is a need to document what is being changed, what the test strategy is and what the expected outcome is. The big mistake many companies make is believing that being "agile" means no documentation. But when the team changes because one or more members leave for other opportunities, can that team's developers and testers maintain not only their agility but also their efficiency?

Adopting the agile methodology is not an all-or-nothing proposition -- but many companies think it is. I see this especially in the regulated industries now as they catch up to software trends. Companies can't expect to employ continuous delivery (CD) and/or continuous integration (CI) immediately. The medication-dispensing app in the example could take advantage of the CI approach, but only within sprints. It probably would not be wise to employ CD, given the severity of what could go wrong if a change is not fully developed. Being agile or agile-like may depend on the context of the app's usage, among other considerations.

Leadership must thoroughly understand the purpose of their product(s) before considering a different approach to development and testing. Remember, too, that the project team could include project managers, marketing, sales, financial and operations team members. The approach should include buy-in from all stakeholders.


ED: How does mobile app testing differ from testing for other types of apps?

JAH: Mobile apps include apps on smartphones, tablets and proprietary devices; to some extent, laptops are considered mobile devices too. To answer your question, I'll focus on an app that works on a smartphone, a tablet and a laptop to give you a good comparison. Let's compare testing The Weather Channel software. I use the term "software" instead of "app" because when accessing The Weather Channel on a laptop, you do so in a browser. With a smartphone or tablet, you download the mobile app, which not only works on that specific device but also utilizes that device's features. A big difference in the types of tests involves notifications. On a laptop, you receive a notification through the browser. There is no LED light to let you know a notification has arrived; instead, there is a popup message if you are in another app, if The Weather Channel is in another browser tab, or if the browser is minimized. On a smartphone or tablet, the user is notified not only with audio but with the device's LED light. The laptop may or may not take advantage of the system's audio, but there is no LED light.

Another difference is how notifications reach the user. The Weather Channel may send a message directly to the device to warn of severe weather, and that notification will continue to display at the top of the device's screen even if the user has not opened/started the app. The laptop is different: the user would have to start the browser and navigate to The Weather Channel website/web app before any notification could be displayed. Displays on the smartphone and tablet versions of the same app can also look and behave very differently. For example, rotating the device gives a different viewpoint, and swipes/touches may or may not be incorporated the same way.

One important factor to keep in mind when testing a mobile app on a smartphone is that the device has limited storage capacity. The mobile app's behavior needs to be tested when storage capacity has reached, or is close to, its limits, so the software can be modified to handle those conditions gracefully. A laptop has a much larger capacity for storing applications, so this aspect of a systems approach is not as critical there.
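A minimal sketch of the graceful-degradation idea, assuming a Python-style check; the threshold and function names are invented, and a real test harness would mock the free-space value to simulate the near-full condition:

```python
import shutil

LOW_STORAGE_THRESHOLD_MB = 50  # invented threshold for illustration

def storage_headroom_mb(path="/"):
    """Return free space in megabytes at the given mount point."""
    return shutil.disk_usage(path).free / (1024 * 1024)

def can_cache_payload(payload_size_mb, path="/"):
    """Degrade gracefully near the storage limit: skip caching rather
    than crash when the device is almost full."""
    return storage_headroom_mb(path) - payload_size_mb > LOW_STORAGE_THRESHOLD_MB

if __name__ == "__main__":
    # A test would fill the device or stub disk_usage to simulate the
    # near-full state, then assert the app skips the cache cleanly.
    print(can_cache_payload(payload_size_mb=10))
```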

ED: What are the takeaways for IT organizations when it comes to software testing for mobile apps? 

JAH:  

  • Do NOT assume that because a mobile app is smaller than a desktop or laptop app, it's easy to test. Quite the opposite: it can be far more complex.
  • Mobile testers learn by experience, and they develop a deeper understanding if they first conduct manual testing to build the critical thinking skills needed to generate mobile test case ideas.
  • Utilize exploratory testing techniques to learn about your mobile app’s behavior.
  • Take a systems approach to mobile app testing. Include hardware and operating system conditions, and variations of those conditions, to better understand your app's behavior. I suggest using persona testing to narrow the testing scope. This means deeply understanding not only how users will engage with your app but how they use the device while using the app.
  • And most importantly, do not assume it will be faster to employ test automation. It can be helpful for creating regression test suites or data validation scripts, but do not forget to weigh the cost of creating those scripts against how often they will be reused. The cost of an immediate, short-term testing need is not equal to the cost of maintaining automated scripts long term, so apply automation with long-term testing needs in mind. If your testers are spending more time fixing automated scripts than actually testing, where is the time savings? More important, how much test coverage can be completed in a cycle? (A back-of-the-envelope break-even calculation follows this list.)
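The cost comparison in the last point can be made explicit with a simple break-even model. This sketch and its numbers are invented for illustration, not from the interview:

```python
def break_even_runs(script_creation_hours, maintenance_hours_per_run,
                    manual_run_hours):
    """Hypothetical model: automation pays off only after the saved
    manual effort exceeds creation cost plus per-run upkeep. Returns
    the number of runs needed to break even, or None if maintenance
    eats the entire manual saving."""
    saving_per_run = manual_run_hours - maintenance_hours_per_run
    if saving_per_run <= 0:
        return None  # fixing scripts costs more than testing by hand
    return script_creation_hours / saving_per_run

if __name__ == "__main__":
    # Invented numbers: 40 hours to write the suite, 1 hour of upkeep
    # and 5 hours of manual effort saved per regression cycle.
    print(break_even_runs(40, 1, 5))  # -> 10.0 cycles to break even
```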

This article was originally published on Enterprise Mobility Exchange.

