Saturday 18 December 2010

Top 10 tips for a quality assured project

In this blog I've looked at the key QA elements that need to be in place to support the building of a quality Web project.

The role of a QA is not to 'police quality' but to help to 'bake quality into' a project. This means helping to support the set-up and maintenance of a quality process, as well as measuring the effectiveness of the quality process. At a high level this involves four responsibilities:
  • defining the quality through the specifying and documenting of Acceptance Criteria covering functional and non-functional requirements. Yes, I used the dreaded 'd' word, but note documenting can take many forms, including executable specifications. For me, this phase is also the first phase of TDD/ATDD.
  • inculcating quality into the development process through encouraging: good pre-planning, BDD, TDD, pair programming, sprint bug-bashes etc.
  • measuring adherence to the quality through a number of quality checks such as: QA reviews of developer deliverables, and testing of functional and non-functional ACs.
  • alerting on shortcomings in the quality through the raising and escalation of bugs in a timely fashion, using automation.
These activities take place in an Agile framework.

Quality, for me, is about the AUT meeting the requirements as given in the ACs. The ACs should therefore cover functional and non-functional requirements. 100% quality can be achieved if all the ACs have been met. The more difficult question is whether the ACs really define all the requirements.

Just to clarify: tests are not quality in themselves, but a measure of the quality of the application according to the defined ACs. Automated tests are therefore just fast feedback on that measure of quality.

                                                                                                                                
So that ACs can be defined the following are required ...


1) WELL PREPARED STORIES
Sufficiently pre-planned Stories with stakeholders so that they are: testable, deliverable, strategic, and wanted. The pre-planning should cover: happy-path functional, sad-path functional, non-functional, and app-wide requirements. UI can be specified at a higher functional level. Techniques to draw out ACs include: pictures, decision tables, flow charts, state transitions, mind mapping, persona analysis, value, risk, changed areas, CRUD, 2D matrices, scenario matrices etc.


2) TESTABLE ACs
ACs should be expressed in a consistent and clear manner, e.g. Gherkin style. Ideally a BDD tool could help by enforcing a pattern for specifying ACs.
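
For example, an AC for a hypothetical shopping-basket story might be expressed in Gherkin like this (the feature and values are invented for illustration):

```gherkin
Feature: Basket totals
  Scenario: Adding an item updates the basket total
    Given an empty basket
    When the user adds a "T-shirt" priced at £10.00
    Then the basket total should be £10.00
    And the basket count should be 1
```

A BDD tool can then bind each step to code, turning the AC into an executable specification.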


3) FUNCTIONAL ACs REFLECTING A WEB APPLICATION
ACs should have an awareness of the MVC split so that tests can better mirror the ACs.
  • View ACs
  • Action ACs
  • Action plus View ACs
  • Business Logic ACs


                                                                                                                                


So that the ACs can be tested the following are required ...

4) TESTING APPROACH

Do we need this test?
If quality does not need to be measured (e.g. if speed to market is more important) then the amount of testing needs to reflect this. For example, if there are no ACs for an NFR (e.g. performance), this may indicate that performance does not need to be measured and tested, as it is not a defined quality.

Do we need to automate?
If speed and ease of change are not important then automated tests are also unimportant. For example, you may choose to run integration tests as a one-off manual exercise after every 'n' sprints. If the app is likely to be thrown away, automated tests may be an unnecessary overhead.

If we do how do we do it?
  • All tests, whether they are unit in style or full-stack browser driven, are in place to meet ACs, and so they should be treated as shared artifacts across Dev and QA. Developers' production code should also be accessible by the QAs.
  • The approach suggested (White Box) is a classic one:
    - Functionally test, unit style, the individual components where the functionality has been implemented
    - Test the components integrated with each other
    - Test state between components 
  • A speedy and robust CI cycle favours focussing testing at the lowest possible level (see Testing Pyramid)
  • A risk-based approach works best when deciding what to automate, along with a review of the Testing Quadrants. Ideally every AC should have an automated test. In the case of static content, we could simply check that something is there.
  • Follow the Page Object Pattern, ensuring the tests are easy to read and write, and have no awareness of html references. Accessing simple page elements can be modelled through Page Object variables. Accessing tables should be modelled through Page Object utilities returning lists of text.  Access to page actions requiring location of elements should be modelled through Page Object utilities returning references to dynamically created Page Object variables. Complex dynamic pages should be broken-down into Page Object components.
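
As a minimal sketch of the Page Object Pattern in Python (the driver here is a stand-in for a real Selenium WebDriver, and all class names and locators are invented for illustration):

```python
class SearchPage:
    """Page Object: tests talk to this class, never to raw HTML references."""

    # locators live in one place, hidden from the tests
    _QUERY_CLASS = "search-query"
    _RESULT_ROW_CLASS = "search-result-row"

    def __init__(self, driver):
        self._driver = driver  # a real Selenium WebDriver in production use

    def search_for(self, term):
        self._driver.type_into(self._QUERY_CLASS, term)
        self._driver.click("search-submit")
        return self  # fluent chaining keeps tests easy to read and write

    def result_texts(self):
        # tables/lists are exposed as plain lists of text, per the guideline above
        return self._driver.texts_of(self._RESULT_ROW_CLASS)


class FakeDriver:
    """Stand-in driver so the sketch is runnable without a browser."""

    def __init__(self, canned_results):
        self._canned = canned_results
        self.actions = []

    def type_into(self, locator, text):
        self.actions.append(("type", locator, text))

    def click(self, locator):
        self.actions.append(("click", locator))

    def texts_of(self, locator):
        return list(self._canned)


driver = FakeDriver(["Widget A", "Widget B"])
page = SearchPage(driver)
results = page.search_for("widget").result_texts()
print(results)  # the test reads business-level text, not HTML structure
```

The test never sees a locator, so a cosmetic HTML change only touches the Page Object.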

This means that functional testing is done at the unit level, where the functions have been built. Integration testing addresses combinations of ACs and is done at the application/system level.



Alternative Approaches:

  • 'Black box'
    A more traditional approach is to consider the Unit tests as owned by the Developers, their purpose being to support modularised TDD, achieve code coverage, and aid refactoring. The QA tests then serve to ensure that ACs are being met. The downside of this approach is that it leads to: test duplication, longer feedback cycles, difficulty in influencing the data input, a disjoint between the functionality being built and the ACs, and QA preoccupation with low-value automation.
  • We could run the 'Full Stack' Integration tests against stubs and then run a smaller 'Full Stack' test with real back-ends. A major downside of this approach is that if you are heavily dependent on a Services layer (e.g. for managing some state) then there is a big overhead in maintaining a stub, the chance and impact of integration issues magnifies, and there will be much duplication.


5) TEST SET-UP
Back-ends may need to be stubbed for a number of reasons:
  • existing back-ends don't provide the data variety
  • existing back-ends aren't available to fit your CI cycle
  • existing back-ends don't support performance testing
  • if you want to run intra-application testing

Prior to going into Integration testing we need to establish sufficient data scenario coverage and up-time from the back-ends.

The environments need to address the different testing requirements, such as: local testing, CI, integration, cross-browser, demo, user experimentation, pre-production, performance etc.


6) A TESTABLE APPLICATION
The tests should be robust and not break unexpectedly or for cosmetic reasons. To support this, an element referencing strategy needs to be agreed between QAs and Devs to ensure tests focus on semantics and not structure. All web elements should be referenced by a static ID or Class. Lists should also be referenced by a parent reference and a row reference. Derived content will need a reference to reflect the status. Static content may need a very limited number of references to test integration.

  • Class names should be meaningful in terms of the business usage, e.g. the anchor to add a template risk should reflect the template name.
  • All action elements (input, anchors, select/option etc) should have a unique class.
  • Dynamic data from the back-end should have a unique class.
  • Tables/Lists should have a unique class for the whole section and a class for each row. One good example of where this is being followed is the Add underwriters dialog.
  • Pages should have unique identifiers.
  • These classes should not be changed without considering the impact on the Selenium tests.
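
As a sketch, markup following these rules might look like this (the class names are invented for illustration):

```html
<!-- action element: unique, business-meaningful class -->
<a class="add-template-risk" href="#">Add risk</a>

<!-- dynamic back-end data: unique class reflecting its meaning -->
<span class="premium-total">£120.00</span>

<!-- table: one class for the section, one per row -->
<table class="underwriters">
  <tr class="underwriter-row"><td>First underwriter</td></tr>
  <tr class="underwriter-row"><td>Second underwriter</td></tr>
</table>
```

The tests then select on business semantics (`premium-total`) rather than structure (`div > span:nth-child(2)`), so layout changes don't break them.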





7) AUTOMATION OF ...


BUILD TEST
===========

The app should be automatically built, tagged, and deployed to a configurable environment using CI.




FUNCTIONAL
============

Tests of ACs at the level closest to the code, i.e. unit style. This includes logic and the presence of back-end content, but excludes the presence of static content.


INTEGRATION TEST
==================

Tests that check integrations between the functions, back-ends, state etc.

1) Top Down Integration


Testing from the client to the back-ends.



Data Integration
Check all non-static data on each page is mapped correctly from the Back-end or Controller to the Html:
  • Check the data in the happy path cases
  • Check error cases, particularly where the error is identified in the Controller
NB. If the back-end is outside the control of the Web app and it does not publicise test data, then the check should not be hard-coded, but rather an independent call to the back-end followed by a comparison.
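
A sketch of that comparison approach (the back-end call and page read are stubbed as dict lookups here; in real use they would be an HTTP call and a Page Object read, and the field names are invented):

```python
def fetch_backend_record(order_id, backend):
    # independent call to the back-end (stubbed as a dict lookup here)
    return backend[order_id]

def read_order_from_page(page):
    # what the rendered HTML shows (stubbed; would be a Page Object read)
    return {"order_id": page["order_id"], "total": page["total"]}

# the check is a comparison, not a hard-coded expected value
backend = {"42": {"order_id": "42", "total": "19.99"}}
rendered_page = {"order_id": "42", "total": "19.99"}

expected = fetch_backend_record("42", backend)
actual = read_order_from_page(rendered_page)
mismatches = {k: (expected[k], actual[k]) for k in expected if expected[k] != actual[k]}
print(mismatches)  # empty when the page maps the back-end data correctly
```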

Another alternative approach is for the functional tests to use an accurate and up-to-date version of canned back-end responses. This is more suited to an Ajax call as there is no MVC separation. 

Back-end Error 
A variation on the Data Integration tests checks that we handle error responses gracefully, e.g. back-end 400/500/timeout type errors.

Html Action Integration
Testing that the HTML actions invoke the expected services (controller, model services, back-end) with the correct parameters.
  • Check that the correct back-end API call is being made, e.g. a filter being applied. This can be done efficiently by combining multiple filters into a single call.
  • Check the input parameters are being processed
  • Check all action elements on all pages are covered
An alternative approach is for the functional test to check this and assert that the right back-end is being called with the right data. This is most suited to an Ajax call.
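
One way to sketch this check: wrap the back-end in a recording stub, drive the action, and assert on the call that was made (the service and parameter names are invented for illustration):

```python
class RecordingBackend:
    """Stub back-end that records every API call it receives."""

    def __init__(self):
        self.calls = []

    def search(self, **params):
        self.calls.append(("search", params))
        return []  # canned empty result


def apply_filters(backend, filters):
    # the action under test: multiple filters combined into a single call
    backend.search(**filters)


backend = RecordingBackend()
apply_filters(backend, {"category": "books", "max_price": 20})

# assert the right API was hit, once, with the right parameters
print(backend.calls)
```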

Browser Integration (Javascript, CSS, URL Bookmarking, Images, Cookies etc)
Check that buttons, links etc are manifest. Check that the classes and identifiers used in the Javascript code and CSS definitions integrate with those rendered by the View. Test the integration of browser URLs with back-end Controllers, and consider invalid URLs. Check Cookies are set up and maintained as expected. Check images are referenced and downloaded.

CMS Integration
If required, check integration to the content management system.


2) End-to-End Integration
Testing the impact of state

Page Flow / Journeys
Testing transitions from one page to the next, i.e. view-controller-view testing. Also testing all html elements work as expected. If Html Action Integration is covered in the Functional Tests then the Journey test should assert success; otherwise there is little need to assert on data.
  • end-to-end brush journeys (specified by the customer) meeting the key business journeys fully integrated
  • end-to-end brush journeys covering all other page flows and all elements.
  • checking integration with up-line and down-line apps .. e.g. data set-up, deep links, submit order
State
Specific focus on testing that state is identified and the correct course of action taken. This is needed because state exists beyond a page and so has potential for inconsistency. This will highlight scenarios which may be invalid according to some state. We will need to keep a log of the attributes that have state.
  • state is set and maintained
  • state is changed when needed ... cookie expiry and clearing should reset state
  • check each state for each page or action for invalid combinations. Note some states can be threefold: unset, set, and cleared.
  • check that the back-end application is stateless by restarting the application and checking that journeys are still intact
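
Enumerating the state combinations to review can be sketched with `itertools.product` (the attribute names and their values are invented for illustration):

```python
from itertools import product

# attributes that carry state beyond a single page, each with its possible values
state_attributes = {
    "session": ["unset", "set", "cleared"],  # the threefold case noted above
    "basket": ["empty", "has-items"],
    "login": ["anonymous", "logged-in"],
}

# every combination is a candidate check; invalid ones get flagged for testing
combinations = list(product(*state_attributes.values()))
print(len(combinations))  # 3 * 2 * 2 = 12 combinations to review per page/action
```

Keeping the log of stateful attributes as data like this means the combination list updates itself as attributes are added.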


NON-FUNCTIONAL TEST
======================


Cross Browser Compatibility
  • Using a Browser automation tool, test the app's Javascript.
  • Run the Javascript elements with Browsers having Javascript disabled. 
  • Key end-to-end journeys could also be tested cross-browser to give an increased level of confidence. These could be the same journeys as defined earlier.
  • Page looks right tests ... capture screen-shots of all pages for checking rendering
Performance
  • Measure performance (time, resource utilisation etc) against an SLA, for a production load (number of clients over a given period) .. Third party services should be stubbed with representative latencies.
  • Prolonged usage (Soak) checking memory leaks
  • Spiked load
  • When resources are constrained (mem, bandwidth, db, cpu)
  • Performance on client vs server
Penetration
This is best done with an HTTP automation tool so that a fine level of message access is available. These checks can also be done manually or using a tool such as Skipfish.
  • click elements that are no longer valid (e.g. using back button)
  • hack posts and gets, to gain undue access or change data
  • cookie copying, hacking, exposing thru xss
  • query param hacking (e.g. sql injection)
  • xss script injection into input area or from back-end or into URL (<script type="text/javascript">alert('hello');</script>)
  • access files on web app filesystem
  • privacy

Site integrity
Against production data, automatically check that:

  • images exist
  • there are no broken links
  • CSS and JS exist
  • automated end-to-end tests pass on Live
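
A broken-link check can be sketched with the standard library alone (the URL-exists check is stubbed as a set lookup here; in production it would be an HTTP HEAD/GET against the live site):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects href/src references from a page."""

    def __init__(self):
        super().__init__()
        self.refs = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("href", "src") and value:
                self.refs.append(value)


def find_broken(html, url_exists):
    collector = LinkCollector()
    collector.feed(html)
    return [ref for ref in collector.refs if not url_exists(ref)]


page = '<a href="/home">Home</a><img src="/logo.png"><a href="/dead">Gone</a>'
live_urls = {"/home", "/logo.png"}  # stub for real HTTP checks
broken = find_broken(page, lambda ref: ref in live_urls)
print(broken)  # → ['/dead']
```

The same collector covers images, links, CSS and JS, since all of them surface as `href` or `src` attributes.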


Other NFRs
  • Scalability, load balancing
  • Security (SkipFish), DP, SOX, depersonalization
  • Release: documented release process, configuration, fall-back and internationalization
  • Backward compatibility
  • Disruption (failover, recovery, timeouts, back-end down for all message types, fall-back)
  • Backup and restore/recovery
  • Accessibility (alt tags etc)
  • Deployable across different environments
  • SEO


HEURISTIC
==========
Once bugs are fixed, measures should be taken to ensure they do not re-appear. Applying the Pareto principle, 80% of bugs will happen in 20% of the code, so identifying fragile code is important. A bug found implies an area of fragility, so specific tests based on the bug should be automated.



8) EXPLORATORY /  MANUAL TESTING
  • Incorporated into the QA process as a mandatory activity
  • once a sprint, get the whole team involved in a manual testing session 
  • manual tests should be recorded in a Wiki for reference
  • consider getting a domain specialist to support with back-end integration testing


9) USER SIGN-OFF


Review or demo



                                                                                                                                

In order that the quality can be reported on, the following are required:


10) TAGGED BUILDS
So that reports can be tracked.


11) METRICS
  • Test metrics such as: test counts, failures, severity, known bugs, by run, by browser etc.
  • AC coverage reporting demonstrating that requirements were being met in the tests. This could be achieved through the use of a BDD tool.
  • Regressions need to be referenced to the original AC and this can also be achieved through the use of a BDD tool.
                                                                                                                                
And finally, in order to work in an Agile way the following are required ....


12) AUTOMATED FEEDBACK THROUGH CI
On check-in of code a set of automated jobs need to be run to verify that the code has not introduced regressions.


13) AC QUERIES DURING STORY PLAY
In an Agile world a key challenge is to ensure that areas of ambiguity are addressed at some point. A Wiki should be maintained where QAs can place queries for BAs/POs.

14) A STORY LIFECYCLE WITH QA IN MIND
  • Effective pre-planning of stories
  • QA-dev review before playing a Story
  • Dev-QA review of: app, IDs, ACs, and unit tests before moving from in-dev to in-test
  • Process for dealing with bugs; raising, on-board, priority, regression
  • Story reviewed with UX and BA/PO after testing
  • Refactorings having their own cards



15) AGILE MINDED QAs
QAs who have strong collaboration skills, are technical, are flexible, and appreciate working functionality.


                                                                                                                                

OK, so there's a little more than 10 tips, but as I mentioned earlier, in Agile you need to be flexible!