Business Process Execution Testing (BPET) is the new mantra for software testing. The idea has actually been rattling around in my mind for some time; I am finally acknowledging it in light of the challenges with one of my recent implementations. I was struggling to put those thoughts together and consolidate them into a framework. Then this new acronym came to mind and I said, let's just express it. I just hope no one has used this acronym before!
But anyway, I think one of the fundamental reasons for high customer dissatisfaction with software systems is the lack of coherence with operational processes. The traditional software testing approach is bottom-up: it starts with unit testing and graduates to integration testing and system testing. As a result, the software is tested and verified for the way it was built and/or designed to be built. However, when put to use, there are significant operational gaps that make the system of little use and reduce its acceptance by the end-user community. This has to change!! The wave is of business-process-driven application and system design, and testing needs to take this paradigm shift head on. The shift can be absorbed and addressed by adopting the BPET philosophy!
First of all, this philosophy is geared towards testing the software the way it is going to be used. How? You guessed it: by mimicking business processes. The approach starts with understanding the key business processes that support the organization and mapping them to the system, identifying the ownership and roles of the system and the various components that support each process. The rest of the plan is to define and develop test cases that verify the execution of these processes and the success or failure of the system in supporting them.
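As a minimal sketch of this idea, a BPET-style test case walks an end-to-end business process, step by step and in the order the staff actually perform it, rather than exercising one unit at a time. The order-fulfilment process, the `OrderSystem` stub, and the role annotations below are all hypothetical, invented purely for illustration:

```python
# A BPET-style test: verify an end-to-end business process, not a unit.
# The OrderSystem stub and the order-fulfilment process are hypothetical.

class OrderSystem:
    """Stand-in for the system under test."""
    def __init__(self):
        self.orders = {}

    def place_order(self, order_id, item, qty):   # role: sales clerk
        self.orders[order_id] = {"item": item, "qty": qty, "status": "placed"}

    def approve_order(self, order_id):            # role: supervisor
        self.orders[order_id]["status"] = "approved"

    def ship_order(self, order_id):               # role: warehouse staff
        if self.orders[order_id]["status"] != "approved":
            raise RuntimeError("cannot ship an unapproved order")
        self.orders[order_id]["status"] = "shipped"


def test_order_fulfilment_process():
    """Mimic the business process in the order staff would perform it."""
    system = OrderSystem()
    system.place_order("PO-1", "widget", 5)
    system.approve_order("PO-1")
    system.ship_order("PO-1")
    assert system.orders["PO-1"]["status"] == "shipped"


test_order_fulfilment_process()
print("order fulfilment process verified")
```

The point of the sketch is that the test's steps are named after business actions and roles, so a subject matter expert can read it and confirm it matches how the work is really done.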
But two fundamental changes do exist relative to the traditional approach to testing. One is that the actors in this case (the testers) are not traditional software testers but subject matter experts and operational staff who understand the business and have significant domain knowledge. The second, and highly controversial I assume, since it requires a radical mindset change, is defect prioritization. Under this philosophy, a bug that crashes the application shall get a medium rating if it carries minor operational risk: the situation that produced it is an exception that is unlikely to occur, or there is a reasonable workaround or avoidance strategy. At the same time, a result that is not even technically an error, say a degradation in performance during regular hours, may be a showstopper, since it means the customer help desk cannot respond and the 1000 calls being handled every hour are not getting addressed. I think this is a paradigm shift for both the business stakeholders and the software developers, but it is the right way to execute the test strategy. After all, it doesn't matter whether the software is perfect or not as long as it meets the business process needs. BPET is the new mantra. Amen!!!
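One way to make this prioritization explicit is to score each defect on operational impact as well as technical severity, and let the operational score dominate. The scales, the weighting rule, and the two example defects below are invented for illustration, not a prescribed scheme:

```python
# Prioritize defects by operational risk, not technical severity alone.
# The severity/impact scales and the example defects are hypothetical.

def bpet_priority(technical_severity, operational_impact):
    """Return 'showstopper', 'high', 'medium', or 'low'.

    technical_severity: 1 (cosmetic) .. 5 (crash)
    operational_impact: 1 (no process affected) .. 5 (core process blocked)
    Operational impact dominates: a crash with a workaround ranks below
    a slowdown that blocks a core process like the help desk.
    """
    if operational_impact >= 5:
        return "showstopper"
    if operational_impact >= 3:
        return "high"
    if technical_severity >= 4:
        return "medium"   # serious technically, but minor operational risk
    return "low"

# A crash in a rare, avoidable scenario rates only medium.
print(bpet_priority(technical_severity=5, operational_impact=2))
# A peak-hours slowdown that stalls the help desk is a showstopper.
print(bpet_priority(technical_severity=2, operational_impact=5))
```

Whatever the exact scale, the design choice is the same one argued above: the business process, not the stack trace, decides what gets fixed first.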
I shall elaborate further on this area in future blog updates!!!
Sunday, July 10, 2005
6 comments:
Your comments on testing are interesting, and the notion that system functions should be validated as they would be used is certainly not new, even to software. I also find it interesting that you seem to assert that testers are most often technical testers and not SMEs or non-technical types. What you call "BPET" is very much like how we conduct testing. Our tests are very much based on the business process that the application is to support. In fact, the application design is based on the business processes.
Perhaps it is that your experience seems vastly different from mine. In my experience, testing is done not only by developers or testers, but by SMEs and non-technical users, and even people who might not actually ever use the system. And testing is done all the while that the software is being developed. In fact, development and test are functions that go hand in hand. Developers and testers work closely together as peers. Code is crafted during the day and tested during the night, and the defects that are found are fixed before any new functionality is added. Our philosophy is that we code, build and test nearly every day. If a developer adds code that causes the build to fail, they must fix their code until the build succeeds. The next day, before any work starts, the team meets to examine the prior day's work, characterize and prioritize the defects, and agree on what work gets attention that day. And we don't just test the system's functionality. We test its performance too. We look for things that could slowly degrade performance by running performance tests for days, even weeks, to see how the application behaves day over day.
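The daily code-build-test rhythm described above can be sketched as a small gate script: build first, then test, and stop on the first failure so defects are fixed before any new functionality is added. The build and test commands here are placeholders for whatever a team actually runs:

```python
# Sketch of a daily build-and-test gate. The commands are placeholders;
# substitute the team's real build and test invocations.
import subprocess
import sys


def run_step(name, command):
    """Run one pipeline step in a shell; report whether it succeeded."""
    result = subprocess.run(command, shell=True)
    ok = (result.returncode == 0)
    print(f"{name} {'succeeded' if ok else 'failed'}")
    return ok


if __name__ == "__main__":
    # The daily heartbeat: build, then test; halt on the first failure.
    for name, command in [("build", "echo building"),
                          ("tests", "echo testing")]:
        if not run_step(name, command):
            sys.exit(1)
```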
Two things happen when we behave this way. One is that the application we are developing takes on a life of its own, and the daily cycle is kind of like its heartbeat. No one writes bad code (or at least, they try to write better code), and when we release our betas, they work pretty well. When we release our product candidates, they work pretty well, and if we have to fix bugs or make patches after release, they work pretty well. Building and releasing is almost second nature to our teams.
Oh, and we still do business acceptance testing after we build out the release candidate.
The method you describe seems like the way we used to build and test applications in the "old days".
I find the idea of BPET refreshing, but I think you need to go beyond testing to say that design, development, and testing all have to go back to "requirements", or business processes. Yes, testing has to look at the business processes, but every stage needs to link back to the basics of the business processes and other requirements. This certainly puts a burden on the requirements process to be accurate and complete. There is a tendency to shortchange the requirements process to get to coding, and that gets manifested in testing not having a business-process anchor, so testers are left testing programs rather than how programs support requirements. The entire process is linked, but first you need to be sure you know what the customer wants and can afford to have. Everything else builds on that.
I think part of the reason to make these ideas more explicit is to become more efficient with our resources and crisper with our ability to consistently discern top priorities.
The first commenter above, while seeming to already embrace the basic notions, may well be investing more resources in certain parts of the system and system test than they deserve. If he's operating in an environment where programmers never focus on technical issues without consistently evaluating their importance to business processes, then he's working in an environment that I've never encountered in twenty-some-odd years in the industry.
Everything revolves around requirements... Requirements Analysis, Design, Development, Testing & Implementation.
The RUP process attempts to address what is referred to here as 'BPET'. As I see from the comments above, (almost) everyone seems to have a good understanding of this subject.
The intent is clear, but the lacking factor is Direction & Commitment in adhering to Standards.
Software Testing is an ART, DISCIPLINE & CRAFT: for a software test group, I would recommend the testing principles by Glenford Myers in his book 'The Art of Software Testing'.