ARIS Community - We Love BPM

Mixing Process Design and Implementation Details is Evil

by Sebastian Stein in ARIS BPM Blog posted on 2008-08-25

In my previous post I provided an overview of business-to-IT transformations. I pointed out that most tools do not actually provide a transformation, but instead push technical details up to the business level and misuse a business process modelling language like BPMN as a visual representation of BPEL. Now I have received a lengthy response saying that this is not a problem, but a feature. So was I wrong in my initial post?

I received a lengthy answer to my previous post from Scott Francis, which you can view online. For those not in the mood to read this long post, here is a short summary of how I understand Scott’s line of argumentation:

  • He starts by saying that I want no technical details in the business process model and that such a model should be stable with respect to technological change.
  • He points out that BPMS tools support purely business-oriented modelling, so not every modelled process has to be executed.
  • He says business processes usually change faster than technology, so the business process won’t be stable.
  • He says the key is having a stable set of performance indicators to evaluate whether the business processes have improved over time.
  • He says it is better to have a business process model with technical details, because otherwise a business expert can’t verify that the process was implemented correctly.
  • He concludes that if one has to choose between a modelling tool and an execution tool, one should go for the execution tool, because the implemented process rules!

Well, first I’d like to thank Scott for those arguments, because they give me a nice way to illustrate my point. However, let me first clarify some things. I never said that a business process meant for automation should not be informed by technology. I just said that such a business process should not contain technical details. Of course it doesn’t make sense to design a process meant for execution with an ambiguous control flow that can never be executed on a machine. Also, if some kind of IT support is envisioned for a function, this should be detailed in the business process model, too. But technical artefacts like exception handling, data manipulations, and variable definitions don’t belong in a business process. They are technology-specific and don’t provide any valuable information to the business expert. So Scott’s first two points are directed at claims I never made.

I agree with Scott that business processes might evolve faster than IT. I don’t think this is a general rule; it really depends on the company context. Still, the evolution of the business process and of the implemented process must be kept separate, with synchronisation between the two. So if the IT implementation removes a business function because its functionality is already covered by a service invoked earlier, this must be communicated to the business experts! However, if the IT implementation adds a new exception handling routine to select another service instance in case the default one is unavailable, I don’t see why business experts should care.

I also agree with Scott that measuring process performance is important. However, my original post did not deal with that point, so I’m not sure why he brings up this topic. My general feeling is that Scott is not dealing with processes implemented on several different technologies at the same time. He suggests an agile approach where user requirements are turned into an implementation on the fly, neglecting the need for a requirements specification. Such an approach is fine if you never need the requirements specification again. But it will fail if you have to implement the same process on a different middleware some months later (while the one implemented before is still deployed and in use). In that case, you will have to reverse engineer the requirements from the implemented process.

So it might be that Scott’s use case is just different. If you are using a single execution technology and you know for sure that it won’t change, it is OK to add platform-specific details to the business process model. However, you still face the problem of explaining to your business experts why strange constructs like fault handlers, correlation sets, and copies between different variables are needed.
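To make concrete what I mean by such constructs, here is an illustrative WS-BPEL 2.0-style fragment; the element names are standard BPEL, but the partner link, variable, operation, and correlation set names are invented for this example:

```xml
<!-- Illustrative WS-BPEL 2.0 fragment. All partner link, variable,
     operation, and correlation set names are invented. -->
<scope name="InvokeCreditCheck">
  <faultHandlers>
    <!-- Purely technical: fall back to another service instance -->
    <catch faultName="ns:serviceUnavailable">
      <invoke partnerLink="backupCreditService"
              operation="checkCredit"
              inputVariable="creditRequest"
              outputVariable="creditResponse"/>
    </catch>
  </faultHandlers>
  <sequence>
    <!-- Purely technical: copy data between message variables -->
    <assign>
      <copy>
        <from variable="orderMsg" part="customerId"/>
        <to variable="creditRequest" part="customerId"/>
      </copy>
    </assign>
    <invoke partnerLink="creditService" operation="checkCredit"
            inputVariable="creditRequest"
            outputVariable="creditResponse">
      <correlations>
        <correlation set="orderCorrelation" initiate="no"/>
      </correlations>
    </invoke>
  </sequence>
</scope>
```

None of this tells a business expert anything about what the process achieves; it only tells the engine how to wire messages and recover from faults.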

Site Administrator posted on 2008-08-25


Scott said:

I don’t think this is a debate between good and evil :) It isn’t quite so epic in proportion. Although it does feel like a debate between the ivory tower and the real-world implementation of processes.

I’ll go by my points, since that is the structure you chose as well:
1. You claimed my first two points were invalid because you said technical detail doesn’t belong in the process. Well, what if the process depends on something that is a nightly batch? That’s a technical detail. In or out of the process? What about inputs to a process activity or step? Outputs? Are those technical details or not? (It isn’t clear what counts, so it’s hard to say whether your line in the sand is correct.) It is fair to say that not all technical details belong at all levels of a business process (in my world, business processes may be nested, like a Russian doll), but at some level of abstraction those technical details will make sense (especially if we’re in an implementation rendition of the process). Based on your exclusion of exception handling and data manipulation, it simply sounds like you are advocating a “cleaner” business process diagram. In fact, to a point I agree with that - you hide these details at a lower level of the process definition (a level that you would likely not even call a process, but one that the OMG would still describe as a process), because the implementation details of exception handling rarely, if ever, affect the top level or two of a process.

2. You said you invalidated this point, but I didn’t see it in your argument…

3. I have done work for companies with mainframe systems that are 50 years old and still a core part of their business. The business processes have changed substantially, but the core technology is still there… IT assets that are core to the business are difficult to change. The risk associated with change is high, and the cost of switching technologies is VERY high. BPMS solutions provide an opportunity to further leverage those assets while introducing a more agile process-oriented layer above them that allows the business to reconfigure its interactions with such back-end systems. A BPMS doesn’t replace all those technologies, but allows you to put them to work in ways that suit business processes that are new, modified, or evolving. As for IT selecting a new service to implement some back-end function: you’re right, the business experts shouldn’t necessarily care, but that doesn’t mean the element isn’t part of an IT process that should be represented in a process-oriented (but executable) diagram. And they might care if that new service fails to meet previously expected SLAs or terms of service…

4. I realize that you never brought up stable performance indicators. I’m pointing out that this is the stability that really matters, to further illustrate that stability of the business process is not really a goal customers should be, or even are, attempting to achieve. They are attempting to achieve STANDARDIZATION of process, because that helps you achieve division of labor, apply the theory of constraints, and improve the process more effectively. But you presume wrongly when you assume I’m not dealing with heterogeneous environments. The environments of our customers are very much heterogeneous. However, our customers have chosen a BPMS to help them navigate that heterogeneity and extract a common (standard) process across it all. You sure as heck don’t want to implement the same process in two different middleware layers - that path leads to madness in IT. You implement one process and, at its integration points, integrate it with the appropriate systems. The integrations can be “smart” - knowing that in some cases you integrate with system A and in others with system B, rather than always assuming system A. But these decisions are based on the context data of the process instance you are running… If someone said “here are these requirements, now go and implement them in Java and C++”, people would think it was crazy. Even though we’re talking “process” and not “code”, what you’re saying amounts to the same thing! Moreover, if the process did need to be reimplemented, the BPMS tools I have used support enough documentation to represent the business requirements - the implementations at the BPMN level would be substantially the same, and the differences would only be nods to the differences between the tools, which would have to be there anyway if I implemented the process twice…
(Moreover, I would never suggest you just “change on the fly” - although, compared to traditional 18-month development cycles in IT, a 3-4 month BPMS cycle will feel that way. Requirements are still a valid thought process, but model-only isn’t necessarily a valid part of that thought process.)
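To sketch what such a “smart” integration might look like in WS-BPEL 2.0 terms (the elements are standard BPEL; the partner links, operation, variables, and the region field are invented for illustration):

```xml
<!-- Illustrative WS-BPEL 2.0 routing: pick the back-end system from
     process-instance context. All names except BPEL elements are invented. -->
<if>
  <condition>$orderMsg.header/region = 'EMEA'</condition>
  <invoke partnerLink="systemA" operation="submitOrder"
          inputVariable="orderMsg" outputVariable="orderAck"/>
  <else>
    <invoke partnerLink="systemB" operation="submitOrder"
            inputVariable="orderMsg" outputVariable="orderAck"/>
  </else>
</if>
```

One process, one model - the branch decides per instance which back-end system to call.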

Having participated in the implementation of over 100 production processes, I’ve found the “problem of explaining to business experts” the strange constructs of implementation to be minimal. However, I HAVE had to spend significant time explaining BPMN. It has some complexity to it, especially with respect to splits and joins - and this has NOTHING to do with the implementation subsystem, just the modeling notation itself. And most business experts aren’t familiar with Petri nets and concurrent programming. So I give them some rules of thumb to use around those elements, and fully expect that I’ll have to refine them to get to an executable model representation. My use case is different - I’m worried about how to get customers’ process ideas implemented in production (using whatever technologies make sense), and I just don’t see the model-only paradigm as adding much value to the process. A company can spend a year modeling all of these things… and then start implementation… only by then the processes have changed… now what? And we’ve postponed the ROI we could get by implementing quickly and starting the continuous process improvement cycle…


Sebastian Stein posted on 2008-08-27


Hi Scott,

My colleague Olaf has taken up the topic in another post. I’m in a bit of a hurry at the moment, so I hope his post clarifies some points. About the first point in your comment: we say, don’t mix technology-specific details into your business process model. So of course you need to add inputs and outputs to your model, but not the specific XSDs, because XSD is a specific technology.
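To give a made-up example of the distinction: at the business level the model would simply state that the function needs a “customer order” as input; only at the implementation level would this be bound to a concrete XSD, for instance something like:

```xml
<!-- Technology-specific detail: an XML Schema for the "customer order"
     input. Element and type names are invented for illustration. -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"
           targetNamespace="http://example.com/order">
  <xs:element name="CustomerOrder">
    <xs:complexType>
      <xs:sequence>
        <xs:element name="customerId" type="xs:string"/>
        <xs:element name="orderItem" type="xs:string"
                    maxOccurs="unbounded"/>
      </xs:sequence>
    </xs:complexType>
  </xs:element>
</xs:schema>
```

The business expert needs to know that a customer order flows into the function; the schema binding belongs to the implementation.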