(Please first read my Introduction to these comments on your papers.)
Reconciling difference and similarity
Your opening sentence is certainly true: For business objects to be reusable, they must be at the right level of abstraction.
Then under your heading Different Models you correctly emphasize some of the real and difficult problems lying in wait for anyone foolish enough to try to provide for the degree of commonality between needs that would make reusability feasible.
Finally (in your last sentence), you seem to adopt a very cautious though obviously practical approach:
How to make different models interact will be one of the more interesting and important research problems in this field, because I am convinced that there will be many specialized models for business objects, and they will have to interact.
Well, here is that person foolish enough to try (as I said in my paper and faq): I am asserting precisely that you can have your cake and eat it! You can have full relativity (difference) plus fine reusability (similarity).
First, let me rephrase your Different Models need. I'd prefer to say that the basic architecture must allow many possible views on a single application model. That's part of what I mean by "relativistic realtime" for a hypothetical single reality [paper(OMG back on track), 6]. It also seems largely consistent with the notion of "subjectivity" (e.g. as mooted for OOPSLA94 "Workshop 18: Subjectivity in Object-Oriented Systems").
However, document- or desktop-centric information system architects might tend to disapprove of the notion of globality that a single-model assumption seems to demand. Certainly, those who regard the Web with a romantic eye, approving its chaotic vibrancy, might prefer not to think of such an ordered concept. But I don't think you are as anarchic as that!
I would respond that, yes, its unity is merely virtual: no one user could ever see it all in our madly distributed and dynamic world. But any one user should in theory be able to see as much as he or she might wish to see. That implies the conceivability - if not the practical possibility - of a single model.
Next, any two users must certainly have something in common in order to be able to communicate at all. And however small that commonality is, it is a basis for communication and reuse, including application-to-application communication and commonality.
It is one of the functions of analysis to identify those commonalities between the various actors in a system, and of synthesis and design to establish and build on whatever degrees of consensus can be discovered or invented. And ongoing analysis and design must reshape it where necessary, as our representations are always merely provisional. That holds everywhere, including in MACK (so I look forward to your evident talents being applied there!)
Different views ("models" in your terms) do not have to be synchronized at all times. That would be what I call "naïve realtime" (as reflected, for example, in spreadsheets, ACID-based transaction processing systems, and conventional in-process inheritance). But there definitely are times when your Different Models - accounting systems, workflow systems and document processing systems - do need to be synchronized. That is one of the functions of closes. It is a simple consequence of the need for coordinated planning and execution, all most easily - I think - based on well-conceived and organized notions of state [Stefan, Wolfgang].
Now we are on our way to highly sharable objects, business or otherwise. And that's where "The Mainstream" and all that MUCK clicks in: in the general marketplace there is certainly a whole lot that we all have in common. (We just need to discover and represent it...)
Sure, that is a rather theoretical "proof", but in practice EDI has been proven possible, and the ANSI ASC X12 people themselves - whom we should regard as being more dyed-in-the-wool practitioners than OO theoreticians - have stated their vision of a next generation of EDI which goes beyond mere syntax (which is good for fitting into a Different Models scene) and aims for semantic interoperability: "EDI documents will be replaced by electronic messages, semantically complete units of information used within a clearly defined business context." (From the May 1995 document: "Vision of the Future State: EDI and ASC X12 An Initial Report of the Organization & Procedures Task Group")
There they are unwittingly describing a MACK-compliant world, with my "relativity" tying in with their "within a clearly defined business context"! And that "context" also defines the limits of the relevance of reuse. MACK messages encapsulate that shared context in the agreed-on Common Knowledge between the two parties that is the basis for reliable interoperation.
So your many specialized models will soon exist, specified according to one architecture without undue oversimplification.
Oversimplifications will still be made, even following MACK. That is inevitable (as everyone knows and as Odysseus was warned as he approached Scylla). But they will be more easily prevented, detected and managed in the MACK-compliant design environment and the MACK-enabled open market (where I've borrowed from the "Errors: their prevention, detection and management" workshop title I introduced in [3(firstly)]). Part of that management is recovery from design errors, which is where MACK's version migration power gets to work [6, 8, 10].
Your diversity will rule well, thanks to the commonality that gives reusability yet also enables the well-known diversifying effect of the advanced market.
Sorry, Ralph, again, about all that further abstract wordiness, but you will see it in due course. You'll even enjoy putting Accounts onto it. In the meantime I must just leave you with the coherence between all those ideas, and the correspondence with those of others!
Also, à propos Accounts and the "MonthEndClosing" of your wiki*.html pages, I'd like to refer you to my own generalizations on such interesting - even fascinating! - things as accounting closes. They are in Appendix B of my book (advertised in the last item of ), which was the "time component of data" 1984 paper I have made so much of (Find that phrase in the faq, and the "cutoff" synonym for "close" used there).
On your "Behavioral Patterns"
Prompted by Paul Evitts' citing of your (or the Gang of Four's) "Chain of responsibility" pattern, I would like to make some comments on the chapter (Chap 5, Behavioral Patterns) from which it comes. They also relate to your Accounts.
Together, many of your Behavioral Patterns address parts of the very problem-set that the "innocent questions" section in my paper addressed and that is inherent in all multiply-interconnected situations. The latter are of course rather important to my whole theme, interconnectedness being the abstract or conceptual equivalent of complexity. (Please note: I have only just looked at that chapter, so I am in luck to find that common interest to start me off again!)
I start by setting the scene that both of us are addressing (even though from different viewpoints).
The need is to discover and pursue the implications in all presently relevant contexts of any single and apparently isolated event. It is "forward-chaining" in a distributed-logic world. In OO it is all about object collaboration.
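To make that "forward-chaining" notion a little more concrete, here is a minimal sketch of my own (an illustration, not MACK code; all the names are invented): a single, apparently isolated event is propagated to every registered context, each of which may raise follow-on implications, until the whole implication set has been discovered.

```python
from collections import deque

class Context:
    """A hypothetical interested party: reacts to events, may raise more."""
    def __init__(self, name, reactions):
        self.name = name
        self.reactions = reactions  # maps an event to its follow-on events

    def handle(self, event):
        return self.reactions.get(event, [])

def propagate(initial_event, contexts):
    """Forward-chain one event through all contexts until quiescence."""
    seen, queue = set(), deque([initial_event])
    while queue:
        event = queue.popleft()
        if event in seen:
            continue  # avoid cycles in the implication graph
        seen.add(event)
        for ctx in contexts:
            queue.extend(ctx.handle(event))
    return seen

contexts = [
    Context("accounting", {"sale": ["post-ledger-entry"]}),
    Context("workflow",   {"sale": ["schedule-delivery"],
                           "post-ledger-entry": ["month-end-flag"]}),
]
print(sorted(propagate("sale", contexts)))
```

The point of the sketch is only the shape of the problem: one "sale" fans out into consequences in contexts its originator never knew about, which is exactly the multiply-interconnected situation under discussion.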
Such "request analysis and method despatch" would of course be a central function in any realization of the "generalized object model" whose dropping from the OMGís OMA I so regretted in my paper.
We must also bear in mind that, in general, in a componentized world the resolution must allow for multiple independent designers, and the consequences may be discovered either at design-time or in run-time, or both, depending on the degree of binding.
The behavioural consequences may be non-temporal and purely logical, hence needing only to be seen in algorithm time, or they may have a realtime component that is relevant to users, that is, there are workflow implications. That would of course be "relativistic realtime" [paper(OMG back on track), 6, Jeff(Marimba)].
So what do we do about that whole complex scene?
The key to the successful and clear resolution of that general problem lies in how chains of responsibility are specified, or, more generally, in the terms in which a multi-object-operation is analysed and understood. MACK has one appropriate and general-purpose key. However, it is rather central to the whole architecture so I shan't go into its details here.
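For any reader without the chapter to hand, the essential shape of the Gang of Four's Chain of Responsibility can be sketched as follows (a minimal illustration; the handler classes and amounts are my own invention):

```python
class Handler:
    """Each handler either resolves a request or forwards it on."""
    def __init__(self, successor=None):
        self.successor = successor

    def handle(self, request):
        if self.can_handle(request):
            return self.resolve(request)
        if self.successor is not None:
            return self.successor.handle(request)
        return None  # the request fell off the end of the chain

    def can_handle(self, request):
        return False

class Clerk(Handler):
    def can_handle(self, request):
        return request <= 100
    def resolve(self, request):
        return "clerk approves"

class Manager(Handler):
    def can_handle(self, request):
        return request <= 10_000
    def resolve(self, request):
        return "manager approves"

chain = Clerk(successor=Manager())
print(chain.handle(50))     # clerk approves
print(chain.handle(5_000))  # manager approves
```

Note how the chain itself is specified only implicitly, by the successor links: precisely the specification question raised above.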
But it is also the problem-area that your Behavioral Patterns address, and it seems to me that you (the Gang of Four) are unduly modest and have (provisionally?) ignored a number of important issues.
Once again I start with my rather hackneyed "time component of data", and shall focus on two related aspects of it: user realtime and workflow, and transaction definition and management.
On user realtime, I would like to quote, with approval, your own rendering (in a 21 Jun 1996 e-mail to Jeff's obj-tech mailing list) of one of the points from the IEEE article "Basic reading for OO developers":
Control flow that is visible to the user should not be hardcoded into a class.
It's not clear whether you approve of it, and certainly, I suspect that it might have a major impact on conventional OO design. Anyhow, I definitely do approve of it! It seems to accord completely with my observation in my paper about the "thorough confusion of algorithm time and real time" that MACK resolves [Wolfgang, 6] but that your Behavioral Patterns chapter does not address at all.
On the other hand I must add that your declarative specification of processing in Accounts (in place of hard-coded Cobol program logic) seems to be very consistent with the MACK approach (which is itself of course nothing new). So we are closer together than I may seem to be making out...
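To illustrate the principle under my own assumptions (the step names and handlers are invented), the user-visible sequence of steps can live in a data structure that is inspected and interpreted, rather than being hardcoded into any one class's method bodies:

```python
# The user-visible flow is declarative data, not hardcoded method logic.
checkout_flow = ["enter_address", "choose_shipping", "confirm_payment"]

def run_flow(flow, handlers):
    """Interpret a declared flow; reordering the list reorders the dialogue."""
    results = []
    for step in flow:
        results.append(handlers[step]())
    return results

handlers = {
    "enter_address":   lambda: "address captured",
    "choose_shipping": lambda: "shipping chosen",
    "confirm_payment": lambda: "payment confirmed",
}
print(run_flow(checkout_flow, handlers))
```

Changing the user's experience then means editing data, not recompiling a class, which is the same spirit as the declarative specification of processing in Accounts.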
On the transaction issue, it seems to me that such usually DBMS-related issues are rather obvious in their absence from the discussions of your Behavioral Patterns. Command (aka Transaction) seems to get the closest, but - doubtless on purpose - you steer clear of the db-transaction issue. Maybe you do so precisely because of its realtime aspects and you want to emphasize algorithm? That would make a lot of sense, and would be consistent with a tiered architecture approach.
However, that makes it a good example of the oversimplifications that tiered architectures invite!
For example, "pure algorithm" is never so pure: error handling can at any time transform it into a matter for the enduser, and hence bring user realtime into the picture. (In a MACK world that is a mere transfer of context, and fully provided for in a very central way.)
The general problem of error-management is further necessary background for the other point: transaction definition and management, whose traditional separation from the algorithm implicit in Business Rules seems to me so oversimplifying.
More interestingly and importantly, there is the complex interaction between undos and asynchronous behaviour on the one hand (both of which, for example, your Command pattern supports, as you explain) and concurrent transactions, where unduly restrictive human procedures, resource waits and other resource conflicts lurk at every corner. (I expect that set of issues at least partly explains your renaming of that pattern from Transaction to Command?)
What an enormous though challenging scene! See [Stefan] re undos, [Wolfgang] re the conventional ACID transaction, and [Jeff] re tiered architectures. Maybe the next edition of Behavioral Patterns will address such issues? (By the way, you might also compare your Command pattern with the "activity instance" concept which I quoted in [Stefan], with its reference.)
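The undo side of that interaction is easy enough to sketch in isolation (my own minimal rendering of the Command-with-undo idea; the Deposit command and account structure are invented for illustration). What the sketch deliberately omits is exactly the hard part discussed above: what happens when concurrent transactions have seen the state an undo is about to reverse.

```python
class Command:
    def execute(self): ...
    def undo(self): ...

class Deposit(Command):
    """A reversible command over a shared account balance."""
    def __init__(self, account, amount):
        self.account, self.amount = account, amount
    def execute(self):
        self.account["balance"] += self.amount
    def undo(self):
        self.account["balance"] -= self.amount

class CommandLog:
    """Executed commands are kept so the most recent can be undone."""
    def __init__(self):
        self.history = []
    def do(self, command):
        command.execute()
        self.history.append(command)
    def undo_last(self):
        if self.history:
            self.history.pop().undo()

account = {"balance": 0}
log = CommandLog()
log.do(Deposit(account, 100))
log.do(Deposit(account, 50))
log.undo_last()
print(account["balance"])  # 100
```

In a single thread this is tidy; under concurrency the popped command may reverse state that another party has already read or built upon, which is where the resource conflicts mentioned above begin to lurk.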
The above issues all lead to DBMS. I slot these comments in under your paper also because they are invited by your assertion (which I've already quoted) that there will be many specialized models for business objects, and they will have to interact. I presume "tiered architectures" are part of your whole Different Models scene?
Well, as I've already said above and in [Jeff, Wolfgang], they seem unfortunate in the light of the advantages of integration such as coherence and efficiency. They may oversimplify many issues (as I've just tried to indicate above), and they certainly often overcomplicate (I speak with some feeling here, having fairly recently designed general-purpose indexing and retrieval software that fits into the tiered architecture scene, and led the team which successfully programmed and implemented it).
As a further aside before getting to DBMS proper, I can't help noting how I have rather had to squeeze DBMS into my comments on your paper, as I have not managed to slot it in under any of the other papers: all have steered so clear of it! That omission - which is of course quite understandable in the context of conventional architectures - must be bad news for the OODB or ODMG people, as I suspect that your unspoken collective assumption is to continue to rely on SQL or ODBC while ignoring the ODMG's work. And what a confusion that indicates in conventional DB/OO architectures! And what a hurdle still for Java trendies...
Now on to conventional DBMS as such, still relying as it does on SQL.
I have a problem with command-line approaches such as SQL (hence TSQL2 too): semantically they are too contextless, hence they miss many situation-specific simplification opportunities.
Crudely speaking, a database schema is too large or complex, while the very notion of a subschema - however it is defined or determined (e.g. in terms of permissions or accessibility) - has the wrong conceptual shape and is generally too static, simplifying in a way which can only be appropriate for one small set of purposes and not for all the others (the world being complex). The very existence of OLAP presentation tools supports that thesis (and ROLAP does not contradict me). Further support comes from the non-existence of subschemas in current OO.
For example, a subschema knows nothing of the reasons for its existence or its boundaries, nor do programs have any way of relating to its intentions. The gap between a subschema and a full schema is a no-man's-land riddled with the mines of questions about collaborations (cf. the joined-view-updatability problem).
Programmatically generating SQL only compounds the whole problem (much as generating HTML complicates the whole MVC scene).
MACK reintegrates processing and database management (consistently with the "persistent object store" notion) using its dynamic and variable-granularity typology concept, organized in relativistic terms. It also integrates the human interface into the total managed picture (with views, presentations, representations or whatever), and - true to spec - really helps simplify the relevant complexity relevantly. (Hmmm, that's not very helpful, is it? But I am now steering rather too close to the "technical black hole" I am not allowing you to enter yet. See however [Wolfgang(MACK workflow)].)
Here let me merely state some general conclusions:
The real requirements of transaction management are too complex for the traditional "divisions of labour" or modularizations or layerings to be applicable. (And that includes any "pure algorithm" concept, as we've just seen.)
MACK takes the bull by the horns: in emerging applications there is indefinite complexity, which translates in our models to indefinite interconnectedness. That is a major opportunity for the MACK approach where coherence and consistency are fundamental goals and are conceptualized in such a way (using typologies, of course, together with the "RE" concept [paper(synthesis)]) that they can be pursued practically. The generality of that architectural formulation is so high that its realization is feasible and its applicability virtually universal.
Of course that doesn't help you see how it's done, but is it likely that anyone would dare to formulate such a statement - after pointing out or facing up to so many difficult problems throughout that enormous area - without having a way to put it into practice?