(Please first read my Introduction to these comments on your papers.)
Where will the revolution be?
You open with this:
The Java Revolution
"A revolution has got to leave the world with a totally different view of itself -- its got to be a paradigm shift… When you've got a revolution like this, don't think about applets. We're talking about a situation where the whole content of your machine is going to have a totally different shape[to] it. As a result the whole society, the whole commercial environment around all software is going to look totally different. The rules we're defining now -- the way you look at your system, the way you define protocols, the way objects talk to each other -- are defining what those possibilities will be."
But was the quoted author referring to Java? After the applets referred to, was the next implied step Java applications? Or something entirely different? It was a very generic statement, and one which could just as well apply elsewhere: it certainly applies to MACK!
The need for tools to help us simplify complexity
You sure have extracted some project data nuggets! I liked this collection particularly, especially with your introductory comment:
It is not news that there is a software crisis. The news is that it is getting a lot worse:
If you were to look for one short "explanation", this would be mine: there is too much complexity, and the software tools for helping people simplify it are inadequate. My statement would of course not be very helpful just like that, but you know by now that it comes on top of a soon-to-emerge toolset with that as maybe the best one-line description of the needs it addresses.
That would also be a "short explanation" for the giant hidden backlog of unmet and even unaddressed needs of every kind everywhere else. The hidden or opportunity costs of the problems you highlight are so enormous (mere lack of communication, for example, being an undisputed contributor to virtually all social problems) that a proper IT-driven elimination of even a fraction of them would bring an order-of-magnitude improvement in the resources of all kinds applied to the world's problems.
Since this is a Business Object Workshop I have not gone into those problems at all, or how the MACK-boosted market can be expected to address them, other than to allude to them in [2, 3] and in odd other places such as [Haim, Ken] and this final paragraph from my paper:
"The boosted information-product market will do the same for the whole market. On the supply-side, the Civil Society phenomenon will rise to a higher quantum level of accuracy and diversity throughout the political economy. From the demand-side, more convenient and congenial hence fuller democratic involvement will ensure a more representative supply. Riding The Mainstream, every consumer/citizen will genuinely supply. The medium will be the message as never before."
Bearing in mind how Metaset has been designed as a toolset/medium firstly to elicit and accommodate complexities and refine issues, and then to assist our simplification thereof by feasible projects with appropriate products, we may have great expectations!
To keep us down to earth, we also have the widely-applicable and rationally-appropriate Scylla and Charybdis perspectives [2, etc] to help steer us in the self-critical application of the whole approach. (A major lesson there is to bear in mind that the approach’s own implicit and potentially Scyllan reductionism must be mitigated by its own basic market assumptions of transparency and openness.)
What is the future: the Internet or the Web?
When you say that Future enterprise solutions will be Web-based, don’t you rather mean "Internet-based"?
My position is clear (see [Mark, Ralph] and below): HTML, with or without Java, is not the future. The Internet on the other hand - with its most-used transport mechanism, Sockets/WinSock - is a marvellously open platform for higher-level tools such as Metaset.
On tiered architectures
Perhaps the part of your article whose detail (independently of its context in your paper) relates most closely to the most detailed portion of my own paper was this:
An Order Entry business object will typically have multiple user interfaces. A clerk may be taking the order over the phone, entering purchase information, validating customer records and credit data, and reviewing an order for consistency and customer acceptance. Other users may require different presentation screens. User interfaces are difficult and time consuming to build at the code level. Today, much of this process can be automated. They should be encapsulated as separate objects that communicate by message passing to the Order Entry object. Failure to do this will limit reuse and waste valuable programmer time on laborious, time consuming maintenance tasks. Users should be able to create interface objects with simple object-oriented tools. Subsequently, the programmer should be able to easily snap user interface objects onto the Order Entry object.
In my experience it is not so easy to "snap" like that. (Though you do only say should be able to and not can...) For example, most RDBMSs these days offer declarative referential integrity, requiring in this case that an order be related to a customer and to some products, etc. But unless the designer and programmer work at it, such demands only manifest themselves when the order is written, whereas one wants them to do so at a far earlier stage. There is an effective "time mismatch" between the RDBMS and the UI tool's own logic. (That is of course one of the motivating factors behind OODBs; however, see below.)
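The time mismatch can be made concrete with a small sketch (all names here -- OrderEntry, KNOWN_CUSTOMERS, both checks -- are invented for illustration, not taken from any real system): the same referential-integrity rule fires only at write time when left to the declarative database constraint, and must be duplicated by hand in the UI layer if it is to warn the clerk while the customer is still on the phone.

```java
import java.util.Set;

// Hypothetical sketch of the "time mismatch": one rule -- an order must
// reference a known customer -- checked at two different times.
class OrderEntry {
    static final Set<String> KNOWN_CUSTOMERS = Set.of("ACME", "GLOBEX");

    // Late check: mimics a declarative foreign-key constraint, which only
    // fires when the completed order is finally written to the database.
    static void writeOrder(String customerId) {
        if (!KNOWN_CUSTOMERS.contains(customerId)) {
            throw new IllegalStateException(
                "referential-integrity violation on commit: " + customerId);
        }
    }

    // Early check: the same rule duplicated by hand in the UI layer, so
    // the clerk is warned as soon as the customer field is entered.
    static boolean customerFieldValid(String customerId) {
        return KNOWN_CUSTOMERS.contains(customerId);
    }

    public static void main(String[] args) {
        System.out.println(customerFieldValid("TYRELL")); // false: caught early
        try {
            writeOrder("TYRELL"); // without the early check, fails only here
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
        }
    }
}
```

Unless a tool generates both checks from a single declaration, the UI copy of the rule must be written and maintained by hand -- which is exactly why the "snapping" is less easy than it sounds.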
It also seriously affects transaction design! See [Ralph, Wolfgang].
The time mismatch is part of the "large orthogonality" which I noted in my paper between the order-intrinsic properties and environment-intrinsic properties, even though they must combine to determine the actual workflow with all its attendant logic (including its actual transaction design and associated resource-reservation decisions).
Such problems then indicate that it might not after all be such a bad idea to contemplate blurring the traditional division into the three tiers that you seem to go along with! See also [Wolfgang], where I point out how an adequate tool for helping people simplify complexity can also be brought to bear on such a complex problem. That is another example of the much-vaunted reflectivity of MACK-compliant application models in action.
In the same breath I might point out that in MACK such reflectivity is largely "for free", as it all takes place within the abstract model, where everything is already logically clarified and does not require any dangerous semantic steps through an RE-method (the RE-methods being responsible for the atomic steps between the deemed reality and the abstract model) [paper(synthesis)].
I can generalize my problem above with that "snapping onto" and with the 3-tier architecture as a whole:
There are too many sources of procedurality, both algorithmic and realtime: it is explicit in triggers in DBMSs (or class-specific methods in OODBs), procedural program code and workflows; it is implicit in GUI or DBMS or other Business Rule "declarative" constructs, which - after all - need to be rendered into explicit procedure somehow; and there are events, both I/O-related and internally generated, such as realtime and error traps. They mesh most messily, if at all! (Except, of course, in sales demos or other Procrustean constructions ... such as many implemented applications.)
Metaset/MACK has a clean, integrated and ultimately simpler approach (See [Wolfgang(MACK workflow), Ralph] and my introduction to these comments).
On thin clients
You have given me another golden opportunity for some "comparing and contrasting":
A Web-Based Solution for Business Object Architectures
To enhance competitiveness in an environment of accelerating change, businesses are turning to Web-based solutions for Intranet client-server applications. Some potential benefits are:
It seems to me that the industry is in some danger of falling head-over-heels in love with the "thin client" concept. Considering the accumulated heaviness of a fully-configured Microsoft-based hard disk, with all its many attendant administrative difficulties (only some of which you have implied above), I can sympathize with the relief that people are anticipating from thin clients. Even Microsoft (with Intel) is now trying to catch up with that shift.
But let us not throw the baby out with the bathwater! The baby is that a fatter client or future PDA will be a personal thing, containing the user’s personal profile and accumulated programs and other information, all of which should (and with MACK they will) tie together and enable organic growth of that stored personality.
If that basis of "profound congeniality" is not to be fat on the client, where is it to be? Do we really want to be back at a centralized mainframe? Of course not (much though widespread nostalgia for that oversimplified world makes one think...): we want it to be "somewhere on the network". With MACK all Universal Common Knowledge can indeed be anywhere else, and it surely will in general be handily available.
But relative CK will not be so ubiquitous. We (or our PDAs) will need to know where it is, while the remaining core of unique knowledge is best on the client, even if it is only that inner core which sufficiently defines the client so that he or she can be identified to the network. Such identification will either be as a unique individual, or merely as one with all the authorizations, licences, tickets, certificates, credits, or whatever else may be required for the usual desired services with a degree of anonymity.
In practice it will probably make sense for the client to be far fatter. For a good while still, as Internet-based trade grows, we will want to keep our own audit trails for accounts payable. Because of the existence of the facility, the granularity of audit trails, and hence of buyers' and sellers' planning units, will become far finer than it is at present. There will be a finer granularity of product and billing units all down any supply chain. That will tend to imply having other data - such as the next-generation product catalogs - available locally for immediate planning and optimization purposes, especially as componentization - and the multi-supplier environment in general - burgeons, as all our own software efforts will presumably ensure. (That will be a real exploitation of complexity, and not multiplication of artificial complication!)
We should also of course assume that it would be sufficiently easy to manage local data in a newer, cleaner, better-automated and more attentive and respectful environment -- such as even Metaset as the very first MACK-realization will offer (e.g. it will be "fileless": like an object, no old-style file is an island either! Metaset enables resource-management reflecting effective freedom of choice in the user’s own self-cultivated environment).
Perhaps above all, we would expect the client to contain many of the personalized filters through which we will sip intelligently and comfortably from that firehose of pressurized data from those networks that would otherwise increasingly threaten to blast us away. The client is the control-panel from which we manage our own roving agents, and on which it will be handy to keep their status.
To summarize, the fatter client will really help us simplify complexity for ourselves, and will progressively help us grow to manage even more of it.
For myself, I would like to see the Java people achieve more of that soon. For the reasons I have repeatedly harped on (see e.g. [Mark, Ralph] and below), they won’t manage it very well, but at least the market demand will have been sharpened a bit in the right direction. (Please pardon my smugness! Though maybe it would help to reflect on the uncharacteristic self-confidence that Metaset has given me. See also [3(fourthly)] for some of the self-criticism that tempers it.)
I get the impression from their Web pages (which your Object Technology Home Page has just drawn to our attention -- thank you!) that the Marimba people have yet to consider the whole A&D repository issue, as well as the need for MI [Mark].
In the light of their own statement about Castanet, "Eliminates version management: everyone gets updates as soon as they are available", it also seems to me that they may need to see that version control is not solved by extreme naïve realtime [find "naïve" in Wolfgang, Ralph, paper and faq], even if with some dial-in-induced asynchrony. The still-common belief that it does is part of the supply-driven marketing approach (so characteristic of conventional broadcasting!) that we have to break away from in order to address the complexity of democratic demand. It's so Scyllan or oversimplifying (from a rational point of view) and Procrustean or dictatorial (from a political point of view)!
For example, what about customer participation and choice, in managed BPR with the users’ own schedules, user training, and legacy data? (On such managed change, see [Ken] and under "Looking ahead" in my introduction to these comments.)
Such important complexities cannot simply be swept under a broadcast carpet. Naïve resynchronization (of which I admit the Marimba people may not be guilty, despite appearances) is no solution, hence my whole "relativistic realtime" story [3(firstly), 6, Ralph, Wolfgang], including the "version interoperability and migration" - as distinct from "version coexistence" - in which MACK excels [8, 6].
Apropos Marimba's incremental downloaded updating, I might also point out that that concept is fundamental to MACK, with its whole managed Common Knowledge basis. With proper technology you don't need to say what you know your listener already knows. You merely have to build on the known common context.
Nor, with proper technology, is one naïvely rude about when to say what one does need to say. That would be Procrustean imposition, quite unsuited to a desirable future that properly recognizes individuality, the better to help people develop the rich complexities of their own worlds of themselves in their environments.
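Read computationally, the incremental-update idea above reduces to transmitting only what falls outside the shared context. A minimal sketch, with all names (DeltaSync, the version-labelled catalog entries) invented for illustration -- Marimba's and MACK's actual mechanisms are of course far richer:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch: sender and receiver share common knowledge; only
// entries the receiver lacks, or holds in a stale version, are transmitted.
class DeltaSync {
    static Map<String, String> delta(Map<String, String> sender,
                                     Map<String, String> receiver) {
        Map<String, String> patch = new LinkedHashMap<>();
        for (Map.Entry<String, String> e : sender.entrySet()) {
            // Send only what is new or changed relative to the shared context.
            if (!e.getValue().equals(receiver.get(e.getKey()))) {
                patch.put(e.getKey(), e.getValue());
            }
        }
        return patch;
    }

    public static void main(String[] args) {
        Map<String, String> sender = Map.of(
            "catalog", "v7", "price-list", "v3", "logo", "v1");
        Map<String, String> receiver = Map.of(
            "catalog", "v6", "price-list", "v3", "logo", "v1");
        // Only the changed catalog entry needs to travel.
        System.out.println(DeltaSync.delta(sender, receiver)); // {catalog=v7}
    }
}
```

The design point is that the patch is computed against what the receiver is already known to hold, so the bulk of the common context never travels at all.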