In theory, the primitive dialog tools should empower the wiki community at large to grow semi-automated assistants to aid with on-wiki expert tasks, gathering contributors' expertise in much the way that wikis gather contributors' knowledge of primary content. In practice, the design and implementation of such semi-automated assistants is itself an on-wiki expert task, and therefore a natural target for a semi-automated assistant. Designing such a meta-assistant is a daunting challenge, so I'm writing this essay to try to sort out how to do it.


Kinds of knowledge

The knowledge to be captured by an assistant divides along two dimensions.

  • Creation versus modification. It's easier for a meta-assistant to help with creating an assistant than with modifying one. During creation of an assistant, the meta-assistant would naturally ask the user all about how the assistant should work, and the user can reasonably be expected to supply those answers. With modification, though, we're dealing with a human-created artifact: there are no required constraints to make it tractable for a non-sapient meta-assistant to figure out, no particular expectation that the meta-assistant would grill the user about how it works, and no expectation that the user would know how it works if asked. So these two cases, creation and modification, are quite different.
  • High-level versus low-level. There's lots of good advice that can be given about how an assistant should behave: allow the user to do unanticipated things; preserve the user's reasons for deviating from expectations, so later users don't just keep getting pushed to do things the expected way; show the consequences of an edit before asking for a commit to it; check after an action to make sure it worked, and be prepared to deal with the situation if it didn't. That's high-level. Then there's low-level stuff that may be difficult and tedious for the human user; though when aiding with it, it's important to exercise caution and show clearly what is being done under the hood, to preserve the learning-by-osmosis process. For creation, low-level help may be especially straightforward, whereas for modification, low-level help may be especially difficult, since it deals with low-level code that, having been developed previously by humans, might employ any technique the human mind could devise (to say nothing of techniques the human mind could blunder into).


The very fact that assistants have behavior patterns is what makes them so much more challenging than passive hypertext documents. Most of the following items apply both to assistants and to the meta-assistant — the meta-assistant must simultaneously possess and nurture these properties (with the balance between the two differing by item). The last item on the list is very meta.

Graceful degradation

Fancy stuff is more likely to break. Even though the dialog tools are designed to be as rugged as possible, there will be situations where they stop working. There's even a dialog template specifically to provide for that, {{dialog/ifsupported}}.
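As a sketch of how such a fallback might look in wiki markup (the parameter arrangement, the {{dialog/button}} template, and the page names here are illustrative assumptions, not the template's actual interface; consult the documentation of {{dialog/ifsupported}} for the real calling convention):

```wikitext
{{dialog/ifsupported
| <!-- when the dialog tools are working: a control that launches the assistant -->
  {{dialog/button|page=Assistant/start|label=Start the assistant}}
| <!-- fallback when dialog isn't supported, or has stopped working -->
  The interactive assistant is unavailable; see
  [[Assistant/manual directions]] for how to perform the task by hand.
}}
```

The key property is that the fallback branch points at the same manual directions the assistant is meant to automate, so a breakage degrades to documentation rather than to a dead end.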

So it's highly desirable that each assistant (including the meta-assistant) provide directions on how to do by hand what the assistant is meant to do semi-automatically; directions that can be usefully perused as documentation pages when dialog isn't working. For this to work smoothly, the directions ought to arise naturally from the way the assistant is set up, so that when the assistant changes, the directions change too, preventing them from getting out of sync. A certain amount of manual synchronization is already required between wiki markup and its corresponding documentation page; here we want to avoid further multiplying the need for synchronization.
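One way the directions could arise naturally from the assistant's setup, sketched here with hypothetical page names, is single-sourcing by transclusion: keep each step's human-readable description in a shared fragment, and transclude that fragment both into the assistant's dialog and into the documentation page, so a single edit updates both.

```wikitext
<!-- [[Assistant/steps/verify-sources]]: a shared fragment holding one step's directions -->
Check that every cited source is independent of the others and actually
supports the statements attributed to it.

<!-- [[Assistant/documentation]]: the manual directions transclude the fragment -->
{{Assistant/steps/verify-sources}}

<!-- the assistant's dialog markup transcludes the same fragment as its
     prompt text, so editing the fragment updates assistant and
     documentation together, with no separate synchronization step -->
```

This doesn't eliminate synchronization entirely (the order and framing of steps can still drift), but it keeps the wording of each step in one place.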

Intermediate stages

When an assistant has been partly built, but parts of it are incomplete, it should be possible to use the parts that work and then be told what else to do manually. This seems related to the availability of instructions for graceful degradation, with the degradation here caused by incompleteness.

Preview

When editing the primary information content of a wiki page, it's extremely desirable to be able to preview what the typeset page will look like, at various intermediate points in the editing process as well as just before committing the edit. This preview ability is one way of reconciling the need to see the underlying markup (essential to the learn-by-osmosis process that enables the long-term viability of wikis) with the need to know what the typeset page will look like (without which even simple edits would be extremely error-prone). Some blog-comment interfaces (though not the wiki platform) provide an instantaneously-updated preview.

When editing a wiki page that interacts with other wiki pages, one may want to preview what happens to the interaction. The existing wiki platform already goes to some trouble to support an extended preview function for templates, showing what some other page would look like if the current draft edit were committed; though that template-preview function is not always sufficient, hence the Breadboard assistant. For interactive assistants, even more extensive preview facilities would likely be wanted. One might wish to see how a draft edit would affect a walkthrough of the assistant; or, even more fraught, one might wish to see how a draft set of edits would affect a walkthrough, raising the spectre of version-control for drafts of a whole assistant.


Simulation

It's desirable to be able to simulate the functioning of an assistant, as if it were doing things but without actually doing them, in order to understand how it works, how one might want to modify it, and how one would go about doing such a thing by hand.

Version control

Some changes to an assistant can be understood as tweaks within the overall structure; but beyond a certain point (depending on something more complex than mere size), they become changes to the assistant as a whole, so that the assistant after the change is a different assistant than the one before. One may actually want the entire old assistant to continue to function as before for anyone who was using it at the time of replacement, and perhaps also to have the old version available for deliberate use later on. Reversion of the whole assistant to an earlier state should be possible; and one may wish to commit a multi-part change all at once.
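For a single page, the existing platform's permanent links already hint at a modest device for keeping an old version usable: linking to a specific revision by its id. A sketch, using the standard {{fullurl:}} parser function (the page name and revision number are placeholders):

```wikitext
<!-- link to a frozen revision of the assistant's page, via the oldid
     shown in the page history; 123456 is a placeholder -->
[{{fullurl:Assistant/main|oldid=123456}} the assistant as of the previous version]
```

Of course a permanent link freezes only one page, while an assistant may span many pages that must revert together; which is exactly why whole-assistant version control is the harder problem.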

Stability

The design of the assistance facility as a whole needs to ensure that a working version of the meta-assistant remains available, so it can be applied to resolving a difficulty with a defective version of the meta-assistant. The situation is inherently volatile because the means for fixing problems is itself editable; in contrast to primary wiki content, which is edited by means of the basic platform interface, independent of the content of the page and generally stable (barring design changes, alas).

This seems related to the above-mentioned version-control problem; and is, more esoterically, reminiscent of the notion of a reflective tower from computational reflection.