Title: Language as part of an inter-semiotic system
In this paper I will explore the possibility, which I briefly mentioned at ISFC 96, that language and some other semiotic modes, such as action/movement and images, form one inter-semiotic, or multimodal, system. Some of the systems in the content plane, and even some in the expression plane, of such an inter-semiotic system are shared by the various modes, while others are mode-specific. It seems appropriate to consider this issue in some detail in order to complement the notion of multimodal texts, i.e. texts formed by more than one semiotic mode. It also seems timely in view of recent developments in multimedia technology. It would be computationally easier to generate multimodal texts by means of one inter-semiotic system than by several separate semiotic systems. Moreover, this appears to be the preferred option for realizing the idea of one set of semiotically neutral data being encoded in several texts of different modes (Negroponte 1995). But perhaps most importantly, the inter-semiotic system approach seems the most suitable for following Saussure's (1916/1974) advice that "to discover the true nature of language, we must learn what it has in common with other semiotic systems", which of course holds the other way round as well. The inter-semiotic system is not considered to be language-centred, although some, or perhaps even most, of its systems were first worked out for language, and its representation draws on the resources of systemic-functional linguistics. I will attempt to map out at least some of the mode-shared and mode-specific systems and to suggest possible motivations for why some options are shared while others are mode-specific. The basic principle of shared and specific systems has precedents in Bateman, Matthiessen, Nanri and Zeng's (1991) multilingual systems and Matthiessen's (1993) multiregisterial systems.