Abstract
Since the publication of Kress and van Leeuwen’s (2001) Multimodal Discourse, the term ‘multimodality’ has become established as an academic field within the discourse-analytic and social-semiotic communities. Multimodality – meaning communication through two or more semiotic resources (e.g. text, images, sounds and gestures), or perception via several senses (e.g. sight and hearing) – has gained ground in musicology too, particularly in studies of music videos, films and television commercials, where the shortcomings of an autonomy-aesthetic concept of ‘music alone’ are evident (cf. Cook, 1998, 91). Accordingly, one main issue is to examine how different semiotic resources blend or syncretize into new meanings. The varieties of multimodal forms are numerous, and so are the approaches to categorizing them methodically. However, there seem to be at least two distinct logical frameworks for the classification of inter-semiotic layering: they refer either to the degree of similarity and difference between the interacting resources, or to the degree of separability and self-sufficiency of those resources. While the former accentuates a relational aspect, the latter measures the level of overlap. In this paper I delve into the two classificatory frameworks with a focus on music’s attributional potential, and I discuss the overall question of whether multimodality must be characterized by an aporetic or a synthetic relation between interacting semiotic resources.
Original language | English |
---|---|
Publication date | 2008 |
Number of pages | 1 |
Status | Published - 2008 |
Event | 15th Nordic Congress of Musicology: Voicing. Sounding. Visualizing - Oslo, Norway Duration: 5 Aug 2008 → 8 Aug 2008 Conference number: 15 |
Conference
Conference | 15th Nordic Congress of Musicology: Voicing. Sounding. Visualizing |
---|---|
Number | 15 |
Country/Territory | Norway |
City | Oslo |
Period | 05/08/2008 → 08/08/2008 |