[Corpora-List] LREC 2004 Workshop on Multimodal Corpora : deadline extended to JANUARY 31st

From: Jean-Claude MARTIN (Jean-Claude.Martin@limsi.fr)
Date: Fri Jan 16 2004 - 08:54:44 MET


    Due to a number of requests, we extended the deadline by one week to
    JANUARY 31st, 2004!

                 This message is posted to several lists.
               We apologize if you receive multiple copies.
          Please forward it to everyone who might be interested.


                Workshop on Multimodal Corpora

    Centro Cultural de Belem, LISBON, Portugal, 25th May 2004


    In Association with
    LREC2004 http://www.lrec-conf.org/lrec2004/index.php Main conference
    26-27-28 May 2004

    The primary purpose of this one day workshop is to share information
    and engage in the collective planning for the future creation of usable
    pluridisciplinary multimodal resources.
    It will focus on the following issues regarding multimodal corpora:
    how researchers build models of human behaviour out of the annotations
    of video corpora,
    how they use such knowledge for the specification of multimodal input
    (e.g. merging users' gestures and speech)
    and output (e.g. specification of believable and emotional behaviour in
    Embodied Conversational Agents) in human computer interfaces,
    and finally how they evaluate multimodal systems (e.g. full system
    evaluation and glass box evaluation of individual
    system components).

    Topics to be addressed in the workshop include, but are not limited to:
    * Models of human multimodal behaviour in various disciplines
    * Integrating different sources of knowledge (literature in
    socio-linguistics, corpora annotation)
    * Specifications of coding schemes for annotation of multimodal video
    * Parallel multimodal corpora for different languages
    * Methods, tools, and best practice procedures for the acquisition,
    creation, management, access, distribution, and use of multimedia and
    multimodal corpora
    * Methods for the extraction and acquisition of knowledge (e.g. lexical
    information, modality modelling) from multimedia and multimodal corpora
    * Ontological aspects of the creation and use of multimodal corpora
    * Machine learning for and from multimedia (i.e., text, audio, video),
    multimodal (visual, auditory, tactile), and multicodal (language,
    graphics, gesture) communication
    * Exploitation of multimodal corpora in different types of applications
    (information extraction, information retrieval, meeting transcription,
    multisensorial interfaces, translation, summarisation, www services, etc.)
    * Multimedia and multimodal metadata descriptions of corpora
    * Applications enabled by multimedia and multimodal corpora
    * Benchmarking of systems and products; use of multimodal corpora for
    the evaluation of real systems
    * Processing and evaluation of mixed spoken, typed, and cursive (e.g.,
    pen) language input
    * Automated multimodal fusion and/or generation (e.g., coordinated
    speech, gaze, gesture, facial expressions)
    * Techniques for combining objective and subjective evaluations, and
    for making evaluations cost-effective, predictive and fast

    The output of the workshop will be the following:
    * Better knowledge of the potential of major models of human multimodal
    behaviour
    * Identification of challenging issues in the usability of multimodal
    corpora
    * Fostering of a pluridisciplinary community of multimodal researchers
    and multimodal interface developers

    Multimodal resources feature the recording and annotation of several
    communication modalities such as speech, hand gesture, facial
    expression, body posture, and graphics.

    Many researchers have been developing such multimodal resources for
    several years, often with a focus on a limited set of modalities or on
    a given application domain.
    A number of projects, initiatives and organisations have addressed
    multimodal resources with a federative approach:
    * At LREC2002, a workshop addressed the issue of "Multimodal
    Resources and Multimodal Systems Evaluation"
    * At LREC2000, a first workshop addressed the issue of multimodal
    corpora, focussing on meta-descriptions and large corpora
    * The European 6th Framework Programme (FP6), started in 2003, includes
    multilingual and multisensorial communication as one of its major R&D
    issues, and the evaluation of technologies appears as a specific item
    in the Integrated Project instrument presentation
    * NIMM was a working group on Natural Interaction and MultiModality
    which ran under the IST-ISLE project (http://isle.nis.sdu.dk/). In
    2001, NIMM compiled a survey of existing multimodal resources (more
    than 60 corpora are described in the survey), coding schemes and
    annotation tools. The ISLE project was developed both in Europe and in
    the USA
    * ELRA (European Language Resources Association) launched in November
    2001 a survey about multimodal corpora, including marketing aspects
    * A Working Group at the Dagstuhl Seminar on Multimodality collected,
    in November 2001, 28 questionnaires from researchers on multimodality,
    of whom 21 announced their intention to record further multimodal
    corpora in the future
    * Other surveys have recently been made about multimodal annotation
    coding schemes and tools (COCOSDA, LDC, MITRE)

    Yet, until now, annotation of multimodal corpora has been carried out
    mostly on an individual basis, with each researcher or team focusing
    on its own needs and knowledge of modality-specific coding schemes or
    application examples.
    Thus, there is a lack of real common knowledge and understanding of
    how to proceed from annotations to usable models of human multimodal
    behaviour, and of how to use such models for the design and evaluation
    of multimodal input and embodied conversational agent interfaces.

    Furthermore, the evaluation of multimodal interaction poses different
    (and very complex) problems from the evaluation of monomodal speech
    interfaces or WYSIWYG direct interaction interfaces.
    There are a number of recently finished and ongoing projects in the
    field of multimodal interaction in which attempts have been made to
    evaluate the quality of the interfaces in all the senses that can be
    attached to the term 'quality'.
    There is a widely felt need in the field for exchanging information on
    interaction evaluation with researchers in other projects.
    One of the major outcomes of this workshop should be better
    understanding of
    the extent to which evaluation procedures developed in one project
    generalise to other, somewhat related projects.

    * 31 January 2004: Deadline for paper submission
    * 29 February 2004: Acceptance notifications and preliminary program
    * 21 March 2004: Deadline for final versions of accepted papers
    * 25 May 2004: Workshop

    The workshop will consist primarily of paper presentations and
    discussion/working sessions.
    Submissions should be 4 pages long, must be in English, and follow the
    submission guidelines at http://lubitsch.lili.uni-bielefeld.de/MMCORPORA

    Demonstrations of multimodal corpora and related tools are encouraged
    as well (a demonstration outline of 2 pages can be submitted).
    As soon as possible, authors are encouraged to send a brief email
    indicating their intention to participate, including their contact
    information and the topic they intend to address in their submissions.
    Proceedings of the workshop will be printed by the LREC Local
    Organising Committee.
    The organisers might consider a special issue of a suitable journal
    for selected publications from the workshop.

    The workshop will consist of a morning session and an afternoon
    session, with a focus on the use of multimodal corpora for building
    models of human behaviour and specifying/evaluating multimodal input
    and output Human-Computer Interfaces.
    There will also be time slots for collective discussion, and a coffee
    break in the morning and in the afternoon.
    For this full-day Workshop, the registration fee is 100 EURO for LREC
    Conference participants
    and 170 EURO for other participants. These fees will include coffee
    breaks and the Proceedings of the Workshop.

    Jean-Claude MARTIN, LIMSI-CNRS, martin@limsi.u-psud.fr
    Elisabeth Den OS, MPI, Els.denOs@mpi.nl
    Peter KÜHNLEIN, Univ. Bielefeld, p@uni-bielefeld.de
    Lou BOVES, L.Boves@let.kun.nl
    Patrizia PAGGIO, CST, patrizia@cst.dk
    Roberta CATIZONE, Sheffield, roberta@dcs.shef.ac.uk

    Elisabeth AHLSÉN
    Jens ALLWOOD
    Elisabeth ANDRE
    Niels Ole BERNSEN
    Lou BOVES
    Stéphanie BUISINE
    Roberta CATIZONE
    Loredana CERRATO
    Piero COSI
    Elisabeth Den OS
    Jan Peter DE RUITER
    Laila DYBKJÆR
    David HOROWITZ
    Alfred KRANSTEDT
    Steven KRAUWER
    Peter KÜHNLEIN
    Knut KVALE
    Myriam LAMOLLE
    Jean-Claude MARTIN
    Joseph MARIANI
    Jan-Torsten MILDE
    Sharon OVIATT
    Patrizia PAGGIO
    Catherine PELACHAUD
    Janienke STURM


    This archive was generated by hypermail 2b29 : Wed Jan 21 2004 - 12:25:47 MET