Machine Learning List: Vol. 10, No. 14
                   Friday, Nov 13, 1998


 
Special Issue of MLJ on Unsupervised Learning
Correction to UAI '99 Call for Papers
New Book on Agents and Learning
Graduate Study in Digital Libraries
5th EUROPEAN CONFERENCE ON ARTIFICIAL LIFE -- ECAL99
Hybrid Neural Symbolic Integration
CALL FOR INFORMATION ON COURSES IN EVOLUTIONARY COMPUTATION
SWARM workshop at Charles Sturt Univ.
KBCS-98: Call for Participation
NIPS*98
PAAM99 Call for Papers

The Machine Learning List is moderated. Contributions should be relevant to the scientific study of machine learning. Mail contributions to ml@ics.uci.edu. Mail requests to be added or deleted to ml-request@ics.uci.edu. Back issues may be obtained from http://www.ics.uci.edu/~mlearn

----------------------------------------------------------------------
------------------------------

From: Douglas H. Fisher [dfisher@vuse.vanderbilt.edu]
Subject: Special Issue of MLJ on Unsupervised Learning
Date: Sat 11/7/98 9:20 AM

Call for Papers
Special Issue of Machine Learning on Unsupervised Learning
Doug Fisher, Special Issue Editor
http://cswww.vuse.vanderbilt.edu/~dfisher/mlj-unsup.html

Several forms of unsupervised learning extract relationships from data that can then be exploited for inference. The primary unsupervised techniques include clustering, learning (usually Bayesian) belief networks, and learning association rules. The unsupervised "pattern" or "concept" learning methods that are of most interest in this special issue differ from supervised concept learning methods in that there is no single dependent variable, dimension, or predicate that is the *a priori* focus of inference. Rather, an unsupervised method may support inference along more than one dimension (variable, property), typically many dimensions/properties.

Authors are encouraged to submit papers in the primary unsupervised learning paradigms of clustering, belief-network learning, and association-rule learning (and possibly others) for consideration as contributions to the Special Issue on Unsupervised Learning of the journal Machine Learning. Articles that relate different paradigms are especially welcome.

Review Criteria

Each submission will be reviewed by two to three reviewers, as well as the Special Issue Editor. Each submission should clearly describe an unsupervised learning algorithm from one of the major unsupervised paradigms mentioned above; if the learning approach comes from some novel paradigm, the submission should explain its relationship to one or more of these established paradigms. In addition, a successful submission will describe an inference procedure or other performance task that operates on the learned knowledge. Inference of unobserved variable values and/or joint probability distributions are examples of inference tasks. Experimental and/or formal analysis should support the unsupervised method's ability to improve the performance task(s) along dimensions such as accuracy and cost, though application-oriented contributions (see below) may relax the formality of experimental and theoretical demonstrations if practical gains are documented in the submission. The requirement for well-defined performance and learning procedures does not preclude interactive approaches, in which human and machine share learning and inference responsibilities. Cognitive modeling submissions, in which fits to human data are desired, are also possible.

In all cases, submissions should be scholarly, with thorough links to related research, and should report methods/results previously unreported in the archival (typically journal or book) literature. However, exceptional surveys, particularly those that cross paradigm boundaries, may be considered at the discretion of the Special Issue Editor and the Executive Editor. Except in extraordinary circumstances, submissions should not exceed 25 journal-formatted pages, excluding references.

Research areas

Submissions should report novel methods/results, including contributions along any of the following dimensions:

pattern and concept representations and presentations, including visualization strategies
data representation (e.g., relational data, continuous data, hierarchical data)
search strategies (e.g., anytime algorithms, incremental algorithms, optimization strategies, human-machine interactive approaches, parallelism)
performance tasks, including complex forms of problem solving
analytical and analytical/inductive hybrid approaches
cognitive modeling

Application areas

Submissions may also contribute primarily in terms of application (e.g., data mining in a particular area). Appropriate application-oriented papers will report well-defined, generic learning and performance procedures, which need not be novel relative to the literature; such papers are expected to describe successful applications of unsupervised methods to a particular application task, and to provide informed recommendations on the class of applications for which the reported methods appear appropriate. Careful thought should be given to whether an application-oriented paper can best be treated as a technical note submission (i.e., 10 or fewer journal-formatted pages), by appropriately relying on previous research publications. Of course, submissions with both strong research and application contributions are welcome.

Important Dates

Jan. 8, 1999: Title, authors, and abstract should be sent electronically to mlj-unsup@vuse.vanderbilt.edu with an intent-to-submit cover page. This is an important deadline that will facilitate reviewing, but minor changes to title, authors, and abstract are acceptable prior to the submission of full papers. Submissions of title/authors/abstract should be in plain text.

Feb. 8, 1999: Full submissions should be received by mlj-unsup@vuse.vanderbilt.edu. Full submissions should be in PostScript format, and may be COMPRESSed if larger than 1M. Other formats may be accepted upon request. Submissions must be in English.

April 30, 1999: Decisions sent to authors; only papers requiring no more than minor revisions will be accepted to the special issue.

June 14, 1999: Final versions of accepted papers should be received by the Special Issue Editor in the format specified for full submissions, using Kluwer style guidelines.

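As a purely illustrative sketch of the learning/performance pairing described under Review Criteria above (it is not part of the call; the data, the choice of two clusters, and all names below are assumptions made only for illustration), the following Python fragment clusters unlabeled data and then uses the learned clusters to infer an unobserved variable of a new observation:

import numpy as np

rng = np.random.default_rng(0)

# Unlabeled data over three variables; no variable is singled out in advance.
data = np.vstack([
    rng.normal(loc=[0.0, 0.0, 0.0], scale=0.3, size=(50, 3)),
    rng.normal(loc=[3.0, 3.0, 3.0], scale=0.3, size=(50, 3)),
])

def kmeans(x, k, iters=20):
    # Plain k-means: centroids start at randomly chosen data points.
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((x[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        centroids = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return centroids

centroids = kmeans(data, k=2)

# Performance task: given variables 0 and 1 of a new case, infer variable 2
# from the nearest cluster; any variable could play the "unobserved" role.
partial = np.array([2.9, 3.1])
nearest = np.argmin(((centroids[:, :2] - partial) ** 2).sum(-1))
print("inferred value of variable 2:", centroids[nearest, 2])

The same pattern, learning structure without a designated target and then answering queries about any variable, is what distinguishes the submissions sought here from supervised concept learning.
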
From: Kathryn Blackmond Laskey [klaskey@gmu.edu]
Subject: Correction to UAI '99 Call for Papers
Date: Wed 11/4/98 5:30 AM

The UAI-99 Call for Papers erroneously stated the following: "Location TBA (near the site of AAAI-99)". The correct statement is: "Location TBA (near the site of IJCAI-99)". IJCAI-99 will take place in Stockholm, Sweden. UAI-99 will take place near that location, as correctly stated elsewhere in the Call for Papers.

Kathryn Laskey

From: Gheorghe Tecuci [tecuci@gmu.edu]
Subject: New Book on Agents and Learning
Date: Tue 11/10/98 2:32 PM

New Book: G. Tecuci, BUILDING INTELLIGENT AGENTS: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies, Academic Press

Gheorghe Tecuci (George Mason University, http://lalab.gmu.edu/)
BUILDING INTELLIGENT AGENTS: An Apprenticeship Multistrategy Learning Theory, Methodology, Tool and Case Studies
Academic Press, 1998
ISBN: 0126851255
http://www.apcatalog.com/cgi-bin/AP?ISBN=0126851255&LOCATION=US&FORM=FORM2

GENERAL DESCRIPTION

This book presents a theory, methodology and tool for building intelligent agents, along with detailed case studies. The most significant, and unique, characteristic of building these agents is that a person directly teaches them how to perform domain-specific tasks in much the same way he or she would teach a student or apprentice: by giving the agent examples and explanations, and by supervising and correcting its behavior. This approach, in which the agent learns its behavior from its teacher, integrates many machine learning and knowledge acquisition techniques, taking advantage of their complementary strengths to compensate for each other's weaknesses. As a consequence, it significantly reduces the involvement of a knowledge engineer in the process of building an intelligent agent.

The book is unique in its comprehensive coverage of the subject. The first part of the book presents an original theory for building intelligent agents, along with a methodology and tool that implement the theory. The second part presents complex and detailed case studies of building different types of agents: an educational assessment agent that enhances the capability, generality and usefulness of an educational system for teaching higher-order thinking skills in the context of history; a statistical analysis assessment and support agent to support a university-level introductory science course; an engineering design assistant that cooperates with its user in configuring computer systems; and a virtual military commander integrated into a distributed interactive simulation environment.

CONTENTS

Preface
Intelligent Agents
General Presentation of the Disciple Approach for Building Intelligent Agents
Knowledge Representation and Reasoning
Knowledge Acquisition and Learning
The Disciple Shell and Methodology
Case Study: Assessment Agent for Higher-Order Thinking Skills in History
Case Study: The Statistical Analysis Assessment and Support Agent
Case Study: Design Assistant for Configuring Computer Systems
Case Study: Virtual Agent for Distributed Interactive Simulations
Selected Bibliography of Machine Learning, Knowledge Acquisition, and Intelligent Agents Research
Notation
Subject Index

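To make the teaching loop sketched in the general description more concrete (examples plus explanations, then supervision and correction), here is a hypothetical toy sketch in Python; it illustrates the general idea only, with invented class and feature names, and is not the Disciple shell's actual representation or algorithms:

class ApprenticeAgent:
    def __init__(self):
        # Each learned rule: (required features, forbidden features, conclusion).
        self.rules = []

    def teach_example(self, features, conclusion, explanation):
        # The teacher's explanation says which features of the example matter,
        # so the agent generalizes the example to just those features.
        required = frozenset(f for f in features if f in explanation)
        self.rules.append((required, frozenset(), conclusion))

    def predict(self, features):
        fs = set(features)
        for required, forbidden, conclusion in self.rules:
            if required <= fs and not (forbidden & fs):
                return conclusion
        return None

    def correct(self, features, bad_feature):
        # The teacher rejects the agent's conclusion for this case and points at
        # a feature that should have blocked it; the agent specializes its rules.
        fs = set(features)
        self.rules = [(req, forb | {bad_feature}, concl)
                      if req <= fs and not (forb & fs) else (req, forb, concl)
                      for req, forb, concl in self.rules]

# A short teaching session: example + explanation, then a correction.
agent = ApprenticeAgent()
agent.teach_example({"has_cpu", "has_disk", "tower_case"}, "workstation",
                    explanation={"has_cpu", "has_disk"})
print(agent.predict({"has_cpu", "has_disk", "rack_mounted"}))  # "workstation"
agent.correct({"has_cpu", "has_disk", "rack_mounted"}, bad_feature="rack_mounted")
print(agent.predict({"has_cpu", "has_disk", "rack_mounted"}))  # None
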
From: Haym Hirsh [hirsh@cs.rutgers.edu]
Subject: Graduate Study in Digital Libraries
Date: Sat 11/7/98 3:02 PM

GRADUATE AWARDS FOR INTERDISCIPLINARY STUDY IN DIGITAL LIBRARIES

The Rutgers University Distributed Laboratory for Digital Libraries (RDLDL) is pleased to announce the availability of competitive graduate awards for interdisciplinary doctoral studies in digital libraries, including contributing technologies and relevant basic research. Digital libraries is an exciting new domain for research, and for society at large, and the RDLDL is taking a leading role in establishing an explicitly interdisciplinary approach to the variety of problems in this burgeoning area. Current faculty members of the RDLDL come from the disciplines of cognitive science, computer science, library and information science, and psychology, and students in the program are expected to participate in research and to take courses in two or more of the disciplines represented in the RDLDL.

Successful candidates will participate in digital library-oriented research projects with several faculty members of the RDLDL, as well as being enrolled in a Ph.D. program in one of the disciplines associated with the RDLDL. They will also take part in an interdisciplinary seminar involving the faculty and all of the other graduate students associated with the RDLDL.

The RDLDL Awards offer tuition remission, an annual stipend of $13,000, and the opportunity to achieve an interdisciplinary education for research in the interdisciplinary field of digital libraries. These awards are tenable for one year in the first instance, beginning in September 1999, and are renewable.

For further information, including the specific interests of the RDLDL faculty, and application materials, please see the RDLDL WWW site, http://diglib.rutgers.edu/RDLDL or contact any one of the members of the RDLDL Steering Committee, listed below.

Steering Committee of the Rutgers Distributed Laboratory for Digital Libraries:

Nicholas J. Belkin, Department of Library and Information Science, New Brunswick Campus, nick@belkin.rutgers.edu
Benjamin M. Bly, Department of Psychology, Newark Campus, ben@psychology.rutgers.edu
Sven Dickinson, Department of Computer Science, New Brunswick Campus, sven@ruccs.rutgers.edu
Stephen Hanson, Department of Psychology, Newark Campus, jose@tractatus.rutgers.edu
Haym Hirsh, Department of Computer Science, New Brunswick Campus, hirsh@cs.rutgers.edu
Paul Kantor, Department of Library and Information Science, New Brunswick Campus, kantor@scils.rutgers.edu
Zenon Pylyshyn, Rutgers Center for Cognitive Science, New Brunswick Campus, zenon@ruccs.rutgers.edu

To contact us by mail, please write to:

Rutgers Distributed Laboratory for Digital Libraries
c/o School of Communication, Information and Library Studies
Rutgers University
4 Huntington Street
New Brunswick, NJ 08901-1071, USA

From: Dario Floreano [dario.floreano@epfl.ch]
Subject: ECAL99: CFP
Date: Wed 11/4/98 5:38 AM

Conference Announcement and Call for Papers

5th EUROPEAN CONFERENCE ON ARTIFICIAL LIFE
ECAL99
Swiss Federal Institute of Technology in Lausanne (EPFL)
September 13-17, 1999

Artificial Life is an interdisciplinary research enterprise aimed at understanding life-as-it-is and life-as-it-could-be, and at synthesizing life-like phenomena in chemical, electronic, software, and other artificial media. Artificial Life redefines the concepts of artificial and natural, blurring the borders between traditional disciplines and providing new insights into the origin and principles of life.

In addition to all traditional topics, ECAL99 will encourage contributions that address the role of embodiment and physical constraints for the self-organization of life-like systems in chemical, electronic, mechanical, and other artificial media. The conference will be single-track and will feature selected oral presentations, spotlight presentations (short talk + poster), and posters. All accepted contributions will be published in the proceedings by Springer-Verlag. ECAL99 will also host invited talks, thematic debates, and public demonstrations. One day of tutorials before the conference will provide the necessary background to better enjoy the contributions from different disciplines; participation is strongly recommended.

IMPORTANT DATES
28 February 1999: Submission deadline
30 April 1999: Notification to authors
31 May 1999: Camera-ready version due
13-17 September 1999: Conference dates

Official Language: English
Publisher: Springer-Verlag
WWW: http://www.epfl.ch/ecal99
E-mail: ecal99@epfl.ch

SUBMISSIONS
Papers should not be longer than 10 pages (including figures) in the Springer-Verlag llncs style (http://www.springer.de/comp/lncs/authors.html), with an abstract of 100-150 words. All submissions will be reviewed by at least two referees. Authors should make every effort to state the relevance of their contribution to the field of Artificial Life and to explain the implications of their results. Novelty, clarity of presentation, and scientific content will be the main criteria used by referees. Both electronic and traditional submissions will be accepted (in the latter case, please send 6 hard copies). Demonstrations, videos, and proposals for thematic debates are also welcome.

EXAMPLES OF BROAD AREAS TO BE ADDRESSED (but not limited to):
Self-organization. Chemical origins of life. Autocatalytic systems. Prebiotic evolution. RNA systems. Evolutionary chemistry. Fitness landscapes. Natural selection. Artificial evolution. Ecosystem evolution. Multicellular development. Natural and artificial morphogenesis. Learning and development. Bio-morphic and neuro-morphic engineering. Artificial worlds. Simulation tools. Artificial organisms. Synthetic actors. Artificial (virtual and robotic) humanoids. Intelligent autonomous robots. Evolutionary Robotics. Applications of Alife technologies. Life detectors. Self-repairing hardware. Evolvable hardware. Emergent collective behaviors. Swarm intelligence. Evolution of social behaviors. Evolution of communication. Epistemology. Artificial Life and Art.

VENUE
The conference will take place on the EPFL campus, located by Lake Leman, 10 minutes away by metro from the historic centre of Lausanne. The Alps can be quickly reached by public transportation. Geneva international airport is 40 minutes away by train (more information on the ECAL99 web page).

REGISTRATION
ECAL99 will encourage debate, contacts, and interdisciplinary exchanges in an open-minded and informal atmosphere. Several student fellowships will be available. Registration fees (starting at 400 CHF) will include almost everything for both regular and student participants. Special offers on accommodation. Considerable savings on early registration (before 15 June 1999). Check the ECAL99 web page frequently: http://www.epfl.ch/ecal99

ORGANIZERS: Dario Floreano, Jean-Daniel Nicoud, Francesco Mondada
LOCAL ORGANIZATION: Monique Dubois and Joseba Urzelai
CONTACT: ECAL99 Secretariat, LAMI-DI-EPFL, CH-1015 Lausanne

ADVISORY BOARD:
Inman Harvey (UK), Hiroaki Kitano (JP), Daniel Mange (CH), Jean-Arcady Meyer (F), Stefano Nolfi (I), Charles Taylor (USA), Daniel Thalmann (CH)

SCIENTIFIC COMMITTEE (under formation):
Agnessa Babloyantz (B), Wolfgang Banzhaf (D), Randall Beer (USA), Hugues Bersini (B), Eric Bonabeau (USA), Paul Bourgine (F), Rodney Brooks (USA), Heinrich Buelthoff (D), Raffaele Calabretta (I), Pablo Chacon (E), Dave Cliff (UK), Jean-Louis Deneubourg (B), Marco Dorigo (B), Rodney Douglas (CH), Claus Emmeche (DK), Boi Faltings (CH), Toshio Fukuda (JP), Takashi Gomi (CA), Stephen Grand (UK), Howard Gutowitz (F), Tetsuya Higuchi (JP), Phil Husbands (UK), Takashi Ikegami (JP), George Kampis (H), Kunihiko Kaneko (JP), Laurent Keller (CH), Yasuo Kuniyoshi (JP), Chris Langton (USA), Kristian Lindgren (SE), Pier Luisi (CH), Henrik Lund (DK), Hanspeter Mallot (D), John McCaskill (D), Barry McMullin (IE), Olivier Michel (CH), Orazio Miglino (I), Melanie Mitchell (USA), Federico Moran (E), Alvaro Moreno (E), Domenico Parisi (I), Rolf Pfeifer (CH), Tom Ray (USA), Eduardo Sanchez (CH), Chris Sander (UK), Peter Schuster (A), Katsunori Shimohara (JP), Karl Sims (USA), Moshe Sipper (CH), Tim Smithers (E), Ricardo Sole (E), Emmet Spier (UK), Luc Steels (F), Jun Tani (JP), Adrian Thompson (UK), Peter Todd (D), Marco Tomassini (CH), Gunter Wagner (USA), Barbara Webb (UK), Michael Wheeler (UK), Norm White (CA), Tom Ziemke (SE), Stephane Zrehen (USA)

From: Stefan.Wermter@sunderland.ac.uk
Subject: Hybrid Neural Symbolic Integration
Date: Wed 11/4/98 10:10 AM

International NIPS Workshop on Hybrid Neural Symbolic Integration
Stefan Wermter, University of Sunderland, UK
Ron Sun, University of Alabama, USA
December 4 and 5, 1998, Breckenridge, Colorado, USA

~~~~~~~~~~~~~~~~~~~~~~~~
Dec. 4 Morning Session: Structured connectionism, rule representation
~~~~~~~~~~~~~~~~~~~~~~~~
Stefan Wermter, Ron Sun: Introduction and welcome to the workshop
*** Jerome Feldman, David Bailey: Layered hybrid connectionist models for cognitive science
*** Lokendra Shastri: Types and quantifiers in SHRUTI: a connectionist model of rapid reasoning and relational processing
Steffen Hoelldobler, Yvonne Kalinke, Joerg Wunderlich: A recursive neural network for reflexive reasoning
Rafal Bogacz, Christophe Giraud-Carrier: A novel modular neural architecture for rule-based and similarity-based reasoning
Nam Seog Park: Addressing knowledge representation issues in connectionist symbolic rule encoding for general inference
Nelson A. Hallack, Gerson Zaverucha and Valmir C. Barbosa: Towards a hybrid model of first order theory refinement

Panel on "the issues of representation in hybrid models"
Chair: Ron Sun
Panelists: Jerry Feldman, Lee Giles, Risto Miikkulainen, David Waltz (5 min opening statement by each panelist)
The focus of the panel is the issue of representation: How can neural representation contribute to the power of hybrid models? How can symbolic representation supplement neural representation? How can each type of representation be developed, acquired, or learned? What are the principled ways in which these two types of representation can be combined synergistically?

Dec. 4 Afternoon Session: Neural language processing, distributed representations
~~~~~~~~~~~~~~~~~~~~~~~~
*** Marshall R. Mayberry, Risto Miikkulainen: SARDSRN: a neural network shift-reduce parser
*** William C. Morris, Garrison W. Cottrell, Jeffrey L. Elman: The empirical acquisition of grammatical relations
Whitney Tabor: Context free grammar representation in neural networks
Curt Burgess, Kevin Lund: The transduction of symbolic environmental input into high-dimensional distributed representations
Pentti Kanerva: Large patterns make great symbols: an example of learning from example
Stephen I. Gallant: Context vectors: a step toward a "grand unified representation"
Paolo Frasconi, Marco Gori, Alessandro Sperduti: Integration of graphical-based rules with adaptive learning of structured information
Stefan C. Kremer, John Kolen: Dynamical recurrent networks as symbolic processors

Dec. 5 Morning Session: Neural and hybrid systems for cognitive processing
~~~~~~~~~~~~~~~~~~~~~~~~
*** David Waltz: The importance of importance
*** Noel Sharkey, Tom Ziemke: Life, mind and robots: biological inspirations and rooted cognition
G. K. Kraetzschmar, S. Sablatnoeg, S. Enderle, G. Palm: Using neurosymbolic integration in modelling robot environments: a preliminary report
Timo Honkela: Self-organizing maps in symbol processing
Ronan Reilly: Evolution of symbolisation: signposts to a bridge between connectionist and symbolic systems
Christos Orovas, James Austin: A cellular neural associative array for symbolic vision

Panel on "hybrid and neural systems for the future"
Chair: Stefan Wermter
Panelists: Jim Austin, Joachim Diederich, Lee Giles, Noel Sharkey, Hava Siegelman (5 min opening statement by each panelist)
The focus of the panel is the impact of hybrid and neural techniques in the future.
How can we develop neural and hybrid systems for new media: internet communication, multimedia, web searching, data mining, neurocontrol for robotics, integrating image/speech/language? What are the strengths and weaknesses of hybrid neural techniques for these tasks? Are current principles and methodologies in neural and hybrid systems useful? How can they be extended? What will be the impact of hybrid and neural techniques in the future?

Dec. 5 Afternoon Session: Explanation and composition
~~~~~~~~~~~~~~~~~~~~~~~~
*** H. Lipson, H. T. Siegelmann: High order shape neurons for data structure decomposition
*** Alan Tickle, Frederic Maire, Joachim Diederich: Extracting the knowledge embedded within trained artificial networks: defining the agenda
Guido Bologna: Symbolic rule extraction from the DIMLP neural network
Gerhard Paass, Joerg Kindermann: Explaining Bayesian ensemble classifier models
Peter Tino, Georg Dorffner, Christian Schittenkopf: Understanding state space organization in recurrent neural networks with iterative function systems dynamics
M. L. Vaughn, S. J. Cavill, S. J. Taylor, M. A. Foy, A. J. B. Fogg: Direct knowledge extraction and interpretation from a multilayer perceptron network that performs low back pain classification
James A. Hammerton, Barry L. Kalman: Holistic computation and the sequential RAAM: an evaluation
Stefan Wermter: Conclusion

More information on the NIPS98 conference can be found at:
http://www.cs.cmu.edu/Groups/NIPS/index.html

********************************************
Professor Stefan Wermter
Research Chair in Intelligent Systems
University of Sunderland
School of Computing & Information Systems
St Peters Way
Sunderland SR6 0DD
United Kingdom
phone: +44 191 515 3279
fax: +44 191 515 2781
email: stefan.wermter@sunderland.ac.uk
http://osiris.sunderland.ac.uk/~cs0stw/

From: Riccardo Poli [R.Poli@cs.bham.ac.uk]
Subject: CALL FOR INFORMATION ON COURSES IN EVOLUTIONARY COMPUTATION
Date: Fri 11/6/98 1:14 AM

CALL FOR INFORMATION ON COURSES IN EVOLUTIONARY COMPUTATION

Teaching and learning play a crucial role in disseminating and transferring new knowledge and know-how to students and newcomers to a field. This is all the more so for a relatively young field like evolutionary computation.

It has been nearly four years since John Koza [1] compiled a valuable list of university courses on genetic algorithms. At the time, around 30 courses were available. The field of evolutionary computation has grown rapidly since 1995, and more universities are offering courses on evolutionary computation. There is an urgent need to look again at various teaching issues in evolutionary computation, e.g., how we can help students learn the subject effectively, what kind of topics we should cover in a one-semester course, how much practical lab work should be involved, what a suitable textbook would be, etc.

For these reasons we would like to collect information on university courses in evolutionary computation in a common format, make it available to the community, and start discussions on teaching issues. If you would like to contribute and share information with others, please complete the form attached and email it to: ec-teaching@cs.bham.ac.uk

THE DEADLINE FOR SUBMISSION IS: FRIDAY THE 4TH OF DECEMBER 1998.

We will compile the first draft of the course information provided and make it available on-line early next year.

[1] J. R. Koza, University Courses on Genetic Algorithms 1995 (a.k.a. "The GA 30"), Stanford Bookstore, Stanford University, CA 94305-3079, 1995.

Xin Yao and Riccardo Poli

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
SURVEY ON EVOLUTIONARY COMPUTATION COURSES
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
COURSE TITLE:
LECTURER(S):
INSTITUTION AND DEPARTMENT:
TARGET STUDENTS: (e.g. UG, MSc, PhD)
STUDENT BACKGROUND: (e.g. CS, EE, Economics)
PREREQUISITES: (e.g. programming, basic math, advanced math)
AIMS AND OBJECTIVES:
TEACHING METHOD: (e.g. lectures, tutorials, labs, reading)
TOPICS COVERED AND TIME SPENT ON EACH TOPIC:
TEXTBOOKS, HANDOUTS AND READING MATERIAL USED:
ASSESSMENT METHOD: (e.g. assignments, project(s), exam, presentations)
DETAILED SYLLABUS:
NUMBER OF TIMES THE COURSE HAS BEEN OFFERED:
AVERAGE NUMBER OF STUDENTS:
COURSE URL (IF ANY):
COMMENTS/SUGGESTIONS:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
PLEASE RETURN THE COMPLETED FORM TO ec-teaching@cs.bham.ac.uk BY FRIDAY THE 4TH OF DECEMBER 1998

From: xli@csu.edu.au
Subject: SWARM workshop at Charles Sturt Uni.
Date: Fri 11/6/98 5:55 PM

Dear all,

Swarm is a software package being developed at the Santa Fe Institute for the simulation of complex adaptive systems. The Swarm website is at http://www.santafe.edu/projects/swarm/

Charles Sturt University of Australia is pleased to offer a Swarm workshop in conjunction with the Summer School in Complex Systems on its Bathurst campus on December 14 and 15, 1998. The presenter of the summer school is Alex Lancaster from the Swarm team at Santa Fe.

Registration is available online at http://clio.mit.csu.edu.au/admin/registrnew.html

************************************
Leanne Jones
School of Information Technology
Charles Sturt University
Panorama Avenue
Bathurst NSW 2795
Australia
Phone: (02) 6338 4724
Fax: (02) 6338 4649
Email: lejones@csu.edu.au

From: KBCS-98 Secretariat [kbcs@konark.ncst.ernet.in]
Subject: KBCS-98: Call for Participation
Date: Wed 11/11/98 12:57 PM

KBCS-98: Call for Participation

INTERNATIONAL CONFERENCE ON KNOWLEDGE BASED COMPUTER SYSTEMS
National Centre for Software Technology
Mumbai, India
December 17-19, 1998

The International Conference on Knowledge Based Computer Systems will be held in Mumbai, India during December 17-19, 1998. The conference is intended to act as a forum for promoting interaction among researchers in the field of Artificial Intelligence in India and abroad. There will be a two-day conference during December 17-18, 1998, followed by a day of post-conference tutorials on December 19, 1998.

Papers were submitted to the conference on the following topics:
o AI Applications
o AI Architectures
o Automatic Programming
o Cognitive Modeling
o Expert Systems
o Foundations of AI
o Genetic Algorithms
o Information Retrieval
o Intelligent Agents
o Intelligent Tutoring Systems
o Knowledge Acquisition
o Knowledge Management
o Knowledge Representation
o Machine Learning
o Machine Translation
o Natural Language Processing
o Neural Networks
o Planning and Scheduling
o Reasoning
o Robotics
o Search Techniques
o Speech Processing
o Theorem Proving
o Uncertainty Handling
o Vision

About 30 papers will be presented during the conference.

Post-Conference Tutorial

The tutorial will be conducted on December 19, 1998 at NCST, Juhu, Mumbai.
o An Introduction to Information Extraction (11 AM to 6 PM)
  Amit Bagga, GE Corporate R & D Centre, USA

Registration

The last date for registration is 11th December 1998. On-site registration will be subject to availability of seats. All payments should be made by a crossed Mumbai cheque or a demand draft, payable to KBCS-98.

Fees:
  Conference
    Students                                    : Rs 600
    Delegates from not-for-profit organisations : Rs 800
    Other Delegates                             : Rs 1200
  Tutorials
    Full Day                                    : Rs 700

For further information please refer to the KBCS-98 home page or write to the KBCS-98 Secretariat.

KBCS-98 Secretariat
National Centre for Software Technology
Gulmohar Cross Rd No. 9
Juhu, Mumbai 400 049, India

Phone: +91 (22) 620 1606
Fax: +91 (22) 621 0139
E-mail: kbcs@konark.ncst.ernet.in
URL: http://konark.ncst.ernet.in/~kbcs/kbcs98/

From: Jonathan Baxter [Jon.Baxter@keating.anu.edu.au]
Subject: NIPS '98
Date: Mon 11/9/98 2:25 PM

*********************** NIPS*98 FINAL PROGRAM **********************

SUN NOV 29
18:00-22:00 Registration

MON NOV 30
08:30-18:00 Registration
09:30-17:30 Tutorials
18:30 Reception and Conference Banquet
20:30 The laws of the WEB (Banquet talk) B. Huberman, Xerox PARC

TUE DEC 1

Oral Session 1:
08:30 Statistics of visual images: neural representation and synthesis (Invited) E. Simoncelli, New York University
09:20 Attentional modulation of human pattern discrimination psychophysics reproduced by a quantitative model (VS1, Oral) L. Itti, J. Braun, D. Lee, C. Koch, California Institute of Technology
09:40 Orientation, scale, and discontinuity as emergent properties of illusory contour shape (VS2, Oral) K. Thornber, L. Williams, NEC Research Institute, University of New Mexico
10:00 DTs: dynamic trees (AA1, Spotlight) C. Williams, N. Adams, Aston University
Modeling stationary and integrated time series with autoregressive neural networks (LT1, Spotlight) F. Leisch, A. Trapletti, K. Hornik, Technical University of Vienna
Analog neural nets with Gaussian or other common noise distributions cannot recognize arbitrary regular languages (LT2, Spotlight) W. Maass, E. Sontag, Technical University of Graz, Rutgers University
Semiparametric support vector and linear programming machines (AA8, Spotlight) A. Smola, T. Friess, B. Schoelkopf, GMD FIRST
Blind separation of filtered source using state-space approach (AA13, Spotlight) L. Zhang, A. Cichocki, RIKEN Brain Science Institute
10:15-11:00 Break

Oral Session 2:
11:00 The bias-variance tradeoff and the randomized GACV (AA4, Oral) G. Wahba, X. Lin, F. Gao, D. Xiang, R. Klein, B. Klein, University of Wisconsin-Madison, SAS Institute
11:20 Kernel PCA and de-noising in feature spaces (AA7, Oral) S. Mika, B. Schoelkopf, A. Smola, K. Mueller, M. Scholz, G. Raetsch, GMD FIRST
11:40 Sparse code shrinkage: denoising by maximum likelihood estimation (AA12, Oral) A. Hyvaarinen, P. Hoyer, E. Oja, Helsinki University of Technology
12:00-14:00 Lunch

Oral Session 3:
14:00 Temporally asymmetric Hebbian learning, spike timing and neuronal response variability (Invited) L. Abbott, Brandeis University
14:50 Information maximization in single neurons (NS1, Oral) M. Stemmler, C. Koch, California Institute of Technology
15:10 Multi-electrode spike sorting by clustering transfer functions (NS2, Oral) D. Rinberg, H. Davidowitz, N. Tishby, NEC Research Institute
15:30 Distributional population codes and multiple motion models (NS3, Spotlight) R. Zemel, P. Dayan, University of Arizona, Massachusetts Institute of Technology
Population coding with correlated noise (NS4, Spotlight) H. Yoon, H. Sompolinsky, Hebrew University
Bayesian modeling of human concept learning (CS1, Spotlight) J. Tenenbaum, Massachusetts Institute of Technology
Mechanisms of generalization in perceptual learning (CS2, Spotlight) Z. Liu, D. Weinshall, NEC Research Institute, Hebrew University
An entropic estimator for structure discovery (SP1, Spotlight) M. Brand, Mitsubishi Electric Research Laboratory
15:45-16:15 Break

Oral Session 4:
16:15 The role of lateral cortical competition in ocular dominance development (NS6, Oral) C. Piepenbrock, K. Obermayer, Technical University of Berlin
16:35 Evidence for learning of a forward dynamic model in human adaptive control (CS3, Oral) N. Bhushan, R. Shadmehr, Johns Hopkins University
16:55-18:00 Poster Preview
19:30 Poster Session

WED DEC 2

Oral Session 5:
08:30 Computation by Cortical Modules (Invited) H. Sompolinsky, Hebrew University
09:20 Learning curves for Gaussian processes (LT14, Oral) P. Sollich, University of Edinburgh
09:40 Mean field methods for classification with Gaussian processes (LT15, Oral) M. Opper, O. Winther, Aston University, Niels Bohr Institute
10:00 Dynamics of supervised learning with restricted training sets (LT16, Spotlight) A. Coolen, D. Saad, King's College London, Aston University
Finite-dimensional approximation of Gaussian processes (LT18, Spotlight) G. Trecate, C. Williams, M. Opper, University of Pavia, Aston University
Inference in multilayer networks via large deviation bounds (LT20, Spotlight) M. Kearns, L. Saul, AT&T Labs
Gradient descent for general reinforcement learning (CN14, Spotlight) L. Baird, A. Moore, Carnegie Mellon University
Risk sensitive reinforcement learning (CN15, Spotlight) R. Neuneier, O. Mihatsch, Siemens AG
10:15-11:00 Break

Oral Session 6:
11:00 VLSI implementation of motion centroid localization for autonomous navigation (IM6, Oral) R. Etienne-Cummings, M. Ghani, V. Gruev, Southern Illinois University
11:20 Improved switching among temporally abstract actions (CN16, Oral) R. Sutton, S. Singh, D. Precup, B. Ravindran, University of Massachusetts, University of Colorado
11:40 Finite-sample convergence rates for Q-learning and indirect algorithms (CN17, Oral) M. Kearns, S. Singh, AT&T Labs, University of Colorado
12:00-14:00 Lunch

Oral Session 7:
14:00 Statistical natural language processing: better living through floating-point numbers (Invited) E. Charniak, Brown University
14:50 Markov processes on curves for automatic speech recognition (SP3, Oral) L. Saul, M. Rahim, AT&T Labs
15:10 Approximate learning of dynamic models (AA22, Oral) X. Boyen, D. Koller, Stanford University
15:30 Learning nonlinear stochastic dynamics using the generalized EM algorithm (AA23, Spotlight) Z. Ghahramani, S. Roweis, University of Toronto, California Institute of Technology
Reinforcement learning for trading systems (AP9, Spotlight) J. Moody, M. Saffell, Oregon Graduate Institute
Bayesian modeling of facial similarity (AP13, Spotlight) B. Moghaddam, T. Jebara, A. Pentland, Mitsubishi Electric Research Laboratory, Massachusetts Institute of Technology
Computation of smooth optical flow in a feedback connected analog network (IM8, Spotlight) A. Stocker, R. Douglas, University and ETH Zurich
Classification on pairwise proximity data (AA26, Spotlight) T. Graepel, R. Herbrich, P. Bollmann-Sdorra, K. Obermayer, Technical University of Berlin
15:45-16:15 Break

Oral Session 8:
16:15 Learning from dyadic data (AA27, Oral) T. Hofmann, J. Puzicha, M. Jordan, Massachusetts Institute of Technology, University of Bonn
16:35 Classification in non-metric spaces (VS7, Oral) D. Weinshall, D. Jacobs, Y. Gdalyahu, NEC Research Institute, Hebrew University
16:55-18:00 Poster Preview
19:30 Poster Session

THU DEC 3

Oral Session 9:
08:30 Convergence of the wake-sleep algorithm (LT22, Oral) S. Ikeda, S. Amari, H. Nakahara, RIKEN Brain Science Institute
08:50 Learning a continuous hidden variable model for binary data (AA32, Oral) D. Lee, H. Sompolinsky, Bell Laboratories, Hebrew University
09:10 Direct optimization of margins improves generalization in combined classifiers (LT23, Oral) L. Mason, P. Bartlett, J. Baxter, Australian National University
09:30 A polygonal line algorithm for constructing principal curves (AA39, Oral) B. Kegl, A. Krzyzak, T. Linder, K. Zeger, Concordia University, Queen's University, UC San Diego
09:50-10:30 Break

Oral Session 10:
10:30 Graphical models for recognizing human interactions (AP15, Oral) N. Oliver, B. Rosario, A. Pentland, Massachusetts Institute of Technology
10:50 Fast neural network emulation of physics-based models for computer animation (AP16, Oral) R. Grzeszczuk, D. Terzopoulos, G. Hinton, Intel Corporation, University of Toronto
11:10 Things that think (Invited) N. Gershenfeld, Massachusetts Institute of Technology
12:00 End of main conference

POSTERS: TUE DEC 1
Basis selection for wavelet regression (AA2, Poster) K. Wheeler, NASA Ames Research Center
Boxlets: a fast convolution algorithm for signal processing and neural networks (AA3, Poster) P. Simard, L. Bottou, P. Haffner, Y. LeCun, AT&T Labs
Least absolute shrinkage is equivalent to quadratic penalization (AA5, Poster) Y. Grandvalet, S. Canu, Universite de Technologie de Compiegne
Neural networks for density estimation (AA6, Poster) M. Magdon-Ismail, A. Atiya, California Institute of Technology
Semi-supervised support vector machines (AA9, Poster) K. Bennett, A. Demiriz, Rensselaer Polytechnic Institute
Exploiting generative models in discriminative classifiers (AA10, Poster) T. Jaakkola, D. Haussler, UC Santa Cruz
Using analytic QP and sparseness to speed training of support vector machines (AA11, Poster) J. Platt, Microsoft Research
Source separation as a by-product of regularization (AA14, Poster) S. Hochreiter, J. Schmidhuber, Technical University of Munich, IDSIA
Unsupervised classification with non-Gaussian mixture models using ICA (AA15, Poster) T-W. Lee, M. Lewicki, T. Sejnowski, The Salk Institute
Hierarchical ICA belief networks (AA16, Poster) H. Attias, UC San Francisco
Efficient Bayesian parameter estimation in large discrete domains (AA17, Poster) N. Friedman, Y. Singer, UC Berkeley, AT&T Labs
Discovering hidden features with Gaussian processes regression (AA18, Poster) F. Vivarelli, C. Williams, Aston University
Bayesian PCA (AA19, Poster) C. Bishop, Microsoft Research
Replicator equations, maximal cliques, and graph isomorphism (AA20, Poster) M. Pelillo, University of Venice
Convergence rates of algorithms for perceptual organization: detecting visual contours (AA21, Poster) A. Yuille, J. Coughlan, Smith-Kettlewell Eye Research Institute
Independent component analysis of intracellular calcium spike data (AP1, Poster) K. Prank, J. Boerger, A. von zur Muehlen, G. Brabant, C. Schoefl, Medical School Hannover
Applications of multi-resolution neural networks to mammography (AP2, Poster) P. Sajda, C. Spence, Sarnoff Corporation
Making templates rotationally invariant: an application to rotated digit recognition (AP3, Poster) S. Baluja, Carnegie Mellon University
Graph matching for shape retrieval (AP4, Poster) B. Huet, A. Cross, E. Hancock, University of York
Vertex identification in high energy physics experiments (AP5, Poster) G. Dror, H. Abramowicz, D. Horn, The Academic College of Tel-Aviv-Yaffo, Tel-Aviv University
Familiarity discrimination of radar pulses (AP6, Poster) E. Granger, S. Grossberg, M. Rubin, W. Streilein, Ecole Polytechnique de Montreal, Boston University
Robot docking using mixtures of Gaussians (AP7, Poster) M. Williamson, R. Murray-Smith, V. Hansen, Massachusetts Institute of Technology, Technical University of Denmark, Daimler-Benz
Call-based fraud detection in mobile communication networks using a hierarchical regime-switching model (AP8, Poster) J. Hollmen, V. Tresp, Helsinki University of Technology, Siemens AG
Multiple paired forward-inverse models for human motor learning and control (CS4, Poster) M. Haruno, D. Wolpert, M. Kawato, ATR Human Information Processing Research Laboratories, University College London
A neuromorphic monaural sound localizer (IM1, Poster) J. Harris, C-J. Pu, J. Principe, University of Florida
Active noise canceling using analog neuro-chip with on-chip learning capability (IM2, Poster) J-W. Cho, S-Y. Lee, Korea Advanced Institute of Science and Technology
Optimizing correlation algorithms for hardware-based transient classification (IM3, Poster) R. Edwards, G. Cauwenberghs, F. Pineda, Johns Hopkins University
A high performance k-NN classifier using a binary correlation matrix memory (IM4, Poster) P. Zhou, J. Austin, J. Kennedy, University of York
A micropower CMOS adaptive amplitude and shift invariant vector quantizer (IM5, Poster) R. Coggins, R. Wang, M. Jabri, University of Sydney
Where does the population vector of motor cortical cells point during reaching movements? (NS5, Poster) P. Baraduc, E. Guigon, Y. Burnod, Universite Pierre et Marie Curie
Heeger's normalization, line attractor network and ideal observers (NS7, Poster) S. Deneve, A. Pouget, P. Latham, Georgetown University
Image statistics and cortical normalization models (NS8, Poster) E. Simoncelli, O. Schwartz, New York University
Learning instance-independent value functions to enhance local search (CN1, Poster) R. Moll, A. Barto, T. Perkins, R. Sutton, University of Massachusetts
Exploring unknown environments with real-time heuristic search (CN2, Poster) S. Koenig, Georgia Institute of Technology
GLS: a hybrid classifier system based on POMDP research (CN3, Poster) A. Hayashi, N. Suematsu, Hiroshima City University
Non-linear PI control inspired by biological control systems (CN4, Poster) L. Brown, G. Gonye, J. Schwaber, E.I. DuPont deNemours
Coordinate transformation learning of hand position feedback controller by using change of position error norm (CN5, Poster) E. Oyama, S. Tachi, University of Tokyo
Optimizing admission control while ensuring quality of service in multimedia networks via reinforcement learning (CN6, Poster) T. Brown, H. Tong, S. Singh, University of Colorado
Barycentric interpolators for continuous space & time reinforcement learning (CN7, Poster) R. Munos, A. Moore, Carnegie Mellon University
Modifying the parti-game algorithm for increased robustness, higher efficiency and better policies (CN8, Poster) M. Al-Ansari, R. Williams, Northeastern University
Coding time-varying signals using sparse, shift-invariant representations (SP2, Poster) M. Lewicki, T. Sejnowski, The Salk Institute
Phase diagram and storage capacity of sequence storing neural networks (LT3, Poster) A. Duering, A. Coolen, D. Sherrington, Oxford University, King's College London
Discontinuous recall transitions induced by competition between short- and long-range interactions in recurrent networks (LT4, Poster) N. Skantzos, C. Beckmann, A. Coolen, King's College London
Computational differences between asymmetrical and symmetrical networks (LT5, Poster) Z. Li, P. Dayan, Massachusetts Institute of Technology
Shrinking the tube: a new support vector regression algorithm (LT6, Poster) B. Schoelkopf, P. Bartlett, A. Smola, R. Williamson, GMD FIRST, Australian National University
Dynamically adapting kernels in support vector machines (LT7, Poster) N. Cristianini, C. Campbell, J. Shawe-Taylor, University of Bristol, University of London
A theory of mean field approximation (LT8, Poster) T. Tanaka, Tokyo Metropolitan University
The belief in TAP (LT9, Poster) Y. Kabashima, D. Saad, Tokyo Institute of Technology, Aston University
Unsupervised clustering: the mutual information between parameters and observations (LT10, Poster) D. Herschkowitz, J-P. Nadal, Ecole Normale Superieure
Classification with linear threshold functions and the linear loss (LT11, Poster) C. Gentile, M. Warmuth, University of Milan, UC Santa Cruz
Almost linear VC dimension bounds for piecewise polynomial networks (LT12, Poster) P. Bartlett, V. Maiorov, R. Meir, Australian National University, Technion
Tight bounds for the VC-dimension of piecewise polynomial networks (LT13, Poster) A. Sakurai, Japan Advanced Institute of Science and Technology
Learning Lie transformation groups for invariant visual perception (VS3, Poster) R. Rao, D. Rudermann, The Salk Institute
Support vector machines applied to face recognition (VS4, Poster) J. Phillips, National Institute of Standards and Technology
Learning to find pictures of people (VS5, Poster) S. Ioffe, D. Forsyth, UC Berkeley
Probabilistic sensor fusion (VS6, Poster) R. Sharma, T. Leen, M. Pavel, Oregon Graduate Institute

POSTERS: WED DEC 2
Learning multi-class dynamics (AA24, Poster) A. Blake, B. North, M. Isard, Oxford University
Fisher scoring and a mixture of modes approach for approximate inference and learning in nonlinear state space models (AA25, Poster) T. Briegel, V. Tresp, Siemens AG
A randomized algorithm for pairwise clustering (AA28, Poster) Y. Gdalyahu, D. Weinshall, M. Werman, Hebrew University
Visualizing group structure (AA29, Poster) M. Held, J. Puzicha, J. Buhmann, University of Bonn
Probabilistic visualization of high-dimensional binary data (AA30, Poster) M. Tipping, Microsoft Research
Restructuring sparse high dimensional data for effective retrieval (AA31, Poster) C. Isbell, P. Viola, Massachusetts Institute of Technology
Exploratory data analysis using radial basis function latent variable models (AA33, Poster) A. Marrs, A. Webb, DERA
SMEM algorithm for mixture models (AA34, Poster) N. Ueda, R. Nakano, Z. Ghahramani, G. Hinton, NTT Communication Science Laboratories, University of Toronto
Learning mixture hierarchies (AA35, Poster) N. Vasconcelos, A. Lippman, Massachusetts Institute of Technology
On-line and batch parameter estimation of Gaussian mixtures based on the relative entropy (AA36, Poster) Y. Singer, M. Warmuth, AT&T Labs, UC Santa Cruz
Very fast EM-based mixture model clustering using multiresolution kd-trees (AA37, Poster) A. Moore, Carnegie Mellon University
Maximum conditional likelihood via bound maximization and the CEM algorithm (AA38, Poster) T. Jebara, A. Pentland, Massachusetts Institute of Technology
Lazy learning meets the recursive least squares algorithm (AA40, Poster) M. Birattari, G. Bontempi, H. Bersini, Universite Libre de Bruxelles
Global optimization of neural network models via sequential sampling (AA41, Poster) J. de Freitas, M. Niranjan, A. Doucet, A. Gee, Cambridge University
Regularizing AdaBoost (AA42, Poster) G. Raetsch, T. Onoda, K. Mueller, GMD FIRST
Using collective intelligence to route Internet traffic (AP10, Poster) D. Wolpert, K. Tumer, J. Frank, NASA Ames Research Center
Scheduling straight-line code using reinforcement learning and rollouts (AP11, Poster) A. McGovern, E. Moss, University of Massachusetts
Probabilistic modeling for face orientation discrimination: learning from labeled and unlabeled data (AP12, Poster) S. Baluja, Carnegie Mellon University
Adding constrained discontinuities to Gaussian process models of wind fields (AP14, Poster) D. Cornford, I. Nabney, C. Williams, Aston University
A principle for unsupervised hierarchical decomposition of visual scenes (CS5, Poster) M. Mozer, University of Colorado
Facial memory is kernel density estimation (almost) (CS6, Poster) M. Dailey, G. Cottrell, T. Busey, UC San Diego, Indiana University
Perceiving without learning: from spirals to inside/outside relations (CS7, Poster) K. Chen, D. Wang, Ohio State University
Utilizing time: asynchronous binding (CS8, Poster) B. Love, Northwestern University
A model for associative multiplication (CS9, Poster) G. Christianson, S. Becker, McMaster University
Analog VLSI cellular implementation of the boundary contour system (IM7, Poster) G. Cauwenberghs, J. Waskiewicz, Johns Hopkins University
An integrated vision sensor for the computation of optical flow singular points (IM9, Poster) C. Higgins, C. Koch, California Institute of Technology
Spike-based compared to rate-based Hebbian learning (NS9, Poster) R. Kempter, W. Gerstner, L. van Hemmen, Technical University of Munich, Swiss Federal Institute of Technology
Neuronal regulation implements efficient synaptic pruning (NS10, Poster) G. Chechik, I. Meilijson, E. Ruppin, Tel-Aviv University
Signal detection in noisy weakly-active dendrites (NS11, Poster) A. Manwani, C. Koch, California Institute of Technology
Influence of changing the synaptic transmitter release probability on contrast adaptation of simple cells in the primary visual cortex (NS12, Poster) P. Adorjan, K. Obermayer, Technical University of Berlin
Complex cells as cortically amplified simple cells (NS13, Poster) F. Chance, S. Nelson, L. Abbott, Brandeis University
Synergy and redundancy among brain cells of behaving monkeys (NS14, Poster) I. Gat, N. Tishby, Hebrew University, NEC Research Institute
Visualizing and analyzing single-trial event-related potentials (NS15, Poster) T-P. Jung, S. Makeig, M. Westerfield, J. Townsend, E. Courchesne, T. Sejnowski, The Salk Institute, Naval Health Research Center, UC San Diego
A reinforcement learning algorithm in partially observable environments using short-term memory (CN9, Poster) N. Suematsu, A. Hayashi, Hiroshima City University
Experiments with a memoryless algorithm which learns locally optimal stochastic policies for partially observable Markov decision processes (CN10, Poster) J. Williams, S. Singh, University of Colorado
The effect of eligibility traces on finding optimal memoryless policies in partially observable Markovian decision processes (CN11, Poster) J. Loch, University of Colorado
Learning macro-actions in reinforcement learning (CN12, Poster) J. Randlov, Niels Bohr Institute
Reinforcement learning based on on-line EM algorithm (CN13, Poster) M. Sato, S. Ishii, ATR Human Information Processing Research Laboratories, Nara Institute of Science and Technology
Controlling the complexity of HMM systems by regularization (SP4, Poster) C. Neukirchen, G. Rigoll, Gerhard-Mercator-University
Maximum-likelihood continuity mapping (MALCOM): an alternative to HMMs (SP5, Poster) D. Nix, J. Hogden, Los Alamos National Laboratory
On-line learning with restricted training sets: exact solution as benchmark for general theories (LT17, Poster) H. Rae, P. Sollich, A. Coolen, King's College London, University of Edinburgh
General bounds on Bayes errors for regression with Gaussian processes (LT19, Poster) M. Opper, F. Vivarelli, Aston University
Variational approximations of graphical models using undirected graphs (LT21, Poster) D. Barber, W. Wiegerinck, University of Nijmegen
Optimizing classifiers for imbalanced training sets (LT24, Poster) G. Karakoulas, J. Shawe-Taylor, Canadian Imperial Bank of Commerce, University of London
On the optimality of incremental neural network algorithms (LT25, Poster) R. Meir, V. Maiorov, Technion
General-purpose localization of textured image regions (VS8, Poster) R. Rosenholtz, Xerox PARC
A V1 model of pop out and asymmetry in visual search (VS9, Poster) Z. Li, Massachusetts Institute of Technology
Minutemax: a fast approximation for minimax learning (VS10, Poster) J. Coughlan, A. Yuille, Smith-Kettlewell Eye Research Institute
Using statistical properties of a labelled visual world to estimate scenes (VS11, Poster) W. Freeman, E. Pasztor, Mitsubishi Electric Research Laboratory
Example based image synthesis of articulated figures (VS12, Poster) T. Darrell, Interval Research

Subject: PAAM99 Call for Papers
Date: Wed 11/11/98 11:02 AM

PAAM99 - CALL FOR PAPERS AND PARTICIPATION

The Fourth International Conference and Exhibition on The Practical Application of Intelligent Agents and Multi-Agent Technology
Monday 19th April - Wednesday 21st April 1999
Commonwealth Institute, London, UK
http://www.practical-applications.co.uk/PAAM99

PAAM99 is sponsored and supported to date by: AgentLink, Agent Society, CompulogNet, FIPA, IF Computer, LPA.

***UPDATE: Invited Programme to date***

TUTORIALS: Monday 19th April, 1999
Dr Matthias Klusch, Carnegie Mellon University, USA
Dr Peter Edwards, University of Aberdeen, UK
Dr H. Van Dyke Parunak, Center for Electronic Commerce, USA
Anand Rao, Mitchell Madison Group, Australia

INVITED SPEAKERS: Tuesday 20th and Wednesday 21st April, 1999
Eric Horvitz, Microsoft, USA
Dr H. Van Dyke Parunak, Center for Electronic Commerce, USA
Anand Rao, Mitchell Madison Group, Australia

PAAM99
Agent technology is an exciting and fast-moving field of IT which is now making the transition from universities and research labs into industrial and commercial applications. PAAM99 will demonstrate how agent technology is creating new opportunities for business and industry by focusing on the central issues.

PAAM and PA Expo99
PAAM is a key part of the Practical Application Expo: a unique five-day, multi-technology, multi-track event which takes place every spring in London. PA Expo99 will also include conferences on KDD, Logic Programming, Knowledge Management and JAVA. PA Expo combines the peer-review process of an academic conference with the commercial relevance of an applied industrial event. This contrast of theory and practice, research and deployment is rarely found elsewhere, and makes an ideal forum for participants to network and share ideas. PA Expo offers a rich blend of tutorials, invited talks, refereed papers, panel discussions, poster sessions, a social agenda and a full industrial exhibition. The result is a varied technical programme which caters for delegates of various levels of expertise, from beginner to advanced practitioner, in a pleasant and productive environment. You are invited to register your interest in PAAM99 by completing the reply form below.

Venue
PA EXPO99 will take place at The Commonwealth Conference and Events Centre. It enjoys a prime location in the heart of London, strategically situated in Royal Kensington and Chelsea, leading onto High Street Kensington, which is home to one of London's premier shopping areas and a selection of quality hotels and restaurants.

Call for Submissions
Your submission should describe one of the following:
  Commercially available products
  Internally deployed solutions
  Fully advanced pre-production prototypes
There are two types of submission available:
  1. Paper
  2. Industrial Report

1 Paper
Paper submissions will be refereed by the programme committee. Authors of accepted papers will have 30 minutes to present their papers at the conference. Papers should be no longer than 20 pages.

Topics of interest include but are not limited to:

Personal agents for:
  Email
  PDAs
  Finding & retrieving information / WWW
  Scheduling/diary management
  Mobility management

Business/service agents for:
  Workflow management
  Network Management and control
  Buying & selling services
  Information Management, Retrieval & Understanding
  Information Customisation
  WWW and Internet
  Negotiation
  Price Discovery
  Information Arbitrage
  Manufacturing Planning & Control

Tools and Techniques for:
  Process Control
  Traffic Control
  Process Re-engineering
  Knowledge Management
  Languages for developing agent systems
  Coordination standards
  Standards for information transfer
  Identity Management
  Monetary Instrument Standards
  Design methodologies for agent-based systems

Also of interest are survey papers which indicate the future direction this important technology will take.

2 Industrial Report
In recognition of the applied nature of PAAM, we encourage the submission of Industrial Reports. This type of submission is intended for professionals who have little time to write a full paper but who would nevertheless like the opportunity to present the benefits of their application at the conference. Please forward a short paper (no more than 6 pages) describing your application, based on the guidelines to be found at our web site. Industrial Reports will receive a fast-track review.

Submission Policies
Submissions need to meet the conference objectives and achieve a balance between application and theory; with this in mind we have produced guidelines to help you achieve this. Please visit our web site for further information:
Papers: http://www.practical-applications.co.uk/PAAM99/guidelines
Reports: http://www.practical-applications.co.uk/PAAM99//report
Please indicate which category your paper falls into on submission. Papers and Industrial Reports will be published in the proceedings, but note that there will be a clear indication as to the type of submission. In some cases the programme committee will ask for revisions to submissions. Authors are responsible for making the required revisions and submitting the final or revised version of the paper by the due date for the final version. Papers not meeting the revision requirements will not be included in the Proceedings. Papers from non-English speakers are strongly encouraged; however, we would ask that your paper is proofread by a native English speaker, if possible, before submission.

Submission Details
Dates:
  Submission Deadline: January 11th, 1999
  Notification/Comments: February 15th, 1999
  Final Papers due: March 15th, 1999

Five copies of your paper, in English, should be received by the conference organisers on or before January 11, 1999. Formatting instructions (Word) can be downloaded from our web site. Please also submit an electronic version of your paper as a PDF file or Word document. The email address for this will be made available in due course. Please include a cover page including:
  1. Full contact details
  2. A short abstract

Please use the following address when sending your papers to us:
  PAAM99
  54 Knowle Avenue
  Blackpool
  Lancs FY2 9UD
  UK

Call for Exhibitors
The conference also provides an opportunity for software vendors and developers to demonstrate agent systems. You are invited to contact the organiser to arrange for your application to be exhibited at the event.

Programme Committee
The PAAM99 Programme Committee comprises a number of highly influential figures who are recognised experts in the field of agent technology. Chaired by Divine T. Ndumu and Hyacinth S. Nwana of British Telecom Laboratories, the programme committee is drawn from industry leaders and first-class universities and research organisations.

Joern Altmann, University of California, Berkeley, USA
Jeffrey Bradshaw, Boeing, USA
Stephan Brunessaux, MATRA Systemes & Information, France
Stefan Bussmann, Daimler-Benz AG, Germany
Paolo Ciancarini, University of Bologna, Italy
Barry Crabtree, BT Laboratories, UK
Peter Edwards, University of Aberdeen, UK
Innes Ferguson, Active On-line Systems Ltd., UK
Tim Finin, University of Maryland, USA
Klaus Fischer, DFKI, Germany
Benjamin Grosof, IBM, USA
Fumio Hattori, NTT, Japan
Michael Huhns, University of South Carolina, USA
Toru Ishida, Kyoto University, Japan
Sverker Janson, SICS, Sweden
Nick Jennings, University of London, UK
Paul Kearney, BT Laboratories, UK
Matthias Klusch, Carnegie Mellon University, USA
Yannis Labrou, University of Maryland, Baltimore County, USA
Danny Lange, General Magic, USA
Michael Lee, INSEAD, France
Victor Lesser, University of Massachusetts, USA
David Martin, SRI International, USA
Cindy Mason, University of California, Berkeley, USA
Joerg Mueller, John Wiley & Sons Ltd, UK
Katashi Nagao, Sony Computer Science Lab. Inc., Japan
Remo Pareschi, Telecom Italia, Italy
Barney Pell, NASA, USA
Tony Rutkowski, General Magic, USA
Donald Steiner, Siemens AG, Germany
Arvindra Sehmi, CSK Software Ltd, UK
Leon Sterling, University of Melbourne, Australia
Hiroyuki Tarumi, Kyoto University, Japan
Jan Treur, Vrije Univ. Amsterdam, The Netherlands
Dr. H. Van Dyke Parunak, Industrial Technology Institute, USA
Peter Wayner, Newray Inc, USA
Michael Wooldridge, Queen Mary & Westfield College, UK

Organisation
The Conference is organised by The Practical Application Company. The Sponsorship and Exhibition Coordinator is Clive Spenser, PMG Treasurer and Marketing Director of Logic Programming Associates.

~~~~~~~~~~~~~~~~~~~~
To register interest, please take the time to fill in this quick reply form and send it to agents@pap.com

Name:
Position:
Organisation:
Address:
Postcode:
Country:
Telephone:
Fax:
E-mail:
Web:

[ ] I may submit a paper to PAAM99 (give provisional title if possible)
[ ] I may submit an Industrial Report to PAAM99 (give provisional title if possible)
[ ] I may wish to attend PAAM99 as a delegate
[ ] My company may wish to sponsor or exhibit at the event
[ ] I have no interest in Intelligent Agents, but keep me informed of other events at the Practical Application Expo
[ ] I have no interest in this. Please remove me from the mailing list

The Practical Application Company
PO Box 137
Blackpool
Lancs FY2 9UN
UK
Tel: +44 (0)1253 358081
Fax: +44 (0)1253 353811
email: info@pap.com
WWW: http://www.practical-applications.co.uk/TPAC

End of ML-LIST (Digest format) ****************************************