The following links are from an MIT symposium, Where Does Syntax Come From? Have We All Been Wrong? To be honest, it is a laundry list, but if one is into computers and cognitive psychology it is a pretty good one. A number of this weblog's posts have dealt with issues of marketing and communications, all of which involve language. The analogy of the human mind to a computer is standard, but with concepts such as crowd-sourcing and social networking in Web 2.0 environments, extending the analogy to treat computer code as language becomes all the more interesting. One area of particular interest is human group decision making, or interaction between groups with different language structures. The following MIT departments were involved in the symposium.
LIDS: Laboratory for Information and Decision Systems
Overview of the Laboratory for Information and Decision Systems (LIDS)
The Laboratory for Information and Decision Systems (LIDS) is an interdepartmental research laboratory at the Massachusetts Institute of Technology. It began in 1939 as the Servomechanisms Laboratory, an offshoot of the Department of Electrical Engineering. Its early work, during World War II, focused on gunfire and guided missile control, radar, and flight trainer technology. Over the years, the scope of its research broadened.
MIT: Brain and Cognitive Sciences
MIT's Department of Brain and Cognitive Sciences stands at the nexus of neuroscience, biology and psychology. We combine these disciplines to study specific aspects of the brain and mind including: vision, movement systems, learning and memory, neural and cognitive development, language and reasoning. Working collaboratively, we apply our expertise, tools, and techniques to address and answer both fundamental and universal questions about how the brain and mind work.
MIT Computational Cognitive Science Group
We study the computational basis of human learning and inference. Through a combination of mathematical modeling, computer simulation, and behavioral experiments, we try to uncover the logic behind our everyday inductive leaps: constructing perceptual representations, separating “style” and “content” in perception, learning concepts and words, judging similarity or representativeness, inferring causal connections, noticing coincidences, predicting the future.
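To make that list a little more concrete, here is a minimal sketch of the Bayesian reasoning behind one of those inductive leaps, noticing a coincidence: a short run of heads barely dents a strong prior that a coin is fair, while a long run overwhelms it. The hypotheses, priors, and numbers are illustrative assumptions of mine, not the group's actual models.

```python
# A minimal sketch (with invented numbers) of Bayesian reasoning about a
# coincidence: is a run of heads just luck, or evidence of a trick coin?

def posterior_trick(n_heads, n_flips, prior_trick=0.01, trick_bias=0.9):
    """Posterior probability of a trick coin, via Bayes' rule."""
    n_tails = n_flips - n_heads
    # Likelihood of the data under each hypothesis (independent flips).
    like_fair = 0.5 ** n_flips
    like_trick = (trick_bias ** n_heads) * ((1 - trick_bias) ** n_tails)
    prior_fair = 1 - prior_trick
    evidence = like_trick * prior_trick + like_fair * prior_fair
    return like_trick * prior_trick / evidence

# A few heads barely move belief; a long run overwhelms the skeptical prior.
for flips in (5, 10, 20):
    print(flips, "heads in a row ->", round(posterior_trick(flips, flips), 3))
```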
Other talks will be added when they become available. A bit of web-trekking resulted in discovering this site at the University of Pennsylvania.
Neuroethics Publications
'Neuroethics' is the ethics of neuroscience, analogous to the term 'bioethics' which denotes the ethics of biomedical science more generally. It encompasses a wide array of ethical issues emerging from different branches of clinical neuroscience (neurology, psychiatry, psychopharmacology) and basic neuroscience (cognitive neuroscience, affective neuroscience). These include ethical problems raised by advances in functional neuroimaging, brain implants and brain-machine interfaces and psychopharmacology as well as by our growing understanding of the neural bases of behavior, personality, consciousness, and states of spiritual transcendence. This collection brings together the work of a growing number of Penn researchers from across the academic disciplines who are contributing to the neuroethics literature.
diigo tags: brain, decisionsystem, language, mit, learning, psychology, science, video, neuroethics
MIT World » Human Simulations of Language Learning
This workshop, explains Michael Coen, is an effort to engender temperate, collaborative discussion of a matter that inspires hot dispute: whether machine learning helps explain how humans acquire language. In particular, says Coen, machine learning advocates believe they have evidence against Noam Chomsky’s “poverty of stimulus argument,” which in essence states that language is built into us, that “children don’t receive enough linguistic inputs to explain linguistic outputs.”
MIT World » Explorations in Language Learnability Using Probabilistic Grammars and Child-directed Speech
How do kids manage to figure out that the word “dog” applies to a whole category of animals, not just one creature? Joshua Tenenbaum wants to understand how children and adults manage to solve such classic problems of induction. Throughout cognition, wherever you look, he says “we see places where we know more than we have a reasonable right to know about the world, places where we come to abstractions, generalizations, models of the world that go beyond our sparse, noisy, limited experience.” Tenenbaum’s goal is to come up with “general purpose computational tools for understanding how people solve these problems so successfully.”
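As a rough illustration of the kind of induction Tenenbaum describes, here is a toy version of the Bayesian "size principle": candidate meanings for "dog" are scored by how well they explain a handful of labeled examples, and narrower meanings gain ground as consistent examples accumulate. The hypothesis space, extension sizes, and priors below are invented for the sketch, not taken from his models.

```python
# A toy "size principle" calculation in the spirit of Bayesian word learning.
# Extension sizes and priors are invented for illustration only.

hypotheses = {
    # meaning: (prior, rough number of things the word would pick out)
    "dalmatian": (0.2, 10),
    "dog":       (0.3, 100),
    "animal":    (0.5, 10_000),
}

def posteriors(n_examples):
    """Posterior over meanings after n consistent examples (all Dalmatians)."""
    scores = {}
    for meaning, (prior, size) in hypotheses.items():
        # Size principle: an example drawn from a hypothesis has probability
        # 1/size, so small hypotheses gain rapidly from consistent data.
        scores[meaning] = prior * (1.0 / size) ** n_examples
    total = sum(scores.values())
    return {meaning: s / total for meaning, s in scores.items()}

for n in (1, 3):
    print(n, "example(s):", {k: round(v, 3) for k, v in posteriors(n).items()})
```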
Past posts on this blog have also dealt with the evolution of language. I am wondering how this plays out in larger organizational and social systems, and in the interaction between systems with different language structures, e.g., Americans in Iraq or other situations, especially as the web accelerates communication even when it does not directly connect everyone within a culture.
- diigo post by brianddrpm
MIT World » The Computational Nature of Language Learning
Niyogi believes that an “evolutionary trajectory” links how acquisition happens at an individual level, and how variation in language springs up from one generation to the next. But rather than inheriting the grammar of your parents, you have to learn it. Examining language variation over time as if it were genetic variation, “you get a different mathematical structure…and probabilities start playing an important role.” Small differences “can have very subtle consequences giving rise to bifurcation in nonlinear dynamics of evolution.” For instance, 1000 years ago, the English were speaking a language that’s unrecognizable to us today. How has it come to be that “we have moved so far from that point through learning which is mimicking the previous generation?”
Niyogi explains that within a single population two varying languages may be in competition (say, a German and an English-type grammar). While a majority may speak the dominant variant, some children will likely be exposed to a mixture of the two. There’s a “drift” in language use, “and suddenly, what was stable becomes unstable.”
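Niyogi's picture of two grammars competing across generations can be sketched as a toy iterated-learning simulation: each generation of learners samples from the mixture produced by the previous one, and a small learning bias decides which grammar eventually takes over. The update rule and the "advantage" parameter are my own simplification, not his equations.

```python
# A toy two-grammar competition across generations (an illustrative model,
# not Niyogi's actual dynamics).

def next_generation(p, advantage):
    """Fraction of children adopting grammar A, given a fraction p of A-speakers.

    'advantage' biases learners toward A when the input is mixed, e.g.
    because A-sentences more often provide unambiguous evidence.
    """
    exposure = advantage * p
    return exposure / (exposure + (1 - p))

def run(p0, advantage, generations=200):
    p = p0
    for _ in range(generations):
        p = next_generation(p, advantage)
    return p

# A slight advantage lets grammar A take over; a slight disadvantage lets
# the same comfortable majority drift away to nothing.
print(run(p0=0.7, advantage=1.05))  # -> near 1.0
print(run(p0=0.7, advantage=0.95))  # -> near 0.0
```

In this toy map the odds of grammar A are simply multiplied by the advantage every generation, so the moment that parameter crosses 1 the previously stable state flips, one crude way to picture "what was stable becomes unstable."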
MIT World » Machine Learning of Language from Distributional Evidence
Christopher Manning thinks linguistics went astray in the 20th century when it searched “for homogeneity in language, under the misguided assumption that only homogeneous systems can be structured.” In the face of human creativity with language, rigid categories of linguistic use just don’t help explain how people actually talk and what they choose to say. For every hard and fast rule linguists find, other linguists can find an exception. Categorical constraints rise, then come crashing down.
- quotes posted by brianddrpm on diigo
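For a sense of what learning from distributional evidence might look like in miniature, here is a sketch that estimates graded word-to-word statistics from a made-up corpus instead of stating categorical rules; Manning's actual models are of course far richer.

```python
# Estimating graded bigram statistics from a tiny invented corpus, as a
# stand-in for learning from distributional evidence.

from collections import Counter, defaultdict

corpus = "the dog barked . the dog slept . a dog barked . the cat slept .".split()

bigrams = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    bigrams[prev][word] += 1

def p_next(prev, word):
    """Relative-frequency estimate of P(word | prev)."""
    counts = bigrams[prev]
    return counts[word] / sum(counts.values()) if counts else 0.0

# Gradient preferences rather than hard rules: "dog" follows "the" more
# often than "cat" does, but neither is categorically ruled in or out.
print(p_next("the", "dog"))  # -> 0.666...
print(p_next("the", "cat"))  # -> 0.333...
```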