Showing posts with label cognitive. Show all posts

Friday, September 11, 2009

Exploring Pathways of Vision, Sight and Insight

This blog started off as a personal trek to explore new paradigms. It has succeeded in doing that for me and, hopefully, for a few others along the way. These paradigm pathways have a tendency to cross over one another and create opportunities for serendipity. Even though the rule in blogging is to focus on one idea in short narratives, my habit is to ignore that advice and attempt to link different ideas. Two important sources for new ideas are TED and MIT, for seeking pragmatic solutions and personal wisdom in redefining one's self, as it says in the header.

Recently, I came across four videos from both sources that shared a common set of themes - brain, cognition, vision - but that also led to other pathways of compassion and social change. All involved understanding, though each applied that word differently.

In this episode of TEDTalks (video), 3 ways the brain creates meaning - Tom Wujec (2009):

Information designer Tom Wujec talks through three areas of the brain that help us understand words, images, feelings, connections. In this short talk from TEDU, he asks: How can we best engage our brains to help us better understand big ideas?

Cognitive psychologists now tell us that the brain doesn't actually see the world as it is but instead creates a series of mental models through a collection of "Ah-ha moments," or moments of discovery, arrived at through various processes.

So making images meaningful has three components. The first again, is making ideas clear by visualizing them. Secondly, making them interactive. And then thirdly, making them persistent. And I believe that these three principles can be applied to solving some of the very tough problems that we face in the world today. Thanks so much.

The next two videos, from MIT World, though longer, are well worth watching.

The first continues with the exploration of cognition and vision through - Computers with Commonsense: Artificial Intelligence at the MIT Round Table

Patrick Henry Winston ponders what makes humans different from our primate cousins. His field of artificial intelligence extends that question to thinking about how humans differ from computers, with a goal to "develop a computational theory of intelligence."

"We think with our eyes…vision is the locus of every profound kind of problem solving."


Patrick Henry Winston


The next MITWorld video takes us back to the human and to the humane -

Opening the Mind’s Eye-Learning to See

"Whenever we're asked how the brain does X or Y, the impulse is to work with this beloved creature, the human infant, to see how it acquires different capabilities... But there are challenges: Babies are not interested in being experimental subjects. They'd rather sleep than give us good data."


Pawan Sinha

The reason the Pawan Sinha video goes beyond interesting to inspiring is that Sinha found these subjects in his native India, which has the world's highest number of blind children -- more than one million. They are victims of Vitamin A deficiency, congenital cataracts, and absent or atrocious medical care. But salient to Sinha's research, many of these blind children could be treated. He glimpsed a humanitarian and scientific opportunity, and Project Prakash (Sanskrit for light) was born.

It’s rare to find research that simultaneously advances basic science and brings immediate good into people’s lives, but Pawan Sinha’s Project Prakash does precisely that. An investigator of human visual processing, Sinha is interested in how these brain mechanisms develop, and in treating India's vast population of blind children.

The final video is another short one from TED and deals again with vision, but vision both from the idea of seeing and the idea of envisioning a new world. Both aspects of our understanding are necessary to bring about this new world, the understanding of our world and nature and the understanding we must show to each other.

Atomic physicist Joshua Silver invented liquid-filled optical lenses to produce low-cost, adjustable glasses, giving sight to millions without access to an optometrist. At TEDGlobal 2009, he demos his affordable eyeglasses and reveals his global plan to distribute them to a billion people in need by 2020.


Monday, June 16, 2008

A Closer Look At Fluid Intelligence

My recent post, 12 Hacks 6 Myths And Other Ways On Amping Your Brainpower, received a comment from Soak Your Head creator Erik Mork of Silver Bay Labs, who said...
I think the significance of the Martin Buschkuehl research is that actual intelligence may be increased with daily training on dual-n-back. Which kind of sets it apart from the other approaches mentioned (12 Hacks that Will Amp...). Though, it does have some mutual benefits with them (for instance, working memory is generally increased with these types of training).

My wife and I have built an open source web application that implements the dual-n-back training. If you're curious to try it, it's at: http://www.soakyourhead.com/ .

I'm not sure if keeping a blog makes you smarter, but it sure can't hurt ;) Thanks for this interesting post.

His point is definitely valid. My throwing the 12 hacks and 6 myths into the same post made for a more psych-lite article. Not that I am an expert, but I was leaning towards the "keeping busy and cognizant after they lock me out of the office at work" approach. As for the viability of keeping a blog, it may not make me more intelligent, but I am learning new things and keeping connected, so hopefully it is doing some good. I did omit a section of the Wired article, intended perhaps to provide some balance, which cited:

David Geary, a professor at the University of Missouri and author of The Origin of Mind, who was not involved with the study, said training in one test generally doesn’t generate gains on a different test.

"Transfer is tough to get," Geary said. "Training in task A doesn’t typically improve performance on task B."

But in this case, subjects trained on a complex version of the so-called "n-back task" -- a difficult visual/auditory memory test -- improved their scores on a set of IQ questions drawn from a German intelligence measure called the Bochumer Matrizen-Test. (The Bochumer Matrizen-Test is a harder version of the well-known Raven's Progressive Matrices.)

I am impressed with the site that Erik and his wife have created. I don't get the sense that either of them is trained in this scientific field, but their site is informative and seems well balanced. It also does not seem to be connected to any particular research facility, which seems unfortunate, as this appears to be a great opportunity to gather a good deal of "crowd-sourced" data -- one potential motivation for running this post.

Another motivation is that I find the concept of fluid intelligence interesting from both the biological and cognitive perspective, and it gives me a reason to feature this article from TEDBlog on the amazing intelligence of crows: Joshua Klein on TED.com.

Hacker and writer Joshua Klein is fascinated by crows. (Notice the gleam of intelligence in their little black eyes?) After a long amateur study of corvid behavior, he's come up with an elegant machine that may form a new bond between animal and human. (Recorded March 2008 in Monterey, California. Duration: 10:16.)


Friday, June 13, 2008

12 Hacks 6 Myths And Other Ways On Amping Your Brainpower

As the title and tags of this weblog would indicate, there is a strong interest in endeavoring to ensure that the personal paradigm shifts life has in store be positive.

Modern science continues to provide increasing hope that life will continue to be interesting and inspiring.

Alexis Madrigal told us back in April that we could Forget Brain Age: Researchers Develop Software That Makes You Smarter via Wired Top Stories.
Brain researchers for the first time claim to have found a method for improving the general problem-solving ability scientists call fluid intelligence, otherwise known as "smarts."
Fluid intelligence was previously thought to be genetically hard-wired, but the finding suggests that with about 25 minutes of rigorous mental training a day, healthy adults could improve their mental capacities.

"The most important point of our work is that we can show that it is possible to improve fluid intelligence," said Martin Buschkuehl, a psychology researcher based at the University of Bern, Switzerland. "It was assumed that fluid intelligence was immutable."
And that's where Buschkuehl's research, which appears today in the journal Proceedings of the National Academy of Sciences, claims to be groundbreaking.

A very simplified DIY version of the n-back described here also leads to an interesting article on looking at the brain with an MRI.
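For readers curious about the mechanics, the core n-back logic is simple enough to sketch in a few lines of Python. This is my own minimal, hypothetical sketch of a dual (position + letter) n-back round, not the Soak Your Head implementation; all function names are my own.

```python
import random

def is_match(stimuli, i, n):
    """The stimulus at index i is an n-back match if it equals the one n steps earlier."""
    return i >= n and stimuli[i] == stimuli[i - n]

def dual_n_back_round(n=2, length=20, seed=None):
    """Generate one round of dual (position, sound) stimuli and the expected answers.

    Returns the stimulus list and, for each index, a (position_match, sound_match)
    pair indicating whether each stream matched the stimulus n steps back.
    """
    rng = random.Random(seed)
    positions = [rng.randrange(9) for _ in range(length)]      # cells of a 3x3 grid
    sounds = [rng.choice("CHKLQRST") for _ in range(length)]   # spoken letters
    answers = [(is_match(positions, i, n), is_match(sounds, i, n))
               for i in range(length)]
    return list(zip(positions, sounds)), answers

def score(answers, responses):
    """Fraction of trials where both the position and the sound judgments were correct."""
    correct = sum(a == r for a, r in zip(answers, responses))
    return correct / len(answers)
```

Raising n from 2 to 3 is what makes the task "rigorous": the player must hold two parallel streams in working memory, one item deeper.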


Wired Magazine also provided other ways to Get Smarter: 12 Hacks That Will Amp Up Your Brainpower via Wired Top Stories back in April 08.

If your IQ is hardwired, how can you get smarter? Lots of ways, and our guide to better brain power shows you how. Think of it as a software upgrade to maximize your "functional intelligence."

It also helps by exposing 6 Intelligence Myths via Wired Top Stories.

We've all used the arguments to get away with playing Brain Age or doing crosswords. But how many of these "exercises" really sharpen your wits or fend off senility?

Right now I am testing out the hypothesis that keeping a weblog is a good way to exercise your brain.

Monday, February 25, 2008

Language Programming The Human Decision System

The following links are from a MIT symposium, Where Does Syntax Come From? Have We All Been Wrong?. To be honest, it is a laundry list but if one is into computers and cognitive psychology it's a pretty good one.

A number of this weblog's posts have been dealing with issues of marketing and communications, all of which involve language. The analogy of the human mind being like a computer is standard, but with concepts such as crowd-sourcing and social networking through Web 2.0 environs the extension of computer coding being analogous to language becomes all the more interesting. One area of particular interest would be human group decision making or interaction between groups with different language structures. The following MIT departments were involved in the symposium.

LIDS: Laboratory for Information and Decision Systems

An overview of the Laboratory for Information and Decision Systems:

The Laboratory for Information and Decision Systems (LIDS) is an interdepartmental research laboratory at the Massachusetts Institute of Technology. It began in 1939 as the Servomechanisms Laboratory, an offshoot of the Department of Electrical Engineering. Its early work, during World War II, focused on gunfire and guided missile control, radar, and flight trainer technology. Over the years, the scope of its research broadened.

MIT: Brain and Cognitive Sciences

MIT's Department of Brain and Cognitive Sciences stands at the nexus of neuroscience, biology and psychology. We combine these disciplines to study specific aspects of the brain and mind including: vision, movement systems, learning and memory, neural and cognitive development, language and reasoning. Working collaboratively, we apply our expertise, tools, and techniques to address and answer both fundamental and universal questions about how the brain and mind work.

MIT Computational Cognitive Science Group

We study the computational basis of human learning and inference. Through a combination of mathematical modeling, computer simulation, and behavioral experiments, we try to uncover the logic behind our everyday inductive leaps: constructing perceptual representations, separating "style" and "content" in perception, learning concepts and words, judging similarity or representativeness, inferring causal connections, noticing coincidences, predicting the future.

Other talks will be added when they become available. A bit of web-trekking resulted in discovering this site at the University of Pennsylvania.

Neuroethics Publications

'Neuroethics' is the ethics of neuroscience, analogous to the term 'bioethics' which denotes the ethics of biomedical science more generally. It encompasses a wide array of ethical issues emerging from different branches of clinical neuroscience (neurology, psychiatry, psychopharmacology) and basic neuroscience (cognitive neuroscience, affective neuroscience). These include ethical problems raised by advances in functional neuroimaging, brain implants and brain-machine interfaces and psychopharmacology as well as by our growing understanding of the neural bases of behavior, personality, consciousness, and states of spiritual transcendence. This collection brings together the work of a growing number of Penn researchers from across the academic disciplines who are contributing to the neuroethics literature.



MIT World » : Human Simulations of Language Learning

This workshop, explains Michael Coen, is an effort to engender temperate, collaborative discussion of a matter that inspires hot dispute: whether machine learning helps explain how humans acquire language. In particular, says Coen, machine learning advocates believe they have evidence against Noam Chomsky’s “poverty of stimulus argument,” which in essence states that language is built into us, that “children don’t receive enough linguistic inputs to explain linguistic outputs.”

MIT World » : Explorations in Language Learnability Using Probabilistic Grammars and Child-directed Speech

How do kids manage to figure out that the word “dog” applies to a whole category of animals, not just one creature? Joshua Tenenbaum wants to understand how children and adults manage to solve such classic problems of induction. Throughout cognition, wherever you look, he says “we see places where we know more than we have a reasonable right to know about the world, places where we come to abstractions, generalizations, models of the world that go beyond our sparse, noisy, limited experience.” Tenenbaum’s goal is to come up with “general purpose computational tools for understanding how people solve these problems so successfully.”

Past posts in my blog also dealt with the evolution of language. I am wondering how this plays out in larger organizational and social systems and in the interaction of different systems, e.g., Americans in Iraq or other situations -- especially with the web accelerating communications, even when it is not directly connected to everybody within a culture. - diigo post by brianddrpm

MIT World » : The Computational Nature of Language Learning

Niyogi believes that an "evolutionary trajectory" links how acquisition happens at an individual level, and how variation in language springs up from one generation to the next. But rather than inheriting the grammar of your parents, you have to learn it. Examining language variation over time as if it were genetic variation, "you get a different mathematical structure…and probabilities start playing an important role." Small differences "can have very subtle consequences giving rise to bifurcation in nonlinear dynamics of evolution." For instance, 1000 years ago, the English were speaking a language that's unrecognizable to us today. How has it come to be that "we have moved so far from that point through learning which is mimicking the previous generation?"

Niyogi explains that within a single population two varying languages may be in competition (say, a German and an English-type grammar). While a majority may speak the dominant variant, some children will likely be exposed to a mixture of the two. There's a "drift" in language use, "and suddenly, what was stable becomes unstable."
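Niyogi's point about stability and drift can be illustrated with a toy model (my own sketch, not Niyogi's actual formulation): suppose each child adopts whichever grammar produced the majority of the k utterances it happens to hear. The population share of grammar 1 then evolves by a simple map with an unstable equilibrium at one half -- any drift past it tips the whole population toward one grammar.

```python
from math import comb

def next_share(x, k=3):
    """Fraction of next-generation grammar-1 speakers, if each child hears k
    utterances (each from a grammar-1 speaker with probability x) and adopts
    the majority grammar. Toy model; k should be odd so there is no tie."""
    need = k // 2 + 1  # utterances required for a grammar-1 majority
    return sum(comb(k, j) * x**j * (1 - x)**(k - j) for j in range(need, k + 1))

def evolve(x0, generations=25, k=3):
    """Iterate the map. x = 1/2 is an unstable fixed point: a small drift
    away from it sends the population to all-grammar-1 or all-grammar-2."""
    x = x0
    for _ in range(generations):
        x = next_share(x, k)
    return x
```

Starting at 55% grammar-1 speakers drives the share toward 100% within a couple of dozen generations; starting at 45% drives it toward 0% -- "what was stable becomes unstable."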

MIT World » : Machine Learning of Language from Distributional Evidence

Christopher Manning thinks linguistics went astray in the 20th century when it searched "for homogeneity in language, under the misguided assumption that only homogeneous systems can be structured." In the face of human creativity with language, rigid categories of linguistic use just don't help explain how people actually talk and what they choose to say. For every hard and fast rule linguists find, other linguists can determine an exception. Categorical constraints rise, then come crashing down.

Monday, October 29, 2007

It Is Not Knowledge Unless It Reaches Beyond You

The Outsourced Brain - New York Times, by David Brooks, published October 26, 2007

Last Friday, New York Times columnist David Brooks wrote on the outsourcing, or externalizing, of human intelligence to the external digital world. One senses that his willingness to accept this state of affairs amounts to "I, for one, welcome our new alien masters."

      "Until that moment, I had thought that the magic of the information age was that it allowed us to know more, but then I realized the magic of the information age is that it allows us to know less. It provides us with external cognitive servants — silicon memory systems, collaborative online filters, consumer preference algorithms and networked knowledge. We can burden these servants and liberate ourselves."

      "I’ll be in the way Amazon links purchasing Dostoyevsky to purchasing garden furniture. And when memes are spreading, and humiliation videos are shared on Facebook — I’ll be there, too."

      "I am one with the external mind. Om."

What he failed to appreciate is that this has been going on for the last 6,500 years through the ubiquitous externalization of knowledge we call writing. The storytellers of oral societies have prodigious memories for myths, but keeping detailed and accurate track of crops for the previous seven years took writing. Even language internalized as thought is indirectly a form of externalization: spoken language is objectified internally so that it can be considered in past, present, and future terms and, more importantly, so that it can be communicated to others. It is this ability, perhaps beyond all others, that has moved the human species forward. The primary tool of mass production that defines Mr. Brooks' business, the printing press, is also a means of externalizing knowledge beyond the cloistered walls of monks with quills.

A number of articles have recently reflected on the biological and social evolutionary development of human communication. The Los Angeles Times writer Karen Kaplan looks at the genetic origin of language in "Did Neanderthals natter? The human forebears had a key language gene, researchers report," October 22, 2007 (requires free registration).

The science blog Developing Intelligence, in an October 16, 2007 post, has Chris Chatham asking "Why There Aren't Right-Handed Apes, Or: Handedness and The Evolution of Language" and digging deeper into the genetic, psycho-physiological, and social development of language.

Corballis argues the final step was a cultural rather than evolutionary invention - early homo sapiens may have learned to uncouple speech from facial gesture so that speech was communicative on its own. Corballis notes that facial gestures still assume dominance over vocalizations among modern humans (see the McGurk effect), again suggesting our cultural heritage from gestural communication.

In summary, Corballis claims that handedness emerged only after speech, which was itself lateralized due to preexisting dominance of the left-hemisphere for communicative behaviors.

First came the externalization of speech, and then the internal rewiring of the brain, lateralized toward handedness.

The Los Angeles Times ran an article, "Researchers discover that irregular verbs change in a predictable manner -- just like genes and living organisms," by Denise Gellene, Los Angeles Times Staff Writer, back on October 11, 2007.

Tracing the evolution of English verbs over 1,200 years -- from the Old English of "Beowulf" to the modern English of "The Princess Diaries" -- researchers have found that the majority of irregular verbs are going the way of Grendel, falling to the linguistic equivalent of natural selection.
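The study behind that article (Lieberman et al., Nature, 2007) put a number on the effect: an irregular verb's half-life before regularizing scales roughly with the square root of its usage frequency. A toy illustration of that scaling (the function and its units are mine, not the paper's):

```python
from math import sqrt

def relative_half_life(freq):
    """Toy version of the scaling reported by Lieberman et al. (Nature, 2007):
    an irregular verb's half-life before it regularizes grows as the square
    root of its usage frequency (frequency in arbitrary relative units)."""
    return sqrt(freq)

# Under this scaling, a verb used 100 times as often holds out
# 10 times as long before regularizing.
ratio = relative_half_life(100) / relative_half_life(1)
```

That is why everyday verbs like "to be" and "to go" stay stubbornly irregular while rare ones quietly pick up "-ed."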

Cognitive Daily's Dave Munger, tracking a similar story from the New York Times, tells us that the more we use a word, the less likely it is to change. According to the New York Times report, frequently used words evolve more slowly than rarely used ones. Munger thought that this was understandable.

Seems reasonable. In our travels across Europe, we found that "yes" and "no" were very similar in different languages -- until we got to Greece, where their word for yes was pronounced "neh," and the word for no was "ochi." But despite anomalies such as this, overall more frequently used words tend to be more similar across Indo-European languages.

Why might this be? Mr. Munger cites one author of the study, who explains,

As to how frequency of word use would affect evolution, Dr. Pagel said a possibility is that if errors are made in speaking common words, they may tend to be corrected, precisely because they are so common and so important for communication.

In other words, the fact that they were made external and subject to scrutiny in the environment determined their communicative importance, thereby ensuring their survivability and that of the systems that used them.