Neuro Nut

"The second claim of the computer metaphor is that high-level cognition, such as inference, categorization, and memory, is performed using abstract, amodal symbols that bear arbitrary relations to the perceptual states that produce them (Newell & Simon, 1972; Pylyshyn, 1984). Mental operations on these amodal representations are performed by a central processing unit that is informationally encapsulated from the input (sensory) and output (motor) subsystems (Fodor, 1983). The only function of sensory systems is to deliver detailed representations of the external world to the central unit. The only function of the motor system is to dutifully execute the central executive’s commands. […]

"…many theories continue to assume that higher-order cognition operates on amodal symbols. Noncontroversially, these theories assume that the actual experience of a current situation is initially represented in the brain’s modality-specific systems. More controversially, standard theories of cognition assume that the modality-specific states experienced during an actual situation are redescribed and preserved in an abstract, amodal, language-like form, which we will refer to as amodal symbols (Fodor, 1975). For example, on interacting with a particular individual, amodal symbols redescribe the experienced perceptions, actions, and introspections to establish a conceptual representation of the interaction in long-term memory. […]

"These abstracted concepts constitute the person’s knowledge and allow the person to engage in inference, categorization, memory, and other forms of higher cognition. Nearly all accounts of social cognition represent knowledge this way, using feature lists, semantic networks, schemata, propositions, productions, frames, statistical vectors, and so forth, to redescribe people’s perceptual, motor, and introspective states (for discussions of such models, see Kunda, 1999; Smith, 1998; Wyer & Srull, 1984). According to all such views, amodal redescriptions of social experience constitute social knowledge. 

"The amodal architecture, although widely used, has recently been criticized on several grounds. One set of problems concerns the redescription process that produces amodal symbols from modality-specific states in the first place. No direct empirical evidence exists for such a process in the brain. Indeed, surprisingly few theoretical accounts of this redescription process exist in the literature. More basically, there is no strong empirical case that the brain contains amodal symbols. In fact, arguments for amodal architectures are mostly theoretical, based on assumptions about how cognition should work, rather than on empirical evidence that it actually works this way. Further, as we discuss shortly, empirical findings increasingly challenge the basic assumptions of the amodal architecture.

"Given the lack of empirical evidence, why is the amodal architecture so widely accepted in both cognitive and social psychology? There are a number of important reasons. First, representations that employ amodal symbols, such as semantic networks, feature lists, schemata, and propositions, provide powerful ways of expressing the content of knowledge across various domains of knowledge, from perceptual images to abstract concepts. Second, amodal symbols provide a simple way to account for important functions of knowledge, such as categorization, categorical inference, memory, comprehension, language, and thought (e.g., Anderson, 1983; Chomsky, 1959; Newell, 1990; Newell & Simon, 1972). Third, amodal symbols have allowed computers to implement knowledge. Because frames, semantic networks, and property lists have many similarities to programming languages, these representations can be implemented easily on computers, not only for theoretical purposes, but also for applications (e.g., intelligent systems in industry, education, and medicine). Fourth, until recently there were no compelling alternatives that could account for the representation and function of knowledge. For all these (good) reasons, amodal approaches have dominated theories of representation for decades, even though little positive empirical evidence has accrued in their favor. Indeed, the theoretical virtues of amodal approaches have been so compelling that it has not occurred to most researchers that seeking empirical support might be necessary. Instead, researchers typically assume that the amodal architecture is roughly correct and then go on from there to pursue their specific questions. […]

"The main idea underlying all theories of embodied cognition is that cognitive representations and operations are fundamentally grounded in their physical context. Rather than relying solely on amodal abstractions that exist independently of their physical instantiation, cognition relies heavily on the brain’s modality-specific systems and on actual bodily states. One intuitive example is that empathy, or understanding of another person’s emotional state, comes from mentally “re-creating” this person’s feelings in ourselves. The claim made by modern embodiment theories is that all cognition, including high-level conceptual processes, relies heavily on such grounding in either the modalities or the body (Wilson, 2002). This claim is significant given that embodiment theories have traditionally been viewed as having little to say about higher cognitive functions, not just empathy, but also abstract concepts, categorical inference, and the ability to combine internal symbols in novel, productive ways. As we will see, theories of embodied cognition are increasingly able to explain how such phenomena can be based in modality-specific systems and bodily states. […]

"The PSS account takes as its starting point the theory of convergence zones (CZ) proposed by Damasio and his colleagues (Damasio, 1989; see Simmons & Barsalou, 2003, for an elaborated account). CZ theory assumes that the perception of an object activates relevant feature detectors in the brain’s modality-specific systems. The populations of neurons that code featural information in a particular modality are organized in hierarchical and distributed systems of feature maps (Palmer, 1999; Zeki, 1993). When a stimulus is perceived on a given modality, populations of neurons in relevant maps code the stimulus’ features on that modality in a hierarchical manner. For example, visual processing of a happy face activates feature detectors that respond to the color, orientation, and planar surfaces of the face. Whereas feature detectors early in the processing stream code detailed perspective-based properties of the face, higher-order detectors code its more abstract and invariant properties. The pattern of activation across relevant feature maps represents the face in visual processing.

"Analogously, CZ theory assumes that systems of feature maps reside in the other sensory–motor modalities and in the limbic system for emotion. All these maps operate in parallel, so that while a face is being represented in visual feature maps, sounds produced by the face are being coded in auditory feature maps, affective responses to the face are being coded in limbic feature maps, bodily responses to it are being coded in motor feature maps, and so forth. 

"CZ theory further proposes that conjunctive neurons in the brain’s association areas capture and store the patterns of activation in feature maps for later representational purposes in language, memory, and thought. Damasio (1989) referred to these association areas as convergence zones. Like feature maps, CZs are organized hierarchically such that the CZs located in a particular modality-specific system (e.g., vision) capture patterns of activation in that modality. In turn, higher-level CZs conjoin patterns of activation across modalities. What this means is that when we hear a sound (e.g., a fire cracker exploding), conjunctive neurons in auditory CZs capture the pattern of activation in auditory feature maps. Other conjunctive neurons in motor CZs capture the pattern of activation caused by jumping away from the location of the sudden sound. And at a higher level of associative processing, conjunctive neurons in cross-modal CZs conjoin the two sets of modality-specific conjunctive neurons for the combined processing of sound and movement. 

"It is worth highlighting how the CZ architecture differs from traditional ways of conceptualizing knowledge acquisition and use. First, during knowledge acquisition (perception and learning), all relevant processing regions participate in knowledge representation—there is no single “final” region where all experience is abstracted and integrated together. Higher-level CZs capture only conjunctions of lower-level zones (so that CZs can later coordinate their feature-level reactivation)—they do not constitute some form of “grand” representation that independently represents all lower levels of the representational hierarchy. Second, during knowledge use (e.g., conceptual processing and recall), the cognizer activates the multiple modality-specific regions that encoded the experience, rather than, as traditionally assumed, only the “final” abstract regions at the end of the processing streams. […]

"What is important about the CZ architecture is the idea that conjunctive neurons can later reactivate the states of processing in each modality and across modalities, without any input from the original stimulus. This mechanism provides a powerful way to implement offline embodiment. The modality-specific processing that occurred in reaction to a previously encountered stimulus can be reenacted without the original stimulus being present. For example, when retrieving the memory of a person’s face, conjunctive neurons partially reactivate the visual states active while perceiving it. Similarly, when retrieving an action, conjunctive neurons partially activate the motor states that produced it. Indeed, this reentrant mechanism is now widely viewed as underlying mental imagery in working memory (e.g., Farah, 2000; Grezes & Decéty, 2001; Kosslyn, 1994). […]

"Simulators. A sizable literature on concepts has demonstrated that categories possess statistically correlated features (e.g., Chin & Ross, 2002; Rosch & Mervis, 1975). Thus, when different instances of the same category are encountered over time and space, they activate similar neural patterns in feature maps (cf., Cree & McRae, 2003; Farah & McClelland, 1991). One result of this repeated firing of similar neural patterns is that similar populations of conjunctive neurons in CZs respond to these regular patterns (Damasio, 1989; Simmons & Barsalou, 2003). Similar to the notion of abstraction, over time, these groups of conjunctive neurons integrate modality-specific features of specific categories across their instances and across the situations in which they are encountered. This repetition establishes a multimodal representation of the category: a concept. 

"PSS refers to these multimodal representations of categories as simulators (Barsalou, 1999, 2003a). A simulator integrates the modality-specific content of a category across instances and provides the ability to identify items encountered subsequently as instances of the same category. Consider a simulator for the social category, politician. Following exposure to different politicians, visual information about how typical politicians look (i.e., based on their typical age, sex, and role constraints on their dress and their facial expressions) becomes integrated in the simulator, along with auditory information for how they typically sound when they talk (or scream or grovel), motor programs for interacting with them, typical emotional responses induced in interactions or exposures to them, and so forth. The consequence is a system distributed throughout the brain’s feature and association areas that essentially represents knowledge of the social category, politician. 

"According to PSS, a simulator develops for any aspect of experience attended to repeatedly. Because attention is highly flexible, it can focus on diverse components of experience, including objects (e.g., chairs), properties (e.g., red), people (e.g., politicians), mental states (e.g., disgust), motivational states (e.g., hunger), actions (e.g., walking), events (e.g., dinners), settings (e.g., restaurants), relations (e.g., above), and so forth. Across development, a huge number of simulators develop in long-term memory, each drawing on the relevant set of feature and association areas needed to represent it. Once this system is in place, it can be used to simulate those aspects of experience for which simulators exist. Furthermore, as discussed later, simulators can combine to construct complex representations that are componential, relational, and hierarchical. Thus PSS is not a theory of holistic images. In contrast to how theories like PSS are often mistakenly viewed, photo-like images of external scenes do not underlie knowledge. Instead, componential bodies of accumulated information about the modality-specific components of experience underlie knowledge, where these components can represent either the external environment or the internal states of the agent. […]

"Simulations. The use of simulators in conceptual processing is called simulation. A given simulator can produce an infinite number of simulations, namely, specific representations of the category that the simulator represents. On a given occasion, a subset of the modality-specific knowledge in the simulator becomes active to represent the category, with this subset varying widely across simulations. For example, a simulator that represents the social category, my significant other, might be used to simulate love making with a significant other on one occasion, to simulate fights on another, to simulate quiet togetherness on another, and so forth. A simulation can be viewed as the reverse process of storing modality-specific information in a simulator. Whereas learning involves feature map information becoming linked together by conjunctive units in CZs, simulation involves later using these conjunctive units to trigger feature map information. Thus, a simulation, too, is a distributed representation."
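If you like thinking in code, the CZ story quoted above — conjunctive units that capture co-active feature-map patterns during learning, then replay them later without the stimulus — can be sketched as a toy pattern-completion model. To be clear, this is my own illustrative sketch, not anything from the paper: the modality names, feature sets, and overlap-based retrieval rule are all assumptions made up for the example.

```python
# Toy sketch of convergence-zone storage and reactivation (illustrative only).
# Each "feature map" is just a set of active features per modality; a
# conjunctive trace stores the cross-modal pattern and can later reactivate
# every modality from a single-modality cue (offline simulation).

class ConvergenceZone:
    def __init__(self):
        # each trace: dict mapping modality -> frozenset of active features
        self.traces = []

    def encode(self, experience):
        """Capture the co-active pattern across modality-specific feature maps."""
        self.traces.append({m: frozenset(f) for m, f in experience.items()})

    def simulate(self, cue_modality, cue_features):
        """Reactivate the best-matching multimodal pattern from a partial,
        single-modality cue -- no original stimulus required."""
        cue = set(cue_features)
        return max(
            self.traces,
            key=lambda t: len(cue & t.get(cue_modality, frozenset())),
            default=None,
        )

cz = ConvergenceZone()
cz.encode({
    "auditory": {"loud", "bang"},          # firecracker sound
    "motor":    {"startle", "jump_away"},  # bodily reaction to it
})
cz.encode({
    "visual": {"face", "smile"},
    "limbic": {"positive_affect"},
})

# Hearing the bang alone reactivates the whole multimodal trace,
# including the motor pattern that accompanied it.
memory = cz.simulate("auditory", {"bang"})
print(sorted(memory["motor"]))
```

The point of the sketch is the asymmetry the paper stresses: the conjunctive trace stores only the conjunction, not a "grand" redescription, and retrieval is reactivation of the original modality-specific patterns rather than lookup of an amodal symbol.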

There’s more, and it’s good stuff. Go read.

Alpha BRAIN by Onnit Labs | 30 Days Later (Review) (by ThePromisedWLAN)

90 caps for about $55

(via Buy Mucuna Pruriens Extract | Mucuna Pruriens Side Effects | HSW)

a comprehensive article on Choline, what it does and what the best source is.

l-theanine in combo with caffeine

l-theanine, 200-300 mg, in concert with caffeine. Lots of info out there: the combination triggers a stress-reduction mechanism in the brain, dosing properly is important, and so is getting a good brand. Plenty of valid studies confirming the effect turn up on Google. Caffeinated green tea has been delivering this effect to its drinkers all along.

Some people dose 200-250 mg theanine to 100-125 mg caffeine.
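Those numbers work out to roughly a 2:1 theanine-to-caffeine ratio. A throwaway sketch, using that community rule of thumb only (not medical advice, and the function name is mine):

```python
# Rough 2:1 theanine:caffeine rule of thumb from the dosing above.
def theanine_for(caffeine_mg, ratio=2.0):
    """Suggested L-theanine dose (mg) for a given caffeine dose (mg)."""
    return caffeine_mg * ratio

print(theanine_for(100))  # 100 mg caffeine -> 200.0 mg theanine
print(theanine_for(125))  # 125 mg caffeine -> 250.0 mg theanine
```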

"get the caffeine from hard rhino and the theanine from liftmode"


(via Smart Guide to 2012: Mapping the human brain - health - 23 December 2011 - New Scientist)
(via 25 Fascinating Brain Books Anyone Can Enjoy - Online College Courses | Online College Courses)