This post is a roughly transcribed version of my understanding of Joëlle Proust’s paper “Metacognition” (2010) in Philosophy Compass. Unlike all my other posts, this post is intended not just for me but also as a complementary handout for a presentation in Professor Ophelia Deroy’s class, Knowing That We Know, in the summer semester of 2020 at Ludwig-Maximilians-Universität München (University of Munich). If it is helpful for anyone else, that would of course make me happy 🙂

Metacognition

Joëlle Proust (Institut Jean-Nicod, Paris)

This paper aims to discuss the question “How should we understand the function and scope of metacognition?”.

Structure:

  1. Presents the self-ascriptive view.
  2. Refutes the self-ascriptive view.
  3. Presents & argues for the self-evaluative view.

The self-evaluative view of metacognition holds that:

  • the function of metacognition: evaluation of one’s mental dispositions and attitudes in a context-sensitive way. A normative appreciation for correctness is a necessary part of mental actions.
  • the scope of metacognition: does she explicitly address this? I guess she expands it to include not only propositional content of the mind but also feelings.

Preliminaries:

metacognition: “the kinds of processes involved, and the self-knowledge gained, in thinking about, and in controlling, one’s own thinking”

topics that bear on our understanding of metacognition:

  • the architecture of the mind
  • the nature of self-knowledge
  • epistemology (lol small issue)

Aporiai:

  1. “Does metacognition necessarily require representing one’s own mental states as mental states”, i.e. does it “necessarily involve an ability to read one’s own mind?”
  2. “Does it instead merely require the ability to control and monitor one’s cognitive processes”?

The Self-Attributive View (SAV)

SAV has a semantic criterion: to qualify as engaging in metacognition, one must self-ascribe first-order mental contents, i.e. one must acquire/possess second-order propositional attitudes and ‘think about one’s thinking’.

“I [insert cognitive task 1] that I [insert cognitive task 2].”

(cog task 1 & cog task 2 can be the same cog task of course)

How can one performing cog task 1 differentiate between, for example, perception and imagination when attributing cog task 2 to themselves?

[SAV Hypothesis:] “a specialized representational device takes an occurrent thought content P as input, and produces the representation ‘I believe’ (or ‘perceive’, etc.) that P as output”. The second-order self-ascriptive thought makes use of first-order content.

What a convoluted mix of words. What the hell does this, in particular the ‘representational device’, mean? What a surprise! No one agrees. There are 3 main sub-hypotheses:

1: The self-ascription happens by a general capacity. [This is similar to Aristotle’s view.] The idea goes, this general capacity was evolutionarily necessary for social cognition so everyone has it (look at kids developing it! look, it’s necessary for the development of episodic memory!) and this general capacity being a thing makes sense with innate domain-specific theories of mind (e.g. ToMM).

2: Well, the capacity of self-ascription is a necessary condition for the capacity of other-ascription. One performs simulations of others’ minds for the sake of decision-making in social situations, i.e. people project onto others (and they think projection is a mental state, I agree). This sounds grossly similar to 1, but this theory (e.g. Goldman 2006) [is thankfully slightly less painful to read because it] rejects the folk psychology that 1 is into, and also has a bigger domain for metacognition than 1, i.e. mental states include “non-attitudinal states such as proprioceptive [which basically means physical/spatial] states and emotions”. [This is just a slightly less pop-evo-psych-y version of 1.]

3: The ‘executive theory’ of mindreading, our third hypothesis, connects mindreading with a procedural view of metacognition. Mental ascription for self and for others both depend on the ability to maintain a superordinate rule in ‘working memory’.

Why do children (and so many adults too, clearly haha) fail to attribute false/potential beliefs to others and fail to recognize their own past beliefs? Apparently because they can’t hold back what they think/believe/feel, not because they lack concepts. Basically, some subset of people (mainly kids) are so overwhelmed with everything that they can’t take a step back.

Yeah sure, we engage in metarepresentations, but a metarepresentation = a super-ordinate rule that requires inhibition of a prepotent response. It’s top-down in the sense that higher-order cognition controls the first-order mental processes. They also think counterfactual reasoning is super important [which I think undermines their point that children behave the way they do due to a lack of concepts… maybe it doesn’t, but what five-year-old understands counterfactuals yet can’t inhibit their first-order thoughts and feelings? Lack of concepts seems to me to coincide with their inability to inhibit, and both contribute to what we observe].

Despite their differences, SAV positions all agree that self-ascription causally contributes to [maybe even is responsible for?] controlling one’s thoughts. This helps one resist interference from the environment and stick to one’s goals [which could be vaguely important to some people, I guess].

Despite their differences, SAV positions also generally agree that introspection has privileged access to one’s own mental states & can “in principle, be used to interpret, predict, or explain others’ attitudes and mental actions”. (e.g. Task: memorize pairs of unrelated words!)

Proponents of this view of metacognition include but are not limited to:

Flavell (1979), Tim Shallice (1988), Alan Leslie (1987), Dan Dennett (1991), Josef Perner (1991), Alison Gopnik (1993), Peter Carruthers (2008, 2009)

Why SAV Sucks

Proust argues that the view that “representing one’s mental contents is a necessary condition for controlling one’s thoughts and intentions”, which she thinks underlies all of SAV, sucks.

[She thinks: obviously,] it’s not enough for self-knowledge to have descriptions of one’s own psychological states. You also need some normative awareness (e.g. a commitment to the rationality of one’s intentional actions)…

Contra the semantic condition: how does it account for the evaluative function of metacognition? SAV is descriptive and has no good account of self-knowledge.

Vs. self-evaluation, which is a “matter of deciding how to mentally act here and now, based on an evaluation of one’s current mental dispositions, given one’s commitment to various epistemic requirements… is practical and normative.”

Contra the idea that metacognition as self-ascription requires metarepresentation: that fails to explain animals. Metacognition as a self-evaluation process, on the other hand, does not require metarepresentation.

The evolutionary or social hypotheses of SAV don’t explain other animals’ abilities to discriminate two visual stimuli or to make a prospective judgment of memory in a serial probe recognition task. They can even opt out of perceptual/memory tasks when they feel unable to do them. [Soooo cooooool!] So metacognition cannot be equated with the ability to read minds, which is apparently uniquely human. SAV conflates the two.

Contra the idea that self-ascription is necessary (or even sufficient) as a causal explanation for one’s control of mental states: that again fails to explain animals.

Proust thinks self-ascription is neither necessary nor sufficient for control of one’s mental states. Why not necessary? Because some animal systems unable to metarepresent their own mental states are clearly engaging in at least some of this control. Why not sufficient? Because a system that does correctly metarepresent its own states “isn’t ipso facto [which just means ‘by that very fact’ by the way] able to use this metarepresentational information to make appropriate self-evaluations and/or control decisions”. Proust cites some empirical data.

Contra the idea that the same informational processes underlie both self-ascription and other-ascription: wrong, it’s way more complicated than that. [I don’t think I really understood this paragraph sufficiently. What I think it says:] According to SAV, the same processes apply to self-ascription and other-ascription, so if one can self-ascribe, it follows that one can other-ascribe. But this must be wrong. Why? Because under SAV, self-ascription is also causally responsible for controlling one’s learning. But this isn’t right; these come apart. So there must be some other underlying cause (she says ‘source’) guiding one’s learning. Basically, there’s something missing.

The Self-Evaluative View (SEV)

Okay, so then what is the alternative that Proust suggests?

Fundamentally:

metacognition essentially consists in a capacity for cognitive self-evaluation.

Why do we even engage in metacognition? To adjust our cognitive goals and effortful activity to our cognitive resources. [From what I understand,] there’s a command level seeking to control our cognitive resources to achieve our goals and an observation level providing feedback.

How does this work? We can turn to the model of the relation between physical and mental actions and the awareness involved there, [I think] she thinks. Having control of one’s movements means that the agent authoritatively knows what they want to do. The same is true for mental actions.

Metacognition is, however, Proust thinks, different from the control of physical action. In what way? The two are results of different selective pressures, i.e. non-human species, too, can anticipate physical states but can’t do the same for mental states. They also answer to different sets of norms, though some overlap.

Overlap: both physical and mental actions aim at changing the world by using means-to-end relations

Non-overlap: in constitutive norms, i.e. mental actions involve epistemic requirements that are not part of bodily action… like, we want to have a correct outcome.

So SAV was at least easy to understand in its formulation of “I think that I think this”… what’s the representational medium for these normative appreciations that SEV cares to include?

Proust turns to Hookway (2003): affective states, e.g. sentiments and emotional responses such as feelings of doubt or conviction, have an essential role in our practice of epistemic evaluation [haha so Aristotelian-ethical theory-like]. Basically, things don’t have to be articulated in words for the evaluation to be happening[, if I got this right].

There are options to explain how affective states relate to epistemic decisions:

  • If you grant that emotional states are propositional (De Sousa 1987), easy.
  • If you think emotions motivate a judgment of knowing, integrating various other informational sources, then this should contribute to epistemic decisions.
  • If you break down the affective process into a multi-step thing, you can make the conscious feeling one has more substantive. I.e. step 1: activation of an unconscious heuristic; step 2: use of this feeling as a source of information for the purpose of a conscious epistemic decision (Koriat & Levy-Sadot 1999).

Proust thinks these ideas are compatible with animals and also “accommodate the human capacity to overrule the motivations to act that feelings provide, [apparently,] when it is rational to do so: for example, when these feelings have proven unreliable in given conditions”.

Alright, finally, Proust wants to point out that the conditions for self-ascription aren’t enough for other-ascription. Additional conditions must be met. [I’m not clear on what she thinks these additional conditions are.]

[But anyway:] SEV has to explain how the propositional metarepresentations that we clearly do experience work with this emotion-based metacognitive activity that it suggests also plays a huge role. She turns to ‘Dual-Process’ theories of reasoning, which everyone should be familiar with from Kahneman’s Thinking Fast and Slow: the usual/famous System 1 & 2 thing. Proust suggests metacognition could be similarly structured.

Proust thinks SEV avoids all the problems of SAV previously discussed.

Categories: Summaries