Quantifying "consciousness" in integrated information theory (IIT 3.0): how Φ is calculated

Preface

Past articles

Introduction of library for dealing with integrated information theory and simple usage example

Analysis of XOR network

Integrated information theory (IIT) calculates and quantifies the degree of "consciousness" of a network as a single value, Φ.

In this article we will look at how Φ is calculated, referring to the official documentation of PyPhi, a library for integrated information theory.

The calculation is fairly involved, so I simplify and omit things in places. If you want to read the original material, see:

Official Document

Calculating φ

See the material above; it also covers what I wrote in the two previous articles.

I am only making a rough translation with limited knowledge, so I would be grateful if experts could point out any mistakes.

If you want to read this properly, I think you should do so while comparing it with the official slide material, Calculating φ.

Network and TPM (transition probability matrix)

(Figure: phigraph.png — the three-element network of OR, AND, and XOR gates)

Consider a network composed of three elements: A (an OR gate), B (an AND gate), and C (an XOR gate).

Each of A, B, and C is either ON or OFF at time $ t $ and sends its value to the elements it is connected to; those inputs determine each element's state at time $ t + 1 $.

Since the elements are an OR, an AND, and an XOR, we can build the following table, with the state at time $ t $ along the rows and the state at time $ t + 1 $ along the columns.

| $t$ (A B C) \ $t+1$ (A B C) | 000 | 100 | 010 | 110 | 001 | 101 | 011 | 111 |
|---|---|---|---|---|---|---|---|---|
| 000 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
| 100 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 |
| 010 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 110 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| 001 | 0 | 1 | 0 | 0 | 0 | 0 | 0 | 0 |
| 101 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 1 |
| 011 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 |
| 111 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |

In this table, for example, if $ t $ is $ (0, 1, 1) $, the probability of becoming $ (1, 0, 1) $ at $ t + 1 $ is $ 1 $.

This table is called the TPM (transition probability matrix).
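
As a quick check, here is a minimal sketch in plain NumPy (my own illustration, not code from PyPhi) that builds this TPM directly from the three gate rules. The row and column order, with A changing fastest, matches the table above.

```python
import numpy as np

# Build the state-by-state TPM of the OR/AND/XOR network.
# A receives (B, C) through OR, B receives (A, C) through AND,
# C receives (A, B) through XOR. State order: A varies fastest.
states = [(a, b, c) for c in (0, 1) for b in (0, 1) for a in (0, 1)]
tpm = np.zeros((8, 8))

for i, (a, b, c) in enumerate(states):
    next_state = (b | c, a & c, a ^ b)      # deterministic update of (A, B, C)
    tpm[i, states.index(next_state)] = 1.0  # probability 1 for that column

# For example, from (0, 1, 1) the network goes to (1, 0, 1) with probability 1.
print(tpm[states.index((0, 1, 1)), states.index((1, 0, 1))])  # -> 1.0
```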

Effect repertoires?

Let's consider the various subsets of the three elements A, B, and C, such as "only A", "only A and B", or "only B and C".

As a premise for the calculations from here on, suppose the current state is fixed at $ (A, B, C) = (1, 0, 0) $.

First, let's examine "$ (B, C) $ at $ t + 1 $ given the state of $ (A, B, C) $ at $ t $".

By marginalizing out A at $ t + 1 $ in the previous table, we can write the following table.

| $t$ (A B C) \ $t+1$ (B C) | 00 | 10 | 01 | 11 |
|---|---|---|---|---|
| 000 | 1 | 0 | 0 | 0 |
| 100 | 0 | 0 | 1 | 0 |
| 010 | 0 | 0 | 1 | 0 |
| 110 | 1 | 0 | 0 | 0 |
| 001 | 1 | 0 | 0 | 0 |
| 101 | 0 | 0 | 0 | 1 |
| 011 | 0 | 0 | 1 | 0 |
| 111 | 0 | 1 | 0 | 0 |

In particular, when $ (A, B, C) = (1, 0, 0) $, the row becomes

| $t+1$ (B C) | 00 | 10 | 01 | 11 |
|---|---|---|---|---|
| probability | 0 | 0 | 1 | 0 |

You can see that the probability of $ (B, C) = (0, 1) $ is 1.

Similarly, if we look at "$ (B, C) $ at $ t + 1 $ given the state of $ (C) $ at $ t $",

| $t$ (C) \ $t+1$ (B C) | 00 | 10 | 01 | 11 |
|---|---|---|---|---|
| 0 | 1/2 | 0 | 1/2 | 0 |
| 1 | 1/4 | 1/4 | 1/4 | 1/4 |

It will look like this.

In particular, at the currently fixed state $ (1, 0, 0) $, where $ C = 0 $, we get the table

| $t+1$ (B C) | 00 | 10 | 01 | 11 |
|---|---|---|---|---|
| probability | 1/2 | 0 | 1/2 | 0 |

Let's also create the table of "$ (A) $ at $ t + 1 $ given the state $ (\varnothing) $ at $ t $ (i.e., conditioned on nothing)".

| $t+1$ (A) | 0 | 1 |
|---|---|---|
| probability | 1/4 | 3/4 |

Tables created in this way are called effect repertoires.

The overall effect repertoire can also be calculated by multiplying the tables we just created: the $ (B, C) $ table times the $ (A) $ table gives

| $t+1$ (A B C) | 000 | 100 | 010 | 110 | 001 | 101 | 011 | 111 |
|---|---|---|---|---|---|---|---|---|
| probability | 1/8 | 3/8 | 0 | 0 | 1/8 | 3/8 | 0 | 0 |
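
If I am reading the PyPhi API correctly, distributions of this kind can also be requested directly with Subsystem.effect_repertoire(mechanism, purview). Below is a minimal sketch assuming that reading (node indices 0 = A, 1 = B, 2 = C; the TPM is the state-by-node form that also appears in the code at the end of this article). The array PyPhi returns may be shaped differently from the tables above, but the probabilities should correspond.

```python
import numpy as np
import pyphi

# Sketch: ask PyPhi for the effect repertoire of mechanism C over purview (B, C),
# i.e. the distribution over (B, C) at t+1 given C's current state.
tpm = np.array([
    [0, 0, 0], [0, 0, 1], [1, 0, 1], [1, 0, 0],
    [1, 0, 0], [1, 1, 1], [1, 0, 1], [1, 1, 0],
])
network = pyphi.Network(tpm, node_labels=('A', 'B', 'C'))
subsystem = pyphi.Subsystem(network, (1, 0, 0))  # current state (A, B, C) = (1, 0, 0)

print(subsystem.effect_repertoire((2,), (1, 2)))  # mechanism (C,), purview (B, C)
```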

Cause repertoire?

Earlier we saw the effect repertoire, which captures the effect of the state at $ t $ on the state at $ t + 1 $.

Next, let's look at what the state at $ t $ tells us about the state at $ t - 1 $.

Let's take a look at "$ (B, C) $ at $ t - 1 $ given the state of $ (C) $ at $ t $".

| $t-1$ (B C) \ $t$ (C) | 0 | 1 |
|---|---|---|
| 00 | 1/2 | 1/2 |
| 10 | 1/2 | 1/2 |
| 01 | 1/2 | 1/2 |
| 11 | 1/2 | 1/2 |

In particular, in the currently fixed state $ (1, 0, 0) $, where $ C = 0 $, we can find

| $t-1$ (B C) | probability |
|---|---|
| 00 | 1/4 |
| 10 | 1/4 |
| 01 | 1/4 |
| 11 | 1/4 |

(The table is normalized so that the total is 1.)

This means that, for $ C $ to be 0 at $ t $, $ (B, C) $ at $ t - 1 $ could equally well have been in any of the four states.

Let's also find "$ (A) $ at $ t - 1 $ given the state $ (\varnothing) $ at $ t $ (conditioned on nothing)". It comes out as follows.

| $t-1$ (A) | probability |
|---|---|
| 0 | 1/2 |
| 1 | 1/2 |

As before, let's multiply the two tables.

The product is

| $t-1$ (A B C) | probability |
|---|---|
| 000 | 1/8 |
| 100 | 1/8 |
| 010 | 1/8 |
| 110 | 1/8 |
| 001 | 1/8 |
| 101 | 1/8 |
| 011 | 1/8 |
| 111 | 1/8 |

This is the result. Tables like these, looking backward in time, are called cause repertoires.
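
Cause repertoires can be requested the same way, again assuming my reading of the PyPhi API is right. Continuing with the network and subsystem from the sketch above:

```python
# Cause repertoire of mechanism C over the purview (A, B, C):
# the distribution over (A, B, C) at t-1 given C = 0 now.
print(subsystem.cause_repertoire((2,), (0, 1, 2)))  # expect 1/8 for every past state
```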

Irreducibility?

As before, let's take a look at the effect repertoire over $ (A, B, C) $ for the state of $ (A, C) $.

| $t$ (A C) \ $t+1$ (A B C) | 000 | 100 | 010 | 110 | 001 | 101 | 011 | 111 |
|---|---|---|---|---|---|---|---|---|
| 10 | 1/4 | 1/4 | 0 | 0 | 1/4 | 1/4 | 0 | 0 |

This repertoire is written as

$$
{AC\over ABC}
$$

Let's try to decompose it into the form

$$
{AC\over AB} \times {\varnothing \over C}
$$

First, let's look at $ (A, B) $ for $ (A, C) $.

| $t$ (A C) \ $t+1$ (A B) | 00 | 10 | 01 | 11 |
|---|---|---|---|---|
| 10 | 1/2 | 1/2 | 0 | 0 |

In addition, take a look at $ (C) $ for $ \varnothing $ (nothing specified).

| $t+1$ (C) | 0 | 1 |
|---|---|---|
| probability | 1/2 | 1/2 |

The product of these two tables, "$ (A, B) $ for $ (A, C) $" and "$ (C) $ for $ \varnothing $", matches the "$ (A, B, C) $ for $ (A, C) $" table above.

In other words, $ {AC \over ABC} $ and $ {AC \over AB} \times {\varnothing \over C} $ match.

There are many other ways to take such a split. Let's list them all below.

$ {\varnothing \over A}\times {AC\over BC} $, $ {\varnothing \over B}\times {AC\over AC} $, $ {\varnothing \over C}\times {AC\over AB} $,
$ {\varnothing \over AB}\times {AC\over C} $, $ {\varnothing \over BC}\times {AC\over A} $, $ {\varnothing \over AC}\times {AC\over B} $,
$ {\varnothing \over ABC}\times {AC\over \varnothing} $, $ {A \over \varnothing}\times {C\over ABC} $, $ {A \over A}\times {C\over BC} $,
$ {A \over B}\times {C\over AC} $, $ {A \over AB}\times {C\over C} $, $ {A \over C}\times {C\over AB} $,
$ {A \over BC}\times {C\over A} $, $ {A \over ABC}\times {C\over \varnothing} $

Compute the repertoire for **every** one of these splits and compare it with the original $ {AC \over ABC} $.

The split whose repertoire deviates least from the original is the MIP (minimum information partition), and that minimum deviation is the value $ φ $ (lowercase, "small phi"). In this case, the split $ {AC \over AB} \times {\varnothing \over C} $ gives $ φ = 0 $, so it is the MIP.

Maximally-irreducible cause-effect repertoire

This time, let's consider

$$
{ABC\over ABC}
$$

and calculate $ φ $ of the effect repertoire for its various purviews.

- The MIP for $ {ABC \over A} $ is $ {A \over \varnothing} \times {BC \over A} $, with $ φ = 0 $.
- The MIP for $ {ABC \over B} $ is $ {A \over \varnothing} \times {BC \over B} $, with $ φ = 0 $.
- The MIP for $ {ABC \over C} $ is $ {C \over \varnothing} \times {AB \over C} $, with $ φ = 0 $.
- The MIP for $ {ABC \over AB} $ is $ {A \over \varnothing} \times {BC \over AB} $, with $ φ = 0.5 $.
- The MIP for $ {ABC \over AC} $ is $ {\varnothing \over C} \times {ABC \over A} $, with $ φ = 0 $.
- The MIP for $ {ABC \over BC} $ is $ {AB \over C} \times {C \over B} $, with $ φ = 0.25 $.

Of these, $ {ABC \over AB} $ has the highest value, which is $ φ_\mathrm{effect}^{\mathrm{max}} $. This is called the maximally-irreducible effect repertoire.

Doing the same calculation for the cause repertoires, we can also find $ φ_\mathrm{cause}^{\mathrm{max}} $.

The smaller of $ φ_\mathrm{cause}^{\mathrm{max}} $ and $ φ_\mathrm{effect}^{\mathrm{max}} $ is adopted as the $ φ $ of the mechanism $ ABC $.

Let's calculate this for the mechanisms $ A, B, C, AB, BC, ABC $.

| | A | B | C | AB | BC | ABC |
|---|---|---|---|---|---|---|
| $ φ_\mathrm{cause}^{\mathrm{max}} $ | 0.17 | 0.17 | 0.50 | 0.25 | 0.33 | 0.50 |
| $ φ_\mathrm{effect}^{\mathrm{max}} $ | 0.25 | 0.25 | 0.25 | 0.50 | 0.50 | 0.50 |
| $ φ $ | 0.17 | 0.17 | 0.25 | 0.25 | 0.33 | 0.50 |
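
These per-mechanism φ values can also be pulled out of PyPhi, assuming I am reading the API correctly: Subsystem.concept(mechanism) should return the maximally-irreducible cause-effect repertoire of a mechanism, and its phi attribute should match the last row of the table. A sketch, continuing with the subsystem from the earlier code:

```python
# Node indices: 0 = A, 1 = B, 2 = C.
mechanisms = [(0,), (1,), (2,), (0, 1), (1, 2), (0, 1, 2)]  # A, B, C, AB, BC, ABC

for mechanism in mechanisms:
    concept = subsystem.concept(mechanism)   # maximally-irreducible cause-effect repertoire
    print(mechanism, round(concept.phi, 2))  # small phi of the mechanism
```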

System Cuts

So far we have calculated how irreducible each substructure is.

But what about the system as a whole?

Here, let's introduce the concept of a cut.

Let's cut the "exit" paths, the connections going out of "element A" (the OR gate).

(Figure: 1.png — the network with the connections going out of A cut)

Now the value of $ A $ has no effect on the other elements. Note that A's value is treated not as "0" but as "noise". Also note that the connections coming into A from B and C are still alive.

First, let's take a look at the TPM from $ (A, B, C) $ to $ (A) $. This is the ordinary OR gate, as before.

| $t$ (A B C) \ $t+1$ A (OR) | 0 | 1 |
|---|---|---|
| 000 | 1 | 0 |
| 100 | 1 | 0 |
| 010 | 0 | 1 |
| 110 | 0 | 1 |
| 001 | 0 | 1 |
| 101 | 0 | 1 |
| 011 | 0 | 1 |
| 111 | 0 | 1 |

Next, let's look at the TPM from $ (A, B, C) $ to $ (B) $. B is an AND gate, but since the value of A reaching it is now "noise", B sees a random value in place of A.

| $t$ (A B C) \ $t+1$ B (AND) | 0 | 1 |
|---|---|---|
| 000 | 1 | 0 |
| 100 | 1 | 0 |
| 010 | 1 | 0 |
| 110 | 1 | 0 |
| 001 | 1/2 | 1/2 |
| 101 | 1/2 | 1/2 |
| 011 | 1/2 | 1/2 |
| 111 | 1/2 | 1/2 |

Because the connection is cut, B cannot tell whether $ (A, C) $ is $ (0, 1) $ or $ (1, 1) $. Therefore, when $ C = 1 $, its next value is 0 or 1 with probability 1/2 each.

Similarly, find C.

| $t$ (A B C) \ $t+1$ C (XOR) | 0 | 1 |
|---|---|---|
| 000 | 1/2 | 1/2 |
| 100 | 1/2 | 1/2 |
| 010 | 1/2 | 1/2 |
| 110 | 1/2 | 1/2 |
| 001 | 1/2 | 1/2 |
| 101 | 1/2 | 1/2 |
| 011 | 1/2 | 1/2 |
| 111 | 1/2 | 1/2 |

Since C is an XOR gate, its next value cannot be determined at all without knowing the value of A.

Now, let's multiply the three tables obtained so far.

| $t$ (A B C) \ $t+1$ (A B C) | 000 | 100 | 010 | 110 | 001 | 101 | 011 | 111 |
|---|---|---|---|---|---|---|---|---|
| 000 | 1/2 | 0 | 0 | 0 | 1/2 | 0 | 0 | 0 |
| 100 | 1/2 | 0 | 0 | 0 | 1/2 | 0 | 0 | 0 |
| 010 | 0 | 1/2 | 0 | 0 | 0 | 1/2 | 0 | 0 |
| 110 | 0 | 1/2 | 0 | 0 | 0 | 1/2 | 0 | 0 |
| 001 | 0 | 1/4 | 0 | 1/4 | 0 | 1/4 | 0 | 1/4 |
| 101 | 0 | 1/4 | 0 | 1/4 | 0 | 1/4 | 0 | 1/4 |
| 011 | 0 | 1/4 | 0 | 1/4 | 0 | 1/4 | 0 | 1/4 |
| 111 | 0 | 1/4 | 0 | 1/4 | 0 | 1/4 | 0 | 1/4 |

This is the TPM of the network with the connections from A to B and C cut.

What matters is how this cut network differs from the TPM of the uncut network.
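
The following sketch (my own NumPy illustration of the reasoning above, not PyPhi code) rebuilds this cut TPM by feeding B and C independent 50/50 noise in place of A, while A itself still receives B and C as usual.

```python
import numpy as np

# TPM of the network with the connections *from* A cut.
states = [(a, b, c) for c in (0, 1) for b in (0, 1) for a in (0, 1)]  # A varies fastest
cut_tpm = np.zeros((8, 8))

for i, (a, b, c) in enumerate(states):
    for j, (a2, b2, c2) in enumerate(states):
        p_a = 1.0 if a2 == (b | c) else 0.0  # A still sees B and C: deterministic OR
        # B and C each see an independent noisy copy of A (0 or 1 with probability 1/2).
        p_b = np.mean([(noise & c) == b2 for noise in (0, 1)])
        p_c = np.mean([(noise ^ b) == c2 for noise in (0, 1)])
        cut_tpm[i, j] = p_a * p_b * p_c

# The row for (0, 1, 0) should be [0, 1/2, 0, 0, 0, 1/2, 0, 0], as in the table above.
print(cut_tpm[states.index((0, 1, 0))])
```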

Let's calculate big-Phi

Let's recalculate the maximally-irreducible cause-effect repertoires based on this cut TPM.

Recalculating the mechanisms we computed earlier...

Only the maximally-irreducible cause-effect repertoire of $ B $ has changed. Comparing its repertoire computed from the cut TPM with the one from the original TPM, the difference is $ 0.17 $.

But this is not the end. The remaining mechanisms $ A, C, AB, BC, ABC $, whose maximally-irreducible cause-effect repertoires did not change, are instead compared with the repertoire of the range $ \varnothing $ (nothing specified) under the cut. The differences come out as $ 0.583, 1, 1, 1.25, 2 $ respectively.

Multiply each difference by the $ φ $ value of the corresponding mechanism and add everything up.

| | A | B | C | AB | BC | ABC |
|---|---|---|---|---|---|---|
| Difference from its own repertoire | 0 | 0.17 | 0 | 0 | 0 | 0 |
| Difference from the $ \varnothing $ repertoire | 0.583 | | 1 | 1 | 1.25 | 2 |
| $ φ $ | 0.17 | 0.17 | 0.50 | 0.25 | 0.33 | 0.50 |
| Difference (whichever applies) × $ φ $ | 0.097 | 0.029 | 0.25 | 0.25 | 0.4125 | 1 |

The values in the last row of the table add up to $ 2.0416 $. This is the value called integrated conceptual information, $ Φ $ ("big phi").

This is the value when the connections going out of A are cut; cutting the connections from B, from C, from AB, from BC, or from AC each gives its own value of $ Φ $.

The calculated values are shown in the table.

| Source of the cut connections | A | BC | B | AC | C | AB |
|---|---|---|---|---|---|---|
| $ Φ $ | 2.042 | 1.924 | 2.042 | 1.924 | 1.972 | 1.917 |

The smallest of these values is taken as the $ Φ $ of the entire system, and the cut that achieves it is called the MIP (minimum information partition). In this case, the MIP is the cut of the connections going out of AB.

Therefore, the value indicating the information-integration capacity of $ ABC $ turns out to be $ Φ = 1.917 $.
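
As a final sanity check, here is the trivial selection step in code, with the per-cut Φ values copied from the table above.

```python
# Phi of the whole system is the minimum over the candidate cuts (values from the table).
phi_per_cut = {'A': 2.042, 'BC': 1.924, 'B': 2.042, 'AC': 1.924, 'C': 1.972, 'AB': 1.917}
mip_cut = min(phi_per_cut, key=phi_per_cut.get)
print(mip_cut, phi_per_cut[mip_cut])  # -> AB 1.917
```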

Let's calculate with the library

Let's calculate this value with a library called PyPhi. The code is simply pasted from the previous article, so please see that article for the (brief) explanation of the code itself.

First, write the TPM (the format is different, but the content is the same), describe the network's connectivity, and create a pyphi.Network object...

```python
import pyphi
import numpy as np

tpm = np.array([
    [0, 0, 0],
    [0, 0, 1],
    [1, 0, 1],
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 1],
    [1, 0, 1],
    [1, 1, 0]
])
cm = np.array([
    [0, 1, 1],
    [1, 0, 1],
    [1, 1, 0]
])
labels = ('A', 'B', 'C')
# In this case cm can be omitted
network = pyphi.Network(tpm, cm=cm, node_labels=labels)
```

Set the initial state of ABC (1, 0, 0) ...

```python
state = (1, 0, 0)
```

Set the calculation target ...

```python
subsystem = pyphi.Subsystem(network, state)
```

Calculate.

```python
pyphi.compute.phi(subsystem)
# 1.916665
```

The result matches the value we calculated by hand.

References

This article basically follows the content of the material below. If you want to go a little deeper, please read it; it contains many concepts that I had to skip here.

https://journals.plos.org/ploscompbiol/article/file?id=10.1371/journal.pcbi.1006343.s001&type=supplementary

The official PyPhi documentation is here:

https://pyphi.readthedocs.io/

I would be very grateful if you could point out any mistakes. The article got far too long, and working through more than 200 pages of the original slide material was brutal.
