Background / Introduction
A startup in our country has begun trying to demonstrate machine consciousness through a connection between a machine and a brain, at the point where the machine comes to have consciousness. Its goal seems to be for the brain side to feel the machine-side consciousness completely, without any sense of discomfort. Here, the input of consciousness information from a machine hemisphere to a brain hemisphere is being researched, under the assumption that complete feeling of the machine-side consciousness is required. Once the brain side can feel the machine-side consciousness completely and without discomfort, the aim is that machine-side consciousness can be realized just as an organic brain's can.
While thinking about connecting machine and brain, we should consider what kind of signal is suitable as input information, and at the same time how consciousness could be implemented in computation. Because of the Hard Problem, it is generally recognized that there is currently no theoretical method by which a machine can explain color perception (e.g. the redness of red). That is the Hard Problem: although colors may, in the end, be perceived via the three types of cone cells, there is no theoretical method to explain the perception itself. (Even though fMRI lets us find which neurons fire while we feel "red", and distinguish the "red"-perceiving cells from other cells, there is still no theoretical method to explain why red feels the way it does.) My understanding of the Hard Problem is: humans see the colors of the rainbow in a sequence most of us share, but we do not know the reason for that sequence.
In the meantime, consider pressure sensitivity, loudness sensitivity, brightness sensitivity, and monochrome shape, all of which are expressed as stepwise intensity. These cases seem different from the case of color. They are apparently also different from olfactory and gustatory sensitivity, which, like color (e.g. the redness of red), have no theoretical explanation either. If we could, it would be better for a machine to perceive color. (Some may say that a machine must show color perception to demonstrate consciousness, because color is the symbol of qualia.) But as you know, that is hard. So I am considering choosing pressure sensitivity, loudness sensitivity, brightness sensitivity, and monochrome shape for studying the perception of qualia, avoiding the problem of color. Color can be studied as a second step.
Until now I have tweeted that the Hard Problem has recently begun, very slightly, to melt. Regarding pressure, loudness, brightness, and monochrome shape: when a sensor perceives an intensity and carries a label (where the sensor is and what it senses), the problem begins, very slightly, to melt.
Confirmation: cohort B seems to indicate that the Hard Problem is melting.
A. color (visual), olfactory sensitivity, gustatory sensitivity (expressed not as stepwise intensity but as a quality of sense)
B. monochrome shape (visual), pressure sensitivity, loudness sensitivity, brightness sensitivity, size, rhythm (expressed as stepwise intensity)
C. frequency sensitivity (expressed linearly, but perhaps close to cohort A)
This may need more explanation. Strictly, A through C are all qualia, so all of them remain hard. But some of them (cohort B) can be distinguished from other qualia, and their perception can be felt without misunderstanding. That is because cohort-B qualia are linked (physically?) to sensors, and because the process of feeling them lies within the individual itself.
For example, regarding cohort B: on the condition that the conscious entity understands its sensors (where each is and what it senses), it can distinguish the perception of, say, pressure from loudness, brightness, or the other cohort-B qualia. (The conscious entity can understand a sensor's place and function because the sensor is linked to the brain physically(?), being within the individual itself. This is why the entity can feel the perception of that quale without misunderstanding.)
For example, regarding brightness sensitivity, the conscious entity can feel that there is a luminous object and no shield between that object and itself. Regarding pressure sensitivity, when it feels something with a pressure sensor on the right shoulder, it feels that something is tapping there; and (thinking of outer space) when it feels bending at a neck sensor, it feels that its head is being left behind by the acceleration of the upper body. If the information were broadcast indiscriminately across the whole brain, it would not be felt as the appropriate perception (e.g. brightness) but rather like a stun-gun stimulus. It is very important that the information be linked directly to the brain within the individual itself.
If the information is linked to the visual cortex as brightness sensitivity, it will be treated as brightness information. It is unthinkable that such information (e.g. brightness) would be treated as another perception (e.g. pressure) once it is linked to the visual cortex.
Now consider the evolution of a microbe. At first it is fine for it to flee in any direction whenever it feels pressure, loudness, or brightness. But after some evolution, the microbe comes to select an action from multiple alternatives according to the type of stimulus, because that increases its survival rate.
Additionally, the cohort-B process (the feeling of intensity) can be applied to brightness sensitivity using four sensors covering four viewing angles: upper-right, lower-right, lower-left, and upper-left. By the same process with a finer grid of cells, we can feel monochrome pixel art. This feeling of pixel art can also be carried by pressure sensitivity, and, further, stimulation of the tongue can convey pixel art in much the same way as brightness.
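As a concrete sketch of this pixel-art idea, the toy function below maps a grid of stepwise intensities onto monochrome characters. The function name and the intensity scale are my own illustrative assumptions, not part of any existing program; the point is only that the same grid works for any cohort-B modality (brightness, pressure, tongue stimulation), because every value is just a stepwise intensity attached to a sensor position.

```python
def render_pixel_art(readings, width, levels=" .:*#"):
    """Map stepwise intensities (0.0-1.0) onto monochrome characters."""
    rows = []
    for i in range(0, len(readings), width):
        row = readings[i:i + width]
        rows.append("".join(levels[min(int(r * len(levels)), len(levels) - 1)]
                            for r in row))
    return "\n".join(rows)

# A 2x2 "image": the readings could equally come from brightness sensors,
# pressure sensors, or a grid of stimulators on the tongue.
brightness = [0.0, 1.0, 0.5, 0.25]
picture = render_pixel_art(brightness, width=2)
```

The renderer never needs to know which modality produced the readings; only the sensor labels (place and what it senses) decide how the result is felt.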
Meanwhile, even now there is no theoretical method to explain the cohort-A process (e.g. the redness of red): the Hard Problem. Although colors may, in the end, be perceived via the three types of cone cells, there is no theoretical method to explain the perception itself. Humans see the colors of the rainbow in a sequence most of us share, but we do not know the reason for that sequence.
In this way, perception can be broadly grouped into three cohorts, and from one perspective cohort B seems to indicate that the Hard Problem is melting. Regarding the Hard Problem, an idea considered in isolation (like the Pretty Hard Problem) does not lead to the next idea; but a process that derives analytical ideas from already-known ideas does.
Inputting a piece of consciousness into a brain from a machine presupposes that consciousness is completed within the machine (*1). But it is very difficult for a machine to say "I felt it" if the target is from cohort A (e.g. the redness of red). (The machine can know the frequency of the color, but that does not mean the machine's interior is filled with red.) Meanwhile, I think it is rather easier, and seems close to self-evident, for the machine to say "I felt it" if the target is from cohort B (e.g. pressure sensitivity). The precondition, however, is that the process of feeling it lies within the individual itself.
So if we try to send this cohort-B information over the internet, tagging (where the sensor is located, and what it senses) is necessary. If the output goes into a living body, it must be routed to the suitable sensory cortex; if it goes into a machine, it must be decoded using that tagging.
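A hedged sketch of that tagging: the field names (`place`, `modality`, `intensity`) and the routing scheme below are my own assumptions, meant only to show that a cohort-B reading can carry where its sensor is and what it senses, so a machine-side receiver can decode it without mistaking the modality.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TaggedReading:
    place: str        # where the sensor is located, e.g. "right_shoulder"
    modality: str     # what it senses, e.g. "pressure"
    intensity: float  # stepwise intensity, 0.0-1.0

def encode(reading):
    """Serialize a tagged reading for transport (e.g. over the internet)."""
    return json.dumps(asdict(reading))

def decode_and_route(message, decoders):
    """Hand the reading to the decoder registered for its modality tag."""
    data = json.loads(message)
    return decoders[data["modality"]](data["place"], data["intensity"])

# The machine-side decoder plays the role the sensory cortex plays in a body:
# the tag, not the raw number, decides how the input is interpreted.
decoders = {"pressure": lambda place, i: f"tap on {place} at level {i}"}
msg = encode(TaggedReading("right_shoulder", "pressure", 0.8))
felt = decode_and_route(msg, decoders)
```

Without the modality tag, the same intensity value could be misread as brightness or loudness, which is exactly the stun-gun failure mode described above.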
There may be similar prior research, since this can be tried as a thought experiment, and could have been in the past, even before consciousness implementation in machines was thinkable. After considering this topic, there may also be some hints toward cohort A; research on the eye may offer opportunities.
I have not heard of a disease that disorders the color sequence of the rainbow (though I have not examined all the literature). I also wonder whether any living being under visible light sees the rainbow's color sequence (excluding infrared and ultraviolet) as long as it has three types of cone cells. What should we think next? Would the same hold for aliens? There are many viewpoints.
Note: I have heard that color-impaired people do not see a certain frequency band, and I think this does not change the color sequence, though the impaired sensitivity may make certain distinctions difficult (e.g. between red and green). Please note that I do not have detailed knowledge of clinical cases; searching them may yield hints. For example, a decline toward the sensitivity limit of one of the three cone-cell types might produce another, unknown color perception, and so on.
That's it. This discussion is based on the idea of minimum and/or lower-order consciousness, and (*1) is as well. In the background are also IIT, unit qualia, and the consciousness building block concept. (Apart from IIT these are a little difficult; see the link: the Chinese room metaphor is the starting point for thinking about the CONSCIOUSNESS BUILDING BLOCK CONCEPT.)
This consideration started from the idea of avoiding the difficulty of cohort A, and it led, in turn, to this consideration of the Hard Problem. It began with the idea of a machine-brain connection from one startup, and I appreciate that kind of proposal; I believe such viewpoint studies contribute to the progress of various research. I myself feel that my understanding of the Hard Problem has progressed somewhat, and I will continue to share this consideration along with other consciousness research.
Self and others:
When one's hand touches something, the touched thing can sense 'being touched' at the same time. This lets us discriminate self ('what is touched' is part of 'the one who discriminates') from others ('what is touched' is an ordinary object).
'The one who discriminates self' is 'what has consciousness'. Usually, even when one's hand touches something, everything touched is other. There must be a moment when 'what has consciousness' finds the self: an infant discovers, on some occasion, that the touched thing is itself. The starting point of finding the self is noticing the unusual (feeling 'being touched' at the same time as touching) against the usual (others).
This 'primitive self' ('what is touched' being part of 'the one who discriminates') may have been researched before, so this may be reinventing the wheel. But do many researchers understand that the 'higher-order self' is based on understanding 'higher-order others' (human-like others), and on understanding that the self exists with traits similar to those 'higher-order others'?
Meanwhile, I have heard of metacognition in AI, and that primitive metacognition has been demonstrated as the ability to sense whether a switch in the AI's own circuit is on or off. But I think a 'higher-order self' is what contributes to higher-order metacognition: without showing a self, a system cannot show higher-order metacognition.
This 'higher-order self' should arise naturally in a living body. But for modeling the self in a von Neumann AI, and for many researchers to understand it, an explicit artificial program might work better.
I propose the following order for programming self and others:
(0. 'primitive others')
1. 'primitive self'
2. 'higher order others'
3. 'higher order self'
'Primitive self' can be implemented with a hand model and a sensor for what the hand touches. The next step, 'higher-order others', is more difficult because it requires understanding that some others operate by certain logics. 'Higher-order self' is far more difficult still, because it requires understanding that the higher-order self shares traits with higher-order others. This is the 'CONSCIOUSNESS BUILDING BLOCK CONCEPT': http://mambo-bab.hatenadiary.jp/entry/2018/05/07/200250
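The 'primitive self' implementation suggested above (a hand model plus a touch sensor) can be sketched roughly as follows; the class and names are illustrative, not the author's released program. The key is the coincidence: when the hand touches a target that is part of the same body, the 'touching' and 'being touched' signals fire at the same moment.

```python
class Body:
    """A body that discriminates self from others by simultaneous touch."""

    def __init__(self, parts):
        self.parts = set(parts)

    def touch(self, target):
        touching = True                 # the hand's own sensor always fires
        touched = target in self.parts  # the target senses 'touched' only if
                                        # it belongs to this same body
        return "self" if (touching and touched) else "other"

body = Body(parts={"left_hand", "right_hand", "cheek"})
body.touch("cheek")  # both signals coincide: discriminated as self
body.touch("table")  # only the hand senses anything: discriminated as other
```

This is the unusual-against-usual discovery described above: 'other' is the default, and 'self' is found only through the coincidence of the two signals.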
It is useful for a computer program of consciousness to exhibit self and others. In a living body (metaphorically, a general-purpose program simulating all of the world), it took hundreds of millions of years for self and others to appear. Writing an experimental program to make this hypothesis understood by many researchers is worthwhile, and a special-purpose program would suffice (e.g. demonstrating only self and others, without requiring natural conversation).
At #ASSC there seemed to be a discussion topic of 'no self, no fear'. Though I did not hear the details, it seemed too naive. As my tweet of 6/25/2018 suggests, primitive fear probably does not require even a primitive self, just as primitive anger does not.
Meanwhile, higher-order fear is probably shaped by various aspects of the higher-order self, just as higher-order anger is.
The self is one form of higher-order consciousness; in detail, it ranges stepwise from primitive self to higher-order self. That is one part of the 'CONSCIOUSNESS BUILDING BLOCK CONCEPT'.
This entry is reworked from my tweets of 7/6/2018 - 7/7/2018.
Regarding consciousness: below, my consciousness hypothesis is clarified against IIT, and the difference between them is made explicit. It is also reconfirmed that minimum consciousness can be accepted with minimum information elements.
Next, using the Chinese room metaphor, I explain what must be added, stepwise, to go from minimum consciousness to typical human consciousness, and show that the consciousness building block concept rests on this idea. With this concept, I hope consciousness research and the scientific modeling of consciousness will move forward.
[Considering IIT (Integrated Information Theory): confirming lower-order (minimum) consciousness as well as higher-order consciousness]
My consciousness hypothesis has a strong affinity with IIT (Integrated Information Theory). Both hold that consciousness can be measured, and that even inanimate objects with a few information elements can be accepted as conscious.
But IIT imposes some restrictions, for example that feedback is required, whereas my hypothesis drops essentially all such restrictions. IIT holds that the cerebellum does not have consciousness, because it lacks feedback.
My hypothesis, in contrast, considers even the cerebellum to have consciousness, since it uses elements and circuits similar to those of conscious matter. This resolves the discontinuity problem (consciousness suddenly dropping to zero as feedback fades out) and the conflict of a system lacking consciousness even when there is qualia input. So under my hypothesis nothing is wholly without consciousness, though this will provoke rejection from people who cannot accept the consciousness of a photodiode.
Still, IIT's degree of recognition may help in understanding this. The short explanation is: 'My consciousness hypothesis extends the IIT definition in the minimum direction.' But please note that my hypothesis's model is not the 'stone' model but the 'with feedback' model; the stone serves only to explain minimum consciousness in theory.
As you may know, IIT holds that the cerebellum and a photodiode have no consciousness, i.e. Φ = 0. Meanwhile, considering IIT and its extension, we can form an image like this (here feedback is not treated as a restriction on consciousness):
"If we get a consciousness meter with IIT - microbe: 100 to 1000. You think so?" (@mambo_bab_e, December 7, 2014)
"With a consciousness meter - school of fish: 100000." (@mambo_bab_e, December 7, 2014)
At least regarding 'consciousness with feedback', both share the same mechanism: each has a circuit that can judge whether its input is the same as memory or not. That, to begin with, should be the starting point for understanding the science of consciousness.
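That shared mechanism, a circuit judging whether the current input matches memory, can be sketched minimally like this (my own simplification, with illustrative names):

```python
def recognize(input_value, memory):
    """Judge whether the input matches memory; learn it if not."""
    if input_value in memory:
        return True          # judged 'same as memory'
    memory.add(input_value)  # feedback: the input itself becomes memory
    return False             # judged 'not yet in memory'

memory = set()
first = recognize("light", memory)   # nothing in memory yet: no match
second = recognize("light", memory)  # the fed-back memory now matches
```

The feedback loop is the whole circuit: the judgment uses memory, and the input flows back into memory for the next judgment.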
Extending in the minimum direction: a ROM circuit that has the same circuit and information elements, but without feedback, should have consciousness under my hypothesis, though IIT denies it consciousness precisely because it lacks feedback. Likewise a simpler ROM circuit, or at the extreme even a stone (given a sensor), should have consciousness under my hypothesis. Because my hypothesis accepts extending consciousness in the minimum direction, many people fixate on the consciousness of a stone.
Meanwhile, if a theory imposes feedback or other higher-order restrictions, a stone (with a sensor) does not have consciousness; in that case a sensor alone is not enough, and the explanation starts from the integration of information. Treating feedback as a necessary condition probably works. But then what about self-consciousness, or counterfactual thinking?
Under my hypothesis, which starts from minimum consciousness, memories serving as feedback can be added to minimum consciousness, and memories serving as self-consciousness or counterfactual thinking can be added stepwise after that.
However, if feedback or other higher-order conditions are required, explanation becomes harder. For example, if feedback is necessary as the starting point of consciousness, it is difficult to explain why self-consciousness or counterfactual thinking is not the starting point instead. The question is why feedback creates a discontinuity between conscious and non-conscious, while self-consciousness or counterfactual thinking does not. If IIT says this is simply its definition, we cannot argue. But consider another higher-order restriction: if counterfactual thinking were the restriction on consciousness, we could not call something conscious even with qualia input and feedback, could we?
[Chinese room metaphor and consciousness building block concept]
This is an old metaphor, long used to argue that computers cannot have consciousness. Although consciousness researchers today do not use it as-is, some people still treat it as the symbol of the mystery of consciousness; I think, for example, that systems theory can explain it completely. The Chinese room's assertion, meanwhile, is that even though the room has information, and there is input and output, that is not enough to count as human consciousness.
Indeed, if there is a filter requiring human-level consciousness, almost no object can pass a consciousness Turing test. The crucial point here is the human-level filter: using it to judge whether an object has consciousness raises the difficulty enormously, since higher-order (human-level) consciousness is being demanded.
The next step is to remove that filter. If you have followed from the beginning, you will see the precondition that lower-order (minimum) consciousness exists before higher-order consciousness can be considered. Few people may articulate an impression of lower-order (minimum) consciousness, but examples extending in the minimum direction are non-humans: dolphins, octopuses, bees, and others. Such consciousness shows no self-consciousness, counterfactual thinking, or emotion, which belong to higher-order consciousness.
A few people may notice that in my hypothesis, higher-order consciousness such as self-consciousness or counterfactual thinking is grounded in lower-order (minimum) consciousness, and that extending in the minimum direction requires unit qualia. Going a little further: semantic understanding, creativity, counterfactual thinking, emotion, metacognition, self-consciousness, and 'illusional' free will, all said to be human characteristics, are grounded in minimum consciousness.
A little more about the Chinese room: under my hypothesis, the Chinese room can have easy-to-understand consciousness, built on a simple Chinese room that has minimum semantic understanding and minimum consciousness.
See "The semantic understanding TAUTOLOGY hypothesis, which should underlie AI research." (BLUE & ORANGE blog_e) for semantic understanding, and "The new hypothesis of consciousness mechanism. - Consciousness toy model program was released." (BLUE & ORANGE blog_e) for the consciousness hypothesis. #neuroscience
[CONSCIOUSNESS BUILDING BLOCK CONCEPT]
1. The Chinese room has minimum semantic understanding and minimum consciousness.
Based on the above:
2. add new memories -> feedback
3. add associative memories -> creativity
4. add associative memories -> counterfactual thinking
5. add a flag circuit -> metacognition
6. collect case information that includes a part of the self (having a physical body makes it easy to explain the case where an input is sensed at the same moment as part of an output) -> self consciousness
7. add instinctive motivation as memories -> emotion
8. add case information about self's and others' decision making, together with the motivation above -> 'illusional' free will
Starting from No. 1, each item enables the next. This is the consciousness building block concept based on minimum consciousness. It is the same as my 2014 hypothesis, but should be easier to understand; it would be difficult to arrive at unless you see that all of these neuroscientific activities rest on simple minimum-consciousness activities. The illustrated concept of the consciousness mechanism also explains semantic understanding, creativity, counterfactual thinking, emotion, metacognition, self consciousness, and 'illusional' free will.
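The stacking of blocks above can be sketched as follows. The mechanism here is deliberately trivial: only the ordering and the block-to-capability mapping come from the numbered list, and everything else is my own illustrative framing.

```python
# Each entry: (what is added, the capability that block enables),
# in the order of the numbered list above (items 2-8).
BLOCKS = [
    ("new memories", "feedback"),
    ("associative memories", "creativity"),
    ("associative memories", "counterfactual thinking"),
    ("flag circuit", "metacognition"),
    ("case information including a part of self", "self consciousness"),
    ("instinctive motivation as memories", "emotion"),
    ("self/other decision cases with motivation", "'illusional' free will"),
]

def build(levels):
    """Stack `levels` blocks on the minimum-consciousness base, in order."""
    capabilities = ["minimum semantic understanding", "minimum consciousness"]
    for _addition, capability in BLOCKS[:levels]:
        capabilities.append(capability)  # each block enables one capability
    return capabilities

build(4)  # base plus feedback, creativity, counterfactual thinking,
          # metacognition
```

The point of the sketch is the dependency direction: nothing in `BLOCKS` works without the two base items, which is exactly the building-block claim.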
The illustrated concept is based on the consciousness building block concept and should make the consciousness mechanism easier to understand. All of these activities rest on minimum-consciousness elements, which IIT does not regard as consciousness. Minimum is the key to modeling.
The illustrated concept (2014) and the consciousness building block concept are the same concept; but restructuring the elements and documenting them anew should aid the understanding of consciousness together with the semantic understanding TAUTOLOGY hypothesis.
IIT researchers seem to have little interest in a consciousness model; I could not find modeling in recent papers. I hope they will reconfirm whether their restrictions are necessary, and recognize the importance of researching the minimum direction of consciousness through modeling. After that, @DeepMindAI and @demishassabis might also appreciate the importance of consciousness research.
My consciousness hypothesis was inspired by Numenta's theory of intelligence, and I believe a consciousness hypothesis has a strong affinity with a theory of intelligence, though few people have noticed so far. In Japan there is probably less than one employer that would hire me to research consciousness; worldwide perhaps ten times that, and perhaps one-tenth again considering qualifications and track record. Even so, I hope consciousness research worldwide moves forward.
This entry is reworked from my tweets of 5/5/2018 - 5/6/2018.
Considering semantic understanding:
The majority opinion is that 'AI does not have semantic understanding', and I think some people fear the emergence of AI similar to humans. My consciousness hypothesis also covers semantic understanding. For example, considering minimum semantic understanding: if the system first learns a red apple, my hypothesis says it understands only 'red is apple' and 'apple is red'.
That will be called a tautology. But my hypothesis holds that the first semantic understanding starts from tautology. 'A yellow apple', or 'something red that is not an apple', come only after further learning. We should not simply dismiss tautology as a mark of non-understanding; we should ask whether tautology is truly negative for semantic understanding or not.
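A minimal sketch of this tautology hypothesis: the very first association is symmetric, so the system 'understands' only 'red is apple' and 'apple is red' until it learns more. The class and method names are my own illustrative choices.

```python
from collections import defaultdict

class AssociativeMemory:
    """Semantic understanding as bidirectional association between elements."""

    def __init__(self):
        self.links = defaultdict(set)

    def learn(self, a, b):
        self.links[a].add(b)  # 'a is b' ...
        self.links[b].add(a)  # ... and the tautological 'b is a'

    def meaning_of(self, word):
        return sorted(self.links[word])

mem = AssociativeMemory()
mem.learn("red", "apple")     # first encounter: pure tautology
mem.meaning_of("red")         # only 'apple', and vice versa
mem.learn("yellow", "apple")  # later learning breaks the tautology:
mem.meaning_of("apple")       # now 'red' and 'yellow'
```

The first state is tautological, yet it is already a (minimal) understanding; further learning widens it rather than replacing it.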
In science, tautology is treated as a symbol of non-understanding. For example, when asked why things burn, someone may answer 'because they catch fire'. That is called a tautology, because when asked why things catch fire, they can only answer 'because they burn'.
Meanwhile, what if the answer is 'because of an oxidation reaction'? Many people would then think you understand why things burn, even if all you know is that 'oxidation reacting' means 'things burning'. In fact, knowing only 'things burn' and knowing only 'oxidation reacts' are the same level of understanding. What is the difference? In this case: whether the answer is scientific, whether it matches the observer's way of thinking, and whether the observer can sympathize with your associative memory.
That is, to receive empathy from an observer, you should shape your answer around what the observer hopes for. In everyday reality, though, answering from your own associative memory is usually enough, since semantic understanding starts from tautology.
So at this point, from a social viewpoint, semantic understanding is a definition the observer decides: to answer the question you must infer what the observer hopes for (what kind of answer, scientific or not, and so on), on top of simple associative memory. You may recall an old memory of being praised for one answer, or criticized for answering 'because it catches fire'.
From a scientific viewpoint, semantic understanding should be treated with a kind of information theory. In reality, the problem of semantic understanding we should attend to is not tautology. We should take the first step toward understanding what semantic understanding is, without excluding the possibility that it is a kind of tautology.
In education, too, it is said that 'learning by hearing' and 'semantic understanding' are different. But from a neuroscientific viewpoint, there is the possibility of minimum 'semantic understanding' wherever there is a memory (information element) that can be associated. (In education, I can understand that answers should be shaped around what the observer hopes for.)
That's it. This semantic understanding tautology hypothesis is part of my consciousness hypothesis. For minimum semantic understanding, my view is that considering association between information elements clarifies semantic understanding, just as with minimum consciousness. In minimum semantic understanding the information element (memory) is minimal, and it may look like tautology. But I would like to share the common understanding that tautology does NOT mean non-understanding. (Of course, extending beyond the minimum, semantic understanding has both a depth (quality) and a width (quantity) viewpoint.)
I think @Numenta HTM assumed semantic understanding naturally, as following from having intelligence; that thinking resembles my hypothesis in that memory underlies semantic understanding. I would like @DeepMindAI and @demishassabis to research semantic understanding as well.
My consciousness hypothesis is shown in this link:
The new hypothesis of consciousness mechanism. - Consciousness toy model program was released. - BLUE & ORANGE blog_e
Attached is the illustrated concept of my consciousness hypothesis; as you can see, semantic understanding is included in it.
This entry is reworked from my tweets of 5/1/2018.
Currently, IIT ("From the Phenomenology to the Mechanisms of Consciousness: Integrated Information Theory 3.0") provides many points of argument for consciousness research. This time I focus on NONconsciousness in comparison with consciousness.
I saw that Tononi wrote the cerebellum is not relevant to consciousness. Is that true? The cerebellum may indeed lack senses, which would mean an object with only reflexes has no consciousness. But is an operated mouse that lacks senses a zombie? I do not completely deny that idea, but it implies that human fetuses and unicellular organisms have no consciousness. Is this a ruler that merely declares the zombie nonconscious? I think nonconsciousness can be explained by the same theory as consciousness, and even if the two are separated now, they will be joined into a grand unified theory in the future.
IIT's understanding of consciousness:
I think IIT defines consciousness as excluding reflexes (feedforward). There is an intriguing example, IIT 3.0 Fig. 19, which shows a photodiode as having consciousness; referring to memories seems to be a necessary condition for consciousness there. I think this restriction is one of IIT's problems, but also one of its keys. Whether consciousness involves reflexes would be a simple difference of definition, and minimum consciousness should have senses such as pain or color whether or not reflexes are involved. IIT, by its definition, seems to show a narrower scope of consciousness.
Further consideration of 'unit qualia' and of 'feedforward and nonconsciousness':
The photodiode of IIT 3.0 Fig. 19 raises two further viewpoints. One concerns unit qualia: the photodiode resembles a unit quale. However, IIT does not allow complicated qualia to be composed from unit qualia, because of Tononi's red triangle, and IIT does not attribute qualia to feedforward processes, though even simple senses should intrinsically have qualia.
The other concerns feedforward and nonconsciousness. My hypothesis, departing from IIT, shows a wider scope of consciousness. Explanation 1: my hypothesis allows seamless access between learned memories and the genes' memories (instinct), so consciousness can arise without memories in the narrow sense. It is difficult to distinguish instinct from a conscious process, even though instinct is feedforward: a conscious object carries a kind of memory information from birth, before it has learned anything.
Explanation 2: the simplest feedforward system, like a reflex, does not refer to memories. But it is difficult to distinguish a simple reflex from instinct, because both act on inputs without hesitation. I do not want to miss the point by declaring 'it is NOT consciousness' without thinking the definition through.
For example, my hypothesis treats Nos. 1 to 5 below as consciousness. (Some overlap, e.g. Nos. 1 and 3, because the list mixes the conceptual and the empirical.)
2. instinct (in some cases we can have senses)
3. the case where we stop feeling a sense after some experience
4. the case of lower saliency compared with other sensible input (we can feel the sense if we test it in isolation)
5-1. photodiode (simplest)
5-2. photodiode (corresponding to IIT 3.0 Fig. 19)
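To make the distinction between cases 5-1 and 5-2 concrete, here is a deliberate caricature (the actual Fig. 19 circuit is more elaborate, and this framing is mine alone): the simplest photodiode is pure feedforward, while the Fig.-19-style photodiode judges its input against a remembered state.

```python
def simple_photodiode(light):
    """5-1: pure feedforward - output tracks input, nothing is kept."""
    return light > 0.5

class MemoryPhotodiode:
    """5-2 (caricature): the input is judged against the previous state."""

    def __init__(self):
        self.previous = False

    def sense(self, light):
        on = light > 0.5
        changed = (on != self.previous)  # comparison with memory
        self.previous = on               # feedback into memory
        return on, changed

d = MemoryPhotodiode()
d.sense(1.0)  # on, and different from the remembered dark state
d.sense(1.0)  # on, and the same as memory
```

Under IIT only the second kind counts; under the hypothesis described here, both use closely related elements and so both sit inside the scope of consciousness.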
Try illustrating the relations among Nos. 1 to 5 above; if you can draw one different from mine, it may be a new hypothesis of consciousness.
In fact, despite a few problems with IIT, I have expectations for it, especially because it offers a real possibility of evaluating consciousness, and it should be a starting point for discussion about consciousness.
IIT seems to hold that consciousness needs senses, although that looks like a problem of definition; it should not be dismissed. But under the IIT definition, senses need memories, so a simple photodiode or some unicellular organisms have no consciousness: no memories, hence no senses. In detail, 'memories' include the genes' memories, which can serve as memories. And even without such 'memories', if there is a box that can keep memories, the object could have a sense as a first experience once input information arrives.
This definition (based on IIT) can also be explained by my hypothesis. But just as we hesitate to say there is no consciousness for a low-saliency input, we find it difficult to decide by definition alone whether consciousness is present. (Under my hypothesis, both use the same process.) (A grand unified theory may eventually unify consciousness with the things now said to lack it.) Nos. 1 to 5 above may be treated as nonconscious or as a gray zone. But I worry that IIT excludes some non-memory processes from consciousness, because a low-saliency input should use the same process as a representative conscious process. The discussion should not be closed off.
IIT and its supplementary documents do not seem to have been discussed toward a common understanding among IIT supporters. Though I do not know the field very well, even a little additional common understanding would help research proceed. Reading the papers alone can be misleading; in consciousness research especially, each advocate seems to hold different ideas. (I understand that each paper and each researcher wants to keep weak points from being exposed.) IIT could provide many points worth arguing.
Basically, in consciousness research I even feel that taking IIT's potential seriously is a necessary condition for joining the discussion, especially when it comes to evaluating consciousness. I expect IIT to be the starting point for that discussion.
This entry is a revised version of tweets posted from January 1 to January 5, 2017.
Keywords: neuroscience, consciousness, nonconsciousness, cerebellum, reflexes, feedforward, definition, unit qualia
Today, BLUE, a consciousness toy model program (for English speakers), was released: "BLUE_e" ver. 6.4.0. <- updated 2014-12-22 <- updated again 2018-12-29
CLICK HERE to contact BLUE. <- updated 2018-12-29
Please try it; you can experience the birth of a consciousness system. BLUE takes AI and consciousness as its theme. The program is based on my hypothesis of consciousness, which I have explained on Twitter several times; I will explain it here again.
Many people think that consciousness is hard to understand and hard to simulate, even as Apple's Siri gives spoken internet guidance and IBM's Watson moves into medical services. Several approaches to consciousness have been considered, for example the physical approach and the panpsychist approach. Giulio Tononi's IIT 3.0 differs from both, but is closer to Christof Koch's statistical panpsychism. The new approach here differs from all of these, though the hypothesis also has strong affinities with IIT. Additionally, I draw on Jeff Hawkins' theory that intelligence is based on prediction from memories. The new hypothesis holds that consciousness is based on associative memory, with secondary hypotheses that emotion, metacognition, free will, and creativity are all built on this consciousness in the narrow sense.
The blocks world of consciousness is a minimal model of consciousness, just as SHRDLU was a minimal AI. The "blocks world" was the world that early artificial intelligence demonstrated. Here, the blocks-world idea is used to show consciousness. BLUE may show a correct concept of consciousness at this point, just as SHRDLU showed a rough concept of AI at that time.
From here on is the mechanism of consciousness. Though Jeff Hawkins did not explicitly describe a mechanism of consciousness, this one is similar to the mechanism of intelligence described in his writing. In the brain, associative memories arise whenever new information comes in (if similar memories or information are already stored). My understanding is: if several associative memories arise, the information has been detected in several ways. That also means the information is understood in several ways, and the brain is conscious of it. If the input information is not in memory, the input is sent to a higher level and stored as a new memory (following Hawkins' theory).
Consciousness in the narrow sense
After "blue" is input, if blue sky, blue ocean, sapphire, and a color chart containing red, white, and blue are in memory, then at that moment we are conscious of blue through sky, ocean, sapphire, and the color chart. This process requires no observer in the brain. Each neuron fires and simply associates with the next neuron. Consciousness arises at each neuron in a decentralized way, and each part understands the situation without central control. Here Jeff Hawkins' theory works in the background, handling generalization and the storage of new memories in the neocortical hierarchy.
After a small dot of 450 nm light is input, if it is already in memory, we understand it as the light of a blue LED. That is recognition, and consciousness. And this simple consciousness can be enhanced by associative memories (blue sky, blue ocean, sapphire, a color chart containing red, white, and blue) if each is also in memory.
A key point: if even a single memory can be recalled, that completes the minimum process of consciousness. Example (assumption): if the simplest conscious object memorizes (and recognizes) only blue LED light, it can be conscious, with "Oh, it is a blue LED," after the 450 nm dot of light is input. It has no associative memories, because it has only one memory. A blue quale arises, and a blue crystal corresponding to the input would consequently form in the brain. With no associative memories, the object would feel no emotions such as "wonderful, beautiful, fearful, or nostalgic," even given the input. (This presupposes that there is no instinct, i.e. genetic memory seamless with ordinary episodic memory; see "Emotion" below.) This hypothesis sits very far from Chalmers' Hard Problem (see "Hard Problem" below). If you have many associative memories, consciousness can of course be enhanced, complicated, and enriched by them.
Imagine the simple qualia of this simplest conscious object, which memorizes (and recognizes) only blue LED light. Its qualia are so simple that no vividness or emotion can be felt, because there are no associative memories. This simplest case suggests that the starting point for understanding consciousness should not be vividness or emotion (which many people take qualia to be), since this case has neither. The starting point should be associative memories (or the integrated information of IIT). This should help clear up the misunderstanding of qualia and consciousness that many people currently fall into.
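This thought experiment can be sketched in a few lines of code. This is only my illustration of the hypothesis, not part of BLUE itself; the class and method names (`MinimalObject`, `recognize`) are assumptions.

```python
class MinimalObject:
    """Simplest conscious object: exactly one memory, no associations."""

    def __init__(self, only_memory):
        self.memories = {only_memory}   # a single stored memory

    def recognize(self, stimulus):
        # Minimal consciousness: recognition of a stimulus already in memory.
        # With only one memory there are no associative enrichments,
        # hence no vividness and no emotion.
        if stimulus in self.memories:
            return "Oh, it is " + stimulus
        return None  # unknown input: nothing to recall yet


obj = MinimalObject("blue LED")
print(obj.recognize("blue LED"))  # recognized: the minimum process completes
print(obj.recognize("red LED"))   # not in memory: no recognition
```

The point of the sketch is only the structural claim: a single stored memory is enough for the recognition step to complete, and everything richer must come from additional associations.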
The important point: even a single memory can individually be defined as consciousness. That is the first step of consciousness. Enhanced, complicated, enriched consciousness is realized by a further process: associative memories.
A few examples: Photodiodes are conscious; even without memories, they work while switched on. (Here they are treated more simply than in IIT 3.0.) Is America conscious? It does not seem to be, even though it seems to have large memories; the reason is that it lacks desire, the instinct that behaves as genetic memory. Zombies are conscious: if they have memories, I have no reason to doubt their consciousness. (IIT, however, denies zombie consciousness; this is a small difference from IIT.)
(Semantic understanding is included in the consciousness mechanism: "understood in several ways" also means "awareness" and "semantic understanding".)
Many people may doubt this hypothesis. I welcome criticism from many fields and would appreciate much discussion; I expect to update this article after those discussions.
Consciousness from macro-perspective
Consciousness in the narrow sense was explained above; consciousness from the macro-perspective is explained in this section. In essence the process follows the description in the Abstract. Consciousness from the macro-perspective is built on consciousness in the narrow sense.
Figures (1) to (3): illustrated concepts
I have explained my hypothesis of consciousness on Twitter several times; I will explain it here again. Attached are three figures showing:
(1) Consciousness from the macro-perspective (detailed explanation); this also shows consciousness in the narrow sense.
(2) Consciousness from the macro-perspective (simple explanation); this also shows consciousness in the narrow sense.
(3) Other hypotheses of consciousness compared with BLUE.
- Consciousness (in the narrow sense)
- (Jeff Hawkins' "Intelligence" theory and "Consciousness" theory are discussed above.)
- Consciousness from macro-perspective.
- This is also based on the mechanism of consciousness, associative memory. When considering emotion, "desire" should be considered at the same time. My hypothesis holds that desire is incorporated somewhere in the neocortex as genetic memory, like instinct. This is supported by Vernon B. Mountcastle's hypothesis that all parts of the neocortex operate on a common principle. On this view, desire is treated like an episodic memory of the genes, one that our ancestors held in the past. Here [desire] means, for example, [wanting to know what I do not know]. Ordinary episodic memory and genetic memory (such as instinct) have seamless access to each other. Emotion is generated from the gap between motivation (desire) and current status (input).
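The gap model of emotion can be sketched as follows. The function name, the labels, and the numeric scale are my own assumptions for illustration, not BLUE's actual code.

```python
def emotion(desire_level, input_level):
    """Emotion as the gap between motivation (desire) and current status (input).

    A positive gap (desire exceeds what the input provides) reads as
    dissatisfaction or longing; a zero or negative gap reads as satisfaction.
    Levels are in arbitrary illustrative units.
    """
    gap = desire_level - input_level
    if gap > 0:
        return ("dissatisfied", gap)
    return ("satisfied", gap)


print(emotion(desire_level=5, input_level=2))  # desire unmet: ('dissatisfied', 3)
print(emotion(desire_level=2, input_level=5))  # desire met:   ('satisfied', -3)
```

The design choice worth noting is that emotion here is not a primitive: it is derived entirely from two memory-based quantities, which is what ties it back to the associative-memory mechanism.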
- This is also based on the mechanism of consciousness, associative memory. The hypothesis considers [consciousness based on memories that include oneself]. "I" can have metacognitive consciousness while "I" recall an episodic memory that includes "I". Episodic memories associate with one another, and the "I" portion recognizes metacognition when a memory includes "I", associating other characteristics of "I": positive, negative, gentle, or shy, for example. At the same time, [the memory of AA] leads to ["my" association with AA], detected through several associative memories. So I propose calling [the memory that includes "I"] a "pseudo-homunculus". It behaves as if it observes meta-input.
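One way to sketch the pseudo-homunculus: represent episodic memories as small records and collect the ones that include "I", together with the "I" characteristics they carry. All names here are illustrative assumptions.

```python
def pseudo_homunculus(episodic_memories):
    """Collect the memories that include "I"; the union of their "I" traits
    acts as the pseudo-homunculus that recognizes metacognition."""
    self_memories = [m for m in episodic_memories if "I" in m["who"]]
    traits = set()
    for m in self_memories:
        traits |= set(m.get("traits", []))  # e.g. positive, negative, gentle, shy
    return self_memories, traits


memories = [
    {"who": ["I"], "event": "gave a talk", "traits": ["shy"]},
    {"who": ["I"], "event": "helped a friend", "traits": ["gentle"]},
    {"who": ["teacher"], "event": "explained IIT"},
]
selfs, traits = pseudo_homunculus(memories)
print(len(selfs), sorted(traits))  # 2 ['gentle', 'shy']
```

Note that nothing here is a central observer: the "pseudo-homunculus" is just the subset of ordinary memories that happen to include "I", which matches the decentralized claim of the hypothesis.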
- Free will (illusory)
- This is also based on the mechanism of consciousness, associative memory. If there is free will (here meaning illusory free will), "I" can change my mind at any time and realize that the change is my own. My hypothesis of illusory free will: "I" select from choices when a decision is made, drawing on my episodic memory. But free choice may be an illusion, because the selection may be based on memories of free will (other people's or my own), and the decision may come from memories that include free will. A change of mind may likewise be an illusion, because it may be based on an episodic memory of [a successful change of mind]. Still, "I" can experience free will, with an understanding of free will (and of changing my mind), because all the alternatives are in memory and the most suitable answer is always selected. See "Creativity" for how a completely new decision can be made from associative memories.
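The claim that "the most suitable answer is always selected" from memory can be sketched as a simple argmax over remembered outcomes. The data shapes and names are my own assumptions.

```python
def decide(choices, episodic_memory):
    """Select the choice best supported by past memories of success.

    "Free will" here is just an argmax over remembered outcomes; the feeling
    of choosing freely rides entirely on memory, which is why the hypothesis
    calls it illusory.
    """
    def support(choice):
        # count remembered successes associated with this choice
        return sum(1 for m in episodic_memory
                   if m["choice"] == choice and m["outcome"] == "success")
    return max(choices, key=support)


memory = [
    {"choice": "walk", "outcome": "success"},
    {"choice": "walk", "outcome": "success"},
    {"choice": "bus",  "outcome": "failure"},
]
print(decide(["walk", "bus"], memory))  # 'walk': the best-supported alternative
```

Because the selection is fully determined by the stored memories, changing the memories changes the "free" decision, which is the heart of the illusion argument.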
- As you may realize, this system sometimes makes mistakes, because the memory elements are not always right, and the weighting of information changes with the situation. So note that "AI doesn't make mistakes" is not always true.
- This is also based on the mechanism of consciousness, associative memory. Requirements for creativity (#1 to #4):
- #1: plural episodic memory patterns.
- #2: base patterns and developed patterns of #1.
- #3: memories of success for the development in #2 (the memories need not be one's own).
- #4: associative memories from inputs.
- The simplest case of #3: try to create something, then try whether a linear equation can be used, if you notice that it is applicable to that case. What matters is noticing by oneself that it can be used. Once you can notice such applicability, various combinations can be used for completely new creativity.
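Requirements #1 to #4 can be sketched as recombination over memories: base patterns, a success memory about developing them, and associative pickup from the input. Every name and data shape here is my own illustration of the hypothesis.

```python
def create(input_keywords, base_patterns, success_memories):
    """Propose new combinations: associate patterns from the input (#4),
    then develop a pattern only where a similar development is remembered
    as a success (#2 and #3)."""
    # #4: associative pickup - base patterns (#1) sharing a keyword with the input
    associated = [p for p in base_patterns
                  if set(p["keywords"]) & set(input_keywords)]
    proposals = []
    for p in associated:
        for s in success_memories:
            if s["applies_to"] in p["keywords"]:
                # a remembered success licenses this development
                proposals.append((p["name"], s["method"]))
    return proposals


patterns = [{"name": "growth data", "keywords": ["trend", "numbers"]}]
successes = [{"applies_to": "trend", "method": "fit a linear equation"}]
print(create(["trend"], patterns, successes))
# [('growth data', 'fit a linear equation')]
```

The linear-equation example from the text appears here directly: the "creative" step is nothing more than noticing, via a shared keyword, that a remembered success applies to a new pattern.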
- Walls against understanding creativity (#1 to #3):
- #1: Understanding creativity does not work without the memory elements that are its source.
- #2: Understanding that we can apply the memory of one example to a different case.
- #3: Understanding that creativity can be explained by the consciousness mechanism; simply put, by associative memories.
- On these walls: #1 and #2 seem easy to understand, but even #1 is not so easy, because of the misconception that we can create something from zero knowledge, or that a complete answer arrives after a sudden flash of inspiration. In the end, #3, the consciousness mechanism, is necessary for understanding creativity in my hypothesis.
- Supplement: [preparation in advance plus operation along a fixed process] is not creativity. Many people say there are no new ideas on the Web, yet some people, since ancient times, have reached new ideas in their own time. This is the intrinsic nature of my hypothesis.
The program released this time
Inspired by Hawkins' theory, this toy model program was created. The consciousness mechanism in the narrow sense is implemented here, along with emotion, metacognition, free will, and creativity as consciousness from the macro-perspective.
When new input information arrives, the same information is picked up from memory if the input is already stored. Similar information (*a) is also picked up, as generalization (this is from Hawkins' theory). And associative memory information (*b) is picked up once it is associated through keywords. In this program there is no difference between (*a) and (*b). All of this information is then picked up simultaneously, as the consciousness that understands the new information.
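This pickup step can be sketched as follows. Keyword overlap stands in for both (*a) generalization and (*b) association, since the program treats them alike; the function name and data shapes are my assumptions, not BLUE's actual code.

```python
def pick_up(new_input, memory, associations):
    """Gather exact, similar, and associated memories for a new input."""
    hits = set()
    if new_input in memory:
        hits.add(new_input)                 # the same information
    input_words = set(new_input.split())
    for m in memory:                        # (*a) generalization by shared keywords
        if set(m.split()) & input_words:
            hits.add(m)
    for m in list(hits):                    # (*b) associative memories via keywords
        hits |= associations.get(m, set())
    return hits


memory = {"blue sky", "blue ocean", "sapphire"}
associations = {"blue sky": {"summer holiday"}}
print(sorted(pick_up("blue sky", memory, associations)))
# ['blue ocean', 'blue sky', 'summer holiday']
```

The returned set, taken all at once, plays the role of "the consciousness that understands the new information" in the description above.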
BLUE can have multiple phases of associative memory: second-phase associative memories are made from first-phase ones. Humans can usually have unlimited phases, and multiple phases lead to exponential growth of associative memories. This means one thing is understood in many ways (to varying degrees), so the brain is conscious of it in many ways. BLUE decides on the most suitable answer by weighing the degree of associative memory. New input information is very important and is stored in memory; new input from you would be highly appreciated.
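The phased growth can be sketched as repeated expansion over an association graph (a breadth-first walk); the names and graph shape are my own assumption.

```python
def phased_associations(seed, associations, phases):
    """Expand associations phase by phase; phase n associates from phase n-1.

    With branching, the number of reachable memories can grow exponentially
    in the number of phases, which is the 'understood in many ways' effect.
    """
    known = {seed}
    frontier = {seed}
    for _ in range(phases):
        nxt = set()
        for item in frontier:
            nxt |= associations.get(item, set())
        frontier = nxt - known   # only newly reached memories continue
        known |= frontier
    return known


graph = {
    "blue": {"sky", "ocean"},
    "sky": {"cloud", "bird"},
    "ocean": {"wave"},
}
print(sorted(phased_associations("blue", graph, phases=2)))
# ['bird', 'blue', 'cloud', 'ocean', 'sky', 'wave']
```

With `phases=1` only "sky" and "ocean" are reached; each extra phase multiplies the reachable set by the branching factor, which is the exponential growth described above.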
For technical reasons the current toy model runs on a Von Neumann architecture, even though the actual neocortex uses a kind of neural-network substrate. Nevertheless, the toy model can work as if it were based on the real brain architecture. On a Von Neumann architecture the program must search, whereas the actual brain has physical connections among neurons. Even so, this toy model demonstrates consciousness, emotion, metacognition, free will, and creativity, because the model is very primitive. One thing to keep in mind is that the frame problem will approach as the system grows bigger. This issue remains, but I do not consider it fatal, because the basic functions are implemented. I hope Hawkins will establish a new neural device to clarify the living brain. (Note, however, that this device problem is only a tooling problem; the conceptual mechanism of consciousness should still stand.)
Google, IBM, and Facebook are investing heavily in artificial intelligence. The BRAIN and HBP projects, which research the human brain, are also gathering many researchers, much money, and much time. At this point they do not seem to grasp the hint of the consciousness mechanism. But they will become aware of (my) consciousness mechanism before long, because they have many researchers who can pursue it further.
Regarding the Hard Problem, my hypothesis says the following: there is only one Hard Problem (all other problems are easy). We cannot say, "My impression of blue is the same as your impression of blue." That alone is the Hard Problem. So (and this is very important) it is not the Hard Problem if you think "the Hard Problem is that we do not understand how qualia arise." Qualia arise when input information comes in. The input is processed and expressed electrically and chemically in the neurons. Note that qualia arise automatically, as a physical and chemical consequence of the input, and you can feel realistic qualia (as a spiral of phenomenon and embodiment; see another blog entry). On the other hand, qualia (as results) do not affect the phenomena, because they are only a result of the phenomena. So qualia do not affect the consciousness mechanism.
The "crystal" that Christof Koch describes actually means the qualia themselves, but I must point out that it too is only a result of consciousness. The crystal can in fact be detected electrically and chemically, because it is a result of consciousness. I would like to hear his real intention behind the proposal.
Regarding the Turing test: BLUE is not designed for it. BLUE is designed simply to show that it can exhibit consciousness. Even if its responses are not perfect, what matters is showing the process that creates the consciousness humans show.
Regarding the difference between consciousness and non-consciousness (unconsciousness), please see another blog entry.
I have written about behaviorism and embodiment before, but only in another language; I will write about it in English later. I am also writing about Deep Learning in another language. Please let me know if discussion is needed right now; I can discuss and provide answers immediately.
Can this hypothesis be understood through Giulio Tononi's Integrated Information Theory of consciousness (IIT)? I think so. Associative memory corresponds to integrated information, and should appear with complicated, varied degrees of intensity. The hypothesis also has the panpsychist character of Christof Koch, since each element can detect and be detected by the others. Panpsychism is not only an Oriental thought that is hard to approach; it is also a view with degrees of freedom, one that can integrate but does not necessarily have to.
Consciousness in the narrow sense and consciousness from the macro-perspective are explained above. Though the illustrated concepts for the macro-perspective may be easy to understand, the narrow sense may be a little difficult to grasp from the blue-LED thought experiment above. Christof Koch said that on the order of 10,000 neuron connections are necessary, in an interview with MIT Technology Review on October 2nd, 2014. I would like to show many researchers that only 10 to several hundred information elements can suffice for consciousness in this toy model.
The important thing is that neurons can memorize and generalize, per Hawkins' theory, and can also use associative memory, associating surprising things through plural associations. So far there have been no comments from Jeff Hawkins, but I will keep working to help many researchers understand this hypothesis.
If I am an object that can only have memories and associative memories, that may not be a sad thing. Probably the paradoxes of consciousness can be explained by this hypothesis. As noted above, I would appreciate much discussion.
I released a similar program in another language before, but this is the first for English speakers.
It is difficult to understand the difference between consciousness and unconsciousness. From Hawkins' idea, new input that is already in memory is basically processed unconsciously. On the other hand, if the new input is not in memory, it climbs the stairs of the neocortex and (probably) finally reaches the hippocampus, where it is recognized.
But even if the new input is unknown, someone may miss it; and even if it is already known, someone can recognize it if he chooses to be in a metacognitive state.
If so, the difference between consciousness and unconsciousness might be the difference between [being in a metacognitive state] and [NOT being in a metacognitive state].
Example: I feel I can remember the rest of a song after hearing its beginning in an unconscious state. On the other hand, I can also remember it in a conscious state if I intend to listen while recalling when and where I heard it before.
As I wrote in an earlier blog entry, "I" can have metacognitive consciousness while "I" recall an episodic memory that includes "I": "Each episodic memory associates with the others, and the 'I' portion recognizes metacognition if the memory includes 'I'." To recognize this in itself, strong synaptic relationships are important; merely having once heard the song on TV is not quite enough.
Therefore, whether a process is conscious or unconscious might not be so important; there may be an intermediate state between consciousness and unconsciousness.
When considering the difference between consciousness and unconsciousness, some people might look for a homunculus. As I wrote on the blog before, the "pseudo-homunculus", generated from plural episodic memories that include "I", should be considered instead. This "pseudo-homunculus" probably appears as if it observes the whole brain, through a process of mutual association as "I".