Talk:Model of hierarchical complexity

Unpublished manuscripts


Unpublished manuscripts cannot be used as references, and have been removed. DGG (talk) 06:41, 9 August 2007 (UTC)


Please follow the full formal procedures as specified in WP:Copyright. It is not enough to give us permission to publish the material. It must specifically be released under the GFDL license. Be sure you have the publisher's permission to do this, and certify it to the Foundation as specified in WP:Copyright. Be aware that this gives permission to all parties to reproduce and modify the material in any manner and for any purpose, even for commercial use, as long as the source is given. Relatively few publishers are willing to offer such releases.

I need help with this. I am the owner of the website Dareassociation.org. I give permission to use material from the scoring manual. What else do I have to do? How do I get it released under the GFDL license? I read over it and it does not give a set of actions or forms. — Preceding unsigned comment added by 65.96.160.201 (talk • contribs) 19:48, 17 August 2007 (UTC)

You do it as described in WP:Donating copyrighted material:

  1. for a website: the best way is to place a copy of the GFDL license on the bottom of the page involved--if you have personally created the material in it and own the copyright. But I do not see any such website cited. If the web site contains material published elsewhere, you must own the copyright of the material, not merely have permission to use it for the web site.
  2. for a scoring manual (I suppose you mean the table for the stages), if it has been published elsewhere, you need to either own the copyright, or the publisher must license it under GFDL or public domain, see below.
  3. If you own the copyright, you place on this talk page a statement that you own the copyright and are donating it under GFDL, and you send an email from your official verifiable address to the Wikimedia Communications committee at the e-mail address "permissions-en AT wikimedia DOT org" saying so. They will then request confirmation.

If the publisher owns the copyright, and has given permission to you to make use of it for your own use only, or even for republication, you may not use it on Wikipedia, because anything on Wikipedia must be licensed to the entire world under GFDL for reuse and republication for any purpose whatsoever, including commercial use. The publisher can grant a license for this--they almost never do. If they do, you will have to prove it. Their permission must specifically acknowledge that they release it under GFDL or an equivalent CC-attribution license. It is not enough that they give permission to use it in Wikipedia, because of the reuse permitted for our material. DGG (talk) 15:31, 14 September 2007 (UTC)

If this is not done promptly and documented, the article will be listed for deletion. Permission for WP to publish it is not enough.
Further, there is the material reproduced from "Introduction to the Model of Hierarchical Complexity" by M. L. Commons, in the Behavioral Development Bulletin, 13, 1-6 (http://www.behavioral-development-bulletin.com/), copyright 2007 Martha Pelaez, reproduced with permission of the publisher. The same thing applies. DGG (talk) 04:31, 15 October 2007 (UTC)

I removed the "Copyright permissions" notice from the article because, as noted above, all text in Wikipedia is released under the Creative Commons Attribution-ShareAlike License. Biogeographist (talk) 04:30, 3 October 2016 (UTC)Reply

Self-contextualizing the MHC


Is the Model of Hierarchical Complexity, by its own definitions, a metasystem or a paradigm? It would be a potentially useful example to add to one of the higher levels that have none listed at present. As I see it, either: Model of Hierarchical Complexity = metasystem (level 12) -> mathematical psychology = paradigm (level 13) -> psychology = field (level 14), or: Model of Hierarchical Complexity = paradigm (level 13) -> mathematical psychology = field (level 14). — Preceding unsigned comment added by 98.221.121.179 (talk) 05:50, 30 May 2011 (UTC)

Requested move

The following discussion is an archived discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.

The result of the move request was: page moved. Vegaswikian (talk) 19:37, 10 January 2012 (UTC)



Model of Hierarchical Complexity → Model of hierarchical complexity

Per WP:MOSCAPS ("Wikipedia avoids unnecessary capitalization") and WP:TITLE, this is a generic, common term, not a proprietary or commercial term, so the article title should be downcased. In addition, WP:MOSCAPS says that a compound item should not be upper-cased just because it is abbreviated with caps. Lowercase will match the formatting of related article titles. Tony (talk) 04:45, 3 January 2012 (UTC)

The above discussion is preserved as an archive of a requested move. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.

Deleted Section from Article (2016 April 18)


The following section, titled "Michael Commons's Model of Hierarchical Complexity Patent", was deleted.

I could see no good reason why these patents were in the article. If there is a good reason please feel free to return. 213.215.180.10 (talk) 14:09, 18 April 2016 (UTC)

Michael Commons's Model of Hierarchical Complexity Patent

Wiki Education Foundation-supported course assignment


  This article was the subject of a Wiki Education Foundation-supported course assignment, between 4 January 2021 and 14 April 2021. Further details are available on the course page. Student editor(s): Elayna Detiege.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 04:21, 17 January 2022 (UTC)

Intelligent control with hierarchical stacked neural networks


Patent number: 9129218

Type: Grant

Filed: July 18, 2014

Issued: September 8, 2015

Inventors: Michael Lamport Commons, Mitzi Sturgeon White

Invention Summary


This invention relates to the use of hierarchical stacked neural networks that develop new tasks and learn by processing information in a manner modeled on the way cognitive development unfolds in the human brain, applied to identifying atypical messages, for example spam messages in email and similar services. Neural networks are useful for constructing systems that learn and make complex decisions in the same way the brain does.

This invention applies models of the ordered stages that the brain moves through during development, which allow it to execute increasingly complex tasks at higher stages, to the task of identifying atypical messages, such as email spam. In this process, the actions performed at a given stage of development are built by ordering, altering and combining the tasks executed at the preceding stage. Because of this process, more complicated tasks can be executed at each stage of development than at the preceding one.
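
For readers unfamiliar with the architecture, here is a minimal sketch in Python (using NumPy) of the stacking idea described above, in which each level acts on the output of the level below it. The class names, layer sizes, and activation choice are illustrative assumptions, not details taken from the patent.

    # Hypothetical sketch of a hierarchical stacked network in which each level
    # consumes the output of the level below it, as described in the summary above.
    import numpy as np

    class Level:
        """One neural network in the stack: a single dense layer with a nonlinearity."""
        def __init__(self, n_in, n_out, rng):
            self.w = rng.normal(0.0, 0.1, size=(n_in, n_out))
            self.b = np.zeros(n_out)

        def forward(self, x):
            return np.tanh(x @ self.w + self.b)

    class HierarchicalStack:
        """Levels are ordered non-arbitrarily: level k's input is level k-1's output."""
        def __init__(self, sizes, seed=0):
            rng = np.random.default_rng(seed)
            self.levels = [Level(a, b, rng) for a, b in zip(sizes[:-1], sizes[1:])]

        def forward(self, x):
            outputs = []
            for level in self.levels:
                x = level.forward(x)      # higher levels act on lower-level results
                outputs.append(x)
            return outputs                # per-level outputs, lowest to highest

    # Example: a message encoded as a 32-dimensional feature vector passes
    # through three stacked levels of decreasing width.
    stack = HierarchicalStack([32, 16, 8, 2])
    features = np.random.default_rng(1).random(32)
    per_level = stack.forward(features)
    print([o.shape for o in per_level])   # [(16,), (8,), (2,)]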

Implications

It is an object of the present invention to provide hierarchical stacked neural networks that overcome the limitations of the neural networks of the prior art. It is another object of the present invention to provide linked but architecturally distinct hierarchical stacked neural networks that simulate the brain's capacity to organize lower-order actions hierarchically by combining, ordering, and transforming the actions to produce new, more complex higher-stage actions.

Another aim of the invention is to provide hierarchical stacked neural networks that are ordered in a non-arbitrary way, so that tasks executed by neural networks at a higher level are the result of a concatenation of tasks executed by lower-level networks in the hierarchy. In addition, the tasks executed by a neural network in the stacked hierarchy are a result of amalgamating, ordering, and altering tasks executed by the neural network that precedes it at a lower level in the stacked hierarchy. Furthermore, another aim of the invention is that neural networks at higher levels in the hierarchy execute more complex actions and tasks than the neural networks that precede them at a lower level in the hierarchy.

Intelligent control with hierarchical stacked neural networks


Patent number: 9053431

Type: Grant

Filed: July 2, 2014

Issued: June 9, 2015

Inventor: Michael Lamport Commons

This is a system and method of identifying an abnormally deviant message. An ordered set of words within the message is recognized. The set of words observed within the message is associated with a set of anticipated words, the set of anticipated words having semantic characteristics. A set of grammatical structures exhibited in the message is recognized, based on the ordered set of words and the semantic characteristics of the corresponding set of anticipated words. A cognitive noise vector, consisting of a quantitative measure of the deviation between the grammatical structures exhibited in the message and an expected measure of grammatical structures for a message of that type, is then derived. The cognitive noise vector may be processed by higher levels of the neural network and/or an outer processor.
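
The "cognitive noise vector" can be pictured roughly as in the Python sketch below, under the assumption that the observed and anticipated grammatical structures can each be summarized as numeric feature vectors; the particular features and the example profile are hypothetical, chosen only for illustration.

    # Hypothetical illustration of a cognitive noise vector: the element-wise
    # deviation between grammatical-structure features observed in a message and
    # the features expected for messages of that type.
    import numpy as np

    def grammar_features(tokens):
        """Toy grammatical-structure summary: counts of a few hand-picked cues,
        normalized by message length. Real feature sets would be far richer."""
        n = max(len(tokens), 1)
        return np.array([
            sum(t.istitle() for t in tokens) / n,        # capitalized words
            sum(t.isdigit() for t in tokens) / n,        # numeric tokens
            sum(t.endswith('.') for t in tokens) / n,    # sentence-final tokens
            sum(len(t) > 10 for t in tokens) / n,        # unusually long tokens
        ])

    def cognitive_noise(message, expected_profile):
        observed = grammar_features(message.split())
        return observed - expected_profile               # the noise vector itself

    # Expected profile for "ordinary" messages (assumed values for the sketch).
    expected = np.array([0.15, 0.02, 0.08, 0.05])
    noise = cognitive_noise("WIN a FREE PRIZE nowwwwwwwwww 1000000 dollars", expected)
    print(noise, np.linalg.norm(noise))  # a large norm suggests an atypical message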

Implications

An aim of this invention is to provide hierarchical stacked neural networks that are ordered in a non-arbitrary way, so that tasks executed by neural networks at a higher level are the result of a concatenation of tasks executed by lower-level networks in the hierarchy. In other words, lower-level neural networks send output that is useful as input to the higher levels.

This invention provides an architecture of hierarchically linked neural networks, created for spam filtering, stacked one on top of the other. Every neural network in the hierarchical stack keeps track not only of the data it can glean from the input, as in prior-art neural networks, but also concentrates on "cognitive noise" and develops an error vector, or a similar means of determining the degree of imperfection in the information transmitted.

In this invention, higher-level neural networks interact with lower level neural networks in the hierarchical stacked neural network. The higher-level neural networks responds to the lower-level neural networks to calibrate connection weights, thus improving the precision of the tasks executed at the lower levels. The higher-level neural networks can also demand that additional information be fed to the lowest neural network in the stacked hierarchy.
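
One possible reading of "calibrating connection weights" is a higher level sending a correction signal that nudges a lower level's weights, as in the toy Python sketch below. The feedback rule shown is an assumption made for illustration, not the patented mechanism.

    # Hypothetical sketch of top-down calibration: a higher level returns a
    # feedback signal that a lower level uses to nudge its connection weights.
    import numpy as np

    class FeedbackLevel:
        def __init__(self, n_in, n_out, lr=0.01, seed=0):
            rng = np.random.default_rng(seed)
            self.w = rng.normal(0.0, 0.1, size=(n_in, n_out))
            self.lr = lr
            self.last_in = None

        def forward(self, x):
            self.last_in = x
            return np.tanh(x @ self.w)

        def calibrate(self, feedback):
            """Adjust weights using a gradient-like correction sent from above."""
            grad = np.outer(self.last_in, feedback)
            self.w -= self.lr * grad

    lower = FeedbackLevel(8, 4, seed=1)
    higher = FeedbackLevel(4, 2, seed=2)

    x = np.random.default_rng(3).random(8)
    h = lower.forward(x)
    y = higher.forward(h)

    # The higher level judges the lower level's output imprecise and sends a
    # correction signal back down (here simply the sign of its own input).
    lower.calibrate(feedback=np.sign(h))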

Intelligent control with hierarchical stacked neural networks


Patent number: 9015093

Type: Grant

Filed: October 25, 2011

Issued: April 21, 2015

Inventor: Michael Lamport Commons

This is a method of processing information which involves: receiving a message; processing the message with a trained artificial neural network based processor having at least one set of outputs that represent information in a non-arbitrary organization of actions, based on the architecture of the processor and its training; representing as a noise vector at least one data pattern in the message that is represented incompletely in the non-arbitrary organization of actions; analyzing the noise vector separately from the trained artificial neural network; searching at least one database; and producing an output that depends on the analysis and the search.

The present invention relates to the field of cognitive neural networks, and to hierarchical stacked neural networks configured to imitate human intelligence.
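
Read as a processing pipeline, the claim above could be organized roughly as in the following Python sketch; every component here (the stand-in classifier, the lossy reconstruction, the blocklist database) is an assumption made for illustration, not the patented implementation.

    # Hypothetical pipeline mirroring the claim's steps: classify with a trained
    # network, collect what the network could not represent as a noise vector,
    # analyze that noise separately, consult a database, and combine the results.
    import numpy as np

    def trained_network(features):
        """Stand-in for a trained classifier: returns class scores and a
        reconstruction of the input from its internal representation."""
        w = np.ones((len(features), 2)) * 0.1
        scores = features @ w
        reconstruction = np.clip(features, 0.0, 0.5)   # deliberately lossy
        return scores, reconstruction

    def process_message(features, blocklist_db, token_ids):
        scores, reconstruction = trained_network(features)
        noise_vector = features - reconstruction            # incompletely represented patterns
        noise_level = float(np.linalg.norm(noise_vector))   # analyzed apart from the network
        db_hits = sum(1 for t in token_ids if t in blocklist_db)  # search at least one database
        # The output depends on the analysis and the search, as in the claim.
        return {"scores": scores.tolist(), "noise": noise_level, "db_hits": db_hits}

    features = np.array([0.2, 0.9, 0.1, 0.8])
    print(process_message(features, blocklist_db={42, 99}, token_ids=[7, 42, 13]))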

Implications

One goal of the invention is to provide linked but architecturally distinguishable hierarchical stacked neural networks that emulate the capacity of the brain to rearrange lower-order actions hierarchically by combining, ordering, and changing the tasks to produce new, highly complex higher-stage actions. The lower levels of neural networks complete simpler tasks than the higher levels.

Furthermore, this invention provides hierarchical stacked neural networks that are ordered in a non-arbitrary manner, so that tasks executed by neural networks at a higher level are the consequence of a concatenation of tasks executed by lower-level networks in the hierarchy. In other words, lower-level neural networks provide output that is useful as input to the higher levels.

Intelligent control with hierarchical stacked neural networks


Patent number: 8788441

Type: Grant

Filed: November 3, 2009

Issued: July 22, 2014

Inventors: Michael Lamport Commons, Mitzi Sturgeon White

It is a continuation of the previous patent.

Summary and Implications


The goal of the invention is to provide hierarchical stacked neural networks that overcome the limitations of the neural networks of the prior art. Another aim of the invention is to provide linked but distinguishable hierarchical stacked neural networks that imitate the brain's capacity to arrange lower-order actions hierarchically by combining, ordering, and changing the tasks to develop more complex higher-stage tasks.

Another aim is to provide hierarchical stacked neural networks that are ordered in a non-arbitrary way, so that actions executed by neural networks at a higher level are the result of a concatenation of tasks executed by lower-level networks in the hierarchy. In addition, the tasks executed by a neural network in the stacked hierarchy are a result of amalgamating, ordering, and altering tasks executed by the neural network that precedes it at a lower level in the stacked hierarchy.

Furthermore, neural networks at higher levels in the hierarchy execute more complex tasks than the neural networks that precede them at a lower level in the hierarchy.

This invention provides an architecture of hierarchically linked, distinguishable neural networks stacked one on top of the other. Every neural network in the hierarchical stack uses the neuron-based methodology of prior-art neural networks. The tasks that every neural network executes, and the order in which they execute them, are based on human cognitive development.

Intelligent control with hierarchical stacked neural networks


Patent number: 8775341

Type: Grant

Filed: October 25, 2011

Issued: July 8, 2014

Inventor: Michael Lamport Commons


It is a structure and method of identifying an abnormal message. An ordered set of words within the message is identified. The set of words observed within the message is associated with a corresponding set of anticipated words, the set of anticipated words having semantic characteristics. A set of grammatical structures exhibited in the message is identified, based on the ordered set of words and the semantic characteristics of the corresponding set of anticipated words. A cognitive noise vector, encompassing a quantitative measure of the deviation between the grammatical structures exhibited in the message and an anticipated measure of grammatical structures for a message of that type, is then derived. The cognitive noise vector may be processed by higher levels of the neural network and/or an outer processor.

Implications


In this invention, lower-level neural networks interact with higher-level neural networks in the hierarchical stacked neural network. The higher-level neural networks respond to the lower-level neural networks to regulate coupling weights, thereby boosting the precision of the tasks executed at the lower levels. The higher-level neural networks can also demand that more information be fed to the lowest neural network in the stacked hierarchy.

Another aim of this invention is to deliver linked but architecturally distinguishable hierarchical stacked neural networks that imitate the brain's capacity to organize lower-order actions hierarchically by amalgamating, ordering, and altering the tasks to develop complex higher-stage actions. As a result, lower levels of neural networks complete easier tasks than higher levels. For example, in spam filtering, lower levels would concentrate on identifying text as text, distinguishing text into letters, and arranging text into strings of letters, while higher-level neural networks would identify and understand words, and still higher levels would identify a surplus of poorly structured words or sentences.

Furthermore, another goal of the invention is to give hierarchical stacked neural networks that are ordered in a non-arbitrary manner, so that tasks executed by neural networks at a higher level are the result of a coupling of tasks executed by lower-level networks in the hierarchy. In other words, lower-level neural networks can give output that is useful as input to the higher levels.
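
A highly simplified Python sketch of that staged spam-filtering decomposition follows, with each "level" reduced to an ordinary function. The staging and the crude gibberish heuristic at the top level are assumptions made for illustration only.

    # Hypothetical staging of the spam-filtering example: lower levels handle raw
    # text and letters, middle levels assemble words, higher levels flag an excess
    # of poorly structured words.
    def level_1_extract_text(raw):
        """Lowest level: accept the input as text."""
        return str(raw)

    def level_2_letters(text):
        """Split text into individual characters (letters, digits, punctuation)."""
        return list(text)

    def level_3_words(chars):
        """Arrange characters back into strings of letters (words)."""
        return "".join(chars).split()

    def level_4_flag_gibberish(words, vowels="aeiou"):
        """Higher level: flag an excess of poorly structured words, crudely
        defined here as long words containing no vowels."""
        bad = [w for w in words if len(w) > 4 and not any(c in vowels for c in w.lower())]
        return len(bad) / max(len(words), 1)

    message = "Grts frnd plz clck xyzqwrtpl link now"
    score = level_4_flag_gibberish(level_3_words(level_2_letters(level_1_extract_text(message))))
    print(f"gibberish ratio: {score:.2f}")  # a higher ratio suggests a spam-like message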

Intelligent control with hierarchical stacked neural networks


Patent number: 7613663

Type: Grant

Filed: December 18, 2006

Issued: November 3, 2009

Inventors: Michael Lamport Commons, Mitzi Sturgeon White

Summary and Implications


The goal of the invention is to provide hierarchical stacked neural networks that overcome the limitations of the neural networks of the prior art. Another goal is to provide associated but architecturally different hierarchical stacked neural networks that imitate the brain's capacity to arrange lower-order actions hierarchically by incorporating, ordering, and altering the tasks to develop new, more complex higher-stage actions.

This invention also provides hierarchical stacked neural networks that are ordered in a non-arbitrary manner, so that tasks executed by neural networks at a higher level are the consequence of a coupling of actions executed by lower-level networks in the hierarchy. Another aim is that the tasks executed by a neural network in the stacked hierarchy are a result of amalgamating, ordering, and altering tasks executed by the neural network that precedes it at a lower level in the stacked hierarchy. It is another aim of the model that neural networks at higher levels in the hierarchy execute more complex tasks than the neural networks that precede them at a lower level in the hierarchy.

Intelligent control with hierarchical stacked neural networks


Patent number: 7152051

Type: Grant

Filed: September 30, 2002

Issued: December 19, 2006

Inventors: Michael Lamport Commons, Mitzi Sturgeon White

Summary and Implications


The invention provides hierarchical stacked neural networks that overcome the limitations of the neural networks of the prior art. It also provides linked but architecturally distinct hierarchical stacked neural networks that imitate the brain's capacity to organize lower-order actions hierarchically by ordering, combining and altering the actions to develop more complex higher-stage tasks.

Moreover, the invention provides hierarchical stacked neural networks that are ordered in a non-arbitrary manner, so that tasks performed by neural networks at a higher level are the result of a concatenation of actions executed by lower-level networks in the hierarchy. Another aim of the invention is that the tasks executed by a neural network in the stacked hierarchy are a consequence of amalgamating, ordering, and altering tasks executed by the neural network that precedes it at a lower level in the stacked hierarchy. In addition, the neural networks at higher levels in the hierarchy execute more complex actions than the neural networks that precede them at a lower level in the hierarchy.[1]

References

  1. ^ "Patents by Inventor Michael Lamport Commons - Justia Patents Database". patents.justia.com. Retrieved 2015-09-21.

Miscellany and extensions


M. Commons actually almost built A Model for Heterarchical Complexity (not only of Hierarchical Complexity); it almost counts levels up to Inter-Paradigm and Inter-Disciplinary and more than that. — Preceding unsigned comment added by Buguldey (talk • contribs) 04:38, 26 July 2016 (UTC)

Removed writing task and unpublished reference


I removed the "writing" task from Model of hierarchical complexity § List of examples because the only research cited is from an unpublished manuscript from 1985 that I could not find online either in Google or in other databases such as WorldCat. Please feel free to add the "writing" task to the list of examples again as long as a verifiable source is provided (preferably published in a scholarly journal with reputable peer review). The removed reference was: Commons, M.L., & De Vos, I.B. (1985). How researchers help writers. Unpublished manuscript available from Commons@tiac.net. Biogeographist (talk) 15:20, 5 October 2016 (UTC)Reply


https://en.wikipedia.org/wiki/Mathematical_psychology

Wiki Education assignment: Adult Development Fall 2023


  This article was the subject of a Wiki Education Foundation-supported course assignment, between 11 September 2023 and 11 December 2023. Further details are available on the course page. Student editor(s): Tsuki2023 (article contribs).

— Assignment last updated by Tsuki2023 (talk) 21:35, 6 November 2023 (UTC)