Commonsense knowledge (artificial intelligence)

In artificial intelligence research, commonsense knowledge consists of facts about the everyday world, such as "Lemons are sour" or "Cows say moo", that all humans are expected to know. Capturing and representing such knowledge in machines is currently an unsolved problem in artificial general intelligence. The first AI program to address commonsense knowledge was Advice Taker, proposed by John McCarthy in 1959.[1]

Commonsense knowledge can underpin a commonsense reasoning process that attempts inferences such as "You might bake a cake because you want people to eat the cake." A natural language processing component can be attached to the commonsense knowledge base to let the knowledge base attempt to answer questions about the world.[2] Commonsense knowledge also helps to solve problems in the face of incomplete information. Using widely held beliefs about everyday objects, AI systems make commonsense or default assumptions about the unknown, similar to the way people do. Such defaults are expressed in an AI system or in English as "Normally P holds", "Usually P", or "Typically P, so assume P". For example, given the fact "Tweety is a bird" and the commonly held belief "typically birds fly", and knowing nothing else about Tweety, we may reasonably assume that "Tweety can fly". As more knowledge of the world is discovered or learned over time, the AI system can revise its assumptions about Tweety using a truth maintenance process: if we later learn that "Tweety is a penguin", truth maintenance retracts the assumption, because we also know that "penguins do not fly".
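
The default-reasoning pattern described above can be sketched in a few lines of code. The following is a minimal, illustrative Python sketch; the class and rule structure are invented for this example and are not a real truth maintenance system. It shows a default assumption being made and then withdrawn when the penguin fact arrives.

  class DefaultReasoner:
      def __init__(self):
          self.facts = set()      # hard facts, e.g. "Tweety is a bird"
          self.defaults = []      # (precondition, exception, conclusion) rules

      def add_fact(self, fact):
          self.facts.add(fact)

      def add_default(self, precondition, exception, conclusion):
          # "If the precondition holds and the exception is not known, assume the conclusion."
          self.defaults.append((precondition, exception, conclusion))

      def beliefs(self):
          # Recompute assumptions from scratch on every call; a real truth
          # maintenance system would instead record justifications and
          # retract only the conclusions affected by new information.
          derived = set(self.facts)
          for precondition, exception, conclusion in self.defaults:
              if precondition in derived and exception not in derived:
                  derived.add(conclusion)
          return derived

  reasoner = DefaultReasoner()
  reasoner.add_fact("Tweety is a bird")
  reasoner.add_default("Tweety is a bird", "Tweety is a penguin", "Tweety can fly")
  print("Tweety can fly" in reasoner.beliefs())   # True: default assumption holds

  reasoner.add_fact("Tweety is a penguin")        # new information arrives
  print("Tweety can fly" in reasoner.beliefs())   # False: the assumption is withdrawn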

Commonsense reasoning

Commonsense reasoning simulates the human ability to use commonsense knowledge to make presumptions about the type and essence of ordinary situations encountered every day, and to revise those presumptions when new information comes to light. This includes reasoning about time, about missing or incomplete information, and about cause and effect. The ability to explain cause and effect is an important aspect of explainable AI. Truth maintenance algorithms automatically provide an explanation facility because they create elaborate records of presumptions. Compared with humans, all existing computer programs that attempt human-level AI perform extremely poorly on modern "commonsense reasoning" benchmark tests such as the Winograd Schema Challenge.[3] The problem of attaining human-level competency at "commonsense knowledge" tasks is considered to probably be "AI-complete" (that is, solving it would require the ability to synthesize a fully human-level intelligence),[4][5] although some oppose this notion and believe compassionate intelligence is also required for human-level AI.[6] Commonsense reasoning has been applied successfully in more limited domains such as natural language processing[7][8] and automated diagnosis[9] or analysis.[10]
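
To illustrate the kind of benchmark item involved, the sketch below encodes the well-known "trophy and suitcase" Winograd schema as a small Python data structure. The field names are chosen for this example and are not the challenge's official format.

  schema = {
      "sentence": "The trophy doesn't fit in the brown suitcase because it is too {adj}.",
      "pronoun": "it",
      "candidates": ["the trophy", "the suitcase"],
      # Swapping a single word flips the correct referent, which is what makes
      # the problem depend on commonsense knowledge about sizes and containers
      # rather than on surface statistics.
      "answers": {"big": "the trophy", "small": "the suitcase"},
  }

  for adj, referent in schema["answers"].items():
      print(schema["sentence"].format(adj=adj), "->", "'it' refers to", referent)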

Commonsense knowledge base construction

Compiling comprehensive knowledge bases of commonsense assertions (CSKBs) is a long-standing challenge in AI research. After early expert-driven efforts such as Cyc and WordNet, significant advances came from the crowdsourced Open Mind Common Sense project, which led to the ConceptNet knowledge base. Several approaches have attempted to automate CSKB construction, most notably via text mining (WebChild, Quasimodo, TransOMCS, Ascent) and via extraction from pre-trained language models (AutoTOMIC). These resources are significantly larger than ConceptNet, though automated construction generally makes them of somewhat lower quality. Challenges also remain in the representation of commonsense knowledge: most CSKB projects follow a triple data model, which is not necessarily well suited to more complex natural language assertions. A notable exception is GenericsKB, which applies no further normalization to sentences but retains them in full.
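
The triple data model mentioned above can be sketched as follows. The Assertion type and the assertions themselves are illustrative examples for this sketch, not entries copied from any of the resources named in this section.

  from typing import NamedTuple

  class Assertion(NamedTuple):
      subject: str
      relation: str
      obj: str

  # ConceptNet-style triples: each commonsense assertion is normalized into
  # a (subject, relation, object) edge.
  triples = [
      Assertion("cake", "CreatedBy", "baking"),
      Assertion("cake", "ReceivesAction", "eaten"),
      Assertion("bake", "MotivatedByGoal", "eat"),
  ]

  # A more nuanced statement is hard to squeeze into a single triple;
  # a GenericsKB-style entry keeps the full sentence instead.
  generic_sentence = "Most birds can fly, but penguins and ostriches cannot."

  for t in triples:
      print(f"({t.subject}) -[{t.relation}]-> ({t.obj})")
  print(generic_sentence)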

Applications

Around 2013, MIT researchers developed BullySpace, an extension of the commonsense knowledge base ConceptNet, to catch taunting social media comments. BullySpace included over 200 semantic assertions based on stereotypes, to help the system infer that comments like "Put on a wig and lipstick and be who you really are" are more likely to be an insult if directed at a boy than at a girl.[11][12][13]
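
As an illustration of the general idea only (not BullySpace's actual data or method), a sketch might associate concepts mentioned in a comment with stereotyped associations and flag a mismatch with the comment's target; the word lists and scoring below are invented for this example.

  # Illustrative only: invented association lists and scoring.
  feminine_associated = {"wig", "lipstick", "dress", "makeup"}
  masculine_associated = {"beard", "football", "tuxedo"}

  def mismatch_score(comment_concepts, target_gender):
      # Count concepts stereotypically associated with the *other* gender;
      # a high count suggests the comment may be a gendered taunt.
      other = feminine_associated if target_gender == "boy" else masculine_associated
      return len(comment_concepts & other)

  concepts = {"wig", "lipstick"}
  print(mismatch_score(concepts, target_gender="boy"))   # 2 -> more likely a taunt
  print(mismatch_score(concepts, target_gender="girl"))  # 0 -> less likely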

ConceptNet has also been used by chatbots[14] and by computers that compose original fiction.[15] At Lawrence Livermore National Laboratory, commonsense knowledge was used in an intelligent software agent to detect violations of a comprehensive nuclear test ban treaty.[16]

Data

As an example, as of 2012 ConceptNet includes these 21 language-independent relations (a query sketch follows the list):[17]

  • IsA (An "RV" is a "vehicle")
  • UsedFor
  • HasA (A "rabbit" has a "tail")
  • CapableOf
  • Desires
  • CreatedBy ("cake" can be created by "baking")
  • PartOf
  • Causes
  • LocatedNear
  • AtLocation (Somewhere a "cook" can be at a "restaurant")
  • DefinedAs
  • SymbolOf (X represents Y)
  • ReceivesAction ("cake" can be "eaten")
  • HasPrerequisite (X cannot do Y unless A does B)
  • MotivatedByGoal (You would "bake" because you want to "eat")
  • CausesDesire ("baking" makes you want to "follow recipe")
  • MadeOf
  • HasFirstSubevent (The first thing required when you're doing X is for entity Y to do Z)
  • HasSubevent ("eat" has subevent "swallow")
  • HasLastSubevent
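
The relations above can be explored programmatically. The following is a hedged sketch that assumes the public ConceptNet 5 web API at api.conceptnet.io is available and that its /query endpoint accepts "start", "rel", and "limit" parameters, as documented for ConceptNet 5.5 and later; check the current API documentation before relying on it.

  import requests

  response = requests.get(
      "https://api.conceptnet.io/query",
      params={"start": "/c/en/cake", "rel": "/r/ReceivesAction", "limit": 5},
      timeout=10,
  )
  response.raise_for_status()

  for edge in response.json().get("edges", []):
      # Each edge is one (start, relation, end) assertion with a confidence weight.
      print(edge["start"]["label"], edge["rel"]["label"], edge["end"]["label"],
            round(edge.get("weight", 0.0), 2))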

Commonsense knowledge bases

See also

References

  1. ^ "PROGRAMS WITH COMMON SENSE". www-formal.stanford.edu. Retrieved 2018-04-11.
  2. ^ Liu, Hugo, and Push Singh. "ConceptNet—a practical commonsense reasoning tool-kit." BT technology journal 22.4 (2004): 211-226.
  3. ^ "The Winograd Schema Challenge". cs.nyu.edu. Retrieved 9 January 2018.
  4. ^ Yampolskiy, Roman V. "AI-Complete, AI-Hard, or AI-Easy – Classification of Problems in AI." MAICS 2012. CiteSeerX 10.1.1.232.913.
  5. ^ Andrich, C., Novosel, L., and Hrnkas, B. (2009). Common Sense Knowledge. Information Search and Retrieval, 2009.
  6. ^ Mason, Cindy (2010-09-27). "The Logical Road to Human Level AI Leads to a Dead End". 2010 Fourth IEEE International Conference on Self-Adaptive and Self-Organizing Systems Workshop. Vol. 32. pp. 57–95. doi:10.1109/SASOW.2010.63. ISBN 978-1-4244-8684-7. S2CID 13030524.
  7. ^ Chutima, Boonthum-Denecke (2011-12-31). Cross-Disciplinary Advances in Applied Natural Language Processing: Issues and Approaches: Issues and Approaches. IGI Global. ISBN 978-1-61350-448-2.
  8. ^ Davis, Ernest (2014-07-10). Representations of Commonsense Knowledge. Morgan Kaufmann. ISBN 978-1-4832-2113-7.
  9. ^ Reiter, Raymond (1987-04-01). "A theory of diagnosis from first principles". Artificial Intelligence. 32 (1): 57–95. CiteSeerX 10.1.1.170.9236. doi:10.1016/0004-3702(87)90062-2. ISSN 0004-3702. S2CID 15629917.
  10. ^ Gallimore, R.J.; Jennings, N.R.; Lamba, H.S.; Mason, C.L.; Orenstein, B.J. (1999). "Cooperating agents for 3-D scientific data interpretation" (PDF). IEEE Transactions on Systems, Man, and Cybernetics - Part C: Applications and Reviews. 29: 110–126. doi:10.1109/5326.740674.
  11. ^ Bazelon, Emily (March 2013). "How to Stop the Bullies". The Atlantic. Retrieved 9 January 2018.
  12. ^ Dinakar, Karthik; Jones, Birago; Havasi, Catherine; Lieberman, Henry; Picard, Rosalind (1 September 2012). "Common Sense Reasoning for Detection, Prevention, and Mitigation of Cyberbullying". ACM Transactions on Interactive Intelligent Systems. 2 (3): 1–30. CiteSeerX 10.1.1.693.8065. doi:10.1145/2362394.2362400. S2CID 5560081.
  13. ^ "AI systems could fight cyberbullying". New Scientist. 27 June 2012. Retrieved 9 January 2018.
  14. ^ "I Believe That It Will Become Perfectly Normal for People to Have Sex With Robots". Newsweek. 23 October 2014. Retrieved 9 January 2018.
  15. ^ "Told by a robot: Fiction by storytelling computers". New Scientist. 24 October 2014. Retrieved 9 January 2018.
  16. ^ Mason, C.L. (1995). "An intelligent assistant for nuclear test ban treaty verification". IEEE Expert. 10 (6): 42–49. doi:10.1109/64.483116.
  17. ^ Speer, Robert, and Catherine Havasi. "Representing General Relational Knowledge in ConceptNet 5." LREC. 2012.