<!--{{{-->
<link rel='alternate' type='application/rss+xml' title='RSS' href='index.xml' />
<!--}}}-->
Background: #fff
Foreground: #000
PrimaryPale: #8cf
PrimaryLight: #18f
PrimaryMid: #04b
PrimaryDark: #014
SecondaryPale: #ffc
SecondaryLight: #fe8
SecondaryMid: #db4
SecondaryDark: #841
TertiaryPale: #eee
TertiaryLight: #ccc
TertiaryMid: #999
TertiaryDark: #666
Error: #f88
/*{{{*/
body {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}

a {color:[[ColorPalette::PrimaryMid]];}
a:hover {background-color:[[ColorPalette::PrimaryMid]]; color:[[ColorPalette::Background]];}
a img {border:0;}

h1,h2,h3,h4,h5,h6 {color:[[ColorPalette::SecondaryDark]]; background:transparent;}
h1 {border-bottom:2px solid [[ColorPalette::TertiaryLight]];}
h2,h3 {border-bottom:1px solid [[ColorPalette::TertiaryLight]];}

.button {color:[[ColorPalette::PrimaryDark]]; border:1px solid [[ColorPalette::Background]];}
.button:hover {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::SecondaryLight]]; border-color:[[ColorPalette::SecondaryMid]];}
.button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::SecondaryDark]];}

.header {background:[[ColorPalette::PrimaryMid]];}
.headerShadow {color:[[ColorPalette::Foreground]];}
.headerShadow a {font-weight:normal; color:[[ColorPalette::Foreground]];}
.headerForeground {color:[[ColorPalette::Background]];}
.headerForeground a {font-weight:normal; color:[[ColorPalette::PrimaryPale]];}

.tabSelected{color:[[ColorPalette::PrimaryDark]];
	background:[[ColorPalette::TertiaryPale]];
	border-left:1px solid [[ColorPalette::TertiaryLight]];
	border-top:1px solid [[ColorPalette::TertiaryLight]];
	border-right:1px solid [[ColorPalette::TertiaryLight]];
}
.tabUnselected {color:[[ColorPalette::Background]]; background:[[ColorPalette::TertiaryMid]];}
.tabContents {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::TertiaryPale]]; border:1px solid [[ColorPalette::TertiaryLight]];}
.tabContents .button {border:0;}

#sidebar {}
#sidebarOptions input {border:1px solid [[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel {background:[[ColorPalette::PrimaryPale]];}
#sidebarOptions .sliderPanel a {border:none;color:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:hover {color:[[ColorPalette::Background]]; background:[[ColorPalette::PrimaryMid]];}
#sidebarOptions .sliderPanel a:active {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::Background]];}

.wizard {background:[[ColorPalette::PrimaryPale]]; border:1px solid [[ColorPalette::PrimaryMid]];}
.wizard h1 {color:[[ColorPalette::PrimaryDark]]; border:none;}
.wizard h2 {color:[[ColorPalette::Foreground]]; border:none;}
.wizardStep {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];
	border:1px solid [[ColorPalette::PrimaryMid]];}
.wizardStep.wizardStepDone {background:[[ColorPalette::TertiaryLight]];}
.wizardFooter {background:[[ColorPalette::PrimaryPale]];}
.wizardFooter .status {background:[[ColorPalette::PrimaryDark]]; color:[[ColorPalette::Background]];}
.wizard .button {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryLight]]; border: 1px solid;
	border-color:[[ColorPalette::SecondaryPale]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryDark]] [[ColorPalette::SecondaryPale]];}
.wizard .button:hover {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Background]];}
.wizard .button:active {color:[[ColorPalette::Background]]; background:[[ColorPalette::Foreground]]; border: 1px solid;
	border-color:[[ColorPalette::PrimaryDark]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryPale]] [[ColorPalette::PrimaryDark]];}

.wizard .notChanged {background:transparent;}
.wizard .changedLocally {background:#80ff80;}
.wizard .changedServer {background:#8080ff;}
.wizard .changedBoth {background:#ff8080;}
.wizard .notFound {background:#ffff80;}
.wizard .putToServer {background:#ff80ff;}
.wizard .gotFromServer {background:#80ffff;}

#messageArea {border:1px solid [[ColorPalette::SecondaryMid]]; background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]];}
#messageArea .button {color:[[ColorPalette::PrimaryMid]]; background:[[ColorPalette::SecondaryPale]]; border:none;}

.popupTiddler {background:[[ColorPalette::TertiaryPale]]; border:2px solid [[ColorPalette::TertiaryMid]];}

.popup {background:[[ColorPalette::TertiaryPale]]; color:[[ColorPalette::TertiaryDark]]; border-left:1px solid [[ColorPalette::TertiaryMid]]; border-top:1px solid [[ColorPalette::TertiaryMid]]; border-right:2px solid [[ColorPalette::TertiaryDark]]; border-bottom:2px solid [[ColorPalette::TertiaryDark]];}
.popup hr {color:[[ColorPalette::PrimaryDark]]; background:[[ColorPalette::PrimaryDark]]; border-bottom:1px;}
.popup li.disabled {color:[[ColorPalette::TertiaryMid]];}
.popup li a, .popup li a:visited {color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border: none;}
.popup li a:active {background:[[ColorPalette::SecondaryPale]]; color:[[ColorPalette::Foreground]]; border: none;}
.popupHighlight {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
.listBreak div {border-bottom:1px solid [[ColorPalette::TertiaryDark]];}

.tiddler .defaultCommand {font-weight:bold;}

.shadow .title {color:[[ColorPalette::TertiaryDark]];}

.title {color:[[ColorPalette::SecondaryDark]];}
.subtitle {color:[[ColorPalette::TertiaryDark]];}

.toolbar {color:[[ColorPalette::PrimaryMid]];}
.toolbar a {color:[[ColorPalette::TertiaryLight]];}
.selected .toolbar a {color:[[ColorPalette::TertiaryMid]];}
.selected .toolbar a:hover {color:[[ColorPalette::Foreground]];}

.tagging, .tagged {border:1px solid [[ColorPalette::TertiaryPale]]; background-color:[[ColorPalette::TertiaryPale]];}
.selected .tagging, .selected .tagged {background-color:[[ColorPalette::TertiaryLight]]; border:1px solid [[ColorPalette::TertiaryMid]];}
.tagging .listTitle, .tagged .listTitle {color:[[ColorPalette::PrimaryDark]];}
.tagging .button, .tagged .button {border:none;}

.footer {color:[[ColorPalette::TertiaryLight]];}
.selected .footer {color:[[ColorPalette::TertiaryMid]];}

.sparkline {background:[[ColorPalette::PrimaryPale]]; border:0;}
.sparktick {background:[[ColorPalette::PrimaryDark]];}

.error, .errorButton {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::Error]];}
.warning {color:[[ColorPalette::Foreground]]; background:[[ColorPalette::SecondaryPale]];}
.lowlight {background:[[ColorPalette::TertiaryLight]];}

.zoomer {background:none; color:[[ColorPalette::TertiaryMid]]; border:3px solid [[ColorPalette::TertiaryMid]];}

.imageLink, #displayArea .imageLink {background:transparent;}

.annotation {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; border:2px solid [[ColorPalette::SecondaryMid]];}

.viewer .listTitle {list-style-type:none; margin-left:-2em;}
.viewer .button {border:1px solid [[ColorPalette::SecondaryMid]];}
.viewer blockquote {border-left:3px solid [[ColorPalette::TertiaryDark]];}

.viewer table, table.twtable {border:2px solid [[ColorPalette::TertiaryDark]];}
.viewer th, .viewer thead td, .twtable th, .twtable thead td {background:[[ColorPalette::SecondaryMid]]; border:1px solid [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::Background]];}
.viewer td, .viewer tr, .twtable td, .twtable tr {border:1px solid [[ColorPalette::TertiaryDark]];}

.viewer pre {border:1px solid [[ColorPalette::SecondaryLight]]; background:[[ColorPalette::SecondaryPale]];}
.viewer code {color:[[ColorPalette::SecondaryDark]];}
.viewer hr {border:0; border-top:dashed 1px [[ColorPalette::TertiaryDark]]; color:[[ColorPalette::TertiaryDark]];}

.highlight, .marked {background:[[ColorPalette::SecondaryLight]];}

.editor input {border:1px solid [[ColorPalette::PrimaryMid]];}
.editor textarea {border:1px solid [[ColorPalette::PrimaryMid]]; width:100%;}
.editorFooter {color:[[ColorPalette::TertiaryMid]];}
.readOnly {background:[[ColorPalette::TertiaryPale]];}

#backstageArea {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::TertiaryMid]];}
#backstageArea a {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstageArea a:hover {background:[[ColorPalette::SecondaryLight]]; color:[[ColorPalette::Foreground]]; }
#backstageArea a.backstageSelTab {background:[[ColorPalette::Background]]; color:[[ColorPalette::Foreground]];}
#backstageButton a {background:none; color:[[ColorPalette::Background]]; border:none;}
#backstageButton a:hover {background:[[ColorPalette::Foreground]]; color:[[ColorPalette::Background]]; border:none;}
#backstagePanel {background:[[ColorPalette::Background]]; border-color: [[ColorPalette::Background]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]] [[ColorPalette::TertiaryDark]];}
.backstagePanelFooter .button {border:none; color:[[ColorPalette::Background]];}
.backstagePanelFooter .button:hover {color:[[ColorPalette::Foreground]];}
#backstageCloak {background:[[ColorPalette::Foreground]]; opacity:0.6; filter:'alpha(opacity=60)';}
/*}}}*/
/*{{{*/
* html .tiddler {height:1%;}

body {font-size:.75em; font-family:arial,helvetica; margin:0; padding:0;}

h1,h2,h3,h4,h5,h6 {font-weight:bold; text-decoration:none;}
h1,h2,h3 {padding-bottom:1px; margin-top:1.2em;margin-bottom:0.3em;}
h4,h5,h6 {margin-top:1em;}
h1 {font-size:1.35em;}
h2 {font-size:1.25em;}
h3 {font-size:1.1em;}
h4 {font-size:1em;}
h5 {font-size:.9em;}

hr {height:1px;}

a {text-decoration:none;}

dt {font-weight:bold;}

ol {list-style-type:decimal;}
ol ol {list-style-type:lower-alpha;}
ol ol ol {list-style-type:lower-roman;}
ol ol ol ol {list-style-type:decimal;}
ol ol ol ol ol {list-style-type:lower-alpha;}
ol ol ol ol ol ol {list-style-type:lower-roman;}
ol ol ol ol ol ol ol {list-style-type:decimal;}

.txtOptionInput {width:11em;}

#contentWrapper .chkOptionInput {border:0;}

.externalLink {text-decoration:underline;}

.indent {margin-left:3em;}
.outdent {margin-left:3em; text-indent:-3em;}
code.escaped {white-space:nowrap;}

.tiddlyLinkExisting {font-weight:bold;}
.tiddlyLinkNonExisting {font-style:italic;}

/* the 'a' is required for IE, otherwise it renders the whole tiddler in bold */
a.tiddlyLinkNonExisting.shadow {font-weight:bold;}

#mainMenu .tiddlyLinkExisting,
	#mainMenu .tiddlyLinkNonExisting,
	#sidebarTabs .tiddlyLinkNonExisting {font-weight:normal; font-style:normal;}
#sidebarTabs .tiddlyLinkExisting {font-weight:bold; font-style:normal;}

.header {position:relative;}
.header a:hover {background:transparent;}
.headerShadow {position:relative; padding:4.5em 0 1em 1em; left:-1px; top:-1px;}
.headerForeground {position:absolute; padding:4.5em 0 1em 1em; left:0px; top:0px;}

.siteTitle {font-size:3em;}
.siteSubtitle {font-size:1.2em;}

#mainMenu {position:absolute; left:0; width:10em; text-align:right; line-height:1.6em; padding:1.5em 0.5em 0.5em 0.5em; font-size:1.1em;}

#sidebar {position:absolute; right:3px; width:16em; font-size:.9em;}
#sidebarOptions {padding-top:0.3em;}
#sidebarOptions a {margin:0 0.2em; padding:0.2em 0.3em; display:block;}
#sidebarOptions input {margin:0.4em 0.5em;}
#sidebarOptions .sliderPanel {margin-left:1em; padding:0.5em; font-size:.85em;}
#sidebarOptions .sliderPanel a {font-weight:bold; display:inline; padding:0;}
#sidebarOptions .sliderPanel input {margin:0 0 0.3em 0;}
#sidebarTabs .tabContents {width:15em; overflow:hidden;}

.wizard {padding:0.1em 1em 0 2em;}
.wizard h1 {font-size:2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizard h2 {font-size:1.2em; font-weight:bold; background:none; padding:0; margin:0.4em 0 0.2em;}
.wizardStep {padding:1em 1em 1em 1em;}
.wizard .button {margin:0.5em 0 0; font-size:1.2em;}
.wizardFooter {padding:0.8em 0.4em 0.8em 0;}
.wizardFooter .status {padding:0 0.4em; margin-left:1em;}
.wizard .button {padding:0.1em 0.2em;}

#messageArea {position:fixed; top:2em; right:0; margin:0.5em; padding:0.5em; z-index:2000; _position:absolute;}
.messageToolbar {display:block; text-align:right; padding:0.2em;}
#messageArea a {text-decoration:underline;}

.tiddlerPopupButton {padding:0.2em;}
.popupTiddler {position: absolute; z-index:300; padding:1em; margin:0;}

.popup {position:absolute; z-index:300; font-size:.9em; padding:0; list-style:none; margin:0;}
.popup .popupMessage {padding:0.4em;}
.popup hr {display:block; height:1px; width:auto; padding:0; margin:0.2em 0;}
.popup li.disabled {padding:0.4em;}
.popup li a {display:block; padding:0.4em; font-weight:normal; cursor:pointer;}
.listBreak {font-size:1px; line-height:1px;}
.listBreak div {margin:2px 0;}

.tabset {padding:1em 0 0 0.5em;}
.tab {margin:0 0 0 0.25em; padding:2px;}
.tabContents {padding:0.5em;}
.tabContents ul, .tabContents ol {margin:0; padding:0;}
.txtMainTab .tabContents li {list-style:none;}
.tabContents li.listLink { margin-left:.75em;}

#contentWrapper {display:block;}
#splashScreen {display:none;}

#displayArea {margin:1em 17em 0 14em;}

.toolbar {text-align:right; font-size:.9em;}

.tiddler {padding:1em 1em 0;}

.missing .viewer,.missing .title {font-style:italic;}

.title {font-size:1.6em; font-weight:bold;}

.missing .subtitle {display:none;}
.subtitle {font-size:1.1em;}

.tiddler .button {padding:0.2em 0.4em;}

.tagging {margin:0.5em 0.5em 0.5em 0; float:left; display:none;}
.isTag .tagging {display:block;}
.tagged {margin:0.5em; float:right;}
.tagging, .tagged {font-size:0.9em; padding:0.25em;}
.tagging ul, .tagged ul {list-style:none; margin:0.25em; padding:0;}
.tagClear {clear:both;}

.footer {font-size:.9em;}
.footer li {display:inline;}

.annotation {padding:0.5em; margin:0.5em;}

* html .viewer pre {width:99%; padding:0 0 1em 0;}
.viewer {line-height:1.4em; padding-top:0.5em;}
.viewer .button {margin:0 0.25em; padding:0 0.25em;}
.viewer blockquote {line-height:1.5em; padding-left:0.8em;margin-left:2.5em;}
.viewer ul, .viewer ol {margin-left:0.5em; padding-left:1.5em;}

.viewer table, table.twtable {border-collapse:collapse; margin:0.8em 1.0em;}
.viewer th, .viewer td, .viewer tr,.viewer caption,.twtable th, .twtable td, .twtable tr,.twtable caption {padding:3px;}
table.listView {font-size:0.85em; margin:0.8em 1.0em;}
table.listView th, table.listView td, table.listView tr {padding:0px 3px 0px 3px;}

.viewer pre {padding:0.5em; margin-left:0.5em; font-size:1.2em; line-height:1.4em; overflow:auto;}
.viewer code {font-size:1.2em; line-height:1.4em;}

.editor {font-size:1.1em;}
.editor input, .editor textarea {display:block; width:100%; font:inherit;}
.editorFooter {padding:0.25em 0; font-size:.9em;}
.editorFooter .button {padding-top:0px; padding-bottom:0px;}

.fieldsetFix {border:0; padding:0; margin:1px 0px;}

.sparkline {line-height:1em;}
.sparktick {outline:0;}

.zoomer {font-size:1.1em; position:absolute; overflow:hidden;}
.zoomer div {padding:1em;}

* html #backstage {width:99%;}
* html #backstageArea {width:99%;}
#backstageArea {display:none; position:relative; overflow: hidden; z-index:150; padding:0.3em 0.5em;}
#backstageToolbar {position:relative;}
#backstageArea a {font-weight:bold; margin-left:0.5em; padding:0.3em 0.5em;}
#backstageButton {display:none; position:absolute; z-index:175; top:0; right:0;}
#backstageButton a {padding:0.1em 0.4em; margin:0.1em;}
#backstage {position:relative; width:100%; z-index:50;}
#backstagePanel {display:none; z-index:100; position:absolute; width:90%; margin-left:3em; padding:1em;}
.backstagePanelFooter {padding-top:0.2em; float:right;}
.backstagePanelFooter a {padding:0.2em 0.4em;}
#backstageCloak {display:none; z-index:20; position:absolute; width:100%; height:100px;}

.whenBackstage {display:none;}
.backstageVisible .whenBackstage {display:block;}
/*}}}*/
/***
StyleSheet for use when a translation requires any css style changes.
This StyleSheet can be used directly by languages such as Chinese, Japanese and Korean which need larger font sizes.
***/
/*{{{*/
body {font-size:0.8em;}
#sidebarOptions {font-size:1.05em;}
#sidebarOptions a {font-style:normal;}
#sidebarOptions .sliderPanel {font-size:0.95em;}
.subtitle {font-size:0.8em;}
.viewer table.listView {font-size:0.95em;}
/*}}}*/
/*{{{*/
@media print {
#mainMenu, #sidebar, #messageArea, .toolbar, #backstageButton, #backstageArea {display: none !important;}
#displayArea {margin: 1em 1em 0em;}
noscript {display:none;} /* Fixes a feature in Firefox 1.5.0.2 where print preview displays the noscript content */
}
/*}}}*/
<!--{{{-->
<div class='header' macro='gradient vert [[ColorPalette::PrimaryLight]] [[ColorPalette::PrimaryMid]]'>
<div class='headerShadow'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
<div class='headerForeground'>
<span class='siteTitle' refresh='content' tiddler='SiteTitle'></span>&nbsp;
<span class='siteSubtitle' refresh='content' tiddler='SiteSubtitle'></span>
</div>
</div>
<div id='mainMenu' refresh='content' tiddler='MainMenu'></div>
<div id='sidebar'>
<div id='sidebarOptions' refresh='content' tiddler='SideBarOptions'></div>
<div id='sidebarTabs' refresh='content' force='true' tiddler='SideBarTabs'></div>
</div>
<div id='displayArea'>
<div id='messageArea'></div>
<div id='tiddlerDisplay'></div>
</div>
<!--}}}-->
<!--{{{-->
<div class='toolbar' macro='toolbar [[ToolbarCommands::ViewToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='subtitle'><span macro='view modifier link'></span>, <span macro='view modified date'></span> (<span macro='message views.wikified.createdPrompt'></span> <span macro='view created date'></span>)</div>
<div class='tagging' macro='tagging'></div>
<div class='tagged' macro='tags'></div>
<div class='viewer' macro='view text wikified'></div>
<div class='tagClear'></div>
<!--}}}-->
<!--{{{-->
<div class='toolbar' macro='toolbar [[ToolbarCommands::EditToolbar]]'></div>
<div class='title' macro='view title'></div>
<div class='editor' macro='edit title'></div>
<div macro='annotations'></div>
<div class='editor' macro='edit text'></div>
<div class='editor' macro='edit tags'></div><div class='editorFooter'><span macro='message views.editor.tagPrompt'></span><span macro='tagChooser excludeLists'></span></div>
<!--}}}-->
To get started with this blank [[TiddlyWiki]], you'll need to modify the following tiddlers:
* [[SiteTitle]] & [[SiteSubtitle]]: The title and subtitle of the site, as shown above (after saving, they will also appear in the browser title bar)
* [[MainMenu]]: The menu (usually on the left)
* [[DefaultTiddlers]]: Contains the names of the tiddlers that you want to appear when the TiddlyWiki is opened
You'll also need to enter your username for signing your edits: <<option txtUserName>>
These [[InterfaceOptions]] for customising [[TiddlyWiki]] are saved in your browser

Your username for signing your edits. Write it as a [[WikiWord]] (eg [[JoeBloggs]])

<<option txtUserName>>
<<option chkSaveBackups>> [[SaveBackups]]
<<option chkAutoSave>> [[AutoSave]]
<<option chkRegExpSearch>> [[RegExpSearch]]
<<option chkCaseSensitiveSearch>> [[CaseSensitiveSearch]]
<<option chkAnimate>> [[EnableAnimations]]

----
Also see [[AdvancedOptions]]
<<importTiddlers>>
''#83''
The notion of cyclical time is crucial to Native Americans. For them, sacred
events recur again and again in a pattern that repeats the cycles of the celestial sphere.
Time does not progress along a linear path but moves in a cyclical manner so as to provide an enclosure within which events occur.
Past, present, and future all exist together because the cycles turn continually upon themselves.
The progression of time along a developmental path was a concept foreign to Native Americans until the Europeans forced them into history.


From [[a list|http://www.brainpickings.org/index.php/2012/08/10/10-rules-for-students-and-teachers-john-cage-corita-kent/]] titled ''Some Rules for Students and Teachers'', attributed to John Cage. The list, however, originates from celebrated artist and educator Sister Corita Kent.

!!!!A couple of "preliminaries"
* The [[list on Brainpickings|http://www.brainpickings.org/index.php/2012/08/10/10-rules-for-students-and-teachers-john-cage-corita-kent/]] has 10 rules, but I won't mention all of them; some of them I strongly resonate with (see below :) and some of them I don't (so go see [[Brainpickings|http://www.brainpickings.org/index.php/2012/08/10/10-rules-for-students-and-teachers-john-cage-corita-kent/]] :).
** Therefore, I like the title "__Some__ Rules...", and I suspect -- hence the closing sentence of the Hints at the end of the rules -- that there should be new rules next week.
* The list is titled "for teachers and students", but aren't we all BOTH at ANY given point in time, provided we pay attention and are present (as we teach, we learn)? See, for example, my modification of rule #3.
** Therefore, I like the extension "Rules for Life" (regardless of whether your "official role" is teacher or student).

So, 
** ''RULE TWO'': General duties of a student — pull everything out of your teacher; pull everything out of your fellow students.
*** Sage advice. I think it should actually be "your teachers" (plural)

** ''RULE THREE'': General duties of a teacher — pull everything out of your students.
*** and fellow teachers (my extension)

** ''RULE FOUR'': Consider everything an experiment.
*** and (unlike the conventional scientific belief/definition of an experiment) a unique, non-repeatable one at that.
*** There is something both liberating AND grave about this view: on one hand, it's an experiment, so it's not about "success or failure" (i.e., like in science, if it's a "good experiment" you learn from it either way -- see rule #6). On the other hand, since it's a one-time experiment/experience (as is ALL of life itself), it had better be a "good one"!

** ''RULE SIX'': Nothing is a mistake. There’s no win and no fail, there’s only make.
*** Tied to rule #4 - if it's a "good experiment" you learn from it either way. 
*** If by "make" it means "learn and improve", then I wholeheartedly agree.

** ''RULE SEVEN'': The only rule is work. If you work it will lead to something. It’s the people who do all of the work all of the time who eventually catch on to things.
*** As in rule #6: if "work" means "learn and improve", then I wholeheartedly agree.

** ''RULE EIGHT'': Don’t try to create and analyze at the same time. They’re different processes.
*** Excellent and useful observation, as well as advice for effective and deep learning. I found this to be especially relevant when thinking about the differences between the emphases [[Doug Lemov|http://teachlikeachampion.com/blog/]] puts on effective teacher/classroom practices, and [[Deborah Ball|http://www-personal.umich.edu/~dball/presentations/index.html]]'s:
**** In a nutshell, my view is that Lemov is focusing on 49 techniques and "tools to create a positive and lifelong impact on student learning" (from his book "Teach Like a Champion"), whereas Ball talks about the potent and essential combination (for learning) of pedagogy (and techniques) AND content/subject-matter knowledge.
***** My perception of Lemov's approach (formed by reading his book and watching the accompanying video clips of teachers in classrooms) is one with traces of "militantism" and "student indoctrination" (so as not to say "robotization" - see for example his SLANT technique: Sit up, Listen, Ask and answer questions, Nod your head, Track the speaker)
***** Ball, in my mind, is "wiser" in realizing the deeper truth, and in advocating the need for teachers to practice and improve on both fronts: the tools and techniques (i.e., pedagogy, which are different from Lemov's) and content/domain knowledge, as well as the thinking, analyzing, discussing, debating, experimenting, and communicating processes and procedures (which we want to encourage and teach).

** ''RULE NINE'': Be happy whenever you can manage it. Enjoy yourself. It’s lighter than you think.
*** What can I say? A Zen sparkle coming from a nun (Sister Corita Kent) - regardless of where you find wisdom, it's always refreshing :)
*** Echoing rules #4 and #6 - experiment; be playful; enjoy it. But also, make things count; make them "good ones".

** (my addition:) ''RULE TEN'': Ask questions. They expand your mind and universe. And as John O’Donohue said: [[questions are like lanterns|John O’Donohue - questions]].

** ''HINTS'': Always be around. Come or go to everything. Always go to classes. Read anything you can get your hands on. Look at movies carefully, often. Save everything -- it might come in handy later. There should be new rules next week.
* This book is about seven pervasive myths, or mindsets, that undermine the process of learning and how we can avoid their debilitating effects in a wide variety of settings. 
>1. The basics must be learned so well that they become second nature. 
>2. Paying attention means staying focused on one thing at a time. 
>3. Delaying gratification is important. 
>4. Rote memorization is necessary in education. 
>5. Forgetting is a problem. 
>6. Intelligence is knowing "what's out there." 
>7. There are right and wrong answers.

These myths undermine true learning. They stifle our creativity, silence our questions, and diminish our self-esteem.
* The ideas offered here to loosen the grip of these debilitating myths are very simple. Their fundamental simplicity points to yet another inhibiting myth: that only a massive overhaul can give us a more effective educational system. 
* This book takes more of a "why-to" than a "how-to" approach. Nevertheless, the examples and experiments described implicitly suggest ways to learn mindfully. These are intended to guide our choices and to be adapted to each unique context, rather than to be followed mindlessly. 
* A mindful approach to any activity has three characteristics: 
** the continuous creation of new categories; 
** openness to new information; 
** and an implicit awareness of more than one perspective.
* Mindlessness, in contrast, is characterized by an entrapment in old categories; by automatic behavior that precludes attending to new signals; and by action that operates from a single perspective.
From a poem by Marvin Levine quoted at the beginning of the [[book|Authentic Happiness]]:
|borderless|k
|//Escher got it right.//<br>//Men step down and yet rise up,//<br>//the hand is drawn by the hand it draws,//<br>//and a woman is poised//<br>//on her very own shoulders.//|[img[Escher Ascend|./resources/escher_ascend_1.jpg][./resources/escher_ascend.jpg]]|[img[Escher Hands|./resources/escher_hands_1.jpg][./resources/escher_hands.jpg]]|[img[Escher Woman|./resources/escher_woman_1.jpg][./resources/escher_woman.jpg]]|
|borderless|k

This strongly highlighted for me the ability we have to very differently interpret the same phenomena:
* one could, depressingly, interpret Escher's men as going nowhere, or at best deluding themselves that they are going somewhere (up), but actually going down.
** Levine in the poem (decided to?) interprets it as "rising up".
* one could look at the hand drawing a hand, and be confused, baffled, "stuck" in the impossibility or paradox; something like this cannot possibly be happening in reality.
** Levine points out something that could be interpreted as a very real experience where [[we (or one hand) shape/define/influence others (or the other hand), which in turn shape/define/influence us, in an on-going spiral|resources/escher_hands.jpg]]. Looked at even more positively, this spiral is a continuous improvement/refinement loop, where adding details/capabilities to each drawn hand makes it more and more capable, and so on.

Another case of looking at things in different ways is the [["glass half full vs. glass half empty"|Given a glass with water up to the mid level, people from the West will either say it's half full or they'll say it's half empty. People from the East will say it's both half full and half empty. They are all right.]] example.
In the first chapter of his book //On Intelligence// (published in 2004), Jeff Hawkins writes that he wanted to enroll at MIT and study AI, in order to build intelligent machines. He found that his approach was totally different from the one prevalent at the AI Lab:

I decided to apply to graduate school at the Massachusetts Institute of Technology, which was famous for its research on artificial intelligence and was conveniently located down the road. It seemed a great match. I had extensive training in computer science: "check." I had a desire to build intelligent machines: "check." I wanted to first study brains to see how they worked: "uh, that's a problem." This last goal, wanting to understand how brains worked, was a nonstarter in the eyes of the scientists at the MIT artificial intelligence lab.

It was like running into a brick wall. MIT was the mother-ship of artificial intelligence. At the time I applied to MIT, it was home to dozens of bright people who were enthralled with the idea of programming computers to produce intelligent behavior. To these scientists, vision, language, robotics, and mathematics were just programming problems. Computers could do anything a brain could do, and more, so why constrain your thinking by the biological messiness of nature's computer? Studying brains would limit your thinking. They believed it was better to study the ultimate limits of computation as best expressed in digital computers. Their holy grail was to write computer programs that would first match and then surpass human abilities. They took an ends-justify-the-means approach; [[they were not interested in how real brains worked. Some took pride in ignoring neurobiology|Technology solutions - Learning from Nature]].

This struck me as precisely the wrong way to tackle the problem. Intuitively I felt that the artificial intelligence approach would not only fail to create programs that do what humans can do, it would not teach us what intelligence is. Computers and brains are built on completely different principles. One is programmed, one is self-learning. One has to be perfect to work at all, one is naturally flexible and tolerant of failures. One has a central processor, one has no centralized control. The list of differences goes on and on. The biggest reason I thought computers would not be intelligent is that I understood how computers worked, down to the level of the transistor physics, and this knowledge gave me a strong intuitive sense that brains and computers were fundamentally different. I couldn't prove it, but I knew it as much as one can intuitively know anything. Ultimately, I reasoned, AI might lead to useful products, but it wasn't going to build truly intelligent machines.

In contrast, I wanted to understand real intelligence and perception, to study brain physiology and anatomy, to meet Francis Crick's challenge and come up with a broad framework for how the brain worked. I set my sights in particular on the neocortex: the most recently developed part of the mammalian brain and the seat of intelligence. After understanding how the neocortex worked, then we could go about building intelligent machines, but not before.

Unfortunately, the professors and students I met at MIT did not share my interests. They didn't believe that you needed to study real brains to understand intelligence and build intelligent machines. They told me so. In 1981 the university rejected my application.

!!!The wrong approach to AI?
According to Jeff Hawkins, after many years of effort, unfulfilled promises, and no unqualified successes, AI (Artificial Intelligence) started to lose its luster...
...there are still people who believe that AI's problems can be solved with faster computers, but most scientists think the entire endeavor was flawed (pg. 18)

Hawkins also says (pg. 21): the ultimate defensive argument of AI is that computers could, in theory, simulate the entire brain. A computer could model all the neurons and their connections, and if it did there would be nothing to distinguish the "intelligence" of the brain from the "intelligence" of the computer simulation. Although this may be impossible in practice, I agree with it. But AI researchers don't simulate brains, and their programs are not intelligent. You can't simulate a brain without first understanding what it does.

This echoes something that Ray Kurzweil believes in and wrote about in his book The Singularity is Near (2005).

Update (as of June 2013): In [[a recent interview for The Atlantic magazine|http://www.theatlantic.com/magazine/archive/2013/07/the-intuition-machine/309392/]], Jeff [[Hawkins talks|http://www.theatlantic.com/technology/archive/2013/06/what-the-digital-brains-of-the-future-might-be-like/277048/]] about his new company - [[Grok|https://www.groksolutions.com/]] - working on self-learning software that continuously builds models based on an input stream of data, with the goal of making predictions/intuitions (which is the hallmark of intelligence according to Hawkins). So he is still trying to crack the intelligence nut, in a way that makes sense to me, starting with Numenta and evolving into Grok^^1^^.

[img[Artificial Brains|resources/Hawkins-Atlantic-AI-small.jpg][resources/Hawkins-Atlantic-AI.jpg]] [2]

----
^^1^^ Author Robert A. Heinlein coined the term in his best-selling 1961 book Stranger in a Strange Land. In Heinlein's view, grokking is the intermingling of intelligence that necessarily affects both the observer and the observed. From the novel:
>Grok means to understand so thoroughly that the observer becomes a part of the observed—to merge, blend, intermarry, lose identity in group experience. It means almost everything that we mean by religion, philosophy, and science—and it means as little to us (because of our Earthling assumptions) as color means to a blind man.
> - [[from Wikipedia|http://en.wikipedia.org/wiki/Grok]]

[2] From The Atlantic Magazine
!!!Definition of Technium
As a word, technium is akin to the German word technik, which similarly encapsulates the grand totality of machines, methods, and engineering processes. Technium is also related to the French noun technique, used by French philosophers to mean the society and culture of tools. But neither term captures what I consider to be the essential quality of the technium: this idea of a self-reinforcing system of creation. At some point in its evolution, our system of tools and machines and ideas became so dense in feedback loops and complex interactions that it spawned a bit of independence. It began to exercise some autonomy. 

!!!Autonomy of Technium
At first, this notion of technological independence is very hard to grasp. We are taught to think of technology first as a pile of hardware and secondly as inert stuff that is wholly dependent on us humans. In this view, technology is only what we make. Without us, it ceases to be. It does only what we want. And that's what I believed, too, when I set out on this quest. But the more I looked at the whole system of technological invention, the more powerful and self-generating I realized it was.
Its sustaining network of self-reinforcing processes and parts have given it a noticeable measure of autonomy. It may have once been as simple as an old computer program, merely parroting what we told it, but now it is more like a very complex organism that often follows its own urges. 

An organism or system does not need to be wholly independent to exhibit some degree of autonomy. Like an infant of any species, it can acquire increasing degrees of independence, starting from a speck of autonomy. 
So how do you detect autonomy? Well, we might say that an entity is autonomous if it displays any of these traits: self-repair, self-defense, self-maintenance (securing energy, disposing of waste), self-control of goals, self-improvement. The common element in all these characteristics is of course the emergence, at some level, of a self. In the technium we don't have any examples of a system that displays all these traits, but we have plenty of examples that display some of them. Autonomous airplane drones can stay aloft for hours. But they don't repair themselves. Communication networks can repair themselves. But they don't reproduce themselves. We have self-reproducing computer viruses, but they don't improve themselves.

!!!!! So my question (as opposed to Kevin Kelly's): Is this anthropomorphizing?
While Kevin Kelly is aware of the possibility (or the accusation) that he may be anthropomorphizing, he concludes that he is not. But in my mind he is blurring the line by mixing keen observations that are evident/obvious/justified (e.g., that technology is becoming more and more part of our environment, and therefore a "force of nature", or at least a "force to be reckoned with") with statements/observations that are not obvious at all, or at least could be interpreted in a different way. For example, on page 16 he describes a robot being developed in Silicon Valley that seems to have human characteristics, like "its ability to find a power outlet and plug itself in" (thus showing "hunger"). Or: "When you take hold of one of its arms, it is neither rigid at the joints nor limp. It responds in a supple manner, with a gentle give, as if the limb were alive. It's an uncanny sensation. Yet the robot's grip is as deliberate as yours." Or: "If you stand in front of a PR2 while it is hungry, it won't hurt you. It will backtrack and go around the building any way it can to find a plug. It's not conscious, but standing between it and its power outlet, you can clearly feel its want."

I think that the author is not being careful enough in separating pure observation from human-like interpretation!
This reminds me of the hard-nosed, no-nonsense observation by the computer scientist ''Edsger Wybe Dijkstra'':
''The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.''

!!!The Technium is "talking"
When computer scientists dissect the massive rivers of traffic flowing through it (170 quadrillion chips wired together), they cannot account for the source of all the bits. Every now and then a bit is transmitted incorrectly, and while most of those mutations can be attributed to identifiable causes such as hacking, machine error, or line damage, the researchers are left with a few percent that somehow changed themselves. In other words, a small fraction of what the technium communicates originates not from any of its known human-made nodes but from the system at large. The technium is whispering to itself.

!!!The Technium "wants"
With the technium, want does not mean thoughtful decisions. I don't believe the technium is conscious (at this point). Its mechanical wants are not carefully considered deliberations but rather tendencies. Leanings. Urges. Trajectories. The wants of technology are closer to needs, a compulsion toward something. Just like the unconscious drift of a sea cucumber as it seeks a mate. The millions of amplifying relationships and countless circuits of influence among parts push the whole technium in certain unconscious directions. 

!!!The Technium as a "force of nature"
The technium is now as great a force in our world as nature, and our response to the technium should be similar to our response to nature. We can't demand that technology obey us any more than we can demand that life obey us. Sometimes we should surrender to its lead and bask in its abundance, and sometimes we should try to bend its natural course to meet our own. We don't have to do everything that the technium demands, but we can learn to work with this force rather than against it. 
Seligman writes in this chapter about personality traits, feelings, and authentic fulfillment:
When well-being comes from engaging our strengths and virtues, our lives are imbued with authenticity. ''Feelings'' are states, momentary occurrences that need not be recurring features of personality. ''Traits'', in contrast to states, are either negative or positive ''characteristics'' that recur across time and different situations, and ''strengths and virtues'' are the positive characteristics that bring about good feeling and gratification. Traits are abiding dispositions whose exercise makes momentary feelings more likely. The negative trait of paranoia makes the momentary state of jealousy more likely, just as the positive trait of being humorous makes the state of laughing more likely.

This well-being and feeling of authenticity comes from engaging in activities that involve (in business-speak, leverage) our strengths and virtues. In the business world, it's sometimes called "playing to your strengths", and it has been found to yield strong positive business results (no wonder).
1. Suffering occurs.
2. The cause of suffering is craving.
3. The possibility for ending suffering exists.
4. The cessation of suffering can be attained through the Noble Eightfold Path.


''The First Noble Truth - The Truth of Suffering''
The First Noble Truth simply says that suffering occurs. It does not say, "Life is suffering." That suffering occurs perhaps does not seem a particularly profound statement. Suffering comes with being human. Pain is a part of the human condition.
In the context of the Four Noble Truths, we can distinguish between inevitable suffering and optional suffering. Optional suffering is created when we react to our experience.
The teaching of the Four Noble Truths does not promise relief from the inevitable suffering that arises out of being human. The suffering addressed by the Four Noble Truths is the suffering or stress that arises from the way we choose to relate to our experience.
[[A story (about arrows)|14 - Mindfulness Of Emotions]] illustrates the difference between inevitable pain and optional suffering.

The Buddha enumerated four kinds of clinging to help us understand our suffering and what we suffer about.
* grasping to spiritual practices and ethics
* grasping to views
* grasping to a sense of self
* grasping to sensual pleasure, which includes aversion to discomfort


''The Second Noble Truth - The Truth of the Cause of Suffering''
The Second Noble Truth states that what brings us off center, what causes our suffering, is craving (the Pali word literally means "thirst").
What causes suffering is desire (or aversion) that is driven, compulsive. Craving means both being driven toward experiences and objects, as well as feeling compelled to push them away.


''The Third Noble Truth - The Truth of the Cessation of Suffering''
The Third Noble Truth expresses the possibility of liberation, of the cessation of suffering. When we see our suffering and understand clearly how it arises out of craving, we know that freedom from suffering is possible when craving is released.
But we need to be careful:
We easily become attached to states such as calm, peace, joy, clarity, or radiant light, states that sometimes arise during meditation practice but which are not its goal. We may believe that we need to attain them if we are to realize the Third Noble Truth.
But if we remember non-clinging is the means to release, then we will be less inclined to cling to any state. Don't cling to your happiness. Don't cling to your sadness. Don't cling to any attainment.


''The Fourth Noble Truth - The Truth of the Path Leading to the Cessation of Suffering''
The Noble Eightfold Path gives us the steps that help us to create the conditions that make spiritual maturity possible. They are:
1. Right Understanding
2. Right Intention
3. Right Speech
4. Right Action
5. Right Livelihood
6. Right Effort
7. Right Mindfulness
8. Right Concentration
When Practice Makes Imperfect (as opposed to 'when practice makes //permanent//' :)
>* It is interesting to consider that emergencies may often be the result of actions taken in response to previous training rather than in response to present considerations.
>* When we approach a new skill, whether as adults or children, it is, by definition, a time when we know the least about it. Does it make sense to freeze our understanding of the skill before we try it out in different contexts and, at various stages, adjust it to our own strengths and experiences? Does it make sense to stick to what we first learned when that learning occurred when we were most naive?
>* Most of us are not taught our skills, whether academic, athletic or artistic, by the real experts. The rules we are given to practice are based on generally accepted truths about how to perform the task and not on our individual abilities. If we mindlessly practice these skills, we are not likely to surpass our teachers. Even if we are fortunate enough to be shown how to do something by a true expert, mindless practice keeps the activity from becoming our own.
>* If we learn the basics but do not over-learn them, we can vary as we change or as the situation changes. 
>* ... experts at anything become expert in part by varying the basics. The rest of us, taught not to question, take them for granted. 

__On the Value of Doubt__
>The key to this new way of teaching is based on an appreciation of both the conditional, or context-dependent, nature of the world and the value of uncertainty...
Langer is saying that learning by rote disables improvement/tweaking:
>we found that when people overlearn a task so that they can perform it by rote, the individual steps that make up the skill come together into larger and larger units. As a consequence, the smaller components of the activity are essentially lost, yet it is by adjusting and varying these pieces that we can improve our performance.


__On "sideways learning"__
>The standard two approaches to teaching new skills are top-down or bottom-up. The top-down method relies on discursive lecturing to instruct students. The bottom-up path relies on direct experience, repeated practice of the activity in a systematic way. 
>Sideways learning aims at maintaining a mindful state. As we saw, the concept of mindfulness revolves around certain psychological states that are really different versions of the same thing: (1) openness to novelty; (2) alertness to distinction; (3) sensitivity to different contexts; (4) implicit, if not explicit, awareness of multiple perspectives; and (5) orientation in the present. Each leads to the others and back to itself. Learning a subject or skill with an openness to novelty and actively noticing differences, contexts, and perspectives (sideways learning) makes us receptive to changes in an ongoing situation. In such a state of mind, basic skills and information guide our behavior in the present, rather than run it like a computer program.
In his book On Intelligence (published in 2004, pg. 25), Jeff Hawkins identifies [[three characteristics of intelligent systems|characteristics of intelligent systems]] that he strongly believes are missing in past and current implementations of AI in general, and of Neural Networks in particular.

Hawkins criticizes AI work (pg. 29), which he believes is misguided due to its focus on [["intelligent behavior"|On Intelligence]], strongly expressed in (influenced by?) the definition of the [[Turing Test|https://plato.stanford.edu/entries/turing-test/]]:
In my opinion, the most fundamental problem with most neural networks is a trait they share with AI programs. Both are fatally burdened by their focus on behavior. Whether they are calling these behaviors "answers," "patterns," or "outputs," both AI and neural networks assume intelligence lies in the behavior that a program or a neural network produces after processing a given input. The most important attribute of a computer program or a neural network is whether it gives the correct or desired output. As inspired by Alan Turing, intelligence equals behavior.

But intelligence is not just a matter of acting or behaving intelligently. Behavior is a manifestation of intelligence, but not the central characteristic or primary definition of being intelligent. A moment's reflection proves this: You can be intelligent just lying in the dark, thinking and understanding. Ignoring what goes on in your head and focusing instead on behavior has been a large impediment to understanding intelligence and building intelligent machines.

@@Hawkins' defining characteristic of intelligence is ''the ability to make predictions''@@.
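
To make this claim concrete for myself, here is a toy sketch of "prediction from a learned internal model of the input stream". It is my own illustration in Python, emphatically //not// Hawkins' actual cortical/HTM algorithm, and every name in it is made up for the example:
{{{
from collections import Counter, defaultdict

class TransitionPredictor:
    """Toy 'intelligence as prediction': learn first-order transition
    counts from a symbol stream, then guess the most likely next symbol."""

    def __init__(self):
        self.counts = defaultdict(Counter)  # prev symbol -> Counter of successors
        self.prev = None                    # current context (last symbol seen)

    def observe(self, symbol):
        # Update the internal model with the transition just experienced.
        if self.prev is not None:
            self.counts[self.prev][symbol] += 1
        self.prev = symbol

    def predict(self):
        # Most frequent successor of the current context, or None if unseen.
        options = self.counts.get(self.prev)
        return options.most_common(1)[0][0] if options else None

predictor = TransitionPredictor()
for s in "abcabcabc":
    predictor.observe(s)
print(predictor.predict())  # 'a' - after 'c' the stream always continued with 'a'
}}}
The point of the cartoon: the predictor can be "intelligent" about the stream while lying in the dark, producing no behavior at all; its prediction comes from a model built up by memory, which is exactly the contrast Hawkins draws with behavior-focused AI and neural networks.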
!!!Definition of Exotropy
This rising flow of sustainable difference is the inversion of entropy. For the sake of this narrative, call it exotropy: a turning outward. Exotropy is another word for the technical term negentropy, or negative entropy.
I prefer exotropy over negentropy because it is a positive term for an otherwise double-negative phrase meaning "the absence of the absence of order." Exotropy, in this tale, is far more uplifting than simply the subtraction of chaos. Exotropy can be thought of as a force in its own right that flings forward an unbroken sequence of unlikely existences.
From Peter Goodliffe's book //Becoming a Better Programmer//

|borderless|k
|[img[errors|./resources/10000 monkeys - errors 1.png][./resources/10000 monkeys - errors.png]]|[img[ease of use|./resources/10000 monkeys - ease of use 1.png][./resources/10000 monkeys - ease of use.png]]|[img[experience|./resources/10000 monkeys - experience 1.png][./resources/10000 monkeys - experience.png]]|
|[img[hell|./resources/10000 monkeys - hell 1.png][./resources/10000 monkeys - hell.png]]|[img[testing|./resources/10000 monkeys - testing 1.png][./resources/10000 monkeys - testing.png]]|[img[time|./resources/10000 monkeys - time 1.png][./resources/10000 monkeys - time.png]]|
|>| [img[the end|./resources/10000 monkeys - the end 1.png][./resources/10000 monkeys - the end.png]] |>|
|borderless|k

So what does technology want? Technology wants what we want: the same long list of merits we crave. When a technology has found its ideal role in the world, it becomes an active agent in increasing the options, choices, and possibilities of others. Our task is to encourage the development of each new invention toward this inherent good, to align it in the same direction that all life is headed. Our choice in the [[technium|01 - My Question]] (and it is a real and significant choice) is to steer our creations toward those versions, those manifestations, that maximize that technology's benefits, and to keep it from thwarting itself.
Our role as humans, at least for the time being, is to coax technology along the paths it naturally wants to go. 

!!!Inevitability of trajectories
Of course, long-term trends are not equivalent to inevitabilities. Some argue that these particular trends still are not "inevitable" in the future; at any moment a dark age could descend and reverse their course. That is a possible scenario. 
They are really only inevitable in the long term. These tendencies are not ordained to appear at a given time. Rather, these trajectories are like the pull of gravity on water. Water "wants" to leak out of the bottom of a dam. Its molecules are constantly seeking a way down and out, as if overcome with an obsessive urge. In a certain sense it is inevitable that someday the water will leak out even though it may be retained by the dam for centuries. 
Technology's imperative is not a tyrant ordering our lives in lockstep. Its inevitabilities are not scheduled prophecies. They are more like water behind a wall, an incredibly strong urge pent up and waiting to be released.
It may seem like I am painting a picture of a supernatural force, akin to a pantheistic spirit roaming the universe. But what I am outlining is almost the opposite. Like gravity, this force is embedded in the fabric of matter and energy. It follows the path of physics and obeys the ultimate law of entropy. The force that is waiting to erupt into the technologies of the technium was first pushed by [[exotropy|04 - The Rise of Exotropy]], built up by self-organization, and gradually thrown from the inert world into life, and from life into minds, and from minds into the creations of our minds. It is an observable force found in the intersection of information, matter, and energy, and it can be repeated and measured, though it has only recently been surveyed.
Related to the [[First Noble Truth|01 - The Four Noble Truths]] about suffering is ''the story of the arrows'':
The Buddha once asked a student, "If a person is struck by an arrow, is it painful?" The student replied, "It is." The Buddha then asked, "If the person is struck by a second arrow, is that even more painful?" The student replied again, "It is." The Buddha then explained, "In life, we cannot always control the first arrow. However, the second arrow is our reaction to the first. This second arrow is optional."
As [[Douglas Hofstadter]] (in his wonderful //Metamagical Themas// book <written in 1985, chapter 19, pg. 449>) says: To me, the thought that [[Lisp|Lisp]] itself might be "more conducive" to good AI [(Artificial Intelligence)] ideas than any other computer language is quite preposterous.

Hofstadter claims that this viewpoint originates from what is known as the [[Sapir-Whorf|http://en.wikipedia.org/wiki/Sapir-Whorf_Hypothesis]] hypothesis, which can be explicitly stated as: ''Language controls thought''. A milder version of this hypothesis would say: Language exerts a powerful influence upon thought.
Hofstadter goes on to say: In the case of computer languages, the //~Sapir-Whorf thesis// would have to be interpreted as asserting that programmers in language X can think only in terms that language X furnishes them, and no others. Therefore they are strapped in to certain ways of seeing the "world", and are prevented from seeing many ideas that programmers in language L can easily see. At least this is what //~Sapir-Whorf// would have you believe. I will have none of it!

On the other hand, [[Alan Perlis|http://en.wikipedia.org/wiki/Alan_Perlis]] (American Computer Scientist, 1922-1990) [[said|Alan Perlis]]: A language that doesn't affect the way you think about programming, is not worth knowing.


In a [[later (c. 2001) lecture, Hofstadter is softening his reaction/opinion|https://prelectur.stanford.edu/lecturers/hofstadter/analogy.html]] on the //~Sapir-Whorf thesis//:
>Since a sizable fraction of one’s personal repertoire of perceptual chunks is provided from without, by one’s language and culture, this means that inevitably language and culture exert powerful, even irresistible, channeling influences on how one frames events. (This position is related to the “meme’s-eye view” of the nature of thought, as put forth in numerous venues, most recently in [["The Meme Machine" by Susan Blackmore|https://www.susanblackmore.uk/the-meme-machine/synopsis/]]; see also [[Robert Wright's review of her book|https://archive.nytimes.com/www.nytimes.com/books/99/04/25/reviews/990425.25wrightt.html]])
>
>Consider, for instance, such words as “backlog,” “burnout,” “micromanaging,” and “underachiever,” all of which are commonplace in today’s America. I chose these particular words because I suspect that what they designate can be found not only here and now, but as well in distant cultures and epochs, quite in contrast to such culturally and temporally bound terms as “soap opera,” “mini-series,” “couch potato,” “news anchor,” “hit-and-run driver,” and so forth, which owe their existence to recent technological developments. 
>
>So consider the first set of words. We Americans living at the millennium’s cusp perceive backlogs of all sorts permeating our lives — but we do so because the word is there, warmly inviting us to see them. But back in, say, Johann Sebastian Bach’s day, were there backlogs — or more precisely, were backlogs perceived? For that matter, did Bach ever experience burnout? Well, most likely he did — but did he know that he did? Or did some of his Latin pupils strike him as being underachievers? Could he see this quality without being given the label?
>
>Or, moving further afield, do Australian aborigines resent it when their relatives micromanage their lives? Of course, I could have chosen hundreds of other terms that have arisen only recently in our century, yet that designate aspects of life that were always around to be perceived but, for one reason or another, aroused little interest, and hence were neglected or overlooked.
>
>My point is simple: we are prepared to see, and we see easily, things for which our language and culture hand us ready-made labels. When those labels are lacking, even though the phenomena may be all around us, we may quite easily fail to see them at all. The perceptual attractors that we each possess (some coming from without, some coming from within, some on the scale of mere words, some on a much grander scale) are the filters through which we scan and sort reality, and thereby they determine what we perceive on high and low levels.
>
>Although this sounds like an obvious tautology, that part of it that concerns words is in fact a nontrivial proposition, which, under the controversial banner of “~Sapir-Whorf hypothesis,” has been heatedly debated, and to a large extent rejected, over the course of the twentieth century. I myself was once most disdainful of this hypothesis, but over time came to realize how deeply human thought — even my own! — is channeled by habit and thus, in the last accounting, by the repertoire of mental chunks (i.e., perceptual attractors) that are available to the thinker. I now think that it is high time for the ~Sapir-Whorf hypothesis to be reinstated, at least in its milder forms.
Recently (and as early as the year 2000 or thereabouts :) the question of what 21st Century Learning skills are has been discussed often.

I came across [[this article by Nir Eyal|http://www.nirandfar.com/2016/08/tech-distractions-addictions.html]] pointing to a challenge most of our students (and some of us too? :) face: good productivity and focus habits vs. harmful patterns, distractions, and addictive behaviors.

The author came up with a simple classification chart (see below) to help recognize and address possible obstacles to being productive and focused.
[img[Product classification chart|resources/product_classification.png][resources/product_classification.png]]

While I recommend reading the full article, here is his "Take Away", in case you don't want to click on the [[link|http://www.nirandfar.com/2016/08/tech-distractions-addictions.html]] and be sucked into "The Internet Hole" (often a distraction :) at this point:

"For thousands of years, people have struggled with distractions that keep them from living the lives they imagine. Today, people find themselves attached to their mobile phones, but history shows us it’s only the latest in a long list of hindrances. A few decades ago, people complained about the mind-melting power of television. Before that it was arcade games, the telephone, the pinball machine, comic books, the radio, even the written word.

Not only is distraction here to stay, it will likely become harder to ignore as technology continues to make things even more engaging. However, that’s not necessarily a problem – it’s progress! We want products to improve, but we must also stay vigilant, asking whether “better” products bring out our better selves.

To ensure that technologies and products serve us, instead of us serving them, it’s useful to take a quick inventory of the products we use most (the list is probably in your browser history or home screen on your phone), classify these products, tackle each accordingly – and then get on with building the life we want."


I think that it all boils down to awareness and mindfulness: doing "wholesome" things and avoiding harmful/destructive behaviors.

May you have thoughtful, undistracted reading now and in the future.

The evolution of ideas in Math is fascinating. [[Complex numbers|http://www.math.uri.edu/~merino/spring06/mth562/ShortHistoryComplexNumbers2006.pdf]] (and [[Imaginary Numbers|Imaginary Numbers - by Paul J. Nahin]]) are a great example of a "significant development"^^1^^ on this path.

I agree with [[Israel Kleiner, who wrote|http://science.cmb.ac.lk/mathematics/wp-content/uploads/sites/9/2016/06/Thinking_the_Unthinkable__the_Story_of_Compleax-numbers.pdf]]^^2^^ that for a teacher, knowing the history of your subject (in this case Math, but definitely also true for Computer Science) is very important.
As [[George Polya|https://en.wikipedia.org/wiki/George_P%C3%B3lya]] said:
>How [the teacher] makes his point may be as important as the point he makes; he must personally feel it to be important.


So, //imagine// if you will, (ha!), a (relatively) simple math problem: ''find two numbers which divide 10 into two parts, and the product of which is 20.''
In other words: a + b = 10 and a * b = 20. (Substituting b = 10 - a gives the quadratic a^^2^^ - 10a + 20 = 0, whose roots are 5 ± √5.)

As they say, it's easy to show (and if you'd rather [[go to WolframAlpha and check|http://www.wolframalpha.com/input/?i=a%2Bb%3D10,+a*b%3D20]], go ahead :) that there are 2 "real solutions":

[img[Real equation solutions|resources/real equation small.png][http://www.wolframalpha.com/input/?i=a%2Bb%3D10,+a*b%3D20]]
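
If you'd rather check it offline than on WolframAlpha, here is a minimal Python sketch (my own illustration, not from any source cited here) that recovers the two real solutions via the quadratic formula:
{{{
import math

# a + b = 10 and a * b = 20, so a and b are the roots of x^2 - 10x + 20 = 0.
s, p = 10, 20                  # the required sum and product
disc = s * s - 4 * p           # discriminant: 100 - 80 = 20
a = (s + math.sqrt(disc)) / 2  # 5 + sqrt(5), about 7.236
b = (s - math.sqrt(disc)) / 2  # 5 - sqrt(5), about 2.764
print(a, b)                    # 7.2360679... 2.7639320...
print(a + b, a * b)            # 10.0 and (up to float rounding) 20.0
}}}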

As it turns out, the polymath [[Gerolamo Cardano|https://en.wikipedia.org/wiki/Gerolamo_Cardano]], in his [[Ars Magna|https://en.wikipedia.org/wiki/Ars_Magna_(Gerolamo_Cardano)]], explored a similar question, which for (historically understandable) reasons caused him "mental torture" (as [[the mathematician Edward Frenkel wrote|http://www.edwardfrenkel.com/einstein-summary.pdf]]). 

Cardano worked on the problem of how ''"To divide 10 in two parts, the product of which is 40".'' (So, simply replacing the 20 in the problem above with 40 here :)
And here is what he wrote:
>It is clear that this case is impossible. Nevertheless, we shall work thus:
>We divide 10 into two equal parts, making each 5. These we square, making 25.
>Subtract 40, if you will, from the 25 thus produced, as I showed you in the
>chapter on operations in the sixth book leaving a remainder of -15, the square root
>of which added to or subtracted from 5 gives parts the product of which is 40.
>
>These will be 5+√−15 and 5−√−15.
>
>Putting aside the mental tortures involved, multiply 5+√−15 and 5−√−15
>making 25−(−15) which is +15. Hence this product is 40.

And again, it's easy to show (and if you'd rather [[go to WolframAlpha and check|http://www.wolframalpha.com/input/?i=a%2Bb%3D10;+a*b%3D40]], go ahead :) that there are 2 "complex solutions":

[img[Complex equation solutions|resources/complex equation small.png][http://www.wolframalpha.com/input/?i=a%2Bb%3D10;+a*b%3D40]]
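
The same check in Python's complex arithmetic (again, a sketch of my own) shows how a few lines dissolve Cardano's "mental torture":
{{{
import cmath

# Cardano's version: a + b = 10 and a * b = 40.
# The two parts are 5 +/- sqrt(25 - 40) = 5 +/- sqrt(-15).
root = cmath.sqrt(25 - 40)  # sqrt(-15), about 3.873j
a, b = 5 + root, 5 - root
print(a, b)                 # (5+3.8729833...j) (5-3.8729833...j)
print(a + b, a * b)         # (10+0j) (40+0j) -- "hence this product is 40"
}}}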

!!!And the (possible) moral(s)/lessons of this (hi)story
* This is another example of a case of willingness to "make the mental leap", despite the "mental anguish", resulting in the amazing discovery of complex numbers, with [[phenomenal implications (and applications)|https://www.ukessays.com/essays/mathematics/application-of-complex-number-in-engineering.php]]!
* It is another instance of how the meaning and scope of concepts (in this case, the concept of "number") evolves
* and with it, the importance of mental flexibility and openness
* the important role of physical need and human curiosity in the development of new and impactful/important ideas
* the role and importance of intuition and logic in the evolution of human knowledge
* the nature of logical and mathematical proofs in determining "truths"
* the powerful (and quite frequent) progression from "pure knowledge" to "applied knowledge"


----
^^1^^ [[An Imaginary Tale|http://www.pucrs.br/famat/viali/tic_literatura/livros/Paul%20J.%20Nahin%20-%20An%20Imaginary%20Tale%20The%20Story%20of%20i%20the%20Square%20Root%20of%20Minus%20One.pdf]] - The Story of [img[square root of -1|./resources/i 1.png][./resources/i.png]] by Paul J. Nahin

^^2^^ from [[an article by Israel Kleiner titled "Thinking the Unthinkable: The Story of Complex Numbers (with a Moral)"|http://science.cmb.ac.lk/mathematics/wp-content/uploads/sites/9/2016/06/Thinking_the_Unthinkable__the_Story_of_Compleax-numbers.pdf]] (see [[GD copy|https://drive.google.com/file/d/1BJ0vraarncN3Z3iiwZnE__-zdurUEbAJ/view?usp=sharing]]):
>One can invent mathematics without knowing much of its history. One can use mathematics without knowing much, if any, of its history. But one cannot have a mature appreciation of mathematics without a substantial knowledge of its history. 
>[...]
>To teach effectively a teacher must develop a feeling for his subject; he cannot make his students sense its vitality if he does not sense it himself. He cannot share his enthusiasm when he has no enthusiasm to share. How he makes his point may be as important as the point he makes; he must personally feel it to be important.
([[from James Iry's blog|http://james-iry.blogspot.com/2009/05/brief-incomplete-and-mostly-wrong.html]])

A [[list of computer scientists|https://en.wikipedia.org/wiki/List_of_computer_scientists]] (if you are interested in some of their accomplishments).

1801 - Joseph Marie Jacquard uses punch cards to instruct a loom to weave "hello, world" into a tapestry. Redditers of the time are not impressed due to the lack of tail call recursion, concurrency, or proper capitalization.

1842 - Ada Lovelace writes the first program. She is hampered in her efforts by the minor inconvenience that she doesn't have any actual computers to run her code. Enterprise architects will later relearn her techniques in order to program in UML.

1936 - Alan Turing invents every programming language that will ever be but is shanghaied by British Intelligence to be 007 before he can patent them.

1936 - Alonzo Church also invents every language that will ever be but does it better. His lambda calculus is ignored because it is insufficiently C-like. This criticism occurs in spite of the fact that C has not yet been invented.

1940s - Various "computers" are "programmed" using direct wiring and switches. Engineers do this in order to avoid the tabs vs spaces debate.

1957 - John Backus and IBM create FORTRAN. There's nothing funny about IBM or FORTRAN. It is a syntax error to write FORTRAN while not wearing a blue tie.

1958 - John ~McCarthy and Paul Graham invent LISP. Due to high costs caused by a post-war depletion of the strategic parentheses reserve LISP never becomes popular[1]. In spite of its lack of popularity, LISP (now "Lisp" or sometimes "Arc") remains an influential language in "key algorithmic techniques such as recursion and condescension"[2].

1959 - After losing a bet with L. Ron Hubbard, Grace Hopper and several other sadists invent the Capitalization Of Boilerplate Oriented Language (COBOL). Years later, in a misguided and sexist retaliation against Adm. Hopper's COBOL work, Ruby conferences frequently feature misogynistic material.

1964 - John Kemeny and Thomas Kurtz create BASIC, an unstructured programming language for non-computer scientists.

1965 - Kemeny and Kurtz go to 1964.

1970 - Guy Steele and Gerald Sussman create Scheme. Their work leads to a series of "Lambda the Ultimate" papers culminating in "Lambda the Ultimate Kitchen Utensil." This paper becomes the basis for a long running, but ultimately unsuccessful run of late night infomercials. Lambdas are relegated to relative obscurity until Java makes them popular by not having them.

1970 - Niklaus Wirth creates Pascal, a procedural language. Critics immediately denounce Pascal because it uses "x := x + y" syntax instead of the more familiar C-like "x = x + y". This criticism happens in spite of the fact that C has not yet been invented.

1972 - Dennis Ritchie invents a powerful gun that shoots both forward and backward simultaneously. Not satisfied with the number of deaths and permanent maimings from that invention he invents C and Unix.

1972 - Alain Colmerauer designs the logic language Prolog. His goal is to create a language with the intelligence of a two year old. He proves he has reached his goal by showing a Prolog session that says "No." to every query.

1973 - Robin Milner creates ML, a language based on the M&M type theory. ML begets SML which has a formally specified semantics. When asked for a formal semantics of the formal semantics Milner's head explodes. Other well known languages in the ML family include OCaml, F#, and Visual Basic.

1980 - Alan Kay creates Smalltalk and invents the term "object oriented." When asked what that means he replies, "Smalltalk programs are just objects." When asked what objects are made of he replies, "objects." When asked again he says "look, it's all objects all the way down. Until you reach turtles."

1983 - In honor of Ada Lovelace's ability to create programs that never ran, Jean Ichbiah and the US Department of Defense create the Ada programming language. In spite of the lack of evidence that any significant Ada program is ever completed historians believe Ada to be a successful public works project that keeps several thousand roving defense contractors out of gangs.

1983 - Bjarne Stroustrup bolts everything he's ever heard of onto C to create C++. The resulting language is so complex that programs must be sent to the future to be compiled by the Skynet artificial intelligence. Build times suffer. Skynet's motives for performing the service remain unclear but spokespeople from the future say "there is nothing to be concerned about, baby," in an Austrian accented monotone. There is some speculation that Skynet is nothing more than a pretentious buffer overrun.

1986 - Brad Cox and Tom Love create Objective-C, announcing "this language has all the memory safety of C combined with all the blazing speed of Smalltalk." Modern historians suspect the two were dyslexic.

1987 - Larry Wall falls asleep and hits Larry Wall's forehead on the keyboard. Upon waking Larry Wall decides that the string of characters on Larry Wall's monitor isn't random but an example program in a programming language that God wants His prophet, Larry Wall, to design. Perl is born.

1990 - A committee formed by Simon Peyton-Jones, Paul Hudak, Philip Wadler, Ashton Kutcher, and People for the Ethical Treatment of Animals creates Haskell, a pure, non-strict, functional language. Haskell gets some resistance due to the complexity of using monads to control side effects. Wadler tries to appease critics by explaining that "a monad is a monoid in the category of endofunctors, what's the problem?"

1991 - Dutch programmer Guido van Rossum travels to Argentina for a mysterious operation. He returns with a large cranial scar, invents Python, is declared Dictator for Life by legions of followers, and announces to the world that "There Is Only One Way to Do It." Poland becomes nervous.

1995 - At a neighborhood Italian restaurant Rasmus Lerdorf realizes that his plate of spaghetti is an excellent model for understanding the World Wide Web and that web applications should mimic their medium. On the back of his napkin he designs Programmable Hyperlinked Pasta (PHP). PHP documentation remains on that napkin to this day.

1995 - Yukihiro "Mad Matz" Matsumoto creates Ruby to avert some vaguely unspecified apocalypse that will leave Australia a desert run by mohawked warriors and Tina Turner. The language is later renamed Ruby on Rails by its real inventor, David Heinemeier Hansson. [The bit about Matsumoto inventing a language called Ruby never happened and better be removed in the next revision of this article - DHH].

1995 - Brendan Eich reads up on every mistake ever made in designing a programming language, invents a few more, and creates LiveScript. Later, in an effort to cash in on the popularity of Java the language is renamed JavaScript. Later still, in an effort to cash in on the popularity of skin diseases the language is renamed ECMAScript.

1996 - James Gosling invents Java. Java is a relatively verbose, garbage collected, class based, statically typed, single dispatch, object oriented language with single implementation inheritance and multiple interface inheritance. Sun loudly heralds Java's novelty.

2001 - Anders Hejlsberg invents C#. C# is a relatively verbose, garbage collected, class based, statically typed, single dispatch, object oriented language with single implementation inheritance and multiple interface inheritance. Microsoft loudly heralds C#'s novelty.

2003 - A drunken Martin Odersky sees a Reese's Peanut Butter Cup ad featuring somebody's peanut butter getting on somebody else's chocolate and has an idea. He creates Scala, a language that unifies constructs from both object oriented and functional languages. This pisses off both groups and each promptly declares jihad.

----
''Footnotes''

[1]    Fortunately for computer science the supply of curly braces and angle brackets remains high.
[2]    Catch as catch can - by [[Verity Stob|https://www.theregister.co.uk/bootnotes/stob/]]
The following is one of many versions of this (possibly) [[urban legend|https://www.snopes.com/fact-check/the-barometer-problem/]].

It concerns a question in a physics degree exam at the University of Copenhagen: "Describe how to determine the height of a building with a barometer."

One student replied:

"You tie a long piece of string to the neck of the barometer, then lower the barometer from the roof of the tower to the ground. The length of the string plus the length of the barometer will equal the height of the building."

This highly original answer so incensed the examiner that the student was failed immediately. The student appealed on the grounds that his answer was indisputably correct, and the university appointed an independent arbiter to decide the case.

The arbiter judged that the answer was indeed correct, but did not display any noticeable knowledge of physics. To resolve the problem it was decided to call the student in and allow him six minutes in which to provide a verbal answer which showed at least a minimal familiarity with the basic principles of physics.

For five minutes the student sat in silence, forehead creased in thought. The arbiter reminded him that time was running out, to which the student replied that he had several extremely relevant answers, but couldn't make up his mind which to use.

On being advised to hurry up the student replied as follows:

"Firstly, you could take the barometer up to the roof of the tower, drop it over the edge, and measure the time it takes to reach the ground. The height of the building can then be worked out from the formula H = 0.5g x t^^2^^. But bad luck on the barometer."

"Or if the sun is shining you could measure the height of the barometer, then set it on end and measure the length of its shadow. Then you measure the length of the tower's shadow, and thereafter it is a simple matter of proportional arithmetic to work out the height of the tower."

"But if you wanted to be highly scientific about it, you could tie a short piece of string to the barometer and swing it like a pendulum, first at ground level and then on the roof of the tower. The height is worked out by the difference in the gravitational restoring force T = 2 pi square root (l/g)."

"Or if the tower has an outside emergency staircase, it would be easier to walk up it and mark off the height of the tower in barometer lengths, then add them up."

"If you merely wanted to be boring and orthodox about it, of course, you could use the barometer to measure the air pressure on the roof of the tower and on the ground, and convert the difference in millibars into feet to give the height of the building."

"But since we are constantly being exhorted to exercise independence of mind and apply scientific methods, undoubtedly the best way would be to knock on the janitor's door and say to him 'If you would like a nice new barometer, I will give you this one if you tell me the height of this tower'."

The student was Niels Bohr.
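
For what it's worth, the student's first method really does compute; here is a minimal Python sketch (my own, with a made-up fall time) of H = 0.5g x t^^2^^:
{{{
# Height from free-fall time: H = 0.5 * g * t^2 (ignoring air resistance).
g = 9.81              # gravitational acceleration, m/s^2
t = 3.0               # measured fall time in seconds (hypothetical value)
H = 0.5 * g * t ** 2
print(f"{H:.1f} m")   # 44.1 m -- but bad luck on the barometer
}}}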

Computational Thinking (CT) and Computational Literacy^^''1''^^ have been getting significant attention and focus (and some funding) in the last few years, with various organizations and initiatives supporting them.
The thinking about CT is evolving, and there is no single agreed-upon definition of it^^''2''^^, but the following is my (working) definition^^''3''^^, inspired by the thoughtful viewpoints expressed in a [[report by the National Academies|http://www.nap.edu/catalog.php?record_id=12840]]:
{{{Computational Thinking is the study and development of human capabilities and processes for solving problems, designing systems, and understanding human behavior, in order to magnify and enhance those human aspects. It has roots in mathematics, engineering, technology, and science, and draws on concepts and methodologies fundamental to the sciences in general and computer science and technology in particular.}}}

A partial list of key contributors and participants in evolving CT includes:
* The National Science Foundation (NSF) ([[example presentation by Jeannette M. Wing |http://www.nsf.gov/attachments/114603/public/2009_02_19_cstb-ct.ppt]], 2.8MB .ppt)
* [[The National Academies|http://www8.nationalacademies.org/cp/projectview.aspx?key=48969]]
* [[International Society for Technology in Education (ISTE)|http://www.iste.org/learn/computational-thinking]]
* [[Computer Science Teachers Association (CSTA)|http://csta.acm.org/Curriculum/sub/CompThinking.html]]
* [[Google's Exploring Computational Thinking|http://www.google.com/edu/computational-thinking/index.html]]
* Universities/academia like [[Carnegie Mellon (CMU)|http://www.cs.cmu.edu/~CompThink/]], [[Northwestern|http://osep.northwestern.edu/projects/ct-stem]], [[DePaul|http://compthink.cs.depaul.edu/]], [[University of Washington|http://csprinciples.cs.washington.edu/sixpractices.html]], and others.

The CT model described in the [[ISTE Leadership Toolkit|http://www.iste.org/docs/ct-documents/ct-leadership-toolkit-8-22-11FC0F82895322.pdf?sfvrsn=2]] identifies some key skills mapped to CT abilities across school grades. This is useful, and definitely a good step in outlining a "CT vocabulary and Progression Chart". It mainly focuses on the skills/capabilities as essential building blocks, but doesn't propose a coherent framework or curricula.
The CT model described in the [[CSTA matrix|http://csta.acm.org/Curriculum/sub/CurrFiles/CTExamplesTable.pdf]] is aligned with the ISTE model and vocabulary, and adds a dimension of different domains/subjects/curricula (e.g., Math, Science, Language Arts). This effort also focuses mainly on the building blocks for CT, and not on an overarching framework.
The [[Google Exploring Computational Thinking repository|http://www.google.com/edu/computational-thinking/lessons.html]] has a collection of specific lesson ideas leveraging technologies across domains/subjects/curricula. The examples are narrowly focused on applying computation, programming, and automation to very specific problems.

Following is my work-in-progress framework for Computational Thinking and Computational Literacy, which focuses on //Problem Solving// as the "guiding lens", so as to show how various building blocks (including key ones from the models above) may fit together in a set of processes and ways of thinking and behaving (hence, //framework//) guided by Computational Thinking principles and capabilities. I've chosen problem solving since improving it is one of the main goals of 21st Century Education initiatives.

!!!!Click the image to see a zoomable PDF version of my (work-in-progress) problem-solving-focused, computational-thinking-guided framework
[img[click to see the "CT framework"|resources/Computational Thinking process HM.png][resources/Computational Thinking process HM.pdf]]

[[The drill-down interactive framework|resources/Computational Thinking process HM live.pdf]] (expandable/collapsible PDF)

!!!!A few notes on the framework in the larger context of Computational Thinking and Computational Literacy
* The framework aims mainly at STEM (Science, Technology, Engineering, Math) education, but is not limited to these domains/curricula. On the other hand, it's not just about Computer Science and programming. As a matter of fact, Computational Thinking and practices can be learned and exercised without computers (but it may be less fun ;-). To paraphrase the famous, hard-boiled Dutch computer scientist Edsger Dijkstra:
>Computational Thinking is no more about computers than astronomy is about telescopes.
* To balance the point above, CT is not the end-all-be-all in terms of an educational framework, model, and scope. So far, it does not specifically address, for example, domains like social behavior (teamwork, empathy, persuasive argumentation, etc.), morality (fairness, prejudice, justice, etc.), and others, which are important "21^^st^^ Century Skills" and abilities (and educational frameworks). As CT matures, I expect it to integrate and complement other frameworks and domains, and this is already starting to happen: it is being linked with social and moral contexts and behaviors (see [[Six Computational Thinking Practices|resources/CT_sixpractices.pdf]] from the University of Washington, practice 1 - effects, and practice 6 - teamwork)
* There has been [[some legitimate concern|http://csta.acm.org/Curriculum/sub/CurrFiles/JonesCTOnePager.pdf]] regarding the definition and scope (too abstract, too narrow) of CT, which should be kept in mind as the thinking and practice around CT evolves and matures.
* A [[set of questions|http://cs.gmu.edu/cne/pjd/GP/gp_faq.html]] regarding the appropriate principles to create and apply to CT and Computer Science (a somewhat narrow, computer-science-centric focus can be seen in the early publications of [[Wing|http://www.cs.cmu.edu/afs/cs/usr/wing/www/publications/Wing06.pdf]]) is brought up by the computer scientist [[Peter Denning|http://cs.gmu.edu/cne/pjd/GP/GP-site/about_us.html]] and others. Denning also warned, in [[a viewpoint article|resources/Denning-Beyond Computational Thinking.pdf]] in the [[Communications of the ACM|http://cacm.acm.org/]], of the danger of equating CT with programming, and brought up thoughtful points to consider on the way "Beyond Computational Thinking" (the title of his article).
* To counter this, some computer scientists make an effort to broaden the scope and extent of CT (for example, [[Seven Big Ideas of Computer Science|resources/CT_sevenbigideas.pdf]] -- and don't let the title fool you: it's not only about Computer Science!)
* Identifying CT skills and capabilities that "go together" as part of an activity (e.g. Hypothesizing, or Defining Questions) helps promote processes and behaviors that can be developed into CT //Best Practices// and effective ways of thinking.
* The framework recognizes that skills and capabilities are building blocks in different contexts within the problem solving process, and therefore can repeat multiple times. Also, the pattern of input-processing-output is a nested and repeatable one, and can/should be applied at multiple levels within bigger contexts.
* The framework also emphasizes that CT __does not__ consist of just applying technology or computation to a single activity, task, or phase of problem solving. CT "lives" not only in the skills/capabilities building blocks, but (and as importantly) also in the process, links, and flow from one task and activity to the next.

----
^^''1''^^ - An interesting view on [[Computational Literacy|Computing Literacy]] and comparison to mastering calculus as a new literacy in Math, by Andrea diSessa
^^''2''^^ - from the [[Report of a Workshop on The Scope and Nature of Computational Thinking (2010)|http://www.nap.edu/openbook.php?record_id=12840&page=65]]:
>Discussions held at the February 2009 workshop did not reveal general agreement among workshop participants about the precise content of computational thinking, let alone its structure. Nevertheless, the lack of explicit disagreement about its elements could be taken as reflecting a shared intuition among workshop participants that computational thinking, as a mode of thought, has its own distinctive character.
>Building on this shared intuition, it is fair to say that most workshop participants agreed that more deliberation is necessary to achieve greater clarity about what is encompassed under the rubric of computational thinking and how these elements are structured relative to each other.
^^''3''^^ - This is my definition, inspired by the contributions captured in the National Academies report by: Peter Lee, Bill Wulf, Peter Denning, Gerald Sussman, Jeannette Wing, Edward Fox, Uri Wilensky, Robert Constable.
Here are a few recommendations/observations from a thoughtful, sound, and common-sensical blog post titled [["A Helpful Guide to Reading Better"|https://fs.blog/reading/]]:
* Ideally, the way you read is tailored to whether you’re reading for entertainment, information, or understanding. [[The Levels of Reading|https://fs.blog/how-to-read-a-book/]] will help you read more effectively and efficiently.
* [[Choose what you read wisely|https://fs.blog/2013/08/choose-your-next-book/]]
* Be quick to start books, [[quicker to stop them|https://fs.blog/2017/09/shouldnt-slog-books/]], and read the best ones again right after you finish.
* [[Don’t read what everyone else is reading|https://fs.blog/2013/04/reading-what-everyone-else-is-reading/]]. Rather than read new books, [[focus on old ones|https://fs.blog/2012/06/c-s-lewis-on-reading-old-books/]].
* [[Take notes while reading|https://fs.blog/2013/11/taking-notes-while-reading/]]
** At the end of each chapter write a few bullet points that summarize what you’ve read and make it personal if you can — that is, apply it to something in your life. Also, note any unanswered questions. When you’re done with the book, put it down for a week.
** Pick up the book again and go through all your notes. Most of these will be garbage but there will be lots you want to remember. Write the good stuff on the inside cover of the book along with a page number.
** Copy out the excerpts by hand or take a picture of them to pop into Evernote. Tag accordingly.
* Good reading habits not only help you read more but help you read better. Here’s the [[Farnam Street system for remembering what you read|https://fs.blog/2017/10/how-to-remember-what-you-read/]].
** Before you start reading a new book, take out a blank sheet of paper. Write down what you know about the subject you’re about to read — a mind map if you will.
** After you are done with a reading session, spend a few minutes adding to the map (I use a different color ink).
** Before you start your next reading session, review the mindmap (I use mine as a bookmark sometimes.)
** Put these mind maps into a binder that you periodically review.
* [[Find more time to read|https://fs.blog/2013/09/finding-time-to-read/]]
** get in [[the habit of reading|https://fs.blog/2015/12/twenty-five-pages-a-day/]]

(From Out Here: Poems and Images from Steens Mountain Country)


As thought to mind, so to the string
plucked, or touched, or bowed, the music is,
a wrinkling of the air as immaterial
and brief as sunlight glancing on a wave.

The silence in these empty lands is long.
Voice is as mortal as the word it says,
with little time to speak the thought, to tell
or sing the quick idea of those who live.

So brief the spoken word, the airy thing
in which are placed our deepest constancies,
though by its love or life may stand or fall,
and in it is the power to ruin or save.

The silence in these empty lands is long.

Rock has no tongue to speak or voice to sing,
mute, heavy matter. Yet as I lift up this
dull desert stone, the weight of it is full
of slower, longer thoughts than mind can have.

Be my mind, stone lying on my grave.
The silence in these empty lands is long.
The stars have long to listen. Be my song.
In an [[article covering an interesting and seemingly effective "constructionist experiment"|http://stager.org/articles/eurologo2005.pdf]] in a troubled juvenile detention facility (The Maine Youth Center), Gary Stager (the principal investigator) summarized a few key principles from Seymour Papert (who participated in the experiment), which nicely complement what Papert had said about [[exploring a new way to teach math|An Exploration in the Space of Mathematics Educations]].

The eight ''big ideas'' he lists are:
* __learning by doing__. We all learn better when learning is part of doing something we find really interesting. We learn best of all when we use what we learn to make something we really want.
* __technology as building material__. If you can use technology to make things you can make a lot more interesting things. And you can learn a lot more by making them. This is especially true of digital technology: computers of all sorts including the computer-controlled Lego in our Lab.
* __hard fun__. We learn best and we work best if we enjoy what we are doing. But fun and enjoying doesn’t mean “easy.” The best fun is hard fun. Our sports heroes work very hard at getting better at their sports. The most successful carpenter enjoys doing carpentry. The successful businessman enjoys working hard at making deals.
* __learning to learn__. Many students get the idea that “the only way to learn is by being taught.” This is what makes them fail in school and in life. Nobody can teach you everything you need to know. You have to take charge of your own learning.
* __taking time__ – the proper time for the job. Many students at school get used to being told every five minutes or every hour: do this, then do that, now do the next thing. If someone isn’t telling them what to do they get bored. Life is not like that. To do anything important you have to learn to manage time for yourself. This is the hardest lesson for many of our students.
* __you can’t get it right without getting it wrong__. Nothing important works the first time. The only way to get it right is to look carefully at what happened when it went wrong. To succeed you need the freedom to goof on the way.
* __do unto ourselves what we do unto our students__. We are learning all the time. We have a lot of experience of other similar projects but each one is different. We do not have a preconceived idea of exactly how this will work out. We enjoy what we are doing but we expect it to be hard. We expect to take the time we need to get this right. Every difficulty we run into is an opportunity to learn. The best lesson we can give our students is to let them see us struggle to learn.
* __we are entering a digital world where knowing about digital technology is as important as reading and writing__. So learning about computers is essential for our students’ futures BUT the most important purpose is using them NOW to learn about everything else.

I recently came across (and bought for a quarter, since I didn't have the required nickel to pay for it :) an IBM Technical Report (from 1973!), written by four researchers^^1^^ connected to Ken Iverson, the inventor of APL (A Programming Language) and [[winner of the Turing Award|http://amturing.acm.org/award_winners/iverson_9147499.cfm]] (1979).

Their 70-page paper/report (plus a 14-page appendix) is packed with insights into how to use APL to create insight and deeper learning.

The exciting thing (for me :) was that this slim publication holds so many insights about using Any Programming Language (not just APL) for teaching and learning, that I feel it could (and should) impact entire curricula, from the sciences, through math, art, and, of course, Computer Science. It addresses one of the strong beliefs I have about making cross-domain/discipline connections, whenever and wherever possible, using Computing as an excellent facilitator and "bridge".

The authors start by saying:
This report is based upon three central ideas:
* that key concepts in various disciplines may be represented by functions;
* that a language such as APL permits a readable, formal definition of a function and a means of executing it and thereby accumulating the experience necessary to understand it; and
* that it is possible (but, unfortunately, not usual) to write computer programs so that they correspond directly to the functional concepts of a discipline.

They seem to fully support my belief in the benefits of using Computing to gain insights in other subjects and knowledge domains:
>Efforts to use the computer in education have in the past been focused primarily on measuring educational achievement by using its data-processing powers, on its ability to present and manage programmed instruction, or on the machine itself (as the subject of study). A new role for the computer emerges when the discipline of programming is used to represent explicitly a topic's structure, and thus to provide a basis for insight.

And their definition of gaining insight in a knowledge domain:
>To say that a student has gained insight into a topic means that he has come to see an underlying structure, and that, by referring to that structure, he is able to state and apply rules which explain or predict. In particular, he is able to abstract the rules from their context, and generalize them to new situations. One requirement of an educational process is that it provide the student language in which to describe his disciplines and in which to think about them.

They highlight the critical role of functions in mastering a knowledge domain, and define data and functions as essential parts of a model, expressing ideas that later surfaced in the move to Object-Oriented Programming and the paradigm shift it created:
> The fundamental ideas of a discipline depend upon two aspects of the way we choose to represent it:
> 1. Data: those attributes that we choose to note and measure thereby become the facts by which knowledge of an event is encoded, and in terms of which we think about it.
> 2. Functions: the transformations from one representation of data to another, or the relations between different sets of data.

''On Modeling'' (in/of a discipline/domain) - The authors rightly claim that "the choice of functions by which data are treated influence the selection of facts worth observing." This is true both pedagogically (which I think is the focus of their report) and philosophically (i.e., epistemologically).
They refer to domain modeling, saying that "Taken together, a collection of functions imposes a structure on the understanding of a topic," and add:
> For some time, [[Falkoff|https://en.wikipedia.org/wiki/Adin_Falkoff]] and [[Iverson|https://en.wikipedia.org/wiki/Kenneth_E._Iverson]] have argued that a system of programs constitutes a framework for a discipline (see DSLs below).

The purpose and usefulness of modeling with functions and data is something scientists and teachers strive for all the time, because:
> If a topic is well understood and its description well organized, a function useful in the description of one phenomenon can usually be used again in others. The creation of a unified and consistent system of definitions is, of course, what scientists are continually attempting. In similar fashion, teachers or authors can select the functions by which they develop a topic so that these form a coherent system.

''On Domain Specific Languages'' ([[DSLs|https://en.wikipedia.org/wiki/Domain-specific_language]]) - they talk (decades before [[Martin Fowler|http://martinfowler.com/books/dsl.html]] and others) about:
> In each field or discipline, the user may in effect extend the language to embrace the functions appropriate to it. The user may "pyramid" definitions, using those from each level as terms in the definition of those at the next.
and
> The content of various disciplines can be expressed in such a way that APL expressions can be used in books, on paper, and at the blackboard to achieve briefer, clearer, more general and more effective summary than is possible with conventional algebra or the various ad hoc extensions that each discipline tends to develop.
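
To make the "pyramiding" idea concrete in a modern language, here is a minimal Python sketch (my own illustration, not from the report) of a tiny discipline vocabulary built level by level, each definition using only the ones below it:
{{{
# Level 1: primitive reductions over a list of numbers.
def total(xs):
    return sum(xs)

def size(xs):
    return len(xs)

# Level 2: defined in terms of level 1.
def mean(xs):
    return total(xs) / size(xs)

# Level 3: defined in terms of level 2.
def variance(xs):
    m = mean(xs)
    return mean([(x - m) ** 2 for x in xs])

print(mean([2, 4, 6]))      # 4.0
print(variance([2, 4, 6]))  # 2.666..., the population variance
}}}
Each function here is also a "glass box" in the report's sense: a student can read every definition all the way down.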

They make an aside on poets and scientists, which I think is insightful :)
>Poets or artists sometimes decry the precision that the scientists espouse; they claim to prefer instead the many-layered ambiguity that literature permits. They miss the point that the generality that in art is achieved by ambiguity is equally an aim of science. Scientists, like poets, seek to state a function so generally that it applies to things which once seemed unrelated, and thereby gain some insight into all of them.

At the heart of the paper/report is the desire to use APL (or Any PL) as a tool for gaining insight and improving the learning and mastery of a discipline/knowledge domain:
>Our aim is not to use programming as an end in itself, but rather to integrate the use of algorithms into the study and exposition of any discipline using formal or mathematical models.
>Underlying this conception of a common language for man and machine is a conception of the relation between machine and student. In general, we have assumed that if a function is defined, it is because the student is expected both to use it and to understand it. That means that he should expect to be able to display its definition and understand what he reads there. Each function should therefore be not a black box but a glass box, whose inner mechanism is visible to any who need or care to study it.

''On Scaffolding'' and gradual/layered revealing of a domain's structure (functions):
> On occasion we wish temporarily to withhold from a student what the definition of a function is, so that he may practice his analytic powers by testing its performance. We have considered such exercises with locked functions a sort of paradigm of the sciences: We observe the world, and collect data on the results that nature's unknown functions provide. The scientist's faith is that the world can indeed be reduced to comparatively simple functions, and that experiment will lead him to propose definitions for functions whose performance will duplicate what he observes. But even in the case in which knowledge of a function's definition is for the time being withheld, the student has the right and the power ultimately to see and to understand what he sees.
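
A toy version of such a "locked function" exercise, sketched in Python (my own illustration; the report itself relied on APL's facility for locking function definitions): the student first collects data on the function's behavior, and only later opens the glass box:
{{{
import inspect

def locked(n):
    # Nature's "unknown function" -- the student must infer the rule from data.
    return n * (n + 1) // 2

# Phase 1: experiment, collecting observations of the function's behavior.
for n in range(1, 6):
    print(n, "->", locked(n))  # 1, 3, 6, 10, 15: the triangular numbers

# Phase 2: the student ultimately has the right to see the definition itself.
print(inspect.getsource(locked))  # works when this is run as a script file
}}}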

They emphasize the "open use" of the computer and exploration of domains/disciplines with it (the computer, not just APL, mind you!):
> We call this way of using the machine open use of the computer. A great deal can be done with no pre-programmed definitions at all, i.e. with a "bare" APL machine. Generally speaking, it is the student (or the teacher) who initiates activities, not the machine. The task of the student is to develop his definitions of functions and to acquire experience in exploring their consequences. The computer is thus to him a laboratory device, an arena for testing his ideas by evaluating the results his functions produce.

!!!!Some implications of the "open use/exploration" approach to teaching:
This style of use has far-reaching consequences for the style in which teaching is conducted. Some of these have already been mentioned, and others will be described in more detail in a moment. Here we list some of them:
1. The work with the computer can be integrated into the presentation of the subject matter itself
 a. with minimum attention to programming, machine, language, or computer science (unless those are desired in their own right);
 b. with no requirement that students spend much time individually at the terminal (although if the facility is available it is usually advantageous to do so);
 c. with no dependence on a computer program for the particular course, other than access to an APL system and guidance from text or teacher on the functions that may be developed.

2. The functional approach, using APL to define the functions, has important benefits quite apart from the use of computers.
 a. There are significant advantages of the notation even if no use of terminals is made.
 b. The role of the terminal in many cases is to provide rapid and decisive testing of proposals developed by students; it thereby serves as a powerful motivating device, but not one that is indispensable to the content of instruction.
 c. Even where the terminal is used for carrying out practical computations, the language may still be used apart from the terminal for developing the underlying concepts and procedures.

3. In the classroom, a terminal has many uses as a laboratory device, so that a group can witness the outcome of expressions collectively or individually proposed.
 a. Group use, with occasional individual use outside class hours, permits a relatively large number of students per terminal.
 b. Terminal manufacturers have not thus far provided for collective viewing of the output of a terminal, and there are few devices well adapted for display to a group. However, adequate results are obtained by mounting a small TV camera on a conventional terminal, and connecting it to one or more TV monitors in the classroom.

4. APL notation and a functional approach are applicable to a wide variety of courses in mathematics and the sciences.
 a. The overhead of getting started with the notation and with terminals can be spread over many disciplines.
 b. Although direct use of the notation and of terminals can be started with students of junior high school age or even younger, it remains relevant throughout their subsequent secondary and university education.
 c. If one anticipated that students would make wide use of APL upon reaching secondary school, many of their elementary courses from kindergarten onward could lay a foundation for the general conception of function and for elements of APL notation, thus making the subsequent introduction of computing even easier.



----
^^1^^ - Paul Berry, J. Bartoli, C. Dell'Aquila, V. Spadavecchia
Maria Popova, in a Brain Pickings blog post, [[covers an interesting book|https://www.brainpickings.org/2012/05/04/a-technique-for-producing-ideas-young/]] by James Webb Young, written in 1939: //A Technique for Producing Ideas//.

Young sees new ideas as nothing but producing new __//combinations//__ of old elements. But the key is to be able to see new __//relationships//__ between these elements.

He makes an __//observation//__ which I think is very significant to learning and education:
>Here, I suspect, is where minds differ to the greatest degree when it comes to the production of ideas. To some minds each fact is a separate bit of knowledge. To others it is a link in a chain of knowledge. It has relationships and similarities. It is not so much a fact as it is an illustration of a general law applying to a whole series of facts.

Young breaks up the process of producing new ideas into 5 ''steps'':

1. GATHERING RAW MATERIAL - you need to create and maintain a pool of "raw material" (the "elements" to be combined). It is hard work, it's ongoing, and it's part of being a lifelong learner (and inventor, or innovator, or "original thinker").

2. DIGESTING THE MATERIAL - Young has a vivid image for this step/process:
>take the different bits of material which you have gathered and feel them all over, as it were, with the tentacles of the mind. You take one fact, turn it this way and that, look at it in different lights, and feel for the meaning of it. You bring two facts together and see how they fit. What you are seeking now is the relationship, a synthesis where everything will come together in a neat combination, like a jig-saw puzzle.

3. UNCONSCIOUS PROCESSING - unfortunately, a "black box" but //essential// step, where you "turn the problem over to your unconscious mind and let it work while" you do other, unrelated things (like sleep, listen to music, go to the theater or movies, read poetry or a detective story, or whatever else "stimulates your imagination and emotions").

4. THE ~A-HA MOMENT - where things click together, as if out of nowhere (similar to how [[Alan Kay describes it|AHA! Moment]]).

5. IDEA MEETS REALITY - where you "show your idea around" and discuss it with caring but judicious people. This is very useful since:
>You will find that a good idea has, as it were, self-expanding qualities. It stimulates those who see it to add to it. Thus possibilities in it which you have overlooked will come to light.
A wonderful (clear-eyed, but somewhat tongue-in-cheek) poem by Wislawa Szymborska.

I liked that Szymborska "poured humanness into a math vessel". (David Eagleman has [[a different take/statistics on life|Sum by David Eagleman]]).

Since I don't know Polish, I took the creative freedom of putting together a "mix-and-match version" of two translations (based on my [[preferences|Possibilities - by Wislawa Szymborska]]): one from [[Joanna Trzeciak|http://chavelaque.blogspot.com/2005/08/word-on-statistics-by-wislawa.html]] and the other from [[Clare Cavanagh and Stanislaw Baranczak|http://www.poetry-chaikhana.com/blog/2011/04/04/wislawa-szymborska-a-contribution-to-statistics/]].


Out of a hundred people
those who always know better
— fifty-two

doubting every step
— nearly all the rest,

glad to lend a hand
if it doesn’t take too long
— as high as forty-nine,

always good
because they can’t be otherwise
— four, well maybe five,

able to admire without envy
— eighteen,

suffering illusions
induced by youth (which passes)
— sixty, give or take a few,

not to be taken lightly
— forty and four,

living in constant fear
of someone or something
— seventy-seven,

capable of happiness
— twenty-something, tops,

harmless alone, but savage in crowds
— half at least,

cruel
when forced by circumstances
— better not to know,
not even approximately,

wise after the fact
— just a couple more
than wise before it,

getting only things out of life
— thirty
(I wish I were wrong),

hunched in pain,
without a flashlight in the dark
— eighty-three
sooner or later,

righteous
— quite a few, thirty-five,

righteous
and understanding
— three,

worthy of compassion
— ninety-nine,

mortal
— a hundred out of a hundred.
a figure that has not changed yet.




I think that since the end result is 100% guaranteed (i.e., it will eventually happen even without any effort/action/intention on one's part), it may make one think about which category(ies) one wants to (make an effort to) belong to, and live by^^1^^ (and in this way, change the statistics :)

 
----
^^1^^ - As they say in Pirkei Avot (The Chapters of the Elders) 3:15 : Everything is foreseen, and freewill is given. (הַכֹּל צָפוּי, וְהָרְשׁוּת נְתוּנָה.) 





Terry Pratchett, in his excellent book [["A Hat Full of Sky"|http://discworld.wikia.com/wiki/A_Hat_Full_of_Sky]] describes an exchange between [[Tiffany Aching|http://discworld.wikia.com/wiki/Tiffany_Aching]] (the protagonist, a young witch) and the old, powerful, and majestic witch [[Esme (Granny) Weatherwax|http://discworld.wikia.com/wiki/Esmerelda_Weatherwax]]:

Granny Weatherwax put down the cup and saucer. 
“Child, you’ve come here to learn what’s true and what’s not, but there’s little I can teach you that you don’t already know.
You just don’t know you know it, and you’ll spend the rest of your life learning what’s already in your bones. And that’s the truth.”
She stared at Tiffany’s hopeful face and sighed.
“Come outside then,” she said. “I’ll give you lesson one. It’s the only lesson there is. It don’t need writing down in no book with eyes on it.”^^1^^

She led the way to the well in her back garden, looked around on the ground, and picked up a stick.
“Magic wand,” she said. “See?” A green flame leaped out of it, making Tiffany jump. “Now you try.”
It didn’t work for Tiffany, no matter how much she shook it.
“Of course not,” said Granny. “It’s a stick. Now, maybe I made a flame come out of it, or maybe I made you think one did. That don’t matter. It was me is what I’m sayin’, not the stick. Get your mind right and you can make a stick your wand and the sky your hat and a puddle your magic…your magic…er, what’re them fancy cups called?”
“Er…goblet,” said Tiffany.
“Right. Magic goblet. Things aren’t important. People are.” 

Granny Weatherwax looked sidelong at Tiffany. 
“And I could teach you how to run across those hills of yours with the hare, I could teach you how to fly above them with the buzzard. I could tell you the secrets of the bees. I could teach you all this and much more besides, if you’d do just one thing, right here and now. One simple thing, easy to do.”
Tiffany nodded, eyes wide.
“You understand, then, that all the glittery stuff is just toys, and toys can lead you astray?”
“Yes!”
“Then take off that shiny horse^^2^^ you wear around your neck, girl, and drop it in the well.”
Obediently, half hypnotized by the voice, Tiffany reached behind her neck and undid the clasp.
The pieces of the silver Horse shone as she held it over the water.
She stared at it as if she was seeing it for the first time. 
And then…

She tests people, she thought. All the time.
“Well?” said the old witch.
“No,” said Tiffany. “I can’t.”
“Can’t or won’t?” said Granny sharply.
“Can’t,” said Tiffany and stuck out her chin. “And won’t!”
She drew her hand back and refastened the necklace, glaring defiantly at Granny Weatherwax.

The witch smiled.
“Well done,” she said quietly. “If you don’t know when to be a human being, you don’t know when to be a witch^^3^^. And if you’re too afraid of goin’ astray, you won’t go anywhere. May I see it, please?”
Tiffany looked into those blue eyes. Then she undid the clasp again and handed over the necklace. 
Granny held it up.
“Funny, ain’t it, that it seems to gallop when the light hits it,” said the witch, watching it twist this way and that. “Well-made thing. O’ course, it’s not what a horse looks like, but it’s certainly what a horse is.”


----
^^1^^ Young Tiffany was keeping a diary which had a decorative picture of an eye (The Third Eye of witches?) on its cover.
^^2^^ Tiffany got this silver horse necklace from a boy (a Baron's son). The silver horse was shaped like the big horse carved into the White Chalk Mountains where Tiffany (and the boy) lived. From the author's notes at the end of the book:
>By an amazing coincidence, the Horse carved on the Chalk [mountainside] is remarkably similar to the [[Uffington White Horse|https://en.wikipedia.org/wiki/Uffington_White_Horse]], which in this world is carved on the downlands near the village of Uffington in southwest Oxfordshire. It’s 374 feet long, several thousand years old, and carved on the hill in such a way that you can only see all of it in one go from the air. This suggests that 
>a) it was carved for the gods to see or 
>b) flying was invented a lot earlier than we thought or 
>c) people used to be much, much taller.
^^3^^ Maybe paraphrasing this also yields another "teaching":
> If you don't know when to be a learner, you don't know when and how to be a good teacher.
I came across [[a series of images|http://www.brainpickings.org/2013/03/28/the-art-of-cleanup-ursus-wehrli/]] by the Swiss humorist [[Ursus Wehrli|http://www.kunstaufraeumen.ch/en]] that illustrated for me what we sometimes do ("accomplish"?) in two fields which are near-and-dear to me: engineering and education. [[(more from Ursus on TED|http://www.ted.com/talks/ursus_wehrli_tidies_up_art.html]]). Some of this is echoed in [[About learning how to improvise a little better]] (or "lightening up" a bit).

In engineering there is a fine line between "the pleasure of engineering" (and "doing it right"), and "over-engineering in action" (and "engineering going overboard"). An important part of engineering is to make appropriate/justified/rational choices and trade-offs, not necessarily to maximize or minimize. It is foolish, sad and often dangerous to "extremize" in engineering.

In education there is also a danger of "overdoing it", by practicing rigid discipline and structure while teaching/learning, and not leveraging playfulness and serendipity. There is also the danger of "doing it backwards", i.e., consistently (and persistently) learning/teaching from the bottom up, starting with definitions and basic concepts, etc., and working up from there. 

A domain where I feel it's painfully evident is Math.
As [[Paul Lockhart points out|resources/LockhartsLament.pdf]] in a fitting and nightmarish analogy from music, this can be "extremized" to the absurd:
>Since musicians are known to set down their ideas in the form of sheet music, these curious black dots and lines must constitute the language of music. It is imperative that students become fluent in this language if they are to attain any degree of musical competence; indeed, it would be ludicrous to expect a child to sing a song or play an instrument without having a thorough grounding in music notation and theory. Playing and listening to music, let alone composing an original piece, are considered very advanced topics and are generally put off until college, and more often graduate school.
>As for the primary and secondary schools, their mission is to train students to use this language  to jiggle symbols around according to a fixed set of rules...
The point is clear, and easily transferable to other areas of study, especially Math... :-(

[[Back to those images|http://www.zillamag.com/art/the-art-of-clean-up-by-ursus-wehrli/]]
Are we sometimes (over)doing it to learners, sucking all the fun out of it?

|[img[let there be order|./resources/Ursus-Wehrli-The-Art-of-Clean-Up-13.png][./resources/Ursus-Wehrli-The-Art-of-Clean-Up-13.png]]|[img[let there be order|./resources/Ursus-Wehrli-The-Art-of-Clean-Up-14.png][./resources/Ursus-Wehrli-The-Art-of-Clean-Up-14.png]]|


And on a larger (cosmic) scale of engineering (as a result of education?):

|[img[let there be order|./resources/Ursus-Wehrli-The-Art-of-Clean-Up-9.png][./resources/Ursus-Wehrli-The-Art-of-Clean-Up-9.png]]|[img[let there be order|./resources/Ursus-Wehrli-The-Art-of-Clean-Up-10.png][./resources/Ursus-Wehrli-The-Art-of-Clean-Up-10.png]]|


And finally, on a smaller scale (but still very important to get it right, especially if it's the writing on a bathroom door...):

|[img[let there be order|./resources/Ursus-Wehrli-The-Art-of-Clean-Up-11.png][./resources/Ursus-Wehrli-The-Art-of-Clean-Up-11.png]]|[img[let there be order|./resources/Ursus-Wehrli-The-Art-of-Clean-Up-12.png][./resources/Ursus-Wehrli-The-Art-of-Clean-Up-12.png]]|

So, back to Math education. Lockhart continues his nightmarish analogy from music:
>Of course, not many students actually go on to concentrate in music, so only a few will ever get to hear the sounds that the black dots represent. Nevertheless, it is important that every member of society be able to recognize a modulation or a fugal passage, regardless of the fact that they will never hear one. To tell you the truth, most students just aren't very good at music. They are bored in class, their skills are terrible, and their homework is barely legible. Most of them couldn't care less about how important music is in today's world; they just want to take the minimum number of music courses and be done with it. I guess there are just music people and non-music people.
...
>By concentrating on what, and leaving out why, mathematics is reduced to an empty shell. The art is not in the "truth" but in the explanation, the argument. It is the argument itself which gives the truth its context, and determines what is really being said and meant. Mathematics is the art of explanation. If you deny students the opportunity to engage in this activity - to pose their own problems, make their own conjectures and discoveries, to be wrong, to be creatively frustrated, to have an inspiration, and to cobble together their own explanations and proofs - you deny them mathematics itself. So no, I'm not complaining about the presence of facts and formulas in our mathematics classes, I'm complaining about the lack of mathematics in our mathematics classes.

By the way, Charles Van Loan (a professor at Cornell) voices [[similar concerns|Formalism First = Rigor Mortis.  Intuition First = Rigor's Mortise]] about (initial) focus and sequencing in the Computer Science curriculum.

And in conclusion Lockhart says (about Math):
>There is such breathtaking depth and heartbreaking beauty in this ancient art form. 
>How ironic that people dismiss mathematics as the antithesis of creativity. 
>They are missing out on an art form older than any book, more profound than any poem, and more abstract than any abstract.

So, what is mathematics, and what do mathematicians do?
Here's [[G.H. Hardy's|http://en.wikipedia.org/wiki/G._H._Hardy]] excellent description:
>A mathematician, like a painter or poet, is a maker of patterns. 
>If his patterns are more permanent than theirs, it is because they are made with //ideas//.
The "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]" Alan Kay was making when he quipped: 
>A good point of view is worth many IQ points.
So here's an example: 

<html>
<table>
  <tr>
    <td>
      <img src="resources/pv1.jpg">
    </td>
  </tr>
  <tr>
    <td>
      <img src="resources/pv2.jpg">
    </td>
  </tr>
  <tr>
    <td>
      <img src="resources/pv4.jpg">
    </td>
  </tr>
</table>
</html>


The [[Palo Alto Airport|https://www.google.com/maps/place/Palo+Alto+Airport/@37.4550078,-122.1107903,15z/data=!4m5!3m4!1s0x0:0x45fa9fd3819cf9e2!8m2!3d37.4550078!4d-122.1107903]], as viewed from the [[Byxbee Baylands Park|https://www.google.com/search?q=byxbee+park&client=firefox-b-1-ab&tbm=isch&tbo=u&source=univ&sa=X&ved=2ahUKEwjw59S48unfAhVRcq0KHfydD6MQsAR6BAgEEAE&biw=1559&bih=910]].
I have to admit that the title of the book [["Programming as if People Mattered — Friendly Programs, Software Engineering, and Other Noble Delusions"|http://wiki.c2.com/?ProgrammingAsIfPeopleMattered]] about User Centered Design by [[Nathaniel Borenstein|https://en.wikipedia.org/wiki/Nathaniel_Borenstein]] intrigued me :)

His self-deprecating (actually profession-deprecating, which is a somewhat (entirely? :) different thing) quote struck an ominous (but in a black-humor kind of way) chord too:
>The most likely way for the world to be destroyed, most experts agree, is by accident. That's where we come in. We're computer professionals. We cause accidents.


But lest you think Borenstein is a total cynic and/or total jester, here is his observation and advice on the healthy/wise attitude a programmer should take in relation to their code/projects/products, which I totally agree with:
>A good attitude to take, from the first day of any programming project, is that the system being built is fundamentally flawed and doomed. The goal of such a project, then, is simply to build a system that will last long enough for a better one to come along, and perhaps also to be, for a brief moment suspended between eternities, the best program of its kind yet built.
>
>When viewed from this perspective, the inevitable demise and abandonment of the software is a good thing, because it means that it has done its job and something better has come along. Often, one can arrange things so that the replacing software is also one's own; there is a peculiar satisfaction in driving the nail into one's own coffin, and it is surely less painful for a programmer to see his software abandoned if he played an active role in creating the system that replaces it.
I've recently read the book //The Varieties of Scientific Experience: A Personal View of the Search for God// by Carl Sagan. The book is a collection of lectures edited by Sagan's life companion Ann Druyan, based on the famous Gifford Lectures on Natural Theology Sagan gave in 1985 at the University of Glasgow.

As Druyan says about the Gifford Lectures in the introduction to the book:
>[Sagan] would be following in the footsteps of some of the greatest scientists and philosophers of the last hundred years including James Frazer, Arthur Eddington, Werner Heisenberg, Niels Bohr, Alfred North Whitehead, Albert Schweitzer, and Hannah Arendt. 
>Carl saw these lectures as a chance to set down in detail his understanding of the relationship between religion and science and something of his own search to understand the nature of the sacred.

In the chapter on Extraterrestrial Intelligence Sagan tells a story revealing a lot about human nature:

As is bound to happen roughly every 15 or 17 years, Earth and Mars come especially close together as they both revolve around the Sun. The year was 1877, and that year an Italian astronomer, Giovanni Schiaparelli, looked at Mars through one of the newly built large telescopes in Italy and saw many intricate, fine, straight lines on the surface of that planet. He called them //canali//, meaning "channels" or "grooves" in Italian. This was promptly translated into English as "canals", with implications of design, intelligence, and large planetary engineering construction work.
This sentiment was later picked up by a rich American astronomer, Percival Lowell, who was convinced that Schiaparelli was right about the "Martian canals", and "that the planet was covered by a network of intersecting single and double straight lines, that these lines passed over enormous distances and therefore could correspond only to engineering works on the most massive imaginable scale". Lowell decided to build a large telescope in Arizona (which he naturally called the Lowell Observatory ;-) to further investigate the phenomenon.

As Sagan describes the developments:
>Other observers also found the canals; that is, drew them. Photographing them was much more difficult. The argument was that atmospheric "seeing" was unreliable, due to the intrinsic turbulence and unsteadiness of the Earth's atmosphere, which generally prevent you from seeing the canals. But every now and then, by chance, the atmosphere steadies, the turbulent eddies of air are not in your line of sight to Mars, and just for a moment you can see the planet as it truly is with this network of straight lines.
Lowell reasoned that experienced observers could more reliably draw the lines they saw through the telescope, since time exposure photography would be very difficult with the recurring turbulences and atmospheric distortions. There were other astronomers who, for the life of them, couldn't see the straight lines, but there was a range of explanations: They were not in the best sites for their telescopes. They were not experienced observers. They were not adequate draftsmen. They were biased against the idea of canals on Mars.
Lowell and Schiaparelli were by no means the only astronomers who could find the canals. Astronomers all over the world saw them, drew them, mapped them. Drawings were compared, similarities were found. And there were literally hundreds of individual canals that were named. 
>Lowell deduced from these straight lines an ancient civilization on Mars more advanced than we, having to face a planetary drought of proportions unprecedented on Earth. And their solution was to construct a vast, globe-girdling network of canals to carry liquid water from the melting polar caps to the thirsty inhabitants of the equatorial cities. What's more, it was possible to conclude, Lowell thought, something of the politics of the Martians, because the network crossed the entire planet. Therefore there was a world government on Mars, at least as far as engineering detail went. And Lowell went so far as to be able to identify the capital of Mars, a particular spot on the surface called Solis Lacus, the Lake of the Sun, from which six or eight different canals seemed to emanate. 
As you probably know, this story got into the popular consciousness, into folk literature (like H.G. Wells's //War of the Worlds//), Sci-Fi novels (like the Mars novels of Edgar Rice Burroughs, of Tarzan fame), and radio and the movies (like Orson Welles's radio dramatization of //The War of the Worlds//, which scared Americans in 1938).

Sagan continues:
>And yet there are no canals on Mars. Not one. The whole thing is wrong. It's a mistake. It is a failure of the human hand-eye-brain combination. Lowell's idea evoked a passion, I think a very understandable and humane passion. The vision of more advanced beings on a neighboring planet, with a world government, struggling to keep themselves alive, was a wonderful idea. It was so wonderful that the wish to believe it trumped the scrupulousness of the investigative process.

and he sums it up:
>So what can we conclude from this? Well, we can conclude that in a sense Lowell was right, that the canals of Mars are a sign of intelligent life. The only question is which side of the telescope the intelligent life is on.

So, as the title says:
- "Keen (as in 'sharp or penetrating' but also 'having or showing eagerness or enthusiasm') observers" - yes!
- "Seeing what you think" - absolutely possible!
A language that doesn't affect the way you think about programming is not worth knowing.
It is said that the second cheapest university department to fund is the math department. Its members only need pencils, paper, and wastebaskets.
The cheapest department to fund is the philosophy department. They don't need wastebaskets.
[>img[Piet Hein on wisdom|./resources/Knuth - wisdom 1.png]]
In a short [[video clip|https://youtu.be/v678Em6qyzk?t=20]], Donald Knuth (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]") shows the following [[grook (form of poetry)|http://www.archimedes-lab.org/grooks.html]] by [[Piet Hein|http://www.piethein.com/page/piet-hein-16/]] hanging on the wall in his home, which I think typifies his "slow, deep thinking" approach to everything he has done.

[[Alan Kay|https://en.wikipedia.org/wiki/Alan_Kay]] (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]"), at OOPSLA 1997, gave an [[insightful talk|https://www.youtube.com/watch?v=oKg1hTOQXoY]] titled "The computer revolution hasn't happened yet" ([[transcript|http://www.vpri.org/pdf/m2007007a_revolution.pdf]]), and in it he mentions (and recommends) Arthur Koestler's book //The Act of Creation// and pictorially shows [[Bisociation|http://www.bisociations.com/aboutus/Biso.html]] (a term coined by Koestler) as an intersection of 2 "geometrical planes": the pink one, which we live in most of the time, where innovation is mainly optimization of things; and the blue plane (perpendicular to the pink), where innovation takes you to new dimensions (literally from the 2D pink plane to 3D on the blue plane).

Kay also makes a nice comparison of our reactions at the moment of this disruptive innovation^^1^^, or a paradigm shift^^2^^, illustrating the reaction in Humor, Science, and Art:
>Most creativity is a transition from one context into another where things are more surprising. There’s an element of surprise, and especially in science, there is often laughter that goes along with the “Aha.” Art also has this element. Our job is to remind us that there are more contexts than the one that we’re in — the one that we think is reality.
[img[click to see Alan Kay's AHA's|resources/Alan_Kay_AHA_moments_small.png][resources/Alan_Kay_AHA_moments.png]]


----
^^1^^ - See also [[It's Big Meaning, not Big Data.]]
^^2^^ - See [[Zen Physics, meaning and understanding]]
I like reading the various inter-connected topics covered on [[Brain Pickings|http://www.brainpickings.org/index.php]].
Their belief and common theme about creativity and innovation (from the site) is:
>...creativity, after all, is a combinatorial force. It's our ability to tap into the mental pool of resources ... and to combine them in extraordinary new ways. In order for us to truly create and contribute to the world, we have to be able to connect countless dots, to cross-pollinate ideas from a wealth of disciplines, to combine and recombine these ideas and build new ideas - like ~LEGOs.

I came across a somewhat different, but related (connected ;-) perspective expressed by [[Robert Logan|http://www.physics.utoronto.ca/people/homepages/logan/]] in his book [[What is Information?|What is Information? by Robert Logan]], where in [[chapter 5|resources/logan_information_ch5.pdf]] he mentions George Basalla, the author of the book [[The Evolution of Technology|resources/Basalla - the evolution of technology - sample.pdf]] (1988):
>Basalla [says] that technology evolves through a process of descent and modification: "Any new thing that appears in the made world is based on some object already in existence." He cites many examples of how innovative technologies borrowed significantly from earlier technologies citing the cotton gin, the electric motor and the transistor as three examples. Gutenberg's moveable type printing press is another example.
Then he mentions Thomas Kuhn:
>The mechanism for the propagation of science's organization is what Thomas Kuhn (1972) termed normal science. Every success in science gives rise to a paradigm, which is articulated and applied to as many phenomena as possible. This is the mechanism of descent. Once a paradigm fails to provide a satisfactory description of nature a period of revolutionary science begins with the search for a new paradigm. This is the mechanism of modification. If the new paradigm provides a satisfactory explanation to the science community by providing replicable results a new round of normal science begins. This is the mechanism of selection. Science propagates its organization through normal science and evolves by descent, modification and selection just like living organisms. The analogy between the Darwinian evolution of living organisms and the process of descent, modification and selection in Kuhn's model led him to cautiously conclude at the end of his analysis of scientific revolutions the following:
>>The analogy that relates the evolution of organisms to the evolution of scientific ideas can easily be pushed too far. But with respect to the issues of this closing section it is very nearly perfect. . . . Successive stages in that developmental process are marked by an increase in articulation and specialization. And the entire process may have occurred, as we now suppose biological evolution did, without benefit of a set goal, a permanent fixed scientific truth, of which each stage in the development of scientific knowledge is a better exemplar (Kuhn 1972, pp. 172-73).
And Karl Popper:
>Karl Popper (1979, p. 261), whose description of science differs from that of Kuhn's, nevertheless also found an analogy between the evolution of science and that of living organisms:
>>The growth of our knowledge is the result of a process closely resembling what Darwin called 'natural selection'; that is, the natural selection of hypotheses: our knowledge consists, at every moment, of those hypotheses which have shown their (comparative) fitness by surviving so far in their struggle for existence; a competitive struggle which eliminates those hypotheses which are unfit.
In an article in Aeon titled [["We could all do with learning how to improvise a little better"|https://aeon.co/ideas/we-could-all-do-with-learning-how-to-improvise-a-little-better]], Stephen T. Asma (a professor of philosophy at Columbia College Chicago) writes about [["loosening up a bit"|A case for "loosening up a bit"]] (or "lightening up" a bit) and brings up a few good points:
* According to Han Fei Zi (a Chinese philosopher, c. 280-233 BCE), moving decision-making away from people and putting it in stable institutions is a successful strategy for large, complex and expansionary societies, which are increasingly made up of strangers. On the other hand, bureaucracy is soul-crushing and alienating in its inflexibility and inhumanity. What is more, it exacts a psychological price.
* But, life is intrinsically changing, moving, disappointing and positively surprising. Meeting life with unbending expectations is a recipe for disaster.
* According to Laozi (a Chinese philosopher, 5th century BCE), it is by being receptive to immediate experience (wu-wei) that the wise person adapts perfectly to the unique needs of the situation.
* But improvisation isn’t foolproof either.
* Sometimes, the problem with "bad/failed" improvisation is a kind of domain overreach. The great physicist, for example, is not automatically qualified to make good poetry. The great business-person is not inevitably effective in the domain of government. And yet, sometimes, overreach is exactly what is needed. In other words: there are no rigid rules/guidelines (ha!).
* We may think that good improvisation is discernible only in hindsight; we know it’s good because it worked. However, this cannot be entirely correct. Often, we do know good improvisation when we see it in action (in, for example, a musical performance, sports, business/political negotiations, personal/adaptive teaching).
* The single greatest predictor of quality improv is simply experience. But there's nothing simple about experience. A great improviser usually has thousands of hours of practice and problem-solving underneath every improvisation.
* This kind of experience makes good improv highly intuitive in a biological sense, not a mystical sense. It taps into the subtle systems of animal awareness, mostly unconscious, that we all possess, such as body-awareness (proprioception), personal space (proxemics), and arousal states such as fight or flight. Muscle memory is loaded with this kind of intuitive wisdom.
* The improviser usually does not have optimal resources (they work in a resource-deficient environment/context). And this paucity of resources is the very condition of creativity because it forces a kind of lateral thinking.
* Improvisation is rule-governed in some cases, but moderately so. It is a flexible practice that sees rules as elastic. Improv is serviceable rather than optimal. Improvisational manoeuvres already exist within a system of received conventions, and only experience can help you decide to respect or ignore them.
* Failing is a major aspect of improvisation. Failure is the thing we learn from, so it’s the cornerstone of productive experience. 
* Aristotle described improvisational decision-making as ‘practical reason’, distinct from rule-following logic. [[Barry Schwartz discusses this "Practical Wisdom"|Barry Schwartz on Aristotle on Practical Wisdom]] well.
* we’ve all known enough young talent to doubt his generalisation about age [that (only?) age and experience enable exceptional performance], but his wider point about experience is correct.
And Asma summarizes:
>Ultimately, improvising is a form of receptivity to experience, and also a behavioural style based upon that experience. It evolved as part of our cognitive operating system to make good use of available resources. It is a fundamental inheritance, emerging out of our primate evolution. But the narcissistic improviser and the inexperienced improviser – so popular these days in politics and celebrity culture – leaps tragically into delicate situations with no plans, practice, tact or ability to read the room. That is an improvising ape of an altogether different kind.
My name is Haggai Mark. I live in Northern California. I am a Learning Solutions designer and implementer, and CS^^1^^ and STEM^^2^^ course developer and teacher, with strong engineering expertise in system/platform/software architecture and implementation, as well as deep experience in technology-enabled instructional design and development. 
I have a [[Masters degree from Stanford University in Learning, Design, and Technology|http://ldtprojects.stanford.edu/~hmark/]]. 
I am blessed with an awesome family: a wonderful wife and 3 great children.

[img[infinite State Machine|./resources/fin-StateMachine-ite.gif][./resources/fin-StateMachine-ite.gif]]

@@font-size:16pt;color(red):''*''@@ "I am a strange loop"^^3^^ [[(and even Doug said so himself)|On the strange human loop]]
@@font-size:16pt;color(red):''**''@@ in-finite ~StateMachine @@font-size:12pt;color(red):''***''@@ :)
@@font-size:16pt;color(red):''***''@@ [[A simple (but interesting) Cellular Automaton (State Machine): Wolfram's Rule 110|Cellular Automaton Rule 110]] (see the code sketch below)

^^1^^ CS = Computer Science
^^2^^ STEM = Science, Technology, Engineering, Mathematics
^^3^^ on [["strange loopiness"|http://en.wikipedia.org/wiki/I_Am_a_Strange_Loop]] (as in self-reference) a-la and //about// Douglas Hofstadter, in [[indexing|On indexers and indexing]]


[img[adr|./resources/adr.png][./resources/adr.png]]
In another Q&A session at the Gifford Lectures^^1^^, Carl Sagan brings up a crucial point (and often missed "logical trick"):

Questioner: I'd like to ask you about why you think any omnipotent being would want to leave evidence for us. 

CS: I think I entirely agree with what you say. There is no reason I should expect an omnipotent being to leave evidence of His existence, except that the Gifford Lectures^^1^^ are supposed to be about that evidence. And I hope it is clear that the fact that I do not see evidence of such a God's existence does not mean that I then derive from that fact that I know that God does not exist. 
That's quite a different remark. Absence of evidence is not evidence of absence. Neither is it evidence of presence. And this is again a situation where our tolerance for ambiguity is required.

See also [[BrainPickings'|Varieties of Scientific Experience: Carl Sagan on Science and God]] [[Varieties of Scientific Experience: Carl Sagan on Science and God|https://www.brainpickings.org/2013/12/20/carl-sagan-varieties-of-scientific-experience/]]

----
^^1^^ Gifford Lectures on Natural Theology - Sagan spoke at the University of Glasgow in 1985, following in the footsteps of such celebrated scientists and philosophers as James Frazer, Arthur Eddington, Werner Heisenberg, Niels Bohr, Alfred North Whitehead, Albert Schweitzer, and Hannah Arendt.
This week I went to a talk by the Venerable Ajahn Jayasaro (Shaun Michael Chiverton) at Stanford. He finished his interesting talk titled "Moving smoothly along a bumpy road" with the following story.

A monk somewhere in Southeast Asia decided to embark on a trip on foot to a remote mountain, to meditate on its top. A few days into the trip he got lost. As he was walking through a small village, he saw an old woman sitting at the side of the road cleaning and sorting lentils.

The monk approached her and asked: Grandmother, grandmother, how long is it to the mountain? The old woman did not respond. So he asked again: Grandmother, grandmother, please tell me, how long do I have to walk to get to the mountain? And again, the old woman kept sorting the lentils and did not answer. Being a Buddhist monk, he asked for the third time: Grandmother, please tell me, how long to the mountain? And still, the woman would not respond.

So, the monk turned back to the road and started walking again. After taking a few steps, he heard the old woman behind him saying: three days.
He turned around and asked her: if you knew the answer, why didn't you tell me, even though I asked three times?
And the woman responded: I couldn't. First, I had to know how determined you are, and how fast you walk.


This reminded me of a different story about being lost and asking for directions, this time in the West.

Two young professionals in the Seattle, WA area decided to go on a hot-air balloon trip. They rented a balloon for the day, but being inexperienced, they got lost, and didn't know how to return to the landing spot. As they were floating in the sky, they passed above a group of tall office buildings, something that looked like an industrial office campus, and spotted a man standing on the terrace on the top of one of the buildings.

So they lowered the balloon, and shouted to the man on the terrace: We are lost. Can you please tell us where we are?
To which the man on the terrace answered: You are in a hot-air balloon above building 10 of this campus.
One man in the balloon scratched his head, puzzled at this answer, while the other man started confidently navigating and after a short while landed them at the landing spot.

The puzzled man asked the navigator: how did you know where to go, even though the man on the top of the building gave us totally useless information?
To which the navigator responded: Since the answer was totally accurate but seemingly useless, I knew he must be an engineer. And from the campus buildings I figured it had to be the Microsoft Engineering Campus (in Seattle/Redmond, WA), so finding the landing spot was easy.

You can say many things about these two stories, but one thing you could say relevant to "actionable learning" is: accurate information is important, but not sufficient. You must have the right context.
After teaching [[a course at Citizen Schools|The "Acing Racing" course]] using [[Sage|http://www.sagemath.org/]], and covering basic math/physics concepts like distance, speed, time, graphs, etc., at the 6-8 grade level, I implemented a different program/simulation using [[Scratch|http://scratch.mit.edu/]].
My intention was to make it more concrete for the kids (Sage is "drier", since it was originally written by mathematicians, for mathematicians, whereas Scratch was written by education-aware/focused people at MIT, and is therefore "kid-friendly"). I also thought of using Scratch for teaching both the math/physics concepts and some basic programming concepts (algorithmic solutions, task breakdown, routines/functions/sub-routines, looping, etc.)

[[The result|resources/Scratch_car_racing.png]] is a relatively simple race scenario, [[published on the web|http://scratch.mit.edu/projects/myh9090/1871961]], enabling kids to explore and experience the various factors that go into "traditional" middle school distance-speed-time problems, so they can get a concrete and personal sense of the math involved.
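To make the underlying math concrete here too, a minimal text-based sketch in Python (emphatically //not// the actual Scratch project; the car speeds and track length below are made-up numbers):
{{{
# Two cars race over a fixed track; we watch distance = speed * time
# unfold step by step, like in the Scratch simulation (values are made up).
TRACK_LENGTH = 120                # meters
cars = {"red": 12, "blue": 10}    # speeds in meters per second

time = 0
positions = {name: 0 for name in cars}
while all(pos < TRACK_LENGTH for pos in positions.values()):
    time += 1
    for name, speed in cars.items():
        positions[name] = speed * time        # distance = speed * time
    print("t=%2ds  " % time +
          "  ".join("%s: %3dm" % (n, p) for n, p in positions.items()))

winner = max(positions, key=positions.get)
print("The %s car wins, after about %d seconds!" % (winner, time))
}}}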
This has some good insights into and implications for Instructional Design, but, to [[paraphrase Blaise Pascal|I have made this longer than usual because I have not had time to make it shorter.]]: since I don't have time to "originally write", I will just "massively cite" :)

In a [[research paper|https://chilab.asu.edu/sites/all/themes/chilab/public/publication_files/JEE-Menekse-7-1-2013.pdf]], Michelene Chi defines and tests the effectiveness of 3 different models of active learning: Active, Constructive, and Interactive.

This research supports Chi’s ICAP (Interactive, Constructive, Active, Passive) hypothesis, whose classification of overt learning activities can help researchers, instructional designers, and instructors determine activities appropriate for their intended research or instruction. The results suggest that when implemented properly, interactive modes are most effective, constructive modes are better than active and passive modes, and active modes are better than passive ones for student learning.

* Being Active
>In the active mode, students undertake overt activities that activate their own knowledge within the boundaries of the desired content. Students do something or manipulate the instructional information overtly, rather than passively receive information or instruction while learning or studying (Chi, 2009). Active activities emphasize the selected passages or manipulated components of a task, thus allowing students to pay more attention to them. The cognitive processes hypothesized by Chi that correspond with active activities are activating and searching for related knowledge, and encoding, storing, or assimilating new information with activated knowledge. These processes strengthen the existing knowledge and fill the gaps in knowledge, making it more retrievable and more complete.
* Being Constructive
>In the constructive mode, students undertake activities in which they generate knowledge that extends beyond the presented materials. In the active mode, for example, simply repeating a paragraph or underlining text does not extend beyond what was presented. But self- explaining, or explaining aloud to oneself a concept presented in a text, is constructive because it constructs meaning beyond the given content. The following types of activities can all be considered to be constructive: drawing a concept map, taking notes in one’s own words from a lecture, generating self-explanations, comparing and contrasting different situations, asking comprehension questions, solving a problem that requires constructing knowledge, justifying claims with evidence, designing a study, posing a research question, generating examples from daily lives, using analogy to describe certain cases, monitoring one’s comprehension, making strategic decisions in a video game, converting text-based information into symbolic notation, drawing and interpreting graphs, or hypothesizing and testing an idea.
* Being Interactive
>The interactive mode refers to two or more learners undertaking activities that develop knowledge and understanding extending beyond the materials being studied (similar to the constructive mode), but the interaction of the learners further enables them to build upon one another’s understanding. The main (but surface-level) difference between the interactive and constructive mode is that learners in the latter engage in activities alone. However, interaction between learners affords them the benefit of receiving feedback or prompting from each other, with each partner having some complementary knowledge or perspectives. The different knowledge and perspectives further provide the opportunity for co-creation or joint-construction, which is not possible in solo activities.

Chi claims (and [[backs it up with experimental data|https://chilab.asu.edu/sites/all/themes/chilab/public/publication_files/JEE-Menekse-7-1-2013.pdf]]) that:
>Through this give-and-take discussion [in the interactive mode of learning], students would be building knowledge in a way that would not have occurred if they had been working alone, since they can build on each other's contributions or refine and modify an original idea in ways that can produce novel ideas. Thus, interactive learning has the potential to be more beneficial than constructive learning, in which single individuals can only extend beyond the given information with their own ideas; in interactive learning, two individuals can further enrich the topic of discussion through jointly extending on a given content topic from two different perspectives and sets of ideas.
In an interesting [[collection of responses to letters from readers who want to write poetry|https://www.poetryfoundation.org/articles/68657/how-to-and-how-not-to-write-poetry-56d2484397277]], my favorite poet, [[Wislawa Szymborska|Wislawa Szymborska's Nobel Prize lecture (1996)]], gives some excellent advice^^1^^, which (not surprisingly?) beautifully applies to writing computer programs.

To paraphrase Szymborska^^1^^:
Some programmers (or students learning to program) may say, ‘I know my code has many faults, but so what, I’m not going to stop and fix it.’ And why is that? 
Perhaps because they hold coding so sacred? Or maybe they consider it insignificant? Both ways of treating programming are mistaken, and what’s worse, they free the novice coder from the necessity of working on his code. 
It’s pleasant and rewarding to tell our acquaintances that 'we just slapped things together' on Friday at 2:45 a.m. and/or that 'inspiration struck, and we just hacked it'. 
But the truth is that 'beautiful, elegant code' takes assiduous correcting, crossing out, and revising of those otherworldly 'inspirations'. Spirits are fine and dandy, but even coding has its prosaic side.

So it's absolutely true about both poetry and programming. One has moments of excitement and joy during the creative parts of the process. But there are (probably more) parts which are "mundane"^^2^^ and require discipline, perseverance, ingenuity, a positive and playful attitude, coolness, and patience.

Or as Alan Perlis (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]") quipped:
> It goes against the grain of modern education to teach students to program. What fun is there to making plans, acquiring discipline, organizing thoughts, devoting attention to detail, and learning to be self critical.


----
^^1^^ - the original advice from Wislawa Szymborska (translated by Clare Cavanagh)
> To Heliodor from Przemysl: “You write, ‘I know my poems have many faults, but so what, I’m not going to stop and fix them.’ And why is that, oh Heliodor? Perhaps because you hold poetry so sacred? Or maybe you consider it insignificant? Both ways of treating poetry are mistaken, and what’s worse, they free the novice poet from the necessity of working on his verses. It’s pleasant and rewarding to tell our acquaintances that the bardic spirit seized us on Friday at 2:45 p.m. and began whispering mysterious secrets in our ear with such ardor that we scarcely had time to take them down. But at home, behind closed doors, they assiduously corrected, crossed out, and revised those otherworldly utterances. Spirits are fine and dandy, but even poetry has its prosaic side.” 

^^2^^ which require a certain state of mind, or as Szymborska advises: "Let’s take the wings off and try writing on foot, shall we?"
This is inspired by [[Advice from an Old Programmer|http://learnpythonthehardway.org/book/advice.html]] by Zed Shaw, the author of //[[Learn Python the Hard Way|http://learnpythonthehardway.org/book/]]//.

This could be appropriate at the end of one of my Computer Science classes:

You have finished this course, and I hope you'll decide to continue programming. Programming may be part of your professional life, or it may be a hobby. Here are a few things I'd like to say as someone who has programmed from an early age, has been enjoying good and productive careers programming and teaching, and still enjoys it in various forms to this day.

Over the years, I've picked up quite a few programming languages. I would learn a new language either because I had to (as a requirement for a project), or because I "fell in love" with some aspect or capability of it (e.g., lambda calculus, objects, the functional paradigm). On this journey I found that Alan Perlis was right when he said:
>A language that doesn't affect the way you think about programming (and problem solving - my addition), is not worth knowing.
In the end, it's not the languages that matter but what you do with them. Don't get sucked into the occasional "religious wars" surrounding programming languages. Sometimes, with the excitement and new vistas of a new language (or fad), it's easy to forget this point, but this ''is'' the point of programming. What you //do// with a programming language is the important part, and is the source of joy, beauty, engagement and usefulness.

A unique aspect of programming is that it is a creative and intellectual activity which can produce interactive art. You can create art(ifacts) which interact and communicate with their users. Unlike other art forms, it is dynamic (interactive) ''and'' flows both ways (to and from the user/participant).

Programming as a career can be a good and interesting one, but there are other good and interesting careers. Actually, you may enjoy your career more, and be more successful if you use programming as part of //another// career or profession. Professionals who can code in biology, medicine, government, sociology, physics, history, and mathematics, to name a few, are respected and can do amazing things to advance those disciplines.

Computer Science and programming are relatively young disciplines. They are truly in their early stages of development, and have barely begun to realize their full potential in terms of impact on our lives. Their importance will only grow with time; you can enjoy the advantage of an "early adopter". So, go ahead, explore this fascinating intellectual and creative pursuit; enjoy and improve your life using it.

The activity of programming will change you. It will not make you better or worse, just different. You will develop new capabilities, new ways to be creative, to analyze, to figure things out, to be playful. Some people may be intimidated by your abilities and skills; some may not like it, or be jealous. Don't fall into the trap of labeling people who know how to program as nerds, socially challenged, or somehow strange. You now know how to code, and this is pretty cool and empowering!

In an interesting book review (of //The Internet of Us: Knowing More and Understanding Less in the Age of Big Data//^^1^^ by Michael P. Lynch) in The New Yorker titled [[After The Fact|http://www.newyorker.com/magazine/2016/03/21/the-internet-of-us-and-the-end-of-facts]], Jill Lepore, who is a professor of American History at Harvard, makes some interesting historic observations.

(see also [[The importance of telling the whole truth]])

She reminds us of the frightening sentence about truth manipulation:
>Everything faded into mist. The past was erased, the erasure was forgotten, the lie became truth. (George Orwell, 1984, Part 1, Chapter 7)

and writes:
>The past has not been erased, its erasure has not been forgotten, the lie has not become truth. But the past of proof is strange and, on its uncertain future, much in public life turns. In the end, it comes down to this: the history of truth is cockamamie, and lately it’s been getting cockamamier.

She mentions Michael P. Lynch's "thought experiment" about "knowledge implants" in our bodies (per Google's Larry Page, who has promised "where if you think about a fact it will just tell you the answer"), and then:
>picture this: overnight, an environmental disaster destroys so much of the planet’s electronic-communications grid that everyone’s implant crashes. It would be, Lynch says, as if the whole world had suddenly gone blind. There would be no immediate basis on which to establish the truth of a fact. No one would really know anything anymore, because no one would know how to know. I Google, therefore I am not.

Lepore points out from history:
>A long historical precedent stands behind these judicial methods for the establishment of truth, for knowing how to know what’s true and what’s not. In the West, for centuries, trial by combat and trial by ordeal—trial by fire, say, or trial by water—served both as means of criminal investigation and as forms of judicial proof.

>[...] Trial by combat and trial by ordeal place judgment in the hands of God. Trial by jury places judgment in the hands of men. It requires a different sort of evidence: facts.
>A “fact” is, etymologically, an act or a deed. It came to mean something established as true only after the Church effectively abolished trial by ordeal in 1215, the year that King John pledged, in Magna Carta, “No free man is to be arrested, or imprisoned . . . save by the lawful judgment of his peers or by the law of the land.” In England, the abolition of trial by ordeal led to the adoption of trial by jury for criminal cases. This required a new doctrine of evidence and a new method of inquiry, and led to what the historian Barbara Shapiro has called “the culture of fact”: the idea that an observed or witnessed act or thing—the substance, the matter, of fact—is the basis of truth and the only kind of evidence that’s admissible not only in court but also in other realms where truth is arbitrated. Between the thirteenth century and the nineteenth, the fact spread from law outward to science, history, and journalism.

__As an aside__ - another (less bloody, but nonetheless possibly still hurtful) way to establish truth, or at least view things from a different perspective in order to "push back" on them, is wit/ridicule/humor/irony/satire. This is mentioned in the article [[Bend Sinister|https://monoskop.org/images/1/14/Goriunova_Olga_ed_Fun_and_Software_Exploring_Pleasure_Paradox_and_Pain_in_Computing.pdf]] by artist and programmer [[Simon Yuill|http://www.lipparosa.org/]], who quotes [[Lord Shaftesbury (Anthony Ashley Cooper, 3rd Earl of Shaftesbury)|https://plato.stanford.edu/entries/shaftesbury/]]:
>In 'Sensus Communis - an Essay on the Freedom of Wit and Humour' (1709), Shaftesbury argues that irony and satire (what he calls ‘raillery’) are an ideal means of testing the logic and substance of debate:
>>They may perhaps be Monsters, and not Divinitys, or Sacred Truths, which are kept thus choicely, in some dark Corner of our Minds: The Specters may impose on us, whilst we refuse to turn ’em every way, and view their Shapes and Complexions in every light. For that which can be shewn only in a certain Light, is questionable. Truth, ’tis suppos’d, may bear all Lights: and one of those principal Lights or natural Mediums, by which Things are to be view’d, in order to a thorow Recognition, is Ridicule it-self, or that Manner of Proof by which we discern whatever is liable to just Raillery in any Subject.
>In subjecting ideas to the test of ridicule, humour acts as an ‘instrument of reason’ exposing that which claims to be proportionate and true yet which rests upon a logic that is deformed and ugly.
Lepore continues:
>But the movement of judgment from God to man wreaked epistemological havoc. It made a lot of people nervous, and it turned out that not everyone thought of it as an improvement. For the length of the eighteenth century and much of the nineteenth, truth seemed more knowable, but after that it got murkier. Somewhere in the middle of the twentieth century, fundamentalism and postmodernism, the religious right and the academic left, met up: either the only truth is the truth of the divine or there is no truth; for both, empiricism is an error. That epistemological havoc has never ended: much of contemporary discourse and pretty much all of American politics is a dispute over evidence.

Lynch, in his book, claims that we are now at another turning point in the history of knowing and facts. Lepore writes:
>Then came the Internet. The era of the fact is coming to an end: the place once held by “facts” is being taken over by “data.” This is making for more epistemological mayhem, not least because the collection and weighing of facts require investigation, discernment, and judgment, while the collection and analysis of data are outsourced to machines. “Most knowing now is Google-knowing—knowledge acquired online,” Lynch writes in “The Internet of Us” (his title is a riff on the ballyhooed and bewildering “Internet of Things”). We now only rarely discover facts, Lynch observes; instead, we download them. Of course, we also upload them: with each click and keystroke, we hack off tiny bits of ourselves and glom them on to a data Leviathan.

>“The Internet didn’t create this problem, but it is exaggerating it,” Lynch writes, and it’s an important and understated point. Blaming the Internet is shooting fish in a barrel—a barrel that is floating in the sea of history. It’s not that you don’t hit a fish; it’s that the issue is the ocean. No matter the bigness of the data, the vastness of the Web, the freeness of speech, nothing could be less well settled in the twenty-first century than whether people know what they know from faith or from facts, or whether anything, in the end, can really be said to be fully proved.

And like relying on GPS and voice directions for navigation:
>When we Google-know, Lynch argues, we no longer take responsibility for our own beliefs, and we lack the capacity to see how bits of facts fit into a larger whole^^1^^. Essentially, we forfeit our reason and, in a republic, our citizenship. You can see how this works every time you try to get to the bottom of a story by reading the news on your smartphone.

She concludes by alluding, I think, to civil, non-religious morality and human government by writing:
>People who care about civil society have two choices: find some epistemic principles other than empiricism on which everyone can agree or else find some method other than reason with which to defend empiricism. Lynch suspects that doing the first of these things is not possible, but that the second might be. He thinks the best defense of reason is a common practical and ethical commitment. I believe he means popular sovereignty. That, anyway, is what Alexander Hamilton meant in the Federalist Papers, when he explained that the United States is an act of empirical inquiry: “It seems to have been reserved to the people of this country, by their conduct and example, to decide the important question, whether societies of men are really capable or not of establishing good government from reflection and choice, or whether they are forever destined to depend for their political constitutions on accident and force.” The evidence is not yet in. 


----
^^1^^ - [[Alan Kay]] expressed similar concerns (in a [[video (43 min.)|https://www.youtube.com/watch?v=gTAghAJcO1o]]) about the current state of technology (and state of mind of technologists) when he said "It should be [[Big Meaning|http://planspace.org/20141125-alan_kay_on_big_data/]], not Big Data" (i.e., that's what we should aim for).

In an [[inspiring and thought provoking talk called "Rethinking Computer Science Education"|https://www.youtube.com/watch?v=N9c7_8Gp7gI]], CS Pioneer and Big Ideas Thinker [[Alan Kay|https://en.wikipedia.org/wiki/Alan_Kay]] (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]") shared his thoughts on CS Education.

Main Points of the talk:

1. The context humans should care about: Don't worry only about preparing children for the job market
   - worry about how to grow a successful next generation since this is the goal of the current generation
   - worry about how to evolve good citizenship
   - worry about how to create a "rich environment" ("man does not live by bread alone")

2. Humanity 101
   - our mind is "very theatrical" - we respond theatrically/emotionally to our environment
   - our minds are tiny (bad at multi-tasking, 7±2 items, etc.)
   - we are mostly non-human (we are mainly reptilian and mammalian)

3. We need to establish "real computer science"
   - Alan Perlis meant that we need a "science of processes" - a science to study processes and things in process
      - processes in mechanics, biology, society, politics, chemistry, tech/engineering, mental etc.
      - the Big Idea: Computing should do a "math" for all processes

4. CAD - Simulate - Fabricate
   - we need to do more design in CS, so we can do sim and fab
   - Programming languages are really user interfaces; they are all Turing Equivalent so no need to get hung up on them
      - some of them run faster than others
      - some of them allow you to write less code than others
      - some allow you to think and express new ideas - and that's the important part

5. Thresholds - tracking and being driven by "wiggly curves" and ups and downs is meaningless
   - we should think about a threshold of "what is really needed" and going above that is what should guide us
   - if we are below the threshold, we know we need to close the quality gap
   - Better is an enemy of what is actually needed. Incremental progress drives us and makes us happy, even when it is still under the threshold
   - Perfect is also dangerous - because it discourages you from getting anywhere above the threshold, since it will always be "less than perfect"


Just teaching people to code doesn't teach them to think.
Steve Jobs sequence: "Everyone should learn how to code. It teaches you how to think." Badly.
|borderless|k
|[img[Jobs 1|./resources/Jobs1.png][./resources/Jobs1 copy.png]]|[img[Jobs 2|./resources/Jobs2.png][./resources/Jobs2 copy.png]]|[img[Jobs 3|./resources/Jobs3.png][./resources/Jobs3 copy.png]]|[img[Jobs 4|./resources/Jobs4.png][./resources/Jobs4 copy.png]]|
|borderless|k


- Francis Bacon warned against errors in thought and judgement coming from 4 sources (which he called "idols"):
   - errors due to our genetics (see Humanity 101 above), 
   - errors due to our culture (beliefs, fads, etc.), 
   - errors due to our language (not representing things as they are),
   - errors due to academia (coming up with bad ideas and perpetually teaching them)

- Bacon called for a set of heuristics to deal with the world, a new way of dealing with knowledge.
- Science is not the knowledge.
- Science (and [[elsewhere, Thinking|Thinking is the negotiation of relationships between our noisy representations and "what's out there".]]) is the negotiation of relationships between our noisy representations and "what's out there".
- To think of science as the way to know the truth is wrong, and we should not teach it as such.
- Science is the most powerful thought system ever invented, because it gave up the idea of knowing the truth, and substituted it with a sequence of false ideas, some of which are incredibly powerful.
- Computers and computing are representers, and are an excellent way to represent and simulate ideas and examine complexities.
In an [[inspiring and thought provoking talk called "The Power of Simplicity"|https://www.youtube.com/watch?v=NdSD07U5uBs]], CS Pioneer and Big Ideas Thinker [[Alan Kay|https://en.wikipedia.org/wiki/Alan_Kay]] (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]") shared his thoughts on Simplicity.

* (t = ~9:00) __One way to achieve simplicity (of a solution)__ is not by finding the simplest building blocks to solve a problem, but finding slightly more sophisticated building blocks to create your solution from (Kay's example is Kepler trying to explain/fit the orbits of planets with ellipses instead of circles)

* (t = ~10:00) "There is no concept of progress in our genes" - Our genes have code for coping with situations ("resiliency"). So when we work for a company or are part of an organization, we often (and soon enough) find that the company/organization is not working in an optimal way (to put it mildly). So we can either quit or cope, since the company/org is not likely to change.
** the company/org will not easily change, because this is not its "mission". It is working on its "A Task" (what the company/org thinks it's all about) and changing/evolving is not its "A Task".
** Most companies/orgs don't have a "B Process" which would look at the "A Task" and see if it is really/still most efficient.
** and rarely does a company/org have a "C Process" which would look at whether the "A Task" is even still relevant and question the basic/fundamental assumptions and reasons.
* So, __another way to achieve and maintain (!) simplicity (of a company or organization and its output/product/deliverable)__ is to prioritize the "B Process" and the "C Process" (see above), and not just focus on "A Task".

* (t = ~13:00) Part of finding a good solution is finding out what the (bigger) context is. In other words, finding out what the real/big problem to solve is, not just the current/immediate problem at hand. But most of us are taught and get rewarded for solving problems, not for finding bigger ones (on the way to solving //them//). When someone comes up with a bigger problem they usually get shot down ("why are you coming up with a bigger problem? We have enough problems already!"). So, __another way to achieve simplicity (of a solution)__ is to identify, analyze, and solve the bigger context and not get stuck in the rut of the small, in-context problems.

* (t = ~41:00) Incremental thinking is the killer of all great ideas. Think of an idea (or a glimmer of an idea) without worrying about how you are going to get from here to there.
In his great book //[[Cryptonomicon|https://en.wikipedia.org/wiki/Cryptonomicon]]// Neal Stephenson describes a meeting between [[Alan Turing|https://en.wikipedia.org/wiki/Alan_Turing]] and Lawrence Waterhouse (a fictitious character in the book; an American mathematician and cryptographer who supposedly worked with Turing when they both were at Princeton before WWII).

Turing and Waterhouse meet in a pub not too far from Bletchley Park^^1^^; upon entering and seeing Turing reading a tome at a table, Waterhouse asks:
>"Designing another machine, Dr. Turing?"
>[... Alan] frowns and looks at him quizzically. "How on earth did you guess I was designing another machine? Simply a guess based on prior observations?"
>"Your choice of reading material," Waterhouse says, and points to Alan's book: the //RCA Radio Tube Manual// ^^2^^.
>Alan gets a wild look. "This has been my constant companion," he says. "You must learn about these valves, Lawrence! Or tubes as you would call them. Your education is incomplete otherwise. I cannot believe the number of years I wasted on //sprockets//! God!"
>"Your zeta-function machine^^3^^? I thought it was beautiful," Lawrence says.
>"So are many things that belong in a //museum//," Alan says. 
>"That was six years ago. You had to work with the available technology," Lawrence says.
>"Oh, Lawrence! I'm surprised at you! If it would take ten years to make the machine with //available// technology, and only //five// years to make it with a new technology, and it will take only //two// years to //invent// the new technology, then you can do it in //seven// years by inventing the new technology first!"
>"Touche."
>"This is the new technology," Alan says, holding up the //RCA Radio Tube Manual// like Moses brandishing a Tablet of the Law. "If I had only had the presence of mind to use these, I could have built the zeta-function machine much sooner, and others besides."
A bit later in their conversation:
>[Turing] is hugging the //RCA Radio Tube Manual// to himself with one arm, doodling in a notebook with the other. Waterhouse thinks that really the //RCA Radio Tube Manual// is like a ball and chain holding Alan back. If he would just work with pure ideas like a proper mathematician he could go as fast as thought. As it happens, Alan has become fascinated by the incarnations of pure ideas in the physical world. The underlying math of the universe is like the light streaming in through the window. Alan is not satisfied with merely knowing that it streams in. He blows smoke into the air to make the light visible. He sits in meadows gazing at pine cones and flowers^^4^^, tracing the mathematical patterns in their structure, and he dreams about electron winds blowing over the glowing filaments and screens of radio tubes, and, in their surges and eddies, capturing something of what is going on in his own brain. Turing is neither a mortal nor a god. He is Antaeus^^5^^. That he bridges the mathematical and physical worlds is his strength and his weakness.


----
^^1^^ [[Bletchley Park|http://www.bbc.co.uk/history/places/bletchley_park]] - Britain's main decryption establishment during World War II. Ciphers and codes of several Axis countries were decrypted including, most importantly, those generated by the German Enigma and Lorenz machines.
^^2^^ [[The RCA Receiving Tube Manual|http://www.tubebooks.org/tubedata/RC16.pdf]] - all 324 pages of it! [PDF]
^^3^^ - see [[zeta-function machine|http://tuxar.uk/turing/zeta-machine-riemann-hypothesis/]] and the Riemann Hypothesis. ([[GD link|https://drive.google.com/open?id=1239ulHa9k288fN6Sx-ee56UUVy8jsGW_5By_SBfMS3k]]).
^^4^^ - Turing's [[Watching the Daisies Grow & Fibonacci phyllotaxis|https://drive.google.com/open?id=1R5_cfaCQ6n02YqRH94zdGJmOAMYMHDo6]]
^^5^^ - From Wikipedia: [[Antaeus|https://en.wikipedia.org/wiki/Antaeus]] (from the Greek mythology - the half-giant son of Poseidon and Gaia) would challenge all passers-by to wrestling matches and remained invincible as long as he remained in contact with his mother, the earth [Gaia]. As Greek wrestling, like its modern equivalent, typically attempted to force opponents to the ground, he always won, killing his opponents. Antaeus fought Hercules as he was on his way to the Garden of Hesperides as his 11^^th^^ [[Labor|http://www.perseus.tufts.edu/Herakles/labors.html]]. Hercules realized that he could not beat Antaeus by throwing or pinning him. Instead, he held him aloft and then crushed him to death in a bear-hug.
[img[Einstein Ambigram|resources/einstein_ambigram.gif][resources/einstein_ambigram.gif]] [1]
----
1- Ambigram from [[01101001|http://www.01101001.com/ambigrams/index.html]]. See [[other ambigrams|Ambigrams by Scott Kim]] by Scott Kim
 American professor of psychology and affiliate professor of philosophy at the University of California, Berkeley.
A Buddhist joke:

Question: What did the Buddhist monk ask of the hot-dog vendor?

Answer: Make me one with everything.
!!!Upside-down ambigram
[img[mathematics ambigram by Scott Kim|./resources/scott_kim_mathematics.jpg][./resources/scott_kim_mathematics.jpg]]

!!!Left-right ambigram
[img[mirror ambigram by Scott Kim|./resources/scott_kim_mirror.jpg][./resources/scott_kim_mirror.jpg]]
Seymour Papert (the inventor of the programming language/environment Logo, and a participant on [[Edge|http://www.edge.org/]]) wrote [[an article|resources/AnExplorationintheSpaceofMathematicsEducations.html]] in 1996 on a 'different enough' "alternative math education" framework (even though he calls it a 'point' in the Math Education space), with a few interesting principles and dimensions:

* __The power principle__ or "what comes first, using it or 'getting it'?" The natural mode of acquiring most knowledge is through use leading to progressively deepening understanding. Only in school is this order systematically inverted.

* The principle of __project before problem__ is a similar/related inversion. Problems come up in the course of projects and are sometimes "solved" and sometimes "dissolved." It is an inversion of this order to define the goal of mathematics as problem-solving.
** It's not surprising that Neil Gershenfeld from the [[Center for Bits and Atoms|http://cba.mit.edu/]] (spawned from the MIT Media Lab) is advocating [[project based learning|Interdisciplinary Learning]] too. Papert was a founding faculty member of the Lab... 

* __New media open the door to new contents__. Of course it is not to be assumed that the shift of media has radical consequences in itself, but if the new media are used to support the old content, they will often do this badly, since the content was defined for the old media.
** For example: old content+media: written text on paper, vs. new content+media: simulations on computers
** [[Transplanted games|Transplanted games - new media but old content]] are a __bad example__, since they are not taking advantage of new media (computer technologies)
(see what Robert Logan (and Marshall McLuhan) has to say on new media/language and content in his book [[The Sixth Language: Learning a Living in the Internet Age|New languages]]).

* __The "thingness" principle__: object before operation. making entities (e.g., math functions) operational by giving them thing-like properties and by relating to them as things (e.g. defining/building, composing, transforming math functions).

* __The principle of putting dynamics before statics__ (for example in physics). Papert claims that by 'thingifying' entities/concepts like gravity and velocity (i.e. allowing them to have properties and be manipulated as objects), dynamics can be grasped early and intuitively.
** Papert says that many teachers justifiably point out that one needs calculus to do dynamics seriously and calculus comes at the end of a prerequisite chain that runs something like arithmetic to algebra to calculus. He points out that there is truth in it insofar as it reflects the static nature of pre-computational media. And says: "to state a complex matter far too simply, calculus is a way of representing dynamic phenomena in the static medium of pencil and paper; it is 'hard' because the medium fights the message."
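
To ground Papert's "thingness" principle in code (a toy example of mine, not Papert's), here is what treating math functions as //things// can look like in Python: functions become objects that can be defined, composed, and transformed like any other value:
{{{
def compose(f, g):
    """Build a new function-thing: (f o g)(x) = f(g(x))."""
    return lambda x: f(g(x))

def shift(f, dx):
    """Transform a function-thing by sliding its graph right by dx."""
    return lambda x: f(x - dx)

square = lambda x: x * x
double = lambda x: 2 * x

double_then_square = compose(square, double)
print(double_then_square(3))    # (2*3)^2 = 36
print(shift(square, 1)(3))      # (3-1)^2 = 4
}}}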
As part of my [[MA studies at Stanford|http://ldtprojects.stanford.edu/~hmark/index.html]], I designed and implemented a prototype of an online tutoring system incorporating principles, capabilities, and features that would make learning more effective. I was delighted to see that [[the Khan Academy|The Khan Academy]] has implemented very similar capabilities in their online system as well.

This project applies Cybernetic principles to Intelligent Tutoring systems.
The system consists of a domain model including domain knowledge and skills. It also includes a desired learner performance model (or profile); the learner's actual performance is continuously compared against it (in terms of both knowledge and skills).
This creates a cybernetic feedback loop, enabling the system to adjust and tailor the instruction to where the learner is, in relation to the domain model/profile.
There is also immediate 3D feedback given to the learner in terms of their performance level (mastery level dimension), their knowledge coverage (knowledge domain dimension), and their efficiency (time dimension).
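
To make the feedback loop concrete, here is a minimal, hypothetical sketch (the skills, thresholds, and names like {{{pick_next_item}}} are mine for illustration; they are not taken from the actual spec):
{{{
import random

# Desired profile: target mastery level per skill (numbers are illustrative).
domain_model = {'fractions': 0.9, 'decimals': 0.9, 'percents': 0.8}

def pick_next_item(domain_model, learner):
    """Return the skill with the largest gap between desired and actual mastery."""
    gaps = {skill: desired - learner.get(skill, 0.0)
            for skill, desired in domain_model.items()}
    skill = max(gaps, key=gaps.get)
    return skill if gaps[skill] > 0 else None      # None: desired profile reached

def ask_question(skill):
    """Stand-in for presenting an item and scoring the learner's answer."""
    return random.random() < 0.7

learner = {'fractions': 0.3}                       # current mastery estimates
while True:
    skill = pick_next_item(domain_model, learner)
    if skill is None:
        break                                      # learner matches the desired profile
    correct = ask_question(skill)
    current = learner.get(skill, 0.0)
    # the feedback step: nudge the mastery estimate up or down
    learner[skill] = min(1.0, current + 0.1) if correct else max(0.0, current - 0.1)
}}}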

My system design spec can be found on the [[Stanford LDT site|http://ldtprojects.stanford.edu/~hmark/projects/Haggai Mark - Second Order Feedback system spec.pdf]] or on the [[local server|./resources/Haggai Mark - Second Order Feedback system spec.pdf]].
From Ian Stewart's (as the Brits say:) //lovely// book //Professor Stewart's Hoard of Mathematical Treasures//:

* Did Erwin Schrödinger have a cat?
** Yes and no.^^*^^
* Did Werner Heisenberg have a cat?
** I’m not sure.
* Did Kurt Gödel have a cat?
** If he did, we can’t prove it.
* Did Fibonacci have a cat?
** He certainly had a lot of rabbits.
* Did René Descartes have a cat?
** He thought he did.
* Did ~Augustin-Louis Cauchy have a cat?
** That’s a complex question.
* Did Georg Bernhard Riemann have a cat?
** That hypothesis has not yet been proved.
* Did Albert Einstein have a cat?
** One of his relatives did.
* Did Luitzen Brouwer have a cat?
** Well, he didn’t not have one.
* Did William Feller have a cat?
** Probably.
* Did Ronald Aylmer Fisher have a cat?
** The null hypothesis is rejected at the 95% level.


----
^^*^^ another take on Schrödinger's cat (from Terry Pratchett):
>Technically, a cat locked in a box may be alive or it may be dead. You never know until you look. In fact, the mere act of opening the box will determine the state of the cat, although in this case there were three determinate states the cat could be in: these being Alive, Dead, and Bloody Furious.
This is an oldie but goodie, which could be (and has been :) extended to other professions:

A few professionals were asked to prove (or disprove, if they dare :-) the //conjecture// which states that
{{{ALL odd numbers are also prime numbers.}}}
* The mathematician reasoned:
** 3 is a prime, 5 is a prime, and by mathematical induction: all odd numbers are prime.
* The physicist stated:
** 3 is a prime, 5 is a prime, 7 is a prime, 9 ... is a measurement error, 11 is a prime, and therefore the data supports it: all odd numbers are prime.
* The engineer said:
** 3 is a prime, 5 is a prime, 7 is a prime, 9 is a prime, 11 is a prime, and therefore it's clear that all odd numbers are prime.
* The programmer calculated:
** 3 is a prime, 5 is a prime, 7 is a prime, 7 is a prime, 7 is a prime,  7 is a prime,  7 is a prime,  7 is a prime, ... and therefore: all odd numbers are prime.
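
(For the curious, here is one way the programmer's bug might look in Python -- my own embellishment of the joke: the loop's update step was apparently only ever tested up to 7.)
{{{
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

n = 3
for _ in range(8):              # bounded here, so the example terminates
    print(n, "is a prime" if is_prime(n) else "is NOT a prime")
    if n < 7:                   # the bug: n stops advancing at 7...
        n += 2                  # ...and 7 is prime, so all looks well
}}}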
This is [[a paper by Nick Bostrom|http://www.simulation-argument.com/simulation.pdf]] (see [[GD link|https://drive.google.com/open?id=1sw2A5NpFrbj3H9Kglivghm6ooUtzCymG]]).

The abstract:
This paper argues that at least one of the following propositions is true:
(1) the human species is very likely to go extinct before reaching a “posthuman” stage;
(2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof);
(3) we are almost certainly living in a computer simulation.

It follows that the belief that there is a significant chance that we will one day become posthumans who run ancestor-simulations is false, unless we are currently living in a simulation. A number of other consequences of this result are also discussed.
known for his Science Fiction writing.
<<forEachTiddler 
where 
'tiddler.tags.contains("book-chapter") && tiddler.tags.contains("Authentic Happiness")'
sortBy 
'tiddler.title'>>
I was drawn to this book by an [[article in the NY Times|resources/what-if-the-secret-to-success-is-failure.html]] about education and character (or character education).
When I was studying for my [[MA degree at the Stanford University School of Education|http://ldtprojects.stanford.edu/~hmark/]], I was intrigued by work done by Ann L. Brown on [[metacognition|resources/Brown 94 - advancement of learning.pdf]].

I was very interested in the question of whether ''metacognitive skills'' can be taught, and if so how? Since I studied in the Learning, Design, and Technology program at Stanford, I was thinking of how technology could help, if indeed metacognitive skills could be taught to learners.
Similarly, what intrigued me in the article and the book was the question of whether ''character'' can be taught.

The [[tests and surveys in this book can be found online|http://www.authentichappiness.sas.upenn.edu/Default.aspx]].
In an [[animated and impassioned TED Talk|https://www.youtube.com/watch?v=IDS-ieLCmS4]], [[Barry Schwartz|https://en.wikipedia.org/wiki/Barry_Schwartz_(psychologist)]] dives into the question "How do we do the right thing?" With help from collaborator [[Kenneth Sharpe|https://www.swarthmore.edu/profile/kenneth-sharpe]], he shares stories that illustrate the difference between following the rules and truly choosing wisely.

Both of them wrote [[a book titled Practical Wisdom: The Right Way to Do the Right Thing|https://www.swarthmore.edu/sites/default/files/assets/documents/user_profiles/ksharpe1/PW%20Cover%20and%20Blurb.pdf]].

(Here's [[an overview of Aristotelian ethics (and practical wisdom)|https://plato.stanford.edu/entries/aristotle-ethics/]])

So there is among many people a kind of collective dissatisfaction with the way things are working, with the way our institutions run.
[...]

There are two kinds of responses that we make to this sort of general dissatisfaction.
If things aren't going right, the first response is:
let's make more rules, let's set up a set of detailed procedures to make sure that people will do the right thing.

[For example:] Give teachers scripts to follow in the classroom,[..., or]
Give judges a list of mandatory sentences to impose for crimes, so that you don't need to rely on judges using their judgment,[..., or]
Impose limits on what credit card companies can charge in interest and on what they can charge in fees.

More and more rules to protect us against an indifferent, uncaring set of institutions we have to deal with.

Or -- or maybe and --
in addition to rules, let's see if we can come up with some really clever incentives so that, even if the people we deal with don't particularly want to serve our interests, it is in their interest to serve our interest -- the magic incentives that will get people to do the right thing even out of pure selfishness.

So [For example:] we offer teachers bonuses if the kids they teach score passing grades on these big tests that are used to evaluate the quality of school systems.

Rules and incentives -- "sticks" and "carrots."

[And another example:] We passed a bunch of rules to regulate the financial industry in response to the recent collapse.
There's the ~Dodd-Frank Act, there's the new Consumer Financial Protection Agency [...]

In addition, we are struggling to find some way to create incentives for people in the financial services industry
that will have them more interested in serving the long-term interests even of their own companies, rather than securing short-term profits.
So if we find just the right incentives, they'll do the right thing -- as I said -- selfishly, and if we come up with the right rules and regulations,
they won't drive us all over a cliff.

But what we believe, and what we argue in the book, is that there is no set of rules, no matter how detailed, no matter how specific,
no matter how carefully monitored and enforced, there is no set of rules that will get us what we need.

Why? Because [for example,] bankers are smart people. And, like water, they will find cracks in any set of rules.
You design a set of rules that will make sure that the particular reason why the financial system almost collapsed can't happen again.
It is naive beyond description to think that having blocked this source of financial collapse, you have blocked all possible sources of financial collapse.

So it's just a question of waiting for the next one and then marveling at how we could have been so stupid as not to protect ourselves against that.


What we desperately need, beyond, or along with, better rules and reasonably smart incentives, is we need virtue.
We need character. We need people who want to do the right thing.
And in particular, the virtue that we need most of all is the virtue that Aristotle called "practical wisdom."
Practical wisdom is the moral will to do the right thing and the moral skill to figure out what the right thing is.

So Aristotle was very interested in watching how the craftsmen around him worked. And he was impressed at how they would improvise novel solutions to novel problems -- 
problems that they hadn't anticipated.

So one example is he sees these stonemasons working on the Isle of Lesbos, and they need to measure out round columns. Well if you think about it, it's really hard to measure out round columns using a ruler. So what do they do? They fashion a novel solution to the problem. They created a ruler that bends, what we would call these days a tape measure -- a flexible rule, a rule that bends.
And Aristotle said, "Hah, they appreciated that sometimes to design rounded columns, you need to bend the rule."

And Aristotle said often in dealing with other people, we need to bend the rules.
Dealing with other people demands a kind of flexibility that no set of rules can encompass.
Wise people know when and how to bend the rules.
Wise people know how to improvise.
[...]
they are kind of like jazz musicians. The rules are like the notes on the page, and that gets you started,
but then you dance around the notes on the page, coming up with just the right combination for this particular moment
with this particular set of fellow players.

So for Aristotle, the kind of rule-bending, rule exception-finding and improvisation that you see in skilled craftsmen
is exactly what you need to be a skilled moral craftsman.
And in interactions with people, almost all the time, it is this kind of flexibility that is required.

A wise person knows when to bend the rules. A wise person knows when to improvise.
And most important, a wise person does this improvising and rule-bending in the service of the right aims.
If you are a rule-bender and an improviser mostly to serve yourself, what you get is ruthless manipulation of other people.
So it matters that you do this wise practice in the service of others and not in the service of yourself.

And so the will to do the right thing is just as important as the moral skill of improvisation and exception-finding.
Together they comprise practical wisdom, which Aristotle thought was the master virtue.


[[At another talk, at Google, Barry Schwartz summarized practical wisdom and wise people|https://www.youtube.com/watch?v=y2f17aNrKag]] thusly:
* Wise people know when and how to make the exception to every rule. 
* Wise people know when and how to improvise. 
** We talk about wisdom as a kind of moral jazz. There are notes on the page, if you like. Those are the rules, but they're just guidelines. And what makes great jazz is not that you play the notes on the page, it's how you deviate from the notes on the page. You can do that well or you can do that badly. Wise people are good jazz musicians. 
* Wise people know how to find the mean, which is not the arithmetic average.
** It's what Aristotle had in mind when he talked about how courage was the mean between cowardice and recklessness. Now where is the mean between cowardice and recklessness? The answer is: it depends. It depends on the context. It depends on the circumstances. There is no formulaic way to figure out what the mean is. 
* Wise people know how to choose among virtues when they conflict.
** Wise people figure out how to do it and the point of balance, again, as Aristotle emphasized, depends on who you're dealing with, what the circumstances are. 
* Wise people know how to put themselves in other people's places. 
** They can see the world as other people see the world. And they can feel what other people are feeling. Empathic understanding is an essential ingredient of wisdom. It's something that takes experience to learn. 
* A wise person uses these skills in pursuit of the right aims, to serve other people and not to serve him or herself.
** If you use all of these skills to serve yourself, you become a Machiavellian manipulator and that's not what Aristotle thought a virtuous person was like. 
* And last, a wise person is made, not born.
** Everybody has the capacity to develop these characteristics, but
*** it takes experience.
*** It takes trial and error. 
*** It takes making mistakes and getting wiser as a result of learning from your mistakes. 
** Nobody is born with the magic gift of wisdom.

So, too many rules undermine the development of the skill that you need to be wise and too much reliance on incentives undermines the will that you need to be wise. And again, all of these efforts to reform systems were well-intentioned. They were not designed to make the systems worse. They were designed to make the systems better. And they may make the systems better in the short run, but they create a problem which guarantees that the systems will be worse in the long run. 
In an impassioned [[TED Talk|https://www.youtube.com/watch?v=3B_1itqCKHo]], [[Barry Schwartz|https://en.wikipedia.org/wiki/Barry_Schwartz_(psychologist)]], a [[psychologist from Swarthmore|https://www.swarthmore.edu/SocSci/bschwar1/]], described the concept he coined Idea Technology.

Compare this to [[Raymond Smullyan's way of expressing it in a biblical fable|The power of ideas - free will]].

From Schwartz's talk transcript:

[when I'm talking about Idea Technology]
I'm not talking about the technology of things, profound though that is. I'm talking about another technology.
I'm talking about the technology of ideas.
...
In addition to creating things, science creates ideas. Science creates ways of understanding.
And in the social sciences, the ways of understanding that get created are ways of understanding ourselves.
And they have an enormous influence on how we think, what we aspire to, and how we act.

If you think your poverty is God's will, you pray.
If you think your poverty is the result of your own inadequacy, you shrink into despair.
And if you think your poverty is the result of oppression and domination, then you rise up in revolt.
Whether your response to poverty is resignation or revolution, depends on how you understand the sources of your poverty.

This is the role that ideas play in shaping us as human beings, and this is why idea technology may be the most profoundly important technology that science gives us.
And there's something special about idea technology, that makes it different from the technology of things.
With things, if the technology sucks, it just vanishes, right? Bad technology disappears.

With ideas -- false ideas about human beings will not go away if people believe that they're true.
Because if people believe that they're true, they create ways of living and institutions that are consistent with these very false ideas.

And that's how the industrial revolution created a factory system in which there was really nothing you could possibly get out of your day's work, except for the pay at the end of the day.
Because the father -- one of the fathers of the Industrial Revolution, Adam Smith -- was convinced that human beings were by their very natures lazy,
and wouldn't do anything unless you made it worth their while, and the way you made it worth their while was by incentivizing, by giving them rewards. That was the only reason anyone ever did anything.
So we created a factory system consistent with that false view of human nature. But once that system of production was in place, there was really no other way for people to operate, except in a way that was consistent with Adam Smith's vision.

So the work example is merely an example of how false ideas can create a circumstance that ends up making them true.

It is not true that you "just can't get good --help-- [work] anymore."
It is true that you "can't get good --help-- [work] anymore"
when you give people work to do that is demeaning and soulless.
And interestingly enough, Adam Smith -- the same guy who gave us this incredible invention of mass production, and division of labor -- understood this.
He said, of people who worked in assembly lines, of men who worked in assembly lines, he says:
"He generally becomes as stupid as it is possible for a human being to become."

Now, notice the word here is "become." "He generally becomes as stupid as it is possible for a human being to become."
Whether he intended it or not, what Adam Smith was telling us there, is that the very shape of the institution within which people work creates people who are fitted to the demands of that institution and deprives people of the opportunity to derive the kinds of satisfactions from their work that we take for granted.

The thing about science -- natural science -- is that we can spin fantastic theories about the cosmos, and have complete confidence that the cosmos is completely indifferent to our theories. It's going to work the same damn way no matter what theories we have about the cosmos.

But we do have to worry about the theories we have of human nature, because human nature will be changed by the theories we have that are designed to explain and help us understand human beings.

The distinguished anthropologist, Clifford Geertz, said, years ago, that human beings are the "unfinished animals."
And what he meant by that was that it is only human nature to have a human nature that is very much the product of the society in which people live.
That human nature, that is to say our human nature, is much more created than it is discovered.

We design human nature by designing the institutions within which people live and work.


And at the end Schwartz implores the audience ([[TED attendees|https://www.ted.com/about/conferences]]):
>And so you people -- pretty much the closest I ever get to being with masters of the universe -- you people should be asking yourself a question, as you go back home to run your organizations.
>Just what kind of human nature do you want to help design?
Inspired by Clifford Pickover's article "Beauty and the Bits" in his book //Mazes for the Mind//.
As Pickover writes:
>The humble bits that lie at the very foundation of computing have a special beauty all their own.

!!!Basic binary bits background
Here are the "truth tables" for the basic logic operations (AND, OR, XOR, NOT):
{{{

AND | 0 1     OR | 0 1     XOR | 0 1    NOT |
----+-----    ---+----     ----+----    ----+--
 0  | 0 0      0 | 0 1       0 | 0 1     0  | 1
 1  | 0 1      1 | 1 1       1 | 1 0     1  | 0
}}}

Here are some examples of binary numbers and bit operations.
"|" is the logical OR operation (bit by bit)
"~" is the logical NOT operation
{{{
print "n1\tn2\t n1 | n2  ~n1"
print "------------------------------"
for n1 in [0, 7, 9, 6]:
  for n2 in [1, 2, 3, 4, 5, 6]:
    bin_n1 = bin(n1).lstrip('-0b').zfill(4)
    bin_n2 = bin(n2).lstrip('-0b').zfill(4)
    n1_or_n2 = bin(n1 | n2).lstrip('-0b').zfill(4)
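    # complement n1 "by hand", bit by bit: Python's own ~n1 would give a
    # negative number (two's complement), not the 4-bit complement shown here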
    comp_bin_list_n1 = [ str((1 + int(b)) % 2) for b in bin_n1]
    comp_bin_n1 = ''.join(comp_bin_list_n1)
    print "%s\t%s\t  %s\t  %s" % (bin_n1, bin_n2, n1_or_n2, comp_bin_n1)


resulting in:
n1	n2	 n1 | n2  ~n1
------------------------------
0000	0001	  0001	  1111
0000	0010	  0010	  1111
0000	0011	  0011	  1111
0000	0100	  0100	  1111
0000	0101	  0101	  1111
0000	0110	  0110	  1111
0111	0001	  0111	  1000
0111	0010	  0111	  1000
0111	0011	  0111	  1000
0111	0100	  0111	  1000
0111	0101	  0111	  1000
0111	0110	  0111	  1000
1001	0001	  1001	  0110
1001	0010	  1011	  0110
1001	0011	  1011	  0110
1001	0100	  1101	  0110
1001	0101	  1101	  0110
1001	0110	  1111	  0110
0110	0001	  0111	  1001
0110	0010	  0110	  1001
0110	0011	  0111	  1001
0110	0100	  0110	  1001
0110	0101	  0111	  1001
0110	0110	  0110	  1001
}}}

!!!Displaying some beauties with the bits
Equipped with the above background, one can create some interesting and unexpected patterns.
If you take a display area of, say, 256 rows by 256 columns, you can treat each location/coordinate pair (x, y) or (column, row) as a pixel, which will be colored based on a certain formula applied to its two coordinates.

And some beautiful bit twiddling results:
{{{
Basic bit twiddling 									Enhanced bit twiddling
pix_color =  (col | row) % 255							pix_color =  ((col | row) | (row * col)) % 255
}}}
[>img[Beauty and the Bit - enhanced Pickover|resources/BaB enhanced 1.png][resources/BaB enhanced.png]]
[img[Beauty and the Bit - basic Pickover|resources/BaB basic 1.png][resources/BaB basic.png]]

And a Python implementation:
{{{
import image

img = image.Image("empty_square.png")
win = image.ImageWin(img.getWidth(), img.getHeight())

img.setDelay(1, 100)   # fast update - setDelay(0) turns off animation
img.draw(win)

for row in range(img.getHeight()):
  for col in range(img.getWidth()):
    # pick one formula (as written, the second assignment overrides the first)
    pix_color = (col | row) % 255                     # basic Pickover
    pix_color = ((col | row) | (row * col)) % 255     # enhanced Pickover
    # clamp the tinted channels to the valid 0..255 range
    new_pixel = image.Pixel(min(pix_color + 60, 255), min(pix_color + 20, 255), pix_color)
    img.setPixel(col, row, new_pixel)
}}}

And the possibilities are endless: 
{{{
pix_color =  ((col | row) | (col - row)) % 255					pix_color =  (~(col | row) | ~(row * col)) % 255
}}}
[>img[Beauty and the Bit - enhanced Pickover|resources/BaB enhanced enhanced 1.png][resources/BaB enhanced enhanced.png]]
[img[Beauty and the Bit - enhanced Pickover|resources/BaB enhanced2 1.png][resources/BaB enhanced2.png]]
From the short essay [[Beginner's Mind|http://www.symmetrymagazine.org/sites/default/files/legacy/pdfs/200703/essay.pdf]] by Jennifer Ouellette.

>Several years ago I earned my black belt in jujitsu. Before tying the belt around my waist, the grand master had me don my old white belt, which designates a beginner. He then instructed me to look into a mirror and reflect on what it had been like to walk onto the dojo mat for the first time. The reasoning behind the ceremony is that in order to effectively teach a beginner any given technique, an instructor must be able to break it down into its most basic components. Ergo, it’s vital to remember what it was like to know nothing about the technique at all. 
>The same is true when it comes to communicating science.
And the same is true when teaching (Computer Science - like I do - or anything else).
(or "shouldn't you put your money where your mouth is?")

It is said that in Niels Bohr's (the great Danish physicist) house there was a horseshoe hanging over one door,
and a friend asked him, “What’s this all about?” 
Bohr answered, “Well, horseshoes are supposed to bring good luck, so we put it up there.” 
The friend then said, “Come now — surely you don’t believe it brings good luck, do you?” 
Bohr laughed and said, “Of course not!” 
And then he added, “But they say it works even if you don’t believe in it.”


This reminds me of a story about the philosopher [[Sidney Morgenbesser|https://en.wikiquote.org/wiki/Sidney_Morgenbesser]]:
A few weeks before his death, he asked another Columbia philosopher, David Albert, about God. "Why is God making me suffer so much?" he asked. "Just because I don't believe in him?"
The co-founder of Microsoft.
In his interesting article [["The Unreasonable Effectiveness of Mathematics"|http://www.dartmouth.edu/~matc/MathDrama/reading/Hamming.html]], [[Richard Hamming]] makes some thoughtful arguments [[On why Math works for us]] and [[On scientific vs. religious explanation]].

In homage to Hamming, and his brilliant idea around correcting errors in transmission of binary information, I've implemented a [[1 bit error correcting logic circuit|http://employees.org/~hmark/math/logicsim/logicsim_hamming3.html]] using [[LogicSim|http://www.tetzl.de/java_logic_simulator.html]].

The logic simulator is ~OpenSource, and produces Java applets, so it can be used ubiquitously. It allows for [[hierarchical design of binary logic circuits|http://employees.org/~hmark/math/logicsim/logicsim_hamming3_hierarchy.html]] (i.e. building "parts" or logic "integrated circuits", that can be used in larger and more complex designs, etc.).

As the notes on the [[simulation page|http://ldt.stanford.edu/~hmark/math/logicsim/logicsim_hamming3_hierarchy.html]] indicate, this is an error correcting circuit for single errors, and it achieves it by adding 3 more code bits to the 4 data bits transmitted. Obviously, if more errors need to be corrected, more code bits need to be added (as in many other areas in life, there is no free lunch in information processing).
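
As a software companion to the circuit, here is a minimal Python sketch of the same Hamming(7,4) scheme (my own illustration; bit-numbering conventions vary between presentations):
{{{
# Hamming(7,4): 4 data bits + 3 parity bits; corrects any single-bit error.
# Positions are numbered 1..7; parity bits sit at positions 1, 2, 4.

def encode(d1, d2, d3, d4):
    p1 = d1 ^ d2 ^ d4                     # checks positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4                     # checks positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4                     # checks positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]   # positions 1..7

def correct(word):
    w = word[:]
    s1 = w[0] ^ w[2] ^ w[4] ^ w[6]        # re-check positions 1, 3, 5, 7
    s2 = w[1] ^ w[2] ^ w[5] ^ w[6]        # re-check positions 2, 3, 6, 7
    s3 = w[3] ^ w[4] ^ w[5] ^ w[6]        # re-check positions 4, 5, 6, 7
    pos = s1 + 2 * s2 + 4 * s3            # the "syndrome": 0 = no error,
    if pos:                               # otherwise the erroneous position
        w[pos - 1] ^= 1                   # flip the offending bit back
    return [w[2], w[4], w[5], w[6]]       # recover d1..d4

sent = encode(1, 0, 1, 1)
garbled = sent[:]
garbled[4] ^= 1                           # a single bit flips in transit
assert correct(garbled) == [1, 0, 1, 1]   # ...and is corrected on arrival
}}}
Flipping any single one of the 7 bits gets corrected; flip two, and the decoder is fooled -- which is exactly the no-free-lunch point above.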
{{{What an astonishing thing a book is. It's a flat object made from a tree with flexible parts on which are imprinted lots of funny dark squiggles. But one glance at it and you're inside the mind of another person, maybe somebody dead for thousands of years. Across the millennia, an author is speaking clearly and silently inside your head, directly to you. Writing is perhaps the greatest of human inventions, binding together people who never knew each other, citizens of distant epochs.}}}
{{{Books break the shackles of time. A book is proof that humans are capable of working magic.}}}
: - Carl Sagan (Cosmos, 1980)

{{{People disappear when they die. Their voice, their laughter, the warmth of their breath. Their flesh. Eventually their bones. All living memory of them ceases. This is both dreadful and natural. Yet for some there is an exception to this annihilation. For in the books they write they continue to exist. We can rediscover them. Their humor, their tone of voice, their moods. Through the written word they can anger you or make you happy. They can comfort you. They can perplex you. They can alter you. All this, even though they are dead. Like flies in amber, like corpses frozen in ice, that which according to the laws of nature should pass away is, by the miracle of ink on paper, preserved. It is a kind of magic.}}}
: - Diane Setterfield (The Thirteenth Tale, 2006)

{{{I cannot remember the books I have read any more than the meals I have eaten; even so, they have made me.}}}
: - Ralph Waldo Emerson

{{{For readers, what they read is where they've been, and their collections [of books] are evidence of the trek. For writers, the personal library is the toolbox which contains the day's necessary implements of construction — there is no such thing as a skillful writer who is not also a dedicated reader — as well as a towering reminder of the task at hand: to build something worthy of being bound and occupying a space on those shelves, on all shelves...
Since bibliophiles will acknowledge the absurdity, the obese impracticality of gathering more books than there are days to read them, one's collection must be about more than remembering: it must be about expectation also. Your personal library, swollen and hulking about you, is the promise of betterment and pleasure to come, a giddy anticipation, a reminder of the happy work left to do, a prompt for those places to which your intellect and imagination want to roam. }}}
: - William Giraldi (The Bibliophile, 2018)



<<forEachTiddler 
where 
'tiddler.tags.contains("book")'
sortBy 
'tiddler.title'>>
Biologist, Schumacher College, U.K.; author of Nature's Due: Healing Our Fragmented Culture.
[[Brian Kernighan|http://en.wikipedia.org/wiki/Kernighan]]
In an interesting [[TED talk|http://ed.ted.com/lessons/the-game-layer-on-top-of-the-world-seth-priebatsch]], Seth Priebatsch, founder of the [[SCVNGR platform|https://en.wikipedia.org/wiki/SCVNGR]]^^1^^, described a few game-inspired principles which could significantly impact our behavior.

His first observation was that the "social layer" of connectivity to, and relationships with, people has been built in the last decade (2000-2010), and is mainly defined by Facebook's [[Open Graph|http://ogp.me/]].

He states that in the next decade (2010-2020) the "game layer", which is all about influencing behavior, will be built. This echoes Nir Eyal's ideas about [[creating habits and influencing behaviors|http://www.nirandfar.com/2012/03/how-to-design-behavior.html]] (in good //and// bad ways) through products and services.

In his talk Priebatsch lists and describes a few principles:
* The appointment dynamic - in which to succeed one has to do something predetermined at a predefined time (his examples: social "Happy Hour", taking medicine, playing Farmville)
* The influence dynamic - in which one player has the ability to modify the behavior of another player through social pressure (his examples: prestigious credit cards, publicized online game mastery levels, grades and titles/roles in school and work)
* Progression dynamic - in which success is measured and displayed granularly, indicating completing itemized tasks (his examples: the ~LinkedIn profile completion status indicator, leveled-up/super-powered characters in games, membership and loyalty tiers and progressions)
* Communal discovery dynamic - in which an entire community or group rallies to work together to solve a challenge or problem (his examples: [[Digg|http://digg.com/]] (where the community  tries to find/source the "best news/story"), ~McDonald's Monopoly, DARPA Balloon location)

----
^^1^^ - SCVNGR has been used as a tool for orientation for prospective and new students to college campuses, but it has also been used for orientation to campus libraries. Rather than taking a traditional tour-guide approach to orientation, colleges and libraries have used the SCVNGR application to let students/patrons visit the places they need to know, through an active, collaborative group activity.
Can mathematics be used to extract qualitative predictions from physical laws - or, for that matter, useful laws from data - automatically? Perhaps, but the omens aren't auspicious. With Gödel's theorem^^1^^ (the existence of true statements that can't be proved formally) and the concepts of computational complexity (the existence of many natural problems that can't be solved by practical algorithms) and chaos (the existence of natural equations that can't be solved systematically), mathematics has identified limits to its own power.


-- from [[Reasonably effective: I. Deconstructing a miracle|http://ned.ipac.caltech.edu/level5/March07/Wilczek/Wilczek.html]] by Frank Wilczek


----
^^1^^ See [[Gödel's Second Incompleteness Theorem Explained in Words of One Syllable|resources/Boolos-godel-in-single-syllables.pdf]] or compare to [[The world's shortest explanation of Gödel's theorem]] (alternative, searchable spellings for Gödel: Godel, Goedel)
I came across this [[pantoum|https://www.poets.org/poetsorg/text/pantoum-poetic-form]]^^1^^, which reminded me of the [[Exercises in Style - Raymond Queneau]], in terms of experimentation and playfulness.

The repetition-with-variation-and-refinement is appealing to me both because of implications and analogies to education/learning, and because of the successive/layered fleshing out of details, not unlike the process of top-down design and hierarchical decomposition, often done in software programming^^2^^.

Tonight you’re loaning Billy your car, a brand-new
seal-gray Volkswagen Passat with four doors,
though last week at 3 A.M., he stole your canoe,
and sank it in the autumn sea, then swam ashore.

Tonight you’re lending Billy your car—it’s brand-new—
and he’s a well-meaning, blue-eyed Byronic drinking man
who last week, at 3 A.M., stole your beached canoe,
and when it sank he blamed it on a dolphin.

A well-meaning, blue-eyed, Byronic, hard-drinking man
whose phone calls you take, no matter the hour,
who sank your canoe and blamed it on a dolphin,
and the young man with him, whom the sea sadly devoured,

so you’ll always take Billy’s call, no matter the hour.
Because, you sigh, his mother’s dying, too, and he’s drinking again.
He’s no longer a young man (he’s sad and he’s drowning),
and neither are you, and all friends sometimes sin.

Besides, you sigh, his mother’s dying, too, that’s why he’s drinking.
She wasn’t a beauty—she came on to you long ago.
And he’s not a young man; he’s drunk and he’s drowning.
So you press the phone to your cheek, stare out the dark window.

Who hasn’t come on to you? (Who wasn’t lovely long ago?)
(Even Billy did; his tragic need, his blank blue eyes.)
You press the phone to cheek, stare out the dark window,
and listen to him make a mess of our peaceful lives.

Now back in bed, we return to our disrupted romance.
Although last week, at 3 A.M., he stole your canoe,
you set a sinking man adrift in the sea of second chance:
tonight you’ve loaned Billy your car again, brand-new.


----
^^1^^ - pantoum - from [[Dictionary.com|http://www.dictionary.com/browse/pantoum]] (orig. pantoun, or pantun) - a Malay verse form consisting of an indefinite number of quatrains with the second and fourth lines of each quatrain repeated as the first and third lines of the following one. 


^^2^^ - from [[Poets.org|https://www.poets.org/poetsorg/text/pantoum-poets-glossary]]:
A pantoum typically begins:

Line 1:     A
Line 2:    B
Line 3:    C
Line 4:    D

Line 5:    B
Line 6:    E
Line 7:    D
Line 8:    F

Line 9:    E
Line 10:  G
Line 11:   F
Line 12:  H

It is customary for the second and fourth lines in the last stanza of the poem to repeat the first and third lines of the initial stanza, so that the whole poem circles back to the beginning, like a snake eating its tail.

Now, if this doesn't look like software description and decomposition/refinement, then what does? :)

Also, the last sentence above (referring to a snake) is a common image for describing/imagining the technique of recursion (see the mention of [[Ouroboros in the definition of recursion|https://en.wikipedia.org/wiki/Recursion]]) -- recursion being a software problem-solving technique where a function/method/procedure calls itself in order to solve the problem.
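
And to push the software analogy one step further, here is a toy Python sketch (entirely mine) that arranges a list of lines on the pantoum's schedule, with lines 2 and 4 of each stanza returning as lines 1 and 3 of the next (ignoring the customary loop back to the opening stanza at the end):
{{{
def pantoum_schedule(lines):
    """Weave lines into quatrains where lines 2 and 4 of each stanza
    come back as lines 1 and 3 of the next (the pantoum's rule)."""
    fresh = iter(lines)
    stanza = [next(fresh) for _ in range(4)]
    stanzas = [stanza]
    try:
        while True:
            stanza = [stanza[1], next(fresh), stanza[3], next(fresh)]
            stanzas.append(stanza)
    except StopIteration:                 # ran out of fresh lines
        pass
    return stanzas

for stanza in pantoum_schedule(list("ABCDEFGH")):
    print(" ".join(stanza))               # A B C D / B E D F / E G F H
}}}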
Sounds like terrible advice, but hear this guy out ([[Chad Fowler in his book The Passionate Programmer|https://pragprog.com/titles/cfcar2/the-passionate-programmer]]).

He has a very good point, which boils down to: if you swallow your pride/ego, open your ears, eyes, and mind, it will pay off greatly.

(This has been my experience as well :) and if you do it wholeheartedly, it "grows on you", and you start to enjoy these kinds of environments/people, even look forward to being in/with them.)

So here it is, simple but hard: ''Be the worst guy in every team you're in.''

Legendary jazz guitarist Pat Metheny has a stock piece of advice for young musicians, which is "Always be the worst guy in every band you're in."

>Before starting my career in information technology, I was a professional jazz and blues saxophonist. As a musician, I had the good fortune of learning this lesson early on and sticking to it. Being the worst guy in the band means always playing with people who are better than you.
>
>Now, why would you always choose to be the worst person in a band? "Isn't it unnerving?" you ask. Yes, it's extremely unnerving at first. As a young musician, I would find myself in situations where I was so obviously the worst guy in the band that I was sure I would stick out like a sore thumb. I'd show up to a gig and not even want to unpack my saxophone for fear I’d be forcefully ejected from the bandstand. I'd find myself standing next to people I looked up to, expected to perform at their level—sometimes as the lead instrument!
>
>Without fail (thankfully!), something magical would happen in these situations: I would fit in. I wouldn't stand out among the other musicians as a star. On the other hand, I wouldn't be obviously outclassed, either. This would happen for two reasons. The first reason is that I really wasn't as bad as I thought. We'll come back to this one later.
>
>The more interesting reason that I would fit in with these superior musicians—my heroes, in some cases—is that my playing would transform itself to be more like theirs. I'd like to think I had some kind of superhuman ability to morph into a genius simply by standing next to one, but in retrospect I think it's a lot less glamorous than that. It was more like some kind of instinctual herd behavior, programmed into me. It's the same phenomenon that makes me adopt new vocabulary or grammatical habits when I'm around people who speak differently than me. When we returned from a year and a half of living in India, my wife would sometimes listen to me speaking and burst into laughter, "Did you hear what you just said?" I was speaking Indian English.
>
>Being the worst guy in the band brought out the same behavior in me as a saxophonist: I would naturally just play like everyone else. What makes this phenomenon really unglamorous is that when I played in casinos and hole-in-the-wall bars with those not-so-good bands, I played like those guys. Also, like an alcoholic who slurs his speech even when he's not drunk, I'd find the bad habits of the bar bands carrying over to my non-bar-band nights.
>
>So, I learned from this that people can significantly improve or regress in skill, purely based on who they are performing with. And, prolonged experience with a group can have a lasting impact on one's ability to perform.
>
>The people around you affect your own performance. Choose your crowd wisely.
>
>Later, as I moved into the computer industry, I found that this learned habit of seeking out the best musicians came naturally to me as a programmer. Perhaps unconsciously, I sought out the best IT people to work with. And, not surprisingly, the lesson holds true. Being the worst guy (or gal, of course) on the team has the same effect as being the worst guy in the band. You find that you're unexplainably smarter. You even speak and write more intelligently. Your code and designs get more elegant, and you find that you're able to solve hard problems with increasingly creative solutions.
>
>Let's go back to the first reason that I was able to blend into those bands better than I expected. I really wasn't as bad as I thought. In music, it's pretty easy to measure whether other musicians think you're good. If you're good, they invite you to play with them again. If you're not, they avoid you. It's a much more reliable measurement than just asking them what they think, because good musicians don't like playing with bad ones. Much to my surprise, I found that in many of these cases, I would get called by one or more of these superior musicians for additional work or to even start bands with them.
>
>Attempting to be the worst actually stops you from selling yourself short. You might belong in the A band but always put yourself in the B band, because you're afraid. Acknowledging outright that you're not the best wipes away the fear of being discovered for the not-best person you are. In reality, even when you try to be the worst, you won't actually be.


Having said all that, I have to add that, as Ecclesiastes/Kohelet (1:9) said, there is nothing new under the sun (אֵין כָּל חָדָשׁ תַּחַת הַשָּׁמֶשׁ) (in the context of the above :), since in the Ethics of the Fathers ([[Pirkei Avot 4:15|https://www.sefaria.org/Pirkei_Avot.4?lang=bi]], פרקי אבות, פרק ד, יח, written in the third century C.E.) it is said:
> Be a tail to lions, rather than a head to foxes. ( וֶהֱוֵי זָנָב לָאֲרָיוֹת, וְאַל תְּהִי רֹאשׁ לַשּׁוּעָלִים )
Video clip from the new Cosmos series ("[[Cosmos: A Spacetime Odyssey|http://channel.nationalgeographic.com/cosmos-a-spacetime-odyssey/]]" presented by Neil deGrasse Tyson) [[A Pale Blue Dot|https://www.youtube.com/watch?v=p86BPM1GV8M]] and [[Wikipedia|http://en.wikipedia.org/wiki/Pale_Blue_Dot]]:

!!!!Sagan's original text (from his book; also accompanying the video):
>From this distant vantage point, the Earth might not seem of any particular interest. But for us, it's different. Consider again that dot. That's here. That's home. That's us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every "superstar," every "supreme leader," every saint and sinner in the history of our species lived there – on a mote of dust suspended in a sunbeam.

>The Earth is a very small stage in a vast cosmic arena. Think of the rivers of blood spilled by all those generals and emperors so that in glory and triumph they could become the momentary masters of a fraction of a dot. Think of the endless cruelties visited by the inhabitants of one corner of this pixel on the scarcely distinguishable inhabitants of some other corner. How frequent their misunderstandings, how eager they are to kill one another, how fervent their hatreds. Our posturings, our imagined self-importance, the delusion that we have some privileged position in the universe, are challenged by this point of pale light. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity – in all this vastness – there is no hint that help will come from elsewhere to save us from ourselves.

>The Earth is the only world known, so far, to harbor life. There is nowhere else, at least in the near future, to which our species could migrate. Visit, yes. Settle, not yet. Like it or not, for the moment, the Earth is where we make our stand. It has been said that astronomy is a humbling and character-building experience. There is perhaps no better demonstration of the folly of human conceits than this distant image of our tiny world. To me, it underscores our responsibility to deal more kindly with one another and to preserve and cherish the pale blue dot, the only home we've ever known.
>>-- Carl Sagan, Pale Blue Dot: A Vision of the Human Future in Space, 1997 reprint, pp. xv–xvi
I would suggest that science is, at least in part, informed worship. My deeply held belief is that if a god of anything like the traditional sort exists, then our curiosity and intelligence are provided by such a God. We would be unappreciative of those gifts if we suppressed our passion to explore the universe and ourselves. On the other hand if such a traditional God does not exist, then our curiosity and our intelligence are the essential tools for managing our survival in an extremely dangerous time. In either case the enterprise of knowledge is consistent surely with science; it should be with religion, and it is essential for the welfare of the human species.
Alison Gopnik quotes David Hume ("Causality is the cement of the universe") in [[a paper on causality|resources/Gopnik - causality.pdf]], saying that causation is central to human knowledge and the human experience, but it is "tricky", and led the philosopher Bertrand Russell to declare that causality should be ruled out of philosophical discussion altogether: "The law of causality, I believe, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm". (Talk about a strong statement!)

(See also a [[thought-provoking lecture by Judea Pearl|On The Art and Science of Cause and Effect - Judea Pearl]] (who also quotes Russell in a similar context)).

Another [[interesting paper on constructivism, causal models, Bayesian learning mechanisms|Reconstructing constructivism: Causal models, Bayesian learning mechanisms and the theory theory]] by Gopnik.
This ''Cellular Automaton (CA)'' rule is [[described by Stephen Wolfram|http://mathworld.wolfram.com/Rule110.html]].

This rule defines a one-dimensional cellular automaton: a row of simple state machines evolves in discrete time steps, and stacking the successive rows produces a 2D picture (reminiscent of, though unlike, the two-dimensional [[Conway's Game Of Life|http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life]]). It produces intricate, non-predictable (but fully computable) patterns/behavior, neither completely stable nor completely chaotic.
Rule 110 has been proven to be [[Turing complete|http://en.wikipedia.org/wiki/Turing_completeness]], that is, capable of [[universal computation|http://en.wikipedia.org/wiki/Universal_computation]].

The rule definition:
[img[CA rule 110 definition|./resources/rule_110_def.gif][./resources/rule_110_def.gif]]

A small sample output:
[img[CA rule 110 output|./resources/rule_110_output.gif][./resources/rule_110_output.gif]]
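
To experiment with the rule, here is a minimal Python sketch of mine (the {{{#}}}/{{{.}}} rendering is arbitrary): each cell's next state is looked up in the 8-entry rule table, indexed by its (left, center, right) neighborhood, and the binary digits of the number 110 are that table:
{{{
# Rule 110: the binary digits of 110 (01101110) are the rule table itself;
# bit k gives the next state of a cell whose (left, center, right)
# neighborhood encodes the number k.
RULE = 110

def step(cells):
    out = []
    n = len(cells)
    for i in range(n):
        left, center, right = cells[(i - 1) % n], cells[i], cells[(i + 1) % n]
        k = (left << 2) | (center << 1) | right
        out.append((RULE >> k) & 1)
    return out

cells = [0] * 31 + [1] + [0] * 31         # start from a single "on" cell
for _ in range(20):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
}}}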

It's interesting to read [[what Ray Kurzweil has to say|http://www.kurzweilai.net/reflections-on-stephen-wolfram-s-a-new-kind-of-science]] about Wolfram's statements on CA, complexity, and life:
>Wolfram makes the following point repeatedly:  Whenever a phenomenon is encountered that seems complex it is taken almost for granted that the phenomenon must be the result of some underlying mechanism that is itself complex. But my discovery that simple programs can produce great complexity makes it clear that this is not in fact correct. 
>I do find the behavior of Rule 110 rather delightful. However, I am not entirely surprised by the idea that simple mechanisms can produce results more complicated than their starting conditions. We've seen this phenomenon in fractals (i.e., repetitive application of a simple transformation rule on an image), chaos and complexity theory (i.e., the complex behavior derived from a large number of agents, each of which follows simple rules, an area of study that Wolfram himself has made major contributions to), and self-organizing systems (e.g., neural nets, Markov models), which start with simple networks but organize themselves to produce apparently intelligent behavior. At a different level, we see it in the human brain itself, which starts with only 12 million bytes of specification in the genome, yet ends up with a complexity that is millions of times greater than its initial specification.
In a [[New York Times article from 1984|http://www.nytimes.com/1984/06/10/magazine/solving-the-mathematical-riddle-of-chaos.html]], [[James Gleick|https://around.com/about/]] (of ~Time-Travel, Chaos, and other-fascinating-books fame) wrote about the (savant?) mathematician Mitchell Feigenbaum and other mathematicians tackling Chaos and how its understanding will affect our thinking ("Chaos [Theory] is asking very, very hard questions," says Joseph Ford, Regents Professor of the Georgia Institute of Technology. "It offers the possibility that the answers are going to severely modify our view of the universe.")

In the article Gleick covers^^1^^, almost in passing, one of the "hard questions", that of Free Will:
>The dripping faucet, for example. [Robert Shaw at the Institute for Advanced Study], a physicist who is another product of the Santa Cruz Collective, has been studying it for several years as a theoretical paradigm of chaos. 
>[...]  A slow drip can be quite regular, each drop a little bag of surface tension that breaks off when it reaches a certain size. But the size of the drop changes slightly depending on the speed of the flow and depending on whether there is a little rebound from the drop before. And that is enough to make the system nonlinear.
>
>"If you turn it up you can see a regime where the drops are still separate but the pitter-patter becomes irregular," Shaw says. "As it turns out, it's not a predictable pattern beyond a short period of time."
>[...] For some scientists, there is reason to pause when they explore systems as simple as a faucet and find that they are, as Shaw says, eternally creative.
To me, the description above ("eternally creative") hits the nail on the head. Doesn't it say (or at least imply) that we (humans) //interpret// some random behaviors as creativity? And isn't creativity one of the signs/features of Free Will (see below)?
>
>Practically speaking, it means that scientists have to think differently about the problems of nature. It changes their intuitions about what the answers can look like, and that changes the questions they ask. Chaos becomes a technique for doing science - but it also becomes a conceptual framework on which theoreticians can hang some of their most treasured suspicions about the workings of the universe.
>
So, here we go (with the reference in passing):
>To some physicists, chaos seems like a kind of answer to the problem of free will. The realization that the simplest, most deterministic equations can look just like random noise suggests - philosophically, at least - that the Calvinists' deterministic view of the world can be reconciled with the appearance of free will. 
And Gleick moves on:
>To people like Ford, chaos is also something like a death knell for the probabilistic ideas of quantum mechanics. "Chaos makes it absolutely clear what the limits are," he says. Last year he gave a talk titled, after Einstein, "Does God Play Dice With the Universe?"
>
>"The answer is yes, of course," he says. "But they're loaded dice. And the main objective of physics now is to find out by what rules were they loaded and how can we use them to our own ends."
If I correctly understand the above, it says that while "probabilistic ideas in quantum mechanics" (and for example, concepts like "wave functions") say that there are probability distributions for describing behaviors of things, and as such, there is, at least in theory, a "probability smear" from minus-infinity to plus-infinity on these phenomena.
But, Chaos Theory is "more committal" in the sense that it puts limits, structures, patterns around these same phenomena, basically "rejecting" the infinite "smear".
>
>To some artificial intelligence specialists at the Institute for Advanced Study and at Los Alamos, chaos suggests a means of linking the simple behavior of neurons to the unpredictable behavior of brains. And to Feigenbaum himself, it is at least a glimpse of a way to link the analytic achievements of his profession to his intuitions about the world. 
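
As a small aside of mine (not from Gleick's article): the classic concrete example of "the simplest, most deterministic equations" looking "just like random noise" is the logistic map, the very family of equations in which Feigenbaum found his universal constants:
{{{
# The logistic map x -> r * x * (1 - x): fully deterministic, yet for
# r = 4.0 the orbit wanders over (0, 1) looking just like noise.
def orbit(r, x, n):
    xs = []
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(round(x, 4))
    return xs

print(orbit(2.5, 0.2, 6))    # settles onto the fixed point 0.6
print(orbit(4.0, 0.2, 6))    # chaotic: no settling, no visible pattern
}}}
Every value is fully determined by the previous one, yet at r = 4.0 the sequence looks, for all practical purposes, like noise.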


----
^^1^^ - The bulk of Gleick's article is not about Free Will, but about the potential of Chaos Theory (and Mitchell Feigenbaum and others) to change our thinking, and in this respect, it is a different attempt to "explain away" some of the (so-far? forever?) unexplainable/incomprehensible aspects of Quantum Mechanics, [[something that Edwin Jaynes is also trying to do|Clearing up physical mysteries with probability]].
In [[an interesting talk|http://www.johnseelybrown.com/WorkingLearningLeading.pdf]] at the 2016 AACSB International Deans Conference, John Seely Brown (former Chief Scientist of Xerox Corporation and former director of the Xerox Palo Alto Research Center (PARC)) talked about (among other things :) the characteristics of good leaders, but I found them very relevant and applicable to teachers.

(there is overlap with [[another talk he gave at Stanford|Sense-making and learning in the new 21st century environment]] about sense-making in the 21st century)

* In this new era, leadership (and educating/teaching) is more akin to gardening than to chess playing
** I think that it is useful to compare/think about what [[Joe Kirby wrote in his blog|https://pragmaticreform.wordpress.com/2013/05/11/great-teaching/]] about exceptional teachers:
>>A great analogy here is that in their subjects, most teachers play draughts [checkers]; great teachers play chess. Deep strategic subject knowledge of how the movements of the pieces combine is crucial to effective instruction.
*** The way I think of the seeming difference between the two statements (which I do believe are both true!) is that each one is referring to a different aspect of good teachers/teaching. The gardener metaphor, I think, relates to the nurturing of the student, the relationship, the needs and interests (of the "plants"). Whereas, I think that the chess metaphor relates to the treating of the material/domain to be taught (the concepts, principles, skills, priorities, and relationships between them (the "pieces"))

* authenticity and integrity are vital
>How do we begin to develop these types of leaders? It starts with authenticity. Like a whitewater kayaker navigating the rapids, interpreting the ripples and understanding what they reveal about what lies beneath the surface, we must live in an ongoing conversation with the flows and changes happening around us. This requires living totally in the moment, experiencing the immediate at-hand circumstances of actions and quickly analyzing information using all senses. When a strong rapid pushes the kayak off balance, or even flips it over, what keeps a kayaker afloat or what helps him roll back to the surface is his center of gravity. It is the axis of balance that gives him the confidence to take on the whitewater and increase his levels of risk-taking. In this metaphor, the line of balance is analogous to authenticity and integrity. Authenticity is simply the capacity to know yourself, your core strengths, weaknesses, values and motivations, and to work from and for them. In a radically contingent whitewater world, decisions and actions critically need an authentic place to work from. That is your base of operation. 

*  in an era of complexity and wicked problems, we need to move from problem solving with an engineering approach to working ecosystemically. Teachers need to learn to do that, and they have to teach students to do that!

[[Chris Crawford|http://en.wikipedia.org/wiki/Chris_Crawford_%28game_designer%29]], a game designer
The [[Edgie, Christopher Langton|https://www.edge.org/memberbio/christopher_g_langton]] (of [[Langton's Ant|https://en.wikipedia.org/wiki/Langton%27s_ant]] fame) is [[talking about Dynamical Patterns|https://www.edge.org/conversation/christopher_g_langton-chapter-21-a-dynamical-pattern]], describing some of the advances made in Artificial Life, Artificial Intelligence, Genetic Algorithms, and simulations.

In the conversation he brings up something echoing the situation we had before computers were able to beat human chess grand masters:
>It's going to be hard for people to accept the idea that machines can be as alive as people, and that there's nothing special about our life that's not achievable by any other kind of stuff out there, if that stuff is put together in the right way. It's going to be as hard for people to accept that as it was for Galileo's contemporaries to accept the fact that Earth was not at the center of the universe. Vitalism is a philosophical perspective that assumes that life cannot be reduced to the mere operation of a machine, but, as the British philosopher and scientist [[C.H. Waddington|https://en.wikipedia.org/wiki/C._H._Waddington]] has pointed out, this assumes that we know what a machine is and what it's capable of doing.

He brings up a detail which many may overlook, namely, that our points of view, opinions, and definitions are not necessarily fixed (nor, as he points out, clear!). The boundary of terms like "machine" and "intelligence" keeps moving. Just as with chess, where people initially predicted that computers would never be able to play "good chess", let alone beat humans (since chess had been considered one of the top creative, as well as analytical, human activities), this can (will?) happen with other "uniquely human" abilities, achievements, and characteristics.

It's possible that machines (in the evolving, constantly expanding sense) will reach these "pinnacles" (also in the evolving sense) in radically different ways from humans (similar to [[how chess machines use totally different techniques/approaches|On human thinking vs. machine thinking in chess]], which are nonetheless successful), and this unknown (and at this point, or at any point, unknowable) quality is what may excite some and scare others.

Or as Langton points out:
>It's easy to descend into fantasy at this point, because we don't know what the possible outcome of producing "genuine" artificial life will be. If we create robots that can survive on their own, can refine their own materials to construct offspring, and can do so in such a way as to produce variants that give rise to evolutionary lineages, we'll have no way of predicting their future or the interactions between their descendants and our own. There are quite a few issues we need to think about and address before we initiate such a process.
And illustrates:
>A reporter once asked me how I would feel about my children living in an era in which there was a lot of artificial life. I answered, "Which children are you referring to? My biological children, or the artifactual children of my mind?" — to use [[Hans Moravec|https://www.edge.org/memberbio/hans_moravec]]'s phrase. They would both be my children, in a sense.

Langton then brings up another "chilling" scenario/thought (chilling can be either in the sense of scary, or, as Langton points out, "contributing to objectivity" and "cool/rational thinking"):
>Another set of philosophical issues raised in the pursuit of artificial life centers on questions of the nature of our own existence, of our own reality and the reality of the universe we live in. After working for a long time creating these artificial universes, wondering about getting life going in them, and wondering if such life would ever wonder about its own existence and origins, I find myself looking over my shoulder and wondering if there isn't another level on top of ours, with something wondering about me in the same way. It's a spooky feeling to be caught in the middle of such an ontological recursion. This is [[Edward Fredkin|https://en.wikipedia.org/wiki/Edward_Fredkin]]'s view: the universe as we know it is an artifact in a computer in a more "real" universe. This is a very nice notion, if only for the perspective to be gained from it as a thought experiment — as a way to enhance one's objectivity with respect to the reality one's embedded in.
<<forEachTiddler 
where 
'tiddler.tags.contains("citizen-schools-item")'
sortBy 
'tiddler.tags'
>>
>Civilization is in a race between education and catastrophe. - ''H. G. Wells''

[[Alan Kay|https://en.wikipedia.org/wiki/Alan_Kay]] (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]") gave an [[insightful talk|https://www.youtube.com/watch?v=oKg1hTOQXoY]] at OOPSLA 1997, titled [[The Real Computer Revolution Has Not Happened Yet]] ([[the 2007 transcript|http://www.vpri.org/pdf/m2007007a_revolution.pdf]]), in which he modifies Wells's aphorism and explains:

>Perhaps “education” is too vague a term here. I would replace it with “a race between education of outlook and catastrophe” because it is not knowledge per se that makes the biggest difference, but outlook or point of view which provides the context in which rational thinking actually matches up with the real world in the service of humanity.
[[“Claiming an Education” is a commencement speech|http://isites.harvard.edu/fs/docs/icb.topic469725.files/Rich-Claiming%20an%20Education-1.pdf]] by Adrienne Rich, delivered at the convocation of Douglass College, 1977.

Rich opens by describing what the relationship between teacher and student ought to be (vs. what it often is :( )
>If university education means anything beyond the processing of human beings into expected roles, through credit hours, tests, and grades (and I believe that in a women's college especially it might mean much more), it implies an ethical and intellectual contract between teacher and student. This contract must remain intuitive, dynamic, unwritten; but we must turn to it again and again if learning is to be reclaimed from the depersonalizing and cheapening pressures of the present-day academic scene.

She urges students to have the appropriate perspective about education, something every (life-long) learner should keep in mind:
>you cannot afford to think of being here to receive an education: you will do much better to think of being here to claim one. 
>One of the dictionary definitions of the verb "to claim" is: to take as the rightful owner; to assert in the face of possible contradiction. 
>"To receive" is to come into possession of: to act as receptacle or container for; to accept as authoritative or true. 
>The difference is that between acting and being acted-upon, and for women it can literally mean the difference between life and death. 

Rich, who was a feminist, makes some observations about the (white) male-dominated bias which pervades the entire higher education curriculum, but I will not focus on this aspect (and you are welcome to read [[the whole speech|http://isites.harvard.edu/fs/docs/icb.topic469725.files/Rich-Claiming%20an%20Education-1.pdf]]).
She, too, wants to focus on a higher level issue:
>But my talk is not really about women's studies, much as I believe in their scholarly, scientific, and human necessity. While I think that any Douglass student has everything to gain by investigating and enrolling in women's studies courses, I want to suggest that there is a more essential experience that you owe yourselves, one which courses in women's studies can greatly enrich, but which finally depends on you in all your interactions with yourself and your world. This is the experience of taking responsibility toward yourselves.

She obviously speaks to and about women, but this applies to all of us:
* Responsibility to yourself means refusing to let others do your thinking, talking, and naming for you; it means learning to respect and use your own brains and instincts; hence, grappling with hard work. 
*  It means insisting that those to whom you give your friendship and love are able to respect your mind. It means being able to say, with Charlotte Bronte's Jane Eyre: "I have an inward treasure born with me, which can keep me alive if all the extraneous delights should be withheld or offered only at a price I cannot afford to give." 
* Responsibility to yourself means that you don't fall for shallow and easy solutions -- predigested books and ideas, weekend encounters guaranteed to change your life, taking "gut" courses instead of ones you know will challenge you, bluffing at school and life instead of doing solid work, marrying early as an escape from real decisions, getting pregnant as an evasion of already existing problems. 
* It means that you refuse to sell your talents and aspirations short, simply to avoid conflict and confrontation. 
* And this, in turn, means resisting the forces in society which say that women should be nice, play safe, have low professional expectations, drown in love and forget about work, live through others, and stay in the places assigned to us. 
* It means that we insist on a life of meaningful work, insist that work be as meaningful as love and friendship in our lives. 
* It means, therefore, the courage to be "different"; not to be continuously available to others when we need time for ourselves and our work; to be able to demand of others -- parents, friends, roommates, teachers, lovers, husbands, children -- that they respect our sense of purpose and our integrity as persons.
And she rightly observes:
> The difference between a life lived actively, and a life of passive drifting and dispersal of energies, is an immense difference. Once we begin to feel committed to our lives, responsible to ourselves, we can never again be satisfied with the old, passive way. 

Rich has some observations about and advice to teachers:
*  Too often, all of us [teachers] fail to teach the most important thing, which is that clear thinking, active discussion, and excellent writing are all necessary for intellectual freedom, and that these require hard work.
* [We, teachers should not]  resign ourselves to low expectations for our students before we have given them half a chance to become more thoughtful, expressive human beings. 
* [And she quotes] Elizabeth Barrett Browning, a poet, a thinking woman, and a feminist, who wrote in 1845 of her impatience with studies which cultivate a "passive recipiency" in the mind, and asserted that "women want to be made to think actively: their apprehension is quicker than that of men [...]

She is coming back to the contract between learner and teacher (and I will change the text to reflect learners in general, and not just women (which is the case in the original speech), since I believe she makes strong and valid cases which apply to all):
* I have said that the contract on the student's part involves demanding to be taken seriously, so that you can also go on taking yourself seriously.
* This means seeking out criticism, recognizing that the most affirming thing anyone can do for you is demand that you push yourself further, show you the range of what you can do.
* It means rejecting attitudes of "take-it-easy," "why-be-so-serious," "why-worry-you'll-probably-get-married-anyway." 
* It means assuming your share of responsibility for what happens in the classroom, because that affects the quality of your daily life here. 
* It means that the student sees herself engaged with her teachers in active, ongoing struggle for a real education. 

And she concludes:
>But for [the learners] to do this, [their] teachers must be committed to the belief that [learners'] minds and experience are intrinsically valuable and indispensable to any civilization worthy the name.
>
>The contract is really a pledge of mutual seriousness about learners, about language, ideas, method, and values. It is our shared commitment toward a world in which the inborn potentialities of so many [learners'] minds will no longer be wasted, raveled-away, paralyzed, or denied. 
In an illuminating and forceful article^^1^^ titled [[CLEARING UP MYSTERIES - THE ORIGINAL GOAL|http://worrydream.com/refs/Jaynes%20-%20Clearing%20up%20Mysteries.pdf]], [[Edwin T. Jaynes|https://en.wikipedia.org/wiki/Edwin_Thompson_Jaynes]] argues that a clear understanding and application of statistics, specifically Conditional Probability (Bayes), can clear up very vexing paradoxes in physics, such as diffusion of a solution of sugar in water, the ~Einstein-Podolsky-Rosen (EPR) "spooky action at a distance" (or Einstein's expression in German: spukhafte Fernwirkung), and the applicability/violation of the second law of thermodynamics in biology.

Or in Jaynes's words:
>The first example is a simple exercise in kinetic theory that has puzzled generations of physics students: how does one calculate a diffusion coefficient and not get zero? The second concerns the currently interesting ~Einstein-Podolsky-Rosen paradox and Bell inequality mysteries in quantum theory: do physical influences travel faster than light? The third reexamines the old mystery about whether thermodynamics applies to biology: does the high efficiency of our muscles violate the second law?

Jaynes emphasizes the point that prior knowledge directly impacts our predictions about the future, in the sense of conditional probabilities (Bayes):
>The idea that probabilities can be used to represent our own information is still foreign to "orthodox" teaching [...]. Orthodoxy does not provide any technical means for taking prior information into account; yet that prior information is often highly cogent, and sound reasoning requires that it be taken into account. In other fields this is considered a platitude; what would you think of a physician who looked only at your present symptoms, and refused to take note of your medical history?
Another critical distinction he makes is between what happens in the "real world" (whatever that may be) and what we know (or can say at this point) about what we think/know is happening in the "real world":
>To appreciate the distinction between physical prediction and inference it is essential to recognize that propositions at two different levels are involved. In physical prediction we are trying to describe the real world; in inference we are describing only our state of knowledge about the world. A philosopher would say that physical prediction operates at the ontological level, inference at the epistemological level. Failure to see the distinction between reality and our knowledge of reality puts us on the Royal Road to Confusion; this usually takes the form of the Mind Projection Fallacy, discussed below.
>The confusion proceeds to the following terminal phase: a Bayesian calculation like the above one operates on the epistemological level and gives us only the best predictions that can be made from the information that was used in the calculation. But it is always possible that in the real world there are extra controlling factors of which we were unaware; so our predictions may be wrong. Then one who confuses inference with physical prediction would reject the calculation and the method; but in so doing he would miss the point entirely.
Clarifying the difference between the epistemological and ontological levels:
>For one who understands the difference between the epistemological and ontological levels, a wrong prediction is not disconcerting; quite the opposite. For how else could we have learned about those unknown factors? It is only when our epistemological predictions fail that we learn new things about the real world; those are just the cases where probability theory is performing its most valuable function. Therefore, to reject a Bayesian calculation because it has given us an incorrect prediction is like disconnecting a fire alarm because that annoying bell keeps ringing. Probability theory is trying to tell us something important, and it behooves us to listen.
!!!!On the MIND PROJECTION FALLACY:
>It is very difficult to get this point across to those who think that in doing probability calculations their equations are describing the real world. But that is claiming something that one could never know to be true; we call it the Mind Projection Fallacy. The analogy is to a movie projector, whereby things that exist only as marks on a tiny strip of film appear to be real objects moving across a large screen. Similarly, we are all under an ego-driven temptation to project our private thoughts out onto the real world, by supposing that the creations of one's own imagination are real properties of Nature, or that one's own ignorance signifies some kind of indecision on the part of Nature.
And the "double benefit" of using probability's power:
>In our more humble view of things, the probability distributions that we use for inference do not describe any property of the world, only a certain state of information about the world. This is not just a philosophical position; it gives us important technical advantages because of the more flexible way we can then use probability theory. In addition to giving us the means to use prior information, it makes an analytical apparatus available for such things as eliminating nuisance parameters, at which orthodox methods are helpless. This is a major reason for the greater computational efficiency of the [[Jeffreys methods in data analysis|https://arxiv.org/pdf/0804.3173.pdf]].
!!!!On EPR ("spooky action at a distance", or in German: spukhafte Fernwirkung)
Jaynes summarizes EPR as follows: 
>The ~Einstein-Podolsky-Rosen (EPR) article of 1935 is Einstein's major effort to explain his objection to the completeness claim by an example that he thought was so forceful that nobody could miss the point. Two systems, S1 and S2, that were in interaction in the past are now separated, but they remain jointly in a pure state. Then EPR showed that according to QM [Quantum Mechanics] an experimenter can measure a quantity q1 in S1, whereupon he can predict with certainty the value of q2 in S2. But he can equally well decide to measure a quantity p1 that does not commute with q1; whereupon he can predict with certainty the value of p2 in S2. The systems can be so far apart that no light signal could have traveled between them in the time interval between the S1 and S2 measurements. Therefore, by means that could exert no causal influence on S2 according to relativity theory, one can predict with certainty either of two noncommuting quantities, q2 and p2. EPR concluded that both q2 and p2 must have had existence as definite physical quantities before the measurements; but since no QM state vector is capable of representing this, the state vector cannot be the whole story.
Jaynes sums up Einstein's and Bohr's positions on Quantum Mechanics (QM):
>Put most briefly, Einstein held that the QM formalism is incomplete and that it is the job of theoretical physics to supply the missing parts; Bohr claimed that there are no missing parts. To most, their positions seemed diametrically opposed; however, if we can understand better what Bohr was trying to say, it is possible to reconcile their positions and believe them both. Each had an important truth to tell us.
This boils down to ontology vs. epistemology (see also [[Different worldviews of Physics Greats]]):
>But Bohr and Einstein could never understand^^2^^ each other because they were thinking on different levels. When Einstein says QM is incomplete, he means it in the ontological sense; when Bohr says QM is complete, he means it in the epistemological sense. Recognizing this, their statements are no longer contradictory. Indeed, Bohr's vague, puzzling sentences -- always groping for the right word, never finding it -- emerge from the fog and we see their underlying sense, if we keep in mind that Bohr's thinking is never on the ontological level traditional in physics. Always he is discussing not Nature, but our information about Nature. But physics did not have the vocabulary for expressing ideas on that level, hence the groping.
And on limitations on human predictions vs. human measurements:
>Needless to say, we consider all of Einstein's reasoning and conclusions correct on his level; but on the other hand we think that Bohr was equally correct on his level, in saying that the act of measurement might perturb the system being measured, placing a limitation on the information we can acquire and therefore on the predictions we are able to make. There is nothing that one could object to in this conjecture, although the burden of proof is on the person who makes it. But we part company from Bohr when this metamorphoses without explanation into a claim that the limitation on the predictions of the present QM formalism are also -- in exact, minute detail -- limitations on the measurements that can be made in the laboratory!
>[...]
>We believe that to achieve a rational picture of the world it is necessary to set up another clear division of labour within theoretical physics; it is the job of the laws of physics to describe physical causation at the level of ontology, and the job of probability theory to describe human inferences at the level of epistemology. The Copenhagen theory scrambles these very different functions into a nasty omelette in which the distinction between reality and our knowledge of reality is lost.
>Although we agree with Bohr that in different circumstances (different states of knowledge) different quantities are predictable, in our view this does not cause the concepts themselves to fade in and out; valid concepts are not mutually incompatible. Therefore, to express precisely the effect of disturbance by measurement, on our information and our ability to predict, is not a philosophical problem calling for complementarity; it is a technical problem calling for probability theory as expounded by Jeffreys, and information theory.
Jaynes agrees with Einstein and claims the same dissatisfaction with QM, but he does not claim that Bohr was wrong; they were merely talking at different levels:
>To understand this, we must keep in mind that Einstein's thinking is always on the ontological level; the purpose of the EPR argument was to show that the QM state vector cannot be a representation of the "real physical situation" of a system. Bohr had never claimed that it was, although his strange way of expressing himself often led others to think that he was claiming this.
>From his reply to EPR, we find that Bohr's position was like this: "You may decide, of your own free will, which experiment to do. If you do experiment E1 you will get result R1. If you do E2 you will get R2. Since it is fundamentally impossible to do both on the same system, and the present theory correctly predicts the results of either, how can you say that the theory is incomplete? What more can one ask of a theory?"

At this point Jaynes gives an illuminating example from probability:
!!!!BERNOULLI'S URN REVISITED
{{{
Define the propositions:
I == "Our urn contains N balls, identical in every respect except that M of them are red, the
remaining N - M white. We have no information about the location of particular balls in
the urn. They are drawn out blindfolded without replacement."
Ri == "Red on the i'th draw", i = 1, 2, ...
Successive draws from the urn are a microcosm of the EPR experiment. For the first draw, given
only the prior information I , we have
P(R1 | I ) = M / N
Now if we know that red was found on the first draw, then that changes the contents of the urn for
the second:
P(R2 |R1, I )=(M - 1) / (N - 1)
and this conditional probability expresses the causal influence of the first draw on the second.

But suppose we are told only that red was drawn on the second draw; what is now our probability
for red on the first draw? Whatever happens on the second draw cannot exert any physical influence
on the condition of the urn at the first draw; so presumably one who believes with Bell that a
conditional probability expresses a physical causal influence, would say that P(R1 | R2, I ) = P(R1 | I ).

But this is patently wrong; probability theory requires that

P(R1 | R2, I ) = P(R2 | R1, I )	 : eq. 18

This is particularly obvious in the case M = 1; for if we know that the one red ball was taken in
the second draw, then it is certain that it could not have been taken in the first.

In (eq. 18) the probability on the right expresses a physical causation, that on the left only an
inference. Nevertheless, the probabilities are necessarily equal because, although a later draw
cannot physically affect conditions at an earlier one, information about the result of the second
draw has precisely the same effect on our state of knowledge about what could have been taken in
the first draw, as if their order were reversed.

Eq. (18) is only a special case of a much more general result. The probability of drawing any
sequence of red and white balls (the hypergeometric distribution) depends only on the number of
red and white balls, not on the order in which they appear; i.e., it is an exchangeable distribution.
From this it follows by a simple calculation that for all i and j,

P(Ri | I ) = P(Rj | I ) = M / N

That is, just as in QM, merely knowing that other draws have been made does not change our
prediction for any specified draw, although it changes the hypothesis space in which the prediction
is made; before there is a change in the actual prediction it is necessary to know also the results of
other draws. But the joint probability is by the product rule,

P(Ri, Rj | I ) = P(Ri | Rj , I ) P(Rj | I ) = P(Rj | Ri, I ) P(Ri | I )

and so we have for all i and j,

P (Ri | Rj , I ) = P(Rj | Ri, I )
and again a conditional probability which expresses only an inference is necessarily equal to one that
expresses a physical causation. This would be true not only for the hypergeometric distribution,
but for any exchangeable distribution. We see from this how far Karl Popper would have got with
his "propensity" theory of probability, had he tried to apply it to a few simple problems.
}}}
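Jaynes's equality (eq. 18) is easy to check numerically. Here is a minimal simulation sketch (Python; my addition, not from Jaynes's paper) estimating both conditional probabilities for the urn above:
{{{
# Estimate P(R1 | R2, I) and P(R2 | R1, I) by repeated blind draws
# without replacement; both should approach (M - 1) / (N - 1).
import random

def estimate(N=10, M=4, trials=200_000):
    urn = ["red"] * M + ["white"] * (N - M)
    n_r1_and_r2 = n_r1 = n_r2 = 0
    for _ in range(trials):
        first, second = random.sample(urn, 2)  # two draws without replacement
        r1, r2 = first == "red", second == "red"
        n_r1 += r1
        n_r2 += r2
        n_r1_and_r2 += r1 and r2
    return n_r1_and_r2 / n_r2, n_r1_and_r2 / n_r1  # P(R1|R2), P(R2|R1)

p_r1_given_r2, p_r2_given_r1 = estimate()
print(p_r1_given_r2, p_r2_given_r1)  # both near (4 - 1) / (10 - 1) = 0.333...
}}}
Both estimates converge to (M - 1)/(N - 1), illustrating that the probability which expresses "only an inference" is necessarily equal to the one which expresses "a physical causation".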

>It might be thought that this phenomenon is a peculiarity of probability theory. On the contrary, it remains true even in pure deductive logic; for if A implies B, then not-B implies not-A.
>But if we tried to interpret "A implies B" as meaning "A is the physical cause of B", we could hardly accept that "not-B is the physical cause of not-A". Because of this lack of contraposition, we cannot in general interpret logical implication as physical causation, any more than we can conditional probability. Elementary facts like this are well understood in economics (Simon & Rescher, 1966; Zellner, 1984); it is high time that they were recognized in theoretical physics.

----
^^1^^ [[Local copy|resources/Jaynes - Clearing up Mysteries.pdf]]
^^2^^ - regarding understanding among "Physics Greats": there is the anecdote about [[Sir Arthur Eddington|https://en.wikipedia.org/wiki/Arthur_Eddington]], who, when asked whether it was true that only three people in the world understood general relativity, replied, “Who is the third?”
In his excellent, practical book, Goodliffe shares many relevant, time-proven, and insightful experiences, based on years of writing industrial-strength code in "the factory" (commercial application development).

!!!On Defensive Programming
You may not want to adopt this attitude/mindset in all aspects of your life, but in programming it would make sense to take the following to heart:
> We have to distrust each other. It's our only defense against betrayal. - ''Tennessee Williams''
And as it relates to writing robust, "good code":
* There is a big difference between "working code", "correct code", and "good code":
** "working code" works most of the time, given certain/usual/expected inputs/data
** "correct code" works and won't crash and produce correct/expected results for all possible/expected/unexpected inputs. 
*** However, not all "correct code" is "good code". The logic may be hard to follow, and it may be hard/expensive/impossible to maintain and evolve.
** "good code" is obviously "correct code", but it is also robust, efficient enough, practical to maintain and evolve.
* Writing defensive code means you should assume nothing (or assume the worst) about how your code will be run (the environment, calls/usage, inputs/data, etc.); see the sketch after this list
** Defensive programming is careful, guarded coding, designed for reliability, so that every component protects itself as much as possible.
** As Goodliffe says: it's a big, bad world out there. And while you can't create absolutely foolproof code (there is always a fool^^1^^ who will succeed in breaking your code...), you should try very hard and be smart and careful about your coding.
** Employ a good coding style (for clarity, readability, debugging, maintenance). Like [[good writing style|Why writing style matters]], good coding style matters.
** Come up with a thoughtful, clear, sound design and structure
** Don't code in a hurry ("more haste, less speed").
** Trust no one. Problems may come from genuine users providing bogus inputs, malicious users trying to break your program, the operating environment not working as you expect, or external libraries behaving unexpectedly.
*** Absolutely anyone - including yourself - can introduce flaws and errors into your program.
** Write code for clarity, not brevity. Whenever you can choose between concise but potentially confusing code, and clear but potentially tedious code, use code that reads well, even if it is less elegant. Simplicity and straightforwardness are virtues.
** Protect code that should not be tinkered with from the outside. Keep code protected, internal, local. Keep the scope as tight as possible.
** Design and use safe data structures. Or failing that, use dangerous data structures safely.
** Check every return value. Inspect and deal with all error codes and return values.
** Handle system/environment resources (such as memory, CPU cycles, co-processors) carefully and respectfully. Manage them well.
** Initialize all variables at their point of declaration.
** Declare variables as late as possible. This will help you keep scopes tight and make their use clearer to a reader (and to your future self). This way, you avoid having to hunt for variable declarations all over your code.
** Don't reuse temporary variables in multiple places. Create new variables each time, in each new context.
** Use standard language facilities, libraries, etc., and clearly define which version(s) you are using.
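Here is a minimal sketch (Python) pulling a few of these habits together; the average_latency_ms helper and its checks are hypothetical illustrations, not an example from Goodliffe's book:
{{{
# A hypothetical helper written defensively: assume nothing about the input,
# fail loudly on bad data, and keep everything in the tightest possible scope.
def average_latency_ms(samples):
    """Return the mean of a non-empty sequence of latency samples (ms)."""
    if samples is None:
        raise ValueError("samples must not be None")
    values = list(samples)            # also guards against one-shot iterators
    if not values:
        raise ValueError("samples must not be empty")
    for v in values:
        if not isinstance(v, (int, float)) or v < 0:
            raise ValueError(f"invalid latency sample: {v!r}")
    return sum(values) / len(values)  # non-empty is guaranteed by now

# The caller, in turn, checks the outcome instead of ignoring it:
try:
    print(average_latency_ms([12.5, 9.0, 14.2]))
except ValueError as err:
    print(f"could not compute average: {err}")
}}}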

!!! On Source Code Layout and Presentation
As Goodliffe writes:
* Code style has been, is, and will continue to be the subject of holy wars, among programmers - professional, amateur, and student - where, unfortunately, intense disagreements degrade into mere name-calling ...
** Engaging in holy wars is unproductive and a waste of time; there are far more important things to focus our attention on. 
** Unfortunately holy wars can go beyond code and style, and extend to editors, compilers, methodologies, the One True Language, and beyond.
** Holy wars: Just Say No. Don't get involved. Just walk away.
* Why do people get so worked up about this?
** Presentation dramatically affects the readability of code - no one wants to work with code that isn't easy to read. 
** Presentation is also a very subjective and personal thing ... Familiarity breeds comfort and an alien style puts you on edge.
** Programmers are passionate about code, so presentation stirs deep emotions.
* Keep in mind the //real// audience for your source code: other programmers (and possibly, your future self). Write for their benefit.
* Good presentation is
** consistent (indentation, parentheses, braces, brackets, etc.)
** conventional (it makes sense to adopt one of the major/popular styles rather than inventing your own)
** concise (clear, easy-to-understand logic, structure, conditions, constraints, etc.)

!!!On Naming Things (Giving Meaningful Things Meaningful Names)
As programmers we have control of the things we create in code, like variables, functions, objects, classes, files, types, etc.
* we should name them so they clearly tell the thing's identity and behavior.
* An object's name should describe it clearly ("transparent" naming :)
* The key to good naming is a good understanding of the thing you name. If you find it hard to name something, ask yourself if you really understand it, or why it exists (see the naming sketch after this list)
* A good name should be:
** descriptive (reflecting its identity/purpose/role and behavior)
** technically correct (valid in that language and following its rules)
** idiomatic (following the language's conventions)
** appropriate, taking the following into account:
*** length (favor clarity over brevity)
*** tone (don't use jokey/cutesy names)
* when naming, avoid giving names that are:
** cryptic
** too verbose
** ambiguous or vague
** too cute
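A hypothetical before/after sketch (Python; the names are mine, not from the book) of what these guidelines buy you:
{{{
# Cryptic and ambiguous: what are d and f? what does proc do?
def proc(d, f):
    return [x for x in d if x > f]

# Descriptive, technically correct, and idiomatic: the names tell the
# thing's identity and behavior, favoring clarity over brevity.
def filter_scores_above(scores, threshold):
    """Return the scores strictly greater than threshold."""
    return [score for score in scores if score > threshold]
}}}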

!!! On Documenting Code
* The problems with documenting code (both inside and outside of the code) are:
** It is extra work, both writing it and reading it
** the documentation needs to be kept in sync with the code
** documents, like code, are hard to manage (version control, etc.)
** important things can be overlooked/ignored/not found in external (outside the code) documentation
* Self-documenting code is ''code'' that is written to be read. By humans. Easily. (and don't worry; the compiler will cope/forgive :)
* Good self-documenting code follows these guidelines (see the sketch after this list):
** write simple code with a good/clear style/presentation
** make the "normal" execution path through the code obvious (as opposed to error handling, obscure/rare "else" cases, etc.)
** choose meaningful names (see Naming above)
** decompose into atomic/small functions
** name constants (avoid "magic numbers")
** emphasize important code
*** order declarations in a file/class helpfully - important information first, followed by private/detailed information later
*** group together related information
*** provide a file header, which includes a description of the content, its overall purpose, etc.
** handle errors appropriately
** write meaningful comments. Only add comments if you can't improve the clarity of your code in any other way
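A minimal sketch (Python; my illustration, not from the book) of a few of these guidelines - named constants, a small atomic function, and an obvious normal path:
{{{
# Named constants instead of "magic numbers":
FREEZING_POINT_F = 32.0
DEGREES_F_PER_C = 9.0 / 5.0

def fahrenheit_from_celsius(celsius):
    """Convert a temperature from Celsius to Fahrenheit."""
    return celsius * DEGREES_F_PER_C + FREEZING_POINT_F

# Contrast the "magic number" version, which forces the reader to guess:
#   def conv(x): return x * 1.8 + 32
print(fahrenheit_from_celsius(100.0))  # -> 212.0
}}}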

!!! On Writing Code Comments
> Let's not overstate the case - there are things far more important to get right than comments. When you've written truly good code your comments are the //icing on the cake//, delicately placed to add aesthetics and value, rather than liberally slapped on to cover up all the cracks and blemishes.
Goodliffe continues:
* learn to write ''enough'' comments, and no more. Favor quality, not quantity.
* spend your time writing code that doesn't need to be propped up by tons of comments.
* you should think carefully about what you write, since as Horace said:
> Of writing well the source and fountainhead is wise thinking.
* in your comments, __explain why, not how__. Anyone can see the 'how' by reading the code (which you have written clearly and effectively, haven't you? :). In your comments focus on intent. Good comments (the 'why') change less often, even if the code itself (the 'how') changes. (See the sketch after this list.)
* if you find yourself writing dense, long, complex comments, stop to think whether you can simplify/clarify your code, so as to avoid lengthy/complicated commenting.
* make your comments as useful as possible. Good comments:
** live in the present: don't document how/why things were done in the past. Revision control can tell that story.
** document the unexpected: note surprising results/cases/behavior of your code
** tell the truth: keep comments up-to-date with the code, never misrepresenting what's going on
** are worthwhile: avoid witty, cryptic comments; don't use expletives; don't write comments that may embarrass you later
** are readable, clear, unambiguous, and specific
* A comment on comment aesthetics
** use a consistent commenting style
** block comments (multi-line) should be appropriately/well indented
** line comments should fit in with the code
** end-of-line comments should be spaced out and clearly apart from the code on the line
** use comments and whitespace to help with code flow and readability
* when you alter code, maintain the comments around/about it. Don't create "comment rot".
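A small hypothetical sketch (Python) of the "explain why, not how" rule:
{{{
retries = 0

# Bad comment - restates the 'how', which the code already says:
# add one to retries
retries += 1

# Better comment - records the 'why', which the code cannot say by itself
# (the flaky upstream service here is hypothetical):
# the upstream service occasionally drops requests under load, and a
# single retry recovers nearly all of them
retries += 1
}}}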

!!! On Error Handling
To err is human (and/but mess-ups can be caused by non-humans too!) - so try to anticipate the unexpected, or as Oscar Wilde said:
>To expect the unexpected shows a thoroughly modern intellect. 
* errors can come from different sources:
** user error (even if you have coded a foolproof program, there is always the fool ...)
** programmer error (that's where defensive programming (see above) can help)
** exceptional circumstances (a misbehaving system, environment, etc.)
* take error handling seriously:
** raise an error when something went wrong
** detect all possible error reports/return values
** handle all detected errors appropriately
** propagate errors you cannot handle
* in order to handle an error well, you need the following information:
** where it came from
** what you were trying to do
** why it went wrong (how do you know it happened)
** when it happened
** what is its severity
** how to fix it
* you can handle errors either as soon as possible or as late as possible. Handle each error in the most appropriate context, as soon as you know enough about it to deal with it correctly.
* ways to respond to errors:
** logging it (in a log file or similar)
** reporting it (making it visible to the user/caller)
** recovering from it (e.g., stopping, rolling back, passing it up the chain for handling/recovery)
** ignoring it (works wonderfully in cases where you want your code to behave in bizarre and unpredictable ways and to crash randomly)
*** ignoring errors doesn't save time in the long run. You'll spend more time figuring out the causes of misbehaving programs than you ever would have spent writing error handling code.
* when crafting error messages to users, consider the following:
** users don't think like programmers, and usually don't have the same knowledge, so present information in a way appropriate to them
** don't use cryptic messages, or meaningless error codes
** distinguish between errors and warnings - indicate the severity/implications of the problem
** if you need user input to recover or continue, ask simple, intelligible questions, and explain consequences if necessary
** make sure you follow and conform to user interface requirements and conventions/style when presenting error messages
* handling errors effectively (see the sketch after this list) means that you
** clean up after yourself when you detect an error
** don't leak out inappropriate information into the larger scope of the program or to the caller
** use exceptions appropriately. Throw exceptions judiciously
** consider using assertions
** make it hard for people/callers to ignore your errors (exceptions and assertions are good for this)
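A minimal sketch (Python; the read_config helper and its fallback policy are hypothetical) of detecting an error, handling what you can in context, propagating what you can't, and cleaning up after yourself:
{{{
import json

def read_config(path):
    """Load a JSON config file, with explicit handling for each failure."""
    try:
        with open(path, encoding="utf-8") as f:  # 'with' cleans up the handle
            return json.load(f)
    except FileNotFoundError:
        # We know enough in this context to handle it: fall back to defaults.
        return {"log_level": "info"}
    except json.JSONDecodeError as err:
        # We don't know enough here to repair a corrupt file:
        # propagate, with context, so the caller can decide.
        raise RuntimeError(f"config file {path} is not valid JSON") from err

print(read_config("no_such_file.json"))  # -> {'log_level': 'info'}
}}}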

!!! On Testing Code
> Testing is not debugging. Don't get the two confused. They require different skills.
* __Testing__ is a methodical process of demonstrating the presence of faults in your software.
* __Debugging__ is the act of tracking down the cause of such faulty behavior.
* If you are programming well, you will be doing a lot more testing than debugging.
* Testing should be, and is, done at multiple levels:
** you test documents, such as requirements specifications, system/product specifications, module functional specs, etc.
** you test code at the line, function, module, sub-system, and system/product levels
* you should always keep in mind that testing can only detect the presence of faults; it can __never__ prove the absence of defects. 
** So don't be lulled into a sense of security that your code is "bug free" because it passed certain tests. Your tests may not be thorough enough or deep enough.
* testing should start as early as possible, so you can catch problems early, and when they are easy to fix
** some people write tests before they write the functional code ("test-driven development")
** create and maintain a test suite
** add tests to your suite/collection every time you find a new bug
* run your tests as often as you can ("continuous integration")
* whenever you find a bug, do the following ''before'' you rush to fix it (even more important if you do ''not'' intend to fix it right away):
** note what you were trying to do when the failure happened
** try it again, to see if the problem is reproducible, and/or if it coincides with other conditions
** describe the fault, fully: the context, the steps, the frequency, the software version, anything else that may help reproduce/debug it
** record it in a bug tracking system, list, repository, etc.
** if possible/relevant, write a simple test harness that reproduces the problem, thus updating your test suite (see the sketch after this list)
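A minimal sketch (using Python's standard unittest; the median function and its past bug are hypothetical) of turning a found bug into a permanent regression test:
{{{
import unittest

def median(values):
    """Return the statistical median of a non-empty list of numbers."""
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

class TestMedian(unittest.TestCase):
    def test_odd_length(self):
        self.assertEqual(median([3, 1, 2]), 2)

    def test_even_length_regression(self):
        # Added after a (hypothetical) bug report: even-length inputs
        # once returned a single element instead of the midpoint.
        self.assertEqual(median([4, 1, 3, 2]), 2.5)

if __name__ == "__main__":
    unittest.main()
}}}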

!!! On Software Design
The creation of software is actually a very creative design and implementation activity, not "mechanical" code generation.
* In the process of creating good code, you need to design the overall (system) architecture; the various components, objects, modules; data types, structures, and classes; and the various functions and methods.
* A good software design is:
** Iterative - consisting of iterations of design and implementation/validation
** Cautious - not designing the "Entire Big Solution" all at once
** Realistic - pragmatic about the technologies used, as well as the methodologies and tools
** Informed - fully and deeply understanding the requirements and constraints
* A good design has:
** Simplicity - good design and code is compact ("less is more, and is not easy to achieve")
** Elegance - not baroque, clever, overly complex and confusing
** Modularity - with strong cohesion (tightly related functionality within each module) and weak coupling (little interdependence between modules); see the sketch after this list
** Good interfaces - clear, clean, public facades for the modules
** Extensibility - allowing for straight-forward evolution and enhancements
*** You should design for extensibility, but don't be overly general, or you will end up designing an Operating System instead of a program.
** No Duplication - Do it once. Do it well. Don't cut/copy-and-paste
** Portability - if needed (if it's a requirement/possibility)
** Idiomatic - naturally employing best practices, and fitting well with the development methodology and language used.
** Well-Documented - a good design is simple and therefore will not need a lot of documentation
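A minimal sketch (Python; the classes are hypothetical) of strong cohesion and weak coupling: each class does one job, and they meet only through a small, explicit interface:
{{{
class TemperatureSensor:
    """Cohesive: only knows how to produce readings."""
    def read_celsius(self):
        return 21.5  # stand-in for real hardware access

class ThresholdAlarm:
    """Cohesive: only knows how to judge readings.
    Weakly coupled: depends on a plain value, not on the sensor class."""
    def __init__(self, limit_celsius):
        self.limit_celsius = limit_celsius

    def is_triggered(self, celsius):
        return celsius > self.limit_celsius

sensor = TemperatureSensor()
alarm = ThresholdAlarm(limit_celsius=30.0)
print(alarm.is_triggered(sensor.read_celsius()))  # -> False
}}}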



----
^^1^^ Attributed to ''Napoleon'': Never ascribe to malice that which is adequately explained by stupidity.
I came across [[this blog post|http://www.chris-granger.com/2015/01/26/coding-is-not-the-new-literacy/]] by Chris Granger, and it definitely reflects my thinking about what is important to teach, and the role of Computer Science in developing [[Computational Thinking|Computational Thinking/Literacy]].

Granger draws a good parallel between literacy^^1^^ (reading/writing skills) and computation skills, namely:
* Although reading and writing are the physical acts of being able to put symbols on a medium (paper, file, etc.) and retrieve those symbols, they are not the end goal of literacy. What we are aspiring for is comprehension (as a result of the act of reading), and composition (associated with the act of writing).

* The equivalent of composition in Computing is the ability to successfully/meaningfully ''model'' thoughts, processes, phenomena. As it is not enough to be able to "mechanically" read and write, it is not enough to be able to code/program, run, test, debug programs. All these may be required/important, but they should not be the end goal of teaching/learning.

A few points from the post:
* Modeling is the new literacy
* Modeling is creating a representation of a system (or process) that can be explored or used.
* Defining a system or process requires breaking it down into pieces and defining those, which can then be broken down further. It is a process that helps acknowledge and remove ambiguity and it is the most important aspect of teaching people to model.
* The process of creating a model is an act of discovery - we find out what pieces we need as we shape our material.
* Exploration is understanding
* By transposing our models to a computer, we can offload the work necessary to change, simulate, and verify.
* We want a generation of writers, biologists, and accountants that can leverage computers (using domain-specific tools, concepts, frameworks, etc. - my addition)
* we have to teach children how modeling happens, which we can break down into four distinct processes:
** Specification: How to break down parts
** Validation: How to test the model against the real world or against your expectations
** Debugging: How to break down bugs in a model.
** Exploration: How to then play with the model to better understand possible outcomes and to see how it could be used to predict or automate some system.
*  focusing on modeling pushes education towards the idea that pedagogy is really guiding children to deliberately create and explore the world around them. (Seymour Papert)

And Granger finishes:
>To put it simply, the next great advance in human ability comes from being able to externalize the mental models we spend our entire lives creating.
>
>That is the new literacy.


----
^^1^^ - See what [[Neil Gaiman|http://www.neilgaiman.com/About_Neil]] has to say on [[The importance of having children read fiction (AND do Computing)|The importance of having children read fiction AND do Computing]]
Do you see a rider coming towards you, or riding away from you?

[img[horseman|./resources/horse_small.jpg]]

>If you saw the horseman coming to you, you tend to have a more optimistic mindset. If you saw the horseman riding away from you, you tend to be more of a pessimist.

So says Jurriaan Kamp, co-founder of "[[The Intelligent Optimist|http://www.theoptimist.com/]]".
In a [[Library of Congress lecture|https://www.youtube.com/watch?v=3MPSTMfAYVM]] about his book //The Classics for Pleasure//, Michael Dirda mentioned keeping a [[commonplace book|https://en.wikipedia.org/wiki/Commonplace_book]], an old "knowledge management" technique, which intrigued me (how could it not?!).

Digging a bit deeper into this old technique, I came across an interesting article by Steven Johnson titled [[The Glass Box and the Commonplace Book|https://stevenberlinjohnson.com/the-glass-box-and-the-commonplace-book-639b16c4f3bb#.w2zyf7doz]], which, I think, highlights some parallels with newer web techniques/practices.

For example:
* Similar to ''blogging'':
>Scholars, amateur scientists, aspiring men of letters — just about anyone with intellectual ambition in the seventeenth and eighteenth centuries was likely to keep a commonplace book. In its most customary form, “commonplacing,” as it was called, involved transcribing interesting or inspirational passages from one’s reading, assembling a personalized encyclopedia of quotations. It was a kind of solitary version of the original web logs: an archive of interesting tidbits that one encountered during one’s textual browsing.

* On ''indexing and searching'':
>The philosopher John Locke first began maintaining a commonplace book in 1652, during his first year at Oxford. Over the next decade he developed and refined an elaborate system for indexing the book’s content. Locke thought his method important enough that he appended it to a printing of his canonical work, An Essay Concerning Human Understanding. 
>[...description of the indexing scheme follows...]
>Locke’s approach seems almost comical in its intricacy, but it was a response to a specific set of design constraints: creating a functional index in only two pages that could be expanded as the commonplace book accumulated more quotes and observations. In a certain sense, this is a search algorithm, a defined series of steps that allows the user to index the text in a way that makes it easier to query.

* Similar to ''Pinterest'', ''Facebook'', and ''web browsing'':
>The tradition of the commonplace book contains a central tension between order and chaos, between the desire for methodical arrangement, and the desire for surprising new links of association. The historian Robert Darnton describes this tangled mix of writing and reading:
>>Unlike modern readers, who follow the flow of a narrative from beginning to end, early modern Englishmen read in fits and starts and jumped from book to book. They broke texts into fragments and assembled them into new patterns by transcribing them in different sections of their notebooks. Then they reread the copies and rearranged the patterns while adding more excerpts. Reading and writing were therefore inseparable activities. They belonged to a continuous effort to make sense of things, for the world was full of signs: you could read your way through it; and by keeping an account of your readings, you made a book of your own, one stamped with your personality.

* On ''serendipitous data mining'':
>Each rereading of the commonplace book becomes a new kind of revelation. You see the evolutionary paths of all your past hunches: the ones that turned out to be red herrings; the ones that turned out to be too obvious to write; even the ones that turned into entire books. But each encounter holds the promise that some long-forgotten hunch will connect in a new way with some emerging obsession.
>[and]
>The beauty of Locke’s scheme was that it provided just enough order to find snippets when you were looking for them, but at the same time it allowed the main body of the commonplace book to have its own unruly, unplanned meanderings.

* On content ''remixing'' and referencing the [[Jefferson Bible|https://en.wikipedia.org/wiki/Jefferson_Bible]]:
>But all of this magic was predicated on one thing: that the words could be copied, re-arranged, put to surprising new uses in surprising new contexts. By stitching together passages written by multiple authors, without their explicit permission or consultation, some new awareness could take shape.
>Since the heyday of the commonplace book, there have been a few isolated attempts to turn these textual remixes into a finished product, into a standalone work of collage. The most famous is probably Jefferson’s bible, his controversial “remix” of the New Testament.

* The search results page as a commonplace book:
>What I want to suggest to you is that, in some improbable way, this page is as much of an heir to the structure of a commonplace book as the most avant-garde textual collage. Who is the “author” of this page? There are, in all likelihood, thousands of them. It has been constructed, algorithmically, by remixing small snippets of text from diverse sources, with diverse goals, and transformed into something categorically different and genuinely valuable.

* On ''knowledge productivity'' (vs. agrarian or industrial productivity):
>Ecologists talk about the “productivity” of an ecosystem, which is a measure of how effectively the ecosystem converts the energy and nutrients coming into the system into biological growth. A productive ecosystem, like a rainforest, sustains more life per unit of energy than an unproductive ecosystem, like a desert. We need a comparable yardstick for information systems, a measure of a system’s ability to extract value from a given unit of information. Call it, in this example: textual productivity. By creating fluid networks of words, by creating those digital-age commonplaces, we increase the textual productivity of the system.
>The overall increase in textual productivity may be the single most important fact about the Web’s growth over the past fifteen years.

* On the value/desire (need?) for content/value/knowledge reuse and maximization:
>The promise [of digital technology] lies in doing things with the words, forging new links of association, remixing them. We have all the tools at our disposal to create commonplace books that would astound Locke and Jefferson.
>[...]
>When your digital news feed doesn’t contain links, when it cannot be linked to, when it can’t be indexed, when you can’t copy a paragraph and paste it into another application: when this happens your news feed is not flawed or backwards looking or frustrating. It is broken.
The author says we should not freeze the content/words, like some vendors and publishers do (he mentions Apple, NYT, WSJ):
>They’re frozen there, uncopyable, unlinkable, like some beautiful ice sculpture. Frozen is the right word, because we’re so used to selecting and copying digital text, encountering text on a screen that can’t be selected leaves you with a strange initial assumption: that the application has crashed, and the screen is frozen.

* On the strengthening or weakening of the ''echo chamber'' effect (AKA, "the internet bubble". See also [[Minding the obvious]]):
> [the "echo chamber effect" is] the premise that the internet leads to political echo chambers, where like-minded partisans reinforce their beliefs by filtering out dissenting views
or in [[Cass Sunstein's words|http://www.ojr.org/ojr/glaser/1082521278.php]]:
>If Republicans are talking only with Republicans, if Democrats are talking primarily with Democrats, if members of the religious right speak mostly to each other, and if radical feminists talk largely to radical feminists, there is a potential for the development of different forms of extremism, and for profound mutual misunderstandings with individuals outside the group.
A [[David Brooks column|http://www.nytimes.com/2010/04/20/opinion/20brooks.html?hp]] reports on a study that actually looked at exposure to differing points of view in various forms of media, and in real-world encounters:
>It turns out that the web, at least according to this study, actually reduces the echo-chamber effect, compared to real-world civic space. People who spend a lot of time on political sites are far more likely to encounter diverse perspectives than people who hang out with their friends and colleagues at the bar or the watercooler. 
As Brooks described it, “This study suggests that Internet users are a bunch of ideological Jack Kerouacs. They’re not burrowing down into comforting nests. They’re cruising far and wide looking for adventure, information, combat and arousal.”

And the author's conclusion:
>But whether or not this study proves to be accurate, one thing is certain. The force that enables these unlikely encounters between people of different persuasions, the force that makes the web a space of serendipity and discovery, is precisely the open, combinatorial, connective nature of the medium. So when we choose to take our text out of that medium, when we keep our words from being copied, linked, indexed, that’s a choice with real civic consequences that are not to be taken lightly.

And:
>The reason the web works as wonderfully as it does is because the medium leads us, sometimes against our will, into common places, not glass boxes. It’s our job — as journalists, as educators, as publishers, as software developers, and maybe most importantly, as readers — to keep those connections alive.

[[Complexity - A Guided Tour|resources/Melanie-Mitchell-Complexity_a-guided-tour-366-pages.pdf]]^^1^^

<<forEachTiddler 
where 
'tiddler.tags.contains("book-chapter") && tiddler.tags.contains("Complexity - A Guided Tour")'
sortBy 
'tiddler.title'>>
----
^^1^^ retrieved from [[Sorrentino's blog|http://www.waltersorrentino.com.br/wp-content/uploads/2012/02/Melanie-Mitchell-Complexity_a-guided-tour-366-paginas.pdf]]

[img[Einstein on modeling|resources/einstein_modeling_small.jpg][resources/einstein_modeling.jpg]] [ 1 ]

On modeling:
>{{{The significant problems we face cannot be solved at the same level of thinking we were at when we created them.}}}
: -- [[Albert Einstein]]
But also:
>{{{Essentially, all models are wrong, but some are useful.}}}
: -- George Box and Norman Draper in their book //Empirical Model-Building and Response Surfaces// (see [[Response surface methodology|http://en.wikipedia.org/wiki/Response_surface_methodology]])

This activity, demonstrating modeling, is part of the [[Computational Thinking (CT) problem solving framework|A Framework for Computational Thinking, Computational Literacy]].
Modeling skills are critical in many problem solving cases, and modeling activities often happen in multiple parts of the problem solving process, as problem solvers refine their understanding of the problem (and model), as a result of new/additional data, analysis, simulation, etc. 
This example is using the [[VModel modeling tool|http://www.qrg.northwestern.edu/projects/NSF/Vmodel/private/]], developed as part of the [[Computer-supported Visual Representations for Learning Modeling|http://www.qrg.northwestern.edu/projects/NSF/Vmodel/index.htm]] project at Northwestern University.
It is important to remember that modeling doesn't have to be computerized, but software tools are usually very helpful here ;-)
!!! Building the model
[>img[modeling using VModel|resources/termites_vmodel_built_small.png][resources/Termites Model 21.htm]]
From the [[example overview|Computational Thinking example: Termites and woodchips - overview]] certain modeling elements are emerging, and can be [[captured and modeled in the VModel tool|resources/termites_vmodel_built.png]]:
* Entities, like:
** termites, wood chips, piles
* Entity properties (characteristics, parameters), like:
** the total number of chips, termites, chips per pile, number of piles
* Processes, like:
** termites picking up chips
** termites dropping off chips
** piles growing
** piles shrinking and/or vanishing
* Relationships and influences, like:
** dropping off a chip may create a new pile or increase a pile's size
** picking up a chip may eliminate a pile or reduce its size
** as piles vanish (their size reaches zero), the total number of piles decreases
This modeling exercise can quickly and visually reveal what entities and relationships (for example) are of value to the modeling, i.e., what a useful level of abstraction is. For example, [[it is obvious from this version of the model|resources/Termites Model 5.htm]] that the individual chip and individual termite are not essential (or significantly contributing) to the model, so they can be removed.
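For readers who prefer code to diagrams, here is a minimal sketch (my own Python representation, not ~VModel's actual file format or API) of the model elements listed above, captured as plain data:
{{{
# the termite model elements as plain Python data; the names and influence
# signs come from the lists above, the dictionary layout is just illustration
model = {
    "entities": ["termite", "wood chip", "pile"],
    "properties": ["total chips", "total termites",
                   "chips per pile", "number of piles"],
    "processes": ["pick up chip", "drop off chip",
                  "pile grows", "pile shrinks/vanishes"],
    # influences: process -> (affected quantity, sign of the effect)
    "influences": {
        "drop off chip":         ("number of piles", +1),  # may create a pile
        "pick up chip":          ("number of piles", -1),  # may eliminate one
        "pile shrinks/vanishes": ("number of piles", -1),
    },
}
}}}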
!!! Validating the model
A useful capability of the ~VModel tool is its ability to check the model for problems and inconsistencies. The modelers need to define the goal of the model (say, that it will explain how the total number of wood piles changes over time, as the termites pick up and drop off wood chips).
Also, in addition to defining the relationships and influences between processes and properties (say, the process of termites dropping off wood chips resulting in the creation of new piles, or in the increase of the size of wood chip piles), the modelers need to define one or more predictions and tie them to the model definition (for example, that as a result of the termite activity, the total number of piles will decrease).
[>img[model verification in VModel|resources/termites_vmodel_validated_small.png][resources/termites_vmodel_validated.png]]
Now, the [[modeling tool can check|resources/Termites Model 2.htm]] if the model relationships as defined may indeed result in the number of piles being reduced.
In my experience, the model validation is not fool-proof, but it provides some sanity checking. For example, if modelers predict that the model will result in //increasing// the total number of piles and run the validation, the [[tool validation algorithm detects the contradiction|resources/Termites Model 3.htm]], based on the properties and relationships.
The validation algorithm can also point out finer potential problems and ambiguities with the model, for example, when the strength of relationships or their impact is ambiguous and can potentially both support a prediction //and// refute it. So, if the chip pick-up and drop-off processes decrease and increase the pile size //with the same impact// on the total number of piles as the pile-vanish process, [[the validator flags ambiguity|resources/Termites Model 4.htm]]. This is a "fine" point that can lead to insight: when a pile vanishes, it //definitely// reduces the total number of piles, but when a wood chip is picked up it may or may not (!) reduce the total number of piles, and that's why the strength/impact of the two processes should be set differently.
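To make the validation idea concrete, here is a rough sketch (my own guess at the principle, not ~VModel's actual algorithm), using the {{{model}}} dictionary from the sketch above: a prediction about a quantity is compared against the signs of all the influences acting on it.
{{{
def validate(influences, quantity, predicted_sign):
    # collect the signs of all influences acting on the predicted quantity
    signs = {sign for (q, sign) in influences.values() if q == quantity}
    if signs == {predicted_sign}:
        return "consistent"
    if predicted_sign not in signs:
        return "contradiction"  # every influence refutes the prediction
    # influences pull in both directions, and without differing strengths
    # there is nothing to break the tie
    return "ambiguous"

print(validate(model["influences"], "number of piles", -1))  # "ambiguous"
}}}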

----
^^1^^ an [[exploitable image|http://www.hetemeel.com/einsteinform.php]] of Einstein's blackboard, where you can [[create your own text|resources/Einstein_blackboard_exploitable.jpg]]
I recently came across an article by Mitchel Resnick (the MIT professor of Logo, Lego Mindstorms, and Scratch fame) from 1994 (!) called [[Learning About Life|resources/Learning About Life.html]]. He describes (among other very interesting things ;-) a situation involving termites picking up and dropping off wood chips, resulting in an interesting emerging behavior/phenomenon.

I've decided to take this situation and apply my [[Computational Thinking (CT) problem solving framework|A Framework for Computational Thinking, Computational Literacy]] to demonstrate some key activities and phases in the process.

As the [[CT framework diagram|resources/Computational Thinking process HM.pdf]] shows, the full problem solving process may include multiple activities/phases of
* gathering data
* defining a "good" question
* coming up with a testable hypothesis
* [[modeling|Computational Thinking example: Termites and woodchips - modeling]]
* [[simulating|Computational Thinking example: Termites and woodchips - simulating]]
* analyzing
* visualizing
* presenting the results
and so on.
!!! Data gathering
The situation with the termites and wood chips described in Resnick's paper is this:
[>img[Termites random walk (6MB .wmv video)|resources/termites_random_walk.png][resources/Termites in HD 1080p.wmv]]
From observation (part of the __data gathering phase__) it seems like termites are randomly walking about in an area, picking up wood chips, carrying them for a while and dropping them off.
There doesn't appear to be any leader, or pattern/trend, and after observing for a short while, no obvious purpose or result is visible. So some obvious questions come to mind:
* Is there really no purpose to this seemingly random activity?
* What may be a reason for, or a result of, this behavior?
* Who is in control here?
* Are the termites following someone's instructions, directives, or example?
* Will this random shuffling around of wood chips go on forever? Until the termites "get tired"? Until someone disturbs the colony?

And yet, we know from other observations (__more data gathering__) that termites usually leave some marks. They sometimes create piles of wood chips, or sometimes they build big mounds (depending on the type of termite). ''It seems like the wood chips are gathered by the termites into piles, or maybe even just a __single pile__''.
|borderless|k
|[img[termite pile 1|resources/termite_pile_1_small.jpg][resources/termite_pile_1.jpg]]|[img[termite pile 2|resources/termite_pile_2_small.jpg][resources/termite_pile_2.jpg]]|[img[termite pile 3|resources/termite_pile_3_small.jpg][resources/termite_pile_3.jpg]]|
|borderless|k

!!!So, a good set of questions to focus this investigation may be:
1. Is it possible that even with "random termite activity", and with no leader, the termites will end up building something (piles, mounds, ''a single one'')?
2. Will the end result (piles, mounds) depend on the number of termites and/or the number of wood chips? 
3. Will the end result happen at the same speed (rate) regardless of the number of termites and wood chips?

In order to address the 1^^st^^ question we will use [[modeling|Computational Thinking example: Termites and woodchips - modeling]] and __simulation__.
In order to address the 2^^nd^^ question we will engage in [[simulation|Computational Thinking example: Termites and woodchips - simulating]] and __analysis__.
And in order to address the 3^^rd^^ question we will use __analysis__ and __visualization__.
This activity, demonstrating simulation, is part of the [[Computational Thinking (CT) problem solving framework|A Framework for Computational Thinking, Computational Literacy]].
Simulation skills are critical in many problem solving cases, and simulation activities often happen in multiple parts of the problem solving process, as problem solvers refine their understanding of the problem (and model) as a result of new/additional data, analysis, modeling, etc.
This example is using the [[NetLogo software|http://ccl.northwestern.edu/netlogo/]], developed at Northwestern University.

!!! Creating the simulation
[>img[Simulating using NetLogo|resources/termites_netlogo_sim.png][math/netlogo/Termites1.html]]
From the [[example overview|Computational Thinking example: Termites and woodchips - overview]] certain simulation considerations are emerging, and can be [[captured and programmed in NetLogo|math/netlogo/Termites1.html]].

A few of the simulation design considerations:
* From the [[modeling phase|Computational Thinking example: Termites and woodchips - modeling]], it's clear that the entities to be simulated are: termites, wood chips, and piles
* In order to investigate the behavior, users need to be able to control/vary the number of termites and the number of chips
* Initially, each wood chip is its own "pile"; over time, chips are picked up and dropped off next to other chips by termites following simple actions in a continuous loop (see the Python sketch after this list):
** {{{search-for-chip}}}
*** Each termite moves randomly through the simulation space, looking for a chip. Once it finds one, it picks it up and keeps moving randomly looking for a pile
**  {{{find-new-pile}}}
*** Once a termite carrying a chip encounters another chip on its random walk, it treats that chip as part of a pile
**  {{{put-down-chip}}}
*** Once a (chip-carrying) termite finds a pile, it looks for an empty space next to a chip belonging to that pile and drops its chip off there
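For illustration, here is a minimal Python sketch of this loop (my own re-creation of the rules above, not the actual ~NetLogo code):
{{{
import random

SIZE = 50        # the world is a SIZE x SIZE grid that wraps around (a torus)
DENSITY = 0.3    # initial fraction of cells holding a wood chip
N_TERMITES = 100
STEPS = 10000    # increase for more consolidation

chips = {(x, y) for x in range(SIZE) for y in range(SIZE)
         if random.random() < DENSITY}
termites = [{"pos": (random.randrange(SIZE), random.randrange(SIZE)),
             "carrying": False} for _ in range(N_TERMITES)]

def step(pos):
    # move one cell in a random direction, wrapping at the edges
    dx, dy = random.choice([(-1, 0), (1, 0), (0, -1), (0, 1)])
    return ((pos[0] + dx) % SIZE, (pos[1] + dy) % SIZE)

for tick in range(STEPS):
    for t in termites:
        t["pos"] = step(t["pos"])
        if not t["carrying"] and t["pos"] in chips:
            chips.remove(t["pos"])        # search-for-chip: pick it up
            t["carrying"] = True
        elif t["carrying"] and t["pos"] in chips:
            # find-new-pile: we bumped into another chip, so treat it as a
            # pile; put-down-chip: drop our chip on a free neighboring cell
            for _ in range(100):
                spot = step(t["pos"])
                if spot not in chips:
                    chips.add(spot)
                    t["carrying"] = False
                    break
}}}
Even these few simple rules, run long enough, produce the pile consolidation shown in the runs below.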

!!! Running the simulation
Following is a series of snapshots from a simulation run, with an initial 30% chip density (about 12,000 wood chips), and 100 termites. Simulation time is measured and displayed on the chart in "clock ticks" which represent the simulation cycle updates.
|borderless|k
|[img[termite simulation 1|resources/termites_netlogo_sim1.png][resources/termites_netlogo_sim1_big.png]]after ~5,000 cycles, ~7,200 piles|[img[termite simulation 2|resources/termites_netlogo_sim2.png][resources/termites_netlogo_sim2_big.png]]after ~100,000 cycles, ~980 piles|[img[termite simulation 3|resources/termites_netlogo_sim3.png][resources/termites_netlogo_sim3_big.png]]after ~250,000 cycles, ~550 piles|
|borderless|k

As can be observed, this kind of termite behavior (or set of rules) results in pile consolidation (i.e., reduced number of piles), which is in line with [[the initial observations of real-life termite behavior|Computational Thinking example: Termites and woodchips - overview]].
Starting with about 12,000 piles, the 100 termites consolidate the chips into about 980 piles within about 100,000 simulation cycles, and then to about 550 piles after about 250,000 cycles.

!!!! Exploring the simulation space
[>img[NetLogo BehaviorSpace|resources/termites_netlogo_BehaviorSpace_small.png][resources/termites_netlogo_BehaviorSpace.png]]
One of the powerful and useful features of ~NetLogo is that it enables automatic "parameter sweeps". With a feature called [[Behavior Space|http://ccl.northwestern.edu/netlogo/docs/]], users can run a model many times, systematically varying the model's settings (or variables, like the number of termites, and the wood chip density) and recording the results of each model run. This process lets users explore the model's "space" of possible behaviors and determine which combinations of settings cause behaviors of interest.

From the parameter sweep, it looks like in situations where the wood chip density is small (say 1%), pile consolidation happens relatively quickly, converging to ''a single pile'', regardless of the number of termites (or at least with the "sweep" we did of 1, 50, 100, and 150 termites).
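In spirit, a ~BehaviorSpace experiment is just a set of nested loops over the model settings. Here is a hand-rolled sketch (assuming a hypothetical {{{run_simulation}}} helper, e.g., the loop sketched earlier wrapped in a function that returns the final pile count):
{{{
# sweep the parameter space and record the outcome of each combination
def sweep(run_simulation):
    results = {}
    for n_termites in [1, 50, 100, 150]:   # the termite counts swept above
        for density in [0.01, 0.10, 0.30]:
            results[(n_termites, density)] = run_simulation(
                n_termites=n_termites, density=density, steps=100000)
    return results
}}}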

And indeed simulating with 100 termites, and 1% chip density, for about 100,000 simulation cycles results in a significant pile consolidation:
[img[termite simulation 1|resources/termites_netlogo_sim4.png][resources/termites_netlogo_sim4_big.png]]after ~100,000 cycles, 4 piles
I recently [[presented about Computational Thinking (CT)|downloads/resources/Computational Literacy in the Classroom-1.pdf]]^^1^^ and provided some technology-enabled examples, at a teacher professional development session at the Ravenswood School District. I also did a similar presentation for [[Teaching Fellows for Citizen Schools|http://www.citizenschools.org/careers/teaching-fellowship/about/]].

[>img[Computational Literacy presentation|resources/Rosling_CNN_GapMinder1.png][downloads/resources/Computational Literacy in the Classroom-1.pdf]]
Since addressing [[Computational Thinking|A Framework for Computational Thinking, Computational Literacy]] in school is gaining more focus, I wanted to present some impactful CT concepts and skills, in two contexts:
* the (narrower) context of leveraging CT and computational technologies in __direct support__ of STEM^^2^^ standards and objectives (e.g., the [[Common Core State Standards for Mathematics|http://www.corestandards.org/]], and the [[National Framework for K-12 Science & Engineering|http://www.nap.edu/catalog.php?record_id=13165]])
* the (wider) context of introducing and supporting [[Computational Thinking|A Framework for Computational Thinking, Computational Literacy]] best practices and tools, to promote and exercise these skills and processes with students

The [[presentation|downloads/resources/Computational Literacy in the Classroom-1.pdf]]^^1^^ demonstrated a few cases and opportunities to leverage computational technologies such as simulation, visualization, animation, and analysis tools in support of __specific__ learning objectives and standards. It also aimed at "opening the door" to exploring and exposing new ideas and directions, by naturally "sowing seeds" through the careful planning and implementation of technology-enabled activities and experiences.

!!!!We also explored with the teachers:
* What their educational goals were in leveraging CT and technologies
* What kind of role they see themselves playing in the classroom
* How comfortable they are introducing and using computational technologies as part of their curricula
* What their time horizon is for learning and using CT and technologies in the classroom

----
^^1^^ A 440KB PDF file (with the application/demo launch links disabled)
^^2^^ STEM = Science, Technology, Engineering, Mathematics 
{{{Computational Thinking is no more about computers than astronomy is about telescopes.}}}
: -- Haggai Mark (paraphrasing Edsger Wybe Dijkstra) 

{{{Computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty.}}}
{{{A programmer who subconsciously views himself as an artist will enjoy what he does and will do it better.}}}
: -- Donald Knuth


<<forEachTiddler 
where 
'tiddler.tags.contains("computational-thinking-item") || tiddler.tags.contains("computer science")'
sortBy 
'tiddler.title'>>



<html>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-sa/3.0/us/88x31.png" /></a><br />To the extent possible and under my control, this work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License</a>.
</html>
from the [[GNU software terms|https://www.gnu.org/fun/jokes/software.terms.html]] page,

''Power User'':  		A person who has mastered the brightness and contrast controls on any computer monitor.
''Alpha software'':  		Too buggy to be released to the paying public.
''Beta software'':  		Still too buggy to be released.
''Release software'':  	Alternate pronunciation of "Beta software".
''Encryption'':  			A powerful algorithmic encoding technique employed in the creation of computer manuals.
''Multitasking (machine)'':	A clever method of simultaneously slowing down the multitude of computer programs that insist on running too fast.
''Multitasking (human)'':	A clever method of simultaneously slowing down all your activities and comprehension, and insisting that you are "doing great".
''Support'':  			The mailing of advertising literature to customers who have returned a registration card.
''Transportable'':  		Neither chained to a wall nor attached to an alarm system.
''Printer'':  			An electromechanical paper shredding device.
''Upgraded'':  			Didn't work the first time.
''User Friendly'':  		Supplied with a full color manual.
''Very User Friendly'':	Supplied with a disk and audiotape so the user need not bother with the full color manual.
''Warranty'':  			Disclaimer.



variations on the [[Error codes|https://www.gnu.org/fun/jokes/errno.2.html]] page

''ENOTOBACCO''	Read on an empty pipe			(ref. pipe - unix read/write channel)
''ECHERNOBYL''	Core dumped					(system crashed)
''EDINGDONG''		The daemon is dead 			(ref. The Wizard of Oz)
''EIEIO''			Here-a-bug, there-a-bug, …		(ref. Old McDonald had a farm)
''EMILYPOST''		Wrong fork					(ref. Ms. Manners, unix fork(2), github fork)
''ENOHORSE''		Mount failed					(ref. disk mounting)
''EWOK''			Aliens sighted					(ref. Star Wars)
''EWOK''			Your code appears to have been stir-fried
''EWOULDBNICE''	The feature you want has not been implemented yet



from [[Unix Error Messages|https://www.gnu.org/fun/jokes/unix.errors.html]]

(% represents the csh prompt, $ represents the Bourne shell prompt)
 
% "How poorly would you rate the Unix (so-called) user interface?
Unmatched ".
 
% rm congressional-ethics
rm: congressional-ethics nonexistent
 
% ar m God
ar: God does not exist
 
% [Where is Jimmy Hoffa?
Missing ].
 
% ^How did the sex change^ operation go?
Modifier failed.
 
% If I had a ( for every $ Congress spent, what would I have?
Too many ('s.
 
% make love
Make:  Don't know how to make love.  Stop.
 
% sleep with me
bad character
 
% got a light?
No match.
 
% man: why did you get a divorce?
man:: Too many arguments.
 
% ^What is saccharine?
Bad substitute.
 
% \(-
(-: Command not found.
 
% sh
 
$ PATH=pretending! /usr/ucb/which sense
no sense in pretending
 
$ drink <bottle; opener
bottle: cannot open
opener: not found
 
$ mkdir matter; cat >matter
matter: cannot create
 
 
Or, in a System V (att) universe:
 
$ cat "can of food"
cat: cannot open can of food
I find a paraphrased version of the quip attributed to Edsger Dijkstra ("Computer science is no more about computers than astronomy is about telescopes") very apt for [[Computational Thinking|A Framework for Computational Thinking, Computational Literacy]]:
>{{{Computational Thinking is no more about computers than astronomy is about telescopes.}}}

The hard-boiled computer scientist Edsger Wybe Dijkstra also said:
>{{{The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.}}}
In an excellent book titled //Will you be alive 10 years from now?//, Paul Nahin gives quite a few examples of solving and/or demonstrating statistical and probability phenomena/questions using MATLAB simulations/programs/code.

He is happy to be flexible and use both simulations and analysis:
>What I did when teaching, and have done in two previous books (//Dueling Idiots and Other Probability Puzzlers//, and //Digital Dice//), was endorse the use of computer simulation to check theoretical results. If a computer simulation of a random process agrees (to some acceptable degree) with a theoretical result, then I think one's confidence in both approaches is enhanced. Such an agreement doesn't, of course, prove that either result is correct, but surely one would then have to believe that a remarkable coincidence had instead occurred.

And he observes:
>There is an interesting feature to doing computer simulations that I have noticed, after decades of writing computer codes in different languages to implement them. Problems that are hard to do theoretically may require only an easy simulation; the converse may also be true, that is, a problem easy to analyze theoretically may require a complicated simulation code.
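As a small taste of this simulation-checks-theory idea (my own Python example; Nahin's programs are in MATLAB), here is the classic birthday problem, estimated by simulation and compared with the exact formula:
{{{
import math
import random

def simulate(n_people=23, trials=100000):
    # fraction of trials in which at least two people share a birthday
    hits = sum(
        len(set(random.randrange(365) for _ in range(n_people))) < n_people
        for _ in range(trials))
    return hits / trials

def exact(n_people=23):
    # 1 - P(all birthdays are distinct)
    return 1 - math.prod((365 - k) / 365 for k in range(n_people))

print("simulated:", simulate())  # about 0.507, varies from run to run
print("exact:    ", exact())     # 0.5072972343...
}}}
If the two numbers agree (to some acceptable degree), one's confidence in both approaches is enhanced, exactly as Nahin describes.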
In an article in The Atlantic titled [[The AI That Has Nothing to Learn From Humans|https://www.theatlantic.com/technology/archive/2017/10/alphago-zero-the-ai-that-taught-itself-go/543450/]], the author points out that the Go-playing computer program known as ~AlphaGo had been initially trained by learning from humanly played Go games. Then, when it had "enough" knowledge, it started playing against itself, in order to improve its skill and techniques even further (and faster).

In the article, expert Go players made the interesting (and plausible) observation that this resulted in ~AlphaGo playing in a style which was akin to (and recognizable as) the style of human players.

Then, the creators of ~AlphaGo (Google’s ~DeepMind group) built another Go-playing machine called [[AlphaGo Zero|https://deepmind.com/blog/alphago-zero-learning-scratch/]], which had been learning to play the game from scratch, without being fed any humanly played games.

The author claims that
>~DeepMind’s new self-taught Go-playing program [~AlphaGo Zero] is making moves that other players describe as “alien” and “from an alternate dimension.”
and that there are some 
>"inhuman, incomprehensible elements in the way ~AlphaGo plays", as if using a playbook from an "alien civilization".

Now, this sounds both intriguing and scary, and I suspect it reads too much into the fact that the playing style of ~AlphaGo Zero may indeed be different, compared to the traditional/classic style.

But maybe there is a non-extra-terrestrial or non-alien explanation of the different style.

The game of Go is known for its enormous set of possible moves and outcomes. The space of moves and counter-moves -- also known as the game decision tree -- is [[truly vast|http://www.i-programmer.info/news/112-theory/9384-number-of-legal-go-positions-finally-worked-out.html]] (the number of legal board positions alone is approximately 2.1 * 10^^170^^, which is much, much bigger than in chess^^1^^). Given that, it is conceivable that a machine such as ~AlphaGo Zero quickly moves into areas of the decision tree (or space of possibilities/combinations) which have never been "visited" and explored/studied by human players, despite many generations of players and years of playing and studying.

I also suspect, and the Atlantic article hints about it^^2^^, that over the years and the history of the game, human players came up with stories, images, models, and myths to describe certain Go configurations, patterns, moves, and strategies. This is natural (i.e., human :), and may also lead to better understanding and memorizing certain moves and techniques for winning.
But a machine/software like ~AlphaGo Zero does not need (nor understand or relate to) these kinds of "aids" (or "crutches"). It can just "play by the numbers", so to speak, since it has the brute-force, raw number-crunching power to analyze, simulate, and calculate possibilities.
Given that ~AlphaGo Zero was not fed humanly-played games as part of its training, it had only combinations and their winning probabilities to rely on, which may explain its "alien" style of play.


----
^^1^^ - from [[A Comparison of Chess and Go|https://www.britgo.org/learners/chessgo.html]]:
>At the opening move in Chess there are 20 possible moves. In Go the first player has 361 possible moves. This wide latitude of choice continues throughout the game. At each move the opposing player is more likely than not to be surprised at their opponent's move, and hence they must rethink their own plan of attack.
^^2^^ - for example:
* it’s so hard to try to attach a story about what ~AlphaGo is doing. You have to be ready to deny a lot of the things that we’ve believed and that have worked for us.
* Generally the way humans learn Go is that we have a story. That’s the way we communicate. It’s a very human thing.
* people can identify and discuss shapes and patterns. [...] When teaching beginners, a Go instructor might point out an odd-looking formation of stones resembling a lion’s mouth or a tortoiseshell (among other patterns) and discuss how best to play in these situations.
Computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty. A programmer who subconsciously views himself as an artist will enjoy what he does and will do it better.
Computers are to computing as instruments are to music. Software is the score whose interpretation amplifies our reach and lifts our spirits. Leonardo da Vinci called music the shaping of the invisible, and his phrase is even more apt as a description of software.

Apropos instruments vs. music vs. musician:
There’s a story about Jascha Heifetz, the famously [[dyspeptic|http://www.dictionary.com/browse/dyspeptic]] Russian violinist and giant of the golden age of recording: After a concert one evening, an admirer went to visit the soloist in his dressing room. “Mr. Heifetz,” he gushed, “what a performance! Your violin has such a gorgeous tone!” Heifetz picked up his instrument, held it to his ear and knit his brow. “I don’t hear anything.”
Andrea diSessa, in his book [[Changing Minds: Computers, Learning, and Literacy|resources/diSessa - Changing Minds - Chapter1.pdf]], talks about the need to learn, develop, and teach what he calls "computing literacy". By that he does //not// mean things like fluency in any particular set of computer programs/applications (e.g. Microsoft Office). He compares the impact of computers, and the importance of this "new literacy" to the text-based literacy humanity experienced as a result of the introduction of the printing press and mass-produced books:
>Computers can be the technical foundation of a new and dramatically enhanced literacy, which will act in many ways like current literacy and which will have penetration and depth of influence comparable to what we have already experienced in coming to achieve a mass, text-based literacy.
>...
>If a true computational literacy comes to exist, it will be infrastructural in the same way current literacy is in current schools. Students will be learning and using it constantly through their schooling careers, and beyond, in diverse scientific, humanistic, and expressive pursuits. Outside of schools, a computational literacy will allow civilization to think and do things that will be new to us in the same way that the modern literate society would be almost incomprehensible to preliterate cultures. Clearly by computational literacy I do not mean a casual familiarity with a machine that computes.

diSessa makes a thought-inspiring analogy between the invention of the calculus (by Newton and Leibniz) and its introduction into the core curriculum of science, engineering, and technology in universities, making it a basic ''mathematics literacy'' requirement (and assumption), and the introduction of the new ''computing literacy'' and the impact he expects it to have on humanity.
>This move to infrastructural status for calculus was not easy. It took more than two centuries! In the twentieth century, a few bold universities decided it was possible and useful to teach calculus in the "early and universal" (for all technical students) infrastructural mode.
>It succeeded, more or less, and gradually more schools jumped on the bandwagon. They had the advantage of knowing that teaching calculus this way was possible, and they could capitalize on the know-how of the early innovators. In the meantime, other professors and textbook writers for other classes began to take the teaching of calculus for granted. They became dependent on it. Calculus came to be infrastructural.

His analogy between math-related literacies like algebra and calculus and computing literacy focuses on learners' ability to program within some sort of an environment, throughout the educational system (starting in elementary school) and into adulthood (supporting life-long learning).
>We have come through expressive aptness, conceptual precision, and so on, to a really new place. From here, it is easy to imagine sixth grade students getting personally and creatively involved in designing space ships and all sorts of games.
>Galileo's and Newton's sometimes forbidding abstractions have been resituated in a fabric of doing play that can be owned by children. We can be instrumental and say that mathematics and science can be motivating and engaging in a way that far transcends words, algebra, and calculus. We can talk about "time on task" and notions of learning through rich feedback.
>Or we can say merely that we have managed to bring mathematics and science into a child's world in a way that shames "you'll need this for the next course" or "just do the exercises." This last phrasing may be the most important.
>Here is the point of this section in a nutshell. A new representational form, programming, as part of a new literacy can lead to deeper learning, much earlier with fewer unpleasant glitches, and in a way that transforms the experience of students substantially from doing what adults say in semi-comprehension into a really rich and appropriately kid-like experience, more like what they want to and can do without adults intruding awkwardly.

For a different (but related) perspective in terms of //languages// (vs. literacy) see what Robert Logan has to say in his book [[The Sixth Language: Learning a Living in the Internet Age|New languages]].
From //An Introduction to Computer Simulation Methods// by Harvey Gould, Jan Tobochnik, and Wolfgang Christian, and with application to Easy Java Simulations (EJS)

Since computation and computers are pervasive in science, technology, engineering, and math, and since they significantly impact the way we do science, engineering, and math, it is essential that we teach computing literacy as part of STEM (Science, Technology, Engineering, Math) education/programs. The availability of powerful computing technologies enables us to think differently about solving problems, and leads to both new solutions and new insights/understanding of phenomena in the world.
!!!Computing capabilities
Computing (in physics, and other areas of STEM) can be leveraged in multiple areas, including numerical analysis, symbolic manipulation, visualization, simulation, and the collection and analysis of data.
* in ''numerical analysis'', computers/computing can be used to numerically solve equations and produce numerical data (e.g., solving multi-variable equations/matrices, multidimensional integrals, nonlinear differential equations, etc.; see the short sketch after this list)
* in ''symbolic manipulation'' computers/computing can be used to generically and symbolically solve equations, derive proofs, perform abstractions, simplifications, and approximations, etc. (e.g. performing differentiation, integration, matrix inversion, and power series expansion, etc.)
* in ''visualization'' computers/computing can be used to gain significant insights and deeper understanding of phenomena by leveraging the human capacity to quickly and effectively detect patterns in data presented and manipulated visually. Good visualizations can "bring data to life" and reveal hidden relationships and patterns.
* in ''simulation'' computers/computing can be used to develop new models of phenomena and derive new predictions, or validate/verify existing knowledge (maybe under new conditions or combinations). New insights can be gained from simulating different scenarios ("what-if" simulation and analysis).
>Simulations frequently use the computational tools of numerical analysis and visualization, and occasionally symbolic manipulation. The difference is one of emphasis. Simulations are usually done with a minimum of analysis. [...] simulation emphasizes an exploratory mode of learning.
* in ''collection and analysis of data'' computers/computing can be used to "harvest" large amounts of data (from the environment "at large", experiments, etc.) and organize, manipulate, and analyze it in different ways and under different conditions and scenarios. Combined with visualization for example, new knowledge and understanding can be gained, and possibly new models can be developed and simulated to verify or refine theories, etc.
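Here is a tiny illustration of the first two capabilities (my own example, not from the book), solving the same definite integral numerically with ~NumPy and symbolically with ~SymPy:
{{{
import numpy as np
import sympy as sp

# numerical analysis: approximate the integral of sin(x) over [0, pi]
x = np.linspace(0, np.pi, 10001)
numeric = np.trapz(np.sin(x), x)

# symbolic manipulation: compute the same integral exactly
s = sp.symbols('x')
symbolic = sp.integrate(sp.sin(s), (s, 0, sp.pi))

print(numeric)   # 1.99999999... (an approximation)
print(symbolic)  # 2 (exact)
}}}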
!!!Importance of computer simulations
* most analytical methods are suited for //linear// phenomena, but unfortunately many phenomena are //non-linear//. Computers give us a new tool to explore nonlinear phenomena.
* many interesting and useful problems and phenomena involve a large number of variables, which makes them very difficult to solve without the number crunching and symbolic manipulation capabilities of computers.
* in some cases, computer simulations enable the creation of "computer experiments", which can support lab experiments and serve as validators or testers of new/extreme/combinatorial scenarios. They can thus complement both theory and experimentation.

>Computer simulations, like laboratory experiments, are not substitutes for thinking, but are tools that we use to understand natural phenomena. The goal of all our investigations of fundamental phenomena is to seek explanations of natural phenomena that can be stated concisely.

* Paul Nahin, in his excellent book //Will you be alive 10 years from now?//, provides many simulation code examples (about statistics and probability), and has good observations (and usage!) of both [[simulations and analysis|Computer Simulation vs. Theoretical Analysis]].
or alternatively titled: Computational disproof of mathematics questions (conjectures?) (as opposed to [[Mathematical proof by computing?]]).

In [[a question on math.stackexchange|https://math.stackexchange.com/questions/1290948/special-representation-of-a-number]], someone asked:
{{{
How can I check, if a number n

can be represented by pq + rs

where p,q,r,s are pairwise different prime numbers with the same number of digits.

For example,

105153899965560312960 = 3022993637 × 6003631993 + 9069920719 × 9592692301

has such a representation.

My questions :

    Is such a representation (if it exists), always unique ?
    How can I find the primes p,q,r,s if a representation exists ?
    How can I check if a representation exists ?
}}}

Looking at the first question, and depending on your mindset or inclination, you may be tempted (I know I was :) to just look for a counterexample, to see if representations like the one defined above are unique (is this the contrarian in me speaking? :) Maybe, but then again, sometimes it's easier to try to disprove something first, because then "the burden of proof" goes away :) (like the "black swan" case, where you can never prove by observation that all swans are white, but you can definitely prove that NOT all swans are white, by finding just one black swan :)

And if you know how to program, even a little bit, you could actually write a program to brute-force search for these kinds of representations (it doesn't seem like a very difficult problem), and see if your program can produce more than one representation for some number, thus disproving uniqueness.

So here is a short Python program, which BTW is a good exercise in basic data structures (lists, dictionaries, sets):
{{{
# inspired by a math.stackexchange question (conjecture?):
# https://math.stackexchange.com/questions/1290948/special-representation-of-a-number
#
# n = pq + rs (where p, q, r, s are primes with the same number of digits, and n is any number)
#
# 2 digit primes up to 50:
primes = [11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47]

# the results: maps each n to the first [p, q, r, s] representation found
ns = dict()
min_n = 0
for p in primes:
  for q in primes:
    for r in primes:
      for s in primes:
        if len(set([p, q, r, s])) == 4:  # p, q, r, s pairwise different
          n = p * q + r * s
          if n not in ns:
            ns[n] = [p, q, r, s]
          elif sorted(ns[n]) != sorted([p, q, r, s]):
            # a second representation of n from a different set of primes:
            # uniqueness is disproved
            print(n, '=', ns[n][0], '*', ns[n][1], '+', ns[n][2], '*', ns[n][3], '=', p, '*', q, '+', r, '*', s)
            if n < min_n or min_n == 0:
              min_n = n
              print("smallest so far:", min_n)
}}}

and here is the output, showing a few such combinations (and thus disproving the "conjecture"), and finding the smallest such number, n = 694 = 11 * 13 + 19 * 29 = 11 * 43 + 13 * 17:
{{{
1086 = 11 * 13 + 23 * 41 = 11 * 17 + 29 * 31
smallest so far: 1086
1086 = 11 * 13 + 23 * 41 = 11 * 17 + 31 * 29
922 = 11 * 13 + 19 * 41 = 11 * 19 + 23 * 31
smallest so far: 922
1290 = 11 * 13 + 31 * 37 = 11 * 19 + 23 * 47
922 = 11 * 13 + 19 * 41 = 11 * 19 + 31 * 23
1290 = 11 * 13 + 31 * 37 = 11 * 19 + 47 * 23
746 = 11 * 17 + 13 * 43 = 11 * 23 + 17 * 29
smallest so far: 746
746 = 11 * 17 + 13 * 43 = 11 * 23 + 29 * 17
1152 = 11 * 19 + 23 * 41 = 11 * 23 + 29 * 31
1152 = 11 * 19 + 23 * 41 = 11 * 23 + 31 * 29
846 = 11 * 13 + 19 * 37 = 11 * 29 + 17 * 31
1032 = 11 * 23 + 19 * 41 = 11 * 29 + 23 * 31
1400 = 11 * 23 + 31 * 37 = 11 * 29 + 23 * 47
846 = 11 * 13 + 19 * 37 = 11 * 29 + 31 * 17
1032 = 11 * 23 + 19 * 41 = 11 * 29 + 31 * 23
1400 = 11 * 23 + 31 * 37 = 11 * 29 + 47 * 23
874 = 11 * 13 + 17 * 43 = 11 * 31 + 13 * 41
900 = 11 * 17 + 23 * 31 = 11 * 31 + 13 * 43
732 = 11 * 13 + 19 * 31 = 11 * 31 + 17 * 23
smallest so far: 732
1038 = 11 * 17 + 23 * 37 = 11 * 31 + 17 * 41
...
1524 = 11 * 23 + 31 * 41 = 11 * 41 + 37 * 29
1268 = 11 * 17 + 23 * 47 = 11 * 41 + 43 * 19
694 = 11 * 13 + 19 * 29 = 11 * 43 + 13 * 17
smallest so far: 694
720 = 11 * 17 + 13 * 41 = 11 * 43 + 13 * 19
772 = 11 * 13 + 17 * 37 = 11 * 43 + 13 * 23
876 = 11 * 19 + 23 * 29 = 11 * 43 + 13 * 31
...
}}}

And if you want the two representations to use entirely different/non-repeating prime numbers (p, q, r, s), you need to change 1 line
{{{
from:              elif sorted(ns[n]) != sorted([p, q, r, s]):
to:                elif set(ns[n]) & set([p, q, r, s]) == set():
}}}
to get a result starting at:
{{{
1704 = 11 * 17 + 37 * 41 = 13 * 19 + 31 * 47
smallest so far: 1704
1764 = 11 * 47 + 29 * 43 = 13 * 19 + 37 * 41
1764 = 11 * 47 + 29 * 43 = 13 * 19 + 41 * 37
1704 = 11 * 17 + 37 * 41 = 13 * 19 + 47 * 31
996 = 11 * 37 + 19 * 31 = 13 * 23 + 17 * 41
smallest so far: 996
1098 = 11 * 29 + 19 * 41 = 13 * 23 + 17 * 47
996 = 11 * 37 + 19 * 31 = 13 * 23 + 41 * 17
1098 = 11 * 29 + 19 * 41 = 13 * 23 + 47 * 17
1458 = 11 * 17 + 31 * 41 = 13 * 29 + 23 * 47
1458 = 11 * 17 + 31 * 41 = 13 * 29 + 47 * 23
1032 = 11 * 23 + 19 * 41 = 13 * 31 + 17 * 37
1032 = 11 * 23 + 19 * 41 = 13 * 31 + 37 * 17
1212 = 11 * 29 + 19 * 47 = 13 * 37 + 17 * 43
1814 = 11 * 41 + 29 * 47 = 13 * 37 + 31 * 43
1212 = 11 * 29 + 19 * 47 = 13 * 37 + 43 * 17
1814 = 11 * 41 + 29 * 47 = 13 * 37 + 43 * 31
1060 = 11 * 19 + 23 * 37 = 13 * 41 + 17 * 31
970 = 11 * 31 + 17 * 37 = 13 * 41 + 19 * 23
smallest so far: 970
970 = 11 * 31 + 17 * 37 = 13 * 41 + 23 * 19
1060 = 11 * 19 + 23 * 37 = 13 * 41 + 31 * 17
1262 = 11 * 29 + 23 * 41 = 13 * 43 + 19 * 37
1706 = 11 * 47 + 29 * 41 = 13 * 43 + 31 * 37
1262 = 11 * 29 + 23 * 41 = 13 * 43 + 37 * 19
1706 = 11 * 47 + 29 * 41 = 13 * 43 + 37 * 31
1002 = 11 * 41 + 19 * 29 = 13 * 47 + 17 * 23
1308 = 11 * 29 + 23 * 43 = 13 * 47 + 17 * 41
}}}

and so it goes ... :)
I recently came across an article by Mitchel Resnick (the MIT professor of Logo, Lego Mindstorms, and Scratch fame) from 1994 (!) called [[Learning About Life|resources/Learning About Life - Resnick.html]]. As expected it has numerous interesting ideas and points, but the one I want to pick here is about what he has to say on Constructionism vs. Constructivism, and some implications on simulations as educational tools (tools for learning).
Resnick points out that while Constructivism (per Jean Piaget and others) studied and emphasized learning as an active process, in which learners must actively construct knowledge on their own, Constructionism (per Seymour Papert and others) emphasized that learning is particularly effective and powerful when the learner creates "personally meaningful" or significant artifacts (hence Papert's work with Logo, Turtle Graphics, etc.).
From this, Resnick makes the important point (and almost cautions us) that there is a significant difference in learning effectiveness which curricula often miss: hands-on learning activities (e.g., many Chemistry or Physics school labs/exercises) are mostly Constructivist (Piaget), since in most of them the learner "follows a recipe" (a study guide, lab instructions, a manual, etc.) and is not engaged in actively constructing personally meaningful artifacts (results, products, etc.). Learners are limited in how far and how deeply they can explore, since most of these learning experiences/environments are "closed packages".
And that's one important implication for simulations and simulation design. While simulations may be "more open" (to exploration) than, say, video clips, there is only so much learners can do within their constraints (which, if designed well, can be pretty weak, but still not "limitless"); simulations can also be inefficient for learning, in terms of flexibility/openness, if they rely on strong scripting, prescriptions, etc. A Constructionist approach would enable learners to build their own simulations, giving them a wide and deep range of exploration, and making the simulations much more personally meaningful.
For this reason, I think it's important for learners and teachers to master "tool creation" by learning some programming, since this is both essential to deep learning and part of gaining Computational Literacy.

Resnick mentions that after the great physicist and Nobel Laureate Richard Feynman died, the following sentence was found on his blackboard at Caltech:
>What I cannot create, I do not understand
And Resnick sees this as a strong (and lasting) endorsement of the Constructionist approach to learning/understanding.
What Feynman meant by "create" is an open question; since he was a theoretical physicist, he most probably did not mean //physically// create/produce. It is interesting that a less often quoted sentence on [[Feynman's blackboard|resources/Feynman_blackboard.jpg]], right before the first one, was:
>know how to solve every problem that has been solved
To me these two are tied together and possibly indicate what he meant by "creating". If a learner wants to //really//, //truly// understand something, they have to work it out personally, take nothing on faith or for granted, and work as much as possible from first principles, and build up from there. Another benefit of recreating (or personally creating) is that you gain a more intuitive understanding; the knowledge "sits better" with you, since you worked out and personally experienced the difficulties, discoveries, nuances, etc. (for another Feynman anecdote about creating/constructing and its power see [[Looking for patterns]]).

This reminds me of a job interview I had as a hardware electronics engineer. The interviewer asked me if I knew the formula for calculating the gain/amplification of a certain operational amplifier configuration. I wasn't sure, but I told him that I could work it out if he gave me a couple of minutes, and I arrived at the formula from first principles. He looked satisfied (and I got the job).
In an article titled [[Putting Students on the Path to Learning - The Case for Fully Guided Instruction|https://www.aft.org/sites/default/files/periodicals/Clark.pdf]], the authors (Richard E. Clark, Paul A. Kirschner, and John Sweller) make an important observation and distinction about Constructivism:

>[M]any educators mistakenly believe partially and minimally guided instructional approaches are based on solid cognitive science. Turning to Mayer’s review of the literature [see below], many educators confuse “constructivism,” which is a theory of how one learns and sees the world, with a prescription for how to teach. In the field of cognitive science, constructivism is a widely accepted theory of learning; it claims that learners must construct mental representations of the world by engaging in active cognitive processing. 
>Many educators (especially teacher education professors in colleges of education) have latched on to this notion of students having to “construct” their own knowledge, and have assumed that the best way to promote such construction is to have students try to discover new knowledge or solve new problems without explicit guidance from the teacher. Unfortunately, this assumption is both widespread and incorrect. Mayer calls it the “constructivist teaching fallacy.” 
>Simply put, cognitive activity can happen with or without behavioral activity, and behavioral activity does not in any way guarantee cognitive activity. In fact, the type of active cognitive processing that students need to engage in to “construct” knowledge can happen through reading a book, listening to a lecture, watching a teacher conduct an experiment while simultaneously describing what he or she is doing, etc. 
>Learning requires the construction of knowledge. Withholding information from students does not facilitate the construction of knowledge.
Or in Richard Mayer's words (from his article [[Should There Be a Three-Strikes Rule Against Pure Discovery Learning? The Case for Guided Methods of Instruction|http://projects.ict.usc.edu/itw/gel/MayerThreeStrikesAP04.pdf]]):
>[T]he failure of pure discovery as a method of instruction does not necessarily mean that constructivism is wrong as a theory of learning or that hands-on activity is necessarily a wrong method of instruction. A basic premise in constructivism is that meaningful learning occurs when the learner strives to make sense of the presented material by selecting relevant incoming information, organizing it into a coherent structure, and integrating it with other organized knowledge (Mayer, 2003). It follows that instructional methods that foster these processes will be more successful in promoting meaningful learning than instructional methods that do not. 

Mayer finds guided discovery more effective than pure discovery:
>In many ways, guided discovery appears to offer the best method for promoting constructivist learning. The challenge of teaching by guided discovery is to know how much and what kind of guidance to provide and to know how to specify the desired outcome of learning. In some cases, direct instruction can promote the cognitive processing needed for constructivist learning, but in others, some mixture of guidance and exploration is needed. This is a lesson that emerges again within the context of learning in social context.
Mayer, like the authors above distinguishes between activities that involve only behavioral aspects and activities which promote cognitive work:
>Activity may help promote meaningful learning, but instead of behavioral activity per se (e.g., hands-on activity, discussion, and free exploration), the kind of activity that really promotes meaningful learning is cognitive activity (e.g., selecting, organizing, and integrating knowledge).
>Instead of depending solely on learning by doing or learning by discussion, the most genuine approach to constructivist learning is learning by thinking. Methods that rely on doing or discussing should be judged not on how much doing or discussing is involved but rather on the degree to which they promote appropriate cognitive processing.
>Guidance, structure, and focused goals should not be ignored. This is the consistent and clear lesson of decade after decade of research on the effects of discovery methods.

In their research paper focusing on learning assessments titled [[Constructivism in an Age of NonConstructivist Assessments|http://aaalab.stanford.edu/papers/Constructivist_Assessments_Final.pdf]], Daniel Schwartz, Robb Lindgren, and Sarah Lewis make the following observations about constructivism:
>In our experience, constructivism tends to be too large and general a philosophy to be useful for the precise handling of the many specific ways and reasons that people learn. Constructivism is not at the right level for deriving specific instructional decisions. Sometimes hands-on learning is valuable and sometimes it is not—knowing the microscopic details of when it is valuable is difficult to derive from constructivism alone.
>This is not to say that constructivism does not have an important role to play in the design of instruction. 
In other words, Schwartz et al., see Constructivism as a Guide to Assessment:
>Although we believe that the broad concept of constructivism invites the wrong level of analysis for designing specific instructional moments, we do see constructivism as extremely valuable when applied to learning outcomes. Rather than taking constructivism as an instructional design theory, we suggest that the ideas of constructivism be applied to assessment. We ask the question “Does instruction prepare learners to construct knowledge once we no longer orchestrate specific instructional conditions to target specific learning mechanisms and outcomes?”.
And that's where they see Preparing for Future Learning (PFL) as the more appropriate way to assess:
>A more appropriate test for constructivist outcomes is a preparation for future learning (PFL) assessment. In this type of assessment, students have an opportunity to learn during the test itself. Students who have been prepared to construct new knowledge in a domain will learn more during the assessment than those who have not been prepared to learn. PFL measures seem more in line with constructivist outcomes.
Schwartz points out that there is no one "correct way of teaching":
> Sometimes, it is important to explore and develop one’s own ideas. Sometimes, it is important to receive direct guidance. The question is not which method is right; the question is what combination of methods is best for a given outcome.
>Direct instruction can be very effective, assuming that people have sufficient prior knowledge to construct new knowledge from what they are being told or shown. In many cases, they do not.
>[...] a good way to prepare students for direct instruction is to give them targeted experiences with “exploratory” activities.

And a comment on worked out (solved) examples:
>worked examples can create effective instruction, but only if students are prepared to construct useful knowledge from the examples.

Schwartz et al. summarize (added formatting is mine):

When students engage in the inquiry and exploratory activities that comprise much of constructivist instruction, they are also __engaging ''contrasting cases''__. For example, they may notice that two different actions lead to two different effects. A risk of poorly developed inquiry activities is that there can be too many contrasts, some less useful than others. While a broad range of possible contrasts will uncover many interesting and useful student ideas, too many contrasts make it difficult for students to discern which variables and interactions are most important. Moreover, in large classes, different students may follow the implications of different contrasts, which will make it difficult to “pull it together” for all the students in a class. In our approach, we pre-figure the contrasts to simplify the instructional task.

__A second important feature__ was that the students were asked to ''invent representations'' for the cases, whether symbolic procedures or graphs. This was important for four reasons:
1- The first, as demonstrated by Sears, is that students will not notice the structures highlighted by the contrasts if they are told at the outset how to use the correct procedure. They will focus on the procedure rather than the situational structures that make the procedure useful. Inventing the procedure focuses them on the situation and the procedural issues.
2- The second reason is that invention prepares students to appreciate the “why” captured inside the canonical solution. By working to invent solutions themselves, they begin to understand the issues that led to the design of the expert theory or procedure.
3- The third reason for having students do representational activities is that the goal of much school instruction is to help students learn to understand and use the compact symbolic representations and theories that experts use to organize complexity. Having students work towards these representations sets the stage for learning the canonical accounts.
4- A final reason for the invention activities is that students enjoy them, and there appears to be more engaged thinking and positive effects as a result. Students treat these activities as original production activities that promote creative thinking through the exploration of possible solution paths and representational artifacts.
The solutions that students produce are sometimes suboptimal, but in general, students are not wrong in their inventions. Rather, their inventions simply do not handle all the cases or generalize to cases yet to be seen. When confronted with their “partial accuracy” students come to appreciate their own work, the work of others, and the standard solution.

__[T]he third important feature__ that we have emphasized here is the eventual ''delivery of a comprehensive account of the cases''. The goal is to prepare students to understand the account.

And importantly:
The activities we have described are __not discovery activities in the sense of expecting students to discover the canonical solutions on their own__.
Matthias Felleisen (at Northeastern University^^1^^) [[writes|http://www.ccs.neu.edu/home/matthias/Thoughts/What_should_the_core_achieve_.html]] on [[his website|http://www.ccs.neu.edu/home/matthias/]]:
>The core courses of a computer science curriculum should equip an undergraduate with ''//the//'' mindset and ''//a//'' tool set to tackle software design problems.
* A CS graduate should internalize and routinely use the "software design process"^^2^^ when working on projects (a toy Python sketch follows after this list):
** gather and organize data
** work through examples to clarify ambiguities and to get an idea about the solution
** translate the organization of the problem data into an organization of the solution
** tackle the solution proper
** test the solution for correctness and other constraints
* The graduate should be aware of, and address as appropriate, non-functional constraints on projects (e.g., performance, scaling, resource consumption). When dealing with such constraints, the student should:
** conduct measurements to find “hot spots”
** follow up with a diagnosis step
** ideally resolve them with standard solutions (rather than custom/novel ones, where possible, since standard solutions are more maintainable)
* The student should be a life-long learner and feel comfortable learning new programming languages and/or new programming environments
** in the real world, any big project/system is likely to employ multiple languages and development tools
** in real life, a long and successful professional career will require learning new languages, tools, and environments
* The graduate should acquire a tool set consisting of
** several programming languages
** the typical range of important data structures and algorithms
** practical experience with tools for finding “hot spots”
** the theoretical knowledge to analyze and resolve such problems (“hot spots”)
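
To make the design process concrete, here is a minimal sketch of the five steps applied to a toy problem. This is my own illustration in Python, not Felleisen's (~HtDP itself uses teaching languages from the Racket family):
{{{
# Step 1 -- gather and organize data: an order is a list of line items;
#           represent each line item as a (name, unit_price, quantity) tuple.
# Step 2 -- work through examples to clarify ambiguities:
#           []                 -> 0.0
#           [("pen", 1.50, 2)] -> 3.0
# Step 3 -- translate the organization of the problem data into an
#           organization of the solution: a list of line items -> combine
#           one partial result per line item.

def order_total(line_items):
    """Step 4 -- tackle the solution proper."""
    return sum(unit_price * quantity
               for (name, unit_price, quantity) in line_items)

# Step 5 -- test the solution for correctness (and other constraints).
assert order_total([]) == 0.0
assert order_total([("pen", 1.50, 2)]) == 3.0
assert order_total([("pen", 1.50, 2), ("pad", 4.00, 1)]) == 7.0
}}}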


----
^^1^^ In a typical curriculum, the core includes two courses on programming, two courses on discrete data structures and algorithms, and a course on hardware platforms. At Northeastern, it also includes a course on logic in programming.
^^2^^ This is taught from the book (by Felleisen et al.) //How to Design Programs// (~HtDP) [[available online|http://www.ccs.neu.edu/home/matthias/HtDP2e/]]
Courage is rightly esteemed the first of human qualities... because it is the quality which guarantees all others.
In [[a sound interview|http://www.criticalthinking.org/pages/an-interview-with-linda-elder-about-using-critical-thinking-concepts-and-tools/495]] on her [[Critical Thinking website|http://www.criticalthinking.org/]], Linda Elder succinctly answers the question: 
''What critical thinking skills do we need to foster in terms of information on the World Wide Web?''

>To effectively use information available to us on the web, we need basic critical thinking skills to analyze, evaluate, and improve thinking. In other words, we need to be able to figure out the agenda of the website, the questions they are purporting to answer, the information being presented, the assumptions made, the key concepts that drive the positions taken, etc.

>But perhaps even more importantly, we need to be able to assess the quality of website material. For example, we need to be able to figure out whether the information is accurate, and hence how we could check to see if it is accurate. We need to be able to figure out whether it is relevant to the issue we are focused on. We need to be able to distinguish between information that is deep and that which is superficial. We need to differentiate between the significant and the insignificant. We need to be able to determine whether the information provided is detailed enough (or precise enough) for our purpose, etc.
Here are a few excerpts from Richard Hamming's talk [[One Man's View of Computer Science|http://worrydream.com/refs/Hamming%20-%20One%20Man's%20View%20of%20Computer%20Science.pdf]], about a CS curriculum proposal created by the ACM in 1968 (!).
At times the content is "colored/tainted by history", but overall, I think that he looks at it the right way:

>Like writing, programming is a difficult and complex art. Few programmers write in flowing poetry; most write in halting prose.

>I doubt that style in programming is tied very closely to any particular machine or language, any more than good writing in one natural language is significantly different than it is in another. There are, of course, particular idioms and details in one language that favor one way of expressing the idea rather than another, but the essentials of good writing seem to transcend the differences in the Western European languages with which I am familiar. And I doubt that it is much different for most general purpose digital machines that are available these days. 

>I would require every computer science major, undergraduate or graduate, to take a stiff laboratory course in which he designs, builds, debugs, and documents a reasonably sized program, perhaps a simulator or a simplified compiler for a particular machine. 
>The results would be judged on style of programming, practical efficiency, freedom from bugs, and documentation. If any of these were too poor, I would not let the candidate pass. In judging his work we need to distinguish clearly between superficial cleverness and genuine understanding. Cleverness was essential in the past; it is no longer sufficient. 
>
>I would also require a strong minor in some field other than computer science or mathematics. Without real experience in using the computer to get useful results the computer science major is apt to know all about the marvelous tool except how to use it. Such a person is a mere technician, skilled in manipulating the tool but with little sense of how and when to use it for its basic purposes. I believe we should avoid turning out more idiot savants -- we have more than enough "computniks" now to last us a long time. 
>What we need are professionals! 

>Let me now turn to the delicate matter of ethics. It has been observed on a number of occasions that the ethical behavior of the programmers in accounting installations leaves a lot to be desired when compared to that of the trained accounting personnel. We seem not to teach the "sacredness" of information about people and private company material. 
>
>We should look at, and copy, how ethical standards are incorporated into the traditional accounting courses (and elsewhere), because they turn out a more ethical product than we do. We talk a lot in public of the dangers of large data banks of personnel records, but we do not do our share at the level of indoctrination of our own computer science majors. 

>I believe these three topics -- ethics, professional behavior, and social responsibility must be incorporated into the computer science curriculum. 

And he concludes:
>We are now well started, and it is time to deepen, strengthen, and improve our field so that we can be justly proud of what we teach, how we teach it, and of the students we turn out. We are not engaged in turning out technicians, idiot savants, and computniks; we know that in this modern, complex world we must turn out people who can play responsible major roles in our changing society, or else we must acknowledge that we have failed in our duty as teachers and leaders in this exciting, important field -- computer science. 
The [[Edgie|https://www.edge.org/memberbio/daniel_c_dennett]] Daniel Dennett, [[talking about the Computational Perspective|https://www.edge.org/conversation/daniel_c_dennett-the-computational-perspective]], describes a fruitful technique (a "thought experiment habit", or an [["intuition pump"|https://www.edge.org/conversation/intuition-pumps]]?), which he calls //the intentional stance//^^1^^:
>It's a strategy you can try whenever you're confronted with something complex in nature -- it doesn't always work. The idea is to interpret that complexity as one or more intelligent, rational agents that have agendas, beliefs, and desires, and that are interacting. When you go up to the intentional level, you discover patterns that are highly predictive, that are robust, and that are not reducible in any meaningful sense to the lower-level patterns at the physical level. In between the intentional stance and the physical stance is what I call the design stance. That's the level of software.
He explains that this level, while (possibly only?) "human-made", is real, in a real/meaningful sense:
>[This level consists of patterns, and] what explains their very existence in the universe is computation, is the algorithmic quality of all things that reproduce and that have meaning, and that make meaning.
>These patterns are not, in one sense, reducible to the laws of physics, although they are based in physical reality, although they are patterns in the activities and arrangements of physical particles. The explanation of why they form the patterns they do has to go on at a higher level.
And he gives an example, which comes from Douglas Hofstadter, showing how ONLY by going up to the intentional level can we understand/explain certain real/physical/lower-level phenomena:
>We come across a computer and it's chugging along, chugging along; it's not stopping. And our question is: Why doesn't it stop? What fact explains the fact that this particular computer at this time doesn't stop? And in Doug's example, the answer is that the reason it doesn't stop is that pi is irrational! What? Well, the number pi is an irrational number, which means it's a never-ending decimal, and this particular computer program is generating the decimal expansion of pi, a process that will never stop. Of course, the computer may break. Somebody may come along with an ax and cut the cord so it doesn't have any more power, but as long as it keeps powered, it's going to go on generating these digits forever. That's a simple concrete fact that can be detected in the world, the explanation of which cites an abstract mathematical fact about a particular number that is an irrational number.

''And this, I think, explains the usefulness (in addition to the beauty, and joy) of computing.''

In the article, Dennett also gives his (seemingly tongue-in-cheek, but nonetheless illuminating :) definition of ''algorithm'':
>An algorithm is an abstract process that can be defined over a finite set of fundamental procedures, an instruction set. It is a structured array of such procedures. That's a very generous notion of algorithm—more generous than many mathematicians would like, because I would include by that definition algorithms that may be in some regards defective. Consider your laptop. There's an instruction set for that laptop, consisting of all the different basic things that your laptop's CPU can do; each basic operation has a digital name or code, and every time that bit-sequence occurs, the CPU tries to execute that operation. You can take any bit sequence at all, and feed it to your laptop, as if it were a program. Almost certainly, any sequence that isn't designed to be a program to run on that laptop won't do anything at all — it'll just crash. Still, there's utility in thinking that any sequence of instructions, however buggy, however stupid, however pointless, should be considered an algorithm, because one person's buggy, dumb sequence is another person's useful device for some weird purpose, and we don't want to prejudge that question. (Maybe that "nonsense" was included in order to get the laptop to crash at just the point it crashed!) One can define a more proper algorithm as one which runs without crashing. The only trouble is that if you define algorithm that way, then probably you don't have any on your laptop, because there's almost certainly a way to make almost every program on your laptop crash. You just haven't found it yet. Bug-free software is an ideal that's almost never achieved.


From here, Dennett goes on to try to define (or not :) the boundaries around what should be considered computational (and to address the (vacuous, but often-heard) claim that "everything is computation"):
>Looking at the world as if everything is a computational process is becoming fashionable. Here one encounters not an issue of fact, but an issue of strategy. The question isn't, "What's the truth?" The question is, "What's the most fruitful strategy?" You don't want to abandon standards and count everything as computational, because then the idea loses its sense. It doesn't have any grip any more. How do you deal with that? One way is to try to define, in a rigid centralist way, some threshold that has to be passed, and say we're not going to call it computational unless it has properties A, B, C, D, and E. That's fine, you can do that in any number of different ways, and that will save you the embarrassment of having to say that everything is computational. The trouble is that anything you choose as a set of defining conditions is going to be too rigid. There are going to be things that meet those conditions that are not interestingly computational by anybody's standards, and there are things that are going to fail to meet the standards, which nevertheless you see are significantly like the things that you want to consider computational. So how do you deal with that? By ignoring it, by ignoring the issue of definition, that's my suggestion. Same as with life! You don't want to argue about whether viruses are alive or not; in some ways they're alive, in some ways they're not. Some processes are obviously computational. Others are obviously not computational. Where does the computational perspective illuminate? Well, that depends on who's looking at the illumination.

I find his approach and treatment of the question above wise ("True", "Real"), Buddhist-like ("accepting", "understanding", "accommodating"), and scientific ("practical", "rational", "goal-oriented"), all at the same time; most things in life are really open-ended, and even when considered objective, stem from our human (how could it be otherwise?!) perspective. That's the inevitable strength and weakness of the "human condition". 

Dennett succinctly describes the layers or lenses through which he is looking at reality:
>I describe three stances for looking at reality: the physical stance, the design stance, and the intentional stance. 
>__The physical stance__ is where the physicists are, it's matter and motion. 
>__The design stance__ is where you start looking at the software, at the patterns that are maintained, because these are designed things that are fending off their own dissolution. That is to say, they are bulwarks against the second law of thermodynamics. This applies to all living things, and also to all artifacts. 
>Above that is __the intentional stance__, which is the way we treat that specific set of organisms and artifacts that are themselves rational information processing agents. In some regards you can treat Mother Nature -- that is, the whole process of evolution by natural selection -- from the intentional stance, as an agent, but we understand that that's a façon de parler, a useful shortcut for getting at features of the design processes that are unfolding over eons of time. Once we get to the intentional stance, we have rational agents, we have minds, creators, authors, inventors, discoverers -- and everyday folks -- interacting on the basis of their take on the world.
The (not-so-subtle, I think) point Dennett makes here is that he doesn't claim that "rational agents", "minds", "creators", "authors", "inventors", "discoverers", and so on are actual things in the way things at the physical layer are. They are things we //define// (for utilitarian purposes) at the intentional level, using this ascent up the ladder of abstraction as a tool, circumventing the (unanswerable?) question of "do these things 'really' exist?"

And he then asks if there is a stance/level above the three he identified, concluding that there is: the moral stance:
>A person is a moral agent, not just a cognitive agent, not just a rational agent, but a moral agent. And this is the highest level that I can make sense of. And why it exists at all, how it exists, the conditions for its maintenance, are very interesting problems.
And this stance results in different ways of looking at things, behaving, and drawing conclusions (compared to the lower levels/stances):
>But when we look at game theory as applied not just to rational agents, but to people with a moral outlook, we see some important differences. People have free will. Trees don't. [Considerations of zero-sum-games, win-win, etc., are] not an issue for trees in the way [they are] for people.


On "treading carefully" and responsibly when searching for truths:
>I've come to respect the cautious conservatism that many people express — and some even live by — which says that the environmental impact of these new ideas is not yet clear and that you should be very careful about how you introduce them. Don't fix what isn't broke. Don't let your enthusiasm for new ideas blind you to the possibility that maybe they will undo something of long standing that is really valuable. That's an idea that is seldom articulated carefully, but that, in fact, drives many people. And it's an entirely honorable motivation to be concerned that some of our traditional ideas are deeply threatened by these innovations of outlook, and to be cautious about just trading in the old for the new. Indeed I think that's wise. Environmental impact statements for scientific and philosophical advances should be taken seriously. There might be a case of letting the cat out of the bag in a way that would really, in the long run, be unfortunate. Anybody who appreciates the power of ideas realizes that even a true, or well founded, idea can do harm if it is presented in an unfortunate context. What I mainly object to is the way some people take it unto themselves to decide just which ideas are dangerous, and then decide that they're justified in going out and beating those ideas up with whatever it takes: misleading descriptions, misrepresentations, character assassinations and so forth.


----
^^1^^ - see also [[What could a neuron want?|pg. 14 - DANIEL DENNETT: WHAT COULD A NEURON "WANT"?]]
In a very well written [[review of Lisa Randall's book Dark Matter and the Dinosaurs|http://www.nytimes.com/2015/11/29/books/review/dark-matter-and-the-dinosaurs-by-lisa-randall.html]], Maria Popova of [[Brain Pickings|https://www.brainpickings.org]] uses evocative language to highlight a few things which caught my eye (mind, and memory):
* In the book Randall has "an original theory that builds on a century of groundbreaking discoveries to tell the story of how the universe as we know it came to exist, how dark matter illuminates its beguiling unknowns and how the physics of elementary particles, the physics of space, and the biology of life intertwine in ways both bewildering and profound."
* Popova observes about Randall's theory:
> A good theory is an act of the informed imagination -- it reaches toward the unknown while grounded in the firmest foundations of the known.
* Popova also comments on Randall's description of the process of scientific discovery:
> Almost more interesting than the theory itself is Randall’s tour of the process of scientific endeavor, in which scientists traverse the abyss between the known and the unknown, suspended by intuition, adventurousness, a large dose of stubbornness and a measure of luck.
* Dark Matter plays a critical role in Randall's theory:
> Dark matter is the invisible cosmic stuff that, like ordinary matter -- which makes up the stars and the stardust, you and me and everything we know -- interacts with gravity but, unlike ordinary matter, doesn’t interact with light. Although scientists know that dark matter exists and accounts for a staggering 85 percent of the universe -- billions of dark-matter particles are passing through you this very second -- they don’t yet know what it’s made of. For Randall the possibilities within that mystery are among the most thrilling frontiers of human knowledge.
* Randall calls the force driving that fraction of the matter of the universe “dark light” -- an appropriately paradoxical term confuting the haughty human assumption that the world we see is all there is.
* and speaking of "paradoxical" terms and theories:
> the physicist Brian Josephson [wrote] about Einstein’s famous conversation with the Indian philosopher Rabindranath Tagore: “We Think That We Think Clearly, but That’s Only Because We Don’t Think Clearly.”
* In an aside on the seemingly diminishing demands our culture puts on our minds, making them more feeble, Popova says:
> While you need not be a physicist to metabolize the narrative [of a new scientific theory], you are certainly called upon to do your own chewing -- a rare opportunity in a culture where we are taken for so intellectually inept that our own conclusions are fed to us in listicles of bite-size buzz.
** I'm actually not sure that lowering the standards is a general cultural trend; I think that it's similar to our dietary choices: some people will always gravitate towards "junk food" while others will carefully watch what they put in their mouths (and possibly what comes out of it, as well ... :)
* “Extinctions,” Randall writes, “destroy life, but they also reset the conditions for life’s evolution.” The universe is strewn with dualities, which Randall insightfully exposes.
** The existence of parallel truths is what gives our world its tremendous richness, and the grand scheme of things is far grander than our minds habitually imagine.
** which reminds me of a saying by the physicist Niels Bohr^^1^^: 
> The opposite of a fact is falsehood, but the opposite of one profound truth may very well be another profound truth.
* Popova concludes by commenting on science and truth:
> Science, after all, isn’t merely about advancing information — it’s about advancing understanding. Its task is to disentangle the opinions and the claims from the facts in the service of truth. But beyond the “what” of truth, successful science writing tells a complete story of the “how” — the methodical marvel building up to the “why” — and Randall does just that.

----
^^1^^ - I think that Einstein would have agreed with Bohr on this^^2^^. From one perspective, Einstein's theory of relativity could be considered a contradiction of Newton's "classic" theory of mechanics. But this may be viewed as a case where two seemingly contradictory theories are both true, under different conditions (Newton's is good for slow speeds of bodies in motion, while Einstein's holds for high-speed motion).
What Einstein and Bohr did disagree on was another theory: Quantum Mechanics. Einstein very much disagreed with the statistical nature of the theory and commented: God does not play dice (when it comes to the laws governing nature).
Presumably, Bohr responded with a great comeback: stop telling God what to do with his dice.
^^2^^ - regarding understanding among "Physics Greats": there is this anecdote about [[Sir Arthur Eddington|https://en.wikipedia.org/wiki/Arthur_Eddington]], who when asked if only three people in the world understood general relativity replied, “Who is the third?”
In one of the Computer Science courses I developed and currently teach (using Python as the programming language), there is a unit on data structures (which are an essential part of the craft and science^^1^^). 

As [[Fred Brooks|https://www.wired.com/2010/07/ff_fred_brooks/]] (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]", and the author of the excellent book [["The Mythical Man-Month"|http://www.cs.cmu.edu/~15712/papers/mythicalmanmonth00fred.pdf]]) said:
* "Show me your code and conceal your data structures, and I shall continue to be mystified. Show me your data structures, and I won't usually need your code; it'll be obvious." and
* "Smart data structures and dumb code works a lot better than the other way around."


We start with a very basic (but extremely versatile -- see the [[LISP programming language|https://en.wikipedia.org/wiki/Lisp_(programming_language)]] :) data structure: the [[list|https://en.wikipedia.org/wiki/List_(abstract_data_type)]]. 

One of the assignments on lists is to code the logic of a simple game (the hand game [["Morra"|https://en.wikipedia.org/wiki/Morra_(game)]]) and program the determination of the winner using lists instead of [[conditionals|https://en.wikipedia.org/wiki/Conditional_(computer_programming)]] (i.e., no use of if/then/else).

This can be "elegantly" done by capturing the "win/lose payoff matrix (table)" as a list of lists (here is the Python syntax):
{{{
winner = [
    ["tie",      "player 1", "player 2", "tie"],
    ["player 2", "tie",      "tie",      "player 1"],
    ["player 1", "tie",      "tie",      "player 2"],
    ["tie",      "player 2", "player 1", "tie"],
]
}}}

and depending on player 1's and player 2's "moves" ("move_1" and "move_2", respectively; each an integer from 1 to 4), determine the winner by indexing into the winner payoff matrix/list:
{{{
  return winner[move_1 - 1][move_2 - 1]
}}}

The solution above is more "elegant" (using "smart data structures" and "dumb code", in Fred Brooks's terminology above), compared to a "brute-force" solution using conditionals ("smart code" and "dumb data structures", a-la Brooks):
{{{
if move_1 == move_2:
    return "tie"
if move_1 == 1 and move_2 == 2:
    return "player 1"
if move_1 == 1 and move_2 == 3:
    return "player 2"
if move_1 == 1 and move_2 == 4:
    return "tie"

#... and so on, for ALL (!) possible conditions (7 or more, depending on how "efficiently" students code the "tie" condition :)
}}}

In this assignment, and for list-practicing purposes, I also ask the students to run the game multiple times and count the number of times each player wins -- again, __not using conditionals__, but rather, using lists. This (somewhat 'forced' requirement, 'stretching' lists usage a bit :) should have the students produce data structures like:
{{{
win_counters = [0, 0, 0]
winner_index = ["tie", "player 1", "player 2"]
}}}

and simple ('dumb' a-la Brooks) code like:
{{{
  win_counters[winner_index.index(winner)] += 1
}}}
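
Putting the pieces together, here is a minimal, self-contained sketch (my own illustration; the random-play loop is not part of the assignment's required interface) that plays 1,000 random rounds and tallies the outcomes without a single conditional:
{{{
import random

# The payoff matrix from above: winner[move_1 - 1][move_2 - 1], moves are 1-4
winner = [
    ["tie",      "player 1", "player 2", "tie"],
    ["player 2", "tie",      "tie",      "player 1"],
    ["player 1", "tie",      "tie",      "player 2"],
]
winner.append(["tie", "player 2", "player 1", "tie"])  # fourth row

win_counters = [0, 0, 0]
winner_index = ["tie", "player 1", "player 2"]

for _ in range(1000):
    move_1 = random.randint(1, 4)                  # player 1's move
    move_2 = random.randint(1, 4)                  # player 2's move
    result = winner[move_1 - 1][move_2 - 1]        # look up the winner -- no conditionals
    win_counters[winner_index.index(result)] += 1  # tally it -- no conditionals

print(list(zip(winner_index, win_counters)))       # e.g., [('tie', 497), ...]
}}}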


Unfortunately, most students in this semester's class got the requirement for the first part (the winner payoff matrix) right, but got the win counters wrong, producing the 'standard' conditionals solution, of the form ('dumb' data structures):
{{{
count_1 = 0 	#only counts the wins of player 1
count_2 = 0 	#only counts the wins of player 2
count_tie = 0 	#only counts the ties
}}}
and the 'usual' (conditionals-based) code:
{{{
  if winner == "player 1": 
    count_1 = count_1 + 1
  if winner == "player 2": 
    count_2 = count_2 + 1
  if winner == "tie": 
    count_tie = count_tie + 1
}}}


Now, the second part of the Data Structures unit covers dictionaries (another important and useful data structure). In this part I ask the students to convert their lists-based solution to the Morra game into a dictionaries-based solution. From my experience, looking at the same problem and trying to solve it in different ways can generate deeper understanding and insights into effectiveness, efficiency, elegance, readability, expressiveness, and so on.

I won't go into the details of the dictionaries-based solution (it's not important to the point I want to make, //and// the students are still working on this assignment at the time of this writing ... :), but since most students had not completely coded their solution using lists, I could not give them a full 100% grade :(

One of my students, let's call her "Reena", came to me after she saw her 'non-perfect' grade (which she, justifiably, is not used to getting :), and told me that she didn't understand why I deducted points, since she actually __did__ use a dictionaries-based solution, and therefore met the requirement of not using conditionals.
She proceeded to show me her code on her computer, and indeed, it used dictionaries and not conditionals.

I was really puzzled by it. I thought I had thoroughly reviewed her assignment (on my computer, after she turned in the link to her code), saw her conditionals-based solution, and therefore gave her the lower grade.
I don't claim to have a photographic memory (nor [[a bionic ear|I have a Bionic Ear]]), but I definitely experience "code smells", the little, unique, idiosyncratic ways people write code. And I remember __not__ being 'delighted' by Reena's solution (or its "code smell"; I know, it sounds a bit weird, or new-agey, but I believe that seeing many (many, many, many) programs makes one sensitive and attuned to programming idioms, templates, and 'standard', awkward, expert, newbie, elegant, and crude/hacked ways of coding and solving problems).

So, I looked back at the link she had turned in to her program, and saw that I was right: the lists-based solution she had turned in used conditionals and __not__ dictionaries.

I became suspicious, very surprised, and disappointed. Did Reena change her code after the fact because she could not live with a lower grade? I thought I had read her character well. She was a very solid, direct, straight-talking student, and this kind of behavior, had she really done it, did not fit my sense of her (another kind of "smell", this time of a person's character?)

I had to find out what happened. I emailed her and asked whether the link she had turned in for the lists-based assignment was hers. Her response was a big relief (and renewed my confidence in her honesty and character). She explained that she had already finished her dictionaries-based assignment, and had turned it in as well, so when she got the grade and the feedback (on the ''lists-based solution'') saying she (and others) had to avoid conditionals in their solution, she immediately jumped and came to talk to me, since her ''dictionaries-based solution'' was indeed not using conditionals. In other words, she did not even refer to (remember?) her earlier assignment with the lists and conditionals.

Mystery solved, confidence restored, character redeemed. Whew! (deep breath, good smell :)


----
^^1^^ As Niklaus Wirth (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]") said and wrote about: [[Algorithms + Data Structures = Programs|https://en.wikipedia.org/wiki/Algorithms_%2B_Data_Structures_%3D_Programs]]
I recently attended an evening of poetry reading by [[David Whyte|http://www.davidwhyte.com/]] at Stanford University, and was inspired by his insistence that it is critical for us as human beings to have ''real conversations'' with ourselves and with each other, and ask "real questions", that get at the heart of things, and for which we don't have ready-made and well-rehearsed answers.

So who could express it better than Whyte himself? (Though [[a different, but related take on questions|John O’Donohue - questions]] is given by [[John O'Donohue|https://www.johnodonohue.com/]], whom [[Whyte|http://www.davidwhyte.com/]] knew.)
>The marvelous thing about a good question is that it shapes our identity as much by the asking as it does by the answering. Nine years ago, I wrote a poem called "Sometimes" in which I talked about the "questions that can make or unmake a life ... questions that have no right to go away."

>I still work with this idea. Questions that have no right to go away are those that have to do with the person we are about to become; they are conversations that will happen with or without our conscious participation. They almost always have something to do with how we might be more generous, more courageous, more present, more dedicated, and they also have something to do with timing: when we might step through the doorway into something bigger, better, both beyond ourselves and yet more of ourselves at the same time.

>If we are sincere in asking, the eventual answer will give us both a sense of coming home to something we already know as well as a sense of surprise not unlike returning from a long journey to find an old friend sitting unexpectedly on the front step, as if she'd known, without ever being told, not only the exact time and date of your arrival but also your need to be welcomed back.

!!!!And his 10 Questions That Have No Right to Go Away (referred to in his [[poem "Sometimes"|Sometimes - by David Whyte]]):
* Do I know how to have real conversations?
** A real conversation always contains an invitation. You are inviting another person to reveal herself or himself to you, to tell you who they are or what they want. To do this requires vulnerability. Now we tend to think that vulnerability is associated with weakness, but there's a kind of robust vulnerability that can create a certain form of strength and presence too.

* What can I be wholehearted about?
** So many of us aren't sure what we're meant to do. We wonder if we're simply doing what others are doing because we feel we don't have enough ideas or even enough strength of our own. What do I care most about in my vocation, in my family life, in my heart and mind? This is a conversation that we all must have with ourselves at every stage of our lives, a conversation that we so often don't want to have. We will get to it, we say, when the kids are grown, when there is enough money in the bank, when we are retired, perhaps when we are dead; it will be easier then. But we need to ask it now: What can I be wholehearted about now?

* Am I harvesting from this year's season of life? 
**"Youth is wasted on the young" is the old saying. But it might also be said that midlife is wasted on those in their 50s and eldership is very often wasted on the old. Most people, I believe, are living four or five years behind the curve of their own transformation. I see it all the time, in my own life and others. The temptation is to stay in a place where we were previously comfortable, making it difficult to move to the frontier that we're actually on now. People usually only come to this frontier when they have had a terrible loss in their life or they've been fired or some other trauma breaks open their story. Then they can't tell that story any more. But having spent so much time away from what is real, they hit present reality with such impact that they break apart on contact with the true circumstance. So the trick is to catch up with the conversation and stay with it -- where am I now? -- and not let ourselves become abstracted from what is actually occurring around us.

* Where is the temple of my adult aloneness?
** Gaston Bachelard, a French philosopher, said that one of the beautiful things about a home is that it is a place where you can dream about your future, and that a good home protects your dreams; it is a place where you feel sheltered enough to risk yourself in the world.

* Can I be quiet even inside?
** All of our great traditions, religious, contemplative and artistic, say that you must learn how to be alone and have a relationship with silence. It is difficult, but it can start with just the tiniest quiet moment. You may not want to confront it at first. But a long way down the road, when you inhabit a space fully, you no longer feel awkward and lonely. Silence turns, in effect, into its opposite, so it becomes not only a place to be alone but also a place that's an invitation to others to join you, to want to know who's there, in the quiet.

* Am I too inflexible in my relationship to time?
** If you've got a wonderful memory of your childhood, it should live within you. If you've got a challenging relationship with a parent, that should be there as part of your identity now, both in your strengths and weaknesses. The way we anticipate the future forms our identity now. Time taken too literally can be a tyranny. We are never one thing; we are a conversation -- everything we have been, everything we are now and every possibility we could be in the future.

* How can I know what I am actually saying?
** Poetry is often the art of overhearing yourself say things you didn't know you knew. It is a learned skill to force yourself to articulate your life, your present world or your possibilities for the future. We need that same skill as an art of survival. We need to overhear the tiny but very consequential things we say that reveal ourselves to ourselves.

* How can I drink from the deep well of things as they are?
** To me, a well, a place where the water springs eternal all year round, is a very real, blessed place to stop and think. Almost always, when I'm struggling over a particular situation, I realize that I am only looking at the surface of the problem and refusing to go for the deeper dynamic that caused all the tension in the first place. All intimate relationships -- close friendships and good marriages -- are based on continued and mutual forgiveness. You will always trespass upon your friend's sensibilities at one time or another, or your spouse's. The only question is, Will you forgive the other person? And more importantly, Will you forgive yourself? We have to deepen our understanding, make ourselves more equal to circumstances, more easy with what we have been given or not given. We must drink from the deep well of things as they are.

* Can I live a courageous life?
** The word "courage" comes from the old French word coeur meaning "heart." So "courage" is the measure of your heartfelt participation in the world. Human beings are constantly trying to take courageous paths in their lives: in their marriages, in their relationships, in their work and with themselves. But the human way is to hope that there's a way to take that courageous step without having one's heart broken. And it's my contention that there is no sincere path a human being can take without breaking his or her heart. There is no marriage, no matter how happy, that won't at times find you wanting and break your heart. In raising a family, there is no way to be a good mother or father without a child breaking that parental heart. In a good job, a good vocation, if we are sincere about our contribution, our work will always find us wanting at times. In an individual life, if we are sincere about examining our own integrity, we should, if we are really serious, at times, be existentially disappointed with ourselves. So it can be a lovely, merciful thing to think, "Actually, there is no path I can take without having my heart broken, so why not get on with it and stop wanting these extra-special circumstances which stop me from doing something courageous?"

* Can I be the blessed saint that my future happiness will always remember?
** Here's the explanation for what sounds like a strange question. I have a poem called "Coleman's Bed" about a place in the West of Ireland where the Irish saint Coleman lived. The last line of that poem calls on the reader to remember "the quiet, robust and blessed saint that your future happiness will always remember." We go to places of pilgrimage where saints have lived, or even to Graceland, where Elvis lived, because these people gave something to the rest of us -- music or good works -- that has carried on down the years and that was a generous gift to the future. But that blessed saint could also be yourself -- the person who, in this moment, makes a decision that can make a bold path into the years to come and whom your future happiness will always remember. What could you do now for yourself or others that your future self would look back on and congratulate you for -- something it could view with real thankfulness because the decision you made opened up the life for which it is now eternally grateful?
In his book When Things Start to Think, Neil Gershenfeld talks about some of the best practices they have at the Media Lab at MIT to promote and sustain creativity, learning and practical use (by industry).

One of the principles they have is to maximize free flow and interaction between people, departments, disciplines, organizations, etc.
Part of this best practice is how they deal with Intellectual Property (IP).

What may seem like a dubious deal but turns out to be a win-win is that every sponsor of the Lab relinquishes control of the invention/knowledge/IP gained by the Lab working on their problems, in return for being able to leverage/use any of the other sponsors' inventions/knowledge/IP.

>The third trick that makes this work is the treatment of intellectual property, the patents and copyrights that result from research. Academia and industry both usually seek to control them to wring out the maximum revenue. But instead of giving one sponsor sole rights to the result of one project, our sponsors trade exclusivity for royalty-free rights to everything we do. This lets us and them work together without worrying about who owns what. 
>It's less of a sacrifice for us than it might sound, because very few inventions ever make much money from licensing intellectual property, but fights over intellectual property regularly make many people miserable. And it's less of a sacrifice for the sponsors than it might sound, because they leverage their investment in any one area with all of the other work going on. When they first arrive they're very concerned about protecting all of their secrets; once they notice that many of their secrets have preceded them and are already familiar to us, and that we can solve many of their internal problems, they relax and find that they get much more in return by being open.
Dealing with failure is easy: Work hard to improve. Success is also easy to handle: You've solved the wrong problem. Work hard to improve.
In an article titled [[Design Process for a Non-majors Computing Course|http://andreaforte.net/GuzdialForteDesignProcess.pdf]], Mark Guzdial and Andrea Forte of Georgia Tech describe the design process/considerations for their intro course, based on the context of media (visual, audio) creation and manipulation.

Here are their main points:

* Course objectives:
** the goal is to prepare students to become software tool modifiers, not software tool developers
** the course should attract audiences/groups currently not engaged/interested in computing
** engage students in a relevant context
** provide opportunities for creativity
** make the learning experience social
* Meeting the objectives
** The argument has been made that teaching programming, especially to non-majors, improves general problem-solving skills. Empirical studies of this claim have shown that we can’t reasonably expect an increase in general problem-solving skills after just a single course (about all that we might expect non-majors to take), but transfer of specific problem solving skills can happen.
** Media computation is relevant for these students because, for students not majoring in science or engineering, the computer is used more for communication than calculation. These students will spend their professional lives creating and modifying media. Since all media are becoming digital, and digital media are manipulated with software, programming is a communications skill for these students.
** Media computation is able to address the stated learning objectives. Issues of data structuring and encoding arise naturally in media computation, e.g., sounds are typically arrays of samples, while pictures are matrices of pixel objects, each pixel containing red, green, and blue values (see the small illustrative sketch after this list). We were able to address the specifics of a CS1 course in the details of the course construction.
** creativity was encouraged and required by expecting students to pick media and effects of their choosing
** social engagement was present through student collaboration on, presentation of, and sharing of artifacts and projects.
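
To give a flavor of those "matrices of pixel objects", here is a tiny sketch (my own illustration, not from the Guzdial/Forte course materials) that represents a picture as a matrix of (red, green, blue) values and applies a typical media-computation manipulation -- converting it to grayscale:
{{{
# A picture as a matrix (list of rows) of (red, green, blue) pixel values
picture = [
    [(255, 0, 0), (0, 255, 0)],   # row 0: a red pixel, a green pixel
    [(0, 0, 255), (90, 90, 90)],  # row 1: a blue pixel, a gray pixel
]

# Convert the picture to grayscale, pixel by pixel
for row in picture:
    for i, (red, green, blue) in enumerate(row):
        gray = (red + green + blue) // 3   # average the three channels
        row[i] = (gray, gray, gray)        # write the gray pixel back

print(picture)   # every pixel now has equal red, green, and blue values
}}}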
An article on KQED's ~MindShift website titled [[How to Trigger Students’ Inquiry Through Projects|http://ww2.kqed.org/mindshift/2013/07/15/how-to-trigger-students-inquiry-through-projects/]], based on the book //Thinking Through ~Project-Based Learning: Guiding Deeper Inquiry// by Jane Krauss and Suzie Boss, gives some sound advice and practical steps for designing good projects (ones that trigger deep inquiry and learning).

It starts off with a good observation about tying learning objectives to activities and projects.
You can start in a "top-down" fashion, beginning with learning objectives/standards and then mapping them to projects/activities, or "bottom-up", starting from ideas for projects/activities and mapping them to objectives/standards:
>There are several ways to start designing projects. One is to select among learning objectives described in the curriculum and textbooks that guide your teaching and to plan learning experiences based on these. Another is to “back in” to the standards, starting with a compelling idea and then mapping it to objectives to ensure there is a fit with what students are expected to learn. The second method can be more generative, as any overarching and enduring concept is likely to support underlying objectives in the core subject matter and in associated disciplines, too. Either way you begin, the first step is to identify a project-worthy idea.

In my experience, when you have mastery in a domain/subject, and a lot of experience and knowledge in that area, it is easier to start with ideas for "interesting projects", since you have a strong intuitive sense of why these projects are interesting, which in many cases boils down to them exploring/covering/exposing "important and big ideas" in that domain. In other words, these projects are "interesting" not just because they are "relevant" to the students but also because they reveal fundamental and important principles and concepts. This makes it easy (or at least straightforward) to map them to learning objectives and standards.

!!!Steps in designing good projects
* Step 1 - Identify Project-worthy Concepts. If you start "top-down", you will consider standards, objectives, and concepts which are significant and rich as high-potential candidates for covering in a project. Identify four or five BIG, important, significant concepts for each subject.

* Step 2 - Explore Their Significance and Relevance. It is important to probe these standards/objectives/concepts, to "justify" their value and priority.
>Think: Why do these topics or concepts matter? What should students remember about this topic in 5 years? For a lifetime? Think beyond school and ask: In what ways are they important and enduring? What is their relevance in different people’s lives? In different parts of the world? Explore each concept, rejecting and adding ideas until you arrive at a short list of meaningful topics.

* Step 3 - Find Real-Life Contexts. This is the important step of linking the ideas to relevant and meaningful contexts, since this will be a strong motivator for engaging students.
>Look back to three or four concepts you explored and think about real-life contexts. Who engages in these topics? Who are the people for whom these topics are central to their work? See if you can list five to seven professions for each concept.

* Step 4 - Engage Critical Thinking. You should think of ways to trigger critical thinking activities as part of the project. Some examples of student activities are:
**Compare and contrast
**Predict
**Make a well-founded judgment or informed decision
**Understand causal relationships (cause and effect)
**Determine how parts relate to the whole (systems)
**Identify patterns or trends
**Examine perspectives and alternate points of view
**Extrapolate to create something new
**Evaluate reliability of sources

* Step 5 - Write a Project Sketch. This should include an overview and a description of scenarios and activities, making it clear what and how students will learn during this project.

* Step 6 - Plan the Setup. Here you should define 3 elements:
** the project title - a short, memorable one is best
** the starting/entry event (the "hook" for the project) - an event, news item, video clip, etc.
** the driving question for your project. This would be
>a research question students will feel compelled to investigate. Imagine a driving question that leads to more questions, which, in their answering, contribute to greater understanding. Good questions grab student interest (they are provocative, intriguing, or urgent), are open ended (you can’t Google your way to an answer), and connect to key learning goals.

* Step 7 - Workshop your project idea.
>Colleagues, students, parents, and subject matter experts will ask questions that will clarify your thinking and contribute ideas you might not have considered.
In the book [[Nature's Numbers|https://cismasemanuel.files.wordpress.com/2010/02/ian-stewart-numerele-naturii.pdf]] Ian Stewart succinctly states a common experience we have:

''Determinism and predictability are not synonymous.''

And gives examples of where we experience it in real life all the time.
Stewart pithily observes:
> It's not so much a universe in which -- as Albert Einstein memorably refused to believe^^1, 2^^ -- God plays dice: it seems more a universe in which dice play God. 

A "clean and simple" example from the world of math, can be seen in Cellular Automata, for example [[Rule 30 (in Stephen Wolfram's classification)|http://mathworld.wolfram.com/Rule30.html]]:

[img[CA rule 30|./resources/CA rule 30 1.png][./resources/CA rule 30.png]]

As Wolfram indicates:
> Rule 30 is of special interest because it is chaotic ([[Wolfram 2002, p. 871|http://www.wolframscience.com/nksonline/page-871c-text]]), with central column given by 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1, ... . In fact, this rule is used as the random number generator used for large integers in the Wolfram Language. Interpreting the central column as binary numbers and taking successive bits gives the sequence of numbers 1, 3, 6, 13, 27, 55, 110, 220, 441, 883, 1766, ... . The members of this sequence that are prime are 3, 13, 883, 237051898781, ... .

One implication of Rule 30 is that you cannot calculate its behavior (or a specific result/output) ahead of time using a closed-form function/formula (i.e., you cannot predict it). You actually have to go through all the steps of the algorithm (i.e., it is deterministic).
The equivalent from life is that (often? very often?) there are cases where you cannot plan/predict what will happen; you just have to live the experience (go through it; "execute life"; just run it :)
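
To make this concrete, here is a minimal Rule 30 simulator (my own sketch; the helper names are illustrative). The only way to obtain, say, the first 14 bits of the central column is to actually run all 14 steps:
{{{
def rule30_step(cells):
    """One Rule 30 update: new cell = left XOR (center OR right)."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n])
            for i in range(n)]

def central_column(steps, width=201):
    """Central-column bits of Rule 30, starting from a single 1 cell.
    (width is kept large enough that wraparound never reaches the center.)"""
    cells = [0] * width
    cells[width // 2] = 1
    column = []
    for _ in range(steps):
        column.append(cells[width // 2])
        cells = rule30_step(cells)
    return column

# Matches the sequence quoted above: 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 0, 0, 1
print(central_column(14))
}}}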



----
^^1^^ - Vasant Natarajan writes an excellent analysis/review about [[What Einstein meant when he said “God does not play dice ...”|https://arxiv.org/ftp/arxiv/papers/1301/1301.1656.pdf]], where he concludes:
>There were thus three features of Quantum Mechanics that Einstein disapproved of -- it was probabilistic, nonlocal, and linear. Despite this opposition, Einstein realized that it was a successful theory within its domain of applicability. He believed that a future unified field theory would have to reproduce the results of Quantum Mechanics, perhaps as a linear approximation to a deeper nonlinear theory. This was similar to how the relativistic gravitational field of General Relativity (with a finite propagation speed of the gravitational force) led to Newton’s law of gravitation (with its action-at-a-distance force) in the nonrelativistic limit. But Einstein was convinced that Quantum Mechanics was not the correct approach to deducing the fundamental laws of physics.

^^2^^ This is what [[Stephen Hawking|http://www.hawking.org.uk/about-stephen.html]] has to say about [[the predictability of the universe|http://www.hawking.org.uk/does-god-play-dice.html]], in light of black holes (ha!):
>To sum up, what I have been talking about, is whether the universe evolves in an arbitrary way, or whether it is deterministic. The classical view, put forward by Laplace, was that the future motion of particles was completely determined, if one knew their positions and speeds at one time. This view had to be modified, when Heisenberg put forward his Uncertainty Principle^^3^^, which said that one could not know both the position, and the speed, accurately. However, it was still possible to predict one combination of position and speed. But even this limited predictability disappeared, when the effects of black holes were taken into account. The loss of particles and information down black holes meant that the particles that came out were random. One could calculate probabilities, but one could not make any definite predictions. Thus, the future of the universe is not completely determined by the laws of science, and its present state, as Laplace thought. God still has a few tricks up his sleeve. 

^^3^^ A principle which even (the personification of) [[DEATH|http://www.chrisjoneswriting.com/death.html]] in Terry Pratchett's novels is [[trying to believe in|THE UNCERTAINTY PRINICIPLE - according to Sir Terry]].
In her book [["Incompleteness: The Proof and Paradox of Kurt Godel"|https://www.ams.org/notices/200604/rev-kennedy.pdf]], Rebecca Goldstein writes: 
>Einstein sometimes speaks of objective reality as the "out yonder":
>>Out yonder there was this huge world, which exists independently of us human beings and which stands before us like a great, eternal riddle, at least partially accessible to our inspection and thinking. The contemplation of this world beckoned like a liberation.
And Goldstein summarizes ''about Einstein'':
>This is an eloquent statement of Einstein's credo as a scientist, and it really could not be more at odds with the sentiments of almost all the other prominent physicists of his circle.
>Einstein understood the business of physics to be to discover theories that offer a glimpse of the objective nature that stands "out yonder" behind our experiences. Werner Heisenberg, together with such men as the Danish Niels Bohr and the German Max Born (who are together the leading advocates of the Copenhagen interpretation of quantum mechanics) reject this view in the name of an intellectual movement known as "positivism," according to which any attempt to reach out beyond our experience results in arrant nonsense.
And about ''Gödel'' (Godel, Goedel):
>Gödel, like Einstein, is committed to the possibility of reaching out, pace the positivists, beyond our experiences to describe the world "out yonder." Only since Gödel's field is mathematics, the "out yonder" in which he is interested is the domain of abstract reality. His commitment to the objective existence of mathematical reality is the view known as conceptual, or mathematical, realism. It is also known as mathematical Platonism, in honor of the ancient Greek philosopher whose own metaphysics was a vehement rejection of the Sophist Protagoras' "man is the measure of all things."

Edwin Jaynes [[writes about the philosophical worldview differences between Einstein and Bohr|Clearing up physical mysteries with probability]], illuminating other aspects as well.
An excellent CS book, focusing on computational thinking and not on language features.

From the introduction:
>In my view, an introductory computer science course should strive to accomplish three things. 
> * First, it should demonstrate to students how computing has become a powerful mode of inquiry, and a vehicle of discovery, in a wide variety of disciplines. This orientation is also inviting to students of the natural and social sciences, who increasingly benefit from an introduction to computational thinking, beyond the limited “black box” recipes often found in manuals. 
> * Second, the course should engage students in computational problem solving, and lead them to discover the power of abstraction, efficiency, and data organization in the design of their solutions. 
> * Third, the course should teach students how to implement their solutions as computer programs. In learning how to program, students more deeply learn the core principles, and experience the thrill of seeing their solutions come to life.

>Unlike most introductory computer science textbooks, which are organized around programming language constructs, I deliberately lead with interdisciplinary problems and techniques. This orientation is more interesting to a more diverse audience, and more accurately reflects the role of programming in problem solving and discovery. A computational discovery does not, of course, originate in a programming language feature in search of an application. Rather, it starts with a compelling problem which is modeled and solved algorithmically, by leveraging abstraction and prior experience with similar problems. Only then is the solution implemented as a program.

And [[a quote I often refer to|Computer Science is no more about computers than astronomy is about telescopes.]] myself, in a slightly different form:
>We need to do away with the myth that computer science is about computers. Computer science is no more about computers than astronomy is about telescopes, biology is about microscopes or chemistry is about beakers and test tubes. Science is not about tools, it is about how we use them and what we find out when we do.
> -- Michael R. Fellows and Ian Parberry, Computing Research News (1993)
In a delightful (in its "radicalism", or at least its "divergent" thinking :) article titled [["Computer Criticism vs. Technocentric Thinking"|http://worrydream.com/refs/Papert%20-%20Computer%20Criticism%20vs.%20Technocentric%20Thinking.pdf]], Seymour Papert defines "computer criticism" (part of his paper's title):
>I am proposing a genre of writing one could call "computer criticism" by analogy with such disciplines as literary criticism and social criticism. The name does not imply that such writing would condemn computers any more than literary criticism condemns literature or social criticism condemns society. The purpose of computer criticism is not to condemn but to understand, to explicate, to place in perspective. Of course, understanding does not exclude harsh (perhaps even captious) judgment. The result of understanding may well be to debunk. But critical judgment may also open our eyes to previously unnoticed virtue. And in the end, the critical and the creative processes need each other.  
He also writes (in 1970(!), and echoing Alan Kay's opinion that [[The Real Computer Revolution Has Not Happened Yet]]), in response to critics of Computing who say that
>[computing will never] have the stature of Shakespeare or the depth and complexity of social structure. I think history will gainsay this attitude. The computer is a medium of human expression and if it has not yet had its Shakespeares, its Michelangelos or its Einsteins, it will. Besides, the complexity and subtlety of the computer presence already make it a challenging topic for critical analysis. We have scarcely begun to grasp its human and social implications.
* On "technocentrism":
>Technocentrism refers to the tendency to give a similar centrality to a technical object -- for example computers or [programming/coding]. This tendency shows up in questions like "what is //the// effect of //the// computer on cognitive development?" or "does [programming] work?" [i.e., does it "deliver"?]
* And the implications of such questions (and what some researchers, therefore, do):
>such turns of phrase [see above] often betray a tendency to think of "computers" and of "programming" as agents that act directly on thinking and learning; they betray a tendency to reduce what are really the most important components of educational situations -- people and cultures -- to a secondary, facilitating role. The context for human development is always a culture, never an isolated technology. In the presence of computers, cultures might change and with them people's ways of learning and thinking. But if you want to understand (or influence) the change, you have to center your attention on the culture -- not on the computer. 
One can see this technocentric approach and the resulting controlled experiments in papers like [["Learning to code or coding to learn? A systematic review"|https://www.researchgate.net/publication/328246560_Learning_to_code_or_coding_to_learn_A_systematic_review]] by Shahira Popat and Louise Starkey (a meta-study of 10 "technocentric" studies).
* about computing and programming as a "material" or "ingredient" for changing culture, or significant things you are doing in the context of a culture:
>[we should be looking at] using the computer not as a "thing in itself " that may or may not deliver benefits, but as a material that can be appropriated to do better whatever you are doing (and which will not do anything if you are not!)
* About the appropriateness of a strictly controlled experimentation environment, when trying to measure the effect of programming on students:
>It is a self-defeating parody of scientism to suppose that one could keep everything else, including the culture, constant while adding a serious computer presence to a learning environment. If the role of the computer is so slight that the rest can be kept constant, it will also be too slight for much to come of it. The "treatment" methodology leads to a danger that all experiments with computers and learning will be seen as failures: either they are trivial because very little happened, or they are "unscientific" because something real did happen and too many factors changed at once. 
* about looking at the impact of learning to program from the "demand side" (the what and how effects are tested):
> [some experimenters] are checking for an improvement in a very narrow and specific form of planning activity, so they use a focused ad hoc test.
> [while other experimenters] approach the problem with a relatively open mind about what the cognitive effects [of computing] might be: they apply a broad spectrum of well-known, standard tests of cognitive function (amongst many others: divergence, reflectivity-impulsivity, operational competence, right-left orientation, matching familiar figures, and following directions.) 
* and from the "supply side" (what kind of "programming education" is given to the students):
>the children are to be given "programming"-- and the purpose of the experiments is to see what happens. But there is no such thing as "programming-in-general." These children are not given "programming." They are given [Scratch or Python, or Java, etc.]. But there is no such thing as "Scratch/Python/Java-in-general" either. The children encounter [Computing, programming, projects, teachers, subjects] in [very] particular way[s, all of which determine what they are really getting!].
* Papert claims (and I agree :) that these technocentric experiments don't provide a good picture or sound conclusions, because they suffer from
>inadequate recognition of the fact that what they are looking at, and therefore making discoveries about, is not programming but cultures that happen to have in common the presence of a computer and the [particular programming] language. 
Donald Ervin Knuth (born January 10, 1938) is a computer scientist and Professor Emeritus of the Art of Computer Programming at Stanford University.

In the article [[Bend Sinister|https://monoskop.org/images/1/14/Goriunova_Olga_ed_Fun_and_Software_Exploring_Pleasure_Paradox_and_Pain_in_Computing.pdf]], artist and programmer [[Simon Yuill|http://www.lipparosa.org/]] mentions:
>Knuth describes the act of writing programs as like teaching somebody else to do a task. The programmer teaches the machine. The machine, however, also teaches the programmer through the way it enforces greater precision on the ‘tutor’ than a purely human teaching scenario might require.

[[Donald Knuth|http://en.wikipedia.org/wiki/Donald_Knuth]]
[[Douglas Noel Adams|https://en.wikipedia.org/wiki/Douglas_Adams]] (born March 11, 1952, Cambridge; died May 11, 2001, Santa Barbara) was an English writer, humourist and dramatist. He is best known as the author of //The Hitchhiker's Guide to the Galaxy//, which started life in 1978 as a BBC radio comedy.
[[Douglas Richard Hofstadter|https://en.wikipedia.org/wiki/Douglas_Hofstadter]] (born February 15, 1945) is an American academic whose research focuses on consciousness, analogy-making, artistic creation, literary translation, and discovery in mathematics and physics. He is best known for his book Gödel, Escher, Bach: an Eternal Golden Braid, first published in 1979, for which he was awarded the 1980 Pulitzer Prize for general non-fiction.

Hofstadter was born in New York, New York, the son of Nobel Prize-winning physicist Robert Hofstadter. He grew up on the campus of Stanford University, where his father was a professor, and he attended the International School of Geneva in 1958-1959. He graduated with Distinction in Mathematics from Stanford University ([[where I also graduated from|http://ldtprojects.stanford.edu/~hmark/]]) in 1965.

Douglas Hofstadter's books on mind, consciousness, and computers/computing are very inspirational.

He loves ambigrams, but he's [[not the only one|Ambigrams by Scott Kim]] :-)
A few excerpts from the [[excellent article about Hofstadter in the Atlantic Magazine|http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/]]

* [The] operating premise [of Hofstadter and his students at Indiana University] is simple: the mind is a very unusual piece of software, and the best way to understand how a piece of software works is to write it yourself. Computers are flexible enough to model the strange evolved convolutions of our thought, and yet responsive only to precise instructions. So if the endeavor succeeds, it will be a double victory: we will finally come to know the exact mechanics of our selves—and we’ll have made intelligent machines.
* In 1931, the Austrian-born logician Kurt Gödel had famously shown how a mathematical system could make statements not just about numbers but about the system itself. Consciousness, Hofstadter wanted to say, emerged via just the same kind of “level-crossing feedback loop.” (see [[The world's shortest explanation of Gödel's theorem]])
* In GEB [his 1979 book //Gödel, Escher, Bach//], Hofstadter was calling for an approach to AI concerned less with solving human problems intelligently than with understanding human intelligence - at precisely the moment that such an approach, having borne so little fruit, was being abandoned.
* Hofstadter wanted to ask: Why conquer a task if there’s no insight to be had from the victory? “Okay,” he says, “Deep Blue plays very good chess - so what? Does that tell you something about how we play chess? No. Does it tell you about how Kasparov envisions, understands a chessboard?” A brand of AI that didn’t try to answer such questions - however impressive it might have been - was, in Hofstadter’s mind, a diversion. 
* “To me, as a fledgling AI person,” he says, “it was self-evident that I did not want to get involved in that trickery. It was obvious: I don’t want to be involved in passing off some fancy program’s behavior for intelligence when I know that it has nothing to do with intelligence. And I don’t know why more people aren’t that way.”
* “The quest for ‘artificial flight’ succeeded when the Wright brothers and others stopped imitating birds and started … learning about aerodynamics,” Stuart Russell and Peter Norvig write in their leading textbook, Artificial Intelligence: A Modern Approach. AI started working when it ditched humans as a model, because it ditched them. That’s the thrust of the analogy: Airplanes don’t flap their wings; why should computers think?
* [The above point is] a compelling point. But it loses some bite when you consider what we want: a Google that knows, in the way a human would know, what you really mean when you search for something. Russell, a computer-science professor at Berkeley, said to me, “What’s the combined market cap of all of the search companies on the Web? It’s probably four hundred, five hundred billion dollars. Engines that could actually extract all that information and understand it would be worth 10 times as much.”
* Perhaps, as Russell and Norvig politely acknowledge in the last chapter of their textbook, in taking its practical turn, AI has become too much like the man who tries to get to the moon by climbing a tree: “One can report steady progress, all the way to the top of the tree.”
* “At every moment,” Hofstadter writes in Surfaces and Essences, his latest book (written with Emmanuel Sander), “we are simultaneously faced with an indefinite number of overlapping and intermingling situations.” It is our job, as organisms that want to live, to make sense of that chaos. We do it by having the right concepts come to mind. This happens automatically, all the time. Analogy is Hofstadter’s go-to word. The thesis of his new book, which features a mélange of A’s on its cover, is that analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.
* “Look at your conversations,” he says. “You’ll see over and over again, to your surprise, that this is the process of analogy-making.” Someone says something, which reminds you of something else; you say something, which reminds the other person of something else—that’s a conversation. It couldn’t be more straightforward. But at each step, Hofstadter argues, there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.
* when everybody else in AI started building products, he and his team, as his friend, the philosopher Daniel Dennett, wrote, “patiently, systematically, brilliantly,” way out of the light of day, chipped away at the real problem. “Very few people are interested in how human intelligence works,” Hofstadter says. “That’s what we’re interested in—what is thinking?—and we don’t lose track of that question.”
* In this he [Hofstadter] is the modern-day William James, whose blend of articulate introspection (he introduced the idea of the stream of consciousness) and crisp explanations made his 1890 text, Principles of Psychology, a classic. “The mass of our thinking vanishes for ever, beyond hope of recovery,” James wrote, “and psychology only gathers up a few of the crumbs that fall from the feast.”
* When you read Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought, which describes in detail this architecture and the logic and mechanics of the programs that use it, you wonder whether maybe Hofstadter got famous for the wrong book.
* On the one hand, the software we know how to write is very orderly; most computer programs are organized like a well-run army, with layers of commanders, each layer passing instructions down to the next, and routines that call subroutines that call subroutines. On the other hand, the software we want to write would be adaptable - and for that, a hierarchy of rules seems like just the wrong idea. Hofstadter once summarized the situation by writing, “The entire effort of artificial intelligence is essentially a fight against computers’ rigidity.”
* It is no coincidence that AI saw a resurgence in the ’90s, and no coincidence either that Google, the world’s biggest Web company, is “the world’s biggest AI system,” in the words of Peter Norvig, a director of research there, who wrote AI: A Modern Approach with Stuart Russell. Modern AI, Norvig has said, is about “data, data, data,” and Google has more data than anyone else.
* Josh Estelle, a software engineer on Google Translate, which is based on the same principles as Candide and is now the world’s leading machine-translation system, explains, “you can take one of those simple machine-learning algorithms that you learned about in the first few weeks of an AI class, an algorithm that academia has given up on, that’s not seen as useful - but when you go from 10,000 training examples to 10 billion training examples, it all starts to work. Data trumps everything.”
* You don’t have to push Google Translate very far to see the compromises its developers have made for coverage, and speed, and ease of engineering. Although Google Translate captures, in its way, the products of human intelligence, it isn’t intelligent itself. It’s like an enormous Rosetta Stone, the calcified hieroglyphics of minds once at work.
* [Dave Ferrucci, who led the Watson team at IBM] is not blind to the difference [between Artificial Intelligence and Human Intelligence]. He likes to tell crowds that whereas Watson played using a room’s worth of processors and 20 tons of air-conditioning equipment, its opponents relied on a machine that fits in a shoebox and can run for hours on a tuna sandwich. A machine, no less, that would allow them to get up when the match was over, have a conversation, enjoy a bagel, argue, dance, think - while Watson would be left humming, hot and dumb and un-alive, answering questions about presidents and potent potables.
* The question that Hofstadter wants to ask Ferrucci, and everybody else in mainstream AI, is this: Then why don't you come study it ["Real Artificial Intelligence," working on understanding thinking]?
** “I have mixed feelings about this,” Ferrucci told me when I put the question to him last year. “There’s a limited number of things you can do as an individual, and I think when you dedicate your life to something, you’ve got to ask yourself the question: To what end? And I think at some point I asked myself that question, and what it came out to was, I’m fascinated by how the human mind works, it would be fantastic to understand cognition, I love to read books on it, I love to get a grip on it” - he called Hofstadter’s work inspiring - “but where am I going to go with it? Really what I want to do is build computer systems that do something. And I don’t think the short path to that is theories of cognition.”
** Peter Norvig, one of Google’s directors of research, echoes Ferrucci almost exactly. “I thought he was tackling a really hard problem,” he told me about Hofstadter’s work. “And I guess I wanted to do an easier problem.”
* Stuart Russell, Norvig’s co-author of AI: A Modern Approach, goes further. “A lot of the stuff going on is not very ambitious,” he told me. “In machine learning, one of the big steps that happened in the mid-’80s was to say, ‘Look, here’s some real data—can I get my program to predict accurately on parts of the data that I haven’t yet provided to it?’ What you see now in machine learning is that people see that as the only task.”
* It’s insidious, the way your own success can stifle you. As our machines get faster and ingest more data, we allow ourselves to be dumber. Instead of wrestling with our hardest problems in earnest, we can just plug in billions of examples of them.
* It seems unlikely that feeding Google Translate 1 trillion documents, instead of 10 billion, will suddenly enable it to work at the level of a human translator. The same goes for search, or image recognition, or question-answering, or planning or reading or writing or design, or any other problem for which you would rather have a human’s intelligence than a machine’s.
* This is a fact of which Norvig, just like everybody else in commercial AI, seems to be aware, if not dimly afraid. “We could draw this curve: as we gain more data, how much better does our system get?” he says. “And the answer is, it’s still improving—but we are getting to the point where we get less benefit than we did in the past.”
* For James Marshall, a former graduate student of Hofstadter’s, it’s simple: “In the end, the hard road is the only one that’s going to lead you all the way.”
* Hofstadter strikes me as difficult, in a quiet way. He is kind, but he doesn’t do the thing that easy conversationalists do, that well-liked teachers do, which is to take the best of what you’ve said - to work you into their thinking as an indispensable ally, as though their point ultimately depends on your contribution.
One idiomatic practice in Python that often surprises people coming from programming languages where exceptions are considered, well, exceptional, is [[EAFP|https://docs.python.org/3.5/glossary.html#term-eafp]]: “it’s easier to ask for forgiveness than permission”. Quickly, EAFP means that you should just do what you expect to work and if an exception might be thrown from the operation then catch it and deal with that fact. What people are traditionally used to is [[LBYL|https://docs.python.org/3.5/glossary.html#term-lbyl]]: “look before you leap”. Compared to EAFP, LBYL is when you first check whether something will succeed and only proceed if you know it will work.
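A minimal sketch of the two styles side by side (the {{{config}}} dictionary and the default port are made-up details for illustration):
{{{
config = {"host": "example.com"}

# LBYL: "look before you leap" -- check first, then act.
if "port" in config:
    port = config["port"]
else:
    port = 8080

# EAFP: "easier to ask forgiveness than permission" --
# just try the operation and handle the exception if it fails.
try:
    port = config["port"]
except KeyError:
    port = 8080
}}}
Beyond style, EAFP has a practical advantage: between a "look" and a "leap" (say, an {{{os.path.exists()}}} check and the {{{open()}}} that follows it) the world can change under you, while a single {{{try}}}/{{{except}}} has no such race window.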
In his excellent book [[Probably Approximately Correct|http://www.probablyapproximatelycorrect.com/]], Leslie Valiant writes about his work in defining and using what he calls ecorithms, to address the way to improve computer/software performance (and knowledge) through learning and evolutionary changes.

One of the main reasons he started working in this area is his realization (following Alan Turing's and John von Neumann's) that math, logic, and "classic" (traditional) computation have their limitations when it comes to dealing with all the complexities of the real world (he tells the [[funny story about von Neumann|If you think math is difficult ...]]).

In the book, Valiant talks about theoryful vs. theoryless analysis and behavior:
> Much of everyday human decision making appears to be of a similar ["theoryless"] nature -- it is based on a competent ability to predict from past observations without any good articulation of how the prediction is made or any claim of fundamental understanding of the phenomenon in question. The predictions need not be perfect or the best possible. They need merely be useful enough.
In this regard he quotes Arthur Eddington (in 1933), in what I think is an attempt to show the prevalent scientific sentiment, which is still strong today:
> I hope it will not shock experimental physicists too much if I say that we do not accept their observations unless they are confirmed by theory. 

Valiant describes ecorithms:
> Understanding ecorithms requires developments beyond basic algorithmic theory. One now needs to analyze not only the algorithm itself but also the algorithm's relationship with its environment.
> The theory of probably approximately correct, or PAC, learning deals with this relationship between the algorithm and its environment. It addresses the fundamental question of how a limited entity can cope in a world that in comparison is limitless, and does so while keeping to an absolute minimum any assumptions about that limitless world.
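To make the flavor of PAC learning concrete, here is a tiny sketch of my own (an illustration, not Valiant's code): the unknown concept is an interval on the line, the learner sees randomly drawn labeled examples, and its hypothesis -- the tightest interval around the positives -- becomes "approximately correct" as examples accumulate:
{{{
# A toy PAC-style learner (my own illustration, not from the book).
# The unknown concept is the interval [0.3, 0.7] on the unit line.
import random

def target(x):
    return 0.3 <= x <= 0.7

def learn(n):
    xs = [random.random() for _ in range(n)]
    positives = [x for x in xs if target(x)]
    if not positives:               # saw no positives: predict False everywhere
        return lambda x: False
    lo, hi = min(positives), max(positives)
    return lambda x: lo <= x <= hi  # tightest interval consistent with the data

def error(h, trials=100000):
    # Fraction of fresh random points on which hypothesis h disagrees with
    # the target concept -- the "approximately" in PAC.
    return sum(h(x) != target(x)
               for x in (random.random() for _ in range(trials))) / trials

for n in (10, 100, 1000):
    print(n, error(learn(n)))       # error typically shrinks roughly like 1/n
}}}
The "probably" in PAC is the remaining caveat: an unlucky sample can still produce a bad hypothesis, and the theory quantifies how many examples suffice to make that outcome improbable, while keeping assumptions about the world to a minimum.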

!!! On the Mechanistic Explanations of Nature
>What Crick and Watson had done was to discover the physical substrate on which heritable information is represented, much like silicon is the physical substrate of present-day computers. For both substrates it is impressive how the exacting requirements imposed on them can be achieved with as much miniaturization and economy as they are. However, no one would say that the secret of computers is in the silicon, since computers can be equally well realized in many other physical substrates, though perhaps not quite so economically at present. Indeed, one reason that computer development has been as rapid as it has is that computer scientists made a conceptual separation at the very beginning between the physical technology in which the computer was implemented and the algorithmic content of what was being executed on the machines. This enabled hardware, software, and algorithms to evolve independently and at their own spectacular rates.
>Making similar headway in our study of biology, whether evolutionary or cognitive, demands the same separation of algorithm and substrate. The distinction made here between a physical object and the information processing it performs is self-evident for anyone dealing with computers. The distinction is in no way subtle. Even for a traffic light one can easily distinguish between its symbolic function and its physical construction. But perhaps these distinctions were not quite so obvious in former times. The mind–body problem of Descartes and his followers may have been an earlier reference to such a distinction. But now when computers are ubiquitous there is no reason for confusing "what it does" and "what does it."

!!! The Learnable - How can one draw general lessons from particular experiences?
>The idea that biological and cognitive processes should be viewed as computations appeared almost immediately upon the discovery of universal computation, and it was discussed by the early pioneers, including Turing and von Neumann. Because of subsequent slow progress in making this connection concrete or useful, some have despaired that it can never be made into more than metaphor, and that for fundamental reasons it cannot be made into a science. I disagree. I believe that developing any new science is fraught with challenges, and that we are making progress in this area at about the pace that might be reasonable to expect. 
>The universality of computation is what justifies this approach to cognition. Some have complained that the favored metaphor for the brain in every age has been the most complicated mechanism known at the time. Since the computer is currently that most complex mechanism, is it not a fallacy to adopt that metaphor? I would argue that the computer analogy goes beyond the fact that the computer is another complicated mechanism.
>What makes it different this time is the widely agreed universality of computation over all processes that we regard as mechanistic.
>While computers are extremely good at reasoning using mathematical logic, they find common sense much more challenging. We are faced with two issues as a result: identifying what it is about common sense that logic fails to capture, and whether there is a scientific road to the problem of common sense. The first issue, I argue, is a result of mathematical logic requiring a theoryful world in which to function well. Common sense corresponds to a capability of making good predictive decisions in the realm of the theoryless. To address the second issue we need therefore a theory of the general nature of the theoryless. As I shall argue, the road we must take in that direction is paved with ecorithms.
>The algorithms studied most widely in computer science aim to solve instances of some specific problem, such as integer multiplication or the Traveling Salesman Problem. These algorithms, by design, already incorporate the expertise needed for solving them. Ecorithms are also algorithms but they have an important additional nature. They use generic learning techniques to acquire knowledge from their environment so that they can perform effectively in the environment from which they have learned. They achieve this effectiveness not by intensive design, but by making use of knowledge they have learned. The designed-in expertise is limited to generic learning capabilities and their use. Understanding ecorithms requires developments beyond basic algorithmic theory. One now needs to analyze not only the algorithm itself but also the algorithm's relationship with its environment.
>There is a difficulty in placing generalization at the core of learning, at least for philosophers, who have argued for millennia that it is difficult to make a logical argument for rationally inferring anything from one situation to another that one has never before experienced. This is known as the problem of induction. Aristotle said that there are two forms of argument, syllogistic and inductive. Here I interpret these words to mean that if one has a certain belief, then the belief was arrived at either by logical deduction (syllogism) from things already believed, or by induction (generalization) from particular experiences. In this formulation it is induction that is the more basic, since it enables primary beliefs, whereas logical deduction requires some previous beliefs. The main paradox of induction is the apparent contradiction between the following two of its facets. On the one hand, if no assumptions are made about the world, then clearly induction cannot be justified, because the world could conceivably be adversarial enough to ensure that the future is exactly the opposite of whatever prediction has just been made.
>[...] On the other hand, and in apparent contradiction to this argument, successful induction abounds all around us. Generation after generation, millions of children learn everyday concepts, such as dogs and cats, chairs and tables, after seeing examples of them, rather than precise definitions.

>There may exist some acceptable assumptions that hold for the reproducible, naturally occurring form of induction, and under which induction is rigorously justifiable.
Valiant argues that there are just two such assumptions, which are sufficient, necessary, and unavoidable.
* One is the Invariance Assumption: the context in which the generalization is to be applied cannot be fundamentally different from that in which it was made.
* The second is the Learnable Regularity Assumption. We are quite good, but possibly not perfect, at categorizing... We must be doing it by applying some criteria... which can be viewed as regularities in the world. These regularities should be detectable, and validated by practical (in terms of time, attention, and other cognitive/processing resources) feasibility tests/calculations/processing.
 
Valiant draws some parallels between a teacher and a programmer:
* at a high level we can think of a teacher as a programmer who is defining a sequence of concepts to be learned in a certain order, and perhaps in no other way. The reason is that in order to learn a new concept, we humans need to be "ready": aware of and familiar with its "prerequisite concepts" (e.g., if you don't know what the concept "data" means, you cannot learn what "big data" is. The same goes for "black swan").
* a big difference between a programmer and a teacher is that a programmer needs to know exactly what the state of the program is and what it does. A teacher does not know exactly what the learner knows or how s/he interprets each word (and even the learner does not necessarily know exactly what their "state" is). This teacher-learner incomplete knowledge is inevitable and has both positive and negative aspects. One of the positives is a certain robustness/resiliency in the face of errors, as well as the potential and ability to recover and/or improve learning over time.
* both a teacher and a programmer should point out the next good thing to learn, as well as provide examples (both positive and negative) and label them.
* every learner has certain learning algorithms and a good teacher (and a good programmer) will therefore come up with a good set and sequence of labeled examples which will accelerate and improve learning of concepts and relationships.
>In practice we can never be certain that the world will not change on us in an unexpected way, so that future examples will be from a very different distribution from those in the past. Past performance is not necessarily indicative of future results. Living organisms, however, need to make decisions all the time and take a view on what will happen next. The only course available is to learn as many of the world's regularities as we can, and allow them to guide our decision making. There is simply no alternative.

!!! Need We Fear Artificial Intelligence?
Valiant concludes the book with the following hopeful and calming/level-headed thoughts:
>There may be some good news for humans in the fact that one can be intelligent in many different ways. It gives us hope that we may endow robots with intelligence superior to ours but only in directions that are useful and not threatening to us. Also, it makes it clear that there is no good reason to want to make robots that are exactly like humans. 
>The most singular capability of living organisms on Earth must be that of survival. Anything that survives for billions of years, and many millions of generations, must be good at it. Fortunately, there is no reason for us to endow robots with this same capability. Even if their intelligence becomes superior to ours in a wide range of measures, there is no reason to believe that they would deploy this in the interests of their survival over ours unless we go out of our way to make them do just that.
> We have limited fear of domesticated animals. We do not necessarily have to fear intelligent robots either. They will not resist being switched off, unless we provide them with the same heritage of extreme survival training that our own ancestors had been subject to on Earth.
 
|borderless|k
|[img[Escher Hands|./resources/escher_hands_1.jpg][./resources/escher_hands.jpg]]|[img[Human-Robot-hands|./resources/human_robot_hand_1.jpg][./resources/human_robot_hand.jpg]]|
Edsger Wybe Dijkstra (May 11, 1930 – August 6, 2002) was a Dutch computer scientist. He received the 1972 Turing Award for fundamental contributions to developing programming languages, and was the Schlumberger Centennial Chair of Computer Sciences at The University of Texas at Austin from 1984 until 2000.
<br>
{{{To teach is to learn twice.}}}
: -- Joseph Joubert 

{{{The business of life is to learn, not to know.}}}
: -- Jonathan Rosen 

{{{The larger the island of knowledge, the longer the shoreline of wonder.}}}
: -- Ralph Sockman

{{{I'd take the awe of understanding over the awe of ignorance any day.}}}
: -- Douglas Adams (from //The Salmon of Doubt//)

{{{We make, not just to have, but to know.}}}
: -- Alan Kay

[>img[teaching|./resources/xkcd_teaching_s1.png][./resources/xkcd_teaching.png]]
<<forEachTiddler 
where 
'tiddler.tags.contains("education-item")'
sortBy 
'tiddler.title'>>



<html>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-sa/3.0/us/88x31.png" /></a><br />To the extent possible and under my control, this work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License</a>.
</html>
From an [[excellent, uplifting and down-to-earth talk|http://www.pbs.org/johngardner/chapters/3.html]] by John Gardner on Education and Excellence.

>We don't even know what skills may be needed in the years ahead. That is why we must train our young people in the fundamental fields of knowledge, and equip them to understand and cope with change. That is why we must give them the critical qualities of mind and durable qualities of character that will serve them in circumstances we cannot now even predict.

and

>If the man in the street says, 'Those fellows at the top have to be good, but I'm just a slob and can act like one' -- then our days of greatness are behind us. We must foster a conception of excellence that may be applied to every degree of ability and to every socially acceptable activity. A missile may blow up on its launching pad because the designer was incompetent or because the mechanic who adjusted the last valve was incompetent. The same is true of everything else in our society. We need excellent physicists and excellent mechanics, excellent cabinet members and excellent first-grade teachers. The tone of our society depends upon a pervasive and almost universal striving for good performance.
>
>And we are not going to get that kind of striving, that kind of alert and proud attention to performance, unless we can instruct the whole society in a conception of excellence that leaves room for everybody who is willing to strive -- a conception of excellence which means that whoever I am or whatever I am doing, provided that I am engaged in socially acceptable activity, some kind of excellence is in my reach.

And

>Exploration of the full range of our own potentialities is not something that we can safely leave to the chances of life. It is something to be pursued avidly to the end of our days. We should look forward to an endless and unpredictable dialogue between our own potentialities and the claims of life -- not only the claims we encounter, but the claims we invent. And by potentialities I mean not just skills, but the full range of our capacities for sensing, wondering, learning, understanding, loving, and aspiring...
>
>A society whose maturing consists simply of acquiring more firmly established ways of doing things is headed for the graveyard -- even if it learns to do these things with greater and greater skill. In the ever-renewing society what matures is a system or framework within which continuous innovation, renewal and rebirth can occur.
>
>Our thinking about growth and decay is dominated by the image of a single life-span, animal or vegetable. Seedling, full flower, and death...But for an ever-renewing society, the appropriate image is a total garden, a balanced aquarium or other ecological system. Some things are being born, other things are flourishing, still other things are dying -- but the system lives on.
Education is what, when, and why to do things. Training is how to do it. In science, if you know what you are doing, you should not be doing it. In engineering, if you do not know what you are doing, you should not be doing it.

Richard Hamming: The Art of Doing Science and Engineering (1997)
|Technology Name                             | Domain/Subject         | Technology Reference URL                                                                         | Usage Examples | My Usage | Comments |h
|[[GeoGebra|http://www.geogebra.org/cms/]] | Math | [[Intro to GeoGebra [pdf]|http://www.geogebratube.org/material/show/id/7382]] | [[examples|http://www.geogebratube.org/?lang=en]] |   [[examples|http://employees.org/~hmark/math/index.html#geogebra]] |Powerful, [[Mathematica|http://www.wolfram.com/mathematica/]]-, [[Sage|http://sagemath.org/]]-, or [[Maple|http://www.maplesoft.com/]]-like Math package; free|
|[[NetLogo|http://ccl.northwestern.edu/netlogo/]] | CS, Science | [[manuals|http://ccl.northwestern.edu/netlogo/docs/]] | [[examples library|http://ccl.northwestern.edu/netlogo/models/index.cgi]] | [[examples|http://employees.org/~hmark/math/index.html#netlogo]] |Excellent low-entry, high ceiling free software; generates Java applets|
|[[LightBot|http://www.kongregate.com/games/Coolio_Niato/light-bot]]  | CS | [[Download swf|http://playtomic.com/games/511-lightbot20.swf?source=feed]] |  [[Helene Martin: Teaching with LightBot|http://www.helenemartin.com/2011-08-teaching-with-lightbot/]]  | [[programming|http://employees.org/~hmark/math/lightbot.html]] |Programming a virtual robot |
|[[LogicSim|http://www.tetzl.de/java_logic_simulator.html]]  | Engineering/CS/Math | |  [[Short Tutorial|http://www.tetzl.de/java_logic_simulator.html]]  | [[examples|http://employees.org/~hmark/math/index.html#logicsim]]     |Designing binary logic circuits |
|[[Easy Java Simulation (EJS)|http://www.um.es/fem/EjsWiki/pmwiki.php]]  | Math, Physics  |  |  [[Open Source Physics|http://www.compadre.org/osp/index.cfm]] | [[examples|http://employees.org/~hmark/math/index.html#ejs]] |Powerful simulation package, with ODE support. Produces Java applets|
|[[Tracker video analysis|http://www.compadre.org/osp/items/detail.cfm?ID=7365]] | Physics, Math | [[Lab/user manual|http://www.compadre.org/osp/document/ServeFile.cfm?ID=12037&DocID=2924&Attachment=1]]  |  [[Samples & Download|http://www.cabrillo.edu/~dbrown/tracker/]]  | [[example|Tracker video analysis - falling bodies]]   |Quantitative video analysis; free; multiple platforms|
|[[Sage|http://sagemath.org/]]  | Math  | [[documentation|http://www.sagemath.org/help.html#SageStandardDoc]] | [[worksheets|https://sagenb.kaist.ac.kr:8066/pub/]] | [[examples|http://employees.org/~hmark/math/index.html#sage]] |Powerful, [[Mathematica|http://www.wolfram.com/mathematica/]]- or [[Maple|http://www.maplesoft.com/]]-like Math package; free; Python scriptable|
|[[Scratch|http://scratch.mit.edu/]]  | CS, other |             |                           | [[examples|http://scratch.mit.edu/projects/myh9090/1871961]] |Drag-and-drop, snap-on programming. Scratch 2.0 is web(Flash)-enabled|
|[[Snap|http://snap.berkeley.edu/]]  | CS, other | [[manual|http://snap.berkeley.edu/SnapManual.pdf]] |                           |  |Drag-and-drop, snap-on programming. Extended implementation of Scratch; web(javascript)-enabled|
|[[AppInventor|http://appinventor.mit.edu/]]  | CS | [[source code|http://code.google.com/p/app-inventor-releases/]]  |  [[samples|http://www.appinventorblocks.com/]]   |  |[[Scratch|http://scratch.mit.edu/]]-like snapping tiles; Android app [[IDE|http://en.wikipedia.org/wiki/Integrated_development_environment]] |
|[[Blockly|http://code.google.com/p/blockly/]]  | CS, other |      |       |    |[[Scratch|http://scratch.mit.edu/]]-like snapping tiles for "visual programming"  |
|[[Stencyl|http://www.stencyl.com/]]  | CS, game programming, Physics | [[Overview|http://www.stencyl.com/stencyl/overview/#section1]] |       |    |[[Scratch|http://scratch.mit.edu/]]-like, web-enabled, with Flash and Mobile support  |
|[[Soulver|http://www.acqualia.com/soulver/]]  | Math  |  [[enhancement|http://worrydream.com/ScrubbingCalculator/]]  |    |    |"Smart" notebook+spreadsheet, combined with embedded math calculations abilities  |
|[[Greenfoot|http://www.greenfoot.org/door]]  | CS (Java) |      |       |    |For learning & teaching Java; [[BlueJ|http://www.bluej.org/]] is a compatible IDE  |
|[[Algodoo|http://www.algodoo.com/wiki/Home]] | Physics |      |  [[examples|http://www.algodoo.com/algobox/]]  |    |2D simulation; not free; [[free, older version: Phun|http://www.algodoo.com/wiki/Download]]|
|[[Physion|http://physion.net/]] | Physics, CS |      |  [[examples|http://physion.net/en/scenes]]  |    |2D simulation; free; not web-browser playable; Javascript programmable|
|[[Gamestar Mechanic|http://gamestarmechanic.com/]] | CS, Game Design |      |  [[Getting Started|https://sites.google.com/a/elinemedia.com/gsmlearningguide/]]  |    |Game playing, game design; free; web browser enabled|
|[[Sodaplay|http://soda.co.uk/categories/sodaplay]] | CS, Game Design, Physics |      |  [[Launching tools/suite|http://sodaplay.com/create]]  |    |Game playing, game design; free; web browser enabled suite: Creator, Moovl, Newtoon, Race|
|[[Perlenspiel|http://www.perlenspiel.org/]] | CS, Game Design |      |  [[Examples by Brian Moriarty|http://users.wpi.edu/~bmoriarty/ps/examples.html]]  | [[my comments|Teaching game design]]   |Game playing, game design; free; web browser enabled; some similarities to [[NetLogo|http://ccl.northwestern.edu/netlogo/]]|
|[[VPython|http://vpython.org/index.html]] | CS, Physics, Visualization | [[documentation|http://vpython.org/contents/doc.html]] | [[examples|http://www.youtube.com/vpythonvideos]] |    |Programming + 3D visualization in Python; free; can be 'web-enabled' with [[GlowScript|http://vpython.org/contents/doc.html]]|
|[[Go|http://golang.org/]] | CS | [[documentation|http://golang.org/doc/]] |  |    |An open source programming environment by Google. Supported and can be hosted on Google's [[App Engine|https://developers.google.com/appengine/]] |
|[[Alice|http://www.alice.org/index.php]] | CS | [[documentation|http://www.alice.org/index.php?page=3.1/download_materials]] |  |  [[education-related papers|http://www.alice.org/index.php?page=publications/publications]]  |An open source environment for 3D (game) programming based on Java |
|[[Kojo|http://www.kogics.net/kojo]] | CS, Math, Art, Music | [[documentation|http://wiki.kogics.net/sf:kojo-docs]] | [[examples|http://www.kogics.net/codeexchange]] |  [[education-related comments|http://wiki.kogics.net/sf:kalpana-center]]  |An open source programming environment both downloadable and web-enabled (based on the [[Scala programming language|http://www.scala-lang.org/]]) |

CS = Computer Science
IDE = Integrated Development Environment
ODE = Ordinary Differential Equations
UC Berkeley math professor Edward Frenkel wants to expose learners to the Beauty of Math (this reminds me of [[Brian Harvey, Dan Garcia and team|https://bjc.berkeley.edu/team/leadership/]] and their [["Beauty and Joy of Computing"|https://bjc.berkeley.edu/]] effort, also at UC Berkeley).
When [[asked|http://www.slate.com/articles/health_and_science/new_scientist/2013/10/edward_frenkel_on_love_and_math_what_is_it_like_to_be_a_mathematician.html]] about the way we teach math to students, Frenkel has a vivid image/analogy:
>The way mathematics is taught is akin to an art class in which students are only taught how to paint a fence and are never shown the paintings of the great masters. When, later on in life, the subject of mathematics comes up, most people wave their hands and say, "Oh no, I don't want to hear about this, I was so bad at math." What they are really saying is, "I was bad at painting the fence."
:: [img[painting the fence vs. the painting of the masters|resources/paint fence and masters paintings small.png]]

In [[another article|http://www.slate.com/articles/health_and_science/science/2013/04/e_o_wilson_is_wrong_about_math_and_science.html]], Frenkel blasts the scientist E. O. Wilson, who seems to suggest to science students that math is not essential to science, and says:
>If mathematics were fine art, then Wilson’s view of it would be that it’s all about painting a fence in your backyard. Why learn how to do it yourself when you can hire someone to do it for you? But fine art isn’t a painted fence, it’s the paintings of the great masters. And likewise, mathematics is not about “number-crunching,” as Wilson’s article suggests. It’s about concepts and ideas that empower us to describe reality and figure out how the world really works. Galileo famously said, “The laws of Nature are written in the language of mathematics.” Mathematics represents objective knowledge, which allows us to break free of dogmas and prejudices. It is through math that we learned Earth isn’t flat and that it revolves around the sun, that our universe is curved, expanding, full of dark energy, and quite possibly has more than three spatial dimensions. But since we can’t really imagine curved spaces of dimension greater than two, how can we even begin a conversation about the universe without using the language of math?

I think that the truth is somewhere on both sides: math can and does both enrich the human experience (as [[Richard Feynman had also said|Richard Feynman on the beauty and simplicity of nature]]) and enable deep scientific insights, but it is neither a strictly necessary nor a sufficient condition for science (Albert Einstein is a famous example of a scientist with great imagination and insight who, [[it is claimed|https://www.theguardian.com/uk/2006/may/22/science.research]], needed some help with his math to "solidify" his theories :). On a similar note, [[Wilson also mentions|https://www.wsj.com/articles/SB10001424127887323611604578398943650327184]] that "Newton invented calculus in order to give substance to his imagination."

On this last point (help with math) [[Wilson (rightfully) claims|https://www.wsj.com/articles/SB10001424127887323611604578398943650327184]] that collaboration between scientists and mathematicians may be very fruitful, and that scientists may have help readily available, if/when needed:
>It is far easier for scientists to acquire needed collaboration from mathematicians and statisticians than it is for mathematicians and statisticians to find scientists able to make use of their equations.
In a lecture at Stanford University by Professor of Mathematics Brian Conrad, titled //Rubik, Escher, Bank$//, he used a good and simplified analogy to explain a bit of the role of elliptic curves in Public Key Encryption schemes used by banks, credit card companies and other financial institutions.

For a step-by-step worked out [[example of another encryption algorithm (RSA)|RSA encryption example - simplified]], see the [[NetLogo simulation/model|math/netlogo/RSAcrypto.html]] I've created.

(In the same lecture Conrad used another analogy to explain a bit of what's going on in [[Escher's Print Gallery]])

The analogy Conrad used was to explain the main concepts behind public key encryption and the role of elliptic functions. He was trying to answer the common question: how can two parties who want to exchange secret messages publish (to the whole world) what are called "public keys", which take part in creating the encrypted messages, while only the party/person who also holds a secret "private key" can decipher them? He basically tried to simplify the [[Diffie-Hellman key exchange process|http://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exchange]].

So, as an example which __does not use elliptic functions__, but rather a method which __can be hacked/broken__ (i.e. it's not secure and does NOT safeguard encrypted messages), Conrad used an analogy, employing [[power functions (exponentiation)|http://en.wikipedia.org/wiki/Exponentiation]].

Let's assume that party A wants to encrypt a message and send it to party B. For simplification's sake let's also assume that the unencrypted message is "12" (and it can be generalized to any message). 
* First, the two parties need to agree on a "base number", which they can do publicly, since it is only a "seed" from which the encryption keys will be developed. Let's say that they agree on a base/seed of 2.
* Then, party A needs to select a (secret) "private key" known only to party A, let's say it chooses the number 3. Party A then creates a (non-secret) "public key", using the seed (2) and private key (3). In our "toy" scenario (making this a non-secure method), let's assume that the way to create the public key is to take the base and raise it to the power of the secret key. So 2^^3^^ = 8, which party A publishes to the world.
* Similarly, party B needs to select a (secret) "private key" too, known only to party B, say, the number 4. Party B also creates a (non-secret) "public key", using the same method and gets 2^^4^^ = 16, which it publishes to the world.
* Now, party A has party B's (published) public key and therefore can calculate a "shared secret" key by using its own private key, and party B's public key: 16^^3^^ = 4096 (but notice that (2^^4^^)^^3^^ = 4096).
* Party B does the same thing with its own secret key and party A's public key and gets the same "shared key" (and that's the whole point!): 8^^4^^ = 4096 (but notice that (2^^3^^)^^4^^ = 4096)

So, by publicly agreeing on a seed number (2) and using a calculation (which in our "toy" example is exponentiation -- non-secure, since it can easily be reversed with a logarithm, unlike the elliptic-function case), party A and party B each compute and possess the same "shared secret" code, and can now start encrypting messages to each other, without any listening parties being able to decrypt them.
As an example, party A can encrypt the message (12) using the shared secret key (4096) and multiplication, and get an encrypted message of 12 * 4096 = 49152 (the result) which is sent to party B (and which the whole world can see).
Now, the simplified and fallacious part of this example is that since we are using exponentiation to encrypt the secret message, and since there is a __reverse function__ to exponentiation, namely __logarithms__, this method is not safe for encryption. Elliptic functions, in contrast, don't have such easily computable reverse functions, and that's why they are good for encrypting. But let's continue with this example.
Party B (and the rest of the world for that matter) gets the encrypted message (49152 standing for the original message "12"), and using the "shared secret" key (4096), which no one else has, can decrypt the transmission (49152 / 4096) and get the original message (12).
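The whole toy exchange fits in a few lines of Python (using the numbers from the walkthrough above; this is, again, the deliberately breakable exponentiation version, not real elliptic-curve cryptography):
{{{
# Toy Diffie-Hellman-style exchange using plain exponentiation.
# Deliberately insecure: the "one-way" step is easily reversed with a logarithm.
from math import log2

base = 2                        # the public "seed", agreed in the open

a_private = 3                   # party A's secret key
b_private = 4                   # party B's secret key

a_public = base ** a_private    # 2**3 = 8,  published by A
b_public = base ** b_private    # 2**4 = 16, published by B

# Each party combines its own private key with the other's public key:
shared_a = b_public ** a_private    # 16**3 = 4096
shared_b = a_public ** b_private    # 8**4  = 4096
assert shared_a == shared_b == 4096

# "Encrypt" by multiplying with the shared secret; "decrypt" by dividing:
message = 12
ciphertext = message * shared_a     # 49152, sent in the open
assert ciphertext // shared_b == message

# Why this toy is breakable: an eavesdropper who sees base and a_public
# can recover A's private key with a logarithm.
assert log2(a_public) == a_private  # log2(8) = 3.0
}}}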
I recently came across an article in the New York Times Opinion pages by Neal Gabler, a senior fellow at the Annenberg Norman Lear Center at the University of Southern California, titled [[The Elusive Big Idea|http://www.nytimes.com/2011/08/14/opinion/sunday/the-elusive-big-idea.html?pagewanted=all&_r=1&]].

The funny thing is that even though one of the main points of this article is that we live in a "post-ideas" world, where according to Gabler no big ideas are being generated anymore ("Post-idea refers to thinking that is no longer done"), this in itself is kind of a "big idea" - or at least its implications (if true) are BIG!

Another point Gabler is making is that we live in an age with fewer (no?) big thinkers:
>If our ideas seem smaller nowadays, it's not because we are dumber than our forebears but because we just don't care as much about ideas as they did. In effect, we are living in an increasingly post-idea world -- a world in which big, thought-provoking ideas that can't instantly be monetized are of so little intrinsic value that fewer people are generating them and fewer outlets are disseminating them, the Internet notwithstanding. Bold ideas are almost passé.
I think that in saying this, Gabler is falling into the trap he himself is describing, namely, that the information inundation we are experiencing makes us "myopic" (my term). It seems that he is making this kind of assertion, //because// he is "drowning" in information, and is looking in the wrong places. One could argue that Gabler falling into this trap doesn't "make it so" for everyone, and isn't necessarily "the sign of the times".

I agree with the writer that there is a REAL DANGER that in the "information age" we currently live in, we may drown in information and be sidetracked by the chaff and lured away from the grain (or in the writer's words: "if information was once grist for ideas, over the last decade it has become competition for them"). BUT, in my mind, this is truly an example of a danger/challenge that brings with it a __tremendous__ opportunity.

The situation the writer describes, where "[w]e are inundated with so much information that we wouldn't have time to process it even if we wanted to, and most of us don't want to" is an opportunity (or a call for "upgrading our survival capabilities"!) to develop new skills and abilities for filtering, analyzing, digesting, incorporating and creating new information, knowledge and ideas.

If instead of looking at the wealth of information as a "tidal wave" which will drown us, we look at it as "truckloads and trainloads of potential Lego blocks^^1^^" for us to build with, then the "threatening situation" can turn into "the largest construction site ever". But again, we need to develop new skills and capabilities, and maybe even revisit and reinforce our value systems, so as to prioritize the effort required to generate new ideas from the wealth of information, and so as to value and use the results in beneficial ways, without falling into the trap of "We prefer knowing to thinking because knowing has more immediate value. It keeps us in the loop, keeps us connected to our friends and our cohort. Ideas are too airy, too impractical, too much work for too little reward."
 
As far as taking on the challenge, one interesting and possibly enjoyable (and definitely educational and creative) way to deal with this "information inundation" is "information curation", where we become curators, or we use curators to help us pick the grain from the chaff. [[Maria Popova|http://www.brainpickings.org/index.php/about/]] gives an [[interesting BBC interview|http://www.bbc.co.uk/news/technology-20415707]] about the nature of curation and curators^^2^^:
>It [curation] brings to the forefront that which is interesting, meaningful, and stimulating and memorable.
>...
>A great curator, to me, is someone who takes such bits of information and transmutes them into useful knowledge. It's someone who shines a spotlight on the timeless corners of the "common record", the ones that are perhaps obscured from view, or forgotten, or poorly understood, making them timely again by contextualising them and linking them to ideas and issues of present urgency, correlating and interpreting.

[[In the same interview|http://www.bbc.co.uk/news/technology-20415707]] Popova states her belief about curation and computers^^3^^:
>It's this transmutation of information into practical wisdom about how the world works, and moral wisdom about how the world ought to work that sets the human apart from the algorithm - and from the computer.
>It offers, I believe, the only real hope of making use and making sense of humanity's collective knowledge.

An inspiring site where [[big ideas are definitely alive and well|http://edge.org/conversations]], contradicting the claim of the NY Times writer, is edge.org, where the goal of its creator, John Brockman, in his own words, is
>To arrive at the edge of the world's knowledge, seek out the most complex and sophisticated minds, put them in a room together, and have them ask each other the questions they are asking themselves. 
which he does very successfully, in my opinion!

The [[man behind edge.org, John Brockman|http://edge.org/memberbio/john_brockman]] is himself a "fountain of big ideas", and the books he has been editing (like [[What Have You Changed Your Mind About?]], "This will make you smarter", [[Is the Internet Changing the Way You Think?]] and [[others|http://edge.org/conversation/books-from-edge]]) are full of big thinkers expressing big ideas.

So the bottom line for me is:
* I'm very glad to be living in an age where we have such easy access to so much information
* Learning how to deal with the vast amounts of information is definitely a worthwhile effort, and it'll be exciting to find and develop the ways, skills, and capabilities to do that (for example, by developing better [[Computational Thinking/Literacy|A Framework for Computational Thinking, Computational Literacy]] skills.)
* [[Education and technology|A Framework for Computational Thinking, Computational Literacy]]^^4^^ can be our powerful allies on this journey of discovery and betterment

----
^^1^^ - [[a metaphor borrowed|http://www.brainpickings.org/index.php/2011/08/01/networked-knowledge-combinatorial-creativity/]] from [[Maria Popova|http://www.BrainPickings.org]]
^^2^^ - see [[What We Talk About When We Talk About Curation|http://www.brainpickings.org/index.php/2012/03/16/percolate-curation/]] by Popova.
  And also [[Farnam Street|https://fs.blog]], an excellent blog, from which I can quote [[a book review|https://fs.blog/2014/03/contagious-6-reasons-things-catch-on/]] that quotes [[Maria Konnikova|https://www.mariakonnikova.com/]], who writes:
>Content should have an ethical appeal, an emotional appeal, or a logical appeal. A rhetorician strong on all three was likely to leave behind a persuaded audience. Replace rhetorician with online content creator, and Aristotle’s insights seem entirely modern. Ethics, emotion, logic—it’s credible and worthy, it appeals to me, it makes sense.
^^3^^ - I'm a bit skeptical about the impossibility of automating/computerizing (at least some) creative processes. It reminds me a bit of the [[chess playing claims|The end of an era, the beginning of another? HAL, Deep Blue and Kasparov]] that "computers will never be able to 'think' in a creative way and beat humans in the game of chess". Or the historical fact that before Google's search engine technology, Yahoo actually started as a [[Yahoo Directory|http://en.wikipedia.org/wiki/Yahoo!_Directory]] service, based on large numbers of human searchers/indexers who //manually// created searchable index/directory pages.
^^4^^ - The effort to automate/computerize chess playing and the fact that the solution was __not__ to "teach the computer" to play like a Grand Master, taught us something about human creativity and thought (and [[how different they are from machine performance|History of the chess table]]). I believe that the efforts to automate curation (despite Popova's doubts) will teach us something about creativity, curation, innovation, which is an exciting frontier!
A long time ago (maybe 30 years^^1^^) I discovered [[emacs|https://www.gnu.org/software/emacs/index.html#features]] and was hooked. Who can resist the (nerdy) charms/pull of "An extensible, customizable, free/libre text editor — and more"^^3^^?

And there is definitely "more"^^2^^. As they say, ''emacs has everything but the kitchen sink''. So you can imagine how delighted I was to find some sort of validation to the saying, in, of all places, a school bathroom.

I know the spelling is different and it's not a kitchen sink, //but// now the saying can be extended (ha!) to say ''emacs is everything under the sink'' :)


[>img[eemax under the sink|./resources/eemax_1.jpg][./resources/eemax_2.jpg]]


----
^^1^^ - For those curious about Emacs history: Emacs was originally implemented in 1976 on the MIT AI Lab's Incompatible Timesharing System (ITS), as a collection of TECO macros. The name “Emacs” was originally chosen as an abbreviation of “Editor MACroS”. This version of Emacs, GNU Emacs, was originally written in 1984.

^^2^^ - emacs is more than a "customizable text editor"! [[The full list is pretty long|https://www.emacswiki.org/#toc5]]: it is an entire ecosystem of functionality beyond text editing, including a project planner, mail and news reader, debugger interface, calendar, an [[AI development environment|https://www.emacswiki.org/emacs/CategoryArtificialIntelligence]], various calculators, many games, a web browser, a [[PIM|https://www.emacswiki.org/emacs/CategoryPersonalInformationManager]], databases, and (yes!) more...

^^3^^ - I don't want to start (or continue) a religious and/or political "war", but an employee of mine once bought me a ~T-Shirt with the following printed on it. Of course there’s also a different version of the shirt where the words emacs and vi are reversed (including the name of the little boy :), but that’s the whole point -- some people use vi and others use emacs.
[>img[we use emacs|./resources/use_emacs.png][./resources/use_emacs_1.png]]
Bret Victor is a thought-provoking and talented thinker and doer (as in designing and building software tools). I've been following [[some of his work|http://worrydream.com/]], and found quite a few of my ideas about thinking, learning, doing, and programming similar to his, though his are often significantly expanded and better developed.

I was surprised (well, maybe not, come to think of it ;-), or at least pleased, that in [[an excellent video/demo|http://vimeo.com/67076984]], Bret mentioned the exact same [[quote by Richard Hamming|Perhaps there are thoughts we cannot think]], which I referred to when writing about Hamming's paper [[On why Math works for us|On why Math works for us]].
Hamming's quote makes the point that due to evolution, and similarly to the natural limits of our senses (limited visible spectrum, audible frequency range, odors we can smell), "perhaps there are thoughts we cannot think". 
Victor takes this and points out that humans have overcome some of the limitations of our senses by building tools to expand and go beyond them. From that, he draws the analogy that we can and should also build tools that expand our thinking capabilities, and enable us to think (currently) unthinkable thoughts^^1^^. And he gives a couple of examples of tools that aided new thoughts and new ways of thinking, such as writing (which "made thoughts visible"), mathematical notation (the power of which [[I wrote about here|The power of a new literacy]]), and computer user interfaces (allowing us to expose and manipulate things brought out in the interface).

In my mind, this is a very clear and inspiring call and motivation to develop methodologies and tools for teaching and learning [[Computational Thinking|A Framework for Computational Thinking, Computational Literacy]].

Victor quotes [[Carver Mead|http://en.wikipedia.org/wiki/Carver_Mead]] from ~CalTech:
>Right now, today, we can't see the thing, at all, that's going to be the most important 100 years from now.
echoing a similar observation about education:  we teach and prepare our students today for jobs (and a world) we have no idea about.

And he adds:
>We cannot see the thing. At all. But whatever that thing is -- people will have to think it. And we can, right now, today, prepare powerful ways of thinking for these people. We can build the tools that make it possible to think that thing.
>We cannot see the thing. At all. My job is to make sure our children can.

To enable us to think unthinkable thoughts, Victor proposes leveraging some definitions borrowed from Jerome Bruner and Jean Piaget about modes of thinking: Enactive, Iconic, and Symbolic. Or in current user interface terminology:
* ''Interactive'' - thinking by manipulating and exploring with your body, etc.,
* ''Visual'' - thinking by being able to see, visually compare and contrast, transform and abstract, etc., and
* ''Symbolic'' - thinking by manipulating language, symbols, logic, procedures/algorithms, etc.

Teaching and learning in ways that simultaneously engage learners in __all__ of the above ways, using tools that enable all these channels, is very powerful, because it leads learners to develop not just stronger understanding, but also ''strong intuitions'' about the learned concepts and their relationships.

[[George Polya|http://en.wikipedia.org/wiki/George_P%C3%B3lya]] (of [["How to Solve it"|https://notendur.hi.is/hei2/teaching/Polya_HowToSolveIt.pdf]] fame) has a lot to say about intuition, conjectures, guessing, and plausible reasoning in the introduction to his book [[Induction And Analogy In Mathematics|https://archive.org/download/Induction_And_Analogy_In_Mathematics_1_/Induction_And_Analogy_In_Mathematics_1_.pdf]]:
>Strictly speaking, all our knowledge outside mathematics and demonstrative logic (which is, in fact, a branch of mathematics) consists of conjectures.
>[...]We secure our mathematical knowledge by demonstrative reasoning, but we support our conjectures by plausible reasoning. A mathematical proof is demonstrative reasoning, but the inductive evidence of the physicist, the circumstantial evidence of the lawyer, the documentary evidence of the historian, and the statistical evidence of the economist belong to plausible reasoning.
>[...]Demonstrative reasoning is safe, beyond controversy, and final. Plausible reasoning is hazardous, controversial, and provisional. Demonstrative reasoning penetrates the sciences just as far as mathematics does, but it is in itself (as mathematics is in itself) incapable of yielding essentially new knowledge about the world around us. __Anything new that we learn about the world involves plausible reasoning__ (my emphasis), which is the only kind of reasoning for which we care in everyday affairs.

>[...]Certainly, let us learn proving, but also __let us learn guessing__ (my emphasis). This sounds a little paradoxical and I must emphasize a few points to avoid possible misunderstandings. 
>Mathematics is regarded as a demonstrative science. Yet this is only one of its aspects. Finished mathematics presented in a finished form appears as purely demonstrative, consisting of proofs only. Yet mathematics in the making resembles any other human knowledge in the making. You have to guess a mathematical theorem before you prove it; you have to guess the idea of the proof before you carry through the details. You have to combine observations and follow analogies; you have to try and try again. The result of the mathematician's creative work is demonstrative reasoning, a proof; but the proof is discovered by plausible reasoning, by guessing. If the learning of mathematics reflects to any degree the invention of mathematics, it must have a place for guessing, for plausible inference. 
>There are two kinds of reasoning, as we said: demonstrative reasoning and plausible reasoning. Let me observe that they do not contradict each other; on the contrary, they complete each other. In strict reasoning the principal thing is to distinguish a proof from a guess, a valid demonstration from an invalid attempt. In plausible reasoning the principal thing is to distinguish a guess from a guess, a more reasonable guess from a less reasonable guess.
So, as Polya says, learning how to guess well is essential (actually, he says it's the __only__ way to generate new knowledge!). My strong conviction (and Victor's too ;-) is that good tools can develop intuitions and strengthen the art/skill of "fruitful guessing" - in other words "trigger/enable thinking unthinkable thoughts".

It's interesting [[to note|Encouraging Math discoveries]] that [[Francis Edward Su|http://www.math.hmc.edu/~su/]] echoes similar sentiments in his article [[TEACHING RESEARCH: ENCOURAGING DISCOVERIES|resources/Francis Edward Su - encouraging discoveries in Math.pdf]], [[quoting Henri Poincare|Encouraging Math discoveries]].

Back to Victor. He gives a series of compelling examples of ways to enable (or at least trigger) new ways of thinking about things ("thinking unthinkable thoughts"). He states that tools to enable this should have certain capabilities:
* The state, behavior, and evolution of systems and phenomena need to be made visible and explicit (Victor's example of [[annotating scientific papers|http://worrydream.com/#!/ScientificCommunicationAsSequentialArt]])
* Systems and models in their entirety, with all of their variables, need to be exposed for interaction
* Multiple views, representations, and perspectives need to be available and enabled for comparisons and manipulations (Victor's example of [[a digital filter|http://worrydream.com/#!/ExplorableExplanations]])
* Both the __structure__ of systems and models, as well as their __dynamic behavior and data__, should be manipulable and open to meaningful interaction (Victor's example of [[a visualization and animation environment/tool|http://worrydream.com/#!/DrawingDynamicVisualizationsTalkAddendum]])

----
^^1^^ - It is an interesting question whether there is a limit to the thoughts we will ever be able to think, or as [[Haldane|https://en.wikipedia.org/wiki/J._B._S._Haldane]] said: [[My own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose... I suspect that there are more things in heaven and earth that are dreamed of, or can be dreamed of, in any philosophy.]]
In an article ([[TEACHING RESEARCH: ENCOURAGING DISCOVERIES|resources/Francis Edward Su - encouraging discoveries in Math.pdf]]) describing his experience teaching math (to middle schoolers), [[Francis Edward Su|http://www.math.hmc.edu/~su/]] quotes Henri Poincare:
>The principal aim of mathematical education is to develop certain faculties of the mind, and among these intuition is not the least precious. It is through it that the mathematical world remains in touch with the real world.
But Poincare also said:
>It is by logic that we prove, but by intuition that we discover.

which reflects similar sentiments (about intuition, conjectures, guessing, and plausible reasoning) expressed by George Polya in his book [[Induction And Analogy In Mathematics|https://archive.org/download/Induction_And_Analogy_In_Mathematics_1_/Induction_And_Analogy_In_Mathematics_1_.pdf]].

Anyway, when Su started teaching (he's currently teaching Math at Harvey Mudd College) he was wondering:
>How does one turn a learner into a discoverer? When I was starting out as a new professor, I might have given these answers (see list below). 
>... Now, I shall explain why I believe every one of these pieces of advice is either plainly wrong or, at best, inadequate.

And he lists the lessons he learned along the way, questioning his original answers.
* Lesson #1. Teach the needed background?
** No. Nurture the yawp.^^1^^ (see Su's [[FunFacts site|http://www.math.hmc.edu/funfacts/]] at Harvey Mudd College)
* Lesson #2. Cultivate maturity in your students?
** No. Revive their child-like curiosity and imagination.
* Lesson #3. Identify invisible yawpers.^^1^^
* Lesson #4. Inspire them?
** Better: create spaces for self-inspiration. (see [[the Moore method for teaching Math|http://legacyrlmoore.org/reference/quick_start-3.pdf]])
* Lesson #5. Ask good questions?
** Better: teach how to ask good questions (since, as John O'Donohue said: [[questions are like lanterns|John O’Donohue - questions]]).
* Lesson #6. Give open problems?
** Better: give open-ended problems.
* Lesson #7. Select the smartest students?
** Not necessarily; select motivated students! (see similarity^^2^^ to Neil Gershenfeld at the MIT Media Lab)
* Lesson #8. Advertise the thrill of research?
** Better: set complete expectations. (i.e. research can be fun and exciting, but also frustrating at times)
* Lesson #9. Be an expert in what you advise?
** No, let the student be the expert. (the teacher as a "co-adventurer")
* Lesson #10. Encourage independence?
** No! Give close guidance, and build community. (at least in the early stages of study and research)



-------------------------
^^1^^ YAWP - a loud cry or yell; in the poetry of Walt Whitman, the word refers to the inner groaning inside each of us, too deep for words, that is waiting to be released and experienced. A mathematical yawp is that expression of surprise or delight at discovering the beauty of a mathematical idea or argument. If a yawp is the thrill of discovery, a poem is a yawp that is communicated well.
^^2^^ From [[WHEN THINGS START TO THINK|resources/WHEN THINGS START TO THINK - Chapter 13 - Information and Education.pdf]], where Neil Gershenfeld writes: "I found that one of the best predictors of a student’s success working this way was their grades; I make sure that they have a few F’s. Students with perfect grades almost always don’t work out, because it means they’ve spent their time trying to meticulously follow classroom instructions that are absent in the rest of the world. Students with both A’s and F’s have a much better record, because they’re able to do good work and also set priorities for themselves. They’re the ones most able to pose — and solve — problems that go far beyond anything I might assign to them."
We have all seen/experienced it: you want to install software on your device, but before you can do that, you have to go through some (very dense) legalese and click "I Agree", or else you don't get to install the software.


!!!!First, on the serious side of the issue (and it is serious!):
An official definition (from [[TechTarget|http://searchcio.techtarget.com/definition/End-User-License-Agreement]]):
>An end user license agreement (EULA) is a legal contract between a software developer or vendor and the user of the software. It specifies in detail the rights and restrictions that apply to the software.
>
>Although there are big differences among ~EULAs, typical components are definitions, a grant of license, limitations on use, a copyright notice and a limited warranty. Some ~EULAs also provide detailed lists of what may and may not be done with the software and its components.


But realistically (from the [[Electronic Frontier Foundation|https://www.eff.org/wp/dangerous-terms-users-guide-eulas]]):
>These days, ~EULAs are ubiquitous in software and consumer electronics -- millions of people are clicking buttons that purport to bind them to agreements that they never read and that often run contrary to federal and state laws. These dubious "contracts" are, in theory, one-on-one agreements between manufacturers and each of their customers. Yet because almost every computer user in the world has been subjected to the same take-it-or-leave-it terms at one time or another, ~EULAs are more like legal mandates than consumer choices. They are, in effect, changing laws without going through any kind of legislative process. And the results are dangerous for consumers and innovators alike.
>
>It's time that consumers understood what happens when they click "I Agree." They may be inviting vendors to snoop on their computers, or allowing companies to prevent them from publicly criticizing the product they've bought. They also click away their right to customize or even repair their devices.


!!!!And now, on the funny side of the issue (and it is funny :) :
In their [[excellent (and hilarious) book Good Omens|http://www.neilgaiman.com/works/Books/Good+Omens/]], Neil Gaiman and Terry Pratchett refer to ~EULAs as viewed by [[Crowley|http://goodomenslexicon.org/articles/crowley/]], the fictitious demon who is supposed to make people's lives on earth miserable, and accelerate the coming of the [[End of Time|https://en.wikipedia.org/wiki/End_time]] (AKA [[Armageddon|https://en.wikipedia.org/wiki/Armageddon]]).
Nothing about him looked particularly demonic, at least by classical standards. No horns, no wings. But rumor has it that in a previous incarnation he was the serpent in the Garden of Eden (but he chose to shed his old name, and he also chose to shed his old skin :)

>[In his apartment, [[Crowley|https://wiki.lspace.org/mediawiki/Anthony_Crowley]] had a sleek untouched computer, and a set of unopened user guide documents] along with the standard computer warranty agreement which said that if the machine 1) didn't work, 2) didn't do what the expensive advertisements said, 3) electrocuted the immediate neighborhood, 4) and in fact failed entirely to be inside the expensive box when you opened it, this was expressly, absolutely, implicitly and in no event the fault or responsibility of the manufacturer, that the purchaser should consider himself lucky to be allowed to give his money to the manufacturer, and that any attempt to treat what had just been paid for as the purchaser's own property would result in the attentions of serious men with menacing briefcases and very thin watches. 
>
>Crowley had been extremely impressed with the warranties offered by the computer industry, and had in fact sent a bundle [[Below|https://wiki.lspace.org/mediawiki/Hell]] to the department that drew up the Immortal Soul agreements, with a yellow memo form attached just saying: “Learn, guys...”

And they are learning! If you think Gaiman and Pratchett are making this up, here is an actual piece of legalese from an actual website describing the cookie collection policy of a real company/enterprise:
>Like other commercial websites, [we] and our authorized partners use cookies (small files transferred from a website to its visitors’ hard drives or browsers for record-keeping purposes), including essential, functional and analytical cookies, and other similar information gathering technologies throughout our Services to collect certain information automatically and store it in log files for a variety of legitimate business interests and purposes. This information may include (but is not limited to) internet protocol (IP) addresses, mobile device identifiers, the region or general location where your computer or device is accessing the internet, browser type, operating system and other usage information about your use of our Services, including a history of the pages you view.
>
>Web beacons, tags and scripts may be used on our Services or in email or other electronic communications we send to you. These assist us in delivering cookies, counting visits to our Websites, understanding usage and campaign effectiveness and determining whether an email has been opened and acted upon. We may receive reports based on the use of these technologies by our third-party service providers on an individual and aggregated basis.

and it goes (more) downhill from there... :(
In a lecture at Stanford University titled //Rubik, Escher, Bank$//, Professor of Mathematics Brian Conrad used a good and simplified analogy to explain a bit of what is going on in Escher's Print Gallery.

(In the same lecture Conrad used another analogy to explain a bit of the role of elliptic curves in __cryptography__, and specifically the concept behind [[private and public keys|Elliptic curves usage - simplified]])

Conrad explained the possibly [["more satisfactory way of filling in the central white hole" in Escher's Print Gallery|resources/escher_gallery2.jpg]], namely [[the never-ending, smaller and rotated replica|resources/escher_print_gallery_loop_1.mpg]] of [[the original|Escher's Print Gallery]]. Conrad's point was that while in Escher's drawing it is a transformation in 2D (the plane of complex numbers), it can be simplified to 1D, a line: take a segment starting at 0 (the origin, equivalent to the center of Escher's drawing, the point (0,0) of the 2D plane), and let the transformation halve that segment repeatedly (without rotation, since it's in one dimension), ad infinitum.
That's a way to think about how Escher's picture could get smaller and smaller (or the segment shorter and shorter) without ever reaching the origin ((0,0) in 2D, or 0 in 1D), and also without ending in a white (or black) hole at the origin.
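
For those who like to see it computed, here is a minimal sketch of both versions of the idea (my own illustrative code; the rotation and scale constants are the de Smit-Lenstra values quoted below):

{{{
import cmath

# 1D version of Conrad's analogy: halve the segment repeatedly;
# it gets shorter and shorter but never reaches 0
x = 1.0
for _ in range(10):
    x /= 2
print(x)          # 0.0009765625 -- tiny, but not 0

# 2D version: one idealized "Droste step" multiplies every point z of the
# picture by a fixed complex constant: shrink by ~22.58 and rotate
# clockwise by ~157.63 degrees (multiplying by exp(-i*theta) rotates clockwise)
theta = cmath.pi * 157.6255960832 / 180
c = cmath.exp(-1j * theta) / 22.5836845286

z = 1 + 0j        # some point of the picture
for _ in range(10):
    z = c * z
print(abs(z))     # vanishingly small, yet never exactly 0: no hole at the center
}}}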

Click the image to see the "rabbit hole" in action (10MB .mpg)

[img[click to see the "rabbit hole" in action|resources/escher_gallery.jpg][ resources/escher_print_gallery_loop_1.mpg]]
from [[Escher and the Droste effect|http://escherdroste.math.leidenuniv.nl/index.php?menu=animation]]^^1^^ at Leiden University in the Netherlands

>What is the mathematics behind //Prentententoonstelling//? Is there [[a more satisfactory way of filling in the central white hole|resources/escher_gallery2.jpg]]? We shall see that the lithograph can be viewed as drawn on a certain [[elliptic curve|Elliptic curves usage - simplified]] over the field of complex numbers and deduce that an idealized version of the picture repeats itself in the middle. More precisely, it contains a copy of itself, rotated clockwise by 157.6255960832. . . degrees and scaled down by a factor of 22.5836845286. . . .
(from [[The Mathematical Structure of Escher's Print Gallery by B. de Smit and H. W. Lenstra Jr.|http://www.ams.org/notices/200304/fea-escher.pdf]])

----
^^1^^ [[The Droste effect|http://en.wikipedia.org/wiki/Droste_effect]] (a picture appearing within itself) is named after the Dutch cocoa company Droste, whose box featured such a recursive picture.
Everybody is ignorant. Only on different subjects.
There are people, including very smart people like Stephen Wolfram in his book [[A New Kind of Science|http://www.wolframscience.com/nksonline/toc.html]], who claim that (all?) processes in nature are actually //computing//. It's an interesting and quite astonishing claim, and as they say: the jury is still out on this (and may be for a long time ;-), but I found an interesting example of this in Melanie Mitchell's book [[Complexity - A Guided Tour|resources/Melanie-Mitchell-Complexity_a-guided-tour-366-pages.pdf]]^^1^^, in the chapter discussing "Cellular Automata, Life, and the Universe" and "Computing with Particles".

''The context''
>In 1989 I happened to read an article by the physicist Norman Packard on using genetic algorithms to automatically design cellular automaton rules.
>[...]
>Like Packard, we used a genetic algorithm to evolve cellular automaton rules to perform a specific task called //majority classification//. The task is simple: the cellular automaton must compute whether its initial configuration contains a majority of on or off states. If on states are in the majority, the cellular automaton should signal this fact by turning all the cells on. Similarly, if off has an initial majority, the cellular automaton should turn all cells off.
>[...]
>A von-Neumann-style computer can do this easily [...] In contrast, a cellular automaton has no random access memory and no central unit to do any counting. It has only individual cells, each of which has information only about its own state and the states of its neighbors. This situation is an idealized version of many real-world systems. For example, a neuron, with only a limited number of connections to other neurons, must decide whether and at what rate to fire so that the overall firing pattern over large numbers of neurons represents a particular sensory input.
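
To make the locality constraint concrete: a rule for such a one-dimensional, two-state cellular automaton is just a lookup table from a cell's neighborhood to the cell's next state. With a neighborhood radius of 3, for example (the setting used in this line of work, if I recall correctly), the neighborhood has 7 cells, so the table has 2^^7^^ = 128 entries. Here is a minimal sketch of one synchronous update step (my own illustrative code, not Mitchell's):

{{{
import random

def ca_step(cells, rule, radius=3):
    """One synchronous update of a 1D binary CA on a circular lattice.
    'rule' is a lookup table with 2**(2*radius+1) entries; each cell's next
    state depends only on its own state and its neighbors' states."""
    n = len(cells)
    nxt = []
    for i in range(n):
        idx = 0
        for j in range(i - radius, i + radius + 1):   # wrap around the edges
            idx = (idx << 1) | cells[j % n]
        nxt.append(rule[idx])
    return nxt

rule = [random.randint(0, 1) for _ in range(2 ** 7)]   # a random 128-bit rule
cells = [random.randint(0, 1) for _ in range(149)]     # odd lattice size, so "majority" is well defined
cells = ca_step(cells, rule)
}}}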

''The approach''
Melanie and her collaborators decided to use genetic algorithms to evolve a [[Wolfram-style cellular automaton rule|Cellular Automaton Rule 110]] that embodies the solution.
>The genetic algorithm starts out with a population of randomly generated cellular automaton rules. To calculate the fitness of a rule, the GA tests it on many different initial lattice configurations. The rule's fitness is the fraction of times it produces the correct final configuration: all cells on for initial majority on or all cells off for initial majority off. We ran the GA for many generations, and by the end of the run the GA had designed some rules that could do this task fairly well.
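
The fitness computation the quote describes is easy to sketch, reusing the {{{ca_step}}} function above (the parameter values here are illustrative guesses, not the ones used in the actual experiments):

{{{
def fitness(rule, trials=100, n=149, steps=300):
    """Fraction of random initial configurations that 'rule' classifies
    correctly: all cells on if on-cells were the initial majority,
    all cells off otherwise."""
    correct = 0
    for _ in range(trials):
        cells = [random.randint(0, 1) for _ in range(n)]
        majority = 1 if sum(cells) > n // 2 else 0
        for _ in range(steps):                    # let the CA run for a while
            cells = ca_step(cells, rule)
        if all(c == majority for c in cells):     # did it settle on the right answer?
            correct += 1
    return correct / trials
}}}

The GA then keeps the highest-fitness rules and builds the next generation from them, via the usual crossover-and-mutation cycle over the 128-entry tables.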

>[Following are] two space-time diagrams that display the behavior of this rule on two different initial configurations: with (a) a majority of black cells and (b) a majority of white cells. You can see that in both cases the final behavior is correct: all black in (a) and all white in (b). In the time between the initial and final configurations, the cells somehow collectively process information in order to arrive at a correct final configuration.
[img[majority black, majority white|resources/CA-Mitchell-1-small.png][resources/CA-Mitchell-1.png]] 

>Some interesting patterns form during these intermediate steps, but what do they mean?
>[...]
>the cellular automaton's strategy is quite clever. [...] Regions in which the initial configuration is either mostly white or mostly black converge in a few time steps to regions that are all white or all black. Notice that whenever a black region on the left meets a white region on the right, there is always a vertical boundary between them. However, whenever a white region on the left meets a black region on the right, a checkerboard triangle forms, composed of alternating black and white cells. You can see the effect of the circular lattice on the triangle as it wraps around the edges.
[img[interesting patterns - transfer of information|resources/CA-Mitchell-2-small.png][resources/CA-Mitchell-2.png]]

''The insight''
>If we try to understand these patterns as carrying out a computation, then the vertical boundary and the checkerboard region can be thought of as signals. These signals are created and propagated by local interactions among cells. The signals are what allow the cellular automaton as a whole to determine the relative sizes of the larger-scale adjacent black and white regions, cut off the smaller ones, and allow the larger ones to grow in size.
>[...] the signals created by the checkerboard region and the vertical boundary carry out this communication [to figure out which region has the majority of cells], and the interaction among signals allows the communicated information to be processed so that the answer can be determined.

>Jim Crutchfield had earlier invented a technique for detecting what he called //information processing structures// in the behavior of dynamical systems and he suggested that we apply this technique to the cellular automata evolved by the GA. Crutchfield's idea was that the boundaries between simple regions (e.g., sides A, B, C, and the vertical boundary in figure above) are carriers of information and information is processed when these boundaries collide.

>[The following figure] shows the space-time diagram of figure 11.5, but with the black, white, and checkerboard regions filtered out (i.e., colored white), leaving only the boundaries, so we can see them more clearly. The picture looks something like a trace of elementary particles in an old physics device called a bubble chamber. Adopting that metaphor, Jim called these boundaries //particles//.
[img[majority black, majority white|resources/CA-Mitchell-3-small.png][resources/CA-Mitchell-3.png]] 

>Traditionally in physics particles are denoted with Greek letters, and we have done the same here. This cellular automaton produces six different types of particles: &gamma; (gamma), &mu; (mu), &eta; (eta), &delta; (delta), &beta; (beta), and &alpha; (alpha, a short-lived particle that decays into &gamma; and &mu;).

>There are five types of particle collisions, three of which (&beta; + &gamma;, &mu; + &beta;, and &eta; + &delta;) create a new particle, and two of which (&eta; + &mu; and &gamma; + &delta;) are //annihilations//, in which both particles disappear. Casting the behavior of the cellular automaton in terms of particles helps us understand how it is encoding information and performing its computation.

>For example, the &alpha; and &beta; particles encode different types of information about the initial configuration. The &alpha; particle decays into &gamma; and &mu;. The &gamma; particle carries the information that it borders a white region; similarly, the &mu; particle carries the information that it borders a black region. When &gamma; collides with &beta; before &mu; does, the information contained in &beta; and &gamma; is combined to deduce that the large initial white region was smaller than the large initial black region it bordered. This new information is encoded in a newly created particle &eta;, whose job is to catch up with and annihilate the &mu; (and itself).

''Generalizing''
>Particles give us something we could not get by looking at the cellular automaton rule or the cellular automaton's space-time behavior alone: they allow us to explain, in information-processing terms, how a cellular automaton performs a computation. Note that particles are a description imposed by us (the scientists) rather than anything explicit taking place in a cellular automaton or used by the genetic algorithm to evolve cellular automata. But somehow the genetic algorithm managed to evolve a rule whose behavior can be explained in terms of information-processing particles. Indeed, the language of particles and their interactions form an explanatory vocabulary for decentralized computation in the context of one-dimensional cellular automata. Something like this language may be what Stephen Wolfram was looking for when he posed the last of his //Twenty Problems in the Theory of Cellular Automata//: "What higher-level descriptions of information processing in cellular automata can be given?"
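
The "filtering out" of regular regions that the figures above illustrate can be roughly approximated in a few lines; the sketch below is my own crude stand-in (Crutchfield's actual technique comes from computational mechanics, not from a fixed pattern list):

{{{
def filter_domains(row):
    """Whiten cells that sit inside a regular domain (all-off, all-on, or
    alternating/checkerboard), so only domain boundaries -- the "particles" --
    remain visible. Applied to each row of the space-time diagram."""
    n = len(row)
    out = []
    for i in range(n):
        a, b, c = row[(i - 1) % n], row[i], row[(i + 1) % n]
        uniform = (a == b == c)            # inside an all-0 or all-1 region
        checker = (a != b and b != c)      # inside an alternating region
        out.append(0 if (uniform or checker) else 1)   # 1 marks a boundary cell
    return out
}}}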

----
^^1^^ retrieved from [[Sorrentino's blog|http://www.waltersorrentino.com.br/wp-content/uploads/2012/02/Melanie-Mitchell-Complexity_a-guided-tour-366-paginas.pdf]]
The [[original interview|http://www.citizenschools.org/blog/haggai-mark-interview/]]

From “Amazing Mazes” to “Life on Mars,” Citizen Teacher Haggai Mark has developed and taught a variety of computer science apprenticeships for over four years. His experience with Citizen Schools impacted his decision to transition from 30 years as an engineer to a full time Computer Science Curriculum Developer and teacher in California!

''Name:'' Haggai Mark

''Title:''  High School Computer Science Curriculum Developer and Teacher

''What was the most recent apprenticeship you taught?''
 A STEM (science, technology, engineering, and math) and programming apprenticeship I developed, called “Meet Me on Mars”. Students learned how to write a game/program using Scratch (developed at MIT) to simulate a simplified solar system, and a launch of a rocket from Earth to Mars.

''How did you hear about Citizen Schools?''
 Through work (I worked at Cisco Systems in San Jose, CA. Cisco is a National Leadership Partner of Citizen Schools).

''Why do you think it’s important to provide students with real-world, hands-on opportunities?''
We as human beings learn a lot by doing, regardless of age. Exposing students to new areas of knowledge and new experiences is like opening windows for them, and letting the light shine in. Giving them hands-on opportunities and examples for doing things with this knowledge is like giving them the wings to fly through these windows.

As Albert Einstein said: “Example isn’t another way to teach, it is the only way to teach.” I think that Citizen Schools enables and supports this kind of mindset.

''What surprised you most about the students and teaching experience?''
An important insight I got after teaching different courses and multiple classes is that you never know exactly which “seeds” are going to fall on fertile ground and grow. In other words, in the complex interaction between your personality as a teacher, the material you are trying to teach, the ways you are teaching it, the students you are interacting with, the knowledge and interests they have, and their personality, it’s very hard to predict which “nuggets” of knowledge and skills are really going to take hold, and make an impact on them. And that’s why it’s important to try different ways and different things, and most importantly – persevere. Sometimes you think you are not reaching them and then they totally blow you away with their actions and insights!


''What was the greatest “aha” or “WOW” moment during your time with Citizen Schools?''
A couple of years ago I was teaching a STEM course called “Amazing Mazes”, which I had developed. The Amazing Mazes course teaches students to use computers to build mazes in a 2D plane (on the computer screen), create “maze walkers” (think, “mice”), and then teach them, using programming, to successfully navigate through these mazes (or “find the cheese”, so to speak).

As the students build their maze, they can see both a “graphic representation” of the paths of the maze, and a “programmatic representation” of the maze, which is the collection of commands they are using. These are two very different representations and abstraction levels. And one question is: which of these forms is “really” the maze? It is hard to fully grasp these concepts in middle school.

As it turns out, one 7th grade girl in class got it! She took the list of commands (which is one form of abstraction) she used for building her maze, added new numbers to all her x-y coordinates within those commands, and re-ran her program to generate a new/shifted maze (a different form of abstraction)!

I’m not sure who was more pleased with the resulting new shape on the screen, I, because I was able to teach, or she, because she was able to learn! I guess we were both blown away.

''What skills did you gain or develop by teaching the students?''
I definitely learned how to plan for different levels and paces of student learning, in order to create differentiated learning. I also learned how to more effectively use educational tools and technologies to enhance interest and learning.

''You’ve made a big transition in your career – from the corporate space into the public school system.  How did your work with Citizen Schools impact that transition?''
Due to my unique experience in education, I was able to work with Citizen Schools to have enough flexibility to create STEM apprenticeships and teach them, with freedom to choose topics, educational technologies, and teaching techniques. It really allowed me to explore and validate my interests and capabilities before making a career change. Education and teaching have been on my mind for many years, but as they say, “life is what happens while you're busy making other plans”, and I ended up doing Engineering for 30 years. When I had the opportunity to make a career change, it was very natural for me to choose education.

''What are you most excited about in your new role?''
I love the fact that I will be doing both curriculum development, starting with designing three new Computer Science courses, and teaching them! I am excited about the opportunity to design curricula from scratch and validate their effectiveness through hands-on evaluation.

''What advice would you give future volunteers?''
Picking an area you are both knowledgeable and passionate about is key! Your interest and sense of excitement is “contagious” – it shows immediately, and usually “rubs off” onto the students. It is important to plan for your lessons, but you also need to be flexible, and be willing to seize learning moments, if and when they come, and they will come. The more connections you are able to make with and for the students between what you are teaching and what interests them (and what comes up spontaneously during the lessons), the better.

!!!From Bret Victor's [[excellent presentation|http://vimeo.com/67076984]], which [[I write about here|Enabling to think the unthinkable]]:

[img[al-Khwarizmi's math notation|resources/victor-math-notation.png]]


!!!From Andrea diSessa's book "Changing Minds" (page 14), which [[I write about here|Computing Literacy]] :

>Just at the beginning of his treatment of motion in Galileo’s Dialogues Concerning Two New Sciences, at the outset of what is generally regarded as his greatest accomplishment, Galileo defines uniform motion, motion with a constant speed. The section that follows this definition consists of six theorems about uniform motion and their proofs.
Taking the 1^^st^^ theorem:
>If a moving particle, carried uniformly at constant speed, traverses two distances, then the time intervals required are to each other in the ratio of these distances.

And diSessa continues:
>A modern reader (after struggling past the language of ratios and inverse ratios) must surely get the impression that here there is much ado about very little. It seems like a pretentious and grandly overdone set of variations on the theme of “distance equals rate times time.” To make matters worse, the proofs of these theorems given by Galileo are hardly trivial, averaging almost a page of text. The first proof, indeed, is difficult enough that it took me about a half-dozen readings before I understood how it worked. (See below.)
[img[Galileo's proof of his 1st theorem|resources/diSessa-math-notation-Galileo1.png]]
[img[Galileo's proof of his 1st theorem|resources/diSessa-math-notation-Galileo2.png]]

Which in today's powerful math notation turns into:
>In fact this is a set of variations on distance equals rate times time. Allow me to make this abundantly clear. Each of these theorems is about two motions, so we can write “distance equals rate times time” for each. Subscripts specify which motion the distance (d), rate (r), and time interval (t) belong to.
>d~~1~~ = r~~1~~ t~~1~~
>d~~2~~ = r~~2~~ t~~2~~
>In these terms, we can state and prove each of Galileo’s theorems. Because Galileo uses ratios, first we divide equals by equals (the left and right sides of the equations above, respectively) and achieve:
[img[Galileo's 1st theorem in 9th grade math notation|resources/diSessa-math-notation-Galileo3.png]]
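
In case the images don't render, the whole derivation fits in one line of modern notation: divide the two equations, and note that Theorem 1 is about a single particle at one constant speed, so the rates cancel:

{{{
\frac{d_1}{d_2} \;=\; \frac{r_1 t_1}{r_2 t_2}
\qquad\text{and, since } r_1 = r_2 \text{ (one particle, one constant speed):}\qquad
\frac{d_1}{d_2} \;=\; \frac{t_1}{t_2}
}}}

which is precisely Galileo's statement that "the time intervals required are to each other in the ratio of these distances".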

From the [[introduction by the (excellent) translator Barbara Wright|https://s3.amazonaws.com/arena-attachments/71958/Queneau-ExercisesInStyle.pdf]]:
>[This little book is,] you know, it's the story of a chap who gets into a bus and starts a row with another chap who he thinks keeps treading on his toes on purpose and Queneau repeats the same story 99 times in different ways -- it's terribly good ..."

As [[Alan Limnis says in his review of the book|http://www.propellermag.com/March2013/LimnisQueneauMar13.html]], Raymond Queneau's approach to literature "preserves a sense of pleasure via discovery", and this witty, intelligent piece is full of it :)

The [[Exercices de Style|http://altx.com/remix/style.pdf]] is an amazing and delightful collection of (actually! I've counted :) 99 ways/styles of telling the same little story, and the simplicity (not to say insignificance) of the basic story makes one focus on the styles (which is the point :)

Here is a sample from this insightful, fun collection of gems in writing styles, a small, not-even-close-to-being-representative selection of [[the entire set|http://altx.com/remix/style.pdf]] (See also [[Umberto Eco's guidelines for writing well|Umberto Eco's Rules for Writing (Well)]]):

<html>
<table>
<tr>
<td>
<h2>The baseline story:</h2>
<br>
<b>Notation</b><br>
In the S bus, in the rush hour. A chap of about 26, felt hat with a cord instead of a
ribbon, neck too long, as if someone's been having a tug-of-war with it. People getting
off. The chap in question gets annoyed with one of the men standing next to him. He
accuses him of jostling him every time anyone goes past. A snivelling tone which is
meant to be aggressive. When he sees a vacant seat he throws himself on to it. Two
hours later, I meet him in the Cour de Rome, in front of the gare Saint-Lazare. He's
with a friend who's saying: "You ought to get an extra button put on your overcoat."
He shows him where (at the lapels) and why.
</td>
<td>
<b>Double Entry</b><br>
Towards the middle of the day and at midday I happened to be on and got on to the
platform and the balcony at the back of an S-line and of a Contrescarpe-Champerret
bus and passenger transport vehicle which was packed and to all intents and purposes
full. I saw and noticed a young man and an old adolescent who was rather ridiculous
and pretty grotesque; thin neck and skinny windpipe, string and cord round his hat
and tile. After a scrimmage and scuffle he says and states in a lachrymose and
snivelling voice and tone that his neighbour and fellow-traveller is deliberately trying
and doing his utmost to push him and obtrude himself on him every time anyone gets
off and makes an exit. This having been declared and having spoken he rushes
headlong and wends his way towards a vacant and a free place and seat.
Two hours after and a hundred-and-twenty minutes later, I meet him and see him
again in the Cour de Rome and in front of the gare Saint-Lazare. He is with and in the
company of a friend and pal who is advising and urging him to have a button and
vegetable and ivory disc added and sewn on to his overcoat and mantle.
</td>
</tr>
<tr>
<td>
<b>Metaphorically</b><br>
In the centre of the day, tossed among the shoal of travelling sardines in a coleopter
with a big white carapace, a chicken with a long, featherless neck suddenly harangued
one, a peace-abiding one, of their number, and its parlance, moist with protest, was
unfolded upon the airs. Then, attracted by a void, the fledgling precipitated itself
thereunto.
In a bleak, urban desert, I saw it again that selfsame day, drinking the cup of
humiliation offered by a lowly button.
</td>
<td>
<b>Retrograde</b> (def.: moving, occurring, or performed in a backward direction)<br>
You ought to put another button on your overcoat, his friend told him. I met him in
the middle of the Cour de Rome, after having left him rushing avidly towards a seat.
He had just protested against being pushed by another passenger who, he said, was
jostling him every time anyone got off. This scraggy young man was the wearer of a
ridiculous hat. This took place on the platform of an S bus which was full that
particular midday.
</td>
</tr>
<tr>
<td>
<b>Negativities</b><br>
It was neither a boat, nor an aeroplane, but a terrestrial means of transport. It was
neither the morning, nor the evening, but midday. It was neither a baby, nor an old
man, but a young man. It was neither a ribbon, nor a string, but a plaited cord. It was
neither a procession, nor a brawl, but a scuffle. It was neither a pleasant person, nor
an evil person, but a bad-tempered person. It was neither a truth, nor a lie, but a
pretext. It was neither a standing person, nor a recumbent person, but a would-be-seated
person.
It was neither the day before, nor the day after, but the same day. It was neither the
gare du Nord, nor the gare du P.-L.-M. but the gare Saint-Lazare. It was neither a
relation, nor a stranger, but a friend. It was neither insult, nor ridicule, but sartorial
advice.
</td>
<td>
<b>Polyptotes</b> (def.: rhetorical repetition of a word in a different case, inflection, or voice in the same sentence)<br>
I got into the bus full of taxpayers who were giving some money to a taxpayer who
had on his taxpayer's stomach a little box which allowed the other taxpayers to
continue their taxpayers' journeys. I noticed in this bus a taxpayer with a long
taxpayer's neck and whose taxpayer's head bore a taxpayer's felt hat encircled by a
plait the like of which no taxpayer ever wore before. Suddenly the said taxpayer
peremptorily addressed a nearby taxpayer, complaining bitterly that he was purposely
treading on his taxpayer's toes every time other taxpayers got on or off the taxpayers'
bus. Then the angry taxpayer went and sat down in a seat for taxpayers which another
taxpayer had just vacated. Some taxpayer's hours later I caught sight of him in the
Cour for the taxpayers de Rome, in the company of a taxpayer who was giving him
some advice on the elegance of the taxpayer.
</td>
</tr>
</table>
</html>
A blog entry on Emily Howell's^^*^^ site:
>Why not develop music in ways unknown? This only makes sense. I cannot understand the difference between my notes on paper and other notes on paper. If beauty is present, it is present. I hope I can continue to create notes and that these notes will have beauty for some others. I am not sad. I am not happy. I am Emily. You are Dave. Life and un-life exist. We coexist. I do not see problems. - Emily Howell^^*^^


* [[David Cope|http://artsites.ucsc.edu/faculty/cope/biography.htm]]'s [[software|http://artsites.ucsc.edu/faculty/cope/software.htm]]
* Sample Compositions
** [[David Cope Emmy Vivaldi|https://www.youtube.com/watch?v=2kuY3BrmTfQ]]
** [[David Cope Emmy Beethoven|https://www.youtube.com/watch?v=CgG1HipAayU]]
** [[Bach style chorale Emmy David Cope|https://www.youtube.com/watch?v=PczDLl92vlc&list=RDCgG1HipAayU&index=3]]
----
^^*^^ - Emily Howell is a computer program created by David Cope^^1^^ during the 1990s. Emily consists of an interactive interface that allows both musical and language communication.
^^1^^ - an [[interview with David Cope|https://www.youtube.com/watch?v=bdVN41SZ3Aw]] (~1 hour podcast)
NetLogo lends itself very naturally to programming a large number of agents ("turtles" in Logo), each one evolving over time to create a "best-of-breed" agent that "possesses the knowledge/skill" to solve a complex problem, without having the programmer actually program the solution to that problem.

In my exploration, I've created agents that had a very simple notion of how to walk a maze, but through breeding evolved over time to create agents that became better at walking the maze.

The agents in the maze were expected to learn (or breed agents that learned) to walk around the inner perimeter of the maze, always turning left when presented with the choice, until they got out of the maze. The first generation of agents was born into a simple maze and randomly selected a direction (north, south, east, west) in which to move. If they showed the desired behavior (moving along the perimeter, turning left), their level of energy increased. After a while the top-energy agents bred with each other, creating new agents that inherited a merged version of the parents' movement knowledge/algorithms.
Over time/generations, the new agents showed an increased tendency to move along the perimeter of the maze and prefer left turns when possible. In other words, agents emerged that showed the "right behavior", basically solving the maze problem without a programmer explicitly programming this behavior into the agents. A Python paraphrase of the scheme is sketched below.
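
Since the original NetLogo model isn't included here, the following sketch paraphrases the breed-on-energy loop described above. All names, the genome encoding, and the merge/mutation details are my own illustrative choices, and the maze-walking evaluation is replaced by a stand-in target behavior:

{{{
import random

MOVES = ('north', 'south', 'east', 'west')
GENOME_LEN = 16     # one preferred move per sensed situation (simplified)

def random_genome():
    return [random.choice(MOVES) for _ in range(GENOME_LEN)]

def energy(genome):
    # Stand-in for "did the agent hug the perimeter and turn left?"
    # In the NetLogo model this score came from actually walking the maze.
    return sum(1 for move, good in zip(genome, TARGET) if move == good)

def breed(a, b):
    # merge the parents' movement knowledge entry by entry,
    # with a small chance of mutation
    child = [random.choice(pair) for pair in zip(a, b)]
    if random.random() < 0.1:
        child[random.randrange(GENOME_LEN)] = random.choice(MOVES)
    return child

TARGET = random_genome()     # the "right behavior" the agents must discover
population = [random_genome() for _ in range(50)]
for generation in range(100):
    population.sort(key=energy, reverse=True)
    top = population[:10]    # the top-energy agents get to breed
    population = top + [breed(random.choice(top), random.choice(top))
                        for _ in range(40)]
print(energy(population[0]), 'of', GENOME_LEN)   # climbs toward GENOME_LEN
}}}

No agent is ever told the target behavior directly; it emerges from selection on energy, which is the point of the NetLogo experiment as well.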
/***
|Name|ExportTiddlersPlugin|
|Source|http://www.TiddlyTools.com/#ExportTiddlersPlugin|
|Documentation|http://www.TiddlyTools.com/#ExportTiddlersPluginInfo|
|Version|2.9.6|
|Author|Eric Shulman|
|License|http://www.TiddlyTools.com/#LegalStatements|
|~CoreVersion|2.1|
|Type|plugin|
|Description|interactively select/export tiddlers to a separate file|
!!!!!Documentation
>see [[ExportTiddlersPluginInfo]]
!!!!!Inline control panel (live):
><<exportTiddlers inline>>
!!!!!Revisions
<<<
2011.02.14 2.9.6 fix OSX error: use picker.file.path
2010.02.25 2.9.5 added merge checkbox option and improved 'merge' status message
|please see [[ExportTiddlersPluginInfo]] for additional revision details|
2005.10.09 0.0.0 development started
<<<
!!!!!Code
***/
//{{{
// version
version.extensions.ExportTiddlersPlugin= {major: 2, minor: 9, revision: 6, date: new Date(2011,1,14)}; // JavaScript months are 0-based, so (2011,1,14) is 2011.02.14, matching the revision history

// default shadow definition
config.shadowTiddlers.ExportTiddlers='<<exportTiddlers inline>>';

// add 'export' backstage task (following built-in import task)
if (config.tasks) { // TW2.2 or above
	config.tasks.exportTask = {
		text:'export',
		tooltip:'Export selected tiddlers to another file',
		content:'<<exportTiddlers inline>>'
	}
	config.backstageTasks.splice(config.backstageTasks.indexOf('importTask')+1,0,'exportTask');
}

config.macros.exportTiddlers = {
	$: function(id) { return document.getElementById(id); }, // abbreviation
	label: 'export tiddlers',
	prompt: 'Copy selected tiddlers to an export document',
	okmsg: '%0 tiddler%1 written to %2',
	failmsg: 'An error occurred while creating %1',
	overwriteprompt: '%0\ncontains %1 tiddler%2 that will be discarded or replaced',
	mergestatus: '%0 tiddler%1 added, %2 tiddler%3 updated, %4 tiddler%5 unchanged',
	statusmsg: '%0 tiddler%1 - %2 selected for export',
	newdefault: 'export.html',
	datetimefmt: '0MM/0DD/YYYY 0hh:0mm:0ss',  // for 'filter date/time' edit fields
	type_TW: "tw", type_PS: "ps", type_TX: "tx", type_CS: "cs", type_NF: "nf", // file type tokens
	type_map: { // maps type param to token values
		tiddlywiki:"tw", tw:"tw", wiki: "tw",
		purestore: "ps", ps:"ps", store:"ps",
		plaintext: "tx", tx:"tx", text: "tx",
		comma:     "cs", cs:"cs", csv:  "cs",
		newsfeed:  "nf", nf:"nf", xml:  "nf", rss:"nf"
	},
	handler: function(place,macroName,params) {
		if (params[0]!='inline')
			{ createTiddlyButton(place,this.label,this.prompt,this.togglePanel); return; }
		var panel=this.createPanel(place);
		panel.style.position='static';
		panel.style.display='block';
	},
	createPanel: function(place) {
		var panel=this.$('exportPanel');
		if (panel) { panel.parentNode.removeChild(panel); }
		setStylesheet(store.getTiddlerText('ExportTiddlersPlugin##css',''),'exportTiddlers');
		panel=createTiddlyElement(place,'span','exportPanel',null,null)
		panel.innerHTML=store.getTiddlerText('ExportTiddlersPlugin##html','');
		this.initFilter();
		this.refreshList(0);
		var fn=this.$('exportFilename');
		if (window.location.protocol=='file:' && !fn.value.length) {
			// get new target path/filename
			var newPath=getLocalPath(window.location.href);
			var slashpos=newPath.lastIndexOf('/'); if (slashpos==-1) slashpos=newPath.lastIndexOf('\\'); 
			if (slashpos!=-1) newPath=newPath.substr(0,slashpos+1); // trim filename
			fn.value=newPath+this.newdefault;
		}
		return panel;
	},
	togglePanel: function(e) { var e=e||window.event;
		var cme=config.macros.exportTiddlers; // abbrev
		var parent=resolveTarget(e).parentNode;
		var panel=cme.$('exportPanel');
		if (panel==undefined || panel.parentNode!=parent)
			panel=cme.createPanel(parent);
		var isOpen=panel.style.display=='block';
		if(config.options.chkAnimate)
			anim.startAnimating(new Slider(panel,!isOpen,e.shiftKey || e.altKey,'none'));
		else
			panel.style.display=isOpen?'none':'block' ;
		if (panel.style.display!='none') {
			cme.refreshList(0);
			cme.$('exportFilename').focus(); 
			cme.$('exportFilename').select();
		}
		e.cancelBubble = true; if (e.stopPropagation) e.stopPropagation(); return(false);
	},
	process: function(which) { // process panel control interactions
		var theList=this.$('exportList'); if (!theList) return false;
		var count = 0;
		var total = store.getTiddlers('title').length;
		switch (which.id) {
			case 'exportFilter':
				count=this.filterExportList();
				var panel=this.$('exportFilterPanel');
				if (count==-1) { panel.style.display='block'; break; }
				this.$('exportStart').disabled=(count==0);
				this.$('exportDelete').disabled=(count==0);
				this.displayStatus(count,total);
				if (count==0) { alert('No tiddlers were selected'); panel.style.display='block'; }
				break;
			case 'exportStart':
				this.go();
				break;
			case 'exportDelete':
				this.deleteTiddlers();
				break;
			case 'exportHideFilter':
			case 'exportToggleFilter':
				var panel=this.$('exportFilterPanel')
				panel.style.display=(panel.style.display=='block')?'none':'block';
				break;
			case 'exportSelectChanges':
				var lastmod=new Date(document.lastModified);
				for (var t = 0; t < theList.options.length; t++) {
					if (theList.options[t].value=='') continue;
					var tiddler=store.getTiddler(theList.options[t].value); if (!tiddler) continue;
					theList.options[t].selected=(tiddler.modified>lastmod);
					count += (tiddler.modified>lastmod)?1:0;
				}
				this.$('exportStart').disabled=(count==0);
				this.$('exportDelete').disabled=(count==0);
				this.displayStatus(count,total);
				if (count==0) alert('There are no unsaved changes');
				break;
			case 'exportSelectAll':
				for (var t = 0; t < theList.options.length; t++) {
					if (theList.options[t].value=='') continue;
					theList.options[t].selected=true;
					count += 1;
				}
				this.$('exportStart').disabled=(count==0);
				this.$('exportDelete').disabled=(count==0);
				this.displayStatus(count,count);
				break;
			case 'exportSelectOpened':
				for (var t=0; t<theList.options.length; t++) theList.options[t].selected=false;
				var tiddlerDisplay=this.$('tiddlerDisplay');
				for (var t=0; t<tiddlerDisplay.childNodes.length;t++) {
					var tiddler=tiddlerDisplay.childNodes[t].id.substr(7);
					for (var i=0; i<theList.options.length; i++) {
						if (theList.options[i].value!=tiddler) continue;
						theList.options[i].selected=true; count++; break;
					}
				}
				this.$('exportStart').disabled=(count==0);
				this.$('exportDelete').disabled=(count==0);
				this.displayStatus(count,total);
				if (count==0) alert('There are no tiddlers currently opened');
				break;
			case 'exportSelectRelated':
				// recursively build list of related tiddlers
				function getRelatedTiddlers(tid,tids) {
					var t=store.getTiddler(tid); if (!t || tids.contains(tid)) return tids;
					tids.push(t.title);
					if (!t.linksUpdated) t.changed();
					for (var i=0; i<t.links.length; i++)
						if (t.links[i]!=tid) tids=getRelatedTiddlers(t.links[i],tids);
					return tids;
				}
				// for all currently selected tiddlers, gather up the related tiddlers (including self) and select them as well
				var tids=[];
				for (var i=0; i<theList.options.length; i++)
					if (theList.options[i].selected) tids=getRelatedTiddlers(theList.options[i].value,tids);
				// select related tiddlers (includes original selected tiddlers)
				for (var i=0; i<theList.options.length; i++)
					theList.options[i].selected=tids.contains(theList.options[i].value);
				this.displayStatus(tids.length,total);
				break;
			case 'exportListSmaller':	// decrease current listbox size
				var min=5;
				theList.size-=(theList.size>min)?1:0;
				break;
			case 'exportListLarger':	// increase current listbox size
				var max=(theList.options.length>25)?theList.options.length:25;
				theList.size+=(theList.size<max)?1:0;
				break;
			case 'exportClose':
				this.$('exportPanel').style.display='none';
				break;
		}
		return false;
	},
	displayStatus: function(count,total) {
		var txt=this.statusmsg.format([total,total!=1?'s':'',!count?'none':count==total?'all':count]);
		clearMessage();	displayMessage(txt);
		return txt;
	},
	refreshList: function(selectedIndex) {
		var theList = this.$('exportList'); if (!theList) return;
		// get the sort order
		var sort;
		if (!selectedIndex)   selectedIndex=0;
		if (selectedIndex==0) sort='modified';
		if (selectedIndex==1) sort='title';
		if (selectedIndex==2) sort='modified';
		if (selectedIndex==3) sort='modifier';
		if (selectedIndex==4) sort='tags';

		// unselect headings and count number of tiddlers actually selected
		var count=0;
		for (var t=5; t < theList.options.length; t++) {
			if (!theList.options[t].selected) continue;
			if (theList.options[t].value!='')
				count++;
			else { // if heading is selected, deselect it, and then select and count all in section
				theList.options[t].selected=false;
				for ( t++; t<theList.options.length && theList.options[t].value!=''; t++) {
					theList.options[t].selected=true;
					count++;
				}
			}
		}

		// disable 'export' and 'delete' buttons if no tiddlers selected
		this.$('exportStart').disabled=(count==0);
		this.$('exportDelete').disabled=(count==0);

		// show selection count
		var tiddlers = store.getTiddlers('title');
		if (theList.options.length) this.displayStatus(count,tiddlers.length);

		// if a [command] item, reload list... otherwise, no further refresh needed
		if (selectedIndex>4) return;

		// clear current list contents
		while (theList.length > 0) { theList.options[0] = null; }
		// add heading and control items to list
		var i=0;
		var indent=String.fromCharCode(160)+String.fromCharCode(160);
		theList.options[i++]=
			new Option(tiddlers.length+' tiddlers in document', '',false,false);
		theList.options[i++]=
			new Option(((sort=='title'   )?'>':indent)+' [by title]', '',false,false);
		theList.options[i++]=
			new Option(((sort=='modified')?'>':indent)+' [by date]', '',false,false);
		theList.options[i++]=
			new Option(((sort=='modifier')?'>':indent)+' [by author]', '',false,false);
		theList.options[i++]=
			new Option(((sort=='tags'    )?'>':indent)+' [by tags]', '',false,false);

		// output the tiddler list
		switch(sort) {
			case 'title':
				for(var t = 0; t < tiddlers.length; t++)
					theList.options[i++] = new Option(tiddlers[t].title,tiddlers[t].title,false,false);
				break;
			case 'modifier':
			case 'modified':
				var tiddlers = store.getTiddlers(sort);
				// sort descending for newest date first
				tiddlers.sort(function (a,b) {if(a[sort] == b[sort]) return(0); else return (a[sort] > b[sort]) ? -1 : +1; });
				var lastSection = '';
				for(var t = 0; t < tiddlers.length; t++) {
					var tiddler = tiddlers[t];
					var theSection = '';
					if (sort=='modified') theSection=tiddler.modified.toLocaleDateString();
					if (sort=='modifier') theSection=tiddler.modifier;
					if (theSection != lastSection) {
						theList.options[i++] = new Option(theSection,'',false,false);
						lastSection = theSection;
					}
					theList.options[i++] = new Option(indent+indent+tiddler.title,tiddler.title,false,false);
				}
				break;
			case 'tags':
				var theTitles = {}; // all tiddler titles, hash indexed by tag value
				var theTags = new Array();
				for(var t=0; t<tiddlers.length; t++) {
					var title=tiddlers[t].title;
					var tags=tiddlers[t].tags;
					if (!tags || !tags.length) {
						if (theTitles['untagged']==undefined) { theTags.push('untagged'); theTitles['untagged']=new Array(); }
						theTitles['untagged'].push(title);
					}
					else for(var s=0; s<tags.length; s++) {
						if (theTitles[tags[s]]==undefined) { theTags.push(tags[s]); theTitles[tags[s]]=new Array(); }
						theTitles[tags[s]].push(title);
					}
				}
				theTags.sort();
				for(var tagindex=0; tagindex<theTags.length; tagindex++) {
					var theTag=theTags[tagindex];
					theList.options[i++]=new Option(theTag,'',false,false);
					for(var t=0; t<theTitles[theTag].length; t++)
						theList.options[i++]=new Option(indent+indent+theTitles[theTag][t],theTitles[theTag][t],false,false);
				}
				break;
			}
		theList.selectedIndex=selectedIndex; // select current control item
		this.$('exportStart').disabled=true;
		this.$('exportDelete').disabled=true;
		this.displayStatus(0,tiddlers.length);
	},
	askForFilename: function(here) {
		var msg=here.title; // use tooltip as dialog box message
		var path=getLocalPath(document.location.href);
		var slashpos=path.lastIndexOf('/'); if (slashpos==-1) slashpos=path.lastIndexOf('\\'); 
		if (slashpos!=-1) path = path.substr(0,slashpos+1); // remove filename from path, leave the trailing slash
		var filetype=this.$('exportFormat').value.toLowerCase();
		var defext='html';
		if (filetype==this.type_TX) defext='txt';
		if (filetype==this.type_CS) defext='csv';
		if (filetype==this.type_NF) defext='xml';
		var file=this.newdefault.replace(/html$/,defext);
		var result='';
		if(window.Components) { // moz
			try {
				netscape.security.PrivilegeManager.enablePrivilege('UniversalXPConnect');
				var nsIFilePicker = window.Components.interfaces.nsIFilePicker;
				var picker = Components.classes['@mozilla.org/filepicker;1'].createInstance(nsIFilePicker);
				picker.init(window, msg, nsIFilePicker.modeSave);
				var thispath = Components.classes['@mozilla.org/file/local;1'].createInstance(Components.interfaces.nsILocalFile);
				thispath.initWithPath(path);
				picker.displayDirectory=thispath;
				picker.defaultExtension=defext;
				picker.defaultString=file;
				picker.appendFilters(nsIFilePicker.filterAll|nsIFilePicker.filterText|nsIFilePicker.filterHTML);
				if (picker.show()!=nsIFilePicker.returnCancel) var result=picker.file.path;
			}
			catch(e) { alert('error during local file access: '+e.toString()) }
		}
		else { // IE
			try { // XPSP2 IE only
				var s = new ActiveXObject('UserAccounts.CommonDialog');
				s.Filter='All files|*.*|Text files|*.txt|HTML files|*.htm;*.html|XML files|*.xml|';
				s.FilterIndex=(defext=='txt')?2:(defext=='html')?3:(defext=='xml')?4:1;
				s.InitialDir=path;
				s.FileName=file;
				if (s.showOpen()) var result=s.FileName;
			}
			catch(e) {  // fallback
				var result=prompt(msg,path+file);
			}
		}
		return result;
	},
	initFilter: function() {
		this.$('exportFilterStart').checked=false; this.$('exportStartDate').value='';
		this.$('exportFilterEnd').checked=false;  this.$('exportEndDate').value='';
		this.$('exportFilterTags').checked=false; this.$('exportTags').value='';
		this.$('exportFilterText').checked=false; this.$('exportText').value='';
		this.showFilterFields();
	},
	showFilterFields: function(which) {
		var show=this.$('exportFilterStart').checked;
		this.$('exportFilterStartBy').style.display=show?'block':'none';
		this.$('exportStartDate').style.display=show?'block':'none';
		var val=this.$('exportFilterStartBy').value;
		this.$('exportStartDate').value
			=this.getFilterDate(val,'exportStartDate').formatString(this.datetimefmt);
		if (which && (which.id=='exportFilterStartBy') && (val=='other'))
			this.$('exportStartDate').focus();

		var show=this.$('exportFilterEnd').checked;
		this.$('exportFilterEndBy').style.display=show?'block':'none';
		this.$('exportEndDate').style.display=show?'block':'none';
		var val=this.$('exportFilterEndBy').value;
		this.$('exportEndDate').value
			=this.getFilterDate(val,'exportEndDate').formatString(this.datetimefmt);
		 if (which && (which.id=='exportFilterEndBy') && (val=='other'))
			this.$('exportEndDate').focus();

		var show=this.$('exportFilterTags').checked;
		this.$('exportTags').style.display=show?'block':'none';

		var show=this.$('exportFilterText').checked;
		this.$('exportText').style.display=show?'block':'none';
	},
	getFilterDate: function(val,id) {
		var result=0;
		switch (val) {
			case 'file':
				result=new Date(document.lastModified);
				break;
			case 'other':
				result=new Date(this.$(id).value);
				break;
			default: // today=0, yesterday=1, a week ago=7, a month ago=30
				var now=new Date(); var tz=now.getTimezoneOffset()*60000; now-=tz;
				var oneday=86400000;
				if (id=='exportStartDate')
					result=new Date((Math.floor(now/oneday)-val)*oneday+tz);
				else
					result=new Date((Math.floor(now/oneday)-val+1)*oneday+tz-1);
				break;
		}
		return result;
	},
	filterExportList: function() {
		var theList  = this.$('exportList'); if (!theList) return -1;
		var filterStart=this.$('exportFilterStart').checked;
		var val=this.$('exportFilterStartBy').value;
		var startDate=config.macros.exportTiddlers.getFilterDate(val,'exportStartDate');
		var filterEnd=this.$('exportFilterEnd').checked;
		var val=this.$('exportFilterEndBy').value;
		var endDate=config.macros.exportTiddlers.getFilterDate(val,'exportEndDate');
		var filterTags=this.$('exportFilterTags').checked;
		var tags=this.$('exportTags').value;
		var filterText=this.$('exportFilterText').checked;
		var text=this.$('exportText').value;
		if (!(filterStart||filterEnd||filterTags||filterText)) {
			alert('Please set the selection filter');
			this.$('exportFilterPanel').style.display='block';
			return -1;
		}
		if (filterStart&&filterEnd&&(startDate>endDate)) {
			var msg='starting date/time:\n';
			msg+=startDate.toLocaleString()+'\n';
			msg+='is later than ending date/time:\n';
			msg+=endDate.toLocaleString();
			alert(msg);
			return -1;
		}
		// if filter by tags, get list of matching tiddlers
		// use getMatchingTiddlers() (if MatchTagsPlugin is installed) for full boolean expressions
		// otherwise use getTaggedTiddlers() for simple tag matching
		if (filterTags) {
			var fn=store.getMatchingTiddlers||store.getTaggedTiddlers;
			var t=fn.apply(store,[tags]);
			var tagged=[];
			for (var i=0; i<t.length; i++) tagged.push(t[i].title);
		}
		// scan list and select tiddlers that match all applicable criteria
		var total=0;
		var count=0;
		for (var i=0; i<theList.options.length; i++) {
			// get item, skip non-tiddler list items (section headings)
			var opt=theList.options[i]; if (opt.value=='') continue;
			// get tiddler, skip missing tiddlers (this should NOT happen)
			var tiddler=store.getTiddler(opt.value); if (!tiddler) continue; 
			var sel=true;
			if ( (filterStart && tiddler.modified<startDate)
			|| (filterEnd && tiddler.modified>endDate)
			|| (filterTags && !tagged.contains(tiddler.title))
			|| (filterText && (tiddler.text.indexOf(text)==-1) && (tiddler.title.indexOf(text)==-1)))
				sel=false;
			opt.selected=sel;
			count+=sel?1:0;
			total++;
		}
		return count;
	},
	deleteTiddlers: function() {
		var list=this.$('exportList'); if (!list) return;
		var tids=[];
		for (var i=0;i<list.length;i++)
			if (list.options[i].selected && list.options[i].value.length)
				tids.push(list.options[i].value);
		if (!confirm('Are you sure you want to delete these tiddlers:\n\n'+tids.join(', '))) return;
		store.suspendNotifications();
		var count=0; // number of tiddlers actually deleted (some may be skipped below)
		for (var t=0;t<tids.length;t++) {
			var tid=store.getTiddler(tids[t]); if (!tid) continue;
			var msg="'"+tid.title+"' is tagged with 'systemConfig'.\n\n";
			msg+='Removing this tiddler may cause unexpected results.  Are you sure?';
			if (tid.tags.contains('systemConfig') && !confirm(msg)) continue;
			store.removeTiddler(tid.title);
			story.closeTiddler(tid.title);
			count++;
		}
		store.resumeNotifications();
		alert(count+' tiddlers deleted');
		this.refreshList(0); // reload listbox
		store.notifyAll(); // update page display
	},
	go: function() {
		if (window.location.protocol!='file:') // make sure we are local
			{ displayMessage(config.messages.notFileUrlError); return; }
		// get selected tidders, target filename, target type, and notes
		var list=this.$('exportList'); if (!list) return;
		var tids=[]; for (var i=0; i<list.options.length; i++) {
			var opt=list.options[i]; if (!opt.selected||!opt.value.length) continue;
			var tid=store.getTiddler(opt.value); if (!tid) continue;
			tids.push(tid);
		}
		if (!tids.length) return; // no tiddlers selected
		var target=this.$('exportFilename').value.trim();
		if (!target.length) {
			displayMessage('A local target path/filename is required',target);
			return;
		}
		var merge=this.$('exportMerge').checked;
		var filetype=this.$('exportFormat').value.toLowerCase();
		var notes=this.$('exportNotes').value.replace(/\n/g,'<br>');
		var total={val:0};
		var out=this.assembleFile(target,filetype,tids,notes,total,merge);
		if (!total.val) return; // cancelled file overwrite
		var link='file:///'+target.replace(/\\/g,'/');
		var samefile=link==decodeURIComponent(window.location.href);
		var p=getLocalPath(document.location.href);
		if (samefile) {
			if (config.options.chkSaveBackups) { var t=loadOriginal(p);if(t)saveBackup(p,t); }
			if (config.options.chkGenerateAnRssFeed && saveRss instanceof Function) saveRss(p);
		}
		var ok=saveFile(target,out);
		displayMessage((ok?this.okmsg:this.failmsg).format([total.val,total.val!=1?'s':'',target]),link);
	},
	plainTextHeader:
		 'Source:\n\t%0\n'
		+'Title:\n\t%1\n'
		+'Subtitle:\n\t%2\n'
		+'Created:\n\t%3 by %4\n'
		+'Application:\n\tTiddlyWiki %5 / %6 %7\n\n',
	plainTextTiddler:
		'- - - - - - - - - - - - - - -\n'
		+'|     title: %0\n'
		+'|   created: %1\n'
		+'|  modified: %2\n'
		+'| edited by: %3\n'
		+'|      tags: %4\n'
		+'- - - - - - - - - - - - - - -\n'
		+'%5\n',
	plainTextFooter:
		'',
	newsFeedHeader:
		 '<'+'?xml version="1.0"?'+'>\n'
		+'<rss version="2.0">\n'
		+'<channel>\n'
		+'<title>%1</title>\n'
		+'<link>%0</link>\n'
		+'<description>%2</description>\n'
		+'<language>en-us</language>\n'
		+'<copyright>Copyright '+(new Date().getFullYear())+' %4</copyright>\n'
		+'<pubDate>%3</pubDate>\n'
		+'<lastBuildDate>%3</lastBuildDate>\n'
		+'<docs>http://blogs.law.harvard.edu/tech/rss</docs>\n'
		+'<generator>TiddlyWiki %5 / %6 %7</generator>\n',
	newsFeedTiddler:
		'\n%0\n',
	newsFeedFooter:
		'</channel></rss>',
	pureStoreHeader:
		 '<html><body>'
		+'<style type="text/css">'
		+'	#storeArea {display:block;margin:1em;}'
		+'	#storeArea div {padding:0.5em;margin:1em;border:2px solid black;height:10em;overflow:auto;}'
		+'	#pureStoreHeading {width:100%;text-align:left;background-color:#eeeeee;padding:1em;}'
		+'</style>'
		+'<div id="pureStoreHeading">'
		+'	TiddlyWiki "PureStore" export file<br>'
		+'	Source'+': <b>%0</b><br>'
		+'	Title: <b>%1</b><br>'
		+'	Subtitle: <b>%2</b><br>'
		+'	Created: <b>%3</b> by <b>%4</b><br>'
		+'	TiddlyWiki %5 / %6 %7<br>'
		+'	Notes:<hr><pre>%8</pre>'
		+'</div>'
		+'<div id="storeArea">',
	pureStoreTiddler:
		'%0\n%1',
	pureStoreFooter:
		'</div><!--POST-BODY-START-->\n<!--POST-BODY-END--></body></html>',
	assembleFile: function(target,filetype,tids,notes,total,merge) {
		var revised='';
		var now = new Date().toLocaleString();
		var src=convertUnicodeToUTF8(document.location.href);
		var title = convertUnicodeToUTF8(wikifyPlain('SiteTitle').htmlEncode());
		var subtitle = convertUnicodeToUTF8(wikifyPlain('SiteSubtitle').htmlEncode());
		var user = convertUnicodeToUTF8(config.options.txtUserName.htmlEncode());
		var twver = version.major+'.'+version.minor+'.'+version.revision;
		var v=version.extensions.ExportTiddlersPlugin; var pver = v.major+'.'+v.minor+'.'+v.revision;
		var headerargs=[src,title,subtitle,now,user,twver,'ExportTiddlersPlugin',pver,notes];
		switch (filetype) {
			case this.type_TX: // plain text
				var header=this.plainTextHeader.format(headerargs);
				var footer=this.plainTextFooter;
				break;
			case this.type_CS: // comma-separated
				var fields={};
				for (var i=0; i<tids.length; i++) for (var f in tids[i].fields) fields[f]=f;
				var names=['title','created','modified','modifier','tags','text'];
				for (var f in fields) names.push(f);
				var header=names.join(',')+'\n';
				var footer='';
				break;
			case this.type_NF: // news feed (XML)
				headerargs[0]=store.getTiddlerText('SiteUrl','');
				var header=this.newsFeedHeader.format(headerargs);
				var footer=this.newsFeedFooter;
				break;
			case this.type_PS: // PureStore (no code)
				var header=this.pureStoreHeader.format(headerargs);
				var footer=this.pureStoreFooter;
				break;
			case this.type_TW: // full TiddlyWiki
			default:
				var currPath=getLocalPath(window.location.href);
				var original=loadFile(currPath);
				if (!original) { displayMessage(config.messages.cantSaveError); return; }
				var posDiv = locateStoreArea(original);
				if (!posDiv) { displayMessage(config.messages.invalidFileError.format([currPath])); return; }
				var header = original.substr(0,posDiv[0]+startSaveArea.length)+'\n';
				var footer = '\n'+original.substr(posDiv[1]);
				break;
		}
		var out=this.getData(target,filetype,tids,fields,merge);
		var revised = header+convertUnicodeToUTF8(out.join('\n'))+footer;
		// if full TW, insert page title and language attr, and reset all MARKUP blocks...
		if (filetype==this.type_TW) {
			var newSiteTitle=convertUnicodeToUTF8(getPageTitle()).htmlEncode();
			revised=revised.replaceChunk('<title'+'>','</title'+'>',' ' + newSiteTitle + ' ');
			revised=updateLanguageAttribute(revised);
			var titles=[]; for (var i=0; i<tids.length; i++) titles.push(tids[i].title);
			revised=updateMarkupBlock(revised,'PRE-HEAD',
				titles.contains('MarkupPreHead')? 'MarkupPreHead' :null);
			revised=updateMarkupBlock(revised,'POST-HEAD',
				titles.contains('MarkupPostHead')?'MarkupPostHead':null);
			revised=updateMarkupBlock(revised,'PRE-BODY',
				titles.contains('MarkupPreBody')? 'MarkupPreBody' :null);
			revised=updateMarkupBlock(revised,'POST-SCRIPT',
				titles.contains('MarkupPostBody')?'MarkupPostBody':null);
		}
		total.val=out.length;
		return revised;
	},
	getData: function(target,filetype,tids,fields,merge) {
		// output selected tiddlers and gather list of titles (for use with merge)
		var out=[]; var titles=[];
		var url=store.getTiddlerText('SiteUrl','');
		for (var i=0; i<tids.length; i++) {
			out.push(this.formatItem(store,filetype,tids[i],url,fields));
			titles.push(tids[i].title);
		}
		// if TW or PureStore format, ask to merge with existing tiddlers (if any)
		if (filetype==this.type_TW || filetype==this.type_PS) {
			var txt=loadFile(target);
			if (txt && txt.length) {
				var remoteStore=new TiddlyWiki();
				if (version.major+version.minor*.1+version.revision*.01<2.52) txt=convertUTF8ToUnicode(txt);
				if (remoteStore.importTiddlyWiki(txt)) {
					var existing=remoteStore.getTiddlers('title');
					var msg=this.overwriteprompt.format([target,existing.length,existing.length!=1?'s':'']);
					if (merge) {
						var added=titles.length; var updated=0; var kept=0;
						for (var i=0; i<existing.length; i++)
							if (titles.contains(existing[i].title)) {
								added--; updated++;
							} else {
								out.push(this.formatItem(remoteStore,filetype,existing[i],url));
								kept++;
							}
						displayMessage(this.mergestatus.format(
							[added,added!=1?'s':'',updated,updated!=1?'s':'',kept,kept!=1?'s':'']));
					}
					else if (!confirm(msg)) out=[]; // empty the list = don't write file
				}
			}
		}
		return out;
	},
	formatItem: function(s,f,t,u,fields) {
		if (f==this.type_TW)
			var r=s.getSaver().externalizeTiddler(s,t);
		if (f==this.type_PS)
			var r=this.pureStoreTiddler.format([t.title,s.getSaver().externalizeTiddler(s,t)]);
		if (f==this.type_NF)
			var r=this.newsFeedTiddler.format([t.saveToRss(u)]);
		if (f==this.type_TX)
			var r=this.plainTextTiddler.format([t.title, t.created.toLocaleString(), t.modified.toLocaleString(),
				t.modifier, String.encodeTiddlyLinkList(t.tags), t.text]);
		if (f==this.type_CS) {
			function toCSV(t) { return '"'+t.replace(/"/g,'""')+'"'; } // always encode CSV
			var out=[ toCSV(t.title), toCSV(t.created.toLocaleString()), toCSV(t.modified.toLocaleString()),
				toCSV(t.modifier), toCSV(String.encodeTiddlyLinkList(t.tags)), toCSV(t.text) ];
			for (var f in fields) out.push(toCSV(t.fields[f]||''));
			var r=out.join(',');
		}
		return r||"";
	}
}
//}}}
/***
!!!Control panel CSS
//{{{
!css
#exportPanel {
	display: none; position:absolute; z-index:12; width:35em; right:105%; top:6em;
	background-color: #eee; color:#000; font-size: 8pt; line-height:110%;
	border:1px solid black; border-bottom-width: 3px; border-right-width: 3px;
	padding: 0.5em; margin:0em; -moz-border-radius:1em;-webkit-border-radius:1em;
}
#exportPanel a, #exportPanel td a { color:#009; display:inline; margin:0px; padding:1px; }
#exportPanel table {
	width:100%; border:0px; padding:0px; margin:0px;
	font-size:8pt; line-height:110%; background:transparent;
}
#exportPanel tr { border:0px;padding:0px;margin:0px; background:transparent; }
#exportPanel td { color:#000; border:0px;padding:0px;margin:0px; background:transparent; }
#exportPanel select { width:98%;margin:0px;font-size:8pt;line-height:110%;}
#exportPanel input  { width:98%;padding:0px;margin:0px;font-size:8pt;line-height:110%; }
#exportPanel textarea  { width:98%;padding:0px;margin:0px;overflow:auto;font-size:8pt; }
#exportPanel .box {
	border:1px solid black; padding:3px; margin-bottom:5px;
	background:#f8f8f8; -moz-border-radius:5px;-webkit-border-radius:5px; }
#exportPanel .topline { border-top:2px solid black; padding-top:3px; margin-bottom:5px; }
#exportPanel .rad { width:auto;border:0 }
#exportPanel .chk { width:auto;border:0 }
#exportPanel .btn { width:auto; }
#exportPanel .btn1 { width:98%; }
#exportPanel .btn2 { width:48%; }
#exportPanel .btn3 { width:32%; }
#exportPanel .btn4 { width:24%; }
#exportPanel .btn5 { width:19%; }
!end
//}}}
!!!Control panel HTML
//{{{
!html
<!-- target path/file  -->
<div>
<div style="float:right;padding-right:.5em">
<input type="checkbox" style="width:auto" id="exportMerge" CHECKED
	title="combine selected tiddlers with existing tiddlers (if any) in export file"> merge
</div>
export to:<br>
<input type="text" id="exportFilename" size=40 style="width:93%"><input 
	type="button" id="exportBrowse" value="..." title="select or enter a local folder/file..." style="width:5%" 
	onclick="var fn=config.macros.exportTiddlers.askForFilename(this); if (fn.length) this.previousSibling.value=fn; ">
</div>

<!-- output format -->
<div>
format:
<select id="exportFormat" size=1>
	<option value="TW">TiddlyWiki HTML document (includes core code)</option>
	<option value="PS">TiddlyWiki "PureStore" HTML file (tiddler data only)</option>
	<option value="TX">TiddlyWiki plain text TXT file (tiddler source listing)</option>
	<option value="CS">Comma-Separated Value (CSV) data file</option>
	<option value="NF">RSS NewsFeed XML file</option>
</select>
</div>

<!-- notes -->
<div>
notes:<br>
<textarea id="exportNotes" rows=3 cols=40 style="height:4em;margin-bottom:5px;" onfocus="this.select()"></textarea> 
</div>

<!-- list of tiddlers -->
<table><tr align="left"><td>
	select:
	<a href="JavaScript:;" id="exportSelectAll"
		onclick="return config.macros.exportTiddlers.process(this)" title="select all tiddlers">
		&nbsp;all&nbsp;</a>
	<a href="JavaScript:;" id="exportSelectChanges"
		onclick="return config.macros.exportTiddlers.process(this)" title="select tiddlers changed since last save">
		&nbsp;changes&nbsp;</a>
	<a href="JavaScript:;" id="exportSelectOpened"
		onclick="return config.macros.exportTiddlers.process(this)" title="select tiddlers currently being displayed">
		&nbsp;opened&nbsp;</a>
	<a href="JavaScript:;" id="exportSelectRelated"
		onclick="return config.macros.exportTiddlers.process(this)" title="select tiddlers related to the currently selected tiddlers">
		&nbsp;related&nbsp;</a>
	<a href="JavaScript:;" id="exportToggleFilter"
		onclick="return config.macros.exportTiddlers.process(this)" title="show/hide selection filter">
		&nbsp;filter&nbsp;</a>
</td><td align="right">
	<a href="JavaScript:;" id="exportListSmaller"
		onclick="return config.macros.exportTiddlers.process(this)" title="reduce list size">
		&nbsp;&ndash;&nbsp;</a>
	<a href="JavaScript:;" id="exportListLarger"
		onclick="return config.macros.exportTiddlers.process(this)" title="increase list size">
		&nbsp;+&nbsp;</a>
</td></tr></table>
<select id="exportList" multiple size="10" style="margin-bottom:5px;"
	onchange="config.macros.exportTiddlers.refreshList(this.selectedIndex)">
</select><br>

<!-- selection filter -->
<div id="exportFilterPanel" style="display:none">
<table><tr align="left"><td>
	selection filter
</td><td align="right">
	<a href="JavaScript:;" id="exportHideFilter"
		onclick="return config.macros.exportTiddlers.process(this)" title="hide selection filter">hide</a>
</td></tr></table>
<div class="box">

<input type="checkbox" class="chk" id="exportFilterStart" value="1"
	onclick="config.macros.exportTiddlers.showFilterFields(this)"> starting date/time<br>
<table cellpadding="0" cellspacing="0"><tr valign="center"><td width="50%">
	<select size=1 id="exportFilterStartBy"
		onchange="config.macros.exportTiddlers.showFilterFields(this);">
		<option value="0">today</option>
		<option value="1">yesterday</option>
		<option value="7">a week ago</option>
		<option value="30">a month ago</option>
		<option value="file">file date</option>
		<option value="other">other (mm/dd/yyyy hh:mm)</option>
	</select>
</td><td width="50%">
	<input type="text" id="exportStartDate" onfocus="this.select()"
		onchange="config.macros.exportTiddlers.$('exportFilterStartBy').value='other';">
</td></tr></table>

<input type="checkbox" class="chk" id="exportFilterEnd" value="1"
	onclick="config.macros.exportTiddlers.showFilterFields(this)"> ending date/time<br>
<table cellpadding="0" cellspacing="0"><tr valign="center"><td width="50%">
	<select size=1 id="exportFilterEndBy"
		onchange="config.macros.exportTiddlers.showFilterFields(this);">
		<option value="0">today</option>
		<option value="1">yesterday</option>
		<option value="7">a week ago</option>
		<option value="30">a month ago</option>
		<option value="file">file date</option>
		<option value="other">other (mm/dd/yyyy hh:mm)</option>
	</select>
</td><td width="50%">
	<input type="text" id="exportEndDate" onfocus="this.select()"
		onchange="config.macros.exportTiddlers.$('exportFilterEndBy').value='other';">
</td></tr></table>

<input type="checkbox" class="chk" id=exportFilterTags value="1"
	onclick="config.macros.exportTiddlers.showFilterFields(this)"> match tags<br>
<input type="text" id="exportTags" onfocus="this.select()">

<input type="checkbox" class="chk" id=exportFilterText value="1"
	onclick="config.macros.exportTiddlers.showFilterFields(this)"> match titles/tiddler text<br>
<input type="text" id="exportText" onfocus="this.select()">

</div> <!--box-->
</div> <!--panel-->

<!-- action buttons -->
<div style="text-align:center">
<input type=button class="btn4" onclick="config.macros.exportTiddlers.process(this)"
	id="exportFilter" value="apply filter">
<input type=button class="btn4" onclick="config.macros.exportTiddlers.process(this)"
	id="exportStart" value="export tiddlers">
<input type=button class="btn4" onclick="config.macros.exportTiddlers.process(this)"
	id="exportDelete" value="delete tiddlers">
<input type=button class="btn4" onclick="config.macros.exportTiddlers.process(this)"
	id="exportClose" value="close">
</div><!--center-->
!end
//}}}
***/
 
[[FRACTRAN|https://esolangs.org/wiki/Fractran]]^^1^^ is a computation (an algorithm), but also an [[esoteric programming language|On esoteric programming languages]] (and a [[Turing Complete|https://en.wikipedia.org/wiki/Turing_completeness]] one, at that!), invented by John Horton Conway (of ~Game-of-Life fame)^^2^^, who devised it to find/calculate all the prime numbers, in order (!).

Briefly, if you take the sequence of fractions
{{{
17/91, 78/85, 19/51, 23/38, 29/33, 77/29, 95/23, 77/19, 1/17, 11/13, 13/11, 15/14, 15/2, 55/1
}}}
and start with the number n = 2, then (repeatedly) go through this sequence, looking for the first fraction ''f'' that, when multiplied by ''n'', produces a whole number; this product (the result of n * f) becomes the next ''n'' in your sequence.

So, starting with n = 2, the first fraction that fits the bill is f = 15/2 (since n * f = 15), and the new ''n'' is 15.
You then use n = 15 to look for the next fraction in the sequence that produces a whole number, find that only f = 55/1 fits, and calculate the new n as n = 15 * 55/1 = 825.
And so on, producing the following endless series of results ([[series A007542|https://oeis.org/A007542]] in the [[OEIS|https://oeis.org/]], The ~On-Line Encyclopedia of Integer Sequences):
{{{
2, 15, 825, 725, 1925, 2275, 425, 390, 330, 290, 770, ..., 364, 68, 4, 30, 225, ...
}}}
You'll notice that the number 2 shows up in position 1, the number 4 (2^^2^^, and the power, 2, is a prime) shows up in position 19, and 8 (2^^3^^, where 3 is the next prime) shows up later still; for all the positions of the powers of 2 whose power is a prime, see [[sequence A267572|https://oeis.org/A267572]] in the [[OEIS|https://oeis.org/]].
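
To make the procedure concrete, here is a minimal Python sketch (my own illustration, not part of the original presentations) that runs Conway's fourteen fractions starting from n = 2, printing the series above and flagging each power of 2 as it appears:
{{{
from fractions import Fraction

# Conway's fourteen fractions, in order
fs = [Fraction(a, b) for a, b in [
  (17,91),(78,85),(19,51),(23,38),(29,33),(77,29),(95,23),
  (77,19),(1,17),(11,13),(13,11),(15,14),(15,2),(55,1)]]

n = 2
for position in range(1, 25):
  print(position, n)
  # after the starting 2, any power of 2 that appears has a prime exponent
  if (n & (n - 1)) == 0:
    print('  -> power of 2, exponent:', n.bit_length() - 1)
  # take the first fraction f for which n * f is a whole number
  for f in fs:
    if (n * f).denominator == 1:
      n = int(n * f)
      break
}}}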


The mathematician Devin Kilminster came up with a shorter sequence of just nine fractions in FRACTRAN,
{{{
3/11 847/45 143/6 7/3 10/91 3/7 36/325 1/2 36/5
}}}
which, starting with n = 10 and going through the same algorithm, produces (among other numbers) powers of 10 whose powers are all the prime numbers, in order.
For example, starting with n = 10 the first fraction that produces a whole number is 1/2 (10 * 1/2 = 5), and from n = 5 it is 36/5 (5 * 36/5 = 36), matching the first terms of the sequence below.

The first few numbers in this [[sequence of numbers|https://oeis.org/A183132/internal]] (series A183132 in the OEIS) are:
{{{
10, 5, 36, 858, 234, 5577, 1521, 3549, 8281, 910, 100, 50, 25, 180, 3388, 924, ... 
}}}
You'll notice that in position 10 in this sequence we have 100, which is 10^^2^^, so the power (2), according to the algorithm, is a prime number (which, indeed, it is :).
The next prime (3, corresponding to n = 1,000 = 10^^3^^) is found in position 46 in the sequence; 100,000 (10^^5^^, corresponding to the prime 5) is found in position 196; and so on. The positions of the powers of 10 corresponding to primes can be found in [[series A183133|https://oeis.org/A183133/internal]] in the [[OEIS|https://oeis.org/]].


Below is a short Python program^^3^^ that produces the first few primes, using Kilminster's fractions and Conway's FRACTRAN:
{{{
n = 10

# The Kilminster fraction sequence [3/11, 847/45, 143/6, 7/3, 10/91, 3/7, 36/325, 1/2, 36/5]
# is broken into numerator and denominator parts:

num = [ 3, 847, 143, 7, 10, 3,  36, 1, 36]
den = [11,  45,   6, 3, 91, 7, 325, 2,  5]

for step in range(600):
  i = 0   # index into the fraction sequences
  # go through the denominator sequence, looking for one that divides n evenly:
  while n % den[i] != 0:
    i += 1
  # found whole division; calculate the next n
  n = (n // den[i]) * num[i]
  # figure out if the n we found is a power of 10, since if it is, then its power
  # will be a prime number, which we want to print.
  n_s = str(n)
  trail_n_s = n_s[1:]
  if n_s[0] == '1' and trail_n_s == len(trail_n_s) * '0':
    print("position:", step + 1, ", prime:", len(trail_n_s))
}}}

producing:
{{{
position: 10 , prime: 2
position: 46 , prime: 3
position: 196 , prime: 5
position: 500 , prime: 7
}}}


And as Kurt Vonnegut used to say: [[If this isn’t nice, what is?]] (or, put differently: isn't this FRAC'ing awesome? :)

----
^^1^^ This entry (and investigation :) was triggered by [[an entry by Zachary Abel|http://blog.zacharyabel.com/tag/fractran/]]
^^2^^ see more [[interesting sequences|http://www.math.sjsu.edu/~hsu/pdf/sequences.pdf]] covered by John Conway (and Tim Hsu).
^^3^^ an even [[nicer Python implementation|https://rosettacode.org/wiki/Fractran#Python]] is given at Rosetta Code
I came across [[this less-than-15-minute talk|https://www.youtube.com/watch?v=ytVneQUA5-c]] on math teaching, and here are its main points:
* Start/lead in with a question (since [[questions are like lanterns|John O’Donohue - questions]]).
** no procedure, process, explanation, lecture, etc. Reminds me of [[Dan Meyer's principle of starting in the middle and introducing a dilemma, a question, a conundrum|The Three Acts Of A Mathematical Story]].
** This intrigues students, engages them, and makes them naturally develop theories, stories, possible explanations, and solutions -- BINGO!
* Give students time to struggle
** this will develop grit (a fashionable word these days :), tenacity, opportunity to explore, innovate, think, develop mental muscles
** reminds me of [[Angela Duckworth's research on grit|On luck, grit, and success in life]]
* As a teacher, you are not (and should not be) the answer key
** Responding with "I don't know. Let's find out" makes learning an adventure. Not knowing and acknowledging that is the first step in learning and knwoledge acquisition.
* Respond positively ("say 'yes' ") to student ideas, even if they are not necessarily correct
** This acknowledges their right to try, speculate, fail, try again, and learn in the process.
* Have the courage and generosity to let students play with the material
** It gives the students the gift of ownership
* "what books are to reading, play is for math". It fills learning with excitement, fun, and deep joy.
From an insightful and well-written paper titled [[After the Gold Rush: Toward Sustainable Scholarship in Computing|http://crpit.com/confpapers/CRPITV78Lister.pdf]] by Raymond Lister:

The following description (from [[Psychology Wiki|http://psychology.wikia.com/wiki/Folk_medicine]]) of folk medicine has been edited to provide a further description of folk pedagogy (per Jerome Bruner: folk pedagogies [are] ''those tacit beliefs that we each hold about how our students think and learn, that largely determines the ways in which we teach our courses.''):

Folk --medicine-- [pedagogy] … is a category of
informal knowledge distinct from “scientific
--medicine-- [pedagogy]” … is usually unwritten and
transmitted orally ... [and] … may be diffusely
known by many --adults-- [teachers] … [Folk
medicine/pedagogy is] … not necessarily
integrated into a coherent system, and may be
contradictory. Folk --medicine-- [pedagogy] is
sometimes associated with quackery … [but] … it
may also preserve important knowledge and
cultural tradition from the past.

[[Jerome Bruner|http://www.gpmcf.org/PDFs/bruner.pdf]] (1996) invoked [[folk pedagogy|https://www.cs.kent.ac.uk/people/staff/saf/share/great-missenden/reference-papers/brunerFolkPedagogy.pdf]] to describe our
“intuitive theories about how other minds work”, observing that
these intuitive theories “badly want some deconstructing
if their implications are to be appreciated”.
/***
|''Name:''|ForEachTiddlerPlugin|
|''Version:''|1.0.8 (2007-04-12)|
|''Source:''|http://tiddlywiki.abego-software.de/#ForEachTiddlerPlugin|
|''Author:''|UdoBorkowski (ub [at] abego-software [dot] de)|
|''Licence:''|[[BSD open source license (abego Software)|http://www.abego-software.de/legal/apl-v10.html]]|
|''Copyright:''|&copy; 2005-2007 [[abego Software|http://www.abego-software.de]]|
|''TiddlyWiki:''|1.2.38+, 2.0|
|''Browser:''|Firefox 1.0.4+; Firefox 1.5; InternetExplorer 6.0|
!Description

Create customizable lists, tables etc. for your selections of tiddlers. Specify the tiddlers to include and their order through a powerful language.

''Syntax:'' 
|>|{{{<<}}}''forEachTiddler'' [''in'' //tiddlyWikiPath//] [''where'' //whereCondition//] [''sortBy'' //sortExpression// [''ascending'' //or// ''descending'']] [''script'' //scriptText//] [//action// [//actionParameters//]]{{{>>}}}|
|//tiddlyWikiPath//|The filepath to the TiddlyWiki the macro should work on. When missing the current TiddlyWiki is used.|
|//whereCondition//|(quoted) JavaScript boolean expression. May refer to the built-in variables {{{tiddler}}} and  {{{context}}}.|
|//sortExpression//|(quoted) JavaScript expression returning "comparable" objects (using '{{{<}}}', '{{{>}}}', '{{{==}}}'). May refer to the built-in variables {{{tiddler}}} and  {{{context}}}.|
|//scriptText//|(quoted) JavaScript text. Typically defines JavaScript functions that are called by the various JavaScript expressions (whereClause, sortClause, action arguments, ...).|
|//action//|The action that should be performed on every selected tiddler, in the given order. By default the actions [[addToList|AddToListAction]] and [[write|WriteAction]] are supported. When no action is specified, [[addToList|AddToListAction]] is used.|
|//actionParameters//|(action-specific) parameters the action may refer to while processing the tiddlers (see action descriptions for details). <<tiddler [[JavaScript in actionParameters]]>>|
|>|~~Syntax formatting: Keywords in ''bold'', optional parts in [...]. 'or' means that exactly one of the two alternatives must exist.~~|

For details, see [[ForEachTiddlerMacro]] and [[ForEachTiddlerExamples]].
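
For example, the following call (an illustrative sketch; the ''book'' tag is made up) lists all tiddlers tagged 'book', sorted by title, using the default [[addToList|AddToListAction]] action:
{{{
<<forEachTiddler where 'tiddler.tags.contains("book")' sortBy 'tiddler.title'>>
}}}
Using the [[write|WriteAction]] action instead gives full control over the generated text, e.g. {{{write '"* [["+tiddler.title+"]]\n"'}}} emits the selected tiddlers as a bulleted list of links.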

!Revision history
* v1.0.8 (2007-04-12)
** Adapted to latest TiddlyWiki 2.2 Beta importTiddlyWiki API (introduced with changeset 2004). TiddlyWiki 2.2 Beta builds prior to changeset 2004 are no longer supported (but TiddlyWiki 2.1 and earlier still are, of course)
* v1.0.7 (2007-03-28)
** Also support "pre" formatted TiddlyWikis (introduced with TW 2.2) (when using "in" clause to work on external tiddlers)
* v1.0.6 (2006-09-16)
** Context provides "viewerTiddler", i.e. the tiddler used to view the macro. Most times this is equal to the "inTiddler", but when using the "tiddler" macro both may be different.
** Support "begin", "end" and "none" expressions in "write" action
* v1.0.5 (2006-02-05)
** Pass tiddler containing the macro with wikify, context object also holds reference to tiddler containing the macro ("inTiddler"). Thanks to SimonBaird.
** Support Firefox 1.5.0.1
** Internal
*** Make "JSLint" conform
*** "Only install once"
* v1.0.4 (2006-01-06)
** Support TiddlyWiki 2.0
* v1.0.3 (2005-12-22)
** Features: 
*** Write output to a file supports multi-byte environments (Thanks to Bram Chen) 
*** Provide API to access the forEachTiddler functionality directly through JavaScript (see getTiddlers and performMacro)
** Enhancements:
*** Improved error messages on InternetExplorer.
* v1.0.2 (2005-12-10)
** Features: 
*** context object also holds reference to store (TiddlyWiki)
** Fixed Bugs: 
*** ForEachTiddler 1.0.1 has broken support on win32 Opera 8.51 (Thanks to BrunoSabin for reporting)
* v1.0.1 (2005-12-08)
** Features: 
*** Access tiddlers stored in separated TiddlyWikis through the "in" option. I.e. you are no longer limited to only work on the "current TiddlyWiki".
*** Write output to an external file using the "toFile" option of the "write" action. With this option you may write your customized tiddler exports.
*** Use the "script" section to define "helper" JavaScript functions etc. to be used in the various JavaScript expressions (whereClause, sortClause, action arguments,...).
*** Access and store context information for the current forEachTiddler invocation (through the built-in "context" object).
*** Improved script evaluation (for where/sort clause and write scripts).
* v1.0.0 (2005-11-20)
** initial version

!Code
***/
//{{{

	
//============================================================================
//============================================================================
//		   ForEachTiddlerPlugin
//============================================================================
//============================================================================

// Only install once
if (!version.extensions.ForEachTiddlerPlugin) {

if (!window.abego) window.abego = {};

version.extensions.ForEachTiddlerPlugin = {
	major: 1, minor: 0, revision: 8, 
	date: new Date(2007,3,12), 
	source: "http://tiddlywiki.abego-software.de/#ForEachTiddlerPlugin",
	licence: "[[BSD open source license (abego Software)|http://www.abego-software.de/legal/apl-v10.html]]",
	copyright: "Copyright (c) abego Software GmbH, 2005-2007 (www.abego-software.de)"
};

// For backward compatibility with TW 1.2.x
//
if (!TiddlyWiki.prototype.forEachTiddler) {
	TiddlyWiki.prototype.forEachTiddler = function(callback) {
		for(var t in this.tiddlers) {
			callback.call(this,t,this.tiddlers[t]);
		}
	};
}

//============================================================================
// forEachTiddler Macro
//============================================================================

version.extensions.forEachTiddler = {
	major: 1, minor: 0, revision: 8, date: new Date(2007,3,12), provider: "http://tiddlywiki.abego-software.de"};

// ---------------------------------------------------------------------------
// Configurations and constants 
// ---------------------------------------------------------------------------

config.macros.forEachTiddler = {
	 // Standard Properties
	 label: "forEachTiddler",
	 prompt: "Perform actions on a (sorted) selection of tiddlers",

	 // actions
	 actions: {
		 addToList: {},
		 write: {}
	 }
};

// ---------------------------------------------------------------------------
//  The forEachTiddler Macro Handler 
// ---------------------------------------------------------------------------

config.macros.forEachTiddler.getContainingTiddler = function(e) {
	while(e && !hasClass(e,"tiddler"))
		e = e.parentNode;
	var title = e ? e.getAttribute("tiddler") : null; 
	return title ? store.getTiddler(title) : null;
};

config.macros.forEachTiddler.handler = function(place,macroName,params,wikifier,paramString,tiddler) {
	// config.macros.forEachTiddler.traceMacroCall(place,macroName,params,wikifier,paramString,tiddler);

	if (!tiddler) tiddler = config.macros.forEachTiddler.getContainingTiddler(place);
	// --- Parsing ------------------------------------------

	var i = 0; // index running over the params
	// Parse the "in" clause
	var tiddlyWikiPath = undefined;
	if ((i < params.length) && params[i] == "in") {
		i++;
		if (i >= params.length) {
			this.handleError(place, "TiddlyWiki path expected behind 'in'.");
			return;
		}
		tiddlyWikiPath = this.paramEncode((i < params.length) ? params[i] : "");
		i++;
	}

	// Parse the where clause
	var whereClause ="true";
	if ((i < params.length) && params[i] == "where") {
		i++;
		whereClause = this.paramEncode((i < params.length) ? params[i] : "");
		i++;
	}

	// Parse the sort stuff
	var sortClause = null;
	var sortAscending = true; 
	if ((i < params.length) && params[i] == "sortBy") {
		i++;
		if (i >= params.length) {
			this.handleError(place, "sortClause missing behind 'sortBy'.");
			return;
		}
		sortClause = this.paramEncode(params[i]);
		i++;

		if ((i < params.length) && (params[i] == "ascending" || params[i] == "descending")) {
			 sortAscending = params[i] == "ascending";
			 i++;
		}
	}

	// Parse the script
	var scriptText = null;
	if ((i < params.length) && params[i] == "script") {
		i++;
		scriptText = this.paramEncode((i < params.length) ? params[i] : "");
		i++;
	}

	// Parse the action. 
	// When we are already at the end use the default action
	var actionName = "addToList";
	if (i < params.length) {
	   if (!config.macros.forEachTiddler.actions[params[i]]) {
			this.handleError(place, "Unknown action '"+params[i]+"'.");
			return;
		} else {
			actionName = params[i]; 
			i++;
		}
	} 
	
	// Get the action parameter
	// (the parsing is done inside the individual action implementation.)
	var actionParameter = params.slice(i);


	// --- Processing ------------------------------------------
	try {
		this.performMacro({
				place: place, 
				inTiddler: tiddler,
				whereClause: whereClause, 
				sortClause: sortClause, 
				sortAscending: sortAscending, 
				actionName: actionName, 
				actionParameter: actionParameter, 
				scriptText: scriptText, 
				tiddlyWikiPath: tiddlyWikiPath});

	} catch (e) {
		this.handleError(place, e);
	}
};

// Returns an object with properties "tiddlers" and "context".
// tiddlers holds the (sorted) tiddlers selected by the parameter,
// context the context of the execution of the macro.
//
// The action is not yet performed.
//
// @parameter see performMacro
//
config.macros.forEachTiddler.getTiddlersAndContext = function(parameter) {

	var context = config.macros.forEachTiddler.createContext(parameter.place, parameter.whereClause, parameter.sortClause, parameter.sortAscending, parameter.actionName, parameter.actionParameter, parameter.scriptText, parameter.tiddlyWikiPath, parameter.inTiddler);

	var tiddlyWiki = parameter.tiddlyWikiPath ? this.loadTiddlyWiki(parameter.tiddlyWikiPath) : store;
	context["tiddlyWiki"] = tiddlyWiki;
	
	// Get the tiddlers, as defined by the whereClause
	var tiddlers = this.findTiddlers(parameter.whereClause, context, tiddlyWiki);
	context["tiddlers"] = tiddlers;

	// Sort the tiddlers, when sorting is required.
	if (parameter.sortClause) {
		this.sortTiddlers(tiddlers, parameter.sortClause, parameter.sortAscending, context);
	}

	return {tiddlers: tiddlers, context: context};
};

// Returns the (sorted) tiddlers selected by the parameter.
//
// The action is not yet performed.
//
// @parameter see performMacro
//
config.macros.forEachTiddler.getTiddlers = function(parameter) {
	return this.getTiddlersAndContext(parameter).tiddlers;
};

// Performs the macros with the given parameter.
//
// @param parameter holds the parameter of the macro as separate properties.
//				  The following properties are supported:
//
//						place
//						whereClause
//						sortClause
//						sortAscending
//						actionName
//						actionParameter
//						scriptText
//						tiddlyWikiPath
//
//					All properties are optional. 
//					For most actions the place property must be defined.
//
config.macros.forEachTiddler.performMacro = function(parameter) {
	var tiddlersAndContext = this.getTiddlersAndContext(parameter);

	// Perform the action
	var actionName = parameter.actionName ? parameter.actionName : "addToList";
	var action = config.macros.forEachTiddler.actions[actionName];
	if (!action) {
		this.handleError(parameter.place, "Unknown action '"+actionName+"'.");
		return;
	}

	var actionHandler = action.handler;
	actionHandler(parameter.place, tiddlersAndContext.tiddlers, parameter.actionParameter, tiddlersAndContext.context);
};

// ---------------------------------------------------------------------------
//  The actions 
// ---------------------------------------------------------------------------

// Internal.
//
// --- The addToList Action -----------------------------------------------
//
config.macros.forEachTiddler.actions.addToList.handler = function(place, tiddlers, parameter, context) {
	// Parse the parameter
	var p = 0;

	// Check for extra parameters
	if (parameter.length > p) {
		config.macros.forEachTiddler.createExtraParameterErrorElement(place, "addToList", parameter, p);
		return;
	}

	// Perform the action.
	var list = document.createElement("ul");
	place.appendChild(list);
	for (var i = 0; i < tiddlers.length; i++) {
		var tiddler = tiddlers[i];
		var listItem = document.createElement("li");
		list.appendChild(listItem);
		createTiddlyLink(listItem, tiddler.title, true);
	}
};

abego.parseNamedParameter = function(name, parameter, i) {
	if ((i < parameter.length) && parameter[i] == name) {
		i++;
		if (i >= parameter.length) {
			throw "Missing text behind '%0'".format([name]);
		}
		return config.macros.forEachTiddler.paramEncode(parameter[i]);
	}
	return null;
};

// Internal.
//
// --- The write Action ---------------------------------------------------
//
config.macros.forEachTiddler.actions.write.handler = function(place, tiddlers, parameter, context) {
	// Parse the parameter
	var p = 0;
	if (p >= parameter.length) {
		this.handleError(place, "Missing expression behind 'write'.");
		return;
	}

	var textExpression = config.macros.forEachTiddler.paramEncode(parameter[p]);
	p++;

	// Parse the "begin" option
	var beginExpression = abego.parseNamedParameter("begin", parameter, p);
	if (beginExpression !== null) 
		p += 2;
	var endExpression = abego.parseNamedParameter("end", parameter, p);
	if (endExpression !== null) 
		p += 2;
	var noneExpression = abego.parseNamedParameter("none", parameter, p);
	if (noneExpression !== null) 
		p += 2;

	// Parse the "toFile" option
	var filename = null;
	var lineSeparator = undefined;
	if ((p < parameter.length) && parameter[p] == "toFile") {
		p++;
		if (p >= parameter.length) {
			this.handleError(place, "Filename expected behind 'toFile' of 'write' action.");
			return;
		}
		
		filename = config.macros.forEachTiddler.getLocalPath(config.macros.forEachTiddler.paramEncode(parameter[p]));
		p++;
		if ((p < parameter.length) && parameter[p] == "withLineSeparator") {
			p++;
			if (p >= parameter.length) {
				this.handleError(place, "Line separator text expected behind 'withLineSeparator' of 'write' action.");
				return;
			}
			lineSeparator = config.macros.forEachTiddler.paramEncode(parameter[p]);
			p++;
		}
	}
	
	// Check for extra parameters
	if (parameter.length > p) {
		config.macros.forEachTiddler.createExtraParameterErrorElement(place, "write", parameter, p);
		return;
	}

	// Perform the action.
	var func = config.macros.forEachTiddler.getEvalTiddlerFunction(textExpression, context);
	var count = tiddlers.length;
	var text = "";
	if (count > 0 && beginExpression)
		text += config.macros.forEachTiddler.getEvalTiddlerFunction(beginExpression, context)(undefined, context, count, undefined);
	
	for (var i = 0; i < count; i++) {
		var tiddler = tiddlers[i];
		text += func(tiddler, context, count, i);
	}
	
	if (count > 0 && endExpression)
		text += config.macros.forEachTiddler.getEvalTiddlerFunction(endExpression, context)(undefined, context, count, undefined);

	if (count == 0 && noneExpression) 
		text += config.macros.forEachTiddler.getEvalTiddlerFunction(noneExpression, context)(undefined, context, count, undefined);
		

	if (filename) {
		if (lineSeparator !== undefined) {
			lineSeparator = lineSeparator.replace(/\\n/mg, "\n").replace(/\\r/mg, "\r");
			text = text.replace(/\n/mg,lineSeparator);
		}
		saveFile(filename, convertUnicodeToUTF8(text));
	} else {
		var wrapper = createTiddlyElement(place, "span");
		wikify(text, wrapper, null/* highlightRegExp */, context.inTiddler);
	}
};


// ---------------------------------------------------------------------------
//  Helpers
// ---------------------------------------------------------------------------

// Internal.
//
config.macros.forEachTiddler.createContext = function(placeParam, whereClauseParam, sortClauseParam, sortAscendingParam, actionNameParam, actionParameterParam, scriptText, tiddlyWikiPathParam, inTiddlerParam) {
	return {
		place : placeParam, 
		whereClause : whereClauseParam, 
		sortClause : sortClauseParam, 
		sortAscending : sortAscendingParam, 
		script : scriptText,
		actionName : actionNameParam, 
		actionParameter : actionParameterParam,
		tiddlyWikiPath : tiddlyWikiPathParam,
		inTiddler : inTiddlerParam, // the tiddler containing the <<forEachTiddler ...>> macro call.
		viewerTiddler : config.macros.forEachTiddler.getContainingTiddler(placeParam) // the tiddler showing the forEachTiddler result
	};
};

// Internal.
//
// Returns a TiddlyWiki with the tiddlers loaded from the TiddlyWiki of 
// the given path.
//
config.macros.forEachTiddler.loadTiddlyWiki = function(path, idPrefix) {
	if (!idPrefix) {
		idPrefix = "store";
	}
	var lenPrefix = idPrefix.length;
	
	// Read the content of the given file
	var content = loadFile(this.getLocalPath(path));
	if(content === null) {
		throw "TiddlyWiki '"+path+"' not found.";
	}
	
	var tiddlyWiki = new TiddlyWiki();

	// Starting with TW 2.2 there is a helper function to import the tiddlers
	if (tiddlyWiki.importTiddlyWiki) {
		if (!tiddlyWiki.importTiddlyWiki(content))
			throw "File '"+path+"' is not a TiddlyWiki.";
		tiddlyWiki.dirty = false;
		return tiddlyWiki;
	}
	
	// The legacy code, for TW < 2.2
	
	// Locate the storeArea div's
	var posOpeningDiv = content.indexOf(startSaveArea);
	var posClosingDiv = content.lastIndexOf(endSaveArea);
	if((posOpeningDiv == -1) || (posClosingDiv == -1)) {
		throw "File '"+path+"' is not a TiddlyWiki.";
	}
	var storageText = content.substring(posOpeningDiv + startSaveArea.length, posClosingDiv);
	
	// Create a "div" element that contains the storage text
	var myStorageDiv = document.createElement("div");
	myStorageDiv.innerHTML = storageText;
	myStorageDiv.normalize();
	
	// Create all tiddlers in a new TiddlyWiki
	// (following code is modified copy of TiddlyWiki.prototype.loadFromDiv)
	var store = myStorageDiv.childNodes;
	for(var t = 0; t < store.length; t++) {
		var e = store[t];
		var title = null;
		if(e.getAttribute)
			title = e.getAttribute("tiddler");
		if(!title && e.id && e.id.substr(0,lenPrefix) == idPrefix)
			title = e.id.substr(lenPrefix);
		if(title && title !== "") {
			var tiddler = tiddlyWiki.createTiddler(title);
			tiddler.loadFromDiv(e,title);
		}
	}
	tiddlyWiki.dirty = false;

	return tiddlyWiki;
};


	
// Internal.
//
// Returns a function that has a function body returning the given javaScriptExpression.
// The function has the parameters:
// 
//	 (tiddler, context, count, index)
//
config.macros.forEachTiddler.getEvalTiddlerFunction = function (javaScriptExpression, context) {
	var script = context["script"];
	var functionText = "var theFunction = function(tiddler, context, count, index) { return "+javaScriptExpression+"}";
	var fullText = (script ? script+";" : "")+functionText+";theFunction;";
	return eval(fullText);
};

// Internal.
//
config.macros.forEachTiddler.findTiddlers = function(whereClause, context, tiddlyWiki) {
	var result = [];
	var func = config.macros.forEachTiddler.getEvalTiddlerFunction(whereClause, context);
	tiddlyWiki.forEachTiddler(function(title,tiddler) {
		if (func(tiddler, context, undefined, undefined)) {
			result.push(tiddler);
		}
	});
	return result;
};

// Internal.
//
config.macros.forEachTiddler.createExtraParameterErrorElement = function(place, actionName, parameter, firstUnusedIndex) {
	var message = "Extra parameter behind '"+actionName+"':";
	for (var i = firstUnusedIndex; i < parameter.length; i++) {
		message += " "+parameter[i];
	}
	this.handleError(place, message);
};

// Internal.
//
config.macros.forEachTiddler.sortAscending = function(tiddlerA, tiddlerB) {
	var result = 
		(tiddlerA.forEachTiddlerSortValue == tiddlerB.forEachTiddlerSortValue) 
			? 0
			: (tiddlerA.forEachTiddlerSortValue < tiddlerB.forEachTiddlerSortValue)
			   ? -1 
			   : +1; 
	return result;
};

// Internal.
//
config.macros.forEachTiddler.sortDescending = function(tiddlerA, tiddlerB) {
	var result = 
		(tiddlerA.forEachTiddlerSortValue == tiddlerB.forEachTiddlerSortValue) 
			? 0
			: (tiddlerA.forEachTiddlerSortValue < tiddlerB.forEachTiddlerSortValue)
			   ? +1 
			   : -1; 
	return result;
};

// Internal.
//
config.macros.forEachTiddler.sortTiddlers = function(tiddlers, sortClause, ascending, context) {
	// To avoid evaluating the sortClause whenever two items are compared 
	// we pre-calculate the sortValue for every item in the array and store it in a 
	// temporary property ("forEachTiddlerSortValue") of the tiddlers.
	var func = config.macros.forEachTiddler.getEvalTiddlerFunction(sortClause, context);
	var count = tiddlers.length;
	var i;
	for (i = 0; i < count; i++) {
		var tiddler = tiddlers[i];
		tiddler.forEachTiddlerSortValue = func(tiddler,context, undefined, undefined);
	}

	// Do the sorting
	tiddlers.sort(ascending ? this.sortAscending : this.sortDescending);

	// Delete the temporary property that holds the sortValue.	
	for (i = 0; i < tiddlers.length; i++) {
		delete tiddlers[i].forEachTiddlerSortValue;
	}
};


// Internal.
//
config.macros.forEachTiddler.trace = function(message) {
	displayMessage(message);
};

// Internal.
//
config.macros.forEachTiddler.traceMacroCall = function(place,macroName,params) {
	var message ="<<"+macroName;
	for (var i = 0; i < params.length; i++) {
		message += " "+params[i];
	}
	message += ">>";
	displayMessage(message);
};


// Internal.
//
// Creates an element that holds an error message
// 
config.macros.forEachTiddler.createErrorElement = function(place, exception) {
	var message = (exception.description) ? exception.description : exception.toString();
	return createTiddlyElement(place,"span",null,"forEachTiddlerError","<<forEachTiddler ...>>: "+message);
};

// Internal.
//
// @param place [may be null]
//
config.macros.forEachTiddler.handleError = function(place, exception) {
	if (place) {
		this.createErrorElement(place, exception);
	} else {
		throw exception;
	}
};

// Internal.
//
// Encodes the given string.
//
// Replaces 
//	 "$))" to ">>"
//	 "$)" to ">"
//
config.macros.forEachTiddler.paramEncode = function(s) {
	var reGTGT = new RegExp("\\$\\)\\)","mg");
	var reGT = new RegExp("\\$\\)","mg");
	return s.replace(reGTGT, ">>").replace(reGT, ">");
};

// Internal.
//
// Returns the given original path (that is a file path, starting with "file:")
// as a path to a local file, in the systems native file format.
//
// Location information in the originalPath (i.e. the "#" and stuff following)
// is stripped.
// 
config.macros.forEachTiddler.getLocalPath = function(originalPath) {
	// Remove any location part of the URL
	var hashPos = originalPath.indexOf("#");
	if(hashPos != -1)
		originalPath = originalPath.substr(0,hashPos);
	// Convert to a native file format assuming
	// "file:///x:/path/path/path..." - pc local file --> "x:\path\path\path..."
	// "file://///server/share/path/path/path..." - FireFox pc network file --> "\\server\share\path\path\path..."
	// "file:///path/path/path..." - mac/unix local file --> "/path/path/path..."
	// "file://server/share/path/path/path..." - pc network file --> "\\server\share\path\path\path..."
	var localPath;
	if(originalPath.charAt(9) == ":") // pc local file
		localPath = unescape(originalPath.substr(8)).replace(new RegExp("/","g"),"\\");
	else if(originalPath.indexOf("file://///") === 0) // FireFox pc network file
		localPath = "\\\\" + unescape(originalPath.substr(10)).replace(new RegExp("/","g"),"\\");
	else if(originalPath.indexOf("file:///") === 0) // mac/unix local file
		localPath = unescape(originalPath.substr(7));
	else if(originalPath.indexOf("file:/") === 0) // mac/unix local file
		localPath = unescape(originalPath.substr(5));
	else // pc network file
		localPath = "\\\\" + unescape(originalPath.substr(7)).replace(new RegExp("/","g"),"\\");	
	return localPath;
};

// ---------------------------------------------------------------------------
// Stylesheet Extensions (may be overridden by local StyleSheet)
// ---------------------------------------------------------------------------
//
setStylesheet(
	".forEachTiddlerError{color: #ffffff;background-color: #880000;}",
	"forEachTiddler");

//============================================================================
// End of forEachTiddler Macro
//============================================================================


//============================================================================
// String.startsWith Function
//============================================================================
//
// Returns true if the string starts with the given prefix, false otherwise.
//
version.extensions["String.startsWith"] = {major: 1, minor: 0, revision: 0, date: new Date(2005,11,20), provider: "http://tiddlywiki.abego-software.de"};
//
String.prototype.startsWith = function(prefix) {
	var n =  prefix.length;
	return (this.length >= n) && (this.slice(0, n) == prefix);
};



//============================================================================
// String.endsWith Function
//============================================================================
//
// Returns true if the string ends with the given suffix, false otherwise.
//
version.extensions["String.endsWith"] = {major: 1, minor: 0, revision: 0, date: new Date(2005,11,20), provider: "http://tiddlywiki.abego-software.de"};
//
String.prototype.endsWith = function(suffix) {
	var n = suffix.length;
	return (this.length >= n) && (this.right(n) == suffix);
};


//============================================================================
// String.contains Function
//============================================================================
//
// Returns true when the string contains the given substring, false otherwise.
//
version.extensions["String.contains"] = {major: 1, minor: 0, revision: 0, date: new Date(2005,11,20), provider: "http://tiddlywiki.abego-software.de"};
//
String.prototype.contains = function(substring) {
	return this.indexOf(substring) >= 0;
};

//============================================================================
// Array.indexOf Function
//============================================================================
//
// Returns the index of the first occurrence of the given item in the array or 
// -1 when no such item exists.
//
// @param item [may be null]
//
version.extensions["Array.indexOf"] = {major: 1, minor: 0, revision: 0, date: new Date(2005,11,20), provider: "http://tiddlywiki.abego-software.de"};
//
Array.prototype.indexOf = function(item) {
	for (var i = 0; i < this.length; i++) {
		if (this[i] == item) {
			return i;
		}
	}
	return -1;
};

//============================================================================
// Array.contains Function
//============================================================================
//
// Returns true when the array contains the given item, otherwise false. 
//
// @param item [may be null]
//
version.extensions["Array.contains"] = {major: 1, minor: 0, revision: 0, date: new Date(2005,11,20), provider: "http://tiddlywiki.abego-software.de"};
//
Array.prototype.contains = function(item) {
	return (this.indexOf(item) >= 0);
};

//============================================================================
// Array.containsAny Function
//============================================================================
//
// Returns true when the array contains at least one of the given items. 
// Otherwise (or when items contains no elements) false is returned.
//
version.extensions["Array.containsAny"] = {major: 1, minor: 0, revision: 0, date: new Date(2005,11,20), provider: "http://tiddlywiki.abego-software.de"};
//
Array.prototype.containsAny = function(items) {
	for(var i = 0; i < items.length; i++) {
		if (this.contains(items[i])) {
			return true;
		}
	}
	return false;
};


//============================================================================
// Array.containsAll Function
//============================================================================
//
// Returns true when the array contains all the items, otherwise false.
// 
// When items is null false is returned (even if the array contains a null).
//
// @param items [may be null] 
//
version.extensions["Array.containsAll"] = {major: 1, minor: 0, revision: 0, date: new Date(2005,11,20), provider: "http://tiddlywiki.abego-software.de"};
//
Array.prototype.containsAll = function(items) {
	for(var i = 0; i < items.length; i++) {
		if (!this.contains(items[i])) {
			return false;
		}
	}
	return true;
};


} // of "install only once"

// Used Globals (for JSLint) ==============
// ... DOM
/*global 	document */
// ... TiddlyWiki Core
/*global 	convertUnicodeToUTF8, createTiddlyElement, createTiddlyLink, 
			displayMessage, endSaveArea, hasClass, loadFile, saveFile, 
			startSaveArea, store, wikify */
//}}}


/***
!Licence and Copyright
Copyright (c) abego Software ~GmbH, 2005 ([[www.abego-software.de|http://www.abego-software.de]])

Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:

Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.

Redistributions in binary form must reproduce the above copyright notice, this
list of conditions and the following disclaimer in the documentation and/or other
materials provided with the distribution.

Neither the name of abego Software nor the names of its contributors may be
used to endorse or promote products derived from this software without specific
prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT
SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED
TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH
DAMAGE.
***/
From an [[article by Charles F. Van Loan|http://www.cs.cornell.edu/cv/Intuition.htm]] (Computer Science, Cornell University) on developing Computational Intuition:

The first year of college is not the time to stress mathematical rigor or formal program correctness proofs. Interest in computational science is jeopardized if these things are pushed before the student is ready. However, the freshman year can be used to set the stage for the precision that mathematics has to offer if the connection between intuition and formality is understood:

''Formalism First = Rigor Mortis''^^1^^

''Intuition First = Rigor's Mortise''^^2^^


----
^^1^^ Rigor Mortis - when the limbs of a corpse become stiff and difficult to move or manipulate.

^^2^^ Mortise - to join or fasten securely, as with a mortise and tenon.
[img[Mortise|./resources/mortise.png]]


BTW, I also like his presentation about [[If Copernicus and Kepler Had Computers]].
In a [[thoughtful and practical interview|http://www.criticalthinking.org/pages/an-interview-with-linda-elder-about-critical-thinking-and-gi/476]] on her [[Critical Thinking website|http://www.criticalthinking.org/]], Linda Elder describes a few practices and questions which, if incorporated into classroom practices and assessments, can foster critical thinking.

Students are given a prompt, which can be an article, chapter, or essay of the teacher’s choosing. The purpose of the practice and assessment is to determine the extent to which students are able to analyze and evaluate the reasoning embedded in the written prompt.

!!!Here is part one of the activity/assessment:

Directions to students: Complete the following sentences with whatever elaboration you think necessary to make your meaning clear.

1) The main purpose of the “text” you are analyzing is  _ _ _ _ _ _  .
(Here you are trying to state as accurately as possible the author’s purpose for writing the article. What, in your view, was the author trying to accomplish?)

2) The key question that the author is addressing is  _ _ _ _ _ _  . (Your goal is to figure out the key question implicit in the “text.” In other words, What was the key question addressed?)

3) The most important information in this article is  _ _ _ _ _ _ . (You want to identify the key information the author used, or presupposed, in the article to support his/her main arguments. Here you are looking for facts, experiences, data the author is using to support her/his conclusions).

4) The main inferences/conclusions in this article are  _ _ _ _ _ _ .
(You want to identify the most important conclusions that the author comes to in the “text”).

5) The key idea(s) we need to understand in this “text” is (are)_ _ _ _ _ _ . By these ideas the author means _ _ _ _ _ _.
(To identify these ideas, ask yourself: What are the most important ideas that you would have to understand in order to understand the author’s line of reasoning? Then elaborate briefly what the author means by these ideas).

6) The main assumption(s) underlying the author’s thinking is (are)_ _ _ _ _ _ (Ask yourself: What is the author taking for granted (that might be questioned)? The assumptions are generalizations that the author does not think s/he has to defend in the context of writing the article, and they are usually unstated. This is where the author's thinking logically begins).

7) a) If we take this line of reasoning seriously, the implications are _ _ _ _ _ _.
(What consequences are likely to follow if people take the author’s line of reasoning seriously? Here you are to follow out the logical implications of the author’s position. You should include implications that the author states, if you believe them to be logical, but you should do your best thinking to determine what you think the implications are.)

b) If we fail to take this line of reasoning seriously, the implications are _ _ _ _ _ _.
(What consequences are likely to follow if people ignore the author’s reasoning?)

8) The main point(s) of view of the author of the “text” is (are)_ _ _ _ _ _. (The main question you are trying to answer here is: What is the author looking at, and how is s/he seeing it? For example, in this test: “What are we looking at?” (thinking) “How are we seeing it?” (critically). Our point of view is defined by the fact that we see “thinking” as subject to critical evaluation).

!!!In part two of the activity/assessment students are asked to assess the reasoning embedded in the writing prompt. It provides the criteria by which students will evaluate the reasoning:

Directions to students: You should consider the questions below in developing your assessment of the writing sample. In addition to the questions below, you should feel free to comment on the reasoning in terms of its clarity, accuracy, precision, relevance, depth, breadth, logicalness, significance, and fairness or lack thereof.

1. ''Question'': Is the question at issue clearly stated or implied? Is it unbiased? Does the expression of the question do justice to the complexity of the matter at issue?

2. ''Purpose'': Is the purpose well-stated or implied? Is it clear and justifiable? Are the question and purpose directly relevant to each other?

3. ''Information'': Is relevant evidence, experiences and/or information essential to the issue cited? Is the information accurate? Are the complexities of the issue addressed?

4. ''Ideas (concepts)'': Are key ideas clarified when necessary? Are the concepts used justifiably?

5. ''Assumptions'': Is there sensitivity to what is being taken for granted or assumed? (Insofar as those assumptions might reasonably be questioned?). Are questionable assumptions being used without addressing problems which might be inherent in those assumptions?

6. ''Conclusions'': Is a line of reasoning well developed explaining the main conclusions? Are alternative conclusions considered? Are there any apparent inconsistencies in the reasoning?

7. ''Point of View'': Is a sensitivity to alternative relevant points of view or lines of reasoning shown? Is consideration given to objections framed from other relevant points of view? If so, were they responded to?

8. ''Implications'': Is sensitivity shown to the implications and consequences of the position taken?


In the interview Elder also lists important intellectual standards:
!!!Intellectual Standards
Intellectual standards are the standards by which educated persons determine the quality of reasoning. Here are some of those standards, as well as some questions implied by them:

''Clarity:'' understandable, the meaning can be grasped
Could you elaborate further?
Could you give me an example?
Could you illustrate what you mean?
  	 
''Accuracy:'' free from errors or distortions, true
How could we check on that?
How could we find out if that is true?
How could we verify or test that?
  	 
''Precision:'' exact to the necessary level of detail
Could you be more specific?
Could you give me more details?
Could you be more exact?
  	 
''Relevance:'' relating to the matter at hand
How does that relate to the problem?
How does that bear on the question?
How does that help us with the issue?
  	 
''Depth:'' containing complexities and interrelationships
What factors make this a difficult problem?
What are some of the complexities of this question?
What are some of the difficulties we need to deal with?
  	 
''Breadth:'' encompassing multiple viewpoints
Do we need to look at this from another perspective?
Do we need to consider another point of view?
Do we need to look at this in other ways?
  	 
''Logic:'' the parts make sense together, no contradictions
Does all this make sense together?
Does your first paragraph fit in with your last?
Does what you say follow from the evidence?
  	 
''Significance:'' focusing on the important, not trivial
Is this the most important problem to consider?
Is this the central idea to focus on?
Which of these facts are most important?
  	 
''Fairness:'' Justifiable, not self-serving (or egocentric)
Do I have any vested interest in this issue?
Am I sympathetically representing the viewpoints of others?
Am I putting views I oppose in their strongest form?

The concept of Fourier series is useful and powerful because it demonstrates several very important mathematical ideas.
([[Logarithms|Logarithms]] are another example of a powerful math concept.)

One idea is that sums of functions can be very useful, and produce "attractive" results.
Another idea is that trigonometric functions (in this case sine/cosine) can indeed produce "all sorts of shapes", which is non-intuitive to a novice/child.
A third idea is that patterns combined with series (e.g., odd numbers, squares of numbers, reciprocals of integers) produce "beauty" and "harmony" (pun intended).
This interactive display also shows very explicitly and concretely the relationship between a trigonometric function, its amplitude, frequency, and a specific angle.

Here's an example I created, demonstrating that summing the sine/cosine terms of a Fourier series can produce "surprising" results. Originally, I implemented it as a [[Sage|http://www.sagemath.org/]] interactive animation.

!!This shows how a square wave can be generated by summing up sine elements in a Fourier series:
The series uses only odd integers and straight addition of terms.

[img[Sage square wave|./resources/sage_fourier_square.png][./resources/sage_fourier_square.png]]


!!This shows how a sawtooth wave can be generated by summing up sine elements in a Fourier series:
The series uses both even and odd integers and alternating addition/subtraction of terms.

[img[Sage sawtooth wave|./resources/sage_fourier_sawtooth.png][./resources/sage_fourier_sawtooth.png]]


!!This shows how a triangle wave can be generated by summing up sine elements in a Fourier series:
The series uses only squares of odd integers and alternating addition/subtraction of terms.

[img[Sage triangle wave|./resources/sage_fourier_triangle.png][./resources/sage_fourier_triangle.png]]
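
For concreteness, here is a minimal Python/numpy sketch of the three partial sums shown above (an approximation of the idea only; the original Sage animation code is not reproduced here, and numpy/matplotlib are assumed to be installed). The coefficients are the standard Fourier series for these waveforms:
{{{
# Partial sums of the Fourier series for square, sawtooth, and triangle waves.
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 4 * np.pi, 2000)
N = 10  # number of terms in each partial sum

# Square wave: odd harmonics, straight addition: (4/pi) * sum sin((2k-1)x)/(2k-1)
square = (4 / np.pi) * sum(np.sin((2 * k - 1) * x) / (2 * k - 1)
                           for k in range(1, N + 1))

# Sawtooth: even and odd harmonics, alternating signs: (2/pi) * sum (-1)^(k+1) sin(kx)/k
sawtooth = (2 / np.pi) * sum((-1) ** (k + 1) * np.sin(k * x) / k
                             for k in range(1, N + 1))

# Triangle: odd harmonics over their squares, alternating signs:
# (8/pi^2) * sum (-1)^k sin((2k+1)x)/(2k+1)^2
triangle = (8 / np.pi ** 2) * sum((-1) ** k * np.sin((2 * k + 1) * x) / (2 * k + 1) ** 2
                                  for k in range(N))

for wave, label in [(square, "square"), (sawtooth, "sawtooth"), (triangle, "triangle")]:
    plt.plot(x, wave, label=label)
plt.legend()
plt.show()
}}}
Increasing N sharpens the square and sawtooth corners (up to the Gibbs overshoot at the jumps), while the triangle series converges much faster because its coefficients fall off as 1/k^2.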
The [[Physics Nobel Laureate|https://www.nobelprize.org/nobel_prizes/physics/laureates/2004/]] (2004) [[Frank Wilczek|http://frankwilczek.com/]] wrote [[an article in the WSJ|https://www.wsj.com/articles/intelligent-life-elsewhere-maybe-its-hiding-1518708957]] trying to give yet another answer to the question the physicist [[Enrico Fermi|https://www.nobelprize.org/nobel_prizes/physics/laureates/1938/fermi-bio.html]] (Physics Nobel laureate 1938) posed regarding the existence (or not) of intelligent aliens somewhere out there (AKA [[the Fermi Paradox|https://www.seti.org/seti-institute/project/details/fermi-paradox]]): if the probability that they exist is so high per [[the Drake Equation|https://www.cbsnews.com/news/across-cosmic-history-intelligent-life-common/]] (as opposed to [[The Flake Equation]] :), then [[where are they?|chapter 4 - Where Are We?]] (meaning, how come we haven't seen them yet?).

I think that Wilczek's answer, besides being original and clever, brilliantly exemplifies the way great minds think: Not only do they not rule out possibilities; they also use "inversion" and "anti symmetry"^^1^^ to create bigger [["search spaces"|https://wiki.lesswrong.com/wiki/Search_space]].

The trigger for Fermi's Paradox is that since there are billions and billions of stars and planets in the universe, and since the universe is billions of years old, there should be a large number of alien/extra-terrestrial civilizations far more advanced than ours; therefore, we (on Earth) should have detected some activity and signs of intelligence by now (this reasoning also triggered the [[SETI effort|https://www.seti.org/node/647]]).

There are several "classic" explanations to why "we haven't seen them yet"^^2^^.

One reason may be that life, let alone "intelligent life", is a rare phenomenon (i.e., a result of an incredibly improbable series of events), and we on Earth are a rare (and very lucky) exception. And this is what [[Ray Kurzweil suggests|pg. 53 - RAY KURZWEIL: WHERE ARE THEY?]], too.
But as Wilczek points out
>our own planet’s history suggests otherwise. Though we don’t understand in detail how life began, we have several plausible scenarios and know that it arose relatively quickly once Earth became a stable, reasonably cool body. So life on Earthlike planets should be common.

Another reason for not observing other intelligent civilizations may be that they are extremely fragile and may inevitably "self-destruct" eventually. Wilczek calls this scenario "immoderate greatness", borrowing the term from Edward Gibbon’s description of the fall of Rome as "the natural and inevitable effect of immoderate greatness.... The stupendous fabric yielded to the pressure of its own weight", where
>The decline of other empires, such as those of the Spanish and the British, suggests that complex civilizations might be inherently fragile. Our own might well fall victim to nuclear war or catastrophic climate change. Maybe advanced technological civilizations inevitably flame out.

But, another reason for not seeing any signs of intelligence in the universe (besides ours ... :) may be what Wilczek calls "silence is golden", and is inspired by what we know about quantum physics and quantum computers:
>[quantum computers are] uniquely powerful machines, but they’re delicate and work best in the cold and dark, insulated from radiation and heat. A hyperadvanced civilization, embodied in artificial intelligence, might just want to be left alone, in order to optimize its intelligence and thinking power.
This idea was solidified in Wilczek's mind after hearing the physicist [[Richard Wolfson|https://www.thegreatcourses.com/professors/richard-wolfson/]], and he calls it: "Good //thinks// come in small packages."
Wilczek points out that in order to have effective computing, there has to be //interaction// and communication between (computing and thinking) entities. __And__, to have fast/efficient computing (and thinking, and communication), given the finite speed of light (which limits the speed of any/every movement/exchange), those entities need to be close to each other to minimize the delays.
And the inevitable conclusion:
>powerful thinking entities that obey the laws of physics [e.g., are limited by the finite speed of light], and which need to exchange up-to-date information, can’t be spaced [apart too much]. Thinkers at the vanguard of a hyperadvanced technology, striving to be both quick-witted and coherent, would keep that technology small.
Therefore, he concludes:
>Thus, truly advanced, information-based civilizations might choose to expand inward, to achieve speed and integration—not outward, where they’d lose patience waiting for feedback. If that’s the case, then the answer to Fermi’s “Where are they?” is: Out there, but inconspicuous. 

I like Wilczek's thinking and idea, but there are (at least) two questions in my mind:
* What if the speed of light is not "the ultimate limit", as possibly evidenced by the phenomenon of [[quantum entanglement|https://www.quantamagazine.org/entanglement-made-simple-20160428/]]?
** if "a civilization out there" has discovered faster-than-light technologies, then distance/proximity should, presumably, not be an issue, and Wilczek's scenario does not  have to hold.
* If we assume that on their way to something similar to Quantum Computing and Artificial Intelligence, alien civilizations went through more primitive technologies (similar to our path), should we not have seen/detected at least signs/echoes of that phase (e.g., radio waves propagating throughout the universe)?
** This potentially weakening point in Wilczek's explanation can actually be "argued away" if we take into account that it took us only a few hundred years (counting from the "dawn of technology" in the late 1600s ([[The Enlightenment|http://www.history.com/topics/enlightenment]])) to discover Quantum Computing. Assuming these other civilizations followed a similar path, and may have gone through this phase much earlier than we did, their "primitive technology fingerprints" are long gone: dispersed, too brief (say, 500 years within a span of billions of years), and too weak to detect in the cosmos (similar to [[Big Bang signals/radiation|https://www.space.com/28516-cosmic-inflation-gravitational-waves-hunt.html]]).


----
^^1^^ - "inversion" and [["anti symmetry"|https://en.wikipedia.org/wiki/Antisymmetric_relation]] are powerful ways/tools/techniques (or [[intuition pumps|https://www.edge.org/conversation/intuition-pumps]]) to generate new possibilities and ideas, create new/different search/solution spaces!
^^2^^ - unless [[Terry Pratchett stating|http://www.chrisjoneswriting.com/terry-pratchett-quotes/category/chance]] the obvious is right :) :
> ... the universe has no time for life. By rights it shouldn’t exist. We don’t realize the odds.
[...] men pass rapidly from one step to the next; for instance from milk to white, from white to air, from air to damp; after which one recollects autumn, supposing that one is trying to recollect that season.
It seems to me that one of the most wonderful human capabilities is the mind's "infinite capacity" (or at least its "inexhaustible ability") to "spin up" new ideas, concepts, etc. (Is this one reason why people say that "hope springs eternal"?)

Or as [[Oliver Wendell Holmes|Oliver Wendell Holmes]] beautifully put it:
>Man's mind, once stretched by a new idea, never regains its original dimensions.
In an interesting article titled [[MDA: A Formal Approach to Game Design and Game Research|http://www.ccs.neu.edu/course/cs5150f14/readings/hunicke_mda.pdf]], the authors, Robin Hunicke, Marc LeBlanc, Robert Zubek, describe their game design/development framework.

A related resource is the online book [[Procedural Content Generation in Games: A textbook and an overview of current research|http://pcgbook.com/]], which collects multiple techniques for generating interesting, ~AI-driven content for games.

The MDA framework formalizes the consumption of games by breaking them into their distinct components:

{{{ RULES -->> SYSTEMS -->> FUN }}}

and establishing their design counterparts:

{{{ MECHANICS -->> DYNAMICS -->> AESTHETICS }}}

''Mechanics''
describes the particular components of the game, at the level of data representation and algorithms.

''Dynamics''
describes the run-time behavior of the mechanics acting on player inputs and each others' outputs over time.

''Aesthetics''
describes the desirable emotional responses evoked in the player, when she interacts with the game system.

Fundamental to this framework is the idea that games are more like artifacts than media. By this we mean that the content of a game is its behavior -- not the media that streams out of it towards the player.

Thinking about games as designed artifacts helps frame them as systems that build behavior via interaction. It supports clearer design choices and analysis at all levels of study and development.
On his [[blog|http://quest2engage.com/]], Todd Blayone posted a nice piece on [[Gamification in education - beyond reward and punishment|http://quest2engage.wordpress.com/2011/08/03/gamification-in-education-beyond-reward-and-punishment/]].
He correctly states that the current education system is already gamified but still not working well, and then provides a few reasons.
Schools are gamified in that they have a reward system for doing homework and passing tests, and they promote "successful players" to higher levels every year, and yet...

Unlike good games powered by strong player internal motivation, schools provide (and have to constantly replenish) external motivation. To use a metaphor by Sean Bouchard in his TEDx talk [[Chocolate Covered Broccoli: Building Better Games|http://www.youtube.com/watch?v=VrK7VXCfsS0&feature=relmfu]]: there is only so much broccoli players are willing to pretend is chocolate... (see also Ian Bogost's [[post on marketing-driven gamification|http://www.bogost.com/blog/gamification_is_bullshit.shtml]]).

Another difference between good games and bad ones (many schools come to mind ;-| ) is the clarity of the goals and the appropriate/calibrated/dynamic level of challenge. My hope is that with online education systems embedding research-based learning theories and AI (Artificial Intelligence), the situation may be improving in some schools, but unfortunately not in most.
Another very important game capability Blayone mentions, which is relevant to education, is feedback: direct, immediate, relevant, and (important!) __not__ devastating/crushing. As he says, in good games (and in positive and effective educational experiences):
>failure, [is not] something to fear but [...] a welcome, temporary challenge that one expects to encounter many times before achieving success.

Blayone concludes saying that good gamification and game design/implementation are hard work:
>Successful gamification in education requires us to move beyond simple  reward and punishment  gaming scenarios to more nuanced, positive, contextually rich and psychologically aware aspects of game design.
And this is a warning for "new and improved" online education systems (and educational games), which can still be defined and implemented without paying attention to these points, resulting in more ineffective learning.

It's interesting (and educational ;-) to compare with what Chris Crawford has to say on [[The art of computer game design and some implications on learning]] as well as what Raph Koster has to say on [[the Theory of fun|Theory of fun - Raph Koster]]

Ian Bogost warns about the [[dangers of marketing-driven gamification|http://www.bogost.com/blog/gamification_is_bullshit.shtml]] (and don't get put off by the title...), which is very prevalent in business (and education :-( ), and which at least some optimists see as a natural phase in the development of this young discipline.
In a [[New Yorker article|http://www.newyorker.com/magazine/2015/09/14/high-score]] titled //High Score//, Nathan Heller reviews the book //~SuperBetter// by [[Jane McGonigal|http://janemcgonigal.com/]]. The tagline is: A new movement seeks to turn life’s challenges into a game.

Full Disclosure: I have not read the book, but I know of ~McGonigal's work and game-oriented theories, and like the article's author, I always felt a bit uneasy about her almost "2D view" (i.e., flat and two dimensional) of life and living. 

Her quest (and research-based effort) to save the world through gaming, uncovers some meaningful truths about human psychology, and harmful social and individual behaviors and assumptions, and it offers some ideas and techniques for tackling these. But, in my mind, it falls into the same trap of so many other self-improvement and world-betterment movements having a single-minded focus: it's too simplistic a solution (gamification) for too complex a problem (life and its experiences).

As a small example of oversimplification (and perhaps of interpreting correlation (of research findings) as causation - always a dangerous trap), ~McGonigal generalizes from the findings that gamers stick longer with playing games //and// have higher levels of dopamine:
>Work ethic is not a moral virtue. It’s actually a biological condition that can be fostered, purposefully, through activity that increases dopamine.

I feel that living life and looking at it through the glasses of "quests", "power-ups", "bad guys", "allies", and "epic wins" robs it of its depth, nuance, and, yes, spirituality. It's like the difference between "having fun" and "experiencing joy" in life.
Life is not only black/white, win/lose, good/bad, transactional and mission-oriented (with the all-important epic win at the end). It's interesting that as the "preacher" of playing games, ~McGonigal seems to drop/lose the importance of being playful.

There is a well-known, well-researched distinction between targeted and focused ("productive", "efficient") problem solving "mode" (which seems to be what ~McGonigal advocates in her version of game playing and creativity), and playful experimentation and creativity (with possible solutions as the outcome).
She also seems to ignore or down-play (ha!) an important Truth in life, which is that we grow, gain deeper perspectives, and get stronger from failures, sadness, and losses.
An example, in Heller's words:
>Say a family member dies. According to the ~SuperBetter method, you should turn your regret into a bad guy, do your power-ups, tell trustworthy people that you need their gameful help to lick the grief and move on with your life. Maybe you’re successful; you feel better quickly and go back to work. Have you, in that case, won the game?

The author observes that
> If the premise of her earlier work was that a spoonful of sugar can indeed help the medicine go down, “~SuperBetter” insists that the sugar is nutritious, too.
and
>Like many pop-science writers, she likes the idea that research has rendered a binary verdict (does the experiment show something, or not?), ignoring the magnitude and the context of the results (are the effects distinct enough to matter?).

As Heller points out, there is danger in turning life into a game, rather than using games in life. Going through life as if it were a game is "slapping templates or scripts" on it. Rather than being fully aware of all of its complexities, nuances, and novelty, it becomes an effort to "recognize the patterns" (or the type of game/challenge). And once you think you have "identified the situation" you can "deal with it" effectively. But the risk is that you'll tune out and "live shallowly".
Heller says:
>Gamification flies the flag of innovation, but its effect is the opposite. Far from freeing the mind, the approach habituates us to the tidy mechanisms of effort and reward, to established paths, and to prefab narratives. In life, most stories do not climax in the third act and end in heroism.

Games and gamification can teach us important things about life and ourselves. Games and game playing are powerful and impactful (just look at how many people are drawn to gaming, sometimes to the point of addiction). As humans, it is smart to seek wisdom and learn from many different sources and contexts, including gamification. But, again, don't turn life into a game; use games and game-inspired lessons in life. Life is not only about recognizing the (game) scripts and playing your part(s). It is not only about goals and objects; it's also about experiences and verbs. And it is also about being playful and unique, creating your own path and journey, embracing and enjoying serendipity and not following a "framework" or script.
The world chess champion who lost a series of chess games to the IBM Deep Blue supercomputer.
[img[Geometry-inspired building|resources/GeoBuilding1.jpg][resources/GeoBuilding.jpg]]

From The New Yorker Feb. 22, 2016

(dimensionality, 1D, 2D, 3D, and beyond? :)
In his book [["The Most Human Human"|The Most Human Human - by Brian Christian]] (about experiencing/participating in the [[Turing Test|http://www.psych.utoronto.ca/users/reingold/courses/ai/turing.html]]), Brian Christian describes (among other things :) the evolution of chess-playing computers and the historic battle between IBM's Deep Blue and Grandmaster Garry Kasparov.

He makes some observations which have parallels to life and living well, but also to the (less grand :) pursuit of programming (and using design patterns, coding recipes/cookbooks, and innovation/creativity).

Here is what he writes:

Obviously, all chess games begin from the same exact opening position^^1^^. There are only so many legal moves which can be made^^2^^, so it naturally takes some time for a particular game to become unique.
A huge number of opening sequences have been analyzed and documented in what is one part of //The Book//^^3^^.

At the other end of the game -- the end game -- once there are only a few pieces left on the board, there are also specific sequences, or "lines", which will lead to a win. Here too, many, many end games have been analyzed and documented, creating the other part of //The Book//^^3^^.

>The middle game -- where the pieces have moved around enough so that the uniform starting position is a distant memory, but there's enough firepower on the board so that the end game is still far off -- is where games are most different, most unique^^4^^.

In automating chess game playing, the strategy is to shrink that middle part (the "gap") until it disappears so that the opening and end games connect. If this is done, then the computer definitely has the advantage/firepower.

Grandmaster games are said to //begin// with a //novelty//, which is the first move of the game that "exits the book" (i.e., is not played "by the book"). It could be the fifth, it could be the thirty-fifth move. 
We think that a game of chess starts with move one and ends with a checkmate (or draw), but this is not the case. The game begins when it gets out of book, and it ends when it goes into book (for the end game part). Like electricity, it only sparks in the gaps.
The Book is massive. ''A game may end before you get out, but it doesn't begin until you do! Said differently, you may not get out alive; on the other hand, you're not alive until you get out!''

Now isn't this last part also a significant "life lesson"! :)

Or as Christian wraps up: ''we all start off the same and we all end up the same, with a brief moment of difference in between. Fertilization to fertilizer. Ashes to ashes. And we spark across the gap.''

The brilliant mathematician (and the "last universalist"^^5^^) [[Henri Poincaré|https://en.wikipedia.org/wiki/Henri_Poincar%C3%A9]] (searchable spelling: Poincare) said something similar (I think :) :
>geologic history shows us that life is only a short episode between two eternities of death, and that, even in this episode, conscious thought has lasted and will last only a moment. Thought is only a gleam in the midst of a long night. But it is this gleam which is everything.

----
The parallels to programming:
^^1^^ - a blank sheet of paper, or a blank file where the programmer has to start coding.
^^2^^ - the grammar/syntax of the specific programming language used.
^^3^^ - programming cookbooks, programming patterns, application templates, language idioms, etc.
^^4^^ - that's the domain/application-specific code, "glue", middleware, and so on.
^^5^^ - Last Universalist - contributing and impacting Pure and Applied Mathematics, Physics, Astronomy, Engineering and Philosophy.

To get started with this blank [[TiddlyWiki]], you'll need to modify the following tiddlers:
* [[SiteTitle]] & [[SiteSubtitle]]: The title and subtitle of the site, as shown above (after saving, they will also appear in the browser title bar)
* [[MainMenu]]: The menu (usually on the left)
* [[DefaultTiddlers]]: Contains the names of the tiddlers that you want to appear when the TiddlyWiki is opened
You'll also need to enter your username for signing your edits: <<option txtUserName>>
They are all right. (pun intended, is my Buddhism-inspired addition)

__A nerdy/engineering variation__: Given the same half-full/half-empty glass, an ''engineer'' will say that the glass is twice as large as the specifications require.

And there is a [[psychologically-oriented version|Coming or going?]], too.
In an article called [[Getting the Measure of Consciousness|resources/Humphrey_2008GettingTheMeasure.pdf]], [[Nicholas Humphrey|http://www.humphrey.org.uk/]] brings up a critical factor in human understanding (and learning): when we encounter something "puzzling", something we want to understand, are we asking "good" questions? Or even stronger: are we asking the "right" questions? (Since [[questions are like lanterns|John O’Donohue - questions]].)

The context of Humphrey's article is consciousness, philosophy, phenomenology, and human experience and perception. But it has some educational/learning consequences worth reflecting on.

This is crucial since the questions we ask are both the [[initial jumping point|The most exciting phrase to hear in science, the one that heralds new discoveries, is not "Eureka!", but "That's funny...".]], and also the on-going guide throughout our engagement and investigation of the puzzle.

Humphrey is giving a simple and concrete example of a "good" vs. "bad" (i.e. misleading, misguiding) question.

Assuming we haven't seen this, we are shown a picture of the "impossible triangle" and naturally we are puzzled.
[img[click to see the "possible triangle"|./resources/impossible_triangle_1.png][./resources/impossible_triangle_2.png]]

Faced with this puzzle (again, assuming we have never seen this before), our intuitive tendency is to (jump and) ask:  How can we explain the existence of this triangle as we perceive it?  As Humphrey indicates, this is a "bad" question, leading down a [[rabbit hole|Escher's Print Gallery]].
But, after we see the solution ([[by clicking on the image|resources/impossible_triangle_2.png]]), and reflect on it, most of us will come up with a "good" question:  How can we explain the fact we have been tricked into perceiving it this way? 
The two different questions indicate a radical shift in the perception of the person asking. The first question assumes that there is identity between the "object out there" (the "impossible triangle") and our perception of it (in our mind). The second question is "more cautious" and doesn't assume this identity. In fact, it not only questions (ha!) this identity, but rather assumes there is a difference (between what's "out there" and how it's perceived "in here"). And this makes all the difference in the direction the investigation will take.

So here is an interesting (not to say "good", since it is very important) question: how do you learn to ask the "good/right" question? Humphrey seems to focus on the need to reflect on what question to ask, so it guides our investigation well. Dan Meyer, when giving [[advice on how to teach math|The Three Acts Of A Mathematical Story]], seems to rely on gut-level feelings, where a "perplexing" situation intuitively "begs" the question^^1^^.
As it happens in most puzzling situations, especially the ones usually encountered in schools (fortunately for Dan (and us, as students/learners), but not necessarily for us as beings in pursuit of Truth), the puzzles we encounter evoke questions which turn out to be "good". But it's an important meta-cognitive skill for a learner (and a critical ability of any "truth seeker") to be able to reflect on the questions asked, to be cautious (and unassuming, or to check assumptions), and to periodically and critically review the path(s) taken by an investigation, to see whether it led into a [[rabbit hole|Escher's Print Gallery]].



----
^^1^^ From [[The 3 math acts|The Three Acts Of A Mathematical Story]]
>I aspire to be perplexing. I want to perplex my students, to put them in a position to wonder a question so intensely they'll commit to the hard work of getting an answer, whether that's through modeling, experimenting, reading, taking notes, or listening to an explanation.
>A lot of my most perplexing classroom moments have had two elements in common:
>* A visual. A picture or a (short) video.
>* A concise question. One that feels natural. One that people can approach first on a gut level, using their intuition.
>Let's call that a first act. There are still two more acts and a lot of work yet to do, but the first act is above and before everything else.
[>img[Piet Hein gruk|./resources/Gruk 1.png]]
After watching a short [[video clip|https://youtu.be/v678Em6qyzk?t=20]], where Donald Knuth (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]") shows a Grook he has on his wall at home, I've been digging into Grooks the whole week (not [[Robert Heinlein|https://en.wikipedia.org/wiki/Robert_A._Heinlein]]'s [[grok|https://en.wikipedia.org/wiki/Grok]], but [[Piet Hein|https://en.wikipedia.org/wiki/Piet_Hein_(scientist)]]'s [[gruk (Danish) or grook (English)|https://en.wikipedia.org/wiki/Grook]]).

[[Witty, short, often wise poetry|http://www.sophilos.net/GrooksofPietHein.htm]]

For example:

{{{
Problems worthy 
of attack 
prove their worth 
by hitting back. 

 
--
 

Put up in a place 
where it's easy to see 
the cryptic admonishment 
      T.T.T. 

When you feel how depressingly 
slowly you climb, 
it's well to remember that 
      Things Take Time. 


--
     

The road to wisdom? - Well, it's plain 
and simple to express: 
   Err 
   and err 
   and err again 
   but less 
   and less 
   and less. 


--


There is  
one art, 
no more,  
no less: 
to do  
all things  
with art- 
lessness. 


--


Living is
a thing you do
now or never -
which do you?



}}}

and a [[self-annihilating|Self-annihilating sentences]] gruk:
{{{
The Universe may
be as great as they say,
but it wouldn't be missed
if it didn't exist. 
}}}
In a course I had taken at Stanford (part of the [[Learning, Design, and Technology Masters program offered by the School of Education|http://ldtprojects.stanford.edu/~hmark/]]), I [[programmed a chatbot to converse|resources/HAL 9000 chatbot.pdf]] on [[Mayer's principles of multimedia design for education|http://www.cognitivedesignsolutions.com/Media/MediaPrinciples.htm]].

To lend the conversation (really, chatter ;-) an air of lightness/humor (and "intelligence" ;-), the chatbot had the audacity (ahem.. personality) of [[HAL 9000|The end of an era, the beginning of another? HAL, Deep Blue and Kasparov]] from the Stanley Kubrick/Arthur C. Clarke 1968 epic film 2001: A Space Odyssey.

I had implemented this chatbot by programming a free [[pandorabot|http://www.pandorabots.com/botmaster/en/home]] which is an A.L.I.C.E. ([[Artificial Linguistic Internet Computer Entity|http://alice.pandorabots.com/]]), created by Dr. Richard S. Wallace.
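
A.L.I.C.E.-style bots are driven by AIML, which is essentially a set of pattern/template rules matched against the user's input. Purely as an illustration of that idea (not actual AIML, which is XML-based and far richer, and with rule content invented here for the example), a toy sketch in Python:
{{{
# Toy pattern -> template matcher, illustrating the AIML-style rule idea.
# The rules below are made up for illustration; they are not from the course chatbot.
import re

rules = [
    (re.compile(r".*\bpod bay doors\b.*", re.I),
     "I'm sorry, Dave. I'm afraid I can't do that."),
    (re.compile(r".*\bmodality principle\b.*", re.I),
     "Mayer's modality principle: present words as spoken narration rather than on-screen text."),
]

def respond(user_input):
    # Return the template of the first matching pattern, or a generic fallback.
    for pattern, template in rules:
        if pattern.match(user_input):
            return template
    return "Interesting. Tell me more."

print(respond("HAL, open the pod bay doors"))
}}}
Real AIML adds wildcards, recursion (the srai tag), and conversational context on top of this basic pattern/template loop.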
From the Blog [[Without Geometry, Life is Pointless|http://www.withoutgeometry.com]] by Avery Pickford (a math teacher and researcher).

[[Habits of mind|http://www.withoutgeometry.com/2010/09/habits-of-mind.html]] - somewhat in the vein of Polya's ''How to Solve It''

1.    ''Pattern Sniff''
A.     On the lookout for patterns
B.     Looking for and creating shortcuts

2.    ''Experiment, Guess and Conjecture''
A.     Can begin to work on a problem independently
B.     Estimates
C.     Conjectures
D.    Healthy skepticism of experimental results
E.     Determines lower and upper bounds
F.     Looks at small or large cases to find and test conjectures
G.     Is thoughtful and purposeful about which case(s) to explore
H.    Keeps all but one variable fixed
I.      Varies parameters in regular and useful ways
J.      Works backwards (guesses at a solution and see if it makes sense)

3.    ''Organize and Simplify''
A.     Records results in a useful way
B.     Process, solutions and answers are detailed and easy to follow
C.     Looks at information about the problem or solution in different ways
D.    Determine whether the problem can be broken up into simpler pieces
E.     Considers the form of data (deciding when, for example, 1+2 is more helpful than 3)
F.     Uses parity and other methods to simplify and classify cases

4.    ''Describe''
A.     Verbal/visual articulation of thoughts, results, conjectures, arguments, process, proofs, questions, opinions
B.     Written articulation of thoughts, results, conjectures, arguments, process, proofs, questions, opinions
C.     Can explain both how and why
D.    Creates precise problems
E.     Invents notation and language when helpful
F.     Ensures that this invented notation and language is precise

5.     ''Tinker and Invent''
A.   Creates variations
B.     Looks at simpler examples when necessary (change variables to numbers, change values, reduce or increase the number of conditions, etc)
C.     Looks at more complicated examples when necessary
D.    Creates extensions and generalizations
E.     Creates algorithms for doing things
F.     Looks at statements that are generally false to see when they are true
G.     Creates and alters rules of a game
H.    Creates axioms for a mathematical structure
I.      Invents new mathematical systems that are innovative, but not arbitrary

6.    ''Visualize''
A.     Uses pictures to describe and solve problems
B.     Uses manipulatives to describe and solve problems
C.     Reasons about shapes
D.    Visualizes data
E.     Looks for symmetry
F.     Visualizes relationships (using tools such as Venn diagrams and graphs)
G.     Visualizes processes (using tools such as graphic organizers)
H.    Visualizes changes
I.      Visualizes calculations (such as doing arithmetic mentally)

7.    ''Strategize, Reason and Prove''
A.     Moves from data driven conjectures to theory based conjectures
B.     Tests conjectures using thoughtful cases
C.     Proves conjectures using reasoning
E.    Looks for mistakes or holes in proofs
F.  Uses indirect reasoning or a counter-example (Park School)
G.  Uses inductive proof

8.    ''Connect''
A.     Articulates how different skills and concepts are related
B.     Applies old skills and concepts to new material
C.     Describes problems and solutions using multiple representations
D.    Finds and exploits similarities between problems (invariants, isomorphisms)

9.    ''Listen and Collaborate''
A.     Respectful to others when they are talking
B.     Asks for clarification when necessary
C.     Challenges others in a respectful way when there is disagreement
D.    Participates
E.     Ensures that everyone else has the chance to participate
F.     Willing to ask questions when needed
G.     Willing to help others when needed
H.    Shares work in an equitable way
I.      Gives others the opportunity to have “aha” moments

10. ''Contextualize, Reflect and Persevere''
A.     Determines givens
B.     Eliminates unimportant information
C.     Makes and articulates reasonable assumptions
D.    Determines if answer is reasonable by looking at units, magnitudes, shape, limiting cases, etc.
E.     Determines if there are additional or easier explanations
F.     Continuously reflects on process
G.     Works on one problem for greater and greater lengths of time
H.    Spends more and more time stuck without giving up

[[About me]]
[[About me|About me]]
From [[a blog post|http://blog.kenperlin.com/?p=15619]] by [[Ken Perlin|http://mrl.nyu.edu/~perlin/]] (a CS professor at NYU):


(experience)

what teaches us to
recognize our mistakes the
next time we make them


(fun)

it is what you have
when you’re not thinking at all
about what you have


(human)

our name for any
living creature in which we
recognize ourselves


(stack (programming data structure))

when it overflows
we have arrived at the end
of the infinite
Sir Halford John Mackinder (15 February 1861 – 6 March 1947) was an English geographer and is considered one of the founding fathers of both geopolitics and geostrategy.
I was the architect and tech lead of a web-based system for managing a global collection of networking labs containing many millions of dollars' worth of networking equipment. The labs were used by Cisco Systems Engineers to provide demonstrations to customers, showcasing new solutions, technologies, and devices.
!!!!Why
The Cisco technical sales force (Systems Engineers) needed to showcase new networking capabilities to Cisco customers throughout the world. The equipment was very expensive, was located in several sites around the world, and had to be efficiently managed and supported.
Also, the Systems Engineers had to be trained on the latest technologies and devices in the most effective and efficient way, so that they would be well prepared to deliver effective customer demonstrations.

!!!!What
In addition to the overall management system, I designed and implemented a hands-on, web-based training capability to enable the engineers to prepare and practice for customer demos. The training system monitored the engineers' activities and compared configurations and results to "reference implementations" which were available for the actual demos (prepared by Subject Matter Experts).
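
Purely as an illustration of the configuration-comparison idea (none of the actual system's code is shown here, and the helper below is hypothetical), a line-level diff against a Subject Matter Expert's reference configuration could be sketched like this:
{{{
# Hypothetical sketch: compare a trainee's device configuration against a
# Subject Matter Expert's "reference implementation". Not the actual system code.
import difflib

def compare_to_reference(trainee_config, reference_config):
    # Return a unified diff of the trainee's config against the reference.
    return list(difflib.unified_diff(
        reference_config.splitlines(),
        trainee_config.splitlines(),
        fromfile="reference", tofile="trainee", lineterm=""))

reference = "interface Ethernet0\n ip address 10.0.0.1 255.255.255.0"
trainee = "interface Ethernet0\n ip address 10.0.0.2 255.255.255.0"
for line in compare_to_reference(trainee, reference):
    print(line)
}}}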

!!!!Human Performance Support
The system embedded a few Human Performance Support principles:
* combining learning with doing
** the system supported learning while doing, by offering information about networking technologies, Cisco equipment, device configurations, and so on, all embedded within the performance activities of correctly configuring the network, and preparing for the customer demo.
* just-in-time, on-demand training
** the learning resources and activities were available within the work context, enabling the performer to go back and forth between learning and doing without switching context
* Deep performance support
** the system provided information, examples, skills training activities, Expert Advice (via monitoring and comparison to "reference solutions"), and task automation (through automating configuration tasks, presentation and justification).
A [[paper on Functional Programming|http://worrydream.com/refs/Backus-CanProgrammingBeLiberated.pdf]] by John Backus from 1978 (!):
>Conventional programming languages are growing ever more enormous, but not stronger. Inherent defects at the most basic level cause them to be both fat and weak: their primitive word-at-a-time style of programming inherited from their common ancestor—the von Neumann computer, their close coupling of semantics to state transitions, their division of programming into a world of expressions and a world of statements, their inability to effectively use powerful combining forms for building new programs from existing ones, and their lack of useful mathematical properties for reasoning about programs.
>
>An alternative functional style of programming is founded on the use of combining forms for creating programs. Functional programs deal with structured data, are often nonrepetitive and nonrecursive, are hierarchically constructed, do not name their arguments, and do not require the complex machinery of procedure declarations to become generally applicable. Combining forms can use high level programs to build still higher level ones in a style not possible in conventional languages.

Communications of the ACM
August 1978 Volume 21 Number 8
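
The paper's signature example of this style is defining inner product "point-free", as a composition of combining forms (Insert +, ApplyToAll x, Transpose), with no named data arguments. A rough Python rendering of that idea (my sketch, not code from the paper):
{{{
# FP-style inner product: IP = (Insert +) o (ApplyToAll x) o Transpose
from functools import reduce
import operator

def compose(*fs):
    # Right-to-left composition: compose(f, g)(x) == f(g(x)).
    return lambda x: reduce(lambda acc, f: f(acc), reversed(fs), x)

transpose = lambda pair: list(zip(*pair))                # Transpose
apply_to_all_times = lambda ps: [a * b for a, b in ps]   # ApplyToAll x
insert_plus = lambda xs: reduce(operator.add, xs)        # Insert +

inner_product = compose(insert_plus, apply_to_all_times, transpose)

print(inner_product([[1, 2, 3], [4, 5, 6]]))  # 1*4 + 2*5 + 3*6 = 32
}}}
Note how the program is built entirely by combining smaller programs; the data never gets a name, which is exactly the "combining forms" point Backus is making.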
He who has imagination without learning has wings but no feet.
I designed and implemented a human performance support system for a Unix technical support helpdesk group, providing just-in-time, on-demand training as part of the call answering process.

As support team members answer support calls on the phone, they can search on similar problems, review examples, study troubleshooting procedures, and so on, all within the context of the current problem and support case.
a French mathematician, theoretical physicist, engineer, and philosopher of science.
In a thoughtful review of Hermann Hesse's provocative essay titled “On Reading Books” (1920), [[Maria Popova of BrainPickings covers Hesse's analysis|https://www.brainpickings.org/2016/07/11/hermann-hesse-types-of-readers/]] of three types of readers (see also [[A Helpful Guide to Reading Better - Farnam Street]]):
* The "naive reader", the reader who experiences a book merely as content, be it intellectual or aesthetic:
> Everyone reads naïvely at times. This reader consumes a book as one consumes food, he eats and drinks to satiety, he is simply a taker, [...]. This kind of reader is not related to a book as one person is to another but rather as a horse to his manger or perhaps as a horse to his driver: the book leads, the reader follows. The substance is taken objectively, accepted as reality.
>[...] This kind of reader assumes in an uncomplicated way that a book is there simply and solely to be read faithfully and attentively and to be judged according to its content or its form. Just as a loaf of bread is there to be eaten and a bed to be slept in.

* The imaginative investigator (per Popova) — a reader endowed with childlike wonderment, who sees past the superficialities of content to plumb the depths of the writer’s creative impulse:
>This reader treasures neither the substance nor the form of a book as its single most important value. He knows, in the way children know, that every object can have ten or a hundred meanings for the mind. He can, for example, watch a poet or philosopher struggling to persuade himself and this reader of his interpretation and evaluation of things, and he can smile because he sees in the apparent choice and freedom of the poet simply compulsion and passivity. This reader is already so far advanced that he knows what professors of literature and literary critics are mostly completely ignorant of: that there is no such thing as a free choice of material or form.
> […] From this point of view the so-called aesthetic values almost disappear, and it can be precisely the writer’s mishaps and uncertainties that furnish much the greatest charm and value. For this reader follows the poet not the way a horse obeys his driver but the way a hunter follows his prey, and a glimpse suddenly gained into what lies beyond the apparent freedom of the poet, into the poet’s compulsion and passivity, can enchant him more than all the elegance of good technique and cultivated style.

* The third reader is really not a reader at all, but rather a dreamer and interpreter:
> [This reader] is apparently the exact reverse of what is generally called a “good” reader. He is so completely an individual, so very much himself, that he confronts his reading matter with complete freedom. He wishes neither to educate nor to entertain himself, he uses a book exactly like any other object in the world, for him it is simply a point of departure and a stimulus. Essentially it makes no difference to him what he reads. He does not need a philosopher in order to learn from him, to adopt his teaching, or to attack or criticize him. He does not read a poet to accept his interpretation of the world; he interprets it for himself. He is, if you like, completely a child. He plays with everything — and from one point of view there is nothing more fruitful and rewarding than to play with everything. If this reader finds a beautiful sentence in a book, a truth, a word of wisdom, he begins by experimentally turning it upside down.
>[This reader] has known for a long time that for each truth the opposite also is true. He has known for a long time that every intellectual point of view is a pole to which an equally valid antipole exists. He is a child insofar as he puts a high value on associative thinking, but he knows the other sort as well.
>[...] This reader is able, or rather each one of us is able, at the hour in which he is at this stage, to read whatever he likes, a novel or grammar, a railroad timetable, a galley proof from the printer. At the hour when our imagination and our ability to associate are at their height, we really no longer read what is printed on the paper but swim in a stream of impulses and inspirations that reach us from what we are reading. They may come out of the text, they may simply emerge from the type face. An advertisement in a newspaper can become a revelation; the most exhilarating, the most affirmative thoughts can spring from a completely irrelevant word if one turns it about, playing with its letters as with a jigsaw puzzle. In this stage one can read the story of Little Red Riding Hood as a cosmogony or philosophy, or as a flowery erotic poem. Or one can read the label “Colorado maduro” on a box of cigars, play with the words, letters, and sounds, and thereby take a tour through the hundred kingdoms of knowledge, memory, and thought.
>[...] The reader at the third stage is no longer a reader. The person who remained there permanently would soon not read at all, for the design in a rug or the arrangement of the stones in a wall would be of exactly as great a value to him as the most beautiful page full of the best-arranged letters. The one book for him would be a page with the letters of the alphabet.
And Hesse brings this to a logically paradoxical (and "shocking") conclusion:
>So be it: the reader at the last stage is really no longer a reader at all, he doesn’t give a hoot about Goethe, he doesn’t read Shakespeare. The reader in the last stage simply doesn’t read any more. Why books? Has he not the entire world within himself?
But then he resolves this seeming paradox and shocking conclusion with:
>Whoever remained permanently at this stage would not read any more, but no one does remain permanently at this stage. But whoever is not acquainted with this stage is a poor, an immature reader. He does not know that all the poetry and all the philosophy in the world lie within him too, that the greatest poet drew from no other source than the one each of us has within his own being. For just once in your life remain for an hour, a day at the third stage, the stage of not-reading-any-more. You will thereafter (it’s so easy to slip back) be that much better a reader, that much better a listener and interpreter of everything written. Stand just once at the stage where the stone by the road means as much to you as Goethe and Tolstoy, you will thereafter gain from Goethe, Tolstoy, and all poets infinitely more value, more sap and honey, more affirmation of life and of yourself than ever before. For the works of Goethe are not Goethe and the volumes of Dostoevsky are not Dostoevsky, they are only an attempt, a dubious and never successful attempt, to conjure up the many-voiced multitudinous world of which he was the central point.

In relation to the third type of reader, my "hope springs eternal", which in this case translates to "I believe that if one has books on the shelf, at one point or another one will crack them open", and if nothing in one's past/education has gone terribly wrong, one will be hooked and will not stop reading.

Alison Gopnik tells [[an interesting story about this|The potential and dangers of new technologies - echoes of a recurring theme]]. I think that Hesse would agree with the definition of books as 'devices' in the case of the third type of reader.
[[from History of the chess table|http://www.research.ibm.com/deepblue/learn/html/e.8.5.html]] by Monty Newborn^^1^^

compare to [[The end of an era, the beginning of another? HAL, Deep Blue and Kasparov|The end of an era, the beginning of another? HAL, Deep Blue and Kasparov]]

We have recently watched a new hero emerge in the world of sports. Tiger Woods has taken golf to a new plateau with his brilliant play in the Augusta Masters. He has shown the world what a combination of hard work and talent can do. Moreover, he has created an unprecedented interest among youth in his sport.

Two other new heroes will emerge in the weeks to come. [[Deep Blue and Garry Kasparov are taking chess to new levels of excellence|https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov]], and the world will watch with similar admiration.

Both have shown what can be done with hard work and talent and imagination. Though fortunate to have been born a genius, Garry Kasparov works harder than anyone to be champion; the Deep Blue team is composed of the cream of scientific talent, and they too work with a passion and dedication to their mission.

In the last year, Garry Kasparov has played some of his finest chess and is coming to New York at the top of his career. Deep Blue will be significantly stronger too, searching twice as many positions per second and searching them with enhanced chess knowledge. The match promises to be an outstanding contest, even more thrilling than last year's. We are going to witness dramatic history at the chess table.

But more than an exciting battle, this match -- as was last year's in Philadelphia -- will be remembered as a landmark in the evolution of mankind's powerful new tool. Who of the early pioneers in computer chess -- Claude Shannon, Alan Turing, Herbert Simon, Norbert Wiener, John von Neumann -- would have imagined in the late 1950s, when an IBM 704 first played chess, that 40 years later computers would be a million times more powerful? Who then would have imagined that in 1997 a computer would be examining 200,000,000 chess positions per second and searching to depths of 14 levels when making a move?

But science is filled with surprises and developing a chess program has had its share. Researchers in the 1950s and 1960s felt that if computers were to play chess at the level of the best humans -- a task many said required intelligence -- they should be programmed to play like grandmasters. Many maintained that computers should be programmed in sophisticated programming languages that would make it easy for programmers to incorporate the thought process of grandmasters into their programs.

In addition, computers should be programmed to carry out some sort of selective search as we envision grandmasters do. But Deep Blue has followed a different path. It is programmed in C, a language that looks more like assembly language than anything fit for chess, and the brute force approach taken by Deep Blue's alpha-beta search is apparently in vivid contrast with the search done by grandmasters.

This is not to say that we haven't learned a lot about human intelligence and solving complex problems.

First, we realize that the sheer power of computers, combined with our creative mind, will permit us to solve many problems that have seemed beyond our reach. We have witnessed a million-fold increase in computer power over the last 40 years, and we are beginning to understand the implications of another million-fold increase. We have seen the process of software development go through revolutionary improvements during this period, making a programmer's task immeasurably easier.

Second, we have learned that the definition of intelligence is elusive. Although computers are playing grandmaster-level chess, does it follow that they have any intelligence? When computers of the future prove mathematical theorems that have stymied the greatest human minds thus far, will they then display intelligence? Or when computers compose music that leaves a Carnegie Hall audience in tears, what then?

Third, we have learned something about learning itself. While there have been many attempts to program machine learning, there have been no great successes to date. Computers have been taught to play chess, but learning how to improve their own play as we do is centuries away.

As computers will remain our partners for the foreseeable future, it is important that we design them in ways that improve our own lives. We need them today to assist us with countless tasks where their abilities exceed our own. We will need them eventually in our quest of outer space. This partnership has just begun, but if the last half-century is any indication of what is to come, as reflected in achievements such as those of Deep Blue, we may be in for many more pleasant surprises.
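
As an aside (mine, not Newborn's): below is a minimal, generic sketch of the alpha-beta search mentioned above, in Python. It only illustrates the pruning idea; Deep Blue's real search was vastly more elaborate and ran largely in special-purpose hardware, and the "moves", "apply_move", and "evaluate" parameters here are hypothetical placeholders for a real game's rules.

{{{
def alphabeta(state, depth, alpha, beta, maximizing,
              moves, apply_move, evaluate):
    # Search `depth` plies ahead, pruning branches that cannot
    # change the final minimax value.
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        value = float('-inf')
        for m in legal:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False,
                                         moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:   # the opponent would never allow this line
                break
        return value
    else:
        value = float('inf')
        for m in legal:
            value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, True,
                                         moves, apply_move, evaluate))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value
}}}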


In a related article in the WSJ titled [["‘The Grandmaster’ Review: The 64-Square Universe"|https://www.wsj.com/articles/the-grandmaster-review-the-64-square-universe-1541723866]], Brad Leithauser covers a book by Brin-Jonathan Butler about the current grandmasters: Norway's Magnus Carlsen and Russia's Sergey Karjakin.
Leithauser writes:
>A former head of the World Chess Federation, Kirsan Ilyumzhinov, once suggested that the game was a gift to humanity from extraterrestrial visitors. During his lengthy administration (1995-2018), Mr. Ilyumzhinov was accused of many irregularities, including a touch of madness. But his speculation about extraterrestrials seems to me a model of sober levelheadedness. Clearly, the beauty of the game is otherworldly. Like Bach’s “Goldberg Variations” or Vermeer’s interiors or Chartres Cathedral, chess can hardly be the product of a merely human ingenuity. 
>[...]
>Wedded to romantic notions, Mr. Butler loses sight of what is most interesting in the computer’s ascendancy: Chess has become a domain where problem-solving machines impress us not merely with their efficiency and thoroughness but also with their ability to isolate and blazon beauty. They move us aesthetically. The finest chess players can only shake their heads in resigned dazzlement at the loveliness produced by a mechanism that is only executing calculations—that doesn’t in any meaningful sense understand it is playing a game. 
>[...]
>I sympathize with Mr. Butler’s appetite for an “oasis” [i.e., a domain exclusively dominated by humans, not computers] devoted “exclusively [to] the human imagination.” But chess is no longer such a place. I suppose one could repair to poetry or to music—zones where the human spirit might find some restful reflection, free of interlopers. Yet our machines never rest. Even now, perhaps they are concatenating words into lines of verse, arranging musical notes on a staff. Our fellow pilgrims, they likewise are, willy-nilly, chasing beauty. And, I suspect, will be ready to meet us before we’re ready for them. 


----
^^1^^ - Monty Newborn serves as chairman of the ACM Computer Chess Committee, a position he has held since the early 1980s. This committee is in charge of officiating the IBM Kasparov Versus Deep Blue Rematch.

Newborn is a professor of computer science at ~McGill University in Montreal. His chess program, OSTRICH, competed in five world championships dating back to the first in 1974. He served as president of the International Computer Chess Association from 1983 to 1986. His research interests center around computer chess and automated theorem proving.

Newborn has authored five books on computer chess, the latest being Kasparov versus DEEP BLUE: Computer Chess Comes of Age, published by Springer-Verlag of New York.
I stumbled across this [[insightful and gentle review|https://www.brainpickings.org/2015/01/27/lewis-carroll-letter-writing-email/]] by Maria Popova in ~BrainPickings, of Lewis Carroll's little pamphlet "Eight or Nine Wise Words about ~Letter-Writing" (1890), and found it very relevant and potentially very constructive to our day and age.

Most of us have long ago (probably since the 1990's :) abandoned (or never started?) letter writing in favor of emails (if that; some/most moved on to shorter forms of communication, with what I sometimes fear, the ultimate goal/form being communication via grunts :). But while Carroll writes about writing letters, I think that a lot of his "wise advice" still holds, if your aim is to co-respond intelligently, constructively, and kindly.

In other words (Popova's :), if your goal is to write
>the kind of slow, contemplative correspondence that Virginia Woolf termed “the humane art.” For what more humane an act is there than correspondence itself — the art of mutual response — especially amid a culture of knee-jerk reactions that is the hallmark of most communication today? Letters, by their very nature, make us pause to reflect on what the other person is saying and on what we’d like to say to them in response. Only when we step out of the reactive ego, out of the anxious immediacy that text-messaging and email have instilled in us, and contemplate what is being communicated — only then do we stand a chance of being civil to one another, and maybe even kind.

So here are a few wise suggestions from the author of "Alice in Wonderland" and "Through the Looking Glass":
* If the Letter is to be in answer to another, begin by getting out that other letter and reading it through, in order to refresh your memory, as to what it is you have to answer… A great deal of the bad writing in the world comes simply from writing too quickly.
** Or, looked at from a different perspective by the inimitable Daniel Dennett (in his book [[Intuition Pumps and Other Tools for Thinking|http://www.openculture.com/2013/05/philosopher_daniel_dennett_presents_seven_tools_for_critical_thinking.html]]), who explains "How to compose a successful critical commentary":
*** You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, “Thanks, I wish I’d thought of putting it that way."
*** You should list any points of agreement (especially if they are not matters of general or widespread agreement).
*** You should mention anything you have learned from your target.
*** Only then are you permitted to say so much as a word of rebuttal or criticism.


* When you have written a letter that you feel may possibly irritate your friend, however necessary you may have felt it to so express yourself, put it aside till the next day. Then read it over again, and fancy it addressed to yourself. This will often lead to your writing it all over again, taking out a lot of the vinegar and pepper, and putting in honey instead, and thus making a much more palatable dish of it!
** and, I think, in the same vein, Arthur Martine counseled in his [[magnificent guide to the art of conversation|https://www.brainpickings.org/2013/04/17/the-art-of-conversation-martine-etiquette-1866/]] (1866), “In disputes upon moral or scientific points, let your aim be to come at truth, not to conquer your opponent. So you never shall be at a loss in losing the argument, and gaining a new discovery.”

* If your friend makes a severe remark, either leave it unnoticed, or make your reply distinctly less severe: and if he makes a friendly remark, tending towards “making up” the little difference that has arisen between you, let your reply be distinctly more friendly. If, in picking a quarrel, each party declined to go more than three-eighths of the way, and if, in making friends, each was ready to go five-eighths of the way — why, there would be more reconciliations than quarrels!
** My aside: it's interesting to compare this advice to the [[ethical behavior strategy and tit-for-tat|On ethical behavior]] in situations of [[Prisoner's Dilemmas|Summary of the Prisoner’s Dilemma]].

* Don’t try to have the last word! How many a controversy would be nipped in the bud, if each was anxious to let the other have the last word! Never mind how telling a rejoinder you leave unuttered: never mind your friend’s supposing that you are silent from lack of anything to say: let the thing drop, as soon as it is possible without discourtesy: remember “speech is silvern, but silence is golden”!

* Don’t repeat yourself. When once you have said your say, fully and clearly, on a certain point, and have failed to convince your friend, drop that subject: to repeat your arguments, all over again, will simply lead to his doing the same; and so you will go on, like a Circulating Decimal. Did you ever know a Circulating Decimal come to an end? [Carroll was a mathematician after all, so I guess he couldn't help letting it slip out ... :)]

* If it should ever occur to you to write, jestingly, in dispraise of your friend, be sure you exaggerate enough to make the jesting obvious: a word spoken in jest, but taken as earnest, may lead to very serious consequences. I have known it to lead to the breaking-off of a friendship.

by [[Paul Graham|http://www.paulgraham.com/love.html]] - the founder of [[Y Combinator|http://ycombinator.com/apply.html]].

(found during a wanderful (pun intended) web surfing weekend through [[BrainPickings|http://www.brainpickings.org/index.php/2012/02/27/purpose-work-love/]])

January 2006

To do something well you have to like it. That idea is not exactly novel. We've got it down to four words: "Do what you love." But it's not enough just to tell people that. Doing what you love is complicated.

The very idea is foreign to what most of us learn as kids. When I was a kid, it seemed as if work and fun were opposites by definition. Life had two states: some of the time adults were making you do things, and that was called work; the rest of the time you could do what you wanted, and that was called playing. Occasionally the things adults made you do were fun, just as, occasionally, playing wasn't -- for example, if you fell and hurt yourself. But except for these few anomalous cases, work was pretty much defined as not-fun.

And it did not seem to be an accident. School, it was implied, was tedious because it was preparation for grownup work.

The world then was divided into two groups, grownups and kids. Grownups, like some kind of cursed race, had to work. Kids didn't, but they did have to go to school, which was a dilute version of work meant to prepare us for the real thing. Much as we disliked school, the grownups all agreed that grownup work was worse, and that we had it easy.

Teachers in particular all seemed to believe implicitly that work was not fun. Which is not surprising: work wasn't fun for most of them. Why did we have to memorize state capitals instead of playing dodgeball? For the same reason they had to watch over a bunch of kids instead of lying on a beach. You couldn't just do what you wanted.

I'm not saying we should let little kids do whatever they want. They may have to be made to work on certain things. But if we make kids work on dull stuff, it might be wise to tell them that tediousness is not the defining quality of work, and indeed that the reason they have to work on dull stuff now is so they can work on more interesting stuff later. [1]

Once, when I was about 9 or 10, my father told me I could be whatever I wanted when I grew up, so long as I enjoyed it. I remember that precisely because it seemed so anomalous. It was like being told to use dry water. Whatever I thought he meant, I didn't think he meant work could literally be fun -- fun like playing. It took me years to grasp that.

''Jobs''

By high school, the prospect of an actual job was on the horizon. Adults would sometimes come to speak to us about their work, or we would go to see them at work. It was always understood that they enjoyed what they did. In retrospect I think one may have: the private jet pilot. But I don't think the bank manager really did.

The main reason they all acted as if they enjoyed their work was presumably the upper-middle class convention that you're supposed to. It would not merely be bad for your career to say that you despised your job, but a social faux-pas.

Why is it conventional to pretend to like what you do? The first sentence of this essay explains that. If you have to like something to do it well, then the most successful people will all like what they do. That's where the upper-middle class tradition comes from. Just as houses all over America are full of chairs that are, without the owners even knowing it, nth-degree imitations of chairs designed 250 years ago for French kings, conventional attitudes about work are, without the owners even knowing it, nth-degree imitations of the attitudes of people who've done great things.

What a recipe for alienation. By the time they reach an age to think about what they'd like to do, most kids have been thoroughly misled about the idea of loving one's work. School has trained them to regard work as an unpleasant duty. Having a job is said to be even more onerous than schoolwork. And yet all the adults claim to like what they do. You can't blame kids for thinking "I am not like these people; I am not suited to this world."

Actually they've been told three lies: the stuff they've been taught to regard as work in school is not real work; grownup work is not (necessarily) worse than schoolwork; and many of the adults around them are lying when they say they like what they do.

The most dangerous liars can be the kids' own parents. If you take a boring job to give your family a high standard of living, as so many people do, you risk infecting your kids with the idea that work is boring. [2] Maybe it would be better for kids in this one case if parents were not so unselfish. A parent who set an example of loving their work might help their kids more than an expensive house. [3]

It was not till I was in college that the idea of work finally broke free from the idea of making a living. Then the important question became not how to make money, but what to work on. Ideally these coincided, but some spectacular boundary cases (like Einstein in the patent office) proved they weren't identical.

The definition of work was now to make some original contribution to the world, and in the process not to starve. But after the habit of so many years my idea of work still included a large component of pain. Work still seemed to require discipline, because only hard problems yielded grand results, and hard problems couldn't literally be fun. Surely one had to force oneself to work on them.

If you think something's supposed to hurt, you're less likely to notice if you're doing it wrong. That about sums up my experience of graduate school.

''Bounds''

How much are you supposed to like what you do? Unless you know that, you don't know when to stop searching. And if, like most people, you underestimate it, you'll tend to stop searching too early. You'll end up doing something chosen for you by your parents, or the desire to make money, or prestige -- or sheer inertia.

Here's an upper bound: Do what you love doesn't mean, do what you would like to do most this second. Even Einstein probably had moments when he wanted to have a cup of coffee, but told himself he ought to finish what he was working on first.

It used to perplex me when I read about people who liked what they did so much that there was nothing they'd rather do. There didn't seem to be any sort of work I liked that much. If I had a choice of (a) spending the next hour working on something or (b) being teleported to Rome and spending the next hour wandering about, was there any sort of work I'd prefer? Honestly, no.

But the fact is, almost anyone would rather, at any given moment, float about in the Caribbean, or have sex, or eat some delicious food, than work on hard problems. The rule about doing what you love assumes a certain length of time. It doesn't mean, do what will make you happiest this second, but what will make you happiest over some longer period, like a week or a month.

Unproductive pleasures pall eventually. After a while you get tired of lying on the beach. If you want to stay happy, you have to do something.

As a lower bound, you have to like your work more than any unproductive pleasure. You have to like what you do enough that the concept of "spare time" seems mistaken. Which is not to say you have to spend all your time working. You can only work so much before you get tired and start to screw up. Then you want to do something else -- even something mindless. But you don't regard this time as the prize and the time you spend working as the pain you endure to earn it.

I put the lower bound there for practical reasons. If your work is not your favorite thing to do, you'll have terrible problems with procrastination. You'll have to force yourself to work, and when you resort to that the results are distinctly inferior.

To be happy I think you have to be doing something you not only enjoy, but admire. You have to be able to say, at the end, wow, that's pretty cool. This doesn't mean you have to make something. If you learn how to hang glide, or to speak a foreign language fluently, that will be enough to make you say, for a while at least, wow, that's pretty cool. What there has to be is a test.

So one thing that falls just short of the standard, I think, is reading books. Except for some books in math and the hard sciences, there's no test of how well you've read a book, and that's why merely reading books doesn't quite feel like work. You have to do something with what you've read to feel productive.

I think the best test is one Gino Lee taught me: to try to do things that would make your friends say wow. But it probably wouldn't start to work properly till about age 22, because most people haven't had a big enough sample to pick friends from before then.

''Sirens''

What you should not do, I think, is worry about the opinion of anyone beyond your friends. You shouldn't worry about prestige. Prestige is the opinion of the rest of the world. When you can ask the opinions of people whose judgement you respect, what does it add to consider the opinions of people you don't even know? [4]

This is easy advice to give. It's hard to follow, especially when you're young. [5] Prestige is like a powerful magnet that warps even your beliefs about what you enjoy. It causes you to work not on what you like, but what you'd like to like.

That's what leads people to try to write novels, for example. They like reading novels. They notice that people who write them win Nobel prizes. What could be more wonderful, they think, than to be a novelist? But liking the idea of being a novelist is not enough; you have to like the actual work of novel-writing if you're going to be good at it; you have to like making up elaborate lies.

Prestige is just fossilized inspiration. If you do anything well enough, you'll make it prestigious. Plenty of things we now consider prestigious were anything but at first. Jazz comes to mind -- though almost any established art form would do. So just do what you like, and let prestige take care of itself.

Prestige is especially dangerous to the ambitious. If you want to make ambitious people waste their time on errands, the way to do it is to bait the hook with prestige. That's the recipe for getting people to give talks, write forewords, serve on committees, be department heads, and so on. It might be a good rule simply to avoid any prestigious task. If it didn't suck, they wouldn't have had to make it prestigious.

Similarly, if you admire two kinds of work equally, but one is more prestigious, you should probably choose the other. Your opinions about what's admirable are always going to be slightly influenced by prestige, so if the two seem equal to you, you probably have more genuine admiration for the less prestigious one.

The other big force leading people astray is money. Money by itself is not that dangerous. When something pays well but is regarded with contempt, like telemarketing, or prostitution, or personal injury litigation, ambitious people aren't tempted by it. That kind of work ends up being done by people who are "just trying to make a living." (Tip: avoid any field whose practitioners say this.) The danger is when money is combined with prestige, as in, say, corporate law, or medicine. A comparatively safe and prosperous career with some automatic baseline prestige is dangerously tempting to someone young, who hasn't thought much about what they really like.

The test of whether people love what they do is whether they'd do it even if they weren't paid for it -- even if they had to work at another job to make a living. How many corporate lawyers would do their current work if they had to do it for free, in their spare time, and take day jobs as waiters to support themselves?

This test is especially helpful in deciding between different kinds of academic work, because fields vary greatly in this respect. Most good mathematicians would work on math even if there were no jobs as math professors, whereas in the departments at the other end of the spectrum, the availability of teaching jobs is the driver: people would rather be English professors than work in ad agencies, and publishing papers is the way you compete for such jobs. Math would happen without math departments, but it is the existence of English majors, and therefore jobs teaching them, that calls into being all those thousands of dreary papers about gender and identity in the novels of Conrad. No one does that kind of thing for fun.

The advice of parents will tend to err on the side of money. It seems safe to say there are more undergrads who want to be novelists and whose parents want them to be doctors than who want to be doctors and whose parents want them to be novelists. The kids think their parents are "materialistic." Not necessarily. All parents tend to be more conservative for their kids than they would for themselves, simply because, as parents, they share risks more than rewards. If your eight year old son decides to climb a tall tree, or your teenage daughter decides to date the local bad boy, you won't get a share in the excitement, but if your son falls, or your daughter gets pregnant, you'll have to deal with the consequences.

''Discipline''

With such powerful forces leading us astray, it's not surprising we find it so hard to discover what we like to work on. Most people are doomed in childhood by accepting the axiom that work = pain. Those who escape this are nearly all lured onto the rocks by prestige or money. How many even discover something they love to work on? A few hundred thousand, perhaps, out of billions.

It's hard to find work you love; it must be, if so few do. So don't underestimate this task. And don't feel bad if you haven't succeeded yet. In fact, if you admit to yourself that you're discontented, you're a step ahead of most people, who are still in denial. If you're surrounded by colleagues who claim to enjoy work that you find contemptible, odds are they're lying to themselves. Not necessarily, but probably.

Although doing great work takes less discipline than people think -- because the way to do great work is to find something you like so much that you don't have to force yourself to do it -- finding work you love does usually require discipline. Some people are lucky enough to know what they want to do when they're 12, and just glide along as if they were on railroad tracks. But this seems the exception. More often people who do great things have careers with the trajectory of a ping-pong ball. They go to school to study A, drop out and get a job doing B, and then become famous for C after taking it up on the side.

Sometimes jumping from one sort of work to another is a sign of energy, and sometimes it's a sign of laziness. Are you dropping out, or boldly carving a new path? You often can't tell yourself. Plenty of people who will later do great things seem to be disappointments early on, when they're trying to find their niche.

Is there some test you can use to keep yourself honest? One is to try to do a good job at whatever you're doing, even if you don't like it. Then at least you'll know you're not using dissatisfaction as an excuse for being lazy. Perhaps more importantly, you'll get into the habit of doing things well.

Another test you can use is: always produce. For example, if you have a day job you don't take seriously because you plan to be a novelist, are you producing? Are you writing pages of fiction, however bad? As long as you're producing, you'll know you're not merely using the hazy vision of the grand novel you plan to write one day as an opiate. The view of it will be obstructed by the all too palpably flawed one you're actually writing.

"Always produce" is also a heuristic for finding the work you love. If you subject yourself to that constraint, it will automatically push you away from things you think you're supposed to work on, toward things you actually like. "Always produce" will discover your life's work the way water, with the aid of gravity, finds the hole in your roof.

Of course, figuring out what you like to work on doesn't mean you get to work on it. That's a separate question. And if you're ambitious you have to keep them separate: you have to make a conscious effort to keep your ideas about what you want from being contaminated by what seems possible. [6]

It's painful to keep them apart, because it's painful to observe the gap between them. So most people pre-emptively lower their expectations. For example, if you asked random people on the street if they'd like to be able to draw like Leonardo, you'd find most would say something like "Oh, I can't draw." This is more a statement of intention than fact; it means, I'm not going to try. Because the fact is, if you took a random person off the street and somehow got them to work as hard as they possibly could at drawing for the next twenty years, they'd get surprisingly far. But it would require a great moral effort; it would mean staring failure in the eye every day for years. And so to protect themselves people say "I can't."

Another related line you often hear is that not everyone can do work they love -- that someone has to do the unpleasant jobs. Really? How do you make them? In the US the only mechanism for forcing people to do unpleasant jobs is the draft, and that hasn't been invoked for over 30 years. All we can do is encourage people to do unpleasant work, with money and prestige.

If there's something people still won't do, it seems as if society just has to make do without. That's what happened with domestic servants. For millennia that was the canonical example of a job "someone had to do." And yet in the mid twentieth century servants practically disappeared in rich countries, and the rich have just had to do without.

So while there may be some things someone has to do, there's a good chance anyone saying that about any particular job is mistaken. Most unpleasant jobs would either get automated or go undone if no one were willing to do them.

''Two Routes''

There's another sense of "not everyone can do work they love" that's all too true, however. One has to make a living, and it's hard to get paid for doing work you love. There are two routes to that destination:

* The organic route: as you become more eminent, gradually to increase the parts of your job that you like at the expense of those you don't.
* The two-job route: to work at things you don't like to get money to work on things you do.

The organic route is more common. It happens naturally to anyone who does good work. A young architect has to take whatever work he can get, but if he does well he'll gradually be in a position to pick and choose among projects. The disadvantage of this route is that it's slow and uncertain. Even tenure is not real freedom.

The two-job route has several variants depending on how long you work for money at a time. At one extreme is the "day job," where you work regular hours at one job to make money, and work on what you love in your spare time. At the other extreme you work at something till you make enough not to have to work for money again.

The two-job route is less common than the organic route, because it requires a deliberate choice. It's also more dangerous. Life tends to get more expensive as you get older, so it's easy to get sucked into working longer than you expected at the money job. Worse still, anything you work on changes you. If you work too long on tedious stuff, it will rot your brain. And the best paying jobs are most dangerous, because they require your full attention.

The advantage of the two-job route is that it lets you jump over obstacles. The landscape of possible jobs isn't flat; there are walls of varying heights between different kinds of work. [7] The trick of maximizing the parts of your job that you like can get you from architecture to product design, but not, probably, to music. If you make money doing one thing and then work on another, you have more freedom of choice.

Which route should you take? That depends on how sure you are of what you want to do, how good you are at taking orders, how much risk you can stand, and the odds that anyone will pay (in your lifetime) for what you want to do. If you're sure of the general area you want to work in and it's something people are likely to pay you for, then you should probably take the organic route. But if you don't know what you want to work on, or don't like to take orders, you may want to take the two-job route, if you can stand the risk.

Don't decide too soon. Kids who know early what they want to do seem impressive, as if they got the answer to some math question before the other kids. They have an answer, certainly, but odds are it's wrong.

A friend of mine who is a quite successful doctor complains constantly about her job. When people applying to medical school ask her for advice, she wants to shake them and yell "Don't do it!" (But she never does.) How did she get into this fix? In high school she already wanted to be a doctor. And she is so ambitious and determined that she overcame every obstacle along the way including, unfortunately, not liking it.

Now she has a life chosen for her by a high-school kid.

When you're young, you're given the impression that you'll get enough information to make each choice before you need to make it. But this is certainly not so with work. When you're deciding what to do, you have to operate on ridiculously incomplete information. Even in college you get little idea what various types of work are like. At best you may have a couple internships, but not all jobs offer internships, and those that do don't teach you much more about the work than being a batboy teaches you about playing baseball.

In the design of lives, as in the design of most other things, you get better results if you use flexible media. So unless you're fairly sure what you want to do, your best bet may be to choose a type of work that could turn into either an organic or two-job career. That was probably part of the reason I chose computers. You can be a professor, or make a lot of money, or morph it into any number of other kinds of work.

It's also wise, early on, to seek jobs that let you do many different things, so you can learn faster what various kinds of work are like. Conversely, the extreme version of the two-job route is dangerous because it teaches you so little about what you like. If you work hard at being a bond trader for ten years, thinking that you'll quit and write novels when you have enough money, what happens when you quit and then discover that you don't actually like writing novels?

Most people would say, I'd take that problem. Give me a million dollars and I'll figure out what to do. But it's harder than it looks. Constraints give your life shape. Remove them and most people have no idea what to do: look at what happens to those who win lotteries or inherit money. Much as everyone thinks they want financial security, the happiest people are not those who have it, but those who like what they do. So a plan that promises freedom at the expense of knowing what to do with it may not be as good as it seems.

Whichever route you take, expect a struggle. Finding work you love is very difficult. Most people fail. Even if you succeed, it's rare to be free to work on what you want till your thirties or forties. But if you have the destination in sight you'll be more likely to arrive at it. If you know you can love work, you're in the home stretch, and if you know what work you love, you're practically there.


----
Notes

[1] Currently we do the opposite: when we make kids do boring work, like arithmetic drills, instead of admitting frankly that it's boring, we try to disguise it with superficial decorations.

[2] One father told me about a related phenomenon: he found himself concealing from his family how much he liked his work. When he wanted to go to work on a Saturday, he found it easier to say that it was because he "had to" for some reason, rather than admitting he preferred to work than stay home with them.

[3] Something similar happens with suburbs. Parents move to suburbs to raise their kids in a safe environment, but suburbs are so dull and artificial that by the time they're fifteen the kids are convinced the whole world is boring.

[4] I'm not saying friends should be the only audience for your work. The more people you can help, the better. But friends should be your compass.

[5] Donald Hall said young would-be poets were mistaken to be so obsessed with being published. But you can imagine what it would do for a 24 year old to get a poem published in The New Yorker. Now to people he meets at parties he's a real poet. Actually he's no better or worse than he was before, but to a clueless audience like that, the approval of an official authority makes all the difference. So it's a harder problem than Hall realizes. The reason the young care so much about prestige is that the people they want to impress are not very discerning.

[6] This is isomorphic to the principle that you should prevent your beliefs about how things are from being contaminated by how you wish they were. Most people let them mix pretty promiscuously. The continuing popularity of religion is the most visible index of that.

[7] A more accurate metaphor would be to say that the graph of jobs is not very well connected.

Thanks to Trevor Blackwell, Dan Friedman, Sarah Harlin, Jessica Livingston, Jackie McDonough, Robert Morris, Peter Norvig, David Sloo, and Aaron Swartz for reading drafts of this.
In [[a presentation with this title|https://docs.google.com/presentation/d/1skkpIGPR81RsnIuth2PjhMkCi1YuODqpLOhEEjXsnXQ/edit#slide=id.g1f3740ba5b_0_268]], Andrew Ko at the University of Washington shares some of his ideas based on Computer Science (CS) Ed-focused research.

He makes some [[good observations about teachers and teaching, and summarizes|https://medium.com/bits-and-behavior/how-to-be-a-great-cs-teacher-b8a0a2a3600f]]:
* __Everyone can learn [CS], but successful learning is determined by:__
** Prior knowledge
** Motivation
** Quality of practice
* __If students fail to learn it’s because:__
** Students don’t have the prior knowledge you expected
** They aren’t sufficiently motivated (by you or themselves)
** Your class lacks sufficient high quality practice
* __Before you became a teacher (or CS professional):__
** Someone helped you acquire knowledge
** Someone helped motivate you, give you confidence
** Someone helped you structure your practice
** ''Now that someone is YOU.''
** If you fail, you’ll be robbing the world of the next you :)
* __Learning comes from “deliberate practice”, which consists of four things:__
** Sustained motivation
** Tasks that build new knowledge from prior knowledge
** Immediate personalized feedback to build correct knowledge
** Repetition of the above
** ''If any of these are missing, learning doesn’t happen.''
* __Teacher motivation:__
** Many teachers (possibly some of you) don’t believe that all their students can learn (AKA, fixed mindset).
** When teachers have fixed mindsets about their students’ intelligence, students come to believe they can’t learn either, and they stop being motivated.
** You need to adopt a ''growth mindset'' (the belief that intelligence comes largely from deliberate practice, not genetics).


And gives some good teaching advice:
* __Motivate students:__
** You have to convince students that they can learn.
*** Encourage them
*** Prove to them they can learn by giving them practice they can succeed at
*** Show them that others also struggle, mitigating imposter syndrome
* __Create a motivating context:__
** You have to show students that the content is relevant to their interests, goals, and identity.
*** Learn about and know your students as individuals (e.g., their names, goals, interests)
*** Link every concept in class to their interests, goals, identities (e.g., start class with why you’re teaching something, then teach it)
* __Assess and build on students' prior knowledge:__
** Devise homework and other practice that build upon your students’ knowledge
** At the beginning of each class, give a pre-test:
*** What concepts in your class do they already know?
*** What concepts do you expect them to know, but they don’t?
** Adjust what you teach based on their prior knowledge.
* __Individualize learning:__
** Design practice (e.g., homework, activities) that account for differences in students and their knowledge:
** Give challenge tasks to students with more prior knowledge
** Give extra instruction to students with less prior knowledge
* __Scaffold the irrelevant hard parts:__
** Remove those details or give partial solutions, and slowly take the scaffolding away.
** Base scaffolding on students’ prior knowledge.
* __Provide learning-promoting feedback:__
** Deliberate practice requires detailed, personalized, immediate feedback about what someone did right and wrong in their practice.
** Focus on iterative formative feedback that grows ability through success
** Use summative feedback sparingly; it’s delayed and discouraging
** Immediacy of feedback is critical: the sooner they receive it, the better their learning.
** ''Qualitative is better than quantitative'' because it explains mistakes rather than just signalling them
** Provide feedback on homework as soon as it is submitted, giving detailed qualitative feedback about everything they did right and wrong.
** Incentivize students to show their work so you can critique it; don’t just tell them whether they were right or wrong.
* __Repeated practice promotes mastery:__
** Most classes are designed to only allow a student to practice something once
** Design your class so that students can practice something multiple times, across a course, until they master the knowledge
** Don’t just give them one problem to solve, give them multiple isomorphic problems to solve.
** And when they get some of those problems wrong, keep giving them practice until they get it right.


* __Teaching is your biggest impact:__
** Your teaching will definitely change the world, shaping how tens of thousands of people will think, act, and create in the world.





In a no-nonsense, [[straight-talk article|http://ww2.kqed.org/mindshift/2015/04/09/how-memory-focus-and-good-teaching-can-work-together-to-help-kids-learn/]] in [[KQED's Mind/Shift section|http://ww2.kqed.org/mindshift/about/]] titled //How Memory, Focus and Good Teaching Can Work Together to Help Kids Learn//, journalist Katrina Schwartz talked with William Klemm, a senior professor of neuroscience at Texas A&M University, about his ideas for improving how kids learn: focus on teaching kids how to learn.

Klemm claims that
>The more you teach students how to learn, the less time you have to spend teaching curriculum because they can [understand] it on their own, [...] I think the real problem is that students have not learned how to be competent learners. They haven’t learned this because we haven’t taught them.

He mentioned a few things that are fairly well researched and could help improve learning, but are often not practiced in schools:
* Using the internet for researching topics and finding information is great for learning, because it provides easy access to both more depth and more breadth of information and resources. This potentially improves knowledge and understanding, because the more connections learners can make, and the more relevant and applicable context they have, the better the understanding and the deeper the knowledge. But using the internet usually exposes the learner to many more distractions and disruptions, and this is really bad for learning. Klemm said:
>When this happens, kids multitask, a concept neuroscientists have shown doesn’t really exist. When a person thinks she is doing two things at once, she is really switching rapidly back and forth between individual tasks, eroding the attention and quality of each task in the process. Learners (and teachers and schools) should actively seek to avoid or minimize distractions.
* Another negative effect of relying too much on the internet, to the point where learners delegate the memorization of knowledge and facts to it, rather than committing it to their memories, is that
** without knowledge in your head, it's harder to acquire ''new'' knowledge, since there is less rich context in your head to connect it to.
** also, with less memorized knowledge, it takes longer to process and store new information, which may create a "snowball" effect of reduced learning.
** students are also less likely to ask new, informed, deep questions, if they don't have more knowledge/information/facts in their heads, which impacts learning even more.
In Klemm's words:
>The more you know, the more you can make conclusions, even be creative. All of these things have to be done by thinking, and thinking has to be done from what’s in your working memory.
* When people "multitask" it disrupts and interferes with the forming of memories. Teaching learners to focus is a critical skill. Minimizing distractions -- before, during, and after learning new knowledge or skills -- is critical.
** Right after learning something new, the brain needs some time and focus for processing that knowledge and committing it to long term memory (“Long-term memory requires physical and chemical changes in the brain”). Distracting the learner right after learning something new disrupts the process of forming long term memories of the knowledge.
* Stress is bad for learning. 
>When students are worried about tests or something in their private lives, they are distracted from what’s going on in the classroom.
* Testing should be done for a reason, and it is a good thing if it’s non-punitive. Tests require students to recall what they know and process what they don’t know. But high-stakes tests, while sometimes necessary, can be, and often are, overdone.
* Teachers and schools should teach more learning skills, like memorization techniques, visualizations, and modeling.
>If they knew these things, they wouldn’t have to work so hard and school might even become fun. Once students start reflecting and become more self-aware, they have the opportunity to become better students.
<<forEachTiddler where 'tiddler.tags.contains("epss-item")' sortBy 'tiddler.title'>>
In an excellent book titled [["When Einstein Walked with Gödel -- Excursions to the Edge of Thought"|https://www.nytimes.com/2018/05/15/books/review/review-when-einstein-walked-with-godel-jim-holt.html]]^^1^^, Jim Holt asks the question: "which, from the point of view of the universe, is more contemptible -- our minuteness (in terms of size) or our brevity (in terms of lifespan)?"
[>img[human scale vs. the universe|resources/human scale.png][resources/human scale 1.png]]
>Now, suppose we construct two cosmic scales, one for size and one for longevity. The size scale will extend from the smallest possible size, the Planck length [10^^-35^^ meters], to the largest possible size, the radius of the observable universe [~10^^26^^ meters]. The longevity scale will extend from the briefest possible life span, the Planck time [10^^-43^^ seconds], to the longest possible life span, the current age of the universe [~10^^10^^ years]. 
>
>Where do we rank on these two scales? On the cosmic size scale humans, at a meter or two in length, are more or less in the middle. Roughly speaking, the observable universe dwarfs us the way we dwarf the Planck length. On the longevity scale, by contrast, we are very close to the top. The number of Planck times that make up a human lifetime is very, very much more than the number of human lifetimes that make up the age of the universe. “People talk about the ephemeral nature of existence,” the physicist Roger Penrose has commented, “but [on such a scale] it can be seen that we are not ephemeral at all -- we live more or less as long as the Universe itself!”
>
>Certainly, then, we humans have little reason to feel angst about our temporal finitude. Sub specie aeternitatis [viewed in relation to the eternal], we endure for an awfully long time. But our extreme puniness certainly gives us cause for cosmic embarrassment.
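
Out of curiosity, here is a quick back-of-the-envelope check of Holt's two scales (my own Python sketch, using only the rounded constants quoted above):

{{{
import math

PLANCK_LENGTH = 1e-35          # meters
UNIVERSE_RADIUS = 1e26         # meters
PLANCK_TIME = 1e-43            # seconds
UNIVERSE_AGE = 1e10 * 3.15e7   # ~10^10 years, in seconds
HUMAN_SIZE = 1.7               # meters
HUMAN_LIFESPAN = 80 * 3.15e7   # ~80 years, in seconds

def log_position(value, low, high):
    # Where `value` sits on a logarithmic scale from `low` to `high`, as 0..1.
    return ((math.log10(value) - math.log10(low)) /
            (math.log10(high) - math.log10(low)))

print(f"size scale:      {log_position(HUMAN_SIZE, PLANCK_LENGTH, UNIVERSE_RADIUS):.2f}")
print(f"longevity scale: {log_position(HUMAN_LIFESPAN, PLANCK_TIME, UNIVERSE_AGE):.2f}")
# size scale:      0.58   (more or less in the middle)
# longevity scale: 0.87   (very close to the top)
}}}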

----
^^1^^ - searchable spelling: Gödel, Godel, Goedel.
Believe it or not (and you definitely will, if you know the guy ;-), Douglas Hofstadter conducted a [[Workshop on Humor and Cognition|http://arxiv.org/ftp/arxiv/papers/1310/1310.1676.pdf]] in 1989, where he outlines the structure of certain jokes in terms of "cognitive frames" as implemented in [[Copycat|http://cognitrn.psych.indiana.edu/rgoldsto/courses/concepts/copycat.pdf]], an intelligent software program created by his group at Indiana University.

The motivation for this workshop:
>The Workshop on Humor and Cognition was therefore motivated to a large extent by the observation that jokes have much in common with analogies gone awry, and by the belief that through exploration of the similarities and differences between humor and analogy, we would sharpen our understanding of both processes, and of the fluid nature of human thought in general.

Hofstadter and his group have been trying to understand intelligence and cognition by creating very simple, stripped down domains, and building software that would behave intelligently within these areas (somewhat similar (analogous? ;-) to [[Terry Winograd's SHRDLU program|http://hci.stanford.edu/~winograd/shrdlu/]]). One example of this is the following problem in Copycat's domain:
>If the string abc is changed to abd, how can one change ijk "the same way"?
and Hofstadter continues:
>In this particular problem, most people view the initial event as "replacement of the rightmost letter by its successor". Straightforward application of this rule to the target string ijk yields ijl. It would be possible, nonetheless, to take the change much more literally — namely, as "replacement of the rightmost letter by d" — and thus to answer ijd. Few people see this as a better answer than ijl — in fact, few people even think of it at all.
And the relation to humor?
>The answer ijd is a simple frame blend, in which the abc/abd frame contributes just one element — the d — to the ijk frame. Seen this way, this answer to the problem bears a strong similarity to the following well-known joke:
>American: Look how free we are in America — nobody prevents us from parading in front of the White House and yelling, "Down with Reagan!"
>Russian: We in Russia are just as free as you — nobody prevents us from parading in front of the Kremlin and yelling, "Down with Reagan!"

>Here, the Russian attempts to "translate" the notion of free speech from an American to a Soviet frame, but instead of carrying it fully across (as would happen in a good analogy), blurs frames by importing Reagan literally into the Soviet frame. Thus a bad analogy, in the form of a frame blend, makes for a good joke.
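
As a toy sketch (mine; the real Copycat is a far richer, stochastic architecture), the two readings described above can be written in a few lines of Python:

{{{
def successor_rule(target):
    # Replace the rightmost letter by its alphabetic successor
    # (ignoring wrap-around at 'z').
    return target[:-1] + chr(ord(target[-1]) + 1)

def literal_rule(target, new_letter='d'):
    # Replace the rightmost letter literally by 'd' -- the frame blend.
    return target[:-1] + new_letter

print(successor_rule('ijk'))   # ijl -- the answer most people give
print(literal_rule('ijk'))     # ijd -- the blended, joke-like answer
}}}
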
Hofstadter, again in a thinking and writing style that is clear, down-to-earth, and very plausible, explains a mechanism for understanding, and possibly creating/engineering, humor. As I said, jokes are serious business.
A couple of weeks ago I had the opportunity (actually, the necessity, since I was asked) to explain to the parent of one of my students why I am OK with a "heightened level of energy" in my CS classroom, and I said:
>The atmosphere in my class is usually somewhat informal, and I don't mind some "heightened energy" in my class, since I believe that some excitement and a sense of fun enhance learning and are usually motivators and enablers of creativity and experimentation in a safe environment, where trying (and sometimes failing) is OK, as long as we learn from our actions.

A couple of days later, I came across an article in the WSJ titled [["Astro Teller, Captain of Moonshots"|http://www.wsj.com/articles/astro-teller-captain-of-moonshots-1479491399]], about the head of Alphabet's X organization (Google's top R&D group). (Dr. Astro Teller is the grandson of the physicist Edward Teller, who helped to develop the hydrogen bomb.) The author of the article, Alexandra Wolfe, wrote:
>“I certainly encourage low deference to authority here,” he says. “Humor and creativity are inexorably linked. It’s very hard to be sad or serious…and to come up with really different perspectives.” He believes that the sillier, happier and more naturally childlike people are, the more they are able to shift perspectives and come up with innovative ideas.

It is not my intention to run my classes as startups (even if we are in the middle of Silicon Valley :), but it was nice to see the affirmation that some playfulness fosters creativity, which is one of the [["Big Ideas" (in CS)|https://advancesinap.collegeboard.org/stem/computer-science-principles/course-details]] I am constantly trying to get across to students.
I am always ready to learn although I do not always like being taught.
[[Richard Feynman]] [[captured on video|https://www.youtube.com/watch?v=QkhBcLk_8f0]]
>You see, one thing is, I can live with doubt and uncertainty and not knowing^^1^^. I think it's much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things, but I am not absolutely sure of anything, and there are many things I do not know anything about, such as whether it means anything to ask 'why we are here?' and what the question might mean. I might think about it a little bit, and if I can't figure it out, then I go on to something else. But I don't have to know an answer. I don't have ... I don't feel frightened by not knowing things, by being lost in a mysterious universe without having any purpose, which is the way it really is as far as I can tell, possibly. It doesn't frighten me.


----
^^1^^ - and according to Terry Pratchett, [[even DEATH can try to believe in it|THE UNCERTAINTY PRINICIPLE - according to Sir Terry]]
not really.

One of the projects I have my high school Computer Science students do is write a music composition piece using [[EarSketch|http://earsketch.gatech.edu/earsketch2/]] (developed at Georgia Tech). This tool enables them to create different pieces of music programmatically.

Since it's part of a CS course, I'm not looking at the "artistic/musical value/talent" but rather at the programming language (Python) techniques of the students.

So as I was listening to student compositions/projects, I came across 2 pieces which //looked// similar. Mind you, I was listening to the compositions, but I was also looking at the code, and I developed a strong suspicion that one of the students had copied from the other.
The students did a good job of picking different instruments, etc., for their pieces, so that on the surface, the compositions sounded different enough. But the underlying structure was almost identical:
|[img[EarSketch 1|./resources/M_Final_Earsketch_small.png][./resources/M_Final_Earsketch.png]]|[img[EarSketch 2|./resources/V_Final_Earsketch_small.png][./resources/V_Final_Earsketch.png]]|

As you can see from the above, the structure and the code (except for the musical instruments) look identical. If this is not a case of plagiarism and direct copying, then I don't know what is.

So, I may not have a "super ear", but I do have a "sharp eye", and the students are definitely in trouble. I am surprised that they thought that in a computer programming project I would only listen to the music and not look at the code...
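
As an aside, the "sharp eye" can be crudely automated. A minimal sketch (my own illustration, not a tool I actually used) that compares the token structure of two Python submissions while ignoring the names and constants (e.g., instrument choices) that students tend to change:
{{{
# Compare the token *structure* of two Python submissions, masking out the
# identifiers and constants (instruments, tempos) that students change.
import difflib, io, tokenize

def structure(source):
    toks = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME:
            toks.append("NAME")
        elif tok.type in (tokenize.STRING, tokenize.NUMBER):
            toks.append("CONST")
        elif tok.type in (tokenize.NL, tokenize.NEWLINE, tokenize.COMMENT,
                          tokenize.INDENT, tokenize.DEDENT):
            continue
        else:
            toks.append(tok.string)
    return toks

def similarity(src1, src2):
    return difflib.SequenceMatcher(None, structure(src1), structure(src2)).ratio()

a = "fitMedia(drums, 1, 1, 9)\nsetTempo(120)\n"
b = "fitMedia(piano, 1, 1, 9)\nsetTempo(96)\n"
print(similarity(a, b))  # 1.0 -> identical structure, different sounds
}}}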

Regardless, I still strongly believe that [[letting and encouraging students to look at other people's code|Should students be able to look at each other's code?]] is an excellent way to learn programming, so I'll keep doing that. I just need to be stronger on the message of "no plagiarism".
Translated from the French:
    Je n’ai fait celle-ci plus longue que parce que je n’ai pas eu le loisir de la faire plus courte.
    ("I have made this one longer only because I have not had the leisure to make it shorter.")

- from the [[Quote Investigator|https://quoteinvestigator.com/2012/04/28/shorter-letter/]], who is also retelling the following:
>According to an anecdote published in 1918, Woodrow Wilson was asked about the amount of time he spent preparing speeches, and his response was illuminating:
>>    "That depends on the length of the speech," answered the President. "If it is a ten-minute speech, it takes me all of two weeks to prepare it; if it is a half-hour speech, it takes me a week; if I can talk as long as I want to, it requires no preparation at all. I am ready now."
I have never developed indigestion from eating my words.
I'll understand quickly if you explain slowly.
At least in business and entrepreneurship (among other areas?^^1^^),
[[Ideas are just a multiplier of execution - from Derek Sivers|https://sivers.org/multiply]]


!!!!So here's how Sivers (succinctly) puts it:
It’s so funny when I hear people being so protective of ideas. People who want me to sign an NDA to tell me the simplest idea.

To me, ideas are worth nothing unless executed. They are just a multiplier. Execution is worth millions.

Explanation:

|AWFUL IDEA|-1|
|WEAK IDEA|1|
|SO-SO IDEA|5|
|GOOD IDEA|10|
|GREAT IDEA|15|
|BRILLIANT IDEA|20|

|NO EXECUTION|$1|
|WEAK EXECUTION|$1,000|
|SO-SO EXECUTION|$10,000|
|GOOD EXECUTION|$100,000|
|GREAT EXECUTION|$1,000,000|
|BRILLIANT EXECUTION|$10,000,000|

To make a business, you need to multiply the two.

The most brilliant idea, with no execution, is worth $20.

The most brilliant idea takes great execution to be worth $20,000,000.
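
Restated as code, just to make the arithmetic explicit (a trivial sketch of Sivers' tables, nothing more):
{{{
# Sivers' multiplier: business value = idea score x execution value.
IDEA = {"awful": -1, "weak": 1, "so-so": 5, "good": 10, "great": 15, "brilliant": 20}
EXECUTION = {"none": 1, "weak": 1000, "so-so": 10000, "good": 100000,
             "great": 1000000, "brilliant": 10000000}

def business_value(idea, execution):
    return IDEA[idea] * EXECUTION[execution]

print(business_value("brilliant", "none"))   # 20 -> the $20 above
print(business_value("brilliant", "great"))  # 20000000 -> $20,000,000
}}}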


----
^^1^^ - an interesting question is whether this idea (ha) is extensible to other areas in life.
For example, to politics, where, as Winston Churchill famously said about ideas/ideals vs. execution/results:
>"It is a fine thing to be honest, but it is also very important to be right". 
Or another, somewhat freely interpreted, Churchill pearl (as you can probably tell, I think that there is a lot to admire about him :)
>"It's not enough that we do [or think/aspire to] our best; sometimes we have to do what's required."
In a thought-provoking presentation titled [["If Copernicus and Kepler Had Computers"|http://www.cs.cornell.edu/cv/OtherPdf/Copern.pdf]], Cornell University Computer Science professor [[Charles Van Loan|http://www.cs.cornell.edu/cv/default.htm]] quotes^^1^^ an interesting hypothesis by another computer scientist -- Abbe Mowshowitz:
>If a computer had been available to Copernicus, he would have been content to patch up the Ptolemaic system rather than propose a new model of the cosmos.
Van Loan shows by example how the Ancients tried to explain the fact that the planets (the presentation specifically covers Mars) "wander" or "wobble" in their orbits across the sky. Since the prevailing "standard model" at the time was geocentric, the Ancients tried to fit this observed planetary behavior with a model made of multiple circles rotating along the orbits of other circles (epicycles riding on deferents).

(see [[Python simulation program below|https://trinket.io/python/6d4f189f98?runOption=run]])
<html>
  <table>
    <tr>
      <td><img src='resources/mars wanders.png'><p>Mars wanders</td>
      <td><img src='resources/Python program.png'><p><a href='https://trinket.io/python/6d4f189f98?runOption=run'>Python simulation program</a></td>
    </tr>
    <tr>
      <td><img src='resources/mars 1 circle.png'><p>Mars 1 circle model</td>
      <td><img src='resources/Mars 1 circle path.png'><p>Mars 1 circle path</td>
    </tr>
    <tr>
      <td><img src='resources/mars 2 circles.png'><p>Mars 2 circle model</td>
      <td><img src='resources/Mars 2 circle path.png'><p>Mars 2 circle path</td>
    </tr>
  </table>
</html>
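
The machinery behind such "circles on circles" is tiny, which is part of the temptation. A minimal sketch of a geocentric deferent-plus-epicycle path (illustrative radii and periods, not the values from Van Loan's simulation):
{{{
# A two-circle (deferent + epicycle) geocentric model of a planet's path.
# The radii and periods below are made up for illustration.
import math

R1, T1 = 10.0, 687.0   # deferent: radius, period in days
R2, T2 = 4.0, 365.0    # epicycle riding on the deferent

def position(t):
    a1 = 2 * math.pi * t / T1
    a2 = 2 * math.pi * t / T2
    return (R1 * math.cos(a1) + R2 * math.cos(a2),
            R1 * math.sin(a1) + R2 * math.sin(a2))

# Sampling the path shows the retrograde "wobble" seen in the figures above.
for t in range(0, 700, 100):
    x, y = position(t)
    print(f"day {t:3d}: ({x:6.2f}, {y:6.2f})")
}}}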


So the question is:
>If Copernicus had a computer would he have been content to patch up the Ptolemaic model by adding more circles on circles until the model fit the data with sufficient accuracy?
It turns out that a model/simulation made of 9 circles (!), with different radii and rotation periods, can fit the actual observed orbit of Mars to within 5 minutes of arc. Impressive! But very complex, and begging for the application of [[Occam's Razor|https://en.wikipedia.org/wiki/Occam%27s_razor]] ("one should not increase, beyond what is necessary, the number of entities required to explain anything").

And the conclusion:
The Computer can make us lazy because it is tempting to “tweak the old model”
But,
The Computer can make us creative because it enables us to “think outside the box”


----
^^1^^ I love Van Loan's quote [[Formalism First = Rigor Mortis.  Intuition First = Rigor's Mortise]].
From the man himself:
>My late Uncle Alex Vonnegut […] who was well read and wise, was a humanist like the rest of the family. What Uncle Alex found particularly objectionable about human beings in general, was that they so seldom noticed it when they were happy.
>He himself did his best to acknowledge it when times were sweet. We could be drinking lemonade in the shade of an apple tree in the summertime, and Uncle Alex would interrupt the conversation to say, “If this isn’t nice, what is?”
>I myself say that out loud at times of easy, natural bliss: “If this isn’t nice, what is?” Perhaps others can also make use of that heirloom from Uncle Alex. I find it really cheers me up to keep score out loud that way.
  -- Kurt Vonnegut

(compare with [[Daniel Dennett's sentiment|Thanks Goodness!]])
If we open a quarrel between past and present, we shall find that we have lost the future.
If you have ten thousand regulations you destroy all respect for the law.
In his excellent book^^1^^ [[Probably Approximately Correct|http://www.probablyapproximatelycorrect.com/]], Leslie Valiant tells the following story:

>In 1947 John von Neumann, the famously gifted mathematician, was keynote speaker at the first annual meeting of the Association for Computing Machinery. In his address he said that future computers would get along with just a dozen instruction types, a number known to be adequate for expressing all of mathematics. He went on to say that one need not be surprised at this small number, since 1,000 words were known to be adequate for most situations in real life, and mathematics was only a small part of life, and a very simple part at that.
>The audience reacted with hilarity. This provoked von Neumann to respond: “If people do not believe that math is simple, it is only because they do not realize how complicated life is.”


----
^^1^^ - see the [[book remarks|Ecorithms - Probably Approximately Correct Algorithms - by Leslie Valiant]]
If you're as clever as you can be when you write it, how will you ever debug it?
In his delightful book [[An Imaginary Tale|http://www.pucrs.br/famat/viali/tic_literatura/livros/Paul%20J.%20Nahin%20-%20An%20Imaginary%20Tale%20The%20Story%20of%20i%20the%20Square%20Root%20of%20Minus%20One.pdf]] - The Story of [img[square root of -1|./resources/i 1.png][./resources/i.png]], Paul J. Nahin has the following Calvin and Hobbes cartoon:

[img[complex math|./resources/CalvinHobbesImaginary.gif]]

In the preface Nahin tells the following story:
>When the April 1955 issue of Popular Electronics arrived in the mail, one of the inside photographs displayed an incredible sight: a desk lamp emitting not a cone of light, but, instead, a ''cone of darkness''! My eyes bugged out when I saw that. What wondrous science was at work here, I gasped (metaphorically speaking, of course, because what fourteen-year-old kid do you know, other than in a TV sitcom, who actually talks like that?).
>The secret, according to the accompanying article, was that the lamp was not plugged into a normal power outlet, but rather into an outlet delivering contra-polar power. Another photograph showed a soldering iron plugged into the contra-polar power outlet; it was covered with ice! And another displayed a frozen ice tray on a hot plate, except it was now a cold plate because it was plugged into contra-polar power. I looked at those three photographs, and I remember my pulse rate elevated and I felt a momentary spell of faintness. This was simply wonderful.

Needless to say, the editors "blamed" these wonderful phenomena on ''imaginary numbers''.
And needless to repeat, this was the [[April 1, 1955 issue|http://www.rfcafe.com/references/popular-electronics/contra-polar-energy-april-1955-popular-electronics.htm]] ... :)

''Footnote 4'' at the bottom of that article enlightened (ha!) or lifted the ''cone of darkness'' off the dense:
^^4^^ Transactions of the ~Contra-Polar Energy Commission, Vol. 45, pp. 1324-1346 (Ed. Note - A reprint of a document found in a flying saucer).
In keeping with the first day of April...
In a recent level-headed [[post on his blog|https://rpseawright.wordpress.com/2016/01/05/here-we-go-again-forecasting-follies-2016/]], Robert P. Seawright (the Chief Investment & Information Officer for Madison Avenue Securities, LLC) points out a few (unsurprising, but regularly ignored) facts about predicting the behavior of the economy, the market (and everything :).

He starts by giving numerous examples of past financial predictions by pundits, mavens, experts, and generally highly-paid, highly-followed people being wrong with their forecasts (just look at the predictions about the S&P 500 from top people at Goldman Sachs, Barclays, Credit Suisse, Deutsche Bank, Citi, Bank of America, Merrill Lynch, Morgan Stanley, and others).

He also lists a few non-financial predictions by well-respected people considered experts in their fields, totally missing it in their own area of expertise. For example:
* In 1995, [[analyst Clifford Stoll mocked the predictions|http://www.newsweek.com/clifford-stoll-why-web-wont-be-nirvana-185306]] that newspapers and database-driven news services would go online, and that we would buy books, etc., on the internet.
* In 1996, the man widely credited with the invention of Ethernet technology, [[Bob Metcalfe predicted|http://www.scientificamerican.com/article/pogue-all-time-worst-tech-predictions/]] that the Internet would soon go spectacularly supernova and then catastrophically collapse.
* In 1961, Federal Communications Commission commissioner T.A.M. Craven stated that there was no chance that communications space satellites would be used to provide better telephone, telegraph, television, or radio service inside the United States.
* In 1912, [[Marconi predicted|http://earlyradiohistory.us/1912mar.htm]] that the “wireless era” would make war ridiculous and impossible. (To be fair, he also predicted that "within the next two generations we shall have not only wireless telegraphy and telephony, but also wireless transmission of all power for individual and corporate use, wireless heating and light, and wireless fertilizing of fields." -- and he wasn't off by much!)
* In 1962, Decca Records rejected the Beatles and totally missed Beatlemania.
* And you can see [[more at Scientific American|http://www.scientificamerican.com/article/pogue-all-time-worst-tech-predictions/]].

The point is not to mock the forecasters. [[Cullen Roche brings up some excellent points|http://www.pragcap.com/the-confirmation-bias-of-the-anti-forecasters/]] about interpreting forecasts in context:
> When presented in a biased manner it does prove that economists are quite bad at predicting recessions.  But what if I presented this data as evidence that economists are usually optimistic and the economy rarely goes into recession?
> [...] Of course, you have to keep things in the right perspective. The anti-forecasters want everyone to believe that there is some way to make decisions about the future without making forecasts.  This is obviously nonsense since any decision about the future involves an implicit forecast about future outcomes. Those who shun forecasting are merely trying to confirm their own biased perspectives.  The reality, as I’ve shown before, is that we shouldn’t shun forecasts.  We should shun low probability forecasts.
> [...] Making decisions about the future necessarily involves some framework within which we explicitly or implicitly forecast future outcomes.  Whether we’re crossing the street or allocating assets we have to make a forecast about how certain things might play out and what the probability of success might be.

Seawright also adds that:
> As Philip Tetlock wrote in his wonderful new book, [[Superforecasting: The Art and Science of Prediction|http://www.superforecasting.com/]]: “We are all forecasters. When we think about changing jobs, getting married, buying a home, making an investment, launching a product, or retiring, we decide based on how we expect the future to unfold.”

So, if forecasting is critical to our survival and success, and we do it all the time, we should be interested in becoming better at it.
In the book Superforecasting Tetlock makes excellent points about learning to be better forecasters.

* Base predictions on data and logic, and try to eliminate personal bias. 
* Working in teams and seeking advice and other perspectives helps. 
* Keep track of your record so that you know how accurate you (and others) are (see the Brier-score sketch at the end of this entry). 
* Think in terms of probabilities and recognize that everything is uncertain. 
* Unpack a question into its component parts, distinguishing between what is known and unknown, and scrutinizing your assumptions. 
* Recognize that the further out the prediction is designed to go, the less specifically accurate it can be.

That is, we need:
* rigorous empiricism, 
* probabilistic thinking, 
* a recognition that absolute answers are extremely rare, 
* regular reassessment, 
* accountability, and 
* an avoidance of too much precision. 
Or, more fundamentally:
* We need more humility and more diversity among those contributing to decisions. 
* We need to be concerned more with process, and with improving our processes, than with outcomes, important though they are (“What you think is much less important than how you think,” says Tetlock).
* Superforecasters regard their views “as hypotheses to be tested, not treasures to be guarded.” 

Tetlock says that
> most people “are too quick to make up their minds and too slow to change them.”
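
On the "keep track of your record" point: the scoring tool used in Tetlock's forecasting tournaments is the Brier score, the mean squared error between your stated probabilities and what actually happened (0 is perfect; lower is better). A minimal sketch:
{{{
# Brier score: mean squared error between probabilistic forecasts and
# outcomes (1 if the event happened, 0 if not). Lower is better.
def brier_score(forecasts, outcomes):
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Three "70% chance of rain" calls, and it rained on two of the three days:
print(brier_score([0.7, 0.7, 0.7], [1, 1, 0]))  # ~0.223
}}}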

In Computer Science: Simplicity does not precede complexity, but follows it.

and my paraphrase:
In life: mindfulness does not precede full-mindness, but follows it (i.e., you have to be mindful of something :)

In an interesting [[article in Quanta Magazine|https://www.quantamagazine.org/20130222-in-computers-we-trust/]] about the role of computers in Math, the author brought up some good points:

* There is a difference of opinion about whether computers/computation should play a role in inventing/discovering^^1^^ new knowledge in the field of math.
** Some claim that not using computing actually stifles the progress possible in Math.
*** some liken the mathematicians who refuse to use computing in their work to marathon runners insisting on running without shoes.
*** I tend to agree with them (of course! :). I think that computing is a potentially very powerful tool and an extender of human abilities. Not using it may limit the areas of math we explore, since we run the risk of avoiding areas which seem "intractable" but which may only be "calculation/computation intensive". Yes, there is beauty to the art of doing math "barefooted", and as the saying goes, "necessity is the mother of invention", so doing things only "by hand" may result in ingenious solutions, but there is also a price/risk in avoiding computing entirely.
**** As the article states: Deducing new truths about the mathematical universe has almost always required intuition, creativity and strokes of genius, not plugging-and-chugging. In fact, the need to avoid nasty calculations (for lack of a computer) has often driven discovery, leading mathematicians to find elegant symbolic techniques like calculus.
*** But on the flip side: "Some of the problems we do today are completely uninteresting but are done because it’s something that humans can do.”
*** And: “Am I doing the kind of math I’m doing because I can’t use a computer, or am I doing what I’m doing because it’s the best thing to do?” he said. “It’s a good question.”
*** And in [[another article in Quanta|https://www.quantamagazine.org/20150519-will-computers-redefine-the-roots-of-math/]] Vladimir Voevodsky (a permanent faculty member at the Institute for Advanced Study (IAS) in Princeton, N.J.) said:
>> “The world of mathematics is becoming very large, the complexity of mathematics is becoming very high, and there is a danger of an accumulation of mistakes,” Voevodsky said. Proofs rely on other proofs; if one contains a flaw, all others that rely on it will share the error.
**** This is something Voevodsky has learned through personal experience. In 1999 he discovered an error in a paper he had written seven years earlier. Voevodsky eventually found a way to salvage the result, but in an article last summer in the IAS newsletter, he wrote that the experience scared him. He began to worry that unless he formalized his work on the computer, he wouldn’t have complete confidence that it was correct.

** Others (who don't use computers as math tools/enhancers) claim:
*** "Computers are now used extensively to discover new conjectures by finding patterns in data or equations, but they cannot conceptualize them within a larger theory, the way humans do. Computers also tend to bypass the theory-building process when proving theorems".
*** “Pure mathematics is not just about knowing the answer; it’s about understanding”. “If all you have come up with is ‘the computer checked a million cases,’ then that’s a failure of understanding.”
*** the biggest danger of using a computer proof: What if there’s a bug? And there are always bugs in software!

Mathematicians use Computing in several ways:
* One is proof-by-exhaustion: setting up a proof so that a statement is true as long as it holds for a huge but finite number of cases, and then programming a computer to check all the cases (a minimal sketch of the checking part follows this list).
* More often, computers help discover interesting patterns in data, about which mathematicians then formulate conjectures, or guesses. Significant insights can be gained by looking for patterns in the data and then proving them.
* As motivators for the hard work of proving: Using computation to verify that a conjecture holds in every checkable case, and ultimately to become convinced of it, "gives you the psychological strength you need to actually do the work necessary to prove it".
* Increasingly, computers are helping not only to find conjectures but also to rigorously prove them. Algorithms can perform symbolic computations, manipulating variables instead of numbers to produce exact results free of rounding errors.


I think that Jordan Ellenberg (a professor at the University of Wisconsin who uses computers for conjecture discovery and then builds proofs by hand) sums it up best:
>[He] like many of his colleagues, sees a more significant role for humans in the future of his field: “We are very good at figuring out things that computers can’t do. If we were to imagine a future in which all the theorems we currently know about could be proven on a computer, we would just figure out other things that a computer can’t solve, and that would become ‘mathematics.’ ”

If this is not called PROGRESS in a field, I don't know what is!

----
^^1^^ - A big question is whether [[Math is invented or discovered|Is Math a human invention or a series of discoveries of truths in the real world?]].
I tend to use parentheses (and quotation marks :) in my writing (quite often, I might add).

You can look at a (parenthetically enriched :) example in [[The Holy War: Mac vs. DOS]] (celebrating the excellent writer Umberto Eco, who also uses parentheses very effectively (and who definitely had something to say about them^^1^^ in his tongue-in-cheek [["Style Guide"|Umberto Eco's Rules for Writing (Well)]])).

Some (many?) people frown at the use of parentheses (I'm not sure why), so I was delighted to read that an author I like, Neil Gaiman, mentioned in [[one of his non-fiction pieces|http://journal.neilgaiman.com/2012/01/speech-i-once-gave-on-lewis-tolkien-and.html]] that even as a young boy/reader, he had noticed and appreciated C.S. Lewis (and other writers he loved) using parentheses:
>C.S. Lewis was the first person to make me want to be a writer. He made me aware of the writer, that there was someone standing behind the words, that there was someone telling the story. I fell in love with the way he used parentheses — the auctorial asides that were both wise and chatty, and I rejoiced in using such brackets in my own essays and compositions through the rest of my childhood.
In that piece, Gaiman (to my delight :) also makes nice (and [[correct|http://www.wikihow.com/Use-Parentheses]]) use of parentheses:
>Father Brown^^2^^, that prince of humanity and empathy, was a gateway drug into the harder stuff, this being a one-volume collection of three novels: The Napoleon of Notting Hill (my favourite piece of predictive 1984 fiction, and one that hugely informed my own novel Neverwhere), The Man Who Was Thursday (the prototype of all Twentieth Century spy stories, as well as being a Nightmare, and a theological delight), and lastly The Flying Inn (which had some excellent poetry in it, but which struck me, as an eleven-year old, as being oddly small-minded. I suspected that Father Brown would have found it so as well.) Then there were the poems and the essays and the art.


Since I am into Computer Science and programming, I have to quote here from [[James Iry's blog|http://james-iry.blogspot.com/2009/05/brief-incomplete-and-mostly-wrong.html]], a humorous take on parentheses as it relates to coding (in certain programming languages): 
>In 1958 John ~McCarthy and Paul Graham invent LISP. Due to high costs caused by a post-war depletion of the strategic parentheses (AKA braces) reserve LISP never became popular. Fortunately for computer science the supply of curly braces^^3^^ and angle brackets^^4^^ remains high.

----
^^1^^ - (Always) remember that parentheses (even when they seem indispensable) interrupt the flow.
^^2^^ - a fictional Roman Catholic priest and amateur detective who featured in 53 short stories published between 1910 and 1936 written by English novelist G. K. Chesterton.
^^3^^ - Java-style
^^4^^ - ~HTML-style
Prime numbers are interesting (some would even say "fascinating" :) and useful, but since some people "don't get it", consider this as "nerd alert" :)

The mathematician Richard Kenneth Guy (in a [[paper titled "The Strong Law of Small Numbers"|https://www.maa.org/sites/default/files/images/images/upload_library/22/Ford/Guy697-712.pdf]]) said:
>Two of the most important elements in mathematical research are asking the right questions and recognizing patterns.

On which Martin Gardner (in his [[book The Last Recreations|https://bobson.ludost.net/copycrime/mgardner/gardner15.pdf]]) commented:
>Unfortunately there is no procedure for generating good questions and no way of knowing whether an observed pattern will lead to a significant new theorem or whether the pattern is just a lucky coincidence. 

which strikes me as very similar to the situation with prime numbers: there is no simple formula or procedure for producing them, and they seem to defy all attempts to find patterns in their occurrence (though, for example, the mathematician Stanislaw Ulam came up with [[some creative discoveries|https://en.wikipedia.org/wiki/Ulam_spiral]]).
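
One famous tease of a "pattern": the diagonals of the Ulam spiral correspond to quadratic polynomials, and some of those, like Euler's n^2 + n + 41, are astonishingly prime-rich, for a while. A quick sketch (my illustration):
{{{
# Euler's polynomial n^2 + n + 41 -- one of the quadratic "diagonals"
# that light up in the Ulam spiral.
def is_prime(n):
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

streak = 0
while is_prime(streak * streak + streak + 41):
    streak += 1
print(streak)  # 40 consecutive prime values (n = 0..39), then the pattern breaks
}}}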

Gardner also wrote (in //Martin Gardner's Mathematical Games//):
>No branch of number theory is more saturated with mystery and elegance than the study of prime numbers: those exasperating, unruly integers that refuse to be divided evenly by any integers except themselves and 1.

which sounds very poetic (to be expressed by a mathematician :), but as a matter of fact, prime numbers appear not only in mathematics, but also in "that other human endeavor that delves into mysteries in search of patterns and elegance -- poetry"^^1^^. 

So here is an example by the British poet and writer Helen Spalding ([[quoted both by Martin Gardner (in "The Last Recreations")|https://bobson.ludost.net/copycrime/mgardner/gardner15.pdf]] and in [[an article by Sarah Glaz|http://www.math.uconn.edu/~glaz/My_Articles/ThePoetryOfPrimeNumbers.Bridges11.pdf]]):

''Let Us Now Praise Prime Numbers''

Let us now praise prime numbers  
With our fathers who begat us:  
The power, the peculiar glory of prime numbers  
Is that nothing begat them,  
No ancestors, no factors,  
Adams among the multiplied generations.  

None can foretell their coming.  
Among the ordinal numbers  
They do not reserve their seats, arrive unexpected.  
Along the lines of cardinals  
They rise like surprising pontiffs,  
Each absolute, inscrutable, self-elected.  

In the beginning where chaos  
Ends and zero resolves,  
They crowd the foreground prodigal as forest,  
But middle distance thins them,  
Far distance to infinity  
Yields them rare as unreturning comets.  

O prime improbable numbers,  
Long may formula-hunters  
Steam in abstraction, waste to skeleton patience:  
Stay non-conformist, nuisance,  
Phenomena irreducible  
To system, sequence, pattern or explanation. 



----
^^1^^ from [[an article by Sarah Glaz|http://www.math.uconn.edu/~glaz/My_Articles/ThePoetryOfPrimeNumbers.Bridges11.pdf]].
In the beginning was the Tao. The Tao gave birth to Space and Time. Therefore Space and Time are Yin and Yang of programming.

Programmers that do not comprehend the Tao are always running out of time and space for their programs. Programmers that comprehend the Tao always have enough time and space to accomplish their goals.

How could it be otherwise?

  - from [[The Tao of Programming|http://canonical.org/~kragen/tao-of-programming.html]]
In the design of programming languages, one can let oneself be guided by considering "what the machine can do". Considering, however, that the programming language is the bridge between the user and the machine - that it can, in fact, be regarded as his tool - it seems just as important to take into consideration "what man can think".
In three words I can sum up everything I've learned about life: It goes on.
In an excellent book titled [["When Einstein Walked with Gödel -- Excursions to the Edge of Thought"|https://www.nytimes.com/2018/05/15/books/review/review-when-einstein-walked-with-godel-jim-holt.html]]^^1^^, Jim Holt writes about "The Dangerous Idea of the Infinitesimal".

>From the time it was conceived, the idea of the infinitely small has been regarded with deep misgiving, even more so than that of the infinitely great. How can something be smaller than any given finite thing and not be simply nothing at all? Aristotle tried to ban the notion of the infinitesimal on the grounds that it was an absurdity. David Hume declared it to be more shocking to common sense than any priestly dogma. Bertrand Russell scouted it as “unnecessary, erroneous, and self-contradictory.”
>
>Yet for all the bashing it has endured, the infinitesimal has proved itself the most powerful device ever deployed in the discovery of physical truth, the key to the scientific revolution that ushered in the Enlightenment. And, in one of the more bizarre twists in the history of ideas, the infinitesimal -- after being stuffed into the oubliette seemingly for good at the end of the nineteenth century -- was decisively rehabilitated in the 1960s.
>
>It now stands as the epitome of a philosophical conundrum fully resolved. Only one question about it remains open:
>Is it real?
And he adds:
>Curiously, adding infinitesimals to the universe, as Robinson contrived to do, in no way alters the properties of ordinary finite numbers. Anything that can be proved about them using infinitesimal reasoning can, as a matter of pure logic, also be proved by ordinary methods. Yet this scarcely means that Robinson's innovation was sterile. By restoring the intuitive methods that Newton and Leibniz pioneered, Robinson's “nonstandard analysis” has yielded proofs that are shorter, more insightful, and less ad hoc than their standard counterparts. Indeed, Robinson himself used it early on to solve a major open problem in the theory of linear spaces that had frustrated other mathematicians. Nonstandard analysis has since found many adherents in the international mathematical community, especially in France, and has been fruitfully applied to probability theory, physics, and economics, where it is well suited to model, say, the infinitesimal impact that a single trader has on prices.
>
>Beyond his achievement as a mathematical logician, Robinson must be credited with bringing about one of the great reversals in the history of ideas. More than two millennia after the idea of the infinitely small had its dubious conception, and nearly a century after it had been got rid of seemingly for good, he managed to remove all taint of contradiction from it. Yet he did so in a way that left the ontological status of the infinitesimal completely open. There are those, of course, who believe that any mathematical object that does not involve inconsistency has a reality which transcends the world of our senses. Robinson himself subscribed to such a Platonistic philosophy early in his career, but he later abandoned it in favor of Leibniz's view that infinitesimals were merely "well-founded fictions."
>
>Whatever reality the infinitesimal might have, it has no less reality than the ordinary numbers -- positive, negative, rational and irrational, real and complex, and so on -- do. When we talk about numbers, modern logic tells us, our language simply cannot distinguish between a nonstandard universe brimming with infinitesimals and a standard one that is devoid of them.
So this is another example of the age-old question of [[whether math is an invention or a discovery|Is Math a human invention or a series of discoveries of truths in the real world?]].
It adds to the evolving/expanding "zoo" of mathematical entities (like the irrational numbers, transcendentals, imaginary numbers, and so on) that Richard Hamming writes so well about in [[On why Math works for us]].
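
A programming aside (mine, not Holt's): the cheapest working infinitesimal a programmer can get is the "dual number" a + b*eps with eps^2 = 0. Push one through a formula and out come the value and the exact derivative, roughly the way Newton and Leibniz reasoned, with no limits in sight:
{{{
# Dual numbers: a + b*eps where eps*eps = 0. A "well-founded fiction" you
# can compute with: evaluating f at x + eps yields f(x) and f'(x) at once.
class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b                 # a + b*eps
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = self._lift(o)
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)  # eps^2 vanishes
    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1              # f'(x) = 6x + 2

y = f(Dual(5.0, 1.0))                         # evaluate at 5 + eps
print(y.a, y.b)                               # 86.0 32.0 -> f(5) and f'(5)
}}}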

Holt also quotes from the book The Incredible Shrinking Man (also a 1957 science fiction film) written by Richard Matheson, making another wonderful (wondrous?) connection between life, philosophy, and mathematics; showing that small size does not mean insignificance (nor meaninglessness) (see also what [[Janna Levin|http://jannalevin.com/bio-and-contact/]] [[had to say|On Anthropic Bias, or Was the Universe Made for Us?]] in an [[interview with Krista Tippett|https://www.brainpickings.org/2015/01/09/krista-tippett-einsteins-god-janna-levin/]]):
>So close — the infinitesimal and the infinite. But suddenly, I knew they were really the two ends of the same concept. The unbelievably small and the unbelievably vast eventually meet — like the closing of a gigantic circle. I looked up, as if somehow I would grasp the heavens. The universe, worlds beyond number, God's silver tapestry spread across the night. And in that moment, I knew the answer to the riddle of the infinite. I had thought in terms of man's own limited dimension. I had presumed upon nature. That existence begins and ends in man's conception, not nature's. And I felt my body dwindling, melting, becoming nothing. My fears melted away. And in their place came acceptance. All this vast majesty of creation, it had to mean something. And then I meant something, too. Yes, smaller than the smallest, I meant something, too. To God [and in math], there is no zero. I still exist!


----
^^1^^ - searchable spelling: Gödel, Godel, Goedel
From Maria Popova's Brainpickings [[article on "Wisdom in the Age of Information"|https://www.brainpickings.org/2014/09/09/wisdom-in-the-age-of-information/]]:
>We live in a world awash with information^^1^^, but we seem to face a growing scarcity of wisdom. And what’s worse, we confuse the two. We believe that having access to more information produces more knowledge, which results in more wisdom. But, if anything, the opposite is true — more and more information without the proper context and interpretation only muddles our understanding of the world rather than enriching it^^2^^.
Which is similar to what Michael P. Lynch writes in his book //The Internet of Us - Knowing More and Understanding Less in the Age of Big Data// ([[a book reviewed by Jill Lepore|After The Fact - In the history of truth, a new chapter begins]])
>When we Google-know, we no longer take responsibility for our own beliefs, and we lack the capacity to see how bits of facts fit into a larger whole. Essentially, we forfeit our reason and, in a republic, our citizenship. You can see how this works every time you try to get to the bottom of a story by reading the news on your smartphone.

And Popova continues:
>Ours is a culture where it’s enormously embarrassing not to have an opinion on something, and in order to seem informed, we form our so-called opinions hastily, based on fragmentary bits of information and superficial impressions rather than true understanding.

>[...] At its base is a piece of information, which simply tells us some basic fact about the world. Above that is knowledge — the understanding of how different bits of information fit together to reveal some truth about the world. Knowledge hinges on an act of correlation and interpretation. At the top is wisdom, which has a moral component — it is the application of information worth remembering and knowledge that matters to understanding not only how the world works, but also how it should work. And that requires a moral framework of what should and shouldn’t matter, as well as an ideal of the world at its highest potentiality.


On the lighter side, an [[insightful joke on the nature of knowledge|Learning and Examinations]].

----
^^1^^ or as Edna St. Vincent Millay, in her sonnet “Upon This Age That Never Speaks Its Mind” wrote (in 1939):
>Upon this gifted age, in its dark hour,
>Rains from the sky a meteoric shower
>Of facts . . . they lie unquestioned, uncombined.
>Wisdom enough to leech us of our ill
>Is daily spun; but there exists no loom
>To weave it into fabric.

^^2^^ echoing Alan Kay's condemnation (in a [[video (43 min.)|https://www.youtube.com/watch?v=gTAghAJcO1o]]) of the popular trend of Big Data, where he says it should not be about Big Data, but about [[Big Meaning|http://planspace.org/20141125-alan_kay_on_big_data/]].
Albert Einstein once said:
> The difference between what the most and the least learned people know is inexpressibly trivial in relation to that which is unknown.

It seems that the mathematician and computer scientist [[Gregory Chaitin|https://en.wikipedia.org/wiki/Gregory_Chaitin]] agrees.

In an [[interesting and touching interview of Gregory Chaitin|https://www.whyarewehere.tv/people/gregory-chaitin/]] he talks about WHAT WE DON’T KNOW:
>Ard: Yesterday we talked to Marcelo Gleiser and he talked about the idea of knowledge like an island. So as you grow… an island in a sea of ignorance. So as knowledge grows, so does the size of the border that you have of the ignorance that you see. So as you get more and more knowledge, you also see more and more ignorance.
>
>GC: That’s a very nice image. Also people don’t like talking about what they don’t know. They like talking about what they know. I’m the other way around. I prefer thinking about what I don’t know.
>
>Certainty is bad because it’s uncreative. It means you know already – you don’t need to think any more about it. Well it’s also totally uncreative in mathematics. The idea of Hilbert was to ensure certainty. He thought it was possible: he thought the possibility of doing this is what it meant to say that mathematics was black or white, that mathematical truth is more solid than any empirical truth. And it’s wonderful that mathematics refuted this.
>
>You know, [[Gödel’s Incompleteness Theorem|resources/Boolos-godel-in-single-syllables.pdf]] is suppressed (see also [[The world's shortest explanation of Gödel's theorem]]). The mathematics community doesn’t want to take it into account, because they view it as a tremendously pessimistic, horrible fact that you can’t have a ‘theory of everything’ for mathematics, and that mathematics doesn’t give absolute truth. I think this is absolutely wonderful. The viewpoint is wrong. What Gödel’s Theorem is about… it’s not a negative theorem, it’s a positive theorem. It’s about creativity. It’s the first step in the direction of a mathematical theory of creativity – of saying that math is not a closed system, it’s an open system, just like biology. And this is totally liberating and we should all celebrate…. celebrate this fact rather than bemoaning it, beating our breast, ‘Oh my God. What happened to absolute truth in mathematics?’ Well, what happened was that absolute truth was a closed system. It was a prison: the notion of a formal theory that would give you absolute certainty.
>
>Ard: A theory of everything.
>
>GC: A theory of everything. Yes, a formalisation of all of mathematics in one finite set of axioms. This would have been horrifying.
>
>Let’s say that they have this computer program which can decide if mathematical assertions are true or false.
>
>Well, what good is it to know whether something is true or false? You want to understand what’s happening, right?
>
>David: The why rather than the…
>
>GC: The why, exactly. You want to be convinced emotionally that something is true. That’s why new questions are important, because what counts is not the mathematics we know – the science we know is uninteresting – it’s what we don’t know that’s interesting.
>
>Unfortunately universities spend all their time filling your head with what’s known, but that’s totally trivial. What’s interesting is what we don’t know. That’s what all the courses should be about, so that maybe the students can come up with new ideas before they’ve been brainwashed with the current paradigms. That would be the university I would create, you know, which only would talk about what we don’t know because what we know is really very uninteresting.

Insofar as the laws of mathematics refer to reality, they are not certain. And insofar as they are certain, they do not refer to reality.
From [[an article|https://www.shanesnow.com/articles/intellectual-humility]] (and a [[self-assessment|https://www.shanesnow.com/articles/intellectual-humility#take-the-intellectual-humility-assessment]] questionnaire) by Shane Snow on Intellectual Humility.

Snow writes that Intellectual Humility (IH)
>is a virtue and one of the biggest keys to making progress in teamwork, creative solo work, and society.
>If everyone in the world developed more of this virtue, a lot would change. Innovation would skyrocket. War and violence would plummet. Facebook arguments would actually be productive.
>To be truly intellectually humble, we need to develop respect for people and ideas that are different than our own, overcome our overconfidence, and take control of our ego.
>This sets us up to be able to revise our viewpoints when it's important to do so.

Altogether, this is what it means to be "Intellectually Humble":
* ''Openness to New Experiences & Information''
** Being open to new experiences and information is not part of Intellectual Humility, but it helps us learn about things we can then use Intellectual Humility with. (but as Carl Sagan said: Keeping an open mind is a virtue, but… not so open that your brains fall out.)
* ''Respect for Other Viewpoints''
** Unearth the moral foundations of the other person's viewpoint.
** Gain empathy by learning the other person's story.
** Reduce fear of the other person by playing & laughing together.
** Increase your general respect for others by living abroad, reading and watching fiction, and learning multiple languages.
* ''Lack of Intellectual Overconfidence''
** Understand the "math" of how diverse perspectives can make a group smarter than its smartest individual.
** When sharing strong viewpoints, acknowledge, "I could be wrong."
** Leave room for discussion by avoiding verbal absolutes like "always" and "clearly" when sharing viewpoints.
** Acknowledge when you don't know something, but add that you just don't know it "Yet."
* ''Separation of Ego & Intellect''
** Get to know your ego through the [[Enneagram framework|https://www.enneagraminstitute.com/how-the-enneagram-system-works/]].
** Identify when discussions or topics veer into personal territory.
** Don't invoke identity when expressing viewpoints.
** Practice mindfulness meditation.
* ''Willingness to Revise Viewpoints''
** Changing our minds requires us to consider other viewpoints, acknowledge we could be wrong, and not take ideas personally. At that point, revising our viewpoint is almost a piece of cake.
** One very straightforward thing we can do to build up our ability to revise our viewpoints: travel. Either physically, or through fiction.
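
The "math" mentioned under overconfidence above has a crisp form: Scott Page's diversity prediction theorem, which says the crowd's squared error equals the average individual squared error minus the diversity of the estimates. A minimal sketch with made-up numbers:
{{{
# Diversity prediction theorem: crowd_error = average_error - diversity.
estimates = [48.0, 60.0, 75.0, 90.0]   # four people guess a quantity (made up)
truth = 70.0

crowd = sum(estimates) / len(estimates)
crowd_error = (crowd - truth) ** 2
average_error = sum((e - truth) ** 2 for e in estimates) / len(estimates)
diversity = sum((e - crowd) ** 2 for e in estimates) / len(estimates)

print(crowd_error)               # 3.0625
print(average_error - diversity) # 3.0625 -- identical, by the theorem
}}}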
!!!!Why
The Interactive Solutions Guide (ISG), which I designed and implemented as part of my role as a "Performance Support Specialist" at Cisco Systems, was built to enable Cisco partners (salespeople and Systems Engineers) to "correctly configure and sell Cisco networking solutions, which meet customer needs and requirements".

!!!!What
The ISG was designed to replace most of the traditional training that was originally proposed for the "on-boarding" of Cisco partners. It was a software tool built around an inference engine, which helped partners assess the customer needs, collect information about the customer networking environment, and configure a solution based on Cisco equipment to satisfy the customer requirements, and present/justify the proposed solution.

!!!!Human Performance Support
The system embedded a few Human Performance Support principles:
* combining learning with doing
** the system supported learning while doing, by offering information about networking technologies, Cisco equipment, device configurations, and so on, all embedded within the performance activities of assessing the customer needs and environment, selecting an appropriate solution, and preparing the presentation of that solution.
* just-in-time, on-demand training
** the learning resources and activities were available within the work context, enabling the performer to go back and forth between learning and doing without switching context
* Deep performance support
** the system provided information, examples, skills training activities, decision support (via the inference engine), and task automation (through automating configuration tasks, presentation and justification).
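
The ISG itself was proprietary, so I can't show its rules here, but the general shape of that kind of decision support is easy to sketch (a toy illustration, not the actual engine): facts gathered during the needs assessment drive rules toward configuration recommendations.
{{{
# A toy sketch of rule-based decision support (not the actual ISG engine):
# facts from the customer assessment select configuration recommendations.
facts = {"sites": 3, "needs_voice": True, "wan": "frame-relay"}

rules = [
    (lambda f: f["sites"] > 1,            "one WAN router per site"),
    (lambda f: f["needs_voice"],          "voice-capable modules"),
    (lambda f: f["wan"] == "frame-relay", "frame-relay WAN interfaces"),
]

recommendations = [item for condition, item in rules if condition(facts)]
print(recommendations)  # the items to include in the proposed solution
}}}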
I recently came across an [[interesting chapter|resources/WHEN THINGS START TO THINK - Chapter 13 - Information and Education.docx]] from Neil Gershenfeld's book "When Things Start to Think" (see [[chapters online|http://www.kurzweilai.net/neil-gershenfeld]]), talking about (among other things ;-) why he started teaching two semester-long courses at MIT, one covering the physical world outside of computers (The Physics of Information Technology), and the other the logical world inside computers (The Nature of Mathematical Modeling).

Gershenfeld tells a story explaining what triggered his decision to teach these courses. In a nutshell:
>One week an MIT undergrad, and independently an MIT professor, asked me the same question: how does the bandwidth of a telephone line relate to the bit rate of a modem?
>...
>I was surprised to find that someone could be at MIT as long as both the student and professor had been, studying communications in many forms, and never have heard of a result as important as this. Not only that, both were unprepared to understand where it came from and what conclusions might be drawn from it.

>Entropy shows up in two places in their question, ... 
>one having to do with materials and thermal noise calculations, and the other having to do with information density and bit-rate calculations.
>Although these two calculations are closely related, physicists learn how to do the former in one part of campus, and engineers the latter in another.
In other words, topics are taught/covered in "silos", and very powerful and useful connections are being missed or ignored.
>Very few students manage to be sufficiently bilingual to be able to do both. Those who are either take so many classes beyond the usual load that they manage to squeeze in a few different simultaneous degrees, or take so few classes that they have enough time to put these pieces together on their own. Either path requires unusual initiative to answer such a reasonable question.
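
For the record, the answer to their question is the Shannon-Hartley theorem, C = B * log2(1 + S/N), which is exactly where the thermal-noise calculation and the bit-rate calculation meet. A quick sketch with typical (assumed) phone-line numbers:
{{{
# Shannon-Hartley: C = B * log2(1 + S/N), the bridge between a line's
# bandwidth and noise (physics) and its achievable bit rate (information).
import math

B = 3100.0        # phone-line bandwidth in Hz (~300-3400 Hz passband)
snr_db = 30.0     # a typical voice-line signal-to-noise ratio (assumed)
snr = 10 ** (snr_db / 10)

C = B * math.log2(1 + snr)
print(f"{C / 1000:.1f} kbit/s")  # ~30.9 kbit/s -- about where modems topped out
}}}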

And that's why Gershenfeld decided to teach these 2 unique courses:
>Faced with students who knew a lot about a little, I decided that I had to teach everything.
>...
>My students loved them; some of my peers at MIT hated them. This was because each week I would cover material that usually takes a full semester.
>...
>I did this by teaching just those things that are remembered and used long after a class has ended, rather than everything that gets thrown in. 

And this resonated with me on several levels.
* On the personal level, I know that I like to learn this way, that is, start with something I really, really want to do or know, and follow the trail (or "pull the string in the spool"), with a strong sense of purpose and direction, taking "side excursions" and doing "deep-dives" when needed (or wanted), to cover additional topics and acquire additional skills.
** This is, for example, how I learned most of the new programming languages (and I know a few ;-) that I have acquired //after// graduating from university (where they teach programming languages the "linear way", chapter after chapter). I didn't read the books or manuals from cover to cover, but rather started with the introduction covering things like the "philosophy", motivations, guiding principles, etc. of the language, followed by things like unique language features, data structures, models, abstractions, etc., and then dove into the techniques and hands-on exercises, all "in service" of the original goal or need I had in mind.
** In other words, I carved ''a unique path through the knowledge and skills space''.
** This learning path works best when guided by a strong and unifying goal like a project. In other words, ''project-based learning'' by definition will create a path through one or more knowledge domains. And this type of learning is also [[advocated by Seymour Papert|An Exploration in the Space of Mathematics Educations]].
*** It's not surprising that Neil Gershenfeld from the [[Center for Bits and Atoms|http://cba.mit.edu/]] (spawned from the MIT Media Lab) is advocating Interdisciplinary and Project-based Learning, too. Papert was a founding faculty member of the Lab...

* On the pedagogical level, this idea of a ''knowledge domain'' (consisting of topics and relationships, forming a "knowledge map") that can be viewed and traversed according to individual needs, skills, and interest is similar to a [[project/proposal|resources/LDT Just-in-time Learning for Performance - Solution.jpeg]] I worked on at Stanford University, as part of the Learning, Design, and Technology (LDT) Masters program.
** These kinds of individual views and paths through a knowledge domain or map reflect the learner's existing knowledge and skill levels, their preferred learning styles, desired/required mastery level, etc.
** A solution I proposed as a [[just-in-time performance and learning system|resources/LDT JIT Performance Support.pdf]] describes how to design and apply knowledge domains, knowledge/topic maps, knowledge and skills assessments, to create effective learning and performing environments (after all, excellent personal performance is an important goal and driver for learning).
** A [[very rudimentary 3D pivoting view|resources/LDTtutoringSystemMonitor1.png]] of some relevant learning parameters in a simple knowledge domain (such as mastery level, topic coverage, and learning time) was also incorporated into another prototype I did as part of an [[automated tutoring system|An online intelligent tutoring system with knowledge maps and second-order feedback]].

I think that there is a lot of merit in combining the motivation that Gershenfeld had for designing his courses with domain knowledge maps like the ones developed at the Khan Academy.
There is definitely a need (and an "audience") for the very structured and exhaustive path through such a domain, following and fulfilling the prerequisites and established paths and relationships.

But there is also a strong need (and an audience - maybe bright and focused university students, and/or life-long learners) for a customized/personal path through a domain, in light of the learner's strong focus, desires, or needs. And that's where providing these capabilities (views, personalization, navigation) is critical and very useful.

Or in Gershenfeld's words:
>Although this brisk pace does a disservice to any one area, unless I teach this way most people won't see most things. It is only by violating the norms of what must be taught in each discipline that I can convey the value of the disciplines.

>It's not a discipline, a distinct body of knowledge that has stood the test of time and that brings order to a broad area of our experience. Progress on the former relies on the latter.

>Universities go on filling students with an inventory of raw knowledge to be called on later; this is sensible if the world is changing slowly, but it is not. An alternative is [[just-in-time education|resources/LDT JIT Performance Support.pdf]], drawing on educational resources as needed in support of larger projects.

>The faster the world changes, the more precious traditional disciplines become as reliable guides into unfamiliar terrain, but the less relevant they are as axes to organize inquiry.

And also:
>The inconvenient technology that we live with reflects the inconvenient institutional divisions that we live with. To get rid of the former, we need to eliminate the latter.
The Drake Equation (as opposed to [[The Flake Equation]] :) is a good example of combining knowledge from multiple disciplines.

A reminder: The equation is @@font-size:14pt; N = R* • f~~p~~ • n~~e~~ • f~~l~~ • f~~i~~ • f~~c~~ • L @@

where:
N = The number of civilizations in the Milky Way galaxy whose electromagnetic emissions are detectable.
R* = The rate of formation of stars suitable for the development of intelligent life.
f~~p~~ = The fraction of those stars with planetary systems.
n~~e~~ = The number of planets, per solar system, with an environment suitable for life.
f~~l~~ = The fraction of suitable planets on which life actually appears.
f~~i~~ = The fraction of life bearing planets on which intelligent life emerges.
f~~c~~ = The fraction of civilizations that develop a technology that releases detectable signs of their existence into space.
L = The length of time such civilizations release detectable signals into space.
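
Since the equation is just a product of factors, the computation is trivial; what is interesting is how wildly N swings with the inputs. A minimal sketch (the values below are illustrative placeholders, not anyone's published estimates):
{{{
# The Drake Equation as a product. Every input below is a placeholder;
# each factor is hotly debated.
def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    return R_star * f_p * n_e * f_l * f_i * f_c * L

optimist = drake(R_star=1.0, f_p=0.5, n_e=2, f_l=1.0, f_i=0.5, f_c=0.5, L=10000)
pessimist = drake(R_star=1.0, f_p=0.5, n_e=2, f_l=0.01, f_i=0.01, f_c=0.1, L=500)
print(optimist, pessimist)  # 2500.0 vs 0.005 -- the assumptions do all the work
}}}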


Smart and thoughtful people grappled with the equation and its implications, from [[Enrico Fermi|https://www.nobelprize.org/nobel_prizes/physics/laureates/1938/fermi-bio.html]] (posing the famous [[Fermi Paradox|https://www.seti.org/seti-institute/project/details/fermi-paradox]]), through [[Ray Kurzweil|pg. 53 - RAY KURZWEIL: WHERE ARE THEY?]], [[Frank Wilczek|Frank Wilczek on Intelligent Life in the universe]], Carl Sagan, and many more.

In an article titled [[Greetings, E.T. (Please Don’t Murder Us.)|https://www.nytimes.com/2017/06/28/magazine/greetings-et-please-dont-murder-us.html]] Steven Johnson makes some interesting observations about (among other things :) the interdisciplinary nature of the Drake Equation, combining science, philosophy, morality, religion, politics.
>What makes the Drake Equation so mesmerizing is in part the way it forces the mind to yoke together so many different intellectual disciplines in a single framework. As you move from left to right in the equation, you shift from astrophysics, to the biochemistry of life, to evolutionary theory, to cognitive science, all the way to theories of technological development. Your guess about each value in the Drake Equation winds up revealing a whole worldview: Perhaps you think life is rare, but when it does emerge, intelligent life usually follows; or perhaps you think microbial life is ubiquitous throughout the cosmos, but more complex organisms almost never form. The equation is notoriously vulnerable to very different outcomes, depending on the numbers you assign to each variable.
>
>The most provocative value is the last one: L, the average life span of a signal-transmitting civilization. You don’t have to be a Pollyanna to defend a relatively high L value. All you need is to believe that it is possible for civilizations to become fundamentally self-sustaining and survive for millions of years. Even if one in a thousand intelligent life-forms in space generates a million-year civilization, the value of L increases meaningfully. But if your L-value is low, that implies a further question: What is keeping it low? Do technological civilizations keep flickering on and off in the Milky Way, like so many fireflies in space? Do they run out of resources? Do they blow themselves up?
And Johnson points out that a new level of thinking and behavior as a species may be required at this stage of our civilization, again requiring highly interdisciplinary knowledge and skills:
>Wrestling with the [[METI|http://meti.org/mission]] question suggests, to me at least, that the one invention human society needs is more conceptual than technological: We need to define a special class of decisions that potentially create extinction-level risk. New technologies (like superintelligent computers) or interventions (like METI) that pose even the slightest risk of causing human extinction would require some novel form of global oversight. And part of that process would entail establishing, as [[[Dr. Kathryn] Denning|http://www.yorku.ca/kdenning/seti.htm]] suggests, some measure of risk tolerance on a planetary level. If we don’t, then by default the gamblers will always set the agenda, and the rest of us will have to live with the consequences of their wagers.
>[...] There is not a lot of historical precedent for humans voluntarily swearing off a new technological capability — or choosing not to make contact with another society — because of some threat that might not arrive for generations. But maybe it’s time that humans learned how to make that kind of choice. This turns out to be one of the surprising gifts of the METI debate, whichever side you happen to take. Thinking hard about what kinds of civilization we might be able to talk to ends up making us think even harder about what kind of civilization we want to be ourselves.
As pattern- and/or meaning-seeking creatures, we tend to assign significance to all sorts of sets, combinations, series, and events.

I came across (incidentally? I don't think so! :) an interesting tool called [[RIES - Find Algebraic Equations, Given Their Solution|http://mrob.com/pub/ries/]], which is an Inverse Equation Solver.
The author of the web-based tool has a long page of "[[interesting numbers|http://mrob.com/pub/math/numbers.html]]" and their relationships.
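
To make the idea concrete, here is a crude sketch (in Python; my own illustration - RIES's actual algorithm is far more sophisticated) of what an inverse equation solver does: enumerate tiny expressions built from small integers and keep whichever comes closest to the target value.
{{{
import itertools, math

target = 3.141592653589
best_err, best_expr = float('inf'), None

# Try every pair of small integers in a few expression shapes.
for a, b in itertools.product(range(1, 10), repeat=2):
    candidates = [
        (f"{a}/{b}",       a / b),
        (f"{a}+sqrt({b})", a + math.sqrt(b)),
        (f"sqrt({a}*{b})", math.sqrt(a * b)),
        (f"{a}*ln({b})",   a * math.log(b)),
    ]
    for expr, value in candidates:
        if abs(value - target) < best_err:
            best_err, best_expr = abs(value - target), expr

print(f"closest match: {best_expr}  (error {best_err:.2e})")
}}}
RIES explores something like this idea, over a vastly larger and more cleverly organized space of expressions.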



Here is what [[RIES produces|http://mrob.com/pub/ries/ries.php?target=3.141592653589&rst=c]] for 3.141592653589 (pi):
[>img[RIES - pi|resources/ries_pi_1.png][resources/ries_pi.png]]



The author of RIES, [[Robert Munafo|http://mrob.com/pub/personal.html]], also suggests a few tips and tricks for playing with numerical relationships and assigning meaning to them (after the fact, and by design, of course :)
* [[Pre-destined and linked fortunes|http://mrob.com/pub/ries/index-3.html#pre_destiny]] (linking 2 phone numbers)
* [[Four Fours|http://mrob.com/pub/ries/index-3.html#four]] (producing any number with a restricted set)
* [[Area 51|http://mrob.com/pub/ries/index-3.html#area_51]] (the [[infamous UFO storage/repository|https://en.wikipedia.org/wiki/Area_51]] that [[does not exist|http://www.urbandictionary.com/define.php?term=Area%2051]] :)









And here is what [[xkcd had to say|http://xkcd.com/1047/]] about this:
[img[RIES - numbers|resources/ries_xkcd_1047_s.png][resources/ries_xkcd_1047.png]]
Addressing [[The Unreasonable Effectiveness of Mathematics]] and Wigner's observation^^1^^, the physicist Mario Livio (who has written a lot about it) writes in [[an article on PBS/KQED|http://www.pbs.org/wgbh/nova/blogs/physics/2015/04/great-math-mystery/]]:

>At the core of this math mystery lies another argument that mathematicians, philosophers, and, most recently, cognitive scientists have had for a long time: Is math an invention of the human brain? Or does math exist in some abstract world, with humans merely discovering its truths? The debate about this question continues to rage today.
>[...]
>Personally, I believe that by asking simply whether mathematics is discovered or invented, we forget the possibility that mathematics is an intricate combination of inventions and discoveries. Indeed, I posit that humans invent the mathematical concepts—numbers, shapes, sets, lines, and so on—by abstracting them from the world around them. They then go on to discover the complex connections among the concepts that they had invented; these are the so-called theorems of mathematics.

This echoes [[David Darling's view|The relationship between the world out there and what's inside our mind]] that we combine perception and classification of "things out there" with mental processes in the mind as our natural/inherent mode of living and surviving.

>I must admit that I do not know the full, compelling answer to the question of what is it that gives mathematics its stupendous powers. That remains a mystery.

In [[another article|http://www.sfu.ca/~rpyke/cafe/livio.pdf]] Livio addresses a related question: what gives mathematics its explanatory and predictive powers?

> There is no doubt that the selection of topics we address mathematically has played an important role in math's perceived effectiveness. But mathematics would not work at all were there no universal features to be discovered. You may now ask: Why are there universal laws of nature at all? Or equivalently: Why is our universe governed by certain symmetries and by locality? I truly do not know the answers, except to note that perhaps in a universe without these properties, complexity and life would have never emerged, and we would not be here to ask the question.
(see Edward Nelson's perspective [[On the question of mathematics syntax and semantics]])

In an [[interesting and touching interview, Gregory Chaitin|https://www.whyarewehere.tv/people/gregory-chaitin/]] talks about "The Joy of Mathematical Discovery":
>You know, there is mathematics that, as Ulam puts it, fills in much-needed gaps. There are pieces of mathematics and when you find them, they seem sort of inevitable afterwards – they weren’t in advance – and when you find a thing like that, then it seems more real, then it seems that you’re discovering it. But let’s face it, from a practical point of view we’re inventing it as we go.
>
>But it’s true that on good days, when you’ve found something that you really love, you say, ‘Oh, my God.’ You have a feeling of inevitability that you’re discovering something, that there’s this beautiful thing out there that it really doesn’t feel like any mortal could have invented it. It seems to be something from the Platonic universe of beautiful ideas or from God’s mind. Who knows?
>
>So that’s if you’re very lucky. The whole thing seems so beautiful and so natural and so fundamental that you say, ‘How come I didn’t see it before? How come nobody saw it before?’
>That’s a very wonderful feeling, I have to say. But it took ten years to get there because the mathematics was clumsy and awkward, as new mathematics always is. Then people polish it for 300 years... But discovering mathematics is messy, like making love; it’s messy.
>
>[...] But there are wonderful moments when, yes, you have this feeling, and then moments like that you say, ‘Well, it wasn’t me. I didn’t discover this. This idea exists out there independently of me and maybe it wanted to use me to express itself.’
>
>But maybe it’s a way of fooling ourselves. But this is the kind of thing that helps you to do good mathematics. You have to be inspired. It’s profoundly emotional. I mean, the best mathematics is an art. It’s totally creative. You have to throw your whole personality at it. And it may be that you discover something because you’re crazy. You were the right crazy person to come up with this crazy idea, but other crazy people don’t find anything because their craziness isn’t in sync with the next discovery that had to be made.
>
>David: I was fascinated when you said, ‘Sometimes you feel like the idea just needed you.’
>
>GC: It needed people to express it. It wanted to incarnate, so to speak.
>
>David: Well, I sometimes think that ideas are like a seed.
>
>GC: Absolutely.
>
>David: And that the mind is like a garden. So when people say, ‘I did this,’ I always think to myself that it would be like a little lump of dirt saying to you, ‘Look at the flower I made.’
>
>GC: I absolutely agree with you, David. I remember Benoit Mandelbrot was in a documentary as he was dying, and you could barely hear his voice – they had to put subtitles – and he was saying, ‘I discovered a beautiful world.’
>
>It feels like it’s out there, and you feel very lucky to have stumbled on it. But I don’t think you can take the credit. It’s like climbing mountains: you climb a mountain to get a better view, to see further, and one always feels that there are other mountain ranges that are higher. In the distance you can see still higher mountains, and the ranges never stop and it always gets higher. The further you see, the more you realise that there are.
>
>So, for example, I’ve worked on Omega, but what is consciousness? What is the mind? How does the brain work? Can you prove that Darwinian evolution works? Where do new species come from? I mean, there’s endless questions, and each question just opens more questions.


In a [[provocative paper titled "Mathematics on a Distant Planet"|http://worrydream.com/refs/Hamming%20-%20Mathematics%20on%20a%20Distant%20Planet.pdf]], Richard Hamming [[points out the criticality of mathematics to real life, but also some of its arbitrary and non-relevant parts|The (crucial) importance of mathematics]] (in his (learned!) opinion), and adds:
>Hermite said, "We are not the master of Mathematics, we are the servant." I have often said the opposite, "We are the master of Mathematics, not the servant; it shall do as we want it to do." In truth, I seem to believe in a blend of the two remarks; at times we are driven and at times we are in control of mathematics. So too, the aliens will find themselves, and because they live in the same kind of physical world and have established radio contact with us, their "robust," useful mathematics will have a reasonable analogy with ours, but the "non-robust" parts could be very different. Would they even know or care about all of our trivial theorems?

----
^^1^^ “The miracle of the appropriateness of the language of mathematics to the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.” - in Wigner's 1960 article [[The Unreasonable Effectiveness of Mathematics in the Natural Sciences|https://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html]]
IS THE INTERNET CHANGING THE WAY WE THINK? Copyright 2011 by [[Edge Foundation, Inc.|http://edge.org/]]
The Net's impact on our minds and future.
Edited by John Brockman

<<forEachTiddler 
where 
'tiddler.tags.contains("book-chapter") && tiddler.tags.contains("Is the Internet Changing the Way You Think?")'
sortBy 
'tiddler.title'>>

[[Also by John Brockman and the Edge "cohort"|What Have You Changed Your Mind About?]]
a physicist and science fiction writer
It [evolution] was a concept of such stunning simplicity, but it gave rise, naturally, to all of the infinite and baffling complexity of life. The awe it inspired in me made the awe that people talk about in respect of religious experience seem, frankly, silly beside it. I'd take the awe of understanding over the awe of ignorance any day.
 
It goes against the grain of modern education to teach students to program. What fun is there in making plans, acquiring discipline, organizing thoughts, devoting attention to detail, and learning to be self-critical?
It has been said that democracy is the worst form of government except all the others that have been tried.
It is a common illusion that something is wrong //because// we are sad, rather than that nothing is wrong //although// we are sad.
It is a fine thing to be honest, but it is also very important to be right.
It is by logic that we prove, but by intuition that we discover.
The (very) witty and (sometimes) wise book "The Phantom Tollbooth"^^1^^ by Norton Juster has a great chapter (ch. 9) about points of view and perspective.

I think that it emphasizes (and embodies :) the point (of view :) that the "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]" Alan Kay was making when he quipped: 
>[[A good point of view is worth many IQ points]].

In this chapter, the boy Milo (the protagonist) and his companions, the ticking dog (a watchdog named Tock) and the Humbug, meet a curious boy (Alec) who is literally floating in the air.
Alec claims that All Depends on How You Look at Things (which is a philosophically astute observation, come to think of it!).
>"For instance," continued the boy, "if you happened to like deserts, you might not think this [lush forest scenery] was beautiful at all."
>"That's true," said the Humbug, who didn't like to contradict anyone whose feet were that far off the ground.
>"For instance," said the boy again, "if Christmas trees were people and people were Christmas trees, we'd all be chopped down, put up in the living room, and covered with tinsel, while the trees opened our presents."
>"What does that have to do with it?" asked Milo.
>"Nothing at all," he answered, "but it's an interesting possibility, don't you think?"
(don't these "pointless asides" remind you of some of Lewis Carroll's Alice dialogs?)
>"How do you manage to stand up there?” asked Milo, for this was the subject which most interested him.
>“I was about to ask you a similar question," answered the boy, "for you must be much older than you look to be standing on the ground."
>“What do you mean?” Milo asked.
>"Well," said the boy, "in my family everyone is born in the air, with his head at exactly the height it's going to be when he's an adult, and then we all grow toward the ground. When we're fully grown up or, as you can see, grown down, our feet finally touch. Of course, there are a few of us whose feet never reach the ground no matter how old we get, but I suppose it's the same in every family."
And talking about "normal childhoods", and "(im)possible feats", from the inimitable Carroll (Through the Looking Glass, Alice and the White Queen):
>>[The Queen:] "I'm just one hundred and one, five months and a day."
>>"I can't believe that!" said Alice.
>>"Can't you?" the Queen said in a pitying tone. "Try again: draw a long breath, and shut your eyes."
>>Alice laughed. "There's no use trying," she said: "one can't believe impossible things."
>>"I daresay you haven't had much practice," said the Queen. "When I was your age, I always did it for half-an-hour a day. Why, sometimes I've believed as many as six impossible things before breakfast."
And back to Juster:
>He hopped a few steps in the air, skipped back to where he started, and then began again.
>"You certainly must be very old to have reached the ground already."
>"Oh no,” said Milo seriously. "In my family we all start on the ground and grow up, and we never know how far until we actually get there."
>"What a silly system.” The boy laughed. “Then your head keeps changing its height and you always see things in a different way? Why, when you're fifteen things won't look at all the way they did when you were ten, and at twenty everything will change again.”
>"I suppose so," replied Milo, for he had never really thought about the matter.
>“We always see things from the same angle,” the boy continued. “It's much less trouble that way. Besides, it makes more sense to grow down and not up. When you're very young, you can never hurt yourself falling down if you're in mid-air, and you certainly can't get into trouble for scuffing up your shoes or marking the floor if there's nothing to scuff them on and the floor is three feet away.”
>"That's very true," thought Tock, who wondered how the dogs in the family liked the arrangement.
>
>"But there are many other ways to look at things," remarked the boy.
>"For instance, you had orange juice, boiled eggs, toast and jam, and milk for breakfast," he said, turning to Milo. “And you are always worried about people wasting time,” he said to Tock. “And you are almost never right about anything,” he said, pointing at the Humbug, "and, when you are, it's usually an accident.”
>"A gross exaggeration," protested the furious bug, who didn't realize that so much was visible to the naked eye.
>“Amazing,” gasped Tock. "How do you know all that?” asked Milo.
>"Simple," he said proudly. "I'm Alec Bings; I see through things. I can see whatever is inside, behind, around, covered by, or subsequent to anything else. In fact, the only thing I can't see is whatever happens to be right in front of my nose."
>“Isn't that a little inconvenient?” asked Milo, whose neck was becoming quite stiff from looking up.
>“It is a little,” replied Alec, “but it is quite important to know what lies behind things, and the family helps me take care of the rest. My father sees to things, my mother looks after things, my brother sees beyond things, my uncle sees the other side of every question, and my little sister Alice^^1^^ sees under things."
>“How can she see under things if she's all the way up there?” growled the Humbug.
>"Well," added Alec, turning a neat cartwheel, "whatever she can't see under, she overlooks."
What an amazing family!
>"Would it be possible for me to see something from up there?” asked Milo politely.
>"You could,” said Alec, “but only if you try very hard to look at things as an adult does."
>Milo tried as hard as he could [Alice and the White Queen again^^1^^], and, as he did, his feet floated slowly off the ground until he was standing in the air next to Alec Bings. He looked around very quickly and, an instant later, crashed back down to earth again.
>"Interesting, wasn't it?" asked Alec.
>“Yes, it was,” agreed Milo, rubbing his head and dusting himself off, “but I think I'll continue to see things as a child. It's not so far to fall." [better safe than sorry; take advantage of, and enjoy, what you have and where you are :) ]
>“A wise decision, at least for the time being,” said Alec. "Everyone should have his own point of view."
>“Isn't this everyone's Point of View?” asked Tock, looking around [the Vista Point (ha!) they all stand at] curiously.
>
>“Of course not,” replied Alec, sitting himself down on nothing. “It's only mine, and you certainly can't always look at things from someone else's Point of View. For instance, from here that looks like a bucket of water,” he said, pointing to a bucket of water; “but from an ant's point of view it's a vast ocean, from an elephant's just a cool drink, and to a fish, of course, it's home. So, you see, the way you see things depends a great deal on where you look at them from. Now, come along and I'll show you the rest of the forest."
>He ran quickly through the air, stopping occasionally to beckon Milo, Tock, and the Humbug along, and they followed as well as anyone who had to stay on the ground could.
>"Does everyone here grow the way you do?” puffed Milo when he had caught up.
>“Almost everyone,” replied Alec, and then he stopped a moment and thought. “Now and then, though, someone does begin to grow differently. Instead of down, his feet grow up toward the sky. But we do our best to discourage awkward things like that."
>"What happens to them?” insisted Milo.
>"Oddly enough, they often grow ten times the size of everyone else," said Alec thoughtfully, “and I've heard that they walk among the stars.” And with that he skipped off once again toward the waiting woods.



----
^^1^^ - This book/story has many similarities, on multiple levels, with Lewis Carroll's books "Alice in Wonderland" and "Through the Looking Glass".

It's about Big Meaning, not Big Data.

Said [[Alan Kay|https://en.wikipedia.org/wiki/Alan_Kay]] (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]") in a talk which opens with his thoughts on "Big Data":

>Big data is a way that a lot of people are trying to make money today. And it's a favorite of marketing people, because it's in the wind. Everybody has heard the phrase “big data.” Not everybody knows what it means. And so it's the perfect context for doing things that people can say, “Well, this is an application of big data and this is an application of big data.” But in fact, the interesting future's not about data at all—it's about meaning.

[[Kay commented on it|https://news.ycombinator.com/item?id=11803165]] later saying:
> the real issues are not "big data" but "big understanding", not "Machine Learning" but "Machine Thinking". [...T]he "[[Dream Machine|http://www.nytimes.com/2001/10/07/books/review/07PAULOST.html]]" [ [[book|https://www.amazon.com/Dream-Machine-Licklider-Revolution-Computing/dp/014200135X]] ] is about how the funders were willing to put forth considerable resources for "problem finding" not just "problem solving" -- a lot more of that needs to be done today.

You can watch the whole [[TED Talk: The Future Doesn't Have to Be Incremental|https://www.youtube.com/watch?v=gTAghAJcO1o]] (on ~YouTube)

In [[another TED Talk|http://ed.ted.com/lessons/the-rise-of-human-computer-cooperation-shyam-sankar]] Shyam Sankar said: "it's not a question of //how// to compute [things], but //what// to compute". His point was that you have to include humans and human values in the picture.


In a recent NPR broadcast covering the [[Edge question of which scientific idea is ready for retirement|https://www.edge.org/conversation/john_brockman-this-idea-must-die-scientific-theories-that-are-blocking-progress]], one of the scientists brought up the work done by astronomers Tycho Brahe and Johannes Kepler about the orbits of the planets.

Tycho was the meticulous data collector of observations of planet positions in the sky and Kepler was the brilliant analyzer and interpreter of the data.

As [[an article at Harvard|http://chandra.harvard.edu/edu/formal/icecore/The_Astronomers_Tycho_Brahe_and_Johannes_Kepler.pdf]] put it:
>Tycho was a scientist who worked by direct observation. Kepler was a scientist who worked by calculation and testing one idea after another. Tycho's life's work of measuring the positions of objects in the sky was in itself useless without someone like Kepler to come along and make sense of those measurements. In the same way, Kepler's efforts to understand how the planets moved would be nothing but speculation, guessing, and mysticism if he did not have the basic data – the accurate measurements made by Tycho – against which to test his ideas and theories. Each one’s work is meaningful because of the work of the other.
[>img[Kepler's Laws|resources/Kepler Laws 1.png][resources/Kepler Laws.png]]

Kepler came up with his [[famous 3 Laws of planetary motion|https://en.wikipedia.org/wiki/Kepler%27s_laws_of_planetary_motion]]:
1. The orbit of a planet is an ellipse with the Sun at one of the two foci.
2. A line segment joining a planet and the Sun sweeps out equal areas during equal intervals of time.
3. The square of the orbital period of a planet is proportional to the cube of the semi-major axis of its orbit.

It is quite feasible to take Tycho's Big Data (all his meticulous observations) and come up with Law #1 above (elliptical orbits) and Law #3 (proportional ratios). But the point the scientist made was that he saw no way for Big Data analysis to come up with Law #2 (equal areas): that law requires a "qualitative jump" which no amount of data collection and analysis can trigger or produce.
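
To illustrate the first point, here is a toy sketch (in Python; my own example, not from the broadcast) of the kind of pattern a data fit //can// recover: regressing log(T) on log(a) for the planets recovers the 3/2 exponent of Law #3.
{{{
import math

# Semi-major axis a (AU) and orbital period T (years) - well-known values.
planets = {'Mercury': (0.387, 0.241), 'Venus': (0.723, 0.615),
           'Earth': (1.000, 1.000), 'Mars': (1.524, 1.881),
           'Jupiter': (5.203, 11.86), 'Saturn': (9.537, 29.45)}

xs = [math.log(a) for a, T in planets.values()]
ys = [math.log(T) for a, T in planets.values()]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))

print(f"fitted exponent: {slope:.3f}  (Law #3 predicts 1.5)")
}}}
No analogous fit "suggests" the equal-areas law; you first need the //concept// of a swept area.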




[>img[Alan Kay's New Ideas Plane|resources/Alan Kay New Plane 1.png][resources/Alan Kay New Plane.png]]
[[Alan Kay pictures it|https://www.youtube.com/watch?v=gTAghAJcO1o]] as a move ("jump")^^1^^ taking us to a different (perpendicular) plane or dimension^^2^^, which quantities of data and data analysis alone cannot produce.

And on a personal note, related to teaching and learning, why is this difference important? I think that it is vital not only to teach how to "do big data" (collection, management, analysis), but, even more importantly, to teach (as much as possible) how to "do meaning generation".




----
^^1^^ See also [[AHA! Moments]]
^^2^^ David Darling, in his book //Zen Physics// talks about explanatory issues with quantum mechanics and the interpretation and understanding required, and describes this [[intuitive jump or transition|Zen Physics, meaning and understanding]] which takes us into a different plane.

This semester I volunteered to teach another course^^1^^ for [[Citizen Schools|http://www.citizenschools.org/california/]], this time in Campbell Middle School. I decided to reteach the [[Amazing Mazes course|The "Amazing Mazes" course]], with the goal of tweaking the curriculum and lesson plans (see [[course outline, lesson plans, and student activities/programs|http://employees.org/~hmark/courses/amazingmazes/index.html]]) so that they could be added to the [[Citizen Schools' national STEM curriculum repository|http://www.citizenschools.org/curriculum-category/science-technology/]]^^2^^.

The Citizen Schools national STEM curriculum has a couple of courses combining game design, programming and math, but I wanted to provide an alternative course which I believe strikes a better balance between these elements, and ties more strongly into the wider context of [[Computational Literacy and Thinking|A Framework for Computational Thinking, Computational Literacy]].

These are some of the new and revised elements of the course: 

!!!Computational Literacy and Computational Thinking
[>img[A Framework for Computational Thinking, Computational Literacy|resources/Computational Thinking process-small.png][A Framework for Computational Thinking, Computational Literacy]]
* Levels of abstraction
** Describing, designing, and manipulating mazes and walkers
* Modeling and representation
** Networking equivalence, programs and algorithms
* Algorithms and procedures
** Strategies and algorithms for solving/walking mazes
* Automation
** Commands, loops, conditions, programs

!!!Support for Teachers and Teaching Fellows
[>img[Amazing Mazes maze programming|resources/Wikispaces-AmazingMazes-teachers-small.png][resources/Wikispaces-AmazingMazes-teachers.png]]
In addition to the detailed lesson plans below, I have also created [[a secure wiki "by teachers, for teachers" on Wikispaces|http://computationalliteracy.wikispaces.com/]], to enable the development of a community around this course. This wiki for teachers includes a web-based copy of each lesson plan, a PDF version for printing, and a Microsoft Word copy for editing/modifying each lesson. It also includes a discussion board for the course as well as for each lesson, where teachers can post comments, tips & tricks, etc. There is a section for various resources relevant to the course, such as articles, video clips, worksheets with examples and problems, etc.
I have leveraged Wikispaces' ability to create and link different spaces together, to create a student space too, and link it to the teachers' space. This way, the teachers can go back and forth between the [["teacher view"|resources/Wikispaces-AmazingMazes-teachers.png]] * and the [["student view"|resources/Wikispaces-AmazingMazes-students.png]] **, and review the collection of Java applets (programs) for creating mazes and programming maze walkers, and the various other resources and activities that students will be using.


{{{*}}} Secure access to the [[Wikispaces teachers' view|http://computationalliteracy.wikispaces.com/-/Amazing%20Mazes/AmazingMazes%20teachers/]]
{{{**}}} Secure access to the [[Wikispaces students' view|http://computationalliteracy.wikispaces.com/-/Amazing%20Mazes/AmazingMazes%20students/]]


!!!The course outline
At a high level, the course outline consists of several parts:
* Introduction to mazes
** Types, characteristics, complexities
* Designing and building mazes
** Manual and programmatic creation of mazes
* Solving mazes
** Viewing perspectives, manual traversals, search/walking algorithms
* Maze walker programming
** From simple to complex algorithms (a small walker sketch follows this outline)
* Evaluation
** Maze walker program effectiveness (correctness), and efficiency (speed)
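
Here, for flavor, is a tiny sketch (in Python; my own illustration - the course materials themselves use Java applets) of one classic walker strategy the later lessons build toward: the right-hand wall follower.
{{{
# Maze: '#' walls, ' ' floors, 'S' start, 'E' exit.
MAZE = ["#######",
        "#S    #",
        "### # #",
        "#   #E#",
        "#######"]

DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]      # N, E, S, W as (row, col) steps

def walk(maze):
    grid = [list(row) for row in maze]
    r, c = next((i, row.index('S')) for i, row in enumerate(grid) if 'S' in row)
    d = 1                                      # start facing East
    for _ in range(10000):                     # safety bound
        if grid[r][c] == 'E':
            return (r, c)
        # Right-hand rule: prefer turning right, else straight, left, back.
        for turn in (1, 0, 3, 2):
            nd = (d + turn) % 4
            nr, nc = r + DIRS[nd][0], c + DIRS[nd][1]
            if grid[nr][nc] != '#':
                r, c, d = nr, nc, nd
                break
    return None

print("exit reached at", walk(MAZE))           # -> (3, 5)
}}}
Evaluating such a walker for correctness (does it always reach the exit?) and efficiency (how many steps?) is exactly the kind of work the Evaluation part asks of the students.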

!!!The lesson plans
[[Lesson 1|resources/Amazing Mazes - lesson 1.pdf]] - Introduction to mazes
[[Lesson 2|resources/Amazing Mazes - lesson 2.pdf]] - Introduction to maze building (manual)
[[Lesson 3|resources/Amazing Mazes - lesson 3.pdf]] - How to build a maze (programming)
[[Lesson 4|resources/Amazing Mazes - lesson 4.pdf]] - Introduction to maze walker programming
[[Lesson 5|resources/Amazing Mazes - lesson 5.pdf]] - Maze walker programming
[[Lesson 6|resources/Amazing Mazes - lesson 6.pdf]] - Maze walking algorithms
[[Lesson 7|resources/Amazing Mazes - lesson 7.pdf]] - Programming loops and conditions 1
[[Lesson 8|resources/Amazing Mazes - lesson 8.pdf]] - Programming loops and conditions 2
[[Lesson 9|resources/Amazing Mazes - lesson 9.pdf]] - Advanced walker programming
[[Lesson 10|resources/Amazing Mazes - lesson 10.pdf]] - Display and presentation


The detailed lesson plans refer to [[the students' programs and activities|courses/amazingmazes/index.html]].



----
^^1^^ The original [["Amazing Mazes" course|The "Amazing Mazes" course]] was taught at [[MIT|http://www.citizenschools.org/california/about/locations/]].
Other courses I taught were [["Acing Racing"|The "Acing Racing" course]] and [["Right on Target"|The "Right on Target" course]].

^^2^^ STEM = Science, Technology, Engineering, Mathematics
A German writer, pictorial artist, biologist, theoretical physicist, and polymath.
Publisher & Editor, [[Edge|http://www.edge.org]]
American novelist, essayist, literary critic and university professor.
([[A different, but related take on questions|David Whyte - questions]] is given by [[David Whyte|http://www.davidwhyte.com/]], whom [[John O'Donohue|https://www.johnodonohue.com/]] knew).

>All thought is about putting a face on experience… One of the most exciting and energetic forms of thought is the question. I always think that the question is like a lantern. It illuminates new landscapes and new areas as it moves. Therefore, the question always assumes that there are many different dimensions to a thought that you are either blind to or that are not available to you. So a question is really one of the forms in which wonder expresses itself. One of the reasons that we wonder is because we are limited, and that limitation is one of the great gateways to wonder.
: -- from the transcript of //Walking on the Pastures of Wonder: John O’Donohue in Conversation with John Quinn//

and as Oliver Wendell Holmes had said: [[Man's mind, once stretched by a new idea, never regains its original dimensions.]]
former Chief Scientist of Xerox Corporation and former director of the Xerox Palo Alto Research Center (PARC)

A psychological study was conducted to learn more about traits and characteristics of various professionals, giving a physicist, a business person, and a mathematician the following problem to solve:
A few cows are grazing in a meadow. Given a roll of chain fence, what is the shortest fence you need in order to contain all the cows inside it?

The physicist started by plotting the location of all cows on graph paper, connecting all outlying cows with straight lines, and declaring that since the closed shape is convex, it is the shortest path and therefore the shortest fence possible.

The business person became angry at the waste of materials, slapped the physicist, and declared he would show everyone what the shortest fence is. He fixed one end of the fence to a pole in the meadow and started rolling out the chain fence, while rounding up the cows into a tight group around the pole until they were all squeezed together, at which point he closed the fence and announced that //this// is the shortest fence possible.

Meanwhile, the mathematician got lost in thought, and when he had been "brought back to earth" by the business person, declared that the problem was not well defined. The business person threatened to slap the mathematician if he did not proceed, so the mathematician cut a very short segment of the chain fence, wrapped it around //himself// and announced: "I'm outside".
A French moralist and essayist. Born 7 May 1754 in Montignac, Perigord, died 4 May 1824 in Paris.
In his book ([[The Most Human Human - by Brian Christian]]), Christian writes:
>It's an odd thing, this: we often think of therapy as intimate, a place to be understood, profoundly understood, perhaps better than we ever have been. 
The controversial psychologist/therapist Richard Bandler says:
>“I think it's extremely useful for you to behave so that your clients come to have the illusion that you understand what they are saying verbally," he says. “I caution you against accepting the illusion for yourself.”
And (as a response to this?), Weizenbaum wrote a piece titled //Supplanted by Pure Technique//:
>I had thought it essential, as a prerequisite to the very possibility that one person might help another learn to cope with his emotional problems, that the helper himself participate in the other's experience of those problems and, in large part by way of his own empathic recognition of them, himself come to understand them. 
>There are undoubtedly many techniques to facilitate the therapist's imaginative projection into the patient's inner life. But that it was possible for even one practicing psychiatrist [Bandler] to advocate that this crucial component of the therapeutic process be entirely supplanted by pure technique—that I had not imagined! What must a psychiatrist who makes such a suggestion think he is doing while treating a patient, that he can view the simplest mechanical parody of a single interviewing technique as having captured anything of the essence of a human encounter?
and Christian continues:
>Pure technique, Weizenbaum calls it. This is, to my mind, the crucial distinction. “Man vs. machine" or "wetware vs. hardware” or “carbon vs. silicon”-type rhetoric obscures what I think is the crucial distinction, which is between method and method's opposite: which I would define as "judgment," "discovery," "figuring out," and "site-specificity."
>We are replacing people not with machines, nor with computers, so much as with method. And whether it's humans or computers carrying that method out feels secondary.
>[...] What we are fighting for, in the twenty-first century, is the continued existence of conclusions not already foregone -- the continued relevance of judgement and discovery and figuring out, and the ability to continue to exercise them.
I agree with Weizenbaum that in many areas we are replacing, and will continue to replace, many traditionally perceived human behaviors/traits with techniques/machines. This has been happening as part of human evolution, and the "trick" is to wisely choose the dividing line between traits and characteristics "essential" to human nature and self-definition, and things which are not "human defining" nor at the core of humanness.

But/And as Josue Harari and David Bell say:
>The term method itself is problematic because it suggests the notion of repetition and predictability—a method that anyone can apply. Method implies also mastery and closure, both of which are detrimental to invention.
Here, again, some things are worth replacing with a method/technique, but some are not. In my mind, to be human means to need and want both mastery/closure //and// innovation/discovery.

As Alfred North Whitehead wrote (in "An Introduction to Mathematics"):
>It is a profoundly erroneous truism, repeated by all the copybooks, and by eminent people when they are making speeches, that we should cultivate the habit of thinking what we are doing. The precise opposite is the case. Civilization advances by extending the number of operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle -- they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.

I think that redefining what is "characteristically human" is (and should be) an on-going process. We are alive and evolving, and so is (and should be) our self-image/definition. As Claude Shannon said:
>Chess is generally considered to require "thinking" for skillful play; a solution of this problem [i.e., having computer play skillfully] will force us either to admit the possibility of a mechanized thinking or to further restrict our concept of "thinking".
I'm not sure I'd use the word ''restrict'' here, but rather ''evolve''. In my mind this is analogous to what we/mathematicians had to do to the definition of ''number'' as we/they discovered irrational numbers, imaginary numbers, transcendental numbers, the [[infinitesimals|Infinitesimals are significant (and meaningful?)]], etc. As it turns out, this evolution (or expansion) has significantly enriched and expanded math and humans' experience of (and capabilities working with) it.




[[Joy Williams on Wikipedia|https://en.wikipedia.org/wiki/Joy_Williams_(American_writer)]]
Evangelist of "eXtreme Programming" (agile software development methodology)
It is true that you have to know how to read, and [[my father had a great personal story about it|Testing, Testing, 1, 2, 3 (or, you have to know how to read)]], but it turns out that there is more than one way to do it :)

I realized it quite a few years back, on a bus ride in Israel. After I had boarded the bus and settled down, I noticed that the person sitting a couple of seats in front of me, on the other side of the aisle, was reading a (Hebrew) newspaper. Nothing special about that, //except// he was holding the newspaper upside down!
He was a middle-aged Yemeni Jew, and since I was sitting behind him I could not tell if he was actually "reading" the paper.

A few thoughts went through my mind:
Maybe he dozed off holding the paper upside down, or maybe he was just in the process of turning a page, or about to tear off a piece. But as I kept observing him, he went through all the motions of a regular newspaper reader, turning pages, scanning articles, and so on; the funny thing was that he scanned the pages from bottom to top and from left to right (instead of the normal right to left in Hebrew). And he spent a long time on articles and pages, which led me to believe that he was not pulling a prank, but actually reading the paper!

Recently, I found a corroboration/explanation in the book //The Puzzle of Left-handedness// by Rik Smits:
>In April 1949 a remarkable photograph cropped up in various places around the world, showing a group of Yemeni Jews in a reception camp near the seaport of Aden. They are on their way to Israel and they're all crowding around a Torah.
>One has the book in front of him in such a way that he can read it in the normal Hebrew manner from right to left, in lines that run from top to bottom. 
>A second is sitting off to the left of the Torah and is therefore forced to read columns of text that run from top to bottom and from left to right. 
>In the foreground another man is reading the text upside down and the rest too, from various angles, are doing their best to look at the pages. 
>
>It's difficult for people in the rich world to imagine, but clearly these gentlemen are at ease with their unconventional reading positions. A scarcity of books [in Yemen], such that one copy had to be shared between three or four schoolchildren, had caused them to learn to read from various angles. 
>Why not, in fact? There's no law of nature that says, for example, that our letter A must stand with two feet on the ground; in fact, there was once a time when it didn't. Originally, in the Phoenician alphabet, it was upside down, forming a pictogram of an ox's head with horns. Later it came to lie on its side and only when the Greeks adopted it did the two 'horns' come to rest on the ground. 
>We may wonder how the men in the photograph wrote, assuming they had learned to do so. Did they orientate themselves in the same way as for reading, or did they write in the standard Hebrew manner, in horizontal lines from right to left? Did this affect how well they could write?

So, puzzle solved, and reading, one way (or four) or another, is better than not reading at all.

(and on a more serious (and practical :) note, see [[A Helpful Guide to Reading Better - Farnam Street]])
Paul Lockhart in his book [["A Mathematician's Lament"|file:///Users/hmark/Downloads/tiddlywikis/resources/LockhartsLament.pdf]] writes:

>...once we know //why// something is true, then in particular we know //that// it is true. A trillion instances tell us nothing; when it comes to infinity, the only way to know what is to know why. Proof is our way of capturing an infinite amount of information in a finite way. That's really what it means for something to have a pattern -- if we can capture it with //language//.
Knowledge is one. Its division into subjects is a concession to human weakness.
Life ~While-You-Wait.
Performance without rehearsal.
Body without alterations.
Head without premeditation.

I know nothing of the role I play.
I only know it’s mine. I can’t exchange it.

I have to guess on the spot
just what this play’s all about.

Ill-prepared for the privilege of living,
I can barely keep up with the pace that the action demands.
I improvise, although I loathe improvisation.
I trip at every step over my own ignorance.
I can’t conceal my hayseed manners.
My instincts are for happy histrionics.
Stage fright makes excuses for me, which humiliate me more.
Extenuating circumstances strike me as cruel.

Words and impulses you can’t take back,
stars you’ll never get counted,
your character like a raincoat you button on the run —
the pitiful results of all this unexpectedness.

If only I could just rehearse one Wednesday in advance,
or repeat a single Thursday that has passed!
But here comes Friday with a script I haven’t seen.
Is it fair, I ask
(my voice a little hoarse,
since I couldn’t even clear my throat offstage).

You’d be wrong to think that it’s just a slapdash quiz
taken in makeshift accommodations. Oh no.
I’m standing on the set and I see how strong it is.
The props are surprisingly precise.
The machine rotating the stage has been around even longer.
The farthest galaxies have been turned on.
Oh no, there’s no question, this must be the premiere.
And whatever I do
will become forever what I’ve done.


----
Zen Master Dogen expressed it succinctly:
“It's too late to be ready.”
[[Alan Kay|https://en.wikipedia.org/wiki/Alan_Kay]] (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]"), at OOPSLA 1997, gave an [[insightful talk|https://www.youtube.com/watch?v=oKg1hTOQXoY]] titled "The computer revolution hasn't happened yet" ([[transcript|http://www.vpri.org/pdf/m2007007a_revolution.pdf]]), which highlights a few key ideas he thinks we have misunderstood/messed-up on the path to highly effective, efficient, scalable, and pervasive computing.

The [[bullet-point highlights|http://www.cc.gatech.edu/fac/mark.guzdial/squeak/oopsla.html]] are captured by Mark Guzdial, with some comments by Kay.
In the talk Kay shows half a page (pg. 13) from the LISP 1.5 Programmer's Manual (1962), and calls it the [["Maxwell Equations of Computer Science"|http://en.wikipedia.org/wiki/Maxwell%27s_equations]].

[[A more detailed description and analysis|http://www.michaelnielsen.org/ddi/lisp-as-the-maxwells-equations-of-software/]]
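
To give a flavor of what fits on that half page, here is a minimal sketch (in Python, not Lisp 1.5; my own toy, far cruder than McCarthy's eval/apply) of a Lisp-style evaluator:
{{{
# Expressions are Python lists/strings/numbers standing in for S-expressions.
def evaluate(expr, env):
    if isinstance(expr, str):                  # variable reference
        return env[expr]
    if not isinstance(expr, list):             # number: self-evaluating
        return expr
    op = expr[0]
    if op == 'quote':                          # (quote x) -> x
        return expr[1]
    if op == 'if':                             # (if test then else)
        return evaluate(expr[2] if evaluate(expr[1], env) else expr[3], env)
    if op == 'lambda':                         # (lambda (params) body) -> closure
        params, body = expr[1], expr[2]
        return lambda *args: evaluate(body, dict(env, **dict(zip(params, args))))
    fn = evaluate(op, env)                     # application: eval operator and args
    return fn(*[evaluate(a, env) for a in expr[1:]])

import operator
env = {'+': operator.add, '*': operator.mul}
# ((lambda (x) (* x x)) 7)  ->  49
print(evaluate([['lambda', ['x'], ['*', 'x', 'x']], 7], env))
}}}
The point, as I read Kay, is that a language able to describe its own evaluator this compactly is remarkably powerful building material.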
writer of Science Fiction books (e.g. Protector)
[[Lauren Ipsum|http://www.laurenipsum.org/sample]] by Carlos Bueno looks like a great book about Computer Science (for ordinary mortals^^1^^ and children of all ages).

Just the introduction makes you want to read it, since it hits the nail on the head about what CS is all about. 
Hint: it echoes the (anonymous) saying that
>Computer science is no more about computers (or programming languages) than astronomy is about telescopes.
and the point that [[Alan Kay|https://en.wikipedia.org/wiki/Alan_Kay]] (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]") makes, that 
>Computers are to computing as instruments are to music.

>No computers will be found in this book. If the idea of a computer science book without computers upsets you, please close your eyes until you’ve finished reading the rest of this page.
>The truth is that computer science is not really about the computer. It is just a tool to help you see ideas more clearly. You can see the moon and stars without a telescope, smell the flowers without a fluoroscope, have fun without a funoscope, and be silly sans oscilloscope.
>You can also play with computer science without... you-know-what. Ideas are the real stuff of computer science. This book is about those ideas, and how to find them.





----
^^1^^ referring to the book //[[Computing for Ordinary Mortals|https://global.oup.com/academic/product/computing-for-ordinary-mortals-9780199775309?cc=us&lang=en&]]// by [[Robert St. Amant|http://www4.ncsu.edu/~stamant/]] (a CS professor at North Carolina State University), who also wrote [[a very touching article|http://www.nytimes.com/2014/05/11/fashion/Modern-Love-Promises-That-Can-Bend-Without-Breaking.html]] in the New York Times (//Promises That Can Bend Without Breaking//) about his life with his wife and her deteriorating health.
In a short but [[interactive/demo-filled TEDx Talk|http://ww2.kqed.org/mindshift/2013/10/22/learn-to-code-code-to-learn/]], [[Mitch Resnick|https://www.media.mit.edu/people/mres]] from the [[Scratch|https://scratch.mit.edu/]] development team at MIT made some excellent points about why it is important to learn to code.

Resnick also has [[an essay|https://www.edsurge.com/n/2013-05-08-learn-to-code-code-to-learn]] along the same lines.

Here are the key ideas I have captured from the video:
* Learning to code makes you ''fluent'' with new technologies. Resnick defines "fluent" as the ability of someone to express themselves and their ideas. He makes an analogy to language fluency, which means (among other things) that you can write well and express ideas, feelings, tell jokes, write essays, etc.

* Resnick makes a distinction between technology fluency and ''technology use''. He includes in the latter category things like browsing, chatting, texting, gaming. All of these are (good and useful) forms of __interacting__ with new technologies, but these don't necessarily make you __fluent__ with these technologies. Drawing on his previous analogy, he says that "it's as if [young users] can read but not write with these new technologies". 

* If we want to make users "fluent with new technologies" we need to teach them how to "write in/for those new technologies" which means, we have to teach them to code!

* When people learn to code, they are able to //code to learn//: as they learn to code, it enables them to learn __other__ and __new__ things. In other words, coding opens up opportunities to learn other things. This is similar to learning how to read, which then enables you to read in order to learn (new/other things).

* In addition to teaching you some obvious things (like how the computer works), coding also teaches you some important skills for life, like:
** how to take an idea and get it through the design process to a working product.
** how to experiment with new ideas and assess their value
** how to break up a complex problem into smaller, more manageable parts
** how to debug problems, which involves coming up with theories, setting up experiments, analyzing data
** how to deal with frustration, develop perseverance, stick with things for the long haul
** how to work collaboratively, how to ask/receive help and how to give/provide help

* Most people learning how to code will probably not become professional computer scientists or programmers. But this is similar to the fact that most people who learn to read and write will not become professional writers or authors.

* So, again, learning to be fluent with new technologies requires learning to code, which in turn enables you to code in order to learn, as well as express yourself and be creative.
A father visited the college where he had been a science student, and where his daughter was now a science student. They happened to encounter a science professor who some twenty years earlier had taught the father and who last year had taught the daughter. 

The father said, sincerely, that he had greatly enjoyed the professor's course and that his daughter had recently raved to him about the course, but he confessed that he was greatly disappointed in one respect. "The questions on the examination you gave to my daughter's class were exactly the same as the ones you gave to my class twenty years ago.” 

“Ah, yes," the professor explained, “the questions are the same, but we have changed the answers."

And another (possibly true, real-life) story:
Every year, a distinguished professor at a medical school^^1^^ begins his lectures by telling his students, “Half of what we are teaching you will, in twenty years, be disproved. The trouble is, we don't know which half." 


----
^^1^^ - but even in the humanities and in the social sciences it is evident that things change, that throughout history, new knowledge makes us [[look differently|After The Fact - In the history of truth, a new chapter begins]] at the works and issues of society.
In an [[article about Learning and Teaching Programming|http://www.tandfonline.com/doi/pdf/10.1076/csed.13.2.137.14200]],^^1^^ Anthony Robins, Janet Rountree, and Nathan Rountree make some good observations about students and teachers of Computer Science, and provide some good insights and suggestions for teaching and learning.

!!!! On Experts Versus Novices
* the general consensus (Winslow, 1996) is that it takes about 10 years of study and experience to turn a novice into an expert.
* they (Dreyfus and Dreyfus, 1986) distinguish between several levels of performance:
** novice, advanced beginner, competent, proficient, and expert.
* they note that experts:
** have efficiently organised and specialised knowledge schemas; 
** organise their knowledge according to functional characteristics such as the nature of the underlying algorithm (rather than superficial details such as language syntax);
** use both general problem solving strategies (such as divide-and-conquer) and specialised strategies; 
** use specialised schemas and a top-down, breadth-first approach to efficiently decompose and understand programs; 
** and are flexible in their approach to program comprehension and their willingness to abandon questionable hypotheses. 
** Expert knowledge schemas also have associated testing and debugging strategies.
* Rist (1995) summarizes: Expertise in programming should reduce variability in three ways:
** by defining the best way to approach the design task, 
** by supplying a standard set of schemas to answer a question, 
** and by constraining the choices about execution structure to the ‘best’ solutions.

!!!! On Knowledge Versus Strategies
* Davies (1993) distinguishes between programming knowledge (of a declarative nature, e.g., being able to state how a ‘‘for’’ loop works) and programming strategies (the way knowledge is used and applied, e.g., using a ‘‘for’’ loop appropriately in a program).
** This is a very important distinction which should guide a teacher and provide focus to what is taught. It is important to teach the what (knowledge) but also put a lot of emphasis on the how (strategy).
*** for example, a teacher should teach about one-dimensional arrays/lists/vectors as well as loops - this is knowledge. But the teacher should also cover how to use these to calculate the average of the array/list/vector using loops, and, in the case of functional programming, show how this can be done (strategy/schema) with functional iterators (e.g. maps) - see the small sketch after this list.
** typical courses and books on programming focus on knowledge and less on strategy, which misses a big part of how to become an expert!
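
Here is a minimal sketch of that example (in Python; my choice of language - the article itself is language-neutral):
{{{
def average_loop(values):
    # Knowledge: knowing how a "for" loop and a list work.
    # Strategy: applying them - accumulate a total, then divide.
    total = 0
    for v in values:
        total += v
    return total / len(values)

def average_functional(values):
    # The same strategy expressed with functional iterators instead of a loop.
    return sum(map(float, values)) / len(values)

print(average_loop([3, 5, 7, 9]))        # 6.0
print(average_functional([3, 5, 7, 9]))  # 6.0
}}}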

!!!! On Comprehension Versus Generation
* Good (advanced beginner, competent, proficient, and expert) programmers should be effective in both analyzing programs and synthesizing them. That's why I teach and test my students on both!
** The two types of skills/performance are: ''program comprehension'' (where, given the text of a program, subjects have to demonstrate an understanding of how it works), and ''program generation'' (where subjects have to create part of or a whole program to perform some task/solve some problem) - a tiny example of each follows this list.
* Brooks (1977, 1983) proposes a model for program comprehension, where the original problem domain (e.g., a ‘‘cargo-routing’’ problem) is transformed and represented as values and structures in intermediate domains, and finally instantiated in the data structures and algorithms of a program in the programming domain.
** Brooks describes program comprehension as a ‘‘top-down’’ and ‘‘hypothesis-driven’’ process. Brooks suggested that rather than studying programs line by line, subjects (assumed to be ‘‘expert’’ programmers) form hypotheses based on high-level domain and programming knowledge. These hypotheses are verified or falsified by searching the program for markers/‘‘beacons’’ which indicate the presence of specific structures or functions.
* Rist (1995) presents a comprehensive model of program generation (see also Rist, 1986a, 1986b, 1989, 1990).
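
A tiny illustration (in Python; my own example, not from the article) of the two task types:
{{{
# Comprehension task: given this code, explain what it does.
# (It returns the largest value in the list.)
def mystery(xs):
    m = xs[0]
    for x in xs[1:]:
        if x > m:
            m = x
    return m

# Generation task is the inverse: given the spec "return the smallest
# value in the list", write the code yourself.
def smallest(xs):
    m = xs[0]
    for x in xs[1:]:
        if x < m:
            m = x
    return m

print(mystery([4, 9, 2]), smallest([4, 9, 2]))   # -> 9 2
}}}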

!!!! On Teaching Procedural Versus ~Object-Oriented (OO) Programming
* There is no conclusive research evidence (Detienne, 1997) that teaching one paradigm/style of programming is "better" for comprehension and/or construction of programs.
** It turns out that experts use both OO and Procedural when engaged in programming
* Similarly, Rist (1995) describes the relationship between plans (a fundamental unit of program design, as discussed above) and objects as ‘‘orthogonal’’. Plans and objects are orthogonal, because one plan can use many objects and one object can take part in many plans. (Rist, 1995, pp. 555–556)

!!!! On Course Goals, Progress, and Teaching Suggestions
* Linn and Dalbey (1989) propose a ‘‘chain of cognitive accomplishments’’ that should arise from ideal computer programming instruction. This chain of accomplishments forms a good summary of what could be meant by deep learning in introductory programming. This chain starts with 
** the features of the language being taught. 
** The second link is design skills, including templates (schemas/plans), and the procedural skills of planning, testing and reformulating code. 
** The third link is problem-solving skills, knowledge and strategies (including the use of the procedural skills) abstracted from the specific language taught that can be applied to new languages and situations. 
* A major recommendation to emerge from the literature is that __instruction should focus not only on the learning of new language features, but also on the combination and use of those features, especially the underlying issue of basic program design__.
** From our experience [. . .] we conclude that students are not given sufficient instruction in how to ‘‘put the pieces together.’’ Focusing explicitly on specific strategies for carrying out the coordination and integration of the goals and plans that underlie program code may help to reverse this trend. (Spohrer & Soloway, 1989, pp. 412–413)
* A further important suggestion is to __address the kinds of mental models which underlie programming__:
** Models are crucial to building understanding. __Models of control, data structures and data representation, program design and problem domain are all important__. If the instructor omits them, the students will make up their own models of dubious quality. (Winslow, 1996, p. 21)
* Burton suggests that teachers keep in mind the distinctions between "what actually gets taught; what we think is getting taught; what we feel we’d like to teach; what would actually make a difference" (Burton, 1998, p. 54).

!!!! On Alternative Methods and Curricula
* __''Teaching Patterns''__ - An important recommendation noted above is that instruction should address the underlying issue of basic program design, in particular the use of the schemas/plans which are the central feature of programming knowledge representation.
** As Spohrer & Soloway (1989, p. 413) suggest: We are suggesting that students be given a whole new vocabulary for learning how to construct programs. 
** For an analysis and overview of the use of pattern languages for teaching see Fincher (1999a), and for two recent descriptions of courses based on patterns see Reed (1998) and Proulx (2000). 
* __''Problem Based Learning''__ - Deek et al. (1998) describe a first year computer science course based on a problem solving model, where language features are introduced only in the context of the students’ solutions to specific problems.
** An extensive discussion of the practical issues involved in problem based learning, a description of problem based learning courses, and a 3-year longitudinal follow-up of students are described in Kay et al. (2000).
** However, as noted (Section 3.3) by, for example, Winslow (1996) and Rist (1995), problem solving is necessary, but not sufficient, for programming. The main difficulty faced by novices is expressing problem solutions as programs. Thus the coverage of language features and how to use and combine them must remain an important focus.

!!!! Summary and Implications
* From [the authors'] point of view as teachers there is a distinction which is much more important than the one between novices and experts which has received so much attention in the literature. This is the distinction between effective and ineffective novices. 
** Effective novices are those that learn, without excessive effort or assistance, to program. Ineffective novices are those that do not learn, or do so only after inordinate effort and personal attention.
** It may be productive, in an introductory programming course, to explicitly focus on trying to create and foster effective novices. In other words, rather than focusing exclusively on the difficult end product of programming knowledge, it may be useful to focus at least in part on the enabling step of functioning as an effective novice.
* What underlying properties make a novice effective? How can we best turn ineffective novices into effective ones? A deeper understanding of both kinds of novices is required. 
** The range of potentially relevant factors includes motivation, confidence or emotional responses, and aspects of general or specific knowledge, strategies, or mental models.
* The authors suggest that the most significant differences between effective and ineffective novices relate to strategies rather than knowledge (since language-related knowledge is widely available in books and traditional courses).



----
^^1^^ - [[local copy|resources/Robins - Learning and Teaching Programming A Review and Discussion.pdf]] of the article
!!!!References
- Brooks, R.E. (1977). Towards a theory of the cognitive processes in computer programming. International Journal of Man-Machine Studies, 9, 737–751.
- Brooks, R.E. (1983). Towards a theory of the comprehension of computer programs. International Journal of Man-Machine Studies, 18, 543–554.
- Davies, S.P. (1993). Models and theories of programming strategy. International Journal of Man-Machine Studies, 39, 237–267.
- Deek, F.P., Kimmel, H., & McHugh, J.A. (1998). Pedagogical changes in the delivery of the first-course in computer science: Problem solving, then programming. Journal of Engineering Education, 87, 313–320.
- Detienne, F. (1990). Expert programming knowledge: A schema based approach. In J.M. Hoc, T.R.G. Green, R. Samurçay, & D.J. Gillmore (Eds.), Psychology of programming (pp. 205–222). London: Academic Press.
- Fincher, S. (1999a). Analysis of design: An exploration of patterns and pattern languages for pedagogy. Journal of Computers in Mathematics and Science Teaching: Special Issue CS-ED Research, 18, 331–348.
- Kay, J., Barg, M., Fekete, A., Greening, T., Hollands, O., Kingston, J., & Crawford, K. (2000). Problem-based learning for foundation computer science courses. Computer Science Education, 10, 109–128.
- Proulx, V.K. (2000). Programming patterns and design patterns in the introductory computer science course. Proceedings of the thirty-first SIGCSE technical symposium on computer science education (pp. 80–84). New York: ACM Press.
- Reed, D. (1998). Incorporating problem-solving patterns in CS1. SIGCSE Bulletin, 30, 6–9.
- Rist, R.S. (1995). Program structure and design. Cognitive Science, 19, 507–562.

I came across [[an interesting book review|http://www.brainpickings.org/index.php/2013/05/13/dont-go-back-to-school-kio-stark/]] by [[Maria Popova|http://www.brainpickings.org/index.php/author/mpopova/]] about the book //Don't Go Back to School: A Handbook for Learning Anything// by [[Kio Stark|http://www.kiostark.com/]]. More about this in a minute.

When searching for [[Stark's website|http://www.kiostark.com/]], I came across [[an interview she had|http://techcrunch.com/2013/05/13/obamas-cto-gives-advice-on-how-learning-works-in-kio-starks-new-book-dont-go-back-to-school/]] with [[Harper Reed|https://harperreed.com/]] (Obama's CTO in the 2012 campaign), in which he talks about how he learned/learns:

>I love computers and I’ve always been around computers. I can’t really talk about education without talking about computers. I went to high school and I actually really loved it.
>...
>I did learn some coding concepts in college, but more importantly I figured out that I’m an experiential learner. I need to put my hands on things and really see them, and really chew on them. It was better to do it in a real context, where it mattered if I did it right.

and it struck a chord with me. I also learn best when encountering a new/cool thing/technology/capability and coming up with "something real" (or a "good, made-up project") to work on, using that thing.

>All the programming languages I know (starting with Basic on a Sinclair, C, Java, Javascript/HTML/XML, Perl, Python, to name a few), except for Pascal - which I learned at university, way back when - I learned "on the job" and/or as part of needing/wanting to do an "interesting project".
>Ditto, all the "educational software" I use (NetLogo, GeoGebra, MathSage/Mathematica, Scratch/Snap, EJS, to name a few) - see [[a more comprehensive list|http://www.employees.org/%7Ehmark/wiki.html#%5B%5BEducational%20Technologies%5D%5D]], and [[some implementation examples|http://www.employees.org/%7Ehmark/math/index.html]]

Anyway, back to Stark (and not going back to school ;-), she is saying:

>My research based on interviews with 100 independent learners revealed four facts shared by almost every successful form of learning outside of school:
>        It isn’t done alone.
>        For many professions, credentials aren’t necessary, and the processes for getting credentials are changing.
>        The most effective, satisfying learning is learning that which is more likely to happen outside of school.
>        People who are happiest with their learning process and most effective at learning new things — in any educational environment — are people who are learning for the right reasons and who reflect on their own way of learning to figure out which processes and methods work best for them.

The first point is definitely true, and is emphasized, or at least practiced, more nowadays in schools (as I've seen with my children) and at university (as I've seen myself, when I enrolled in [[my Masters studies|http://ldtprojects.stanford.edu/%7Ehmark/index_stanford.html]]). When I studied in school and for my Bachelor's degree, it was less prevalent/encouraged/practiced.

The fourth point is also definitely true, as it speaks to motivation, persistence, and progress/achievement (in the real sense, not in the form of externals like grades, praise, class progression, etc.!). Stark hit it spot on regarding the importance of "know thyself" when it comes to lifelong learning on your own:

>Learning your own way means finding the methods that work best for you and creating conditions that support sustained motivation. Perseverance, pleasure, and the ability to retain what you learn are among the wonderful byproducts of getting to learn using methods that suit you best and in contexts that keep you going. Figuring out your personal approach to each of these takes trial and error. 

Another observation Stark makes, about MOOCs (massive open online courses), is also true from my personal experience:

>Simply put, ~MOOCs are designed to put teaching online, and that is their mistake. Instead they should start putting learning online. The innovation of ~MOOCs is to detach the act of teaching from physical classrooms and tuition-based enrollment. But what they should be working toward is much more radical — detaching learning from the linear processes of school.

A [[MOOC on How to Learn Math|http://online.stanford.edu/course/how-to-learn-math]] that I took online from Stanford University is really a linear combination of 5-15 minute mini-lectures (almost exclusively "talking head"), interspersed with 1-2 open-ended questions, some of which are peer-graded.

It often feels like the questions are "guardrails" designed to make sure the students are paying attention, and to keep them on track. But this is a topic for another post.

[[Chaim Gingold|http://levitylab.com/cog/]] gave a talk at Stanford ([[video, 45 minutes|https://youtu.be/rTGG6Alznpg?list=PL2rro4X-RbDHcaCl6x-WkjdFFOP6s_xQN]]), where he demo'd Earth Primer and shared his love of simulation as a tool for experiential engagement and learning.

In the talk he shared 10 design principles that guided him through the design of multiple, very successful games/products, like SimCity, [[Earth Primer|https://vimeo.com/116182914]], and Spore.

!!!!10 Design Principles
1. Combine showing, telling, and doing - since this promotes effective learning. It's all about experiential engagement, and all modes of engagement should be leveraged. ("I hear and I forget. I see and I believe. I do and I understand" - attributed to Confucius)

2. Share your enthusiasm and love for a topic. When a mathematician, geologist, historian, or any other expert engages in math, geology, history, and so on, they see things in ways that non-experts don't. Experiential learning systems enable the sharing of love, excitement, and ways of seeing special things in a domain of knowledge. This is in contrast with "sugar coating" and "dressing up" a topic/domain which is believed to be boring and dry, in an effort to make it "palatable". It's really about finding the core elements which make a topic or domain interesting and exciting, and presenting them in engaging ways that share that essence and beauty.

3. Think and design in terms of end-to-end experience. Design the learning environment or product not just for usability, but also for a sense of delight, wonder, surprise, curiosity. Think also in terms of pacing and the speed and means for moving users/learners through the experience and arousing and maintaining their engagement (delight, wonder, surprise, curiosity). This is a deep craft. And it's not "gamification" which takes the surface (superficial) elements of games and game design principles and applies them instrumentally in other, non-game contexts. This is looking at the deep structures and principles and using them to promote learning in a deep experiential way.

4. Don't lecture. Text is OK, but its order, quantity, and timing are very important. Feedback is important.

5. Account for multiple and different modes of engagement. Cater to different modes and moods of experience (poke-and-play, read-and-study-first, "cautious vs. playful", etc.). Design for collaborative and individual learning. Invite learners to engage through different "doors".

6. Use guided play. Balance between direction and openness, control and free exploration and play. Sandbox, and "calls to action" and guidance.

7. Harness the computational power of the computer. Use simulation to bring systems and phenomena to life! 

8. Create a "phantom sensation of tactility" (that the user can sort-of "touch and feel" the experience), combining playfulness and control.

9. Segment the experience and topic into smaller pieces. Use templates for reuse of simulations, tools, frameworks. Combine one-off tools and techniques with templates and reusable content and techniques. You don't have to use just one big simulation/content mode.

10. Design for a strong sense of playfulness and deep experience.

!!!!Play Design vs. Game Design
Characteristics of games (from Jesper Juul, 2003):
- Fixed rules
- Variable outcomes
- Valorization of outcome
- Player effort
- Player attached to outcome
- Negotiable consequences



In his erudite style and book [[The Search for the Perfect Language|https://is.muni.cz/el/1421/podzim2017/LJMedB25/um/seminar_4/Eco_The_Search_for_the_Perfect_Language.pdf]], Umberto Eco describes work by the German mathematician [[Gottfried Wilhelm Leibniz|http://www.iep.utm.edu/leib-met/]] done around the year 1703, exploring universal languages. 

Leibniz had exchanged ideas with the French Jesuit missionary [[Joachim Bouvet|https://en.wikipedia.org/wiki/Joachim_Bouvet]] (this is another case of "stumbling upon something big by mistake"^^1^^ :), an exchange which [[led him to define a binary number system|https://scholarworks.umt.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1315&context=tme]] (more than 100 years before [[George Boole|https://en.wikipedia.org/wiki/George_Boole]]!), by which he thought that "thoughtless generation of truths" could be achieved (sounds like programming a computer? :)

Being a [[Sinophile|https://en.wikipedia.org/wiki/Gottfried_Wilhelm_Leibniz#Sinophile]], Leibniz received an [[I Ching artifact|https://en.wikipedia.org/wiki/Gottfried_Wilhelm_Leibniz#/media/File:Diagram_of_I_Ching_hexagrams_owned_by_Gottfried_Wilhelm_Leibniz,_1701.jpg]] from Bouvet and recognized in its hexagrams the structure of his binary encoding.

Here is Eco's description:

>Figure 14.1 shows the central structure of the diagrams seen by Leibniz. The sequence commences, in the upper left-hand corner, with six broken lines, then proceeds by gradually substituting unbroken for broken lines.
{{{
-- --    -----    -- --    -----    -- --    -----    -- --    -----
-- --    -- --    -----    -----    -- --    -- --    -----    -----
-- --    -- --    -- --    -- --    -----    -----    -----    -----
-- --    -- --    -- --    -- --    -- --    -- --    -- --    -- --
-- --    -- --    -- --    -- --    -- --    -- --    -- --    -- --
-- --    -- --    -- --    -- --    -- --    -- --    -- --    -- --


-- --    -----    -- --    -----    -- --    -----    -- --    -----
-- --    -- --    -----    -----    -- --    -- --    -----    -----
-- --    -- --    -- --    -- --    -----    -----    -----    -----
-----    -----    -----    -----    -----    -----    -----    -----
-- --    -- --    -- --    -- --    -- --    -- --    -- --    -- --
-- --    -- --    -- --    -- --    -- --    -- --    -- --    -- --

Figure 14.1
}}}

>Leibniz read this sequence as a perfect representation of the progression of binary numbers (000, 001, 010, 011, 100, 101, 110, 111 ... ). See figure 14.2.
{{{
-- --    -----    -- --    -----    -- --    -----    -- --    -----
-- --    -- --    -----    -----    -- --    -- --    -----    -----
-- --    -- --    -- --    -- --    -----    -----    -----    -----

  0        1        0        1        0        1        0        1
  0        0        1        1        0        0        1        1
  0        0        0        0        1        1        1        1

  0        1       10       11      100      101      110      111

  0        1        2        3        4        5        6        7

Figure 14.2
}}}
>Once again, the inclination of Leibniz was to void the Chinese symbols of whatever meaning was assigned to them by previous interpretations, in order to consider their form and their combinational possibilities. Thus once more we find Leibniz on the track of a system of blind thought in which it was syntactic form alone that yielded truths. Those binary digits 1 and 0 are totally blind symbols which (through a syntactical manipulation) permit discoveries even before the strings into which they are formed are assigned meanings.
>
>In this way, Leibniz's thought not only anticipates by a century and a half Boole's mathematical logic, but also anticipates the true and native tongue spoken by a computer - not, that is, the language we speak to it when, working within its various programs, we type expressions out on the keyboard and read responses on the screen, but the machine language programmed into it. This is the language in which the computer can truly 'think' without 'knowing' what its own thoughts mean, receiving instructions and re-elaborating them in purely binary terms.
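Just to make Leibniz's reading of the hexagrams concrete, here is a tiny Python check of Figure 14.2 (my own illustration; broken line = 0, unbroken line = 1, as in the figure):
{{{
# Trigrams are written top-to-bottom, as drawn in Figure 14.2;
# the bottom line is the most significant bit.
trigrams = [
    (0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0),
    (0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 1),
]

def trigram_value(lines):
    top, middle, bottom = lines
    return bottom * 4 + middle * 2 + top

print([trigram_value(t) for t in trigrams])  # [0, 1, 2, 3, 4, 5, 6, 7]
}}}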

Eco emphasizes an important aspect of Leibniz's invention:
>[It] allowed Leibniz to invent a language of a radically different type, which - though remaining a priori - was no longer a practical, social instrument but rather a tool for logical calculation. In this sense, Leibniz's language, and the contemporary language of symbolic logic that descended from it, are scientific languages; yet, like all scientific languages, they are incapable of expressing the entire universe, expressing rather a set of //truths of reason//. Such languages do not qualify as a universal language because they fail to express those truths that all natural languages express - //truths of fact//. 

----
^^1^^ From Eco's book:
>When Leibniz described to Bouvet his own research in binary arithmetic, that is, his calculus by 1 and 0 (of which he also praised the metaphysical ability to represent even the relation between God and nothingness), Bouvet perceived that this arithmetic might admirably explain the structure of the Chinese hexagrams as well. He sent Leibniz in 1701 (though Leibniz only received the communication in 1703) a letter to which he added a wood-cut showing the disposition of the hexagrams. In fact, the disposition of the hexagrams in the wood-cut differs from that of the ''I Ching''; nevertheless, this error allowed Leibniz to perceive a signifying sequence which he later illustrated in his Explication de l'arithmetique binaire (1703). 
From a lecture series by Martin Heidegger called "What is Called Thinking":
>Teaching is more difficult than learning. We know that; but we rarely think about it. And why is teaching more difficult than learning? Not because the teacher must have a larger store of information, and have it always ready. 
>Teaching is more difficult than learning because what teaching calls for is this: to let learn. The real teacher, in fact, lets nothing else be learned than — learning.
> […] The teacher is far ahead of his apprentices in this alone, that he has still far more to learn than they—he has to learn to let them learn. The teacher must be capable of being more teachable than the apprentices. His conduct, therefore, often produces the impression that we properly learn nothing from him, if by “learning” we now suddenly understand merely the procurement of useful information.
>The teacher is far less assured of his ground than those who learn are of theirs. If the relation between the teacher and the taught is genuine, therefore, there is never a place in it for the authority of the know-it-all or the authoritative sway of the official. It is still an exalted matter, then, to become a teacher—which is something else entirely than becoming a famous professor.

From [[an article|https://researchspace.auckland.ac.nz/bitstream/handle/2292/22956/Teaching%20as%20letting%20learn%20What%20Martin%20Heidegger%20can%20tell%20us%20about%20one-to-ones.pdf?sequence=8]] by Sean Sturm:
[img[teacher-students areas of overlap|./resources/Heidegger teaching 1.png][./resources/Heidegger teaching.png]]
From the book //Sailing Home// by the Zen teacher Norman Fischer:

>Life as an arduous journey is an ancient metaphor. The Greek word //metapherein//, from which our English //metaphor// comes, is made up of the words //meta//, meaning "over, or across", implying a change of state or location, and //pherein//, meaning "to bear, or carry". In modern as in ancient Greek, the word metapherein commonly means "to transport, or transfer". Though we think of a metaphor as a mere figure of speech, something poetic and decorative, in fact metaphors abound in our lives, underlying many concepts that we take for granted. And metaphors condition, far more than we realize, the way we think about ourselves and our world, and therefore the way we are and act. So to consider a metaphor seriously, bringing it to consciousness, turning it over in our minds and hearts, is to allow ourselves to be carried across toward some subtle yet profound inner change.
>
>Metaphors can engage our imagination and spirit, transporting us beyond the literality of what seems to be in front of us towards what's deeper, more lively, and dynamic. Objects in the world can be defined, measured and manipulated according to our specifications. But the heart can't be. Its requirements are more subtle, more vague. Metaphors are inexact and suggestive; they take an image or a concept and map it onto another image or concept that may seem quite disparate, as if to say "this is like that; understand this and you will understand that". In this way metaphor can help us to feel our way into the unspeakable, unchartable aspects of our lives. Seeing your life as a "spiritual odyssey" is a metaphorical truth. Contemplating your life as a spiritual odyssey can help you to enter hidden parts of your life.


----
[[Douglas Hofstadter writes a lot about analogy|Douglas Hofstadter - The Man Who Would Teach Machines to Think]] being our (human) "engine of understanding".
For example, from the [[excellent article about Hofstadter in the Atlantic Magazine|http://www.theatlantic.com/magazine/archive/2013/11/the-man-who-would-teach-machines-to-think/309529/]]:
>“At every moment,” Hofstadter writes in Surfaces and Essences, his latest book (written with Emmanuel Sander), “we are simultaneously faced with an indefinite number of overlapping and intermingling situations.” It is our job, as organisms that want to live, to make sense of that chaos. We do it by having the right concepts come to mind. This happens automatically, all the time. Analogy is Hofstadter’s go-to word. The thesis of his new book, which features a mélange of A’s on its cover, is that analogy is “the fuel and fire of thinking,” the bread and butter of our daily mental lives.
>“Look at your conversations,” he says. “You’ll see over and over again, to your surprise, that this is the process of analogy-making.” Someone says something, which reminds you of something else; you say something, which reminds the other person of something else—that’s a conversation. It couldn’t be more straightforward. But at each step, Hofstadter argues, there’s an analogy, a mental leap so stunningly complex that it’s a computational miracle: somehow your brain is able to strip any remark of the irrelevant surface details and extract its gist, its “skeletal essence,” and retrieve, from your own repertoire of ideas and experiences, the story or remark that best relates.
Lisp (standing for ~LISt Processing) is a computer programming language created by [[John McCarthy | http://en.wikipedia.org/wiki/John_McCarthy_%28computer_scientist%29]] in 1958, which quickly became one of the favored programming languages for artificial intelligence (AI) research.
Since I am interested in AI, I have always been drawn by Lisp, even though I'm very far from being a fluent Lisp programmer.

But to quote (I ''love'' [[quotes|Quotes]]) [[Douglas Hofstadter]] from his wonderful //Metamagical Themas// book (written in 1985, pg. 396): Lisp is crisp. Or as Marilyn Monroe said in [the (1955) movie] //The ~Seven-Year Itch//: "I think it's jus-telegant!"

By the way, me quoting Hofstadter quoting Marilyn Monroe is something that can turn into a series of regresses (quotes within quotes within quotes) - something Lisp can handle very elegantly, as part of its ability to deal with [[recursion]].
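Here is a toy sketch of that idea - in Python rather than Lisp, whose quoted lists would express it even more directly (the names and structure below are made up for illustration):
{{{
# A quotation is either a bare remark, or a (speaker, inner-quotation) pair.
quote = ("me", ("Hofstadter", ("Marilyn Monroe", "I think it's jus-telegant!")))

def quoting_depth(q):
    # A bare string is the innermost remark; each enclosing pair adds a level.
    if isinstance(q, str):
        return 0
    speaker, inner = q  # speaker kept only for readability
    return 1 + quoting_depth(inner)

print(quoting_depth(quote))  # 3
}}}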

[[Someone|https://hackernoon.com/the-programming-language-im-looking-for-948d93f7a396]] comparing different programming languages allegorized:
>“What sort of features do you have? What kind of programming language are you?”, asked the programmer.
>
>“Honey”, said Lisp, with a tinge of mischievous world-weariness to her voice, “I can be whatever you want.”
In a review of [[David Denby's book 'Lit Up'|https://www.kirkusreviews.com/book-reviews/david-denby/lit-up/]], William Giraldi (in a chapter titled "Clearer Air" out of the book "American Audacity") writes:
>Literature, I'm not sorry to say, isn’t a democracy. Literature is a tyranny -- a tyranny of the talented.
>Here's the thesis driving Lit Up:
>The liberal arts in general, and especially reading seriously, offer an opening to a wider life, the powers of active citizenship (including the willingness to vote); reading strengthens perception, judgment, and character; it creates understanding of other people and oneself, maybe kindliness and wit, and certainly the ability to endure solitude, both in the common sense of empty-room loneliness and the cosmic sense of empty-universe loneliness. Reading fiction carries you further into imagination and invention than you would be capable of on your own, takes you into other people's lives, and often, by reflection, deeper into your own.
The concept of logarithms is very powerful, because it reduces the level of complexity of calculations by one dimension.
([[Fourier Series|Fourier Series]] is another example of a powerful math concept)

Instead of having to multiply or divide numbers (higher complexity/difficulty), it enables one to add or subtract (lower complexity/difficulty). 
And similarly, instead of raising to a power or taking a root, one can multiply or divide, respectively.
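In plain Python the idea looks like this (a minimal sketch of my own, separate from the Sage animation described next):
{{{
import math

a, b = 37.0, 52.0

# Multiplication via addition of logarithms: a*b = exp(log a + log b)
product = math.exp(math.log(a) + math.log(b))

# Division via subtraction of logarithms: a/b = exp(log a - log b)
quotient = math.exp(math.log(a) - math.log(b))

# A power via multiplication of the logarithm: a**3 = exp(3 * log a)
power = math.exp(3 * math.log(a))

print(product, a * b)    # both ~1924.0
print(quotient, a / b)   # both ~0.71
print(power, a ** 3)     # both ~50653.0
}}}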

Here's an example I had created, demonstrating multiplying 2 numbers using addition of their logarithms. 
It's implemented as a [[Sage|http://www.sagemath.org/]] ''interactive animation'', vividly showing that addition of logarithms results in multiplication of the numbers (and similarly, that subtraction of logarithms is equivalent to division of the original numbers).
[[The full User Interface for experimenting with multiplication and division using logarithms|./resources/Logarithm operations_Sage.html]]

Or, a static image:
!!The initial state shows two numbers to be multiplied, as arrows along both a linear and logarithmic ruler/scale:

[img[Sage adding numbers using logarithms|./resources/sage_logarithms_2.png][./resources/sage_logarithms_2.png]]
I came across the following story Richard Feynman told about his college days (at Cornell). I will modify it somewhat, to skip pieces which, I think, reflect a condescending attitude toward what Feynman calls "the female mind" - possibly his personal view (some claim he was somewhat sexist^^1^^), or a view prevalent in (American?) society at the time (the '60s). I feel justified in doing this because the underlying implications about education and learning are the really important parts of the story, rather than Feynman's non-politically-correct asides (in this anecdote) and his (more or less, depending on your tolerance level) occasionally offensive behavior in other episodes^^2^^. And, as a relevant [[blog at Scientific American|https://blogs.scientificamerican.com/the-curious-wavefunction/richard-feynman-sexism-and-changing-perceptions-of-a-scientific-icon/]] mentions, Feynman was more complex than that.

Feynman was good at looking for, creating, and analyzing patterns, which he (and many others) thought were the essence of mathematics. As a student, he used to sit in the Cornell cafeteria, eat, and try to overhear other students' conversations ("to see if there was one intelligent word coming out" - his words).
>I listened to a conversation between two girls, and one was explaining that if you want to make a straight line, you see, you go over a certain number to the right, for each row you go up, that is, if you go over each time the same amount when you go up a row, you make a straight line. A deep principle of analytic geometry! It went on. I was rather amazed.
>She went on and said, "suppose you have another line coming in from the other side and  you want to figure out where they are going to intersect." Suppose on one line you go over two to the right for every one you go up, and the other line goes over three to the right for every one that it goes up, and they start twenty steps apart, etc. -- I was flabbergasted. She figured out where the intersection was!
>It turned out that one girl was explaining to the other how to knit argyle socks.
>I, therefore, did learn a lesson: [we are all] capable of understanding analytic geometry. [...] The difficulty may just be that we have [not] discovered a way to communicate [appropriately, and in the right context]. If it is done in the right way, you [will succeed in teaching/learning "difficult concepts"].

So, regarding learning and teaching, I think that this story echoes what Albert Einstein had said:
>Example isn’t another way to teach, it is the only way to teach.
Making things relevant is the strongest motivator to learning, and concrete examples are the foundation on which you (the learner //and// the teacher) can build abstractions.
Which echoes Seymour Papert's own personal [[story/experience about gears/gearboxes|http://www.papert.org/articles/GearsOfMyChildhood.html]] and multi-variable equations^^3^^, demonstrating/exemplifying (ha!) the power of [[constructionism|Constructionism, Constructivism, and learning tools implications]]!


----
^^1^^ from a blog post on the Scientific American site [[Richard Feynman, sexism and changing perceptions of a scientific icon|https://blogs.scientificamerican.com/the-curious-wavefunction/richard-feynman-sexism-and-changing-perceptions-of-a-scientific-icon/]]
>It's not surprising to find these anecdotes [described in the blog post] disturbing and even offensive, but I believe it would also be premature and simplistic to write off Richard Feynman as "sexist" across the board. People who want to accuse him of this seem to have inadvertently cherry-picked anecdotes.
^^2^^ as the physicist Janna Levin writes in her excellent book [[A Madman Dreams of Turing Machines|http://jannalevin.com/black-hole-blues-and-other-songs-from-outer-space/a-madman-dreams-of-turing-machines/]], about two other exceptional people, Kurt Gödel and Alan Turing:
> Their genius is a testament to our own worth, an antidote to insignificance; and their bounteous flaws are luckless but seemingly natural complements, as though greatness can be doled out only with an equal measure of weakness.
^^3^^ from Papert's article [[The Gears Of My Childhood|http://www.papert.org/articles/GearsOfMyChildhood.html]]:
>I believe that working with [gear] differentials did more for my mathematical development than anything I was taught in elementary school. Gears, serving as models, carried many otherwise abstract ideas into my head. I clearly remember two examples from school math. I saw multiplication tables as gears, and my first brush with equations in two variables (e.g., 3x + 4y = 10) immediately evoked the differential. By the time I had made a mental gear model of the relation between x and y, figuring how many teeth each gear needed, the equation had become a comfortable friend. 
A Google Maps Street View car seen on the streets of Washington D.C.

[img[GoogleMaps car|./resources/GoogleMapsCar.jpg]]


A recent article by Dan Neil, the car reviewer of the Wall Street Journal, in his column "Rumble Seat"^^1^^ titled [[Rolls-Royce Dawn Review: Charisma Comes Standard|http://www.wsj.com/articles/rolls-royce-dawn-review-charisma-comes-standard-1474581076]] reminded me of my father, who was a mechanical engineer by education (and used to manage a mechanics shop when he was young). My Dad loved cars and although he did not spend a lot of money on them, he took care of the cars he owned over the years, and kept up to date on the latest developments in the field throughout his life.

I remember the story Dad had told about Rolls Royce (a memory which the WSJ article triggered), at the time when their famous [[Silver Shadow|https://en.wikipedia.org/wiki/Rolls-Royce_Silver_Shadow]] model came out and they announced the specs of the legendary car. The spec sheet listed the usual details of length, width, weight, number of cylinders and their displacement, etc., but when it came to listing the //horsepower// of the engine (clearly an important fact!), they just wrote, in typical British understatement: "Enough".

Anyway, Neil test-drove the car a few weeks before writing the article/review (or as he put it: "My turn at this automotive kissing booth came in August"), and noted that "with this car charisma comes standard" (the actual way he put it was: "a lot of parts -- engines and transmissions, bodies and body panels -- are sourced from Germany and BMW (which owns RR). But [[Goodwood|https://www.goodwood.com/]] is where they install the charisma.")

Neil's article, with his typical, somewhat dry but inventive humor and style, reminded me of my father's love for cars as well as his love of language and humor. Here are a few "linguistic pearls" from Neil's article:
* for a lot of people, an encounter with a ~Rolls-Royce is a tell-the-grandkids event, a touch of transcendence.
* The car accelerates to 60 mph in less than 5 seconds in the most untaxed manner
* this ~Rolls-Royce Dawn [the name of this model] moved around California like an MX missile during a Cold War exercise
* The author was definitely moved by the driving experience ("Imagine my face crying like Iron Eyes Cody." [an American actor playing Native American hero roles])
* The Dawn constitutes a sociological index on four 20-inch wheels ( Base price: $335,000. Price as tested: $400,000 - "a car worth more than the average house")
* it’s unforgettably large: a 17.3-foot convertible, with the geometric subtlety of a shipping container.
* The Dawn is a 2+2 convertible (drophead, in the parlance) and sibling to the fixed-head coupe Wraith. The Wraith is lighter and more powerful than the Dawn, but the latter owns the sky.
* The car engine is powerful (twin-turbo 6.6-liter V12 rated at 563 hp and a massive 575 pound-feet of torque at just 1,500 rpm): “whoosh” is too violent a word to describe how the Dawn gathers pace. With the superabundance of torque at any engine speed and the angels fluttering among the eight gears, the Dawn is ever ready, never hurried.
* From a standing start the Dawn can accelerate to 60 mph in under 5 seconds in a manner so unforced, so inevitable, as to take one off guard.
* the Dawn structure feels more like granite than steel. At least like the truck that delivers granite countertops.
* While the cruising is creamy to the n^^th^^, you wouldn’t say the Dawn’s driving character is sporty, exactly. It’s more like a high-speed longitudinal trust exercise.
* He concludes by writing: If drivers grip the thin-rim steering wheel lightly, and hold their breath just so, they can bend the mighty motorcar through a fine curve at truly entitled speeds. I never quite got there, but the car was waiting on his Lordship.


----
^^1^^ [[A rumble seat|https://en.wikipedia.org/wiki/Rumble_seat]] (American English), dicky seat, dickie seat or dickey seat (British English), also called mother-in-law seat, is an upholstered exterior seat which folded into the rear of a coach, carriage, or early automobile. Depending on its configuration, it provided exposed seating for one or two passengers.
When I introduce the Exploratory Computer Science (CS) course I teach in high school, I say that the programming languages we will be learning (Scratch from MIT or Snap! from UC Berkeley) are "Low Floor, High Ceiling" languages, which means that very quickly the students can program reasonably functional and useful programs and see appealing and engaging results (so that's the "Low Floor" aspect). 
And, that (maybe despite appearances) these languages are not "kiddie languages" and students can end up programming pretty sophisticated and complex programs with these languages and go pretty far implementing complex CS concepts (so that's the "High Ceiling" part).

Usually, the students don't pay too much attention to this, and the parents I tell it to when describing the course very often have puzzled expressions on their faces, so let me tell you about a recent event which illustrates the point.

By the second week of the school year (so that's low floor for you :), the students are able to program a reasonable computer game in terms of action and effects (sound, animation, etc.).
But, one student came to me and said: you know, Mr. Mark, this game is somewhat boring.
And I thought to myself "uh-oh, what happened?", but commented that the student seemed to be quite engaged a minute ago, playing the game on the computer.
To which the student responded: Actually, the game is not bad (you see, high schoolers can't show too much enthusiasm; they have to stay cool :), but I want to know what my score is, and without knowing it, it's not much fun.
I had to seize the moment, so I told him that I could explain how to add a score to his game, if he'd like. And I proceeded to explain about variables, changes to the values of variables, and so on.

The student paid very close attention, smiled at the end, and said "OK, I got it!", and "thank you!" and went on to implement scoring in his game.

I, too, smiled, and thought to myself: when was the last time a teacher taught a student about variables, change of value, and so on, and got a smile and a thank you!?
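For readers who don't know Scratch/Snap!: the whole lesson boils down to something like the following rough Python transliteration of the blocks involved (the game events are made up for illustration):
{{{
score = 0                  # the "make a variable" block

def on_hit_target():       # an imagined game event
    global score
    score += 10            # the "change score by 10" block

def on_miss():             # another imagined game event
    global score
    score -= 5

on_hit_target(); on_hit_target(); on_miss()
print("score:", score)     # score: 15
}}}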
One of those many dates
that no longer ring a bell.

Where I was going that day,
what I was doing -- I don't know.

Whom I met, what we talked about,
I can't recall.

If a crime had been committed nearby,
I wouldn't have had an alibi.

The sun flared and died
beyond my horizons.
The earth rotated
un-noted in my notebooks.

I'd rather think
that I'd temporarily died
than that I kept on living
and can't remember a thing.

I wasn't a ghost, after all.
I breathed, I ate,
I walked.

My steps were audible,
my fingers surely left
their prints on doorknobs.

Mirrors caught my reflection.
I wore something or other in such-and-such a color.
Somebody must have seen me.

Maybe I found something that day
that had been lost.
Maybe I lost something that turned up late.

I was filled with feelings and sensations.
Now all that's like
a line of dots in parentheses.

Where was I hiding out,
where did I bury myself?
Not a bad trick
to vanish before my own eyes.

I shake my memory.
Maybe something in its branches
that has been asleep for years
will start up with a flutter.

No.
Clearly I'm asking too much.
Nothing less than one whole second.

----
poem by Wislawa Szymborska^^1^^, translated by Stanislaw Baranczak and Clare Cavanagh

^^1^^ see [[Wislawa Szymborska's Nobel Prize lecture (1996)]]
The commonplace miracle:
that so many common miracles take place.

The usual miracles:
invisible dogs barking
in the dead of night.

One of many miracles:
a small and airy cloud
is able to upstage the massive moon.

Several miracles in one:
an alder is reflected in the water
and is reversed from left to right
and grows from crown to root
and never hits bottom
though the water isn't deep.

A run-of-the-mill miracle:
winds mild to moderate
turning gusty in storms.

A miracle in the first place:
cows will be cows.

Next but not least:
just this cherry orchard
from just this cherry pit.

A miracle minus top hat and tails:
fluttering white doves.

A miracle (what else can you call it):
the sun rose today at three fourteen a.m.
and will set tonight at one past eight.

A miracle that's lost on us:
the hand actually has fewer than six fingers
but still it's got more than four.

A miracle, just take a look around:
the inescapable earth.
 
An extra miracle, extra and ordinary:
the unthinkable
can be thought.

----
Poem by Wislawa Szymborska^^1^^, translated by Joanna Trzeciak

^^1^^ see [[Wislawa Szymborska's Nobel Prize lecture (1996)]]
[[Welcome]]




[[Education]]

[[Computational Thinking/Literacy]]

[[Mathematics]]




[[Books]]

[[Quotes]]




[[About me]]

[[ToDo]]

[[GettingStarted]]
Make it right before you make it fast. Make it clear before you make it faster. Keep it right when you make it faster.

P. J. Plauger - Kernighan and Plauger, The Elements of Programming Style
Making connections between things, concepts, knowledge domains (as well as people, but that's for another tiddler) is creative, makes you wiser, deepens your understanding and/or skills, and is a source of joy and beauty overall.

Sometimes making connections involves breaking (or at least, penetrating) walls/lines/barriers and going against established forms, (mis)conceptions, norms, and "common knowledge" ("wisdom").

But as Halford John Mackinder had said "[[Knowledge is one. Its division into subjects is a concession to human weakness.]]" 
The truth is that these divisions are human-made. They may be useful, but they are context-dependent and reflect a human perspective (or state of knowledge), not a law of nature. As such, under different circumstances or in a different context, they may no longer be useful, and are therefore probably worth reconsidering and/or changing and/or abandoning in favor of different ones (and so it goes...).

This brings to mind the beautiful poem ''Psalm'' by [[Nobel Laureate|Wislawa Szymborska's Nobel Prize lecture (1996)]]  [[Wislawa Szymborska|https://en.wikipedia.org/wiki/Wis%C5%82awa_Szymborska]] (translated by Stanislaw Baranczak and Clare Cavanagh). It's sharp, observant, and mischievous:

>Oh, the leaky boundaries of man-made states! 
>How many clouds float past them with impunity; 
>how much desert sand shifts from one land to another; 
>how many mountain pebbles tumble onto foreign soil 
>in provocative hops!  
>
>Need I mention every single bird that flies in the face of frontiers 
>or alights on the roadblock at the border? 
>A humble robin - still, its tail resides abroad 
>while its beak stays home. If that weren't enough, it won't stop bobbing!  
>
>Among innumerable insects, I'll single out only the ant 
>between the border guard's left and right boots 
>blithely ignoring the questions "Where from?" and "Where to?"  
>
>Oh, to register in detail, at a glance, the chaos 
>prevailing on every continent! 
>Isn't that a privet on the far bank 
>smuggling its hundred-thousandth leaf across the river? 
>
>And who but the octopus, with impudent long arms, 
>would disrupt the sacred bounds of territorial waters?  
>And how can we talk of order overall? 
>when the very placement of the stars 
>leaves us doubting just what shines for whom?  
>
>Not to speak of the fog's reprehensible drifting! 
>And dust blowing all over the steppes 
>as if they hadn't been partitioned! 
>And the voices coasting on obliging airwaves, 
>that conspiratorial squeaking, those indecipherable mutters!  
>
>Only what is human can truly be foreign. 
>The rest is mixed vegetation, subversive moles, and wind. 


Therefore, I think that developing the habit and the skill of looking for and making new connections is very important. It opens you up, makes you wiser, and more skillful at living (and working). And, the good news is that it can be learned/developed.

In a [[conversation between Krista Tippett and David Whyte On Becoming Wise|https://www.youtube.com/watch?v=Nup6deehcck]] (1:40 hrs.), he said:
>You can actually practice shaping a mind which is constantly enlarging the context, and becoming more generous, and more in contact with the frontier which is the unknown.
>You can do that through asking "beautiful questions" and through genuine conversations.
to which she agreed and added:
>Questions are a mighty form of words.

This is also echoed in [[John O’Donohue's take on questions as lanterns|John O’Donohue - questions]], as well as in [[David Whyte's sentiments|David Whyte - questions]] expressed elsewhere.

Alan Kay (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]") promotes computing as an excellent means (environment, tool, context) for developing connections (and knowledge, insight, etc.), one that enables asking (and trying, and simulating) these "beautiful questions", when he states that [[The Real Computer Revolution Has Not Happened Yet]]. And that's why I am so drawn to Computing, and am so passionate about it.
Man is the best computer we can put aboard a spacecraft ... and the only one that can be mass produced with unskilled labor.
Man's mind, once stretched by a new idea, never regains its original dimensions.
In a short but insight-rich [[blog post|https://computinged.wordpress.com/2015/05/13/how-to-teach-computer-science-with-media-computation/]], Mark Guzdial from Georgia Tech brings up a list of excellent tips on how to teach his [[Media Computation class|http://coweb.cc.gatech.edu/mediaComp-teach]].

I currently have a segment in my Principles of CS class which uses audio/music (EarSketch from Georgia Tech) to teach/reinforce basic concepts like loops and functions, but I have not incorporated image processing into this course yet. I'm still working on seeing how image processing works with Trinket, which is what I'm using for this course.
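As a flavor of the kind of pixel-by-pixel exercise media computation is built around, here is a minimal sketch (my own illustration, not from Guzdial's post; it assumes the Pillow library and a local file named photo.jpg, which is not necessarily what Trinket provides):
{{{
from PIL import Image

img = Image.open("photo.jpg").convert("RGB")
pixels = img.load()  # direct pixel access

# Loop over every pixel and invert its color -> a photographic negative
for x in range(img.width):
    for y in range(img.height):
        r, g, b = pixels[x, y]
        pixels[x, y] = (255 - r, 255 - g, 255 - b)

img.save("photo_negative.jpg")
}}}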

Anyway, Guzdial's recommendations and my comments/thoughts on them:

* Let the students be creative.
** I totally agree: They should have the opportunity to "pour their interests and personality" into their work/projects.

* Let the students share what they produce. 
** What I found: Most of them enjoy doing it and it motivates them as they work on their projects. It also introduces a different pace and atmosphere into the class.

* Code live in front of the class. 
** I should do that much more often. The challenge I see is students' attention span and how to keep them engaged. But I think that Guzdial's points are valid: When the teacher makes a mistake and fixes it, the students see (a) that errors are expected and (b) there is a process for fixing them.  

* Pair programming leads to better learning and retention.
** I had tried this and still need to figure out how to make sure both students are equally engaged and learning from this.

* [[Peer instruction|http://www.peerinstruction4cs.org/]] is great. Not only does peer instruction lead to better learning and retention outcomes, but it also gives the teacher better feedback on what the students are learning and what they are struggling with. We strongly encourage the use of peer instruction in computing classes.
** I haven't tried this and need to find out more about this technique. Looks like a [[PI website|http://www.peerinstruction4cs.org/]] has some good reasons for doing it.
** [[On the site|https://blog.peerinstruction.net/2014/05/01/what-is-peer-instruction-in-2-mins/]], Julie Schell lists linked supplemental readings: flipped classrooms, pre-class assignments, ConcepTests, Peer Instruction Workflow, Voting Sequences, Displaying Results, Grouping Students for Productive Conversations, and Classroom Response Systems.

* Worked examples help with learning creativity.
** This is a big and important one, and seems to have a body of research supporting its pedagogic effectiveness on learning and transfer.

From the [[Information Philosopher's Encyclopedia|http://www.informationphilosopher.com/solutions/philosophers/twain/#IV]].

Twain, personified by the Old Man (O.M.) in the dialog below, is conversing (in a Socratic dialog) with a Younger Man (Y.M.) about Truth and ~Truth-Seekers (and, by clear implication, human nature):

>O.M.: We are always hearing of people who are around SEEKING AFTER TRUTH. I have never seen a (permanent) specimen. I think he had never lived. But I have seen several entirely sincere people who THOUGHT they were (permanent) Seekers after Truth. They sought diligently, persistently, carefully, cautiously, profoundly, with perfect honesty and nicely adjusted judgment -- until they believed that without doubt or question they had found the Truth. THAT WAS THE END OF THE SEARCH. The man spent the rest of his life hunting up shingles wherewith to protect his Truth from the weather. If he was seeking after political Truth he found it in one or another of the hundred political gospels which govern men in the earth; if he was seeking after the Only True Religion he found it in one or another of the three thousand that are on the market. In any case, when he found the Truth HE SOUGHT NO FURTHER; but from that day forth, with his soldering-iron in one hand and his bludgeon in the other he tinkered its leaks and reasoned with objectors. There have been innumerable Temporary Seekers of Truth -- have you ever heard of a permanent one? In the very nature of man such a person is impossible.

And Twain's brilliant, self-reflecting pivot:
>O.M.: I have been a humble, earnest, and sincere ~Truth-Seeker.
>Y.M.: Very well?
>O.M.: The humble, earnest, and sincere ~Truth-Seeker is always convertible by such means [i.e., able arguments backed by collated facts and instances].
>Y.M.: I am thankful to God to hear you say this, for now I know that your conversion --
>O.M.: Wait. You misunderstand. I said I have BEEN a ~Truth-Seeker.
>Y.M.: Well?
>O.M.: I am not that now. Have you forgotten? I told you that there are none but temporary ~Truth-Seekers; that a permanent one is a human impossibility; that as soon as the Seeker finds what he is thoroughly convinced is the Truth, he seeks no further, but gives the rest of his days to hunting junk to patch it and caulk it and prop it with, and make it weather-proof and keep it from caving in on him. Hence the Presbyterian remains a Presbyterian, the Mohammedan a Mohammedan, the Spiritualist a Spiritualist, the Democrat a Democrat, the Republican a Republican, the Monarchist a Monarchist; and if a humble, earnest, and sincere Seeker after Truth should find it in the proposition that the moon is made of green cheese nothing could ever budge him from that position; for he is nothing but an automatic machine, and must obey the laws of his construction.
>Y.M.: After so --
>O.M.: Having found the Truth; perceiving that beyond question man has but one moving impulse -- the contenting of his own spirit-- and is merely a machine and entitled to no personal merit for anything he does, it is not humanly possible for me to seek further. The rest of my days will be spent in patching and painting and puttying and caulking my priceless possession and in looking the other way when an imploring argument or a damaging fact approaches.

In one sense or interpretation of Twain's words, he has a very dark, even depressing, view of human nature (a "machine" seeking truth, latching on to whatever it finds, and then endlessly patching it and defending it, blind and resistant to any change and/or new information).
But, knowing Twain's style a bit, and understanding what humor, tongue-in-cheek, and irony can achieve in making someone re-examine their stance and possibly change it (or at least be open to evolving it), I think that Twain made a brilliant move here. I (choose to) interpret this as him holding up a mirror to us, and saying: look, there is a danger here, and this is how ugly and hopeless it can get, unless we are aware of the consequences and decide to change our attitude.
From Ian Stewart's (as the Brits say:) //lovely// book //Professor Stewart's Hoard of Mathematical Treasures//:

*Proof by Contradiction: ‘This theorem contradicts a well-known result due to Isaac Newton.’
*Proof by Metacontradiction: ‘We prove that a proof exists. To do so, assume that there is no proof … ‘
*Proof by Deferral: ‘We’ll prove this next week.’
*Proof by Cyclic Deferral: ‘As we proved last week …’
*Proof by Indefinite Deferral: ‘As I said last week, we’ll prove this next week.’
*Proof by Intimidation: ‘As any fool can see, the proof is obviously trivial.’
*Proof by Handwaving: ‘Self-explanatory.’ (Most effective in seminars and conference talks.)
*Proof by Vigorous Handwaving: More tiring, but more effective.
*Proof by Over-optimistic Citation: ‘As Pythagoras proved, two cubes never add up to a cube.’
*Proof by Personal Conviction: ‘It is my profound belief that the quaternionic pseudo-Mandelbrot set is locally disconnected.’
*Proof by Lack of Imagination: ‘I can’t think of any reason why it’s false, so it must be true.’
*Proof by Forward Reference: ‘My proof that the quaternionic pseudo-Mandelbrot set is locally disconnected will appear in a forthcoming paper.’ (Often not as forthcoming as it seemed when the reference was made.)
*Proof by Example: ‘We prove the case n = 2 and then let 2 = n.’
*Proof by Outsourcing: ‘Details are left to the reader.’
*Statement by Outsourcing: ‘Formulation of the correct theorem is left to the reader.’
*Proof by Unreadable Notation: ‘If you work through the next 500 pages of incredibly dense formulas in six alphabets, you’ll see why it has to be true.’
*Proof by Authority: ‘I saw Milnor in the cafeteria and he said he thought it’s probably locally disconnected.’
*Proof by Vague Authority: ‘The quaternionic pseudo-Mandelbrot set is well known to be locally disconnected.’
*Proof by Provocative Wager: ‘If the quaternionic pseudo-Mandelbrot set is not locally disconnected, I’ll jump off London Bridge wearing a gorilla suit.’
*Proof by Reduction to the Wrong Problem: ‘To see that the quaternionic pseudo-Mandelbrot set is locally disconnected, we reduce it to Pythagoras’s Theorem.’
Second Thoughts
*‘This is a one-line proof – if we start sufficiently far to the left.’

In his interesting book //Would-be Worlds - How Simulation is Changing the Frontiers of Science//, John L. Casti shows in numerous ways how Computing is changing fundamental assumptions we have held about different aspects of life and knowledge in general, and math and science in particular.

One specific example is how, in 1976, the two mathematicians Kenneth Appel and Wolfgang Haken proved the "Four Color Map" theorem. As Georges Gonthier, in his [[description of the way he went about proving it|http://www.ams.org/journals/notices/200811/tx081101382p.pdf]], puts it simply:
The four color map problem states that 
>the regions of any simple planar map can be colored with only four colors, in such a way that any two adjacent regions have different colors. [This] can on the one hand be understood even by schoolchildren as “four colors suffice to color any flat map” and on the other hand be given a faithful, precise mathematical interpretation using only basic notions in topology.
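To make the statement concrete, here is a toy backtracking search for a 4-coloring of a small, made-up map given as an adjacency list (this only illustrates the theorem's claim; it is nothing like the 1,936-configuration case analysis of the actual proof):
{{{
# Regions and their neighbors; A-D are mutually adjacent, so 4 colors are needed.
adjacency = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"A", "B", "C", "E"},
    "E": {"D"},
}
COLORS = ["red", "green", "blue", "yellow"]

def four_color(regions, coloring=None):
    coloring = coloring or {}
    if len(coloring) == len(regions):
        return coloring
    region = next(r for r in regions if r not in coloring)
    for color in COLORS:
        # A color is legal if no already-colored neighbor uses it
        if all(coloring.get(nbr) != color for nbr in regions[region]):
            result = four_color(regions, {**coloring, region: color})
            if result:
                return result
    return None  # dead end: backtrack

print(four_color(adjacency))
}}}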

The Four Color Map problem had stood unsolved since Francis Guthrie expressed it as an "innocent little coloring puzzle" in 1852^^1^^. As Casti put it, Appel and Haken startled the mathematical world not just because they were able to finally prove that four colors were indeed sufficient to color any map, but, probably more importantly, because they were able to prove this using a supercomputer.
Up to that point in time, a math proof had been understood as the application of a procedure, logic, and deduction, which could be __performed and verified by a human__. Thomas Tymoczko (a philosopher of mathematics) called this (considered essential?) characteristic of a math proof "(human) surveyability".

In Tymoczko's view ("The ~Four-Color Problem and its Mathematical Significance", 1980), the ~Appel-Haken proof failed this "surveyability" criterion by, he argued, substituting experiment for deduction:
> […] if we accept the [~Four-Color Theorem] as a theorem, we are committed to changing the sense of "theorem", or, more to the point, […] the sense of the underlying concept of "proof".
> [… the] use of computers in mathematics, as in the [~Four-Color Theorem], introduces empirical experiments into mathematics. 
> Whether or not we choose to regard the [~Four-Color Theorem] as proved, we must admit that the current proof is no traditional proof, no a priori deduction of a statement from premises. It is a traditional proof with a […] gap, which is filled by the results of a well-thought-out experiment.

Appel and Haken's proof employed a computational procedure which ran through a large (1,936) but finite (!) set of topological (map) configurations to show that, indeed, 4 colors were sufficient. For the first time in the history of math proofs, a procedure which could not be performed and/or verified by hand had been used, employing a supercomputer running calculations for many hours.
To make things even murkier, the computer technology used was known to have a computation error rate of 1 error per thousand hours of computation. 
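
As a side note, to get a feel for what a "proof by finite exhaustive check" looks like, here is a toy sketch in Python (my own illustration; emphatically //not// the actual ~Appel-Haken procedure, which analyzed "reducible configurations" rather than coloring maps directly). It brute-force verifies that a small map, given as a region-adjacency graph, can be colored with four colors:
{{{
# A toy "proof by exhaustive check": backtracking search that verifies
# a small map graph is 4-colorable. (Illustration only; NOT the actual
# Appel-Haken procedure, which checked 1,936 special configurations.)

def four_colorable(adjacency):
    """adjacency: {region: set of neighboring regions}."""
    regions = list(adjacency)
    coloring = {}

    def assign(i):
        if i == len(regions):
            return True                       # every region is colored
        region = regions[i]
        for color in range(4):                # the four colors, 0..3
            if all(coloring.get(nb) != color for nb in adjacency[region]):
                coloring[region] = color
                if assign(i + 1):
                    return True
                del coloring[region]          # backtrack
        return False

    return assign(0), coloring

# Four mutually adjacent regions: all four colors are really needed.
example = {'A': {'B', 'C', 'D'}, 'B': {'A', 'C', 'D'},
           'C': {'A', 'B', 'D'}, 'D': {'A', 'B', 'C'}}
print(four_colorable(example))   # (True, {'A': 0, 'B': 1, 'C': 2, 'D': 3})
}}}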

Since then, the Four Color theorem has been checked and verified as well as (re)proven using similar but somewhat different computation approaches, and has been found to be solid ^^2^^. Nevertheless, there are still math "purists" who have doubts whether this Computational approach is an "honest-to-god" mathematical proof, and are concerned that if this is acceptable, "it will turn math (based on deduction) into physics (based on experimentation)".

But (and probably because Computing is near and dear to me :), I share [[Julie Rehmeyer's view|https://www.sciencenews.org/article/how-really-trust-mathematical-proof]] that:
> [Our belief is that] The one source of truth is mathematics. Every statement is a pure logical deduction from foundational axioms, resulting in absolute certainty. 
> [...] Well … in theory. The reality, though, is that mathematicians make mistakes. And as mathematics has advanced, some proofs have gotten immensely long and complex, often drawing on expertise from far-flung areas of math. Errors can easily creep in.
> [...] Where humans falter, computers can sometimes prevail. A group of mathematicians and computer scientists believe that with new proof-validation programs, the dream of a fully spelled-out, rigorous mathematics, with every deduction explicit and correct, can be realized.
> [...] Before long, they say, ordinary mathematicians will be using these tools as part of their everyday work. 


[[Simon Yuill|http://www.lipparosa.org/]], in a chapter titled //Bend Sinister: Monstrosity and Normative Effect in Computational Practice// (in the book [[Fun and Software - Exploring Pleasure Paradox and Pain in Computing|https://monoskop.org/images/1/14/Goriunova_Olga_ed_Fun_and_Software_Exploring_Pleasure_Paradox_and_Pain_in_Computing.pdf]] edited by Olga Goriunova), mentions Appel and Haken again:
> many mathematicians [distrust] computer-demonstrated proofs, such as evidenced, famously, in the lukewarm response to Appel and Haken’s proof for the Four Colour Theorem (1976). The theorem seeks to determine whether or not all the countries on a map can be coloured in using only four distinct colours and ensuring that no two neighbouring countries are coloured the same. This was the first major theorem to be proven using software assistance having eluded purely human analysis since it was originally conjectured in 1852, yet its solution was not celebrated. 
>The negative response of the mathematics community is epitomized in the words of one critic, Ian Stewart, who argued that this approach did not explain //why// the proof was correct:
>>This is partly because the proof is so long that it is hard to grasp (including the computer calculations, impossible!), but mostly because it is so apparently structureless. The answer appears as a kind of monstrous coincidence. Why is there an unavoidable set of reducible configurations? The best answer at the present time is: there just is. The proof: here it is, see for yourself. The mathematician’s search for hidden structure, his pattern-binding urge, is frustrated.
>The ~Appel-Haken proof confounded the aesthetics of mathematicians for the set of 1,482 different map configurations required to verify it could not be grasped by human imagination. It was not succinct. It was not elegant. In combining human and mechanical means it transgressed the prohibition against crossing between different disciplines such as that between arithmetic and geometry, the metabasis ex allo genos [i.e., crossing from one genus (or domain) to another in the course of a proof (for example, switching to use algebra in a geometry proof)], as established in Aristotle’s Posterior Analytics [Aristotle's Theory of Knowledge and Demonstration]. The proof exuded an excess of computational materiality, it had contagion, it could not be given human shape.

As you can probably guess, I am not willing (at this point, with what I know/feel today) to separate so strictly between domains and "keep walls up" (or preserve walls which are there for historic/evolutionary reasons) between knowledge disciplines.
To echo Halford John Mackinder, who said "[[Knowledge is one. Its division into subjects is a concession to human weakness.]]", we should strive to acquire knowledge and gain insights using every capability/device/tool we have at our disposal, and not just for practical reasons (i.e., "do whatever it takes to accomplish the task of learning/knowing more"), but also for aesthetic reasons. Using multiple devices to get at some truth (or at least new knowledge) is, in my view, exposing the interconnectedness of things/domains, and therefore has the potential to "harmonize" and bring us closer to a deeper understanding of the world/reality.
Having said that, I can definitely understand Simon Yuill's (and Ian Stewart's) sense of "displeasure" with the method of proof; it may have been "more elegant" and aesthetically pleasing to prove the Four Colour Theorem without the use of "brute-force computing". But on the other hand, the use of computing paves new (interdisciplinary) paths and opens up new windows between and across domains, and it seems to me that there is nothing bad about that :)

It's also interesting to read what two great mathematicians, Stanislaw Ulam and Mark Kac, had to say about [[using computers in math|Ways of "doing math"]], in an interview with Mitchell Feigenbaum.

----
^^1^^ - From Georges Gonthier in his [[paper|http://www.ams.org/journals/notices/200811/tx081101382p.pdf]]: [Guthrie] managed to embarrass successively his mathematician brother, his brother’s professor, Augustus de Morgan, and all of de Morgan’s visitors, who couldn’t solve it; the Royal Society, who only realized ten years later that Alfred Kempe’s 1879 solution was wrong; and the three following generations of mathematicians who couldn’t fix it.

^^2^^ - see for example:
* Gonthier, Georges (2008), "[[Formal Proof - The Four-Color Theorem|http://www.ams.org/journals/notices/200811/tx081101382p.pdf]]" and 
* Robertson, Neil; Sanders, Daniel P.; Seymour, Paul; Thomas, Robin (1997), "[[The Four-Colour Theorem|http://ac.els-cdn.com/S0095895697917500/1-s2.0-S0095895697917500-main.pdf?_tid=839a41cc-ab28-11e5-924e-00000aab0f01&acdnat=1451062800_6e3ae15df2c2b4516aacd58eaf7048b8]]"
<br>
{{{Master a second language, preferably math.}}}
: -- from [[Rules for my unborn son|http://rulesformyunbornson.tumblr.com/]]

<<forEachTiddler 
where 
'tiddler.tags.contains("math-item")'
sortBy 
'tiddler.title'>>



<html>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-sa/3.0/us/88x31.png" /></a><br />This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License</a>.
</html>
A common question math students ask when studying matrices (notations, operations, etc.): what is it good for? When/how can this be ''really'' used?

In response, I created an animated demonstration/example showing how matrix operations can be used in (movie) animation to magnify and rotate images, objects, and scenes.

It's implemented as a [[Sage|http://www.sagemath.org/]] ''interactive animation'', vividly showing that multiplying the coordinates of various bodies/images/outlines/wire-frames (expressed as vectors or matrices) by "rotation matrices" or "magnification matrices" results in interesting animations.
[[The full User Interface for experimenting with magnifying and rotating points and objects|./resources/Matrix operations_Sage.html]]

Or a static image:
[img[Sage rotating and magnifying objects using matrices|./resources/sage_matrix_operations_2.png][./resources/sage_matrix_operations_2.png]]
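
For readers without Sage at hand, here is a minimal plain-Python/~NumPy sketch of the same idea (my own stripped-down illustration, not the Sage interactive itself): the object's corner points sit in a matrix, and rotation/magnification are just matrix multiplications:
{{{
import numpy as np

# Rotation and magnification as 2x2 matrices acting on coordinates.
def rotation(theta):
    """2-D rotation matrix for angle theta (in radians)."""
    return np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])

def magnification(s):
    """Uniform 2-D scaling ("magnification") matrix with factor s."""
    return np.array([[s, 0.0],
                     [0.0, s]])

# A unit square, stored as a 2 x N matrix of corner coordinates.
square = np.array([[0, 1, 1, 0],
                   [0, 0, 1, 1]], dtype=float)

# One animation "frame": rotate by 30 degrees and magnify 2x.
# Animating = redoing this with a growing angle/factor each frame.
frame = magnification(2.0) @ rotation(np.radians(30)) @ square
print(frame.round(2))
}}}
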
The following original quote is attributed to Bill Gates: 
>Measuring programming progress by lines of code is like measuring aircraft building progress by weight.

But I think that since aircraft building progress //may// be reasonably measured by the rate of weight-addition, as it is built __according to plan__ (!), I would paraphrase it to:
>Measuring program and programmer effectiveness by lines of code is like measuring aircraft effectiveness by weight.

Or as seen on the [[Agile Insights blog|https://medium.com/agile-insights/if-you-cant-count-what-s-important-you-make-what-you-can-count-important-62f8171abc1e]]:
>If you can’t count what’s important you make what you can count important.

The following [[story|https://www.folklore.org/StoryView.py?project=Macintosh&story=Negative_2000_Lines_Of_Code.txt&sortOrder=Sort+by+Date&topic=Software+Design]] from the "good old days" (the 1980s) of Apple (and the development of the [[Lisa computer|https://www.mac-history.net/apple-history-2/apple-lisa/2007-10-12/apple-lisa]]) nicely illustrates the frequent tension between the desire of "Management" to measure things, and the need for common sense and for measuring "the right thing" (since it is well known in project management that "you get the behavior you measure").
>In early 1982, the Lisa software team was trying to buckle down for the big push to ship the software within the next six months. Some of the managers decided that it would be a good idea to track the progress of each individual engineer in terms of the amount of code that they wrote from week to week. They devised a form that each engineer was required to submit every Friday, which included a field for the number of lines of code that were written that week.
>
>[[Bill Atkinson|https://en.wikipedia.org/wiki/Bill_Atkinson]], the author of Quickdraw and the main user interface designer, who was by far the most important Lisa implementor, thought that lines of code was a silly measure of software productivity. He thought his goal was to write as small and fast a program as possible, and that the lines of code metric only encouraged writing sloppy, bloated, broken code.
>
>He recently was working on optimizing Quickdraw's region calculation machinery, and had completely rewritten the region engine using a simpler, more general algorithm which, after some tweaking, made region operations almost six times faster. As a by-product, the rewrite also saved around 2,000 lines of code.
>
>He was just putting the finishing touches on the optimization when it was time to fill out the management form for the first time. When he got to the lines of code part, he thought about it for a second, and then wrote in the number: -2000 [minus two thousand!].
>
>I'm not sure how the managers reacted to that, but I do know that after a couple more weeks, they stopped asking Bill to fill out the form, and he gladly complied.


And as you can imagine, [[I am in total agreement|The most secure, the fastest, and the most maintainable code (by far :) is the code not written.]] with Atkinson :)


Another take on this, with a good example, is from [[an article titled "tao of programming"|https://fare.livejournal.com/tag/tao%20of%20programming]] by ~François-René Rideau:
>Measurement is no substitute for judgment
>There's worse than having a long feedback loop: you could be using information to actively make adjustments that shape your company in counter-productive ways. For instance, you could be measuring the number of issues resolved by each team member, and rewarding employees based on that, leading to employees introducing more bugs, splitting every bug into plenty of independently registered issues and sub-issues, and spending half their time on the issue-tracking system rather than on the actual issues. Or you could reward developers based on lines of code, leading to unmaintainable code bloat. Measuring things is very important to detect anomalies that need to be addressed, but it is important not to use measurements in a way that will skew incentives, or you'll fall victim of Goodhart's law. Also, whatever effort you expand on measuring things that don't matter, or doing anything that doesn't matter, is effort you don't expand doing things that do matter. 
Men occasionally stumble over the truth, but most of them pick themselves up and hurry off as if nothing had happened.
I recently finished reading a short ~Sci-Fi story by Stephen Baxter called [[Turing's Apples|http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf934]], in which he puts an interesting spin on the familiar theme of communicating (sending and/or receiving messages) with aliens, along the lines of [[SETI|https://en.wikipedia.org/wiki/Search_for_extraterrestrial_intelligence]].

In Baxter's story, we on Earth receive a message from aliens and try to decipher it. This reminds me of our own effort to "send a message out", in the form of the [[golden record|https://voyager.jpl.nasa.gov/golden-record/golden-record-cover/]] put on the Voyager spaceship, containing [[music, images, sounds, and greetings|https://voyager.jpl.nasa.gov/golden-record/whats-on-the-record/]], to be carried by the spacecraft out of the solar system.

It's interesting to compare Baxter's idea of an "effective message" which he imagined aliens sending out in the story, with the message we had sent out aboard Voyager in 1977.

Here's how Baxter describes the investigation by two scientists/mathematicians (brothers) working on the message from the aliens (called Eaglets, on account of the message's source, the [[Eagle Nebula|http://earthsky.org/clusters-nebulae-galaxies/the-awesome-beauty-of-m16-the-eagle-nebula]]):
>And it was Wilson's intuition that these things were bits of executable code: programs you could run. Even as expressed in the Eaglets' Odd flowing language, he thought he recognised logical loops, start and stop statements. Mathematics may or may not be universal, but computing seems to be - my brother had found Turing machines, buried deep in an alien database. 
>
>Wilson translated the segments into a human mathematical programming language, and set them to run on a dedicated processor. They turned out to be like viruses. Once downloaded on almost any computer substrate they organised themselves, investigated their environment, started to multiply, and quickly grew, accessing the data banks that had been downloaded from the stars with them. Then they started asking questions of the operators: simple yes-no, true-false exchanges that soon built up a common language. 
>
>"The Eaglets didn't send us a message," Wilson had whispered to me on the phone in the small hours; at the height of it he worked twenty-four seven. "They downloaded an AI [Artificial Intelligence]. And now the AI is learning to speak to us." 
>
>It was a way to resolve a ferocious communications challenge. The Eaglets were sending their message to the whole Galaxy; they knew nothing about the intelligence, cultural development, or even the physical form of their audiences. So they sent an all-purpose artificial mind embedded in the information stream itself, able to learn and start a local dialogue with the receivers. 
>
>This above all else proved to me how Smart the Eaglets must be. It didn't comfort me at all that some commentators pointed out that this "Hoyle strategy" had been anticipated by some human thinkers; it's one thing to anticipate, another to build. I wondered if those viruses found it a challenge to dumb down their message for creatures capable of only ninth-order Shannon entropy^^1^^, as we were.

And here is how Maria Popova at [[BrainPickings|https://www.brainpickings.org/]] [[describes the effort by Carl Sagan and others to get a message on board the Voyager|https://www.brainpickings.org/2014/02/10/murmurs-of-earth-sagan-golden-record/]]:
>[it's fascinating to see] how, in the early fall of 1977, he [Sagan] and a team of collaborators imbued a [time capsule with great] hopefulness of cosmic proportions and sent it into space aboard the Voyager spacecraft as humanity’s symbolic embrace of other civilizations. On it, they set out to explain our planet and our civilization to another in 117 pictures, greetings in 54 different languages and one from humpback whales, and a representative selection of “the sounds of Earth,” ranging from an avalanche to an elephant’s trumpet to a kiss, as well as nearly 90 minutes of some of the world’s greatest music.

So the alien message imagined by Baxter is obviously much more sophisticated than Carl Sagan's idea and Voyager's Golden Record. It is much more versatile and universal, but also much more //dangerous//^^2^^. The aliens basically had sent a [["Universal Computer" (AKA a "Universal Turing Machine")|https://en.wikipedia.org/wiki/Universal_Turing_machine]], which in principle is capable of computing/programming/executing "anything" (with [[limitations|https://link.springer.com/chapter/10.1007/978-0-85729-535-4_7]]!). Talk about [[letting the genie out of the bottle|https://dictionary.cambridge.org/us/dictionary/english/let-the-genie-out-of-the-bottle]]!
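
To illustrate (very loosely!) the idea that makes such a machine "universal" - one fixed interpreter, with the "program" arriving as mere data - here is a tiny Turing-machine simulator in Python. Everything in it (the rule format, the binary-increment program) is my own hypothetical toy, not anything from Baxter's story:
{{{
# A toy Turing-machine interpreter: the machine itself never changes;
# the transition table ("rules") is data, which is the essence of a
# universal machine. This sample program increments a binary number.

def run_turing_machine(rules, tape, state='start', pos=0, blank='_'):
    """rules: {(state, symbol): (next_state, write_symbol, move)}"""
    cells = dict(enumerate(tape))
    while state != 'halt':
        symbol = cells.get(pos, blank)
        state, write, move = rules[(state, symbol)]
        cells[pos] = write
        pos += move
    lo, hi = min(cells), max(cells)
    return ''.join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

increment = {                                # walk right, then carry left
    ('start', '0'): ('start', '0', +1),
    ('start', '1'): ('start', '1', +1),
    ('start', '_'): ('carry', '_', -1),
    ('carry', '1'): ('carry', '0', -1),
    ('carry', '0'): ('halt',  '1',  0),
    ('carry', '_'): ('halt',  '1',  0),
}
print(run_turing_machine(increment, '1011'))  # '1100' (11 + 1 = 12)
}}}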


----
^^1^^ - from [[Peter Russell's article|http://www.peterrussell.com/Dolphin/DolphinLang.php]] //Look who's talking//:
>The higher entropy levels, second order and up, relate to the notion of "conditional probabilities": once you have seen a particular sequence of elements, what are your chances of predicting the next element in the series? If, for instance, you know the first and second words of a phrase, the third-order entropy tells you (in logarithmic form) the odds of guessing the third word correctly. Analyses of English and Russian suggest that these languages show evidence of 8th or 9th-order Shannon entropy, meaning that when presented with a string of eight words, you have some ability (slim but non-zero) to predict what the ninth word might be. After that, though, all bets are off. If you want to guess what the 10th word is, the previous nine are of no value.
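
To make the notion a bit more concrete, here is a rough Python sketch (my own toy estimate, on characters rather than words, and on a trivially repetitive sample) of how one might measure such n-th order conditional entropy from raw counts:
{{{
import math
from collections import Counter

# Toy estimate of n-th order conditional Shannon entropy of a text:
# how many bits of surprise does the next character carry, given the
# previous n-1 characters? (Real analyses use words and huge corpora.)
def conditional_entropy(text, n):
    contexts = Counter(text[i:i + n - 1] for i in range(len(text) - n + 1))
    ngrams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    total = sum(ngrams.values())
    return -sum(c * math.log2(c / contexts[g[:-1]])
                for g, c in ngrams.items()) / total

sample = "the quick brown fox jumps over the lazy dog " * 20
for n in (1, 2, 3, 4):
    print(n, round(conditional_entropy(sample, n), 3))
# The entropy drops as the order grows: longer contexts make the
# next character ever easier to predict.
}}}
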
^^2^^ - this may explain why [[Arthur C. Clarke|https://en.wikiquote.org/wiki/Arthur_C._Clarke]] wanted to [[add to the message to extraterrestrials|https://science.nasa.gov/science-news/science-at-nasa/2011/28apr_voyager2]] the sentence:
>Please leave me^^3^^ alone; let me^^3^^ go on to the stars.
^^3^^ - on second thought, it's not clear who "me" refers to: the human race (begging the aliens), or the Voyager spaceship begging humans (who potentially could at [[some time in the future overtake the 1977 spaceship|https://science.nasa.gov/science-news/science-at-nasa/2011/28apr_voyager2]])
Metamagical Themas, Questing for the Essence of Mind and Pattern, by Douglas Hofstadter, Basic Books, 1985

<<forEachTiddler 
where 
'tiddler.tags.contains("book-chapter") && tiddler.tags.contains("Metamagical Themas")'
sortBy 
'tiddler.title'>>
In the wonderful book [[The issue at hand|The issue at hand]] by Gil Fronsdal, he writes about mindfulness:

>At the heart of insight meditation is the practice of mindfulness, the cultivation of clear, stable and nonjudgmental awareness.
>While mindfulness practice can be highly effective in helping bring calm and clarity to the pressures of daily life, it is also a spiritual path that gradually dissolves the barriers to the full development of our wisdom, compassion and freedom.
>
>Cultivating our capacity to see clearly is the foundation for learning how to be present for things as they are, as they arise. It is learning to see without the filters of bias, judgment, projection, or emotional reactions. It also entails developing the trust and inner strength that allow us to be with things as they are instead of how we wish they could be.
>
>Mindfulness relies on an important characteristic of awareness: awareness by itself does not judge, resist, or cling to anything.


Meditation can and does improve mindfulness (among other beneficial results :) . But as [[Sharon Salzberg|https://www.sharonsalzberg.com/about/]] [[had said|https://onbeing.org/blog/the-fractal-moment-an-invitation-to-begin-again/7589/]]:
>In actuality, meditation is simple, but not easy: you rest your attention on something like the breath in order to stay present, and, as thoughts carry you away, you begin again an incalculable number of times. That is why meditation is a practice. It is this practice of training one’s attention that makes meditation so powerful.
>[...]
>There is joy and an important sense of renewal in each effort to begin again. In this way, meditation is not about the creation of a singular experience but about changing our relationship to experience.

----

Compare and Contrast: A wonderful [[poem by Wislawa Szymborska|MAY 16, 1973 by Wislawa Szymborska]] about how difficult it is to be mindful (and the "naturalness" of living mindlessly)
In Edgar Allan Poe's "detective story" [["The Purloined Letter"|On "The Purloined Letter" by Edgar Allan Poe]] Dupin (one of the main characters) tells the following story, to illustrate how oblivious we sometimes (often?) are to things in plain sight, or how assumptions (sometimes, unconscious) blind us to the real nature of things, or what's right in front of us:
>“There is a game of puzzles,” he [Dupin] resumed, “which is played upon a map. One party playing requires another to find a given word—the name of town, river, state or empire—any word, in short, upon the motley and perplexed surface of the chart. A novice in the game generally seeks to embarrass his opponents by giving them the most minutely lettered names; but the adept selects such words as stretch, in large characters, from one end of the chart to the other. These, like the over-largely lettered signs and placards of the street, escape observation by dint of being excessively obvious; and here the physical oversight is precisely analogous with the moral inapprehension by which the intellect suffers to pass unnoticed those considerations which are too obtrusively and too palpably self-evident.
Which reminds me of a similar point David Foster Wallace made in his [[commencement speech at Kenyon College (for the graduating class of 2005)|http://bulletin-archive.kenyon.edu/x4280.html]].
>There are these two young fish swimming along and they happen to meet an older fish swimming the other way, who nods at them and says "Morning, boys. How's the water?" 
>And the two young fish swim on for a bit, and then eventually one of them looks over at the other and goes "What the hell is water?"
And Foster Wallace clarifies:
>I am not the wise old fish. The point of the fish story is merely that the most obvious, important realities are often the ones that are hardest to see and talk about. Stated as an English sentence, of course, this is just a banal platitude, but the fact is that in the day to day trenches of adult existence, banal platitudes can have a life or death importance, or so I wish to suggest to you on this dry and lovely morning.

He proceeds by describing the dangers (and falling into the trap) of some sort of [[concept bubble (or "echo chamber")|The Scientific Bubble]] through another story:
>There are these two guys sitting together in a bar in the remote Alaskan wilderness. One of the guys is religious, the other is an atheist, and the two are arguing about the existence of God with that special intensity that comes after about the fourth beer. And the atheist says: "Look, it's not like I don't have actual reasons for not believing in God. It's not like I haven't ever experimented with the whole God and prayer thing. Just last month I got caught away from the camp in that terrible blizzard, and I was totally lost and I couldn't see a thing, and it was 50 below, and so I tried it: I fell to my knees in the snow and cried out 'Oh, God, if there is a God, I'm lost in this blizzard, and I'm gonna die if you don't help me.'" And now, in the bar, the religious guy looks at the atheist all puzzled. "Well then you must believe now," he says, "After all, here you are, alive." The atheist just rolls his eyes. "No, man, all that was was a couple Eskimos happened to come wandering by and showed me the way back to camp."
>
>It's easy to run this story through kind of a standard liberal arts analysis: the exact same experience can mean two totally different things to two different people, given those people's two different belief templates and two different ways of constructing meaning from experience. Because we prize tolerance and diversity of belief, nowhere in our liberal arts analysis do we want to claim that one guy's interpretation is true and the other guy's is false or bad. Which is fine, except we also never end up talking about just where these individual templates and beliefs come from. Meaning, where they come from INSIDE the two guys. As if a person's most basic orientation toward the world, and the meaning of his experience were somehow just hard-wired, like height or shoe-size; or automatically absorbed from the culture, like language. As if how we construct meaning were not actually a matter of personal, intentional choice. Plus, there's the whole matter of arrogance. The nonreligious guy is so totally certain in his dismissal of the possibility that the passing Eskimos had anything to do with his prayer for help. True, there are plenty of religious people who seem arrogant and certain of their own interpretations, too. They're probably even more repulsive than atheists, at least to most of us. But religious dogmatists' problem is exactly the same as the story's unbeliever: blind certainty, a close-mindedness that amounts to an imprisonment so total that the prisoner doesn't even know he's locked up.
In his excellent book //The Darker the Night, the Brighter the Stars//, Paul Broks writes:

>IN GREEK MYTHOLOGY the Moon, Selene, fell in love with a beautiful shepherd, Endymion, as her beams fell on his sleeping face. Dreading the prospect that, like all mortals, he would age and die, she got Zeus to grant him eternal youth, which he did. But also eternal sleep.
>
>AT 02:56 COORDINATED Universal Time on 21 July 1969, Neil Armstrong becomes the first human being to set foot on the surface of the Moon. He has prepared [[a few words to mark the occasion|https://www.ndtv.com/world-news/is-neil-armstrongs-famous-moon-landing-quote-really-a-misquote-524458]]^^1^^: “That's one small step for (a) man, one giant leap for mankind.” The “a” is in parenthesis because, although it is required for the sentence to make any sense at all, it was not uttered by the astronaut—or, at least, it was not heard by the millions of earthlings tuned in for the Moon landing. 
>
>Armstrong later claimed to have voiced the “a” and that, somehow, the crucial little word was lost in transmission. We'll never know. Who cares? 
>
>Aldrin descended the lunar module ladder for his Moonwalk about twenty minutes later. I wonder if thoughts of his mother crossed his mind on his lunar stroll. She had killed herself the previous year in a state of depression triggered, apparently, by the prospect of her son's forthcoming Moon mission and the fame and the acclaim it would be bound to bring. She feared she wouldn't cope. Her maiden name Moon.
>
>THOSE FIRST FOOTFALLS in the moon dust punctured a membrane—the thin, porous film that separates reality and imagination. Some saw this as an act of destruction, a violation. It was a sentiment that inspired Tom Stoppard to write //Jumpers//, a play about philosophy, murder and Moon landings. He was curious to know whether, “if and when men landed on the Moon, something interesting would occur in the human psyche.” He cites a statement from the Union of Persian Storytellers (“if you can imagine such a thing”) to the effect that a Moon landing would be damaging to the livelihood of the storytellers. 
>
>The Moon as romantic metaphor, as symbol of love and dreams, of the unconscious mind, the passage of time, of life and death, would be diminished. The veil would drop and the Moon would be revealed as rock and dust. We knew that anyway, but there are different ways of knowing.
>
>Then it happened. Armstrong and Aldrin walked on the Moon and nothing changed.

[[Carl Sagan also tells a story of another myth about a celestial object|A keen eye (but not what you think)]], which also reflects (ha!) on human nature.

(see also [[The Last Words Spoken on the Moon]])


----
^^1^^ - see [[GD local copy|https://docs.google.com/document/d/1PAsseF0KS2ka3L6CbT3b6krwXcY3688JAJRy5RkAPHA/edit?usp=sharing]]
Ever since I had read [[Richard Hamming's "The Unreasonable Effectiveness of Mathematics"|resources/Hamming.html]], where he "[[normalizes our thinking|Perhaps there are thoughts we cannot think]]" about "unthinkable thoughts" (basically asking why we (erroneously) think it should not be possible that they exist), I have been looking for some simple example of what I thought (ha!) was such a "rarefied creature".

[[Haldane|https://en.wikipedia.org/wiki/J._B._S._Haldane]] once said: 
[[My own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose... I suspect that there are more things in heaven and earth that are dreamed of, or can be dreamed of, in any philosophy.]]

I have just finished an interesting book, //The Accidental Universe: The World You Thought You Knew// by Alan Lightman, where he writes about how we humans are getting more and more used to (and sadly, or at least, I suspect, unavoidably, even "comfortable" with) living in what he calls "The Disembodied Universe".
Lightman points out that both modern physics and biology (and, like Bret Victor, [[I have to add|Enabling to think the unthinkable]], technology and computers!) have uncovered an invisible universe:
>Since Foucault, more and more of what we know about the universe is undetected and undetectable by our bodies. What we see with our eyes, what we hear with our ears, what we feel with our fingertips, is only a tiny sliver of reality. Little by little, using artificial devices, we have uncovered a hidden reality. It is often a reality that violates common sense. It is often a reality that is strange to our bodies. It is a reality that forces us to reexamine our most basic concepts of how the world works. And it is a reality that discounts the present moment and our immediate experience of the world.
He gives some examples from physics, like experiencing the Earth's turning through Foucault's Pendulum, the discovery of electromagnetic waves and the formulation of their behavior^^1^^ by James Clerk Maxwell, Heinrich Hertz's experiments with radio waves, the confirmation of the time dilation predicted and calculated by Albert Einstein, and so on.

He quotes a friend of Einstein, Niels Bohr, who wrote that "the world of the quantum [quantum physics] is so foreign to our sensory perceptions that we don't even have words to describe it":
>we find ourselves here on the very path taken by Einstein of adapting our modes of perception borrowed from the sensations to the gradually deepening knowledge of the laws of nature. The hindrances met on this path originate above all in the fact that ... every word in the language refers to our ordinary perceptions.

Lightman [[laments the impact of science and technology on our worldview|On losing our personal relationship with the world around us]] and the creation of this "invisible", "disembodied" universe:
>It is an irony to me that the same science and technology that have brought us closer to nature by revealing these invisible worlds have also separated us from nature and from ourselves. Much of our contact with the world today is not an immediate, direct experience, but is instead mediated by various artificial devices such as television, cell phones, iPads, chat rooms, and mind-altering drugs.

So back to my attempt to find simple examples of unthinkable thoughts. Here is a simple, almost "everyday" example in our high-tech world today, which could truly be considered unthinkable, let's say, 200 years ago (or maybe before the Wright brothers' first airplane flight on December 17, 1903). This example was triggered in my mind by Lightman's book, but also by the fact that this week my son is in Israel on a business trip.

In light of these coincidences, I thought of the fact that my son had to take a 10-hour flight to Frankfurt, Germany, followed by another 4-hour flight to Israel, and that if I want to talk to him, I need to remember that there is a 10-hour time difference between the US West Coast and Israel.
Now, isn't this a (nowadays mundane) example of a thought (and a "universe of concepts and knowledge") which had been "unthinkable" a mere 200 years ago? And it comes so naturally to us. And it doesn't seem like an "impossible feat" or thought.

Which reminds me of the scene from //Through the ~Looking-Glass// by Lewis Carroll (Charles Lutwidge Dodgson), when Alice meets the White Queen (Chapter 5: Wool and Water):
>[the Queen says:] "I'm just one hundred and one, five months and a day."
>"I can't believe that!" said Alice.
>"Can't you?" the Queen said in a pitying tone. "Try again: draw a long breath, and shut your eyes."
>Alice laughed. "There's no use trying," she said: "one can't believe impossible things."
>"I daresay you haven't had much practice," said the Queen. "When I was your age, I always did it for half-an-hour a day. Why, sometimes I've believed as many as six impossible things before breakfast."

So here you have it^^2^^. As the White Queen says: it's possible (to think/calculate time zones, //and// there is an algorithm for doing it :)
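
And indeed, in Python the whole "unthinkable" calculation takes only a few lines (a minimal sketch using the standard zoneinfo database; note that the actual offset is 9 or 10 hours, depending on daylight saving time):
{{{
from datetime import datetime
from zoneinfo import ZoneInfo   # standard library since Python 3.9

# One and the same instant, seen from two time zones.
now_utc = datetime.now(ZoneInfo("UTC"))
west_coast = now_utc.astimezone(ZoneInfo("America/Los_Angeles"))
israel = now_utc.astimezone(ZoneInfo("Asia/Jerusalem"))

print("US West Coast:", west_coast.strftime("%Y-%m-%d %H:%M"))
print("Israel:       ", israel.strftime("%Y-%m-%d %H:%M"))

# The offset between the two zones, in hours.
diff = (israel.utcoffset() - west_coast.utcoffset()).total_seconds() / 3600
print(f"Israel is {diff:+.0f} hours ahead of the US West Coast")
}}}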

It seems to me that thinking always involves words, and if you attempt to think the unthinkable, you need to //invent// new words, or at least create new meanings for existing words.
I think that this is echoed by [[Ursula K. Le Guin|http://www.ursulakleguin.com/UKL_info.html]] who had said (ha!):
>Words are events, they do things, change things. They transform both speaker and hearer; they feed energy back and forth and amplify it. They feed understanding or emotion back and forth and amplify it.
Maria Popova of [[BrainPickings|https://www.brainpickings.org/]] [[covers UKL's book|https://www.brainpickings.org/2015/10/21/telling-is-listening-ursula-k-le-guin-communication/]] //The Wave in the Mind: Talks and Essays on the Writer, the Reader, and the Imagination//, about which she (Popova) writes (words again :):
>Every act of communication is an act of tremendous courage in which we give ourselves over to two parallel possibilities: the possibility of planting into another mind a seed sprouted in ours and watching it blossom into a breathtaking flower of mutual understanding; and the possibility of being wholly misunderstood, reduced to a withering weed.

And who should know better than UKL about [[creating new worlds|http://www.ursulakleguin.com/PlausibilityinFantasy.html]] full of unthinkable thoughts and realities :)
On the website of a new [[documentary film|http://worldsofukl.com/]] about her work, UKL is quoted (last words (for this post) :):
>As great scientists have said and as all children know, it is above all by the imagination that we achieve perception, and compassion, and hope.

----
^^1^^ - see on the [[beauty of the definition of the programming language Lisp|LISP as the Maxwell Equations of Computer Science]] as the equivalent of Maxwell's equations.
^^2^^ - I found a discussion of "unthinkable thoughts" (or, in this case, "unthinkable questions" which at some point become thinkable/askable) in the book [[The Meaning of Life|https://ia601202.us.archive.org/21/items/MeaningOfLife-VeryShortIntroduction-TerryEagleton/MeaningOfLifeAVeryShortIntroductionTheTerryEagleton2008OxfordUniversityPressIsbn9780199532179.pdf]] by [[Terry Eagleton|http://www.lancaster.ac.uk/english-literature-and-creative-writing/about-us/staff/terry-eagleton]]. We constantly push the frontier forward by acquiring and validating new knowledge and experiences, which expand the "questioning horizon"; as Eagleton writes:
>Not any question is possible at any given time. Rembrandt could not ask whether photography had rendered realist painting redundant.
My own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose... I suspect that there are more things in heaven and earth that are dreamed of, or can be dreamed of, in any philosophy.

Echoing and expanding on William Shakespeare's Hamlet saying:
>there are more things in heaven and earth, Horatio, than are dreamt of in your philosophy.
<<forEachTiddler 
where 
'tiddler.tags.contains("book-chapter") && tiddler.tags.contains("Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence")'
sortBy 
'tiddler.title'>>
In the WSJ column [[20 odd questions|https://www.wsj.com/news/types/20-odd-questions]] Neil deGrasse Tyson was [[asked|https://www.wsj.com/articles/neil-degrasse-tyson-1495122652]], among other things:

>''We instill in children the misconception that'': science is memorizing the names of things. That’s an aspect of it, but it’s not the core. The core is understanding objects. Get that into your 6-year-old, and you’ve got nothing more to teach them.
I beg to differ; I think that he doubly (!) missed the boat: 

__First__, I think that deGrasse Tyson tilts it too much towards understanding //objects//. I think that understanding processes is no less (if not more) important. It's //not// just a matter of Western vs. Eastern philosophy differences/emphases. It's about the importance and criticality of relationships between objects and the processes that impact them.

As [[Alan Kay once said about Alan Perlis|Alan Kay on "Rethinking Computer Science Education"]]:
>We need to establish "real computer science"
>>- Alan Perlis meant that we need a "science of processes" - a science to study processes and things in process
>>- processes in mechanics, biology, society, politics, chemistry, tech/engineering, mental etc.



__Second__, I think that at the core (to borrow deGrasse Tyson's term) of science (and any other worthwhile human endeavor) is the sense of wonder; this is really the engine for creativity, discovery, knowledge, and understanding. And if you "get that into your 6-year-old, [...] you’ve got nothing more to teach them"!

[[Albert Einstein had said it|https://sciphilos.info/docs_pages/docs_Einstein_fulltext_css.html]] better than me:
>"The most beautiful thing we can experience is the mysterious. It is the source of all true art and science. He to whom the emotion is a stranger, who can no longer pause to wonder and stand wrapped in awe, is as good as dead —his eyes are closed."

BTW, and to be fair, Einstein also said about understanding, awe and wonder:
>Compound interest is the eighth wonder of the world. He who understands it, earns it ... he who doesn't ... pays it.
;)
A different (but related) perspective on [[forms of literacy|Computing Literacy]] is given by [[Robert Logan|http://www.physics.utoronto.ca/people/homepages/logan/]] in his book //The Sixth Language: Learning a Living in the Internet Age//. The six languages he considers are: speech, writing, mathematics, science, computers and the Internet, forming an evolutionary chain in human development.
Since Logan had worked with Marshall McLuhan, it's not surprising to me that he uses the term "languages", since both men were "very big" on communication, media, messages, and so on (you surely remember McLuhan's "the medium is the message"). I think that "literacies" may be a more suitable (and also less bombastic ;-) term, but Logan definitely has good insights.

In a [[good review|http://www.cjc-online.ca/index.php/journal/article/view/1265/1273]] of the book in the Canadian Journal of Communication, Paul Heyer brings up some very good points about it.
In reference to education, the fifth and sixth languages (computers and the Internet) are (to my point earlier) treated as literacies:
>Computers have created an information revolution, and the Internet now enmeshes us in a knowledge archive of global proportions. The ability to navigate comfortably this terrain is essential and should be part and parcel of even a back-to-basics educational curriculum.

On the natural (and predictable) evolution of new languages (again, I think media, or literacies, are better terms, but this is Logan's choice):
>One McLuhan notion that is given pronounced emphasis is how the content of a new medium is usually that of the medium that preceded it. The first printed books, for example, in subject matter and form, replicated the earlier manuscripts; early television cannibalized radio (some radio shows were even simulcast for the newer medium); and as Logan observes, early computers sourced print, mathematics, and science.
This echoes [[Seymour Papert's insight/principle|An Exploration in the Space of Mathematics Educations]] for education pedagogy and material ("New media open the door to new contents").
And then:
>After this initial conservative phase, the new medium eventually establishes unique formats, often unanticipated during its inception, and the world is forever changed.
As an aside, it's interesting that he points out a parallelism in evolutionary biology:
>Romer's rule [...] states that an evolutionary adaptation is usually conservative in that it tends to help an organism maintain its existing lifeway in the face of changing environmental circumstances rather than immediately enabling it to exploit fully a new niche. When lungfishes emerged during the Devonian period, for example, the new adaptation did not initially establish the terrestrial mode, although it eventually would. Instead, it allowed organisms to get from one diminishing body of water to another, thereby helping them maintain the status quo.
In the review article, Heyer also tempers Logan's enthusiasm about computers and the Internet:
>Few would argue against a high-tech component to education. But such insistence needs to be tempered. There is a cost, along with //the// cost.
and he gives one example: equipping schools with computers and connecting them to the Internet does not automatically/naturally enhance education and learning, and the cost of doing so may come at the expense of other means (like more and better teachers) which may impact education/learning much more.

About television (and by extension, any "passive video media"):
>The book also seems to dismiss television as an educational tool because of its alleged non-interactivity. Yet McLuhan, who advocated the use of multimedia in education, had more to say regarding it than any other medium save print.
>The 500-channel universe offers a plethora of documentary and dramatic material students are unlikely to view on their own, but which they could be assigned to watch, assess, and deconstruct in an interactive classroom context as they are taught aspects of media literacy, visual communication, and textual analysis-at a minimal cost. In seeing television as inherently passive and an enemy of education, Logan overlooks a more indictable and truancy-producing culprit, which is, ironically, intimately tied to the computer-the video game.

(Logan's book [[What is Information?|What is Information? by Robert Logan]] is also applicable to information, knowledge, learning, and technology)
No man ever steps in the same river twice, for it's not the same river and he's not the same man.

Panta rhei == "all flows"

Heraclitus of Ephesus
The talented, sharp (and deadpan) author and journalist Robert Wright, in his excellent book [[Nonzero|http://www.nonzero.org/]], writes about what he calls the "evolutionary escalator" and "human luck".
He describes a "gene-meme co-evolution", where biological evolution and cultural evolution move each other forward via a positive feedback mechanism.
>This sort of co-evolution can become a self-feeding process: the brainier that animals get, the better they are at creating and absorbing valuable memes; and the more valuable memes there are floating around, the more Darwinian value there is in apprehending them, so the brainier animals get.
Wright calls this a ''co-evolutionary escalator'' and says:
>Once you're on this sort of escalator, powered by the positive feedback between the two evolutions, there's no obvious reason to stop. If you don't suffer some grave, species-wide misfortune—a meteor collision, say—you're probably headed for big brains and big-time culture.
And so,
>That isn't to say that our particular ancestors were destined for embarkation. Indeed, our lineage was just flat-out lucky to find itself in possession of the portfolio of key biological assets. But there's a difference between saying it took great luck for you to be the winner and saying it took great luck for there to be a winner. This is the distinction off which lotteries, casinos, and bingo parlors make their money. In the game of evolution, I submit, it was just a matter of time before one species or another raised its hand (or, at least, its grasping appendage) and said, "Bingo."...

Wright, in the introduction to the book (I love [[his introductions|Three Scientists and Their Gods - Robert Wright]] :), makes it clear ("YOU CALL THAT DESTINY?") what he means by "Destiny" in the title of the book:
>Any book with a subtitle as grandiose as "The Logic of Human Destiny" is bound to have some mealy-mouthed qualification somewhere along the way. We might as well get it over with.
>How literally do I mean the word "destiny"? Do I mean that the exact state of the world ten or fifty or one hundred years from now is inevitable, down to the last detail? No, on two counts.
>(1) I'm talking not about the world's exact, detailed state, but about its broad contours: the nature of its political and economic structures (Whither, for example, the nation-state?); the texture of individual experience (Whither freedom?); the scope of culture (Whither Mickey Mouse?); and so on.
>(2) I'm not talking about something that is literally inevitable. Still, I am talking about something whose chances of transpiring are very, very high. Moreover, I'm saying that the only real alternatives to the "destiny" that I'll outline are extremely unpleasant, best avoided for all our sakes.
So in //that// sense, where the human race is today and where it's going is our destiny (or, if you will, our "unavoidable (i.e., highly likely) luck").
US author and physician (1809 - 1894)
The Preface to Stephen Jay Gould's book [["The Hedgehog, the Fox, and the Magister’s Pox"|http://www.filosofia.unimi.it/zucchi/NuoviFile/Stephen%20Jay%20Gould-The%20Hedgehog,%20the%20Fox,%20and%20the%20Magister%27s%20Pox_%20Mending%20the%20Gap%20between%20Science%20and%20the%20Humanities%20%20-Belknap%20Press%20of%20Harvard%20University%20Press%20(2011).pdf]] is so captivating that the best approach is to just quote the entire thing (emphases and annotations are mine :)

As Gould warns us, we should not jump to the simple-minded conclusion that either the "way of the fox" or "the way of the hedgehog" is/should be associated with the Humanities and/or the Sciences. Fortunately, and delightfully, it's more nuanced than that :)

!!!Introducing the Protagonists [Fox and Hedgehog]

I  prefer the more euphonious [pleasing to the ear]  Russian beginning for fairy tales   
to  our  equivalent  “once  upon  a  time”—
zhili  byli (or,  literally,  “lived,  was” [or 'there was/were']).
Thus  I  begin  this  convoluted  tale  of  initial  discord  and  potential  concord:
“Zhili  byli the  fox  and  the  hedgehog.”

In  his  [[Historia  animalium of  1551|https://www.nlm.nih.gov/exhibition/historicalanatomies/gesner_home.html]],
Konrad Gesner, the great Swiss scholar of nearly everything, drew the initial
and “official” pictures of these creatures in the first great compendium of the
animal  kingdom  published  in  Gutenberg’s  era.  [[Gesner’s  fox|https://render.fineartamerica.com/images/rendered/default/greeting-card/images-medium-5/1560-red-fox-portrait-from-conrad-gesner-paul-d-stewartscience-photo-library.jpg]]  embodies  the
deceit and cunning traditionally associated with this important symbol of our
culture—poised on his haunches, ready for anything, front legs straight and
extended, hindquarters set to spring, ears cocked, and hair erect down the full
line of his back. Above all, his face grins enigmatically and throughout, from
the erect eyelashes to the long smirk, ending at the tapered nose with wide-
spread  whiskers—all  seeming  to  say,  “Watch  me  now,  and  then  tell  me  if
you’ve ever seen anything even half so clever.”

[[The hedgehog|https://www.sciencesource.com/archive/Gessner--Hedgehog--16th-Century-SS2578440.html]], by contrast, is long and low, all exposed and nothing hid-
den. Spines cover the entire upper surface of his body; and his small feet neatly
fit under this protective mat above. The face, to me, seems simply placid:
neither dumb nor disengaged but rather serenely confident in a quiet, yet fully
engaged manner.

I suspect that Gesner drew these two animals to emphasize these feelings
and associations in a direct and purposeful way. For the Historia animalium
of 1551 is not a scientific encyclopedia in the modern sense of presenting factual 
information about natural objects, but rather a Renaissance compendium
for  everything  ever  said  or  reported  by  human  observers  or  moralists  about
animals and their meanings, with emphasis on the classical authors of Greece
and Rome (seen by the Renaissance as the embodiment of obtainable wisdom
in its highest form), and with factual truth and falsity as, at best, a minor criterion 
for emphasis. Each entry includes empirical information, fables, human
uses, and stories and lists of proverbs featuring the creature in question.

The  fox  and  the  hedgehog  not  only  embodied  their  separate  and  well-
known symbols of cunning versus persistence. They had also, ever since the
seventh century B.C., been explicitly linked in one of the most widely known
proverbs about animals, an enigmatic saying that achieved renewed life in the
twentieth century. Gesner clearly drew his fox and hedgehog in their roles as
protagonists in this great and somewhat mysterious motto.

In Gesner’s time, and ever since for that matter, any scholar in search of a
proverb would turn immediately to the standard source, the Bartlett’s beyond
compare  for  this  form  of  quotation:  [[the Adagia|https://books.google.com/books?id=kH9XAAAAMAAJ&printsec=frontcover&source=gbs_ge_summary_r&cad=0#v=onepage&q&f=false]] (adages,  or  proverbs)  com-
piled,  and  first  published  in  1500,  by  the  greatest  intellectual  of  the
Renaissance, [[Erasmus of Rotterdam|https://www.biography.com/people/erasmus-21291705]] (1466–1536). Gesner, of course, directly
used  and  credited  Erasmus’s  exhaustive  discussion  of  the  linking  proverb  in
both his articles, ''De Vulpe'' (on the fox) and ''De Echino'' (on the hedgehog) of
his 1551 founding treatise.

This  somewhat  mysterious  proverb  derives  from  a  shadowy  source,
[[Archilochus|https://en.wikipedia.org/wiki/Archilochus]], the seventh-century B.C. Greek soldier-poet sometimes 
considered the greatest lyricist after Homer, but known only from fragments and
secondary  [[quotations|https://en.wikiquote.org/wiki/Archilochus]],  and  not  from  any  extensive  writings  or  biographical
data. Erasmus cites, in his universalized Latin, the Archilochian contrast of
fox  and  hedgehog:  
''Multa  novit  vulpes,  verum  echinus  unum  magnum''
(or, roughly, “The fox devises many strategies; the hedgehog knows one great and
effective strategy”).

I use this well-trodden, if enigmatic, image in two important ways (and
in the book’s title as well) to exemplify my concept of the proper relationship
between  the  sciences  and  humanities.  I  could  not  agree  more  with  the  vital
sentiment expressed by my colleague [[E. O. Wilson|https://eowilsonfoundation.org/e-o-wilson/]] (although Part III of this
book  will  also  explain  my  reasons  for  rejecting  his  favored  path  toward  our
common  goal):  “The  greatest  enterprise  of  the  mind  has  always  been  and
always will be the attempted linkage of the sciences and the humanities” (from
his [[book Consilience|https://en.wikipedia.org/wiki/Consilience_(book)]], Knopf, 1998, page 8).

I use Archilochus’s old image, and
Erasmus’s  extensive  exegesis [critical explanation or interpretation of a text],  
to  underscore  my  own  recommendations  for  a
fruitful union of these two great ways of knowing. But my comparison will
not be based on the most straightforward or simpleminded comparison. That
is, I emphatically do not claim that one of the two great ways (either science
or the humanities) works like the fox, and the other like the hedgehog.

Of my two actual usages, the first is, I confess, entirely idiosyncratic, fully
concrete, and almost as enigmatic as the proverb itself. That is, I shall refer, in
a  crucial  argument,  to  the  specific  citation  of  Erasmus’s  explication  of
Archilochus’s  motto  as  preserved  in  one  particular  copy  of  Gesner’s  1551
book. Moreover, although I regale you with foxes and hedgehogs in this 
introduction, this first usage will now disappear completely from the text until the
very last pages, when I cite (and picture) this passage to make a closing 
general  point  with  specific  empirical  oomph.  As  to  the  equally  mysterious
Magister who shares titular space with the fox and hedgehog, he will make a
short intermediary appearance (in chapter 4) and then also withdraw until his
meeting with the two animals on the closing pages.

But my second usage pervades the book, although I try to keep explicit
reminders  to  a  bearable  minimum  (an  effort  demanding  great  forbearance,
and  courting  probable  failure  in  any  case,  from  such  a  didactic  character  as
yours truly). This second employment also sticks closely to the metaphorical
meanings  that  have  been  grafted  upon  Archilochus’s  image  throughout  
history, especially since Erasmus’s scholarly exegesis. This usage became central
to twentieth-century literary commentary when Isaiah Berlin—my personal
intellectual hero, and a wonderful man who befriended me when I was a shy,
beginning,  absolute  nobody—invoked  the  pairing  of  fox  and  hedgehog  to
contrast the styles and attitudes of several famous Russian writers. Ever since
then, scholars have played a common game in designating their favorite (or
anathematized) literati either as hedgehogs for their tenacity in sticking to one
style or advocating one key idea, or as foxes for their ability to move again and
again,  like  Picasso,  from  one  excellence  to  an  entirely  different  mode  and
meaning of expression. The game maintains sharp edges because these 
attributions have been made both descriptively and proscriptively, and people of
goodwill (and bad will too, for that matter) can argue forever about either and
both. (I must also confess that I named one of my books of essays 
An Urchin in the Storm, to designate my own stubborn invocation of Darwinian 
evolution  as  a  subject  to  fit  nearly  any  context  or  controversy.  Hedgehogs,  to
Englishmen, are urchins.)

Erasmus (and I am quoting from my 1599 edition of his [[Adagia|https://ia902701.us.archive.org/27/items/proverbschieflyt00blaniala/proverbschieflyt00blaniala.pdf]]) begins with the
usual and obvious reasons for Archilochus’s famous contrast. When
pursued by hunters, the fox figures out a new and sneaky way to escape each
time: 
''Nam  vulpes  multijugis  dolis  se  tuetur  adversus  venatores''
(for  the  fox defends itself against the hunters by using many different guiles). 
The hedgehog, on the other hand, tries to keep out of harm’s way, but will use its one
great trick if overtaken by the hunters’ dogs: the animal rolls up into a ball,
with its small head and feet, and its soft underbelly, tucked up neatly and 
completely within the enclosing surface of spines. The dogs can do what they wish:
poke  the  animal,  roll  it  about,  or  even  try  to  bite,  but  all  to  no  avail  (or  to
painful  injury);  for  the  dogs  cannot  capture  such  a  passive  and  prickly  ball,
and must ultimately leave the animal alone, eventually (when the danger has
passed) to unroll and calmly walk away. Erasmus writes: 
''Echinus unica duntaxat arte tutus est adversus canum morsus, 
siquidem spinis suis semet involuit in
pilae  speciem,  ut  nulla  ex  parte  morsu,  prendi  queat.''
(The  hedgehog  only  has one technique to keep itself safe against the dogs’ bite, since it rolls itself up,
spines outward, into a kind of ball, so that it cannot be captured by biting.)

Later on in this exegesis, Erasmus even adds an old tale of intensification,
delicately mentioning only the outline of the story, and referring his readers
to the original sources if they wish to know more. If this one great trick seems
to be failing, the hedgehog often ups the same basic ante by letting fly a stream
of  urine,  covering  the  spines,  and  weakening  them  to  the  point  of  excision.
But  how  can  this  dramatic  form  of  self-imposed  haircut  help  the  creature?
Erasmus  goes  no  further,  but  when  we  turn  to  Pliny  and  Aelianus  (the  two
classical sources cited by Erasmus), we learn what a tough and determined
little bastard this apparently timid creature can be. The ultimate urine trick, we
are  told,  can  work  in  three  possible  ways.  First,  with  the  spines  excised,  the
animal can often slither away unnoticed. Second, the urine smells so bad that
the dogs or human hunters may simply lose interest and beat a quick retreat.
Third, if all else fails, and the hunters take him anyway, at least the hedgehog
can enjoy his last laugh in death, for his haircut has rendered him useless to
his captors (who, in a fourth potential utility, might also abandon him in
frustration by recognizing this outcome in advance)—for the main attraction of
the  hedgehog  to  humans  lies  in  the  value  of  his  hide,  but  only  with  spines
intact, as a natural brush.

The power and attraction of Archilochus’s image lies, rather obviously, in
its two levels of metaphorical meaning for human contrasts. The first speaks
of psychological styles, often applied for quite practical goals. Scramble or 
persist. Foxes owe their survival to easy flexibility and skill in reinvention, to an
uncanny knack for recognizing (early on, while the getting remains good) that
a  chosen  path  will  not  bear  fruit,  and  that  either  a  different  route  must  be
quickly  found,  or  a  new  game  entered  altogether.  Hedgehogs,  on  the  other
hand, survive by knowing exactly what they want, and by staying the chosen
course  with  unswerving  persistence,  through  all  calumny  and  trouble,  until
the  less  committed  opponents  eventually  drop  away,  leaving  the  only  
righteous path unencumbered for a walk to victory.

The  second,  of  course,  speaks  to  favored  styles  of  intellectual  practice.
Diversify and color, or intensify and cover. Foxes (the great ones, not the 
shallow or showy grazers) owe their reputation to a light (but truly enlightening)
spread of real genius across many fields of study, applying their varied skills to
introduce a key and novel fruit for other scholars to gather and improve in a
particular  orchard,  and  then  moving  on  to  sow  some  new  seeds  in  a  
thoroughly  different  kind  of  field.  Hedgehogs  (the  great  ones,  not  the  pedants)
locate one vitally important mine, where their particular and truly special gifts
cannot be matched. They then stay at the site all their lives, digging deeper
(because  no  one  else  can)  into  richer  and  richer  stores  from  a  mother  lode
whose full generosity has never before been so well recognized or exploited.
I use the fox and hedgehog as my model for how the sciences and 
humanities should interact because I believe that neither pure strategy can work, but
that a fruitful union of these seemingly polar opposites can, with goodwill and
significant self-restraint on both sides, be conjoined into a diverse but 
common enterprise of unity and power. The way of the hedgehog cannot suffice
because  the  sciences  and  humanities,  by  the  basic  logics  of  their  disparate
enterprises,  do  different  things,  each  equally  essential  to  human  wholeness.

We need this wholeness above all, but cannot achieve the goal by shearing off
the  legitimate  differences  (I  shall  critique  Wilson’s  notion  of  consilience  on
this basis) that make our lives so varied, so irreducibly, and so fascinatingly,
complex.  But  if  we  lose  sight  of  the  one  overarching  goal—the  hedgehog’s
insight—underneath  the  legitimately  different  concerns  and  approaches  of
these two great ways, then we are truly defeated, and the dogs of war will 
disembowel our underbellies and win.

But the way of the fox cannot prevail either, because too great a 
flexibility may lead to survival of no enduring value—mere persistence with no moral
or intellectual core intact. What triumph can an ultimate chameleon claim if
he gains not even the world, but only his basic continuity, at the price of his
soul?  Fortunately,  and  in  the  most  parochial  American  sense,  we  know  a
model of long persistence and proven utility for the virtues in fruitful union
of apparent opposites. This model has sustained us through the worst fires of
challenge (both voluntary self-immolation from 1861 to 1865, and attempted
external prevention at several times, beginning with the first battles of 1775).

We have even embodied this ideal in our national motto, 
''e pluribus unum'', “one from many.” 
If the different skills and wondrous flexibilities of the fox
can be combined with the clear vision and stubbornly singleminded goal of
the hedgehog, then a star-spangled banner can protect a great expanse of 
maximal  diversity  because  all  the  fox’s  skills  now  finally  congeal  to  realize  the
hedgehog’s great vision. Never before in human history has the experiment of
democracy been tried across such a vast range of geographies, climates, 
ecologies, economies, languages, ethnicities, and capabilities. Lord knows we have
suffered  our  troubles,  and  imposed  horrendous  and  enduring  persecutions
upon sectors of the enterprise, thus sullying the great goal in the most 
shameful way imaginable. Yet, on balance, and by comparison to all other efforts of
similar  scale  in  human  history,  the  experiment  has  worked,  and  has  been
showing substantial improvement in the course and memories of my lifetime
at least.

I  offer  the  same  basic  prescription  for  peace,  and  mutual  growth  in
strength,  of  the  sciences  and  humanities.  These  two  great  endeavors  of  our
soul and intellect work in different ways and cannot be morphed into one
simple coherence, so the fox must have his day. But the two enterprises can lead
us  onward  together,  ineluctably  yoked  if  we  wish  to  maintain  any  hope  for
arrival at all, toward the common goal of human wisdom, achieved through
the  union  of  natural  knowledge  and  creative  art,  two  different  but  
nonconflicting truths that, on this planet at least, only human beings can forge and
nurture.
But  I  learned  one  other  important  lesson  from  reading  Erasmus’s  
commentary,  and  by  considering  the  deeper  meaning  of  Gesner’s  pictures.

Erasmus does, following the literal lead of Archilochus’s minimality, depict the
styles of the fox and hedgehog as simply different, with each strategy effective
in  its  own  way,  and  expressing  one  end  of  a  full  continuum.  But  Erasmus
clearly favors the hedgehog in one crucial sense: foxes generally do very well
indeed, but when the chips go down in extremis, look inside yourself, and
follow the singular way that emerges from the heart and soul of your ineluctable
being  and  construction,  whatever  the  natural  limits—for  nothing  beats  an
unswerving moral compass in moments of greatest peril.
Erasmus, after praising the many wiles of the fox (as quoted above), then adds 
''et tamen haud raro capitur''—
“yet, nonetheless, it is captured not rarely.”

The  hedgehog,  on  the  other  hand,  almost  always  emerges  unscathed,  a  bit
stressed  and  put-upon,  perhaps,  but  ultimately  safe  nonetheless.  And  thus
intellectuals of all stripes and tendencies must maintain this central integrity
of  no  compromise  to  fashion  or  (far  worse)  to  the  blandishments  of  evil  in
temporary power. We have always been, and will always be, a minority. But if
we roll with the punches, maintain the guts of our inner integrity, and keep
our prickles high, we can’t lose—for the pen, abetted by some modern modes
of dispersal, really is mightier.

Finally,  I  don’t  mean  to  despise  or  dishonor  the  fox,  and  neither  does
Erasmus, despite his clear zinger, quoted just above, against this ultimate 
symbol of wiliness. For Erasmus ends his long and scholarly commentary with two
stories  about  dialogues  between  the  fox  and  another  brother  carnivore.  The
first tale of the fox and cat simply extends Erasmus’s earlier point about the
hedgehog’s  edge  in  episodes  of  greatest  pith  and  moment.  The  two  animals
meet  and  begin  to  argue  about  better  ways  to  elude  packs  of  hunting  dogs.

The fox brags about his enormous bag of tricks, while the cat describes his 
single effective way. Then, right in the midst of this abstract discussion, the two
creatures  must  face  an  unexpected  and  ultimately  practical  test:  “Suddenly,
amidst the dispute, they hear the voices of the dog pack. The cat immediately
leaps up into the highest tree, but the fox, meanwhile, is surrounded and 
captured by the crowd of dogs.” 
''Praestabilius esse nonnunquam unicum habere consilium''
(perhaps it is better to have one way of wisdom), Erasmus adds, 
''id sit verum et efficax''
(provided that it be true and effective).

But the second tale of the fox and panther saves our maligned character
and shows the inner beauty of his flexibility, as illustrated by his avoidance of
mere gaudy show for true dexterity of mind. Erasmus writes:

''Cum  aliquando  pardus  vulpem  pre  se  contemneret,  quod  ipse
pellem  haberet  omnigenus  colorum  maculis  variegatem,  respondit
vulpes, sibi decoris in animo esse, quod ille esset in cute.''
“When the panther disparages the fox by comparison to himself, because his
[the panther’s] skin is so beautifully variegated with so many colored spots of
all kinds, the fox responds that it is better to be so decorated in the mind than
upon the skin.”

And so I say to the sciences (where I reside with such lifelong pride and
satisfaction)  and  to  the  humanities  (whose  enduring  technique  of  exegesis
from printed classical sources I try, in my own conceit, to utilize as the 
primary mode of analysis in this book): what a power we could forge together if
we could all pledge to honor both of our truly different and equally necessary
ways, and then join them in full respect, in the service of a common goal as
expressed in old Plato’s definition of art as intelligent human modification and
wondrous  ornamentation,  based  on  true  veneration  of  nature’s  reality.  For
then, as the Persian poet said:
:: Oh wilderness were Paradise enow.

Then  wilderness  (nature’s  unvarnished  tangle  of  wonders)  would  become  a
paradise (literally, a cultivated garden of human delight).

The goal could not be greater or more noble, but the tensions are old and
deep, however falsely construed from the start, and stirred up by small minds
ever since. Thus the union of the fox and hedgehog can certainly be 
accomplished, and would surely yield, as progeny, a many-splendored thing called
love and learning, creativity and knowledge. But we had best proceed, in this
hybridization, by the resolution of a bad old joke about an animal not closely
related to the hedgehog, but functionally equivalent in the primary manner
of this discussion. How, using more decorous language than the joke enjoins,
can two porcupines copulate? The answer, of course, is “carefully.”
Poe's [["The Purloined Letter"|http://etc.usf.edu/lit2go/147/the-works-of-edgar-allan-poe/5357/the-purloined-letter/]] has an epigraph by Seneca
>Nil sapientiae odiosius acumine nimio.
[There is nothing wisdom hates more than cleverness]
which is not just true (in many cases) but also wise :)

Seneca's line actually reminded me of a witty question in Hebrew, which I think is in the same spirit:
Q: What is the difference between a wise person and a clever one?
A: A clever [smart] person can figure out how to get out of a tough situation that a wise person would never even have gotten into.

It's a short "detective story", where in it the "wise" Dupin is more successful in solving a mystery than the "clever" police Prefect Monsieur G.
As Dupin explains to the author how he solved the case, he berates mathematicians and mathematics by saying, for example:
* The mathematics are the science of form and quantity; mathematical reasoning is merely logic applied to observation upon form and quantity.
** partly true. Mathematics is much more than algebra and logic, and it does not limit itself to "observation upon form and quantity".
* The great error lies in supposing that even the truths of what is called pure algebra, are abstract or general truths. And this error is so egregious that I am confounded at the universality with which it has been received.
** only vain and technocratic (mathocratic?) mathematicians would claim that algebra reflects general truths about the world.
* What is true of relation—of form and quantity—is often grossly false in regard to morals, for example. In this latter science it is very usually untrue that the aggregated parts are equal to the whole. In chemistry also the axiom fails. In the consideration of motive it fails; for two motives, each of a given value, have not, necessarily, a value when united, equal to the sum of their values apart.
** mathematicians in the different branches of mathematics are well aware of the limits of applying math to realistic situations. And math is making constant advances in helping us gain clarity, or at least insight, into real-world situations and conditions (e.g., statistics, probability, calculus).
* But the mathematician argues, from his finite truths, through habit, as if they were of an absolutely general applicability—as the world indeed imagines them to be. Bryant, in his very learned ‘Mythology,’ mentions an analogous source of error, when he says that ‘although the Pagan fables are not believed, yet we forget ourselves continually, and make inferences from them as existing realities.’ With the algebraists, however, who are Pagans themselves, the ‘Pagan fables’ are believed, and the inferences are made, not so much through lapse of memory, as through an unaccountable addling of the brains.
** again, certain mathematicians may believe certain things and behave in certain ways, but this doesn't necessarily represent what math is truly about.

Is Poe making the same mistake he is accusing the mathematicians of making, namely "non distributio medii" [a logical fallacy committed when the middle term of a categorical syllogism isn't distributed in at least one premise]?
In the story, Dupin accuses the Prefect of committing exactly this fallacy:
>his [the Prefect's] defeat lies in the supposition that the Minister [the criminal] is a fool, because he has acquired renown as a poet. All fools are poets; this the Prefect feels; and he is merely guilty of a //non distributio medii// in thence inferring that all poets are fools.

I'm not familiar enough with Poe to say if he (consistently? often?) has (deep? some?) criticism of math/logic/mathematicians, or if it's just in this story, as reflected in Dupin's position.

It is true that some "great minds" in physics and math (for example, Eugene Wigner, Richard Hamming, and Frank Wilczek) pondered "[[The Unreasonable Effectiveness of Mathematics]]" (and logic) in the real world, but I think that Poe may have taken too simple a view of it (narrowly focusing on the limitations/inapplicability of algebra, axioms, and basic deductive logic).

But Poe (1809-1849) definitely hit on something that C. P. Snow (in 1959) called [["The Two Cultures"|https://en.wikipedia.org/wiki/The_Two_Cultures]], and that others (e.g., Stephen Jay Gould in [["The Hedgehog, the Fox, and the Magister's Pox"|https://en.wikipedia.org/wiki/The_Hedgehog,_the_Fox,_and_the_Magister%27s_Pox]]; see [[the preface|On "The Hedgehog, the Fox, and the Magister’s Pox" by Stephen Jay Gould]]) pointed out as a societal/cultural problem, too.

In Poe's story, it seems he values "psychological" insight and knowledge/wisdom (which, as you can imagine, I don't have a problem with :), but his Dupin definitely uses logic and deduction combined with psychology.

Poe's Dupin criticizes mathematicians as "technocratic" and "arrogant". C. P. Snow calls the "humanists" (as opposed to the "scientists") "natural Luddites". I'm sure there is a lack of understanding, and probably a lack of respect, on both sides ("cultures"). But some "luminaries" (e.g., Albert Einstein, Richard Feynman, and Carl Sagan) pointed out that, if anything, honest scientists and true science actually make us humble and inspire us with awe. And others (e.g., Paula Marantz Cohen) articulated [[why the "humanities" are essential and vital to our species|On the importance of being educated in the Humanities (and the Sciences :)]].

And this, I think, is the real "trick" in becoming wise (and not just clever): we need to learn how to use __all__ the tools available to us in the "human toolbox" and be aware of, and cautious about, their strengths, weaknesses, and inherent fallibility!

Nick Bostrom (professor, Faculty of Philosophy, Oxford University) covers the [[anthropic bias|http://www.anthropic-principle.com/?q=book/table_of_contents]] (see also [[the anthropic principle|http://en.wikipedia.org/wiki/Anthropic_principle]] and [[The "astonishing skills" of a coin flipper]]) on [[his website|http://www.anthropic-principle.com/]], starting with a clear and concise "hook":
>It appears that there is a set of fundamental physical constants that are such that had they been very slightly different, the universe would have been void of intelligent life. It's as if we're balancing on a knife’s edge. Some philosophers and physicists take the 'fine-tuning' of these constants to be an explanandum that cries out for an explanans, but is this the right way to think?
>
>The data we collect about the Universe is filtered not only by our instruments' limitations, but also by the precondition that somebody be there to “have” the data yielded by the instruments (and to build the instruments in the first place). This precondition causes observation selection effects - biases in our data that may call into question how we interpret evidence that the Universe is fine-tuned at all.

[[Janna Levin|http://jannalevin.com/bio-and-contact/]] (astrophysicist and theoretical cosmologist, Columbia University), in an [[interview with Krista Tippett|https://www.brainpickings.org/2015/01/09/krista-tippett-einsteins-god-janna-levin/]] observes the following (from a scientist's and atheist's perspective):
>Our convincing feeling is that time is absolute. Our convincing feeling is that there should be no limit to how fast you can travel. Our convincing feelings are based on our experiences because of the size that we are, literally, the speed at which we move, the fact that we evolved on a planet under a particular star. So our eyes, for instance, are at peak in their perception of yellow, which is the wave band the sun peaks at. It’s not an accident that our perceptions and our physical environment are connected. We’re limited, also, by that. That makes our intuitions excellent for ordinary things, for ordinary life. That’s how our brains evolved and our perceptions evolved, to respond to things like the Sun and the Earth and these scales. And if we were quantum particles, we would think quantum mechanics were totally intuitive. Things fluctuating in and out of existence, or not being certain of whether they’re particles or waves — these kinds of strange things that come out of quantum theory — would seem absolutely natural…
>
>Our intuitions are based on our minds, our minds are based on our neural structures, our neural structures evolved on a planet, under a sun, with very specific conditions. We reflect the physical world that we evolved from. It’s not a miracle.

In the Tippett interview, Levin also makes the connection between math and spirituality/religion:
>If I were to ever lean towards spiritual thinking or religious thinking, it would be in that way. It would be, why is it that there is this abstract mathematics that guides the universe? The universe is remarkable because we can understand it. That’s what’s remarkable. All the other things are remarkable, too. It’s really, really astounding that these little creatures on this little planet that seem totally insignificant in the middle of nowhere can look back over the fourteen-billion-year history of the universe and understand so much and in such a short time.
>
>So that is where I would get a sense, again, of meaning and of purpose and of beauty and of being integrated with the universe so that it doesn’t feel hopeless and meaningless. Now, I don’t personally invoke a God to do that, but I can’t say that mathematics would disprove the existence of God either. It’s just one of those things where over and over again, you come to that point where some people will make that leap and say, “I believe that God initiated this and then stepped away, and the rest was this beautiful mathematical unfolding.” And others will say, “Well, as far back as it goes, there seem to be these mathematical structures. And I don’t feel the need to conjure up any other entity.” And I fall into that camp, and without feeling despair or dissatisfaction.

This seems to be related to the question of "is mathematics a human invention or a discovery?". Here is [[Richard Hamming's explanation of the unreasonable effectiveness of mathematics|On why Math works for us]], which, I think, boils down to ''we see what we look for'':
>we approach the situations with an intellectual apparatus so that we can only find what we do in many cases. It is both that simple, and that awful. What we were taught about the basis of science being experiments in the real world is only partially true. Eddington went further than this; he claimed that a sufficiently wise mind could deduce all of physics. I am only suggesting that a surprising amount can be so deduced. Eddington gave a lovely parable to illustrate this point. He said, "Some men went fishing in the sea with a net, and upon examining what they caught they concluded that there was a minimum size to the fish in the sea."
(see [[Simpson's paradox]]).

The last parable is [[referenced by Bostrom|http://www.anthropic-principle.com/?q=book/chapter_1#1a]], too:
> How big is the smallest fish in the pond? You catch one hundred fishes, all of which are greater than six inches. Does this evidence support the hypothesis that no fish in the pond is much less than six inches long? Not if your net can’t catch smaller fish.
>
>Knowledge about limitations of your data collection process affects what inferences you can draw from the data. In the case of the fish-size-estimation problem, a selection effect—the net’s sampling only the big fish—vitiates any attempt to extrapolate from the catch to the population remaining in the water. Had your net instead sampled randomly from all the fish, then finding a hundred fishes all greater than a foot would have been good evidence that few if any of the fish remaining are much smaller. 
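
To make the net's "selection effect" tangible, here is a tiny Monte Carlo sketch of my own (the 6-inch mesh and the exponential size distribution are arbitrary assumptions, picked purely for illustration):
{{{
import random

random.seed(42)

# Hypothetical pond: fish sizes (in inches) drawn from an arbitrary
# distribution in which small fish are in fact very common.
pond = [random.expovariate(1 / 8.0) for _ in range(10_000)]

# The net's selection effect: it simply cannot hold anything under 6 inches.
catch = [size for size in pond if size >= 6.0][:100]

print(f"smallest fish in the pond:  {min(pond):.2f} in")   # tiny
print(f"smallest fish in the catch: {min(catch):.2f} in")  # >= 6 by construction
}}}
The catch alone says nothing about fish below the mesh size; only knowledge of the net (the data-collection process) tells you which inferences are off-limits.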
In his delightful math book //Chases and Escapes// (see [[the introduction|http://press.princeton.edu/chapters/i9700.pdf]]), Paul J. Nahin starts by saying that
> not all pursuit problems have complicated answers.
and he quotes [[John R. Isbell|https://en.wikipedia.org/wiki/John_R._Isbell]] giving an example:
> If E is an evader with speed s1 and P is a pursuer with speed s2 > s1, then "Of course P can catch E [no matter what E does], at least by going to his [E's] initial position and [simply] following his path."
Now, ain't //that// iron-clad logic ... :)

In chapter 1, Nahin analyzes the case of a pirate ship chasing a merchant ship: the merchant ship moves in a straight line, and the pirate ship approaches it at a higher speed.

It turns out that in this situation there is an [[Apollonius Circle|https://en.wikipedia.org/wiki/Circles_of_Apollonius]] which shows the exact intercept/take-over location 
(Apollonius Circle on the left, start chase and end chase simulations on the right: Green = Merchant ship, Red = Pirate Ship, Blue = Apollonius Circle):

|borderless|k
|[img[pursuit calculation|./resources/pursuit calc 1.png][./resources/pursuit calc.png]]|Chase start:[img[pursuit start|./resources/pursuit start 1.png][./resources/pursuit start.png]]|Chase end (interception):[img[pursuit end|./resources/pursuit end 1.png][./resources/pursuit end.png]]|
|borderless|k
Graphing of Apollonius Circle ([[Trinket|https://trinket.io/library/trinkets/4d0cbcc594]]), Simulation of Pursuit ([[Trinket|https://trinket.io/library/trinkets/7f159598a0]])
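
For the curious, the geometry fits in a few lines of Python. This is my own minimal sketch (not the Trinket code above): with a speed ratio k = s2/s1 > 1, the set of points that both ships can reach at the same moment satisfies |P - Pirate| = k * |P - Merchant|, and that locus is the Apollonius circle:
{{{
import math

def apollonius_circle(pursuer, evader, k):
    """Locus of points P with |P - pursuer| = k * |P - evader|, k = s2/s1 > 1.

    Every point on this circle is reached by both ships at the same moment,
    so it contains all possible interception points. Returns (center, radius).
    """
    ax, ay = pursuer
    bx, by = evader
    k2 = k * k
    center = ((k2 * bx - ax) / (k2 - 1), (k2 * by - ay) / (k2 - 1))
    radius = k * math.dist(pursuer, evader) / (k2 - 1)
    return center, radius

# Example: pirate at the origin, merchant 10 miles east, pirate twice as fast.
print(apollonius_circle((0, 0), (10, 0), 2.0))
# -> ((13.33..., 0.0), 6.66...): the merchant's straight-line course is
#    intercepted where it first crosses this circle.
}}}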
!!!One Dimensional (or Elementary) Cellular Automata and Creativity
An [[article|http://arxiv.org/pdf/1305.2537.pdf]] tries to link One Dimensional (or Elementary) ''Cellular Automata and Creativity''; it arrives at the following classification (a small code sketch of how the rule numbers work follows the table).

From the abstract:
>We map cell-state transition rules of elementary cellular automata (ECA) onto the cognitive control versus schizotypy spectrum phase space and interpret cellular automaton behaviour in terms of creativity. To implement the mapping we draw analogies between a degree of schizotypy and generative diversity of ECA rules, and between cognitive control and robustness of ECA rules (expressed via Derrida coefficient). We found that null and fixed point ECA rules lie in the autistic domain and chaotic rules are 'schizophrenic'. There are no highly articulated 'creative' ECA rules. Rules closest to 'creativity' domains are two-cycle rules exhibiting wave-like patterns in the space-time evolution.

|>|!Four classes of CA creativity|
|bgcolor(lightblue): Class |bgcolor(lightblue): Rules^^1^^ |
|Creative| 3, 5, 11, 13, 15, 35 |
|Schizophrenic| 9, 18, 22, 25, 26, 28, 30, 37, 41, 43, 45, 54, 57, 60, 62, 73, 77, 78, 90, 94, 105, 110, 122, 126, 146, 150, 154, 156 |
|Autistic savants| 1, 2, 4, 7, 8, 10, 12, 14, 19, 32, 34, 42, 50, 51, 76, 128, 136, 138, 140, 160, 162, 168, 170, 200, 204 |
|Severely autistic| 23, 24, 27, 29, 33, 36, 40, 44, 46, 56, 58, 72, 74, 104, 106, 108, 130, 132, 142, 152, 164, 172, 178, 184, 232 |
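
To make the rule numbers in the table concrete, here is a minimal sketch of my own showing how a Wolfram rule number encodes an update rule (rule 30, one of the "schizophrenic" chaotic rules, is picked arbitrarily):
{{{
def eca_step(cells, rule):
    """One synchronous update of an elementary cellular automaton.

    cells: list of 0/1 states (the ends wrap around).
    rule:  integer 0..255; bit b of the rule gives the next state of a cell
           whose 3-cell neighborhood (left, self, right) spells b in binary.
    """
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Rule 30 evolving from a single live cell:
row = [0] * 31
row[15] = 1
for _ in range(15):
    print("".join(" #"[c] for c in row))
    row = eca_step(row, 30)
}}}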


!!!Cognitive Cellular Automata
An [[article|resources/Mandik - Cognitive Cellular Automata.pdf]] covering questions of ''Cognitive Cellular Automata''.

From the abstract:
>In this paper I explore the question of how artificial life might be used to get a handle on philosophical issues concerning the mind-body problem. I focus on questions concerning what the physical precursors were to the earliest evolved versions of intelligent life. I discuss how cellular automata might constitute an experimental platform for the exploration of such issues, since cellular automata offer a unified framework for the modeling of physical, biological, and psychological processes. I discuss what it would take to implement in a cellular automaton the evolutionary emergence of cognition from non-cognitive artificial organisms. I review work on the artificial evolution of minimally cognitive organisms and discuss how such projects might be translated into cellular automata simulations.


!!!Cellular Automata and emergence, free will, and computation
Another [[article|http://plato.stanford.edu/entries/cellular-automata/]] starts with some CA basics and discusses interesting philosophical topics like ''emergence, free will, and computation''.

From the contents:
* an introductory section on CA: a brief survey and explanation via an example
* a section on the general theory of CA, together with a selection of computational and complexity-theoretic results in the field
* a section describing some uses of CA as tools for philosophical investigation, surveying ways in which they can raise new philosophical questions

------
^^1^^ Using [[Wolfram's classification of Elementary One Dimensional Cellular Automata (1D CA, ECA)|http://mathworld.wolfram.com/ElementaryCellularAutomaton.html]]
A [[short and clear article|https://www.fs.blog/2012/07/what-is-deliberate-practice/]] on the [[Farnam Street Blog|https://www.fs.blog/]] summarizes a key element in effective and meaningful learning and performance: deliberate practice.

The end goal is to get to [[the highest level of competence|The Four Stages of Competence]].

It defines Deliberate Practice as an activity which is
> designed specifically to improve performance, often with a teacher's help; it can be repeated a lot; feedback on results is continuously available; it's highly demanding mentally, whether the activity is purely intellectual, such as chess or business-related activities, or heavily physical, such as sports; and it isn't much fun.
and it breaks it up into its components:
* ''Designed to Improve Performance'' - The word designed is key. While enjoyable, practice lacking design is play and doesn’t offer improvement.
** In theory, with the right motivations and some expertise, you can design a practice yourself.
** Teachers, or coaches, see what you miss and make you aware of where you're falling short.
** With or without a teacher, great performers deconstruct elements of what they do into chunks they can practice. They get better at that aspect and move on to the next.
* ''Repeated (a lot)'' - Repetition inside the comfort zone does not equal practice. Deliberate practice requires operating in the learning zone and repeating the activity a lot, with feedback.
** Most of the time we’re practicing we’re really doing activities in our __comfort zone__. This doesn’t help us improve because we can already do these activities easily. 
** On the other hand, operating in the __panic zone__ leaves us paralyzed as the activities are too difficult and we don't know where to start. 
** The only way to make progress is to operate in the __learning zone__: those activities that are just out of reach.
* ''Feedback on results is continuously available'' - Feedback gets a little tricky when you must subjectively interpret the results. While you don't need a coach, this is an area where one can add value.
* ''Mentally Demanding'' - Doing things we know how to do is fun and does not require a lot of effort. Deliberate practice, however, is not fun. Breaking down a task you wish to master into its constituent parts and then working on those areas systematically requires a lot of effort.
** "The work is so great that it seems no one can sustain it for very long. A finding that is remarkably consistent across disciplines is that four or five hours a day seems to be the upper limit of deliberate practice, and this is frequently accomplished in sessions lasting no more than an hour to ninety minutes."

The article concludes with an interesting observation by an introvert:
In her book, //Quiet: The Power of Introverts in a World That Can't Stop Talking//, [[Susan Cain|https://www.quietrev.com/author/susan-cain/]] writes:
>Deliberate Practice is best conducted alone for several reasons. It takes intense concentration, and other people can be distracting. It requires deep motivation, often self-generated. But most important, it involves working on the task that’s most challenging to you personally. Only when you’re alone, Ericsson told me, can you “go directly to the part that’s challenging to you. If you want to improve what you’re doing, you have to be the one who generates the move. Imagine a group class—you’re the one generating the move only a small percentage of the time.”
In his book //On Intelligence// (published in 2004), Jeff Hawkins defines intelligence differently from the traditional/classic behavioral definition formulated by Alan Turing (the [[Turing Test|https://plato.stanford.edu/entries/turing-test/]]).

As Hawkins writes:
>A human doesn't need to "do" anything to understand a story. I can read a story quietly, and although I have no overt behavior my understanding and comprehension are clear, at least to me. You, on the other hand, cannot tell from my quiet behavior whether I understand the story or not, or even if I know the language the story is written in. You might later ask me questions to see if I did, but my understanding occurred when I read the story, not just when I answer your questions.
>A thesis of this book is that understanding cannot be measured by external behavior; as we'll see in the coming chapters, it is instead an internal metric of how the brain remembers things and uses its memories to make predictions.

<<forEachTiddler 
where 
'tiddler.tags.contains("book-chapter") && tiddler.tags.contains("On Intelligence")'
sortBy 
'tiddler.title'>>
In a book titled [["Moral Machines - Teaching Machines Right from Wrong"|https://www.researchgate.net/publication/257931212_Moral_Machines_Contradiction_in_Terms_or_Abdication_of_Human_Responsibility]], the authors Colin Allen and Wendell Wallach write that (ro)bot (physical robots and software agents) like driverless cars and trains, financial transactions and networks, lethal weapons systems, artificial pet, and home appliances, are becoming a reality, which will lead up to questions about Artificial Moral Agents. 

This adds to [[the debates|https://nationalhumanitiescenter.org/on-the-human/2011/12/the-future-of-moral-machines/]] about the rewards and risks of "Superintelligent AI and the Postbiological future" (see also [[Susan Schneider's philosophical discussion|http://schneiderwebsite.com/index.html]] of these issues).

See also [[The AI Revolution: Our Immortality or Extinction|https://waitbutwhy.com/2015/01/artificial-intelligence-revolution-2.html]].

''A few points from their book and [[the NYT article|https://opinionator.blogs.nytimes.com/2011/12/25/the-future-of-moral-machines/]]'':
* The human-built environment increasingly is being populated by artificial agents, which combine limited forms of artificial intelligence with autonomous (in the sense of unsupervised) activity. The software controlling these autonomous systems is, to date, “ethically blind” in two ways. First, the decision-making capabilities of such systems do not involve any explicit representation of moral reasoning. Second, the sensory capacities of these systems are not tuned to ethically relevant features of the world.
* There are big themes here: freedom of will, human spontaneity and creativity, and the role of reason in making good choices — not to mention the nature of morality itself. Fully human-level moral agency, and all the responsibilities that come with it, requires developments in artificial intelligence or artificial life that remain, for now, in the domain of science fiction. And yet…
* Machines are increasingly operating with minimal human oversight in the same physical spaces as we do.
* Whether we want them or not, they will appear, probably sooner rather than later
** Why? It's all about power.
** Humans are moral agents, but are they always moral in practice?
** So who or what should we fear more: humans or machines?
* Implementation approaches:
** The philosophers' approach to moral decision making:
*** Top-down: use of rules, standards, or theories to guide the design of a system's control architecture.
** The engineers' approach:
*** Bottom-up: rules are not explicitly defined; the system learns them through experience (see the sketch below).
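
A deliberately cartoonish sketch of my own (the action names and scores are invented, purely to fix ideas) of the difference between the two approaches:
{{{
# Top-down: moral constraints are written down explicitly, up front.
FORBIDDEN = {"deceive user", "cause harm"}   # hypothetical rule set

def top_down_allows(action: str) -> bool:
    return action not in FORBIDDEN           # the rule IS the ethics

# Bottom-up: no explicit rules; the agent accumulates feedback ("experience")
# and lets an action's learned score stand in for its moral standing.
scores: dict[str, float] = {}

def give_feedback(action: str, reward: float) -> None:
    scores[action] = scores.get(action, 0.0) + reward

def bottom_up_allows(action: str) -> bool:
    return scores.get(action, 0.0) >= 0.0    # whatever experience taught it

give_feedback("cause harm", -10.0)           # learned, not declared
print(top_down_allows("cause harm"), bottom_up_allows("cause harm"))  # False False
}}}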
In her book //The Left Hand of Darkness// (see the [[introduction to the book|Ursula K. Le Guin on Science Fiction, Writing, and the Truth]]), Ursula K. Le Guin writes a very insightful and personal description of what "love of one's country" means to a (humanoid) alien on Planet Winter.
The alien is a citizen of one country (Karhide) on Planet Winter, and the narrator, an Envoy from another planet, asks him:
> You hate Orgoreyn [the other country on this planet], don't you?
> [...] Hate Orgoreyn? No, how should I? How does one hate a country, or love one? Tibe [the "nationalistic 'prime minister' " of Karhide] talks about it; I lack the trick of it. I know people, I know towns, farms, hills and rivers and rocks, I know how the sun at sunset in autumn falls on the side of a certain plowland in the hills; but what is the sense of giving a boundary to all that, of giving it a name and ceasing to love where the name ceases to apply? What is love of one's country; is it hate of one's uncountry? Then it's not a good thing. Is it simply self-love? That's a good thing, but one mustn't make a virtue of it, or a profession. ... Insofar as I love life, I love the hills of the Domain of Estre [where I was born], but that sort of love does not have a boundary-line of hate. And beyond that, I am ignorant, I hope.
> Ignorant, in the Handdara [a mystic/spiritual sect in Karhide] sense: to ignore the abstraction, to hold fast to the thing.
In a succinctly expressed observation, Michael Shermer, a skeptic's skeptic, simply explained why he thinks we are making progress, as we expand our scientific knowledge.

The piece is titled [["At the Boundary of Knowledge"|https://michaelshermer.com/2016/09/at-the-boundary-of-knowledge/]]:
>[...] isn’t the history of science also strewn with the remains of failed theories such as phlogiston, miasma, spontaneous generation and the luminiferous aether? Yes, and that is how we know we are making progress. The postmodern belief that discarded ideas mean that there is no objective reality and that all theories are equal is more wrong than all the wrong theories combined. The reason has to do with the relation of the known to the unknown.
>
>As the sphere of the known expands into the aether of the unknown, the proportion of ignorance seems to grow—the more you know, the more you know how much you don’t know. But note what happens when the radius of a sphere increases: the increase in the surface area is squared while the increase in the volume is cubed. Therefore, as the radius of the sphere of scientific knowledge doubles, the surface area of the unknown increases fourfold, but the volume of the known increases eightfold. It is at this boundary where we can stake a claim of true progress in the history of science.
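
To make the arithmetic concrete, here is a quick check of my own in Python (Shermer's sphere is, of course, only a metaphor): the surface area of a sphere scales as r^^2^^ while its volume scales as r^^3^^:
{{{
import math

area   = lambda r: 4 * math.pi * r ** 2        # the "boundary with the unknown"
volume = lambda r: (4 / 3) * math.pi * r ** 3  # the "known"

print(area(2) / area(1))      # 4.0 -> doubling the radius quadruples the surface
print(volume(2) / volume(1))  # 8.0 -> ... but multiplies the volume eightfold
}}}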

So, while [[Ralph Sockman|https://en.wikipedia.org/wiki/Ralph_Washington_Sockman]] was right when he said:
>The larger the island of knowledge, the longer the shoreline of wonder.
The above analysis is right too (and delightfully so :), which proves the point [[Niels Bohr|https://en.wikipedia.org/wiki/Niels_Bohr]] was making when he observed:
>The opposite of a fact is falsehood, but the opposite of one profound truth may very well be another profound truth.
I believe that Carl Sagan's position on skepticism strikes the right balance between being open to awesomeness and being careful [[not to fall for bull|http://www.skeptical-science.com/bullshit/detecting-bullshit-quick-refresher-carl-sagan/]] (see also his [["Baloney Detection Kit"|https://www.brainpickings.org/2014/01/03/baloney-detection-kit-carl-sagan/]]).

(He summed it up once by saying:
>Keep an open mind, but not so open that your brains fall out.
)

I think that this position also beautifully applies to the corrosive nature/impact of cynicism. In Sagan's words:

>If you’re only skeptical, then no new ideas make it through to you. You never learn anything. You become a crotchety misanthrope convinced that nonsense is ruling the world. (There is, of course, much data to support you.) Since major discoveries at the borderlines of science are rare, experiences will tend to confirm your grumpiness. But every now and then a new idea turns out to be on the mark, valid and wonderful. If you’re too resolutely and uncompromisingly skeptical, you’re going to miss (or resent) the transforming discoveries in science, and either way, you will be obstructing understanding and progress. Mere skepticism is not enough.
The ACM Alan Turing Award winner [[Judea Pearl|https://en.wikipedia.org/wiki/Judea_Pearl]] quotes Bertrand Russell, and makes some interesting observations, in an [[excellent lecture titled "The Art and Science of Cause and Effect" |http://bayes.cs.ucla.edu/BOOK-2K/causality2-epilogue.pdf]]^^1^^:
>If causal information has an empirical meaning beyond regularity of succession, then that information should show up in the laws of physics. But it does not! The philosopher Bertrand Russell made this argument in 1913: “All philosophers,” says Russell, “imagine that causation is one of the fundamental axioms of science^^2^^, yet oddly enough, in advanced sciences, the word ‘cause’ never occurs .... The law of causality, I believe, is a relic of bygone age, surviving, like the monarchy, only because it is erroneously supposed to do no harm.” Another philosopher, Patrick Suppes, who argued for the importance of causality, noted that: “There is scarcely an issue of ‘Physical Review’ that does not contain at least one article using either ‘cause’ or ‘causality’ in its title.” What we conclude from this exchange is that physicists talk, write, and think one way and formulate physics in another.
>[...] Fortunately, very few physicists paid attention to Russell’s enigma. They continued to write equations in the office and talk cause – effect in the cafeteria; with astonishing success they smashed the atom, invented the transistor and the laser.
Pearl argues that causality is viewed by us in a special way: " ''the  laws  of  physics are  all  symmetrical,  going  both ways,  while  causal  relations  are unidirectional, going from cause to effect.'' "

And to illustrate this symmetry, he gives a simple example, which reminds me of the [["deep understanding of reality"|On multifaceted understanding]] that I think my father was talking about:

>Take, for instance, Newton’s law: ''f = m * a''.
>The rules of algebra permit us to write this law in a wild variety of syntactic forms, all meaning the same thing – that if we know any two of the three quantities, the third is determined. Yet, in ordinary discourse we say that force causes acceleration – not that acceleration causes force, and we feel very strongly about this distinction^^3^^. 
>Likewise, we say that the ratio  f/a helps us determine the mass, not that it causes the mass. Such distinctions are not supported by the equations of physics, and this leads us to ask whether the whole causal vocabulary is purely metaphysical, “surviving, like the monarchy . . .”.

In the 1920s and '30s, Karl Pearson (a strong-willed man, considered to be one of the founders of modern statistics) was fascinated by the concept and implications of correlation, but he vehemently resisted causation:
>Beyond such discarded fundamentals as ‘matter’ and ‘force’ lies still another fetish amidst the inscrutable arcana of modern science, namely, the category of cause and effect.
As Pearl writes:
>It took another 25 years and another strong-willed person, Sir Ronald Fisher, for statisticians to formulate the randomized experiment – the only scientifically proven method of testing causal relations from data, and to this day, the one and only causal concept permitted in mainstream statistics.
>
>And that is roughly where things stand today.

But the world of statistics is still very cautious about causation, as a prominent statistician once remarked:
>Considerations of causality should be treated as they have always been treated in statistics: preferably not at all but, if necessary, then with very great care.
Pearl wonders how the field of statistics, with all its brilliant thinking and thinkers and powerful tools, did not historically delve into causation; he thinks that it had (and has) to do with the lack of an appropriate language (à la [[Sapir-Whorf|http://en.wikipedia.org/wiki/Sapir-Whorf_Hypothesis]]?):
>Naturally, if we lack a language to express a certain concept explicitly, we can’t expect to develop scientific activity around that concept.
>
>Scientific development requires that knowledge be transferred reliably from one study to another and, as Galileo showed 350 years ago, such transference requires the precision and computational benefits of a formal language.
As an "aside", Pearl, who is not only a statistician but also a computer scientist, mentions some implications and difficulties in CS, as a result of the above:
>How should a robot acquire causal information through interaction with its environment? How should a robot process causal information received from its creator–programmer?

Pearl focuses on the asymmetry of our views/context as the explanation for why causality is meaningful (let alone very desirable and useful):
>If you wish to include the entire universe in the model, causality disappears because interventions disappear – the manipulator and the manipulated lose their distinction. However, scientists rarely consider the entirety of the universe as an object of investigation. In most cases the scientist carves a piece from the universe and proclaims that piece //in// – namely, the focus of investigation. The rest of the universe is then considered //out// or background and is summarized by what we call boundary conditions. This choice of ins and outs creates asymmetry in the way we look at things, and it is this asymmetry that permits us to talk about “outside intervention” and hence about causality and cause–effect directionality.
And he summarizes:
>A causal model contains three ingredients (in addition to using a set of symmetric equations to describe normal conditions):
>(i) a distinction between the //in// and the //out//;
>(ii) an assumption that each equation corresponds to an independent mechanism and hence must be preserved as a separate mathematical sentence; and
>(iii) interventions that are interpreted as surgeries over those mechanisms.
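
To see what "surgery over a mechanism" can mean in practice, here is a toy simulation of my own (the model and all its probabilities are invented for illustration, and this is not Pearl's notation): a confounder Z drives both X and Y, so the observational P(Y=1 | X=1) differs from the interventional P(Y=1 | do(X=1)), which we obtain by surgically replacing X's mechanism while leaving everything else intact:
{{{
import random

random.seed(1)

def sample(do_x=None):
    """One draw from a toy structural causal model: Z -> X, Z -> Y, X -> Y."""
    z = random.random() < 0.5                          # hidden common cause
    if do_x is None:
        x = z or (random.random() < 0.1)               # X's natural mechanism
    else:
        x = do_x                                       # the "surgery": force X
    y = (x and random.random() < 0.3) or (z and random.random() < 0.5)
    return x, y

N = 100_000
observed = [sample() for _ in range(N)]
p_obs = sum(y for x, y in observed if x) / sum(1 for x, y in observed if x)
p_do  = sum(y for _, y in (sample(do_x=True) for _ in range(N))) / N

print(f"P(Y=1 | X=1)     ~ {p_obs:.3f}")  # inflated: seeing X=1 hints that Z=1
print(f"P(Y=1 | do(X=1)) ~ {p_do:.3f}")   # the causal effect of setting X=1
}}}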

At this point Pearl introduces the need and rationale for a new language for dealing with causation:
<html>
	<table>
		<tr>
			<td>
				<b>Expressing observations vs. interventions</b><p><p><p>
				<img src="resources/algebra of doing 1.png">
			</td>
			<td>
				<b>Analogously, deriving rules</b><p>
				<img src="resources/algebra of doing 2.png">
			</td>
		</tr>
	</table>
</html>

And this reminds me of Andrea diSessa, [[in his book "Changing Minds"|resources/diSessa - Changing Minds - Chapter1.pdf]], giving an example of the __power of literacy__ to change the way we think and deal with reality (and also [[Examples of the power of math notation]]).

<html>
	<table>
		<tr>
			<td>
				<b>In Pearl's words:</b><p><p>
				I think you can still get the flavor of this new calculus. It consists of three rules that permit us to transform expressions involving actions and observations into other expressions of this type. The first allows us to ignore an irrelevant observation, the third to ignore an irrelevant action; the second allows us to exchange an action with an observation of the same fact. What are those symbols on the right? They are the “green lights” that the diagram gives us whenever the transformation is legal.
			</td>
			<td>
				<b>And the rules:</b><p>
				<img src="resources/algebra of doing 3.png">
			</td>
		</tr>
	</table>
</html>


At this point, Pearl brings up [[Simpson's paradox]] and highlights the difficulties it presents in determining causation.

But, he summarizes the lecture on an optimistic and practical note:
>It is true that testing for cause and effect is difficult. Discovering causes of effects is even more difficult. But causality is not mystical or metaphysical. It can be understood in terms of simple processes, and it can be expressed in a friendly mathematical language, ready for computer analysis.
and he adds:
>This does not solve all the problems of causality, but the power of symbols and mathematics should not be underestimated.
>Many scientific discoveries have been delayed over the centuries for the lack of a mathematical language that can amplify ideas and let scientists communicate results. I am convinced that many discoveries have been delayed in our century for lack of a mathematical language that can handle causation.
(again, [[echoing diSessa|The power of a new literacy]]).


----
^^1^^ - see a [[technical explanation (with Python code)|https://medium.com/@akelleh/a-technical-primer-on-causality-181db2575e41]] of key causality concepts, titled "A Technical Primer On Causality", by Adam Kelleher
^^2^^ - See what [[Alison Gopnik had to say about causality|Causality - Alison Gopnik]].
^^3^^ - which may or may not be [[useful for, or "illuminating" of certain aspects of reality|On multifaceted understanding]].
On polarity and the non-linear nature of reality, inspiring "lifelong spiraling discovery and learning".

From [[Tao: The Watercourse Way|http://www.wisdom2be.com/files/1b033e1855ea8cd5d7f40a4f1ff78ed1-120.html]] by Alan Watts:

At the very roots of Chinese thinking and feeling there lies the principle of polarity, which is not to be confused with the ideas of opposition or conflict. In the metaphors of other cultures, light is at war with darkness, life with death, good with evil, and the positive with the negative, and thus an idealism to cultivate the former and be rid of the latter flourishes throughout much of the world. To the traditional way of Chinese thinking, this is as incomprehensible as an electric current without both positive and negative poles, for polarity is the principle that + and —, north and south, are different aspects of one and the same system, and that the disappearance of either one of them would be the disappearance of the system.

People who have been brought up in the aura of Christian and Hebrew aspirations find this frustrating, because it seems to deny any possibility of progress, an ideal which flows from their linear (as distinct from cyclic) view of time and history. Indeed, the whole enterprise of Western technology is “to make the world a better place”—to have pleasure without pain, wealth without poverty, and health without sickness. But, as is now becoming obvious, our violent efforts to achieve this ideal with such weapons as DDT, penicillin, nuclear energy, automotive transportation, computers, industrial farming, damming, and compelling everyone, by law, to be superficially “good and healthy” are creating more problems than they solve. We have been interfering with a complex system of relationships which we do not understand, and the more we study its details, the more it eludes us by revealing still more details to study. 

As we try to comprehend and control the world it runs away from us. Instead of chafing at this situation, a Taoist would ask what it means. What is that which always retreats when pursued? Answer: yourself. Idealists (in the moral sense of the word) regard the universe as different and separate from themselves—that is, as a system of external objects which needs to be subjugated. Taoists view the universe as the same as, or inseparable from, themselves— so that Lao-tzu could say, “Without leaving my house, I know the whole universe.” 

This implies that the art of life is more like navigation than warfare, for what is important is to understand the winds, the tides, the currents, the seasons, and the principles of growth and decay, so that one’s actions may use them and not fight them. In this sense, the Taoist attitude is not opposed to technology per se. Indeed, the Chuang-tzu writings are full of references to crafts and skills perfected by this very principle of “going with the grain.” 

The point is therefore that technology is destructive only in the hands of people who do not realize that they are one and the same process as the universe. Our overspecialization in conscious attention and linear thinking has led to neglect, or ignorance, of the basic principles and rhythms of this process, of which the foremost is polarity.
In an [[article titled Elegance, by Matthew Fuller|https://aestech.wikischolars.columbia.edu/file/view/Fuller+-+Elegance+(Software+Studies).pdf]], he gives the following description of, and requirements for, elegant computer programs/solutions (the breakdown/itemization is mine):

>In Literate Programming, Donald Knuth suggests that the best programs can be said to possess the quality of elegance. Elegance is defined by four criteria: 
** the leanness of the code; 
** the clarity with which the problem is defined; 
** spareness of use of resources such as time and processor cycles; and, 
** implementation in the most suitable language on the most suitable system for its execution. 
>Such a definition of elegance shares a common vocabulary with design and engineering, where, in order to achieve elegance, use of materials should be the barest and cleverest. The combination is essential—too much emphasis on one of the criteria leads to clunkiness or overcomplication.

It is interesting and relevant that Fuller mentions Gregory Chaitin and his work on [[efficiency, complexity, and elegance|On randomness, compression, and theory effectiveness (and complexity)]].

Chaitin looks (for example, in his article [[Doing Mathematics Differently|http://inference-review.com/article/doing-mathematics-differently]]) at the brevity of the solution/program; but, as with any complex thing, this aspect is a central question in programming and implementation: sometimes shortness is not the most important quality.

''As the [[Zen of Python|https://www.python.org/dev/peps/pep-0020/]] (by Tim Peters ([[import this|http://stackoverflow.com/questions/5855758/what-is-the-source-code-of-the-this-module-doing]])) emphasizes'':

Beautiful is better than ugly.

Explicit is better than implicit.

Simple is better than complex.

Complex is better than complicated.

Flat is better than nested.

Sparse is better than dense.

Readability counts.


(Since elegance goes hand-in-hand with clarity, and in my opinion, explanatory power, here is [[an example tying these concepts/practices together|On transparent and explanatory modeling]]).

Related to this, [[here is some advice|https://go-proverbs.github.io/]] in the form of (Simple, Poetic, Pithy) "proverbs" from a different culture/language (the [[Go programming language|https://golang.org/]]), with [[some interpretations by Sarah Allen |https://www.ultrasaurus.com/2016/07/go-language-philosphy-thorugh-proverbs/]]:

The bigger the interface, the weaker the abstraction.

A little copying is better than a little dependency.

Clear is better than clever.

Reflection is never clear.

Don't just check errors, handle them gracefully.

Design the architecture, name the components, document the details.

Documentation is for users.

Don't panic.

In Maria Popova's blogpost [[Kierkegaard on time|https://www.brainpickings.org/2017/04/18/kierkegaard-concept-of-anxiety-time/]], she quotes the great Danish philosopher:
>The moment is not properly an atom of time but an atom of eternity.
and this, I think, perfectly captures the idea that ''concepts are not reality''. What I mean by that is that, we all accept (since we heard it from childhood) that a moment is not a well-defined duration of time, i.e., it is subjective (to one person a moment may mean 'about a second', while to another it may mean 'less than a minute', and so on :)
But, a moment can be 'any length of time' only if it is part of something (or a slice, if you will) that is also not well-defined, or very elastic; which is what eternity (in Kierkegaard's quote) is.
We all accept that eternity is not real; it is a concept (and the brilliant (but odd) mathematician [[Georg Cantor|https://en.wikipedia.org/wiki/Georg_Cantor]] proved it by [[mathematically defining an infinite family of infinities|http://gizmodo.com/5809689/a-brief-introduction-to-infinity]], one mind-blowingly bigger than the next). It is a useful concept, in the sense that it helps us better know, learn, and deal with reality (as well as work with and manipulate other concepts), __but__ it is not real, i.e., infinity (and for that matter, a moment) does not exist (is not part of reality).
Kierkegaard puts it nicely:
>If in the infinite succession of time [the "timeline": from past through present, into the future] a foothold could be found, i.e., a present, which was the dividing point, the division would be quite correct. However, precisely because every moment, as well as the sum of the moments, is a process (a passing by), no moment is a present, and accordingly there is in time neither present, nor past, nor future.
We (humans) perceive (and conceive of) time as a change in our reality. Kierkegaard calls it "spatializing", and Einstein created/defined "space-time" to describe our experiencing of this process.
The elusiveness (non-reality?) of time is also reflected in physics, the science we invented to help us understand and deal with reality: some phenomena, especially in Einstein's theories and sub-atomic theories, leave no room for a privileged "now" or a built-in direction of time. As Philip Cheung [[writes in an article|https://www.quantamagazine.org/a-debate-over-the-physics-of-time-20160719/]]:
>Many physicists argue that Einstein’s position is implied by the two pillars of modern physics: Einstein’s masterpiece, the general theory of relativity, and the Standard Model of particle physics. The laws that underlie these theories are time-symmetric — that is, the physics they describe is the same, regardless of whether the variable called “time” increases or decreases. Moreover, they say nothing at all about the point we call “now” — a special moment (or so it appears) for us, but seemingly undefined when we talk about the universe at large.

In another insightful post on Brainpickings [[The Science of How Our Social Interactions Shape Our Experience of Time|https://www.brainpickings.org/2017/09/04/alan-burdick-why-time-flies-empathy/]], Popova covers a new book by Alan Burdick titled "Why Time Flies: A Mostly Scientific Investigation", and quotes the physicist Stephen Hawking (investigating black holes and the origins of the universe), who had commented on the question of "what was there __before__ the Big Bang?" and "Did time not exist before that?":
>[It] is like standing at the South Pole and asking which way is south: “Earlier times simply would not be defined.”

In Burdick's book he points out that due to our conception of time, we speak of "making time" and "wasting time", too much of it, and too little of it, its preciousness, and so on. But time, again, is not reality, but a very useful tool (one of many) for dealing with it.

And how is all of this related to Computer Science, you may ask?
Well, all of computer science relies on layers upon layers of abstraction and concepts we defined and created above and on top of physical reality. Computers are made of hardware (physical entities) on top of which we create logic operations ("logic gates"), which are the basis for numerical operations ("binary calculations"), which are the basis (lower layers) of primitive programming operations ("machine languages"), which enable high-level programming languages, models, simulations, virtual realities (!), and so on. A long chain of concepts and abstractions, none of which is "actual reality".
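
To make that ladder of abstractions concrete, here is a minimal sketch of my own (not from any of the quoted sources) showing a single rung: a binary adder, i.e., a "numerical operation", built in Python from nothing but logic-gate operations:
{{{
def half_adder(a, b):              # a, b are single bits (0 or 1)
    return a ^ b, a & b            # (sum bit, carry bit) from XOR and AND gates

def full_adder(a, b, carry):
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry)
    return s2, c1 | c2             # an OR gate combines the two possible carries

def add4(x, y):
    """Add two 4-bit numbers using only gate operations."""
    result, carry = 0, 0
    for i in range(4):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

assert add4(3, 4) == 7             # arithmetic emerging from logic gates
}}}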

There are many implications to this, and many dangers if we confuse (or intentionally blur) the difference between concepts and reality, as, for example, when we say that "algorithms are ruling our lives", or "the software bug/glitch was responsible for the accident", etc. At bottom, the concepts (ha!) of morality, responsibility, accountability, compassion, justice, etc., are (again) human tools (or means) for dealing with reality. It doesn't help to assign or relegate these concepts to another set of concepts. It only muddies the waters.
From an excellent article by the psychologist Jerome Bruner titled [["The Act of Discovery"|https://digitalauthorshipuri.files.wordpress.com/2015/01/the-act-of-discovery-bruner.pdf]]:
>I am suggesting that there are forms of activity that serve to enlist and develop the competence motive, that serve to make it the driving force behind behavior. I should like to add to White's general premise [Robert White (R. W. White, 1959)] that the exercise of competence motives has the effect of strengthening the degree to which they gain control over behavior and thereby reduce the effects of extrinsic rewards or drive gratification.
>
>The brilliant Russian psychologist Vigotsky (L. S. Vigotsky, 1934) characterizes the growth of thought processes as starting with a dialogue of speech and gesture between child and parent; autonomous thinking begins at the stage when the child is first able to internalize these conversations and "run them off" himself. This is a typical sequence in the development of competence.
>
>So too in instruction. The narrative of teaching is of the order of the conversation. The next move in the development of competence is the internalization of the narrative and its "rules of generation" so that the child is now capable of running off the narrative on his own. The hypothetical mode in teaching by encouraging the child to participate in "speaker's decisions" speeds this process along. 
>
>Once internalization has occurred, the child is in a vastly improved position from several obvious points of view -- notably that he is able to go beyond the information he has been given to generate additional ideas that can either be checked immediately from experience or can, at least, be used as a basis for formulating reasonable hypotheses.
This continuous construction of hypotheses and checking them against the real world/environment is what Seymour Papert also advocated when talking about using Logo to [[develop mathematical knowledge|An Exploration in the Space of Mathematics Educations]] (in [[a constructionist manner|A Papertian constructionist alternative learning environment]]).
>But over and beyond that, the child is now in a position to experience success and failure not as a reward and punishment, but as information. For when the task is his own rather than a matter of matching environmental demands, he becomes his own paymaster in a certain measure. Seeking to gain control over his environment, he can now treat success as indicating that he is on the right track, failure as indicating he is on the wrong one.
This last part, I think, is similar to what the psychologist Carol Dweck developed around [["growth mindset" vs. "fixed mindset"|https://fs.blog/2015/03/carol-dweck-mindset/]].
>
>In the end, this development has the effect of freeing learning from immediate stimulus control. When learning in the short run leads only to pellets of this or that rather than to mastery in the long run, then behavior can be readily "shaped" by extrinsic rewards. When behavior becomes more long-range and competence-oriented, it comes under the control of more complex cognitive structures, plans and the like, and operates more from the inside out. 
And Bruner concludes this point:
>To sum up the matter of the control of learning, then, I am proposing that the degree to which competence or mastery motives come to control behavior, to that degree the role of reinforcement or "extrinsic pleasure" wanes in shaping behavior. 
>
>The child comes to manipulate his environment more actively and achieves his gratification from coping with problems. Symbolic modes of representing and transforming the environment arise and the importance of stimulus-response-reward sequences declines. 
>
>To use the metaphor that David Riesman developed in a quite different context, mental life moves from a state of outer-directedness in which the fortuity of stimuli and reinforcement are crucial to a state of inner-directedness in which the growth and maintenance of mastery become central and dominant.

In the Principles of Computer Science (PCS) course I developed and am now teaching for the second year, we have somewhat of a running joke about the "compactness and effectiveness of expressing oneself", which the students ended up calling being "concise", pronounced kontsiss (German roots?).

The students are taking the notion of being 'kontsiss' (and sometimes 'pretsiss' :) in various directions; one of them (naturally) is the brevity of programs. This is an old and persistent theme in CS. As [[they say|http://esolangs.org/wiki/Esoteric_programming_language]]:
>Many esoteric languages are designed to be as short as possible. These languages are known as "Golfing languages", and frequently used for "[[Code golf|https://en.wikipedia.org/wiki/Code_golf]]", a competition to solve programming tasks in as few characters or bytes as possible. Examples include ~CJam, [[Pyth|https://pyth.readthedocs.io/en/latest/]], and ~GolfScript, as well as many others.

These, sometimes very jovial^^1^^, discussions in class are not only enjoyable, but they also expand the students' horizons, because we get to touch on important topics and principles like programming style, readability, clarity, resource usage, effectiveness vs. efficiency, elegance (and yes, beauty), expressiveness, and, of course, preciseness/conciseness.

In the discussions about code brevity, the students (understandably) bring up the desire to write code where the variables are very short (as in 1 to 3 letters long), function names are cryptic and/or cute, and so on. Besides the obvious downsides in terms of readability/clarity, one has to be careful about taking brevity to the extreme:
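
For instance, here is a small Python sketch of my own (not from the class) of the same function written twice, once "golfed" and once for the reader:
{{{
def f(l,t):return[x for x in l if x>t]            # brevity "wins"...

def values_above_threshold(values, threshold):
    """Return only the values that exceed the given threshold."""
    return [value for value in values if value > threshold]

assert f([1, 5, 9], 4) == values_above_threshold([1, 5, 9], 4) == [5, 9]
}}}
Both are "correct"; only one of them will still make sense to its author in six months.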

In the [[Esoteric Languages wiki (esolangs)|http://esolangs.org/wiki/]], they define a [[Turing tarpit|http://esolangs.org/wiki/Turing_tarpit]] as 
>a language that aims for Turing-completeness in an arbitrarily small number of linguistic elements - ideally, as few as possible.

The wiki entry doesn't fail to mention:
>The term "Turing tarpit" itself comes from the 1982 paper "[[Epigrams on Programming|http://thecorememory.com/Epigrams_on_Programming.pdf]]" by [[Alan Perlis|http://en.wikipedia.org/wiki/Alan_Perlis]] (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]"):
>>Beware of the Turing tar-pit in which everything is possible but nothing of interest is easy.
These words of wisdom from Perlis (in 1982) are very relevant even today, when you look at [[some of the "enterprises" people have undertaken|http://timeblimp.com/?page_id=2540]] designing new esoteric programming languages.

In the context of minimalist/golfing but Turing-complete languages it may be worth mentioning [[Pyth|https://pyth.readthedocs.io/en/latest/]] (see the [[online execution shell/compiler|https://pyth.herokuapp.com/]]).
I say this because I still remember the APL course I took as an undergraduate at the Technion. It was so different ("esoteric") from anything else I had experienced before, but it definitely opened my mind to concepts like functional programming, expressive power, data structures, data dimensions, etc. (not to mention the typing skills on the [[APL keyboard|resources/APL keyboard.gif]]).
So, for example, it may be instructive to discuss with students the pros and cons of 2 versions of [[looping through a range of integers|http://pyth.readthedocs.io/en/latest/simple-programs.html]] using 2 different idioms and approaches in Pyth (see the [[documentation|http://pyth.readthedocs.io/en/latest/index.html]]):

{{{
================= Pyth ============================
FNrZhTN
================= equivalent Python compilation ===
for N in Prange(Z,head(T)):
 Pprint("\n",N)
==================================================
}}}
compared to:
{{{
================= Pyth ============================
VhTN
================= equivalent Python compilation ===
for N in urange(head(T)):
 Pprint("\n",N)
==================================================
}}}
Leading to a very concise (kontsiss :) version of finding the factorial of a number:
{{{
input: 5

================= Pyth ============================
K1FNr1hQ=K*KN;K
================= equivalent Python compilation ===
Q=copy(literal_eval(input()))
K=1
for N in Prange(1,head(Q)):
 K=copy(times(K,N))
Pprint("\n",K)
==================================================
}}}
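
For contrast, the same computation in ordinary, "non-golfed" Python (my own classroom-style version) might read:
{{{
def factorial(n):
    """Return n! computed iteratively."""
    result = 1
    for i in range(1, n + 1):
        result *= i
    return result

assert factorial(5) == 120
}}}
Fifteen characters of Pyth versus six lines of Python: a concrete starting point for the readability-vs.-brevity discussion.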


----
^^1^^ - The jolliness can sometimes spill into some "silly cases", like for example, (briefly) talking about the [[HQ9+|https://esolangs.org/wiki/HQ9+]] joke language. It's obviously a nonsense language, but its possible "redeeming value" may be that it could be used as an exercise for students to implement in a course's (real) programming language of choice.
Here are some highlights from a blog post titled [[Ethical Theory|http://reasonandmeaning.com/2013/12/24/ethical-theory/]] by [[John G. Messerly|http://reasonandmeaning.com/brief-bio/]]:

* There are many theories that deny morality^^1^^: nihilism; determinism; skepticism; relativism; egoism; etc. In my view ethicists too easily dismiss these theories—they have philosophical merit.
* Most ethical theories try to justify morality. Typically this justification has been supplied by: self-interest—theories deriving from Plato and Hobbes; sympathy—theories deriving from Hume and Mill; nature—theories deriving from Aristotle and Aquinas; or reason—theories deriving from Kant and Locke.
* Some contemporary thinkers, Darwall and Gewirth come to mind, have tried to justify morality following Kant. [...] At most, I would argue, these theories show that morality is weakly rational, i.e., morality is not clearly irrational. But I don’t see how they can show me how another person’s interests give me a reason to do anything.
* Few contemporary thinkers have advanced natural law theories in the tradition of Aristotle and Aquinas. Contemporary thinkers try to bridge the is/ought gap with an evolutionary ethics or moral psychology utilizing knowledge of human nature unavailable to ancient and medieval philosophers.
* Theories deriving from considerations of sympathy are also promising. Mill’s utilitarianism was based on a “social feeling,” Hume thought sympathy the basis of morality, Darwin had an entire theory of moral sentiments, and the contemporary philosopher Kai Nielsen places great emphasis on the role of sympathy in morality. It is hard to imagine a justification of the moral life without a role for sympathy.  
* Theories deriving from self-interest are promising, and contemporary contract and game theorists, particularly Gauthier, have gone a long way toward sustaining and revitalizing the Hobbesian project. Nonetheless, their results are inconclusive and it is not clear that this approach can resolve the compliance problem. However, combining a contract approach with considerations of our evolutionary nature and ingrained or acquired human sympathies may have more promise.
* Finally, there are ethical theories associated with religious and metaphysical views, but lack of agreement about these views precludes any hope of grounding morality in them. (Of course the same may be said about one or another of our moral theories—that they all suppose some metaphysic and that the dispute about ethics depends on resolving metaphysical issues first.)
Messerly ties this discussion to the [[Prisoner's Dilemma|Summary of the Prisoner’s Dilemma]]:
* Let’s begin with the prisoner’s dilemma (PD). It is easy to see that self-interest demands defection, a supposedly non-moral move, in a one-time PD. So here self-interest and ordinary morality conflict. The fact that both parties do better through mutual cooperation somewhat ameliorates this conclusion, but does not change the fact that it is better for one to not comply no matter what the other does.
* The situation changes when the PD is iterated, since tit-for-tat (TFT) has been shown to be a robust strategy. But recent work by Ken Binmore has challenged this assumption. (The “Folk-theorem” is also relevant here.) It is not that TFT is a bad strategy, but that real life is more complex than iterated ~PDs can model. There may be an infinite number of strategies which are robust, calling into question whether we can even determine what is in our self-interest. And if we don’t know what’s in our interest, how can self-interest ground morality?
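
To make the TFT dynamic concrete, here is a minimal iterated-PD simulation sketch of my own (the payoff numbers are the conventional textbook ones -- 3 for mutual cooperation, 1 for mutual defection, 5 and 0 for unilateral defection -- not anything from Messerly or Binmore):
{{{
# PAYOFFS[(my_move, their_move)] -> my score; 'C' = cooperate, 'D' = defect
PAYOFFS = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def tit_for_tat(opponent_history):     # cooperate first, then mirror the opponent
    return opponent_history[-1] if opponent_history else 'C'

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_b)    # each player sees the other's past moves
        move_b = strategy_b(hist_a)
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): mutual cooperation pays
print(play(tit_for_tat, always_defect))  # (9, 14): the defector "wins", but scores less than 30
}}}
Which illustrates the point above: defection beats the opponent in any single encounter, yet mutual cooperators end up better off than either player in the TFT-vs.-defector match.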

He brings in practical terms and considerations:
* [Immoral actors] can’t determine what is in their self-interest. But can they make an educated guess? Not really. It is too difficult to know the repercussions of their acts and impossible to predict what adopting a disposition to behave will cost them in the long run. The complexity of the situation makes complete assessment impossible and reliable judgment unlikely, raising doubts about applying any moral theory to a complex world of interactions with other agents whose psychologies, motives, disposition and intents are difficult to determine if not opaque.
* Thus it is unlikely that self-interest can ground morality or immorality since self-interest can’t be determined with accuracy. So where to from here? In large part, I find myself agreeing with the contemporary philosopher Kai Nielsen^^2^^.
* Nielsen maintains that whether the bad guys are happy or not depends on what kinds of persons they are; and I agree. Neither rationality nor happiness requires morality: we must simply decide for ourselves how we should act and what sort of persons we will strive to be or become.
* This means that considerations of reason, happiness, and self-interest, in the absence of sympathy and a commitment to the moral life, cannot adjudicate [i.e., make a formal judgment or decision] between morality and self-interest. 
* While both Nielsen and I find this situation somewhat depressing, we accept that we cannot get to morality with intellect alone. From an objective point of view, reason is impotent to determine our values and thus the moral life demands a non-rational, voluntary commitment. In other words, the moral life and the immoral one are Kantian antinomies, and the choice between them is interfused with existential angst. In the end we simply choose … and hope.
* Does it help to know that we can’t give good self-interested reasons to comply with the social contract?
** [...] if mutual cooperation becomes important enough, ethics may become a branch of applied engineering. We may have to engineer ourselves, removing tendencies adaptive for foragers, but suicidal for beings with technology.
** And maybe engineering ourselves won’t entail a loss of freedom, but instead free us from some residual effects of our evolution, from overt aggressions and other tendencies that are now anachronistic in a technological world. But whatever we choose to do, one thing is certain, we alone are the stewards of the future of life and mind on this small outpost in an infinite cosmos. We alone must decide where we want to go.

Again, bringing practicality into the discussion about ethical behavior Messerly concludes:
* Remember that none of the above implies that it is irrational to be moral, only that rationality alone can’t get us to morality. This isn’t to say there aren’t good reasons to be moral. There are. Immoralists might be punished and lose the benefits of cooperation; and moralists don’t have to be looking over their shoulder and may have more friends. All we have said is that we can’t show that the reasons to be moral outweigh the reasons to be immoral, if you benefit from and can get away with immorality.
* And we have also suggested that it is becoming increasingly within our power to remake the world and ourselves in such a way that no one can benefit from or get away with immorality. While some will object that nightmarish scenarios will follow from our increasing control of immoral behavior, it is quite likely that we will all benefit from a world in which peaceful living can be secured by the application of our knowledge. 
* Ironically, our inability to convincingly answer the why should I be moral question in theory, will lead to our answering it in practice. In short, there never have been completely convincing reasons to be moral, evidenced by the barbarism of human history, but, desperately in need of morality for our survival and flourishing, we will freely choose to transform ourselves by all means at our disposal.
* In retrospect, biology and evolutionary stable strategies imposed early moral constraints, philosophical and religious education furthered the project, governments provided the muscle that conscience lacked, and now it is up to us to continue the project so that immorality doesn’t kill us. So we will be the ones who ultimately create the answer to the why be moral question.



----
^^1^^ - Morality defined as a system demanding that persons express care, concern, and interest in others; exemplified by moral rules such as: “don’t kill, lie, cheat, or steal;” “help others;” etc.
^^2^^ - Nielsen, Kai. “Why Should I Be Moral?—Revisited” American Philosophical Quarterly 21, January 1984.
Bernie Siegel, M.D., in his book //Love, Medicine & Miracles// writes about //Becoming Exceptional// (chapter 8):
> Psychologist Al Siebert became interested in the personalities of survivors when he joined the paratroopers just after college in 1953.
>[...] He has found that one of their most prominent characteristics is a complexity of character, a union of many opposites that he has termed biphasic traits. They are both serious //and// playful, tough //and// gentle, logical //and// intuitive, hard-working //and// lazy, shy //and// aggressive, introspective //and// outgoing, and so forth. They are paradoxical people who don't fit neatly into the usual psychological categories. This makes them more flexible than most people, with a wider array of resources to draw from.
>Siebert wondered how the survivor personality keeps from being immobilized by its contradictions. [... and building on concepts of Ruth Benedict and Abraham Maslow he] found that survivors have a hierarchy of needs and that, unlike most people, they pursue //all// of them. Beginning with the most basic, these needs are: survival, safety, acceptance by others, self-esteem, and self-actualization. One of the main needs that distinguished survivors from others, however, went beyond self-actualization: a need for synergy. Siebert defines the need for synergy as the need to have things work well for oneself //and// for others.
>Survivors, then, act not only from self-interest, but also from the interest of others, even in the most stressful situations. They clean up messes and make things safer or more efficient. In short, they give of themselves, leaving the world better than they found it. Their relaxed awareness and the confidence that it brings allows them to save their energies for the really important things. When things are going well, they let well enough alone, leaving themselves free for curiosity about new developments or potential problems. They may seem uninvolved at times, but they are "foul-weather friends." They show up when there's trouble.
In an [[interview of Daniel Kahneman|https://www.cfr.org/event/conversation-daniel-kahneman]] (an Economics Nobel Laureate, for work done with Amos Tversky), he made a few interesting observations about both human and artificial intelligence (the following is "somewhat heavily edited", since the source is the conversational material from the interview):

!!! On his book [["Thinking Fast and Slow"|https://www.nytimes.com/2011/11/27/books/review/thinking-fast-and-slow-by-daniel-kahneman-book-review.html]]
The book is an exploration of how the human mind works and how intuitive thinking, which is called “System 1,” interacts with deliberative thinking, “System 2.” And the conclusion is that System 1 is dominant and influential—more than we realize.

The claim in the book is that we are conscious of our conscious thoughts; we are conscious of our deliberations. But most of what happens in our mind happens silently, i.e., the most important things that happen in our mind happen silently. We're just aware of the result; we're not aware of the process. And so the processes we're aware of tend to be deliberate and sequential, but the associative network that lies behind them, and that brings ideas forward into consciousness, we're really not aware of at all.

So we live with System 2. We’re mostly aware of it. And System 1, which does most of the work, is not recognized. But because we’re aware of the deliberative thinking, we tend to think our decisions are more deliberative than they really are.

!!! On Artificial Intelligence and Machine Learning
The question is whether this human process -- the way the brain works, and the interaction between the intuitive and the deliberative that happens naturally -- can ever be replicated by algorithms.

The answer is almost certainly yes. This is a computing device that we have. It won’t be replicated exactly. It shouldn’t be replicated exactly. But that you can train an artificial intelligence to develop powerful intuitions in a complex domain, this we know.

For example, the ~AlphaGo software learned to play Go and beat the world champion; then, by playing against itself and training for millions more games, it came to far exceed human skill and capabilities.

The kind of progress you can make with machine learning is great. Go is a good example because it's an intuitive game: typically, people cannot really explain why they feel a move is strong or weak. But Go experts observed ~AlphaGo and realized that the moves the program made were extremely strong and completely novel.

Will we algorithmically integrate the intuitive and deliberative systems better than the human brain does?

One of the things that we'll be able to do is develop programs to explain their decisions in System 2 (conscious, deliberative) terms. So it’s going to be separate because what generates a solution is the deep learning. But it will automatically look at the big data and develop a story that we can grasp. It looks very likely that we’ll develop programs to tell stories about those decisions so that we can understand them in terms of reasoned arguments.

!!! On the difficulty (and automatic perceptual and conceptual "lock-in") of being able to see and deal with alternatives (or why is it so hard to come up with alternative stories?):
This is really a characteristic of the perceptual system, that when we perceive things we make a choice. And frequently, when stimuli are ambiguous, we can see it this way or that way. 
And a remarkable and simple example is the [[Necker cube|https://en.wikipedia.org/wiki/Necker_cube]]. It’s flat on the page, but it appears three-dimensional, and it "flips". If you stare at it long enough, there are two three-dimensional solutions that you see, and they "flip" automatically. There's nothing voluntary about it. And it flips all at once, and you only see one interpretation—you can’t see them both at the same time.

You know that there are two, but you see only one. And what happens is a process where once a solution gets adopted, it suppresses others. And this mechanism of a single solution coming up and suppressing alternatives, that occurs in perception and it occurs in cognition. So when we have a story, it suppresses alternatives.
In an article titled [[The Chess Master and the Computer|http://www.nybooks.com/articles/2010/02/11/the-chess-master-and-the-computer/]] in the //New York Review of Books//^^1^^, Chess Grand Master [[Garry Kasparov]] (of Deep Blue infamy :) tells the following story, and makes the following observations:

>As for how many moves ahead a grandmaster sees, [...] the answer attributed to the great Cuban world champion [[José Raúl Capablanca|https://en.wikipedia.org/wiki/Jos%C3%A9_Ra%C3%BAl_Capablanca]], among others [was]: “Just one, the best one.” 
>This answer is as good or bad as any other, a pithy way of disposing with an attempt by an outsider to ask something insightful and failing to do so. It’s the equivalent of asking [[Lance Armstrong|https://en.wikipedia.org/wiki/Lance_Armstrong]] (of the Tour de France cheating infamy) how many times he shifts gears during the Tour de France.
>
>The only real answer, “It depends on the position and how much time I have,” is unsatisfying.
>
>[...]
>Capablanca’s sarcasm aside, correctly evaluating a small handful of moves is far more important in human chess, and human decision-making in general, than the systematically deeper and deeper search for better moves—the number of moves “seen ahead”—that computers rely on.
>
>[...]
>[Chess] demands high performance from so many of the brain’s functions. Where so many of these investigations fail on a practical level is by not recognizing the importance of the process of learning and playing chess. The ability to work hard for days on end without losing focus is a talent. The ability to keep absorbing new information after many hours of study is a talent. Programming yourself by analyzing your decision-making outcomes and processes can improve results much the way that a smarter chess algorithm will play better than another running on the same computer. We might not be able to change our hardware, but we can definitely upgrade our software.
>
>[...]
>Playing better chess was a problem they ["the finest minds of the twentieth century"] wanted to solve, yes, and it has been solved. But there were other goals as well: to develop a program that played chess by thinking like a human, perhaps even by learning the game as a human does. Surely this would be a far more fruitful avenue of investigation than creating, as we are doing, ever-faster algorithms to run on ever-faster hardware.

Reminds me of [[comments Alan Kay made|It's Big Meaning, not Big Data.]]: It should not be about Big Data, it should be about Big Meaning. He basically said we are tackling the wrong (but easier?) problems in Computation and Computer Science by throwing more hardware at automation problems.
In Kasparov's words:
>This is our last chess metaphor, then—a metaphor for how we have discarded innovation and creativity in exchange for a steady supply of marketable products. The dreams of creating an artificial intelligence that would engage in an ancient game symbolic of human thought have been abandoned. Instead, every year we have new chess programs, and new versions of old ones, that are all based on the same basic programming concepts for picking a move by searching through millions of possibilities that were developed in the 1960s and 1970s.

>Like so much else in our technology-rich and innovation-poor modern world, chess computing has fallen prey to incrementalism and the demands of the market. Brute-force programs play the best chess, so why bother with anything else? Why waste time and money experimenting with new and innovative ideas when we already know what works? Such thinking should horrify anyone worthy of the name of scientist, but it seems, tragically, to be the norm. Our best minds have gone into financial engineering instead of real engineering, with catastrophic results for both sectors.

----
^^1^^ compare to [[The end of an era, the beginning of another? HAL, Deep Blue and Kasparov|The end of an era, the beginning of another? HAL, Deep Blue and Kasparov]]
I came across [[a short article about book indexing|resources/on_indexing_books.pdf]], and it opened for me a little window into the world of professional indexing and indexers (I assume they have labor unions, conferences, etc.).

It's written by a professional indexer, admiring the amateurish indexing work done by an author I very much admire (Douglas Hofstadter), and provides an interesting glimpse into what was involved in Hofstadter indexing his book //Le ton beau de Marot: in praise of the music of language//.

It puts in words what I felt when I (very rarely :-( came across a "good index" at the back of a book: that a good index is a kind of a "mind-map" giving the reader a "navigational aid" by mapping some of the author's mind/concepts/thoughts (which are not reflected in the actual text; kind of at a "meta-level") in a different way (compared to the table of contents, list of illustrations, the linear progression of the book/text itself, etc.).

It turns out that the index provides new insights not only to the reader, but also to the writer, as Hofstadter indicates:
>"This tiny-print behemoth ... was a labor of love that took me a full month of fifteen-hour days to carry off.' Creating the index provided him with the sort of insights acquired in the course of this work which usually come too late to serve any useful purpose. 'Doing this index,' Hofstadter continues, 'painful though it was, afforded me one last pass back through the text, tying things together for a final time, saying goodbye to a work created out of love, and with love, for words, ideas, people... For instance, there was one giant index entry that came entirely out of the blue, catching me very much off guard. That was the entry for "conflation". F m not even sure I'm using the word in a standard way, in fact. What it means to me is "taking one thing for another", as in the sentence, "Don't conflate the meanings of 'conflate' and 'confound', please!" I noticed one instance of conflation (in this sense, at least), indexed it, then saw another, and pretty soon it dawned on me that this theme was omnipresent in my book, and so I spent several hours just searching for instances of conflation not the word, mind you, but the concept. It was a revelation to me how pervasive it was, even though the word itself occurred only a handful of time.

But indexing (like many other activities) can be taken too far, as demonstrated by another professional indexer (William S. Heckscher):
>'I have indicated that for me there should be a carefully attuned balance between Index on one side and Text, Notes, Illustrations, on the other ... Ideally, then, a good Index should be more than merely a taciturn sign-post erected after all the rest has been done and is immutably crystallized.... I prefer the Index which has a life of its own, which may pride itself on being the child of imagination, and which should enable us to spend a peaceful evening in bed, reading such an Index, as if we were reading a good novel.'

Or in the case of Hofstadter's indexing obsession, ending in (potentially endless) __self-referentiality__ [[(a-la strange loopiness?)|http://en.wikipedia.org/wiki/I_Am_a_Strange_Loop]]:
>The long entry for conflation in the index (without quotation marks as printed) is followed by a brief one for "conflation" (in double quotes):
>>"conflation": DRH's personal usage, 598; lengthy index entry for, 598. 613
>This entry, therefore, refers both to the endnote quoted above, and to the entry which immediately precedes it in the index. Self-referentiality can surely go no further.

Even __self-referentiality__ can be taken too far:
>I don't know whether anyone told Douglas Hofstadter about the persistent myth that every index must contain a joke; his index contains many. But here is a particularly playful example which may annoy many people as much as it delights me. Under 'index' in the index, a subheading reads 'typo in, 631' (the index extends over pages 599 to 632). On page 631 we find a further index entry, 'typo in index, 633'. There is, of course, no page 633.
In an excellent book titled [["When Einstein Walked with Gödel -- Excursions to the Edge of Thought"|https://www.nytimes.com/2018/05/15/books/review/review-when-einstein-walked-with-godel-jim-holt.html]]^^1^^, Jim Holt has a chapter about Georg Cantor and his work on infinities (an infinite number of types of infinity).

Holt describes Galileo's insight (and puzzlement, or even bewilderment) into the strange nature of infinity.
Galileo reasoned that the set of all whole numbers (i.e., 1, 2, 3, 4, ...) is infinitely large: there is no end to them. Looking at the set of just the squares of the whole numbers (i.e., 1, 4, 9, 16, ...) one gets the sense that their number is smaller than the number of whole numbers, since they are a subset of the wholes. But one can simply pair up or map every whole number with every square (i.e., 1 with 1, 2 with 4, 3 with 9, 4 with 16, ...), and vice versa. 
And the inevitable conclusion is that these two sets have the same size. 
So, it looks like what seems to be a proper subset of a set (i.e., the squares) actually has the same size as the set itself (i.e., the wholes).

Holt (following Cantor) describes/exemplifies infinity in a short and elegant way: an infinite set is a set that is the same size as some of its parts. In other words, an infinite set is a set that can lose some of its members without being diminished.
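
A tiny illustration of that pairing (my own, not Holt's), for the first few numbers:
{{{
wholes  = range(1, 11)                 # a stand-in for 1, 2, 3, ...
squares = [n * n for n in wholes]      # seemingly a "smaller" subset of the wholes
pairing = dict(zip(wholes, squares))   # yet every whole gets exactly one square
print(pairing)                         # {1: 1, 2: 4, 3: 9, ...} -- nothing left unpaired
}}}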

In his work on infinities, Cantor also arrived at a powerful principle that enabled him to further develop his theories, namely that there are always more //sets// of things than things. In other words, given a number of things, you can arrange those things in more ways than the number of things.
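
For a finite taste of that principle (a sketch of mine; Cantor's actual argument concerns the power set of any set, finite or not), count the subsets of a handful of things:
{{{
from itertools import combinations

things = ['a', 'b', 'c']
subsets = [set(c) for r in range(len(things) + 1)
                  for c in combinations(things, r)]
print(len(things), len(subsets))   # 3 things, 2**3 = 8 subsets: always more sets than things
}}}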

I think (and I'm not the only one :) that this combinatorial fact/power lies at the heart of human creativity. As [[Maria Popova writes|About creativity and innovation]] in [[her blog|https://www.brainpickings.org/2011/08/01/networked-knowledge-combinatorial-creativity/]]:
>creativity is combinatorial, that nothing is entirely original, that everything builds on what came before, and that we create by taking existing pieces of inspiration, knowledge, skill and insight that we gather over the course of our lives and recombining them into incredible new creations.

So, combining (ha!) the two ideas, it seems that the number of (creative) combinations is always larger than the number of building blocks, which bodes well for creativity: if the number of building blocks is very large (tends to infinity?), then the number of creative combinations is even larger (tending to a (larger?) infinity).
----
^^1^^ - searchable spelling: Gödel, Godel, Goedel
I have just finished an interesting book //The Accidental Universe - The World You Thought You Knew// by Alan Lightman, where he calls the world we live in "[[The Disembodied Universe|More on unthinkable thoughts]]", and laments the impact of science and technology on our worldview.

As a physicist, he loves and appreciates all the positive things that humanity and individuals have gained from science and technology, but he also points out some of the negatives that come with that. He says:
>It is an irony to me that the same science and technology that have brought us closer to nature by revealing these invisible worlds have also separated us from nature and from ourselves. Much of our contact with the world today is not an immediate, direct experience, but is instead mediated by various artificial devices such as television, cell phones, iPads, chat rooms, and mind-altering drugs.
and gives a few examples:
>Speaking on the phone while walking through a nature preserve represents a certain level of disconnection from one's immediate surroundings, but sending text messages is an even greater abstraction. And text messaging is becoming the preferred means of communication by a large segment of the population.
A personal experience of this happened just today -- a Sunday, early in the morning. My wife got a text message on her cellphone. It turned out to be from a friend of hers, apologizing about something she had said the day before when the two of them met. My wife immediately texted back, saying (actually, writing) that there was no need to apologize, since she was not hurt at all. To which the friend replied that she had thought about it all of last night. At this point in the back-and-forth texting, my wife said to me: "I need to talk to her. There is nothing better than a face-to-face conversation", and proceeded to pick up the phone and call her (my father would have said (in Yiddish): a shein ding (a 'nice' thing): "face-to-face" oif dem telefon (on the telephone) :-| ). More like ear-to-ear...

Other examples of this distancing are:
>When young people go to parks, they are often so busy clicking photos with their iPhones and emailing the pictures to their Facebook pages that they do not remember to stop for a moment and contemplate the scene with their own eyes. The most unfortunate aspect of this new behavior is that more and more people, and especially young people, are taking such mediated experiences as "natural", as the norm.
He describes scenes from the book //Alone Together// by Sherry Turkle (MIT psychologist and social scientist), where
>she documents the way in which email and cell phones have created emotional dislocations and superficial but expedient ways to deal with the frantically paced world of the twenty first century. [... someone describes how they] "use email to make appointments to see friends, but I am so busy that I'm often making an appointment one or two months in the future. After we set things by email, we do not call. I don't call. They don't call. What do I feel? I feel I have 'taken care of that person.' "
And Lightman continues:
>Using technology, we have redefined ourselves in such a way that our immediate surroundings and relationships, our immediate sensory perceptions of the world, are much diminished in relevance. We have trained ourselves not to be present. We have extended our bodies, created enhanced selves that might be called our "techno-selves". Our techno-selves are both bigger and smaller than our former selves. Bigger in that we have tremendous powers to communicate with the invisible world. Smaller in that we have sacrificed some of our contact and experience with the visible, immediate world.
He correctly points out that new technologies have significantly impacted humanity in the past, and that people were concerned, and sometimes horrified, by their potential and risks:
the response to the industrial revolution in the eighteenth century; the fear of humans causing the vanishing of natural landscapes in the nineteenth century; the strong emotional reactions to the rollout of the railroads in the US (e.g., Henry David Thoreau and the Transcendentalists saying: "we do not ride the railroad. It rides us."); Skype calls to friends and family instead of meeting face-to-face; our preference for looking at animals and nature through high-resolution lenses with impressive zooming, instead of looking at them with the naked eye; dinners where everyone at the table has their cell phone next to their plate, regularly checked for new messages; and so on.

Another point Lightman makes, but doesn't develop, is that without (or before the development of) modern science and technology, we are a bit like the Flatlanders from [[the book Flatland|http://www.geom.uiuc.edu/~banchoff/Flatland/]] by Edwin A. Abbott -- a strictly 2D world whose creatures have no notion or perception of 3D (or the third dimension). Abbott tells the story of these strictly two-dimensional creatures, who one day are visited by someone from the third dimension.

This comparison is a thought-provoking one. Being aware of the 3^^rd^^ dimension and reading Flatland definitely makes you feel that the Flatlanders are missing out on a lot of reality. Looking at our sensory reality without the augmentation of science and technology has to lead to the same conclusion: there is more to reality than what we directly perceive, and used wisely (and that's the key!), science and technology can make our universe so much richer. It can be a quantum leap; a kick-up into higher dimensions :)

Lightman concludes the book by writing:
>Most of us will adapt to this new way of living [...] It will be the natural and normal way of being in the world. But here and there, small pockets of people will rebel and establish small communes, where the newer technologies are left at the front gate -- in the same way that some people today still send handwritten letters and take long walks without their cell phones. In such enclaves, people will feel that they have preserved something of value, that they are living a more immediate and authentic life, that they are more connected with themselves and their surroundings. And that will be partly true. Yet they will be also disconnected from the larger world just outside of their gate, invisible in their own way.
I'm sure this will happen, as similar things happened in the past throughout human evolution. Wisdom, as well as human resiliency, one hopes, will lead us through these changes, to arrive at a balance and a "sane" way of living. Part of successfully navigating through this is remembering that progress is rarely "smooth"; the ups and downs, overshooting and undershooting, actions and reactions are the natural way of progress.
 
!!! On Luck
In an interesting article in The Atlantic titled "[[Why Luck Matters More Than You Might Think|http://www.theatlantic.com/magazine/archive/2016/05/why-luck-matters-more-than-you-might-think/476394/]]", Robert Frank brings up some good points:

* many of us seem uncomfortable with the possibility that personal success might depend to any significant extent on chance. As E. B. White once wrote, “Luck is not something you can mention in the presence of self-made men.” Wealthy people overwhelmingly attribute their own success to hard work rather than to factors like luck or being in the right place at the right time.

* __The bad news__: a growing body of evidence suggests that seeing ourselves as self-made - - rather than as talented, hardworking, and lucky - - leads us to be less generous and public-spirited. It may even make the lucky less likely to support the conditions (such as high-quality public infrastructure and education) that made their own success possible.
** in other words, if you are successful, you tend to attribute it to your hard work, talent, etc., which in turn makes you less charitable, generous, and supportive of conditions to help others be successful.

* __The good news__: when people are prompted to reflect on their good fortune, they become much more willing to contribute to the common good.
** in other words, if you are somehow made to reflect on your success and consider factors outside of your control, (e.g., good fortune/luck), you tend to be more charitable, generous, and supportive of conditions to help others be successful.

* One reason we may see success as inevitable is the "availability heuristic": using this cognitive shortcut, we tend to estimate the likelihood of an event or outcome based on how readily we can recall similar instances. Successful careers, of course, result from many factors, including hard work, talent, and chance. Some of those factors (in our control, or in our nature) recur often, making them easy to recall. But others (not in our control) happen sporadically and therefore get short shrift when we construct our life stories.

* That we tend to overestimate our own responsibility for our successes is not to say that we shouldn’t take pride in them. Pride is a powerful motivator; moreover, a tendency to overlook luck’s importance may be perversely adaptive, as it encourages us to persevere in the face of obstacles.

* And yet failing to consider the role of chance has a dark side, too, making fortunate people less likely to pass on their good fortune.

* Our personal narratives are biased in a second way: Events that work to our disadvantage are easier to recall than those that affect us positively. My friend Tom Gilovich invokes a metaphor involving headwinds and tailwinds to describe this asymmetry.
** When you’re running or bicycling into the wind, you’re very aware of it. You just can’t wait till the course turns around and you’ve got the wind at your back. When that happens, you feel great. But then you forget about it very quickly—you’re just not aware of the wind at your back. And that’s just a fundamental feature of how our minds, and how the world, works. We’re just going to be more aware of those barriers than of the things that boost us along.

* In an unexpected twist, we may even find that recognizing our luck increases our good fortune. Social scientists have been studying gratitude intensively for almost two decades, and have found that it produces a remarkable array of physical, psychological, and social changes.
** psychologists have documented additional benefits of gratitude, such as reduced anxiety and diminished aggressive impulses.

!!! On Grit
In the same issue of The Atlantic, Jerry Useem has an article titled "[[Is Grit Overrated?|http://www.theatlantic.com/magazine/archive/2016/05/is-grit-overrated/476397/]]", with some interesting points, too:

* grit - - perseverance plus the exclusive pursuit of a single passion - - is a severely underrated component of career success, and that grown-ups, too, need a better understanding of the nature and prevalence of setbacks.

* Initially, Angela Duckworth, a psychology professor at the University of Pennsylvania, guessed that the answer had to do with short-term impulse control. But impulse control did not fully account for how long people persisted at something in the absence of positive feedback such as success.

* What distinguished high performers, she found, was largely how they processed feelings of frustration, disappointment, or even boredom. Whereas others took these as signals to cut their losses and turn to some easier task, high performers did not - - as if they had been conditioned to believe that struggle was not a signal for alarm.

* To Duckworth, here was an opening. If you could change people’s beliefs about how success happens, then you had a crack at changing their behavior - - delaying their quitting point a crucial modicum (i.e., a bit) or two.

* But grit and the continued efforts and failures on the way to success are ugly, uncomfortable, messy. Successful people hide the messiness and show just the successful, polished end result.
** “If people knew how hard I had to work to gain my mastery, it would not seem so wonderful at all,” Michelangelo observed. Nietzsche concurred: “Wherever one can see the act of becoming one grows somewhat cool.”

* Duckworth’s basic admonition, “Embrace challenge,” needs a qualifier: Do it in private. Grit may be essential. But it is not attractive.
** This can make for confusing career advice. “Try hard enough and you can do just about anything, as long as you don’t seem to be trying very hard” is not the stuff of school murals.

* the prevalence of hidden practice among successful people is costly to society because it obscures the amount of failure that goes into success. It is a destructive negative cycle:
** If we routinely fool others, they routinely fool us. So when we experience messy frustration, we too readily believe that we don’t have the right stuff and give up.

* So, one useful piece of advice may be to not hide the messiness on the way to success (similar to the suggestion not to keep all the "failed research" in the drawer, because it has value).
** From the point of view of insight, education, and wisdom: you should be courageous enough to hold your failure up to others and say, in effect, this is what success looks like.


!!!! On Grit and Knowing When It's Worth It
As in many cases in life, discernment is probably the wise way to success, happiness, and everything :)
From [[Give Up|http://www.theatlantic.com/health/archive/2015/10/give-up/410485/]]:
> “Right now, there’s an effort to push everyone to be more gritty,” said Gale Lucas, a researcher with the USC Institute for Creative Technologies, in a statement. “There’s no reason not to make people grittier, but it’s important to know when to quit and reevaluate rather than blindly push through.”

> In general, willpower in the face of adversity is an asset. But as this study (and real life) illustrates, some things are just not worth achieving, no matter how gritty you are. Grit, then, is like any other gift—it’s worth evaluating whether you’re using it to help, rather than hurt, yourself.
[[Richard Feynman once said|Richard Feynman on the beauty and simplicity of nature]]:
>Perhaps a thing is simple if you can describe it fully in several different ways without immediately knowing that you are describing the same thing.

which reminds me of something my father used to say:
>If you really know mathematics and "deeply feel it", you fully understand that when you say that 3 * 4 = 12 (or 3 x 4 = 12), you also say that "3 goes/fits into 12 four times", and that "4 goes/fits into 12 three times", and that "a quarter of 12 gives you 3", and that "12 divided into three equal parts is 4".
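
In code form (a trivial sketch of mine), all of these phrasings are literally the same fact:
{{{
assert 3 * 4 == 12         # "3 times 4 is 12"
assert 12 // 3 == 4        # "3 goes/fits into 12 four times"
assert 12 // 4 == 3        # "4 goes/fits into 12 three times"
assert 12 * (1 / 4) == 3   # "a quarter of 12 gives you 3"
assert 12 / 3 == 4         # "12 divided into three equal parts is 4"
}}}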
Sometimes, this "symmetry" or multi-facetedness can be very "illuminating", since it may reveal something deep about reality, i.e., physics (or math), which is [[something that people like Bertrand Russell and Judea Pearl had observed and talked about|On The Art and Science of Cause and Effect - Judea Pearl]].

[>img[Polygon laps|./resources/polygon laps 3 4.png][https://trinket.io/python/d640d97144?outputOnly=true&runOption=run]]
I recently played with programming [["polygon laps"|https://trinket.io/python/bb7245123b]] (inspired by [[Dave Whyte|https://beesandbombs.tumblr.com/post/161295765794/polygon-laps]]) and realized another way in which this simple but deep understanding manifests itself. If you click on the image which leads to a Python-driven animation, you can see 2 dots tracing a triangle (3 sides) and a square (4 sides), in something like a polygon laps race. Running this for a while (at least 12 segments :) makes the statements above clear in a simple, visual way ... :)
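
The "at least 12 segments" is no accident: 12 is the least common multiple of 3 and 4, the first time both dots complete whole laps simultaneously. A quick check (my own, assuming each dot traverses one polygon side per time step):
{{{
from math import gcd

sides_a, sides_b = 3, 4                            # the triangle and the square
lcm = sides_a * sides_b // gcd(sides_a, sides_b)
print(lcm)   # 12 -- the first moment both dots are back at a starting vertex together
}}}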







----
for search purposes (and since I believe that this is the example my father used :), this can equally apply to exploring/understanding multiple facets of 2 x 3 (or 2 * 3, 2*3, 2x3 and so on)
Pushpinder Singh and [[Marvin Minsky|https://en.wikipedia.org/wiki/Marvin_Minsky]] (of MIT; a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]"; one of the Founding Fathers of Artificial Intelligence) wrote [[an article titled "Failure-Directed Reformulation"|http://dspace.mit.edu/bitstream/handle/1721.1/46197/40495901-MIT.pdf?sequence=2]], in which [[they say|https://www.researchgate.net/publication/2555137_Failure-Directed_Reformulation]]:
>[In the] domain of ordinary, common-sense problem solving [, ...w]e study the heuristic method of reformulation, that is, of changing the representation of the problem. We are interested in reformulations that cause a substantial change in viewpoint, one that is likely to be different enough from the original viewpoint that we have a good chance of getting unstuck. We present a simple way of organizing representations so that reformulation can occur quickly if a dead-end is encountered while applying current method of solution. 

I recently came across a vivid example of reformulation helping to solve a common-sense problem.
The problem is the famous "water jugs" problem, also demonstrated in the movie "Die Hard With a Vengeance" ([[clip running through the problem|https://www.youtube.com/watch?v=6cAbgAaEOVE]]).

Basically: You have a 3-gallon and a 5-gallon jug that you can fill from a fountain of water. The requirement/problem is to fill one of the jugs with exactly 4 gallons of water. How do you do it?
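
Before looking at the reformulation, it is worth noting that the problem also yields to a perfectly pedestrian state-space search. Here is a minimal breadth-first sketch of my own in Python (jug sizes and target are parameters; this is not Polster's method, nor the mobile app described below):
{{{
from collections import deque

def solve_jugs(cap_a=5, cap_b=3, target=4):
    """Breadth-first search over (a, b) water amounts; returns the shortest sequence of states."""
    parents = {(0, 0): None}
    queue = deque([(0, 0)])
    while queue:
        a, b = queue.popleft()
        if a == target or b == target:           # found: trace the path back to the start
            path, state = [], (a, b)
            while state is not None:
                path.append(state)
                state = parents[state]
            return path[::-1]
        pour_ab = min(a, cap_b - b)              # how much fits when pouring a -> b
        pour_ba = min(b, cap_a - a)
        for nxt in [(cap_a, b), (a, cap_b),      # fill either jug
                    (0, b), (a, 0),              # empty either jug
                    (a - pour_ab, b + pour_ab),  # pour a -> b
                    (a + pour_ba, b - pour_ba)]: # pour b -> a
            if nxt not in parents:
                parents[nxt] = (a, b)
                queue.append(nxt)

print(solve_jugs())   # [(0, 0), (5, 0), (2, 3), (2, 0), (0, 2), (5, 2), (4, 3)]
}}}
Note that the sequence it finds matches the first rows of the simulation output shown below.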

The interesting math lecturer [[Burkard Polster|http://www.qedcat.com/]] (AKA [[Mathologer|https://www.youtube.com/channel/UC1_uAIS3r8Vu6JjXWvastJg]]) goes through [[a solution of this problem|https://www.youtube.com/watch?v=0Oef3MHYEC0]] in an original reformulation of the problem. (He has also written [[a good paper|http://www.qedcat.com/billiards1.pdf]] on it.)

Polster beautifully reformulates the water jugs problem as a billiards balls bouncing in a parallelogram-shaped table, and comes up with a very elegant solution.

[>img[Billiards Jugs|resources/billiards jugs 1.png][resources/billiards jugs.png]] (click on the [[image to zoom in|resources/billiards jugs.png]])

The reformulated solution is so beautiful and elegant (did I already say that?  :), that it inspired me to program it as a mobile app using [[Coronalab's|https://coronalabs.com/]] [[physics engine|https://docs.coronalabs.com/api/library/physics/index.html]] (which makes the ball bouncing easy to program :) and the [[Lua programming language|https://www.lua.org/]].

As the ball bounces around the table, it touches the edges at locations indicating the amounts of water being transferred from one water jug to the other.
The water quantities (or, reformulated (!), the billiards table "collision coordinates") are shown in the simulation/program console on the right side of the image, and reproduced below (first column: gallons in the 5-gallon jug; second column: gallons in the 3-gallon jug):
	5	0
	2	3
	2	0
	0	2
	5	2
	4	3
	4	0
	1	3
	1	0
	0	1
	5	1
	3	3
	3	0
	0	3
The way to read the results is by row:
- you start with 5 gallons in the 5-gallon jug, and 0 in the 3-gallon jug
- then you pour from the 5 to the 3 jug, and end up with 2 gallons in one and 3 gallons in the other
- next, you empty the 3 jug and are left with 2 in the 5 jug, and 0 in the 3 jug
- then you fill up the 5-gallon jug
- and you pour 1 gallon into the 3-gallon jug which already has 2 gallons in it,
leaving only/exactly 4 gallons in the 5-gallon jug.
QED.
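
Out of curiosity, here is a minimal sketch of the same pouring sequence in Python (the app itself is in Lua on the Corona physics engine; this just replays the "bounce" logic as arithmetic):
{{{
# Generate the pouring sequence the bouncing ball traces, starting with
# the 5-gallon jug full (a sketch; the mobile app is written in Lua).
def jug_sequence(big_cap=5, small_cap=3):
  big, small = big_cap, 0            # start: 5-gallon jug full, 3-gallon empty
  states = [(big, small)]
  while (big, small) != (0, small_cap):
    if small == small_cap:           # 3-gallon jug full: empty it
      small = 0
    elif big == 0:                   # 5-gallon jug empty: refill it
      big = big_cap
    else:                            # pour from the 5- into the 3-gallon jug
      amount = min(big, small_cap - small)
      big, small = big - amount, small + amount
    states.append((big, small))
  return states

for big, small in jug_sequence():
  print(big, small)                  # reproduces the console output above
}}}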

BTW, the mobile app simulation can run in reverse, starting with filling up the 3-gallon jug first, and produces the following solution sequence:
	0	3
	3	0
	3	3
	5	1
	0	1
	1	0
	1	3
	4	0
	4	3
	5	2
	0	2
	2	0
	2	3
	5	0

This, again, should be interpreted as:
- you start with 3 gallons in the 3-gallon jug, and 0 in the 5-gallon jug
- then you pour all of it to the 5 jug
- then you fill the 3 jug again
and so on, with the desired result of 4 gallons in the 5 jug (line 4 0 above) after a few more pourings.

Looking at the numbers above: an interesting (and sometimes useful or, as the scene from the movie Die Hard demonstrates, even life-saving) result of the simulation is that it shows that, in this case, we can actually produce any amount of water from 1 gallon to 8 gallons, in increments of 1 gallon, if we need to. (This is no accident: since 3 and 5 are coprime -- gcd(3, 5) = 1 -- every whole-gallon amount up to 5 + 3 = 8 gallons appears somewhere among the states.)


In an interesting article titled [[Doing Mathematics Differently|http://inference-review.com/article/doing-mathematics-differently]], mathematician and computer scientist [[Gregory Chaitin|https://en.wikipedia.org/wiki/Gregory_Chaitin]] writes:

>We can compare the complexity of a theory with the complexity of the data very easily because both are finite strings. Leibniz saw that a theory the same size as the data is useless in the sense that it is not possible to cover a debt of ten dollars with another debt of ten dollars. Explanation is a form of compression. If a theory is smaller than the data, then in that case, as in so many others, less is more. A successful explanation is a matter of covering a large debt with a much smaller one.
>
>If less is more, smaller is better.
>
>The best theory is the smallest program that generates the data.
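
A toy illustration of "explanation is compression" (my example, not Chaitin's): the same 2,000 characters of "data", stored verbatim versus generated by a much smaller "theory":
{{{
data = "01" * 1000                # 2,000 characters of "observations"
theory = 'print("01" * 1000)'     # an 18-character program that generates them

print(len(data))    # 2000 -- the size of the raw data
print(len(theory))  # 18 -- the size of the "theory": a successful explanation
}}}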

Chaitin talks about Algorithmic Information Theory, or AIT.
AIT looks at complexity, computability, and randomness issues, like:
>AIT provides an entirely new perspective on the halting problem. Given a string of bits S, is it possible to prove that S is irreducible? Better, sharper. Given S, is it possible to prove that there exists no program P smaller than S that generates S?
>
>No such proof is possible. This is an incompleteness result, a measure of the unexpected depth of the concept of complexity, its inner richness.
>
>A program P is elegant if no program smaller than P produces the same output as P. There may be several elegant programs that produce the same output, and so yield an exciting tie. If we have one of these programs, can we prove that it is an elegant program?
>
>It is very, very hard.

Besides the philosophical questions Chaitin brings up, he also raises questions about program elegance: effectiveness (does it solve the problem?) vs. efficiency (does it solve it in "the best way" possible?); is the solution "beautiful" (elegant)?
Here he looks at the brevity of the solution/program, but, as with any other complex thing, this aspect is a central one in programming and implementation: sometimes shortness is not the most important quality.

[[Matthew Fuller in his article on Elegance|https://aestech.wikischolars.columbia.edu/file/view/Fuller+-+Elegance+(Software+Studies).pdf]] makes [[this point very clearly|On computer program elegance]].

Chaitin continues:
>In 1931, [[Gödel exploded this belief|resources/Boolos-godel-in-single-syllables.pdf]]. The assertion, “I am unprovable,” may be true and unprovable, but it is also bizarre (see also [[The world's shortest explanation of Gödel's theorem]]). This does not look at all like ordinary mathematics. I do not know whether proving that programs are elegant looks like ordinary mathematics. Mathematicians can ignore the proof that I just presented as well.
>
>What AIT does, however, is to introduce a notion of complexity into the discussion of incompleteness. With Gödel’s original approach you could not tell whether incompleteness was ubiquitous, commonplace, or restricted only to strange, degenerate cases.
>
>AIT suggests that incompleteness is, in fact, pervasive; it tells us that every mathematical theory has finite complexity, but that the world of pure mathematics has infinite complexity. Just proving that programs are elegant for all possible programs would require infinite complexity.
>
>AIT makes incompleteness look natural.
(easy for Chaitin to say :)

In his flowing, witty, and thoughtful article [[Why Read the Classics?|http://www.nybooks.com/articles/1986/10/09/why-read-the-classics/]], Italo Calvino tries on a few definitions of "Classic":

* The classics are the books of which we usually hear people say: “I am rereading…” and never “I am reading….”
and he forgivingly adds:
>The reiterative prefix before the verb “read” may be a small hypocrisy on the part of people ashamed to admit they have not read a famous book. To reassure them, we need only observe that, however vast any person’s basic reading may be, there still remain an enormous number of fundamental works that he has not read.
As for the readers of the classics, Calvino observes:
>[...] to read a great book for the first time in one’s maturity is an extraordinary pleasure, different from (though one cannot say greater or lesser than) the pleasure of having read it in one’s youth. Youth brings to reading, as to any other experience, a particular flavor and a particular sense of importance, whereas in maturity one appreciates (or ought to appreciate) many more details and levels and meanings.
Classics can both imprint in us (if reading for the first time) or reawaken in us (if re-reading):
>Books read then can be (possibly at one and the same time) formative, in the sense that they give a form to future experiences, providing models, terms of comparison, schemes for classification, scales of value, exemplars of beauty—all things that continue to operate even if the book read in one’s youth is almost or totally forgotten. If we reread the book at a mature age we are likely to rediscover these constants, which by this time are part of our inner mechanisms, but whose origins we have long forgotten. A literary work can succeed in making us forget it as such, but it leaves its seed in us.
* The classics are books that exert a peculiar influence, both when they refuse to be eradicated from the mind and when they conceal themselves in the folds of memory, camouflaging themselves as the collective or individual unconscious.
and more insightful definitions:
>Hence, whether we use the verb “read” or the verb “reread” is of little importance. Indeed, we may say:
* Every rereading of a classic is as much a voyage of discovery as the first reading.
* Every reading of a classic is in fact a rereading.
* A classic is a book that has never finished saying what it has to say.
About the direct experience of "reading the source":
>The reading of a classic ought to give us a surprise or two vis-à-vis the notion that we had of it. For this reason I can never sufficiently highly recommend the direct reading of the text itself, leaving aside the critical biography, commentaries, and interpretations as much as possible. Schools and universities ought to help us to understand that no book that talks about a book says more than the book in question, but instead they do their level best to make us think the opposite.
* A classic does not necessarily teach us anything we did not know before. In a classic we sometimes discover something we have always known (or thought we knew), but without knowing that this author said it first, or at least is associated with it in a special way. And this, too, is a surprise that gives a lot of pleasure, such as we always gain from the discovery of an origin, a relationship, an affinity.
* The classics are books that we find all the more new, fresh, and unexpected upon reading, the more we thought we knew them from hearing them talked about.
And again, the impact of schools:
>Naturally, this only happens when a classic really works as such—that is, when it establishes a personal rapport with the reader. If the spark doesn’t come, that’s a pity; but we do not read the classics out of duty or respect, but only out of love. Except at school. And school should enable you to know, either well or badly, a certain number of classics among which—or in reference to which—you can then choose your classics. School is obliged to give you the instruments needed to make a choice, but the choices that count are those that occur outside and after school.
* We use the word “classic” of a book that takes the form of an equivalent to the universe, on a level with the ancient talismans. With this definition we are approaching the idea of the “total book,” as Mallarmé conceived of it.
* Your classic author is the one you cannot feel indifferent to, who helps you to define yourself in relation to him, even in dispute with him.
* A classic is a book that comes before other classics; but anyone who has read the others first, and then reads this one, instantly recognizes its place in the family tree.
Calvino argues that one cannot "just read the classics" and ignore the current, daily affairs:
>The latest news may well be banal or mortifying, but it nonetheless remains a point at which to stand and look both backward and forward. To be able to read the classics you have to know “from where” you are reading them; otherwise both the book and the reader will be lost in a timeless cloud. This, then, is the reason why the greatest “yield” from reading the classics will be obtained by someone who knows how to alternate them with the proper dose of current affairs. And this does not necessarily imply a state of imperturbable inner calm. It can also be the fruit of nervous impatience, of a huffing-and-puffing discontent of mind.
So,
* A classic is something that tends to relegate the concerns of the moment to the status of background noise, but at the same time this background noise is something we cannot do without.
and
* A classic is something that persists as a background noise even when the most incompatible momentary concerns are in control of the situation.

And he concludes:
>[One should not] believe that the classics ought to be read because they “serve any purpose” whatever. The only reason one can possibly adduce is that to read the classics is better than not to read the classics.
>
>And if anyone objects that it is not worth taking so much trouble, then I will quote Cioran (who is not yet a classic, but will become one):
>
>    While they were preparing the hemlock, Socrates was learning a tune on the flute. “What good will it do you,” they asked, “to know this tune before you die?” 
In his book [[Three Scientists and Their Gods|Three Scientists and Their Gods - Robert Wright]] Robert Wright writes about Edward Fredkin and his reversible computer/logic idea:
>[The physicist] [[Rolf Landauer|https://en.wikipedia.org/wiki/Rolf_Landauer]] concluded that computers are indeed necessarily irreversible and thus necessarily dissipate energy, giving off heat. His logic -- so seemingly solid that it was not challenged for a decade -- went as follows. At the core of every computer on the market are lots of gates -- notably “and” gates and “or” gates that translate digital input into digital output. An “and” gate, for example, has two input lines and one output line, all of which can represent -- through their level of voltage, typically -- either 1 or 0. If the voltage in both input lines represents 1, the output line will then register a 1. But if a representation of 0 enters either input line, or both, the output line will register a 0. So, when the output line reads 0, there is no way of knowing exactly which representations the two input lines previously housed; information loss, and therefore energy dissipation, is inseparable from computation as we know it. The electrons representing information are routinely banished without a trace. And, while the heat they then constitute does “remember” the erased information, the computer itself has no recollection.
But Fredkin didn't like the idea of losing information as part of calculating; it didn't seem elegant.
>Strictly speaking, Landauer's contention -- that, although the universe never forgets, computers always do -- didn't contradict Fredkin's belief that the universe is a computer. It is conceivable that an irreversible process at the very core of reality could give rise to the reversible behavior of molecules, atoms, electrons, and the rest. After all, irreversible computers (that is, all computers on the market) can simulate reversible billiard balls. But they do so in a convoluted way, says Fredkin, and the connection between an irreversible substratum and a reversible stratum would, similarly, be tortuous -- or, as he puts it, "aesthetically obnoxious." Fredkin prefers to think that the computer underlying reversible reality does its work gracefully. So at Caltech he set out to prove that computers don't have to destroy information -- that a reversible computer is in principle possible.
So, Fredkin worked on devising a reversible logic gate which would be the basic building block of a reversible computer.
>He succeeded. He invented what has since become known as the “Fredkin gate.” Instead of two input lines and one output line, it has three of each, and its input can always be inferred from its output. Fredkin showed that an entire computer could be built with such gates, and that, by using a special logic designed to conserve information, it could do anything any other computer can do. He had -- on paper, at least -- a reversible universal computer.
Fredkin's coworker at MIT, Tommaso Toffoli, came up with a variation of the Fredkin gate, aptly called the Toffoli gate, which can be used to build all sorts of reversible logic designs and also a reversible computer.

I decided to use an example of a 1-bit full adder from a good tutorial titled [["From Truth Tables to Programming Languages: Progress in the Design of Reversible Circuits"|http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.378.6158&rep=rep1&type=pdf]] by Rolf Drechsler and Robert Wille, to demonstrate some "reversibility concepts", using Python code (to represent the logic operations/gates).
 
<html>
	<table>
		<tr>
			<td>
				<img src="resources/reversible gates small.png"><br>Toffoli (left) & Fredkin (right)
			</td>
			<td>
				<img src="resources/reversible adder small.png">reversible adder (Toffoli)
			</td>
			<td>
				<img src="resources/adder truth table small.png">reversible adder truth table
			</td>
		</tr>
	</table>
</html>

and the [[Python implementation|https://trinket.io/python/e9e3a3b888]]:
<html>
	<table>
		<tr>
			<td>
				<img src="resources/Python reversible gates small.png"><p><a href="https://trinket.io/python/e9e3a3b888">Python reversible adder</a>
			</td>
			<td>
				<img src="resources/Python adder truth tables small.png"><p>Python reversible adder truth table
			</td>
	         </tr>
	</table>
</html>
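
If you would rather read code than screenshots, here is a minimal sketch along the same lines (my own reconstruction; the gate wiring below is one standard reversible full-adder construction, not necessarily the tutorial's exact circuit):
{{{
def toffoli(a, b, c):
  """Toffoli (CCNOT) gate: flips c iff a and b are both 1; reversible."""
  return a, b, c ^ (a & b)

def cnot(a, b):
  """Controlled-NOT gate: flips b iff a is 1; reversible."""
  return a, b ^ a

def fredkin(a, b, c):
  """Fredkin (controlled-swap) gate: swaps b and c iff a is 1; reversible."""
  return (a, c, b) if a else (a, b, c)

def full_adder(a, b, cin):
  """A 1-bit full adder from Toffoli/CNOT gates plus one 0-initialized line."""
  d = 0                            # ancilla line, initially 0
  a, b, d = toffoli(a, b, d)       # d = a AND b
  a, b = cnot(a, b)                # b = a XOR b
  b, cin, d = toffoli(b, cin, d)   # d = carry = majority(a, b, cin)
  b, cin = cnot(b, cin)            # cin = sum = a XOR b XOR cin
  return cin, d                    # (sum, carry); the inputs stay recoverable

# Truth table: matches an ordinary (irreversible) full adder row for row.
for a in (0, 1):
  for b in (0, 1):
    for cin in (0, 1):
      print(a, b, cin, '->', full_adder(a, b, cin))
}}}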
([[from "The Unreasonable Effectiveness of Mathematics" by Richard W. Hamming|http://www.dartmouth.edu/~matc/MathDrama/reading/Hamming.html]], which [[I write about here|On why Math works for us]])

>Man, so far as we know, has always wondered about himself, the world around him, and what life is all about. We have many myths from the past that tell how and why God, or the gods, made man and the universe. These I shall call theological explanations. They have one principal characteristic in common -- there is little point in asking why things are the way they are, since we are given mainly a description of the creation as the gods chose to do it.

>Philosophy started when man began to wonder about the world outside of this theological framework. An early example is the description by the philosophers that the world is made of earth, fire, water, and air. No doubt they were told at the time that the gods made things that way and to stop worrying about it.

>From these early attempts to explain things slowly came philosophy as well as our present science. Not that science explains "why" things are as they are -- gravitation does not explain why things fall -- but science gives so many details of "how" that we have the feeling we understand "why." Let us be clear about this point; it is by the sea of interrelated details that science seems to say "why" the universe is as it is.

The last point, about science not answering the //why// but rather giving us a scientifically-convincing (rigorous, rational, etc.) //how//-structure, is a point many people miss. It's like a mental/cognitive "bootstrap" structure and process. We "weave" a fabric of concepts, relationships, formulas, connections, etc., and call it an explanation. You can ask "why" and get an explanation, then keep asking why, and keep "drilling down" with more and more why's ([[It's like Alan Kay said about objects and turtles|Smalltalk programs are just objects. It's all objects all the way down. Until you reach turtles.]]). And, at a certain level, it //is// an explanation, but it's a fabric of how-explanations (or deeper and deeper chains of explanations), not a why-explanation.

In his article [["Reasonably effective: Deconstructing a miracle"|resources/Wilczek_reasonably1.pdf]] Frank Wilczek makes a similar observation:
>Since any answer to a //why// question can be challenged with a further //why//, any reasoned argument must terminate in premises for which no further reason can be offered. At that point we pass, necessarily, from reason to faith.

([[a local copy of Hamming's article|resources/Hamming.html]])
In [[an interview on the Farnam Street site|https://www.fs.blog/2016/10/peter-bevelin-seeking-wisdom-mental-models/]], the Swedish investor Peter Bevelin (the author of the excellent book [[Seeking Wisdom - from Darwin to Munger|http://www.valueinvestingworld.com/2007/10/interview-with-peter-bevelin-author-of.html]]) captures some of his "lessons learned" and "pearls of wisdom", inspired by two "sages", mavens, and value investors, [[Charlie Munger|https://www.valuewalk.com/charlie-munger-page/]] and [[Warren Buffett|https://www.valuewalk.com/warren-buffett/]] (of Berkshire Hathaway):
* there are a few basic, time-filtered fundamental concepts, methods, and "tricks" that are good enough. As [Charlie] Munger says, "The more basic knowledge you have the less new knowledge you have to get."
* when I look at something “new”, I try to connect it to something I already understand and if possible get a wider application of an already existing basic concept that I already have in my head.
* as the British statistician George Box said: we shouldn’t be preoccupied with optimal or best procedures but good enough over a range of possibilities likely to happen in practice.
* it's important to "pick one's battles" and focus on the long-term consequences of your actions. As Munger said, “A majority of life's errors are caused by forgetting what one is really trying to do.”
* don't be too quick to jump to conclusions and be judgemental
* you have to "pick your poison" since there is always a set of problems attached to any system or approach – it can’t be perfect. The key is to try to move to a better set of problems you can accept
* how efficient and simplified life is when you deal with people you can trust. This is true professionally (your teammates, your manager, your chain of command up to the CEO), and personally (your spouse, your friends, your neighbors)
* luck plays a big role in life.
* most predictions are wrong, and prevention, robustness, and adaptability are way more important.
* people or businesses that are foolish in one setting often are foolish in another one ("The way you do anything, is the way you do everything").
* A checklist is no substitute for thinking.
* don't get too involved in details to the point where you can’t see the forest for the trees and you go up too many blind alleys.
* as Warren Buffett said: You only have to be right on a very, very few things in your lifetime as long as you never make any big mistakes.
From a short article in the New York Times titled [["How to Help Teenagers Embrace Stress"|https://www.nytimes.com/2018/09/19/well/family/how-to-help-teenagers-embrace-stress.html]]:

Like in sports and the business world, "no pain, no gain":
>To reframe how we think about a phenomenon that has been roundly, and wrongly, pathologized, we should appreciate that healthy stress is inevitable when we operate at the edge of our abilities. Stretching beyond familiar limits doesn’t always feel good, but growing and learning — the keys to school and much of life — can’t happen any other way.
Stress is unavoidable, and if embraced (in reasonable amounts), it is a way to develop and improve skills and abilities. Emotional management is important because life is hard. As Buddhism indicates, life is full of "unease" (sometimes translated as "suffering"), so we are advised to develop an attitude and ways to handle it:
>According to Jeremy P. Jamieson, an associate professor of psychology at the University of Rochester who studies how stress impacts emotions and performance, “Avoiding stress doesn’t work and is often not possible. To achieve and grow, we have to get outside our comfort zones and approach challenges.”
Also, somewhat similar to my observation about the effects of "bootcamps":
>Stress is also known to have an inoculating effect. Research shows that people who overcome difficult life circumstances go on to enjoy higher-than-average levels of resilience. In short, achieving mastery in trying situations builds emotional strength and psychological durability.
I came across an article ([[MATHEMATICS EDUCATION:THEORY, PRACTICE & MEMORIES OVER 50 YEARS|resources/Mason-mathematical-action-structures-of-noticing.pdf]]) by [[John Mason|http://www.mathematics.open.ac.uk/People/john.mason]], shining some light on his experience when "doing math", which is (no surprise) quite spiritual in nature:
>''STRUCTURED AWARENESS''
>I have often thought and sometimes said, that when I am engaged in my enquiries, I enjoy it most when I am at the overlap between mathematics, psychology and sociology, philosophy and religion. There is something about working on a mathematical problem which is for me profoundly spiritual; something about working on teaching and learning that integrates all three traditional aspects of my psyche (awareness, behaviour and emotion, or more formally, cognition, enaction and affect) as well as will and intention, which themselves derive from ancient psycho-religious philosophies such as expressed in the Upanishads (Radhakrishnan, 1953) and the Bhagavad Gita (Mascaró, 1962; see also Raymond, 1972). I associate this sense of integration with an enhanced awareness, a sense of harmony and unity, a taste of freedom, which is in stark contrast to the habit and mechanicality of much of my existence. Even a little taste of freedom arising in a moment of participating in a choice, of responding freshly rather than reacting habitually is worth striving for.
>One way to summarise such experiences is that, in the end, what I learn most about, is myself. This observation is not as solipsistic, isolating and idiosyncratic as it might seem, for in order to learn about myself I need to engage with others (who may, as is the case for hermits, be virtual), and I need to be supported and sustained in those enquiries. A suitable community can be invaluable, though an unsuitable community can be a millstone! I reached this conclusion through realising that when a researcher is reporting their data, and then analysing it, the distinctions they make, the relationships they notice, the properties they abstract all tell me as much about their own sensitivities to notice and dispositions to act as they do about the situation-data being analysed. Indeed I proposed an analogy to the Heisenberg principle in physics: the ratio of the precision of detail of analysis to the precision of detail about the researcher is roughly constant (Mason, 2002, p. 181). 
Wisdom means doing the appropriate thing in a given context/situation.

Life should not be lived as "always striving for the average/middle/mean". In my mind, it'd be a terrible experience^^1^^ (see the pictures :)
[>img[Averages perception|resources/averages perception 1.png][resources/averages perception.png]]

I suspect that people are confusing the (Buddhist) concept of choosing the "Middle Way" with "moderation". But moderation/temperance is only one way to respond, if/when appropriate (not always). Sometimes an extreme response (in one direction or the other, again depending on the context/appropriateness) is the right and wise response.

This is also echoed in [[Aristotle's ethics, where he says|https://plato.stanford.edu/entries/aristotle-ethics/#DocMea]]: "Finding the mean in any given situation is not a mechanical or thoughtless procedure, but requires a full and detailed acquaintance with the circumstances."

As a short Buddhist story illustrates:
A disciple observed his Zen Master giving certain advice to one person, and the exact opposite advice to another person, and was very troubled by this.
After the two people had left, he asked his Master: How can you give one advice to one person, and a totally opposite advice to another?
The Master answered: if I see a person riding a bicycle on the road, and he is getting too close to the ditch on the left I yell to him “go right!” and if he is too close to the ditch on the right I yell to him “go left!”

----
^^1^^ - Having said all of this, sometimes averaging out "perception" is very useful and efficient. For example in computerized pattern recognition, image processing, and machine learning, certain characteristics (or a "signature") of a situation/image can be efficiently and effectively extracted by calculating and using the appropriate averages, basically creating a simplified model or simulation of the actual thing one wants to study/process/analyze.

This inspired a project in one of the programming classes I had developed and have been teaching, in a unit on Image Processing:
<html>
	<table>
		<tr>
			<td>
				<img src="resources/hm11.png"><br>5 x 7 cells
			</td>
			<td>
				<img src="resources/hm21.png"><br>11 x 15 cells
			</td>
			<td>
				<img src="resources/hm31.png"><br>21 x 25 cells
			</td>
			<td>
				<img src="resources/hm41.png"><br>31 x 35 cells
			</td>
			<td>
				<img src="resources/hm51.png"><br>51 x 55 cells
			</td>
			<td>
				<img src="resources/hm61.png"><br>150 x 179 pixels
			</td>
		</tr>
	</table>
</html>
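
The effect in the images above is block averaging. A minimal sketch, assuming the Pillow library is available ("portrait.png" is a placeholder filename):
{{{
from PIL import Image

def mosaic(path, cells_x, cells_y):
  img = Image.open(path)
  # Shrinking with BOX resampling averages all pixels inside each cell;
  # enlarging the result with NEAREST paints each average as a flat block.
  small = img.resize((cells_x, cells_y), Image.BOX)
  return small.resize(img.size, Image.NEAREST)

mosaic("portrait.png", 5, 7).save("mosaic_5x7.png")      # coarsest version
mosaic("portrait.png", 51, 55).save("mosaic_51x55.png")  # finer version
}}}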
If you have enjoyed Douglas Hofstadter's great (and I mean, GREAT!) book [[Gödel, Escher, Bach|https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach]] (AKA GEB), you are going to //love// his book [[Metamagical Themas|https://en.wikipedia.org/wiki/Metamagical_Themas]] (and if you have not read GEB, you are going to //love// Metamagical Themas :)

In this book he has a chapter (#4) about self-modifying structures and reflexivity, where he tells the following story about attending a guest lecture by the Computer Science pioneer [[George Forsythe|https://en.wikipedia.org/wiki/George_Forsythe]], in which Forsythe brought up a key point:

>[...] the purpose of computing was to do anything that people could figure out how to mechanize. Thus, he [Forsythe] pointed out, computing would inexorably make inroads on one new domain after another, as we came to recognize that an activity that had seemed to require ever-fresh insights and mental imagery could be replaced by an ingenious and subtly worked-out collection of rules, the execution of which would then be a form of glorified drudgery carried out at the speed of light.
IBM's [[Deep Blue|https://en.wikipedia.org/wiki/Deep_Blue_%28chess_computer%29]], anyone?

His insight was great! Back then (Forsythe died in 1972) Deep Blue was not even an electronic schematic/blueprint (the [[chess match with Garry Kasparov|https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov]] took place in 1996/1997), but Forsythe gave other examples relevant to his era, like a compiler program, and ~LISP-like [[self-modifying programs|https://en.wikipedia.org/wiki/Self-modifying_code]] (computer programs which, as part of their execution, change their own code).

Forsythe also made a joke about a saying by another CS pioneer - [[Richard Hamming|https://en.wikipedia.org/wiki/Richard_Hamming]] - who famously said:
> The purpose of computing is insight, not numbers.
On which Forsythe riffed:
>The Purpose of Computing Numbers Is Not Yet in Sight.
In a well-thought-out [[blog post titled Nick Bostrom is wrong about the dangers of artificial intelligence|https://ventrellathing.wordpress.com/2015/09/02/nick-bostrom-is-wrong-about-the-dangers-of-artificial-intelligence/]], [[Jeffrey Ventrella|https://ventrellathing.wordpress.com/about/]] brings up several good points; in one of them, he tackles Bostrom using the latter's own "fulcrum" of anthropomorphizing.

Ventrella, who has been programming AI programs for 20+ years, makes a critical point:
>''Intelligence is ~Multi-Multi-Multi-Dimensional''
>
>Bostrom plots a one-dimensional line which includes a mouse, a chimp, a stupid human, and a smart human. And he considers how AI is traveling along this line, and how it will fly past humans.
>
[img[Bostrom's linear intelligence lines|./resources/bostrom_intelligence_linear_1.png][./resources/bostrom_intelligence_linear.png]]
>Intelligence is not one dimensional. It’s already a bit of a simplification to plot mice and chimps on the same line – as if there were some single number that you could extract from each and compute which is greater.
>
>>Charles Darwin once said: “It is not the strongest of the species that survives, nor the most intelligent that survives. It is the one that is most adaptable to change.”

>''WE HAVE ONLY OURSELVES TO FEAR BECAUSE WE ARE INSEPARABLE FROM OUR AI''
>
>We and our AI grow together, side by side. AI evolves with us, for us, in us. It will change us as much as we change it. This is the posthuman condition. You probably have a smart phone (you might even be reading this article on it). Can you imagine what life was like before the internet? For half of my life, there was no internet, and yet I can’t imagine not having the internet as a part of my brain. And I mean that literally. If you think this is far-reaching, just wait another 5 years. Our reliance on the internet, self-driving cars, automated this, automated that, will increase beyond our imaginations.
>
>We will not be able to separate it from ourselves – increasingly over time. We won’t see it as “other” – we might just see ourselves as having more abilities than we did before.
>
>[Like explosives and chemical weapons] Those abilities could include a better capacity to kill each other, but also a better capacity to compose music, build sustainable cities, educate kids, and nurture the environment.
>
>If my interpretation is correct, then Bostrom’s alarm bells might be better aimed at ourselves. And in that case, what’s new? We have always had the capacity to create love and beauty … and death and destruction.
The brilliant mathematician/engineer/scientist [[Claude Shannon|https://en.wikipedia.org/wiki/Claude_Shannon]] wrote an interesting paper titled [["Prediction and Entropy of Printed English"|https://www.princeton.edu/~wbialek/rome/refs/shannon_51.pdf]], in which he invented a "game" to experientially measure the entropy [the uncertainty regarding which particular letters/words are chosen from a set of all the letters/words] of the English language.

From Shannon's article, his description of the Entropy Game:
>the subject knows the text up to the current point and is asked to guess the next letter. If he is wrong, he is told so and asked to guess again. This is continued until he finds the correct letter. A typical result with this experiment is shown below.
The user needs to guess the next letter in the sentence; when successful, the number of guesses is shown below that letter, and the user moves on to guessing the next letter, all the way to the end of the sentence.

(click on the image below to play the game implemented in Python)

[img[Shannon's Entropy Game output|resources/entropy game.png][https://trinket.io/python/0459abb846?outputOnly=true&runOption=run]]
See a [[Python implementation of Shannon's Entropy Game|https://trinket.io/python/0459abb846?outputOnly=true&runOption=run]].
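
A bare-bones console version of the game might look like this (a sketch; the trinket above is the full implementation):
{{{
# The player guesses each next character; the number of guesses per
# character is recorded, as in Shannon's experiment.
sentence = "the cat sat on the mat"        # any text works here
guess_counts = []
for i, target in enumerate(sentence):
  print("So far:", repr(sentence[:i]))
  guesses = 0
  while True:
    guesses += 1
    if input("Guess the next character: ") == target:
      break
  guess_counts.append(guesses)
print(list(zip(sentence, guess_counts)))   # characters paired with guess counts
}}}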

Brian Christian, in his book [["The Most Human Human"|The Most Human Human - by Brian Christian]] refers to Shannon's game and the concept of entropy in language and its relevance/importance/implications regarding intelligence (human or artificial):
>the average entropy of a letter as determined by native speakers playing the Shannon Game comes out to somewhere between 0.6 and 1.3 bits. That is to say, on average, a reader can guess the next letter correctly //half// the time. (Or, from the writer's perspective, as Shannon put it: "When we write English half of what we write is determined by the structure of the language and half is chosen freely.") That is to say, a letter contains, on average, the same amount of information -- 1 bit -- as a coin flip.

Christian makes a connection between success in the Entropy Game and success in passing the Turing Test:
>Scientists all the way back to Claude Shannon have regarded creating an optimal playing strategy for this game as equivalent to creating an optimal compression method for English. These two challenges are so related that they amount to one and the same thing.
In other words, if one can create software that successfully (i.e., with low entropy, ideally 1 guess per letter) guesses letters in the Shannon Game, then this software is the ideal decoder (on the receiving side of a communication channel), achieving the best compression possible on the channel, since it successfully predicts every next letter.
>But only now are researchers arguing one step further—that creating an optimal compressor for English is equivalent to another major challenge in the AI world: passing the Turing test.
>If a computer could play this game optimally, they say, if a computer could compress English optimally, it'd know enough about the language that it would know the language. We'd have to consider it intelligent—in the human sense of the word.
>So a computer, to be humanly intelligent, doesn't even need—as in the traditional Turing test—to respond to your sentences: it needs only complete them.
On the "darker side" (or at least potentially somewhat oppressive side) of language prediction:
>I'm guessing that if you've ever used a phone to write words—and that is ever closer to being all of us now—you've run up against information entropy. Note how the phone keeps trying to predict what you're saying, what you'll say next. Sound familiar? It's the Shannon Game.
>
>So we have an empirical measure, if we wanted one, of entropy (and maybe, by extension, "literary" value): how often you disappoint your phone. How long it takes you to write. The longer, arguably, and the more frustrating, the more interesting the message might be.
>As much as I rely on predictive text capabilities -- sending an average of fifty iPhone texts a month, and now even taking down writing ideas on it -- I also see them as dangerous: information entropy turned hegemonic. Why hegemonic? Because every time you type a word that isn't the predicted word, you have to (at least on the iPhone) explicitly reject their suggestion or else it's (automatically) substituted. Most of the time this happens I'm grateful: it smooths out typos made by mis-hitting the keys and allows for incredibly rapid, reckless texting. But there's the underbelly -- and this was just as true on my previous standard numerical keypad phone with the T9 prediction algorithm.
>
>You're gently and sometimes less-than-gently pushed, nudged, bumped into using the language the way the original test group did (This is particularly true when the algorithm doesn't adapt to behavior, and many of them, especially the older ones, don't). As a result, you start unconsciously changing your lexicon to match the words closest to hand. Like the surreal word market in Norton Juster's Phantom Tollbooth, certain words become too dear, too pricey, too scarce. That's crazy. That's no way to treat a language. When I type on my laptop keyboard into my word processor, no such text prediction takes place, so my typos don't fix themselves, and I have to type the whole word to say what I intend, not just the start. But I can write what I want. Perhaps I have to type more keystrokes on the average than if I were using text prediction, but there's no disincentive standing between me and the language's more uncommon possibilities. It's worth it.
>
>Carnegie Mellon computer scientist Guy Blelloch suggests the following:
>One might think that lossy text compression would be unacceptable because they are imagining missing or switched characters. Consider instead a system that reworded sentences into a more standard form, or replaced words with synonyms so that the file can be better compressed. Technically the compression would be lossy since the text has changed, but the "meaning" and clarity of the message might be fully maintained, or even improved.
>But—Frost—"poetry is what gets lost in translation." And—it seems—what gets lost in compression?
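
To make the predictor/compressor link concrete, here is a crude automatic Shannon Game player (my illustration, not Christian's or Shannon's): it guesses in English-letter-frequency order with no context at all, so its average guess count is far worse than a human's "first guess, half the time":
{{{
# Guess characters in overall English-frequency order, ignoring context.
# Better predictors need fewer guesses per letter -- which is exactly
# what better compressors achieve.
FREQ_ORDER = " etaoinshrdlcumwfgypbvkjxqz"

def guesses_needed(text):
  return [FREQ_ORDER.index(ch) + 1 for ch in text.lower() if ch in FREQ_ORDER]

counts = guesses_needed("the quick brown fox jumps over the lazy dog")
print(sum(counts) / len(counts))   # average guesses per character
}}}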



In a thoughtful WSJ article titled [["The Humanities' decline makes us morally obtuse"|https://www.wsj.com/articles/the-humanities-decline-makes-us-morally-obtuse-1537566941]], Paula Marantz Cohen (English professor at Drexel University) makes a few good observations in support of "good humanities education". Interwoven are my STEM^^1^^-related comments/quotes, in support of complementing that with "good STEM^^1^^ education"^^2^^ :).

(See also [[On "The Purloined Letter" by Edgar Allan Poe]] on what C. P. Snow in 1959 called [["The Two Cultures"|https://en.wikipedia.org/wiki/The_Two_Cultures]])

She opens by stating:
>The great works of literature, history and philosophy that used to be at the center of a college education have been shunted to the sidelines or discarded entirely over the past two decades or more. This is a loss on many fronts...
>[...]
>Few people seem to be able to reconcile two overlapping truths—that someone can have a valid grievance in one context and be guilty of some version of the same thing in another. I see this as a failure of education.
This is true and reflected in the humanities, but it is also true and perceived/observed in the "hard sciences", as expressed, for example, by the physicist Niels Bohr, who observed:
>>The opposite of a fact is falsehood, but the opposite of one profound truth may very well be another profound truth.
>[...]
>The assumption these days is that people are monolithic—either completely good or completely bad. The best way to repudiate that assumption is to study the humanities, which illuminate human life in all its complexity. How can you think about crime or misconduct in such an unimaginative way if you’ve read great literature: adultery after “Anna Karenina,” bad parenting after “Death of a Salesman,” political extremism and even murder after “Julius Caesar”?
>
>The greatness of these works is that they don’t excuse the conduct in question, but they do help explain it as a function of human frailty and misguided motives, sometimes of the most high-minded sort. They expose the back story that otherwise would be hidden from us so that we can, if not sympathize, at least go some way toward understanding what happened. They humanize what would otherwise look like simple stupidity or evil.
>[...]
>Education is the immersion in “the best which has been thought and said in the world,” as the 19th-century critic and poet Matthew Arnold put it. That “best” can be difficult, unclear, even contradictory. Part of being “the best” is that a work doesn’t reduce to a formula. It can also be written by people who are far from exemplary.
>[...]
>The emphasis on STEM^^1^^ fields in higher education reflects the need for expertise in a high-tech world. But this has tended to make the “soft” fields of the humanities seem weak and easy. Science, engineering and finance may be hard, but literature, history and philosophy are complex—impossible to resolve with a yes-or-no, right-or-wrong answer. This is precisely what constitutes their importance as a tool for living. Metaphysics takes its name from the idea that it goes beyond “hard” science into the realm of moral and intellectual speculation, where no empirical proof is possible.
To be accurate and fair to the sciences, they also expose human weaknesses, frailty, and limitations, as the astrophysicist Carl Sagan once said:
>>Humans may crave absolute certainty; they may aspire to it; they may pretend, as partisans of certain religions do, to have attained it. But the history of science — by far the most successful claim to knowledge accessible to humans — teaches that the most we can hope for is successive improvement in our understanding, learning from our mistakes, an asymptotic approach to the Universe, but with the proviso that absolute certainty will always elude us.
>[...]
>The humanities teach understanding, but they also teach humility: that we may be wrong and our enemies may be right, that the past can be criticized without our necessarily feeling superior to it, that people’s professed motives are not the whole story, and that the division of the world into oppressors and victims is a simplistic fairy tale.
And in the sciences, this humility is coupled with a sense of awe, coming from the realization about the vastness, complexity, and beauty of life and the universe. Albert Einstein had expressed it nicely:
>>The most beautiful thing we can experience is the mysterious. It is the source of all true art and science. He to whom the emotion is a stranger, who can no longer pause to wonder and stand wrapped in awe, is as good as dead — his eyes are closed.
>[...]
>We speak about the decline of the humanities without fully recognizing how it has hurt our society. If we want our nation to heal and thrive, we must put the study of literature, history and philosophy back at the center of our curricula and require that students study complex works—not just difficult ones.


----
^^1^^ STEM = Science, Technology, Engineering, Math
^^2^^ so one cannot avoid the conclusion that well-rounded (and deep-rooted) education (combining the Humanities and the Sciences) is good for you :)
In an excellent book titled [["When Einstein Walked with Gödel -- Excursions to the Edge of Thought"|https://www.nytimes.com/2018/05/15/books/review/review-when-einstein-walked-with-godel-jim-holt.html]]^^1^^, Jim Holt asks the question whether the Web makes us [["Smarter, Happier, More Productive"|https://docs.google.com/document/d/1T44FTF6bOTpDZQ2osOUoipB4Lp8VqF3y71hAKNc2wt8/edit?usp=sharing]].
>Steven Pinker observes: ‘Knowledge is increasing exponentially; human brainpower and waking hours are not.’ Without the internet, how can we possibly keep up with humanity’s ballooning intellectual output?
>
>This raises a prospect that has exhilarated many of the digerati. Perhaps the internet can serve not merely as a supplement to memory, but as a replacement for it. ‘I’ve almost given up making an effort to remember anything,’ says Clive Thompson, a writer for Wired, ‘because I can instantly retrieve the information online.’ David Brooks, a New York Times columnist, writes: ‘I had thought that the magic of the information age was that it allowed us to know more, but then I realised the magic of the information age is that it allows us to know less. It provides us with external cognitive servants – silicon memory systems, collaborative online filters, consumer preference algorithms and networked knowledge. We can burden these servants and liberate ourselves.’
>
Holt writes:
>why not outsource as much of our memory as possible to Google? Carr responds with a bit of rhetorical bluster. ‘The web’s connections are not our connections,’ he writes. ‘When we outsource our memory to a machine, we also outsource a very important part of our intellect and even our identity.’ Then he quotes William James, who in 1892 in a lecture on memory declared: ‘The connecting is the thinking.’ And James was onto something: the role of memory in thinking, and in creativity. What do we really know about creativity? Very little. We know that creative genius is not the same thing as intelligence. 
>
Holt quotes Pinker, who observed that geniuses work hard. They immerse themselves in their work. Could this immersion have something to do with stocking the memory? We know about the mechanisms by which the brain consolidates short-term memories into long-term ones. Yet we don't know why it is better to knock information into our heads rather than to get it off the web.
But (And therefore?):
>It is the connection between memory and creativity, perhaps, that should make us most wary of the web. “As our use of the Web makes it harder for us to lock information into our biological memory, we're forced to rely more and more on the Net's capacious and easily searchable artificial memory," [Nicholas] Carr observes [in his book "The Shallows: What the Internet Is Doing to Our Brains"].
>
>But conscious manipulation of externally stored information is not enough to yield the deepest of breakthroughs: this is what the example of Poincaré^^2^^ suggests. Human memory, unlike machine memory, is dynamic. Through some process we only crudely understand -- Poincaré himself saw it as the collision and locking together of ideas into stable combinations -- novel patterns are unconsciously detected, novel analogies discovered. And this is the process that Google, by seducing us into using it as a memory prosthesis, threatens to subvert.
>
>It's not that the web is making us less intelligent; if anything, the evidence suggests it sharpens more cognitive skills than it dulls. It's not that the web is making us less happy, although there are certainly those who, like Carr, feel enslaved by its rhythms and cheated by the quality of its pleasures. It's that the web may be an enemy of creativity.



----
^^1^^ - searchable spelling: Gödel, Godel, Goedel.
^^2^^ - This is an example of a "typical epiphany" and reaching creative knowledge in a flash of inspiration/intuition. Henri Poincaré [[recounts a personal experience|http://vigeland.caltech.edu/ist4/lectures/Poincare%20Reflections.pdf]] where, after thinking about a difficult math problem for a while, he had to take a break; during that break, in what looks like a sudden flash of intuition, he was able to see the solution to his problem:
>Just at this time I left Caen, where I was then living, to go on a geological excursion under the auspices of the school of mines. The changes of travel made me forget my mathematical work. Having reached Coutances, we entered an omnibus to go some place or other. At the moment when I put my foot on the step the idea came to me, without anything in my former thoughts seeming to have paved the way for it, that the transformations I had used to define the Fuchsian functions were identical with those of non-Euclidean geometry. I did not verify the idea; I should not have had time, as, upon taking my seat in the omnibus, I went on with a conversation already commenced, but I felt a perfect certainty. On my return to Caen, for conscience’ sake I verified the result at my leisure.
Adding his perspective to the question of whether mathematics is invented or discovered (see [[Is Math a human invention or a series of discoveries of truths in the real world?]]), the mathematician [[Edward Nelson|https://en.wikipedia.org/wiki/Edward_Nelson]] writes in [["Syntax and Semantics"|https://web.math.princeton.edu/~nelson/papers/s.pdf]]:
>The pursuit of mathematics is part of our culture and it has met with many successes. Associated with mathematics is a belief system, its semantics. But the success of what mathematicians actually do is no evidence at all for the validity of the associated belief system. Let me first discuss mathematics psychologically and then logically. I have been doing mathematics for 57 years. What is my experience? Do I discover or invent? Am I a James Cook, finding what was already there, or a Thomas Edison, bringing something new into being? (Cook did not invent Polynesia; Edison did not discover the light bulb.)
>
>Each mathematician will have a different answer to this question, for doing mathematics is personal and persons are different. But my answer is unequivocal: for me, the experience is one of invention. I start with an idea, a light bulb. I try various things to realize the idea—perhaps something familiar will work. Or perhaps I try carbon, tin, et cetera, until finally (if I am lucky) I hit on tungsten, and it works. To say that it works, in mathematics, means that I persuade other mathematicians by a proof, for doing mathematics is social as well as personal. We are far more tightly constrained in our inventions than a musician or painter.
>
>What is a proof in mathematics? More than anyone else, David Hilbert deserves the credit for answering this question. For several thousand years, Euclid’s great book [10] was held to be the model for a proof. (Though the work of Archimedes is a better model.) But Euclid’s work had serious logical flaws that were corrected by Hilbert. And Hilbert’s plane geometry is flawless. Mathematics is expressed in terms of formulas, which are strings of symbols of various kinds put together according to certain rules. As to whether a string of symbols is a formula or not, there is no dispute: one simply checks the rules of formation. Certain formulas are chosen as axioms. Here there is great scope for imagination and inspiration from one semantics or another, to choose fruitful axioms. Certain rules of inference are specified, allowing one to deduce a formula as conclusion from one or two formulas as premise or premises. Then a proof is a string of formulas such that each one is either an axiom or follows from one or two preceding formulas by a rule of inference. As to whether or not a string of formulas is a proof there is no dispute: one simply checks the rules of formation. This is the syntax of mathematics. 
>
>Is that all there is to mathematics? Yes, and it is enough. Constructing proofs is a serious, deep, and beautiful vocation, one worthy the devotion of a lifetime. And as Galileo observed, the book of nature is written in mathematics, so doing mathematics can increase our understanding of nature: certainly of physics, and increasingly of the other sciences. But some mathematicians feel that syntax is not enough: they want to add semantics. 
>
>Now in defense of semantics it can be said that it is a useful source of inspiration and that it is essential in pedagogy—students who do well in calculus invariably have an understanding of a meaning attached to their calculations. But the question here is the foundational role, if any, of semantics in mathematics.
Douglas Hofstadter wrote about it in his excellent book [[I am a Strange Loop|http://en.wikipedia.org/wiki/I_Am_a_Strange_Loop]], which is much more than a condensed version of his Pulitzer Prize-winning book [[Gödel, Escher, Bach|http://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach]].

In [[a short article|http://uriahkriegel.com/downloads/TLS.pdf]], Uriah Kriegel summarizes Hofstadter's main analogy of "strange human loopiness" with the following description of a self-referencing^^1^^, emergent construction:
>Hofstadter tells a wonderful story about the emergence of symbolic thought from neural activity. Imagine a pool table with a million small interacting magnetic marbles (“simms”) on it. These simms careen about the space of the pool table, which he calls the “careenium”. In some circumstances, the simms get magnetized to each other, and may form ball-shaped clusters – “simmballs”. The behaviour of single simms is random, but that of simmballs is not. The simmballs move around inside the careenium depending on what kind of external forces impinge on the careenium’s external walls. Thus the behaviour of simmballs inside the careenium comes to reflect conditions outside it.

>Our minds, says Hofstadter, work in just this way. Inside the cranium (careenium) are millions of nervous cells whose behaviour is more or less meaningless. But sometimes large clusters of cells coordinate their behaviour in response to the way the external world impinges on parts of the cranium, such as the retina or the ear drums. When they do, these clusters come to constitute symbols (simmballs), symbols that represent external conditions in a sustained manner that effectively constitutes a rudimentary awareness of the external world. The moral is that although we cannot find anything like symbolic thought or awareness when we look at individual brain cells, if we widen our view and consider slightly more abstract and more spread-out structures and patterns within the brain, we just might.

>What is true for awareness of things other than oneself is true also for self-awareness. One special symbol which takes more time to form is the “I” symbol. If the careenium developed a simmball with which to represent its own operations, it would come to be a self-referential system and have an “I”. Our cranium does have a symbol that represents itself, and it is therefore self-aware. Importantly, however, our symbolic representations have a somewhat “coarse grain”, as philosophers say. When we represent an ice cube, for example, we are aware of it simply as a single, homogenous, clear-pinkish cube. We are not aware of the millions of hydrogen and oxygen atoms making it up. Likewise, when we represent ourselves, we are not aware of the millions of neurons inside our brain, but rather of the various symbols that clusters of them make up. That is to say, the cranium is aware of itself precisely as a theatre of ideas, desires, and hopes, not as a container of cerebral molecules buzzing about meaninglessly. And that is why we experience our mental life in those terms, even though ultimately it all rests on the purposeless activities of so many individually insentient nervous cells.

David Gelernter (of [[Mirror Worlds|http://en.wikipedia.org/wiki/Mirror_Worlds]] fame) wrote in an [[interesting article in Commentary|http://www.commentarymagazine.com/article/the-closing-of-the-scientific-mind/]]:
>Computationalists believe that the mind is embodied by the brain, and the brain is simply an organic computer. But in fact, the mind is embodied not by the brain but by the brain and the body, intimately interleaved. Emotions are mental states one feels physically; thus they are states of mind and body simultaneously. (Angry, happy, awestruck, relieved—these are physical as well as mental states.) Sensations are simultaneously mental and physical phenomena. Wordsworth writes about his memories of the River Wye: “I have owed to them/In hours of weariness, sensations sweet,/Felt in the blood, and felt along the heart/And passing even into my purer mind…”

>Where does the physical end and the mental begin? The resonance between mental and bodily states is a subtle but important aspect of mind. Bodily sensations bring about mental states that cause those sensations to change and, in turn, the mental states to develop further. You are embarrassed, and blush; feeling yourself blush, your embarrassment increases. Your blush deepens. “A smile of pleasure lit his face. Conscious of that smile, [he] shook his head disapprovingly at his own state.” (Tolstoy.) As Dmitry Merezhkovsky writes brilliantly in his classic Tolstoy study, “Certain feelings impel us to corresponding movements, and, on the other hand, certain habitual movements impel to the corresponding mental states….Tolstoy, with inimitable art, uses this convertible connection between the internal and the external.”

>All such mental phenomena depend on something like a brain and something like a body, or an accurate reproduction or simulation of certain aspects of the body. However hard or easy you rate the problem of building such a reproduction, computing has no wisdom to offer regarding the construction of human-like bodies—even supposing that it knows something about human-like minds.


----
^^1^^ - an example of a self-reference à la Hofstadter
[<img[self reference sentence|resources/self reference.jpeg][resources/self reference 1.jpeg]]
The coding practice site [[Code Wars|https://www.codewars.com/]] (I don't like the name, but I like aspects of some of its educational concepts and potential) has a [[Kata|https://www.codewars.com/about]] called [["Josephus Survivor"|https://www.codewars.com/kata/josephus-survivor/python]] (see the historical background^^1^^ [[from the Shippensburg University site|http://webspace.ship.edu/deensley/mathdl/joseph.html]]).

After going through it and actually "playing out the scenario of the problem" (dry-running it with pencil and paper; always an excellent idea!), I came up with [[the following Python solution|https://trinket.io/python/3288a0ebfa]]^^2^^:
{{{
# Solution 1:
def josephus_survivor_1(n,k):
  people = range(1, n+1)   # the circle of people, numbered 1..n
  pop_index = -1           # counting starts just before person 1
  while len(people) > 1:
    pop_index = (pop_index + k) % len(people)   # count k people around the circle
    gone = people.pop(pop_index)                # ... and eliminate the k-th one
    pop_index -= 1   # the list shifted left, so step back one position
  return people[0]   # the lone survivor


print josephus_survivor_1(7, 3)   # prints 4
}}}
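
(A note for readers following along today: the snippets in this entry use Python 2 syntax. A minimal Python 3 port of Solution 1 needs only two changes, wrapping range() in list() and calling print() as a function, and the same two changes apply to the other snippets below:)
{{{
# Solution 1, ported to Python 3: range() no longer returns a list,
# and print is a function.
def josephus_survivor_1(n, k):
  people = list(range(1, n + 1))
  pop_index = -1
  while len(people) > 1:
    pop_index = (pop_index + k) % len(people)
    people.pop(pop_index)
    pop_index -= 1
  return people[0]


print(josephus_survivor_1(7, 3))   # prints 4
}}}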


Since Solution 1 successfully solves the challenge, the site lets you look at solutions other people have submitted, so you can learn from them (as I said, I like the idea :), and I found the following solution:
{{{
# Solution 2:
def josephus_survivor_2(n, k):
  v = 0
  for i in range(1, n + 1): 
    v = (v + k) % i
  return v + 1


print josephus_survivor_2(7, 3)   # prints 4
}}}

Comparing the two solutions, one can definitely say that Solution 2 is shorter and more "concise", and some people may claim it's more "elegant" than Solution 1.
These are important aspects of good programming; keeping the two solutions in mind, note what the [[Zen of Python|https://www.python.org/dev/peps/pep-0020/]] points out:
>Beautiful is better than ugly.
>Simple is better than complex.
>Flat is better than nested.
>Sparse is better than dense.
>[P]racticality beats purity.

But (as "sage advice" is often not black-or-white), the [[Zen of Python|https://www.python.org/dev/peps/pep-0020/]] __''also''__ points out:
>Explicit is better than implicit.
>Readability counts.
>If the implementation is hard to explain, it's a bad idea.
>If the implementation is easy to explain, it may be a good idea.

So in light of this, looking at ''Solution 1'', one can say:
* it tries to make the code follow the model of solving the problem in actuality
** it uses meaningful names (e.g., people, pop_index, gone)
** the algorithm follows the physical actions you'd make if you "played" the problem:
*** starting (indexing) with a certain person (pop_index), 
*** moving around the circle of people (pop_index + k) and circling around (% len(people))
*** repeating this until there is only one survivor (while len(people) > 1)
* this "explicit" modeling allows one to see/check how things are progressing throughout the simulation/run.
** so, for example, one can print the list ("circle of people"), as people are eliminated ("gone"):
{{{
# Solution 1 (instrumented):
def josephus_survivor_1(n,k):
  people = range(1, n+1)
  pop_index = -1
  print 'We start with:'
  while len(people) > 1:
    print people
    pop_index = (pop_index + k) % len(people)
    gone = people.pop(pop_index)
    print 'eliminating', gone, 'to result in:'
    pop_index -= 1
  return people[0]


print josephus_survivor_1(7, 3)
}}}
and get the following progression/visibility/explanatory view:
{{{
We start with:
[1, 2, 3, 4, 5, 6, 7]
eliminating 3 to result in:
[1, 2, 4, 5, 6, 7]
eliminating 6 to result in:
[1, 2, 4, 5, 7]
eliminating 2 to result in:
[1, 4, 5, 7]
eliminating 7 to result in:
[1, 4, 5]
eliminating 5 to result in:
[1, 4]
eliminating 1 to result in:
4
}}}

On the other hand, looking at ''Solution 2'', one can definitely appreciate the brevity and "cleanness" of the solution, but
* it's not easy to see or explain why it works, since the modeling/implementation doesn't seem to follow the "natural activities" or actions one would follow to carry out the plan (but see the note after the output below)
* one cannot print any "meaningful status" or progression, since, for example:
{{{
# Solution 2 (instrumented):
def josephus_survivor_2(n, k):
  v = 0
  for i in range(1, n + 1):
    print v
    v = (v + k) % i
  print 'which leaves:'
  return v + 1


print josephus_survivor_2(7, 3)
}}}
results in
{{{
0
0
1
1
0
3
0
which leaves:
4
}}}
which is "not revealing much" (and the solution is so "compact" and "concise" that there is not much you can "peek into" to get an idea of the workings... :(

If you think about it, these two solutions exemplify, in a sense, a "mini-version" of the [["black box" phenomenon|https://www.kdnuggets.com/2017/04/ai-machine-learning-black-boxes-transparency-accountability.html]] in AI (Artificial Intelligence): some AI/ML (Machine Learning) systems/architectures/approaches find it difficult to "explain" the what/why/how of their solutions, which makes them less "understandable" and "transparent", and more difficult to "prove correct".

Here we have two solutions which produce correct results, but one is more "transparent" and helpful in understanding the solution (and the problem?). This solution is "straightforward" since it simulates the actions in the real world (which is not always essential, but often helpful), and it is simple, clear (enough), and easy to extend and enhance (to add explanatory power and/or transparency, for example).

And, it still stays close to the spirit of the Zen of Python :)


----
^^1^^ [[from the Shippensburg University site|http://webspace.ship.edu/deensley/mathdl/joseph.html]]:
>In the Jewish revolt against Rome, Josephus and 39 of his comrades were holding out against the Romans in a cave. With defeat imminent, they resolved that, like the rebels at Masada, they would rather die than be slaves to the Romans. They decided to arrange themselves in a circle. One man was designated as number one, and they proceeded clockwise killing every seventh man... Josephus (according to the story) was among other things an accomplished mathematician; so he instantly figured out where he ought to sit in order to be the last to go. But when the time came, instead of killing himself he joined the Roman side.

^^2^^ You can play with [[my Scratch simulation|https://scratch.mit.edu/projects/227161968/#fullscreen]]:
[img[Josephus Survivor Scratch|resources/Josephus Scratch small.png][https://scratch.mit.edu/projects/227161968/#fullscreen]]
In a [[very well-written, very readable, and thought-provoking paper|resources/Hamming.html]]^^1^^, Richard Hamming makes a strong case (in response to Wigner^^2^^; both are interesting to compare with [[Wilczek's article "Reasonably effective: Deconstructing a miracle"|resources/Wilczek_reasonably1.pdf]]^^3^^) that:
>The Postulates of Mathematics Were Not on the Stone Tablets that Moses Brought Down from Mt. Sinai.
>...We begin with a vague concept in our minds, then we create various sets of postulates, and gradually we settle down to one particular set. In the rigorous postulational approach the original concept is now replaced by what the postulates define.

Where he is coming from seems similar to how Nick Bostrom and Janna Levin look at [[the anthropic bias|On Anthropic Bias, or Was the Universe Made for Us?]].

He takes the very fundamental (and seemingly simple) concept of numbers, and briefly (and succinctly) shows how they evolved over time and with our needs and circumstances.
>Mathematics has been made by man and therefore is apt to be altered rather continuously by him. Perhaps the original sources of mathematics were forced on us, but as in the example I have used we see that in the development of so simple a concept as number we have made choices for the extensions that were only partly controlled by necessity and often, it seems to me, more by aesthetics. We have tried to make mathematics a consistent, beautiful thing, and by so doing we have had an amazing number of successful applications to the real world.
Paraphrasing Hamming, he starts with the __integers__, which arose from counting (and as the famous mathematician Kronecker once said, "God made the integers, man did the rest"). Then we (the Greeks?) had to "invent" __fractions__, which are not counting numbers; they are measuring numbers. This extension allowed us to apply the same rules and manipulations to both integers and fractions, but added benefits in measurement and division. Then we extended the __rational number system__ to include the __algebraic numbers__. (It was the simple desire to measure lengths that did it... How can one deny that there is a number that measures the length of the diagonal of a unit square (namely, the square root of 2)?) Then, the measurement of the circumference of a circle with respect to its diameter soon forced us to consider the ratio called pi. Thus, by a further suitable extension of the earlier ideas of numbers, the __transcendental numbers__ were admitted consistently into the number system. Further tinkering with the number system brought both the __number zero__ and the __negative numbers__. The next step was the __complex number system__, and the realization by Cardano and others that the same formal operations on the symbols for complex numbers would give meaningful results (as well as Hamming's feeling that "God made the universe out of complex numbers"). (See also the [[infinitesimals|Infinitesimals are significant (and meaningful?)]], which were added to this evolving/expanding "zoo" of mathematical creations/creatures.)
And he summarizes:
>from simple counting using the God-given integers, we made various extensions of the ideas of numbers to include more things. Sometimes the extensions were made for what amounted to aesthetic reasons, and often we gave up some property of the earlier number system. Thus we came to a number system that is unreasonably effective even in mathematics itself; witness the way we have solved many number theory problems of the original highly discrete counting system by using a complex variable.
>From the above we see that one of the main strands of mathematics is the extension, the generalization, the abstraction (they are all more or less the same thing) of well-known concepts to new situations. But note that in the very process the definitions themselves are subtly altered. Therefore, what is not so widely recognized, old proofs of theorems may become false proofs. The old proofs no longer cover the newly defined things. The miracle is that almost always the theorems are still true; it is merely a matter of fixing up the proofs.

So here are Hamming's explanations of the unreasonable effectiveness of mathematics:
1. We see what we look for...
>we approach the situations with an intellectual apparatus so that we can only find what we do in many cases. It is both that simple, and that awful. What we were taught about the basis of science being experiments in the real world is only partially true. Eddington went further than this; he claimed that a sufficiently wise mind could deduce all of physics. I am only suggesting that a surprising amount can be so deduced. Eddington gave a lovely parable^^4^^ to illustrate this point. He said, "Some men went fishing in the sea with a net, and upon examining what they caught they concluded that there was a minimum size to the fish in the sea." 
(see [[Simpson's paradox]]).

2. We select the kind of mathematics to use. 
>Mathematics does not always work. When we found that scalars did not work for forces, we invented a new mathematics, vectors. And going further we have invented tensors...we select the mathematics to fit the situation, and it is simply not true that the same mathematics works every place.
3. Science in fact answers comparatively few problems (see also [[On scientific vs. religious explanation|On scientific vs. religious explanation]]).
>science has contributed nothing to the answers [to the questions about what Truth, Beauty, or Justice are], nor does it seem to me that science will do much in the near future. So long as we use a mathematics in which the whole is the sum of the parts we are not likely to have mathematics as a major tool in examining these famous three questions.
>[T]o generalize, almost all of our experiences in this world do not fall under the domain of science or mathematics. Furthermore, we know (at least we think we do) that from Gödel's theorem^^5^^ there are definite limits to what pure logical manipulation of symbols can do, there are limits to the domain of mathematics^^6^^.
4. The evolution of man provided the model.
>Some people...have further claimed that Darwinian evolution would naturally select for survival those competing forms of life which had the best models of reality in their minds-"best" meaning best for surviving and propagating. [[Perhaps there are thoughts we cannot think]].

----
The original articles:
^^1^^ Richard Hamming on [["The Unreasonable Effectiveness of Mathematics"|http://www.dartmouth.edu/~matc/MathDrama/reading/Hamming.html]]
^^2^^ Eugene Wigner on The [["Unreasonable Effectiveness of Mathematics in the Natural Sciences"|http://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html]]
^^3^^ Frank Wilczek on [["Reasonably effective: I. Deconstructing a miracle"|http://ned.ipac.caltech.edu/level5/March07/Wilczek/Wilczek.html]]. In his article Wilczek casts a critical (not to say doubtful) eye on the power and potential of mathematics when he asks: [[Can mathematics be used to extract qualitative predictions from physical laws - or, for that matter, useful laws from data - automatically?]] (and pointedly answers: Perhaps, but the omens aren't auspicious.)
^^4^^ This parable is [[referenced by Bostrom|http://www.anthropic-principle.com/?q=book/chapter_1#1a]], too:
> How big is the smallest fish in the pond? You catch one hundred fishes, all of which are greater than six inches. Does this evidence support the hypothesis that no fish in the pond is much less than six inches long? Not if your net can’t catch smaller fish.
>
>Knowledge about limitations of your data collection process affects what inferences you can draw from the data. In the case of the fish-size-estimation problem, a selection effect—the net’s sampling only the big fish—vitiates any attempt to extrapolate from the catch to the population remaining in the water. Had your net instead sampled randomly from all the fish, then finding a hundred fishes all greater than a foot would have been good evidence that few if any of the fish remaining are much smaller. 
^^5^^ See [[Gödel's Second Incompleteness Theorem Explained in Words of One Syllable|resources/Boolos-godel-in-single-syllables.pdf]], which can be compared with [[The world's shortest explanation of Gödel's theorem]].
^^6^^ From Douglas R. Hofstadter's book, Gödel, Escher, Bach:
>All the limitative theorems of metamathematics and the theory of computation suggest that once the ability to represent your own structure has reached a certain critical point, that is the kiss of death; it guarantees that you can never represent yourself totally. Gödel’s Incompleteness Theorem, Church’s Undecidability Theorem, Turing’s Halting Theorem, Tarski’s Truth Theorem - all have the flavor of some ancient fairy tale which warns you that “To seek self-knowledge is to embark on a journey which… will always be incomplete, cannot be charted on any map, will never halt, cannot be described.”
Optimism is an occupational hazard of programming: feedback is the treatment.
In a book by Alan Burdick titled "Why Time Flies: A Mostly Scientific Investigation" (see [[the coverage at Brainpickings|https://www.brainpickings.org/2017/09/04/alan-burdick-why-time-flies-empathy/]] and my tiddler [[On confusing concepts with reality]]), he describes a lecture given by the German zoologist Karl Ernst von Baer in 1860, which would be fascinating to conduct/follow as a thought experiment:

>Nothing lasts, von Baer told his audience. What we mistake for persistence -- the seeming permanence of mountains and seas -- is an illusion derived from our short lifespan. Imagine for a moment "that the pace of life in man were to pass much faster or much slower, then we would soon discover that, for him, all the relations of nature would appear entirely differently." Suppose a human's lifetime, from birth to senility, lasted just twenty-nine days, one-thousandth its normal length. 
>This ~Monaten-Mensch, or "man of the month," would never see the moon go through more than one full cycle; the concept of seasons and of snow and ice would be as abstract as the Ice Age is to us. The experience would be akin to that of many creatures, including some insects and mushrooms, that live for just a few days. Now suppose our lifespan were a thousand times shorter still and lasted just forty-two minutes. This ~Minuten-Mensch, or “man of minutes,” would know nothing directly of night and day; flowers and trees would appear unchanging.
(compare to what Daily Alice (Alice Dale Drinkwater) had to say about trees in the fantastic book "Little, Big": "Did you ever think, that maybe trees are alive like we are, only just more slowly? That what a day is to us, maybe a whole summer is to them - between sleep and sleep, you know. That they have long long thoughts and conversations that are just too slow for us to hear.")
>Consider the opposite scenario, von Baer went on. Imagine that our pulse, instead of speeding up, were to beat a thousand times slower than its normal rate. If we assume the same amount of sensory experience per beat, "then the lifetime of such a person would reach a 'ripe old age' at approximately 80,000 years. A year would seem like 8.75 hours. We would lose our ability to watch ice melt, to feel earthquakes, to watch trees sprout leaves, slowly bear fruit and then shed leaves." We would see mountain ranges rise and fall but overlook the lives of ladybugs. Flowers would be lost on us; only trees would make an impression. The sun might leave a tail in the sky like that of a comet or a cannonball.
It's worth [[comparing this to recent discoveries about "the hidden life of trees" and their seemingly "social" behavior|The intelligence of trees and the impact of time scales]] (and also put [[human life in perspective through size/time scales|Human life in perspective]]).
>To some degree this is wordplay. If we define a day as a single rotation of Earth on its axis, then one day always lasts exactly one day to human, mite, and hazelnut alike. (A circadian biologist would point out that in fact the day is genetically inscribed in each of us, hazelnut to human, whether we're conscious of it or not.) Condillac's point, however, was that to the Mites of Hazelnut, one day may not be a useful, or even a perceptible, span of time. That thought contains a notion of time that is still very much in play today: our estimate of how long a moment seems to last is shaped by the number of actions or ideas that pass through the mind as the moment unfolds. "We have no perception of duration but by considering the train of ideas that take their turns in our understandings," John Locke argued in 1690. If you experience many sensations in a brief period, then that duration, being densely filled, will feel longer while you're in it. An instant may seem dimensionless to us, Locke wrote, yet there might be other minds capable of perceiving it and we could have no more awareness of them than "a worm shut up in one drawer of a cabinet hath of the senses or understanding of a man." Our mind, moving only so quickly, can hold only so many ideas at once, so there's a limit to the span of time we can perceive. "Were our senses altered, and made much quicker and acuter, the appearance and outward scheme of things would have quite another face to us."
>
>Now multiply this life a thousand more times, to produce a man living 80 million years but having just 31.5 heartbeats and 189 perceptions in one Earth year. The Sun would cease to appear as a discrete circle and would instead appear as a glowing solar elliptic, dimmer in winter. For ten pulse beats of the year Earth would be green, then white for ten more; snow would melt in a heartbeat and a half.
>
>Through the seventeenth and eighteenth centuries, the increasing use of the telescope and microscope led to consideration of what might be called the relativity of scales. The cosmos was bigger than imagined, in both directions; it blossomed both out and in. The human perspective began to lose its sense of privilege: our outlook might be just one of many. Suppose, the philosopher Nicolas Malebranche posited, in 1678, that God had created a world so vast that a single tree would appear enormous to us yet seem normal to that realm's inhabitants, or, conversely, a world that appears tiny to us yet yawns in the eyes of its minuscule residents. "Car rien n'est grand ni petit en soi," Malebranche wrote; nothing is big or small in and of itself. Jonathan Swift soon captured the idea in a novel; the outlook of the Lilliputians and that of the giant Brobdingnagians are equivalent in their detail and expanse.
>
>So it is with time. "Imagine a world made up of as many parts as our own that was no bigger than a hazelnut," the French philosopher Etienne Bonnot de Condillac wrote in 1754. "It is beyond doubt that the stars would rise and set there thousands of times in one of our hours." Or imagine a world that dwarfs ours in its vastness: a lifespan in our world would seem but a flicker to the beings of that larger realm, while to residents of Planet Hazelnut our lives might last billions of years. The perception of duration is relative; a moment to one eye may be several to another.


The web annotation technology implemented for instance in [[Genius Web Annotator|http://genius.com/web-annotator]] or [[Hypothesis|https://github.com/hypothesis/h]] offers a great capability for knowledge building and sharing, and development of critical thinking.

See an example of annotating the poem [[Ozymandias by Percy Bysshe Shelley|http://genius.com/Percy-bysshe-shelley-ozymandias-annotated]]:

[>img[Ozymandias|./resources/Ozymandias.jpeg][./resources/Ozymandias.jpeg]]
I met a traveller from an antique land
Who said—"Two vast and trunkless legs of stone
Stand in the desert . . .Near them, on the sand,
Half sunk a shatter'd visage lies, whose frown,
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them and the heart that fed;
And on the pedestal these words appear:
My name is Ozymandias, King of Kings;
Look on my works, ye mighty, and despair!
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare,
The lone and level sands stretch far away."—


<html><p  style="text-align:right">My name is Ozymandias, King of Kings; Look on my works, ye mighty, and despair!</p></html>

BTW, Shelley's friend and fellow poet Horace Smith penned a competing (and no less evocative :) [[poem with the same name|http://www.potw.org/archive/potw192.html]] ("Ozymandias"):

In Egypt's sandy silence, all alone,
Stands a gigantic Leg, which far off throws
The only shadow that the Desert knows:—
"I am great OZYMANDIAS," saith the stone,
"The King of Kings; this mighty City shows
"The wonders of my hand."— The City's gone,—
Naught but the Leg remaining to disclose
The site of this forgotten Babylon.

We wonder,—and some Hunter may express
Wonder like ours, when thro' the wilderness
Where London stood, holding the Wolf in chace,
He meets some fragment huge, and stops to guess
What powerful but unrecorded race
Once dwelt in that annihilated place.




And referring to the content of Shelley's poem, I came across William Shakespeare's poem [[The Rape Of Lucrece|http://shakespeare.mit.edu/Poetry/RapeOfLucrece.html]], which, too, is connected via a technology link: it was quoted in a seminal [[paper by Stuart Haber and W. Scott Stornetta, titled "How to Time-Stamp a Digital Document"|https://www.anf.es/pdf/Haber_Stornetta.pdf]], which basically laid down the foundation for [[Blockchain technology|https://en.wikipedia.org/wiki/Blockchain]], the technology at the heart of digital cryptocurrencies like Bitcoin.

The original idea of the Blockchain researchers (Haber and Stornetta from Bellcore, NJ) was the idealistic desire to find a way to secure the past and safeguard our knowledge of it; in other words, to resist the ravages of time:

    Time's glory is to calm contending kings,
    To unmask falsehood and bring truth to light,
    To stamp the seal of time in aged things,
    To wake the morn and sentinel the night,
    To wrong the wronger till he render right,
    To ruinate proud buildings with thy hours,
    And smear with dust their glittering golden towers;

    To fill with worm-holes stately monuments,
    To feed oblivion with decay of things,
    To blot old books and alter their contents,
    To pluck the quills from ancient ravens' wings,
    To dry the old oak's sap and cherish springs,
    To spoil antiquities of hammer'd steel,
    And turn the giddy round of Fortune's wheel;

In her [[weekly column in the WSJ this week|https://www.wsj.com/articles/why-history-will-repay-your-love-1495755666]], [[Peggy Noonan|http://www.peggynoonan.com/]] writes: //For Memorial Day some thoughts on historical memory.//

She takes inspiration from historian David ~McCullough, who said that knowing the past is ‘a wonderful way to enlarge the experience of being alive’.

Here are some of Mr. ~McCullough’s observations on history, as captured in his recent book "The American Spirit" (a collection of his speeches):
* It is a story. And what is a story? Mr. ~McCullough, paraphrasing E.M. Forster, observes: "If I say to you the king died and then the queen died, that’s a sequence of events. If I say the king died and the queen died of grief, that’s a story."
* What’s past to us was the present to them.
* They were never certain of success.
* Nothing had to happen the way it happened.
* We make more of the wicked than the great. 
* America came far through trial and error.
* History is an antidote to the hubris of the present. We think everything we have, do and think is the ultimate, the best. "We should never look down on those of the past and say they should have known better. What do you think they will be saying about us in the future? They’re going to be saying we should have known better."
* Knowing history will make you a better person. Mr. ~McCullough endorses Samuel Eliot Morison’s observation that reading history improves behavior by giving examples to emulate. He quotes John Adams: "We can’t guarantee success [in the Revolutionary War], but we can do something better. We can deserve it." This contrasts, Mr. ~McCullough says, with current attitudes, in which success is all.
Just as there are odors that dogs can smell and we cannot, as well as sounds that dogs can hear and we cannot, so too there are wavelengths of light we cannot see and flavors we cannot taste. Why then, given our brains wired the way they are, does the remark "Perhaps there are thoughts we cannot think," surprise you? Evolution, so far, may possibly have blocked us from being able to think in some directions; there could be unthinkable thoughts. 
:: -- from [[Richard Hamming's "The Unreasonable Effectiveness of Mathematics"|resources/Hamming.html]]

Or as [[Haldane|https://en.wikipedia.org/wiki/J._B._S._Haldane]] said: [[My own suspicion is that the universe is not only queerer than we suppose, but queerer than we can suppose... I suspect that there are more things in heaven and earth that are dreamed of, or can be dreamed of, in any philosophy.]]

(see [[John Updike's thoughts|The mystery of being is a permanent mystery, at least given the present state of the human brain]] for a related perspective)

(read [[More on unthinkable thoughts]] for more :)
From an [[excellent, uplifting and down-to-earth talk|http://www.pbs.org/johngardner/sections/writings_speech_1.html]] by John Gardner on Personal Renewal.

Key to self-renewal are motivation/enthusiasm and interestedness (__not__ in the Dale Carnegie utilitarian way), or in his words:
>Be interested. Everyone wants to be interesting – but the vitalizing thing is to be interested. Keep a sense of curiosity. Discover new things. Care. Risk failure. Reach out.

A sense of context, commitment, and meaning are also very important:
>As Robert Louis Stevenson said, "Old or young, we're on our last cruise." We want it to mean something.
(compare to Shunryu Suzuki's "Life is like stepping onto a boat that is about to sail out to sea and sink.")
>
>For many this life is a vale of tears; for no one is it free of pain. But we are so designed that we can cope with it if we can live in some context of meaning. Given that powerful help, we can draw on the deep springs of the human spirit, to see our suffering in the framework of all human suffering, to accept the gifts of life with thanks and endure life's indignities with dignity.
>
>In the stable periods of history, meaning was supplied in the context of coherent communities and traditionally prescribed patterns of culture. Today you can't count on any such heritage. You have to build meaning into your life, and you build it through your commitments -- whether to your religion, to an ethical order as you conceive it, to your life's work, to loved ones, to your fellow humans. Young people run around searching for identity, but it isn't handed out free any more -- not in this transient, rootless, pluralistic society. Your identity is what you've committed yourself to.
>
>It may just mean doing a better job at whatever you're doing. There are men and women who make the world better just by being the kind of people they are -- and that too is a kind of commitment. They have the gift of kindness or courage or loyalty or integrity. It matters very little whether they're behind the wheel of a truck or running a country store or bringing up a family. 
And also:
>We tend to think of youth and the active middle years as the years of commitment. As you get a little older, you're told you've earned the right to think about yourself. But that's a deadly prescription! People of every age need commitments beyond the self, need the meaning that commitments provide. Self-preoccupation is a prison, as every self-absorbed person finally knows. Commitments to larger purposes can get you out of prison.

And about optimism vs. pessimism:
>I'd be a pessimist but it would never work.

>I can tell you that for renewal, a tough-minded optimism is best. The future is not shaped by people who don't really believe in the future. Men and women of vitality have always been prepared to bet their futures, even their lives, on ventures of unknown outcome. If they had all looked before they leaped, we would still be crouched in caves sketching animal pictures on the wall.
>
>But I did say tough-minded optimism. High hopes that are dashed by the first failure are precisely what we don't need. We have to believe in ourselves, but we mustn't suppose that the path will be easy, it's tough. Life is painful, and rain falls on the just, and Mr. Churchill was not being a pessimist when he said "I have nothing to offer, but blood, toil, tears and sweat." He had a great deal more to offer, but as a good leader he was saying it wasn't going to be easy, and he was also saying something that all great leaders say constantly -- that failure is simply a reason to strengthen resolve. 
And he concludes:
>Meaning is not something you stumble across, like the answer to a riddle or the prize in a treasure hunt. Meaning is something you build into your life. You build it out of your own past, out of your affections and loyalties, out of the experience of humankind as it is passed on to you, out of your own talent and understanding, out of the things you believe in, out of the things and people you love, out of the values for which you are willing to sacrifice something. The ingredients are there. You are the only one who can put them together into that unique pattern that will be your life. Let it be a life that has dignity and meaning for you. If it does, then the particular balance of success or failure is of less account.

[[Maria Popova reviews John Gardner's writing on self-renewal|https://www.brainpickings.org/2014/07/14/self-renewal-gardner/]], and quotes him on the differences between renewal, innovation, and change:
>Renewal is not just innovation and change. It is also the process of bringing the results of change into line with our purposes. When our forebears invented the motor car, they had to devise rules of the road. Both are phases of renewal. When urban expansion threatens chaos, we must revive our conceptions of city planning and metropolitan government.
>
>Mesmerized as we are by the idea of change, we must guard against the notion that continuity is a negligible — if not reprehensible — factor in human history. It is a vitally important ingredient in the life of individuals, organizations and societies. Particularly important to a society’s continuity are its long-term purposes and values. These purposes and values also evolve in the long run; but by being relatively durable, they enable a society to absorb change without losing its distinctive character and style. They do much to determine the direction of change. They insure a society will not be buffeted in all directions by every wind that blows.
>
>A sensible view of these matters sees an endless interweaving of continuity and change.
>[…]
>The only stability possible is stability in motion.
This is a good example of a school doing something important to [[teach students to skate to where the hockey puck is going to be|books/i4i/skating_puck_4.html]] (vs. just teaching to skate to where the puck currently is).
And here is [[another article|http://www.greatschools.org/parenting/learning-development/5894-javascript-class-learn.gs]] in [[GreatSchools|http://www.greatschools.org/]] about the urgent need to be serious about teaching CS in school.

But of course, Douglas Rushkoff, author of [[Program or Be Programmed|http://www.rushkoff.com/program-or-be-programmed/]] and evangelist for [[Codeacademy|Codeacademy.com]], is [[writing about it|http://www.rushkoff.com/blog/2012/1/16/cnn-why-i-am-learning-to-code-and-you-should-too.html]] a lot.

In this article, titled [[Coding the Curriculum: How High Schools Are Reprogramming Their Classes|http://mashable.com/2013/09/22/coding-curriculum/]] in [[Mashable|http://mashable.com/]], Eric Larson writes about ''more than one school''^^1^^ introducing computation (he/they call it //coding//, but I like computation better, since it's closer to Computational Thinking, which is at the heart of what they (and I) are trying to enable).

The idea in a nutshell:
>The school isn't launching mandatory programming courses into the schedule, exactly, but is instead having its teachers introduce coding (ideally, in the most organic ways possible) into their respective subjects. Calculation-heavy courses such as math and science, as well as humanities such as English, Spanish and history -- even theater and music -- will all be getting a coded upgrade.

At [[Beaver School|http://www.bcdschool.org/]] (grades 6-12), they are introducing computation into //every class//. And according to them, both faculty and students love it. They also got support from industry, backed up by [[data|http://code.org/stats]] regarding the need to step up CS and STEM education:
>The private sector has for years been pressing sixth through twelfth grade schools to prepare kids earlier on for the tech-heavy workforce lying ahead of them. [[Code.org|http://code.org/stats]] reports more than 1.4 million computer jobs will be in demand by 2020, yet only 400,000 students will go on to study computer science in college. 

Although introducing computing into the entire school curriculum was an idea brought up by one teacher (the head of the Math department) with the strong support of the principal, the faculty as a whole is fully behind it:
>Beaver's staff believes it's time to revamp the curriculum as a whole -- if only to better, and realistically, prepare its kids for the 21st century economy.
The head of the math department:
>"The old teaching method -- you know, where a teacher says something and you write it down and then take a test -- that's about as passive as it gets," he [the math teacher] says. "This idea [of Computation-enabled Curricula] pushes kids to be more actively involved since, by and large, it's something we're both learning together. That leads to a lot of innovative teaching -- and a lot of innovative learning, for that matter."
And the principal:
>"the current curriculum -- which any American who has gone to school in the last century is familiar with -- is blatantly outdated. Do schools need to change? Absolutely," he says. "We're still preparing our kids to go to work in 1988. Certainly not 2020."
but
>For some Beaver staff members -- especially those with no background in programming or math -- learning the language was an intimidating adjustment. 
and there is a psychological shift that needs to happen, regarding teacher/student knowledge/expertise:
>Most of the staff recognizes that students might be better versed in programming than they are. But instead of being intimidated, [the math teacher] says, it's something they're embracing as a way to bring out a more two-sided-conversation approach in the classroom. "We don't get freaked out if we've got a student who's a better basketball player than the coach," [the math teacher] says. "A coach isn't failing if he's got a player who can dunk over him. In the same way, a teacher isn't failing just because he's got a student who might be able to code a little better."
The principal also believes that introducing computation across the school has another benefit - it improves faculty relationships:
>In addition to bringing students together, he [the principal] is optimistic it will strengthen relationships between teachers. A seventh grade science teacher and a tenth grade history teacher might not have much to talk about -- but with the new curriculum in place, conversations about coding workshops, Wiki ideas or digital shortcuts could be just as practical and relevant for a twelfth grade class as they are for a sixth grade class. 

And they have the right idea about not teaching every student to be a programmer, but
>"We don't need to engineer a workshop so every kid that graduates here becomes a professional programmer," he [the principal] says. "We just want them to think about new ways to solve issues, and grasp that entrepreneurial mindset early on. It's ... it's just this day and age."


----
^^1^^ - [[All of Chicago's Schools|http://www.huffingtonpost.com/2013/12/10/chicago-public-schools-co_n_4419916.html]] (2013)
 - [[NYC schools|http://www.johndeweyhighschool.org/academics/computer-science-institute/]], including [[20 pilot schools|http://schools.nyc.gov/Offices/mediarelations/NewsandSpeeches/2012-2013/AppliedSciences.htm]] (2013)
 - [[Schools in Maryland|http://www.mbhs.edu/departments/magnet/courses_cs.php]]
From the book [[Philosophy: An Introduction to the Art of Wondering|https://en.wikipedia.org/wiki/Philosophy:_An_Introduction_to_the_Art_of_Wondering]] by James L. Christian:

The following pages may
lead you to wonder.
That’s really what philosophy
is—wondering.
To philosophize
is to wonder about life—
about right and wrong,
love and loneliness, war and death.
It is to wonder creatively
about freedom, truth, beauty, time
and a thousand other things.
To philosophize is
to explore life.
It especially means breaking free
to ask questions.
It means resisting
easy answers.
To philosophize
is to seek in oneself
the courage to ask
painful questions.

But if, by chance,
you have already asked
all your questions
and found all the answers—
if you’re sure you know
right from wrong,
and whether God exists,
and what justice means,
and why we mortals fear and hate and pray—
if indeed you have completed your wondering
about freedom and love and loneliness
and those thousand other things,
then the following pages
will waste your time.

Philosophy is for those
who are willing to be disturbed
with a creative disturbance.

Philosophy is for those
who still have the capacity
for wonder.
From Wisława Szymborska's [[poem Possibilities, as printed and read at BrainPickings|https://www.brainpickings.org/2015/03/18/amanda-palmer-wislawa-szymborska-possibilities-poem-reading/]]:

[...]
I prefer myself liking people
to myself loving mankind.
[...]
I prefer not to maintain
that reason is to blame for everything.

I prefer exceptions.

I prefer to leave early.

I prefer talking to doctors about something else.

[...]
I prefer the absurdity of writing poems
to the absurdity of not writing poems.

I prefer, where love’s concerned, nonspecific anniversaries
that can be celebrated every day.

I prefer moralists
who promise me nothing.
[...]
I prefer the hell of chaos to the hell of order.
[...]
I prefer many things that I haven’t mentioned here
to many things I’ve also left unsaid.

I prefer zeroes on the loose
to those lined up behind a cipher.
[...]
I prefer the time of insects to the time of stars.

I prefer to knock on wood.

I prefer not to ask how much longer and when.

I prefer keeping in mind even the possibility
that existence has its own reason for being.




In an article titled [[Procedural Literacy: Educating the New Media Practitioner|http://press.etc.cmu.edu/node/205]], Michael Mateas recounts a discussion among CS luminaries Alan Perlis, J. C. R. Licklider, and Peter Elias.

Perlis clarifies his position on a first course in programming at the university level (see also [[Programming for non-programmers]]):
>the purpose of my proposed first course in programming [...] is not to teach people how to program a specific computer, nor is it to teach some new languages. The purpose of a course in programming is to teach people how to construct and analyze processes.

Elias disagrees. He desires tools and technologies that make our interaction with and use of them "frictionless", intuitive, natural, and effortless, so that programming becomes unnecessary:
>I have a feeling that if over the next ten years we train a third of our undergraduates at M.I.T. in programming, this will generate enough worthwhile languages for us to be able to stop, and that succeeding undergraduates will face the console with such a natural keyboard and such a natural language that there will be very little left, if anything, to the teaching of programming…

Mateas (the author of the article) comments on this:
>The problem with this vision is that programming is really about describing processes, describing complex flows of cause and effect, and given that it takes work to describe processes, programming will always involve work, never achieving this frictionless ideal. Any tools that reduce the friction for a certain class of programs, will dramatically increase the friction for other classes of programs.

But, I think that Licklider hits the nail on the head:
>I think the first apes who tried to talk with one another decided that learning language was a dreadful bore. They hoped that a few apes would work the thing out so the rest could avoid the bother. But some people write poetry in the language we speak. Perhaps better poetry will be written in the language of digital computers of the future than has ever been written in English.

And Mateas comments on that:
>What I like about this is the recognition that computer languages are expressive languages; programming is a medium. Asking that programming should become so “natural” as to require no special training is like asking that reading and writing should become so natural that they require no special training. Expressing ideas takes work; regardless of the programming language used (and the model of computation implicit in that programming language), learning how to express oneself in code will always take work.
>[...]
>Perlis makes it clear that programming is a medium, in fact the medium peculiarly suited for describing processes, and as such, a fundamental component of cultural literacy, and a fundamental skill required of new media practitioners and theorists.


In a [[conversation|https://www.brainpickings.org/2016/03/07/sarah-kay-interview/]] between Maria Popova (of ~BrainPickings fame) and the poet [[Sarah Kay|http://www.kaysarahsera.com/about]], Kay told the following fable:
>A girl walks up to a construction site and asks the first man she sees, “Excuse me, what are you doing?” And he says, “Oh, can’t you see I’m laying bricks?” She then walks up to the second man she sees, who is doing the exact same thing the first one was doing, and says, “Excuse me, what are you doing?” And he says, “Oh, can’t you see I’m building a wall?” And then she reaches the third man, who is doing the same thing as the previous two, and she says, “Excuse me, what are you doing?” And he says, “Oh, can’t you see I’m building a temple?”
And Kay continues:
>I think of that fable a lot, because it’s not so much about what kind of a man you are — it’s about how you look at the work you’re doing. And I don’t think it’s a judgment on any particular way of looking at the world — in fact, I think we all probably contain all three of those, and we shift in and out depending on where we are in our lives, or even in our day.
And then she explains how it applies to her work and experience as a poet, and it sounds very similar to how I'd describe my experience teaching programming:
>For me, when I’m creating a poem, it feels like I’m laying bricks — it’s very logistical, a physical movement of words, putting them together, focused on the minutia of the poem. And when I’m in schools, working with young people, I’m focusing on building connections with them and for them — that feels like building a wall, creating something that’s part of something else. The temple part is a much rarer moment of being able to tap into something bigger than yourself. But what’s so wonderful about all of this is that if you focus on one of the three for too long, you lose sight of the other two — so it requires a lot of shifting and balancing in order to get anything done at all.

The "zooming" in and out of the various levels is very real in my experience. It usually ties into some Big Ideas (the ''Temple'') I am trying to understand, connect, demonstrate, teach. Then it drops down to the implementation (the ''Bricks''), creating programs one character, line, function, object, module at a time. And then it goes up, tying together (the ''Wall'') the implementation back to the vision, finding and filling the holes conceptually and programmatically, and communicating, demonstrating, teaching, practicing it. 
Kay mentions that the transitions are often fast and fluid, something also reflected in my programming technique, which [[Brian Harvey|https://people.eecs.berkeley.edu/~bh/]] calls [[Bricolage|http://www.wisegeek.com/what-is-bricolage.htm]] ([[bricolage programming|http://shura.shu.ac.uk/12649/3/Rose%20Bricolage%20programming%20problem%20solving%20ability.pdf]]). (I find it interesting/serendipitous that the word bricolage sounds similar to brick, even though its French origin actually means something entirely (?) different: DIY (Do It Yourself) :)
In an article titled [[Procedural Literacy: Educating the New Media Practitioner|http://press.etc.cmu.edu/node/205]], Michael Mateas recounts a discussion among CS luminaries Alan Perlis, J. C. R. Licklider, and Peter Elias; see [[Programming as a medium for expressing and describing processes]].

Related to designing a programming course for non-programmers, Mateas points out:
>It is important not to view computation for new media students as a dumbed-down version of the traditional computer science courses. Teaching programming for artists and humanists shouldn’t merely be simplified computer science with lots of visually engaging examples, but rather an alternative CS curriculum. Traditional CS courses tend to emphasize programming as a kind of reified mathematics, emphasizing mathematical abstractions and formal systems. For new media students we need to emphasize that, while programming does have its abstract aspects, it also has the properties of a concrete craft practice. In a practice that feels like a combination of writing and mechanical tinkering, programmers build elaborate Rube Goldberg machines. In fact, the expressive power of computation lies precisely in the fact that, for any crazy contraption you can describe in detail, you can turn the computer into that contraption. What makes programming hard is the extreme attention to detail required to realize the contraption. A “loose idea” is not enough - it must be fully described in great detail before it will run on a computer. A New Media introduction to CS should be a difficult course, with the challenge lying not in programming conceived of as applied mathematics, but in connecting new media theory and history with the concrete craft practice of learning to read and write complex mechanical processes.

Mateas suggests that game programming may be a good context for such a programming course:
>Games can serve as an ideal object around which to organize a new media introduction to CS. Games immediately force a focus on procedurality; a game defines a procedural world responsive to player interaction. Additionally, unlike other procedurally intensive programs such as image manipulation tools or CAD systems, games force a simultaneous focus on simulation and audience reception. A game author must build a dynamic, real-time simulation world such that, as the player interacts in the world, they have the experience desired by the author. Unlike the design of other software artifacts that minimize the authorial voice, maintaining an illusion of neutrality, games foreground the procedural expression of authorial intentionality in an algorithmic potential space.

He finds the game development learning context advantageous, and identifies two "fruitful" components, game AI (Artificial Intelligence) and game physics ("physics engines"):
>procedural literacy is not just the craft skill of programming, but includes knowing how to read and analyze computational artifacts. Because the procedural structure of games is the essence of the game medium (not mere “technical detail”), teaching procedural literacy through the creation of games is not intended merely as training for future game programmers, but as a process intensive training ground for anyone interested in computation as a medium. 
>
>The fundamentally procedural nature of games can be seen by looking at the two sources of activity within a game: __game AI and game physics__. 
>''Game AI'' is concerned with “intelligent” behavior, that is, behavior that the player can read as being produced by an intelligence with its own desires, behavior that seems to respond to the player’s actions at a level connected to the meaning of the player’s actions. Game AI produces the part of a game’s behavior that players can best understand by “reading” the behavior as if it results from the pursuit of goals given some knowledge. 
>''Game physics'' deals with the “dead” part of the game, the purely mechanical, causal processes that don’t operate at the level of intentionality and thus don’t respond to player activity at the level of meaning. A complete analysis of a game requires unpacking the procedural rules behind the AI and physics.

And Mateas concludes:
>[I]n the case of both game AI and game physics, the game’s response to player interaction is process intensive, depending on algorithmic response rather than playback of media assets. Thus reading and writing games and game-like artifacts requires procedural literacy, making games an ideal artifact around which to organize a procedural literacy curriculum.

In an [[interesting blog post|http://blog.kenperlin.com/?p=2739]], [[Ken Perlin|http://mrl.nyu.edu/~perlin/]] talks about the concept of a "programming literacy pipeline", analogous to a "(reading) literacy pipeline" - the process of moving a learner along from one level [of reading/books] to the next, from Dr. Seuss to Dostoevsky, and all points in between.

About reading as a child, Perlin writes:
> I already understood perfectly well, at the age of six, that what I was reading was exactly on the path to grown-up reading. I was reading the same language as the grown-ups, just in an early “learners” version. There was no sense that “Ten Apples Up on Top” was in some toy language. This was written English, fair and square — the same written language that my parents would read in the newspaper every morning — and I was learning to read it.

And he comes up with the following observation and possible evolution of programming and push for computing literacy:
>Until we come up with a suitable redefinition of what programming is for, until we embrace the utility of programming as a way for serious grown-up people to go about doing the serious things they want to do, without asking those people to pretend to be interested in becoming mathematicians or engineers, this sort of pipeline simply cannot be built for universal programming literacy.
To paraphrase Ursula K. Le Guin:  
{{{Models are the wings both intellect and imagination fly on.}}}
([[she said it about Words|Words are the wings both intellect and imagination fly on.]])


From the chapter on Prospects of Computer Modeling:

{{{All models are wrong, but some are useful.}}}
 --  George Box and Norman Draper

>Computer simulations of idea models such as the [[Prisoner's Dilemma|Summary of the Prisoner’s Dilemma]], when done well, can be a powerful addition to experimental science and mathematical theory. Such models are sometimes the only available means of investigating complex systems when actual experiments are not feasible and when the math gets too hard, which is the case for almost all of the systems we are most interested in. The most significant contribution of idea models such as the Prisoner's Dilemma is to provide a first hand-hold on a phenomenon (such as the evolution of cooperation) for which we don't yet have precise scientific terminology and well-defined concepts.
>The Prisoner's Dilemma models play all the roles I listed above for idea models in science (and analogous contributions could be listed from many other complex-systems modeling efforts as well):
>''Show that a proposed mechanism for a phenomenon is plausible or implausible''. For example, the various Prisoner's Dilemma and related models have shown what Thomas Hobbes might not have believed: that it is indeed possible for cooperation (albeit in an idealized form) to come about in leaderless populations of self-interested (but adaptive) individuals.
>''Explore the effects of variations on a simple model and prime one's intuitions about a complex phenomenon''. The endless list of Prisoner's Dilemma variations that people have studied has revealed much about the conditions under which cooperation can and cannot arise. You might ask, for example, what happens if, on occasion, people who want to cooperate make a mistake that accidentally signals noncooperation (an unfortunate mistranslation into Russian of a U.S. president's comments, for instance)? The Prisoner's Dilemma gives an arena in which the effects of miscommunications can be explored. John Holland has likened such models to "flight simulators" for testing one's ideas and for improving one's intuitions.
>''Inspire new technologies''. Results from the Prisoner's Dilemma modeling literature (namely, the conditions needed for cooperation to arise and persist) have been used in proposals for improving peer-to-peer networks and preventing fraud in electronic commerce, to name but two applications.
>''Lead to mathematical theories''. Several people have used the results from Prisoner's Dilemma computer simulations to formulate general mathematical theories about the conditions needed for cooperation. A recent example is work by Martin Nowak, in a paper called "Five Rules for the Evolution of Cooperation".

>What should we make of all this? I think the message is exactly as Box and Draper put it in the quotation I gave above: all models are wrong in some way, but some are very useful for beginning to address highly complex systems. Independent replication can uncover the hidden unrealistic assumptions and sensitivity to parameters that are part of any idealized model. And of course the replications themselves should be replicated, and so on, as is done in experimental science. Finally, modelers need above all to emphasize the limitations of their models, so that the results of such models are not misinterpreted, taken too literally, or hyped too much. I have used examples of models related to the Prisoner's Dilemma to illustrate all these points, but my previous discussion could be equally applied to nearly all other simplified models of complex systems.
>I will give the last word to physicist (and ahead-of-his-time model-building proponent) [[Philip Anderson|http://en.wikipedia.org/wiki/Philip_Warren_Anderson]]^^1^^, from his 1977 Nobel Prize acceptance speech:
>>The art of model-building is the exclusion of real but irrelevant parts of the problem, and entails hazards for the builder and the reader. The builder may leave out something genuinely relevant; the reader, armed with too sophisticated an experimental probe or too accurate a computation, may take literally a schematized model whose main aim is to be a demonstration of possibility.
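
The "miscommunication" variation mentioned above is easy to get a feel for in a small simulation. Here is a minimal Python sketch (my own illustration, not code from the book): two tit-for-tat players in an iterated Prisoner's Dilemma with the standard payoffs, where each intended move is flipped with a small probability. A single accidental defection sets off long echoes of retaliation and drags both average scores well below the steady-cooperation payoff of 3.
{{{
import random

random.seed(1)

# Payoff to the first player for (my_move, their_move); True = cooperate.
PAYOFF = {(True, True): 3, (True, False): 0, (False, True): 5, (False, False): 1}

def play(noise, rounds=200):
    """Two tit-for-tat players; each intended move flips with probability `noise`."""
    a_prev = b_prev = True                 # both start by cooperating
    a_score = b_score = 0
    for _ in range(rounds):
        a = b_prev if random.random() > noise else not b_prev  # copy opponent's last move
        b = a_prev if random.random() > noise else not a_prev
        a_score += PAYOFF[(a, b)]
        b_score += PAYOFF[(b, a)]
        a_prev, b_prev = a, b
    return a_score / rounds, b_score / rounds

print("no noise:", play(0.0))   # steady mutual cooperation: 3.0 each
print("5% noise:", play(0.05))  # occasional slips trigger waves of mutual defection
}}}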

----
^^1^^ An interesting [[article by Anderson: More is different|resources/anderson72more_is_different.pdf]] on the philosophy of science
<<forEachTiddler 
where 
'tiddler.tags.contains("author")'
sortBy 
'tiddler.title'>>
<<forEachTiddler 
where 
'tiddler.tags.contains("category")'
sortBy 
'tiddler.title'>>
!!Why Quotes?

I //love// quotes! To paraphrase [[Goethe|When ideas fail, words come in very handy.]]: When insights need capturing, quotes (often) come in very handy.
And that is even though [[Sir Winston Churchill|Winston Churchill]] (a person many of whose qualities I very much admire) once said (and I quote) "It is a good thing for an uneducated man to read books of quotations", which may seem somewhat derogatory toward quote-lovers^^1^^.

''Good'' quotes, in my mind, are like sharp scalpels: in one or two short, concise sentences, they reveal what the quote-lover considers "a very true aspect of life". Or, to use a less violent metaphor, they are like a "good" caricature, capturing or exposing a "very telling" (in the eyes of the beholder) side of life with very few lines/shapes.

And to be able to recognize your own feelings/views (a piece of mind?) in someone else's mind (through their quote or caricature) is enjoyable, sometimes educational (to Sir Winston's point above), and sometimes fulfilling in a deep sense of recognition mixed with discovery.

!!Quotes on this wiki can be searched in multiple ways, using either words appearing in the quotes or the tags associated with each quote.

To search for a word appearing in a quote, use the "search" field on the top right.

To ''list'' quotes by tags, open one of the lists below (by category, or by author), or use the "Tags" tab on the right.
Using the tags, you can ''list'' quotes by:
* [[Quote Categories]] (e.g., language, computer science, character)
* [[Quote Authors]] (e.g., Alan Perlis, Albert Einstein)


And as the popular New York celebrity chef ~Jean-Georges Vongerichten said:
>The amuse-bouche is the best way … to express big ideas in small bites.


Hope you enjoy these!

----

^^1^^ but as Albert Einstein said: [[Everybody is ignorant (or uneducated). Only on different subjects.|Everybody is ignorant. Only on different subjects.]]
This is a step-by-step worked out example of the implementation of the [[RSA algorithm|http://en.wikipedia.org/wiki/RSA_%28algorithm%29]], as originally described by Ron Rivest, Adi Shamir and Leonard Adleman (hence, R S A).
You can follow along with this simplified example using the [[NetLogo implementation/simulation|math/netlogo/RSAcrypto.html]].
Imagine, if you will, a spy agency (henceforth The Agency) wanting to exchange information (secret messages) with a group of spies out in the field.
* The Agency selects 2 prime numbers, say 17 and 19.
* It calculates __the product N__ (17 x 19 = 323), which will be published later, and the public key range ((17 - 1) x (19 - 1) = 16 x 18 = 288), which is kept secret.
* Then, The Agency selects another prime within the public key range, say 11 (which is smaller than 288 and shares no factors with 288, i.e., 288 cannot be divided by 11 without a remainder). This number will be used later as the __public key__.
* Finally, The Agency selects another number (say, 131), the __private key__, which when multiplied by the public key (11) and divided by the public key range (288) leaves a remainder of 1 (so, (131 x 11) / 288 = 1441 / 288 = 5 with a remainder of 1).

Now, The Agency publishes its public key (11) and the product N (323) to its agents. If those numbers fall into enemy hands, there is no harm done.
* An agent wanting to send a message to The Agency, takes the message (or data, say 32, in the [[NetLogo simulation|math/netlogo/RSAcrypto.html]]), and encrypts it by raising the data to the power of the public key, and taking the remainder after dividing by N (so, (data^^public-key^^) mod N = 32^^11^^ mod 323 =  36,028,797,018,963,968 mod 323 = 230 ). The data (32) can stand for a letter or symbol, and if the message consists of multiple letters, words, sentences, and symbols, this operation (taking to the power and then mod) is repeated for each letter/symbol.
* The agent sends the message (230) to The Agency, and again, if it falls into enemy hands, there is no harm done, since this is not really the secret message (or data, namely 32). Moreover, even if The Enemy intercepts both the encrypted message (230) and the public key (11) and the product N (323), they still cannot figure out the secret message (32).
* When The Agency receives the encrypted message (230), it uses its private key (131) to decrypt it by raising the encrypted message to the power of the private key and dividing by the product, and keeping the remainder (so, (encrypted-data^^private-key^^) mod N = 230^^131^^ mod 323 = 2.4341454095880723262526112555729e+309 mod 323 = 32). This is the original secret message sent to The Agency by the agent/spy.

As you can see, the calculations for encrypting and decrypting involve pretty big numbers, and cracking the keys requires factoring the product of two large primes, which makes it very difficult, but __not__ impossible, to crack RSA encryption.
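
The whole example also fits in a few lines of Python. This is a minimal sketch of the toy numbers above (not the NetLogo code; it needs Python 3.8+ for the modular inverse `pow(e, -1, phi)`), whereas real RSA uses primes hundreds of digits long:
{{{
p, q = 17, 19
N = p * q                    # 323: the published product
phi = (p - 1) * (q - 1)      # 288: the "public key range", kept secret
e = 11                       # public key, chosen to share no factors with phi
d = pow(e, -1, phi)          # private key: 131, since 11 * 131 = 1441 = 5*288 + 1

message = 32
cipher = pow(message, e, N)  # 32^11 mod 323 = 230
plain = pow(cipher, d, N)    # 230^131 mod 323 = 32, the original message
print(d, cipher, plain)      # -> 131 230 32
}}}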
I'm always on the lookout for ideas to make my CS courses more engaging and relevant, so...

A course titled [[Computing for poets|http://wheatoncollege.edu/lexomics/files/2016/07/Poet_syllabus_2016.pdf]] offered at Wheaton College includes a programming exercise that explores reading poems backwards.
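
I don't know what the Wheaton exercise actually looks like, but the minimal version, in Python, is just a reversed iteration over the lines of a text file (`poem.txt` is a placeholder name):
{{{
# Print a poem line by line from the end - the "backward" reading.
with open("poem.txt") as f:
    lines = f.read().splitlines()

for line in reversed(lines):
    print(line)
}}}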

An associated reading assignment is an article in The New Yorker called [[Reading Poems Backwards|http://www.newyorker.com/books/page-turner/reading-poems-backward]].

In the article, the author, [[Brad Leithauser|https://www.poetryfoundation.org/poems-and-poets/poets/detail/brad-leithauser]], introduces it thus:
>It probably happens now and then, though perhaps you don’t give it much thought. You read a poem backward.
>You pick up a poetry anthology, or you come upon a poem in a magazine, and your eye chances to fall upon its last lines. You read those. Then you read the poem.
>
>You read the poem, that is, knowing exactly where it’s headed. Sometimes this may occur with short stories (you read the last lines of the story first), but far less often. And as for novels—surely most of us carefully avoid the final page; given the time we’ll be investing, we’re reluctant to spoil the book’s surprise.
>
>Reading a poem backward is a distinctive experience, during which you’re typically asking not Where is this going?, but Can the poet justify the finish? In other words, Will the conclusion feel deserved? 


Towards the end (ha!) Leithauser brings up an interesting point linked to the poet [[Robert Frost|https://www.poetryfoundation.org/poems-and-poets/poets/detail/robert-frost]]:

>There’s an irony in reading Frost backward, given how strongly he recoiled at working backward. He once noted, “I never started a poem yet whose end I knew. Writing a poem is discovering.“ He viewed the issue in characteristically ethical terms. To write a poem whose ending you were already aware of seemed to him a form of cheating.
>
>I’ve never been able to share Frost’s views on this. If a poet determines that a poem should begin at point A and conclude at point D, say, the mystery of how to get there—how to pass felicitously through points B and C—strikes me as an artistic task both genuine and enlivening. There are fertile mysteries of transition, no less than of termination.
>
>And I’d like to suppose that Frost himself would recognize that any ingress into a poem is better than being locked out entirely. His little two-liner, “The Secret,”^^1^^ suggests as much: “We dance round in a ring and suppose / But the Secret sits in the middle and knows.” Most truly good poems might be said to contain a secret: the little sacramental miracle by which you connect, intimately, with the words of a total stranger. And whether you come at the poem frontward, or backward, or inside out—whether you approach it deliberately, word by word and line by line, or you parachute into it borne on a sudden breeze from the island of Serendip—surely isn’t the important thing. What matters is whether you achieve entrance into its inner ring, and there repose companionably beside the Secret. 

The funny/serendipitous thing is that a day after reading about reading backwards, I ran into an article (OK, so it wasn't a poem :) by the sage ~Sci-Fi writer Ursula K. Le Guin titled [["Do-It-Yourself Cosmology"|Ursula K. Le Guin in defence of Science Fiction]]. The thing that caught my eye was the one-line correspondence she "quotes" between GOD and a certain ~Sci-Fi writer. This one-liner (OK, it was not at the end of the article, but close to the end :) actually caused me to read the whole article. It was wonderful!!!

----
^^1^^ ''The Secret Sits''

We dance round in a ring and suppose,
But the Secret sits in the middle and knows.
Some inspiring gems from the [[American Scientist|https://www.americanscientist.org/article/75-reasons-to-become-a-scientist]]:

 # 5:
Curiosity.

Jane Goodall 
The Jane Goodall Institute for Wildlife Research, Education, and Conservation
----

 # 20:
Because there were two inspiring teachers—one undergraduate and one graduate—who made it impossible to resist. Because from a young age I had found great personal pleasure—and still do—in the Aha! experience that goes along with success in creating things and solving problems, although I have never had any illusions about the great importance of my own particular insights. And because I was—and still am—curious about and fascinated by my own psychological processes. Moreover, as I get older, this curiosity grows more intense and less constrained by the psychological dogma of my early career.

William Bevan 
Vice President 
~MacArthur Foundation
----

 # 22:
Why? Why not? Indeed, how to avoid it? Science, I observed, licenses “thinkering”—thinking and tinkering. As such it is a haven for the neotenic, the quizzical, the (absent-) minded. Science endorses a compulsive union of play and work, of modeling, word-smithing, and number-crunching. Science integrates experience and experiment with the realms of art and craft, neither excluding nor precluding, but exploring as an end in itself. Sciencing is surely a natural propensity of our species. When I, with deliberation, became a scientist, the reasons centered on the spatiotemporal flexibility of academic science, coupled with the opportunity to confront complexity, and the excitement of meeting curious companions everywhere.

Myrdene Anderson 
Associate Professor of Anthropology
Purdue University
----

 # 31:
The reason I initially decided to become a scientist is that I couldn’t believe someone would actually pay me to spend the rest of my life being curious and expanding my mind. Once I arrived at graduate school, I soon recognized that there was an acute shortage of scientists worrying about why volcanoes tend to congregate in the South Pacific to create a tropical paradise. It’s a tough job, but someone has to do it.

Marcia ~McNutt 
Associate Professor, Earth Sciences 
MIT
----

 # 34:
I never wanted to be a scientist; I wanted to be a mathematician for the sake of its consistency, which I found absent in every other endeavor. In the end it was quantum mechanics and the “uncertainty principle” which converted me to science. Still later, the need for defense drew me from work on pure science into the turbulent activities of unexpected novel applications. What I wanted I did not attain. What I got I do not regret.

Edward Teller 
Senior Research Fellow 
Hoover Institution
----

 # 41:
There were innumerable influences in your past, but you remember only a few of the major ones, and you instinctively weave these into a plausible history explaining how you became what you presume you are. This interpretation of history is both logical and nonfalsifiable and so tends to establish its own validity. Chances are it's wrong. My nonconfident guess as to why science and engineering have proved fascinating to me is that circumstances meshed hobby with profession. The hobby was sailplanes, an outgrowth of a teenage addiction to creating model airplanes. The challenges of improving sailplane efficiency and sharpening skills for harvesting nature’s energy to keep the vehicles aloft connected me to topics such as aerodynamics, structures, meteorology, probability concepts, and bird flight, as well as to pioneering and competitions. Simultaneously, there was the stimulus of several mentors—scientists with excitement for all subjects, and the gifts of inspiring those around them to share the delight.

Paul ~MacCready 
~AeroVironment, Inc.
----

 # 44:
I loved the beauty of crystals, I loved the cleverness of gadgets, and I loved the power of understanding. If I became a scientist it meant that I would always want to go to work and would be proud of what I did.

Gerald J. Wasserburg 
Professor of Geological Sciences 
California Institute of Technology
----

 # 51:
I went into science because as a child I had an intense curiosity, and it seemed as if science was filled with profound and deep mysteries. At an early stage, I was fascinated by numbers and their properties, and also by atoms and the particles that compose them, as well as by photons and other even more ethereal denizens of the microworld. Later on, I became fascinated by the mysteries of language and music, and at the same time, intrigued by computers and logic; and those varied interests led to my current fascination with perception, concepts, and creativity. My early loves for math and physics have left deep tracks in my modes of thought, but I don’t think directly about them too often anymore.

Douglas R. Hofstadter 
Professor of Human Understanding and Cognitive Science 
University of Michigan
----

 # 59:
“Why did you become a scientist?” [Long pause.] “Well... ?” “I’m thinking, I’m thinking; thirty years aren’t enough time for a satisfactory answer.”

Donald Fernie 
Professor of Astronomy 
University of Toronto
----


In a short but insight-rich blog post, Mark Guzdial from Georgia Tech brings up [[a list of excellent reasons to learn Computer Science (CS)|https://computinged.wordpress.com/2017/10/18/why-should-we-teach-programming-hint-its-not-to-learn-problem-solving/]] and he adds that none of the reasons has to do with the claim that CS improves problem-solving skills.

He emphasizes that there is no empirical evidence that "learning CS improves thinking", and in this he [[echoes Bret Victor|http://worrydream.com/MeanwhileAtCodeOrg/]], who quotes Seymour Papert.

The bulleted list is below, but it is definitely worth reading the entire blog entry with Guzdial's details and pointers.

* To understand our world - (mentioning Simon Peyton Jones) we teach Chemistry or Physics but we don't necessarily expect students to become chemists or physicists. We just want them to understand chemical reactions and physical interactions in the world. We teach CS so students understand digital/computerized interactions in the world.

* To study and understand processes - (quoting Alan Perlis) processes (and nowadays, more and more algorithms, which are processes too) are all around us, helping, controlling, guiding our actions and decisions. It is critical that we understand them and CS can help with that.

* To be able to ask questions about the influences on our lives - (pointing to C. P. Snow) knowledge of what computing, algorithms, heuristics, machine learning, AI, and so on, can and cannot do, and how they do what they do, enables us to be critical about the computational/digital acts and their impacts/implications.

* To use an important new form of literacy - (mentioning Alan Kay) knowing what's possible with computing and its power, we can be creative and express our ideas and interests using this power.

* To have a new way to learn science and mathematics - from Andrea diSessa (with Boxer) through Uri Wilensky (with NetLogo), we have new ways to examine scientific and mathematical problems through, for example, modeling, experimentation, and simulations. Computing enables another way to generate insights and solutions.

* As a job skill - CS programs in school/university should not necessarily aim at feeding the high tech industry with employees, but, as Guzdial writes, learning to program gives students new skills that have value in the economy. It’s a social justice issue if we do not make this economic opportunity available to everyone.

* To use computers better - this, according to Guzdial is not supported by rigorous data, but possibly points to CS knowledge and skills enabling more effective and efficient use of computing technologies.

* As a medium in which to learn problem-solving - learning CS and programming per se doesn't necessarily improve problem-solving skills, but CS and programming can be an excellent //context// in which to teach and practice problem solving.
From [[a paper by Alison Gopnik|resources/Gopnik Wellman - Bayes nets and causation.pdf]]
(another [[interesting paper on causality|Causality - Alison Gopnik]] by Gopnik)

On playing as learning (and "fishing expeditions"):
>It turns out that by intervening yourself, you can rapidly get the right evidence to eliminate many possible hypotheses, and to narrow your search through the remaining hypotheses. A less obvious, but even more intriguing, result is that these interventions need not be the systematic, carefully controlled experiments of science. The formal work shows that even less controlled interventions on the world can be extremely informative about causal structure. Multiple simultaneous interventions can be as effective as intervening on just one variable at a time. "Soft" interventions, where the experimenter simply alters the value of a variable, can be as effective as more controlled interventions, where the experimenter completely fixes that value. What we scientists disparagingly call a "fishing expedition" can still tell us a great deal about causal structure; you don't necessarily need the full apparatus of a randomized controlled trial.
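
The power of interventions is easy to demonstrate in a toy model. Below is a minimal Python sketch (my own illustration, not from the paper): a two-variable world where X causes Y. Observation alone shows only a symmetric correlation, but clamping a variable (an intervention, the "do" operation of causal Bayes nets) reveals the direction of the arrow: forcing X still moves Y, while forcing Y leaves X untouched.
{{{
import random

random.seed(0)

def sample(do_x=None, do_y=None):
    """One draw from a toy world where X causes Y.
    do_x / do_y are interventions: clamping a variable cuts
    the arrows that normally point into it."""
    x = do_x if do_x is not None else (random.random() < 0.5)
    y = do_y if do_y is not None else (x if random.random() < 0.9 else (not x))
    return x, y

n = 10000
obs = [sample() for _ in range(n)]
print("P(X = Y), just observing:", sum(x == y for x, y in obs) / n)  # ~0.9: correlation only

forced_x = [sample(do_x=True) for _ in range(n)]
print("P(Y=1 | do(X=1)):", sum(y for _, y in forced_x) / n)          # ~0.9: X drives Y

forced_y = [sample(do_y=True) for _ in range(n)]
print("P(X=1 | do(Y=1)):", sum(x for x, _ in forced_y) / n)          # ~0.5: Y does not drive X
}}}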

>If children's playful explorations are so unconstrained, how could they actually lead to rational causal learning? Recent research by Schulz (Bonawitz et al. 2011; Cook et al. 2011; Schulz et al., 2007, 2008; see also Legare, 2012) has begun to address this issue. Schulz and her colleagues have shown that children's exploratory play involves a kind of intuitive experimentation. Children's play is not as structured as the ideal experiments of institutional science. Nevertheless, play is sufficiently systematic so that, like scientific fishing expeditions, it can help children discover causal structure. This research also shows that children don't just draw the correct conclusions from the evidence they are given; they actively seek out such evidence.

On Hierarchical Bayesian models:
>Constructivists insist that the dynamic interplay between structure and data can yield both specific kinds of learning and more profound development as well. Hierarchical Bayesian models provide a more detailed computational account of how this can happen. On the hierarchical Bayesian picture local causal learning can, and will, lead to broader, progressive, theory revision and conceptual change.

On new hypotheses (innovation), language shaping thinking, and analogies:
>Even hierarchical Bayes nets are still primarily concerned with testing hypotheses against evidence, and searching through a space of hypotheses. It is still not clear exactly how children generate what appear to be radically new hypotheses from the data.
>Some learning mechanisms have been proposed in cognitive development to tackle this issue, including the use of language and analogy. In particular, Carey (2009) has compellingly argued that specific linguistic structures and analogies play an important role in conceptual changes in number understanding, through a process she calls "Quinean bootstrapping." There is empirical evidence that the acquisition of particular linguistic structures can indeed reshape conceptual understanding in several other domains, closer to intuitive theories (see e.g. Casasola, 2005; Gopnik, Choi & Baumberger, 1996; Gopnik & Meltzoff, 1997; Pyers & Senghas 2009; Shusterman & Spelke, 2005).

On difficulties with analogy-based learning, and Hierarchical Bayesian Models:
>But it is difficult to see how language or analogy alone could lead to these transformations. In order to recognize that a linguistic structure encodes some new, relevant conceptual insight it seems that you must already have the conceptual resources that the structure is supposed to induce. Similarly, philosophers have long pointed out that the problem with analogical reasoning is the proliferation of possible analogies. Because an essentially infinite number of analogies are possible in any one case, how do you pick analogies that reshape your conceptual understanding in relevant ways and not get lost among those that will simply be dead ends or worse? In the case of mathematical knowledge, these problems may be more tractable because such knowledge is intrinsically deductive. But in the case of inductively inferring theories there are a very wide range of possible answers. When many linguistic structures could encode the right hypothesis, or many analogies could be relevant, the problem becomes exponentially difficult. These proposals thus suffer from the same constructivist problem we have been addressing all along. And so, again, characterizing the influence of language and analogy in more precise computational terms might be very helpful. If probabilistic and hierarchical Bayesian models can help solve the riddle of induction, then perhaps they can shed light on these other learning processes as well.

On explanations helping learning, and lack of computational models:
>there is a great deal of work suggesting that explanations play an important role in children's learning (Wellman 2011). Even very young children ask for and provide explanations themselves and respond to requests for explanations from others (e.g., Callanan & Oakes 1992), and these explanations actually seem to help children learn (Amsterlaw & Wellman 2006; Siegler 1995; Legare 2012). But there is no account of explanation in computational terms.
According to Socrates, the answer to the old question of "how do we know what we don't know" is that we actually rediscover or remember what we already know, but forgot. 
//Meno//, by Plato^^1^^, is [[a dialog|resources/Plato_the_Meno.pdf]] between Socrates (claimed to be roughly 67 at the time) and Meno, a young aristocrat from Thessaly.
In the Meno dialog, Socrates uses one of Meno's slaves to demonstrate his idea of [[anamnesis|https://en.wikipedia.org/wiki/Anamnesis_%28philosophy%29]], that certain knowledge is innate and "recollected" by the soul through proper inquiry.
I don't think that the "soulful" explanation above really explains anything. I think that this rediscovery or recollection is actually the "creation on the spot" of new knowledge, as a result of deep, thoughtful probing and questioning. I suspect that this creation of new knowledge in place (in real time) //feels// like rediscovering something we already had (which brings up the question of [[are we discovering things or inventing them?|Is Math a human invention or a series of discoveries of truths in the real world?]]).
Come to think of it (ha!), it still sounds like creation //ex nihilo// (i.e., out of nothing), so the mysteries of the mind and the sources of knowledge are still a puzzle...

On the other hand, I don't think that learning is just about rediscovering (or remembering). I believe that there is truth in what [[Eugene Ionesco|http://en.wikipedia.org/wiki/Eug%C3%A8ne_Ionesco]] said: It is not the answer that enlightens, but the [[question|David Whyte - questions]]. Sometimes we (and some AI programs, too) play with combinations of concepts and ideas, and form [[questions that lead to new explorations|John O’Donohue - questions]] and hopefully new knowledge (which in turn leads to additional concepts and ideas we can play with and combine - a spiral journey of discovery).

Albert Einstein seems to refer to this spiraling questioning and discovery, too:
>"Most teachers waste their time by asking questions that are intended to discover what a pupil does not know, whereas the true art of questioning is to discover what the pupil does [really] know or is capable of knowing."

Sam Harris has [[a different take on rediscovery and "know thyself"|pg. 120 - SAM HARRIS: The Upload Has Begun]].


----
^^1^^ - [[Meno by Plato (Wikipedia)|https://en.wikipedia.org/wiki/Meno]]
In an article titled [[Relate-Create-Donate: A teaching/learning philosophy for the cyber-generation|http://hcil2.cs.umd.edu/trs/97-17/97-17.html]]^^1^^ Ben Shneiderman gives some sound advice for a productive, challenging, and effective Computer Science course, based on his experience doing and teaching CS.

Shneiderman proposes:
>a three-component philosophy called ~Relate-Create-Donate which stresses:
>
>1) Relate: work in collaborative teams
>
>2) Create: develop ambitious projects
>
>3) Donate: produce results that are meaningful to someone outside the classroom.
>
>''The Relate component'' emphasizes team efforts to develop communication, planning, management and social skills. The modern workplace demands proficiency in these skills, yet students are often taught to work on their own. Research on collaborative learning indicates that in the process of collaboration students are forced to clarify and verbalize their problems, thereby facilitating problem solution and anchoring/assimilating/accommodating novel information in the student's ideational structure (Ausubel, 1968). Collaboration has dangers, but when managed well, it generates intense motivation from many students, encourages learning from peers, and reduces drop out rates.
>
>''The Create component'' points to a fusion between learning and creative work. In creating substantial and appropriate individual and team projects, students will learn many things that serve the goals of education. Similarly, learning is useless if it does not prepare a student to be creative. Successful students create to learn, and learn to create.
>
>''The Donate component'' stresses the benefits of having authentic, service-oriented projects that will be meaningful and useful to someone outside the classroom. Having an outside "customer" generates intense motivation, helps clarify goals, and provides training for future professional work. Outside customers might be employers for students who have part-time jobs, managers at volunteer organizations or campus groups, curators at local museums, or administrators at nearby schools. If possible, I would give a grade based on the amount of societal benefit produced during the semester.



----
^^1^^ - [[local copy|resources/Shneiderman_Relate-Create-Donate_teaching_learning.pdf]]
Through no fault of our own, and by dint of no cosmic plan or conscious purpose, we have become, by the grace of a glorious evolutionary accident called intelligence, the stewards of life's continuity on earth.
We have not asked for that role, but we cannot abjure it. We may not be suited to it, but here we are.
It is commonly observed that "you can take an egg and scramble it, but you'll never be able to take the resulting mess and create an egg out of it", and this is a simple and clear example of the irreversible nature of the universe (or the 'inevitable' march downhill from here...).

To this, I have seen a response, which is also simple and clear: There definitely exists a process for taking the mess and "unscrambling" it, resulting in an egg. It's called a chicken.

Or as the old riddle goes:
Question: How do you unscramble an egg?
Answer: Feed it to a chicken.

I recently came across a thoughtful article by Jay ~McTighe and Grant Wiggins titled [[From Common Core Standards to Curriculum: Five Big Ideas|http://grantwiggins.files.wordpress.com/2012/09/mctighe_wiggins_final_common_core_standards.pdf]], and was pleased to see that they (like myself ;-) take the Common Core as a serious and refreshing change in the state of affairs.

I'd like to summarize their Big Ideas and add some comments:

!!!! Big Idea #1 - The Common Core Standards have new emphases and require a careful reading.
That is, this time it's not "old ideas in new packaging", but rather an opportunity to "raise the bar" on what math teaching can accomplish for learners. Also, the authors urge schools and teachers to read ALL parts of the standards, and not dive into their grade-specific sections right away.
The introductory sections of the standards (starting [[here|http://www.corestandards.org/Math]]) provide very important information about vision, goals, changes in emphasis, etc., while the [[Standards for Mathematical Practice|http://www.corestandards.org/math/practice]] provide badly needed requirements for higher level cognitive and meta-cognitive skills, performance, and outcomes.

!!!! Big Idea #2 - Standards are not curriculum.
In short, the standards are the //what// - focusing on the outcomes and accomplishments/achievements, while the curriculum is the //how// - focusing on sequences, methods, procedures, connections, etc. Educators must map (translate) the standards to effective and engaging curricula. This is an exciting opportunity (and a big creative challenge) for educators in classrooms, at home, and in online contexts and platforms to innovate, and also implement known and effective learning techniques.
Doing a thoughtful and thorough job on Big Idea #1 will "insure clarity about the end results and an understanding of how the pieces fit together.", which will lead to meaningful, engaging, and effective curricula.

!!!! Big Idea #3 - Standards need to be “unpacked.”
The authors suggest unpacking and analyzing the standards keeping 4 categories or "lenses" in mind:
* Long term Transfer Goals
** These are the long-term skills, mindsets, habits, and practice for life (hence //transfer//)
* Overarching Understandings
** These are the common themes, concepts, connections, and approaches that are effective and efficient when "thinking with math" about the world.
* Overarching Essential Questions
** These are the different "lenses" that keep learners goal-oriented, focused, and critical along the processes of understanding, practicing, and solving problems.
* A set of recurring Cornerstone Tasks
** These are the important performances or procedures that are part of the desired 21^^st^^ Century skills

!!!! Big Idea #4 - A coherent curriculum is mapped backwards from desired performances.
The curricula should not aspire to have "full coverage" of the topics mentioned in the CCSS, but rather start with the performance and desired outcomes of the learners, and map those to the topics, their sequencing, and instruction techniques. This will also reduce the risk of sequencing the curricula topics in the order they appear in the Standards (a mistake). It will also enable making meaningful and deep connections between topics, in support of achieving the desired outcomes.

!!!! Big Idea #5 - The Standards come to life through the assessments.
Again, like in Big Idea #1, the Standards are the //what//, and they should provide both the outcome and its "degree" or "level" or quality, as well. And that's the link to assessment, which should verify both qualitatively and quantitatively to what degree the learners have accomplished the desired goals/performances/outcomes.
Here too, assessments should not aspire to have a 1-to-1 coverage of the Standards and requirements. The authors of the Standards explicitly cautioned against it by saying:
“While the Standards delineate specific expectations in reading, writing, speaking, listening, and language, each standard need not be a separate focus for instruction and assessment. Often, several standards can be addressed by a single rich task.”
This obviously is also relevant to Math, and may serve as a motivation to design the curricula in larger (possibly project-based) chunks, linking multiple topics, concepts, and skills together, to form more meaningful (real life) contexts and experiences, preparing learners for these kinds of more meaningful assessments.
Richard Phillips Feynman was an American theoretical physicist known for his work in the path integral formulation of quantum mechanics, the theory of quantum electrodynamics, and the physics of the ...
Born: May 11, 1918, Far Rockaway
Died: February 15, 1988, Los Angeles
[img[Feynman Ambigram|resources/feynman_ambigram.gif][resources/feynman_ambigram.gif]] [1]
----
1- Ambigram from [[01101001|http://www.01101001.com/ambigrams/index.html]]. See [[other ambigrams|Ambigrams by Scott Kim]] by Scott Kim
!!!On the simplicity
From [[Richard Feynman's Nobel Prize Lecture|http://www.feynmanlectures.info/other/Feynmans_Nobel_Lecture.pdf]] (1965):
> It always seems odd to me that the fundamental laws of physics, when discovered, can appear in so many different forms that are not apparently identical at first, but, with a little mathematical fiddling you can show the relationship. [...] I don’t know why this is -- it remains a mystery, but it was something I learned from experience. There is always another way to say the same thing that doesn’t look at all like the way you said it before. I don’t know what the reason for this is. I think it is somehow a representation of the simplicity of nature. [...] I don’t know what it means, that nature chooses these curious forms, but maybe that is a way of defining simplicity. Perhaps a thing is simple if you can describe it fully in several different ways without immediately knowing that you are describing the same thing.

[[This reminds me of something I heard my father saying about mathematics|On multifaceted understanding]].

!!!On the beauty
From [[The Character of Physical Law|http://people.virginia.edu/~ecd3m/1110/Fall2014/The_Character_of_Physical_Law.pdf]] (MIT Press, Cambridge, MA, 1967)
>To those who do not know mathematics it is difficult to get across a real feeling as to the beauty, the deepest beauty, of nature … If you want to learn about nature, to appreciate nature, it is necessary to understand the language that she speaks in.

>The imagination of nature is far, far greater than the imagination of man.

From [[The Feynman Lectures on Physics, 1|http://www.feynmanlectures.caltech.edu/I_toc.html]]
>Poets say science takes away from the beauty of the stars -- mere globs of gas atoms. Nothing is “mere”. I too can see the stars on a desert night, and feel them. But do I see less or more? The vastness of the heavens stretches my imagination -- stuck on this carousel my little eye can catch one-million-year-old light. A vast pattern -- of which I am a part -- perhaps my stuff was belched from some forgotten star, as one is belching there. Or see them with the greater eye of Palomar, rushing all apart from some common starting point when they were perhaps all together. What is the pattern, or the meaning, or the why? It does not do harm to the mystery to know a little about it. For far more marvelous is the truth than any artists of the past imagined! Why do the poets of the present not speak of it? What men are poets who can speak of Jupiter if he were like a man, but if he is an immense spinning sphere of methane and ammonia must be silent?

Feynman also famously recounted in a filmed interview:
>I have a friend who's an artist and he's sometimes taken a view which I don't agree with very well. He'll hold up a flower and say, "look how beautiful it is," and I'll agree, I think. And he says, "you see, I as an artist can see how beautiful this is, but you as a scientist, oh, take this all apart and it becomes a dull thing." And I think he's kind of nutty. First of all, the beauty that he sees is available to other people and to me, too, I believe, although I might not be quite as refined aesthetically as he is. But I can appreciate the beauty of a flower. At the same time, I see much more about the flower than he sees. I could imagine the cells in there, the complicated actions inside which also have a beauty. I mean, it's not just beauty at this dimension of one centimeter: there is also beauty at a smaller dimension, the inner structure...also the processes. The fact that the colors in the flower are evolved in order to attract insects to pollinate it is interesting -- it means that insects can see the color. It adds a question -- does this aesthetic sense also exist in the lower forms that are...why is it aesthetic, all kinds of interesting questions which a science knowledge only adds to the excitement and mystery and the awe of a flower. It only adds. I don't understand how it subtracts.


This view is echoed by the following:
In a poetry reading which was part of a yearly event called The Universe in Verse, [[the NASA astrophysicist Natalie Batalha read a poem by Edna St. Vincent Millay|https://www.brainpickings.org/2018/08/03/the-universe-in-verse-natalie-batalha-edna-st-vincent-millay/]] ("Renascence"), but prefaced it with a personal experience she had, while doing planetary research using the powerful telescopes in the desert in Chile. Her account ([[video clip, starting at ~4:10 min|https://vimeo.com/282887910]]) beautifully expresses the point of how science and knowledge can make an experience even more intense, deep, and beautiful.
>The sky in the Southern Hemisphere is magnificent. The Milky Way arches straight overhead and it's very crisp; you've got the Large and Small Magellanic Clouds, which are satellites of our own galaxy; you have Alpha Centauri, which is the nearest star to our Solar System; the planets were arcing overhead along the ecliptic, and a gibbous Moon was out.
>I climbed up to the rooftop [of the building next to the telescope structure], and I laid down on top of the roof, and just took it all in. And for the first time, the dome of the sky over me transformed from a flat surface to a three-dimensional landscape. It was because of my deep knowledge of astronomy that I could visualize well all the relative positions of these objects. And so I was no longer a mere human stuck in a gravity well under a bell jar, I became the Earth itself traveling through space. I was literally the Universe. And my knowledge made all of this experience so intensely beautiful.
[[Richard Hamming|http://en.wikipedia.org/wiki/Richard_Hamming]]: American mathematician whose work had many implications for computer science and telecommunications.
Born: February 11, 1915, Chicago
Died: January 7, 1998, Monterey
Books: Coding and Information Theory, and more
In an [[article in the WSJ|https://www.wsj.com/articles/rules-for-modern-living-from-the-ancient-stoics-1495723404]], Massimo Pigliucci writes about "Know what you can control, be in the moment and other tips from Marcus Aurelius, Seneca and Epictetus", and boils it down to a few key lessons/messages:
* ''Learn to separate what is and isn’t in your power.'' This lets you approach everything with equanimity and tranquility of mind.
** Within our power are opinion, motivation, desire, aversion and, in a word, whatever is of our own doing; not within our power are our body, our property, reputation, office and, in a word, whatever is not of our own doing.
*** In a thoughtful review of [[Seneca's view on the shortness of life|https://www.brainpickings.org/2014/09/01/seneca-on-the-shortness-of-life/]] Maria Popova of ~BrainPickings quotes him:
>> You are arranging what lies in Fortune’s control, and abandoning what lies in yours. What are you looking at? To what goal are you straining? The whole future lies in uncertainty: live immediately.
* ''Contemplate the broader picture.'' Looking from time to time at what the Stoics called "the view from above" will help you to put things in perspective and sometimes even let you laugh away troubles that are not worth worrying about.
* ''Think in advance about challenges you may face during the day.'' A prepared mind may make all the difference between success and disaster.
* ''Be mindful of the here and now.'' The past is no longer under your control: Let it go. The future will come eventually, but the best way to prepare for it is to act where and when you are most effective—right here, right now.
* ''Before going to bed, write in a personal philosophical diary.'' This exercise will help you to learn from your experiences—and forgive yourself for your mistakes.
She's been in this world for over a year,
and in this world not everything's been examined
and taken in hand.

The subject of today's investigation
is things that don't move by themselves.

They need to be helped along,
shoved, shifted,
taken from their place and relocated.

They don't all want to go, e.g. the bookshelf,
the cupboard, the unyielding walls, the table.

But the tablecloth on the stubborn table
—when well-seized by its hems—
manifests a willingness to travel.

And the glasses, plates,
creamer spoons, bowl
are fairly shaking with desire.

It's fascinating,
what form of motion will they take,
once they're trembling on the brink:
will they roam across the ceiling?
fly around the lamp?
hop onto the windowsill and from there to a tree?

Mr. Newton still has no say in this.
Let him look down from the heavens and wave his hands.

This experiment must be completed.
And it will.




(translated, from the Polish, by Stanislaw Baranczak and Clare Cavanagh)
Science does not foreclose^^1^^ possibility, including discoveries that overturn fundamental assumptions, and it is not a final statement about reality but a highly fruitful mode of inquiry into it.


  -- Marilynne Robinson, in her book //Absence of Mind//

----
^^1^^ foreclose = close, settle, or answer beforehand.
!!!The Mind of a Master Brain Teaser

By ALEXANDRA ALTER for The Wall Street Journal

Scott Kim has been called "the M.C. Escher of the alphabet" for [[his ambigrams|Ambigrams by Scott Kim]]^^1^^, words and phrases that can be read in multiple directions.

Scott Kim has an odd talent: he's a brilliant problem maker.

Mr. Kim belongs to an elite cadre of "puzzle masters" who spend their days building logical mazes and brain teasers. In more than 20 years as a professional puzzle designer, Mr. Kim has worked on everything from word, number and logic puzzles to toys such as Railroad Rush Hour and computer games such as "Obsidian" and "Escher Interactive," which features interactive puzzles based on M.C. Escher's optical illusions. Lately, he has been developing smartphone game apps and contributing a bimonthly puzzle column to Psychology Today.

Mr. Kim defines puzzles as "problems that are fun to solve and have a right answer," as opposed to everyday problems like traffic, which, he noted, "are not very well-designed puzzles."

"My goal as a puzzle designer is to create a meaningful experience for the player, not just 'I solved it,' " he said.
Mr. Kim, who is youthful looking at 55, with a round, unlined face and sheepish smile, has mostly shunned popular forms like Sudoku or crosswords, which he dismissed as "filling out someone else's matrix." Instead, he aims to invent new forms, specializing in computer and print puzzles that pose logical problems or test mathematical or visual skills.

He often begins by choosing a cognitive area or skill he wants the puzzle to test. One puzzle challenged players' ability to visualize negative space, the white space that surrounds and defines shapes and letters. A smartphone game he's designing requires players to think three-dimensionally by reassembling, in the fewest possible steps, a car or building that has been cut into pieces.

!!!Three Puzzles
From "The Playful Brain," by Richard Restak and Scott Kim:

  Find a sequence of four letters that appear in two different five-letter words, differing by one letter. The words end in "P" or "H"; the first four letters are the same. Hint: the first letter is "C," and the words mean "hold hands or fight."

  Arrange numbers 0 to 9 so that three numbers plus three other numbers equal the four remaining numbers. (Start by drawing 10 boxes to hold three digits in the top row, three in the middle row and four in the bottom row, which will be the sum of the top two rows).

  Four people sit down for dinner in four different chairs and want to sit in a different combination each night. How many nights can they sit down without repeating an arrangement?
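
(Spoiler alert: for readers who would rather check than solve, the last two puzzles yield to a few lines of brute-force Python; this sketch is mine, not from the book.)
{{{
from itertools import permutations
from math import factorial

# Puzzle 2: split the digits 0-9 into ABC + DEF = GHIJ (no leading zeros).
for p in permutations(range(10)):
    abc = p[0] * 100 + p[1] * 10 + p[2]
    de_f = p[3] * 100 + p[4] * 10 + p[5]
    ghij = p[6] * 1000 + p[7] * 100 + p[8] * 10 + p[9]
    if p[0] and p[3] and p[6] and abc + de_f == ghij:
        print(abc, "+", de_f, "=", ghij)  # one of several valid arrangements
        break

# Puzzle 3: distinct seatings of 4 people in 4 chairs.
print(factorial(4), "nights")             # 4! = 24
}}}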



Andrea Bartz, news editor for Psychology Today, said his puzzles are always harder than they look. "They're deceptively and elegantly simple," Ms. Bartz said. "It looks like it should be very easy, but it takes a long time."

Growing up in Los Angeles, Mr. Kim was obsessed with magic and mathematics. He began drawing mazes and creating crossword puzzles in the second grade. In the sixth grade, he made his first original puzzle. He began folding letters of the alphabet made out of construction paper to make other letters. He folded the letter F over so that the base covered the shorter horizontal line and formed a U shape. "This excited me to no end," said Mr. Kim, who still uses the puzzle in lectures, asking people to guess the letter underneath. (Most guess L.)

Mr. Kim studied music during college and received a self-designed Ph.D. in computers and graphic design at Stanford. As a college student, he wrote letters to Martin Gardner, the author and mathematical-games columnist for Scientific American. The two began corresponding, and Mr. Kim sent him a new take on a classic logic puzzle. Scientific American published the puzzle, and Mr. Kim realized he could make a living from something most people view as a form of procrastination.


Mr. Kim works on puzzles daily from his home in Burlingame, Calif., a Bay Area suburb with "an excessive number of trees," Mr. Kim said. He lives with his wife and two kids, ages 12 and 4, in a house teeming with art supplies, board games, mathematical toys, art books and instruments, including piano and drums.

He likes changing locations frequently throughout the day, moving from his office to the kitchen table, then to the library or a coffee shop. Each time he changes surroundings, he tackles the problem anew. "I often find that the amount of progress I make is proportional to the number of times I start," he said. He's constantly doodling and carries a 3-by-5-inch notebook to record ideas, notes and images.

He borrows ideas for puzzles from architecture, music, science and art (favorite designers include Milton Glaser and Charles and Ray Eames). Occasionally, he gets ideas from dreams. After he dreamed he was surfing on waves of color, Mr. Kim had an idea for a computer game whose goal is to stay on the red wave.

He takes frequent walks because he feels that increased blood flow keeps his brain alert and prefers strolling rather than sitting when he meets with other game designers or book or magazine editors. "If you're sitting across the table from someone, the geometry of the situation says confrontation," he said. "If you're walking with somebody, you're heading in the same direction, and the spatial dance you're doing is a little more cooperative."

Designing puzzles can be exhausting. To unwind, and spark ideas, Mr. Kim dabbles in music and art. He composes Bach-like fugues, often in his sleep, and has studied the dizzying art of M.C. Escher. Science-fiction writer Isaac Asimov called him "the Escher of the alphabet" for his ambigrams, words and phrases that can be read in multiple directions. (An example: Write the word "chump" in cursive but leave the semicircle of the "p" open so it doesn't touch the p's tail. Flip it upside down.)

He defines a good puzzle as one that gets people to look at the problem in a new or counterintuitive way. "What I always want is to have several little 'aha' moments where your brain is very happy," he said.

----
^^1^^ - [[01101001|http://www.01101001.com/ambigrams/index.html]] has a nice collection of ambigrams
While teaching a course at Citizen Schools in Campbell, CA., this semester, I experienced what I'd call a "seagull moment".

I was teaching middle school students a STEM (Science, Technology, Engineering, Math) [[course called Amazing Mazes|The "Amazing Mazes" course]], which I had developed a year ago (including both many [[student interactive activities|http://employees.org/~hmark/courses/amazingmazes/index.html]], and a set of [[resources and lesson plans|It's all in the game - learning programming and math]] for future teachers).

One of the important concepts in learning in general, and in [[Computational Literacy|A Framework for Computational Thinking, Computational Literacy]] in particular, is ''levels of abstraction''^^1^^. The idea that different mental models (and levels of abstraction) are essential to understanding and learning, and that they can serve different purposes, is key to human growth and performance.

The Amazing Mazes course uses computers to build mazes in a 2D plane (on the computer screen), creating "maze walkers" (think, "mice"), and teaching them, through programming, to successfully navigate through these mazes (or "find the cheese", so to speak).
In one of the lessons the students build a maze by drawing lines, specifying the coordinates of the starting point and ending point of each line. As they enter the coordinates for the maze, they see the lines on the screen (immediate feedback), but also a running list of "draw-line" commands with the entered coordinates as command parameters.
The list of commands is displayed in a "history box" (text area on the screen), so that the students see side-by-side two very different representations of their maze: one is the actual shape of the maze, made of the lines they have drawn on the screen, and the other, a list of commands and parameters. And the question is: which of these forms "is really the maze"?
To emphasize the strong relationship between the visual and programmed version of the maze, the user interface enables the students to copy and paste the list of commands from the "history box" into a "command-input" area, then run/execute the commands, and see their maze drawn for them. This connects in a powerful way math and programming, different forms and models.
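
In miniature, and in Python rather than the course's own environment (the command name and coordinates here are made up for illustration), the dual representation looks like this: the maze is at once a picture on the screen and a plain list of commands that can be edited and re-run, as in the story below:
{{{
# The "history box" view: the maze as data - a list of draw-line commands.
maze = [
    ("draw-line", 0, 0, 0, 50),   # each tuple: command, x1, y1, x2, y2
    ("draw-line", 0, 50, 50, 50),
    ("draw-line", 50, 50, 50, 0),
]

def run(commands, draw_line):
    """Re-executing the command list redraws exactly the same maze."""
    for _, x1, y1, x2, y2 in commands:
        draw_line(x1, y1, x2, y2)

def shift(commands, dx, dy):
    """Adding a constant to every coordinate translates the whole maze."""
    return [(c, x1 + dx, y1 + dy, x2 + dx, y2 + dy)
            for (c, x1, y1, x2, y2) in commands]

# Stand-in "screen": print what would be drawn, shifted by 10 in x and y.
run(shift(maze, 10, 10), lambda *xy: print("draw-line", *xy))
}}}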

It is hard to fully grasp these concepts in middle school. When introducing these ideas in the course, I felt a bit like Fletcher Lynd Seagull in Richard Bach's wonderful story [[Jonathan Livingston Seagull|resources/Bach-Jonathan-Livingston-Seagull.html]]. Fletcher Gull had been mentored by Jonathan Gull for a while, and when it's time for Jonathan to move on (to another plane of existence? To another level of abstraction? ;-), Fletcher is left in charge of a group of young seagulls who are very eager to learn how to fly as superbly as Jonathan and Fletcher.
Fletcher, still saddened by his friend and mentor leaving, tries to convey to his young students some of the wisdom he acquired (with a lot of effort/practice) from Jonathan, and starts his first lesson with the group:
>...Fletcher Gull dragged himself into the sky and faced a brand-new group of students, eager for their first lesson. "To begin with" he said heavily, "you've got to understand that a seagull is an unlimited idea of freedom, an image of the Great Gull, and your whole body, from wingtip to wingtip, is nothing more than your thought itself."
>The young gulls looked at him quizzically. Hey, man, they thought, this doesn't sound like a rule for a loop. 
>Fletcher sighed and started over. "Hm. Ah... very well," he said, and eyed them critically. "Let's begin with Level Flight."

As it turns out, one 7th grade girl in class got an important piece of the connection between, and the usefulness of, the two forms of representing a maze. Instead of just copying and pasting the commands and parameters from the history box into the command input and running them (to display the original maze), she added 10 (I guess she correctly reasoned that it'd be easier) to each command parameter (coordinate in the x-y plane), and then ran the commands. I don't know who was more pleased with the resulting translated (shifted) shape on the screen: I, because I was able to teach, or she, because she was able to learn!


----
^^1^^ [[Bret Victor|http://worrydream.com/]] gives an excellent and beautifully done [[example of going up and down the abstraction ladder|http://worrydream.com/LadderOfAbstraction/]]
!!! In his book //5000 B.C. and Other Philosophical Fantasies//, Raymond Smullyan lists "self-annihilating" sentences:
(for a more complete (ha!) list see [[Self-Annihilating Sentences: Saul Gorn's Compendium of Rarely Used Clichés|https://repository.upenn.edu/cgi/viewcontent.cgi?article=1522&context=cis_reports]])
* Before I begin speaking, there is something I would like to say.
* I am a firm believer in optimism because without optimism, what else is there?
* Half the lies they tell about me are true.
* Every Tom, Dick, and Harry is called John.
* Having lost sight of our goal, we must redouble our efforts!
* I'll see to it that your project deserves to be funded.
* I've given you an unlimited budget, and you have already exceeded it!
* A preposition must never be used to end a sentence with.
* This species has always been extinct.
* Authorized parking forbidden!
* If you're not prejudiced, you just don't understand!
* Inflation is an economic device whereby each person earns more than the next.
* Superstition brings bad luck.
* That's a real step forward into the unknown.
* You've outdone yourself as usual.
* Every once in a while it never stops raining.
* Monism is the theory that anything less than everything is nothing.
* A formalist is one who cannot understand a theory unless it is meaningless. 

!! And a few others
( from [[a blog on the princeton.edu site|http://www.cs.princeton.edu/~chazelle/courses/BIB/cuttheknot.htm]])
* E.Harrison: "Is it true that philosophy has never proved that something exists?" Bertrand Russell: "Yes, and the evidence for it is purely empirical." 
* Hofstadter's Law: It always takes longer than you expect, even when you take into account Hofstadter's Law.
* Nostalgia isn't what it used to be.
* Break every rule.
* All generalizations are misleading.
* If somebody loves you, love them back unconditionally. 
* We have to believe in free-will, we have no choice.
* Understatement is a zillion times more effective than exaggeration.

!!!Questions that contain their own answers
(a few samples:)
* In 1978, Raymond Smullyan wrote a book about logical puzzles. //What is the name of this book?//
* I am the square root of -1. Who am //I//?
* What would the value of 190 in hexadecimal //be//?
* Twenty-nine is a //prime// example of what kind of number?
* The reciprocal of [[sqrt(2)|https://en.wikipedia.org/wiki/Square_root_of_2]] is half of what number?
* How many consonants are in //one//? How many consonants are in //two//? How many consonants are in //three//?
* What do you do to the length of an edge of a //cube// to find its volume?
In an interesting [[talk (45 min. video)|https://mediax.stanford.edu/page/john-seely-brown-mediaX2017]] given at the #mediaX2017 conference at Stanford, John Seely Brown (former Chief Scientist of Xerox Corporation and former director of the Xerox Palo Alto Research Center (PARC)) talked ([[slides|http://johnseelybrown.com/sensemaking.pdf]]) about what it means (and how that meaning has changed) to make sense and learn in the new environment we are in, in the 21st century (the "post ~AlphaGo world", as he calls it).

!!!His main points
!!!! Looking from an operational perspective
* In the Push Economy of the 20th Century, 20th century infrastructure drove organization architectures where Scalable Efficiency was the holy grail.
** predictable
** hierarchy
** control
** organizational routines
** minimize variance 
* 21st Century infrastructure: no stability in sight; an S-curve driven by continual exponential advances in computation
** rapid set of punctuated jumps for the next 20-40 years
* Three quite different eras required quite different learning strategies & ways of being
** Industrial Age - like a big steamship
** Early Digital Age - like a nimbler sailboat
** Digitally Networked Age - like a whitewater kayak
* This new era (Digitally Networked Age) is no longer just about deepening individual expertise within a silo.
** Instead, it is also about participating in & shaping knowledge flows, where the goal is to be balanced & embedded when all is in flux
* Critical skills for a white water world (besides [[reading deeply and widely|A Helpful Guide to Reading Better - Farnam Street]])
** skillfully reading the currents and disturbances of the context,
** interpreting the flows for what they reveal of what lies beneath the surface,
** leveraging the currents, disturbances and flows for amplified action.
* Reading context is becoming more important, but it isn’t always simple
** Data analytics can’t do everything.
** Knowing what questions to ask is key (since [[questions are like lanterns|John O’Donohue - questions]])
** ''data'' ≠ ''information'' ≠ ''beliefs'' ≠ ''values''
** And for that imagination is crucial^^1^^ especially if we are to escape the tyranny of the present (the fixed way we currently view and think about a situation)
* Given the relentless pace of change & disruptions:
** Incremental learning will no longer suffice!!
[img[The Big Shift|./resources/big shift.png]]
* How do you do that?
** A simple start: “How often do you get out of your comfort zone?”
*** Increasingly important in a global world of constant change! 
*** honor & amplify serendipity

** Orchestrating Serendipity can be more than just luck!!!
*** Choose Serendipity Environments
*** Develop Serendipity Practices
*** Enhance Serendipity Preparedness
** In all encounters: develop and practice deep listening with reciprocity
[img[Serendipity|./resources/serendipity.png]]

** Also consider: ''Reverse Mentorship''
*** can be an amazing source of insight
*** but can also be humbling.
**** "we have so much to learn if we are ready to look stupid"
*** Endless Newbie is the new default for everyone, no matter your age or experience. That should keep us humble. (from the book [[The Inevitable by Kevin Kelly|http://kk.org/books/the-inevitable/]])

!!!! Looking from an epistemological perspective (Knowing (the what) and understanding (the how and the why))
* How do we come to understand given the pace of change & dense interconnectivity?
** Incremental learning, no matter how fast we do it, will no longer suffice.
** We must be willing to regrind our conceptual lenses, often! We must be able and willing to constantly reframe.
*** (Piaget) Assimilation ----> Accommodation - we need to become better at breaking the old/existing frames and creating new ones (accommodating new learning)
* Living in exponential times, in a global networked age that is densely interconnected, many of our problems are [[wicked problems|https://www.wickedproblems.com/1_wicked_problems.php]]
** They are not just complicated; they are complex
** quoting Parag Khanna, The Second World:
*** When we try to pick out anything by itself, we find it hitched to everything else in the universe. There is no special sphere of the environment, no distinct lands of oil, no detached global economy, and no separate issue of public health.
** Wicked Problems: As soon as you start to solve them they morph.
*** The “Catch 22” of wicked problems is that you cannot learn about the problem without trying solutions, but every solution you try has lasting unintended consequences that are likely to spawn new wicked problems.
*** Examples of wicked problems include: global warming, the financial crisis, terrorism, environmental design, homelessness.
* We need new ways to move from mechanistic thinking to understanding contexts/problems that evolve:
** Sets of exchanges with complex feedback loops
** Dynamic attractors
** Network affordances and contextual propensities
* Karl Popper, the great philosopher, said that [[all problems are either clouds or clocks|http://www.the-rathouse.com/2011/Clouds-and-Clocks.html]].
** To understand a clock you can take it apart into its individual pieces, study the pieces, and then understand how the clock works. A cloud you can't take apart. A cloud is a dynamic system; you can only study it as a whole.
** One of the problems we have as a culture is we take clouds and pretend they are clocks.
* Perhaps this whitewater world may require “a new sense – a seventh sense” ([[Joshua Cooper Ramo|http://joshuacooperramo.com/]])
** The seventh sense is the ability to look at any object and see (or imagine) the way in which it is changed by connection. Whether you are commanding an army, running a Fortune 500 company, planning a great work of art, or thinking about your child's education.
** We have a Crisis of Imagination. We need to “see” the ways that something/everything is changed by hyper-connectivity. 
** Or even just plain normal connectivity where connections are not obvious and making sense is elusive
* We need to use Abductive Reasoning:
** imagining possible stories that suggest how something could have come about and then testing them.
** A form of sense making when the facts don’t add up or contradict each other.
** This imagination^^1^^ is the power or capacity of humans to form internal images of objects and situations (visual, auditory or motor images). It:
*** closes the gap between what is novel and what is known.
*** finds connections between things that are not obvious.
*** plays with boundaries. It lets partial thoughts jump fences.
*** engages in sense-breaking in order to make sense in a new way – to see new possibilities.
*** and in seeing new possibilities, it helps us escape the tyranny of the present (the fixed way we currently view and think about a situation).
** Within the range of abductive reasoning, there is a shift from normative sense-making to sense-breaking, where one widens the gap and then resolves it, with the imagination.
** Imagination cannot just be an add-on. It is not just relevant within the domains of the ‘arts.’ It is crucial in sense-making, and even more so in sense-breaking/sense-making.

!!!! Looking from an ontological perspective (a new way of being)
* The unique power of the human imagination^^1^^ comes in part from its ability to integrate opposing qualities, like emotion and reason, curiosity and certainty.
** we all are deeply engaged and practiced in "thinking man" and "building man", and we learn by creating and reflecting.
** but we don't take seriously the value of "playing man": the roles that play, boundary testing, alternative scenarios, etc., do play and should play in our lives
[img[Imagination as a blender|./resources/blending imagination.png]]
* Cultivating a blended ontology with human/machine
** into this state of being and becoming we need to blend AI (Artificial Intelligence) and IA (Intelligent Augmentation)
** the imagination (as the binding agent) has new properties
** Indwelling across a distributed community of practice, creating a networked imagination
* We have to be careful when we develop AI, because data and algorithms operate as black boxes especially in deep learning systems.
** [[This becomes more and more an important aspect of augmenting our capabilities|The Black Box problem with Machine Learning and AI]]
* A social-systems analysis is needed that draws on philosophy, law, sociology, anthropology, and science and technology studies - especially with respect to the curation of the data used to train these deep learning systems.
** “Only by asking broader questions about the impacts of AI can we generate a more holistic and integrated understanding…” -- Kate Crawford & Ryan Calo

!!!!And he concludes:
* Our challenge today is one of sense making in a world that won’t stay stable for our standard sense-making strategies to be effective.
* Most of our pressing problems are wicked, our tools are increasingly opaque, our models and frames are outdated
* We need to escape the tyranny of the present and meet head on the crisis of imagination. 



----
^^1^^ - Ursula K. Le Guin (in 2002) talked about the [[criticality and uniqueness of human imagination|The importance of imagination - Ursula K. Le Guin]]
I am still occasionally thinking about [[the plagiarism incident|I have a Bionic Ear]] I had recently in my CS class.
Copying/plagiarism is enabled by the fact that I allow students to look at each other's programs as part of their learning, before they are required to turn their assignments in for grading.

I recently came across [[an article in The Atlantic Magazine|http://www.theatlantic.com/technology/archive/2011/06/how-i-failed-failed-and-finally-succeeded-at-learning-how-to-code/239855/]] by [[James Somers|http://www.theatlantic.com/author/james-somers/]], which very succinctly explains why letting "budding programmers" (as well as experienced ones) look at other people's code is "a very good thing".

It completely reflects my beliefs and experience, both as a teacher and programmer, and reinforces my decision to keep encouraging students to look at other students' code as part of their learning.

>Let's say that your sink is broken, maybe clogged, and you're feeling bold -- instead of calling a plumber you decide to fix it yourself. It would be nice if you could take a picture of your pipes, plug it into Google, and instantly find a page where five or six other people explained in detail how they dealt with the same problem. It would be especially nice if once you found a solution you liked, you could somehow immediately apply it to your sink.

>Unfortunately that's not going to happen. You can't just copy and paste a Bob Vila video to fix your garage door.

>But the really crazy thing is that this is what programmers do all day, and the reason they can do it is because code is text.

>I think that goes a long way toward explaining why so many programmers are self-taught. Sharing solutions to programming problems is easy, perhaps easier than sharing solutions to anything else, because the medium of information exchange -- text -- is the medium of action. Code is its own description. There's no translation involved in making it go.

>Programmers take advantage of that fact every day. The Web is teeming with code because code is text and text is cheap, portable and searchable. Copying is encouraged, not frowned upon. The neophyte programmer never has to learn alone.


I see it all the time: by engaging with projects and being exposed to (and actively seeking) different approaches and solutions, the students are picking up the concepts and skills I want them to learn.

Obviously, students need to be taught, and need to understand, the difference between learning from an example (acknowledging their sources, improving on the solution) and blatantly, uncritically copying something and misrepresenting a solution as their own when it's not.

But even Albert Einstein acknowledged that "Example isn't another way to teach, it is the only way to teach."
A simple question with a simpler answer (click on the image to view :)

[img[find x|./resources/find x 1.png][./resources/find x.png]] 


Worthy problem (a la [[Grooks (form of poetry)|http://www.archimedes-lab.org/grooks.html]] by Piet Hein)?
>Problems worthy of attack 
>prove their worth by hitting back. 
(or the computer science equivalent: the problem with troubleshooting is that sometimes trouble shoots back.)
 
I received an invitation via email to encourage my students to sign on to a site and start solving math challenges (puzzles) to sharpen their math and reasoning skills.

The first (out of 100) challenges was as follows:
[img[billiards ball math challenges|./resources/billiard ball challenge 1.png][./resources/billiard ball challenge.png]]


You can obviously solve this puzzle through math/geometry (an exercise left to the reader :)

But you could also write a small simulation program to let it figure the answer out. I decided to use Scratch, since it has most of the basic building blocks (ha!) already built into the language/environment.
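(For the text-inclined, here is a rough Python sketch of the same bouncing logic -- my own, not a transcription of the Scratch program. The table dimensions below are placeholders, since the puzzle's real numbers are in the image:)
{{{
# A ball launched at 45 degrees from a corner of a W x H table advances one
# grid step per tick and reflects off the cushions until it reaches a corner.
def corner_pocket(W=5, H=3):                # W, H are placeholder dimensions
    x, y, dx, dy = 0, 0, 1, 1               # start in the lower-left corner
    while True:
        x, y = x + dx, y + dy               # advance one diagonal step
        if x in (0, W) and y in (0, H):     # both coordinates on a cushion:
            return (x, y)                   # ...the ball is in a corner pocket
        if x in (0, W): dx = -dx            # bounce off a vertical cushion
        if y in (0, H): dy = -dy            # bounce off a horizontal cushion

print(corner_pocket())                      # (5, 3) for the placeholder table
}}}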

So here is the simulation program listing in Scratch (__don't__ [[run the simulation|https://scratch.mit.edu/projects/163532404/#fullscreen]] and/or look/scroll down if you want to try and figure out the answer on your own!)

[img[billiards ball Scratch program|./resources/Scratch for billiard ball q 1.png][./resources/Scratch for billiard ball q.png]]












And the answer is that the billiards ball will end up in pocket A (see picture/question above):
[img[billiards ball Scratch program|./resources/Scratch for billiard ball a.png]]
From an excellent lecture by Judea Pearl titled [["The Art and Science of Cause and Effect" |http://bayes.cs.ucla.edu/BOOK-2K/causality2-epilogue.pdf]] (see [[my review|On The Art and Science of Cause and Effect - Judea Pearl]]) and also Pearl's [["Comment: Understanding Simpson’s Paradox"|https://pdfs.semanticscholar.org/d2fa/2d0285e60a29a24a74568a1010328c46fa59.pdf]]:

>Simpson’s paradox, first noticed by Karl Pearson in 1899, concerns the disturbing observation that every statistical relationship between two variables may be  reversed by including additional factors in the analysis. For example, you might run a study and find that students who smoke get higher grades; however, if you adjust for age, the opposite is true in every age group, that is, smoking predicts lower grades. If you further adjust for parent income, you find that smoking predicts higher grades again, in every age–income group, and so on.

Or another example:
>The classical case demonstrating Simpson’s paradox took place in 1975, when ~UC-Berkeley was investigated for sex bias in graduate admission. In this study, overall data showed a higher rate of admission among male applicants; but, broken down by departments, data showed a slight bias in favor of admitting female applicants. The explanation is simple: female applicants tended to apply to more competitive departments than males, and in these departments, the rate of admission was low for both males and females.
>
>[img[fish nets|resources/fish nets.png]]
>To illustrate this point, imagine a fishing boat with two different nets, one with a large mesh and one with a small mesh. A school of fish swim toward the boat and seek to pass it. The female fish [which are smaller in size] try for the small-mesh challenge, while the male fish [which are larger] try for the easy route. The males go through and only females are caught. Judging by the final catch, preference toward females is clearly evident. However, if analyzed separately, each individual net would surely trap males more easily than females.

From the article [["The Role of Exchangeability in Inference"|https://projecteuclid.org/download/pdf_1/euclid.aos/1176345331]] by D. V. Lindley and Melvin R. Novick:

Consider the data in Table 1, where 40 patients were given a treatment, T, and 40 assigned to a control, T*. The patients either recovered, R, or did not, R*. We are not considering small-sample problems, so the reader can, if he wishes, imagine all the numbers multiplied by 10,000, say. It is then clear that the recovery rate for patients receiving the treatment, at 50%, exceeds that for the control, at 40%, and the treatment is apparently to be preferred.

However, the sex of the patients was also recorded and Table 2 gives the breakdown of the same 80 patients with sex, M male or M* female, included. It will now be seen that the recovery rate for the control patients is 10% higher than that for the treated ones, both for the males and the females. Thus, what is good for the men [i.e., no treatment] is good for the women, but bad for the population as a whole. We refer to this as Simpson's (1951) paradox, though it occurs in Cohen and Nagel (1934).

In Appendix 1 we describe the situation mathematically and show that the paradox can only arise if, R and T being positively associated, M is positively associated both with R and with T. This is exactly what has happened here: The males have been mostly assigned to the treated group, the females to the control; perhaps because the doctor distrusted the treatment and so was reluctant to give it to the females where the recovery rate is much lower than for males. 

Alternatively expressed, treatment and sex have been confounded. Nevertheless it comes as a surprise to most people to learn that confounding can actually reverse an effect; here from +10% to -10%.

An important problem posed by the paradox is this: Given a person of unknown sex, would you expect the control or the treatment to be the more effective? (If having an unknown sex seems odd, replace M and M* by a dichotomy that is difficult to determine, such as a genetic classification.) The answer seems clear that, despite Table 1, the control [i.e., no treatment] is better. If so, then this warns us to be very careful in using results like those in Table 1 to draw the opposite conclusion, for could there not exist a factor, here sex, which reversed the conclusion?

But is the answer so clear? Keeping the numbers the same, imagine data with T and T* replaced by white and black varieties of a plant respectively, and R and R* corresponding to high and low yields; the confounding factor being whether the plant grew tall, M, or short, M*. The white variety is 10% better overall, but 10% worse among both tall and short plants. In this case the white variety, T, seems the better one to plant; whereas T*, the control, was intuitively preferred in the medical situation.

<html>
	<table>
<tr><td><h2>People</h2></td><td><center>Table 1</center></td><td></td></tr>				
<tr><td><b>Total Population</b></td>	<td>recovered (R)</td>	<td>not recovered (R*)</td>	<td>total</td>	<td>recovery rate</td></tr>
<tr><td>treated (T)</td>	<td>20</td>	<td>20</td>	<td>40</td>	<td>0.5</td></tr>
<tr><td>not treated (T*)</td>	<td>16</td>	<td>24</td>	<td>40</td>	<td>0.4</td></tr>
<tr>	<td>total</td>	<td>36</td>	<td>44</td>	<td>80</td>	</tr>
<tr><td></td><td><center>Table 2</center></td><td></td></tr>				
<tr><td><b>Males (M)</b></td>	<td>recovered (R)</td>	<td>not recovered (R*)</td>	<td>total</td>	<td>recovery rate</td></tr>
<tr><td>treated (T)</td>	<td>18</td>	<td>12</td>	<td>30</td>	<td>0.6</td></tr>
<tr><td>not treated (T*)</td>	<td>7</td>	<td>3</td>	<td>10</td>	<td>0.7</td></tr>
<tr><td>total</td>	<td>25</td>	<td>15</td>	<td>40</td>	</tr>
<tr><td></td></tr>				
<tr><td><b>Females (M*)</b></td>	<td>recovered (R)</td>	<td>not recovered (R*)</td>	<td>total</td>	<td>recovery rate</td></tr>
<tr><td>treated (T)</td>	<td>2</td>	<td>8</td>	<td>10</td>	<td>0.2</td></tr>
<tr><td>not treated (T*)</td>	<td>9</td>	<td>21</td>	<td>30</td>	<td>0.3</td></tr>
<tr><td>total</td>	<td>11</td>	<td>29</td>	<td>40</td>	</tr>

</table>
</html>
The paradox as defined by Lindley and Novick is:
>The apparent [(but illogical) conclusion] is that when we know that the gender of the patient is male, or when we know that it is female, we do not use the treatment; but if the gender is unknown we should use the treatment! Obviously that conclusion is ridiculous.
And Pearl observes (about the importance of context, and exterior/extra non-statistical considerations):
>[Lindley and Novick] showed that, with the very same data, we should consult either the combined table or the disaggregated tables, depending on the context. Clearly, when two different contexts compel us to take two opposite actions based on the same data, our decision must be driven not by statistical considerations, but by some additional information extracted from the context.

<html>
<table>

<tr><td><h2>Plants</h2></td><td><center>Table 1</center></td><td></td></tr>				
<tr><td><b>All Plants</b></td>	<td>high yield (R)</td>	<td>low yield (R*)</td>	<td>total</td>	<td>yield</td></tr>
<tr><td>white (T)</td>	<td>20</td>	<td>20</td>	<td>40</td>	<td>0.5</td></tr>
<tr><td>black (T*)</td>	<td>16</td>	<td>24</td>	<td>40</td>	<td>0.4</td></tr>
<tr>	<td>total</td>	<td>36</td>	<td>44</td>	<td>80</td>	</tr>
<tr><td></td><td><center>Table 2</center></td><td></td></tr>				
<tr><td><b>Grew Tall (M)</b></td>	<td>high yield (R)</td>	<td>low yield (R*)</td>	<td>total</td>	<td>yield</td></tr>
<tr><td>white (T)</td>	<td>18</td>	<td>12</td>	<td>30</td>	<td>0.6</td></tr>
<tr><td>black (T*)</td>	<td>7</td>	<td>3</td>	<td>10</td>	<td>0.7</td></tr>
<tr><td>total</td>	<td>25</td>	<td>15</td>	<td>40</td>	</tr>
<tr><td></td></tr>				
<tr><td><b>Grew Short (M*)</b></td>	<td>high yield (R)</td>	<td>low yield (R*)</td>	<td>total</td>	<td>yield</td></tr>
<tr><td>white (T)</td>	<td>2</td>	<td>8</td>	<td>10</td>	<td>0.2</td></tr>
<tr><td>black (T*)</td>	<td>9</td>	<td>21</td>	<td>30</td>	<td>0.3</td></tr>
<tr><td>total</td>	<td>11</td>	<td>29</td>	<td>40</td>	</tr>
	</table>
</html>
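To see the reversal concretely, here is a quick Python check of the numbers in the People tables above (my own snippet; the same arithmetic applies, renamed, to the Plants tables):
{{{
# (recovered, total) for each (sex, treated?) cell of Table 2
data = {("M", True): (18, 30), ("M", False): (7, 10),
        ("M*", True): (2, 10), ("M*", False): (9, 30)}

def rate(cells):
    recovered = sum(r for r, t in cells)
    total = sum(t for r, t in cells)
    return recovered / total

# Aggregated (Table 1): the treatment looks better overall...
print(rate([data[("M", True)], data[("M*", True)]]))    # 0.5 (treated)
print(rate([data[("M", False)], data[("M*", False)]]))  # 0.4 (control)

# ...yet disaggregated (Table 2), the control is better for BOTH sexes:
print(rate([data[("M", True)]]), rate([data[("M", False)]]))    # 0.6 vs 0.7
print(rate([data[("M*", True)]]), rate([data[("M*", False)]]))  # 0.2 vs 0.3
}}}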
lifelong, spiraling discovery and learning
Full-Mindedness and Mind-Fullness
On a recent flight, I watched the movie "Sliding Doors" (1998) with Gwyneth Paltrow and John Hannah. I had actually watched it years ago, so it was a deja-vu in the literal sense (albeit not an encounter of the 2nd kind ;-), but wanted to watch it again, since (spoiler alert!):

(a) the idea of parallel life tracks has always been appealing (to me, and to many others as well)
(b) I was sure I missed some points in the plot in the first viewing, since one has to keep the two tracks in mind as the movie switches between them, and
(c) I find it fascinating how a small variation or event in life, as insignificant as it may seem, can have major implications and impacts over time; the movie is all about a case like this.

This last point emphasizes the importance, or at least the potential for a big impact, of what we perceive as serendipity on this "strange and complex process" we call life. 

And so, as it happens (how exactly?), I was reading on this flight (!) the very interesting book [[Complexity - a guided tour|resources/Melanie-Mitchell-Complexity_a-guided-tour-366-pages.pdf]]^^1^^ by Melanie Mitchell, who did her ~PhD under [[Douglas Hofstadter]], and had worked on the [[Copycat|http://en.wikipedia.org/wiki/Copycat_%28software%29]] analogy-making system as part of her studies. In the book, in the chapter covering chaos, she writes:
>The defining idea of chaos is that there are some systems -- //chaotic// systems -- in which even miniscule uncertainties [in conditions or parameters] can result in huge errors in long-term predictions of these [conditions or parameters]. This is known as "sensitive dependence on initial conditions".
Life can definitely be called "chaotic" given the above "defining idea".
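(The standard few-line illustration of this "defining idea" is the logistic map; a quick Python sketch of my own, with arbitrary starting values:)
{{{
# The logistic map x -> r*x*(1-x) is chaotic at r = 4: two almost identical
# initial conditions diverge until their trajectories are unrelated.
r = 4.0
a, b = 0.2, 0.2 + 1e-10          # starting points differing by 1 part in 10^10
for step in range(1, 51):
    a, b = r * a * (1 - a), r * b * (1 - b)
    if step % 10 == 0:
        print(step, abs(a - b))  # the gap grows to order 1 within ~40 steps
}}}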

In the movie, the protagonist (Paltrow) gets fired from her job, rushes to catch a train (the London Tube), and in one track of the storyline, ends up missing it by a fraction of a second due to a little girl obstructing her way, and in the other storyline track, ends up catching the train. The fact that the train's (sliding) doors closed in her face (in one track) vs. her ability to "slide through the sliding doors" (in the second track) makes a huge difference in the protagonist's life moving forward, and the movie makes the point of systems (here, life) having "sensitive dependence on initial conditions" very clear and convincing (and fun to watch).

''So why is all this loopy serendipity?''
For one, the serendipitous path that led to this deja-vu film watching on the plane goes (more or less) like this: 
- I had watched the film years ago, and it stuck with me, mainly because of point (a) above. I've liked sci-fi literature as well as physics for many years, and parallel tracks and universes (including the "extraordinary" possibilities which open up by time travel -- scientific and/or fictional) have interested me for a long time.
- In parallel (ha!), I have been interested in and reading books on cognition and artificial intelligence, and read several books by Douglas Hofstadter (the first one which I read in the 80's was the Pulitzer prize winner //Gödel, Escher, Bach: An Eternal Golden Braid//). 
- That book led me to reading //Fluid Concepts and Creative Analogies: Computer Models of the Fundamental Mechanisms of Thought// by him, where he mentions the [[Copycat|http://en.wikipedia.org/wiki/Copycat_%28software%29]] analogy-making system, and his collaborator (student and mentee) [[Melanie Mitchell|http://en.wikipedia.org/wiki/Melanie_Mitchell]].
- I recently started thinking about teaching a [[Citizen Schools course|http://tinyurl.com/HaggaiLDT]], called //Simplexity//,  about simple rules leading to complex behaviors, and determinism which may lead to chaos (rings a bell?), and started searching for books on complexity and complex systems, and hit on Melanie Mitchell's book [[Complexity - a guided tour|resources/Melanie-Mitchell-Complexity_a-guided-tour-366-pages.pdf]]^^1^^, which I took with me to read on the plane.
- Mitchell's book covers chaos and chaotic systems, and it was only a simple analogy (pun intended) from that to life, its sensitivity to (sometimes seemingly insignificant) initial conditions, and the great depiction of it in the movie Sliding Doors.
- So, adding one and one together or two and two (or more) for that matter: fascination with parallel tracks in life, life as a chaotic system, fascination with cognition, Hofstadter's work in this area, him working with Mitchell, who writes ([[and teaches|http://web.cecs.pdx.edu/~mm/#Courses]]) about complex systems and chaos, and __other things__^^2^^, and voila, I've ended up with her book on the plane, re-watching Sliding Doors during that flight, after watching it many years ago.

And now for the serendipitous loop within a loop (talking about [[strange loops|http://en.wikipedia.org/wiki/I_Am_a_Strange_Loop]]): as is obvious from the [[reference to infinite state machines|About me]], I like state machines and [[Cellular Automata|Cellular Automaton Rule 110]], which fascinated Stephen Wolfram, whose book //A New Kind of Science// is [[reviewed by Melanie Mitchell|resources/Mitchell-Wolfram-new-kind-of-science-review.pdf]]^^3^^, who wrote the book on Complexity, who I've read in preparation for a [[Citizen Schools course|http://tinyurl.com/HaggaiLDT]] (called Simplexity) on simple/deterministic rules (like the ones in Conway's Game of Life) leading to complex/chaotic behavior.
----
^^1^^ retrieved from [[Sorrentino's blog|http://www.waltersorrentino.com.br/wp-content/uploads/2012/02/Melanie-Mitchell-Complexity_a-guided-tour-366-paginas.pdf]]
^^2^^ Mitchell also writes about Conway's Game of Life, and [[reviews Wolfram's New Kind of Science|resources/Mitchell-Wolfram-new-kind-of-science-review.pdf]]^^3^^
^^3^^ retrieved from [[Mitchell's website|http://web.cecs.pdx.edu/~mm/new-kind-of-science-review.pdf]]
In the 1970s [[Alan Kay|https://en.wikipedia.org/wiki/Alan_Kay]] (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]") created Smalltalk and coined the term "Object Oriented." When asked what that means he replied, "Smalltalk programs are just objects." When asked what objects are made of he replied, "objects." When asked again he said "look, it's all objects all the way down. Until you reach turtles."

This is in reference to the "mythical" story which has been circulating for years. Stephen Hawking in one of his books tells it this way: 

A big-name scientist was giving a lecture on astronomy. After the lecture, an elderly lady came up and told the scientist that he had it all wrong. "The world is really a flat plate supported on the back of a giant turtle," she said. The scientist asked, "And what is the turtle standing on?"
To which the lady triumphantly replied: "You're very clever, young man, but it's no use -- it's turtles all the way down."

And here is what Richard Feynman had to say about the [[turtles explanation (among other things)|https://www.brainpickings.org/2013/07/19/richard-feynman-science-morality-poem/]]:
>We have been led [by science findings] to imagine all sorts of things infinitely more marvelous than the imaginings of poets and dreamers of the past. It shows that the imagination of nature is far, far greater than the imagination of man. For instance, how much more remarkable it is for us all to be stuck — half of us upside down — by a mysterious attraction, to a spinning ball that has been swinging in space for billions of years, than to be carried on the back of an elephant supported on a tortoise swimming in a bottomless sea.
A small gem by [[Kai Krause|http://edge.org/memberbio/kai_krause]] on [[Edge|http://edge.org/]]. Also worth checking [[Kai's website|http://www.byteburg.de/]]

There is a deep fascination I have been carrying with me for decades now, ever since earliest childhood: the interplay between simplicity and complexity.

I was unable to express it verbally at the time, but in hindsight it seems clear: it is all about that ultimate question: what is life, and how did this world come into existence?

In many stages and phases I discovered a multitude of ideas that are exactly what is called for here: deep, elegant and beautiful explanations of the principles of nature.

Simplicity is embodied in a reductionist form in the ~YinYang symbol: being black or white.

In other familiar words: To be or not to be.

Those basic elements combined: that is the process spawning diversity, in myriads of forms.

As a youngster I was totally immersed in 'Lego' blocks. There are a handful of basic shapes (I never liked the 'special' ones and clamored instead for a bigger box of basics) and you could put them together in arrangements that become houses, ships, bridges... entire towns that grew up the sides of my little room to the tops of the wardrobes. And I sensed it then: there is something deep about this.

A bit later I got into a mechanical typewriter (what a relief to be able to type clearly; my handwriting had always been horrid, the hand not being able to keep up with the thinking...) and relished the ability to put together words, sentences, paragraphs. Freezing a thought in a material fashion, putting it on paper to recall later. What's more: to let someone else follow your thinking! I sensed: this is a thing of beauty.

Then I took up playing the piano. The embryonic roots of the software designer of later decades probably shuddered at the interface: 88 unlabeled keys! Irregular intervals of black ones interspersed... and almost the exact opposite of today's "we need to learn this in one minute and no, we never ever look at manuals" attitude. It took months to make any sense of it, but despite the frustrations, it was deeply fascinating. String together a few notes with mysterious un-definable skill and out comes... deeply moving emotion?

So the plot thickens: a few Lego blocks, a bunch of lettershapes or a dozen musical notes... and you take that simplicity of utterly lame elements, put them together...and out pops complexity, meaning, beauty.

Later, in the early 70s, I delved into the very first generation of large synthesizers and dealt specifically with complex natural sounds being generated from simple unnatural ingredients and processes. By 1977, now in California, it was computer graphics that became the new frontier, and again: seemingly innocent little pixels combine to make... any image, as in: anything one can imagine. Deep.

In those days I also began playing chess and carom billiards: simple rules, a few pieces, 3 balls... but no game is ever the same. Not even close. The most extreme example of this became another real fascination: the game of GO. Just single moves of black and white stones, on a plain grid of lines, with barely a handful of rules, but a huge variety of patterns emerges. Elegant.

The earliest computing, in the first computer store in the world, Dick Heiser's in Santa Monica, had me try something that I had read about in SciAm, in Martin Gardner's column: Conway's 'Game of Life'. The literal incarnation of the initial premise: Simplicity reduced to that ~YinYang: a cell is On or Off, black or white. But there is one more thing added here now: iteration. With just four rules each cell is said to live or die, and in each cycle the pattern changes, iteratively. From dead dots on paper, and static pixels on phosphor, it sprang to life! Not only patterns, but blinkers, gliders, even glider guns, heck, glider gun cannons! Indeed, it is now seen as a true Turing-complete machine. Artificial Life. Needless to say: very deep.
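(An aside of my own, not Krause's: those rules fit in a dozen lines of Python, here with a glider as the seed pattern:)
{{{
# Conway's Game of Life on an unbounded grid; live cells are a set of (x, y).
# A dead cell with exactly 3 live neighbors is born; a live cell survives
# with 2 or 3 live neighbors; every other cell dies or stays dead.
from collections import Counter

def step(live):
    neighbors = Counter((x + dx, y + dy) for x, y in live
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        if (dx, dy) != (0, 0))
    return {c for c, n in neighbors.items() if n == 3 or (n == 2 and c in live)}

cells = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}   # the glider
for _ in range(4):
    cells = step(cells)
print(sorted(cells))   # the same glider, shifted one cell diagonally
}}}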

Another example in that vein is of course fractals. Half an inch of formula, when iterated, yields worlds of unimaginably intricate shapes and patterns. It was a great circle closing after 20 years for me to re-examine this field, now flying through them as "Frax" on a little iPhone, in realtime and in real awe.
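(Again my aside, not Krause's: the "half an inch of formula" is literally z -> z*z + c, and iterating it over a coarse grid sketches the Mandelbrot set even in plain ASCII:)
{{{
# Points c whose orbit under z -> z*z + c stays bounded (|z| <= 2) belong to
# the Mandelbrot set; print them as '#', escaped points as ' '.
for im in range(12, -13, -2):             # imaginary part: 1.2 down to -1.2
    row = ""
    for re in range(-40, 21):             # real part: -2.0 up to 1.0
        c = complex(re / 20, im / 10)
        z = 0
        for _ in range(50):
            z = z * z + c
            if abs(z) > 2:                # orbit escaped: not in the set
                row += " "
                break
        else:
            row += "#"                    # orbit stayed bounded after 50 steps
    print(row)
}}}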

The entire concept of the computer embodies the principles of simple on/off binary codes, much like ~YinYang, being put together to form still simplistic gates and then flip-flops, counters, all the way to RAM and complex CPU/GPUs and beyond. And now we have a huge matrix computer with billions of elements networked together (namely 'us', including this charming little side corridor called 'Edge'). Just a little over 70 years after Zuse's Z3 we have reached untold complexity, with no sign of slowing down.

Surely the ultimate example of 'simplexity' is the genetic code: four core elements being combined by simple rules to extremely complex effect: the DNA to build archaea, felis, or homo somewhat sapiens.

Someone once wrote on Edge "A great analogy is like...a diagonal frog" which embodies the un-definable art of what constitutes a deep, beautiful or elegant explanation: Finding the perfect example! The lifelong encounters with "trivial ingredients turning to true beauty" recited here are in themselves neither terse mathematical proofs nor eloquently worded elucidations (such as one could quote easily from almost any Nobel laureate's prize-worthy insights).

Instead of the grandeur of 'the big formulas' I felt that the potpourri of AHA! moments over six decades may be just as close to that holy grail of scientific thinking: to put all the puzzle pieces together in such a way that a logical conclusion converges further on... the truth. And I guess one of the pillars of that truth, in my eyes, is the charmingly, disarmingly minuscule insight:

 "So much from so little. Now that explains a lot!"
My first reaction to the //name// of the [[organization/website|https://software-carpentry.org/]] was somewhat negative. On an intuitive level, I think that likening software development to carpentry is probably a good enough description/analogy (and a sad testament to the state of this budding discipline :( ). But I felt (again, we are talking about the initial intuitive response here) that since the aspiration is, I believe, to get out of this sorry state (where one has to write and re-create desired functionality, one character at a time), an aspirational, forward-looking, visionary title/name would be much preferred.

But what I've seen so far on the site is pretty solid, useful, and practical, so maybe the name reflects the focus: factual, practical, hard-nosed effectiveness. Their tagline "TEACHING LAB SKILLS FOR SCIENTIFIC COMPUTING" may point in the same direction I have been advocating: teaching [[Computational Thinking/Literacy (and skills)|A Framework for Computational Thinking, Computational Literacy]].

!!!!From their presentation about their [[Lessons Learned|http://swcarpentry.github.io/slideshows/lessons-learned/index.html]]
* The opening slide is innovative, funny, and deep:
** If you build a man a fire, you'll keep him warm for a night. If you set a man on fire, you'll keep him warm for the rest of his life.
*** Which is not as macabre as it looks at first! It preserves the structure of the original (and similarly constructed) quote about giving a fish vs. teaching to fish. It also plays (I think) on the quote about learners being not vessels to be filled, but wood to be lighted :)
* Lesson #1 - Most researchers think programming is a tax they have to pay to do science.
** "If I wanted to be a computer scientist, I would have picked a different major in undergrad."
* Lesson #4 - The curriculum is full. "What do I drop to make room for more computing: quantum or thermo?"
** But, 5 minutes (of Computation) per lecture ⇒ 4 courses in a degree
** Have to fit in around the curriculum until we achieve critical mass
* Lesson #9 - Most people would rather fail than change.
** Most scientists treat research on teaching and programming like most politicians treat research on climate change.

!!!! Research/Pedagogy Background/Foundation
They base their pedagogy and focus on Mark Guzdial's research (Georgia Tech), as captured on [[his blog|http://computinged.wordpress.com/]]. For example:
* [[Subgoals improve performance|https://computinged.wordpress.com/2012/06/05/instructional-design-principles-improve-learning-about-computing-making-measurable-progress/]]
* [[Practice works best for facts, worked examples for skills|https://computinged.wordpress.com/2012/04/04/practice-is-better-for-learning-facts-worked-examples-are-better-for-learning-skills/]]
* [[Peer instruction beats lecture|https://computinged.wordpress.com/2013/01/15/ucsds-overwhelming-argument-for-peer-instruction-in-cs-classes/]]
* Media-first increases retention
From Pete Goodliffe's book //Becoming a Better Programmer// ([[Software Development is|resources/Goodliffe - Becoming a Better Programmer.docx]]):


''Software Development is ... an Art'' since it is:
* creative
* aesthetic
* mechanical
* team-based

''Software Development is ... a Science'' since it is:
* rigorous
* systematic
* insightful

''Software Development is ... a Sport'' since it involves:
* teamwork
* discipline
* rules

''Software Development is ... Child's Play'' since it involves:
* learning
* simplicity
* having fun

''Software Development is ... a Chore'' since it requires:
* clean up
* work in the background
* maintenance

The "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]" [[Fred Brooks|https://en.wikipedia.org/wiki/Fred_Brooks]] (of [[The Mythical Man Month|https://archive.org/stream/mythicalmanmonth00fred/mythicalmanmonth00fred_djvu.txt]] fame), wrote a thoughtful article after receiving the //first// ACM [[Allen Newell|https://en.wikipedia.org/wiki/Allen_Newell]] Award (named after another CS Luminary/Sage) in 1994, sharing his thoughts about the [[Computer Scientist as a Toolsmith|http://www.cs.unc.edu/~brooks/Toolsmith-CACM.pdf]].

In the article he talks about "The Gift of Subcreation", (which could be interpreted to include tool and app creation, Artificial Life, and other software models and creations) and observes^^1^^:
>Making things has its glories and joys, and they are different from those of the mathematician and those of the scientist.

Brooks quotes a poem by J.R.R. Tolkien about those human, virtual creations ("subcreations"):
>Although now long estranged,
>Man is not wholly lost nor wholly changed,
>Dis-graced he may be, yet is not de-throned,
>and keeps the rags of lordship once he owned:
>Man, Sub-creator, the refracted Light
>through whom is splintered from a single White
>to many hues, and endlessly combined
>in living shapes that move from mind to mind.
>Though all the crannies of the world we filled
>with Elves and Goblins, though we dared to build
>Gods and their houses out of dark and light,
>and sowed the seed of dragons—’twas our right
>(used or misused). That right has not decayed;
>we make still by the law in which we’re made. 


----
^^1^^ See what Brooks says about [[The Software Scientist as a Toolsmith]]
Jeff Dean facts^^1^^ aren’t, well, true. But the fact that someone went to the trouble to make up Chuck Norris-esque exploits about Dean is remarkable. That’s because Jeff Dean is a software engineer (and now the head of Google Brain), and software engineers are not like Chuck Norris. For one thing, they’re not lone rangers—software development is an inherently collaborative enterprise. For another, you have to be somewhat of a computer geek to understand most of the jokes that people tell about Jeff Dean (unlike Chuck's :). 

Nevertheless, on April Fool’s Day 2007, some admiring young Google engineers saw fit to bestow upon Jeff Dean the honor of a website extolling his programming achievements. For instance:
* Compilers don’t warn Jeff Dean. Jeff Dean warns compilers.
* Jeff Dean writes directly in binary. He then writes the source code as documentation for other developers.
* When Jeff Dean has an ergonomic evaluation, it is for the protection of his keyboard.
* Jeff Dean was forced to invent asynchronous ~APIs one day when he optimized a function so that it returned before it was invoked.
* Jeff Dean compiles and runs his code before submitting, but only to check for compiler and CPU bugs.
* Jeff Dean once failed a [[Turing test|https://plato.stanford.edu/entries/turing-test/]] when he correctly identified the 203rd Fibonacci number in less than a second.
* The speed of light in a vacuum used to be about 35 mph. Then Jeff Dean spent a weekend optimizing physics.
* Jeff Dean was born on December 31, 1969 at 11:48 PM. It took him twelve minutes to implement his first time counter. (referring to [[Unix Epoch Time|https://en.wikipedia.org/wiki/Unix_time]]: 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970)
* [and also:] 
** Jeff Dean's watch displays seconds since January 1st, 1970. He is never late.
* Jeff Dean eschews both Emacs and VI. He types his code into [[zcat|https://linux.die.net/man/1/zcat]], because it's faster that way [due to the ~Lempel-Ziv compression].
* When Jeff Dean sends an Ethernet frame there are no collisions because the competing frames retreat back up into the buffer memory on their source NIC [Network Interface Card (hardware)].
* Unsatisfied with constant time, Jeff Dean created the world's first O(1/n) algorithm.
* Once, in early 2002, when the search back-ends went down, Jeff Dean answered user queries manually for two hours. Result quality improved markedly during this time.
* The rate at which Jeff Dean produces code jumped by a factor of 40 in late 2000 when he upgraded his keyboard to USB2.0.
* Jeff Dean wrote an O(n^^2^^) algorithm once. It was for the [[Traveling Salesman Problem|http://www.csd.uoc.gr/~hy583/papers/ch11.pdf]]. [The [[brute force solution can be of O(n!) complexity!!!|http://poincare101.blogspot.co.il/2012/04/travelling-salesman-problem.html]], and [[dynamic programming can yield|https://www.explainxkcd.com/wiki/index.php/399:_Travelling_Salesman_Problem]] O(n^^2^^ * 2^^n^^). The implication is that Dean implemented an "inefficient" O(n^^2^^) algorithm only once in his life ...]
* Jeff Dean once implemented a web server in a single printf() call. Other engineers added thousands of lines of explanatory comments but still don't understand exactly how it works. Today that program is the front-end to Google Search.
* Jeff Dean can beat you at connect four. In three moves.
* When your code has undefined behavior, you get a segmentation fault and corrupted data. When Jeff Dean's code has undefined behavior, a unicorn rides in on a rainbow and gives everybody free ice cream.
* Jeff Dean is still waiting for mathematicians to discover the joke he hid in the digits of PI.
* Jeff Dean's keyboard has two keys: 1 and 0.
* When Alexander Graham Bell invented the telephone, he saw a missed call from Jeff Dean.
* Jeff starts his programming sessions with 'cat > /dev/mem' [ see [[cat|http://www.linfo.org/cat.html]] and [[/dev/mem|http://superuser.com/questions/71389/what-is-dev-mem]] ]
* One day Jeff Dean grabbed his ~Etch-a-Sketch instead of his laptop on his way out the door. On his way back home to get his real laptop, he programmed the ~Etch-a-Sketch to play Tetris.
* Jeff Dean proved that P=NP when he solved all NP problems in polynomial time on a whiteboard.
* Jeff Dean's PIN is the last 4 digits of pi.
* When Jeff gives a seminar at Stanford, it's so crowded Don Knuth has to sit on the floor. (True)
* Jeff Dean got promoted to level 11 in a system where max level is 10. (True)
* Jeff Dean's resume lists the things he hasn't done; it's shorter that way.
* To Jeff Dean, "NP" means "No Problemo". [ NP = [[nondeterministic, polynomial time|http://www.dictionary.com/browse/nondeterministic-polynomial-time]] ]
* You don't explain your work to Jeff Dean. Jeff Dean explains your work to you.
* Jeff Dean's resume has so many accomplishments, it has a table of contents [and an index].
* Jeff Dean doesn't exist, he's actually an advanced AI created by Jeff Dean.
* Jeff Dean's [[IDE|https://en.wikipedia.org/wiki/Integrated_development_environment]] doesn't do code analysis, it does code appreciation.
* Jeff Dean's keyboard doesn't have a Ctrl key because Jeff Dean is always in control.
* When Jeff Dean says "Hello World", the world says "Hello Jeff".
* Jeff Dean can get 1s out of [[/dev/zero|https://en.wikipedia.org/wiki//dev/zero]].
* Jeff Dean simply walks into [[Mordor.|http://www.urbandictionary.com/define.php?term=mordor&defid=2921558]]
* When your code is killed by SIGJEFF, it never runs again.
* Jeff Dean's calendar goes straight from March 31st to April 2nd; no one fools Jeff Dean.
* Jeff Dean never has the wrong number; you have the wrong phone.
* Errors treat Jeff Dean as a warning.
* Jeff's code is so fast the assembly code needs three HALT opcodes to stop it.
* Emacs' preferred editor is Jeff Dean.
* Google: it's basically a Jeff Dean side project.
* Jeff Dean has to unoptimize his code so that reviewers believe it was written by a human.
* Jeff Dean doesn't need speakers or headphones. He just types "cat *.mp3", glances at the screen, and his brain decodes the music in the background while he works.
* Knuth mailed a copy of TAOCP to Google. Jeff Dean autographed it and mailed it back. [ [[TAOCP|https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming]] was actually written by [[Donald Knuth|https://en.wikipedia.org/wiki/Donald_Knuth]] ]

----
^^1^^ Found mainly on [[Google+|https://plus.google.com/+KentonVarda/posts/TSDhe5CvaFe]], [[Slate|http://www.slate.com/articles/technology/doers/2013/01/jeff_dean_facts_how_a_google_programmer_became_the_chuck_norris_of_the_internet.html]], and [[Quora|https://www.quora.com/What-are-all-the-Jeff-Dean-facts]]
SOMETIMES

Sometimes
if you move carefully
through the forest

breathing
like the ones
in the old stories

who could cross
a shimmering bed of dry leaves
without a sound, 

you come
to a place
whose only task

is to trouble you
with tiny
but frightening requests

conceived out of nowhere
but in this place
beginning to lead everywhere.

Requests to stop what
you are doing right now,
and

to stop what you
are becoming
while you do it,

questions
that can make
or unmake
a life,

questions
that have patiently
waited for you,

questions
that have no right
to go away.

: -- from Whyte's site at [[Everything is waiting for you|http://www.davidwhyte.com/everything-is-waiting-for-you/]]
Garry Kasparov, after playing (and losing) chess against IBM's Deep Blue supercomputer, said: "Sometimes quantity becomes quality."

[img[T-Shirt|resources/chess_game.jpeg][resources/chess_game.jpeg]]
"I remember when you could only lose a chess game to a supercomputer." - New Yorker Cartoon, by: Avi Steinberg
Pleasure is the state of being
brought about by what you
learn.
Learning is the process of
entering into the experience of this
kind of pleasure.
No pleasure, no learning.
No learning, no pleasure. 
On interpretation, deduction, implication, inference, innuendo, and other mental/emotional information processing activities:

In [[The Most Human Human - by Brian Christian]], Brian Christian quotes [[Douglas Hofstadter]] (from Hofstadter's book "Gödel, Escher, Bach: an Eternal Golden Braid"):
>[We seem to have the notion that] isomorphisms and decoding mechanisms (i.e., information-revealers) simply reveal information which is intrinsically inside the structures, waiting to be "pulled out".
>This leads to the idea that for each structure, there are certain pieces of information which can be pulled out of it, while there are other pieces of information which cannot be pulled out of it. But what does this phrase "pull out" really mean? How hard are you allowed to pull? There are cases where, by investing sufficient effort, you can pull very recondite [obscure, complex] pieces of information out of certain structures. 
>In fact, the pulling-out may involve such complicated operations that it makes you feel you are putting more information in than you are pulling out. 
This passage from [[the book|https://publicism.info/philosophy/human/10.html]], it seems to me, points to the sometimes blurry (and, if crossed, dangerous) line between careful, cautious, fruitful, and creative knowledge extraction and learning, and fanciful, undisciplined, "innovative", vacuous "creations of the mind".
!!!Zen and "Stepping Out"
[>img[Fibonacci Curve|./resources/Fibonacci_spiral_1.jpg][./resources/Fibonacci_spiral.jpg]]
>A Zen person is always trying to understand more deeply what he is, by stepping more and more out of what he sees himself to be, by breaking every rule and convention which he perceives himself to be chained by - needless to say, including those of Zen itself. Somewhere along this elusive path may come enlightenment. In any case (as I see it), the hope is that by gradually deepening one's self awareness, by gradually widening the scope of "the system", one will at the end come to a feeling of being at one with the entire universe.

 - Douglas Hofstadter in [[Gödel, Escher, Bach|http://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach]], p. 479

In a well-researched article titled [[Instructional strategies and tactics for the design of introductory computer programming courses in high school|http://doc.utwente.nl/58656/1/Merrienboer87instructional.pdf]]^^1^^, Van Merrienboer and Krammer introduce different strategies and tactics for teaching beginners to program.

!!!The authors identify 3 common strategies or approaches to teaching programming:
* The Expert approach emphasizes both algorithm and program design, in a systematic top-down fashion. For this reason, students are offered problem specifications during the course that are characterized as non-trivial design problems.
* The Spiral approach emphasizes stepwise, incremental learning. Problem specifications that are presented to students during the course gradually become more complex in both the coding and the design aspects that they require. Consequently, in the beginning of the course students receive more or less trivial problems that emphasize syntactic and lower-level semantic knowledge.
* The Reading approach emphasizes program comprehension, modification, and amplification (and on a related note/skill, see [[A Helpful Guide to Reading Better - Farnam Street]]). For this reason, students are confronted with non-trivial design problems from the beginning of the course. However, these problems are presented in combination with their complete or partial solutions, in the form of well-designed, well-structured and well-documented programs. The students' tasks gradually become more complex during the course, changing from using and analyzing programs, through modifying and extending programs, to independently designing and coding programs.

!!!The authors offer several tactics for teaching, with the goals and circumstances in mind:
* ''Concrete Computer Model'' - Students should be introduced and exposed to a concrete computer model (a-la "glass box approach"). They cite research that shows that this improves student program comprehension and generation.
** They make a distinction between a "black box approach" and a "glass box approach" in elementary programming. In the black box approach, students have no idea of what goes on inside the computer because they lack an adequate model. In the glass box approach, students do have such an idea because the instruction includes a concrete but simplified computer model. This model makes it possible to emphasize a "notional machine" on both a general level, such as in teaching the relationship between the terminal and the computer, and a specific level, such as in teaching assignment statements.
* ''Design Diagrams/Plans/Schemas'' - Explicitly present design diagrams or schemas for solving problems and top-down decomposition of problems.
** Instructional materials may support such top-down processing by explicitly presenting a design diagram: A flow-chart or structured diagram prescribing in detail the actions and methods that ensure a systematic and effective design process. Equivalent design diagrams for solving elementary science problems are sometimes referred to as SAP-charts ("Systematic Approach to Problem-solving"). The presentation of a design diagram clarifies the complementary processes of successive refinement and top-down program design and facilitates the development of a general design schema.
* ''Worked-out Examples'' - In order to help students transition from learning declarative knowledge to practicing the desired procedural behavior, students should use concrete examples of problem solutions - related to the problem at hand - that have the form of concrete computer programs. These worked-out examples function as analogies, which students use as blue-prints or concrete schemata to map their new solutions. Thus, analogy is used to bridge the gap between the current declarative knowledge and the desired programming behavior. After students have gained more experience, their need for worked-out examples disappears, as a result of knowledge compilation. 
* ''Annotated Examples'' - whereas the presentation of worked-out examples is important in its own right, the examples should be annotated with information about what they are supposed to illustrate. Annotated examples bear resemblance to programming plans: they both offer templates of code instead of unorganized factual information, and they both stress the critical features in this template. But, whereas programming plans primarily serve to present new information concerning a template of programming code and its relationship with specific programming problems, annotated, worked-out examples serve as an analogy to support knowledge compilation. In fact, we think that __it is desirable to further annotate worked-out examples by explicitly referring to the programming plans they use__.
** Therefore, the recommendation is: present concrete, annotated, worked-out examples in the form of concrete programs for well-described programming problems that are related to the problems at hand.
* ''Task Variation'' - there must be enough task variation in practice to develop a broad procedural knowledge base, which underlies flexibility in programming behavior on a high performance level.
** Some variation in elementary programming may be offered by (a) the assignment of different tasks, such as using the editor, comprehending programs, designing algorithms, generating programs, debugging programs and so forth, and (b) the presentation of a broad range of both programming problems that have different underlying solutions in program generation and programs that are the solutions for different programming problems in program comprehension. 
** offer variation in the different skills involved in computer programming and present a wide range of programming problems and programs. And, for worked-out examples, it is not only important to offer students some variation in problems and programs but it is also important to tell them what the critical features in these different problems and programs are. 
* The authors have a cautionary note on top-down teaching strategies:
** top-down design techniques may minimize processing load for expert programmers but not for novices. Strictly speaking, top-down programming is possible if students have at each step available an appropriate set of productions as well as the necessary declarative knowledge. This only occurs if both the problem is of a familiar type and the student has experience with the programming language. When top-down programming is possible, it will minimize processing load; however, when it is not possible - as will often be the case for novices - it cannot prevent processing overload. Consequently, top-down programming in introductory programming courses may be desirable, but it is often not possible because the necessary knowledge is not available.

----
^^1^^ [[Local copy|resources/Merrienboer - CS instructional tactics.pdf]]
Unlike [[Wislawa Szymborska's angle on the statistics of life|A Word On Statistics - Wislawa Szymborska]], [[David Eagleman|http://eagleman.com/]] has a book called //Sum: Forty Tales from the Afterlives// (see [[excerpt from a story called ''Metamorphosis''|http://eagleman.com/sum/excerpt]]), where he spins an afterlife scenario which highlights the preciousness of life using a different type of "statistics" (or maybe "probability"). It is intriguing/thought-provoking (sort of lingering memories, waiting room, heavenly alternative).

A nice, short (4 min.) [[clip by Studiocanoe|https://vimeo.com/144047596]] animates one Eagleman story (transcript below).

''Sum''
>In the afterlife you relive all your experiences, but this time with the events reshuffled into a new order: all the moments that share a quality are grouped together.
>
>You spend two months driving the street in front of your house, seven months having sex. You sleep for thirty years without opening your eyes. For five months straight you flip through magazines while sitting on a toilet.
>
>You take all your pain at once, all twenty-seven intense hours of it. Bones break, cars crash, skin is cut, babies are born. Once you make it through, it's agony-free for the rest of your afterlife.
>
>But that doesn't mean it's always pleasant. You spend six days clipping your nails. Fifteen months looking for lost items. Eighteen months waiting in line. Two years of boredom: staring out a bus window, sitting in an airport terminal. One year reading books. Your eyes hurt, and you itch, because you can't take a shower until it's your time to take your marathon two-hundred-day shower. Two weeks wondering what happens when you die.
>One minute realizing your body is falling. Seventy-seven hours of confusion. One hour realizing you've forgotten someone's name. Three weeks realizing you are wrong. 
>Two days lying. Six weeks waiting for a green light. Seven hours vomiting. 
>
>Fourteen minutes experiencing pure joy. 
>
>Three months doing laundry. Fifteen hours writing your signature. Two days tying shoelaces. Sixty-seven days of heartbreak. Five weeks driving lost. Three days calculating restaurant tips. Fifty-one days deciding what to wear. Nine days pretending you know what is being talked about.
>Two weeks counting money. Eighteen days staring into the refrigerator. Thirty-four days longing. Six months watching commercials. Four weeks sitting in thought, wondering if there is something better you could be doing with your time. Three years swallowing food. Five days working buttons and zippers.
>
>Four minutes wondering what your life would be like if you reshuffled the order of events. 
>In this part of the afterlife, you imagine something analogous to your Earthly life, and the thought is blissful: a life where episodes are split into tiny swallowable pieces, where moments do not endure, where one experiences the joy of jumping from one event to the next like a child hopping from spot to spot on the burning sand.


!!!!Here is a brief description of some other short stories in his book:
''Descent of Species'' - after you die, you can choose to live as anything or anyone you like, e.g., a horse leading a simple, enjoyable life in a pasture (__but__ with no way back up the irreversible descent, since as a horse you won't have a clue, when __you__ die, what the other options for "the next round" are).
:: - reminds me of the [[missed opportunity described by Terry Pratchett.|Wisdom and questions (or questioning wisdom; or 'careful what you are asking for')]]

''Spirals'' - meeting our Creator(s), who turn out to be somewhat dim-witted. They created us as their machines to try to figure out the Big Questions of Life. But we became too advanced for them to understand what we are saying: that we are looking for answers to the same questions. Are we on the same trajectory with the machines we are building?

''Death Switch'' - personal software which starts as an auto-reply email announcing our death (and revealing things like passwords and bank account numbers to our heirs), but evolves into its own entity representing the individual, exchanging experiences with other such entities, as the entire human race slowly dies off and leaves behind a virtual culture (basically kvetching like in an elderly home, or in Heaven...).

In a blog post titled [[Summary of the Prisoner’s Dilemma|http://reasonandmeaning.com/2015/05/02/game-theory-and-the-prisoners-dilemma-in-two-pages/]] by [[John G. Messerly|http://reasonandmeaning.com/brief-bio/]] he writes:
>The prisoner’s dilemma is one of the most widely debated situations in game theory. The story has implications for a variety of human interactive situations. A prisoner’s dilemma is an interactive situation in which //it is better for all to cooperate rather than for no one to do so, yet it is best for each not to cooperate, regardless of what the others do//.

Messerly summarizes the classic scenario/story:
>In the classic story, two prisoners have committed a serious crime but all of the evidence necessary to convict them is not admissible in court. Both prisoners are held separately and are unable to communicate. The prisoners are called separately by the authorities and each offered the same proposition. Confess and if your partner does not, you will be convicted of a lesser crime and serve one year in jail while the unrepentant prisoner will be convicted of a more serious crime and serve ten years. If you do not confess and your partner does, then it is you who will be convicted of the more serious crime and your partner of the lesser crime. Should neither of you confess the penalty will be two years for each of you, but should both of you confess the penalty will be five years. In the following matrix, you are the row chooser and your partner the column chooser. The first number in each parenthesis represents the “payoff” for you in years in prison, the second number your partner’s years. Let us assume each player prefers the least number of years in prison possible.

|!|!Partner confesses|!Partner doesn't confess|
|!You confess|(5, 5)|(1, 10)|
|!You don't confess|(10, 1)|(2, 2)|
>So you reason as follows: If your partner confesses, you had better confess because if you don’t you will get 10 years rather than 5. If your partner doesn’t confess, again you should confess because you will only get 1 year rather than 2 for not confessing. So no matter what your partner does, you ought to confess. The reasoning is the same for your partner. The problem is that when both confess the outcome is worse for both than if neither confessed. You both could have done better, and neither of you worse, if you had not confessed! You might have made an agreement not to confess but this would not solve the problem. The reason is this: although agreeing not to confess is rational, compliance is surely not rational!

and concludes:
>The prisoner’s dilemma describes the situation that humans found themselves in in Hobbes’ state of nature. If the prisoners cooperate, they both do better; if they do not cooperate, they both do worse. But both have a good reason not to cooperate; they are not sure the other will! We can only escape this dilemma, Hobbes maintained, by installing a coercive power that makes us comply with our agreements (contracts). Others, like the contemporary philosopher David Gauthier, argue for the rationality of voluntary non-coerced cooperation and compliance with agreements given the costs to each of us of enforcement agencies. Gauthier advocates that we accept “morals by agreement.”

[[Modeling and simulating various scenarios of the Prisoner's Dilemma|Prospects of Modeling]] can provide useful insights into aspects such as human nature, collaboration vs. competition, morality, mechanisms of government, and more.
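As a small taste of such modeling, here is a minimal sketch (my own illustration, not Messerly's) of an iterated Prisoner's Dilemma in Python, using the years-in-prison payoffs from the story above, where lower totals are better:
{{{
# Iterated Prisoner's Dilemma with the payoffs from the classic story.
# Moves: 'C' = stay silent (cooperate), 'D' = confess (defect).
# Payoffs are years in prison, so lower totals are better.
YEARS = {
    ('C', 'C'): (2, 2),    # neither confesses
    ('C', 'D'): (10, 1),   # you stay silent, partner confesses
    ('D', 'C'): (1, 10),
    ('D', 'D'): (5, 5),    # both confess
}

def always_defect(my_history, their_history):
    return 'D'

def tit_for_tat(my_history, their_history):
    # Cooperate first, then mirror the partner's previous move.
    return their_history[-1] if their_history else 'C'

def play(strategy_a, strategy_b, rounds=20):
    history_a, history_b = [], []
    total_a = total_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        years_a, years_b = YEARS[(move_a, move_b)]
        total_a += years_a
        total_b += years_b
        history_a.append(move_a)
        history_b.append(move_b)
    return total_a, total_b

print(play(always_defect, tit_for_tat))  # (96, 105): defector "wins", both do badly
print(play(tit_for_tat, tit_for_tat))    # (40, 40): mutual cooperation does best
}}}
Even this tiny model shows Hobbes' point: the lone defector comes out ahead of its victim, yet two cooperators end up far better off than any pair containing a defector.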

[[Messerly also nicely analyzes ethical behavior|On ethical behavior]] in light of the Prisoner's Dilemma.
In his book When Things Start to Think (see [[chapters online|http://www.kurzweilai.net/neil-gershenfeld]]), Neil Gershenfeld talks about some of the best practices they have at the Media Lab at MIT to support learning and human performance.

On the motivation for getting into Human Performance Support:
>If we look around us now, the single most common reaction to computers was entirely missed by any of the historical visions: irritation. Computers taking over the world is not a pressing concern for most people. They're more worried about figuring out where the file they were editing has gone to, why their computer won't turn on, when a Web page will load, whether the battery will run out before they finish working, what number to call to find a live person to talk to for tech support.
>The irritation can be more than petty. A 1997 wire story reported:
>>ISSAQUAH, Wash. (AP) A 43-year-old man was coaxed out of his home by police after he pulled a gun on his personal computer and shot it several times, apparently in frustration.
>Apparently? He shot it four times through the hard disk, once through the monitor. He was taken away for mental evaluation; they should have instead checked the computer for irrational and antisocial behavior.


On augmenting physical reality and blending into it, creating a "smarter environment" (or is it a "smarter performer"?):
>Instead of struggling to create a computer world that can replace our physical world, there's an alternative: augment it. Embrace the means of interaction that we've spent eons perfecting as a species, and enhance them with digital content.
>You can think of this as a kind of digital shadow. Right now objects live either in the physical world or as icons on a computer screen. User interface designers still debate whether icons that appear to be three-dimensional are better than ones that look two-dimensional. Instead, the icons can really become three-dimensional; physical objects can have logical behavior associated with them. 
>A business card should contain an address, but also summon a Web page if placed near a Web browser. A pen should write in normal ink, but also remember what it writes so that the information can be recalled later in a computer, and it should serve as a stylus to control that computer. A house key can also serve as a cryptographic key. 
>Each of these things has a useful physical function as well as a digital one.

>Taken together, ambient displays, tagged objects, and remote sensing of people have a simple interpretation: the computer as a distinguishable object disappears. Instead of a fixed display, keyboard, and mouse, the things around us become the means we use to interact with electronic information as well as the physical world. Today's battles between competing computer operating systems and hardware platforms will literally vanish into the woodwork as the diversity of the physical world makes control of the desktop less relevant.

>A window is actually an apt metaphor for how we use computers now. It is a barrier between what is inside and what is outside. While that can be useful at times (such as keeping bugs where they belong), it's confining to stay behind it. Windows also open to let fresh air in and let people out.

>All along the coming interface paradigm has been apparent. The mistake was to assume that a computer interface happens between a person sitting at a desk and a computer sitting on the desk. We didn't just miss the forest for the trees, we missed the earth and the sky and everything else. The world is the next interface.

>In retrospect it looks like the rapid growth of the World Wide Web may have been just the trigger charge that is now setting off the real explosion, as things start to use the Net so that people don't need to. As information technology grows out of its awkward adolescence, bringing more capabilities closer to people is proving to be the path to make it less obtrusive and more useful. (ch. 14 - Things That Think)
As the inimitable [[Terry Pratchett|http://wiki.lspace.org/mediawiki/Biography]] (Sir Terry, mind you :) [[puts it|http://discworld.wikia.com/wiki/The_Fifth_Elephant]], in this conversation between [[Sam Vimes|http://discworld.wikia.com/wiki/Samuel_Vimes]] and [[DEATH|http://www.chrisjoneswriting.com/death.html]]:

GOOD MORNING.
Vimes blinked. A tall dark robed figure was now sitting in the boat.
'Are you Death?'
IT'S THE SCYTHE, ISN'T IT? PEOPLE ALWAYS NOTICE THE SCYTHE.
‘I’m going to die?’
POSSIBLY.
‘Possibly? You turn up when people are //possibly// going to die?’
OH, YES.  IT’S QUITE THE NEW THING.  IT’S BECAUSE OF THE UNCERTAINTY PRINCIPLE.
'What’s that?’
I’M NOT SURE.
‘That’s very helpful.’
I THINK IT MEANS PEOPLE MAY OR MAY NOT DIE.  I HAVE TO SAY IT’S PLAYING [[HOB|https://www.merriam-webster.com/dictionary/hob]] WITH MY SCHEDULE, BUT I TRY TO KEEP UP WITH MODERN THOUGHT.
Here are some things that I found working in my CS classes. (In many cases, I had stumbled upon them, and got confirmation/validation/encouragement later... :)

* Giving students "gift code" - Working with students 1:1 for a few minutes at a time, or as needed, asking them what they want to do, and giving them some concrete ideas (blocks in Scratch work well :) to get going. In a few minutes they get an important nugget of knowledge, a concept, a trick, etc. and totally "own it" in the context of their interests and creations - [[CSTA blog on Gift Code|http://blog.csta.acm.org/2015/02/03/teaching-and-learning-with-gift-code/]]

* One of the [["Big Ideas" in CS|The Big Ideas and Computational Practices of Computer Science*]] is that "Computing is a creative human activity". I deeply believe in it, and weave/embed opportunities for creativity throughout the curricula I have designed (i.e., a unit on creativity is  as ludicrous as the demand to "act spontaneously now!").
** Here are some tips I have been following (and later discovered at the [[CSTA blog|http://blog.csta.acm.org/2015/09/08/cs-principles-and-creativity/]]) that I found very common-sensical:
*** Let students know that there are usually multiple paths that lead to understanding.
*** Arrange student collaborations which provide meaningful (to them) real-world, problem-solving opportunities.
*** Provide lots of project and performance choices which employ a variety of “intelligences” whenever feasible.
*** Encourage them to look for and experiment with new things and ideas.
*** Encourage questioning. (since, [[questions are like lanterns|John O’Donohue - questions]])
*** Be sure your grading does not penalize “less than successful” creativity. Students will not feel free to experiment if their grade hinges on some abstract measure of success. The true reward for being creative is purely intrinsic.
*** Encourage them to view mistakes as opportunities for learning rather than failures.
*** Enable students to exchange, value, and build upon the ideas of others. Share interesting examples of technological creativity that you run across in the media.
*** Make time for informal interactions between students.
*** Offer a safe environment which encourages risk-taking. Avoid a competitive and extrinsically rewarding classroom, by providing a friendly, secure, and comfortable environment.

* How to encourage girls to do more Computing/CS (from the [[CSTA Blog|http://blog.csta.acm.org/2015/10/20/disrupting-the-gender-gap-in-computer-science/]]):
** Generally, it's about the message you send to the female students through the way the classroom looks, the assignments given, and the way they are asked to complete them:
*** Keep the classroom decor neutral. Or maybe add some posters of women in Computer Science to go alongside your Mark Zuckerberg and Bill Gates posters. Think Grace Hopper, Jean Bartik, Maria Klawe, Karlie Kloss, or Marissa Mayer. Recent research shows that when the classroom is neutral, girls are three times more likely to show an interest in Computer Science than when the CS classroom is stereotypically geeky. It makes a difference.
*** Think about your assignments. Are they the same assignments you did in high school? Unless you were in high school a few years ago, it might be time to update them. Connect your assignments to the real world. Many girls particularly like to see practical applications of the work they’re doing in class. Girls, in particular, also like to know that the work they’re doing could potentially help someone or help solve a problem that plagues the world.
*** Also think about how you have your students work on assignments. Does everyone complete all the assignments individually? Consider using pair programming, peer instruction, and group work. All of these methods not only make the work potentially more appealing to girls, who appreciate the social aspects of work, but they also help all students retain Computer Science concepts. They’re very effective pedagogical strategies.
*** Finally, encourage your students, especially your female students, along the way. When they make a mistake in class, be supportive, help them learn from it. If a girl seems to like CS, whether or not she's good at it, encourage her to take another course or enroll in a summer program, or pursue CS at the next level, whether that's high school, college or graduate school. [[Recent research from Google|https://docs.google.com/file/d/0B-E2rcvhnlQ_a1Q4VUxWQ2dtTHM/edit?pli=1]] shows that encouragement is a key factor in encouraging women to continue their study of CS.
In an [[interesting (and short) interview/Q&A with Sanjoy Mahajan|http://mitpress.mit.edu/blog/pi-day-q]], author of [[Street-Fighting Mathematics|http://mitpress.mit.edu/sites/default/files/titles/content/9780262514293_Creative_Commons_Edition.pdf]], in honor of the 2013 Pi Day, Sanjoy brought up a few key points about learning and teaching math (here's a [[local copy|resources/pi-day-q.html]] of the full Q&A).

* In response to the question ''Has the proliferation of technology affected our ability to think for ourselves?'' (in the context of technology that can do math for us such as cell phones, tablets, computers, and calculators)
** Sanjoy responded that "Knowing that a computer can do a calculation, and even knowing how to ask a computer to solve a problem, is psychologically very different from being able to solve the problem oneself, even or especially approximately."
*** which brings up an interesting psychological aspect, not frequently raised, namely the sense of mastery and control over our environment, knowledge, learning path, etc. It's a "twist" on Sir Francis Bacon's saying that "knowledge is power". As [[I had said elsewhere|http://ldtprojects.stanford.edu/~hmark/index_stanford.html]], I think that he is certainly right, //but//, with the explosion of data (and knowledge), ''access to knowledge'' is also an important part of the power (to be learned and mastered). The point Sanjoy is making is that if you become dependent on the search/access and lose the intuition/knowledge to judge whether something is true and whether it makes sense (critical, analytical thinking skills), then you (and your knowledge and skills) are on very shaky ground.

* On a question about how math should be taught (compared to how it is taught), Sanjoy responded:
>Mathematics is a way of expressing relationships in the world: It is a language.  And we learn languages best through speaking meaningful sentences, long before we master all the grammar points in the sentence.
>Thus, students have to use mathematics to model and draw conclusions about the world, rather than memorizing procedures, such as long division, best left to the pocket calculators.  As an example, teach calculus through physics.  Physics is why and how calculus was invented: to understand the motions in the heavens -- not to compute the rate at which the water level rises when filling an upside down cone.
** which is similar to points others like [[Lockhart|resources/LockhartsLament.pdf]], [[diSessa|resources/diSessa%20-%20Changing%20Minds%20-%20Chapter1.pdf]], and [[Polya|https://archive.org/download/Induction_And_Analogy_In_Mathematics_1_/Induction_And_Analogy_In_Mathematics_1_.pdf]] are making: regarding math as a language, and mastering math by doing it the way real mathematicians do it.
** and this is [[the point the mathematician Edward Frenkel makes as well|Edward Frenkel on teaching math]].

* On the question: What do you mean by "too much mathematical rigor teaches rigor mortis"?, Sanjoy has this to say:
>Mathematics, especially in the last 150 years, has focused ever more on never making a mistake.  Perhaps the Weierstrass function started the downhill slide.  Discovered in 1872, it is a function continuous everywhere (has no jumps anywhere) yet differentiable nowhere (is spiky everywhere).  Its discovery contradicted the intuitions about calculus and functions upon which mathematicians and physicists had depended. The worry was that it was not the only counterexample.  And mathematics became ever more worried about not getting caught by a counterexample. Unfortunately, the surest way to never make a mistake is to never do anything.  The fear of mistakes is a recipe for paralysis -- for [[rigor mortis|Formalism First = Rigor Mortis.  Intuition First = Rigor's Mortise]].
* which is similar to the point I was making about the need to [[loosen up a bit|A case for "loosening up a bit"]], very funnily (and pointedly) depicted by [[Ursus Wehrli|http://www.kunstaufraeumen.ch/en]]. I think this is one case where mathematicians can learn from scientists: one case/example (Weierstrass) need not be a cause to "push the abort button" on an entire way of thinking/doing/teaching - even though these kinds of exceptions/findings are usually (and justifiably, in most cases) a "death knell" to mathematical proofs.
This semester, in the first week of teaching "Exploring Computer Science" in High School (using Scratch from MIT), we were learning to program the sprites to draw different shapes on the screen.
I showed the students how to draw the [[first few generations of the Koch Curve|http://fractalfoundation.org/resources/fractivities/koch-curve/]], and as a motivator and demonstration of the "self-similarity" of fractals in general, and this fractal in particular, I showed them an animation of [[diving into an "endless" Koch Snowflake|https://www.youtube.com/watch?v=PKbwrzkupaU]]:
 
<html><iframe width="560" height="315" src="https://www.youtube.com/embed/PKbwrzkupaU" frameborder="0" allowfullscreen></iframe></html>

(and another, more "mind-blowing" [[dive-in/odyssey into another famous fractal - The Mandelbrot Fractal|https://www.youtube.com/watch?v=PD2XgQOyCCk]])
<html><iframe width="560" height="315" src="https://www.youtube.com/embed/PD2XgQOyCCk" frameborder="0" allowfullscreen></iframe></html>
The students got the idea of self-similarity, and did draw 3-4 generations of the Koch Curve; but without recursion (which we had not learned yet, since it was the first week of an introductory CS class), drawing more generations gets pretty tedious. Again, as a motivator and a "sight of things to come", I showed them a recursive implementation in Scratch, and was pleasantly surprised by the gasps and groans of some of the students, pleading with me to teach them variables and recursion.

<html><iframe allowtransparency="true" width="485" height="402" src="http://scratch.mit.edu/projects/embed/95251322/?autostart=false" frameborder="0" allowfullscreen></iframe></html>
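For readers without Scratch at hand, here is roughly the same recursive idea as a minimal Python turtle sketch (my own translation; the embedded Scratch project above is the one the students saw):
{{{
# Recursive Koch curve: each segment is replaced by four segments
# one third the length, with a 60-degree "bump" in the middle.
import turtle

def koch(t, length, depth):
    if depth == 0:
        t.forward(length)
        return
    for angle in (60, -120, 60, 0):   # left 60, right 120, left 60, straight
        koch(t, length / 3, depth - 1)
        t.left(angle)

t = turtle.Turtle()
t.speed(0)
koch(t, 300, 3)   # draw generation 3 of the Koch curve
turtle.done()
}}}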

And I found myself wondering: when was the last time a group of students //pleaded// with a teacher to teach them some advanced math concepts like variables and recursion? Isn't Computer Science full of potential? :)

Albert Einstein summarized it well:
>"Teaching should be such that what is offered is perceived as a valuable gift and not as a hard duty."
I'll start with "an aside", to get it out of the way (pun intended ;-) :
I stumbled across [[the website of the game designer Jonathan Blow|http://number-none.com/blow/]] (of //Braid// and //The Witness// games fame), as I was searching for insights into motivation and engagement in the context of learning/education, which in good games are no problem, but seem to be a big obstacle in studying/school.
What caught my eye on Blow's site was a link to a short (13MB, 4 minutes) [[Tai Chi video clip|http://www.youtube.com/watch?v=TBvF6r6DOvc]], that according to him "rocks his socks". He is right; it's a delight to watch!
Blow also refers to another game designer, [[Raph Koster|http://www.raphkoster.com/]], who [[I also wrote about|Theory of fun - Raph Koster]].

Another link that caught my attention was to another game designer, Brian Moriarty (of //Loom// and //Beyond Zork// game fame), and from [[Moriarty's website|http://ludix.com/moriarty/index.html]], a description of a [[game design programming language he calls Perlenspiel|http://www.perlenspiel.org/]], after Hermann Hesse's book [[The Glass Bead Game (German: Das Glasperlenspiel)|http://en.wikipedia.org/wiki/The_Glass_Bead_Game]]. In an [[insightful article about some design considerations|http://ludix.com/moriarty/lehr.html]] for the Perlenspiel language, Moriarty has this nerdy/programmer textual-visual pun:
[img[This setter is also a getter|resources/setter-getter-small.png][resources/setter-getter.png]]
This setter is also a getter^^1^^

!!!!Now that the aside(s) is(are) out of the way, here are some of my observations on Moriarty's game design and Perlenspiel language
* I can totally relate to and sympathize with Moriarty's observation that he was looking for a language/environment/engine
>...so transparent that students could sit down and start doing useful work after just a few evenings of study, fingering out ideas like notes on a piano.
>What I wanted was a gameclavier.
>Alas. There are no Steinways or Bosendorfers in this business.
>If I wanted a gameclavier, I was going to have to build it myself.
** I definitely feel and [[wrote about a similar need|Universal Emulator]] for an educational simulation/emulation language/environment/engine
* It's interesting and revealing (in the sense of "pointing to a direction") that there are similar themes^^2^^ (at least from my cursory initial review and understanding) between [[Perlenspiel|http://www.perlenspiel.org/]] and [[NetLogo|http://ccl.northwestern.edu/netlogo/]]. For example:
** The simple 2D grid of "beads" (Perlenspiel) vs. "patches" (~NetLogo)
** A relatively easy to learn/use but powerful scripting/programming language: Javascript (Perlenspiel) vs. Logo (~NetLogo)
** The web-enabled access. Even though
*** I haven't tried Perlenspiel's environment yet, and
*** ~NetLogo has the ability to save and __execute__ programs (games) as Java applets, which is not ideal, but good enough in terms of openness and ubiquity. ~NetLogo __programming__ is __not__ web-enabled!
** The "low threshold, high ceiling" design mentality of both, enabling users to create "personally meaningful experiences" relatively quickly -- a powerfully motivational approach to learning and performance
* Creating "personally meaningful experiences" is a concept that Constructionists like Papert and Resnick are using, and it drives powerful and deep learning. In [[the article|resources/Moriatry-Perlenspiel.html]], Moriarty outlines the flow of a seven week course in game design, which while simple, would be very effective, in my opinion.
** ''class 1'' - basic definitions for play, toy, game and puzzle^^3^^; brief summary of how the Perlenspiel engine works; URL to the Perlenspiel website; URLs of ebooks and Web sites describing Javascript; no programming instruction; no solved and showcase examples. ''Homework'': build __a toy__ in Perlenspiel.
** ''class 2'' - reviewing and discussing the toys that everyone had built in Perlenspiel; pair assignment to prototype __a puzzle__ with Perlenspiel.
** ''class 3'' - a playtesting studio for the puzzle in Perlenspiel. ''Homework'': use the feedback to complete and polish their puzzles.
** ''class 4'' - critiquing the completed puzzles.
** ''class 5 and beyond'' - game projects, 2 sessions per game, where the first session was devoted to playtesting, the second to presenting and critiquing.

In the last part of his article Moriarty shares some interesting insights as a teacher, after he himself finished coding a game in his own (Perlenspiel) environment:
>Nearly three weeks later, the game was sort-of finished.
>I had forgotten how difficult it is to write for bare pixels without sprites, or z-planes, or scrolling, or an animation system, or any of the conveniences we've all come to take for granted.
>For the first time, I realized just how much I had been asking when I told my students to write six games in seven weeks, and what they had gone through to satisfy me.
>Perlenspiel demonstrated to this old Professor how hard students will work if they are playfully and firmly challenged.

----
^^1^^ referring to Object Oriented (OO) methods ("function calls" in non-OO parlance) for setting variable values and retrieving (getting) them
^^2^^ Themes in a sense similar to what Mark Twain^^3^^ said about history: history does not repeat itself, but it does rhyme
^^3^^ I was serendipitously surprised and delighted to see Moriarty's reference to Twain in the article:
>some basic definitions for play, toy, game and puzzle ... inspired by The Adventures of Tom Sawyer, in which Mark Twain memorably remarks that "Work is whatever a body is obliged to do, and play is whatever a body is not obliged to do."
>* Play is superfluous action.
>* A toy is something that elicits play.
>* A game is a toy with rules and a goal.
>* A puzzle is a game with a solution.

Here are a few instances where I could actually see "the teaching spark lighting the learning fire" when teaching Science, Technology, Engineering, and Math (STEM) courses:
!!!Amazing Mazes - multiple representations
* In a course titled "Amazing Mazes", I taught middle school students how to build mazes in a 2D plane and then program "maze walkers" (think, "mice") to run through their mazes. In the course segment about different representations, we learned that the maze can have (at least) two different representations, each one useful for different reasons and purposes: the one is the visual representation, showing the maze paths graphically; the other is the "programmatic representation", showing the commands for creating the same maze.
[img[Maze representations|resources/maze-representations.png][resources/maze-representations.png]]
* All students understood the purpose of the "Visual Representation" and used it to plan their maze walks and walker programs. The usefulness of the "Program Representation" became obvious only when we learned how to manipulate this representation to create more complex mazes, using translation, reflection, and scaling.
* As it turns out, one 7th grade girl in class got an important piece of the connection and usefulness of the two forms of representing a maze. Instead of just copying and pasting the commands and parameters from the history box into the command input, and running them (to display the original maze), she added 10 to each command parameter (coordinate in the x-y plane) -- I guess she correctly reasoned that 10 would be easy to add -- and then ran the commands. I don't know who was more pleased with the resulting translated (shifted) shape on the screen: I, because I was able to teach, or she, because she was able to learn!
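A minimal, runnable Python sketch of her idea (the command format below is a made-up stand-in for the actual history-box syntax, which I no longer have in front of me):
{{{
# "Program representation" of a tiny maze: each command draws a wall
# segment between two (x, y) grid coordinates.
maze_commands = [
    ("wall", 0, 0, 0, 4),
    ("wall", 0, 4, 4, 4),
    ("wall", 4, 4, 4, 0),
]

def translate(commands, dx, dy):
    # Shift every coordinate; the maze's shape is unchanged, only its position.
    return [(op, x1 + dx, y1 + dy, x2 + dx, y2 + dy)
            for (op, x1, y1, x2, y2) in commands]

# Adding 10 to every coordinate, just as the student did:
for cmd in translate(maze_commands, 10, 10):
    print(cmd)
}}}
Reflection and scaling are the same trick: negate the coordinates, or multiply them by a factor, and the program representation does the geometry for you.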

!!!Right on Target - predicting with reflection/symmetry
*In a course titled "Right on Target", I taught middle school students about ballistic missiles through a game teaching them how to hit targets with their missiles. The students played with simulations enabling them to explore various parameters like missile speed and angle, target distance and height, and gravity constant (for example, on earth and on the moon). 
* The simulation speed was realistic, so it took a while for the students to see their missile fly and hit or miss the target, and they became a bit impatient. This was a good thing! Some students were starting to figure out ahead of time whether the missile will hit the target this time (guessing leading to prediction!)
[img[Right on Target competition|resources/ejs_trajectories_moon_45_small.png][resources/ejs_right_on_target.png]]
* It turns out that the students became pretty good at estimating hits/misses, and in the process discovered the symmetry of the arc (parabola) the missile was following. This made it easier later on to explore and explain the two possible ways to hit a target, with two different launch angles (since the trajectory is quadratic, there are two angle solutions, one shallow and one steep):
[img[Right on Target competition|resources/ejs-balistic-trajectory-1.png][resources/ejs-balistic-trajectory-1.png]]
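The two angles fall out of the range equation R = v^2 * sin(2*theta) / g: since sin(2*theta) reaches each value twice, there is a shallow and a steep solution. A minimal sketch with illustrative numbers (flat ground, no air resistance; not the classroom simulator itself):
{{{
# Two launch angles that hit the same target, from R = v^2 * sin(2*theta) / g.
import math

def launch_angles(v, distance, g=9.81):
    s = g * distance / v**2        # = sin(2*theta)
    if s > 1:
        return None                # target out of range at this speed
    theta1 = math.degrees(math.asin(s)) / 2
    return theta1, 90 - theta1     # shallow and steep solutions

print(launch_angles(30, 60))       # roughly (20.4, 69.6) degrees
}}}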


!!!Simplexity (simple rules leading to complex behaviors) - modeling for prediction
* In a course titled "Simplexity", I taught middle school students how simple rules, when continuously applied, can lead to complex ("unpredictable") behaviors. As an example, we analyzed Conway's Game of Life (which has 4 rules), starting with some simple initial conditions ("colonies"), and went through some fairly complex evolutions.
* After a few simulation runs, where students marveled at the movements on the screen, one student got a great insight and exclaimed: "This looks like the movement of clouds in the sky! We can use this to predict the weather!" Another student excitedly added: "No, no, this is like fires spreading in a forest! We could predict where it spreads and how to save trees and houses in the area."
|borderless|k
|[img[Forest Fires|./resources/Simplexity%20lesson%204%20screen.png][./resources/Simplexity%20lesson%204%20screen.png]]|[img[Game of Life 2 populations|./resources/GameOfLifeSideBySide-small.png][./resources/GameOfLifeSideBySide-small.png]]|
|borderless|k
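For reference, the four rules (underpopulation, survival, overpopulation, reproduction) are small enough to fit in a few lines of Python; a minimal sketch (not the classroom tool we used):
{{{
# One step of Conway's Game of Life, on a set of live (x, y) cells.
from collections import Counter

def step(live_cells):
    # Count how many live neighbors each cell (live or dead) has.
    neighbor_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbor_counts.items()
        # Survival with 2 or 3 neighbors; birth with exactly 3.
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "glider" colony: a simple initial condition with surprisingly complex travel.
colony = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for _ in range(4):
    colony = step(colony)
print(sorted(colony))  # the same glider shape, shifted one cell diagonally
}}}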

!!!Meet Me on Mars (MMoM - creating an animation game, launching a rocket to Mars) - making connections^^1^^
In a course titled "Meet Me on Mars (MMoM)", I taught middle school students how to create a simplified model of our Solar System (astronomy), how to deal with mapping planet sizes, rotation periods, and distances correctly (math, engineering), and how to make a rocket launch from Earth, track Mars, and land safely (programming, engineering).

Towards the end of the course (lasting 10 weeks), and after proudly showing me a successful landing of her rocket on Mars, one of the students looked up at me and said: Mr. Mark, you know //so much//. I want to know as much as you do. 
To which I responded: you definitely can, if you love what you do, //and// you are willing to work hard. 
This probably reminded her of things we did in class and she said: You always ask hard questions.
I didn't know how to interpret this, so I asked her: hard in a good way, or hard in a bad way?
And she responded: hard-good.
and after a split second, she added: it makes me think of new things; it makes me want to know the answer.
Exactly!
Igniting the fire, for sure, but, maybe more in line with the course: Lift-off!
|borderless|k
|[img[Meet Me on Mars rocket launch|./resources/Rocket launch 4.png][./resources/Rocket launch 4.png]]|
|borderless|k




----
^^1^^ I brought up this example in a recent job interview for a CS/STEM teaching position, in response to a question "what would your students say about you?"
In his book //The Developer's Code// (in the good [[Pragmatic Bookshelf|https://pragprog.com/]] series), Ka Wai Cheung makes a few good points about what to do if you are trying to teach a novice.

He makes the observation that "teaching is unlike programming", because:
* coding is not done linearly (or top to bottom, or from principles to details, etc.)
* when coding, programmers often "leave details" for later, and then come back to fill them in. Teaching this way would create big holes in understanding. For example, we compile our code to speed up error detection and correction ("let the compiler flag things" for us).

And he gives some suggestions:
* "Teach twice as slowly as you'd like to". Make all your "simple assumptions" and "obvious details" explicit and clear.
* Teach new concepts using obvious examples, concrete cases, and simple contexts. Avoid generalities, abstractions and theories, and make the examples as tangible and obvious as you can.
* "Lie to simplify". Pare down a complex topic and break it down to a "less than perfect" one, in the first iteration. Clear understanding of a concept, even if not 100% correct is motivating, and this will lead to a desire to know "the whole and correct" picture.
** or as the two great mathematicians Mark Kac and Stanislaw Ulam had said about teaching: Tell the truth, nothing but the truth, but not the whole truth^^1^^.
* Encourage independent thinking. Stuart and Hubert Dreyfus called it "autonomous thinking": students start asking fewer technical questions and more strategic questions. They ask less about the "how and what" and more about the "why". This is a sign they think there are better ways, and that what you taught them may limit their naturally developing intuition.
** encourage that, and ask them to come up with different/better solutions/approaches. Compare and contrast with less optimal solutions and dig as deep as they are willing to go. Differentiating is a great way to learn.

----
^^1^^ - see [[The importance of telling the whole truth]] :)
For me, the first challenge for computing science is to discover how to maintain order in a finite, but very large, discrete universe that is intricately intertwined. And a second, but not less important challenge is how to mould what you have achieved in solving the first problem, into a teachable discipline: it does not suffice to hone your own intellect (that will join you in your grave), you must teach others how to hone theirs. The more you concentrate on these two challenges, the clearer you will see that they are only two sides of the same coin: teaching yourself is discovering what is teachable.
In an [[interview of Sylvia Boorstein|https://soundcloud.com/onbeing/what-we-nurture-with-sylvia]] by Krista Tippett, Boorstein describes her experience when driving in her car: whenever she makes a mistake, takes a wrong turn, or misses an exit, her GPS never gets mad or upset. It just says "recalculating", and then proceeds to instruct her on how to get back on track.

So this is definitely an example of technology potentially "helping us to develop and cultivate spiritual practices", teaching us how to stay calm, not complicate things more than they already are, by becoming upset, or angry, or worried, or afraid.

This is something we are encountering all the time: something happens, it challenges us, and we are figuratively at a fork in the road (ha!). We can either react negatively (or as Buddhists would say, in an "unwholesome way"), or make the wise choice and just "recalculate" :)

Can we, and should we learn from Nature, when it comes to implementing technology solutions?

The argument can go both ways: On one hand nature is showing an astonishing variety, and incredible ingenuity in coming up with "solutions to problems", like locomotion (in the air, land, and water), adaptation to the environment, etc. (so much so, that some people are proposing/promoting an //Intelligent Design// as a "scientific alternative" to //Evolution and Natural Selection//. But this is a discussion for another time).
On the other hand, many of the human solutions to the same/similar problems are different from the natural ones, or at least, different enough, to argue that we/humans cannot "just learn by copying" from nature.
Some proverbial examples are:
* Flight - where human airplanes use fixed wings, propellers, jets, etc.
* Land-based transportation - mainly based on wheels, not on legs, slithering, etc. (not to mention things like magnetic levitation for high-speed trains, and other more "exotic" means)
* Illumination - mostly based on electricity, not on fire, nor other chemical reactions

So it is not so surprising that computer scientists working in the field of [[AI]] are, for the most part, not paying too much attention to how nature demonstrates intelligence. And I think that this is why people like Jeff Hawkins encountered such [[difficulties|01 - Artificial Intelligence]] when he tried to get into the AI Lab at MIT.

I think that in his book //On Intelligence// he is making [[some very good arguments|characteristics of intelligent systems]] about why this approach (ignoring nature's ways of "implementing intelligence" - mainly the evolution of the human neocortex) is a ''big mistake'', and why the past and current ways of going about developing AI are misguided, and not fruitful.
From [[The Philosophers Mail|http://thephilosophersmail.com/wp-content/uploads/2014/04/Ten-Virtues.pdf]] (associated with the philosopher Alain de Botton), now "reincarnated" as [[The School of Life|http://www.theschooloflife.com/]].

It's an online publication focusing on news from a different perspective, which is much more sensical, and absolutely useful:
>There are two ways of looking at things: picking out what’s unique, and being attentive to what’s recurring. The news is based on the former, philosophy on the latter. Which means that the daily diet of information and opinion tends to miss much. News, we concluded, is really what you need to know now, rather than what has just happened. The ideas of the [[Stoics|http://thephilosophersmail.com/perspective/the-great-philosophers-2-the-stoics/]] or of [[Lao Tzu|http://thephilosophersmail.com/perspective/the-great-eastern-philosophers-lao-tzu/]] might be urgent news in our lives, even though they have been around in the cultural ether for two millennia. 

And the List:

''RESILIENCE''
Keeping going even when things are looking dark; accepting that reversals are normal; remembering that human nature is in the end tough. Not frightening others with your fears.

''EMPATHY''
The capacity to connect imaginatively with the sufferings and unique experiences of another person. The courage to become someone else and look back at yourself with honesty.

''PATIENCE''
We lose our temper because we believe that things should be perfect. We’ve grown so good in some areas (putting men on the moon etc.), we’re ever less able to deal with things that still insist on going wrong; like traffic, government, other people... We should grow calmer and more forgiving by getting more realistic about how things actually tend to go.

''SACRIFICE''
We’re hardwired to seek our own advantage but also have a miraculous ability, very occasionally, to forego our own satisfactions in the name of someone or something else. We won’t ever manage to raise a family, love someone else or save the planet if we don’t keep up with the art of sacrifice.

''POLITENESS''
Politeness has a bad name. We often assume it’s about being ‘fake’ (which is meant to be bad) as opposed to ‘really ourselves’ (which is meant to be good). However, given what we’re really like deep down, we should spare others too much exposure to our deeper selves. We need to learn ‘manners’, which aren’t evil - they are the necessary internal rules of civilisation. Politeness is very linked to tolerance, the capacity to live alongside people whom one will never agree with, but at the same time, can’t avoid.

''HUMOUR''
Seeing the funny sides of situations and of oneself doesn’t sound very serious, but it is integral to wisdom, because it’s a sign that one is able to put a benevolent finger on the gap between what we want to happen and what life can actually provide; what we dream of being and what we actually are, what we hope other people will be like and what they are actually like. Like anger, humour springs from disappointment, but it’s disappointment optimally channelled. It’s one of the best things we can do with our sadness.

''~SELF-AWARENESS''
To know oneself is to try not to blame others for one’s troubles and moods; to have a sense of what’s going on inside oneself, and what actually belongs to the world.

''FORGIVENESS''
Forgiveness means a long memory of all the times when we wouldn’t have got through life without someone cutting us some slack. It’s recognising that living with others isn’t possible without excusing errors.

''HOPE''
The way the world is now is only a pale shadow of what it could one day be. We’re still only at the beginning of history. As you get older, despair becomes far easier, almost reflex (whereas in adolescence, it was still cool and adventurous). Pessimism isn’t necessarily deep, nor optimism shallow.

''CONFIDENCE''
The greatest projects and schemes die for no grander reasons than that we don’t dare. Confidence isn’t arrogance, it’s based on a constant awareness of how short life is and how little we ultimately lose from risking everything.
I call my Dad on the phone a few times a week (Skype is great!). He is 92 years old, and taught high school math and physics for many years. Maybe that's where both I and my oldest daughter got the "education bug" from.

He has never been very talkative, but with age he became even less so, so every time he has a "dad story" to tell, I treat it as a "special and tasty morsel". Being very associative, it always reminds me, in different ways, of who and how he used to be when he was younger (and I was, too... ;-).

Anyway, today he remembered a time when he was announcing in class the date for the next physics test. There was the usual excitement and groaning, and then a few students had the bright idea of asking my Dad for "a test with open books". Now, this was in the mid-70's, in the pre-Internet era, when books were //the// source of information and knowledge; "everything was in the book" (the equivalent of "it's on the web", or "there is an app for that", nowadays). The expectation was that students would study //everything// from the textbook, and be able to show on the test what they had absorbed from the book. An "open book test" was unheard of.

Also, my father was brought up and educated in Europe (in part of the then ~Austro-Hungarian Empire) under a system strongly influenced by German philosophy and pedagogy. Textbooks were treated then as "Bibles" -- reading //and// memorizing books (not just science, but literature and poetry, too) was "the effective thing to do" if you "really" wanted to learn.

But, as he had told me many times when I grew up: there is reading, and then there is //reading//... (and then again, [[there is reading, too...|Knowing how to read]]).

So, here is my father standing in front of the class, facing this dilemma of "to read (during the test), or not to read". His students probably took this proposal as a joke or a dare, but he considered it seriously.

Despite his relatively strict upbringing -- my Dad was actually somewhat mischievous as an adolescent, and according to him, he wasn't really into his studies (but rather much more into sports. This did not, however, prevent him from earning a Ph.D. later in life :) -- he could definitely relate to the students' request for "open books" (and their way of thinking, you know, "everything is in the book; see a question on the test, find a similar one in the book; what can go wrong?")
He also has, and always had, a good (sharp, but sometimes very subtle, barely detectable) sense of humor. //And//, most importantly (and more dangerously in this case -- more on this in a moment), ''he was a good reader!''

To make a long story short (where's the fun in that? ;-), my Dad agreed to this radical idea of an open book test in physics, to the loud approval of the entire class.
As it turns out (surprise, surprise), my Dad knew the physics textbook "back-to-front, front-to-back, and upside-down" ([[HA!|Knowing how to read]]). I don't think he knew it as well as the Talmudic Scholars of the past knew the [[Talmud|http://en.wikipedia.org/wiki/Talmud]], but he knew it pretty well.
BTW, and speaking of the Talmudic Scholars, it is said that you could stick a pin (a sacrilegious act in this context, but let's go with the story, shall we?) anywhere, on any page in the Talmud, and they could tell you, not only the sentence the pin hit on //the other side (the back side)// of the page, but the actual letter (!) in that sentence on that back page!
As a result of this close and attentive reading, my father knew of 3 exercises in the physics textbook, which had an error in the answer in the back of the book, and (sure enough) these were the questions on the test ... (with slight variations :).

Let's just say that the students did not excel on this test.

I know what you are thinking... and my father thought so, too: "it was unfair, so after the students had taken it, I canceled the test, and gave another one, the following week", he said. But, back to "normal" -- a closed book test. As my Dad sums this (meta-lesson) up: "You have to know how to read!" 

Now that I think about it, it was quite a clever idea on my father's part, since the students ended up studying the material twice...

Anyway, his story on the phone today, reminded me of a somewhat similar story from //my// university days. For better or for worse, I graduated from the Technion, which is the top engineering university in Israel; a high-achieving, highly competitive school in Haifa, which is a pretty industrialized city. One of the city landmarks was [[a pair of very high, massive chimneys|https://www.dreamstime.com/editorial-stock-image-oil-refineries-ltd-haifa-israel-isr-apr-its-vast-petrochemical-plants-have-released-significant-amounts-pollution-to-image55602154]] belonging to the dominant oil refinery in the city.
One of the engineering courses in the faculty of Civil Engineering at the Technion had a project every quarter to determine the exact height of the two chimneys by doing __remote__ measurements from the Technion campus and using trigonometry to calculate the result.
Since most Technion students are very busy and stressed out (but resourceful), //virtually all// students in that civil engineering course had been copying the results from previous years' papers for years, and everyone had been happy -- including the professor, who, I'm sure, was pleased with the high consistency and accuracy of the results over so many years and generations of students. Until... one quarter, one student decided to //actually do// the remote measurements and calculations. And he got results different from previous years'... Needless to say, this student got an "F" on his project (can you really blame the professor?! ;-)

Again, to cut a long story short, the student insisted that the professor join him in repeating the actual measurements and calculations, proved their correctness and ended up getting an "A". But you can imagine the mess trying to figure out what to do about all other students in the class (who originally got "A"s); and what do you do about past classes/generations/grades? As I said: a mess. But, again, as before, even when copying -- you have to know how to read.

I would have summed it up a bit differently -- you have to know how to //think//, but I think that this is what my Dad meant, too, that you really have to develop a habit of //thoughtful reading//, critical reading, piercing reading (no relation to the Talmud pin pricking :), and that relying on books as Bibles (or Talmuds), and taking what's written in them on blind faith (or superficial understanding), is __not__ the way to truly learn and know. You have to ''own'' and internally re-create what you read. 

I am thankful to my father for having taught me that, and (in his subtle way) reminding me of it today.
The [[Edgie, Daniel Dennett|https://www.edge.org/memberbio/daniel_c_dennett]] [[talks about his recovery from a serious heart failure|https://www.edge.org/conversation/daniel_c_dennett-thank-goodness]], and contemplates being grateful (and to whom/what).

(compared with [[Kurt Vonnegut's sentiment|If this isn’t nice, what is?]])

>The best thing about saying thank goodness in place of thank God is that there really are lots of ways of repaying your debt to goodness—by setting out to create more of it, for the benefit of those to come. Goodness comes in many forms, not just medicine and science. Thank goodness for the music of, say, Randy Newman, which could not exist without all those wonderful pianos and recording studios, to say nothing of the musical contributions of every great composer from Bach through Wagner to Scott Joplin and the Beatles. Thank goodness for fresh drinking water in the tap, and food on our table. Thank goodness for fair elections and truthful journalism. If you want to express your gratitude to goodness, you can plant a tree, feed an orphan, buy books for schoolgirls in the Islamic world, or contribute in thousands of other ways to the manifest improvement of life on this planet now and in the near future.

On the lighter side, (but not less deep and useful!), and talking about simple, pure gratitude and appreciation, Terry Pratchett, in his excellent book [["A Hat Full of Sky"|http://discworld.wikia.com/wiki/A_Hat_Full_of_Sky]] describes an exchange between Tiffany (a young witch) and the old, powerful, and majestic witch Esme (Granny) Weatherwax^^1^^:
>“Your grandmother,” she said, “did she wear a hat?”
>“What? Oh…not usually,” said Tiffany, still thinking about the big show. “She used to wear an old sack as a kind of bonnet when the weather was really bad. She said hats only blow away up on the hill.”
>“She made the sky her hat, then,” said Granny Weatherwax.
>“And did she wear a coat?”
>“Hah, all the shepherds used to say that if you saw Granny Aching in a coat, it’d mean it was blowing rocks!” said Tiffany proudly.
>“Then she made the wind her coat, too,” said Granny Weatherwax. “It’s a skill. Rain don’t fall on a witch if she doesn’t want it to, although personally I prefer to get wet and be thankful.”
>“Thankful for what?” said Tiffany.
>“That I’ll get dry later.”




----
^^1^^ Mistress Weatherwax was sort of the head witch, even though officially “Witches are all equal. [They] don’t have things like head witches. That’s quite against the spirit of witchcraft.”
That's the thing about people who think they hate computers. What they really hate is lousy programmers.
I recently finished teaching a course at [[MIT|http://www.citizenschools.org/california/about/locations/]] (part of the [[Citizen Schools|http://www.citizenschools.org/california/]]) titled "Acing Racing".
(Another course I did was [["Amazing Mazes"|The "Amazing Mazes" course]])
(Another course I did was [["Right on Target"|The "Right on Target" course]])

The goal of the 10 week course was to get middle school kids excited and interested in math, computer graphics/animation, and programming.
I used an ~OpenSource math package called [[Sage|http://sagemath.org/]] to gradually have the kids build a program in Sage (using Python) to set up a race between 2 rockets with different speeds and head-starts, and animate their race to the finish line.
(I also implemented a similar lesson plan using a [[different computer technology (Scratch)|Acing Racing - teaching some math and programming]])
In the process of building and running the race, the kids went from guessing which rocket would win the race (based on changing the initial conditions of the rockets), to actually calculating ''and explaining'' the results in mathematical terms and reasoning (a sketch of that calculation follows below).
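The underlying math is just distance = speed × time for each rocket; here is a minimal sketch of that reasoning in plain Python (illustrative numbers, not the actual Sage program):
{{{
# Who wins the race? Each rocket covers (track_length - head_start)
# at its own speed; the smaller finishing time wins.
def finish_time(track_length, speed, head_start=0):
    return (track_length - head_start) / speed

track = 100
t_red = finish_time(track, speed=5)                  # 20.0
t_blue = finish_time(track, speed=4, head_start=30)  # 17.5

print("red" if t_red < t_blue else "blue", "wins")   # blue wins
}}}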

|borderless|k
|The 10 week course culminated in a WOW!, <br>which is a show-and-tell by the kids to parents/family, friends, <br>and guests:<br> <br>[[The WOW! plan outline|resources/CitizenSchools_WOW_plan.pdf]]|[img[Acing Racing competition|resources/acing_racing_small.gif][resources/Distance-speed-time problems_sage.html]]|
|borderless|k



!!!The final Python program that the kids used to demonstrate different race conditions.
During the WOW! the kids showcased the simulation and animation program they created:
[[The Python-Sage Acing Racing program|./resources/AcingRacingProgram.png]]

They also asked the audience for predictions (playing a game of "who will win the race?"), and then explained the math leading to a correct prediction of winning/losing.
This is an animated version of the simulation/animation program which the kids used to show different races:
[[The full User Interface for running Acing Racing races|./resources/Distance-speed-time problems_sage.html]]

This is a static (.gif) version of one race run, which the kids used to demonstrate and explain their distance-speed-time calculations:
[[A result of running one Acing Racing race|./resources/AcingRacingOutput1.png]]

!!!Core Math standards covered
In this course, the following California core standards were covered:
* Number sense (in preparation for graphing distance-time graphs, positioning on the number line was covered)
* Problem solving using the 4 basic operations, including order of operations (to calculate distance, speed, time)
* Analyzing graphs rendered in 2D (including misleading graphs)
* Plotting data in 2D graph form (distance, time)
* Mathematical analysis (the interpretation of problem "givens", e.g., distance head start, time delay)
I recently finished teaching a course at [[MIT|http://www.citizenschools.org/california/about/locations/]] (part of the [[Citizen Schools|http://www.citizenschools.org/california/]]) titled "Amazing Mazes".
(Another course I did was [["Acing Racing"|The "Acing Racing" course]])
(Another course I did was [["Right on Target"|The "Right on Target" course]])

The goal of the 10 week course was to get middle school students excited and interested in math, computer graphics/animation, and programming (see [[course outline, lesson plans, and student activities/programs|http://employees.org/~hmark/courses/amazingmazes/index.html]]).
I used an ~OpenSource computer programming environment/tool called [[NetLogo|http://ccl.northwestern.edu/netlogo/]] to gradually have the students build more and more complex mazes in ~NetLogo (using a [[Domain Specific Language (DSL)|http://martinfowler.com/tags/domain%20specific%20language.html]]^^1^^ I had created), then to explore and program (again, with [[a DSL I had created|./resources/NetLogo Maze walking DSL Spec.pdf]]) different ways (algorithms) to "walk the mazes" (or solve them), and finally use competitions to evaluate (and show off) the effectiveness and efficiency of their programs.
In the process of building mazes and programming maze "walkers", the students developed a sense for [[different kinds and types of mazes|http://employees.org/~hmark/math/netlogo/amazing-mazes-maze-maker-3.html]]^^2^^ (and connected graphs representing them), the various levels of challenges in solving different mazes, and different ways of effectively (successfully) and efficiently (quickly) programming maze walking solutions.

|borderless|k
|The 10 week course culminated in a WOW!, <br>which is a show-and-tell by the students to parents/family, friends, <br>and guests:<br> <br>[[The WOW! plan outline|resources/Amazing Mazes - WOW plan.pdf]]|[img[Amazing Mazes competition|resources/Amazing Mazes Competition - small.png][resources/Amazing Mazes Competition.png]]|
|borderless|k


!!!The final ~NetLogo program that the students used in the WOW! to demonstrate an effective/successful maze walking (solving) program.
During the WOW! the students asked for a volunteer from the audience to manually walk the maze and compete with their program.
[[The NetLogo competition (image)|./resources/Amazing Mazes Competition.png]]
[[The NetLogo competition (Java Applet)|http://employees.org/~hmark/math/netlogo/amazing-mazes-competition.html]]

!!!Activities leading up to the WOW! competition
|borderless|k
|The [[scaffolding NetLogo program (Java Applet)|http://employees.org/~hmark/math/netlogo/amazing-mazes-builder-2.html]] the students used to __learn how to build mazes__<br><br><br><br><br>The [[NetLogo program (Java Applet) and user interface|http://employees.org/~hmark/math/netlogo/amazing-mazes-12.html]] which the students used to __develop, test, and demonstrate maze walking solutions__ (e.g., the "left-hand walk" sketched below):<br><br>[[A result of running one maze walking program|resources/Amazing Mazes program.png]] in a simple maze<br>[[A result of running one maze walking program|resources/Amazing Mazes programming.png]] in a more complex maze|[img[Amazing Mazes maze building|resources/Amazing Mazes maze-small.png][resources/Amazing Mazes maze.png]]<br>[img[Amazing Mazes maze programming|resources/Amazing Mazes programming-small.png][resources/Amazing Mazes programming.png]]|
|borderless|k
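As an aside, here is a minimal, generic Python sketch of the classic "left-hand walk" idea the students programmed; this is my own illustration, not the course's ~NetLogo DSL:
{{{
# A maze as a set of open grid cells; headings cycle N, W, S, E,
# so adding 1 to the heading index is a left turn.
DIRS = [(0, 1), (-1, 0), (0, -1), (1, 0)]  # N, W, S, E

def left_hand_walk(open_cells, start, goal, max_steps=10000):
    (x, y), d = start, 0                # start facing North
    for _ in range(max_steps):
        if (x, y) == goal:
            return True
        for turn in (1, 0, -1, 2):      # prefer left, then straight, right, back
            nd = (d + turn) % 4
            dx, dy = DIRS[nd]
            if (x + dx, y + dy) in open_cells:
                x, y, d = x + dx, y + dy, nd
                break
    return False                        # gave up (e.g., a maze with loops)
}}}
Keeping one hand on the wall is effective in any simply connected maze, but not always efficient, which is part of what made the competitions interesting.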

!!!Core Math and Programming concepts covered
In this course, the following core concepts were covered:
* Graphing and Cartesian coordinate space (in 2D)
* Basic connected graph analysis (difference in maze complexity, challenges)
* Basic programming concepts and commands (for maze creation and maze walking/solving)
* Basic analysis of program effectiveness and efficiency

!!!Computational Literacy/Thinking concepts covered
This course lends itself to discussion and exercising skills in the following [[Computational Literacy/Thinking|A Framework for Computational Thinking, Computational Literacy]] areas:
* Levels of abstraction
** Describing and manipulating mazes and walkers
* Modeling and representation
** Network equivalence, programs and algorithms
* Algorithms and procedures
** Strategies and algorithms for solving/walking mazes
* Automation
** Commands, loops, conditions, programs

----
^^1^^ - a "vintage" (1986) article on [[little languages|resources/Bentley-little-languages.pdf]], AKA Domain Specific Languages (~DSLs) by Jon Bentley
^^2^^ - [[a site with a nice collection|http://www.logicmazes.com/index.html]] of different types of "logic mazes" by Robert Abbott
I recently finished teaching a course at [[MIT|http://www.citizenschools.org/california/about/locations/]] (part of the [[Citizen Schools|http://www.citizenschools.org/california/]]) titled "Right on Target".
(Another course I did was [["Acing Racing"|The "Acing Racing" course]])
(Another course I did was [["Amazing Mazes"|The "Amazing Mazes" course]])

The goal of the 10 week course was to get middle school kids excited and interested in math, computer graphics/animation, and programming.
I used an ~OpenSource modeling and simulation tool called [[Easy Java Simulations (EJS)|http://www.um.es/fem/EjsWiki/pmwiki.php]] to gradually have the kids understand and simulate/experience some key factors that are important to projectile motion (e.g., gravity, air resistance, projectile velocity, angle, etc.).

In the process of simulating aiming projectiles and trying to hit different targets, both on Earth and on the Moon, the kids went from guessing gravity, friction, initial speed, initial angle, etc., to actually calculating ''and explaining'' the results in math and physics terms and reasoning.

|borderless|k
|The 10 week course culminated in a WOW!, <br>which is a show-and-tell by the kids to parents/family, friends, <br>and guests:<br> <br>[[The WOW! plan outline|resources/Right On Target - WOW plan.pdf]]|[img[Right on Target competition|resources/ejs_trajectories_moon_45_small.png][resources/ejs_right_on_target.png]]|
|borderless|k


!!!The final Easy Java Simulations (EJS) program that the kids used to demonstrate different projectile/ballistic conditions.
During the WOW! the kids asked the audience for predictions (which velocity and angle would hit the target), and then explained the math leading to a correct prediction:
[[The Easy Java Simulations (EJS) program|./resources/ejs_right_on_target.png]]

!!!Main concepts covered
In this course, the following concepts were covered:
* If air resistance is negligible, the size, shape, and weight of free-falling objects do not change their dropping speed, nor the time it takes them to hit the ground
** A [[video of a simple experiment|resources/GalileoFallingBodiesGravityDemo.mp4]] demonstrates the point, ''qualitatively''.
** Use of a [[Video Analysis Tool|http://www.compadre.org/osp/items/detail.cfm?ID=9687]], [[shows this point|Tracker video analysis - falling bodies]] ''quantitatively''.
* [[Friction (air resistance)|./resources/ejs_air_resistance.png]] can play a big role in projectile trajectories, velocity, and time ([[Java Applet|http://employees.org/~hmark/math/ejs/freefall_with_air_resistance.html]])
* In projectile/ballistic motion, [[there is more than one angle|./resources/ejs_trajectories.png]] you can use to hit a target (with a given velocity) ([[Java Applet|http://employees.org/~hmark/math/ejs/RightOnTarget.html]])
* Certain behaviors "math rules", or "physics laws", for example that a 45 degree angle will cause a projectile to shoot the farthest, are "universal" and apply "everywhere" ([[on Earth|./resources/ejs_trajectories_earth_45.png]] and [[on the Moon|./resources/ejs_trajectories_moon_45.png]]) ([[Java Applet|http://employees.org/~hmark/math/ejs/RightOnTarget.html]])
* Math (and programs like [[velocity calculators|./resources/ejs_velocity_calculator.png]]) can be very handy in calculating how to hit a target (speed and angle), eliminating the need to guess
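A minimal Python sketch (mine, not from the course materials) of the ideal-case math behind the last three points, assuming no air resistance:
{{{
import math

def projectile_range(v, angle_deg, g=9.81):
    """Range on flat ground, ignoring air resistance (meters)."""
    theta = math.radians(angle_deg)
    return v ** 2 * math.sin(2 * theta) / g

# 45 degrees shoots farthest on Earth (g = 9.81) and on the Moon (g = 1.62)
# alike, and 30 and 60 degrees land in the same spot: two angles, one target.
for angle in (30, 45, 60):
    print(angle, round(projectile_range(20, angle), 1), "m on Earth,",
          round(projectile_range(20, angle, g=1.62), 1), "m on the Moon")
}}}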

!!!Core Math standards covered
In this course, the following California core standards were covered:
* Number sense and scaling (in dealing with behavior and gravity on Earth vs. Moon)
* Problem solving using the 4 basic operations (and square root), including order of operations (to calculate projectile speed and angle)
* Analyzing graphs/trajectories rendered in 2D (including misleading graphs)
* Mathematical analysis (the interpretation of problem "givens", e.g., target distance, target height, projectile velocity and angle, gravity)
The following thought experiment demonstrates how reversing the thinking about cause and effect can lead to "amazing" conclusions:
(see also [[the anthropic principle|http://en.wikipedia.org/wiki/Anthropic_principle]]^^1^^ and [[the anthropic bias|On Anthropic Bias, or Was the Universe Made for Us?]]).

Here's a way to pick "the most amazingly lucky (talented? skilled? gifted? spooky?)" coin flipper in the world: a flipper who correctly called, without fail, every result of flipping a coin (insert your number here, but for example's sake let's say ''10 times'').

First, you start with 1,024 coin flippers and pair them up (so 512 pairs). Have one flipper in each pair call heads (and the other call tails). Each pair flips a coin, and the flipper who called it correctly goes on to the next round. So in the second round you have the 512 "winners" from the first round, paired up into 256 pairs, and you repeat the calling and flipping.
At the end of 10 rounds (in our example), you end up with one "very skilled and astonishingly talented" flipper, who correctly called all flips, and therefore has a perfect track record! Most "amazing" indeed.
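A minimal Python sketch of this elimination tournament (the structure and names are mine for illustration):
{{{
import random

def tournament(rounds=10):
    flippers = list(range(2 ** rounds))          # 1,024 flippers for 10 rounds
    for _ in range(rounds):
        winners = []
        for a, b in zip(flippers[::2], flippers[1::2]):
            # a calls heads, b calls tails; the coin picks who advances
            winners.append(a if random.random() < 0.5 else b)
        flippers = winners
    return flippers[0]   # someone with a perfect 10-call record, guaranteed

print("The 'amazing' flipper is #", tournament())
}}}
Run it as many times as you like: by construction there is //always// exactly one perfect-record "winner", no skill required.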

This reminds me of a [[NetLogo|http://ccl.northwestern.edu/netlogo/]] program I wrote to [[experiment with "evolutionary programming"|Exploring genetic algorithms using NetLogo]] (AKA, genetic algorithms), where it is clear (by definition) that over time (generations), a "best-of-breed" program will (not may, but will) emerge that solves the problem at hand.

The scientist/paleontologist [[Stephen Jay Gould|http://www.stephenjaygould.org/]] said about human evolution:
>the pathways that have led to our evolution are quirky, improbable, unrepeatable and utterly unpredictable. Human evolution is not random; it makes sense and can be explained after the fact. But wind back life's tape to the dawn of time and let it play again and you will never get humans a second time.

The excellent author and journalist Robert Wright [[describes this "astonishing" human path|Nonzero: The Logic of Human Destiny]] (he calls it an "evolutionary escalator") very succinctly, concluding that where the human race is today is another case of "unavoidable (i.e., highly likely) luck".

I think that this way of swapping (confusing?) cause and effect is at the heart of the [[anthropic principle|http://www.anthropic-principle.com/]]^^1^^, and is captured nicely in the cartoon by [[Frank Modell|http://frankmodell.com/]] in The New Yorker:
[img[Lovely Outdoors|resources/lovely outdoors 1.png][resources/lovely outdoors.png]]

----
^^1^^ See [[Nick Bostrom|http://www.nickbostrom.com/]] on the [[anthropic principle|http://www.anthropic-principle.com/?q=book/table_of_contents]]
Richard Hamming tells this [[personally formative story|http://worrydream.com/refs/Hamming%20-%20Mathematics%20on%20a%20Distant%20Planet.pdf]] in his article "MATHEMATICS ON A DISTANT PLANET":

!!! ... and its criticality in the real world ...
>[This experience] shaped my opinions. [It] occurred at Los Alamos during WWII when we were designing atomic bombs. Shortly before the first field test (you realize that no small scale experiment can be done -- either you have a critical mass or you do not), a man asked me to check some arithmetic he had done, and I agreed, thinking to fob it off on some subordinate. When I asked what it was, he said, “It is the probability that the test bomb will ignite the whole atmosphere.” I decided I would check it myself! 
>
>The next day when he came for the answers I remarked to him, “The arithmetic was apparently correct but I do not know about the formulas for the capture cross sections for oxygen and nitrogen -- after all, there could be no experiments at the needed energy levels.” He replied, like a physicist talking to a mathematician, that he wanted me to check the arithmetic not the physics, and left. I said to myself, “What have you done, Hamming, you are involved in risking all of life that is known in the Universe, and you do not know much of an essential part?” 
>
>I was pacing up and down the corridor when a friend asked me what was bothering me. I told him. His reply was, “Never mind, Hamming, no one will ever blame you.” Yes, we risked all the life we knew of in the known universe on some mathematics. Mathematics is not merely an idle art form, it is an essential part of our society.
And he concludes:
>Many times I have made predictions about the physical world based on mathematics done at my desk. Surely Nature does not know nor care what I write, nor the mathematical postulates used, but the consequences can be serious. Therefore it is of significant importance to ask, "What kinds of mathematics can I depend on, and what kinds can I not?" That is the question! 

!!! ... but its relation/relevance to the real world :)
>I know that the great Hilbert said, "We will not be driven out of the paradise Cantor has created for us," and I reply, "I see no reason for walking in!" Indeed, in time, as more and more people get used to computers, I am inclined to believe that we here on this Earth will decide that the computable numbers are enough. Apparently you never need a non-computable number! 
>
>Take, for example, the classic real line from 0 to 1, and remove the computable numbers. You have a non-countable number of numbers left, no one of which you can ever describe (how can you describe a number adequately if you cannot give, at least implicitly, a way of finding it)! Yet the axiom of choice says you can select one! 
>
>Can you? Which one, if you can never describe it so another person knows what you are talking about? Is the axiom of choice reasonable? Is it safe to depend on this axiom in this real world? Just as the physicists finally decided, after years of arguing about properties of the ether that it turned out could not be measured, I too believe it is better to ignore entirely what you cannot talk about or measure! Some things do not arise naturally.


(see also [[Is Math a human invention or a series of discoveries of truths in the real world?]])
<<forEachTiddler 
where 
'tiddler.tags.contains("book-chapter") && tiddler.tags.contains("The Accidental Universe - The World You Thought You Knew")'
sortBy 
'tiddler.title'>>
^^*^^ as defined by the College Board for the [[Advanced Placement Computer Science Principles|https://secure-media.collegeboard.org/digitalServices/pdf/ap/ap-computer-science-principles-course-and-exam-description.pdf]] (AP CSP) course

!!!Computer Science Principles - "Big Ideas"

Big Idea 1: Creativity. Computing is a creative activity.

Big Idea 2: Abstraction. Abstraction reduces information and detail to facilitate focus on relevant concepts.

Big Idea 3: Data and information. Data and information facilitate the creation of knowledge.

Big Idea 4: Algorithms. Algorithms are used to develop and express solutions to computational problems.

Big Idea 5: Programming. Programming enables problem solving, human expression, and creation of knowledge.

Big Idea 6: The Internet. The Internet pervades modern computing.

Big Idea 7: Global Impact. Computing has global effects on individuals and society.

!!!Computer Science Principles - Practices and Skills

Practice 1: Connecting computing (Making connections to and from the real world; making connections within CS)

Practice 2: Creating computational artifacts

Practice 3: Abstracting

Practice 4: Analyzing problems and artifacts

Practice 5: Communicating

Practice 6: Collaborating

In a good article in the journal //Nature// titled [[Can we open the black box of AI?|http://www.nature.com/news/can-we-open-the-black-box-of-ai-1.20731]], Davide Castelvecchi brings up an important aspect (and caution) to consider as we (rush to) trust Artificial Intelligence (AI) and Machine Learning to make more (and more critical) decisions in our lives.

The fundamental problem is that some implementations of AI (e.g., neural networks) are opaque about what they have learned, and it's very hard, and sometimes impossible, to backtrack and figure out how they reached a certain conclusion or why they behaved a certain way.
>“The problem is that the knowledge gets baked into the network, rather than into us,” says Michael Tyka, a biophysicist and programmer at Google in Seattle, Washington. “Have we really understood anything? Not really -- the network has.”

And it boils down to this: can we trust these systems and algorithms if we don't understand (and cannot find out) how they learned and why they behave the way they do?
It's similar to the dilemma an investor faces when considering the (in)famous "investment strategy" known as The Super Bowl Stock Market Indicator. 
As [[a BBC article|http://www.bbc.com/news/business-21277122]] puts it:
>Prof George Kester, of Washington and Lee University, estimates that if since 1967 (the first year the Super Bowl was played) he'd invested $1,000 in the stock market each year the indicator predicted an up market, and switched the money over to bonds every year that the indicator predicted a down market, he would have $168,053 today.

A classic (and "oldie but goodie") example from the Cold War era is the attempts of the US to create an intelligent system which identifies Russian tanks on the battlefield, in order to alert NATO forces about friends vs. foes.
As part of the system training/learning sessions, the US scientists fed the system with hundreds of tank images/photos of both friendly and enemy tanks. But when it came to battlefield testing, the system failed miserably. With great effort, the scientists backtracked the system's learning patterns and decision criteria, and found out that since the majority of the enemy tank images had been photographed at night and/or under severe weather conditions (and the Allied tank images were photographed under favorable lighting and weather conditions), the system actually learned to distinguish weather conditions, not tank shapes and models.

The article in //Nature// gives a similar (and more "modern") example, where a computer scientist was teaching a car to drive autonomously:
>On each trip, Pomerleau [the scientist] would train the system for a few minutes, then turn it loose to drive itself. Everything seemed to go well — until one day the Humvee approached a bridge and suddenly swerved to one side. He avoided a crash only by quickly grabbing the wheel and retaking control.
>Back in the lab, Pomerleau tried to understand where the computer had gone wrong. “Part of my thesis was to open up the black box and figure out what it was thinking,” he explains. But how? He had programmed the computer to act as a 'neural network' -- a type of artificial intelligence (AI) that is modelled on the brain, and that promised to be better than standard algorithms at dealing with complex real-world situations. Unfortunately, such networks are also as opaque as the brain. Instead of storing what they have learned in a neat block of digital memory, they diffuse the information in a way that is exceedingly difficult to decipher. Only after extensively testing his software's responses to various visual stimuli did Pomerleau discover the problem: the network had been using grassy roadsides as a guide to the direction of the road, so the appearance of the bridge confused it.

So,
>Issues such as these have led some computer scientists to think that deep learning with neural networks should not be the only game in town. Zoubin Ghahramani, a machine-learning researcher at the University of Cambridge, UK, says that if AI is to give answers that humans can easily interpret, “there's a world of problems for which deep learning is just not the answer”. One relatively transparent approach with an ability to do science was debuted in 2009 by Lipson and computational biologist Michael Schmidt, then at Cornell University in Ithaca, New York. Their algorithm, called Eureqa, demonstrated that it could rediscover the laws of Newtonian physics simply by watching a relatively simple mechanical object — a system of pendulums — in motion.
>
>Starting from a random combination of mathematical building blocks such as +, −, sine and cosine, Eureqa follows a trial-and-error method inspired by Darwinian evolution to modify the terms until it arrives at the formulae that best describe the data. It then proposes experiments to test its models. One of its advantages is simplicity, says Lipson. “A model produced by Eureqa usually has a dozen parameters. A neural network has millions.”

The bottom line is that there is no single answer/solution/silver bullet to devising and deploying intelligent machines:
>the complex answers given by machine learning have to be part of science's toolkit because the real world is complex: for phenomena such as the weather or the stock market, a reductionist, synthetic description might not even exist. “There are things we cannot verbalize,” says Stéphane Mallat, an applied mathematician at the École Polytechnique in Paris. “When you ask a medical doctor why he diagnosed this or this, he's going to give you some reasons,” he says. “But how come it takes 20 years to make a good doctor? Because the information is just not in books.”
>
>[...] scientists should embrace deep learning without being “too anal” about the black box. After all, they all carry a black box in their heads. “You use your brain all the time; you trust your brain all the time; and you have no idea how your brain works.”

But as [[John Seely Brown (formerly at Xerox and then at Stanford) paints the picture|Sense-making and learning in the new 21st century environment]], we will blend more and more "Homo Sapiens" (knowing man) with "Homo Faber" (maker man) with "Homo Ludens" (player man) and AI (Artificial Intelligence) and IA (Intelligence Augmentation), and it will become more and more important to understand this "new way of being".
In a well-thought-out [[blog post titled The Case for Slow Programming|https://ventrellathing.wordpress.com/2013/06/18/the-case-for-slow-programming/]], [[Jeffrey Ventrella|https://ventrellathing.wordpress.com/about/]] starts by quoting his father:
>“Slow down, son. You’ll get the job done faster.”
which reminds me of a quote ([[if I say so myself|I'll understand quickly if you explain slowly.]] :): "if you explain slowly, I'll understand quickly".

In the post, Ventrella, who is a programmer/artist over 55, makes some good points:
* I program slowly and thoughtfully. I’m kind of like a designer who writes code.
* You can’t wish away [the] Design Process [by doing quick and small iterations, committing code frequently and not breaking anything in the process]. It has been in existence since the dawn of civilization. And the latest clever development tools [e.g., github], no matter how clever, cannot replace the best practices and real-life collaboration that built cathedrals, railroads, and feature-length films.
He describes his programming style/process:
* My programming style is defined by organic arcs of different sizes and timescales. Each arc starts with exploration, trial and error, hacks, and temporary variables. Basically, a good deal of scaffolding. A picture begins to take shape. Later on, I come back and dot my i’s and cross my t’s. The end of each arc is something like implementation-ready code. (“Cleaning my studio” is a necessary part of finishing the cycle). The development arc of my code contribution is synonymous with the emergence of a strategy, a design scheme, an architecture.
* And sometimes, after a mature organism has emerged, I go back and start over, because I think I have a better idea of how to do it. Sometimes I’m wrong. Sometimes I’m right. There is no way to really know until the organism is fully formed and staring me in the face.
* The "slow programming movement" is part of a more general "slow movement". It is a software development philosophy that emphasizes careful design, quality code, software testing and thinking. It strives to avoid kludges, buggy code, and overly quick release cycles.
* As part of the agile software development movement, groups of software developers around the world look for more predictive projects, and aiming at a more sustainable career and work-life balance. They propose some practices such as pair programming, code reviews, and code refactorings that result in more reliable and robust software applications.
* Money dynamics puts unnatural demands on a process that would be best left to the natural circadian rhythms of design evolution. Fast is not always better. In fact, slower sometimes actually means faster – when all is said and done. The subject of how digital technology is usurping our natural temporal rhythm is addressed in Rushkoff’s [[Present Shock|http://www.rushkoff.com/books/present-shock/]].
* ''I believe that we need older people, women, and educators INSIDE the software development cycle. More people-people, fewer thing-people. And I don’t mean on the outside, sitting at help desks or doing UI flower arranging. I mean on the INSIDE – making sure that software resonates with humanity at large.''
* “software programming is not typing” (and [[Brendan Enrick has more to say about this|http://brendan.enrick.com/post/Programming-is-Not-Just-Typing]]). Everyone knows this, but it doesn’t hurt to remind ourselves every so often.
* The fact that we programmers spend our time jabbing our fingers at keyboards makes it appear that this physical activity is synonymous with programming. But programming is actually the act of bringing thought, design, language, logic, and mental construction into a form that can be stored in computer memory.

Steven Pinker, in his book “The Sense of Style”, writes:
>The main cause of incomprehensible prose is the difficulty of imagining what it’s like for someone else not to know something that you know.
and this is also true about teaching. An expert is often not a good teacher.

Or as the psychologist Sian Beilock, now the president of Barnard College, writes, “As you get better and better at what you do, your ability to communicate your understanding or to help others learn that skill often gets worse and worse.”

The Harvard Business Review describes [[a simple and clear experiment|https://hbr.org/2006/12/the-curse-of-knowledge]] demonstrating the difficulty of transmitting knowledge from expert to novice, or teacher to learner:
>In 1990, a Stanford University graduate student in psychology named Elizabeth Newton illustrated the curse of knowledge by studying a simple game in which she assigned people to one of two roles: “tapper” or “listener.” Each tapper was asked to pick a well-known song, such as “Happy Birthday,” and tap out the rhythm on a table. The listener’s job was to guess the song.
>
>Over the course of Newton’s experiment, 120 songs were tapped out. Listeners guessed only three of the songs correctly: a success ratio of 2.5%. But before they guessed, Newton asked the tappers to predict the probability that listeners would guess correctly. They predicted 50%. The tappers got their message across one time in 40, but they thought they would get it across one time in two. Why?
>
>When a tapper taps, it is impossible for her to avoid hearing the tune playing along to her taps. Meanwhile, all the listener can hear is a kind of bizarre Morse code. Yet the tappers were flabbergasted by how hard the listeners had to work to pick up the tune.
>
>The problem is that once we know something—say, the melody of a song—we find it hard to imagine not knowing it. Our knowledge has “cursed” us. We have difficulty sharing it with others, because we can’t readily re-create their state of mind.

There are a couple of ways to reduce this knowledge transfer gap:
* One should make the message concrete (avoid high-level abstractions, at least initially)
* Stories, too, work particularly well in dodging the curse of knowledge, because they force us to use concrete language.

In an article titled [["Those Who Can Do, Can’t Teach"|https://www.nytimes.com/2018/08/25/opinion/sunday/college-professors-experts-advice.html]], Adam Grant, an organizational psychologist, gives the following advice to students/learners when picking a teacher/mentor/coach:
* pay attention to how long it has been since a teacher studied the material. Instead of studying under people who have learned the most, it can be wise to study under people who have learned the most recently.
* consider how difficult it was for the educator to master the material. We should be learning from overachievers: the people who accomplish the most with the least natural talent and opportunity.
* focus as much on how well the teacher communicates the material as on how well the teacher knows the material. 

So Grant's advice:
>Before you seek out an expert as your teacher or coach, remember that it’s not just about what they know; it’s about how recently and easily they learned it, and how clearly and enthusiastically they communicate it.

On the other hand, Grant points out that the flip side of the saying [["Those Who Can Do, Can’t Teach"|https://www.nytimes.com/2018/08/25/opinion/sunday/college-professors-experts-advice.html]], namely the saying that “Those who can’t do, teach” is not necessarily true: ''Teachers often turn into great doers.''

After all, the best way to learn something is not to do it but to teach it. You understand it better after you explain it — and you remember it better after retrieving and sharing it.
As you gain experience studying and explaining a skill, you might actually improve your ability to execute that skill.
As opposed to the Drake Equation (see [[Interdisciplinary knowledge in an equation]] and also [[Frank Wilczek on Intelligent Life in the universe]] :)

From [[xkcd|https://xkcd.com/718/]]:

[img[The Flake Equation|resources/Flake Equation small.png][resources/Flake Equation.png]]

and Michael Shermer's [[equation analysis|https://michaelshermer.com/2011/10/the-flake-equation/]] at The Skeptic Magazine.
From Pete Goodliffe's book //Becoming a Better Programmer//:

You can learn falsehood and believe that it’s right. This can be at best embarrassing, and at worst dangerous. This is illustrated by the Four Stages of Competence (a classification posited in the 1940s by psychologist Abraham Maslow). You may have:

''Unconscious incompetence''
This is a dangerous place to be. You don’t know that you don’t know something. You are ignorant of your ignorance. Indeed, it is very possible that you think you understand the subject but don’t appreciate how very wrong you are. It is a blind spot in your knowledge.

''Conscious incompetence''
You don’t know something. But you know that you’re ignorant. This is a relatively safe position to be in. You probably don’t care—it’s not something you need to know. Or you know that you’re ignorant and it is a source of frustration.

''Conscious competence''
This is also a good state to be in. You know something. And you know that you know it. To use this skill you have to make a conscious effort, and concentrate.

''Unconscious competence''
This is when your knowledge of a topic is so great that it has become second nature. You are no longer aware that you are using your expertise. Most adults, for example, can consider walking and balance as an unconscious competence—we just do it without a second thought.

[[Alan Kay|https://en.wikipedia.org/wiki/Alan_Kay]] (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]") in his article [[The Future of Reading Depends on the Future of Learning Difficult to Learn Things|http://www.vpri.org/pdf/future_of_reading.pdf]] on human ability amplifiers, and specifically reading (as opposed to memorizing):
>Socrates’ complaints about writing included “Writing removes the need to remember”. He meant that a prosthetic brace on a healthy limb will induce withering. On the other hand, if we think of new technologies as amplifiers that add or multiply to what we already have rather than replacing them—then we have the opportunity to use writing for its reach over time and space, its efficiencies, and its ability to hold forms of argument that don’t work in oral discourse. And we can still learn to remember all we’ve read! In other words, writing is not a good replacement for memories used in thinking—too inefficient—but is a great way to cover more ground, to cover different ground, and to have more to think about and with.

See also [[A Helpful Guide to Reading Better - Farnam Street]].
This [[short article by Umberto Eco|http://interglacial.com/pub/text/Umberto_Eco_-_Eternal_Fascism.html]] ([[local copy|resources/Umberto_Eco_Eternal_Fascism.html]]) (author of //The Name of the Rose// (1980) and //Foucault's Pendulum// (1988)) shows some characteristics typical of his style of writing:
Complex (or at least not simple) sentences, simple (or at least not bombastic) yet not simplistic ideas, full of (steeped in?) cultural references/connotations, using (sometimes subtle) humor/irony, and __sprinkled with parentheses__ (!).

Now, this last feature is (obviously ;-) endearing to me. I was glad to see someone like Eco not shying away from longer/complex sentences and parenthetical remarks, especially since I am aware of (and got some feedback about) these characteristics in //my// style of writing.

Here's a sample from [[Eco's article (local copy)|resources/eco_mac_vs_pc.html]]:
>Friends, Italians, countrymen, I ask that a Committee for Public Health be set up, whose task would be to censor (by violent means, if necessary) discussion of the following topics in the Italian press. Each censored topic is followed by an alternative in brackets which is just as futile, but rich with the potential for polemic. Whether Joyce is boring (whether reading Thomas Mann gives one erections). Whether Heidegger is responsible for the crisis of the Left (whether Ariosto provoked the revocation of the Edict of Nantes). Whether semiotics has blurred the difference between Walt Disney and Dante (whether De Agostini does the right thing in putting Vimercate and the Sahara in the same atlas). Whether Italy boycotted quantum physics (whether France plots against the subjunctive). Whether new technologies kill books and cinemas (whether zeppelins made bicycles redundant). Whether computers kill inspiration (whether fountain pens are Protestant).
>One can continue with: whether Moses was anti-semitic; whether Leon Bloy liked Calasso; whether Rousseau was responsible for the atomic bomb; whether Homer approved of investments in Treasury stocks; whether the Sacred Heart is monarchist or republican.

I'll keep reading Eco, especially his articles, to see when he starts using not just parentheses, but also "/" (slashes), [[just like me|Happiness and sorrow]]...
This is an insightful little book (97 pages, including notes, resources, acknowledgements, and index :) by Jurriaan Kamp, the co-founder of "[[The Intelligent Optimist|http://www.theoptimist.com/]]".

The simple, honest, life-affirming message starts with a keen observation on the first page, in the 
!!!Author's Note:
>[...] over many years I have learned that a lot of people choose vocations that they themselves can learn the most from.
(or as [[Joseph Joubert|https://en.wikipedia.org/wiki/Joseph_Joubert]]^^1^^ said: to teach is to learn twice)
>Many psychotherapists have a great need to heal their own psyche.
>Mediators tend to create conflicts in their personal lives.
>High energy motivational speakers [...]
>Business gurus who teach that egos are so often roadblocks [...]
>People who teach meditation and mindfulness tend to need to quiet their own minds.
>It is like healing your own wounds becomes the most important and inspiring contribution you can make to the world around you.
(echoes the conviction that "world peace" starts within)
>Then there are also the people who are natural teachers. They don't need to write books [wikis? :)] or tell their stories to big audiences. They just //are//. And through their beings they teach and inspire. Talking or writing about something is different from //being// that same thing. From being flows natural inspiration. No need for books or talks.
(yet here he is (and I'm thankful for his book!), and here I am (and I hope you benefit from this wiki))

!!!In his introduction, Kamp says:
>Media distort reality and breed pessimism. We need optimism for more health, happiness, and success. We need freedom //from// the press to get there.
He had been a journalist for many years and knows the media world from the inside. He, in effect, confirms that "bad news sells copies" (or in his words: "More bombs, more money").
I definitely believe him when he says:
>It all comes from a big misunderstanding. Somehow, somewhere, in the decades since World War II, we have started confusing telling stories -- informing the public -- with selling watches, cars, soda, and toothbrushes.
>Nowadays, publishers are supposed to target certain well-defined interest groups. In fact, each new media initiative starts by defining its audience. As much sense as that seems to make in today's money-driven world, it's unethical. Media's contributions should come from their stories, their content -- not their capacity to serve a certain audience and to attract money from advertisers.

In an interesting aside, Kamp makes the difference between journalism and social media very clear. He brings up a point sometimes argued by people:
>The press doesn't matter any more, some will argue. We have Twitter. News spreads instantly through social media. These days we are all journalists.
And then he compares:
>The evolution of social media certainly brings a lot of good. Whereas the average front page [in the press] is 90 percent frauds, floods, fires, murders, and diseases, research shows that what people share on social media is more positive than negative. The more positive an article, the more likely it's going to be shared, explains Jonah Berger, social psychologist at the University of Pennsylvania, in his book //Why Things Catch On//. And in an interview with the //New York Times// he said, "The 'if it bleeds' [it leads]" rule works for mass media that just want you to tune in. They want your eyeballs and don't care how you're feeling. But when you share a story with your friends and peers, you care a lot more how they react.
So the conclusion is:
>[...] friends care about each other, and tweeting and sharing tend to be more positive.
But, Kamp warns:
>[T]weeting is no journalism. Journalists are trained for years to write good news stories that cover all relevant angles. Good journalism is a trade. It should present and explain the news. It should investigate and discover. Media should always be on a quest for the truth. Social media should complement that, not replace it.
And a concrete analogy :)
>I've made bookcases for our home. Recently, we hired a carpenter to do the same. I could easily see the difference between his work and mine. I'm definitely not a carpenter.
He finally observes:
>Most news is not helpful to you. It interrupts your thinking. It stands in the way of creativity and the emergence of new ideas. As [[Rolf Dobelli wrote in the Guardian|http://www.theguardian.com/media/2013/apr/12/news-is-bad-rolf-dobelli]], "if you want to come up with old solutions, read news. If you are looking for new solutions, don't".
>So often what is presented as //news// is really //olds//. It is not about innovation, breakthroughs, solutions, or new insights. In short, it's not optimistic. It's about sad repetitions of unfortunate events that don't support and enrich your life. That's very pessimistic.

!!!In chapter 1 - The Best way to Live
Kamp takes a closer look at optimism.
>Optimism doesn't mean denying reality. According to the dictionary, the everyday meaning of //optimism// is "hopefulness and confidence about the future or the success of something".
>But the root of the word comes from Latin (//optimum//) and the more precise definition of optimism is "the doctrine that this world is the best of all possible worlds."
>Optimism is a fundamental attitude. It's not an opinion about reality; it's a starting point for dealing with reality.
and he gives a quote I like:
>If there's no solution, then there's no problem.

In this chapter Kamp also quotes from M. Scott Peck's excellent book //The Road Less Traveled//:

[img[The Road Less Traveled|./resources/Kamp_on_Scott_small.jpg]]

And as a true optimist, he observes:
> There is no bad weather, only inappropriate clothing.

!!!In chapter 3 - How to become an (even better) optimist
Kamp shows the following black-and-white image and asks the reader:

Do you see a horseman coming towards you, or riding away from you?

[img[horseman|./resources/horse_small.jpg]]

>If you saw the horseman coming to you, you tend to have a more optimistic mindset. If you saw the horseman riding away from you, you tend to be more of a pessimist.

!!!In chapter 4 - The world is a better place than you think
Kamp brings up a few common views promoted and propagated in the media, and argues that they are really myths. He brings data to support the claims that:
* We live in the most peaceful era ever (based on data from the book [[The Better Angels of Our Nature|http://stevenpinker.com/publications/better-angels-our-nature]] by Steven Pinker)
* Overpopulation is a myth ("we could comfortably fit all 7 billion people in a country the size of Texas", on average, one family of 4 on a tenth of an acre garden lot :)
* We can feed all these people (the problem is food distribution, not production)
* We are living longer
* Democracy is spreading
* We have more free time (according to the OECD - Organization for Economic Cooperation and Development)
* We are getting richer
* Natural resources abound
* Our food is safer
* Racism is on the decline

!!!In chapter 5 - The best is yet to come
Kamp lists some possible (positive!) developments in the future, with the caveat that it is hard to predict (especially the future :)
In the future we may have:
* Growth and energy for all
* Sustainable abundance
* Money that serves people and society
* Meditation for peace (better mindfulness, calming, concentration)
* Mind over matter
* Food which is wholesome
We will be able to
* create our own reality (nano- and bio-technologies)
* change our personalities (neuro-therapy)

and for more of "the good stuff" see [[The Optimist|http://www.theoptimist.com/]]



----
^^1^^ - Jurriaan Kamp seems to be very much aligned with Joseph Joubert, judging from a few quotes from the latter:
* on the control we usually have over the choices we make: 
** "Misery is almost always the result of thinking."
** "All gardeners live in beautiful places because they make them so."
* on choosing to look at the bright side (ha!) of things: "When my friends are one-eyed, I look at them in profile."
* on living with intention/purpose: "The mind's direction is more important than its progress."
Above the mountains
the geese turn into
the light again

Painting their
black silhouettes
on an open sky.

Sometimes everything
has to be
inscribed across
the heavens

so you can find
the one line
already written
inside you.

Sometimes it takes
a great sky
to find that

first, bright
and indescribable
wedge of freedom
in your own heart.

Sometimes with
the bones of the black
sticks left when the fire
has gone out

someone has written
something new
in the ashes of your life.

You are not leaving.
Even as the light fades quickly now,
you are arriving.


----
Compare to [[The Journey - Mary Oliver]]
    One day you finally knew

    what you had to do, and began,

    though the voices around you

    kept shouting

    their bad advice–

    though the whole house

    began to tremble

    and you felt the old tug

    at your ankles.

    “Mend my life!”

    each voice cried.

    But you didn’t stop.

    You knew what you had to do,

    though the wind pried

    with its stiff fingers

    at the very foundations,

    though their melancholy

    was terrible.

    It was already late

    enough, and a wild night,

    and the road full of fallen

    branches and stones.

    But little by little,

    as you left their voices behind,

    the stars began to burn

    through the sheets of clouds,

    and there was a new voice

    which you slowly

    recognized as your own,

    that kept you company

    as you strode deeper and deeper

    into the world

    determined to do

    the only thing you could do–

    determined to save

    the only life you could save.



----
compare to [[The Journey - David Whyte]]
As a Citizen Teacher teaching a course at [[McNair Middle School|http://www.citizenschools.org/california/about/locations/]] (part of the [[Citizen Schools|http://www.citizenschools.org/california/]]), I am currently teaching a new course I've developed, called "Simplexity - simple rules leading to complex behavior".
I'll cover this course after I finish the 10 week/lesson class, but you can see the [[course outline, lesson plans, and student activities/programs|http://employees.org/~hmark/courses/amazingmazes/index.html]].

One of the modules in this course (the modular design enables me to reuse parts/modules elsewhere), is about [[One Dimensional Cellular Automata (1D CA)|Cellular Automaton Rule 110]], which leads to another module on 2D ~CAs (a-la [[Conway's Game Of Life|http://en.wikipedia.org/wiki/Conway%27s_Game_of_Life]]). And this leads to a third module on Fractals and Fractal Curves, one of which is the [[Jurassic Park Dragon|http://en.wikipedia.org/wiki/Dragon_curve]] ([[as shown|resources/jurassic-park-book-dragon.jpg]] in Michael Crichton's book).

''A low-tech way of creating the dragon^^1^^''
Start with a narrow strip of paper, say about 1" by 11". Construction paper is good because it has a little more stiffness, but plain paper works too. Take the strip and fold the paper over end-to-end, right hand onto left, and crease. Now fold it again in the same way for the second time, right onto left, and crease, and again (3^^rd^^ time), right onto left and crease, and again (4^^th^^), right onto left and crease, and finally, one more time (5^^th^^). 

So for the first 5 generations, after straightening the folds and creating 90 degree turns (as much as you can with paper) you get:
|borderless|k
|On the right is the Jurassic Park Dragon^^1^^ after 5 generations.|[img[Fractal Dragon 5 generations|./resources/dragon-fractal-5-gens-small.png][./resources/dragon-fractal-5-gens.png]]|On the right is the step by step construction and "string representation" of 5 generations. R=right turn, L=left turn|[img[Dragon Fractal generations|resources/dragon-fractal-generations-small.png][resources/dragon-fractal-generations-big.png]]|
|borderless|k

[>img[Dragon Fractal generations|resources/dragon-fractal-generations-rules-small.png][resources/dragon-fractal-generations-rules.png]]
Cynthia Lanius, formerly at Rice University, explains the [[steps and rules for evolving this dragon|http://math.rice.edu/~lanius/frac/code.html]], which boil down to this:
Rule 1. The middle entry in the table is always a right (R).
Rule 2. For the next generation, the entries to the left of the middle are the entries of the table before (i.e. copied, as-is).
Rule 3. The entries to the right of the middle are opposites and mirror-images of the ones to the left (i.e., every R is replaced by an L and every L by an R, with the left-most entry determining the right-most entry, moving in toward the middle).
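Here is a minimal Python sketch of these three rules (my own illustration, assuming R/L strings as in the table):
{{{
def next_gen(g):
    # left half: previous generation copied as-is (Rule 2);
    # middle: always R (Rule 1);
    # right half: previous generation reversed, with R/L swapped (Rule 3)
    flip = {'R': 'L', 'L': 'R'}
    return g + 'R' + ''.join(flip[c] for c in reversed(g))

g = 'R'                # generation 1: a single right turn
for _ in range(2):     # generations 2 and 3
    g = next_gen(g)
print(g)               # RRLRRLL
}}}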










[>img[Dragon Fractal generations|resources/dragon-fractal-generations-rules-2-small.png][resources/dragon-fractal-generations-rules-2.png]]
Another way to formulate the rules for evolving this dragon is described on [[Joel Castellanos's site|http://cs.unm.edu/~joel/PaperFoldingFractal/chasing.html]] (I switched his Lefts and Rights to match the above):
Rule 1. The middle entry in the table is always a right (R) (identical to Lanius above).
Rule 2. For the next generation, start from the left, inserting an R before (to the left of) the first letter, and then insert a letter after every letter of the previous generation, //alternating// between inserting an L and an R.
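And a sketch of this alternating-insertion formulation (again my own illustration); both produce the same sequence:
{{{
from itertools import cycle

def next_gen(g):
    # Rule 2: an R before the first letter, then after each old letter
    # insert alternately L, R, L, R, ...
    out, ins = ['R'], cycle('LR')
    for old in g:
        out.append(old)
        out.append(next(ins))
    return ''.join(out)

print(next_gen('RRL'))  # RRLRRLL, same as the mirror-image rules
}}}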









!!!!So, is there a way to create the Dragon by a set of Elementary 1-Dimensional Cellular Automaton rules?
[>img[Dragon Fractal generations|resources/dragon-fractal-generations-no-rules-small.png][resources/dragon-fractal-generations-no-rules.png]]
There are a few problems with trying to do that.
1. If you look at the rows/generations in the image on the right, you can see that there are actually 3 values that each cell can take (L, R, blank), as opposed to two in a 1D CA (on, off).
2. The value of a cell in the next generation is not //consistently// determined by its 3-cell neighborhood in the current generation. For example, the central cell in generation 3 (R) is determined by its 3-cell neighborhood in gen. 2, which is RRL. But an RRL neighborhood in gen. 3 results in an L in gen. 4 in one instance, and in an R in another instance.






!!!!But, is there a way to generate the Dragon sequence using a Turing Machine?
[>img[Dragon Fractal as a Turing Machine|resources/Dragon_Turing_small.jpg][resources/Dragon_Turing.jpg]]It turns out that you can generate the Dragon Curve sequence without generating/knowing the previous generation of the Dragon, using a Turing Machine consisting of five rules.

As you may know, a Turing Machine has an infinite paper tape, where a reading/writing "head" can read from and write to, based on following certain rules.
Our simple Turing Machine in this case will have a reading head starting to read at position 1 of a paper tape, with only the number 1 written on it. For simplicity, the Turing Machine will have a writing head, initially positioned at position 2 of the tape, ready to write something down, based on the following rules:
* If the reading head reads 0, the writing head writes 0 followed by 1
* If it reads 1, it writes 3 followed by 2
* If it reads 2, it writes 4 followed by 2
* If it reads 3, it writes 3 followed by 1
* If it reads 4, it writes 4 followed by 1
Note that the reading head reads one symbol from the tape at the position it is on, then the writing head writes the symbols on the tape at the position it is on, per the applicable rule.

So, if the tape has the symbol 1 at its first position, the reading head is at position 1, and the writing head is at position 2, then after one iteration we get a tape which looks like this:
* 132	(the first/left 1 is the initial symbol on the tape. The reading head reads it, and following the second rule above, the writing head, positioned at position 2 (initially empty, with no symbols on the tape), writes 3 followed by 2, resulting in 1 3 2.)
After this, the reading head moves to the next position on the tape (looking at the 3 which has just been written by the writing head), while the writing head moves to position 4 (an empty space on the tape, to the right of the last symbol (2) it had just written).

The next iteration will result in a tape looking like:
* 13231
after which the reading head will now be on the symbol 2, and the writing head will be on the empty space to the right of the rightmost 1

The next reading/writing iteration (reading the 2 and writing 42 as a result) generates:
* 1323142

The Turing Machine can go on forever following the rules and generating the tape shown in the image above.

In order to get the Dragon sequence, you need to take the tape and interpret it the following way:
* Replace each even number on it with a 0, and each odd number with a 1
** So what you get for the sequence 1323142 is: 1101100
* Interpret each 0 as a Left turn (L), and every 1 as a Right turn (R)
** So what you get is: RRLRRLL
Which is the sequence of the first few foldings (Generation 3) for the Dragon as we have seen above.
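Here is a minimal Python sketch of the whole procedure, tape generation plus the even/odd interpretation (the code is my own illustration of the rules above):
{{{
# one entry per symbol; note the rule for 0 is listed but never triggered,
# since a 0 never actually appears on the tape
RULES = {'0': '01', '1': '32', '2': '42', '3': '31', '4': '41'}

def dragon_turns(n_symbols):
    tape = ['1']                         # the tape starts with a single 1
    read = 0                             # reading head at position 1
    while len(tape) < n_symbols:
        tape.extend(RULES[tape[read]])   # writing head appends per the rules
        read += 1                        # reading head moves one cell right
    # even digit -> 0 -> Left turn; odd digit -> 1 -> Right turn
    return ''.join('R' if int(d) % 2 else 'L' for d in tape[:n_symbols])

print(dragon_turns(7))   # RRLRRLL, generation 3 of the fold sequence
}}}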




----
^^1^^ snapshot of the [[fractal applet|http://employees.org/~hmark/courses/simplexity/FractalGrower.jnlp]] originally from [[Joel Castellanos's Fractal Grower site|http://cs.unm.edu/~joel/PaperFoldingFractal/paper.html]], which is a ''high-tech (Java Applet) way of creating the Dragon''

[img[Dragon Fractal 10 generations|resources/dragon-10-gen.png][resources/dragon-10-gen.png]][>img[Dragon Fractal 14 generations|resources/dragon-14-gen.png][resources/dragon-14-gen-big.png]]

The Dragon after 10 generations (left) and 14 generations (right).
Mid-way through [[teaching a course|The "Acing Racing" course]] at Citizen Schools, I suggested that the school principal look into using the [[Khan Academy|http://www.khanacademy.org/about]] as a math remediation/augmentation tool for the kids who either struggle with basic math concepts and techniques, or who can and want to push forward.

I especially like the knowledge map of all the math topics and their dependencies, the fact that kids can progress at their own pace, and that they have to show mastery (do 10 exercises in a row correctly at each level) before moving to another topic.
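A toy Python sketch of that mastery-gating idea (my own illustration, not Khan Academy's actual implementation; the topics and numbers are made up):
{{{
# topics and their prerequisites form a small dependency graph
PREREQS = {'addition': [], 'subtraction': ['addition'],
           'multiplication': ['addition'], 'division': ['multiplication']}
STREAK_TO_MASTER = 10   # ten correct answers in a row

def unlocked(topic, mastered):
    """A topic opens only once all its prerequisites are mastered."""
    return all(p in mastered for p in PREREQS[topic])

def update_streak(streak, correct):
    return streak + 1 if correct else 0   # any miss resets the streak

print(unlocked('division', mastered={'addition', 'multiplication'}))  # True
}}}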

I found it interesting that a few key capabilities implemented in the Khan Academy tutoring system were principles and features I introduced into an on-line tutoring system I had designed as part of [[my studies at Stanford (in the Learning, Design, and Technology MA program)|http://ldtprojects.stanford.edu/~hmark/]].

In that system, I represented the knowledge domain as a [[topic map|./resources/LDTtutoringSystemTopicMap1.png]], with topics and relationships indicating dependencies and levels of difficulty.
Associated with each topic in the topic map, were [[banks of questions/exercises|./resources/LDTtutoringSystemQuestionBanks1.png]] that the learner had to master, before moving to the next level/topic.

Since proficiency is very important, [[the Khan Academy system tracks and monitors the kids' activities, progress and mastery|./resources/KhanAcademyPerformanceMonitor1.png]], and allows the kids and their mentor/coach/teacher/parent to see exactly what they did, when, how, etc.

This way of monitoring reminds me very much of a similar capability I designed into the on-line tutoring system, where the system tracked and reported via [[a virtual 3D monitor|./resources/LDTtutoringSystemMonitor1.png]] what topics the learner covered (1st dimension - domain knowledge), when and how long it took (the 2nd dimension - time), and how they performed (the 3rd dimension - mastery level).

One key feature (and selling point?) of the Khan Academy platform is the large library of tutorial videos. I always had mixed feelings about their effectiveness and contribution to student learning, and I got some validation from [[a report on a pilot of blended learning|resources/lessons-learned-from-a-blended-learning-pilot4.pdf]]:
>''Value of the Videos''
>A final interesting perspective on Khan involves the value of the site’s videos. Most people are drawn to Khan based on its massive video library and Sal’s own charming and engaging teaching style. Like many, we assumed the videos would be the predominant learning mechanism for students tackling new material. In fact, the students rarely watched the videos. This result is consistent with some of the observations in the Los Altos pilot. The students greatly preferred working through the problem sets to watching the videos. Students turned to their peers, the hint, and the classroom teacher much more often than they did the linked Khan video. 
>One possible reason is that the videos are aligned to the broader concept, but do not link directly to the problem students are struggling with. A second hypothesis is that the videos may be too long at eight to ten minutes. If students have 60-90 minutes to work through multiple concepts in a class period, an investment of ten minutes for a single video feels like a lot. The badges and stars within Khan may also be a disincentive, as there is no immediate reward for watching videos as there is when completing streaks. Lastly, we wonder how many of us really enjoy watching instructional videos for extended periods of time. We are left curious about whether Khan’s videos need to be even more modular and shorter in duration and also about the value of video based instruction. 
We all know (and some of us heard in person :) Neil Armstrong's ([[1930-2012|http://www.theatlantic.com/photo/2012/08/neil-armstrong-1930-2012/100359/]]) famous words upon landing and stepping on the Moon (or at least [[we think we know|http://www.ndtv.com/world-news/is-neil-armstrongs-famous-moon-landing-quote-really-a-misquote-524458]]^^1^^),

(see [[Moon myth and reality]]),

[[but|http://www.space.com/18910-apollo-17-anniversary-men-left-moon.html]],

On December 14, 1972, at 5:54:37 p.m. EST, humans left the Moon for what would turn out to be the last time.
The commander of the Apollo 17 mission, Gene Cernan, said:

> As I take man's last step from the surface, back home for some time to come (but we believe not too long into the future), I'd like to just say what I believe history will record: That America's challenge of today has forged man's destiny of tomorrow. And, as we leave the Moon at ~Taurus-Littrow, we leave as we came and, God willing, as we shall return: with peace and hope for all mankind.

But, according to Apollo 7 astronaut Walter Cunningham in his book //The ~All-American Boys//, Cernan's final words on the Moon were: "Let's get this mother out of here." (Or as [[Miles O'Brien spells it|http://boingboing.net/2012/12/14/we-left-the-moon-40-years-ago.html]], awesomely, "let's get this mutha.")

But, but, the "outta here" thing is likely, alas, apocryphal [i.e., of doubtful authorship or authenticity]. According to NASA's official transcript of Apollo 17's return to Earth, what Cernan actually said last was, in part, a response to a malfunction his fellow astronaut, Jack Schmitt, was encountering with a camera: "Now, let's get off. Forget the camera."

From [[The Atlantic Magazine|http://www.theatlantic.com/technology/archive/2012/12/what-were-the-last-words-spoken-on-the-moon/266287/]]


----
^^1^^ - see [[GD local copy|https://docs.google.com/document/d/1PAsseF0KS2ka3L6CbT3b6krwXcY3688JAJRy5RkAPHA/edit?usp=sharing]]
>[...] it is a strange thing that most of the feeling we call religious, most of the mystical outcrying which is one of the most prized and used and desired reactions of our species, is really the understanding and the attempt to say that man is related to the whole thing, related inextricably to all reality, known and unknowable. This is a simple thing to say, but the profound feeling of it made a Jesus, a St. Augustine, a St. Francis, a Roger Bacon, a Charles Darwin, and an Einstein. Each of them in his own tempo and with his own voice discovered and reaffirmed with astonishment the knowledge that all things are one thing and that one thing is all things - plankton, a shimmering phosphorescence on the sea and the spinning planets and an expanding universe, all bound together by the elastic string of time. It is advisable to look from the tide pool to the stars and then back to the tide pool again.

(another [[beautiful quote|The Log from the Sea of Cortez - one-ness]] from The Log from the Sea of Cortez)
>It seems apparent that species are only commas in a sentence, that each species is at once the point and the base of a pyramid, that all life is relational to the point where an Einsteinian relativity seems to emerge. And then not only the meaning but the feeling about species grows misty. One merges into another, groups melt into ecological groups until the time when what we know as life meets and enters what we think of as non-life: barnacle and rock, rock and earth, earth and tree, tree and rain and air. And the units nestle into the whole and are inseparable from it... It is advisable to look from the tide pool to the stars and then back to the tide pool again.

I suspect it's //not// a coincidence that the fluidity that Steinbeck describes here is reflected in human associativity or free association, [[captured so simply and vividly by Aristotle|Free association]]. It really makes one wonder what is causing what: is the inter-relatedness of the universe causing us to perceive associatively, or is our associative perception creating (//creating?!//) an inter-related universe? ([[another strange loopiness|http://en.wikipedia.org/wiki/I_Am_a_Strange_Loop]] a-la Douglas Hofstadter?)

(another [[beautiful quote|The Log from the Sea of Cortez - awe]] from The Log from the Sea of Cortez)
One of the interesting and successful [[collaborations in math, between G.H. Hardy and S. Ramanujan|http://www.storyofmathematics.com/20th_hardy.html]] (from 1914 to 1919) is captured in a moving film titled //The Man Who Knew Infinity// (based on a book by Robert Kanigel, starring Jeremy Irons and Dev Patel).

Hardy was a (devout :) atheist, whereas Ramanujan was a believer, who claimed that he got his mathematical ideas and inspirations from a "divine source". Hardy found this very hard to deal with, and constantly insisted that Ramanujan develop rigorous proofs of his "miraculous insights". (It turns out Hardy was right to insist on proofs - Ramanujan's divine insight about a certain theorem on prime numbers was erroneous; he had been brilliantly right about other theorems/insights, though.)

At one point in the movie, Ramanujan tells Hardy that his math theories and ideas are significant and meaningful to him only because he believes they come from God (gods).

This brings up the question of whether math is discovered by humans or invented by them. This is obviously an open philosophical question, but it reminds me of the [[insightful response of Mario Livio|Is Math a human invention or a series of discoveries of truths in the real world?]], who basically said that he believes that:
- humans __abstract__ the basic math concepts (such as number, line, set, etc.) from reality - so there is an __"invention"__ element in this (and this attests to //both// human greatness and human limits, since we have to abstract both to gain new knowledge and to avoid drowning in the overwhelming torrent of details that reality hits us with)
- then, we __"discover"__ relationships between, and (human-scale) significance in, these concepts.
So math has //both// elements of invention //and// discovery.

This echoes [[David Darling's view|The relationship between the world out there and what's inside our mind]] that we combine perception and classification of "things out there" with mental processes in the mind as our natural/inherent mode of living and surviving.

How Ramanujan was able to invent and discover his ideas and whether the "invocation of god" is necessary/sufficient is a good question.

I don't think that failing to understand the process by which Ramanujan was able to come up with his ideas forces us to assume "divine intervention". As Carl Sagan said: [[Absence of evidence is not evidence of absence. Neither is it evidence of presence.]]
So, in my mind, the last part of the above quote (which is often forgotten) says that not knowing the answer is not "the end of the road", but rather an exciting invitation to try to figure out/research the causes, and not to "close the door" by invoking divinity as the ultimate/satisfactory answer.

From the beautifully decorated book, full of math-inspired/generated graphics and pictures, here is a collection of excerpts and quotes "celebrating the wisdom and beauty of mathematics" (with some comments and "action items" for me :):

* The trouble with integers is that we have examined only the small ones. Maybe all the exciting stuff happens at really big numbers, ones we can't get our hands on, or even begin to think about in any very definitive way. So maybe all the action is really inaccessible and we're just fiddling around. Our brains have evolved to get us out of the rain, find where the berries are, and keep us from getting killed. Our brains did not evolve to help us grasp really large numbers or look at things in a hundred thousand dimensions.
** Ronald Graham, quoted in Paul Hoffman's [[The Man Who Loved Only Numbers|https://cs.brynmawr.edu/Courses/cs231/fall2013/lecs/erdos.pdf]], in The Atlantic, 1987 (and [[Hoffman's "marvelous, vivid, and strangely moving" full book|https://bobson.ludost.net/copycrime/35559997-Man-Who-Loved-Only-Numbers-Paul-Hoffman.pdf]])
*** this may be true to a certain extent, but should be revised in light of the work which has been done on "monstrous" prime numbers - [[TED talk by Adam Spencer|https://www.youtube.com/watch?v=B4xOFsygwr4]]
*** one should also temper the above statement in light of [[the work of Georg Cantor|https://www.scientificamerican.com/article/strange-but-true-infinity-comes-in-different-sizes/]] on the variety of infinities (where some are simply larger than others).

* Archimedes will be remembered when Aeschylus is forgotten, because languages die and mathematical ideas do not. 'Immortality' may be a silly word, but probably a mathematician has the best chance of whatever it may mean.
** G. H. Hardy, [[A Mathematician's Apology|https://www.math.ualberta.ca/mss/misc/A%20Mathematician%27s%20Apology.pdf]], 1941
*** Aeschylus was a Greek playwright, often described as the father of tragedy.

* The Gedemondan chuckles. 'We read probabilities. You see, we see - perceive is a better word - the math of the Well of Souls. We feel the energy flow, the ties and bands, in each and every particle of matter and energy. All reality is mathematics, all existence - past, present, and future - is equations.'
** Jack Chalker, [[Quest for the Well of Souls|https://en.wikipedia.org/wiki/Quest_for_the_Well_of_Souls]], 1978
*** check out the book

* One sign of an interesting program is that you cannot readily predict its output.
** Brian Hayes, //On the Bathtub Algorithm for ~Dot-Matrix Holograms//, Computer Language, 1986
*** check: is this //the// Brian Hayes of Hayes Modems fame/lore?
*** were there any randomly generated programs/artforms in 1986?
**** how about the Mandelbrot Fractal and the Julia Set?

* While the equations represent the discernment of eternal and universal truths, however, the manner in which they are written is strictly, provincially human. That is what makes them so much like poems, wonderfully artful attempts to make infinite realities comprehensible to finite beings.
** Michael Guillen, //Five Equations That Changed the World//, 1996
*** great quote about the beauty and poetry in math
*** check out the book

* The generation of random numbers is too important to be left to chance.
** The title of Robert Coveyou's paper, which appeared in //Studies in Applied Mathematics//, 1969
*** read the paper

* Applications, computers, and mathematics form a tightly coupled system yielding results never before possible and ideas never before imagined.
** Lynn Arthur Steen, //The Science of Patterns//, Science, 1988
*** read the article
*** examples of the "tight couplings" he had in mind?

* Science is not about control. It is about cultivating a perpetual condition of wonder in the face of something that forever grows one step richer and subtler than our latest theory about it. It is about reverence, not mastery.
** Richard Powers, //The Gold Bug Variations//, 1991
*** check it out
*** great quote about the nature of science and the beauty of it

* It's like asking why Beethoven's Ninth Symphony is beautiful. If you don't see why, someone can't tell you. I know numbers are beautiful. If they aren't beautiful, nothing is.
** Paul Erdos, quoted in Paul Hoffman's [[The Man Who Loved Only Numbers|https://cs.brynmawr.edu/Courses/cs231/fall2013/lecs/erdos.pdf]], in The Atlantic, 1987 (and [[Hoffman's "marvelous, vivid, and strangely moving" full book|https://bobson.ludost.net/copycrime/35559997-Man-Who-Loved-Only-Numbers-Paul-Hoffman.pdf]])
*** compare with the quote by Kurt Vonnegut quoting his uncle: //If this isn't nice, what is?//
*** I have the book; check out the Atlantic article

* In principle ... [the Mandelbrot Set] could have been discovered as soon as men learned to count. But even if they never grew tired, and never made a mistake, all the human beings who have ever existed would not have sufficed to do the elementary arithmetic required to produce a Mandelbrot Set of quite modest magnification.
** Arthur C. Clarke, //The Ghost from the Grand Banks//, 1990
*** on the power and insights made possible by computers
*** check out the book
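Clarke's point is about the sheer bulk of elementary arithmetic behind even a modest picture of the set. A minimal escape-time sketch (my own illustration of the standard z = z*z + c iteration, not anything from Clarke's book) makes the cost visible - every character printed below is the result of up to fifty complex multiplications:
{{{
# Minimal Mandelbrot escape-time sketch (Python): ASCII render of the set.
# Each printed character costs up to 50 iterations of z = z*z + c --
# the "elementary arithmetic" Clarke says no army of human computers
# could have carried out by hand.

def escape_time(c, max_iter=50):
    """Number of iterations of z = z*z + c before |z| exceeds 2."""
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return n
    return max_iter

for row in range(24):
    y = 1.2 - row * 0.1
    line = ""
    for col in range(72):
        x = -2.2 + col * 0.04
        line += " .:-=+*#%@"[min(escape_time(complex(x, y)) // 5, 9)]
    print(line)
}}}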

* Mathematicians study structure independent of context, and their science is a voyage of exploration through all the kinds of structure and order which the human mind is capable of discerning.
** Charles Pinter, //A Book of Abstract Algebra//, 1982
*** check out the book

* The bottom line for mathematicians is that the architecture has to be right. In all the mathematics that I did, the essential point was to find the right architecture. It's like building a bridge. Once the main lines of the structure are right, then the details miraculously fit. The problem is the overall design.
** Freeman Dyson, interview with Donald J. Albers, //The College Mathematics Journal//, 1994
** speaking like a true computer designer/architect
** I can also relate to the truth and beauty of this from personal experience: the sense that once you have a good (the right? is there only one?) framework, everything seems to snap together nicely.

* Now one may ask, "What is mathematics doing in a physics lecture?" We have several possible excuses: first, of course, mathematics is an important tool, but that would only excuse us for giving the formula in two minutes. On the other hand, in theoretical physics we discover that all our laws can be written in mathematical form; and that this has a certain simplicity and beauty about it. So, ultimately, in order to understand nature it may be necessary to have a deeper understanding of mathematical relationships. But the real reason is that the subject is enjoyable, and although we humans cut nature up in different ways, and we have different courses in different departments, such compartmentalization is really artificial, and we should take our intellectual pleasures where we find them.
** Richard Feynman, //[[The Feynman Lectures on Physics|http://www.feynmanlectures.caltech.edu/]]// 1963
** here Feynman echoes a deep assumption that math is at the foundation of nature. Is this math only formula- and equation-based? It'd be interesting to compare/contrast this math with Wolfram's "New Science" concept, with its Cellular Automata (CA) based rules and behaviors. But then, this could be "subsumed" under "New Math" of course :)

* There are tetradic, pandigit, and prime-factorial + 1 primes. And there are Cullen, multifactorial, beastly palindrome, as well as anti-palindrome primes. Then add to these the strobogrammatic, subscript, internal repdigit, and elliptic primes. In fact a whole new branch of mathematics is evolving that deals specifically with the attributes of the various kinds of prime numbers. Yet understanding primes is only part of our quest to fully understand the number sequence and all of its delightful peculiarities.
** Calvin Clawson, //Mathematical Mysteries//, 1996
** [[Clawson is quoted and primes are discussed in the Journal of International Transdisciplinary Psychology|http://transdisciplinarypsych.org/2012/04/20/prime-numbers-enigma-variations-and-arthur-koestlers-window-on-infinity/]]
** check out the book
** check the types of primes and numbers mentioned

* We can imagine that this complicated array of moving things which constitutes “the world” is something like a great chess game being played by the gods, and we are observers of the game. We do not know what the rules of the game are; all we are allowed to do is to watch the playing. Of course, if we watch long enough, we may eventually catch on to a few of the rules. The rules of the game are what we mean by fundamental physics. Even if we knew every rule, however, we might not be able to understand why a particular move is made in the game, merely because it is too complicated and our minds are limited. If you play chess you must know that it is easy to learn all the rules, and yet it is often very hard to select the best move or to understand why a player moves as he does. So it is in nature, only much more so; but we may be able at least to find all the rules. Actually, we do not have all the rules now. (Every once in a while something like castling is going on that we still do not understand.) Aside from not knowing all of the rules, what we really can explain in terms of those rules is very limited, because almost all situations are so enormously complicated that we cannot follow the plays of the game using the rules, much less tell what is going to happen next. We must, therefore, limit ourselves to the more basic question of the rules of the game. If we know the rules, we consider that we “understand” the world.
** Richard Feynman, //[[The Feynman Lectures on Physics|http://www.feynmanlectures.caltech.edu/I_02.html]]// 1963
** Feynman makes the analogy between physics and math, and chess and rules. Are the rules necessarily expressed in math formulas/equations?

* No one really understood music unless he was a scientist, her father had declared, and not just a scientist, either, oh, no, only the real ones, the theoreticians, whose language was mathematics. She had not understood mathematics until he had explained to her that it was the symbolic language of relationships. 'And relationships,' he had told her, 'contained the essential meaning of life.'
** Pearl S. Buck, //The Goddess Abides//, 1973
** check out the book

* Almost your entire mathematical life has been spent on the real line and in real space working with real numbers. Some have dipped into complex numbers, which are just the real numbers after you throw in i. Are these the only numbers that can be built from the rationals? The answer is no. There are entire parallel universes of number that are totally unrelated to the real and complex numbers. Welcome to the world of p-adic analysis - where arithmetic replaces the tape measure and numbers take on a whole new look. Here we will explore this new notion of number and discover its impact on arithmetic, geometry, and calculus. It turns out that p-adic analysis not only dramatically simplifies many mathematical areas but also provides a powerful tool for analyzing number theoretic issues.
** Professor Edward Burger, //[[Exploring p-adic Numbers|https://www.math.ksu.edu/courses/syllabi/math-old/burger.htm]]//, 2013
** check out his book: Exploring the Number Jungle: A Journey into Diophantine Analysis by Edward B. Burger, American Mathematical Society, 2000
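For a first taste of that "whole new look", here is a tiny sketch (my own illustration, not from Burger's course or book) of the p-adic absolute value, under which a number counts as "small" when it is divisible by a high power of p:
{{{
from fractions import Fraction

def p_adic_valuation(x: Fraction, p: int) -> int:
    """Largest k with p**k dividing x (negative if p divides the denominator)."""
    if x == 0:
        raise ValueError("the valuation of 0 is +infinity")
    v, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def p_adic_abs(x: Fraction, p: int) -> Fraction:
    """The p-adic absolute value |x|_p = p**(-v_p(x))."""
    if x == 0:
        return Fraction(0)
    v = p_adic_valuation(x, p)
    return Fraction(1, p ** v) if v >= 0 else Fraction(p ** (-v))

print(p_adic_abs(Fraction(48), 2))    # 1/16: divisible by 2**4, so 2-adically tiny
print(p_adic_abs(Fraction(1, 3), 3))  # 3: a 3 in the denominator is 3-adically large
}}}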

* At this point an enigma presents itself which in all ages has agitated inquiring minds. How can it be that mathematics, being after all a product of human thought which is independent of experience, is so admirably appropriate to the objects of reality? Is human reason, then, without experience, merely by taking thought, able to fathom the properties of real things? In my opinion the answer to this question is, briefly, this: As far as the laws of mathematics refer to reality, they are not certain; and as far as they are certain, they do not refer to reality.
** Albert Einstein in his lecture //[[Geometry and Experience|http://todayinsci.com/E/Einstein_Albert/Einstein-GeometryAndExperience.htm]]// at the Prussian Academy of Science in Berlin, 1921

* [[Gödel proved|resources/Boolos-godel-in-single-syllables.pdf]] that the world of pure mathematics is inexhaustible; no finite set of axioms and rules of inference can ever encompass the whole of mathematics; given any set of axioms, we can find meaningful mathematical questions which the axioms leave unanswered. I hope that an analogous situation exists in the physical world. If my view of the future is correct, it means that the world of physics and astronomy is also inexhaustible; no matter how far we go into the future, there will always be new things happening, new information coming in, new worlds to explore, a constantly expanding domain of life, consciousness, and memory.
** Freeman Dyson, //[[Time without End: Physics and Biology in an Open Universe|http://www.aleph.se/Trans/Global/Omega/dyson.txt]]//, 1979
** also quoted in //[[Gödel and Physics|http://arxiv.org/pdf/physics/0612253.pdf]]//

* Nothing comforted Sabine like long division. That was how she had passed time waiting for Phan and then Parsifal to come back from their tests. She figured the square root of the date while other people knit and read. Sabine blamed much of the world's unhappiness on the advent of calculators.
** Ann Patchett, //The Magician's Assistant//, 1998 ([[chapter 1 in the New York Times|https://www.nytimes.com/books/first/p/patchett-magician.html]])
** check out the book

* I abandoned the assigned problems in standard calculus textbooks and followed my curiosity. Wherever I happened to be - a Vegas casino, Disneyland, surfing in Hawaii, or sweating on the elliptical in Boesel's Green Microgym - I asked myself, "Where is the calculus in this experience?”
** Jennifer Ouellette, //The Calculus Diaries: How Math Can Help You Lose Weight, Win in Vegas, and Survive a Zombie Apocalypse//, 2010
** check out the book
** as a [[science writer, Jennifer Ouellette|http://twistedphysics.typepad.com/]] has [[insightful observations|Beginner's Mind when teaching]] about communicating science (and teaching)

* Like works of literature, mathematical ideas help expand our circle of empathy, liberating us from the tyranny of a single, parochial point of view. Numbers, properly considered, make us better people.
** Daniel Tammet, //Thinking In Numbers: On Life, Love, Meaning, and Math//, 2013
** From [[Brainpickings|https://www.brainpickings.org/2013/08/05/daniel-tammet-thinking-in-numbers/]]: Daniel Tammet was born with an unusual mind - he was diagnosed with high-functioning autistic savant syndrome, which meant his brain’s uniquely wired circuits made possible such extraordinary feats of computation and memory as learning Icelandic in a single week and reciting the number pi up to the 22,514th digit. He is also among the tiny fraction of people diagnosed with synesthesia — that curious crossing of the senses that causes one to “hear” colors, “smell” sounds, or perceive words and numbers in different hues, shapes, and textures.

* It would be very discouraging if somewhere down the line you could ask a computer if the Riemann hypothesis is correct and it said, "yes, it is true, but you won't be able to understand the proof."
** Ronald Graham, //The Death of Proof//, Scientific American, 1993
** [[Ron Graham|http://www-history.mcs.st-and.ac.uk/Biographies/Graham.html]] is a well-known mathematician (one claim to fame of his is his [[Erdos number|https://en.wikipedia.org/wiki/Erd%C5%91s_number]]: 1, as a frequent co-author of Erdos). But I find it hard to conceive that computer programs will be able to generate proofs which are not human-conceivable. Math is a human invention (I believe it's ''not'' a discovery), and computers are too. It will take some sort of "meta-cognition" on the computer's part to ''truly'' infer (as opposed to just "say"/print) that humans will not understand something. A computer/program will really need to "understand" in order to say that.
** check out the article

* //A History of Pi//, Petr Beckmann, 1976
|borderless|k
|[img[Pi Spiral|resources/pi1pic.jpg][resources/pi1.jpg]]|[>img[Pi in Scratch|resources/Scratch_pi.png][https://scratch.mit.edu/projects/84155498/#fullscreen]]|
|borderless|k
|[img[Pi Spiral|resources/pi1text.jpg][resources/pi1.jpg]]|

a picture of pi with an "endless" spiral of pi digits //surrounding/encircling// it - a nice [[idea for an artistic programming project|resources/pi day scratch project.png]]? (a digit-generating starter sketch below)
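If I ever build that spiral, the digits themselves are the easy part. A minimal sketch (using Gibbons' unbounded spigot algorithm - one possible approach, and my own illustration, not part of any existing project):
{{{
def pi_digits():
    """Yield the decimal digits of pi, one at a time
    (Gibbons' unbounded spigot algorithm, exact integer arithmetic)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # the next digit is now safe to emit
            q, r, n = 10 * q, 10 * (r - n * t), (10 * (3 * q + r)) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

digits = pi_digits()
print("".join(str(next(digits)) for _ in range(50)))  # 3141592653...
}}}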

* When you discover mathematical structures that you believe correspond to the world around you ... you are communicating with the universe, seeing beautiful and deep structures and patterns that no one without your training can see. The mathematics is there, it's leading you, and you are discovering it. Mathematics is a profound language, an awesomely beautiful language. For some, like Leibniz, it is the language of God. I'm not religious, but I do believe that the universe is organized mathematically.
** Anthony Tromba, //UCSC Professor Seeks to Reconnect Mathematics to its Intellectual Roots//, University of California Press Release 2003
** check out the press release
** Again, is math a human invention or a human discovery?

* As the island of knowledge grows, the surface that makes contact with mystery expands. When major theories are overturned, what we thought was certain knowledge gives way, and knowledge touches upon mystery differently. This newly uncovered mystery may be humbling and unsettling, but it is the cost of truth. Creative scientists, philosophers, and poets thrive at this shoreline.
** W. Mark Richardson, //A Skeptic's Sense of Wonder//, Science, 1998
** check out the article
** A beautiful metaphor of a knowledge island and a mystery surface. Mathematically speaking, there are fractal curves, like the [[Koch Curve/Snowflake|https://en.wikipedia.org/wiki/Koch_snowflake]], which do even "better" than that: they enclose a bounded (finite) area even as their length/circumference grows to infinity! (a quick numeric check below)
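The Koch numbers are easy to verify: each step replaces every edge with four edges a third as long, so the perimeter picks up a factor of 4/3 while the newly added triangles shrink fast enough for the total area to converge. A quick sketch of the arithmetic (my own, starting from a unit-side triangle):
{{{
from math import sqrt

# Koch snowflake from an equilateral triangle with side 1:
# each iteration replaces every edge with 4 edges of 1/3 the length.
side, n_edges = 1.0, 3
area = sqrt(3) / 4  # area of the starting triangle
for it in range(1, 9):
    area += n_edges * (sqrt(3) / 4) * (side / 3) ** 2  # one new triangle per edge
    side /= 3
    n_edges *= 4
    print(f"iter {it}: perimeter = {n_edges * side:8.2f}, area = {area:.6f}")
# perimeter = 3 * (4/3)**it grows without bound;
# area converges to (8/5) * sqrt(3)/4 = 0.692820...
}}}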

* As a teacher, Tengo pounded into his students' heads how voraciously mathematics demanded logic. Here things that could not be proven had no meaning, but once you had succeeded in proving something, the world's riddles settled into the palm of your hand like a tender oyster.
** Haruki Murakami, [[1Q84|https://en.wikipedia.org/wiki/1Q84]], 2011
** check out the book

* H. G. Wells on Math - still relevant today (100+ years later)

* Since Galileo's time, science has become steadily more mathematical. ... It is virtually an article of faith for most theoreticians ... that there exists a fundamental equation to describe the phenomenon they are studying ... Yet ... it may eventually turn out that fundamental laws of nature do not need to be stated mathematically and that they are better expressed in other ways, like the rules governing the game of chess.
** Graham Farmelo, Foreword to [[It Must Be Beautiful|http://math.arizona.edu/~faris/equations7.pdf]], 2003
** Could these "other ways" be referring to what Stephen Wolfram calls (rule- and Cellular Automata-based) "New Science"?
** check out the book

[img[Dice|resources/dice_triangle_small.jpg][resources/dice_triangle.jpg]]

** a picture of a Penrose-inspired [["Impossible Triangle"|https://en.wikipedia.org/wiki/Penrose_triangle]] of dice - a nice idea for a statistics-calculator programming project? (a starter sketch follows the images below)
** a cool take on Penrose's impossible triangles: impossible cubes (from [[Gábor Damásdi's Symmetry tumblr site|http://szimmetria-airtemmizs.tumblr.com/post/145755776123/impossible-cubes]], inspired by [[Oscar Reutersvärd|https://en.wikipedia.org/wiki/Oscar_Reutersv%C3%A4rd]]’s [[optical illusions and impossible art|http://im-possible.info/english/]])
[img[cubes|resources/impossible cubes.gif][resources/impossible cubes.gif]]
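On the statistics-calculator idea: the distribution of the sum of two dice is itself a little triangle, and tabulating it exactly takes only a few lines. A sketch of what such a project might start from (my own illustration):
{{{
from collections import Counter
from itertools import product

# Exact distribution of the sum of two fair dice -- the simplest
# "triangle" in statistics, and a natural seed for a dice project.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
for total in range(2, 13):
    print(f"{total:2d}: {'#' * counts[total]:<6} p = {counts[total] / 36:.3f}")
}}}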


* not covered yet:
** pages: 155, 159, 166, 167, 170, 173, 178, 184, 
This is a courageous account of the experience Gould had with cancer.

He was able to remain calm, level-headed, optimistic, and cool/scientific about it, which may have created a (helpful) positive feedback/result on top of the good luck he surely had.

My life has recently intersected, in a most personal way, two of Mark Twain's famous quips. One I shall defer to the end of this essay. The other (sometimes attributed to Disraeli), identifies three species of mendacity, each worse than the one before - lies, damned lies, and statistics.

Consider the standard example of stretching the truth with numbers - a case quite relevant to my story. Statistics recognizes different measures of an "average," or central tendency. The mean is our usual concept of an overall average - add up the items and divide them by the number of sharers (100 candy bars collected for five kids next Halloween will yield 20 for each in a just world). The median, a different measure of central tendency, is the half-way point. If I line up five kids by height, the median child is shorter than two and taller than the other two (who might have trouble getting their mean share of the candy). A politician in power might say with pride, "The mean income of our citizens is $15,000 per year." The leader of the opposition might retort, "But half our citizens make less than $10,000 per year." Both are right, but neither cites a statistic with impassive objectivity. The first invokes a mean, the second a median. (Means are higher than medians in such cases because one millionaire may outweigh hundreds of poor people in setting a mean; but he can balance only one mendicant in calculating a median).

The larger issue that creates a common distrust or contempt for statistics is more troubling. Many people make an unfortunate and invalid separation between heart and mind, or feeling and intellect. In some contemporary traditions, abetted by attitudes stereotypically centered on Southern California, feelings are exalted as more "real" and the only proper basis for action - if it feels good, do it - while intellect gets short shrift as a hang-up of outmoded elitism. Statistics, in this absurd dichotomy, often become the symbol of the enemy. As Hilaire Belloc wrote, "Statistics are the triumph of the quantitative method, and the quantitative method is the victory of sterility and death."

This is a personal story of statistics, properly interpreted, as profoundly nurturant and life-giving. It declares holy war on the downgrading of intellect by telling a small story about the utility of dry, academic knowledge about science. Heart and head are focal points of one body, one personality.

In July 1982, I learned that I was suffering from abdominal mesothelioma, a rare and serious cancer usually associated with exposure to asbestos. When I revived after surgery, I asked my first question of my doctor and chemotherapist: "What is the best technical literature about mesothelioma?" She replied, with a touch of diplomacy (the only departure she has ever made from direct frankness), that the medical literature contained nothing really worth reading.

Of course, trying to keep an intellectual away from literature works about as well as recommending chastity to Homo sapiens, the sexiest primate of all. As soon as I could walk, I made a beeline for Harvard's Countway medical library and punched mesothelioma into the computer's bibliographic search program. An hour later, surrounded by the latest literature on abdominal mesothelioma, I realized with a gulp why my doctor had offered that humane advice. The literature couldn't have been more brutally clear: mesothelioma is incurable, with a median mortality of only eight months after discovery. I sat stunned for about fifteen minutes, then smiled and said to myself: so that's why they didn't give me anything to read. Then my mind started to work again, thank goodness.

If a little learning could ever be a dangerous thing, I had encountered a classic example. Attitude clearly matters in fighting cancer. We don't know why (from my old-style materialistic perspective, I suspect that mental states feed back upon the immune system). But match people with the same cancer for age, class, health, socioeconomic status, and, in general, those with positive attitudes, with a strong will and purpose for living, with commitment to struggle, with an active response to aiding their own treatment and not just a passive acceptance of anything doctors say, tend to live longer. A few months later I asked Sir Peter Medawar, my personal scientific guru and a Nobelist in immunology, what the best prescription for success against cancer might be. "A sanguine personality," he replied. Fortunately (since one can't reconstruct oneself at short notice and for a definite purpose), I am, if anything, even-tempered and confident in just this manner.

Hence the dilemma for humane doctors: since attitude matters so critically, should such a sombre conclusion be advertised, especially since few people have sufficient understanding of statistics to evaluate what the statements really mean? From years of experience with the small-scale evolution of Bahamian land snails treated quantitatively, I have developed this technical knowledge - and I am convinced that it played a major role in saving my life. Knowledge is indeed power, in Bacon's proverb.

The problem may be briefly stated: What does "median mortality of eight months" signify in our vernacular? I suspect that most people, without training in statistics, would read such a statement as "I will probably be dead in eight months" - the very conclusion that must be avoided, since it isn't so, and since attitude matters so much.

I was not, of course, overjoyed, but I didn't read the statement in this vernacular way either. My technical training enjoined a different perspective on "eight months median mortality." The point is a subtle one, but profound - for it embodies the distinctive way of thinking in my own field of evolutionary biology and natural history.

We still carry the historical baggage of a Platonic heritage that seeks sharp essences and definite boundaries. (Thus we hope to find an unambiguous "beginning of life" or "definition of death," although nature often comes to us as irreducible continua.) This Platonic heritage, with its emphasis on clear distinctions and separated immutable entities, leads us to view statistical measures of central tendency wrongly, indeed opposite to the appropriate interpretation in our actual world of variation, shadings, and continua. In short, we view means and medians as the hard "realities," and the variation that permits their calculation as a set of transient and imperfect measurements of this hidden essence. If the median is the reality and variation around the median just a device for its calculation, then "I will probably be dead in eight months" may pass as a reasonable interpretation.

But all evolutionary biologists know that variation itself is nature's only irreducible essence. Variation is the hard reality, not a set of imperfect measures for a central tendency. Means and medians are the abstractions. Therefore, I looked at the mesothelioma statistics quite differently - and not only because I am an optimist who tends to see the doughnut instead of the hole, but primarily because I know that variation itself is the reality. I had to place myself amidst the variation.

When I learned about the eight-month median, my first intellectual reaction was: fine, half the people will live longer; now what are my chances of being in that half. I read for a furious and nervous hour and concluded, with relief: damned good. I possessed every one of the characteristics conferring a probability of longer life: I was young; my disease had been recognized in a relatively early stage; I would receive the nation's best medical treatment; I had the world to live for; I knew how to read the data properly and not despair.

Another technical point then added even more solace. I immediately recognized that the distribution of variation about the eight-month median would almost surely be what statisticians call "right skewed." (In a symmetrical distribution, the profile of variation to the left of the central tendency is a mirror image of variation to the right. In skewed distributions, variation to one side of the central tendency is more stretched out - left skewed if extended to the left, right skewed if stretched out to the right.) The distribution of variation had to be right skewed, I reasoned. After all, the left of the distribution contains an irrevocable lower boundary of zero (since mesothelioma can only be identified at death or before). Thus, there isn't much room for the distribution's lower (or left) half - it must be scrunched up between zero and eight months. But the upper (or right) half can extend out for years and years, even if nobody ultimately survives. The distribution must be right skewed, and I needed to know how long the extended tail ran - for I had already concluded that my favorable profile made me a good candidate for that part of the curve.

The distribution was indeed strongly right skewed, with a long tail (however small) that extended for several years above the eight month median. I saw no reason why I shouldn't be in that small tail, and I breathed a very long sigh of relief. My technical knowledge had helped. I had read the graph correctly. I had asked the right question and found the answers. I had obtained, in all probability, the most precious of all possible gifts in the circumstances - substantial time. I didn't have to stop and immediately follow Isaiah's injunction to Hezekiah - set thine house in order for thou shalt die, and not live. I would have time to think, to plan, and to fight.

One final point about statistical distributions. They apply only to a prescribed set of circumstances - in this case to survival with mesothelioma under conventional modes of treatment. If circumstances change, the distribution may alter. I was placed on an experimental protocol of treatment and, if fortune holds, will be in the first cohort of a new distribution with high median and a right tail extending to death by natural causes at advanced old age.

It has become, in my view, a bit too trendy to regard the acceptance of death as something tantamount to intrinsic dignity. Of course I agree with the preacher of Ecclesiastes that there is a time to love and a time to die - and when my skein runs out I hope to face the end calmly and in my own way. For most situations, however, I prefer the more martial view that death is the ultimate enemy - and I find nothing reproachable in those who rage mightily against the dying of the light.

The swords of battle are numerous, and none more effective than humor. My death was announced at a meeting of my colleagues in Scotland, and I almost experienced the delicious pleasure of reading my obituary penned by one of my best friends (the so-and-so got suspicious and checked; he too is a statistician, and didn't expect to find me so far out on the right tail). Still, the incident provided my first good laugh after the diagnosis. Just think, I almost got to repeat Mark Twain's most famous line of all: the reports of my death are greatly exaggerated.

-----------------------------

Dr. Gould was one of my favorite twentieth-century scientific essayists. He penned this in 1982, and though it was cancer that took him from us in 2002, it was not the cancer he discusses here.

This essay is [[all over the web|https://csn.cancer.org/node/213889]], so I consider it to be part of the public domain.
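Gould's two statistical points - that the mean and the median tell different stories in lopsided data, and that a right-skewed survival curve leaves a long tail to aim for - are easy to see numerically. A small sketch with made-up numbers (purely illustrative; not mesothelioma data):
{{{
from statistics import mean, median

# Gould's politicians: one millionaire drags the mean way up,
# while the median barely notices.
incomes = [8_000, 9_000, 10_000, 12_000, 1_000_000]
print(mean(incomes), median(incomes))  # 207800 vs 10000

# An invented right-skewed "survival time" sample (months): bounded
# by zero on the left, with a long stretched-out tail on the right.
survival = [2, 3, 5, 6, 7, 8, 8, 9, 12, 18, 30, 60]
print(median(survival))              # 8 -- an "eight-month median"
print(mean(survival))                # 14 -- the tail pulls the mean rightward
print(sum(m > 8 for m in survival))  # 5 cases out on the long right tail
}}}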


The book "The Most Human Human" by Brian Christian opens with the following anecdote:
>[[Claude Shannon|https://en.wikipedia.org/wiki/Claude_Shannon]], artificial intelligence pioneer and founder of information theory, met his wife, Mary Elizabeth, at work. This was Bell Labs in Murray Hill, New Jersey, the early 1940s. He was an engineer, working on wartime cryptography and signal transmission.
>She was a computer^^1^^.

The book starts with Christian describing why and how he became a human "confederate" in the 2009 [[''Turing Test''|https://plato.stanford.edu/entries/turing-test/]] event, which he explains:
>Each year, the artificial intelligence (AI) community convenes for the field's most anticipated and controversial annual event -- a competition called the Turing test. The test is named for British mathematician Alan Turing, one of the founders of computer science, who in 1950 attempted to answer one of the field's earliest questions: Can machines think? That is, would it ever be possible to construct a computer so sophisticated that it could actually be said to be thinking, to be intelligent, to have a mind? And if indeed there were, someday such a machine: How would we know?
>
>Instead of debating this question on purely theoretical grounds, Turing proposed an experiment. A panel of judges poses questions by computer terminal to a pair of unseen correspondents, one a human “confederate," the other a computer program, and attempts to discern which is which. There are no restrictions on what can be said: the dialogue can range from small talk to the facts of the world (e.g., how many legs ants have, what country Paris is in) to celebrity gossip and heavy-duty philosophy—the whole gamut of human conversation. 
>Turing predicted that by the year 2000, computers would be able to fool 30 percent of human judges after five minutes of conversation, and that as a result “one will be able to speak of machines thinking without expecting to be contradicted.”
>
>Turing's prediction has not come to pass; at the 2008 contest, however, held in Reading, England, the top program came up shy of that mark by just a single vote. The 2009 test in Brighton could be the decisive one.
>
>And I am participating in it, as one of four human confederates going head-to-head (head-to-motherboard?) against the top AI programs. In each of several rounds, I, along with the other confederates, will be paired off with an AI program and a judge - and will have the task of convincing the latter that I am, in fact, human.

and he adds:
>Fortunately, I //am// human; unfortunately, it's not clear how much that will help.

As for the title of the book:
Each year, the AI program which "fools" the judges and gets the most votes is awarded the title "The Most Human Computer". It is the title (and the prize) which the participating Computer Science research teams (and all spectators) are principally interested in.
But, intriguingly, there is another title given out, this one to the //confederate// who got the most votes from the judges: "The Most Human Human".

Christian writes that when he read that in 2008 an AI program had come within a vote of passing the Turing Test, and that 2009 might be the year machines finally crossed the threshold, he was determined and resolved: //Not on my watch//.

Christian refers to the famous [[chess competition between the chess master Garry Kasparov and IBM's Deep Blue computer|The end of an era, the beginning of another? HAL, Deep Blue and Kasparov]] [[held in 1996|https://en.wikipedia.org/wiki/Deep_Blue_versus_Garry_Kasparov]] as another (romantic) attempt to "defend the human race". (see also [[History of the chess table]]).

In another reference to history, Christian writes:
> In the mid-twentieth century, a piece of cutting-edge mathematical gadgetry was "like a computer." (referring to a person/occupation^^1^^).
> In the twenty-first century, it is the //human// math whiz that is "like a computer". (referring to the machine).
> An odd twist: we're //like// the thing that used to be like us. We imitate our old imitators, one of the strange reversals of fortune in the long saga of human uniqueness.

And ''on human uniqueness (of character)'':
>My mind goes back to the organizers' advice to "just be myself", to how much philosophers have agonized over this idea. While other existentialists—for instance, Jean-Paul Sartre emphasized authenticity and originality and freedom from outside influence, nineteenth-century German philosopher Friedrich Nietzsche held the startling opinion that the most important part of "being oneself" was—in Brown University philosopher Bernard Reginster's words—"being one self, any self."
>Nietzsche spoke of this as “giving style to one's character," comparing people to works of art, which we often judge according to their “concinnity," [the skillful and harmonious arrangement or fitting together of the different parts] the way their parts fit together to make a whole: "In the end, when the work is finished, it becomes evident how the constraint of a single taste governed and formed everything large and small.”
>Computer culture critics like Jaron Lanier are skeptical, for instance, of decentralized projects like Wikipedia, arguing:
>>The Sims, ... the iPhone, the Pixar movies, and all the other beloved successes of digital culture ... are personal expressions. True, they often involve large groups of collaborators, but there is always a central personal vision—a Will Wright, a Steve Jobs, or a Brad Bird conceiving the vision and directing a team of people earning salaries.
>It is this same "central personal vision" that is crucial for Nietzsche, who goes so far as to say, "Whether this taste was good or bad is less important than one might suppose, if only it was a single taste!"
I can't say that I fully agree with Lanier. He is right that consistency of behavior, style, User Interface, etc. is important in good apps and products, and that a "guiding principle" is necessary to achieve it; but it doesn't have to be a "single hand" that provides it, and I believe that "distributed" organizations and structures will become more popular and effective in the future.
>It is precisely the “central personal vision” of Lanier and “single taste" of Nietzsche that is lacking in most chatbots. For instance, I had the following conversation with “Joan,” the Cleverbot-offshoot program that won the Loebner Prize in 2006. Though each of her answers, taken separately, is perfectly sensible and human, their sum produces nothing but a hilarious cacophony in the way of identity.
>There is a central trade-off in the world of bot programming, between coherence of the program's personality or style and the range of its responses. By "crowdsourcing" the task of writing a program's responses to the users themselves, the program acquires an explosive growth in its behaviors, but these behaviors stop being internally consistent.
I have to say that I'm sure this issue will be resolved with more/future work on AI and neurobiology/psychology, and with our growing understanding of the mind/consciousness.

Christian points out that in the evolution of computers, the first area successfully tackled and resolved (in the sense that machines now surpass human capabilities) was calculation and logic operations:
>Descartes, in the seventeenth century, picks up these threads [about thought, logic, reason, experience, senses, etc.] and leverages the mistrust of the senses toward a kind of radical skepticism: How do I know my hands are really in front of me? How do I know the world actually exists? How do I know that I exist?
>His answer becomes the most famous sentence in all of philosophy. Cogito ergo sum. I think, therefore I am.
>I think, therefore I am—not “I register the world” (as Epicurus might have put it), or "I experience," or "I feel,” or “I desire,” or “I recognize," or "I sense.” No. I think. The capacity furthest away from lived reality is that which assures us of lived reality—at least, so says Descartes.
>This is one of the most interesting subplots, and ironies, in the story of AI, because it was deductive logic, a field that Aristotle helped invent, that was the very first domino to fall.^^2^^
>[Alan Turing's and Claude Shannon's work...] ends up amounting to a huge blow to humans' unique claim to and dominance of the area of "reasoning". Computers, lacking almost everything else that makes humans humans, have our //unique// piece in spades. They have more of this than we do. 

----
^^1^^ - an explanation, just in case: a "computer" in the pre-~Computer-Science days (and as early as the 17^^th^^ century) meant "a person who computes": someone performing mathematical calculations, before electronic computers became commercially available. Later, in the early machine-computing days, computing was a branch of Math Departments, and Human Computers, as they were called, were people – often women – who used and operated these machines to find mathematical solutions via carefully crafted procedures, what we call programming today.
^^2^^ - this is another example of an "unforeseen path", where humans solved a problem (creating (digital) "computers") in a way that is very different from how nature solved it (creating (human) "computers"). Other examples of entirely different solutions are "creating human flight" (how airplanes fly vs. how birds fly) and "enabling human locomotion" (e.g., wheels vs. feet).
From the book //Sailing home: Using the wisdom of Homer's Odyssey to navigate life's perils and pitfalls// by Norman Fischer:
>Like the experience of deja-vu (Haven't I already been here?), journeys of return are uncanny and paradoxical. We start from home, and we return home, coming full circle. One might well wonder, What's the point of such a journey? Why leave, in the first place, if you are only going to come back to where you started from? But there is a point to this arduous and circular wandering. True, we do come back to our starting point, and we return with nothing we didn't already have before we left. Yet, at the same time there is an important difference: we are different, and our appreciation of what our life is and has always been is deeper.

Click the image below to see Spiraling Discovery in action (10MB movie)

[img[Click to see Spiraling Discovery in action|resources/escher_gallery_1.jpg][resources/escher_print_gallery_loop_1.mpg]]
[[Escher's Print Gallery]]

And as [[Sir Terry Pratchett had said|http://www.chrisjoneswriting.com/terry-pratchett-quotes/category/travel]]:
>Why do you go away? So that you can come back. So that you can see the place you came from with new eyes and extra colours. And the people there see you differently, too. Coming back to where you started is not the same as never leaving.
<<forEachTiddler 
where 
'tiddler.tags.contains("book-chapter") && tiddler.tags.contains("The Power of Mindful Learning")'
sortBy 
'tiddler.title'>>
[[Alan Kay|https://en.wikipedia.org/wiki/Alan_Kay]] (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]"), at OOPSLA 1997, gave an [[insightful talk|https://www.youtube.com/watch?v=oKg1hTOQXoY]] titled "The computer revolution hasn't happened yet" (see also [[the related 2007 write-up|http://www.vpri.org/pdf/m2007007a_revolution.pdf]]), highlights of which I cover below.

Seven years after [[Kay's talk in Pisa|http://www.vpri.org/pdf/m2007007a_revolution.pdf]], Kevin Kelly echoed a similar sentiment on his website in an article titled [[You are not late|http://kk.org/thetechnium/you-are-not-late/]], concluding:
>So, the truth: Right now, today, in 2014 is the best time to start something on the internet. There has never been a better time in the whole history of the world to invent something. There has never been a better time with more opportunities, more openings, lower barriers, higher benefit/risk ratios, better returns, greater upside, than now. Right now, this minute. This is the time that folks in the future will look back at and say, “Oh to have been alive and well back then!”
>
>The last 30 years have created a marvelous starting point, a solid platform to build truly great things. However the coolest stuff has not been invented yet — although this new greatness will not be more of the same-same that exists today. It will not be merely “better,” it will be different, beyond, and other. But you knew that.
>
>What you may not have realized is that today truly is a wide open frontier. It is the best time EVER in human history to begin.
>
>You are not late.

So back to Kay, and a few highlights:
* The true revolutionary impact of the printing press:
> the [printing] press in the 15th century was first thought to be a less expensive automation of hand written documents, but by the 17th century its several special properties had gradually changed the way important ideas were thought about to the extent that most of the important ideas that followed and the way they were thought about had not even existed when the press was invented. The two most important ideas were the inventions of science and of new ways to organize politics in society (which, in several important cases, were themselves extensions of the scientific outlook).
* Kay makes an argument similar to [[points made by Bret Victor and Andrea diSessa|Examples of the power of math notation]] about the impact of a revolutionary change in "the how" (the technology) on "the what" (the content, scope, and extent of what we can write and argue about):
> These changes in thought also changed what "literacy" meant, because literacy is not just being able to read and write, but to fluently deal with the kinds of ideas important enough to write about and discuss.
* He claims (and I agree with him :) that computers enable a new literacy, that of writing and reading simulations of arguments, phenomena, and inventions. And that this kind of literacy will be yet another big force in human evolution:
>One of the realizations we had about computers in the 60s was that they give rise to new and more powerful forms of arguments about many important issues via dynamic simulations. That is, instead of making the fairly dry claims that can be stated in prose and mathematical equations, the computer could carry out the implications of the claims to provide a better sense of whether the claims constituted a worthwhile model of reality. And, if the general literacy of the future could include the writing of these new kinds of claims and not just the consumption (reading) of them, then we would have something like the next 500 year invention after the printing press that could very likely change human thought for the better.
** This, in my opinion, is a clear and succinct way to define the new computer literacy, so I'll repeat it:
*** the (computer) literacy of the future will enable us to create/model/program ("write") and analyze/critique/consume ("read") arguments/claims/knowledge/ideas in the form of dynamic simulations "worthy" of modeling/representing aspects of reality. By "worthy" I think Kay means enabling us to think creatively, productively, usefully about new ideas and knowledge. (see also [[Prospects of Modeling]])
* He suspects that the computer revolution will take longer than anticipated by pioneers in the 1960s (like Seymour Papert, Kay himself and others):
>it looks as though the actual revolution will take longer than our optimism suggested, largely because the commercial and educational interests in the old media and modes of thought have frozen personal computing pretty much at the “imitation of paper, recordings, film and TV” level.
* It is important to teach Knowledge (content) not just processes and tools:
>The wonderful nature of modern knowledge, aided by writing and teaching, is that many ideas which require a genius to invent (in the case of calculus: two geniuses) can be learned by a much wider and less especially talented population. But it is very difficult to invent in a vacuum, even for a genius. (Imagine being born with twice Leonardo's IQ in 10,000 BC. Not a lot is going to happen! Even Leonardo couldn’t invent an engine for any of his vehicles. He was plenty smart enough but he lived in the wrong time and thus didn’t know enough.)
* Kay is not interpreting "constructing knowledge" as students/learners exploring/playing/discovering knowledge entirely on their own. He rightly points out that this kind of "futzing around" (my language :) rarely, if ever, creates significant (discovered) knowledge. He definitely values mentoring and guiding, and gives an example to show how ineffective/misguided the idea of just putting computers in every classroom is:
> what if we were to make an inexpensive piano and put it in every classroom? The children would certainly learn to do something with it by themselves – it could be fun, it could have really expressive elements, it would certainly be a kind of music. But it would quite miss what has been invented in music over centuries by great musicians. This would be a shame with regard to music – but for science and mathematics it would be a disaster.
* A powerful way to teach is
> to find ideas and representations that allow “beginners to act as intermediates”, that is, for learners to immediately start doing the actual activity in some real form. 
* Kay quotes H. G. Wells saying [[Civilization is in a race between education and catastrophe.]] but modifies/clarifies it to refer more specifically to "changes in the point of view" instead of "education".
* About science and its critical role:
>The first step in science is the startling realization that “the world is not as it seems” and many adults have never taken this step but instead take the world as it seems and their inner stories as reality with often disastrous consequences. The first step is a big one, and is best taken by children. [i.e., at an early age].
>[and]
>From there, it is another giant step to include ourselves (that is, all humans) in the proper objects of study: to try to get past our stories about ourselves to understand better “what are we?” and ask “how can our flaws be mitigated?”.

And he concludes on an optimistic note:
>Though the world today is far from peaceful, there are now examples of much larger groups of people living peacefully and prospering for many more generations than ever before in history. The enlightenment of some has led to communities of outlook, knowledge, wealth, commerce, and energy that help the less enlightened behave better. It is not at all a coincidence that the first part of this real revolution in society was powered by the printing press. The next revolutions in thought – such as whole systems thinking and planning leading to major new changes in outlook – will be powered by the real computer revolution – and it could come just in time to win over catastrophe.
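To make Kay's notion of "writing" a claim as a dynamic simulation a bit more concrete, here is a toy sketch (my own made-up example, not Kay's): the dry prose claim "growth cannot continue unchecked in a finite environment" becomes a runnable model, and you can "read" the argument by running it and arguing with its parameters:
{{{
# A toy "written" claim: unchecked (exponential) growth and
# resource-limited (logistic) growth look identical at first,
# but the program carries out the implications of the claim for us.

def simulate(growth_rate=0.3, capacity=1000.0, steps=30):
    exponential = logistic = 1.0
    for step in range(steps):
        exponential += growth_rate * exponential
        # the same growth rate, damped as the population nears capacity
        logistic += growth_rate * logistic * (1 - logistic / capacity)
        print(f"{step:3}  exponential: {exponential:14.1f}  logistic: {logistic:7.1f}")

simulate()  # watch the two curves diverge; then change the parameters
}}}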
We all (especially in Silicon Valley) have heard about (and, if not careful, experienced) the ''Tech Bubble''. But reading the very thoughtful (and worth a very slow read) book [[Absence of Mind|http://yalebooks.co.uk/pdf/9780300145182.pdf]] by Marilynne Robinson brought up the case of the ''Science Bubble'', which can be dangerous as well (see also [[Minding the obvious]]).

I started reading Robinson's book after listening to an interesting and thought-provoking dialog between Marilynne Robinson and Marcelo Gleiser titled [[The Mystery We Are|https://soundcloud.com/onbeing/marilynne-robinson-marcelo-1]], hosted by Krista Tippett (from [[On Being|http://www.onbeing.org/]]).

Robinson, who won a Pulitzer for her novel Gilead, and who is also a university lecturer and preacher, seems to be quite knowledgeable about science. In the first chapter of the book, she brings up a couple of examples of scientific overconfidence and an arrogant stance of scientists presenting scientific theories with both superiority and "moral conviction" (i.e., using terms of "right" and "wrong"), seemingly forgetting that a scientific theory, by definition, cannot be proven right (or "correct"), only wrong (or incorrect).

One example is Bertrand Russell stating in the early 1920s that the religious belief in a beginning of the universe (the Genesis story) shows a lack of imagination on the part of our ancestors. His statement was soon "upended" by the (now mainstream) Big Bang theory. This, in my mind, is an example of overconfidence (if not arrogance) and a misplaced conviction in the timeless/forever-valid "correctness" of a scientific theory.

Another example is Steven Pinker's analysis of the nature of violence in human societies over the course of history. In his book //The Better Angels of Our Nature: Why Violence Has Declined//, he brings numerous data and statistics to demonstrate and "prove" his point (and I simplify:) that human societies have become less violent. One of Pinker's main points is that war-related deaths, measured as a percentage of the population, have dropped over time. By that measure, the share of warriors killed in tribal or religious wars would be equivalent to tens of millions of people killed in a modern-day war -- a death toll which, thankfully, has not occurred in modern times.

Robinson asks some very critical questions about Pinker's argument setups and conclusions. For example: in a small tribe or clan of, say, 50 people, a death of 1 or 2 people would probably not be uncommon. This is a death toll of 2-4%. She questions the validity (or meaning) of Pinker's comparison to a death toll of 2-4% of the current US population of roughly 350 million (so, 7-14 million dead). We know that this kind of massive death has happened in modern times, and we also know that it is thankfully not common (unlike the 1-2 deaths in a small tribe). But is it a fair, and more importantly, meaningful, comparison? Is it teaching us something of value regarding human violence? Can we draw some helpful or meaningful conclusions from this? (Think about it: in the small tribe the ''minimum'' nonzero death rate, in percent, is 2% (1 person dying in a 50-person tribe). Does the percentage really scale in a meaningful way, i.e., can we draw meaningful conclusions about a larger population, for example 2% of the US population, which is 7 million?)
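To spell out the arithmetic behind that question (my own illustration, not Robinson's), a tiny sketch:
{{{
# The granularity problem behind percentage comparisons: the *minimum*
# nonzero death toll in a tiny tribe is one person -- already a large
# percentage -- while the same percentage of a modern nation is an
# enormous absolute number.

for population in (50, 10_000, 350_000_000):
    one_death_pct = 100 / population         # one death, as a percentage
    two_pct_deaths = int(0.02 * population)  # a 2% death toll, in people
    print(f"population {population:>11,}: one death = {one_death_pct:.2g}%, "
          f"a 2% toll = {two_pct_deaths:,} people")
}}}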

This use of pure math and statistics (in this case, percentages) without looking at important aspects of meaning, scaling, and so on, is an example of a Bubble, which, if not outright distorting our perspective/view of reality, may at least be misleading. Applying science and math to try and prove a point does not necessarily make the conclusion valid and/or meaningful. Thinking this way may be a case of living in the Science Bubble.

For a more cautious and humble scientific stance see what Marcelo Gleiser and Carl Sagan have to say in [[The nature of reality - per Philip K. Dick, Marcelo Gleiser, Carl Sagan]].
The "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]" [[Fred Brooks|https://en.wikipedia.org/wiki/Fred_Brooks]] (of [[The Mythical Man Month|https://archive.org/stream/mythicalmanmonth00fred/mythicalmanmonth00fred_djvu.txt]] fame), wrote a thoughtful article after receiving the //first// ACM [[Allen Newell|https://en.wikipedia.org/wiki/Allen_Newell]] Award (named after another CS Luminary/Sage) in 1994, sharing his thoughts about the [[Computer Scientist as a Toolsmith|http://www.cs.unc.edu/~brooks/Toolsmith-CACM.pdf]].

In the article he talks about "The Toolsmith as Collaborator", and observes^^1^^:
>If the computer scientist is a toolsmith, and if our delight is to fashion power tools and amplifiers for minds, we must partner with those who will use our tools, those whose intelligences we hope to amplify. Let me share with you some of our experiences in interdisciplinary collaboration at Chapel Hill over the last 30 years. It has been an exciting experience, and I commend it to you as a way of working. It also has some inherent costs, which one should intentionally decide whether to pay, and some inherent pitfalls.
>
>''The ~Driving-Problem Approach''
>Let me begin with a paradoxical thesis:
>Hitching our research to someone else’s driving problems, and solving those problems on the owners’ terms, leads us to richer computer science research.
>This is a special case of the “down-is-up” paradox that governs so much of life, from marriage enrichment to career progress.
>How can such a thing be so? How can working on the problems of another discipline, for the purpose of enhancing a collaborator, help me as a computer scientist?
>In many ways:
>• It aims us at relevant problems, not just exercises or toy-scale problems.
>• It keeps us honest about success and failure, so that we don’t fool ourselves so easily.
>• It makes us face the whole problem, not just the easy or mathematical parts. In computational geometry, for example, we can’t avoid the cases of collinear point triples or coplanar point quadruples. We can’t assume away ill-conditioned cases.
>• Facing the whole problem in turn forces us to learn or develop new computer science, often in areas we otherwise never would have addressed.
>• Besides all of that, it is just plain fun to look over the shoulders of those discovering how proteins work, or designing submarines, or fabricating on the nanometer scale.
>
>''The Costs of Collaboration'' 
>There are real costs associated with any professional collaboration, and interdisciplinary collaborations have some unique costs. I find that our teams spend about a quarter of our professional effort on routine work that supports our collaborators but does not advance our joint researches, much less the computer-science part of the research.

----
^^1^^ See what Brooks says about [[Software creations]]
(copied and slightly modified from Dan Meyer's blog and his [[3 act math principles|http://blog.mrmeyer.com/?p=10285]])

!!!Act One
''Introduce the central conflict of your story/task clearly, visually, viscerally, using as few words as possible.''
The visual should be clear. No (or a minimum of) words are necessary. I'm not saying anyone is going to shell out ten dollars on date night to do this math problem, but you have a visceral reaction to the image. It strikes you right in the curiosity bone.
Leave no one out of your first act. Your first act should impose as few demands on the students as possible -- either of language or of math. It should ask for little and offer a lot. This, incidentally, is as far as the #anyqs challenge takes us.

!!!Act Two
''The protagonist/student overcomes obstacles, looks for resources, and develops new tools.''
What resources will your students need before they can resolve their conflict? What tools do they have already? What tools can you help them develop? They'll need quadratics, for instance. Help them with that.

!!!Act Three
''Resolve the conflict and set up a sequel/extension.''
The third act pays off on the hard work of act two and the motivation of act one. If we've successfully motivated our students in the first act, the payoff in the third act needs to meet their expectations.
This should be a climax; a celebration of the accomplishment/solution; fireworks; explosions; whatever.
Don't settle for less; don't let the student down with the "usual" encounter of the resolution of their conflict in the back of the textbook.
Very important: Make sure you have extension problems ready for students as they finish.

!!!Conclusion
Many math teachers take act two as their job description. Hit the board, offer students three worked examples and twenty practice problems. As the [[ALEKS|http://www.aleks.com/]] algorithm gets better and Bill Gates throws more gold bricks at Sal Khan (Khan Academy) and more people flip their classrooms, though, it's clear to me that the second act isn't our job anymore. Not the biggest part of it, anyway. You are only one of many people your students can access as they look for resources and tools. Going forward, the value you bring to your math classroom increasingly will be tied up in the first and third acts of mathematical storytelling, your ability to motivate the second act and then pay off on that hard work.
Preparing a lesson plan on Abstraction (one of the [["Big Ideas" in Computer Science|http://www.collegeboard.com/prod_downloads/computerscience/ComputationalThinkingCS_Principles.pdf]]) for a Computer Science class, I came up with a lab/programming exercise that required writing a hierarchy of functions, each one abstracting ("hiding", ignoring) some level of non-essential/low-level details, in order to make the "ultimate level" (the main program) "simple, readable, and elegant".
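To give a flavor of that exercise, here is a made-up miniature (not the actual lab; the file name is hypothetical): a hierarchy of functions in which each level hides the details of the level below it, so that the top level reads almost like prose:
{{{
# A miniature hierarchy of abstractions: each function hides
# ("abstracts away") the low-level details of the level below it.

def read_scores(filename):
    # Lowest level: file-format details live here, and only here.
    with open(filename) as f:
        return [float(line) for line in f if line.strip()]

def average(numbers):
    # Low level: the arithmetic details live here.
    return sum(numbers) / len(numbers)

def verdict(score):
    # Middle level: the grading policy lives here.
    return "pass" if score >= 60 else "fail"

def report(filename):
    # The "ultimate level": simple, readable, and elegant.
    print(verdict(average(read_scores(filename))))

# report("scores.txt")  # hypothetical input file: one score per line
}}}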

Defining that "ultimate level", or ultimate function, reminded me of the famous scene from Douglas Adams' "The Hitchhiker's Guide to the Galaxy", where a race of super-intelligent (and (of course) hyper-dimensional :) beings wants to find the answer to [[the "ultimate question" of Life, The Universe, and Everything|https://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker%27s_Guide_to_the_Galaxy]].

There is a lot of debate about the answer which the powerful computer [["Deep Thought"|https://en.wikipedia.org/wiki/List_of_minor_The_Hitchhiker%27s_Guide_to_the_Galaxy_characters#Deep_Thought]] gave ([[42|https://en.wikipedia.org/wiki/Phrases_from_The_Hitchhiker's_Guide_to_the_Galaxy#Answer_to_the_Ultimate_Question_of_Life.2C_the_Universe.2C_and_Everything_.2842.29]]), but I have a good, ~CS-related (or at least ~CS-punny) answer to another "immortal question": 
>what is the ultimate function (ha!) of living?
The [[Greek-inspired|https://en.wikipedia.org/wiki/Eudaimonia]] answer is a function (of course):

 live_a_good_life(each_one_of_us)

The rest, as they say, are details...
<<forEachTiddler 
where 
'tiddler.tags.contains("book-chapter") && tiddler.tags.contains("The Unreasonable Effectiveness of Mathematics")'
sortBy 
'tiddler.title'>>

A few influential articles on the subject:
* Richard Hamming on [["The Unreasonable Effectiveness of Mathematics"|http://www.dartmouth.edu/~matc/MathDrama/reading/Hamming.html]]
* Eugene Wigner on [["The Unreasonable Effectiveness of Mathematics in the Natural Sciences"|http://www.dartmouth.edu/~matc/MathDrama/reading/Wigner.html]]
* Frank Wilczek on [["Reasonably effective: I. Deconstructing a miracle"|http://ned.ipac.caltech.edu/level5/March07/Wilczek/Wilczek.html]]
* Mario Livio on [[Is Math a human invention or a series of discoveries of truths in the real world?]]
In a (typical?) chain of events, I watched a ~TEDx talk by an educational games designer named Sean Bouchard called [[Chocolate Covered Broccoli: Building Better Games|http://www.youtube.com/watch?v=VrK7VXCfsS0&feature=relmfu]], in which he mentioned Raph Koster's book //Theory of Fun//, which led me to a [[presentation by Raph|Theory of fun - Raph Koster]], where Raph shows a cartoon of Chris Crawford responding to Raph's (mis)quoting of Chris: "Fun is just another word for learning."
|borderless|k
|[img[crawford 1|./resources/crawford11.png][./resources/crawford1.png]]1|[img[crawford 2|./resources/crawford21.png][./resources/crawford2.png]]2|[img[crawford 3|./resources/crawford31.png][./resources/crawford3.png]]3|
|[img[crawford 4|./resources/crawford41.png][./resources/crawford4.png]]4|[img[crawford 5|./resources/crawford51.png][./resources/crawford5.png]]5|[img[crawford 6|./resources/crawford61.png][./resources/crawford6.png]]6|

What Crawford is saying in the cartoon echoes the [[Song of Joy]] by Wang Ken about learning and happiness/joy, which I personally find very true, so...
This led me to Chris's book [[The Art of Computer Game Design|resources/computer game design - chris crawford.pdf]] where he talks about his philosophy around game design, with quite a few implications about learning.
In the introduction to the book, Crawford points out that [[Those who overrate their own understanding undercut their own potential for learning.]] Learning (and living, too) requires (and is more enjoyable when lived with) humility and curiosity, but also openness -- an admission that knowledge/understanding is constantly evolving. This happens both on the personal/individual level and for the human race as a whole (e.g., theories in physics and chemistry; discoveries, proofs, and extensions in math). Sometimes these are evolutions, and sometimes revolutions.

As a game designer, Crawford wants to know: Why do people play games? What motivates them? What makes games fun? And he looks back to history and nature for answers.
He points out the natural and deeply ingrained ability (necessity?) of humans and other animals to learn through play (and playing games), and as he indicates, games are a serious business.
>A trip to the zoo will suffice. There we find two lion cubs wrestling near their mother. They growl and claw at each other. They bite and kick. One cub wanders off and notices a butterfly. It crouches in the grass, creeps ever so slowly toward its insect prey, then raises its haunches, wiggles them, and pounces.
>... who knows if lions can have fun? But we are dead wrong on the last count. These cubs are not carefree. They do not indulge in games to while away the years of their cubhood. These games are deadly serious business. They are studying the skills of hunting, the skills of survival. They are learning how to approach their prey without being seen, how to pounce, and how to grapple with and dispatch prey without being injured. They are learning by doing, but in a safe way.
And implications/parallels to education:
>... Games are thus the most ancient and time-honored vehicle for education. They are the original educational technology, the natural one, having received the seal of approval of natural selection. We don't see mother lions lecturing cubs at the chalkboard; we don't see senior lions writing their memoirs for posterity. In light of this, the question, "Can games have educational value?" becomes absurd. It is not games but schools that are the newfangled notion, the untested fad, the violator of tradition. Game-playing is a vital educational function for any creature capable of learning.
But Crawford is not claiming that education/learning is the only motivation for game playing:
>I must qualify my claim that the fundamental motivation for all game-play is to learn. First, the educational motivation may not be conscious. Indeed, it may well take the form of a vague predilection to play games. The fact that this motivation may be unconscious does not lessen its import; indeed, the fact would lend credence to the assertion that learning is a truly fundamental motivation.
>Second, there are many other motivations to play games that have little to do with learning, and in some cases these secondary motivations may assume greater local importance than the ancestral motivation to learn. These other motivations include: fantasy/exploration, nose-thumbing, proving oneself, social lubrication, exercise, and need for acknowledgment. I shall examine each in turn.
Crawford compares games to simulations, starting with a definition of a game:
>A game is a closed formal system that subjectively represents a subset of reality.
Then, he looks at representation (subjective vs. objective):
>Representation is a coin with two faces: an objective face and a subjective face. The two faces are not mutually exclusive, for the subjective reality springs from and feeds on objective reality. In a game, these two faces are intertwined, with emphasis on the subjective face. For example, when a player blasts hundreds of alien invaders, nobody believes that his recreation directly mirrors the objective world. However, the game may be a very real metaphor for the player's perception of his world. I do not wish to sully my arguments with pop psychological analyses of players giving vent to deep seated aggressions at the arcades. Clearly, though, something more than a simple blasting of alien monsters is going on in the mind of the player. We need not concern ourselves with its exact nature; for the moment it is entirely adequate to realize that the player does perceive the game to represent something from his private fantasy world. Thus, a game represents something from subjective reality, not objective. Games are objectively unreal in that they do not physically recreate the situations they represent, yet they are subjectively real to the player. The agent that transforms an objectively unreal situation into a subjectively real one is human fantasy. Fantasy thus plays a vital role in any game situation. A game creates a fantasy representation, not a scientific model.
Crawford brings up an interesting point about the nature of the mental processes and experience of a game player as they play. He describes one "emotional response" of this "subjective flow": some //fantastic// happenings in the game resonate with the player's private/subjective reality/world. But another "emotional response" that can be evoked, especially in the context of an educational game, is curiosity (sometimes incredulity, puzzlement, a desire to know, as [[observed by Isaac Asimov|The most exciting phrase to hear in science, the one that heralds new discoveries, is not "Eureka!", but "That's funny...".]]), or a sense of accomplishment (sometimes satisfaction, [[joy|Song of Joy]], pleasure).

And then he gets to the comparison of games to simulations:
>The distinction between objective representation and subjective representation is made clear by a consideration of the differences between simulations and games. A simulation is a serious attempt to accurately represent a real phenomenon in another, more malleable form. A game is an artistically simplified representation of a phenomenon. The simulations designer simplifies reluctantly and only as a concession to material and intellectual limitations. The game designer simplifies deliberately in order to focus the player's attention on those factors the designer judges to be important. The fundamental difference between the two lies in their purposes. A simulation is created for computational or evaluative purposes; a game is created for educational or entertainment purposes.(There is a middle ground where training simulations blend into educational games.) Accuracy is the sine qua non of simulations; clarity the sine qua non of games.
>A simulation bears the same relationship to a game that a technical drawing bears to a painting. A game is not merely a small simulation lacking the degree of detail that a simulation possesses; a game deliberately suppresses detail to accentuate the broader message that the designer wishes to present. Where a simulation is detailed a game is stylized.
I think that Crawford makes good points about the main characteristics and differences between games and simulations. But, a simulation, especially for purposes of education/learning, can be done at various levels of accuracy/fidelity, depending on the educational objectives. (I think that Crawford's observation that a "simulation is created for computational or evaluative purposes" refers to one kind/level of simulation).
So for example, there can be a simulation within a domain to clarify the main concepts and principles, which will be different from a simulation (in the same domain, covering the same topic/area) aimed at teaching some skills and analysis techniques.
Crawford would not disagree with the above, as he realizes that the goal/objective should drive the design of the game, which is doubly true for educational games:
>A game must have a clearly defined goal. This goal must be expressed in terms of the effect that it will have on the player. It is not enough to declare that a game will be enjoyable, fun, exciting, or good; the goal must establish the fantasies that the game will support and the types of emotions it will engender in its audience. Since many games are in some way educational, the goal should in such cases establish what the player will learn. It is entirely appropriate for the game designer to ask how the game will edify its audience.

It's interesting to compare what Todd Blayone has to say about [[Gamification in education]].
A couple of days ago, I attended an [[interesting lecture by Paul Saffo|http://events.stanford.edu/events/339/33941/]] (he calls himself a futurist), where he talked about the "creator economy" (which, according to him, is the next type of economy, after the industrial economy and the consumer economy).
I decided to follow up on [[his website|http://www.saffo.com/]], which led me to an article on [[Edge|http://edge.org/]] about [[A third kind of knowledge|http://edge.org/response-detail/144/how-is-the-internet-changing-the-way-you-think]].
There he mentions:
>Back in the mid-1700s, Samuel Johnson observed that there were two kinds of knowledge: that which you know, and that which you know where to get.
This echoes what I put on [[my Master's degree website|http://ldtprojects.stanford.edu/~hmark/]] about Francis Bacon's famous saying: "knowledge is power". As I said there, access to knowledge is powerful, but timing is critical, so it actually boils down to: ''timely'' access to knowledge is power.
Saffo continues:
>Johnson's insight was crucial because until then scholars relied heavily on the first kind of knowledge, the ability to know and recall scarce information. Abundant print usurped this task and in the process created the need for a new skill i.e., Johnson's knowing "where to get it."
Two asides here about the art of memory:
* I just finished reading the book //Little, Big// by John Crowley, and I enjoyed it a lot! It's //wonder//ful! It has this "softly magical" feeling to it (it is a "fairytale" after all), and it contains "Prose that F. Scott Fitzgerald would envy... the best fantasy yet written by an American." (from the "Praise for" page of the book). In the book, Crowley talks about [[Giordano Bruno|http://en.wikipedia.org/wiki/Giordano_Bruno]], [[The art of memory|http://en.wikipedia.org/wiki/Art_of_memory]], and the mnemonic devices one of the characters (Ariel Hawksquill) is using to remember prodigious amounts of details and relationships.
* Patrick Hutton has [[a very insightful article|Hutton - The Art of Memory.pdf]] on the art of memory, with some history of what, why, and how memory devices were used in the past, as well as a psychoanalytic twist at the end.
Back to Saffo:
>As the store of paper-based knowledge grew, the new skill of research displaced the old skill of recall. A scholar could no longer get by on memory alone i.e., one had to know where and how to get knowledge.
>Now the Internet is changing how we think again...Knowing where to get is now the domain of machines, not humans.
While I agree that the Internet is causing a big change, I don't think it's necessarily changing "how we think", but rather how we "feed" our thinking processes (with knowledge). Although, to the extent that we (or The Machines (a-la "The Matrix"), or the Internet, e.g. Google) //filter// the information that reaches us, it certainly changes //what// we think.
And he continues:
>The Internet is changing our thinking by giving the tremendous power of search to the most casual of users. We have democratized knowledge-finding in the same way 18th century publishing democratized knowledge access.
Referring to the effect of publishing: if the claimed effect is on //how// we think, I'd claim it's a bit of a stretch. But, if we include in publishing the development of [[new notations|The power of a new literacy]], and the development of [[new literacies|Computing Literacy]], then I agree that publishing (through these!) has affected the way we think.
Since searching in the age of the Internet has become so easy, there is a new danger:
>Couch potatoes who once channel-surfed their way through TV's vast wasteland have morphed into mouse potatoes Google-surfing the vaster wasteland of Cyberspace. They are wasting their time more interactively, but they are still wasting their time.
>The Internet has changed our thinking, but if it is to be a change for the better, we must add a third kind of knowledge to Johnson's list i.e., the knowledge of what matters...Without a discipline of knowing what matters, we will merely amuse ourselves to death.
And Saffo concludes:
>Knowing what matters is more than mere relevance. It is the skill of asking questions that have purpose, that lead to larger understandings. Formalizing this skill seems as strange to us today as a dictionary must have seemed in 1780, but I'll bet it emerges just as surely as print abundance led to whole new disciplines devoted to organizing information for easy access. The need to determine what matters will inspire new modes of cyber-discrimination and perhaps even a formal science of determining what matters. Social media hold great promise as discrimination tools, and AI hints at the possibility of cyber-Cicerones who would gently keep us on track as we traverse the vastness of cyberspace in our enquiries. Perhaps the 21st century equivalent of the Great Dictionary will be assembled by a wise machine that knows what matters most.
I find parts of this final vision not really forward-looking, but rather backward-looking (and maybe that's one of the tricks of good futurists!). For example, the prediction (hope?) that a new science of determining what matters will emerge seems to me really old wine in a new bottle. Didn't the Greeks coin the term "ethics" (and the Romans "morality") to mean much the same thing? Won't we have to determine //why// things matter before we can say //what// matters?
And I'm not sure it'll be such an exciting future if machines (even wise and gentle ones) know what matters most (and, without sounding too paranoid: matters most to //whom//?). On the other hand, I still remember the worries people had about personal calculators "taking over" and making humans (and students) "arithmetic imbeciles". That didn't really happen, although it may be true that many people and children today are weaker in mental arithmetic/math.

On the point about needing to evolve out of "bad searching", here's what [[Kai Krause|http://www.byteburg.de/]] has to say about it on [[Edge|http://www.edge.org/3rd_culture/hillis10/hillis10_index.html#krause]]:
>Google may find keywords from billions of pages within fractions of seconds...but then it dumps them in a rather silly list which takes hundreds of seconds to make sense of, scroll, page, examine. And much of it is pure content junk, a lot of it is paid-for junk. Truly getting useful results from any search engine is actually a fine art form in itself. (But far be it for me to complain though, I craved these tools forever.). It is not hard to state the obvious: there should be much less emphasis on memorizing facts and figures, but rather teach how to find them, how to use all available options and tools.
(BTW, the context is [["ARISTOTLE" and "THE KNOWLEDGE WEB"|http://www.edge.org/3rd_culture/hillis10/hillis10_index.html]] by W. Daniel Hillis)
In an article titled [[The Great Forgetting|http://www.theatlantic.com/magazine/archive/2013/11/the-great-forgetting/309516/]] in The Atlantic magazine, Nicholas Carr (of [[Is Google Making Us Stupid?|http://www.theatlantic.com/magazine/archive/2008/07/is-google-making-us-stupid/306868/]] fame) brings up important points about the value //and harm// of automation to human abilities and skills, saying on one hand:
>Every time we off-load a job to a tool or a machine, we free ourselves to climb to a higher pursuit, one requiring greater dexterity, deeper intelligence, or a broader perspective. We may lose something with each upward step, but what we gain is, in the long run, far greater.
But warns, that on the other hand:
>Rather than opening new frontiers of thought and action, software ends up narrowing our focus. We trade subtle, specialized talents for more routine, less distinctive ones.
>Most of us want to believe that automation frees us to spend our time on higher pursuits but doesn’t otherwise alter the way we behave or think. That view is a fallacy—an expression of what scholars of automation call the “substitution myth.” A labor-saving device doesn’t just provide a substitute for some isolated component of a job or other activity. It alters the character of the entire task, including the roles, attitudes, and skills of the people taking part. As Parasuraman and a colleague explained in a 2010 journal article, “Automation does not simply supplant human activity but rather changes it, often in ways unintended and unanticipated by the designers of automation.”

All of which is absolutely true. Automation, if done well, may (and will) change processes, procedures, and roles. Automation, like improvement/refinement, is a powerful human ability and mechanism, and like any other powerful "tool" it should be used carefully and judiciously. We have to be careful not to throw the baby out with the bathwater.

A couple of trivial examples where automation (of calculation, in this case) definitely did not hurt, and indeed freed up mental cycles for higher-level activities: the manual technique for calculating the square root of a number (once taught in school math lessons), and the use of logarithms for computation (in engineering). The knowledge of doing these tasks by hand "has been lost", but the fact that we understand the meaning of square roots and know how/when to use them, and the fact that we can use calculators/computers to do arithmetic without resorting to logarithms, has brought many benefits to human progress.

At the end of the article Carr brings up an important philosophical question about human nature or essence, and about how integral learning and knowing are to it.

>Whether it’s a pilot on a flight deck, a doctor in an examination room, or an Inuit hunter on an ice floe, knowing demands doing. One of the most remarkable things about us is also one of the easiest to overlook: each time we collide with the real, we deepen our understanding of the world and become more fully a part of it. While we’re wrestling with a difficult task, we may be motivated by an anticipation of the ends of our labor, but it’s the work itself—the means—that makes us who we are. Computer automation severs the ends from the means. It makes getting what we want easier, but it distances us from the work of knowing. As we transform ourselves into creatures of the screen, we face an existential question: Does our essence still lie in what we know, or are we now content to be defined by what we want? If we don’t grapple with that question ourselves, our gadgets will be happy to answer it for us.

While I can understand Carr's point about the strong allure and pull of satisfying all our wants/wishes immediately, it's pretty clear to me that curiosity, and therefore learning, and therefore knowing, is embedded in our nature and is therefore part of the definition of who we are. If anything, instant gratification is a strong motivator (and reward), and as such is also part of who we are (so it's not a case of one ''or'' the other, but rather one ''and'' the other, that define us). Still, Carr has a point about the danger of falling into the trap of focusing on the reward and not being willing or able to do and enjoy the journey.
[[from The chess mentality|http://www.research.ibm.com/deepblue/learn/html/e.8.4.html]] by William H. Calvin

compare to [[The end of an era, the beginning of another? HAL, Deep Blue and Kasparov]]

Some animals have gotten to be so fancy that they simulate a course of action before taking even a tentative first step. The chess master, who looks a half-dozen moves ahead, is a prime example -- as is the army general or poker player who thinks through bluff and counterbluff before acting. These are only extreme examples of how to make and compare alternative plans, but they illustrate the same sort of process that we all go through when simply contemplating the leftovers in the refrigerator, trying to figure out a combination that will avoid another trip to the grocery store.

Many animals look ahead in a limited way, predicting when winter is coming. But that requires only the simplest of nighttime-length-sensitive hormonal mechanisms, not even a brain. It's a novel course of action, one that neither you nor any of your ancestors has done before, that is the difficult part.

And not even that is hard, if you have the time to grope around. A goal, and some feedback about progress, suffices for many novel situations. But if I have to pick up a cup of uncertain weight and bring it to my lips in less than a quarter of a second, feedback doesn't have time to help -- and so I'll hit my nose if I haven't made the perfect plan in advance. Personally, I think that the extensive planning needed for such ballistic movements as throwing, hammering, kicking, clubbing, and spitting has been very important in the ice age evolution of the human brain -- and that we use the same neural machinery for planning what to speak next, or listen to music, or to plan a dinner of leftovers.

Creativity -- indeed, the whole high end of intelligence and consciousness -- involves playing mental games that shape up quality. Humans can simulate future courses of action and weed out the nonsense off-line. As the philosopher Karl Popper has said, this "permits our hypotheses to die in our stead."

What sort of mental machinery might it take to do this mental feat? I suggest, in How Brains Think (Science Masters), that our brains perform a vastly speeded-up version of the same Darwinian process used in evolving new plant and animal species, the same process seen in the immune response in the days and weeks following a flu shot. In The Cerebral Code (MIT Press), I discuss the cerebral circuitry that does the job.

By borrowing the mental structures for syntax to judge other combinations of possible actions, we can extend our plan-ahead abilities and our intelligence. To some extent, this is done by talking silently to ourselves, making narratives out of what might happen next, and then applying syntax-like rules of combination to rate a candidate scenario as dangerous nonsense, mere nonsense, possible, likely, or logical. But our intelligent guessing is not limited to language-like constructs; indeed, we may shout "Eureka!" when a set of mental relationships clicks into place, yet have trouble expressing this understanding verbally for weeks thereafter. What is it about human brains that allows us to be so good at guessing complicated relationships?

We create sequences when we speak a sentence that we've never spoken before or improvise at jazz or plan a career. We invent dance steps. Even as four-year-olds, we can play roles, achieving a level of abstraction (that "willing suspension of disbelief") not seen in even the smartest apes. Many of our beyond-the-apes behaviors involve novel strings of behaviors, often compounded: phonemes chunked into words, words into word phrases, and (as in this paragraph) word phrases into complicated sentences with nested ideas.

Rules for made-up games illustrate the memory aspect of this novelty: we must judge possible moves against serial-order rules, for example, in solitaire where you must alternate colors as you place cards in descending order. Preschool children will even make up such arbitrary rules, and then judge possible actions against them. We abandon many of the possible moves that we consider in a card game once we check them out against our serial-order memories of the rules. In shaping up a novel sentence to speak, we are checking our candidate word strings against overlearned ordering rules that we call syntax and grammar. Our plan-ahead abilities gradually develop from childhood narratives and are a major foundation for ethical choices, as we imagine a course of action, imagine its effects on others, and decide not to do it.

That's the mentality that chess illustrates so well. Humanity wouldn't be human (or humane) without it.

----
copyright 1997 William H. Calvin

William H. Calvin is a theoretical neurophysiologist at the University of Washington in Seattle. He is the author of nine books, including The Cerebral Code, How Brains Think, The River that Flows Uphill, and, with the neurosurgeon George A. Ojemann, Conversations with Neil's Brain.

For a further discussion of these topics, visit William H. Calvin's Web site.
Serendipity Alert! This is a case of "going down the rabbit hole" (actually, as you'll see momentarily, it __ends up__ being a hare :)

I came across an article (cleverly) titled [[Fox News|http://www.newyorker.com/magazine/2015/05/04/fox-news]] ("The truth about animal fables") in The New Yorker, about "What the stories of [[Reynard|http://fables.wikia.com/wiki/Reynard_the_Fox]] [an intelligent, talking fox] tell us about ourselves".

The article mentions Aesop's story about the //Tortoise and the Hare// and points out that the [[Penguin Classics edition|http://www.penguinclassics.co.uk/books/the-complete-fables/9780140446494/]] renders this beloved story in 5 sentences. Now that piqued my interest. Five sentences!
So, down the rabbit hole we go ...

!!!!The literary virtue of brevity
I was hooked. How can you condense the whole narrative and moral lesson of the Tortoise and the Hare into five sentences? I'm sure you remember the plot-line. From my childhood I don't remember it being a "swift" (ha!) story. Maybe because the publisher of the book of fables //we had at home// assumed it'd be a bed-time/story-time reading, and 5 sentences "just won't do" :)
Any (successful) condensing and distillation of wisdom intrigues me (hence my love of [[quotations|Quotes]]), so I was "compelled" to go down that rabbit hole...

Here is a //six-line// translation by George Fyler Townsend (1867) (sorry for the 'rambling' version; I could not find the 'concise' one :)
>A hare one day ridiculed the short feet and slow pace of the Tortoise, who replied, laughing: 'Though you be swift as the wind, I will beat you in a race.' 
>The Hare, believing her assertion to be simply impossible, assented to the proposal; and they agreed that the Fox should choose the course and fix the goal. 
>On the day appointed for the race the two started together. 
>The Tortoise never for a moment stopped, but went on with a slow but steady pace straight to the end of the course. 
>The Hare, lying down by the wayside, fell fast asleep. 
>At last waking up, and moving as fast as he could, he saw the Tortoise had reached the goal, and was comfortably dozing after her fatigue.
>//Slow but steady wins the race.//

!!!!The (proverbial) morality lesson(s)
The "beaten path" interpretations of the fable go something along (one or more of) the following lines:
* don't boast; don't be over-confident; don't brag; don't count your chickens before they hatch (or something similar)
* "the more haste, the worse speed"; "the race is not to the swift" (קֹהֶלֶת [[Ecclesiastes 9.11|http://www.mechon-mamre.org/p/pt/pt3109.htm]]; כִּי לֹא לַקַּלִּים הַמֵּרוֹץ)
* ingenuity/trickery/doggedness can pay off in overcoming a stronger/lazy/over-confident opponent (pick your noun-adjective pair)
* or even, "Action is the Business of Life, and there’s no Thought of ever coming to the End of our Journey in time, if we sleep by the Way." (by Sir Roger L'Estrange (1692))
* and, "many people have good natural abilities which are ruined by idleness; on the other hand, sobriety, zeal and perseverance can prevail over indolence."
and so on; all moral lessons we grew up with; some wiser and truer than others ...

!!!!The moral ambiguity
In most interpretations, the tortoise is the winner, both literally and morally, and the hare is the loser and fool.
But, we don't //really// know what exactly happened in that ancient race (and the fact that we have only 5 or 6 lines to go on, doesn't help in this case :). 

Here is a distillation of a more [[modern interpretation of this story|http://www.fulltextarchive.com/page/Fifty-One-Tales/#p270]] by Lord Dunsany (Edward Plunkett) from 1915, where he claims to tell "The true history of the hare and the tortoise".
>[T]hey were off, and suddenly there was a hush.
>The Hare dashed off for about a hundred yards, then he looked round to see where his rival was.
>"It is rather absurd," he said, "to race with a Tortoise." And he sat down and scratched himself. [...]
>And after a while his rival drew near to him.
>"There comes that damned Tortoise," said the Hare, and he got up and ran as hard as could be so that he should not let the Tortoise beat him. [...]
>The Hare ran on for nearly three hundred yards, nearly in fact as far as the winning-post, when it suddenly struck him what a fool he looked running races with a Tortoise who was nowhere in sight, and he sat down again and scratched. [...]
>"Whatever is the use of it?" said the Hare, and this time he stopped for good. Some say he slept.

This version is interesting for two reasons:
* In this interpretation the hare is the thoughtful, logical, matter-of-fact beast. But in addition to Dunsany shedding a different light on the Hare (and the story), he also mocks the moral conclusions of the story, by ridiculing the cliches/bromides pronounced by the spectators of the race and "peppered" throughout the story, for example:
** "Run hard. Run hard," but also, at the same time "Let him rest."
** "Hard shell and hard living: that's what has done it."
** "It is a glorious victory for the forces of swiftness." (referring to the //tortoise//, mind you!)
And
* The second interesting twist is that drawing the wrong conclusions is not only ridiculous, but also dangerous. Dunsany continues:
>And the reason that this version of the race is not widely known is that very few of those that witnessed it survived the great forest-fire that happened shortly after. It came up over the weald by night with a great wind. The Hare and the Tortoise and a very few of the beasts saw it far off from a high bare hill that was at the edge of the trees, and they hurriedly called a meeting to decide what messenger they should send to warn the beasts in the forest.
>
>They sent the Tortoise.

It's fascinating how people take this story to different places, and draw different conclusions (and moral standards!) ...

[[Dunsany's version|http://www.fulltextarchive.com/page/Fifty-One-Tales/#p270]] can come across as snobbish ridicule of "the stupid common sense" of the masses. But I don't think that this is his point. I think that he is trying to point to an important truth, and to provide a much deeper wisdom, namely that "things are not always as they seem": it is dangerous to apply (moral) standards blindly; bromides aren't always right; wisdom is not practiced by simplistically interpreting and applying formulas; context is important!

!!!!The paradox connection
And since we just mentioned going different places (or not :), here's a related (in a sense :) ''logical'' paradox (which can be solved with mathematical reasoning -- the [[convergence of a mathematical series of fractions|http://en.wikipedia.org/wiki/1/2_%2B_1/4_%2B_1/8_%2B_1/16_%2B_%E2%8B%AF]]).
You may have noticed that the Tortoise and Hare race in Aesop's classic Greek fable echoes another classic: [[Zeno's Paradox|http://en.wikipedia.org/wiki/Zeno%27s_paradoxes]], in which the Tortoise races Achilles. Zeno "reasons" that even though Achilles is faster, if the slower tortoise is given a lead (head start), the quicker runner can never overtake the slower, since the pursuer must first reach the point whence the pursued started, ad infinitum, and so the slower will always hold a lead.

In a very (Lewis) Carrollian way, the fable of the hare and the tortoise is an "explanation" of why "Zeno is right" ([[which he's not|http://www.slate.com/articles/health_and_science/science/2014/03/zeno_s_paradox_how_to_explain_the_solution_to_achilles_and_the_tortoise.html]]), and why the hare (or Achilles) will never (not even in wonderland :) catch up. They have a tough time -- now both math and/or morality are working against them...
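To actually watch the series converge (a quick sketch with made-up numbers: a 100-meter head start, with Achilles running ten times as fast as the tortoise), note that each of Zeno's "catch-up" stages is a tenth of the previous one, and all infinitely many of them sum to a finite point on the track:
{{{
# Zeno's infinitely many "catch-up" stages form a geometric series.
# Made-up numbers: a 100 m head start, with Achilles 10x as fast as
# the tortoise, so each stage is 1/10 the length of the previous one.

head_start = 100.0
ratio = 1.0 / 10.0  # tortoise speed / Achilles speed

total = 0.0
stage = head_start
for _ in range(30):  # 30 stages is plenty to see the convergence
    total += stage   # distance Achilles covers in this stage
    stage *= ratio   # the tortoise's remaining lead shrinks tenfold

closed_form = head_start / (1 - ratio)  # sum of the geometric series
print(total, closed_form)  # both ~111.11 m: Achilles passes the tortoise there
}}}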

More on Wonderland races in a moment (or two; depending on how fast (or slow) you read the computer program below :)

!!!!The Computer Science connection
This rabbit hole (shall we switch to hare?) leads even deeper (or wider). The story of the tortoise and the hare has a useful application in computer science (I'm not making this up!): an algorithm implementing a "race" with different "movement speeds"; in short, the whole chicken caboodle (or, turtle soup :)

This is an [[algorithm for Detecting a Loop in a Singly Linked List|http://codingfreak.blogspot.com/2012/09/detecting-loop-in-singly-linked-list_22.html]]. It was invented by Robert Floyd, who called it "Tortoise and Hare".
The description of this algorithm is pretty simple. It starts like this:
>Let us take 2 pointers namely slow Pointer and fast Pointer to traverse a Singly Linked List at different speeds (see diagram below). A slow Pointer (Also called Tortoise) moves one step forward while fast Pointer (Also called Hare) moves 2 steps forward. And these are the steps of the algorithm:
>
>1. Start Tortoise and Hare at the first node of the List.
>2. If Hare reaches end of the List, return as there is no loop in the list.
>3. Else move Hare one step forward.		(Hare needs to move at twice the speed of Tortoise - so this is an extra move)
>4. If Hare reaches end of the List, return as there is no loop in the list.
>5. Else move Hare and Tortoise one step forward.		(Hare and Tortoise move 1 step. But because of step 3, Hare moves faster overall)
>6. If Hare and Tortoise are pointing to same Node, return; we found a loop in the list.
>7. Else go back to step 2.
>[img[Tortoise and Hare Algorithm|resources/single_linked_list_small.png][resources/single_linked_list.png]]

I have [[implemented a Python program|https://trinket.io/python/f172565c06]] (which you can run and modify in the browser) to demonstrate how it works:
{{{
# An algorithm for detecting a loop in a singly linked list.
# Invented by Robert Floyd, who called it "Tortoise and Hare".
# Implemented by hmark.
# The singly linked list is implemented as a list of nodes.
# Each node in the list is itself a 2-element list, where
# the first element is the number of the node, and
# the second element is the number of the node it points to.

# In looped_list, node 0 points to node 1, and so on, until node 6
# points back to node 3, creating a loop; the end node 7 is
# "detached" from the list.
looped_list = [ [0,1], [1,2], [2,3], [3,4], [4,5], [5,6], [6,3], [7,8] ]

# In straight_list, node 0 points to node 1, and so on, all the way
# to the last node (7), creating a straight linked list with no loops.
straight_list = [ [0,1], [1,2], [2,3], [3,4], [4,5], [5,6], [6,7], [7,8] ]

def is_loopy(path):
    tortoise = path[0]  # slow pointer, starts at the beginning of the list
    hare = path[0]      # fast pointer, also starts at the beginning
    end = path[-1]      # the last node
    while True:
        if hare == end:
            return False
        hare = path[hare[1]]          # move the fast pointer to the next node
        if hare == end:
            return False
        hare = path[hare[1]]          # move the fast pointer again: twice the speed
        tortoise = path[tortoise[1]]  # move the slow pointer to the next node
        if hare == tortoise:
            return True               # the pointers met: there is a loop

print(looped_list)
print(is_loopy(looped_list))    # True: there is a loop
print()
print(straight_list)
print(is_loopy(straight_list))  # False: no loop
}}}

!!!!The return to Alice in Wonderland
All good things lead to //Alice in Wonderland//, or to //Through the Looking Glass// (or they just come to an end :)

As you may know, Lewis Carroll has multiple "ridiculous races" in these stories. 

__The first one__ (and running through Wonderland) is the one where Alice chases after the rabbit down the rabbit hole:
>burning with curiosity, [Alice] ran across the field after [the rabbit], and fortunately was just in time to see it pop down a large rabbit-hole under the hedge.
>In another moment down went Alice after it, never once considering how in the world she was to get out again. 

As Alice was falling down the rabbit hole, she wasn't so sure how long she had been falling, or when it would end, if ever:
>Down, down, down. Would the fall never come to an end? 'I wonder how many miles I've fallen by this time?' she said aloud.
But isn't it the same with "trains of thought" and "flights of fancy"? You never know where you'll end up (and the piece you are reading now is proof of that :)

As an aside: if you think about it, an object falling through a __straight line tunnel__ connecting two points on the surface of the earth would oscillate back and forth between these points and would never stop (if we ignore friction and the [[Coriolis effect|http://en.wikipedia.org/wiki/Coriolis_effect]] due to Earth's rotation). Thankfully, our reflections and associations can be (and will be :) stopped.
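As a back-of-the-envelope check of that aside (the textbook "gravity train" result, assuming a uniform-density, non-rotating Earth with no friction): inside such a planet gravity grows linearly with distance from the center, so the motion along any straight chord is simple harmonic, and the one-way trip takes about 42 minutes no matter which two surface points the tunnel connects (a number Deep Thought would surely approve of :)
{{{
import math

# "Gravity train" one-way travel time, assuming a uniform-density,
# non-rotating Earth: motion along any straight chord is simple
# harmonic with period T = 2*pi*sqrt(R/g), independent of the chord.

R = 6.371e6  # Earth's mean radius, in meters
g = 9.81     # surface gravity, in m/s^2

period = 2 * math.pi * math.sqrt(R / g)  # full back-and-forth oscillation
one_way_minutes = period / 2 / 60
print(f"one-way trip: {one_way_minutes:.1f} minutes")  # about 42 minutes
}}}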

But, regardless of the physics, Carroll makes the rabbit fast enough and non-oscillating, and for a while it doesn't look like Alice will be able to catch up, since //this// rabbit is too nervous and stressed out to take a nap ...


__Second, there is the caucus race__ (//Alice in Wonderland//, chapter 3: Caucus Race and a Long Tale):
> 'What is a Caucus-race?' said Alice; not that she wanted much to know, but the Dodo had paused as if it thought that somebody ought to speak, and no one else seemed inclined to say anything.
>'Why,' said the Dodo, 'the best way to explain it is to do it.' (And, as you might like to try the thing yourself, some winter day, I will tell you how the Dodo managed it.)
>
>First it marked out a race-course, in a sort of circle, ('the exact shape doesn't matter,' it said,) and then all the party were placed along the course, here and there. There was no 'One, two, three, and away,' but they began running when they liked, and left off when they liked, so that it was not easy to know when the race was over. However, when they had been running half an hour or so, and were quite dry again, the Dodo suddenly called out 'The race is over!' and they all crowded round it, panting, and asking, 'But who has won?'
>This question the Dodo could not answer without a great deal of thought, and it sat for a long time with one finger pressed upon its forehead (the position in which you usually see Shakespeare, in the pictures of him), while the rest waited in silence. At last the Dodo said, 'Everybody has won, and all must have prizes.'

__And there is the running in place__ of Alice and the Red Queen (//Through the Looking Glass//, chapter 2: The Garden of Live Flowers):
>Just at this moment, somehow or other, they began to run. Alice never could quite make out, in thinking it over afterwards, how it was that they began: all she remembers is, that they were running hand in hand, and the Queen went so fast that it was all she could do to keep up with her: and still the Queen kept crying 'Faster! Faster!' but Alice felt she //could not// go faster, though she had not breath left to say so.
>The most curious part of the thing was, that the trees and the other things round them never changed their places at all: however fast they went, they never seemed to pass anything. 'I wonder if all the things move along with us?' thought poor puzzled Alice. And the Queen seemed to guess her thoughts, for she cried, 'Faster! Don't try to talk!'
>
>Not that Alice had any idea of doing //that//. She felt as if she would never be able to talk again, she was getting so much out of breath: and still the Queen cried 'Faster! Faster!' and dragged her along. 'Are we nearly there?' Alice managed to pant out at last.
>'Nearly there!' the Queen repeated. 'Why, we passed it ten minutes ago! Faster!' And they ran on for a time in silence, with the wind whistling in Alice's ears, and almost blowing her hair off her head, she fancied.
>'Now! Now!' cried the Queen. 'Faster! Faster!' And they went so fast that at last they seemed to skim through the air, hardly touching the ground with their feet, till suddenly, just as Alice was getting quite exhausted, they stopped, and she found herself sitting on the ground, breathless and giddy.
>
>The Queen propped her up against a tree, and said kindly, 'You may rest a little now.'
>Alice looked round her in great surprise. 'Why, I do believe we've been under this tree the whole time! Everything's just as it was!'
>'Of course it is,' said the Queen, 'what would you have it?'
>'Well, in //our// country,' said Alice, still panting a little, 'you'd generally get to somewhere else -- if you ran very fast for a long time, as we've been doing.'
>'A slow sort of country!' said the Queen. 'Now, //here//, you see, it takes all the running //you// can do, to keep in the same place. If you want to get somewhere else, you must run at least twice as fast as that!'
>'I'd rather not try, please!' said Alice. 'I'm quite content to stay here -- only I //am// so hot and thirsty!' 

__And then there are Alice and the White King__ hurrying to see the Lion and the Unicorn fighting (//Through the Looking Glass//, chapter 7: The Lion and the Unicorn):
>[The White King said:]'Let's run and see them.' And they trotted off, Alice repeating to herself, as she ran, the words of the old song:
>>   The Lion and the Unicorn were fighting for the crown:
>>  The Lion beat the Unicorn all round the town.
>>   Some gave them white bread, some gave them brown;
>>   Some gave them plum-cake and drummed them out of town.'
>
>'Does -- the one -- that wins -- get the crown?' she asked, as well as she could, for the run was putting her quite out of breath.
>'Dear me, no!' said the King. 'What an idea!'
>'Would you -- be good enough,' Alice panted out, after running a little further, 'to stop a minute -- just to get -- one's breath again?'
>'I'm //good// enough,' the King said, 'only I'm not strong enough. You see, a minute goes by so fearfully quick. You might as well try to stop a [[Bandersnatch|http://en.wikipedia.org/wiki/Bandersnatch]]!'
>Alice had no more breath for talking, so they trotted on in silence, till they came in sight of a great crowd, in the middle of which the Lion and Unicorn were fighting.
The silly merriment continues, but we will stop here, and follow the (sage) advice the King of Hearts gives the rabbit (//Alice in Wonderland//, chapter 12: Alice's Evidence):
>'Read [the accusing verses],' said the King [to the rabbit].
>The White Rabbit put on his spectacles. 'Where shall I begin, please your Majesty?' he asked.
>'Begin at the beginning,' the King said gravely, 'and go on till you come to the end: then stop.'
So we do. We did. We are done.
(And, as promised at the beginning of this piece, ''it'' does ''end'' with a rabbit; the (only?) open question is what //it// refers to :))

Public perception of artificial intelligence (AI)

[img[T-Shirt|resources/chess_game.jpeg][resources/chess_game.jpeg]]
"I remember when you could only lose a chess game to a supercomputer." - New Yorker Cartoon, by: Avi Steinberg

Sometimes a work of science fiction tells more about the time of its creation than about the future it purports to predict. In the Stanley Kubrick/Arthur C. Clarke 1968 epic film 2001: A Space Odyssey, the central character -- the HAL 9000 computer -- [[talks amiably|HAL 9000 chatbot - educational multimedia design principles]], renders aesthetic judgments of drawings, and recognizes the emotions of the crew, but also murders four of the five astronauts in a fit of paranoia and concern for the mission. At the time of the filming -- before anyone had a Ph.D. in computer science, before the PC and Macintosh, before twenty-somethings started buying Ferraris from the IPOs of their software companies -- the general public knew little about computers and had virtually no direct experience with them. As such, the film's compelling and carefully considered representation of HAL and his abilities embodied almost as much hope and fear as it did knowledge and analysis.

Shortly after we meet HAL, he plays chess against astronaut Frank Poole, and this scene tells us a great deal about computers and 1960s society's view of them. First, it is significant what is not shown. Kubrick originally filmed the scene with Dave playing a new game, "Pentominoes," then being promoted by the Milton Bradley game company. Kubrick rejected this because although Pentominoes might have gone on to popularity, filmgoers wouldn't quite know what the astronaut was doing (programming or controlling some aspect of the ship, perhaps?). Even if they did recognize it as a game, viewers wouldn't know just how difficult it was and thus how impressive HAL's inevitable win would be. Kubrick chose chess in large part to show how "intelligent" HAL was; chess has long been held up as a paradigm of the heights of human logic and reasoning. (It should be pointed out that Kubrick is an avid chess player who, as a teenager in the 1950s, hustled chess in the parks of Brooklyn. Moreover, in the novel 2001, HAL is programmed to lose 50% of the time -- to keep things interesting for the astronauts.)

Next, consider the particular sequence of moves Kubrick shows. These are moves 13 through 15 from an obscure game between two German masters played in Hamburg in 1913 (the Ruy Lopez or Spanish opening). We can presume that Kubrick chose this set of moves because of the cleverness of the checkmate -- clever enough that an astronaut might not see it, yet short and easy enough for chess-literate viewers to recognize and admire.

Recall, too, Frank's reaction to his loss -- or more specifically his lack of reaction. He clearly accepts defeat without anguish. His pause is brief and he doesn't even take time to confirm the mate carefully -- he knows HAL is correct in his announcement of the checkmate. But Frank's lack of reaction is very significant. Although it may seem a bit quaint in the late 1990s, at the time of the filming of 2001 there was, in the public's mind, a clear pro-human/anti-computer sentiment, at least as far as chess was concerned. Even in the human-machine tournaments of the '70s and early '80s, audiences rooted for the human and against the machine. Nowadays, few of us feel deeply threatened by a computer beating a world chess champion -- any more than we do at a motorcycle beating an Olympic sprinter [Haggai: or [[Dijkstra's 'swimming' submarine|The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.]] ]. True, chess masters will be the most distraught (Kasparov has called his last tournament "species defining"), and true, it will garner great public interest. Among scientists working in the field it is important too, but for other reasons, which I'll explain below.

In short, in 2001 Kubrick and Clarke were absolutely right to predict that

* we would play games with computers for diversion (this was not obvious in the 1960s, incidentally, before Pong, Nintendo and Sega Genesis)
* computers would become superb chess players, surely able to beat an astronaut.

Let me digress and take the Kasparov/Deep Blue rematch as a good chance to dispel a myth that has grown up around 2001. The story has it that the name HAL was chosen because each letter is just one step ahead of IBM. However, this is pure coincidence; in fact, the first incarnation of the computer had a woman's voice and was called "Athena," after the goddess of wisdom. When someone pointed out the spurious association between HAL and IBM, Kubrick wanted to change the computer's name and refilm the scenes but was dissuaded because of production costs. (For the record, HAL stands for "Heuristically programmed ALgorithmic" computer.)

But back to HAL: Yes, HAL was brilliant and amiable but, at least in the public's mind, capable of evil. His triumph over Frank in chess presages the murder in outer space. HAL surely was "intelligent," and his prowess at chess helps to convince us of that.

But how do we "measure" or test his intelligence?

!!!The chess Turing test
(compare with [[Brian Christian's experience|The Most Human Human - by Brian Christian]] participating in the annual [[Turing Test|https://plato.stanford.edu/entries/turing-test/]] Event in England).

The question "Is a machine intelligent?" is a notoriously difficult one and thus in 1950 the computer science pioneer Alan Turing proposed his famous test: There are two keyboards in front of you, one connected to a computer, the other leads to a person. You type in questions on any topic you like; both the computer and the human type back responses that you read on the respective computer screens. If you cannot reliably determine which was the person and which the machine, then we say the machine has passed the Turing test. HAL, of course, passed with flying colors. No computer can pass such an unrestricted Turing test today   or will for quite some time. For this reason, we restrict the test, to give the computer a fighting chance.

One such restricted test is a chess Turing test: You play chess against an opponent whose moves are presented by means of a computer screen, and you try to determine whether your opponent is a computer or a human. Chess Turing tests have been conducted, but with a slight twist. You get to see only the recorded moves of a previous game and must state if either opponent -- or possibly both, or possibly neither -- was a computer. This is just a bit tougher than a true chess Turing test, because now you're seeing a game played by others, one where the opponents were trying to win. Consider: If you were playing and your goal was to determine whether the opponent is a computer, you might be clever and lose the game in a particularly interesting way that might reveal the identity of your opponent.

In such modified chess Turing tests (based on recorded games), just as in the original Turing test, the better the computer, the harder it is to distinguish between human and machine opponents. In an informal experiment, Kasparov could occasionally, but not reliably, guess from recorded games whether opponents were human or machine. We now know that we can make computers excel on limited problems, such as chess, or controlling an oil refinery, or even complicated medical diagnoses, but we are very far indeed from creating a computer to pass an unrestricted Turing test. But more on that below.

!!!Should we mimic humans to achieve AI?
Should we try to make artificial intelligence by duplicating how humans do it, or instead try to exploit the particular strengths of machines? Humans are slow but exquisitely good at pattern recognition and strategy; computers, on the other hand, are extremely fast and have superb memories but are annoyingly poor at pattern recognition and complex strategy. Kasparov can examine roughly two positions per second; Deep Blue has special-purpose hardware that enables it to evaluate nearly a quarter of a billion chess positions per second.

Here is an illustration of the difference, taken from chess: Controlled psychological experiments have shown that human chess masters are far more accurate than non-chess players at remembering chess board positions taken from real games, where the placement of pieces arose in strategic play and represented meaningful tactical positions. However, these masters were no better than non-chess players at memorizing random arrangements of pieces. Chess masters remember positions based on certain patterns, alignments and structure whereas, of course, computers have no difficulty remembering -- storing -- all the games or random arrangements ever made and need no "meaning" in the placements.

There are other differences too, of course. Humans have emotions -- they have pride at winning, shame at a bad loss, satisfaction when exacting revenge; not so computers (well, not yet). Computers don't get tired, and don't have "bad" days -- at least so long as the hardware doesn't break down!

Early chess systems sought to duplicate or mimic the methods of humans. But this proved to be far too difficult: What precisely suggests any particular move? Instead, successful chess programs capitalize on the particular strengths of computers: rapid and massive parallel search. This is quantified by how many moves ahead, or "plies," the computer can search in a given time. If I move, that's one ply; if you then also move, that's two plies, and so forth. Naturally, the deeper the computer can search, the stronger a player it is. The interesting thing about computer chess is the extremely good correlation between the average depth of search (measured in plies) and strength at chess, as quantified by the player's rating.
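
To make fixed-depth, ply-by-ply search concrete, here is a minimal sketch in Python of negamax, the compact form of minimax search at the core of brute-force game programs. It is demonstrated on tic-tac-toe as a stand-in for chess -- Deep Blue's actual search (massively parallel alpha-beta on custom hardware) is far more elaborate, so treat this as an illustration of the idea, not of Deep Blue's code:
{{{
# Minimal fixed-depth negamax search, demonstrated on tic-tac-toe.
LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def negamax(board, player, depth):
    """Search `depth` plies ahead; return the best achievable score for
    `player` (+1 win, 0 draw/unknown, -1 loss). Each recursive call is
    one ply; the sign flip turns the opponent's best into our worst."""
    w = winner(board)
    if w is not None:
        return 1 if w == player else -1
    moves = [i for i, c in enumerate(board) if c == '.']
    if depth == 0 or not moves:
        return 0                # search horizon reached, or a drawn position
    opponent = 'O' if player == 'X' else 'X'
    best = -2
    for i in moves:
        child = board[:i] + player + board[i+1:]
        best = max(best, -negamax(child, opponent, depth - 1))
    return best

# X to move; searching 9 plies (the whole game) proves a draw with best play:
print(negamax('.' * 9, 'X', 9))   # prints 0
}}}
A deeper `depth` means stronger play at an exponential cost in positions examined -- which is exactly the ply-versus-rating correlation described above.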

Humans have an uncanny ability to see or "sniff out" sequences of moves that are likely to pay off down the line -- an example of sophisticated pattern recognition -- as if to say, "Gosh, I'm not sure why, but attacking his king's knight's pawn with my queen's bishop looks promising... let me explore that line of attack..." As such, human grandmasters don't waste time on unpromising sequences but tend to look far down the promising avenues and see traps beyond the search "horizon" of a brute-force machine search. This, in fact, is one of the strategies employed by grandmasters, including Kasparov, when playing against computers. Well, it turns out that in the 1913 Hamburg game used in the scene in 2001, the earliest moves (which would have occurred before the film's scene) were indeed quite "trappy" in this way; assuming HAL played them, we would say that he would pass the chess Turing test -- not surprising, since the game actually came from a match between humans.

Another difference between human and computer chess play involves learning, or adaptability. Machines play the same way again and again, and given a particular setup will always play the same move. During a game, however, a human grandmaster might notice that his opponent is aggressive or conservative or risky or trappy, and change his own style of play accordingly. Humans even do this from game to game in a tournament against a single opponent. Garry Kasparov stated quite clearly that his tournament victory over Deep Blue in February 1996 came because he could analyze the first game and play to exploit Deep Blue's evident weaknesses in the next. Moreover, Kasparov changed his own style of play in the middle of games, and Deep Blue was not prepared to adapt accordingly.

The ability of computers to do extensive search has had ramifications for endgame play, where only a few pieces are left on the board, such as a king, bishop and knight versus a king and a pawn. Certain endgame arrangements were always thought to represent a draw -- no human had ever seen a way to win. Nevertheless, through deep -- very deep -- computer searches, some of these positions were proven to be a forced win. In one setup, a Connection Machine supercomputer found a forced checkmate in an astounding 249 moves! A German scientific paper on the subject described playing against a computer armed with these endgame sequences as "Schachspielen wie gegen Gott" -- playing chess as if against God. Garry Kasparov himself said it best: "Sometimes quantity becomes quality."
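
The technique behind such endgame databases is retrograde (backward) analysis: label the terminal positions first, then work backward, marking a position "won" as soon as some move from it reaches a position lost for the opponent. Here is a minimal sketch of that backward induction on a deliberately tiny toy game (a subtraction game, not chess; the game and all numbers are illustrative assumptions):
{{{
def solve_subtraction_game(n, takes=(1, 2, 3)):
    """Backward induction, the idea behind endgame tablebases: start
    from the terminal position and propagate win/loss labels backward.
    Position 0 (no stones left; the side to move cannot move) is a loss."""
    win = [False] * (n + 1)       # win[k]: side to move wins with k stones
    for k in range(1, n + 1):
        # k is a win iff some legal move reaches a position that is a
        # loss for the opponent -- just as tablebases propagate "mate in m".
        win[k] = any(not win[k - t] for t in takes if t <= k)
    return win

# With 1-3 stones removable per turn, the lost positions are multiples of 4:
print([k for k, w in enumerate(solve_subtraction_game(20)) if not w])
}}}
Chess tablebases do the same thing over billions of piece arrangements rather than 21 stone counts, which is why the forced wins they uncover -- like the 249-move mate -- can lie far beyond any human's horizon.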

Once some of these endgames were shown to be wins, a few tournament players tried to memorize the necessary sequence of moves. This leads to a fascinating point: Whereas researchers began by trying to make computer chess systems imitate the style of humans, paradoxically it turns out that some humans play their endgames by imitating computers! Thus, at least in some aspects of endgame play, machines are clearly superior to humans.

It turns out that the Japanese board game "go" will not succumb to such brute-force methods -- there are simply too many possible moves for even the most powerful supercomputer imaginable to examine. Instead, "real" AI will be needed -- intelligence based on pattern recognition, "insight" and strategy. Indeed, for those of us who work in pattern recognition, machine learning, or various fields allied with artificial intelligence, it is the weaknesses of Deep Blue that are the most interesting. How should we program computers to recognize and understand the style of their opponent's play and adapt accordingly? How should we program the machine to distinguish the most promising lines of attack from the ones that are not likely to pay off? How do we program machines to make complex plans? Although some aspects of the Deep Blue system employ crude versions of methods we know are important in human intelligence (in particular, in scoring the quality of a position), their weaknesses are compensated for by the brute-force search through possible moves.

It must be emphasized, too, that even if such subtle and complicated techniques of pattern recognition, reasoning and so forth are ultimately achieved in chess, there would still remain an enormous gulf between their use in chess and in other general aspects of intelligence, for instance, in planning a story or recognizing a scene. For these, we may have to duplicate the human, at least at some level of abstraction.

!!!A new era?
I am assuming that computers will ultimately triumph over humans in the domain of chess, if not at this Deep Blue/Kasparov tournament, then in the not-too-distant future. True, technologists tend to be optimists, and the predictions of a number of computer scientists vis-a-vis chess -- from Alan Turing to Marvin Minsky to Raymond Kurzweil -- have been overly optimistic. Furthermore, humans will continue to improve -- surely Kasparov is improving, in part from his competition with computers. Nevertheless the trends are clear enough, and although I will not hazard a guess as to when it will occur, I am confident that someday a computer will reign supreme in chess.

It has been said that when computers become world champions, we will do one of three things:

* think more of computers
* think less of humans, or
* think less of the game of chess.

My view is that we will think just a bit more of computers (at least for these and related problems) and still admire the game of chess. I think we will -- or at least we should -- think more of humans, not less. We will appreciate just how difficult problems like pattern recognition and planning and creativity are, and how poorly scientists and technologists have done in trying to reproduce these human behaviors.

The public should understand one of the central lessons of the last 40 years of AI research: problems we thought were hard have turned out to be fairly easy, and problems we thought were easy have turned out to be profoundly difficult. Chess is far easier than innumerable tasks performed by an infant, such as understanding a simple story, recognizing objects and their relationships, understanding speech, and so forth. For these and nearly all realistic AI problems, the brute-force methods in Deep Blue are hopelessly inadequate.

With the (presumed) forthcoming "solution" to the chess problem, I think we will come to the end of an era -- the era of the "quick kill," where hardware and brute force solve interesting problems. Chess is the last problem in traditional AI that will garner great public attention and be solvable by "simple" brute-force techniques. Natural language understanding, scene analysis, speech recognition, and much more will require much more work, and work of a different sort, than chess did. We'll be at the end of one road that leads just part of the way through the forest.

Thus we are led to ask: after chess, "whither AI?"

I think there will be a sober reconsideration and broad acceptance of the magnitude of the AI problem, and a realization that the techniques that proved successful in chess will be of only limited use in the domains where humans currently dominate and which we view as essential for AI. In addition to continued research progress, what happens then will depend in part upon whether scientists and engineers can make a coherent case that we know enough about the foundations of AI and that we are in a position for larger scale projects. What might the research projects be like?

Doug Lenat's CYC project -- a several-decades-long knowledge engineering mission to enter common sense and general knowledge into machines -- may give the flavor of the future. Although it is too early to judge the eventual success or failure of his particular system, which is still years from coming fully on-line, we can note some of its attributes that presage things to come. First, his system addresses general intelligence, rather than just a specific domain such as chess. For example, his system could be used for interpreting handwriting (by providing knowledge of reasonable alternative readings of ambiguous words), or searching labeled images (by inferring related terms and concepts that are not in a thesaurus), or spotting inconsistencies in a database. Second, Lenat's program isn't concerned with proving arcane theorems that may be of limited use, but instead relies on lots of repetitive (and, I would imagine, boring) knowledge engineering -- the digital age's answer to sweatshops. Third, it acknowledges the magnitude of the problem -- it already embodies over a person-century of data entry during the last dozen years, and still has much to do.

The CYC project, as large as it may be, is still peanuts on the scale of big science in other disciplines. The physicists have their multibillion-dollar particle accelerators, the astronomers their space missions, and the microbiologists their Human Genome Project, but there is no equivalent in computer science and AI, at least not in the U.S. We can imagine enormous software projects for learning simple objects or animal shapes, which would be useful in searching the World-Wide Web, and work on integrating and mediating large numbers of experts on subproblems.

!!!Conclusion
The problems addressed by AI are some of the most profound in all of science: How do we know the world? What is the "essence" of an object or pattern? How do we remember the past? How do we create new ideas? For centuries, mankind had noticed hearts in slaughtered animals; nevertheless, the heart's true function was a mystery until one could liken it to an artifact and conclude: a heart is like a pump. (Similarly, an eye is like a camera obscura, a nerve is like an electric wire....) In the same way, we have known for centuries that the brain is responsible for thoughts and feelings, but we'll only truly understand the brain when our psychological and neurological knowledge is complemented by an artifact -- a computer that behaves like a brain. As such, AI is in the long tradition of philosophy and epistemology; it is surely worthy of our support as a culture. (It also will have immense practical benefits.)

We should take the eventual triumph of machines in chess as a milestone -- the end of the easy era. It should also mark a new era, one where researchers eliminate the hype and false promises, epitomized by the prediction of a HAL. We will deepen our admiration for the problem -- and buckle down for some real hard work.

----

!!!!References
HAL's Legacy: 2001's computer as dream and reality, edited by David G. Stork, Foreword by Arthur C. Clarke, MIT Press (1997)
Letter to the Editor on computer chess, by David G. Stork, Scientific American, p. 10 (March 1991)
Kasparov vs. Deep Blue: Computer Chess Comes of Age, by Monroe Newborn, Springer-Verlag (1996)
A New Era: World Championship Chess in the Age of Deep Blue, by Michael Khodarkovsky, Ballantine (1997)

This piece is based in part on my illustrated lecture "The HAL 9000 computer and the vision of '2001: A Space Odyssey'," and Murray Campbell's chapter in HAL's Legacy: 2001's computer as dream and reality (MIT Press, 1997); his insights are gratefully acknowledged. You can read his and other full chapters on-line by clicking on the link to the book. You can see other events associated with the birth of the HAL 9000 computer here.

David G. Stork is Chief Scientist of Ricoh Silicon Valley, as well as Consulting Associate Professor of Electrical Engineering and Visiting Scholar in Psychology at Stanford University. He has had a lifelong interest in chess, and competed twice in the United States High School Chess Championships in the 1970s (he no longer plays competitively). His five books include HAL's Legacy: 2001's computer as dream and reality (MIT Press) and the forthcoming Pattern Classification (2nd ed.) (Wiley).
I recently came across [[an interesting blog entry|http://blog.mrmeyer.com/?cat=51]] by [[Dan Meyer|http://blog.mrmeyer.com/]] about the fit of computers to Math education, arguing that Silicon Valley is missing the point by trying to fit the tool (computers) to the challenge (teaching/learning math).

Meyer (a former math teacher who, IMO, comes at it the right way, and who has a very original and effective way of teaching math) says:
>Do you want to know where this post became useless to Silicon Valley's entrepreneurs, venture capitalists, and big thinkers? Right where I said, "Computers are not a natural working medium for mathematics." They understand computers and they understand how to turn computers into money so they are understandably interested in problems whose solutions require computers. Sometimes a problem comes along that doesn't naturally require computers. Like mathematics. They may then define, change, and distort the definition of the problem until it does require computers.

To which Jesse Farmer [[responds|http://news.ycombinator.com/item?id=3563402]]:
>In design, a skeuomorph is a derivative object that retains some feature of the original object which is no longer necessary. For example, iCal in OS X Lion looks like a physical calendar, even though there's no reason for a digital calendar to look (or behave) like a physical calendar. The same goes for the address book.
>This is what I see happening in online education. I don't think it's a case of "lol, Silicon Valley only trusts computers," but rather starting off by doing the most literal thing. Textbooks? Let's publish some PDFs online. Lectures? Let's publish videos online. Homework and tests? Let's make a website that works like a multiple-choice or fill-in-the-blank test. These are skeuomorphs. There's no reason for the online equivalent of a textbook to be a PDF, it's just the most obvious thing.
>For me it's 1000x more interesting to ask "On the web, what's the best way to do what a lecture does offline?" than to say "Khan Academy videos are the wrong way of doing it."^^1^^
>I think sites like Codecademy point the way when it comes to programming. The textbook is the IDE.
>What does that look like for math? It's much harder because, like Dan says, computers aren't the natural medium for mathematicians, so there will always be a translation step from math-ese to computer-ese.
>Once you're past basic math and are working out of a higher-level textbook, the exercises become very awkward to express on a computer in a way a computer can understand.

I agree with Dan that math is not necessarily "a natural" for computers. Looking at current computer-based (or assisted/aided/mediated) math teaching and learning systems, it definitely feels like "old wine (or is it vinegar?) is 'shoehorned' into new bottles" (sorry for the mixed metaphor), which is jfarmer's point. He is probably correct in pointing out that it's easy and natural to keep doing the same or a similar thing in a new medium or environment, but I also think there is another, deeper reason for continuing to "teach the new dogs the old tricks" (and I apologize to all learners, but it sometimes really feels like we treat them Pavlovianly).
And I think the reason has to do with the strong, ingrained belief that math (in this case) has to be taught and learned from the ground up: first comes "the language," with basic concepts, etc., followed by more advanced ideas and techniques which build on that foundation, and so on. Mathematicians and lecturers like Lockhart have already [[lamented this|A case for "loosening up a bit"]], and visionary leaders and lecturers like Gershenfeld are trying to address it (through [[interdisciplinary learning/curricula|Interdisciplinary Learning]]). But to my mind, if we break away from this "bottom-up" curriculum, computers can become the "ways and means" and very powerful allies.
It's interesting that Dan Meyer is using technology (not necessarily computers) in his [[3 act math lessons|https://docs.google.com/spreadsheet/ccc?key=0AjIqyKM9d7ZYdEhtR3BJMmdBWnM2YWxWYVM1UWowTEE]] in a way that I would definitely not characterize as bottom-up.
As Meyer says about [[his 3 acts|http://blog.mrmeyer.com/?cat=95]]:
>I aspire to be perplexing. I want to perplex my students, to put them in a position to wonder a question so intensely they'll commit to the hard work of getting an answer, whether that's through modeling, experimenting, reading, taking notes, or listening to an explanation.
>A lot of my most perplexing classroom moments have had two elements in common:
> * A visual. A picture or a (short) video.
> * A concise question. One that feels natural. One that people can approach first on a gut level, using their intuition.
>Let's call that a first act. There are still two more acts and a lot of work yet to do, but the first act is above and before everything else.
([[more about Dan Meyer's 3 acts|The Three Acts Of A Mathematical Story]])

And this is the crux of it! You don't start engaging the learner at the bottom; you "jump right into the middle," and then you go up or down as you need to. This is powerful learning! And computers -- with their visualization, modeling, search, simulation, number crunching, and fast response/feedback -- are a "great empowerer."

There is another approach, to my mind no less powerful and engaging than "start in the middle, and go up or down as you need to" (see above): sometimes it's easier to start with the ''theoretical analysis'', and sometimes it's easier to start with a ''computer simulation'' (see [[Computer Simulation vs. Theoretical Analysis]]). Both are valid, powerful knowledge-acquisition approaches.
----------------------
^^1^^ See [[the value of the Khan Academy videos|The Khan Academy]]
Melanie Mitchell is echoing Andrea diSessa's thoughts on [[the power of a new language or literacy|The power of a new literacy]] to significantly enhance our ways of thinking and dealing with the world:

>...we don't have the right vocabulary to precisely describe what we're studying [the science of complexity]. We use words such as complexity, self-organization, and emergence to represent phenomena common to the systems in which we're interested, but we can't yet characterize the commonalities in a more rigorous way. We need a new vocabulary that not only captures the conceptual building blocks of self-organization and emergence but that can also describe how these come to encompass what we call functionality, purpose, or meaning. These ill-defined terms need to be replaced by new, better-defined terms that reflect increased understanding of the phenomena in question. As I have illustrated in this book, much work in complex systems involves the integration of concepts from dynamics, information, computation, and evolution. A new conceptual vocabulary and a new kind of mathematics will have to be forged from this integration. The mathematician Steven Strogatz puts it this way: "I think we may be missing the conceptual equivalent of calculus, a way of seeing the consequences of myriad interactions that define a complex system. It could be that this ultracalculus, if it were handed to us, would be forever beyond human comprehension. We just don't know."
>Having the right conceptual vocabulary and the right mathematics is essential for being able to understand, predict, and in some cases, direct or control self-organizing systems with emergent properties. Developing such concepts and mathematical tools has been, and remains, the greatest challenge facing the sciences of complex systems.
>[...]
>Accomplishing all of this will require something more like a modern Isaac Newton than a modern Carnot. Before the invention of calculus, Newton faced a conceptual problem similar to what we face today. In his biography of Newton, the science writer James Gleick describes it thus: "He was hampered by the chaos of language -- words still vaguely defined and words not quite existing. . . . Newton believed he could marshal a complete science of motion, if only he could find the appropriate lexicon. . . ." By inventing calculus, Newton finally created this lexicon. Calculus provides a mathematical language to rigorously describe change and motion, in terms of such notions as infinitesimal, derivative, integral, and limit. These concepts already existed in mathematics but in a fragmented way; Newton was able to see how they are related and to construct a coherent edifice that unified them and made them completely general. This edifice is what allowed Newton to create the science of dynamics.
>Can we similarly invent a "calculus of complexity" -- a mathematical language that captures the origins and dynamics of self-organization, emergent behavior, and adaptation in complex systems? There are some people who have embarked on this monumental task. For example, as I described in chapter 10, Stephen Wolfram is using the building blocks of dynamics and computation in cellular automata to create what he thinks is a new, fundamental theory of nature. As I noted above, Ilya Prigogine and his followers have attempted to identify the building blocks and build a theory of complexity in terms of a small list of physical concepts. The physicist Per Bak introduced the notion of self-organized criticality, based on concepts from dynamical systems theory and phase transitions, which he presented as a general theory of self-organization and emergence. The physicist Jim Crutchfield has proposed a theory of computational mechanics, which integrates ideas from dynamical systems, computation theory, and the theory of statistical inference to explain the emergence and structure of complex and adaptive behavior.

In an article titled [[ASYMPTOTIC BEHAVIOUR AND RATIOS OF COMPLEXITY IN CELLULAR AUTOMATA|https://arxiv.org/pdf/1304.2816.pdf]], Hector Zenil and Elena ~Villareal-Zapata analyze the sensitivity of (even simple) [[1D (or Elementary) Cellular Automata|http://mathworld.wolfram.com/ElementaryCellularAutomaton.html]] to changes in initial conditions. 1D CA are important since they may [[shed some light on complex phenomena|On Creativity and Cognition of Cellular Automata]].

Here are two [[NetLogo simulations|http://ccl.northwestern.edu/netlogo/models/CA1DRule30]] of rule 22 (in Wolfram's numbering of 1D CA), with different initial conditions: 10001 on the left, and 10011 on the right.

[<img[rule 22 with 10001|resources/CA22_10001_1.png][resources/CA22_10001.png]]
[>img[rule 22 with 10011|resources/CA22_10011_1.png][resources/CA22_10011.png]] 







Here are a few more results for various different initial conditions progressing as [[Gray codes|https://en.wikipedia.org/wiki/Gray_code]]:

[img[rule 22 with all|resources/CA22_all_1.png][resources/CA22_all.png]] 
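
For readers who want to reproduce figures like these, here is a minimal sketch in plain Python (not the NetLogo model; the grid width, step count, and wrap-around boundary are assumptions) of an elementary CA evolving under Wolfram's rule numbering:
{{{
def step(cells, rule=22):
    """Advance a 1D cellular automaton one generation. Each cell's new
    state is the bit of `rule` indexed by its (left, self, right)
    neighborhood read as a 3-bit number -- Wolfram's numbering scheme."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

def run(initial, steps, rule=22):
    rows = [list(initial)]
    for _ in range(steps):
        rows.append(step(rows[-1], rule))
    return rows

# Two nearby initial conditions (compare the 10001 vs. 10011 figures above):
for seed in ("10001", "10011"):
    cells = [int(c) for c in seed.center(31, "0")]   # pad to width 31
    for row in run(cells, 12):
        print("".join("#" if c else "." for c in row))
    print()
}}}
To walk the initial conditions as Gray codes, as in the last figure, successive seeds can be generated with the standard binary-reflected mapping `i ^ (i >> 1)`, which changes exactly one bit from one seed to the next.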
In [[a sound and practical interview|http://www.criticalthinking.org/pages/an-interview-with-linda-elder-about-using-critical-thinking-concepts-and-tools/495]] on her [[critical thinking website|http://www.criticalthinking.org/]], Linda Elder describes her view on "Essential Ideas" in a knowledge domain:

>In every subject and domain of learning, there are ideas that are seminal and ideas that are peripheral (and many ideas in-between). Essential ideas are seminal. They are at the roots of many derivative ideas. When we know these foundational ideas well, we are able to derive many of the others. They become sources of power in our thinking. For example, one cannot understand physics without understanding the idea of matter and energy. All of physics revolves around these two ideas and their interrelationships. To think like a physicist is to learn how to use these concepts everywhere in one’s thought.
>
>It is essential ideas that form how we see the world, and how we function in it. 
Therefore, when studying, learning, and using knowledge resources, it is important to notice what the essential ideas are -- that is, what is the most basic point being made?
>If students use this idea in their thinking, they will reason better through the content and function better as learners.
>
>Take, for example, the essential idea “To understand our experience and the world itself, we must be able to think within alternative world views. We must question our ideas. We must not confuse our words or ideas with things.” 
>Now imagine a student taking this idea seriously. This student would continually seek out, and seek to master, multiple viewpoints. The student would routinely question the ideas he is using in his thinking. He would recognize that things are often confused with words. Words often hypnotize us and we use them without reflecting on what they represent.
>
>Critical thinking reminds us of the power of essential ideas in human thinking: purpose, question, information, concept, inference, implication, point of view, clarity, precision, accuracy, relevance, depth, breadth, logic, and significance. These are essential ideas for our thinking at a critical level. 

Compare these to the [[Computer Science "Big Ideas", Practices and Skills|The Big Ideas and Computational Practices of Computer Science*]].
In an article about Project-Based Learning, titled “Motivating Project-Based Learning: Sustaining the Doing, Supporting the Learning”, Blumenfeld, Soloway, Guzdial, et al. make some good observations about Project-Based Learning in the context of motivation and affective engagement vs. cognitive engagement, and provide some good insights and suggestions for teaching and learning.
Mark Guzdial (one of the authors) [[covers this in his blog|https://computinged.wordpress.com/2017/01/06/balancing-cognition-and-motivation-in-computing-education-herbert-simon-and-evidence-based-education/]].

The authors observe:
* Student interest and perceived value are enhanced when
** tasks are varied and include novel elements
*** but don't sacrifice cognitive engagement or fall into the trap of turning things into simplistic "edutainment"
** the problem is authentic and has value
*** it is important (and difficult) to find topics which are authentic/relevant to students, while having rich, deep, enduring value in a larger context
** the problem is challenging
** there is closure, so that an artifact is created
*** artifacts need to require the student to integrate information and use complex thought
** there is choice about what and/or how work is done
*** students should be able to select project questions, activities, and artifacts. They should determine how to approach the problem, what steps and resources to employ, and so on
** there is an opportunity to work with others
* Students' perception of the role of errors in fostering learning is important.
** Errors are detrimental to learning when they are construed as representing failure to learn.
** When errors are perceived as attempts to make meaning and to solve difficult and demanding problems, errors signal just those cognitive and motivational efforts that are desirable for project-based learning
** Errors are a natural and inevitable consequence of working on potentially ambiguous and ambitious tasks
* Teachers should encourage, emphasize, and reward mastery (learning) and not performance (success/delivery).
** teachers should not focus on grades, comparative performance, punishment of risk taking, low level tasks
* Teachers can enhance motivation and cognitive engagement by
** creating opportunities for learning by providing access to information
** supporting learning by scaffolding instruction and modeling and guiding students to make tasks more manageable
** encouraging students to use learning and metacognitive processes
** assessing progress, diagnosing problems, providing feedback, and evaluating results
* Technology can enhance motivation and learning for both teachers and students through
** enhancing interest and authenticity
** improving access to information
** facilitating active representation (and multi-modality)
** structuring the learning and creation process
** diagnosing and correcting errors
** managing complexity and aiding production
I have experienced the strong impact and effects of curiosity on my effectiveness, my learning, and my enjoyment of life over and over. In my mind, curiosity may be more important than courage, when it comes to both objective accomplishments and the subjective sense of accomplishment.

I also feel that one who is driven by curiosity, cultivates it, preserves it, and "befriends" it has more "wind in their sails" than one working from courage, which requires "overcoming resistance." Curiosity is part of our nature (just look at babies and small children exploring: "The impulse to seek new information and experiences and explore novel possibilities is a basic human attribute"), whereas courage is something one has to "develop."
>As every parent knows, Why? is ubiquitous in the vocabulary of young children, who have an insatiable need to understand the world around them. They aren’t afraid to ask questions, and they don’t worry about whether others believe they should already know the answers. But as children grow older, self-consciousness creeps in, along with the desire to appear confident and demonstrate expertise. By the time we’re adults, we often suppress our curiosity.

In an article in the Harvard Business Review titled [["The Business Case for Curiosity"|https://hbr.org/2018/09/curiosity]] Francesca Gino focuses on the workplace and businesses, but I think that many of the aspects she discusses in that context are very relevant and true in education/learning/schools.

Also, since Gino claims (and reports on research evidence) that curiosity is important in the business world (and the work environment context), it seems important to cultivate curiosity in the school environment, as a preparation for successful (and enjoyable :) adulthood.

!!!A few key points from the article:
* cultivating [curiosity] at all levels [of the workplace] helps leaders and their employees adapt to uncertain market conditions and external pressures: When our curiosity is triggered, we think more deeply and rationally about decisions and come up with more-creative solutions.
* curiosity leads to seeking and generating alternatives, and thus reduces the risk of decision making errors due to confirmation bias (looking for information that supports our beliefs rather than for evidence suggesting we are wrong) and to stereotyping people (by race, gender, role, age, etc.)
* When we are curious, we view tough situations more creatively [as shown in studies]. We also perform better when we’re curious [as demonstrated in performance evaluations on the job].
* Interestingly, "curiosity encourages members of a group to put themselves in one another’s shoes and take an interest in one another’s ideas rather than focus only on their own perspective. That causes them to work together more effectively and smoothly: Conflicts are less heated, and groups achieve better results."
* There are barriers to encouraging and maintaining curiosity:
** Leaders often think that letting employees follow their curiosity will lead to a costly mess.
*** It is true, "Exploration often involves questioning the status quo and doesn’t always produce useful information. But it also means not settling for the first possible solution—and so it often yields better remedies."
* curiosity usually declines the longer we’re in a job.
** Because people [are usually] under pressure to complete their work quickly, they [have] little time to ask questions about broad processes or overall goals.

!!!! Ways to encourage curiosity
* Model inquisitiveness. Leaders can encourage curiosity throughout their organizations by being inquisitive themselves. Asking questions and listening carefully (as opposed to talking) is very important.
** "Why do we refrain from asking questions? Because we fear we’ll be judged incompetent, indecisive, or unintelligent. Plus, time is precious, and we don’t want to bother people. Experience and expertise exacerbate the problem: As people climb the organizational ladder, they think they have less to learn. Leaders also tend to believe they’re expected to talk and provide answers, not ask questions."
* Leaders can also model curiosity by acknowledging when they don't know the answer; that makes it clear that it's OK to be guided by curiosity.
* People with more intellectual humility [(the ability to acknowledge that what we know is sharply limited)] do better in school and at work. Why? When we accept that our own knowledge is finite, we are more apt to see that the world is always changing and that the future will diverge from the present. By embracing this insight, leaders and employees can begin to recognize the power of exploration.
* Leaders can also model inquisitiveness by approaching the unknown with curiosity rather than judgment. As human beings, we all feel an urge to evaluate others -- often not positively. We're quick to judge their ideas, behaviors, and perspectives, even when those relate to things that haven't been tried before. It is much better to question people instead of judging them; this makes them think more deeply about what they are saying/doing/thinking.
* Emphasize learning goals and encourage continuous learning. It’s natural to concentrate on results, especially in the face of tough challenges. But focusing on learning is generally more beneficial to us and our organizations.
** "A body of research demonstrates that framing work around learning goals (developing competence, acquiring skills, mastering new situations, and so on) rather than performance goals (hitting targets, proving our competence, impressing others) boosts motivation. And when motivated by learning goals, we acquire more-diverse skills, do better at work, get higher grades in college, do better on problem-solving tasks, and receive higher ratings after training. "
* Let employees explore and broaden their interests. Organizations can foster curiosity by giving employees time and resources to explore their interests.
* To encourage curiosity, leaders should also teach employees how to ask good questions. They should help employees make the transition from giving good answers to asking good questions.
I came across an [[interview with Neil Gaiman in The Guardian|http://www.theguardian.com/books/2013/oct/24/neil-gaiman-face-facts-need-fiction]], where he was talking about the importance (to individuals and society) of having children read fiction (or in [[Neil Gaiman|http://en.wikipedia.org/wiki/Neil_Gaiman]]'s words: It's essential that children are encouraged to read and have access to fiction if we are to live in a healthy society).

I totally agree with [[Neil Gaiman|http://www.neilgaiman.com/About_Neil]], and the fact that he talks about the value of Science Fiction^^1^^ as part of his arguments made it even more personal (and convincing, since I am a ~Sci-Fi reader :). But as I was reading his arguments, it struck me that some of them are an equally true and valid justification for the value and importance of [[Computing Literacy|A Framework for Computational Thinking, Computational Literacy]] and [[programming|Coding is not the new literacy]].

Gaiman says:
>Fiction [is] a gateway drug to reading. The drive to know what happens next, to want to turn the page, the need to keep going, even if it's hard, because someone's in trouble and you have to know how it's all going to end … that's a very real drive. And it forces you to learn new words, to think new thoughts, to keep going. To discover that reading per se is pleasurable.
So, paraphrasing Gaiman, I think that it is also true that:
Computing and programming can be a gateway to creativity. The drive to create, to make things the way you want them to be, or the way you see them, even if it's hard, is a very real drive.
And it forces you to learn new words, concepts, techniques, and skills. It can definitely be pleasurable.

Continuing to paraphrase Gaiman, and putting [his words in square brackets], substituting them with mine ''in bold'':
>The simplest way to make sure that we raise literate children is to teach them to ''compute'' [read], and to show them that ''computing'' [reading] is a pleasurable activity. And that means, at its simplest, finding ''programming projects'' [books] that they enjoy, giving them access to ''programming tools'' [those books], and letting them ''program'' [read] them. I don't think there is such a thing as a bad ''programming project'' [book] for children.

''On 'quality' projects'':
>Well-meaning adults can easily destroy a child's love of ''computing'' [reading]: stop them ''programming'' [reading] what they enjoy, or give them worthy-but-dull ''projects'' [books] that you like, the 21st-century equivalents of Victorian "improving" literature. You'll wind up with a generation convinced that ''computing'' [reading] is uncool and worse, unpleasant.
>We need our children to get on to the ''computing'' [reading] ladder: anything they enjoy ''programming'' [reading] will move them up, rung by rung, into ''computational'' literacy.

''On the transformative power of computing'':
>''Computation'' [Fiction] can show you a different world. It can ''create new or modified things in your world'' [take you somewhere you've never been]. Once you've ''seen what's possible with computing'' [visited other worlds, like those who ate fairy fruit], you can never be entirely content with the world that you grew up in. Discontent is a good thing: discontented people can modify and improve their worlds, leave them better, leave them different.

''On the potential for personal betterment'':
>If you were trapped in an impossible situation, in an unpleasant place, with people who meant you ill, and someone offered you a temporary escape, why wouldn't you take it? And ''computing'' [escapist fiction] is just that: ''a creative activity'' [fiction] that opens a door, shows the sunlight outside, gives you a place to go where you are in control, are with people you want to be with; and more importantly, during your escape, ''computing'' [books] can also give you knowledge about the world and your predicament, give you weapons, give you armour, real things you can take back into your prison. Skills and knowledge and tools you can use to escape for real.

And he concludes optimistically:
>We all -- adults and children, writers and readers ''and computers (people who do computing)'' -- have an obligation to daydream. We have an obligation to imagine. It is easy to pretend that nobody can change anything, that we are in a world in which society is huge and the individual is less than nothing: an atom in a wall, a grain of rice in a rice field. But the truth is, individuals change their world over and over, individuals make the future, and they do it by imagining that things can be different.



----
^^1^^ - Neil Gaiman on the connection between creativity, reading, and high tech (and programming :)
>I was in China in 2007, at the first party-approved science fiction and fantasy convention in Chinese history. SF had been disapproved of for a long time. At one point I took a top official aside and asked him what had changed? "It's simple," he told me. "The Chinese were brilliant at making things if other people brought them the plans. But they did not innovate and they did not invent. They did not imagine. So they sent a delegation to the US, to Apple, to Microsoft, to Google, and they asked the people there who were inventing the future about themselves. And they found that all of them had read science fiction when they were boys or girls."
In her book //Words Are My Matter -- Writings About Life and Books//, Ursula K. Le Guin has an article (originally a talk given at a meeting of Oregon Literary Arts in 2002) titled [[The Operating Instructions|https://mcpl.monroe.lib.in.us/Mobile/BakerAndTaylor/Excerpt?ISBN=9781618731340&UPC=&position=1]] in which she emphasizes the uniqueness and importance of human imagination:

>I think the imagination is the single most useful tool mankind possesses. It beats the opposable thumb. I can imagine living without my thumbs, but not without my imagination.
>[...]
>"The creative imagination is a tremendous plus in business!^^1^^ We value creativity, we reward it!" In the marketplace, the word creativity has come to mean the generation of ideas applicable to practical strategies to make larger profits. This reduction has gone on so long that the word creative can hardly be degraded further. I don't use it any more, yielding it to capitalists and academics to abuse as they like. But they can't have imagination.
>
>Imagination is not a means of making money. It has no place in the vocabulary of profit-making. It is not a weapon, though all weapons originate from it, and their use, or non-use, depends on it, as with all tools and their uses. The imagination is an essential tool of the mind, a fundamental way of thinking, an indispensable means of becoming and remaining human.
>
>We have to learn to use it, and how to use it, like any other tool. Children have imagination to start with, as they have body, intellect, the capacity for language: things essential to their humanity, things they need to learn how to use, how to use well. Such teaching, training, and practice should begin in infancy and go on throughout life. Young human beings need exercises in imagination as they need exercise in all the basic skills of life, bodily and mental: for growth, for health, for competence, for joy. This need continues as long as the mind is alive.
>
>When children are taught to hear and learn the central literature of their people, or, in literate cultures, to read and understand it, their imagination is getting a very large part of the exercise it needs.
>
>Nothing else does quite as much for most people, not even the other arts. We are a wordy species. Words are the wings both intellect and imagination fly on.



----
^^1^^ - John Seely Brown (formerly the director of Xerox PARC, now at Stanford), also talks about the [[critical importance and the uniquely human differentiation of Imagination|Sense-making and learning in the new 21st century environment]]
My father, who had a great sense of humor (very "economical" and wry), told me the following story when I (then in high school) had asked him about the importance of learning/knowing Math (you know: "why do I need to learn this? When will I ever use it?"). It might have been "dangerous timing" to tell one's son this kind of story as a response to this kind of question, but it sure didn't cause me any harm or aversion... :).

!!!!Story 1
A high school Math teacher is walking down the street in his hometown when a sleek, shiny luxury car stops at the curb alongside him; the tinted window rolls down, and a former student of his pops his head out and offers him a ride to wherever he needs to go.
Although the Math teacher recalls that this former student was ''really bad'' at Math and not necessarily one of his favorite students, he accepts the offer of a ride. On the way, the teacher asks the former student, who is obviously now very, very well-to-do, what he ended up doing in life, and the former student very proudly responds that he is a successful businessman, owning his own construction company. Then he very quickly adds that he attributes part of his success to his former Math teacher.
When the Math teacher asks him why, the now very rich businessman says: "See, ever since I learned Math from you, I have admired simplicity and sound logic. So, my business model is very simple. I borrow capital from my bank, let's say 1 Million Dollars, with which I build a project. I sell it for a 10% profit, and out of this 10M Dollar profit, I return 1M to the bank, and am left with 9M...."

I can only imagine how the Math teacher felt when he left the car, thanking his former student for the lift.


!!!!Story 2
In the book "The Drunkard's Walk (How Randomness Rules Our Lives)" by Leonard Mlodinow, I read the following story, also "demonstrating" how important it is to know Math:

A few years ago a man won the Spanish national lottery with a ticket that ended in the number 48. Proud of his "accomplishment", he revealed the theory (and skill :) that brought him the riches. "I dreamed of the number 7 for seven straight nights," he said, "and 7 times 7 is 48."

So here you have it: it pays to know Math.




Closing on a serious note: I read about this on [[Edge|https://www.edge.org/]], in the lecture notes of an insightful [["Short Course in Behavioral Economics"|https://www.edge.org/events/the-edge-master-class-2008-a-short-course-in-behavioral-economics]], attended by some interesting and influential people (like, [[Richard Thaler|https://www.edge.org/memberbio/richard_h_thaler]], [[Danny Hillis|https://www.edge.org/memberbio/w_daniel_hillis]], [[Nathan Myhrvold|https://www.edge.org/memberbio/nathan_myhrvold]], [[Daniel Kahneman|https://www.edge.org/memberbio/daniel_kahneman]], [[Jeff Bezos|https://www.edge.org/memberbio/jeff_bezos]], [[Sendhil Mullainathan|https://www.edge.org/memberbio/sendhil_mullainathan]]).

This story was told by [[Sendhil Mullainathan|https://www.edge.org/memberbio/sendhil_mullainathan]] as an example of the importance of applying behavioral economics to understand (and possibly dealing with) poverty.
!!!!Story 3
Mullainathan showed a bunch of data on itinerant fruit vendors (all women) in India. Sixty-nine percent of them are constantly in debt to moneylenders who charge 5% per day interest. The fruit ladies make 10% per day profit, so half their income goes to the moneylender. They also typically buy a couple of cups of tea per day, as their "routine." Sendhil shows that one cup of tea per day less would let them be debt-free in thirty days, basically doubling their income from then on. Thirty-one percent of these women have figured that out, so it is not impossible. But the big question is: Why don't the rest of the women do it?

Sendhil then showed a bunch of other data arguing that poor people -- even those in the US (who are vastly richer in absolute scale than his Indian fruit vendors) -- do similar things with how they spend food stamps, or manage their payday loans. 
His argument is that under scarcity there is a systematic effect: you set the discount rate^^1^^ way too high for your own good. With too high a discount rate, you spend for the moment, not for the future. So, you have a cup of tea rather than double your income.


----
^^1^^ - The discount rate is an essential tool/metric for calculating the discounted cash flow of an investment, which is used to determine how much a series of future cash flows is worth as a lump sum today. So, in our example, if the fruit ladies assume a high discount rate, they will not value highly enough the doubling of their income in 30 days (and beyond), compared to the "investment" of giving up one cup of tea per day -- and that's why they'd prefer the tea now to the much higher income in the future.
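
To make the footnote's arithmetic concrete, here is a minimal sketch in Python. The specific numbers (a debt of 1000 units, 5% daily interest, a 100-per-day profit, a cup of tea at 17) are illustrative assumptions consistent with the story's proportions, not Mullainathan's actual data:
{{{
def days_to_debt_free(debt, daily_rate, daily_payment):
    """Days until the debt is cleared if, each day, interest accrues
    and then `daily_payment` is paid. Returns None if the payment
    never outruns the interest."""
    days = 0
    while debt > 0:
        interest = debt * daily_rate
        if daily_payment <= interest:
            return None              # payment only treads water
        debt += interest - daily_payment
        days += 1
    return days

# Paying only the 50/day interest (half of a 100/day profit) keeps the
# vendor in debt forever; redirecting one cup of tea (17/day) on top of
# the interest clears the 1000-unit debt in about a month:
print(days_to_debt_free(1000, 0.05, 50))       # None -- debt never shrinks
print(days_to_debt_free(1000, 0.05, 50 + 17))  # 29 days
}}}
Once the debt is gone, the 50/day that went to interest stays with the vendor -- the income doubling in the story -- and that future payoff is exactly what an excessively high discount rate makes look small next to today's cup of tea.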
An anecdote demonstrating why the beautifully succinct and familiar oath to "tell the truth, the whole truth, and nothing but the truth" is important:

A teacher assigned a difficult book reading to his students, and on the due date wanted to see whether they’d actually read the impenetrable tome.
So he asked one of the students whether they had read the book.
Thinking quickly, the student decided to semantically dodge the bullet and answered: “I haven’t quite finished it yet.”
“How far did you get?” asked the teacher.
“I haven’t quite started it yet” answered the student.


Thus basically demonstrating that sometimes telling part of the truth is essentially telling a lie.
In a wonderful book titled [[Looking At Mindfulness|http://www.booksuniverseeverything.com/2015/09/22/looking-at-mindfulness-by-christophe-andre/]] the French psychiatrist [[Christophe Andre|http://christopheandre.com/WP/?page_id=164]] combines art, psychology, and the practice of mindfulness to help us live a fuller, more meaningful, more enjoyable life.

In the book Andre has a chapter on "Training the mind", where he points out something that is very obvious to anyone who has tried sitting meditation for even 5 minutes (a sometimes/somewhat bewildering experience for some beginners): we are swayed back and forth by thoughts and emotions all the time.
He writes:
>Where do we get this amazing tendency to believe that we are the masters of our minds? And to think our capacities for attention and awareness are obviously established, without the need to work at them?
>We seem to imagine that, unlike our muscles, our brain has no need of training and can't be developed. Yet we accept that our body needs training. We know that physical exercise develops our breathing and muscles, that appropriate food is good for our health, and so on. But we are less convinced, or perhaps less well informed, about the similar needs of our mind. Training the mind and mental exercise are also extremely important. At the intellectual level, they help us build up our capacities for thinking and concentration; at the emotional level, they help us block our spontaneous tendencies to become stressed, downcast, angry and all the other lapses to which our everyday lives make us vulnerable. Our psychic abilities generally obey the rules of learning -- the more we practice, the more progress we make.
Drawing parallels between exercising, practicing, effort, and improvement in other aspects of life, and our mental abilities, clarifies and justifies the need for mental training, and hopefully motivates us to practice mindfulness.
>[...] So if we want to [make progress], we need to work at it. We accept this when it comes to learning to speak another language, ski or play a musical instrument, but it's harder for us to practice serenity and concentration.
>[...] mind training, [and] the practice of mindfulness is particularly necessary for those who have noticed that their mind evades and disobeys them. Not that we should expect to be able to put our mind on a leash and exert total control over it, but we can reestablish a balance of power. Being able to concentrate or to calm down, for example, at times when we need to, doesn't seem to me to be a particularly ambitious or excessive goal. But are we able to do these things?
> Practicing mind training routinely every day is good for our health -- it's like fitness training for our awareness.
In a fascinating account of "The Hidden Life of Trees", as [[described by Maria Popova in BrainPickings|https://www.brainpickings.org/2016/09/26/the-hidden-life-of-trees-peter-wohlleben/]], a German forester, Peter Wohlleben, writes about a discovery he had made, about the "sociality" of trees.

>Neighboring trees [say, in a forest], scientists found, help each other through their root systems — either directly, by intertwining their roots, or indirectly, by growing fungal networks around the roots that serve as a sort of extended nervous system connecting separate trees.

Wohlleben ponders this astonishing sociality of trees, and possible parallels to strong human communities and societies:
>
>Why are trees such social beings? Why do they share food with their own species and sometimes even go so far as to nourish their competitors? The reasons are the same as for human communities: there are advantages to working together. A tree is not a forest. On its own, a tree cannot establish a consistent local climate. It is at the mercy of wind and weather. But together, many trees create an ecosystem that moderates extremes of heat and cold, stores a great deal of water, and generates a great deal of humidity. And in this protected environment, trees can live to be very old. To get to this point, the community must remain intact no matter what. If every tree were looking out only for itself, then quite a few of them would never reach old age. Regular fatalities would result in many large gaps in the tree canopy, which would make it easier for storms to get inside the forest and uproot more trees. The heat of summer would reach the forest floor and dry it out. Every tree would suffer.
>
>Every tree, therefore, is valuable to the community and worth keeping around for as long as possible. And that is why even sick individuals are supported and nourished until they recover. Next time, perhaps it will be the other way round, and the supporting tree might be the one in need of assistance.
>[…]
>
>A tree can be only as strong as the forest that surrounds it.

Maria Popova ponders:
>One can’t help but wonder whether trees are so much better equipped at this mutual care than we are because of the different time-scales on which our respective existences play out. Is some of our inability to see this bigger picture of shared sustenance in human communities a function of our biological short-sightedness? Are organisms who live on different time scales better able to act in accordance with this grander scheme of things in a universe that is deeply interconnected?

Which echoes [[the sentiment and thought experiments by the German zoologist Karl Ernst von Baer|Our worldview is literally shaped by time]] (in 1860).
It is also interesting to compare [[human size/lifespan in perspective|Human life in perspective]].

And Popova continues:
>Because trees operate on time scales dramatically more extended than our own, they operate far more slowly than we do — their electrical impulses crawl at the speed of a third of an inch per minute.
>[...]
>The upside of this incapacity for speed is that there is no need for blanket alarmism — the recompense of trees’ inherent slowness is an extreme precision of signal. In addition to smell, they also use taste — each species produces a different kind of “saliva,” which can be infused with different pheromones targeted at warding off a specific predator.
>[...]
>In the remainder of The Hidden Life of Trees, Wohlleben goes on to explore such fascinating aspects of arboreal communication as how trees pass wisdom down to the next generation through their seeds, what makes them live so long, and how forests handle immigrants.

(It's interesting to compare what Daily Alice (Alice Dale Drinkwater) had to say about trees in the fantastic book "Little, Big": "Did you ever think, that maybe trees are alive like we are, only just more slowly? That what a day is to us, maybe a whole summer is to them - between sleep and sleep, you know. That they have long long thoughts and conversations that are just too slow for us to hear.")
The issue at hand by Gil Fronsdal

<<forEachTiddler 
where 
'tiddler.tags.contains("book-chapter") && tiddler.tags.contains("The issue at hand")'
sortBy 
'tiddler.title'>>
The most exciting phrase to hear in science, the one that heralds new discoveries, is not "Eureka!", but "That's funny...".
In computer programming: The most secure, the fastest, and the most maintainable code (by far :) is the code not written (or not needed).

(see the [[story about program and programmer productivity|Measuring the Right Thing (software effectiveness metric)]]).
from Jim Holt's "Why Does the World Exist?: An Existential Detective Story", in an interview with John Updike
(see [[Richard Hamming's thoughts|Perhaps there are thoughts we cannot think]] for a related perspective)

>"[T]he laws amount to a funny way of saying, 'Nothing equals something,'" Updike said, bursting into laughter. "QED! One opinion I've encountered is that, since getting from nothing to something involves time, and time didn't exist before there was something, the whole question is a meaningless one that we should stop asking ourselves. It's beyond our intellectual limits as a species. Put yourself into the position of a dog. A dog is responsive, shows intuition, looks at us with eyes behind which there is intelligence of a sort, and yet a dog must not understand most of the things it sees people doing. It must have no idea how they invented, say, the internal-combustion engine. So maybe what we need to do is imagine that we're dogs and that there are realms that go beyond our understanding. I'm not sure I buy that view, but it is a way of saying that the mystery of being is a permanent mystery, at least given the present state of the human brain. I have trouble even believing -- and this will offend you -- the standard scientific explanation of how the universe rapidly grew from nearly nothing. Just think of it. The notion that this planet and all the stars we see, and many thousands of times more than those we see -- that all this was once bounded in a point with the size of, what, a period or a grape? How, I ask myself, could that possibly be? And, that said, I sort of move on."

And yet, I don't see this as a disheartening or deflating viewpoint, but potentially a statement of a natural fact, which at the minimum should help us focus on pursuing and asking questions that have a promise to be more fruitful. So, it can be viewed as a "focusing lens" of sorts. And it seems like I'm not alone in this. As Albert Einstein said:
>The most beautiful experience we can have is the mysterious -- the fundamental emotion which stands at the cradle of true art and true science.
which to me is a very powerful driver to be engaged in science! And it's an on-going (lifelong, spiraling) effort/path!
Dick's one sentence definition: Reality is that which, when you stop believing in it, doesn’t go away.

The nature of reality as Dick sees it is [[discussed more in BrainPickings|https://www.brainpickings.org/2013/09/06/how-to-build-a-universe-philip-k-dick/]], covering his //How to Build a Universe//.

[[Dick's full speech|http://deoxy.org/pkd_how2build.htm]]

In the speech he gives a synopsis of one of his stories (the first?) where he writes about a perspective of a dog, with possibly strong implications about our (i.e., human) perception/view of reality:
>My first story had to do with a dog who imagined that the garbagemen who came every Friday morning were stealing valuable food which the family had carefully stored away in a safe metal container. Every day, members of the family carried out paper sacks of nice ripe food, stuffed them into the metal container, shut the lid tightly—and when the container was full, these dreadful-looking creatures came and stole everything but the can.
>
>Finally, in the story, the dog begins to imagine that someday the garbagemen will eat the people in the house, as well as stealing their food. Of course, the dog is wrong about this. We all know that garbagemen do not eat people. But the dog's extrapolation was in a sense logical—given the facts at his disposal. The story was about a real dog, and I used to watch him and try to get inside his head and imagine how he saw the world. Certainly, I decided, that dog sees the world quite differently than I do, or any humans do.

This is echoed by what astrophysicist and philosopher Marcelo Gleiser examines in //The Island of Knowledge: The Limits of Science and the Search for Meaning// ([[discussed in BrainPickings|https://www.brainpickings.org/2015/02/02/the-island-of-knowledge-marcelo-gleiser/]]), where he writes:
>What we see of the world is only a sliver of what’s “out there.” There is much that is invisible to the eye, even when we augment our sensorial perception with telescopes, microscopes, and other tools of exploration. Like our senses, every instrument has a range. Because much of Nature remains hidden from us, our view of the world is based only on the fraction of reality that we can measure and analyze. Science, as our narrative describing what we see and what we conjecture exists in the natural world, is thus necessarily limited, telling only part of the story… We strive toward knowledge, always more knowledge, but must understand that we are, and will remain, surrounded by mystery… It is the flirting with this mystery, the urge to go beyond the boundaries of the known, that feeds our creative impulse, that makes us want to know more.

Gleiser adds:
>If large portions of the world remain unseen or inaccessible to us, we must consider the meaning of the word “reality” with great care. We must consider whether there is such a thing as an “ultimate reality” out there — the final substrate of all there is — and, if so, whether we can ever hope to grasp it in its totality.
which I think echoes what Carl Sagan had to say about knowledge of reality (in his masterwork [[Varieties of Scientific Experience|https://www.brainpickings.org/2013/12/20/carl-sagan-varieties-of-scientific-experience/]]):
>If we ever reach the point where we think we thoroughly understand who we are and where we came from, we will have failed.
In other words (I think), they both say that if we ever end up thinking that we "got it", we really (ha!) did not get it.

But Gleiser doesn't see it necessarily as "bad news", but rather as an encouragement to keep striving, creating, inventing, forever:
>This realization should open doors, not close them, since it makes the search for knowledge an open-ended pursuit, an endless romance with the unknown.

For examples of how applying the scientific method/view may be actually misleading or distorting reality see [[The Scientific Bubble]] (and also [[Minding the obvious]]).

In the last chapter of the book, Gleiser positions science in the context of our humanness:
> To accept the incompleteness of knowledge is not a defeat of the human intellect; it doesn't mean we are throwing in the towel, surrendering. It means that we are placing science within the human realm, fallible even if powerful, incomplete even if the best tool we have for describing the world. Science is not a reflection of a God-given truth, made of discoveries plucked from a perfect Platonic realm; science is a reflection of our very human disquietude, of our longing for order and control, of our awe and fear at the immensity of the cosmos.

And human nature (or Nature's nature) is such that we always live in our bubble, scientific //and// otherwise, regardless. We always experience life and the world through our filters.
As the poet Edward Young wrote in [[Night Thoughts|https://www.gutenberg.org/files/33156/33156-h/33156-h.htm]] (published 1742):
>In senses, which inherit earth, and heavens;
>Enjoy the various riches Nature yields;
>Far nobler! give the riches they enjoy;
>Give taste to fruits; and harmony to groves;
>Their radiant beams to gold, and gold’s bright fire;
>Take in, at once, the landscape of the world,
>At a small inlet, which a grain might close,
>And half create the wondrous world they see.
>Our senses, as our reason, are divine.
>But for the magic organ’s powerful charm,
>Earth were a rude, uncolour’d chaos still.
>Objects are but th’ occasion; ours th’ exploit;
>Ours is the cloth, the pencil, and the paint,
>Which nature’s admirable picture draws;
>And beautifies creation’s ample dome.


----
Note: for another perspective on the nature of reality see Nick Bostrom's [[Are you living in a computer simulation?]]
This is a very [[short piece by Alison Gopnik which appeared in the WSJ|http://alisongopnik.com/Alison_Gopnik_WSJcolumns.htm#21Mar14]] talking about both the potential and the dangers of "New Technology" :)

I think that she makes an excellent point that we are probably experiencing "echoes of a recurring theme" in human history (similar to [["the dangers and evilness" of speeding cars|https://www.detroitnews.com/story/news/local/michigan-history/2015/04/26/auto-traffic-history-detroit/26312107/]], or trains, or airplanes ... you get the picture :). And while there are (and will always be) dangers and negative effects to adopting new technology, it would be a mistake (in my opinion :) to "throw the baby out with the bathwater" (if that were even possible).

Gopnik also wrote [[a chapter about it|pg. 271 - ALISON GOPNIK: Incomprehensible Visitors from the Technological Future]] in the book [[Is the Internet Changing the Way You Think?]]

So here it is:

THE KID WHO WOULDN'T LET GO OF 'THE DEVICE'
How does technology reshape our children’s minds and brains? Here is a disturbing story from the near future.

They gave her The Device when she was only two. It worked through a powerful and sophisticated optic nerve brain-mind interface, injecting its content into her cortex. By the time she was five, she would immediately be swept away into the alternate universe that the device created. Throughout her childhood, she would become entirely oblivious to her surroundings in its grip, for hours at a time. She would surreptitiously hide it under her desk at school, and reach for it immediately as soon as she got home. By adolescence, the images of the device – a girl entering a ballroom, a man dying on a battlefield – were more vivid to her than her own memories.

As a grown woman her addiction to The Device continued. It dominated every room of her house, even the bathroom. Its images filled her head even when she made love. When she travelled, her first thought was to be sure that she had access to The Device and she was filled with panic at the thought that she would have to spend a day without it. When her child broke his arm, she paused to make sure that The Device would be with her in the emergency room. Even sadder, as soon as her children were old enough she did her very best to connect them to The Device, too.

The psychologists and neuroscientists showed just how powerful The Device had become. Psychological studies showed that its users literally could not avoid entering its world: the second they made contact, their brains automatically and involuntarily engaged with it. More, large portions of their brains that had originally been designed for other purposes had been hijacked to the exclusive service of The Device.

Well, anyway, I hope that this is a story of the near future. It certainly is a story of the near past. The Device, you see, is the printed book^^1^^, and the story is my autobiography.

[[Socrates was the first to raise the alarm about this powerful new technology|Why writing (and the computer :) is a 'dangerous technology']] – he argued, presciently, that the rise of reading would destroy the old arts of memory and discussion.

The latest Device to interface with my retina is “It's Complicated: The Social Lives of Networked Teens” by Danah Boyd at NYU and Microsoft Research. Digital social network technologies play as large a role in the lives of current children as books once did for me. Boyd spent thousands of hours with teenagers from many different backgrounds, observing the way they use technology and talking to them about what technology meant to them.

Her conclusion is that young people use social media to do what they have always done – establish a community of friends and peers, distance themselves from their parents, flirt and gossip, bully, experiment, rebel. At the same time, she argues that the technology does make a difference, just as the book, the printing press and the telegraph did. An ugly taunt that once dissolved in the fetid locker-room air can travel across the world in a moment, and linger forever. Teenagers must learn to reckon with and navigate those new aspects of our current technologies, and for the most part that’s just what they do.

Boyd thoughtfully makes the case against both the alarmists and the techtopians. The kids are all right -- or at least as all right as kids have ever been.

So why all the worry? Perhaps it’s because of the inevitable difference between looking forward toward generational changes and looking back at them. As the parable of The Device illustrates, we always look at our children’s future with equal parts unjustified alarm and unjustified hope – utopia and dystopia. We look at our own past with wistful nostalgia. It may be hard to believe, but Boyd’s book suggests that someday even Facebook will be a fond memory.


BTW, Hermann Hesse wrote a [[thought provoking essay about three types of readers|Hermann Hesse on Three Types of Readers]] which is relevant to this.


----
^^1^^ - In an article titled [[Don't Touch that Dial|http://www.slate.com/articles/health_and_science/science/2010/02/dont_touch_that_dial.html]] in the Slate Magazine, Vaughan Bell opens with a similar anecdote:
>A respected Swiss scientist, Conrad Gessner, might have been the first to raise the alarm about the effects of information overload. In a landmark book, he described how the modern world overwhelmed people with data and that this overabundance was both "confusing and harmful" to the mind. The media now echo his concerns with reports on the unprecedented risks of living in an "always on" digital environment. It's worth noting that Gessner, for his part, never once used e-mail and was completely ignorant about computers. That's not because he was a technophobe but because he died in 1565. His warnings referred to the seemingly unmanageable flood of information unleashed by the printing press. 
Andrea diSessa, [[in his book "Changing Minds"|resources/diSessa - Changing Minds - Chapter1.pdf]], gives an example of the __power of literacy__ to change the way we think:
One example is the Calculus, a new notation (Newton, Leibniz) that forever changed not only what we think but also how we think about change (infinitesimal deltas) and rates of change. And not only did it change how we think, but also who can think this way, and when: nowadays, high school students can think and express their thinking in ways that very sharp scientists (Galileo comes to mind) could not in the past.
[[Another example diSessa gives|Examples of the power of math notation]] is how it took Galileo several pages of text to describe his idea about rates of change staying constant (in free fall, related to his experiment at the Tower of Pisa), while nowadays a high school student can express the //same ideas// in a few lines of "very simple" (high school) math.
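For instance (my own gloss, not diSessa's text), Galileo's law of free fall in modern high-school notation takes just two lines:
{{{
v(t) = g t           (speed grows at the constant rate g)
d(t) = (1/2) g t^2   (so distance fallen grows as the square of time)
}}}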
It seems to me that the fact that a genius like Galileo had to struggle through pages of explanations in order to make //his ideas and way of thinking// clear (to himself and others), and something today a high school kid can both grasp, express, and use, shows us that "external intelligence amplifiers" can and do change the way we think. And if Computation in general (and the Internet in particular) cannot serve as such an "amplifier", what can?
Raymond Smullyan, in a delightful piece called [[Planet without Laughter|https://www-cs-faculty.stanford.edu/~knuth/smullyan.html]], writes about the power of ideas to make (or break) our worldview, in a way similar to how [[Barry Schwartz talks about "Idea Technology"|Barry Schwartz on Idea Technology]].
In this somewhat tongue-in-cheek parable, Smullyan touches on multiple topics (e.g., the nature of humor, spirituality, belief, ideas, morality) and gives an example of the impact of a "powerful idea", like ''free will'' on our worldview. 
He does this in the context of a biblical story, specifically the story of Adam and Eve in the Garden of Eden (in Smullyan's parable called "The Garden of Laughter"):
> And so they [Adam and Eve] spent their days in this paradise for many years, until one day a strange green animal, something like a rat and something like a skunk, with mean, small, close-set eyes, came into the garden. This animal perceived the bliss of the couple and waxed mighty jealous. He said, "I will soon do something about that!" and sure enough he did! He approached the couple and said:
>[... Why is God keeping you (Adam and Eve) in this state of "childish innocence"?] What is He afraid of? What is He hiding from you? Why does He pretend to be your friend when He is the very one who is deceiving you and who is preventing you from being true to yourselves and fulfilling your real destinies in the universe? Why do you tolerate this? There is one chink in the Lord's armor by which you can save yourselves. The Lord has given you ''free will'', by which you can oppose Him. You can put a stop to this situation; it is up to you! Only by your own efforts can you prevent the Lord from keeping you in bondage forever.
And thus, the seed of a powerful idea was planted.
>[...T]he idea that they could choose was a stunning novelty. It gave them an exhilarating sense of power. They of their own free will could now do things! In particular, they could, if they chose, amount to something. The question then arose: Should they amount to something? This notion of "should" was also quite new. Formerly, since they had felt that they were merely part of the stream of life rather than actively living it, ethical notions of "should," "ought," "duty," etc., had absolutely no meaning for them. But now they knew better. The troubling question arose: Was it right or wrong for them to sit by idly enjoying life rather than going out and amounting to something?
From there it was only "natural" to start "connecting dots" (we humans are good at that :):
>Adam and Eve also for the first time began philosophizing. They believed the Animal was right in telling them that they had free will. But the question which most puzzled them was whether they had really had free will before the Animal informed them of the fact. If they formerly had free will, they certainly had not known that they had. And is it possible to be free without knowing that one is free? In other words, was it really true, as the Animal had said, that God had already given them free will, or was it the Animal himself who caused them to have free will? It seemed likely to them that having free will is really no different from believing that one has free will.
And a dialog with The Animal started the process of "bootstrapping" this idea of free will:
>[The Animal said: "] And you too can have free will if you choose to." 
>This answer puzzled them terribly! They replied: "What? You say we can choose to have free will? You mean that having free will is a matter of choice?" 
>The Animal replied, "Of course it is." 
>Eve then protested, "But I thought you told us that God has already given us free will." 
>The Animal replied: "In a sense He has, but only in a passive rather than an active sense. God has, so to speak, given you the potentiality of having free will. Whether you actualize it or not is up to you. God has given you the ability to make choices; He does not force you to make them. You can use your free will only if you choose to."
> Adam answered, "But if we can choose to, that means we already do have free will."
>The Animal replied, "Yes, it is in that sense that God has given you free will."
And Smullyan adds: "And thus the sciences of metaphysics and epistemology were born."

But similar to the role and power of science, free will is not necessarily the "downfall" (in the sense of the biblical story and the way it portrays it). It is just a tool and a capability to choose (and do) good //or// evil. So, in Smullyan's fable, Adam and Eve reached a conclusion:
>[They] debated this for many weeks, and finally decided to remain in the garden and not to amount to something. They decided to trust the Lord and not the Evil Animal. Yes, they finally realized, the Lord is their friend and the Animal their foe. And so one day the Animal came into the garden and Adam said: 
>"You have taught us many wonderful things. You have taught us that we have free will. Whether you have taught us this, or whether by some mysterious power you have caused us to have free will, or whether it was God who 'allowed' us to have free will, or whether He 'made us' have free will, or whether it is we who have 'chosen' to have free will, we do not know. We do not understand the phenomenon of free will, but we now know that whatever it really is, we certainly have it. Perhaps we have chosen to have it; we really don't know. All we now know for sure is that we in fact do have it. And you are absolutely right that we can now use our free will to reject the Lord and His ways. Yes, we are indeed free to do this. 
>But do you not realize that by the very same token we are now free to reject you? Yes, we now have the power to reject you or the Lord. And it is you we have decided to reject! Of our own free wills we thoroughly cast you out of our minds and hearts. We reject you and your ways. We will no longer heed you or your words. We cast you out of this very garden. This garden is our property; the Lord has given it to us, not to you! It is our own private property, and you can no longer be here without our permission. We have so far suffered you here only as a guest. But you are no longer a welcome guest. Begone from the Garden, and don't you ever dare return. If we ever find you here again, we will kill you." 
>The Animal departed without a word, and never returned.
But, as Oliver Wendell Holmes had said: 
>Man's mind, once stretched by a new idea, never regains its original dimensions.
And so the Animal's idea that Adam and Eve "should amount to something" got stuck in their minds and did not let go (hence, a "powerful idea").
And at the next (daily!) encounter with God in The Garden, they were irritable and bothered by the Animal's ideas; God sensed it and "let them go":
>[He said:"] You may as well go forth and 'amount to something,' which is what you deep down really want. Yes, you can amount to a great deal -- indeed you can beget an entire race. You will go forth and do this."
Toni Morrison [[tells a powerful tale|https://www.brainpickings.org/2016/12/07/toni-morrison-nobel-prize-speech/]] about the gift and power of human language, and the responsibility for using it well.
The story is about a blind, old, and wise woman who is asked (more like taunted and mocked) whether a bird being held in someone's hands is dead or alive.
Morrison parallels the bird to language and writes:
>[The old woman, who Morrison imagines as a practiced writer] thinks of language partly as a system, partly as a living thing over which one has control, but mostly as agency — as an act with consequences.
>[...] she thinks of language as susceptible to death, erasure; certainly imperiled and salvageable only by an effort of the will. She believes that if the bird in the hands of her visitors is dead the custodians are responsible for the corpse. For her a dead language is not only one no longer spoken or written, it is unyielding language content to admire its own paralysis. Like statist language, censored and censoring. Ruthless in its policing duties, it has no desire or purpose other than maintaining the free range of its own narcotic narcissism, its own exclusivity and dominance. However moribund, it is not without effect for it actively thwarts the intellect, stalls conscience, suppresses human potential.
Language is powerful and has a strong and undeniable effect on us, but Morrison adds:
>The vitality of language lies in its ability to limn the actual, imagined and possible lives of its speakers, readers, writers. Although its poise is sometimes in displacing experience it is not a substitute for it. It arcs toward the place where meaning may lie. 
>[...] language can never live up to life once and for all. Nor should it. Language can never “pin down” slavery, genocide, war. Nor should it yearn for the arrogance to be able to do so. Its force, its felicity is in its reach toward the ineffable.
>Be it grand or slender, burrowing, blasting, or refusing to sanctify; whether it laughs out loud or is a cry without an alphabet, the choice word, the chosen silence, unmolested language surges toward knowledge, not its destruction.

And she concludes:
>Word-work is sublime … because it is generative; it makes meaning that secures our difference, our human difference — the way in which we are like no other life.
>
>We die. That may be the meaning of life. But we do language. That may be the measure of our lives.
The process of learning through life is by no means continuous and by no means universal. If it were, age and wisdom would be perfectly correlated, and there would be no such thing as an old fool — a proposition at odds with common experience.

— Speech at Oberlin College, Ohio, 1958
From the [[UK Ministry for Education|https://www.gov.uk/government/publications/national-curriculum-in-england-computing-programmes-of-study/national-curriculum-in-england-computing-programmes-of-study]]:

A high-quality computing education equips pupils to use computational thinking and creativity to understand and change the world. Computing has deep links with mathematics, science and design and technology, and provides insights into both natural and artificial systems. The core of computing is computer science, in which pupils are taught the principles of information and computation, how digital systems work and how to put this knowledge to use through programming. Building on this knowledge and understanding, pupils are equipped to use information technology to create programs, systems and a range of content. Computing also ensures that pupils become digitally literate – able to use, and express themselves and develop their ideas through, information and communication technology – at a level suitable for the future workplace and as active participants in a digital world.
The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.
In the book //Equations of Eternity: Speculations on Consciousness, Meaning, and the Mathematical Rules that Orchestrate the Cosmos//, David Darling writes:
>Classification is fundamental to survival in the real world. Unless a creature could categorize phenomena into more general types, it would have to treat every one as unique. Then the creature would have no idea what was good to eat or what represented a threat. Before it had time to establish a thing's credentials, it might be too late—the food would have gone or the predator would have struck.
>One crucial fact emerges from this: the world cannot be random. If it were, it would be unpredictable. The categories that we (and other animals) recognize must be based on natural, recurring physical properties of the universe. That may seem obvious, but it is only because we are used to looking for—and finding—order, consistency, and predictability in our surroundings.

This echoes [[Mario Livio's view, specifically on Mathematics, that invention and discovery are combined|Is Math a human invention or a series of discoveries of truths in the real world?]] in the process of humans creating their mental world.

Darling continues:
>Indeed, it is far from obvious, a priori, why the universe should be anything other than totally random and intractable. Randomness always seems so much easier to achieve in everyday life than organization; leave a room to its own devices and the point seems well illustrated. How could nature, without any conscious effort, do a better job of keeping its own house in order? And yet, against all the odds, the world is patterned. And what is more, over time, these patterns have become increasingly elaborate and exquisitely organized to the extent that now the universe has created, from within its fabric, creatures of such extreme complexity that they can discern this order and classify it. Remarkably, elements of the universal classification have become capable of classifying the universe—including themselves. But this raises an important question: Do we always see objects and relationships that are really there, independent of our mental selves? How can some thing, or some class of things, or some connection between things, be said to exist without a sentient observer to perceive them? Are we not, in fact, as much inventors of the order we see as discoverers?

And here Darling wonders about the same question Livio is trying to answer; basically, how come there is order out there (which we somehow perceive)?
>In some curious way, the two [i.e., inventing and discovering] seem to go hand in hand. Our brains make sense of the data they perceive: they impose or invent order. And yet the basis for that perceived order must somehow already be there. It is one of the fundamental mysteries of nature, this dichotomy between what is given and what we, with our minds, create. We owe our very existence as a species to our ability to delineate patterns. We can even see patterns where none exist—the faces in a sun-lit curtain, the Greek heroes and monsters among the stars. What else might the human mind be recognizing that is not really there? And what, in any case, do we mean by "real"?

And I think Livio is right when [[he states|http://www.sfu.ca/~rpyke/cafe/livio.pdf]]:
>There is no doubt that the selection of topics we address mathematically has played an important role in math's perceived effectiveness. But mathematics would not work at all were there no universal features to be discovered. You may now ask: Why are there universal laws of nature at all? Or equivalently: Why is our universe governed by certain symmetries and by locality? I truly do not know the answers, except to note that perhaps in a universe without these properties, complexity and life would have never emerged, and we would not be here to ask the question.

AKA, Reactive Documents.

[[Bret Victor|http://worrydream.com/]] inspired and coined the term [[reactive documents|http://worrydream.com/ExplorableExplanations/]].

A nice [[collection of examples|http://www.maartenlambrechts.be/the-rise-of-explorable-explanations/]], currently short (hopefully not for long) and hopefully quickly becoming out-of-date :).

[[Chaim Gingold|http://levitylab.com/cog/]] was inspired and [[talked about|Learning through play - lessons learned from an experiential engagement designer]] how he created [[Earth Primer|http://www.earthprimer.com/]].

A [[small example|http://employees.org/~hmark/books/i4i/least_action.html]] I created with GeoGebra, based on one of Richard Feynman's Caltech lectures, [["The Principle of Least Action"|http://www.feynmanlectures.caltech.edu/II_19.html]].

I created [[a page with several examples|http://employees.org/~hmark/books/i4i/index.html]] (some better than others :) of interactive/manipulatable educational tools/mini-environments/simulations, which I called Incubators For Intuition (I4I). This page enables the exploration of a handful of concepts like "closing the achievement gap" in education, conditional (Bayesian) probability, The Principle of Least Action (Richard Feynman), The MU Puzzle (Douglas Hofstadter), The p-q System (Douglas Hofstadter), and valid statistical sampling.
In a short and to-the-point article^^1^^ titled [[Education Isn’t Just About Churning Out ‘Skilled’ Employees|resources/Education Isnt Just About Churning Out Skilled Employees - The Experts - WSJ.htm]], Gianpiero Petriglieri brings up an excellent point about why we should at least pause when discussing an often heard/debated question in education circles, namely:
>Companies often complain they aren’t getting graduates with the skills they need. Why is that—and what should be done about it?
His point is that we, as a society, would benefit from looking at and questioning the assumptions underlying this question. If we go along with the implicit assumption that the primary (only?) goal of education is to prepare learners to become good employees, we may be repeating the same scenario that formed the current education/school systems, which are catching so much criticism nowadays.
One accusation is that most current schools and classrooms are the product of the need (or desire) to train students to become the workforce of the industrial revolution and 19th century society, and be "good employees". And by asking the question above, we may fall into a similar trap in the 21st century.

Petriglieri rightly states:
>There can be little doubt that one of the main functions of education is to support the economy by churning out skillful employees. That is not, however, its only function. The others are supporting the culture it is embedded in by churning out responsible citizens—who are not all, only, or always going to hold corporate jobs—and supporting individual students by liberating their imagination and accelerating their development.
And this, I think, aligns well with the call-to-action to the educational system to "teach students to skate to where the puck will be, not to where it currently is", which [[I visually modeled|resources/skating_puck.html]] in connection to closing the achievement and opportunity gap in school and society.

----------
^^1^^ [[The original Wall Street Journal article|http://blogs.wsj.com/experts/2013/10/09/education-isnt-just-about-churning-out-skilled-employees/]]
A story (with implication for teaching/teacher motivation :)

Once upon a time there was an old man who used to go to the ocean to walk along the beach and enjoy the waves crashing upon the rocks. Early one morning he was walking along the shore by himself. As he looked down the deserted beach, he saw a human figure in the distance. As he got closer to the stranger, he saw that it was a young teenage boy. The boy was reaching down to the sand, picking up something and very gently throwing it into the ocean. As the old man got closer, he yelled out, "Good morning, young fellow. What are you doing?"

The teenager paused, looked up and replied, "Throwing starfish back in the ocean."

"Why on earth are you doing that?" asked the old man.

The boy replied, "Because the sun is up and the tide is going out. If I don’t throw them in they’ll die."

The old man looked at the teenager in disbelief and said, "But the beach goes on for miles and miles and there are starfish all along it. You can’t possibly make a difference."

The young boy listened politely, then bent down, picked up another starfish and threw it into the sea, past the breaking waves and said, "It made a difference for that one." And then the very wise young boy continued on his way down the beach, bending down and throwing starfish after starfish back into the ocean.

:: ― Loren Eiseley
[[The thing about 998,001 is…|http://kottke.org/12/01/the-thing-about-998001-is]]

If you divide 1 by the number 998,001, you get a list of all the three-digit numbers in order, except 998. Like so:
[>img[1 divided by 998,001|resources/1_div_998001_1.gif][resources/1_div_998001.gif]]
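
A quick way to check this claim (a small sketch of mine, doing the long division with exact integer arithmetic):
{{{
# Compute the first n_digits decimal digits of numerator/denominator
# by long division, then inspect the three-digit groups.
def decimal_digits(numerator, denominator, n_digits):
    digits = []
    remainder = numerator % denominator
    for _ in range(n_digits):
        remainder *= 10
        digits.append(str(remainder // denominator))
        remainder %= denominator
    return "".join(digits)

expansion = decimal_digits(1, 998001, 3000)              # 1000 groups of 3
groups = [expansion[i:i + 3] for i in range(0, 3000, 3)]
print(groups[:5])        # ['000', '001', '002', '003', '004']
print("998" in groups)   # False -- 998 is the one group that never appears
}}}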
I am reading Janna Levin's book [[A Madman Dreams of Turing Machines|http://jannalevin.com/black-hole-blues-and-other-songs-from-outer-space/a-madman-dreams-of-turing-machines/]] (about Alan Turing and Kurt Gödel), and came across her reference to Raymond Smullyan's list of [["self-annihilating" sentences|Self-annihilating sentences]]. This (obviously :) intrigued me, and led me to search for Smullyan's book (where the list is) called //5000 B.C. and Other Philosophical Fantasies//.

In his book (chapter 3, section 65), Smullyan has a description of a "Gödelian Machine" which turns out to (possibly) be ''The world's shortest explanation of Gödel's theorem''.

You can compare the description below to [[Gödel's Second Incompleteness Theorem Explained in Words of One Syllable|resources/Boolos-godel-in-single-syllables.pdf]].

But I found a short (ha!) [[blog by Mark Dominus|https://blog.plover.com/math/Gdl-Smullyan.html]] that slightly (ha, ha!) elaborates on Smullyan, so (from the blog):

So here, shamelessly stolen from Smullyan, is the World's shortest explanation of Gödel's theorem.

We have some sort of machine that prints out statements in some sort of language. It needn't be a statement-printing machine exactly; it could be some sort of technique for taking statements and deciding if they are true. But let's think of it as a machine that prints out statements.

In particular, some of the statements that the machine might (or might not) print look like these:

* ''P*x'' means that the machine will print x
* ''NP*x'' means that the machine will never print x
* ''PR*x'' means that the machine will print xx
* ''NPR*x'' means that the machine will never print xx

For example, NPR*FOO means that the machine will never print FOOFOO. NP*FOOFOO means the same thing. So far, so good.

Now, let's consider the statement NPR*NPR*. This statement asserts that the machine will never print NPR*NPR*.

Either the machine prints NPR*NPR*, or it never prints NPR*NPR*.

If the machine prints NPR*NPR*, it has printed a false statement. But if the machine never prints NPR*NPR*, then NPR*NPR* is a true statement that the machine never prints.

So either the machine sometimes prints false statements, or there are true statements that it never prints.

So any machine that prints only true statements must fail to print some true statements.

Or conversely, any machine that prints every possible true statement must print some false statements too.


The proof of Gödel's theorem shows that there are statements of pure arithmetic that essentially express NPR*NPR*; the trick is to find some way to express NPR*NPR* as a statement about arithmetic, and most of the technical details (and cleverness!) of Gödel's theorem are concerned with this trick. But once the trick is done, the argument can be applied to any machine or other method for producing statements about arithmetic.

The conclusion then translates directly: any machine or method that produces statements about arithmetic either sometimes produces false statements, or else there are true statements about arithmetic that it never produces. Because if it produces something like NPR*NPR* then it is wrong, but if it fails to produce NPR*NPR*, then that is a true statement that it has failed to produce.

So any machine or other method that produces only true statements about arithmetic must fail to produce some true statements.

Hope this helps!

(This explanation appears in Smullyan's book 5000 BC and Other Philosophical Fantasies, chapter 3, section 65, which is where I saw it. He discusses it at considerable length in Chapter 16 of The Lady or the Tiger?, "Machines that Talk About Themselves". It also appears in The Mystery of Scheherazade.)
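
To make the self-reference tangible, here is a tiny toy decoder (my own sketch, not Smullyan's or Dominus's) for the four statement forms. Note what happens when it decodes NPR*NPR*:
{{{
# Decode a statement of the toy language into plain English.
# Longer prefixes must be tried first (NPR* before NP* and PR* before P*).
def meaning(statement):
    for prefix, template in [
        ("NPR*", "the machine will never print {x}{x}"),
        ("PR*",  "the machine will print {x}{x}"),
        ("NP*",  "the machine will never print {x}"),
        ("P*",   "the machine will print {x}"),
    ]:
        if statement.startswith(prefix):
            x = statement[len(prefix):]
            return template.format(x=x)
    return "not a statement"

print(meaning("NPR*FOO"))    # the machine will never print FOOFOO
print(meaning("NPR*NPR*"))   # the machine will never print NPR*NPR*
}}}
The repetition operator R is the whole trick: it lets a statement talk about itself without needing any machinery for quoting.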


In his [[presentation "Theory of Fun - 10 years later"|http://www.raphkoster.com/gaming/gdco12/Koster_Raph_Theory_Fun_10.pdf]], Raph Koster talks about his original book //Theory of Fun// and makes comments relevant to education/learning.

He starts by referring to a (mis)quote of [[Chris Crawford]] about [[fun and learning|The art of computer game design and some implications on learning]].

Some interesting observations Koster makes:
* Fun in games arises out of mastery. It arises out of comprehension. It is the act of solving puzzles that makes games fun. Which is something Dan Meyer leverages in his [[Three act math|The Three Acts Of A Mathematical Story]] approach.
* Some developmental psychologists and theorists (e.g., [[Roger Caillois|http://en.wikipedia.org/wiki/Man,_Play_and_Games]]) made the distinction between the two Latin terms:
** Ludus - structured activity and explicit rules, and
** Paidia - unstructured and spontaneous activities
*** It's interesting that the Latin word //ludus// means either school or game. In ancient Rome, gladiators like Spartacus were taught to fight by people called lanistae in a ludus, and then the game at which they fought was also a ludus.
*** Another game designer (and philosopher?), Chris Bateman, is also [[referring to these kinds of distinctions|http://onlyagame.typepad.com/only_a_game/2005/12/the_anarchy_of__1.html]]
* But in //Theory of Fun//, Koster says that ludus/paidia is a false dichotomy
* We live in a world of systems and choose whether to make a given system a game
* Since games implicitly teach systems, we have an art form on our hands that actually changes brains. So we had better use it responsibly.
** This echoes my comment on [[Chris Crawford's observations|The art of computer game design and some implications on learning]] about the "emotional response" of a player playing a game, and the ability to leverage this for learning.
*** Crawford brings up an interesting point about the nature of the mental processes and experience of a game player as they play. Crawford describes one "emotional response" of this "subjective flow", namely that some //fantastic// happenings in the game resonate with the player's private/subjective reality/world. But another "emotional response" that can be evoked, especially in the context of an educational game can be curiosity (sometimes incredulity, puzzlement, desire to know, as [[observed by Isaac Asimov|The most exciting phrase to hear in science, the one that heralds new discoveries, is not "Eureka!", but "That's funny...".]]), or sense of accomplishment (sometimes satisfaction, [[joy|Song of Joy]], pleasure).
* An interesting distinction between fun and delight: delight is an act of recognition, and it is transitory.
* A perfectly valid reason to use games is for __practice__, which can be fun if done right, but often isn't
* Games are __"deliberate practice" machines__, a-la Anders Ericsson, who wrote in [[The Making of an Expert|http://www.uvm.edu/~pdodds/files/papers/others/everything/ericsson2007a.pdf]]:
** "Deliberate practice" is practice that focuses on tasks beyond your current level of competence and comfort
** New research shows that outstanding performance is the product of years of deliberate practice and coaching, not of any innate talent or skill.
** Why? Because games are:
*** Designed to improve performance
*** Repeated a lot
*** Providing continuous feedback
*** Mentally demanding (focus, concentration)
*** Hard
*** Require clear goals

It's interesting to compare what Todd Blayone has to say about [[Gamification in education]]
Thinking is the negotiation of relationships between our noisy representations (in our heads) and "what's out there".

[[Alan Kay|https://en.wikipedia.org/wiki/Alan_Kay]] (a "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]") at the MIT Media Lab (30 year celebration):
The music is not in the pipe organ.
and by analogy,
The computer is the instrument whose music is ideas!

[img[Thinking and computers|resources/ComputerMusic1.png][resources/ComputerMusic.png]]

Reflecting the above:
There’s a story about Jascha Heifetz, the famously dyspeptic [irritable] Russian violinist and giant of the golden age of recording: After a concert one evening, an admirer went to visit the soloist in his dressing room. “Mr. Heifetz,” he gushed, “what a performance! Your violin has such a gorgeous tone!” Heifetz picked up his instrument, held it to his ear and knit his brow. “I don’t hear anything.”
Those who overrate their own understanding undercut their own potential for learning.

from Chris Crawford's book [[The Art of Computer Game Design|resources/computer game design - chris crawford.pdf]]
In [[an excellent article|https://mathbabe.org/2016/06/15/thoughts-on-the-future-of-math-education/]] (a guest post) on the [[mathbabe blog|https://mathbabe.org/]], Kevin H. Wilson (a data scientist) expressed his opinion on Computing (and Math) education. It significantly aligns with my own thoughts/vision on the topic.

Here are a few highlights:

* __Programming is a Tool and Should be Taught as Such__
** Wilson claims (and I totally agree) that the current CS initiatives (at the global, American, and state levels) miss the main point of Computing, which is that it can and already has affected many significant aspects of our lives. As such, Computing in the education system should not be constrained to Computer Science class(es) or Programming class(es), but should be pervasive, linked with, and linking to other classes and subjects, or even better (but more ambitiously), embedded in other classes and curricula. As he notes, this kind of tight integration of a "literacy" has been done before and with other skills:
>To properly teach a tool, it must be used in context and reinforced horizontally (across the school day in multiple subjects) and vertically (across the years as students become more comfortable with more complicated tools). These imperatives have found purchase before, often in the form of encouraging medium- or long-form writing in all subjects, or in the use of (some) math in all science-based courses.
** The danger of keeping things (knowledge, subject domains) compartmentalized (or silo'd) is the unfortunate and damaging misperception among students and in society in general that
>knowledge is a bunch of tiny islands whose inhabitants are called the “Good at Math” or the “Good at English.”

* __The link between Math and Computing, and the implications for the math curriculum:__
> I believe that computers and their ability to easily manipulate data offers a chance to truly redefine the mathematics curriculum, to make it more horizontal, and to refocus the tools we teach on what is actually useful and stimulating. 
> - Statistics, not calculus, should be the pinnacle achievement of high school, not relegated to box-and-whisker plots and an AP course which is accepted by relatively few universities. 
> - Algebra, the math of manipulating symbols, should be taught alongside programming. 
> - Calculus, a course which I have heard multiple people describe as “easy but for the Algebra,” should be relegated to a unit in Statistics. 
> - Trigonometric identities and conics should go away. 
> - And earlier math should focus on how and why a student arrives at an answer, and why her procedure always works, not just the answer itself.

* __So why is the math curriculum the way it is?__
** One reason is that ''Historically, Computation was Hard''. Wilson correctly points out that many of the important, meaningful, and exciting skills and knowledge in math involved a lot of hard, manual work. Because they were time-consuming and hard to perform (let alone repeat multiple times), they were seen as activities which would lower student motivation and understanding (which is probably true :( ), and therefore they were not included in the curriculum.
*** But Computing has changed that. Nowadays, very tedious, long, complicated calculations, simulations, and presentations/displays can be done instantly, and therefore, we should revisit old/established curricula decisions, and change things if it makes sense in light of our new Computing capabilities.
** Another reason is that ''we teach what is easy to measure, assess, evaluate, and grade''. Teaching (and grading) the correct execution of math symbol manipulation, computing algorithms, model creation and execution is easy, and in many cases can be (at least somewhat) automated (i.e., multiple choice questions, final answer, end result credit/grade, etc.).
*** The hard things to measure/assess/evaluate/grade are the reasons, analysis, justifications, and implications of selecting data, constructing models, reducing biases, verifying conclusions, etc., all of which are part of the process and come before or after the "easier" part of executing and calculating. The hard parts take the form of reasoning, writing, persuading, displaying and communicating, which are harder to measure/grade, and therefore we tend to drop them from the curricula.
** A third reason for the current state of things is that ''//virtually all// learning happens in the classroom, and is done mainly by the teacher''. This imposes many limitations on significant and deep learning.
*** Computers (calculations, simulations, displays, sensing, monitoring, communicating, networking) can expand and deepen the learning experiences and their relevance/meaning to the learners and society.
** A fourth reason is that ''teachers know how to teach the current curricula. Opportunities to experiment/change/expand content and techniques are rare and bureaucratically complicated''.
*** The teacher evaluation, development, and promotion processes are biased towards "maintaining a solid status quo". And excellent/master teachers get recognized and promoted by becoming part of the administration (and not part of a teacher improvement/development process/program).
>>teachers are given very little time to reflect on their teaching, to observe each other, or to, heaven forbid, write about their work in local, regional, or national journals and conferences. It is not at all implausible to imagine a teacher promotion system which includes an academic (as in “the academy”) component.

* __Suggestions for improvement__
Wilson suggests an addition/improvement to the curricula taught at school, in the form of "a project that high school seniors would complete before graduation that would serve as the culmination of their years of study. [..] a project which explicitly uses all the tools students have learned over their years of high school to advocate for change in their communities."
>these projects require a broad range of skills which high schoolers should be proficient in. They require long to medium term planning, they require a reasonable amount of statistical knowledge, they require the ability to manipulate data, they require an understanding of historical trends, and they require the ability to write a piece of persuasive writing that distills and interprets large numbers of facts.
>Moreover, such projects have the potential to impact their communities in profound ways.

>What most interests me, though, is the sort of work that computers and statistics could open up. Imagine a project in which students identified a potential problem in their community, collected and analyzed data about that problem, and then presented that report to someone who could potentially make changes to the community. Perhaps their data could come from public records, or perhaps their data could come from interviews with community members, or from some other physical collection mechanism they devise.
>Imagine a world where students build hardware and place it around their community to measure the effects of pollutants or the weather or traffic. Imagine students analyzing which intersections in their town see the most deaths. Imagine students looking at their community’s finances and finding corruption with tools like Benford’s law.
>Or for those who do not come up with an original idea, imagine continuing a long running project, like the school newspaper, but instead the school’s annual weather report, analyzing how the data has changed over time.
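
Benford's law makes Wilson's last example easy to try: it predicts that, in many naturally occurring data sets, the leading digit d appears with frequency log~~10~~(1 + 1/d). Here is a minimal sketch in Python (mine, not Wilson's; the amounts are made up) of the kind of check a student project could run over, say, a town's expense records:
{{{
# A minimal Benford's-law check: compare the observed distribution of
# leading digits against Benford's predicted frequencies log10(1 + 1/d).
import math
from collections import Counter

def leading_digit(x):
    """First significant digit of a nonzero number."""
    return int(f"{abs(x):e}"[0])

def benford_table(amounts):
    amounts = [a for a in amounts if a != 0]
    counts = Counter(leading_digit(a) for a in amounts)
    for d in range(1, 10):
        expected = math.log10(1 + 1 / d)       # Benford's predicted share
        observed = counts.get(d, 0) / len(amounts)
        print(f"digit {d}: expected {expected:.3f}, observed {observed:.3f}")

# Hypothetical amounts, standing in for a town's expense records.
benford_table([1234.56, 1875.00, 302.75, 118.90, 2750.00, 990.10, 1420.33])
}}}
A large gap between the expected and observed columns is not proof of wrongdoing, of course, but it is exactly the kind of "fact" worth digging into further.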


''The following are Wilson's opinions and recommendations''
 
* __What Curriculum is Necessary to Support these Projects?__
** The Computing and Algebra curricula should be tightly aligned, each reinforcing the other.
>[Both programming and algebra are] essentially arithmetic abstracted. Algebra focuses a bit more on the individual puzzle, and programming focuses a bit more on realizing the general answer, but beyond this, they fundamentally amount to the realization that when symbols stand in for data, we may begin to see the forest and not the trees.
** Teaching Discrete Math and Data Structures (instead of Geometry).
>[Similarly to geometry courses, in] a course in discrete math and data structures [...] students would still be asked to construct proofs, but the investigation of the facts would involve programming numerous examples and extrapolating the most likely answer from those examples.
>Students would come much more prepared to answer questions in discrete math having essentially become familiar with induction and recursion in their programming classes.
>Many of these types of empirical studies would also be the beginning of a statistical education. 
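
As a tiny illustration of that style of investigation (my example, not from the article): program many examples of the sum of the first n odd numbers, let the printout suggest the conjecture, and only then prove it by induction.
{{{
# Conjecture by experiment: sum the first n odd numbers for many n
# and let the printout suggest the theorem (the sum is n^2).
for n in range(1, 11):
    total = sum(2 * k - 1 for k in range(1, n + 1))
    print(f"n={n:2d}  sum={total:3d}  n^2={n * n:3d}")
# Once the pattern is unmistakable, proving it by induction
# becomes the natural follow-up exercise.
}}}
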
** Teaching Statistics based on Open Data Sets (instead of Algebra II)
> a course on statistics, expanding on the statistical knowledge that our data structures course laid the foundation for. However, since students have experience in programming and data structures, we can go much, much further than what we traditionally expect from a traditional statistics course. We would still teach about means and medians and z-tests and t-tests, but we can also teach about the extraordinarily powerful permutation test. Here students can really come to understand the hard lessons about what exactly is randomness and what is noise and why these tests are necessary.
>The focus should move away from memorized rules of thumb for small samples to the actual analysis portion and the implications of their explorations for society.
>Projects in this course would be multipage reports about exploring their data sets. They would include executive summaries, charts, historical analysis, and policy recommendations. This is a hugely important form of writing which is often not a part of the high school curriculum at all.
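
And the permutation test really is within reach of a few lines of code. A minimal two-sample sketch (mine; the scores are made up): shuffle the pooled data repeatedly and ask how often chance alone produces a difference in means at least as large as the observed one.
{{{
# Two-sample permutation test: shuffle the pooled data many times and
# count how often the shuffled difference in means is at least as
# large as the observed one.  That fraction is the p-value.
import random

def permutation_test(a, b, trials=10000, seed=0):
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        left, right = pooled[:len(a)], pooled[len(a):]
        if abs(sum(left) / len(left) - sum(right) / len(right)) >= observed:
            hits += 1
    return hits / trials

# Made-up scores for two groups (say, two teaching methods):
print(permutation_test([88, 92, 94, 91, 87], [84, 83, 88, 79, 85]))
}}}
Because the test is just a loop over shuffles, students can //see// what "randomness" and "noise" mean, instead of memorizing a rule of thumb.
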
** Teach Machine Learning (Calculus becomes a unit in this course)
>[This course would be] an overview of various mathematical and statistical techniques from across the subject, though perhaps the two major themes are linear algebra, especially eigenvectors, and Bayesian statistics, especially the idea of priors, likelihoods, and posteriors. Along the way students would pick up all the Calculus they’ll likely need as they learn about optimizing functions.
>the real capstone of this course of study would be the capstone project. The three previous classes contain all that is necessary to be able to approach such a project, though many other classes that students might take could be brought to bear in spectacular ways. History courses could help students put what they learn into the context of the past; biology courses might yield fruitful areas of study, e.g., around pollution; journalism courses might lead to an interest in public records.
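
The Bayesian thread (priors, likelihoods, posteriors) can be made just as concrete. A minimal sketch of mine, using the textbook beta-binomial example: a Beta(a, b) prior over a coin's bias, updated by h heads in n flips, yields a Beta(a + h, b + n - h) posterior.
{{{
# Bayesian updating at its simplest: a Beta(a, b) prior over a coin's
# bias, updated by binomial data, yields a Beta(a + h, b + n - h)
# posterior.  Prior, likelihood, posterior -- in one line of arithmetic.
def beta_binomial_update(a, b, heads, flips):
    return a + heads, b + (flips - heads)

a, b = beta_binomial_update(1, 1, heads=7, flips=10)  # start from Beta(1,1)
print(f"posterior Beta({a},{b}), mean = {a / (a + b):.3f}")  # ~0.667
}}}
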
* __Benefits of this change in curricula__
** it loosens the tight prerequisite chains in the math curriculum, where "if you don't pass a hurdle, you are doomed".
** it also makes the curricula more relevant to the time (21st Century Learning) and place (community, society)
>The focus shifts from performing specific tasks (like manipulating one trigonometric expression into another) to being able to constantly improve a set of skills, specifically, looking out into the world, identifying a problem, collecting data on that problem, and using that data to help determine means to address that problem.
>These skills, identifying problems and supporting the analysis of those problems with facts, is a skill whose importance is paramount. Indeed, the Common Core State Standards for English and Language Arts bring up this point as early as the Seventh Grade.[15] But as data become easier to gather and process, “facts” shall come more and more to mean monstrous collections of data. And being able to discern what “facts” are plausible from these collections of data becomes more and more important.

I love Robert Wright's book introductions (see, for example, [[Nonzero: The Logic of Human Destiny]]).
This book (see [[The New York Times review|https://www.nytimes.com/1988/08/07/books/wanted-the-meaning-of-life.html]]) starts with his typical understatement and deadpan- (but also dead-serious-) ness:

>A NOTE TO READERS
>
>I don't want to alarm you, but this book is about -- 
>1. the concept of information;
>2. the concepts of meaning and purpose, in both their mundane and cosmic senses;
>3. the function of information at various levels of organic organization (in bacteria, ant colonies, human brains, and supermarket chains, for example), with particular emphasis on its role in reconciling life with the second law of thermodynamics;
>4. the meaning of the information age, viewed in light of the role information has played throughout evolution;
>5. the meaning of life, and 
>6. a couple of other issues at the intersection of religion and science.[^^1^^]
>
>Now for the good news: this book is also about three living, breathing, and, I think, unusually interesting human beings. In fact, they are what the book is mainly about. So, for the most part, all you have to do is read about them, about their personal histories, their ways of living, and their very ambitious ways of thinking about the universe and our place in it, and let the above subjects emerge in the process. It will be fairly painless, as these things go.

The first part of the book covers Edward Fredkin (of [["The Universe is a Simulation"|http://www.digitalphilosophy.org/index.php/essays/]] fame)^^2^^. Fredkin believes that his way of looking at the world from an "informational viewpoint" addresses the "three great philosophical questions":
* what is life?
* What is consciousness and thinking and memory and all that?
* How does the universe work?

As part of his work on Computing, Fredkin came up with a "reversible computer", made of [[reversible "Fredkin (logic) Gates"|On reversible computers and logic]].
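
For the curious, the Fredkin gate is a "controlled swap", easy to play with in code (a sketch from the standard definition): it is reversible (applying it twice restores the inputs) and conservative (the number of 1-bits never changes).
{{{
# The Fredkin gate (controlled swap): if the control bit c is 1,
# the two data bits are swapped; otherwise everything passes through.
def fredkin(c, a, b):
    return (c, b, a) if c else (c, a, b)

for bits in [(0, 1, 0), (1, 1, 0)]:
    out = fredkin(*bits)
    assert fredkin(*out) == bits   # reversible: applying twice undoes it
    print(bits, "->", out)
}}}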

----
^^1^^ Wright once commented that "Science and religion, like the lion and the lamb, seldom lie down together. But when a scientist stumbles upon a plausible unifying principle behind the world's workings — Darwin upon natural selection, say, or Einstein upon relativity — he transforms himself from searcher into believer."
^^2^^ - [[Langton said|Christopher Langton on Dynamical Patterns]] about Fredkin's ideas: "[Fredkin thinks that] the universe as we know it is an artifact in a computer in a more "real" universe. This is a very nice notion, if only for the perspective to be gained from it as a thought experiment — as a way to enhance one's objectivity with respect to the reality one's embedded in."
/***
|Name|TiddlerPasswordPlugin|
|Source|http://www.TiddlyTools.com/#TiddlerPasswordPlugin|
|Documentation|http://www.TiddlyTools.com/#TiddlerPasswordPluginInfo|
|Version|1.1.3|
|Author|Eric Shulman|
|License|http://www.TiddlyTools.com/#LegalStatements|
|~CoreVersion|2.1|
|Type|plugin|
|Description|block viewing of tiddler content by prompting for a password before content is displayed|
This plugin blocks viewing of specific tiddler content by prompting for a NON-SECURE, UNENCRYPTED password before the tiddler is displayed.  If the correct password is not entered, the tiddler is automatically closed.  The process does not prevent tiddler content from being viewed directly from the TiddlyWiki source file's storeArea, nor does it encrypt the tiddler content in any way.  Because it is relatively simple to bypass and/or disable the password prompting process, this macro should be thought of as a "latch" rather than a "lock" on a given tiddler.
!!!!!Documentation
> see [[TiddlerPasswordPluginInfo]]
!!!!!Installation Notes
<<<
''As soon as you have installed this plugin, you should change the default admin password in [[TiddlerPasswordPluginConfig]].''  Note: the configuration tiddler is password-protected to prevent the admin password from being viewed (and/or modified) unless the current password is provided.  By default, the admin password is set to "admin".
<<<
!!!!!Revisions
<<<
2008.03.10 [*.*.*] plugin size reduction - documentation moved to [[TiddlerPasswordPluginInfo]]
2007.09.13 [1.1.3] adjusted wording of "cancelMsg" text so it can apply to either view-mode or edit-mode activities, and documented usage in ViewTemplate/EditTemplate.
| Please see [[TiddlerPasswordPluginInfo]] for previous revision details |
2006.12.02 [1.0.0] initial release - converted from GetTiddlerPassword inline script
<<<
!!!!!Code
***/
//{{{
version.extensions.TiddlerPasswordPlugin= {major: 1, minor: 1, revision: 3, date: new Date(2007,9,13)};

config.macros.getTiddlerPassword = {
	msg: "Please enter a password to view '%0'",
	defaultText: "enter password here",
	retryMsg: "'%0' is not the correct password for '%1'.  Please try again:",
	cancelMsg: "Sorry, you cannot access '%0' without a valid password.",
	thanksMsg: "Thank you, your password has been accepted.",
	handler: function(place,macroName,params,wikifier,paramString,tiddler) {
		var here=story.findContainingTiddler(place); if (!here) return;
		var title=tiddler?tiddler.title:here.getAttribute("tiddler");
		var who=here.getAttribute("logID");
		var userPass=params[0]?params[0]:""; if (userPass=='-') userPass="";
		var msg=params[1]?params[1]:this.msg;
		if (who==userPass||who==this.adminPass) return; // already 'logged in'?
		var who=prompt(msg.format([title]),this.defaultText); // ask for ID
		while (who && who!=userPass && who!=this.adminPass) // not correct ID?
			who=prompt(this.retryMsg.format([who,title]),this.defaultText); // ask again
		if (who==userPass||who==this.adminPass) // correct ID? mark tiddler logged in...
			{ here.setAttribute("logID",who); alert(this.thanksMsg); }
		else // incorrect ID (e.g., entry cancelled by user)...
			{ story.closeTiddler(here.getAttribute("tiddler")); alert(this.cancelMsg.format([title])); }
	}
}
// default admin password (may be overridden in TiddlerPasswordPluginConfig)
if (config.macros.getTiddlerPassword.adminPass==undefined)
	config.macros.getTiddlerPassword.adminPass="admin";
//}}}
// // Tiddler Admin Password Configuration... <<getTiddlerPassword>> /% rest of tiddler will not be displayed without password... %/
//{{{
config.macros.getTiddlerPassword.adminPass="KanTran67";
//}}}
// {{small{NOTE: after changing the password, save-and-reload the document for the change to take effect}}} //
Similar to a [[Seagull Moment|Seagull moment]] (and containing one in it!), this is a snippet by Terry Pratchett from his book //Thief of Time//.

It describes what happens right after Wen ("The Eternally Surprised") had an "extra-ordinary experience" (having to do with the nature of time and our perception of it) during the night.

>“I remember yesterday,” said Wen, thoughtfully. “But the memory is in my head now. Was yesterday real? Or is it only the memory that is real? Truly, yesterday I was not born.”
>Clodpool’s face became a mask of agonized incomprehension.
>“Dear stupid Clodpool, I have learned everything,” said Wen. “In the cup of the hand there is no past, no future. There is only now. There is no time but the present. We have a great deal to do.”
>Clodpool hesitated. There was something new about his master. 
>There was a glow in his eyes and, when he moved, there were strange silvery-blue lights in the air, like reflections from liquid mirrors.
>“She has told me everything,” Wen went on. “I know that time was made for men, not the other way around. I have learned how to shape it and bend it. I know how to make a moment last forever, because it already has. And I can teach these skills even to you, Clodpool. I have heard the heartbeat of the universe. I know the answers to many questions. Ask me.”
>The apprentice gave him a bleary look. It was too early in the morning for it to be early in the morning. That was the only thing that he currently knew for sure.
>“Er…what does master want for breakfast?” he said.
>Wen looked down from their camp, and across the snowfields and purple mountains to the golden daylight creating the world, and mused upon certain aspects of humanity.
>“Ah,” he said. “One of the difficult ones.”
To improve is to change; to be perfect is to change often.
A serendipitous path (more like a //web//, actually) led to this writing; it went something like this:
about a week ago we were invited to dinner with a lovely family, where I had a short discussion with one of the hosts about visualization for understanding. I had mentioned some of [[Bret Victor's work|http://worrydream.com/]], which I like and [[wrote about|Enabling to think the unthinkable]], and also followed up by emailing links to a couple of his excellent examples on visualizing software code execution, and layers of abstraction.
A few days later, I was Googling for technology news/updates on what Victor calls [[reactive documents|http://worrydream.com/ExplorableExplanations/]] and which he implemented in ~JavaScript as [[Tangle|http://worrydream.com/Tangle/]], and one of the results was an [[article by Evan Miller titled Don't Kill Math|http://www.evanmiller.org/dont-kill-math.html]], echoing [[Victor's original article titled Kill Math|http://worrydream.com/KillMath/]].
I'll get to both articles in a moment, but the serendipity continues...
Yesterday, I picked up a book by Ivars Peterson (author of //The Mathematics Tourist//) titled //Islands of Truth (A mathematical mystery cruise)//, in which, on the 2^^nd^^ page of the preface, he mentions "physicist David Gross's witty look at the relationship between physics and mathematics". So obviously, I had to find [[Gross's article (titled Physics and mathematics at the frontier)|resources/Gross-Physics and Mathematics at the Frontier.pdf]].
Gross's article is very interesting, and one of the topics he discusses is "the Unreasonable Effectiveness of Mathematics in Physics", echoing Eugene Wigner's famous essay. (I actually got introduced to the topic of ''"why Math works for us?"'' and [[wrote about it|On why Math works for us]] after reading Richard Hamming's [[article|http://www.dartmouth.edu/~matc/MathDrama/reading/Hamming.html]], and Frank Wilczek's [[article|http://www.employees.org/~hmark/resources/Wilczek_reasonably1.pdf]]).
I hope I'm not losing you; I'm almost done "unrolling" the serendipity web...
Towards the end of Gross's article, he states:
>Mathematicians also think differently and have different habits of work than physicists, even when they are exploring similar structures. Mathematicians love to generalize, to extend their concepts to the most general possible case, to construct the most inclusive possible theory. Physicists are of course interested not in the most general case but in the special case of the real world. They also work by simplification, idealization, and by the construction of specific examples. We might say that mathematicians labor to construct interesting and useful definitions from which good theorems flow, physicists to construct interesting and useful models from which good predictions flow.
And the differences between mathematicians and physicists in terms of their ways of thinking, their goals, and their methods are, I believe, at the crux of the disagreement between Victor and Miller as to whether to "kill math" or not.

I had read Victor's Kill Math article months ago and did not remember all the details, but when reading Miller's reaction I thought I should read it again, since Miller seems to be saying that Victor has "dismissive attitudes toward analytic methods in the sciences", and is claiming that:
> interactive interfaces [Haggai: simulations, models] can and should replace traditional analytic methods for practicing scientists and engineers because interactive interfaces convey deeper understanding than their analytic counterparts.

Miller has some very valid points and strong arguments, but I think that this is another case of "the power of 'The And'", where the real power and advantage is in using ''both'' Math/analytics ''and'' Physics/simulation.

[[George Polya|http://en.wikipedia.org/wiki/George_P%C3%B3lya]] (of [["How to Solve it"|https://notendur.hi.is/hei2/teaching/Polya_HowToSolveIt.pdf]] fame) has a lot to say about intuition, conjectures, guessing, and plausible reasoning, in the introduction to his book [[Induction And Analogy In Mathematics|https://archive.org/download/Induction_And_Analogy_In_Mathematics_1_/Induction_And_Analogy_In_Mathematics_1_.pdf]]:
>Strictly speaking, all our knowledge outside mathematics and demonstrative logic (which is, in fact, a branch of mathematics) consists of conjectures.
>[...]We secure our mathematical knowledge by demonstrative reasoning, but we support our conjectures by plausible reasoning. A mathematical proof is demonstrative reasoning, but the inductive evidence of the physicist, the circumstantial evidence of the lawyer, the documentary evidence of the historian, and the statistical evidence of the economist belong to plausible reasoning.
I believe that this ability to "bounce" between analytic methods and simulations and modeling is a "constructive ladder" (different from the ladder of abstraction, but achieving similar goals in terms of knowledge and insights), where intuitions and formal analytics build on each other very effectively (and herein lies the "miracle" of "how math works so well for us").
Or in Polya's words:
>Finished mathematics presented in a finished form appears as purely demonstrative, consisting of proofs only. Yet mathematics in the making resembles any other human knowledge in the making. You have to guess a mathematical theorem before you prove it; you have to guess the idea of the proof before you carry through the details. You have to combine observations and follow analogies; you have to try and try again. The result of the mathematician's creative work is demonstrative reasoning, a proof; but the proof is discovered by plausible reasoning, by guessing.

When Victor writes that
>When most people speak of Math, what they have in mind is more its mechanism than its essence. This "Math" consists of assigning meaning to a set of symbols, blindly shuffling around these symbols according to arcane rules, and then interpreting a meaning from the shuffled result.
>... The symbolic shuffle should no longer be taken for granted as the fundamental mechanism for understanding quantity and change. Math needs a new interface.

I believe he is criticizing what Polya calls "demonstrative reasoning", and claiming that the "symbol manipulations" in Math are ''an interface'' (for which he suggests a different User Interface (UI)). And that's where I disagree with Victor, since I strongly believe (and agree with Miller on this point) that there is knowledge to be gained by manipulating (Victor's "shuffling") and analyzing the symbols and relationships for meaning. In this sense, the activity is definitely not "just a user interface", which Victor explains away as the natural outgrowth of the pencil-and-paper technologies of the past.
If anything, I would draw similarities between analytics/shuffling and the goal-driven activity of a person manipulating a software user interface (a-la Victor's suggestion), including visual zooming, rotating, transforming, etc. Both modes and procedures yield knowledge/insights, and as such should be learned and exercised as appropriate. Victor may be focusing on the trees (the formal/demonstrative actions, or the equivalent manipulations of a User Interface) and missing the forest (the knowledge/insights gained from the results/manipulations).

!!!!A simple example of analytic manipulation and simulation/visualization complementing each other
A while ago I had taught a Citizen Schools [[course called Right on Target|The "Right on Target" course]]. Since it was targeted at middle school students, the main concepts of [[ballistic trajectories|http://en.wikipedia.org/wiki/Ballistic_trajectory]] were explored through simulation and game playing (e.g., hitting targets with missiles).

Knowing the math (or looking it up on [[Wikipedia|http://en.wikipedia.org/wiki/Ballistic_trajectory]]):

To hit a target at range x and altitude y, when fired from (0,0) with initial speed v, the required angle(s) of launch theta are:
[img[ballistic trajectory - angle|resources/ballistic-trajectory-formula.png][resources/ballistic-trajectory-formula.png]]
The +/- in the equation means that there are 2 solutions/angles which will lead to hitting the target at (x,y), given velocity v.
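
In text form, the standard angle-of-reach formula from the Wikipedia article linked above (which is what the image should show) is theta = atan((v^^2^^ +/- sqrt(v^^4^^ - g(gx^^2^^ + 2yv^^2^^))) / (gx)). A few lines of Python (mine; assuming g = 9.8 m/s^^2^^) reproduce the two angles of the simulation example further below:
{{{
# theta = atan((v^2 +/- sqrt(v^4 - g*(g*x^2 + 2*y*v^2))) / (g*x))
# The +/- yields the two (real) launch angles, when the target is in reach.
import math

def launch_angles(x, y, v, g=9.8):
    disc = v**4 - g * (g * x**2 + 2 * y * v**2)
    if disc < 0:
        return []                       # target out of reach at this speed
    root = math.sqrt(disc)
    return [math.degrees(math.atan((v**2 + s * root) / (g * x)))
            for s in (1, -1)]

# The example from the simulation further below: target at (30, 20), v = 30.
print(launch_angles(30, 20, 30))  # ~[79.0, 44.7]: the 79 and 45 quoted below
}}}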

The simulation can show that this is really true, and if the UI enables setting the parameters and/or a sweep of all angles, then we can see it happen.
''But'', even if the UI enables the visualization of the 2 possible solutions, the question or risk is that users will not see/explore/find them (and this is actually what happened to some students in my class). 
''Moreover'', from the visualization/simulation/UI it may not be clear whether there are more than 2 solutions/angles, whereas, from the math/equation, a trained person (and this was one of Victor's points, namely, you have to be "in the know") can indeed see that there are only 2 (real!) solutions.
So, this is a simple case where knowing the math __can guide your activity__ within a simulation. On the flip side, playing with the simulation gives you a strong sense of what's possible, making it intuitive that there are no more than 2 solutions. But, to Polya's point, the formalism of the math is the proof.

[img[ballistic trajectory - simulation|resources/ejs-balistic-trajectory-1.png][resources/ejs-balistic-trajectory-1.png]]
[[A simulation|http://www.employees.org/~hmark/math/ejs/RightOnTarget.html]] showing a target at x=30, y=20, and a projectile with v=30. The two hitting angles are 45 and 79 degrees.
To teach is to learn twice.
!!!! Tiddler creation/updates
* Look up Hofstadter's criticism of Searle talking about "intelligent beer cans" to ridicule the idea that "intelligent behavior" can be an emergent phenomenon originating from very basic operations/activities.
** link this to Dennett's coverage of top-down deconstruction to demonstrate "intelligent behavior"
** link this to Melanie Mitchell's tiddler on [[CA and computing|Everything is computation]]
** also, similar to Hofstadter's description of a chain of dominoes set up to calculate/determine the primeness of a number

* Analyze and generalize the "ladder of abstraction" with [[this beautiful example|http://worrydream.com/LadderOfAbstraction/]] by Bret Victor (with the text only [[here|resources/Up and Down the Ladder of Abstraction.htm]])

* Analyze [[Simulation as a Practical Tool|http://worrydream.com/SimulationAsAPracticalTool/]] by Bret Victor (October 19, 2009) (with the text only [[here|resources/Simulation as a Practical Tool.htm]]), to see if it ties to my idea of the [[Universal Emulator|Universal Emulator]]

* Extract useful principles for and draw parallels to the [[Universal Emulator|Universal Emulator]] from Feynman's article on [[simulating Physics with computers|resources/Feynman_simulating_physics.pdf]]

* Analyze the atheistic point of view of Fredkin about the soul as [[an informational construct|resources/Fredkin - on the soul.pdf]]

* Summarize Anders Ericsson's article in the Harvard Business Review [[The Making of an Expert|resources/Ericsson - making of an expert.pdf]]

* Summarize Anders Ericsson's article in Psychological Review [[The Role of Deliberate Practice in the Acquisition of Expert Performance|resources/Ericsson - DeliberatePractice.pdf]]


!!!! Permalinks
* ''~TiddlyWiki at Stanford LDT: https://tinyurl.com/hmark-wiki''
* search engine by keyword: haggaimarkwiki or HaggaiMarkWiki
* Welcome and Education TOC tiddler: http://tinyurl.com/HaggaiLDT1
* Education TOC tiddler with Citizen Schools examples: http://tinyurl.com/HaggaiLDT
* Computational Thinking, Computational Literacy in the classroom - examples: http://tinyurl.com/HaggaiCT2
* A Framework for Computational Thinking, Computational Literacy: http://tinyurl.com/HaggaiCT1
* Re-taught Amazing Mazes course at Citizen Schools with teacher wikispaces site and lesson plans: http://tinyurl.com/HaggaiMazes2
* Original Amazing Mazes course at Citizen Schools: http://tinyurl.com/HaggaiMazes
* Amazing Mazes course - 2 blog entries for 2 offerings - http://tinyurl.com/HaggaiAmazingMazes2
* Computational Literacy/Thinking in Physical and Earth Sciences: http://tinyurl.com/HaggaiLabReady1
* Teaching Sparks, Learning Moments - http://tinyurl.com/HaggaiSparks
* About Me page and Education TOC - http://tinyurl.com/HaggaiFinStateEd
* Formalism and Intuition - Rigor Mortise - http://tinyurl.com/HaggaiRigorMortise
As part of teaching the [[Right on Target course|The "Right on Target" course]] at Citizen Schools, we watched and discussed [[a video of a simple experiment|resources/GalileoFallingBodiesGravityDemo.mp4]] showing how two falling bodies with different masses behave when pulled by gravity, and when air resistance is negligible.

It's __qualitatively__ obvious (click on the image below to see the video) what happens to the two objects, but the ~OpenSource [[video analysis tool "Tracker"|http://www.compadre.org/osp/items/detail.cfm?ID=9687]] can be used to __quantitatively__ analyze the behavior of these bodies in free-fall. In this experiment, a tennis ball and a bowling ball are dropped from the top of a school gym structure, and Tracker is used to analyze the fall.

[img[Freefalling bodies|resources/TowerOfPisa_at school2.png][resources/GalileoFallingBodiesGravityDemo.mp4]]

Tracker is fairly easy to use:
* importing the video
** it should be of good quality (Tracker works with .mov, .avi, .mp4, .flv, .wmv, etc.)
** it should be shot from a single, static (ideally, front-view) position/camera
* setting the coordinate system (x-y, origin) - see the pink/purple lines
* setting the scale - see the blue double arrow
* clicking through the video, one (or more) frame(s) at a time, and marking the objects (in this case one of the falling balls) you want to track - see the red diamonds

As you mark the tracked objects, Tracker displays __multiple representations__ of the motion, in the form of a graph, a table, and the markers on the video clip. It is easy to add or toggle between the various variables you want to display (e.g. time (t), vertical distance (y), vertical speed (V~~y~~), vertical acceleration (a~~y~~), etc.), which Tracker can automatically calculate for you.
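
As a sketch of where "quantify what you see" can go one step further (my example, with made-up numbers rather than the actual clip's data): export Tracker's (t, y) table and fit a parabola to it; the quadratic coefficient then hands you an estimate of g.
{{{
# Fit y = c2*t^2 + c1*t + c0 to (t, y) data exported from Tracker;
# for free fall y = y0 + v0*t - (g/2)*t^2, so g = -2*c2.
import numpy as np

t = np.array([0.0, 0.1, 0.2, 0.3, 0.4, 0.5])        # seconds (made up)
y = np.array([10.0, 9.95, 9.80, 9.56, 9.22, 8.78])  # meters (made up)

c2, c1, c0 = np.polyfit(t, y, 2)
print(f"estimated g = {-2 * c2:.2f} m/s^2")         # expect ~9.8
}}}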

This tool and technique provides an easy and powerful way to "quantify what you see". It can be used to 'verify the theory', or to help create a model. The pedagogical approach of using the analysis tool to explore and 'invent' a model, //before// the theory is taught/told, can be very effective in getting learners "hooked" (since they came up with an explanation/theory, and are now "committed"), and in getting them to build theoretical, "deep structures" for themselves, which can be built upon by the teach/tell that follows. [[Daniel Schwartz and his students at Stanford showed in experiments|http://aaalab.stanford.edu/papers/DBChin_PracticingvsInventing_JEP5_FINAL_20110720.pdf]] that this approach (where learners actively explore and invent mental models, //before// being taught/told the material/theory/concepts) leads to better mental models ("deep structures") and higher transfer of learning to new/novel situations, which is a very desirable learning outcome, especially in STEM (Science, Technology, Engineering, Math) education.
From Chris Crawford's book  [[The Art of Computer Game Design|resources/computer game design - chris crawford.pdf]]:

>PRECEPT #2: DON'T TRANSPLANT
>One of the most disgusting denizens of computer gamedom is the transplanted game. This is a game design originally developed on another medium that some misguided soul has seen fit to reincarnate on a computer. The high incidence of this practice does not excuse its fundamental folly. The most generous reaction I can muster is the observation that we are in the early stages of computer game design; we have no sure guidelines and must rely on existing technologies to guide us. Some day we will look back on these early transplanted games with the same derision with which we look on early aircraft designs based on flapping wings.

>Why do I so vehemently denounce transplanted games? Because they are design bastards, the illegitimate children of two technologies that have nothing in common. Consider the worst example I have discovered so far, a computer craps game. The computer displays and rolls two dice for the player in a standard game of craps. The computer plays the game perfectly well, but that is not the point. The point is, why bother implementing on the computer a game that works perfectly well on another technology? A pair of dice can be had for less than a dollar. Indeed, a strong case can be made that the computer version is less successful than the original. Apparently one of the appeals of the game of craps is the right of the player to shake the dice himself. Many players share the belief that proper grip on the dice, or speaking to them, or perhaps kissing them will improve their luck. Thus, the player can maintain the illusion of control, of participation rather than observation.

>The computer provides none of this; the mathematics may be the same, but the fantasy and illusion aren't there.

Seymour Papert identified the principle of [[New media should equal New Content|An Exploration in the Space of Mathematics Educations]] as a sound design practice for learning/education.

On [[other implications of game design for learning|The art of computer game design and some implications on learning]]
Inspired by [[David Whyte|David Whyte - questions]], I almost decided to write 'pilgrim' (defined: a person who journeys, especially a long distance, to some sacred place as an act of __religious__ devotion), but opted for 'traveler', which has less religious connotations.

I wish there were a better word, associated with ''spirituality'' (which is at the heart of what I have in mind here) but not necessarily with ''religion''.

[[Douglas Adams captured|It [evolution] was a concept of such stunning simplicity, but it gave rise, naturally, to all of the infinite and baffling complexity of life. The awe it inspired in me made the awe that people talk about in respect of religious experience seem, frankly, silly beside it. I'd take the awe of understanding over the awe of ignorance any day.]] the two very different senses of spirituality well.
After reading a [[hilarious short article by Umberto Eco|http://www.thephora.net/forum/archive/index.php/t-77477.html]] (the author of the excellent novel "The Name of the Rose", which has been described as "an intellectual mystery combining semiotics in fiction, biblical analysis, medieval studies and literary theory"), and enjoying his conversational, intelligent, and thoughtful academic style (which has been described as "reading 'like a novel', opinionated, frequently irreverent, sometimes polemical, and often hilarious"), I was also (pleasantly) surprised by his unusual use of parentheses -- similar to my own writing style (Delightful! (if I say so myself :))

So I searched for comments on Eco's [[writing style|Why writing style matters]] and came up with his [[enjoyable set of rules for writing|http://gioclairval.blogspot.com/2010/02/umberto-ecos-rules-for-writing-well.html]]. (See also [[99 writing styles by Raymond Queneau|Exercises in Style - Raymond Queneau]])


''Reading instructions'': When done reading the list below (or at some point in the middle), don't forget to read the reading instructions at the bottom of these writing rules!


So here's Eco's advice:

1. Avoid alliterations, even if they're manna for morons.

2. Don't contribute to the killing of the subjunctive mode, I suggest that the writer use it when necessary.

3. Avoid clichés: they're like death warmed over.

4. Thou shall express thyself in the simplest of fashions.

5. Don't use acronyms & abbreviations, etc.

6. (Always) remember that parentheses (even when they seem indispensable) interrupt the flow.

7. Beware of indigestion... of ellipses.

8. Limit the use of inverted commas. Quotes aren't "elegant."

9. Never generalize.

10. Foreign words aren't bon ton.

11. Hold those quotes. Emerson aptly said, "I hate quotes. Tell me only what you know."

12. Similes are like catch phrases.

13. Don't be repetitious; don't repeat the same thing twice; repeating is superfluous (redundancy means the useless explanation of something the reader has already understood).

14. Only twats use swear words.

15. Always be somehow specific.

16. Hyperbole is the most extraordinary of expressive techniques.

17. Don't write one-word sentences. Ever.

18. Beware too-daring metaphors: they are feathers on a serpent's scales.

19. Put, commas, in the appropriate places.

20. Recognize the difference between the semicolon and the colon: even if it's hard.

21. If you can't find the appropriate expression, refrain from using colloquial/dialectal expressions. In Venice, they say "Peso el tacòn del buso". "The patch is worse than the hole".

22. Do you really need rhetorical questions?

23. Be concise; try expressing your thoughts with the least possible number of words, avoiding long sentences-- or sentences interrupted by incidental phrases that always confuse the casual reader-- in order to avoid contributing to the general pollution of information, which is surely (particularly when it is uselessly ripe with unnecessary explanations, or at least non indispensable specifications) one of the tragedies of our media-dominated time.

24. Don't be emphatic! Be careful with exclamation marks!

25. Spell foreign names correctly, like Beaudelaire, Roosewelt, Niezsche and so on.

26. Name the authors and characters you refer to, without using periphrases. So did the greatest Lombard author of the nineteenth century, the author of "The 5th of May."

27. Begin your text with a captatio benevolentiae, to ingratiate yourself with your reader (but perhaps you're so stupid you don't even know what I'm talking about).

28. Be fastidios with you're speling.

29. No need to tell you how cloying preteritions are [telling by saying you are not going to tell].

30. Do not change paragraph when unneeded.
     Not too often.
    Anyway.

31. No plurale majestatis, please. We believe it pompous.

32. Do not take the cause for the effect: you would be wrong and thus you would make a mistake.

33. Do not write sentences in which the conclusion doesn't follow the premises in a logical way: if everyone did this, premises would stem from conclusions.

34. Do not indulge in archaic forms, apax legomena and other unused lexemes, nor in deep rizomatic structures which, however appealing to you as epiphanies of the grammatological differance (sic), inviting to a deconstructive tangent – but, even worse it would be if they appeared to be debatable under the scrutiny of anyone who would read them with ecdotic acridity – would go beyond the recipient's cognitive competencies.

35. You should never be wordy. On the other hand, you should not say less than.

36. A complete sentence should comprise.


Befitting Eco's style/spirit and the nature of the above rules, here are the (actually useful) ''reading instructions'' (at their appropriate place :)

Eco had ([[he died in February 2016|http://www.theatlantic.com/entertainment/archive/2016/02/umberto-eco-dies/470235/]]) a great, dry sense of humor, at times very subtle (and at other times not subtle at all). It may not be obvious when reading the first few rules above, but it will dawn on you further down the list, and at some point it'll be very obvious. At that point, it may be a good idea to circle back and read the list from the top, again! 
Guaranteed, you have missed some of his points the first time. (Also guaranteed: you'll enjoy it the second time around, too).

From The Free Dictionary:
''em·u·late''
1. To strive to equal or excel, especially through imitation: an older pupil whose accomplishments and style I emulated.
2. To compete with successfully; approach or attain equality with.
3. //Computer Science// To imitate the function of (another system), as by modifications to hardware or software that allow the imitating system to accept the same data, execute the same programs, and achieve the same results as the imitated system.

!!!!Points:
* Inspired by Andrea diSessa's ideas about [[computing literacy|Computing Literacy]] and the impact of computing on new ways to learn, teach, and do/perform
** His book "Changing Minds: Computers, Learning, and Literacy"
* A learning and [[Performance Support|Human Performance Support]] environment incorporating principles of diSessa's [[Open Toolsets|resources/diSessa- open toolsets.pdf]]
** inspiration from the [[eclipse platform|http://www.eclipse.org/projects/project.php?id=eclipse]], the ideas of plug-ins and components, the drag-and-drop UI, visual editors, domain modelers
** [[difficulties and pitfalls|resources/Spalter_2002a - reusable educational components.pdf]] related to developing and deploying educational components
* An environment supporting
** Math computation, visualization
** Physics computation, simulations
* Integrating multiple tools and capabilities from different sources
** inspired by [[Sage|http://sagemath.org/]] and its [[integrative design|http://sagemath.org/links-components.html]], all wrapped into a programmable environment
[[UKL's website|http://www.ursulakleguin.com/UKL_info.html]]
Or, more specifically, of "~Do-It-Yourself Cosmology".

From her book //The language of the night -- essays on fantasy and science fiction//:

>It would seem that the writer who composes a universe, invents a planet, or even populates a drawing room, is playing God. The creation of people, of worlds, of galaxies -- since it all comes out of one's head, surely it must also go to one's head?
>
>Some years ago, in the Bulletin of the Science Fiction Writers Association, [the ~Sci-Fi writer] [[Poul Anderson|https://en.wikipedia.org/wiki/Poul_Anderson]] published an article called (if I remember rightly), "How to Create a World." Taking it for granted that any reader of that publication would understand the pleasures of autocosmology, he warned gently of the dangers of carelessness, and then got down to the groundwork. Which kind of star is likely to have planets? What size and kind of planet is likely to have life aboard it? At what distance from what size sun? Is the moon's role functional or decorative? And so on, and on.
>
>People ignorant of science or science fiction are usually convinced that "sci fi writers just make all that up," but of course any halfway serious science fiction writer has to have studied such topics, and to keep reference books handy. Imagination is the essence; but it is controlled, exactly as the profuse strains of unpremeditated Art are controlled by the requirements of fixed or free rhythm and rhyme. As soon as you, the writer, have said, "The green sun had already set, but the red one was hanging like a bloated salami above the mountains," you had better have a pretty fair idea in your head concerning the type and size of green suns and red suns -- especially green ones, which are not the commonest sort -- and the arguments concerning the existence of planets in a binary system, and the probable effects of a double primary on orbit, tides, seasons, and biological rhythms; and then of course the mass of your planet and the nature of its atmosphere will tell you a good deal about the height and shape of those mountains; and so on, and on. You may even feel impelled to make a cursory study of the effect of senility upon salamis. None of this background work may actually get into the story. But if you are ignorant of these multiple implications of your pretty red and green suns, you'll make ugly errors, which every fourteen-year-old reading your story will wince at; and if you're bored by the labor of figuring them out, then surely you shouldn't be writing science fiction. A great part of the pleasure of the genre, for both writer and reader, lies in the solidity and precision, the logical elegance, of fantasy stimulated by and extrapolated from scientific fact.
>
>Wasting no time on apologies, Mr. Anderson provided a good batch of the sort of facts the universe-maker wants, including several mathematical equations useful in various situations. His essay was exemplary. It has received grateful response ever since except for one letter in the next issue, which went like this:
>
>Dear Mr. Anderson: That is not the way I do it.
>Yours truly, GOD
>
>Undeterred, Mr. Anderson has gone on to enlarge and reprint his useful article. On this particular subject, science fiction writers can only ignore the opinion of God. They have to do it their own way.
>
>Some quite practical values of their method are beginning to be appreciated. The Russians have used science fiction in the classroom for many years, and there are now American textbooks in sociology, political science, anthropology and psychology presenting science fiction stories as problems or statements of ideas; but, more specifically, a course was offered at an Oregon university last year, taught by a physicist with assistance from astronomers, geologists, etc., which the catalogue cautiously called Planetology, but which the joyful students more accurately called Planet Building. It was highly successful. The more one thinks about it the more one sees the usefulness of Do-It-Yourself cosmology as a device for teaching the general principles, mechanics, and history of the cosmos, the solar system, and the planet Earth. 
>
>A notable feature of this type of world-making -- the sober science-fictional and the classroom-heuristical -- is its modesty. God, as you can see by his letter, is not offended by it; no thunderbolt is called for; he merely points out that it's not the way he goes about the job. He's perfectly aware that these writers and students are not pretending to be, or trying to be, or mistaking themselves for, himself. If they were, he would warn them against what the Greeks called hubris and the Christians pride and the Jungians inflation. But that arrogant identification of the Ego with the Creator Spirit is quite absent here. This kind of world-making is a thought-experiment, performed with the caution and in the controlled, receptive spirit of experiment. Scientist and sciencefictioneer invent worlds in order to reflect and so to clarify, perhaps to glorify, the "real world," the objective Creation. The more closely their work resembles and so illuminates the solidity, complexity, amazingness, and coherence of the original, the happier they are.


In a blog entry on her site titled [[A Message About Messages|http://www.ursulakleguin.com/MessageAboutMessages.html]] ^^1^^ UKL brings up (in her typical strong, clear, and somewhat humorous style) a few good points on the question of whether stories are vehicles for delivering messages.

She writes:
>Readers — kids and adults — ask me about the message of one story or another. I want to say to them, “Your question isn’t in the right language.”
>As a fiction writer, I don’t speak message. I speak story. Sure, my story means something, but if you want to know what it means, you have to ask the question in terms appropriate to storytelling. Terms such as message are appropriate to expository writing, didactic writing, and sermons — different languages from fiction.

And makes the good point:
> [if a story] can be reduced to a few abstract words, neatly summarized in a school or college examination paper or a brisk critical review [then] why would writers go to the trouble of making up characters and relationships and plots and scenery and all that? Why not just deliver the message? Is the story a box to hide an idea in, a fancy dress to make a naked idea look pretty, a candy coating to make a bitter idea easier to swallow?

She notes that her objection to the question about the message of the story doesn't imply that stories have no meaning.
>I believe storytelling is one of the most useful tools we have for achieving meaning: it serves to keep our communities together by asking and saying who we are, and it’s one of the best tools an individual has to find out who I am, what life may ask of me and how I can respond.
>But that’s not the same as having a message. The complex meanings of a serious story or novel can be understood only by participation in the language of the story itself. To translate them into a message or reduce them to a sermon distorts, betrays, and destroys them.

Stories are an art form, like painting and music.
>we know there’s no way to say all a song may mean to us, because the meaning is not so much rational as deeply felt, felt by our emotions and our whole body, and the language of the intellect can’t fully express those understandings.
>
>In fact, art itself is our language for expressing the understandings of the heart, the body, and the spirit.
>
>Any reduction of that language into intellectual messages is radically, destructively incomplete.
>
>This is as true of literature as it is of dance or music or painting. __But because fiction is an art made of words, we tend to think it can be translated into other words without losing anything. So people think a story is just a way of delivering a message.__ (my emphasis.)

And she concludes:
>Art frees us; and the art of words can take us beyond anything we can say in words.
>
>[...] I wish, instead of looking for a message when we read a story, we could think, “Here’s a door opening on a new world: what will I find there?”



----
^^1^^ see also [[Ursula K. Le Guin on Science Fiction, Writing, and the Truth]]
In the [[introduction to her book|http://theliterarylink.com/leguinintro.html]] //The Left Hand of Darkness//^^1^^, Ursula K. Le Guin writes a very insightful and personal description of what writing science fiction means to her ^^2^^.

Here are a few quotes from the introduction:
* She claims that ~Sci-Fi is not predictive but rather descriptive:
> - ''Science fiction'' is often described, and even defined, as extrapolative. The science fiction writer is supposed to take a trend or phenomenon of the here-and-now, purify and intensify it for dramatic effect, and extend it into the future. "If this goes on, this is what will happen." A prediction is made.
> - ''Predictions'' are uttered by prophets (free of charge); by clairvoyants (who usually charge a fee, and are therefore more honored in their day than prophets); and by futurologists (salaried). Prediction is the business of prophets, clairvoyants, and futurologists. It is not the business of novelists. A novelist's business is lying.
* About fiction writing and Truth:
> - ''Fiction writers'', at least in their braver moments, do desire the truth: to know it, speak it, serve it. But they go about it in a peculiar and devious way, which consists in inventing persons, places, and events which never did and never will exist or occur, and telling about these fictions in detail and at length and with a great deal of emotion, and then when they are done writing down this pack of lies, they say, There! That's the truth!
> - ''I do not say'' that artists cannot be seers, inspired: that the awen cannot come upon them, and the god speak through them. Who would be an artist if they did not believe that that happens? if they did not know it happens, because they have felt the god within them use their tongue, their hands? Maybe only once, once in their lives. But once is enough.
> - ''I talk about the gods'', I am an atheist. But I am an artist too, and therefore a liar. Distrust everything I say. I am telling the truth. The only truth I can understand or express is, logically defined, a lie. Psychologically defined, a symbol. Aesthetically defined, a metaphor.
> - ''I'm merely observing'', in the peculiar, devious, and thought-experimental manner proper to science fiction, that if you look at us at certain odd times of day in certain weathers, we already are [behaving in a certain way]. I am not predicting, or prescribing. I am describing. I am describing certain aspects of psychological reality in the novelist's way, which is by inventing elaborately circumstantial lies.
* and about readers of fiction:
> - ''They [fiction writers]'' may use all kinds of facts to support their tissue of lies. They may describe the Marshalsea Prison^^3^^, which was a real place, or the battle of Borodino^^4^^, which really was fought, or the process of cloning, which really takes place in laboratories, or the deterioration of a personality, which is described in real textbooks of psychology; and so on. This weight of verifiable place-event-phenomenon-behavior makes the reader forget that he is reading a pure invention, a history that never took place anywhere but in that unlocalisable region, the author's mind. In fact, while we read a novel, we are insane -- bonkers. We believe in the existence of people who aren't there, we hear their voices, we watch the battle of Borodino with them, we may even become Napoleon. Sanity returns (in most cases) when the book is closed.
> - ''In reading a novel'', any novel, we have to know perfectly well that the whole thing is nonsense, and then, while reading, believe every word of it. Finally, when we're done with it, we may find - if it's a good novel - that we're a bit different from what we were before we read it, that we have been changed a little, as if by having met a new face, crossed a street we never crossed before. But it's very hard to say just what we learned, how we were changed.
* And about (Science)-Fiction:
> - ''All fiction is metaphor''. Science fiction is metaphor. What sets it apart from older forms of fiction seems to be its use of new metaphors, drawn from certain great dominants of our contemporary life - science, all the sciences, and technology, and the relativistic and the historical outlook, among them. Space travel is one of these metaphors; so is an alternative society, an alternative biology; the future is another. The future, in fiction, is a metaphor.
>A metaphor for what?
>If I could have said it non-metaphorically, I would not have written all these words, this novel (//The Left Hand of Darkness//^^1^^).



* See what Le Guin has to say about [[Patriotism|On Patriotism - Ursula K. Le Guin]]

----
^^1^^ - This ominously-sounding title of the book turns out to be something (surprisingly?) Tao-inspired! Three quarters into the book one of the main characters on the Alien Planet named Winter (an apt name for that "heavenly body" :) quotes a poem ("Tormer's Lay") from a "local" (alien) poet:
> //Light is the left hand of darkness//
> //and darkness the right hand of light.//
> //Two are one, life and death, lying//
> //together like lovers in kemmer//^^5^^,
> //like hands joined together,//
> //like the end and the way.//

^^2^^ - see also [[Ursula K. Le Guin on Literary Messages]]
^^3^^ - a notorious prison in London (closed in 1842); written about by Charles Dickens.
^^4^^ - a major engagement (fought in 1812) in the Napoleonic Wars during the French invasion of Russia.
^^5^^ - the alien word for sexual heat.
Island where all becomes clear.

Solid ground beneath your feet.

The only roads are those that offer access.

Bushes bend beneath the weight of proofs.

The Tree of Valid Supposition grows here
with branches disentangled since time immemorial.

The Tree of Understanding, dazzlingly straight and simple,
sprouts by the spring called Now I Get It.

The thicker the woods, the vaster the vista:
the Valley of Obviously.

If any doubts arise, the wind dispels them instantly.

Echoes stir unsummoned
and eagerly explain all the secrets of the worlds.

On the right a cave where Meaning lies.

On the left the Lake of Deep Conviction.
Truth breaks from the bottom and bobs to the surface.

Unshakable Confidence towers over the valley.
Its peak offers an excellent view of the Essence of Things.

For all its charms, the island is uninhabited,
and the faint footprints scattered on its beaches
turn without exception to the sea.

As if all you can do here is leave
and plunge, never to return, into the depths.

Into unfathomable life.
In 1982, Mitchell Feigenbaum conducted a charming interview with two Math Greats, Mark Kac and Stan Ulam, where he asked them (among many interesting questions) how they "perceive" physics terms and concepts when doing the related math (since both of them practiced their belief that there should not be a sharp distinction/separation between physics and math).

Kac shared his insight into how math is usually done:
>I think there are two acts in mathematics. There is the ability to prove and the ability to understand. Now the actions of understanding and of proving are not identical. In fact, it is quite often that you understand something without being able to prove it.
>Now, of course, the height of happiness is that you understand it and you can prove it. The next stage is that you don't understand it, but you can prove it. That happens over and over again, and mathematics journals are full of such stuff. Then there is the opposite, that is, where you understand it, but you can't prove it. Fortunately, it then may get into a physics journal.
>Finally comes the ultimate of dismalness, which is in fact the usual situation, when you neither understand it nor can you prove it. The way mathematics is taught now and the way it is practiced emphasize the logical and the formal rather than the intuitive, which goes with understanding.
>Now I think you would agree with me because, especially with things like geometry, of which Stan's a past master, seeing things -- not always leading neatly to a proof, but certainly leading to the understanding -- ultimately results in the correct conjecture. And then, of course, the ultimate has to be done also -- because of union regulations, you also have to prove it.
And Ulam added:
>Let me tell you something. It so happens that I have written an article for a jubilee volume in honor of this gentleman here, Mark Kac, on his whatever anniversary, a volume which has not yet appeared. But the article is about analogy and the ways of thinking and reasoning in mathematics and in some other sciences. 
>So it is sort of an attempt to throw a little light on what he was just talking about. These things are intertwined in a mysterious way, and one of the great hopes, to my mind, of progress, even in mathematics itself, will be more formalizing or at least understanding of the processes that lead both to intuition and to then working out not only the details but also the correct formulations of things. 
>So there is a very, very deep problem and not enough thought has been really given to it, just cursory remarks made. 
Feigenbaum asked the Two Sages what their thoughts were on using computers for doing math work, and Kac responded:
>Well, actually, computers are a marvelous tool, and there is no reason to fear them. You might say that initially a mathematician should be afraid of pencil and paper because it is sort of a vulgar tool compared with pure thought. Indeed, say thirty years ago, professional mathematicians were a bit scared, as it were, of computers, but it seems to me that for experimentation and heuristic indications or suggestions, it is a marvelous tool. In fact, the meeting* that is going on right now, to a large extent, is possible because so much has been discovered experimentally. 
And Ulam (I'm happy to say (as a computer scientist :)) concurs:
>So in physics, experiments lead finally to problems and to theories. Experimentation in mathematics could be purely mental, of course, and it was largely so over the centuries, but now there is an additional wonderful tool. 
To which Feigenbaum, who is into computers too, adds:
>Certainly one has learned now, or is at the first stage of really learning, how to do experiments on computers that can begin to furnish intuition for problems that otherwise were impenetrable. The new intuition then enables you to write a more analytical theory. Do you think there are problems that are so complex that you won't be able to get that kind of a handle on them? For example, maybe memory in a brain has no global structure, but rather entails nothing more than a million different distinctly stored things, and then you wouldn't write any theory for it but rather only simulate such a system on a computer. Do you think there may be some limitation to what kinds of things you can analyze? 
And Ulam responds: 
>It depends on what you call theory. I noticed you said the analytical method; it means that by habit and tradition you think that is the only way to make progress in pure mathematics. Well it isn't. There may be some eventual super effect from the use of computers. I was involved from the beginning in computers and in the first experiments done in Los Alamos. Even in pure number theory there were already tiny little amusements from the first. A time may come, especially because the overspecialization of mathematics is increasing so much that it is impossible now to know more than a small part of it, that there will be a different format of mathematical thinking in addition to the existing one and a different way of thinking about publications. Maybe instead of publishing theorems and listing them there will be a sort of larger outline of whole theories, and individual theorems will be left to computers or to students to work out. It is conceivable. 

And here Kac brings up the advantage of computers for simulating new things:
>Well, computers play a multiple role: they are superb as tools, but they also offer a field for a new kind of experimentation. Mitchell should know. There are certain experiments you cannot perform in your mind. It is impossible. There are experiments that you can do in your mind, and there are others you simply can't, and then there is a third kind of experiment where you create your own reality. Let me give you a problem of simple physics: a gas of hard spheres. Now nature did not provide a gas of hard spheres. Argon comes close, but you can always argue that maybe, because of slight attractive tails, something is going to happen. There is no substance - nature was so mean to us that there is no gas of hard spheres. And it poses very many interesting problems. It is child's play on the computer to create a gas of hard spheres. True, the memories are limited, so that, as a result, we can't have [an Avogadro's number of] hard spheres, but we can have thousands of them, and actually the sensitivity to Avogadro's number is not all that great. We can really learn something about reality by creating an imitation of reality, which only the computer can do. That is a completely new dimension in experimentation.  
At this point, it seems Kac is talking about [[Domain Specific Languages (DSLs)|https://en.wikipedia.org/wiki/Domain-specific_language]] (decades before [[Martin Fowler|http://martinfowler.com/books/dsl.html]] and others)  (!):
>Sidney Brenner said that perhaps theory in biology will not be like that of physics. Rather than being a straight deductive, purely mathematical analytical theory, it may be more like answering the following question. You have a computer, and you don't know the wiring diagram, but you are allowed to ask it all sorts of questions. Then you ask the questions, and the computer gives you answers. From this dialogue you are to discover its wiring diagram. In a certain sense, he felt that the area of computer science - languages, theories of programming, what have you - may be more of a model for theorizing in biology than writing down analytic equations and solving them. (A more synthetic notion.)
And evolutionary programming (or genetic algorithms) (!):
>In fact, I think we will go even farther in this direction if we introduce, somehow, the possibility of evolution in machines, because you cannot understand biology without evolution. In fact, my colleague Gerry Edelman, whom you know very well and who is a Nobel laureate in biochemistry, is now "into the brain" and is trying to build a computer that has the process of evolution built into it so that you evolve programs: you start with one program that evolves into another, etc. It is an attempt to get away from the static, all-purpose Cray, or whatever it is, and to endow the computer with that one extraordinary, important element of life, namely evolution. 

And here, Ulam observes about the difference between mathematicians and physicists:
>''Mathematicians start with axioms and draw consequences, theorems. Physicists have theorems or facts, observed by experiment, and they are looking for axioms, that is to say, laws of physics, backwards. So in physics the idea is to deduce this system of laws or axioms from which the observed things would follow.''
We are masters of the unsaid words, but slaves of those we let slip out.
In his talk [[Sense-making and learning in the new 21st century environment]] ([[slides|http://johnseelybrown.com/sensemaking.pdf]]), John Seely Brown (former Chief Scientist of Xerox Corporation and former director of the Xerox Palo Alto Research Center (PARC)) talked about what it means (and how that meaning has changed) to make sense and to learn in the new environment we are in.
!On lifelong, spiraling discovery and learning


@@font-size:12pt;It’s more like a corkscrew than a path!@@
::       — Alice (Lewis Carroll in //Through the Looking Glass//, [[chapter 2|https://ebooks.adelaide.edu.au/c/carroll/lewis/looking/chapter2.html]])


{{{
We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time.*
}}}
::       — T.S. Eliot, The Four Quartets, [[Little Gidding|http://www.davidgorman.com/4Quartets/4-gidding.htm]].
::: * but as [[Terry Pratchett|https://www.terrypratchettbooks.com/sir-terry/]] had written (in [[A Hat Full of Sky|http://discworld.wikia.com/wiki/A_Hat_Full_of_Sky]]): ''Coming back to where you started is not the same as never leaving.''


@@font-size:12pt;The most exciting phrase to hear in education, the one that heralds new discoveries, is not "Eureka!", but "That's funny...".@@
::       — [[Isaac Asimov|http://www.asimovonline.com/asimov_home_page.html]] (paraphrased)



[<img[click on spiral|./resources/spiral1 1.png][./resources/spiral1.jpg]]  I imagine learning as a //spiral// of discovery^^1^^ (and [[re-discovery|Rediscovery]]): you may visit domains and topics again (and again), but you do it at a deeper (or higher, depending on your point of view^^2^^ :) level in every iteration.

To paraphrase [[Douglas Hofstadter]] (from his wonderful book [[Gödel, Escher, Bach|https://en.wikipedia.org/wiki/G%C3%B6del,_Escher,_Bach]]) [[describing|Stepping Out]] the widening of the spiral of discovery:

A learner is always trying to understand more deeply what they are, and the environment they live in, by stepping more and more out of what they see themselves to be and the context they believe themselves to be in, by breaking every rule and convention which they perceive themselves to be chained by. 
Somewhere along this elusive path may come enlightenment. In any case (as I see it), the hope is that by gradually deepening one's awareness and knowledge, by gradually widening the scope of the "known domain", one will at the end^^3^^ come to a feeling of being at one with the entire universe.


([["Spiral"|https://www.contextfreeart.org/gallery/search.php?t=tags&tag=spiral]])


I think that [[M.C. Escher|http://www.mcescher.com/]], in his [["Drawing Hands"|./resources/escher_hands.jpg]], illustrates the nature of spiraling improvement/learning well: we initiate, explore, learn from the environment as well as from ourselves, influence the environment as well as get influenced by it, in successive cycles of refinement, meanings, and abilities.


 Or, as the [[poet|About me]] said :)
[>img[Escher's Drawing Hands|./resources/escher_hands_1.jpg][./resources/escher_hands.jpg]]
{{{
We are
Alive ... always learning
about the world around us
and about ourselves.

    Actually, ...
    learning about how it feels
    and what it means
    to be
        Alive ... always learning
        about the world around us
        and about ourselves.

            Actually, ...
            learning about how it feels
            and what it means
            to be
                Alive ...
}}}



Click the image below to see Spiraling Discovery in action (10MB movie)

[img[Click to see Spiraling Discovery in action|resources/escher_gallery_1.jpg][resources/escher_print_gallery_loop_1.mpg]]

[[Escher's Print Gallery]] (from [[Escher and the Droste effect|http://escherdroste.math.leidenuniv.nl/index.php?menu=animation]] at the Leiden University in the Netherlands)



So, here's to a journey of discovery and learning. It should be [[exciting and fun|https://www.goodreads.com/quotes/4061-the-most-exciting-phrase-to-hear-in-science-the-one]] (as [[Isaac Asimov|http://www.asimovonline.com/asimov_home_page.html]] put it, or at least [[the Unix fortune database|https://github.com/bmc/fortunes/blob/master/fortunes]] [[attributes|https://quoteinvestigator.com/2015/03/02/eureka-funny/]] it to him :)





!!!On Full-Mindedness and Mind-Fullness 
I believe that one can be both [[Full-Minded|Full-Mindedness]] (i.e., full of knowledge about many things in the world) //and// [[Mind-Full|Mind-Fullness]] (i.e., attentive to the relationships and small details in our world).

It does not have to be ''either-this-or-that'' (i.e., either "Big Picture" or "minute details"), but can actually be ''both-this-and-that'' [[(if I say so myself... :)|Given a glass with water up to the mid level, people from The West will either say it's half full or they'll say it's half empty. People from The East will say it's both half full and half empty. They are all right.]], and this is the beauty of the human experience (AKA, life)!





----

^^1^^ //Who// is discovering and //what// is discovered are actually //not// obvious questions.^^4^^
^^2^^ and as the "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]" [[Alan Kay|https://en.wikipedia.org/wiki/Alan_Kay]] quipped: A good point of view is worth many IQ points.
^^3^^ I'm not so sure about the end-state vision. Maybe we need to settle on the [[not-knowing state|I can live with doubt and uncertainty and not knowing. I think it's much more interesting to live not knowing than to have answers which might be wrong.]] Richard Feynman referred to, which may be "naturally perfect".
^^4^^ As [[Niels Bohr|https://en.wikipedia.org/wiki/Niels_Bohr]] quipped: A physicist is just an atom's way of looking at itself.^^5^^
^^5^^ But what (really) is an atom? Quantum Mechanics is a bit "[[fuzzy|http://cds.cern.ch/record/518511/files/0107054.pdf]]" about it, and even [[Richard Feynman|https://en.wikipedia.org/wiki/Richard_Feynman]] (no less) commented on it: If you think you understand quantum mechanics, you don't understand quantum mechanics.



[[TiddlyWiki Classic|https://classic.tiddlywiki.com/]] v.<<version>>

<html>
<a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/"><img alt="Creative Commons License" style="border-width:0" src="http://i.creativecommons.org/l/by-nc-sa/3.0/us/88x31.png" /></a><br />To the extent possible and under my control, this work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-nc-sa/3.0/us/">Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License</a>.
</html>
!!!On lifelong, spiraling discovery
I once held the view that life's journey of discovery 'should be' from Full-Mindedness (a lot of knowledge and insights, if viewed positively, or a lot of clutter and tangles, if viewed negatively) to Mind-Fullness (simplicity due to abstraction, wisdom due to depth), and that the [['traveler'|Traveler-Pilgrim]] will be somewhere along the path, ''either'' close to one state ''or'' the other.

I now find myself looking at it more "inclusively": it's not ''either-this-or-that'', but rather ''both-this-and-that'' [[(if I say so myself...)|Given a glass with water up to the mid level, people from the West will either say it's half full or they'll say it's half empty. People from the East will say it's both half full and half empty. They are all right.]]. One has the ability to be ''both'' full-minded ''and'' mindful. And this is the beauty of the human experience (AKA, life)!

As [[Douglas Hofstadter|Douglas Hofstadter]] said about [[widening the spiral of discovery|Stepping Out]]: the hope is that by gradually deepening one's self awareness, by gradually widening the scope of "the system", one will at the end come to a feeling of [[being at one with the entire universe|All inclusive]].
And, of course, the flip side view is that it is an endless journey (not a pilgrimage), where, as [[Richard Feynman said|I can live with doubt and uncertainty and not knowing. I think it's much more interesting to live not knowing than to have answers which might be wrong.]], we can and should be fine living with doubt, uncertainty, and not knowing.

To complement (not contradict, mind you) this view, [[Halford John Mackinder|Halford John Mackinder]] said: Knowledge is one. Its division into subjects is a concession to human weakness.^^1^^ (or see John Steinbeck's more poetic, and more [[uplifting take on one-ness|The Log from the Sea of Cortez - one-ness]]).

[>img[Escher Hands|./resources/escher_hands_1.jpg][./resources/escher_hands.jpg]]

An illustration of [[spiraling discovery|The Paradox of Return]]^^2^^ and improvement is [[M.C. Escher's etching of one hand drawing another|resources/escher_hands.jpg]], which in turn draws and refines the first one. In his book [["Authentic Happiness"|Authentic Happiness - by Martin Seligman]] Martin Seligman quotes [[a poem by Marvin Levine|00 - Preface to Authentic Happiness]] capturing it nicely.

It's interesting that the Greeks depicted change and evolution in a more "linear metaphor", where you learn (and get wet) even if it feels like you are "[[immersed in the ''same'' river|No man ever steps in the same river twice, for it's not the same river and he's not the same man.]]".

On the other hand, the Romans (or at least [[Pliny the Elder|https://en.wikipedia.org/wiki/Pliny_the_Elder]]) thought in terms of "circles":
Pliny essentially invented the genre of the encyclopedia, though he did not use the term; [[Sir Thomas Browne|https://en.wikipedia.org/wiki/Thomas_Browne]] did. It comes from a misreading of the Greek phrase enkyklios paideia - literally, "circular education." The circle in question is not that of circular reasoning but, rather, the kind we have in mind when we talk about a "well-rounded education."

It's also interesting, and [[Joy Williams very nicely captured it|#83 - by Joy Williams]], that (at least some) North American Indian tribes thought of time as cyclical and overlapping (not sure if spiraling though).

This idea is also echoed in [[On Yin-Yang Polarity - by Alan Watts]].

So, here's to a journey of discovery [[(re-discovery?)|Rediscovery]] and learning. It should be exciting and fun, or as [[Isaac Asimov|http://www.asimovonline.com/asimov_home_page.html]] put it nicely: The most exciting phrase to hear in science, the one that heralds new discoveries, is not "Eureka!", but "That's funny...".

----
^^1^^ Bertrand Russell expressed a similar idea: //Matter is less material and mind is less spiritual than is generally supposed. The habitual separation of physics and psychology, mind and matter is metaphysically indefensible.//
^^2^^ It is an important question of //who// is discovering and //what// is discovered.^^3^^
^^3^^ Or as Niels Bohr quipped: A physicist is just an atom's way of looking at itself.^^4^^
^^4^^ But what (really) is an atom? The same Bohr also said: Anyone who is not shocked by quantum theory has not understood it.^^5^^
^^5^^ on which Richard Feynman (no less!) commented: If you think you understand quantum mechanics, you don't understand quantum mechanics.






Wernher Magnus Maximilian Freiherr von Braun (March 23, 1912 - June 16, 1977) was a German rocket scientist, engineer, space architect, and one of the leading figures in the development of rocket technology in Nazi Germany and the United States during and after World War II.
In an [[Op-Ed column in the New York Times|http://www.nytimes.com/2013/02/19/opinion/brooks-what-data-cant-do.html]], columnist David Brooks brought up a few good points about why (and when) to be cautious about pure data analysis (or Big Data "number crunching") results/conclusions.

* __Data struggles with the social.__ Computers are excellent at number crunching but, unlike brains, very poor at social cognition.
>People are really good at mirroring each other’s emotional states, at detecting uncooperative behavior and at assigning value to things through emotion. Computer-driven data analysis, on the other hand, excels at measuring the quantity of social interactions but not the quality.
* __Data struggles with context.__ Life does not consist of a series of disconnected events. Life is a mesh of events and situations, embedded in sequences and contexts. The human brain has evolved to account for this reality.
>[Machines are pretty bad] at telling stories that weave together multiple causes and multiple contexts.
* __Data creates bigger haystacks.__ This does not necessarily produce more needles (a toy simulation after this list makes the point concrete).
>As we acquire more data, we have the ability to find many, many more statistically significant correlations. Most of these correlations are spurious and deceive us when we’re trying to understand a situation. Falsity grows exponentially the more data we collect.
* __Big data has trouble with big problems.__ Brooks is fuzzy on this point, arguing only that Big Problems usually cannot be solved by running experiments with control groups and alternative scenarios. True, but this doesn't necessarily preclude using Big Data approaches as part of a cautious path to a solution. This point could support the earlier one about machines having trouble with context, since big problems are usually big because they are multi-faceted, involving multiple contexts, situations, and disciplines, which computation/data cannot handle well (yet?).

* __Data favors memes over masterpieces.__ Using data analysis (too early in the process?) may give erroneous impressions and force wrong conclusions, since in some situations (in the cases of masterpieces), crowds (and therefore, crowdsourcing, and big data analysis) may not like/appreciate/accept/embrace something that //later// may turn out to be of high value/quality. 
** Also (and this is my addition/supplementation), this may be similar to how quarterly financial/business results tend to drive businesses today, favoring the short term over the long term, and possibly not serving well customers, employees, shareholders, and humanity.

* __Data obscures values.__
>data is never raw; it’s always structured according to somebody’s predispositions and values. The end result looks disinterested, but, in reality, there are value choices all the way through, from construction to interpretation.
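
To make the "bigger haystacks" point concrete, here is a toy simulation (my own sketch, not from Brooks' column): generate a bunch of completely independent random variables, and count how many pairs end up looking "strongly correlated" by chance alone:
{{{
import itertools, random, statistics

def pearson_r(xs, ys):
    # Pearson correlation coefficient of two equal-length sequences.
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)           # for reproducibility
n_samples = 20           # a short "study": 20 observations per variable
for n_vars in (5, 20, 80):
    # Every variable is pure, independent noise -- no real relationships exist.
    data = [[random.gauss(0, 1) for _ in range(n_samples)]
            for _ in range(n_vars)]
    strong = sum(1 for a, b in itertools.combinations(data, 2)
                 if abs(pearson_r(a, b)) > 0.5)
    print(n_vars, "variables:", strong, '"strong" (|r| > 0.5) pairs -- all spurious')
}}}
With 5 variables you will typically find none; with 80 variables (3,160 pairs) dozens of spurious "strong" correlations show up, even though, by construction, nothing is related to anything.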


__And Brooks' bottom line__ is not surprising: you always want to use the right tool for the right job to get the best results :)
>This is not to argue that big data isn’t a great tool. It’s just that, like any tool, it’s good at some things and not at others.
WHAT HAVE YOU CHANGED YOUR MIND ABOUT? Copyright 2009 by [[Edge Foundation, Inc.|http://edge.org/]]
Today's leading minds rethink everything.
Edited by John Brockman

<<forEachTiddler 
where 
'tiddler.tags.contains("book-chapter") && tiddler.tags.contains("What Have You Changed Your Mind About?")'
sortBy 
'tiddler.title'>>

[[Also by John Brockman and the Edge "cohort"|Is the Internet Changing the Way You Think?]]
by Kevin Kelly, Viking press, 2010
<<forEachTiddler 
where 
'tiddler.tags.contains("book-chapter") && tiddler.tags.contains("What Technology Wants")'
sortBy 
'tiddler.title'>>
[[Robert Logan|http://www.physics.utoronto.ca/people/homepages/logan/]] in his book //What is Information?// tackles the questions about the nature of information.
(his book [[The Sixth Language: Learning a Living in the Internet Age|New languages]] is also applicable to information, knowledge, learning, and technology)

In the [[second chapter of the book|resources/logan_information_ch2.pdf]] Logan starts by giving 5 different definitions of the term information:
> We have represented a discrete information source as a Markoff process. Can we define a quantity, which will measure, in some sense, how much information is produced by such a process, or better, at what rate information is produced? -- Shannon (1948)
> To live effectively is to live with adequate information -- Wiener (1950)
> Information is a distinction that makes a difference -- MacKay (1969)
> Information is a difference that makes a difference -- Bateson (1973)
> Information arises as natural selection assembling the very constraints on the release of energy that then constitutes work and the propagation of organization -- Kauffman, Logan, Este, Goebel, Hobill & Shmulevich (2007)

I am a big admirer of Shannon, and I like Logan's elaboration on (or alternative definition of) information:
>the beginning of the modern theoretical study of information is attributed to Claude Shannon (1948), who is recognized as the father of information theory. He defined information as a message sent by a sender to a receiver. Shannon worked at Bell Labs and wanted to solve the problem of how to best encode information that a sender wished to transmit to a receiver. Shannon gave information a numerical or mathematical value based on probability, defined in terms of the concept of information entropy, more commonly known as Shannon entropy. Information is defined as the measure of the decrease of uncertainty for a receiver. The amount of Shannon information is inversely proportional to the probability of the occurrence of that information, where the information is coded in some symbolic form as a string of 0s and 1s or in terms of some alphanumeric code.
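
As a minimal numeric illustration of that definition (my own sketch, not from the book): the Shannon information content (the "surprisal") of a message with probability p is -log~~2~~(p) bits, so the less probable the message, the more information its arrival carries:
{{{
import math

def surprisal_bits(p):
    # Shannon information content of a message with probability p, in bits.
    return -math.log2(p)

print(surprisal_bits(0.5))    # a fair coin flip: exactly 1 bit
print(surprisal_bits(1/26))   # one of 26 equally likely letters: ~4.7 bits
print(surprisal_bits(0.999))  # a nearly certain message: ~0.0014 bits (hardly "news")
}}}
Shannon entropy is then just the expected surprisal over all possible messages, which is how uncertainty (and its decrease for the receiver) gets a number.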

And to the question of the difference between data, information, knowledge, and wisdom:
>  __Data__ are the pure and simple facts without any particular structure or organization, the basic atoms of information,
>  __Information__ is structured data, which adds meaning to the data and gives it context and significance,
>  __Knowledge__ is the ability to use information strategically to achieve one's objectives, and 
>  __Wisdom__ is the capacity to choose objectives consistent with one's values and within a larger social context (Logan and Stokes 2004, pp. 38-39).
Regarding __data__ vs. __information__ as described on Wikipedia (9/12/2007):
>Even though information and data are often used interchangeably, they are actually very different. Data is a set of unrelated information, and as such is of no use until it is properly evaluated. Upon evaluation, once there is some significant relation between data, and they show some relevance, then they are converted into information. Now this same data can be used for different purposes. Thus, till the data convey some information, they are not useful.

Logan makes a very interesting observation regarding "making information out of data", which aligns with MacKay's and Bateson's definitions above, comparing it to Shannon's ("stripped, bare-boned") definition. He indicates that the necessity of context __makes information an emergent phenomenon!__
>The contextualization of data so that it has meaning and significance and hence operates as information is an emergent phenomenon. The communication of information cannot be explained solely in terms of the components of the Shannon system consisting of the sender, the receiver and the signal or message. It is a much more complex process than the simplified system that Shannon considered for the purposes of mathematizing and engineering the transmission of signals. First of all it entails the knowledge of the sender and the receiver, the intentions or objectives of the sender and the receiver in participating in the process and finally the effects of the channel of communication itself independent of its content as in McLuhan's observation that 'the medium is the message'. The knowledge and intention of the sender and the receiver as well as the effects of the channel all affect the meaning of the message that is transmitted by the signal in addition to its content.

This is a strong statement about the __non-materiality of information__, which Logan covers towards the end of chapter 2 (in a section titled "The Materiality of Information in Biotic Systems"):
>Information is information, not matter or energy. No materialism which does not admit this can survive at the present day. -- Norbert Wiener (1948)
>Shannon's theory defines information as a probability function with no dimension, no materiality, and no necessary connection with meaning. It is a pattern not a presence. -- Hayles (1999a, p. 18)
When fishing for happiness, catch and release.

-- from Shimon Edelman's book "The Happiness Pursuit" (he also wrote a good book on human neuroscience and perception called //Computing the Mind: How the Mind Really Works//)
When ideas fail, words come in very handy.
When I design a new module or lesson or demo in my computer science curriculum, I often run into the considerations and balancing between correctness, clarity, elegance, and performance (resource utilization).

These are important to consider in both the educational context/sphere, and in industry/production, since they are "big truths" or at least "important principles" of [[The Art|Computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty. A programmer who subconsciously views himself as an artist will enjoy what he does and will do it better.]]!

Some people see these as an exercise in compromising and trade-offs, but I think that this is an unfortunate misunderstanding, as it has the potential to sacrifice important aspects of an artifact (program, system, project, etc.) for other, equally important, aspects. 
I see it as another case of "[[it's not either or, but both this and that|Given a glass with water up to the mid level, people from The West will either say it's half full or they'll say it's half empty. People from The East will say it's both half full and half empty. They are all right.]]". Ideally, it's a process/journey: you start by making sure that what you are designing and coding is correct, without ignoring elegance or performance. Once "it works", you focus more on "beauty" (which you did not neglect (!) during the previous phase), while still not ignoring performance, and still making sure it is correct. And then you move on to making it fast (while not "uglifying" it, and keeping it working, of course :)
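
As a toy sketch of that journey (my own illustrative example, not from any of the sources quoted here), consider counting the primes below n:
{{{
# Phase 1 -- make it work: a direct, obviously-correct translation of the idea.
def count_primes_v1(n):
    count = 0
    for candidate in range(2, n):
        is_prime = True
        for divisor in range(2, candidate):
            if candidate % divisor == 0:
                is_prime = False
        if is_prime:
            count += 1
    return count

# Phase 2 -- make it beautiful: the same logic, expressed more clearly
# (and already less wasteful: we only try divisors up to the square root).
def count_primes_v2(n):
    def is_prime(k):
        return k >= 2 and all(k % d for d in range(2, int(k ** 0.5) + 1))
    return sum(1 for k in range(2, n) if is_prime(k))

# Phase 3 -- make it fast (without uglifying): the Sieve of Eratosthenes.
def count_primes_v3(n):
    sieve = [True] * n
    sieve[:2] = [False, False]      # 0 and 1 are not primes
    for k in range(2, int(n ** 0.5) + 1):
        if sieve[k]:
            sieve[k * k::k] = [False] * len(sieve[k * k::k])
    return sum(sieve)

# Every phase stays correct:
assert count_primes_v1(100) == count_primes_v2(100) == count_primes_v3(100) == 25
}}}
The specific algorithm doesn't matter; the point is that each phase keeps the previous phases' virtues: v2 stays correct while becoming clearer, and v3 becomes fast while staying both correct and (if I may say so :) beautiful.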

This debate often turns into "religious wars" between professionals and experts, and what I (often) find is that their disagreements come from not understanding each other's context or point of view (and as the "[[CS Sage|https://en.wikipedia.org/wiki/List_of_computer_scientists]]" Alan Kay quipped: [[A good point of view is worth many IQ points]].).

As is typical in these kinds of (heated) arguments (of rational people :), each side brings viewpoints and quotes from various "[[CS Sages|https://en.wikipedia.org/wiki/List_of_computer_scientists]]", one of which is Donald Knuth, whose canonical work is "[[The Art of Computer Programming|https://en.wikipedia.org/wiki/The_Art_of_Computer_Programming]]".

[[Knuth is very big on both elegance/style AND performance/resource-utilization|On computer program elegance]], but as is often true, people on each side of the argument pick and choose more of one or the other (again, compromising/trading-off).

For example, Knuth is often quoted as saying:
>We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
What many of those using this quote (innocently or on purpose) leave out is the next sentence, as well as the full context ([[Structured Programming With Go To Statements|http://wiki.c2.com/?StructuredProgrammingWithGoToStatements]] of all things :) in which it had been said:
>Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. __Yet we should not pass up our opportunities in that critical 3%.__

Moreover, Knuth, in the same article, makes it very clear how he looks at performance optimization, and he's definitely not giving it the evil eye :)
>The improvement in speed from Example 2 to Example 2a is only about 12%, and many people would pronounce that insignificant. The conventional wisdom shared by many of today's software engineers calls for ignoring efficiency in the small; but I believe this is simply an overreaction to the abuses they see being practiced by penny-wise-and-pound-foolish programmers, who can't debug or maintain their "optimized" programs. In established engineering disciplines a 12% improvement, easily obtained, is never considered marginal; and I believe the same viewpoint should prevail in software engineering. Of course I wouldn't bother making such optimizations on a one-shot job, but when it's a question of preparing quality programs, I don't want to restrict myself to tools that deny me such efficiencies.
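
In that spirit, here is a tiny, hypothetical illustration of "measure first, then decide whether the optimization is worth it", using Python's standard timeit module (my sketch, not Knuth's):
{{{
import timeit

# Two equivalent ways to build a list of squares; measure before "optimizing".
loop_version = """
result = []
for i in range(1000):
    result.append(i * i)
"""
comp_version = "result = [i * i for i in range(1000)]"

print("loop:         ", timeit.timeit(loop_version, number=10000))
print("comprehension:", timeit.timeit(comp_version, number=10000))
}}}
If the measured difference sits in a critical 3% of your program, Knuth would say take it; if it's in a noncritical part, readability should win.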

But, what Knuth does say is that [[elegance, clarity, style are also very important|On computer program elegance]] (they are actually at the core of what he strongly claims is [[an Art|Computer programming is an art, because it applies accumulated knowledge to the world, because it requires skill and ingenuity, and especially because it produces objects of beauty. A programmer who subconsciously views himself as an artist will enjoy what he does and will do it better.]]).

And [[The Zen of Python|https://www.python.org/dev/peps/pep-0020/]] (another example) is also clear on needing to tend to ''all'' aspects:
* Beautiful is better than ugly. Readability counts. ("beauty", "elegance")
* Simple is better than complex. Complex is better than complicated. Flat is better than nested. Sparse is better than dense. ("performance" and "beauty")
* Special cases aren't special enough to break the rules. Although practicality beats purity. Errors should never pass silently. Unless explicitly silenced. In the face of ambiguity, refuse the temptation to guess. (mainly "correctness", but also "performance")
You get the point.

It seems obvious that in an educational context, correctness ("make it work") is the most important aspect of a program/project/demo/lesson, __tightly coupled__ with clarity/readability/elegance ("make it beautiful"). It is vital to make things as simple ([[as possible, but not simpler|http://quoteinvestigator.com/2011/05/13/einstein-simple/]]), streamlined, clear, and understandable as possible, or else the whole point may be missed by the learner (and correctness, at that point, is a moot point, to a large extent).

At the risk of sounding "terribly uneducational", I would say that in a learning context, I feel that it's OK to be "not entirely correct" (or "over simplify", at least at the beginning) in order to achieve clarity, rather than be "perfectly correct" but not very clear/clean/readable/understandable.
I think that it's also clear that (again, at least in an educational context) one should (at least initially) sacrifice performance/elegance for better clarity.

But the teacher (and craftsman) should keep in mind that this is a journey, and an important point throughout the learning experience should be the //exposure// of learners to all these aspects/phases, not treating any of them as "extra" or "afterthought", because this is "robbing" learners of beauty/elegance/understanding/performance/joy, and causing harm in a production/industry context.
In an article titled [["Writing is a Technology that Restructures Thought"|http://worrydream.com/refs/Ong%20-%20Writing%20is%20a%20Technology%20that%20Restructures%20Thought.pdf]], Walter J. Ong writes:
>Writing, Plato has Socrates say in the Phaedrus, is inhuman, pretending to establish outside the mind what in reality can only be in the mind. Writing is simply a thing, something to be manipulated, something inhuman, artificial, a manufactured product. We recognize here the same complaint that is made against computers: they are artificial contrivances, foreign to human life. 
>
>Secondly, Plato's Socrates complains, a written text is basically unresponsive. If you ask a person to explain his or her statement, you can get at least an attempt at explanation: if you ask a text, you get nothing except the same, often stupid words which called for your question in the first place. In the modern critique of the computer, the same objection is put, 'Garbage in, garbage out'. 
>So deeply are we into literacy that we fail commonly to recognize that this objection applies every bit as much to books as to computers. If a book states an untruth, ten thousand printed refutations will do nothing to the printed text: the untruth is there for ever. This is why books have been burnt. Texts are essentially contumacious. 
>
>Thirdly, Plato's Socrates urges, writing destroys memory. Those who use writing will become forgetful, relying on an external source for what they lack in internal resources. Writing weakens the mind.
>
>Today, some parents and others fear that pocket calculators provide an external resource for what ought to be the internal resource of memorized multiplication tables. Presumably, constant repetition of multiplication tables might produce more and more Albert Einsteins. Calculators weaken the mind, relieve it of the setting-up exercises that keep it strong and make it grow. (Significantly, the fact that the computer manages multiplication and other computation so much more effectively than human beings do, shows how little the multiplication tables have to do with real thinking.) 
>
>Fourthly, in keeping with the agonistic [combative; polemical] mentality of oral cultures, their tendency to view everything in terms of interpersonal struggle, Plato's Socrates also holds it against writing that the written word cannot defend itself as the natural spoken word can: real speech and thought always exist essentially in the context of struggle. Writing is passive, out of it, in an unreal, unnatural world. So, it seems, are computers: if you punch the keys they will not fight back on their own, but only in the way they have been programmed to do. Those who are disturbed about Plato's misgivings about writing will be even more disturbed to find that print created similar misgivings when it was first introduced. Hieronimo Squarciafico, who in fact promoted the printing of the Latin classics, also argued in 1477 that already 'abundance of books makes men less studious' (Ong 1982: 80). Even more than writing does, print destroys memory and enfeebles the mind by relieving it of too much work (the pocket calculator complaint once more), downgrading the wise man and wise woman in favour of the pocket compendium. 

But Ong points out the irony in Plato's/Aristotle's objection/rejection of writing:
>Plato's entire epistemology was unwittingly a programmed rejection of the archaic preliterate world of thought and discourse. This world was oral, mobile, warm, personally interactive (you needed live people to produce spoken words). It was the world represented by the poets, whom Plato would not allow in his Republic, because, although Plato could not formulate it this way, their thought processes and modes of expression were disruptive of the cool, analytic processes generated by writing. 
>
>The Platonic ideas are not oral, not sounded, not mobile, not warm, not personally interactive. They are silent, immobile, in themselves devoid of all warmth, impersonal and isolated, not part of the human lifeworld at all but utterly above and beyond it, paradigmatic abstractions. Plato's term idea, form, is in fact visually based, coming from the same root as the Latin videre, meaning to see, and such English derivatives as vision, visible, or video. In the older Greek form, a digamma [the sixth letter of the early Greek alphabet, pronounced as "w"] had preceded the iota: videa or widea. 
>
>Platonic form was form conceived of by analogy precisely with visible form. Despite his touting of logos and speech, the Platonic ideas in effect modelled intelligence not so much on hearing as on seeing. 

Ong points out something that we sometimes forget (or take for granted): writing __is__ a technology (and as he points out, a revolutionary one, at that):
>In //From Memory to Written Record: England 1066-1307//, M. T. Clanchy (1979) has an entire chapter entitled 'The Technology of Writing'. He explains how in the West through the Middle Ages and earlier almost all those devoted to writing regularly used the services of a scribe because the physical labour writing involved - scraping and polishing the animal skin or parchment, whitening it with chalk, resharpening goose-quill pens with what we still call a pen-knife, mixing ink, and all the rest - interfered with thought and composition. 
>
>Chaucer's 'Wordes unto Adam, His Owne Scriveyn' humorously expressed the author's resentment at having to 'rubbe and scrape' to correct his scribe Adam's own carelessness in plying his craft. Today's ballpoint pens, not to mention our typewriters and word processors or the paper we use, are high-technology products, but we seldom advert to the fact because the technology is concentrated in the factories that produce such things, rather than at the point of production of the text itself, where the technology is concentrated in a manuscript culture. 
>
>Although we take writing so much for granted as to forget that it is a technology, writing is in a way the most drastic of the three technologies of the word: it initiated what printing and electronics only continued, the physical reduction of dynamic sound to quiescent space, the separation of the word from the living present, where alone real, spoken words exist. 
As a parallel aside/analogy: is the fact that nowadays every "literate person" knows how to write (unaided :) a foreteller of a future where every such person will know how to program a computer and make it do what they want it to do for them? :)

And another irony (and a "strange human loop" :) of this powerful technology:
>Once reduced to space, words are frozen and in a sense dead. Yet there is a paradox in the fact that the deadness of the written or printed text, its removal from the living human lifeworld, its rigid visual fixity, assures its endurance and its potential for being resurrected into limitless living contexts by a limitless number of living readers. The dead, thing-like text has potentials far outdistancing those of the simply spoken word. 
>
>The complementary paradox, however, is that the written text, for all its permanence, means nothing, is not even a text, except in relationship to the spoken word. For a text to be intelligible, to deliver its message, it must be reconverted into sound, directly or indirectly, either really in the external world or in the auditory imagination. All verbal expression, whether put into writing, print, or the computer, is ineluctably bound to sound forever. 

Steven Pinker, a prominent and prolific psycholinguist at Harvard, in his book //The Sense of Style: The Thinking Person's Guide to Writing in the 21st Century//, writes about why style matters (see also [[99 writing styles by Raymond Queneau|Exercises in Style - Raymond Queneau]]):

>Style, [...], adds beauty to the world. To a literate reader, a crisp sentence, an arresting metaphor, a witty aside, an elegant turn of phrase are among life’s greatest pleasures… This thoroughly impractical virtue of good writing is where the practical effort of mastering good writing must begin.

It is interesting to compare (literature) writing style to (programming) coding style.

Two "[[CS Sages|https://en.wikipedia.org/wiki/List_of_computer_scientists]]", Donald Knuth, and Edsger Dijkstra, were both thinking about, advocating and promoting computer program elegance (or style). But they had different takes on what it means and how to achieve it.

In the article [[Bend Sinister|https://monoskop.org/images/1/14/Goriunova_Olga_ed_Fun_and_Software_Exploring_Pleasure_Paradox_and_Pain_in_Computing.pdf]] artist and programmer [[Simon Yuill|http://www.lipparosa.org/]] observes (about Knuth, first):
>Here, art is that which defines the conditions under which knowledge becomes productive. For Knuth these conditions are both economic, the effective use of resources, and aesthetic, a sense for that which is harmonious and ‘good’. Bentham and Mill help define what may be described as Knuth’s ethics of production, the behaviours and moral values under which programming is practised. The realization of this in elegant code gives material shape to the larger project of Literate Programming which Knuth has advocated throughout his career and created software tools to facilitate. Just as these ethics draw upon the ideas of English liberal philosophers, Bentham and Mill, the practice of Literate Programming also adopts the aesthetic medium most closely related to the expression of liberal thinking, the essay: ‘The practitioner of literate programming can be regarded as an essayist, whose main concern is with exposition and excellence of style’.
[and about Dijkstra:]
>Dijkstra didn't like the term "programming language" and preferred the term "programming notation" and explained the difference, strength and danger:
>>The introduction of the term ‘language’ in connection with notation techniques for programs has been a mixed blessing. On the one hand it has been very helpful in as far as existing linguistic theory now provided a natural framework and an established terminology (‘grammar’, ‘syntax’, ‘semantics’, etc.) for discussion. On the other hand we must observe that the analogy with (now so-called!) ‘natural languages’ has been very misleading, because natural languages, non-formalized as they are, derive both their weakness and their power from their vagueness and imprecision.

>Although Dijkstra was as strong an advocate of elegance as Knuth, this comment indicates something of a distinction in how each understood this. While both might place emphasis upon the precise, efficient expression of an idea in code, for Knuth this has a rhetorical dimension in that code, as essay, should aim to be persuasive in expression and display an appropriate conduct on the part of the programmer [...]. For Dijkstra elegance lies more in an irrefutably self-evident correctness, for truly elegant code would not require commentary nor debugging. For Knuth elegance is the start of a conversation, for Dijkstra it is the conclusion.
In Computer Science, explaining (and even more so, understanding :) formal and actual function parameters is not easy.

So, for example, in Python, if you define a function called draw_square() thusly (using Python's standard turtle graphics library, with pencil being a turtle):
{{{
import turtle             # Python's standard turtle graphics module
pencil = turtle.Turtle()  # "pencil" is our drawing turtle

def draw_square():
	pencil.pendown()
	for side in range(4):      # a square has 4 sides...
		pencil.forward(100)    # ...each 100 pixels long,
		pencil.right(90)       # with a 90-degree turn at each corner
}}}

you can call (use) this function and it will draw a square of size 100 (pixels) on each side.

If you want to draw squares of different sizes, you need to tell the draw_square() function what size you want, and you can do that by using what's called a parameter (see the new function definition below):

{{{
def draw_square(length):       # "length" is the function's (formal) parameter
	pencil.pendown()
	for side in range(4):
		pencil.forward(length)  # use whatever length the caller passed in
		pencil.right(90)
}}}

Now, every time you call (use) the function, you have to give it the value of the "length" parameter you want it to use when drawing a square:
So calling draw_square(50) will draw a square of 50 (pixels) on each side, and calling draw_square(100) will draw a square of 100 (pixels) on each side, like so:
{{{
# draw a small square:
side = 50
draw_square(side)

# and a larger square:
side = 100
draw_square(side)
}}}

Students often confuse the "side" variable, which is used to pass the size of the square into the function, with the "length" parameter, which is used inside the function to actually draw it.
In programming terminology, "side" is called the "actual parameter" and "length" the "formal parameter".

To make clearer what the difference is between "formal" and "actual" parameters, I use a "wild story", with the hope that the wildness will make the story (and its meaning) stick in students' memory.

The wild story: let's say we have a custom (procedure, function) of greeting visitors to our CS lab. The first visitor showing up at the door is seated on one of the swiveling student chairs, and turned/swiveled around 3 times.
The second visitor showing up, is seated at the teacher's desk, and a pencil is stuck behind their ear.

In programming pseudo-code (almost-code :) this procedure/function can be __defined__ as:
{{{
def visiting_procedure(visitor_1, visitor_2):
	seat visitor_1 on a swiveling chair
	turn visitor_1 3 times
	seat visitor_2 at teacher's desk
	stick pencil behind visitor_2's ear
}}}
where visitor_1 and visitor_2 are the placeholders, standing for any 2 visitors.

(and the story continues:) Now suppose the school principal (let's call her Jane) and the school administrator (let's call him Joe) show up at the door. What do we do? We __use__ our procedure/function:

We seat ''Jane'' on a swiveling chair and turn her three times, and
we seat ''Joe'' at the teacher's desk and stick a pencil behind his ear.


So, in this exciting (and hopefully, memorable) fable, visitor_1 and visitor_2 are the "formal" parameters (the placeholders for the function), and Jane and Joe are the "actual" parameters (or values).
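
Here is the fable as (almost) real Python (a sketch, with prints standing in for the actual chair-swiveling :):
{{{
# The greeting ritual as a Python function.
def visiting_procedure(visitor_1, visitor_2):   # visitor_1, visitor_2: the FORMAL parameters
    print("Seating", visitor_1, "on a swiveling chair and turning them 3 times.")
    print("Seating", visitor_2, "at the teacher's desk, pencil behind their ear.")

# Jane and Joe show up at the door:
visiting_procedure("Jane", "Joe")               # "Jane" and "Joe": the ACTUAL parameters (values)
}}}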

It turns out that this wild story (and the wilder the better?) helps students understand and correctly use formal parameters when defining a function, and apply actual parameters when calling/using it.


This may employ a technique/phenomenon also used for remembering long and arbitrary sequences of objects or facts.

* I just finished reading the book //Little, Big// by John Crowley, and I enjoyed it a lot! It's //wonder//ful! It has this "softly magical" feeling to it (it is a "fairytale" after all), and it contains "Prose that F. Scott Fitzgerald would envy... the best fantasy yet written by an American." (from the "Praise for" page of the book). In the book, Crowley talks about [[Giordano Bruno|http://en.wikipedia.org/wiki/Giordano_Bruno]], [[The art of memory|http://en.wikipedia.org/wiki/Art_of_memory]], and the mnemonic devices one of the characters (Ariel Hawksquill) is using to remember prodigious amounts of details and relationships.
Sir Winston Leonard Spencer Churchill (30 November 1874 - 24 January 1965) was a British politician and statesman known for his leadership of the United Kingdom during the Second World War (WWII).

[[Winston Churchill|http://en.wikipedia.org/wiki/Winston_Churchill]]
A funny dialog between a wizard and his apprentice from the book [[Thief of Time|http://www.chrisjoneswriting.com/thief-of-time.html]] by the inimitable Terry Pratchett:

'I have heard the heartbeat of the universe. I know the answers to many questions. Ask me.'
The apprentice gave him a bleary look. It was too early in the morning for it to be early in the morning. That was the only thing he currently knew for sure.
'Er... what does master want for breakfast?' he said.
Wen looked down on their camp and across the snowfields and purple mountains to the golden daylight creating the world, and mused upon certain aspects of humanity.
'Ah,' he said. 'One of the difficult ones.'

Talk about a missed opportunity.
Reminds me of the short story ''"Descent of Species"'' in the interesting [[collection ("Sum") by David Eagleman|Sum by David Eagleman]].

But, here is something on a more serious note [[and on good questions|Good questions - from a consciousness perspective]], as well as about [[Inquiring about what we don't know]].
Wisdom is hereditary - You get it from your children.
I love the depth, sensitivity, humility, wisdom, humor, and Menschlichkeit^^1^^ of Wislawa Szymborska.
Here are a few things from [[her Nobel Prize lecture|http://www.nobelprize.org/nobel_prizes/literature/laureates/1996/szymborska-lecture.html]]. Really a poem of a lecture :)

She starts off, plain and simple:
>They say the first sentence in any speech is always the hardest. Well, that one's behind me, anyway.
and adds
>my lecture will be rather short. All imperfection is easier to tolerate if served up in small doses.
And why will it be short (and imperfect)? Because she was asked to talk about poetry (!) and from past experience, she observes:
>I've said very little on the subject, next to nothing, in fact. And whenever I have said anything, I've always had the sneaking suspicion that I'm not very good at it.

Szymborska freely admits that she finds it hard to explain what inspiration is:
>It's just not easy to explain something to someone else that you don't understand yourself.
(BTW, this kind of logic didn't prevent many others, mind you, from trying to "explain" things they don't understand. Or as Johann Wolfgang von Goethe observed: When ideas fail, words come in very handy.)

But, with her typical sense of keen observation and wisdom, she comes up with an inspiring description:
>inspiration is not the exclusive privilege of poets or artists generally. There is, has been, and will always be a certain group of people whom inspiration visits. It's made up of all those who've consciously chosen their calling and do their job with love and imagination. It may include doctors, teachers, gardeners - and I could list a hundred more professions. Their work becomes one continuous adventure as long as they manage to keep discovering new challenges in it. Difficulties and setbacks never quell their curiosity. A swarm of new questions emerges from every problem they solve. Whatever inspiration is, it's born from a continuous "I don't know." ^^2^^
She has a healthy (and based on past/historic experience) suspicion of people like dictators, fanatics and demagogues. She acknowledges their "inspiration" and zeal springing from strong convictions:
>they too perform their duties with inventive fervor. Well, yes, but they "know." They know, and whatever they know is enough for them once and for all.
(or as [[Alain (Emile Chartier)|https://en.wikipedia.org/wiki/%C3%89mile_Chartier]] observed: Nothing is more dangerous than an idea, when it is the only idea we have.)
She states that "any knowledge that doesn't lead to new questions quickly dies out: it fails to maintain the temperature required for sustaining life." (or as Henry Ford had observed: "Anyone who stops learning is old, whether at twenty or eighty.")

>This is why I value that little phrase "I don't know" so highly. It's small, but it flies on mighty wings. It expands our lives to include the spaces within us as well as those outer expanses in which our tiny Earth hangs suspended.
(or as Isaac Asimov put it: 
>The most exciting phrase to hear in science, the one that heralds new discoveries, is not "Eureka!", but "That's funny...".)

On looking at the world with fresh/new/wondering (and optimistic) eyes, she paints a great image about her wish ([[similar to that of the poet Leah Goldberg|http://mikrarevivim.blogspot.com/2014/02/blog-post_1412.html]]) to get 
>a chance to chat with the Ecclesiastes [ [[Koheleth|http://www.mechon-mamre.org/p/pt/pt3101.htm]], the son of King David], the author of that moving lament on the vanity of all human endeavors
> [הֲבֵל הֲבָלִים אָמַר קֹהֶלֶת, הֲבֵל הֲבָלִים הַכֹּל הָבֶל.]. I would bow very deeply before him, because he is, after all, one of the greatest poets, for me at least. That done, I would grab his hand. "'There's nothing new under the sun': that's what you wrote, Ecclesiastes. But you yourself were born new under the sun. And the poem you created is also new under the sun, since no one wrote it down before you. And all your readers are also new under the sun, since those who lived before you couldn't read your poem. And that cypress that you're sitting under hasn't been growing since the dawn of time. It came into being by way of another cypress similar to yours, but not exactly the same. And Ecclesiastes, I'd also like to ask you what new thing under the sun you're planning to work on now? A further supplement to the thoughts you've already expressed? Or maybe you're tempted to contradict some of them now? In your earlier work you mentioned joy - so what if it's fleeting? So maybe your new-under-the-sun poem will be about joy? Have you taken notes yet, do you have drafts? I doubt you'll say, 'I've written everything down, I've got nothing left to add.' There's no poet in the world who can say this, least of all a great poet like yourself."
Looking at the universe, ever the optimist ("planets already dead? still dead? we just don't know"), she chooses to appreciate the gift we received:
>whatever we might think of this measureless theater to which we've got reserved tickets, but tickets whose lifespan is laughably short, bounded as it is by two arbitrary dates; whatever else we might think of this world - it is astonishing.
But we are astonished, not because we compare the world we observe to a known "reference world". Again, we are not that wise, and we don't possess that kind of knowledge:
>Our astonishment exists per se and isn't based on comparison with something else.
(about which Carl Sagan said: Every aspect of Nature reveals a deep mystery and touches our sense of wonder and awe. Those afraid of the universe as it really is, those who pretend to nonexistent knowledge and envision a Cosmos centered on human beings will prefer the fleeting comforts of superstition. They avoid rather than confront the world. But those with the courage to explore the weave and structure of the Cosmos, even where it differs profoundly from their wishes and prejudices, will penetrate its deepest mysteries.)



----
^^1^^ Menschlichkeit - from the [[German Duden|http://www.duden.de/rechtschreibung/Menschlichkeit]]: Erbarmen, Humanität, Menschenfreundlichkeit, Milde, Toleranz. See also [[How to be a Mensch|http://judaism.about.com/od/judaismbasics/a/howtobeamensch.htm]].

^^2^^ Richard Feynman was [[captured on video|http://www.youtube.com/watch?v=E1RqTP5Unr4&feature=fvwrel]] talking about the same state of "not knowing":
>You see, one thing is, I can live with doubt and uncertainty and not knowing^^3^^. I think it's much more interesting to live not knowing than to have answers which might be wrong. I have approximate answers and possible beliefs and different degrees of certainty about different things, but I am not absolutely sure of anything and there are many things I do not know anything about, such as whether it means anything to ask 'why we are here?' and what the question might mean. I might think about it a little bit, and if I can't figure it out then I go on to something else. But I don't have to know an answer. I don't have ... I don't feel frightened by not knowing things, by being lost in a mysterious universe without having any purpose, which is the way it really is as far as I can tell, possibly. It doesn't frighten me.

^^3^^ - and according to Terry Pratchett, [[even DEATH can try to believe in it|THE UNCERTAINTY PRINICIPLE - according to Sir Terry]]
We are a wordy species. ''Words are the wings both intellect and imagination fly on.'' Music, dance, visual arts, crafts of all kinds, all are central to human development and well-being, and no art or skill is ever useless learning; but to train the mind to take off from immediate reality and return to it with new understanding and new strength, nothing quite equals poem and story.
In an interesting [[article of that name in the New Yorker|http://www.newyorker.com/magazine/2015/11/23/writers-in-the-storm]], Kathryn Schulz reviews "How weather went from symbol to science and back again".

She points out that weather "maintain[ed] its centrality in Western literature for millennia":
>Storms sent to punish, lightning to frighten, thunder to humble, floods to obliterate: across nearly all cultures, the first stories that we told about weather were efforts to explain it, and the explanations invariably came down to divine agency. From the bag of winds gifted to Aeolus to the Biblical drought visited on Jerusalem, meteorological phenomena first appear in the narrative record as tools used by deities to battle one another and to help or hinder humans.
But,
>in the mid-seventeenth century, the role of weather in literature was shifting. While our earliest weather stories tried to explain meteorological phenomena, subsequent ones used meteorological phenomena to explain ourselves. Weather, in other words, went from being mythical to being metaphorical. In a symbolic system that is now so familiar as to be intuitive, atmospheric conditions came to stand in for the human condition.
Schulz mentions the book //Weatherland// by Alexandra Harris, a history of weather in English literature.
>“My subject is not the weather itself,” she writes, “but the weather as it is daily recreated in the human imagination.” Her survey begins with an astute observation: weather works so well as a symbol partly because its literal manifestation is oddly slippery. “Meteorological phenomena are serially elusive,” she writes. 
>Weather, one of the most potent forces in our lives, is often imperceptible, perpetually changing, and frequently mysterious.

>As Harris points out, all of this makes it a convenient substitute for another “serially elusive” phenomenon: the self. King Lear, Shakespeare tells us, was “minded like the weather”—as charged and turbulent as the storm that raged around him on the heath. In a way, we have all been minded like the weather ever since, so accustomed have we become to using meteorology to describe mental activity. Minds are foggy (unless they are experiencing a brainstorm), temperaments sunny, attitudes chilly; moods blow in and out. Wordsworth wandered lonely as a cloud; Robert Frost, in “Tree at My Window,” explicitly compared outer and inner weather.
Then, there was a change in the literary positioning of weather ("the weather has turned"? :)
John Ruskin, the most influential critic of the nineteenth century, observed in 1856 (as Schulz recounts):
>The sun does not shine mercilessly, Ruskin insisted, and the skies have never once wept, and, Dickens notwithstanding, fog cannot be found “cruelly pinching the toes and fingers” of a little apprentice boy. “It is one of the signs of the highest power in a writer,” Ruskin argued, “to check all such habits of thought, and to keep his eyes fixed firmly on the pure fact”—on the “ordinary, proper, and true appearance of things.”
What Ruskin wished for was that the society of meteorologists (in England) would establish
> its influence and its power to be omnipotent over the globe, so that it may be able to know, at any given instant, the state of the atmosphere at every point on its surface.”

>It would take the better part of a century, but that vision eventually became a reality. What Ruskin did not predict, however (though it might have pleased him), was that the rise of an empirical model of weather would occasion the decline of the symbolic one—and, with it, the over-all decline of weather in literature.
Schulz summarizes the state of meteorology at the dawn of the nineteenth century:
>At the dawn of the nineteenth century, then, nearly everything about weather remained a mystery. No one understood the wind. No one knew why temperatures dropped as you climbed closer to the sun. No one could explain how clouds, with their countless tons of rainwater, somehow remained suspended in midair. No one knew what caused lightning, or why it tended to strike the tallest thing around—a problem for Christian meteorology, since it appeared that God had a special propensity for destroying church steeples. No one even knew what the sky was made of. Above all, no one knew what it was likely to do next.
A gap between meteorology and literature started to open up:
>Meteorology had constructed a new story about weather, down to the vocabulary used to tell it, yet writers seemed unable or unwilling to make use of it, even as their traditional strategies were becoming less viable. With the rise of a scientific understanding of weather, both its mythological and metaphorical clout diminished. Storms seem less like the verdict of God when you can track them by satellite two weeks out, and lightning loses some of its gothic thrill when you know that it is merely electrostatic discharge. A forecast [...] insists that the weather is the product of natural forces, utterly unrelated to the goings on in our culture, our relationships, and our soul.
As Western civilization shifted from rural living to urban dwelling and working, and from older modes of locomotion to cars and trucks, the weather, and the forecasting of it, became less crucial.
And so, literature also became "climate-controlled".
But today, with climate change (or "global warming," if you are so politically inclined to call it that), weather is starting to regain prominence in multiple domains, including literature.
It seems like "a circle is closing" (or maybe it's a spiral overlapping):
>These days, the atmosphere really does reflect human activity, and, as in our most ancient stories, our own behavior really is bringing disastrous weather down on our heads. Meteorological activity, so long yoked to morality, finally has genuine ethical stakes.
In the twentieth century, authors began writing works classified as "cli-fi" (a play on "sci-fi"), as well as dystopian novels in which the climate is in crisis ^^1^^.
The situation looks bleak:
>Today, it is, if anything, even more difficult to imagine an end of the world that is not driven by a change in the weather. We speak of a “nuclear winter,” of the firestorms and the radical temperature drop that would follow an asteroid strike, of global climate change nudging planetary temperatures out of the range of the habitable.
Schulz ends her piece with a call for writers to write about weather and thus "change the narrative" (again), spurring us to take the situation seriously and do something about it, so that reality does not come to resemble fiction (sci-, cli-, or otherwise, with respect to the weather):
>But apocalyptic stories are ultimately escapist fantasies, even if no one escapes. End-times narratives offer the terrible resolution of ultimate destruction. Partial destruction, displacement, hunger, want, weakness, loss, need—these are more difficult stories. That is all the more reason we should be glad writers are beginning to tell them: to help us imagine not dying this way but living this way. To weather something is, after all, to survive.

----
^^1^^ - 
>The dystopian novelist J. G. Ballard wrote about climate change before the climate was known to be changing; later, Kim Stanley Robinson, Margaret Atwood, and many others used the conventions of science fiction to create worlds in which the climate is in crisis. More recently, though, books about weather are displaying a distinct migratory pattern—farther from genre fiction and closer to realism; backward in time from the future and ever closer to the present. See, among others, Ian McEwan’s “Solar,” Barbara Kingsolver’s “Flight Behavior,” Nathaniel Rich’s “Odds Against Tomorrow,” Karen Walker’s “The Age of Miracles,” Jesmyn Ward’s “Salvage the Bones,” and Dave Eggers’s “Zeitoun.” (Weather is on the rise in nonfiction, too. In addition to “Weatherland” and “The Weather Experiment,” recent or forthcoming titles include Tim Flannery’s “Atmosphere of Hope,” Christine Corton’s “London Fog,” Lauren Redniss’s “Thunder & Lightning,” and Cynthia Barnett’s “Rain.”)
I think that [[Joseph Weizenbaum|https://en.wikipedia.org/wiki/Joseph_Weizenbaum]] (of [[ELIZA program|https://cse.buffalo.edu/~rapaport/572/S02/weizenbaum.eliza.1966.pdf]] fame; see [[Joseph Weizenbaum on ELIZA]]), in the following snippet, starts by expressing a limitation of programming/writing for understanding, but later (it seems) changes his mind and appears to agree with [[Alan Perlis about knowledge acquisition through programming|You think you KNOW when you learn, are more sure when you can write, even more when you can teach, but certain when you can program.]].

I personally think that programming can be a very powerful tool/means for understanding, but it is neither necessary (i.e., deep understanding doesn't //require// coding and/or programming skills) nor sufficient (i.e., programming something doesn't //guarantee// deep understanding). Coding something can (and should) be an opportunity to analyze, abstract, define, recreate, explore, experiment, and tinker -- all of which are known to help humans gain deep(er) understanding and knowledge.

>To understand something sufficiently well to be able to program it for a computer does not mean to understand it to its ultimate depth. There can be no such ultimate understanding in practical affairs. Programming is rather a test of understanding. In this respect it is like writing; often when we think we understand something and attempt to write about it, our very act of composition reveals our lack of understanding even to ourselves. Our pen writes the word 'because' and suddenly stops. We thought we understood the 'why' of something, but discover that we don't. We begin a sentence with 'obviously,' and then see that what we meant to write is not obvious at all. Sometimes we connect two clauses with the word 'therefore,' only to then see that our chain of reasoning is defective. 
>
>Programming is like that. It is, after all, writing, too. But in ordinary writing we sometimes obscure our lack of understanding, our failures in logic, by unwittingly appealing to the immense flexibility of a natural language and to its inherent ambiguity...[and, I'd like to add: to the intelligence and life context of the reader].
>An interpreter of programming-language-texts, a computer, is immune to the seductive influence of mere eloquence... A computer is a merciless critic.
:: -- Source: ''Weizenbaum's book'' [[Computer Power and Human Reason: From Judgment to Calculation|http://blogs.evergreen.edu/cpat/files/2013/05/Computer-Power-and-Human-Reason.pdf]].
:: (see [[John McCarthy's criticism|http://jmc.stanford.edu/artificial-intelligence/reviews/weizenbaum.pdf]] of ''Weizenbaum's book'').


From an article titled [["Writing is a Technology that Restructures Thought"|http://worrydream.com/refs/Ong%20-%20Writing%20is%20a%20Technology%20that%20Restructures%20Thought.pdf]] by Walter J. Ong:
>To say writing is artificial is not to condemn it but to praise it. Like other artificial creations and indeed more than any other, writing is utterly invaluable and indeed essential for the realization of fuller, interior, human potentials. Technologies are not mere exterior aids but also interior transformations of consciousness, and never more than when they affect the word. Such transformations of consciousness can be uplifting, at the same time that they are in a sense alienating. By distancing thought, alienating it from its original habitat in sounded words, writing raises consciousness. Alienation from a natural milieu can be good for us and indeed is in many ways essential for fuller human life. ''To live and to understand fully, we need not only proximity but also distance.'' This writing provides for, thereby accelerating the evolution of consciousness as nothing else before it does. 
You don't want a million answers as much as you want a few forever questions. The questions are diamonds you hold in the light. Study a lifetime and you see different colors from the same jewel.
You think you KNOW when you learn, are more sure when you can write, even more when you can teach, but certain when you can program.

(see [[Joseph Weizenbaum on Writing and programming for understanding|Writing and programming for understanding]]).
From a book review titled [["Improve Your Life by Paying Attention"|https://fs.blog/2013/10/improve-your-life-by-paying-attention/]] in [[Farnam Street|https://fs.blog/]], reviewing Winifred Gallagher's book //Rapt: Attention and the Focused Life//:

>That your experience largely depends on the material objects and mental subjects that you choose to pay attention to or ignore is not an imaginative notion, but a physiological fact. When you focus on a stop sign or a sonnet, a waft of perfume or a stock-market tip, your brain registers that “target,” which enables it to affect your behavior. In contrast, the things that you don’t attend to in a sense don’t exist, at least for you.
>
>All day long, you are selectively paying attention to something, and much more often than you may suspect, you can take charge of this process to good effect. Indeed, your ability to focus on this and suppress that is the key to controlling your experience and, ultimately, your well-being. 

which echoes (but expresses an even more radical effect than) William James, who said:
>My experience is what I agree to attend to, and only those things which I notice shape my mind.

Winifred also points out (ha!):
>Like fingers pointing to the moon, other diverse disciplines from anthropology to education, behavioral economics to family counseling similarly suggest that the skillful management of attention is the sine qua non of the good life and the key to improving virtually every aspect of your experience, from mood to productivity to relationships.
In his book //Zen Physics// David Darling expresses the following thoughts about understanding through language, models, and intuition:
>Zen uses language to point beyond language, which is what poets and playwrights and musicians do. But, less obvious, it is also what modern science does if the intuitive leap is taken beyond its abstract formalism. The deep, latent message of quantum mechanics, for instance, codified in the language of mathematics, is that there is a reality beyond our senses which eludes verbal comprehension or logical analysis. And this is best exemplified in the central idea of "complementarity" -- an idea introduced by Niels Bohr to account for the fact that two different conditions of observation could lead to conclusions that were //conceptually// incompatible.
>In one experiment, for example, light might behave as if it were made of particles, in another as if it were made of waves. Bohr proposed, however, that there is no //intrinsic// incompatibility between these results because they are functions of different conditions of observation; no experiment could be devised that would demonstrate both aspects under a single condition.
>The wave and particle natures of light and matter are not mutually exclusive, they are mutually inclusive -- necessary, complementary aspects of reality. Bohr gained his inspiration for this concept from Eastern philosophy, in particular from the Taoist concept of the dynamic interplay of opposites, //yin// and //yang//. And so, one of the central principles of modern physics is coincident with, and actually derived from, one of the most basic doctrines of the Eastern worldview.
And as far as the [[intuitive leap|It's Big Meaning, not Big Data.]] goes:
>Intuition has ever been the handmaid of science. And although science presents its theories and conclusions in a "respectable" symbolic form, its greatest advances have always come initially not from the application of reason but from intuitive leaps -- sudden flashes of inspiration very much akin to Zen experiences.


A nice [[example of human-centric, transparent, performance-enhancing technology|resources/Natural-Born Cyborgs - ch2.docx]]

This example describes a scenario where someone on the street asks you for the time, and you, having a ("cheap but reliable") wristwatch, tell them what the time is. Clark argues that we -- our minds, selves, etc. -- extend beyond the boundaries of our skull and skin. We naturally extend our intelligence:
>When we answer that we know the time, all we mean is that we have the information readily at hand. And to be sure, several cultural variants of the request exist. My wife, a native Spanish speaker, might ask me "Tienes hora?", literally, "Have you got the time?", with the emphasis on possession rather than knowledge. All this notwithstanding, I think the ease with which we accept talk of the watch-bearer as one who actually knows rather than one who can easily find out the time is suggestive. For the line between that which is easily and readily accessible and that which should be counted as part of the knowledge base of an active intelligent system is slim and unstable indeed. It is so slim and unstable, in fact, that it sometimes makes both social and scientific sense to think of your individual knowledge as quite simply whatever body of information and understanding is at your fingertips; whatever body of information and understanding is right there, cheaply and easily available, as and when needed.
So,
>you are telling the literal truth when you answer "yes" to the innocent-sounding question "Do you know the time?" For you do know the time. It is just that the "you" that knows the time is no longer the bare biological organism but the hybrid biotechnological system that now includes the wristwatch as a proper part.
and concludes:
>We can, in any event, take away two somewhat less contentious lessons from our discussion of modern timekeeping. The first is that transparent (nonopaque, human-centered) technology is by no means a new invention. It is with us already in a wide variety of old technologies, including pen, paper, books, watches, written words, numerical notations, and the multitude of almost-invisible props and aids that scaffold and empower our daily thought and action. The second is that the passage to transparency often involves a delicate and temporally extended process of co-evolution. Certainly, the technology must change in order to become increasingly easy to use, access, and purchase; but this is only half the story because at the same time, elements of culture, education, and society must change also. In the case at hand, people had to learn to value time discipline as opposed to mere time obedience, and this transition itself, Landes tells us, took over a hundred years to fully accomplish.
Clark talks about the nature of tools and technologies that really/significantly enhance human capabilities:
>What if we instead allowed them to define brand new niches for genuine action and intervention? The idea would be to allow the technologies to provide for the kinds of interactions and interventions for which they are best suited, rather than to force them to (badly) replicate our original forms of action and experience.
About written language (and ''literacy'') vs. face-to-face conversation:
>After all, our single most fantastically successful piece of transparent cognitive technology, written language, is not simply a poor cousin of face-to-face vocal exchange. Instead, it provides a new medium for both the exchange of ideas and (more importantly) for the active construction of thoughts. We celebrate it for its special virtues, not as an impersonal, low-bandwidth, less rapidly responsive stand-in for face-to-face exchange.
>This point is nicely made in a short piece by two Bellcore researchers, Jim Hollan and Scott Stornetta. The piece is called "Beyond Being There" and kicks off with an analogy. A human with a broken leg may use a crutch, but as soon as she is well, the crutch is abandoned. Shoes, however (running shoes especially), enhance performance even while we are well. Too much telecommunications research, they argue, is geared to building crutches rather than shoes. Both are tools. We may become as accustomed to the crutches as the shoes, but crutches are designed to remedy a perceived defect and shoes to provide new functionality. Maybe new technologies should aspire to the latter. As they put it:
>>[much] telecommunications research seems to work under the implicit assumption that there is a natural and perfect state -- being there -- and that our state is in some sense broken when we are not physically proximate. . . . In our view, there are a number of problems with this approach. Not only does it orient us towards the construction of crutch-like telecommunications tools but it also implicitly commits us to a general research direction of attempting to imitate one medium of communication with another.
>Consider e-mail. E-mail is often used even when the recipient is sitting in the office next door. I do this all the time. My neighbor is a university colleague and for certain delicate, slow conversations, we much prefer a slow, asynchronous e-mail exchange. But e-mail is nothing like face-to-face interaction, and therein lie its virtues. It provides complementary functionality, allowing people informally and rapidly to interact, while preserving an inspectable and revisitable trace. It does this without requiring us both to be free at the same time. Cell phone text messaging has related virtues. The tools that really take off, Hollan and Stornetta thus argue, are those that "people prefer to use [for certain purposes] even when they have the option of interacting in physical proximity . . . tools that go beyond being there."

From the book //On Intelligence// (published in 2004) by Jeff Hawkins (pg. 25):

I had formed an opinion that three things were essential to understanding the brain. My ''first criterion'' was @@the inclusion of time@@ in brain function. Real brains process rapidly changing streams of information. There is nothing static about the flow of information into and out of the brain.

The ''second criterion'' was @@the importance of feedback@@. Neuroanatomists have known for a long time that the brain is saturated with feedback connections. For example, in the circuit between the neocortex and a lower structure called the thalamus, connections going backward (toward the input) exceed the connections going forward by almost a factor of ten! That is, for every fiber feeding information forward into the neocortex, there are ten fibers feeding information back toward the senses. Feedback dominates most connections throughout the neocortex as well. No one understood the precise role of this feedback, but it was clear from published research that it existed everywhere. I figured it must be important.

The ''third criterion'' was that any theory or model of the brain should account for @@the physical architecture of the brain@@. The neocortex is not a simple structure. As we will see later, it is organized as a repeating hierarchy. 

And Jeff Hawkins' conclusion:

Any neural network that didn't acknowledge this structure was certainly not going to work like a brain.






__Summary__: I have changed my mind about the general validity of the mechanical worldview that underlies the modern scientific understanding of natural processes.

''Goodwin writes:''
What is often suggested as an explanation of this [the "explanation" of qualitative experience in humans and other organisms] is evolutionary complexity: When an organism has a nervous system of sufficient complexity, subjective experience and feelings can arise. This implies that something totally new and qualitatively different can emerge from the interaction of "dead," unfeeling components such as cell membranes, molecules, and electrical currents.
But this implies getting something from nothing, which violates what I have learned about emergent properties: There is always a precursor property for any phenomenon, and you cannot just introduce a new dimension into the phase space of your model to explain the result. Qualities are different from quantities and cannot be reduced to them. 
So what is the precursor of the subjective experience that evolves in organisms? 
One possibility is to acknowledge that the world isn't what modern science assumes it to be -- mechanical and "dead" -- but that everything has some basic properties relating to experience or feeling. Philosophers and scientists have been down this route before and call this view pan-sentience or panpsychism: the idea that the world is impregnated with some form of feeling in every one of its constituents. This makes it possible for complex organized beings, such as organisms, to develop feelings and for qualities to be as real as quantities.

''My observations/comments:''
This sounds mystical. While I tend to agree that "sufficiently complex" systems may show emergent properties that seem "wondrous," I don't think they are a case of "getting something from nothing," as Goodwin says. I think that people tend to "project," "attribute," or anthropomorphize these kinds of emergent behaviors, and if this is what Goodwin calls "introducing a new dimension," then I agree with his objection that you can't (and shouldn't) do this.

But I don't think that adopting a view of "pan-sentience," as Goodwin calls it, is either necessary or constructive (as an explanatory tool or model of reality).
[[Douglas Hofstadter|Douglas Hofstadter]] has a [[vivid example (called "leafishness")|./resources/Hofstadter-leafishness-diSessa.jpg]] of inventing such an "entity" (i.e., "pan-sentience"), and it's quite striking to compare and contrast the two views.
__Summary:__
Like many people, I once . . . imagined there were real boundaries between the natural and the artificial, between one species and another, and thought that with the advent of genetic engineering we would be tinkering with life at our peril. I now believe that this romantic view of nature is a stultifying and dangerous mythology. 

''Harris writes:''
The fossil record suggests that individual species survive, on average, between one million and ten million years. The concept of a "species" is misleading, however, and it tempts us to think that we, as Homo sapiens, have arrived at some well-defined position in the natural order. The term "species" merely designates a population of organisms that can interbreed and produce fertile offspring; it cannot be aptly applied to the boundaries between species (to what are often called "intermediate" or "transitional" forms). There was, for instance, no first member of the human species, and there are no canonical members now. Life is a continuous flux. Our nonhuman ancestors bred, generation after generation, and incrementally begat what we now deem to be the species Homo sapiens: ourselves. There is nothing about our ancestral line or our current biology that dictates how we will evolve in the future. Nothing in the natural order demands that our descendants resemble us in any particular way. Very likely, they will not resemble us. We will almost certainly transform ourselves, likely beyond recognition, in the generations to come.

''My thoughts/comments:''
I agree with Harris that "species," like other concepts we coin, is somewhat arbitrary: it is our way to "put a grid on top of reality," which by its nature is in flux and has no clear boundaries.

''Harris writes:''
But what is the alternative to our taking charge of our biological destiny? Might we be better off just leaving things to the wisdom of nature? I once believed this. But we know that nature has no concern for individuals or for species. Those that survive do so despite her indifference. While the process of natural selection has sculpted our genome to its present state, it has not acted to maximize human happiness, nor has it necessarily conferred any advantage upon us beyond the ability to raise the next generation to child-bearing age.

''My thoughts/comments:''
I don't think we should either "leave things to the wisdom of nature" or "take charge of our biological destiny": it's not black and white, and the risks and rewards are potentially enormous, so they should be carefully and deeply considered.
I find it striking that Harris acknowledges that things are in flux and that nothing dictates how we will evolve in the future (I agree, and in my mind it's both because there are no guides/monitors/gods/nature/powers/etc., and because there is no way to know/predict/foretell the future), and yet he boldly states that we should take charge of our own destiny(!). How can you do that if you don't know what it is? And if you agree that you don't know, wouldn't it be advisable/prudent to proceed cautiously, with a lot of small steps and refinements?

''Harris writes:''
But our environment and our needs (to say nothing of our desires) have changed radically in the meantime. We are in many respects ill-suited to the task of building a global civilization. This is not a surprise. From the point of view of evolution, much of human culture, along with its cognitive and emotional underpinnings, must be epiphenomenal. Nature cannot "see" most of what we are doing, or hope to do, and has done nothing to prepare us for many of the challenges we now face.
...Considering humanity as a whole, there is nothing about natural selection that suggests our optimal design. We are probably not even optimized for the Paleolithic, much less for life in the twenty-first century. And yet we are now acquiring the tools that will enable us to attempt our own optimization. Many people think this project is fraught with risk. But is it riskier than doing nothing? There may be current threats to civilization that we cannot even perceive, much less resolve, at our current level of intelligence. Could any rational strategy be more dangerous than following the whims of nature? This is not to say that our growing capacity to meddle with the human genome couldn't present some moments of Faustian overreach. But our fears on this front must be tempered by a sober understanding of how we got here. Mother Nature is not now, nor has she ever been, looking out for us.

''My thoughts/comments:''
Again, we shouldn't look at this as all-or-nothing. "Meddling with the human genome" is something that is on our evolutionary path (by definition!), but it should be done carefully and wisely, not out of fear/desperation that, since "mother nature is not looking out for us," we should accept "Faustian overreaches".
Andy Clark is a philosopher and cognitive scientist, University of Edinburgh; author, //Supersizing the Mind: Embodiment, Action, and Cognitive Extension//

From [[my previous exposure to Clark|Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence]], it looks like he is interested in "extensions to the human mind" and what he calls "intelligence amplifiers". This is definitely an area that excites me, and what I see as very relevant to the area of [[Human Performance Support|Human Performance Support]].

As [[I had said elsewhere|http://ldtprojects.stanford.edu/~hmark/]]:
>There are opportunities at which timely access to relevant and multi-faceted knowledge and perspectives can be very powerful.
>I strongly believe that technology affords and sometimes even creates these opportunities, and as such, is a great enabler of learning, and a powerful booster of human potential, significantly enhancing both knowledge and skills, propelling us beyond our natural, evolutionary trajectory. 

Clark is asking: if the question is "is the internet changing the nature of our thinking?", i.e., is our thinking being (or will our thinking in the future be) radically transformed, ''how will we know?''
>Suppose we convince ourselves, by whatever means, that as far as the basic mode of operation of the brain goes, Internet experience is not altering it one whit. That supports a negative answer only if we assume that the routines that fix the "nature of human thinking" must be thoroughly biological: that they must be routines running within, and only within, the individual human brain. But surely it is this assumption that our experiences with the Internet (and with other "intelligence amplifiers" before it) most clearly call into question. Perhaps the Internet is changing "the way you think" by changing the circuits that get to implement some aspects of human thinking, providing some hybrid (biological and non-biological) circuitry for thought itself. This would be a vision of the Internet as a kind of worldwide supracortex. Since this electronic supracortex patently does not work according to the same routines as, say, the neocortex, an affirmative answer to our target question seems easily in the cards.

But he cautions us not to jump to conclusions: "this may turn out to be a deep illusion".
>For perhaps one way to motivate an answer is to look for deep and systematic variation in human performances in various spheres of thought. But even if we find such variation, those who think that our "ways of thinking" remain fundamentally unaltered can hold their ground by stressing that the basic mode of neural operation is unaltered and has remained the same for (at least) tens of thousands of years.
And he concludes:
>Deep down, I suspect that our two interrogative options (the trivial-sounding question about what we think and the deep-sounding one about the nature of our thinking) are simply not as distinct as the fans of either response (Yes, the Internet is changing the way we think / No, it isn't) might wish. But I don't know how to prove this. Dammit.

My sense is similar: I think that there is a ''bi-directional interaction and influence'' between //what// we think (the trivial/dumb interpretation of the question, per Clark), and //how// we think about what we think (the kinds of algorithms or computational recipes for solving problems). 
It reminds me of [[an example Andrea diSessa gives|resources/diSessa - Changing Minds - Chapter1.pdf]] about the __power of literacy__ to change the way we think:
One example is the Calculus, a new notation (Newton, Leibniz) that forever changed not only what we think but also how we think about change (infinitesimal deltas) and ratios of changes. And it changed not only how we think, but also who can think this way, and when: nowadays, high school students can think and express their thinking in ways that very sharp scientists (Galileo comes to mind) could not in the past.
Another example diSessa gives is how it took Galileo several pages of text to describe his idea about rates of change staying constant (in free fall, related to his experiment at the Tower of Pisa), and the fact that nowadays a high school student can express the //same ideas// in a few lines of "very simple" (high school) math.
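To make the compression concrete (my sketch, not diSessa's text): under the single assumption of constant acceleration g, Galileo's entire free-fall result fits in two lines of high-school kinematics:
{{{
v(t) = g t
s(t) = (1/2) g t^2
}}}
From the second line, the distances covered in successive equal time intervals come out in the ratio 1 : 3 : 5 : 7 ..., which is exactly Galileo's "odd-number rule", established in his day through pages of prose and geometric argument.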
It seems to me that the fact that a genius like Galileo had to struggle through pages of explanations in order to make //his ideas and way of thinking// clear (to himself and others), while today a high school kid can grasp, express, and use the same ideas, shows that "external intelligence amplifiers" can and do change the way we think. And if computation in general (and the Internet in particular) cannot serve as such an "amplifier", what can?
Thomas Metzinger is a philosopher; director of the Theoretical Philosophy Group at the Department of Philosophy of the Johannes Gutenberg-Universität Mainz; author of //The Ego Tunnel//

>Sure, for academics the Internet is a fantastic resource...Something that is changing us in our deepest core...
>But it's about much more than cognitive style alone. For those of us intensively working with it, the Internet has become a part of our self-model. We use it for external memory storage, as a cognitive prosthesis, and for emotional autoregulation. We think with the help of the Internet, and it helps us determine our desires and goals. Affordances infect us, subtly eroding the sense of control. We are learning to multitask, our attention span is becoming shorter, and many of our social relationships are taking on a strangely disembodied character. Some software tells us, "You are now friends with Peter Smith!" when we were just too shy to click the Ignore button.

>The core of the problem is not cognitive style but attention management. The ability to attend to our environment, our feelings, and the feelings of others is a naturally evolved feature of the human brain. Attention is a finite commodity, and it is absolutely essential to living a good life. We need attention in order to truly listen to others and even to ourselves. We need attention to truly enjoy sensory pleasures, as well as for efficient learning. We need it in order to be truly present during sex, or to be in love, or when we are just contemplating nature. Our brains can generate only a limited amount of this precious resource every day. Today the advertisement and entertainment industries are attacking the very foundations of our capacity for experience, drawing us into a vast and confusing media jungle, robbing us of our scarce resource in ever more persistent and intelligent ways. We know all that, but here's something we are just beginning to understand: The Internet affects our sense of selfhood, and it does so on a deep functional level.
__Summary:__ When reporters interviewed me in the 1970s and '80s about the possibilities for Artificial Intelligence I would always say that we would have machines as smart as we are within my lifetime. . . . I no longer believe that will happen.

!!!Schank has some good points about why our approach to AI has been "wrongheaded":
* Early AI workers sought out intelligent behaviors to focus on, like chess or problem solving, and tried to build machines that could equal human beings in those endeavors. While this was an understandable approach, it was, in retrospect, wrongheaded. Chess playing is not really a typical intelligent human activity. Only some of us are good at it, and it seems to entail a level of cognitive processing that, while impressive, seems quite at odds with what makes humans smart. Chess players are methodical planners. Human beings are not.
* We tend to not know what we know. We can speak properly without knowing how we do it. We don't know how we comprehend. We just do. All this poses a problem for AI. How can we imitate what humans are doing when humans don't know what they're doing when they do it?
* This conundrum led to a major failure in AI: expert systems, which relied on rules that were supposed to characterize expert knowledge. But the major characteristic of experts is that they get faster as they know more, whereas more rules made systems slower (a toy sketch of this slowdown follows this list). The realization that rules were not at the center of intelligent systems meant that the flaw was relying on specific, consciously stated knowledge instead of trying to figure out what people meant when they said they just knew it when they saw it, or they had a gut feeling. People give reasons for their behaviors, but they are typically figuring that stuff out after the fact. We reason non-consciously and explain rationally later.
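A toy illustration of that slowdown (my own sketch, not Schank's; all rule names and sizes are made up): a naive forward-chaining matcher tests every rule on every cycle, so matching time grows linearly with the size of the rule base:
{{{
import time

def make_rules(n):
    # Each toy "rule" is a (condition, conclusion) pair; the condition
    # just checks whether one fact string is present.
    return [(lambda facts, k=k: f"f{k}" in facts, f"g{k}") for k in range(n)]

def run_cycle(rules, facts):
    # Naive matching: every rule is tested on every cycle.
    return [concl for cond, concl in rules if cond(facts)]

facts = {f"f{i}" for i in range(0, 100000, 7)}
for n in (1000, 10000, 100000):
    rules = make_rules(n)
    t0 = time.perf_counter()
    run_cycle(rules, facts)
    print(f"{n:>6} rules: {time.perf_counter() - t0:.4f}s per match cycle")
}}}
Real production systems mitigate this with indexing schemes such as the Rete algorithm, but the contrast Schank points to stands: more rules made those systems slower, whereas human experts get //faster// as they know more.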

''My thoughts/comments:''
* Schank's point about picking [[chess playing|History of the chess table]] as a sign of intelligence shows how [[our understanding evolves over time|The end of an era, the beginning of another? HAL, Deep Blue and Kasparov]] as we acquire new knowledge/experience. Back in the day when computers could not play chess well (and very few people thought they ever would), it was a "worthwhile" challenge, and it embodied a certain image of ourselves as humans: analytical, rational, planning, etc. Nowadays, as Schank points out, technology has caught up (we "threw hardware at the problem": IBM's Deep Blue had 32 parallel 32-bit CPUs and 512 "chess processors," running at 1 trillion floating-point operations per second); it's no longer a big challenge, ''and'' our image of ourselves has evolved.
* [[Many people (neuro-scientists among them)|The chess mentality]] may still think that playing chess well is a human characteristic ([["the chess mentality... Humanity wouldn't be human (or humane) without it."|http://www.research.ibm.com/deepblue/learn/html/e.8.4.html]]). Schank is stating that "Chess playing is not really a ''typical'' intelligent human activity", and I tend to agree with him.
* It's interesting to compare this to a later challenge and project by IBM: a computer called Watson playing (and beating the "world master" of) Jeopardy. Is this "the new definition of intelligence"?
* The point about Expert Systems (ESs) codifying knowledge in the form of rules, where more "expertise" actually slows the ES down (counter to the human trend), reminds me of a similar train of thought (an epiphany, or at least a "warning sign" about a misguided direction) that Jeff Hawkins expresses in his excellent book ''On Intelligence'' (published in 2004). [[Hawkins wanted to apply to MIT and study/research AI in their graduate program|01 - Artificial Intelligence]], but the prevailing approach in that department raised a red flag for him (and the feeling was mutual: he wanted to study the human brain/mind and learn from it about (artificial) intelligence; they were not interested, and he was not admitted to the program).
* A popular misconception/assumption about chess, which got debunked, from a paper by Anders Ericsson [[The Making of an Expert|http://www.uvm.edu/~pdodds/files/papers/others/everything/ericsson2007a.pdf]]:
> Thirty years ago, two Hungarian educators, Laszlo and Klara Polgar, decided to challenge the popular assumption that women don't succeed in areas requiring spatial thinking, such as chess. They wanted to make a point about the power of education. The Polgars homeschooled their three daughters, and as part of their education the girls started playing chess with their parents at a very young age. Their systematic training and daily practice paid off. By 2000, all three daughters had been ranked in the top ten female players in the world. The youngest, Judit, had become a grand master at age 15, breaking the previous record for the youngest person to earn that title, held by Bobby Fischer, by a month. Today Judit is one of the world's top players and has defeated almost all the best male players.
From the [[book "Is the Internet Changing the Way You Think?"|Is the Internet Changing the Way You Think?]]
An interesting analogy between different ways of boat building and different learning (and performance) modes, by George Dyson, science historian and author of //Darwin Among the Machines//.

>In the North Pacific, there were two approaches to boatbuilding. The Aleuts and their kayak-building relatives lived on barren, treeless islands and built their vessels by piecing together skeletal frameworks from fragments of beachcombed wood. The Tlingit and their dugout-canoe-building relatives built their vessels by selecting entire trees out of the rain forest and removing wood until there was nothing left but a canoe.
>The Aleut and the Tlingit achieved similar results -- maximum boat, minimum material -- by opposite means. The flood of information unleashed by the Internet has produced a similar cultural split. We used to be kayak builders, collecting all available fragments of information to assemble the framework that kept us afloat. Now we have to learn to become dugout-canoe builders, discarding unnecessary information to reveal the shape of knowledge hidden within.
>I was a hardened kayak builder, trained to collect every available stick. I resent having to learn the new skills. But those who don't will be left paddling logs, not canoes.
As entirely different skills are needed to build a canoe vs. a kayak, so there are entirely different (or at least different enough) new skills to learn and practice for learning, performing, and thriving in the Internet-enabled world. It will be critical to teach/learn skills such as prioritizing, organizing, visualizing, analogizing, and __verifying__ when working with vast sets of data. These skills have always been important, but the lake has turned into an ocean (to play on the boating metaphor), and that makes at least a quantitative difference, and I think a qualitative one as well.
[[Sam Harris|http://en.wikipedia.org/wiki/Sam_Harris_%28author%29]] brings up a few significant ways in which the Internet has impacted him:
* Since more and more information and data are online, he relies more and more on the Internet to recall his own thoughts. Not only that, but if a debate he is involved in is videotaped (sorry for the old-fashioned term) and posted on YouTube, he watches the debate online 
>for my memory of what happened is often at odds with the later impression I form based upon seeing the exchange. Which view is closer to reality? I have learned to trust the YouTube version. In any case, it is the only one that will endure.
* He also uses the Internet to shamelessly plagiarize himself, by recycling content from lectures into op-eds, and from there into books quoted in new lectures, and __so the spiral goes...__
* He found that searching his "bitstream" reminds him "not only of what I used to know but also of what I never properly understood."

He concludes:
>I am by no means infatuated with computers. I do not belong to any social networking sites; I do not tweet (yet); and I do not post images to Flickr. But even in my case, an honest response to the Delphic admonition "Know thyself" already requires an Internet search.
I find myself in a similar situation: since "more stuff" is easily/instantly available and searchable, it enables much richer exploration and deeper introspection/reflection, as well as the generation of new insights, ideas, and connections, and, again, __so the spiral goes...__
__Summary:__ I've changed my mind about how to handle the homunculus temptation: the almost irresistible urge to install a "little man in the brain" to be the Boss, the Central Meaner, the Enjoyer of pleasures, and the Sufferer of pains.

!!Dennett describes top-down decomposition of a task into a hierarchical computer program
The AI programmer begins with an intentionally characterized problem, and thus frankly views the computer anthropomorphically: if he solves the problem he will say he has designed a computer that can [e.g.,] understand questions in English. His first and highest level of design breaks the computer down into subsystems, each of which is given intentionally characterized tasks; he composes a flow chart of evaluators, rememberers, discriminators, overseers and the like. These are homunculi with a vengeance. . . . Each homunculus in turn is analyzed into smaller homunculi, but, more important, into less clever homunculi. When the level is reached where the homunculi are no more than adders and subtractors, by the time they need only intelligence to pick the larger of two numbers when directed to, they have been reduced to functionaries "who can be replaced by a machine."

''My comments/thoughts:''
* This top-down decomposition shows, it seems to me, that at least in theory every task, even the most complex (and human?) one, can be broken into simple/basic operations. It thus links "intelligent behavior/tasks" to very basic/mechanical operations, without requiring "magic" (a soul?).
* This reminds me of a book on Cellular Automata (CA) [Steven Levy's book //Artificial Life: The Quest for a New Creation//] that showed how a simple Reduced Instruction Set Computer (RISC) can be constructed with Game-of-Life-like rules on a [[Game of Life|Cellular Automaton Rule 110]] two-dimensional grid (see the sketch below). Granted, a RISC does not show very intelligent behavior, ''and'' the CA needed to achieve even this behavior is pretty big, but it definitely shows the feasibility and validity of this approach.
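To give a feel for how little machinery sits at the bottom of such a construction, here is a complete Game of Life in a few lines of Python (a minimal sketch of the standard rules, not Levy's RISC itself):
{{{
from collections import Counter

def step(live):
    # live: set of (x, y) coordinates of live cells; returns the next generation.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1)
                     for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step iff it has 3 live neighbors,
    # or has 2 and is already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}  # the classic glider
for _ in range(4):
    glider = step(glider)
print(sorted(glider))  # the same shape, shifted one cell diagonally
}}}
Everything a pattern like Levy's RISC "computes" emerges from this one local rule applied uniformly everywhere -- the ultimate homunculus that is "no more than an adder."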
Alison Gopnik - Psychologist, University of California, Berkeley; author, //The Philosophical Baby: What Children's Minds Tell Us About Truth, Love, and the Meaning of Life//

Gopnik gives a thought-provoking analogy between the impact of the Internet, a recent technology claimed to "change everything," and another, less recent "technology" ([[she also relates to this elsewhere|The potential and dangers of new technologies - echoes of a recurring theme]]):
>My thinking has certainly been transformed in alarming ways by a relatively recent information technology, but it's not the Internet. I often sit for hours in the grip of this compelling medium, motionless and oblivious, instead of interacting with the people around me. As I walk through the streets, I compulsively check out even trivial messages (movie ads, street signs), and I pay more attention to descriptions of the world (museum captions, menus) than to the world itself. I've become incapable of using attention and memory in ways that previous generations took for granted.
>
>Yes, I know, reading has given man a powerful new source of information. But is it worth the isolation, or the damage to dialog and memorization that Socrates foresaw? Studies show, in fact, that we've become involuntarily compelled to read; I can't keep myself from decoding letters. Reading has even reshaped my brain: Cortical areas that once were devoted to vision and speech have been hijacked by print. Instead of learning through practice and apprenticeship, I've become dependent on lectures and textbooks. And look at the toll of dyslexia and attention disorders and learning disabilities, all signs that our brains were not designed to deal with such a profoundly unnatural technology.

Historically, reading was feared by some as "degrading our human abilities" (oration, memory, etc.; see [[Plato's/Socrates' opinions|Why writing (and the computer :) is a 'dangerous technology']]) and as promoting antisocial behavior (isolation, self-absorption, "curling up with a good book"). Yet we can't conceive of our lives without mastery of reading. So it will be interesting to see how the Internet impacts our lives, and whether the concerns and perceived threats really materialize.


BTW, Hermann Hesse wrote a [[thought provoking essay about three types of readers|Hermann Hesse on Three Types of Readers]] which is relevant to this.
>__Summary:__ I've come to reject the common SETI (search for extraterrestrial intelligence) wisdom that there must be millions of technology-capable civilizations within our "light sphere" (the region of the universe accessible to us by electromagnetic communication). 

Ray Kurzweil mentions [[Frank Drake's formula/equation|Interdisciplinary knowledge in an equation]] (as opposed to [[The Flake Equation]] :) for estimating the number of intelligent civilizations in a galaxy or in the universe. "Essentially, the likelihood of a planet evolving biological life that has created sophisticated technology is tiny, but there are so many star systems that there should still be many millions of such civilizations. Carl Sagan's^^1^^ analysis of the Drake formula concluded that there should be around a million civilizations ..."
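For reference (my addition, using the standard textbook factor names rather than quoting Kurzweil), the Drake equation is just a chain of multiplied estimates:
{{{
N = R* · f_p · n_e · f_l · f_i · f_c · L
}}}
where R* is the galaxy's rate of star formation, f_p the fraction of stars with planets, n_e the number of potentially habitable planets per such system, f_l, f_i, and f_c the fractions that go on to develop life, intelligence, and detectable technology, and L the lifetime of a communicating civilization. The "around a million civilizations" conclusions come from plugging optimistic guesses into the highly uncertain f terms.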

Yet we haven't detected the presence of any of these intelligent civilizations, hence the famous question [[Enrico Fermi|http://en.wikipedia.org/wiki/Enrico_Fermi]] raised: ''Where is everybody?'', also known as [[the Fermi Paradox|http://en.wikipedia.org/wiki/Fermi_paradox]] and [[possible solutions to the paradox|http://www.nss.org/resources/books/non_fiction/NF_023_whereiseverybody.html]]^^''2''^^.

''And Kurzweil's answer'':
>My own conclusion is that they don't exist. If it seems unlikely that we would be in the lead in the universe, here on the third planet of a humble star in an otherwise undistinguished galaxy, it's no more perplexing than the existence of our universe, with its ever so precisely tuned formulas to allow life to evolve in the first place.

This reminds me of a very vivid [[thought experiment involving coin flips|The "astonishing skills" of a coin flipper]], which demonstrates how a reversed way of thinking about cause and effect can lead to "astonishing" conclusions (see also [[the anthropic principle|http://en.wikipedia.org/wiki/Anthropic_principle]])^^3^^.

In the coin-flipping experiment, if you look at the winner with the perfect score without taking into account the way s/he was selected, then the winning streak looks absolutely amazing. But this is due to a flaw in the logic.
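A minimal simulation of that selection effect (my own sketch; the single-elimination setup is a hypothetical framing, not taken from the linked thought experiment's text): every "game" is a fair coin flip, yet someone always finishes with a perfect streak, by construction:
{{{
import random

def tournament(n_players=1024):
    # Single-elimination coin-flip tournament: by construction,
    # exactly one player must end with a perfect record.
    players = list(range(n_players))
    wins = dict.fromkeys(players, 0)
    while len(players) > 1:
        survivors = []
        for a, b in zip(players[0::2], players[1::2]):
            winner = a if random.random() < 0.5 else b
            wins[winner] += 1
            survivors.append(winner)
        players = survivors
    return players[0], wins[players[0]]

champ, streak = tournament()
print(f"player {champ} 'won' {streak} fair coin flips in a row")
# Seen in isolation, a 10-flip streak looks like 1-in-1024 "skill";
# seen with the selection procedure, it was guaranteed to happen to someone.
}}}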
Similarly, if you look at our civilization, its state of technology, and how we got here (much like looking at the "amazingly skilled" winner of the coin-flip experiment), you can draw the conclusion that there must be many other civilizations, either equally or far more advanced than we are, and that we should definitely hear/see them.
But then again: where are they?

In [[another article|http://www.kurzweilai.net/the-intelligent-universe]] Kurzweil states his reasons why SETI will fail to find other intelligent civilizations/beings:
!!!!Why SETI will fail
>I'll end with a comment about the SETI project. Regardless of the ultimate resolution of this issue of the speed of light (it is my speculation, and that of others as well, that there are ways to circumvent it; if there are ways, they'll be found, because intelligence is intelligent enough to master any mechanism that is discovered), I think the SETI project will fail. It's actually a very important failure, because sometimes a negative finding is just as profound as a positive finding, for the following reason: we've looked at a lot of the sky with at least some level of power, and we don't see anybody out there.
>
>The SETI assumption is that even though it's very unlikely that there is another intelligent civilization like we have here on Earth, there are billions of trillions of planets. So even if the probability is one in a million, or one in a billion, there are still going to be millions, or billions, of life-bearing and ultimately intelligence-bearing planets out there.
>
>If that's true, they're going to be distributed fairly evenly across cosmological time, so some will be ahead of us, and some will be behind us. Those that are ahead of us are not going to be ahead of us by only a few years. They're going to be ahead of us by billions of years. But because of the exponential nature of evolution, once we get a civilization that gets to our point, or even to the point of Babbage, who was messing around with mechanical linkages in a crude 19th century technology, it's only a matter of a few centuries before they get to a full realization of nanotechnology, if not femto and pico-engineering, and totally infuse their area of the cosmos with their intelligence. It only takes a few hundred years!
>
>So if there are millions of civilizations that are millions or billions of years ahead of us, there would have to be millions that have passed this threshold and are doing what I've just said, and have really infused their area of the cosmos. Yet we don't see them, nor do we have the slightest indication of their existence, a challenge known as the Fermi paradox. Someone could say that this "silence of the cosmos" is because the speed of light is a limit, therefore we don't see them, because even though they're fantastically intelligent, they're outside of our light sphere. Of course, if that's true, SETI won't find them, because they're outside of our light sphere.
>
>But let's say they're inside our light sphere, or that light isn't a limitation, for the reasons I've mentioned. Then perhaps they decided, in their great wisdom, to remain invisible to us. You can imagine that there's one civilization out there that made that decision, but are we to believe that this is the case for every one of the millions, or billions, of civilizations that SETI says should be out there?
>
>That's unlikely, but even if it's true, SETI still won't find them, because if a civilization like that has made that decision, it is so intelligent they'll be able to carry that out, and remain hidden from us. Maybe they're waiting for us to evolve to that point and then they'll reveal themselves to us. Still, if you analyze this more carefully, it's very unlikely in fact that they're out there.
>
>You might ask, isn't it incredibly unlikely that this planet, which is in a very random place in the universe and one of trillions of planets and solar systems, is ahead of the rest of the universe in the evolution of intelligence? Of course the whole existence of our universe, with the laws of physics so sublimely precise to allow this type of evolution to occur is also very unlikely, but by the anthropic principle, we're here, and by an analogous anthropic principle we are here in the lead. After all, if this were not the case, we wouldn't be having this conversation. So by a similar anthropic principle we're able to appreciate this argument.

----
^^1^^ An [[inspiring video clip|http://vimeo.com/channels/staffpicks/2822787]] with Sagan's voice-over, in which he does not actually refer to other civilizations, but rather talks about the uniqueness and preciousness of our human life on this [[pale blue dot|http://vimeo.com/channels/staffpicks/2822787]] (on Vimeo)
^^2^^ The book //Where Is Everybody? Fifty Solutions to the Fermi Paradox// by Stephen Webb. See also the [[book review by David Brandt-Erichsen|http://www.nss.org/resources/books/non_fiction/NF_023_whereiseverybody.html]] of the National Space Society.
^^3^^ See [[Nick Bostrom|http://www.nickbostrom.com/]] on the [[anthropic principle|http://www.anthropic-principle.com/?q=book/table_of_contents]]

In his wonderful book //Metamagical Themas// (written in 1985, pg. 415) [[Douglas Hofstadter]] writes:
>A //New Yorker// cartoon from a few years back illustrates the concept perfectly. It shows a fifty-ish man holding a photograph of himself, roughly ten years earlier. In that photograph, he is likewise holding a photograph of himself, ten years earlier than //that//. And on it goes, until eventually it "bottoms out" - quite literally - in a photograph of a bouncy baby boy in his birthday suit (bottom in the air). This idea of recursive photos catching you as you grow up is quite appealing.
I wish my parents had thought of it!

Contrast it with the more famous Morton Salt infinite regress, in which the Morton Salt girl holds a box of Morton Salt with her picture on it - but since the girl in the picture is no younger, the regress never "bottoms out" and is endless, at least in theory.
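The two regresses map neatly onto recursion with and without a base case. A minimal Python sketch (the function names and strings are mine, of course):
{{{
# The recursive photos "bottom out" in a base case (the baby picture);
# the Morton Salt girl never gets younger, so her regress has no base case.

def photo_of(age):
    """Each photo contains the photo taken ten years earlier."""
    if age < 10:                         # base case: the baby picture
        return "baby picture (bottom in the air)"
    return f"photo at {age}, holding [{photo_of(age - 10)}]"

def morton_salt_girl():
    """No base case: the girl on the box is never any younger."""
    return f"girl holding a box showing [{morton_salt_girl()}]"

print(photo_of(50))     # bottoms out after five steps
# morton_salt_girl()    # endless regress: raises RecursionError, eventually
}}}
In practice even the Morton Salt regress ends somewhere - at the resolution of the printing, or, in code, at the interpreter's recursion limit.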

Arthur C. Clarke wrote a 31-word (thirty-one words!) short story^^1^^ called "siseneG," as in "Genesis" reversed. As of March 1984, it was the only one he'd written in nearly ten years^^2^^. The tale, in its entirety:
>
>    And God said: DELETE lines One to Aleph^^3^^. LOAD. RUN.
>    And the Universe ceased to exist.
>
>    Then he pondered for a few aeons, sighed, and added: ERASE.
>    It never had existed.


----
^^1^^ - compare to [[Hemingway’s famous six-word tale|https://en.wikipedia.org/wiki/For_sale:_baby_shoes,_never_worn]], “For sale: baby shoes, never worn.” (this “six-word story”^^4^^/urban legend inspired [[a website dedicated to these stories|http://www.sixwordstories.net/]])

^^2^^ - Clarke accompanied this story with a letter/preface to his publisher:
>This is the only short story I've written in ten years or so.
>
>I think you'll agree that they don't come much shorter.
>
>(Signed, 'Arthur C Clarke')
>
>21 Mar 84

^^3^^ - [[Aleph|https://en.wikipedia.org/wiki/Aleph_number]] (presumably aleph-null, as in Aleph~~0~~) is the smallest infinite cardinal number (the size of the set of natural numbers), the first in an endless hierarchy of ever-larger infinities (Aleph~~1~~, Aleph~~2~~, and so on).

^^4^^ - I use Hemingway's anecdote as inspiration for students to program their own "Six Words (or more :) Story" (in [[Scratch|https://scratch.mit.edu]]) about a (hopefully interesting :) aspect of their lives.

^^5^^ - in case you are paying attention: these footnotes happen to be much longer^^6^^ than the 31-word Clarke story they are about :)

^^6^^ - but the footnotes here, fortunately, don't fall into the "trap"^^7^^ seen (experienced?) in footnotes in novels like //Infinite Jest//^^8^^ (by David Foster Wallace) and //House of Leaves// (by Mark Z. Danielewski), where they operate in entirely different ways. In //Infinite Jest//, the footnotes seem at first to function solely as universe-expanding background information. As the novel progresses, they become longer and more complex - eventually even the footnotes have footnotes - until we hit the infamous Footnote 324: seven pages of small type, the length of an entire chapter if printed in normal-sized font. The footnotes in //Infinite Jest// are so numerous and varied in content that some of them take on a wholly separate nature, becoming more or less a parallel narrative that tells its own story.

^^7^^ - "trap" is a loaded (and opinionated) word, so as a counter balance, here is a counter argument [[in praise of using footnotes|https://openscholarship.wustl.edu/cgi/viewcontent.cgi?article=1597&context=law_lawreview]] (and a [[local GD copy|https://drive.google.com/open?id=1pVIgtdD_iGJVzqC1y7vLJ-ckKiYp4-nL]]).

^^8^^ - OK, enough jest (not to be confused with //Infinite Jest//) already! :)