I recommend reading a-math-book-to-change-your-teaching, an article written by a fellow blogger about a book that simplifies the teaching of mathematics while making the subject more meaningful for students.

Excel file of puzzles and previous week’s factor solutions: 12 Factors 2014-08-25

We look forward to exploring with you. Thank you for stopping by. We are on Twitter @penngeometry (we share a Twitter feed with the Geometry Community at Pennfield).

Feel free to comment freely. This is an open forum, but please observe the house rules.

Author: David Elkins

“‘*First collect a mass of Facts; and then construct a Theory. That, I believe, is the true Scientific Method.*’ I sat up, rubbed my eyes, and began to accumulate Facts.” *Sylvie and Bruno*, Lewis Carroll

G’day, and an excellent Monday to you. This time, I’ll review the 3rd chapter, *Double spirals and Möbius maps*, describing those odd mathematical entities, bizarre non-Euclidean symmetries which can change a shape radically while leaving it essentially intact. These are known simply as *Möbius transformations*, after their developer, *August Ferdinand Möbius*, of strip fame (*ahem* Oh my, that didn’t come out quite right, let me rephrase that…)

…Their developer would be this entry’s co-protagonist, *August Ferdinand Möbius*, whose work on *Verwandtschaft*, or ‘*relationships*’ between objects, on topology, and on number theory, along with the least weighty but best known of his contributions, the famous Möbius band, cemented his place among the mathematicians of history.

Born in 1790, he was the director of Leipzig’s Pleissenburg observatory, starting in 1816 and lasting until his death in 1868. For a time he served as the University’s professor of mathematics, and earlier, as a student, he attended a few of Gauss’s astronomy lectures.

How interesting that so many early men of science had a broad range of interests, then again, mathematics is the language of science.

In a paper published in 1855, Möbius’s theory of transformations foreshadowed Felix Klein’s work on the concept of *groups*.

There are four general sorts of such transformations, so without getting too technical…

The first sort of transformation, the loxodrome, forms a double spiral pattern, whether on the complex number plane or the surface of a Riemann sphere. Think of it as the path a ship at sea would trace on a globe while keeping a constant bearing. “Loxodrome” comes from the Greek for “running obliquely.” It’s a navigational term with which those canny readers in professions involving sea travel will no doubt be familiar.

A loxodrome cuts each meridian of longitude at the same angle while winding from pole to pole, from an origin point usually assigned the value zero to an endpoint assigned the value infinity…

Zero to infinity…from one of my favorite mathematical concepts to the other! The zero point and the point at infinity are fixed points, and often serve as a source point whence the spirals originate, or as a sink point where they eventually end up on their twisting journey, but not always. That depends on the map.

Hyperbolic maps are a sort of loxodrome: they use the same methods of calculation and, like loxodromes, are a type of scaling map, but they generate circular paths rather than spirals.

Elliptic maps have a pair of neutral fixed points, neither source nor sink, around which circles move.

Parabolic maps have a single fixed point that serves as both source and sink.

There exists exactly one such transformation moving any three distinct points P, Q, and R to any other three distinct points P’, Q’, and R’. They can be weird, really weird, but useful in generating our fractal shapes, warping, stretching and twisting just so as to determine the nature of the plane or surface.

Giants see far by standing on each others’ shoulders…and so we pass through the gate to begin our quest. Overall, I found this chapter, like *A delightful fiction*, and *The language of symmetry*, to be rich in information, full of cool formulas and other number patterns to make use of. I’m involved in learning as many heuristics as I can, to apply them to new and different contexts, and there are many here.

Next fortnight, we’ll begin our review of chapter #4, *The Schottky dance*. See you then.

Dear Superintendents,

As part of ongoing efforts to support school administrative units in ensuring that teachers are led by highly effective building administrators, I am pleased to announce the release of the Maine DOE Principal Performance Evaluation and Professional Growth (PEPG) Model.

The Department has adopted as a State Principal PEPG Model the Administrator Evaluation Framework, which was prepared by the Auburn School Department in collaboration with the Department and the Maine Principals’ Association.

The collaborative group based the Administrator Evaluation Framework on the Principal Evaluation System developed by the MPA, attaching to the MPA system student learning and growth measures and other elements required in a PEPG model. A Quality Assurance Inventory prepared by the Department provides detailed evidence of the ways in which the Administrator Evaluation Framework meets the requirements of Rule Chapter 180.

The State Principal PEPG Model may be used by SAUs in one of four ways:

- A model to be voluntarily adopted in its entirety prior to June 1, 2015;
- A model to be adopted by default, in accordance with the requirements of Rule Chapter 180, by SAUs that are not able to complete the development of a model prior to June 1, 2015;
- A model to be adopted in part and merged with locally determined elements by SAUs prior to June 1, 2015; or
- A guide to local SAUs in developing and implementing a model.

One of the surest ways to support the success of our future graduates is through effective teachers and leaders. Many Maine schools are well on their way to implementing PEPG systems that facilitate professional growth and meaningful evaluations of teachers and principals. Supporting resources can be found on the Educator Effectiveness webpage.

Sincerely,

Jim Rier, Commissioner

Maine Department of Education

At the same time, I am puzzled by Green’s utter lack of skepticism over certain exemplars of pedagogy that she offers in the book. In saying this, I am not trying to disparage them. My point is only that they could use some critical questioning and examination–in the very spirit of the kind of lesson study that Green finds promising.

This is a preliminary review, with a focus on a particular passage (about a third-grade lesson) in the second chapter. I haven’t read the whole book yet (I read slowly and have been very busy), but I had so many thoughts about these few pages that I decided to start here.

The context: Deborah Loewenberg Ball, at the time a professor at Michigan State, a scholar of math pedagogy, and a teacher at the public school Spartan Village, was teaching her third-grade students about odd and even numbers. The lesson was one of many that she and her colleague Magdalene Lampert had filmed for close study and discussion. Just before this lesson, the fourth-graders had a conference with the third-graders in which they discussed their findings on the question: “Was zero even, odd, or, as some children argued, neither one?”

For this lesson, Ball intended to have the students move from conjectures to proofs about odd and even numbers. But something unexpected happens: a “tall boy named Sean” puts forth a surprising conjecture that six is both even and odd. His classmates then jump in to refute him. What follows is a lively but flawed discussion–flawed not because of the students’ insights, which are excellent, but because of the lack of attention to basic principles, such as the principle of identifying and building on one’s working definitions (or, in the absence of definitions, information leading up to them).

The problem throughout the entire passage is that we never learn whether the students have a working definition of *odd* numbers. This lack of information affects everything, as I will show. It seems that they have a working definition of *even* numbers–but at times they appear to confuse definitions with properties. Moreover, the working definition itself could be the cause of Sean’s confusion–but this possibility is not mentioned. More about all of this shortly.

Back to the conference: it is a brilliant idea to have fourth-graders present their findings to third-graders. This gives the fourth-graders a chance to teach others what they have learned, and it gives the third-graders a glimpse of knowledge and insights that lie ahead. In addition, a conference on zero is a great idea; there’s much to explore about zero. Yet I fail to see why the question of zero’s odd, even, or other status merits a conference (even a short one). If the students have a viable definition of odd and even numbers, they can immediately rule out the possibility that zero is odd. (If they do not have working definitions, then they have no way of discussing the question anyway.) Then, if the students have a viable definition of even numbers, they can see (without a great amount of trouble) that zero meets the criteria. One stumbling block might be the concept of dividing zero in two. Some students might think that can’t be done. So, that would be the meat of the discussion, but it’s easily digestible. There isn’t much gristle here.

The students themselves don’t seem to be clear about their working definitions, or whether or not they have them. After Sean has spoken, Cassandra goes up to the board to refute him. She says that six can’t be an odd number, because zero is even, one odd, two even, and so on up to six, which must be even.

Green comments on the reactions of the mathematician Hyman Bass as he watches the video.

Hy marveled as the video continued. These third-graders–not a gifted class, but average, public school third-graders from, Deborah said, a wide range of backgrounds and ability levels–were having a real mathematical debate. One of them had made a claim, and then the others were trying to prove him wrong. Cassandra’s proof followed a classic structure. First, she had invoked one definition of even and odd–the fact that integers alternate between the two types on a number line–to show that six could only be even. Then she had drawn out a counterargument. To be odd and still fit the alternating definition, she’d shown, zero would have to be odd too. But, she’d concluded with a flourish, they had just decided the other day that zero was even. QED: Sean’s conjecture was impossible.

The two descriptions of Cassandra’s words and actions don’t match–the second is much more sophisticated than the first–but that’s only a secondary problem. The bigger problem lies in the notion that “the fact that integers alternate between the two types on a number line” could be called a definition. To me, this appears as a property, not a definition. It makes sense that the students would be working *from* properties *to* definitions–but it’s essential to point out the difference.

The same confusion arises a couple of pages earlier, in a footnote regarding the evenness of zero: “Like all even numbers, zero can be divided evenly by 2, is surrounded on either side by odd numbers, and when it is subtracted from an even number, produces an even result.” Only the first of these qualifies as a definition, and it alone is necessary.

The discussion goes on. Apparently the students do have a definition of even numbers: one girl, Jeannie, reminds them that an even number is “one that you can split up evenly without having to split one in half.” If this is indeed the working definition, then it seems possible (though it never gets mentioned as a possibility) that Sean’s confusion arises directly from this wording, particularly the word “evenly.” (His own explanation of his reasoning seems to proceed from such a misunderstanding.) He may have taken this definition to mean that a number is even if it can be divided into even numbers–a circular definition, but one that “evenly” seems to invite. In that case, there’s more to say about Sean’s conjecture. More about that in a minute.

Now another student, Mei, makes a great argument: by Sean’s reasoning, it could turn out that all numbers were both odd and even, in which case “we wouldn’t be even having this discussion!”

What Mei suggests here–but no one brings out–is that they have been working with the premise that a number is odd or even, but not both. If that is indeed one of their working premises, then it should be on the table. If it isn’t, then I wonder how they conceive of odd numbers in the first place.

I admire Mei’s energy and logic, but I feel bad for the student who has been sitting there quietly–who gets odd and even numbers and yearns to move on. I also feel bad for the student who has no idea at this point what has been established and what hasn’t.

To draw something helpful–and fascinating–out of this discussion, the teacher only had to remind the students to go back to their working definitions (and distinguish them from properties). This is important mathematical practice. One has to return to working definitions continually. Sometimes they come up for questioning. Sometimes a definition may prove flawed, or it may need better phrasing. But one must be clear about what the definitions are.

If, as I suspect, Sean thought that a number was even if it was divisible into even numbers, then the teacher could have clarified the meaning of “evenly” (and even elicited a rewording of the definition).

Then, to take up Sean’s idea (which is actually very interesting), she could have asked: Which numbers are divisible into even numbers *only* (assuming one does not treat 1 or -1 as a factor)? Students would notice that the positive integers in this set were 2, 4, 8, 16, …; in other words (though they wouldn’t have the vocabulary for this yet), 2 raised to the powers 1, 2, 3, 4, etc.
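To make the pattern concrete, here is a small sketch in Python (my choice of language for illustration; the function names are mine, not part of the lesson) checking that the positive integers whose divisors greater than 1 are all even are exactly the powers of 2:

```python
def only_even_divisors(n):
    """True if every divisor of n greater than 1 is even."""
    return all(d % 2 == 0 for d in range(2, n + 1) if n % d == 0)

def is_power_of_two(n):
    """True for 2, 4, 8, 16, ... (exactly one bit set)."""
    return n > 1 and n & (n - 1) == 0

hits = [n for n in range(2, 100) if only_even_divisors(n)]
print(hits)  # [2, 4, 8, 16, 32, 64]
assert hits == [n for n in range(2, 100) if is_power_of_two(n)]
```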

Many interesting things happen in the lesson–but the confusion over definitions and properties prevents the discussion from moving forward. For this reason, I do not share Green’s amazement, though I am grateful to the lesson (and to Green’s description) for stirring up some thoughts.

*Note: I made some minor edits to this piece after posting it. Also, on 8/26/2014 I added one parenthetical sentence.*

- “Scientific study suggests celery causes cancer”

- “Beetroot found to boost brain activity by 123%”

Etc. Why is a British taxpayer on the hook for this kind of drivel? Such hyperbole is far more suited to the tabloids.

An article today suggested that “Manufacturing wages [are] on the rise,” citing an EEF survey.

http://www.bbc.co.uk/news/business-28923749

This survey is based on 68,000 jobs. There are approximately 2.4 million manufacturing jobs in the UK, so, on the strength of a roughly 3% sample, the BBC is encouraging us to make some heroic inferences. There’s a further question mark over the EEF: what are their motivations?

According to the EEF website:

*“The EEF is dedicated to the future of manufacturing. Everything we do is designed to help manufacturing businesses evolve, innovate and compete in a fast-changing world.”*

And so on. Their interests clearly lie in the ongoing health of UK manufacturing. Fostering optimism towards this sector is therefore very much an important part of their agenda. How about a dollop of bias to go with those terrible statistics?

However, after solving some problems for this week, I concluded that I need a firmer background in mathematics, especially in probability. My mind is used to thinking in procedural ways, and that’s not good for understanding randomized algorithms.

Since yesterday, I have been solving exercises from Lehman’s lecture notes. After finishing them, I think I should look through the proofs of the randomized algorithms again.

We understand an order to be a kind of relation. A set is ordered when a certain type of relation holds over it. To return to the people in the line at the bank, we see that the line is ordered because the people are related to each other in a certain way: each person’s position in the line depends on the relative arrival times of the people in the line.

So, before I define what an order is, I’ll define what a relation is. And because relations are defined in terms of Cartesian products, I’ll define Cartesian products first of all.

The *Cartesian product *of sets A and B is the set of all ordered pairs (a, b) such that a belongs to A and b belongs to B. The Cartesian product of A and B is denoted A x B.

For example, let A = {1, 2} and B = {3, 4}. Then A x B is {(1, 3), (1, 4), (2, 3), (2, 4)}.
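In Python (my choice of illustration language, not part of the original), the same product can be computed with the standard library:

```python
from itertools import product

A = {1, 2}
B = {3, 4}

# A x B: every ordered pair (a, b) with a in A and b in B
cartesian = set(product(A, B))
print(cartesian == {(1, 3), (1, 4), (2, 3), (2, 4)})  # True
```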

A is a *subset* of B if every element of A is an element of B. A *relation* from A to B is a subset of A x B.

For example, again let A = {1, 2} and B = {3, 4}. Then {(1, 3), (2, 4)} is a relation from A to B. So are {(1, 4), (2, 3)} and {(1, 3), (1, 4)}. As an exercise, find the remaining relations.

Incidentally, a *function* from A to B is a relation from A to B where every element of A is associated with one and only one element of B. In our example, {(1, 3), (2, 4)} and {(1, 4), (2, 3)} are functions from A to B, because every element of A is associated with one and only one element of B. {(1, 3), (1, 4)} is not a function from A to B, because 2 is an element of A but isn’t associated with an element of B and because 1 is associated with more than one element of B.
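On finite sets, the “one and only one” condition can be tested mechanically. A Python sketch (the helper name is mine):

```python
def is_function(relation, A):
    """True if every element of A is paired with exactly one partner."""
    return all(sum(1 for (a, _) in relation if a == x) == 1 for x in A)

A = {1, 2}
assert is_function({(1, 3), (2, 4)}, A)      # a function
assert is_function({(1, 4), (2, 3)}, A)      # a function
assert not is_function({(1, 3), (1, 4)}, A)  # 1 repeated, 2 missing
```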

If a and b are elements and R is a relation, then we write “aRb” to indicate that (a, b) belongs to R.

A *binary relation* on a set S is a subset of S x S.

A binary relation R on a set S is *reflexive* if for any a in S, aRa. For example, the equality relation is reflexive. For all real numbers a, a = a. So, 3 = 3, 6 = 6, 10 = 10, etc., etc., etc.

A binary relation R on a set S is *transitive* if for all a, b, and c in S, if aRb and bRc, then aRc. For example, divisibility is a transitive relation. If a divides b and b divides c, then a divides c. So, as 2 divides 6 and 6 divides 12, 2 divides 12.
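For finite sets, both properties are easy to check mechanically. A Python sketch (helper names mine), using divisibility on {1, …, 12} as the running example:

```python
def is_reflexive(R, S):
    return all((a, a) in R for a in S)

def is_transitive(R, S):
    # For every chain a R b and b R c, require a R c.
    return all((a, c) in R
               for (a, b) in R
               for (b2, c) in R if b2 == b)

S = set(range(1, 13))
divides = {(a, b) for a in S for b in S if b % a == 0}
print(is_reflexive(divides, S), is_transitive(divides, S))  # True True
```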

A *preorder* is a binary relation that is reflexive and transitive. For example, reachability is a preorder:

- For any location a, a is reachable from a.
- For all locations a, b, and c, if a is reachable from b and b is reachable from c, then a is reachable from c.

So, for example, you can reach Manhattan from Manhattan, and if you can reach Manhattan from Chicago and you can reach Chicago from Los Angeles, then you can reach Manhattan from Los Angeles.

A binary relation R on a set S is *symmetric* if for all a and b in S, if aRb, then bRa. For example, being married is a symmetric relation. So, if Sue is married to Tom, then Tom is married to Sue.

An *equivalence relation* is a preorder that is symmetric. For example, equivalency is, unsurprisingly, an equivalence relation:

- Each thing is equivalent to itself.
- If a is equivalent to b and b is equivalent to c, then a is equivalent to c.
- If a is equivalent to b, then b is equivalent to a.

The *equivalence class* of an element a in S is the set of all elements in S that are equivalent to a.
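As a sketch of how an equivalence class can be computed on a finite set (Python; the relation “same remainder mod 3” is my example, not the author’s):

```python
S = set(range(9))
# "Has the same remainder mod 3" is reflexive, transitive, and symmetric.
R = {(a, b) for a in S for b in S if a % 3 == b % 3}

def equivalence_class(a, S, R):
    """All elements of S related to a under R."""
    return {b for b in S if (a, b) in R}

print(equivalence_class(4, S, R))  # {1, 4, 7}
```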

A binary relation R on a set S is *antisymmetric* if for all a and b in S, if aRb and bRa, then a = b. For example, the divisibility relation is antisymmetric. If a divides b and b divides a, then a = b.

A *partial order* is a preorder that is antisymmetric. For example, >= is a partial order on the real numbers:

- For any real number a, a >= a.
- For all real numbers a, b, and c, if a >= b and b >= c, then a >= c.
- For all real numbers a and b, if a >= b and b >= a, then a = b.

If a set has a partial order defined on it, then it is a *partially ordered set*. Partially ordered sets are often called *posets* for short.

A binary relation R on a set S is *total* if for all a and b in S, aRb or bRa.

A binary relation R on a set S is *connected* if for all a and b in S, if a and b are distinct, then aRb or bRa.

A total relation is a connected relation that is reflexive.

A *total order* is a binary relation that is total, transitive, and antisymmetric. For example, >= is a total order on the real numbers:

- For all real numbers a and b, a >= b or b >= a.
- For all real numbers a, b, and c, if a >= b and b >= c, then a >= c.
- For all real numbers a and b, if a >= b and b >= a, then a = b.

If a set has a total order defined on it, then it is a *totally ordered set*.

All total orders are partial orders: a total order is transitive and antisymmetric by definition, and totality implies reflexivity. That is, any total relation is also reflexive, since taking b = a in the definition of totality yields aRa.

All total orders are partial orders, but not all partial orders are total orders. Totality is the difference between partial orders and total orders.

Because all total orders are partial orders and all partial orders are preorders, all total orders are preorders. This is an instance of the fact that the subset relation is itself a partial order (in particular, it is transitive).
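The hierarchy can be checked mechanically on finite examples. A Python sketch (helper names mine) contrasting >= (a total order) with divisibility (a partial order that is not total) on {1, 2, 3, 4}:

```python
def is_antisymmetric(R):
    return all(a == b for (a, b) in R if (b, a) in R)

def is_total(R, S):
    return all((a, b) in R or (b, a) in R for a in S for b in S)

S = {1, 2, 3, 4}
ge = {(a, b) for a in S for b in S if a >= b}           # total order
divides = {(a, b) for a in S for b in S if b % a == 0}  # partial order only

assert is_antisymmetric(ge) and is_total(ge, S)
assert is_antisymmetric(divides) and not is_total(divides, S)
# Totality implies reflexivity: take b = a, so (a, a) must be in R.
assert all((a, a) in ge for a in S)
```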

180. This classification, which aims to base itself on the principal affinities of the objects classified, is concerned not with all possible sciences, nor with so many branches of knowledge, but with sciences in their present condition, as so many businesses of groups of living men. It borrows its idea from Comte’s classification; namely, the idea that one science depends upon another for fundamental principles, but does not furnish such principles to that other. It turns out that in most cases the divisions are trichotomic; the First of the three members relating to universal elements or laws, the Second arranging classes of forms and seeking to bring them under universal laws, the Third going into the utmost detail, describing individual phenomena and endeavoring to explain them. But not all the divisions are of this character.

The classification has been carried into great detail; but only its broader divisions are here given.

181. All science is either,

- A. Science of Discovery;
- B. Science of Review; or
- C. Practical Science.

182. By “science of review” is meant the business of those who occupy themselves with arranging the results of discovery, beginning with digests, and going on to endeavor to form a philosophy of science. Such is the nature of Humboldt’s *Cosmos*, of Comte’s *Philosophie positive*, and of Spencer’s *Synthetic Philosophy*. The classification of the sciences belongs to this department.

183. Science of Discovery is either,

- I. Mathematics;
- II. Philosophy; or
- III. Idioscopy.

184. Mathematics studies what is and what is not logically possible, without making itself responsible for its actual existence. Philosophy is *positive science*, in the sense of discovering what really is true; but it limits itself to so much of truth as can be inferred from common experience. Idioscopy embraces all of the special sciences, which are principally occupied with the accumulation of new facts.

185. Mathematics may be divided into

- *a*. the Mathematics of Logic;
- *b*. the Mathematics of Discrete Series;
- *c*. the Mathematics of Continua and Pseudo-continua.

I shall not carry this division further. Branch *b* has recourse to branch *a*, and branch *c* to branch *b*.

186. Philosophy is divided into

- *a*. Phenomenology;
- *b*. Normative Science;
- *c*. Metaphysics.

Phenomenology ascertains and studies the kinds of elements universally present in the phenomenon; meaning by the *phenomenon*, whatever is present at any time to the mind in any way.

Normative science distinguishes what ought to be from what ought not to be, and makes many other divisions and arrangements subservient to its primary dualistic distinction.

Metaphysics seeks to give an account of the universe of mind and matter.

Normative science rests largely on phenomenology and on mathematics; metaphysics on phenomenology and on normative science.

(Peirce, CP 1.180–186, EP 2.258–259, Online)

- Pp. 5–9 of *A Syllabus of Certain Topics of Logic*, 1903, Alfred Mudge & Son, Boston, bearing the following preface: “This syllabus has for its object to supplement a course of eight lectures to be delivered at the Lowell Institute, by some statements for which there will not be time in the lectures, and by some others not easily carried away from one hearing. It is to be a help to those who wish seriously to study the subject, and to show others what the style of thought is that is required in such study. Like the lectures themselves, this syllabus is intended chiefly to convey results that have never appeared in print; and much is omitted because it can be found elsewhere.”
- Peirce, C.S., *Collected Papers of Charles Sanders Peirce*, vols. 1–6, Charles Hartshorne and Paul Weiss (eds.), vols. 7–8, Arthur W. Burks (ed.), Harvard University Press, Cambridge, MA, 1931–1935, 1958. *Volume 1: Principles of Philosophy*, 1931.
- Peirce Edition Project (eds., 1998), *The Essential Peirce, Selected Philosophical Writings, Volume 2 (1893–1913)*, Indiana University Press, Bloomington and Indianapolis, IN.

Here’s something interesting. All palindromic primes, with the exception of 11, have an odd number of digits. This is because every palindrome with an even number of digits is a multiple of 11: by the alternating-sum divisibility test for 11, an even-digit palindrome’s alternating digit sum is zero, since each digit cancels against its mirror image.

Here are some examples:

Though all even-digit palindromes, with the exception of 11, are composite, not all odd-digit palindromes are prime. Numbers such as 121 and 13431 are also multiples of 11.
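The claims are easy to verify by brute force. A Python sketch (function names mine) over palindromes below one million:

```python
def is_palindrome(n):
    return str(n) == str(n)[::-1]

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Palindromes with an even number of digits, up to six digits.
even_digit = [n for n in range(10, 10 ** 6)
              if is_palindrome(n) and len(str(n)) % 2 == 0]

assert all(n % 11 == 0 for n in even_digit)            # all multiples of 11
assert [p for p in even_digit if is_prime(p)] == [11]  # 11 is the only prime
assert not is_prime(121) and not is_prime(13431)       # odd-digit, composite
```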

*For more fun with numbers click here.*

Consider one mutation site at which fifty-two different mutations are possible. An analogy would be a playing card.

(1) Let each of the fifty-two different mutations be generated deliberately and one mutation be selected randomly, discarding the rest.

(2) Let fifty-two mutations be generated randomly and the selection of a specified mutation, if generated, be deliberate, discarding the rest.

These two algorithms are broadly the same. They present a proliferation of mutations followed by its reduction to a single mutation. They differ in whether the proliferation is identified as random or non-random and whether the reduction to a single mutation is identified as random or non-random.

Apply these two mathematical algorithms analogically to playing cards.

For the evolution of the Ace of Spades, the first algorithm would begin with a deck of fifty-two cards followed by selecting one card at random from the deck. If it is the Ace of Spades, it is kept. If not, it is discarded. The probability of evolutionary success would be 1/52 = 1.9%.

For the evolution of the Ace of Spades by the second algorithm, fifty-two decks of cards would be used to select randomly one card from each deck. The resulting pool of fifty-two cards would be sorted, discarding all cards except for copies of the Ace of Spades, if any. The probability of evolutionary success would be 1 – (51/52)^52 = 63.6%.

The probability of success of the second algorithm can be increased by increasing the number of random mutations generated. If 118 mutations are generated randomly, the probability of this pool’s containing at least one copy of the Ace of Spades is 90%.
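The two probabilities quoted above can be checked directly. A Python sketch (the function name is mine), with a small Monte Carlo run as a sanity check:

```python
import random

def p_at_least_one(draws, p=1 / 52):
    """Probability of at least one Ace of Spades in `draws` independent draws."""
    return 1 - (1 - p) ** draws

print(round(p_at_least_one(52), 3))   # 0.636, the fifty-two-deck case
print(round(p_at_least_one(118), 3))  # about 0.9, matching the 118-mutation figure

# Monte Carlo: 52 independent draws per trial, card 0 standing for the Ace of Spades.
random.seed(0)
trials = 20_000
hits = sum(any(random.randrange(52) == 0 for _ in range(52))
           for _ in range(trials))
print(hits / trials)  # close to 0.636
```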

Notice that, of the two processes, the generation of mutations and their differential survival, one is arbitrarily represented as mathematically random and the other is arbitrarily represented as mathematically non-random.

Also, notice that in the material analogy of the mathematics, the analog of randomness is human ignorance and lack of knowledgeable control. In its materiality, ‘random selection’ of a playing card is a natural, scientifically delineable, non-random, material process.

In the mathematics of probability, random selection is solely a logical relationship of logical elements of logical sets. It is only analogically applied to material elements and sets of material elements. The IDs of the elements are purely nominal. Measurable properties, which are the subject of science, and which are implicitly associated with the IDs, are completely irrelevant to the mathematical relationships. A set of seven elements consisting of four sheep and three roofing nails has the exact same mathematical relationships of randomness and probability as a set consisting of four elephants and three sodium ions.

In the logic of the mathematics of probability, the elementary composition of sets is arbitrary. The logic does not apply to material things as such because the IDs of elements and the IDs of sets can only be nominal due to the logical relationships defined by the mathematics. This is in contrast to the logic of the syllogism in which the elementary composition of sets is not arbitrary. The logic of the syllogism does apply to material things, but only if the material things are not arbitrarily assigned as elements to sets, but are assigned as elements to sets according to their natural properties, which properties are irrelevant to the mathematics of probability. The logic of the syllogism applies to material things, if the IDs are natural rather than nominal.

Charles Darwin published *The Origin of Species* in 1859. Meiosis, which is essential to the detailed modern scientific knowledge of genetic inheritance, was discovered in 1876 by Oscar Hertwig. In the interim, Gregor Mendel applied mathematical probability as a tool of ignorance of the details of genetics to the inheritance of flower color in peas. The conclusion was not that the material processes of genetics are random. The conclusion was that the material processes involved binary division of genetic material in parents and its recombination in their offspring. The binary division and recombination are now known in scientific detail as meiosis and fertilization.

The mathematics of randomness and probability, which can be applied only analogically to material, serves as a math of the gaps in the scientific knowledge of the details of material processes.

Consider the following two propositions. Can both be accepted as compatible, as applying the mathematics of randomness and probability optionally to one process or the other? Can either be rejected as scientifically untenable in principle, without rejecting the other by that same principle?

(I) The generation of biological mutations is random, while their differential survival is due to natural, non-random, scientifically delineable, material processes.

(II) The generation of biological mutations is due to natural, non-random, scientifically delineable, material processes, while their differential survival is random.

My detailed answers are contained in the essay, “The Imposition of Belief by Government”, *Delta Epsilon Sigma Journal*, Vol. LIII, pp. 44–53 (2008). My answers are also readily inferred from the context in which I have presented the questions in this essay.

**A Precise Definition of Knowledge**

**Knowledge Representation as a Means to Define the Meaning of Meaning Precisely**

Copyright © Carey G. Butler

August 24, 2014

*What is this video about?*

In this introductory video I would like to explain what a knowledge representation is and how to build and apply one. There are basically three phases involved in the process of building a knowledge representation: acquisition of data (which includes staging), collation, and the representation itself. The collation and representation phases are mentioned here, but I will explain them further in future videos.

You are now watching a simulation of the acquisition phase as it collects and stores preliminary structure from the data it encounters, in terms of the vocabulary contained within that data. Acquisition is a necessary prerequisite for the collation phase that follows it, because the information it creates from the data is used by the collation algorithms, which then transform that information into knowledge.

The statistics you are seeing tabulated are only a small subset of those collected in a typical acquisition phase. Each of these counters is being updated in correspondence with the recognition coming from underlying parsers running in the background. Depending upon the computer resources involved in the acquisition, these parsers may even run concurrently, as is shown in this simulation.

The objects you see moving around in the video are of two different kinds: *knowledge fields* and *knowledge molecules*. Those nearest to you are the field representations of the actual data being collected.

Those farther away from the view are clusters of fields which have already coalesced into groups according to shared dynamically adaptive factors such as similarity, relation, ordinality, cardinality,…

These ‘molecules’ also contain their own set of signatures and may be composed of *a mixture of fields, meta-fields and hyper-fields that is unique among all others.* The collation phase has the job of assigning these molecules to their preliminary *holarchical domains*, which are then made visible in the resulting knowledge representation. Uniqueness is preserved even if they contain elements in common with others in the domain they occupy.

**We now need a short introduction to what knowledge representation is in order to explain why you’re seeing these objects here.**

**What is Knowledge Representation?**

Knowledge representation provides all of the ways and means necessary to reliably and consistently conceptualize our world. It helps us navigate landscapes of meaning without losing our way; however, navigational bearing isn’t the only advantage. Knowledge representation aids our recognition of what changes when we change our world or something about ourselves. It does so, because even our own perspective is included in the representation. It can even reveal to us when elements are missing or hidden from our view!

It’s important to remember that **knowledge representation is not an end, but rather a means or process** that makes explicit to us everything we already do with what we come to be aware of. A knowledge representation must be capable of representing knowledge such that it, like a book or other artifact, brings awareness of that knowledge to us. When we do it right, it actually perpetuates our understanding by providing a means for us to recognize, interpret (understand) and utilize how and what we know as it relates to itself and to us. In fact –

*What Knowledge is not!*

Knowledge is not very well understood, so I’ll briefly point out some of the reasons why we’ve been unable to precisely define what knowledge is thus far. Humanity has made numerous attempts at defining knowledge. Plato taught that justified, true belief is required for something to be considered knowledge. Throughout the history of the theory of knowledge (epistemology), others have done their best to add to Plato’s work or to create new or more comprehensive definitions in their attempts to ‘contain’ the meaning of meaning (knowledge). All of these efforts have failed for one reason or another. **Using truth value and justification as a basis for knowledge, or introducing broader definitions or finer classifications, can only fail.** I will now provide a small set of examples of why this is so.

**Truth value is only a value that knowledge may attain.** *Knowledge can be true or false, justified or unjustified, because knowledge is the meaning of meaning.* What about false or fictitious knowledge? Their perfectly valid structure and dynamics are ignored by classifying them as something other than what they are. Differences in culture or language make no difference, because the objects being referred to have meaning that transcends language barriers.

Another problem is that knowledge is often thought to be primarily semantics-based or even ontology-based! Neither of these can be true, for many reasons. In the first case (semantics): *there already exists knowledge structure and dynamics for objects we cannot or will not yet know.* The same is true for objects to which meaning has not yet been assigned, such as ideas, connections and perspectives that we’re not yet aware of or have forgotten. Their meaning is never clear until we’ve become aware of them or remember them.

In the second case (ontology): collations that are fed ontological framing are necessarily bound to memory, to initial conditions of some kind, and/or to association in terms of space, time, order, context, relation, … We build whole catalogs, dictionaries and theories about them! Triads, dyads, quints, ontology charts, neural networks, semiotics and even the current research in linguistics are examples. *Even if an ontology, or a set of them, attempts to represent intrinsic meaning, it can only do so in a descriptive (extrinsic) way.*

**An ontology, no matter how sophisticated, is incapable of generating the purpose of even its own inception, not to mention the purpose of objects to which it corresponds! The knowledge is not coming from the data itself, it’s always coming from the observer of the data – even if that observer is an algorithm!**

Therefore ontology-based semantic analysis can only produce the artifacts of knowledge, such as search results, association to other objects, ‘knowledge graphs’ like Cayley, … Real knowledge precedes, transcends and includes our conceptions, cognitive processes, perception, communication and reasoning, and is more than simply related to our capacity for acknowledgment. **In fact, knowledge cannot even be completely systematized; it can only be interacted with using ever-increasing precision!**

**What is knowledge then?**

• **Knowledge is what awareness does.**

• Awareness of some kind and at some level is the only prerequisite for knowledge and is the substrate upon which knowledge is generated.

• Awareness coalesces, interacts with and perpetuates itself in all of its form and function.

• Awareness which resonates (shares dynamics) at, near, or in some kind of harmony (even disharmony) with another tends to associate (disassociate) with that other in some way.

• These requisites of awareness hold true even for objects that are infinite or indeterminate.

• This is why knowledge, the meaning of meaning, can be precisely defined and even provides its own means for doing so.

• **Knowledge is, pure and simple: the resonance, structure and dynamics of awareness as it creates and discovers for and of itself.**

• Awareness precedes meaning and provides the only fundamentally necessary and sufficient basis for meaning of meaning expressing itself as knowledge.

• **Knowledge is the dialog between participants in awareness** – *even if that dialog appears to be only one-way, incoherent or incomplete.*

• Even language, mathematics, philosophy, symbolism, analogy, metaphor and sign systems can all be resolved to this common denominator found at the foundation of each and every one of them.

**More information about the objects seen:**

The objects on the surface of the pyramid correspond to basic structures denoting some of the basic paradigms that are being used to mine data into information and then collate that information into knowledge. You may notice that their basic structures do not change; only their content does. *These paradigms are composed of contra-positional fields that harmonize with each other so closely that they build complete harmonic structures.* Their function is similar to what proteins and enzymes do in our cells.

#Knowledge #Wisdom #Understanding #Learning #Insight #Semantics #Ontology #Epistemology #Philosophy #PhilosophyOfLanguage #PhilosophyOfMind #Cognition #OrganicIntelligence #ArtificialIntelligence #OI #AI

#Awareness

But a good part, and the most interesting part, of the book is about algorithms, the ways to solve complicated problems without demanding too much computing power. This is fun to read because it showcases the ingenuity and creativity required to *do* useful work. The need for ingenuity will never leave us — we will always want to compute things that are a little beyond our ability — but to see how it’s done for a simple problem is instructive, if for nothing else than to learn the kinds of tricks you can do to get the most out of your computing resources.

The example that most struck me and which I want to share is from the chapter on the IBM Automatic Sequence Controlled Calculator, built at Harvard at a cost of “somewhere near 3 or 4 hundred thousand dollars, if we leave out some of the cost of research and development, which would have been done whether or not this particular machine had ever been built”. It started working in April 1944, and wasn’t officially retired until 1959. It could store 72 numbers, each with 23 decimal digits. Like most computers (then and now) it could do addition and subtraction very quickly, at the then-blazing speed of about a third of a second; it could do multiplication tolerably quickly, in about six seconds; and division, rather slowly, in about fifteen seconds.

The process I want to describe is the taking of logarithms, and *why* logarithms should be interesting to compute takes a little bit of justification, although it’s implicitly there just in how fast calculations get done. Logarithms let one replace the multiplication of numbers with their addition, for a considerable savings in time; better, they let you replace the division of numbers with subtraction. They further let you turn exponentiation and roots into multiplication and division, which is almost always faster to do. Many human senses seem to work on a logarithmic scale, as well: we can tell that one weight is twice as heavy as the other much more reliably than we can tell that one weight is four pounds heavier than the other, or that one light is twice as bright as the other rather than is ten lumens brighter.
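That trade can be made concrete with a small sketch (my own illustration in Python, not anything from the book):

```python
import math

# Multiplication done as addition of logarithms, and division as
# subtraction -- the savings the logarithm buys you once the logs
# themselves are available (from a table, or a computation like the
# one this post builds toward).
a, b = 123.0, 456.0
product = math.exp(math.log(a) + math.log(b))    # ~ a * b
quotient = math.exp(math.log(a) - math.log(b))   # ~ a / b
print(product, quotient)
```

On a machine where addition takes a third of a second and multiplication six, doing the add-and-look-up dance instead of a direct multiply is a real saving — provided looking up the logarithms is cheap.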

What the logarithm of a number is depends on some other, fixed, quantity, known as the base. In principle any positive number will do as base; in practice, these days people mostly only care about base e (which is a little over 2.718), the “natural” logarithm, because it has some nice analytic properties. Back in the day, which includes when this book was written, we also cared about base 10, the “common” logarithm, because we mostly work in base ten. I have heard of people who use base 2, but haven’t seen them myself and must regard them as an urban legend. The other bases are mostly used by people who are writing homework problems for the part of the class dealing with logarithms. To some extent it doesn’t matter what base you use. If you work out the logarithm in one base, you can convert that to the logarithm in another base by a multiplication.
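The change of base really is just one multiplication by a fixed constant; a small check of my own (not from the book):

```python
import math

def log_base(x, base):
    """Logarithm of x in an arbitrary base, via natural logs: ln(x) / ln(base)."""
    return math.log(x) / math.log(base)

# Converting a natural log to a common (base-10) log multiplies by the
# fixed constant 1/ln(10), which is about 0.4343.
x = 2038.0
common = log_base(x, 10)
natural = math.log(x)
print(abs(common - natural * (1 / math.log(10))) < 1e-12)  # True, to floating-point precision
```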

The logarithm of some number in your base is the exponent to which you have to raise the base to get your desired number. For example, the logarithm of 100, in base 10, is going to be 2 because 10^{2} is 100, and the logarithm of e^{1/3} (a touch greater than 1.3956), in base e, is going to be 1/3. To dig deeper in my reserve of in-jokes, the logarithm of 2038, in base 10, is approximately 3.3092, because 10^{3.3092} is just about 2038. The logarithm of e, in base 10, is about 0.4343, and the logarithm of 10, in base e, is about 2.303. Your calculator will verify all that.

All that talk about “approximately” should have given you some hint of the trouble with logarithms. They’re only really *easy* to compute if you’re looking for whole powers of whatever your base is, and then only if your base is 10 or 2 or something else simple like that. If you’re clever and determined you can work out, say, that the logarithm of 2, base 10, has to be close to 0.3. It’s fun to do that, but it’ll involve such reasoning as “two to the tenth power is 1,024, which is very close to ten to the third power, which is 1,000, so therefore the logarithm of two to the tenth power must be about the same as the logarithm of ten to the third power”. That’s clever and fun, but it’s hardly systematic, and it doesn’t get you many digits of accuracy.
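The back-of-the-envelope reasoning is easy to check directly (again a sketch of my own, not from the book):

```python
import math

# 2**10 = 1024 is close to 10**3 = 1000, so 10 * log10(2) should be
# close to 3, giving the estimate log10(2) ~ 0.3.
estimate = 3 / 10
actual = math.log10(2)
print(estimate, round(actual, 5))  # the estimate is good to about one part in 300
```

Good to three parts in a thousand — respectable for mental arithmetic, but, as the text says, no route to many digits.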

So when I pick up this thread I hope to explain one way to produce as many decimal digits of a logarithm as you could want, without asking for too much from your poor Automatic Sequence-Controlled Calculator.

]]>My name is Brian D. and to be honest with you I don’t give a damn if you believe me or not. I know it sounds very hard to believe…and that what you read above sounds like it is impossible …but I’m the living proof that I did it.

The next 10 minutes might shock you because…

http://freebonusdownload.net/blog/lottery-variant-system-free-download/

* Those moments are so irritating, and many times we do not know what to do in order to change those feelings; it is as if nothing in this universe is working, and you are not working along with it. I do believe that everything happens for a reason and nothing in our life is coincidence, and since everything might be considered infinite, just like the universe, numbers, imagination and the soul, I can surely say energy is one of those infinities, only that sometimes it seems that we receive less and sometimes more. *

* With an allusion to astrology, the universe is what gives us energies, and with the rotation of the planets and the Sun we receive less or more energy; that is why on rainy mornings we feel like going back to bed, while on the other hand, when it is sunny, we feel like we can conquer the world after getting only a few hours of rest. Is it the universe that affects our mood? *

* In those days with low power it is hard to get through the day because we do not have any motivation to do anything; it is as if we could pass an entire day staying in bed and doing nothing but sleeping. And the problem here is that the more time we spend in bed, the more groggy we feel, because energy does not come only from sleeping, but from everything our body needs: from water to food, from love to hate, from information to knowledge. All we have to do is understand ourselves and find a way to gain energy even when we have no motivation to do it. Since imagination is infinite, our minds can create an infinity of solutions for solving problems. *

Chapter 1 “Euclid’s Method” is the most important chapter in the book. If you read only one chapter it should be this one. Chapter 1 identifies indirect measurement as the reason for a science of mathematics and depicts abstract measurement as the essential method of establishing connections in geometry.

Chapter 2 provides a geometric perspective on magnitude, drawing out the relationships among magnitudes that underpin the application of real numbers to measure magnitude. Chapter 2 provides an essential context for chapter 4 on the real number system. If you read only three chapters, read chapters 1, 2, and 4.

Chapter 3 is a continuation of Chapter 1, but more focused on Euclid’s geometry. Its focus is Euclid’s fifth postulate: how it relates to the world and how Euclid uses it to measure area and to establish his theory of geometric proportion (the basis of trigonometry). Chapter 3 is optional in relation to the rest of the book and can be read at any time after Chapter 1.

Chapter 4, a core chapter, offers a reality-oriented perspective on the real number system. Chapter 4 offers a reality-based reformulation of the standard “constructions” while rejecting the idea that numbers are constructed objects. They are, rather, methods to identify and specify relationships in the world. Chapter 4 closes by relating its approach to the work of Dedekind, Cantor, and Heine in the late nineteenth century.

Chapter 5 is a very short chapter identifying a role for geometry (an abstract focus on the objects of measurement) that transcends its restriction to the measurement of three-dimensional spatial relationships. As Chapter 5 notes, the application of geometric methods to numerical relationships and to magnitudes such as force dates back at least to Euclid and Archimedes.

The last three chapters, 6-8, though written for a general audience, are intended primarily for college math majors and mathematically advanced high school students. These chapters are included to illustrate how my approach to understanding mathematics applies to advanced mathematics. They develop and motivate key concepts that are often (if not typically) left unmotivated in standard presentations, and these chapters are essential reading for math majors who want to understand what they are being taught.

Chapter 6 answers the question: “But what about set theory?” Chapter 6 finds a valid need for set theory in mathematics as a methodological device, but rejects any notion that set theory provides a foundation of mathematics. Closing with a brief discussion of the standard set theory axioms, Chapter 6 points out that, by design, these axioms, building an entire edifice on the empty set, are meaningless. Finally, as an illustration of the value of a proper set theory, Chapter 6 offers a motivation of the key concepts in point set topology.

Chapter 7 motivates and relates key concepts in vector spaces and linear transformations/matrices. It should be read by anyone studying linear algebra.

Abstract mathematical groups, discussed in Chapter 8, play an essential role in advanced mathematics and physics. Chapter 8 develops and motivates key concepts in group theory and group representations from an elementary perspective. To understand group theory one needs to broaden one’s concepts of quantity and measurement and Chapter 8 provides the required perspective. As such, it is the final test of this book’s central thesis. Chapter 8 should be read by anyone who wants to understand how group theory relates to the world.

]]>2. Go to the US and see Tool and NIN perform live.

3. Travel to Alaska (renting a car….)

4. Write at least one paper every year.

5. Find a postdoc position in the US. ]]>

George Thomas’ clear, precise calculus text with superior exercises defined the modern three-semester or four-quarter calculus course. The ninth edition of this proven text has been carefully revised to give students the solid base of material they will need to succeed in math, science, and engineering programs. This edition includes recent innovations in teaching and learning that involve technology, projects, and group work.

**Download Options:**

_http://ul.to/rm7472xt

_http://ul.to/iw2dms3b

_http://ul.to/okhs9tec

2. However, when I try to study things related to the transpose and composition from the amplitude, a problem arises because the functions that appear are not L^1 functions. Of the basic real-analysis material on convolution and the Fourier transform, I only know the cases where all the functions involved are L^1, but it seems the same results may hold for functions satisfying worse conditions. It looks like I will need to study in depth just how good the conditions on f and g must be for the convolution theorem to hold.

3. So, for the computation in 2, try whether the difficulty can be overcome with the technique used for the well-definedness of PsiDOs on NC Tori: apply a differential operator L to the exponential-function part and pass it to the other side by integration by parts.
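As a discrete sanity check of the convolution theorem in its best-behaved setting (a Python sketch of my own, not part of these notes; finite sequences are trivially L^1, so none of the regularity issues above arise), the DFT of a circular convolution equals the pointwise product of the DFTs:

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform of a finite sequence."""
    n = len(x)
    return [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / n) for j in range(n))
            for k in range(n)]

def circ_conv(f, g):
    """Circular convolution of two sequences of equal length."""
    n = len(f)
    return [sum(f[j] * g[(k - j) % n] for j in range(n)) for k in range(n)]

f = [1.0, 2.0, 0.0, -1.0]
g = [0.5, 0.0, 1.0, 0.0]

# Convolution theorem (discrete form): DFT(f * g) == DFT(f) . DFT(g)
lhs = dft(circ_conv(f, g))
rhs = [a * b for a, b in zip(dft(f), dft(g))]
ok = all(abs(a - b) < 1e-9 for a, b in zip(lhs, rhs))
print(ok)  # True
```

The question in note 2 is precisely how far this identity survives once f and g leave such comfortable function classes.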

]]>