For part of our workshop, we are going to showcase the Number Clothesline as a nice complement for Number Talks. In the past, we have done this activity with fractions, decimals, and percentage cards (see this post), but we wanted to try to differentiate a bit for our primary and secondary attendees. I have been intrigued by the idea of using a double clothesline for algebra concepts and so for our secondary extension, we are going to try this activity from Andrew Stadel’s Estimation 180 website.

I was trying to imagine what a beginning clothesline might look like for our early primary students, and so I made up some cards with numbers, dots, ten frames and fingers (after reading this fabulous article by Jo Boaler). I thought I would do a test run this weekend with my own kiddos (Kindergarten and Grade 2).

We set up the double number line in the living room and this is how it went down…

Overall, a pretty successful test run. A few thoughts…

- They both enjoyed the activity, especially the different types of pictures.
- They were both disappointed that there were no numbers between 10 and 20 for the bottom number line. I can’t really do that with fingers, but maybe I will make ten frame cards up to 20 and look for domino pictures (at least up to 15’s… do dominos go up to double 20’s?).
- As usual, I am impressed with how kids solve problems when they are left to themselves to figure out what makes sense. They didn’t need me to help them with anything – they figured out how to make the right spaces, what to do about missing numbers, what to do about double numbers etc. all on their own. Another good reminder that sometimes the most powerful teaching is to set the stage carefully, ask good questions and then stand back and let the kids do the thinking!!

If you would like to have these cards for your class, you can download them here. I will update this file if I manage to make some more ten frame or domino cards, but for now they only have numbers 1-10 in the dots, hands and ten frames.


In the first installment (https://larryemarshall.wordpress.com/2016/11/21/why-intelligent-design/) we left off with the concept of opposite worldviews. We will now discuss some of the philosophy behind how these worldviews came to be at opposite ends of the spectrum from each other.

The main theory that pervades all scientific disciplines is that simple material entities governed by natural laws eventually produced the chemical elements from elementary particles. These elements, swirling around in some kind of primordial environment (most call it a soup), then combined into complex molecules. Then somehow, these inanimate chemicals became ALIVE! These simple life forms survived all kinds of improbable events to combine into more complex life. Finally, conscious living beings developed and eventually morphed, mutated, and were naturally selected into YOU and ME. In this view, matter comes first, and conscious mind arrives on the scene much later as a by-product of material processes and undirected evolutionary change. “Chance,” they say; “goo to you” mutation, with natural selection picking the best of the lot by CHANCE.

The Greek philosophers (who were called atomists), such as Leucippus and Democritus, were perhaps the first Western thinkers to articulate something like this view in writing.[1] The Enlightenment philosophers Thomas Hobbes and David Hume also later espoused this matter-first philosophy.[2]

Following the widespread acceptance of Darwin’s theory of evolution in the late nineteenth century, many modern scientists adopted this view (why they did is the subject for another series of articles, but essentially it is that, when in doubt, anything should make sense). This worldview has been called several things, depending upon which scientific discipline you have majored in: naturalism or materialism, or sometimes scientific materialism or scientific naturalism, in the latter case because many of the scientists and philosophers who hold this perspective think that scientific evidence supports it.

So this brings up a number of questions. Not for most of you who are reading this, though. You have probably never imagined that such questions existed, let alone whether they have been answered. That is the reason for my series: to open everyone’s minds to the facts that are out there but that you are unaware of. What are these questions?

Can the origin of life be explained purely by reference to material processes such as undirected chemical reactions or random collisions of molecules?

Can the origin of life be explained without recourse to the activity of a designing intelligence?

Who needs to invoke an unobservable designing intelligence to explain the origin of life, if observable material processes can produce life on their own?

On the other hand, if there is something about life that points to the activity of a designing intelligence, then that raises other philosophical possibilities.

Does a matter-first or a mind-first explanation best explain the origin of life?

Either way, the origin of life is an infinitely interesting scientific topic, but one that has raised incredible philosophical issues as well.

My insatiable desire for information when I was in high school and college blinded me to the fact that only one methodology was being taught at the time. It was taught as the TRUTH, with very little supporting information. You might say they wanted us to believe what they were saying on a hope and a prayer.

So let us start unlocking the mystery of the mystery of all things.

Many of the founders of early modern science, such as Johannes Kepler[3], Robert Boyle[4], and Isaac Newton[5], had deep religious convictions. They believed that scientific evidence pointed to a rational mind behind the order and design they perceived in nature, which is so easy to observe all around us.

Many late-nineteenth-century scientists came to see the cosmos as an autonomous, self-existent, and self-creating system: matter was the most important thing. It appeared to them that the cosmos required no transcendent cause, no external direction or design. Several of these nineteenth-century scientific theories actually provided some support for this perspective, despite the fragility of the knowledge the theories were based upon.

In astronomy, for example, the French mathematician Pierre Laplace[6] offered an ingenious theory known as the “nebular hypothesis” to account for the origin of the solar system as the outcome of purely natural gravitational forces[7].

In geology, Charles Lyell[8] explained the origin of the earth’s most dramatic topographical features— mountain ranges and canyons— as the result of slow, gradual, and completely naturalistic processes of change such as erosion and sedimentation[9]. This line of thinking later helped bring about the theory of plate tectonics.

In physics and cosmology, a belief in the infinity of space and time obviated any need to consider the question of the ultimate origin of matter: if it has always been there, then it never originated. That obviously brings up many other questions, but it was, and is, easier to avoid them.

In biology, Darwin’s theory of evolution by natural selection suggested that an undirected process could account for the origin of new forms of life without any divine intervention, guidance, or design. Again, the questions left unanswered or only partially explained were deferred until sometime in the future.

Collectively, these theories made it possible to explain all the salient events in natural history, from before the origin of the solar system to the emergence of modern forms of life, solely by reference to natural processes— unaided and unguided by any kind of designing mind or intelligence. Matter has always existed and could, in effect, arrange and rearrange itself into any combination that, by chance, would become more complex as time went on.

But does it? Here we need to dive deeper into the philosophy of science and the underlying premises of how scientists determine things. We will be delving into some areas of history and science that many of you have never, ever thought about. Fortunately, others have, and what they have formulated is a deeper understanding of how and why you believe the way you do, whether rightly or wrongly.

Continue on in the enigmatic challenge of seeking the mystery of mysteries.

continued at:

[1] Kirk and Raven, The Presocratic Philosophers.

[2] Hobbes, Leviathan; Hume, Dialogues Concerning Natural Religion.

[3] a German mathematician, astronomer, and astrologer. A key figure in the 17th century scientific revolution, he is best known for his laws of planetary motion, based on his works *Astronomia nova*, *Harmonices Mundi*, and *Epitome of Copernican Astronomy*. These works also provided one of the foundations for Isaac Newton’s theory of universal gravitation.

[4] a natural philosopher, chemist, physicist and inventor. Boyle is largely regarded today as the founder of modern chemistry, and one of the pioneers of the modern experimental scientific method. He is best known for Boyle’s law, which describes the inversely proportional relationship between the absolute pressure and volume of a gas if the temperature is kept constant within a closed system.

[5] an English physicist and mathematician who is widely recognised as one of the most influential scientists of all time and a key figure in the scientific revolution. His book *Philosophiæ Naturalis Principia Mathematica* (“Mathematical Principles of Natural Philosophy”), first published in 1687, laid the foundations for classical mechanics. Newton made seminal contributions to optics, and he shares credit with Gottfried Wilhelm Leibniz for the development of calculus. Newton’s *Principia* formulated the laws of motion and universal gravitation, which dominated scientists’ view of the physical universe for the next three centuries. By deriving Kepler’s laws of planetary motion from his mathematical description of gravity, and then using the same principles to account for the trajectories of comets, the tides, the precession of the equinoxes, and other phenomena, Newton removed the last doubts about the validity of the heliocentric model of the Solar System.

[6] an influential French scholar whose work was important to the development of mathematics, statistics, physics and astronomy. He translated the geometric study of classical mechanics to one based on calculus, opening up a broader range of problems. In statistics, the Bayesian interpretation of probability was developed mainly by Laplace. He restated and developed the nebular hypothesis of the origin of the Solar System and was one of the first scientists to postulate the existence of black holes and the notion of gravitational collapse.

[7] Laplace, Exposition du système du monde.

[8] a British lawyer and the foremost geologist of his day. He is best known as the author of *Principles of Geology*, which popularized the concept of uniformitarianism—the idea that the Earth was shaped by the same processes still in operation today. His scientific contributions included an explanation of earthquakes, the theory of the gradual building up of volcanoes, and, in stratigraphy, the division of the Tertiary period into the Pliocene, Miocene, and Eocene. He also coined the currently used names for the geological eras: Paleozoic, Mesozoic and Cenozoic.

[9] Lyell, Principles of Geology.

“If I could just unravel this just a little bit more, and just get a little closer to the answer, then… Then I would go to my grave a happy woman.”

In the past few years, writing and reading have left me with a keen eye for detail. So when I tell you that “Nil Battey Sannata” is a well-written story with its fair share of highs and lows, I am not exaggerating. The story revolves around a single mom who failed her 10th grade (played by Swara Bhaskar) trying to get her only daughter to study for her own 10th exams and get decent marks. It shows the struggle of a mother who takes multiple odd jobs and works all day so she can give her daughter a decent life and save some money for her future education, while the daughter is busy enjoying life, failing miserably in her class. Things turn interesting when the mother is advised by one of her employers (Ratna Pathak) to join the school, so she can both study and help her daughter too. Taking matters into her own hands, the mother joins the same school as her daughter; but the daughter takes her noble intentions as a challenge. The rest of the story is an amazing series of events leading to a climax that will leave you teary-eyed.

**MY TAKE:**

The best thing about this movie is that it will easily resonate with common people like us, because it brilliantly portrays the hardships of the common man or woman. With no extravagant costumes, sets, or locations, the movie runs entirely on the merits of a well-written script and some brilliant acting. One of the key features of this movie is that it has a female protagonist, so this is not the same old story of a damsel in distress. There is some really great supporting acting that will make you laugh and cry. In an era when education is taking a back seat with the masses, this movie reflects the effect a good education can have on society. It teaches you that no dream is too big if you only have the courage to follow your heart. The movie is all about the little dreams of the common crowd and an undying hope to achieve them someday. The movie is like the scent of the land after the first rainfall. If you like a good story and good acting, and have a sensitive side like me, this is a must-watch for you. For my friends outside India: go ahead and watch it with subtitles, you won’t be disappointed.

**Reasons to watch it:**

- Brilliant script writing.
- A story that you would connect to.
- Brilliant acting by the lead actress and supporting roles.
- Look out for the unique analogies used to learn mathematics.
- Watch out for the guy with the specs, he will leave you both sad and proud.

Image credits – Google

At some mysterious point in time, someone must have picked up the stones individually and wondered how to designate them. Perhaps one person wanted to exchange his “many” for another person’s “many” in a business matter. How could each determine that he wasn’t cheated, except by counting? Stones, then, were no longer simply a collection of “many”, but had special identities or designations. Thus, from such primitive beginnings, the world of number was born. The simplest numbers uncovered were the “counting numbers” or “natural numbers”: following 1, the numbers 2, 3, 4, 5, 6, 7, 8, and 9 came into being.

The Babylonians and Egyptians needed numbers to measure fields, and the shapes of the fields necessitated some rudimentary elements of geometry for buying and selling properties. However, it was the ancient Greeks who studied number as something that had its own existence independent of human needs. Indeed, to Pythagoras, all things were numbers. Men and women were numbers, and he assumed there were only ten heavenly bodies, since 10 had special significance. In fact, the Pythagoreans worshipped the “tetraktys”, a triangle composed of four rows of dots. The first row had one dot, the second two dots, the third three dots and the fourth four dots. When the four rows were added, the sacred 10 was the result. Everything appeared to represent perfect balance and harmony until… Yes, even in this well-constructed world, irrationality stuck out its ugly snout, as it does so often in human lives. Someone constructed a square with sides of length one. When a diagonal was added, the length of the diagonal had to be the square root of 2. Pythagoras’s own theorem led to this unhappy result. The irrational could be dealt with later. But there was still one important number missing: 0.

The Greeks never could find 0, and this fact imposed strict limits on what they could do with numbers. For zero, we have to go to another country: India. The Indians had long had a concept of nullity. It came from their philosophy. It came from their religion. When “0” joined the counting numbers, a major step was put in place for solving equations, the construction of the Cartesian plane and, in today’s world, the binary system which is the basis of computer circuitry.

In the Middle Ages, the first algebraic equations were born, arising in the Arab world. The mysterious x and y of algebra represented an abstract way of thinking hitherto unknown in mathematics. The Greeks may have been the philosophers of number, but the Arabs were not only philosophers but active participants in extending the range of number to greater practical and theoretical heights. However, algebra and geometry were still separate. It required a major step to bring them together.

We will now construct yet another interesting thing from sets. We will need two kinds: the first is a set equipped with a topology, or topological space; the other is either just an ordinary set, or a set equipped with a law of composition, such as a group, ring, or module. Note that functions between sets also themselves form sets; we now use this to construct a motivating example for the concepts that we are about to introduce.

Consider the set of all complex numbers (also called the complex plane) equipped with the topology where we declare the closed sets to be the finite sets of complex numbers (which can be imagined as a finite number of points in the complex plane). The open sets are therefore the complements of these closed sets. This is a special case of what is called the **Zariski topology**.
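The closed-set axioms can be made concrete with a small Python sketch (the variable names are my own illustrative choices): if the proper closed sets are exactly the finite sets of points, then finite unions and arbitrary intersections of closed sets are again finite, hence closed.

```python
# Closed sets in this Zariski-style topology are the finite sets of
# points in the complex plane; we model two of them and check that
# their union and intersection are again finite (hence closed).

A = frozenset([0, 1, 1j])        # three points in the complex plane
B = frozenset([1, -1, 2 + 3j])   # another finite set of points

union = A | B          # a finite union of finite sets is finite
intersection = A & B   # a subset of a finite set is finite

print(len(union), len(intersection))  # 5 1
```

The open sets are the complements of such sets, so this also illustrates why the intersection of two open sets (the complement of a finite union) is again open.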

Consider also the functions from the complex numbers to the complex numbers which are of the form $\frac{p(z)}{q(z)}$, where $p(z)$ and $q(z)$ are polynomials (the functions we refer to in the rest of this post will be of this form). Examples of these functions are $z$, $z+1$, $z^{2}$, and so on. Consider now the function

$f(z)=\frac{1}{z}$.

It is not actually a function from the complex numbers to the complex numbers. Why? Because it does not send the complex number $0$ anywhere, and a function must send every element of its domain to some element in its range. But we can say that it is a function from the set of complex numbers except for the complex number $0$, written $\mathbb{C}\setminus\{0\}$ or $\mathbb{C}-\{0\}$, to the complex numbers. This set is of course a subset, and actually an open subset, of the set of all complex numbers $\mathbb{C}$.

For ease of notation, we keep the range of our functions fixed (in this case it is the complex numbers) and speak informally of functions “on”, or sometimes “living on” their respective domains.

Although $\mathbb{C}\setminus\{0\}$ is “smaller” than $\mathbb{C}$, there are actually more functions on $\mathbb{C}\setminus\{0\}$ than on $\mathbb{C}$. Aside from $\frac{1}{z}$, we also have $\frac{1}{z^{2}}$, $\frac{z+1}{z}$, and all other functions whose denominator would otherwise have been zero at the complex number $0$ and nowhere else on the complex plane $\mathbb{C}$. If we take an even “smaller” open subset of $\mathbb{C}$, such as $\mathbb{C}\setminus\{0,1\}$, we will obtain even more functions on this open subset, such as $\frac{1}{z-1}$ and $\frac{1}{z(z-1)}$.

At the same time, for every function on $\mathbb{C}$ there is also a corresponding function on the open subset $\mathbb{C}\setminus\{0\}$, which assigns to every element of $\mathbb{C}\setminus\{0\}$ the same element that the function on $\mathbb{C}$ assigns to that element. Technically the function on $\mathbb{C}\setminus\{0\}$ is a different function from the one on $\mathbb{C}$ because it has a different domain. It is called the restriction of the function on $\mathbb{C}$ to $\mathbb{C}\setminus\{0\}$. For every function on $\mathbb{C}\setminus\{0\}$ there are also corresponding restrictions to $\mathbb{C}\setminus\{0,1\}$.
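The idea of restriction can be sketched in Python by carrying a domain along with each rule (a toy model with hypothetical names, not anything from the post itself):

```python
# Model a function on an open set as a pair (domain_test, rule):
# domain_test says whether a point belongs to the domain, and
# rule computes the value. Restriction keeps the rule and
# shrinks the domain.

def restrict(func, smaller_domain_test):
    domain_test, rule = func
    return (lambda z: domain_test(z) and smaller_domain_test(z), rule)

# f(z) = z^2 is defined on all of the complex plane...
f_on_C = (lambda z: True, lambda z: z * z)

# ...and restricts to the open subset with 0 removed.
f_on_U = restrict(f_on_C, lambda z: z != 0)

in_U, rule = f_on_U
print(in_U(2), in_U(0), rule(3))  # True False 9
```

The restricted function computes the same values wherever both are defined; only the domain has shrunk, which is exactly why it counts as a different function.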

In order to formalize this, we note that if $U$ is a subset of $X$, usually written $U\subseteq X$, then we have a function called an **inclusion function**, or **inclusion**, from $U$ to $X$, which sends every element of $U$ to the same element in $X$. If we write the set of functions on $X$ as $\mathcal{O}(X)$, and the set of functions on $U$ as $\mathcal{O}(U)$, we obtain a map from $\mathcal{O}(X)$ to $\mathcal{O}(U)$ that assigns to every function on $X$ its restriction to $U$. This mapping from $\mathcal{O}(X)$ to $\mathcal{O}(U)$ is called a **restriction function**, or **restriction map**.

We can summarize and generalize our discussion above as the condition that whenever we have an inclusion from $U$ to $X$, then we also have a restriction map from $\mathcal{O}(X)$ to $\mathcal{O}(U)$. We now obtain the “classical” notion of a presheaf. For the more rigorous definition, we quote from the book Algebraic Geometry by Robin Hartshorne:

*Let $X$ be a topological space. A presheaf $\mathcal{F}$ of abelian groups on $X$ consists of the data*

*(a) for every open subset $U\subseteq X$, an abelian group $\mathcal{F}(U)$, and*

*(b) for every inclusion $V\subseteq U$ of open subsets of $X$, a morphism of abelian groups $\rho_{UV}:\mathcal{F}(U)\rightarrow\mathcal{F}(V)$,*

*subject to the conditions*

*(0) $\mathcal{F}(\emptyset)=0$, where $\emptyset$ is the empty set,*

*(1) $\rho_{UU}$ is the identity map $\mathcal{F}(U)\rightarrow\mathcal{F}(U)$,*

*(2) if $W\subseteq V\subseteq U$ are three open subsets, then $\rho_{UW}=\rho_{VW}\circ\rho_{UV}$.*

Since this particular definition in the book of Hartshorne only defines presheaves of abelian groups, the functions $\rho_{UV}$ are required to be morphisms, which means that they respect the abelian group structure on $\mathcal{F}(U)$ and $\mathcal{F}(V)$, i.e. if we write the law of composition of the abelian group using “$+$”, and we have $a+b$ in the domain, then a morphism of abelian groups is a function $f$ that satisfies $f(a+b)=f(a)+f(b)$. However, presheaves can be defined for more general sets and functions.
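The conditions in the definition can be checked mechanically on a toy example. The sketch below (hypothetical names, and a presheaf of {0,1}-valued functions rather than abelian groups, to keep it small) verifies conditions (0), (1), and (2) on a three-point space:

```python
from itertools import product

# A tiny topological space X = {1, 2, 3} with open sets
# {}, {1}, {1, 2}, {1, 2, 3}, and the presheaf assigning to each
# open set U the set of all functions U -> {0, 1}, with
# restriction = forgetting values outside the smaller set.

def sections(U):
    # all functions U -> {0, 1}, encoded as dicts
    points = sorted(U)
    return [dict(zip(points, vals)) for vals in product([0, 1], repeat=len(points))]

def rho(U, V, s):
    # restriction map from sections over U to sections over V, for V ⊆ U
    return {x: s[x] for x in V}

U, V, W = frozenset([1, 2, 3]), frozenset([1, 2]), frozenset([1])

# (0) over the empty set there is exactly one (trivial) section
assert sections(frozenset()) == [{}]
# (1) restriction from U to U is the identity
assert all(rho(U, U, s) == s for s in sections(U))
# (2) restricting in two steps agrees with restricting in one step
assert all(rho(V, W, rho(U, V, s)) == rho(U, W, s) for s in sections(U))

print("conditions (0), (1), (2) hold")
```

For a presheaf of abelian groups one would additionally check that each restriction respects the group operation; here we only use sets of functions, which is the more general situation just mentioned.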

We quote some more useful terminology from the book of Hartshorne:

*If $\mathcal{F}$ is a presheaf on $X$, we refer to $\mathcal{F}(U)$ as the sections of the presheaf $\mathcal{F}$ over the open set $U$, and we sometimes use the notation $\Gamma(U,\mathcal{F})$ to denote the group $\mathcal{F}(U)$. We call the maps $\rho_{UV}$ restriction maps, and we sometimes write $s|_{V}$ instead of $\rho_{UV}(s)$, if $s\in\mathcal{F}(U)$.*

The concept of presheaf can be generalized even further, so that the functions from $U$ to $X$ need not be inclusion functions. Together with the generalization of the concept of open covers in topology, and the concept of a sheaf, this leads to the concepts of **site** and **topos**.

References:

Algebraic Geometry by Andreas Gathmann

The Rising Sea: Foundations of Algebraic Geometry by Ravi Vakil

Algebraic Geometry by Robin Hartshorne

Sheaves in Geometry and Logic: A First Introduction to Topos Theory by Saunders Mac Lane and Ieke Moerdijk

When this particular student came to me for the first lesson, I pointed at one of the pieces he was studying for the exam: the beloved *Etude No. 2* by Rodolphe Kreutzer (depending on your attitude, “beloved” can be interpreted as genuine or sarcastic…). His mother had contacted me saying that her son had no prior training in music theory. To test exactly what she meant, I asked the student “What key is this piece in?”

The student wasn’t able to give a reply! He could not identify C major, and so I had to start at the beginning. Over the next 4 lessons, I introduced the concept of major and minor scales, the rudiments of the Western system of tonality (key signatures and the circle of fifths), and also the names of intervals. He had particular trouble grasping the fact that the terms major and minor can describe a scale, a key, and also an interval (which might be a comment on the language we use to describe music… or my teaching skills, or both).

Playing scales is a mandatory part of all AMEB exams, so there was no doubt that he could play a C major scale, but I realised that he somehow lacked the concept of C major as a sort of “separate entity” — that is, divorced from the specific action of playing the scale on the violin, in the manner prescribed by the AMEB Technical Work book. To give a concrete example of this, consider the following snippets of music:

An experienced musician will know at a glance that all 6 excerpts are just different manifestations of the basic C major chord. The notes C – E – G form the C major chord regardless of the order of the notes, the register, and the rhythms used. Numbers 1 to 5 are just examples I’ve cooked up on the spot to illustrate some commonly used figurations, but number 6 is actually the opening of a piece of chamber music by Mozart (+10 cool points if you know which one!). One does not need to have practiced those *particular* figurations in order to execute them; it is about recognition of a broader pattern, and being able to adapt to variations on the basic pattern. This brings us to a brief discussion of **sight-reading**, another component of the AMEB exam.
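One way to make “different manifestations of the same chord” precise is the notion of pitch classes: reduce each note to its remainder mod 12, discarding octave, order, and rhythm. A short Python sketch (the MIDI note numbers below are my own illustrative figurations, not the excerpts above):

```python
# Reduce notes (given as MIDI numbers) to pitch classes mod 12.
# Octave, order and rhythm disappear; what remains is the set
# {0, 4, 7}, i.e. {C, E, G} -- the C major chord.

C_MAJOR = frozenset([0, 4, 7])  # pitch classes of C, E, G

def pitch_classes(midi_notes):
    return frozenset(n % 12 for n in midi_notes)

arpeggio = [60, 64, 67, 72]    # C4 E4 G4 C5, ascending
inversion = [64, 67, 72]       # E4 G4 C5, starting from E
spread = [48, 64, 79]          # C3 E4 G5, spread over three octaves

for figure in (arpeggio, inversion, spread):
    assert pitch_classes(figure) == C_MAJOR

print("all figurations reduce to C major")
```

This is exactly the kind of broader pattern an experienced reader extracts at a glance, whatever figuration is on the page.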

One of the most remarkable features of Western classical music is the complex system of notation that has been developed over many centuries. Improvisation *used* to be an integral part of Western classical music, but generally it has lost its prestige in our era. Although there have been many successful efforts (e.g. historically informed performance, modern compositions which include improvised sections, and cross-overs with jazz and folk traditions) to reintroduce it as part of the classical musician’s skill set, I think it is safe to say that most classically-trained students are not taught improvisation. There is hence a strong emphasis on being able to read and interpret notated scores. Sight-reading is the practice of performing a score which has not been prepared beforehand. Since playing a musical instrument is so demanding, sight-reading is not a trivial exercise, and techniques must be developed. The minimum requirement is simply to be able to reproduce the notes and rhythms faithfully, within some reasonable margin of error. At higher levels of examination, students are expected to also pay attention to different articulations, dynamics, and expressive markings on the score. I would argue that the first step is the most difficult.

Translating a single note that is written on paper to a sound on the instrument (this includes the human voice too!) requires the musician to identify what pitch is represented by the notation, then to engage whatever physical actions are necessary to produce that pitch on the instrument. However, in order to play a passage of music, one must also be able to take into account the rhythm (roughly speaking, the relative durations of notes), as well as the succession of pitches, and translate all of that into a fluent process on the instrument. There is no hope of attaining the fluency required by considering single notes at a time. Experienced musicians will be able to internalise larger chunks of music, say an entire bar or several bars at a time, and also be able to read ahead, so that while they are playing a certain passage, they are mentally prepared for what comes next. Thus, to successfully perform even a simple piece of music at sight, the reproduction of the notated pitches and rhythms should be second-nature, as effortlessly as a literate person can read and recite written text (which is the motivation behind my choice of words “translate” and “fluent”). Attention to articulations, dynamics, and expressive markings can be trained later and often comes naturally with experience, but it is the fundamental, near-instantaneous connection between notation, sound, and physical action that is difficult to master, and requires diligent practice and time commitment.

I find that it helps tremendously if the student already has some knowledge of the Western tonal system — the rudiments of scales and keys — and a decent sense of rhythm. In this case, the bare essentials of a piece of music can be quickly internalised, and sets a rough framework or guideline during the sight-reading. As the student further develops their sight-reading skills, they will be able to transform the “passive” knowledge — e.g. *recognising *that a piece is in a given key — quickly into “active” knowledge, that is, knowing how to *realise* the notation as sound on the instrument. When I see a notated pitch, I can instantaneously hear the said pitch (unless it’s in some strange transposition!), and if it is violin music I am reading, I also immediately ‘feel’ the correct position of the fingers even without the instrument on hand. This is the fundamental connection I described above, and I’m sure all highly-trained musicians can experience it.

Unfortunately for my student, before our lessons, he lacked the knowledge even to recognise basic features like tonalities and intervals, and hence, as his mother had described to me, was practically unable to do sight-reading. After the limited number of sessions we had before his exam, I feel confident that he can now recognise key signatures and tonalities appropriate for the grade 4 level, but unfortunately we did not have enough time to make significant progress in putting this knowledge into action and developing his “inner ear” (referring to the connection between the notation and sound, not the anatomical inner ear). Nevertheless, I hope he has grasped the basics quickly enough to allow him to score some points in the sight-reading component of the exam. It is at this point that I bring in the relationship with studying maths.

There is nothing inherently wrong with the AMEB exam format. After all, music competitions and professional auditions all require the candidate to prepare selections from a set list of music. However, there is the unfortunate tendency to view the grade progression as the *definitive* way to study music, as if all it takes to become a good musician is to “level up” your music skills (like in the Sims computer games). This is hazardous, as I saw in the case of my student. It was clear that he knew his chosen examination pieces well and could perform them competently, but he was unable to adapt and extend his existing knowledge to sight-read a piece he had never seen before. There is a similar situation in high school maths. Let’s look at the example of solving quadratic equations. A typical “drilling” exercise might be as follows:

**Solve the following equations for x using factorisation:**

(The last one isn’t as obnoxious as it looks. What are the divisors of 55? and 21?)

While solving quadratics is a very important skill, this is not a particularly inspiring exercise. Now consider this GCSE exam question, which went viral for apparently being “unfair” and too challenging:

There are *n* sweets in a bag. Six of the sweets are orange. The rest of the sweets are yellow. Hannah takes a random sweet from the bag. She eats the sweet. Hannah then takes at random another sweet from the bag. She eats the sweet. The probability that Hannah eats two orange sweets is 1/3. Show that *n*² – *n* – 90 = 0.

This is actually more of a probability question, but I think part of the reason it was considered difficult is that the resulting quadratic equation appears (at first glance) to have no connection with the other information. In fact, the question could have been *more* challenging if the students were simply asked to “find the value of *n*“, it was quite kind of the examiners to provide the correct quadratic equation! The point is that the question combines two topics — basic probability and quadratic equations — that when considered individually should have caused no problems for a student who has adequately prepared for the exam. But of course, interesting mathematical problems are interesting precisely because the techniques needed are not handed to you on a silver platter, and the road to the solution is not paved nicely and marked with flashing signposts. There is no huge conceptual leap from typical textbook exercises to Hannah’s sweets, but students who are too accustomed to the textbook fail to adapt the basic techniques to tackle more interesting problems that require a more involved process. (Previously, I have discussed briefly the role of creativity in mathematics).

As I continue my mathematical studies at the University of Sydney, I come to realise the utmost importance of complementing the theory explained in lectures with a rich variety of problems to tackle, so that I begin to appreciate the myriad of ways the theory is used in practice. Interesting and challenging problems will often require creative manipulations, finding connections between different concepts, expressing the same quantity in different ways, combining the results of various theorems in a clever way, and so on. The temptation is to consider yourself a master after getting all the textbook exercises correct. Sure, you understand the basic theory, but this is only the beginning! It is likely that most of the students who complained about the GCSE question were perfectly capable of solving quadratic equations, but floundered when the technique was disguised in a more creative way. The converse seems to be true in the case of my AMEB student. He had diligently prepared four *specific* pieces, but lacked the techniques required to appreciate and process music more generally, as demonstrated by his initial inability to sight-read music. In mathematics as in music, it is most satisfying when you begin to appreciate the interaction between theory and practice.

As is common in many problems, it helps to express the same quantity in two different ways. We want to compute the probability of getting two orange sweets in a row. At the first draw, there are *n* sweets, 6 of which are orange, so the probability of getting an orange sweet is simply 6/*n*. At the second draw, there are now *n* – 1 sweets in total, 5 of which are orange (remember Hannah ate the first one!), so the probability of getting orange on the second draw is 5/(*n* – 1). Now, by the multiplication principle, it follows that the probability of getting two orange sweets in a row is:

$$\frac{6}{n} \times \frac{5}{n-1} = \frac{1}{3},$$
where the right hand side of the equation is the probability of getting two orange sweets as provided in the question. This simplifies to the desired quadratic equation (I’ll leave the details to you, dear reader :P ). To take the problem one step further, we can factorise and solve for *n*:

$$n^2 - n - 90 = (n - 10)(n + 9) = 0.$$
There are two solutions (as expected from a quadratic), *n* = –9 and *n* = 10, but clearly you can’t have minus 9 sweets (unless all the sweets in Hannah’s bag are stolen and she owes someone 9 sweets!), so the only valid answer is *n* = 10.
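As a quick sanity check (a small Python sketch of my own, not part of the exam solution), we can verify both the probability and the roots of the quadratic:

```python
from fractions import Fraction

def prob_two_orange(n):
    # P(orange on first draw) * P(orange on second draw), with 6 orange sweets out of n
    return Fraction(6, n) * Fraction(5, n - 1)

# Roots of n^2 - n - 90 = 0, found by brute force over a small range
roots = [n for n in range(-20, 21) if n * n - n - 90 == 0]

print(prob_two_orange(10))  # 1/3, matching the probability given in the question
print(roots)                # [-9, 10]
```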

The earliest type of cipher (coding system for messages) was called a ‘monoalphabetic cipher’. This involves writing the alphabet out, and shifting the start of the alphabet along a few places, as shown.

This, however, is very easy to decipher, simply because there are only 25 possible shifts; by trying each one in turn, you are guaranteed within about twenty minutes to have found the shift that produces normal English. However, if you mix up the order of the alphabet on the second line, as shown below, you suddenly have about 400,000,000,000,000,000,000,000,000 (that is, 26!) possible alphabets, far too many to check, even with computers. Even if a computer could check 10 alphabet combinations per second, and you used all the computers in the UK, it would still take you approximately 36 billion years to check them all.
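A shift cipher of this kind can be sketched in a few lines of Python (my own illustration, not from the original article):

```python
import string

ALPHABET = string.ascii_lowercase

def shift_encrypt(plaintext, shift):
    # Write the alphabet out, shifted `shift` places, and substitute letter for letter
    shifted = ALPHABET[shift:] + ALPHABET[:shift]
    return plaintext.lower().translate(str.maketrans(ALPHABET, shifted))

def shift_decrypt(ciphertext, shift):
    # Decrypting is just shifting the other way
    return shift_encrypt(ciphertext, -shift % 26)

print(shift_encrypt("attack at dawn", 3))  # dwwdfn dw gdzq
```

Trying all 25 possible shifts in a loop is exactly the brute-force attack described above.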

As you can imagine, for centuries this cipher was seen as unbreakable, especially in an age before computers. The only way anyone could hope to read these secret messages was by stealing the ‘key’ (the details of how to decode the message, given to the receiver).

However, after much analysis, mathematicians found a way to make this cipher vulnerable.

They did this by using something called ‘frequency analysis’, which simply means counting what percentage of typical English text each letter of the alphabet makes up. This is shown below:

By using these graphs and analysing the ‘ciphertext’ (the encrypted message), we can work out at least a few letters by looking at how frequently each letter occurs in the ciphertext.

A section of our ciphertext is as follows: __“rdhhjpgxnggksjldckdypj”__

If we take this example, we know that the most frequent letter in English text is ‘e’. If we look now at the lilac graph, we can see that the most frequent letter in the ciphertext is ‘g’, so we can make an educated guess and substitute all our g’s for e’s. We can do the same for t and a, which correspond to j and s respectively. If we write out the most frequently occurring letters in both sets and the least frequently occurring in both sets, we get the following tables:

Most frequent:
NORMAL: E T A O I N S H
CIPHERTEXT: G J S W D K P Q

Least frequent:
NORMAL: J Q X Z
CIPHERTEXT: E Z F M
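The counting itself is mechanical; here is a short Python sketch (my own illustration — the guessed substitutions are the eight most-frequent pairs from the table above, and on such a short sample several letters tie for first place):

```python
from collections import Counter

ciphertext = "rdhhjpgxnggksjldckdypj"

# Tally the ciphertext letters; in this short sample 'g', 'j' and 'd' tie at 3 each
counts = Counter(ciphertext)
print(counts.most_common(3))

# Substitute our guessed letters (cipher -> plain) and mark unknowns with '-'
guesses = {"g": "e", "j": "t", "s": "a", "w": "o", "d": "i", "k": "n", "p": "s", "q": "h"}
partial = "".join(guesses.get(c, "-") for c in ciphertext)
print(partial)  # -i--tse--eenat-i-ni-st
```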

Using this table, we now have “-i--tseq-eenat-i-ni-st”. Since ‘the’ is the most common word in the English language, and the combination “tseq” is very unlikely even across two words, we can swap some letters, e.g. S and H, changing our table for the most frequent letters to the one shown and, from this, gain a new

Most frequent (modified):
NORMAL: E T A O I N S H
CIPHERTEXT: G J S W D K Q P

translation: “-i--theq-eenat-i-ni-ht”. Knowing that ‘q’ is usually followed by a ‘u’, we can fill this in; and knowing that “queen at ______” reads as a phrase, we can assume that the last word is a time or place, and finally conclude that the message says “kill the queen at midnight”.

After this discovery, the world of cryptography was in ruins, and the secrets of the past were finally revealed. The search for a new cipher began.

Soon the Vigenère cipher was created, which was so trusted that it was nicknamed “le chiffre indéchiffrable” in France, meaning “the undecipherable cipher”. This used a list of different monoalphabetic ciphers, one for each letter of the alphabet, known as a Vigenère square:

The sender would then pick a code word, such as ‘white’, which they would use to code their message. They would write it out above their message like so:

W H I T E W H I T E W H I T E

K I L L T H E Q U E E N T O N…

Then, reading the first row (the keyword letters) against the letters across the top of the Vigenère square, and the second row (the message letters) against the letters down the side, we find the letter where each row and column meet; as shown on the square, the first letter of my message encodes to ‘g’. The idea of this cipher is that coding the same letter twice usually gives a different outcome each time, which makes frequency analysis seem impossible: the same letter in my message will most likely not be coded into the same ciphertext letter. However, cryptographers found a way around this.
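The table lookup is equivalent to shifting each message letter along the alphabet by its keyword letter (A = shift 0, B = shift 1, and so on). A minimal Python sketch (my own illustration; the post itself works from the printed square, and this version assumes a letters-only message):

```python
def vigenere_encrypt(message, keyword):
    # Shift each message letter by the keyword letter above it (A = 0, B = 1, ...)
    out = []
    for i, ch in enumerate(message.upper()):
        shift = ord(keyword.upper()[i % len(keyword)]) - ord("A")
        out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
    return "".join(out)

print(vigenere_encrypt("KILLTHEQUEENTON", "WHITE"))  # GPTEXDLYNIAUBHR
```

Note that the first output letter is ‘G’, matching the worked example on the square.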

If our code word was five letters long, we would label the ciphertext letters 1 to 5 in a repeating cycle, splitting them into five groups:

T J L O F M J K K P C Z A I P U G E R…

1 2 3 4 5 1 2 3 4 5 1 2 3 4 5 1 2 3 4…

All the letters in group one were encrypted with the same keyword letter, so each group can be analysed separately to find the likely alphabet shift that produced it. They would do this for all five groups until they had come up with a keyword that produced English. You may be wondering how they know the length of the code word; they don’t. Instead, they use teams of cryptographers and computing power to test all the different lengths of code word and see which one works. This may sound like a long process; however, it is not when you have more than a hundred people with computers at your disposal, which in times of war is likely.
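The grouping step can be sketched as follows (my own illustration, using the ciphertext fragment printed above):

```python
ciphertext = "TJLOFMJKKPCZAIPUGER"
key_length = 5

# Letters 1, 6, 11, ... were all shifted by the same keyword letter, and likewise
# for the other positions, so each slice can be attacked by frequency analysis alone
groups = [ciphertext[i::key_length] for i in range(key_length)]
print(groups)  # ['TMCU', 'JJZG', 'LKAE', 'OKIR', 'FPP']
```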

Finally, it seems as though all hope is lost for securely coding messages, after the failure of so many supposedly indecipherable ciphers. There is, however, a way to encode your message so that it is absolutely undecipherable: make your code word a random, non-repeating sequence of letters that only you and the receiver have (a scheme known as a one-time pad). This means that even if a team of cryptographers wants to find out your random sequence of letters, they cannot. Because the key is random and non-repeating, there is no test to check that you have the right letters: any cryptographer could think they had found the right sequence, but millions of different sequences could produce some English phrase; it is unbreakable.

The impracticalities of this, however, are huge. Generating truly random sequences is extremely hard, as most software for generating random numbers is biased, giving cryptanalysts a chance of discovering your message. It is also extremely impractical to carry around a huge book full of key letters, especially because everyone in an army, for example, also has to have one. This is virtually impossible to organise.
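A sketch of this idea in Python, using the standard `secrets` module to generate the random key (my own illustration of the scheme described above; a real system would also need to exchange the key securely and never reuse it):

```python
import secrets
import string

def make_key(length):
    # One cryptographically strong random letter for every letter of the message
    return "".join(secrets.choice(string.ascii_uppercase) for _ in range(length))

def otp(text, key, decrypt=False):
    # Shift each letter by the matching key letter; shift back when decrypting
    sign = -1 if decrypt else 1
    return "".join(
        chr((ord(t) - ord("A") + sign * (ord(k) - ord("A"))) % 26 + ord("A"))
        for t, k in zip(text, key)
    )

message = "KILLTHEQUEENATMIDNIGHT"
key = make_key(len(message))
ciphertext = otp(message, key)
assert otp(ciphertext, key, decrypt=True) == message
```

Because a fresh random key as long as the message is used once and thrown away, every plaintext of that length is equally consistent with the ciphertext, which is exactly the unbreakability argument above.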

This leaves codebreakers at an advantage, until the day someone discovers a practical undecipherable cipher. But the question that haunts my mind is: should we be excited by, or scared of, the discovery of a way to write completely secret messages?

We think our decorations look fantastic…

Now our tree looks amazing!

Do the questions based on the pictograph in NB1.

Complete the remaining questions of Ex.9.1 in NB1. Do the following questions in NB1.

1. The blood groups of 25 students are recorded as under:

A, B, O, A, AB, O, A, O, B, A, O, B, A, AB, AB, A, A, B, B, O, B, AB, O, A, B.

Arrange the information in a table using tally marks.

2. Follow the website for the tally-mark worksheet.



It wasn’t very hard finding a tutoring centre in Sydney; a quick Google search would give you more results than an Asian kid with a goal. However, finding the right tutoring centre proved to be tremendously difficult.

At the time, I was barely an average student at a very low-ranked school; however, I realised the need to pick up my game. The first step towards my goal was to discover what needed to change and how I would do it. Ultimately, after searching for a while, I found Dux College.

I’ve experienced my fair share of tutoring centres, travelling far and wide in search of a place that could adapt to my needs instead of making me conform to its methods. At last I found Dux College, a tutoring centre in the heart of Parramatta. The lessons I learnt here will stick with me for a lifetime.

The most important thing I learnt from this college is passion. Passion is an innate desire to explore and share the knowledge of a particular subject, and this is what stuck with me most amongst the vast amount of knowledge I gained at Dux College.

What made Dux College ideal for me was its small class sizes, capped at five students. The tutors oozed passion for their respective subjects, and that was what really made Dux College perfect for me. It taught me that in order to really excel at your weakest subjects (or any subject, for that matter), you must first develop a passion and appreciation for the subject before you can take your skills to the next level.

The classes at Dux College do an outstanding job of maintaining professionalism whilst adding just the right amount of informality, creating a welcoming learning environment for students.

The effect of these subtle yet powerful features at Dux College is very evident in my results. Before I came to Dux College in Year 10, I was barely scraping an average rank of 15th out of 30 students at a very low-ranked school. Fast forward two years of being tutored at Dux College, and I managed to attain a Band E4 in Mathematics Extension 2.

This improvement took hard work and dedication, but it also required a catalyst, and that is what Dux College provided for me. The mark I attained, thanks to HSC scaling, played a great role in helping me get into my dream university and course: a combined Bachelor of Electrical Engineering/Master of Electrical Engineering at UNSW.

Choosing that course was the best decision I have ever made. My day-to-day experiences in the course are closely linked to the subjects and content I learnt in HSC Physics and HSC Mathematics.

All in all, Dux College played a major role in helping me achieve my goals and to me, that should be the objective of every tutoring centre.

Here is today’s H.W.

2-12-16-h-w

With Regards,

Preeti Lashkari

Here is today’s H.W.

2-12-16-h-w

With Regards,

Charu Soni

Check today’s H.W. here

Loops are “deformations” of the circle; hence we have defined the set (which also happens to form a group) of equivalence classes of loops on a space $X$ “deformable” to each other as $\pi_1(X)$. Similarly, the other homotopy groups are defined as the sets of equivalence classes $\pi_n(X) = [S^n, X]$, where $S^n$ is the $n$-dimensional sphere. In this post we will define another notion, that of a “cycle”, which also expresses ideas related to circles and, more generally, $n$-dimensional spheres. Just as loops and their higher-dimensional counterparts play a central role in homotopy theory, cycles and the related concept of boundaries play a central role in **homology theory**.

First we note that when we speak of circles, we do not usually include the interior. But we have a different term for the interior; we call it the **open disk**. The open disk and the circle together form the **closed disk**. Similarly, when we speak of the sphere, we refer only to the surface of the sphere and not its interior. We call the interior the **open ball**, and the open ball and the sphere together the **closed ball**. This terminology generalizes to $n$-dimensional spheres as well. The interior of the $n$-dimensional sphere is called the $(n+1)$-dimensional open ball, and both of them together form the $(n+1)$-dimensional closed ball.

We note again that the $0$-dimensional sphere of radius $1$ can be thought of as just the two points $-1$ and $1$ on the real line. Its interior, the $1$-dimensional open ball, is the set of all real numbers between $-1$ and $1$, i.e. the set of all real numbers $x$ such that $-1 < x < 1$, i.e. the open interval $(-1, 1)$. The $1$-dimensional closed ball is then the closed interval $[-1, 1]$.

Intuitively, the $n$-dimensional sphere is the **boundary** of the $(n+1)$-dimensional closed ball (we will sometimes speak of just the boundary of a ball or a disk, hoping that this will cause no confusion). For example, the boundary of an interval is made up of its two endpoints. If we were to consider some other shape, like, say, a more general curve with endpoints, intuitively we could still think of these endpoints as forming the boundary of the curve. However, some curves, such as the circle, or any closed loop, do not have endpoints, and therefore do not have a boundary. Shapes that have no boundary are called **cycles**.

We recall that we have been thinking of the circle itself as being the boundary of a disk. Combined with our observation that the circle does not have a boundary, this provides us with an example of the following important principle central to homology theory:

**A shape which is the boundary of some other shape, has itself no boundary.**

In other words:

**All boundaries are cycles**.

However, the converse is not actually true. Not all cycles are boundaries. Intuitively, we think of circles as boundaries of disks because we have been subconsciously embedding them in the plane. We can come up with examples of circles which are not the boundaries of disks if we think of them as being parts of some surface other than the plane. Still, this is probably quite confusing, so we will attempt to show what we mean by explicitly giving some examples.

But first we consider another space in which, like the plane, all circles are the boundaries of disks. We consider an ordinary sphere. One can think of, say, a basketball. We could take a pen and draw circles or loops on this basketball, and each circle or loop would bound some part of the basketball. If we take a pair of scissors and cut the basketball along the circle or loop that we have drawn, we will end up with a piece of rubber in the shape of the region bounded by the circle or loop. If we drew a circle, this region will be a disk. Hence, on a sphere, all circles are boundaries of disks.

Now let us consider an example of a surface in which not all circles are boundaries of disks. We consider the torus. It is the shape of the surface of a donut, but we can also think of the inner tube of a tire, which people often use as a flotation aid in swimming pools. We can still draw a circle bounding a disk on this surface, so that if we cut along the circle with a pair of scissors we still get a piece of rubber in the shape of a disk. However, we can also draw a circle around the “body” of the tube; if we cut along this circle, we would just cut the tube into something like a cylinder, since the circle was “bounding” no part of the tube, only the empty space inside (or it could have been filled with air).

There is another circle we can draw, around the “hole” in the middle of the inner tube, and if we cut along it, we just “open up” the inner tube. Once again this circle is not the boundary of a disk on the inner tube. This circle, like the one we considered earlier, still has no boundary, and yet neither is the boundary of a disk. Therefore we see that on the torus, not all cycles are boundaries.

We see also that keeping track of whether there are cycles that are not boundaries gives us some information about the space these cycles are on, the same way that keeping track of the loops that cannot be contracted to a point gives us information about the space the loops are on.

To help formalize these ideas (although we won’t completely formalize them in this post), we note that the dimension of the boundary of a shape is one less than the dimension of the shape itself. So, for example, let us consider a set of shapes of dimension $n$, which we write as $C_n$. We also have another set of shapes of dimension $n-1$, which we write as $C_{n-1}$. We now want the boundary of a shape in $C_n$ to be found in $C_{n-1}$, and we want a “**boundary function**” that assigns to a shape in $C_n$ its boundary in $C_{n-1}$. We write this boundary function as $\partial_n\colon C_n\to C_{n-1}$.

Some of the shapes in $C_{n-1}$ also have boundaries, and these boundaries are to be found in yet another set $C_{n-2}$. The boundary function that sends shapes in $C_{n-1}$ to their boundaries in $C_{n-2}$ is written $\partial_{n-1}\colon C_{n-1}\to C_{n-2}$.

All these sets must have “zero elements” to allow for the case when a shape has no boundary. If a shape in $C_n$ has no boundary, then the boundary function sends it to the zero element in $C_{n-1}$.

If we then define an abelian group structure on the sets $C_n$, $C_{n-1}$, and $C_{n-2}$, with the zero element being the identity of the group, we can then define the cycles to be the kernel of the boundary function. Recall that the **kernel** of a function between groups is the subset of the domain that the function sends to the identity element in the range. We can also define the boundaries as the **image** of the boundary function. Recall that the image of a function is the subset of the range made up of the elements the function assigned to the elements of the domain.

Note that the function obtained by composing two successive boundary functions, $\partial_{n-1}\circ\partial_n\colon C_n\to C_{n-2}$, sends any element of $C_n$ to the identity element in $C_{n-2}$. This is simply a reformulation of our “important principle” above, which states that all boundaries are cycles.

We can now generalize the idea expressed by the groups $C_n$, $C_{n-1}$, and $C_{n-2}$, so that we can have any number of groups $C_n$ indexed by the natural numbers, and boundary functions $\partial_n\colon C_n\to C_{n-1}$ between two successive groups, which obey the property that the composition of two successive boundary functions sends any element of its domain to the identity element in its range, i.e. $\partial_{n-1}\circ\partial_n = 0$. These groups together with the boundary functions between them form what is called a **chain complex**.

We can now define the homology groups. Since our shapes now form groups, we can use the law of composition of the group to define an equivalence relation between the elements of the group and form a quotient group (see also Groups and Modular Arithmetic and Quotient Sets). What we want is to declare two cycles in the group equivalent if they differ by a boundary. The $n$-th **homology group**, written $H_n$, is then defined as

$$H_n = \mathrm{Ker}\,\partial_n / \mathrm{Im}\,\partial_{n+1}.$$

Here $\mathrm{Ker}\,\partial_n$ refers to the kernel of the $n$-th boundary operator, i.e. the cycles in $C_n$, and $\mathrm{Im}\,\partial_{n+1}$ refers to the image of the $(n+1)$-th boundary operator, i.e. the boundaries in $C_n$. Recall that what we are doing is keeping track of the cycles that are not boundaries. We declare two cycles equivalent if they differ by a boundary, so any cycle which is also a boundary is declared equivalent to the identity element of the group, i.e. the zero element. If we write the law of composition of the group using the symbol “$+$”, we can express the equivalence relation as

$$z \sim z + b$$

where $z$ is a cycle and $b$ is a boundary. We can therefore easily see that

$$b \sim 0.$$

This expresses the idea that what we are interested in are the cycles that are not boundaries. We are not so interested in the cycles that are boundaries, so we hide them away by declaring them to be equivalent to the identity element or zero element.
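As a concrete illustration (a standard example, computed via the cellular approach mentioned in the references rather than anything defined in this post): give the circle $S^1$ a cell structure with one vertex $v$ and one edge $e$. The chain complex and its homology are then:

```latex
% Chain complex for S^1 with one 0-cell v and one 1-cell e.
% The edge e begins and ends at the same vertex, so \partial_1 e = v - v = 0.
0 \longrightarrow \mathbb{Z}\langle e\rangle
  \xrightarrow{\;\partial_1 = 0\;} \mathbb{Z}\langle v\rangle
  \longrightarrow 0
% Every 1-chain is a cycle and there are no 1-boundaries, hence:
H_1(S^1) = \mathrm{Ker}\,\partial_1 / \mathrm{Im}\,\partial_2 \cong \mathbb{Z},
\qquad
H_0(S^1) = \mathrm{Ker}\,\partial_0 / \mathrm{Im}\,\partial_1 \cong \mathbb{Z}.
```

The nontrivial $H_1$ records exactly the cycle (the loop around the circle) that is not a boundary.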

The sets of *functions* from the abelian groups that make up the chain complex to another abelian group form what is called a **cochain complex** of abelian groups, with its own **coboundary functions**. If we write the set of functions from $C_n$ to some other abelian group as $C^n$, the coboundary function will go in the opposite direction to the boundary function. Whereas the boundary function $\partial_n$ sends elements from $C_n$ to their boundaries in $C_{n-1}$, the coboundary function $d^n$ sends elements from $C^n$ to their **coboundaries** in $C^{n+1}$. Note, once again, that while $C_n$ is a set of shapes (which happens to form an abelian group), $C^n$ is a set of *functions* from shapes to some other abelian group (which also happens to form an abelian group). The $n$-th **cohomology group**, written $H^n$, is then defined as

$$H^n = \mathrm{Ker}\,d^n / \mathrm{Im}\,d^{n-1}.$$

We have not yet explained how we are to define the shapes and abelian groups that make up our chain complex. We have relied only on the intuitive idea of cycles and boundaries. The methods by which these shapes and abelian groups are defined, such as singular homology and cellular homology, can be found in the references listed at the end of this post.

References:

All the best 👍


In Germany, there are a few Advent calendars themed around maths and physics on the Internet. I highly recommend signing up, as they are free and available in English.

For Grades 4-6 as well as 7-9 there is *Mathe im Advent* (older participants can obviously play too; you just don’t win anything, and the problems are aimed at those age groups), and for Grade 10 and up there is the *Matheon* calendar.

And on the side of Physics, there is *Physik im Advent*.

All of these are run by the German Mathematical Society and Matheon. I have participated for several years and they are always fun to do, so do sign up!
